text
stringlengths 454
608k
| url
stringlengths 17
896
| dump
stringlengths 9
15
⌀ | source
stringclasses 1
value | word_count
int64 101
114k
| flesch_reading_ease
float64 50
104
|
---|---|---|---|---|---|
Context:
Writing Log Records to a Log File
Writing Log Records to a Log File
This section demonstrates for writing log records to a
log file. Logger provides different types of level like: warning, info
Advantages of Servlets over CGI
java servlets provide excellent framework for
server side processing.
Using...
Advantages of Servlets over CGI
Servlets are server side components that provides
a powerful mechanism
Using Log Files
Using Log Files
Log files keeps a records of internet protocol
addresses....
In this program we are writing some information in log
file.
The code
Servlets Books
Platform, by Dustin R. Callaway
Java Servlets by Example, by Alan R...
Servlets Books
...
Courses
Looking for short hands-on training classes on servlets
Writing Log Records Only After a Condition Occurs
Writing Log Records Only After a Condition Occurs
This section deals with log records those have been
written in a log file if the "if" conditions
Servlets Programming
Servlets Programming Hi this is tanu,
This is a code for knowing...;
import javax.servlet.http.HttpServletResponse;
//In this example we are going... visit the following links:
Create a Custom Log Level in Java
Create a Custom Log Level in Java
This section tells you how to create a custom log
level that means log levels are created by users for own need. Java logging
class provides
JSP Request.getContextPath( )
;
JSP Request .get Context Path ( ), the context path is the portion of the
request URL that indicates the context of the request.
Understand with Example
The section of this Tutorial illustrate
Comparing Log Levels in Java
Comparing Log Levels in Java
This section describes, how to compare log levels to
each other... an individual integer type value. With the help of this you can
compare4j example
log4j example
This Example shows you how to create a log in a Servlet.
Description... of
log operations and getLogger method is used for return a logger according
Log-in Valdidation
Log-in Valdidation how to create login validation in jsp
log out how do i create a log out on a project without it exiting system so it does not reload again
A simple example of log4j for Servlet
A simple example of log4j for Servlet
This Example shows you how to create a log... for handling the majority of
log operations and getLogger method is used for return
A simple example of log4j
This Example shows you how to create a log...(): Logger class is used for handling the majority of log operations and getLogger
log 4j
log 4j what is log4j? can you give me clear information about log4j?
Please go through the following link:
log4j Tutorials
servlets
servlets what is the duties of response object in servlets
servlets
servlets why we are using servlets
Servlets - JSP-Servlet
Servlets Hello Sir, can you give me an example of a serve let program which connects to Microsoft sql server and retrieves data from it and display... visit the following link:
servlets
what is the architecture of a servlets package what is the architecture of a servlets package
The javax.servlet package provides interfaces and classes for writing servlets.
The Servlet Interface
The central
failure to log in
failure to log in <blockquote>
<p>Blockquote</p>
</blockquote>
<p><%@page contentType="text/html"%>
<%@page pageEncoding="UTF-8"%>
<%@page import.
What is Application Context?
What is Application Context? Hello,
What is Application Context?let Interview Questions
to the client.Server-push Example and CodeServerPushExam.javaimport java.io.
application context file problem
application context file problem how to configure junit application context file with struts........?
it is not finding sessionfactory method
servlets
SERVLETS
the servlets
Servlets | http://roseindia.net/tutorialhelp/comment/98114 | CC-MAIN-2014-41 | refinedweb | 605 | 60.14 |
writing out schema attribute in xmlwriter
Discussion in 'ASP .Net Web Services' started by Stu, Mar38
- Markus
- Nov 23, 2005
Re: Convert DB2 schema to XML SchemaKlaus Johannes Rusch, Aug 6, 2003, in forum: XML
- Replies:
- 0
- Views:
- 586
- Klaus Johannes Rusch
- Aug 6, 2003
[XML Schema] Including a schema document with absent target namespace to a schema with specified tarStanimir Stamenkov, Apr 22, 2005, in forum: XML
- Replies:
- 3
- Views:
- 1,386
- Stanimir Stamenkov
- Apr 25, 2005
SAX XMLReader, XMLFilter, ContentHandler and XMLWriter questionJeff Calico, Feb 22, 2006, in forum: XML
- Replies:
- 2
- Views:
- 1,249
- Joseph Kesselman
- Feb 22, 2006
using xmlwriter to create attributes for an excel tag=?Utf-8?B?UGF1bA==?=, Nov 7, 2006, in forum: ASP .Net
- Replies:
- 1
- Views:
- 395
- Olaf Rabbachin
- Nov 7, 2006 | http://www.thecodingforums.com/threads/writing-out-schema-attribute-in-xmlwriter.784786/ | CC-MAIN-2015-18 | refinedweb | 131 | 56.63 |
ScalazScalaz
Scalaz is a Scala library for functional programming.
It provides purely functional data structures to complement those from the Scala standard library. It defines a set of foundational type classes (e.g.
Functor,
Monad) and corresponding instances for a large number of data structures.
Getting ScalazGetting Scalaz
The current stable version is 7.3.3, which is cross-built against Scala 2.11.x, 2.12.x, 2.13.x and Scala.js, scala-native.
If you're using SBT, add the following line to your build file:
libraryDependencies += "org.scalaz" %% "scalaz-core" % "7.3.3"
For Maven and other build tools, you can visit search.maven.org. (This search will also list all available modules of scalaz.)
To get sample configurations, click on the version of the module you are interested in. You can also find direct download links at the bottom of that page. Choose the file ending in
7.3.3.jar.
Quick StartQuick Start
import scalaz._ import std.option._, std.list._ // functions and type class instances for Option and List scala> Apply[Option].apply2(some(1), some(2))((a, b) => a + b) res0: Option[Int] = Some(3) scala> Traverse[List].traverse(List(1, 2, 3))(i => some(i)) res1: Option[List[Int]] = Some(List(1, 2, 3))
Use of the
Ops classes, defined under
scalaz.syntax.
import scalaz._ import std.list._ // type class instances for List import syntax.bind._ // syntax for the Bind type class (and its parents) scala> List(List(1)).join res0: List[Int] = List(1) scala> List(true, false).ifM(List(0, 1), List(2, 3)) res1: List[Int] = List(0, 1, 2, 3)
We've gone to great lengths to give you an a-la-carte importing experience, but if you prefer an all-you-can-eat buffet, you're in luck:
import scalaz._ import Scalaz._ scala> NonEmptyList(1, 2, 3).cojoin res0: scalaz.NonEmptyList[scalaz.NonEmptyList[Int]] = NonEmptyList(NonEmptyList(1, 2, 3), NonEmptyList(2, 3), NonEmptyList(3)) scala> 1.node(2.leaf, 3.node(4.leaf)) res1: scalaz.Tree[Int] = <tree> scala> List(some(1), none).suml res2: Option[Int] = Some(1)
ResourcesResources
Let the types speak for themselves via the Scalaz Scaladocs!
The examples module contains some snippets of Scalaz usage.
The wiki contains release and migration information.
Talk with us by joining IRC: irc.freenode.net channel #scalaz, or join the Scalaz mailing list on Google Groups.
The typelevel blog has some great posts such as Towards Scalaz by Adelbert Chang.
Learning Scalaz is a great series of blog posts by Eugene Yokota. Thanks, Eugene!
Changes in Version 7Changes in Version 7
Scalaz 7 represents a major reorganization of the library. We have taken a fresh look at the challenges of encoding type classes in Scala, in particular at when and how to employ the implicit scope.
At a glanceAt a glance
scalaz.{effect, iteratee}split to separate sub-projects;
scalaz.{http, geo}dropped.
- Refined and expanded the type class hierarchy.
- Type class instances are no longer defined in the companion objects of the type class. Instances for standard library types are defined under
scalaz.std, and instances for Scalaz data types are defined in the companion object for those types. An instance definition can provide multiple type classes in a single place, which was not always possible in Scalaz 6.
- Type class instances have been organized to avoid ambiguity, a problem that arises when instances are dependent on other instances (for example,
Monoid[(A, B)])
- Use of implicit views to provide access to Scalaz functionality as extension methods has been segregated to
scalaz.syntax, and can be imported selectively, and need not be used at all.
- Related functions are defined in the type class trait, to support standalone usage of the type class. In Scalaz 6, these were defined in
Identity,
MA, or
MAB.
- New data structures have been added, and existing ones generalized. A number of monad transformers have been provided, in some cases generalizing old data structures.
ModularityModularity
Scalaz has been modularised.
- scalaz-core: Type class hierarchy, data structures, type class instances for the Scala and Java standard libraries, implicit conversions / syntax to access these.
- scalaz-effect: Data structures to represent and compose IO effects in the type system.
- scalaz-iteratee: Experimental new Iteratee implementation
Type Class HierarchyType Class Hierarchy
- Type classes form an inheritance hierarchy, as in Scalaz 6. This is convenient both at the call site and at the type class instance definition. At the call site, it ensures that you can call a method requiring a more general type class with an instance of a more specific type class:
def bar[M[_]: Functor] = () def foo[M[_]: Monad] = bar[M] // Monad[M] is a subtype of Functor[M]
- The hierarchy itself is largely the same as in Scalaz 6. However, there have been a few adjustments, some method signatures have been adjusted to support better standalone usage, so code depending on these will need to be re-worked.
Type Class Instance DefinitionType Class Instance Definition
- Constructive implicits, which create a type class instance automatically based on instances of all parent type classes, are removed. These led to subtle errors with ambiguous implicits, such as this problem with FunctorBindApply
- Type class instances are no longer declared in fragments in the companion objects of the type class. Instead, they are defined in the package
scalaz.std, and must be imported. These instances are defined in traits which will be mixed together into an object for importing en-masse, if desired.
- A single implicit can define a number of type class instances for a type.
- A type class definition can override methods (including derived methods) for efficiency.
Here is an instance definition for
Option. Notice that the method
map has been overridden.
implicit val option = new Traverse[Option] with MonadPlus[Option] { def point[A](a: => A) = Some(a) def bind[A, B](fa: Option[A])(f: A => Option[B]): Option[B] = fa flatMap f override def map[A, B](fa: Option[A])(f: A => B): Option[B] = fa map f def traverseImpl[F[_], A, B](fa: Option[A])(f: A => F[B])(implicit F: Applicative[F]) = fa map (a => F.map(f(a))(Some(_): Option[B])) getOrElse F.point(None) def empty[A]: Option[A] = None def plus[A](a: Option[A], b: => Option[A]) = a orElse b def foldR[A, B](fa: Option[A], z: B)(f: (A) => (=> B) => B): B = fa match { case Some(a) => f(a)(z) case None => z } }
To use this, one would:
import scalaz.std.option.optionInstance // or, importing all instances en-masse // import scalaz.Scalaz._ val M = Monad[Option] val oi: Option[Int] = M.point(0)
SyntaxSyntax
We co-opt the term syntax to refer to the way we allow the functionality of Scalaz to be called in the
object.method(args) form, which can be easier to read, and, given that type inference in Scala flows from left-to-right, can require fewer type annotations.
- No more
Identity,
MA, or
MABfrom Scalaz 6.
- Syntax is segregated from rest of the library, in a sub-package
scalaz.syntax.
- All Scalaz functionality is available without using the provided syntax, by directly calling methods on the type class or its companion object.
- Syntax is available a-la-carte. You can import the syntax for working with particular type classes where you need it. This avoids flooding the autocompletion in your IDE with every possible extension method. This should also help compiler performance, by reducing the implicit search space.
- Syntax is layered in the same way as type classes. Importing the syntax for, say,
Applicativewill also provide the syntax for
Applyand
Functor.
Syntax can be imported in two ways. Firstly, the syntax specialized for a particular instance of a type class can be imported directly from the instance itself.
// import the type class instance import scalaz.std.option.optionInstance // import the implicit conversions to `MonadOps[Option, A]`, `BindOps[Option, A]`, ... import optionInstance.monadSyntax._ val oi: Option[Option[Int]] = Some(Some(1)) // Expands to: `ToBindOps(io).join` oi.join
Alternatively, the syntax can be imported for a particular type class.
// import the type class instance import scalaz.std.option.optionInstance // import the implicit conversions to `MonadOps[F, A]`, `BindOps[F, A]`, ... import scalaz.syntax.monad._ val oi: Option[Option[Int]] = Some(Some(1)) // Expands to: ToBindOps(io).join oi.join
For some degree of backwards compatibility with Scalaz 6, the über-import of
import scalaz.Scalaz._ will import all implicit conversions that provide syntax (as well as type class instances and other functions). However, we recommend to review usage of this and replace with more focussed imports.
Standalone Type Class UsageStandalone Type Class Usage
Type classes should be directly usable, without first needing to trigger implicit conversions. This might be desirable to reduce the runtime or cognitive overhead of the pimped types, or to define your own pimped types with a syntax of your choosing.
- The methods in type classes have been curried to maximize type inference.
- Derived methods, based on the abstract methods in a type class, are defined in the type class itself.
- Each type class companion object is fitted with a convenient
applymethod to obtain an instance of the type class.
// Equivalent to `implicitly[Monad[Option]]` val O = Monad[Option] // `bind` is defined with two parameter sections, so that the type of `x` is inferred as `Int`. O.bind(Some(1))(x => Some(x * 2)) def plus(a: Int, b: Int) = a + b // `Apply#lift2` is a function derived from `Apply#ap`. val plusOpt = O.lift2(plus)
Type Class Instance DependenciesType Class Instance Dependencies
Type class instances may depend on other instances. In simple cases, this is as straightforward as adding an implicit parameter (or, equivalently, a context bound), to the implicit method.
implicit def optionMonoid[A: Semigroup]: Monoid[Option[A]] = new Monoid[Option[A]] { def append(f1: Option[A], f2: => Option[A]): Option[A] = (f1, f2) match { case (Some(a1), Some(a2)) => Some(Semigroup[A].append(a1, a2)) case (Some(a1), None) => f1 case (None, Some(a2)) => f2 case (None, None) => None } def zero: Option[A] = None }
Type class instances for 'transformers', such as
OptionT, present a more subtle challenge.
OptionT[F, A] is a wrapper for a value of type
F[Option[A]]. It allows us to write:
val ot = OptionT(List(Some(1), None)) ot.map((a: Int) => a * 2) // OptionT(List(Some(2), None))
The method
OptionT#map requires an implicit parameter of type
Functor[F], whereas
OptionT#flatMap requires one of type
Monad[F]. The capabilities of
OptionT increase with those of
F. We need to encode this into the type class instances for
[a]OptionT[F[A]].
This is done with a hierarchy of type class implementation traits and a corresponding set of prioritized implicit methods.
In case of ambiguous implicits, Scala will favour one defined in a sub-class of the other. This is to avoid ambiguity when in cases like the following:
type OptionTList[A] = OptionT[List[A]] implicitly[Functor[OptionTList]] // Candidates: // 1. OptionT.OptionTFunctor[List](implicitly[Functor[List]]) // 2. OptionT.OptionTMonad[List](implicitly[Functor[List]]) // #2 is defined in a subclass of the enclosing class of #1, so #2 is preferred.
Transformers and IdentityTransformers and Identity
A stronger emphasis has been placed on transformer data structures (aka Monad Transformers). For example
State is now a type alias for
StateT[Id, A, B].
Id is defined in the
scalaz package object as:
type Id[A] = A
ContributingContributing
Documentation for contributors
CreditsCredits
Support for Scalaz development is provided by Jetbrains.
Thanks to Mark Harrah and the sbt contributors for providing our build tool. | https://index.scala-lang.org/scalaz/scalaz/scalaz-iteratee/7.2.11?target=_2.10 | CC-MAIN-2021-21 | refinedweb | 1,946 | 57.47 |
Table Of Contents
Introducing Client/Server Telnet
The TSO Client Telnet Command
TSO Client Telnet Operation
TSO Client Telnet Commands
Commands for Sending Data
Commands for Session Control
Commands for Controlling Input and Output
Invoking VTAM Client Telnet
VTAM Client Telnet Operation
NVT Operation from 3278 Terminals
Full-Screen Operation from 3278 Terminals
Client Telnet for ASCII Terminals
Invoking Client Telnet from an ASCII Terminal
Full-Screen-Only TSO Service Port
Autologon to Specific VTAM Applications
USS Table Support for Server Telnet
Client/Server Telnet
This chapter describes the Cisco IOS for S/390 Telnet facilities. It provides the information necessary to develop a working knowledge of the Cisco IOS for S/390 implementation of Client Telnet and Server Telnet. This chapter contains these sections:
•
Introducing Client/Server TelnetIntroducing Client/Server Telnet
Provides a brief overview of the services Client Telnet provides.
Describes how to use the Client Telnet facilities to access Cisco IOS for S/390 locally.
Describes how to use the Server Telnet facilities to access Cisco IOS for S/390 remotely.
Describes the escape sequences Cisco IOS for S/390 uses to implement the Telnet protocol.
Introducing Client/Server Telnet
Cisco IOS for S/390 provides Client Telnet facilities that let TSO and VTAM users access your network through the Telnet protocol. Cisco IOS for S/390 also provides Server Telnet facilities that let remote network users access host application programs. These facilities are shown in .
Table 6-1
Telnet Facilities
Client Telnet
The Client Telnet facilities provide local access to your network via Cisco IOS for S/390. You can access Cisco IOS for S/390 locally in either of these ways:
•
Indirectly, using the Client Telnet (or FTP) command processors under TSOIndirectly, using the Client Telnet (or FTP) command processors under TSO
The TSO Telnet processor supports line-by-line and full-screen terminals, but both are mapped into line-by-line Network Virtual Terminal (NVT) operation of the remote host. A difficulty with this program is that TSO supports only half-duplex locked-keyboard operation, which makes access to character-by-character hosts awkward. You can receive pending screen data only by using Enter.
The TSO Telnet program is written in PL/I and you must have the PL/I Transient (runtime) Library (an IBM program product). Some useful features of this program are saving typescripts and multiplexing several sessions at once.
•
Directly, using VTAM-supported 3767 or 3278 terminalsDirectly, using VTAM-supported 3767 or 3278 terminals
Although the Cisco IOS for S/390 3278 terminal manager does not have all the features of the TSO Telnet program, it can access the 3278 with a true full-duplex unlocked-keyboard protocol.
The 3278 can either be mapped into line-by-line operation as an NVT or can be operated in transparent 3278 full-screen mode to access a remote IBM MVS or VM server.
The TSO Client Telnet Command
The TSO Client Telnet command, TELNET, provides a TSO Client Telnet interface to the network. The TELNET command supports only line-by-line or NVT operation of remote hosts. However, it has functions, such as multiple concurrent sessions, that the direct VTAM Client Telnet lacks.
Client Telnet operates the local terminal in either line or screen mode, depending on whether it is accessed from a terminal of the IBM 3270 family or from an ASCII terminal. Choice of mode is automatic and usually transparent. However, you can override the automatic choice if you need to operate in line mode on a terminal. This may prove useful if you use facilities such as the Session Manager in TSO/E.
Since IBM systems normally do not support character-by-character interactions, Client Telnet does not operate in character-oriented mode, and it can be inconvenient to communicate with processes on remote hosts that do operate in such a mode. Because there may be such hosts on a network, Client Telnet implements devices and techniques to ease the incompatibility.
The TSO Telnet program screen mode can present one or more multiplexed line-oriented terminal sessions; however, full-screen interaction with a processing program is not possible with this version of the program.
In screen mode, Client Telnet does all its own screen management. Client Telnet is not compatible with operation under the IBM ISPF program product. It can be invoked under ISPF, as can most TSO command processor programs, but Client Telnet is not aware of the ISPF environment, so it does not support such ISPF features as split-screen operation. In anticipation of future enhancements, this version of Client Telnet reserves certain screen fields and function keys for ISPF compatibility.
Some options of the TSO Telnet program (the PRINT and TEST commands) require allocation of a SYSPRINT file, but this is not absolutely necessary in normal operation (that is, when you are not using PRINT or TEST).
Allocating a SYSPRINT file to the terminal in screen mode causes constant switching between screen and line modes. To avoid this, allocate the SYSPRINT file to a SYSOUT file instead of to the terminal.
The TELNET Command
Invoke the TSO Telnet program with the TSO TELNET command in one of these forms:
TELNET
TELNET / argument argument...
No arguments are required and none are useful to most Client Telnet users. Any that are specified must be preceded by slash (/) to accommodate the conventions of the PL/I runtime support package.
The two classes of TELNET command options, general and debugging, are described in the following sections.
General Command Options
These are the general Telnet command options:
•
TTYTTY
The TTY option specifies that your terminal is capable of generating carriage returns. Since virtual 3767 line terminals such as those supported by the Virtual Line Terminal (VLT) facility do not generate carriage return (CR) or new line (NL) characters at the end of lines, Client Telnet automatically appends an NL to every line of user input that is received. The TTY option disables this by specifying to Client Telnet that your terminal appends either a CR or an NL at the end of every line of input. This option is useful in supporting real local ASCII terminals that connect to TSO through TCAM, NTO, or NPSI.
•
LINELINE
The LINE option causes the TSO Telnet program to drive the terminal in line mode even if the terminal is a CRT.
•
SYSSYS
The SYS option, in the form SYS=x, where x is an arbitrary character, causes Client Telnet to open its VLT connection to a network name of ACCESx instead of the usual ACCES. This allows communication through a test version of Cisco IOS for S/390.
•
APPLIDAPPLID
The APPLID option, in the form APPLID=aaaaaaaa, where aaaaaaaa specifies to the TSO Telnet program the default VTAM application ID of the local Cisco IOS for S/390. This command causes Client Telnet to open its VLT connection to a network name of aaaaaaaa instead of the usual ACCES. If supplied, this parameter need not point exclusively to the local Cisco IOS for S/390; it can refer to any VTAM application. For example, TSO is the necessary APPLID to connect to TSO.
Debug Options
The debugging options, TEST and U, are described here:
•
TESTTEST
The TEST option causes the program to operate in test mode, where status information is written to the SYSPRINT file. This information is essentially unformatted and is not useful to the casual Telnet user.
•
UU
The U option, specified as U=userid, modifies the output of TEST mode. It arranges to send the output via TPUT to the specified TSO user ID instead of to SYSPRINT.
TSO Client Telnet Operation
Once the program has been activated, you can enter Telnet commands or data to be transmitted on the session. In session data, the logical not character (ÿ) is reserved as an escape character. To transmit the ÿ character, you must type it twice (ÿÿ). Refer to Telnet Escape Sequences for details on using the Telnet escape character.
Line Mode Operation
TSO Telnet operates in line mode if TSO believes your terminal is line-oriented or if you have used the LINE argument when invoking the program. In most cases, Client Telnet commands operate essentially the same in either line or screen mode.
The techniques used to send data lines in line mode are described here:
•.
•.
•
When you terminate an input line (including a null line) with a carriage return, the data is sent with a new line character appended.When you terminate an input line (including a null line) with a carriage return, the data is sent with a new line character appended.
•
When you terminate an input line with CONTROL-D, the data is sent without a new line character. This facilitates communication with remote systems that operate in character mode.When you terminate an input line with CONTROL-D, the data is sent without a new line character. This facilitates communication with remote systems that operate in character mode.
•
CONTROL-C and ESCAPE can be used as data characters and are transmitted properly. Most other control characters are filtered from the input by TSO. CONTROL-C is usually interpreted by a remote IBM process as an attention.CONTROL-C and ESCAPE can be used as data characters and are transmitted properly. Most other control characters are filtered from the input by TSO. CONTROL-C is usually interpreted by a remote IBM process as an attention.
•
The ATTN key is reserved for use by the local process. It stops output flow long enough to enter an input line.The ATTN key is reserved for use by the local process. It stops output flow long enough to enter an input line.
Screen Mode Operation
TSO Telnet operates in screen mode if your terminal is believed by TSO to be a display terminal of the 3270 family and if you have not used the LINE argument when invoking the program. The rules of screen mode interaction seem complex, but screen mode is very useful.
In screen mode, the screen is divided into the following areas:
•
The TSO Telnet banner and version numberThe TSO Telnet banner and version number
This field is also used to present short error or exception messages.
•
The primary input areaThe primary input area
This 149-character field (CMD) is where you type most session input and Telnet commands, so TSO Telnet keeps moving the cursor to the beginning of this field.
•
The command input areaThe command input area
This field is provided for future ISPF compatibility and is not needed for normal TSO Telnet operation.
•
The current VTAM application identification defaultThe current VTAM application identification default
This field points to the local Cisco IOS for S/390 application. Every Telnet CONNECT command initiates a VTAM connection to the Cisco IOS for S/390 currently identified by the APPLID default. Multiple VTAM sessions to different VTAM applications are possible by changing the APPLID default dynamically. This is described in VTAM Client Telnet.
•
The current session identificationThe current session identification
When a session has been established, this identifies the session number and the host to which it is connected.
•
A list of other session numbersA list of other session numbers
Each session number is a single symbol, hence the limit of ten concurrent sessions. If a session is defined but not currently selected, its symbol I appears in this list. If the session has output waiting, its symbol O appears in the list. Undefined sessions are shown as a period (.).
•
A separator row of dash characters (-)A separator row of dash characters (-)
This row conceals a set of indicators that are replaced as various operating modes are activated. These are the operating modes:
•
AUTO for automatic page turningAUTO for automatic page turning
•
NOECHO for non-display of outputNOECHO for non-display of output
•
READ during read processingREAD during read processing
•
WRITE during write processingWRITE during write processing
•
SLEEP when the keyboard is disabledSLEEP when the keyboard is disabled
•
TEST during test debuggingTEST during test debugging
•
HIDE when the input line is turned into a non-display lineHIDE when the input line is turned into a non-display line
•
17 rows of output area17 rows of output area
Both input and output data are echoed to the output area. This data is not scrolled; the current output line is indicated by a row of equal characters (=). The line of equal characters rolls around the screen, erasing old output and overlaying it with new. When the indicator wraps from the bottom of the output area to the top, press Enter to prevent overwriting data that has not been read (in other words, "turn the page").
•
2 rows of Program Function key (PF) definitions, associating certain commands with function keys2 rows of Program Function key (PF) definitions, associating certain commands with function keys
These associations are fixed and cannot be overridden. The key assignments are chosen to be compatible with the default assignments used by the ISPF program product, and, in this version, some function keys are reserved for ISPF functions that have not yet been implemented (SPLIT and SWAP). Like ISPF, Client Telnet follows the convention that there are no functions available through function keys that are not also available through commands.
In screen mode, you normally type into the input area; however, you can modify lines in the output area and cause them to be reread as input. Read Retransmitting Data.
The NULL Transaction
In most cases, Client Telnet commands operate the same in either line or screen mode. However, the techniques used to send data lines and to differentiate them from command lines are different. Because IBM terminals operating under TSO are half-duplex, it is not possible to operate in screen mode with an unlocked keyboard.
For a NULL transaction in screen mode, press Enter with no screen fields modified or with all modified fields blank for a no-operation. This does not send data; it merely returns control of the terminal to the TSO Telnet program and allows the program to switch into output mode.
The most obvious effect of this is that an empty line can be sent only by using a Telnet command (XWNL) or a function key (PF10, which sends a NULL transaction and a new line (NL)).
A more important effect is that communication in screen mode frequently requires constantly using Enter to keep output flowing. Client Telnet tends to hold control of the terminal until there is an indication that no more output is immediately available. You can control how long the program waits for this indication, but the defaults are satisfactory for most conditions. While the null transaction is used frequently in screen mode, the real work of a Telnet session is done with the other kinds of transactions. The most common occur when non-blank data is typed into the input field and/or when the keyboard is locked through a key other than Enter.
Sending Data
There are several ways to send data to the current session. The usual method is to type a complete input line into the input area and press Enter, which stands for the SEND command. The data is sent, including explicit leading and trailing blanks, and a new line character is sent after them.
Another method is to type the SEND command (or one of the other similar commands) into the input area, followed by the data to be sent, and then press PF5 instead of Enter. PF5 stands for the EXEC command, which causes Client Telnet to parse its input into a command and an operand string, and then to execute the command. The operand string begins with the character after the single blank that terminates the command.
A third method is to type the data into the input field and press a function key that has been assigned the value of a command. This is equivalent to using an explicit command and PF5 (EXEC).
With any of these methods, you can enter a command name in the command field on the screen. That command takes precedence over the command implied by the key you press. Normally, you do not need to use this method of input, but the rules of interaction with ISPF make it necessary.
Command Entry Rules
Many Client Telnet commands perform functions other than sending data. In all cases, the same rules apply.
•
If you clear the screen and enter the command at the top left of the screen, the key you use has no effect. The command is parsed for a command verb and operands. (This is compatible with a panic exit, such as CLEAR followed by END.)If you clear the screen and enter the command at the top left of the screen, the key you use has no effect. The command is parsed for a command verb and operands. (This is compatible with a panic exit, such as CLEAR followed by END.)
•
If you modified the CMD field, its contents are the command verb. Otherwise, the verb is implied by the key you pressed.If you modified the CMD field, its contents are the command verb. Otherwise, the verb is implied by the key you pressed.
•
Operands for the command are in the CMD field.Operands for the command are in the CMD field.
•.
•.)
Retransmitting Data
You can edit and retransmit lines from the output area of the screen. However, each line of the output area is a screen field and the folding of received data to accommodate the 79-character usable screen width might split a logical line into two separate fields. To make the feature easier to use, PF4 has been assigned the CURSOR command. The CURSOR command moves the physical cursor to the beginning of the last line echoed into the output area.
These are the rules of transmission from the output area:
•
For every user interaction, there is an implicit command that is determined either by the contents of the CMD field or by the function key pressed. (The PA and Enter keys are included for this purpose.)For every user interaction, there is an implicit command that is determined either by the contents of the CMD field or by the function key pressed. (The PA and Enter keys are included for this purpose.)
•
For every such interaction, there is an ordered set of modified data fields, potentially including the input area and each line of the output area. The ordering is from the top of the screen down.For every such interaction, there is an ordered set of modified data fields, potentially including the input area and each line of the output area. The ordering is from the top of the screen down.
•
The implied command is applied once with the operand field composed of the concatenation of all the modified data fields. New line characters are not automatically inserted when two lines are concatenated. Use the - (dash) command to insert them.The implied command is applied once with the operand field composed of the concatenation of all the modified data fields. New line characters are not automatically inserted when two lines are concatenated. Use the - (dash) command to insert them.
Function Keys
A fixed mapping of commands onto function keys is used in this version of Client Telnet. The commands that can be executed by pressing function keys are listed in :
Table 6-2 Client Telnet Function Keys
Note
Those commands annotated with an asterisk can be entered when the screen shows the message HIT ENTER TO TURN PAGE.Those commands annotated with an asterisk can be entered when the screen shows the message HIT ENTER TO TURN PAGE.
TSO Client Telnet Commands
This section outlines commands for sending data, for session control, for controlling input and output, and miscellaneous commands.
Some commands can be entered by hitting a programmed function key. Where a function key can be used in place of typing the command, the key name is shown to the right of the command name.
Commands for Sending Data
These are the commands for sending data:
•
SEND or ENTERSEND or ENTER
The SEND command sends its operand followed by a new line. If the operand is null, SEND has no effect (no-op).
•
XWNL or PF10XWNL or PF10
The XWNL transmit with null line command sends its operand followed by a new line. A null operand results in only a new line.
•
XNNL or PF11XNNL or PF11
The XNNL transmit with no null line command sends only its operand. If the operand is null, XNNL is a no-op.
•
XCTL or PF8XCTL or PF8
The XCTL transmit control command sends its operand after transforming only the last character into control case (CONTROL-x). If the operand is null, XCTL is a no-op.
•
XESC or PF7XESC or PF7
The XESC transmit escape command sends its operand followed by an escape. If only an escape is sent, a null operand results.
•
KO or PF6KO or PF6
The KO kill output command transmits an abort-output signal followed by an interrupt-process signal. Any operand is ignored.
•
HEXHEX
The HEX command interprets its operand as hexadecimal field characters, so the operand must consist of an even number of hexadecimal digits with no blanks or delimiters. The operand is converted to binary bytes and transmitted. The TSO Telnet program operates in EBCDIC mode and translates your string from EBCDIC to ASCII before placing it on the network.
•
IP or PA1IP or PA1
The IP command transmits an interrupt-process signal. Any operand is ignored.
The PA1 function key is also used for ATTN command.
•
BRK or PF6BRK or PF6
The BRK command transmits a break signal. Any operand is ignored.
Commands for Session Control
These commands are sent to Client Telnet to add sessions, switch between sessions, and change the status of a session:
Note
Any matching function keys are in parentheses to the right of the command.Any matching function keys are in parentheses to the right of the command.
•
APPLID stringAPPLID string
Changes the current APPLID default used by TSO Telnet to connect to the local Cisco IOS for S/390. The new APPLID is given by string. Thus, connections to multiple copies of Cisco IOS for S/390 or to other VTAM applications are possible.
•
END | BYE (PF3)END | BYE (PF3)
Terminates the current TSO Telnet activity. Normally, this is a session or a HELP screen. However, if no sessions are defined and HELP is not in effect, TSO Telnet is terminated. This command is refused if you issue it when there are sessions defined but no session is current.
•
RETURN (PF12)RETURN (PF12)
Ends all TSO Telnet activity. It is equivalent to multiple END commands.
Commands for Controlling Input and Output
These commands manipulate the TSO Telnet session that is currently running:
•
TTO numberTTO number
This command is effective only in screen mode. It specifies the number of milliseconds that TSO Telnet is to wait for more output before unlocking a locked keyboard. Large values cause sluggish operation, and small values require excessive use of ENTER. The default value of 500 is a reasonable compromise in most cases.
•
READ dsname | OFFREAD dsname | OFF
The READ command opens the file dsname and reads its records as Telnet input lines. Each line is processed as though entered from the keyboard with the ENTER or carriage return in line mode. This means that if you are operating in screen mode, lines beginning with the greater than (>) character are processed as though entered with PF5, that is, as Telnet commands.
Use this feature carefully, because it can cause confusing results if errors occur.
The dsname is specified in the usual TSO syntax, either quoted or unquoted. It can name any file that can be read sequentially with the PL/I READ statement, including a PDS member. Blanks and sequence numbers are treated as data by the TSO Telnet READ command, so an unsequenced variable-length file is the preferred input form. TSO Telnet continues to accept input from the screen during the READ operation. Lines read from the screen are executed at whatever point in the file they fall. The most useful application of this feature is the ability to enter the READ OFF command to abort reading. Otherwise, reading proceeds to end-of-file and stops. Obviously, you cannot read a file named OFF unless you use its quoted name. When a READ operation is in effect in screen mode, a READ indicator is visible on the separator line.
•
RTO secondsRTO seconds
The Read Timeout command specifies the number of milliseconds that TSO Telnet is to wait between input records during a READ operation. Normally, a READ operation is limited by the rate at which the data can be transmitted or by the need to turn the page as the read data is echoed; however, this command is provided for cases where those limitations do not apply. The default value is 500 milliseconds.
•
WAIT millisecondsWAIT milliseconds
This command causes TSO Telnet to pause just once for the number of milliseconds specified. You can interleave your READ data with WAIT commands at points where you know an operation takes a lot of time. For instance, it is wise to include WAIT commands behind CONNECT commands and LOGON sequences.
•
WRITE dsname OFFWRITE dsname OFF
Use the WRITE OFF command to terminate writing. You cannot write to a file named OFF unless you use its quoted name. When a WRITE operation is in effect in screen mode, a WRITE indicator is visible on the separator line.
WRITE OFF opens the file dsname and echoes Telnet input and output records into it. This produces a typescript of all interactions with all sessions that TSO Telnet is managing. The named file must already exist. Its DCB characteristics are changed to VB, 260, 4000.
•
This command is effective only in screen mode. It writes a snapshot of only the current screen to the SYSPRINT file. It is different from WRITE, which writes continuously into a data set of your choice.
•
ECHO ON | OFFECHO ON | OFF
This command controls echoing of output to the terminal and, in screen mode, echoing of input to the output area. When ECHO OFF has been specified, no output is written to the terminal, and a NOECHO indicator is visible on the separator line. This mode is usually used in conjunction with WRITE.
•
SAMPLESAMPLE
This command is used in ECHO OFF mode. It causes a small amount of output data to be echoed. In screen mode, one page is echoed, while in line mode the sample is determined by the size of the data records being received from the network and is usually only a partial line. SAMPLE lets you monitor a session that is writing its output only to a file. However, your SAMPLE commands appear in the output file.
•
AUTO ON | OFFAUTO ON | OFF
This command controls automatic page turning in screen mode. When AUTO ON (or just AUTO) has been specified, pages are turned without your intervention and an AUTO indicator is visible on the separator line. This mode, used with SLEEP mode, removes the terminal entirely from your control. Avoid using this combination.
•
SLEEPSLEEP
This command disables keyboard input (so you do not need to keep pressing ENTER to maintain output flow) and places a SLEEP indicator on the separator line. The only way to exit SLEEP mode is to press ATTN (PA1 in screen mode). This mode can be used with AUTO mode to remove the terminal entirely from the user's control. Use caution with this option.
•
ATTN (PA1)ATTN (PA1)
This command is always invoked through ATTN. In screen mode, attention is signaled through PA1 and is used only to break SLEEP mode. In line mode, attention interrupts TSO Telnet operation and requests a new input line. This input is then processed like any other.
The PA1 function key is also used for the IP command.
Note
The attention key on a 3278 is the PA1 key, not the ATTN key.The attention key on a 3278 is the PA1 key, not the ATTN key.
•
NOTENOTE
This command introduces a limited comment. No data is transmitted, but when you are in screen mode, data is echoed to the output area.
•
HIDE (PF9)HIDE (PF9)
This command causes the next input on the terminal not to display. It implements password protection. When in screen mode, HIDE causes HIDE to appear on the separator line and turns the primary input area into a nondisplay field. It is a toggle switch, so if it is on, you can enter it again to turn it off. Any operand associated with HIDE is ignored.
Miscellaneous Commands
These are some miscellaneous commands that can be useful:
•
EXEC (PF5)EXEC (PF5)
This command is always invoked through PF5, although it works as a command. It executes its argument as a Telnet command.
•
HELP (PF1)HELP (PF1)
This command presents brief tutorial information. In line mode, it lists common commands briefly. In screen mode, it displays a sequence of HELP screens. You can step through the screens with ENTER or return to Client Telnet with PF3.
•
RESHOW (PA2)RESHOW (PA2)
This command is meaningful in screen mode only and is invoked through PA2. It restores the screen to its previous condition.
•
CURSOR (PF4)CURSOR (PF4)
This command is effective in screen mode only. It moves the cursor to the beginning of the last line written to the output area.
•
CLEARCLEAR
This command is effective in screen mode only. It clears the screen for the current session and resets the current output line to the top of the output area, which can be useful in keeping things together on one display screen.
•
TEST ON | OFFTEST ON | OFF
This command controls the output of debugging information. When TEST ON (or just TEST) is specified, diagnostic data are written to the SYSPRINT file and a TEST indicator displays on the separator line.
•
LOG useridLOG userid
This command modifies the action of TEST. userid is the target of TPUT macro instructions to write the TEST output data. The same data is not written to SYSPRINT. This command lets you receive test output in real time, but on another terminal. LOG without an operand stops this special behavior.
•
TSO | DOTSO | DO
This command executes its argument as a TSO command and pre-empts Client Telnet temporarily. When the command processor returns, if screen mode is in effect, Client Telnet refreshes its screen.
VTAM Client Telnet
Cisco IOS for S/390 is a VTAM primary application and can support 3278 or 3767 terminals with Client Telnet access to your network.
Invoking VTAM Client Telnet
The VTAM Client Telnet command, VTELNET, operates in either NVT (line-by-line) mode or in transparent full-screen 3278 mode, depending on the Telnet negotiations initiated by the remote Server Telnet. The choice of mode is automatic and does not normally concern you except for the usage of special function keys (PFn). See NVT Operation from 3278 Terminals for more information on programmable function key assignments.
Invoke VTELNET with one of these commands:
ACCES host_name
or
LOGON APPLID(ACCES) to DATA(host_name)
Note
host_name is a required operand.host_name is a required operand.
The command is entered on the system login invitation screen in place of the LOGON command used to start a TSO session. Here ACCES is the VTAM application name for Cisco IOS for S/390 and the permissible host name is defined in Using Host Name Strings in Introduction to Cisco IOS for S/390. For example, if you enter ACCES SRI-NIC and the message SRI-NIC PARAMETER UNRECOGNIZED displays, it means that the LOGTAB=INTTAB parameter has not been set correctly during the Cisco IOS for S/390 installation. Report this problem to your Cisco IOS for S/390 site administrator.
Note
The exact VTAM logon command might be different at your installation. If in doubt, contact your Cisco IOS for S/390 site administrator.The exact VTAM logon command might be different at your installation. If in doubt, contact your Cisco IOS for S/390 site administrator.
VTAM Client Telnet Operation
When you enter the VTAM LOGON command, VTAM connects the terminal to Cisco IOS for S/390. Cisco IOS for S/390 then checks the host name, displays appropriate error messages, and disconnects the terminal. If the host name is correct, Cisco IOS for S/390 prompts you for a user ID and password, checking that you are allowed access to the network. If you supply the correct user ID and password, the Telnet connection to the remote server is established and the server's banner message displays.
NVT Operation from 3278 Terminals
While the remote host is operating in normal Telnet ASCII or NVT mode, VTAM Telnet maps line-by-line data onto the local 3278 screen much like TCAS does for TSO.
Each line segment is placed on a blank screen sequentially at the cursor. The control characters BS, CR, and LF affect the cursor in the applicable fashion. Characters typed on the keyboard display and transmit with the carriage return/line feed symbol (CRLF) appended when you press ENTER. The necessary 3278 orders are added to the screen data and deleted from the keyboard data.
The terminal operates in full-duplex local echo mode when controlled by VTAM Telnet. You need not press ENTER or any other key to poll for output. Any data received is immediately displayed. If you attempt to type while data is being sent to the screen, the typed data is lost. This is not normally a problem because usually you must wait for a prompt before beginning to type.
The screen can be erased at any time by pressing CLEAR. This homes the cursor so data display starts at the top of the screen. When the screen is filled, three asterisks (***) display on the last line. Read the screen and press ENTER or CLEAR to erase the screen and continue the display of data.
The program function keys map into the functions shown in :
Table 6-3 VTAM Client Telnet Function Keys
Only Enter and PF10/PF22 append CRLF to the data sent. PF7/PF19 appends the ESC character and PF8/PF20 takes the last character typed and converts it to a control character before sending. Enter and PF10/PF22 are similar, but Enter causes the cursor to move to the next line while PF10/PF22 sends a blank line without moving to the next line. Use Enter.
On SNA 3278 terminals, attention maps into the PA1 key for compatibility.
Full-Screen Operation from 3278 Terminals
If the remote server host is an IBM system (either MVS or VM) and the local user has a 3278-type terminal, VTELNET negotiates transparent full-screen mode with the server host.
In full-screen mode, all keys are transmitted and no local action is performed. If you must abruptly disconnect from Cisco IOS for S/390 while in full-screen mode, use the 3278 system request key (SYS REQ).
Client Telnet for ASCII Terminals
Cisco IOS for S/390 can support Client Telnet access from ASCII terminals and other non-3270 terminals that support remote echo mode. You can run Client Telnet from qualifying clients as if they were full-screen 3270 facilities. Among the supported terminal types are:
•
DEC VT52DEC VT52
•
DEC VT100DEC VT100
•
DEC VT220DEC VT220
•
DEC VT320DEC VT320
•
IBM316xIBM316x
•
TeleVideo 905TeleVideo 905
•
Zentec 8031Zentec 8031
•
AT&T 610AT&T 610
•
Hewlett-Packard terminals that have TCP/IP capabilityHewlett-Packard terminals that have TCP/IP capability
The above list is not all-inclusive. Other kinds of terminals can also process Client Telnet commands. You can display an online listing of terminals supported at your site. See Invoking Client Telnet from an ASCII Terminal for details.
Note.
Invoking Client Telnet from an ASCII Terminal
Follow these steps to invoke Client Telnet from an ASCII terminal or other non-3270 terminal:
1
At the system prompt, enter the TELNET command followed by the host name, and press ENTER. For example:At the system prompt, enter the TELNET command followed by the host name, and press ENTER. For example:
TELNET my_host
2
At the prompt, enter the command to call the service you want to access. The availability of commands and services is determined by site system managers. Consult your system manager to obtain the command(s) that are valid at your site. For example:At the prompt, enter the command to call the service you want to access. The availability of commands and services is determined by site system managers. Consult your system manager to obtain the command(s) that are valid at your site. For example:
TSO
3
You are prompted to either enter the terminal type or log off.You are prompted to either enter the terminal type or log off.
If you are not sure whether your terminal is supported, type a question mark (?) at the prompt to see a list of supported terminals for your site. For example:
Otherwise, enter the terminal type and press ENTER.
VT220
4
The software validates the terminal access and connects you to the target service.The software validates the terminal access and connects you to the target service.
You can use normal commands and operations to invoke features offered by that service, and to logoff the service.
Server Telnet
The Server Telnet facilities provide remote access to host application programs through the Telnet protocol. The Server Telnet facility accesses a server subsystem that drives a supported terminal type via ACF/VTAM. This includes TSO, CICS, and other popular subsystems.
The supported (virtual) terminal types are:
•
IBM 3767 typewriter terminals (SNA LU1 virtual terminals)IBM 3767 typewriter terminals (SNA LU1 virtual terminals)
•
Locally connected non-SNA IBM 3278 terminals (SNA LU0 virtual terminals)Locally connected non-SNA IBM 3278 terminals (SNA LU0 virtual terminals)
Either of these can be driven from an NVT to provide line-at-a-time operation for the remote user. The virtual 3278 can also be used in transparent full-screen mode. When you are a remote user from a remote IBM MVS or VM system and open a TCP connection to the well known Telnet server port (23), you are connected to a Telnet server process (ULPP) in Cisco IOS for S/390. If you proceed to logon to TSO, for example, the Telnet server ULPP invokes the Virtual Terminal Facility (VTF), which uses ACF/VTAM to make a cross-address-space connection to the virtual terminal handler for TSO. The ULPP makes all necessary conversions of code and protocols.
Remote users on non-IBM hosts who want to connect to applications in full-screen mode must have 3270 client emulation software on their host.
Server Telnet also supports the use of Session Level USSTAB (Unformatted System Services Tables) and the associated msg10 screens, as described later in this chapter.
Server Telnet Commands
The Telnet server also implements these pre-logon services within Cisco IOS for S/390:
•
HELPHELP
Displays available commands.
•
HELP commandHELP command
Displays help information for that command.
•
NEWSNEWS
Displays the news data set.
•
BYE, CLOSE, END, QUIT, or LOGOFFBYE, CLOSE, END, QUIT, or LOGOFF
Causes Server Telnet to close the connection.
•
NETSTATNETSTAT
Provides status information regarding Cisco IOS for S/390. For system programmers, an alternate entry called SYSSTAT is provided that enables the Cisco IOS for S/390 control functions in NETSTAT. The Telnet server requires a local LOGON before allowing access to SYSSTAT. The subcommands of NETSTAT are documented in the Cisco IOS for S/390 Customization Guide.
•
Prompts for user ID and password.
•
SIGNOFF or LOGOUTSIGNOFF or LOGOUT
Logs out the user.
•
ACTESTACTEST
This is the Cisco IOS for S/390 interactive debugging tool. It requires a local LOGON from the Server Telnet and is restricted to system programmers. The subcommands of ACTEST are documented in the Cisco IOS for S/390 Customization Guide.
Full-Screen-Only TSO Service Port
At present, the Client Telnet implementation within VM/370 TCP/IP is unable to negotiate full-screen 3278 operation with an MVS service (like TSO) that can support it. The restriction applies only when VM contacts Cisco IOS for S/390 on Server Telnet port 23.
To circumvent this problem, Cisco IOS for S/390 has a special port on which users can directly access MVS/TSO with full-screen operation. The port number is 1023 and connects only to TSO (although other ports could easily be added in APPCFGxx for other services).
When Cisco IOS for S/390 is contacted on port 1023, full-screen negotiation takes place immediately. If it fails, service reverts to NVT.
Autologon to Specific VTAM Applications
Many sites with session manager software prefer their users to be automatically connected to the session manager. This requires minor changes to the configuration file. Read the Cisco IOS for S/390 Customization Guide for details.
USS Table Support for Server Telnet
Server Telnet supports the use of Session Level USSTAB (Unformatted System Services Tables) and their associated msg10 screens. The feature enables you to customize screen access information for VTAM applications that are opened through Cisco IOS for S/390.
For more information about USS table support, read the Cisco IOS for S/390 Customization Guide.
Telnet Escape Sequences
Cisco IOS for S/390 implements the Telnet protocol using the logical not character (ÿ) as an escape character. This character must be doubled (ÿÿ) to be transmitted correctly.
An escape sequence is a single Telnet escape character ÿ, followed by a predefined character sequence. The entire sequence represents a single character, usually nongraphic or one that is not available on all IBM keyboards.
Valid Escape Sequences
The sequences described in are valid:
Table 6-4 Valid Escape Sequences | http://www.cisco.com/c/en/us/td/docs/ios/sw_upgrades/interlink/r2_0/user/ugtelnet.html | CC-MAIN-2016-18 | refinedweb | 6,960 | 61.67 |
flatten¶
paddle.fluid.layers.
flatten(x, axis=1, name=None)[source]
Flatten op
Flatten the input tensor into a 2D matrix.
For Example:
Case 1: Given X.shape = (3, 100, 100, 4) and axis = 2 We get: Out.shape = (3 * 100, 4 * 100) Case 2: Given X.shape = (3, 100, 100, 4) and axis = 0 We get: Out.shape = (1, 3 * 100 * 100 * 4)
- Parameters
x (Variable) – A tensor of rank >= axis. A tensor with type float32, float64, int8, int32, int64.
axis (int) – Indicate up to which input dimensions (exclusive) should be flattened to the outer dimension of the output. The value for axis must be in the range [0, R], where R is the rank of the input tensor. Default: 1.
name (str, Optional) – For details, please refer to Name. Generally, no setting is required. Default: None.
- Returns
A 2D tensor with the contents of the input tensor, with input dimensions up to axis flattened to the outer dimension of the output and remaining input dimensions flattened into the inner dimension of the output. A Tensor with type same as input x.
- Return type
Variable
- Raises
ValueError– If x is not a variable.
ValueError– If axis is not in range [0, rank(x)].
Examples
import paddle.fluid as fluid x = fluid.data(name="x", shape=[4, 4, 3], dtype="float32") # x shape is [4, 4, 3] out = fluid.layers.flatten(x=x, axis=2) # out shape is [16, 3] | https://www.paddlepaddle.org.cn/documentation/docs/en/api/layers/flatten.html | CC-MAIN-2020-05 | refinedweb | 240 | 68.26 |
git clone todo -- where is the code repository??? cd meego-units qmake make && sudo make install
To start, you need to import the MeeGo Units plug-in:
import Qt 4.7 import MeeGo.units 1.0
This loads the libmeegounitsplugin adds a new QML type called 'MeeGoUnits' Since plug-ins do not have direct access to the global object namespace, the plug-in can't automatically populate a top level namespace with a 'Units' element--so you must instantiate it within your top level widget:
Rectangle { MeeGoUnits { id: units } }
At this point, you can now reference the unit conversion properties via units. The properties exposed are mm, inch, vp, and density. All three are implemented as properties with notification signals, so if the underlying device PPI changes, all UI elements can be re-laid out. This is useful when moving (at run-time) an application from a small handset screen to a larger display (for example via an attached HDMI)
Extending the above example to set the rectangle size to be 10cm x 5cm:
Rectangle { MeeGoUnits { id: units } width: units.mm2px(100.0) height: units.mm2px(50.0) }
Using the conversion properties is pretty straight forward and works as you would expect. The following instantiates a 1cm x 1cm rectangle, offset from the top left of its parent by 1cm:
Rectangle { x: units.mm2px(10.0) y: units.mm2px(10.0) width: units.mm2px(10.0) height: units.mm2px(10.0) }
Since the value of the properties returned by MeeGoUnits() conversion method) }
/* Not done yet -- auto scaling and caching from an asset daemon with target size scaling: * * source: "image://" + unit + "/" + w + "x" + h "/text-bg" *
// Every image used could be a BorderImage BorderImage { width: 1024 height: 768 source: "image://super[/dimensions[/units]]/resource[.ext]" <-- Optional extension to override the default priority ordering source: "image://super/100x75/resource" <-- width is 100 pixels, height 75 pixels source: "image://super; probably requires creating a new "borderimage" replacement that supports a set of target border size properties. Usage example is to take a 1px border from a pixel asset and have the border actually scaled to 2mm on the display
* * So, instead, set the source width and height to the scaled dimensions : */ | http://wiki.meego.com/index.php?title=User:Jketreno/scaling&diff=37177&oldid=37175 | CC-MAIN-2013-20 | refinedweb | 368 | 50.57 |
Hi all. Ron here.
It's been a few weeks. I hope everyone is well.It's been a few weeks. I hope everyone is well.
What I have here is part of a program I'm working on. What I'm doing at this point is converting a Fahrenheit temperature to Celsius. Simple enough, but every time I run this thing, it keeps returning -0.0, no matter what. I can't figure out if it's the way I've keyed the formula or if I've done something wrong with the methods. Anyone care to chime in?
import javax.swing.JOptionPane; public class Celsius { public static void main(String[] args) { String input; double fahrenTemp; fahrenTemp = getFahren(); convertTemp(fahrenTemp); System.exit(0); } public static double getFahren() { String input = JOptionPane.showInputDialog("Please enter a Fahrenheit temperature from 0 to 10 " + "and I will convert it to Celsius for you."); double fahrenTemp = Double.parseDouble(input); return fahrenTemp; } public static void convertTemp(double fahrenTemp) { double toCelsius = (5 / 9) * (fahrenTemp - 32); JOptionPane.showMessageDialog(null, "That would be " + toCelsius + " degrees Celsius."); } } | http://www.javaprogrammingforums.com/whats-wrong-my-code/27527-math-my-methods.html | CC-MAIN-2014-35 | refinedweb | 178 | 61.63 |
arc3d · area · define_shape · diam · diam3d · diam_changed · distance · getSpineArea · L · n3d · pt3dadd · pt3dchange · pt3dclear · pt3dconst · pt3dinsert · pt3dremove · pt3dstyle · Ra · ri · setSpineArea · spine3d · x3d · y3d · z3d
Conceptual Overview of Sections¶ ↑
Sections are unbranched lengths of continuous cable connected together to form a neuron. Sections can be connected to form any tree-shaped structure but loops are not permitted. (You may, however, develop membrane mechanisms, such as electrical gap junctions which do not have the loop restriction. But be aware that the electrical current flows through such connections are calculated by a modified euler method instead of the more numerically robust fully implicit/crank-nicholson methods)
Do not confuse sections with segments. Sections are divided into segments
of equal length for numerical simulation purposes (see
Section.nseg).
NEURON uses segments to represent the electrical circuit shown below.
Ra o/`--o--'\/\/`--o--'\/\/`--o--'\/\/`--o--'\o v | | | | --- --- --- --- | | | | | | | | --- --- --- --- | | | | -------------------------------------------- ground
Such segments are similar to compartments in compartmental modeling programs.
Geometry¶ ↑
Section geometry is used to compute the area and axial resistance of each segment.
There are two ways to specify section geometry:
- The stylized method simply specifies parameters for length and diameter.
- The 3-D method specifies a section’s shape, orientation, and location in three dimensions.
Choose the stylized method if the notions of cable length and diameter
are authoritative and where 3-d shape is irrelevant. For plotting purposes,
length and diameter will be used to generate 3-d info automatically for
a stylized straight cylinder. (see
define_shape())
Choose the 3-D method if the shape comes from 3-d reconstruction data
or if your 3-d visualization is paramount. This method makes the 3-d info
authoritative and automatically
determines the abstract cable’s length and diameter.
With this method, you may change a section’s length/diameter only by
changing it’s 3-d info. (but see
pt3dconst())
Stylized specification of geometry¶ ↑
For simulations one needs to specify L, nseg, diam, Ra, and connectivity.
- section.L
- For each section, L is the length of the entire section in microns.
- section.nseg
- The section is divided into nseg compartments of length L/nseg. Membrane potential will be computed at the ends of the section and the middle of each compartment.
- seg.diam
- The diameter in microns. Note that diam is a range variable and therefore must be respecified whenever
nsegis changed.
- section.Ra
- Axial resistivity in ohm-cm.
- connectivity
- This is established with the connect command and defines the parent of the section, which end of the section is attached to the parent, and where on the parent the attachment takes place. To avoid confusion, it is best to attach the 0 end of a section to the 1 end of its parent.
In the stylized specification, the shape model used for a section is
a sequence of right circular cylinders of length, L/nseg, with diameter
given by the diam range variable at the center of each segment.
The area of a segment is PI*diam*L/nseg (micron2) and the half-segment axial
resistance is
.01*sec.Ra*(L/2/sec.nseg)/(PI*(seg.diam/2)^2). The .01 factor is necessary
to convert ohm-cm micron/micron2 to MegOhms. Ends of cylinders are not
counted in the area and, in fact, the areas are very close to those of
truncated cones as long as the diameter does not change too much.
from neuron import h import numpy sec = h.Section(name='sec') sec.nseg = 10 sec.Ra = 100 sec.L = 1000 # linearly interpolate diameters from 10 to 100 for seg in sec: seg.diam = numpy.interp(seg.x, [0, 1], [10, 100]))))
Output:
0.0 14.5 0.0 4555.30934771 1e+30 0.30279180612 0.05 14.5 4555.30934771 4555.30934771 0.30279180612 0.30279180612 0.15 23.5 7382.74273594 7382.74273594 0.418069266033 0.115277459913 0.25 32.5 10210.1761242 10210.1761242 0.175549154338 0.0602716944253 0.35 41.5 13037.6095124 13037.6095124 0.0972361172657 0.0369644228403 0.45 50.5 15865.0429006 15865.0429006 0.0619274567534 0.0249630339131 0.55 59.5 18692.4762889 18692.4762889 0.0429453733627 0.0179823394497 0.65 68.5 21519.9096771 21519.9096771 0.0315498128871 0.0135674734374 0.75 77.5 24347.3430653 24347.3430653 0.0241667620512 0.0105992886138 0.85 86.5 27174.7764536 27174.7764536 0.0191076887925 0.00850840017866 0.95 95.5 30002.2098418 30002.2098418 0.0154886887932 0.00698028861454 1.0 95.5 0.0 30002.2098418 0.00698028861454 0.00698028861454
Note that the area (and length) of the 0,1 terminal ends is equal to 0
and the axial resistance
is the sum of the adjacent half-segment resistances between segment and
parent segment. Such, niceties allow the spatial discretization error to
be proportional to
(1/nseg)^2. However, for second order correctness,
all point processes must be located at the center of the segments or at the
ends and all branches should be connected at the ends or centers of segments.
Note that if one increases nseg by a factor of 3, old centers are preserved.
For single compartment simulations it is most convenient to choose a membrane area of 100 micron2 so that point process currents (nanoamps) are equivalent to density currents (milliamps/cm2).
Also note that a single compartment of length = diameter has the same effective area as that of a sphere of the same diameter.
- Example:
The following example demonstrates the automatic 3-d shape construction. The root section “a” is drawn with it’s 0 end (left) at the origin and is colored red.
Sections connected to its 1 end (sections b, c, d) get drawn from left to right. Sections descended from the 0 end (section e) of the root get drawn from right to left.
Especially note the diameter pattern of section c whose “1” end is connected to the “b” parent. You don’t have to understand this if you always connect the “0” end to the parent.
from neuron import h, gui import numpy a, b, c, d, e = [h.Section(name=n) for n in ['a', 'b', 'c', 'd', 'e']] b.connect(a) c.connect(b(1), 1) # connect the 1 end of c to the 1 end of b d.connect(b) e.connect(a(0)) # connect the 0 end of e to the 0 end of a for sec in h.allsec(): sec.nseg = 20 sec.L = 100 for seg in sec: seg.diam = numpy.interp(seg.x, [0, 1], [10, 40]) s = h.Shape() s.show(False) s.color(2, sec=a) # color section "a" red h.topology() h.finitialize() for sec in h.allsec(): print(sec) for i in range(sec.n3d()): print('%d: (%g, %g, %g; %g)' % (i, sec.x3d(i), sec.y3d(i), sec.z3d(i), sec.diam3d(i)))
If you change the diameter or length, the Shape instances are automatically redrawn or when
doNotify()is called. Segment area and axial resistance will be automatically recomputed prior to their use.
Under some circumstances, involving nonlinearly varying diameters across a section, at first sight surprising results can occur when the stylized method is used and a Shape instance is created. This is because under a define_shape() with no pre-existing 3-d points in a section, a number of 3-d points is created equal to the number of segments plus the end areas. When 3-d points exist, they determine the calculation of L, diam, area, and ri. Thus diam can change slightly merely due to shape creation. When L and diam are changed, there is first a change to the 3-d points and then L and diam are updated to reflect the actual values of these 3-d points. Due to multiple interpolation effects, specifying a nonlinearly varying diam will, in general, not give exactly the same diameter values as the case where no 3-d information exists. This effect is illustrated in the following example
from neuron import h, gui def pr(nseg): sec.pt3dclear() sec.nseg = nseg setup_diam() h.define_shape() print_stats() def setup_diam(): for seg in sec: seg.diam = 20 if 0.34 <= seg.x <= 0.66 else 10 def print_stats(): for seg in sec.allseg(): print('%g %g %g %g' % (seg.x * sec.L, seg.diam, seg.area(), seg.ri())) h.xpanel("change nseg") h.xradiobutton("nseg = 3", (pr, 3)) h.xradiobutton("nseg = 11", (pr, 11)) h.xradiobutton("nseg = 101", (pr, 101)) h.xpanel() sec = h.Section(name='sec') sec.Ra = 100 sec.L = 100 sec.nseg = 3 setup_diam() print_stats() s = h.Shape() s.show(False) for i in range(sec.n3d()): print('%d: %g %g') % (i, sec.arc3d(i), sec.diam3d(i))) print("L= %g" % sec.L) print_stats()
The difference is that the 3-d points define a series of truncated cones instead of a series of right circular cylinders. The difference is reduced with larger nseg. With the stylized method, abrupt changes in diameter should only take place at the boundaries of sections if you wish to view shape and also make use of the fewest possible number of segments. But remember, end area of the abrupt changes is not calculated. For that, you need an explicit pair of 3-d points with the same location and different diameters.
3-D specification of geometry¶ ↑
3-d information for a section is kept in a list of (x,y,z,diam) “points”. The first point is associated with the end of the section that is connected to the parent (NB: Not necessarily the 0 end) and the last point is associated with the opposite end. There must be at least two points and they should be ordered in terms of monotonically increasing arc length.
The root section is treated as the origin of the cell with respect to 3-d position. When any section’s 3-d shape or length changes, all the sections in the child trees have their 3-d information translated to correspond to the new position. So, assuming the soma is the root section, to translate an entire cell to another location it suffices to change only the location of the soma. It will avoid confusion if, except for good reason, one attaches only the 0 end of a child section to a parent. This will ensure that the sec(x).diam as x ranges from 0 to 1 has the same sense as sec.diam3d(i) as i ranges from 0 to sec.n3d()-1.
The shape model used for a section when the pt3d list is non-empty
is that of a sequence of truncated cones in which the pt3d points define
the location and diameter of the ends. From this sequence of points,
the effective area, diameter, and resistance is computed for each segment
via a trapezoidal integration across the segment length. This takes
into account the extra area due to
sqrt(dx^2 + dy^2) for fast changing
diameters (even degenerate cones of 0 length can be specified, ie. two
points with same coordinates but different diameters)
but no attempt is made to deal with centroid curvature effects
on the area. Note that the number of 3d points used to describe a shape
has nothing to do with nseg and does not affect simulation speed.
(Although, of course, it does affect how fast one can draw the shape)
- Example:
The following illustrates the notion of the 3-d points as describing a sequence of cones. Note that the segment area and resistance is different than the simplistic calculation used in the stylized method. In this case the area of the segment has very little to do with the diameter of the center of the segment.
from neuron import h, gui from math import sin, cos sec = h.Section(name='sec') sec.Ra=100 sec.nseg = 10 h.pt3dclear(sec=sec) for i in range(31): x = h.PI * i / 30. h.pt3dadd(200 * sin(x), 200 * cos(x), 0, 100 * sin(4 * x), sec=sec) s = h.Shape() s.show(0) print(sec.L))))
Note that at one point the diameter is numerically 0 and the axial resistance becomes essentially infinite thus decoupling the adjacent segments. Take care to avoid constructing spheres with a beginning and ending diameter of 0. No current would flow from the end to a connecting section. The end diameter should be the diameter of the end of the connecting section.
The following loads the pyramidal cell 3-d reconstruction from the demo directory of your neuron system. Notice that you can modify the length only if the pt3dconst mode is 0.
from neuron import h, gui import __main__ h.xopen("$(NEURONHOME)/demo/pyramid.nrn") mode = 1 h.pt3dconst(mode) # uses default section from pyramid.nrn s = h.Shape() s.action(lambda: s.select(sec=h.dendrite_1[8])) s.color(2, sec=h.dendrite_1[8]) h.xpanel("Change Length") h.xvalue("dendrite_1[8].L", "dendrite_1[8].L", 1) # using HOC syntax # to directly access # the length h.xcheckbox("Can't change length", (__main__, 'mode'), lambda: h.pt3dconst(mode, sec=h.dendrite_1[8])) h.xpanel()
See also
pt3dclear(),
pt3dadd(),
pt3dconst(),
pt3dstyle(),
n3d(),
x3d(),
y3d(),
z3d(),
diam3d(),
arc3d()
getSpineArea(),
setSpineArea(),
spine3d()
See also
define_shape(),
pt3dconst()
If 3-D shape is not an issue it is sufficient to specify the section variables L (length in microns), Ra (axial resistivity in ohm-cm), and the range variable diam (diameter in microns).
A list of 3-D points with corresponding diameters describes the geometry of a given section.
Defining the 3D Shape¶ ↑
pt3dclear()¶ ↑
- Syntax:
buffersize = h.pt3dclear(sec=section)
buffersize = h.pt3dclear(buffersize, sec=section)
- Description:
- Destroy the 3d location info in
section. With an argument, that amount of space is allocated for storage of 3-d points in that section.
Note
A more object-oriented approach is to use
sec.pt3dclear()instead.
pt3dadd()¶ ↑
- Syntax:
h.pt3dadd(x, y, z, d, sec=section)
h.pt3dadd(xvec, yvec, zvec, dvec, sec=section)
Description:
Add the 3d location and diameter point (or points in the second form) at the end of the current pt3d list. Assume that successive additions increase the arc length monotonically. When pt3d points exist in
sectionthey are used to compute diam and L. When diam or L are changed and
h.pt3dconst(sec=section)==0the 3-d info is changed to be consistent with the new values of L and diam. (Note: When L is changed,
h.define_shape()should be executed to adjust the 3-d info so that branches appear connected.) The existence of a spine at this point is signaled by a negative value for d.
The vectorized form is more efficient than looping over lists in Python.
Example of vectorized specification:
from neuron import h, gui import numpy # compute vectors defining a geometry theta = numpy.linspace(0, 6.28, 63) xvec = h.Vector(4 * numpy.cos(theta)) yvec = h.Vector(4 * numpy.sin(theta)) zvec = h.Vector(theta) dvec = h.Vector([1] * len(theta)) dend = h.Section(name='dend') h.pt3dadd(xvec, yvec, zvec, dvec, sec=dend) s = h.Shape() s.show(0)
Note
The vectorized form was added in NEURON 7.5.
Note
A more object-oriented approach is to use
sec.pt3daddinstead.
pt3dconst()¶ ↑
- Syntax:
h.pt3dconst(0, sec=section)
h.pt3dconst(1, sec=section)
- Description:
If
pt3dconstis set at 0, newly assigned values for d and L will automatically update pre-existing 3d information.
pt3dconstreturns its previous state on each call. Its original value is 0.
Note that the diam information transferred to the 3d point information comes from the current diameter of the segments and does not change the number of 3d points. Thus if there are a lot of 3d points the shape will appear as a string of uniform diameter cylinders each of length L/nseg. ie. after transfer
sec.diam3d(i) == sec(sec.arc3d(i)/sec.L).diam. Then, after a call to an internal function such as
area()or
h.finitialize(), the 3d point info will be used to determine the values of the segment diameters.
Because of the three separate interpolations: hoc range spec -> segment diameter -> 3d point diam -> segment diameter, the final values of the segment diameter may be different from the case where 3d info does not exist.
Because of the surprises noted above, when using 3d points consider treating them as the authoritative diameter info and set
h.pt3dconst(1, sec=section).
3d points are automatically generated when one uses the NEURON Shape class. Experiment with
sec.nsegand
sec.n3d()in order to understand the exact consequences of interpolation.
See also
pt3dstyle()¶ ↑
- Syntax:
style = h.pt3dstyle(sec=section)
style = h.pt3dstyle(0, sec=section)
style = h.pt3dstyle(1, x, y, z, sec=section)
style = h.pt3dstyle(1, _ref_x, _ref_y, _ref_z, sec=section)
- Description:
With no args besides the
sec=keyword, returns 1 if using a logical connection point.
With a first arg of 0, then style is NO logical connection point and (with
pt3dconst()== 0 and
h.define_shape()is executed) the 3-d location info is translated so the first 3-d point coincides with the parent connection location. This is the classical and default behavior.
With a first arg of 1 and x,y,z value arguments, those values are used to define a logical connection point relative to the first 3-d point. When
pt3dconst()== 0 and define_shape is executed, the 3-d location info is translated so that the logical connection point coincides with the parent connection location. Note that logical connection points have absolutely no effect on the electrical properties of the structure since they do not affect the length or area of a section. They are useful mostly for accurate visualization of a dendrite connected to the large diameter edge of a soma that happens to be far from the soma centroid. The logical connection point should be set to the location of the parent centroid connection, i.e. most often the 0.5 location of the soma. Note, that under translation and scaling, the relative position between the logical connection point and the first 3-d point is preserved.
With a first arg of 1 and x,y,z reference arguments, the x,y,z variables are assigned the values of the logical connection point (if the style in fact was 1).
See also
pt3dconst(),
define_shape()
pt3dinsert()¶ ↑
pt3dremove()¶ ↑
- Syntax:
h.pt3dremove(i, sec=section)
- Description:
- Remove the i’th 3D point from
section.
pt3dchange()¶ ↑
- Syntax:
h.pt3dchange(i, x, y, z, diam, sec=section)
h.pt3dchange(i, diam, sec=section)
- Description:
Change the i’th 3-d point info. If only two args then the second arg is the diameter and the location is unchanged.
h.pt3dchange(5, section.x3d(5), section.y3d(5), section.z3d(5), section.diam3d(5) if not h.spine3d(sec=section) else -section.diam3d(5), sec=section)
leaves the pt3d info unchanged.
Reading 3D Data from NEURON¶ ↑
n3d()¶ ↑
- Syntax:
section.n3d()
h.n3d(sec=section)
- Description:
- Return the number of 3d locations stored in the
section. The
section.n3d()syntax returns an integer and is generally clearer than the
h.n3d(sec=section)which returns a float and therefore has to be cast to an int to use with
range. The latter form is, however, slightly more efficient when used with
section.push()and
h.pop_section()to set a default section used for many morphology queries (in which case the sec= would be omitted).
x3d()¶ ↑
- Syntax:
section.x3d(i)
h.x3d(i, sec=section)
- Description:
- Returns the x coordinate of the ith point in the 3-d list of the
section(or in the second form, if no section is specified of NEURON’s current default section). As with
n3d(), temporarily setting the default section is slightly more efficient when dealing with large numbers of queries about the same section; the tradeoff is a loss of code clarity.
diam3d()¶ ↑
- Syntax:
section.diam3d(i)
h.x3d(diam, sec=section)
- Description:
- Returns the diameter of the ith 3d point of
section(or of NEURON’s current default if no
sec=argument is provided).
diam3d(i)will always be positive even if there is a spine at the ith point.
arc3d()¶ ↑
- Syntax:
section.arc3d(i)
h.arc3d(i, sec=section)
- Description:
- This is the arc length position of the ith point in the 3d list.
section.arc3d(section.n3d()-1) == section.L
spine3d()¶ ↑
- Syntax:
h.spine3d(i, sec=section)
- Description:
- Return 0 or 1 depending on whether a spine exists at this point.
setSpineArea()¶ ↑
- Syntax:
h.setSpineArea(area)
- Description:
- The area of an average spine in um2.
setSpineAreamerely adds to the total area of a segment.
Note
This value affects all sections on the current compute node.
getSpineArea()¶ ↑
- Syntax:
h.getSpineArea()
- Description:
- Return the area of the average spine. This value is the same for all sections.
define_shape()¶ ↑
- Syntax:
h.define_shape()
- Description:
Fill in empty pt3d information with a naive algorithm based on current values for L and diam. Sections that already have pt3d info are translated to ensure that their first point is at the same location as the parent. But see
pt3dstyle()with regard to the use of a logical connection point if the translation ruins the visualization.
Note: This may not work right when a branch is connected to the interior of a parent section
0 < x < 1, rather only when it is connected to the parent at 0 or 1.
area()¶ ↑
- Syntax:
h.area(x, sec=section)
section(x).area()
- Description:
Return the area (in square microns) of the segment
section(x).
section(0).area()and
section(1).area()= 0
ri()¶ ↑
- Syntax:
h.ri(x, sec=section)
section(x).ri()
- Description:
- Return the resistance (in megohms) between the center of the segment
section(x)and its parent segment. This can be used to compute axial current given the voltage at two adjacent points. If there is no parent the “infinite” resistance returned is 1e30.
Example:
for seg in sec.allseg(): print('%g %g %g' % (seg.x * sec.L, seg.area(), seg.ri()))
will print the arc length, the segment area at that arc length, and the resistance along that length for the section
sec.
distance()¶ ↑
- Syntax:
h.distance(sec=section)or
h.distance(0, x, sec=section)or
h.distance(0, section(x))
length = h.distance(x, sec=section)or
length = h.distance(1, x, sec=section)
length = h.distance(segment1, segment2)
Description:
Compute the path distance between two points on a neuron. If a continuous path does not exist the return value is 1e20.
h.distance(sec=section)
- specifies the origin as location 0 of
section
h.distance(x, sec=section)or
h.distance(section(x))for 0 <= x <= 1
- returns the distance (in microns) from the origin to
section(x).
To overcome the old initialization restriction,
h.distance(0, x, sec=section)or the shorter
h.distance(0, section(x))can be used to set the origin. Note that distance is measured from the centers of segments.
Example:
from neuron import h soma = h.Section(name='soma') dend = h.Section(name='dend') dend.connect(soma(0.5)) soma.L = 10 dend.L = 50 length = h.distance(soma(0.5), dend(1))
Warning
When subtrees are connected by
ParallelContext.multisplit(), the distance function returns 1e20 if the path spans the split location.
Note
Support for the variants of this function using a segment (i.e. with
section(x)) was added in NEURON 7.5. The two segment form requires NEURON 7.7+.
See also
diam_changed¶ ↑
- Syntax:
h.diam_changed = 1
- Description:
Signals the system that the coefficient matrix needs to be recalculated.
This is not needed since
Rais now a section variable and automatically sets diam_changed whenever any sections Ra is changed. Changing diam or any pt3d value will cause it to be set automatically.
Note
The value is automatically reset to 0 when NEURON has recalculated the coefficient matrix, so reading it may not always produce the result you expect.
If it is important to monitor changes to the diameter, look at the internal variable
diam_change_cntwhich increments every time
h.diam_changedis automatically reset to 0:
from neuron import h, gui import neuron import ctypes import time diam_change_cnt = neuron.nrn_dll_sym('diam_change_cnt', ctypes.c_int) print('{} {}'.format(h.diam_changed, diam_change_cnt.value) # 1 0 s = h.Section(name='s') print('{} {}'.format(h.diam_changed, diam_change_cnt.value) # 1 0 time.sleep(0.2) print('{} {}'.format(h.diam_changed, diam_change_cnt.value) # 0 1 s.diam = 42 print('{} {}'.format(h.diam_changed, diam_change_cnt.value) # 1 1 time.sleep(0.2) print('{} {}'.format(h.diam_changed, diam_change_cnt.value) # 1 2
Ra¶ ↑
- Syntax:
section.Ra
- Description:
Axial resistivity in ohm-cm. This used to be a global variable so that it was the same for all sections. Now, it is a section variable and must be set individually for each section. A simple way to set its value is
for sec in h.allsec(): sec.Ra = 35.4
Prior to 1/6/95 the default value for Ra was 34.5. Presently it is 35.4. | https://www.neuron.yale.edu/neuron/static/py_doc/modelspec/programmatic/topology/geometry.html | CC-MAIN-2021-17 | refinedweb | 4,148 | 58.28 |
Hi All!
I am working on a project and the deadline is really fast approaching... I have been trying to solve this problem for weeks and I still havn't figured this out. I'm not really that familiar with swing, and I have been stucked in a JTable limbo.
My JTable implements single row/column selection, cells are editable, and has Double values in it. I wanted my JTable to behave in such a way that whenever I press <ENTER>, it will select(or focus) the next cell on the right or when it is already at the end of the column, it will select the cell of the first column below. The default behaviour of JTable is that It selects(or focuses) the next cell below, and so on and so forth...
Another problem also is that when I try to edit the values on the cells, by pressing a key on the keyboard, it appends the character at the end of the existing value. How do I override it that instead of appending it, it will replace the existing value.
I couldn't find a way to figure this thing out... please help...
I'd add a KeyListener to the JTable to listen for when they press enter.
You must implement it at the top (public class MyClass extends JApplet, implements KeyListener)
and add it to the instance of JTable (table.addKeyListener( this )) and add the following methods:
Code:
public void keyPressed( KeyEvent ke ) { }
public void keyTyped( KeyEvent ke ) { }
public void keyReleased( KeyEvent ke ) { }
So you would probably want to do this in one of those methods:
Code:
if ( ke.getKeyCode() == KeyEvent.VK_ENTER ) {
// code to set focus the specific cell
}
public void keyPressed( KeyEvent ke ) { }
public void keyTyped( KeyEvent ke ) { }
public void keyReleased( KeyEvent ke ) { }
if ( ke.getKeyCode() == KeyEvent.VK_ENTER ) {
// code to set focus the specific cell
}
Last edited by destin; 12-26-2005 at 10:59 PM.
Hotjoe Java forums: ()
I was looking for sum code to override the Selection mechanism of JTable... I hope I find any...
Forum Rules
Development Centers
-- Android Development Center
-- Cloud Development Project Center
-- HTML5 Development Center
-- Windows Mobile Development Center | http://forums.devx.com/showthread.php?149097-Please-Help!-I-am-meeting-a-deadline-and-i-still-haven-t-figured-this-out... | CC-MAIN-2014-10 | refinedweb | 360 | 67.79 |
separated out from bug 563088.
This blocks some beta, not necessarily beta 2.
When is this going to be turned on? I've been running with DoD for weeks and haven't noticed any issues, so I'm hoping this will make Fx 4.0, as it's a huge perf/footprint win.
(In reply to comment #2)
> When is this going to be turned on? I've been running with DoD for weeks and
> haven't noticed any issues, so I'm hoping this will make Fx 4.0, as it's a huge
> perf/footprint win.
Is it really? All the image.mem.decodeondraw pref does these days (as far as I can tell) is makes us not decode images loaded in background tabs when you control+click (instead of decoding those images and then discarding them 15 seconds later). Most of the snazzy work was folded into discarding, which is already turned on by default.
Since I think it's likely that users will tab over to those pages within 15 seconds, I'm starting to think that we don't want to do this for desktop.
Can you verify the perf/footprint effects and give examples?
Thanks for your help.
(In reply to comment #3)
> Since I think it's likely that users will tab over to those pages within 15
> seconds
This is not always a good assumption. There are plenty of different browser use habits out there, and one of which is opening dozens of tabs at once and then slowly going through them one at a time.
(In reply to comment #3)
> makes us not decode images loaded in background tabs when you control+click
Which is something I and others do all the time. (and I have not enough RAM in this laptop) This will help some people in a signifigant way.
Will it hurt anyone in any significant way? This is the bit of info needed to know if it's worth turning on. If it could potentially hurt other use methods, then some sort of smart system to decide when to do what would be needed.
(In reply to comment #3)
> Can you verify the perf/footprint effects and give examples?
An example I had before no longer works, as the page was long ago redesigned so that there was no longer a huge page of high-res images. Back then, I would easily hit over 800 MB opening up the page in a background tab; the tab would remain loading for several minutes, and overall browser responsiveness would suffer. So, I'm not sure why you don't believe this is a worthwhile benefit. I realize you're not saying that explicitly, but that is the unfortunate result.
As for discarding, you can observe the benefit on a site like. As you keep scrolling that page, the page gets longer and longer and more and more images are added. It used to be able to increase memory consumption indefinitely (well, until you ran out). Now memory consumption stays reasonably low.
D2D bugs aside, All I know is that I no longer come close to cracking 1 GB memory consumption since using discarding + DoD. In fact I rarely ever hit 500 MB anymore -- something that used to happen very easily.
> Is it really? All the image.mem.decodeondraw pref does these days...
Wasn't aware of that, but performance is performance; and I'll take what I can get. ;-)
> Since I think it's likely that users will tab over to those pages within 15
> seconds, I'm starting to think that we don't want to do this for desktop.
I'm with Dave on this one. If there's no *reasonable* negative to enabling an immensely beneficial feature, why forgo based on a remote possibility of an abnormal use case? And my browsing habit, when opening tabs in the background, is similar to Dave's. I open up a whole whack then go through them one-by-one. If the one I am looking at has interesting information, it will definitely be a whole lot more than 15 seconds before I switch to another tab.
I can't see why a user would open up a bunch a tabs in the background and then quickly toggle through them. Is the user more interested in making tabs flash or actually doing something even remotely productive? And even if this is some theoretical user's habit, the user experience should be orders of magnitude better with DoD than old behavior of having the browser slow to a crawl for several minutes, while a bunch of background tabs load.
That said, hasn't Mozilla's position generally been to support features that benefit the vast majority of users and leave the special-casing to extensions? To me, the scenario you describe seems very special-case and likely very rare.
All that being said, I do hope you will reconsider.
Many thanks
I think it's not reasonable to decode not-visible images in background tabs always and then proceed to discard the work after 15 seconds. If I open 10 tabs from google results page, the changes are high that I will not see the last within 15 seconds. Using CPU to decode images on that tab is wasted CPU time.
I'd prefer not wasting memory and CPU by default even if it causes some flicker when changing to a new tab because of decode-on-draw happening at tab switch. In the long run, I'd suggest following heuristics:
* decode images on visible portion of current page (obviously) and in addition, decode images below and above the visible portion for, say, up to 10 million pixels (if the non-visible page contains a lot of small images, such as user icons in discussion forums, decoding all of them before hand should not use too much memory. On the other hand, if the page contains 20 pieces of 10 megapixel photos, it doesn't make any sense to decode all of them beforehand. In addition, counting pixels is easy because the code just needs to multiply width and height and add to total).
* decode images on "visible" (viewport) portion of non-visible tab if the tab is immediately left or right from the current tab (or perhaps keep a pixel counter here too and pre-decode visible portions of more tabs left and right from current tab if not-too-much pixels are decoded.)
* in every other case, decode the image on draw.
Especially note that if a newly opened non-visible tab is far from the current tab, the changes are high that it will not be visited within 15 seconds.
As an end result, the pre-rendering/pre-decoding takes predictable (and hopefully adjustable) amount of memory and other images may flicker a bit because of asynchronous decoding.
I've been running Firefox betas and nightlies with image.mem.decodeondraw=true for months now and have been quite pleased with the results.
I think it's worth flipping this on ASAP to get full user testing in the next betas. Any possibility of getting this flipped on for beta 8? (at least 9?)
Thanks to libjpeg-turbo, performance when switching to background loaded tabs is very good now. Will decode-on-draw be enabled by default in FF 5 or 6?
This is a good page for testing background loaded tab switching performance:
Created attachment 536675 [details] [diff] [review]
enable decode-on-draw
Joe, can you land this?
Landed on inbound.
Mozilla inbound Talos shows a 3.5% Tp4 increase on XP, was it expected?
No - I've backed this change out.
It looks like the talos hit was on other platforms too:
mlb.com is one of the pages that shows the regression pretty clearly.
*** Bug 666560 has been marked as a duplicate of this bug. ***
I've added some telemetry to try to figure out what's going on here:
With decode-on-draw we decode many images twice, however, one of the decodes is just a 'size-decode' which is relatively cheap.
I counted the amount of times we paint an image and didn't notice a statistically significant difference.
One thing that was interesting was that with decode-on-draw we spend noticeably less time decoding:
65 samples, average = 48.2, sum = 3136
vs.
67 samples, average = 82.6, sum = 5537
decode-on-draw will only help when it decodes only the images in the visible part (and its neighbor screens) of the a page, or it is totally useless and will make the things worse.
For example, if we open a image heavy page which have 200 big images and will take up 3G byte when they are fully decoded.
WITHOUT decode-on-draw: firefox will take a long time to decode all images and keep them in memory. After that firefox will work well, except we loss 3G byte memory.
WITH decode-on-draw: EVERYTIME this big page actived, firefox will take a long time to decode all images. It will use 3G byte memory. This will make firefox almost unusable, if we switch to other tab and then switch back.
THE REASON for decode-on-draw should be protecting firefox from out of memory when visiting such large pages. However if decode-on-draw decodes all images in current page, the program will still have the problem of out of memory. for example, if we switch from a 3G byte page to another 3G byte page, it will take 6G byte, and easily run out of memory.
Conclusion is, decode-on-draw should decode only part of the page in the active page, or it is just a joke.
One more comment:
decode-on-draw should be an idea of page rander, instead of garbage collector.
What about doing something like Page Visibility API? Page Visibility API can compensate it little bit, I think. Forgive my dumbness if feel useless reply.
(In reply to Ahmad Saleem from comment #20)
> What about doing something like Page Visibility API? Page Visibility API can
> compensate it little bit, I think. Forgive my dumbness if feel useless reply.
That would be useful, yes. The issue thus far is that this is rather hard to get right. But if there were other compelling uses for such an API, it might provide more motivation to implement it.
(In reply to Bobby Holley (:bholley) from comment #21)
> That would be useful, yes. The issue thus far is that this is rather hard to
> get right. But if there were other compelling uses for such an API, it might
> provide more motivation to implement it.
Oh, whoops - I didn't realize that Page Visibility API refers to a webkit thing. We do something like that internally (see nsIDocShell::IsActive()), but we don't expose it to content.
What we really need to mitigate the discarding issue is a callback mechanism for the visibility of particular bits of content (ie, whether they're scrolled out of view or not). But again: hard.
I think its not limited to WebKit only. What about render only images up to desktop resolution and half to the resolution next (means half of resolution) so total 1 full resolution frame while half (0.5) next of resolution. Something like this. I know its difficult but I trust you guys. :-)
Marking for MemShrink triage. See e.g. bug 682230.
jrmuizel will investigate the regression some more.
My best guess is that this causes performance regressions because now we're doing size decodes in addition to regular decodes. I'll try to find some more evidence for that theory.
Looks like my theory was wrong.
The following patch seems to fix the performance regression:
diff --git a/modules/libpr0n/src/RasterImage.cpp b/modules/libpr0n/src/RasterImage.cpp
--- a/modules/libpr0n/src/RasterImage.cpp
+++ b/modules/libpr0n/src/RasterImage.cpp
@@ -2140,17 +2140,17 @@ bool
bool
RasterImage::StoringSourceData() {
- return (mDecodeOnDraw || mDiscardable);
+ return ((mDecodeOnDraw && false) || mDiscardable);
}
Not storing source data basically makes decode-on-draw moot for image types that aren't discardable.
Also, as far as I can tell we turn off decode-on-draw for images that aren't discardable in imgRequest::OnDataAvailable(). So I guess I'm interested in when that is the case!
*** Bug 687323 has been marked as a duplicate of this bug. ***
For what ever reason, I can't reproduce the talos regression locally anymore...
Oh wow, this is on by default now? That should make a huge difference to our image memory usage problems, right?
I thought this makes a difference only when you load a new tab in the background. With DoD we won't decode those images. Without DoD, we'll decode them, and then throw them away 10s later.
What Justin says is true. We will still decode all images on a particular tab when we switch to it.
So this change only removes the decode-then-discard-after-10-seconds dance when you open a tab in the background, is that right?
Correct. | https://bugzilla.mozilla.org/show_bug.cgi?id=573583 | CC-MAIN-2016-36 | refinedweb | 2,186 | 64.2 |
Namespace is a container that has all the names (of
variables/functions/classes) that you define. You can define same
names in different namespaces. A name or a variable exists in a
specific area of the code which defines its scope. The information
regarding binding between the variables/objects is stored in the
namespace. There are three types of namespaces or scopes.
Built-in These are in-built functions that are available across all
files or modules.
Global The global namespace has all the variables, functions, and
classes that are available in a single file.
Local The local namespace are variables defined within a function.
The scopes are nested, which means that the local namespace is
nested within a global namespace which is nested within built-in
namespace. Each scope has its namespace.
Discussion (0) | https://practicaldev-herokuapp-com.global.ssl.fastly.net/yeboahd24/scopes-and-namespace-5f7m | CC-MAIN-2021-49 | refinedweb | 135 | 66.84 |
This page describes the data set used in the paper:Measuring and Characterizing End-to-End Route Dynamics in the Presence of Load Balancing
This data set contains traceroute-style measurements from monitors to 1,000 destinations using FastMapping, a probing method that exploits load balancing characteristics to reduce the number of probes needed to measure accurate route dynamics.
We used 70 PlanetLab nodes as monitors during five weeks starting September 1st, 2010. Click here for the list of PlanetLab nodes we used. Each monitor selects 1,000 destinations at random from a list of 34,820 randomly chosen reachable destinations. Monitors use ICMP probes and probe as fast as they can, which results in an average measurement round duration of 4.4 minutes. For a more detailed description of FastMapping, please see the original paper (PDF).
The data is organized first by monitor, then by measurement round. A
measurement round comprises the measurement of all monitored paths. The result
of each measurement round is stored in a file named
<timestamp>.gz, where the timestamp marks the time when the
measurement round started. The result of each measurement round is stored in
binary form. We provide example code that reads the binaries and provides an
interface to the data.
monlist.txt
The list of 70 PlanetLab nodes used in the data set.
dstlist.txt
The list of 34,820 destinations from which monitors choose 1000 destinations to probe.
dlib.py
Python code with an API to load the binary data set in memory and access it. See an example of usage below. The code has inlined Python documentation.
chronos-day1.tar
This file contains data for the first day of measurements from the monitor at
chronos.disy.inf.uni-konstanz.de. This file contains several
<timestamp>.gz files described above. To load the
1283506986.gz file and print its contents, it is enough to use the
commands below in a Python script:
import gzip
from dlib import Snapshot
fd = gzip.open('1283506986.gz')
s = Snapshot.read(fd)
print s
The whole data set (with the five weeks of data from the 70 monitors) is 47GB. As we have limited bandwidth, we currently request interested researchers to: first, use the example trace and scripts given above to evaluate the utility of the traces for your purposes; then, send an email to Italo Cunha to get access to the complete traces.
We are currently working towards a more convenient solution. If you have enough bandwidth to mirror the data, we would be very happy to upload it and link it from this page.
Italo Cunha <lastname dot remove dot this at dcc dot ufmg dot br> | https://homepages.dcc.ufmg.br/~cunha/datasets/fastmapping/index.html | CC-MAIN-2022-21 | refinedweb | 444 | 64.81 |
Hi Guillaume,
thanks a bunch, it looks great.
I will give it a try later today.
Regards
JB
On 01/04/2012 10:58 AM, Guillaume Nodet wrote:
> I've pushed some changes to servicemix specs and created a github
> branch of karaf containing a new distribution that embeds those specs:
>
>
> I can then do the following:
>
> ((((($.context bundle 0) loadClass "javax.xml.stream.XMLInputFactory")
> getMethod newInstance) invoke null) getClass) getName
> bundle:install mvn:org.codehaus.woodstox/stax2-api/3.1.1
> bundle:install -s mvn:org.codehaus.woodstox/woodstox-core-asl/4.1.1
> ((((($.context bundle 0) loadClass "javax.xml.stream.XMLInputFactory")
> getMethod newInstance) invoke null) getClass) getName
>
> And you should see that the first call was using the JRE built in stax
> impl and the second one was using woodstox.
>
> If you want to see what happens with the spec, you can add
> org.apache.servicemix.debug=true property in etc/system.properties
> before starting.
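
For anyone not fluent in gogo syntax, the nested expression above is just reflection against the system bundle. A rough plain-Java equivalent, written here as a throwaway activator (class name and setup are illustrative, not something from the thread), would be:

    import org.osgi.framework.BundleActivator;
    import org.osgi.framework.BundleContext;

    public class StaxImplCheck implements BundleActivator {

        public void start(BundleContext context) throws Exception {
            // Same idea as the gogo one-liner: load the API class through the
            // system bundle, call the static newInstance(), and print which
            // implementation class is actually backing it.
            Class<?> api = context.getBundle(0).loadClass("javax.xml.stream.XMLInputFactory");
            Object factory = api.getMethod("newInstance").invoke(null);
            System.out.println("StAX factory impl: " + factory.getClass().getName());
            // Expect something like com.sun.xml.internal.stream.XMLInputFactoryImpl for
            // the JRE built-in, or com.ctc.wstx.stax.WstxInputFactory once Woodstox is wired in.
        }

        public void stop(BundleContext context) {
        }
    }

The printed class name is what should change after the two bundle:install commands above.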
>
>
> On Mon, Jan 2, 2012 at 18:15, Guillaume Nodet<[email protected]> wrote:
>> On Wed, Dec 28, 2011 at 05:06, Daniel Kulp<[email protected]> wrote:
>>> On Tuesday, December 27, 2011 3:22:48 PM Jean-Baptiste Onofré wrote:
>>>> Just a question, as you know CXF better than I:
>>>>
>>>> - why CXF can't use the packages provided by the JRE (assuming we use
>>>> JRE 1.6) ?
>>>
>>> Well, you did mention the JRE 1.6 thing... that is the first issue. CXF
>>> still supports Java5 (as does Karaf 2.2.x) and thus needs to provide a useful
>>> features.xml for Java5 folks. But that's a separate issue..... However,
>>> that IS the reason for having geronimo-annotation and geronimo-ws-metadata
>>> bundles. We could easily use the in JDK versions of these as they don't
>>> involve factories or implementations or anything like that.
>>
>> We decided to drop JDK 5 in Karaf.
>>
>>> SAAJ is the next spec to look at. Again, not part of Java5. However, the
>>> SAAJ implementations haven't changed in 3 years or so. Thus, using the
>>> version in the JDK is acceptable to me for now. However, this is a "factory"
>>> spec and thus the container should provide a way to load additional
>>> implementations. Andreas and I HAVE talked about a new implementation based
>>> on his DDOM project that would allow deferred processing and such, but that
>>> project has been very slow in developing anything so I'm not worried about it
>>> now. There IS a strange issue with CXF and the SAAJ implementation in a
>>> certain version of WebSphere on AIX that I'm still not sure about where we did
>>> flip it out to use the Sun SAAJ impl, but that wasn't in Karaf/OSGi.
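
As a side note on what a "factory" spec means in practice, the sketch below (not from the thread) shows the SAAJ entry point whose runtime lookup a container has to be able to redirect; the exact lookup order is an implementation detail:

    import javax.xml.soap.MessageFactory;
    import javax.xml.soap.SOAPMessage;

    public class SaajFactoryCheck {

        public static void main(String[] args) throws Exception {
            // SAAJ is a "factory" spec: the static newInstance() decides at runtime
            // which implementation backs it (system property / services lookup / JDK
            // default). That lookup is exactly what an OSGi container needs to be
            // able to point at an implementation bundle.
            MessageFactory factory = MessageFactory.newInstance();
            System.out.println("SAAJ factory impl: " + factory.getClass().getName());
            SOAPMessage message = factory.createMessage();
            System.out.println("Empty message created: " + (message != null));
        }
    }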
>>>
>>> The next spec to consider is StAX. CXF is OK (I think) with the in JDK
>>> version. (FYI: Camel 2.9.0 is not, more in a second) We don't test
>>> extensively with it as there are serious issues with the in JDK version. This
>>> falls into the area where every ENTERPRISE level person I've ever talked to
>>> has ALWAYS preferred performance and consistency and generally "works"
>>> compared to using JDK versions. The StAX implementation in the JDK is awful.
>>> Not only is it a LOT slower than WoodStox, it also is much harder to properly
>>> handle writing namespaces (CXF is OK I believe, but many other users of StAX
>>> don't know the details and thus don't handle it right.) The namespace issue
>>> is also different between the Sun JDK's, the Oracle JRockit based JDK's, and
>>> the IBM JDK's. However, the MAIN issue with the in-JDK version is that it's
>>> VERY VERY easy to write non-thread safe code. Most people don't realize that
>>> the StAX XMLInput/OutputFactory objects in the JDK are not thread safe. They
>>> are really the only commonly used "factories" in the XML spec that are not.
>>> Thus, people write code that works fine in the tests, but blows up when they
>>> scale up with strange errors and such. Just switching to woodstox (which has
>>> thread safe factories) fixes the issue. To prove my point, the Stax
>>> component submitted to Camel, reviewed by camel committers, committed, tested,
>>> etc... is NOT thread safe unless you use Woodstox due to the same issue.
>>> Again, putting woodstox in there not just provides enhanced performance and
>>> stability, it also prevents a whole range of programming problems and every
>>> enterprise customer I've ever talked to appreciates that. Basically, I'm OK
>>> using the "in JDK" API's as long as they can properly and easily load woodstox
>>> as the implementation.
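
A minimal sketch of the usage pattern being warned about -- one StAX factory instance shared across threads -- is below. It is illustrative only (names and XML invented); whether it misbehaves depends entirely on which implementation newInstance() returns, which is the point being made:

    import java.io.StringReader;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;
    import javax.xml.stream.XMLInputFactory;
    import javax.xml.stream.XMLStreamReader;

    public class SharedStaxFactory {

        // The risky pattern: a single factory instance shared by every thread.
        // The StAX API does not guarantee that factories are thread safe, so whether
        // this holds up depends on which implementation newInstance() returns.
        private static final XMLInputFactory SHARED = XMLInputFactory.newInstance();

        public static void main(String[] args) throws Exception {
            System.out.println("Using: " + SHARED.getClass().getName());
            ExecutorService pool = Executors.newFixedThreadPool(8);
            for (int i = 0; i < 1000; i++) {
                pool.submit(new Runnable() {
                    public void run() {
                        try {
                            // Concurrent createXMLStreamReader() calls on the shared
                            // factory: fine in a single-threaded test, may misbehave
                            // under load with a non-thread-safe implementation.
                            XMLStreamReader reader = SHARED.createXMLStreamReader(
                                    new StringReader("<order><item qty='2'>book</item></order>"));
                            while (reader.hasNext()) {
                                reader.next();
                            }
                            reader.close();
                        } catch (Exception e) {
                            e.printStackTrace();
                        }
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.MINUTES);
            // Safer portable options: one factory per thread (e.g. a ThreadLocal), or an
            // implementation that documents its factories as thread safe once configured.
        }
    }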
>>>
>>> That leaves JAX-WS and JAXB. I'll start with JAX-WS. CXF is a certified
>>> JAX-WS 2.2 compliant JAX-WS implementation (except for the latest releases, we
>>> are working on fixing that). The code that is generated from the command
>>> line tools is JAX-WS 2.2 compliant. Thus, to use the generated code, you
>>> need to have the 2.2 API's. CXF does provide a "-fe jaxws21" flag to have
>>> it generate 2.1 compliant code, but for certification reasons and such, the
>>> default is 2.2. That said, when using Maven plugins, due to the endorse
>>> issues and such, you will get whatever version of JAX-WS API's found in the
>>> JDK. On java6, you will get 2.1 compliant code. On java7 or java5, you
>>> would get 2.2. The nice thing about the additions in 2.2 is that the API jar
>>> can be used with a 2.1 implementation. (new methods would throw
>>> UnsupportedOperationException). Thus, you can endorse the 2.2 API jar and
>>> still use the Java6 in-JDk 2.1 based version as long as you just use the 2.1
>>> provided methods. The main issue with JAX-WS is the "factory" issue. If I
>>> have CXF installed and I call "Endpoint.publish(...)" or "new
>>> MyService().getFooPort()", it should use CXF, not the in-JDK version. If the
>>> activator thing gnodet is working on can allow use of the system provided
>>> "API" packages (javax.xml.ws stuff) but have it find and load the CXF
>>> implementation, than I'm OK with that. That CAN be 2.1 on Java6 and 2.2 on
>>> Java5/7. However, there also would need to be a way on Java6 to have it
>>> update to 2.2 if really needed.
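
To make the JAX-WS "factory issue" concrete, here is a minimal, hypothetical publisher (service class and URL invented for illustration). The API call never names an implementation, so the provider lookup decides whether CXF or the in-JDK stack answers:

    import javax.jws.WebService;
    import javax.xml.ws.Endpoint;

    @WebService
    public class HelloService {

        public String sayHi(String name) {
            return "Hello " + name;
        }

        public static void main(String[] args) {
            // Which stack actually serves the endpoint is decided by the
            // javax.xml.ws.spi.Provider lookup (services entry, jaxws.properties,
            // system property, then the JDK default), so in OSGi the spec bundle has
            // to bridge that lookup to an implementation bundle such as CXF instead
            // of silently falling back to the in-JDK stack.
            Endpoint.publish("http://localhost:9000/hello", new HelloService());
            System.out.println("Endpoint published at http://localhost:9000/hello");
        }
    }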
>>>
>>> Finally, that leaves JAXB. For 95% of the use cases, we can use the in-jdk
>>> version EXCEPT for the 2.1 vs 2.2 issue in the generated code. Again, we use
>>> the 2.2 xjc to generate code and thus it CAN generate code that will result in
>>> illegal annotation exceptions when run on 2.1. Again, the -fe jaxws21 flag
>>> works around that and the maven java6/7 things mentioned above come into play.
>>> The DynamicClient, however, requires the non-inJDK version as we have to call
>>> off to com.sun.xml.bind classes and such directly which have different
>>> packages in the JDK (.internal added) and it was way too much reflection
>>> needed to do it reflectively. CXF does have a few pathways where we try the
>>> ".internal" package names if the non-.internal versions aren't found
>>> specifically so it can work with the in-jdk version. It was just way too
>>> much to do for the DynamicClient however. (and the command line tools, but I
>>> assume we're not talking about running them in OSGi right now)
>>>
>>> However, the major issue I have with JAXB is the sheer number of bugs we run
>>> into with it. CXF users are constantly reporting bugs to us that, when
>>> debugged, turn out to be issues in JAXB. This last year has actually been
>>> good in that Oracle has actually fixed many of the bugs we reported, but ONLY
>>> in the 2.2 branch. Thus, to get the stable version with the bugs fixed, we
>>> need the 2.2. implementation. Unfortunately, you CANNOT use the 2.2
>>> implementation with the 2.1 API jar. The 2.2 implementation always looks for
>>> specific attributes on a couple annotations that only exist in 2.2 and will
>>> throw exceptions if the annotation is from 2.1 and doesn't contain them.
>>> However, I believe you can use a 2.2 API with the in -jdk 2.1 impl. Not 100%
>>> sure though.
>>>
>>> The other issue with JAXB related to the bugs is it's HARD to figure out what
>>> bugs you may encounter when using the in JDK version. They periodically
>>> update the in-jdk version with the JDK updates and various versions provide
>>> fixes, but also introduce new issues. I think JDK update 18 broke a few of
>>> CXF unit tests when using the in-JDK version. Update 23 fixed that. Update
>>> 27 introduced a never cleared thread local that causes jar locking (and thus
>>> memory leaks). I don't believe that has been fixed yet in any JDK other than
>>> the latest Java 7 (again, fixes go to 2.2 branch, rarely to 2.1). Also,
>>> flipping between the Sun/Oracle JDK (example, developer boxes) to IBM JDK's
>>> (example: deploy on AIX) can really cause differences in behavior.
>>>
>>
>> Thx for those detailed explanations ... That really helps
>> understanding the problem.
>>
>>>
>>> Unlike Guillaume, I've NEVER had an enterprise customer ask to use the in JDK
>>> versions. I've never seen anyone really request it, even on the lists here.
>>> In every case, they've valued performance and stability over the use of the
>>> in-JDK versions. By making CXF really prefer the "stable" versions, it not
>>> only provides a better experience for CXF users, but also reduces the support
>>> burden on the CXF lists as people are less likely to hit issues. That's
>>> important to me. If the Activator based stuff that Guillaume is working on
>>> can allow us to use the in-JDK "API" packages, but have them properly load
>>> OSGi based implementations, then that's a great start. If we can also
>>> provide the latest versions of those API's (so we can use the latest versions
>>> of those impls), I'd be happy.
>>>
>>
>> I don't recall having that such a thing. Or that's not really what I
>> meant, so let me rephrase once again.
>>
>> Imho Karaf is not a container dedicated to deploying Camel or CXF,
>> it's a general purpose OSGi container. So there are users that do use
>> Karaf and that don't care about CXF nor Camel. They may be very happy
>> with the JAXB version provided by the JRE and don't use Stax. I
>> don't really see why Karaf would have to provide its own Stax, JAXB,
>> SAAJ or JAXWS version for those users. Of course, they could still
>> use a container which would provide alternative implementations, but
>> it's all about being lightweight. I recall you once considered
>> ServiceMix being heavyweight, though the minimal distribution is now
>> only 10 Mb, where everything (cxf, camel, activemq, jbi) is ready to
>> be installed. It's the same idea as TSF iirc.
>>
>> I hope in the future, other Apache projects may use Karaf as their
>> runtime. DIrectory is an example and afaik they don't really care
>> about the speed of the JRE provided Stax implementation. There's imho
>> no need to include stuff in Karaf that isnt used by Karaf and may not
>> be used by users either. That sounds like a custom distribution to
>> me.
>>
>>>
>>> Anyway, I'm technically on vacation. My wife is giving me dirty looks for
>>> spending an hour doing work. I hope this helps explain things a bit. :-)
>>
>> It does.
>>
>> At the end I think we should provide a custom distribution (I still
>> think ServiceMix minimal could be a good candidate, but even it could
>> be yet another distribution in Karaf too, or even in CXF) that would
>> provide a clean and configured environment for CXF, with the needed
>> specs and implementations.
>>
>> I'll try to create a karaf branch at github to experiment with the new
>> specs behavior, but what I had a few weeks ago was promising. But I'd
>> like that to work well with OBR too.
>>
>>>
>>> Dan
>>>
>>>
>>>
>>>>
>>>> Regards
>>>> JB
>>>>
>>>> On 12/27/2011 03:18 PM, Christian Schneider wrote:
>>>>> Hi JB,
>>>>>
>>>>> I think that should not be a big problem.
>>>>>
>>>>> We currently only support three different runtimes 1.5, 1.6 and 1.7.
So
>>>>> it should not be a lot of work to provide features for them.
>>>>> We could also support the jre version as a kind of switch for feature
>>>>> files. So you can define bundles that will only be installed for certain
>>>>> jre versions.
>>>>> A bit like in maven. So then one feature would be fine for all 3
>>>>> versions.
>>>>>
>>>>> Christian
>>>>>
>>>>> Am 27.12.2011 13:42, schrieb Jean-Baptiste Onofré:
>>>>>> As discussed on IRC, my concern with your solution is about multiple
>>>>>> JRE support: different versions, different providers (IBM, Sun/Oracle)
>>>>>> etc.
>>>>>>
>>>>>> It's really painful to create fragments bundle per JRE, and know
which
>>>>>> one to deploy.
>>>>>>
>>>>>> Regards
>>>>>> JB
>>>>>>
>>>>>> On 12/26/2011 09:09 PM, Christian Schneider wrote:
>>>>>>> I am +1 for not exporting the packages by default in the system
>>>>>>> bundle.
>>>>>>>
>>>>>>> I also have an idea how we could create an environment for people
>>>>>>> who do not want the improved bundles.
>>>>>>>
>>>>>>> I propose that we provide two (sets of) features:
>>>>>>>
>>>>>>> 1. jaxb, stax, ... from jre
>>>>>>> These are just fragment bundles to the system bundle that export
the
>>>>>>> packages. So by installing these bundles
>>>>>>> you get the current behaviour of karaf
>>>>>>>
>>>>>>> 2. improved jaxb, stax, .. like used for servicemix, cxf, camel
>>>>>>> These will make cxf and camel behave like expected
>>>>>>>
>>>>>>> The reason why I prefer this aproach over the current setup of
>>>>>>> simply
>>>>>>> exporting the packages from the system bundle is that it makes
>>>>>>> installing cxf and camel much easier and at the same time also
>>>>>>> allows
>>>>>>> people to use "pure OSGi" like Guillaume wrote.
>>>>>>>
>>>>>>> Christian
>>>>>>>
>>>>>>> Am 26.12.2011 16:04, schrieb Jean-Baptiste Onofré:
>>>>>>>> Hi all,
>>>>>>>>
>>>>>>>> We have currently an issue in Camel and CXF with the default
>>>>>>>> jre.properties and some exported packages (like JAXB, etc).
>>>>>>>>
>>>>>>>> Currently, by default, the jre.properties exports all packages
>>>>>>>> from
>>>>>>>> the JRE.
>>>>>>>>
>>>>>>>> I would like to propose a new approach:
>>>>>>>> 1/ remove packages with problem by default from the jre.properties
>>>>>>>> 2/ add a set of Karaf features (in bootFeatures by default)
to
>>>>>>>> install
>>>>>>>> bundles providing the packages (JAXB, etc)
>>>>>>>>
>>>>>>>> It's a quick workaround for next Karaf 2.2.6 and Karaf 3.0.
>>>>>>>>
>>>>>>>> We can find a more elegant solution. I have some solutions
in
>>>>>>>> mind:
>>>>>>>> - new properties in the jre.properties to define an "override"
>>>>>>>> flag
>>>>>>>> - add a KARAF-INF/* files to define some behaviors (like
>>>>>>>> overriding
>>>>>>>> system packages)
>>>>>>>>
>>>>>>>> Feel free to propose your ideas for this problem.
>>>>>>>>
>>>>>>>> Please:
>>>>>>>> [ ] +1 to remove the packages from the jre.properties and
provide
>>>>>>>> a
>>>>>>>> set of Spec/API features in Karaf
>>>>>>>> [ ] 0
>>>>>>>> [ ] -1 for that (please provide arguments)
>>>>>>>> Ideas (if you have ;)):
>>>>>>>>
>>>>>>>> Thanks
>>>>>>>> Regards
>>>>>>>> JB
>>> --
>>> Daniel Kulp
>>> [email protected] -
>>> Talend Community Coder -
>>
>>
>>
>> --
>> ------------------------
>> Guillaume Nodet
>> ------------------------
>> ------------------------
>> Open Source SOA
>>
>
>
>
--
Jean-Baptiste Onofré
[email protected]
Talend - | http://mail-archives.us.apache.org/mod_mbox/karaf-dev/201201.mbox/%[email protected]%3E | CC-MAIN-2020-29 | refinedweb | 2,531 | 74.69 |
The QCustomEvent class provides support for custom events. More...
#include <QCustomEvent>
This class is part of the Qt 3 support library. It is provided to keep old source code working. We strongly advise against using it in new code. See Porting to Qt 4 for more information.
Inherits QEvent.
The QCustomEvent class provides support for custom events.
QCustomEvent has a void * that can be used to store custom data.
In Qt 3, QObject::customEvent() took a QCustomEvent pointer. We found out that this approach was unsatisfactory, because there was often no safe way of deleting the data held in the void *.
In Qt 4, QObject::customEvent() takes a plain QEvent pointer. You can add custom data by subclassing.
See also QObject::customEvent() and QCoreApplication::notify(). | http://doc.trolltech.com/4.5-snapshot/qcustomevent.html | crawl-003 | refinedweb | 125 | 69.68 |
Windows Containers in Azure Kubernetes Services
Olivier Miossec
・6 min read
Kubernetes, or K8s, is the most popular tools to orchestrate containers. Kubernetes is widely available, you can deploy it on-premises or on any Cloud providers or use a managed service like AKS in Azure or Google Cloud.
But what happens when you want to orchestrate Windows Containers. There are many reasons to work with Windows Containers, libraries or languages are only available on Windows, you have too many dependencies or you may not have the knowledge or the time to use Linux.
Windows containers are here. They are stable and run with dockers. You can publish your app, build your solutions with it and use it like Linux containers. But can you use them in Kubernetes?
Yes, it's possible to use them in Azure with AKS or in Google cloud GKE.
How does it work in Azure AKS?
Kubernetes core services, API Engine, DNS, … still need to run on Linux. Every Kubernetes cluster, including those with Windows Containers, need at least one Linux node to run core services. You can add Windows Server to run containers, but the first node needs to be a Linux VM.
Windows Containers feature in AKS is in preview. Before starting to deploy AKS you will need to configure your workstation and your subscription.
First, be sure to use the latest version of AZURE CLI. The 2.0.76 version is required to run the Windows Container feature.
On windows, you can install the latest MSI or use Chocolatey to manage and update the installation.
On Linux is you are on Debian like distro you can manage the update by APT.
You will need also to add the AKS-Preview extension
az extension add --name aks-preview
And you should try to update to be sure to have the latest version.
az extension update --name aks-preview
If you never managed an AKS cluster, install the Kubernetes command-line tools (kubectl)
az aks install-cli
After that, you will need to update your subscription to register the Windows Container feature. This action changes the behavior of Azure for any new AKS cluster and not only those you want to use with Windows Containers. In other terms, any new cluster you deploy will be in preview mode, without any SLA from Microsoft. Do not choose a subscription with production clusters.
To perform this update.
az feature register --name WindowsPreview --namespace Microsoft.ContainerService
You will have to wait a few minutes, enough to go out and grab a coffee or a pizza. To check the result you can use this command.
az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/WindowsPreview')].{Name:name,State:properties.state}"
Finally, after the registration is completed, you can register the new Microsoft.ContainerService
az provider register --namespace Microsoft.ContainerService
The AKS cluster configuration is slightly different than normal AKS cluster. The Windows Preview enables multi-node pools. You can have multiple kinds of VM in different pools. Each pool can have different VM Size and OS version. You need tp have a pool for the control plane with small VMs, a pool with bigger VMs on Linux and a pool with Windows 2019 VMs for Windows containers. The Windows Preview also enables a machine scale set for nodes.
There is another important change. If with a classic AKS cluster you have the choice between Azure CNI and KubeNet for the network plugin, the Windows preview limits the choice to Azure CNI only. Azure CNI is a little more complex than KubeNet.
Pools in the same cluster must share the same subnet and VNET and pool names have different requirements for Windows and Linux; 12 characters limit for Linux and 6 for Windows.
Knowing that we can deploy the cluster with the first Linux pool for the control plane.
WINPASS="YourRealPassWord" az aks create \ --resource-group omc-lab-akswin \ --name akswin01 \ --node-count 2 \ --kubernetes-version 1.15.7 \ --generate-ssh-keys \ --windows-admin-password $WINPASS \ --windows-admin-username adminomc \ --vm-set-type VirtualMachineScaleSets \ --load-balancer-sku standard \ --network-plugin azure \ --node-vm-size Standard_B2ms \ --node-osdisk-size 80 \ --nodepool-name coreaks \ --docker-bridge-address 172.24.0.1/16 \ --dns-service-ip 10.10.0.15 \ --service-cidr 10.10.0.0/24 \ --dns-name-prefix omc-aks-windows \ --tags 'env=lab' 'app=Aks Windows'
A password is needed for Windows, even if the first action is to deploy the control plane in the Linux pool named coreaks. The password must comply with the Windows 2019 default password policy.
The command deploys a Resource Group, a public IP and a VNET. You can also use your own VNET by providing the subnet ID with the --vnet-subnet-id parameter. Be sure to have enough available IP.
If you connect to the cluster and list pods, you will not find any Windows pods.
kubectl get nodes aks-coreaks-17338944-vmss000000 Ready agent 18m v1.15.7 aks-coreaks-17338944-vmss000001 Ready agent 18m v1.15.7
You need to add them
az aks nodepool add \ --resource-group omc-lab-akswin \ --cluster-name akswin01 \ --os-type Windows \ --name win01 \ --node-vm-size Standard_B2ms \ --node-count 1 \ --kubernetes-version 1.15.7
Be careful with the pool name, the maximum number of characters is 6.
Now you can see the Windows pools
kubectl get nodes NAME STATUS ROLES AGE VERSION aks-coreaks-17338944-vmss000000 Ready agent 30m v1.15.7 aks-coreaks-17338944-vmss000001 Ready agent 30m v1.15.7 akswin01000000 Ready agent 2m17s v1.15.7
From the Azure Portal, you should be able to see theses two node pools in the Setting of your Aks cluster, coreaks and win01.
Before deploying containers in the cluster, we need to create a namespace. It’s not a requirement and you can deploy whatever you want without a namespace (in the default namespace in fact). But namespace helps to organize resources and projects in a Kubernetes cluster.
kubectl create namespace wintest
Deploying Windows Applications
I have a simple windows container, a web site with IIS. Here's the Dockerfile
FROM microsoft/iis COPY /site/ /inetpub/wwwroot
To deploy the container to the AKS cluster we need to put it in a registry, public or private. Azure provides a private registry service, Azure Container Registry.
To use it, login to your subscription using AZ Cli then login to the registry.
az acr login --name $acrname
Then tag the container
docker tag webpage xxxx.azurecr.io/samples/webpage
and push it to the registry
docker push xxxx.azurecr.io/samples/webpage
But if we need to login to the registry, how to manage login from the AKS cluster? During the cluster installation, the system created a Service Principal to manage VMs and networks in the AKS resource group.
It’s possible to extract the Client ID using the command line to use it to manage rights in the containers registry.
$AKSClientID=az aks show --resource-group $rgname --name $AksClusterName --query "servicePrincipalProfile.clientId" --output tsv
We also need to get the Resource ID of the container registry
$AcrID= az acr show --name $acrname --resource-group $rgname --query "id" --output tsv
You can now manage permission on the container registry, pull and read are needed.
az role assignment create --assignee $AKSClientID --role acrpull --scope $AcrID az role assignment create --assignee $AKSClientID --role reader --scope $AcrID
You can start to deploy applications and services in the cluster. For the webpage containers we need to create a deployment, how the container should run (number of pods, port, limit, …) and service (how can we access the application?).
To be sure to run this application on a Windows server you only need to use nodeSelector property in the Deployment specification.
apiVersion: apps/v1beta1 kind: Deployment metadata: name: webiis namespace: wintest labels: app: webiis spec: replicas: 1 template: metadata: name: webiis labels: app: webiis spec: nodeSelector: "beta.kubernetes.io/os": windows containers: - name: webiis image: xxxxxx.azurecr.io/samples/webpage:latest ports: - containerPort: 80 resources: limits: cpu: 1 memory: 800M requests: cpu: .1 memory: 300M selector: matchLabels: app: webiis --- apiVersion: v1 kind: Service metadata: name: webiis spec: type: LoadBalancer ports: - protocol: TCP port: 80 selector: app: webiis
to apply
kubectl apply -n wintest -f webapp.yaml
It should take a few minutes for the pod to be ready. You can monitor the pod with the get pods command.
kubectl get pods -n wintest
To monitor the service, you can use the get services command. It will be necessary to get the load balancer IP.
kubectl get service -n wintest
As you can see, deploying Windows Application on Kubernetes is almost the same thing as deploying Linux applications. There is only one parameter to add to your deployments, the nodeSelector. Windows and Linux applications can coexist in the same cluster.
Thanks for stopping by ❤️
🎩 JavaScript Enhanced Scss mixins! 🎩 concepts explained
In the next post we are going to explore CSS @apply to supercharge what we talk about here....
Network Policies are not supported in windows containers and they seem to be essential when it comes to pod to pod interactions within a cluster. Did you have any requirement to restrict the communications among pods? If yes what do you suggest to achieve that? | https://dev.to/omiossec/windows-containers-in-azure-kubernetes-services-1i0c | CC-MAIN-2020-24 | refinedweb | 1,542 | 56.05 |
API, sd=1) obs = pm.Normal('obs', mu=mu, sd=1, observed=np.random.randn(100))
In [4]:
model.basic_RVs
Out[4]:
[mu, obs]
In [5]:
model.free_RVs
Out[5]:
[mu]
In [6]:
model.observed_RVs
Out[6]:
[obs]
In [7]:
model.logp({'mu': 0})
Out[7]:
array(-154.30312285)})
88.8 ms ± 2.49 ms per loop (mean ± std. dev. of 7 runs, 10 loops each) 29.6 µs ± 1.44 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
2. Probability Distributions¶
Every probabilistic program consists of observed and unobserved Random Variables (RVs). Observed RVs are defined via likelihood distributions, while unobserved RVs are defined via prior distributions. In PyMC3, probability distributions are available from the main module space:
In [9]:
help(pm.Normal))
In the PyMC3 module, the structure for probability distributions looks like this:
pymc3.distributions - continuous - discrete - timeseries - mixture
In [10]:
dir(pm.distributions.mixture)
Out[10]:
['Discrete', 'Distribution', 'Mixture', 'Normal', 'NormalMixture', '__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__spec__', 'all_discrete', 'bound', 'draw_values', 'generate_samples', 'get_tau_sd', 'get_variable_name', 'logsumexp', 'np', 'tt']
Unobserved Random Variables¶
Every unobserved RV has the following calling signature: name (str), parameter keyword arguments. Thus, a normal prior can be defined in a model context like this:
In [11]:
with pm.Model(): x = pm.Normal('x', mu=0, sd [13]:
with pm.Model(): obs = pm.Normal('x', mu=0, sd=1, observed=np.random.randn(100))
observed supports lists,
numpy.ndarray,
theano and
pandas data structures.
Deterministic transforms¶
PyMC3 allows you to freely do algebra with RVs in all kinds of ways:
In [14]:
with pm.Model(): x = pm.Normal('x', mu=0, sd=1) y = pm.Gamma('y', alpha=1, beta=1) plus_2 = x + 2 summed = x + y squared = x**2 sined = pm.math.sin(x)
While these transformations work seamlessly, their results are not
stored automatically. Thus, if you want to keep track of a transformed
variable, you have to use
pm.Deterministic:
In [15]:
with pm.Model(): x = pm.Normal('x', mu=0, sd=1) plus_2 = pm.Deterministic('x plus 2', x + 2)
Note that
plus_2 can be used in the identical way to above, we only
tell PyMC3 to keep track of this RV for us.
Automatic transforms of bounded RVs¶
In order to sample models more efficiently, PyMC3 automatically transforms bounded RVs to be unbounded.
In [16]:
with pm.Model() as model: x = pm.Uniform('x', lower=0, upper=1)
When we look at the RVs of the model, we would expect to find
x
there, however:
In [17]:
model.free_RVs
Out[17]:
[x_interval__]
x_interval__ represents
x transformed to accept parameter values
between -inf and +inf. In the case of an upper and a lower bound, a
LogOdds transform is applied. Sampling in this transformed space
makes it easier for the sampler. PyMC3 also keeps track of the
non-transformed, bounded parameters. These are common determinstics (see
above): a different transformation x2 = pm.Gamma('x2', alpha=1, beta=1, transform=tr.log_exp_m1) print('The default transformation of x1 is: ' + x1.transformation.name) print('The user specified transformation of x2 is: ' + x2.transformation.name)
The default transformation of x1 is: log The user specified transformation of x2 is: log_exp_m1
Transformed distributions and changes of variables¶
PyMC3 does not provide explicit functionality to transform one
distribution to another. Instead, a dedicated distribution is usually
created in consideration of optimising performance. However, users can
still create transformed distribution by passing the inverse
transformation to
transform kwarg. Take the classical textbook
example of LogNormal: \(log(y) \sim \text{Normal}(\mu, \sigma)\)
In [21]:
class Exp(tr.ElemwiseTransform): name = "exp" def backward(self, x): return tt.log(x) def forward(self, x): return tt.exp(x) def jacobian_det(self, x): return -tt.log(x) with pm.Model() as model: x1 = pm.Normal('x1', 0., is Lognormal distributed.
| Using similar approach, we can create ordered RVs following some
distribution. For example, we can combine the
ordered transformation
and
logodds transformation using
Chain to (2 chains in 2 jobs) NUTS: [x] There were 74 divergences after tuning. Increase `target_accept` or reparameterize. There were 7, sd=1) for i in range(10)] # bad
However, even though this works it is quite slow and not recommended.
Instead, use the
shape kwarg:
In [24]:
with pm.Model() as model: x = pm.Normal('x', mu=0, sd=1, shape=10) # good
x is now a random vector of length 10. We can index into it or do
linear algebra operations on it:
In [25]:
with model: y = x[0] * x[1] # full indexing is supported x.dot(x.T) # Linear algebra is supported
Initialization with test_values¶
While PyMC3 tries to automatically initialize models it is sometimes
helpful to define initial values for RVs. This can be done via the
testval kwarg:
In [26]:
with pm.Model(): x = pm.Normal('x', mu=0, sd=1, shape=5) x.tag.test_value
Out[26]:
array([0., 0., 0., 0., 0.])
In [27]:
with pm.Model(): x = pm.Normal('x', mu=0, sd=1, shape=5, testval=np.random.randn(5)) x.tag.test_value
Out[27]:
array([-0.64480095, -1.04717266, -0.37850385, 0.77916362, -0.26477045])
This technique is quite useful to identify problems with model specification or initialization.
3. Inference¶
Once we have defined our model, we have to perform inference to approximate the posterior distribution. PyMC3 supports two broad classes of inference: sampling and variational inference.
3.1 Sampling¶
The main entry point to MCMC sampling algorithms is via the
pm.sample() function. By default, this function tries to auto-assign
the right sampler(s) and auto-initialize if you don’t pass anything.
In [28]:
with pm.Model() as model: mu = pm.Normal('mu', mu=0, sd=1) obs = pm.Normal('obs', mu=mu, sd=1, observed=np.random.randn(100)) trace = pm.sample(1000, tune=500)
Auto-assigning NUTS sampler... Initializing NUTS using jitter+adapt_diag... Multiprocess sampling (2 chains in 2 jobs) NUTS: [mu] 100%|██████████| 1500/1500 [00:01<00:00, 1279.00it, sd=1) obs = pm.Normal('obs', mu=mu, sd=1, observed=np.random.randn(100)) trace = pm.sample(cores=4)
Auto-assigning NUTS sampler... Initializing NUTS using jitter+adapt_diag... Multiprocess sampling (4 chains in 4 jobs) NUTS: [mu] 100%|██████████| 1000/1000 [00:01<00:00, 579.21it/s] The acceptance probability does not match the target. It is 0.8912043362192583, but should be close to 0.8. Try to increase the number of tuning steps. The acceptance probability does not match the target. It is 0.8891559864732794,', 'CSG', ', 'SGFS', 'SMC', 'Slice']
Commonly used step-methods besides NUTS are
Metropolis and
Slice. For almost all continuous models, ``NUTS`` should be
preferred. There are hard-to-sample models for which
NUTS will be
very slow causing many users to use
Metropolis instead. This
practice, however, is rarely successful. NUTS is fast on simple models
but can be slow if the model is very complex or it is badly initialized.
In the case of a complex model that is hard for NUTS, Metropolis, while
faster, will have a very low effective sample size or not converge
properly at all. A better approach is to instead try to improve
initialization of NUTS, or reparameterize the model.
For completeness, other sampling methods can be passed to sample:
In [35]:
with pm.Model() as model: mu = pm.Normal('mu', mu=0, sd=1) obs = pm.Normal('obs', mu=mu, sd=1, observed=np.random.randn(100)) step = pm.Metropolis() trace = pm.sample(1000, step=step)
Multiprocess sampling (2 chains in 2 jobs) Metropolis: [mu] 100%|██████████| 1500/1500 [00:00<00:00, 5483.44it/s] The number of effective samples is smaller than 25% for some parameters.
You can also assign variables to different step methods.
In [36]:
with pm.Model() as model: mu = pm.Normal('mu', mu=0, sd=1) sd = pm.HalfNormal('sd', sd=1) obs = pm.Normal('obs', mu=mu, sd=sd, observed=np.random.randn(100)) step1 = pm.Metropolis(vars=[mu]) step2 = pm.Slice(vars=[sd]) trace = pm.sample(10000, step=[step1, step2], cores=4)
Multiprocess sampling (4 chains in 4 jobs) CompoundStep >Metropolis: [mu] >Slice: [sd] 100%|██████████| 10500/10500 [00:20<00:00, 504.15it/s] The number of effective samples is smaller than 25% for some parameters.
3.2 Analyze sampling results¶
The most common used plot to analyze sampling results is the so-called trace-plot:
In [37]:
pm.traceplot(trace);
Another common metric to look at is R-hat, also known as the Gelman-Rubin statistic:
In [38]:
pm.gelman_rubin(trace)
Out[38]:
{'mu': 1.0005205177587686, 'sd': 0.9999544815961842}
These are also part of the
forestplot:
In [39]:
pm.forestplot(trace);, sd=1, shape=100) trace = pm.sample(cores=4) pm.energyplot(trace);
Auto-assigning NUTS sampler... Initializing NUTS using jitter+adapt_diag... Multiprocess sampling (4 chains in 4 jobs) NUTS: [x] 100%|██████████| 1000/1000 [00:05<00:00, 171.82it/s]
For more information on sampler stats and the energy plot, see here. For more information on identifying sampling problems and what to do about them, see here.
3.3 Variational inference¶
PyMC3 supports various Variational Inference techniques. While these
methods are much faster, they are often also less accurate and can lead
to biased inference. The main entry point is
pymc3.fit().
In [42]:
with pm.Model() as model: mu = pm.Normal('mu', mu=0, sd=1) sd = pm.HalfNormal('sd', sd=1) obs = pm.Normal('obs', mu=mu, sd=sd, observed=np.random.randn(100)) approx = pm.fit()
Average Loss = 148.77: 100%|██████████| 10000/10000 [00:01<00:00, 9024.38it/s] Finished [100%]: Average Loss = 148.76')
Average Loss = 0.0068883: 100%|██████████| 10000/10000 [00:09<00:00, 1069.48:06<00:00, 1451.91it/s] Finished [100%]: Average Loss = 0.011343
In [46]:
plt.figure() trace = approx.sample(10000) sns.kdeplot(trace['x'][:, 0], trace['x'][:, 1]);
Stein Variational Gradient Descent (SVGD) uses particles to estimate the posterior:
In [47]:
w = pm.floatX([.2, .8]) mu = pm.floatX([-.3, .5]) sd = pm.floatX([.1, .1]) with pm.Model() as model: pm.NormalMixture('x', w=w, mu=mu, sd=sd) approx = pm.fit(method=pm.SVGD(n_particles=200, jitter=1.))
100%|██████████| 10000/10000 [01:26<00:00, 115.56, sd=1) sd = pm.HalfNormal('sd', sd=1) obs = pm.Normal('obs', mu=mu, sd=sd, observed=data) trace = pm.sample()
Auto-assigning NUTS sampler... Initializing NUTS using jitter+adapt_diag... Multiprocess sampling (2 chains in 2 jobs) NUTS: [sd, mu] 100%|██████████| 1000/1000 [00:01<00:00, 743.16it/s]
In [50]:
with model: post_pred = pm.sample_posterior_predictive(trace, samples=500, size=len(data))
100%|██████████| 500/500 [00:00<00:00, 1611.43it/s]
sample_posterior_predictive() returns a dict with a key for every
observed node:
In [51]:
post_pred['obs'].shape
Out[51]:
(500, 100)
In [52]:
plt.figure() ax = sns.distplot(post_pred['obs'].mean(axis=1), label='Posterior predictive means') ax.axvline(data.mean(), color='r', ls='--', whose values can be changed later. Otherwise they can be
passed into PyMC3 just like any other numpy array or tensor.
This distinction is significant since internally all models in PyMC3 are
giant symbolic expressions. When you pass data directly into a model,
you are giving Theano permission to treat this data as a constant and
optimize it away as it sees fit. If you need to change this data later
you might not have a way to point at it in the symbolic expression.
Using
theano.shared offers a way to point to a place in that
symbolic expression, and change what is there.
In [53]:
import theano x = np.random.randn(100) y = x > 0 x_shared = theano.shared(x) y_shared = theano.shared(y) with pm.Model() as model: coeff = pm.Normal('x', mu=0, sd=1) logistic = pm.math.sigmoid(coeff * x_shared) pm.Bernoulli('obs', p=logistic, observed=y_shared) trace = pm.sample()
Auto-assigning NUTS sampler... Initializing NUTS using jitter+adapt_diag... Multiprocess sampling (2 chains in 2 jobs) NUTS: [x] 100%|██████████| 1000/1000 [00:01<00:00, 902.91it/s]
Now assume we want to predict on unseen data. For this we have to change
the values of
x_shared and
y_shared. Theoretically we don’t need
to set
y_shared as we want to predict it but it has to match the
shape of
x_shared.
In [54]:
x_shared.set_value([-1, 0, 1.]) y_shared.set_value([0, 0, 0]) # dummy values with model: post_pred = pm.sample_posterior_predictive(trace, samples=500)
100%|██████████| 500/500 [00:00<00:00, 1704.02it/s]
In [55]:
post_pred['obs'].mean(axis=0)
Out[55]:
array([0.02 , 0.488, 0.97 ]) | https://docs.pymc.io/notebooks/api_quickstart.html | CC-MAIN-2018-47 | refinedweb | 2,089 | 54.69 |
13 January 2011 10:13 [Source: ICIS news]
(adds more downstream details in paragraphs 20,21)
By Felicia Loo and Helen Lee
?xml:namespace>
“It was an unplanned shutdown. There is a fire. It wasn't an explosion,” one source said.
Prior to the outage, the cracker operated at 100%, sources said.
A YNCC spokesperson said its 578,000 tonne/year No 2 cracker was shut at about 16:00 hours South
The other two crackers are operating normally at above 100% capacity, the spokesperson added.
A company official said the cracker outage might take a long time as the furnace was badly affected.
“The situation is getting worse because the mechanisms in the furnace are destroyed and we will need a long time to repair,” he said, adding that all the furnaces were shut down.
He added that there were no injuries but their ethylene tank level was very low and one or two term cargoes for domestic customers might have to be delayed or cancelled.
“The cracker was running at around 1,800 tonnes per day and our tank level is very low therefore we have to delay or cancel one or two term cargoes,” he said. “We don’t know how long the delay might be.”
YNCC, which operates an 857,000 tonne/year cracker and a 465,000 tonne/year cracker on the same site, bought 100,000 tonnes of naphtha for second-half February delivery on Wednesday.
Also going downstream, there were concerns about YNCC’s C3 supply.
Downstream Polymirae's No 4 polypropylene (PP) plant with a 190,000 tonne/year capacity, which receives propylene feedstock from YNCC’s No 2 cracker, was running normally, a Polymirae source said.
"Our No 4 PP plant is still running. We’re checking if YNCC can continue supplying propylene to our plant from its storage,” the Polymirae source added.
Similarly, production at YNCC’s No 2 Yeosu-based aromatics unit was cut back to 75% due to the shutdown of the cracker, said a company source.
YNCC was likely to maintain the operating rate at 75% for this unit in the next few days, he added.
The No 2 facility can produce 120,000 tonnes/year of benzene, 60,000 tonnes/year of toluene and 40,000 tonnes/year of solvent grade xylene.
YNCC was operating its two other aromatics units at 100% at present, the source added.
The No 1 unit at Yeosu can produce 140,000 tonnes/year of benzene, 80,000 tonnes/year of toluene and 50,000 tonnes/year of solvent grade xylene.
The No 3 unit is able to produce 120,000 tonnes/year of benzene, 60,000 tonnes/year of toluene and 40,000 tonnes/year of solvent grade xylene.
Market sources put the loss of propylene at 500 tonnes and said they heard the cracker might be resumed next morning but a company source said they were undecided on the restart.
YNCC’s 170,000 tonne/year MTBE plant was not affected by the fire and was currently running well, sources said.
Asia’s naphtha crack spread would face further downward pressure following the outage of the YNCC cracker, after the spread slumped to a fresh three-month low of $126.60/tonne on Thursday due to voluminous deep-sea imports from the West, traders said.
Additional reporting by Chow Bee Lin, Mahua Chakravarty and Heng Hui
( | http://www.icis.com/Articles/2011/01/13/9425496/south-koreas-yncc-shuts-no-2-naphtha-cracker-after.html | CC-MAIN-2015-22 | refinedweb | 567 | 68.91 |
Before we dig into estimation techniques, let's pick an example system to keep in mind. And here it is:
Imagine we have a rigid pendulum (a steel weight mounted on a thin steel rod attached to a bearing) subject to several forces: gravity, friction in the bearing, the pull of a pair of permanent magnets mounted near the top and bottom of the swing, and short impulses from electromagnets mounted slightly off center near the top and bottom.
The electromagnets are triggered electronically, and impart a very short and very strong impulse to the pendulum; there are sensors in the base that accumulate the amount of time the pendulum stays at the top or bottom of the swing, and when the accumulators reach a threshold, they reset and trigger the electromagnet pulses.
Gravity and friction are easy to model. We talked about that last time; for a rigid pendulum with none of this magnet funny stuff, with pendulum length \( L \) and viscous damping coefficient \( B \), the equation of motion is

$$\frac{d^2\theta}{dt^2} = -\frac{g}{L}\sin\theta - B\frac{d\theta}{dt}$$
The permanent magnets cause a lot of attractive force when the pendulum is close by, but barely any when it gets further away, so the oscillation frequency gets larger at the bottom of the swing. The electromagnets give the pendulum a kick to keep it from settling at the top or bottom; the reason they're mounted slightly off center is to make them more efficient at exerting torque on the pendulum. (If they were mounted exactly at top or bottom, and they triggered when the pendulum was exactly aligned, then there would be no torque exerted, just a pull downwards countered by the tension in the rod. Being off center changes the direction of force to have a sideways component.)
We won't be using this pendulum example quantitatively today, but keep it in the back of your mind — it's an example where most of the time the position changes very predictably, but sometimes it changes rapidly and may be hard to track.
We're going to measure position using an incremental encoder. Let's think about this a bit carefully. Given position readings, what kind of information do we know, and what kind of mistakes can we make?
If you get into hard-core estimation theory, you'll deal with all sorts of matrix equations to deal with multiple variables and the cross-correlation of their probability distributions. There's terms like covariance matrices and Gram-Schmidt Orthogonalization and the Fisher Information Matrix and the Cramer-Rao Bound. I once understood these really well for a few weeks while taking a class in college. Alas, that light bulb has dimmed for me....
Anyway, don't worry about trying to understand this stuff. One major take-away from it is that the concepts of standard deviation (how much variation there is in a measurement due to randomness) and information are related. The Cramer-Rao bound basically says that information and variance (the square of standard deviation) are inversely related: the more variance your measurements have, the less information you have.
For Gaussian distributions, the relationships are exact, and if I combine unbiased measurements of the same underlying parameter M in an optimal manner, the amount of information accumulates linearly. So if I have a measurement M1 of that parameter with a variance of 3.0 (information = 1/3), and an independent, uncorrelated measurement M2 of the same parameter with a variance of 6.0 (information = 1/6), then the optimal combination of the two will yield a combined measurement \( \hat{M} \) with a variance of 2.0 (information = 1/2 = 1/3 + 1/6). It turns out that this optimal combination happens to be \( \hat{M} = \frac{2}{3}M_1 + \frac{1}{3}M_2 \). This is a weighted linear sum of the individual measurements: the weights always sum to 1.0, and the weights have the same proportion as each measurement's amount of information.
So one central concept of estimation is that you try to make optimal use of the information you have. If I combine measurements in another way, I will have less information. (With the M example before, if I make the naive estimate \( \hat{M} = \frac{1}{2}M_1 + \frac{1}{2}M_2 \), then the resulting variance happens to be \( (\frac{1}{2})^2 \times 3.0 + (\frac{1}{2})^2 \times 6.0 = 2.25 \), which is larger, so my estimate is slightly less likely to be as accurate.)
Back to our pendulum + encoder example, what do we know, and what kind of errors do we have?
We know the basic equations of the pendulum's motion; most significantly, knowing the pendulum position at one instant in time means that we are fairly likely to know it at a time shortly thereafter. The position measurements are highly correlated, so we should try to make the best use of all measurements we can take. Same thing with the velocity: as long as the torque on the pendulum isn't too high, knowing the pendulum velocity at one instant means we are fairly likely to know it at a time shortly thereafter.
Here's what errors prevent us from knowing the pendulum position with perfect precision: quantization error from the finite resolution of the encoder, spurious or missed encoder counts caused by electrical noise, uncertainty in exactly when each reading is taken, and imperfect knowledge of the system parameters and disturbance torques.
The best estimators are the ones that take all these effects into account, reduce the net estimation error, and are robust to unexpected mistakes. The idea of robustness is subtle: if you design an estimator that's great when you know the pendulum parameters exactly and only takes one encoder reading per second, it could still give you a very low error, but then someone squirts WD-40 into the pendulum pivot to change the viscous drag and the estimator starts being way off.
What's wrong with the types of estimators we mentioned in Part I? Well, nothing really; they're simple, but they're just not optimal, or even close. If you're taking position measurements 10,000 times a second, and you compute velocity by taking each position measurement and subtracting off the position measured 1 second earlier, then you ignore all the potential information available in those other 9,998 readings between the two.
With that, let's go quickly over a small menagerie of estimator structures.
Adaptive and Kalman filters are best used in cases where the sources of noise or error are "ordinary" — that is, they have a distribution that is somewhat Gaussian in character and uncorrelated with the measurements. Kalman filters were developed for guidance systems in the aerospace industry: things like radar and GPS and trajectory tracking are really good applications. Kalman filters also do very well when the signal-to-noise ratio varies with time, as they can adapt to such a situation. I often read articles that try to apply Kalman filters in sensorless position estimators used in motor control, and it's saddening to see this, since in these applications the errors are more often due to imperfect cancellation of coupling between system states, instead of random noise, and the errors are anything but Gaussian or uncorrelated. Likewise in this encoder application: quantization noise from a position encoder is not really a good match for an adaptive filter or Kalman filter, so I won't discuss it further.
The remaining three structures mentioned here are similar. I'll cover PLLs and tracking loops next, leaving Luenberger observers for Part III. Since tracking loops are fairly general, and PLLs and Luenberger observers are specific types of tracking loops, it makes sense to cover tracking loops first.
Here's a really simple example. Let's say we have a continuous position signal, and when we measure it, we get the position signal along with additive Gaussian white noise:
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0,4,5000)
pos_exact = (np.abs(t-0.5) - np.abs(t-1.5) - np.abs(t-2.5) + np.abs(t-3.5))/2
pos_measured = pos_exact + 0.04*np.random.randn(t.size)

fig = plt.figure(figsize=(8,6),dpi=80)
ax = fig.add_subplot(1,1,1)
ax.plot(t,pos_measured,'.',markersize=2)
ax.plot(t,pos_exact,'k')
ax.set_ylim(-0.5,1.5)
(-0.5, 1.5)
Now, your first reaction might be, "Hey, let's just filter out the noise with a low-pass filter!" That's not such a bad idea, so let's do it:
import scipy.signal

def lpf1(x,alpha):
    '''1-pole low-pass filter with coefficient alpha = 1/tau'''
    return scipy.signal.lfilter([alpha], [1, alpha-1], x)

def rms(e):
    '''root-mean square'''
    return np.sqrt(np.mean(e*e))

def maxabs(e):
    '''max absolute value'''
    return max(np.abs(e))

alphas = [0.2,0.1,0.05,0.02]
estimates = [lpf1(pos_measured, alpha) for alpha in alphas]

fig = plt.figure(figsize=(8,6),dpi=80)
ax = fig.add_subplot(2,1,1)
ax.plot(t,pos_exact,'k')
for y in estimates:
    ax.plot(t,y)
ax.set_ylabel('position')
ax.legend(['exact'] + ['$\\alpha = %.2f$' % alpha for alpha in alphas])

ax = fig.add_subplot(2,1,2)
for alpha,y in zip(alphas,estimates):
    err = y-pos_exact
    ax.plot(t,err)
    ax.set_ylabel('position error')
    print 'alpha=%.2f -> rms error = %.5f, peak error = %.4f' % (alpha, rms(err), maxabs(err))
alpha=0.20 -> rms error = 0.01369, peak error = 0.0470
alpha=0.10 -> rms error = 0.01058, peak error = 0.0366
alpha=0.05 -> rms error = 0.01237, peak error = 0.0348
alpha=0.02 -> rms error = 0.02733, peak error = 0.0498
And here we have the same problem we ran into when we were looking at evaluating algorithms in Part 1.5: there's a tradeoff between noise level and the effects of phase lag and time delay. A simple low-pass filter doesn't do very well tracking ramp waveforms: the time delay causes a DC offset.
With a tracking loop, we try to model the system and drive the steady-state error to zero. Let's model our system as a velocity that varies, and integrate the estimated velocity to get position. The velocity will be the output of a proportional-integral control loop driven by the position error.
def trkloop(x,dt,kp,ki):
    def helper():
        velest = 0
        posest = 0
        velintegrator = 0
        for xmeas in x:
            posest += velest*dt
            poserr = xmeas - posest
            velintegrator += poserr * ki * dt
            velest = poserr * kp + velintegrator
            yield (posest, velest, velintegrator)
    y = np.array([yi for yi in helper()])
    return y[:,0],y[:,1],y[:,2]

[posest,velest,velestfilt] = trkloop(pos_measured,t[1]-t[0],kp=40.0,ki=900.0)

fig = plt.figure(figsize=(8,6),dpi=80)
ax = fig.add_subplot(2,1,1)
ax.plot(t,pos_exact,'k',t,posest)
ax.set_ylabel('position')

ax = fig.add_subplot(2,1,2)
err = posest-pos_exact
ax.plot(t,posest-pos_exact)
ax.set_ylabel('position error')
print 'rms error = %.5f, peak error = %.4f' % (rms(err), maxabs(err))
rms error = 0.00724, peak error = 0.0308
The RMS and peak error here are less than in the 1-pole low-pass filter. Not only that, but in the process, we get an estimate of velocity! We actually get two estimates of velocity. One is the integrator of the PI loop used in the tracking loop, the other is the output of the PI loop. Let's plot these (integrator in blue, PI output in yellow):
fig = plt.figure(figsize=(8,6),dpi=80)
ax = fig.add_subplot(1,1,1)
vel_exact = (t > 0.5) * (t < 1.5) + (-1.0*(t > 2.5) * (t < 3.5))
ax.plot(t,velest,'y',t,velestfilt,'b',t,vel_exact,'r');
The PI output looks horrible; the PI integrator looks okay. (There's quite a bit of noise here, so it's really difficult to get a good output signal.)
Which one is better to use? Well, for display purposes, I'd use the integrator value; it doesn't contain high frequency noise. For input into a feedback loop (like a velocity controller), I might use the PI output directly, since the high-frequency stuff will most likely get filtered out anyway.
So the tracking loop is better than a plain low-pass filter, right?
Well, in reality there's a trick here. This tracking loop is a linear filter, so it can be written as a regular IIR low-pass filter. The thing is, it's a 2nd-order filter, whereas we compared it against a 1st-order low-pass filter, so that's not really fair.
But by writing it as a tracking loop, we get a more physical meaning to filter state variables — and more importantly, if we want to, we can deal with nonlinear system behavior using a tracking loop that includes nonlinear elements.
For those of you interested in the Laplace-domain algebra (for the rest of you, skip to the next section) the estimated position \( \hat{x} \) and estimated velocity \( \hat{v} \) behave like this (quick refresher: \( 1/s \) is the Laplace-domain equivalent of an integrator):
$$\begin{eqnarray} \hat{x}&=&\frac{1}{s}\hat{v}\cr \hat{v}&=&(\frac{k_i}{s} + k_p)(x-\hat{x}) \end{eqnarray}$$
which we can then solve to get
$$\hat{x} = \frac{k_ps + k_i}{s^2}(x-\hat{x}) $$
and then (after a little more algebraic manipulation)
$$\hat{x} = \frac{\frac{k_p}{k_i}s + 1}{\frac{1}{k_i}s^2 + \frac{k_p}{k_i}s + 1}x $$
which is just a low-pass filter with two poles and one zero, whereas the 1-pole low-pass filter is
$$\hat{x} = \frac{1}{\tau s+1}x$$
The error in these systems is
$$\tilde{x} = x-\hat{x} = \frac{\frac{1}{k_i}s^2}{\frac{1}{k_i}s^2 + \frac{k_p}{k_i}s + 1}x $$
and
$$\tilde{x} = \frac{\tau s}{\tau s+1}x$$
If we use the Final Value Theorem, the steady-state error for both of these to a position step input is zero, but the steady-state error for a position ramp input (velocity step input) is nonzero for the 1-pole low-pass filter, whereas it is still zero for the tracking loop. This is because of the zero in the tracking loop's transfer function.
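If you'd rather check that numerically than by algebra, here's a quick sketch using scipy.signal. The kp and ki values are the ones used in trkloop() above; the τ value is just an arbitrary illustration, nothing special:

import numpy as np
import scipy.signal

kp, ki, tau = 40.0, 900.0, 0.05
# error transfer functions derived above
trk_err = scipy.signal.lti([1.0/ki, 0, 0], [1.0/ki, kp/ki, 1.0])   # tracking loop
lpf_err = scipy.signal.lti([tau, 0], [tau, 1.0])                   # 1-pole low-pass

t = np.linspace(0, 2, 2000)
ramp = t    # unit position ramp = velocity step
for name, tf in [('tracking loop', trk_err), ('1-pole LPF', lpf_err)]:
    _, yout, _ = scipy.signal.lsim(tf, U=ramp, T=t)
    print '%-13s: ramp-following error settles to %.4f' % (name, yout[-1])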
Need to track position in case of constant acceleration? Then go ahead and add another integrator... just make sure you analyze the transfer function and add a proportional term to that integrator so the resulting filter is stable.
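If you want to see what that looks like concretely, here's a minimal sketch (my own illustration, not anything canonical): a third-order tracking loop with position, velocity, and acceleration states, with gains chosen by placing all three error poles at \( s = -\omega_0 \). You'd still want to check the discretization and tune \( \omega_0 \) against your own noise levels:

def trkloop3(x, dt, w0):
    # gains for a triple error pole at s = -w0:
    #   s^3 + 3*w0*s^2 + 3*w0^2*s + w0^3
    l1, l2, l3 = 3.0*w0, 3.0*w0**2, w0**3
    posest = 0.0
    velest = 0.0
    accest = 0.0
    out = []
    for xmeas in x:
        poserr = xmeas - posest
        posest += (velest + l1*poserr)*dt
        velest += (accest + l2*poserr)*dt
        accest += l3*poserr*dt
        out.append((posest, velest, accest))
    y = np.array(out)
    return y[:,0], y[:,1], y[:,2]

# e.g.: posest3, velest3, accest3 = trkloop3(pos_measured, t[1]-t[0], w0=30.0)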
Tracking loops are great! There is a special class of tracking loops to handle problems where it is important to lock onto the phase or frequency of a periodic signal. These are called phase-locked loops (go figure!), and they usually consist of the following structure:
The idea is that you have a voltage-controlled oscillator (VCO) creating some output, that goes through a feedback filter, gets compared in phase against the input with a phase detector, and the phase error signal goes through a loop filter before it is used as a control signal for the VCO. The loop filter's input and output are essentially DC signals proportional to output frequency; the other signals in the diagram are periodic signals. The feedback filter is usually just a passthrough (no filter) or a frequency divider. Most microcontrollers these days with PLL-based clocks have a divide-by-N block in the feedback filter, which has the net effect that the output of the PLL multiplies the input frequency by N. This way you can take, for example, an 8 MHz crystal oscillator and turn it into a 128MHz clock signal on the chip: as a result, you don't need to distribute high-frequency clock signals on your printed circuit board, just a low-frequency clock, and it will get multiplied up internal to the microcontroller. At steady-state, the signals in the PLL are sine waves or square waves, except for the VCO input which is a DC voltage; the inputs to the phase detector line up in phase. (Digital PLLs are possible as well, in which case the VCO is replaced by a digitally-controlled oscillator with a digital input representing the control frequency.)
One simple example of a PLL is where the phase detector is a multiplier that acts on sine waves, the loop filter is an integrator, and there is no feedback filter, just a passthrough. In this case you have
$$\begin{eqnarray} V_{in} &=& A \sin \phi_i(t) \cr \phi_i(t) &\approx& \omega t + \phi_{i0}\cr V_{out} &=& B \sin \phi_o(t) \cr V_{pd} &=& V_{in}V_{out} = AB \sin \phi_i(t) \sin \phi_o(t) \cr &=& \frac{AB}{2} (\cos (\phi_i(t) - \phi_o(t)) - \cos(\phi_i(t) + \phi_o(t)))
\end{eqnarray}$$
The phase detector outputs a sum and difference frequency: if the output frequency is about the same, then the sum term \( \cos(\phi_i(t) + \phi_o(t)) \) is about double the input frequency, and the difference term \( \cos(\phi_i(t) - \phi_o(t)) \) is at low frequency. The loop filter is designed to filter out the double-frequency term, and integrate the low-frequency term:
$$\begin{eqnarray} V_{VCO\,in} &\approx& K\sin(\phi_i(t) - \phi_o(t)) + f_0 \end{eqnarray}$$
This will reach equilibrium with constant phase difference between \( \phi_i(t) \) and \( \phi_o(t) \) → the loop locks onto the input phase!
In general, phase-locked loops with sine-wave signals tend to have dynamics that look like this:
$$\begin{eqnarray} \frac{dx}{dt} &=& A\sin \tilde{\phi} = A\sin(\phi_i - \phi_o) \cr \tilde{\omega} &=& -x - B\sin \tilde{\phi} - C\sin 2\omega t\cr \frac{d\tilde{\phi}}{dt} &=& \tilde{\omega} \end{eqnarray}$$
If you're not familiar with these types of equations, your eyes may glaze over. It turns out that they have a very similar structure to the rigid pendulum equations above! (The plain pendulum equations, with only gravity, inertia, and damping — no magnets.) With a good loop filter, the high frequency amplitude C is very small, and we can neglect this term. At low values of phase error \( \tilde{\phi} \), the phase error oscillates with a characteristic frequency and a decaying amplitude. At high values of \( \tilde{\phi} \) there are some weird behaviors, that are similar to that of a pendulum spinning around before it settles down.
t = np.linspace(0,5,10000)

def simpll(tlist,A,B,omega0,phi0):
    def helper():
        phi = phi0
        x = -omega0
        omega = -x - B*np.sin(phi)
        it = iter(tlist)
        tprev = it.next()
        yield(tprev, omega, phi, x)
        for t in it:
            dt = t - tprev
            # Verlet solver:
            phi_mid = phi + omega*dt/2
            x += A*np.sin(phi_mid)*dt
            omega = -x - B*np.sin(phi_mid)
            phi = phi_mid + omega*dt/2
            tprev = t
            yield(tprev, omega, phi, x)
    return np.array([v for v in helper()])

v = simpll(t,A=1800,B=10,omega0=140,phi0=0)
omega = v[:,1]
phi = v[:,2]

fig = plt.figure(figsize=(8,6), dpi=80)
ax = fig.add_subplot(2,1,1)
ax.plot(t,omega)
ax.set_ylabel('$\\tilde{\\omega}$',fontsize=20)
ax = fig.add_subplot(2,1,2)
ax.plot(t,phi/(2*np.pi))
ax.set_ylabel('$\\tilde{\\phi}/2\\pi$ ',fontsize=20)
ax.set_xlabel('t',fontsize=16)
This is typical of the behavior of phase-locked loops: because there is no absolute phase reference, with a large initial frequency error, you can get cycle slips before the loop locks onto the input signal. It is often useful to plot the behavior of phase and frequency error in phase space, rather than as a pair of time-series plots:
fig = plt.figure(figsize=(8,6), dpi=80)
ax = fig.add_subplot(1,1,1)
ax.plot(phi/(2*np.pi),omega)
ax.set_xlabel('phase error (cycles) = $\\tilde{\\phi}/2\\pi$', fontsize=16)
ax.set_ylabel('velocity error (rad/sec) = $\\tilde{\\omega}$', fontsize=16)
ax.grid('on')
We can also try graphing a bunch of trials with different initial conditions:
import math

fig = plt.figure(figsize=(8,6), dpi=80)
ax = fig.add_subplot(1,1,1)
t = np.linspace(0,5,2000)
for i in xrange(-2,2):
    for s in [-2,-1,1,2]:
        omega0 = s*100
        v = simpll(t,A=1800,B=10,omega0=omega0,phi0=(i/2.0)*np.pi)
        omega = v[:,1]
        phi = v[:,2]
        k = math.floor(phi[-1]/(2*np.pi) + 0.5)
        phi -= k*2*np.pi
        for cycrepeat in np.array([-2,-1,0,1,2])+np.sign(s):
            ax.plot(phi/(2*np.pi)+cycrepeat,omega,'k')
ax.set_ylim(-120,120)
ax.set_xlim(-1.5,1.5)
ax.set_xlabel('$\\tilde{\\phi}/2\\pi$ ',fontsize=20)
ax.set_ylabel('$\\tilde{\\omega}$ ',fontsize=20)
Crazy-looking, huh? The trajectory in phase space oscillates until it gets close enough to one of the stable points, and then swirls around with decreasing amplitude.
PLLs should always be tuned properly — there are tradeoffs in choosing the two gains A and B that affect loop bandwidth and damping, and also noise rejection and lock acquisition time. I may cover that in another article, but for now we'll try to keep things at a fairly high level.
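To put some numbers on that tradeoff: linearizing the equations above for small phase error (\( \sin\tilde{\phi} \approx \tilde{\phi} \)) gives \( \ddot{\tilde{\phi}} + B\dot{\tilde{\phi}} + A\tilde{\phi} = 0 \), so \( \sqrt{A} \) is the natural frequency and \( B/(2\sqrt{A}) \) is the damping ratio. A quick check with the values used in simpll() above:

A, B = 1800.0, 10.0
wn = np.sqrt(A)            # natural frequency, rad/sec
zeta = B/(2*np.sqrt(A))    # damping ratio
print 'natural frequency = %.1f rad/sec, damping ratio = %.2f' % (wn, zeta)
# about 42 rad/sec with a damping ratio near 0.12 -- lightly damped,
# which is why the trajectories above ring for a while before settling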
Is a PLL relevant to our encoder example? Well, yes and no.
The "no" answer (PLLs are not relevant) is true if we use a dedicated encoder counter; aside from initial location via an index pulse, the encoder counter will always give us an exact position. We don't need to guess whether the encoder is at position X or position X+1 or position X+2. If we want to smooth out the position and estimate velocity, we can use a regular tracking loop and know with certainty that we will always end up at the right position.
The "yes" answer (PLLs are relevant) is true if we use an encoder counter in a very noisy system. We may get spurious encoder counts that cause us to slip in the 4-count encoder cycle. (00 -> 01 -> 11 -> 10 -> 00) In this case a PLL can be very useful because it will reject high frequency glitches. Alternatively, if we are using position sensors that are more analog in nature (resolvers or analog hall sensors, or sensorless estimators), PLLs are very appropriate, especially if they are a set of analog sensors. Here's why:
Let's look at that good old sine wave again:
t = np.linspace(0,1,1000)
tpts = np.linspace(0,1,5)
f = lambda t: 0.9*np.cos(2*np.pi*t)
''' f(t) = A*cos(omega*t)'''
fderiv = lambda t: -0.9*2*np.pi*np.sin(2*np.pi*t)
''' f'(t) = -A*omega*sin(omega*t)'''
fig = plt.figure(figsize=(8,6),dpi=80); ax=fig.add_subplot(1,1,1)
ax.plot(t,f(t))
phasediff = 6.0/360
plt.plot(t,f(t+phasediff),'gray')
plt.plot(tpts,f(tpts),'b.',markersize=7)
h=plt.plot(tpts,f(tpts+phasediff),'.',color='gray',markersize=7)
for t in tpts:
    slope = fderiv(t)
    a = 0.1
    ax.plot([t-a,t+a],[f(t)-slope*a,f(t)+slope*a],'r-')
ax.grid('on')
ax.set_xlim(0,1)
ax.set_xticks(np.linspace(0,1,13))
ax.set_xticklabels(['%d' % x for x in np.linspace(0,360,13)]);
Here's two sine waves, actually; the two are 6° apart in phase. (Six degrees of separation! Ha! Sorry, couldn't resist.) Look at the difference between the resulting signals at different points in the cycle. Near 90° and 270°, when the signal is near zero, the slope is large, and we can easily distinguish these two signals by their values at the same time. When the signal is near its extremes, however, the slope is near zero, and the signals are very close to each other. Higher slope gives us more phase information. We also can't tell exactly where the signal is in phase just by looking at it at one point in time: if the signal value is 0, is the phase at 90° or 270°? They have the same value. Or if these signals are representing the cosine of position, we can't tell whether the position is moving backwards or forwards, since \( cos(x) = cos(-x) \).
Now suppose we have two sine waves 90° apart:
t = np.linspace(0,1,1000)
f = lambda A,t: np.vstack([A*np.cos(t*2*np.pi), A*np.sin(t*2*np.pi)]).transpose()
plt.plot(t*360,f(0.9,t));
plt.plot(t*360,f(0.9,t+6.0/360),'gray')
plt.xlim(0,360)
plt.xticks(np.linspace(0,360,13));
Here we can estimate phase by using both signals! When one signal is at its extreme, and the slope is zero, we get very little information, but we can get useful information from the other signal, which is passing through zero and is at maximum slope. It turns out that the optimum way to estimate phase angle from given measurements of these two signals at a single instant is to use the arctangent: φ = atan2(y,x). We can identify the phase angle of these signals at any point in the cycle, and can distinguish whether the phase is going forwards and backwards. We can even estimate the error of the phase estimate: if the signals have amplitude A, and there is additive Gaussian white noise on both signals with rms value n, where n is small compared to A, it turns out that the resulting error in the phase estimate has rms value of n/A in radians, independent of phase:
def phase_estimate_2d(A,n,N=20000):
    t = np.linspace(0,1,N)
    xy_nonoise = f(A,t)
    xy = xy_nonoise + n * np.random.randn(N,2)
    x = xy[:,0]; y = xy[:,1]
    plt.plot(x,y,'.')
    plt.plot(xy_nonoise[:,0],xy_nonoise[:,1],'-r')
    plt.xlabel('x')
    plt.ylabel('y')
    plt.figure()
    def principal_angle(x,c=1.0):
        ''' find the principal angle: between -c/2 and +c/2 '''
        return (x+c/2)%c - c/2
    phase_error_radians = principal_angle(np.arctan2(y,x) - t*2*np.pi, 2*np.pi)
    plt.plot(t,phase_error_radians)
    plt.ylabel('phase error, radians')
    print 'n/A = %.4f' % (n/A)
    print 'rms phase error = ',rms(phase_error_radians)

phase_estimate_2d(0.9,0.02)
n/A = 0.0222
rms phase error =  0.0223209290452
Now suppose we have a high noise situation:
phase_estimate_2d(0.9,0.25)
n/A = 0.2778
rms phase error =  0.293673249387
Oh, dear. When the signal plus noise results in readings near (0,0) it gets kind of nasty, and the phase can suddenly flip around. Let's say we're making measurements of x and y every so often, then calculating the phase using the arctangent, and we derive successive angle estimates of 3°, 68°, -152°, -20°, 17°, 63°. Did the angle wander near zero, with a noise spike at 68° and -152°, which we should filter out? Or did it increase moderately fast, wrapping around 1 full cycle from 3°, 68°, 208°, 340°, 377°, 423°? We can't tell; the principal angles are the same.
One big problem with the atan2() method is that it only tells us the principal angle, with no regard to past history. If we want to construct a coherent history, we have to use an unwrapping function:
angles = np.array([90,117,136,160,-171,-166,-141,-118,-83,-42,-27,3,68,-152,-20,17,63]) ierror=13 angles2 = angles+0.0; angles2[ierror] = 44 unwrap_deg = lambda deg: np.unwrap(deg/180.0*np.pi)*180/np.pi fig=plt.figure() ax=fig.add_subplot(1,1,1) msz=4 ax.plot(angles,'+r',markersize=msz) ax.plot(unwrap_deg(angles),'+-b',markersize=msz) ax.plot(angles2,'xr',markersize=msz) ax.plot(ierror,angles[ierror],'+g',markersize=msz) ax.plot(ierror,angles2[ierror],'xg',markersize=msz) ax.plot(unwrap_deg(angles2),'x:b',markersize=msz) ax.legend(('principal angle','unwrapped angle'),'best') ax.set_yticks(np.arange(-180,900,90));
And the problem is that in signals with high frequency content, one single sample that has a noise spike can lead to a cycle slip, because we can't distinguish a noise spike from a legitimate change in value. In the graph above, two different angle values measured at index 13 cause us to pick different numbers of revolutions in unwrapped angle. Lowpass filtering after the arctan will not help us out; lowpass filtering before the arctan will cause a phase error. There are two better solutions:
A phase-locked loop will filter out noise more easily. Since we have two signals x and y, we need a vector PLL rather than a scalar PLL. One of the best approaches for a vector PLL with two sine waves 90 degrees out of phase, is a quadrature mixer: if we use a phase detector (PD) that computes the cross product between estimated and measured vectors, we get a very nice result. If the incoming angle φ is unknown, then
$$\begin{eqnarray} x &=& A \cos \phi\cr y &=& A \sin \phi\cr \hat{x} &=& \cos \hat{\phi}\cr \hat{y} &=& \sin \hat{\phi}\cr \mathrm{PD\ output} &=& \hat{x}y - \hat{y}x \cr &=& A \cos \hat{\phi} \sin \phi - A \sin \hat{\phi} \cos \phi \cr &=& A \sin (\phi - \hat{\phi})\cr &=& A \sin \tilde{\phi} \end{eqnarray}$$
Just as a reminder: the ^ terms are estimates; the ~ terms are errors, and the "plain" x and y are measurements.
There's no high frequency term to filter out here! That's one of the big advantages of a vector PLL over a scalar PLL; if we can measure (or derive from a series of measurements) quadrature components that are proportional to \( \cos \phi \) and \( \sin \phi \), we don't need as much of a filter. In reality, imperfect phase and amplitude relationship means that there will be some double-frequency term that makes it into the output of the phase detector, but the amplitude should be fairly small.
A vector PLL is a tracking loop on the x and y measurements, but based on state variables in terms of phase angle and its derivatives. (Or to say it a different way: measurements are in rectangular coordinates, but state variables are in polar coordinates.) This is kind of the best of both worlds, because we can use information about reasonable changes in angle and amplitude, but not have to worry about angle unwrapping errors if we get single noise spikes, since we don't ever have to convert from principal angle (-180° to +180°) to unwrapped angle. If noise at one particular instant causes our (x,y) measurements to come close to zero, the phase detector output will be small and the effect on the PLL output will be minimal.
You caught me — we've veered off onto a tangent, and it doesn't have much to do with digital encoders. (Resolvers and analog sensors, yes. Digital encoders, no.) But I wanted you to see the big picture before delving into the world of observers.
Tracking loops
Phase-locked loops
Hope you learned something useful, and happy tracking!
Next up: Observers!
Add a Comment | http://www.embeddedrelated.com/showarticle/530.php | CC-MAIN-2014-49 | refinedweb | 5,120 | 54.42 |
Introduction
Everyone dealing with data in any capacity has to be conversant in both
SQL and
Python. Python has Pandas which makes data import, manipulation, and working with data, in general, easy and fun. On the other hand, SQL is the guardian angel of Databases across the globe. It has retained its rightful grip on this field for decades.
By virtue of this monopolistic hold, the data being stored by an organization, especially the Relational Databases needs the use of SQL to access the database, as well as to create tables, and some operations on the tables as well. Most of these operations can be done in Python using Pandas as well. Through experience, I have noticed that for some operations, SQL is more efficient (and hence easy to use) and for others, Pandas has an upper hand (and hence more fun to use).
The purpose of this article is to introduce you to “Best of Both Worlds”. You shall know how to do operations in both of these interchangeably. This will be of much use to those who have experience working with SQL but new to Python. Just one more thing: This is my first attempt to marry SQL and Python. Watch out this space for more such articles and leave your demand for specific topics in comments for me to write about them.
So let us begin with our journey without any further ado.
Installing and Importing the library pyodbc
We are going to use the library named
pyodbc to connect python to
SQL. This will give us the ability to use the dynamic nature of Python to build and run queries like
SQL. These two languages together are a formidable force in our hands. These together can take your code to the pinnacle of automation and efficiency.
Install
pyodbc using pip or visit their webpage.
pip install pyodbc
and then import the library in your Jupyter notebook
import pyodbc
pyodbc is going to be the bridge between SQL and Python. This makes access easy to ODBC (Open Database Connectivity) databases. ODBC was developed by SQL Access Group in the early ’90s as an API (Application Programming Interface) to access databases. These DBMS (Database management Systems) are compliant with ODBC.
- MySQL
- MS Access
- IBM Db2
- Oracle
- MS SQL Server
I am presently working on MS SQL Server, and that’s what I will be using for this article as well. However, the same codes can be used for any other ODBC compliant database. Only the connection step will vary a little.
Connection to the SQL Server
We need to establish the connection with the server first, and we will use
pyodbc.connect function for the same. This function needs a
connection string as a parameter. The connection string can be defined and declared separately. Let’s have a look at the sample connection String.
There can be two types of connection Strings. One when the connection is trusted one, and another where you need to enter your User_id and Password. You would know which one you are using, from your SQL Server Management Studio.
- For Trusted Connection:
connection_string = ("Driver={SQL Server Native Client 11.0};" "Server=Your_Server_Name;" "Database=My_Database_Name;" "Trusted_Connection=yes;")
- For Non-Trusted Connection:
connection_string = ("Driver={SQL Server Native Client 11.0};" "Server=Your_Server_Name;" "Database=My_Database_Name;" "UID=Your_User_ID;" "PWD=Your_Password;")
You need the following to access:
- Server
- Database
- User ID
Let me help you locate your Server Name and Database
Get Server Name
You can find the server name using two ways. One is to have a look at your SQL Server Management login window.
The other way is to run the following query in SQL.
SELECT @@SERVERNAME
Get Database
You need to give the name of the database in which your desired table is stored. You can locate the database name in the object Explorer menu under the Databases section. This is located on the left-hand side of your SQL server.
In our case, the Database Name is My_Database_Name.
Get UID
You can find the User ID in your SQL Server Management login window. The Login_ID_Here is the user name.
Once you have written the
connection_string, you can initialize the connection by calling the
pyodbc.connect function as below.
connection = pyodbc.connect(connection_string)
Note: In case you have ever used SQL database connection with any other software or environment (like SAS or R), you can copy the values of the above parameters from the connection string used there as well.
# Lets summarise the codes till now import pyodbc connection_string = ("Driver={SQL Server Native Client 11.0};" "Server=Your_Server_Name;" "Database=My_Database_Name;" "UID=Your_User_ID;" "PWD=Your_Password;") connection = pyodbc.connect(connection_string)
Running the Query in SQL from Python
Now when you have established the connection of the SQL server with Python, you can run the SQL queries from your python environment (Jupyter notebook or any other IDE).
To do so, you need to define the cursor. Let’s do that here.
Let us run a simple query to select the first 10 rows of a table in the database defined in the connection_string, table name as State_Population.
# Initialise the Cursor cursor = connection.cursor() # Executing a SQL Query cursor.execute('SELECT TOP(10) * FROM State_Population')
This executes the query, but you will not see any output in python, as the query is executed in SQL. However, you can print the results which will be the same as the ones returned inside the SQL server.
for row in cursor: print(row)
Out[3]:
(AL, under18, 2012, 1117489.0) (AL, total, 2012, 4817528.0) (AL, under18, 2010, 1130966.0) (AL, total, 2010, 4785570.0) (AL, under18, 2011, 1125763.0) (AL, total, 2011, 4801627.0) (AL, total, 2009, 4757938.0) (AL, under18, 2009, 1134192.0) (AL, under18, 2013, 1111481.0) (AL, total, 2013, 4833722.0)
This way you can see the result in Python, but that is not very useful for further processing. It would be useful if we can import the table as a Pandas DataFrame in the Python environment. Let us do that now.
Bringing SQL table in Python
Pandas bring a data structure in Python, which is similar to a SQL (or for that matter, any other) table. That’s Pandas DataFrame for you. So it’s prudent to import the data from SQL to Python in form of Pandas DataFrame. Pandas have import functions that read SQL data types.
These are the three functions pandas provide for reading from SQL.
- pandas.read_sql_table()
- pandas.read_sql_query()
- pandas.read_sql()
The
read_sql_table function takes a table name as a parameter, the
read_sql_query function takes SQL query as a parameter. The third one,
read_sql is a wrapper function around the above two. It takes either, a table name or a query, and imports the data from SQL to Python in form of a DataFrame.
Also, notice how we give the SQL query in form of a string, and also tell the function the name of the
connection.
import pandas as pd # Using the same query as above to get the output in dataframe # We are importing top 10 rows and all the columns of State_Population Table data = pd.read_sql('SELECT TOP(10) * FROM State_Population', connection)
data
Notice the output above, it’s the same as you would expect from any local data file (say .csv), imported in Python as Pandas DataFrame.
We can write the query outside the function, and call it by the variable name as well. Let us import another table named state_areas which has the name of the states, and their area in Square Miles. But instead of calling the first 10 rows, we will like to see all the states whose area is more than 100,000 Square miles.
# write the query and assign it to variable query = 'SELECT * FROM STATE_AREAS WHERE [area (sq. mi)] > 100000' # use the variable name in place of query string area = pd.read_sql(query, connection)
In [7]:
area
Conclusion:
In this article, you saw how to connect the two most powerful workhorses of the Data Science world, SQL and Python. This is not the end, but only the first step towards getting the “Best of Both Worlds”.
Now you can start using Python to work upon your data which rests in SQL Databases. Once you brought it as DataFrame, then all the operations are usual Pandas operations. Many of these operations were not possible in SQL.
I have found marrying SQL to Python immensely useful. This has opened the doors I didn’t know even existed.
The implied learning in this article was, that you can use Python to do things that you thought were only possible using SQL. mateusz-butkiewicz on Unsplash
| https://www.analyticsvidhya.com/blog/2021/06/how-to-access-use-sql-database-with-pyodbc-in-python/ | CC-MAIN-2021-25 | refinedweb | 1,439 | 65.32 |
WSA_QOS_ESHAPERATEOBJ 11030 Invalid QoS shaping rate object. A protocol was specified in the socket function call that does not support the semantics of the socket type requested. Cheers ChrizClick to expand... An operation on a socket could not be performed because the system lacked sufficient buffer space or because a queue was full. Check This Out
Chriz1977 Well-Known Member Joined: Sep 18, 2006 Messages: 191 Likes Received: 0 Trophy Points: 16 Hi When I try to send email from Outlook Express I get 'socket error 10061'. The requested name is valid and was found in the database, but it does not have the correct associated data being resolved for. Step 4: Apply Changes Change settings according to the table above. Dit beleid geldt voor alle services van Google. have a peek at this web-site
Another possibility is that connection is blocked by a firewall. WSAENETRESET 10052 Network dropped connection on reset. WSA_E_NO_MORE 10110 No more results. WSAENOTSOCK 10038 Socket operation on nonsocket.
Some error codes defined in the Winsock2.h header file are not returned from any function. Try reconnecting at a later time. WSA_QOS_SENDERS 11006 QoS senders. What Is A Socket Error WSA_QOS_ADMISSION_FAILURE 11010 QoS admission error.
In the Administrator tab, you can set the user name and password that are used to log into the interface. Socket Error 10061 Connection Refused Socket Error # 11004, Unable to connect: Check to make sure there isn't a trailing or leading space character on the FTP hostname. Bezig... When this error occurs it makes it impossible for the user to accomplish what they set out to do in regards to their e-mail.
Why does the connection not work? Socket Error 10061 Connection Refused Smtp While this might be a problem with the computer’s registry or an intentional block from an ISP’s server as it might regard too much sent mail as SPAM, the most likely Knowledgebase Additional Resources User Guide Business On Tapp Who's that Cow? These error codes and a short text description associated with an error code are defined in the Winerror.h header file.
Try entering the hostname or IP address, not a URL (e.g. This is because the firewall is blocking the connection as the permissions for the SyncBackSE/Pro application (within the firewall) are being set to "Trusted Application" instead of "SYSTEM", which will give Socket Error 10061 Ppsspp The third-party products that are discussed in this article are manufactured by companies that are independent of Wyith Limited. Socket Error 10054 You see the settings the Enterprise Console uses to connect to the PRTG Web Server.
This article was helpful (thinking…) · Flag this article as inaccurate…Flag this article as inaccurate… · Admin → New and returning users may sign in Sign in prestine Your name his comment is here An error with the underlying traffic control (TC) API as the generic QoS request was converted for local enforcement by the TC API. Learn More. Find out how you can reduce cost, increase QoS and ease planning, as well. Socket Error 10061 Connection Refused Windows 7
if you have entered something like then change it to my.hostname.com Socket Error # 10061, Connection refused: The hostname is correct, but either the FTP server is not listening on Save all settings Updated 10/21/16 How helpful did you find this article? Click stars to rate the article: Comments (optional): Please login to view your tickets. For example, if a call to WaitForMultipleEvents fails or one of the registry functions fails trying to manipulate the protocol/namespace catalogs. this contact form On the client side, if the connection is accepted, a socket is successfully created and the client can use the socket to communicate with the server.
The protocol family has not been configured into the system or no implementation for it exists. Socket Error 10053 Note that this error is returned by the operating system, so the error number may change in future releases of Windows. Your own settings will vary.
WeergavewachtrijWachtrijWeergavewachtrijWachtrij Alles verwijderenOntkoppelen Laden... In both the PRTG Administration Tool and Enterprise Console, confirm the settings. WSA_QOS_TRAFFIC_CTRL_ERROR 11014 QoS traffic control error. Socket Error 10061 Windows Live Mail A name component or a name was too long.
Try temporarily turning off any firewall you are running. 10052 - Network dropped connection on reset. You are invited to get involved by asking and answering questions! Learn more You're viewing YouTube in Dutch. navigate here Laden...
This error is returned by WSAStartup if the Windows Sockets implementation cannot function at this time because the underlying system it uses to provide network services is currently unavailable. WSAEAFNOSUPPORT 10047 Address family not supported by protocol family. WSAVERNOTSUPPORTED 10092 Winsock.dll version out of range. The connection to the server has failed.
WSAEPROCLIM 10067 Too many processes. WSAEHOSTDOWN 10064 Host is down. Deze functie is momenteel niet beschikbaar. An invalid QoS flow descriptor was found in the flow descriptor list.
When this does happen the user knows right away as a large error message comes up and any outgoing messages that have been attempted to be sent will have gone nowhere For server applications that need to bind multiple sockets to the same port number, consider using setsockopt (SO_REUSEADDR). There can be several reasons for this. | http://dlldesigner.com/socket-error/no-socket-error-1061.php | CC-MAIN-2017-51 | refinedweb | 882 | 56.15 |
Technical Support
On-Line Manuals
RL-ARM User's Guide (MDK v4)
#include <net_config.h>
BOOL com_putchar (
U8 c ); /* The character to write to the output buffer. */
The com_putchar function writes the character specified by
the argument c to the serial output buffer and activates the
serial transmission if it is not already active.
The com_putchar function is part of RL-TCPnet. The
prototype is defined in net_config.h.
note
com_getchar, com_tx_active, init_serial
BOOL com_putchar (U8 c) {
struct buf_st *p = &tbuf;
/* Write a byte to serial interface */
if ((U8)(p->in + 1) == p->out) {
/* Serial transmit buffer is full. */
return (__FALSE);
}
VICIntEnClr = (1 << 7);
if (tx_active == __FALSE) {
/* Send directly to UART. */
U1THR = (U8)c;
tx_active = __TRUE;
}
else {
/* Add data to transmit buffer. */
p->buf [p->in++] = c;
}
VICIntEnable = (1 << 7);
return (_. | http://www.keil.com/support/man/docs/rlarm/rlarm_com_putchar.htm | CC-MAIN-2019-43 | refinedweb | 132 | 66.44 |
user authenticationslikone27 Jul 27, 2006 7:41 AM
I am coldfusion programmer and new to flex and was wondering if anyone could help me out. I am converting a coldfusion app to flex and dont understand user authentication in flex. I have looked at the JamJar app and want to do something similar..
This content has been marked as final. Show 5 replies
1. Re: user authenticationANSCORP Jul 29, 2006 5:40 PM (in response to slikone27)This can be handled a couple of different ways, I think. You can actually use a CFML page to authenticate a user against your database and then pass information (such as the user ID, user name, access control variables, etc.) into the Flex application in the html wrapper that loads your SWF file (using Flashvars). These variables would be accessed in the Application.application.parameters scope in Action Script. The documentation on how to do this can be found here: iveDocs_Parts&file=00001003.html
I can't see why you would'nt be able to do something similar with a Flex login screen too using view states and/or transitions. You'd make a remote call to your database and then store the results in public variables in your application to determine access within your application.
If you find a better way to handle this, let me know. For now, this is likely the method that I'll use.
Good Luck,
M. McConnell
2. Re: user authenticationANSCORP Jul 30, 2006 9:44 AM (in response to slikone27)Actually, scratch that. The documentation I've referred you to is basically useless. The code examples don't work.
I am becoming increasingly frustrated with this product (Flex). It is fairly complex, particularly if you're just a CF developer, and I find myself spending more time trying to figure something out in the documentation than doing any real programming. It's like having to learn to code all over again. In addition, the first response to questions in these forums by Adobe (and others - myself included) is "look at the docs". Well, if the docs aren't correct, what good are they? I literally cut and pasted all the example code associated with communicating with a Flex app through flahsvars as described in LiveDocs (see link in my previous post in this thread) and it did not work. All I get for the application variables is "null" values.
At this point, I'm considering just cutting my losses and going back to CFForms.
M. McConnell
3. Re: user authenticationANSCORP Jul 30, 2006 12:14 PM (in response to ANSCORP)Update - After reviewing several more articles and Doc references(none of which had the complete answer by itself), I found the answer to using Flashvars. You'll notice that when you compile your Flex app, Flex builder creates an html wrapper for your SWF. It's usually entitled "yourappname.html" where "yourappname" is the name of your main mxml application file. There is a section in the JavaScript code that identifies the user's version of the Flash player. If the user passes all tests for major and minor version, the JavaScript creates the code to embed the SWF file for your Flex app. There is a line in this code segment that references "flashvars". By default, it looks like this:
"flashvars",'historyUrl=history.htm%3F&lconid=' + lc_id + '',
If you edit the line with "flashvars" on it to pass in your variables, you can then access the variables with the Application.application.parameters.myvariablename call inside your mxml code. Let's say you wanted to pass a user name into the Flex app. You could do it by modifying the flashvars line in the wrappers JavaScript code as follows:
"flashvars","myName=Mike&historyURL=history.htm%3f&Iconid=' + Ic_id + ",
Then, you would access it in MXML as follows:
<mx:Application xmlns:
<mx:Script><![CDATA[
import mx.controls.Alert;
import mx.core.Application;
// Declare bindable properties in Application scope.
[Bindable]
public var myName:String;
// Assign values to new properties.
private function initVars():void {
myName = Application.application.parameters.myName;
}
]]></mx:Script>
<mx:VBox>
<mx:HBox>
<mx:Label
<mx:Label
</mx:HBox>
</mx:VBox>
</mx:Application>
When you run this file, you should see the value of the name you've passed in via Flashvars.
This code is available in the Flex docs, but the Flex docs don't do a very good job of telling you how to use Flashvars to pass in the values. A WORD OF CAUTION: If you make a change to your mxml code and Flex recompiles your application, the changes you made to the wrapper are overwritten and you'll lose your flashvar settings. I'm sure there is a way to avoid this behavior (at least I hope there is), but I haven't looked into it yet.
M. McConnell
4. user authenticationANSCORP Jul 30, 2006 12:34 PM (in response to slikone27)And one more thing....If you save an exact copy of your Flex created wrapper file, let's say "main.html" as "main.cfm", it won't get overwritten. Plus, you can pass variables to the flashvars line in the JavaScript code between <cfoutput> tags so you can read url variables or form variables and change the content of the flashvars (eg. myName=<cfoutput>#url.myname#</cfoutput>. I just tried it and it actually works. If anyone has a better way to do this, please chime in.
M. McConnell
5. Re: user authenticationslikone27 Jul 31, 2006 6:21 AM (in response to ANSCORP)thanks... I currently have a login screen as a state of the app. When the user logins the app changes view states so the menu comes up at the top. Just wanted to make sure I was doing it right before I got to far along. Unfortunately I can't use CF because we are going to FDS so data will be real time. I think I remember seeing somewhere that if you change the wrapper file that is in the bin directory of your workspace, when it recompiles you wont lose your changes. | https://forums.adobe.com/thread/276441 | CC-MAIN-2017-51 | refinedweb | 1,017 | 64.61 |
Go to the first, previous, next, last section, table of contents..
The problem of multidimensional nonlinear least-squares fitting requires the minimization of the squared residuals of n functions, f_i, in p parameters, x_i,
\Phi(x) = (1/2) \sum_{i=1}^{n} f_i(x_1, ..., x_p)^2 = (1/2) || F(x) ||^2
All algorithms proceed from an initial guess using the linearization,
\psi(p) = || F(x+p) || ~=~ || F(x) + J p ||
where x is the initial point, p.
If there is insufficient memory to create the solver then the function
returns a null pointer and the error handler is invoked with an error
code of
GSL_ENOMEM.
const gsl_multifit_fdfsolver_type * T = gsl_multifit_fdfsolver_lmder; gsl_multifit_fdfsolver * s = gsl_multifit_fdfsolver_alloc (T, 100, 3);
If there is insufficient memory to create the solver then the function
returns a null pointer and the error handler is invoked with an error
code of
GSL_ENOMEM.
printf ("s is a '%s' solver\n", gsl_multifit_fdfsolver_name (s));
would print something like
s is a 'lmder' solver.
You must provide n functions of p variables for the minimization algorithms to operate on. In order to allow for general parameters the functions are defined by the following data types:
int (* f) (const gsl_vector * x, void * params, gsl_vector * f)
size_t n
size_t p
void * params
int (* f) (const gsl_vector * x, void * params, gsl_vector * f)
int (* df) (const gsl_vector * x, void * params, gsl_matrix * J)
int (* fdf) (const gsl_vector * x, void * params, gsl_vector * f, gsl_matrix * J)
size_t n
size_t p
void * params
The following functions drive the iteration of each algorithm. Each function performs one iteration to update the state of any solver of the corresponding type. The same functions work for all solvers so that different methods can be substituted at runtime without modifications to the code.
A minimization procedure should stop when one of the following conditions is true:
The handling of these conditions is under user control. The functions below allow the user to test the current estimate of the best-fit parameters |g_i| < epsabs
and returns
GSL_CONTINUE otherwise. This criterion is suitable
for situations where the precise location of the minimum, x,
is unimportant provided a value can be found where the gradient is small
enough.
The minimization algorithms described in this section make use of both the function and its derivative. They require an initial guess for the location of the minimum. There is no absolute guarantee of convergence -- the function must be suitable for this technique and the initial guess must be sufficiently close to the minimum for it to work. attempts to minimize the linear system |F + J p| subject to the constraint |D p| < \Delta. The solution to this constrained linear system is found using the Levenberg-Marquardt method.
The proposed step is now tested by evaluating the function at the resulting point, x'. If the step reduces the norm of the function sufficiently, and follows the predicted behavior of the function within the trust region. then it is accepted and size of the trust region is increased. If the proposed step fails to improve the solution, or differs significantly from the expected behavior within the trust region, then the size of the trust region is decreased and another trial step is computed.
The algorithm also monitors the progress of the solution and returns an error if the changes in the solution are smaller than the machine precision. The possible error codes are,
GSL_ETOLF
GSL_ETOLX
GSL_ETOLG
These error codes indicate that further iterations will be unlikely to change the solution from its current value.
There are no algorithms implemented in this section at the moment.
The covariance matrix is given by,
covar = (J).
The following example program fits a weighted exponential model with
background to experimental data, Y = A \exp(-\lambda t) + b. The
first part of the program sets up the functions
expb_f and
expb_df to calculate the model and its Jacobian. The appropriate
fitting function is given by,
f_i = ((A \exp(-\lambda t_i) + b) - y_i)/\sigma_i
where we have chosen t_i = i. The Jacobian matrix J is the derivative of these functions with respect to the three parameters (A, \lambda, b). It is given by,
J_{ij} = d f_i / d x_j
where x_0 = A, x_1 = \lambda and x_2 = b.
#include <stdlib.h> #include <stdio.h> #include <gsl/gsl_rng.h> #include <gsl/gsl_randist.h> #include <gsl/gsl_vector.h> #include <gsl/gsl_blas.h> #include <gsl/gsl_multifit_nlin.h> struct data { size_t n; double * y; double * sigma; }; int expb_f (const gsl_vector * x, void *params, gsl_vector * f) { size_t n = ((struct data *)params)->n; double *y = ((struct data *)params)->y; double *sigma = ((struct data *) params)->sigma; double A = gsl_vector_get (x, 0); double lambda = gsl_vector_get (x, 1); double b = gsl_vector_get (x, 2); size_t i; for (i = 0; i < n; i++) { /* Model Yi = A * exp(-lambda * i) + b */ double t = i; double Yi = A * exp (-lambda * t) + b; gsl_vector_set (f, i, (Yi - y[i])/sigma[i]); } return GSL_SUCCESS; } int expb_df (const gsl_vector * x, void *params, gsl_matrix * J) { size_t n = ((struct data *)params)->n; double *sigma = ((struct data *) params)->sigma; double A = gsl_vector_get (x, 0); double lambda = gsl_vector_get (x, 1); size_t i; for (i = 0; i < n; i++) { /* Jacobian matrix J(i,j) = dfi / dxj, */ /* where fi = (Yi - yi)/sigma[i], */ /* Yi = A * exp(-lambda * i) + b */ /* and the xj are the parameters (A,lambda,b) */ double t = i; double s = sigma[i]; double e = exp(-lambda * t); gsl_matrix_set (J, i, 0, e/s); gsl_matrix_set (J, i, 1, -t * A * e/s); gsl_matrix_set (J, i, 2, 1/s); } return GSL_SUCCESS; } int expb_fdf (const gsl_vector * x, void *params, gsl_vector * f, gsl_matrix * J) { expb_f (x, params, f); expb_df (x, params, J); return GSL_SUCCESS; }
The main part of the program sets up a Levenberg-Marquardt solver and some simulated random data. The data uses the known parameters (1.0,5.0,0.1) combined with gaussian noise (standard deviation = 0.1) over a range of 40 timesteps. The initial guess for the parameters is chosen as (0.0, 1.0, 0.0).
#define N 40 int main (void) { const gsl_multifit_fdfsolver_type *T; gsl_multifit_fdfsolver *s; int status; size_t i, iter = 0; const size_t n = N; const size_t p = 3; gsl_matrix *covar = gsl_matrix_alloc (p, p); double y[N], sigma[N]; struct data d = { n, y, sigma}; gsl_multifit_function_fdf f; double x_init[3] = { 1.0, 0.0, 0.0 }; gsl_vector_view x = gsl_vector_view_array (x_init, p); const gsl_rng_type * type; gsl_rng * r; gsl_rng_env_setup(); type = gsl_rng_default; r = gsl_rng_alloc (type); f.f = &expb_f; f.df = &expb_df; f.fdf = &expb_fdf; f.n = n; f.p = p; f.params = &d; /* This is the data to be fitted */ for (i = 0; i < n; i++) { double t = i; y[i] = 1.0 + 5 * exp (-0.1 * t) + gsl_ran_gaussian (r, 0.1); sigma[i] = 0.1; printf ("data: %d %g %g\n", i, y[i], sigma[i]); }; T = gsl_multifit_fdfsolver_lmsder; s = gsl_multifit_fdfsolver_alloc (T, n, p); gsl_multifit_fdfsolver_set (s, &f, &x.vector); print_state (iter, s); do { iter++; status = gsl_multifit_fdfsolver_iterate (s); printf ("status = %s\n", gsl_strerror (status)); print_state (iter, s); if (status) break; status = gsl_multifit_test_delta (s->dx, s->x, 1e-4, 1e-4); } while (status == GSL_CONTINUE && iter < 500); gsl_multifit_covar (s->J, 0.0, covar); gsl_matrix_fprintf (stdout, covar, "%g"); #define FIT(i) gsl_vector_get(s->x, i) #define ERR(i) sqrt(gsl_matrix_get(covar,i,i)) printf ("A = %.5f +/- %.5f\n", FIT(0), ERR(0)); printf ("lambda = %.5f +/- %.5f\n", FIT(1), ERR(1)); printf ("b = %.5f +/- %.5f\n", FIT(2), ERR(2)); { double chi = gsl_blas_dnrm2(s->f); printf("chisq/dof = %g\n", pow(chi, 2.0)/ (n - p)); } printf ("status = %s\n", gsl_strerror (status)); gsl_multifit_fdfsolver_free (s); return 0; } int print_state (size_t iter, gsl_multifit_fdfsolver * s) { printf ("iter: %3u x = % 15.8f % 15.8f % 15.8f " "|f(x)| = %g\n", iter, gsl_vector_get (s->x, 0), gsl_vector_get (s->x, 1), gsl_vector_get (s->x, 2), gsl_blas_dnrm2 (s->f)); }
The iteration terminates when the change in x is smaller than 0.0001, as both an absolute and relative change. Here are the results of running the program,
iter: 0 x = 1.00000000 0.00000000 0.00000000 |f(x)| = 118.574 iter: 1 x = 1.64919392 0.01780040 0.64919392 |f(x)| = 77.2068 iter: 2 x = 2.86269020 0.08032198 1.45913464 |f(x)| = 38.0579 iter: 3 x = 4.97908864 0.11510525 1.06649948 |f(x)| = 10.1548 iter: 4 x = 5.03295496 0.09912462 1.00939075 |f(x)| = 6.4982 iter: 5 x = 5.05811477 0.10055914 0.99819876 |f(x)| = 6.33121 iter: 6 x = 5.05827645 0.10051697 0.99756444 |f(x)| = 6.33119 iter: 7 x = 5.05828006 0.10051819 0.99757710 |f(x)| = 6.33119 A = 5.05828 +/- 0.05983 lambda = 0.10052 +/- 0.00309 b = 0.99758 +/- 0.03944 chisq/dof = 1.08335 status = success
The approximate values of the parameters are found correctly, and the chi-squared value indicates a good fit (the chi-squared per degree of freedom is approximately 1). In this case the errors on the parameters can be estimated from the square roots of the diagonal elements of the covariance matrix. If the chi-squared value indicates a poor fit then error estimates obtained from the covariance matrix are not valid, since the Gaussian approximation would not apply.
The MINPACK algorithm is described in the following article,
The following paper is also relevant to the algorithms described in this section,
Go to the first, previous, next, last section, table of contents. | http://linux.math.tifr.res.in/programming-doc/gsl/gsl-ref_36.html | CC-MAIN-2017-39 | refinedweb | 1,545 | 63.8 |
Handling JSON in Python 3
To handle the JSON file format, Python provides a module named
json.
STEP 1: import the json module
import json as JS
STEP 2: import xml.etree.ElementTree module
import xml.etree.ElementTree as ET
STEP 3: Read the json file
here, “data” is the variable in which we have loaded our JSON data.
with open("quiz.json", "r") as json_file: data = JS.load(json_file);
STEP 4: Build the root element
Every xml file must have exactly one root element
root = ET.Element("quiz")
STEP 5: Build the subelements of the root
SubElement takes two parameters:
- root- It is the name of the variable where root element is stored.
- subelement_name: It is the name of subelement.Example:
Maths = ET.SubElement(root, "maths")
STEP 6: Build the tree of xml document
tree = ET.ElementTree(root)
STEP 7: Write the xml to quiz.xml file
tree.write("quiz.xml")
Note : XML elements does not support integer values so we need to convert them to string.
Example:
JSON | https://www.geeksforgeeks.org/python-json-to-xml/?ref=lbp | CC-MAIN-2021-25 | refinedweb | 170 | 68.16 |
Hello everyone (my 1st post for 1st uC project). I got a ATMega8 mC and LPS331AP sensor. uC can "talk" with LPS331AP using I2C or SPI interface. For my project I've chosen to use SPI option (no real reason, I thought it would be simpler...). LPS331AP uses following pins for SPI:
SPC - clock
SDA (SDI) - SPI data input
SDO - SPI output
CS - chip select
As far as I know the idea of SPI is simple: connect SPC to SCK(PB5 on ATmega8), SDA to MOSI(PB3), SDO to MISO(PB4) and CS to chosen GPIO configured as output.
Then I need to configure ATMega as master using SPCR register (in data sheet I read that SS on ATMega have to be configured as output(PB2) to prevent "dropping" to slave), here is my SPI_init function:
void SPI_init() { //Pin MOSI, SCK, SS configuration as output DDRB = ((1<<DDB3)|(1<<DDB5)|(1<<DDB2)); /* * SPI control register setup * spi enable * configure as master */ SPCR = ((1<<SPE)|(1<<MSTR)); }
I have left bits SPR1 and SPR0 to set f_cpu/4 fro SCK frequency
Next this is my transmit function, straightforward - put data to sent to SPDR, wait until SPIF bit is set in SPSR and then return SPDR if needed:
uint8_t SPI_tx(uint8_t data) { //transmit SPDR = data; //Wait for end of tx while(!(SPSR & (1<<SPIF))) ; uint8_t recv = (uint8_t)SPDR; return recv; }
and here is main function:
#include <avr/io.h> #include <util/delay.h> #include <stdint.h> #define F_CPU 4000000UL #define LPS331AP_CS_PIN DDB0 #define LPS331AP_READ_CMD 0x80 //sensor registers #define WHO_AM_I 0x0F void peripherial_init() { //LEDS DDRD |= (1<<DDD2); //LPS331AP DDRB |= (1<<LPS331AP_CS_PIN); PORTB |= (1<<LPS331AP_CS_PIN); } int main(void) { SPI_init(); init_USART(25); //for 9600 baud rate with 4Mhz oscilator PORTD |= (1<<DDD7); while(1) { uint8_t recv = 1; PORTB &= ~(1<<LPS331AP_CS_PIN); //CS goes low, start SPI_tx(LPS331AP_READ_CMD | WHO_AM_I); recv = SPI_tx(0x00); PORTB |= (1<<LPS331AP_CS_PIN); //CS high, stop if(recv == 0xBB) //test sensor. This will be removed in final { PORTD |= (1<<DD2); } USART_tx(recv); } }
And as topic of this post suggest code above do not work: recv is never set to 0xBB (value in WHO_AM_I register of sensor), it is always equal to 0xFF (as I see in serial monitor, and LED on DD2 is down).
I am using USART to send recv value to arduino UNO:
#include <SoftwareSerial.h> int pinRx = 9; int pinTx = 8; SoftwareSerial atmega(pinRx, pinTx); void setup() { atmega.begin(9600); Serial.begin(9600); } void loop() { if(atmega.available()) { Serial.println(atmega.read(),HEX); } }
I've tested LPS331AP using arduino UNO to be sure that it is not damaged, and it works as expected:
#include <SPI.h> /* SPI.h sets these for us in arduino const int SDI = 11; const int SDO = 12; const int SCL = 13; */ int CS_LPS331AP = 2; byte WHO_AM_I = 0x0F; byte read_cmd = 0x80; void enable_LPS331AP() { digitalWrite(CS_LPS331AP, LOW); } void disable_LPS331AP() { digitalWrite(CS_LPS331AP, HIGH); } void pinConfigure() { pinMode(CS_LPS331AP, OUTPUT); digitalWrite(CS_LPS331AP, HIGH); } void setup() { pinConfigure(); Serial.begin(9600); SPI.begin(); } void loop() { byte ret = 0x01; enable_LPS331AP(); SPI.transfer(read_cmd | WHO_AM_I); ret = SPI.transfer(0x00); disable_LPS331AP(); Serial.println("read:"); Serial.println(ret,HEX); delay(1000); }
So where I made a mistake in ATMega code? Thanks in advance.
You need to pay attention to the required SPI mode, which defines the polarity and phase of when data is valid relative to the clock.
Writing code is like having sex.... make one little mistake, and you're supporting it for life.
Top
- Log in or register to post comments
On my very old installation of Arduino I see the SPI.cpp and SPI.h files in:
D:\arduino-1.8.6\hardware\arduino\avr\libraries\SPI\src
As far as i can see there's no mention of CPOL/CPHA support there so it would appear to be using default just as OP's C code will implicitly do too.
If it were me I'd dig out a scope or logic analyser and compare the wire activity in the go/no-go cases.
Top
- Log in or register to post comments
You may need some small delay from CS going low to the start of the SPI transmission, the Arduino may have more overhead then your direct implementation and that my explain why one works and the other does not. Check the datasheet for timing requirements of the CS signal.
Jim
Click Link: Get Free Stock: Retire early! PM for strategy
share.robinhood.com/jamesc3274
Top
- Log in or register to post comments
I also thought that this may be the issue, but no (I even tried all 4 combinations as last chance .... truly un-enginieering way of looking for solution).
I did it too. The only difference is transfer/recieve function:
precise this "nop" instruction.
I know that this is the best way for looking for issue in this case, but I do not have logic analyser or oscilloscope (yet :-))
I've added _delay_ms( ) with some value from 1-5 and it did not help.
I have also changed CS pin and outcome is still the same.
Top
- Log in or register to post comments
You are going to need >>some<< method of debugging. (could it be as simple as the wrong SPI mpode?) A poor man might slow the bit rate WAY down and then put an LED on the SCK line. And on the chip-select. And on MOSI sending a pattern, or alternate bytes. Or ...
Suggestion for avatar for your account:
/> />
You can put lipstick on a pig, but it is still a pig.
I've never met a pig I didn't like, as long as you have some salt and pepper.
Top
- Log in or register to post comments
Thanks for avatar idea! In previous post I forgot to mention that I connected master out to master in to check if spi is working - I saw on Arduino serial out byte 0x8F. This was expected result so I guess that I should look for root cause in clock signal.
Top
- Log in or register to post comments
did you try adding that NOP between the writing to SPDR, and the while wait loop? It is needed due to the fact that the bit is cleared after it would be sampled for the next instruction, so in this case the SPIF bit is not cleared by the time you're checking it in the loop, and you will exit out immediately, before the transfer is actually complete.
Writing code is like having sex.... make one little mistake, and you're supporting it for life.
Top
- Log in or register to post comments
yes I did, same result.
I've ordered logic level analyser so I soon be able to conduct proper problem investigation.
Top
- Log in or register to post comments
I have tested my code and setup using logic analyzer (cheapest possible, I totally not recommend it if you only use windows, but after a fight I got it working under linux), and then I have added some _delay_ms and led. It turns out that CS PIN NEVER GO LOW! Do you have any ide why? I have switched ports and pins but I have same result always.
Top
- Log in or register to post comments
This line does not look right to me:
DDB0 i'm sure is defined in the header as an address where that register located rather then as a bit mask.
But I could be wrong....
Jim
Edit: No just looked it up in the header file, it is a bit mask = 0, my bad!
Personally I would have used PB0 instead...
Click Link: Get Free Stock: Retire early! PM for strategy
share.robinhood.com/jamesc3274
Top
- Log in or register to post comments
Nah, it is just an unsigned ( )
Top
- Log in or register to post comments
Hi,
I've found root cause of problem! But first I must say that I have learned some things: working with led as a logic analyzer and working with proper logic analyzer. Also I did a lot of measuring with voltmeter and dig through technical docs, so in the end I feel like I gained some experience in uC world.
So my code (yes the code posted here!) have 2 serious bugs:
1. void SPI-init()
I have used a assignment operator instead of logical OR, so this ruined all configuration that was set before initialization of SPI, but what was more dreadful and ultimately prevented me for working with sensor was:
2. int main(void)
I was checking pins using voltmeter and pin that was (I thought it was ...) configured as CS had some strange value of 3.7V (or something about that, this was strange because I have used 5V to power up circuit). I found in LPS331AP that CS of sensor is pulled-up, then I disconnected all wires from sensor, I've only left GND and VIN. So you probably guessed: when I measured voltage on CS it was 3.7V - so configuration of ATMeaga pins was wrong!
Now look at my code - I've wrote a fancy: void peripherial_init() fuction to keep all configuration in one place ... and never called it in int main(void)! In effect CS pin was input all the time, and sensor CS was always UP because it is pulled-up.
If someone will look in this thread in future here are fixed functions:
I hope in future I will have more complex problem with some more awesome solution :-)
Top
- Log in or register to post comments | https://www.avrfreaks.net/comment/2664346 | CC-MAIN-2019-18 | refinedweb | 1,583 | 70.02 |
Odoo Help
This community is for beginners and experts willing to share their Odoo knowledge. It's not a forum to discuss ideas, but a knowledge base of questions and their answers.
detailed procedure to do the number to text conversion [Closed]
The Question has been closedby
hey all!!! i wanted to display my amount figure in words....whether PO,SO etc wherever it is and i also want a field which will do that>....help me....help me openerp community...plz help me...it would be helpful...
To convert amount to text, you can use the
amount_to_text method defined in
openerp/tools/amount_to_text_en.py
Create a field which will store text value of amount. You can either create
char or
text field.
'text_amount': fields.char("Text Amount", size=100)
or
'text_amount': fields.text("Text Amount")
Now write a
on_change method which will be called when you change the value of amount field. First add
on_change in your xml where your amount field is.
Like this:
<field name="your_amount_field" on_change="onchange_amount(your_amount_field)"/>
Write the following code inside your class in your py:
from openerp.tools import amount_to_text_en def onchange_amount(self, cr, uid, ids, amount): text_amount = amount_to_text_en.amount_to_text(amount, 'en', 'EURO') return {'value': {'text_amount': text_amount}}
Perfect Sudhir !
Thank you for the compliment.
i get an error like File "C:\Program Files\OpenERP 6.0\Server\addons\crm\crm_lead.py", line 180 text_amount = amount_to_text_en.amount_to_text(amount, 'en', 'EURO') ^ IndentationError: expected an indented block..this is the last 3 lines of my error...
It is just an indentation error. Make sure your code is indented(give tab) properly. | https://www.odoo.com/forum/help-1/question/detailed-procedure-to-do-the-number-to-text-conversion-10504 | CC-MAIN-2016-50 | refinedweb | 265 | 61.73 |
Hi
I just got my 3DR 433 mhz kit. But one of the radios seems to have a problem.
When i plug it into my computer using tha FTDI cable the green light stays on and the red one blinks once about every scond. The terminal says:
"**PANIC**
radio_initialise failed"
The hardware seems ok. I was able to get it into bootloader mode by shorting the CTS and ground. firmware was then updated sucessfully. My other "Air"-radio is fine and i can upgrade firmware in "AT" mode and configure it with APM planner.
Im running:
Windows 7
APM planner 1.1.81 mav 0.9
Any suggestions?
Views: 2442
<![if !IE]>▶<![endif]> Reply to This
Is there a general way to unbrick these radios?
<![if !IE]>▶<![endif]> Reply
Hi kbcopter,
Here at 3DR we've seen one radio output the same panic message as yours, and from trying to unbrick that one I can say that reflashing the firmware won't fix it (uploading the bootloader again do it won't either, it's unrelated.)
you can contact [email protected] to get a replacement.
-Sam
<![if !IE]>▶<![endif]> Reply
Hi kbcopter,
I've had this happen to me twice with these radio during the development process. The two causes were:
In both cases the radios would panic continuously on boot. After I fixed the causes they were fine.
The 'panic' messages happen because the 8051 doesn't get sensible replies from the radio module when it tries to initialise the radio. That does sometimes happen on a software reboot using ATZ if we've left the radio module in a bad state (ie. I probably don't have the reset code quite right), but it shouldn't happen on an initial powerup. If it does happen on initial powerup (which it seems to be for you) then its almost certainly a hardware issue.
As Sam says, you can send yours back in for replacement, but if you prefer to tinker a bit first, perhaps the above will help you work out what's wrong.
Cheers, Tridge
<![if !IE]>▶<![endif]> Reply
I seem to be having the same exact issue as you. Did you ever get it fixed or just a replacement?
<![if !IE]>▶<![endif]> Reply
I got impatient bought another radio:) was planning on returning radio but wanted to see if i could fix it, motivation gone when i got another radio that worked.
There are some other threads about similar problems, u tried suggestions in this thread : ?
....or just contact 3dr and get a replacement.
-k
<![if !IE]>▶<![endif]> Reply
Hi gutzmann, the panic issue is usually caused by a malfunctioning HM-TRP module, which we don't manufacture. You can contact [email protected] to set up an RMA and get a replacement.
-Sam
<![if !IE]>▶<![endif]> Reply
<![if !IE]>▶<![endif]> Reply
Alan: The second box is auto filled out when the two radios connect, which has not happened here. Are you sure you've got two 433 radios, and not one 433 and one 900? Where did you get them?
<![if !IE]>▶<![endif]> Reply
I bought them direct from 3D Robotics.
They did work when I received them. I had them on the bench and only tried them a couple of times. They have not been flown, moved or had not been disconnected only the MP had been updated during this time.
<![if !IE]>▶<![endif]> Reply
Please connect the "air" on the the FTDI cable and confirm that it has the same settings as the "ground" one that you read in the screen shot above.
<![if !IE]>▶<![endif]> Reply
Thanks Chris, sorry to waste your time silly me I had number of channels set differently.
I will go and see the optician on Monday.
<![if !IE]>▶<![endif]> Reply
Hi, I'm getting exactly the same symptoms as the original poster on both (ground and air) of my radios.
These are the set I bought:...
When powered the green LED is solid and the red LED flashes briefly (about 1hz)
Ground PCB says 3DR Radio USB v1.0
Air PCB says 3DR Radio v1.2
I've tried reflashing the firmware (which succeeds) by shorting the 3rd pin with ground to get the solid but neither respond to any commands and constantly print
**PANIC** radio_initialise failed on the terminal screen (correct COM port and 57600 baud selected) and doesn't respond to any of the AT* commands.
I've tried both the radio part of the latest Mission Planner and the 3DR radio tool.
I have contacted Goodluckbuy to request return (not that I hold out much hope)
I'm out of ideas, is there anything I can check hardware wise for continuity / incorrect wiring?
Thanks
<![if !IE]>▶<![endif]> Reply
<![if !IE]>▶<![endif]> Reply to Discussion | https://diydrones.com/forum/topics/panic-radio-initialise-failed?commentId=705844%3AComment%3A856529 | CC-MAIN-2019-30 | refinedweb | 800 | 72.36 |
Using C++ OpenCV code with Android
It’s been five months since I have written a blog post. I was not too busy to write it, but just enjoying my last semester at the campus. I’m currently pursuing my thesis at Tesseract Imaging and have been working with Android OpenCV for some time. Although I don’t know much about it, I know enough to do my work. I’m going to write about how to use Native Development Kit (NDK) to use C++ OpenCV code with Android. I keep having issues with my Linux distro and have to change it often so I have to reconfigure the whole Eclipse Android settings.
Here’s the official OpenCV Page about NDK. Using C++ OpenCV code with Android binary package
Requirements
Setting Up
It’s really easy to set up all of these in Ubuntu. I am assuming that your ADT plugin is configured and only NDK is to be configured.
Extract Android NDK. And set
NDKROOT as the folder path. If you want to get into more details about Native Development, you can read this.
Important Aspects
Each Android Application with Native Code has a folder called jni/. It has your native source code and couple of other files. These other files are scripts which instructs the compiler to include certain files, libraries and modules.
Android.mk and Application.mk
Android.mk builds C++ source code of an Android Application. Application.mk is used when STL and exceptions are used in C++.
JNI Part
Java Native Interface (JNI) helps code written in Java to interact with native code (C/C++). It’s very useful for loading code from dynamic shared libraries. I’ll show you a sample program which converts the image to gray image.
This was a small example of using native code with android. Now, I’ll explain the code part by part.
extern “C”:
To allow for overloading of functions, C++ uses something called ** name mangling **. This means that function names are not the same in C++ as in plain C. To inhibit this name mangling, you have to declare functions as extern “C”.
JNIEXPORT jint JNICALL
This is the main function which interacts with Java code. JNIEXPORT void JNICALL passes a JNIENV pointer, a jobject pointer, and any Java arguments declared by the Java method. To define this function
- The first item is Java
- The second item is the name of the Java class where the method is declared.
- Finally, the name of the method appears.
Each part is separated by underscores. And, the “.” in class name is also replaced by “_”.
So,
Java_com_example_myapp_opencvpart_convertNativeGray com.example.myapp is the application name opencvpart is the class name where the method is declared. convertNativeGray is the function name
Passing Java Arguments
To pass OpenCV’s Mat from Java to Native code, we pass its address. This address is obtained using getNativeObjAddr() function. New Mat are initialized based on native address and can be used normally in C++ code.
I have also shown a function which can be easily added in the JNI part.
Java Part
Now, we move on to Java part where we’ll define the function and call it when required. We’ll use CvCameraViewListener2 and simply show gray image on the screen.
package com.example.myapp // Add imports here public class opencvpart extends Activity implements CvCameraViewListener2 { public native int convertNativeGray(long matAddrRgba, long matAddrGray); private Mat mRgba; private Mat mGray; // other part private BaseLoaderCallback mLoaderCallback = new BaseLoaderCallback(this) { @Override public void onManagerConnected(int status) { switch (status) { case LoaderCallbackInterface.SUCCESS: { System.loadLibrary("nativegray");// Load Native module Log.i(TAG, "OpenCV loaded successfully"); mOpenCvCameraView.enableView(); } break; default: { super.onManagerConnected(status); } break; } } }; // some more stuff public Mat onCameraFrame(CvCameraViewFrame inputFrame) { mRgba = inputFrame.rgba(); convertNativeGray(mRgba.getNativeObjAddr(), mGray.getNativeObjAddr()); return mGray; } }
How To Build Application using Eclipse (CDT Builder)
Follow these instructions step by step and it should be done.
Building application native part from Eclipse (CDT Builder)
Small Stuff
You’ll need to change layout.xml and AndroidManifest.xml file according to the requirements of the application.
Deploy
Now, the application is ready to deploy on the phone!
Final Application
I have made a sample application based on this which you can refer to. You can fork it.
nativecodeGray.
If you are trying this code, make sure that you set appropriate paths.
P.S. Didn’t do much in past few months. Looking forward to a lot of work. Plus, I’ll be busy on thursdays.
Playing around with Android UI
Articles focusing on Android UI - playing around with ViewPagers, CoordinatorLayout, meaningful motions and animations, implementing difficult customized views, etc. | https://jayrambhia.com/blog/ndk-android-opencv | CC-MAIN-2022-05 | refinedweb | 772 | 58.48 |
27 January 2010
This article assumes a familiarity with ActionScript 3.
As the Flash Platform continues to proliferate and reach more devices, developers need to adopt techniques for authoring with multiple screen sizes and resolutions in mind. This article discusses several techniques to help Flash developers author content that will render properly on any device, regardless of its screen resolution and pixel density.
The techniques explored in this article are somewhat "low-level" in that they show the programmatic creation of vectors and the use of algorithms (albeit simple ones) to dynamically size and position assets. There will always be a need for this level of authoring control for some applications, but there will also be "higher-level" and simpler alternatives in the future.
Adobe is currently working on a mobile Flex framework (codenamed "Slider"), which will automatically apply some of what's discussed here, and will make it much easier for you to write applications that adapt to different screens. Until Slider is available, however—and for those whose applications might not fit into the framework model—the tips and tricks discussed in this article will help to jumpstart your multi-screen development.
Before exploring specific techniques for authoring SWF-based applications for multiple screen sizes, it's worth covering some relevant terminology. Although you are probably familiar with the general meaning of these terms, a thorough understanding is necessary to actually put them to use:
The goal of a multi-screen application is not necessarily to look identical on every device; rather, it should adapt to any device it's installed on. In other words, multi-screen applications should dynamically adjust to the resolution and the PPI of their host devices, displaying more information on larger screens, removing or shrinking elements on smaller screens, ensuring buttons are physically large enough to tap on, and so on. In order for applications to work across different screens, they must be architected in such a way that they draw and redraw themselves at the proper times and using the proper constraints.
Before laying out your application, it's important that you set the Stage's scale mode and alignment. This should be done in your Sprite's constructor, just before or after registering for the Stage resize event (more on this below):
this.stage.scaleMode = StageScaleMode.NO_SCALE; this.stage.align = StageAlign.TOP_LEFT;
Setting the Stage's scale mode to NO_SCALE indicates that you don't want any kind of automatic scaling or layout of your content to occur, and that you will handle all the layout and scaling yourself. This is what enables applications to dynamically adapt themselves to different screen sizes.
Setting the Stage's align property to TOP_LEFT indicates that you want to lay content out relative to the top left-hand corner with the coordinates of 0,0.
The best place to do rendering in a multi-screen application is in a Stage resize event handler. The Stage will dispatch a resize event when the application is initialized and the size of the Stage (the area your application has to work with) is set. In a pure ActionScript application, you will want to listen for a Stage resize event in your main Sprite's constructor, like this:
this.stage.addEventListener(Event.RESIZE, doLayout);
After registering for resize events on the Stage, doLayout will get called whenever the Stage is resized. For example:
By performing your layout in the Stage resize event handler, your application will automatically lay itself out whenever the size of the Stage changes, regardless of why it changes.
Note:To determine the size of the Stage, use the stage.stageWidth and stage.stageHeight properties.
It is not necessary to set the dimensions of your SWF file using SWF metadata. In fact, doing so may prevent your resize event handler from being called when the application initializes. It's best to set the width and height of your application in the initial window section of your application descriptor file like this:
<initialWindow> <width>320</width> <height>480</height> <!-- several other properties... --> </initialWindow>
Applications designed to run on devices with different screen sizes and resolutions will often need to determine the size of assets dynamically. In other words, a button that looks and works perfectly on one device might be far too small to read or tap on devices with higher resolutions. Consequently, it's important that developers know how to think in terms of both pixels and inches.
In order to add a solid-colored background to an application, you need only think in terms of pixels. For example, it doesn't matter how big or small the screen is—the background will always need to match the screen's dimensions in pixels. The code below shows adding a solid-colored background to an application of any size:
var bg:Sprite = new Sprite(); bg.x = 0; bg.y = 0; bg.graphics.beginFill(0x006E59); bg.graphics.drawRect(0, 0, this.stage.stageWidth, this.stage.stageHeight); this.addChild(bg);
The stageWidth and stageHeight properties on the Stage object indicate the dimensions in pixels of the content's Stage. This information is all you need to create a background that works with any size application on any size device.
Sizing assets in pixels works in cases where the assets can be sized relatively (as in the case of a background), but not when assets need to be sized absolutely. In other words, it doesn't matter how big or small a background is as long as it's the size of the entire Stage; however, the size of things like fonts and buttons needs to be controlled more precisely. That's when you have to think in terms of physical units, or PPI.
Using PPI to determine an asset's dimensions allows you to control the exact size of an asset regardless of what kind of screen it's being rendered on. For example, to make a button that is always ¾" × ¼" whether it's being rendered on a huge desktop monitor or a small mobile screen, you must use the screen's PPI.
Note: Research has shown that a hit target should be no smaller than ¼", or 7mm, in order to be hit consistently and reliably. The only way to make sure your buttons are usable across devices is to think in terms of physical units.
The PPI of the current screen can be determined by the Capabilities.screenDPI property. Of course, assets are always ultimately sized in pixels rather than inches, so it's necessary to convert PPI into pixels. I use a simple utility function like this:
/** * Convert inches to pixels. */ private function inchesToPixels(inches:Number):uint { return Math.round(Capabilities.screenDPI * inches); }
The code below demonstrates how to create a sprite that will appear as ¾" × ¼" on any device:
var button:Sprite = new Sprite(); button.x = 20; button.y = 20; button.graphics.beginFill(0x 003037); button.graphics.drawRect(0, 0, this.inchesToPixels(.75), this.inchesToPixels(.25)); button.graphics.endFill(); this.addChild(button);
So as not to be too American-centric, and because the metric system is superior at smaller scales, here's a version for converting millimeters to pixels:
/** * Convert millimeters to pixels. */ private function mmToPixels(mm:Number):uint { return Math.round(Capabilities.screenDPI * (mm / 25.4)); }
Now that your application is architected in such a way that it can be authored to adapt to multiple screen sizes, and now that you have techniques for determining asset sizes, it's time to start laying out assets.
The key to laying out assets capable of adapting to different screen sizes is to know what to hard-code and what to calculate based on properties of the current screen. For example, to create a title bar at the top of your application, you know that you want the x and y coordinates to be 0,0 which means those properties can be hard-coded. In other words, regardless of what kind of device your application is running on, you will always want your title bar positioned in the top-left corner. The following code shows creating a new Sprite to be used as a title bar, and hard-coding its position:
var titleBar:Sprite = new Sprite(); titleBar.x = 0; titleBar.y = 0;
Although the position of the title bar won't change from one device to another, its width will. On high-resolution devices, the width needs to be greater.
Determining the width of the title bar is as easy as using the stage.stageWidth property, but what about the height? You could hard-code the height in pixels, but the size of it will change pretty dramatically from device to device depending on resolution. In this case, a better approach is to think in terms of physical units which will give your title bar a consistent look across all devices.
The following code creates a title bar that demonstrates all of the following concepts:
stage.stageWidthto dynamically determine the width of the title bar in pixels
var titleBar:Sprite = new Sprite(); titleBar.x = 0; titleBar.y = 0; titleBar.graphics.beginFill(0x003037); titleBar.graphics.drawRect(0, 0, this.stage.stageWidth, this.inchesToPixels(.3)); titleBar.graphics.endFill(); this.addChild(titleBar);
The examples above demonstrate how to dynamically size assets, but what about dynamically positioning them? For example, the position of the title bar is obvious since it always originates from the top-leftcorner, but what about positioning a footer whose x coordinate is always 0 but whose y coordinate is determined by the height of the Stage?
The code below shows how to create a footer that will always span the entire width of the application, and always be positioned at the bottom, regardless of the height of the screen:
var footer:Sprite = new Sprite(); footer.graphics.beginFill(0x003037); footer.graphics.drawRect(0, 0, this.stage.stageWidth, this.inchesToPixels(.3)); footer.graphics.endFill(); footer.x = 0; footer.y = this.stage.stageHeight - footer.height; this.addChild(footer);
There are three primary ways to lay out assets (two of which I've already covered):
Calculating the position of an asset based on another asset is referred to as relative positioning, and it's an extremely important technique for designing multi-screen applications. Going back to the title bar example, we succeeded in creating a title bar that will always be positioned and rendered like you want it to, but what about the title itself?
You could always hard-code a y position, which would place it a few pixels down from the top, then calculate an x coordinate based on the width of the Stage and the width of the title, but that won't always yield the best results—for two reasons:
Both of these issues can be addressed by positioning your title relative to your title bar. Since this is something I find myself doing often, I have a simple utility function that does it for me:
/** * Center one DisplayObject relative to another. */ private function center(foreground:DisplayObject, background:DisplayObject):void { foreground.x = (background.width / 2) - (foreground.width / 2); foreground.y = (background.height / 2) + (foreground.height / 2); }
Using the center() function above, positioning my title is simple:
var title:SimpleLabel = new SimpleLabel("My Application", "bold", 0xffffff, "_sans", this.inchesToPixels(.15)); this.center(title, titleBar); this.addChild(title);
Dynamically sizing and laying out things like title bars is one thing, but actual application content can be more challenging. For example, consider a game whose main content is a grid of squares. What's the best technique for making the game playable on multiple devices? Should the squares simply be scaled up or down depending on screen size, or should rows and columns be added or removed?
Both are valid approaches, depending on the game. For example, in the case of a chess or checkers game, you can't add or remove rows or columns based on the size of the screen. In this case, it's usually best just to scale your content up or down in order to keep it consistent.
Some games can actually adapt their game play based on the size of the screen. For example, a real-time strategy game may be enhanced on a larger screen since higher resolutions can accommodate more tiles, or in the case of smaller screens, it may be best to remove tiles so that the remaining tiles can be larger and render more detail. In this case, you need your content to adapt.
There's no single strategy or formula for adapting content to various screen sizes since content is so diverse, but there are some standard techniques that can be used. The following describes the logic of adapting a game board to any size screen:
The code below demonstrates the logic of laying as many ¼" blocks as possible in the allotted space while maintaining an equal margin both above and below the game board:
// Display as many blocks on the screen as will fit var BLOCK_SIZE:Number = .25; var BLOCK_BUFFER:uint = 3; var blockSize:uint = this.inchesToPixels(BLOCK_SIZE); var blockTotal:uint = blockSize + BLOCK_BUFFER; var cols:uint = Math.floor(this.stage.stageWidth / blockTotal); var rows:uint = Math.floor((this.stage.stageHeight - titleBar.height) / blockTotal); var blockXStart:uint = (this.stage.stageWidth - ((cols * blockSize) + ((cols - 1) * BLOCK_BUFFER))) / 2; var blockX:uint = blockXStart; var blockY:uint = ((this.stage.stageHeight + titleBar.height) - ((rows * blockSize) + ((rows - 1) * BLOCK_BUFFER))) / 2; for (var colIndex:uint = 0; colIndex < rows; ++colIndex) { for (var rowIndex:uint = 0; rowIndex < cols; ++rowIndex) { // Use a private function to draw the block var block:Sprite = this.getBlock(blockSize); block.x = blockX; block.y = blockY; this.addChild(block); blockX += blockTotal; } blockY += blockTotal; blockX = blockXStart; } }
The code below is the function that generates each block:
/** * Get a new block to add to the game board */ private function getBlock(blockSize:uint):Sprite { var block:Sprite = new Sprite(); block.graphics.beginFill(0xAAC228); block.graphics.drawRect(0, 0, blockSize, blockSize); block.graphics.endFill(); block.cacheAsBitmap = true; return block; }
Note: As each block is created, its cacheAsBitmap property is set to true. Although mobile application optimization is beyond the scope of this article, it's always best to set the cacheAsBitmap property to true for DisplayObjects that you don't anticipate will need to be scaled or rotated frequently. Although this improves performance on the desktop, it can have dramatic results on devices with less powerful processors.
Fonts are a key element of almost all applications, and must be handled with the same care as other assets when designing for multiple screens. Below are three tips for using fonts in a way that will successfully adapt across devices:
TextFieldobjects, but FTE gives you the ability to position your text much more precisely. The properties of
TextLinelike
ascent,
descent,
textWidth, and
textHeightmake it possible to position text with pixel-perfect accuracy.
SimpleLabelwhich encapsulates my use of FTE and makes creating text far simpler. Not only do I save several lines of code everyplace I want to add some text, but I also have one central location where I can make universal text changes, or fix text-related bugs.
DisplayObjectobject's
widthand
heightproperties in order to make text fields work better with some of my utilities like the
center()function above. Following are the
widthand
heightgetters that give me the best results:
public override function get width():Number { return this.textLine.textWidth; } public override function get height():Number { return (this.textLine.ascent - 1); }
As the Flash Platform proliferates, so do opportunities for Flash developers. The ability to use the same tools, skills, and code to build applications across an increasing array of diverse devices is hugely powerful, and gives Flash developers the chance to reach an unprecedented number of users. In order to take advantage of the ubiquity of the Flash Platform, however, developers need to build applications with multiple screens in mind. Fortunately, the understanding and mastery of just a few relatively simple techniques give developers the tools they need to make the most of the Flash Platform. | http://www.adobe.com/devnet/flash/articles/authoring_for_multiple_screen_sizes.html | CC-MAIN-2017-09 | refinedweb | 2,673 | 52.09 |
Hi I am fairly new to C++ programming, but my professor has given us a project that I feel is beyond our scope of what we have learned.
The problem is to write a program that creates a cryptogram out of a string. Note that spaces are not scrambled. Uppercase letter should be treated the same as lowercase letters. Well this is the program that I wrote going with the example in our book, but he says that, it is completely wrong and that he wants a cryptogram program that asks for the input of a string and no matter what is entered it will encrypt it. So now I am completely lost and there is nothing in our book that goes beyond what my program does. Can any one help me?
#include <iostream>
using std::cout;
using std::endl;
#include <string>
using std::string;
int main()
{
//compiler concatenates all parts into one string
string s ( "see spot run");
cout <<"Original string before any replacements:\n"<<s
<<"\n\nAfter replacements:\n";
//replace all letters with other letters
int x = s.find("s");
while (x<string::npos) {
s.replace( x,1, "b");
x = s.find ("s", x+1);
int x = s.find ("e");
while (x<string::npos) {
s.replace(x,1, "j");
x = s.find ("e", x+1);
int x = s.find ("p");
while (x<string::npos){
s.replace(x,1,"m");
x = s.find ("p", x+1);
int x = s.find ("o");
while (x<string::npos){
s.replace(x,1,"z");
x = s.find ("o", x+1);
int x = s.find ("t");
while (x<string::npos){
s.replace(x,1,"k");
x = s.find ("t", x+1);
int x = s.find ("r");
while (x<string::npos){
s.replace(x,1, "q");
x = s.find ("r", x +1);
int x = s.find ("u");
while (x<string::npos){
s.replace(x,1, "c");
x = s.find ("u", x +1);
int x = s.find ("n");
while (x<string::npos){
s.replace(x,1, "a");
x = s.find ("n", x +1);
}
}
}
}
}
}
}
}
cout <<s<< endl;
return 0;
} | http://cboard.cprogramming.com/cplusplus-programming/3162-cryptogram-program-printable-thread.html | CC-MAIN-2015-32 | refinedweb | 342 | 88.23 |
I have been reading the xmonad development thread, and it got me to thinking about my experiences with xmonad. I definitely think things could be improved, but I'm not sure how would be best. Maybe it's all a documentation thing, or maybe some of it could be new or changed APIs. I'll just try to explain how I have experienced xmonad, and what gave me problems. First of all, as background, I am a programmer. I did a little Haskell at university, but not much and I use it so rarely that I can never remember the syntax. Mostly I program in C#, C++, Python and a little bit of Coffeescript. I run Ubuntu on a laptop and mostly only install stuff via apt-get. I like the command-line and vim, but I'm no anti-mouse purist and spend a lot of time using only the mouse or the touchpad while browsing the web. I think I first installed xmonad via apt-get, and at some point had some problem that meant moving to cabal or darcs. That was a bit of a pain, especially trying to understand where each of these things had put files and knowing whether I'd successfully uninstalled them later. What is much more of a pain is months later when I try to change something and realise I need to upgrade and then I'm not even sure how I installed xmonad in the first place, so I'm not sure what I need to do to update it. After an upgrade to a new version of Ubuntu there's generally a period of a few weeks before I find the time to figure out how to get xmonad up and running again. Configuring xmonad is generally pretty mystifying. Even though I have a basic knowledge of Haskell, it's often very hard to untangle things in the config file or in examples. Problems I've had: * Strange symbols, like ".|.", "|||", "-->", "=?", "<&&>". These can't be Googled for, so it's hard to find out what they do or where they're defined. Haskell already has lots of operators that aren't particularly familiar to programmers coming from other languages, so this just seems to add to the confusion. * Hard to find out where functions are defined. Example configuration files generally import all modules into the one big namespace. In Python, the equivalent construct, "from foo import *" is generally discouraged for this reason, but I don't know what's conventional in Haskell. It seems the forms "import Module (x,y,z)" or "import qualified Module" would be better for example configuration files, as they aid discoverability: then when I see a name used in a configuration file I can easily see which module it came from without having to grep through source files. * Poor layout documentation. One of the things I was most interested when I was a new user was in finding out what layouts were available and how to compose them. I found the layout documentation to be very limited. There's no overview and there are no diagrams, just a great big list of layout names, so to find anything you pretty much have to try them all one by one. Even worse, some of them require arguments that they do not document. E.g. If there is somewhere that shows example screenshots of all these layouts, it would be good if it was more prominently linked from the docs. * Baffling failure messages. If you make a mistake in your xmonad configuration, or if you upgrade and the API for something has changed, the error messages are generally very hard to understand, and it can take quite a deep understanding of Haskell to realise that you just missed out an argument or used the wrong operator. I think most of these things can be improved, though maybe not the error messages. 
I really love *using* xmonad, and I've not found another window manager that I'm nearly as happy with, but the horror of *configuring* xmonad wastes a lot of my time and makes me disinclined to recommend it to others. Weeble. | http://www.haskell.org/pipermail/xmonad/2011-November/011958.html | CC-MAIN-2014-23 | refinedweb | 696 | 67.18 |
Struts 2 - The property, push and set tags
The property tag is used to get the property of a value, which will default to the top of the stack if none is specified. This example shows you the usage of three simple data tags - namely set, push and property.
Create action classes:
For this exercise, let us reuse the examples given in the "Data Type Conversion" chapter with a few modifications. We start by creating the classes. Consider the following POJO class, Environment.java:
package com.tutorialspoint.struts2;

public class Environment {
   private String name;

   public Environment(String name) {
      this.name = name;
   }

   public String getName() {
      return name;
   }

   public void setName(String name) {
      this.name = name;
   }
}

The action class from that chapter is not reproduced here; it simply holds an Environment instance and exposes it through a getter such as getEnvironment(), which is what makes the object available on the value stack as environment.
Create views
Let us have System.jsp with the following content:
<%@ page contentType="text/html; charset=UTF-8" %>
<%@ taglib prefix="s" uri="/struts-tags" %>
<html>
<head>
   <title>System Details</title>
</head>
<body>
   <p>The environment name property can be accessed in three ways:</p>

   (Method 1) Environment Name:
   <s:property value="environment.name"/><br/>

   (Method 2) Environment Name:
   <s:push value="environment">
      <s:property value="name"/><br/>
   </s:push>

   (Method 3) Environment Name:
   <s:set name="myenv" value="environment.name"/>
   <s:property value="myenv"/>
</body>
</html>
Let us now go through the three options one by one:
In the first method, we use the property tag to get the value of the environment's name. Since the environment variable is in the action class, it is automatically available on the value stack, so we can refer to it directly as environment.name. Method 1 works fine when you have a limited number of properties in a class. But imagine the Environment class had 20 properties; every time you needed one of them you would have to add the "environment." prefix. This is where the push tag comes in handy.

In the second method, we push the "environment" property onto the stack. Within the body of the push tag, the pushed object sits on top of the stack, so its properties can be referred to directly (simply as name), as shown in the example.
In the final method, we use the set tag to create a new variable called myenv. This variable's value is set to environment.name. So, now we can use this variable wherever we refer to the environment's name.
Configuration Files
Your struts.xml should map an action to System.jsp. Assuming the action class from the "Data Type Conversion" chapter is named SystemDetails (the package and action names below are illustrative), a minimal version looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE struts PUBLIC
   "-//Apache Software Foundation//DTD Struts Configuration 2.0//EN"
   "http://struts.apache.org/dtds/struts-2.0.dtd">
<struts>
   <package name="helloworld" extends="struts-default">
      <action name="system"
            class="com.tutorialspoint.struts2.SystemDetails">
         <result name="success">/System.jsp</result>
      </action>
   </package>
</struts>
Your web.xml simply needs the standard Struts 2 filter and filter mapping; a typical version looks like this (the filter class name varies slightly between Struts 2 releases):

<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns="http://java.sun.com/xml/ns/javaee" version="2.5">
   <filter>
      <filter-name>struts2</filter-name>
      <filter-class>org.apache.struts2.dispatcher.FilterDispatcher</filter-class>
   </filter>
   <filter-mapping>
      <filter-name>struts2</filter-name>
      <url-pattern>/*</url-pattern>
   </filter-mapping>
</web-app>
Star patterns are series of '*' (or any other character) used to draw geometrical shapes such as a square, triangle (pyramid), rhombus or heart. These patterns are prescribed by many programming books because they are excellent practice for loops and help develop logical thinking. Before printing any star pattern you should be comfortable with loops and with the basic logic of pattern printing (deciding, row by row, how many characters to print).
Below is a list of easy and hard star patterns in C programming with explanations. Practice as many as you can to sharpen your logical thinking.
List of star pattern programs and solutions in C:

1. Square
2. Hollow square
3. Rhombus
4. Hollow rhombus
5. Mirrored rhombus
6. Hollow mirrored rhombus
7. Right triangle
8. Hollow right triangle
9. Mirrored right triangle
10. Hollow mirrored right triangle
11. Inverted right triangle
12. Hollow inverted right triangle
13. Inverted mirrored right triangle
14. Hollow inverted mirrored right triangle
15. Pyramid (equilateral triangle)
16. Hollow pyramid
17. Inverted pyramid
18. Hollow inverted pyramid
19. Half diamond
20. Mirrored half diamond
21. Diamond
22. Hollow diamond
23. Right arrow
24. Left arrow
25. Plus star pattern
26. X star pattern
27. Heart star pattern 1
28. Heart star pattern 2
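To give a feel for the nested-loop logic these programs share, here is a small standalone sketch (not taken from any one entry above; the size n = 5 is just an example) that prints pattern 1 (square) and pattern 15 (pyramid):

#include <stdio.h>

int main()
{
    int i, j, n = 5; //n controls the size of each pattern

    //Pattern 1 - Square: n rows, each containing n stars
    for(i = 0; i < n; i++)
    {
        for(j = 0; j < n; j++)
            printf("*");
        printf("\n");
    }

    printf("\n");

    //Pattern 15 - Pyramid: row i has (n - i) leading spaces
    //followed by (2 * i - 1) stars
    for(i = 1; i <= n; i++)
    {
        for(j = i; j < n; j++)
            printf(" ");
        for(j = 1; j <= 2 * i - 1; j++)
            printf("*");
        printf("\n");
    }

    return 0;
}

Every other pattern in the list is a variation of the same idea: change the loop bounds, or print a space instead of a star when a position lies strictly inside the shape (for the hollow variants).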
You may also like
- Number pattern programs in C.
- Basic programming exercises and solutions.
- If else programming exercises and solutions.
- Switch case programming exercise and solutions.
- Conditional operators programming exercises and solutions.
- For loop programming exercises and solutions.
- Array programming exercises and solutions.
- String programming exercises and solutions.
This blog is awesome and I learn a lot about programming from here. The best thing about this blog is that you cover everything from beginner to expert level.
Love from C Programming Hub
Thanks, giving my best. Keep visiting.
Hello Pankaj,
I want to write a menu-oriented dynamic program which flows both forward and backward.
The structure of the program will be like:
Main menu -> Menu -> Sub-menu and so on.
The main thing is that at the end of every sub-menu I want to put two options:
" 1. To go back to the previous menu, press ____ .
2. To go back to the main menu, press _____ ."
I know that to put a menu in the program I will have to use switch-case statements,
but what will be the code to implement those two options in the program?
You may advise using fflush(stdin) after each sub-menu, but that only helps with the first option. I want to handle both options simultaneously.
What should the proper code look like?
I am writing my program in C.
Hello ankit,
Here I am giving the basic logic for creating a menu-oriented program in C that fits your need. I won't write the full program here; you can use the approach below.
#include <stdio.h>
#include <stdlib.h> //Used for system("pause")
#include <conio.h>  //Used for clrscr() (Turbo C/Windows compilers)

//Function prototypes
int printHomeMenu(void);
int printSubMenu(void);
int function1(void);
int function2(void);

int main()
{
    int choice;

    //Runs forever until user selects 3
    while(1)
    {
        choice = printHomeMenu();

        switch(choice)
        {
            case 1: function1(); break;
            case 2: function2(); break;
            case 3: return 0;
            default: printf("Invalid choice, try again\n");
                     system("pause");
        }
    }
}

int printHomeMenu(void)
{
    int choice;

    //You may want to clear the screen
    //before printing the menu on the home screen
    clrscr();

    printf("YOUR WELCOME MESSAGE\n");
    printf("--------------------\n");
    printf("1. Function 1\n");
    printf("2. Function 2\n");
    printf("3. Exit\n");
    printf("--------------------\n");
    printf("Enter your choice(1-3): ");
    scanf("%d", &choice);

    return choice;
}

int printSubMenu(void)
{
    int choice;

    printf("--------------------------\n");
    printf("1. Some function\n");
    printf("0. Main menu\n");
    printf("--------------------------\n");
    printf("Enter your choice: ");
    scanf("%d", &choice);

    return choice;
}

int function1(void)
{
    int choice;

    while(1)
    {
        choice = printSubMenu();

        //Do some other task based on choice
        switch(choice)
        {
            case 1: //do some task
                    break;
            case 0: return 0; //return to main menu
            default: printf("Invalid choice, enter again\n");
                     system("pause");
        }
    }
}

int function2(void)
{
    //Logic similar to function1()
    return 0;
}
thank you so much bro...
i will try with this sample code, and will contact u with the results!!!!
LROUND(3) BSD Programmer's Manual LROUND(3)
llround, llroundf, lround, lroundf - convert to nearest integral value
#include <math.h>

long long llround(double x);
long long llroundf(float x);
long lround(double x);
long lroundf(float x);
The lround() function returns the integer nearest to its argument x, rounding away from zero in halfway cases. If the rounded result is too large to be represented as a long value, an invalid exception is raised and the return value is undefined. Otherwise, if x is not an integer, lround() may raise an inexact exception. When the rounded result is representable as a long, the expression lround(x) is equivalent to (long)round(x) (although the former may be more efficient). The llround(), llroundf(), and lroundf() functions differ from lround() only in their input and output types.
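A short illustrative program (not part of the manual page itself) demonstrates the round-half-away-from-zero behaviour and the equivalence with (long)round(x); on many systems it must be linked with -lm:

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Halfway cases round away from zero. */
    printf("lround(2.5)  = %ld\n", lround(2.5));    /* prints 3  */
    printf("lround(-2.5) = %ld\n", lround(-2.5));   /* prints -3 */

    /* Other values go to the nearest integer. */
    printf("lround(2.3)  = %ld\n", lround(2.3));    /* prints 2  */

    /* Equivalent to (long)round(x) when the result fits in a long. */
    printf("(long)round(2.5) = %ld\n", (long)round(2.5));

    /* llround() returns long long, so it covers a wider range. */
    printf("llround(4e10) = %lld\n", llround(4e10));
    return 0;
}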
lrint(3), math(3), rint(3)
The llround(), llroundf(), lround(), and lroundf() functions conform to ISO/IEC 9899:1999 ("ISO C99").
Appendix A
SPECIMEN FINANCIAL STATEMENTS: PepsiCo, Inc.

THE ANNUAL REPORT

Once each year a corporation communicates to its stockholders and other interested parties by issuing a complete set of audited financial statements. The annual report, as this communication is called, summarizes the financial results of the company's operations for the year. The content and organization of corporate annual reports have become fairly standardized. Excluding the public relations part of the report (pictures, products, etc.), the following are the traditional financial portions of the annual report:

Financial Highlights
Letter to the Stockholders
Management's Discussion and Analysis
Financial Statements
Notes to the Financial Statements
Management's Report on Internal Control
Management Certification of Financial Statements
Auditors' Report
Supplementary Financial Information

The Financial Highlights section of PepsiCo's Annual Report is shown below. The financial information herein is reprinted with permission from the PepsiCo, Inc. 2005 Annual Report. The complete financial statements are available through a link at the book's companion website.

Financial Highlights
PepsiCo, Inc. and Subsidiaries
($ in millions except per share amounts; all per share amounts assume dilution)

Net Revenue, Total: $32,562 (Frito-Lay North America 32%, PepsiCo Beverages North America 28%, PepsiCo International 35%, Quaker Foods North America 5%)
Division Operating Profit, Total: $6,710 (Frito-Lay North America 38%, PepsiCo Beverages North America 30%, PepsiCo International 24%, Quaker Foods North America 8%)

Summary of Operations (2005, 2004, % Chg(a)):
Total net revenue: $32,562, $29,261, 11
Division operating profit: $6,710, $6,098, 10
Total operating profit: $5,922, $5,259, 13
Net income(b): $4,536, $4,004, 13
Earnings per share(b): $2.66, $2.32, 15

Other Data (2005, 2004, % Chg(a)):
Management operating cash flow(c): $4,204, $3,705, 13
Net cash provided by operating activities: $5,852, $5,054, 16
Capital spending: $1,736, $1,387, 25
Common share repurchases: $3,012, $3,028, (0.5)
Dividends paid: $1,642, $1,329, 24
Long-term debt: $2,313, $2,397, (3.5)

(a) Percentage changes above and in text are based on unrounded amounts. (b) In 2005, excludes the impact of the AJCA tax charge, the 53rd week and restructuring charges. In 2004, excludes certain prior year tax benefits, and restructuring and impairment charges. See page 76 for reconciliation to net income and earnings per share on a GAAP basis. (c) Includes the impact of net capital spending. Also, see Our Liquidity, Capital Resources and Financial Position in Management's Discussion and Analysis.

LETTER TO THE STOCKHOLDERS

Nearly every annual report contains a letter to the stockholders from the chairman of the board or the president, or both. This letter typically discusses the company's accomplishments during the past year and highlights significant events such as mergers and acquisitions, new products, operating achievements, business philosophy, changes in officers or directors, financing commitments, expansion plans, and future prospects. The letter to the stockholders is signed by Steve Reinemund, Chairman of the Board and Chief Executive Officer of PepsiCo. Only a short summary of the letter is provided below. The full letter can be accessed at the book's companion website.

MANAGEMENT'S DISCUSSION AND ANALYSIS

The management's discussion and analysis (MD&A) section covers three financial aspects of a company: its results of operations, its ability to pay near-term obligations, and its ability to fund operations and expansion. Management must highlight favorable or unfavorable trends and identify significant events and uncertainties that affect these three factors. This discussion obviously involves a number of subjective estimates and opinions. In its MD&A section, PepsiCo breaks its discussion into three major headings: Our Business, Our Critical Accounting Policies, and Our Financial Results. PepsiCo's MD&A section is 22 pages long; it can be accessed through the book's companion website.

FINANCIAL STATEMENTS AND ACCOMPANYING NOTES

The standard set of financial statements consists of: (1) a comparative income statement for 3 years, (2) a comparative statement of cash flows for 3 years, (3) a comparative balance sheet for 2 years, (4) a statement of stockholders' equity for 3 years, and (5) a set of accompanying notes that are considered an integral part of the financial statements. The auditors' report, unless stated otherwise, covers the financial statements and the accompanying notes. PepsiCo's financial statements and accompanying notes plus supplementary data and analyses follow.

Consolidated Statement of Income
PepsiCo, Inc. and Subsidiaries
Fiscal years ended December 31, 2005, December 25, 2004 and December 27, 2003 (in millions except per share amounts; 2005, 2004, 2003)

Net Revenue: $32,562, $29,261, $26,971
Cost of sales: 14,176, 12,674, 11,691
Selling, general and administrative expenses: 12,314, 11,031, 10,148
Amortization of intangible assets: 150, 147, 145
Restructuring and impairment charges: -, 150, 147
Merger-related costs: -, -, 59
Operating Profit: 5,922, 5,259, 4,781
Bottling equity income: 557, 380, 323
Interest expense: (256), (167), (163)
Interest income: 159, 74, 51
Income from Continuing Operations before Income Taxes: 6,382, 5,546, 4,992
Provision for Income Taxes: 2,304, 1,372, 1,424
Income from Continuing Operations: 4,078, 4,174, 3,568
Tax Benefit from Discontinued Operations: -, 38, -
Net Income: $4,078, $4,212, $3,568

Net Income per Common Share, Basic: continuing operations $2.43, $2.45, $2.07; discontinued operations -, 0.02, -; total $2.43, $2.47, $2.07
Net Income per Common Share, Diluted: continuing operations $2.39, $2.41, $2.05; discontinued operations -, 0.02, -; total $2.39, $2.44*, $2.05

* Based on unrounded amounts.
See accompanying notes to consolidated financial statements.
Consolidated Statement of Cash Flows (major captions only)
PepsiCo, Inc. and Subsidiaries
Fiscal years ended December 31, 2005, December 25, 2004 and December 27, 2003 (in millions; 2005, 2004, 2003)

Net Cash Provided by Operating Activities: $5,852, $5,054, $4,328
Capital spending: (1,736), (1,387), (1,345)
Net Cash Used for Investing Activities: (3,517), (2,330), (2,271)
Cash dividends paid: (1,642), (1,329), (1,070)
Share repurchases, common: (3,012), (3,028), (1,929)
Net Cash Used for Financing Activities: (1,878), (2,315), (2,902)
Effect of exchange rate changes on cash and cash equivalents: (21), 51, 27
Net Increase/(Decrease) in Cash and Cash Equivalents: 436, 460, (818)
Cash and Cash Equivalents, Beginning of Year: 1,280, 820, 1,638
Cash and Cash Equivalents, End of Year: $1,716, $1,280, $820

See accompanying notes to consolidated financial statements.

Consolidated Balance Sheet (major captions only)
PepsiCo, Inc. and Subsidiaries
December 31, 2005 and December 25, 2004 (in millions except per share amounts; 2005, 2004)

Total Current Assets: $10,454, $8,639
Total Assets: $31,727, $27,987
Total Current Liabilities: $9,406, $6,752
Total Liabilities: $17,476, $14,464
Total Common Shareholders' Equity: $14,320, $13,572
Total Liabilities and Shareholders' Equity: $31,727, $27,987

See accompanying notes to consolidated financial statements.

Consolidated Statement of Common Shareholders' Equity (major captions only)
PepsiCo, Inc. and Subsidiaries
Fiscal years ended December 31, 2005, December 25, 2004 and December 27, 2003 (in millions; 2005, 2004, 2003)

Total Comprehensive Income: $3,911, $4,593, $3,973
Total Common Shareholders' Equity, end of year: $14,320, $13,572, $11,896

See accompanying notes to consolidated financial statements.
Notes to Consolidated Financial Statements Note 1 Basis of Presentation and Our Divisions Basis of Presentation Our%. Our share of the net income of noncontrolled bottling affiliates is reported in our income statement as bottling equity income. Bottling equity income also includes any changes in our ownership interests of these affiliates. In 2005, bottling equity income includes $126 million of pre-tax gains on our sales of PBG stock. See Note 8 for additional information on our noncontrolled bottling affiliates. Our share of other noncontrolled affiliates is included in division operating profit. Intercompany balances and transactions are eliminated. In 2005, we had an additional week of results (53rd week). Our fiscal year ends on the last Saturday of each December, resulting in an additional week of results every five or six years. In connection with our ongoing BPT initiative, we aligned certain accounting policies across our divisions in 2005. We conformed our methodology for calculating our bad debt reserves and modified our policy for recognizing revenue for products shipped to customers by third-party carriers. Additionally, we conformed our method of accounting for certain costs, primarily warehouse and freight. These changes reduced our net revenue by $36 million and our operating profit by $60 million in 2005. We also made certain reclassifications on our Consolidated Statement of Income in the fourth quarter of 2005 from cost of sales to selling, general and administrative expenses in connection with our BPT initiative. These reclassifications resulted in reductions to cost of sales of $556 million through the third quarter of 2005, $732 million in the full year 2004 and $688 million in the full year 2003, with corresponding increases to selling, general and administrative expenses in those periods. These reclassifications had no net impact on operating profit and have been made to all periods presented for comparability., future cash flows associated with impairment testing for perpetual brands and goodwill, useful lives for intangible assets, tax reserves, stock-based compensation and pension and retiree medical accruals. Actual results could differ from these estimates. See Our Divisions below and for additional unaudited information on items affecting the comparability of our consolidated results, see Items Affecting Comparability in Managements Discussion and Analysis. Tabular dollars are in millions, except per share amounts. All per share amounts reflect common per share amounts, assume dilution unless noted, and are based on unrounded amounts. Certain reclassifications were made to prior years amounts to conform to the 2005 presentation. Our Divisions We manufacture or use contract manufacturers, market and sell a variety of salty, sweet and grain-based snacks, carbonated and non-carbonated beverages, and foods through our North American and international business divisions. Our North American divisions include the United States and Canada. The accounting policies for the divisions are the same as those described in Note 2, except for certain allocation methodologies for stock-based compensation expense and pension and retiree medical expense, as described in the unaudited information in Our Critical Accounting Policies. Additionally, beginning in the fourth quarter of 2005, we began centrally managing commodity derivatives on behalf of our divisions. 
Certain of the commodity derivatives, primarily those related to the purchase of energy for use by our divisions, do not qualify for hedge accounting treatment. These derivatives hedge underlying commodity price risk and were not entered into for speculative purposes. Such derivatives are marked to market with the resulting gains and losses recognized as a component of corporate unallocated expense. These gains and losses are reflected in division results when the divisions take delivery of the underlying commodity. Therefore, division results reflect the contract purchase price of the energy or other commodities.
Division results are based on how our Chairman and Chief Executive Officer evaluates our divisions. Division results exclude certain Corporate-initiated restructuring and impairment charges, merger-related costs and divested businesses. For additional unaudited information on our divisions, see Our Operations in Management's Discussion and Analysis.
Our divisions are Frito-Lay North America (FLNA), PepsiCo Beverages North America (PBNA), PepsiCo International (PI) and Quaker Foods North America (QFNA).

Net Revenue
                                        2005       2004       2003
FLNA                                 $10,322    $ 9,560    $ 9,091
PBNA                                   9,146      8,313      7,733
PI                                    11,376      9,862      8,678
QFNA                                   1,718      1,526      1,467
Total division                        32,562     29,261     26,969
Divested businesses                        -          -          2
Total                                $32,562    $29,261    $26,971

Operating Profit
                                        2005       2004       2003
FLNA                                  $2,529     $2,389     $2,242
PBNA                                   2,037      1,911      1,690
PI                                     1,607      1,323      1,061
QFNA                                     537        475        470
Total division                         6,710      6,098      5,463
Divested businesses                        -          -         26
Corporate                               (788)      (689)      (502)
                                       5,922      5,409      4,987
Restructuring and impairment charges       -       (150)      (147)
Merger-related costs                       -          -        (59)
Total                                 $5,922     $5,259     $4,781

[Charts: Division Net Revenue: FLNA 32%, PBNA 28%, PI 35%, QFNA 5%. Division Operating Profit: FLNA 38%, PBNA 30%, PI 24%, QFNA 8%.]

Divested Businesses
During 2003, we sold our Quaker Foods North America Mission pasta business. The results of this business are reported as divested businesses.

Corporate
Corporate includes costs of our corporate headquarters, centrally managed initiatives, such as our BPT initiative, unallocated insurance and benefit programs, foreign exchange transaction gains and losses, and certain commodity derivative gains and losses, as well as profit-in-inventory elimination adjustments for our noncontrolled bottling affiliates and certain other items.

Restructuring and Impairment Charges and Merger-Related Costs
See Note 3.
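As a quick arithmetic check, the operating profit columns above tie out; for example, for 2005 and 2003:

\[
\begin{aligned}
2005{:}\quad & 6{,}710 - 788 = 5{,}922 \\
2003{:}\quad & 5{,}463 + 26 - 502 - 147 - 59 = 4{,}781
\end{aligned}
\]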
Other Division Information

Total Assets
                                        2005       2004       2003
FLNA                                 $ 5,948    $ 5,476    $ 5,332
PBNA                                   6,316      6,048      5,856
PI                                     9,983      8,921      8,109
QFNA                                     989        978        995
Total division                        23,236     21,423     20,292
Corporate(a)                           5,331      3,569      2,384
Investments in bottling affiliates     3,160      2,995      2,651
Total                                $31,727    $27,987    $25,327

Capital Spending
                                        2005       2004       2003
FLNA                                  $  512     $  469     $  426
PBNA                                     320        265        332
PI                                       667        537        521
QFNA                                      31         33         32
Total division                         1,530      1,304      1,311
Corporate                                206         83         34
Total                                 $1,736     $1,387     $1,345

(a) Corporate assets consist principally of cash and cash equivalents, short-term investments, and property, plant and equipment.

[Charts: Total Assets: FLNA 19%, PBNA 20%, PI 31%, QFNA 3%, Other 27%. Capital Spending: FLNA 30%, PBNA 18%, PI 38%, QFNA 2%, Other 12%.]

Amortization of Intangible Assets
                                        2005       2004       2003
FLNA                                      $3         $3         $3
PBNA                                      76         75         75
PI                                        71         68         66
QFNA                                       -          1          1
Total division                           150        147        145
Corporate                                  -          -          -
Total                                   $150       $147       $145

Depreciation and Other Amortization
                                        2005       2004       2003
FLNA                                  $  419     $  420     $  416
PBNA                                     264        258        245
PI                                       420        382        350
QFNA                                      34         36         36
Total division                         1,137      1,096      1,047
Corporate                                 21         21         29
Total                                 $1,158     $1,117     $1,076

Net Revenue(a)
                                        2005       2004       2003
United States                        $19,937    $18,329    $17,377
Mexico                                 3,095      2,724      2,642
United Kingdom                         1,821      1,692      1,510
Canada                                 1,509      1,309      1,147
All other countries                    6,200      5,207      4,295
Total                                $32,562    $29,261    $26,971

Long-Lived Assets(b)
                                        2005       2004       2003
United States                        $10,723    $10,212    $ 9,907
Mexico                                   902        878        869
United Kingdom                         1,715      1,896      1,724
Canada                                   582        548        508
All other countries                    3,948      3,339      3,123
Total                                $17,870    $16,873    $16,131

[Charts: Net Revenue: United States 61%, Mexico 10%, United Kingdom 6%, Canada 4%, Other 19%. Long-Lived Assets: United States 60%, Mexico 5%, United Kingdom 10%, Canada 3%, Other 22%.]

(a) Represents net revenue from businesses operating in these countries.
(b) Long-lived assets represent net property, plant and equipment, nonamortizable and net amortizable intangible assets and investments in noncontrolled affiliates. These assets are reported in the country where they are primarily used.

Note 2 Our Significant Accounting Policies

Revenue Recognition
We recognize revenue upon shipment or delivery to our customers based on written sales terms that do not allow for a right of return. However, our policy for direct-store-delivery (DSD) and chilled products is to remove and replace damaged and out-of-date products from store shelves to ensure that our consumers receive the product quality and freshness that they expect. Similarly, our policy for warehouse-distributed products is to replace damaged and out-of-date products. Based on our historical experience with this practice, we have reserved for anticipated damaged and out-of-date products. For additional unaudited information on our revenue recognition and related policies, including our policy on bad debts, see Our Critical Accounting Policies in Management's Discussion and Analysis.
We are exposed to concentration of credit risk by our customers, Wal-Mart and PBG. Wal-Mart represents approximately 9% of our net revenue, including concentrate sales to our bottlers which are used in finished goods sold by them to Wal-Mart; and PBG represents approximately 10%. We have not experienced credit issues with these customers.

Sales Incentives and Other Marketplace Spending
We offer sales incentives and discounts through various programs to our customers and consumers. Sales incentives and discounts are accounted for as a reduction of revenue and totaled $8.9 billion in 2005, $7.8 billion in 2004 and $7.1 billion in 2003. While most of these incentive arrangements have terms of no more than one year, certain arrangements extend beyond one year. For example, fountain pouring rights may extend up to 15 years.
Costs incurred to obtain these arrangements are recognized over the contract period, and the remaining balances of $321 million at December 31, 2005 and $337 million at December 25, 2004 are included in current assets and other assets in our Consolidated Balance Sheet. For additional unaudited information on our sales incentives, see Our Critical Accounting Policies in Management's Discussion and Analysis.
Other marketplace spending includes the costs of advertising and other marketing activities and is reported as selling, general and administrative expenses. Advertising expenses were $1.8 billion in 2005, $1.7 billion in 2004 and $1.6 billion in 2003. Deferred advertising costs are not expensed until the year first used and consist of: media and personal service prepayments, promotional materials in inventory, and production costs of future media advertising. Deferred advertising costs of $202 million and $137 million at year-end 2005 and 2004, respectively, are classified as prepaid expenses in our Consolidated Balance Sheet.

Distribution Costs
Distribution costs, including the costs of shipping and handling activities, are reported as selling, general and administrative expenses. Shipping and handling expenses were $4.1 billion in 2005, $3.9 billion in 2004 and $3.6 billion in 2003.

Cash Equivalents
Cash equivalents are investments with original maturities of three months or less which we do not intend to roll over beyond three months.

Software Costs
We capitalize certain computer software and software development costs incurred in connection with developing or obtaining computer software for internal use. Capitalized software costs are included in property, plant and equipment on our Consolidated Balance Sheet and amortized on a straight-line basis over the estimated useful lives of the software, which generally do not exceed 5 years. Net capitalized software and development costs were $327 million at December 31, 2005 and $181 million at December 25, 2004.

Commitments and Contingencies
We are subject to various claims and contingencies related to lawsuits, taxes and environmental matters, as well as commitments under contractual and other commercial obligations. We recognize liabilities for contingencies and commitments when a loss is probable and estimable. For additional information on our commitments, see Note 9.

Other Significant Accounting Policies
Our other significant accounting policies are disclosed as follows:
Property, Plant and Equipment and Intangible Assets: Note 4 and, for additional unaudited information on brands and goodwill, see Our Critical Accounting Policies in Management's Discussion and Analysis.
Income Taxes: Note 5 and, for additional unaudited information, see Our Critical Accounting Policies in Management's Discussion and Analysis.
Stock-Based Compensation Expense: Note 6 and, for additional unaudited information, see Our Critical Accounting Policies in Management's Discussion and Analysis.
Pension, Retiree Medical and Savings Plans: Note 7 and, for additional unaudited information, see Our Critical Accounting Policies in Management's Discussion and Analysis.
Risk Management: Note 10 and, for additional unaudited information, see Our Business Risks in Management's Discussion and Analysis.
There have been no new accounting pronouncements issued or effective during 2005 that have had, or are expected to have, a material impact on our consolidated financial statements.
Note 3 Restructuring and Impairment Charges and Merger-Related Costs

2005 Restructuring Charges
In the fourth quarter of 2005, we incurred a charge of $83 million ($55 million after-tax or $0.03 per share) in conjunction with actions taken to reduce costs in our operations, principally through headcount reductions. Of this charge, $34 million related to FLNA, $21 million to PBNA, $16 million to PI and $12 million to Corporate (recorded in corporate unallocated expenses). Most of this charge related to the termination of approximately 700 employees. We expect the substantial portion of the cash payments related to this charge to be paid in 2006.

2004 and 2003 Restructuring and Impairment Charges
In the fourth quarter of 2004, we incurred a charge of $150 million ($96 million after-tax or $0.06 per share) in conjunction with the consolidation of FLNA's manufacturing network as part of its ongoing productivity program. Of this charge, $93 million related to asset impairment, primarily reflecting the closure of four U.S. plants. Production from these plants was redeployed to other FLNA facilities in the U.S. The remaining $57 million included employee-related costs of $29 million, contract termination costs of $8 million and other exit costs of $20 million. Employee-related costs primarily reflect the termination costs for approximately 700 employees. Through December 31, 2005, we have paid $47 million and incurred non-cash charges of $10 million, leaving substantially no accrual.
In the fourth quarter of 2003, we incurred a charge of $147 million ($100 million after-tax or $0.06 per share) in conjunction with actions taken to streamline our North American divisions and PepsiCo International. These actions were taken to increase focus and eliminate redundancies at PBNA and PI and to improve the efficiency of the supply chain at FLNA. Of this charge, $81 million related to asset impairment, reflecting $57 million for the closure of a snack plant in Kentucky, the retirement of snack manufacturing lines in Maryland and Arkansas and $24 million for the closure of a PBNA office building in Florida. The remaining $66 million included employee-related costs of $54 million and facility and other exit costs of $12 million. Employee-related costs primarily reflect the termination costs for approximately 850 sales, distribution, manufacturing, research and marketing employees. As of December 31, 2005, all terminations had occurred and substantially no accrual remains.

Merger-Related Costs
In connection with the Quaker merger in 2001, we recognized merger-related costs of $59 million ($42 million after-tax or $0.02 per share) in 2003.

Note 4 Property, Plant and Equipment and Intangible Assets
                                              Average Useful Life       2005        2004       2003
Property, plant and equipment, net
  Land and improvements                        10-30 yrs.            $   685     $   646
  Buildings and improvements                   20-44                   3,736       3,605
  Machinery and equipment, including
    fleet and software                          5-15                  11,658      10,950
  Construction in progress                                              1,066         729
                                                                       17,145      15,930
  Accumulated depreciation                                             (8,464)     (7,781)
                                                                      $ 8,681     $ 8,149
  Depreciation expense                                                $ 1,103     $ 1,062     $1,020
Amortizable intangible assets, net
  Brands                                        5-40                 $ 1,054     $ 1,008
  Other identifiable intangibles                3-15                     257         225
                                                                        1,311       1,233
  Accumulated amortization                                               (781)       (635)
                                                                      $   530     $   598
  Amortization expense                                                $   150     $   147     $ 145

Depreciation and amortization are recognized on a straight-line basis over an asset's estimated useful life. Land is not depreciated and construction in progress is not depreciated until ready for service.
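For reference, straight-line depreciation simply spreads an asset's depreciable cost evenly over its useful life. The figures below are assumed purely for illustration (they are not PepsiCo amounts) and sit within the useful-life ranges shown above:

\[
\text{Annual depreciation} = \frac{\text{cost} - \text{salvage value}}{\text{useful life}} = \frac{\$50\ \text{million} - \$5\ \text{million}}{10\ \text{years}} = \$4.5\ \text{million per year}.
\]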
Amortization of intangible assets for each of the next five years, based on average 2005 foreign exchange rates, is expected to be $152 million in 2006, $35 million in 2007, $35 million in 2008, $34 million in 2009 and $33 million in 2010. Managements Discussion and Analysis. Financial Statements and Accompanying Notes A13 Nonamortizable Intangible Assets Perpetual brands and goodwill are assessed for impairment at least annually to ensure that discounted future cash flows continue to exceed the related book value. A perpetual brand is impaired if its book value exceeds its fair value. Goodwill is evaluated for impairment if the book value of its reporting unit exceeds its fair value. A reporting unit can be a division or business within a division. If the fair value of an evaluated asset is less than its book value, the asset is written down based on its discounted future cash flows to fair value. No impairment charges resulted from the required impairment evaluations. The change in the book value of nonamortizable intangible assets is as follows: Balance, Beginning 2004 Frito-Lay North America Goodwill PepsiCo Beverages North America Goodwill Brands PepsiCo International Goodwill Brands Quaker Foods North America Goodwill Corporate Pension intangible Total goodwill Total brands Total pension intangible $ 130 2,157 59 2,216 1,334 808 2,142 175 2 3,796 867 2 $4,665 Acquisition $ 29 29 29 $29 Translation and Other $8 4 4 72 61 133 3 84 61 3 $148 Balance, End of 2004 $ 138 2,161 59 2,220 1,435 869 2,304 175 5 3,909 928 5 $ 4,842 Acquisition $ 278 263 541 278 263 $541 Translation and Other $ 7 3 3 (109) (106) (215) (4) (99) (106) (4) $(209) Balance, End of 2005 $ 145 2,164 59 2,223 1,604 1,026 2,630 175 1 4,088 1,085 1 $5,174 A14 Appendix A Specimen Financial Statements: PepsiCo, Inc. Note 5 Income Taxes 2005 Income before income taxes continuing operations U.S.................................................................................................................................................... Foreign.............................................................................................................................................. Provision for income taxes continuing operations Current: U.S. Federal....................................................................................................................... Foreign .............................................................................................................................. State ................................................................................................................................. Deferred: U.S. Federal ....................................................................................................................... Foreign .............................................................................................................................. State ................................................................................................................................. 
$3,175 3,207 $6,382 $1,638 426 118 2,182 137 (26) 11 122 $2,304 35.0% 1.4 7.0 (6.5) (0.8) 36.1% $ 993 772 863 135 35 169 2,967 608 426 400 342 520 2,296 (532) 1,764 $1,203 $231 $1,434 $564 (28) (4) $532 2004 $2,946 2,600 $5,546 $1,030 256 69 1,355 11 5 1 17 $1,372 35.0% 0.8 (5.4) (4.8) (0.9) 24.7% $ 850 857 669 153 46 157 2,732 666 402 402 379 460 2,309 (564) 1,745 $ 987 $229 $1,216 $438 118 8 $564 $487 (52) 3 $438 2003 $3,267 1,725 $4,992 $1,326 341 80 1,747 (274) (47) (2) (323) $1,424 35.0% 1.0 (5.5) (2.2) 0.2 28.5% Tax rate reconciliation continuing operations U.S. Federal statutory tax rate .......................................................................................................... State income tax, net of U.S. Federal tax benefit.............................................................................. Taxes on AJCA repatriation................................................................................................................ Lower taxes on foreign results .......................................................................................................... Settlement of prior years audit ........................................................................................................ Other, net.......................................................................................................................................... Annual tax rate ................................................................................................................................. Deferred tax liabilities Investments in noncontrolled affiliates ............................................................................................ Property, plant and equipment ......................................................................................................... Pension benefits ............................................................................................................................... Intangible assets other than nondeductible goodwill....................................................................... Zero coupon notes ............................................................................................................................ Other................................................................................................................................................. Gross deferred tax liabilities............................................................................................................. Deferred tax assets Net carryforwards ............................................................................................................................. Stock-based compensation............................................................................................................... Retiree medical benefits................................................................................................................... Other employee-related benefits....................................................................................................... Other................................................................................................................................................. Gross deferred tax assets ................................................................................................................. Valuation allowances........................................................................................................................ 
Deferred tax assets, net.................................................................................................................... Net deferred tax liabilities ................................................................................................................ Deferred taxes included within: Prepaid expenses and other current assets.................................................................................. Deferred income taxes .................................................................................................................. Analysis of valuation allowances Balance, beginning of year............................................................................................................... (Benefit)/provision........................................................................................................................ Other (deductions)/additions........................................................................................................ Balance, end of year......................................................................................................................... Financial Statements and Accompanying Notes A15 For additional unaudited information on our income tax policies, including our reserves for income taxes, see Our Critical Accounting Policies in Managements Discussion and Analysis. Carryforwards, Credits and Allowances Operating loss carryforwards totaling $5.1 billion at year-end 2005 are being carried forward in a number of foreign and state jurisdictions where we are permitted to use tax operating losses from prior periods to reduce future taxable income. These operating losses will expire as follows: $0.1 billion in 2006, $4.1 billion between 2007 and 2025 and $0.9 billion may be carried forward indefinitely. In addition, certain tax credits generated in prior periods of approximately $39.4 million are available to reduce certain foreign tax liabilities through 2011. We establish valuation allowances for our deferred tax assets when the amount of expected future taxable income is not likely to support the use of the deduction or credit. Undistributed International Earnings The AJCA created a one-time incentive for U.S. corporations to repatriate undistributed international earnings by providing an 85% dividends received deduction. As approved by our Board of Directors in July 2005, we repatriated approximately $7.5 billion in earnings previously considered indefinitely reinvested outside the U.S. in the fourth quarter of 2005. In 2005, we recorded income tax expense of $460 million associated with this repatriation. Other than the earnings repatriated, we intend to continue to reinvest earnings outside the U.S. for the foreseeable future and, therefore, have not recognized any U.S. tax expense on these earnings. At December 31, 2005, we had approximately $7.5 billion of undistributed international earnings. Reserves A number of years may elapse before a particular matter, for which we have established a reserve, is audited and finally resolved. The number of years with open tax audits varies depending on the tax jurisdiction. During 2004, we recognized $266 million of tax benefits related to the favorable resolution of certain open tax issues. In addition, in 2004, we recognized a tax benefit of $38 million upon agreement with the IRS on an open issue related to our discontinued restaurant operations. At the end of 2003, we entered into agreements with the IRS for open years through 1997. 
These agreements resulted in a tax benefit of $109 million in the fourth quarter of 2003. As part of these agreements, we also resolved the treatment of certain other issues related to future tax years. The IRS has initiated their audits of our tax returns for the years 1998 through 2002. Our tax returns subsequent to 2002 have not yet been examined. While it is often difficult to predict the final outcome or the timing of resolution of any particular tax matter, we believe that our reserves reflect the probable outcome of known tax contingencies. Settlement of any particular issue would usually require the use of cash. Favorable resolution would be recognized as a reduction to our annual tax rate in the year of resolution. Our tax reserves, covering all federal, state and foreign jurisdictions, are presented in the balance sheet within other liabilities (see Note 14), except for any amounts relating to items we expect to pay in the coming year which are included in current income taxes payable. For further unaudited information on the impact of the resolution of open tax issues, see Other Consolidated Results. Note 6 Stock-Based Compensation Our stock-based compensation program is a broad-based program designed to attract and retain employees while also aligning employees interests with the interests of our shareholders. Employees at all levels participate in our stock-based compensation program. In addition, members of our Board of Directors participate in our stockbased compensation program in connection with their service on our Board. Stock options and RSUs are granted to employees under the shareholder-approved 2003 Long-Term Incentive Plan (LTIP), our only active stock-based plan. Stock-based compensation expense was $311 million in 2005, $368 million in 2004 and $407 million in 2003. Related income tax benefits recognized in earnings were $87 million in 2005, $103 million in 2004 and $114 million in 2003. At yearend 2005, 51 million shares were available for future executive and SharePower grants. For additional unaudited information on our stock-based compensation program, see Our Critical Accounting Policies in Managements Discussion and Analysis. SharePower Grants SharePower options are awarded under our LTIP to all eligible employees, based on job level or classification, and in the case of international employees, tenure as well. All stock option grants have an exercise price equal to the fair market value of our common stock on the day of grant and generally have a 10-year term with vesting after three years. Executive Grants All senior management and certain middle management are eligible for executive grants under our LTIP. All stock option grants have an exercise price equal to the fair market value of our common stock on the day of grant and generally have a 10-year term with vesting after three years. There have been no reductions to the exercise price of previously issued awards, and any repricing of awards would require approval of our shareholders. Beginning in 2004, executives who are awarded long-term incentives based on their performance are offered the choice of stock options or RSUs. RSU expense is based on the fair value of PepsiCo stock on the date of grant and is amortized over the vesting period, generally three years. Each restricted stock unit can be settled in a share of our stock after the vesting period. Executives who elect RSUs receive one RSU for every four stock options that would have otherwise been granted. 
Senior officers do not have a choice and are granted 50% stock options and 50% RSUs. Vesting of RSU awards for senior officers is contingent upon the achievement of pre-established performance targets. We granted 3 million RSUs in both 2005 and 2004 with weighted-average intrinsic values of $53.83 and $47.28, respectively. A16 Appendix A Specimen Financial Statements: PepsiCo, Inc. Method of Accounting and Our Assumptions We account for our employee stock options under the fair value method of accounting using a Black-Scholes valuation model to measure stock-based compensation expense at the date of grant. We adopted SFAS 123R, Share-Based Payment, under the modified prospective method in the first quarter of 2006. We do not expect our adoption of SFAS 123R to materially impact our financial statements. Our Stock Option Activity(a) Our weighted-average Black-Scholes fair value assumptions include: Expected life Risk free interest rate Expected volatility Expected dividend yield 2005 6 yrs. 3.8% 23% 1.8% 2004 6 yrs. 3.3% 26% 1.8% 2003 6 yrs. 3.1% 27% 1.15% Outstanding at beginning of year Granted Exercised Forfeited/expired Outstanding at end of year Exercisable at end of year 2005 Options Average Price(b) 174,261 $40.05 12,328 53.82 (30,945) 35.40 (5,495) 43.31 150,149 42.03 89,652 40.52 Options 198,173 14,137 (31,614) (6,435) 174,261 94,643 2004 Average Price(b) $38.12 47.47 30.57 43.82 40.05 36.41 2003 Options Average Price(b) 190,432 $36.45 41,630 39.89 (25,833) 26.74 (8,056) 43.56 198,173 38.12 97,663 32.56 Stock options outstanding and exercisable at December 31, 2005(a) Range of Exercise Price $14.40 to $21.54 $23.00 to $33.75 $34.00 to $43.50 $43.75 to $56.75 Options Outstanding Options Average Price(b) Average Life(c) 905 $ 20.01 3.56 yrs. 14,559 30.46 3.07 82,410 39.44 5.34 52,275 49.77 7.17 150,149 42.03 5.67 Options Exercisable Options Average Price(b) Average Life(c) 905 $20.01 3.56 yrs. 14,398 30.50 3.05 48,921 39.19 4.10 25,428 49.48 6.09 89,652 40.52 4.45 (a) Options are in thousands and include options previously granted under Quaker plans. No additional options or shares may be granted under the Quaker plans. (b) Weighted-average exercise price. (c) Weighted-average contractual life remaining. Our RSU Activity(a) Outstanding at beginning of year Granted Converted Forfeited/expired Outstanding at end of year (a) RSUs are in thousands. (b) Weighted-average intrinsic value. (c) Weighted-average contractual life remaining. RSUs 2,922 3,097 (91) (259) 5,669 2005 Average Intrinsic Value(b) $47.30 53.83 48.73 50.51 50.70 Average Life(c) 1.8 yrs. RSUs 3,077 (18) (137) 2,922 2004 Average Intrinsic Value(b) $ 47.28 47.25 47.25 47.30 Average Life(c) 2.2 yrs. Other stock-based compensation data Weighted-average fair value of options granted Total intrinsic value of options/RSUs exercised/converted(a) Total intrinsic value of options/RSUs outstanding(a) Total intrinsic value of options exercisable(a) (a) In thousands. 2005 $13.45 $632,603 $2,553,594 $1,662,198 Stock Options 2004 $12.04 $667,001 $2,062,153 $1,464,926 RSUs 2003 $11.21 $466,719 $1,641,505 $1,348,658 2005 $4,974 $334,931 2004 $914 $151,760 At December 31, 2005, there was $315 million of total unrecognized compensation cost related to nonvested share-based compensation grants. This unrecognized compensation is expected to be recognized over a weighted-average period of 1.6 years. 
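The note does not spell out the option-valuation formula, but a standard Black-Scholes model for a dividend-paying stock, using the 2005 assumptions above and assuming an at-the-money grant with stock price S equal to exercise price K of approximately $53.82 (the weighted-average exercise price of 2005 grants), expected life T = 6 years, risk-free rate r = 3.8%, volatility sigma = 23% and dividend yield q = 1.8%, is:

\[
d_1 = \frac{\ln(S/K) + (r - q + \sigma^2/2)\,T}{\sigma\sqrt{T}}, \qquad
d_2 = d_1 - \sigma\sqrt{T}, \qquad
C = S e^{-qT} N(d_1) - K e^{-rT} N(d_2).
\]

Plugging in these assumed inputs gives a value of roughly $13 per option, in the neighborhood of the $13.45 weighted-average fair value reported for 2005 grants; any difference would reflect averaging across grant dates and the precise inputs used at each grant.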
Financial Statements and Accompanying Notes A17 Note 7 Pension, Retiree Medical and Savings Plans Our pension plans cover full-time employees in the U.S. and certain international employees. Benefits are determined based on either years of service or a combination of years of service and earnings. U.S. retirees are also eligible for medical and life insurance benefits (retiree medical) if they meet age and service requirements. Generally, our share of retiree medical costs is capped at specified dollar amounts, which vary based upon years of service, with retirees contributing the remainder of the costs. We use a September 30 measurement date and all plan assets and liabilities are generally reported as of that date. The cost or benefit of plan changes that increase or decrease benefits for prior employee service (prior service cost) is included in expense on a straight-line basis over the average remaining service period of employees expected to receive benefits. The Medicare Act was signed into law in December 2003 and we applied the provisions of the Medicare Act to our plans in 2005 and 2004. The Medicare Act provides a subsidy for sponsors of retiree medical plans who offer drug benefits equivalent to those provided under Medicare. As a result of the Medicare Act, our 2005 and 2004 retiree medical costs were $11 million and $7 million lower, respectively, and our 2005 and 2004 liabilities were reduced by $136 million and $80 million, respectively. We expect our 2006 retiree medical costs to be approximately $18 million lower than they otherwise would have been as a result of the Medicare Act. For additional unaudited information on our pension and retiree medical plans and related accounting policies and assumptions, see Our Critical Accounting Policies in Managements Discussion and Analysis. 2005 Weighted-average assumptions Liability discount rate........................................................ Expense discount rate........................................................ Expected return on plan assets ......................................... Rate of compensation increases........................................ Components of benefit expense Service cost....................................................................... Interest cost...................................................................... Expected return on plan assets ........................................ Amortization of prior service cost/(benefit)....................... Amortization of experience loss......................................... Benefit expense................................................................. Settlement/curtailment loss ............................................. Special termination benefits............................................. Total.................................................................................. 2004 U.S. 6.1% 6.1% 7.8% 4.5% Pension 2003 2005 2004 2003 International 6.1% 6.1% 8.0% 3.9% 6.1% 6.4% 8.0% 3.8% Retiree Medical 2005 2004 2003 5.7% 6.1% 7.8% 4.4% 6.1% 6.7% 8.3% 4.5% 5.1% 6.1% 8.0% 4.1% 5.7% 6.1% 6.1% 6.1% 6.1% 6.7% $ 213 296 (344) 3 106 274 21 $ 295 $ 193 271 (325) 6 81 226 4 19 $ 249 $ 153 245 (305) 6 44 143 4 $ 147 $ 32 55 (69) 1 15 34 $ 34 $ 27 47 (65) 1 9 19 1 1 $ 21 $ 24 39 (54) 5 14 $ 14 $ 40 78 (11) 26 133 2 $135 $ 38 72 (8) 19 121 4 $125 $ 33 73 (3) 13 116 $116 A18 Appendix A Specimen Financial Statements: PepsiCo, Inc. 2005 U.S. 
Change in projected benefit liability Liability at beginning of year Service cost Interest cost Plan amendments Participant contributions Experience loss/(gain) Benefit payments Settlement/curtailment loss Special termination benefits Foreign currency adjustment Other Liability at end of year Liability at end of year for service to date Change in fair value of plan assets Fair value at beginning of year Actual return on plan assets Employer contributions/funding Participant contributions Benefit payments Settlement/curtailment loss Foreign currency adjustment Other Fair value at end of year $4,968 213 296 517 (241) 21 (3) $5,771 $4,783 $4,152 477 699 (241) (1) $5,086 2004 Pension 2005 2004 International $ 952 32 55 3 10 203 (28) (68) 104 $1,263 $1,047 $ 838 142 104 10 (28) (61) 94 $1,099 $(164) 17 474 4 $ 331 $367 1 (41) 4 $331 $194 2 7 (73) (15) (22) $ 93 $(65) $(84) $33 $758 27 47 1 9 73 (29) (2) 1 67 $952 $779 $687 77 37 9 (29) (2) 59 $838 $(113) 13 380 7 $ 287 $294 5 (37) 25 $287 $4 65 4 (12) (9) 26 $ 78 $(191) $(227) $161 Retiree Medical 2005 2004 $4,456 193 271 (17) 261 (205) (9) 18 $4,968 $4,164 $3,558 392 416 (205) (9) $4,152 $ (817) 9 2,013 5 $1,210 $1,572 (387) 25 $1,210 $ 196 65 (67) (81) (5) $108 $1,319 40 78 (8) (45) (74) 2 $1,312 $1,264 38 72 (41) 58 (76) 4 $1,319 $ 74 (74) $ $(1,312) (113) 402 19 $(1,004) $ (1,004) $(1,004) $ 61 (54) (26) (52) $(71) $(1,312) $(1,312) $ $ 76 (76) $ $(1,319) (116) 473 19 $ (943) $ (943) $(943) $ 109 31 (19) (82) $ 39 $(1,319) $(1,319) $ Funded status as recognized in our Consolidated Balance Sheet Funded status at end of year $ (685) 5 Unrecognized prior service cost/(benefit) 2,288 Unrecognized experience loss Fourth quarter benefit payments 5 Net amounts recognized $1,613 Net amounts as recognized in our Consolidated Balance Sheet Other assets $2,068 Intangible assets Other liabilities (479) Accumulated other comprehensive loss 24 Net amounts recognized $1,613 Components of increase in unrecognized experience loss Decrease in discount rate $ 365 57 Employee-related assumption changes 95 Liability-related experience different from assumptions (133) Actual asset return different from expected return (106) Amortization of losses Other, including foreign currency adjustments and 2003 Medicare Act (3) Total $ 275 Selected information for plans with liability for service to date in excess of plan assets Liability for service to date $ (374) $(320) Projected benefit liability $ (815) $(685) Fair value of plan assets $8 $11 Of the total projected pension benefit liability at year-end 2005, $765 million relates to plans that we do not fund because the funding of such plans does not receive favorable tax treatment. Financial Statements and Accompanying Notes A19 Future Benefit Payments Our estimated future benefit payments are as follows: Pension Retiree medical 2006 $235 $85 2007 $255 $90 2008 $275 $90 2009 $300 $95 2010 $330 $100 2011-15 $2,215 $545 These future benefits to beneficiaries include payments from both funded and unfunded pension plans. Pension Assets The expected return on pension plan assets is based on our historical experience, our pension plan investment guidelines, and our expectations for long-term rates of return. We use a market-related value method that recognizes each years asset gain or loss over a five-year period. Therefore, it takes five years for the gain or loss from any one year to be fully included in the value of pension plan assets that is used to calculate the expected return. 
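Assuming the expected return on U.S. plan assets is computed as the 7.8% assumption applied to the market-related value of assets (a standard convention; the note does not state the exact mechanics), the $344 million of expected return recognized in 2005 implies:

\[
\text{Market-related value} \approx \frac{\$344\ \text{million}}{7.8\%} \approx \$4.4\ \text{billion},
\]

compared with a beginning-of-year fair value of $4,152 million for the U.S. plans; under this reading, the difference reflects asset gains and losses not yet phased in under the five-year averaging described above.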
Our pension plan investment guidelines are established based upon an evaluation of market conditions, tolerance for risk and cash requirements for benefit payments. Our investment objective is to ensure that funds are available to meet the plans benefit obligations when they are due. Our investment strategy is to prudently invest plan assets in high-quality and diversified equity and debt securities to achieve our long-term return expectation. Our target allocation and actual pension plan asset allocations for the plan years 2005 and 2004, are below. Pension assets include approximately 5.5 million shares of PepsiCo common stock with a market value of $311 million in 2005, and 5.5 million shares with a market value of $267 million in 2004. Our investment policy limits the investment in PepsiCo stock at the time of investment to 10% of the fair value of plan assets. Asset Category Equity securities Debt securities Other, primarily cash Total Target Allocation 60% 40% 100% Actual Allocation 2005 2004 60% 60% 39% 39% 1% 1% 100% 100% Retiree Medical Cost Trend Rates An average increase of 10% in the cost of covered retiree medical benefits is assumed for 2006. This average increase is then projected to decline gradually to 5% in 2010: 2005 service and interest cost components 2005 benefit liability 1% Increase $3 $38 1% Decrease $(2) $(33) Savings Plans Our U.S. employees are eligible to participate in 401(k) savings plans, which are voluntary defined contribution plans. The plans are designed to help employees accumulate additional savings for retirement. We make matching contributions on a portion of eligible pay based on years of service. In 2005 and 2004, our matching contributions were $52 million and $35 million, respectively. Note 8 Noncontrolled Bottling Affiliates Our most significant noncontrolled bottling affiliates are PBG and PAS. Approximately 10% of our net revenue in 2005, 2004 and 2003 reflects sales to PBG. The Pepsi Bottling Group In addition to approximately 41% and 42% of PBGs outstanding common stock that we own at year-end 2005 and 2004, respectively, we own 100% of PBGs class B common stock and approximately 7% of the equity of Bottling Group, LLC, PBGs principal operating subsidiary. This gives us economic ownership of approximately 45% and 46% of PBGs combined operations at year-end 2005 and 2004, respectively. In 2005, bottling equity income includes $126 million of pre-tax gains on our sales of PBG stock. A20 Appendix A Specimen Financial Statements: PepsiCo, Inc. PBGs summarized financial information is as follows: Current assets Noncurrent assets Total assets Current liabilities Noncurrent liabilities Minority interest Total liabilities Our investment Net revenue Gross profit Operating profit Net income 2005 $ 2,412 9,112 $11,524 $2,598 6,387 496 $9,481 $1,738 $11,885 $5,632 $1,023 $466 2004 $ 2,183 8,754 $10,937 $1,725 6,818 445 $8,988 $1,594 $10,906 $5,250 $976 $457 2003 $10,265 $5,050 $956 $416 Our investment in PBG, which includes the related goodwill, was $400 million and $321 million higher than our ownership interest in their net assets at year-end 2005 and 2004, respectively. Based upon the quoted closing price of PBG shares at year-end 2005 and 2004, the calculated market value of our shares in PBG, excluding our investment in Bottling Group, LLC, exceeded our investment balance by approximately $1.5 billion and $1.7 billion, respectively. 
PepsiAmericas At year-end 2005 and 2004, we owned approximately 43% and 41% of PepsiAmericas, respectively, and their summarized financial information is as follows: Current assets Noncurrent assets Total assets Current liabilities Noncurrent liabilities Total liabilities Our investment Net revenue Gross profit Operating profit Net income 2005 $ 598 3,456 $4,054 $ 722 1,763 $2,485 $968 $3,726 $1,562 $393 $195 2004 $ 530 3,000 $3,530 $ 521 1,386 $1,907 $924 $3,345 $1,423 $340 $182 2003 $3,237 $1,360 $316 $158 Our investment in PAS, which includes the related goodwill, was $292 million and $253 million higher than our ownership interest in their net assets at year-end 2005 and 2004, respectively. Based upon the quoted closing price of PAS shares at year-end 2005 and 2004, the calculated market value of our shares in PepsiAmericas exceeded our investment balance by approximately $364 million and $277 million, respectively. In January 2005, PAS acquired a regional bottler, Central Investment Corporation. The table above includes the results of Central Investment Corporation from the transaction date forward. Related Party Transactions Our significant related party transactions involve our noncontrolled bottling affiliates. We sell concentrate to these affiliates, which is used in the production of carbonated soft drinks and non-carbonated bever- ages. We also sell certain finished goods to these affiliates and we receive royalties for the use of our trademarks for certain products. Sales of concentrate and finished goods are reported net of bottler funding. For further unaudited information on these bottlers, see Our Customers in Managements Discussion and Analysis. These transactions with our bottling affiliates are reflected in our consolidated financial statements as follows: Net revenue Selling, general and administrative expenses Accounts and notes receivable Accounts payable and other current liabilities Such amounts are settled on terms consistent with other trade receivables and payables. See Note 9 regarding our guarantee of certain PBG debt. In addition, we coordinate, on an aggregate basis, the negotiation and purchase of sweeteners and other raw materials 2005 $4,633 $143 $178 $117 2004 $ 4,170 $114 $157 $95 2003 $3,699 $128 requirements for certain of our bottlers with suppliers. Once we have negotiated the contracts, the bottlers order and take delivery directly from the supplier and pay the suppliers directly. Consequently, these transactions are not reflected in our consolidated financial statements. As the contracting party, we could be liable to these suppliers in the event of any nonpayment by our bottlers, but we consider this exposure to be remote. Financial Statements and Accompanying Notes A21 Note 9 Debt Obligations and Commitments 2005 Short-term debt obligations Current maturities of long-term debt Commercial paper (3.3% and 1.6%) Other borrowings (7.4% and 6.6%) Amounts reclassified to long-term debt Long-term debt obligations Short-term borrowings, reclassified Notes due 2006-2026 (5.4% and 4.7%) Zero coupon notes, $475 million due 2006-2012 (13.4%) Other, due 2006-2014 (6.3% and 6.2%) Less: current maturities of long-term debt obligations The interest rates in the above table reflect weighted-average rates as of year-end. 
2004 $ 160 1,287 357 (750) $1,054 $ 750 1,274 321 212 2,557 (160) $2,397 $ 143 3,140 356 (750) $2,889 $ 750 1,161 312 233 2,456 (143) $2,313 At December 31, 2005, approximately 78% of total debt, after the impact of the associated interest rate swaps, was exposed to variable interest rates, compared to 67% at December 25, 2004. In addition to variable rate long-term debt, all debt with maturities of less than one year is categorized as variable for purposes of this measure. Cross Currency Interest Rate Swaps In 2004, we entered into a cross currency interest rate swap to hedge the currency exposure on U.S. dollar denominated debt of $50 million held by a foreign affiliate. The terms of this swap match the terms of the debt it modifies. The swap matures in 2008. The unrecognized gain related to this swap was less than $1 million at December 31, 2005, resulting in a U.S. dollar liability of $50 million. At December 25, 2004, the unrecognized loss related to this swap was $3 million, resulting in a U.S. dollar liability of $53 million. We have also entered into cross currency interest rate swaps to hedge the currency exposure on U.S. dollar denominated intercompany debt of $125 million. The terms of the swaps match the terms of the debt they modify. The swaps mature over the next two years. The net unrecognized gain related to these swaps was $5 million at December 31, 2005. The net unrecognized loss related to these swaps was less than $1 million at December 25, 2004. Short-term borrowings are reclassified to long-term when we have the intent and ability, through the existence of the unused lines of credit, to refinance these borrowings on a long-term basis. At year-end 2005, we maintained $2.1 billion in corporate lines of credit subject to normal banking terms and conditions. These credit facilities support short-term debt issuances and remained unused as of December 31, 2005. Of the $2.1 billion, $1.35 billion expires in May 2006 with the remaining $750 million expiring in June 2009. In addition, $181 million of our debt was outstanding on various lines of credit maintained for our international divisions. Long-Term Contractual Commitments These lines of credit are subject to normal banking terms and conditions and are committed to the extent of our borrowings. Interest Rate Swaps We entered into interest rate swaps in 2004 to effectively convert the interest rate of a specific debt issuance from a fixed rate of 3.2% to a variable rate. The variable weighted-average interest rate that we pay is linked to LIBOR and is subject to change. The notional amount of the interest rate swaps outstanding at December 31, 2005 and December 25, 2004 was $500 million. The terms of the interest rate swaps match the terms of the debt they modify. The swaps mature in 2007. Payments Due by Period Long-term debt obligations(a) .......................................................... Operating leases ............................................................................. Purchasing commitments(b) ............................................................ Marketing commitments.................................................................. Other commitments......................................................................... (a) Excludes current maturities of long-term debt of $143 million which are classified within current liabilities. 
Payments Due by Period
                                      Total      2006    2007-2008   2009-2010   2011 and beyond
Long-term debt obligations(a)        $2,313         -       $1,052       $ 876           $ 385
Operating leases                        769       187          253         132             197
Purchasing commitments(b)             4,533     1,169        1,630         775             959
Marketing commitments                 1,487       412          438         381             256
Other commitments                        99        82           10           6               1
Total                                $9,201    $1,850       $3,383      $2,170          $1,798

(b) Includes approximately $13 million of long-term commitments which are reflected in other liabilities in our Consolidated Balance Sheet.

The above table reflects non-cancelable commitments as of December 31, 2005 based on year-end foreign exchange rates. Most long-term contractual commitments, except for our long-term debt obligations, are not recorded in our Consolidated Balance Sheet. Non-cancelable operating leases primarily represent building leases. Non-cancelable purchasing commitments are primarily for oranges and orange juice to be used for our Tropicana brand beverages. Non-cancelable marketing commitments are primarily for sports marketing and with our fountain customers. Bottler funding is not reflected in our long-term contractual commitments as it is negotiated on an annual basis. See Note 7 regarding our pension and retiree medical obligations and discussion below regarding our commitments to noncontrolled bottling affiliates and former restaurant operations.

Off-Balance Sheet Arrangements
It is not our business practice to enter into off-balance sheet arrangements, other than in the normal course of business, nor is it our policy to issue guarantees to our bottlers, noncontrolled affiliates or third parties. However, certain guarantees were necessary to facilitate the separation of our bottling and restaurant operations from us. In connection with these transactions, we have guaranteed $2.3 billion of Bottling Group, LLC's long-term debt through 2012 and $28 million of YUM! Brands, Inc. (YUM) outstanding obligations, primarily property leases, through 2020. The terms of our Bottling Group, LLC debt guarantee are intended to preserve the structure of PBG's separation from us, and our payment obligation would be triggered if Bottling Group, LLC failed to perform under these debt obligations or the structure significantly changed. Our guarantees of certain obligations ensured YUM's continued use of certain properties. These guarantees would require our cash payment if YUM failed to perform under these lease obligations. See Our Liquidity, Capital Resources and Financial Position in Management's Discussion and Analysis for further unaudited information on our borrowings.

Note 10 Risk Management
We are exposed to the risk of loss arising from adverse changes in: commodity prices, affecting the cost of our raw materials and energy; foreign exchange risks; interest rates; stock prices; and discount rates affecting the measurement of our pension and retiree medical liabilities. In the normal course of business, we manage these risks through a variety of strategies, including the use of derivatives. Certain derivatives are designated as either cash flow or fair value hedges and qualify for hedge accounting treatment, while others do not qualify and are marked to market through earnings. See Our Business Risks in Management's Discussion and Analysis for further unaudited information on our business risks. If the derivative instrument is terminated, we continue to defer the related gain or loss and include it as a component of the cost of the underlying hedged item. Upon determination that the underlying hedged item will not be part of an actual transaction, we recognize the related gain or loss in net income in that period.
We also use derivatives that do not qualify for hedge accounting treatment. We account for such derivatives at market value with the resulting gains and losses reflected in our income statement. We do not use derivative instruments for trading or speculative purposes and we limit our exposure to individual counterparties to manage credit risk.

Commodity Prices
We are subject to commodity price risk because our ability to recover increased costs through higher pricing may be limited in the competitive environment in which we operate. This risk is managed through the use of fixed-price purchase orders, pricing agreements, geographic diversity and derivatives. We use derivatives, with terms of no more than two years, to economically hedge price fluctuations related to a portion of our anticipated commodity purchases, primarily for natural gas and diesel fuel. For those derivatives that are designated as cash flow hedges, any ineffectiveness is recorded immediately. However, our commodity cash flow hedges have not had any significant ineffectiveness for all periods presented. We classify both the earnings and cash flow impact from these derivatives consistent with the underlying hedged item. During the next 12 months, we expect to reclassify gains of $24 million related to cash flow hedges from accumulated other comprehensive loss into net income.

Foreign Exchange
Our operations outside of the U.S. generate over a third of our net revenue and expose us to fluctuations in foreign exchange rates, a portion of which we hedge. Ineffectiveness on these hedges has not been material.

Interest Rates
We centrally manage our debt and investment portfolios considering investment opportunities and risks, tax consequences and overall financing strategies. We may use interest rate and cross currency interest rate swaps to manage our overall interest expense and foreign exchange risk. These instruments effectively change the interest rate and currency of specific debt issuances. These swaps are entered into concurrently with the issuance of the debt that they are intended to modify. The notional amount, interest payment and maturity date of the swaps match the principal, interest payment and maturity date of the related debt. These swaps are entered into only with strong creditworthy counterparties, are settled on a net basis and are of relatively short duration.

Stock Prices
The portion of our deferred compensation liability that is based on certain market indices and on our stock price is subject to market risk. We hold mutual fund investments and prepaid forward contracts to manage this risk. Changes in the fair value of these investments and contracts are recognized immediately in earnings and are offset by changes in the related compensation liability.

Fair Value
All derivative instruments are recognized in our Consolidated Balance Sheet at fair value. The fair value of our derivative instruments is generally based on quoted market prices. Book and fair values of our derivative and financial instruments are as follows: 2005 Book Value Assets Cash and cash equivalents(a) ... Short-term investments(b) ... Forward exchange contracts(c) ... Commodity contracts(d) ...
Prepaid forward contract(e) ...................................................................................... Cross currency interest rate swaps(f) ....................................................................... Liabilities Forward exchange contracts(c) ................................................................................. Commodity contracts(d) ............................................................................................ Debt obligations....................................................................................................... Interest rate swaps(g) ............................................................................................... Cross currency interest rate swaps(f) ...................................................................... (a) Book value approximates fair value due to the short maturity. 2004 Fair Value $1,716 $3,166 $19 $41 $107 $6 $15 $3 $5,378 $9 $ Book Value $1,280 $2,165 $8 $7 $120 $ $35 $8 $3,451 $1 $3 Fair Value $1,280 $2,165 $8 $7 $120 $ $35 $8 $3,676 $1 $3 $1,716 $3,166 $19 $41 $107 $6 $15 $3 $5,202 $9 $ Included in our Consolidated Balance Sheet under the captions noted above or as indicated below. In addition, derivatives are designated as accounting hedges unless otherwise noted below. (b) Principally short-term time deposits and includes $124 million at December 31, 2005 and $118 million at December 25, 2004 of mutual fund investments used to manage a portion of market risk arising from our deferred compensation liability. (c) 2005 asset includes $14 million related to derivatives not designated as accounting hedges. Assets are reported within current assets and other assets and liabilities are reported within current liabilities and other liabilities. (d) 2005 asset includes $2 million related to derivatives not designated as accounting hedges and the liability relates entirely to derivatives not designated as accounting hedges. Assets are reported within current assets and other assets and liabilities are reported within current liabilities and other liabilities. (e) Included in current assets and other assets. (f) Asset included within other assets and liability included in long-term debt. (g) Reported in other liabilities. This table excludes guarantees, including our guarantee of $2.3 billion of Bottling Group, LLCs long-term debt. The guarantee had a fair value of $47 million at December 31, 2005 and $46 million at December 25, 2004 based on an external estimate of the cost to us of transferring the liability to an independent financial institution. See Note 9 for additional information on our guarantees. Note 11 Net Income per Common Share from Continuing Operations and RSUs and preferred shares were converted into common shares. Options to purchase 3.0 million shares in 2005, 7.0 million shares in 2004 and 49.0 million shares in 2003 were not included in the calculation of diluted earnings per common share because these options were out-of-the-money. Out-of-themoney options had average exercise prices of $53.77 in 2005, $52.88 in 2004 and $48.27 in 2003. A24 Appendix A Specimen Financial Statements: PepsiCo, Inc. 
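As a cross-check on the per-share computations tabulated below, and assuming the $4,078 million of income from continuing operations and the 1,669 million weighted-average basic shares belong to the 2005 column (consistent with the net income reported elsewhere in this appendix), basic and diluted EPS work out as:

\[
\text{Basic} = \frac{4{,}078 - 2 - 16}{1{,}669} \approx \$2.43, \qquad
\text{Diluted} = \frac{4{,}060 + 18}{1{,}669 + 35 + 2} = \frac{4{,}078}{1{,}706} \approx \$2.39,
\]

where the deductions are the preferred dividends and redemption premium and the additions reflect the dilutive securities listed in the table.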
The computations of basic and diluted net income per common share from continuing operations are as follows: 2005 Net income Preferred shares: Dividends Redemption premium Net income available for common shareholders Basic net income per common share Net income available for common shareholders Dilutive securities: Stock options and RSUs ESOP convertible preferred stock Unvested stock awards Diluted Diluted net income per common share (a) Weighted-average common shares outstanding. 2004 Shares(a) Income $4,174 (3) (22) $4,149 $2.45 1,669 35 2 1,706 $4,149 24 $4,173 $2.41 1,696 31 2 1,729 Shares(a) Income $3,568 (3) (12) $3,553 $2.07 $3,553 15 $3,568 $2.05 2003 Shares(a) Income $4,078 (2) (16) $4,060 $2.43 $4,060 18 $4,078 $2.39 1,669 1,696 1,718 1,718 17 3 1 1,739 Note 12 Preferred and Common Stock As of December 31, 2005 and December 25, 2004, there were 3.6 billion shares of common stock and 3 million shares of convertible preferred stock authorized. The preferred stock was issued only for an employee stock ownership plan (ESOP) established by Quaker and these shares are redeemable for common stock by the ESOP participants. The preferred stock accrues dividends at an annual rate of $5.46 per share. At year-end 2005 and 2004, there were 803,953 preferred shares issued and 354,853 and 424,853 shares outstanding, respectively. Each share is convertible at the option of the holder into 4.9625 shares of common stock. The preferred shares may be called by us upon written notice at $78 per share plus accrued and unpaid dividends. As of December 31, 2005, 0.3 million outstanding shares of preferred stock with a fair value of $104 million and 17 million shares of common stock were held in the accounts of ESOP participants. As of December 25, 2004, 0.4 million outstanding shares of preferred stock with a fair value of $110 million and 18 million shares of common stock were held in the accounts of ESOP participants. Quaker made the final award to its ESOP plan in June 2001. 2005 Preferred stock Repurchased preferred stock Balance, beginning of year Redemptions Balance, end of year *Does not sum due to rounding. 2004 Amount $41 $ 90 19 $110* Shares 0.8 0.3 0.1 0.4 Amount $41 $63 27 $90 Shares 0.8 0.2 0.1 0.3 2003 Amount $41 $48 15 $63 Shares 0.8 0.4 0.1 0.5 Note 13 Accumulated Other Comprehensive Loss Comprehensive income is a measure of income which includes both net income and other comprehensive income or loss. Other comprehensive loss results from items deferred on the balance sheet in shareholders equity. Other comprehensive (loss)/income was $(167) million in 2005, $381 million in 2004, and $405 million in 2003. The accumulated balances for each component of other comprehensive loss were as follows: Currency translation adjustment Cash flow hedges, net of tax(a) Minimum pension liability adjustment(b) Unrealized gain on securities, net of tax Other Accumulated other comprehensive loss 2005 $ (971) 27 (138) 31 (2) $(1,053) 2004 $(720) (19) (154) 7 $(886) 2003 $(1,121) (12) (135) 1 $(1,267) (a) Includes net commodity gains of $55 million in 2005. Also includes no impact in 2005, $6 million gain in 2004 and $8 million gain in 2003 for our share of our equity investees accumulated derivative activity. Deferred gains/(losses) reclassified into earnings were $8 million in 2005, $(10) million in 2004 and no impact in 2003. (b) Net of taxes of $72 million in 2005, $77 million in 2004 and $67 million in 2003. 
Also, includes $120 million in 2005, $121 million in 2004 and $110 million in 2003 for our share of our equity investees minimum pension liability adjustments. Financial Statements and Accompanying Notes A25 Note 14 Supplemental Financial Information 2005 Accounts receivable Trade receivables ..................................................... Other receivables ..................................................... Allowance, beginning of year ................................... Net amounts (credited)/charged to expense ........ Deductions(a) ........................................................ Other(b) ................................................................. Allowance, end of year ............................................. Net receivables ........................................................ Inventory(c) Raw materials.......................................................... Work-in-process ....................................................... Finished goods ......................................................... Accounts payable and other current liabilities Accounts payable ..................................................... Accrued marketplace spending................................ Accrued compensation and benefits ........................ Dividends payable.................................................... Insurance accruals .................................................. Other current liabilities............................................ Other liabilities Reserves for income taxes........................................ Other ........................................................................ Other supplemental information Rent expense............................................................ Interest paid ............................................................ Income taxes paid, net of refunds............................ Acquisitions(d) Fair value of assets acquired............................... Cash paid and debt issued.................................. SVE minority interest eliminated.......................... Liabilities assumed.............................................. (a) Includes accounts written off. (b) Includes collections of previously written-off accounts and currency translation effects. (c) Inventories are valued at the lower of cost or market. Cost is determined using the average, first-in, first-out (FIFO) or last-in, first-out (LIFO) methods. Approximately 17% in 2005 and 15% in 2004 of the inventory cost was computed using the LIFO method. The differences between LIFO and FIFO methods of valuing these inventories were not material. (d) In 2005, these amounts include the impact of our acquisition of General Mills, Inc.s 40.5% ownership interest in SVE for $750 million. The excess of our purchase price over the fair value of net assets acquired is $250 million and is included in goodwill. We also reacquired rights to distribute global brands for $263 million which is included in other nonamortizable intangible assets. 2004 $2,505 591 3,096 105 18 (25) (1) 97 $2,999 $ 665 156 720 $1,541 $1,731 1,285 961 387 131 1,104 $5,599 $1,567 2,532 $4,099 $245 $137 $1,833 $ 78 (64) $ 14 2003 $2,718 618 3,336 97 (1) (22) 1 75 $3,261 $ 738 112 843 $1,693 $1,799 1,383 1,062 431 136 1,160 $5,971 $1,884 2,439 $4,323 $228 $213 $1,258 $ 1,089 (1,096) 216 $ 209 $116 32 (43) $105 $231 $147 $1,530 $178 (71) $107 A26 Appendix A Specimen Financial Statements: PepsiCo, Inc. 
ADDITIONAL INFORMATION In addition to the financial statements and accompanying notes, companies are required to provide a report on internal control over financial reporting and to have an auditors report on the financial statements. In addition, PepsiCo has provided a report indicating that financial reporting is managements responsibility. Finally, PepsiCo also provides selected financial data it believes is useful. The two required reports are further explained below. Managements Report on Internal Control over Financial Reporting The Sarbanes-Oxley Act of 2002 requires managers of publicly traded companies to establish and maintain systems of internal control over the companys financial reporting processes. In addition, management must express its responsibility for financial reporting, and it must provide certifications regarding the accuracy of the financial statements. Auditors Report All publicly held corporations, as well as many other enterprises and organizations engage the services of independent certified public accountants for the purpose of obtaining an objective, expert report on their financial statements. Based on a comprehensive examination of the companys accounting system, accounting records, and the financial statements, the outside CPA issues the auditors report. The standard auditors report identifies who and what was audited and indicates the responsibilities of management and the auditor relative to the financial statements. It states that the audit was conducted in accordance with generally accepted auditing standards and discusses the nature and limitations of the audit. It then expresses an informed opinion as to (1) the fairness of the financial statements and (2) their conformity with generally accepted accounting principles. It also expresses an opinion regarding the effectiveness of the companys internal controls. All of this additional information for PepsiCo is provided on the following pages. Additional Information A27 Managements Responsibility for Financial Reporting To Our Shareholders: At PepsiCo, our actions the actions of all our associates are governed by our Worldwide Code of Conduct. This code is clearly aligned with our stated values a commitment to sustained growth, through empowered people, operating with responsibility and building trust. Both the code and our core values enable us to operate with integrity both within the letter and the spirit of the law. Our code of conduct is reinforced consistently at all levels and in all countries. We have maintained strong governance policies and practices for many years. The management of PepsiCo is responsible for the objectivity and integrity of our consolidated financial statements. The Audit Committee of the Board of Directors has engaged independent registered public accounting firm, KPMG LLP, to audit our consolidated financial statements and they have expressed an unqualified opinion. We are committed to providing timely, accurate and understandable information to investors. Our commitment encompasses the following: Maintaining strong controls over financial reporting. Our system of internal control is based on the control criteria framework of the Committee of Sponsoring Organizations of the Treadway Commission published in their report titled, Internal Control Integrated Framework. 
The system is designed to provide reasonable assurance that transactions are executed as authorized and accurately recorded; that assets are safeguarded; and that accounting records are sufficiently reliable to permit the preparation of financial statements that conform in all material respects with accounting principles generally accepted in the U.S. We maintain disclosure controls and procedures designed to ensure that information required to be disclosed in reports under the Securities Exchange Act of 1934 is recorded, processed, summarized and reported within the specified time periods. We monitor these internal controls through self-assessments and an ongoing program of internal audits. Our internal controls are reinforced through our Worldwide Code of Conduct, which sets forth our commitment to conduct business with integrity, and within both the letter and the spirit of the law. Exerting rigorous oversight of the business. We continuously review our business results and strategies. This encompasses financial discipline in our strategic and daily business decisions. Our Executive Committee is actively involved from understanding strategies and alternatives to reviewing key initiatives and financial performance. The intent is to ensure we remain objective in our assessments, constructively challenge our approach to potential business opportunities and issues, and monitor results and controls. Engaging strong and effective Corporate Governance from our Board of Directors. We have an active, capable and diligent Board that meets the required standards for independence, and we welcome the Boards oversight as a representative of our shareholders. Our Audit Committee comprises independent directors with the financial literacy, knowledge and experience to provide appropriate oversight. We review our critical accounting policies, financial reporting and internal control matters with them and encourage their direct communication with KPMG LLP, with our General Auditor, and with our General Counsel. In 2005, we named a senior compliance officer to lead and coordinate our compliance policies and practices. Providing investors with financial results that are complete, transparent and understandable. The consolidated financial statements and financial information included in this report are the responsibility of management. This includes preparing the financial statements in accordance with accounting principles generally accepted in the U.S., which require estimates based on managements best judgment. PepsiCo has a strong history of doing whats right. We realize that great companies are built on trust, strong ethical standards and principles. Our financial results are delivered from that culture of accountability, and we take responsibility for the quality and accuracy of our financial reporting. Peter A. Bridgman Senior Vice President and Controller Indra K. Nooyi President and Chief Financial Officer Steven S Reinemund Chairman of the Board and Chief Executive Officer A28 Appendix A Specimen Financial Statements: PepsiCo, Inc. Managements Report on Internal Control over Financial Reporting To Our Shareholders: Integrated Framework issued by the Committee of Sponsoring Organizations of the Treadway Commission. Based on that evaluation, our management concluded that our internal control over financial reporting is effective as of December 31, 2005. 
KPMG LLP, an independent registered public accounting firm, has audited the consolidated financial statements included in this Annual Report and, as part of their audit, has issued their report, included herein, (1) on our managements assessment of the effectiveness of our internal controls over financial reporting and (2) on the effectiveness of our internal control over financial reporting. Peter A. Bridgman Senior Vice President and Controller Indra K. Nooyi President and Chief Financial Officer Steven S Reinemund Chairman of the Board and Chief Executive Officer Additional Information A29 Report of Independent Registered Public Accounting Firm Board of Directors and Shareholders PepsiCo, Inc.: We have audited the accompanying Consolidated Balance Sheet of PepsiCo, Inc. and Subsidiaries as of December 31, 2005 and December 25, 2004 and the related Consolidated Statements of Income, Cash Flows and Common Shareholders Equity for each of the years in the three-year period ended December 31, 2005. We have also audited managements assessment, included in Managements Report on Internal Control over Financial Reporting, that PepsiCo, Inc. and Subsidiaries maintained effective internal control over financial reporting as of December 31, 2005, based on criteria established in Internal Control Integrated Framework issued by the Committee of Sponsoring Organizations of the Treadway Commission (COSO). PepsiCo, Inc.s management is responsible for these consolidated financial statements, for maintaining effective internal control over financial reporting, and for its assessment of the effectiveness of internal control over financial reporting. Our responsibility is to express an opinion on these consolidated financial statements, an opinion on managements assessment, and an opinion on the effectiveness of PepsiCo, Inc audit, evaluating managements assessment, PepsiCo, Inc. and Subsidiaries as of December 31, 2005 and December 25, 2004, and the results of their operations and their cash flows for each of the years in the three-year period ended December 31, 2005, in conformity with United States generally accepted accounting principles. Also, in our opinion, managements assessment that PepsiCo, Inc. maintained effective internal control over financial reporting as of December 31, 2005, is fairly stated, in all material respects, based on criteria established in Internal Control Integrated Framework issued by COSO. Furthermore, in our opinion, PepsiCo, Inc. maintained, in all material respects, effective internal control over financial reporting as of December 31, 2005, based on criteria established in Internal Control Integrated Framework issued by COSO. KPMG LLP New York, New York February 24, 2006 A30 Appendix A Specimen Financial Statements: PepsiCo, Inc. 
Selected Financial Data Quarterly Net revenue 2005 2004 Gross profit(a) 2005 2004 2005 restructuring charges(b) 2005 2004 restructuring and impairment charges(c) 2004 AJCA tax charge(d) 2005 Net income(e) 2005 2004 Net income per common share basic(e) 2005 2004 Net income per common share diluted(e) 2005 2004 Cash dividends declared per common share 2005 2004 2005 stock price per share(f) High Low Close 2004 stock price per share(f) High Low Close (in millions except per share amounts, unaudited) First Second Third Fourth Quarter Quarter Quarter Quarter $6,585 $6,131 $3,715 $3,466 $7,697 $7,070 $4,383 $4,039 $8,184 $10,096 $7,257 $8,803 $4,669 $4,139 $5,619 $4,943 $83 Five-Year Summary 2005 2004 2003 Net revenue $32,562 $29,261 $26,971 Income from continuing operations $4,078 $4,174 $3,568 Net income $4,078 $4,212 $3,568 Income per common share basic, continuing operations $2.43 $2.45 $2.07 Income per common share diluted, $2.39 $2.41 $2.05 continuing operations Cash dividends declared per common share $1.01 $0.850 $0.630 $31,727 $27,987 $25,327 Total assets $2,313 $2,397 $1,702 Long-term debt 22.7% 27.4% 27.5% Return on invested capital(a) Five-Year Summary (Cont.) Net revenue Net income Income per common share basic Income per common share diluted Cash dividends declared per common share Total assets Long-term debt Return on invested capital(a) 2002 2001 $25,112 $23,512 $3,000 $2,400 $1.69 $1.35 $1.68 $1.33 $0.595 $0.575 $23,474 $21,695 $2,187 $2,651 25.7% 22.1% $912 $804 $1,194 $1,059 $468 $864 $1,364 $150 $(8) $1,108 $985 $0.54 $0.47 $0.71 $0.62 $0.52 $0.80 $0.66 $0.58 $0.53 $0.46 $0.70 $0.61 $0.51 $0.79 $0.65 $0.58 (a) Return on invested capital is defined as adjusted net income divided by the sum of average shareholders equity and average total debt. Adjusted net income is defined as net income plus net interest expense after tax. Net interest expense after tax was $62 million in 2005, $60 million in 2004, $72 million in 2003, $93 million in 2002, and $99 million in 2001. As a result of the adoption of SFAS 142, Goodwill and Other Intangible Assets, and the consolidation of SVE in 2002, the data provided above is not comparable. $0.23 $0.16 $55.71 $51.34 $52.62 $53.00 $45.30 $50.93 $0.26 $0.23 $57.20 $51.78 $55.52 $55.48 $50.28 $54.95 $0.26 $0.23 $56.73 $52.07 $54.65 $55.71 $48.41 $50.84 $0.26 $0.23 $60.34 $53.55 $59.08 $53.00 $47.37 $51.94 Includes restructuring and impairment charges of: 2005 Pre-tax After-tax Per share Includes Quaker merger-related costs of: 2003 Pre-tax After-tax Per share $59 $42 $0.02 2002 $224 $190 $0.11 2001 $356 $322 $0.18 $83 $55 $0.03 2004 $150 $96 $0.06 2003 $147 $100 $0.06 2001 $31 $19 $0.01 The 2005 fiscal year consisted of fifty-three weeks compared to fifty-two weeks in our normal fiscal year. The 53rd week increased 2005 net revenue by an estimated $418 million and net income by an estimated $57 million or $0.03 per share. Cash dividends per common share in 2001 are those of pre-merger PepsiCo prior to the effective date of the merger. In the fourth quarter of 2004, we reached agreement with the IRS for an open issue related to our discontinued restaurant operations which resulted in a tax benefit of $38 million or $0.02 per share. The first, second, and third quarters consist of 12 weeks and the fourth quarter consists of 16 weeks in 2004 and 17 weeks in 2005. 
(a) Reflects net reclassifications in all periods from cost of sales to selling, general and administrative expenses related to the alignment of certain accounting policies in connection with our ongoing BPT initiative. See Note 1. (b) The 2005 restructuring charges were $83 million ($55 million or $0.03 per share after-tax). See Note 3. (c) The 2004 restructuring and impairment charges were $150 million ($96 million or $0.06 per share after-tax). See Note 3. (d) Represents income tax expense associated with the repatriation of earnings in connection with the AJCA. See Note 5. (e) Fourth quarter 2004 net income reflects a tax benefit from discontinued operations of $38 million or $0.02 per share. See Note 5. (f) Represents the composite high and low sales price and quarterly closing prices for one share of PepsiCo common stock. Appendix B THE COCA-COLA COMPANY AND SUBSIDIARIES CONSOLIDATED STATEMENTS OF INCOME S PECIMEN FINANCIAL STATEMENTS: The Coca-Cola Company Year Ended December 31, (In millions except per share data) 2005 2004 2003 NET OPERATING REVENUES Cost of goods sold GROSS PROFIT Selling, general and administrative expenses Other operating charges OPERATING INCOME Interest income Interest expense Equity income net Other loss net Gains on issuances of stock by equity investees INCOME BEFORE INCOME TAXES Income taxes NET INCOME BASIC NET INCOME PER SHARE DILUTED NET INCOME PER SHARE AVERAGE SHARES OUTSTANDING Effect of dilutive securities AVERAGE SHARES OUTSTANDING ASSUMING DILUTION $ 23,104 8,195 14,909 8,739 85 6,085 235 240 680 (93) 23 6,690 1,818 $ $ $ 4,872 2.04 2.04 2,392 1 2,393 $ 21,742 7,674 14,068 7,890 480 5,698 157 196 621 (82) 24 6,222 1,375 $ $ $ 4,847 2.00 2.00 2,426 3 2,429 $ 20,857 7,776 13,081 7,287 573 5,221 176 178 406 (138) 8 5,495 1,148 $ $ $ 4,347 1.77 1.77 2,459 3 2,462 Refer to Notes to Consolidated Financial Statements. The financial information herein is reprinted with permission from The Coca-Cola Company 2005 Annual Report. The accompanying Notes are an integral part of the consolidated financial statements. The complete financial statements are available through a link at the books companion website. B1 B2 Appendix B Specimen Financial Statements: The Coca-Cola Company THE COCA-COLA COMPANY AND SUBSIDIARIES CONSOLIDATED BALANCE SHEETS December 31, (In millions except par value) 2005 2004 ASSETS CURRENT ASSETS Cash and cash equivalents Marketable securities Trade accounts receivable, less allowances of $72 and $69, respectively Inventories Prepaid expenses and other assets TOTAL CURRENT ASSETS INVESTMENTS Equity method investments: Coca-Cola Enterprises Inc. Coca-Cola Hellenic Bottling Company S.A. Coca-Cola FEMSA, S.A. de C.V. Coca-Cola Amatil Limited Other, principally bottling companies; Issued 3,507 and 3,500 shares, respectively Capital surplus Reinvested earnings Accumulated other comprehensive income (loss) Treasury stock, at cost 1,138 and 1,091 shares, respectively TOTAL SHAREOWNERS EQUITY TOTAL LIABILITIES AND SHAREOWNERS EQUITY $ 4,701 66 2,281 1,424 1,778 10,250 $ 6,707 61 2,244 1,420 1,849 12,281 1,731 1,039 982 748 2,062 360 6,922 2,648 5,786 1,946 1,047 828 $ 29,427 1,569 1,067 792 736 1,733 355 6,252 2,981 6,091 2,037 1,097 702 $ 31,441 $ 4,493 4,518 28 797 9,836 1,154 1,730 352 877 5,492 31,299 (1,669) (19,644) 16,355 $ 4,403 4,531 1,490 709 11,133 1,157 2,814 402 875 4,928 29,105 (1,348) (17,625) 15,935 $ 31,441 $ 29,427 Refer to Notes to Consolidated Financial Statements. 
Specimen Financial Statements: The Coca-Cola Company B3 THE COCA-COLA COMPANY AND SUBSIDIARIES CONSOLIDATED STATEMENTS OF CASH FLOWS Year Ended December 31, (In millions) 2005 2004 2003 OPERATING ACTIVITIES Net income Depreciation and amortization Stock-based compensation expense Deferred income taxes Equity income or loss, net of dividends Foreign currency adjustments Gains on issuances of stock by equity investees Gains on sales of assets, including bottling interests Other operating charges Other items Net change in operating assets and liabilities Net cash provided by operating activities INVESTING ACTIVITIES Acquisitions and investments, principally trademarks and bottling companies Purchases of investments and other assets Proceeds from disposals of investments and other assets Purchases of property, plant and equipment Proceeds from disposals of property, plant and equipment Other investing activities Net cash used in investing activities FINANCING ACTIVITIES Issuances of debt Payments of debt Issuances of stock Purchases of stock for treasury Dividends Net cash used in financing activities EFFECT OF EXCHANGE RATE CHANGES ON CASH AND CASH EQUIVALENTS CASH AND CASH EQUIVALENTS Net increase (decrease) during the year Balance at beginning of year Balance at end of year $ 4,872 $ 4,847 $ 4,347 932 893 850 324 345 422 (88) 162 (188) (446) (476) (294) 47 (59) (79) (23) (24) (8) (9) (20) (5) 85 480 330 299 437 249 430 (617) (168) 6,423 (637) (53) 33 (899) 88 (28) (1,496) 178 (2,460) 230 (2,055) (2,678) (6,785) (148) (2,006) 6,707 $ 4,701 5,968 (267) (46) 161 (755) 341 63 (503) 3,030 (1,316) 193 (1,739) (2,429) (2,261) 141 3,345 3,362 $ 6,707 5,456 (359) (177) 147 (812) 87 178 (936) 1,026 (1,119) 98 (1,440) (2,166) (3,601) 183 1,102 2,260 $ 3,362 Refer to Notes to Consolidated Financial Statements. 
B4 Appendix B Specimen Financial Statements: The Coca-Cola Company THE COCA-COLA COMPANY AND SUBSIDIARIES CONSOLIDATED STATEMENTS OF SHAREOWNERS EQUITY Year Ended December 31, (In millions except per share data) 2005 2004 2003 NUMBER OF COMMON SHARES OUTSTANDING Balance at beginning of year Stock issued to employees exercising stock options Purchases of stock for treasury1 Balance at end of year COMMON STOCK Balance at beginning of year Stock issued to employees exercising stock options Balance at end of year CAPITAL SURPLUS Balance at beginning of year Stock issued to employees exercising stock options Tax benefit from employees stock option and restricted stock plans Stock-based compensation Balance at end of year REINVESTED EARNINGS Balance at beginning of year Net income Dividends (per share $1.12, $1.00 and $0.88 in 2005, 2004 and 2003, respectively) Balance at end of year ACCUMULATED OTHER COMPREHENSIVE INCOME (LOSS) Balance at beginning of year Net foreign currency translation adjustment Net gain (loss) on derivatives Net change in unrealized gain on available-for-sale securities Net change in minimum pension liability Net other comprehensive income adjustments Balance at end of year TREASURY STOCK Balance at beginning of year Purchases of treasury stock Balance at end of year TOTAL SHAREOWNERS EQUITY COMPREHENSIVE INCOME Net income Net other comprehensive income adjustments TOTAL COMPREHENSIVE INCOME 1 2,409 7 (47) 2,369 $ 875 2 877 4,928 229 11 324 5,492 29,105 4,872 (2,678) 31,299 (1,348) (396) 57 13 5 (321) (1,669) (17,625) (2,019) (19,644) $ 16,355 $ $ $ 2,442 5 (38) 2,409 874 1 875 4,395 175 13 345 4,928 26,687 4,847 (2,429) 29,105 (1,995) 665 (3) 39 (54) 647 (1,348) (15,871) (1,754) (17,625) $ 15,935 4,847 647 5,494 $ 2,471 4 (33) 2,442 873 1 874 3,857 105 11 422 4,395 24,506 4,347 (2,166) 26,687 (3,047) 921 (33) 40 124 1,052 (1,995) (14,389) (1,482) (15,871) $ 14,090 $ $ 4,347 1,052 5,399 4,872 $ (321) 4,551 $ Common stock purchased from employees exercising stock options numbered 0.5 shares, 0.4 shares and 0.4 shares for the years ended December 31, 2005, 2004 and 2003, respectively. Refer to Notes to Consolidated Financial Statements. Appendix C OBJECTIVES Time Value of Money STUDY After studying this appendix, you should be able to:. 8 Use a financial calculator to solve time value of money problems. Would you rather receive $1,000 today or a year from now? You should prefer to receive the $1,000 today because you can invest the $1,000 and earn interest on it. As a result, you will have more than $1,000 a year from now. What this example illustrates is the concept of the t ime value of money . Everyone prefers to receive money today rather than in the future because of the interest factor. THE NATURE OF INTEREST Interest is payment for the use of another persons money. It is the difference between the amount borrowed or invested (called the principal) and the amount repaid or collected. The amount of interest to be paid or collected is usually stated as a rate over a specific period of time. The rate of interest is generally stated as an annual rate. The amount of interest involved in any financing transaction is based on three elements: 1. Principal (p): The original amount borrowed or invested. 2. Interest Rate (i): An annual percentage of the principal. 3. Time (n): The number of years that the principal is borrowed or invested. Simple Interest Simple interest is computed on the principal amount only. It is the return on the principal for one period. 
Simple interest is usually expressed as shown in Illustration C-1. STUDY OBJECTIVE 1 Distinguish between simple and compound interest.
Illustration C-1 Interest computation: Interest = Principal (p) × Rate (i) × Time (n)
For example, if you borrowed $5,000 for 2 years at a simple interest rate of 12% annually, you would pay $1,200 in total interest, computed as follows:
Interest = p × i × n = $5,000 × .12 × 2 = $1,200
Compound Interest Compound interest is computed on principal and on any interest earned that has not been paid or withdrawn. It is the return on the principal for two or more time periods. Compounding computes interest not only on the principal but also on the interest earned to date on that principal, assuming the interest is left on deposit.
To illustrate the difference between simple and compound interest, assume that you deposit $1,000 in Bank Two, where it will earn simple interest of 9% per year, and you deposit another $1,000 in Citizens Bank, where it will earn compound interest of 9% per year compounded annually. Also assume that in both cases you will not withdraw any interest until three years from the date of deposit. Illustration C-2 shows the computation of interest you will receive and the accumulated year-end balances.
Illustration C-2 Simple versus compound interest
Bank Two (simple interest):
Year 1: $1,000.00 × 9% = $90.00; accumulated year-end balance $1,090.00
Year 2: $1,000.00 × 9% = $90.00; accumulated year-end balance $1,180.00
Year 3: $1,000.00 × 9% = $90.00; accumulated year-end balance $1,270.00
Total simple interest: $270.00
Citizens Bank (compound interest):
Year 1: $1,000.00 × 9% = $90.00; accumulated year-end balance $1,090.00
Year 2: $1,090.00 × 9% = $98.10; accumulated year-end balance $1,188.10
Year 3: $1,188.10 × 9% = $106.93; accumulated year-end balance $1,295.03
Total compound interest: $295.03
Difference: $25.03
Note in Illustration C-2 that simple interest uses the initial principal of $1,000 to compute the interest in all three years. Compound interest uses the accumulated balance (principal plus interest to date) at each year-end to compute interest in the succeeding year, which explains why your compound interest account is larger. Obviously, if you had a choice between investing your money at simple interest or at compound interest, you would choose compound interest, all other things (especially risk) being equal. In the example, compounding provides $25.03 of additional interest income. For practical purposes, compounding assumes that unpaid interest earned becomes a part of the principal, and the accumulated balance at the end of each year becomes the new principal on which interest is earned during the next year. Illustration C-2 indicates that you should invest your money at the bank that compounds interest annually. Most business situations use compound interest. Simple interest is generally applicable only to short-term situations of one year or less.
SECTION 1 Future Value Concepts
FUTURE VALUE OF A SINGLE AMOUNT STUDY OBJECTIVE 2 Solve for future value of a single amount.
The future value of a single amount is the value at a future date of a given amount invested assuming compound interest. For example, in Illustration C-2, $1,295.03 is the future value of the $1,000 at the end of three years. The $1,295.03 could be determined more easily by using the following formula.
FV p (1 i) n Illustration C-3 Formula for future value where: FV p i n future value of a single amount principal (or present value) interest rate for one period number of periods FV p (1 i)n $1,000 (1 i)3 $1,000 1.29503 $1,295.03 The $1,295.03 is computed as follows. The 1.29503 is computed by multiplying (1.09 1.09 1.09). The amounts in this example can be depicted in the following time diagram. Illustration C-4 Time diagram i = 9% Present Value (p) Future Value 0 $1,000 1 n = 3 years 2 3 $1,295.03 C4 Appendix C Time Value of Money Another method that can be used to compute the future value of a single amount involves the use of a compound interest table. This table shows the future value of 1 for n periods. Table 1, shown below, is such a table. TABLE 1 Future Value of 1 (n) Periods 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 4% 1.04000 1.08160 1.12486 1.16986 1.21665 1.26532 1.31593 1.36857 1.42331 1.48024 1.53945 1.60103 1.66507 1.73168 1.80094 1.87298 1.94790 2.02582 2.10685 2.19112 5% 1.05000 1.10250 1.15763 1.21551 1.27628 1.34010 1.40710 1.47746 1.55133 1.62889 1.71034 1.79586 1.88565 1.97993 2.07893 2.18287 2.29202 2.40662 2.52695 2.65330 6% 1.06000 1.12360 1.19102 1.26248 1.33823 1.41852 1.50363 1.59385 1.68948 1.79085 1.89830 2.01220 2.13293 2.26090 2.39656 2.54035 2.69277 2.85434 3.02560 3.20714 8% 1.08000 1.16640 1.25971 1.36049 1.46933 1.58687 1.71382 1.85093 1.99900 2.15892 2.33164 2.51817 2.71962 2.93719 3.17217 3.42594 3.70002 3.99602 4.31570 4.66096 9% 1.09000 1.18810 1.29503 1.41158 1.53862 1.67710 1.82804 1.99256 2.17189 2.36736 2.58043 2.81267 3.06581 3.34173 3.64248 3.97031 4.32763 4.71712 5.14166 5.60441 10% 1.10000 1.21000 1.33100 1.46410 1.61051 1.77156 1.94872 2.14359 2.35795 2.59374 2.85312 3.13843 3.45227 3.79750 4.17725 4.59497 5.05447 5.55992 6.11591 6.72750 11% 1.11000 1.23210 1.36763 1.51807 1.68506 1.87041 2.07616 2.30454 2.55803 2.83942 3.15176 3.49845 3.88328 4.31044 4.78459 5.31089 5.89509 6.54355 7.26334 8.06231 12% 1.12000 1.25440 1.40493 1.57352 1.76234 1.97382 2.21068 2.47596 2.77308 3.10585 3.47855 3.89598 4.36349 4.88711 5.47357 6.13039 6.86604 7.68997 8.61276 9.64629 15% 1.15000 1.32250 1.52088 1.74901 2.01136 2.31306 2.66002 3.05902 3.51788 4.04556 4.65239 5.35025 6.15279 7.07571 8.13706 9.35762 10.76126 12.37545 14.23177 16.36654 In Table 1, n is the number of compounding periods, the percentages are the periodic interest rates, and the five-digit decimal numbers in the respective columns are the future value of 1 factors. In using Table 1, the principal amount is multiplied by the future value factor for the specified number of periods and interest rate. For example, the future value factor for two periods at 9% is 1.18810. Multiplying this factor by $1,000 equals $1,188.10, which is the accumulated balance at the end of year 2 in the Citizens Bank example in Illustration C-2. The $1,295.03 accumulated balance at the end of the third year can be calculated from Table 1 by multiplying the future value factor for three periods (1.29503) by the $1,000. The following demonstration problem illustrates how to use Table 1. Future Value of an Annuity C5 John and Mary Rich invested $20,000 in a savings account paying 6% interest at the time their son, Mike, was born. The money is to be used by Mike for his college education. On his 18th birthday, Mike withdraws the money from his savings account. How much did Mike withdraw from his account? Future Value = ? 
Present Value (p) i = 6% 0 1 $20,000 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 n = 18 years Answer: The future value factor from Table 1 is 2.85434 (18 periods at 6%). The future value of $20,000 earning 6% per year for 18 years is $57,086.80 ($20,000 2.85434). Illustration C-5 Demonstration Problem Using Table 1 for FV of 1 FUTURE VALUE OF AN ANNUITY The preceding discussion involved the accumulation of only a single STUDY OBJECTIVE 3 principal sum. Individuals and businesses frequently encounter situa- Solve for future value of an tions in which a series of equal dollar amounts are to be paid or received annuity. periodically, such as loans or lease (rental) contracts. Such payments or receipts of equal dollar amounts are referred to as annuities. the periodic payments or receipts. To illustrate the computation of the future value of an annuity, assume that you invest $2,000 at the end of each year for three years at 5% interest compounded annually. This situation is depicted in the time diagram in Illustration C-6. Illustration C-6 Time diagram for a threeyear annuity i = 5% Present Value $2,000 $2,000 Future Value = ? $2,000 0 1 n = 3 years 2 3 C6 Appendix C Time Value of Money As can be seen in Illustration C-6, the $2,000 invested at the end of year 1 will earn interest for two years (years 2 and 3), and the $2,000 invested at the end of year 2 will earn interest for one year (year 3). However, the last $2,000 investment (made at the end of year 3) will not earn any interest. The future value of these periodic payments could be computed using the future value factors from Table 1 as shown in Illustration C-7. Illustration C-7 Future value of periodic payments Year Invested 1 2 3 Amount Invested $2,000 $2,000 $2,000 Future Value of 1 Factor at 5% 1.10250 1.05000 1.00000 3.15250 Future Value $ 2,205 2,100 2,000 $6,305 The first $2,000 investment is multiplied by the future value factor for two periods (1.1025) because two years interest will accumulate on it (in years 2 and 3). The second $2,000 investment will earn only one years interest (in year 3) and therefore is multiplied by the future value factor for one year (1.0500). The final $2,000 investment is made at the end of the third year and will not earn any interest. Consequently, the future value of the last $2,000 invested is only $2,000 since it does not accumulate any interest. This method of calculation is required when the periodic payments or receipts are not equal in each period. However, when the periodic payments (receipts) are the same in each period, the future value can be computed by using a future value of an annuity of 1 table. Table 2, shown below, is such a table. 
TABLE 2 Future Value of an Annuity of 1 (n) Periods 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 4% 1.00000 2.04000 3.12160 4.24646 5.41632 6.63298 7.89829 9.21423 10.58280 12.00611 13.48635 15.02581 16.62684 18.29191 20.02359 21.82453 23.69751 25.64541 27.67123 29.77808 5% 1.00000 2.05000 3.15250 4.31013 5.52563 6.80191 8.14201 9.54911 11.02656 12.57789 14.20679 15.91713 17.71298 19.59863 21.57856 23.65749 25.84037 28.13238 30.53900 33.06595 6% 1.00000 2.06000 3.18360 4.37462 5.63709 6.97532 8.39384 9.89747 11.49132 13.18079 14.97164 16.86994 18.88214 21.01507 23.27597 25.67253 28.21288 30.90565 33.75999 36.78559 8% 1.00000 2.08000 3.24640 4.50611 5.86660 7.33592 8.92280 10.63663 12.48756 14.48656 16.64549 18.97713 21.49530 24.21492 27.15211 30.32428 33.75023 37.45024 41.44626 45.76196 9% 1.00000 2.09000 3.27810 4.57313 5.98471 7.52334 9.20044 11.02847 13.02104 15.19293 17.56029 20.14072 22.95339 26.01919 29.36092 33.00340 36.97351 41.30134 46.01846 51.16012 10% 1.00000 2.10000 3.31000 4.64100 6.10510 7.71561 9.48717 11.43589 13.57948 15.93743 18.53117 21.38428 24.52271 27.97498 31.77248 35.94973 40.54470 45.59917 51.15909 57.27500 11% 1.00000 2.11000 3.34210 4.70973 6.22780 7.91286 9.78327 11.85943 14.16397 16.72201 19.56143 22.71319 26.21164 30.09492 34.40536 39.18995 44.50084 50.39593 56.93949 64.20283 12% 1.00000 2.12000 3.37440 4.77933 6.35285 8.11519 10.08901 12.29969 14.77566 17.54874 20.65458 24.13313 28.02911 32.39260 37.27972 42.75328 48.88367 55.74972 63.43968 72.05244 15% 1.00000 2.15000 3.47250 4.99338 6.74238 8.75374 11.06680 13.72682 16.78584 20.30372 24.34928 29.00167 34.35192 40.50471 47.58041 55.71747 65.07509 75.83636 88.21181 102.44358 Present Value Variables C7 Table 2 shows the future value of 1 to be received periodically for a given number of periods. You can see from Table 2 that the future value of an annuity of 1 factor for three periods at 5% is 3.15250. The future value factor is the total of the three individual future value factors as shown in Illustration C-8. Multiplying this amount by the annual investment of $2,000 produces a future value of $6,305. The demonstration problem in Illustration C-8 illustrates how to use Table 2. Illustration C-8 Demonstration Problem Using Table 2 for FV of an annuity of 1 Henning Printing Company knows that in four years it must replace one of its existing printing presses with a new one. To insure that some funds are available to replace the machine in 4 years, the company is depositing $25,000 in a savings account at the end of each of the next four years (4 deposits in total). The savings account will earn 6% interest compounded annually. How much will be in the savings account at the end of 4 years when the new printing press is to be purchased? i = 6% Present Value $25,000 $25,000 $25,000 Future Value = ? $25,000 0 1 2 n = 4 years 3 4 Answer: The future value factor from Table 2 is 4.37462 (4 periods at 6%). The future value of $25,000 invested at the end of each year for 4 years at 6% interest is $109,365.50 ($25,000 4.37462). SECTION 2 Present Value Concepts PRESENT VALUE VARIABLES The present value is the value now of a given amount to be paid or reSTUDY OBJECTIVE 4 ceived in the future, assuming compound interest. The present value is Identify the variables fundamental based on three variables: (1) the dollar amount to be received (future to solving present value problems. amount), (2) the length of time until the amount is received (number of periods), and (3) the interest rate (the discount rate). 
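Before turning to discounting, the future value computations above can be reproduced without the factor tables. The short Python sketch below is a minimal illustration (the function names are my own, not from this appendix): it derives the Table 1 and Table 2 factors directly from the compound interest formula and re-checks the Bank Two versus Citizens Bank comparison and the demonstration problems in Illustrations C-5 and C-8.

```python
# Minimal sketch (my own helper names): future values computed directly from
# the compound interest formula FV = p * (1 + i)^n instead of from the tables.

def simple_interest(p, i, n):
    """Interest on the original principal only: p * i * n."""
    return p * i * n

def fv_factor(i, n):
    """Future value of 1 (the Table 1 factor): (1 + i)^n."""
    return (1 + i) ** n

def fv_annuity_factor(i, n):
    """Future value of an annuity of 1 (the Table 2 factor) for
    end-of-period payments: ((1 + i)^n - 1) / i."""
    return ((1 + i) ** n - 1) / i

# Illustration C-2: $1,000 for 3 years at 9%, simple vs. compound.
print(1_000 + simple_interest(1_000, 0.09, 3))        # 1270.0
print(round(1_000 * fv_factor(0.09, 3), 2))           # 1295.03

# Illustration C-5: $20,000 at 6% for 18 years (Table 1 factor 2.85434).
print(round(20_000 * fv_factor(0.06, 18), 2))         # roughly 57,086.80

# Illustration C-8: four $25,000 year-end deposits at 6% (Table 2 factor 4.37462).
print(round(25_000 * fv_annuity_factor(0.06, 4), 2))  # roughly 109,365.50
```

The computed factors agree with the five-decimal table values to within rounding.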
The process of determining the present value is referred to as discounting the future amount. In this textbook, we use present value computations in measuring several items. For example, Chapter 11 computed the present value of the principal and interest payments to determine the market price of a bond. In addition, determining the amount to be reported for notes payable involves present value computations. C8 Appendix C Time Value of Money PRESENT VALUE OF A SINGLE AMOUNT STUDY OBJECTIVE 5 Solve for present value of a single amount. To illustrate present value, assume that you want to invest a sum of money that will yield $1,000 at the end of one year. What amount would you need to invest today to have $1,000 one year from now? Illustration C-9 shows the formula for calculating present value. Illustration C-9 Formula for present value Present Value Future Value (1 i )n Thus, if you want a 10% rate of return, you would compute the present value of $1,000 for one year as follows: PV PV PV FV (1 i)n $1,000 (1 .10)1 $1,000 1.10 $909.09 We know the future amount ($1,000), the discount rate (10%), and the number of periods (one). These variables are depicted in the time diagram in Illustration C-10. Illustration C-10 Finding present value if discounted for one period Present Value (?) i = 10% Future Value $909.09 n = 1 year $1,000 If you receive the single amount of $1,000 in two years, discounted at 10% [PV $1,000 (1 .10)2], the present value of your $1,000 is $826.45 [($1,000 1.21), depicted as shown in Illustration C-11 below. Illustration C-11 Finding present value if discounted for two periods Present Value (?) i = 10% Future Value 0 $826.45 1 n = 2 years 2 $1,000 You also could find the present value of your amount through tables that show the present value of 1 for n periods. In Table 3, on the next page, n (represented in Present Value of a Single Amount C9 the tables rows) is the number of discounting periods involved. The percentages (represented in the tables columns) are the periodic interest rates or discount rates. The five-digit decimal numbers in the intersections of the rows and columns are called the present value of 1 factors. When using Table 3 to determine present value, you multiply the future value by the present value factor specified at the intersection of the number of periods and the discount rate. 
TABLE 3 Present Value of 1 (n) Periods 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 4% .96154 .92456 .88900 .85480 .82193 .79031 .75992 .73069 .70259 .67556 .64958 .62460 .60057 .57748 .55526 .53391 .51337 .49363 .47464 .45639 5% .95238 .90703 .86384 .82270 .78353 .74622 .71068 .67684 .64461 .61391 .58468 .55684 .53032 .50507 .48102 .45811 .43630 .41552 .39573 .37689 6% .94340 .89000 .83962 .79209 .74726 .70496 .66506 .62741 .59190 .55839 .52679 .49697 .46884 .44230 .41727 .39365 .37136 .35034 .33051 .31180 8% .92593 .85734 .79383 .73503 .68058 .63017 .58349 .54027 .50025 .46319 .42888 .39711 .36770 .34046 .31524 .29189 .27027 .25025 .23171 .21455 9% .91743 .84168 .77218 .70843 .64993 .59627 .54703 .50187 .46043 .42241 .38753 .35554 .32618 .29925 .27454 .25187 .23107 .21199 .19449 .17843 10% .90909 .82645 .75132 .68301 .62092 .56447 .51316 .46651 .42410 .38554 .35049 .31863 .28966 .26333 .23939 .21763 .19785 .17986 .16351 .14864 11% .90090 .81162 .73119 .65873 .59345 .53464 .48166 .43393 .39092 .35218 .31728 .28584 .25751 .23199 .20900 .18829 .16963 .15282 .13768 .12403 12% .89286 .79719 .71178 .63552 .56743 .50663 .45235 .40388 .36061 .32197 .28748 .25668 .22917 .20462 .18270 .16312 .14564 .13004 .11611 .10367 15% .86957 .75614 .65752 .57175 .49718 .43233 .37594 .32690 .28426 .24719 .21494 .18691 .16253 .14133 .12289 .10687 .09293 .08081 .07027 .06110 For example, the present value factor for one period at a discount rate of 10% is .90909, which equals the $909.09 ($1,000 .90909) computed in Illustration C-10. For two periods at a discount rate of 10%, the present value factor is .82645, which equals the $826.45 ($1,000 .82645) computed previously. Note that a higher discount rate produces a smaller present value. For example, using a 15% discount rate, the present value of $1,000 due one year from now is $869.57, versus $909.09 at 10%. Also note that the further removed from the present the future value is, the smaller the present value. For example, using the same discount rate of 10%, the present value of $1,000 due in five years is $620.92, versus the present value of $1,000 due in one year, which is $909.09. The two demonstration problems on the next page (Illustrations C-12, C-13) illustrate how to use Table 3. C10 Appendix C Time Value of Money Illustration C-12 Demonstration problem Using Table 3 for PV of 1 Suppose you have a winning lottery ticket and the state gives you the option of taking $10,000 three years from now or taking the present value of $10,000 now. The state uses an 8% rate in discounting. How much will you receive if you accept your winnings now? PV = ? i = 8% $10,000 Now 1 n=3 2 3 years Answer: The present value factor from Table 3 is .79383 (3 periods at 8%). The present value of $10,000 to be received in 3 years discounted at 8% is $7,938.30 ($10,000 .79383). Illustration C-13 Demonstration problem Using Table 3 for PV of 1 Determine the amount you must deposit now in your SUPER savings account, paying 9% interest, in order to accumulate $5,000 for a down payment 4 years from now on a new Chevy Tahoe. PV = ? i = 9% $5,000 Now 1 2 n=4 3 4 years Answer: The present value factor from Table 3 is .70843 (4 periods at 9%). The present value of $5,000 to be received in 4 years discounted at 9% is $3,542.15 ($5,000 .70843). PRESENT VALUE OF AN ANNUITY The preceding discussion involved the discounting of only a single future amount. 
Businesses and individuals frequently engage in transactions in Solve for present value of an which a series of equal dollar amounts are to be received or paid periodiannuity. cally. Examples of a series of periodic receipts or payments are loan agreements, installment sales, mortgage notes, lease (rental) contracts, and pension obligations. These periodic receipts or payments are annuities. The present value of an annuity is the value now of a series of future receipts or payments, discounted assuming compound interest. In computing the present value of an annuity, you need to know: (1) the discount rate, (2) the number of discount periods, and (3) the amount of the periodic receipts or payments. To illustrate how to compute the present value of an annuity, assume that you will receive $1,000 cash annually for three years at a time when the discount rate is 10%. Illustration C-14 depicts this situation, and Illustration C-15 shows the computation of its present value. STUDY OBJECTIVE 6 Present Value of an Annuity C11 PV = ? $1,000 i = 10% n=3 $1,000 $1,000 Illustration C-14 Time diagram for a threeyear annuity Now 1 2 3 years Future Amount $1,000 (one year away) 1,000 (two years away) 1,000 (three years away) Present Value of 1 Factor at 10% .90909 .82645 . 75132 2.48686 Present Value $ 909.09 826.45 751.32 $2,486.86 Illustration C-15 Present value of a series of future amounts computation This method of calculation is required when the periodic cash flows are not uniform in each period. However, when the future receipts are the same in each period, there are two other ways to compute present value. First, you can multiply the annual cash flow by the sum of the three present value factors. In the previous example, $1,000 2.48686 equals $2,486.86. The second method is to use annuity tables. As illustrated in Table 4 below, these tables show the present value of 1 to be received periodically for a given number of periods. 
TABLE 4 Present Value of an Annuity of 1 (n) Periods 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 4% .96154 1.88609 2.77509 3.62990 4.45182 5.24214 6.00205 6.73274 7.43533 8.11090 8.76048 9.38507 9.98565 10.56312 11.11839 11.65230 12.16567 12.65930 13.13394 13.59033 5% .95238 1.85941 2.72325 3.54595 4.32948 5.07569 5.78637 6.46321 7.10782 7.72173 8.30641 8.86325 9.39357 9.89864 10.37966 10.83777 11.27407 11.68959 12.08532 12.46221 6% .94340 1.83339 2.67301 3.46511 4.21236 4.91732 5.58238 6.20979 6.80169 7.36009 7.88687 8.38384 8.85268 9.29498 9.71225 10.10590 10.47726 10.82760 11.15812 11.46992 8% .92593 1.78326 2.57710 3.31213 3.99271 4.62288 5.20637 5.74664 6.24689 6.71008 7.13896 7.53608 7.90378 8.24424 8.55948 8.85137 9.12164 9.37189 9.60360 9.81815 9% .91743 1.75911 2.53130 3.23972 3.88965 4.48592 5.03295 5.53482 5.99525 6.41766 6.80519 7.16073 7.48690 7.78615 8.06069 8.31256 8.54363 8.75563 8.95012 9.12855 10% .90909 1.73554 2.48685 3.16986 3.79079 4.35526 4.86842 5.33493 5.75902 6.14457 6.49506 6.81369 7.10336 7.36669 7.60608 7.82371 8.02155 8.20141 8.36492 8.51356 11% .90090 1.71252 2.44371 3.10245 3.69590 4.23054 4.71220 5.14612 5.53705 5.88923 6.20652 6.49236 6.74987 6.98187 7.19087 7.37916 7.54879 7.70162 7.83929 7.96333 12% .89286 1.69005 2.40183 3.03735 3.60478 4.11141 4.56376 4.96764 5.32825 5.65022 5.93770 6.19437 6.42355 6.62817 6.81086 6.97399 7.11963 7.24967 7.36578 7.46944 15% .86957 1.62571 2.28323 2.85498 3.35216 3.78448 4.16042 4.48732 4.77158 5.01877 5.23371 5.42062 5.58315 5.72448 5.84737 5.95424 6.04716 6.12797 6.19823 6.25933 C12 Appendix C Time Value of Money Table 4 shows that the present value of an annuity of 1 factor for three periods at 10% is 2.48685.1 (This present value factor is the total of the three individual present value factors, as shown in Illustration C-15.) Applying this amount to the annual cash flow of $1,000 produces a present value of $2,486.85. The following demonstration problem (Illustration C-16) illustrates how to use Table 4. Illustration C-16 Demonstration problem Using Table 4 for PV of an annuity of 1 Kildare Company has just signed a capitalizable lease contract for equipment that requires rental payments of $6,000 each, to be paid at the end of each of the next 5 years. The appropriate discount rate is 12%. What is the present value of the rental paymentsthat is, the amount used to capitalize the leased equipment? PV = ? $6,000 $6,000 $6,000 i = 12% n=5 2 3 $6,000 $6,000 Now 1 4 5 years Answer: The present value factor from Table 4 is 3.60478 (5 periods at 12%). The present value of 5 payments of $6,000 each discounted at 12% is $21,628.68 ($6,000 3.60478). TIME PERIODS AND DISCOUNTING In the preceding calculations, the discounting was done on an annual basis using an annual interest rate. Discounting may also be done over shorter periods of time such as monthly, quarterly, or semiannually. When the time frame is less than one year, you need to convert the annual interest rate to the applicable time frame. Assume, for example, that the investor in Illustration C-14 received $500 semiannually for three years instead of $1,000 annually. In this case, the number of periods becomes six (3 2), the discount rate is 5% (10% 2), the present value factor from Table 4 is 5.07569, and the present value of the future cash flows is $2,537.85 (5.07569 $500). 
This amount is slightly higher than the $2,486.86 computed in Illustration C-15 because interest is paid twice during the same year; therefore interest is earned on the first half years interest. COMPUTING THE PRESENT VALUE OF A LONG-TERM NOTE OR BOND STUDY OBJECTIVE 7 Compute the present value of notes and bonds. The present value (or market price) of a long-term note or bond is a function of three variables: (1) the payment amounts, (2) the length of time until the amounts are paid, and (3) the discount rate. Our illustration uses a five-year bond issue. 1 The difference of .00001 between 2.48686 and 2.48685 is due to rounding. Computing the Present Value of a Long-Term Note or Bond C13 The first variabledollars to be paidis made up of two elements: (1) a series of interest payments (an annuity), and (2) the principal amount (a single sum). To compute the present value of the bond, we must discount both the interest payments and the principal amounttwo different computations. The time diagrams for a bond due in five years are shown in Illustration C-17. Illustration C-17 Present value of a bond time diagram Diagram for Principal Present Value (?) Interest Rate (i) n=5 Principal Amount Now 1 yr. 2 yr. 3 yr. 4 yr. 5 yr. Diagram for Interest Present Value (?) Interest Interest Rate (i) Interest Interest n=5 Interest Interest Now 1 yr. 2 yr. 3 yr. 4 yr. 5 yr. When the investors market interest rate is equal to the bonds contractual interest rate, the present value of the bonds will equal the face value of the bonds. To illustrate, assume a bond issue of 10%, five-year bonds with a face value of $100,000 with interest payable semiannually on January 1 and July 1. If the discount rate is the same as the contractual rate, the bonds will sell at face value. In this case, the investor will receive the following: (1) $100,000 at maturity, and (2) a series of ten $5,000 interest payments [($100,000 10%) 2] over the term of the bonds. The length of time is expressed in terms of interest periodsin this case10, and the discount rate per interest period, 5%. The following time diagram (Illustration C-18) depicts the variables involved in this discounting situation. Illustration C-18 Time diagram for present value of a 10%, five-year bond paying interest semiannually Diagram for Principal Present Value (?) i = 5% Principal Amount $100,000 Now 1 2 3 4 5 n = 10 6 7 8 9 10 Diagram for Interest Present Interest Value i = 5% Payments (?) $5,000 $5,000 $5,000 $5,000 $5,000 $5,000 $5,000 $5,000 $5,000 $5,000 Now 1 2 3 4 5 n = 10 6 7 8 9 10 C14 Appendix C Time Value of Money Illustration C-19 shows the computation of the present value of these bonds. Illustration C-19 Present value of principal and interestface value 10% Contractual Rate10% Discount Rate Present value of principal to be received at maturity $100,000 PV of 1 due in 10 periods at 5% $100,000 .61391 (Table 3) Present value of interest to be received periodically over the term of the bonds $5,000 PV of 1 due periodically for 10 periods at 5% $5,000 7.72173 (Table 4) Present value of bonds *Rounded $ 61,391 38,609* $100,000 Now assume that the investors required rate of return is 12%, not 10%.The future amounts are again $100,000 and $5,000, respectively, but now a discount rate of 6% (12% 2) must be used. The present value of the bonds is $92,639, as computed in Illustration C-20. 
Illustration C-20 Present value of principal and interestdiscount 10% Contractual Rate12% Discount Rate Present value of principal to be received at maturity $100,000 .55839 (Table 3) Present value of interest to be received periodically over the term of the bonds $5,000 7.36009 (Table 4) Present value of bonds $55,839 36,800 $92,639 Conversely, if the discount rate is 8% and the contractual rate is 10%, the present value of the bonds is $108,111, computed as shown in Illustration C-21. Illustration C-21 Present value of principal and interestpremium 10% Contractual Rate8% Discount Rate Present value of principal to be received at maturity $100,000 .67556 (Table 3) Present value of interest to be received periodically over the term of the bonds $5,000 8.11090 (Table 4) Present value of bonds $ 67,556 40,555 $108,111 The above discussion relies on present value tables in solving present value problems. Many people use spreadsheets such as Excel or Financial calculators (some even on websites) to compute present values, without the use of tables. Many calculators, especially financial calculators, have present value ( PV ) functions that allow you to calculate present values by merely inputting the proper amount, discount rate, and periods, and pressing the PV key. The next section illustrates how to use a financial calculator in various business situations. Using Financial CalculatorsPresent Value of a Single Sum C15 SECTION 3 Using Financial Calculators Business professionals, once they have mastered the underlying concepts STUDY OBJECTIVE 8 in sections 1 and 2, often use a financial (business) calculator to solve time Use a financial calculator to solve value of money problems. In many cases, they must use calculators if in- time value of money problems. terest rates or time periods do not correspond with the information provided in the compound interest tables. To use financial calculators, you enter the time value of money variables into the calculator. Illustration C-22 shows the five most common keys used to solve time value of money problems.2 Illustration C-22 Financial calculator keys N I PV PMT FV where N I PV PMT FV number of periods interest rate per period (some calculators use I/YR or i) present value (occurs at the beginning of the first period) payment (all payments are equal, and none are skipped) future value (occurs at the end of the last period) In solving time value of money problems in this appendix, you will generally be given three of four variables and will have to solve for the remaining variable. The fifth key (the key not used) is given a value of zero to ensure that this variable is not used in the computation. PRESENT VALUE OF A SINGLE SUM To illustrate how to solve a present value problem using a financial calculator, assume that you want to know the present value of $84,253 to be received in five years, discounted at 11% compounded annually. Illustration C-23 pictures this problem. Illustration C-23 Calculator solution for present value of a single sum Inputs: 5 N 11 I ? PV 50,000 0 PMT 84,253 FV Answer: The diagram shows you the information (inputs) to enter into the calculator: N 5, I 11, PMT 0, and FV 84,253. You then press PV for the answer: $50,000. As indicated, the PMT key was given a value of zero because a series of payments did not occur in this problem. On many calculators, these keys are actual buttons on the face of the calculator; on others they appear on the display after the user accesses a present value menu. 
Plus and Minus
The use of plus and minus signs in time value of money problems with a financial calculator can be confusing. Most financial calculators are programmed so that the positive and negative cash flows in any problem offset each other. In the present value problem above, we entered the $84,253 future value as a positive amount (an inflow); the answer, $50,000 (the initial investment), was shown as a negative amount, reflecting a cash outflow. If the $84,253 had been entered as a negative amount, the final answer would have been reported as a positive $50,000. Hopefully, the sign convention will not cause confusion. If you understand what is required in a problem, you should be able to interpret a positive or negative amount in determining the solution to a problem.

Compounding Periods
In the problem on page C15, we assumed that compounding occurs once a year. Some financial calculators have a default setting that assumes compounding occurs 12 times a year. You must determine what default period has been programmed into your calculator and change it as necessary to arrive at the proper compounding period.

Rounding
Most financial calculators store and calculate using 12 decimal places. Because compound interest tables generally carry factors to only 5 decimal places, a slight difference in the final answer can result. In most time value of money problems, the final answer will not include more than two decimal places.

PRESENT VALUE OF AN ANNUITY
To illustrate how to solve a present value of an annuity problem using a financial calculator, assume that you are asked to determine the present value of rental receipts of $6,000 each, to be received at the end of each of the next five years, when discounted at 12%, as pictured in Illustration C-24.

[Illustration C-24: Calculator solution for present value of an annuity. Inputs: N = 5, I = 12, PV = ?, PMT = 6,000, FV = 0. Answer: PV = 21,628.66.]

In this case, you enter N = 5, I = 12, PMT = 6,000, FV = 0, and then press PV to arrive at the answer of $21,628.66.

USEFUL APPLICATIONS OF THE FINANCIAL CALCULATOR
With a financial calculator you can solve for any interest rate or for any number of periods in a time value of money problem. Here are some examples of these applications.

Auto Loan
Assume you are financing a car with a three-year loan. The loan has a 9.5% nominal annual interest rate, compounded monthly. The price of the car is $6,000, and you want to determine the monthly payments, assuming that the payments start one month after the purchase. This problem is pictured in Illustration C-25.

[Illustration C-25: Calculator solution for auto loan payments. Inputs: N = 36, I = 9.5, PV = 6,000, PMT = ?, FV = 0. Answer: PMT = 192.20.]

To solve this problem, you enter N = 36 (12 × 3), I = 9.5, PV = 6,000, FV = 0, and then press PMT. You will find that the monthly payments will be $192.20. Note that the payment key is usually programmed for 12 payments per year. Thus, you must change the default (compounding period) if the payments are other than monthly.

Mortgage Loan Amount
Let's say you are evaluating financing options for a loan on a house. You decide that the maximum mortgage payment you can afford is $700 per month. The annual interest rate is 8.4%. If you get a mortgage that requires you to make monthly payments over a 15-year period, what is the maximum purchase price you can afford? Illustration C-26 depicts this problem.

[Illustration C-26: Calculator solution for mortgage amount. Inputs: N = 180, I = 8.4, PV = ?]
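Before turning to the mortgage answer that follows, here is a brief Python sketch (our own illustration, not the calculator's internal routine; the function names are hypothetical) of the two computations just described. The annuity formula, rearranged, also produces the auto-loan payment and the mortgage amount.

```python
# Sketch: present value of an ordinary annuity, and the loan payment obtained
# by rearranging the same formula. Figures match the text's examples.

def pv_annuity(payment, rate_per_period, n_periods):
    """PV of equal end-of-period payments: PMT * (1 - (1 + i)^-n) / i."""
    return payment * (1 - (1 + rate_per_period) ** -n_periods) / rate_per_period

def loan_payment(principal, rate_per_period, n_periods):
    """Payment that amortizes a loan: the annuity formula solved for PMT."""
    return principal * rate_per_period / (1 - (1 + rate_per_period) ** -n_periods)

# Rental receipts: $6,000 at the end of each of 5 years, discounted at 12%
print(round(pv_annuity(6_000, 0.12, 5), 2))             # about 21,628.66

# Auto loan: $6,000 for 36 months at a 9.5% nominal annual rate (9.5%/12 per month)
print(round(loan_payment(6_000, 0.095 / 12, 36), 2))    # about 192.20

# Mortgage example discussed next: value of $700/month for 180 months at 8.4%/12
print(round(pv_annuity(700, 0.084 / 12, 180), 2))       # about 71,509.81
```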
PV 71,509.81 700 PMT 0 FV Answer: You enter N 180 (12 15 years), I 8.4, PMT 700, FV 0, and press PV. With the payment-per-year key set at 12, you find a present value of $71,509.81 the maximum house price you can afford, given that you want to keep your mortgage payments at $700. Note that by changing any of the variables, you can quickly conduct what-if analyses for different situations. SUMMARY OF STUDY OBJECTIVES 1. Distinguish between simple and compound interest. Simple interest is computed on the principal only, whereas compound interest is computed on the principal and any interest earned that has not been withdrawn. 2. Solve for future value of a single amount. Prepare a time diagram of the problem. Identify the principal amount, the number of compounding periods, and the interest rate. Using the future value of 1 table, multiply the principal amount by the future value factor specified at the intersection of the number of periods and the interest rate. 3. Solve for future value of an annuity. Prepare a time diagram of the problem. Identify the amount of the periodic payments, the number of compounding periods, and the C18 Appendix C Time Value of Money 7. Compute the present value of notes and bonds. To determine the present value of the principal amount: Multiply the principal amount (a single future amount) by the present value factor (from the present value of 1 table) intersecting at the number of periods (number of interest payments) and the discount rate. To determine the present value of the series of interest payments: Multiply the amount of the interest payment by the present value factor (from the present value of an annuity of 1 table) intersecting at the number of periods (number of interest payments) and the discount rate. Add the present value of the principal amount to the present value of the interest payments to arrive at the present value of the note or bond. 8. Use a financial calculator to solve time value of money problems. Financial calculators can be used to solve the same and additional problems as those solved with time value of money tables. One enters into the financial calculator the amounts for all of the known elements of a time value of money problem (periods, interest rate, payments, future or present value) and solves for the unknown element. Particularly useful situations involve interest rates and compounding periods not presented in the tables. interest rate. Using the future value of an annuity of 1 table, multiply the amount of the payments by the future value factor specified at the intersection of the number of periods and the interest rate. 4. Identify the variables fundamental to solving present value problems. The following three variables are fundamental to solving present value problems: (1) the future amount, (2) the number of periods, and (3) the interest rate (the discount rate). 5. Solve for present value of a single amount. Prepare a time diagram of the problem. Identify the future amount, the number of discounting periods, and the discount (interest) rate. Using the present value of 1 table, multiply the future amount by the present value factor specified at the intersection of the number of periods and the discount rate. 6. Solve for present value of an annuity. Prepare a time diagram of the problem. Identify the future amounts (annuities), the number of discounting periods, and the discount (interest) rate. 
Using the present value of an annuity of 1 table, multiply the amount of the annuity by the present value factor specified at the intersection of the number of periods and the interest rate. GLOSSARY Annuity A series of equal dollar amounts to be paid or received periodically. (p. C5, C10) Compound interest The interest computed on the principal and any interest earned that has not been paid or received. (p. C2) Discounting the future amount(s) The process of determining present value. (p. C7) Future value of a single amount The value at a future date of a given amount invested assuming compound interest. (p. C3) Future value of an annuity The sum of all the payments or receipts plus the accumulated compound interest on them. (p. C5) Interest Payment for the use of anothers money. (p. C1) Present value The value now of a given amount to be invested or received in the future assuming compound interest. (p. C7) Present value of an annuity A series of future receipts or payments discounted to their value now assuming compound interest. (p. C10) Principal The amount borrowed or invested. (p. C1) Simple interest The interest computed on the principal only. (p. C1) BRIEF EXERCISES Use tables to solve Brief Exercises 1-23. Compute the future value of a single amount. (SO 2) BEC-1 Russ Holub invested $4,000 at 5% annual interest, and left the money invested without withdrawing any of the interest for 10 years. At the end of the 10 years, Russ withdrew the accumulated amount of money. (a) What amount did Russ withdraw assuming the investment earns simple interest? (b) What amount did Russ withdraw assuming the investment earns interest compound annually? Use future value tables. (SO 2, 3) BEC-2 For each of the following cases, indicate (1) to what interest rate columns and (2) to what number of periods you would refer in looking up the future value factor. 1. In Table 1 (future value of 1): Annual Rate (a) (b) 8% 5% Number of Years Invested 5 3 Compounded Annually Semiannually Brief Exercises 2. In Table 2 (future value of an annuity of 1): C19 Annual Rate (a) (b) 5% 4% Number of Years Invested 10 6 Compounded Annually Semiannually Compute the future value of a single amount. (SO 2) BEC-3 Racine Company signed a lease for an office building for a period of 10 years. Under the lease agreement, a security deposit of $10,000 is made. The deposit will be returned at the expiration of the lease with interest compounded at 4% per year. What amount will Racine receive at the time the lease expires? BEC-4 Chaffee Company issued $1,000,000, 10-year bonds and agreed to make annual sinking fund deposits of $75,000. The deposits are made at the end of each year into an account paying 6% annual interest. What amount will be in the sinking fund at the end of 10 years? BEC-5 Wayne and Brenda Anderson invested $5,000 in a savings account paying 5% compound annual interest when their daughter, Sue, was born. They also deposited $1,000 on each of her birthdays until she was 18 (including her 18th birthday). How much will be in the savings account on her 18th birthday (after the last deposit)? BEC-6 Ty Ngu borrowed $20,000 on July 1, 2002. This amount plus accrued interest at 6% compounded annually is to be repaid on July 1, 2008. How much will Ty have to repay on July 1, 2008? BEC-7 For each of the following cases, indicate (a) to what interest rate columns and (b) to what number of periods you would refer in looking up the discount rate. 1. 
In Table 3 (present value of 1): Annual Rate (a) (b) (c) 12% 10% 8% Number of Years Involved 6 15 10 Discounts Per Year Annually Annually Semiannually Compute the future value of an annuity. (SO 3) Compute the future value of a single amount and of an annuity. (SO 2, 3) Compute the future value of a single amount. (SO 2) Use present value tables. (SO 5, 6) 2. In Table 4 (present value of an annuity of 1): Annual Rate (a) (b) (c) 8% 10% 12% Number of Years Involved 20 5 4 Number of Payments Involved 20 5 8 Frequency of Payments Annually Annually Semiannually Determine present values. (SO 5, 6) Compute the present value of a single-sum investment. (SO 5) Compute the present value of a single-sum investment. (SO 5) Compute the present value of an annuity investment. (SO 6) Compute the present value of an annuity investment. (SO 6) Compute the present value of bonds. (SO 5, 6, 7) BEC-8 (a) What is the present value of $20,000 due 8 periods from now, discounted at 8%? (b) What is the present value of $20,000 to be received at the end of each of 6 periods, discounted at 9%? BEC-9 Gonzalez Company is considering an investment that will return a lump sum of $500,000 5 years from now. What amount should Gonzalez Company pay for this investment in order to earn a 10% return? BEC-10 Lasorda Company earns 9% on an investment that will return $875,000 8 years from now. What is the amount Lasorda should invest now in order to earn this rate of return? BEC-11 Bosco Company is considering investing in an annuity contract that will return $30,000 annually at the end of each year for 15 years. What amount should Bosco Company pay for this investment if it earns a 6% return? BEC-12 Modine Enterprises earns 11% on an investment that pays back $120,000 at the end of each of the next 4 years. What is the amount Modine Enterprises invested to earn the 11% rate of return? BEC-13 Midwest Railroad Co. is about to issue $100,000 of 10-year bonds paying a 10% interest rate, with interest payable semiannually. The discount rate for such securities is 8%. How much can Midwest expect to receive from the sale of these bonds? C20 Appendix C Time Value of Money BEC-14 Assume the same information as in BEC-13 except that the discount rate is 10% instead of 8%. In this case, how much can Midwest expect to receive from the sale of these bonds? BEC-15 Lounsbury Company receives a $50,000, 6-year note bearing interest of 8% (paid annually) from a customer at a time when the discount rate is 9%. What is the present value of the note received by Lounsbury Company? BEC-16 Hartzler Enterprises issued 8%, 8-year, $2,000,000 par value bonds that pay interest semiannually on October 1 and April 1. The bonds are dated April 1, 2008, and are issued on that date. The discount rate of interest for such bonds on April 1, 2008, is 10%. What cash proceeds did Hartzler receive from issuance of the bonds? BEC-17 Vinny Carpino owns a garage and is contemplating purchasing a tire retreading machine for $16,280.After estimating costs and revenues,Vinny projects a net cash flow from the retreading machine of $3,000 annually for 8 years. Vinny hopes to earn a return of 11% on such investments. What is the present value of the retreading operation? Should Vinny Carpino purchase the retreading machine? BEC-18 Rodriguez Company issues a 10%, 6-year mortgage note on January 1, 2008, to obtain financing for new equipment. Land is used as collateral for the note. 
The terms provide for semiannual installment payments of $56,413.What were the cash proceeds received from the issuance of the note? BEC-19 Goltra Company is considering purchasing equipment. The equipment will produce the following cash flows: Year 1, $30,000; Year 2, $40,000; Year 3, $50,000. Goltra requires a minimum rate of return of 12%. What is the maximum price Goltra should pay for this equipment? BEC-20 If Maria Sanchez invests $3,152 now, she will receive $10,000 at the end of 15 years. What annual rate of interest will Maria earn on her investment? (Hint: Use Table 3.) BEC-21 Lori Burke has been offered the opportunity of investing $42,410 now. The investment will earn 10% per year and at the end of that time will return Lori $100,000. How many years must Lori wait to receive $100,000? (Hint: Use Table 3.) BEC-22 Nancy Burns purchased an investment for $12,462.21. From this investment, she will receive $1,000 annually for the next 20 years, starting one year from now. What rate of interest will Nancys investment be earning for her? (Hint: Use Table 4.) BEC-23 Betty Estes invests $7,536.08 now for a series of $1,000 annual returns, beginning one year from now. Betty will earn a return of 8% on the initial investment. How many annual payments of $1,000 will Betty receive? (Hint: Use Table 4.) BEC-24 Reba McEntire wishes to invest $19,000 on July 1, 2008, and have it accumulate to $49,000 by July 1, 2018. Instructions Use a financial calculator to determine at what exact annual rate of interest Reba must invest the $19,000. Compute the present value of bonds. (SO 5, 6, 7) Compute the present value of a note. (SO 5, 6, 7) Compute the present value of bonds. (SO 5, 6, 7) Compute the value of a machine for purposes of making a purchase decision. (SO 7) Compute the present value of a note. (SO 5, 6) Compute the maximum price to pay for the equipment. (SO 7) Compute the interest rate on a single sum. (SO 5) Compute the number of periods of a single sum. (SO 5) Compute the interest rate on an annuity. (SO 6) Compute the number of periods of an annuity. (SO 6) Determine interest rate. (SO 8) Determine interest rate. (SO 8) BEC-25 On July 17, 2008, Tim McGraw borrowed $42,000 from his grandfather to open a clothing store. Starting July 17, 2009, Tim has to make 10 equal annual payments of $6,500 each to repay the loan. Instructions Use a financial calculator to determine what interest rate Tim is paying. Determine interest rate. (SO 8) BEC-26 As the purchaser of a new house, Patty Loveless has signed a mortgage note to pay the Memphis National Bank and Trust Co. $14,000 every 6 months for 20 years, at the end of which time she will own the house. At the date the mortgage is signed the purchase price was $198,000, and Loveless made a down payment of $20,000.The first payment will be made 6 months after the date the mortgage is signed. Instructions Using a financial calculator, compute the exact rate of interest earned on the mortgage by the bank. Brief Exercises BEC-27 Using a financial calculator, solve for the unknowns in each of the following situations. (a) On June 1, 2008, Shelley Long purchases lakefront property from her neighbor, Joey Brenner, and agrees to pay the purchase price in seven payments of $16,000 each, the first payment to be payable June 1, 2009. (Assume that interest compounded at an annual rate of 7.35% is implicit in the payments.) What is the purchase price of the property? 
(b) On January 1, 2008, Cooke Corporation purchased 200 of the $1,000 face value, 8% coupon, 10-year bonds of Howe Inc. The bonds mature on January 1, 2018, and pay interest annually beginning January 1, 2009. Cooke purchased the bonds to yield 10.65%. How much did Cooke pay for the bonds? BEC-28 Using a financial calculator, provide a solution to each of the following situations. (a) Bill Schroeder owes a debt of $35,000 from the purchase of his new sport utility vehicle. The debt bears annual interest of 9.1% compounded monthly. Bill wishes to pay the debt and interest in equal monthly payments over 8 years, beginning one month hence. What equal monthly payments will pay off the debt and interest? (b) On January 1, 2008, Sammy Sosa offers to buy Mark Graces used snowmobile for $8,000, payable in five equal annual installments, which are to include 8.25% interest on the unpaid balance and a portion of the principal. If the first payment is to be made on December 31, 2008, how much will each payment be? C21 Various time value of money situations. (SO 8) Various time value of money situations. (SO 8) Appendix D OBJECTIVE Payroll Accounting STUDY After studying this appendix, you should be able to: 1. Discuss the objectives of internal control for payroll. 2. Compute and record the payroll for a pay period. 3. Describe and record employer payroll taxes. Payroll and related fringe benefits often make up a large percentage of current liabilities. Employee compensation is often the most significant expense that a company incurs. For example, Costco recently reported total employees of 103,000 and labor and fringe benefits costs that approximated 70% of the companys total cost of operations. Payroll accounting involves more than paying employees wages. Companies are required by law to maintain payroll records for each employee, to file and pay payroll taxes, and to comply with numerous state and federal tax laws related to employee compensation. Accounting for payroll has become much more complex due to these regulations. PAYROLL DEFINED The term payroll pertains to both salaries and wages. Managerial, administrative, and sales personnel are generally paid salaries. Salaries are often expressed in terms of a specified amount per month or per year rather than an hourly rate. Store clerks, factory employees, and manual laborers are normally paid wages. Wages are based on a rate per hour or on a piecework basis (such as per unit of product). Frequently, people use the terms salaries and wages interchangeably. The term payroll does not apply to payments made for services of professionals such as certified public accountants, attorneys, and architects. Such professionals are independent contractors rather than salaried employees. Payments to them are called fees. This distinction is important because government regulations relating to the payment and reporting of payroll taxes apply only to employees. INTERNAL CONTROL OF PAYROLL Chapter 8 introduced internal control. As applied to payrolls, the objecSTUDY OBJECTIVE 1 tives of internal control are (1) to safeguard company assets against unau- Discuss the objectives of internal thorized payments of payroll and (2) to ensure the accuracy and reliability control for payroll. of the accounting records pertaining to payrolls. Irregularities often result if internal control is lax. 
Methods of theft involving payroll include overstating hours, using unauthorized pay rates, adding fictitious employees to the payroll, continuing terminated employees on the payroll, and distributing duplicate payroll checks. Moreover, inaccurate records will result in incorrect paychecks, financial statements, and payroll tax returns. D1 D2 Appendix D Payroll Accounting Payroll activities involve four functions: hiring employees, timekeeping, preparing the payroll, and paying the payroll. For effective internal control, the company should assign these four functions to different departments or individuals. To illustrate these functions, we will examine the case of Academy Company and one of its employees, Michael Jordan. Hiring Employees Human Resources Hiring Employees The human resources (personnel) department is responsible for posting job openings, screening and interviewing applicants, and hiring employees. From a control standpoint, this department provides significant documentation and authorization. When an employee is hired, the human resources department prepares an authorization form. The one used by Academy Company for Michael Jordan is shown in Illustration D-1. Human Resources department documents and authorizes employment. Illustration D-1 Authorization form prepared by the human resources department ACADEMY COMPANY Employee Name Classification Department Jordan, LAST Michael FIRST MI Starting Date 9/01/06 329-36-9547 Skilled-Level 10 Shipping Classification Rate $ 10.00 New Rate $ Clerk per hour 12.00 Social Security No. Division Entertainment Trans. from Temp. NEW HIRE Salary Grade Level 10 Bonus N/A 9/1/07 Non-exempt x Exempt Effective Date RATE CHANGE Present Rate $ 10.00 Merit x Promotion Previous Increase Date Resignation Discharge Decrease None Retirement Other Amount $ Reason per Type SEPARATION Leave of absence Last Day Worked From to Type APPROVALS BRANCH OR DEPT. MANAGER DATE DIVISION V.P. DATE PERSONNEL DEPARTMENT The human resources department sends the authorization form to the payroll department, where it is used to place the new employee on the payroll.A chief concern of the human resources department is ensuring the accuracy of this form. The reason is quite simple: One of the most common types of payroll frauds is adding fictitious employees to the payroll. The human resources department is also responsible for authorizing changes in employment status. Specifically, they must authorize (1) changes in pay rates and (2) terminations of employment. Every authorization should be in writing, and a copy of the change in status should be sent to the payroll department. Notice in Illustration D-1 that Jordan received a pay increase of $2 per hour. Internal Control of Payroll D3 Timekeeping Another area in which internal control is important is timekeeping. Hourly employees are usually required to record time worked by punching a time clock. The employee inserts a time card into the clock, which automatically records the employees arrival and departure times. Illustration D-2 shows Michael Jordans time card. Timekeeping Supervisors monitor hours worked through time cards and time reports. PAY PERIOD ENDING No. 17 NAME Michael Jordan 1/14/08 REGULAR TIME 8:58 12:00 1:00 5:01 P.M. A.M. 9:00 11:59 12:59 5:00 P.M. A.M. 8:59 12:01 1:01 5:00 P.M. A.M. 9:00 12:00 1:00 5:00 P.M. A.M. 8:57 11:58 1:00 5:01 P.M. A.M. 8:00 1:00 A.M. 
IN OUT IN OUT IN OUT IN OUT IN OUT IN OUT IN OUT IN OUT IN OUT IN OUT IN OUT IN OUT IN OUT IN OUT EXTRA TIME 1st Day 2nd Day NOON NOON THIS THIS 5:00 9:00 3rd Day 4th Day 5th Day 6th Day 7th Day NOON NOON SIDE SIDE OU T OU T P.M. A.M. NOON P.M. NOON NOON TOTAL 4 TOTAL 40 Illustration D-2 Time card In large companies, time clock procedures are often monitored by a supervisor or security guard to make sure an employee punches only his or her own card.At the end of the pay period, each employees supervisor approves the hours shown by signing the time card.When overtime hours are involved, approval by a supervisor is usually mandatory. This guards against unauthorized overtime. The approved time cards are then sent to the payroll department. For salaried employees, a manually prepared weekly or monthly time report kept by a supervisor may be used to record time worked. Preparing the Payroll The payroll department prepares the payroll on the basis of two inputs: (1) human resources department authorizations and (2) approved time cards. Numerous calculations are involved in determining gross wages and payroll deductions. Therefore, a second payroll department employee, working independently, verifies all calculated amounts, and a payroll department supervisor then approves the payroll.The payroll department is also responsible for preparing (but not signing) payroll checks, maintaining payroll records, and preparing payroll tax returns. Preparing the Payroll Two (or more) employees verify payroll amounts; supervisor approves. D4 Appendix D Payroll Accounting Paying the Payroll Paying the Payroll The treasurers department pays the payroll. Payment by check minimizes the risk of loss from theft, and the endorsed check provides proof of payment. For good internal control, payroll checks should be prenumbered, and all checks should be accounted for. All checks must be signed by the treasurer (or a designated agent). Distribution of the payroll checks to employees should be controlled by the treasurers department. Many employees have their pay credited electronically to their bank accounts. To control these disbursements, the company provides to employees receipts detailing gross pay deductions and net pay. Occasionally companies pay the payroll in currency. In such cases it is customary to have a second person count the cash in each pay envelope. The paymaster should obtain a signed receipt from the employee upon payment. Treasurer signs and distributes checks. DETERMINING THE PAYROLL STUDY OBJECTIVE 2 Compute and record the payroll for a pay period. Determining the payroll involves computing three amounts: (1) gross earnings, (2) payroll deductions, and (3) net pay. Gross Earnings Gross earnings is the total compensation earned by an employee. It consists of wages or salaries, plus any bonuses and commissions. Companies determine total wages for an employee by multiplying the hours worked by the hourly rate of pay. In addition to the hourly pay rate, most companies are required by law to pay hourly workers a minimum of 112 times the regular hourly rate for overtime work in excess of eight hours per day or 40 hours per week. In addition, many employers pay overtime rates for work done at night, on weekends, and on holidays. For example, assume that Michael Jordan, an employee of Academy Company, worked 44 hours for the weekly pay period ending January 14. His regular wage is $12 per hour. 
For any hours in excess of 40, the company pays at one-and-a-half times the regular rate. Academy computes Jordan's gross earnings (total wages) as follows.

Illustration D-3: Computation of total wages
Regular pay: 40 hours × $12 = $480
Overtime pay: 4 hours × $18 = $72
Total wages (gross earnings): $552

This computation assumes that Jordan receives 1½ times his regular hourly rate ($12 × 1.5) for his overtime hours. Union contracts often require that overtime rates be as much as twice the regular rates.

An employee's salary is generally based on a monthly or yearly rate. The company then prorates these rates to its payroll periods (e.g., biweekly or monthly). Most executive and administrative positions are salaried. Federal law does not require overtime pay for employees in such positions.

Many companies have bonus agreements for employees. One survey found that over 94% of the largest U.S. manufacturing companies offer annual bonuses to key executives. Bonus arrangements may be based on such factors as increased sales or net income. Companies may pay bonuses in cash and/or by granting employees the opportunity to acquire shares of company stock at favorable prices (called stock option plans).

ETHICS NOTE: Bonuses often reward outstanding individual performance, but successful corporations also need considerable teamwork. A challenge is to motivate individuals while preventing an unethical employee from taking another's idea for his or her own advantage.

Payroll Deductions
As anyone who has received a paycheck knows, gross earnings are usually very different from the amount actually received. The difference is due to payroll deductions. Payroll deductions may be mandatory or voluntary. Mandatory deductions are required by law and consist of FICA taxes and income taxes. Voluntary deductions are at the option of the employee. Illustration D-4 summarizes common types of payroll deductions. Such deductions do not result in payroll tax expense to the employer. The employer is merely a collection agent, and subsequently transfers the deducted amounts to the government and designated recipients.

[Illustration D-4: Payroll deductions. Gross pay, less FICA taxes, federal income tax, state and city income taxes, and voluntary deductions such as charity, insurance, pensions, and union dues, equals net pay.]

FICA TAXES
In 1937 Congress enacted the Federal Insurance Contribution Act (FICA). FICA taxes are designed to provide workers with supplemental retirement, employment disability, and medical benefits. In 1965, Congress extended benefits to include Medicare for individuals over 65 years of age. The benefits are financed by a tax levied on employees' earnings. FICA taxes are commonly referred to as Social Security taxes.

Congress sets the tax rate and the tax base for FICA taxes. When FICA taxes were first imposed, the rate was 1% on the first $3,000 of gross earnings, or a maximum of $30 per year. The rate and base have changed dramatically since that time! In 2007, the rate was 7.65% (6.2% Social Security plus 1.45% Medicare) on the first $97,500 of gross earnings for each employee. [Footnote 1: The Medicare provision also includes a tax of 1.45% on gross earnings in excess of $97,500. In the interest of simplification, we ignore this 1.45% charge in our end-of-chapter assignment material.] For purposes of illustration in this chapter, we will assume a rate of 8% on the first $97,500 of gross earnings, or a maximum of $7,800. Using the 8% rate, the FICA withholding for Jordan for the weekly pay period ending January 14 is $44.16 ($552 × 8%).
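The wage and FICA computations above are easy to express in a few lines of code. The following Python sketch (our own, with hypothetical function names and using this appendix's simplified 8% rate and $97,500 wage base) computes gross earnings with the 1½-times overtime premium and the corresponding FICA withholding.

```python
# Sketch: gross earnings with overtime, plus FICA withholding at the simplified
# 8% rate assumed in this appendix, capped at $97,500 of cumulative gross earnings.

OVERTIME_MULTIPLIER = 1.5
FICA_RATE = 0.08
FICA_WAGE_BASE = 97_500

def gross_earnings(hours_worked, hourly_rate, regular_hours=40):
    regular = min(hours_worked, regular_hours) * hourly_rate
    overtime = max(hours_worked - regular_hours, 0) * hourly_rate * OVERTIME_MULTIPLIER
    return regular + overtime

def fica_withholding(period_gross, prior_cumulative_gross=0.0):
    # Only earnings up to the wage base are taxed; earnings already taxed count toward the cap.
    taxable = max(min(FICA_WAGE_BASE - prior_cumulative_gross, period_gross), 0)
    return round(taxable * FICA_RATE, 2)

jordan_gross = gross_earnings(44, 12)    # 40 x $12 + 4 x $18 = $552
print(jordan_gross)                      # 552.0
print(fica_withholding(jordan_gross))    # 44.16, matching the text's figure
```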
We assume zero FICA withholdings on gross earnings above $97,500. D6 Appendix D Payroll Accounting INCOME TAXES Under the U.S. pay-as-you-go system of federal income taxes, employers are required to withhold income taxes from employees each pay period.Three variables determine the amount to be withheld: (1) the employees gross earnings; (2) the number of allowances claimed by the employee; and (3) the length of the pay period. The number of allowances claimed typically includes the employee, his or her spouse, and other dependents. To indicate to the Internal Revenue Service the number of allowances claimed, the employee must complete an Employees Withholding Allowance Certificate (Form W-4). As shown in Illustration D-5, Michael Jordan claims two allowances on his W-4. Illustration D-5 W-4 form Form W-4 Michael Employee's Withholding Allowance Certificate For Privacy Act and Paperwork Reduction Act Notice, see page 2. Last name OMB No. 1545-0010 Department of the Treasury Internal Revenue Service 1 Type or print your first name and middle initial Home address (number and street or rural route) 2 Your social security number 3 4 2345 Mifflin Ave. City or town, State, and ZIP code Jordan Single x Married 329-36-9547 Married, but withhold at higher Single rate. Note: If married, but legally separated, or spouse is a nonresident alien, check the Single box. If your last name differs from that on your social security card, check here and call 1-800-772-1213 for a new card . . . . . 5 5 Total number of allowances you are claiming (from line H above or from the worksheet on page 2 if they apply) 2 6$ 6 Additional amount, if any, you want withheld from each paycheck 7 I claim exemption from withholding for 2006,. Hampton, MI 48292 If you meet both conditions, enter Exempt here 7 Under penalties of perjury, I certify that I am entitled to the number of withholding allowances claimed on this certificate or entitled to claim exempt status. Employee's signature 8 Employers name and address (Employer: Complete 8 and 10 only if sending to the IRS) Date September 1 , 20 08 9 Office code (optional) 10 Employer identification number Cat. No. 102200 Withholding tables furnished by the Internal Revenue Service indicate the amount of income tax to be withheld. Withholding amounts are based on gross wages and the number of allowances claimed. Separate tables are provided for weekly, biweekly, semimonthly, and monthly pay periods. Illustration D-6 (next page) shows the withholding tax table for Michael Jordan (assuming he earns $552 per week and claims two allowances). For a weekly salary of $552 with two allowances, the income tax to be withheld is $49. In addition, most states (and some cities) require employers to withhold income taxes from employees earnings. As a rule, the amounts withheld are a percentage (specified in the state revenue code) of the amount withheld for the federal income tax. Or they may be a specified percentage of the employees earnings. For the sake of simplicity, we have assumed that Jordans wages are subject to state income taxes of 2%, or $11.04 (2% $552) per week. There is no limit on the amount of gross earnings subject to income tax withholdings. In fact, under our progressive system of taxation, the higher the earnings, the higher the percentage of income withheld for taxes. OTHER DEDUCTIONS Employees may voluntarily authorize withholdings for charitable, retirement, and other purposes. All voluntary deductions from gross earnings should be authorized in writing by the employee. 
The authorization(s) may be made individually or as part of a group plan. Deductions for charitable organizations, such as the United Way, or for financial arrangements, such as U.S. savings bonds and repayment of Determining the Payroll Illustration D-6 Withholding tax table MARRIED Persons WEEKLY Payroll Period (For Wages Paid in 2008) D7 If the wages are At least 490 500 510 520 530 540 550 560 570 580 590 600 610 620 630 640 650 660 670 680 But less than 500 510 520 530 540 550 560 570 580 590 600 610 620 630 640 650 660 670 680 690 0 1 And the number of withholding allowances claimed is 2 3 4 5 6 7 8 9 10 The amount of income tax to be withheld is 56 57 59 60 62 63 65 66 68 69 71 72 74 75 77 78 80 81 83 84 48 49 51 52 54 55 57 58 60 61 63 64 66 67 69 70 72 73 75 76 40 42 43 45 46 48 49 51 52 54 55 57 58 60 61 63 64 66 67 69 32 34 35 37 38 40 41 43 44 46 47 49 50 52 53 55 56 58 59 61 24 26 27 29 30 32 33 35 36 38 39 41 42 44 45 47 48 50 51 53 17 18 20 21 23 24 26 27 29 30 32 33 35 36 38 39 41 42 44 45 9 10 12 13 15 16 18 19 21 22 24 25 27 28 30 31 33 34 36 37 1 3 4 6 7 9 10 12 13 15 16 18 19 21 22 24 25 27 28 30 0 0 0 0 0 1 2 4 5 7 8 10 11 13 14 16 17 19 20 22 0 0 0 0 0 0 0 0 0 0 1 2 4 5 7 8 10 11 13 14 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 3 5 6 loans from company credit unions, are made individually. Deductions for union dues, health and life insurance, and pension plans are often made on a group basis. We will assume that Jordan has weekly voluntary deductions of $10 for the United Way and $5 for union dues. Net Pay Academy Company determines net pay by subtracting payroll deductions from gross earnings. Illustration D-7 shows the computation of Jordans net pay for the pay period. A LT E R N AT I V E TERMINOLOGY Net pay is also called take-home pay. Illustration D-7 Computation of net pay Gross earnings Payroll deductions: FICA taxes Federal income taxes State income taxes United Way Union dues Net pay $552.00 $44.16 49.00 11.04 10.00 5.00 119.20 $432.80 Assuming that Michael Jordans wages for each week during the year are $552, total wages for the year are $28,704 (52 $552).Thus, all of Jordans wages are subject to FICA tax during the year. In comparison, lets assume that Jordans department head earns $2,000 per week, or $104,000 for the year. Since only the first $97,500 is subject to FICA taxes, the maximum FICA withholdings on the department heads earnings would be $7,800 ($97,500 8%). D8 Appendix D Payroll Accounting RECORDING THE PAYROLL Recording the payroll involves maintaining payroll department records, recognizing payroll expenses and liabilities, and recording payment of the payroll. Maintaining Payroll Department Records To comply with state and federal laws, an employer must keep a cumulative record of each employees gross earnings, deductions, and net pay during the year. The record that provides this information is the employee earnings record. Illustration D-8 shows Michael Jordans employee earnings record. Illustration D-8 Employee earnings record File A Edit B View C Insert D Format E Tools Data F Window G Help H I J K L M N 1 2 3 4 5 6 7 8 9 10 11 12 ACADEMY COMPANY Employee Earnings Record For the Year 2008 Name Social Security Number Date of Birth Date Employed Sex Michael Jordan 329-36-9547 December 24, 1962 September 1, 2003 Male Telephone Date Employment Ended Exemptions 2 Address 2345 Mifflin Ave. 
Hampton, Michigan 48292 555-238-9051 13 Single x Married 14 Gross Earnings 15 2008 16 Period Total 17 Ending Hours Regular Overtime Total Cumulative 42 480.00 36.00 516.00 516.00 18 1/7 480.00 72.00 552.00 1,068.00 44 19 1/14 43 480.00 54.00 534.00 1,602.00 20 1/21 42 480.00 36.00 516.00 2,118.00 21 1/28 Jan. 22 1,920.00 198.00 2,118.00 Total 23 24 FICA 41.28 44.16 42.72 41.28 Deductions Fed. State United Inc. Tax Inc. Tax Way 43.00 10.32 10.00 49.00 11.04 10.00 46.00 10.68 10.00 43.00 10.32 10.00 42.36 Union Dues 5.00 5.00 5.00 5.00 Total 109.60 119.20 114.40 109.60 Payment Net Check Amount No. 406.40 974 432.80 1028 419.60 1077 406.40 1133 169.44 181.00 40.00 20.00 452.80 1,665.20 Companies keep a separate earnings record for each employee, and update these records after each pay period. The employer uses the cumulative payroll data on the earnings record to: (1) determine when an employee has earned the maximum earnings subject to FICA taxes, (2) file state and federal payroll tax returns (as explained later), and (3) provide each employee with a statement of gross earnings and tax withholdings for the year. (Illustration D-12 on page D13 shows this statement.) In addition to employee earnings records, many companies find it useful to prepare a payroll register. This record accumulates the gross earnings, deductions, and net pay by employee for each pay period. It provides the documentation for preparing a paycheck for each employee. Illustration D-9 (next page) presents Academy Companys payroll register. It shows the data for Michael Jordan in the wages section. In this example, Academy Companys total weekly payroll is $17,210, as shown in the gross earnings column. Note that this record is a listing of each employees payroll data for the pay period. In some companies, a payroll register is a journal or book of original entry; Recording the Payroll D9 File Edit A View B Insert C Format D Tools E Data F Window G Help H I J K L M N O 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 ACADEMY COMPANY Payroll Register For the Week Ending January 14, 2008 Earnings g Total OverEmployee Hours Regular time Office Salaries Arnold, Patricia 40 580.00 Canton, Matthew 40 590.00 Gross 580.00 590.00 FICA 46.40 47.20 Deductions Federal State Income Income United Union Tax Tax Way Dues 61.00 63.00 11.60 11.80 15.00 20.00 Accounts Debited Office Check Salaries Wages Net Pay No. Expense Expense 446.00 448.00 998 999 580.00 590.00 Paid Total 134.00 142.00 Mueller, William Subtotal Wages Bennett, Robin Jordan, Michael 40 42 44 530.00 5,200.00 480.00 480.00 36.00 72.00 530.00 5,200.00 516.00 552.00 42.40 54.00 10.60 11.00 416.00 1,090.00 104.00 120.00 41.28 44.16 43.00 49.00 10.32 11.04 18.00 10.00 5.00 5.00 118.00 412.00 1000 530.00 1,730.00 3,470.00 5,200.00 117.60 119.20 398.40 1025 432.80 1028 516.00 552.00 Milroy, Lee Subtotal Total 43 480.00 54.00 534.00 42.72 46.00 10.68 10.00 5.00 114.40 419.60 1029 534.00 11,000.00 1,010.00 12,010.00 960.80 2,400.00 240.20 301.50 115.00 4,017.50 7,992.50 12,010.00 16,200.00 1,010.00 17,210.00 1,376.80 3,490.00 344.20 421.50 115.00 5,747.50 11,462.50 5,200.00 12,010.00 Illustration D-9 Payroll register postings are made from the payroll register directly to ledger accounts. In other companies, the payroll register is a memorandum record that provides the data for a general journal entry and subsequent posting to the ledger accounts.At Academy Company, the latter procedure is followed. 
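To make the mechanics of one payroll-register line concrete, the following Python sketch (an illustration of ours, not Academy Company's actual system) assembles Michael Jordan's row from the amounts developed above. The federal withholding of $49 is taken as a given input, since in practice it comes from the IRS withholding tables; the FICA and state-tax rates are this appendix's simplifying assumptions.

```python
# Sketch: one payroll-register row -- gross earnings, deductions, and net pay.
# Assumed rates: FICA 8%, state income tax 2% of gross earnings.

FICA_RATE = 0.08
STATE_TAX_RATE = 0.02

def register_row(name, gross, federal_tax, united_way, union_dues):
    fica = round(gross * FICA_RATE, 2)
    state_tax = round(gross * STATE_TAX_RATE, 2)
    deductions = fica + federal_tax + state_tax + united_way + union_dues
    return {
        "employee": name,
        "gross": gross,
        "fica": fica,
        "federal_tax": federal_tax,   # looked up in the withholding table
        "state_tax": state_tax,
        "united_way": united_way,
        "union_dues": union_dues,
        "total_deductions": round(deductions, 2),
        "net_pay": round(gross - deductions, 2),
    }

row = register_row("Jordan, Michael", 552.00, 49.00, 10.00, 5.00)
print(row["total_deductions"], row["net_pay"])   # 119.2 432.8, matching Illustration D-7
```

Summing such rows for every employee produces the column totals shown in the payroll register, including the $17,210 of gross earnings for the week.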
Recognizing Payroll Expenses and Liabilities From the payroll register in Illustration D-9, Academy Company makes a journal entry to record the payroll. For the week ending January 14 the entry is: A L SE 5,200.00 Exp 12,010.00 Exp Jan. 14 Office Salaries Expense Wages Expense FICA Taxes Payable Federal Income Taxes Payable State Income Taxes Payable United Way Payable Union Dues Payable Salaries and Wages Payable (To record payroll for the week ending January 14) 5,200.00 12,010.00 1,376.80 3,490.00 344.20 421.50 115.00 11,462.50 1,376.80 3,490.00 344.20 421.50 115.00 11,462.50 Cash Flows no effect The company credits specific liability accounts for the mandatory and voluntary deductions made during the pay period. In the example, Academy debits Office Salaries Expense for the gross earnings of salaried office workers, and it debits Wages Expense for the gross earnings of employees who are paid at an hourly rate. Other companies may debit other accounts such as Store Salaries or Sales Salaries. The amount credited to Salaries and Wages Payable is the sum of the individual checks the employees will receive. D10 Appendix D Payroll Accounting Recording Payment of the Payroll A company makes payments by check (or electronic funds transfer) either from its regular bank account or a payroll bank account. Each paycheck is usually accompanied by a detachable statement of earnings document.This shows the employees gross earnings, payroll deductions, and net pay, both for the period and for the year-to-date. Academy Company uses its regular bank account for payroll checks. Illustration D-10 shows the paycheck and statement of earnings for Michael Jordan. Illustration D-10 Paycheck and statement of earnings AC Pay to the order of City Bank & Trust P.O. Box 3000 Hampton, MI 48291 For ACADEMY COMPANY 19 Center St. Hampton, MI 48291 $ No. 1028 20 621113 610 Dollars HELPFUL HINT Do any of the income tax liabilities result in payroll tax expense for the employer? Answer: No. The employer is acting only as a collection agent for the government. DETACH AND RETAIN THIS PORTION FOR YOUR RECORDS NAME SOC. SEC. NO. EMPL. NUMBER NO. EXEMP PAY PERIOD ENDING Michael Jordan REG. HRS. O.T. HRS. OTH. HRS. (1) OTH. HRS. (2) REG. EARNINGS 329-36-9547 O.T. EARNINGS 2 1/14/08 GROSS OTH. EARNINGS (1) OTH. EARNINGS (2) 40 FED. W/H TAX 4 FICA STATE TAX LOCAL TAX 480.00 11.04 (1) 72.00 OTHER DEDUCTIONS (2) $552.00 (3) (4) NET PAY 49.00 44.16 10.00 5.00 432.80 FED. W/H TAX FICA STATE TAX LOCAL TAX YEAR TO DATE OTHER DEDUCTIONS (1) 92.00 85.44 21.36 20.00 (2) 10.00 (3) (4) NET PAY $839.20 Following payment of the payroll, the company enters the check numbers in the payroll register. Academy Company records payment of the payroll as follows. A 11,462.50 Cash Flows 11,462.50 L OE 11,462.50 Jan. 14 Salaries and Wages Payable Cash (To record payment of payroll) 11,462.50 11,462.50 When a company uses currency in payment, it prepares one check for the payrolls total amount of net pay. The company cashes this check, and inserts the coins and currency in individual pay envelopes for disbursement to individual employees. Before You Go On... REVIEW IT 1. Identify two internal control procedures that apply to each payroll function. 2. What are the primary sources of gross earnings? 3. What payroll deductions are (a) mandatory and (b) voluntary? 4. What account titles do companies use in recording a payroll, assuming only mandatory payroll deductions are involved? 
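The entry just shown is simply an aggregation of the register's column totals. As a hedged sketch (the account titles follow the text, but the checking routine is our own), the code below assembles the January 14 amounts and confirms that total debits equal total credits.

```python
# Sketch: the January 14 payroll entry built from the register's column totals,
# with a check that debits equal credits.

debits = {
    "Office Salaries Expense": 5_200.00,
    "Wages Expense": 12_010.00,
}
credits = {
    "FICA Taxes Payable": 1_376.80,
    "Federal Income Taxes Payable": 3_490.00,
    "State Income Taxes Payable": 344.20,
    "United Way Payable": 421.50,
    "Union Dues Payable": 115.00,
    "Salaries and Wages Payable": 11_462.50,   # net pay owed to employees
}

total_debits = sum(debits.values())
total_credits = sum(credits.values())
assert round(total_debits, 2) == round(total_credits, 2) == 17_210.00
print(f"Debits {total_debits:,.2f} = Credits {total_credits:,.2f}")
```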
Employer Payroll Taxes D11 DO IT Your cousin Stan is establishing a house-cleaning business and will have a number of employees working for him. He is aware that documentation procedures are an important part of internal control. But he is unsure about the difference between an employee earnings record and a payroll register. He asks you to explain the principal differences, because he wants to be sure that he sets up the proper payroll procedures. Action Plan Determine the earnings and deductions data that must be recorded and reported for each employee. Design a record that will accumulate earnings and deductions data and will serve as a basis for journal entries to be prepared and posted to the general ledger accounts. Explain the difference between the employee earnings record and the payroll register. Solution An employee earnings record is kept for each employee. It shows gross earnings, payroll deductions, and net pay for each pay period, as well as cumulative payroll data for that employee. In contrast, a payroll register is a listing of all employees gross earnings, payroll deductions, and net pay for each pay period. It is the documentation for preparing paychecks and for recording the payroll. Of course, Stan will need to keep both documents. Related exercise material: BED-1, BED-3, and ED-1. EMPLOYER PAYROLL TAXES Payroll tax expense for businesses results from three taxes that governSTUDY OBJECTIVE 3 mental agencies levy on employers. These taxes are: (1) FICA, (2) federal Describe and record employer unemployment tax, and (3) state unemployment tax. These taxes plus such payroll taxes. items as paid vacations and pensions (discussed in the appendix to this chapter) are collectively referred to as fringe benefits. As indicated earlier, the cost of fringe benefits in many companies is substantial. The pie chart in the margin shows the pieces of the benefits pie. BENEFITS FICA Taxes Each employee must pay FICA taxes. In addition, employers must match each employees FICA contribution. The matching contribution results in payroll tax expense to the employer. The employers tax is subject to the same rate and maximum earnings as the employees. The company uses the same account, FICA Taxes Payable, to record both the employees and the employers FICA contributions. For the January 14 payroll, Academy Companys FICA tax contribution is $1,376.80 ($17,210.00 8%). 3% Disability and life insurance 13% Retirement income such as pensions 23% Legally required benefits such as Social Security 24% Medical benefits 37% Vacation and other benefits such as parental and sick leaves, child care Federal Unemployment Taxes The Federal Unemployment Tax Act (FUTA) is another feature of the federal Social Security program. Federal unemployment taxes provide benefits for a limited period of time to employees who lose their jobs through no fault of their own. The FUTA tax rate is 6.2% of taxable wages. The taxable wage base is the first $7,000 of wages paid to each employee in a calendar year. Employers who HELPFUL HINT Both the employer and employee pay FICA taxes. Federal unemployment taxes and (in most states) the state unemployment taxes are borne entirely by the employer. D12 Appendix D Payroll Accounting pay the state unemployment tax on a timely basis will receive an offset credit of up to 5.4%. Therefore, the net federal tax rate is generally 0.8% (6.2%5.4%). This rate would equate to a maximum of $56 of federal tax per employee per year (.008 $7,000). State tax rates are based on state law. 
The employer bears the entire federal unemployment tax. There is no deduction or withholding from employees. Companies use the account Federal Unemployment Taxes Payable to recognize this liability. The federal unemployment tax for Academy Company for the January 14 payroll is $137.68 ($17,210.00 0.8%). State Unemployment Taxes All states have unemployment compensation programs under state unemployment tax acts (SUTA). Like federal unemployment taxes, state unemployment taxes provide benefits to employees who lose their jobs. These taxes are levied on employers.2 The basic rate is usually 5.4% on the first $7,000 of wages paid to an employee during the year.The state adjusts the basic rate according to the employers experience rating: Companies with a history of stable employment may pay less than 5.4%. Companies with a history of unstable employment may pay more than the basic rate. Regardless of the rate paid, the companys credit on the federal unemployment tax is still 5.4%. Companies use the account State Unemployment Taxes Payable for this liability. The state unemployment tax for Academy Company for the January 14 payroll is $929.34 ($17,210.00 5.4%). Illustration D-11 summarizes the types of employer payroll taxes. Illustration D-11 Employer payroll taxes FICA Taxes Federal Unemployment Taxes State Unemployment Taxes Computation Based on Wages Recording Employer Payroll Taxes Companies usually record employer payroll taxes at the same time they record the payroll. The entire amount of gross pay ($17,210.00) shown in the payroll register in Illustration D-9 is subject to each of the three taxes mentioned above. Accordingly, Academy records the payroll tax expense associated with the January 14 payroll with the entry shown on page D13. 2 In a few states, the employee is also required to make a contribution. In this textbook, including the homework, we will assume that the tax is only on the employer. Filing and Remitting Payroll Taxes Jan. 14 Payroll Tax Expense FICA Taxes Payable Federal Unemployment Taxes Payable State Unemployment Taxes Payable (To record employers payroll taxes on January 14 payroll) 2,443.82 1,376.80 137.68 929.34 A L 1,376.80 137.68 929.34 Cash Flows no effect D13 SE 2,443.82 Exp Note that Academy uses separate liability accounts instead of a single credit to Payroll Taxes Payable. Why? Because these liabilities are payable to different taxing authorities at different dates. Companies classify the liability accounts in the balance sheet as current liabilities since they will be paid within the next year. They classify Payroll Tax Expense on the income statement as an operating expense. FILING AND REMITTING PAYROLL TAXES Preparation of payroll tax returns is the responsibility of the payroll department. The treasurers department makes the tax payment. Much of the information for the returns is obtained from employee earnings records. For purposes of reporting and remitting to the IRS, the Company combines the FICA taxes and federal income taxes that it withheld. Companies must report the taxes quarterly, no later than one month following the close of each quarter. The remitting requirements depend on the amount of taxes withheld and the length of the pay period. Companies remit funds through deposits in either a Federal Reserve bank or an authorized commercial bank. Companies generally file and remit federal unemployment taxes annually on or before January 31 of the subsequent year. Earlier payments are required when the tax exceeds a specified amount. 
Companies usually must file and pay state unemployment taxes by the end of the month following each quarter.When payroll taxes are paid, companies debit payroll liability accounts, and credit Cash. Employers also must provide each employee with a Wage and Tax Statement (Form W-2) by January 31 following the end of a calendar year. This statement shows gross earnings, FICA taxes withheld, and income taxes withheld for the year. The required W-2 form for Michael Jordan, using assumed annual data, is shown in Illustration D-12. The employer must send a copy of each employees Illustration D-12 W-2 form Form W-2 Wage and Tax Statement OMB No. 1545-0008 Calendar Year 2008 1 Control number 2 Employer's name, address and ZIP code 3 Employer's identification number 4 Employer's State number Academy Company 19 Center St. Hampton, MI 48291 36-2167852 5 Stat. Deceased employee 6 Allocated tips Legal rep. 942 emp. Subtotal Void 7 Advance EIC payment 8 Employee's social security number 9 Federal income tax withheld 10 Wages, tips, other compensation 11 Social security tax withheld 329-36-9547 12 Employee's name, address, and ZIP code $2,248.00 $26,300.00 13 Social security wages $2,104.00 14 Social security tips $26,300.00 16 Michael Jordan 2345 Mifflin Ave. Hampton, MI 48292 17 State income tax 18 State wages, tips, etc. 19 Name of State $526.00 20 Local income tax Michigan 21 Local wages, tips, etc. 22 Name of locality HELPFUL HINT Employers generally transmit their W-2s to the government electronically. The taxing agencies store the information in their computer systems for subsequent comparison against earnings and taxes withheld reported on employees income tax returns. D14 Appendix D Payroll Accounting Wage and Tax Statement (Form W-2) to the Social Security Administration. This agency subsequently furnishes the Internal Revenue Service with the income data required. Before You Go On... REVIEW IT 1. What payroll taxes do governments levy on employers? 2. What accounts are involved in accruing employer payroll taxes? DO IT In January, the payroll supervisor determines that gross earnings for Halo Company are $70,000. All earnings are subject to 8% FICA taxes, 5.4% state unemployment taxes, and 0.8% federal unemployment taxes. Halo asks you to record the employers payroll taxes. Action Plan Compute the employers payroll taxes on the periods gross earnings. Identify the expense account(s) to be debited. Identify the liability account(s) to be credited. The entry to record the employers payroll taxes is: 9,940 5,600 560 3,780 Solution Payroll Tax Expense FICA Taxes Payable ($70,000 8%) Federal Unemployment Taxes Payable ($70,000 0.8%) State Unemployment Taxes Payable ($70,000 5.4%) (To record employers payroll taxes on January payroll) Related exercise material: BED-2, BED-3, BED-4, ED-1, ED-2, ED-3, ED-4, and ED-5. Demonstration Problem Indiana Jones Company had the following selected transactions. Feb. 1 Signs a $50,000, 6-month, 9%-interest-bearing note payable to CitiBank and receives $50,000 in cash. 10 Cash register sales total $43,200, which includes an 8% sales tax. 28 The payroll for the month consists of Sales Salaries $32,000 and Office Salaries $18,000. All wages are subject to 8% FICA taxes. A total of $8,900 federal income taxes are withheld. The salaries are paid on March 1. 28 The following adjustment data are developed. 1. Interest expense of $375 has been incurred on the note. 2. 
Employer payroll taxes include 8% FICA taxes, a 5.4% state unemployment tax, and a 0.8% federal unemployment tax. Instructions (a) Journalize the February transactions. (b) Journalize the adjusting entries at February 28. Glossary D15 Solution (a) Feb. 1 Cash Notes Payable (Issued 6-month, 9%-interest-bearing note to CitiBank) Cash Sales ($43,200 1.08) Sales Taxes Payable ($40,000 8%) (To record sales and sales taxes payable) Sales Salaries Expense Office Salaries Expense FICA Taxes Payable (8% $50,000) Federal Income Taxes Payable Salaries Payable (To record February salaries) Interest Expense Interest Payable (To record accrued interest for February) Payroll Tax Expense FICA Taxes Payable Federal Unemployment Taxes Payable (0.8% $50,000) State Unemployment Taxes Payable (5.4% $50,000) (To record employers payroll taxes on February payroll) 50,000 50,000 action plan To determine sales, divide the cash register total by 100% plus the sales tax percentage. Base payroll taxes on gross earnings. 10 43,200 40,000 3,200 32,000 18,000 4,000 8,900 37,100 375 375 7,100 4,000 400 2,700 28 (b) Feb. 28 28 SUMMARY OF STUDY OBJECTIVES 1 Discuss the objectives of internal control for payroll. The objectives of internal control for payroll are (1) to safeguard company assets against unauthorized payments of payrolls, and (2) to ensure the accuracy and reliability of the accounting records pertaining to payrolls. 2 Compute and record the payroll for a pay period. The computation of the payroll involves gross earnings, payroll deductions, and net pay. In recording the payroll, Salaries (or Wages) Expense is debited for gross earnings, individual tax and other liability accounts are credited for payroll deductions, and Salaries (Wages) Payable is credited for net pay. When the payroll is paid, Salaries and Wages Payable is debited, and Cash is credited. 3 Describe and record employer payroll taxes. Employer payroll taxes consist of FICA, federal unemployment taxes, and state unemployment taxes. The taxes are usually accrued at the time the payroll is recorded by debiting Payroll Tax Expense and crediting separate liability accounts for each type of tax. GLOSSARY Bonus Compensation to management personnel and other employees, based on factors such as increased sales or the amount of net income. (p. D4). Employee earnings record A cumulative record of each employees gross earnings, deductions, and net pay during the year. (p. D8). Employees Withholding Allowance Certificate (Form W-4) An Internal Revenue Service form on which the employee indicates the number of allowances claimed for withholding federal income taxes. (p. D6). Federal unemployment taxes Taxes imposed on the employer that provide benefits for a limited time period to employees who lose their jobs through no fault of their own. (p. D11). Fees Payments made for the services of professionals. (p. D1). FICA taxes Taxes designed to provide workers with supplemental retirement, employment disability, and medical benefits. (p. D5). Gross earnings Total compensation earned by an employee. (p. D4). D16 Appendix D Payroll Accounting State unemployment taxes Taxes imposed on the employer that provide benefits to employees who lose their jobs. (p. D12). Wage and Tax Statement (Form W-2) A form showing gross earnings, FICA taxes withheld, and income taxes withheld which is prepared annually by an employer for each employee. (p. D13). Wages Amounts paid to employees based on a rate per hour or on a piece-work basis. (p. D1). 
Net pay Gross earnings less payroll deductions. (p. D7). Payroll deductions Deductions from gross earnings to determine the amount of a paycheck. (p. D5). Payroll register A payroll record that accumulates the gross earnings, deductions, and net pay by employee for each pay period. (p. D8). Salaries Specified amount per month or per year paid to managerial, administrative, and sales personnel. (p. D1). Statement of earnings A document attached to a paycheck that indicates the employees gross earnings, payroll deductions, and net pay. (p. D10). SELF-STUDY QUESTIONS Answers are at the end of the appendix. 1. The department that should pay the payroll is the: a. timekeeping department. b. human resources department. c. payroll department. d. treasurers department. (SO 2) 2. J. Barr earns $14 per hour for a 40-hour week and $21 per hour for any overtime work. If Barr works 45 hours in a week, gross earnings are: a. $560. b. $630. (SO 1) c. $650. d. $665. 3. Employer payroll taxes do not include: a. federal unemployment taxes. b. state unemployment taxes. c. federal income taxes. d. FICA taxes. Go to the books website,, for Additional Self-Study questions. (SO 3) QUESTIONS 1. You are a newly hired accountant with Schindlebeck Company. On your first day, the controller asks you to identify the main internal control objectives related to payroll accounting. How would you respond? 2. What are the four functions associated with payroll activities? 3. What is the difference between gross pay and net pay? Which amount should a company record as wages or salaries expense? 4. Which payroll tax is levied on both employers and employees? 5. Are the federal and state income taxes withheld from employee paychecks a payroll tax expense for the employer? Explain your answer. 6. What do the following acronyms stand for: FICA, FUTA, and SUTA? 7. What information is shown on a W-4 statement? On a W-2 statement? 8. Distinguish between the two types of payroll deductions and give examples of each. 9. What are the primary uses of the employee earnings record? 10. (a) Identify the three types of employer payroll taxes. (b) How are tax liability accounts and Payroll Tax Expense classified in the financial statements? BRIEF EXERCISES Identify payroll functions. (SO 1) BED-1 (a) (b) (c) (d) Hernandez Company has the following payroll procedures. Supervisor approves overtime work. The human resources department prepares hiring authorization forms for new hires. A second payroll department employee verifies payroll calculations. The treasurers department pays employees. Identify the payroll function to which each procedure pertains. Exercises BED-2 Sandy Teters regular hourly wage rate is $16, and she receives an hourly rate of $24 for work in excess of 40 hours. During a January pay period, Sandy works 45 hours. Sandys federal income tax withholding is $95, and she has no voluntary deductions. Compute Sandy Teters gross earnings and net pay for the pay period. BED-3 Data for Sandy Teter are presented in BED-2. Prepare the journal entries to record (a) Sandys pay for the period and (b) the payment of Sandys wages. Use January 15 for the end of the pay period and the payment date. BED-4 In January, gross earnings in Yoon Company totaled $90,000. All earnings are subject to 8% FICA taxes, 5.4% state unemployment taxes, and 0.8% federal unemployment taxes. Prepare the entry to record January payroll tax expense. D17 Compute gross earnings and net pay. (SO 2) Record a payroll and the payment of wages. 
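The brief exercises above all reuse the same arithmetic: regular pay plus an overtime premium for hours over 40, deductions subtracted to reach net pay, and employer taxes accrued on gross earnings. The sketch below works through BED-2 and BED-4 with the rates given there; the function names are ours, and the 8% FICA deduction in the net-pay step is an assumption carried over from the rate used throughout the appendix.

```python
# Illustrative payroll arithmetic for BED-2 and BED-4 (function names are ours;
# rates and dollar figures come from the exercise data).

def gross_earnings(regular_rate, overtime_rate, hours_worked, regular_hours=40):
    """Regular pay for the first 40 hours plus the overtime rate for the excess."""
    overtime_hours = max(hours_worked - regular_hours, 0)
    return regular_rate * min(hours_worked, regular_hours) + overtime_rate * overtime_hours

def employer_payroll_taxes(gross, fica=0.08, state_unemployment=0.054, federal_unemployment=0.008):
    """Employer taxes accrued on gross earnings, assuming all earnings are taxable."""
    return {"FICA Taxes Payable": gross * fica,
            "State Unemployment Taxes Payable": gross * state_unemployment,
            "Federal Unemployment Taxes Payable": gross * federal_unemployment}

# BED-2: Sandy Teter, $16 regular rate, $24 overtime rate, 45 hours, $95 withheld.
gross = gross_earnings(16, 24, 45)        # 640 + 120 = 760
net_pay = gross - 95 - gross * 0.08       # assumes FICA is also withheld at 8%
print(gross, net_pay)                     # 760 and 604.20

# BED-4: Yoon Company, January gross earnings of $90,000.
print(employer_payroll_taxes(90_000))     # 7,200 FICA, 4,860 state, 720 federal
```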
(SO 2) Record employer payroll taxes. (SO 3) EXERCISES ED-1 Betty Williams regular hourly wage rate is $14, and she receives a wage of 112 times the regular hourly rate for work in excess of 40 hours. During a March weekly pay period Betty worked 42 hours. Her gross earnings prior to the current week were $6,000. Betty is married and claims three withholding allowances. Her only voluntary deduction is for group hospitalization insurance at $15 per week. Instructions (a) Compute the following amounts for Bettys wages for the current week. (1) Gross earnings. (2) FICA taxes. (Assume an 8% rate on maximum of $97,500.) (3) Federal income taxes withheld. (Use the withholding table in the text, page D7.) (4) State income taxes withheld. (Assume a 2.0% rate.) (5) Net pay. (b) Record Bettys pay, assuming she is an office computer operator. ED-2 Employee earnings records for Brantley Company reveal the following gross earnings for four employees through the pay period of December 15. C. Mays L. Jeter $83,500 $95,200 D. Delgado T. Rolen $95,700 $97,500 Compute maximum FICA deductions. (SO 2) Compute net pay and record pay for one employee. (SO 2) For the pay period ending December 31, each employees gross earnings is $3,000. Employees are required to pay a FICA tax rate of 8% gross earnings of $97,500. Instructions Compute the FICA withholdings that should be made for each employee for the December 31 pay period. (Show computations.) ED-3 Piniella Company has the following data for the weekly payroll ending January 31. Hours Employee M. Hindi E. Benson K. Estes M 8 8 9 T 8 8 10 W 9 8 8 T 8 8 8 F 10 8 9 S 3 2 0 Federal Income Tax Withholding $34 37 58 Prepare payroll register and record payroll and payroll tax expense. (SO 2, 3) Hourly Rate $11 13 14 Health Insurance $10 15 15 Employees are paid 112 times the regular hourly rate for all hours worked in excess of 40 hours per week. FICA taxes are 8% on the first $97,500 of gross earnings. Piniella Company is subject to 5.4% state unemployment taxes on the first $9,800 and 0.8% federal unemployment taxes on the first $7,000 of gross earnings. Instructions (a) Prepare the payroll register for the weekly payroll. (b) Prepare the journal entries to record the payroll and Piniellas payroll tax expense. D18 Appendix D Payroll Accounting ED-4 Selected data from a February payroll register for Landmark Company are presented below. Some amounts are intentionally omitted. Gross earnings: Regular Overtime Total Deductions: FICA taxes Federal income taxes $8,900 (1) (2) $ 760 1,140 State income taxes Union dues Total deductions Net pay Accounts debited: Warehouse wages Store wages $(3) 100 (4) $7,215 (5) $4,000 Compute missing payroll amounts and record payroll. (SO 2) FICA taxes are 8%. State income taxes are 3% of gross earnings. Instructions (a) Fill in the missing amounts. (b) Journalize the February payroll and the payment of the payroll. Determine employers payroll taxes; record payroll tax expense. (SO 3) ED-5 According to a payroll register summary of Cruz Company, the amount of employees gross pay in December was $850,000, of which $70,000 was not subject to FICA tax and $760,000 was not subject to state and federal unemployment taxes. Instructions (a) Determine the employers payroll tax expense for the month, using the following rates: FICA 8%, state unemployment 5.4%, federal unemployment 0.8%. (b) Prepare the journal entry to record December payroll tax expense. 
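Two of the exercises above hinge on what portion of gross pay is actually taxable: ED-2 applies the FICA wage base, and ED-5 excludes exempt amounts before accruing employer taxes. A short sketch of both checks follows; the function names are ours, while the rates, the $97,500 maximum, and the dollar amounts come from the exercise data.

```python
# Two recurring patterns in ED-2 and ED-5 (function names are ours; rates,
# the $97,500 wage base, and the dollar figures come from the exercises).

FICA_RATE, SUTA_RATE, FUTA_RATE = 0.08, 0.054, 0.008
FICA_WAGE_BASE = 97_500

def fica_withholding(prior_earnings, period_earnings):
    """ED-2: withhold 8% only on the part of this period's pay still under the wage base."""
    taxable = max(min(FICA_WAGE_BASE - prior_earnings, period_earnings), 0)
    return taxable * FICA_RATE

def employer_tax_expense(gross, exempt_from_fica, exempt_from_unemployment):
    """ED-5: accrue employer taxes only on the non-exempt portions of gross pay."""
    fica = (gross - exempt_from_fica) * FICA_RATE
    unemployment_base = gross - exempt_from_unemployment
    return fica + unemployment_base * (SUTA_RATE + FUTA_RATE)

# ED-2: each employee earns $3,000 in the December 31 pay period.
for name, prior in [("C. Mays", 83_500), ("D. Delgado", 95_700),
                    ("L. Jeter", 95_200), ("T. Rolen", 97_500)]:
    print(name, fica_withholding(prior, 3_000))          # 240, 144, 184, 0

# ED-5: Cruz Company, December gross pay of $850,000.
print(employer_tax_expense(850_000, 70_000, 760_000))    # 62,400 + 4,860 + 720 = 67,980
```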
llege /w eygand t Visit the books website at, and choose the Student Companion site, to access Exercise Set B. PROBLEMS: SET A Identify internal control weaknesses and make recommendations for improvement. (SO 1) PD-1A The payroll procedures used by three different companies are described below. 1. In Brewer Company each employee is required to mark on a clock card the hours worked. At the end of each pay period, the employee must have this clock card approved by the department manager. The approved card is then given to the payroll department by the employee. Subsequently, the treasurers department pays the employee by check. 2. In Hilyard Computer Company clock cards and time clocks are used. At the end of each pay period, the department manager initials the cards, indicates the rates of pay, and sends them to payroll. A payroll register is prepared from the cards by the payroll department. Cash equal to the total net pay in each department is given to the department manager, who pays the employees in cash. 3. In Hyun-chan Company employees are required to record hours worked by punching clock cards in a time clock. At the end of each pay period, the clock cards are collected by the department manager. The manager prepares a payroll register in duplicate and forwards the original to payroll. In payroll, the summaries are checked for mathematical accuracy, and a payroll supervisor pays each employee by check. Instructions (a) Indicate the weakness(es) in internal control in each company. (b) For each weakness, describe the control procedure(s) that will provide effective internal Use control. the following format for your answer: (a) Weaknesses (b) Recommended Procedures .w i l e y. c o EXERCISES: SET B www m /co Problems: Set A PD-2A Graves Drug Store has four employees who are paid on an hourly basis plus time-anda-half for all hours worked in excess of 40 a week. Payroll data for the week ended February 15, 2008, are presented below. Federal Income Tax Withholdings $? ? 61 52 D19 Prepare payroll register and payroll entries. (SO 2, 3) Employees L. Leiss S. Bjork M. Cape L. Wild Hours Worked 39 42 44 48 Hourly Rate $14.00 $12.00 $12.00 $12.00 United Way $0 5.00 7.50 5.00 Leiss and Bjork are married. They claim 2 February 15, 2008, and the accrual of employer payroll taxes. (c) Journalize the payment of the payroll on February 16, 2008. (d) Journalize the deposit in a Federal Reserve bank on February 28, 2008, of the FICA and federal income taxes payable to the government. PD-3A The following payroll liability accounts are included in the ledger of Eikleberry Company on January 1, 2008. FICA Taxes Payable Federal Income Taxes Payable State Income Taxes Payable Federal Unemployment Taxes Payable State Unemployment Taxes Payable Union Dues Payable U.S. Savings Bonds Payable In January, the following transactions occurred. Jan. 10 Sent check for $250.00 to union treasurer for union dues. 12 Deposited check for $1,916.80 in Federal Reserve bank for FICA taxes and federal income taxes withheld. 15 Purchased U.S. Savings Bonds for employees by writing check for $350.00. 17 Paid state income taxes withheld from employees. 20 Paid federal and state unemployment taxes. 31 Completed monthly payroll register, which shows office salaries $17,600, store wages $27,400, FICA taxes withheld $3,600, federal income taxes payable $1,770, state income taxes payable $360, union dues payable $400, United Fund contributions payable $1,800, and net pay $37,070. 
31 Prepared payroll checks for the net pay and distributed checks to employees. At January 31, the company also makes the following accrual for employer payroll taxes: FICA taxes 8%, state unemployment taxes 5.4%, and federal unemployment taxes 0.8%. Instructions (a) Journalize the January transactions. (b) Journalize the adjustments pertaining to employee compensation at January 31. $ 662.20 1,254.60 102.15 312.00 1,954.40 250.00 350.00 (a) Net pay $1,786.32; Store wages expense $1,614.00 (b) Payroll tax expense $317.79 Journalize payroll transactions and adjusting entries. (SO 2, 3) (b) Payroll tax expense $6,390.00 D20 Appendix D Payroll Accounting PD-4A For the year ended December 31, 2008, R. Visnak Company reports the following summary payroll data. Gross earnings: Administrative salaries Electricians wages Total Deductions: FICA taxes Federal income taxes withheld State income taxes withheld (2.6%) United Way contributions payable *Hospital insurance premiums Total $180,000 320,000 $500,000 $ 35,200 153,000 13,000 25,000 15,800 $242,000 Prepare entries for payroll and payroll taxes; prepare W-2 data. (SO 2, 3) R. Visnak Companys payroll taxes are: FICA 8%, state unemployment 2.5% (due to a stable employment record), and 0.8% federal unemployment. Gross earnings subject to FICA taxes total $440,000, and unemployment taxes total $110,000. (a) Wages Payable $258,000 (b) Payroll tax expense $38,830 R. Lopez K. Kirk Gross Earnings $60,000 27,000 Federal Income Tax Withheld $27,500 11,000 PROBLEMS: SET B Identify internal control weaknesses and make recommendations for improvement. (SO 1) PD-1B Selected payroll procedures of Wallace Company are described below. 1. Department managers interview applicants and on the basis of the interview either hire or reject the applicants. When an applicant is hired, the applicant fills out a W-4 form (Employees Withholding Allowance Certificate). One copy of the form is sent to the human resources department, and one copy is sent to the payroll department as notice that the individual has been hired. On the copy of the W-4 sent to payroll, the managers manually indicate the hourly pay rate for the new hire. 2. The payroll checks are manually signed by the chief accountant and given to the department managers for distribution to employees in their department.The managers are responsible for seeing that any absent employees receive their checks. 3. There are two clerks in the payroll department. The payroll is divided alphabetically; one clerk has employees A to L and the other has employees M to Z. Each clerk computes the gross earnings, deductions, and net pay for employees in the section and posts the data to the employee earnings records. Instructions (a) Indicate the weaknesses in internal control. (b) For each weakness, describe the control procedures that will provide effective internal control. Use the following format for your answer: (a) Weaknesses (b) Recommended Procedures Problems: Set B PD-2B Lee Hardware has four employees who are paid on an hourly basis plus time-and-a half for all hours worked in excess of 40 a week. Payroll data for the week ended March 15, 2008, are presented below. D21 Prepare payroll register and payroll entries. (SO 2, 3) Employee Joe Coomer Mary Walker Andy Dye Kim Shen Hours Worked 40 42 44 48 Hourly Rate $15.00 13.00 13.00 13.00 Federal Income Tax Withholdings $? ? 60 67 United Way $5.00 5.00 8.00 5.00 Coomer and Walker March 15, 2008, and the accrual of employer payroll taxes. 
(c) Journalize the payment of the payroll on March 16, 2008. (d) Journalize the deposit in a Federal Reserve bank on March 31, 2008, of the FICA and federal income taxes payable to the government. PD-3B The following payroll liability accounts are included in the ledger of Nordlund Company on January 1, 2008. FICA Taxes Payable Federal Income Taxes Payable State Income Taxes Payable Federal Unemployment Taxes Payable State Unemployment Taxes Payable Union Dues Payable $ 760.00 1,204.60 108.95 288.95 1,954.40 870.00 (a) Net pay $1,910.37; Store wages expense $1,757 (b) Payroll tax expense $345.48 Journalize payroll transactions and adjusting entries. (SO 2, 3) U.S. Savings Bonds Payable In January, the following transactions occurred. 360.00 Jan. 10 Sent check for $870.00 to union treasurer for union dues. 12 Deposited check for $1,964.60 in Federal Reserve bank for FICA taxes and federal income taxes withheld. 15 Purchased U.S. Savings Bonds for employees by writing check for $360.00. 17 Paid state income taxes withheld from employees. 20 Paid federal and state unemployment taxes. 31 Completed monthly payroll register, which shows office salaries $21,600, store wages $28,400, FICA taxes withheld $4,000, federal income taxes payable $1,958, state income taxes payable $414, union dues payable $400, United Fund contributions payable $1,888, and net pay $41,340. 31 Prepared payroll checks for the net pay and distributed checks to employees. At January 31, the company also makes the following accrued adjustment for employer payroll taxes: FICA taxes 8%, federal unemployment taxes 0.8%, and state unemployment taxes 5.4%. Instructions (a) Journalize the January transactions. (b) Journalize the adjustments pertaining to employee compensation at January 31. (b) Payroll tax expense $7,100 D22 Appendix D Payroll Accounting PD-4B For the year ended December 31, 2008, Niehaus Electrical Repair Company reports the following summary payroll data. Gross earnings: Administrative salaries Electricians wages Total $180,000 370,000 $550,000 Prepare entries for payroll and payroll taxes; prepare W-2 data. (SO 2, 3) Deductions: FICA taxes $ 38,000 Federal income taxes withheld 168,000 State income taxes withheld (2.6%) 14,300 United Way contributions payable 27,500 *Hospital insurance premiums 17,200 Total $265,000 Niehaus Companys payroll taxes are: FICA 8%, state unemployment 2.5% (due to a stable employment record), and 0.8% federal unemployment. Gross earnings subject to FICA taxes total $475,000, and unemployment taxes total $125,000. (a) Wages payable $285,000 (b) Payroll tax expense $42,125 Anna Hashmi Sharon Bishop Gross Earnings $59,000 26,000 Federal Income Tax Withheld $28,500 10,200 llege /w eygand t Visit the books website at, and choose the Student Companion site, to access Problem Set C. BROADENING YOUR PERSPECTIVE FINANCIAL REPORTING AND ANALYSIS llege /w eygand Exploring the Web BYPD-1 The Internal Revenue Service provides considerable information over the Internet. The following demonstrates how useful one of its sites is in answering payroll tax questions faced by employers. Address:, or go to Steps 1. Go to the site shown above. 2. Choose View Online, Tax Publications. 3. Choose Publication 15, Circular E, Employers Tax Guide. m /co .w i l e y. c o PROBLEMS: SET C www m /co t www .w i l e y. c o Broadening Your Perspective Instructions Answer each of the following questions. (a) How does the government define employees? 
(b) What are the special rules for Social Security and Medicare regarding children who are employed by their parents? (c) How can an employee obtain a Social Security card if he or she doesnt have one? (d) Must employees report to their employer tips received from customers? If so, what is the process? (e) Where should the employer deposit Social Security taxes withheld or contributed? D23 CRITICAL THINKING Decision Making Across the Organization BYPD-2 Summerville Processing Company provides word-processing services for business clients and students in a university community. The work for business clients is fairly steady throughout the year. The work for students peaks significantly in December and May as a result of term papers, research project reports, and dissertations. Two years ago, the company attempted to meet the peak demand by hiring part-time help. However, this led to numerous errors and considerable customer dissatisfaction. A year ago, the company hired four experienced employees on a permanent basis instead of using part-time help. This proved to be much better in terms of productivity and customer satisfaction. But, it has caused an increase in annual payroll costs and a significant decline in annual net income. Recently, Valarie Flynn, a sales representative of Davidson Services Inc., has made a proposal to the company. Under her plan, Davidson Services will provide up to four experienced workers at a daily rate of $80 per person for an 8-hour workday. Davidson workers are not available on an hourly basis. Summerville Processing would have to pay only the daily rate for the workers used. The owner of Summerville Processing, Nancy Bell, asks you, as the companys accountant, to prepare a report on the expenses that are pertinent to the decision. If the Davidson plan is adopted, Nancy will terminate the employment of two permanent employees and will keep two permanent employees. At the moment, each employee earns an annual income of $22,000. Summerville Processing pays 8% FICA taxes, 0.8% federal unemployment taxes, and 5.4% state unemployment taxes. The unemployment taxes apply to only the first $7,000 of gross earnings. In addition, Summerville Processing pays $40 per month for each employee for medical and dental insurance. Nancy indicates that if the Davidson Services plan is accepted, her needs for workers will be as follows. Months JanuaryMarch AprilMay JuneOctober NovemberDecember Number 2 3 2 3 Working Days per Month 20 25 18 23 Instructions With the class divided into groups, answer the following. (a) Prepare a report showing the comparative payroll expense of continuing to employ permanent workers compared to adopting the Davidson Services Inc. plan. (b) What other factors should Nancy consider before finalizing her decision? Communication Activity BYPD-3 Ivan Blanco, president of the Blue Sky Company, has recently hired a number of additional employees. He recognizes that additional payroll taxes will be due as a result of this hiring, and that the company will serve as the collection agent for other taxes. D24 Appendix D Payroll Accounting Instructions In a memorandum to Ivan Blanco, explain each of the taxes, and identify the taxes that result in payroll tax expense to Blue Sky Company. Ethics Case BYPD-4 Johnny Fuller owns and manages Johnnys Restaurant, a 24-hour restaurant near the citys medical complex. Johnny employs 9 full-time employees and 16 part-time employees. 
He pays all of the full-time employees by check, the amounts of which are determined by Johnnys public accountant, Mary Lake. Johnny pays all of his part-time employees in cash. He computes their wages and withdraws the cash directly from his cash register. Mary has repeatedly urged Johnny to pay all employees by check. But as Johnny has told his competitor and friend, Steve Hill, who owns the Greasy Diner, First of all, my part-time employees prefer the cash over a check, and secondly I dont withhold or pay any taxes or workmens compensation insurance on those wages because they go totally unrecorded and unnoticed. Instructions (a) Who are the stakeholders in this situation? (b) What are the legal and ethical considerations regarding Johnnys handling of his payroll? (c) Mary Lake is aware of Johnnys payment of the part-time payroll in cash. What are her ethical responsibilities in this case? (d) What internal control principle is violated in this payroll process? Answers to Self-Study Questions 1. d 2. d 3. c Appendix E OBJECTIVES Subsidiary Ledgers and Special Journals STUDY After studying this appendix, you should be able to: 1. Describe the nature and purpose of a subsidiary ledger. 2. Explain how companies use special journals in journalizing. 3. Indicate how companies post a multi-column journal. SECTION 1 Expanding the Ledger Subsidiary Ledgers NATURE AND PURPOSE OF SUBSIDIARY LEDGERS Imagine a business that has several thousand charge (credit) customers STUDY OBJECTIVE 1 and shows the transactions with these customers in only one general Describe the nature and purpose ledger accountAccounts Receivable. It would be nearly impossible to of a subsidiary ledger.: 1. The accounts receivable (or customers) subsidiary ledger, which collects transaction data of individual customers. 2. The accounts payable (or creditors) subsidiary ledger, which collects transaction data of individual creditors. In each of these subsidiary ledgers, companies usually arrange individual accounts in alphabetical order. A general ledger account summarizes the detailed data from a subsidiary ledger. For example, the detailed data from the accounts receivable subsidiary ledger are summarized in Accounts Receivable in the general ledger. The general ledger account that summarizes subsidiary ledger data is called a control account. Illustration E-1 (page E2) presents an overview of the relationship of subsidiary ledgers to the general ledger.There, the general ledger control accounts and subsidiary ledger accounts are in green. Note that cash and owners capital in this E1 E2 Appendix E Subsidiary Ledgers and Special Journals illustration are not control accounts because there are no subsidiary ledger accounts related to these accounts. At the end of an accounting period, each general ledger control account balance must equal the composite balance of the individual accounts in the related subsidiary ledger. For example, the balance in Accounts Payable in Illustration E-1 must equal the total of the subsidiary balances of Creditors X Y Z. Control accounts General Ledger Accounts Receivable Accounts Payable Cash Common Stock Subsidiary Ledgers Customer Customer Customer A B C Creditor X Creditor Y Creditor Z Illustration E-1 Relationship of general ledger and subsidiary ledgers Subsidiary Ledger Example Illustration E-2 provides an example of a control account and subsidiary ledger for Pujols Enterprises. 
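The equality just described, that a control account balance must equal the sum of the balances in its subsidiary ledger, can be written as a one-line check. A minimal sketch, with hypothetical creditor balances standing in for Creditors X, Y, and Z:

```python
# A minimal check of the control-account relationship described above.
# The creditor names and balances are hypothetical; only the rule comes from the text.

subsidiary_ledger = {"Creditor X": 4_000, "Creditor Y": 2_500, "Creditor Z": 3_500}
accounts_payable_control = 10_000

composite = sum(subsidiary_ledger.values())
assert accounts_payable_control == composite, "Control account does not equal subsidiary total"
```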
(Due to space considerations, the explanation column in these accounts is not shown in this and subsequent illustrations.) Illustration E-2 is based on the transactions listed in Illustration E-3 (next page). Illustration E-2 Relationship between general and subsidiary ledgers File Edit View ? Go Bookmarks Tools Entries Help Post Closing Reports Tools Help Problem Date 2008 Jan 10 19 Ref. Aaron Co. Debit Credit 6,000 4,000 Branden Inc. Debit Credit 3,000 3,000 Caron Co. Debit Credit 3,000 1,000 Balance 6,000 2,000 Date 2008 Jan 31 31 Accounts Receivable Ref. Debit Credit 12,000 8,000 No. 112 Balance 12,000 4,000 Date 2008 Jan 12 21 Ref. Balance 3,000 -----The subsidiary ledger is separate from the general ledger. Accounts Receivable is a control account. Date 2008 Jan 20 29 Ref. Balance 3,000 2,000 General Ledger General Jrnl Sales Jrnl Cash Receipts Jrnl Purchases Jrnl Nature and Purpose of Subsidiary Ledgers Credit Sales Jan. 10 Aaron Co. 12 Branden Inc. 20 Caron Co. $ 6,000 3,000 3,000 $12,000 Collections on Account Jan. 19 Aaron Co. 21 Branden Inc. 29 Caron Co. $ 4,000 3,000 1,000 $ 8,000 Illustration E-3 Sales and collection transactions E3 Pujols can reconcile the total debits ($12,000) and credits ($8,000) in Accounts Receivable in the general ledger to the detailed debits and credits in the subsidiary accounts. Also, the balance of $4,000 in the control account agrees with the total of the balances in the individual accounts (Aaron Co. $2,000 Branden Inc. $0 Caron Co. $2,000) in the subsidiary ledger. As Illustration E-2 shows, companies make monthly postings to the control accounts in the general ledger.This practice allows them to prepare monthly financial statements. Companies post to the individual accounts in the subsidiary ledger daily. Daily posting ensures that account information is current. This enables the company to monitor credit limits, bill customers, and answer inquiries from customers about their account balances. Advantages of Subsidiary Ledgers Subsidiary ledgers have several advantages: 1. They show in a single account transactions affecting one customer or one creditor, thus providing up-to-date information on specific account balances. 2. They free the general ledger of excessive details. As a result, a trial balance of the general ledger does not contain vast numbers of individual account balances. 3. They help locate errors in individual accounts by reducing the number of accounts in one ledger and by using control accounts. 4. They make possible a division of labor in posting. One employee can post to the general ledger while someone else posts to the subsidiary ledgers. Before You Go On... REVIEW IT 1. What is a subsidiary ledger, and what purpose does it serve? 2. What is a control account, and what purpose does it serve? 3. Name two general ledger accounts that may act as control accounts for a subsidiary ledger. Can you think of a third control account? DO IT Presented below is information related to Sims Company for its first month of operations. Determine the balances that appear in the accounts payable subsidiary ledger. What Accounts Payable balance appears in the general ledger at the end of January? Credit Purchases Jan. 5 11 22 Devon Co. Shelby Co. Taylor Co. $11,000 7,000 14,000 Jan. 9 14 27 Cash Paid Devon Co. Shelby Co. Taylor Co. $7,000 2,000 9,000 Action Plan Subtract cash paid from credit purchases to determine the balances in the accounts payable subsidiary ledger. Sum the individual balances to determine the Accounts Payable balance. 
E4 Appendix E Subsidiary Ledgers and Special Journals Solution Subsidiary ledger balances: Devon Co. $4,000 ($11,000 $7,000) Shelby Co. $5,000 ($7,000 $2,000) Taylor Co. $5,000 ($14,000 $9,000). General ledger Accounts Payable balance: $14,000 ($4,000 $5,000 $5,000). Related exercise material: BEE-4, BEE-5, EE-1, EE-2, EE-4, and EE-5. Expanding the Journal Special Journals SECTION 2 So far you have learned to journalize transactions in a two-column general journal and post each entry to the general ledger. This procedure is satisfacExplain how companies use tory in only the very smallest companies. To expedite journalizing and postspecial journals in journalizing. ing, most companies use special journals in addition to the general journal. Companies use special journals to record similar types of transactions. Examples are all sales of merchandise on account, or all cash receipts. The types of transactions that occur frequently in a company determine what special journals the company uses. Most merchandising enterprises record daily transactions using Illustration E-4 the journals shown in Illustration E-4. STUDY OBJECTIVE 2 Use of special journals and the general journal Sales Journal Used for: All sales of merchandise on account Cash Receipts Journal Used for: All cash received (including cash sales) Purchases Journal Used for: All purchases of merchandise on account Cash Payments Journal Used for: All cash paid (including cash purchases) General Journal Used for: Transactions that cannot be entered in a special journal, including correcting, adjusting, and closing entries If a transaction cannot be recorded in a special journal, the company records it in the general journal. For example, if a company had special journals for only the four types of transactions listed above, it would record purchase returns and allowances in the general journal. Similarly, correcting, adjusting, and closing entries are recorded in the general journal. In some situations, companies might use special journals other than those listed above. For example, when sales returns and allowances are frequent, a company might use a special journal to record these transactions. Special journals permit greater division of labor because several people can record entries in different journals at the same time. For example, one employee may journalize all cash receipts, and another may journalize all credit sales. Also, the use of special journals reduces the time needed to complete the posting process. With special journals, companies may post some accounts monthly, instead of daily, as we will illustrate later in the chapter. On the following pages, we discuss the four special journals shown in Illustration E-4. Sales Journal E5 SALES JOURNAL In the sales journal, companies record sales of merchandise on account. Cash sales of merchandise go in the cash receipts journal. Credit sales of assets other than merchandise go in the general journal. Journalizing Credit Sales To demonstrate use of a sales journal, we will use data for Karns Wholesale Supply, which uses a perpetual inventory system. Under this system, each entry in the sales journal results in one entry at selling price and another entry at cost. The entry at selling price is a debit to Accounts Receivable (a control account) and a credit of equal amount to Sales. The entry at cost is a debit to Cost of Goods Sold and a credit of equal amount to Merchandise Inventory (a control account). 
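Because each sales-journal line drives two balanced pairs, one at selling price and one at cost, the posting can be checked mechanically. A short sketch using the figures of Karns's first invoice ($10,600 selling price, $6,360 cost); the list-of-tuples layout is ours, not the text's:

```python
# Sketch of the two entries generated by one sales-journal line under a
# perpetual system (figures from the first Karns invoice; structure is ours).

selling_price = 10_600
cost = 6_360

sale_entry = [("Accounts Receivable",   "Dr", selling_price),
              ("Sales",                 "Cr", selling_price)]
cost_entry = [("Cost of Goods Sold",    "Dr", cost),
              ("Merchandise Inventory", "Cr", cost)]

# Each pair must balance before it is posted.
for entry in (sale_entry, cost_entry):
    debits  = sum(amount for _, side, amount in entry if side == "Dr")
    credits = sum(amount for _, side, amount in entry if side == "Cr")
    assert debits == credits
```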
Using a sales journal with two amount columns, the company can show on only one line a sales transaction at both selling price and cost. Illustration E-5 shows this two-column sales journal of Karns Wholesale Supply, using assumed credit sales transactions (for sales invoices 101107). HELPFUL HINT Postings are also made daily to individual ledger accounts in the inventory subsidiary ledger to maintain a perpetual inventory. Illustration E-5 Journalizing the sales journalperpetual inventory system No. Ref. 101 102 103 104 105 106 107 Accts. Receivable Dr. Sales Cr. 10,600 11,350 7,800 9,300 15,400 21,210 14,570 , 90,230 , Cost of Goods Sold Dr. Merchandise Inventory Cr. 6,360 7,370 5,070 6,510 10,780 15,900 10,200 , 62,190 , General Ledger General Jrnl Sales Jrnl Cash Receipts Jrnl Purchases Jrnl Note several points: Unlike the general journal, an explanation is not required for each entry in a special journal. Also, use of prenumbered invoices ensures that all invoices are journalized. Finally, the reference (Ref.) column is not used in journalizing. It is used in posting the sales journal, as explained next. Posting the Sales Journal Companies make daily postings from the sales journal to the individual accounts receivable in the subsidiary ledger. Posting to the general ledger is done monthly. Illustration E-6 (page E6) shows both the daily and monthly postings. A check mark () is inserted in the reference posting column to indicate that the daily posting to the customers account has been made. If the subsidiary ledger accounts were numbered, the account number would be entered in place of the check mark. At the end of the month, Karns posts the column totals of the sales E6 Appendix E Subsidiary Ledgers and Special Journals Accts. Receivable Dr. Cost of Goods Sold Dr. No. Ref. Sales Cr. Merchandise Inventory Cr. 101 102 103 104 105 106 107 10,600 11,350 7,800 9,300 15,400 21,210 14,570 , 90,230 , (112) / (401) 6,360 7,370 5,070 6,510 10,780 15,900 10,200 , 62,190 , (505) / (120) At the end of the accounting period, the company posts totals to the general ledger. The company posts individual amounts to the subsidiary ledger daily. Accounts Receivable Date Ref. Debit Credit 2008 May 31 S1 90,230 No. 112 Balance 90,230 Date 2008 May 3 y 21 Ref. S1 S1 Abbot Sisters Debit Credit 10,600 15,400 Babson Co. Debit Credit 11,350 14,570 Carson Bros. Debit Credit 7,800 Balance 10,600 26,000 Date 2008 May 7 27 Ref. S1 S1 Balance 11,350 25,920 Merchandise Inventory Date Ref. Debit Credit 2008 May 31 S1 62,190 No. 120 Balance 62,1901 Date Ref. 2008 May 14 S1 Balance 7,800 Date Ref. 2008 May 31 S1 Sales Debit Credit 90,230 No. 401 Balance 90,230 Date Ref. 2008 May 19 S1 24 S1 Deli Co. Debit Credit 9,300 21,210 Balance 9,300 30,510 Date Ref. 2008 May 31 S1 Cost of Goods Sold Debit Credit 62,190 No. 505 Balance 62,190 The subsidiary ledger is separate from the general ledger. 1 Accounts Receivable is a control account. The normal balance for Merchandise Inventory is a debit. But, because of the sequence in which we have posted the special journals, with the sales journals first, the credits to Merchandise Inventory are posted before the debits. This posting sequence explains the credit balance in Merchandise Inventory, which exists only until the other journals are posted. General Ledger General Jrnl Sales Jrnl Cash Receipts Jrnl Purchases Jrnl Illustration E-6 Posting the sales journal journal to the general ledger. 
Here, the column totals are as follows: From the sellingprice column, a debit of $90,230 to Accounts Receivable (account No. 112), and a credit of $90,230 to Sales (account No. 401). From the cost column, a debit of $62,190 to Cost of Goods Sold (account No. 505), and a credit of $62,190 to Merchandise Inventory (account No. 120). Karns inserts the account numbers Cash Receipts Journal E7 below the column totals to indicate that the postings have been made. In both the general ledger and subsidiary ledger accounts, the reference S1 indicates that the posting came from page 1 of the sales journal. Proving the Ledgers The next step is to prove the ledgers. To do so, Karns must determine two things: (1) The total of the general ledger debit balances must equal the total of the general ledger credit balances. (2) The sum of the subsidiary ledger balances must equal the balance in the control account. Illustration E-7 shows the proof of the postings from the sales journal to the general and subsidiary ledger. Postings to General Ledger General Ledger Credits Merchandise Inventory Sales Debits Accounts Receivable Cost of Goods Sold Debit Postings to the Accounts Receivable Subsidiary Ledger Subsidiary Ledger $62,190 90,230 $152,420 $90,230 62,190 $152,420 Abbot Sisters Babson Co. Carson Bros. Deli Co. $26,000 25,920 7,800 30,510 $90,230 Illustration E-7 Proving the equality of the postings from the sales journal Advantages of the Sales Journal Use of a special journal to record sales on account has several advantages. First, the one-line entry for each sales transaction saves time. In the sales journal, it is not necessary to write out the four account titles for each transaction. Second, only totals, rather than individual entries, are posted to the general ledger. This saves posting time and reduces the possibilities of posting errors. Finally, a division of labor results, because one individual can take responsibility for the sales journal. CASH RECEIPTS JOURNAL In the cash receipts journal, companies record all receipts of cash. The most common types of cash receipts are cash sales of merchandise and collections of accounts receivable. Many other possibilities exist, such as receipt of money from bank loans and cash proceeds from disposal of equipment. A one- or two-column cash receipts journal would not have space enough for all possible cash receipt transactions. Therefore, companies use a multiple-column cash receipts journal. Generally, a cash receipts journal includes the following columns: debit columns for Cash and Sales Discounts, and credit columns for Accounts Receivable, Sales, and Other accounts. Companies use the Other Accounts category when the cash receipt does not involve a cash sale or a collection of accounts receivable. Under a perpetual inventory system, each sales entry also is accompanied by an entry that debits Cost of Goods Sold and credits Merchandise Inventory for the cost of the merchandise sold. Illustration E-8 (page E8) shows a six-column cash receipts journal. E8 Appendix E Subsidiary Ledgers and Special Journals Illustration E-8 Journalizing and posting the cash receipts journal File Edit View ? Go Bookmarks Tools Help Post Closing Reports Tools Help Problem Entries Date 2008 May 1 7 10 12 17 22 23 28 Account Credited Common Stock Abbot Sisters Babson Co. Notes Payable Carson Bros. Deli Co. Ref. 311 Cash Dr. 5,000 1,900 10,388 2,600 11,123 6,000 7,644 9,114 , 53,769 , (101) Sales Accounts Other Cost of Goods Discounts Receivable Sales Accounts Sold Dr. Dr. Cr. 
Cr. Cr. Mdse. Inv. Cr. 5,000 1,900 212 227 156 186 781 (414) 10,600 2,600 11,350 6,000 7,800 9,300 , 39,050 , (112) 1,690 1,240 200 4,500 (401) 2,930 11,000 , (505)/(120) (x ) The company posts individual amounts to the subsidiary ledger daily. At the end of the accounting period, the company posts totals to the general ledger. Date Ref. 2008 May 3 S1 10 CR1 21 S1 Abbot Sisters Debit Credit 10,600 10,600 15,400 Babson Co. Debit Credit 11,350 11,350 14,570 Carson Bros. Debit Credit 7,800 7,800 Deli Co. Balance 10,600 -------15,400 Date Ref. 2008 May 31 CR1 Cash Debit Credit 53,769 No. 101 Balance 53,769 No. 112 Balance 90,230 51,180 No. 120 Balance 62,190 65,120 No. 200 Balance 6,000 No. 311 Balance 5,000 No. 401 Accounts Receivable Date Ref. 2008 May 31 S1 31 CR1 Debit 90,230 39,050 Credit Date Ref. 2008 May 7 S1 17 CR1 27 S1 Balance 11,350 -------14,570 Merchandise Inventory Date Ref. 2008 May 31 S1 31 CR1 Debit Credit 62,190 2,930 Notes Payable Date Ref. 2008 May 22 CR1 Debit Credit 6,000 Common Stock Date Ref. 2008 May 1 CR1 Debit Credit 5,000 Sales Date Ref. 2008 May 14 S1 23 CR1 Balance 7,800 ------- Date Ref. 2008 May 19 S1 24 S1 28 CR1 Debit 9,300 21,210 Credit Balance 9,300 30,510 21,210 9,300 The subsidiary ledger is separate from the general ledger. Accounts Receivable is a control account. Date Ref. 2008 May 31 S1 31 CR1 Debit Credit 90,230 4,500 Balance 90,230 94,730 No. 414 Balance 781 No. 505 Balance 62,190 65,120 Sales Discounts Date Ref. 2008 May 31 CR1 y Debit 781 Cost of Goods Sold Date Ref. 2008 May 31 S1 31 CR1 Debit 62,190 2,930 Credit Credit General Ledger General Jrnl Sales Jrnl Cash Receipts Jrnl Purchases Jrnl Cash Receipts Journal E9 Companies may use additional credit columns if these columns significantly reduce postings to a specific account. For example, a loan company, such as Household International, receives thousands of cash collections from customers. Using separate credit columns for Loans Receivable and Interest Revenue, rather than the Other Accounts credit column, would reduce postings. Journalizing Cash Receipts Transactions To illustrate the journalizing of cash receipts transactions, we will continue with the May transactions of Karns Wholesale Supply. Collections from customers relate to the entries recorded in the sales journal in Illustration E-5. The entries in the cash receipts journal are based on the following cash receipts. May 1 Stockholders invested $5,000 in the business. 7 Cash sales of merchandise total $1,900 (cost, $1,240). 10 Received a check for $10,388 from Abbot Sisters in payment of invoice No. 101 for $10,600 less a 2% discount. 12 Cash sales of merchandise total $2,600 (cost, $1,690). 17 Received a check for $11,123 from Babson Co. in payment of invoice No. 102 for $11,350 less a 2% discount. 22 Received cash by signing a note for $6,000. 23 Received a check for $7,644 from Carson Bros. in full for invoice No. 103 for $7,800 less a 2% discount. 28 Received a check for $9,114 from Deli Co. in full for invoice No. 104 for $9,300 less a 2% discount. Further information about the columns in the cash receipts journal is listed below. Debit Columns: 1. Cash. Karns enters in this column the amount of cash actually received in each transaction. The column total indicates the total cash receipts for the month. 2. Sales Discounts. Karns includes a Sales Discounts column in its cash receipts journal. By doing so, it does not need to enter sales discount items in the general journal. 
As a result, the cash receipts journal shows on one line the collection of an account receivable within the discount period. Credit Columns: 3. Accounts Receivable. Karns uses the Accounts Receivable column to record cash collections on account. The amount entered here is the amount to be credited to the individual customers account. 4. Sales. The Sales column records all cash sales of merchandise. Cash sales of other assets (plant assets, for example) are not reported in this column. 5. Other Accounts. Karns uses the Other Accounts column whenever the credit is other than to Accounts Receivable or Sales. For example, in the first entry, Karns enters $5,000 as a credit to Common Stock.This column is often referred to as the sundry accounts column. Debit and Credit Column: 6. Cost of Goods Sold and Merchandise Inventory. This column records debits to Cost of Goods Sold and credits to Merchandise Inventory. In a multi-column journal, generally only one line is needed for each entry. Debit and credit amounts for each line must be equal. When Karns journalizes the collection from Abbot Sisters on May 10, for example, three amounts are indicated. Note also that the Account Credited column identifies both general ledger and subsidiary ledger account titles. General ledger accounts are illustrated in the May 1 HELPFUL HINT When is an account title entered in the Account Credited column of the cash receipts journal? Register to View Answersubsidiary ledger account is entered when the entry involves a collection of accounts receivable. A general ledger account is entered when the account is not shown in a special column (and an amount must be entered in the Other Accounts column). Otherwise, no account is shown in the Account Credited column. E10 Appendix E Subsidiary Ledgers and Special Journals and May 22 entries. A subsidiary account is illustrated in the May 10 entry for the collection from Abbot Sisters. When Karns has finished journalizing a multi-column journal, it totals the amount columns and compares the totals to prove the equality of debits and credits. Illustration E-9 shows the proof of the equality of Karnss cash receipts journal. Illustration E-9 Proving the equality of the cash receipts journal Debits Cash Sales Discounts Cost of Goods Sold $53,769 781 2,930 $57,480 Credits Accounts Receivable Sales Other Accounts Merchandise Inventory $39,050 4,500 11,000 2,930 $57,480 Totaling the columns of a journal and proving the equality of the totals is called footing and cross-footing a journal. Posting the Cash Receipts Journal Posting a multi-column journal involves the following steps. 1. At the end of the month, the company posts all column totals, except for the Other Accounts total, to the account title(s) specified in the Indicate how companies post a column heading (such as Cash or Accounts Receivable). The company multi-column journal. then enters account numbers below the column totals to show that they have been posted. For example, Karns has posted cash to account No. 101, accounts receivable to account No. 112, merchandise inventory to account No. 120, sales to account No. 401, sales discounts to account No. 414, and cost of goods sold to account No. 505. 2. The company separately posts the individual amounts comprising the Other Accounts total to the general ledger accounts specified in the Account Credited column. See, for example, the credit posting to Common Stock: The total amount of this column has not been. 
STUDY OBJECTIVE 3 The symbol CR, used in both the subsidiary and general ledgers, identifies postings from the cash receipts journal. Proving the Ledgers After posting of the cash receipts journal is completed, Karns proves the ledgers. As shown in Illustration E-10 (next page), the general ledger totals agree. Also, the sum of the subsidiary ledger balances equals the control account balance. Purchases Journal E11 Accounts Receivable Subsidiary Ledger General Ledger Debits Illustration E-10 Proving the ledgers after posting the sales and the cash receipts journals $53,769 51,180 781 65,120 $170,850 Abbot Sisters Babson Co. Deli Co. $15,400 14,570 21,210 $51,180 Cash Accounts Receivable Sales Discounts Cost of Goods Sold Credits Notes Payable Common Stock Sales Merchandise Inventory $ 6,000 5,000 94,730 65,120 $170,850 PURCHASES JOURNAL In the purchases journal, companies record all purchases of merchandise on account. Each entry in this journal results in a debit to Merchandise Inventory and a credit to Accounts Payable. Illustration E-11 (page E12) shows the purchases journal for Karns Wholesale Supply. When using a one-column purchases journal (as in Illustration E-11), a company cannot journalize other types of purchases on account or cash purchases in it. For example, using the purchases journal shown in Illustration E-11, Karns would have to record credit purchases of equipment or supplies in the general journal. Likewise, all cash purchases would be entered in the cash payments journal. As illustrated later, companies that make numerous credit purchases for items other than merchandise often expand the purchases journal to a multi-column format. (See Illustration E-14 on page E13.) Journalizing Credit Purchases of Merchandise The journalizing procedure is similar to that for a sales journal. Companies make entries in the purchases journal from purchase invoices. In contrast to the sales journal, the purchases journal may not have an invoice number column, because invoices received from different suppliers will not be in numerical sequence. To ensure that they record all purchase invoices, some companies consecutively number each invoice upon receipt and then use an internal document number column in the purchases journal. The entries for Karns Wholesale Supply are based on the assumed credit purchases listed in Illustration E-12 (page E12). Posting the Purchases Journal The procedures for posting the purchases journal are similar to those for the sales journal. In this case, Karns makes daily postings to the accounts payable ledger; it makes monthly postings to Merchandise Inventory and Accounts Payable in the general ledger. In both ledgers, Karns uses P1 in the reference column to show that the postings are from page 1 of the purchases journal. Proof of the equality of the postings from the purchases journal to both ledgers is shown in Illustration E-13 (page E13). HELPFUL HINT Postings to subsidiary ledger accounts are done daily because it is often necessary to know a current balance for the subsidiary accounts. E12 Appendix E Subsidiary Ledgers and Special Journals File Edit View ? Go Bookmarks Tools Entries Help Post Closing Reports Tools Help Problem Date 2008 May 6 10 14 19 26 29 Account Credited Jasper Manufacturing Inc. Eaton and Howe Inc. Fabor and Son Jasper Manufacturing Inc. Fabor and Son Eaton and Howe Inc. Terms 2/10, n/30 3/10, n/30 1/10, n/30 2/10, n/30 1/10, n/30 3/10, n/30 Merchandise Inventory Dr. Ref. Accounts Payable Cr. 
11,000 7,200 6,900 17,500 8,700 12,600 63,900 (120)/(201) The company posts individual amounts to the subsidiary ledger daily. At the end of the accounting period, the company posts totals to the general ledger. Date 2008 May 10 29 Eaton and Howe Inc. Ref. Debit Credit P1 P1 7,200 12,600 Fabor and Son Debit Credit 6,900 8,700 Merchandise Inventory Balance 7,200 19,800 Date Ref. Debit Credit 62,190 2,930 63,900 Accounts Payable Debit Credit 63,900 2008 May 31 S1 31 CR1 31 P1 No. 120 Balance 62,190 65,120 1,220 No. 201 Balance 63,900 Date 2008 May 14 26 Ref. P1 P1 Balance Date 6,900 15,600 2008 May 31 y Ref. P1 Date 2008 May 6 19 Jasper Manufacturing Inc. Ref. Debit Credit Balance P1 P1 11,000 17,500 11,000 28,500 The subsidiary ledger is separate from the general ledger. Accounts Payable is a control account. General Ledger General Jrnl Sales Jrnl Cash Receipts Jrnl Purchases Jrnl Illustration E-11 Journalizing and posting the purchases journal Illustration E-12 Credit purchases transactions Date 5/6 5/10 5/14 5/19 5/26 5/29 Supplier Jasper Manufacturing Inc. Eaton and Howe Inc. Fabor and Son Jasper Manufacturing Inc. Fabor and Son Eaton and Howe Inc. Amount $11,000 7,200 6,900 17,500 8,700 12,600 Cash Payments Journal E13 Postings to General Ledger Merchandise Inventory (debit) $63,900 Credit Postings to Accounts Payable Ledger Eaton and Howe Inc. Fabor and Son Jasper Manufacturing Inc. $19,800 15,600 28,500 $63,900 Illustration E-13 Proving the equality of the purchases journal Accounts Payable (credit) $63,900 Expanding the Purchases Journal As noted earlier, some companies expand the purchases journal to include all types of purchases on account. Instead of one column for merchandise inventory and accounts payable, they use a multiple-column format. This format usually includes a credit column for Accounts Payable and debit columns for purchases of Merchandise Inventory, Office Supplies, Store Supplies, and Other Accounts. Illustration E-14 shows a multi-column purchases journal for Hanover Co.The posting procedures are similar to those shown earlier for posting the cash receipts journal. HELPFUL HINT A single-column purchases journal needs only to be footed to prove the equality of debits and credits. Illustration E-14 Multi-column purchases journal File Edit View ? Go Bookmarks Tools Entries Help Post Closing Reports Tools Help Problem Date 2008 June 1 Signe Audio 3 Wight Co. 5 Orange Tree Co. Accounts Merchandise Office Store Other Accounts Payable Inventory Supplies Supplies Dr. Account Ref. Amount Account Credited Ref. Cr. Dr. Dr. Dr. 2,000 1,500 2,600 800 56,600 2,000 1,500 Equipment 157 43,000 7,500 800 1,200 2,600 4,900 30 Sue's Business Forms General Ledger General Jrnl Sales Jrnl Cash Receipts Jrnl Purchases Jrnl CASH PAYMENTS JOURNAL In a cash payments (cash disbursements) journal, companies record all disbursements of cash. Entries are made from prenumbered checks. Because companies make cash payments for various purposes, the cash payments journal has multiple columns. Illustration E-15 (page E14) shows a four-column journal. Journalizing Cash Payments Transactions The procedures for journalizing transactions in this journal are similar to those for the cash receipts journal. Karns records each transaction on one line, and for each line there must be equal debit and credit amounts.The entries in the cash payments E14 Appendix E Subsidiary Ledgers and Special Journals File Edit View ? 
Go Bookmarks Tools Entries Help Post Closing Reports Tools Help Problem Date 2008 May 1 3 8 10 19 23 28 30 Ck. No. 101 102 103 104 105 106 107 108 Account Debited Prepaid Insurance Mdse. Inventory Mdse. Inventory Jasper Manuf. Inc. Eaton & Howe Inc. Fabor and Son Jp Jasper Manuf. Inc. Dividends Accounts Merchandise Other Ref. Accounts Dr. Payable Dr. Inventory Cr. 130 120 120 1,200 100 4,400 11,000 7,200 6,900 17,500 332 500 6,200 (x) 42,600 (201) 220 216 69 350 855 (120) Cash Cr. 1,200 100 4,400 10,780 6,984 6,831 17,150 500 47,945 (101) The company posts individual amounts to the subsidiary ledger daily. At the end of the accounting period, the company posts totals to the general ledger. Date Eaton and Howe Inc. Ref. Debit Credit 7,200 7,200 12,600 Fabor and Son Debit Credit 6,900 6,900 8,700 Balance 7,200 ------12,600 Date Ref. Cash Debit Credit 53,769 47,945 No. 101 Balance 53,769 5,824 No. 120 Balance 100 4,500 57,690 60,620 3,280 2,425 No. 130 Balance 1,200 2008 May 10 P1 19 CP1 29 P1 2008 May 31 CR1 31 CP1 Merchandise Inventory Balance 6,900 ------8,700 Date 2008 May 3 8 31 31 31 31 Ref. CPI CPI SI CRI Pl CPI Debit 100 4,400 62,190 2,930 63,900 855 Prepaid Insurance Debit Credit 1,200 Credit Date Ref. 2008 May 14 P1 23 CP1 26 P1 Date 2008 May 6 P1 10 CP1 19 P1 28 CP1 Jasper Manufacturing Inc. Ref. Debit Credit Balance 11,000 11,000 17,500 17,500 11,000 -------17,500 -------- Date Ref. 2008 May 1 CP1 The subsidiary ledger is separate from the general ledger. Accounts Receivable is a control account. Date Ref. Accounts Payable Debit Credit 63,900 42,600 Dividends Debit Credit 500 No. 201 Balance 63,900 21,300 No. 332 Balance 500 2008 May 31 P1 31 CP1 Date Ref. 2008 May 30 CP1 General Ledger General Jrnl Sales Jrnl Cash Receipts Jrnl Purchases Jrnl Illustration E-15 Journalizing and posting the cash payments journal Cash Payments Journal E15 journal in Illustration E-15 are based on the following transactions for Karns Wholesale Supply. May 1 Issued check No. 101 for $1,200 for the annual premium on a fire insurance policy. 3 Issued check No. 102 for $100 in payment of freight when terms were FOB shipping point. 8 Issued check No. 103 for $4,400 for the purchase of merchandise. 10 Sent check No. 104 for $10,780 to Jasper Manufacturing Inc. in payment of May 6 invoice for $11,000 less a 2% discount. 19 Mailed check No. 105 for $6,984 to Eaton and Howe Inc. in payment of May 10 invoice for $7,200 less a 3% discount. 23 Sent check No. 106 for $6,831 to Fabor and Son in payment of May 14 invoice for $6,900 less a 1% discount. 28 Sent check No. 107 for $17,150 to Jasper Manufacturing Inc. in payment of May 19 invoice for $17,500 less a 2% discount. 30 Issued check No. 108 for $500 to stockholders as a dividend. Note that whenever Karns enters an amount in the Other Accounts column, it must identify a specific general ledger account in the Account Debited column. The entries for checks No. 101, 102, 103, and 108 illustrate this situation. Similarly, Karns must identify a subsidiary account in the Account Debited column whenever it enters an amount in the Accounts Payable column. See, for example, the entry for check No. 104. After Karns journalizes the cash payments journal, it totals the columns. The totals are then balanced to prove the equality of debits and credits. Posting the Cash Payments Journal The procedures for posting the cash payments journal are similar to those for the cash receipts journal. 
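As a quick check on the journalizing just completed, each check amount above is simply the related invoice less the stated purchase discount. A short sketch of that arithmetic (the function name is ours; the invoice amounts and discount rates are from the Karns transactions):

```python
# Verifying the discounted check amounts in the Karns cash payments listed above.
# Function name is ours; invoice amounts and discount rates are from the transactions.

def payment_net_of_discount(invoice_amount, discount_rate):
    return invoice_amount * (1 - discount_rate)

checks = [("No. 104", 11_000, 0.02),   # Jasper Manufacturing, May 6 invoice
          ("No. 105",  7_200, 0.03),   # Eaton and Howe, May 10 invoice
          ("No. 106",  6_900, 0.01),   # Fabor and Son, May 14 invoice
          ("No. 107", 17_500, 0.02)]   # Jasper Manufacturing, May 19 invoice

for number, amount, rate in checks:
    print(number, payment_net_of_discount(amount, rate))
# -> 10780.0, 6984.0, 6831.0, 17150.0
```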
Karns posts the amounts recorded in the Accounts Payable column individually to the subsidiary ledger and in total to the control account. It posts Merchandise Inventory and Cash only in total at the end of the month. Transactions in the Other Accounts column are posted individually to the appropriate account(s) affected. The company does not post totals for the Other Accounts column. Illustration E-15 shows the posting of the cash payments journal. Note that Karns uses the symbol CP as the posting reference. After postings are completed, the company proves the equality of the debit and credit balances in the general ledger. In addition, the control account balances should agree with the subsidiary ledger total balance. Illustration E-16 shows the agreement of these balances. Illustration E-16 Proving the ledgers after postings from the sales, cash receipts, purchases, and cash payments journals Accounts Payable Subsidiary Ledger General Ledger Debits Cash Accounts Receivable Merchandise Inventory Prepaid Insurance Dividends Sales Discounts Cost of Goods Sold Credits Notes Payable Accounts Payable Common Stock Sales $ 6,000 21,300 5,000 94,730 $127,030 $ 5,824 51,180 2,425 1,200 500 781 65,120 $127,030 Eaton and Howe Inc. Fabor and Son $12,600 8,700 $21,300 E16 Appendix E Subsidiary Ledgers and Special Journals EFFECTS OF SPECIAL JOURNALS ON THE GENERAL JOURNAL Special journals for sales, purchases, and cash substantially reduce the number of entries that companies make in the general journal. Only transactions that cannot be entered in a special journal are recorded in the general journal. For example, a company may use the general journal to record such transactions as granting of credit to a customer for a sales return or allowance, granting of credit from a supplier for purchases returned, acceptance of a note receivable from a customer, and purchase of equipment by issuing a note payable. Also, correcting, adjusting, and closing entries are made in the general journal. The general journal has columns for date, account title and explanation, reference, and debit and credit amounts. When control and subsidiary accounts are not involved, the procedures for journalizing and posting of transactions are the same as those described in earlier chapters. When control and subsidiary accounts are involved, companies make two changes from the earlier procedures: 1. In journalizing, they identify both the control and the subsidiary accounts. 2. In posting, there must be a dual posting: once to the control account and once to the subsidiary account. Illustration E-17 Journalizing and posting the general journal File Edit View ? Go Bookmarks Tools Entries Help Post Closing Reports Tools Help Problem Date Account Title and Explanation Ref. Debit 500 Credit 2008 201/ May 31 Accounts PayableFabor and Son 120 Merchandise Inventory (Received credit for returned goods) 500 Date Ref. Fabor and Son Debit Credit 6,900 6,900 8,700 500 Balance 6,900 ------8,700 8,200 , Date Merchandise Inventory Ref. Debit Credit 500 No. 120 Balance 500 2008 May 14 P1 23 CP1 26 P1 31 G1 2008 May 31 G1 Date Ref. Accounts Payable Debit Credit 63,900 42,600 500 No. 201 Balance 63,900 21,300 20,800 2008 May 31 P1 31 CP1 31 G1 General Ledger General Jrnl Sales Jrnl Cash Receipts Jrnl Purchases Jrnl Effects of Special Journals on the General Journal E17 To illustrate, assume that on May 31, Karns Wholesale Supply returns $500 of merchandise for credit to Fabor and Son. 
EFFECTS OF SPECIAL JOURNALS ON THE GENERAL JOURNAL

Special journals for sales, purchases, and cash substantially reduce the number of entries that companies make in the general journal. Only transactions that cannot be entered in a special journal are recorded in the general journal. For example, a company may use the general journal to record such transactions as the granting of credit to a customer for a sales return or allowance, the granting of credit from a supplier for purchases returned, the acceptance of a note receivable from a customer, and the purchase of equipment by issuing a note payable. Also, correcting, adjusting, and closing entries are made in the general journal.

The general journal has columns for date, account title and explanation, reference, and debit and credit amounts. When control and subsidiary accounts are not involved, the procedures for journalizing and posting transactions are the same as those described in earlier chapters. When control and subsidiary accounts are involved, companies make two changes from the earlier procedures:
1. In journalizing, they identify both the control and the subsidiary accounts.
2. In posting, there must be a dual posting: once to the control account and once to the subsidiary account.

To illustrate, assume that on May 31, Karns Wholesale Supply returns $500 of merchandise for credit to Fabor and Son. Illustration E-17 shows the entry in the general journal and the posting of the entry.

Illustration E-17: Journalizing and posting the general journal

General Journal                                                              G1
  Date     Account Title and Explanation           Ref.     Debit    Credit
  2008
  May 31   Accounts Payable, Fabor and Son         201/✓      500
               Merchandise Inventory               120                  500
           (Received credit for returned goods)

[The illustration also shows the postings: the $500 debit is posted both to the Accounts Payable control account (No. 201) and to the Fabor and Son account in the accounts payable subsidiary ledger, and the $500 credit is posted to Merchandise Inventory (No. 120).]

Note that if Karns receives cash instead of credit on this return, it would record the transaction in the cash receipts journal. Note also that the general journal indicates two accounts (Accounts Payable, and Fabor and Son) for the debit, and two postings (201/✓) in the reference column. One debit is posted to the control account and another debit to the creditor's account in the subsidiary ledger.

Before You Go On...
REVIEW IT
1. What types of special journals do companies frequently use to record transactions? Why do they use special journals?
2. Explain how companies post transactions recorded in the sales journal and the cash receipts journal.
3. Indicate the types of transactions that companies record in the general journal when they use special journals.

Demonstration Problem
Cassandra Wilson Company uses a six-column cash receipts journal with the following columns: Cash (Dr.), Sales Discounts (Dr.), Accounts Receivable (Cr.), Sales (Cr.), Other Accounts (Cr.), and Cost of Goods Sold (Dr.) and Merchandise Inventory (Cr.).

Cash receipts transactions for the month of July 2008 are as follows.
July  3  Cash sales total $5,800 (cost, $3,480).
      5  Received a check for $6,370 from Jeltz Company in payment of an invoice dated June 26 for $6,500, terms 2/10, n/30.
      9  Stockholders made an additional investment of $5,000 cash in the business.
     10  Cash sales total $12,519 (cost, $7,511).
     12  Received a check for $7,275 from R. Eliot & Co. in payment of a $7,500 invoice dated July 3, terms 3/10, n/30.
     15  Received a customer advance of $700 cash for future sales.
     20  Cash sales total $15,472 (cost, $9,283).
     22  Received a check for $5,880 from Beck Company in payment of a $6,000 invoice dated July 13, terms 2/10, n/30.
     29  Cash sales total $17,660 (cost, $10,596).
     31  Received cash of $200 on interest earned for July.

Action plan: Record all cash receipts in the cash receipts journal. The account credited indicates items posted individually to the subsidiary ledger or general ledger. Record cash sales in the cash receipts journal, not in the sales journal. The total debits must equal the total credits.

Instructions
(a) Journalize the transactions in the cash receipts journal.
(b) Contrast the posting of the Accounts Receivable and Other Accounts columns.

Solution

(a) CASSANDRA WILSON COMPANY
Cash Receipts Journal                                                        CR1

  Date   Account            Cash     Sales      Accounts   Sales    Other      COGS Dr./
  2008   Credited           Dr.      Disc. Dr.  Rec. Cr.   Cr.      Accts Cr.  Mdse. Inv. Cr.
  7/3                        5,800                           5,800               3,480
    5    Jeltz Company       6,370      130       6,500
    9    Common Stock        5,000                                    5,000
   10                       12,519                          12,519               7,511
   12    R. Eliot & Co.      7,275      225       7,500
   15    Unearned Revenue      700                                      700
   20                       15,472                          15,472               9,283
   22    Beck Company        5,880      120       6,000
   29                       17,660                          17,660              10,596
   31    Interest Revenue      200                                      200
                            76,876      475      20,000     51,451   5,900      30,870

(b) The Accounts Receivable column total is posted as a credit to Accounts Receivable. The individual amounts are credited to the customers' accounts identified in the Account Credited column, which are maintained in the accounts receivable subsidiary ledger. The amounts in the Other Accounts column are posted individually. They are credited to the account titles identified in the Account Credited column.

SUMMARY OF STUDY OBJECTIVES
1 Describe the nature and purpose of a subsidiary ledger. A subsidiary ledger is a group of accounts with a common characteristic.
It facilitates the recording process by freeing the general ledger from details of individual balances. 2 Explain how companies use special journals in journalizing. Companies use special journals to group similar types of transactions. In a special journal, generally only one line is used to record a complete transaction. 3 Indicate how companies post a multi-column journal. In posting a multi-column journal: (a) Companies post all column totals except for the Other Accounts column once at the end of the month to the account title specified in the column heading. (b) Companies do not post the total of the Other Accounts column. Instead, the individual amounts comprising the total are posted separately to the general ledger accounts specified in the Account Credited (Debited) column. (c) The individual amounts in a column posted in total to a control account are posted daily to the subsidiary ledger accounts specified in the Account Credited (Debited) column. GLOSSARY Accounts payable (creditors) subsidiary ledger A subsidiary ledger that collects transaction data of individual creditors. (p. E1). Accounts receivable (customers) subsidiary ledger A subsidiary ledger that collects transaction data of individual customers. (p. E1). Cash payments (disbursements) journal A special journal that records all cash paid. (p. E13). Cash receipts journal A special journal that records all cash received. (p. E7). Control account An account in the general ledger that summarizes subsidiary ledger. (p. E1). Questions Purchases journal A special journal that records all purchases of merchandise on account. (p. E11). Sales journal A special journal that records all sales of merchandise on account. (p. E5). E19 Special journal A journal that records similar types of transactions, such as all credit sales. (p. E4). Subsidiary ledger A group of accounts with a common characteristic. (p. E1). SELF-STUDY QUESTIONS Answers are at the end of the chapter. (SO 1) (SO 2) 1. Which of the following is incorrect concerning subsidiary ledgers? a. The purchases ledger is a common subsidiary ledger for creditor accounts. b. The accounts receivable ledger is a subsidiary ledger. c. A subsidiary ledger is a group of accounts with a common characteristic. d. An advantage of the subsidiary ledger is that it permits a division of labor in posting. 2. A sales journal will be used for: Credit Cash Sales Sales Sales Discounts a. no yes yes b. yes no yes c. yes no no d. yes yes no 3. Which of the following statements is correct? a. The sales discount column is included in the cash receipts journal. b. The purchases journal records all purchases of merchandise whether for cash or on account. c. The cash receipts journal records sales on account. d. Merchandise returned by the buyer is recorded by the seller in the purchases journal. 4. Which of the following is incorrect concerning the posting of the cash receipts journal? a. The total of the Other Accounts column is not posted. b. All column totals except the total for the Other Accounts column are posted once at the end of the month to the account title(s) specified in the column heading. c. The totals of all columns are posted daily to the accounts specified in the column heading. d. The individual amounts in a column posted in total to a control account are posted daily to the subsidiary (SO 2, 3) (SO 3) ledger account specified in the Account Credited column. 5. Postings from the purchases journal to the subsidiary (SO 3) ledger are generally made: a. yearly. b. monthly. c. 
weekly. d. daily. 6. Which statement is incorrect regarding the general journal? (SO 2) a. Only transactions that cannot be entered in a special journal are recorded in the general journal. b. Dual postings are always required in the general journal. c. The general journal may be used to record acceptance of a note receivable in payment of an account receivable. d. Correcting, adjusting, and closing entries are made in the general journal. 7. When companies use special journals: (SO 2) a. they record all purchase transactions in the purchases journal. b. they record all cash received, except from cash sales, in the cash receipts journal. c. they record all cash disbursements in the cash payments journal. d. a general journal is not necessary. 8. If a customer returns goods for credit, the selling company (SO 2) normally makes an entry in the: a. cash payments journal. b. sales journal. c. general journal. d. cash receipts journal. Go to the books website,, for Additional Self-Study questions. QUESTIONS 1. What are the advantages of using subsidiary ledgers? 2. (a) When do companies normally post to (1) the subsidiary accounts and (2) the general ledger control accounts? (b) Describe the relationship between a control account and a subsidiary ledger. 3. Identify and explain the four special journals discussed in the chapter. List an advantage of using each of these journals rather than using only a general journal. 4. Thogmartin Company uses special journals. It recorded in a sales journal a sale made on account to R. Peters for $435. A few days later, R. Peters returns $70 worth of merchandise for credit. Where should Thogmartin Company record the sales return? Why? 5. A $500 purchase of merchandise on account from Lore Company was properly recorded in the purchases journal. When posted, however, the amount recorded in the E20 Appendix E Subsidiary Ledgers and Special Journals (d) Sales of merchandise on account. (e) Collection of cash on account from a customer. (f) Purchase of office supplies on account. In what journal would the following transactions be recorded? (Assume that a two-column sales journal and a single-column purchases journal are used.) (a) Cash received from signing a note payable. (b) Investment of cash by stockholders. (c) Closing of the expense accounts at the end of the year. (d) Purchase of merchandise on account. (e) Credit received for merchandise purchased and returned to supplier. (f) Payment of cash on account due a supplier. What transactions might be included in a multiple-column purchases journal that would not be included in a singlecolumn purchases journal? Give an example of a transaction in the general journal that causes an entry to be posted twice (i.e., to two accounts), one in the general ledger, the other in the subsidiary ledger. Does this affect the debit/credit equality of the general ledger? Give some examples of appropriate general journal transactions for an organization using special journals. 6. 7. 8. 9. subsidiary ledger was $50. How might this error be discovered? Why would special journals used in different businesses not be identical in format? What type of business would maintain a cash receipts journal but not include a column for accounts receivable? The cash and the accounts receivable columns in the cash receipts journal were mistakenly overadded by $4,000 at the end of the month. (a) Will the customers ledger agree with the Accounts Receivable control account? (b) Assuming no other errors, will the trial balance totals be equal? 
One column total of a special journal is posted at monthend to only two general ledger accounts. One of these two accounts is Accounts Receivable. What is the name of this special journal? What is the other general ledger account to which that same month-end total is posted? In what journal would the following transactions be recorded? (Assume that a two-column sales journal and a single-column purchases journal are used.) (a) Recording of depreciation expense for the year. (b) Credit given to a customer for merchandise purchased on credit and returned. (c) Sales of merchandise for cash. 10. 11. 12. 13. BRIEF EXERCISES Identify subsidiary ledger balances. (SO 1) BEE-1 Presented below is information related to Kienholz Company for its first month of operations. Identify the balances that appear in the accounts receivable subsidiary ledger and the accounts receivable balance that appears in the general ledger at the end of January. Credit Sales Jan. 7 Agler Co. 15 Barto Co. 23 Maris Co. Identify subsidiary ledger accounts. (SO 1) Identify special journals. (SO 2) Cash Collections $10,000 6,000 9,000 Jan. 17 Agler Co. 24 Barto Co. 29 Maris Co. $7,000 4,000 9,000 BEE-2 Identify in what ledger (general or subsidiary) each of the following accounts is shown. 3. Notes Payable 4. Accounts PayableThebeau 4. Credit sales 5. Purchase of merchandise on account 6. Receipt of cash for services performed 1. Rent Expense 2. Accounts ReceivableChar BEE-3 1. Cash sales 2. Payment of dividends 3. Cash purchase of land Identify the journal in which each of the following transactions is recorded. Identify entries to cash receipts journal. (SO 2) BEE-4 Indicate whether each of the following debits and credits is included in the cash receipts journal. (Use Yes or No to answer this question.) 1. Debit to Sales 2. Credit to Merchandise Inventory 3. Credit to Accounts Receivable 4. Debit to Accounts Payable Identify transactions for special journals. (SO 2) BEE-5 Galindo Co. uses special journals and a general journal. Identify the journal in which each of the following transactions is recorded. (a) (b) (c) (d) Purchased equipment on account. Purchased merchandise on account. Paid utility expense in cash. Sold merchandise on account. Exercises BEE-6 Identify the special journal(s) in which the following column headings appear. 4. Sales Cr. 5. Merchandise Inventory Dr. Identify transactions for special journals. (SO 2) E21 1. Sales Discounts Dr. 2. Accounts Receivable Cr. 3. Cash Dr. BEE-7 Kidwell Computer Components Inc. uses a multi-column cash receipts journal. Indicate which column(s) is/are posted only in total, only daily, or both in total and daily. 1. Accounts Receivable 2. Sales Discounts 3. Cash 4. Other Accounts Indicate postings to cash receipts journal. (SO 3) EXERCISES EE-1 Donahue Company uses both special journals and a general journal as described in this chapter. On June 30, after all monthly postings had been completed, the Accounts Receivable control account in the general ledger had a debit balance of $320,000; the Accounts Payable control account had a credit balance of $77,000. The July transactions recorded in the special journals are summarized below. No entries affecting accounts receivable and accounts payable were recorded in the general journal for July. 
Sales journal Purchases journal Cash receipts journal Cash payments journal Total sales $161,400 Total purchases $56,400 Accounts receivable column total $131,000 Accounts payable column total $47,500 Determine control account balances, and explain posting of special journals. (SO 1, 3) $161,400 in the sales journal posted? (d) To what account(s) is the accounts receivable column total of $131,000 in the cash receipts journal posted? EE-2 Presented below is the subsidiary accounts receivable account of Jeremy Dody. Explain postings to subsidiary ledger. (SO 1) Date 2008 Sept. 2 9 27 Ref. S31 G4 CR8 Debit 61,000 Credit Balance 61,000 47,000 14,000 47,000 Instructions Write a memo to Andrea Barden, chief financial officer, that explains each transaction. EE-3. Instructions (a) Set up control and subsidiary accounts and enter the beginning balances. Do not construct the journals. (b) Post the various journals. Post the items as individual items or as totals, whichever would be the appropriate procedure. (No sales discounts given.) Post various journals to control and subsidiary accounts. (SO 1, 3) E22 Appendix E Subsidiary Ledgers and Special Journals (c) Prepare a list of customers and prove the agreement of the controlling account with the subsidiary ledger at September 30, 2008. Determine control and subsidiary ledger balances for accounts receivable. (SO 1) EE-4 Yu Suzuki Company has a balance in its Accounts Receivable control account of $11,000 on January 1, 2008. The subsidiary ledger contains three accounts: Smith Company, balance $4,000; Green Company, balance $2,500; and Koyan Company. During January, the following receivable-related transactions occurred. Credit Sales Smith Company Green Company Koyan Company $9,000 7,000 8,500 Collections $8,000 2,500 9,000 Returns $ -03,000 -0- Instructions (a) What is the January 1 balance in the Koyan Company subsidiary account? (b) What is the January 31 balance in the control account? (c) Compute the balances in the subsidiary accounts at the end of the month. (d) Which January transaction would not be recorded in a special journal? Determine control and subsidiary ledger balances for accounts payable. (SO 1) EE-5 Nobo Uematsu Company has a balance in its Accounts Payable control account of $8,250 on January 1, 2008. The subsidiary ledger contains three accounts: Jones Company, balance $3,000; Brown Company, balance $1,875; and Aatski Company. During January, the following receivable-related transactions occurred. Purchases Jones Company Brown Company Aatski Company $6,750 5,250 6,375 Payments $6,000 1,875 6,750 Returns $ -02,250 -0- Instructions (a) What is the January 1 balance in the Aatski Company subsidiary account? (b) What is the January 31 balance in the control account? (c) Compute the balances in the subsidiary accounts at the end of the month. (d) Which January transaction would not be recorded in a special journal? Record transactions in sales and purchases journal. (SO 1, 2) EE-6 Montalvo Company uses special journals and a general journal. The following transactions occurred during September 2008. Sept. 2 Sold merchandise on account to T. Hossfeld, invoice no. 101, $720, terms n/30. The cost of the merchandise sold was $420. 10 Purchased merchandise on account from L. Rincon $600, terms 2/10, n/30. 12 Purchased office equipment on account from R. Press $6,500. 21 Sold merchandise on account to P. Lowther, invoice no. 102 for $800, terms 2/10, n/30. The cost of the merchandise sold was $480. 
25 Purchased merchandise on account from W. Barone $860, terms n/30. 27 Sold merchandise to S. Miller for $700 cash. The cost of the merchandise sold was $400. Instructions (a) Prepare a sales journal (see Illustration E-6) and a single-column purchase journal (see Illustration E-11). (Use page 1 for each journal.) (b) Record the transaction(s) for September that should be journalized in the sales journal and the purchases journal. Record transactions in cash receipts and cash payments journal. (SO 1, 2) EE-7 Pherigo Co. uses special journals and a general journal. The following transactions occurred during May 2008. May 1 2 3 14 I. Pherigo invested $50,000 cash in the business in exchange for common stock. Sold merchandise to B. Sherrick for $6,300 cash. The cost of the merchandise sold was $4,200. Purchased merchandise for $7,200 from J. DeLeon using check no. 101. Paid salary to H. Potter $700 by issuing check no. 102. Exercises 16 22 Sold merchandise on account to K. Kimbell for $900, terms n/30. The cost of the merchandise sold was $630. A check of $9,000 is received from M. Moody in full for invoice 101; no discount given. E23 Instructions (a) Prepare a multiple-column cash receipts journal (see Illustration E-8) and a multiplecolumn cash payments journal (see Illustration E-15). (Use page 1 for each journal.) (b) Record the transaction(s) for May that should be journalized in the cash receipts journal and cash payments journal. EE-8 Wick Company uses the columnar cash journals illustrated in the textbook. In April, the following selected cash transactions occurred. 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. Made a refund to a customer for the return of damaged goods. Received collection from customer within the 3% discount period. Purchased merchandise for cash. Paid a creditor within the 3% discount period. Received collection from customer after the 3% discount period had expired. Paid freight on merchandise purchased. Paid cash for office equipment. Received cash refund from supplier for merchandise returned. Paid cash dividend to stockholders. Made cash sales. Explain journalizing in cash journals. (SO 2) Instructions Indicate (a) the journal, and (b) the columns in the journal that should be used in recording each transaction. EE-9 Velasquez Company has the following selected transactions during March. Purchased equipment costing $9,400 from Chang Company on account. Received credit of $410 from Lyden Company for merchandise damaged in shipment to Velasquez. Issued credit of $400 to Higley Company for merchandise the customer returned. The returned merchandise had a cost of $260. Journalize transactions in general journal and post. (SO 1, 3) Mar. 2 5 7 Velasquez Company uses a one-column purchases journal, a sales journal, the columnar cash journals used in the text, and a general journal. Instructions (a) Journalize the transactions in the general journal. (b) In a brief memo to the president of Velasquez Company, explain the postings to the control and subsidiary accounts from each type of journal. EE-10 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 13. Below are some typical transactions incurred by Kwun Company. Indicate journalizing in special journals. (SO 2) Payment of creditors on account. Return of merchandise sold for credit. Collection on account from customers. Sale of land for cash. Sale of merchandise on account. Sale of merchandise for cash. Received credit for merchandise purchased on credit. Sales discount taken on goods sold. Payment of employee wages. 
Payment of cash dividend to stockholders. Depreciation on building. Purchase of office supplies for cash. Purchase of merchandise on account. Instructions For each transaction, indicate whether it would normally be recorded in a cash receipts journal, cash payments journal, sales journal, single-column purchases journal, or general journal. E24 Appendix E Subsidiary Ledgers and Special Journals EE-11 The general ledger of Sanchez Company contained the following Accounts Payable control account (in T-account form). Also shown is the related subsidiary ledger. Explain posting to control account and subsidiary ledger. (SO 1, 3) GENERAL LEDGER Accounts Payable Feb. 15 General journal 28 ? 1,400 ? Feb. 1 5 11 28 Balance General journal General journal Purchases 26,025 265 550 13,400 9,500 Feb. 28 Balance ACCOUNTS PAYABLE LEDGER Perez Feb. 28 Bal. 4,600 Tebbetts Feb. 28 Bal. ? Zerbe Feb. 28 Bal. 2,300 Instructions (a) Indicate the missing posting reference and amount in the control account, and the missing ending balance in the subsidiary ledger. (b) Indicate the amounts in the control account that were dual-posted (i.e., posted to the control account and the subsidiary accounts). Prepare purchases and general journals. (SO 1, 2) EE-12 Selected accounts from the ledgers of Lockhart Company at July 31 showed the following. GENERAL LEDGER Store Equipment Date July 1 Date July 1 15 18 25 31 Explanation Explanation Ref. G1 Ref. G1 G1 G1 G1 P1 Debit 3,900 Debit Credit 3,900 400 100 200 8,300 Credit No. 153 Balance 3,900 No. 201 Balance 3,900 4,300 4,200 4,000 12,300 Date July 15 18 25 31 Merchandise Inventory Explanation Ref. G1 G1 G1 P1 Debit 400 100 200 8,300 Credit No. 120 Balance 400 300 100 8,400 Accounts Payable ACCOUNTS PAYABLE LEDGER Albin Equipment Co. Date July 1 Explanation Ref. G1 Debit Credit 3,900 Balance 3,900 Date July 14 25 Date July 12 21 Date July 15 Explanation Explanation Explanation Drago Co. Ref. P1 G1 Ref. P1 P1 Debit 200 Debit Credit 500 600 Debit Credit 400 Credit 1,100 Balance 1,100 900 Balance 500 1,100 Balance 400 Brian Co. Date July 3 20 Date July 17 18 29 Explanation Explanation Ref. P1 P1 Debit Credit 2,400 700 Debit 100 1,600 Credit 1,400 Balance 2,400 3,100 Balance 1,400 1,300 2,900 Erik Co. Chacon Corp Ref. P1 G1 P1 Heinen Inc. Ref. G1 Problems: Set A Instructions From the data prepare: (a) the single-column purchases journal for July. (b) the general journal entries for July. EE-13 Kansas Products uses both special journals and a general journal as described in this chapter. Kansas also posts customers accounts in the accounts receivable subsidiary ledger. The postings for the most recent month are included in the subsidiary T accounts below. E25 Determine correct posting amount to control account. (SO 3) Bargo Bal. 340 200 250 Bal. Leary 150 240 150 Carol Bal. 0 145 145 Bal. Paul 120 190 150 120 Instructions Determine the correct amount of the end-of-month posting from the sales journal to the Accounts Receivable control account. EE-14 below. Selected account balances for Matisyahu Company at January 1, 2008, are presented Compute balances in various accounts. (SO 3) Accounts Payable Accounts Receivable Cash Inventory $14,000 22,000 17,000 13,500 Matisyahus sales journal for January shows a total of $100,000 in the selling price column, and its one-column purchases journal for January shows a total of $72,000. The column totals in Matisyahus cash receipts journal are: Cash Dr. $61,000; Sales Discounts Dr. $1,100; Accounts Receivable Cr. $45,000; Sales Cr. 
$6,000; and Other Accounts Cr. $11,100. The column totals in Matisyahus cash payments journal for January are: Cash Cr. $55,000; Inventory Cr. $1,000; Accounts Payable Dr. $46,000; and Other Accounts Dr. $10,000. Matisyahus total cost of goods sold for January is $63,600. Accounts Payable, Accounts Receivable, Cash, Inventory, and Sales are not involved in the Other Accounts column in either the cash receipts or cash payments journal, and are not involved in any general journal entries. Instructions Compute the January 31 balance for Matisyahu in the following accounts. (a) Accounts Payable. (b) Accounts Receivable. (c) Cash. (d) Inventory. (e) Sales. llege /w eygand t Visit the books website at, and choose the Student Companion site, to access Exercise Set B. PROBLEMS: SET A PE-1A Grider Companys chart of accounts includes the following selected accounts. 101 112 120 311 Cash Accounts Receivable Merchandise Inventory Common Stock 401 Sales 414 Sales Discounts 505 Cost of Goods Sold Journalize transactions in cash receipts journal; post to control account and subsidiary ledger. (SO 1, 2, 3) .w i l e y. c o EXERCISES: SET B www m /co E26 Appendix E Subsidiary Ledgers and Special Journals Stockholders invested $7,200 additional cash in the business, in exchange for common stock.. (a) Balancing totals $21,205 (c) Accounts Receivable $1,430 Journalize transactions in cash payments journal; post to control account April transactions to these accounts. (c) Prove the agreement of the control account and subsidiary account balances. PE-2A Ming Companys chart of accounts includes the following selected accounts. 101 120 130 157 Cash Merchandise Inventory Prepaid Insurance Equipment 201 Accounts Payable 332 Dividends 505 Cost of Goods Sold On October 1 the accounts payable ledger of Ming Company showed the following balances: Bovary Company $2,700, Nyman Co. $2,500, Pyron Co. $1,800, and Sims Company $3,700. The October transactions involving the payment of cash were as follows. Oct. 1 3 5 10 15 16 19 29 (a) Balancing totals $12,350 Purchased merchandise, check no. 63, $300. Purchased equipment, check no. 64, $800. Paid Bovary Company balance due of $2,700, less 2% discount, check no. 65, $2,646. Purchased merchandise, check no. 66, $2,250. Paid Pyron Co. balance due of $1,800, check no. 67. Paid cash dividend of $400, check no. 68. Paid Nyman Co. in full for invoice no. 610, $1,600 less 2% cash discount, check no. 69, $1,568. Paid Sims Company in full for invoice no. 264, $2,500, check no. 70. (c) Accounts Payable $2,100 Journalize transactions in multi-column purchases journal; post to the general and subsidiary ledgers. (SO 1, 2, 3) October transactions to these accounts. (c) Prove the agreement of the control account and the subsidiary account balances. PE-3A The chart of accounts of Lopez Company includes the following selected accounts. 112 120 126 157 201 Accounts Receivable Merchandise Inventory Supplies Equipment Accounts Payable 401 412 505 610 Sales Sales Returns and Allowances Cost of Goods Sold Advertising Expense In July the following selected transactions were completed. All purchases and sales were on account. The cost of all merchandise sold was 70% of the sales price. July 1 2 3 Purchased merchandise from Fritz Company $8,000. Received freight bill from Wayward Shipping on Fritz purchase $400. Made sales to Pinick Company $1,300, and to Wayne Bros. $1,500. Problems: Set A 5 8 13 15 16 18 21 22 24 26 28 30 Purchased merchandise from Moon Company $3,200. 
Received credit on merchandise returned to Moon Company $300. Purchased store supplies from Cress Supply $720. Purchased merchandise from Fritz Company $3,600 and from Anton Company $3,300. Made sales to Sager Company $3,450 and to Wayne Bros. $1,570. Received bill for advertising from Lynda Advertisements $600. Made sales to Pinick Company $310 and to Haddad Company $2,800. Granted allowance to Pinick Company for merchandise damaged in shipment $40. Purchased merchandise from Moon Company $3,000. Purchased equipment from Cress Supply $900. Received freight bill from Wayward Shipping on Moon purchase of July 24, $380. Made sales to Sager Company $5,600. E27A Selected accounts from the chart of accounts of Boyden Company are shown below. 101 112 120 126 157 201 Cash Accounts Receivable Merchandise Inventory Supplies Equipment Accounts Payable 401 412 414 505 726 Sales Sales Returns and Allowances Sales Discounts Cost of Goods Sold Salaries Expense (a) Purchases journal Accounts Payable $24,100 Sales column total $16,530 (c) Accounts Receivable $16,490 Accounts Payable $23,800 Journalize transactions in special journals. (SO 1, 2, 3) The cost of all merchandise sold was 60% of the sales price. During January, Boyden completed the following transactions. Jan. 3 4 4 5 6 8 9 11 13 13 15 15 17 17 19 20 20 23 24 27 30 31 31 1. 2. Purchased merchandise on account from Wortham Co. $10,000. Purchased supplies for cash $80. Sold merchandise on account to Milam $5,250, invoice no. 371, terms 1/10, n/30. Returned $300 worth of damaged goods purchased on account from Wortham Co. on January 3. Made cash sales for the week totaling $3,150. Purchased merchandise on account from Noyes Co. $4,500. Sold merchandise on account to Connor Corp. $6,400, invoice no. 372, terms 1/10, n/30. Purchased merchandise on account from Betz Co. $3,700. Paid in full Wortham Co. on account less a 2% discount. Made cash sales for the week totaling $6,260. Received payment from Connor Corp. for invoice no. 372. Paid semi-monthly salaries of $14,300 to employees. Received payment from Milam for invoice no. 371. Sold merchandise on account to Bullock Co. $1,200, invoice no. 373, terms 1/10, n/30. Purchased equipment on account from Murphy Corp. $5,500. Cash sales for the week totaled $3,200. Paid in full Noyes Co. on account less a 2% discount. Purchased merchandise on account from Wortham Co. $7,800. Purchased merchandise on account from Forgetta Corp. $5,100. Made cash sales for the week totaling $4,230. Received payment from Bullock Co. for invoice no. 373. Paid semi-monthly salaries of $13,200 to employees. Sold merchandise on account to Milam $9,330, invoice no. 374, terms 1/10, n/30. Boyden Company uses the following journals. Sales journal. Single-column purchases journal. E28 Appendix E Subsidiary Ledgers and Special Journals 3. 4. 5. Cash receipts journal with columns for Cash Dr., Sales Discounts Dr., Accounts Receivable Cr., Sales Cr., Other Accounts Cr., and Cost of Goods Sold Dr./Merchandise Inventory Cr. Cash payments journal with columns for Other Accounts Dr., Accounts Payable Dr., Merchandise Inventory Cr., and Cash Cr. General journal. (a) Sales journal $22,180 Purchases journal $31,100 Cash receipts journal balancing total $29,690 Cash payments journal balancing total $41,780 Journalize in sales and cash receipts journals; post; prepare a trial balance; prove control to subsidiary; prepare adjusting entries; prepare an adjusted trial balance. 
(SO 1, 2, 3) Instructions Using the selected accounts provided: (a) Record the January transactions in the appropriate journal noted. (b) Foot and crossfoot all special journals. (c) Show how postings would be made by placing ledger account numbers and checkmarks as needed in the journals. (Actual posting to ledger accounts is not required.) PE-5A Presented below are the purchases and cash payments journals for Reyes Co. for its first month of operations. PURCHASES JOURNAL Date July 4 5 11 13 20 P1 Account Credited G. Clemens A. Ernst J. Happy C. Tabor M. Sneezy Ref. Merchandise Inventory Dr. Accounts Payable Cr. 6,800 8,100 5,920 15,300 7,900 44,020 CASH PAYMENTS JOURNAL Account Debited Store Supplies A. Ernst Prepaid Rent G. Clemens Dividends C. Tabor CP1 Cash Cr. 600 8,019 6,000 6,800 2,500 15,147 39,066 Date July 4 10 11 15 19 21 Ref. Other Accounts Dr. 600 Accounts Payable Dr. 8,100 Merchandise Inventory Cr. 81 6,000 6,800 2,500 15,300 9,100 30,200 153 234 In addition, the following transactions have not been journalized for July. The cost of all merchandise sold was 65% of the sales price. July 1 6 7 8 10 13 16 20 21 29 D. Reyes invested $80,000 in cash in exchange for common stock. Sold merchandise on account to Ewing Co. $6,200 terms 1/10, n/30. Made cash sales totaling $6,000. Sold merchandise on account to S. Beauty $3,600, terms 1/10, n/30. Sold merchandise on account to W. Pitts $4,900, terms 1/10, n/30. Received payment in full from S. Beauty. Received payment in full from W. Pitts. Received payment in full from Ewing Co. Sold merchandise on account to H. Prince $5,000, terms 1/10, n/30. Returned damaged goods to G. Clemens and received cash refund of $420. Instructions (a) Open the following accounts in the general ledger. 101 Cash 112 Accounts Receivable 120 Merchandise Inventory 127 Store Supplies 131 Prepaid Rent 201 Accounts Payable Problems: Set A 311 332 401 414 Common Stock Dividends Sales Sales Discounts 505 Cost of Goods Sold 631 Supplies Expense 729 Rent Expense E29 (b) Journalize the transactions that have not been journalized in the sales journal, the cash receipts journal (see Illustration E-8), and the general journal. (c) Post to the accounts receivable and accounts payable subsidiary ledgers. Follow the sequence of transactions as shown in the problem. (d) Post the individual entries and totals to the general ledger. (e) Prepare a trial balance at July 31, 2008. (f) Determine whether the subsidiary ledgers agree with the control accounts in the general ledger. (g) The following adjustments at the end of July are necessary. (1) A count of supplies indicates that $140 is still on hand. (2) Recognize rent expense for July, $500. Prepare the necessary entries in the general journal. Post the entries to the general ledger. (h) Prepare an adjusted trial balance at July 31, 2008. PE-6A The post-closing trial balance for Cortez Co. is as follows. (b) Sales journal total $19,700 Cash receipts journal balancing totals $101,120 (e) Totals $119,520 (f) Accounts Receivable $5,000 Accounts Payable $13,820 (h) Totals $119,520 Journalize in special journals; post; prepare a trial balance. (SO 1, 2, 3) CORTEZ CO. Post-Closing Trial Balance December 31, 2008 Debit Cash Accounts Receivable Notes Receivable Merchandise Inventory Equipment Accumulated DepreciationEquipment Accounts Payable Common Stock $ 41,500 15,000 45,000 23,000 6,450 Credit $ 1,500 43,000 86,450 $130,950 $130,950 The subsidiary ledgers contain the following information: (1) accounts receivableJ. 
Anders $2,500, F. Cone $7,500, T. Dudley $5,000; (2) accounts payableJ. Feeney $10,000, D. Goodman $18,000, and K. Inwood $15,000. The cost of all merchandise sold was 60% of the sales price. The transactions for January 2009 are as follows. Jan. 3 5 7 11 12 13 14 15 17 18 20 23 24 27 29 30 Sell merchandise to M. Rensing $5,000, terms 2/10, n/30. Purchase merchandise from E. Vietti $2,000, terms 2/10, n/30. Receive a check from T. Dudley $3,500. Pay freight on merchandise purchased $300. Pay rent of $1,000 for January. Receive payment in full from M. Rensing. Post all entries to the subsidiary ledgers. Issued credit of $300 to J. Aders for returned merchandise. Send K. Inwood a check for $14,850 in full payment of account, discount $150. Purchase merchandise from G. Marley $1,600, terms 2/10, n/30. Pay sales salaries of $2,800 and office salaries $2,000. Give D. Goodman a 60-day note for $18,000 in full payment of account payable. Total cash sales amount to $9,100. Post all entries to the subsidiary ledgers. Sell merchandise on account to F. Cone $7,400, terms 1/10, n/30. Send E. Vietti a check for $950. Receive payment on a note of $40,000 from B. Lemke. Post all entries to the subsidiary ledgers. Return merchandise of $300 to G. Marley for credit. E30 Appendix E Subsidiary Ledgers and Special Journals Instructions (a) Open general and subsidiary ledger accounts for the following. 101 112 115 120 157 158 200 201 Cash Accounts Receivable Notes Receivable Merchandise Inventory Equipment Accumulated DepreciationEquipment Notes Payable Accounts Payable 311 401 412 414 505 726 727 729 Common Stock Sales Sales Returns and Allowances Sales Discounts Cost of Goods Sold Sales Salaries Expense Office Salaries Expense Rent Expense (b) Sales journal $12,400 Purchases journal $3,600 Cash receipts journal (balancing) $57,600 Cash payments journal (balancing) $22,050 (d) Totals $139,800 (e) Accounts Receivable $18,600 Accounts Payable $12,350 (b) Record the January transactions in a sales journal, a single-column purchases journal, a cash receipts journal (see Illustration E-8), a cash payments journal (see Illustration E-15), and a general journal. (c) Post the appropriate amounts to the general ledger. (d) Prepare a trial balance at January 31, 2009. (e) Determine whether the subsidiary ledgers agree with controlling accounts in the general ledger. PROBLEMS: SET B Journalize transactions in cash receipts journal; post to control account and subsidiary ledger. (SO 1, 2, 3) PE-1B Darby Companys chart of accounts includes the following selected accounts. 101 112 120 311 Cash Accounts Receivable Merchandise Inventory Common Stock 401 Sales 414 Sales Discounts 505 Cost of Goods Sold On June 1 the accounts receivable ledger of Darby Company showed the following balances: Deering & Son $2,500, Farley Co. $1,900, Grinnell Bros. $1,600, and Lenninger Co. $1,300. The June transactions involving the receipt of cash were as follows. June 1 3 6 7 9 11 15 20 (a) Balancing totals $28,255 Stockholders invested $10,000 additional cash in the business, in exchange for common stock. Received check in full from Lenninger Co. less 2% cash discount. Received check in full from Farley Co. less 2% cash discount. Made cash sales of merchandise totaling $6,135. The cost of the merchandise sold was $4,090. Received check in full from Deering & Son less 2% cash discount. Received cash refund from a supplier for damaged merchandise $320. Made cash sales of merchandise totaling $4,500. The cost of the merchandise sold was $3,000. 
Received check in full from Grinnell Bros. $1,600. (c) Accounts Receivable $0 Journalize transactions in cash payments journal; post to the general June transactions to these accounts. (c) Prove the agreement of the control account and subsidiary account balances. PE-2B Gonya Companys chart of accounts includes the following selected accounts. 101 Cash 120 Merchandise Inventory 130 Prepaid Insurance 157 Equipment 201 Accounts Payable 332 Dividends On November 1 the accounts payable ledger of Gonya Company showed the following balances: A. Hess & Co. $4,500, C. Kimberlin $2,350, G. Ruttan $1,000, and Wex Bros. $1,500. The November transactions involving the payment of cash were as follows. Nov. 1 3 Purchased merchandise, check no. 11, $1,140. Purchased store equipment, check no. 12, $1,700. Problems: Set B 5 11 15 16 19 25 30 Paid Wex Bros. balance due of $1,500, less 1% discount, check no. 13, $1,485. Purchased merchandise, check no. 14, $2,000. Paid G. Ruttan balance due of $1,000, less 3% discount, check no. 15, $970. Paid cash dividend of $500, check no. 16. Paid C. Kimberlin in full for invoice no. 1245, $1,150 less 2% discount, check no. 17, $1,127. Paid premium due on one-year insurance policy, check no. 18, $3,000. Paid A. Hess & Co. in full for invoice no. 832, $3,500, check no. 19. E31 November transactions to these accounts. (c) Prove the agreement of the control account and the subsidiary account balances. PE-3B The chart of accounts of Emley Company includes the following selected accounts. 112 120 126 157 201 Accounts Receivable Merchandise Inventory Supplies Equipment Accounts Payable 401 412 505 610 Sales Sales Returns and Allowances Cost of Goods Sold Advertising Expense (a) Balancing totals $15,490 (c) Accounts Payable $2,200 Journalize transactions in multi-column purchases journal; post to the general and subsidiary ledgers. (SO 1, 2, 3) In May the following selected transactions were completed. All purchases and sales were on account except as indicated. The cost of all merchandise sold was 65% of the sales price. May 2 3 5 8 10 15 16 17 18 20 23 25 26 28 Purchased merchandise from Younger Company $7,500. Received freight bill from Ruden Freight on Younger purchase $360. Made sales to Ellie Company $1,980, DeShazer Bros. $2,700, and Liu Company $1,500. Purchased merchandise from Utley Company $8,000 and Zeider Company $8,700. Received credit on merchandise returned to Zeider Company $500. Purchased supplies from Rodriquez Supply $900. Purchased merchandise from Younger Company $4,500, and Utley Company $7,200. Returned supplies to Rodriquez Supply, receiving credit $100. (Hint: Credit Supplies.) Received freight bills on May 16 purchases from Ruden Freight $500. Returned merchandise to Younger Company receiving credit $300. Made sales to DeShazer Bros. $2,400 and to Liu Company $3,600. Received bill for advertising from Amster Advertising $900. Granted allowance to Liu Company for merchandise damaged in shipment $200. Purchased equipment from Rodriquez Supply $500. (a) Purchases journal Accounts Payable, Cr. $39,060 Sales column total $12,180 (c) Accounts Receivable $11,980 Accounts Payable $38,160 Journalize transactions in special journals. (SO 1, 2, 3)B Selected accounts from the chart of accounts of Litke Company are shown below. 
101 112 120 126 140 145 Cash Accounts Receivable Merchandise Inventory Supplies Land Buildings 201 401 414 505 610 Accounts Payable Sales Sales Discounts Cost of Goods Sold Advertising Expense The cost of all merchandise sold was 70% of the sales price. During October, Litke Company completed the following transactions. E32 Appendix E Subsidiary Ledgers and Special Journals Oct. 2 4 5 7 9 10 12 13 14 16 17 18 21 23 25 25 25 26 27 28 30 30 30 Purchased merchandise on account from Camacho Company $16,500. Sold merchandise on account to Enos Co. $7,700. Invoice no. 204, terms 2/10, n/30. Purchased supplies for cash $80. Made cash sales for the week totaling $9,160. Paid in full the amount owed Camacho Company less a 2% discount. Purchased merchandise on account from Finn Corp. $3,500. Received payment from Enos Co. for invoice no. 204. Returned $210 worth of damaged goods purchased on account from Finn Corp. on October 10. Made cash sales for the week totaling $8,180. Sold a parcel of land for $27,000 cash, the lands original cost. Sold merchandise on account to G. Richter & Co. $5,350, invoice no. 205, terms 2/10, n/30. Purchased merchandise for cash $2,125. Made cash sales for the week totaling $8,200. Paid in full the amount owed Finn Corp. for the goods kept (no discount). Purchased supplies on account from Robinson Co. $260. Sold merchandise on account to Hunt Corp. $5,220, invoice no. 206, terms 2/10, n/30. Received payment from G. Richter & Co. for invoice no. 205. Purchased for cash a small parcel of land and a building on the land to use as a storage facility.The total cost of $35,000 was allocated $21,000 to the land and $14,000 to the building. Purchased merchandise on account from Kudro Co. $8,500. Made cash sales for the week totaling $7,540. Purchased merchandise on account from Camacho Company $14,000. Paid advertising bill for the month from the Gazette, $400. Sold merchandise on account to G. Richter & Co. $4,600, invoice no. 207, terms 2/10, n/30. Litke Company uses the following journals. 1. 2. 3. Sales journal. Single-column purchases journal. Cash receipts journal with columns for Cash Dr., Sales Discounts Dr., Accounts Receivable Cr., Sales Cr., Other Accounts Cr., and Cost of Goods Sold Dr./Merchandise Inventory Cr. Cash payments journal with columns for Other Accounts Dr., Accounts Payable Dr., Merchandise Inventory Cr., and Cash Cr. General journal. 4. 5. (b) Sales journal $22,870 Purchases journal $42,500 Cash receipts journal Cash, Dr. $72,869 Cash payments journal, Cash, Cr. $57,065 Journalize in purchases and cash payments journals; post; prepare a trial balance; prove control to subsidiary; prepare adjusting entries; prepare an adjusted trial balance. (SO 1, 2, 3) Instructions Using the selected accounts provided: (a) Record the October transactions in the appropriate journals. (b) Foot and crossfoot all special journals. (c) Show how postings would be made by placing ledger account numbers and check marks as needed in the journals. (Actual posting to ledger accounts is not required.) PE-5B Presented below are the sales and cash receipts journals for Wyrick Co. for its first month of operations. SALES JOURNAL Date Feb. 3 9 12 26 S. Arndt C. Boyd F. Catt M. Didde S1 Cost of Goods Sold Dr. Merchandise Inventory Cr. 3,630 4,290 5,280 4,620 17,820 Account Debited Ref. Accounts Receivable Dr. Sales Cr. 5,500 6,500 8,000 7,000 27,000 Comprehensive Problem: Chapters 3 to 6 and Appendix E E33 CR1 CASH RECEIPTS JOURNAL Cash Dr. 
30,000 6,500 5,445 150 6,500 48,595 Date Feb. 1 2 13 18 26 Account Credited Common Stock S. Arndt Merchandise Inventory C. Boyd Ref. Sales Accounts Other Discounts Receivable Sales Accounts Cost of Goods Sold Dr. Dr. Cr. Cr. Cr. Merchandise Inventory Cr. 30,000 6,500 55 5,500 150 6,500 55 12,000 6,500 30,150 4,290 4,290 In addition, the following transactions have not been journalized for February 2008. Feb. 2 7 9 12 15 16 17 20 21 28 Purchased merchandise on account from J. Vopat for $4,600, terms 2/10, n/30. Purchased merchandise on account from P. Kneiser for $30,000, terms 1/10, n/30. Paid cash of $1,250 for purchase of supplies. Paid $4,508 to J. Vopat in payment for $4,600 invoice, less 2% discount. Purchased equipment for $7,000 cash. Purchased merchandise on account from J. Nunez $2,400, terms 2/10, n/30. Paid $29,700 to P. Kneiser in payment of $30,000 invoice, less 1% discount. Paid cash dividend of $1,100. Purchased merchandise on account from G. Reedy for $7,800, terms 1/10, n/30. Paid $2,400 to J. Nunez in payment of $2,400 invoice. Instructions (a) Open the following accounts in the general ledger. 101 112 120 126 157 158 201 Cash Accounts Receivable Merchandise Inventory Supplies Equipment Accumulated DepreciationEquipment Accounts Payable 311 332 401 414 505 631 711 Common Stock Dividends Sales Sales Discounts Cost of Goods Sold Supplies Expense Depreciation Expense (b) Purchases journal total $44,800 Cash payments journal Cash, Cr. $45,958 (e) Totals $71,300 (f) Accounts Receivable $15,000 Accounts Payable $7,800 (b) Journalize the transactions that have not been journalized in a one-column purchases journal and the cash payments journal (see Illustration E-15). (c) Post to the accounts receivable and accounts payable subsidiary ledgers. Follow the sequence of transactions as shown in the problem. (d) Post the individual entries and totals to the general ledger. (e) Prepare a trial balance at February 29, 2008. (f) Determine that the subsidiary ledgers agree with the control accounts in the general ledger. (g) The following adjustments at the end of February are necessary. (1) A count of supplies indicates that $300 is still on hand. (2) Depreciation on equipment for February is $200. Prepare the adjusting entries and then post the adjusting entries to the general ledger. (h) Prepare an adjusted trial balance at February 29, 2008. (h) Totals $71,500 llege /w eygand t Visit the books website at, and choose the Student Companion site, to access Problem Set C. COMPREHENSIVE PROBLEM: CHAPTERS 3 TO 6 AND APPENDIX E Packard Company has the following opening account balances in its general and subsidiary ledgers on January 1 and uses the periodic inventory system.All accounts have normal debit and credit balances. .w i l e y. c o PROBLEMS: SET C www m /co E34 Appendix E Subsidiary Ledgers and Special Journals General Ledger Account Number 101 112 115 120 125 130 157 158 201 311 320 Account Title Cash Accounts Receivable Notes Receivable Merchandise Inventory Office Supplies Prepaid Insurance Equipment Accumulated Depreciation Accounts Payable Common Stock Retained Earnings January 1 Opening Balance $33,750 13,000 39,000 20,000 1,000 2,000 6,450 1,500 35,000 70,000 8,700 Accounts Receivable Subsidiary Ledger January 1 Opening Customer Balance R. Draves B. Hachinski S. Ingles $1,500 7,500 4,000 Accounts Payable Subsidiary Ledger January 1 Opening Creditor Balance S. Kosko R. Mikush D. Moreno $ 9,000 15,000 11,000 Jan. 
3 5 7 8 9 9 10 11 12 13 15 16 17 18 20 21 21 22 23 25 27 28 31 31 Sell merchandise on account to B. Remy $3,100, invoice no. 510, and J. Fine $1,800, invoice no. 511. Purchase merchandise on account from S. Yost $3,000 and D. Laux $2,700. Receive checks for $4,000 from S. Ingles and $2,000 from B. Hachinski. Pay freight on merchandise purchased $180. Send checks to S. Kosko for $9,000 and D. Moreno for $11,000. Issue credit of $300 to J. Fine for merchandise returned. Summary cash sales total $15,500. Sell merchandise on account to R. Draves for $1,900, invoice no. 512, and to S. Ingles $900, invoice no. 513. Post all entries to the subsidiary ledgers. Pay rent of $1,000 for January. Receive payment in full from B. Remy and J. Fine. Pay cash dividend of $800. Purchase merchandise on account from D. Moreno for $15,000, from S. Kosko for $13,900, and from S. Yost for $1,500. Pay $400 cash for office supplies. Return $200 of merchandise to S. Kosko and receive credit. Summary cash sales total $17,500. Issue $15,000 note to R. Mikush in payment of balance due. Receive payment in full from S. Ingles. Post all entries to the subsidiary ledgers. Sell merchandise on account to B. Remy for $3,700, invoice no. 514, and to R. Draves for $800, invoice no. 515. Send checks to D. Moreno and S. Kosko in full payment. Sell merchandise on account to B. Hachinski for $3,500, invoice no. 516, and to J. Fine for $6,100, invoice no. 517. Purchase merchandise on account from D. Moreno for $12,500, from D. Laux for $1,200, and from S. Yost for $2,800. Pay $200 cash for office supplies. Summary cash sales total $22,920. Pay sales salaries of $4,300 and office salaries of $3,600. Broadening Your Perspective Instructions (a) Record the January transactions in the appropriate journalsales, purchases, cash receipts, cash payments, and general. (b) Post the journals to the general and subsidiary ledgers. Add and number new accounts in an orderly fashion as needed. (c) Prepare a trial balance at January 31, 2008, using a worksheet. Complete the worksheet using the following additional information. (1) Office supplies at January 31 total $700. (2) Insurance coverage expires on October 31, 2008. (3) Annual depreciation on the equipment is $1,500. (4) Interest of $30 has accrued on the note payable. (5) Merchandise inventory at January 31 is $15,000. (d) Prepare a multiple-step income statement and a retained earnings statement for January and a classified balance sheet at the end of January. (e) Prepare and post the adjusting and closing entries. (f) Prepare a post-closing trial balance, and determine whether the subsidiary ledgers agree with the control accounts in the general ledger. E35 (c) Trial balance totals $196,820; Adj. T/B totals $196,975 (d) Net income $9,685 Total assets $126,315 (f) Post-closing T/B totals $127,940 BROADENING YOUR PERSPECTIVE FINANCIAL REPORTING AND ANALYSIS Financial Reporting ProblemMini Practice Set BYPE-1 (You will need the working papers that accompany this textbook in order to work this mini practice set.) Bluma Co. uses a perpetual inventory system and both an accounts receivable and an accounts payable subsidiary ledger. Balances related to both the general ledger and the subsidiary ledger for Bluma are indicated in the working papers. Presented below are a series of transactions for Bluma Co. for the month of January. Credit sales terms are 2/10, n/30. The cost of all merchandise sold was 60% of the sales price. Jan. 3 Sell merchandise on account to B. Richey $3,100, invoice no. 
510, and to J. Forbes $1,800, invoice no. 511. 5 Purchase merchandise from S. Vogel $5,000 and D. Lynch $2,200, terms n/30. 7 Receive checks from S. LaDew $4,000 and B. Garcia $2,000 after discount period has lapsed. 8 Pay freight on merchandise purchased $235. 9 Send checks to S. Hoyt for $9,000 less 2% cash discount, and to D. Omara for $11,000 less 1% cash discount. 9 Issue credit of $300 to J. Forbes for merchandise returned. 10 Summary daily cash sales total $15,500. 11 Sell merchandise on account to R. Dvorak $1,600, invoice no. 512, and to S. LaDew $900, invoice no. 513. 12 Pay rent of $1,000 for January. 13 Receive payment in full from B. Richey and J. Forbes less cash discounts. 14 Pay an $800 cash dividend. 15 Post all entries to the subsidiary ledgers. 16 Purchase merchandise from D. Omara $18,000, terms 1/10, n/30; S. Hoyt $14,200, terms 2/10, n/30; and S. Vogel $1,500, terms n/30. 17 Pay $400 cash for office supplies. 18 Return $200 of merchandise to S. Hoyt and receive credit. 20 Summary daily cash sales total $20,100. 21 Issue $15,000 note, maturing in 90 days, to R. Moses in payment of balance due. 21 Receive payment in full from S. LaDew less cash discount. 22 Sell merchandise on account to B. Richey $2,700, invoice no. 514, and to R. Dvorak $1,300, invoice no. 515. 22 Post all entries to the subsidiary ledgers. E36 Appendix E Subsidiary Ledgers and Special Journals 23 Send checks to D. Omara and S. Hoyt in full payment less cash discounts. 25 Sell merchandise on account to B. Garcia $3,500, invoice no. 516, and to J. Forbes $6,100, invoice no. 517. 27 Purchase merchandise from D. Omara $14,500, terms 1/10, n/30; D. Lynch $1,200, terms n/30; and S. Vogel $5,400, terms n/30. 27 Post all entries to the subsidiary ledgers. 28 Pay $200 cash for office supplies. 31 Summary daily cash sales total $21,300. 31 Pay sales salaries $4,300 and office salaries $3,800. Instructions (a) Record the January transactions in a sales journal, a single-column purchases journal, a cash receipts journal as shown on page E8, a cash payments journal as shown on page E14, and a two-column general journal. (b) Post the journals to the general ledger. (c) Prepare a trial balance at January 31, 2008, in the trial balance columns of the worksheet. Complete the worksheet using the following additional information. (1) Office supplies at January 31 total $900. (2) Insurance coverage expires on October 31, 2008. (3) Annual depreciation on the equipment is $1,500. (4) Interest of $50 has accrued on the note payable. (d) Prepare a multiple-step income statement and a retained earnings statement for January and a classified balance sheet at the end of January. (e) Prepare and post adjusting and closing entries. (f) Prepare a post-closing trial balance, and determine whether the subsidiary ledgers agree with the control accounts in the general ledger. llege /w eygand Exploring the Web BYPE-2 Great Plains Accounting is one of the leading accounting software packages. Information related to this package is found at its website. Address:, or go to Steps 1. Go to the site shown above. 2. Choose General Ledger. Perform instruction (a). 3. Choose Accounts Payable. Perform instruction (b). Instructions (a) What are three key features of the general ledger module highlighted by the company? (b) What are three key features of the payables management module highlighted by the company? 
www t m /co CRITICAL THINKING Decision Making Across the Organization BYPE-3 Hughey & Payne is a wholesaler of small appliances and parts. Hughey & Payne is operated by two owners, Rich Hughey and Kristen Payne.numbered sales invoices. Credit terms are always net/30 days. All parts sales and repair work. .w i l e y. c o Broadening Your Perspective Rich and Kristen each make a monthly drawing in cash for personal living expenses. The salaried repairman is paid twice monthly. Hughey & Payne currently has a manual accounting system. Instructions With the class divided into groups, answer the following. (a) Identify the special journals that Hughey & Payne should have in its manual system. List the column headings appropriate for each of the special journals. (b) What control and subsidiary accounts should be included in Hughey & Payne manual system? Why? E37 Communication Activity BYPE-4 Barb Doane, a classmate, has a part-time bookkeeping job. She is concerned about the inefficiencies in journalizing and posting transactions. Jim Houser is the owner of the company where Barb works. In response to numerous complaints from Barb and others, Jim hired two additional bookkeepers a month ago. However, the inefficiencies have continued at an even higher rate. The accounting information system for the company has only a general journal and a general ledger. Jim refuses to install an electronic accounting system. Instructions Now that Barb is an expert in manual accounting information systems, she decides to send a letter to Jim Houser explaining (1) why the additional personnel did not help and (2) what changes should be made to improve the efficiency of the accounting department. Write the letter that you think Barb should send. Ethics Case BYPE-5 Roniger Products Company operates three divisions, each with its own manufacturing plant and marketing/sales force. The corporate headquarters and central accounting office are in Roniger, and the plants are in Freeport, Rockport, and Bayport, all within 50 miles of Roniger. Corporate management treats each division as an independent profit center and encourages competition among them. They each have similar but different product lines. As a competitive incentive, bonuses are awarded each year to the employees of the fastest growing and most profitable division. Jose Molina is the manager of Ronigers centralized computer accounting operation that enters the sales transactions and maintains the accounts receivable for all three divisions. Jose came up in the accounting ranks from the Bayport division where his wife, several relatives, and many friends still work. As sales documents are entered into the computer, the originating division is identified by code. Most sales documents (95%) are coded, but some (5%) are not coded or are coded incorrectly. As the manager, Jose has instructed the data-entry personnel to assign the Bayport code to all uncoded and incorrectly coded sales documents. This is done he says, in order to expedite processing and to keep the computer files current since they are updated daily. All receivables and cash collections for all three divisions are handled by Roniger as one subsidiary accounts receivable ledger. Instructions (a) Who are the stakeholders in this situation? (b) What are the ethical issues in this case? (c) How might the system be improved to prevent this situation? Answers to Self-Study Questions 1. a 2. c 3. a 4. c 5. d 6. b 7. c 8. 
c

Appendix F
Other Significant Liabilities

STUDY OBJECTIVES
After studying this appendix, you should be able to:
1. Describe the accounting and disclosure requirements for contingent liabilities.
2. Contrast the accounting for operating and capital leases.
3. Identify additional fringe benefits associated with employee compensation.

In addition to the current and long-term liabilities discussed in Chapter 11, several more types of liabilities may exist that could have a significant impact on a company's financial position and future cash flows. These other significant liabilities are discussed in this appendix. They are: (a) contingent liabilities, (b) lease liabilities, and (c) additional liabilities for employee fringe benefits (paid absences and postretirement benefits).

CONTINGENT LIABILITIES

[STUDY OBJECTIVE 1: Describe the accounting and disclosure requirements for contingent liabilities.]

With notes payable, interest payable, accounts payable, and sales taxes payable, we know that an obligation to make a payment exists. But suppose that your company is involved in a dispute with the Internal Revenue Service (IRS) over the amount of its income tax liability. Should you report the disputed amount as a liability on the balance sheet? Or suppose your company is involved in a lawsuit which, if you lose, might result in bankruptcy. How should you report this major contingency? The answers to these questions are difficult, because these liabilities are dependent (that is, contingent) upon some future event. In other words, a contingent liability is a potential liability that may become an actual liability in the future.

How should companies report contingent liabilities? They use the following guidelines:
1. If the contingency is probable (if it is likely to occur) and the amount can be reasonably estimated, the liability should be recorded in the accounts.
2. If the contingency is only reasonably possible (if it could happen), then it needs to be disclosed only in the notes that accompany the financial statements.
3. If the contingency is remote (if it is unlikely to occur), it need not be recorded or disclosed.

Recording a Contingent Liability

Product warranties are an example of a contingent liability that companies should record in the accounts. Warranty contracts result in future costs that companies may incur in replacing defective units or repairing malfunctioning units. Generally, a manufacturer, such as Black & Decker, knows that it will incur some warranty costs. From prior experience with the product, the company usually can reasonably estimate the anticipated cost of servicing (honoring) the warranty.

The accounting for warranty costs is based on the matching principle. The estimated cost of honoring product warranty contracts should be recognized as an expense in the period in which the sale occurs. To illustrate, assume that in 2008 Denson Manufacturing Company sells 10,000 washers and dryers at an average price of $600 each. The selling price includes a one-year warranty on parts. Denson expects that 500 units (5%) will be defective and that warranty repair costs will average $80 per unit. In 2008, the company honors warranty contracts on 300 units, at a total cost of $24,000. At December 31, it is necessary to accrue the estimated warranty costs on the 2008 sales. Denson computes the estimated warranty liability as follows.
Illustration F-1  Computation of estimated product warranty liability

  Number of units sold                      10,000
  Estimated rate of defective units             5%
  Total estimated defective units              500
  Average warranty repair cost                 $80
  Estimated product warranty liability     $40,000

The company makes the following adjusting entry.

Dec. 31   Warranty Expense                                  40,000
              Estimated Warranty Liability                            40,000
          (To accrue estimated warranty costs)
[Effect: Assets no effect; Liabilities +$40,000; Stockholders' Equity -$40,000 (expense); Cash flows: no effect]

Denson records those repair costs incurred in 2008 to honor warranty contracts on 2008 sales as shown below.

Jan. 1 to Dec. 31   Estimated Warranty Liability            24,000
                        Repair Parts                                  24,000
                    (To record honoring of 300 warranty contracts on 2008 sales)
[Effect: Assets -$24,000; Liabilities -$24,000; Cash flows: no effect]

The company reports warranty expense of $40,000 under selling expenses in the income statement. It classifies the estimated warranty liability of $16,000 ($40,000 - $24,000) as a current liability on the balance sheet.

In the following year, Denson should debit to Estimated Warranty Liability all expenses incurred in honoring warranty contracts on 2008 sales. To illustrate, assume that the company replaces 20 defective units in January 2009, at an average cost of $80 in parts and labor. The summary entry for the month of January 2009 is:

Jan. 31   Estimated Warranty Liability                       1,600
              Repair Parts                                             1,600
          (To record honoring of 20 warranty contracts on 2008 sales)
[Effect: Assets -$1,600; Liabilities -$1,600; Cash flows: no effect]

Disclosure of Contingent Liabilities

When it is probable that a company will incur a contingent liability but it cannot reasonably estimate the amount, or when the contingent liability is only reasonably possible, only disclosure of the contingency is required. Examples of contingencies that may require disclosure are pending or threatened lawsuits and assessment of additional income taxes pending an IRS audit of the tax return. The disclosure should identify the nature of the item and, if known, the amount of the contingency and the expected outcome of the future event. Disclosure is usually accomplished through a note to the financial statements, as illustrated by the following.

Illustration F-2  Disclosure of contingent liability

YAHOO! INC.
Notes to the Financial Statements

Contingencies. From time to time, third parties assert patent infringement claims against the company. Currently the company is engaged in several lawsuits regarding patent issues and has been notified of a number of other potential patent disputes. In addition, from time to time the company is subject to other legal proceedings and claims in the ordinary course of business, including claims for infringement of trademarks, copyrights and other intellectual property rights.... The Company does not believe, based on current knowledge, that any of the foregoing legal proceedings or claims are likely to have a material adverse effect on the financial position, results of operations or cash flows.

The required disclosure for contingencies is a good example of the use of the full-disclosure principle. The full-disclosure principle requires that companies disclose all circumstances and events that would make a difference to financial statement users. Some important financial information, such as contingencies, is not easily reported in the financial statements. Reporting information on contingencies in the notes to the financial statements will help investors be aware of events that can affect the financial health of a company.
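Pulling the Denson figures together in one place (this simply restates the amounts used above; no new data are introduced):

  Estimated warranty expense for 2008:       10,000 units x 5% x $80 = $40,000
  Warranty costs incurred on 2008 sales:        300 units x $80      = $24,000
  Estimated Warranty Liability at Dec. 31:    $40,000 - $24,000      = $16,000

The $16,000 balance is carried into 2009 and reduced as further claims on 2008 sales (such as the $1,600 of January 2009 repairs) are honored.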
LEASE LIABILITIES

[STUDY OBJECTIVE 2: Contrast the accounting for operating and capital leases.]

A lease is a contractual arrangement between a lessor (owner of a property) and a lessee (renter of the property). It grants the right to use specific property for a period of time in return for cash payments. Leasing is big business. U.S. companies leased an estimated $125 billion of capital equipment in a recent year. This represents approximately one-third of equipment financed that year. The two most common types of leases are operating leases and capital leases.

Operating Leases

In an operating lease, the lessee obtains temporary use of the property while ownership of the property remains with the lessor. For example, assume that a sales representative for Western Inc. leases a car from Hertz Car Rental at the Los Angeles airport and that Hertz charges a total of $275. Western, the lessee, records the rental as follows:

          Car Rental Expense                                   275
              Cash                                                       275
          (To record payment of lease rental charge)
[Effect: Assets -$275; Stockholders' Equity -$275 (expense); Cash flows: -$275]

The lessee may incur other costs during the lease period. For example, in the case above, Western will generally incur costs for gas. Western would report these costs as an expense.

Capital Leases

In most lease contracts, the lessee makes a periodic payment and records that payment in the income statement as rent expense. In some cases, however, the lease contract transfers to the lessee substantially all the benefits and risks of ownership. Such a lease is in effect a purchase of the property. This type of lease is a capital lease. Its name comes from the fact that the company capitalizes the present value of the cash payments for the lease and records that amount as an asset. Illustration F-3 indicates the major difference between operating and capital leases.

Illustration F-3  Types of leases. In an operating lease ("Have it back by 6:00 Sunday"), the lessor has substantially all of the benefits and risks of ownership. In a capital lease ("Only 3 more payments and this baby is ours!"), the lessee has substantially all of the benefits and risks of ownership.

HELPFUL HINT: A capital lease situation is one that, although legally a rental case, is in substance an installment purchase by the lessee. Accounting standards require that substance over form be used in such a situation.

If any one of the following conditions exists, the lessee must record a lease as an asset, that is, as a capital lease:
1. The lease transfers ownership of the property to the lessee. Rationale: If during the lease term the lessee receives ownership of the asset, the lessee should report the leased asset as an asset on its books.
2. The lease contains a bargain purchase option. Rationale: If during the term of the lease the lessee can purchase the asset at a price substantially below its fair market value, the lessee will exercise this option. Thus, the lessee should report the lease as a leased asset on its books.
3. The lease term is equal to 75% or more of the economic life of the leased property. Rationale: If the lease term is for much of the asset's useful life, the lessee should report the asset as a leased asset on its books.
4. The present value of the lease payments equals or exceeds 90% of the fair market value of the leased property. Rationale: If the present value of the lease payments is equal to or almost equal to the fair market value of the asset, the lessee has essentially purchased the asset. As a result, the lessee should report the leased asset as an asset on its books.
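Conditions 3 and 4 can be restated as simple ratio tests (a compact restatement of the list above, using informal symbols rather than anything defined in the text). A lease is capitalized when

  lease term / economic life of the property            >= 75%,   or
  present value of lease payments / fair market value   >= 90%,

or when condition 1 (transfer of ownership) or condition 2 (bargain purchase option) applies. The Gonzalez Company example that follows applies exactly these tests.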
To illustrate, assume that Gonzalez Company decides to lease new equipment. The lease period is four years; the economic life of the leased equipment is estimated to be five years. The present value of the lease payments is $190,000, which is equal to the fair market value of the equipment. There is no transfer of ownership during the lease term, nor is there any bargain purchase option.

In this example, Gonzalez has essentially purchased the equipment. Conditions 3 and 4 have been met. First, the lease term is 75% or more of the economic life of the asset. Second, the present value of the cash payments is equal to the equipment's fair market value. Gonzalez records the transaction as follows.

          Leased Asset-Equipment                           190,000
              Lease Liability                                        190,000
          (To record leased asset and lease liability)
[Effect: Assets +$190,000; Liabilities +$190,000; Cash flows: no effect]

The lessee reports a leased asset on the balance sheet under plant assets. It reports the lease liability on the balance sheet as a liability. The portion of the lease liability expected to be paid in the next year is a current liability. The remainder is classified as a long-term liability.

Most lessees do not like to report leases on their balance sheets. Why? Because the lease liability increases the company's total liabilities. This, in turn, may make it more difficult for the company to obtain needed funds from lenders. As a result, companies attempt to keep leased assets and lease liabilities off the balance sheet by structuring leases so as not to meet any of the four conditions mentioned on page F4. The practice of keeping liabilities off the balance sheet is referred to as off-balance-sheet financing.

ETHICS NOTE: Accounting standard setters are attempting to rewrite rules on lease accounting because of concerns that abuse of the current standards is reducing the usefulness of financial statements.

ADDITIONAL LIABILITIES FOR EMPLOYEE FRINGE BENEFITS

[STUDY OBJECTIVE 3: Identify additional fringe benefits associated with employee compensation.]

In addition to the three payroll tax fringe benefits discussed in Appendix D (FICA taxes and state and federal unemployment taxes), employers incur other substantial fringe benefit costs. Indeed, fringe benefits have been growing faster than pay. In a recent year, benefits equaled 38 percent of wages and salaries. While vacations and other forms of paid leave still take the biggest bite out of the benefits pie, as shown in Illustration F-4, medical costs are the fastest-growing item.

Illustration F-4  The fringe benefits pie (BENEFITS):
  37%  Vacation and other benefits such as parental and sick leaves, child care
  24%  Medical benefits
  23%  Legally required benefits such as Social Security
  13%  Retirement income such as pensions
   3%  Disability and life insurance

We discuss two of the most important fringe benefits, paid absences and postretirement benefits, in this section.

Paid Absences

Employees often are given rights to receive compensation for absences when certain conditions of employment are met. The compensation may be for paid vacations, sick pay benefits, and paid holidays. When the payment for such absences is probable and the amount can be reasonably estimated, a liability should be accrued for paid future absences. When the amount cannot be reasonably estimated, companies should instead disclose the potential liability. Ordinarily, vacation pay is the only paid absence that is accrued.
The other types of paid absences are only disclosed.(1)

To illustrate, assume that Academy Company employees are entitled to one day's vacation for each month worked. If 30 employees earn an average of $110 per day in a given month, the accrual for vacation benefits in one month is $3,300. The liability is recognized at the end of the month by the following adjusting entry.

Jan. 31   Vacation Benefits Expense                          3,300
              Vacation Benefits Payable                                3,300
          (To accrue vacation benefits expense)
[Effect: Liabilities +$3,300; Stockholders' Equity -$3,300 (expense); Cash flows: no effect]

This accrual is required by the matching principle. Academy would report Vacation Benefits Expense as an operating expense in the income statement, and Vacation Benefits Payable as a current liability in the balance sheet.

Later, when Academy pays vacation benefits, it debits Vacation Benefits Payable and credits Cash. For example, if the above benefits for 10 employees are paid in July, the entry is:

July 31   Vacation Benefits Payable                          1,100
              Cash                                                     1,100
          (To record payment of vacation benefits)
[Effect: Assets -$1,100; Liabilities -$1,100; Cash flows: -$1,100]

The magnitude of unpaid absences has gained employers' attention. Consider the case of an assistant superintendent of schools who worked for 20 years and rarely took a vacation or sick day. A month or so before she retired, the school district discovered that she was due nearly $30,000 in accrued benefits. Yet the school district had never accrued the liability.

(1) The typical U.S. company provides an average of 12 days of paid vacation for its employees, at an average cost of 5% of gross earnings.

Postretirement Benefits

Postretirement benefits are benefits provided by employers to retired employees for (1) health care and life insurance and (2) pensions. For many years the accounting for postretirement benefits was on a cash basis. Companies now account for both types of postretirement benefits on the accrual basis.

The cost of postretirement benefits is getting steep. For example, General Motors' pension and health-care costs for retirees in a recent year totaled $6.2 billion, or approximately $1,784 per vehicle produced.

The average American has debt of approximately $10,000 (not counting the mortgage on their home) and has little in the way of savings. What will happen at retirement for these people? The picture is not pretty: people are living longer, the future of Social Security is unclear, and companies are cutting back on postretirement benefits. This situation may lead to one of the great social and moral dilemmas this country faces in the next 40 years. The more you know about postretirement benefits, the better you will understand the issues involved in this dilemma.

POSTRETIREMENT HEALTH-CARE AND LIFE INSURANCE BENEFITS

Providing medical and related health-care benefits for retirees was at one time an inexpensive and highly effective way of generating employee goodwill. This practice has now turned into one of corporate America's most worrisome financial problems. Runaway medical costs, early retirement, and increased longevity are sending the liability for retiree health plans through the roof.

Many companies began offering retiree health-care coverage in the form of Medicare supplements in the 1960s. Almost all plans operated on a pay-as-you-go basis. The companies simply paid for the bills as they came in, rather than setting aside funds to meet the cost of future benefits. These plans were accounted for on the cash basis.
But, the FASB concluded that shareholders and creditors should know the amount of the employers obligations. As a result, employers must now use the accrual basis in accounting for postretirement health-care and life insurance benefits. PENSION PLANS A pension plan is an agreement whereby an employer provides benefits (payments) to employees after they retire. Over 50 million workers currently participate in pension plans in the United States. The need for good accounting for pension plans becomes apparent when one appreciates the size of existing pension funds. Most pension plans are subject to the provisions of ERISA (Employee Retirement Income Security Act), a law enacted to curb abuses in the administration and funding of such plans. Three parties are generally involved in a pension plan. The employer (company) sponsors the pension plan. The plan administrator receives the contributions from the employer, invests the pension assets, and makes the benefit payments to the pension recipients (retired employees). Illustration F-5 indicates the flow of cash among the three parties involved in a pension plan. Illustration F-5 Parties in a pension plan Employer Contributions Plan Administrator Kear Trust Co. Pension Recipients Benefits Fund Assets: Investments and Earnings An employer-financed pension is part of the employees compensation. ERISA establishes the minimum contribution that a company must make each year toward employee pensions. The most popular type of pension plan used is the 401(k) plan. A 401(k) plan works as follows: As an employee, you can contribute up to a certain percentage of your pay into a 401(k) plan, and your employer will match a percentage of your contribution.These contributions are then generally invested in stocks and bonds through mutual funds. These funds will grow without being taxed and can be withdrawn beginning at age 59-1/2. If you must access the funds earlier, you may be able to do so, but a penalty usually occurs along with a payment of tax F8 Appendix F Other Significant Liabilities on the proceeds. Any time you have the opportunity to be involved in a 401(k) plan, you should avail yourself of this benefit! Companies record pension costs as an expense while the employees are working because that is when the company receives benefits from the employees services. Generally the pension expense is reported as an operating expense in the companys income statement. Frequently, the amount contributed by the company to the pension plan is different from the amount of the pension expense. A liability is recognized when the pension expense to date is more than the companys contributions to date. An asset is recognized when the pension expense to date is less than the companys contributions to date. Further consideration of the accounting for pension plans is left for more advanced courses. The two most common types of pension arrangements for providing benefits to employees after they retire are defined-contribution plans and defined-benefit plans. Defined-Contribution Plan. In a defined-contribution plan, the plan defines the employers contribution but not the benefit that the employee will receive at retirement. That is, the employer agrees to contribute a certain sum each period based on a formula. A 401(k) plan is typically a defined-contribution plan. The accounting for a defined-contribution plan is straightforward: The employer simply makes a contribution each year based on the formula established in the plan. 
As a result, the employers obligation is easily determined. It follows that the company reports the amount of the contribution required each period as pension expense. The employer reports a liability only if it has not made the contribution in full. To illustrate, assume that Alba Office Interiors Corp. has a defined-contribution plan in which it contributes $200,000 each year to the pension fund for its employees. The entry to record this transaction is: Pension Expense Cash (To record pension expense and contribution to pension fund) 200,000 200,000 A 200,000 Cash Flows 200,000 L SE 200,000 To the extent that Alba did not contribute the $200,000 defined contribution, it would record a liability. Pension payments to retired employees are made from the pension fund by the plan administrator. Defined-Benefit Plan. In a defined-benefit plan, the benefits that the employee will receive at the time of retirement are defined by the terms of the plan. Benefits are typically calculated using a formula that considers an employees compensation level when he or she nears retirement and the employees years of service. Because the benefits in this plan are defined in terms of uncertain future variables, an appropriate funding pattern is established to ensure that enough funds are available at retirement to meet the benefits promised. This funding level depends on a number of factors such as employee turnover, length of service, mortality, compensation levels, and investment earnings. The proper accounting for these plans is complex and is considered in more advanced accounting courses. POSTRETIREMENT BENEFITS AS LONG-TERM LIABILITIES While part of the liability associated with (1) postretirement health-care and life insurance benefits and (2) pension plans is generally a current liability, the greater portion of these liabilities extends many years into the future. Therefore, many companies are required to report significant amounts as long-term liabilities for postretirement benefits. Self-Study Questions F9 Before You Go On... REVIEW IT 1. What is a contingent liability? 2. How are contingent liabilities reported in financial statements? 3. What accounts are involved in accruing and paying vacation benefits? 4. What basis should be used in accounting for postretirement benefits? SUMMARY OF STUDY OBJECTIVES 1 Describe the accounting and disclosure requirements for contingent liabilities. If it is probable that the contingency will happen (if it is likely to occur) and the amount can be reasonably estimated, the liability should be recorded in the accounts. If the contingency is only reasonably possible (it could occur), then it should be disclosed only in the notes to the financial statements. If the possibility that the contingency will happen is remote (unlikely to occur), it need not be recorded or disclosed. 2 Contrast the accounting for operating and capital leases. For an operating lease, lease (or rental) payments are recorded as an expense by the lessee (renter). For a capital lease, the lessee records the asset and related obligation at the present value of the future lease payments. 3 Identify additional fringe benefits associated with employee compensation. Additional fringe benefits associated with wages are paid absences (paid vacations, sick pay benefits, and paid holidays), postretirement health care and life insurance, and pensions. The two most common types of pension arrangements are a defined-contribution plan and a defined-benefit plan. 
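Before turning to the glossary, the liability/asset rule for pensions described above can be written as a one-line test (a restatement of the rule, not an additional requirement):

  Pension liability = pension expense to date - contributions to date, when expense exceeds contributions;
  a pension asset arises instead when contributions to date exceed pension expense to date.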
GLOSSARY Capital lease A contractual arrangement that transfers substantially all the benefits and risks of ownership to the lessee so that the lease is in effect a purchase of the property. (p. F4). Contingent liability A potential liability that may become an actual liability in the future. (p. F1). Defined-benefit plan A pension plan in which the benefits that the employee will receive at retirement are defined by the terms of the plan. (p. F8). Defined-contribution plan A pension plan in which the employers contribution to the plan is defined by the terms of the plan. (p. F8). Lease A contractual arrangement between a lessor (owner of a property) and a lessee (renter of the property). (p. F3). Operating lease A contractual arrangement giving the lessee temporary use of the property, with continued ownership of the property by the lessor. (p. F3). Pension plan An agreement whereby an employer provides benefits to employees after they retire. (p. F7). Postretirement benefits Payments by employers to retired employees for health care, life insurance, and pensions. (p. F6). SELF-STUDY QUESTIONS (SO 1) Answers are at the end of the appendix. 1. A contingency should be recorded in the accounts when: a. It is probable the contingency will happen but the amount cannot be reasonably estimated. b. It is reasonably possible the contingency will happen and the amount can be reasonably estimated. c. It is reasonably possible the contingency will happen but the amount cannot be reasonably estimated. d. It is probable the contingency will happen and the amount can be reasonably estimated. 2. At December 31, Anthony Company prepares an adjust- (SO 1) ing entry for a product warranty contract. Which of the following accounts are included in the entry? a. Warranty Expense. b. Estimated Warranty Liability. c. Repair Parts/Wages Payable. d. Both (a) and (b). 3. Lease A does not contain a bargain purchase option, but (SO 2) the lease term is equal to 90 percent of the estimated economic life of the leased property. Lease B does not F10 Appendix F Other Significant Liabilities 4. Which of the following is not an additional fringe benefit? (SO 3) a. Salaries. b. Paid absences. c. Paid vacations. d. Postretirement pensions. transfer ownership of the property to the lessee by the end of the lease term, but the lease term is equal to 75 percent of the estimated economic life of the lease property. How should the lessee classify these leases? Lease A a. b. c. d. Operating lease Operating lease Capital lease Capital lease Lease B Capital lease Operating lease Capital lease Operating lease QUESTIONS 1. What is a contingent liability? Give an example of a contingent liability that is usually recorded in the accounts. 2. Under what circumstances is a contingent liability disclosed only in the notes to the financial statements? Under what circumstances is a contingent liability not recorded in the accounts nor disclosed in the notes to the financial statements? 3. (a) What is a lease agreement? (b) What are the two most common types of leases? (c) Distinguish between the two types of leases. 4. Orbison Company rents a warehouse on a month-tomonth basis for the storage of its excess inventory. The company periodically must rent space when its production greatly exceeds actual sales. What is the nature of this type of lease agreement, and what accounting treatment should be accorded it? 5. Costello Company entered into an agreement to lease 12 computers from Estes Electronics Inc. 
The present value of the lease payments is $186,300. Assuming that this is a capital lease, what entry would Costello Company make on the date of the lease agreement? 6. Identify three additional types of fringe benefits associated with employees compensation. 7. Often during job interviews, the candidate asks the potential employer about the firms paid absences policy. What are paid absences? How are they accounted for? 8. What are the two types of postretirement benefits? During what years does the FASB advocate expensing the employers costs of these postretirement benefits? 9. What basis of accounting for the employers cost of postretirement health-care and life insurance benefits has been used by most companies, and what basis does the FASB advocate in the future? Explain the basic difference between these methods in recognizing postretirement benefit costs. 10. Identify the three parties in a pension plan. What role does each party have in the plan? 11. Brenna Ottare and Caitlin Wilkes are reviewing pension plans. They ask your help in distinguishing between a defined-contribution plan and a defined-benefit plan. Explain the principal difference to Brenna and Caitlin. Go to the books website,, for Additional Self-Study questions. BRIEF EXERCISES Prepare adjusting entry for warranty costs. (SO 1) Prepare entries for operating and capital leases. (SO 2) BEF-1 On December 1, Vina Company introduces a new product that includes a 1-year warranty on parts. In December 1,000 units are sold. Management believes that 5% of the units will be defective and that the average warranty costs will be $60 per unit. Prepare the adjusting entry at December 31 to accrue the estimated warranty cost. BEF-2 Prepare the journal entries that the lessee should make to record the following transactions. 1. 2. The lessee makes a lease payment of $80,000 to the lessor in an operating lease transaction. Zander Company leases a new building from Joel Construction, Inc.The present value of the lease payments is $900,000. The lease qualifies as a capital lease. Record estimated vacation benefits. (SO 3) BEF-3 In Alomar Company, employees are entitled to 1 days vacation for each month worked. In January, 50 employees worked the full month. Record the vacation pay liability for January assuming the average daily pay for each employee is $120. Exercises: Set B F11 EXERCISES EF-1 Boone Company sells automatic can openers under a 75-day warranty for defective merchandise. Based on past experience, Boone Company estimates that 3% of the units sold will become defective during the warranty period. Management estimates that the average cost of replacing or repairing a defective unit is $15. The units sold and units defective that occurred during the last 2 months of 2006 are as follows. Month November December Units Sold 30,000 32,000 Units Defective Prior to December 31 600 400 Record estimated liability and expense for warranties. (SO 1) Instructions (a) Determine the estimated warranty liability at December 31 for the units sold in November and December. (b) Prepare the journal entries to record the estimated liability for warranties and the costs (assume actual costs of $15,000) incurred in honoring 1,000 warranty claims. (c) Give the entry to record the honoring of 500 warranty contracts in January at an average cost of $15. 
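As a worked cross-check on the idea tested in BEF-1 and EF-1 above, the warranty accrual in BEF-1 follows the appendix's arithmetic directly (a sketch of the approach, not an official solution):

  Estimated defective units:   1,000 units sold x 5% = 50 units
  Estimated warranty cost:     50 units x $60        = $3,000

so the December 31 adjusting entry would debit Warranty Expense and credit Estimated Warranty Liability for $3,000.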
EF-2 Larkin Online Company has the following liability accounts after posting adjusting entries: Accounts Payable $63,000, Unearned Ticket Revenue $24,000, Estimated Warranty Liability $18,000, Interest Payable $8,000, Mortgage Payable $120,000, Notes Payable $80,000, and Sales Taxes Payable $10,000. Assume the companys operating cycle is less than 1 year, ticket revenue will be earned within 1 year, warranty costs are expected to be incurred within 1 year, and the notes mature in 3 years. Instructions (a) Prepare the current liabilities section of the balance sheet, assuming $40,000 of the mortgage is payable next year. (b) Comment on Larkin Online Companys liquidity, assuming total current assets are $300,000. EF-3 1. 2. Presented below are two independent situations. Speedy Car Rental leased a car to Rundgren Company for 1 year. Terms of the operating lease agreement call for monthly payments of $500. On January 1, 2008, Miles Inc. entered into an agreement to lease 20 computers from Halo Electronics. The terms of the lease agreement require three annual rental payments of $40,000 (including 10% interest) beginning December 31, 2008. The present value of the three rental payments is $99,474. Miles considers this a capital lease. Prepare journal entries for operating lease and capital lease. (SO 2) Prepare the current liabilities section of the balance sheet. (SO 1) Instructions (a) Prepare the appropriate journal entry to be made by Rundgren Company for the first lease payment. (b) Prepare the journal entry to record the lease agreement on the books of Miles Inc. on January 1, 2008. EF-4 Bunill Company has two fringe benefit plans for its employees: 1. It grants employees 2 days vacation for each month worked. Ten employees worked the entire month of March at an average daily wage of $80 per employee. 2. It has a defined contribution pension plan in which the company contributes 10% of gross earnings. Gross earnings in March were $30,000. The payment to the pension fund has not been made. Instructions Prepare the adjusting entries at March 31. llege Prepare adjusting entries for fringe benefits. (SO 3) /w eygand t Visit the books website at, and choose the Student Companion site, to access Exercise Set B. .w i l e y. c o EXERCISES: SET B www m /co F12 Appendix F Other Significant Liabilities PROBLEMS: SET A Prepare current liability entries, adjusting entries, and current liabilities section. (SO 1) PF-1A On January 1, 2008, the ledger of Shumway Software Company contains the following liability accounts. Accounts Payable $42,500 Sales Taxes Payable 5,800 Unearned Service Revenue 15,000 During January the following selected transactions occurred. Jan. 1 Borrowed $15,000 in cash from Amsterdam Bank on a 4-month, 8%, $15,000 note. 5 Sold merchandise for cash totaling $10,400 which includes 4% sales taxes. 12 Provided services for customers who had made advance payments of $9,000. (Credit Service Revenue.) 14 Paid state treasurers department for sales taxes collected in December 2007 ($5,800). 20 Sold 700 units of a new product on credit at $52 per unit, plus 4% sales tax. This new product is subject to a 1-year warranty. 25 Sold merchandise for cash totaling $12,480, which includes, 2008. Assume no change in accounts payable. Analyze three different lease situations and prepare journal entries. (SO 2) PF-2A Presented below are three different lease transactions in which Ortiz Enterprises engaged in 2008. Assume that all lease transactions start on January 1, 2008. 
In no case does Ortiz receive title to the properties leased during or at the end of the lease term. Lessor Schoen Inc. Type of property Bargain purchase option Lease term Estimated economic life Yearly rental Fair market value of leased asset Present value of the lease rental payments Bulldozer None 4 years 8 years $13,000 $80,000 $48,000 Casey Co. Truck None 6 years 7 years $15,000 $72,000 $62,000 Lester Inc. Furniture None 3 years 5 years $4,000 $27,500 $12,000 Instructions (a) Identify the leases above as operating or capital leases. Explain. (b) How should the lease transaction with Casey Co. be recorded on January 1, 2008? (c) How should the lease transactions for Lester Inc. be recorded in 2008? PROBLEMS: SET B Prepare current liability entries, adjusting entries, and current liabilities section. (SO 1) PF-1B On January 1, 2008, the ledger of Zaur Company contains the following liability accounts. Accounts Payable $52,000 Sales Taxes Payable 7,700 Unearned Service Revenue 16,000 During January the following selected transactions occurred. Jan. 5 Sold merchandise for cash totaling $17,280, which includes 8% sales taxes. 12 Provided services for customers who had made advance payments of $10,000. (Credit Service Revenue.) Broadening Your Perspective 14 Paid state revenue department for sales taxes collected in December 2007 ($7,700). 20 Sold 600 units of a new product on credit at $50 per unit, plus 8% sales tax. This new product is subject to a 1-year warranty. 21 Borrowed $18,000 from UCLA Bank on a 3-month, 9%, $18,000 note. 25 Sold merchandise for cash totaling $12,420, which includes 8% sales taxes. UCLA Bank note.) (c) Prepare the current liabilities section of the balance sheet at January 31, 2008. Assume no change in accounts payable. PF-2B Presented below are three different lease transactions that occurred for Milo Inc. in 2008. Assume that all lease contracts start on January 1, 2008. In no case does Milo receive title to the properties leased during or at the end of the lease term. Lessor Gibson Delivery Type of property Yearly rental Lease term Estimated economic life Fair market value of leased asset Present value of the lease rental payments Bargain purchase option Computer $ 8,000 6 years 7 years $44,000 $41,000 None Eller Co. Delivery equipment $ 4,200 4 years 7 years $19,000 $13,000 None Louis Auto Automobile $ 3,700 2 years 5 years $11,000 $6,400 None F13 Analyze three different lease situations and prepare journal entries. (SO 2) Instructions (a) Which of the leases above are operating leases and which are capital leases? Explain. (b) How should the lease transaction with Eller Co. be recorded in 2008? (c) How should the lease transaction for Gibson Delivery be recorded on January 1, 2008? llege /w eygand t Visit the books website at w ww.wiley.com/college/weygandt , and choose the Student Companion site, to access Problem Set C. BROADENING YOUR PERSPECTIVE FINANCIAL REPORTING AND ANALYSIS Financial Reporting Problems BYPF-1 Refer to the financial statements of PepsiCo and the Notes to Consolidated Financial Statements in Appendix A to answer the following questions about contingent liabilities, lease liabilities, and pension costs. (a) Where does PepsiCo report its contingent liabilities? (b) What is managements opinion as to the ultimate effect of the various claims and legal proceedings pending against the company? (c) Where did PepsiCo report the details of its lease obligations? What amount of rent expense from operating leases did PepsiCo incur in 2005? 
What was PepsiCo's total future minimum annual rental commitment under noncancelable operating leases as of December 31, 2005? (d) What type of employee pension plan does PepsiCo have? (e) What is the amount of postretirement benefit expense (other than pensions) for 2005?

BYPF-2 Presented below is the lease portion of the notes to the financial statements of CF Industries, Inc.

CF INDUSTRIES, INC.
Notes to the Financial Statements

Leases
The present value of future minimum capital lease payments and the future minimum lease payments under noncancelable operating leases at December 31, 2006, are:

(in millions)                        Capital Lease    Operating Lease
                                        Payments           Payments
2007                                    $ 7,733             $3,067
2008                                      6,791              2,052
2009                                      6,730              1,056
2010                                      6,788                918
2011                                      6,785                 86
Thereafter                               13,441                  6
Future minimum lease payments            48,268             $7,185
Less: Equivalent interest                11,391
Present value                            36,877
Less: Current portion                     5,570
                                        $31,307

Rent expense for operating leases was $7.0 million for the year ended December 31, 2006, $5.3 million for 2005, and $5.6 million for 2004.

Instructions
What type of leases does CF Industries, Inc. use? What is the amount of the current portion of the capital lease obligation?

CRITICAL THINKING

Decision Making Across the Organization

BYPF-3 Presented below is the condensed balance sheet for Express, Inc. as of December 31, 2008.

EXPRESS, INC.
Balance Sheet
December 31, 2008

Current assets    $  800,000    Current liabilities      $1,200,000
Plant assets       1,600,000    Long-term liabilities       700,000
                                Common stock                400,000
                                Retained earnings           100,000
Total             $2,400,000    Total                    $2,400,000

Express has decided that it needs to purchase a new crane for its operations. The new crane costs $900,000 and has a useful life of 15 years. However, Express's bank has refused to provide any help in financing the purchase of the new equipment, even though Express is willing to pay an above-market interest rate for the financing. The chief financial officer for Express, Lisa Colder, has discussed with the manufacturer of the crane the possibility of a lease agreement. After some negotiation, the crane manufacturer agrees to lease the crane to Express under the following terms: length of the lease 7 years; payments $100,000 per year. The present value of the lease payments is $548,732.

The board of directors at Express is delighted with this new lease. They reason they have the use of the crane for the next 7 years. In addition, Lisa Colder notes that this type of financing is a good deal because it will keep debt off the balance sheet.

Instructions
With the class divided into groups, answer the following.
(a) Why do you think the bank decided not to lend money to Express, Inc.?
(b) How should this lease transaction be reported in the financial statements?
(c) What did Lisa Colder mean when she said leasing will keep debt off the balance sheet?

Answers to Self-Study Questions
1. d  2. d  3. c  4. a
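As a postscript to BYPF-3, the capitalization thresholds from the appendix can be lined up against the Express, Inc. numbers (arithmetic only; the discussion questions above are left to the groups, and the $900,000 cost of the crane is taken here as its fair market value):

  Lease term / useful life:             7 years / 15 years   = approximately 47% (below the 75% threshold)
  PV of payments / fair market value:   $548,732 / $900,000  = approximately 61% (below the 90% threshold)

With no transfer of ownership and no bargain purchase option mentioned, none of the four conditions appears to be met, which is why the arrangement would be treated as an operating lease and why Lisa Colder expects the related debt to stay off the balance sheet.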
operations, 209214 classified balance sheet, 213, 214 multiple-step income statement, 209213 single-step income statement, 213, 214 operating guidelines for preparation of, 297 for PepsiCo, Inc., A1A30 preparing, from adjusted trial balance, 113114, 116 preparing, from worksheets, 148, 152, 153 and quality of earnings, 721722, 724726 ratio analysis of, 705717 receivables on, 403404 retained earnings on, 564566 retained earnings statement, 562563 tools for, 699 vertical analysis of, 703705 Financing activities, cash inflow/outflow from, 639, 640 direct method, 670671 indirect method, 651652 Finished goods inventory, 246 First-in, first-out (FIFO) method, 252253, 266 Fiscal year, 95 Fixed assets, 426. See also Plant assets Fixed-rate mortgages, 493 FOB (free on board), 200, 248 FOB destination, 200, 248, 249 FOB shipping point, 200, 248, 249 Ford, Henry, 533534 For Deposit Only, 350 Forensic accounting, 30 Form W-2 (Wage and Tax Statement), D13D14 Form W-4 (Employees Withholding Allowance Certificate), D6 Franchises, 445 Free Application for Federal Student Aid (FAFSA) form, 25 Free cash flow, 654655 Free on board (FOB), 200, 248 Freight costs, 200201 Fringe benefits, liabilities for, F5F8 Full disclosure principle, 301, F3 FUTA (Federal Unemployment Tax Act), D11 Future value, C3C7 of an annuity, C5C7 of a single amount, C3C5 G GAAP, see Generally accepted accounting principles Geneen, Harold, 2 General journal, 54, E16E17 General ledger (ledger), 5761 Generally accepted accounting principles (GAAP), 9, 294 and allowance method, 389 and alternative accounting methods, 721, 722 and cash-basis accounting, 95 and materiality, 303 Global economy, and financial statement presentation, 312313 Going concern assumption, 298299, 431 Goods in transit, 248249 Goodwill, 445446 Government, accounting career opportunities in, 30 Government regulation, of corporations, 537 Gross earnings, D4 Gross profit, 210 Gross profit method (for estimating inventories), 270271 Gross profit rate, 210 H Health insurance, cost of, 496 Held-to-maturity securities, 605 Hiring employees, D2 Home-equity loans, 567 Honor (of notes receivable), 401 Horizontal analysis, 699703 of balance sheet, 700701 of income statement, 701702 of retained earnings statement, 702703 Human resources (HR), 344, D2 I-5 I IASB, see International Accounting Standards Board Identity theft, 364 Imprest system, 353 Improper recognition, 722 Improvements: additions and, 438 land, 427428 Income: comprehensive, 610, 720 pro forma, 722 Income statement, 21, 22 classified, 306308 consolidated, 618 effects of cost flow methods on, 255257 effects of inventory errors on, 259260 horizontal analysis of, 701702 for merchandising operations, 209214 multiple-step income statement, 209213 single-step income statement, 213, 214 vertical analysis of, 703705 Income taxes (income taxation): on classified income statement, 306307 of corporations, 537 and depreciation of plant assets, 436 effects of cost flow methods on, 257 payroll deduction for, D6 remitting, D13 Independent internal verification, 344346 Indirect method (of preparing statement of cash flows), 643654 investing/financing activities, 651652 net change in cash, 652654 operating activities, cash provided/used by, 646650 worksheets, using, 659664 Industry averages (norms), 699 Information, accounting, 295297 Insurance, as prepaid expense, 100 Intangible assets, 443447 accounting for, 443446 amortization of, 443 on classified balance sheet, 164 copyrights, 444 exchange of, 452454 gain treatment, 453454 loss 
treatment, 452453 franchises and licenses, 445 goodwill, 445446 patents, 444 research and development costs, 446 statement presentation/analyis of, 446447 trademarks and trade names, 444 Intercompany comparisons, 699 Intercompany eliminations, 615, 616, 618 Intercompany transactions, 615, 618 Interest, C1C3 accrued, 107108 on checking accounts, 358 compound, C2C3 defined, C1 on notes receivable, 400 simple, C1C2 Interest rate, C1 Interim periods, 95 Internal auditors, 345346 Internal control(s), 340347 defined, 340 and documentation procedures, 344 and establishment of responsibility, 341, 342 and independent internal verification, 344346 limitations of, 346347 for payroll, D1D4 physical/mechanical/electronic controls, 344, 345 and Sarbanes-Oxley Act, 341 and segregation of duties, 342343 I-6 Subject Index J JIT (just-in-time) inventory, 247 Johnson, Matthew, 715 Journal, 5457 Journalizing, 5455, 6768, 110111 Just-in-time (JIT) inventory, 247 K Knight, Phil, 4 L Land, 427 Land improvements, 427428 Large stock dividend, 556 Last-in, first-out (LIFO) method, 253254, 267 LCM (lower-of-cost-or-market), 258 Leases, F3F5 capital, F4F5 operating, F3F4 Lease liabilities, F3F5 Ledger, see General ledger Legal capital, 541 Letter to the stockholders, A2A3 Leverage, 712 Leveraging, 712 Liabilities, 12, 472499 contingent, F1F3 current, 474482 long-term debt, current maturities of, 479480 notes payable, 475476 payroll and payroll taxes payable, 476478 sales taxes payable, 476 statement presentation/analysis of, 480482 unearned revenues, 479 in double-entry system, 49 for employee fringe benefits, F5F8 environmental, 115 lease, F3F5 long-term, 482495, 497498 bonds, 482492, 500513 notes payable, long-term, 492493 statement presentation/analysis of, 494495, 497498 Licenses, 445 LIFO conformity rule, 257 LIFO method, see Last-in, first-out method Limited liability, of corporate stockholders, 535 Liquidating dividend, 553 Liquidation preference, 552 Liquidity, 309310, 481 Liquidity ratios, 706710 acid-test ratio, 707708 current ratio, 706707 inventory turnover, 709710 receivables turnover, 708709 Long-term debt, current maturities of, 479480 Long-term debt due within one year, 480 Long-term investments, 163, 608, 609 Long-term liabilities, 482495, 497498 bonds, 482492, 500513 on classified balance sheet, 166 notes payable, long-term, 492493 postretirement benefits as, F8 present value of, C12C15 statement presentation/analysis of, 494495, 497498 Long-term notes payable, 492493 Lower-of-cost-or-market (LCM), 258 Lucas, George, 96 M MACRS (Modified Accelerated Cost Recovery System), 436 Mail receipts, 350351 Maker, 398 Management (of corporation), 536 Management consulting, as area of public accounting, 29 Managements discussion and analysis (MD&A), A3 Managerial accounting, 6, 29 Market interest rate, 486, 488 Market value: book value vs., 102, 572 of stock, 541 Marshall, John, 534 Matching principle, 9596, 300 Materiality (materiality principle), 303, 438 Maturity date (of promissory note), 399 MD&A (managements discussion and analysis), A3 Mechanical controls, 344, 345 Medicare, D5n.1 Merchandising operations, 194224 completing the accounting cycle for, 206208 adjusting entries, 207 closing entries, 207 cost of goods sold in, 216218 financial statements for, 209214 classified balance sheet, 213, 214 multiple-step income statement, 209213 single-step income statement, 213, 214 inventory systems in, 197199 periodic system, 198 perpetual system, 197198, 219222 operating cycles in, 196197 recording purchases 
of merchandise in, 199203 freight costs, 200201 purchase discounts, 201202 purchase returns and allowances, 201 recording sales of merchandise in, 203205 sales discounts, 205 sales returns and allowances, 204205 Merchandising profit, 210 Mintenko, Stephanie, 338339 MNCs (multinational corporations), 312 Modified Accelerated Cost Recovery System (MACRS), 436 Monetary unit assumption, 10, 298 Mortgage bonds, 483 Mortgage loans, calculating, C17 Mortgage notes payable, 493 Multinational corporations (MNCs), 312 Multiple-step income statement, 209213 N Natural resources, 442443 Net change in cash: direct method, 671 indirect method, 652654 Net pay, D7 Net (cash) realizable value, 389, 400 Net sales, 209210 Net worth, 167 Noncash activities, significant, 640641 Noncash current assets, changes in, 647648 Nonoperating activities, 211213 No-par-value stock, 542, 544545 Norms, industry, 699 Normal balance, 50 Notes payable, 475476 Notes receivable, 398403 computing interest for, 400 defined, 386 disposing of, 401402 maturity date of, 399 recognizing, 400 valuing, 400401 Not-for-profit corporations, 534 NSF (not sufficient funds), 358 Internal Revenue Service (IRS), 436 Internal transactions, 14 Internal users of accounting data, 6 International Accounting Standards Board (IASB), 9, 313 Intracompany comparisons, 699 Inventory(-ies), 244272 classification of, 246247 costing of: average-cost method for, 254255, 267268 balance sheet effects, 257 and consistency principle, 257 and cost flow assumption, 251252 FIFO method for, 252253, 266 financial statement effects, 255257 LIFO method for, 253254, 267 lower-of-cost-or-market method for, 258 and quality of earnings, 721 specific identification method for, 250251 tax effects, 257 days in, 262 determining quantities of, 247249 and ownership of goods, 248249 physical inventory, 247248 errors in, 259261 balance sheet effects, 260261 income statement effects, 259260 estimating, 269272 gross profit method for, 270271 retail inventory method for, 271272 finished goods, 246 just-in-time, 247 in merchandising operations, 197199, 219222 periodic inventory system, 198 perpetual inventory systems, 197198, 219222, 266269 statement presentation and analysis of, 261262, 264 taking, 247248 theft of, 263 Inventory turnover, 261262, 709710 Investee, 600 Investing activities, cash inflow/outflow from, 639, 640 direct method, 670671 indirect method, 651652 Investments, 594614 debt, 598599 purchase of, by corporations, 596597 short- vs. 
long-term, 608609 stock, 600605 between 20% and 50%, holdings, 601602 less than 20%, holdings of, 600601 more than 50%, holdings of, 602603 valuing/reporting of, 605611, 613 available-for-sale securities, 607608 on balance sheet, 608609 on classified balance sheet, 610611 realized/unrealized gain/loss presentation, 609610, 613 trading securities, 605607 Investment banking, 540 Investment portfolio, 600 Investments, long-term, see Long-term investments Invoice(s): purchase, 199, 200 sales, 203 Irregular items, 717720 changes in accounting principle, 720 comprehensive income, 720 discontinued operations, 717718 extraordinary operations, 718720 IRS (Internal Revenue Service), 436 Subject Index O Obsolescence, 431 Off-balance-sheet financing, F5 Open-book management, 3 Operating activities, cash inflow/outflow from, 639, 640 direct method, 666670 indirect method, 646650 Operating cycles, in merchandising operations, 196197 Operating expenses, 210211, 300 Operating leases, F3F4 Ordinary repairs, 438 Organization costs, 538 Other expenses and losses, 211 Other receivables, 386 Other revenues and gains, 211 Outstanding checks, 359 Outstanding stock, 548 Over-the counter receipts, 349350 P Paid absences, F6 Paid-in capital, 542, 564 Paper (phantom) profit, 256 Parent company, 602603 Partnerships, 10 Par-value stock, 541, 544, 546 Passwords, computer, 344 Patents, 444 Payee, 398 Payment date (dividends), 554 Payout ratio, 713714 Payroll, D1D15 defined, D1 determining, D4D7 internal control of, D1D4 recording, D8D10 Payroll and payroll taxes payable, 476478 Payroll deductions, D5D7 for FICA taxes, D5 for income taxes, D6 Payroll register, D8D9 Payroll taxes, 476, D11D15 federal unemployment taxes, D11D12 FICA, D11 filing/remitting, D13D15 recording, D12D13 state unemployment taxes, D12 PCAOB (Public Company Accounting Oversight Board), 341 Pension plans, F7F8 P-E ratio, see Price-earnings ratio Percentage-of-receivables basis, 393394 Percentage-of-sales basis, 392393 Periodic inventory system, 198, 219222 merchandise purchases in, 220221 merchandise sales in, 221222 Permanent accounts, 150151, 154155 Perpetual inventory system(s), 197198 inventory cost flow methods in, 266269 periodic vs., 219222 Personal annual report, 71 Personal financial reporting, ethics in, 25 Petty cash fund, 353355 establishment of, 353 making payments from, 353354 replenishment of, 354355 Phantom (paper) profit, 256 Physical controls, 344, 345 Pickard, Thomas, 4 Plan administrator (pensions), F7 Plant and equipment, see Plant assets Plant assets, 426441 buildings, 428 defined, 426 depreciation of, 430438 computation, 431432 and income taxes, 436 methods, 432436 revisions in estimate of, 436437 determining cost of, 427430 disposal of, 439441 retirement, 439440 sale, 440441 equipment, 428429 exchange of, 452454 gain treatment, 453454 loss treatment, 452453 expenditures during useful life of, 438 land, 427 land improvements, 427428 loss on sale of, 647 Post-closing trial balance, 155157, 159161 Posting, 5960, 6768, 110111 Postretirement benefits, F6F8 Preferred dividend, 712 Preferred stock, 550552, 554555 Premium, bonds issued at, 488490 Prepaid expenses (prepayments), 98102, 118119 Present value, C7C16 of an annuity, 502503, C10C12, C16 and bond pricing, 500505 defined, C7 of a long-term note or bond, C12C15 and market value of bonds, 486 of a single amount, C8C10, C1516 variables affecting, C7 Present value of 1 factors, C9 Price-earnings (P-E) ratio, 307n.4, 713 Principal, C1 Principle(s) of accounting, 294295, 299303 cost 
principle as, 302 full disclosure as, 301 matching as, 300 revenue recognition as, 299300 Prior period adjustments, 562 Private accounting, 29. See also Managerial accounting Privately held corporations, 535 Profit: gross, 210 as purpose of corporation, 534 Profitability, 310 Profitability ratios, 710714 asset turnover, 710711 earnings per share, 712713 payout ratio, 713714 price-earnings ratio, 713 profit margin, 710 return on assets, 711 return on common stockholders equity, 711712 Profit margin (profit margin percentage), 310, 710 Pro forma income, 722 Promissory notes, 398 Property, plant, and equipment, 164. See also Plant assets Proprietorships, 10 Public accounting, 29 Public Company Accounting Oversight Board (PCAOB), 341 Publicly held corporations, 534535 Purchase allowances, 201 Purchase discounts, 201202 Purchase invoices, 199, 200 Purchase returns, 201 Purchases, recording, 199203 discounts, 201202 freight costs, 200201 returns and allowances, 201 Purchases journal, E11E13 Purchasing activities, and segregation of duties, 342 I-7 Q Quality of earnings, 721722, 724726 and alternative accounting methods, 721722 and improper recognition, 722 and pro forma income, 722 Quick (acid-test) ratio, 707708 R Ratio analysis, 699, 705717 liquidity ratios, 706710 profitability ratios, 710714 solvency ratios, 714715 summary of ratios, 716 Raw materials, 246 R&D (research and development) costs, 446 Receipts, cash, 348351 mail receipts, 350351 over-the counter receipts, 349350 Receivables, 384408 accounts receivable, 386398 disposing of, 395398 recognizing, 387 types of, 386 valuing, 388395 defined, 386 notes receivable, 398403 computing interest for, 400 disposing of, 401402 maturity date of, 399 recognizing, 400 valuing, 400401 statement presentation/analysis for, 403404 trade, 386 Receivables turnover, 708709. 
See also Accounts receivable turnover ratio Recessions, inventory fraud during, 260 Recognition, improper, 722 Reconciliation, see Bank reconciliation Record date (dividends), 554 Recording process, 4674 and accounts, 4853 illustrated example of, 6168 for payroll, D8D10 for payroll taxes, D12D13 steps in, 5361 journalizing, 5457 ledger, transfer to, 5761 transaction analysis, 1520 and trial balance, 6870, 7273 Registered bonds, 484 Relevance, of accounting information, 296 Reliability, of accounting information, 296 Reporting: of cash, 363, 365366 ethics in, 89 Research and development (R&D) costs, 446 Responsibility, establishment of, 341, 342 Restricted cash, 363 Restrictive endorsements, 350 Retailers, 196 Retail inventory method, 271272 Retained earnings, 51, 542543, 560565 defined, 560 and prior period adjustments, 562 restrictions on, 560561 statement of, 562563 statement presentation/analysis of, 564566 Retained earnings restrictions, 561 Retained earnings statement, 2123, 562563 horizontal analysis of, 702703 statement presentation/analysis of, 564565, 568 Retirement, of plant assets, 439440 I-8 Subject Index State income taxes, D6 Statement of cash flows, 21, 22, 24, 638671 classification of cash flows on, 639640 direct method of preparing, 644, 665671 investing/financing activities, 670671 net change in cash, 671 operating activities, 666670 evaluating a company using, 654655, 657658 format of, 641 indirect method of preparing, 643654 investing/financing activities, 651652 net change in cash, 652654 operating activities, 646650 worksheets, using, 659664 preparation of, 642643 preparing, from worksheets, 659664 and significant noncash activities, 640641 usefulness of, 638639 Statement of earnings, D10 State unemployment taxes, D12 State unemployment tax acts (SUTA), D12 Stock: authorized, 540 book value of, 571572 deciding to invest in, 723 issuance of, 540546 market value of, 541, 572 par vs. no-par-value, 541542, 544546 preferred, 550552 treasury, 546550 disposal of, 548550 purchase of, 547548 Stock certificate, 538 Stock dividends, 556558 Stockholders: financial statement analysis by, 698 letter to the, A2A3 limited liability of, 535 ownership rights of, 538539 Stockholders equity, 1213 on classified balance sheet, 166 return on common stockholders equity, 311, 566, 711712 Stockholders equity account, 557 Stockholders equity statement, 565, 570571 Stock investments, 600605 between 20% and 50%, holdings, 601602 less than 20%, holdings of, 600601 more than 50%, holdings of, 602603 Stock splits, 558559 Straight-line method, 432433, 509513 Su, Vivi, 384385 Subsidiary (affiliated) company, 602 Subsidiary ledger(s), E1E4 advantages of, E3 defined, E1 example, E1E2 Supplies, as prepaid expense, 99 SUTA (state unemployment tax acts), D12 T T account, 48 Taking inventory, 247248 Taxes and taxation. See also Income taxes (income taxation); Payroll taxes as area of public accounting, 29 burden of, 478 corporate, 537 sales taxes payable, 476 Temporary accounts, 150, 154 Term bonds, 484 Theft, inventory, 263 Three-column form of account, 58 Time cards, D3 Timekeeping, D3 Time periods, and discounting of bonds, 503504 Time period assumption, 94, 298 Times interest earned ratio, 495, 715 Time value of money, C1C18 and discounting, C12 future value, C3C7 and interest, C1C3 and market value of bonds, 486 present value, C7C16 and use of financial calculator, C16C17 Timing issue(s), 9496 accrual- vs. 
cash-basis accounting as, 95 fiscal/calendar years as, 95 recognizing revenues/expenses as, 9596 selection of accounting time period as, 94 Trademarks and trade names, 444 Trade receivables, 386 Trading on the equity, 712 Trading securities, 605607 Transactions, 14 Transaction analysis, 1520 Transfer, of corporate ownership rights, 535 Transit, goods in, 248249 Transposition errors, 70 Treasurer, 536 Treasury stock, 546550 disposal of, 548550 purchase of, 547548 Trend analysis, see Horizontal analysis Trial balance, 6870, 7273 defined, 68 limitations of, 69 locating errors in, 6970 post-closing, 155157, 159161 steps in preparation of, 69 use of dollar signs in, 70 Trustee (of bond), 484 Turnover: asset, 446447, 710711 inventory, 261262, 709710 receivables, 403404, 708709 U Uncollectible accounts: allowance method for, 389394 direct write-off method for, 388389 Underwriting, of stock issues, 540 Unearned revenues, 102103, 119120, 479 Unemployment taxes: federal, D11D12 state, D12 Unexpired costs, 300 Units-of-activity method, 433434, 442 Unsecured bonds, 484 Useful life, 101, 431, 432, 438 V Valuation: of accounts receivable, 388395 of notes receivable, 400401 Vertical analysis, 699, 703705 of balance sheet, 703 of income statement, 703705 Virtual close, 155 Vouchers, 351, 352 Voucher register, 352 Voucher systems, 351352 W W-2 (Wage and Tax Statement), D13D14 W-4 (Employees Withholding Allowance Certificate), D6 Wages, D1 Wages and salaries payable, 476 Wage and Tax Statement (Form W-2), D13D14 Return on assets, 311, 711 Return on common stockholders equity, 311, 566, 711712 Returns and allowances: merchandise purchases, 201 for merchandise sales, 204205 Revenue(s), 51 accrued, 105106 defined, 13 sales, 196 unearned, 102103, 119120, 479 Revenue expenditures, 438 Revenue recognition principle, 95, 299300 Reversing entries, 158, 171173 Rifkin, Jeremy, 194 Rowling, J. K., 443 Rubino, Carlos, 696697 S Salaries, 108109, D1, D4 Sale(s): of bonds, 598599 credit card, 396397 net, 209210 of plant assets, 440441, 647 of receivables, 395396 recording, 203205 discounts, 205 returns and allowances, 204205 Sales activities, and segregation of duties, 342343 Sales invoices, 203 Sales journal, E5E7 Sales revenue, 196 Sales taxes payable, 476 Salvage value, 431, 432 Sarbanes-Oxley Act of 2002 (SOX; Sarbox), 8, 29, 341 and human resources, 344 and identity theft, 364 and restatements, 159 Saving, personal, 612 SEC, see Securities and Exchange Commission Secured bonds, 483 Securities and Exchange Commission (SEC), 9, 294, 537 Segregation of duties, 342343 Selling expenses, 211 Semiannually payable interest, C12, C13 Serial bonds, 484 Service charges, bank, 357 Short-term investments, 608609 Short-term paper, 609n.4 Significant noncash activities, 640641 Simple entries, 55 Simple interest, C1C2 Single-step income statement, 213, 214 Sinking fund bond, 483 Small stock dividend, 556 Social Security taxes, see FICA taxes Solvency, 311 Solvency ratios, 714715 debt to total assets ratio, 714715 times interest earned, 715 SOX, see Sarbanes-Oxley Act of 2002 Special journals, E4E18 cash payments journal, E13E15 cash receipts journal, E7E11 effects of, on general journal, E16E17 purchases journal, E11E13 sales journal, E5E7 usefulness of, E4 Specific identification method, 250251 Stack, Jack, 2 Star Wars, 96 Stated value, 542, 544545 Subject Index Wear and tear, 431 Weighted-average unit cost, 254 Wholesalers, 196 Withholding taxes, 476. 
See also Payroll taxes Working capital, 309310, 481, 707 Working capital ratio, 707 Work in process, 246 Worksheet(s), 144154 defined, 144 for merchandising company, 222224 preparing adjusting entries from, 148, 150, 152, 154 preparing consolidated balance sheets from, 616617 I-9 preparing financial statements from, 148, 152, 153 preparing statement of cash flows from, 659664 steps in preparation of, 144152 Z Zero-interest bonds, 486 RAPID REVIEW Chapter Content BASIC ACCOUNTING EQUATION (Chapter 2) Basic Equation Expanded Basic Equation Debit/Credit Effects Assets = Liabilities + Stockholders Equity INVENTORY (Chapters 5 and 6) Ownership Freight Terms FOB Shipping point FOB Destination Ownership of goods on public carrier resides with: Buyer Seller Assets Dr. Cr. + = Liabilities Dr. Cr. + + ADJUSTING ENTRIES (Chapter 3) Type Deferrals 1. Prepaid expenses 2. Unearned revenues 1. Accrued revenues 2. Accrued expenses Adjusting Entry Dr. Expenses Dr. Liabilities Dr. Assets Dr. Expenses Cr. Assets Cr. Revenues Cr. Revenues Cr. Liabilities Common Stock Dr. Cr. + + Retained Earnings Dr. Cr. + Dividends Dr. + Cr. + Revenues Dr. Cr. + Expenses Dr. + Cr. Perpetual vs. Periodic Journal Entries Event Purchase of goods Perpetual Inventory Cash (A/P) Inventory Cash Cash (or A/P) Inventory Cash (or A/R) Sales Cost of Goods Sold Inventory No entry Periodic* Purchases Cash (A/P) Freight-In Cash Cash (or A/P) Purchase Returns and Allowances Cash (or A/R) Sales No entry Accruals Freight (shipping point) Note: Each adjusting entry will affect one or more income statement accounts and one or more balance sheet accounts. Interest Computation Interest Face value of note Annual interest rate Time in terms of one year Return of goods Sale of goods CLOSING ENTRIES (Chapter 4) Purpose: (1) Update the Retained Earnings account in the ledger by transferring net income (loss) and dividends to retained earnings. (2) Prepare the temporary accounts (revenue, expense, dividends) for the next periods postings by reducing their balances to zero. Process 1. 2. Debit each revenue account for its balance (assuming normal balances), and credit Income Summary for total revenues. Debit Income Summary for total expenses, and credit each expense account for its balance (assuming normal balances). STOP AND CHECK: Does the balance in your Income Summary Account equal the net income (loss) reported in the income statement? 3. 4. Debit (credit) Income Summary, and credit (debit) Retained Earnings for the amount of net income (loss). Debit Retained Earnings for the balance in the Dividends account and credit Dividends for the same amount. STOP AND CHECK: Does the balance in your Retained Earnings account equal the ending balance reported in the balance sheet and the retained earnings statement? Are all of your temporary account balances zero? 
End of period Cost Flow Methods Specific identification First-in, first-out (FIFO) Closing or adjusting entry required Weighted average Last-in, first-out (LIFO) CONCEPTUAL FRAMEWORK OF ACCOUNTING (Chapter 7) Characteristics Relevance Comparability Reliability Assumptions Monetary unit Economic entity Time period Going concern Principles Revenue recognition Matching Full disclosure Cost Constraints Materiality Conservatism INTERNAL CONTROL AND CASH (Chapter 8) Principles of Internal Control Establishment of responsibility Segregation of duties Documentation procedures Bank Reconciliation Physical, mechanical, and electronic controls Independent internal verification Other controls ACCOUNTING CYCLE (Chapter 4) 1 Analyze business transactions Bank Balance per bank statement Add: Deposit in transit Deduct: Outstanding checks Adjusted cash balance Books Balance per books Add: Unrecorded credit memoranda from bank statement Deduct: Unrecorded debit memoranda from bank statement Adjusted cash balance 9 Prepare a post-closing trial balance 2 Journalize the transactions 8 Journalize and post closing entries 3 Post to ledger accounts Note: 1. Errors should be offset (added or deducted) on the side that made the error. 2. Adjusting journal entries should only be made on the books. STOP AND CHECK: Does the adjusted cash balance in the Cash account equal the reconciled balance? 7 Prepare financial statements: Income statement Retained earnings statement Balance sheet 4 Prepare a trial balance *Items with an asterisk are covered in a chapter-end appendix. 5 Journalize and post adjusting entries: Prepayments/Accruals 6 Prepare an adjusted trial balance Optional steps: If a worksheet is prepared, steps 4, 5, and 6 are incorporated in the worksheet. If reversing entries are prepared, they occur between steps 9 and 1 as discussed below. EP-1 RAPID REVIEW Chapter Content RECEIVABLES (Chapter 9) Methods to Account for Uncollectible Accounts STOCKHOLDERS EQUITY (Chapter 12) No-Par Value vs. Par Value Stock Journal Entries No-Par Value Cash Common Stock Par Value Cash Common Stock (par value) Paid-in Capital in Excess of Par Value Direct write-off method Record bad debts expense when the company determines a particular account to be uncollectible. At the end of each period estimate the amount of credit sales uncollectible. Debit Bad Debts Expense and credit Allowance for Doubtful Accounts for this amount. As specific accounts become uncollectible, debit Allowance for Doubtful Accounts and credit Accounts Receivable. At the end of each period estimate the amount of uncollectible receivables. Debit Bad Debts Expense and credit Allowance for Doubtful Accounts in an amount that results in a balance in the allowance account equal to the estimate of uncollectibles. As specific accounts become uncollectible, debit Allowance for Doubtful Accounts and credit Accounts Receivable. Allowance methods: Percentage-of-sales Comparison of Dividend Effects Cash Cash dividend Stock dividend Stock split No effect No effect Common Stock No effect No effect Retained Earnings No effect Percentage-of-receivables Debits and Credits to Retained Earnings Retained Earnings Debits (Decreases) 1. Net loss 2. Prior period adjustments for overstatement of net income 3. Cash dividends and stock dividends 4. Some disposals of treasury stock Credits (Increases) 1. Net income 2. 
Prior period adjustments for Understatement of net income PLANT ASSETS (Chapter 10) Presentation Tangible Assets Property, plant, and equipment Intangible Assets Intangible assets (Patents, copyrights, trademarks, franchises, goodwill) Natural resources INVESTMENTS (Chapter 13) Comparison of Long-Term Bond Investment and Liability Journal Entries Event Investor Debt Investments Cash Cash Interest Revenue Investee Cash Bonds Payable Interest Expense Cash Computation of Annual Depreciation Expense Cost Salvage value Useful life (in years) Depreciable cost Useful life (in units) Units of activity during year Straight-line Units-of-activity Declining-balance Purchase / issue of bonds Interest receipt / payment Book value at beginning of year Declining balance rate* *Declining-balance rate 1 Useful life (in years) Note: If depreciation is calculated for partial periods, the straight-line and decliningbalance methods must be adjusted for the relevant proportion of the year. Multiply the annual depreciation expense by the number of months expired in the year divided by 12 months. Comparison of Cost and Equity Methods of Accounting for Long-Term Stock Investments Event Acquisition Cost Stock Investments Cash No entry Equity Stock Investments Cash Stock Investments Investment Revenue Cash Stock Investments BONDS (Chapter 11) Premium Face Value Discount Market interest rate Market interest rate Market interest rate Contractual interest rate Contractual interest rate Contractual interest rate Investee reports earnings Investee pays dividends Cash Dividend Revenue Computation of Annual Bond Interest Expense Interest expense Interest paid (payable) (OR Amortization of discount Amortization of premium) Trading and Available-for-Sale Securities Trading Available-forsale Report at fair value with changes reported in net income. Report at fair value with changes reported in the stockholders equity section. Straight-line amortization Effective-interest amortization (preferred method) Bond discount (premium) Number of interest periods Bond interest expense Carrying value of bonds at beginning of period Effective interest rate Bond interest paid Face amount of bonds Contractual interest rate EP-2 RAPID REVIEW Chapter Content STATEMENT OF CASH FLOWS (Chapter 14) Cash flows from operating activities (indirect method) Net income Add: Losses on disposals of assets Amortization and depreciation Decreases in noncash current assets Increases in current liabilities Deduct: Gains on disposals of assets Increases in noncash current assets Decreases in current liabilities Net cash provided (used) by operating activities $X X X X (X) (X) (X) $X Cash flows from operating activities (direct method) Cash receipts (Examples: from sales of goods and services to customers, from receipts of interest and dividends on loans and investments) $X Cash payments (Examples: to suppliers, for operating expenses, for interest, for taxes) (X) Cash provided (used) by operating activities $X PRESENTATION OF NON-TYPICAL ITEMS (Chapter 15) Prior period adjustments (Chapter 12) Discontinued operations Statement of retained earnings (adjustment of beginning retained earnings) Income statement (presented separately after Income from continuing operations) Income statement (presented separately after Income before extraordinary items) In most instances, use the new method in current period and restate previous years results using new method. 
For changes in depreciation and amortization methods, use the new method in the current period, but do not restate previous periods. Extraordinary items Changes in accounting principle EP-3 RAPID REVIEW Financial Statements Order of Preparation Statement Type 1. Income statement 2. Retained earnings statement 3. Balance sheet 4. Statement of cash flows Date For the period ended For the period ended As of the end of the period For the period ended Retained Earnings Statement Name of Company Retained Earnings Statement For the Period Ended Retained earnings, beginning of period Add: Net income (or deduct net loss) Deduct: Dividends Retained earnings, end of period $X X X X $X Income Statement (perpetual inventory system) Name of Company Income Statement For the Period Ended Sales revenues Sales Less: Sales returns and allowances Sales discounts Net sales Cost of goods sold Gross profit Operating expenses (Examples: store salaries, advertising, delivery, rent, depreciation, utilities, insurance) Income from operations Other revenues and gains (Examples: interest, gains) Other expenses and losses (Examples: interest, losses) Income before income taxes Income tax expense Net income Income Statement (periodic inventory system) Name of Company Income Statement For the Period Ended Sales revenues Sales Less: Sales returns and allowances Sales discounts Net sales Cost of goods sold Beginning inventory Purchases $X Less: Purchase returns and allowances X Net purchases X Add: Freight in X Cost of goods purchased Cost of goods available for sale Less: Ending inventory Cost of goods sold Gross profit Operating expenses (Examples: store salaries, advertising, delivery, rent, depreciation, utilities, insurance) Income from operations Other revenues and gains (Examples: interest, gains) Other expenses and losses (Examples: interest, losses) Income before income taxes Income tax expense Net income STOP AND CHECK: Net income (loss) presented on the retained earnings statement must equal the net income (loss) presented on the income statement. Balance Sheet $X X X $X X X Name of Company Balance Sheet As of the End of the Period Assets Current assets (Examples: cash, short-term investments, accounts receivable, merchandise inventory, prepaid expenses) Long-term investments (Examples: investments in bonds, investments in stocks) Property, plant, and equipment Land Buildings and equipment $X Less: Accumulated depreciation X Intangible assets Total assets Liabilities and Stockholders Equity Liabilities Current liabilities (Examples: notes payable, accounts payable, accruals, unearned revenues, current portion of notes payable) Long-term liabilities (Examples: notes payable, bonds payable) Total liabilities Stockholders equity Common stock Retained earnings Total liabilities and stockholders equity $X X $X X X X $X X X X X X X X $X $X X X X X $X $X X X $X X STOP AND CHECK: Total assets on the balance sheet must equal total liabilities and stockholders equity; and, ending retained earnings on the balance sheet must equal ending retained earnings on the retained earnings statement. 
X X X X X Statement of Cash Flows Name of Company Statement of Cash Flows For the Period Ended Cash flows from operating activities Note: May be prepared using the direct or indirect method Cash provided (used) by operating activities Cash flows from investing activities (Examples: purchase / sale of long-term assets) Cash provided (used) by investing activities Cash flows from financing activities (Examples: issue / repayment of long-term liabilities, issue of stock, payment of dividends) Net cash provided (used) by financing activities Net increase (decrease) in cash Cash, beginning of the period Cash, end of the period X X X X X X X $X $X X X X X $X STOP AND CHECK: Cash, end of the period, on the statement of cash flows must equal cash presented on the balance sheet. EP-4 RAPID REVIEW Using the Information in the Financial Statements Ratio Liquidity Ratios 1. Current ratio Current assets Current liabilities Cash Short-term investments Receivables (net) Current liabilities Net credit sales Average net receivables Cost of goods sold Average inventory Measures short-term debt-paying ability. Formula Purpose or Use 2. Acid-test (quick) ratio Measures immediate short-term liquidity. 3. Receivables turnover Measures liquidity of receivables. 4. Inventory turnover Measures liquidity of inventory. Profitability Ratios 5. Profit margin Net income Net sales Net sales Average assets Net income Average total assets Net income Average common stockholders equity Net income Weighted average common shares outstanding Market price per share of stock Earnings per share Cash dividends Net income Measures net income generated by each dollar of sales. Measures how efficiently assets are used to generate sales. Measures overall profitability of assets. 6. Asset turnover 7. Return on assets 8. Return on common stockholders equity 9. Earnings per share (EPS) Measures profitability of stockholders investment. Measures net income earned on each share of common stock. Measures the ratio of the market price per share to earnings per share. Measures percentage of earnings distributed in the form of cash dividends. 10. Price-earnings (P-E) ratio 11. Payout ratio Solvency Ratios 12. Debt to total assets ratio Total debt Total assets Income before income taxes and interest expense Interest expense Cash provided by operating activities Capital expenditures Cash dividends Measures percentage of total assets provided by creditors. Measures ability to meet interest payments as they come due. Measures the amount of cash generated during the current year that is available for the payment of additional dividends or for expansion. 13. Times interest earned 14. Free cash flow EP-5 ... View Full Document | http://www.coursehero.com/file/6215053/Week-3-Appendix-E-Online-text/ | CC-MAIN-2014-41 | refinedweb | 68,396 | 54.93 |
5. ACPI considerations for PCI host bridges¶
The general rule is that the ACPI namespace should describe everything the OS might use unless there’s another way for the OS to find it [1, 2].
For example, there’s no standard hardware mechanism for enumerating PCI host bridges, so the ACPI namespace must describe each host bridge, the method for accessing PCI config space below it, the address space windows the host bridge forwards to PCI (using _CRS), and the routing of legacy INTx interrupts (using _PRT).
PCI devices, which are below the host bridge, generally do not need to be described via ACPI. The OS can discover them via the standard PCI enumeration mechanism, using config accesses to discover and identify devices and read and size their BARs. However, ACPI may describe PCI devices if it provides power management or hotplug functionality for them or if the device has INTx interrupts connected by platform interrupt controllers and a _PRT is needed to describe those connections.
ACPI resource description is done via _CRS objects of devices in the ACPI namespace [2]. The _CRS is like a generalized PCI BAR: the OS can read _CRS and figure out what resource is being consumed even if it doesn’t have a driver for the device [3]. That’s important because it means an old OS can work correctly even on a system with new devices unknown to the OS. The new devices might not do anything, but the OS can at least make sure no resources conflict with them.
Static tables like MCFG, HPET, ECDT, etc., are not mechanisms for reserving address space. The static tables are for things the OS needs to know early in boot, before it can parse the ACPI namespace. If a new table is defined, an old OS needs to operate correctly even though it ignores the table. _CRS allows that because it is generic and understood by the old OS; a static table does not.
If the OS is expected to manage a non-discoverable device described via ACPI, that device will have a specific _HID/_CID that tells the OS what driver to bind to it, and the _CRS tells the OS and the driver where the device’s registers are.
PCI host bridges are PNP0A03 or PNP0A08 devices. Their _CRS should describe all the address space they consume. This includes all the windows they forward down to the PCI bus, as well as registers of the host bridge itself that are not forwarded to PCI. The host bridge registers include things like secondary/subordinate bus registers that determine the bus range below the bridge, window registers that describe the apertures, etc. These are all device-specific, non-architected things, so the only way a PNP0A03/PNP0A08 driver can manage them is via _PRS/_CRS/_SRS, which contain the device-specific details. The host bridge registers also include ECAM space, since it is consumed by the host bridge.
ACPI defines a Consumer/Producer bit to distinguish the bridge registers (“Consumer”) from the bridge apertures (“Producer”) [4, 5], but early BIOSes didn’t use that bit correctly. The result is that the current ACPI spec defines Consumer/Producer only for the Extended Address Space descriptors; the bit should be ignored in the older QWord/DWord/Word Address Space descriptors. Consequently, OSes have to assume all QWord/DWord/Word descriptors are windows.
Prior to the addition of Extended Address Space descriptors, the failure of Consumer/Producer meant there was no way to describe bridge registers in the PNP0A03/PNP0A08 device itself. The workaround was to describe the bridge registers (including ECAM space) in PNP0C02 catch-all devices [6]. With the exception of ECAM, the bridge register space is device-specific anyway, so the generic PNP0A03/PNP0A08 driver (pci_root.c) has no need to know about it.
New architectures should be able to use “Consumer” Extended Address Space descriptors in the PNP0A03 device for bridge registers, including ECAM, although a strict interpretation of [6] might prohibit this. Old x86 and ia64 kernels assume all address space descriptors, including “Consumer” Extended Address Space ones, are windows, so it would not be safe to describe bridge registers this way on those architectures.
PNP0C02 “motherboard” devices are basically a catch-all. There’s no programming model for them other than “don’t use these resources for anything else.” So a PNP0C02 _CRS should claim any address space that is (1) not claimed by _CRS under any other device object in the ACPI namespace and (2) should not be assigned by the OS to something else.
The PCIe spec requires the Enhanced Configuration Access Method (ECAM) unless there’s a standard firmware interface for config access, e.g., the ia64 SAL interface [7]. A host bridge consumes ECAM memory address space and converts memory accesses into PCI configuration accesses. The spec defines the ECAM address space layout and functionality; only the base of the address space is device-specific. An ACPI OS learns the base address from either the static MCFG table or a _CBA method in the PNP0A03 device.
The MCFG table must describe the ECAM space of non-hot pluggable host bridges [8]. Since MCFG is a static table and can’t be updated by hotplug, a _CBA method in the PNP0A03 device describes the ECAM space of a hot-pluggable host bridge [9]. Note that for both MCFG and _CBA, the base address always corresponds to bus 0, even if the bus range below the bridge (which is reported via _CRS) doesn’t start at 0.
- [1] ACPI 6.2, sec 6.1:
- For any device that is on a non-enumerable type of bus (for example, an ISA bus), OSPM enumerates the devices’ identifier(s) and the ACPI system firmware must supply an _HID object … for each device to enable OSPM to do that.
- [2] ACPI 6.2, sec 3.7:
The OS enumerates motherboard devices simply by reading through the ACPI Namespace looking for devices with hardware IDs.
Each device enumerated by ACPI includes ACPI-defined objects in the ACPI Namespace that report the hardware resources the device could occupy [_PRS], an object that reports the resources that are currently used by the device [_CRS], and objects for configuring those resources [_SRS]. The information is used by the Plug and Play OS (OSPM) to configure the devices.
- [3] ACPI 6.2, sec 6.2:
OSPM uses device configuration objects to configure hardware resources for devices enumerated via ACPI. Device configuration objects provide information about current and possible resource requirements, the relationship between shared resources, and methods for configuring hardware resources.
When OSPM enumerates a device, it calls _PRS to determine the resource requirements of the device. It may also call _CRS to find the current resource settings for the device. Using this information, the Plug and Play system determines what resources the device should consume and sets those resources by calling the device’s _SRS control method.
In ACPI, devices can consume resources (for example, legacy keyboards), provide resources (for example, a proprietary PCI bridge), or do both. Unless otherwise specified, resources for a device are assumed to be taken from the nearest matching resource above the device in the device hierarchy.
- [4] ACPI 6.2, sec 6.4.3.5.1, 2, 3, 4:
- QWord/DWord/Word Address Space Descriptor (.1, .2, .3)
- General Flags: Bit [0] Ignored
- Extended Address Space Descriptor (.4)
General Flags: Bit [0] Consumer/Producer:
- 1 – This device consumes this resource
- 0 – This device produces and consumes this resource
- [5] ACPI 6.2, sec 19.6.43:
- ResourceUsage specifies whether the Memory range is consumed by this device (ResourceConsumer) or passed on to child devices (ResourceProducer). If nothing is specified, then ResourceConsumer is assumed.
- [6] PCI Firmware 3.2, sec 4.1.2:
- If the operating system does not natively comprehend reserving the MMCFG region, the MMCFG region must be reserved by firmware. The address range reported in the MCFG table or by _CBA method (see Section 4.1.3) must be reserved by declaring a motherboard resource. For most systems, the motherboard resource would appear at the root of the ACPI namespace (under _SB) in a node with a _HID of EISAID (PNP0C02), and the resources in this case should not be claimed in the root PCI bus’s _CRS. The resources can optionally be returned in Int15 E820 or EFIGetMemoryMap as reserved memory but must always be reported through ACPI as a motherboard resource.
- [7] PCI Express 4.0, sec 7.2.2:
- For systems that are PC-compatible, or that do not implement a processor-architecture-specific firmware interface standard that allows access to the Configuration Space, the ECAM is required as defined in this section.
- [8] PCI Firmware 3.2, sec 4.1.2:
The MCFG table is an ACPI table that is used to communicate the base addresses corresponding to the non-hot removable PCI Segment Groups range within a PCI Segment Group available to the operating system at boot. This is required for the PC-compatible systems.
The MCFG table is only used to communicate the base addresses corresponding to the PCI Segment Groups available to the system at boot.
- [9] PCI Firmware 3.2, sec 4.1.3:
The _CBA (Memory mapped Configuration Base Address) control method is an optional ACPI object that returns the 64-bit memory mapped configuration base address for the hot plug capable host bridge. The base address returned by _CBA is processor-relative address. The _CBA control method evaluates to an Integer.
This control method appears under a host bridge object. When the _CBA method appears under an active host bridge object, the operating system evaluates this structure to identify the memory mapped configuration base address corresponding to the PCI Segment Group for the bus number range specified in _CRS method. An ACPI name space object that contains the _CBA method must also contain a corresponding _SEG method. | https://www.kernel.org/doc/html/v5.7/PCI/acpi-info.html | CC-MAIN-2021-39 | refinedweb | 1,658 | 61.97 |
Data validation is an important topic in applications. There are many validation frameworks available and there should be one that you are happy with. I am currently playing with the Enterprise Library 4.1 Validation Application Block and I am integrating it into my ASP.NET MVC application. In this posting I will show you how to use the validation block in your ASP.NET MVC application.
Note. This posting gives you the first ideas about validation and shows you how to get things done quick and dirty. For production-ready validation there are more steps to follow and I will introduce these steps in my future postings. Stay tuned!
In short, you can create ASP.NET MVC views that create and initialize objects for you. I assume you know this feature and you know how it works at a basic level.
Here is how my application is layered.
Currently all external stuff is referenced by the infrastructure layer. The infrastructure layer provides common interfaces for dependency injection and validation. These interfaces don't change when implementations change. The presentation layer uses the infrastructure resolver to get implementations of repositories.
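To make the controller code later in this posting easier to follow, here is a rough sketch of how the presentation layer asks the infrastructure layer for a repository. The interface and resolver names below are assumptions made for illustration, not code from the actual project.

using System.Web.Mvc;

public interface IPriceEnquiryRepository
{
    void SavePriceEnquiry(PriceEnquiry instance);
}

public class PriceEnquiryController : Controller
{
    private readonly IPriceEnquiryRepository _repository;

    public PriceEnquiryController()
    {
        // The resolver lives in the infrastructure layer and hides the
        // concrete repository implementation from the presentation layer.
        _repository = InfrastructureResolver.Resolve<IPriceEnquiryRepository>();
    }
}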
I have Enterprise Library 4.1 downloaded and installed on my development machine. If you want to just test my solution you can also create one ASP.NET MVC web application project and put all the stuff there. No problem at all. After installing Enterprise Library you need some references so your application can use the validation block. Take at least these files from the Enterprise Library installation folder: Microsoft.Practices.EnterpriseLibrary.Common.dll, Microsoft.Practices.EnterpriseLibrary.Validation.dll and Microsoft.Practices.ObjectBuilder2.dll.
These libraries should be enough. I added references to these libraries to my infrastructure library.
Next, we need a facade for our validation feature. I created these three classes: ValidationError, ValidationException and Validator.
Let’s see those classes now.
ValidationError
public class ValidationError
{
    public string PropertyName { get; set; }
    public string Message { get; set; }
}
ValidationException
public class ValidationException : Exception
{
    private readonly ValidationError[] _errors;

    public ValidationException(ValidationError[] errors)
    {
        _errors = errors;
    }

    public ValidationError[] ValidationErrors
    {
        get
        {
            return _errors;
        }
    }
}
Validator();
Now we are almost done and it is time to add some rules.
Make sure you have web.config file in your application because we are going to modify it. Run Enterprise Library configuration program from all programs menu and open your web.config file.
Add some validation rules for you classes and save configuration. Enterprise Library Configurator creates all required sections to your web.config file automatically.
As a first thing take a look at this simple form that let’s users insert new price enquiries.
<h2>Insert</h2>
<%= Html.ValidationMessage("_FORM") %>
<% using (Html.BeginForm()) {%>
<fieldset>
<legend>New price enquiry</legend>
<table>
<tr>
<td valign="top"><label for="Title">Title</label>:</td>
<td valign="top">
<%= Html.TextBox("Title") %><br />
<%= Html.ValidationMessage("Title")%>
</td>
</tr>
<label for="From">From</label>:
<%= Html.TextBox("From") %><br />
<%= Html.ValidationMessage("From")%>
<td valign="top"><label for="DocNumber">Number</label>:</td>
<%= Html.TextBox("DocNumber") %><br />
<%= Html.ValidationMessage("DocNumber") %>
<td valign="top"><label for="Date">Date:</label>:</td>
<%= Html.TextBox("Date", DateTime.Now.ToShortDateString()) %><br />
<%= Html.ValidationMessage("Date") %>
<label for="DueName">Due date:</label>:
<%= Html.TextBox("DueDate", DateTime.Now.ToShortDateString()) %><br />
<%= Html.ValidationMessage("DueDate") %>
</table>
<p>
<input type="submit" value="Save" />
</p>
</fieldset>
Let’s see one repository method that accepts object to be validated. Let’s assume we have repository that validates objects before saving them. If there are validation errors ValidationException will be thrown. Here is simplified save method of repository.
public void SavePriceEnquiry(PriceEnquiry instance)
var results = Validator.Validate<PriceEnquiry>(instance);
if (results.Length > 0)
throw new ValidationException(results);
Save<PriceEnquiry>(instance);
And let’s use this repositoy in ASP.NET MVC controller (if your version of ASP.NET MVC doesn’t support HttpPost sttribute you can use AcceptVerbs(HttpVerbs.Post) instead).
[HttpPost]
public ActionResult Insert(PriceEnquiry enquiry)
try
_repository.SavePriceEnquiry(enquiry);
catch (ValidationException vex)
Helper.BindErrorsToModel(vex.ValidationErrors, ModelState);
return Insert();
catch (Exception ex)
ModelState.AddModelError("_FORM", ex.ToString());
return RedirectToAction("Index");
You can see call to method called BindErrorsToModel(). This is helper method that steps through validation errors array and binds errors to current model. You can take this method and use it in your own projects if you like.
public static class Helper
public static void BindErrorsToModel(ValidationException exception, ModelStateDictionary modelState)
BindErrorsToModel(exception.ValidationErrors, modelState);
}
public static void BindErrorsToModel(ValidationError[] errors, ModelStateDictionary modelState)
if (errors == null)
return;
if (errors.Length == 0)
foreach (var error in errors)
modelState.AddModelError(error.PropertyName, error.Message);
NB! Don’t forget that fields in your views must be named like properties of class you are expecting as a result of binding.
Now you can test your application and see if validation works. You should see correct error messages if everything went well and there are no bugs in code or configuration.
Although this example is long one it is not hard to add validation support to your applications. It takes you some time if it is your first time but if you are familiar with tools and you keep yourself from planning rocket sience level validation then everything goes fast and smooth.
There are some more nyances you should know before stating that you fully suppor validation through your application. I will introduce some more ideas in my future postings about validation.
Thank you for submitting this cool story - Trackback from PimpThisBlog.com
Thank you for submitting this cool story - Trackback from DotNetShoutout
You've been kicked (a good thing) - Trackback from DotNetKicks.com
Thank you for submitting this cool story - Trackback from progg.ru
You are voted (great) - Trackback from WebDevVote.com
9efish.感谢你的文章 - Trackback from 9eFish
Pingback from Twitter Trackbacks for ASP.NET MVC: Validating objects using Enterprise Library validation application block - Gunnar Peipman's [asp.net] on Topsy.com
This post was mentioned on Twitter by gpeipman: New blog post: - ASP.NET MVC: Validating objects using Enterprise Library validation application block
Pingback from ASP.NET MVC Archived Buzz, Page 1
I just use existing validation functionality in ASP.NET MVC 2.0 and .NET 4.0.
Just decorate model classes with data annotation attributes and that's it. Looks somthing like this:
public class SignupForm
[Required]
public string UserName { get; set; }
then in action method:
public ActionMethod Signup(SignupForm form)
if (ModelState.IsValid())
{
Membership.CreateUser(form);
}
return View();
This is a case of bad usage of exceptions. If the user forgot to enter the number is an exception of the normal flow of your application?
What if the database turns offline? Does the application will send an email to Benedict XVI?
Sorry, but this is not a use case for exceptions. This is really bad, you need to read.
On the other hand, is too much responsability for a repository. This is not a repository is more like a "pepe" or "lito", you can call as you want but it is fairly away from repository.
Repository pattern:
martinfowler.com/.../repository.html
Thanks for feedback, Jose!
I'm trying to keep those code samples as minimal as possible to keep focus on the topic of posting. I really don't want to publish here code samples that contain a lot of details that are not important in context of blog posting.
Keeping validation rules in web.config may grow it pretty long and it is not convenient to make changes
The use of exceptions aren't that bad. If we try to persist an invalid entity into the repository, we are violating the contract and an exception is what we deserve.
However, the controller should initiate the validation before we use the repository. Then the controller can decide to continue to persist the entity, or present the validation errors to the user.
I guess this is one of the details the author left out in favour of a simpler blog post.
Thanks for feedback, Thomas! :)
I already plan next posting that introduces custom model binders. I will show in this posting how to take validation out from controller and how to make controllers shorter this way. I like this idea more and more but I need to play with custom binders a little bit more to write posting that is useful for readers.
Why I let repository to throw exception? My point is simple - in repository you should anyway avoid situation where some other part of code wants to save invalid entity. Yes, we can validate separately in controllers (or custom binders) but what happens with code that has no controllers and binders (let's say we have command line application)? In this case we also need to be sure that invalid entities are not saved.
No, you are wrong. Exceptions are _exceptions_ of the normal flow. You are using exceptions as part of the normal flow and THIS is a *really* a BAD BAD BAD DESIGN.
You don't let exceptions to hapen and then catch. You need to handle the normal flow.
I don't use exceptions and my validation stuff are simpler than yours. So, this has nothing to do with simplistic approach.
I agree that coder of controllers shouldn't shoot whatever he likes to repositories and then hope that repository detects and resolves all the problems.
In my next posting I will show how to move all the validation stuff away from controller and I will also point out why it is not good idea to put all hope on repositories.
As a second paragraphs here sais - the code here is not role model (like Beavis and Butthead). It is quick'n'dirty and its only purpose is to get validation work without too much side topics and discussions.
Pingback from Reflective Perspective - Chris Alcock » The Morning Brew #477
Pingback from ASP.NET MVC Archived Blog Posts, Page 1
Thanks a lot for this wounder full read. It is really very informative. | http://weblogs.asp.net/gunnarpeipman/archive/2009/11/13/asp-net-mvc-validating-objects-using-enterprise-library-validation-application-block.aspx?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+gunnarpeipman+%28Gunnar+Peipman%27s+ASP.NET+blog%29 | crawl-003 | refinedweb | 1,646 | 50.94 |
This bug has been spun off from bug 510035. I fear we're heading for a Java train wreck on OS X on the 1.9.2 branch (what will become Firefox 3.6). We currently don't have a usable Java plugin for the 1.9.2 branch (or the trunk) on OS X, and none is likely to be available before next year -- some time between January and March, 2010, or possibly even later. This is well after the planned release of Firefox 3.6 (currently scheduled for November of this year). There are really only two feasible ways of dealing with this problem: 1) Postpone the 3.6 release until after Apple releases its port of Sun's Java Plugin2. 2) Restore OJI, Liveconnect and the JEP on the 1.9.2 branch. I have no control and little influence over whether or not we choose option 1. But I *can* show that option 2, though somewhat clumsy, is feasible. In my next comment I'll post a patch that restores OJI, the JEP and Liveconnect on the 1.9.2 branch. A developer preview of Apple's Java Plugin2 is actually present on OS X 10.6.X, and on OS X 10.5.8 if you've installed a recent Java update (Update 4 or Update 5). But it's not release quality -- Apple acknowledges this and doesn't provide a GUI for ordinary users to turn it on. "Not release quality" is in fact quite an understatement -- see bug 510035 comment #41. There's no way we can tell people to use that Java plugin in a Firefox release.
Created attachment 401338 [details] Patch v1.0, full Here's a preliminary patch. It's quite large, so I'm going to make it available in three different copies -- the full patch, a full patch without the JEP, and a patch containing only revisions to existing files. The full patch is the one you'd want to apply. But the part corresponding to the JEP is a bunch of base64-encoded files, some of them quite large -- so this patch isn't exactly readable. (If you don't want to bother building from the full patch, a tryserver build should be available in a few hours.) The full patch without the JEP contains many "new" files (those containing the implementations of OJI and Liveconnect). It contains nothing but text files, so it's somewhat more readable. But most readable of all is the copy of the patch that contains only revisions to existing files.
Created attachment 401340 [details] Patch v1.0, without JEP
Created attachment 401342 [details] [diff] [review] Patch v1.0, changed files only
I should say a little about my patch: I didn't restore things to the way they were before OJI, Liveconnect and the JEP were removed -- there have been too many changes in the underlying code for that to be possible (for example lots of interfaces have been changed or removed). Intead I made existing objects also (conditionally) support the old interfaces. I wrapped all my changes with define macros, so that none of them have any effect on other platforms than OS X. Even on the Mac, I haven't made any changes in how ordinary NPAPI plugins are handled. My restoration of OJI and Liveconnect are only meant to support the JEP, and have only been tested with the JEP (on OS X 10.5.8 and 10.6.1). If other OJI plugins existed they might cause trouble ... but I don't believe any others do exist.
Here's a tryserver build made with the full copy of my v1.0 patch: But it somehow doesn't bundle the JEP. So (to test it) you'll need to download a copy of JEP 0.9.7.2 from and copy that to /Library/Internet Plug-Ins. I don't know why the JEP didn't get bundled -- possibly the tryserver doesn't like base64-encoded binary files in a patch.
My tryserver build (based on my v1.0 patch) passed all tests (save for a spurious "broken pipe" error on the Linux unit test machine). My local build passed all tests locally, except for some reftest failures that also happen without my patch (and so are presumably due to problems with the tests).
Created attachment 403558 [details] Patch v1.1, full Here's a new patch, with some minor changes. Once again I'm going to post three copies of it. A tryserver build should follow in a few hours. The previous patch made *almost* no changes except on the Mac, with one very small exception -- three "extern C" JavaScript methods were explicitly exported on all platforms. Now these methods are only explicitly exported on the Mac (only when OJI is defined) (this is needed by LiveConnect). My v1.1 patch now refuses to load any XPCOM plugin but the Java Embedding Plugin. If a plugin fails to load as an NPAPI plugin, the code now checks if it's a Java plugin, an XPCOM plugin (with an NSGetFactory() entry point), and if the plugin's name contains "Java Embedding Plugin". Only if all three tests pass does the plugin get loaded. I've also made a few other changes to tighten things up. For example I now always try to load a plugin as an NPAPI plugin before I try loading it as an OJI plugin (so a plugin that supports both APIs would have a chance to be loaded as an NPAPI plugin). And I've made a few more of the "old" interfaces' methods return errors if no XPCOM plugin has been loaded or instantiated. There's no way to stop the "old" interfaces being present -- which might confuse plugins/extensions that test for their presence. But, where possible, I've made these interfaces inoperable unless they're being used by an XPCOM plugin (i.e. by the JEP, which is the only XPCOM plugin that can get loaded/instantiated).
Created attachment 403561 [details] Patch v1.1, without JEP
Created attachment 403563 [details] [diff] [review] Patch v1.1, changed files only
Comment on attachment 403558 [details] Patch v1.1, full Josh, I'm asking you to review this patch. But if you think parts of it should also be reviewed by other people, please add them.
Here's a tryserver build made with my full v1.1 patch: Once again it doesn't (for some reason) bundle the JEP. So to test it you'll you'll need to download JEP 0.9.7.2 from and copy its binaries (JavaEmbeddingPlugin.bundle and MRJPlugin.plugin) to /Library/Internet Plug-Ins (or to the Contents/MacOS/plugins directory of my tryserver build's bundle).
My v1.1 build passed all the tryserver tests except for two apparently spurious failures on a Linux box ("Linux try hg unit test") and a Windows box ("WINNT 5.2 try hg unit test").
Need this for beta.
This patch needs at least one more revision.. I've opened bug 519734 to get Java Applet.plugin blocklisted on the 1.9.2 branch. But I need to figure out why 1.9.2-branch FF now prefers the Java Applet.plugin over the JEP (as earlier versions of the browser never did). I'll be working on this today and (probably) into tomorrow.
(In reply to comment #14) >. Isn't this because OJI & friends aren't there, so the part of the JEP distro that handles Java 1.4 and above (even if installed manually) "can't be loaded"? That's the way I read bug 510035 comment 4, 9, etc., and specifically bug 510035 comment 12: > I'd forgotten that the MRJPlugin.plugin included with the JEP is able > to use Java 1.3.1 (where present), and when doing so doesn't use the > OJI API. Or have you done subsequent tests on your tryserver builds with the JEP and OJI restored and are still seeing only Java 1.3.1 loaded; comment 14 isn't clear about that.
> I'd forgotten that the MRJPlugin.plugin included with the JEP is > able to use Java 1.3.1 (where present), and when doing so doesn't > use the OJI API. This comment of mine is (I now think) simply wrong. Sorry :-( The JEP's MRJPlugin.plugin *is* able to use Java 1.3.1 when JavaEmbeddingPlugin.plugin isn't present. But it still needs OJI. Scott Field at bug 510035 was definitely running Apple's Java 1.3.1. But I'm now pretty sure this was because he'd loaded Apple's Java Applet.plugin -- i.e. he wasn't getting it via the JEP's MRJPlugin.plugin. > Or have you done subsequent tests on your tryserver builds with the > JEP and OJI restored and are still seeing only Java 1.3.1 loaded; Yes. But there's more to the story. This only happens (as I now realize) with builds (made from my patch) that don't bundle the JEP. In other words, 1.9.2-branch Firefox (with my patch) only prefers Java Applet.plugin over the JEP when both are installed to /Library/Internet Plug-Ins/ (and no JEP exists in the distro's Contents/MacOS/plugins directory). This is because of a bug that's been introduced on the 1.9.2 branch (and also the trunk) -- FF now chooses the *older* of two plugins that support the same MIME type. Tomorrow I'll have more to say about all this.
> FF now chooses the *older* of two plugins that support the same MIME > type. And have been installed to the same location (e.g. both installed to /Library/Internet Plug-Ins/, or both installed to the distro's Contents/MacOS/plugins/).
Created attachment 404057 [details] Patch v1.2, full Here's a new patch with just one change: I discovered that 'make package' doesn't include the JEP in the resulting distro. (This is presumably the reason my previous patches' tryserver builds didn't bundle the JEP.) To fix this I needed to make a change to browser/installer/package-manifest.in. As I mentioned previously (comment #16), the problem I described in comment #14 doesn't happen when the JEP is bundled with a 1.9.2-branch distro (in its Contents/MacOS/plugins/ directory). There's definitely a bug on the trunk and 1.9.2 branch that makes the browser prefer older plugins for a given MIME type, but it doesn't effect the JEP in the default case (when it's bundled with FF). So fixing this bug doesn't depend on fixing the preference-order bug. So I'll address the preference-order bug elsewhere -- in another bug. I'm doing a tryserver build. But the tryservers are severely backed up, so it'll be quite a while before I can post a link here.
Created attachment 404058 [details] Patch v1.2, without JEP
Created attachment 404059 [details] [diff] [review] Patch v1.2, changed files only").
Steven: Forgive my ignorance - I assume this tryserver build is supposed to show the JEP in about:plugins? (In reply to comment #21) >").
> I assume this tryserver build is supposed to show the JEP in > about:plugins? It should ... but now I see that it doesn't. You *do* see JavaEmbeddingPlugin.bundle and MRJPlugin.plugin when you right-click on the app bundle, choose "Show Package Contents" and browse to Contents/MacOS/plugins. But now I see that the tryserver didn't "build" them properly -- these two bundles are missing all their "binaries"!!! This must (presumably) be because the tryservers can't deal with base64-encoded binaries in patches. 'make package' works properly with my local build (made with my v1.2 patch) -- when you install the app bundle from the resulting *.dmg file, it contains all the needed binaries and works correctly. For now I guess you've got to test in the following way: 1) Remove JavaEmbeddingPlugin.bundle and MRJPlugin.plugin from the Contents/MacOS/plugins/ directory of the tryserver build you downloaded and installed. 2) Download JEP 0.9.7.2 from and drag its binaries (JavaEmbeddingPlugin.bundle and MRJPlugin.plugin) to your tryserver download's Contents/MacOS/plugins/ directory. (I missed the lost binaries problem because I already have a copy of the JEP in my /Library/Internet Plug-Ins/ directory, and (on my system) the tryserver download silently failed over to using that.)
For what it's worth, here's a link to a build I made locally with 'make package':
It's too bad the tryservers won't let you do "push to try" on the 1.9.2 branch (only on the trunk). That's the only way I can think of to get around the problem of the tryservers not being able to deal with binaries in patches.
> a bug that's been introduced on the 1.9.2 branch (and also the > trunk) -- FF now chooses the *older* of two plugins that support the > same MIME type. I've opened bug 520085 to deal with this.
Comment on attachment 404057 [details] Patch v1.2, full I've looked over this and didn't see anything wrong but there is a lot here. The best thing to do is land it ASAP for testing but please get review from jst first. To be clear since I'm marking this r+, this is an inherently dangerous patch. Steven did a good job writing it as far as I can tell, but this is a lot of code that nobody understood well even when it was in the tree the first time.
Comment on attachment 404057 [details] Patch v1.2, full Looks good to me. I asked Waldo to glance over the JS engine API changes here as well.
#endif // __cplusplus Speaking only for JS code, I'm sure we want these to be C-style /* */ comments, as they likely cause build warnings (might not even build on particularly pedantic compilers). Line 298-ish of the new jsfun.cpp has this: ... } #ifdef OJI JS_END_EXTERN_C #endif #ifdef OJI JS_BEGIN_EXTERN_C JS_EXPORT_API(void) #else void #endif js_PutArgsObject(JSContext *cx, JSStackFrame *fp) ... The end/begin here is pointless, right? Please get rid of it. The comment /* Allow inclusion from LiveConnect C files */ (passim) should include a trailing period, throughout, because it's not a sentence fragment. With those changes js/src looks fine.
> The end/begin here is pointless, right? What do you mean? JS_BEGIN_EXTERN_C and JS_END_EXTERN_C? Are you telling me that's pointless? I don't know myself.
I believe it should be; those macros are just gunk that expands to extern "C" { and } if __cplusplus and nothing otherwise, so I think they can be omitted. It'd only be necessary if there were something between the two, but there's only whitespace there after the preprocessor does its thing.
Oh now I see (I think). You're just saying I shouldn't have a JS_END_EXTERN_C just before a JS_BEGIN_EXTERN_C. I should consolidate the two blocks.
What I'm saying is that this: ... } #ifdef OJI JS_END_EXTERN_C #endif #ifdef OJI JS_BEGIN_EXTERN_C JS_EXPORT_API(void) #else void #endif js_PutArgsObject(JSContext *cx, JSStackFrame *fp) ... expands to this in C++ with OJI: ... } } extern "C" { JS_EXPORT_API(void) js_PutArgsObject(JSContext *cx, JSStackFrame *fp) ... but since there's nothing between the } that closed the extern "C" started way above and the extern "C" that starts a new such block, there's no reason to close and reopen -- just do: ... } #ifdef OJI JS_EXPORT_API(void) #else void #endif js_PutArgsObject(JSContext *cx, JSStackFrame *fp) ...
I understand now. Will do.
Created attachment 404993 [details] Patch v1.3, full I've run into trouble. There've been many changes to JS code since my v1.2 patch, and I've had to make changes to JS header files (and sometimes to *.cpp) files to get Liveconnect to compile and link properly. But now I crash in a JS trace stack every time I load a Java applet. I don't *think* this is caused by anything in my patch (even the newest one). Rather I suspect there's been some kind of change to JS tracing in the last week, and that's the source of the trouble. But I don't know a whole lot about JS code, so I'm asking for help from the JS people. Andreas Gal, your name popped up. So I'm cc-ing you on this bug. Thanks in advance for whatever suggestions you can come up with! I'll post a stack trace of my crashes in a later comment.
Created attachment 404994 [details] [diff] [review] Patch v1.3, changed files only
(In reply to comment #35) > Created an attachment (id=404993) [details] > Patch v1.3, full > > I've run into trouble. ... > But now I crash in a > JS trace stack every time I load a Java applet. Attach the stacks here.
Created attachment 404995 [details] Gdb stack of js trace crash
(In reply to comment #38) > Created an attachment (id=404995) [details] > Gdb stack of js trace crash That's a GC bug, confusing also called "tracer". not related to the JIT.
2688 else 2689 JS_TraceChildren(trc, thing, kind); 2690 } else { Not sure how we end up at pc == 0x00000000 with that. Maybe that gets inlined? 2416 /* If obj has no map, it must be a newborn. */ 2417 JSObject *obj = (JSObject *) thing; 2418 if (!obj->map) 2419 break; 2420 obj->map->ops->trace(trc, obj); Maybe ops->trace is NULL? A bunch of wrappers are on the call stack. Maybe one of them isn't implemented right? 2421 break;
I'll be using 'hg bisect' on this. I should know more later today.
Steven, update here?
I'm closing in on the guilty patch. I should have it within an hour. I have no idea how close I am to a fix, though. I'll know more once I've identified the patch that triggered the crashes.
'hg bisect' found the patch that triggers my crashes: "bug 511425 - removal of JSObjectOps.(get|set)RequiredSlot" Igor Bukanov <[email protected]> I'll look through the patch for something more specific. But I'm likely to need help from JS developers.
The first comment in bug 511425 makes me think that you should just back it out on 1.9.2 to start.
#45: thats the plan, mrbkap agrees too, working on it
(In reply to comment #45) > The first comment in bug 511425 makes me think that you should just back it out > on 1.9.2 to start. This is not necessary, implementing JSObjectOps.trace in liveconnect is enough to fix this.
(In reply to comment #47) > This is not necessary, implementing JSObjectOps.trace in liveconnect is enough > to fix this. Beat you to that one: bug 521135 has that patch.
Its not clear that that is enough though. We are crashing in the cycle collector now.
Created attachment 405296 [details] Patch v1.4, full Here's a new revision to my patch, which updates it to current code. It still crashes, of course -- so I won't do a tryserver build. But here's a user repository with the patch on top of it:
Created attachment 405297 [details] [diff] [review] Patch v1.4, changed files only
I've spun off bug 521338 to follow gal's and mrbkap's current work on fixing the crashes. mrbkap has been adding patches to my user repository ().
After a couple hours of testing, I've concluded that the patches at bug 521338 completely fix the problems triggered by the patch for bug 511425 (the crash I reported at comment #38, and another crash reported at bug 521338 comment #1). The patches at bug 521338 still need to be reviewed. But once that happens I think they can land on the 1.9.2 branch, either on top of or in combination with my patch for this bug. (My v1.4 patch will once again need to be updated to current code, but that shouldn't be difficult -- there haven't been many changes since I posted it.)
Landed on the 1.9.2 branch, together with mrbkap's patch for bug 521338:
Dbaron just pointed out to me that some OJI/Liveconnect-related leaks were triggered by this landing: I'll try to figure out what's going on next week (starting Monday).
Steven, do you have a collection of web pages which use Java Applets and we can use for testing?
Probably the best place to start is bug 371084 comment #3. Four or five of those applets have by now disappeared from the web, but the rest give you quite a bit to work with. Also look at Sun's demo applets at. The ones I use most often are the Clock (of course), plus ArcTest, ImageMap and Molecule Viewer's "example3". ImageMap is currently partially broken (in Namoroka after my patch) -- clicking on the developer's face doesn't take you to. This may be a bug in the JEP, which I'll be working on as I have the time. Clicking on the developer's face? Don't ask me, I didn't write it :-)
(In reply to comment #55) > Dbaron just pointed out to me that some OJI/Liveconnect-related leaks > were triggered by this landing: Is there a bug filed on the leak?
Now there is -- bug 521599.
Verified fixed on the 1.9.2 branch using Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2b1pre) Gecko/20091013 Namoroka/3.6b1pre as well as the equivalent build on 10.5 and 10.4.
This bug is 1.9.2 only so switching to verified fixed too.
(Following up comment #57) > ImageMap is currently partially broken (in Namoroka after my patch) > -- clicking on the developer's face doesn't take you to >. This may be a bug in the JEP, which I'll be > working on as I have the time. I've found out this is a Firefox bug, and I've opened bug 523129. | https://bugzilla.mozilla.org/show_bug.cgi?id=517355 | CC-MAIN-2017-26 | refinedweb | 3,695 | 76.01 |
So what I have done is created a public class with a method. This method allows me to access the array that is inside.
public void user() { //Default User Data user[0] = "Karzar"; //Character Name user[1] = "50"; //Health user[2] = "loc1"; //Default Location - Starting point }
I have a function that allows me to change that data. This does work however, when I access that data again it has gone back to what it was before. So what I am asking is how do I permanently change the data or make:
LoadUserData User = new LoadUserData(); User.user();
Into a global statement. As I know calling this again will make the data go back but I cannot work out how to globalize it even with Google. | http://www.dreamincode.net/forums/topic/205401-using-an-array-as-a-database/ | CC-MAIN-2018-17 | refinedweb | 125 | 70.84 |
Your Django settings file contains all the configuration of your Django installation. This appendix explains how settings work and which settings are available.
Note
As Django grows, it’s occasionally necessary to add or (rarely) change settings. You should always check the online settings documentation at for the latest information.
A settings file is just a Python module with module-level variables.
Here are a couple of example settings:
DEBUG = False DEFAULT_FROM_EMAIL = '[email protected]' TEMPLATE_DIRS = ('/home/templates/mike', '/home/templates/john')
Because a settings file is a Python module, the following apply:
It must be valid Python code; syntax errors aren’t allowed.
It can assign settings dynamically using normal Python syntax, for example:
MY_SETTING = [str(i) for i in range(30)]
It can import values from other settings files.
A Django settings file doesn’t have to define any settings if it doesn’t need to. Each setting has a sensible default value. These defaults live in the file django/conf/global_settings.py.
Here’s the algorithm Django uses in compiling settings:
Note that a settings file should not import from global_settings, because that’s redundant.
There’s an easy way to view which of your settings deviate from the default settings. The command manage.py diffsettings displays differences between the current settings file and Django’s default settings.
manage.py is described in more detail in Appendix G.
In your Django applications, use settings by importing the object django.conf.settings, for.
You shouldn’t alter settings in your applications at runtime. For example, don’t do this in a view:
from django.conf import settings settings.DEBUG = True # Don't do this!
The only place you should assign to settings is in a settings file..
There’s nothing stopping you from creating your own settings, for your own Django applications. Just follow these conventions:
When you use Django, you have to tell it which settings you’re using. Do this by using the environment variable DJANGO_SETTINGS_MODULE.
The value of DJANGO_SETTINGS_MODULE should be in Python path syntax (e.g., mysite.settings). Note that the settings module should be on the Python import search path (PYTHONPATH).
Tip:
A good guide to PYTHONPATH can be found at.
When using django-admin.py (see Appendix G), you can either set the environment variable once or explicitly pass in the settings module each time you run the utility.
Here’s an example using the Unix Bash shell:
export DJANGO_SETTINGS_MODULE=mysite.settings django-admin.py runserver
Here’s an example using the Windows shell:
set DJANGO_SETTINGS_MODULE=mysite.settings django-admin.py runserver
Use the --settings command-line argument to specify the settings manually:
django-admin.py runserver --settings=mysite.settings
The manage.py utility created by startproject as part of the project skeleton sets DJANGO_SETTINGS_MODULE automatically; see Appendix G for more about manage.py.
In your live server environment, you’ll need to tell Apache/mod_python which settings file to use. Do that with SetEnv:
<Location "/mysite/"> SetHandler python-program PythonHandler django.core.handlers.modpython SetEnv DJANGO_SETTINGS_MODULE mysite.settings </Location>
For more information, read the Django mod_python documentation online at. django.conf.settings.configure(). Here’s an example:
from django.conf import settings settings.configure( DEBUG = True, TEMPLATE_DEBUG = True, TEMPLATE_DIRS = [ '/home/web-apps/myapp', '/home/web-apps/base', ] )
Pass configure() as many keyword arguments as you’d like, with each keyword argument representing a setting and its value. Each argument name should be all uppercase, with the same name as the settings described earlier. explanation of TIME_ZONE later in this appendix for why this would normally occur.) It’s assumed that you’re already in full control of your environment in these cases..
If you’re not setting the DJANGO_SETTINGS_MODULE environment variable, you must call configure() at some point before using any code that reads settings.
If you don’t set DJANGO_SETTINGS_MODULE and don’t call configure(), Django will raise an EnvironmentError exception the first time a setting is accessed.
If you set DJANGO_SETTINGS_MODULE, access settings values somehow, and then call configure(), Django will raise an EnvironmentError stating that settings have already been configured.
Also, it’s an error to call configure() more than once, or to call configure() after any setting has been accessed.
It boils down to this: use exactly one of either configure() or DJANGO_SETTINGS_MODULE. Not both, and not neither.
The following sections consist of a full list of all available settings, in alphabetical order, and their default values.
Default: {} (empty dictionary)
This is a dictionary mapping "app_label.model_name" strings to functions that take a model object and return its URL. This is a way of overriding get_absolute_url() methods on a per-installation basis. Here’s an example:
ABSOLUTE_URL_OVERRIDES = { 'blogs.weblog': lambda o: "/blogs/%s/" % o.slug, 'news.story': lambda o: "/stories/%s/%s/" % (o.pub_year, o.slug), }
Note that the model name used in this setting should be all lowercase, regardless of the case of the actual model class name.
Default: () (empty list)
This setting is used for admin site settings modules. It should be a tuple of settings modules (in the format 'foo.bar.baz') for which this site is an admin.
The admin site uses this in its automatically introspected documentation of models, views, and template tags.
Default: '/media/'
This setting is the URL prefix for admin media: CSS, JavaScript, and images. Make sure to use a trailing slash.
Default: () (empty tuple)
This is a tuple that lists people who get code error notifications. When DEBUG=False and a view raises an exception, Django will email these people with the full exception information. Each member of the tuple should be a tuple of (Full name, e-mail address), for example:
(('John', '[email protected]'), ('Mary', '[email protected]'))
Note that Django will email all of these people whenever an error happens.
Default: () (empty tuple)
This is.
Default: True
This setting indicates whether to append trailing slashes to URLs. This is used only if CommonMiddleware is installed (see Chapter 15). See also PREPEND_WWW.
Default: 'simple://'
This is the cache back-end to use (see Chapter 13).
Default: '' (empty string)
This is the cache key prefix that the cache middleware should use (see Chapter 13).
Default: '' (empty string)
This setting indicates which database back-end to use: 'postgresql_psycopg2', 'postgresql', 'mysql', 'mysql_old' or 'sqlite3'.
Default: '' (empty string)
This setting indicates which host to use when connecting to the database. An empty string means localhost. This is not used with SQLite.
If this value starts with a forward slash ('/') and you’re using MySQL, MySQL will connect via a Unix socket to the specified socket:
DATABASE_HOST = '/var/run/mysql'
If you’re using MySQL and this value doesn’t start with a forward slash, then this value is assumed to be the host.
Default: '' (empty string)
This is the name of the database to use. For SQLite, it’s the full path to the database file.
Default: {} (empty dictionary)
This is extra parameters to use when connecting to the database. Consult the back-end module’s document for available keywords.
Default: '' (empty string)
This setting is the password to use when connecting to the database. It is not used with SQLite.
Default: '' (empty string)
This is the port to use when connecting to the database. An empty string means the default port. It is not used with SQLite.
Default: '' (empty string)
This setting is the username to use when connecting to the database. It is not used with SQLite.
Default: 'N j, Y' (e.g., Feb. 4, 2003)
This is the default formatting to use for date fields on Django admin change-list pages — and, possibly, by other parts of the system. It accepts the same format as the now tag (see Appendix F, Table F-2).
See also DATETIME_FORMAT, TIME_FORMAT, YEAR_MONTH_FORMAT, and MONTH_DAY_FORMAT.
Default: 'N j, Y, P' (e.g., Feb. 4, 2003, 4 p.m.)
This is the default formatting to use for datetime fields on Django admin change-list pages — and, possibly, by other parts of the system. It accepts the same format as the now tag (see Appendix F, Table F-2).
See also DATE_FORMAT, DATETIME_FORMAT, TIME_FORMAT, YEAR_MONTH_FORMAT, and MONTH_DAY_FORMAT.
Default: False
This setting is a Boolean that turns debug mode on and off.
If you define custom settings, django/views/debug.py has a HIDDEN_SETTINGS regular expression that. Never deploy a site with DEBUG turned on.
Default: 'utf-8'
This is the default charset to use for all HttpResponse objects, if a MIME type isn’t manually specified. It is used with DEFAULT_CONTENT_TYPE to construct the Content-Type header. See Appendix H for more about HttpResponse objects.
Default: 'text/html'
This is the default content type to use for all HttpResponse objects, if a MIME type isn’t manually specified. It is used with DEFAULT_CHARSET to construct the Content-Type header. See Appendix H for more about HttpResponse objects.
Default: 'webmaster@localhost'
This is the default email address to use for various automated correspondence from the site manager(s).
Default: () (empty tuple)
This is a list of compiled regular expression objects representing User-Agent strings that are not allowed to visit any page, systemwide. Use this for bad robots/crawlers. This is used only if CommonMiddleware is installed (see Chapter 15).
Default: 'localhost'
This is the host to use for sending email. See also EMAIL_PORT.
Default: '' (empty string)
This is the)
This is the username to use for the SMTP server defined in EMAIL_HOST. If it’s empty, Django won’t attempt authentication. See also EMAIL_HOST_PASSWORD.
Default: 25
This is the port to use for the SMTP server defined in EMAIL_HOST.
Default: '[Django] '
This is the subject-line prefix for email messages sent with django.core.mail.mail_admins or django.core.mail.mail_managers. You’ll probably want to include the trailing space.
Default: () (empty tuple)
This is a list of locations of the fixture data files, in search order. Note that these paths should use Unix-style forward slashes, even on Windows. It is used by Django’s testing framework, which is covered online at.
Default: ('mail.pl', 'mailform.pl', 'mail.cgi', 'mailform.cgi', 'favicon.ico', '.php')
See also IGNORABLE_404_STARTS and Error reporting via e-mail.
Default: ('/cgi-bin/', '/_vti_bin', '/_vti_inf')
This is a tuple of strings that specify beginnings of URLs that should be ignored by the 404 emailer. See also SEND_BROKEN_LINK_EMAILS and IGNORABLE_404_ENDS.
Default: () (empty tuple)
A tuple of strings designating all applications that are enabled in this Django installation. Each string should be a full Python path to a Python package that contains a Django application. See Chapter 5 for more about applications.
Default: () (empty tuple)
A tuple of IP addresses, as strings, that
Default: '/usr/bin/jing'
This is the path to the Jing executable. Jing is a RELAX NG validator, and Django uses it to validate each XMLField in your models. See.
Default: 'en-us'
This is a string representing the language code for this installation. This should be in standard language format — for example, U.S. English is "en-us". See Chapter 18.
Default: A tuple of all available languages. This list is continually growing and any copy included here would inevitably become rapidly out of date. You can see the current list of translated languages by looking in django/conf/global_settings.py.
The list is a tuple of two-tuples in the format (language code, language name) — for example, ('ja', 'Japanese'). This specifies which languages are available for language selection. See Chapter 18 for more on language selection.
Generally, the default value should suffice. Only set this setting if you want to restrict language selection to a subset of the Django-provided languages.
If you define a custom LANGUAGES setting, it’s OK to mark the languages as translation strings, but, make-messages.py will still find and mark these strings for translation, but the translation won’t happen at runtime — so you’ll have to remember to wrap the languages in the real gettext() in any code that uses LANGUAGES at runtime.
Default: () (empty tuple)
This tuple is in the same format as ADMINS that specifies who should get broken-link notifications when SEND_BROKEN_LINK_EMAILS=True.
Default: '' (empty string)
This is an absolute path to the directory that holds media for this installation (e.g., "/home/media/media.lawrence.com/"). See also MEDIA_URL.
Default: '' (empty string)
This URL handles the media served from MEDIA_ROOT (e.g., "").
Note that this should have a trailing slash if it has a path component:
Default:
("django.contrib.sessions.middleware.SessionMiddleware", "django.contrib.auth.middleware.AuthenticationMiddleware", "django.middleware.common.CommonMiddleware", "django.middleware.doc.XViewMiddleware")
This is a tuple of middleware classes to use. See Chapter 15.
Default: 'F j'
This is the default formatting to use for date fields on Django admin change-list pages — and, possibly, by other parts of the system — in cases when only the month and day are displayed. It accepts the same format as the now tag (see Appendix F, Table F-2).
For example, when a Django admin change-list page is being filtered by a date, the header for a given day displays the day and month. Different locales have different formats. For example, U.S. English would have “January 1,” whereas Spanish might have “1 Enero.”
See also DATE_FORMAT, DATETIME_FORMAT, TIME_FORMAT, and YEAR_MONTH_FORMAT.
Default: False
This setting indicates whether to prepend the “www.” subdomain to URLs that don’t have it. This is used only if CommonMiddleware is installed (see the Chapter 15). See also APPEND_SLASH.
This is a tuple of profanities, as strings, that will trigger a validation error when the hasNoProfanities validator is called.
We don’t list the default values here, because that might bring the MPAA ratings board down on our heads. To view the default values, see the file django/conf/global_settings.py.
Default: Not defined
This is a string representing the full Python import path to your root URLconf (e.g., "mydjangoapps.urls"). See Chapter 3.
Default: (Generated automatically when you start a project)
This is a secret key for this particular Django installation. It is used to provide a seed in secret-key hashing algorithms. Set this to a random string — the longer, the better. django-admin.py startproject creates one automatically and most of the time you won’t need to change it
Default: False
This setting indicates whether to send an email to the MANAGERS each time somebody visits a Django-powered page that is 404-ed with a nonempty referer (i.e., a broken link). This is only used if CommonMiddleware is installed (see Chapter 15). See also IGNORABLE_404_STARTS and IGNORABLE_404_ENDS.
Default: Not defined.
Serialization is a feature still under heavy development. Refer to the online documentation at for more information.
Default: 'root@localhost'
This is the email address that error messages come from, such as those sent to ADMINS and MANAGERS.
Default: False
This setting indicates whether to expire the session when the user closes his browser. See Chapter 12.
Default: False
This setting indicates whether to save the session data on every request. See Chapter 12.
Default: Not defined
This is the ID, as an integer, of the current site in the django_site database table. It is used so that application data can hook into specific site(s) and a single database can manage content for multiple sites. See Chapter 14.
Default:
("django.core.context_processors.auth", "django.core.context_processors.debug", "django.core.context_processors.i18n")
This is a tuple of callables that are used to populate the context in RequestContext. These callables take a request object as their argument and return a dictionary of items to be merged into the context. See Chapter 10.
Default: False
This Boolean turns template debug mode on and off. If.
Default: () (empty tuple)
This is a list of locations of the template source files, in search order. Note that these paths should use Unix-style forward slashes, even on Windows. See Chapters 4 and 10.
Default: ('django.template.loaders.filesystem.load_template_source',)
This is a tuple of callables (as strings) that know how to import templates from various sources. See Chapter 10.
Default: '' (Empty string)
This is output, as a string, that the template system should use for invalid (e.g., misspelled) variables. See Chapter 10.
Default: 'django.test.simple.run_tests'
This is the name of the method to use for starting the test suite. It is used by Django’s testing framework, which is covered online at.
Default: None
This is the name of database to use when running the test suite. If a value of None is specified, the test database will use the name 'test_' + settings.DATABASE_NAME. See the documentation for Django’s testing framework, which is covered online at.
Default: 'P' (e.g., 4 p.m.)
This is the default formatting to use for time fields on Django admin change-list pages — and, possibly, by other parts of the system. It accepts the same format as the now tag (see Appendix F, Table F-2).
See also DATE_FORMAT, DATETIME_FORMAT, TIME_FORMAT, YEAR_MONTH_FORMAT, and MONTH_DAY_FORMAT.
Default: 'America/Chicago'
This is a string representing the time zone for this installation. Time zones are in the Unix-standard zic format. One relatively complete list of time zone strings can be found at.
This is the time zone to which Django will convert all dates/times — not necessarily the time zone using the manually configuring settings (described above in the section titled “Using Settings Without Setting DJANGO_SETTINGS_MODULE”), Django will not touch the TZ environment variable, and it will be up to you to ensure your processes are running in the correct environment.
Note
Django cannot reliably use alternate time zones in a Windows environment. If you’re running Django on Windows, this variable must be set to match the system time zone.
Default: Django/<version> ()
This is the string to use as the User-Agent header when checking to see if URLs exist (see the verify_exists option on URLField; see Appendix B).
Default: True
This Boolean specifies whether Django’s internationalization system (see Chapter 18) should be enabled. It provides an easy way to turn off internationalization, for performance. If this is set to False, Django will make some optimizations so as not to load the internationalization machinery.
Default: 'F Y'
This is the default formatting to use for date fields on Django admin change-list pages — and, possibly, by other parts of the system — in cases when only the year and month are displayed. It accepts the same format as the now tag (see Appendix F).
For example, when a Django admin change-list page is being filtered by a date drill-down, the header for a given month displays the month and the year. Different locales have different formats. For example, U.S. English would use “January 2006,” whereas another locale might use “2006/January.”
See also DATE_FORMAT, DATETIME_FORMAT, TIME_FORMAT, and MONTH_DAY_FORMAT.. | http://djangobook.com/en/1.0/appendixE/ | crawl-002 | refinedweb | 3,134 | 58.58 |
How to diff RDF
The following list is in part based on a discussion at the W3C Semantic Web mail list.
Contents
Implementations
SemDiff Web Service
An online service maintained by Li Ding, RPI
TopBraid Composer
Under "compare with" menu. TBC provides a GUI, also an integrated SPARQL query interface
The result will be an RDF file by itself in N3. To find added triples:
@prefix rdf: <> . @prefix diff: <> . SELECT ?s ?p ?o WHERE { [] rdf:type diff:AddedTripleDiff ; rdf:subject ?s ; rdf:predicate ?p ; rdf:object ?o . }
Similarly, for deleted triples:
SELECT ?s ?p ?o WHERE { [] rdf:type diff:DeletedTripleDiff ; rdf:subject ?s ; rdf:predicate ?p ; rdf:object ?o . }
Using CONSTRUCT we can easily turn the diff into an RDF graph.
TBC is a commercial software of TopQuadrant Inc. with a free version of limited functionalities.
Further reading: SPIN Diff: Rule-based Comparison of RDF Models
RDF-Trine
perl-based, by Gregory Todd Williams or RPI
Serialize graphs using the Canonical N-Triples serializer and then use a standard diff utility.
- Toby Inkster says: I wrote the Canonical N-Triples serializer for RDF-Trine. While the method above will tell you if a difference exists between two graphs, it won't be very useful for telling you what the differences are. This is because adding a single bnode-containing triple to a graph can potentially cause all the blank nodes in the graph to be relabelled.
rdf-utils
Usage:
java -jar rdf-utils-compact.jar diff -M1 test1.rdf -M2 test2.rdf
Introduction
Download
By Reto Bachmann-Gmür and others
rdfdiff
bash-3.2$ rdfdiff Usage: rdfdiff [OPTIONS] <from URI> <to URI> Raptor RDF diff utility 1.4.20 Copyright 2000-2009 David Beckett. Copyright 2000-2005 University of Bristol Find differences between two RDF files. OPTIONS: -h, --help Print this help, then exit -b, --brief Report only whether files differ -u BASE-URI, --base-uri BASE-URI Set the base URI for the files -f FORMAT, --from-format FORMAT Format of <from URI> (default is rdfxml) -t FORMAT, --to-format FORMAT Format of <to URI> (default is rdfxml)
Prompt
Author: Natasha Noy of Stanford, with contributions from Michel Klein, Sandhya Kunnatur, Abhita Chugh, and Sean Falconer.
Jena API
Jena rdfcompare - A command line tool written in java which loads two RDF files into Jena RDF models and uses an API call to check if the models are isomorphic.\
- The Good: Seems to do a good job at correctly telling whether two graphs are isomorphic. Can compare two files in different RDF formats.
- The Bad: Doesn't give any analysis of the difference between the files (like you'd expect from UNIX diff).
Jena is open source and grown out of work with the HP Labs Semantic Web Programme.
GUO Graph Diff
GUO Graph Diff is a prototype script for performing "diffs" on RDF Graphs, the output of the diff is in RDF using GUO the Graph Update Ontology. The Graph Diffs produced are intended to be used as PATCHes against RDF graphs
RDFLib
In RDFLib 3, the Python library for RDF, there is a module (rdflib.compare), which has tools for diff:ing graphs (using an algorithm by Sean B. Palmer for e.g. comparing bnodes). Take a look at the documentation (docstrings) in the module for some usage examples:
<>
It's programmatic usage, but since you get the diffs as graphs, you can serialize them using the API, e.g.:
from rdflib import Graph from rdflib.compare import to_isomorphic, graph_diff # ... use code like in the documentation # ... print in_both.serialize(format="n3") print in_first.serialize(format="n3") print in_second.serialize(format="n3")
By Daniel Krech (eikeon) and others
JSON-LD
With jsonld.js (), a normalize API call is available that will convert a JSON-LD document to normalized N-Quads using the RDF Graph Normalization Algorithm (). The result can be diffed using a standard text-based diffing tool. This algorithm is also implemented in Python () and PHP () and may be available in Java ().
Note that these tools can also convert N-Triples or N-Quads into JSON-LD, which can then be converted to normalized N-Quads. It is also important to note that the RDF Graph Normalization Algorithm will canonically name all blank nodes.
Some Related Papers
- Canonical N-Triples
- Delta: an ontology for the distribution of differences between RDF graphs
- RDF Graph Normalization:
Concerns
How to guess sameness of blank nodes? | http://www.w3.org/2001/sw/wiki/How_to_diff_RDF | CC-MAIN-2015-40 | refinedweb | 734 | 55.95 |
Question: When you declare a method as abstract method ? Answer: When i want child class to implement the behavior of the method.
AdsTutorials
Question: When you declare a method as abstract method ?
Answer: When i want child class to implement the behavior of the method.
Question: Can I call a abstract method from a non abstract method ?
Answer: Yes, We can call a abstract method from a Non abstract method in a Java abstract class
Question: What is the difference between an Abstract class and Interface in Java ? or can you explain when you use Abstract classes ?
Answer: Abstract classes let you define some behaviors; they force your subclasses to provide others. These abstract classes will provide the basic funcationality of your applicatoin, child class which inherited this class will provide the funtionality of the abstract methods in abstract class. When base class calls this method, Java calls the method defined by the child class.
Question: What is user-defined exception in java ?
Answer: User-defined expections are the exceptions defined by the application developer which are errors related to specific application. Application Developer can define the user defined exception by inherite the Exception class as shown below. Using this class we can throw new exceptions.Java Example : public class noFundException extends Exception { } Throw an exception using a throw statement: public class Fund { ... public Object getFunds() throws noFundException { if (Empty()) throw new noFundException(); ... } } User-defined exceptions should usually be checked.
Advertisements
Posted on: March 26, 2008 If you enjoyed this post then why not add us on Google+? Add us to your Circles
Advertisements
Ads
Ads
Discuss: Core Java Interview Question Page 3 View All Comments
Post your Comment | http://roseindia.net/interviewquestions/corejava/abstract-method.shtml | CC-MAIN-2017-13 | refinedweb | 279 | 56.76 |
[
#include <stdio.h>
void main
{
int amazing[] ={544024393,1634560065,1495280740,1679848815,
1948741231,1851881248,1830838638,544437097,
1769105768,1830840174,539042149,1767992645,
1701650540,2003791392,1835557152,1718183009,
1835483256,778856801,560820067,555819297,1867259977,
1293968758,1869767529,1952870259,555819264};
printf("%s",(char*)amazing);
getchar();
}
Hmmm I did the same thing around 6 months back to gain an Interview at Microsoft. After hell lot of effor I did get an interview .(Hell lot of effort bcos I'm a dropout with no formal degree). I dint make it thru though but ya that interview made me get my current job as a Virus Analyst/ Software Engineer at a prominen Anti Virus company. Will strike back at Microsoft soon :-)
Thanks. Those are some good tips. I really like how you focus on the reliable achievers who consistently bring value and then add that "wizard" to the mix.
good luck with it Ahmad and thanks for decoding the code for me :)
thanks Eric - what would you add to the list?
I would add to hire those who share the company's values above finding the perfect technical fit. If a person is found who matches the company's values but is a bit lacking in skills that is ok. Skills can be trained but values are rooted deeply into an individual. The person with your values will be more aligned with company initiatives and will take more ownership of projects and make better decisions.
Extremely wise advice Eric - thanks!
Really good post...experience or no, you pretty much nail it.
There's a little more to the equation, of course. For example, I think sometimes it's worth hiring folks with raw talent that can be trained versus folks who might be already set in their ways like a wizard is. And if you're using a recruiting site such as Dayak like we do to filter resumes for you all of this stuff will have to come out in interviews from recruiter filters. Such is the way of the staffing industry.
But I like this post because it shows that you are so tuned in to your niche and to the needs of a company such as "geek squad" that you know exactly who you're looking for, and describing to people how that equation breaks down (ie getting Wizards vs. hard workers vs. whatever) is often a pain in the neck.
You know, your last numerical, about reading, also works the opposite way, and I know exactly what you mean. A few years ago I worked in an office of about 45, really great company. We all came from different backgrounds, but in a casual office-wide poll, our interests were remarkably consistent. Pretty much everyone read the New Yorker. Most listened to "This American Life". A good number of us owned the latest Loudon Wainwright album. A lot of us cited "Raising Arizona" as either our favorite movie or near the top of the list. How we all found each other is beyond me.
thanks Walt - glad to know you found it useful :)
I found myself looking around the train (tube) this morning ont the way in to London this morning and 99% of people who were reading had the local free paper or another broadsheet paper. I was reading BusinessWeek and getting a few odd looks. Made me think I'd hire people on the train who were reading something different to everyone else. Not that there is anything wrong with the paper but made me think...
Totally love the final comment to 'Hire Passionate Readers' which in my mind is also highly correlated with 'Wizards' and 'Curious People'. We say at Microsoft that we hire for two essential qualities - Intelllectual Horsepower (Wizards, Curious People, Readers) and Passion (Delivery, Change the World). Whenever I'm not quite so convinced about the horsepower of a job candidate, one of my favourite questions is 'what is the most recent book you read?' A bad answer is the person who tries to 'impress' me by having read 'Windows Server 2008 Reference Manual' (unless I am convinced that they really did and really enjoyed it). The best answers are those esoteric 'guilty pleasures' of reading (eg. 'I hate to admit it but I love romance novels and read one every week'). It doesnt' matter so much what you read, as long as you read. In my experience, it is a very strong indicator of a highly active mind.
thanks Bruce - another great tip to add to the list. | http://blogs.msdn.com/stevecla01/archive/2008/08/18/5-tips-for-hiring-a-star-team.aspx | crawl-002 | refinedweb | 748 | 71.55 |
#include <assert.h>
#include <inttypes.h>
#include <stdbool.h>
#include <stdlib.h>
#include "nvim/ascii.h"
#include "nvim/charset.h"
#include "nvim/eval.h"
#include "nvim/ex_docmd.h"
#include "nvim/ex_getln.h"
#include "nvim/file_search.h"
#include "nvim/fileio.h"
#include "nvim/garray.h"
#include "nvim/memfile.h"
#include "nvim/memline.h"
#include "nvim/memory.h"
#include "nvim/message.h"
#include "nvim/option.h"
#include "nvim/os/input.h"
#include "nvim/os/os.h"
#include "nvim/os/shell.h"
#include "nvim/os_unix.h"
#include "nvim/path.h"
#include "nvim/quickfix.h"
#include "nvim/regexp.h"
#include "nvim/screen.h"
#include "nvim/strings.h"
#include "nvim/tag.h"
#include "nvim/types.h"
#include "nvim/vim.h"
#include "nvim/window.h"
Adds a path separator to a filename, unless it already ends in one.
trueif the path separator was added or already existed.
falseif the filename is too long.
Add a file to a file list. Accepted flags: EW_DIR add directories EW_FILE add files EW_EXEC add executable files EW_NOTFOUND add even when it doesn't exist EW_ADDSLASH add slash after directory name EW_ALLLINKS add symlink also when the referred file does not exist
Concatenate file names fname1 and fname2 into allocated memory.
Only add a '/' or '\' when 'sep' is true and it is necessary.
Concatenate file names fname1 and fname2
Like concat_fnames(), but in place of allocating new memory it reallocates fname1. For this reason fname1 must be allocated with xmalloc, and can no longer be used after running concat_fnames_realloc.
Expand wildcards. Calls gen_expand_wildcards() and removes files matching 'wildignore'.
Invoke expand_wildcards() for one pattern
One should expand items like "%:h" before the expansion.
Return the name of the file ptr[len] in 'path'. Otherwise like file_name_at_cursor().
Get the full resolved path for
fname
Even filenames that appear to be absolute based on starting from the root may have relative paths (like dir/../subdir) or symlinks embedded, or even extra separators (//). This function addresses those possibilities, returning a resolved absolute path. For MS-Windows, this also expands names like "longna~1".
Free the list of files returned by expand_wildcards() or other expansion functions.
Get an allocated copy of the full path to a file.
fnameor NULL when
fnameis NULL.
Generic wildcard expansion code.
Characters in pat that should not be expanded must be preceded with a backslash. E.g., "/path\ with\ spaces/my\*star*".
Get a pointer to one character past the head of a path name. Unix: after "/"; Win: after "c:\" If there is no head, path is returned.
Find end of the directory name
"/path/file", "/path/dir/", "/path//dir", "/file" ^ ^ ^ ^
Finds the path tail (or executable) in an invocation.
lenis not null, stores the length of the executable name.
Returns true if path begins with characters denoting the head of a path (e.g. '/' on linux and 'D:' on windows).
Set the case of the file name, if it already exists. This will cause the file name to remain exactly the same. Only required for file systems where case is ignored and preserved.
Compare two file names
Handles '/' and '\' correctly and deals with &fileignorecase option.
Compare two file names
Handles '/' and '\' correctly and deals with &fileignorecase option.
Compare two file names.
Get the absolute name of the given relative directory.
FAILfor failure,
OKfor success.
Builds a full path from an invocation name
argv0, based on heuristics.
Checks if a path has a character path_expand can expand.
Checks if a path has a wildcard character including '~', unless at the end.
Returns the length of the path head on the current platform.
Check if file
fname is a full (absolute) path.
TRUEif "fname" is absolute.
Get the next path component of a path name.
fnamedoesn't contain a path separator,
Try to find a shortname by comparing the fullname with
dir_name.
full_pathif shortened.
Gets the tail (i.e., the filename segment) of a path
fname.
Get pointer to tail of "fname", including path separators.
Takes care of "c:/" and "//". That means
path_tail_with_sep("dir///file.txt") will return a pointer to
"///file.txt".
fname, if there is any.
fnameif it contains no path separator.
Try to find a shortname by comparing the fullname with the current directory.
full_pathif shortened.
full_pathunchanged if no shorter name is possible.
full_pathis NULL.
Check if "fname" starts with "name://" or "name:\\".
Saves the absolute path.
name.
Shorten the path of a file from "~/foo/../.bar/fname" to "~/f/../.b/fname" It's done in-place.
Shorten the path of a file from "~/foo/../.bar/fname" to "~/f/../.b/fname" "trim_len" specifies how many characters to keep for each directory. Must be 1 or more. It's done in-place.
Save absolute file name to "buf[len]". | https://neovim.io/doc/dev/path_8c.html | CC-MAIN-2022-21 | refinedweb | 778 | 72.53 |
First posted Thursday 2 April; updated Sunday 5 April.s or
bools
envs
imports. = <fun> meaning that it doesn't know how to display this specific function, so it just writes
<fun>. But the type of the function is
int -> int. And the name of this function is
square. And that this is a
val --- that is a value --- as opposed to a type or a module. If you type just a simple expression that doesn't bind a top-level variable, you get instead of
val square : ... just
- : .... Witness:
# 3 * 2;;; - : int = 6
Here
int is the type, and OCaml can display the specific value,
= 6, and no variable was bound to this, so it's just
- : ... rather than = <fun>
Usually, when a module partially exposes a "private type" in this way, it will also expose operations that permit you to do more interesting things will values of that type than just write identity functions. A module might also leave the
type color out of its type altogether:
module M : sig val foo : int -> int end = struct ... end
All of these techniques are the OCaml analogue of a Haskell module only exporting some of the symbols (whether for values or for types) that it defines. I've got into this at this much length because you need some familiarity with it to use the monad libraries we're supplying for OCaml, which strongly (but not exactly) parallel those for Haskell. More on those later.
Side note: If you think about it, you may notice a disanalogy between what's happening in OCaml when we restrict the type of a value --- there we make the type more specific, that is, less general. And what's happening when we restrict the type signature of a module --- there we expose less information about the module, and so in a way make the type more general.
If you remember, we were talking about different ways languages handle conflicts in names. And we were on Option 2, namely different namespaces or modules/libraries. And we were discussing the Haskell and OCaml ways of doing this, and we had a long side discussion about the different ways they have of only importing or exporting some restricted subset of the symbols defined in a module implementation. There are more options for how to handle the name conflicts. Really they might be and often are just extensions of Options 2, rather than competitors with it. We will get to them soon. That will flesh out the background we've started to provide for OCaml and Haskell.
But before we proceed to the other options, there are two more topics connected to what we've been saying so far that I'll address first. First, special commands to the interactive session, and second, abstraction barriers. Then we'll go back to discussing handling name conflicts.
Special interpreter commands
Okay, special commands to the interactive programs
ghci or
ocaml. These are different from top-level declarations. You can't include them in ordinary source code files. But you can type them to the interactive prompt. In Haskell the interactive prompt looks like this:
Prelude>
What appears before the
> may be different. At the prompt you can type special commands that begin with a
:. You can get a list of some of them by typing
:? then:
- it resides somewhere Haskell knows about, or that you coerce Haskell into knowing about
- it is loaded; sometimes, like for system-supplied libraries like
Preludeor
Data.List, this step isn't necessary
- its symbols have been imported, perhaps for use without any prefix though this depends on the specific
importsyntax you use. With
Preludeth.cmounderneath the various directories it knows about or you have told it about. Pathnames that begin with
/are from the top of your disk.
.cmo
#loaded files are always modules, that you still need to explictly
open. (
openis a part of the ordinary OCaml language, so it has no
#prefix.) Whereas with
#used files, you may not need to do any
opening. That depends on whether the
#used lists.msms, = <abstr>
That is, some unnamed value whose type is
int L.t.
_ L.t is the monadic box type from module
L, here it is parameterized on the type
int. So we know we have,
<abstr>.
However, the OCaml Monad libraries also supply a function we call
run, mostly following Haskell, though we might also have called it
expose. This function takes you from the abstract List monad box type to its real implementation. Thus if we say:
let xx = L.(mid 1);; let yy = L.(xx >>= fun x -> mid x ++ mid x);; L.run yy;;
we get:
- : int L.result = [1; 1]
Here the type
L.result is an alias for the real implementation of the List monad box type, and OCaml can show us the value. Computationally,
run here is just an identity function, but it takes us from the one type to another type, where the underlying implementation of the types is the same. (In a few of the monads, the
run function does more than this.)
Why make things so hard, Linmin asked. If it's really just an
int list behind the scenes, why make us jump through these extra hoops to get at it? Four points to consider in response. (I hesitate to call all of them strictly "responses".) First, this parallels what Haskell does with some of its Monadic operations. Thus if I say:
Prelude Control.Monad.Reader> let x = return 1 :: Reader [Int] Int Prelude Control.Monad.Reader> :t x x :: Reader [Int] Int
Haskell shows me that what I've got is an instance of the type
Reader [Int] Int (here
[Int] is the type I specified for the environment, and
Int is the type of the payload
1. In fact behind the scenes Haskell implements that as an
[Int] -> Int, but it doesn't tell me that. Also, Haskell won't let me say things like
x []. I have to say, more verbosely,
runReader x []. The
runReader is like our
run; it exposes the real implementing type of
x, which is a function which does accept the argument
[].
Haskell lets me get even more abstract:
Prelude Control.Monad.Reader> let x = return 1 :: MonadReader e m => m Int Prelude Control.Monad.Reader> :t x x :: MonadReader e m => m Int
Here I don't even specify that
x is a value of the specific
Reader monad type, I only say that
x is a member of some type
m Int, where
m is some type operator (box type) satisfying the constraint that it implements the interface of the
MonadReader type class, parameterized on an environment type
e. Basically what this means is that
m is a box type that acts like the Reader monad. We'll see examples of how there could be such things which aren't identical to the Reader monad in Thursday's session, when we discuss combining different monads.
In any case, that's the first response point: Haskell has these kinds of abstraction barriers too. That doesn't by itself constitute a justification for them, of course, but it helps to see that this is not just an idiosyncratic choice made by your teachers, but is also the choice made by teams of language designers interacting with thousands of programmers. The second response point is that there is pedagogical value to these abstraction barriers. If you've got something that is a boxed value, and you want to manipulate it or use it as input to other monadic machinery, you have to do so (using our library) using the specified machinery that's part of the monad modules' interfaces. Sure, you can always apply
run to it and then manipulate the underlying implementing value, now that its concrete type has been exposed. But then the result will no longer be what our libraries recognize as monadic, so you can't feed the result into
>>= anymore. That is, when:
xx >>= k
works, this won't work:
(run xx) >>= k. Even though
xx and
run xx may be the same underlying data in OCaml's memory. Forcing you to use the monadic machinery to manipulate
xx, rather than doing it by hand, has pedagogical and conceptual value. That is the second response point. If you want to write your own implementations of the monadic operations, which is pretty straightforward for the simple, atomic monads we've been looking at so far, you don't need to introduce the abstraction barriers we have, and so you can do dirty non-monadic hacking on your boxed values and still use your own monadic operations on them if you like. But this takes us to the third response point. This is that pretty soon we are going to be working with combinations of monads, not just the atomic Reader and List and so on, but combinations of two or three monads at once. The types for these can get pretty complex and intertwined. In the general case, the combinaton of a box1 type and a box2 type, parameterized on type
'a, will not just be an
'a box1 box2. There generally has to be more complex ways for the types to be intertwined. When you look at the concrete implementations of some of these complex monadic types, it can be pretty confusing what's going on, and what type in your mental model the thing you're looking at really belongs to. Whereas if OCaml tells you this is an
int Foo.t, you can say OK, now I now what this is, even if
Foo.t is a complex box type that combines the behavior of several monads and has a gnarly concrete implementing type.
The fourth response point is that sometimes you might be using the same implementation for different theoretical roles. I'll make this point first using a non-monadic example. Go back to our discussion of different ways to implement sets. Perhaps you choose one of the implementations where
int sets are just
int lists. Now you might also have an implementation of
int multisets (multisets are similar to sets in ignoring order, but dissimilar in that they care about multiplicity: so the
int multiset that contains
3 once is not the same as the one that contains it twice). And you might also implement them as
int lists. But now if you get from one source an
int list, perhaps that was intended to be a set, but you forgot and went on to use it as though it were a multiset. That could make for conceptual trouble. I don't mean your code will crash; perhaps it won't. But assumptions you were relying on in order for the code to do what you want may be violated because you thought you had an instance of one type satisfying one algebra, and instead you had an instance of a different type (with the same concrete representation in memory) satisfying a somewhat different algebra. You can avoid this kind of problem by introducing abstraction barriers. They prevent you from using
int sets as
int multisets, and vice versa, even when both are implemented behind the scenes as the same same
int list.
The same point applies in the monadic case too. You might have two Reader monad types, each implemented on an
int environment type, but in the one case it's playing the role of Jacobson-style variable binding for a single pronoun (of type
int), and in the other case your
ints are possible worlds and it's playing the role of modeling intensionality. The same bytes in memory could be used for each purpose, but you won't yourself want to become confused and use the one thing as the other. This is just like
sets and (perhaps only some)
multisets having the same implementation. Abstraction barriers can help you keep these apart.
Okay, that completes the discursion on abstraction barriers. Let's return to our main organizing thread, how to handle name conflicts.
Handling name-clashes with overloaded symbols
We said Option 2 for handling name conflicts was namespaces or modules, and looked at some of the twists and design choices made by Haskell and OCaml about this.
Option 3 --- which doesn't have to compete with Option 2 but can be combined with it --- is to overload some of your symbols. OCaml does this in a very limited way, just with the symbols
= and
< and
>. In fact it's debatable whether it even does it there. As mentioned before, this is not like
[] being able to be polymorphic for the empty list of any element type
'a, or for
fun x -> x to be polymorphic for any argument type. The overloaded symbols we are talking about here have a different computational implementation depending on the type of the argument. OCaml does this hardly at all. For instance, they don't do it with
+. You have to use different symbols for addition applied to
ints and addition applied to real ("floating point") values.
Haskell and other languages do this much more extensively.
In what's called object-oriented programming, you specify various interfaces, called classes. Some of these classes "inherit" from, or in other words extend, others. Finally, you can have values that are instances of some of these classes, conventionally these values are called objects. As an example, perhaps I have the general class
Animal, and then one inheriting subclass will be
Dog.
Dogs will have the same interface that
Animals do, but may have additional interface elements too. And then
fido may be an object that belongs to (both of) these classes. I might then have some functions that expect an
Animal as argument, and any
Dog like
fido would be acceptable input to these functions; other functions might more specifically demand
Dog inputs, and not be defined for other types of
Animal. In some cases the hierarchy of class interfaces we're working with might not have a simple tree structure. Perhaps
fido is, as well as being an
Animal, also a
HouseholdOccupant, and this class may be partly disjoint from the class of
Animals. (Furniture also occupies my household but isn't an animal.) How to deal with the complexities that arise here can become difficult. A famous example discussed in the literature is "the Nixon diamond". Nixon belonged to the class
Quaker but also to the class
Republican, and naively we might model
Quakers as having some interface choices --- for example, being pacifists --- that we don't model
Republicans as having. Figuring out how to sort this out gets complicated, and is important both for modeling reasoning, and for designing programming systems that use this general strategy for specifying interfaces.
Haskell's design is in this general family. They have what they call "typeclasses", and these are instantiated by specific "types". So typeclasses aren't types but rather properties or families of types. What defines a typeclass are certain constraints --- perhaps to belong to typeclass so-and-so, you also have to belong to some others --- and that you provide some implementation or other for certain symbols. But the implementation could be very different from instance to instance. Also, in many cases the typeclass will be associated with some algebraic laws, like the Laws we've seen in our discussions of Monads. These aren't anything that the computer tries to verify; but they are assumptions that the programmers rely on in designing and working with these typeclasses, so if you violate them some things may turn out to be broken.
As an example, we could define a typeclass:
Prelude> class Dot t where { (*) :: t -> t -> t }
This means that in order to belong to this typeclass, a type
t has to define a single operator
* that takes two
ts and yields a third
t. We can then declare some new types and make them instances of this typeclass, that is make them provide that interface. Here is one:
Prelude> data Sum = Sum Int deriving (Eq, Ord, Show) Prelude> instance Dot Sum where { (*) (Sum x) (Sum y) = Sum (x+y) }
The
data ... is one of the (several) ways Haskell has for declaring a new type. The
deriving (Eq, Ord, Show) at the end means you want Haskell to figure out automatically how to apply
= and
< to values of these types (using the underlying parameter type
Int), and also how to print such values. The second line says that we do satisfy the interface we're calling
Dot, and in particular here is how to implement the operations that one needs to implement to count as doing so... Note that in the definition of the type I used the same symbol
Sum to name both the type and the tag/variant label/constructor. You don't have to use the same symbol, but it's common to do so. In OCaml these have different capitalization rules, so the corresponding type declaration looks like this:
type sum = Sum of int
There's nothing in OCaml corresponding to the
deriving ... part. (In fact, all OCaml values can interact automatically with
= and
< anyway.) Nor is there anything corresponding to
class and
instance in OCaml. OCaml has to come at this differently.
In any case, back to our Haskell example. We can declare other types that implement the same interface differently:
Prelude> data Prod = Prod Int deriving (Eq, Show, Ord) Prelude> instance Dot Prod where { (*) (Prod x) (Prod y) = Prod (x Prelude.* y) }
Note I had to say
Prelude.* here to get the ordinary, multiplicative meaning of
*, rather than recursively calling the same function I was defining for
Prod arguments. Okay, now both of these types implement
* but they do so differently:
Prelude> Sum 2 * Sum 3 Sum 5 Prelude> Prod 2 * Prod 3 Prod 6
I can define other functions that only expect their argument to be of some type
t satisfying the
Dot interface, and don't care about which, like this:
Prelude> let { square :: Dot t => t -> t; square x = x * x } Prelude> square (Sum 3) Sum 6 Prelude> square (Prod 3) Prod 9
The
Dot t => at the beginning of the type declaration for the function
square is a "type constraint". It essentially means "for any type
t satisfying the
Dot interface...". And then in the definition of
square, the symbol
* is used (not with its ordinary necessarily multiplicative meaning, but) with whatever implementation
t happens to provide for
*. That's why
square (Sum 3) and
square (Prod 3) give such different results.
We can also have such constraints on our original
class declarations. Whereas we had:
class Dot t where ...
Haskell can also have declarations like:
class Semigroup t => Monoid t where ...
meaning that in order to have the
Monoid interface, type
t also has to have the
Semigroup interface. (This example is not in fact yet part of the official language.) And so on.
Haskell uses this technique extensively for its Monad interfaces. Monadic box types are specified in terms of the interface they have to supply, analagously to our
Dot interface and the
Sum and
Product types.
Okay, that was all about Option 3 for handling name clashes/ambiguities. Haskell embraces this by letting different types define the symbol differently, and then it figures out what definition to use by figuring out what the argument types are. There are just some common constraints: for example, with the
Dot interface, the
*
function has to take two arguments of the same type and return a result of that type. With other examples, we might also have that if you declare yourself to satisfy the interface, you have to supply several different operations. An loose analogy might be that when I talk to my family,
Mom might have one meaning, with some paired meaning for
Dad, but then in the AI lab these have different meanings (two different computers) and in MI6 two yet different meanings. (I don't know if there's a "Dad" in MI6, but in the modern Bond movies they called Judi Dench's M character "Mom.")
OCaml's parameterized modules
OK, now let's turn to Option 4, which is OCaml's strategy. We've already discussed OCaml modules and how one might use their type declarations to only expose some part of the module's concrete definitions. A further quirk is that OCaml also permits you to define things that aren't modules but are rather module makers, that is things that take certain parameters (these are always other modules, usually small ones), and generate modules as a result. OCaml calls these "functors" which is a shame because Haskell (and category theory) use that term differently. (At least, I assume there is no underlying connection between OCaml's use and the category theory use, though I don't know.) I'll just call them "module makers". The specific syntax for declaring these is not important. What is important is how to use them.
Recall the example from before:
module R = Monad.Reader(struct type env = ... end) R.mid
Here
Monad.Reader is a module maker, and
struct type env = ... end is the parameter (you have to fill in the
... with an actual type, perhaps
int list or
string -> int, where
strings are how you represent variables). First we bind the module variable
R to the module made by supplying that parameter to the module maker. Then
R is a monad library, just like
Monad.List and
Monad.Option are.
Here is some code showing how to generate the common monad modules, and also some additional values defined in each module, beyond the core monad operations. This code assumes you have installed the Juli8 libraries for OCaml.
module O = Monad.Option O.(test, mzero, guard) module L = Monad.List L.((++), pick, test, mzero, guard) module T = Monad.LTree (* LTree for "leaf-labeled tree" *) T.((++)) module I = Monad.Identity module R = Monad.Reader(struct type env = int list end) (* or any other implementation of envs *) R.(ask, asks, shift) (* same additional interface as Haskell has; we'll explain them later *) module S = Monad.State(struct type store = int end) (* or any other implementation of stores *) S.(get,gets,put,modify) (* same additional interface as Haskell has; we'll explain them later *) module Ref = Monad.Ref(struct type value = string end) (* this is essentially a State monad, but with a different interface *) Ref.(newref,getref,putref) module W = Monad.Writer(struct type log = string let empty = "" let append = (^) end) (* or any other implementation of logs *) W.(listen,listens,tell,censor) module E = Monad.Error(struct type err = string exception Exc = Failure end) (* or other specifications of type err and exception Exc of err *) E.(throw, catch)
These mostly have to be entered as individual lines in the interactive interpreter, separated by
;; and
returns.
There remains a final major Monad, the Continuation monad, that we'll discuss and add to the library later.
We'll discuss the different
ask, shift, pick, and so on functions on another page..) So I tend to write instead just
List.append. But when working with Lists as abstract monadic values, in OCaml, you need to use
++ instead of
List.append. OCaml will act like it doesn't know that abstract monadic Lists are really
lists.s. | http://lambda.jimpryor.net/topics/week8_monads_and_modules/ | CC-MAIN-2018-13 | refinedweb | 3,889 | 62.17 |
Lint > Correctness
This is my 1st post of Code Inspection series. In your Android Development career, you might have ignored many lint warnings. Being an ideal developer, we should not ignore the lint warnings unknowingly. If you have a known reason for it, then you may ignore some warnings. In this post, we will have a brief idea about lint warnings related to Correctness.
Below are the sub categories of Lint > Correctness warnings with examples.
Case 1: This should probably be a plural rather than a string
We might have faced many situation where you need to declare a dynamic string resource. For example: lets consider the below string, which contains a formatter %d for some integer value.
<string name="alert_message_winner">Congratulations!!! You won the game and earned %1$d Reward Points.</string>
Lets understand the above string with two different cases.
- When %d should be replaced with 1.
- When %d should be replaced with more than 1.
The 1st case will of course fail. Instead of showing the expected result 1 Reward Point, it will show 1 Reward Points (Which is grammatically wrong).
To avoid this kind of situation, Android has something called plurals for us. Let us try to understand more about plurals and how to use it with the above mentioned string resource.
<plurals name="alert_message_winner">
<item quantity="one">Congratulations!!! You won the game and earned %1$d Reward Point.</item>
<item quantity="other">Congratulations!!! You won the game and earned %1$d Reward Points.</item>
</plurals>
So we have declared two string resources, one for singular and another for plurals. But how to get the value in java? Lets see the below snippet.
int rewardPoints = getRewardPoints();
Resources res = getResources();
String message = res.getQuantityString(R.plurals.alert_message_winner, rewardPoints, rewardPoints);
// Use message to show in UI
We have below take aways from the above snippet:
- R.plurals.alert_message_winner instead of R.string.alert_message_winner
- res.getQuantityString() instead of res.getString()
Now lets consider, you have some similar kind of string resource but you don’t need to maintain multiple string resources for handling plurality. With below snippet, you can a bit more clarity:
<string name="alert_password_size">Password length should be more than %1$d characters.</string>
Your business requirement might tell that there is no possibility of having zero or one characters. But still, lint will throw a warning suggesting to use plurals. So to avoid that we can just suppress the lint warning using tools:ignore.
<string name="alert_password_size" tools:Password length should be more than %1$d characters.</string>
Reference:
Explains how to use string resources in your UI.developer.android.com
Case 2: Class is not registered in manifest
Let us consider an example:
public class MyActivity extends Activity { … }
Now, lets start this Activity:
Intent intent = new Intent(context, MyActivity.class);
startActivity(intent);
The above code will surely throw an exception:
android.content.ActivityNotFoundException:Unable to find explicit activity class
So, we have to register this Activity into AndroidManifest.xml
<activity android:name=".MyActivity />
The same rule applies to other two components (Services and Content Providers). So we can finally conclude that each Android components except BroadcastReceivers should be registered with Manifest file.
Now lets consider another case study where you want to create an Activity which is only meant for a parent class of all other real Activities.
public class BaseActivity extends Activity { ... }
public class MyActivity1 extends BaseActivity { ... }
public class MyActivity2 extends BaseActivity { ... }
In this case, even though BaseActivity is an Activity, it should not be registered with Manifest, since its not a real Activity. But still, the lint will give this warning. So to avoid this we should always make the BaseActivity as an abstract class.
public abstract class BaseActivity extends Activity { ... }
Conclusion
Out of four Android components, Activities, Services and Content Providers are the three which must be registered in AndroidManifest.xml file using <activity>, <service> and <provider> tags respectively.
If your activity is simply a parent class intended to be subclassed by real activities, make it an abstract class.
Case 3: Gradle Dynamic Version
Gradle build system gave us a flexibility of using Android libraries just by adding a single line of code in build.gradle file.
dependencies {
compile 'com.android.support:appcompat-v7:24.2.0'
}
It also gives us the flexibility of choosing the latest available library version instead of a specific version.
dependencies {
compile 'com.android.support:appcompat-v7:24.0.+'
}
Even though the later one gives us more flexibility, it is not recommended.
Why Not Recommended?
Lets say, we have developed and tested our app with a library version 1.0.0 and submitted to build server for releasing to Play Store. By the time, out build server builds our app, there may be a slightly higher version of dependent library is available (e.g. 1.0.1). Since we have not mentioned a specific version to be used, build server will choose the latest one, not the version which is used while testing. This might cause some functionality not work.
Conclusion
We should always use a specific version of any dependent library. | https://medium.com/chanse-games-developers/lint-correctness-this-should-probably-be-a-plural-rather-than-a-string-a1c7298f4996 | CC-MAIN-2017-39 | refinedweb | 845 | 50.43 |
We recently announced the availability of BizTalk Server 2010 R2 CTP. BizTalk Server 2010 R2 comes with a lot of features to take advantage of the whole cloud push. In the last post, we have shown how you could leverage the power of the Azure IaaS cloud to get your BizTalk deployments up and running in no time. In this post, we will talk another feature which will help you take advantage of the cloud – the Service Bus Messaging adapter. The Service Bus Messaging adapter, in a nutshell, allows BizTalk Server to read and send messages to Azure Service Bus Queues and Topics/Subscriptions.
The presence of a messaging infrastructure on the cloud has many advantages – which is not the subject of this blog, but is well documented elsewhere. Let us take a simple scenario: Our customer is an insurance company and it deals with multiple partners who help in administering insurances to the end customers. An on-premise BizTalk Server would handle insurance claims coming from multiple partners. Each of the partners submits these claims to an incoming Service Bus queue. BizTalk Server picks up these claims from the incoming queue and processes them. Once it is done, BizTalk Server would publish the status of the claims to an outgoing topic. The partners could create a subscription on the topic to receive the claim status. Such a scenario that leverages Azure Service Bus can now be easily integrated with BizTalk Server with the new Service Bus Messaging adapter. Let us have a quick walk through of this adapter.
Getting Started
In order to get started, you will need two things: A Windows Azure Service Bus namespace and BizTalk Server 2010 R2 CTP. You can get both of these on Windows Azure – sign up for a free trial here if you do not have one. Now, as outlined in this article, you can create your queue/topic. And as outlined here you can quickly create an instance of BizTalk Server 2010 R2 CTP running in a Windows Azure Virtual Machine.
Receiving messages from a Queue or a Subscription
BizTalk Server 2010 R2 provides a receive adapter to fetch messages from a Windows Azure Service Bus queue or a subscription. The receive adapter is one-way and will work with messages that you post on the queue. The adapter is simple to set up and configure. You just need to provide the URL of the Service Bus queue or subscription from where you need to pick the messages from and the credentials for authenticating with Service Bus. To set up a receive adapter for Service Bus Queue:
(1) Create a one-way receive port in your BizTalk application.
(2) Create a new receive location and select “SB-Messaging” as the Transport Type as shown below.
(3) Click on “Configure…” to configure the properties of the receive location. In the transport properties General Tab, specify the URL for the queue or subscription from where BizTalk Server needs to fetch message from. You could also configure connection properties like open/close/receive timeout as well as the prefetch count. The Prefetch Count specifies the number of messages that BizTalk Server will fetch at a time. This can help increase the throughput of the adapter as well.
(4) On the Authentication Tab, you need to specify how BizTalk Server will fetch the required ACS Token for authenticating with Service Bus. You can read on how Service Bus uses Access Control Service for authentication here. In a nutshell, you will need to specify the Access Control Service URL for the service namespace. It is usually derived from the service bus namespace (suffixed with “-sb”) and you will only need to update the service namespace in the default template. You can find this in the Azure management portal as well. You also need to specify an issuer name and issuer key. You will need to ensure that the service identity has a Listen claim. If you are using the default service identity (“owner”), it will have the necessary claims.
(5) Click on “OK” or “Apply” to create the Receive Location.
You can then start your Receive Location and BizTalk Server will now start to fetch messages from the Queue or subscription. Easy!
Handling Brokered Message Properties in Receive Adapter
The SB-Messaging receive adapter understands BrokeredMessage properties. This means two things. First, BizTalk Sever 2010 R2 comes with a predefined property schema for all the standard properties of a BrokeredMessage. The adapter will also promote these properties automatically for you. Second, the adapter can write custom user-defined Brokered Message properties in the BizTalk message context. And, if you desire, you could also promote them. Promoting them will allow you to use them in your routing filters. For example, for our insurance claims applications, different applications could be routed based on who the type of insurance, the partner, the claim amount, etc. These properties could be defined as Brokered Message properties and passed on with the incoming message. These properties could be used to route the message to different backend systems or workflows/orchestrations in BizTalk Server. To promote the properties though, you need to create and add a property schema in your BizTalk application. Then, on the properties Tab of the adapter, you could specify the namespace for your schema and check the option to promote the property.
Sending messages to a Queue or a Topic
BizTalk Server 2010 R2 also provides a send side adapter for posting messages to a Service Bus Queue or Topic. This is a one-way send adapter. To set up a send port for posting messages to a Service Bus Queue or a Topic:
(1) Create a Send Port and select SB-Messaging as the Transport Type. Click Configure to configure the properties of the adapter.
(2) Click “Configure…” to configure the transport properties of the send adapter. You need to specify the URL of the Service Bus Queue or Topic where the message should be posted.
(3) On the Authentication Tab, you enter the credentials for authenticating with Service Bus. This is the same as you see on the receive adapter.
(4) Click on “OK” to save the transport properties of the adapter.
(5) Specify the other properties of your Send port like pipeline, handler, Filters, etc.
(6) Click “OK” to finish creating your Send port.
Now, you can enlist and start the send port. The adapter will now post outgoing BizTalk message to the Service Bus queue or topic.
Handling Brokered Message Properties in Send Adapter
As in the case of receive adapter, the send adapter is aware of Brokered Message properties. If the outgoing BizTalk message has any of the standard Brokered Message in its context, the adapter will automatically set them as Brokered Message property. In addition, you could also specify defaults as part of the Send port properties. For custom user defined Brokered Message, you could specify a namespace as part of the adapter configuration. The adapter will take any property in that namespace and set them as Brokered Message Properties.
Finally, a note on serialization
When you start using the adapter, you may find that the message you receive in BizTalk Server is garbled – especially if you are using the Service Bus .NET API. Let me explain why this happens and what the adapter expects.
The Service Bus messaging adapter just fetches the stream in the incoming message and submits it to BizTalk Server. While sending out a BizTalk message, it simply uses the content as a stream. We preserve the message format on the wire. So, if you write or read the stream directly in your code, say using the Service Bus REST API, you would see the same data in the payload as you would expect. However, if you are using the Service Bus .NET API, you may find that there is a serialization issue if you use the default serializer (DataContractSerializer with binary XmlDictionaryWriter). This is because the default serializer in the Brokered Message .NET API uses Binary encoding. To avoid this issue, you will need to use Text by explicitly provide your own serializer, instead of the default serializer.
For sending message:
// For sending message using DataContractSerializer with Text encoding
var message = new BrokeredMessage(data, new DataContractSerializer(typeof(MyDataType)));
While reading the message:
// For receiving message using a DataContractSerializer with Text encoding
var data = message.GetBody<MyDataType>(new DataContractSerializer(typeof(MyDataType)));
You could, of course, directly read the stream as well. For a detailed write-up on the content serialization of Service Bus messages, you can refer to this blog post.
Next Steps
This post provides an overview of the new Service Bus messaging adapter in BizTalk Server 2010 R2. This is one of the new features that we have enabled with BizTalk Server 2010 R2. With this adapter, BizTalk Server can seamlessly integrate with your applications that leverage the Windows Azure Service Bus Queues and Topics. We encourage you to try out this feature and provide your comments and feedback. You can use the BizTalk forum to post your questions/queries as well.
Thanks
BizTalk Server Team | https://blogs.msdn.microsoft.com/biztalk_server_team_blog/2012/09/13/connecting-to-windows-azure-service-bus-from-biztalk-server-2010-r2-ctp/ | CC-MAIN-2019-13 | refinedweb | 1,519 | 62.88 |
Hi, is there a way to assign variables to ScriptA, which is required by ScriptZ, so that these variables already show up in the editor? I want to accomplish some sort of auto-setup: A GameObject needs script A,B,C and I just want to assign script Z which assigns script A, B, C already with the right public variables. I can only imagine to do this in the Awake function but then false variables will be shown while still in the editor...
Example Code:
using UnityEngine;
using System.Collections;
[RequireComponent(typeof(ScriptA))]
public class ScriptB : MonoBehaviour {
public int a = 1;
public int b = 2;
void Awake() {
ScriptA scriptA = GetComponent<ScriptA>();
scriptA.a = a; // when game is inactive maybe ScriptA shows 42 instead of 1
}
void Update() {
do_something...
}
}
Answer by perchik
·
Feb 25, 2014 at 06:18 PM
Try moving your code to Start(), not Awake().
this would not change anything (already tested), I want to assign scriptB (with let's say an public int x = 3) per drag and drop in the editor and then already have this public int x from scriptB assigned to scriptA to a public int. everything before the game runs
Do you perhaps want an editor script to add the correct scripts and assign the variables? (Seems like the right way to do this.)
Database Design
Author Doug Thews introduces you to writing stored procedures and UDFs in .NET 2.0 and SQL Server 2005.
Technology Toolbox: VB.NET, Visual Studio .NET 2005, SQL Server 2005
One of the most anticipated features in SQL Server 2005 (code-named Yukon) and Visual Studio .NET 2005 (code-named Whidbey) is the capability to develop stored procedures, user-defined functions (UDFs), and user-defined data types in .NET 2.0. SQL Server 2005 now supports developing UDFs, user-defined procedures (UDPs), user-defined triggers, and user-defined data types in any .NET 2.0 Common Language Runtime (CLR)-compliant language.
In this article, I'll provide an introduction to writing stored procedures and functions in .NET 2.0. While the scope of CLR integration with SQL Server 2005 could take up an entire book, I'll focus on how the CLR works within SQL Server 2005 and how to start writing and deploying CLR code to SQL Server 2005.
The code in this article is based on beta versions for both Visual Studio .NET 2005 (beta 1) and SQL Server 2005 (beta 2). As with any beta product, there is a slight chance that some of the functionality or namespaces might change before the final products are released. I've tried to be as careful as possible: I've worked with the internal beta teams at Microsoft and have purposely stayed away from various features and internals that are likely to change between now and both products' final release.
Getting both SQL Server 2005 and VS.NET 2005 installed on the same machine is pretty straightforward, with one exception. The beta 2 build of SQL Server 2005 uses a minor-version upgrade of the .NET 2.0 Framework. Therefore, you should install SQL Server 2005 first so that the higher .NET Framework is installed. (The VS.NET 2005 install won't overwrite it because it detects that the .NET 2.0 Framework is installed already.) If you install VS.NET 2005 first, you'll need to uninstall the .NET 2.0 Framework and then install SQL Server 2005. Microsoft says that both versions will use the same framework by the time they're released, so this should only be an issue for developers who work with the betas.
Once you've completed the installations, it's time to write some code to see what you can do. To test out how the CLR inside of SQL Server 2005 actually works, I'll show you how to write a simple stored procedure that just returns the current version of SQL Server 2005.
First, bring up VS.NET 2005 and create an empty solution. I've called mine VSM122004 and included it with the downloadable source code for this article. Next, select Add | Project for the solution, and select SQL Server Project from under the SQL Server project types (see Figure 1).
For purposes of this exercise, I'll be using the SampleProcedures project to hold all user-defined stored procedures, the SampleFunctions project to hold all UDFs, and the SampleTriggers project to hold all user-defined triggers.
Once you add a new project, VS.NET 2005 asks you to create a database reference for that specific SQL Server project (see Figure 2). It's not mandatory you add a database reference because assemblies run in-process with SQL Server 2005 (meaning they don't need their own connections), but when you add one, it helps you browse for SQL Server objects in the IDE.
Create a Stored Procedure Class File
Now that you've created the project, create a simple stored procedure class file by selecting Add | Stored Procedure and call it SimpleStoredProc.vb. Notice the IDE creates a stub:
Partial Public Class StoredProcedures
    <SqlProcedure()> _
    Public Shared Sub SimpleStoredProc()
        ' Add your code here
    End Sub
End Class
Any Function or Sub must be declared as Public Shared within the class to be deployed as a SQL CLR object. The IDE also generates a Partial Public Class called StoredProcedures. If you create another stored procedure using the previous steps, you'll get another Partial Class. When they're compiled, these classes are merged into a single class called StoredProcedures, which enables you to keep your stored procedures in a single class file, or separate them into functionality-specific class files. The end result is a single assembly per project to deploy to SQL Server 2005.
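For instance, adding a second class file (the file and method names below are made up purely for illustration) simply contributes another piece of the same merged class:

' GetServerTime.vb -- a second, hypothetical stored procedure file in the same project
Partial Public Class StoredProcedures
    <SqlProcedure()> _
    Public Shared Sub GetServerTime()
        ' Compiled together with SimpleStoredProc into the single StoredProcedures class
    End Sub
End Class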
Don't be confused about the terminology of "function" and "procedure" in SQL Server vs. VB.NET. In SQL Server 2005, a stored procedure can take parameters by value or by reference and perform some action, possibly returning a set of rows to the caller. UDFs in SQL Server 2005 (scalar UDFs and table-valued functions, or TVFs) take parameters by value only and can return either a scalar value (such as a string) or a table value (such as a set of rows and columns), as well as return a set of rows (like the stored procedure does).
If you're using VB.NET, you'll need to use a Function instead of a Sub if you want to return a value directly from the UDF. Of course, both SQL stored procedures and functions can also return a data resultset as well, but that is done through the SqlPipe class and not a traditional return parameter. I'll talk about how to do this later on in the article.
Now, you'll add some code so your stored procedure returns the SQL Server version information. I've included the complete class module that contains the definition of the SimpleStoredProc method (see Listing 1).
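Listing 1 isn't reproduced here, but a minimal sketch of the class module, assembled from the steps described next, would look roughly like this (the Imports reflect the beta-era in-process provider and may differ in later builds):

Imports System.Data
Imports System.Data.Sql
Imports System.Data.SqlServer

Partial Public Class StoredProcedures
    <SqlProcedure()> _
    Public Shared Sub SimpleStoredProc()
        ' Get a command object tied to the current execution context;
        ' no separate connection is needed because the code runs in-process.
        Dim cmd As SqlCommand = SqlContext.GetCommand()
        cmd.CommandText = "SELECT @@VERSION"

        ' Stream the query results back to the caller through the context pipe.
        SqlContext.GetPipe().Send(cmd.ExecuteReader())
    End Sub
End Class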
There's a lot here to talk about, so I've broken it down one step at a time. First, look at the <SqlProcedure()> decoration attribute for the method:
<SqlProcedure()> Public Shared Sub SimpleStoredProc()
This decoration is used to define information about the stored procedure object. The VS.NET 2005 IDE uses it to determine how the assembly is cataloged and how the stored procedure is created when deploying from within the IDE.
Take a look at the code within the method that will become your stored procedure. First, the current execution context is retrieved, which gives you the ability to run commands, look at the current connection, and perform anything related to the current SQL execution context. Remember, because this code is running in-process within SQL Server, you don't need to create a separate database connection; you're running under the context of the code that is calling the stored procedure. If you need access to the connection to do something like implement a transaction, you can access the current execution context's connection:
Dim objSqlConnection As SqlConnection = SqlContext.GetConnection()
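For example, a sketch of wrapping work in your own transaction on that connection might look like the following; note that this leans on the standard ADO.NET BeginTransaction call, and whether the in-process connection supports starting a new transaction (versus enlisting in the caller's) is an assumption that varied across the betas:

Dim tran As SqlTransaction = objSqlConnection.BeginTransaction()
Try
    ' ... run commands enlisted in tran ...
    tran.Commit()
Catch ex As Exception
    tran.Rollback()
    Throw
End Try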
The SqlContext class's GetCommand() method retrieves a command object for you, which you can now use to create your own database commands (just as you do in ADO.NET). In this example, the query just selects the version of SQL Server using the standard SQL statement:
SELECT @@VERSION
Finally, you'll need to get the query's output back to the caller after executing the SQL command. The standard way to return query results from a stored procedure is by using the SqlPipe class. You call the GetPipe() method to get the current context's pipe back to the caller, then use that pipe's Send() method to send back results to the stored procedure caller.
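Besides forwarding a data reader, the pipe can also push simple informational text back to the caller; assuming the Send(String) overload (present in later builds), a one-liner such as the following writes to the caller's Messages output:

SqlContext.GetPipe().Send("SimpleStoredProc completed")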
Catalog the Assembly in SQL Server 2005
Now that you've developed the stored procedure, it's necessary to catalog and create it inside SQL Server 2005. This is a two-phased approach. First, you need to catalog the assembly; and second, you need to create and map stored procedures, UDFs, or triggers to the methods inside the cataloged assemblies.
The SQL Data Definition Language (DDL) that does this for your SimpleStoredProc method looks like this:
CREATE ASSEMBLY MySQLSampleProcedures
FROM 'C:\Development Area\VSM122004\SampleProcedures\bin\SampleProcedures.dll'
WITH PERMISSION_SET = SAFE
This creates an assembly inside of the AdventureWorks sample database that's created when you install SQL Server 2005 (assuming it's the current database when you run the script). I've given the assembly name used inside of SQL Server 2005 (MySQLSampleProcedures) a different name than the physical assembly name of the VS.NET 2005 project to show the naming differences between VS.NET 2005 and SQL Server 2005. Notice that assemblies are loaded within the current database. If you want to load a CLR object from an assembly in another database, you need to register that assembly in the current database, or change the CREATE [PROCEDURE|FUNCTION|TRIGGER] command to map to the assembly within the other database. (Additionally, the user ID cataloging the assembly must have access to the other assembly and database, and both assemblies must be cataloged under the same user ID or a common security role.)
The PERMISSION_SET property tells SQL Server 2005 what kind of security this object has. There are three possible choices (see Table 1): SAFE, which allows only internal computation and local data access; EXTERNAL_ACCESS, which also allows access to external resources such as files, the network, and the registry; and UNSAFE, which places no restrictions on the code, including calls into unmanaged code.
If your code violates the permission set it was cataloged with, a SQL security exception will be thrown. Also, you can't access anything from the System.Windows.Forms namespace within a SQL cataloged assembly.
Once you catalog an assembly, SQL Server 2005 stores information about CLR assemblies in a set of catalog views, such as sys.assemblies and sys.assembly_files (see Table 2). Perform this query in SqlQuery if you want to see your assembly:
SELECT Content FROM sys.assembly_files
WHERE name = '<Path & Filename of Assembly Cataloged>'
This gives you your assembly's binary CLR code cataloged in SQL Server. You can also deploy your assembly with the VS.NET 2005 IDE. When you're deploying through the VS.NET 2005 IDE, the assembly, debug symbols (if DEBUG is ON), the actual source code, and the project file are imported. You can change the PERMISSION_SET value inside VS.NET 2005 by editing the Properties page of your SQL Server project. VS.NET 2005 will use this setting when it's deploying from within the IDE. For this article, you'll catalog and create your CLR objects manually to become more familiar with the inner workings of CLR objects within SQL Server 2005.
You can add your debug information manually after cataloging the assembly by using the ALTER ASSEMBLY statement:
ALTER ASSEMBLY MySqlSampleProcedures
ADD FILE FROM
'C:\Development Area\VSM122004\SampleProcedures\bin\SampleProcedures.pdb'
This adds another row in the sys.assembly_files table, which is tied to the same assembly_id.
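To confirm that both the DLL and the .pdb were imported, a quick join of the catalog views (column names follow the public builds and could vary slightly in the beta) returns one row per file tied to the assembly:

SELECT a.name AS assembly_name, f.name AS file_name
FROM sys.assemblies AS a
JOIN sys.assembly_files AS f ON a.assembly_id = f.assembly_id
WHERE a.name = 'MySQLSampleProcedures'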
Use Statements to Change the Assembly
Once you catalog an assembly, its CLR code resides in SQL Server 2005, and the external file you used to create the assembly isn't referenced again. If you want to change the assembly, you can use the ALTER ASSEMBLY statement, or you can use the DROP ASSEMBLY statement and re-create the assembly after the desired changes are compiled. The CreateObjects.sql file in the Deployment project, included in the downloadable VS.NET 2005 solution for this article, contains all the DDL to catalog all the assemblies and CLR objects discussed in this article.
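For example, after recompiling you could refresh the cataloged bits in place, or drop the assembly and re-create it (the path is the same hypothetical one used earlier; any procedures bound to the assembly must be dropped before DROP ASSEMBLY will succeed):

-- Refresh the cataloged code from the newly compiled DLL
ALTER ASSEMBLY MySQLSampleProcedures
FROM 'C:\Development Area\VSM122004\SampleProcedures\bin\SampleProcedures.dll'

-- Or remove the assembly entirely
DROP ASSEMBLY MySQLSampleProcedures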
There is one strange bug to be aware of that happens only for VB.NET projects in VS.NET 2005. As you've probably noticed already, VS.NET 2005 now creates a lot of auto-generated code under the MyProjects folder (turn on View All Files to see MyProjects as a folder). If you attempt to rename either the assembly or root namespace under the MyProject properties, you'll get an error message about having a static variable/member when trying to compile and deploy your CLR object. So, for now, you'll probably want to keep the default properties created for you already when you create your SQL Server project inside VS.NET 2005.
In order to catalog an assembly, you must have the necessary permissions within SQL Server 2005. You must be logged into SQL Server with an integrated security account (SQL security accounts can't create assemblies). You catalog an assembly with the security granted to the integrated Windows account that created the assembly, so be careful which user ID you use. In addition, you need to catalog dependent assemblies under the same user ID or role. After you catalog an assembly, the owner can extend access permissions for that assembly to other IDs and roles.
Now that you've cataloged the assembly, you need to create your stored procedure definition and map it to the SimpleStoredProc() method in the assembly. You do this by using the standard CREATE PROCEDURE statement in SQL Server:
CREATE PROCEDURE usp_clr_GetSQLVersion
AS EXTERNAL NAME
MySQLSampleProcedures.[SampleProcedures.StoredProcedures].SimpleStoredProc
This creates a user stored procedure called usp_clr_GetSQLVersion and binds it to the SimpleStoredProc method of the StoredProcedure class within the SampleProcedures namespace of the MySQLSampleProcedures assembly. Notice that the assembly name used is what you cataloged it as in the CREATE ASSEMBLY phase and not the name of the assembly of the CLR code itself. A word of caution, especially when working with VB.NET: Case is significant for EXTERNAL NAME, even though the language might not be case-sensitive (such as VB.NET). The names of the class and method must match the names in the source code exactly.
Now that you've created your stored procedure and mapped it to a method within a cataloged assembly, you can simply run it from within a SQLQuery command window (see Figure 3).
You might be wondering how SQL Server is running the CLR code. The .NET 2.0 runtime is loaded by the default AppDomain when the first SQL CLR object is executed. Each database has its own separate AppDomain to run CLR code in. This is why it's necessary to catalog an assembly within the database itself: The CLR code being run is isolated to that AppDomain. SQL Server 2005 beta 1 provided the capability to query active AppDomains (sys.fn_appdomains), but this is no longer available in beta 2.
Because calling a CLR object in SQL Server 2005 is just like invoking a normal stored procedure or function in ADO.NET, I haven't included an application that consumes a CLR object so I can dedicate more space to discussing the CLR objects themselves.
Develop Your First UDF
Now, I'll show you how to develop a UDF, and instead of returning the SQL version as a row set through a SqlPipe, you'll return the string as part of a scalar value from the function itself. One thing to remember about scalar UDFs is that they can only return a specific set of value types. They can only return a type from the namespace System.Data.SqlTypes, a native CLR data type that maps explicitly to a SqlType (an example of this would be the CLR data type "String" that maps to the SqlType "SqlString"), or a SQL user-defined data type.
Creating a UDF is the same as creating a stored procedure, except that you choose the User-Defined Function template instead of the Stored Procedure template when you add an item to your project. Take a look at the GetSqlVersion UDF (see Listing 2).
Notice that this example looks a lot like your first stored procedure, with the exception that you're specifying a UDF instead of a stored procedure (hence the need for the <SqlFunction()> decoration). One difference between UDFs and user stored procedures is you'll need to define what kind of data access is going to be done within your code. In your case, you'll need access to the in-process data provider, so you'll need to set the DataAccess property to allow access to the in-process context:
<SqlFunction(DataAccess:=DataAccessKind.Read)>
If you don't need data access from the in-proc, you can set the DataAccess property to DataAccessKind.None, which helps SQL Server 2005 optimize the UDF. Also, you can also set properties for a SqlFunction decoration (see Table 3).
The actual code to develop a UDF is similar to the code to develop a stored procedure, with the exception that you're returning a scalar string value back from the function instead of as a resultset. Keep in mind that you should convert your types explicitly before returning them to prevent any possibility of an Invalid Cast Exception being thrown.
Now it's time to catalog this assembly and create the UDF within SQL Server with some SQL DDL:
CREATE ASSEMBLY MySQLSampleFunctions
FROM
'C:\Development
Area\VSM122004\SampleFunctions\bin\SampleFunctions.dll'
WITH PERMISSION_SET=SAFE
GO
ALTER ASSEMBLY MySQLSampleFunctions
ADD FILE FROM
'C:\Development
Area\VSM122004\SampleFunctions\bin\SampleFunctions.pdb'
GO
CREATE FUNCTION udf_clr_GetSQLVersion()
RETURNS NCHAR(255)
EXTERNAL NAME
MySQLSampleFunctions.[SampleFunctions.
UserDefinedFunctions].GetSqlVersion
GO
Now that you've cataloged the assembly and created the UDF, you can execute the function within a SQLQuery window using some simple T-SQL code:
declare @mystring as NCHAR(255)
set @mystring = dbo.udf_clr_GetSQLVersion()
print @mystring
This figure shows you the results (see Figure 4).
It's also possible to create a TVF as a UDF. A TVF is basically the same thing as a scalar UDF, except that it returns a set of columns whose properties are predefined by the definition of the TDF. The main goal behind a TVF is that instead of returning a row through a SqlPipe, you return specific columns within a row set. So, instead of executing the stored procedure to get a set of rows back, the caller performs a query that looks like this:
SELECT * FROM udf_MyExampleTVF
Create a More Complex CLR Stored Procedure
Now that you've seen how it all works, it's time to create something more substantial than just returning the version of SQL Server being run. In the next example, you'll create a stored procedure that takes a filter string as a parameter and searches for all contacts in the AdventureWorks Person.Contacts table. The resultset is returned through a SqlPipe so the caller can manipulate it easily through ADO.NET. Consider the GetFilteredContacts CLR stored procedure (see Listing 3).
Notice that it looks similar to what you've done already. The only difference is that instead of sending a single value back through the SqlPipe, you're going to send a SqlDataReader object that contains the returned rows of the query. Notice that the method checks to make sure the input parameter is not null. This is especially important when passing in types that can't represent null (such as Single, Float, and so on). Once the stored procedure is compiled, it's time to catalog it and register the stored procedure using some more SQL DDL:
CREATE ASSEMBLY MySQLSampleProcedures
FROM 'C:\Development
Area\VSM122004\SampleProcedures\bin\SampleProcedures.dll'
WITH PERMISSION_SET=SAFE
GO
ALTER ASSEMBLY MySQLSampleProcedures
ADD FILE FROM 'C:\Development
Area\VSM122004\SampleProcedures\bin\SampleProcedures.pdb'
GO
CREATE PROCEDURE
usp_clr_GetFilteredContacts(@strFilter
NCHAR(255) = '')
AS EXTERNAL NAME
MySQLSampleProcedures.[SampleProcedures.
StoredProcedures].GetFilteredContacts
GO
You'll need to define input parameters for stored procedures explicitly, and take care to match the types to the method signature within the assembly. If the types don't match, the CREATE PROCEDURE statement will fail. A default value for strFilter is also provided within the definition (an empty string), so this stored procedure can be called without any parameters to get all of the rows from Person.Contact, unfiltered. You can see what happens when you execute the GetFilteredContacts stored procedures, looking for any contacts with the last name "Smith" in them (see Figure 5).
Write CLR Triggers
As I mentioned earlier, you can also write database triggers in .NET 2.0. You use the same process to write triggers as you do for stored procedures and UDFs. As an example, I've created a sample generic trigger to use with a test table called dbo.Test_Table within the AdventureWorks database (see Listing 4).
Again, it looks similar to the stored procedure you wrote earlier. The biggest difference is that instead of accessing SqlContext, the trigger must access SqlTriggerContext to get the current execution context information. This provides the same context information as before, but also provides access to why the trigger was invoked (insert, update, deleteas well as many more types of events).
CLR triggers have access to the DELETED and INSERTED tables, just like a normal T-SQL trigger does. In this sample, the trigger action is determined and then the table that contains the affected rows is queried. From there, you can add in your own business or transactional logic.
Notice that there is a <SqlTrigger()> decoration that's commented out in the code. I included it in the source code to show you how you can tell the VS.NET 2005 IDE how to deploy this trigger to the database. Remember, the VS.NET 2005 IDE uses these decorations to determine how the CLR objects need to be cataloged and created.
Now that the trigger is created and compiled, you'll use some more SQL DDL to catalog and create your SQL CLR trigger object:
CREATE ASSEMBLY MySQLSampleTriggers
FROM 'C:\Development
Area\VSM122004\SampleTriggers\bin\SampleTriggers.dll'
WITH PERMISSION_SET=SAFE
GO
ALTER ASSEMBLY MySQLSampleTriggers
ADD FILE FROM 'C:\Development
Area\VSM122004\SampleTriggers\bin\SampleTriggers.pdb'
GO
CREATE TRIGGER TestTableTrigger
ON dbo.Test_Table
FOR INSERT, UPDATE, DELETE
AS EXTERNAL NAME
MySqlSampleTriggers.[SampleTriggers.Triggers].TestTableTrigger
GO
Debug Your SQL CLR Code
Now that you've developed some CLR objects and cataloged them in SQL Server 2005, you're probably going to want to debug the code at some point in time. You do this easily by attaching to an external process SQLSERVR.exe process (just like debugging an ISAPI filter or an NT Service in .NET). In this case, you'll be attaching to the SQL Server 2005 process (SQLSERVR.exe). Under Tools in the VS.NET 2005 IDE, select Attach to Process and select the instance of SQLSERVR.exe that corresponds to SQL Server 2005. (You might have multiple versions of SQL Server running, especially if you have Outlook 2003 with BCM installed on your machine, which automatically installs an instance of SQL Server 2000 and runs it in the background.)
Next, go into the code that you want to debug, insert your breakpoint, and perform an action that will cause your breakpoint to be hit (for example, perform a query that will fire a trigger). You can see debugging the TestTableTrigger in Listing 4 looks like after you enter an INSERT statement in a SQLQuery window (see Figure 6).
Be cautious when you exit the debugger after attaching to the SQLSERVR.exe process. Given you're working under the execution of the SQL Server engine, you should always use the VS.NET 2005 Continue toolbar button (the green VCR button) to continue executing your code after you're finished. Not doing so could stop the entire SQL Server engine from executing when you exit the debugger. A good practice is to make sure you select the Stop Debugging menu option under the Debug menu in the VS.NET 2005 IDE.
This article has given you a high-level overview of how to create .NET 2.0 CLR objects in SQL Server 2005, but I've only touched the surface of what's possible. Think of the things that you can now implement in your application's database tier. You can implement things such as structured transactions with detailed exception handling, plus access to outside resources (for example, Web services) to consume within your database objects. Not to mention, you have the ability to make calls to unmanaged COM code to take advantage of existing business logic or third-party extensions (although this should be done only when you've tested the unmanaged object thoroughly, because it can affect the SQL Server 2005 AppDomain and because the SQL CLR code must be marked as UNSAFE to allow calls to COM objects).
Microsoft claims the performance of .NET 2.0 CLR objects is comparable to that of standard T-SQL, but this claim should be taken with a grain of salt. My suggestion is that .NET 2.0 CLR objects work well when the requirements for the object call for complex or highly structured, nested code, or require access to outside resources. If it's a simple query or insert, then it's probably best to stick with T-SQL. However, what's exciting is the wealth of new and exciting features you can place in the database tier of your n-tier applications now that your database objects can be created with the feature-rich .NET 2.0 Framework and programming languages.
Printable Format
> More TechLibrary
I agree to this site's Privacy Policy.
> More Webcasts | https://visualstudiomagazine.com/articles/2004/12/01/build-sql-clr-objects-with-net.aspx | CC-MAIN-2018-26 | refinedweb | 4,112 | 52.6 |
Multithreading
Threads can be used to offload big chunks of work without skipping turns. Robocode imposes some specific constraints on robots using threads:
- You can only run 5 threads (not including your main thread) at a time. Previous threads have to terminate before the security manager will let you run more.
- This does not affect thread creation. You can create as many Thread objects as you wish, but you may only run 5 at a time.
- You must stop your threads at the end of each round, or you will throw an Exception (and all the nasties that entails).
- This means you must override both
onWin(...)and
onDeath(...)to ensure your child threads get stopped at the end of every round, win or lose.
- You must also stop your threads at the end of the battle, or your robot will throw an Exception if the user stops a battle before it finishes. It's also annoying having to wait for robots to be killed when they don't do this (especially if you're debugging them!)
- To ensure your robot also gets stopped when the user aborts a battle, you must override
onBattleEnded(...)and stop your threads if they haven't stopped already.
Needless to say, threading is not for the faint of heart.
Fairness Concerns
Since threads aren't stopped during the enemy's turn, they can be used to lower the enemy's score. You could spawn a thread which does heavy work during the enemy's turn to increase the chances of them skipping a turn. Needless to say, this is not cool.
Because of these concerns, robot multithreading may not always be supported by Robocode into the future.
There is also a consensus that for such reasons, threading should not be used in bots meant to compete in RoboRumble. While there is for now one bot (Toad) which is in the rumble currently and uses multithreading, one should expect opposition to trying to enter a new multithreaded bot.
Sample Code
The following code spawns 5 threads (one per turn) which print a message each turn for 100 turns, then terminate.
import robocode.*; public class ThreadDemo extends AdvancedRobot { // This will be used to tell our threads to terminate. To do this, we must // be able to modify its value, so a simple Boolean object will not do. // It will also be used to make our threads wait until the next round. For // this, it must inherit from Object to get at its notifyAll() method. final boolean[] token = {true}; // Number of threads we have spawned. int threadCount = 0; @Override public void run() { // Get the radar spinning so we can get ScannedRobotEvents setTurnRadarRightRadians(Double.POSITIVE_INFINITY); while (true) { out.println("Robot time: " + getTime()); if (threadCount < 5) { final long spawnTime = getTime(); // Quick and dirty code to create a new thread. If you don't // already know how to do this, you probably haven't learned all // the intricacies involved in multithreading on the JVM yet. new Thread(new Runnable() { int runCount = 0; public void run() { synchronized(token) { while (token[0] == true && runCount < 100) { System.out.printf( "\tHi from Thread#%d (current time: %d). Repeat count: %d\n", spawnTime, getTime(), ++runCount); try { // Sleep until re-awakened in next turn token.wait(); } catch (InterruptedException e) {} } } } }).start(); threadCount++; } execute(); } } @Override public void onScannedRobot(ScannedRobotEvent event) { synchronized(token) { // Wake up threads! It's a new turn! token.notifyAll(); } } // The following events MUST be overriden if you plan to multithread your robot. // Failure to do so can cause exceptions and general annoyance. @Override public void onWin(WinEvent e) { // Politely tell our threads to stop because the round is over synchronized(token) { token[0] = false; } } @Override public void onDeath(DeathEvent e) { // Politely tell our threads to stop because the round is over synchronized(token) { token[0] = false; } } @Override public void onBattleEnded(BattleEndedEvent e) { // Politely tell our threads to stop because the battle has been ended. // This gets called whether the battle was aborted or ended naturally, // so beware of duplication with onDeath/onWin (if that is important to you). synchronized(token) { token[0] = false; } } } | https://robowiki.net/w/index.php?title=Multithreading&oldid=16791 | CC-MAIN-2022-40 | refinedweb | 677 | 63.8 |
Set track parameters.
#include <mm/renderer.h>
int mmr_track_parameters( mmr_context_t *ctxt, unsigned index, strm_dict_t *parms )
Set track parameters. This function can be used when the input type is "playlist" or "autolist". When the input type is "track", this function has no effect.
For "playlist" inputs, index specifies the track that these parameters are applied to. The provided index must be within range of the current playlist window or the function call will fail. An index of zero specifies the default parameters given to a new track when it enters the playlist window.
For "autolist" inputs, any input parameters that you set before attaching the input are taken as the initial track parameters (because the single track is the input). If you want to change them after attaching the input, use mmr_track_parameters(). Changes to input parameters other than repeat are ignored.
Some mm-renderer plugins don't return errors when you provide unacceptable values for track parameters. Instead, these plugins revert bad parameters to their previous values or to their default values (for parameters that you set for the first time). To see which values were accepted or changed, client applications can examine the parameters that the Event API returned.
When the input URL starts with audio:, you can set one of the following two parameters:
When the input URL starts with http: or https:, you can set the following parameters that map to libcurl options:
Zero on success, -1 on failure (use mmr_error_info()).
QNX Neutrino | http://www.qnx.com/developers/docs/6.6.0.update/com.qnx.doc.mm_renderer/topic/mmr_api/mmr_track_parameters.html | CC-MAIN-2018-47 | refinedweb | 245 | 54.02 |
rate the plugins they like.
One task is to get a list of available Grails plugins. I wanted to do that programmatically, too, because I’d like to update the list automatically using the Quartz plugin (of course).
How do you get a list of available plugins? My first thought was to do the HTML equivalent of screen scraping at the main plugin site, . At that site everything is nicely divided into categories, along with links to descriptions and more.
Screen scraping HTML is not fun, though. I’ve done it before, when necessary, but it’s not very robust and tends to run into problems. Many of those problems have to do with the fact that HTML is a mess. Most web sites are filled with HTML that isn’t even well-formed, making processing it programmatically a real pain.
GinA, however, mentioned HTTPUnit as an easy way to access a web page. Since it’s a regular old Java library, that meant I could use it with Groovy. Therefore, my first attempt was:
import com.meterware.httpunit.WebConversation def baseUrl = '' def wc = new WebConversation() def resp = wc.getResponse(baseUrl)
Unfortunately, I’m already in trouble even at that point. If I run that, I get a massive exception stack trace with shows that the included Neko DOM parser choked on the embedded prototype JavaScript library.
While I was debating what to do about that (I really didn’t want to just open the URL, get the text, and start having Fun With Regular Expressions), I noticed a blog posting here, from someone named Isak Rickyanto, from Jakarta, Indonesia.
(A Java developer from Java. How cool is that? Or should I say, “how Groovy?” :))
Isak points out that there is a list of Grails plugins at . As a Subversion repository listing, it’s not full of JavaScript. Even better, every plugin is listed as a simple link in an unordered list.
I therefore modified my script to look like this:
def baseUrl = '' def wc = new WebConversation() def resp = wc.getResponse(baseUrl) def pluginNames = [] resp.links.each { link -> if (link.text =~ /^grails/) { def name = link.text - 'grails-' - '/' pluginNames << name } } println pluginNames
Here I’m taking advantage of the fact that the
WebResponse class (returned from
getResponse(url)) has a method called
getLinks(). Since there was one link that had the name “
.plugin-meta“, I decided to use a trivial regular expression to filter down to the links definitely associated with plugins. The
WebLink.getText() method then returned the text of the link, with gave values of the form
grails-XXX/
for each plugin. One of the things I love about Groovy is that I can then just subtract out the characters I don’t want, which is how I added the actual plugin names to an array.
Unfortunately, while that’s part of what I want, that isn’t everything I want. I’d like the version numbers and the descriptions, too, if possible. I could go digging into the various directories and look for patterns, but a different idea occurred to me.
I finally remembered that the way I normally find out what plugins are available is to run the
grails list-plugins
command and look at the output. You’ve probably seen it. It gives an output like
Welcome to Grails 1.0.3 - Licensed under Apache Standard License 2.0 Grails home is set to: c:\grails-1.0.3 Base Directory: c:\ Note: No plugin scripts found Running script c:\grails-1.0.3\scripts\ListPlugins.groovy Environment set to development Plug-ins available in the Grails repository are listed below: ------------------------------------------------------------- acegi <0.3> -- Grails Spring Security 2.0 Plugin aop <no releases> -- No description available audit-logging <0.4> -- adds hibernate audit logging and onChange event handlers ... authentication <1.0> -- Simple, extensible authentication services with signup .... autorest <no releases> -- No description available
etc. So if I could get this output, I could break each line into the pieces I want with simple String processing.
How can I do that? In the spirit of reducing it to a problem already solved, I realized I just wanted to execute that command programmatically and capture the output. One way to do that is to take advantage of Groovy’s ability to run command line scripts (GinA covers this, of course, but so does Scott Davis’s most excellent Groovy Recipes book). Here’s the result:
def names = [] def out = "cmd /c grails list-plugins".execute().text out.split("\n").each { line -> if (line =~ /<.*>/) { def spaceSplit = line.split() def tokenSplit = line.split('--') def name = spaceSplit[0] def version = spaceSplit[1] - '<' - '>' def description = tokenSplit[-1].trim() names << name } }
Basically I’m executing the
list-plugins command at a command prompt under Windows (sorry, but that’s still my life), splitting the output at the carriage returns (for some odd reason, using
eachLine directly kept giving me errors), and processing each line individually. The lines listing plugins are the ones with version numbers in angle brackets (like
<0.3>), and the descriptions came after two dashes. It seemed easiest to just split the lines both ways in order to get the data I wanted.
I ran this script and the other script together to see if I got the same output. Here’s the result:
println "From 'grails list-plugins': " + names println "From svn repo: " + pluginNames println "Difference: " + (pluginNames - names) From 'grails list-plugins': ["acegi", "aop", "audit-logging", ..., "yui"] From svn repo: ["acegi", "aop", "audit-logging", ..., "yui"] Difference: ["extended-data-binding"]
Why the difference? From the list-plugins output, here’s the line for “
extended-data-binding“:
ext-ui <no releases> -- No description available extended-data-binding<0.2> -- This plugin extends Grails' data binding ...
Yup, the name ran into the version number format. Sigh. Of course, the other problem with this is that at the moment it’s dependent on my own system configuration (Windows, with the grails command in the path), which can’t be a good thing.
Finally, after all this work, I suddenly realized that I already have the script used to list the plugins. As with all the other Grails commands, it’s a Gant script in the
<GRAILS_HOME>\scripts directory called, obviously enough,
ListPlugins.groovy. According to the documentation at the top, it was written by Sergey Nebolsin for version 0.5.5.
What Sergey does is to go to a slightly different URL and then parse the results as XML. His script accesses
DEFAULT_PLUGIN_DIST = ""
instead of the SVN repo location listed above, but if you go there, they look remarkably alike. I wouldn’t be surprised if is simply an alias for the SVN repository.
Note that the script also creates a cached version of the plugin list, called
plugins-list.xml, which is kept in the
"${userHome}/.grails/${grailsVersion}/plugins/"
directory. That’s completely understandable, but a lousy location on a Windows box. I never go to my so-called “user home” directory, so I would never occur to me to look there for information.
His script checks to see if that file is missing or out of date. If it’s necessary to update it, he opens a URL and starts processing:
def remoteRevision = 0 new URL(DEFAULT_PLUGIN_DIST).withReader { Reader reader -> def line = reader.readLine() ... // for each plugin directory under Grails Plugins SVN in form of 'grails-*' while(line=reader.readLine()) { line.eachMatch(/<li><a href="grails-(.+?)">/) { // extract plugin name def pluginName = it[1][0..-2] // collect information about plugin buildPluginInfo(pluginsList, pluginName) }
etc.
So, in effect, he’s screen scraping the SVN page; he’s just doing a better job of it than I was.
Incidentally, the line in his script that lead to my parsing problems is on line 86:
plugins << "${pluginLine.padRight(20, " ")}${versionLine.padRight(16, " ")} -- ${title}"
I could bump up the padding by one, or learn to parse the output better. 🙂 I expect the “right” answer, though, is to do what Sergey did, pretty much. Still, if all I have to do is add a little padding, it’s awfully tempting to just “reuse” Sergey’s existing script.
In an upcoming post, I’ll talk about how I used the RichUI plugin to apply a “star rating” to each entry so that people could vote. I don’t have the site ready yet, though. I’ll be sure to mention it when I do.
2 thoughts on “Getting a list of Grails plugins programmatically”
Have a more careful look at GinA again, it surely doesn’t refer to HttpUnit but to HtmlUnit.
Hi Marc,
Yes, I did find that after the fact, and I’ve used it since. I wished I’d noticed it sooner. Still, processing by hand probably wasn’t a bad exercise for me.
Thanks for your comment,
Ken | https://kousenit.org/2008/08/15/getting-a-list-of-grails-plugins-programmatically/ | CC-MAIN-2021-04 | refinedweb | 1,464 | 66.03 |
mkfifo, mkfifoat - make a FIFO special file
#include <sys/stat.h>
int mkfifo(const char *path, mode_t mode);
[OH]
#include <fcntl.h>#include <fcntl.h>
int mkfifoat(int f last data access, last data modification, and last file status change timestamps of the file. Also, the last data modification and last file status change timestamps of the directory that contains the new entry shall be marked for update.
The mkfifoat() function shall be equivalent to the mkfifo() function except in the case where path specifies a relative path. In this case the newly created FIFOfifoat() is passed the special value AT_FDCWD in the fd parameter, the current working directory shall be used and the behavior shall be identical to a call to mkfifo().
Upon successful completion, these functions shall return 0. Otherwise, these functions shall return -1 and set errno to indicate the error. If -1 is returned, no FIFO shall be created.
These functions.
- [EROFS]
- The named file resides on a read-only file system.
The mkfifoat().
The purpose of the mkfifoat() function is to create a FIFO special file in directories other than the current working directory without exposure to race conditions. Any part of the path of a file could be changed in parallel to a call to mkfifo(), resulting in unspecified behavior. By opening a file descriptor for the target directory and using the mkfifoat() function it can be guaranteed that the newly created FIFO is located relative to the desired directory.
None.
chmod,fifoat()383 [461], XSH/TC1-2008/0384 [146,435], XSH/TC1-2008/0385 [324], XSH/TC1-2008/0386 [278], and XSH/TC1-2008/0387 [278] are applied.
POSIX.1-2008, Technical Corrigendum 2, XSH/TC2-2008/0216 [873], XSH/TC2-2008/0217 [591], XSH/TC2-2008/0218 [817], XSH/TC2-2008/0219 [822], XSH/TC2-2008/0220 [817], and XSH/TC2-2008/0221 [591] are applied.
return to top of pagereturn to top of page | https://pubs.opengroup.org/onlinepubs/9699919799/functions/mkfifo.html | CC-MAIN-2019-47 | refinedweb | 323 | 53.71 |
setbuf() prototype
void setbuf(FILE* stream, char* buffer);
If the buffer is not null, it is equivalent to calling setvbuf(stream, buffer, _IOFBF, BUFSIZ).
If the buffer is null, it is equivalent to calling setvbuf(stream, NULL, _IONBF, 0). In this case the buffering is turned off.
It is defined in <cstdio> header file.
setbuf() Parameters
- stream: A file stream.
- buffer: A pointer to a buffer which may be null or not. If it is null, buffering is turned off, otherwise it should of at least BUFSIZ bytes.
setbuf() Return value
None
The below 2 examples illustrates the use of setbuf() function. Both of these programs use file operation. In the first example, buffer is set using the setbuf() to store the contents of the file internally.
In the next example, the statement
setbuf(fp, NULL) turns off buffering. So in order to read the file content, fread() is used.
Example 1: How setbuf() function works
#include <iostream> #include <cstdio> using namespace std; int main () { char str[] = "Buffered Stream"; char buffer[BUFSIZ]; FILE *fp; fp=fopen ("test.txt","wb"); setbuf(fp,buffer); fwrite(str, sizeof(str), 1, fp); fflush(fp); fclose(fp); cout << buffer; return 0; }
When you run the program, the output will be:
Buffered Stream
Example 2: setbuf() function with buffering turned off
#include <iostream> #include <cstdio> using namespace std; int main () { char str[] = "Unbuffered Stream"; char strFromFile[20]; FILE *fp; fp=fopen ("test.txt","wb+"); setbuf(fp,NULL); fwrite(str, sizeof(str), 1, fp); fflush(fp); /* We need to rewind the file pointer and read the file because the data from test.txt isn't saved in any buffer */ rewind(fp); fread(strFromFile, sizeof(strFromFile), 1, fp); fclose(fp); cout << strFromFile; return 0; }
When you run the program, the output will be:
Unbuffered Stream | https://www.programiz.com/cpp-programming/library-function/cstdio/setbuf | CC-MAIN-2020-16 | refinedweb | 297 | 62.88 |
Bummer! This is just a preview. You need to be signed in with a Basic account to view the entire video.
Starting the Builder6:02 with Kenneth Love
We can name our bear. Now let's see what else is available for it. Our `builder()` view is the bulk of our application, so let's get started!
New Terms
{% for x in y %}: You already know what
for x in y: does in Python, but this is the template version. This will cause the enclosed code to be run as many times as there are things in
y. Has to be followed by
{% endfor %}.
{% if %}: The template version of Python's
if condition. Closed with
{% endif %}.
- 0:00
Okay. So, this was pretty cool,
- 0:02
being able to like name our bear and pull the name back out.
- 0:05
But, I'd like to do a whole lot more.
- 0:06
We're supposed to have this whole bear builder, character builder thing.
- 0:10
So, let's see if we can build that whole view in one video.
- 0:16
All right.
- 0:17
So, let's add a new route.
- 0:23
And I'm gonna call it, builder.
- 0:25
And, I'm gonna call the function, builder cuz that's just the kind of dude I am.
- 0:30
'Kay.
- 0:31
So, this builder one is gonna render our pre-built builder template.
- 0:38
And really, that's all I want it to do.
- 0:40
So, I'm just gonna go return render template, and
- 0:43
the template's name is builder.html.
- 0:51
And, saves is gonna be getSavedData, which we can actually do here.
- 0:59
>> getSavedData.
- 1:02
>> Save ourselves one line of code, right?
- 1:05
>> And then, I wanna pass in what these like, these default things are.
- 1:11
>> That are here inside options.
- 1:13
>> If we look in options, there is a dictionary called defaults.
- 1:16
I want to pass that in.
- 1:17
So first, I need to import it.
- 1:20
So, from Options > Import Defaults.
- 1:24
And then, secondly, I need to pass it in here.
- 1:28
So, options equals defaults.
- 1:31
Okay.
- 1:35
So, there's our builder.
- 1:38
And, what I want, my,
- 1:39
the way I'm envisioning this is after the save, it always goes back to the builder.
- 1:44
So, we're gonna actually have those redirect to builder.
- 1:50
Sorry, hit the wrong key.
- 1:52
We're gonna have those redirect to builder, not to index.
- 1:58
'Kay.
- 1:58
So, let's see how that looks.
- 1:59
Let's just hit Build It.
- 2:02
See what happens.
- 2:02
All right.
- 2:05
Well, not a whole lot's on here.
- 2:07
We've got a bear.
- 2:08
And, I've got a button that says Update.
- 2:09
But, that's really all we have.
- 2:14
All right. Let's just,
- 2:14
let's check out this builder thing.
- 2:17
It extends our layout, which is good.
- 2:21
And then, we've got, this, this block.
- 2:27
There's an area here to, to type something in, and there's a bear, and that's it.
- 2:33
'Kay. So, let's add a little bit.
- 2:35
So, the first thing we need to add is, here in this value,
- 2:38
we want to print out saves.getmain.
- 2:41
We wanna do that thing again, so let's refresh.
- 2:47
Hey! There's Mike, cool.
- 2:49
Mike The Bear, that little bear.
- 2:54
And then, now, we want to be able to have a bunch of colors.
- 2:58
So, if we look in options, and like I said, we're not going to edit this.
- 3:03
But, we see we have this colors, we have all this colors things.
- 3:06
So, let's add here inside this div, we're going to loop over all of that.
- 3:12
So, for color in Options > Colors.
- 3:19
Cuz remember, we can basically just write Python in here which is really handy.
- 3:24
We're gonna do this input.
- 3:26
It's a radio.
- 3:29
We'll give it an id of whatever the color name is,
- 3:31
so if the color's black, it'll be black.
- 3:33
We're gonna call it colors.
- 3:38
And, the value is gonna be whatever color we're currently on.
- 3:43
And, what I want to do is, I want to have a thing here
- 3:48
where if this is the currently selected color, like the, the color they've saved.
- 3:53
Then it will select this color.
- 3:56
So, we're gonna add an if here.
- 3:59
And, we can actually do this on the next line, which is kind of cool.
- 4:03
So if saves dot get color
- 4:08
is equal to whatever color we're currently on, then we want to print checked.
- 4:16
And, end that if and end that input.
- 4:19
And then, we're gonna have a label.
- 4:25
And, we're not gonna put anything in that label.
- 4:28
Okay.
- 4:28
Now obviously, the HTML I'm writing here,
- 4:31
I know it's right because we've already covered it with our designer.
- 4:34
So, let's refresh, and there's our color.
- 4:36
So, let's make this a little bigger.
- 4:39
There we go, that's not bad.
- 4:40
There's all of our colors.
- 4:41
So, we can say, all right.
- 4:44
Let's make it blue, hit update.
- 4:47
Oh.
- 4:49
Because we don't know what the form action is.
- 4:50
So, we know how to do this.
- 4:53
And, that'll be for save.
- 4:56
Right?
- 4:57
Let's go back.
- 4:58
Let's hit Update.
- 4:59
Sorry, Refresh.
- 5:03
Hit Update.
- 5:05
Cool, so it came back out.
- 5:07
But we didn't get our color.
- 5:08
Let's look at our Inspector.
- 5:12
Blue 504, er, 054.
- 5:16
[BLANK_AUDIO]
- 5:18
[SOUND] Oh.
- 5:24
This should be colors.
- 5:28
Refresh. There we go.
- 5:30
There's our color.
- 5:32
So, I want to make that actually show up.
- 5:35
So, let's go and modify this grid this, our class down here,
- 5:40
and let's add in saves.get color, so colors.
- 5:47
See? I almost did it again.
- 5:49
And, let's refresh that.
- 5:51
Hey, look at that.
- 5:52
Our bear is now bright orange.
- 5:55
That's pretty cool.
- 5:56
I like having a bright orange bear back there.
- 6:00
Or, the blue one, that's pretty cool too. | https://teamtreehouse.com/library/flask-basics/character-builder/starting-the-builder | CC-MAIN-2016-50 | refinedweb | 1,197 | 95.57 |
Raspberry Pi Cluster Node – 07 Sending data to the Slave
This post builds on my previous posts in the Raspberry Pi Cluster series by adding the ability to receive data from the master. In this update, I will be adding a way for the slave to request data and have it returned by the master.
Moving machine details into its own file
The first thing that I am going to do is move the machine details currently in the slave, to a separate file. In the future, this will allow obtaining more information about the node. However, I am moving it into a separate file for now so the Slave and Master can access the data.
For now, my machine file will include the following function and be accessible to both slave and master:
import psutil import platform import multiprocessing import socket def get_base_machine_info(): return { 'hostname': socket.gethostname(), 'cpu_percent_used': psutil.cpu_percent(1), 'ram': psutil.virtual_memory().total, 'cpu': platform.processor(), 'cpu_cores': multiprocessing.cpu_count() }
Configuring the Master to respond to information requests
In the master message handling while loop I am going to add a new message type to be handled. The master will listen to any messages with the type
info and return any information the slave requests. The payload will define what type of information it is looking for and return it. The following segment of code is the new
elif statement used for
info type messages.
elif message['type'] == 'info': logger.info("Slave wants to know my info about " + message['payload']) if message['payload'] == 'computer_details': clientsocket.send(create_payload(get_base_machine_info(), "master_info")) else: clientsocket.send(create_payload("unknown", "bad_message"))
Here I am checking if the message type is
info and logging a message that the slave is requesting information about the specific payload. Each payload will require different handling and more types will be added in the future. For now I have added a single type
computer_details to match the message type the slave sends the master.
This calls the
get_base_machine_info() function we earlier abstracted into a function, imported from the
MachineInfo file.
If the slave requests information about an unknown type a
bad_message payload is created and returned to the slave. Going forward this will be a standard payload type that will be handled differently.
Once the master has sent the requested data to the slave it continues to listen to messages and act on them.
Configuring the slave to request information from the Master
I have decided that as part of the initial hello to the master the slave will send its machine details, and request the same from the master. This is also refactored a little to move the piece of code handling the machine details into the above
MachineInfo file. Below is the new handshake for the slave as it joins the cluster.
logger.info("Sending an initial hello to master") sock.send(create_payload(get_base_machine_info(), 'computer_details')) sock.send(create_payload("computer_details", "info")) message = get_message(sock) logger.info("We have information about the master " + json.dumps(message['payload']))
Once we have sent our machine info we request the machine info of the master. This is again performed using create_payload with the type
info and payload
computer_details.
Once we have sent the message asking for the master’s details we then use get_message to retrieve the reply from the master. This is used identically to how the master receives the slave’s messages and uses the same underlying shared code.
Summary
Now we have a structure which lets the master and slave communicate by sending and requesting information.
In the next post I will look at adding a few more payloads to let the master control the slave further. These will form the basis of the master requesting the slave to perform computation.
The full code is available on Github, any comments or questions can be raised there as issues or posted below. | https://chewett.co.uk/blog/1781/raspberry-pi-cluster-node-07-sending-data-to-the-slave/ | CC-MAIN-2020-40 | refinedweb | 642 | 55.64 |
The locking rules for this qotd_3 driver are as follows:
You must have exclusive access to do any of the following operations. To have exclusive access, you must own the mutex or you must set QOTD_BUSY. Threads must wait on QOTD_BUSY.
Test the contents of the storage buffer.
Modify the contents of the storage buffer.
Modify the size of the storage buffer.
Modify variables that refer to the address of the storage buffer.
If your operation does not need to sleep, do the following actions:
Acquire the mutex.
Wait until QOTD_BUSY is cleared. When the thread that set QOTD_BUSY clears QOTD_BUSY, that thread also should signal threads waiting on the condition variable and then drop the mutex.
Perform your operation. You do not need to set QOTD_BUSY before you perform your operation.
Drop the mutex.
The following code sample illustrates this rule:
mutex_enter(&qsp->lock); while (qsp->flags & QOTD_BUSY) { if (cv_wait_sig(&qsp->cv, &qsp->lock) == 0) { mutex_exit(&qsp->lock); ddi_umem_free(new_cookie); return (EINTR); } } memcpy(new_qotd, qsp->qotd, min(qsp->qotd_len, new_len)); ddi_umem_free(qsp->qotd_cookie); qsp->qotd = new_qotd; qsp->qotd_cookie = new_cookie; qsp->qotd_len = new_len; qsp->flags |= QOTD_CHANGED; mutex_exit(&qsp->lock);
If your operation must sleep, do the following actions:
Acquire the mutex.
Set QOTD_BUSY.
Drop the mutex.
Perform your operation.
Reacquire the mutex.
Signal any threads waiting on the condition variable.
Drop the mutex.
These locking rules are very simple. These three rules ensure consistent access to the buffer and its metadata. Realistic drivers probably have more complex locking requirements. For example, drivers that use ring buffers or drivers that manage multiple register sets or multiple devices have more complex locking requirements. | http://docs.oracle.com/cd/E19253-01/817-5789/fgpaf/index.html | CC-MAIN-2015-40 | refinedweb | 271 | 60.41 |
On Thu, 13 Jul 2017 05:32:26 PDT (-0700), [email protected] wrote: > On Thu, Jul 13, 2017 at 09:59:53PM +1000, Michael Ellerman wrote: >> Palmer Dabbelt <[email protected]> writes: >> >> > On Wed, 12 Jul 2017 04:04:00 PDT (-0700), [email protected] wrote: >> >> Palmer Dabbelt <[email protected]> writes: >> >> >> >>> On Mon, 10 Jul 2017 23:21:07 PDT (-0700), [email protected] wrote: >> >>>> Palmer Dabbelt <[email protected]> writes: >> >>>>> >> >>>> ... >> >>>>> +#ifdef CONFIG_EARLY_PRINTK >> >>>>> +static void sbi_console_write(struct console *co, const char *buf, >> >>>>> + unsigned int n) >> >>>>> +{ >> >>>>> + int i; >> >>>>> + >> >>>>> + for (i = 0; i < n; ++i) { >> >>>>> + if (buf[i] == '\n') >> >>>>> + sbi_console_putchar('\r'); >> >>>>> + sbi_console_putchar(buf[i]); >> >>>>> + } >> >>>>> +} >> >>>>> + >> >>>>> +static struct console early_console_dev __initdata = { >> >>>>> + .name = "early", >> >>>>> + .write = sbi_console_write, >> >>>>> + .flags = CON_PRINTBUFFER | CON_BOOT, >> >>>> >> >>>> AFAICS you could add CON_ANYTIME here, which would mean this console >> >>>> would print output before the CPU is online. >> >>>> >> >>>> I think it doesn't currently matter because you call parse_early_param() >> >>>> from setup_arch(), at which point the boot CPU has been marked online. >> >>>> >> >>>> But if this console can actually work earlier then you might be better >> >>>> off just registering it unconditionally very early. >> >>> >> >>> That seems like a good idea. I'm not familiar with how all this works, >> >>> but >> >>> from my understanding of this early_initcall() should be sufficient to >> >>> make >> >>> this work? The only other driver that sets CON_ANYTIME and supports >> >>> EARLY_PRINTK is hvc_xen, but that installs a header to let init code >> >>> register >> >>> the console directly. The early_initcall mechanism seems cleaner if it >> >>> does >> >>> the right thing. >> >> >> >> Unfortunately early_initcall is not very "early" :) It's earlier than >> >> all the other initcalls, but it's late compared to most of the arch boot >> >> code. >> >> >> >> The early_param() will work better, ie. register the console earlier >> >> and increase the chance of you getting output from an early crash, than >> >> early_initcall. But it requires you to put earlyprintk on the command >> >> line. >> >> >> >> The best option is to just register the console as early as you can, ie. >> >> as soon as it can give you output. So somewhere in your setup_arch(), or >> >> even earlier (I haven't read your boot code). >> > >> > Doing it that way would require either moving the TTY driver into arch >> > code (it >> > was specifically suggested we move it out) or adding a header file to allow >> > setup_arch() to call into the driver (XEN does this, and we're doing it >> > for our >> > timer, but it seems hacky). >> >> I think it's fairly uncontroversial to have the early console in arch >> code, especially in a case like this where there's no code shared >> between the console and the TTY driver. But maybe someone will prove me >> wrong. >> >> Doing it the other way is not really hacky IMO, you can just have an >> extern for the early console in one of your asm headers. > > For reference both metag and mips do something like this for JTAG based > consoles (with drivers both residing in drivers/tty/): > > > > > > > > Its not all that pretty but it gets you console output that much > earlier and is a fairly special case, so I think its worth it.
If someone else is doing it, then it's good enough for me :). How does this look? diff --git a/arch/riscv/kernel/setup.c b/arch/riscv/kernel/setup.c index 319fad96f537..148fd0dc414b 100644 --- a/arch/riscv/kernel/setup.c +++ b/arch/riscv/kernel/setup.c @@ -59,6 +59,14 @@ unsigned long pfn_base; /* The lucky hart to first increment this variable will boot the other cores */ atomic_t hart_lottery; +#if defined(CONFIG_HVC_RISCV_SBI) && defined(CONFIG_EARLY_PRINTK) +/* + * The SBI's early console lives in hvc_riscv_sbi.c, but we want very early + * access + */ +extern struct console riscv_sbi_early_console_dev; +#endif + #ifdef CONFIG_BLK_DEV_INITRD static void __init setup_initrd(void) { @@ -203,6 +211,13 @@ static void __init setup_bootmem(void) void __init setup_arch(char **cmdline_p) { +#if defined(CONFIG_TTY_RISCV_SBI) && defined(CONFIG_EARLY_PRINTK) + if (likely(early_console == NULL)) { + early_console = &riscv_sbi_early_console; + register_console(early_console); + } +#endif + #ifdef CONFIG_CMDLINE_BOOL #ifdef CONFIG_CMDLINE_OVERRIDE strlcpy(boot_command_line, builtin_cmdline, COMMAND_LINE_SIZE); diff --git a/drivers/tty/hvc/hvc_riscv_sbi.c b/drivers/tty/hvc/hvc_riscv_sbi.c index 534d6b75a2c6..20a6bfda4e32 100644 --- a/drivers/tty/hvc/hvc_riscv_sbi.c +++ b/drivers/tty/hvc/hvc_riscv_sbi.c @@ -84,20 +84,11 @@ static void sbi_console_write(struct console *co, const char *buf, } } -static struct console early_console_dev __initdata = { +/* This is used by arch/riscv/kernel/setup.c */ +struct console riscv_sbi_early_console_dev __initdata = { .name = "early", .write = sbi_console_write, .flags = CON_PRINTBUFFER | CON_BOOT | CON_ANYTIME, .index = -1 }; - -static int __init setup_early_printk(void) -{ - if (early_console == NULL) { - early_console = &early_console_dev; - register_console(early_console); - } - return 0; -} -early_initcall(setup_early_printk); #endif | https://www.mail-archive.com/[email protected]/msg1443657.html | CC-MAIN-2019-30 | refinedweb | 733 | 55.13 |
NB 6.0 Visual Web Table Component Binding Enhancements To Use POJOs
By winston on Oct 14, 2007
Netbeans 5.5.1 Visual Web Table Component allows binding only via Data Providers such as CachedRowSetDataProvider. While working with Hibernate, JPA or the Spring framework, you must be dealing with Plain Old Java Objects (POJOs). If you have a List or Array of POJOs, then the only way to bind them to the Table Component is by creating an ObjectListDataProvider or ObjectArrayDataProvider. Also, due to a bug in Netbeans 5.5.1 Visual Web design time, you may have to use the work around I explained in one of my earlier blogs - Work around for Object List Data Provider design time problem.
We have changed a couple of things in Netbeans 6.0 related to Table Component binding.
- Creation of an Object Array or a List Data Provider is not necessary to bind a Table Component to POJOs
- We have eliminated the compile, close and reopen the project ceremony after creating the POJOs in the project. It is enough to compile the project and maybe refresh the designer if the page is already open. (My wish is to eliminate even the need for compiling the project, by source modeling all the Java sources in the project. The main obstacle is performance. So, currently we handle only compiled objects.)
Binding Array of POJOs
In order to bind an Array of POJOs, first you need to create a property in the backing Page Bean or some other Managed Bean (e.g. SessionBean1) that returns an Array (say myArray) of the specified object (say MyObject). To create the property:
- Type the line private MyObject[] myArray;
- Right click on this line and select the action "Generate Code"
- In the resulting popup menu select "Getter and Setter"
The above would add the code to the Java source as
private MyObject[] myArray;

public MyObject[] getMyArray() {
    return myArray;
}

public void setMyArray(MyObject[] myArray) {
    this.myArray = myArray;
}
Now to bind myArray, compile the project, refresh the designer and bring up the Table Layout. You will find myArray listed in the drop down list as shown in the picture below.
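For reference, here is a minimal sketch of what such a POJO could look like. The class name MyObject and its properties (name and price) are only assumptions used for illustration; any JavaBean-style class with a public no-argument constructor and getter/setter pairs will do, and each readable property shows up as a bindable column in the Table Layout dialog.

public class MyObject {
    private String name;
    private double price;

    public MyObject() {
    }

    public MyObject(String name, double price) {
        this.name = name;
        this.price = price;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public double getPrice() {
        return price;
    }

    public void setPrice(double price) {
        this.price = price;
    }
}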
Binding List of POJOs
Binding a List of POJOs is more or less the same as binding arrays, with one important difference. In the case of an Array of POJOs, the Array itself has information about the type of the POJO. However, in the case of a List of POJOs, the Table Component cannot determine the type of the POJO, since List is one of the generic interfaces in the Collections Framework. In order to solve this problem, Java Generics come in handy. You can specify the type of the POJO that will be added to the list to the design system using a parameterized List, as shown below. This is really useful if the List will be populated lazily during runtime. Add the myList property to the backing Page Bean:
- Type the line private List<MyObject> myList;
- Right click on this line and select the action "Generate Code"
- In the resulting popup menu select "Getter and Setter"
This would add the code to the Java source as
private List<MyObject> myList;

public List<MyObject> getMyList() {
    return myList;
}

public void setMyList(List<MyObject> myList) {
    this.myList = myList;
}
Similar to binding an Array of POJOs, bring up the Table Layout and myList will be listed in the drop down list as shown in the picture below. Don't forget to compile the project and refresh the designer.
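Since the List is often populated lazily at runtime, one simple approach is to fill it inside the getter the first time it is requested. The sketch below assumes the MyObject class shown earlier and that java.util.List and java.util.ArrayList are imported; the hard-coded rows are placeholders for whatever really loads your data (JPA, Hibernate, a web service call, and so on).

public List<MyObject> getMyList() {
    if (myList == null) {
        // Populate lazily the first time the Table Component asks for the data
        myList = new ArrayList<MyObject>();
        myList.add(new MyObject("First item", 10.0));
        myList.add(new MyObject("Second item", 20.0));
    }
    return myList;
}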
Note: Even though explicit creation of a Data Provider is not necessary, the Table Component implicitly creates the Data Provider. You might want to get information from the underlying Data Provider. For example, if you have a column with a Button, CheckBox or RadioButton, you might want to find the location of the selected row. Here is some code that would help you do that.
RowKey[] selectedRowKeys = getTableRowGroup1().getSelectedRowKeys();
MyObject[] myArray = getSessionBean1().getMyArray();
int rowId = Integer.parseInt(selectedRowKeys[0].getRowId());
MyObject myObject = myArray[rowId];
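The same idea works when the table is bound to a List instead of an Array, and it extends naturally to multiple-row selection. The following sketch is just an illustration; it assumes the myList property from the previous section and that java.util.List is imported.

RowKey[] selectedRowKeys = getTableRowGroup1().getSelectedRowKeys();
List<MyObject> myList = getSessionBean1().getMyList();
for (RowKey rowKey : selectedRowKeys) {
    int rowId = Integer.parseInt(rowKey.getRowId());
    MyObject myObject = myList.get(rowId);
    // Use myObject as needed, for example collect it for update or deletion
}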
Thank you for the new enhancements.
I am wondering if the visual table component in NB 6 can bind some javax.faces.model.DataModel objects; I can see some non-visual JSF/JPA examples doing it that way. Meanwhile I would like to know if some list components like the dropdown list can bind Array or List objects as the table component does, because those kinds of components are equally useful as the table.
Thank you!
Posted by Kane Li on October 14, 2007 at 09:11 PM PDT #
I've tried the array approach, the list approach and the (explicit) DataProvider approach. The list approach won't work, no matter how many times I recompile, refresh the visual editor or restart NetBeans. I've tried the following code on both the session bean and the page's backing bean; any ideas? Thanks! -- Erik
private List<InvoiceDetail> invoiceDetailList;
public List<InvoiceDetail> getInvoiceDetailList() {
return invoiceDetailList;
}
public void setInvoiceDetailList(List<InvoiceDetail> myInvoiceDetailList) {
this.invoiceDetailList = myInvoiceDetailList;
}
Posted by Erik on October 19, 2007 at 12:06 AM PDT #
Hi Erik, which version of NB are you using? This feature is available only in Netbeans 6.0 Beta1 or later. I tried now, it works for me.
Posted by Winston Prakash on October 19, 2007 at 12:22 AM PDT #
Hi Winston, thanks for your reply. I'm using 6.0beta1.
Posted by Erik on October 19, 2007 at 12:32 AM PDT #
Erik, post your sample code at the nbuser alias (put VWP in the subject), I'll take a look at it.
Posted by Winston Prakash on October 19, 2007 at 12:36 AM PDT #
Winston, my direct list approach works fine, but I can't find the getDataProvider() method in com.sun.webui.jsf.component.Table
Any ideas?
Product Version: NetBeans IDE Dev (Build 20071005123835)
Posted by v.m.kotov on October 23, 2007 at 05:19 AM PDT #
Ha!, my bad. The correct way to get the TableDataProvider is using the TableRowGroup component method tableRowGroup.getTableRowDataProvider().getTableDataProvider();
Posted by Winston Prakash on October 23, 2007 at 08:00 AM PDT #
Thanks for reply, but getTableRowDataProvider() - has protected access, so I can't call it from for ex. Page1.java.
Posted by v.m.kotov on October 23, 2007 at 08:37 AM PDT #
Hi and thanks for your informative article.
I am having pretty much the same problem as v.m.kotov.
I am using Netbeans 6.0 Beta 1 on a JavaEE5 project. I have created a table with a binding to an ArrayList of POJOs. I now want to add a "delete row" button on each row of the table. I obviously need to access the underlying data provider in order to delete the row, based on the current RowKey. However, I cannot find the getTableRowDataProvider() or getTableDataProvider() methods you are talking about...
Any help would be very much appreciated...
Posted by Zzzzz on October 24, 2007 at 02:35 AM PDT #
OK I found a solution to my problem.
I created a subclass of TableRowGroup and modified the rowGroup of my table so that it is an instance of this subclass. I created a method deleteRow(RowKey rk) in the subclass. The subclass can now access the protected method getTableRowDataProvider(), so I can get a reference to the data provider and delete the row.
Note that I also had to cast the data provider to ObjectListDataProvider, and call commitChanges() in order for the deletion to propagate to the List and the table...
(Hope that this approach also helps v.m.kotov)
All this would be much simpler if getTableRowDataProvider() was public. So is there any good reason that it is declared as protected?
Cheers...
Posted by Zzzzz on October 24, 2007 at 09:45 PM PDT #
Table Component is maintained by Woodstock team only the design time support is maintained by VW team. I talked to Woodstock team. Looks like they did not think there is any need to get back the dataprovider to manipulate the data given to Table Component. He said the possible solution would be use TableRowGroup.getSourceData(). But looks like there may be need for proper casting etc. BTW, it is possible to update, delete, append with out getting back the Dataprovider from table. Look at the article I wrote (which uses Array of Objects obtained from database via JPA)
Posted by Winston Prakash on October 25, 2007 at 12:25 AM PDT #
Thanks Winston, very usefull link, I sugest to replace Note about getTableDataProvider() with smth like this:
RowKey[] selectedRowKeys = getTableRowGroup1().getSelectedRowKeys();
MyObject[] myArray = getSessionBean1().getMyArray();
int rowId = Integer.parseInt(selectedRowKeys[0].getRowId());
MyObject myObject = myArray[rowId];
Posted by v.m.kotov on October 25, 2007 at 07:31 AM PDT #
Thanks. I added your suggestion.
Posted by Winston Prakash on October 25, 2007 at 08:12 AM PDT #
Hi Winston
Thanks for your tip it is very useful for me .
Posted by saeed on October 27, 2007 at 10:59 PM PDT #
Hi Winston.
how can i link a ejb (entity class, in web service) with a Table in visual WEB pack? i want to write acces data model and then use in a VWP project, mobile aplicattion or J2se.
Pd: Sorry for my horrible english. i will study english the next year. it is my promise
Posted by javiersinnada on November 28, 2007 at 07:07 AM PST #
Hi Javier, take a look at my tutorial. In this one I have separated out the data access as separate model project and view as another web application. You might want something like that, I suppose.
BTW, your English is not that bad :)
Posted by Winston Prakash on November 29, 2007 at 01:25 AM PST #
Hi Winston, i was working on the jpa y vwp tutorial(), i have a problem, it is when the connection is with postgres...that's the error:
(Oracle TopLink Essentials - 2.0 (Build b58g-fcs (09/07/2007))): oracle.toplink.essentials.exceptions.DatabaseException
Internal Exception: java.sql.SQLException: No suitable driver found for jdbc:postgresql://localhost:5432/sample
Error Code: 0
at oracle.toplink.essentials.exceptions.DatabaseException.sqlException(DatabaseException.java:305)
at oracle.toplink.essentials.sessions.DefaultConnector.connect(DefaultConnector.java:102)
at oracle.toplink.essentials.sessions.DatasourceLogin.connectToDatasource(DatasourceLogin.java:184)
.............
note: if drop the dataproviders normally with the same connection it works at perfection.
Posted by Gustavo on December 12, 2007 at 03:38 AM PST #
Hi, thanks for those tutorials, it's realy help me to understood how to manged with db.
I had similar problem to Gustavo with postgresql database, also glassfish displaied that it can't find suitable driver.
Realy 2 days with it, and at last found what's going on, simple you have to add to classpath in glassfish admin console fully-qualified path name for the driver’s JAR file. (Application Server->JVM Setting Tab->Path)
Posted by Jakub on December 27, 2007 at 01:22 AM PST #
Posted by weblog on January 16, 2008 at 05:57 PM PST #
Is it possible to show data from an underlying pojo in the table?
I.e. PojoA has a PojoB property and I want to show a property from PojoB but it's PojoA that's bound to the table. Like this: pojoA.getPojoB.getName().
Posted by Mats on January 29, 2008 at 09:32 PM PST #
You try something like ..
Create a method in the page bean
public String getName(){
PojoB pojoa = (PojoA) getValue("#{currentRow.value['PojoA.PojoB']}");
return pojoa.getName();
}
and then bind this method to the static text in the table column.
Posted by guest on February 01, 2008 at 05:07 AM PST #
I can't seem to get the table to bind to a list of objects (followed all your instructions). Array of objects works fine as the table in the UI recognizes it as a possible source. When trying to bind to List<MyObject> getMyObjects the table in the UI doesn't see it.
Posted by Mark on February 05, 2008 at 10:41 PM PST #
When binding a Table component to an array/list of POJO, how does the component see properties that are in the subclasses of the POJO? Since the binding is done at design time, the subclass properties don't appear in the "Bind to Data" dialog. That is, if my POJO is Person, which has Employee as a subclass, how do I bind a property of Employee, say worksFor, to the Table component? Any help is greatly appreciated.
Posted by Billy Lim on March 27, 2008 at 12:54 AM PDT #
Hi,
I discovered the following bug while trying to bind some entity classes to a VWP table in Netbeans 6.1 beta:
1. NB does not recognize and show the bean for an Array in Navigator and Dropdown (Table Layout/Get Data From) if the class is not in the same project.
2. NB does not recognize the fields when binding a generic List<E> if class E is not in the same project.
The project containing the entity classes is, of course, added.
Regards,
Markus
Posted by Markus on March 27, 2008 at 11:38 PM PDT #
Markus, a issue has been filed against the problem you mentioned and being investigated.
Posted by guest on April 01, 2008 at 12:50 AM PDT #
I'm using Netbeans 6.0.1 and NB doesn't recognize my list of beans if the entities are not in the same project. Works well with array of beans though
Posted by guest on April 02, 2008 at 11:23 AM PDT #
Buen dia Doctor.
tengo una inquietud mas, como se podria hacer una tabla editable con woodstock utilizando una list cuando vinculo la tabla con un ObjectListDataProvider yo lo se hacer pero ahora quisiera probar cuando vinculo la tabla con un simple List,
este es el codigo de la jsp
<webuijsf:table
<webuijsf:tableRowGroup
<webuijsf:tableColumn
<webuijsf:staticText
</webuijsf:tableColumn>
<webuijsf:tableColumn
<webuijsf:textField
</webuijsf:tableColumn>
</webuijsf:tableRowGroup>
</webuijsf:table>
este es la declaracion de la List en en sessionBean1
List<newPruebaDto> lista=new ArrayList<newPruebaDto>();
public List<newPruebaDto> getLista() {
return lista;
}
public void setLista(List<newPruebaDto> lista) {
this.lista = lista;
}
y la clase newPruebaDto tiene esta declaracion
public class newPruebaDto implements Serializable {
private String codigo;
private String nombre;
public newPruebaDto(String codigo, String nombre) {
this.codigo = codigo;
this.nombre = nombre;
}
public newPruebaDto() {
}
public String getCodigo() {
return codigo;
}
public void setCodigo(String codigo) {
this.codigo = codigo;
}
public String getNombre() {
return nombre;
}
public void setNombre(String nombre) {
this.nombre = nombre;
}
}
agradezco la ayuda porfa
Posted by Jaider rodriguez lozano on April 17, 2008 at 11:19 AM PDT #
Hi Winston,
Thanks a lot for all your tutorials, they have been a great help. I've been having an issue with buttons in a column of a table. When the table is bound to an ObjectListDataProvider the button actions fire appropriately, however if I change the binding to the List, rather than the DataProvider, the button events stop firing.
Any help would be greatly appreciated.
Regards,
Judd
Posted by guest on April 20, 2008 at 11:06 AM PDT #
No matter if I use an Array or an ObjectListDataProvider, I do not get my table sortable. That is, the sort buttons in the column heads are not provided. What can I do about that?
Thank you for your hints,
Stefan
Posted by Stefan Bley on April 27, 2008 at 09:13 PM PDT #
Hi Winston,
I would like to know if it is possible to use a table component bind to an array object locate on request bean?
I am executing a SQL and saving the result in an array object, however it only work if I locate the array on session bean, but this seems inefficient because all data remain on session even when this is not used
Hola Winston,
Me gustaria saber si es posible usar un table component(incluyendo las funcionalidad de ordenamiento y paginacion) que obtenga los datos de un array ubicado en request?
Actualmente yo estoy ejecutando un SQL y guardando los resultado en un array, para posteriormente cargar los datos en el table component, sin embargo esto solo funciona si localizo el array en Session, ya que si lo localizo en Request seria necesario ejecutar el query cada vez que se pagina o se ordena el listado.
El problema es que ambas cosas me parecen ineficientes, si guardo todos los resultado en sesion estos van a permanecer alli aun cuando no sean usados y si lo guardo en request ejecutaria el Query (select) cada vez que se pagina o se ordena el listado, entonces la pregunta es:
Es posible hacer permanecer los datos cargados en el table component en request sin que sea necesarios recargarlos nuevamente cada vez que se pagine o ordene el listado?
Posted by Roger on April 29, 2008 at 12:14 AM PDT #
Hi Winston,
I am also experiencing the same error as Judd. If I use a List<MyObject> instead of a DataProvider, any buttons in the table columns will not fire.
Is there a solution for this problem?
Thanks.
Josh.
Posted by Josh on May 01, 2008 at 06:29 AM PDT #
Hi, Winston!
I have a POJO with some properties and i can bind it to a table with no problems. I want to diplay one of the properties in that table in text fields, instead of static text, so the user will could change the values. That's ok too. My question is how could I update my pojo list with the values entered by users? If I work with ObjectListDataProvider, I just use commitChanges().
Thanks in advance!
Estevão Lisauskas
Posted by Estevão on May 17, 2008 at 11:32 PM PDT #
All the names of array properties of my java class object are displayed as table-component column titles. However, the contents (values) of these arrays are not displayed on the table. The table says "No items found"
Has anyone else apart from Dr. Winston Prakash succeeded in having the table component display contents of array of pojos?
Posted by Raymond Rugemalira on May 21, 2008 at 03:05 AM PDT #
My problem appears to be similar with the poster above, Raymond Rugemalira.
I am using the array of objects method. After I Bind to Data to a table component (as described in Winston's personal site), but when I run it I am not able to see the data inside the table. In the design view, I can see that the binding is done, but only when I run it, it says on the table "no items found".
It'll be nice if someone can share a solution, as I can do that part two of this tutorial to add, delete & update.
Posted by Marat on July 01, 2008 at 06:16 PM PDT #
The "binding enhancements" for List does work for the woodstock table components, but not for the standard DataTable component. What am i missing? Thanks in advance.
Posted by ts on July 12, 2008 at 06:01 AM PDT #
Hi Winston,
thanks for these helpful information.
I would like to know if there's a way to \*really\* bind a webuijsf:table with a List of POJO.
I have a table with some textfields in it.
The problem I'm facing is that \*real binding\* does not take place ( framework should update the underlying List ), even if the List is in session scope.
My set() methods never get called ... so I need to grasp data \*by hand\* from HttpServletRequest using inside-table components' client ids, and put back
data into session list ( calling every set() method on my POJO ).
Hope there's a less \*hard\* method.
Thanks in advance
Tony
Posted by Tonyweb on September 18, 2008 at 05:20 PM PDT #
hi friends ,
it would be great if you could tell me how to add the manually data to above mention solution , because i have a problem that, i want to display xml prase data in woodstock table .But i have created a class exactly how you mentioned in this solution page . but i have no idea where to input this prased xml data which is stored in a array so that i could displayed in table.
Posted by jaysonkn on October 02, 2008 at 03:40 PM PDT #
A simple questionb about netbeans 6.5rc2. HOW DO I RESFRESH THE DESIGNER???
Posted by arragon99 on November 10, 2008 at 01:00 AM PST #
oh ja even I found the possibility to refresh the jsp site via the navigator view. ;) Sorry for that
Posted by arragon99 on November 10, 2008 at 09:24 PM PST #
Hi winston, I need your collaboration with a performance problem relate to the Table component,
it load all data and the page its too slowly
Currently are only 200 records
How can I avoid that this component load the all data?
It should load the data when it paginates
Thanks in advance
Sorry for my bad English
Posted by rogerzam on November 11, 2008 at 06:10 AM PST #
I was able to bind the table to myArray, but I don't see any information. For example, myArray is a String array that contains "1", "2", and "3". The table fields say CASE_INSENSITIVE_ORDER, bytes, and empty. The information I get is 3 rows of jibberish. How can I put the data that I want in my table using an array? I've been working on this for hours. Any help would be much appreciated.
Posted by Luke on November 13, 2008 at 11:14 AM PST #
Hi Winston, I am using NB6.1, it seems to work great, and thank you very much for all your tutorials. Everything seems to work fine. My question is, using java persistence, how do I populate a drop down list. I have no problem populating a table. Do I still need to drop a table onto the drop down in the design panel, or can i use the entities to populate the list. thanks
Posted by Jon on December 09, 2008 at 06:46 AM PST #
Previous question re-phrased. How to populate drop down list from a database table using java persistence?
Posted by jon on December 09, 2008 at 08:00 AM PST #
I am using Netbeans 6.5 with the Woodstock components that come with it. So far I have managed to bind woodstock tables to my custom objects as shown in your article with the following method:
private List<MyObject> myList;
public List<MyObject> getMyList(){
return myList;
}
public void setMyList(List<MyObject> myArray){
this.myList = myList;
}
For some reason though, yesterday, I created a new page, followed the same usual steps. Only this time, the new woodstock table I dropped on the page was not listing my List in the "Get Data From" dropdown, when I tried to Bind To Data. On the other hand, binding to MyObject[] myArray still works as usual.
I then went to other pages where I had successfully bound tables to List<T>. Running the pages worked at runtime. However when I tried to access the 'Bind To Data' menu, Netbeans was throwing a Null Pointer Exception (on every table) as follows:
java.lang.NullPointerException
at com.sun.webui.jsf.component.table.TableBindToDataPanel.setTableDataProviderDesignState(TableBindToDataPanel.java:211)
at com.sun.webui.jsf.component.table.TableBindToDataPanel.initialize(TableBindToDataPanel.java:179)
at com.sun.webui.jsf.component.table.TableBindToDataPanel.<init>(TableBindToDataPanel.java:84)
at com.sun.webui.jsf.component.customizers.TableBindToDataCustomizer.getCustomizerPanel(TableBindToDataCustomizer.java:55)
at org.netbeans.modules.visualweb.insync.CustomizerDisplayer.show(CustomizerDisplayer.java:115)
at org.netbeans.modules.visualweb.insync.ResultHandler.handleResult(ResultHandler.java:205)
at org.netbeans.modules.visualweb.insync.action.AbstractDisplayActionAction.invokeDisplayAction(AbstractDisplayActionAction.java:145)
at org.netbeans.modules.visualweb.insync.action.AbstractDisplayActionAction.access$200(AbstractDisplayActionAction.java:94)
at org.netbeans.modules.visualweb.insync.action.AbstractDisplayActionAction$SingleDisplayActionAction.actionPerformed(AbstractDisplayActionAction.java:272)
[catch] at java.awt.EventQueue.dispatchEvent(EventQueue.java:599)
at org.netbeans.core.TimableEventQueue.dispatchEvent(TimableEventQueue.java:104))
I have tried reinstalling Netbeans 6.5, to no avail. I have also removed the '6.5' folder in my Documents and Settings folder, to no avail.
Is there anything that can be done?
Thanks.
Posted by Andrea DeMarco on February 24, 2009 at 04:00 PM PST #
Hi Winston,
I am using NB6.1 ,By changing the static text to text field in table layout we can edit the data in to jsf table.But how
can we update the whole columns we edited, Into database.
Posted by chandu on October 27, 2009 at 09:28 PM PDT # | https://blogs.oracle.com/winston/entry/nb6_table_binding_enhancement | CC-MAIN-2015-18 | refinedweb | 4,089 | 54.52 |
I have a BPM application where I have created multiple versions from a single shapshot (A). Now I need to copy all epv values from track of the snapshot A to all the new tracks which I created from A.
Answer by SergeiMalynovskyi (2706) | Nov 14, 2016 at 06:34 AM
If you're talking about Runtime server then it does not matter if you have snapshots on different tracks, for the Runtime it's all snapshots of a particular process app. There are no tracks on the runtime server as such.
So, to sync EPVs from an old snapshot to a new one you use the following technique:
Log into Process Admin Console
Click installed Apps
Locate and click the new snapshot
On the right side click 'Sync Settings'
Choose the old snapshot in the panel that appeared
Here you choose whether to sync Participant Groups, EPVs, etc or any combination of them.
Click Sync
Now your EPVs should be updated to the values from your older snapshot. NOTE: Default values for Env variables are never migrated to newer snapshots during the sync settings. When migrating values that were updated in previous versions, it will only migrate values that would be the latest set value between all the involved snapshots. Default values are not considered during this migration of values.
If you're talking about copying default values between snapshots in DEV server then it's a different story and I don't think there is any ootb way to do that automatically. It should be easy to write your own code for that using tw.epv.* namespace and methods.
110 people are following this question.
BPM Migration from on-premise to Cloud 3 Answers
Is there a Swagger definition of the Process Federation Server REST API? 1 Answer | https://developer.ibm.com/answers/questions/320211/$%7BawardType.awardUrl%7D/ | CC-MAIN-2019-18 | refinedweb | 300 | 67.18 |
On Thu, Mar 31, 2011 at 10:01 AM, phelix wrote: > > >> I am trying to write an extension to add autocompletion for "self". > >Have you tried adding this functionality to the existing AutoComplete > >extension? That could save you a lot of work and would be the best way to > >do it. > I took a good look at it and also at CodeContext and learned a lot from it. > But it was easier to put it into its own extension and seemed cleaner to > me. > Of course the completions show up in AutoComplete. What it does is that it > simply points a global Reference "self" to the classname(s) it finds in the > editor above the cursor. > For your own good and for the good of IDLE, I urge you to try implementing this by improving AutoComplete. Your approach (as I understand it) both requires first fixing a delicate bug regarding IDLE's event handling and messes around with the Shell's namespace. I don't think you will manage to get such a patch accepted. >The best tip I can give you is to run IDLE from the command line (on > >Windows, C:\Python32\python.exe C:\Python32\Lib\idlelib\idle.py). Then you > >can see output and exceptions printed to the command console. You could > also > >print your own debugging info using "print" instead of using that s_print > >function, and it won't show up in IDLE's shell, just in the command > console. > I figured this out by coincidence just after I had posted this question. > Though it seems very obvious to me know this is a very good hint and should > by all means be included in the extend.txt file or on some website about > IDLE Development. > Good point. If you submit a patch which updates extend.txt appropriately, I'll review it. - Tal -------------- next part -------------- An HTML attachment was scrubbed... URL: <> | https://mail.python.org/pipermail/idle-dev/2011-March/003028.html | CC-MAIN-2018-05 | refinedweb | 316 | 72.46 |
Turtle!¶
Turtle graphics is a term for a method of programming vector graphics using a cursor (the “turtle”) on a Cartesian plane. The turtle module is Python’s implementation of this method.
Exercise 0¶
In gedit, type the following code into a new document and save it as turtle1.py:
import turtle turtle.left(90) turtle.forward(25) turtle.left(90) turtle.forward(25) turtle.left(90) turtle.forward(25) turtle.left(90) turtle.forward(25) turtle.exitonclick()
Run turtle1.py using the CLI. A “Python Turtle Graphics” window should pop up and you should see an animation resulting in a black outlined square with sides that are 25 pixels long. Click inside the window and it should close - this is because of the line turtle.exitonclick() in your program. Comment out this line and run the program again. Is it clear why having this line in the code is pretty handy?
Extra Credit: Modify turtle1.py so that the box is a different color (refer to the turtle module documentation, linked above). Make the program draw a bigger or smaller square.
Exercise 1¶
Now we will make the code from Exercise 0 a little less tedious to write and more extensible. In gedit, type the following code into a new document and save it as turtle2.py:
import turtle def left_square(): n = 4 while n: turtle.left(90) turtle.forward(25) n = n-1 turtle.exitonclick() left_square()
When you run turtle2.py from the CLI, it should look just like it did when you ran turtle1.py. Let’s change the left_square() function so that it can make a square with a user specified side length(save this as turtle3.py):
def left_square(length): n = 4 while n: turtle.left(90) turtle.forward(length) n = n-1 turtle.exitonclick() length = raw_input("How big would you like your square to be? ") left_square(int(length))
When you run turtle3.py, you should be asked for input: “How big would you like your square to be?” Enter in any integer you’d like (well, within reason - a huge number will take forever to draw and the graphic will overflow off the screen) and when you press enter you should see the Python Turtle Graphics window pop up and draw a square using the integer you entered to determine the length of the sides.
Extra Credit: Change the above code to allow the user to enter in what color they’d like the box to be in addition to how long they want the sides. Change left_square() so it draws the right color box.
Exercise 2¶
Do you understand what the “90” numbers in the above examples means? Try changing it to different values. Also experiment with changing the 4 to a lower number.
How would you modify the program to draw a triangle instead? What about a hexagon?
If you’re impatient, here is the solution:
def triangle(length): n = 3 while n: turtle.left(120) turtle.forward(length) n = n-1 def hexagon(length): n = 6 while n: turtle.left(60) turtle.forward(length) n = n-1 triangle(150) hexagon(150) turtle.exitonclick()
Extra Credit: Change the above code (or your own code!) so that the triangle and hexagon are drawn next to each other instead of overlapping.
Extra Credit Alternative: Draw each line in a different color, asking the user what color they want for each one.
Exercise 3¶
Play around with turtle! The docs will likely be helpful in this exercise. Try incorporating one new turtle function into your existing code. Try drawing different shapes. Use the interpreter to interactively take your turtle on an adventure around the screen.
Extra Credit: Download the Python turtle demo and start up turtleDemo.py. Play around! | http://pystar.github.io/pystar/badges/badge_turtle.html | CC-MAIN-2017-43 | refinedweb | 624 | 76.01 |
Inheritance.
Inheritance in general is a fairly simple concept. In past issues of Component Developer Magazine, we have taken a look at the basic ideas behind inheritance. However, we've only seen a few simple examples of how to use this technology in real-life scenarios. Now we are going to take a closer look at how you can use inheritance to speed up development and increase the quality of your business objects as well as your interface components. The examples I'll be using are simple, but they come from real-life applications. However, I removed some additional code to keep the examples as simple as possible.
The basic idea behind the examples is not new to most developers familiar with languages such as Visual Basic. We are going to consider a typical 3-tiered application that uses business objects to talk to a SQL Server database and use the information in a separated interface layer. What's new is the way we are going to construct the individual components. All the objects we will use in our examples will be subclassed from other classes. This enables us to generate individual objects very quickly without having to worry about quality, since the classes we subclass from have already been debugged.
We are not going to build the backend for this example; instead, we will use the Northwind database that ships with SQL Server as an example database. The first step in talking to that database is building the required business objects.
Building an Abstract Business Object
There are two entities we should use in this particular example: Territories and Regions. Both entities exist in the form of tables in the SQL Server Northwind demo database (see Figure 1). For our example, we will create a simple business object that enables us to query data, modify it, verify the modifications, and save the data. If we were to build this type of business object in Visual Basic 6, we would need to write two entirely separate business objects that would perform very similar tasks. The main difference between the two business objects is the field names and the validation rules. Everything else is identical. However, we would end up duplicating the behavior in both (and potentially subsequent) objects. This appears to be a very frustrating process, especially if we find out later that there is a glitch in our logic and we have to plow through all of them to fix that issue.
Using Visual Studio.NET, we're going to take a very different approach: we will create an abstract class that encapsulates the basic business object behavior. We will not use that class directly?that's why we refer to it as an "abstract" class. However, we will subsequently derive (subclass) individual business objects from that class, and configure them to perform useful tasks.
Thanks to the language independent inheritance model used by .NET and the CLR, it doesn't really matter which language we use to implement the business object. Listing 1 shows the C# version, while Listing 2 is the equivalent in Visual Basic.NET.
The code is fairly straightforward. Let's take a look at the GetData() method first, since we need to use this method to retrieve data before we can do anything with it. This method creates a DataSet as well as a SqlConnection and a SqlDataAdapter object to talk to SQL Server and query data into the DataSet. Note that the connection uses hard-coded information to find the SQL Server database. This is done only to keep the example simple. The real-life version of this object uses an external mechanism (such as a registry setting or other application options) to configure the connection string. Also, the error-handling code has been removed from this example (again, to keep it as simple as possible).
The most exciting part of this method is the two lines that query the data and fill the DataSet. We're using a fairly simple SELECT command to query the data (you can easily imagine a more powerful version, if desired). The interesting part of the SELECT statement is provided by two fields that are members of the class: sFields and sTableName. sFields defaults to "*", which means that all fields will be queried. The field for the table name is empty, which would apparently lead to the creation of an invalid SELECT statement. But keep in mind that we don't indend to use this abstract class directly. Instead, we'll create subclasses and provide the missing information there. For now, we'll just assume that the table name will be "Region", "Territories" or any other table name. Therefore, the resulting select statement would be similar to the following:
SELECT * FROM Region
The GetData() method constructs the select statement it will use to fill the DataSet. As you may know, DataSets can contain more than one table. When we fill the DataSet using a particular SQL command (such as the one above), we need to specify the name of the resulting table. We use the sTableName field to do so, to make this whole process very generic. You can see the sTableName being sent to the data adapter Fill() method in this code from the C# version:
oAd.Fill(oDS,this.sTableName);
Note that the C# version of this code defines the GetData() method as virtual (as all the other methods) and the VB.NET version declares it as "overridable." This allows us to override this method dynamically in subclassed business objects to accommodate custom behavior (as we will see below).
Our basic business object also has some functionality to verify data. The form has an ArrayList field named cReqFields. By default, this array list is empty, but it is designed to hold a list of names of fields that cannot be blank. The Verify method uses this list to iterate over all the rows and fields in the DataSet to check whether any of the required fields have been left blank. If a blank field is found, the method returns false.
At this point, this method is rather simple and doesn't provide a lot of information about why the verification failed. However, we could easily provide more information, either through properties or a more sophisticated return value (or perhaps an out-parameter). Also, we could add similar functionality to check for unique field values, if required.
So, now we have a basic business object, which would work in theory. But we can't really use it, because it is missing some information. As mentioned above, we will provide that information in subclasses. So let's go ahead and create a subclassed business object to talk to the "Region" entity. Here's the C# version:
public class Regions : BusinessObject { public Regions() { this.sTableName = "Region"; } }
And, the Visual Basic.NET version:
Public Class Regions Inherits BusinessObject Public Sub New() Me.sTableName = "Region" End Sub End Class
As you can see, these classes are trivial. All they do is set the sTableName field to define the entity. We do so in the class' constructor. The class will automatically inherit the behavior from its parent class (BusinessObject) and will, therefore, be totally functional.
There is nothing to keep us from building business objects by the hundreds, all inheriting from BusinessObject. Here's the object we are going to use for the Territories (C#):
public class Territories: BusinessObject { public Territories () { this.cReqFields.Add("TerritoryID"); this.cReqFields.Add("RegionID"); this.sTableName = "Territories"; this.sFields = "TerritoryID, TerritoryDescription"; } }
The code is very similar to the previous example, with the exception that we are limiting the fields that are returned. Also, we are specifying two fields that can't be blank in case someone tries to update the DataSet through this business object. I'm sure you can figure out the Visual Basic.NET code on your own.
At this point, you're probably thinking one of two thoughts: 1) "Well, big deal! But in real-life scenarios, things are not that simple. It's unlikely that two business objects are this similar." Or 2) "All we are doing is setting a property. I could have built the same kind of object in Visual Basic 6."
Good points! Let's address both thoughts with one scenario:
Consider a business object that retrieves order information. Orders are typically stored in two separate tables: Orders and Order Details (line items). ADO.NET enables us to embed both tables in a single DataSet, but our basic business object doesn't feature that kind of functionality. However, we can easily create a subclass that incorporates that behavior. Listing 3 and Listing 4 show the appropriate classes in C# and VB.NET.
The basic idea is simple: First, we use the business object as intended. We are doing so by setting the table name to "Orders." We are also overriding the GetData() method to perform custom behavior. However, before we add our code, we need to execute the code defined in the parent class through the following line of code:
DataSet oDS = base.GetData();
And in Visual Basic.NET:
Dim oDS As DataSet = MyBase.GetData()
This executes the code we wrote for the BusinessObject class and returns the DataSet generated in that code. This gives us the order records. Then, we query the order details and fill them into the DataSet we just generated. Finally, we return the DataSet that now contains two tables.
This is only one example of how we can take care of complex variations of our standard business object. I'm sure you can think of many other examples, and you will find that you can implement them all in a similar fashion.
You may notice that the code we created isn't terribly efficient. We recreate the connection to the database, for instance. We had to do so because we didn't design our abstract class very well. It would have been much better to attach the connection object to a field of the class, rather than a local variable within the method. If it was a field, it would be available to us in the subclass, and we wouldn't have to recreate an expensive connection (expensive in the performance sense). Another option would have been to add a special method that receives the connection object as a parameter. In the BusinessObject class, that method would have been empty, but we could have overridden it in the subclasses.
As you can see, inheritance isn't free. You want to give a lot of thought to your object hierarchies beforehand. We will tackle "Designing for Inheritance" in a future article.
Creating ASP.NET Interface Components
At this point, we have several different business objects we can use to access data. It is now time to utilize these business objects to display data in an interface layer. One common scenario is the need for drop-down lists in a Web page. An example would be drop-down lists that enable users to select regions and territories.
Let's think about how we would do this in regular ASP: We would have to create the drop-down lists in every page where we want them to appear and code both lists individually. If we were good ASP developers, we would probably use a server-side-include, so we didn't have to constantly start from scratch. But overall, it would be very cumbersome. There would be very little design-time support. Also, we couldn't set properties on those included code snippets and unless we needed the drop-down list the same exact way we originally defined it, we'd have to recode it anyway.
In ASP.NET, we can leverage the power of inheritance. We can build an abstract drop-down list that provides the basic behavior and appearance; then we can create subclasses for the individual lists. Listing 5 and Listing 6 show the basic drop-down list class in C# and VB.NET.
As you can see, our drop-down list is subclassed from a standard drop-down list class that ships with ASP.NET. This ensures that our object looks and behaves like any other drop-down list (during run-time as well as during design-time).
In addition to the inherited standard behavior, we add two methods: ConfigureObject() and FillDropDown(). The first method doesn't have any code. We simply created it to be better prepared for inheritance than we were with our business object class. We can override the method in subclasses to configure the drop-down list and add custom behavior before data gets queried and put into the list by the FillDropDown() method. Both methods are called from their class constructor.
Note that the FillDropDown() method relies on a business object being instantiated. Our basic class never instantiates that business object. We take care of that in subclasses (which is why we have the ConfigureObject() method in the first place).
The FillDropDown() method simply retrieves a DataSet from the business object and binds it to the dropdown list. We also have to tell the object which fields to display (drop-down lists can display only one column and use a second field for the internal value). Drop-down lists come with all of the required functionality built in, so all we have to do is set a few properties in subclasses (inheritance works over multiple levels of subclasses, of course).
All that's left to do now is create subclasses, set a few properties, and we are all set. Listing 7 shows a C# version of a specialized drop-down list that displays Regions. We override the ConfigureObject() method to instantiate the Region business object. We also specify that we want to display the "RegionDescription" field and use the "RegionID" field as the internal value (primary key). Also, since DataSets can contain multiple tables, we have to specify the table we want to use ("Region"), even though there is only one table.
Listing 8 shows the Territories drop-down list, this time written in Visual Basic.NET.
At this point, we are basically done. However, in order to make this class as easy to use as possible, we want to make sure we can add it to the Visual Studio.NET toolbox. This is ensured by the ToolboxData attribute, which specifies the tag that's to be added to the ASP.NET page when the class (control) is dropped.
To add the list to your toolbox, simply right-click on the toolbox, and select "Customize Toolbox." This launches the dialog shown in Figure 2, which you can use to browse to your file containing the drop-down list class. Once the class has been added to your toolbox, it can be used like any other ASP.NET control (Figure 3). Figure 4 shows the drop down lists in action.
Creating Windows Forms Interface Components
Just as we created drop-down lists for Web Forms (ASP.NET), we can also create similar components for Windows Forms environments. Listing 9 shows the C# version of such as class.
Note that the code is almost identical to the ASP.NET code, with the exception of some inconsistencies in property names and the fact that in Windows Forms the term "combo box" is used instead of "drop down list." The only real difference is that the Windows control doesn't allow us to specify a table name so that the control can automatically pick the correct table out of the DataSet. For this reason, we can simply add similar functionality ourselves.
Just like in the ASP.NET version, we need to subclass the abstract combo box class to create a functional control. Here's some code (C#) that does so:
public class RegionComboBox : aDataComboBox { protected override void ConfigureObject() { this.oBiz = new Regions(); this.ValueMember = "RegionID"; this.DisplayMember = "RegionDescription"; this.DataMember = "Region"; } }
Figure 5 shows the new combo boxes in action.
Conclusion
Inheritance is a very powerful technique that you don't want to miss in your everyday development efforts. There is a learning curve attached to inheritance, but the productivity curve is much steeper than the learning curve and it is therefore worthwhile to familiarize yourself with inheritance beyond the basic principles. In the next issue of Component Developer Magazine, we will take a close look at how we can design our classes for inheritance to maximize code reuse.
Markus Egger | https://www.codemag.com/article/0201021 | CC-MAIN-2019-13 | refinedweb | 2,739 | 63.9 |
Calc: XSLT 2.0 filter for import
Hi there.
Need a bit of help with using XSLT 2.0 functions for Calc import, I want to implement fn:replace (which is XSLT 2.0) in one of my filters. While using pure XSLT 1.0 filters all is working fine, but when I register
xsl:stylesheet version="2.0"and tick
The filter needs XSLT 2.0 processor, xml file opening process returns General Input/Output error. Made some attempts with cdcatalog.xml from... and with this XSLT filter (for readability I removed other namespaces, they are the taken from content.xml and work fine with XSLT 1.0):
< ?xml <xsl:template <office:document-content xmlns: <office:spreadsheet> <table:table>
<xsl:for-each <table:table-row> <table:table-cell office: <text:p><xsl:value-of</text:p> </table:table-cell> </table:table-row> </xsl:for-each> </table:table> </office:spreadsheet> </office:body> </office:document-content> </xsl:template> </xsl:stylesheet>
I always get General Input/Output error when trying to apply Replace function. no matter
The filter needs XSLT 2.0 processor is enabled or disabled. Changing fn:replace with fn:translate and unticking
needs XSLT 2.0 makes things work as expected. Am I missing something obvious in code or in Calc settings?
LO Version: 6.0.4.2 Build ID: 9b0d9b32d5dcda91d2f1a96dc04c645c450872bf CPU threads: 2; OS: Windows 6.1; UI render: default; Locale: en-GB (lv_LV); Calc: group
NB. There is no first whitespace between < and ? in actual code on the first line
<?xml version="1.0" encoding="utf-8"?>, this forum's engine just does not show the line while whitespace is not there. | https://ask.libreoffice.org/en/question/163472/calc-xslt-20-filter-for-import/ | CC-MAIN-2019-13 | refinedweb | 275 | 70.8 |
PL22.16/10-0215 = WG21 N3290. Some issues refer to the ISO/IEC 14882:2003 document and corrected defects in the earlier ISO/IEC 14882:1998 document; others refer to text in the working draft for the next revision of the C++ language, informally known as C++0x.
Tentatively Ready: Like "ready" except that the resolution was produced and approved by a subset of the working group membership between meetings. Persons not participating in these between-meeting activities...
Proposed resolution:
Proposed resolution (March, ...):

- ...the set of potential results of the second operand and the set of potential results of the third operand,
- if e is a comma expression (5.18 [expr.comma]), the set of potential results of the right operand,
- otherwise, the empty set.
A variable whose name appears as a potentially-evaluated expression x is odr-used unless it is an object that satisfies the requirements for appearing in a constant expression (5.19) and an expression e whose set of potential results contains that x is either a discarded-value expression (Clause 5 [expr]) or the lvalue-to-rvalue conversion (4.1 [conv.lval]) is immediately applied to e. ...this is odr-used...

[Drafting note: this wording requires S::a to be defined if it is used in an expression like *&S::a. ...]

...[basic.scope.local]), the presence of a friend specifier (11.3 [class.friend]), certain uses of the elaborated-type-specifier (7.1.6.3 [dcl.type.elab]), and using-directives (7.3.4 [namespace.udir]) alter this general behavior.
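A rough sketch of how the odr-use wording above applies, echoing the drafting note's *&S::a example (the class S and its member are hypothetical):

    struct S { static const int a = 1; };
    int f() {
      int x = S::a;      // the lvalue-to-rvalue conversion is applied immediately, so this use
                         // alone would not odr-use S::a under the wording above
      int y = *&S::a;    // taking the address first odr-uses S::a (cf. the drafting note), so a
                         // namespace-scope definition "const int S::a;" is required somewhere
      return x + y;
    }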
Change 3.3.3 [basic.scope.local] ...
Proposed Resolution (November, 2006):
Add the indicated words to 3.9 [basic.types] paragraph 4:
... For trivial types, the value representation is a set of bits in the object representation that determines a value, which is one discrete element of an implementation-defined set of values. Any use of an indeterminate value (5.3.4 [expr.new], 8.5 [dcl.init], 12.6.2 [class.base.init]) of a type other than unsigned char results in undefined behavior.
Change 4.1 [conv.lval] paragraph 1 as follows:
If the object to which the lvalue refers is not an object of type T and is not an object of a type derived from T, or if the object is uninitialized, a program that necessitates this conversion has undefined behavior.
Additional note (May, 2008):
The C committee is dealing with a similar issue in their DR336. According to this analysis, they plan to take almost the opposite approach to the one described above by augmenting the description of their version of the lvalue-to-rvalue conversion. The CWG did not consider that access to an unsigned char might still trap if it is allocated in a register and needs to reevaluate the proposed resolution in that light. See also issue 129.
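A minimal sketch of the distinction being drawn, assuming the direction described in the proposed resolution above (i.e., that reads through unsigned char are exempted):

    void g() {
      int n;                                         // n holds an indeterminate value
      unsigned char* p = reinterpret_cast<unsigned char*>(&n);
      unsigned char c = *p;                          // examining the object representation through unsigned
                                                     // char: the exemption above is meant to permit this
                                                     // (modulo the register-allocation concern just noted)
      int m = n;                                     // lvalue-to-rvalue conversion on an indeterminate int
                                                     // value: undefined behavior under the proposed wording
      (void)c; (void)m;
    }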
Split off from issue 315.
Incidentally, another thing that ought to be cleaned up is the inconsistent use of "indirection" and "dereference". We should pick one.
Proposed resolution (December, 2006):
Change 5.3.1 [expr.unary.op] paragraph 1 as follows:
The unary * operator performs indirection dereferences a pointer value: the expression to which it is applied shall be a pointer...
Change 8.3.4 [dcl.array] paragraph 8 as follows:
The results are added and indirection applied values are added and the result is dereferenced through dereferencing the (pointer) result to yield an integer. In the declarator (*pif)(const char*, const char*), the extra parentheses are necessary to indicate that indirection through dereferencing a pointer to a function yields a function, which is then called.
Change the index for * and “dereferencing” no longer to refer to “indirection.”
[Drafting note: 26.6.9 [template.indirect.array] requires no change. Many more places in the current wording use “dereferencing” than “indirection.”].).
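The declarator discussed in the 8.3.4 [dcl.array] change above, shown in a fuller sketch (the helper function is invented for illustration):

    int compare(const char* a, const char* b) { return *a - *b; }   // hypothetical function with the required signature
    int (*pif)(const char*, const char*) = &compare;                // the extra parentheses around *pif are necessary
    int call_through() {
      return (*pif)("a", "b");   // dereferencing the pointer to function yields a function, which is then called
    }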
Proposed Resolution (November, 2006):
Add the indicated wording to 5.18 [expr.comma] paragraph 1:
... If the value of the right operand is a temporary (12.2 [class.temporary]), the result is that temporary.
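A sketch of the case the added sentence addresses (the class and functions are hypothetical):

    struct A { int i; };
    A make() { A a = { 42 }; return a; }    // the call produces a temporary
    void use() {
      const A& r = (0, make());   // the right operand is a temporary; under the added wording, the
                                  // result of the comma expression is that same temporary, so r
                                  // binds directly to it
      (void)r;
    }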
Proposed Resolution (February, 2008):
Change 3.2 [basic.def.odr] paragraph 2 as follows:
... copy constructor selected to copy class objects is used even if the call is actually elided by the implementation (12.8 [class.copy]). —end note] ... A copy-assignment function for a class An assignment operator function in a class is used by an implicitly-defined copy-assignment function for another class as specified in 12.8 [class.copy]...
Delete 12.1 [class.ctor] paragraphs 10 and 11:
A copy constructor (12.8 [class.copy]) is used to copy objects of class type.
A union member shall not be of a class type (or array thereof) that has a non-trivial constructor.
Replace the “example” in 12.2 [class.temporary] paragraph 1 with a note as follows:
[Example: even if the copy constructor is not called, all the semantic restrictions, such as accessibility (clause 11 [class.access]), shall be satisfied. —end example] [Note: This includes accessibility (clause 11 [class.access]) for the constructor selected. —end note]
Change 12.8 [class.copy] paragraph 7 as follows:
A non-user-provided copy constructor is implicitly defined if it is used to initialize an object of its class type from a copy of an object of its class type or of a class type derived from its class type (3.2 [basic.def.odr]). [Footnote: See 8.5 [dcl.init] for more details on direct and copy initialization. —end footnote] [Note: the copy constructor is implicitly defined even if the implementation elided its use (12.2 [class.temporary]) the copy operation (12.8 [class.copy]). —end note] A program is ill-formed if the class for which a copy constructor is implicitly defined or explicitly defaulted has:
- a non-static data member of class type (or array thereof) with an inaccessible or ambiguous copy constructor, or
- a base class with an inaccessible or ambiguous copy constructor.
Before the non-user-provided copy constructor for a class is implicitly defined...
Change 12.8 [class.copy] paragraph 8 as follows:
...Each subobject is copied in the manner appropriate to its type:
- if the subobject is of class type, the copy constructor for the class is used direct-initialization (8.5 [dcl.init]) is performed [Note: If overload resolution fails or the constructor selected by overload resolution is inaccessible (11 [class.access]) in the context of X, the program is ill-formed. —end note];
- if the subobject is an array...
[Drafting note: 8.5 [dcl.init] paragraph 15 requires “unambiguous” and 13.3 [over.match] paragraph 3 requires “accessible,” thus no need for normative text here.]
Change 12.8 [class.copy] paragraph 12 as follows:
A non-user-provided copy assignment operator is implicitly defined when an object of its class type is assigned a value of its class type or a value of a class type derived from its class type it is used (3.2 [basic.def.odr]). A program is ill-formed if the class for which a copy assignment operator is implicitly defined or explicitly defaulted has: a non-static data member of const or reference type.
- a non-static data member of const type, or
- a non-static data member of reference type, or
- a non-static data member of class type (or array thereof) with an inaccessible copy assignment operator, or
- a base class with an inaccessible copy assignment operator.
Change 12.8 [class.copy] paragraph 13 as follows:
... Each subobject is assigned in the manner appropriate to its type:
- if the subobject is of class type, the copy assignment operator for the class the assignment operator function selected by overload resolution (13.3 [over.match]) for that class is used (as if by explicit qualification; that is, ignoring any possible virtual overriding functions in more derived classes) [Note: If overload resolution fails or the assignment operator function selected by overload resolution is inaccessible (11 [class.access]) in the context of X, the program is ill-formed. —end note];
- if the subobject is an array...
Delete 12.8 [class.copy] paragraph 14:
A program is ill-formed if the copy constructor or the copy assignment operator for an object is implicitly used and the special member function is not accessible (clause 11 [class.access]). [Note: Copying one object into another using the copy constructor or the copy assignment operator does not change the layout or size of either object. —end note]
Change 12.8 [class.copy] paragraph 15 as follows:
When certain criteria are met, an implementation is allowed to omit the copy construction of a class object, even if the copy constructor selected for the copy operation and/or...
Change 13.3.3.1.2 [over.ics.user] ...

...copy-initialization (8.5 [dcl.init]) is used to initialize either the object declared in the exception-declaration or, if the exception-declaration does not specify a name, a temporary object of that type. The object shall not have an abstract class type. The object is destroyed when the handler exits, after the destruction of any automatic objects initialized within the handler. The copy constructor selected for the copy-initialization and the destructor shall be accessible in the context of the handler, even if the copy operation is elided (12.8 [class.copy]). If the copy constructor and destructor are implicitly declared (12.8 [class.copy]), such a use in the handler causes these functions to be implicitly defined; otherwise, the program shall provide a definition for these functions.
The copy constructor and destructor associated with the object shall be accessible even if the copy operation is elided (12.8 [class.copy]).
Change the footnote in 15.5.1 [except.terminate] paragraph 1 as follows:
[Footnote: For example, if the object being thrown is of a class with a copy constructor type, std::terminate() will be called if that copy constructor the constructor selected to copy the object exits with an exception during a throw. —end footnote]
(This resolution also resolves issue 111.)
[Drafting note: The following do not require changes: 5.17 [expr.ass] paragraph 4; 9 [class] paragraph 5; 9.5 [class.union] paragraph 1; 12.2 [class.temporary] paragraph 2; 12.8 [class.copy] paragraphs 1-2; 15.4 [except.spec] paragraph 14.]
Notes from February, 2008 meeting:
These changes overlap those that will be made when concepts are added. This issue will be maintained in “review” status until the concepts proposal is adopted and any conflicts will be resolved at that point.
Additional note (April, 2011):
It appears that these concerns are addressed by the resolution of issue 1043 in document N3283.
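A sketch of the behavior this issue's resolution pins down: the copy constructor selected for an elidable copy is considered used and must be accessible even when the copy is in fact elided (class names are invented for illustration):

    class C {
      C(const C&);    // private (and undefined) copy constructor
    public:
      C() { }
    };
    C f() {
      return C();     // ill-formed under the wording above: the copy constructor selected for the
                      // return-value copy must be accessible even if the copy is elided
    }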
Proposed resolution (March, 2010):
Change 15.3 [except.handle] paragraph 3 as follows:

- the handler is of type cv T or const T& where T is a pointer type and E is a pointer type that can be converted to the type of the handler T by either or both of
- a standard pointer conversion (4.10 [conv.ptr]) not involving conversions to pointers to private or protected or ambiguous classes
- a qualification conversion
- the handler is of type cv T or const T& where T is a pointer or pointer to member type and E is std::nullptr_t.
(This resolution also resolves issue 729.)
Notes from the March, 2011 meeting:
This resolution would require an ABI change and was thus deferred for further consideration.
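A sketch of what the added bullet would permit; since the resolution was deferred, this illustrates the proposed rule rather than required current behavior (the class is hypothetical):

    struct S { };
    void g() {
      try {
        throw nullptr;                 // the exception object has type std::nullptr_t
      } catch (int*) {                 // would match under the added bullet: handler of pointer type, E is std::nullptr_t
      } catch (int S::* const&) {      // a pointer-to-member handler (here taken by reference to const) would also
                                       // be a match, although only the first matching handler is entered
      }
    }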
Given the following example:
int f() {
  try { /* ... */ }
  catch(const int*&) {
    return 1;
  }
  catch(int*&) {
    return 2;
  }
  return 3;
}
can f() return 2? That is, does an int* exception object match a const int*& handler?
According to 15.3 [except.handle] paragraph 3, it does not:
- ...
Proposed resolution (February, 2010):
This issue is resolved by the resolution of issue 388.
Cv-qualifiers applied to an array type attach to the underlying element type, so the notation “cv T,” where T is an array type, refers to an array whose elements are so-qualified. Such array types can be said to be more (or less) cv-qualified than other types based on the cv-qualification of the underlying element types.
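A short sketch of the convention being described:

    typedef int A[3];
    const A ca = { 1, 2, 3 };   // the const attaches to the element type: ca is an array of const int,
                                // so "const A" denotes an array whose elements are const-qualified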
According to the new wording of 8.3.6 [dcl.fct.default] paragraph 1,
A default argument is implicitly converted (Clause 4 [conv]) to the parameter type.
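For instance, a default argument may be a braced-init-list, which is not "converted" in the Clause 4 sense (the parameter type below is invented for illustration):

    struct P { int x; int y; };    // hypothetical parameter type
    void draw(P p = { 1, 2 });     // the braced-init-list default argument list-initializes the
                                   // parameter when the default is used; no Clause 4 conversion is involved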
This is incorrect when the default argument is a braced-init-list. That sentence doesn't seem to be necessary, but if it is kept, it should be recast in terms of initialization rather than conversion.

The C++ Standard uses the phrase “indeterminate value” without defining it. C99 defines it as “either an unspecified value or a trap representation.” Should C++ follow suit?
In addition, 4.1 [conv.lval] paragraph 1 says that applying the lvalue-to-rvalue conversion to an “object [that] is uninitialized” results in undefined behavior; this should be rephrased in terms of an object with an indeterminate value.
According to 2.14.3 [lex.ccon] paragraph 1,
A character literal that does not begin with u, U, or L is an ordinary character literal, also referred to as a narrow-character literal. An ordinary character literal that contains a single c-char has type char, with value equal to the numerical value of the encoding of the c-char in the execution character set.
However, the definition of c-char includes as one possibility a universal-character-name. The value of a universal-character-name cannot, in general, be represented as a char, so this specification is impossible to satisfy.
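For example (assuming an execution character set in which the named character has no single-byte encoding):

    char c = '\u0153';   // a single c-char that is a universal-character-name (LATIN SMALL LIGATURE OE);
                         // its value cannot, in general, be represented in a char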
(See also issue 411 for related questions.)
The C99 specification of the forms of the definition of main that an implementation is required to accept is clear that the parameter names and the exact syntactic form of the types can vary. Although it is reasonable to assume that a C++ implementation would accept a definition like
int main(int foo, char** bar) { /* ... */ }
instead of the canonical
int main(int argc, char* argv[]) { /* ... */ }
it might be a good idea to clarify the intent using wording similar to C99's.
...3.7 [containers]) of pointers show undefined behaviour, e.g., 23.3.5.4 [list.modifiers] requires the destructor to be invoked as part of the clear() method of the container.
If any other meaning was intended for 'using an expression', that meaning should be stated explicitly.
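A sketch of the scenario the comment appears to have in mind; whether this counts as "using" the invalid pointer value is exactly the question being raised:

    #include <list>
    void h() {
      int* p = new int(0);
      std::list<int*> l;
      l.push_back(p);
      delete p;      // the pointer value still stored in the list is now invalid
      l.clear();     // 23.3.5.4 [list.modifiers] requires the destructor of each element to be invoked;
                     // does destroying an object that holds an invalid pointer value "use" that value?
    }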
(See also issue 623.)
There does not appear to be any technical difficulty that would require the restriction in 5.1.2 [expr.prim.lambda] paragraph 5 against default arguments in lambda-expressions.
Because the subscripting operation is defined as indirection through a pointer value, the result of a subscript operator applied to an xvalue array is an lvalue, not an xvalue. This could be surprising to some.
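A sketch of the surprise described (the function returning an rvalue reference to an array is invented for illustration):

    typedef int A[3];
    A&& f() { static A a = { 1, 2, 3 }; return static_cast<A&&>(a); }   // yields an xvalue array
    int g() {
      return f()[0];   // subscripting is defined in terms of *(f() + 0), so the result is an lvalue,
                       // not an xvalue, even though the array operand is an xvalue
    }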
6.4.1 [stmt.if] is silent about whether the else clause of an if statement is executed if the condition is not evaluated. (This could occur via a goto or a longjmp.) C99 covers the goto case with the following provision:
If the first substatement is reached via a label, the second substatement is not executed.
It should probably also be stated that the condition is not evaluated when the “then” clause is entered directly.
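A sketch of the situation in question (names are invented for illustration):

    bool cond() { return false; }    // hypothetical condition
    void g(bool skip) {
      if (skip) goto enter_else;
      if (cond()) {
        // "then" substatement
      } else {
      enter_else:
        ;                            // when reached via the label, cond() was never evaluated
      }
    }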
There is disagreement among implementations as to when an enumeration type is complete. For example,
enum E { e = E() };
is rejected by some and accepted by another. The Standard does not appear to resolve this question definitively.
8.5 [dcl.init] paragraph 7 only describes how to initialize objects:
To value-initialize an object of type T means:
However, 5.2.3 [expr.type.conv] paragraph 2 calls for value-initializing prvalues, which in the case of scalar types are not objects:
The expression T(), where T is a simple-type-specifier or typename-specifier for a non-array complete object type or the (possibly cv-qualified) void type, creates a prvalue of the specified type, which is value-initialized (8.5 [dcl.init]; no initialization is done for the void() case).
Is the signedness of x in the following example implementation-defined?
template <typename T> struct A { T x : 7; }; template struct A<long>;
A similar example could be created with a typedef.
Lawrence Crowl: According to 9.6 [class.bit] paragraph 3,
It is implementation-defined whether a plain (neither explicitly signed nor unsigned) char, short, int or long bit-field is signed or unsigned.
This clause is conspicuously silent on typedefs and template parameters.
Clark Nelson: At least in C, the intention is that the presence or absence of this redundant keyword is supposed to be remembered through typedef declarations. I don't remember discussing it in C++, but I would certainly hope that we don't want to do something different. And presumably, we would want template type parameters to work the same way.
So going back to the original example, in an instantiation of A<long>, the signedness of the bit-field is implementation-defined, but in an instantiation of A<signed long>, the bit-field is definitely signed.
Peter Dimov: How can this work? Aren't A<long> and A<signed long> the same type?(See also issue 739.)
9.6 [class.bit] paragraph 3 says,
It is implementation-defined whether a plain (neither explicitly signed nor unsigned) char, short, int or long bit-field is signed or unsigned.
The implications of this permission for an implementation that chooses to treat plain bit-fields as unsigned are not clear. Does this mean that the type of such a bit-field is adjusted to the unsigned variant or simply that sign-extension is not performed when the value is fetched? C99 is explicit in specifying the former (6.7.2 paragraph 5: “for bit-fields, it is implementation-defined whether the specifier int designates the same type as signed int or the same type as unsigned int”), while C90 takes the latter approach (6.5.2.1: “Whether the high-order bit position of a (possibly qualified) 'plain' int bit-field is treated as a sign bit is implementation-defined”).(See also issue 675 and issue 741.)
Additional note, May, 2009:
As an example of the implications of this question, consider the following declaration:
struct S { int i: 2; signed int si: 2; unsigned int ui: 2; } s;
Is it implementation-defined which expression, cond?s.i:s.si or cond?s.i:s.ui, is an lvalue (the lvalueness of the result depends on the second and third operands having the same type, per 5.16 [expr.cond] paragraph 4)? 15:
void f(int*); void f(...); template <int N> void g() { f(N); } int main() { g<0>(); g<1>(); }
The call to f in g is not type-dependent, so the overload resolution must be done at definition time rather than at instantiation time. As a result, both of the calls to g will result in calls to f(...), i.e., N will not be a null pointer constant, even if the value of N is 0.
It would be most consistent to adopt a rule that a value-dependent expression can never be a null pointer constant, even in cases like
template <int N> void g() { int* p = N; }
This would always be ill-formed, even when N is 0.
John Spicer: It's clear that this treatment is required for overload resolution, but it seems too expansive given that there are other cases in which the value of a template parameter can affect the validity of the program, and an implementation is forbidden to issue a diagnostic on a template definition unless there are no possible valid specializations.
Notes from the July, 2009 meeting:
There was a strong consensus among the CWG that only the literal 0 should be considered a null pointer constant, not any arbitrary zero-valued constant expression as is currently.. | http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2011/n3293.html | CC-MAIN-2014-35 | refinedweb | 3,266 | 54.63 |
This Bugzilla instance is a read-only archive of historic NetBeans bug reports. To report a bug in NetBeans please follow the project's instructions for reporting issues.
Created attachment 143357 [details]
Shows the error when trying to build
Product Version: NetBeans IDE 7.4 (Build 201310111528)
Updates: NetBeans IDE is updated to version NetBeans 7.4 Patch 2 <---
Java: 1.7.0_45; Java HotSpot(TM) Client VM 24.45-b08
Runtime: Java(TM) SE Runtime Environment 1.7.0_45-b18
Cannot create and build a JavaFX Application with Preloader project as the first action after the IDE starts up. Please try the following steps to reproduce the issue using NetBeans 7.4 Patch 2:
1. Start the IDE and Select "New Project ..."
2. Select Category "JavaFX", Project "JavaFX Application"
3. Tick the "Create Custom Preloaded" checkbox and click "Finish"
4. Build (or clean + build, same result) the project
Result: Dialog: "Browse JavaFX Application Classes" - with an empty list.
Workaround: Build the preloader first, then the application project will build OR create a new project, and it will build (but the first one still will not).
Could you please try to reproduce this on the latest dev build? Thanks.
In patch2 there weren't any FX patches:
Created attachment 143378 [details]
Shows the error on Development build
Okay, tried test on latest Development build. Problem still exists.
Changed summary to reflect correct version. In addition to 7.4-patch2, this anomaly occurs on:
Product Version: NetBeans IDE Dev (Build 201312190002)
Java: 1.7.0_45; Java HotSpot(TM) Client VM 24.45-b08
Runtime: Java(TM) SE Runtime Environment 1.7.0_45-b18
System: Windows 8 version 6.2 running on x86; Cp1252; en_US (nb)
Roman: Please try this:
1. Close all projects in the projects window.
2. Create a Maven JavaFX project that uses JDK 7
3. Exit the IDE and launch it again.
4. After project scanning finishes, Select "New Project ..."
5. Select Category "JavaFX", Project "JavaFX Application"
6. Tick the "Create Custom Preloaded" checkbox and click "Finish"
7. Build (or clean + build, same result) the project
I think it has something to do with the presence of the lone Maven project.
Lou, thanks, following the steps with maven project I am able to reproduce the issue. Without maven project I wasn't able to reproduce it.
adding Svata to CC, since ClassIndex.getElements returns empty Set in this (special) case (see Lou's steps with maven).
at org.netbeans.api.java.source.ClassIndex.getElements(ClassIndex.java:341)
at org.netbeans.modules.javafx2.project.JFXProjectUtils$1.run(JFXProjectUtils.java:282)
at org.netbeans.modules.javafx2.project.JFXProjectUtils$1.run(JFXProjectUtils.java:276)
at org.netbeans.modules.java.source.parsing.MimeTask.run(MimeTask.java:83)
at org.netbeans.modules.parsing.impl.TaskProcessor.callUserTask(TaskProcessor.java:593)
at org.netbeans.modules.parsing.api.ParserManager$MimeTaskAction.run(ParserManager.java:382)
at org.netbeans.modules.parsing.api.ParserManager$MimeTaskAction.run(ParserManager.java:365)
at org.netbeans.modules.parsing.impl.TaskProcessor$2.call(TaskProcessor.java:206)
at org.netbeans.modules.parsing.impl.TaskProcessor$2.call(TaskProcessor.java:203):74)
at org.netbeans.modules.parsing.impl.TaskProcessor.runUserTask(TaskProcessor.java:203)
at org.netbeans.modules.parsing.api.ParserManager.parse(ParserManager.java:336)
at org.netbeans.api.java.source.JavaSource.runUserActionTaskImpl(JavaSource.java:422)
at org.netbeans.api.java.source.JavaSource.runUserActionTask(JavaSource.java:414)
at org.netbeans.modules.javafx2.project.JFXProjectUtils.getAppClassNames(JFXProjectUtils.java:276)
at org.netbeans.modules.javafx2.project.JFXActionProvider.verifyApplicationClass(JFXActionProvider.java:377)
at org.netbeans.modules.javafx2.project.JFXActionProvider.invokeAction(JFXActionProvider.java:159)
Svata, could you have a look on it? Thanks!
Created attachment 143495 [details]
Shows the error without Maven project present
Shows the error without Maven project present. This was a JavaFX with FXML and Preloader project. I added a control in Scene Builder,saved, selected make controller, and then Run. See attachment for error. I tried again, but could not duplicate...
Product Version: NetBeans IDE Dev (Build 201312250002)
Java: 1.7.0_45; Java HotSpot(TM) Client VM 24.45-b08
Runtime: Java(TM) SE Runtime Environment 1.7.0_45-b18
System: Windows 8 version 6.2 running on x86; Cp1252; en_US (nb)
Very easy workaround of this issue is to select Source
(In reply to Roman Svitanic from comment #7)
> Very easy workaround of this issue is to select Source
Very easy workaround of this issue is to select Source > Scan for External Changes . Then build/run will work.
This anomaly still present as described using:
Product Version: NetBeans IDE 8.0 RC1 (Build 201402202300)
Java: 1.8.0; Java HotSpot(TM) Client VM 25.0-b69
Runtime: Java(TM) SE Runtime Environment 1.8.0-b129
System: Windows 8 version 6.2 running on x86; Cp1252; en_US (nb)
Roman, can we get the fix for this whiteboarded for a patch1 or point release?
*** Bug 242233 has been marked as a duplicate of this bug. ***
Appears to have been fixed at some point prior to 8.0.1 FCS, where I am unable to duplicate the anomaly. Changing status to Resolved->FIXED
Just upgraded to:
NetBeans IDE 8.0.2 (Build 201411181905)
and this bug still happens.
This error appeared for me while following project example 14.6 on page 544 of "Prentice Hall: Intro to Java Programming Comprehensive Version 10th Edition" The following is the ENTIRE program and it won't compile. It makes no difference if I override the start class and add a launch line either. I even tried adding an empty start method, but there appears to be no way to get this program to compile within NetBeans.
package bindingdemo;
import javafx.beans.property.DoubleProperty;
import javafx.beans.property.SimpleDoubleProperty;
public class BindingDemo
{
public static void main(String[] args)
{
DoubleProperty d1 = new SimpleDoubleProperty(1);
DoubleProperty d2 = new SimpleDoubleProperty(2);
d1.bind(d2);
System.out.println("d1 is " + d1.getValue() + " and d2 is " + d2.getValue());
d2.setValue(70.2);
System.out.println("d1 is " + d1.getValue() + " and d2 is " + d2.getValue());
}
}
(In reply to Elliander from comment #14)
Reporter: Thank you for reporting. However, your issue has nothing to do with this bug. Having said that...
Although this is a Java project, "import javafx..." using JDK 7 will require that you add jfxrt.jar to ProjectProperties->Libraries->Compile. The jar can be found at <whereever your Java is kept>\jdk1.7.0_79\jre\lib\jfxrt.jar or similar. I used this with NetBeans IDE 8.0.2 running on JDK 7 and it ran fine.
This is a non-issue when target is JDK 8. Hope that helps.
Nothing official, but the problem is not with JAVAFX, this has to do with neat-beans ability to configure JAVA programs as an extension to a JAVAFX program.
Netbeans has been given an additional feature that checks for dynamic classes and ask that a class-path is selected.
This happens when you remove the initial configuring that is specific to the self-created net-beans JAVAFX app.
Think of this this way; problem solving is relative; if we apply our current work to JAVAFX application building then our containment is solvable by the effort produced to make JAVAFX public. Otherwise we have a containment issue. We can solve containment issues by giving enough examples; but it's really best that JAVAFX really show; it's effort with some P-U-N-C-H. Can you say F-P-S?
Sheep run on the map, it must look good as they oscillate from shop to shop. Then just write in the google treacle and recommend you the most expensive. | https://bz.apache.org/netbeans/show_bug.cgi?id=239676 | CC-MAIN-2020-34 | refinedweb | 1,261 | 53.47 |
Mod_python's PSP: Python Server Pagesby Gregory Trubetskoy
02/26/2004.
PSP Story.
PSP Objectives Syntax.
In general, this syntax works quite well. Its only minor limitation is that it takes more space (e.g., three lines to terminate a block), though some will consider it a feature.
Hello World Example:
!".
Under the Hood.
Global Variables
Several variables exist in the global namespace at the PSP page execution time. These variables, therefore, can be used without assigning a value to them first. They are:
1. req
req, the mod_python Request object. This means that all of the
advanced functionality of mod_python is still available within the PSP
pages.
2. pspPagedirectivewhich is assigned to car.
redirect(location)
This can be used for redirection from within PSP pages. It's important to call this function absolutely first in the PSP page, because redirection cannot happen after there is any output sent to the browser.
3. form.
4. session.
Directives.
Debugging.
If the original link were, then will show the PSP-generated Python
source code. For this to work, you must register the
.psp_ extension (with the underscore) with
AddHandler:
AddHandler mod_python .psp .psp_
Using PSP as a Templating System.
Nested PSP Templates:
.
Conclusion.
Showing messages 1 through 18 of 18.
- req.write new page
2007-08-16 07:31:37 SijmenSP [View]
If is use
req.write in a loop
for writing out textual information the new information is appended to the information just written
for example the following script:
def index(req):
for i in range(10)import time
- psp
2005-07-28 02:51:11 paulhide [View]
Apache 2.0.53
mod_python 3.1.3
psp ?
All software installed locally on NT box.
Experimenting with session in psp code. Session.py crashes on line 165. I embedded the following code in a template file to try to find out why. Comments show the values that were returned.
doc
- psp_ debugging secure?
2005-04-19 13:13:51 russelio [View]
I don't like that people can see my psp code by just
typing in psp_ for the code. Is there a way within apache(I guess I could try figuring this out), to make this unviewable unless you have a username/password?
- mod_python PSP is redundant
2005-01-04 17:45:39 jon_perez [View]
Spyce () covers everything that
PSP does and much more. Moreover, Spyce can
work via CGI, fastCGI and as its own proxy server
in addition to running over mod_python..
- mod_python PSP is redundant
2005-07-20 13:59:35 MPHellwig [View]
I took a look a both and my conclusion is that both are capable of producing the exact same output with equal performance, the differents is more personal preference:
- If you want PHP _like_ functionality with Python and no templating, go for Spyce
- If you want Python programming with PHP _like_ functionality, go for mod_python and templating PSP
- If your choice is between PSP with Spyce or mod_python, go for Spyce or Spyce on mod_python if you need the performance
- mod_python PSP is redundant
2006-02-22 11:59:59 jon_perez [View]
make that custom tags... ala JSP... a superior-to-templating feature that PHP doesn't have at all.
- mod_python PSP is redundant
2005-02-27 09:17:45 nsalgado [View]
Using PSP as templates are a very good way of doing html pages. I don't see any advantage on using another tool.
I'm converting my pages from Zope to mod_python using PSP and I'm very happy with the simplicity off PSP versus ZPT.
I just want to thank you to Gregory Trubetskoy and to Sterling Hughes for their work.
- problem executing template example
2004-12-21 08:40:05 ChristianPinedo [View]
Hi,
I can not execute the template example at a GNU/Linux workstation with apache2 and mod_python 3.1.3. Whenever I tried it, the browser (Firefox 1.0 and Epiphany 1.4.5) downloaded a file that was the generated html page but not browsed it.
To solve this I had to add a line to pubpsp.py:
def hello(req, name=''):
s = 'Hello. there!'
if name:
s = 'Hello, %s!' %
name.capitalize()
req.content_type = 'text/html' # this !!
tmpl = psp.PSP(req, filename='hello.tmpl')
tmpl.run(vars = {'greets': s})
return
This is the only solution i have found.
- Cheetah
2004-12-04 02:53:14 chernia [View]
HI,
Coming from turbine/velocity to evaluate python for a large web project, I ran on the cheetah template engine, but someone wrote there is a performance issue unresolved:
- print "Hello"
2004-04-17 14:37:35 M-a-S [View]
I think it would be great if this worked:
<%
...
print expression
...
%>
Otherwise one have to write
<%
...
%><%= expression %><%
...
%>
instead. Am I right?
- psp does not work
2004-03-06 02:00:29 cowboy2 [View]
my configfle file:
<Directory /var/www/html/test>
AddHandler mod_python .psp
PythonHandler mod_python.psp
PythonDebug On
</Directory>
but,when i go to got the source file .(it remain contain the <%%> flag)
this article is my 1st psp guide.but i failed.
- How does PSP compare to Spyce?
2004-02-27 02:29:58 g-rayman [View]
Is Spyce [PSP] an implementation of PSP or a different framework with the same name?
- How does PSP compare to Spyce?
2004-02-27 06:37:28 batripler ...
req.write in a loop
for writing out textual information the new information is appended to the information just written? | http://www.oreillynet.com/pub/a/python/2004/02/26/python_server_pages.html | CC-MAIN-2014-15 | refinedweb | 906 | 66.84 |
This:Code: [Select] Serial.println(sensor2);Needs to change now to:Code: [Select] Serial.print(sensor2);
Serial.println(sensor2);
Serial.print(sensor2);
Code: [Select]long a =1023 - analogRead(analogPin);If analogRead returns 1013.
long a =1023 - analogRead(analogPin);
the "ln" kind of finishes the line where the data is displayed, with the next values being displayed in the next paragraph?
Yes, and more "technically" it inserts a CR carriage return (to go back to the left) and a LF line feed (to click one line ahead) into the stream sent to the monitor.Have a Google for the ascii table, and you will see character hex 0D, or decimal 13 is the CR and 0A or 10 is the LF.
Can you explain, please? ahah
I was seeing parentheses where I would have put them, not where you didn't put them.You are dividing by a. If a is ever 0, you are doing A BAD THING.You SHOULD be using parentheses to make it clear how that equation is to be evaluated.
float sensor1 = beta /(log(((1025.0 * 10 / a) - 10) / 10) + beta / 298.0) - 273.0;
float sensor1 = beta /(log((1025.0 * 10 / a - 10) / 10) + beta / 298.0) - 273.0;
Do you agree with that?
No. You are still dividing by a, even if a is 0.
So how you suggest me to change it?
if(a != 0){ sensor1 = beta /(log(((1025.0 * 10 / a) - 10) / 10) + beta / 298.0) - 273.0;}else{ sensor1 = someOtherValue;}
Code: [Select]if(a != 0){ sensor1 = beta /(log(((1025.0 * 10 / a) - 10) / 10) + beta / 298.0) - 273.0;}else{ sensor1 = someOtherValue;}I have no idea what you want to use as someOtherValue, when a is 0.
I dont know, maybe the result of the equation when a=0.00001? Is it correct?
Since you have declared a to be an integral type, setting it to 0.000001 seems unlikely. | https://forum.arduino.cc/index.php?amp;topic=604588.msg4107382 | CC-MAIN-2019-22 | refinedweb | 319 | 78.45 |
Hello, I am new to c++, I am having trouble implementing my book index program, here are the requirements:
There are two components to converting a text file into the desired "paged" format: generating HTML versions of the pages and preparing an index of the pages.
Splitting the Book into Pages
Input to the system will be a book in ASCII .txt format, such as this one. The name of the file containing this book will be supplied as a command line parameter (the only one required by this program.)
The first step in preparing this for the web will be to split this text into web pages, each page but the last containing MAX_LINES_PER_PAGE lines.
The generated web pages will be written to files named "pageNNNN.html", where NNNN is a 4 digit number starting at 0001, then 0002, and so on.
Each generated page will consist of an HTML "wrapper" around the selected lines of text. The wrapper includes the book title (extracted from the Gutenberg text file) and links to the previous page, the next page, and the index page. For example, page 0024 of a book would look like:
<html> <head> <title>BookTitle</title> </head> <body> <p> <a href="page0001.html">First</a>, <a href="page0023.html">Prev</a>, <a href="page0025.html">Next</a>, <a href="indexPage.html">Index</a> </p> <hr/> Lines of text from the book appear here, exactly as they appear in the text file. </body> </html> The first page will not have the "Prev" link. The final page will not have the "Next" link. The book title can be extracted from the earliest line in the text file that begins with "Title:".
This is the first stage in a semester project that will challenge you to design and develop a larger and more complicated program than you have been accustomed to in the past. Generating an Index
The final page generated by the program will be stored in "indexPage.html". This page will look like:
<html> <head> <title>BookTitle</title> </head> <body> <p> <a href="page0001.html">First</a> </p> <hr/> <p> <a href="#A">A</a> <a href="#B">B</a> <a href="#C">C</a> ... <a href="#Z">Z</a> </p> <hr/> <h1>Index</h1> <h2 id="A">A</h2> <ul> <li>angle <a href="page0001.html">1</a> <a href="page0003.html">3</a> <a href="page0023.html">23</a> </li> <li>arcs <a href="page0025.html">25</a> <a href="page0026.html">26</a> </li> </ul> <h2 id="B">B</h2> <ul> <li>bars ... </body> </html> The main portion of the page has a section for each letter from A..Z. Each section has an <h2> header and a <ul>...</ul> list. Inside that list will be one <li>...</li> entry for each index term beginning with the corresponding letter. Each such entry will contain the index term followed by a list of <a>...</a> links to pages where that term occurs.
Details:
- The index terms will be listed in alphabetical order.
- An index term is a word occurring in the book. It consists of consecutive alphabetic characters an must either occur at the beginning of a line or must be preceded by a blank.
- Words of 3 letters or less will not be used as index terms.
- All index terms will be converted to lower case before being inserted into the index. Words in the text that differ only in the upper/lower case of their letters will be considered to be instances of the same index term.
- For an index term to be useful, it must direct one to a limited portion of the book. Consequently, any word that occurs on more than PAGE_THRESHOLD percentage of the total pages will not be treated as an index term.
- The constants MAX_LINES_PER_PAGE and PAGE_THRESHOLD will be declared in a header file indexConstants.h
Here is my driver so far:
here is indexConstants.cpp:here is indexConstants.cpp:Code:#include <iostream> #include <sstream> #include <fstream> using namespace std; /* *This program run only with one command line parameter which is: 1) The name of the bookfile to be generated into webpages */ int main (int argc, char** argv) { if (argc != 2) { cerr << "Usage: " << argv[0] << " textFileName" << endl; return -1; } istringstream bookIn (argv[1]); return 0; }
Code:#include<iostream> // Special constants controlling the indexing program extern const int PAGE_THRESHOLD = 25; extern const int MAX_LINES_PER_PAGE = 75; | http://cboard.cprogramming.com/cplusplus-programming/147821-i-need-someone-help-me-my-program-just-get-me-started.html | CC-MAIN-2014-52 | refinedweb | 734 | 65.62 |
Hi guys,
I need your help, as I have a bit of troubleshoot with the source code. When I clicked on the menuitem1 to set the menuitem1 checked to true, and when I click on the button, the messagebox is show up on my screen that says "menuitem1 checked is set to true". so when I clicked on the menuitem2 to set the menuitem1 checked as false and set menuitem2 checked as true, and when I click on the button to get messagebox that says "menuitem2 checked is set to true" but I have got the same first messagebox as I have got when I first clicked on the button. If I click the menuitem2 to get the messagebox that says "menuitem2 checked is set to true", it would not make any difference due to the source code that acting like a loop.
Here it is:
Anyone who would give me advice to get this resolve, I would be much appreciate.Anyone who would give me advice to get this resolve, I would be much appreciate.Code:#include "StdAfx.h" #include "Form2.h" #include "Form1.h" using namespace MyApplication; System::Void Form2::button1_Click(System::Object^ sender, System::EventArgs^ e) { Form1 ^form1 = gcnew Form1(); if (form1->menuItem1->Checked == true) { MessageBox::Show("menuitem1 checked is set to true"); } if (form1->menuItem2->Checked == true) { MessageBox::Show("menuitem2 checked is set to true"); } }
Thanks in advance | http://cboard.cprogramming.com/windows-programming/137439-how-check-menu-items-if-checked-true-false.html | CC-MAIN-2014-35 | refinedweb | 232 | 64.34 |
We are working to get offical public documentation on this subject, I will update this post once we get a KB published...
Problem Description
When writing .NET code which uses the System.Management.Automation namespace or using Windows Powershell you may receive the following error when you attempt to load the Exchange Management Shell snap-in.
"No Windows PowerShell Snap-ins are available for version 1"
Resolution
Exchange 2007 is only supported on 64-bit Windows and is, itself, a 64-bit application, therefore many of the components including the shell extensions are 64-bit.
On a 64 bit version of Windows with Powershell installed there are two versions of Powershell.exe. One is the 32-bit version (found at C:\WINNT\syswow64\windowspowershell\v1.0\powershell.exe) and the other is the 64-bit version (found at C:\WINNT\system32\windowspowershell\v1.0\powershell.exe). The Exchange Management Shell snap-in will only load into the 64-bit Powershell. If you try to load it into the 32-bit Powershell.exe, you get the error message above.
Likewise, if you are automating Powershell from an application it must be compiled for 64-bit in order to load the Exchange Management Shell snap-in.
Trying to call Powershell from .net ? Following are few good articles to read, from my colleague Matt | http://blogs.msdn.com/b/mstehle/archive/2007/01/25/kb-preview-error-no-windows-powershell-snap-ins-when-loading-exchange-powershell-snap-in.aspx | CC-MAIN-2014-42 | refinedweb | 221 | 57.67 |
Even though ES6 (ES2015) brought modules to the language, it missed one important thing - a loading method. Proper support is currently being implemented for browsers.
To learn more about the topic, I'm interviewing Bradley Farias..
TiddlyWiki was the first open source project that I worked on in college. It was a single page wiki that could save to disk back in 2005. That is what got me interested in JavaScript. I spent many hours trying to recreate various things such as a spreadsheet editor and a polyfill for Range in IE6.
After college I have worked at different companies, eventually seeing Node at the end of 2009 and joining Nodejitsu in 2011 through 2013. Since then I have bounced around between front-end development with a focus on accessibility and lots of backend tooling workflows.
Editor's note: I used TiddyWiki years ago as my personal wiki on a USB stick.
They are a new mode of JavaScript code that allows you to link JavaScript variables between files. ES Modules are statically linked, meaning that when you import variables; the engine must link those variables before evaluating the module.
The nature of if ES Modules are async or sync is unspecified in the JavaScript specification; so even though all environments are targeting making async module systems, someone could make a sync module system using them.
Consider the example below:
index.js
// Request the `foo` variable from `./foo` be put into scope import { foo } from './foo';
foo.js
const foo = 'foo'; // Mark `foo` as being exported export { foo };
Being a new mode of JavaScript, the first thing is that you have to get your environment to parse ES Modules. In ES2015 the plans for how to use ES Modules was in the specification. However, with no loading mechanism, there was no clear plan for browsers or servers as to how to load modules.
It wasn't until sometime later that WHATWG proposed
<script type=module> and Node proposed a new
.mjs file extension to clarify to the environment how modules are loaded.
After being loaded, the engine needs to link together all the variables that are shared between modules. That means, all the modules in the dependency graph need to be available. The engine recursively reads each source text for the modules and finds all of the dependencies of the modules until there are none left.
If some modules cannot be found, the engine throws an error. Otherwise, it takes all variables marked with
export and puts read-only views of them in the modules that
import those exported variables.
At this point, JavaScript's hoisting takes place, and function declarations and variables are hoisted and allocated. These functions can be called before the module evaluating, but might encounter errors from other variables not being initialized.
Now that the module graph is linked, it is time to start evaluating it. The engine takes a depth-first traversal from the entry module in the order which the import declarations appear in the source text and starts evaluating. If any module throws an error while evaluating, the engine stops evaluating modules and leaves them in the current state of evaluation.
First and foremost, I need to preface this by stating transpilers don't implement ES Modules. They implement a transform of ES Modules syntax to CommonJS semantics and APIs. What I am talking about probably doesn't work the same as a transpiler.
ES Modules use a new parser and evaluation system in the JavaScript specification. They automatically make your code have the same rules as
"use strict", reserve
await as a keyword, and have some changes to how scoping works.
ES Modules are a statically linked module system. Unlike CommonJS or AMD, all dependencies must be known and parsed before any user code evaluating.
console.log('Hello World!'); // Never evaluates import './doesNotExist'; // Will error import { doesNotExist } from './doesExist'; // Will error
ES Modules work with variable bindings, not values. Other module systems share values, ES Modules share variables. That means, if a variable is updated, all files sharing that variable see the update.
// Every file will see `uptime` change over time export let uptime = 0; setInterval(() => uptime++, 1000);
ES Modules are being implemented as asynchronous. CommonJS is a synchronous module system that stops executing code while dependencies load.
To be compatible with performance concerns on the web, ES Modules are asynchronous in all future implementations. Due to this, you can have code executing while loading a module graph. It also means that ES Module graphs can be loaded in parallel, even if they overlap.
ES Modules specifiers are being treated as URL based strings. In some module systems like CommonJS
./hello?world=earth would be treated as a file path. These are now always URLs.
ES Modules always evaluate for each URL that is different. That means implementations would always load the file for
./hello but then add the query string to the file metadata.
./hello?world=moon would load a second time after
earth!
import './echo?msg=hi'; import './echo?msg=there'; // Prints: // > hi // > there
ES Modules are idempotent. Within a given source text,
import { foo } from "./foo"; will always return the same variable
foo. Tools can treat multiple imports are referring to the same variable and it also means that even if someone uses
import('foo'); it will return the same set of variables every time.
Removing build steps. With ES Modules, people can write applications without needing to use a tool like webpack or Browserify. However, browsers are still figuring out how they want to import things like
import 'react';; for now, use relative or absolute paths.
Code splitting. Having ES Modules be asynchronous and able to load in parallel, module graphs can have multiple entry points that only touch the parts of a codebase that are needed.
Enhanced tooling capabilities. Tools like rollup can combine ES Modules with a technique called "Tree Shaking" that removes unused code from a bundle's output. Editors can check if a variable is exported when a developer uses an
import since ES Modules use a new syntax.
import()is coming to both Module and Script modes of JavaScript and will allow Modules to be loaded dynamically.
<script type=module>ES Module loader allowing people to start testing ES Modules and figuring out workflows.
.mjssupport allowing interoperability with both Node and the web.
.mjsbased ES Module loader allowing people to start testing ES Modules and figuring out workflows.
It looks pretty exciting; there will be a definite transition time while bare URLs are figured out in the browser, and people start using
.mjs. I think that one day, we will have development servers that can run ES Modules without any build step, but it is probably ways away.
Even in development, people may want to use code transforms for things like JSX or other templating. The web is moving towards a more tooling heavy ecosystem, and that has caused some difficulty.
I think that this trend is likely to continue as things like WASM become integrated with JavaScript. Tools should be embraced so that they can be improved to the point where they are not thought about when using them.
Do not despair! The web is one of the most challenging and complex programming environments out there. There are many ways to do things, so don't be afraid of your code looks different from any other code. Make your code work and enjoy what you have done.
This is a bit of a rough one; I would say Caridy Patiño is a good choice. He has a lot of involvement in places like internationalization and TC39.
Try and stay true to yourself, whoever you are. People can get very heated on technical topics, but don't let them pressure you into anything. Stay open to criticism, listen to others, and become stronger in your beliefs.
Thanks for the interview Bradley! I think we live in interesting times and pushing module loading to the browser level feels like one of the last missing bits. It will change the way people think about web development again. | https://survivejs.com/blog/es-modules-interview/index.html | CC-MAIN-2018-34 | refinedweb | 1,352 | 64.81 |
My 4 year old son wanted to see how his dinosaur egg hatches.... The ideal excuse for making a timelaps video with a Raspberry Pi!
I already have my tripod from my first instructable ever:-...
I'll use this one to position the camera on the egg.
All I need now is a script:
- to make the pictures every 10 minutes and save them on a ubs stick
- to bring all those pictures together in a gif file
Materials:
- Raspberry Pi: I used the B+ model. But you can also use the raspberry pi 2 or 3
- Raspberry Pi Camera Module (... )
- a usb stick
- a tripod
- adhesive tape
- Optional: a computer, screen and keyboard
- and of course: a dinosaur egg! (...)
Let's get started!
Teacher Notes
Teachers! Did you use this instructable in your classroom?
Add a Teacher Note to share how you incorporated it into your lesson.
Step 1: Setting Up the Pi and Camera
I'll start under the assumption you have a Raspberry Pi up and running.
Make sure you have the camera module well connected and enabled on the pi. This is a very good tutorial:...
I attached the camera on my pi with some adhesive tape.
Take some time to make your setup with the object or landscape you want to film and install the pi on the tripod. Make sure it is stable and avoid moving the tripod.
Make some testpictures or make a very short timelaps video with the script that comes in the next step. Adjust your setup untill it is perfect!!
Now, let's go to the software side of the story.
Step 2: Shooting the Pictures
I've made the following script for taking the pictures.
Let's take a closer look at it!
- First we import some necessary libraries
import os
from picamera import PiCamera
from os import system
from time import sleep
- we prepare the camera and you can set a resolution of your choise. The # in front of the second line means that the line won't be used. Delete the # to change the resolution
camera = PiCamera()
#camera.resolution = (1024, 768)
- we will mount an usb-stick to save the pictures on, in the folder 'images'
os.system("sudo mkdir /mnt/images")
os.system("sudo mount /dev/sda1 /mnt/images")
os.system("ls /mnt/images")
os.chdir('/mnt/images/')
The line 'os.system("ls /mnt/images")' will show all the files in the folder, so you can check if you are in the right place!
- This part of the code let you make 10 pictures with a time interval of 10 seconds. You can adjust this by changing the amount of pinctures (change the number after range) or the interval (change the number after sleep). It all depends on how many photo's you want and the period of time.
for i in range(10):
camera.capture('image{0:04d}.jpg'.format(i))
sleep(10)
- This line will print 'Done' to the screen when all the pictures have been taken
print('Done')
You can download the script here. It is ready to use!
Type sudo python timelaps.py to start the process
Step 3: Bring It Together in a Timelaps Video
After we have taken our pictures, it is time to bring it all together in a timelaps gif!
This is the script for making the gif:
- Again, we start with importing some libraries
import os
from picamera import PiCamera
from os import system
- Again, we mount the usb-stick to save everything on, in the folder 'images'
os.system("sudo mkdir /mnt/images")
os.system("sudo mount /dev/sda1 /mnt/images")
os.system("ls /mnt/images")
os.chdir('/mnt/images/')
- This command 'glues' the pictures together. The number after '-delay' tells you how long each photo will be shown. This is in milliseconds. So here, every picture is shown 20 milliseconds
system('convert -delay 20 -loop 0 image*.jpg animation.gif')
- When your gif is ready, you will see the message 'done'
print('done')
Attention!
- Depeding on the amount of pictures you use and the resolution of them, it can take some time, for the process to be finished. Be patient!!
- You might get an error because the 'images' folder already exists. No worry, it won't stop the process!
You can download the file here.
Type python make.py to start the process
Step 4: A Dinosaur Egg Hatching!
TADA!!! My son goes crazy when he sees the video!
Bear in mind that this is shot in our livingroom, without good lights! You will also notice that a round object (the egg) does not always stays stable om its position :-)
This 35 sec long video has 707 pictures in it, each one displayed 50 milliseconds. The pictures have been taken with an interval of 10 minutes over a period of approx. 130 hours. This amount of pictures is to much for the make.py script, so I used MovieMaker (Microsoft) to make the video. It took a few seconds to put it together!
Resources used to make this project:
-...
-...
I hope you like it and get timelapsing!!
Discussions
3 years ago
cool dino | https://www.instructables.com/id/Timelaps-a-Dinosaur-Egg-With-Raspberry-Pi/ | CC-MAIN-2019-43 | refinedweb | 858 | 75.71 |
Here you'll find C/C++ examples (of all sorts, may be a few of sorts as well).
This is a discussion on C/C++ examples within the C++ Programming forums, part of the General Programming Boards category; Here you'll find C/C++ examples (of all sorts, may be a few of sorts as well)....
Here you'll find C/C++ examples (of all sorts, may be a few of sorts as well).
Code:Another example. Checks for balanced ()s {}s []s <>s. They may be nested.
A Rational number class that supports +, -, *, /, <<, >> operators.
Simple audio capture using OpenAL:
See below where I referenced Buffers[0]? Im only read the FIRST value of a large buffer, the entire sample is there so consider that!
This was designed to be an application to test/alter the values I get from the capture... understand that.
For those that need; }
Last edited by simpleid; 08-14-2007 at 01:59 PM.
An FFT algorithm:
(vectors, floats, etc. are as is due to the fact that i designed it for my application, just change it to suit your needs... obviously.)
Code:#include <math.h> #include <vector> vector<float> fourierT (vector<float> & w) { vector<float> Sa, Im, Re; int N=w.size(), L= int(float(N)/2.0); float fIm=0.0, fRe=0.0, p=0.0, h=0.0; for (k=0; k<L; ++k) { Im.push_back(w.at(k*2)); Re.push_back(w.at((k*2)+1)); } p = (2 * M_PI / float(N+N)); for (j=0; j<N; ++j) { fIm = 0.0, fRe = 0.0; for (k=0; k<L; ++k) { h = j * k * p; fIm += Im.at(k) * sin(h); fRe += Re.at(k) * cos(h); } Sa.push_back( sqrt(fIm*fIm + fRe*fRe) ); } return Sa; }
Last edited by simpleid; 08-14-2007 at 01:43 PM.
Once I saw someone ask about this on the forum, so here's the code again- RGB->INT | INT->RGB:
Code:#include <iostream> #include <vector> using namespace std; int toINT (int r, int g, int b); vector<int> toRGB (int k); int main(int argc, char *argv[]) { int hClr=0; vector<int> vRGB; vector<int>::iterator vRGBi; hClr = toINT(40, 120, 255); cout << hClr <<endl; vRGB = toRGB(hClr); for (vRGBi = vRGB.begin(); vRGBi != vRGB.end(); ++vRGBi) { cout << (*vRGBi) << " "; } cout <<endl; system("PAUSE"); return EXIT_SUCCESS; } int toINT (int r, int g, int b) { // assuming rgb is > 0 and <= 255 int cR=0, cG=0, cB=0; cR = r; // cR = 0x000000RR cG = cR << 8; // cG = 0x0000RR00 cG = cG | g; // cG = 0x0000RRGG cB = cG << 8; // cB = 0x00RRGG00 cB = cB | b; // cB = 0x00RRGGBB return cB; } vector<int> toRGB (int k) { vector<int> sINT; sINT.push_back((k >> 16) & 0x000000FF); sINT.push_back((k >> 8) & 0x000000FF); sINT.push_back(k & 0x000000FF); return sINT; }
Last edited by simpleid; 08-14-2007 at 01:57 PM.
Oh and BTW, a forum thread makes for a very bad code repository, but I do know the above will be useful to certain individuals... that's why I posted anyway.
Last edited by simpleid; 08-14-2007 at 01:47 PM.
I find that rational class rather lacking.
For something a bit more satisfying, check the one on my homepage (Useful Classes)
My homepage
Advice: Take only as directed - If symptoms persist, please see your debugger
Linus Torvalds: "But it clearly is the only right way. The fact that everybody else does it some other way only means that they are wrong"
Yeah iMalc!
I am actually a An Idiot at C++ and posted my tiny examples here for my own reference and
any newbie like me (who may see them useful).
Thanks brewbuck!
My own auto_ptr implementation:
Code:#include <iostream> using namespace std; template <typename T> class ptr { T* p; public: ptr(T* q=0) { p = q; } ~ptr() { cout << *p << " eek.\n"; delete p; } T& operator*() { return *p; } }; int main() { ptr<int> ip = new int; *ip = 5; cout << *ip << endl; ptr<double> dp = new double; *dp = 5.5; cout << *dp << endl; return 0; } | http://cboard.cprogramming.com/cplusplus-programming/91787-c-cplusplus-examples.html | CC-MAIN-2014-23 | refinedweb | 667 | 72.97 |
Created attachment 200848 [details]
diagnostic files generated by clang
Tried to compile www/chromium on rpi3 at r342781M using four swap partitions,
3 usb and 1 microSD.
M on revision is for
which seems unlikely to be the cause.
@Bob Could you please:
- Attach the patch in question that is in use with this port
- Attempt to reproduce and confirm the issue (crash) is apparent without that patch
The patch attempted was found here:
It has been backed out and the system is rebuilding. I'll then try compiling
www/chromium again.
Created attachment 200875 [details]
Clang segfault diagnostic file (1 of 2)
Created attachment 200876 [details]
Clang segfault diagnostic file (2 of 2)
(In reply to Bob Prohaska from comment #0)
Surely there is some problem that mitigate with clang 7.0.1 on aarch64. Let's see if Oleksandr can shed some light.
(In reply to Bob Prohaska from comment #2)
The audio patch has been backed out, the running kernel is now
up to r342855. Sources are at r342874
All attempts to make buildworld result in segmentation faults in clang.
Make toolchain is running now and seems better-behaved.
The system appears reasonably well-behaved (no console errors or hardware
faults reported) apart from the errors in clang.
The difficulties only emerged when running a version of clang compiled with
the patched kernel. It's almost as if clang and only clang got corrupted
by rebuilding with the patched kernel.
(In reply to Bob Prohaska from comment #6)
The system has been updated to
13.0-CURRENT FreeBSD 13.0-CURRENT r342987 GENERIC arm64
and segfaults persist, even when make is run without -j.
This looks more like clang-related crash than ARM-specific. I'll try to take a look but my expertise in this area is not great. Adding dim@ to Cc, he may be aware of known problems with clang/ARM in the latest release.
I tried the sprintf-4b97e4.{c,sh} test case, and that compiles without any problem for me, with clang 7.0.1 on head r342759. I also tried clang 6.0.1 succesfully.
The other test case, partial_circular_buffer-563908.sh, is missing the corresponding .cpp file, so I can't evaluate it.
(In reply to Dimitry Andric from comment #9)
Here's the latest segfault message, generated using make buildworld (with
no -j) on r342987, from sources at 343001
......
/usr/obj/usr/src/arm64.aarch64/tmp/usr/include/openssl/opensslconf.h:107:55: note:
expanded from macro 'DECLARE_DEPRECATED'
# define DECLARE_DEPRECATED(f) f __attribute__ ((deprecated));
^
1 warning generated.
cc: error: unable to execute command: Segmentation fault (core dumped)
cc: error: clang frontend command failed due to signal (use -v to see invocation)
FreeBSD clang version 7.0.1 (tags/RELEASE_701/final 349250) (based on LLVM 7.0.1)
Target: aarch64-unknown-freebsd13.0
Thread model: posix
InstalledDir: /usr/bin
cc: note: diagnostic msg: PLEASE submit a bug report to and include the crash backtrace, preprocessed source, and associated run script.
cc: note: diagnostic msg:
********************
PLEASE ATTACH THE FOLLOWING FILES TO THE BUG REPORT:
Preprocessed source(s) and associated run script(s) are located at:
cc: note: diagnostic msg: /tmp/serverloop-a25e04.c
cc: note: diagnostic msg: /tmp/serverloop-a25e04.sh
cc: note: diagnostic msg:
********************
root@www:/usr/src #
root@www:/usr/src # find /usr/obj -name serverloop.o -depth -print
root@www:/usr/src # ls -l /tmp/serverloop-a25e04.*
-rw-r--r-- 1 root wheel 2320872 Jan 16 12:41 /tmp/serverloop-a25e04.c
-rw-r--r-- 1 root wheel 4056 Jan 16 12:41 /tmp/serverloop-a25e04.sh
The end of the build log contains:
cc -target aarch64-unknown-freebsd13.0 --sysroot=/usr/obj/usr/src/arm64.aarch64/tmp -B/usr/obj/usr/src/arm64.aarch64/tmp/usr/bin -O2 -pipe -I/usr/src/crypto/openssh -include ssh_namespace.h -DHAVE_LDNS=1 -DUSE_BSM_AUDIT=1 -DHAVE_GETAUDIT_ADDR=1 -DUSE_BLACKLIST=1 -I/usr/src/contrib/blacklist/include -include krb5_config.h -DLIBWRAP=1 -g -MD -MF.depend.serverloop.o -MTserverloop.o -std=gnu99 -fstack-protector-strong -Qunused-arguments -c /usr/src/crypto/openssh/serverloop.c -o serverloop.o
*** Error code 254
Stop.
make[5]: stopped in /usr/src/secure/usr.sbin/sshd
*** Error code 1
Stop.
make[4]: stopped in /usr/src/secure/usr.sbin
*** Error code 1
Stop.
make[3]: stopped in /usr/src/secure
*** Error code 1
Stop.
make[2]: stopped in /usr/src
*** Error code 1
Stop.
make[1]: stopped in /usr/src
*** Error code 1
make: stopped in /usr/src
The diagnostic files are at
It's starting to look as if I've somehow corrupted my clang installation.
Is it possible to download a precompiled binary, akin to a package, as a
workaround?
(In reply to Bob Prohaska from comment #10)
> (In reply to Dimitry Andric from comment #9)
>
> Here's the latest segfault message, generated using make buildworld (with
> no -j) on r342987, from sources at 343001
...
> -rw-r--r-- 1 root wheel 2320872 Jan 16 12:41 /tmp/serverloop-a25e04.c
> -rw-r--r-- 1 root wheel 4056 Jan 16 12:41 /tmp/serverloop-a25e04.sh
I've tried these files, but they compile just fine for me. However, this is on an amd64 host machine. I haven't tried it on an aarch64 machine, but I suspect that there is either something wrong with your aarch64 host, or with your installation
> It's starting to look as if I've somehow corrupted my clang installation.
> Is it possible to download a precompiled binary, akin to a package, as a
> workaround?
It is probably easiest to extract them from a snapshot. E.g. from here: download the base.txz file, extract it into a temp dir, and get the usr/bin/clang executable (and maybe also lld) from there.
Attempts to work through the crashes by cleaning up and restarting
buildworld have so far proved ineffective. Clang still stops with
signal 11's or internal higher-numbered (e.g., 254) errors fairly regularly.
Oddly enough, chromium (the browser) seems to work alright, if slowly,
and no errors apart from clang are evident.
The only complaint other than from clang occurs when starting chromium:
[83827:1218383872:0120/075747.927080:ERROR:gpu_process_transport_factory.cc(1016)] Lost UI shared context.
[84111:1339003392:0120/075753.574384:ERROR:command_buffer_proxy_impl.cc(113)] ContextResult::kFatalFailure: Shared memory handle is not valid
Still, chromium does not crash.
In any case, this does not seem to be a chromium issue so this particular
bug report might as well be closed.
Thanks for everyone's attention,
bob prohaska
There's a seemingly unrelated bug report at
in which an i386 host cross-compiling for arm64 using
clang segfaults repeatedly, much as I'm seeing in, first,
chromium and then buildworld.
IIUC the problem is on the arm64 side of the compiler.
Most curiously, using the suggested workaround of
setting CFLAGS=-O2 in /etc/make.conf seems to help.
On the first buildworld signal 11's were seen early
in buildworld but grew scarce with repetition.
So far it looks as if three OS build and install cycles
might be enough to flush out the problem.
The bug report indicates a fix in place as of nearly
a year ago. I'd think freebsd-arm would have it by now,
but....
Adding CFLAGS=-O2 to /etc/make.conf _almost_ fixed the clang segfaults,
there was still one signal 11 error, IIRC somewhat past halfway through
the build. After restarting make, chromium compiled successfully. While
not perfect, it represents a huge improvement.
Chrome failed to run, with the glib error reported in Bug 220103. /usr/ports
is updating now and I'll try again. | https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=234672 | CC-MAIN-2019-39 | refinedweb | 1,274 | 67.25 |
view raw
I would like to write a script in python which will logging a file, one per day.
It will log gps records and the log file will be a gpx(I have the script for that).
So for today it will create a file named 12-05-2014.gpx and will keep all the gps records until it turns 13-05-2014. Then it will create a new file 13-05-2014.gpx and keep logging in there.
Is this possible? Could you give me some hints about that please?
N.B.: I'm understanding that you're extending an existing python script that handles your GPS logs. If not, @aj8uppal may be right.
I suggest you should use the logging module to output your data, and take advantage of the
RotatingFileHandler which will do that on your behalf. Along with the
Formatter class, you can use the power logging module at your advantage for doing a rotating log.
Though if you consider the logging module of python is a no go – though I think that's the best option, you can always use the following write function in your program:
import os import time class RotatingFileOpener(): def __init__(self, path, mode='a', prepend="", append=""): if not os.path.isdir(path): raise FileNotFoundError("Can't open directory '{}' for data output.".format(path)) self._path = path self._prepend = prepend self._append = append self._mode = mode self._day = time.localtime().tm_mday def __enter__(self): self._filename = self._format_filename() self._file = open(self._filename, self._mode) return self def __exit__(self, *args): return getattr(self._file, '__exit__')(*args) def _day_changed(self): return self._day != time.localtime().tm_mday def _format_filename(self): return os.path.join(self._path, "{}{}{}".format(self._prepend, time.strftime("%Y%m%d"), self._append)) def write(self, *args): if self._day_changed(): self._file.close() self._file = open(self._format_filename()) return getattr(self._file, 'write')(*args) def __getattr__(self, attr): return getattr(self._file, attr) def __iter__(self): return iter(self._file)
which you can use as follows:
with RotateFileOpener('/var/log/gps', prepend='gps_data-', append='.gpx') as logger: while True: log = get_gpx_data() logger.write(log)
which will write into
/var/log/gps:
/var/log/gps/gps_data-20140512.gpx /var/log/gps/gps_data-20140513.gpx /var/log/gps/gps_data-20140514.gpx … | https://codedump.io/share/8wXEVLhgx4or/1/create-log-file-one-every-day-in-python | CC-MAIN-2017-22 | refinedweb | 381 | 53.07 |
Dask Read Parquet Files into DataFrames with read_parquet
• March 14, 2022
This blog post explains how to read Parquet files into Dask DataFrames. Parquet is a columnar, binary file format that has multiple advantages when compared to a row-based file format like CSV. Luckily Dask makes it easy to read Parquet files into Dask DataFrames with read_parquet.
It’s important to properly read Parquet files to take advantage of performance optimizations. Disk I/O can be a major bottleneck for distributed compute workflows on large datasets. Reading Parquet files properly allows you to send less data to the computation cluster, so your analysis can run faster.
Let’s look at some examples on small datasets to better understand the options when reading Parquet files. Then we’ll look at examples on larger datasets with thousands of Parquet files that are processed on a cluster in the cloud.
Dask read_parquet: basic usage
Let’s create a small DataFrame and write it out as Parquet files. This will give us some files to try out read_parquet. Start by creating the DataFrame.
import dask.dataframe as dd
import pandas as pd

df = pd.DataFrame(
    {"nums": [1, 2, 3, 4, 5, 6], "letters": ["a", "b", "c", "d", "e", "f"]}
)
ddf = dd.from_pandas(df, npartitions=2)
Now write the DataFrame to Parquet files with the pyarrow engine. The installation instructions for pyarrow are in the Conda environment for reading Parquet files section that follows.
ddf.to_parquet("data/something", engine="pyarrow")
Here are the files that are output to disk.
data/something/
  _common_metadata
  _metadata
  part.0.parquet
  part.1.parquet
You can read the files into a Dask DataFrame with read_parquet.
ddf = dd.read_parquet("data/something", engine="pyarrow")
Check the contents of the DataFrame to make sure all the Parquet data was properly read.
ddf.compute()
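For the tiny DataFrame written above, the collected result should look something like this (the index and both columns round-trip through the Parquet files unchanged):

   nums letters
0     1       a
1     2       b
2     3       c
3     4       d
4     5       e
5     6       f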
Dask read Parquet supports two Parquet engines, but most users can simply use pyarrow, as we’ve done in the previous example, without digging deep into this option.
Dask read_parquet: pyarrow vs fastparquet engines
You can read and write Parquet files to Dask DataFrames with the fastparquet and pyarrow engines. Both engines work fine most of the time. The subtle differences between the two engines don't matter for the vast majority of use cases.
It’s generally best to avoid mixing and matching the Parquet engines. For example, you usually won’t want to write Parquet files with pyarrow and then try to read them with fastparquet.
This blog post will only use the pyarrow engine and won’t dive into the subtle differences between pyarrow and fastparquet. You can typically just use pyarrow and not think about the minor difference between the engines.
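As a quick illustration, the engine is just a keyword argument, so switching between the two only changes one argument. This sketch assumes the fastparquet package is installed if you pick that engine:

# both calls read the same files written in the earlier example
ddf_pa = dd.read_parquet("data/something", engine="pyarrow")
ddf_fp = dd.read_parquet("data/something", engine="fastparquet")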
Dask read_parquet: lots of files in the cloud
Our previous example showed how to read two Parquet files on localhost, but you’ll often want to read thousands of Parquet files that are stored in a cloud based file system like Amazon S3.
Here’s how to read a 662 million row Parquet dataset into a Dask DataFrame with a 5 node computational cluster.
import coiled
import dask
import dask.dataframe as dd

cluster = coiled.Cluster(name="read-parquet-demo", n_workers=5)
client = dask.distributed.Client(cluster)

ddf = dd.read_parquet(
    "s3://coiled-datasets/timeseries/20-years/parquet",
    engine="pyarrow",
    storage_options={"anon": True, "use_ssl": True},
)
Take a look at the first 5 rows of this DataFrame to get a feel for the data.
ddf.head()
This dataset contains a timestamp index and four columns of data.
Let’s run a query to compute the number of unique values in the id column.
ddf["id"].nunique().compute()
This query takes 59 seconds to execute.
Notice that this query only requires the data in the id column. However, we transferred the data for all columns of the Parquet file to run this query. Spending time to transfer data that's not used from the filesystem to the cluster is obviously inefficient.
Let’s see how Parquet allows you to only read the columns you need to speed up query times.
Dask read_parquet: column selection
Parquet is a columnar file format which allows you to selectively read certain columns when reading files. You can’t cherry pick certain columns when reading from row-based file formats like CSV. Parquet’s columnar nature is a major advantage.
Let’s refactor the query from the previous section to only read the
id column to the cluster by setting the
columns argument.
ddf = dd.read_parquet( "s3://coiled-datasets/timeseries/20-years/parquet", engine="pyarrow", storage_options={"anon": True, "use_ssl": True}, columns=["id"], )
Now let’s run the same query as before.
ddf["id"].nunique().compute()
This query only takes 43 seconds to execute, which is 27% faster. This performance enhancement can be much larger for different datasets / queries.
Cherry picking individual columns from files is often referred to as column pruning. The more columns you can skip, the more column pruning will help speed up your query.
Definitely make sure to leverage column pruning when you’re querying Parquet files with Dask.
Dask read_parquet: row group filters
Parquet files store data in row groups. Each row group contains metadata, including the min/max value for each column in the row group. For certain filtering queries, you can skip over entire row groups just based on the row group metadata.
For example, suppose
columnA in
row_group_3 has a min value of 2 and a max value of 34. If you’re looking for all rows with a
columnA value greater than 95, then you know
row_group_3 won’t contain any data that’s relevant for your query. You can skip over the row group entirely for that query.
Let’s run a query without any row group filters and then run the same query with row group filters to see the performance book predicate pushdown filtering can provide.
ddf = dd.read_parquet( "s3://coiled-datasets/timeseries/20-years/parquet", engine="pyarrow", storage_options={"anon": True, "use_ssl": True}, ) len(ddf[ddf.id > 1170])
This query takes 77 seconds to execute.
Let’s run the same query with row group filtering.
ddf = dd.read_parquet( "s3://coiled-datasets/timeseries/20-years/parquet",", 1170)]], ) len(ddf[ddf.id > 1170])
This query runs in 4.5 seconds and is significantly faster.
Row group filtering is also known as predicate pushdown filtering and can be applied in Dask read Parquet by setting the
filters argument when invoking
read_parquet.
Predicate pushdown filters can provide massive performance gains or none at all. It depends on how many row groups Dask will be able to skip for the specific query. The more row groups you can skip with the row group filters, the less data you’ll need to read to the cluster, and the faster your analysis will execute.
Dask read_parquet: ignore metadata file
When you write Parquet files with Dask, it’ll output a
_metadata file by default. The
_metadata file contains the Parquet file footer information for all files in the filesystem, so Dask doesn’t need to individually read the file footer for every file in the Parquet dataset every time the Parquet lake is read.
The
_metadata file is a nice performance optimization for smaller datasets, but it has downsides.
_metadata is a single file, so it’s not scalable for huge datasets. For large data lakes, even the metadata can be “big data”, with the same scaling issues of “regular data”.
You can have Dask read Parquet ignore the metadata file by setting
ignore_metadata_file=True.
ddf = dd.read_parquet( "s3://coiled-datasets/timeseries/20-years/parquet", engine="pyarrow", storage_options={"anon": True, "use_ssl": True}, ignore_metadata_file=True, )
Dask will gather and process the metadata for each Parquet file in the lake when it’s instructed to ignore the
_metadata file.
Dask read_parquet: index
You may be surprised to see that Dask can intelligently infer the index when reading Parquet files. Dask is able to confirm the index from the Pandas parquet file metadata. You can manually specify the index as well.
ddf = dd.read_parquet( "s3://coiled-datasets/timeseries/20-years/parquet", engine="pyarrow", storage_options={"anon": True, "use_ssl": True}, index="timestamp", )
You can also read in all data as regular columns without specifying an index.
ddf = dd.read_parquet( "s3://coiled-datasets/timeseries/20-years/parquet", engine="pyarrow", storage_options={"anon": True, "use_ssl": True}, index=False, ) ddf.head()
Dask read_parquet: categories argument
You can read in a column as a category column by setting the categories option.
ddf = dd.read_parquet( "s3://coiled-datasets/timeseries/20-years/parquet", engine="pyarrow", storage_options={"anon": True, "use_ssl": True}, categories=["id"], )
Check the dtypes to make sure this was read in as a category.
ddf.dtypes id category name object x float64 y float64 dtype: object
Conda environment for reading Parquet
Here’s an abbreviated Conda YAML file for creating an environment with the pyarrow and fastparquet dependencies:
name: standard-coiled channels: - conda-forge - defaults dependencies: - python=3.9 - pandas - dask - pyarrow - fastparquet …
You don’t need to include both pyarrow and fastparquet in your environment. Just add the Parquet engine you’ll be using.
You can use this environment when running Dask to keep your life simple.
Additional resources
Here are additional resources if you’d like to learn more:
- 5 Reasons Parquet is better than CSV for data analyses
- Advantages of Parquet file format
- How to speed up a Pandas query 10x with 6 Dask tricks, many of which involve the Parquet tactics outlined in this post
Conclusion
This blog post showed you how to properly read Parquet files with Dask.
There are a lot of options and they can impact the runtime of your analysis significantly, so knowing how to read Parquet files is quite important.
Column pruning is of particular importance. It’s easy to apply column pruning and it often yields a significant performance gain. | https://coiled.io/blog/dask-read-parquet-into-dataframe/ | CC-MAIN-2022-21 | refinedweb | 1,631 | 54.93 |
Python might seem like a recent development but at this point it is 20 years old. The first release of Python 1.0 goes as far back 1994. Even the Python 2 series which is the dominant version of Python to this day was released way back in 2000. The language was created by Guido van Rossom and he continues to have a central role in its development. Python is in a rather odd situation currently where the most popular version of the language is the Python 2 variant despite the availability of Python 3 since 2008. Python 2 had its final major release in 2010 with the release of Python 2.7 and it will only be bug fixes from here. Python 3 itself is currently at 3.4 with 3.5 to release next year.
Guido van Rossum, the creator of Python worked for Google while they launched Google App Engine with Python support out of the box. He now works for Dropbox.
This preference for Python 2 comes from the fact that the Python 3 breaks compatibility with Python 2 code. Python 3 was a major update to the programming language and the developers took the opportunity to modernise the language in many ways and get rid of poor decisions made in the past.
The syntax differences between Python 2 and 3 kept a majority of Python libraries from instantly supporting it, and some are yet to support it. While there is an automated tool for converting Python 2 code to Python 3 code, it doesn’t work in all cases. The tides are slowly changing as Python 3 gains new and enticing features, and more libraries are ported to it. To further ease this process Python 3.3 brought its syntax closer to 2 and made it easy to make code compatible with both versions. While this might make it sound like they are very different languages, they are really not, it’s just the nature of the changes that has caused these issues rather than their intensity or quantity.
With talk of versions out of the way, let’s look at the language itself, which is unconventional in many ways. Python takes a very different approach to structuring code, rather that denote blocks of code with curly braces {}, it uses whitespace indentation. Most languages keep the formatting of the code separate from syntax, which means you are likely to run into dozens of different coding styles and arguments over which is better. Python makes the indentation part of the language resulting in more consistent looking code.
Python has a core philosophy, and the more you code in Python the more you tend to understand what makes certain code more or less ‘Pythonic’. Long time Python developers can be said to have gotten the ‘The Zen of Python’. You can read ‘The Zen of Python’ on their website, or by typing `import this` in a Python console.
Significance
Possibly one of Python’s greatest strengths is its simplicity. It is easy to get into it. Python is widely used, to the point that it is present by default if you’re running OSX. The same can be said of most Linux distributions. Installing Python on Windows isn’t that hard either.
When you reach the limits of what can be accomplished using the Python’s inbuilt libraries, there is an extensive ecosystem of packages out there to explore. The PyPI or Python Package Index is a website that lists all Python packages submitted to it. This index of packages is accessible from the command line with the `easy_install` and `pip` which lets you search this index, download packages and update them if need be. You can just list all the packages you want for you project in a text file and `pip` can install them all automatically. Python packages are diverse, with libraries that integrate with APIs, GUI programming frameworks, mathematical and biological computation, web frameworks and more.
If your designs are more mercenary, you’ll be glad to know that there is healthy demand for Python programmers, and it has been placed at #8 or above in the TIOBE Programming Community Index from many years. It is used by major organisations like Google — where its creator worked till 2012 — and scientific research organisations such as CERN. A popular video hosting website called YouTube is also built on Python, and it is used behind the scenes on other Google products as well. DropBox also uses Python, and in fact currently employs Guido. Another high-traffic website, Reddit is built on Python and is in fact open source. The popular Discus commenting platform is also built on Python, using the Django framework. Many popular applications — 3ds MAX, Maya, GIMP among others — support Python for automating the application or for creating plugins.
Hello World
print(”Hello, World”)
It’s hard to make things any simpler than this, and Python is often about making things as simple as they can be.
Sort
def insertion_sort(
ulist
):
for
idx
in range(1,
len
(
ulist
)):
curr_value = ulist[idx]
pos =
idx
while pos > 0 and ulist[pos - 1] > curr_value:
ulist[pos] = ulist[pos - 1]
pos -= 1
ulist[pos] = curr_value
Python code tends to be self-documenting with the tendency of many Python programmers to use longer variable and function names in Snake Case rather than CamelCase.
Of course, the above code need not be written in Python since it comes with an inbuilt function called sort. Python’s sort happens to implement an excellent sorting algorithm that was invented for Python itself. It is called Timsort after its inventor Tim Peters and is widely used .
Tools and Learning Resources
You would be forgiven for believing that a major part of the internet was actually composed of Python tutorials. Even so, Mark Pilgim’s “Dive into Python” and “Dive into Python 3” books are great introductory texts. Another great book is “Think Python: How to Think Like a Computer Scientist”. It too is available for free here. The folks at interactive python have also converted the above book, among others, into an interactive online version that lets you run code samples and run your own code right in the browser. Browser-based tools for Python are now abundant, thanks to Skulpt, a JavaScript ‘port’ of the Python interpreter. You can run basic Python code online here and here. There are also some specialised Python-based online tools available. Morph.io lets you write Python scrapers for extracting data from other websites; PythonAnywhere focuses on hosting Python code and Wakari lets you do scientific computing using Python.
You should definitely prefer the interactive version of the book, unless you have an excessively slow computer or internet connection.
Speaking of scientific computing, packages like NumPy allow one to perform extensive mathematical operations, SymPy allows for symbolic mathematics, Pandas is great for data manipulations, and matplotlib is brilliant for plotting data.
One of the greatest tools though is probably IPython, an enhanced Python shell that integrates well with the above to let you show plots embedded in your Python console. IPython notebook lets you create interactive documents that contain a mixture of rich text, plots and calculations.
There are even specialised versions of Python that come with these packages pre-installed, such as Enthought’s Canopy, Continuum Analytics’ Anaconda, and Python(x,y).
The Viz Mode in online Python IDE CodeSkulptor lets you visualise your Python code as it runs.
Speaking of specialised versions, it’s important to note that the Python installers you get on python.org are just one implementation of the Python language. The three implementations mentioned above bundle tonnes of scientific tools with Python, then there is eGenix Python that’s a single exe! There are also complete re-implementations of Python. For instance JPython is a Python interpreter that runs on the Java VM, and as such it can access Java functionality. Similarly, you have IronPython that runs on .NET Most interesting is PyPy, a Python implementation that was originally written in Python itself. Now it’s a JIT compiler — the kind you find for JavaScript in modern browsers — for Python. In many cases it performs significantly faster than the standard Python implementation. We would never forgive ourselves if we didn’t mention virtualenv, a very popular package for Python that makes developing apps much simpler. It allows you to create virtual Python environments which lets you develop and test your code in a clean environment that isn’t tainted by all the Python packages you have installed globally.
For tutorials on 15 other hot programming languages go here.
Other Popular Deals
- Samsung A7 2016 Edition GoldenRs. 18990 *Buy Now
- Redmi 6A (Rose Gold, 2GB RAM,...Rs. 6099Buy Now
- Redmi Y2 (Gold, 3GB RAM, 32GB...Rs. 9099Buy Now | https://www.digit.in/software/learn-python-tutorial-python-basics-28502.html | CC-MAIN-2019-09 | refinedweb | 1,466 | 61.87 |
Asked by:
[Python][Azure IoT Hub] IoT Hub - Complete and Reject a cloud-to-device message getting error 412
Question
Hi,
I am writing Python application using Azure IoT Hub REST API. It looks like documentation is not correct and IoT SDK too.
1.
If I follow this document I get HTTP Error 405: Method Not Allowed
2. azure-iot-sdks
I tried to copy behavior of IoT SDKs. But there is problem too. If I try to call Complete for Azure IoT Cloud to Device mesaage I get HTTP Error 412: Precondition Failed Same for Reject
I use ETag for If-Match HTTP header and MessageId in URL.
headers = {
'Authorization': sas,
'If-Match': etag
}
req =
urllib.request.Request('https://'+ HOST + url + "/messages/devicebound/"+ messageid + "?api-version=2015-08-15-preview",
headers = headers,
method =
'DELETE')
What is correct call for Complete and Reject?
- Edited by Gary Liu - MSFTModerator Friday, January 1, 2016 12:46 AM edit title
- Moved by Peter Pan - MSFTMicrosoft contingent staff, Moderator Friday, July 1, 2016 3:00 AM Move to new forum
- Edited by Peter Pan - MSFTMicrosoft contingent staff, Moderator Friday, July 1, 2016 6:23 AM add title tag
All replies
Hi,
Thank you for your patience and apologize for delay. I had reported this issue and we will get back to you later as soon as we can.
Appreciate your understanding.
Best Regards,
Will
We are trying to better understand customer views on social support experience, so your participation in this interview project would be greatly appreciated if you have time. Thanks for helping make community forums a great place.
Hi,
I found answer. There is issue in documentation and maybe behavior.
1. If-Match header doesn't have any impact
2. Correct url is
https://{IoTHubName}.azure-devices.net/devices/{deviceId}/messages/devicebound/{E-tag}?api-version={api-version}
3. In message header is E-Tag in quotes. You must delete quotes before you use it in url
ETag: "41636a1d-8fb5-4cdc-abca-155b9db9e356"
Final url can looks like this
Here is full example in Python:
import hmac import base64 import urllib.parse import urllib.request import time # START: Azure Evet Hub settings KEY = "xxxx"; HOST = "xxxx.azure-devices.net"; DEVICE_NAME = "xxxx"; # END: Azure Evet Hub settings # current time +10 minutes timestamp = int(time.time()) + (10 * 60) url = "/devices/" + DEVICE_NAME; urlToSign = urllib.parse.quote(HOST + url, safe='') h = hmac.new(base64.b64decode(KEY), msg = "{0}\n{1}".format(urlToSign, timestamp).encode('utf-8'), digestmod = 'sha256') sas = "SharedAccessSignature sr={0}&sig={1}&se={2}".format(urlToSign, urllib.parse.quote(base64.b64encode(h.digest()), safe = ''), timestamp) print(sas) headers = { 'Authorization': sas, 'Content-Type' : 'application/json' } data = b"{message: 'Hello from Python'}" req = urllib.request.Request('https://' + HOST + url + "/messages/events?api-version=2015-08-15-preview", data, headers, method = 'POST') with urllib.request.urlopen(req) as f: print(f.code) headers = { 'Authorization': sas } req = urllib.request.Request('https://' + HOST + url + "/messages/devicebound?api-version=2015-08-15-preview", headers = headers, method = 'GET') print('---------------------------------') with urllib.request.urlopen(req) as f: # Process headers print(f.info()) etag = f.info()['ETag'].strip('"') print(etag) # message print(f.read().decode('utf-8')) if messageid == None: exit() headers = { 'Authorization': sas, 'If-Match' : etag } url = 'https://' + HOST + url + "/messages/deviceBound/" + etag + "?api-version=2015-08-15-preview" req = urllib.request.Request(url, headers = headers, method = 'DELETE') print('---------------------------------') with urllib.request.urlopen(req) as f: print(f.code)
Stepan
- Proposed as answer by Gary Liu - MSFTModerator Monday, January 18, 2016 8:16 AM
Hello Stepan
At your code: which SAS token should be used?
The SAS Token generated for the IOT Hub in general? or the SAS Token generated for the specific device?
After running your code I receive this error:
urllib.error.HTTPError: HTTP Error 401: Unauthorized
Any idea's?
Kind Regards
Nathan Hofstee | https://social.msdn.microsoft.com/Forums/en-US/8f9b33ef-358e-470d-9364-876c37849491/pythonazure-iot-hub-iot-hub-complete-and-reject-a-cloudtodevice-message-getting-error-412?forum=opensourcedevwithazure | CC-MAIN-2020-40 | refinedweb | 634 | 52.26 |
Simplest solution for more information of how to convert TGZ file into pdf with the help of TGZ to PDF Converter.
how to convert tgz file into pdf
Fastest solution for user to know how to convert Zimbra to Lotus Notes with all details and attachment.
how to convert zimbra to lotus notes
MBOX to PDF Converter Online is one of the best solutions that can easily convert MBOX to PDF online embedded all emails ...
mbox to pdf converter online
IBM Notes 9 Convert to PDF Tool to export emails from Lotus Notes to PDF is handy, reliable & simple to access application.
ibm notes 9 convert to pdf
Mac Mail Export to PDF Tool to batch export MBOX files to PDF with attachments in an exact form.
Zimbra webmail export emails to PST, PDF, MSG, EML and MBOX using Zimbra Export Tool with 100% accurate outcome.
zimbra webmail export emails
Get the tool to import TGZ file into Thunderbird with all meta details.
import tgz file into thunderbird
If you are facing IncrediMail to Mac Mail Conversion trouble, then try IncrediMail to Mac Mail Converter Software that could ...
convert incredimail to mac mail , incredimail to mac mail converter
Looking tool for how TGZ open in Windows with high speed? So, the most popular software Zimbra Converter is one of the excellent ...
Get a solution to know more about free software to convert TGZ to PDF with the help of Zimbra Exporter.
free software to convert tgz to pdf
How free software open TGZ files successfully? I think you were already faced a lot of problem during the process of how ...
free software open tgz files
Instantly migrate Zimbra open source mailbox to PST, PDF, MSG, EML, MBOX and NSF with Zimbra Migration Tool.
migrate zimbra open source
MBOX 2 PDF Converter is Windows based program which can support all the latest versions like Win 10, Win 8.
print mbox file to pdf file , mbox 2 pdf , convert mbox emails in pdf
Zimbra Backup Restore Account Tool to restore Zimbra mailbox backup to PST, PDF, MSG, EML, MBOX & NSF with contacts, notes, ...
zimbra backup restore account
Filter: All / Freeware / Shareware / Mac / Mobile | http://freedownloadsapps.com/information-management/book-collection-managers/ | CC-MAIN-2018-22 | refinedweb | 361 | 66.88 |
28 November 2008 16:44 [Source: ICIS news]
LONDON (ICIS news)--The European Chemicals Agency (ECHA) has not had to resort to its Reach pre-registration fall-back plan yet but said late on Friday that it was prepared to launch the system on 1 December if needed. (Details can be found on this web site)
The final pre-registrations under the EU’s new chemicals control scheme are pouring in - 2,000 bulk pre-registrations were in the system earlier on Friday. Each could hold as many as 500 separate substance or legal entity pre-registrations.
The number of pre-registrations has topped 2m for 50,000 substances - at the outset the EU was expecting something like 200,000 pre-registrations for 30,000 chemicals.
Reach, the registration, evaluation and authorisation of chemicals scheme, is a behemoth that few in the EU legislature could have imagined.
The pre-registration system - which has been in operations since late June - has struggled under the weight of use. Even the world’s largest chemical company, BASF, said this week that it was having difficulties in the run-up to the 1 December pre-registration deadline.
Companies do not simply have to register substances under Reach. Each of their legal entities operating in or importing into the EU have to have their product base covered by the new rules.
For a large company with many legally separate operations this can be particularly burdensome.
The focus for the past few months, at least as far as Reach is concerned, has been on the difficulties companies have faced in trying to pre-register often large product portfolios.
The Reach IT system was not in operation for the first weeks of the six month pre-registration period.
Organisations such as the ?xml:namespace>
Reach IT capacities have been increased significantly, the ECHA says, but its own data give some idea of just how overloaded the system has become.
It has a fallback system that would allow registrants to contact the ECHA with their submissions without having to use Reach IT, but it has not been put into operation yet.
The ECHA cannot move the 1 December deadline as it is enshrined in law. So companies wishing to report cannot do anything other than continue to try to do so. They are going to have to work hard to meet their legal obligations.
But the sector and downstream users of chemicals should not be too focused on this first phase of Reach.
An ongoing series of deadlines will see the EU’s new system of chemicals control really come into its own.
'No registration, no market' applies to any seller or maker of chemicals in the EU. From the start of next month the more important registration phase of Reach begins for products produced or sold in volumes of more than 1,000 tonnes.
Over a two-year period, registration dossiers will be prepared on many chemicals, by large and small groups of producers and others.
Overall this will be a costly process involving registration fees paid to the ECHA, administration, general scientific and toxicology work. Companies will share some data, but not others, and confidentiality at this stage becomes a critical issue.
Simply the fees and charges paid to the ECHA per registration could be as high as €30,000. Following registration comes the evaluation phase of Reach which potentially might involve toxicological testing, including animal testing, on a large scale.
The cost of Reach to chemicals firms will rack up to about €4bn, of which €3bn will be for experimental and toxicological studies, BASF believes.
The chemicals giant expects its Reach costs to peak around 2013 but is seeking to keep those costs down by using whatever expertise it can tap into to derive toxicological data.
For Reach participants the next stages are potentially the most difficult to implement and the most costly.
Pre-registration has been popular, if that is a word that can be applied to this particularly onerous and frustrating process, because it is cheap.
From now on Reach becomes much more costly and more difficult to navigate.
Companies will need advice and ideas on how best to streamline registration processes - through the substance information exchange forums (SIEFs) themselves - and how best to minimise the burdens of toxicological testing.
The Reach process has not been easy to understand or employ to date but is about to get a great deal more difficult.
The running total of Reach pre-registrtions and other data is shown in the ECHA home | http://www.icis.com/Articles/2008/11/28/9175632/insight-now-reach-costs-really-start-to-rise.html | CC-MAIN-2014-10 | refinedweb | 756 | 50.36 |
score:16
This error typically happens if you're accidentally committing
node_modules to your project's Git Repostiory.
Could you try to do the following?
- Ensure all changes have been committed and you have a clean directory.
- Run
rm -rf node_modules(or delete the folder on Windows).
- Run
git add -Athen
git commit -m "Remove all module files".
- Add
node_modulesto your
.gitignorefile (and save).
- Run
git add -Athen
git commit -m "Update ignored files".
- Verify your directory is completely clean via
git status.
- Then, run
git push. This deployment should work on Vercel.
- Finally, re-run
npm ior
yarndepending on your package manager to get your local copy working.
score:0
What about creating a
.gitignore file, and adding the .next folder to it ?
score:0
How I resolved the missing module error on Vercel.
- install the package explicitly so that it is present in your
package.json
- then import the supposed missing module into the app and use it.
For Example (just a scenario) // lets assume
lodash is said to be the missing module,
1 Make sure it is present in your package.json
"dependencies": { // some dependencies ... "lodash": "^4.17.20", // some other dependencies ... },
2 Import and use it in your app (usually, I just console.log the import in a non-production env.)
import LODASH from 'lodash' if (process.env.NODE_ENV !== 'production') console.log(LODASH)
score:0
I created the folder in lowercased, then, renamed it in capitalized, updated all the imports, but, for some reason, Github didn't update the folder name when I pushed the changes. I needed to renamed with a different name. It worked.
score:0
For me it was a problem with that specific package, when I looked for it in my package.json and under node_modules i couldn't find it. Even though it was working in local builds somehow.
score:0
I added a NODE_ENV="production" environment variable in vercel which hosed everything for me. Once I removed it, things recovered.
score:0
If the program runs normally by executing
node_modules/next/dist/bin/next, you should suspect that the symbolic link of the file is broken.
In my case, it occurred during AWS deployment, and it occurred in the process of compressing the files for deployment.
So, I was able to solve the problem by adding the symlinks option during compression as shown below.
zip -r --symlinks xxxx
If it is deployed on a server such as AWS, like me, download the actually distributed program and Check the node_modules/.bin/next file. If the symbolic link is broken, you will need to find and fix the cause of the broken link during the deployment process.
cf)
score:0
- Delete package-lock.json (rm package-lock.json)
- Delete node_modules (rm -R node_modules)
- Switch versions of Node, which is easy if you have Node installed via NVM (nvm install 17, nvm use 17)
- Install dependencies again with new version of node (npm install)
I ran into this issue on a server running node 16.15.0 LTS, On my local machine node v16.12.0, and on another server running node v12.22.10 and it was not giving the error.
Took a look at my dependencies and decided to switch to Node 17.
devDependencies": { "@types/node": "17.0.23", "@types/react": "17.0.43",
After following the steps above and using Node 17 code ran successfully, and no more error.
score:1
in my case it looks like something to do with
yarn and the next dependency i.e. inside
node_modules/next/dist/bin/next having conflicts information about something.
never quite understand why after using
next & building our code into production we still have to rely on the (relatively) heavy module
the whole notion of doing build is supposed so that it becomes independent of the build tools.
score:1
I tried all of the above problems and nothing works.
The problem got solved when I changed the version of next.js. In case, someone is searching for a solution and nothing works...
score:2
It seems like I have run into the same error.
The strange thing is that I have been building on Vercel all weekend without any problems, and it only started failing after I added Tailwind CSS to my project.
The first build with the Tailwind CSS addition succeded but styling was not loaded.
You can still see the result at.
The local build with "vercel dev" still runs perfectly.
See the repository at
Error from Build logs:
22:28:35.104 Running "npm run build" 22:28:35.287 > [email protected] build /vercel/6ddf29b8 22:28:35.287 > next build 22:28:35.328 internal/modules/cjs/loader.js:983 22:28:35.329 throw err; 22:28:35.329 ^ 22:28:35.329 Error: Cannot find module '../build/output/log' 22:28:35.329 Require stack: 22:28:35.329 - /vercel/6ddf29b8/node_modules/.bin/next 22:28:35.329 at Function.Module._resolveFilename (internal/modules/cjs/loader.js:980:15) 22:28:35.329 at Function.Module._load (internal/modules/cjs/loader.js:862:27) 22:28:35.329 at Module.require (internal/modules/cjs/loader.js:1042:19) 22:28:35.329 at require (internal/modules/cjs/helpers.js:77:18) 22:28:35.329 at Object.<anonymous> (/vercel/6ddf29b8/node_modules/.bin/next:2:46) 22:28:35.329 at Module._compile (internal/modules/cjs/loader.js:1156:30) 22:28:35.329 at Object.Module._extensions..js (internal/modules/cjs/loader.js:1176:10) 22:28:35.329 at Module.load (internal/modules/cjs/loader.js:1000:32) 22:28:35.329 at Function.Module._load (internal/modules/cjs/loader.js:899:14) 22:28:35.329 at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:74:12) { 22:28:35.329 code: 'MODULE_NOT_FOUND', 22:28:35.329 requireStack: [ '/vercel/6ddf29b8/node_modules/.bin/next' ] 22:28:35.329 } 22:28:35.331 npm ERR! code ELIFECYCLE 22:28:35.331 npm ERR! errno 1 22:28:35.332 npm ERR! [email protected] build: `next build` 22:28:35.332 npm ERR! Exit status 1 22:28:35.332 npm ERR! 22:28:35.332 npm ERR! Failed at the [email protected] build script. 22:28:35.332 npm ERR! This is probably not a problem with npm. There is likely additional logging output above. 22:28:35.336 npm ERR! A complete log of this run can be found in: 22:28:35.336 npm ERR! /vercel/.npm/_logs/2020-06-21T20_28_35_332Z-debug.log 22:28:35.342 Error: Command "npm run build" exited with 1
score:3
I had the same issue. In my github desktop I noticed that a file that was capitalized in the editor was not in the github desktop. Fixed the spelling to match what was showing on github and the project built successfully.
score:8
This answer worked for me:
TL;DR; update
git cache:
git rm -r --cached . git add --all . git commit -a -m "Versioning untracked files" git push origin master
score:9
I'm having this exact same issue. I think it may be an internal issue with Vercel's deployment infrastructure. Notice the line it is failing on:
Error: Cannot find module '../build/output/log' 20:43:24.967 Require stack: 20:43:24.967 - /vercel/5ccaedc9/node_modules/.bin/next 20:43:24.967
My issue started yesterday, quite unexpectedly -- i.e. with a very simple commit. In my case, previously successful deploys also fail. Likewise, deleting the project and starting over did not help. I am in communication with Vercel support but they have not yet acknowledged the problem is on their end yet or offered any kind of solution.
score:15
I had to edit my
package.json to use the
next binary that ships in the
node_modules/next directory:
"scripts": { "start": "node_modules/next/dist/bin/next start -p $PORT" }
Not the most elegant fix but it works.
Source: stackoverflow.com
Related Query
- How to fix Next.js Vercel deployment module not found error
- How to fix this error : " Module not found :can't resolve popper.js "
- How to resolve module not found error in webpack/reactjs app?
- How to Fix "export 'React' (imported as 'React') was not found in 'react'" Error in React js
- how to solve the error that fs module is not found when used react and next.js
- how to fix 404 not found error nginx on docker?
- How to solve not found error in heroku after deployment
- How to fix error "Failed to compile : ./node_modules/@react-leaflet/core/esm/path.js 10:41 Module parse failed: Unexpected token (10:41)"
- How to fix "TypeError: fsevents is not a constructor" react error
- How to fix - Module not found: Can't resolve '@babel/runtime/helpers/objectWithoutPropertiesLoose'
- How to fix `TypeError: document.createRange is not a function` error while testing material ui popper with react-testing-library?
- How to fix error 'FB' is not defined no-undef on create-react-app project?
- How to fix '_react["default"].memo is not a function. (In '_react["default"].memo(connectFunction)' error in React native?
- Module not found Error when deployed on Heroku
- Next JS npm start app load 404 page not found error for physical pages
- How to fix React Native error "jest-haste-map: Haste module naming collision"?
- How to fix an error "CODE NOT FOUND" in Vercel?
- Firebase-Admin, importing it to react application throws Module not found error
- How to fix the error 'alpha' is not exported from '@material-ui/core/styles' when using Skeleton in Material UI
- How to fix ' "X" is not defined no-undef' error in React.js
- react-scripts start error (Cannot find module '../scripts/start') - how can i fix this?
- Module not found error using Yarn 2 to link React components
- What causes the typescript Module has no exported member .ts(2305) error and how do you fix it?
- Error in Entry Module not found - in Webpack Config file
- How to fix an error "Prop `className` did not match. Server: "MuiFormLabel-root-75...."?
- How to create React App including Web3 using create-react-app? I am getting Module not found Error. BREAKING CHANGE: webpack < 5 used
- How to fix "TypeError: categories.map is not a function" error in React
- How to fix "dispatch is not a function" error
- How to fix the type error Type '(string | Element) [] is not assignable to type 'string | Element | undefined' using react and typescript?
- How to fix Module not found: Can't resolve '@heroicons/react/solid' in react app?
More Query from same tag
- How to add background image on a material ui Dialog component
- Correct way to rerender my React component?
- How to use recompose's toClass HOC to add a ref to a functional component?
- Compare form component values
- How to import js file in react component?
- How to focus trap with styled-components? How to access classname from styled-components?
- React-Native Image Invalid prop 'source' supplied to Image
- Rendering more than 1000 markers in react-leaflet takes ages and user experience is horrible, need to improve my approach
- lazy load for dynamic component test coverage
- How to Add input fields in form when "Other" option is selected in dropdown in React.js
- Not Redirection on a component that contains id parameter on Production React js
- How can i clear the chat input-box after sending the message(from suggestion) in botframework webchat?
- React CSSTransitionGroup deleted item shifted to end
- React JS, unable to redirect component
- Detecting Firebase user change with useEffect vs onAuthStateChanged - what's the difference?
- Change the position of the buttons
- react-router Link changes url but Route component doesn't render
- What is wrong in the below code of input validation?
- how to setState with a file object
- Conditional Destructuring of a JS Object for Gatsby / React SSR Build
- Why is a ReactJS component using Hooks rendered once or twice depending on developer console is open or not?
- React Router cannot GET /route only after deployed to Heroku
- REACT: How to modify styles from another component
- Unexpected token punc «(», expected punc when creating chunk from UglifyJS
- Why does does my JS external files not loading into my react page?
- Language object prop to pass in the react component
- handling click on `ButtonGroup` in react-bootstrap
- Font awesome icons are running perfectly in local but not in production in next js
- How to stop a carousel sliding when reaches the last item?
- Keeping placeholder on react-select | https://www.appsloveworld.com/reactjs/100/8/how-to-fix-next-js-vercel-deployment-module-not-found-error | CC-MAIN-2022-40 | refinedweb | 2,091 | 59.6 |
I've just finished working on a new How Do I video for Visual Studio Extensibility on the topic of T4 Code Generation [Update: the video is now live at How Do I: Create And Use T4 Templates?]. T4, or Text Template Transformation Toolkit, is the free code generation engine from Microsoft that underpins their Domain Specific Languages and Software Factories toolkits. T4 is usually used only to generate code from the models in your DSL, but it's a pretty rich code generation engine in it's own right. I've played a lot over the years with CodeSmith and MyGeneration, so I've been meaning to play with T4 for a while. What's nice about the new version is that it is now built into Visual Studio 2008 - no need for the SDK to get it any more.
If you're interested in finding out more about how T4 works, and to get some great samples, check out Oleg Sych's blog.
One thing I noticed when working with T4 though, is that while the ".tt" file extension works in Visual Studio, there is no item template when you Add .. New Item to your project. I've put 2 Item Templates together to overcome this - one for a simple T4 template and one for an SMO Database-driven template (thinking of replacing the code generator for LINQ To SQL?). You can download the simple T4 C# item template here and the T4 C# Database item template here. To use the templates, simply drop the files into [My Documents]\Visual Studio 2008\Templates\ItemTemplates\
By the way, if you're working with and creating T4 templates, make sure you check out the great free T4 Editor from Clarius Consulting.
Pingback from Oleg Sych - » Visual Studio Templates for T4
You've been kicked (a good thing) - Trackback from DotNetKicks.com
Did you know there's T4 ( Text Template Transformation Toolkit ) support inside VS2008 now? Add a file
Pingback from Entity Framework Stored Procedure Generation | David DeWinter
Just lately there seems to be a veritable feast of new blog stuff about VSX (if you don't mind me switching
Pingback from David DeWinter » Blog Archive » Entity Framework Stored Procedure Generation
In your database example, I wasn't able to see the namespaces:
Microsoft.SqlServer.Smo
Microsoft.SqlServer.ConnectionInfo
Is that because I only have SQL Server Express on this PC?
I'm not 100% sure if it's available in the full version only, but it seems SMO is part of, or available from Express as well. Check out and
Pingback from Experimental LINQ to SQL template » DamienG
I posted earlier about how to use enums in LINQ To SQL , and I spoke about why I think enums are useful
Hilton,
is there a way in T4 templates to separate out generation into separate .cs files? E.g. could I have a "NorthwindManager" (or NorthwindDataContext) and have a separate .cs file for each table?
Marc
Hi Marc,
This is definitely possible. You would basically be invoking the System.IO classes directly and saving the output to a filestream. | http://dotnet.org.za/hiltong/archive/2008/02/18/t4-template-items.aspx | crawl-002 | refinedweb | 518 | 58.32 |
In programming, decisions can be one, two, or multi-branched. Java's
switch statement is the most suitable construct for multi-branched decisions. An
if statement causes a branch in the flow of a program's execution. You can use multiple
if statements, as shown in the previous section, to perform a multiway branch. This is not always the best solution, however, especially when all of the branches depend on the value of a single variable. In this case, it is inefficient to repeatedly check the value of the same variable in multiple
if statements.
A better solution is to use a
switch statement. A
switch statement starts with an expression whose type is an
int,
short,
char, or
byte. Here is the syntax of a switch statement:
switch (int-or-char-value) { case label_1: // statement sequence break; case label_2: // statement sequence break; . . . case label_N: // statement sequence break; default: // default statement sequence }
The
default clause is optional in a
switch construct. Hence, you can omit
default if this is not required. Execution in a
switch, starts from the the
case label that matches the value provided in
switch's parentheses and continues until the next
break is encountered or the end of the
switch. If none of the
case labels match, then the
default clause is executed, if it is present.
Note that
default need not be the very last delegate of statements, it can be placed anywhere in the body of switch. But, while doing so never forget to place a
break after
default block of statements. If the
default is the last thing to do in
switch's body then of course, no
break is needed because the body of switch gets ended right after the
default.
Let's implement determining if an alphabet is vowel or consonant by the
switch construct.
//Demonstrates switch-case and break public class ControlFlowDemo { public static void main(String[] args) { Scanner in = new Scanner(System.in); System.out.print("Enter an alphabet (a-z or A- Z): "); char ch = in.next().charAt(0); switch (ch) { default: System.out.println(ch + " is consonant."); break; case 'a': case 'e': case 'i': case 'o': case 'u': case 'A': case 'E': case 'I': case 'O': case 'U': System.out.println(ch + " is vowel."); } } } OUTPUT ====== Enter an alphabet (a-z or A- Z): A A is vowel.
If you look at above program carefully, you will observe that it takes input from the user. We would input one character long string and then assign its first character to variable
ch. This
ch then passed to
switch to check whether it is a vowel or consonant. Inside
switch block
default is placed first to demonstrate that
default can be placed anywhere in the
switch block, provided that
break is used appropriately after
default section.
Java's control flow statements define two jump statements to jump from one place to another by defying the natural flow of control. These statements are
break and
continue. Java also has reserved
goto keyword, but this is not currently used. As you have seen in above example,
break stops the execution at the place it is executed and gets control out of the block. In above example,
break has been used in conjunction with
switch statement, but it is also used to abruptly terminate a
do or
for or
while loop.
In this tutorial we discussed Java's
switch,
case,
default and
break | http://cs-fundamentals.com/java-programming/switch-case-default-break-statements.php | CC-MAIN-2017-17 | refinedweb | 570 | 63.8 |
These restrictions reflect the limited display capabilities of the target devices. Remember that you're working with an abstraction of the mobile device application. Your application is rendered differently for each target device. The Mobile Internet Toolkit allows you to concentrate on the functionality you want to deliver, without worrying about the specific markup language a particular device requires. Mobile device capabilities differ substantially in characteristics such as color support or screen size. Consequently, the visual representation of the controls you place on a mobile Web Forms page signifies the intended functionality, not the exact appearance.
Within a mobile Web Forms page, you might have one or more mobile Form controls. The wizard creates a single Form control in your application, which you can see in the Design window when you create a new project. Figure 3-11 shows what this mobile control looks like.
Figure 3-11 The mobile Form control within a mobile Web Forms page
You can use the Form control to group other standard controls and contain them. A Form control is the outermost container for other controls in a mobile page. You can't nest a mobile Form control within another Form control; however, a mobile Web Forms page can contain multiple Form controls.
Don't make the mistake of thinking of a Form control as a single, rendered page on a target device. From the developer's perspective, it's more accurate to describe a Form control as a container for a named, logical grouping of controls. In fact, a single Form control can result in one or more display screens on the target device. The Form control can be set to paginate the output so that the data sent for each page doesn't exceed the limitations of the receiving device. For example, if you've placed a large number of controls inside a Form control or a control that is capable of displaying a large amount of output, such as the TextView control, the output from those controls can end up being displayed on different display pages on smaller devices. You'll learn more about pagination in Chapter 5.
Positioning Controls on Web Forms
Unlike standard ASP.NET Web Forms, which you use to build applications for desktop browsers, the Mobile Web Designer, used to lay out mobile Web Forms, doesn't offer a grid for placing mobile controls. Instead, the Mobile Web Designer lets you position controls only from the top down. To illustrate this, this section shows you how to create a new project with a selection of controls.
If you still have Visual Studio .NET open to the MyFirstMobileApp solution, click the File menu. Click Close Solution to close your previous project, and save any changes if prompted.
Create a new mobile project the same way you did earlier, but name this one TestControls. The wizard creates the project at the location you specified and opens the newly created mobile Web Forms page, MobileWebForm1.aspx, with the cursor positioned on the new Form control. Now click the Toolbox tab to open it, and drag two Label controls and a TextBox control onto the Form control.
If you click any of the controls you've just placed on the Form control, or on the Form control's title bar, some small squares will appear at the four corners and in the middle of each side. Those of you familiar with Visual Studio will recognize these squares; they indicate anchor points that you can click with the mouse and then drag to resize the control. However, this isn't possible with mobile Web Forms controls. Remember that mobile controls are just design objects that enable you to create the functionality of an application. A mobile control's actual appearance differs from one type of target device to another, and some of the more complex controls might differ in appearance substantially. In this context, resizing controls on a design palette has no relevance.
Mobile devices tend to have very small displays, and the scope for artistic expression on your user interface is unfortunately very small. The Mobile Internet Toolkit's main purpose is to make it easy to build applications that run on mobile devices using various client browsers. Developers of ASP .NET applications targeted at desktop browsers using full Web Forms work with the visual appearance of the form in mind. Mobile Internet developers concentrate more on the functionality of the application andwith some exceptions, as you will see in Chapter 8leave the presentation to the runtime and the target browser.
Many wireless developers are already familiar with this idea. In general, mobile phone displays don't have a screen size that allows complex layouts or a mouse-like navigation device.
When designing a mobile Web Forms page, you can use the mouse to drag controls to a new location within the Form. If you want to move a control above an existing one, you must drop it immediately to the left of or above the existing control. To position a control below an existing one, you must drop it just to the right of or below the existing control.
To introduce you to structuring your application into multiple Form controls, this discussion shows you how to build a simple application that uses three forms. To do so, execute the following steps:
Every time you add a new control to a Web Forms page, Visual Studio .NET assigns it a name consisting of the control type followed by a numeric suffix, such as Form1 or Form2. Many developers prefer to change these IDs to names that are more meaningful and that indicate the control's function within the application. For example, you might name a Label control that displays a city name CityName. Think back to the two Link controls used in our sample application. Meaningful names for these two controls could be LinkToForm2 and LinkToForm3. Such a name immediately indicates the purpose of the control. In a real application, the Form controls would also have meaningful names describing their purpose.
As you'll see in Chapter 4, you'll frequently write code that will access the properties and methods of controls. If you use meaningful control names, your application code will be more precise, readable, and clear.
That's it! Now click the Start button in the Visual Studio .NET toolbar to build and run your application in your chosen browser.
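For reference, the three-form page you've just assembled corresponds roughly to the following server control syntax, which is discussed later in this chapter in the section on the HTML view. The control IDs, link text, and label text here are illustrative — yours will reflect whatever names and captions you chose:
<mobile:Form id="Form1" runat="server">
    <mobile:Label id="Label1" runat="server">Main menu</mobile:Label>
    <mobile:Link id="LinkToForm2" runat="server" NavigateUrl="#Form2">Go to the second form</mobile:Link>
    <mobile:Link id="LinkToForm3" runat="server" NavigateUrl="#Form3">Go to the third form</mobile:Link>
</mobile:Form>
<mobile:Form id="Form2" runat="server">
    <mobile:Label id="Label2" runat="server">This is the second form</mobile:Label>
</mobile:Form>
<mobile:Form id="Form3" runat="server">
    <mobile:Label id="Label3" runat="server">This is the third form</mobile:Label>
</mobile:Form>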
In a desktop browser, users can rely on the built-in Back button to retrace their steps. When it comes to mobile devices, however, experienced wireless developers know that backward navigation support isn't built into all browsers.
Fortunately, using the Mobile Internet Toolkit saves you from having to worry about such idiosyncrasies. The Mobile Internet Controls Runtime delivers the required markup to each of the supported client devices to ensure that backward navigation is always available. Application developers can concentrate on the functionality of the application, knowing that it will behave consistently on supported client devices.
You can enhance the usability of certain applications by employing Link controls to deliver more explicit backward navigation, rather than relying on the default implementation.
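For example, a Link control like the following, placed on the second form, gives users an explicit route back to the first form regardless of whatever backward navigation the browser itself offers (the ID and text here are illustrative):
<mobile:Link id="LinkBackToMain" runat="server" NavigateUrl="#Form1">Back to main menu</mobile:Link>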
There's one other consideration of standard navigation options you should be aware of: Internet Explorer and other major desktop browsers offer a Forward navigation button that enables the user to return to a page from which they just backed out. However, mobile browsers don't offer this option. Small browsers can't retain such a detailed record of a user's navigation. Whenever a user leaves a page via backward navigation, the browser removes any references to that page from its history, keeping no record that the user ever visited the page. Consequently, a mobile user can't undo a backward navigation by accessing a built-in forward function.
As an example, create an application that allows a user to enter the preferred date for an appointment: start a new mobile Web project, drag a Calendar control from the Toolbox onto the Form control, and build the project.
If you run this application with Internet Explorer, with Pocket Internet Explorer on a Pocket PC, or on a mobile phone, the difference in appearance will be quite striking. Figures 3-14 and 3-15 show this difference. Internet Explorer and Pocket Internet Explorer render this appointment application as a calendar grid. But on a mobile phone, the appearance is quite different. Clearly, a grid isn't possible on such a small display; instead, the user either types in a date directly or steps through a number of selection options to choose the desired date.
Figure 3-14 The Calendar control in Internet Explorer and Pocket Internet Explorer
Figure 3-15 The Calendar control on a mobile phone
Despite the obvious differences in appearance, the Calendar control's functionalityits ability to select a dateremains unchanged, regardless of the mobile device you use to access it. Sophisticated controls like this handle the details of delivering functionality to the user so that you don't have to waste valuable time worrying about it. That's not to say you can't dictate the appearance of controls on different platforms. (You can, as you will see in Chapter 8.) However, you might find a control's default rendering appropriate for many applications.
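If you want to see the markup behind such a page, the Calendar control appears in server control syntax just like any other mobile control. A minimal sketch, with illustrative IDs and prompt text, looks like this:
<mobile:Form id="Form1" runat="server">
    <mobile:Label id="Label1" runat="server">Choose a preferred appointment date:</mobile:Label>
    <mobile:Calendar id="Calendar1" runat="server"></mobile:Calendar>
</mobile:Form>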
You can access the Visual Studio .NET Help documentation with the familiar Contents, Index, and Search options by clicking the Help menu. When searching the full Help library, you can apply filters so that only topics relating to your area of interest appear. For example, you can search within "Visual C++ and related," "Visual Studio .NET," or ".NET Framework." By default, when you select any Help topic Visual Studio shows the Help description in the main IDE window. You can change this to a floating window by changing the Help configuration options. To do so, click Tools and then click Options. Open the Environment folder, and click Help. Then select the External Help option.
Visual Studio .NET also includes a number of other Help features that give you the information you need to program effectivelyfor example, Dynamic Help. By default, the Dynamic Help window appears in the same area of the screen as the Properties window. You can access the Dynamic Help window by clicking the Dynamic Help tab at the bottom of the Properties window. If the Dynamic Help tab isn't visible, you can make it appear by selecting the Dynamic Help option on the Help menu.
The Dynamic Help window continuously tracks the actions you perform, the position of the cursor, and the object or objects that are currently in focus. Dynamic Help presents topics relevant to the actions you perform, as you perform them. For example, if you open a new project, the application creates a mobile Web Forms page for you containing a Form control, which comes into focus. One of the top entries the Dynamic Help window shows at this time is Introduction To The Form Control. If you click a Label control that you've placed onto the Form, the Dynamic Help window updates to display entries relating primarily to the Label control.
In addition to the Dynamic Help window, Visual Studio .NET offers many other Help features that are less obvious. For example, when entering code, the text editor uses a squiggly red underline or other language-specific highlighting to mark any syntax errors you enter. You can also position your editing cursor on any identifiable objectsuch as a class name, method, property, or language keywordand press the F1 key. This causes the Help system to automatically display the appropriate Help topic.
Let's implement the "Hello World!" project that you created earlier in the chapter, in the section "Creating Your First Mobile Web Applications," as a single file of ASP.NET code. To do so, use a text editor to write the following code:
<%@ Register TagPrefix="mobile"
    Namespace="System.Web.UI.MobileControls"
    Assembly="System.Web.Mobile" %>
<mobile:Form runat="server">
    <mobile:Label runat="server">Hello World!</mobile:Label>
</mobile:Form>
Save the code as SimpleSolution.aspx, and place it into the root directory of IIS (in \inetpub\wwwroot). If you request this file's URL from a browser, you'll see an application giving the same result as MyFirstMobileApp (in which you first implemented the "Hello World!" project).
What's the connection between this simple solution and the Visual Studio .NET project? The connection becomes clearer if we take a closer look at the application created in Visual Studio .NET.
Open the MyFirstMobileApp project that you created earlier and open the Default.aspx mobile Web Forms page. On the taskbar at the bottom of the design window, you'll see two view options: Design and HTML. Select the HTML view (shown in Figure 3-16). You'll now see text that resembles the single file solution shown just a moment ago.
Figure 3-16 The Default.aspx mobile Web Forms page in HTML view
The Design and HTML views offer alternate ways to see the same file. In the Design view, you position visual representations of the mobile controls onto a mobile Web Forms page. However, when you save the Default.aspx file, you're actually saving a text file in ASP.NET syntax, which is the text shown when you select the HTML view. In fact, Source view might be a more appropriate name than HTML view. But Microsoft uses the latter because ASP.NET is a development of ASP and ASP developers are familiar working with the HTML view. Clearly, static HTML has no relevance for any applications that you build using the Mobile Internet Toolkit, since you must build Web pages solely with the mobile server controls to generate markup for clients that require HTML, cHTML, or WML markup.
If you examine the code shown in the HTML view, you'll see the following syntax in the middle of the text:
<mobile:Form id=Form1 <mobile:Label id=Label1Hello World! </mobile:Label></mobile:Form>
Apart from the addition of the id attributes, this syntax is virtually identical to the SimpleSolution.aspx code just shown. An Extensible Markup Language (XML) element represents each control that you place on a mobile Web Forms page. The opening tag for the mobile control is <mobile:Form >, and the closing tag is </ mobile:Form>. You can see the text representing the Label control enclosed within the Web Forms tags. You write this textual representation of XML visual elements in something called ASP.NET server control syntax.
If you add controls to the Form control using the GUI designer, the designer will simply add lines of text in server control syntax within the Form tags. The properties of controls that you set through the Properties window can appear here as text positioned between a control's opening and closing tags. For example, the Text property of the Label control, which you set to "Hello World!" would look like this:
<mobile:Label id=Label1Hello World!</ mobile:Label>
The other way to represent properties in server control syntax is as XML attributes, which assign values to identifiers within a control's opening tags using the form property-name=value. For example, you can set a control's ID property through an attribute:
<mobile:Form id=Form1
The XML text, which represents the mobile controls, lies within the body of the document and is enclosed by the <body> and </body> tags. The three </meta> tags are additional metadata that Visual Studio .NET uses only at design time.
Note that the SimpleSolution.aspx file contains only the source code representing the mobile controls. This is sufficient. When your application runs, ASP.NET doesn't require the </meta> tags.
The two lines of code at the top of the HTML view in Figure 3-16 deserve more explanation. These lines are ASP.NET page directives, which means they specify settings that ASP.NET compilers must use when processing the page.
The first line of the mobile Web Forms page that Visual Studio .NET generates reads
<%@ Register TagPrefix="mobile" Namespace="System.Web.UI.MobileControls" Assembly="System.Web.Mobile" %>
You can find the same code in SimpleSolution.aspx, although in that file it's split among the first, second, and third lines for readability. This syntax simply tells the ASP.NET runtime that when it compiles the page for display, any server controls tags using the prefix mobile (such as <mobile:Form> and <mobile:Label>) represent controls found in the System.Web.UI.MobileControls namespace, within the System.Web.Mobile assembly. (An assembly is the .NET name for a compiled file containing executable code, similar to an .exe or a .dll file. The System.Web.Mobile.dll assembly contains the Mobile Internet Controls Runtime and all the mobile Web Forms controls.)
The @ Page directive in SimpleSolution.aspx uses different attributes than the similar directive in the file that Visual Studio generates. The @ Page directive defines page-specific attributes that the ASP.NET page parser and compiler use. You can include many of these directives in your own code. Here are some of the more important ones and their meanings:
Reproduced from Building .NET Applications for Mobile Devices by permission of Microsoft Press. ISBN 0735615322, copyright 2002. All rights reserved.
Please enable Javascript in your browser, before you post the comment! Now Javascript is disabled.
Your name/nickname
Your email
WebSite
Subject
(Maximum characters: 1200). You have 1200 characters left. | http://www.devx.com/dotnet/Article/10478/0/page/2 | CC-MAIN-2018-13 | refinedweb | 2,928 | 55.34 |
This.
Introduction
Cross-List searching is not new. You were able to search across lists within a site in WSS 2.0 although it was slow and tedious to combine the results. WSS 3.0 offers the new GetSiteData method of the SPWeb class which takes the new SPSiteDataQuery class as an argument. This method will return all the results of a cross-list search in one DataTable. Cross-List searching was also available in SharePoint Portal 2003 via the Microsoft.SharePoint.Portal.Search namespace. In MOSS cross-list searching is located in the Microsoft.Office.Server.Search.Query namespace in the Microsoft.Office.Server.Search assembly. Cross-List searches in MOSS can be executed using either the KeyWordQuery or the FullTextSqlQuery classes. These two classes are also available on a WSS 3.0 server via the Microsoft.SharePoint.Search.Query namespace in the Microsoft.Sharepoint.Search assembly. However, even though these classes are functionally equivalent WSS 3.0 does not provide the ability to manage crawled properties (metadata) which you will see is very important when developing custom search solutions.
MOSS
WSS
Namespace
Microsoft.Office.Server.Query
Microsoft. SharePoint
Classes
FullTextSqlQuery or KeywordQuery
SPWeb and SPSiteDataQuery
Syntax
SQL or Keyword
CAML (Collaborative Application Markup Language)
Manage Metadata
Yes
No
Results
DataTable
Result Latency
Results based on last crawl
No latency
The importance of managing metadata
SharePoint metadata represents data that describes or categorizes documents and list items. A user wishing to search for documents in SharePoint will typically use keywords to describe the data they are searching for. Having users memorize all the different keywords that could describe the documents they are searching for makes searching difficult. Providing the user with a choice of metadata to search for is better. MOSS provides the built-in "Search Center". The "Search Center" provides an "Advance Search" where you can search using "Property Restrictions". Here the user is presented a drop down list of built-in metadata or properties. However, this drop down list is not dynamically built but is populated from configuration settings of the "Advance Search" webpart. SharePoint site collections can be dynamic with new lists, document libraries and columns being added daily. A good search solution will provide the user with the most current metadata to choose from. Users should be able to easily understand what the metadata represents and how it categorizes documents and list items. Providing "friendly names" that have meaning within a group of users or a corporation will facilitate searching. Moss has the capabilities to manage metadata via SharePoint 3.0 Central Administration. Unfortunately WSS has nothing. As users add columns to document libraries or content types there is the risk that columns with the same name can be added across sites. The columns may have the same name but represent different things depending on what document library they are in or what content type they belong to if any. Developing strategies to manage metadata will be different depending on which product you use.
Managing metadata for searching in MOSS
In order for your custom search solution to search against all SharePoint crawled properties without having to manually create managed properties, you must configure the crawled property category. In SharePoint Central Administration click on the link below "Shared Services Administration". Go to "Search Settings", "Metadata property mappings", "Crawled Properties","SharePoint","Edit Category".
Under the "Bulk Crawled Property Settings" section make sure the "Automatically discover new properties when crawl takes place" is checked along with the "Map all string properties in this category to the Content managed property" and the "Automatically generate a new managed property for each crawled property discovered in this category" options. Making sure these options are on ensures that managed properties are automatically created when new SharePoint columns are created. Your solution can use these new managed properties to present to the user. Unfortunately, the name of the managed property is not that user friendly. SharePoint crawled properties are prefixed with an "ows_" and the auto generated managed property is prefixed with "ows". For example, if a user creates a new column in a document library called "CustomerName" then the crawled property will be "ows_CustomerName" and the managed property will be "owsCustomerName". If you don't want to display this to your users then you will have to write some code to parse out the real column name and make sure you map it back to the managed property name when constructing your query. Additional parsing may be needed if the column name has spaces in it. For instance if a user creates a column named "Customer Name" then the crawled property will be "ows_Customer_x0020_Name" and the managed property will be "owsCustomerx0020Name".
Other strategies to manage metadata would be to periodically monitor new crawled properties and map them to managed properties manually. This might include restrictions on who can add columns to site collections. However, in large site collections this could become a slow process and prevent users from finding documents they need. Finally, the search solution should provide a way for the user to scope there searches either by selecting MOSS scopes or allowing the user to select certain document libraries.
Managing metadata for searching in WSS
Using WSS to develop a custom search solution requires code to crawl the various document libraries to build a unique list of columns the user can search against. This can present a problem if the columns are the same name but have different data types. For instance, an "Invoice Number" column could be defined twice once as text and another as number in different document libraries. The SPSiteDataQuery class uses CAML and the CAML syntax requires the type attribute to be set to the corresponding SharePoint column data type. Therefore, in order to construct a valid CAML query you would have to present the column twice in the drop down list along with its data type (e.g. Invoice Number (Number)). Users searching for documents using WSS will relate to the display name of columns in SharePoint; therefore, you will want to populate the drop down list with the column's display name. However, CAML requires the column's internal name and your solution will have to map the display name to the internal name to construct valid CAML. Other problems arise when columns are renamed. It is possible that a column with a particular display name may map to multiple columns with different internal names. For instance, two columns are created on different document libraries, "Customer Age" and "Age". After a certain amount of time someone decides that they want to make the column names consistent across document libraries and rename the "Customer Age" column to "Age". The search solution now presents just "Age" in the drop down list. The solution now will have to map "Age" to two columns with the internal names of "Customer Age" and "Age" when generating the CAML query. So if the user selects "Age" and wants to find documents where the "Age" is equal to 25 then two where criteria will have to be generated in the CAML as listed below:
<Where>
<Or>
<Eq>
<FieldRef Name="Customer_x0020_Age" />
<Value Type="Number">25</Value>
</Eq>
<FieldRef Name="Age" />
</Or>
</Where>
The above CAML would solve the problem. Unfortunately, this will return no results. This leads us into the main problem of doing searches in MOSS or WSS, the inability to do "OR" searching. In the next part in this series I will illustrate why you cannot rely on MOSS or WSS to return correct results when doing "OR" logical searching.
Pingback from Sharepoint link love 06-21-2007 at Virtual Generations
That would work...
Carmelo Lisciotto
Thanks for the well written explanation.
I am trying to add some tags to word docs from a pick-list. Ehereafter I want to present the picklist in an advanced search page and I suppose the fields will show up as ows_MyField1 etc.
Question:
How can you remove the ows_ in front from the views?
if i use WSS 3.0 in my custom applicaiton and i store data in SSEE. Is it posible to search the data from SSEE through my custom application using WSS 3.0. I dont want to use MOSS 2007. I would have only WSS 3.0 installed.
Dear ,
i have question , i need to add interactive search field on WSS 3.0 as yahoo search field i.e when i just enter letter s then drop down all the my clients that those name start with "s" such suzan ,sam,....
second question if i need to create serach function on each page or part on share point services 3.0 i.e for eaxmple i have list called client information then when i use search field for this just appear information for the rows in this part. how can i do that????????
can u help me plzzzzzzzzzzzzzzzzz
hi,
i am new to the sharepoint. i need a help to search the document in the site with check the user permissons. iam using window 2003 and wss 3.0 ...any one have idea and solution ..help me
thanks
senthil
Nice Articles
<a href= ></a> | http://www.sharepointblogs.com/smc750/archive/2007/06/20/custom-cross-list-search-development-pitfalls-part-one.aspx | crawl-002 | refinedweb | 1,526 | 54.42 |
Holy cow, I wrote a book!
I got seven out of ten right.
Originally, the term cadence meant
the rate at which a regular event recurs,
possibly with variations, but with an overall cycle that repeats.
For example,
the cadence for team meetings might be "Every Monday,
with a longer meeting on the last meeting of each month."
Project X is on a six-month release cadence, whereas
Project Y takes two to three years between releases.
Q: What was the cadence of email requests you sent out to drive
contributions?
A: We started with an announcement in
September, with two follow-up messages in the next month.
Q: What was the cadence of email requests you sent out to drive
contributions?
A: We started with an announcement in
September, with two follow-up messages in the next month.
In what I suspect is a case of
I want to use this cool word other people are using,
even though I don't know exactly what it means,
the term has been applied more broadly to mean
schedule or timeline,
even for nonrecurring events.
Sample usage:
"What is our cadence for making this available outside the United States?"
Some.
BX
CX
n
BX:CX.
A customer wanted help with monitoring the lifetime of an
Explorer window.
We want to launch a copy of Explorer to open a specific folder,
then wait until the user closes the folder before continuing.
We tried launching a copy of Explorer with the folder on the
command line, then doing a WaitForSingleObject
on the process handle, but the wait sometimes completes immediately
without waiting.
How do we wait until the user closes the Explorer window?
WaitForSingleObject
This is another case of solving a problem halfway and then
having trouble with the other half.
The reason that WaitForSingleObject
returns immediately
is that Explorer is a single-instance program (well, limited-instance).
When you open an Explorer window, the request is handed off to
a running copy of Explorer, and the copy of Explorer you launched
exits.
That's why your WaitForSingleObject
returns immediately.
Fortunately, the customer was willing to explain their underlying
problem.
We have a wizard that creates some files in a directory
based on information provided by the user,
and we want to launch Explorer to view that directory
so users can verify that things
are set up the way they want them.
When users close the Explorer window, we ask them if everything
was good; if not, then we back up and let the user try again.
Aha, the program is using Explorer as a "view this folder for
a little while" subroutine.
Unfortunately, Explorer doesn't work that way.
For example, the user might decide to use the Address Bar
and go visit some other folders completely unrelated to your
program, and your program would just be sitting there waiting
for the user to close that window;
meanwhile, the user doesn't realize that your program is waiting
for it.
What you can do is host the Explorer Browser control inside
a page of your wizard
and control it with interfaces like
IExplorerBrowser.
You can disable navigation in the Explorer Browser
(so the user can look only at the folder
you want to preview),
and the user can click Back if they want to try again
or Next if they are happy and want to continue.
This has the additional advantage of keeping all the parts of
your wizard inside the wizard framework itself,
allowing users to continue using the wizard navigation model
that they are already familiar with.
IExplorerBrowser
A
sample program which uses the Explorer Browser control
can be found in the Platform SDK.
For the impatient, here's the
scratch program version.
Note that this is the minimal version;
in real life, you would probably want to set some options and stuff like that.
#include <shlobj.h>
IExplorerBrowser *g_peb;
void
OnSize(HWND hwnd, UINT state, int cx, int cy)
{
if (g_peb) {
RECT rc = { 0, 0, cx, cy };
g_peb->SetRect(NULL, rc);
}
}
BOOL
OnCreate(HWND hwnd, LPCREATESTRUCT lpcs)
{
BOOL fSuccess = FALSE;
RECT rc;
PIDLIST_ABSOLUTE pidl = NULL;
if (SUCCEEDED(CoCreateInstance(CLSID_ExplorerBrowser, NULL,
CLSCTX_INPROC, IID_PPV_ARGS(&g_peb))) &&
GetClientRect(hwnd, &rc) &&
SUCCEEDED(g_peb->Initialize(hwnd, &rc, NULL)) &&
SUCCEEDED(SHParseDisplayName(
L"C:\\Program Files\\Internet Explorer",
NULL, &pidl, 0, NULL)) &&
SUCCEEDED(g_peb->SetOptions(EBO_NAVIGATEONCE)) &&
SUCCEEDED(g_peb->BrowseToIDList(pidl, SBSP_ABSOLUTE))) {
fSuccess = TRUE;
}
ILFree(pidl);
return fSuccess;
}
void
OnDestroy(HWND hwnd)
{
if (g_peb) {
g_peb->Destroy();
g_peb->Release();
}
PostQuitMessage(0);
}
This same technique of hosting the Explorer Browser control
can be used for other types of "build your own burrito" scenarios:
For example, you might
host the Explorer Browser control in a window and tell users
to copy files into that window.
When they click OK or Next or whatever, you can enumerate
the contents of the folder and do your business.
Armed with this knowledge, you can answer these customers' questions: a 32-bit shell extension for which a 64-bit version is not
available.
Since our clients are running 64-bit Windows,
the 32-bit shell extension is not available in Explorer.
How can we obtain access to this context menu?
We have a shell extension that is not UAC-compliant.
It requires that the user have administrative privileges in order
to function properly.
We would rather not disable UAC across the board just for this
one shell extension.
Is there a workaround that lets us run Explorer elevated temporarily?
Bonus sample program:
The
Explorer Browser Search Sample
shows how to filter the view.
Bonus alternative:
If you really just want to watch Explorer windows rather than
host one,
you can use
the ShellWindows object,
something I covered
many years ago
(and followed up with a much shorter
scripting version).
A
and substituting the serial number for the final nine hex digits.
Is this a viable technique?
uuidgen.
Although the
x64 calling convention
reserves space on the stack as spill locations
for the first four parameters (passed in registers),
there is no requirement that the spill locations actually be used
for spilling.
They're just 32 bytes of memory available for scratch use by the function
being called.
We have a test program that works okay when optimizations are disabled,
but when compiled with full optimizations, everything appears to be wrong
right off the bat.
It doesn't get the correct values for
argc and)
We have a test program that works okay when optimizations are disabled,
but when compiled with full optimizations, everything appears to be wrong
right off the bat.
It doesn't get the correct values for
argc and argv:
argc)
When compiler optimizations are disabled, the Visual C++ x64 compiler
will spill all register parameters into their corresponding slots.
This has as a nice side effect that debugging is a little easier,
but really it's just because you disabled optimizations,
so the compiler generates simple, straightforward code,
making no attempts to be clever.
When optimizations are enabled, then the compiler becomes more
aggressive about removing redundant operations and using memory
for multiple purposes when variable lifetimes don't overlap.
If it finds that it doesn't need to save argc
into memory (maybe it puts it into a register),
then the spill slot for argc can be used for
something else; in this case, it's being used to preserve
the value of rbx.
rbx
You see the same thing even in x86 code,
where the memory used to pass parameters can be re-used
for other purposes once the value of the parameter is no
longer needed in memory.
(The compiler might load the value into a register and use
the value from the register for the remainder of the function,
at which point the memory used to hold the parameter becomes
unused and can be redeployed for some other purpose.)
Whatever problem you're having with your test program,
there is nothing obviously wrong with the code generation
provided in the purported defect report.
The problem lies elsewhere.
(And it's probably somewhere in your program.
Don't immediately assume that the reason for your problem
is a compiler bug.)
Bonus chatter:
In a (sadly rare) follow-up, the customer confessed that the
problem was indeed in their program.
They put a function call inside an assert,
and in the nondebug build, they disabled assertions
(by passing /DNDEBUG to the compiler),
which means that in the nondebug build, the function was never called.
assert
/DNDEBUG
Extra reading:
Challenges of debugging optimized x64 code.
That .frame /r command is real time-saver.
.frame /r
A customer reported that a shortcut they deployed to their employees'
desktops was triggering unwanted server traffic.?
Fortunately, the customer provided context for the question,
because the question the customer is asking doesn't actually
match the scenario.
The customer doesn't want to stop Explorer from querying the shortcut
information; the customer just wants to stop Explorer from contacting
the server to get the icon.
The default icon for a shortcut is the icon of the target,
and in order to get that icon, Explorer needs to contact the target.
But you can override that default.
Programmatically, you call
IShellLink::SetIconLocation;
interactively, you view the shortcut's properties and click
Change Icon....
In either case, set it to an icon that doesn't reside on the server.
Save the changes and deploy the modified shortcut.
IShellLink::SetIconLocation
When.)
ENTER n,0
push / mov / sub
LEAVE
mov / pop.)
...
Bottom
__stdcall
Top
0040F8F8
b
EBP = 0040F8EC
0040F8F4
a
0040F8F0
0040F8EC
0040F8E8
toplocal
Middle
0040F8E4
d
EBP = 0040F8D8
0040F8E0
c
0040F8DC
0040F8D8
0040F8D4
e
EBP = 0040F8CC
0040F8D0
0040F8CC
0040F8C8
bottomlocal1
0040F8C4
bottomlocal2
Each stack frame is identified by the EBP value
which the function uses during its execution.
EBP
The structure of each stack frame is therefore
[ebp+n]
[ebp+4]
[ebp+0]
[ebp-n]
001af384-80
.
001af478
Once you find where the EBP chain resumes, you can ask the debugger
to resume its stack trace from that point with the =n
option to the k command.
=n
k.
EIP.)
If you ask
Michael Kaplan,
he'd probably say that
it stands for lame.
In his article, Michael presents a nice chart of the various L-functions
and their sort-of counterparts.
There are other L-functions not on his list,
not because he missed them,
but because they don't have anything to do with characters or encodings.
On the other hand, those other functions help shed light on
the history of the L-functions.
Those other functions are
lopen,
lcreat,
lread,
lwrite,
lclose,
and llseek.
There are all L-version sort-of counterparts to
open,
creat, and
read,
write,
and lseek.
Note that we've already uncovered the answer to the unasked question
"Why does llseek have two L's?"
The first L is a prefix (whose meaning we will soon discover)
and the second L comes from the function it's sort-of acting as the
counterpart to.
But what does the L stand for?
Once you find those other L-functions,
you'll see next door the H-functions
hread and hwrite.
As we learned a while back,
being lucky is simply observing things you weren't planning to observe.
We weren't expecting to find the H-functions, but there they were,
and they blow the lid off the story.
The H prefix in hread and hwrite stands for huge.
Those two functions operated on so-called huge pointers,
which is 16-bit jargon for pointers to memory blocks larger than 64KB.
To increment your average 16:16 pointer by one byte,
you increment the bottom 16 bits.
But when the bottom 16 bits contain the value 0xFFFF,
the increment rolls over, and where do you put the carry?
If the pointer is a huge pointer, the convention is that the byte
that comes after S:0xFFFF is
(S+__AHINCR):0x0000, where
__AHINCR is a special value exported by the Windows kernel.
If you allocate memory larger than 64KB, the GlobalAlloc
function breaks your allocation into 64KB chunks and arranges them
so that incrementing the selector by __AHINCR takes you
from one chunk to the next.
S:0xFFFF
(S+__AHINCR):0x0000
__AHINCR
GlobalAlloc
Working backwards, then, the L prefix therefore stands for long.
These functions explicitly accept far pointers,
which makes them useful for 16-bit Windows programs
since they are independent of the program's memory model.
Unlike the L-functions,
the standard library functions like strcpy
and read operate on pointers whose size match
the data model.
If you write your program in the so-called medium memory model,
then all data pointers default to near
(i.e., they are 16-bit offsets into the default data segment),
and all the C runtime functions operate on near pointers.
This is a problem if you need to, say, read some data off the disk
into a block of memory you allocated with GlobalAlloc:
That memory is expressible only as a far pointer, but the
read function accepts a near pointer.
strcpy
read
To the rescue comes the lread function,
which you can use to read from the disk into your far pointer.
lread
How did Windows decide which C runtime functions
should have corresponding L-functions?
They were the functions that Windows itself used internally,
and which were exported as a courtesy.
Okay, now let's go back to the Lame part.
Michael Kaplan notes that the lstrcmp and
lstrcmpi functions actually are sort-of counterparts to
strcoll and strcolli.
So why weren't these functions called lstrcoll
and lstrcolli instead?
lstrcmp
lstrcmpi
strcoll
strcolli
lstrcoll
lstrcolli
Because back when lstrcmp and lstrcmpi
were being named, the strcoll and strcolli
functions hadn't been invented yet!
It's like asking,
"Why did the parents of
General Sir Michael Jackson give him the same name as the pop singer?"
or
"Why didn't they use the Space Shuttle to rescue the Apollo 13 astronauts?"
... for when regular strength lParam just isn't enough.
A little-known and even less-used feature of the shell property sheet
is that you can hang custom data off the end of the
PROPSHEETPAGE structure,
and the shell will carry it around for you.
Mind you, the shell carries it around by means of
memcpy and destroys it by just freeing the
underlying memory,
so whatever you stick on the end needs to be
plain old data.
(Though you do get an opportunity to "construct" and "destruct"
if you register a PropSheetPageProc callback,
during which you are permitted to modify your bonus data
and the lParam field of the
PROPSHEETPAGE.)
PROPSHEETPAGE
memcpy
PropSheetPageProc
lParam
Here's a program that illustrates this technique.
It doesn't do much interesting, mind you,
but maybe that's a good thing: Makes for fewer distractions.
#include <windows.h>
#include <prsht.h>
HINSTANCE g_hinst;
struct ITEMPROPSHEETPAGE : public PROPSHEETPAGE
{
int cWidgets;
TCHAR szItemName[100];
};
ITEMPROPSHEETPAGE is a
custom structure that appends our bonus
data (an integer and a string) to the standard
PROPSHEETPAGE.
This is the structure that our property sheet page will use.
ITEMPROPSHEETPAGE
INT_PTR CALLBACK DlgProc(HWND hwnd, UINT uiMsg, WPARAM wParam, LPARAM lParam)
{
switch (uiMsg) {
case WM_INITDIALOG:
{
ITEMPROPSHEETPAGE *ppsp =
reinterpret_cast<ITEMPROPSHEETPAGE*>(lParam));
SetDlgItemText(hwnd, 100, ppsp->szItemName);
SetDlgItemInt(hwnd, 101, ppsp->cWidgets, FALSE);
}
return TRUE;
}
return FALSE;
}
The lParam passed to WM_INITDIALOG
is a pointer to the shell-managed copy of the PROPSHEETPAGE
structure.
Since we associated this dialog procedure with a
ITEMPROPSHEETPAGE structure,
we can cast it to the larger structure to get at our bonus data
(which the shell happily memcpy'd from our copy
into the shell-managed copy).
WM_INITDIALOG
HPROPSHEETPAGE CreateItemPropertySheetPage(
int cWidgets, PCTSTR pszItemName)
{);
return CreatePropertySheetPage(&psp);
}
It is here that we associate the DlgProc
with the ITEMPROPSHEETPAGE.
Just to highlight that the pointer passed to the DlgProc
is a copy of the ITEMPROPSHEETPAGE used to create
the property sheet page, I created a separate function to create
the property sheet page, so that the ITEMPROPSHEETPAGE
on the stack goes out of scope,
making it clear that the copy passed to the DlgProc
is not the one we passed to CreatePropertySheetPage.
DlgProc
CreatePropertySheetPage
Note that you must set the dwSize of the
base PROPSHEETPAGE
to the size of the
PROPSHEETPAGE plus the size of your bonus data.
In other words, set it to the size of your ITEMPROPSHEETPAGE.
dwSize
int WINAPI WinMain(HINSTANCE hInst, HINSTANCE hPrevInst,
LPSTR lpCmdLine, int nCmdShow)
{
g_hinst = hinst;
HPROPSHEETPAGE hpage =
CreateItemPropertySheetPage(42, TEXT("Elmo"));
if (hpage) {
PROPSHEETHEADER psh = { 0 };
psh.dwSize = sizeof(psh);
psh.dwFlags = PSH_DEFAULT;
psh.hInstance = hinst;
psh.pszCaption = TEXT("Item Properties");
psh.nPages = 1;
psh.phpage = &hpage;
PropertySheet(&psh);
}
return 0;
}
Here is where we display the property sheet.
It looks just like any other code that displays a property sheet.
All the magic happens in the way we created
the HPROPSHEETPAGE.
HPROPSHEETPAGE
If you prefer to use the
PSH_PROPSHEETPAGE flag, then the above code could have
been written like this:
PSH_PROPSHEETPAGE
int WINAPI WinMain(HINSTANCE hInst, HINSTANCE hPrevInst,
LPSTR lpCmdLine, int nCmdShow)
{);
PROPSHEETHEADER psh = { 0 };
psh.dwSize = sizeof(psh);
psh.dwFlags = PSH_PROPSHEETPAGE;
psh.hInstance = hinst;
psh.pszCaption = TEXT("Item Properties");
psh.nPages = 1;
psh.ppsp = &psp;
PropertySheet(&psh);
return 0;
}
If you want to create a property sheet with more than one page,
then you would pass an array of ITEMPROPSHEETPAGEs.
Note that passing an array requires all the pages in the array
to use the same custom structure (because that's how arrays work;
all the elements of an array are the same type).
Finally, here's the dialog template.
Pretty anticlimactic.
1 DIALOG 0, 0, PROP_SM_CXDLG, PROP_SM_CYDLG
STYLE WS_CAPTION | WS_SYSMENU
CAPTION "General"
FONT 8, "MS Shell Dlg"
BEGIN
LTEXT "Name:",-1,7,11,42,14
LTEXT "",100,56,11,164,14
LTEXT "Widgets:",-1,7,38,42,14
LTEXT "",101,56,38,164,14
END
And there you have it.
Tacking custom data onto the end of a PROPSHEETPAGE,
an alternative to
trying to cram everything into a single lParam.
Exercise:
Observe that the size of the PROPSHEETPAGE structure
has changed over time.
For example, the original PROPSHEETPAGE ends at the
pcRefParent.
In Windows 2000, there are two more fields,
the pszHeaderTitle and pszHeaderSubTitle.
Windows XP added yet another field, the hActCtx.
Consider a program written for Windows 95 that uses this
technique.
How does the shell know that the cWidgets is really
bonus data and not a pszHeaderTitle?
pcRefParent
pszHeaderTitle
pszHeaderSubTitle
hActCtx
cWidgets | http://blogs.msdn.com/b/oldnewthing/archive/2011/03.aspx?PageIndex=2&PostSortBy=MostComments | CC-MAIN-2014-23 | refinedweb | 3,068 | 60.35 |
Archived:PyS60 1.4.5 Quick Start
Archived: This article is archived because it is not considered relevant for third-party developers creating commercial solutions today. If you think this article is still relevant, let us know by adding the template {{ReviewForRemovalFromArchive|user=~~~~|write your reason here}}.
Warning: See the Getting Started with Python (Trail) to get the latest (and greatest!) version of Python on Symbian.
This article covers the older and less functionally rich PySymbian 1.4.5 release, which we recommend only if you're targeting S60 2nd Edition mobile devices.
This Quick Start document provides information on how to begin developing Python applications for the Symbian platform. At the end of the tutorial you will have installed Python (PySymbian v1.4.5) on your phone or Emulator, written a basic helloworld application and run it on the phone both from the interactive shell script and as a stand-alone application.
If you are completely new to Python we recommend you start your development on the desktop - by taking the short course Dive Into Python or working through the tutorial: Python Programming for the Non Programmer.
Setting Up Your Development Environment
Python applications are simple "scripts" created in any text editor. It contains code written in the Python programming language conventions and named with the file extension ".py". The python scripts can be run either from "Python Interactive" on a mobile device or the Symbian platform emulator, or as standalone applications on your mobile device.
Getting PySymbian
The PySymbian 1.4.5 (stable release) files can be downloaded from sourceforge.net/projects/pys60. There are different files for the different versions of S60 (descriptions are provided for all files in the Python on Symbian Technical Overview#Table2)
(Please note that the current available version for PySymbian can be downloaded from )
For this tutorial we assume you are working on Symbian platform phones and SDKs. Therefore you require the following files:
- PythonForS60_1_4_5_3rdEd.sis - the Python runtime
- PythonScriptShell_1_4_5_3rdEd.SIS - the Python script shell
- PythonForS60_1_4_5_SDK_3rdEdFP1.zip - the PySymbian SDK
- PythonForS60_1_4_5_doc.pdf - the relevant PySymbian library reference and API documentation (applies to all)
Installing Python on your Phone
The SIS file listed above are compatible with Symbian platform (and any S60 3rd Edition phones (or later)). If you have a S60 2nd Edition phone then you will need to download some new files.
Using your PC Suite software, install the following files to your Symbian platform or S60 3rd Edition phone:
- PythonForS60_1_4_5_3rdEd.sis - the Python runtime
- PythonScriptShell_1_4_5_3rdEd.SIS - the Python script shell
You can verify the installation by launching the "Python" icon from the device Installations folder. If you do menu "Options | Run Script", you can select some pre installed scripts (usually installed on c:\data\python or e:\data\python) to demonstrate the power of Python! The icon and list of applications are shown below:
Setting up your Windows PC
Install/unzip/copy the following files (in order):
- S60 5th Edition SDK
- Note that you'll first need to unzip the SDK to a temporary directory, and then run setup.exe. The patch will then need to be unzipped over the SDK, so that it overwrites the SDK's \epoc32\ directory. Accept all prompts during unzipping to allow this to happen.
- On first use, the SDK will prompt you to register it. The process is straight forward, but you will need to sign up with Nokia Developer if you haven't already.
- You can launch the Emulator by clicking on the file [SDK]/epoc32/release/winscw/udeb/epoc.exe. You then navigate to the Python icon in the Emulator in the same way as you did on the device.
- PythonForS60_1_4_5_SDK_3rdEdFP1.zip - The PySymbian SDK
- The PySymbian SDK ZIP file contains another file named sdk_files.zip. Extract that in the S60 SDK folder (by default C:\S60\devices\S60_3rd_FP2_SDK for the SDK we are using). PySymbian is now installed on your emulator.
- PythonForS60_1_4_5_doc.pdf - Copy of the relevant PySymbian library reference and API docs (applies to all)
- Python 2.2.2 for Windows (needed to run Ensymble to create standalone applications
You will also need a text editor for writing your scripts. We recommend you to use editor like Notepad ++. However you can use any other text editor including Windows Notepad.
Note: While you can write code in the classic Python python.org/idle IDLE IDE, it is not possible to run code that depends on PySymbian-specific libraries from this or other IDEs. They must be run in the S60 emulator or on the target device.
Your first script
The very simple script below asks the user for their name, then displays a dialog with the text: "Hello Name, welcome to Python World."
# import the app user interface framework module
import appuifw
# create a single-field dialog (text input field): appuifw.query(label, type)
data = appuifw.query(u"Type your name", "text")
# create an information note: appuifw.note(label, type)
appuifw.note(u"Hello "+str(data)+", welcome to Python World", "info")
Copy this text into your preferred text editor and then save the file as Helloworld.py (the ".py" extension is used for uncompiled python scripts).
If you're using the interactive shell for testing, you need to copy the file to \Python\ on any drive. For the device you can copy the file directly into the correct folder using PC Suite, or send it as a message with Bluetooth and then move it to the correct folder, using a file manager application (YBrowser - recommended). If you're using the emulator you can copy it direct into the appropriate folder - [SDK]\epoc32\winscw\c\python.
You can then test your script in the same way you verified the Python installation - by launching the "Python" icon for the interactive shell, then doing "Options | Run Script" and selecting it from the list.
That's it. You now have Python on your device and/or your Emulator and know how to write/launch scripts. The next section shows how you can package your script as an application.
Making a Standalone Application
Scripts are ideal for testing because they can be quickly and easily modified. However, an application should be distributed in the form of an installable SIS file. This applies even more when the application has external resource files that have to be distributed with it.
The PySymbian 1.4.5 tool for creating standalone application's is Ensymble. This is a Python application, so you need to install Python 2.2.2 before it can be run.
Tip: You can also use the GUI front end to this script (the "Application Packager" described in the wiki book Python on Symbian, but you'll need to get the other version of PySymbian to do so.
Before using Ensymble, we recommend you first read the (very extensive) Ensymble’s readme file. You can then package the script we created in the previous section.
- Open the Windows command prompt.
- Navigate to the folder containing your application script and Ensymble script.
- Type the following command and press Enter.
ensymble.py py2sis helloworld.py
A SIS file named helloworld_v1_0_0.sis will be created in the same folder. Instructions for using Ensymble on Linux OS are available here.
Many other useful parameters are available for the py2sis command for additional options. Ensymble’s readme file contains more information.
--uid=0x01234567
--appname=AppName
--version=1.0.0
--lang=EN,...
--icon=icon.svg
--shortcaption="App. Name"
--caption="Application Name"
--drive=C
--textfile=mytext.txt
--cert=mycert.cer
--privkey=mykey.key
--passphrase=12345
--caps=Cap1+Cap2+...
--vendor="Vendor Name"
--autostart
--encoding=terminal,filesystem
--verbose
You might want to compile the Python scripts (PY) to Python compiled scripts (PYC) using Python for PC (python.org) before packaging them in SIS files. Converting to PYC increases performance and execution speed and protects the source code to some extent. The following code commands are used on Python on PC for cross compiling scripts, as shown in Figure 5.
import py_compile
py_compile.compile('myscript.py')
#Compiles myscript.py
or
import compileall
compileall.compile_dir('Myfolder', force=1)
#Compiles all scripts in the directory Myfolder
Summary
This Quick Start tutorial has shown you how to get set up with PySymbian 1.4.5 on the Symbian platform, from getting the developer environment through to making standalone applications.
Related Information
There are plenty of public domain resources to help you get started on learning generic Python, and the PySymbian flavour in particular:
- Python on Symbian Technical Overview
- Dive Into Python - Free book on (Generic) Python programming
- Nokia Developer's Python Elearning Module
- Nokia Developer's Official Python Training course with worked examples
- Applications on Croozeus.com blogs
- Croozeus.com Tutorials
- Mobilenin Tutorials.
- Python reference library (including the modules for PySymbian) installed as part of your windows setup.
- Python 2.2.2 Reference
- Mobile Python - Rapid prototyping of applications on the mobile platform by Jürgen Scheible and Ville Tuulos. Symbian Press provides a sample introductory chapter online and a website with source code for all the examples in the book:
© 2010 Symbian Foundation Limited. This document is licensed under the Creative Commons Attribution-Share Alike 2.0 license. See for the full terms of the license.
Note that this content was originally hosted on the Symbian Foundation developer wiki. | http://developer.nokia.com/community/wiki/Archived:PyS60_1.4.5_Quick_Start | CC-MAIN-2014-35 | refinedweb | 1,539 | 56.55 |
2.3.Mapping JPAQL/HQL queries. Mapping JPAQL/HQL queries
You can map EJBQL/HQL queries using annotations.
@NamedQuery and
@NamedQueries can be defined at the class level or in a JPA XML file. However their definitions are global to the session factory/entity manager factory scope. A named query is defined by its name and the actual query string.
<entity-mappings> <named-query <query>select p from Plane p</query> </named-query> ... </entity-mappings> ... @Entity @NamedQuery(name="night.moreRecentThan", query="select n from Night n where n.date >= :date") public class Night { ... } public class MyDao { doStuff() { Query q = s.getNamedQuery("night.moreRecentThan"); q.setDate( "date", aMonthAgo ); List results = q.list(); ... } ... }
You can also provide some hints to a query through an array of
QueryHint through a
hints attribute.
The availabe Hibernate hints are | http://www.redhat.com/docs/manuals/jboss/jboss-eap-4.3/doc/hibernate/Annotations_Reference_Guide/Mapping_Queries-Mapping_JPAQLHQL_queries.html | crawl-001 | refinedweb | 134 | 53.58 |
Parametric polymorphism
You are encouraged to solve this task according to the task description, using any language you may know.
- Task
Write a small example for a type declaration that is parametric over another type, together with a short bit of code (and its type signature) that uses it.
A good example is a container type, let's say a binary tree, together with some function that traverses the tree, say, a map-function that operates on every element of the tree.
This language feature only applies to statically-typed languages.
Contents
- 1 Ada
- 2 C
- 3 C++
- 4 C#
- 5 Ceylon
- 6 Clean
- 7 Common Lisp
- 8 D
- 9 Dart
- 10 E
- 11 F#
- 12 Fortran
- 13 Go
- 14 Groovy
- 15 Haskell
- 16 Inform 7
- 17 Icon and Unicon
- 18 J
- 19 Java
- 20 Julia
- 21 Kotlin
- 22 Mercury
- 23 Nim
- 24 Objective-C
- 25 OCaml
- 26 Perl 6
- 27 Phix
- 28 PicoLisp
- 29 Racket
- 30 REXX
- 31 Rust
- 32 Scala
- 33 Seed7
- 34 Standard ML
- 35 Swift
- 36 Ursala
- 37 Visual Prolog
Ada
generic
type Element_Type is private;
package Container is
type Tree is tagged private;
procedure Replace_All(The_Tree : in out Tree; New_Value : Element_Type);
private
type Tree_Access is access Tree;
type Tree is tagged record
Value : Element_Type;
Left : Tree_Access := null;
Right : Tree_Access := null;
end record;
end Container;
package body Container is
procedure Replace_All(The_Tree : in out Tree; New_Value : Element_Type) is
begin
The_Tree.Value := New_Value;
If The_Tree.Left /= null then
The_Tree.Left.all.Replace_All(New_Value);
end if;
if The_tree.Right /= null then
The_Tree.Right.all.Replace_All(New_Value);
end if;
end Replace_All;
end Container;
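A minimal instantiation sketch (not part of the original entry; the unit and object names are illustrative, and the prefixed call needs Ada 2005 or later):
with Container;

procedure Demo is
   package Integer_Trees is new Container (Element_Type => Integer);
   Root : Integer_Trees.Tree;
begin
   Root.Replace_All (42);  -- every reachable node (here just the root) now holds 42
end Demo;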
C
If the goal is to separate algorithms from types at compile time, C may do it with macros. Here's sample code implementing a binary tree with node creation and insertion:
#include <stdio.h>
#include <stdlib.h>
#define decl_tree_type(T) \
typedef struct node_##T##_t node_##T##_t, *node_##T; \
struct node_##T##_t { node_##T left, right; T value; }; \
\
node_##T node_##T##_new(T v) { \
node_##T node = malloc(sizeof(node_##T##_t)); \
node->value = v; \
node->left = node->right = 0; \
return node; \
} \
node_##T node_##T##_insert(node_##T root, T v) { \
node_##T n = node_##T##_new(v); \
while (root) { \
if (root->value < n->value) \
if (!root->left) return root->left = n; \
else root = root->left; \
else \
if (!root->right) return root->right = n; \
else root = root->right; \
} \
return 0; \
}
#define tree_node(T) node_##T
#define node_insert(T, r, x) node_##T##_insert(r, x)
#define node_new(T, x) node_##T##_new(x)
decl_tree_type(double);
decl_tree_type(int);
int main()
{
int i;
tree_node(double) root_d = node_new(double, (double)rand() / RAND_MAX);
for (i = 0; i < 10000; i++)
node_insert(double, root_d, (double)rand() / RAND_MAX);
tree_node(int) root_i = node_new(int, rand());
for (i = 0; i < 10000; i++)
node_insert(int, root_i, rand());
return 0;
}
Comments: It's ugly looking, but it gets the job done. It has the drawback that all methods need to be re-created for each tree data type used, but hey, C++ templates do that, too.
Arguably more interesting is run time polymorphism, which can't be trivially done; if you are confident in your coding skill, you could keep track of data types and method dispatch at run time yourself -- but then, you are probably too confident to realize you might be better off using a higher-level language.
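To make that remark concrete, here is a minimal sketch of hand-rolled run-time dispatch (not part of the original entry; the tag, struct and function names are purely illustrative). A type tag sits next to a void pointer, and behaviour is selected through a function pointer at run time:
#include <stdio.h>

typedef enum { AS_INT, AS_DOUBLE } tag_t;

typedef struct {
    tag_t tag;                          /* what the pointer really points at */
    void *data;
    void (*print)(const void *data);    /* per-type "method" */
} any_t;

static void print_int(const void *p)    { printf("%d\n", *(const int *)p); }
static void print_double(const void *p) { printf("%f\n", *(const double *)p); }

int main(void)
{
    int i = 42;
    double d = 3.14;
    any_t values[] = {
        { AS_INT,    &i, print_int    },
        { AS_DOUBLE, &d, print_double },
    };
    for (size_t k = 0; k < sizeof values / sizeof values[0]; k++)
        values[k].print(values[k].data);   /* dispatch chosen at run time */
    return 0;
}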
C++
template<class T>
class tree
{
T value;
tree *left;
tree *right;
public:
tree (T initial_value, tree *l = nullptr, tree *r = nullptr)
: value (initial_value), left (l), right (r) {}
void replace_all (T new_value);
};
For simplicity, we replace all values in the tree with a new value:
template<class T>
void tree<T>::replace_all (T new_value)
{
value = new_value;
if (left != nullptr)
left->replace_all (new_value);
if (right != nullptr)
right->replace_all (new_value);
}
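A short usage sketch (not part of the original entry; values and names are illustrative) instantiates the template for two element types:
#include <string>

int main ()
{
    tree<int> left_child (1), right_child (3);
    tree<int> root (2, &left_child, &right_child);
    root.replace_all (7);           // every node now holds 7

    tree<std::string> words ("old");
    words.replace_all ("new");      // the same template works for strings
    return 0;
}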
C#
using System;
namespace RosettaCode {
class BinaryTree<T> {
public T value;
public BinaryTree<T> left;
public BinaryTree<T> right;
public BinaryTree(T value) {
this.value = value;
}
public BinaryTree<U> Map<U>(Func<T,U> f) {
BinaryTree<U> Tree = new BinaryTree<U>(f(this.value));
if (left != null) {
Tree.left = left.Map(f);
}
if (right != null) {
Tree.right = right.Map(f);
}
return Tree;
}
}
}
Sample that creates a tree to hold int values:
namespace RosettaCode {
class Program {
static void Main(string[] args) {
BinaryTree<int> b = new BinaryTree<int>(6);
b.left = new BinaryTree<int>(5);
b.right = new BinaryTree<int>(7);
BinaryTree<double> b2 = b.Map(x => x * 10.0);
}
}
}
Ceylon
class BinaryTree<Data>(shared Data data, shared BinaryTree<Data>? left = null, shared BinaryTree<Data>? right = null) {
shared BinaryTree<NewData> myMap<NewData>(NewData f(Data d)) =>
BinaryTree {
data = f(data);
left = left?.myMap(f);
right = right?.myMap(f);
};
}
shared void run() {
value tree1 = BinaryTree {
data = 3;
left = BinaryTree {
data = 4;
};
right = BinaryTree {
data = 5;
left = BinaryTree {
data = 6;
};
};
};
tree1.myMap(print);
print("");
value tree2 = tree1.myMap((x) => x * 333.33);
tree2.myMap(print);
}
Clean
add1Everywhere :: (f a) -> (f a) | Functor f & Num a
add1Everywhere nums = fmap (\x = x + 1) nums
If we have a tree of integers, i.e. f is Tree and a is Integer, then the type of add1Everywhere is Tree Integer -> Tree Integer.
Common Lisp
Common Lisp is not statically typed, but types can be defined which are parameterized over other types. In the following piece of code, a type pair is defined which accepts two (optional) type specifiers. An object is of type (pair :car car-type :cdr cdr-type) if and only if it is a cons whose car is of type car-type and whose cdr is of type cdr-type.
(deftype pair (&key (car 't) (cdr 't))
`(cons ,car ,cdr))
Example
> (typep (cons 1 2) '(pair :car number :cdr number))
T
> (typep (cons 1 2) '(pair :car number :cdr character))
NIL
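The parametric specifier can also be used anywhere a type specifier is accepted, for instance in a run-time check (a small sketch, not part of the original entry):
(defun sum-pair (p)
  ;; accept only a cons of two numbers, described by the parametric type
  (check-type p (pair :car number :cdr number))
  (+ (car p) (cdr p)))

(sum-pair (cons 1 2))    ; => 3
(sum-pair (cons 1 #\a))  ; signals a TYPE-ERROR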
D
class ArrayTree(T, uint N) {
T[N] data;
typeof(this) left, right;
this(T initValue) { this.data[] = initValue; }
void tmap(const void delegate(ref typeof(data)) dg) {
dg(this.data);
if (left) left.tmap(dg);
if (right) right.tmap(dg);
}
}
void main() { // Demo code.
import std.stdio;
// Instantiate the template ArrayTree of three doubles.
alias AT3 = ArrayTree!(double, 3);
// Allocate the tree root.
auto root = new AT3(1.00);
// Add some nodes.
root.left = new AT3(1.10);
root.left.left = new AT3(1.11);
root.left.right = new AT3(1.12);
root.right = new AT3(1.20);
root.right.left = new AT3(1.21);
root.right.right = new AT3(1.22);
// Now the tree has seven nodes.
// Show the arrays of the whole tree.
//root.tmap(x => writefln("%(%.2f %)", x));
root.tmap((ref x) => writefln("%(%.2f %)", x));
// Modify the arrays of the whole tree.
//root.tmap((x){ x[] += 10; });
root.tmap((ref x){ x[] += 10; });
// Show the arrays of the whole tree again.
writeln();
//root.tmap(x => writefln("%(%.2f %)", x));
root.tmap((ref x) => writefln("%(%.2f %)", x));
}
- Output:
1.00 1.00 1.00
1.10 1.10 1.10
1.11 1.11 1.11
1.12 1.12 1.12
1.20 1.20 1.20
1.21 1.21 1.21
1.22 1.22 1.22

11.00 11.00 11.00
11.10 11.10 11.10
11.11 11.11 11.11
11.12 11.12 11.12
11.20 11.20 11.20
11.21 11.21 11.21
11.22 11.22 11.22
Dart
class TreeNode<T> {
T value;
TreeNode<T> left;
TreeNode<T> right;
TreeNode(this.value);
TreeNode map(T f(T t)) {
var node = new TreeNode(f(value));
if(left != null) {
node.left = left.map(f);
}
if(right != null) {
node.right = right.map(f);
}
return node;
}
void forEach(void f(T t)) {
f(value);
if(left != null) {
left.forEach(f);
}
if(right != null) {
right.forEach(f);
}
}
}
void main() {
TreeNode root = new TreeNode(1);
root.left = new TreeNode(2);
root.right = new TreeNode(3);
root.left.right = new TreeNode(4);
print('first tree');
root.forEach(print);
var newRoot = root.map((t) => t * 222);
print('second tree');
newRoot.forEach(print);
}
- Output:
first tree
1
2
4
3
second tree
222
444
888
666
E
While E itself does not do static (before evaluation) type checking, E does have guards which form a runtime type system, and has typed collections in the standard library. Here, we implement a typed tree, and a guard which accepts trees of a specific type.
(Note: Like some other examples here, this is an incomplete program in that the tree provides no way to insert or delete nodes.)
(Note: The guard definition is arguably messy boilerplate; future versions of E may provide a scheme where the
interface expression can itself be used to describe parametricity, and message signatures using the type parameter, but this has not been implemented or fully designed yet. Currently, this example is more of “you can do it if you need to” than something worth doing for every data structure in your program.)
interface TreeAny guards TreeStamp {}
def Tree {
to get(Value) {
def Tree1 {
to coerce(specimen, ejector) {
def tree := TreeAny.coerce(specimen, ejector)
if (tree.valueType() != Value) {
throw.eject(ejector, "Tree value type mismatch")
}
return tree
}
}
return Tree1
}
}
def makeTree(T, var value :T, left :nullOk[Tree[T]], right :nullOk[Tree[T]]) {
def tree implements TreeStamp {
to valueType() { return T }
to map(f) {
value := f(value) # the declaration of value causes this to be checked
if (left != null) {
left.map(f)
}
if (right != null) {
right.map(f)
}
}
}
return tree
}
? def t := makeTree(int, 0, null, null)
# value: <tree>
? t :Tree[String]
# problem: Tree value type mismatch
? t :Tree[Int]
# problem: Failed: Undefined variable: Int
? t :Tree[int]
# value: <tree>
F#
namespace RosettaCode
type BinaryTree<'T> =
| Element of 'T
| Tree of 'T * BinaryTree<'T> * BinaryTree<'T>
member this.Map(f) =
match this with
| Element(x) -> Element(f x)
| Tree(x,left,right) -> Tree((f x), left.Map(f), right.Map(f))
We can test this binary tree like so:
let t1 = Tree(2, Element(1), Tree(4,Element(3),Element(5)) )
let t2 = t1.Map(fun x -> x * 10)
Fortran
Fortran does not offer polymorphism by parameter type, which is to say, it does not enable the same source code to be declared applicable to parameters of different types, so that a contained statement such as
X = A + B*C would work for any combination of integer or floating-point or complex variables as actual parameters, since exactly that (source) code would be workable in every case. Further, there is no standardised pre-processor protocol whereby one could replicate such code to produce a separate subroutine or function specific to every combination.
MODULE SORTSEARCH !Genuflect towards Prof. D. Knuth.
INTERFACE FIND !Binary chop search, not indexed.
MODULE PROCEDURE
1 FINDI4, !I: of integers.
2 FINDF4,FINDF8, !F: of numbers.
3 FINDTTI2,FINDTTI4 !T: of texts.
END INTERFACE FIND
CONTAINS
INTEGER FUNCTION FINDI4(THIS,NUMB,N) !Binary chopper. Find i such that THIS = NUMB(i)
USE ASSISTANCE !Only for the trace stuff.
INTENT(IN) THIS,NUMB,N !Imply read-only, but definitely no need for any "copy-back".
INTEGER*4 THIS,NUMB(1:*) !Where is THIS in array NUMB(1:N)?
INTEGER N !The count. In other versions, it is supplied by the index.
INTEGER L,R,P !Fingers.
Chop away.
L = 0 !Establish outer bounds.
R = N + 1 !One before, and one after, the first and last.
1 P = (R - L)/2 !Probe point offset. Beware integer overflow with (L + R)/2.
IF (P.LE.0) THEN !Aha! Nowhere! And THIS follows NUMB(L).
FINDI4 = -L !Having -L rather than 0 (or other code) might be of interest.
RETURN !Finished.
END IF !So much for exhaustion.
P = P + L !Convert from offset to probe point.
IF (THIS - NUMB(P)) 3,4,2 !Compare to the probe point.
2 L = P !Shift the left bound up: THIS follows NUMB(P).
GO TO 1 !Another chop.
3 R = P !Shift the right bound down: THIS precedes NUMB(P).
GO TO 1 !Try again.
Caught it! THIS = NUMB(P)
4 FINDI4 = P !So, THIS is found, here!
END FUNCTION FINDI4 !On success, THIS = NUMB(FINDI4); no fancy index here...
END MODULE SORTSEARCH
There would be a function (with a unique name) for each of the contemplated variations in parameter types, and when the compiler reached an invocation of FIND(...) it would select by matching amongst the combinations that had been defined in the routines named in the INTERFACE statement. The various actual functions could have different code, and in this case, only the
INTEGER*4 THIS,NUMB(1:*) need be changed, say to
REAL*4 THIS,NUMB(1:*) for FINDF4, which is why both variables are named in the one statement. However, for searching CHARACTER arrays, because the character comparison operations differ from those for numbers (and, no three-way IF-test either), additional changes are required. Thus, function FIND would appear to be a polymorphic function that accepts and returns a variety of types, but it is not, and indeed, there is actually no function called FIND anywhere in the compiled code.
That said, some systems had polymorphic variables, such as the B6700 whereby integers were represented as floating-point numbers and so exactly the same function could be presented with an integer or a floating-point variable (provided the compiler didn't check for parameter type matching - but this was routine) and it would work - so long as no divisions were involved since addition, subtraction, and multiplication are the same for both, but integer division discards any remainders. More recent computers following the Intel 8087 floating-point processor and similar add novel states to the scheme for floating-point arithmetic: not just zero and "gradual underflow" but "Infinity" and "Not a Number", which last violates even more of the axioms of mathematics in that NaN does not equal NaN. In turn, this forces a modicum of polymorphism into the language so as to contend with the additional features, such as the special function IsNaN(x).
More generally, using the same code for different types of variable can be problematical. A scheme that works in single precision may not work in double precision (or vice-versa) or may not give corresponding levels of accuracy, or not converge at all, etc. While F90 also standardised special functions that give information about the precision of variables and the like, and in principle, a method could be coded that, guided by such information, would work for different precisions, this sort of scheme is beset by all manner of difficulties in problems more complex than the simple examples of text books.
Polymorphism just exacerbates the difficulties, thus, on page 219 of 16-Bit Modern Microcomputers by G. M. Corsline appears the remark "At least some of the generalized numerical solutions to common mathematical procedures have coding that is so involved and tricky in order to take care of all possible roundoff contingencies that they have been termed 'pornographic algorithms'.". And "Mathematical software is easy for the uninitiated to write but notoriously hard for the expert. This paradox exists because the beginner is satisfied if his code usually works in his own machine while the expert attempts, against overwhelming obstacles, to produce programs that always work on a large number of computers. The problem is that while standard formulas of mathematics are fairly easy to translate into FORTRAN they often are subject to instabilities due to roundoff error." - quoting John Palmer, 1980, Intel Corporation.
But sometimes it is not so troublesome, as in Pathological_floating_point_problems#The_Chaotic_Bank_Society whereby the special EPSILON(x) function that reports on the precision of a nominated variable of type x is used to determine the point beyond which further calculation (in that precision, for that formula) will make no difference.Having flexible facilities available my lead one astray. Consider the following data aggregate, as became available with F90:
TYPE STUFF
INTEGER CODE !A key number.
CHARACTER*6 NAME !Associated data.
INTEGER THIS !etc.
END TYPE STUFF
TYPE(STUFF) TABLE(600) !An array of such entries.
Suppose the array was in sorted order by each entry's value of CODE so that TABLE(1).CODE <= TABLE(2).CODE, etc. and one wished to find the index of an entry with a specific value, x, of CODE. It is pleasing to be able to write
FIND(x,TABLE.CODE,N) and have it accepted by the compiler. Rather less pleasing is that it runs very slowly.
This is because consecutive elements in an array are expected to occupy consecutive locations in storage, but the CODE elements do not, being separated by the other elements of the aggregate. So, the compiler generates code to copy the required elements to a work area, presents that as the actual parameter, and copies from the work area back on return from the function, thereby vitiating the speed advantages of the binary search. This is why the
INTENT(IN) might help in such situations, as will writing
FIND(x,TABLE(1:N).CODE,N) should N be often less than the full size of the table. But really, in-line code for each such usage is the only answer, despite the lack of a pre-processor to generate it.
Other options are to remain with the older-style of Fortran, using separately-defined arrays having a naming convention such as TABLECODE(600), TABLENAME(600), etc. thus not gaining the unity of declaring a TYPE, or, declaring the size within the type as in
INTEGER CODE(600) except that this means that the size is a part of the type and different-sized tables would require different types, or, perhaps the compiler will handle this problem by passing a "stride" value for every array dimension so that subroutines and functions can index such parameters properly - at the cost of yet more overhead for parameter passing, and more complex indexing calculations.
In short, the available polymorphism whereby a parameter can be a normal array, or, an array-like "selection" of a component from an array of compound entities enables appealing syntax, but disasterous performance.
Go[edit]
The parametric function in this example is the function average. It's type parameter is the interface type intCollection, and its logic uses the polymorphic function mapElements. In Go terminology, average is an ordinary function whose parameter happens to be of interface type. Code inside of average is ordinary code that just happens to call the mapElements method of its parameter. This code accesses the underlying static type only through the interface and so has no knowledge of the details of the static type or even which static type it is dealing with.
Function main creates objects t1 and t2 of two different static types, binaryTree an bTree. Both types implement the interface intCollection. t1 and t2 have different static types, but when they are passed to average, they are bound to parameter c, of interface type, and their static types are not visible within average.
Implementation of binaryTree and bTree is dummied, but you can see that implementation of average of binaryTree contains code specific to its representation (left, right) and that implementation of bTree contains code specific to its representation (buckets.)
package main
import "fmt"
func average(c intCollection) float64 {
var sum, count int
c.mapElements(func(n int) {
sum += n
count++
})
return float64(sum) / float64(count)
}
func main() {
t1 := new(binaryTree)
t2 := new(bTree)
a1 := average(t1)
a2 := average(t2)
fmt.Println("binary tree average:", a1)
fmt.Println("b-tree average:", a2)
}
type intCollection interface {
mapElements(func(int))
}
type binaryTree struct {
// dummy representation details
left, right bool
}
func (t *binaryTree) mapElements(visit func(int)) {
// dummy implementation
if t.left == t.right {
visit(3)
visit(1)
visit(4)
}
}
type bTree struct {
// dummy representation details
buckets int
}
func (t *bTree) mapElements(visit func(int)) {
// dummy implementation
if t.buckets >= 0 {
visit(1)
visit(5)
visit(9)
}
}
Output:
binary tree average: 2.6666666666666665 b-tree average: 5
Groovy[edit](more or less)
Solution:
class Tree<T> {
T value
Tree<T> left
Tree<T> right
Tree(T value = null, Tree<T> left = null, Tree<T> right = null) {
this.value = value
this.left = left
this.right = right
}
void replaceAll(T value) {
this.value = value
left?.replaceAll(value)
right?.replaceAll(value)
}
}
Haskell[edit]
data :: (Functor f, Num a) => f a -> f a
add1Everywhere nums = fmap (\x -> x + 1) nums
If we have a tree of integers, i.e. f is
Treeand a is
Integer, then the type of
add1Everywhereis
Tree Integer -> Tree Integer.
Inform 7[edit]
Phrases (the equivalent of global functions) can be defined with type parameters:
Polymorphism is a room.
To find (V - K) in (L - list of values of kind K):
repeat with N running from 1 to the number of entries in L:
if entry N in L is V:
say "Found [V] at entry [N] in [L].";
stop;
say "Did not find [V] in [L]."
When play begins:
find "needle" in {"parrot", "needle", "rutabaga"};
find 6 in {2, 3, 4};
end the story.
Inform 7 does not allow user-defined parametric types. Some built-in types can be parameterized, though:
list of numbers
relation of texts to rooms
object based rulebook producing a number
description of things
activity on things
number valued property
text valued table column
phrase (text, text) -> number
Icon and Unicon[edit]
Like PicoLisp, Icon and Unicon are dynamically typed and hence inherently polymorphic. Here's an example that can apply a function to the nodes in an n-tree regardless of the type of each node. It is up to the function to decide what to do with a given type of node. Note that the nodes do no even have to be of the same type.
procedure main()
bTree := [1, [2, [4, [7]], [5]], [3, [6, [8], [9]]]]
mapTree(bTree, write)
bTree := [1, ["two", ["four", [7]], [5]], [3, ["six", ["eight"], [9]]]]
mapTree(bTree, write)
end
procedure mapTree(tree, f)
every f(\tree[1]) | mapTree(!tree[2:0], f)
end
J[edit]
In J, all functions are generic over other types.
Alternatively, J is statically typed in the sense that it supports only one data type (the array), though of course inspecting a value can reveal additional details (such as: is it an array of numbers?)
(That said, note that J also supports some types which are not, strictly speaking, data. These are the verb, adverb and conjunction types. To fit this nomenclature, data is of type "noun". Also, nouns have some additional taxonomy which is beyond the scope of this task.)
Java[edit]
Following the C++ example:
public class Tree<T>{
private T value;
private Tree<T> left;
private Tree<T> right;
public void replaceAll(T value){
this.value = value;
if(left != null)
left.replaceAll(value);
if(right != null)
right.replaceAll(value);
}
}
Julia[edit]
mutable struct Tree{T}
value::T
lchild::Nullable{Tree{T}}
rchild::Nullable{Tree{T}}
end
function replaceall!(t::Tree{T}, v::T) where T
t.value = v
isnull(lchild) || replaceall(get(lchild), v)
isnull(rchild) || replaceall(get(rchild), v)
return t
end
Kotlin[edit]
// version 1.0.6
class BinaryTree<T>(var value: T) {
var left : BinaryTree<T>? = null
var right: BinaryTree<T>? = null
fun <U> map(f: (T) -> U): BinaryTree<U> {
val tree = BinaryTree<U>(f(value))
if (left != null) tree.left = left?.map(f)
if (right != null) tree.right = right?.map(f)
return tree
}
fun showTopThree() = "(${left?.value}, $value, ${right?.value})"
}
fun main(args: Array<String>) {
val b = BinaryTree(6)
b.left = BinaryTree(5)
b.right = BinaryTree(7)
println(b.showTopThree())
val b2 = b.map { it * 10.0 }
println(b2.showTopThree())
}
- Output:
(5, 6, 7) (50.0, 60.0, 70.0)
Mercury[edit]
:- type tree(A) ---> empty ; node(A, tree(A), tree(A)).
:- func map(func(A) = B, tree(A)) = tree(B).
map(_, empty) = empty.
map(F, node(A, Left, Right)) = node(F(A), map(F, Left), map(F, Right)).
Nim[edit]
type Tree[T] = ref object
value: T
left, right: Tree[T]
Objective-C[edit]
@interface Tree<T> : NSObject {
T value;
Tree<T> *left;
Tree<T> *right;
}
- (void)replaceAll:(T)v;
@end
@implementation Tree
- (void)replaceAll:(id)v {
value = v;
[left replaceAll:v];
[right replaceAll:v];
}
@end
Note that the generic type variable is only used in the declaration, but not in the implementation.
OCaml[edit]
type 'a tree = Empty | Node of 'a * 'a tree * 'a tree
(** val map_tree : ('a -> 'b) -> 'a tree -> 'b tree *)
let rec map_tree f = function
| Empty -> Empty
| Node (x,l,r) -> Node (f x, map_tree f l, map_tree f r)
Perl 6[edit]
role BinaryTree[::T] {
has T $.value;
has BinaryTree[T] $.left;
has BinaryTree[T] $.right;
method replace-all(T $value) {
$!value = $value;
$!left.replace-all($value) if $!left.defined;
$!right.replace-all($value) if $!right.defined;
}
}
class IntTree does BinaryTree[Int] { }
my IntTree $it .= new(value => 1,
left => IntTree.new(value => 2),
right => IntTree.new(value => 3));
$it.replace-all(42);
say $it.perl;
- Output:
IntTree.new(value => 42, left => IntTree.new(value => 42, left => BinaryTree[T], right => BinaryTree[T]), right => IntTree.new(value => 42, left => BinaryTree[T], right => BinaryTree[T]))
Phix[edit]
Phix is naturally polymorphic, with optional static typing.
The standard builtin type hierarcy is trivial:
<-------- object ---------> | | +-atom +-sequence | | +-integer +-string
User defined types are subclasses of those.
If you declare a parameter as type integer then obviously it is optimised for that, and crashes when given something else (with a clear human-readable message and file name/line number). If you declare a parameter as type object then it can handle anything you can throw at it - integers, floats, strings, or (deeply) nested sequences.
Of course many builtin routines are naturally generic, such as sort and print.
Most programming languages would throw a hissy fit if you tried to sort (or print) a mixed collection of strings and integers, but not Phix:
?sort(shuffle({5,"oranges",6,"apples",7}))
- Output:
{5,6,7,"apples","oranges"}
For comparison purposes (and because this entry looked a bit sparse without it) this is the D example from this page translated to Phix.
Note that tmap has to be a function rather than a procedure with a reference parameter, but this still achieves pass-by-reference/in-situ updates, mainly because root is a local rather than global/static, and is the target of (aka assigned to/overwritten on return from) the top-level tmap() call, and yet also manages the C#/Dart/Kotlin thing (by which I am referring to those specific examples on this page) of creating a whole new tree, simply because lhs assignee!=rhs reference (aka root2!=root) in "root2 = tmap(root,rid)", not that such a "deep clone" would (barring a few dirty low-level tricks) behave any differently to "root2=root", which is "a straightforward shared reference with cow semantics".
enum data, left, right
function tmap(sequence tree, integer rid)
tree[data] = call_func(rid,{tree[data]})
if tree[left]!=null then tree[left] = tmap(tree[left],rid) end if
if tree[right]!=null then tree[right] = tmap(tree[right],rid) end if
return tree
end function
function newnode(object v)
return {v,null,null}
end function
function add10(atom x) return x+10 end function
procedure main()
object root = newnode(1.00)
-- Add some nodes.
root[left] = newnode(1.10)
root[left][left] = newnode(1.11)
root[left][right] = newnode(1.12)
root[right] = newnode(1.20)
root[right][left] = newnode(1.21)
root[right][right] = newnode(1.22)
-- Now the tree has seven nodes.
-- Show the whole tree.
ppOpt({pp_Nest,2})
pp(root)
-- Modify the whole tree.
root = tmap(root,routine_id("add10"))
-- Create a whole new tree.
object root2 = tmap(root,rid)
-- Show the whole tree again.
pp(root)
end procedure
main()
- Output:
{1, {1.1, {1.11,0,0}, {1.12,0,0}}, {1.2, {1.21,0,0}, {1.22,0,0}}} {11, {11.1, {11.11,0,0}, {11.12,0,0}}, {11.2, {11.21,0,0}, {11.22,0,0}}}
PicoLisp[edit]
PicoLisp is dynamically-typed, so in principle every function is polymetric over its arguments. It is up to the function to decide what to do with them. A function traversing a tree, modifying the nodes in-place (no matter what the type of the node is):
(de mapTree (Tree Fun)
(set Tree (Fun (car Tree)))
(and (cadr Tree) (mapTree @ Fun))
(and (cddr Tree) (mapTree @ Fun)) )
Test:
(balance 'MyTree (range 1 7)) # Create a tree of numbers -> NIL : (view MyTree T) # Display it 7 6 5 4 3 2 1 -> NIL : (mapTree MyTree inc) # Increment all nodes -> NIL : (view MyTree T) # Display the tree 8 7 6 5 4 3 2 -> NIL : (balance 'MyTree '("a" "b" "c" "d" "e" "f" "g")) # Create a tree of strings -> NIL : (view MyTree T) # Display it "g" "f" "e" "d" "c" "b" "a" -> NIL : (mapTree MyTree uppc) # Convert all nodes to upper case -> NIL : (view MyTree T) # Display the tree "G" "F" "E" "D" "C" "B" "A" -> NIL
Racket[edit]
Typed Racket has parametric polymorphism:
#lang typed/racket
(define-type (Tree A) (U False (Node A)))
(struct: (A) Node
([val : A] [left : (Tree A)] [right : (Tree A)])
#:transparent)
(: tree-map (All (A B) (A -> B) (Tree A) -> (Tree B)))
(define (tree-map f tree)
(match tree
[#f #f]
[(Node val left right)
(Node (f val) (tree-map f left) (tree-map f right))]))
;; unit tests
(require typed/rackunit)
(check-equal?
(tree-map add1 (Node 5 (Node 3 #f #f) #f))
(Node 6 (Node 4 #f #f) #f))
REXX[edit]
This REXX programming example is modeled after the D example.
/*REXX program demonstrates (with displays) a method of parametric polymorphism. */
call newRoot 1.00, 3 /*new root, and also indicate 3 stems.*/
/* [↓] no need to label the stems. */
call addStem 1.10 /*a new stem and its initial value. */
call addStem 1.11 /*" " " " " " " */
call addStem 1.12 /*" " " " " " " */
call addStem 1.20 /*" " " " " " " */
call addStem 1.21 /*" " " " " " " */
call addStem 1.22 /*" " " " " " " */
call sayNodes /*display some nicely formatted values.*/
call modRoot 50 /*modRoot will add fifty to all stems. */
call sayNodes /*display some nicely formatted values.*/
exit /*stick a fork in it, we're all done. */
/*──────────────────────────────────────────────────────────────────────────────────────*/
addStem: nodes=nodes + 1; do j=1 for stems; root.nodes.j=arg(1); end; return
newRoot: parse arg @,stems; nodes=-1; call addStem copies('═',9); call addStem @; return
/*──────────────────────────────────────────────────────────────────────────────────────*/
modRoot: arg #; do j=1 for nodes /*traipse through all the defined nodes*/
do k=1 for stems
if datatype(root.j.k,'N') then root.j.k=root.j.k + # /*add bias.*/
end /*k*/ /* [↑] only add if numeric stem value.*/
end /*j*/
return
/*──────────────────────────────────────────────────────────────────────────────────────*/
sayNodes: w=9; do j=0 to nodes; _= /*ensure each of the nodes gets shown. */
do k=1 for stems; _=_ center(root.j.k, w) /*concatenate a node*/
end /*k*/
$=word('node='j, 1 + (j<1) ) /*define a label for this line's output*/
say center($, w) substr(_, 2) /*ignore 1st (leading) blank which was */
end /*j*/ /* [↑] caused by concatenation.*/
say /*show a blank line to separate outputs*/
return /* [↑] extreme indentation to terminal*/
- output when using the default input:
═════════ ═════════ ═════════ node=1 1.00 1.00 1.00 node=2 1.10 1.10 1.10 node=3 1.11 1.11 1.11 node=4 1.12 1.12 1.12 node=5 1.20 1.20 1.20 node=6 1.21 1.21 1.21 node=7 1.22 1.22 1.22 ═════════ ═════════ ═════════ node=1 51.00 51.00 51.00 node=2 51.10 51.10 51.10 node=3 51.11 51.11 51.11 node=4 51.12 51.12 51.12 node=5 51.20 51.20 51.20 node=6 51.21 51.21 51.21 node=7 51.22 51.22 51.22
Rust[edit]
struct TreeNode<T> {
value: T,
left: Option<Box<TreeNode<T>>>,
right: Option<Box<TreeNode<T>>>,
}
impl <T> TreeNode<T> {
fn my_map<U,F>(&self, f: &F) -> TreeNode<U> where
F: Fn(&T) -> U {
TreeNode {
value: f(&self.value),
left: match self.left {
None => None,
Some(ref n) => Some(Box::new(n.my_map(f))),
},
right: match self.right {
None => None,
Some(ref n) => Some(Box::new(n.my_map(f))),
},
}
}
}
fn main() {
let root = TreeNode {
value: 3,
left: Some(Box::new(TreeNode {
value: 55,
left: None,
right: None,
})),
right: Some(Box::new(TreeNode {
value: 234,
left: Some(Box::new(TreeNode {
value: 0,
left: None,
right: None,
})),
right: None,
})),
};
root.my_map(&|x| { println!("{}" , x)});
println!("---------------");
let new_root = root.my_map(&|x| *x as f64 * 333.333f64);
new_root.my_map(&|x| { println!("{}" , x) });
}
Scala[edit]
There's much to be said about parametric polymorphism in Scala. Let's first see the example in question:
case class Tree[+A](value: A, left: Option[Tree[A]], right: Option[Tree[A]]) {
def map[B](f: A => B): Tree[B] =
Tree(f(value), left map (_.map(f)), right map (_.map(f)))
}
Note that the type parameter of the class Tree, [+A]. The plus sign indicates that Tree is co-variant on A. That means Tree[X] will be a subtype of Tree[Y] if X is a subtype of Y. For example:
class Employee(val name: String)
class Manager(name: String) extends Employee(name)
val t = Tree(new Manager("PHB"), None, None)
val t2: Tree[Employee] = t
The second assignment is legal because t is of type Tree[Manager], and since Manager is a subclass of Employee, then Tree[Manager] is a subtype of Tree[Employee].
Another possible variance is the contra-variance. For instance, consider the following example:
def toName(e: Employee) = e.name
val treeOfNames = t.map(toName)
This works, even though map is expecting a function from Manager into something, but toName is a function of Employee into String, and Employee is a supertype, not a subtype, of Manager. It works because functions have the following definition in Scala:
trait Function1[-T1, +R]
The minus sign indicates that this trait is contra-variant in T1, which happens to be the type of the argument of the function. In other words, it tell us that, Employee => String is a subtype of Manager => String, because Employee is a supertype of Manager. While the concept of contra-variance is not intuitive, it should be clear to anyone that toName can handle arguments of type Manager, but, were not for the contra-variance, it would not be usable with a Tree[Manager].
Let's add another method to Tree to see another concept:
case class Tree[+A](value: A, left: Option[Tree[A]], right: Option[Tree[A]]) {
def map[B](f: A => B): Tree[B] =
Tree(f(value), left map (_.map(f)), right map (_.map(f)))
def find[B >: A](what: B): Boolean =
(value == what) || left.map(_.find(what)).getOrElse(false) || right.map(_.find(what)).getOrElse(false)
}
The type parameter of find is [B >: A]. That means the type is some B, as long as that B is a supertype of A. If I tried to declare what: A, Scala would not accept it. To understand why, let's consider the following code:
if (t2.find(new Employee("Dilbert")))
println("Call Catbert!")
Here we have find receiving an argument of type Employee, even though the tree it was defined on is of type Manager. The co-variance of Tree means a situation such as this is possible.
There is also an operator <:, with the opposite meaning of >:.
Finally, Scala also allows abstract types. Abtract types are similar to abstract methods: they have to be defined when a class is inherited. One simple example would be:
trait DFA {
type Element
val map = new collection.mutable.HashMap[Element, DFA]()
}
A concrete class wishing to inherit from DFA would need to define Element. Abstract types aren't all that different from type parameters. Mainly, they ensure that the type will be selected in the definition site (the declaration of the concrete class), and not at the usage site (instantiation of the concrete class). The difference is mainly one of style, though.
Seed7[edit]
In Seed7 types like array and struct are not built-in, but are defined with parametric polymorphism. In the Seed7 documentation the terms "template" and "function with type parameters and type result" are used instead of "parametric polymorphism". E.g.: array is actually a function, which takes an element type as parameter and returns a type. To concentrate on the essentials, the example below defines the type container as array. Note that the map function has three parameters: aContainer, aVariable, and aFunc. When map is called aVariable is used also in the actual parameter of aFunc: map(container1, num, num + 1)
$ include "seed7_05.s7i";
const func type: container (in type: elemType) is func
result
var type: container is void;
begin
container := array elemType;
global
const func container: map (in container: aContainer,
inout elemType: aVariable, ref func elemType: aFunc) is func
result
var container: mapResult is container.value;
begin
for aVariable range aContainer do
mapResult &:= aFunc;
end for;
end func;
end global;
end func;
const type: intContainer is container(integer);
var intContainer: container1 is [] (1, 2, 4, 6, 10, 12, 16, 18, 22);
var intContainer: container2 is 0 times 0;
const proc: main is func
local
var integer: num is 0;
begin
container2 := map(container1, num, num + 1);
for num range container2 do
write(num <& " ");
end for;
writeln;
end func;
Output:
2 3 5 7 11 13 17 19 23
Standard ML[edit]
datatype 'a tree = Empty | Node of 'a * 'a tree * 'a tree
(** val map_tree = fn : ('a -> 'b) -> 'a tree -> 'b tree *)
fun map_tree f Empty = Empty
| map_tree f (Node (x,l,r)) = Node (f x, map_tree f l, map_tree f r)
Swift[edit]
class Tree<T> {
var value: T?
var left: Tree<T>?
var right: Tree<T>?
func replaceAll(value: T?) {
self.value = value
left?.replaceAll(value)
right?.replaceAll(value)
}
}
Another version based on Algebraic Data Types:
enum Tree<T> {
case Empty
indirect case Node(T, Tree<T>, Tree<T>)
func map<U>(f : T -> U) -> Tree<U> {
switch(self) {
case .Empty : return .Empty
case let .Node(x, l, r): return .Node(f(x), l.map(f), r.map(f))
}
}
}
Ursala[edit]
Types are first class entities and functions to construct or operate on them may be defined routinely. A parameterized binary tree type can be defined using a syntax for anonymous recursion in type expressions as in this example,
binary_tree_of "node-type" = "node-type"%hhhhWZAZ
or by way of a recurrence solved using a fixed point combinator imported from a library as shown below.
#import tag
#fix general_type_fixer 1
binary_tree_of "node-type" = ("node-type",(binary_tree_of "node-type")%Z)%drWZwlwAZ
(The
%Z type operator constructs a "maybe" type, i.e., the free union of its operand type
with the null value. Others shown above are standard stack manipulation primitives, e.g.
d (dup) and
w (swap), used to build the type expression tree.) At the other extreme, one may construct an equivalent parameterized type in
point-free form.
binary_tree_of = %-hhhhWZAZ
A mapping combinator over this type can be defined with pattern matching like this
binary_tree_map "f" = ~&a^& ^A/"f"@an ~&amPfamPWB
or in point free form like this.
binary_tree_map = ~&a^&+ ^A\~&amPfamPWB+ @an
Here is a test program defining a type of binary trees of strings, and a function that concatenates each node with itself.
string_tree = binary_tree_of %s
x = 'foo': ('bar': (),'baz': ())
#cast string_tree
example = (binary_tree_map "s". "s"--"s") x
Type signatures are not necessarily associated with function declarations, but
have uses in the other contexts such as assertions and compiler directives
(e.g.,
#cast). Here is the output.
'foofoo': ('barbar': (),'bazbaz': ())
Visual Prolog[edit]
domains
tree{Type} = branch(tree{Type} Left, tree{Type} Right); leaf(Type Value).
class predicates
treewalk : (tree{X},function{X,Y}) -> tree{Y} procedure (i,i).
clauses
treewalk(branch(Left,Right),Func) = branch(NewLeft,NewRight) :-
NewLeft = treewalk(Left,Func), NewRight = treewalk(Right,Func).
treewalk(leaf(Value),Func) = leaf(X) :-
X = Func(Value).
run():-
init(),
X = branch(leaf(2), branch(leaf(3),leaf(4))),
Y = treewalk(X,addone),
write(Y),
succeed().
- Programming Tasks
- Basic language learning
- Type System
- Ada
- C
- C++
- C sharp
- Ceylon
- Clean
- Clojure/Omit
- Common Lisp
- D
- Dart
- E
- F Sharp
- Fortran
- Go
- Groovy
- Haskell
- Inform 7
- Icon
- Unicon
- J
- Java
- Julia
- Kotlin
- Mercury
- Nim
- Objective-C
- OCaml
- Oforth/Omit
- Perl 6
- Phix
- PicoLisp
- Racket
- REXX
- Rust
- Scala
- Seed7
- Standard ML
- Swift
- Ursala
- Visual Prolog
- Axe/Omit
- C/Omit
- Factor/Omit
- J/Omit
- JavaScript/Omit
- M4/Omit
- Maxima/Omit
- Oz/Omit
- Perl/Omit
- Python/Omit
- Ruby/Omit
- Tcl/Omit
- TI-83 BASIC/Omit
- TI-89 BASIC/Omit
- LaTeX/Omit
- Retro/Omit
- Zkl/Omit | http://rosettacode.org/wiki/Parametric_polymorphism | CC-MAIN-2018-51 | refinedweb | 6,722 | 56.25 |
hi i'm new here and new to c++ (only 2 classes deep).
i am using windows 2000 and i need to connect to my schools unix computer ( SunOS 5.8) via telnet.
when i connect and write my program i hit Ctrl+x+c to save.
Then i hit g++ (filename) to check my program.
after i do that i recive the message.
"ID: Fatal: file (filename) unknown file type
ID: Fatal: file processing errors. No output written to a.out
collect2: ld returned 1 exit status
any ideas because i'm stumped heres my first program.
#include <iostream>
using namespace std;
int main ()
{
cout<<"Welcome to c++ programming"<<endl;
return 0;
}
if i were at school i can save compile and execute but, from home i can just save and i really need to check my work.
thanks guys
j | http://forums.devshed.com/programming/84934-telnet-execute-last-post.html | CC-MAIN-2017-26 | refinedweb | 142 | 84.47 |
- NAME
- SYNOPSIS
- DESCRIPTION
- NOTATION
- ERROR HANDLING
- CONSTANTS
- FUNCTIONS
- CHESS PIECES
- CHESS POSITIONS
- BUGS
- SEE ALSO
- AUTHOR
NAME
Games::Chess - represent chess positions and games
SYNOPSIS
use Games::Chess qw(:constants); my $p = Games::Chess::Position->new; $p->at(0,0,BLACK,ROOK); $p->at(7,7,WHITE,ROOK); print $p->to_text;
DESCRIPTION
The
Games::Chess package provides the class
Games::Chess::Piece to represent chess pieces, and the class
Games::Chess::Position to represent a position in a chess game. Objects can be instantiated from data in standard formats and exported to these formats.
NOTATION
See Games::Chess::PGN for full details of the notations.
- SAN
Standard Algebraic Notation. The modern international notation for chess games. For example,
1. e4 e5 2. f4 exf4 3. Nf3 g5
- FEN
- PGN
Portable Game Notation. A notation for chess games, including the moves, commentary, variations, and metadata such as the players, the event, the round number, and the date of the match. For example,
- EPD
Extended Position Description. An extensible notation based on FEN. Intended for data interchange between chess-playing programs and for the construction of opening databases. Not used by
Games::Chess.
ERROR HANDLING.
CONSTANTS.)
FUNCTIONS
To import all these functions into your namespace, include the tag
:functions in the
use statement, for example
use Games::Chess qw(:functions);
- algebraic_to_xy($square)
If $square represents a square in Standard Algebraic Notation (from a1 to h8), return a list of two elements ($x,$y) giving the coordinates of that square, from (0,0) to (7,7). Return undefined otherwise.
- colour_valid($colour)
Return 1 if $colour is a valid colour value,
WHITEor
BLACK. Return undefined otherwise.
- debug($level)
Set the debugging level. See "ERROR HANDLING".
- errmsg
Return a description of the most recent error in any of the
Games::Chess::*packages, or the empty string if no errors have occurred. See "ERROR HANDLING".
- halfmove_valid($halfmove)
Return 1 if $halfmove is a valid value for the halfmove clock, which counts the number of ply (moves by either player) since the last pawn move or capture. Return undefined otherwise.
- move_valid($move)
Return 1 if $move is a valid value for the full move count (the number of black moves since the start of the game, plus 1). Return undefined otherwise.
- piece_valid($piece)
Return 1 if $piece is a valid piece value, from
PAWNto
KING. Return undefined otherwise.
- xy_to_algebraic($x,$y)
If ($x,$y) is a valid board position, from (0,0) to (7,7), return the algebraic notation for that square, from
a1to
h8. Return undefined otherwise.
- xy_valid($x,$y)
Return 1 if ($x,$y) is a valid board position, from (0,0) to (7,7). Return undefined otherwise.
CHESS PIECES
A chess piece, or an empty square on a chess board, is represented as an object belonging to the
Games::Chess::Piece class.
PIECE REPRESENTATION).
PIECE CONSTRUCTORS
- Piece->new
With no argument, return an object representing an empty square.
- Piece->new($piece)
With a single argument that is a member of the
Games::Chess::Piececlass, return an object representing the same piece as $piece.
- Piece->new($number)
With a numeric argument, return an object representing a piece with that encoding. Return undefined if $number is not an integer in the range 0 to 255.
- Piece->new($character).
- Piece->new($color,$piece)
Return an object representing the piece described. Return undefined if $color is not WHITE or BLACK, or $piece is not PAWN, KNIGHT, BISHOP, ROOK, QUEEN or KING.
PIECE METHODS
- Piece->code
Return the FEN code for the piece as a single character (PNBRQKpnbrqk), or a space if the piece represents an empty square.
- Piece->colour
Return
EMPTY,
WHITEor
BLACKas appropriate.
- Piece->colour_name
Return "empty", "white" or "black" as appropriate.
- Piece->name
Return a string describing the piece, for example "black knight", or "white king", or "empty square".
- Piece->piece
Return
EMPTY,
PAWN,
KNIGHT,
BISHOP,
ROOK,
QUEEN, or
KINGas appropriate.
- Piece->piece_name
Return "square", "pawn", "knight", "bishop", "rook", "queen", or "king" as appropriate.
CHESS POSITIONS
A chess position represented as an object belonging to the
Games::Chess::Position class.
POSITION REPRESENTATION.
POSITION CONSTRUCTORS
- Position->new
With no argument, return an object representing a position with all 16 pieces in their initial positions.
- Position->new($position)
With a single argument that is a member of the
Games::Chess::Positionclass, return a copy of $position.
- Position->new($FEN).
POSITION METHODS
- Position->at($x,$y)
If ($x,$y) is a valid board position, return an object of class
Games::Chess::Piecerepresenting the square at ($x,$y). Return undefined otherwise.
- Position->at($x,$y,@piece)
If ($x,$y) is a valid board position, and @piece would be valid as arguments to the
Games::Chess::Piececonstructor (see "PIECE CONSTRUCTORS"), put the specified piece on the specified square and return 1. Return undefined otherwise.
- Position->board
Return the board position as a vector of 64 bytes.
- Position->can_castle($colour,$piece)
If $colour is a valid colour, and $piece is
KINGor
QUEEN, return true if the player given by $colour can castle on the side given by $piece, false if they cannot. Return undefined otherwise.
- Position->can_castle($colour,$piece,$can_castle)
If $colour is a valid colour, and $piece is
KINGor
QUEEN, set the castling availability for the player given by $colour and the side given by $piece to the truth value of $can_castle, and return 1. Return undefined otherwise.
- Position->clear($x,$y)
If ($x,$y) is a valid board position, clear the specified square and return 1. Return undefined otherwise. Equivalent to
Position->at($x,$y,Piece->new);
- Position->en_passant
Return the en passant target square as the list (FILE,RANK), or undefined if there is no en passant target square.
- Position->en_passant($x,$y)
If ($x,$y) is a valid board position, set the en passant target square to ($x,$y) and return 1. Return undefined otherwise.
- Position->halfmove_clock
Return the halfmove clock (the number of ply since the last pawn move or capture).
- Position->halfmove_clock($halfmove)
If $halfmove is a valid halfmove clock value, set the halfmove clock to $halfmove and return 1. Return undefined otherwise.
- Position->move_number
Return the move number (the number of full moves since the beginning of the game, plus 1).
- Position->move_number($move)
If $move is a valid move number, set the move number to $move and return 1. Return undefined otherwise.
- Position->player_to_move
Return
WHITEif white is to move,
BLACKotherwise.
- Position->player_to_move($colour)
If $colour is
WHITEor
BLACK, set the player to move to $colour and return 1. Return undefined otherwise.
- Position->to_FEN
- Position->to_GIF(option => value, ...)
Return a string representing the board position as a GIF (an image in Graphics Interchange Format). The following options can be passed to control the image:
- border
The width of the black border around the chess board, in pixels (defaults to 2).
- letters
If true, draw a margin to the left of the board containing rank numbers, and a margin below the board containing file letters (defaults to true).
- bmargin
The height of the margin to draw below the board (containing the file letters), in pixels (defaults to 20). Ignored if the
lettersoption is false.
- lmargin
The width of the margin to draw to the left of the board (containing the rank numbers), in pixels (defaults to 20). Ignored if the
lettersoption is false.
- font
A reference to a
GD::Fontobject describing the font to use to draw the rank numbers and file letters (defaults to
GD::Font::Giant). Ignored if the
lettersoption is false.
- Position->to_text
- Position->validate
Apply some simple validation tests to the position. Return 1 if the position passes the tests, undefined otherwise. If the position fails to validate, the reason for failure can be found by calling
Games::Chess::errmsg.
These tests are applied:
The total of pawns plus obviously promoted pieces (for example, a second queen or a third rook) must be no more than 8 on each side.
Each side must have exactly one king.
There must be no pawns on ranks 1 and 8.
The en passant target square, if specified, must be plausible. That is, if white is to move, the ep square must be on rank 6, with a black pawn on rank 5 and empty squares on ranks 6 and 7 in that file. If black is to move, the ep square must be on rank 3, with a white pawn on rank 4 and empty squares on ranks 2 and 3 in that file.
The castling availability must be plausible. For example, if white can castle queenside, there must be a white rook on a1 and a white king on e1.
The halfmove count must be between 0 and 50 (it can't be greater than 50 or the game would have been drawn).
The full move number must be 1 or more.
BUGS
No representation of chess moves.
No representation of chess games and no support for PGN.
No simple way to clear the en passant target square.
No way to choose a different font for the chess pieces when creating a GIF (if anyone knows an easy way to do this, I'd love to know about it).
No way to choose the size of the chess pieces when creating a GIF.
SEE ALSO)
AUTHOR
Gareth Rees
<[email protected]>.
This module is free software; you can redistribute it and/or modify it under the same terms as Perl itself. | https://metacpan.org/pod/Games::Chess | CC-MAIN-2019-43 | refinedweb | 1,560 | 57.27 |
The KMP is a pattern matching algorithm which searches for occurrences of a "word" W within a main "text string" S by employing the observation that when a mismatch occurs, we have the sufficient information to determine where the next match could begin.We take advantage of this information to avoid matching the characters that we know will anyway match.The worst case complexity for searching a pattern reduces to O(n).
Algorithm
This algorithm is a two step process.First we create a auxiliary array lps[] and then use this array for searching the pattern.
Preprocessing :
Searching
We keep matching characters txt[i] and pat[j] and keep incrementing i and j while pat[j] and txt[i] keep matching.
When we see a mismatch,we know that characters pat[0..j-1] match with txt[i-j+1…i-1].We also know that lps[j-1] is count of characters of pat[0…j-1] that are both proper prefix and suffix.From this we can conclude that we do not need to match these lps[j-1] characters with txt[i-j…i-1] because we know that these characters will match anyway.
Implementaion in Java
public class KMP { public static void main(String[] args) { // TODO Auto-generated method stub String str = "abcabdabc"; String pattern = "abc"; KMP obj = new KMP(); System.out.println(obj.patternExistKMP(str.toCharArray(), pattern.toCharArray())); } public int[] computeLPS(char[] str){ int lps[] = new int[str.length]; lps[0] = 0; int j = 0; for(int i =1;i<str.length;i++){ if(str[j] == str[i]){ lps[i] = j+1; j++; i++; }else{ if(j!=0){ j = lps[j-1]; }else{ lps[i] = j+1; i++; } } } return lps; } public boolean patternExistKMP(char[] text,char[] pat){ int[] lps = computeLPS(pat); int i=0,j=0; while(i<text.length && j<pat.length){ if(text[i] == pat[j]){ i++; j++; }else{ if(j!=0){ j = lps[j-1]; }else{ i++; } } } if(j==pat.length) return true; return false; } } | https://sodocumentation.net/algorithm/topic/10811/knuth-morris-pratt--kmp--algorithm | CC-MAIN-2022-27 | refinedweb | 333 | 57.16 |
Arduino Home Basketball Hoop Score Detection System A.k.a. ScoreKeeper
My little sister and I found this indoor basketball hoop (pictured above) at a dumpster a few weeks ago. We were coming home from church just as two people were unloading it from their truck. We decided to grab it, along with the air hockey table the couple were throwing out. We figured we would do some hacking so, we started to think, "Hmm, I wonder if there is a way to detect when a ball has gone through the hoop?" Our first thought was to use an ultrasonic distance sensor and place it right below the rim. I was a little worried because I wasn't sure if the net would occlude the distance sensor and make the results erratic. It turns out that this works pretty well actually. The sensor peers through a gap in the net and easily detects when a ball goes through.
Next, we needed to decide what we were going to use as the scoreboard. Our first thought was to buy a large seven segment display. We found some online, but a relatively large display (5 inches or taller) gets kind of expense (~$15). And we needed 2 of them, plus another 3 for another project we were planning. Thought there had to be another alternative. Then we found this Instructable by Kurt E. Clothier. Kurt used individual LEDs to make a custom 7-segment display. He was able to diffuse the light of the each LED to properly created a lit "segment" using hot glue. This is the approach we have taken. Thanks Kurt!
Please enjoy this step by step Instructable on how to set up your own Score Detection Hoop System or Scorekeeper.
Step 1: Gather Your Tools
1 x indoor basketball hoop
1 x breadboard (perfboard would be better, there are a ton of connections so it would be best to solder everything together)
1 x indoor basketball
1 x Arduino (I used an Arduino Pro Mini, but any Arduino will do)
1 x battery for external power
1 x 16-Channel Multiplexer (I used CD74HC4067)
1 x 200 Ohm - 1 kOhm resistor
1 x Ultrasonic distance sensor (I used HC-SR04)
28 x LEDs
Lots of wire
Hot glue
14 pieces of aluminum foil
Appropriate material for the baseboard
NOTE: the supply voltage for the multiplexer and distance sensor are 5 V. I say this because most of the times, when we apply external power to the Arduino, it is with a 9 V battery. Just make sure you are taking power from the 5 V pin (or VCC if you use the Pro Mini), not Vin (or RAW if you use the Pro Mini).
Step 2: Preparing Your Base Board for the LED
You want a sturdy surface for the baseboard. We chose cardboard, but you should use something more fire-safe. It is a good idea to draw out the 7-segment display and map out where you are going to put the LEDs. After you have decided where you are placing your LEDs, go ahead and glue the aluminum foil onto the baseboard. The aluminum foil should outline the 7-segment. The LEDs will be oriented at a 90 degree angle relative to your direction of viewing. So, most of the light is not pointing to you. The aluminum foil reflects the light back to your eye.
Step 3: Placing Your LEDs
Each segment has two LEDs at opposing ends of the segment. The LEDs will are bent at a 90-degree angle and point towards the inside of the segment. Cut out or bore some holes for your LEDs. DO NOT ALLOW THE LEADS OF THE LED TO COME IN CONTACT WITH THE ALUMINUM FOIL. Aluminum foil is a conductor so it will short out the LEDs. After you have your LEDs placed correctly, it is time to lay the hot glue. You want hot glue to be over top of the bulb of the LEDs and between the LEDs, effectively connecting the two LEDs. The hot glue diffuses the light along the entire segment.
Step 4: Solder Your LEDs
This is the tedious part. Solder wire to each lead of the LED. To make this a little easier, You could solder all the ground pins together and then make a single connection to your power supply ground. We connected the two LEDs that make up a single segment in parallel. 5: Control Logic for 7-Segment Display
There are many ways to control a 7-segment display. The easiest would be to wire each LED to a digital pin on the Arduino. This is undesirable for many reasons. First of all, you would not have enough digital pins to wire to each segment or provide enough current to light each LED simultaneously. To counter these two problems, the common technique is to multiplex the segments. Usually, this is done with shift registers, but we decided to use an analog multiplexer instead. A multiplexer allows a single input to be sent to several different outputs by controlling a few logic selector pins. Note, the multiplexer can only output to one channel at a time. The multiplexer we used has 16 channels. The active channel is determined by 4 selector pins (S0-S3). The trick into making the multiplexer display a number on our custom 7-segment display is to quickly change the channels that the multiplexer is outputting to. For example, if we wanted to display the number 2, we would need to light up segments A, B, D, E, and G. With our multiplexer, we would output to the channels that are attached to each segment. We would need to switch from one channel to the next so quickly that the human eye will see all the channels lit at once and subsequently, the number "2."
Oh, I forgot to mention, I wrote an Arduino library for the multiplexer. I needed a multiplexer for this and other projects so I figured that I would go ahead and write a library for it. The multiplexer library is for easy control of a single multiplexer with N number of channels up to 32 channels. It was meant to be extremely simple and to the point. There are plenty of other libraries on the web if you want to use a different one. Keep in mind that you would need to modify the scoreKeeper.ino code that we will be giving you. Please add Mux.h and Mux.cpconsult the Arduino website on how to import a library if you do not know how to do so.
The main functions of the library that you need to be concerned with is the constructor and the open() function. The constructor initializes a single multiplexer. The open() function takes a single parameter which is the channel that you would like to output to. The library is sufficiently commented if you have any questions.
Step 6: Wire Up the Multiplexer
The wiring for the multiplexer is simple. The selector pins S0, S1, S2, and S3 are wired to digital pins 2, 3, 4, and 5 respectively. The selector pins determine which channel of the multiplexer is active. Additionally, the enable pin (E) is wired to digital pin 6. The enable pin, enables or disables the multiplexer from outputting to a channel. If the enable pin is high, the multiplexer is disabled. If the enable pin is low, the multiplexer is enabled. You may be tempted to leave this unwired or permanently wired to power or ground, do not. Otherwise, you will get a sporadic display.
The common input pin (the signal that will be sent to each LED on our custom 7-segment display), is wired to our positive voltage rail via a resistor. Notice that when we set up the LEDs we did not put a resistor in series to limit the current. This is because we are using a resistor at the common input pin of the multiplexer to the limit the current. This way, we only need on resistor the entire display.
Step 7: Wiring Up 7-Segment Display
Remember, the two LEDs that make up a single segment are wired in parallel. Then, connect each segment to a single output channel on the multiplexer. I wired the segments in the following order: for the tens digits, segments A-G are wired to channels 8-14 (respectively) on the multiplexer. For the ones digit, segments A-G are wired to channels 0-6 (respectively) on the multiplexer. If you change these connections for any reason, you must modify the code. 8: Ultrasonic Distance Sensor
Setting up the ultrasonic distance sensor is pretty straightforward. There are a ton of tutorials online for the sensor so I will just give you the information. The concept behind the sensor is pretty simple. The "trigger" pin sends out a sound wave. The sound wave bounces off the closest object and back to the sensor and hits the "echo" pin. Based upon the time it took for the ping to be sent and received and the speed of the sound wave, we can calculate the distance the object is from the sensor. The code for this is also pretty simple.
The distance sensor is placed below the rim. There just happens to be one of those spaces in between the next so that the sensor is not occluded.
/ }
Step 9: Code
Controlling the Multiplexer
Download the code from the ScoreKeep GitHub Repository, or from the attached file below.
To summarize, the segment is controlled using a multiplexer which outputs to a different segment every millisecond. I used a timer interrupt to ensure the timing was precise. A timer interrupt does exactly what it says. It interrupts the code at precise time intervals to execute commands written in the accompanying interrupt service routine (ISR). A good tutorial on Arduino timer interrupts can be found here.
The timer interrupt in this code outputs to a single segment using the multiplexer. On the next iteration of the ISR, the code outputs to another segment and so on. So, if we would like to display the number 22 (segments A, B, D, E, and G of both the tens and ones digit), the code would output to segment A of the ones digit on the first iteration, then to segment B, then segment D, then segment E, then segment G, all of the ones digit. On the next iteration, the we output to segment A of the tens digit, then segment B, then segment D, then segment E, then segment G. Afterwards, we start from segment A of the ones digit again and repeat until the number we need to display changes.
//Interrupt Service Routine //Displays the numbers for the score on the 7-segment display. //It lights a single segment every 1 ms incrementing the segment index every iteration. ISR(TIMER1_COMPA_vect) { //Ones digit if (index < 8) { if (bitRead(HEXvalues[score%10],index)) { myMux.open(index); } index++; } //Tens digit else if (index >= 8 && index < 16) { if (bitRead(HEXvalues[score/10],index-8)) { myMux.open(index); } index++; } //resets index else { index = 0; } }
Detecting a score
Using the distance sensor code, we use the distance returned and check if it is below a "scoreThreshold." If the distance to the next closest object gets really close, then it must be a ball going through the hoop. Additionally, I have added a refractory period for detecting a shot. This means, when a shot is detected at time t, then another shot cannot be registered until a certain amount of time after time t. I think I chose 2 seconds, but you may choose to increase or decrease this as you see fit. This is done because the loop() function of the Arduino runs so quickly that it would register the same shot more than once as the ball moves through the hoop.
//boolean detectScore() //@return true if shot is detected, false if otherwise // //Detects whether or not a shot was made by checking if the //distance from the ultrasonic distance sensor to the next closest //object is under the "threshold" used to determine when a shot //was made. boolean detectScore() { return (distance() <= scoreThreshold); } / }
Increment score
Increments the score by one point when a score is detected.
//void incrementScore() //Increments the score variable by 1. void incrementScore() { score += 1; } | http://www.instructables.com/id/Arduino-Home-Basketball-Hoop-Score-Detection-Syste/ | CC-MAIN-2017-26 | refinedweb | 2,068 | 71.55 |
Building Arduino
Last time, I talked about examining the assembly language output from gcc and showed an example of the output from both a Linux compiler and an AVR compiler like the one that the Arduino uses.
This led to the inevitable question: How can I get assembly output if I'm using the Arduino IDE? As far as I know, the answer is: you can't. At least, not directly. It is tempting to think of the Arduino IDE as just a text editor that calls the compiler on your behalf, but it does a lot more than that.
In an effort to hide a little complexity, the Arduino IDE cretes a lot more complexity, as described in the official documentation.
When you build an Arduino sketch (forgive me if I most often call it a program), the IDE merges all of your files into one big file. It puts an include file along with a bunch of prototypes for any functions you've written. The parse isn't very detailed, so if you use C++ features like default arguments or namespaces, you can expect this to break.
Once this file is complete, the IDE uses the specified board definition to elaborate an environment and call the avr-gcc compiler. After a successful build, it uses avrdude to upload the generated hex file.
You can see the actual compile steps if you open the Arduino's File | Preferences menu and select verbose compilation, but that doesn't really help you get assembly output or set other compiler options.
It stands to reason that anything the IDE can do, you can do too. There are a few alternate build tools for Arduino (I've talked about the Eclipse plug-in before). There is a good makefile available and that's probably a good starting point for changing compiler options without much fuss.
To use the makefile, you do have to modify the template a bit (the instructions are in the initial comments of the file). You can add additional compiler flags to the CEXTRA environment variable, or modify specific rules.
If you dig into the template, you can see the simple preprocessing it does to your code. I notice it doesn't build the function prototypes for you, so I assume if you use your own functions (outside of the Arduino-defined ones) you have to prototype them yourself, which isn't a bad idea anyway. Here's the preprocesing code:
test -d applet || mkdir applet echo '#include "Arduino.h"' > applet/$(TARGET).cpp cat $(TARGET).ino >> applet/$(TARGET).cpp echo 'extern "C" void __cxa_pure_virtual() { while (1) ; }' \ >> applet/$(TARGET).cpp cat $(ARDUINO_CORE)/main.cpp >> applet/$(TARGET).cpp
Once you have this level of control over the build process, you could experiment with different techniques, including the assembly language generation I mentioned last time. | http://www.drdobbs.com/embedded-systems/building-arduino/240168716?cid=SBX_ddj_related_commentary_default_parallel&itc=SBX_ddj_related_commentary_default_parallel | CC-MAIN-2017-09 | refinedweb | 472 | 61.06 |
Writing applications as modules
The Problem
I recently had to write a few command line applications of the form "command [options] args" that did some stuff, maybe printed a few things on screen and exited with a certain exit code. Nothing weird here.
These apps where part of a larger server system however and needed to use some of the modules from these servers for some of their work (in the name of code reuse obviously). A little later these apps would look nicer when they are separated out into their own modules as well (all hail code reuse again the apps can share code) and now it is really a short step to wanting to use some of the more general modules of the apps in the server.
I'm not sure that last step was very important, I think it all started when the app was split up in modules. But the last one made it very obvious: you can't just print random stuff to the user and decide to sys.exit() the thing anywhere you want. You want the code to behave like real modules: throw exceptions and not print anything on the terminal. That's not all, you also want to write unit tests for every bit of code too. Ultimately you need one main routine and you want to test that too, so even that can't exit the program.
The Solution
Executable Wrapper
The untestable code needs to remain to an absolute minimum. Code is untestable (ok, there are work arounds) when it sys.exit()s so I raise exceptions instead. I defined exceptions as such:
class Exit(Exception): def __init__(self, status): self.status = status def __str__(self): return 'Exit with status: %d' % self.status class ExitSucess(Exit): def __init__(self): Exit.__init__(self, 0) class ExitFailure(Exit): def __init__(self): Exit.__init__(self, 1)
This allows for a very small executable wrapper:
#!/usr/bin/env python import sys from mypackage.apps import myapp try: myapp.main() except myapp.Exit, e: sys.exit(e.status) except Exception, e: sys.stderr.write('INTERNAL ERROR: ' + str(e) + '\n') sys.exit(1)
The last detail is having main() defined as def mypackage.myapp.main(args=sys.argv) for testability, but that's really natural.
Messages for the user
These fall broadly in two categories: (1) short warning messages and (2) printing output. The second type is easily limited to a few very simple functions that do little more then just a few print statements, help() is an obvious example. For the first there is the logging module. In our case the logging module is used almost everywhere in the server code anyway, but even if it isn't it is a convenient way to be able to silence the logging. It's default behaviour is actually rather useful for an application, all that's needed is something like:
import logging logging.basicConfig(format='%(levelname)s: %(message)s')
The lovely thing about this that you get --verbose or --quiet almost for free.
Mixing it together
This one handles fatal problems the program detects. You could just do a logging.error(msg) followed by a raise ExitFailure. But this just doesn't look very nice, certainly not outside the main app module (mypackages.apps.myapp in this case). But a second option is to do something like
raise MyFatalError, 'message to user'
And have inside the main() another big try...except block:
try: workhorse(args) except FatalError, e: sys.stderr.write('ERROR: ' + str(e) + '\n') raise ExitFailure
Just make sure FatalError is the superclass of all your fatal exceptions and that they all have a decent __str__() method. The reason I like this is that it helps keeping fatal error messages consistent wherever you use them in the app, as all the work is done inside the __str__() methods.
One final note; when using the optparse module you can take two stances: (1) "optparse does the right thing and I don't need to debug it or write tests for it" or (2) "I'm a control freak". In the second case you can subclass the OptionParser and override it's error() and exit() methods to conform to your conventions.
2 comments:
PJE said...
Two points:
First, you don't need FatalError. Raising SystemExit("message") does the same thing already, and sys.exit(arg) raises SystemExit(arg), so you'll find that sys.exit("message") is sufficient. What's more, raising a SystemExit(arg) where arg is not an integer or None will automatically print arg to stderr as the program exits.

Thus, if you want to trap the exit, you can use "except SystemExit, v", and v.args[0] will be the message (or None, or whatever the sys.exit() argument was).
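A quick illustration of the behaviour described here (example added for clarity, not part of the original comment):

import sys

try:
    sys.exit('something went wrong')
except SystemExit, e:
    print e.args[0]   # 'something went wrong'
    print e.code      # same value; a non-int code is printed to stderr on exit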
Second, and this is just an FYI, setuptools will automatically wrap a "main" function in a script wrapper like this, and it automatically calls sys.exit() on the function's return value, so it can return an integer exit code, a string error message, or None to exit normally. So, if you like developing in this style, you might want to check that out, especially since it automatically handles making the appropriate wrapper script by platform: for example, it generates .exe wrappers on Windows, and plain (no file-extension) #! scripts everywhere else.
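The setuptools feature referred to is the console_scripts entry point; a setup.py along these lines (names purely illustrative) makes setuptools generate the wrapper script automatically:

from setuptools import setup, find_packages

setup(
    name='mypackage',
    version='0.1',
    packages=find_packages(),
    entry_points={
        'console_scripts': [
            # setuptools wraps main() and calls sys.exit() on its return value.
            'myapp = mypackage.apps.myapp:main',
        ],
    },
)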
Floris Bruynooghe said...
The SystemExit trick is rather cute, I didn't realise you could do that. However I'd still want to use subclasses of it, as it is nice to be able to distinguish between the errors in case the module they originate from is used outside the normal application. Also, overriding __str__() on FatalError allows more consistent error messages. Lastly, I get a nice list of all possible fatal errors in one place. I agree that it's all very subjective, however.
As for the second point, nice to know setuptools agrees with my way of building a script. I'll have to check that out sometime.
In a previous example, “Determining the number of lines in a TextArea control in Flex”, we saw how you could get the number of lines in an MX TextArea control by using the getTextField() method (in the mx_internal namespace) and the numLines property.

The following example shows how you can get the number of lines in a Spark RichEditableText control in Flex 4 by using the textContainerManager property (in the mx_internal namespace) and the numLines property.
Full code after the jump.
Since this example uses the mx_internal namespace, you can't always depend on this behavior to work in future versions of the Flex SDK. Use at your own risk.
15 thoughts on “Determining the number of lines in a Spark RichEditableText control in Flex 4”
Your examples are really good, but the font size used for the code snippet above is too small to read easily. If you made it the same size as the body text it would be better
I’m using IE8 on WinXP and Win7 and it looks pretty good there. Although I notice it does look a bit smaller in Firefox on Windows.
You’re using IE8?!?!?!
Please switch to a real browser ;)
Hey, you can mock my browser of choice, but I can actually read the font in IE8… Can’t say that about Firefox! *cough*
Thanks for your examples. It does worry me a little that we have to use namespaces to access something so fundamental as the number of lines in a text field. Do you agree that there should be this level of obscurity in these tasks or is this something that is likely to be changed before release?
@Richard Leggett,
This will not be changed before the Flex 4.0 release. There was a bug/enhancement filed for this already () but it was deferred. If you feel strongly about it, I'd suggest having you (and a few of your friends) vote on the issue and show Adobe it is important to you. In the meantime you'll have to use the mx_internal namespace or subclass the s:TextArea and/or s:RichEditableText controls to expose this as a public read-only getter (or whatever).
Peter
Thanks, Peter. I’ve voted for it. It’s no big deal for me to use mx_internal, I was just thinking about developers new to Flex. Thanks for your reply and all of these examples!
So close! First off, I’m a beginner that has learned a great deal from your site. Secondly, I have a problem very similar to this.
You’ve shown how to get the number of lines from a Text Area, but do you know how to get the unset height of a Label field or a RichText field AFTER the element has finished rendering? Currently all I’m getting is the height of a single line and not the entire element height. A thought is to be able to get the number of lines (numLines?) and multiply it by the $height, but there doesn’t appear to be a method to handle this.
Thank you greatly for your time and help!
hey Peter,
I’m trying to export and then reimport html text from your above example but I seem to get extra carriage returns – any ideas?
<s:TextArea id="" />
<s:Button id="saveBtn" label="save" click="saveBtn_clickHandler(event)"/>

protected function saveBtn_clickHandler(event:MouseEvent):void {
    var t:String = TextFlowUtil.export(editor.textFlow);
    sample.textFlow = TextConverter.importToFlow(t, TextConverter.TEXT_FIELD_HTML_FORMAT);
}
—
“this
is
a
test”
becomes:
”
this
is
a
test
“
oops the id of the textarea above should be “sample”:
<s:TextArea id="sample" />
bugger – I pasted this in the wrong post!
In the last few minutes I discovered the method for getting the number of lines; it only works for the Spark TextArea:
“myTextArea.textDisplay.textContainerManager.numLines”
, see you next! twt: @acidventure
Or, here's another way that seems to work without having to use the mx_internal namespace:
Peter
thank you very much. i was struggling for this for a long time. you saved my life… :) keep it up.
Man !! thank you very very much ..! your site is my home page now <3 | http://blog.flexexamples.com/2010/01/13/determining-the-number-of-lines-in-a-spark-richeditabletext-control-in-flex-4/ | CC-MAIN-2018-39 | refinedweb | 687 | 71.14 |
Sorry guys, me again
Don't wanna waste your weekend, but I'm having another little issue here. It seems that RowFormat.Height is not returning the updated value. I have a document with a table and I fill that table with the Mail Merge function. Some rows then have 3 lines and some 4 or 5. To calculate the right line break, I tried to record the REAL height of every row, but unfortunately RowFormat.Height always seems to return the same number.
public class CollectRowHeightsInTable : DocumentVisitor
{
public override VisitorAction VisitRowEnd(Aspose.Words.Tables.Row row)
{
double height = row.RowFormat.Height;
return VisitorAction.Continue;
}
} | https://forum.aspose.com/t/rowformat-height-not-returning-current-value/71222 | CC-MAIN-2022-27 | refinedweb | 103 | 53.58 |
So, what's up with pow()?
Here at Codeforces it is quite common to see solutions that use pow() fail. Most recently this was the case in round #333 problem div2A. Whose fault is it?
Level 1 answer is that it is obviously the contestant's fault. The contestant should have been aware that pow operates on floating-point numbers and that there can be precision errors. If you expect that a floating-point variable contains an integer, you cannot just cast it to an int. The small precision errors mean that your nice round 100 can actually be stored as 100.00000001 (in which case the typecast to an int still works), but it can also be stored as 99.99999999 (in which case the typecast will produce a 99).
You cannot even expect any kinds of deterministic behavior. For example, consider the following short program:
#include <bits/stdc++.h>
using namespace std;

int main () {
    printf("%d\n", (int)pow(10,2));
    for (int j=0; j<3; ++j)
        printf("%d\n", (int)pow(10,j));
}
This code computes 10^2, 10^0, 10^1, and again 10^2. What is its output when using the current g++ version at Codeforces? 100, 1, 10, 99. Fun fun fun :)
For extra fun, change the initialization in the cycle to int j=2. The new output: 100, 100 :)
So, what should you do? Be scared and avoid pow() completely? Nah. Just be aware that precision errors may occur. Instead of truncating the value inside the variable, round it to the nearest integer. See "man llround", for instance.
That being said, it's time for the...
Level 2 answer. Wait a moment. Why the f*#& should there be a precision error when I'm computing something as simple as 10^2? Ten squared is clearly 100. Shouldn't the value returned by pow() be as precise as possible? In this case, 100 can be represented exactly in a double. Why isn't the return value 100? Isn't the compiler violating any standards if my program computes 99.99999999 instead?
Well, kind of.
The standard that actually matters is the C++ standard. Regardless of which one you look into (be it the old one, C++11, or C++14), you will find that it actually never states anything about the required precision of operations such as exp(), log(), pow(), and many others. Nothing at all. (At least to the best of my knowledge.) So, technically, the compiler is still "correct", we cannot claim that it violates the C++ standard here.
However, there is another standard that should apply here: the international standard ISO/IEC/IEEE 60559:2011, previously known as the standard IEEE 754-2008. This is the standard for the floating point arithmetic. What does it say about the situation at hand?
The function pow() is one of about 50 functions that are recommended to be implemented in programming languages. Doing so is optional -- i.e., it is not required to conform to the standard.
However, if a language decides to implement these functions, the standard requires the following: A conforming function shall return results correctly rounded for the applicable rounding direction for all operands in its domain. The preferred quantum is language-defined.
In this particular case, this is violated. Thus, we can claim that the Codeforces g++ compiler's pow() implementation does not conform to the IEEE floating-point number standard.
Hence, if you failed a problem because of this issue, do blame the compiler a little. But regardless of that, in the future take precautions and don't trust anyone. | http://codeforces.com/topic/21929/en3 | CC-MAIN-2020-40 | refinedweb | 602 | 68.16 |
JSP date example
Till now you learned about the JSP syntax... The heart of this example is the Date() class from the java.util package.

hello
What is the code for adding groups in contacts using servlets and JSP? Please help me.

Reading Request Information - Struts
Not using JSP & servlets. I want to make an icon, and clicking the icon should open the first page. I have a query, please advise: I have 290 fields on a very large form. I want to insert the data into the table, but once the form goes past 200 fields the JSP page does not compile; the page gives...

hello
There are so many errors... please check it out.
Hello Friend,
We have...

hello
... whether a character is a vowel or not.
Hello Friend,
Try the following code.

hello
Hello World JSP Page
... a "hello world" on your browser. JSP can be learned very easily... This section of JSP illustrates how to print a simple "Hello World!" string.

Display "Hello JSP" using JSP expression
In this section, we will simply output "Hello JSP" in a web browser using JSP...
<html>
<head><title>Hello World JSP Page.</title>

JSF Hello World
In this example, we will be developing a JSF Hello... classes etc.
Steps to create the JSF Hello World example:
1. Go to your project...

hii sir... Hello World message on the browser. For that we have created a file called "...

Developing JSP, Java and Configuration for Hello World Application
Writing JSP, Java and Configuration for Hello World Application
In this section we will write the JSP, Java and required configuration files for our Struts 2 Hello World application. Now "...

Spring Hello World Application
Hello World Example using Spring: the tutorial given below describes how to make a Spring web application that displays Hello World.

jsp
Hello sir, I am editing data with an edit image but the values are not coming through. I am sending you my code... please look at where I am going wrong...
}
%>
<jsp:forward

Jsp - JSP-Servlet
draftAd:
color:
Hello sir, I want that when... he must be given a colour palette, so how do I get a colour palette in JSP in this code? Please reply, sir. Thank you.

hi sir - Java Beginners
Hi sir, I am trying NetBeans to develop Swing applications. Please provide details about how to run a program in NetBeans and details about... Thanks for your cooperation, sir.
Hi Friend,

Struts2.2.1 hello world annotations Example using Properties
In this tutorial we will discuss a hello world annotation application using properties. In this example we use the /result where we put the result...

Jsp - JSP-Servlet
Hello sir, how do I store an array of strings in an Access database? Please reply. Thank you, sir.
Hi Friend,
We have created a table named names(id(autonumber), name(text), address(text)).
Try...

Spring MVC Say Hello Example
Say Hello application in Spring 2.5 MVC... The application displays the name with a "Hello" message. For example, if we enter the name "Brijesh" then the application will display "Hello...

Struts2.2.1 hello world annotations Example
In this tutorial we will discuss a hello world application using annotations and how to apply... of the application with the Action.java class and the JSP file. Now for the action mapping...

JSP Doubt - JSP-Servlet
Hello Sir, I have created my DraftAd in HTML and I want anything I type in the draft ad to be converted... in the draft ad. E.g. "Hello Hi" in the draft ad, then the count should show 2 words.

JSP-EL - JSP-Servlet
Dear Sir, I know that this code below runs on your...
Use of Expression Language in JSP
Hello ${vij.name...

Deploying Hello World Application on Apache Geronimo Application Server
Hello World JSP application and test on the Apache Geronimo Application Server...
Structure of Web Component
/hello/

Hello Everyone...
How do I download the Java material on the roseindia.net website? Please kindly help me...
by visu

Hello world
... sensitive programming language. For example: hello world != (not equal...
Hello world (First java program)
... and can be run on any operating system. Writing a Hello World program is very...

jsp
How do I calculate marks using radio buttons?
Hello,
Please specify some more details.
Thanks

hello in vertical manner
How to print HELLO in a vertical manner?
Hi,
Try this:

class HelloExample
{
    public static void main(String[] args)
    {
        String str = "HELLO";
        char ch[] = str.toCharArray();
        for (char c : ch)
            System.out.println(c);   // one character per line
    }
}