GHC Commentary: Software Transactional Memory (STM)
This document gives an overview of the runtime system (RTS) support for GHC's STM implementation. We will focus on the case where fine grain locking is used (STM_FG_LOCKS).
Some details about the implementation can be found in the papers Composable Memory Transactions and Transactional memory with data invariants. Additional details can be found in the Harris et al book Transactional memory. Some analysis of performance can be found in the paper The Limits of Software Transactional Memory, though this work only looks at the coarse grain lock version. Many of the other details here are gleaned from the comments in the source code.
Background
This document assumes the reader is familiar with some general details of GHC's execution and memory layout. A good starting point for this information can be found here: Generated Code.
Definitions
Useful RTS terms
Capability
Corresponds to a CPU. The number of capabilities should match the number of CPUs. See Capabilities.
TSO
Thread State Object. The state of a Haskell thread. See Thread State Objects.
Heap object
Objects on the heap all take the form of an StgClosure structure with a header and a payload of data. The header points to the object's entry code and info table. See Heap Objects.
Transactional Memory terms
Read set
The set of TVars that are read, but not written to during a transaction.
Write set
The set of TVars that are written to during a transaction. In the code each written TVar is called an "update entry" in the transactional record.
Access set
All TVars accessed during the transaction.
While GHC's STM does not have a separate read set and write set, these terms are useful for discussion.
Retry
Here we will use the term retry exclusively for the blocking primitive in GHC's STM. This should not be confused with the steps taken when a transaction detects that it has seen an inconsistent view of memory and must start again from the beginning.
Failure
A failed transaction is one that has seen inconsistent state. This should not be confused with a successful transaction that executes the retry primitive.
Overview of Features
At the high level, transactions are computations that read and write to TVars with changes only being committed atomically after seeing a consistent view of memory. Transactions can also be composed together, building new transactions out of existing transactions. In the RTS each transaction keeps a record of its interaction with the TVars it touches in a TRec. A pointer to this record is stored in the TSO that is running the transaction.
Reading and Writing
The semantics of a transaction require that when a TVar is read in a transaction, its value will stay the same for the duration of execution. Similarly, a write to a TVar will keep the same value for the duration of the transaction. From the perspective of other threads, however, the transaction applies all of its effects in one moment: other threads cannot see intermediate states of the transaction, so it is as if all the effects happen in a single moment.
As a simple example we can consider a transaction that transfers value between two accounts:
transfer :: Int -> TVar Int -> TVar Int -> STM ()
transfer v a b = do
  x <- readTVar a
  y <- readTVar b
  writeTVar a (x - v)
  writeTVar b (y + v)
No other thread can observe the value x - v in a without also observing y + v in b.
Blocking
Transactions can choose to block until changes are made to TVars that would allow them to try again. This is enabled with an explicit retry. Note that when changes are made the transaction is restarted from the beginning.
Continuing the example, we can choose to block when there are insufficient funds:
transferBlocking :: Int -> TVar Int -> TVar Int -> STM ()
transferBlocking v a b = do
  x <- readTVar a
  y <- readTVar b
  if x < v
    then retry
    else do
      writeTVar a (x - v)
      writeTVar b (y + v)
Choice
Any blocking transaction can be composed with orElse to choose an alternative transaction to run instead of blocking. The orElse primitive operation creates a nested transaction for its first argument, and if this first transaction executes retry, the effects of the nested transaction are rolled back and the alternative transaction is executed. This choice is biased towards the first parameter. A validation failure in the first branch aborts the entire transaction, not just the nested part; an explicit retry is the only mechanism that gives partial rollback.
We now can choose the account that has enough funds for the transfer:
transferChoice :: Int -> TVar Int -> TVar Int -> TVar Int -> STM ()
transferChoice v a a' b =
  transferBlocking v a b `orElse` transferBlocking v a' b
Data Invariants
Invariants support checking global data invariants beyond the atomicity that transactions demand. For instance, a transactional linked list (written correctly) will never have an inconsistent structure, thanks to the atomicity of updates; with STM it is no harder to maintain this property in a concurrent setting than in a sequential one. It may be desired, however, to make statements about the consistency of the data in a particular structure. For example, a sorted linked list is sorted not because of the structure (where the TVars point to) but because of the data in the structure (the relation between the data in adjacent nodes). Global data invariant checks can be introduced with the always operation, which demands that the transaction it is given results in True and that this continues to hold for every transaction that is committed globally.
We can use data invariants to guard against negative balances:
newNonNegativeAccount :: STM (TVar Int)
newNonNegativeAccount = do
  t <- newTVar 0
  always $ do
    x <- readTVar t
    return (x >= 0)
  return t
Exceptions
Exceptions inside transactions should only propagate outside if the transaction has seen a consistent view of memory. Note that the semantics of exceptions allow the exception itself to capture the view of memory from inside the transaction, but this transaction is not committed.
Overview of the Implementation
We will start this section by considering building GHC's STM with only the features of reading and writing. Then we will add retry, then orElse, and finally data invariants. Each of the subsequent features adds more complexity to the implementation; taken all at once, it can be difficult to understand the subtlety of some of the design choices.
Transactions that Read and Write.
With this simplified view we only support newTVar, readTVar, and writeTVar as well as all the STM type class instances except Alternative.
Transactional Record
The overall scheme of GHC's STM is to perform all the effects of a transaction locally in the transactional record or TRec. Once the transaction has finished its work locally, a value based consistency check determines if the values read for the entire access set are consistent. This only needs to consider the TRec and the main memory view of the access set as it is assumed that main memory is always consistent. This check also obtains locks for the write set and with those locks we can update main memory and unlock. Rolling back the effects of a transaction is just forgetting the current TRec and starting again.
The transactional record itself will have an entry for each transactional variable that is accessed. Each entry has a pointer to the TVar heap object and a record of the value that the TVar held when it was first accessed.
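As a rough sketch in C (illustrative only; the real definitions live in rts/STM.h, carry closure headers and chunked entry lists, and have further fields omitted here), the structures involved look something like this:

/* Simplified sketch of the STM structures; not the actual GHC definitions. */
typedef struct StgClosure_ StgClosure;       /* an arbitrary heap object */

typedef struct {
    StgClosure   *current_value;             /* value, or a TRec when locked (fine grain) */
    void         *first_watch_queue_entry;   /* TSOs/invariants blocked on this TVar */
    unsigned long num_updates;               /* version number, bumped on each commit */
} StgTVar;

typedef struct {
    StgTVar      *tvar;                      /* the variable this entry tracks */
    StgClosure   *expected_value;            /* value seen when first accessed */
    StgClosure   *new_value;                 /* value to write; equals expected_value for a read */
    unsigned long num_updates;               /* version sampled when validating a read */
} TRecEntry;

typedef struct StgTRecHeader_ {
    struct StgTRecHeader_ *enclosing_trec;   /* parent TRec for nested (orElse) transactions */
    TRecEntry entries[16];                   /* really a chunked list in the RTS */
    int       n_entries;
    int       state;                         /* TREC_ACTIVE, TREC_CONDEMNED, ... */
} StgTRecHeader;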
Starting
A transaction starts by initializing a new TRec (stmStartTransaction), assigning the TSO's trec pointer to the new TRec, and then executing the transaction's code.
(See rts/PrimOps.cmm stg_atomicallyzh and rts/STM.c stmStartTransaction).
Reading
When a read is attempted we first search the TRec for an existing entry. If one is found, we use that local view of the variable. Otherwise, on the first read of the variable, a new entry is allocated and the value of the variable is read and stored locally. The original TVar does not need to be accessed again for its value until a validation check is needed.
In the coarse grain version, the read is done without synchronization. With the fine grain lock, the lock variable is the current_value of the TVar structure. While reading an inconsistent value is an issue that can be resolved later, reading a value that indicates a lock and handing that value to code that expects a different type of heap object will almost certainly lead to a runtime failure. To avoid this the fine grain lock version of the code will spin if the value read is a lock, waiting to observe the lock released with an appropriate pointer to a heap object.
(See rts/STM.c stmReadTVar)
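Under fine grain locking the read path might look roughly like the following sketch. The helpers find_entry, new_entry and value_is_a_trec are assumptions made up for this sketch (the real logic, with different names and details, is in stmReadTVar and its helpers in rts/STM.c), and the types are the simplified ones sketched above.

/* Assumed helpers, not real GHC functions: */
TRecEntry *find_entry(StgTRecHeader *trec, StgTVar *t);  /* search the TRec chain for an entry */
TRecEntry *new_entry(StgTRecHeader *trec, StgTVar *t);   /* allocate a fresh local entry */
int        value_is_a_trec(StgClosure *v);               /* does this value denote a lock? */

/* Spin past any lock so we never hand a TRec to code expecting a value. */
static StgClosure *read_current_value_sketch(StgTVar *t) {
    StgClosure *v;
    do {
        v = t->current_value;
    } while (value_is_a_trec(v));            /* locked by a committer: wait for the real value */
    return v;
}

StgClosure *stm_read_sketch(StgTRecHeader *trec, StgTVar *t) {
    TRecEntry *e = find_entry(trec, t);
    if (e != NULL)
        return e->new_value;                 /* use the local view of the variable */
    e = new_entry(trec, t);                  /* first access: record what we saw */
    e->expected_value = e->new_value = read_current_value_sketch(t);
    return e->new_value;
}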
Writing
Writing to a TVar requires that the variable first be in the TRec. If it is not currently in the TRec, a read of the TVar's value is stored in a new entry (this value will be used to validate and ensure that no updates were made concurrently to this variable).
In both the fine grain and coarse grain lock versions of the code no synchronization is needed to perform the write as the value is stored locally in the TRec until commit time.
(See rts/STM.c stmWriteTVar)
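A corresponding sketch of the write path, reusing the assumed helpers from the read sketch above (again not the actual stmWriteTVar code):

/* Sketch only: writes are buffered in the TRec until commit time. */
void stm_write_sketch(StgTRecHeader *trec, StgTVar *t, StgClosure *new_value) {
    TRecEntry *e = find_entry(trec, t);
    if (e == NULL) {
        e = new_entry(trec, t);
        /* Record the value currently in the TVar so that commit-time
         * validation can detect a concurrent update to this variable. */
        e->expected_value = read_current_value_sketch(t);
    }
    e->new_value = new_value;                /* no synchronisation needed here */
}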
Validation
Before a transaction can make its effects visible to other threads it must check that it has seen a consistent view of memory while it was executing. Most of the work is done in validate_and_acquire_ownership by checking that TVars hold their expected values.
For the coarse grain lock version, the global lock is held from before entering validate_and_acquire_ownership through to the writing of values to TVars. With the fine grain lock, validation acquires locks for the write set and reads a version number consistent with the expected value for each TVar in the read set. After all the locks for writes have been acquired, the read set is checked again to see if each value is still the expected value and the version number still matches (check_read_only).
(See rts/STM.c validate_and_acquire_ownership and check_read_only)
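The two phases could be sketched as below. This is a simplification of validate_and_acquire_ownership and check_read_only, not the real rts/STM.c code: try_lock_tvar is an assumed helper standing in for cond_lock_tvar, the failure paths that release already-acquired locks are omitted, and the types are those of the earlier sketch.

/* Assumed helper standing in for cond_lock_tvar: lock t only if it still
 * holds the expected value, returning non-zero on success. */
int try_lock_tvar(StgTRecHeader *trec, StgTVar *t, StgClosure *expected);

/* Phase 1: lock the write set and sample version numbers for the read set. */
int validate_and_acquire_sketch(StgTRecHeader *trec) {
    for (int i = 0; i < trec->n_entries; i++) {
        TRecEntry *e = &trec->entries[i];
        if (e->new_value != e->expected_value) {          /* write-set entry */
            if (!try_lock_tvar(trec, e->tvar, e->expected_value))
                return 0;                                 /* locked or changed: fail */
        } else {                                          /* read-set entry */
            if (e->tvar->current_value != e->expected_value)
                return 0;
            e->num_updates = e->tvar->num_updates;        /* sample the version ... */
            if (e->tvar->current_value != e->expected_value)
                return 0;                                 /* ... consistent with the value */
        }
    }
    return 1;
}

/* Phase 2: with all write-set locks held, re-check the read set. */
int check_read_only_sketch(StgTRecHeader *trec) {
    for (int i = 0; i < trec->n_entries; i++) {
        TRecEntry *e = &trec->entries[i];
        if (e->new_value == e->expected_value) {          /* read-only entry */
            if (e->tvar->current_value != e->expected_value ||
                e->tvar->num_updates   != e->num_updates)
                return 0;                                 /* someone committed in between */
        }
    }
    return 1;
}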
Committing
Before committing, each invariant associated with each accessed TVar needs to be checked by running the invariant transaction with its own TRec. The read set for each invariant is merged into the transaction as those reads must be included in the consistency check. The TRec is then validated. If validation fails, the transaction must start over from the beginning after releasing all locks. In the case of the coarse grain lock, validation and commit are in a critical section protected by the global STM lock. Updates to TVars proceed while holding the global lock.
With the fine grain lock version when validation, including any read-only phase, succeeds, two properties will hold simultaneously that give the desired atomicity:
- Validation has witnessed all TVars with their expected value.
- Locks are held for all of the TVars in the write set.
Commit can proceed to increment each locked TVar's num_updates field and unlock by writing the new value to the current_value field. While these updates happen one-by-one, any attempt to read from this set will spin while the lock is held. Any reads made before the lock was acquired will fail to validate as the number of updates will change.
(See rts/PrimOps.cmm stg_atomically_frame and rts/STM.c stmCommitTransaction)
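A sketch of the final update step (simplified from what stmCommitTransaction does; the types are those of the earlier sketches, and the wake-ups of blocked threads discussed in the next section are omitted):

/* Sketch only: with validation done and the write-set locks held, publish
 * the new values.  Storing new_value is also what releases each lock. */
void commit_updates_sketch(StgTRecHeader *trec) {
    for (int i = 0; i < trec->n_entries; i++) {
        TRecEntry *e = &trec->entries[i];
        if (e->new_value != e->expected_value) {     /* write-set entry */
            e->tvar->num_updates++;                  /* bump the version number first */
            e->tvar->current_value = e->new_value;   /* unlock by storing the new value */
        }
    }
}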
Aborting
Aborting is simply throwing away changes that are stored in the TRec.
(See rts/STM.c stmAbortTransaction)
Exceptions
An exception in a transaction will only propagate outside of the transaction if the transaction can be validated. If validation fails, the whole transaction will abort and start again from the beginning. Nothing special needs to be done to support the semantics allowing the view inside the aborted transaction.
(See rts/Exception.cmm which calls stmValidateNestOfTransactions from rts/STM.c).
Blocking with retry
We will now introduce the blocking feature. To support this we will add a watch queue to each TVar where we can place a pointer to a blocked TSO. When a transaction commits we will now wake up the TSOs on watch queues for TVars that are written.
The mechanism for retry is similar to exception handling. In the simple case of only supporting blocking and not supporting choice, an encountered retry should validate, and if valid, add the TSO to the watch queue of every accessed TVar (see rts/STM.c stmWait and build_watch_queue_entries_for_trec). Locks are acquired for all TVars when validating to control access to the watch queues and prevent missing an update to a TVar before the thread is sleeping. In particular if validation is successful the locks are held after the return of stmWait, through the return to the scheduler, after the thread is safely paused (see rts/HeapStackCheck.cmm stg_block_stmwait), and until stmWaitUnlock is called. This ensures that no updates to the TVars are made until the TSO is ready to be woken. If validation fails, the TRec is discarded and the transaction is started from the beginning. (See rts/PrimOps.cmm stg_retryzh)
When a transaction is committed, each write that it makes to a TVar is preceded by waking up each TSO in that TVar's watch queue. Eventually these TSOs will be run, but before restarting the transaction its TRec is validated again: if it is still valid then nothing has changed that would allow the transaction to proceed with a different result, so it continues to wait. If it is invalid, some other transaction has committed and progress may be possible (note there is the additional case that some other transaction is merely holding a lock temporarily, causing validation to fail). The TSO is not removed from the watch queues it is on until the transaction is aborted (at this point we no longer need the TRec), and the abort happens after the failure to validate on wakeup. (See rts/STM.c stmReWait and stmAbortTransaction)
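The blocking path described above could be sketched as follows. StgTSO, add_to_watch_queue and validate_and_acquire_all are assumptions for this sketch (the real code is stmWait together with build_watch_queue_entries_for_trec in rts/STM.c, and it acquires locks for the whole access set, not just the writes):

typedef struct StgTSO_ StgTSO;                       /* a Haskell thread (TSO) */
void add_to_watch_queue(StgTVar *t, StgTSO *tso);    /* assumed: enqueue a waiter */
int  validate_and_acquire_all(StgTRecHeader *trec);  /* assumed: like the validation
                                                        sketch, but locks every entry */
enum { TREC_WAITING_SKETCH = 1 };                    /* illustrative state value */

/* Sketch only: park the TSO on every TVar in the access set. */
int stm_wait_sketch(StgTSO *tso, StgTRecHeader *trec) {
    if (!validate_and_acquire_all(trec))
        return 0;                        /* inconsistent: caller restarts the transaction */
    for (int i = 0; i < trec->n_entries; i++)
        add_to_watch_queue(trec->entries[i].tvar, tso);
    trec->state = TREC_WAITING_SKETCH;
    /* The TVar locks are still held here; they are only released (by
     * stmWaitUnlock) once the thread is safely paused, so no commit can
     * slip in between deciding to sleep and actually sleeping. */
    return 1;
}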
Choice with orElse
When retry# executes it searches the stack for either a CATCH_RETRY_FRAME or the outer ATOMICALLY_FRAME (the boundary between normal execution and the transaction). The former is placed on the stack by an orElse (see rts/PrimOps.cmm stg_catchRetryzh) and if executing the first branch we can partially abort and switch to the second branch, otherwise we propagate the retry further. In the latter case this retry represents a transaction that should block and the behavior is as above with only retry.
How do we support a "partial abort"? This introduces the need for a nested transaction. Our TRec will now have a pointer to an outer TRec (the enclosing_trec field). This allows us to isolate effects from the branch of the orElse that we might need to abort. Let's revisit the features that need to take this into account.
- Reading -- Reads now search the chain of nested transactions in addition to the local TRec. When an entry is found in a parent it is copied into the local TRec. Note that there is still only a single access to the actual TVar through the life of the transaction (until validation).
- Writing -- Writes, like reads, now search the parent TRecs and the write is stored in the local copy.
- Retry -- As described above, we now need to search the stack for a CATCH_RETRY_FRAME and, if one is found, abort the nested transaction and either attempt the alternative or propagate the retry, instead of immediately working on blocking.
- Validation -- If we are validating in the middle of a running transaction we will need to validate the whole nest of transactions.
(See rts/STM.c stmValidateNestOfTransactions and its uses in rts/Exception.cmm and rts/Schedule.c)
- Committing -- Just as we now have a partial abort, we need a partial commit when we finish a branch of an orElse. This commit is done with stmCommitNestedTransaction which validates just the inner TRec and merges updates back into its parent. Note that an update is distinguished from a read only entry by value. This means that if a nested transaction performs a write that reverts a value this is a change and must still propagate to the parent (see ticket #7493).
- Aborting -- There is another subtle issue with how choice and blocking interact. When we block we need to wake up if there is a change to any accessed TVar. Consider a transaction:
t = t1 `orElse` t2
If both t1 and t2 execute retry then even though the effects of t1 are thrown away, it could be that a change to a TVar that is only in the access set of t1 will allow the whole transaction to succeed when it is woken.
To solve this problem, when a branch on a nested transaction is aborted the access set of the nested transaction is merged as a read set into the parent TRec. Specifically if the TVar is in any TRec up the chain of nested transactions it must be ignored, otherwise it is entered as a new entry (retaining just the read) in the parent TRec.
(See again ticket #7493 and rts/STM.c merge_read_into; a sketch of this merge is given after this list.)
- Exceptions -- The only change needed here is that each CATCH_RETRY_FRAME on the stack represents a nested transaction. As the stack is searched for a handler, the nested transaction is aborted at each encountered CATCH_RETRY_FRAME. When the ATOMICALLY_FRAME is encountered we then know that there is no remaining nested transaction.
(See rts/Exception.cmm stg_raisezh)
(See rts/PrimOps.cmm stg_retryzh and stg_catch_retry_frame)
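A sketch of the read-set merge performed when an orElse branch is abandoned (from the Aborting item above), reusing the assumed helpers of the earlier read sketch; the real function is merge_read_into in rts/STM.c:

/* Sketch only: when an orElse branch is abandoned, keep its reads so that a
 * change to any of them can later wake the blocked transaction. */
void merge_reads_into_parent_sketch(StgTRecHeader *inner, StgTRecHeader *outer) {
    for (int i = 0; i < inner->n_entries; i++) {
        TRecEntry *e = &inner->entries[i];
        if (find_entry(outer, e->tvar) != NULL)
            continue;                            /* already tracked up the chain: ignore */
        TRecEntry *n = new_entry(outer, e->tvar);
        n->expected_value = e->expected_value;
        n->new_value      = e->expected_value;   /* keep only the read, drop any write */
    }
}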
Invariants
We will start this section with an overview of some of the details then review with notes on the changes from the choice case.
Details
As a transaction is executing it can collect dynamically checked data invariants. These invariants are transactions that are never committed, but if an invariant raises an exception when it is executed, that exception will propagate out of the atomically frame.
check#
Primitive operation that adds an invariant (transaction to run) to the queue of the current TRec by calling stmAddInvariantToCheck.
checkInv :: STM a -> STM ()
A wrapper for check# (to give it the STM type).
alwaysSucceeds :: STM a -> STM ()
This is the check from the "Transactional memory with data invariants" paper. The action immediately runs, wrapped in a nested transaction so that it will never commit but will have an opportunity to raise an exception. If successful, the originally passed action is added to the invariant queue.
always :: STM Bool -> STM ()
Takes an STM action that results in a Bool and adds an invariant that throws an exception when the result of the transaction is False.
The bookkeeping for invariants is in each TRec's invariants_to_check queue and the StgAtomicallyFrame's next_invariant_to_check field. Each invariant is held in an StgAtomicInvariant structure that includes the STM action, the TRec where it was last executed, and a lock. This structure is added to the current TRec's queue when check# is executed.
When a transaction completes, execution will reach the stg_atomically_frame and the TRec's enclosing_trec will be NO_TREC (a nested transaction would have a stg_catch_retry_frame before the stg_atomically_frame to handle cases of a non-empty enclosing_trec). The frame will then check the invariants by collecting those it needs to check with stmGetInvariantsToCheck, dequeuing each, executing it, and, when (or if) we get back to the frame, aborting the invariant action. If the invariant failed to hold we would not get here, due to an exception, and if it succeeds we do not want its effects. Once all the invariants have been checked, the frame will proceed to commit.
Which invariants need to be checked for a given transaction? Clearly, invariants introduced in the transaction itself must be checked; these are added to the TRec's invariants_to_check queue directly when check# is executed. In addition, once the transaction has finished executing, we can look at each entry in the write set and search its watch queue for any invariants.
Note that there is a check in the stm package in Control.Monad.STM which matches the check from the "Beautiful concurrency" chapter of Beautiful Code:
check :: Bool -> STM ()
check b = if b then return () else retry
It requires no additional runtime support. If a transaction produces the Bool argument, that transaction will be committed as usual (when the result is True), and this is only a one-time check, not an invariant that will be checked at later commits.
Changes from Choice
With the addition of data invariants we have the following changes to the implementation:
- Retrying -- A retry in an invariant indicates that the invariant could not proceed and the whole transaction should block. This special case is detected when an ATOMICALLY_FRAME is encountered with a nest of transactions (i.e. when the enclosing_trec field is not NO_TREC). The invariant is simply aborted and execution proceeds to stmWait (see rts/PrimOps.cmm stg_retryzh).
- Committing -- Commit now needs a phase where it runs invariants after the code of the transaction has completed but before the commit itself. The implementation recycles the structure already in place for this phase, so special cases are needed in the ATOMICALLY_FRAME, which collects the invariants, works through them one at a time, and then moves on to committing (see rts/PrimOps.cmm stg_atomically_frame).
To efficiently handle invariants they need to only be checked when a relevant data dependency changes. This means we can associate them with the TRec of the last commit that needed to check the invariant at the cost of serializing invariant handling commits. This is enforced by the lock on each invariant. If it cannot be acquired the whole transaction must start over.
At commit time, each invariant is locked and the read set from the last committed transaction of each invariant is merged into the TRec.
Validation acquires locks for all entries in the TRec (not just the writes). After validation, each invariant is removed from the watch queue of each TVar it previously depended on. The TRec that was used when executing the invariant code is then updated to reflect the values from the final execution of the main transaction, and each TVar, being a data dependency of the invariant, has the invariant added to its watch queue.
(See rts/STM.c stmCommitTransaction, disconnect_invariant and connect_invariant_to_trec)
- Exceptions -- When an exception propagates to the ATOMICALLY_FRAME there are now two states that it could encounter. If there is no enclosing TRec we are not dealing with an exception from an invariant and it proceeds as above. Seeing a nest of transactions indicates that the transaction was checking an invariant when it encountered the exception. The effect of a failed invariant is this exception so nothing special needs to be done except to validate and abort both the outer transaction and the nested transaction (see rts/Exception.cmm stg_raisezh).
Other Details
This section describes some details that can be discussed largely in isolation from the rest of the system.
Detecting Long Running Transactions
While the type system ensures that STM actions are constrained to STM side effects, pure computations in Haskell can be non-terminating. It could be that a transaction sees inconsistent data that leads to non-termination which would never happen in a program that only saw consistent data. To detect this problem, every time a thread yields its transaction is validated. A validation failure causes the transaction to be condemned.
Transaction State
Each TRec has a state field that holds the status of the transaction. It can be one of the following:
TREC_ACTIVE
The transaction is actively running.
TREC_CONDEMNED
The transaction has seen an inconsistency.
TREC_COMMITTED
The transaction has committed and is in the process of updating TVar values.
TREC_ABORTED
The transaction has aborted and is working to release locks.
TREC_WAITING
The transaction has hit a retry and is waiting to be woken.
If a TRec state is TREC_CONDEMNED (some inconsistency was seen) validate does nothing. When a top-level transaction is aborted in stmAbortTransaction, if the state is TREC_WAITING it will remove the watch queue entries for the TRec. Similarly if a waiting TRec is condemned via an asynchronous exception when a validation failure is observed after a thread yield, its watch queue entries are removed. Finally a TRec in the TREC_WAITING state is not condemned by a validation. In this case the TRec is already waiting for a wake up from a TVar that changes and observing an inconsistency merely indicates that this will happen soon.
In the work of Keir Fraser a transaction state is used for cooperative efforts of transactions to give lock-free properties for STM systems. The design of GHC's STM is clearly influenced by this work and seems close to some of the algorithms in Fraser's work. It does not, however, implement what would be required to be lock-free or live-lock free (in the fine grain lock code). For instance, if two transactions T1 and T2 are committing at the same time and T1 has read A and written B while T2 has read B and written A, both the transactions can fail to commit. For example, consider the interleaving:
Note: the first and third columns are the local state of the TRecs and the second column is the values of the TVar structures. Each TRec entry has the expected value followed by the new value and a number of updates field when it is read for validation.
At this point T1 and T2 both perform their read_only_check and both could (at least one will) discover that a TVar in their read set is now locked. This leads to both transactions aborting. The chances of this are narrow but not impossible (see ticket #7815). Fraser's work avoids this by using the transaction status and the fact that locks point back to the TRec holding the lock to detect other transactions in a read only check (read phase) and resolving conflicts so that at least one of the transactions can commit.
A simpler example can also cause both transactions to abort. Consider two transactions with the same write set, but with the writes entered into the TRecs in a different order. Both transactions could encounter a lock from the other before they have a chance to release locks and get out of the way. Having an ordering on lock acquisition could avoid this problem but would add a little more complexity.
GC and ABA
GHC's STM does comparisons for validation by value. Since these are always the results of pure computations, the values are represented by heap objects and a simple pointer comparison is sufficient to know if the same value is in place. This presents an ABA problem, however: if the location of some value is recycled, it could appear as though the value has not changed when, in fact, it is a different value. This is avoided by making the expected_value fields of the TRec entries pointers into the heap that are followed by the garbage collector. As long as a TRec is still alive it will keep the original value it read for a TVar alive.
Management of TRecs
The TRec structure is built as a list of chunks to give better locality and amortize the cost of searching and allocating entries. Additionally TRecs are recycled to aid locality further when a transaction is aborted and started again. Both of these details add a little complexity to the implementation that is abated with some macros such as FOR_EACH_ENTRY and BREAK_FOR_EACH.
Tokens and Version Numbers.
When validating a transaction each entry in the TRec is checked for consistency. Any entry that is an update (in the write set) is locked. This locking is a visible effect to the rest of the system and prevents other committing transactions from making progress. Reads, however, are not going to be updated. Instead we check that a read of the value matches our expected value, then we read a version number (the num_updates field) and check again that the expected value holds. This gives us a read of num_updates that is consistent with the TVar holding the expected value. Once all the locks for the write set are acquired we know that only our transaction can have an effect on the write set. All that remains is to rule out some change to the read set while we were still acquiring locks for the writes. This is done in the read phase (with check_read_only), which checks first whether the value matches the expectation and then whether the version numbers match. If this holds for each entry in the read set then there must have existed a moment, while we held the locks for all of the write set, where the read set held all its values. Even if some other transaction committed a new value and yet another transaction committed the expected value back, the version number will have been incremented.
All that remains is managing these version numbers. When a TVar is updated its version number is incremented before the value is updated with the lock release. There is the unlikely case that the finite version numbers wrap around to an expected value while the transaction is committing (even with a 32-bit version number this is highly unlikely to happen). This is, however, accounted for by allocating a batch of tokens to each capability from a global max_commits variable. Each time a transaction is started it decrements its capability's batch of tokens. By sampling max_commits at the beginning of commit and after the read phase, the possibility of an overflow can be detected (when more than 32 bits' worth of commits have been allocated out).
(See rts/STM.c validate_and_acquire_ownership, check_read_only, getToken, stmStartTransaction, and stmCommitTransaction)
Implementation Invariants
Some of the invariants of the implementation:
- Locks are only acquired in rts/STM.c and are always released before the end of a function call (with the exception of stmWait which must release locks after the thread is safe).
- When running a transaction each TVar is read exactly once and if it is a write, is updated exactly once.
- Main memory (TVars) always holds consistent values or the locks of a partially updated commit. That is, a set of reads from TVars at any moment will yield consistent data if none of the values read are locks.
- A nest of TRecs has a matching nest of CATCH_RETRY_FRAMEs ending with an ATOMICALLY_FRAME on the stack. One exception to this is when checking data invariants: the invariant's TRec is nested under the top-level TRec without a CATCH_RETRY_FRAME.
Fine Grain Locking
The locks in fine grain locking (STM_FG_LOCKS) are at the TVar level and are implemented by placing the locking thread's TRec in the TVar's current_value field using a compare and swap (lock_tvar). The value observed when locking is returned by lock_tvar. To test if a TVar is locked, the value is inspected to see if it is a TRec (by checking that the closure's info table pointer is stg_TREC_HEADER_info). If a TRec is found, lock_tvar will spin, reading the TVar's current_value until it is not a TRec, and then attempt again to obtain the lock. Unlocking is simply a write of the current value of the TVar. There is also a conditional lock, cond_lock_tvar, which will obtain the lock only if the TVar's current_value is the given expected value. If the TVar is already locked this will not be the case (the value would be a TRec), and if the TVar has been updated to a new (different) value then locking will fail because the value does not match the expected value. A compare and swap is used for cond_lock_tvar.
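A sketch of these operations, where cas stands in for the RTS compare-and-swap primitive, value_is_a_trec is the assumed lock test from the read sketch, and try_lock_tvar is the assumed helper used in the validation sketch (the real functions are lock_tvar, unlock_tvar and cond_lock_tvar in rts/STM.c, with somewhat different signatures):

/* Assumed stand-in for the RTS compare-and-swap: atomically replace *addr
 * with new_v if it currently holds expected, returning the value seen. */
StgClosure *cas(StgClosure **addr, StgClosure *expected, StgClosure *new_v);

StgClosure *lock_tvar_sketch(StgTRecHeader *trec, StgTVar *t) {
    StgClosure *v;
    do {
        do { v = t->current_value; } while (value_is_a_trec(v));  /* spin past other locks */
    } while (cas(&t->current_value, v, (StgClosure *)trec) != v);
    return v;                                /* the value observed when the lock was taken */
}

void unlock_tvar_sketch(StgTVar *t, StgClosure *v) {
    t->current_value = v;                    /* a plain store releases the lock */
}

int try_lock_tvar(StgTRecHeader *trec, StgTVar *t, StgClosure *expected) {
    /* Succeeds only if the TVar still holds the expected value; a lock (a
     * TRec) or any other value makes the compare-and-swap fail. */
    return cas(&t->current_value, expected, (StgClosure *)trec) == expected;
}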
This arrangement is useful for allowing a transaction that encounters a locked TVar to know which particular transaction holds the lock (as used in algorithms from Fraser). GHC's STM does not, however, use this information.
Bibliography
Fraser, Keir. Practical lock-freedom. PhD thesis, University of Cambridge Computer Laboratory, 2004.
Jones, Simon Peyton. "Beautiful concurrency." Beautiful Code: Leading Programmers Explain How They Think (2007): 385-406.
Harris, Tim, et al. "Composable memory transactions." Proceedings of the tenth ACM SIGPLAN symposium on Principles and practice of parallel programming. ACM, 2005.
Harris, Tim, James Larus, and Ravi Rajwar. "Transactional memory." Synthesis Lectures on Computer Architecture 5.1 (2010): 1-263.
Harris, Tim, and Simon Peyton Jones. "Transactional memory with data invariants." First ACM SIGPLAN Workshop on Languages, Compilers, and Hardware Support for Transactional Computing (TRANSACT'06), Ottawa. 2006.
|
https://ghc.haskell.org/trac/ghc/wiki/Commentary/Rts/STM?version=3
|
CC-MAIN-2016-44
|
en
|
refinedweb
|
Where are the classes that are imported?
Hi
Where are the classes that are imported?
import org.eclipse.swt.SWT;
import org.eclipse.swt.widgets.*;
import org.eclipse.swt.widgets.Tree;
import org.eclipse.swt.graphics.Point;
import org.eclipse.swt.layout.FillLayout;
import org.ecli
|
http://www.roseindia.net/tutorialhelp/allcomments/137686
|
CC-MAIN-2013-20
|
en
|
refinedweb
|
I created the NamespaceLoader in the feature branch. It has a load_module, but it's only ever called by the code in PathFinder.load_module:
loader = NamespaceLoader(namespace_path)
return loader.load_module(fullname)
namespace_path is what will become module.__path__. In order to keep the load_module API (single fullname argument), I pass it in to the constructor. There's no particular need for it to follow that API, but it does.
|
http://bugs.python.org/msg159162
|
CC-MAIN-2013-20
|
en
|
refinedweb
|
Hi guys, just trying to understand getline and implement it in a problem I'm trying to solve.
First off i have a program that is working:
Code:
#include <iostream.h>
#include <string>
#include <fstream>
using namespace std;

void main()
{
    ifstream fin("data.txt");
    ofstream fout;
    char name[30];
    char jersey_number[10];
    char best_time[10];
    char sport[40];
    char high_school[40];

    while (!fin.getline(name, 30, '|').eof())
    {
        fin.getline(jersey_number, 10, '|');
        fin.getline(best_time, 10);
        fin.getline(sport, 40, '|');
        fin.getline(high_school, 40);
        cout << jersey_number << best_time << sport << high_school << endl;
    }
}
This works fine with the input data text file containing the following:
Code:
John|83|52.2
swimming|Jefferson
Jane|26|10.09
sprinting|San Marin
Now the problem! I tried to learn from this and make a new program, which should do almost the same thing, read data from a text file, and cout the information i want.
The data in the text file is in a different format, for example:
Code:
firstName middleName surname
38 47 38 27 36
firstName middleName surname
84 37 29 34 72

Here is what I programmed, going from the first example:
Code:
#include <iostream.h>
#include <string>
#include <fstream>
using namespace std;

void main()
{
    ifstream fin("test.txt");
    ofstream fout("output.txt");
    char name[30];
    char fullName[30];
    char marks[30];

    while (!fin.getline(name, 30, '\n').eof())
    {
        fin.getline(fullName, 30, '\n');
        fin.getline(marks, 30, '\n');
        cout << fullName << " " << marks << endl;
    }
}
All this does is bring up the program window, which just runs forever. What am I doing wrong?
The difference between the working program and the one that isn't working is that the working one uses "|" but the 2nd one uses "\n" for the end of each line, I believe.
Ideas, anyone, or help?
|
http://cboard.cprogramming.com/cplusplus-programming/74568-getline-help.html
|
CC-MAIN-2013-20
|
en
|
refinedweb
|
NAME
flock - apply or remove an advisory lock on an open file
SYNOPSIS
#include <sys/file.h> int flock(int fd, int operation);
DESCRIPTION
Apply.
CONFORMING TO.
close(2), dup(2), execve(2), fcntl(2), fork(2), open(2), lockf(3) See also Documentation/locks.txt and Documentation/mandatory.txt in the kernel source.
COLOPHON
This page is part of release 3.21 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at.
|
http://manpages.ubuntu.com/manpages/karmic/en/man2/flock.2.html
|
CC-MAIN-2013-20
|
en
|
refinedweb
|
NAME
setns - reassociate thread with a namespace
SYNOPSIS
#define _GNU_SOURCE /* See feature_test_macros(7) */ #include <sched.h> int setns(int fd, int nstype);
DESCRIPTION.)
RETURN VALUE
On success, setns() returns 0. On failure, -1 is returned and errno is set to indicate the error.
ERRORS.
VERSIONS
The setns() system call first appeared in Linux in kernel 3.0.
CONFORMING TO
The setns() system call is Linux-specific.
NOTES
Not all of the attributes that can be shared when a new thread is created using clone(2) can be changed using setns().
BUGS
The PID namespace and the mount namespace are not currently supported. (See the descriptions of CLONE_NEWPID and CLONE_NEWNS in clone(2).)
SEE ALSO
clone(2), fork(2), vfork(2), proc(5), unix(7)
COLOPHON
This page is part of release 3.35 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at.
|
http://manpages.ubuntu.com/manpages/precise/en/man2/setns.2.html
|
CC-MAIN-2013-20
|
en
|
refinedweb
|
Asynchronous Http Handlers, Page 2
Asynchronous Http Handlers
Asynchronous Http handlers work similarly to regular http handlers, except that they are capable of serving many client requests with very few actual threads. In fact, depending on how religiously the handler-code uses asynchronous methods in its implementation, it is possible to peg the CPU(s) in a system using only a handful of threads while serving many times as many clients. This approach has excellent scale-up possibilities.
To implement an asynchronous Http handler, your class must derive from the IHttpAsyncHandler interface rather than the IHttpHandler interface seen in Listing 1. But the interface used is not the only difference between synchronous and asynchronous Http handlers. The program flow is also a bit more difficult to design.
In addition to using asynchronous Begin*() and End*() methods exclusively for all of your "blocking" IO calls, your asynchronous handler must also implement asynchronous BeginProcessRequest() and EndProcessRequest() methods that are called asynchronously by ASP.Net itself. The BeginProcessRequest() and EndProcessRequest() are the IHttpAsyncHandler interface's methods that allow ASP.Net to do asynchronously what it would have done synchronously using IHttpHandler's ProcessRequest() method.
Here is a summary of the things that you will have to think about when creating an asynchronous Http handler.
- You should do all of your IO using asynchronous Begin*() and End*() methods.
- You must implement Begin*() and End*() methods that allow ASP.Net to call you asynchronously.
- You must create a type that implements IAsyncResult that you instantiate and return to ASP.Net from your Begin*() method. ASP.Net will pass the object back to your End*() method. You should use this object to keep track of client-request state (such as the context object) as the work is divided amongst multiple calls.
The code shown in Listing 2 is a very simple asynchronous http handler (which you can deploy in the same manner described for the code in Listing 1).
<%@ WebHandler Language="C#" Class="AsyncHttpHandler"%>

using System;
using System.Web;
using System.Threading;

public class AsyncHttpHandler : IHttpAsyncHandler
{
    public IAsyncResult BeginProcessRequest(
        HttpContext context, AsyncCallback cb, Object extraData)
    {
        // Create an async object to return to caller
        Async async = new Async(cb, extraData);

        // store a little context for us to use later as well
        async.context = context;

        // Normally real work would be done here... then, most likely
        // as the result of an async callback, you eventually
        // "complete" the async operation so that the caller knows to
        // call the EndProcessRequest() method
        async.SetCompleted();

        // return IAsyncResult object to caller
        return async;
    }

    // Finish up
    public void EndProcessRequest(IAsyncResult result)
    {
        // Finish things
        Async async = result as Async;
        async.context.Response.Write(
            "<H1>This is an <i>Asynchronous</i> response!!</H1>");
    }

    // This method is never called by ASP.Net in the async case
    public void ProcessRequest(HttpContext context)
    {
        throw new InvalidOperationException(
            "ASP.Net should never use this method");
    }

    // This means that the same AsyncHttpHandler object is used for all
    // requests; returning false here makes ASP.Net create an object per
    // request.
    public bool IsReusable
    {
        get { return true; }
    }
}

// This object is necessary for the caller to help your code keep
// track of state between begin and end calls
class Async : IAsyncResult
{
    internal Async(AsyncCallback cb, Object extraData)
    {
        this.cb = cb;
        asyncState = extraData;
        isCompleted = false;
    }

    private AsyncCallback cb = null;
    private Object asyncState;

    public object AsyncState
    {
        get { return asyncState; }
    }

    public bool CompletedSynchronously
    {
        get { return false; }
    }

    // If this object was not being used solely with ASP.Net this
    // method would need an implementation. ASP.Net never uses the
    // event, so it is not implemented here.
    public WaitHandle AsyncWaitHandle
    {
        get
        {
            throw new InvalidOperationException(
                "ASP.Net should never use this property");
        }
    }

    private Boolean isCompleted;

    public bool IsCompleted
    {
        get { return isCompleted; }
    }

    internal void SetCompleted()
    {
        isCompleted = true;
        if (cb != null)
        {
            cb(this);
        }
    }

    // state internal fields
    internal HttpContext context = null;
}
Listing 2—Async.ashx, an asynchronous Http handler.
There is more to say about asynchronous server code, and if this article generates interest, perhaps I will publish a second part on this site in the future. Unfortunately, though, this article has already reached more than double its expected size for the Web site and I frankly hope they have space for this much! Fortunately, this coverage should get you well on your way to writing scalable server-side code using asynchronous Http handlers.
In future versions of the .NET Framework, look for more innovations in the asynchronous programming model. For example, eventually there will be an asynchronous Page object that provides a way to implement Web forms and other more structured Web applications asynchronously.
This is cool stuff. Have fun!
# # #
|
http://www.developer.com/net/vb/article.php/10926_1383181_2/Asynchronous-Http-Handlers.htm
|
CC-MAIN-2013-20
|
en
|
refinedweb
|
Hello, I have this assignment that I don't understand at all. Here are the instructions:
Write a program that uses the Monte Carlo sampling method to estimate
the average number of bottles of e-Boost someone would have to drink to
win a prize. There is a 1 in 5 chance that a bottle cap will have a prize.
And here is what I have so far:
//import the classes
import java.io.IOException;
import java.util.Scanner;
import java.io.File;
import java.util.Random;

public class montecarlomethod {); } } } }
|
http://www.javaprogrammingforums.com/loops-control-statements/13537-monte-carlo-method.html
|
CC-MAIN-2013-20
|
en
|
refinedweb
|
Since I've seen so many poor solutions to this problem I thought I'd share mine. Here's how I make printer friendly pages.
1. Add a method in your base class that looks like this:
def getMainTemplate(self):
    """ return the suitable METAL header object """
    # assuming zpt/main_template.zpt
    template_obj = self.main_template
    # assuming the "first" line of main_template.zpt to
    # look like this:
    # <metal:block metal:
    return template_obj.macros['master']
2. Change all your Page Templates to refer to a method rather than a macro directly so that pages like index_html.zpt start like this:
<html metal:
I've seen the hard-coded way too many times, where people do something like this:

<html metal:

which gives you no flexibility.
3. Now make a copy of main_template.zpt called print_main_template.zpt; the most important change is to make print.css load by default. Here's what it should look like somewhere inside the <head> tag:
<link rel="stylesheet" type="text/css" href="/screen.css" media="screen" /> <link rel="stylesheet" type="text/css" href="/print.css" />
Note how the print.css link tag is now not conditional. Before, in main_template.zpt, it should have looked like this:
<link rel="stylesheet" type="text/css" href="/print.css" media="print" /> <link rel="stylesheet" type="text/css" href="/screen.css" media="screen" />
And note how the order is stacked, just to be extra safe with weird browsers that don't understand the media condition.
As a last optional feature, you should add these lines at the bottom of the template 'print_main_template.zpt':
<script type="text/javascript"> window.print(); </script> </body> </html>
Another tip is to add something like this to the footer because it becomes useful when you look at a printed copy:
<div id="footer"> Printed from <span tal:</span> on <span tal:</span>
4. Now rewrite the method getMainTemplate() to become usefully intelligent:
def getMainTemplate(self):
    """ return the suitable METAL header object """
    if self.REQUEST.get('print-version'):
        # assuming zpt/print_main_template.zpt
        template_obj = self.print_main_template
    else:
        # assuming zpt/main_template.zpt
        template_obj = self.main_template
    # assuming the "first" line of main_template.zpt to
    # look like this:
    # <metal:block metal:
    return template_obj.macros['master']
5. Prepare the interface now for the printer friendly page. This can be done in two different ways. One way is to put a link in the footer or byline like this:
<a href="?print-version=1">Print this page</a>
Or if you want to force a particular page to always be printer friendly, for example print_invoice.zpt, then write it like this:

<tal:item<html metal:
As a final point: how you solve your web design with screen.css and print.css varies. One way is to define multiple CSS files, each suitable for individual things, like this example shows:
<link rel="stylesheet" type="text/css" href="/typography.css" /> <link rel="stylesheet" type="text/css" href="/print.css" media="print" /> <link rel="stylesheet" type="text/css" href="/screen.css" media="screen" />
An alternative solution is to not expect print.css to stand on its own two legs but only be a supplement to the general CSS file. When doing this you're probably just going to want to override some things and hide some other things, like this example from a print.css:
body { width:100% !important } form#login, #navigation, .also-online { display:none }
To conclude
This gives you a robust framework for enabling printer friendly pages that are quite different from the main template and doing it like this means that you don't have add conditional hacks to your main template that displays certain things if in printer friendly mode or not.
Most importantly, this gives you the framework for adding other versions of main template. For example these:
mobile_main_template.zpt (guess what for)
minimal_main_template.zpt (for things like Help page popups)
A healthy and fair use of METAL macros is also key to ensuring that you don't have to repeat yourself too much in the copies of main_template.zpt.
Good luck!
|
http://www.peterbe.com/plog/tip-printer-friendly-pages-zope
|
CC-MAIN-2013-20
|
en
|
refinedweb
|
application. A session bean represents a single client accessing the enterprise application deployed on the server by invoking its method. An application may contain... an application by an example that contains a session bean and a CMP but not able Dear Sir ,
Which book is good for learning EJB .
THANKS AND REGARDS,
SENTHILKUMAR.P
EJB 3.0
are the Java EE server side components that run inside the ejb container... the database and communicating with the server.
The
EJB Container
An EJB container is nothing but the program that runs on the server Container or EJB Server
EJB Container or EJB Server
An EJB container is nothing but the program that runs on the server... less load and sends the request to that
server.
EJB container encapsulates
EJB
server ur using cannot always instantiate as many beans as the number of active... clients communicating with ur server, there will not always 100 session beans...://
Thanks
ejb - EJB
ejb plz send me the process of running a cmp with a web application send me an example
Application Servers Available in Market. Web Servers. J2EE server.
details of the
EJB let's look at some of the EJB Application Servers available in the
market.
Application Servers...Application
Servers Available in Market
java EJB - EJB
java EJB Please,
Tell me the directory structure of EJB applications
and how to deploy ejb on glassfish server
EJB deployment descriptor
;
Deployment descriptor is the file which tells the EJB server
that which classes make... application. In the example given below our application consists of
single EJB...-beans>
</application-client>
<ejb-name>
Difference between Web Server and Application Server
and
Application Server. Web Server handles HTTP and HTTPS request and response while the
Application server allows business logic to client through various protocols... response
generation to some other program.
An application server are designed
Description of EJB 3
Description of EJB 3
Enterprise beans are the Java EE server side components
that run inside the ejb container and encapsulate the business logic of an
enterprise application. EJB3
JBoss Application Server
JBoss Application Server
Introduction to JBoss Application Server
JBoss is a free, open source application server under the LGPL license that is widely deployment descriptor
EJB deployment descriptor
Deployment descriptor is the file which tells the EJB server...;
</application-client>
<ejb-name>
JBoss Application Server
JBoss Application Server
JBoss is an open source Java EE-based application server... is developed and supported by a wide network of programmers.
EJB
3.0
Attachement in Web Service + EJB 3 + Jboss
Attachement in Web Service + EJB 3 + Jboss How to send attachements in Web Service using EJB3 with JBoss Application Server
What is EJB 3.0?
application server. There are many application servers are available both free... the Application Server. Stateless session is easy to develop and its
efficient...
What is EJB 3.0
EJB Project
EJB Project How to create Enterprise Application project with EJB module in Eclipse Galileo?
Please visit the following link:
j2ee - EJB
for more information. i want to know the ejb 2.0 architecture by diagram and also ejb 3.0 architecture
i want to know the flow in ejb 2.0 and ejb 3.0
Intro Java help with Double
Intro Java help with Double I have to evaluate a math expression using double in Java. I can input the code fine for the expression, but I don't know how to view the answer that it gives me. help
EJB, Enterprise java bean- Why EJB (Enterprise Java Beans)?
Why EJB (Enterprise Java Beans)?
Enterprise Java Beans or EJB for short is the
server-side component architecture for the Java 2 Platform
Intro please - Hibernate
Intro please Hi,
Anyone please tell me why we go for hibernate? is there any advanced feature?
Thanks,
Prabhakaran. Hi friend,
Here is the detail information about hibernate.
Read for more
EntityBean - EJB
with storing and retrieving of application data, can now be programmed with Java Persistence API starting from EJB 3.0.
Read for more information.
Tomcat server/any application server
Tomcat server/any application server how the server understands request is coming from client and how can it give response within very short span of time
Application Server and Web Server - WebSevices
Application Server and Web Server General difference, Application... connectivity and messing .While an application server exposes business logic.... An application server providers allows the client to access the business logic for use
configure Datasource in enterprise application - EJB
MDB - EJB
in an EJB server - all the Swing code you've supplied is not MDB, its regular JMS MessageListeners / consumers as its not using MDBs or EJB.
import javax.swing.
weblogic server
;WebLogic is a server software application that runs on a middle tier, between back-end... application on any client to interoperate with server applications, Enterprise... and applications.
WebLogic server is based on Java 2 Platform, Enterprise Edition (J2EE
Application Servers Available in Market. Web Servers. J2EE server.
Application Server - JNDI
Application Server How can we create Domain in Weblogic9.1 application server and also how can we create jdbc connection pooling by using oraclethin driver and how can we configure it to jndi
Weblogic - EJB
Weblogic How can i download the weblogic sever of application develop could u provide the link for that. Hi friend,
Download the weblogic sever of application visit to ::
DIFFERENCE BETWEEN APPLICATION SERVER AND WEB SERVER
DIFFERENCE BETWEEN APPLICATION SERVER AND WEB SERVER What is the difference between application server and web server
NullPointerException - EJB
page on the browser with the following exception in the server logs...] at java.lang.Thread.run(Thread.java:595)
PLEASE HELP..... I am new to ejb and jboss
intro. - Java Beginners
difference - EJB
for many clients.
*)It maintain the state of the client with server
my question - EJB
my question is it possiable to create web application using java beans & EJB's with out implementing Servlets and jsps in that java beans and EJB's
Jboss 3.2 EJB Examples
server.
Writing EJB Code is not very difficult and mostly it follows the same pattern for all applications. But deployment of the EJBean in application server has... these lessons.
It is always a good practice to develop J2EE application in separate
Tutorial - Sun Java System Application Server Platform Edition
Sun Java System Application Server Platform Edition
... System Application Server Platform
version 9 for the deployment and testing of our... provided by it.
The Sun Java System Application Server Platform Edition 9
Architecture of application
to comunicate between client and middleware running on top of application server, some... components:
- Desktop client application (multi-user, multi-role) - Swing... care about database server and client communication?
- any other suggestion
server - Development process
server difference between application server and web server ... that the web server does not support to EJB (business logic component) component... the following link:
Describe the effects of a server failure on the application
Describe the effects of a server failure on the application... an application within an IBM WebSphere Application Server environment Next
Describe the effects of a server failure on the application
Developing Distributed application using Enterprise Java Beans, J2EE Architecture, EJB Tutorial, WebLogic Tutorial.
of the application server.
J2EETM Architecture...,
or objects in an application server...
Two-tier application
Application Server
Application Server
WAST
WAST (Web Application Server Toolkit) is a framework for developing web application server adapters. It contains core and UI
Stateful Session Beans Example, EJB Tutorial
account onto the server
IV. Running the account Application Client ... to develop, deploy, and run a simple Java EE application named account
using... application consists of an enterprise bean, which performs the
transactions, and two
Web Server
server does not
support to EJB (business logic component) component.
A computer...
Web Server Introduction
A web server is the combination of a computer and the program installed on it.
ejb
ejb why
ejb components are invisible components.justify that ejb components are invisible
ejb
ejb what is ejb
EJB stands for Enterprise JavaBeans
Introduction To Enterprise Java Bean(EJB). WebLogic 6.0 Tutorial.
component-based distributed
application using the java language. EJB...
Enterprise Java beans(EJB) compliant application servers.
Prerequisites... Java Beans (EJB)?
Application
Free JSP, Free EJB and Free Servlets Hosting Servers
Free JSP, Free EJB and Free Servlets Hosting Servers...;
MyCGIserver
- Free Hosting server provides..., JDBC, SQL, WAP and WML. Server provides
5MB space and FTP access to the site:
EJB directory structure
. Just follow the above mentioned steps and test your EJB application
using...
EJB directory structure
The tutorial is going to explain the standard directory
structure of an EJB
EJB
EJB How is EJB different from servlets
EJB what is an EJB container?
Hi Friend,
Please visit the following link:
EJB Container
Thanks
EJB Hello world example
EJB environment setup. This simple application will required three different... EJB Hello world example
... step towards learning of any application or
programming language. In the given
Application Server : Java Glossary
Application Server : Java Glossary
An Application server is a server side program that is
used... based applications uses application servers.
Application server are developed
Ejb
Ejb what kind of web projects requires ejb framework
http://www.roseindia.net/tutorialhelp/comment/84769
Permspace can get corrupted on Solaris Sparc 7,8,9 if it is positioned at
0x80000000. This is reproducible even with one of the demo programs:
| 1. get hold of a complete J2SE 1.4.2_07 (for example) SDK installation.
| One of mine happens to live at /opt_earth/java/j2sdk1.4.2_07/. Adapt
| path names below suitably.-- Make sure DISPLAY is set, but this is
| just so the JVM has something nice to do.
|
| 2. cd /opt_earth/java/j2sdk1.4.2_07
| bin/java -Xms300m -Xmx768m -XX:MaxPermSize=1440m \
| -jar demo/jfc/Java2D/Java2Demo.jar
| (from another terminal window) locate the pid of this java process,
| and
| pmap $pid
|
| 3. Inspect the pmap output. Find the segment or two which make up the
| java heap. The permanent generation segment will be the next one
| above these in the address space, i.e. the next one below these in
| pmap output. For example, this might look like:
| ...
| 6F380000 8K read/write/exec [ anon ]
| 6F400000 34112K read/write/exec [ anon ] <<<<java heap
| 74950000 273088K read/write/exec [ anon ] <<<<java heap
| 9F400000 6912K read/write/exec [ anon ] <<<<perm gen
| F9580000 8K read/write/exec [ anon ]
| Note that the segments are spaced such that there's room to grow the
| heap to 768m, and room to grow the perm gen to 1440m, and additional
| red zone padding between segments.
| Note also that these segments are created from higher towards lower
| addresses. The 0x9F400000 was chosen by the kernel to accommodate
| a 1440m segment plus redzone padding, given that another segment
| already started at 0xF9580000. Thus the requested MaxPermSize
| determines where the perm gen segment will start (the stuff at
| higher addresses will not vary much from one run to the next; but
| it will depend on the exact sizes of several shared libraries, and
| thus will vary depending on Solaris version and patch levels.)
|
| 4. Work out how much larger MaxPermSize will need to be in order to
| get the base address of the perm gen segment just below 0x80000000,
| but the already-occupied part of it extends above 0x80000000. This
| may take a few trial and error iterations, since small changes are
| sometimes absorbed by the padding. For example:
| bin/java -Xms300m -Xmx768m -XX:MaxPermSize=1940m \
| -jar demo/jfc/Java2D/Java2Demo.jar
| ...
| 4FF7C000 96K read/write/exec /usr/dt/lib/libXm.so.4
| 50000000 34112K read/write/exec [ anon ]
| 55550000 273088K read/write/exec [ anon ]
| 80000000 6400K read/write/exec [ anon ]
| F9500000 352K read/exec /opt_earth/java/j2sdk1.4.2_07/jre/lib/sparc/libawt.so
| ...
| and I got the same at 1942m, but the next 2-MBy increment did the
| trick on my system:
|
| 5. bin/java -Xms300m -Xmx768m -XX:MaxPermSize=1944m \
| -jar demo/jfc/Java2D/Java2Demo.jar
| Exception in thread "main" java.lang.NoSuchMethodError: sun.awt.image.InputStreamImageSource: method <init>()V not found
| at sun.awt.image.URLImageSource.<init>(URLImageSource.java:23)
| at sun.awt.SunToolkit.createImage(SunToolkit.java:529)
| at java2d.DemoImages.getImage(DemoImages.java:87)
| at java2d.DemoImages.<init>(DemoImages.java:72)
| at java2d.Java2Demo.<init>(Java2Demo.java:113)
| at java2d.Java2Demo.main(Java2Demo.java:478)
|
| This is le bug.
|
| The progress-bar graphics in the preliminary window will have frozen
| up showing "Loading image" (for example - the victim class would
| depend on where exactly each class falls in the perm gen segment
| which now straddles the 0x80000000 line), and the java process is
| sitting there doing nothing, ready to debug with everything you can
| think of throwing at it - SIGQUIT, pmap, gcore, dbx, SA...
| In my case, the address space layout around the perm gen segment was
| now:
| 4FB7C000 96K read/write/exec /usr/dt/lib/libXm.so.4
| 4FC00000 34112K read/write/exec [ anon ]
| 55150000 273088K read/write/exec [ anon ]
| 7FC00000 4352K read/write/exec [ anon ]
| F9500000 352K read/exec /opt_earth/java/j2sdk1.4.2_07/jre/lib/sparc/libawt.so
| so of those 4352K's worth of page table mappings already created in
| the perm gen, there are 4096k in 0x7FC00000-7FFFFFFF and 256k above
| 0x80000000.
|
| 6. It seems that similar problems can also happen when the perm gen is
| above 0x80000000 but the java heap crosses the 0x80000000 mark, or
| even when the java heap is strictly below and the perm gen is strictly
| above this (and some copying of stuff from the heap to the perm gen
| occurs), but this is not as easy to reproduce with a small demo such
| as the Java2D one - you'd need to run something which loads far more
+ classes and exercises garbage collection seriously.
2.4 Describe problem impact to the customer. (optional)
###@###.### 2005-2-25 14:28:42 GMT
EVALUATION
I'm unable to reproduce this problem, using the example method
shown.
1$ uname -a
SunOS analemma 5.8 Generic_108528-29 sun4u sparc SUNW,Ultra-60
1$ /java/re/jdk/1.4.2_07/promoted/latest/binaries/solaris-sparc/bin/java -Xms300m -Xmx768m -XX:MaxPermSize=1940m -jar /java/re/jdk/1.4.2_07/promoted/latest/binaries/solaris-sparc/demo/jfc/Java2D/Java2Demo.jar
2$ /bin/ps -ef | grep 1.4.2
pbk 17527 17526 14 15:14:53 pts/20 0:15 /java/re/jdk/1.4.2_07/promoted/latest/binaries/solaris-sparc/bin/java -Xms300m
2$ pmap 17527
17527: /java/re/jdk/1.4.2_07/promoted/latest/binaries/solaris-sparc/bin/java
....
50000000 34240K read/write/exec [ anon ]
55550000 273088K read/write/exec [ anon ]
80000000 7168K read/write/exec [ anon ]
....
and Java2Demo seems to run just fine.
Another way to know where the various parts of the Java object heap are is to run with -XX:+PrintHeapAtGC, which produces maps like:
Heap before GC invocations=0:
Heap
def new generation total 33152K, used 16043K [0x50000000, 0x52150000, 0x55550000)
eden space 32192K, 49% used [0x50000000, 0x50faafc0, 0x51f70000)
from space 960K, 0% used [0x51f70000, 0x51f70000, 0x52060000)
to space 960K, 0% used [0x52060000, 0x52060000, 0x52150000)
tenured generation total 273088K, used 0K [0x55550000, 0x66000000, 0x80000000)
the space 273088K, 0% used [0x55550000, 0x55550000, 0x55550200, 0x66000000)
compacting perm gen total 7168K, used 6945K [0x80000000, 0x80700000, 0xf9400000)
the space 7168K, 96% used [0x80000000, 0x806c8578, 0x806c8600, 0x80700000)
where you can see that the "compacting perm gen" starts at 0x80000000. (And you can see from the fact that the perm generation is the only one that's nearly full that this collection is from a failed attempt to allocate in the perm generation.) Of course, this way of looking at the heap only works if the VM runs long enough to cause a collection, and doesn't hang with the NoSuchMethodError you show.
I've tried other sizes for the -XX:MaxPermSize= to get the permanent generation to start at various other places near the 2^31 boundary. E.g.,
compacting perm gen total 7168K, used 6949K [0x79c00000, 0x7a300000, 0xf9400000)
the space 7168K, 96% used [0x79c00000, 0x7a2c9638, 0x7a2c9800, 0x7a300000)
and
compacting perm gen total 7168K, used 6951K [0x86400000, 0x86b00000, 0xf9400000)
the space 7168K, 96% used [0x86400000, 0x86ac9db0, 0x86ac9e00, 0x86b00000)
to see if *crossing* that boundary was the problem, but I've been unable to reproduce the problem.
Why am I unable to reproduce the problem, especially since you gave such clear instructions?
We did used to have bugs like this with spaces laid out across the 2^31 boundary, but I think we fixed them all a while ago. We might have inadvertently introduced a new one. But if I can't reproduce the problem it will be hard to track down.
###@###.### 2005-2-25 23:35:10 GMT
I'm now able to reproduce the problem, on a recent JDK-1.6.0 fastdebug build.
We seem to have failed a binary search lookup. Since no garbage collection has happened, this is not a GC problem. I'm reassigning it to the runtime group, even though I think it might be fun to track this one down. I've attached the hs_err file from the fastdebug failure, and the dbx stack trace.
###@###.### 2005-03-01 18:18:56 GMT
I'm pretty sure the problem is this line in symbolOop.hpp:
return (int)(intptr_t(this) - intptr_t(other));
where those casts (that look like constructors!) should be to uintptr_t's, since the symbolOops can be on either side of the sign-bit.
Then there's the cast of the difference to an int, but I think that's okay as long as the symbolOops aren't too far apart, e.g., in 64-bit VMs with huge permanent generations. If no one uses anything except the sign of the result, it might be better to just compute -1, 0, or 1, rather than trying to fit the difference into an int.
###@###.### 2005-03-02 18:37:49 GMT
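For what it's worth, a minimal sketch of the fix being described (illustrative names only, not the actual HotSpot sources): comparing the raw addresses through uintptr_t, and returning only the sign of the result, keeps the comparison correct when the two objects straddle the 0x80000000 boundary.

#include <cstdint>

// Illustrative stand-in for the symbolOop comparison discussed above.
// Casting through uintptr_t avoids the sign flip when the two addresses lie
// on opposite sides of 0x80000000; returning -1/0/1 avoids squeezing a large
// difference into an int.
int compare_by_address(const void* self, const void* other) {
    const uintptr_t a = reinterpret_cast<uintptr_t>(self);
    const uintptr_t b = reinterpret_cast<uintptr_t>(other);
    if (a < b) return -1;
    if (a > b) return 1;
    return 0;
}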
http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6233169
#include <image_view.hpp>
Image view consists of a pixel 2D locator (defining the mechanism for navigating in 2D) and the image dimensions.
Image views to images are what ranges are to STL containers. They are lightweight objects, that don't own the pixels. It is the user's responsibility that the underlying data remains valid for the lifetime of the image view.
Similar to iterators and ranges, constness of views does not extend to constness of pixels. A const image_view does not allow changing its location in memory (resizing, moving) but does not prevent one from changing the pixels. The latter requires an image view whose value_type is const.
Images have interfaces consistent with STL 1D random access containers, so they can be used directly in STL algorithms like:
std::fill(img.begin(), img.end(), red_pixel);
In addition, horizontal, vertical and 2D random access iterators are provided.
Note also that image_view does not require that its element type be a pixel. It could be instantiated with a locator whose value_type models only Regular. In this case the image view models the weaker RandomAccess2DImageViewConcept, and does not model PixelBasedConcept. Many generic algorithms don't require the elements to be pixels.
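As a small illustration of the points above (a sketch using GIL's convenience typedefs and the all-in-one header from this Boost release; the image size and colour are arbitrary):

#include <boost/gil/gil_all.hpp>
#include <algorithm>

int main() {
    using namespace boost::gil;
    rgb8_image_t img(64, 64);           // the image owns the pixels
    rgb8_view_t  v = view(img);         // lightweight view over them; owns nothing
    const rgb8_pixel_t red(255, 0, 0);
    std::fill(v.begin(), v.end(), red); // STL algorithm over the view's 1D iterator range
    return 0;
}

The view stays valid only as long as img does, which is exactly the user responsibility mentioned above.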
http://www.boost.org/doc/libs/1_42_0/libs/gil/doc/html/g_i_l_0040.html
In the second half of this two part article I will demonstrate how to use the powerful capabilities of .NET to incorporate simple code generators into VS.NET, helping you and your co-workers become more productive. The example in this section actually uses macros and the Automation extensibility model. For more advanced code generators you have the option of using the Reflection.Emit namespace or System.CodeDOM. The macro example uses string substitution. The Reflection.Emit namespace emits CIL (Common Intermediate Language), and the CodeDOM can generate C#, VB .NET, and J# .NET source code.
Let's set the stage for the task we are trying to automate and proceed.
Defining the Code Template
The first step in using a macro to generate code is to open the Macros IDE, add a new module, add a macro, and stub out the code template. The steps for stubbing out a new macro follow; the syntactical template for a property is shown in listing 1 and the templatized version is shown in listing 2.
- To create a new macro, open Visual Studio .NET—I am using VS.NET 2003, but the example works in version 1—and select Tools|Macros|Macros IDE
- In the Macros Project Explorer click on the MyMacros project, right-clicking Add|Add Module from the Project Explorer context menu
- Add a public subroutine named WriteProperty to the module
After step 3 we are ready to stub out the code template. Listing 1 contains a syntactical example of the property we want to write and listing 2 converts that example to a template.
Listing 1: The syntax of a field and its associated property.
Private FField As String

Public Property Field() As String
    Get
        Return FField
    End Get
    Set(ByVal Value As String)
        If (FField = Value) Then Return
        FField = Value
    End Set
End Property
Listing 2: A templatized version of a property.
Private Cr As String = Environment.NewLine

Private mask As String = _
    "Public Property {0}() As {1}" + Cr + _
    " Get" + Cr + _
    " Return {2}" + Cr + _
    " End Get" + Cr + _
    " Set(ByVal Value As {1})" + Cr + _
    " If ({2} = Value) Then Return" + Cr + _
    " {2} = Value" + Cr + _
    " End Set" + Cr + _
    "End Property" + Cr
The code in listing 2 defines a field named mask that contains a parameterized version of a property. Literal line breaks (the Cr constant) are used to manage the layout, and parameters are used to represent aspects of the template that will be replaced by literal values. The parameter {0} represents the Property name, {1} represents the Property type, and {2} will be replaced with the underlying field value.
The next step is to write some code that substitutes literal values for the parameters in the mask string. We can accomplish this step by using the plain old vanilla InputBox function. We need to prompt for the property name and data type and the field name.
Rather than attempting to get all of the code right all at once, we can stage our solution, testing incremental revisions as we proceed. To that end we can add the queries to obtain the parameterized values and send the results to the Output window to ensure that the code is generated correctly. The staged revisions are shown in listing 3.
Listing 3: Incrementally testing additions to the code generator.
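As a rough sketch of such a staged macro (it assumes the mask field from listing 2 is declared in the same module; the MsgBox call merely stands in for writing to the Output window):

Public Sub WriteProperty()
    ' Prompt for the three parameter values used by the mask.
    Dim propertyName As String = InputBox("Property name?", "WriteProperty")
    Dim propertyType As String = InputBox("Property type?", "WriteProperty")
    Dim fieldName As String = InputBox("Field name?", "WriteProperty")
    ' Substitute them into the parameterized template.
    Dim code As String = String.Format(mask, propertyName, propertyType, fieldName)
    ' Stand-in for sending the generated code to the Output window.
    MsgBox(code)
End Sub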
Run the macro in the Macros IDE by placing the cursor anywhere in the WriteProperty subroutine and pressing F5. The code displays three InputBoxes and then substitutes the parameterized values in mask with the input values, writing the results to the Output window (shown in figure 1).
Figure 1: The generated code written to the Output window.
The code generates correctly, as depicted in the figure. The next step is to send the generated code to a VB .NET source code file.
http://www.developer.com/net/net/article.php/2213041/Creating-Simplified-Code-Generators-in-VSNET-Part-II.htm
24 May 2012 05:24 [Source: ICIS news]
By Heng Hui
SINGAPORE
POM’s downstream industries rely solely on consumer spending, but consumer confidence is currently at lows because of prevailing economic uncertainties. The eurozone is still neck-deep in a debt crisis, while the
Asia’s demand for POM is estimated at slightly less than 800,000 tonnes/year, and is expected to grow at an annual rate of about 4-5%, in line with the average economic growth of the region.
POM is considered a low-cost plastic that can replace metals and other engineering resins, such as nylon 6,6, polyethylene terephthalate (PET), polycarbonate (PC), polybutylene terephthalate (PBT), and polyvinyl chloride (PVC), in selected automobile-manufacturing applications. It is used in gear drives, pumps, conveyor belts, hand tools, toys, clock and watch parts, and medical devices.
Global POM capacity is currently estimated at above 1.1m tonnes/year, with Asia accounting for more than 70% of the total.
An upbeat outlook on demand growth for engineering plastics, particularly in
Over the next two years, some 280,000 tonnes of new POM capacity are confirmed to come on stream from major players in
In 2013, a new 60,000 tonne/year facility will be started up by
Thai Polyacetal’s new 45,000 tonne/year facility at Map Ta Phut in Rayong province is also due to come on stream in the second quarter of next year. The company has an existing 55,000 tonne/year POM facility at the site.
In
It is expected to add 90,000 tonnes/year of capacity in early 2014 in
Korea Engineering Plastics is also adding a 35,000 tonne/year facility in
Smaller capacity increases are also being planned in
With
In 2011,
Market players also warned that if domestic POM production grows too big in
Spot polyacetal prices in Asia were assessed at $1,580-1,700/tonne (€1,264-1,360/tonne) FOB (free on board) NE (northeast)
Prices look set to fall given the current supply glut in Asia, as demand from the key
Scheduled POM plant shutdowns are due in the next few months, but these are unlikely to tighten supply and lift prices as they have been anticipated by the market, market sources said.
PTM Engineering Plastics' 60,000 tonne/year POM resin facility in
Polyplastics will also shut its 100,000 tonne/year POM facility at
($1 = €0.80)
Request for your Asia polyacetal (POM) sample.
http://www.icis.com/Articles/2012/05/24/9562519/asia-polyacetal-supply-glut-to-stay-for-two-yrs-on-tepid-demand.html
Can anyone give me an example on how to use the TLabel in Borland builder?
Thanks a lot!
Pete
You choose the TLabel component from the component pallete. Then you click on the form where you want to put it. You can then change the text in the label by changing the Caption property in the object inspector.
Need any more info?
Thanks for your response. I tried that, but it doesn't give me the results I want.
All I want is to print "Hello world" and all I get is a grey screen without output.
As you can see I'm just learning this and any feedback on how to get that output will be appreciated.
Thanks!
In other words, I want to have "hello world" print by using code; either this code or any code that works with builder5.
#include <iostream.h>
main ()
{
cout<<"hello world";
return 0;
}
Jim S.
Ok, I'll try to make it a little more detailed.
1) Click on the label component.
2) click on the form(It's the grey window that will say something like "Form1" unless you changed that). It will display a label with the labels name as the caption.
3) There should be a window to the left of the form called the Object Inspector. The top drop-down list should say something like "Label1 : TLabel", and below the drop-down list there should be a list of properties. To change the text you change the text in the Caption property. On the form it should now say what you put in that property.
If you still can't figure it out, I could give you an example "Hello World" project.
Hey Thanks again, I was able to print with the instructions. Is there any way I can print it just by using code as in the example?
If you can give an "Hello World" example with code and where to inserted, it would be great.
Thanks for your help!
Jim S.
There might be a way, but why would you want to do that? If you want to change the Caption in code, you can use code like:

Label1->Caption = "whatever";

I don't really know if there is a way to make a label using JUST code in BCB.
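For reference, the VCL does let you create a label entirely in code; a rough sketch (assuming it runs inside a method of the default form, for example its OnCreate handler):

void __fastcall TForm1::FormCreate(TObject *Sender)
{
    TLabel *label = new TLabel(this); // the form owns the label and frees it on destruction
    label->Parent = this;             // parenting it to the form makes it visible
    label->Caption = "hello world";
    label->Left = 10;
    label->Top = 10;
}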
-kje11
Thanks once again for all your help! The only reason I wanted to do it like that is to practice what I learned in class today. In the Lab, we have this old compiler called Turbo C++ and even though it's from Borland, it does things differently. With that program, the only way to get that output is by typing the code shown in the example.
Thanks for all your help!!!
Jim S.
BCB and VC++ have the capacity to build projects in text (aka console or DOS) mode and graphical mode. TLabel is one way to get things shown on the screen in graphical mode. To use iostream and cout you would be in text/console/DOS mode however. If you click on File and then new application you should be given a menu of options as to which mode you wish to use to build/make the current project/application. Choose the Console/text/Dos mode option if you want to use text mode programming. Leave it at default setting if you wish to use graphical/Windows mode.
In Windows, all output to the screen is in graphical mode. cout and cin play no (or very little routine) role. Things are "event" driven, meaning a button is "pushed", the mouse moves over a given spot on the screen, a change occurs in an edit box secondary to user input, etc., none of which are pertinent to text mode. Although it isn't obvious when using BCB, use of classes, inheritance, etc. is useful in Windows, and use of control loops, knowing how lists, arrays, queues, etc. work, and even knowing about the standard template library all have a big impact. Learning how to program in text mode before going to graphical mode, even though you can make a basic Windows program pretty easily with BCB, is something I think will be worth your while.
http://cboard.cprogramming.com/cplusplus-programming/10929-tlabel-borlandbuilder5-help.html
Hello all,
I am a new programmer and I am trying to look at how two classes can interact if one class is called by another. What I am trying to do is have a control class. This control class will then make a decision in the constructor about what "sub" classes to call (they are not inherited classes so not strictly a sub class). Since I don't know ahead of time which sub classes will be called I use a pointer in my class header file. In the constructor I then make a decision about the sub classes to call, instantiate the sub classes, and set the pointer to that sub class.
Now here is my problem; once the constructor finishes, the new sub class I made is destroyed. Later in the control class I have a function that uses the sub class pointer. The pointer works fine and all of the data is passed through the pointer and the function. My worry is that I am working off an artifact in the memory. Since the destructor is called on the sub class, the pointer should be randomly pointing into space. Am I missing something about constructors/destructors and pointers? I am sure I am missing something, but I want to learn good technique and make sure I have easily modifiable code. Any help or insight is appreciated.
Below is my test code:
main file:

#include "stdafx.h"

using namespace std;

int main()
{
    int kb = 0;                          // just a value to halt the program from closing the window
    {                                    // defining a new scope to test destruction and construction
        int val_test = 5;                // initialize the value val_test
        Class1 input(0);                 // call class one and initialize class one's variable
        for (int ii = 0; ii < 10; ii++)  // loop to test this setup
        {
            cout << "my value: " << ii << endl;
            input.set(ii);               // resetting the values in class 1
        }
    }
    cin >> kb;                           // just a halt command
    return 0;
}

my control class header and cpp files:

#pragma once

class Class2;

class Class1
{
public:
    Class1(int a);          // constructor
    ~Class1(void);          // destructor
    void set(int a);        // set function
    int &get();             // get function
    friend class Class2;    // make class two a friend
private:
    int b;                  // integer variable testing value passing
    Class2 *Ptr;            // pointer to second class that is called from this class
};

#include "StdAfx.h"
#include "Class1.h"

using namespace std;

Class1::Class1(int a)
{
    b = a;                  // initial assignment
    Class2 sub_class;       // creating a class2 object
    Ptr = &sub_class;       // setting a pointer to class2 object
    cout << "constructor value of class1: " << b << endl;  // output statement to see the value was initialized
}

void Class1::set(int a)
{
    b = a;                                            // set the integer value
    cout << "Class1 stored value is: " << b << endl;  // output the set integer value
    cout << "Calling second sub_class" << endl;       // calling the pointer to class2
    Ptr->get(this);         // pointer to second class passes the pointer of this object
}

int &Class1::get()
{
    return b;               // return a reference to this value
}

Class1::~Class1(void)
{
    cout << "my control class destructor is called" << endl;  // tell me when the destructor is called
}

my sub class header and cpp files:

#pragma once

class Class1;

class Class2
{
public:
    Class2(void);              // constructor
    ~Class2(void);             // destructor
    friend class Class1;       // friending class 1 to see private data
private:
    int val;                   // value i want passed in from class1 object
    int *valPtr;               // pointer
    Class1 *Ptr;               // pointer to class one object
    void get(Class1 *cPtr);    // get function to get value from class1 object
};

#include "StdAfx.h"
#include "Class2.h"

using namespace std;

Class2::Class2(void)
{
    val = 0;   // initialize the value
}

void Class2::get(Class1 *cPtr)
{
    Ptr = cPtr;                                      // set the pointer to class1 object
    val = Ptr->get();                                // calling the class1 function get to pass the data to class 2
    cout << " Sub_class value is: " << val << endl;  // output what the data is
}

Class2::~Class2(void)
{
    cout << "my sub_class destructor is called" << endl;  // telling me when this destructor is called
}

thanks again and I hope this is readable
Specter
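For what it's worth, the usual way out of the dangling-pointer problem described above is to give the Class2 object the same lifetime as its owner, for example by allocating it on the heap in the constructor and releasing it in the destructor (a sketch against the classes above, not the original code):

Class1::Class1(int a)
{
    b = a;
    Ptr = new Class2();   // lives until ~Class1 runs, not just until the constructor returns
    cout << "constructor value of class1: " << b << endl;
}

Class1::~Class1(void)
{
    delete Ptr;           // release the owned Class2 object
    cout << "my control class destructor is called" << endl;
}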
http://cboard.cprogramming.com/cplusplus-programming/125372-question-about-constructors-destructors-pointers.html
Javascript autocompletions and having one for Sublime Text 2
Autocompletion is major software development productivity booster in modern programming text editors. This article discuss about Javascript autocompletion and how to make Sublime Text 2 editor to support them. If you are not familiar with Sublime Text 2 please see my earlier post what Sublime Text 2 is all about and how to tune it to be a superior programmer’s tool. Please note that information here applies to Sublime Text 2, Sublime Text 3 is underway (most ST2 plugins are open source and can be easily upgraded for ST3 compatibility).
1. Why autocompletion for Javascript is so hard
Javascript does not have a Java-like static typing system, meaning that it's not possible to know the types of variables like this, $, jQuery, etc. until the code is actually executed in the web browser. In statically typed systems like Java you declare the type of the variables when you introduce them, like int i or MyClass foo. In Javascript you have only var or let. This makes Javascript lighter to write, but it also effectively means that while writing the code the information about the variable contents is not really available. Thus, your text editor cannot know how to autocomplete your typing because it cannot know what autocompletions are available.
What makes things even worse, and sets Javascript apart from Ruby and Python, is the lack of native support for classes and modules. Prototypal inheritance gives great freedom – leading to the sad state of things where each Javascript framework has invented its own way to declare classes, modules and namespaces. There isn't one right way, but many incompatible ways. Thus, there is no single solution for extracting the classy type information, even at run-time.
2. Introducing type information into Javascript
There exist several ways to annotate Javascript source code or overlay type information over it. By using these type hints your text editor can know that $ is actually jQuery and can guess what kind of functions it could provide.
First of all, most text editors have their own type information format and files which are generated from the source code or entered as manually maintained files. Examples include Sublime Text 2 .sublime-completions files and Aptana Studio Javascript autocompletions. To make the editor support autocompletion one needs to integrate a source code scanner plug-in which extracts the type information and turns it to the native symbol database format supported by the editor.
Please note that autocompletion is not the only reason to have static typing support or type hinting. When project size and number of developers grow, static typing becomes more and more preferable as it decreases the cognitive load needed to work with the codebase, reducing human errors.
3. JsDoc directive scanning
JsDoc is a loosely followed convention to annotate Javascript with source code comments containing references to packages, namespaces, singletons, etc. Basically you write something like @class foo.bar.Baz in /** */ comment blocks and it is picked up. JsDoc was originally created to generate Javadoc-like API documentation for Javascript projects. If you have any self respect you document your source code with comments. Do this by following the JsDoc best practices and you'll be lucky: the same type information can be utilized for generating autocompletions too.
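For illustration, a small JsDoc-annotated snippet along these lines (the names are made up); annotations like these are what the scanners discussed below can pick up:

/**
 * @namespace foo.bar
 */

/**
 * @class foo.bar.Baz
 * @constructor
 * @param {String} name Human readable name.
 */
function Baz(name) {
    this.name = name;
}

/**
 * @method greet
 * @return {String} A greeting for this instance.
 */
Baz.prototype.greet = function () {
    return "Hello, " + this.name;
};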
JsDoc class and namespace hints are especially needed when you are using something like RequireJS for defining your modules. This is because naive source code scanners have a very hard time determining module exports and classes from this kind of source code and, as far as I know, no IDE supports RequireJS natively yet.
Note that personally I prefer superior JsDuck over JsDoc for actual HTML documentation generation.
4. TypeScript definitions and language
With some bad rap due to its owner, Microsoft, TypeScript is an open source project to provide type information for Javascript. TypeScript comes with two “modes” of operating.
The less invasive approach is to provide type information in externally typed interface files (example .ts for jQuery).
The more invasive, but easier to maintain, approach is to write your source code in the TypeScript language itself, which compiles down to Javascript and .ts type information files. The TypeScript language is a Javascript superset, so all Javascript is valid TypeScript. TypeScript adds optional support for classes, modules and static typing of variables. The TypeScript compiler will detect if you are trying to mix wrong types or use missing functions and gives a compiler error at the compile phase, when generating JS from TS.
Sublime Text 2 has a plugin to support TypeScript based autocompletions.
5. SublimeCodeIntel plug-in and OpenKomodo CodeIntel
OpenKomodo is a software repository for the open source parts of ActiveState’s Komodo Edit editor. It has a subsystem called CodeIntel for autocompletion and source code symbol scanning.
What makes CodeIntel interesting from the perspective of Sublime Text 2 is that CodeIntel is 1) open source 2) written in Python, making it easy to integrate with the Python based plug-in system of Sublime Text 2. Thus, there exist SublimeCodeIntel plug-in.
CodeIntel has a Javascript scanner. Based on its source code, it should provide JsDoc @class decorator support. However, I have never managed to get it working and there is an open stackoverflow.com question how to make SublimeCodeIntel to work with JsDoc directives. All help welcome.
6. Sublime Text 2 and CTags
Exuberant CTags is a generic “tag” type information extractor backend for various programming languages. As the name implies, it was originally created to autocomplete C source code. Sublime Text 2 has support for CTags with a plug-in which is called CTags.
I have not used this plug-in myself, so I hope someone can comment how well it works with Javascript autocompletion.
7. Manually generating autocompletions for your favorite Javascript project
Though not so advanced approach, this gave me a wow effect and motivation to write this blog post (instead of sitting under a palm tree sipping caipirinhas). I came across this little Python script in Three.js, a 3D engine for WebGL and Javascript.
Three.js has invented yet another Javascript class system of their own. But instead of using JsDoc @class, @module or @package like annotations they have a custom Python script which scans the Javascript codebase using regular expressions. Regexes match Three.js custom class declarations and then the script generates Sublime Text 2 autocompletion file based on the results. Crude, but apparently seems to work.
I recommend check the source code and see how you could apply this for your own little project if you are a major Sublime Text 2 user. The approach is not limited to Javascript, but should work for any dynamically typed language where you have control over how you define your namespaces.
http://css.dzone.com/articles/javascript-autocompletions-and
ZF-2921: Allow defining of the schema separator in Zend_Db_Adapter
Description
To simulate the schema system in RDBMS which don't support it (as MySQL, where SCHEMA is a DATABASE) I thought to patch the Zend_Db_Adapter_Abstract by adding a protected attribute called Zend_Db_Adapter_Abstract::$_schemaSeparator (default to '.') and a public method Zend_Db_Adapter_Abstract::getSchemaSeparator()
Then by simply changing the '.' string with a call to Zend_Db_Adapter_Abstract::getSchemaSeparator() in the following methods: Zend_Db_Table_Abstract::_setupMetadata() Zend_Db_Table_Abstract::_setupTableName() Zend_Db_Table_Abstract::_setupPrimaryKey() Zend_Db_Table_Abstract::insert() Zend_Db_Table_Abstract::update Zend_Db_Table_Abstract::delete() Zend_Db_Select::__toString() Zend_Db_Select::_join()
So you could extend an Adapter (eg MySQLi) by defining the attribute $_schemaSeparator (say with '__') and you get a sort of schema management analogous to the PostgreSQL one (for example), to group tables in namespaces.
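To make the idea concrete, here is a rough sketch of the shape of the proposal (simplified, illustrative class names, not the actual patch):

<?php
// Simplified sketch: the abstract adapter carries the separator and
// exposes it through an accessor.
abstract class Zend_Db_Adapter_Abstract_Sketch
{
    protected $_schemaSeparator = '.';

    public function getSchemaSeparator()
    {
        return $this->_schemaSeparator;
    }
}

// An adapter for an RDBMS without real schemas could then emulate them,
// so "schema" + separator + "table" becomes "schema__table" in generated SQL.
abstract class My_Mysqli_Adapter_Sketch extends Zend_Db_Adapter_Abstract_Sketch
{
    protected $_schemaSeparator = '__';
}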
I've also built a patch you can find in
The work estimate contains simply adding the patch to the core CVS and testing more widely...
Posted by Giacomo Tesio (shamar) on 2008-03-19T08:51:28.000+0000
The patch I wrote...
Posted by Wil Sinclair (wil) on 2008-03-25T20:25:42.000+0000
Please categorize/fix as needed.
Posted by Giacomo Tesio (shamar) on 2008-04-08T09:40:18.000+0000
I realized that, when using such a feature, you should set Adapter::$_autoQuoteIdentifiers = FALSE: otherwise, with a "virtual schema" called "schema", a table named "test_table" and Adapter::$_schemaSeparator = '__', you would get a query quoting the table as
schema__
test_table, which is clearly wrong.
To make it work better I'm considering a protected _schemaSeparator setter (say Adapter::_setSchemaSeparator(string $separator, boolean $explodeIdentifiers = false) )
$explodeIdentifiers set a protected Adapter boolean attribute (defined true by default in Zend_Db_Adapter_Abstract) which enable / disable the explode() call in Zend_Db_Adapter_Abstract::_quoteIdentifierAs()
What do you think about this?
Posted by Giacomo Tesio (shamar) on 2008-04-09T02:43:40.000+0000
Here is an updated patch which fixes a little bug in Zend_Db_Adapter_Abstract::_joinUsing() and follows the contributor guide (aka, built using svn diff)
Posted by Wil Sinclair (wil) on 2009-01-06T10:48:58.000+0000
No action on this issue for too long. I'm reassigning to Ralph for re-evaluation and categorization.
http://framework.zend.com/issues/browse/ZF-2921?page=com.atlassian.streams.streams-jira-plugin:activity-stream-issue-tab
19 January 2011 15:24 [Source: ICIS news]
TORONTO (ICIS)--
The increase this year would come after 3.6% growth in 2010 and a 4.7% decline in 2009 from 2008.
In an official outlook report, entitled “Jahreswirtschaftsbericht 2011”, the government said strong export demand in 2010 had sparked higher domestic demand as well.
Unemployment was falling, with overall employment reaching 40.5m people – the highest level since the country’s reunification, it said.
In fact,
Also,
At the same time, the government would consolidate public budgets and continue to phase out measures that supported the economy during the crisis, it added.
However,
Eurozone countries needed to take additional steps to improve “monitoring of economic policies,” the EU's growth and stability pact needed to be strengthened, and the eurozone needed to prepare for future liquidity and solvency crises, the government said.
Martin Wansleben, general manager of the German chamber of commerce (DIHK), said stability in the eurozone was key for
All eurozone member states needed to control government spending and take steps to improve their competitiveness, he said.
Wansleben also said that
Anton Borner, head of German exporters trade group BGA, warned
Higher interest rates could help dampen inflation, Borner said. However, the European Central Bank was not likely to raise rates given the fragile state of the economy in some eurozone countries, he added.
In
However, last week a chemical employers’ trade group.
http://www.icis.com/Articles/2011/01/19/9427553/berlin-forecasts-germanys-2011-economic-growth-at-2.3.html
TODO
Usability
- Check for the presence of all required data files in all games; or, as a somewhat less extensive measure, at least do this: For games which were shipped on multiple volumes (floppies, CDs), check for the presence of at least one file per volume.
- Check for the presence of our external support files like kyra.dat, sky.cpt, etc., and also verify that they are up-to-date. For details, see FR #1520433
- Add/unify "confirm exit" dialog, globally (see FR #1731025)
- Add global main menu dialog (see also below under GUI)?
Savefile manager
- Consider replacing "const char *" usages with "Common::String". This might or might not be a good idea, though -- don't just do it blindly!
File code
- several backends #define fopen, fread etc. -- this is bad, try to get rid of these hacks
- to get rid of all usages of fopen, etc. we could add backends/file and move the current file.cpp to backends/file/stdc (this is just a rough idea, mind you)
- at the same time, finally change File to read-only, and add a DumpFile class, which can be used for script dumps, screenshots etc.; ports can simply provide an "empty" implementations if they don't support dumping large files
- To enforce that no code uses fopen etc. directly, we could add our own #define's to scummsys.h to trigger errors in code doing it
GUI
- The options dialog may show a button for configuring the savepath even on systems where it is fixed -> not good. This button should be hidden/removed for these systems
- make a shared "main menu dialog", based on the SCUMM one
- accessible via the same hotkey in all engines
- Provides the following buttons/features in *all* games & engines: Resume, About, Quit
- Ideally also provides an options dialog based on the generic option dialogs in the launcher
- Engines can provide a subclass, which adds buttons/functions, like "save/load", or "help"
- For backends that need it (or maybe even for all), provide access to the "key remapper" and "virtual keyboard", once/if we add those globally
- Sugar on the cake: Display the engine name at the top, maybe also "ScummVM 0.x.y", and other goodies (ScummVM logo anyone?)
- Highlight the "default" button in dialogs (e.g. the classic MacOS way, drawing a fat border around it; or by using different coloring; or a combination). This falls under "usability", too.
OSystem
- get rid of the evil global gBitFormat!
- Add getOverlayBitFormat() method, so we can avoid using RGBToColor, colorToRGB, ARGBToColor, colorToARGB in tight loops (it would return a value of 8 to indicate palette mode, otherwise a bitformat value compatible to those used in graphics/colormasks.h)
- Remove slack in OSystem
- move getScreenChangeID functionality to EventManager
- what are these for (and can we remove them)?: screenToOverlayX, screenToOverlayY, overlayToScreenX, overlayToScreenY
- remove getOutputSampleRate -- instead, add a private API to the mixer code to setup the sample rate (like, a param to the Mixer constructor)
- Implement the RFC: Flexible keymapping via new EVENT_ (post 0.10)
- Further work on the modularization of OSystem
- Change backends to use namespaces ?! (Not really that useful, I guess, except for the Doxygen pages)
- Remove the 'addDefaultDirectory' calls from runGame in base/main.cpp. Reason: Their presence causes an asymmetry between the "detect a game" and "run a game" use cases, as different files are "seen" in each case. This can lead to subtle bugs, and also causes ugly code duplication in the AdvancedDetector code right now (there to work around asymmetries like this one)
- add plugin API to fetch a fresh desc string for a given target ?!?
- completely get rid of #pragma pack for increased portability
- rename scummsys.h; and/or split it (types.h, defs.h ... ?)
- Implement a "Main" class, from which backends derive. The "Main" of each backend then would become:
int main() {
    Main *m = new MyCustomMain();
    int retval = m->run();
    delete m;
    return retval;
}

class Main {
    StringMap settings;
    String command;
    String specialDebug;
    OSystem *system;
public:
    virtual int run(int argc, const char *argv[]) {
        registerDefaults();
        parseCommandLine(argc, argv); // A port w/o command line would simply set settings to a more suitable value
        loadConfig();
        PluginManager::instance().loadPlugins();
        system->initBackend();
        system->setWindowCaption(gScummVMFullVersion);
    }
    int registerDefaults();
    int parseCommandLine(int argc, const char *argv[]);
    void loadConfig();
    int processSettings();
    int runLauncherDialog();
};
https://wiki.scummvm.org/index.php?title=User:Fingolfin&diff=prev&oldid=9236
Add a carriage return \r and linefeed \n to the beginning of a sentence using a regular expression without replacing the text
- Michael Poplawski last edited by
The Following text has a carriage return (\r) and line feed (\n) after certain sentences or paragraphs.
As can be seen in the following text, paragraph 28. has space before the paragraph.
Paragraph 29 does not have space at the beginning of the sentence.
I want to use Regular Expressions in notepad++ to find these sentences then add a \r\n to the beginning of sentences that do not have any space.
I don’t want to replace the text found; just find the paragraph marker and add carriage return / line feed to the beginning.
Here is my regular expression in the Find what box:
[\r\n]{1}+[\d]{1,2}.\s\w
My pseudocode is as follows:
find the single carriage return / line feed, followed by a 1 or 2 digit number, followed by a period, followed by a space, followed by a single character.
I don’t know what to enter in the Replace with box?
Any help is greatly appreciated.
inequality and income inequality go hand in hand. In many EMDCs, low-income households and available financial products tend to be more limited and relatively costly.
IV. INEQUALITY DRIVERS
A. Factors Driving Higher Income Inequality
- Global trends: the good side of the story. Over the past four decades, technology has reduced the costs of transportation, improved automation, and communication dramatically. New markets have opened, bringing growth opportunities in countries rich and poor alike, and hundreds of millions of people have been lifted out of poverty. However, inequality has also risen, possibly
reflecting the fact that growth has been accompanied by skill-biased technological change, or
because other aspects of the growth process have generated higher inequality. In this section, we
discuss potential global and country-specific drivers of income inequality across countries.
- Technological change. New information technology has led to improvements in
productivity and well-being by leaps and bounds, but has also played a central role in driving up the
skill premium, resulting in increased labor income inequality (Figure 15). This is because
technological changes can disproportionately raise the demand for capital and skilled labor over
low-skilled and unskilled labor by eliminating many jobs through automation or upgrading the skill
- Scott Sumner last edited by Scott Sumner
Not sure why this was downvoted…are regex questions becoming hated here? I guess I can see that, although they don’t bother me (like HTML/CSS questions do!)
Anyway, to try and help without reading too much into what you really want (or maybe I am), try:
Find what zone:
([^\r\n]\R)(\d\d?\.\s.)
Replace with zone:
\1\r\n\2
Hi, michael-poplawski, @Scott-sumner and All,
Just a variant of the Scott’s solution :
SEARCH
[^\r\n](?=\R\h*\d+\.\h+)
REPLACE
$0\r\n
Notes :

The searched string is, essentially, the part [^\r\n], which represents any single character different from either the Carriage Return or the New Line characters.

But the regex engine will consider this match ONLY IF the positive look-ahead (?=\R\h*\d+\.\h+), ending this regex, is true. That is to say, IF it is immediately followed with a line break (\R), some horizontal blank characters, possibly none (\h*), at least one digit (\d+), then a literal dot (\.) and, finally, at least one horizontal blank character (\h+).

If this condition is true, in replacement, that single character is simply rewritten, due to the $0 syntax, which represents the overall match, followed by the usual \r\n sequence, which defines a Windows line-break!
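For instance, with a hypothetical fragment shaped like the sample in the question, the search/replace above turns:

discuss potential global and country-specific drivers of income inequality across countries.
29. Technological change. New information technology has led to improvements in

into:

discuss potential global and country-specific drivers of income inequality across countries.

29. Technological change. New information technology has led to improvements in

while leaving numbered paragraphs that already have a blank line before them untouched.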
Best Regards,
guy038
https://community.notepad-plus-plus.org/topic/15289/add-a-carriage-return-r-and-linefeed-n-to-the-beginning-of-a-sentence-using-a-regular-expression-without-replacing-the-text
Building your service with tests
When working on a Service Fabric application, one of the first things that you will probably notice is that there is no easy way to get started with writing tests around your service.
To build a stateful service within Service Fabric we inherit from a class called
StatefulService and override a protected
RunAsync method:
public sealed class MyStatefulService : StatefulService
{
    public MyStatefulService(StatefulServiceContext context, IReliableStateManagerReplica reliableStateManagerReplica)
        : base(context, reliableStateManagerReplica)
    {
    }

    protected override async Task RunAsync(CancellationToken cancellationToken)
    {
        while (true)
        {
            cancellationToken.ThrowIfCancellationRequested();

            // Do Work.
        }
    }
}
Calling RunAsync
Due to the protection level of the method there is no way for us to call it within our tests.
new MyStatefulService(...).RunAsync(...); will give us the following error
Cannot access method ‘RunAsync’ here due to its protection level.
So let's find out what calls in to our RunAsync method. StatefulServiceBase is the base class of StatefulService and implements the interface IStatefulUserServiceReplica, which also has a RunAsync method, but it is implemented explicitly, hiding it behind IStatefulUserServiceReplica. So instead we can cast our service to the interface and then call the method.

var service = (IStatefulUserServiceReplica)new MyStatefulService(...);
service.RunAsync(...);
The above also fails on referencing
IStatefulUserServiceReplica with the following error
‘IStatefulUserServiceReplica’ is inaccessible due to its protection level MyStatefulService
Everything else down the chain that calls
IStatefulUserServiceReplica is all internal too, so we’ve got no luck here.
So for calling our
RunAsync method we will have to hand roll some reflection code that invokes the protected method 🤮.
var runAsyncMethodInfo = typeof(StatefulServiceBase)
    .GetMethod("RunAsync", BindingFlags.Instance | BindingFlags.NonPublic);

var service = new MyStatefulService(...);

await (Task) runAsyncMethodInfo.Invoke(service, new object[] { new CancellationTokenSource().Token });
RunAsync continuous loop
The next problem we face is that the
RunAsync method never returns to its caller. Calling the method directly within our tests will never end execution. We do however have a cancellation token that we can trigger to force an exception to be thrown, this will allow us to break out of the infinite while loop.
The standard practice within a stateful service is to start a transaction and then commit the transaction after the work is completed.
while (true)
{
    cancellationToken.ThrowIfCancellationRequested();

    using (var tx = this.StateManager.CreateTransaction())
    {
        // Do Work.
        await tx.CommitAsync();
    }
}
So if we could mock out a transaction object to trigger the cancellation token on disposal then this would give us one iteration of the while loop. The below code uses
moq to create a mock of a
ITransaction object that will trigger
Cancel on a
CancellationTokenSource object when the
Dispose method is called. We also need to wrap the running of the service in a try/catch block as we’ll be expecting a
OperationCanceledException to be thrown.
var cancellationTokenSource = new CancellationTokenSource();

var transaction = new Mock<ITransaction>();
transaction.Setup(x => x.CommitAsync())
    .Returns(Task.FromResult(0));
transaction.Setup(x => x.Dispose())
    .Callback(() => cancellationTokenSource.Cancel());

var reliableStateManagerReplica = new Mock<IReliableStateManagerReplica>();
reliableStateManagerReplica.Setup(x => x.CreateTransaction())
    .Returns(transaction.Object);

var service = new MyStatefulService(null, reliableStateManagerReplica.Object);

try
{
    await Run(service, cancellationTokenSource.Token);
}
catch (OperationCanceledException)
{
}
Base test fixture
Now we have got all of the building blocks, we can start to build a base test fixture. Below is an example of a test fixture that uses a template pattern to force the user to override a CreateService method for creating a StatefulService. It also has a RunServiceTransactionOnce method that allows derived classes to run the service for one transaction.
public abstract class StatefulServiceFixture<TStatefulService> where TStatefulService : StatefulService
{
    private static readonly MethodInfo RunAsyncMethodInfo = typeof(StatefulServiceBase)
        .GetMethod("RunAsync", BindingFlags.Instance | BindingFlags.NonPublic);

    protected StatefulServiceContext StatefulServiceContext { get; }
    protected Mock<IReliableStateManagerReplica> ReliableStateManagerReplica { get; }
    protected Mock<ITransaction> Transaction { get; }
    protected TStatefulService Service { get; }

    protected StatefulServiceFixture()
    {
        StatefulServiceContext = new StatefulServiceContext(
            new NodeContext(string.Empty, new NodeId(0, 0), 0, string.Empty, string.Empty),
            Mock.Of<ICodePackageActivationContext>(),
            string.Empty,
            new Uri("fabric:/Mock"),
            new byte[0],
            Guid.NewGuid(),
            0);

        Transaction = new Mock<ITransaction>();
        Transaction.Setup(x => x.CommitAsync())
            .Returns(Task.FromResult(0));

        ReliableStateManagerReplica = new Mock<IReliableStateManagerReplica>();
        ReliableStateManagerReplica.Setup(x => x.CreateTransaction())
            .Returns(Transaction.Object);

        Service = CreateService(StatefulServiceContext, ReliableStateManagerReplica, Transaction);
    }

    protected abstract TStatefulService CreateService(StatefulServiceContext statefulServiceContext,
        Mock<IReliableStateManagerReplica> reliableStateManagerReplica, Mock<ITransaction> transaction);

    protected async Task RunServiceTransactionOnce()
    {
        var cancellationTokenSource = new CancellationTokenSource();
        Transaction.Setup(x => x.Dispose())
            .Callback(() => cancellationTokenSource.Cancel());

        try
        {
            await (Task) RunAsyncMethodInfo.Invoke(Service, new object[] { cancellationTokenSource.Token });
        }
        catch (OperationCanceledException)
        {
            // We expect the task to be cancelled after one transaction.
            return;
        }

        throw new Exception("RunAsync method should have been cancelled");
    }
}
Real life example
We can now map this to a real-life example. We have a requirement to create a microservice that listens to an orders queue and dequeues an item on each iteration of our loop. Every Order that is received off the queue is checked to see whether it requires a receipt to be generated, and if so the Order is dispatched to the ReceiptGenerator. The xUnit tests for this are below:
public class OrderServiceTests : StatefulServiceFixture<OrderService>
{
    private Mock<IReceiptGenerator> _orderTaker;

    protected override OrderService CreateService(StatefulServiceContext statefulServiceContext,
        Mock<IReliableStateManagerReplica> reliableStateManagerReplica, Mock<ITransaction> transaction)
    {
        _orderTaker = new Mock<IReceiptGenerator>();
        return new OrderService(statefulServiceContext, reliableStateManagerReplica.Object, _orderTaker.Object);
    }

    [Fact]
    public async void ShouldGenerateReceipt_WhenOrderRequiresReceipt()
    {
        var order = new Order() { RequiresReceipt = true };
        // ... arrange the order on the mocked reliable queue and run a single service transaction ...
        _orderTaker.Verify(x => x.Generate(order), Times.Once);
    }

    [Fact]
    public async void ShouldNotGenerateReceipt_WhenOrderDoesNotRequiresReceipt()
    {
        var order = new Order() { RequiresReceipt = false };
        // ... arrange the order on the mocked reliable queue and run a single service transaction ...
        _orderTaker.Verify(x => x.Generate(order), Times.Never);
    }
}
For completeness the
OrderService implementation is below to cross-reference with the tests.
public sealed class OrderService : StatefulService
{
    private readonly IReceiptGenerator _receiptGenerator;

    public OrderService(StatefulServiceContext context, IReliableStateManagerReplica reliableStateManagerReplica, IReceiptGenerator receiptGenerator)
        : base(context, reliableStateManagerReplica)
    {
        this._receiptGenerator = receiptGenerator;
    }

    protected override async Task RunAsync(CancellationToken cancellationToken)
    {
        var orderQueue = await this.StateManager.GetOrAddAsync<IReliableQueue<Order>>("orders");

        while (true)
        {
            cancellationToken.ThrowIfCancellationRequested();

            using (var tx = this.StateManager.CreateTransaction())
            {
                var result = await orderQueue.TryDequeueAsync(tx);

                if (result.HasValue && result.Value.RequiresReceipt)
                {
                    await _receiptGenerator.Generate(result.Value);
                }

                await tx.CommitAsync();
            }
        }
    }
}
https://kevsoft.net/2016/12/17/unit-testing-service-fabric-services.html
Reference content
PubNub Functions is built upon some key concepts – it's advised to become familiar with them before you start developing and testing with PubNub Functions.
PubNub Channels
A channel represents a virtual path between publishing and subscribing clients in the PubNub Data Stream Network (DSN). If you're familiar with general message queue concepts, a channel is similar to a
topic.
Any message will have exactly one channel associated with it. As an example, in order to receive a message published on the channel BayAreaNews, the client must be subscribed to [at least] the channel BayAreaNews (a subscribing client may be subscribed to more than one channel at a time.)
A channel doesn't need to be declared, defined, or instantiated in any way, they can be considered unlimited and arbitrary in scope. You can choose as many, or as few channels as you'd like to use in your apps – consider the channel as metadata on the message itself.
In
PubNub Functions, a specific Function (written in Javascript) can be associated with one or more channels. Every message published on a given channel will be processed by its associated event handler (if defined).
PubNub Endpoints
An Endpoint is a URI path you can make an HTTP request to trigger a Function. It's the same concept as an HTTP handler but you don't have to spin up a webserver etc. All Endpoints are by default secure and only accept requests via HTTPS. Endpoints allow you to start a microservice on PubNub's Network in seconds. Functions with Endpoints are On Request types.
API Keysets
The first level of partitioning data between PubNub accounts is via the PubNub API keysets. Keysets are managed by PubNub and created by users from their admin portal. Each Keyset contains a Publish, Subscribe, and Secret Key.
Secret keys are also unique to a keyset, and are used for management functions.
Consider a publish key, subscribe key, and channel name the composite identifier for any message over the PubNub network. In other words, any channel is namespaced by the subscribe and publish keyset used to publish it.
For example, anyone publishing with a PubNub instance defined against keyset1 on channel1 can be received only by subscribers defined against keyset1 on channel1. To make it more clear, consider channel1's real name in this case to be keyset1:channel1… if someone published on a different keyset, but same channel, since the channel is namespaced by the keyset, there would be no collision.
Functions
A Function is a block of Javascript code which runs when triggered by a given Function type, against a given keyset and channel(s) or URI path – it is the logic to perform against a message meeting certain criteria or an HTTP request.
Functions can be deployed to the PubNub DSN via the GUI, CLI, or REST APIs.
Function Types
The way a Function is triggered and executed depends on the type of Function it is. A Function can be 1 of the 4 types below:
- Before Publish or Fire - Executed in response to publishing a message on a channel and before the message has been forwarded on to awaiting subscribers, these Functions allow you to operate on, or mutate, the message (see the sketch after this list).
- After Publish or Fire - Executed in response to publishing a message on a channel and after the message has been forwarded on to awaiting subscribers, these Functions allow you to act on the message publish event without having to worry about the latency impact of it.
- After Presence - Executed in response to a Presence event and after it has been forwarded on to awaiting subscribers, these functions allow you to operate on the Presence event.
- On Request - Executed in response to an HTTP request, these functions allow you to build microservices and webhooks.
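As a rough illustration of the Before Publish or Fire type, here is a minimal sketch of a Function; the added field name is made up, and the export default / request.ok() shape follows the pattern PubNub documents for Functions, so treat it as an assumption rather than copy-paste code.

// Hypothetical "Before Publish or Fire" Function bound to a channel such as BayAreaNews.
// It runs after a client publishes and before subscribers receive the message,
// so it can mutate the payload in flight.
export default (request) => {
    request.message.processedAt = new Date().toISOString(); // add a field (assumed name)
    return request.ok(); // forward the (possibly mutated) message on to subscribers
};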
Modules
Groups of Functions that share a scope are called Modules.
|
https://www.pubnub.com/docs/blocks/key-concepts
|
CC-MAIN-2019-51
|
en
|
refinedweb
|
Is there a universal JavaScript function that checks that a variable has a value and ensures that it's not undefined or null?
function isEmpty(val){
return (val === undefined || val == null || val.length <= 0) ? true : false;
}
You can just check if the variable has a truthy value or not. That means
if( value ) { }
will evaluate to true if value is not: null, undefined, NaN, an empty string (""), 0, or false.
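A quick sketch of how that truthiness check treats the edge cases (note that 0 and "" are also treated as "empty", which may or may not be what you want):

// Values for which `if (value) { ... }` does NOT run: the falsy values.
const falsy = [null, undefined, NaN, "", 0, false];
falsy.forEach(v => {
    if (v) {
        console.log(v, "is truthy");
    } else {
        console.log(v, "is falsy");   // every entry above lands here
    }
});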
Further read:
|
https://codedump.io/share/MjV1g2zepZOE/1/is-there-a-standard-function-to-check-for-null-undefined-or-blank-variables-in-javascript
|
CC-MAIN-2017-39
|
en
|
refinedweb
|
Status
Current state: Accepted
Discussion thread: here and here
JIRA:
- KAFKA-3162 - KIP-42: Add Producer and Consumer Interceptors - Resolved
- KAFKA-3196 - KIP-42 (part 2): add record size and CRC to RecordMetadata and ConsumerRecords - Resolved
- KAFKA-3303 - Pass partial record metadata to Interceptor onAcknowledgement in case of errors - Resolved
Please keep the discussion on the mailing list rather than commenting on the wiki (wiki discussions get unwieldy fast).
Motivation
Today, Kafka metrics are only collected for individual clients or brokers. This makes it difficult for users to trace the path of individual messages across the cluster and to get a complete end-to-end picture of system performance and behavior. Technically, it is possible to measure end-to-end performance today by modifying user applications to collect and track additional information, but that isn't always practical for critical infrastructure applications. The ability to quickly deploy tools to observe, measure, and monitor Kafka client behavior, down to the message level, is valuable in production environments. At the same time, metrics might need contextual metadata that may vary across applications. The ability to measure and monitor clients without writing new code or recompiling applications is essential. (In some cases, it might help to connect to running applications.)
To enable this functionality, we would like to add producer and consumer interceptors that can intercept messages at different points on producer and consumer. The mechanism that we are proposing is inspired by the interceptor interface in Apache Flume. While there are potentially many ways to use an interceptor interface (for example, detecting anomalies, encrypting data, filtering fields), each of them would require a careful evaluation of whether or not it should be done with interceptor or with another mechanism. It is better to add the related APIs when there is a clear motivation for those use cases. Thus, we are proposing minimal producer and consumer interceptor interfaces that are designed to support only measurement and monitoring.
While it is possible to add more metrics or improve monitoring in Kafka, we believe that creating a flexible, customizable interface is beneficial for the following reasons:
Common monitoring tools. In a large company, different teams collaborate on building systems. Often, different teams develop and deploy different components over time. In addition, organizations want to standardize on common metrics, formats, and data collection systems. We think it is valuable for an organization to develop and deploy common Kafka client monitoring tools and deploy these across all applications that use Kafka.
Monitoring can be expensive. Adding additional metrics to Kafka might compromise performance. (For example see this JIRA ticket for an example of a performance regression caused by just checking timestamps.) Unfortunately, there is sometimes a tradeoff between system performance and data collection. As an example, consider the problem of measuring message sizes. The cheapest, simplest, and most straightforward approach is to measure average values. Calculating percentiles on a distributed system is more expensive and complicated than calculating simple averages, but would be useful in many applications. We would like to give users the ability to adopt different algorithms for metric collection, or to choose not to collect metrics at all.
Different applications require different metrics. For example, a user might find it important to monitor the cardinality of different keys in Kafka messages. It would be impractical for Kafka to provide all possible metrics internally; a pluggable intercept system provides a simple way to develop customized metrics.
Kafka is often a part of a bigger infrastructure in an organization, and it would be very useful to enable end-to-end tracing in that infrastructure. Consider LinkedIn’s use of Samza to trace frontend user calls across all services by tagging each call with a unique value, called TreeId, and propagating that value across all subsequent service calls. Interceptors will allow tracing of Kafka clients through the same infrastructure, tracing with the same TreeId stored in a message.
In this KIP, we propose adding two new interfaces: ProducerInterceptor on the producer and ConsumerInterceptor on the consumer. Users will be able to implement and configure a chain of custom interceptors and listen to events that happen to a record at different points on the producer and consumer. The interceptor API will allow mutating the records, to support the ability to add metadata to a message for auditing/end-to-end monitoring.
Public Interfaces
We add two new interfaces: ProducerInterceptor interface that will allow plugging in classes that will be notified of events happening to the record during its lifetime on the producer; and ConsumerInterceptor interface that will allow plugging in classes that will be notified of record events on the consumer. ProducerInterceptor API will allow to modify keys and values pre-serialization. For symmetry, ConsumerInterceptor API will allow to modify keys and values post-deserialization.
Both ProducerInterceptor and ConsumerInterceptor inherit from Configurable. Properties passed to configure() method will be consumer/producer config properties (including clientId if it was not specified in the config and assigned by KafkaProducer/KafkaConsumer). We will document in the Producer/ConsumerInterceptor class description that they will be sharing producer/consumer config namespace possibly with many other interceptors and serializers. So, it could be useful to use a prefix to prevent conflicts.
All exceptions thrown by interceptor callbacks will be caught by the caller method and ignored. The alternative was to allow exceptions to propagate through the original calls (at least for some of the callbacks), which would enable an additional level of control. For example, interceptors could filter messages on the consumer side, or stop messages on the producer side that do not have the right field. However, this would effectively change the KafkaProducer and KafkaConsumer API, because they could then throw exceptions that are not documented in the KafkaProducer and KafkaConsumer API. In this KIP, we propose to ignore all exceptions from interceptors, but this could be changed in the future if/when we have strong use-cases for this.
Add a new configuration setting interceptor.classes to the KafkaProducer API which sets a list of classes to use as producer interceptors. Each specified class must implement ProducerInterceptor interface. The default configuration will have an empty list.
Add a new configuration setting interceptor.classes to the KafkaConsumer API which sets a list of classes to use as consumer interceptors. Each specified class must implement ConsumerInterceptor interface. The default configuration will have an empty list.
Here is a more detailed description of the new interfaces:
ProducerInterceptor interface
onSend() will be called in KafkaProducer.send(), before the key and value get serialized and before the partition gets assigned. If the implementation modifies the key and/or value, it must return the modified key and value in a new ProducerRecord object. The implication of interceptors modifying a key in the onSend() method is that the partition will be assigned based on the modified key, not the key from the client. If the key/value transformation is not consistent (the same key and value do not always map to the same modified key/value), then log compaction would not work. We will document this in the ProducerInterceptor class. However, known use-cases, such as adding an app name or host name to a message, do a consistent transformation.
Another implication of onSend() returning ProducerRecord is that the interceptor can potentially modify topic/partition. It will be up to the interceptor that ProducerRecord returned from onSend() is correct (e.g. topic and partition, if given, are preserved or modified). KafkaProducer will use ProducerRecord returned from onSend() instead of record passed into KafkaProducer.send() method.
Since there may be multiple interceptors, the first interceptor will get the record from the client passed as the 'record' parameter. The next interceptor in the list will get the record returned by the previous interceptor, and so on. Since interceptors are allowed to mutate records, interceptors may potentially get a record already modified by other interceptors. However, we will state in the javadoc that building a pipeline of mutable interceptors that depend on the output of the previous interceptors is discouraged, because of potential side-effects caused by interceptors potentially failing to mutate the record and throwing an exception. If one of the interceptors in the list throws an exception from onSend(), the exception is caught, logged, and the next interceptor is called with the record returned by the last successful interceptor in the list, or otherwise with the original record from the client.
onAcknowledgement() will be called when the send is acknowledged. It has same API as Callback.onCompletion(), and is called just before Callback.onCompletion() is called. In addition, onAcknowledgement() will be called just before KafkaProducer.send() throws an exception (even when it does not call user callback). The difference in the behavior of ProducerInterceptor.onAcknowledgement() is that if an error occurred, metadata parameter will not be null. In this case, metadata will contain topic and possibly partition information (if available). If partition information is not available, then partition will be assigned -1.
ProducerInterceptor APIs will be called from multiple threads: onSend() will be called on the submitting thread and onAcknowledgement() will be called on the producer I/O thread. It is up to the interceptor implementation to ensure thread safety. Since onAcknowledgement() is called on the producer I/O thread, its implementation should be reasonably fast, or otherwise sending of messages from other threads could be delayed.
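As a rough sketch (not part of the KIP text), a minimal monitoring-oriented implementation of the interface described above might look like the following. The class name and counters are hypothetical; the method signatures follow the interface as described in this KIP, and the class would be enabled by listing it under interceptor.classes.

import java.util.Map;
import java.util.concurrent.atomic.AtomicLong;

import org.apache.kafka.clients.producer.ProducerInterceptor;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

// Hypothetical monitoring interceptor: counts sends and failed acknowledgements.
public class CountingProducerInterceptor<K, V> implements ProducerInterceptor<K, V> {
    // onSend() and onAcknowledgement() run on different threads, so use atomics.
    private final AtomicLong sent = new AtomicLong();
    private final AtomicLong failed = new AtomicLong();

    @Override
    public ProducerRecord<K, V> onSend(ProducerRecord<K, V> record) {
        sent.incrementAndGet();
        return record; // return the (possibly modified) record; here it is unchanged
    }

    @Override
    public void onAcknowledgement(RecordMetadata metadata, Exception exception) {
        if (exception != null) {
            failed.incrementAndGet(); // on errors, metadata may carry only topic (and possibly partition)
        }
    }

    @Override
    public void close() {
        System.out.printf("sent=%d failed=%d%n", sent.get(), failed.get());
    }

    @Override
    public void configure(Map<String, ?> configs) {
        // producer config properties, including client.id, are available here
    }
}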
ConsumerInterceptor interface
onCommit() will be called when offsets get committed: just before OffsetCommitCallback.onCompletion() is called and in ConsumerCoordinator.commitOffsetsSync() on successful commit.
Since the new consumer is single-threaded, the ConsumerInterceptor API will be called from a single thread. Since interceptor callbacks are called for every record, the interceptor implementation should be careful about adding performance overhead to the consumer.
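For symmetry, a consumer-side counterpart might look like the sketch below; again the class and the counter are made up for illustration, and only the callbacks described in this KIP are used.

import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerInterceptor;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

// Hypothetical monitoring interceptor: counts records handed back from poll().
public class CountingConsumerInterceptor<K, V> implements ConsumerInterceptor<K, V> {
    private long consumed; // the new consumer is single-threaded, so a plain field is enough

    @Override
    public ConsumerRecords<K, V> onConsume(ConsumerRecords<K, V> records) {
        consumed += records.count();
        return records; // the returned records are what poll() hands to the application
    }

    @Override
    public void onCommit(Map<TopicPartition, OffsetAndMetadata> offsets) {
        // called when offsets are committed; offsets maps partition -> committed offset
    }

    @Override
    public void close() {
        System.out.println("consumed=" + consumed);
    }

    @Override
    public void configure(Map<String, ?> configs) {
        // consumer config properties, including client.id, are available here
    }
}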
Add more record metadata to RecordMetadata and ConsumerRecord
Currently, RecordMetadata contains topic/partition, offset, and timestamp (KIP-32). We propose to add the remaining record metadata to RecordMetadata: checksum and record size. Both checksum and record size are useful for monitoring and audit. The checksum provides an easy way to get a summary of the message and is also useful for validating a message end-to-end. For symmetry, we also propose to expose the same metadata on the consumer side and make it available to interceptors.
We will add checksum and record size fields to RecordMetadata and ConsumerRecord.
We will make it clear in the documentation (of ConsumerRecord and onAcknowledgement/onConsume) that the checksum the consumer sees may not always be the one initially set on the producer. The CRC may be overwritten by the broker during an upgrade after a message format change, or in the case of a topic configured with timestamp type == LogAppendTime, which requires overwriting message timestamps in the message on the broker and, as a result, overwriting the checksum.
Proposed Changes
We propose to add two new interfaces listed and described in the Public Interfaces section: ProducerInterceptor and ConsumerInterceptor. We will allow a chain of interceptors. It is up to the user to correctly specify the order of interceptors in producer.interceptor.classes and in consumer.interceptor.classes.
Kafka Producer changes
We will create a new class that will encapsulate a list of ProducerInterceptor instances: ProducerInterceptors.
- KafkaProducer will have a new member:
ProducerInterceptors<K, V> interceptors;
- The KafkaProducer constructor will load instances of the interceptor classes specified in interceptor.classes. If the interceptor.classes config does not list any interceptor classes, the interceptors list will be empty. It will call configure() on each interceptor class, passing in ProducerConfig.originals(). The KafkaProducer constructor will instantiate 'interceptors' with a list of interceptor classes.
To be able to call the interceptor on the producer callback, we wrap the client callback passed to the KafkaProducer.send() method inside ProducerCallback – a new class that inherits Callback and holds a reference to the client callback and 'interceptors'. The ProducerCallback.onCompletion() implementation will call the client's onCompletion (if the client's callback is not null) and will call 'interceptors' onAcknowledgement().
KafkaProducer.send() will create ProducerCallback and call onSend() method.
producerCallback = new ProducerCallback(callback, this.interceptors);
ProducerRecord<K, V> sentRecord = interceptors.onSend(record);
- The rest of the KafkaProducer.send() code will use sentRecord in place of 'record'.
- KafkaProducer.close() will close interceptors:
ClientUtils.closeQuietly(interceptors, "producer interceptors", firstException);
Kafka Consumer changes
We will create a new class that will encapsulate a list of ConsumerInterceptor instances: ConsumerInterceptors.
- KafkaConsumer will have a new member
ConsumerInterceptors<K, V> interceptors;
- KafkaConsumer constructor will load instances of interceptor classes specified in interceptor.classes. If interceptor.classes config does not list any interceptor classes, interceptors list will be empty. It will call configure() on each interceptor class, passing in ConsumerConfig.originals() and clientId. KafkaConsumer constructor will instantiate 'interceptors' with a list of interceptor classes.
- KafkaConsumer.close() will close 'interceptors':
ClientUtils.closeQuietly(interceptors, "consumer interceptors", firstException);
- KafkaConsumer.poll will call
this.interceptors.onConsume(consumerRecords);
and return ConsumerRecords<K, V> returned from onConsume().
- ConsumerCoordinator.commitOffsetsAsync and commitOffsetsSync will call onCommit().
Compatibility, Deprecation, and Migration Plan
It will not impact any existing clients. When clients upgrade to the new version, they do not need to add the interceptor.classes config.
Future compatibility. When/if new methods are added to ProducerInterceptor and ConsumerInterceptor (as part of other KIP(s)), they will be added to the Producer/ConsumerInterceptor interfaces with an empty default implementation (default methods in interfaces are a new feature in Java 8).
Rejected Alternatives
Alternative 1 - Interceptor interfaces on the broker
This KIP proposes interceptors only on producers and consumers. Adding message interceptor on the broker makes a lot of sense, and will add more detail to monitoring. However, the proposal is to do it later in a separate KIP for the following reasons:
- Broker interceptors are more risky because brokers are more sensitive to overheads that could be added by interceptors. Added performance overhead on brokers would affect all clients.
- Producer and consumer interceptors are less risky, and give us good risk vs. reward tradeoff, since producer and consumer interceptors alone will enable end-to-end monitoring.
- As a result, it is better to start with producer and consumer interceptors and gain experience to see how usable they are.
- Once we see usability from experience with producer and consumer interceptors, we can create a broker interceptor KIP, which will allow us to have a more complete/detailed message monitoring.
Alternative 2 – Interceptor callbacks that expose internal implementation of producer/consumer
The producer and consumer interceptor callbacks proposed in this KIP are fundamental aspects of the producer and consumer protocol, and they don't depend on the implementation of the producer and consumer. In addition to the proposed methods, it may be useful to add more hooks such as ProducerInterceptor.onEnqueue (called before adding the serialized key and value to the accumulator) or ProducerInterceptor.onDequeue(). They can be useful, but have the disadvantage of exposing internal implementation. This can be limiting, as changing the internal implementation in the future may require changing the interfaces.
We can add some of these methods later if we find concrete use-cases for them. For the use-cases raised so far, it was not clear whether they should be implemented by interceptors or by other means. Examples:
- Use onEnqueue() and onDequeue() methods to measure fine-grain latency, such as serialization latency or time records spend in the accumulator. However, the insights into these latencies could be provided by Kafka Metrics.
- Encryption. There are several design options here. One is per-record encryption which would require adding ProducerInterceptor.onEnqueued() and ConsumerInterceptor.onReceive(). One could argue that in that case encryption could be done by adding a custom serializer/deserializer. Another option is to do encryption after message gets compressed, but there are issues that arise regarding broker doing re-compression. Thus, it is not clear yet whether interceptors are the right approach for adding encryption.
Alternative 3 – Wrapper around KafkaProducer and KafkaConsumer.
Some monitoring can be done (such as using a unique ID for end-to-end tracing) by using a wrapper around KafkaProducer and KafkaConsumer. The wrappers could catch events at similar points as onSend() and onAcknowledgement() on the producer and onConsume() and onCommit() on the consumer. However, this approach has drawbacks:
- Requires changes in clients to use the wrappers to KafkaConsumer and KafkaProducer
- Will not be able to catch events at intermediate stages of a request lifetime in KafkaConsumer and KafkaProducer.
|
https://cwiki.apache.org/confluence/display/KAFKA/KIP-42%3A+Add+Producer+and+Consumer+Interceptors
|
CC-MAIN-2017-39
|
en
|
refinedweb
|
#include <hallo.h>
Adam Heath wrote on Thu May 30, 2002 um 12:58:48PM:
> > You don't, you use makedev to create them in your postinst.
> >
> > Wichert.
>
> When this mail first hit the list, I checked dpkg. It should handle creation
> of device files. If it doesn't, it's a bug.
>
> It's policy that enforces the "no devices in debs" principle.
Common practice is asking with Debconf and creating them in postinst using mknod.
> permissions of files. Since permissions on devices are changed very often (as
> compared to normal dirs/files), I believe it became policy to not ship device
> files in debs.
>
> Maybe it's time to revisit this?
Should be revisited. I remember a bug filed against policy, dealing with this issue among others.
|
https://lists.debian.org/debian-devel/2002/05/msg03113.html
|
CC-MAIN-2017-39
|
en
|
refinedweb
|
The IronPython import statement needs namespace names, not assembly names. You should be able to get the names of the namespaces in Animate.NET.dll from its documentation. If not, you could use a tool like Reflector or ILDASM to inspect the dll and see which namespaces are inside of it.

The one namespace in Animate.NET.dll is "Animator", so that's what you need to import. There are some other top-level classes as well – you can see those yourself using ildasm or Reflector as Curt mentioned. You can also do clr.LoadAssemblyFromFile(…) and use dir to inspect the assembly.
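Putting the two suggestions together, a short IronPython sketch might look like this; the file name and path are hypothetical, and the "Animator" namespace is the one named above.

import clr

# Inspect the assembly first, as suggested above: LoadAssemblyFromFile returns the
# Assembly object without referencing it, and dir() shows the namespaces/types inside.
asm = clr.LoadAssemblyFromFile("Animate.NET.dll")            # hypothetical file name
print(dir(asm))                                              # should list "Animator", among others

# To actually use it, add a reference and then import the *namespace*, not the dll name.
clr.AddReferenceToFileAndPath(r"C:\libs\Animate.NET.dll")    # hypothetical location
import Animator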
From: zeyoo [mailto:[email protected]]
Sent: Sunday, May 23, 2010 7:27 AM
To: Dino Viehland
Subject: Re: Loading additional .NET libraries problem [dlr:213566]
From: zeyoo
Hi CurtHagenlocher,
Can you download the code from here? Tell me if there is any problem, please! Thank you!
|
http://dlr.codeplex.com/discussions/213566
|
CC-MAIN-2017-39
|
en
|
refinedweb
|
BOOST_FUSION_DEFINE_STRUCT_INLINE is a macro that can be used to generate all the necessary boilerplate to define and adapt an arbitrary struct as a model of Random Access Sequence. Unlike BOOST_FUSION_DEFINE_STRUCT, it can be used at class or namespace scope.
BOOST_FUSION_DEFINE_STRUCT_INLINE(
    struct_name,
    (member_type0, member_name0)
    (member_type1, member_name1)
    ...
    )
The semantics of BOOST_FUSION_DEFINE_STRUCT_INLINE are identical to those of BOOST_FUSION_DEFINE_STRUCT, with two differences:
#include <boost/fusion/adapted/struct/define_struct_inline.hpp>
#include <boost/fusion/include/define_struct_inline.hpp>

// enclosing::employee is a Fusion sequence
class enclosing
{
    BOOST_FUSION_DEFINE_STRUCT_INLINE(
        employee,
        (std::string, name)
        (int, age))
};
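A short usage sketch (my own illustration, not from the Boost reference): the nested type behaves both as an ordinary struct and as a Fusion Random Access Sequence, so positional intrinsics such as at_c work on it. Note that the macro invocation is made public here so the nested type is reachable from outside the class.

#include <iostream>
#include <string>

#include <boost/fusion/include/at_c.hpp>
#include <boost/fusion/include/define_struct_inline.hpp>

class enclosing
{
public:                                  // made public so enclosing::employee is usable below
    BOOST_FUSION_DEFINE_STRUCT_INLINE(
        employee,
        (std::string, name)
        (int, age))
};

int main()
{
    enclosing::employee e;               // plain member access still works...
    e.name = "Alice";
    e.age = 30;

    // ...and so does Fusion's positional access, because employee models
    // Random Access Sequence.
    std::cout << boost::fusion::at_c<0>(e) << " is "
              << boost::fusion::at_c<1>(e) << '\n';   // prints "Alice is 30"
}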
|
http://www.boost.org/doc/libs/1_65_1/libs/fusion/doc/html/fusion/adapted/define_struct_inline.html
|
CC-MAIN-2017-39
|
en
|
refinedweb
|
One thing that has bothered me for a while is that the unzip command line tool only decompresses one file at a time. As a weekend project I wanted to see if I could make it work in parallel. One could write this functionality from scratch, but this gave me an opportunity to really look into incremental development.

All developers love writing stuff from scratch rather than fixing an existing solution (yes, guilty as charged). However, the accepted wisdom is that you should never do a from-scratch rewrite but instead improve what you have via incremental improvements. Thus I downloaded the sources of Info-Zip and got to work.
Info-Zip’s code base is quite peculiar. It predates things such as ANSI C and has support for tons of crazy long dead hardware. MS-DOS ranks among the most recently added platforms. There is a lot of code for 16 bit processors, near and far pointers and all that fun stuff your grandad used to complain about. There are even completely bizarre things such as this snippet:
#ifndef const
# define const const
#endif
The code base contains roughly 80 000 lines of K&R C. This should prove an interesting challenge. Those wanting to play along can get the code from Github .
Compiling the code turned out to be quite simple. There is no configure script or the like; everything is #ifdef'd inside the source files. You just compile them into an app and then you have a working exe. The downside is that the source has more preprocessor code than actual code (only a slight exaggeration).
Originally the code used a single (huge) global struct that houses everything . At some point the developers needed to make the code reentrant. Usually this means changing every function to take the global state struct as a function argument instead. These people chose not to do this. Instead they created a C preprocessor macro system that can be used to pass the struct as an argument but also compile the code so it has the old style global struct. I have no idea why they did that. The only explanation that makes any sort of sense is that adding the pointer to stack on every function call is too expensive on 16 bit and smaller platforms. This is just speculation, though, but if anyone knows for sure please let me know.
This meant that every single function definition was a weird concoction of preprocessor macros and K&R syntax. For details see this commit that eventually killed it.
Getting rid of all the cruft was not particularly difficult, only tedious. The original developers were very pedantic about flagging their #if / #endif pairs so killing dead code was straightforward. The downside was that what remained after that was awful. The code had more asterisks than letters. A typical function was hundreds of lines long. Some preprocessor symbols were defined in opposite ways in different header files but things worked because some other preprocessor clauses kept all but one from being evaluated (the code confused Eclipse’s syntax highlighter so it’s really hard to see what was really happening).
Ten or so hours of solid work later most dead cruft was deleted and the code base had shrunk to 30 000 lines of code. At this point looking into adding threading was starting to become feasible. After going through the code that iterates the zip index and extracts files it became a lot less feasible. As an example the inflate function was not isolated from the rest of the code. All its arguments were given in The One Big Struct and it fiddled with it constantly. Those would need to be fully separated to make anything work.
That nagging sound in your ear
While fixing the code I kept hearing the call of the rewrite siren. Just rewrite from scratch, it would say. It’s a lot less work. Go on! Just try it! You know you want to!
Eventually the lure got too strong so I opened the Wikipedia page on Zip file format. Three hours and 373 lines of C++ later I had a parallel unzipper written from scratch. Granted it does not do advanced stuff like encryption, ZIP64 or creating subdirectories for files that it writes. But it works! Code is available in this repo .
Even better, adding multithreading took one commit with 22 additions and 7 deletions. The build definition is 10 lines of Meson instead of 1000+ lines of incomprehensible Make.
There really is no reason, business or otherwise, to modernise the codebase of Info-Zip. With contemporary tools, libraries and methodologies you can create code that is an order of magnitude simpler, clearer, more maintainable and just all around more pleasant to work with than existing code. In a fraction of the time.
Sometimes rewriting from scratch is the correct thing to do.
This is the exact opposite of what I set out to prove, but that's research.
|
http://www.shellsec.com/news/13697.html
|
CC-MAIN-2017-39
|
en
|
refinedweb
|
For the animations and transitions on b2g, throttling of OMTA flushing is let down by unthrottable flushes due to SMIL. We should not flush when we don't have to.
Comment by dholbert from bug 780692 (see that bug for a little more discussion):

(In reply to Daniel Holbert [:dholbert] from comment #166)
> d'oh... So I just remembered some caveats to comment 164 that make this
> trickier. (Sorry, it's been a little while since I thought in depth about
> animations, so it took a bit to fully page the complex bits back in.)
>
> I think we technically *do* need to flush CSS animations during SMIL
> samples, for correctness, after all.
>
> If you have a CSS animation that targets one property, and then a SMIL
> animation targeting a *different* property that is influenced by the
> CSS-animated property, then we need CSS animations to be up-to-date in order
> for the SMIL animation to be correct.
>
> One example of this would be a CSS animation on "font-size", while a SMIL
> animation changes some length from "5em" to "10em". (The meanings of those
> "em" units would be dependent on the CSS font-size animation.) Another
> example would be a CSS animation on "color" while a SMIL animation takes
> "fill" to the currentColor keyword. (which computes to the current value of
> the "color" property) You can get similar interactions w/ inheritance, too.
> (if you have a CSS animation on a parent and a SMIL animation w/ "inherit"
> on a child)
>
> Sorry for not remembering that earlier. Does that complicate things here?
Created attachment 684911 [details] [diff] [review] WIP patch

This patch causes SMIL to never flush animations, which is presumably broken, but should make b2g pretty snappy.
cjones: how do we use SMIL in Gaia? Does it have any interaction with CSS animations that you know of? Is it used where we care about transition performance? Thanks!
We use it to animate the new lockscreen "unlock" interaction. That animation is very perf sensitive. It doesn't directly interact with CSS transitions or animations, but there are CSS transitions/animations that will run in parallel. There was a bug previously where this SMIL animation would continue to run even when the lock screen was dismissed (bug 814076). This has been fixed. Do *inactive* SMIL animations cause us to flush?
Yeah, it looks like these animations should not be running, and while they're not running we should not be calling DoSample!
I'm not sure what "these animations" is specifically here. There are some cases where we continue composing animations even after they're finished. For example: <animate attributeName="x" by="1em" dur="1s" fill="freeze"/> Here, even after the animation is finished the resulting animation value could change if the font-size changes. For some cases like this we keep composing forever. See bug 533291 comment 18 (and comments 19-20). I think this also applies to animations on the 'display' property due to bug 536660. (Which is one reason, amongst others, why it's better to animate visibility.)
(In reply to Brian Birtles (:birtles) from comment #6) > I'm not sure what "these animations" is specifically here. I think roc's referring to the ones in the github link in comment 4. Those aren't fill="freeze", so I don't think there's any reason we'd be sampling them after they complete.
(though the second-to-last paragraph of comment 4 makes it sound like everything's OK w/ those animations now, I think...?)
Created attachment 685994 [details] [diff] [review] patch
Comment on attachment 685994 [details] [diff] [review] patch

>   // Set running sample flag -- do this before flushing styles so that when we
>   // flush styles we don't end up requesting extra samples
>   mRunningSample = true;
>-  nsCOMPtr<nsIDocument> kungFuDeathGrip(mDocument); // keeps 'this' alive too
>-  mDocument->FlushPendingNotifications(Flush_Style);
>
>   // WARNING:
>   // WARNING: the above flush may have destroyed the pres shell and/or
>   // WARNING: frames and other layout related objects.
>   // WARNING:

Move those "WARNING" lines, too -- they go with the flush.

>+  if (currentCompositorTable->Count() == 0) {
>+    mLastCompositorTable = nullptr;
>+    mRunningSample = false;
>+    return;
>+  }

Ah, good catch @ turning off mRunningSample. Rather than explicitly un-setting it here (and at the end, and in any other early-returns we add in the future), could you add an AutoRestore helper-variable, like the one we use here: (but with "using namespace mozilla" added up top, instead of the mozilla:: prefixing)

>   currentCompositorTable->EnumerateEntries(DoComposeAttribute, nullptr);
>   mRunningSample = false;

(...and we can remove this "mRunningSample = false" line, once we've got that AutoRestore helper set up.)

r=me with that.
Created attachment 686289 [details] [diff] [review] patch patch with all requested changes, carrying r=dholbert
Hmm, now that I've landed this on aurora, I realise I don't actually have approval. But it is logically part of bug 780692 (and used to actually be part of it), which does, and there is no point in landing that without this, so I hope it is OK, but asking for b-b+ now to be proper. Obviously, I can back out if there is a problem with landing this.
Comment on attachment 686289 [details] [diff] [review] patch Probably best to back out for now. This is part of the package with bug 780692. There's no point in taking either without the other.
Comment on attachment 686289 [details] [diff] [review] patch Bug 780692 is now blocking, so I'm going to carry that flag over to here.
landed on Aurora already, will land on b2g18 with 780692, not worth checking in separately
BTW, I rebased this too, if you haven't already.
|
https://bugzilla.mozilla.org/show_bug.cgi?id=814921
|
CC-MAIN-2017-39
|
en
|
refinedweb
|
Mark Trench
Financial Accounting Standards Board (FASB)
401 Merritt 7
P. O. Box 5116
Norwalk, CT 06856-5116
Hans van der Veen
International Accounting Standards Board (IASB)
30 Cannon Street
London, EC4M 6XH
United Kingdom
Dear Mark and Hans:

The topic of discounting plays a key role in several conceptual accounting projects that are currently underway. The concept of "fair value" is also prominent in these discussions. A wide variety of methods have been, and continue to be, developed for discounting in the context of fair value measurements. Such methods combine both discounting and provision for the market price of risk in various ways. Concern has arisen among many actuaries, however, that when certain methods are specifically enumerated in accounting standards, it could imply a prohibition on the use of alternative methods that are consistent with stated valuation objectives, unless it is clearly stated that alternative methods are allowed. As accounting standards evolve, the need to allow the use of alternative methods that are consistent with a set of stated principles could be overlooked. The Financial Reporting Committee of the American Academy of Actuaries [1] ("Academy") wishes to emphasize the importance of allowing flexibility in methodology for discounting and fair value measurement, subject to stated principles and valuation objectives. With that in mind, we have developed a White Paper, "Notes on the Use of Discount Rates in Accounting Present Value Estimates," which is attached for your information. Current accounting standards provide some comfort with respect to the use of alternative methods. For example, Statement of Financial Accounting Concepts No. 7 ("CON 7")…

[1] The American Academy of Actuaries ("Academy")…
Similarly, Statement of Financial Accounting Standards No. 157 ("FAS 157") includes the following statement in Appendix B: "This appendix neither prescribes the use of one specific present value technique nor limits the use of present value techniques to measure fair value to the techniques discussed herein." Nearly identical wording appears in the current IASB Exposure Draft on Fair Value Measurement ("IASB ED 2009/5") in the first paragraph of Appendix C.

Our White Paper describes a number of valuation methods commonly used today that are consistent with the principles stated in CON 7, FAS 157, and IASB ED 2009/5. Several of the methods discussed in the White Paper are not explicitly mentioned in those documents. These include: certain methods for valuation of insurance liabilities with non-guaranteed investment elements; stochastic methods that use adjusted probabilities (rather than adjusted cash flows or adjusted discount rates); and stochastic methods that use discount rates that vary by scenario. Note that we do not advocate adding descriptions of these methods to future accounting standards. Instead, we hope that this sampling of currently applied methodologies will reinforce the need for future standards to continue the practice of explicitly stating that alternate approaches are allowed.

If we can be of further assistance, please contact the Academy's Senior Risk Management and Financial Reporting Policy Analyst, Tina Getachew, at [email protected] or +1 202.223.8196.

Sincerely yours,
Stephen J. Strommen, FSA, MAAA
Vice-Chair, Financial Reporting Committee
American Academy of Actuaries

Cc: Sam Gutterman (Chair, Insurance Accounting Committee, International Actuarial Association)
1850 M Street NW Suite 300
Washington, DC 20036
Telephone 202 223 8196
Facsimile 202 872 1948
Discussion on the use of Discount Rates in Accounting Present Value Estimates
American Academy of Actuaries [1], Financial Reporting Committee
September 2009
The members of the Discounting Subgroup that are responsible for this white paper are as follows:

Chairperson: Stephen Strommen, CERA, FSA, MAAA
Steven Alpert, EA, FCA, FSA, MAAA, MSPA
Mark Freedman, FSA, MAAA
Burton Jay, FCA, FSA, MAAA
Kevin Kehn, FSA, MAAA
Ken LaSorella, FSA, MAAA
Jeffrey Lortie, FSA, MAAA
Robert Miccolis, FCAS, MAAA
William Odell, FCA, FSA, MAAA, ACAS
Jeffrey Petertil, ASA, FCA, MAAA
Leonard Reback, FSA, MAAA
[1] The American Academy of Actuaries ("Academy")…
Introduction and Purpose

At the time of writing (mid-2009), several conceptual accounting projects were underway that involve discussion of discount rates and present value estimates, including but not limited to the IASB/FASB joint project on Revenue Recognition and the IASB/FASB joint project on insurance contracts. When discount rates [1] and present value estimates have been discussed in these wider projects, the discussion has tended to be limited in scope. There is concern, however, that such limited discussions may result in accounting standards that limit the scope or breadth of techniques that are allowed when making present value estimates for accounting purposes.

In this paper, we use the term "present value estimate" rather than the accounting term measurement to make a useful distinction. Present value estimates involve unknown (and in many cases unknowable) future outcomes; the estimating process thus aims to determine a single value (i.e., the present value) as representative of a range or distribution of potential future outcomes. By contrast, the term measurement is often used in the context of known or knowable fixed quantities. In the context of insurance, for example, the total of benefits actually paid is a measurement; the present value of benefit obligations is an estimate. Estimates are therefore needed for accounting measurement of items whose value is uncertain.

Our focus in this document is on present value estimates that are intended to reflect market conditions on the valuation date [2]. There are several terms for such estimates, including fair value, fulfillment value, current value, and so on. These estimates share the common trait that similar methods of discounting can be used for any one of them, and it is those methods of discounting that we wish to discuss. [3]

FASB Statement of Financial Accounting Concepts No. 7 ("CON 7") allows a broad range of valuation techniques when it… The elements of a fair value estimate in paragraph 23 are:
[1] In this white paper we use the term "discount rate" as it is used in FAS 157, somewhat interchangeably with "interest rate". Texts on the theory of the time value of money draw a technical distinction between "discount rate" and "interest rate", but for ease of understanding we ignore this technical issue.
[2] The market conditions we are concerned with in this discussion of discounting are limited to market interest rates and items related to interest rates such as liquidity premiums, credit spreads, and market volatility. The definitions of accounting measures such as fair value and fulfillment value sometimes differ in whether market-based or entity-specific assumptions are to be used when projecting the cash flows to be discounted, but we are not concerned with those differences in this paper.
[3] This means that amortized-cost methodologies for accounting valuation are outside the scope of this discussion.
a. An estimate of future cash flow, or in more complex cases, series of future cash flows at different times.
b. Expectations about possible variations in the amount or timing of those cash flows.
c. The time value of money, represented by the risk-free rate of interest.
d. The price for bearing the uncertainty inherent in the asset or liability.
e. Other, sometimes unidentifiable, factors including illiquidity and market imperfections.

Exactly the same elements of fair value are documented in the May 2009 IASB Exposure Draft ("IASB ED 2009/5") on Fair Value Measurement, in Appendix C, paragraph 2.

CON 7 was issued in February 2000. Some more recent accounting discussions that mention present values or discounting explicitly refer to the risk-free rate mentioned in paragraph 23c above, without putting it in the context of a complete valuation technique that must embody all five elements. Some readers interpret such mention as a proposed rule forbidding the use of valuation techniques which use discount rates that reflect the combined effect of several of the five elements above, including paragraph 23d (uncertainty) and paragraph 23e (illiquidity). In addition, some accounting discussions [4] mention the "expected cash flow approach" wherein several possible patterns of future cash flow are projected and then probability weighted to obtain "expected" future cash flows, which are then discounted to obtain the present value. Some readers interpret such mention as a proposed rule forbidding the use of path-specific discount rates that are commonly used in practice when future cash flows depend on the uncertain level of future interest rates. And, some readers lament the limited nature of discussion on how the market price of risk and uncertainty is incorporated into the "expected cash flow approach". We are concerned that these discussions and others are potentially leading to valuation rules that may not reflect all the elements of fair value outlined above.

This paper explains the basic theory behind some valuation techniques that reflect all five elements of fair value as listed in CON 7 paragraph 23. It is our hope that the Boards will reaffirm the relevance of all five elements in any valuation intended to reflect market interest rates on the valuation date, and will continue to use language like the following, which appears in both FAS 157 and IASB ED 2009/5, in the respective Appendices titled Present Value Techniques: "This appendix neither prescribes the use of one specific present value technique nor limits the use of present value techniques to measure fair value to the techniques discussed herein."

Section 1 of this white paper covers the use of discount rates other than the risk-free rate. Section 2 covers stochastic techniques, where multiple paths of future cash flows are projected and probability-weighted to obtain a single value. All of the techniques discussed in this paper would appear to be consistent with paragraph 57 of CON 7, since they reflect the five elements listed in paragraph 23. However, many of the discounting techniques discussed here differ from those specifically mentioned in FAS 157 and IASB ED 2009/5. The purpose of this paper is to explain these methods with a focus on how they reflect "the price for bearing the uncertainty inherent in the asset or liability". The intent is
[4] E.g., paragraph 46 of CON 7.
to clarify any discussion concerning the use of techniques like these under a broad interpretation of CON 7 paragraph 57.
Section 1 – The discount rate and provision for risk

The time value of money is conceptually represented by the risk-free rate of interest [5]. But for accounting purposes one is often required to measure the value of an asset or liability that is not risk-free. There are many ways to adjust for risk in a present value calculation. One convenient and frequently used method is to adjust the discount rate, reflecting market-observable discount rates for cash flow streams with similar timing and risk characteristics. The very name of the term "risk-free rate" presumes that there exist other rates of interest that apply when risk is present.

This concept can easily be illustrated using a simple corporate bond as an example. Our example bond has a market value of $100.00 today, and is scheduled to mature in one year for $107.00. No coupons or other cash flows will contractually occur between today and the maturity date one year from now. Assume that the risk-free rate for a one-year term is 5%, and also that this bond bears a default risk. Since the market value of the bond is $100.00, we don't even need to pick a discount rate or evaluate the provision for risk. The market price implies that the risk-adjusted discount rate is 7.0%. However, it is instructive to examine the components that are embodied in the difference between the risk-adjusted and the risk-free rates.

Note that the market's provision for risk in the price of this bond can be dissected into two parts. First there is the market's perception of the expected, or probability-weighted, cost of default. Second, there is the risk of whether a default will or will not occur, and the price the market extracts for bearing that risk. In the case of a bond, it is not usually possible to separate these two elements; all that is observable is the total market price for the risky asset.
Example 1: Discount rate includes full provision for risk (discount rate = 7%)

Under this method, the discount rate provides the full provision for both aspects of the risk. We discount the contractual cash flows at the market's risk-adjusted discount rate of 7%. The present value of $107.00 at 7% interest is $100.00.
Example 2: Cash flows are adjusted for expected defaults; discount rate provides only for the cost of uncertainty about default (discount rate = 5.93%)

In this example, we need to make an assumption about the expected rate of default. As noted earlier, the expected rate of default cannot be separately identified by market observations. For purposes of this example only, we assume that the expected rate of default is 1%. Under this method the discount rate provides only for the risk of whether default will occur, but does not adjust for the probability-weighted "expected" cost of default. The cash flows are adjusted to reflect the expected defaults. To adjust the cash flows, we subtract an amount reflecting a 1% probability of total default, or $1.07, so the expected cash flow before discounting is $107.00 - $1.07 = $105.93 [6]. Discounting the expected cash flow at 5.93% then provides the $100.00 market value. Observe that 5.93% is greater than the risk-free rate because it includes a provision for the risk of whether default will occur (that is, the assumed market price for bearing the default risk).

[5] As a practical matter, identifying the risk-free rate, or many of the other conceptual quantities that will be discussed in this paper, is not always easy. The purpose here is to focus on the conceptual framework that governs relationships between quantities. Estimation of the quantities themselves (such as the risk-free rate) is a separate topic and is beyond the scope of this paper.
Example 3: Discount rate includes no provision for risk (discount rate = 5.00%)

Under this method the discount rate is the risk-free rate and all adjustment for risk is done by adjusting the cash flows. The adjustment to cash flows must include not only the $1.07 for expected defaults, but also a market-calibrated provision for the risk of whether default will occur. The cash flows must be adjusted to be "certainty equivalents". The market-calibrated provision for risk in this case is $0.93, so the certainty-equivalent cash flow is $107.00 - $1.07 - $0.93 = $105.00. Discounting at the risk-free rate of 5% then leads to the market price of $100.00.
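In summary, the three approaches reproduce the same $100.00 market value from the figures above (this is simply a restatement of Examples 1 to 3):

$$ \frac{107.00}{1.07} \;=\; \frac{105.93}{1.0593} \;=\; \frac{105.00}{1.05} \;=\; 100.00 $$

where the numerators are, respectively, the contractual cash flow, the expected cash flow net of the $1.07 expected default cost, and the certainty-equivalent cash flow net of both the $1.07 expected default cost and the $0.93 market price of default risk.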
IASB ED 2009/5 lists three methods for adjusting for risk in Appendix C, paragraph C6 (a), (b), and (c). Example 1 above is analogous to the “discount rate adjustment technique” from C6(a). Example 2 is analogous to “method 2 of the expected present value technique” from C6(c). Example 3 is analogous to “method 1 of the expected present value technique” from C6(b). These same three methods are outlined in FAS 157 Appendix B. When the purpose for a valuation is to obtain a market-based or market-consistent value for an item that includes risk or uncertainty, it is best to use the most directly applicable and readily available information concerning market pricing for similar risks. This information can take several forms, leading to several different valuation techniques for estimating the effects of risk. In the example above, the most readily available information on market pricing is the market spread over the risk-free rate, or 2% = 7% - 5%. This spread could be used as means of estimating the effect of risk when valuing other assets that have similar risk and payment characteristics. However, there are situations where cash flow adjustments are more directly related to observable market parameters than are discount rates. A variety of methods should be allowed, as the most direct method is likely to be the most reliable, and depends on the circumstances. Sections 1.1 to 1.3 present examples of risk adjustment methods for a variety of risks that are relevant to valuing insurance contract liabilities. It is understood that insurance contracts often involve many different risks, all of which must be reflected in the same valuation, so several adjustments often need to be combined together. Insurance contracts may also require assumptions and estimates to account for the difference between contract risks and marketobservable analogues used as inputs to the valuation.
[6] Or, equivalently, take a weighted average: 107 x 99% [assumed probability of payment in full] + 0 x 1% [assumed probability of total default]. This simple example assumes no partial recovery on default.
1.1 Risks of own credit standing and liquidity: the financing rate

When a customer pays money to initiate an insurance contract with an insurer, the customer takes on two risks:

• Credit risk: The risk that the insurer will not pay contractual benefits when due.
• Liquidity risk: The loss of ready access to funds. The customer loses access to their funds between the time the contract is paid for and the time when contractual benefits are due. [7]

These two risks are also present in a financing transaction wherein a lender provides funds to a borrower. The lender takes the credit risk and also loses access to the funds except under contractual terms. There is a clear analogy here – the insurance customer is in the position of the lender and the insurer is in the position of the borrower.

As was mentioned earlier, provision for risk can be made in many forms. We now discuss how to provide for the credit and liquidity risks. We start from a valuation that does not provide for risk: the present value of certain cash flows at the risk-free rate. We note that risks to the insurer should increase the value, and risks to the customer should decrease the value. Since credit risk and liquidity risk are both risks to the customer, when taken into account they should decrease the value of the insurer's liability [8]. A decrease in present value can be obtained by increasing the discount rate with some sort of interest rate spread. Let the risk-free rate be $i_{rf}$; let the spread for credit standing be $s_{credit}$; and let the spread for liquidity risk be $s_{liquidity}$. The adjusted discount rate is then $i_{rf} + s_{credit} + s_{liquidity}$ [9]. As was noted above, these same two risks are present in a financing transaction. Therefore, one can think of this adjusted discount rate as a financing rate. For purposes of this document, we will symbolize the financing rate as

$$ i_f = i_{rf} + s_{credit} + s_{liquidity}. $$
1.2 Risk of the uncertain level of claims

In the valuation of insurance liabilities, one of the most common risks is that of the unknown level of actual incurred claims and claim payments. A margin for this risk should increase the value of the liability so that it is greater than the discounted present value of expected claims taken at the financing rate [10].

[7] Even under contracts that contain a deposit element wherein the deposit is accessible, generally part of the customer's funds do enter the deposit element and are used to pay for pure insurance protection. This part of the funds is subject to liquidity risk under such contracts. In addition, even when the deposit element is accessible, there is often a surrender charge that enforces a penalty for withdrawal, thereby reducing liquidity of the funds.
[8] There is considerable debate concerning whether valuation of liabilities should reflect the credit risk of the liability holder (for example, the IASB has requested comments on exactly this issue for liability measurement). We take no position on this issue in this white paper. At the time of writing, the Academy is in the process of formulating its response to IASB Discussion Paper 2009/2, Credit Risk in Liability Measurement.
[9] As noted earlier, in practice it may be difficult if not impossible to separately identify $s_{credit}$ and $s_{liquidity}$, and a single market observation of a financing rate is used as a stand-in for both.
[10] This risk margin might be included as part of the reported liability or it might be shown as a separate item in the financial statements, depending on evolving rules for financial statement presentation and disclosure.
Two of the most common methods of providing this margin are: 1) adjusting the cash flows by adding to them a "cost of capital" [11] amount that represents the market price of risk; and 2) adjusting the discount rate downward. These two methods can, of course, yield exactly the same resulting value for the liability. To see this, assume that the amount of capital attributable to claims risks is 10% of the liability at any point in time, and the cost of capital rate is 8% of the amount of capital. The cash flow adjustment for any period would be 10% x 8% = 0.8% of the value of the liability at the beginning of the period. Exactly the same valuation is arrived at without making any adjustment to cash flows if one instead subtracts 0.8% from the discount rate used when determining the present value of expected cash flows [12]. We will call this discount rate adjustment $s_{clm}$. The discount rate to be used would then be $i_f - s_{clm}$.

While the above methods are simple enough mathematically, there can be some difficulty in estimating both the portion of an insurer's total capital that is attributable solely to claims risks and the cost of capital rate. For example, although total capital may be more observable (and the total cost of capital easier to estimate), most insurers retain some investment risk in addition to the claims risks that they take on, and their total capital includes the amount attributable to investment risks. Some actuaries point out that an alternative methodology allows calibration of the discount rate adjustment by using two quantities that are sometimes easier to estimate – the insurer's total cost of capital and its expected total investment return (i.e., the portfolio rate [13]). The argument is that if one subtracts the full cost of capital (expressed as a yield spread) from the portfolio rate, the result is an appropriate discount rate for insurance liabilities and no cash flow adjustment is required to include a provision for risk. This "portfolio rate" methodology can be shown to fall within CON 7 guidelines as long as certain relationships are enforced. Essentially, the method assumes that

$$ i_{portfolio} - s_{capital\,cost} = i_f - s_{clm}. $$
[11] The cost of capital is the market price for obtaining capital that is to be put at risk. The providers of capital to an insurer expect an investment return higher than the risk-free interest rate because their funds have been put at risk. The excess of the investor's expected investment return over the risk-free rate is the estimated cost of capital rate. The cost of capital in dollars (or the appropriate currency) is the cost of capital rate times the amount of capital required. This amount can be added to liability cash flows in each future time period as a provision for the market price of risk.
[12] This can be demonstrated by example. Suppose the liability at the beginning of the period is L, and the expected claim payment at the end of the period is C. Suppose the financing rate is 5.0%, and the cost of capital is 0.8% of the liability. The cost of capital can be treated as an addition to cash flow, in which case we have L = (C + 0.008L) / 1.05, using the financing rate as the discount rate. One can rearrange this equation to be L = C / (1.05 − 0.008), in which case the cost of capital appears as a reduction to the discount rate rather than a cash flow adjustment. The liability value is the same no matter which formula is used.
[13] Sometimes the term "portfolio rate" is used to refer to a measure of investment return on an amortized cost basis. The usage here is different; "portfolio rate" refers to the expected total return on market value for the portfolio. This estimate depends on current market conditions and is therefore appropriate for market-consistent valuation if used properly.
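The equivalence described in footnote 12 is easy to check numerically. The following is a minimal Python sketch (not part of the original paper) using the figures quoted above: a 5.0% financing rate, capital equal to 10% of the liability, an 8% cost of capital rate, and a hypothetical expected claim of 100. The fixed-point loop is just one illustrative way of solving L = (C + 0.008L) / 1.05.

financing_rate = 0.05
risk_charge = 0.10 * 0.08          # 0.8% of the opening liability, from the text
expected_claim = 100.0             # hypothetical claim paid at the end of the period

# Method 1: treat the 0.8% cost of capital as an addition to cash flow and discount
# at the financing rate, i.e. solve L = (C + 0.008*L) / 1.05 by fixed-point iteration.
L = expected_claim
for _ in range(100):
    L = (expected_claim + risk_charge * L) / (1 + financing_rate)

# Method 2: make no cash flow adjustment, but subtract 0.8% from the discount rate.
L_rate_adjusted = expected_claim / (1 + financing_rate - risk_charge)

print(round(L, 6), round(L_rate_adjusted, 6))   # the two liability values agree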
It is not at first obvious why this relationship should hold. To understand why, let us enumerate all of the risks that need to be reflected in the company's cost of capital. The risks, and the interest rate spreads that reflect their market prices, are:

- s_clm: spread for claims risks retained by the insurer (adds to cost of capital)
- s_inv: spread for investment risks taken by the insurer (adds to cost of capital)
- s_credit: spread for credit risk accepted by the policyholder (this is the option to default; if reflected, it reduces the cost of capital)
- s_liquidity: spread for liquidity risk accepted by the policyholder (reduces the cost of capital)
The total cost of capital is then s_capital_cost = s_clm + s_inv − s_credit − s_liquidity. Financial economic theory tells us that the expected yield spread on a risky investment should be equal to the market price for accepting the risk of the investment. Since we assume the market price of investment risk is s_inv, that suggests that s_inv = i_portfolio − i_rf. We can now write (using i_f = i_rf + s_credit + s_liquidity):

i_portfolio − s_capital_cost
  = i_portfolio − (s_clm + s_inv − s_credit − s_liquidity)
  = i_portfolio − (s_clm + (i_portfolio − i_rf) − s_credit − s_liquidity)
  = i_rf − (s_clm − s_credit − s_liquidity)
  = (i_f − (s_credit + s_liquidity)) − (s_clm − s_credit − s_liquidity)
  = i_f − s_clm
to demonstrate the equality stated earlier.

Objections to the use of the "portfolio rate method" have often been based on the idea that the value of an insurer's liabilities should not depend on its investment strategy. Since the portfolio rate does depend on investment strategy, its use in any way suggests that the valuation depends on the strategy. However, the portfolio rate method as described here includes adjustment for the full investment risk through the cost of capital adjustment. Any increase in investment risk that would lead to a higher portfolio rate also leads to a higher cost of capital, so that the quantity i_portfolio − s_capital_cost does not change, at least in theory. As a practical matter, one can check whether i_portfolio − s_capital_cost < i_f to determine whether the portfolio rate method is being applied in an appropriate way.[14] When properly applied, the "portfolio rate method" is conceptually consistent with CON 7, and may have the advantage of calibrating to more easily estimated quantities.
[14] An exception to this relationship occurs for insurance contracts that contain non-guaranteed investment elements, as described in the next section.
The cost of capital that is used as the provision for risk in the portfolio rate method is not always converted into an interest rate spread. In some cases it is applied as an addition to projected cash flows. Under this variation, the portfolio rate is used directly as the discount rate, but full provision for risk is made through the adjustment to cash flows.
1.3 Risks retained by the policyholder: non-guaranteed elements in insurance contracts

Many insurance contracts include elements that are not guaranteed. For example, an insurer may guarantee that insurance coverage can always be renewed or extended for another time period, but may not guarantee the premium rate for the renewal period. Alternatively, an insurer may offer a plan of insurance that returns a portion of the premium on a non-guaranteed basis at the end of the coverage period if and only if claims experience is favorable. The premium for such a plan would, of course, be higher than that for a plan that offers no such potential non-guaranteed benefit. One can readily see that such non-guaranteed elements tend to reduce the risk of the contract to the insurer. They do so by shifting some risk to the customer. In the latter example, the customer's risk is the uncertainty about the size of the non-guaranteed payment to be received at the end of the coverage period.

In our discussion concerning the discount rate, a special focus needs to be placed on non-guaranteed investment elements in insurance contracts. Many insurance contracts involve an investment (or deposit) element. The contract may include a deposit or fund amount that the customer owns and on which interest is credited. When the interest credited to the fund is not fully guaranteed but depends in some way on the performance of a portfolio of assets, we have a non-guaranteed investment element inside an insurance contract.

To understand valuation of an insurance contract with a non-guaranteed investment element, it helps to start by thinking about a pure investment contract. A pure investment contract simply passes the results of an investment portfolio directly to the contract owner, so we will refer to it as a pure pass-through. The liability for such a contract is typically the current account balance.[15] In some cases, the investment element inside an insurance contract does operate very much like a pure pass-through. In the US, such contracts are called Variable or Separate Account contracts. In the UK, the term is unit-linked.[16]

The more interesting case is an investment element that is not a pure pass-through. The insurer may provide a minimum guaranteed interest rate that will be credited, and then provide the customer with non-guaranteed additional interest credits if investment performance is good. The additional interest credits may be based directly on current market performance or may be spread over time based on an amortized-cost measure of return. In either case, the insurer will charge a fee for the guarantee of a minimum credited rate. The fee is conceptually related to the cost of the capital required to support the guarantee.

[15] That is, before adjustments for fees, expenses, or other elements of the contract.
[16] These contracts typically include certain fees that are subtracted from the account value each period. So while they are not pure pass-throughs, because part of the investment return is held back in the form of fees, they still have the characteristic that fluctuations in investment return (and the corresponding risk) are passed through to the account value.
The lower or weaker the guarantee, the lower the fee, and the closer we get to a pure investment pass-through contract.

With this in mind, let's revisit the "portfolio rate method" as described in the previous section. The discount rate including full adjustment for risk was i_portfolio − s_capital_cost and was to be strictly less than the financing rate, so we could check that i_portfolio − s_capital_cost < i_f. However, when non-guaranteed elements are included in insurance contracts, they provide a means of reducing the insurer's risk, and therefore reducing its capital cost, without necessarily reducing either the portfolio rate or the financing rate. As a result, we could have a discount rate that exceeds the financing rate, so that i_portfolio − s_capital_cost ≥ i_f.

Now, even though the discount rate is based on the portfolio rate and may exceed the financing rate, this does not mean that the resulting liability value depends on the company's investment strategy. There is an offset to the excess of the discount rate over the financing rate. The offset is the additional projected liability cash flows that arise from the non-guaranteed interest credited to the customer. The reduction in capital cost attributable to the non-guaranteed elements is typically passed through to the customer as an increase in expected (but non-guaranteed) benefits. Any change in investment strategy that increases the discount rate under this methodology is offset by an increase in projected cash flows from non-guaranteed benefits, so the liability value is, at least theoretically, not sensitive to the investment strategy. As a result, when non-guaranteed elements with some sort of investment return pass-through are present, it can be appropriate to use a discount rate in excess of the financing rate as long as the projected cash flows include the pass-through of the additional investment return net of capital cost.

An alternative method for valuation of such contracts is to assume the portfolio earns the risk-free rate (or the financing rate) and to project the non-guaranteed benefit amounts that would be paid with that level of investment return. Such a method is consistent with the reasoning above and with CON 7, but is less realistic because it requires a projection that alters current non-guaranteed crediting rates away from the actual rates currently being paid. This is important, because the behavior of the owners of contracts with non-guaranteed investment elements depends on the level of non-guaranteed amounts being paid. If these amounts are not competitive, contract owners may terminate their contracts. Since contract-owner behavior is typically reflected in cash flow models used for valuation, a projection that alters current non-guaranteed crediting rates away from the actual rates being paid also alters assumed behavior, thereby distorting the cash flow projection used for valuation unless policy-owner behavior algorithms are adjusted.[17]
[17] This problem is frequently encountered in risk-neutral valuations of spread-managed business where the credited rate is assumed to be the portfolio rate less a spread. Since all assets are assumed to earn the risk-free rate(s), the resulting modeled credited rate will be below the risk-free rate. If policyholder behavior assumptions (such as surrender rates) are not appropriately translated into a risk-neutral environment, excess surrenders and early exercise of policyholder options might be triggered even in the base scenario and in those scenarios that vary little from the base. It should be further noted that not all actuaries believe policyholder behavior algorithms should be altered in a risk-neutral valuation; these actuaries believe that any resulting anomalous policyholder activity is part of such a valuation.
Section 2 – Interest rate risk under stochastic valuation methods

The accounting literature discusses valuation techniques that involve probability-weighting of several different future outcomes. In CON 7 this is called the "expected cash flows" method, and in the IASB exposure draft on fair valuation, this is called the "expected present value technique". In both cases, the "expected" cash flows are determined as a probability-weighted average of the outcomes of various scenarios, and the valuation is done by discounting the "expected" cash flows. While CON 7 is silent on how to provide for risk under this method, the IASB exposure draft suggests two methods, one in which the discount rate is adjusted away from the risk-free rate and one in which the risk-free rate is used for discounting but the expected cash flows are adjusted to "certainty equivalents". These methods are reasonable, but they are not the only methods in common use. We wish to explain two variations on these methods that are in common use. The variations are 1) the use of probabilities other than the "real" probabilities, and 2) the use of discount rates that vary by scenario.
2.1 Probabilities other than the "real" probabilities

As noted above, IASB ED 2009/5 mentions two methods that can be used to include a provision for risk under the "expected present value technique". The two methods involve either an adjusted discount rate (other than the risk-free rate) or adjustments to the expected cash flows. A third technique, not mentioned in ED 2009/5, is to adjust the probabilities of the scenarios to give adverse outcomes greater probability weight. When this is done, the discount rate can be the risk-free rate because the provision for risk is provided through the probability weighting, which adjusts the cash flows to certainty equivalents.

The adjustment of probabilities, in combination with discounting at the risk-free rate, is the theoretical basis of the widely used Black-Scholes method for valuation of stock options. Consider the valuation of an option to buy a stock at a price of $100 per share any time within the next year. Suppose the current market price of the stock is $95 per share. The value of the option is based on the probability of the price rising over $100 per share within the coming year. This probability depends on the "volatility" of the stock price. Under the Black-Scholes method, one uses the observed market value of options to work backwards to determine the "implied" volatility that is consistent with the observed market price. That implied volatility, calibrated to market prices of options available in the market, can be used in valuation of options for which a market price is not available; say, options with different strike prices or options with different expiry dates.

The important concept here is that the "implied" volatility is not the real volatility, and the probability of the option having value based on the "implied" volatility is not the real probability of the option having value. The "implied" volatility embeds a bias that reflects the market price of risk, and the probabilities derived from it are likewise biased. One can use the "implied" volatility and the
associated biased probabilities to determine market-consistent prices that include provision for risk without ever needing to know what the real probabilities are. It should be noted that while valuations calibrated in this manner include a provision for risk, the exact size of the provision for risk is not known, and the method provides no way to determine it. Therefore, any accounting requirement to disclose the size of the provision for risk in the valuation can only be met via a rough approximation that can be less reliable than the estimated market-consistent price itself.

The Black-Scholes method, and other methods that involve the use of adjusted probabilities, can be considered variants of "method 1 of the expected present value technique" as outlined in FAS 157 Appendix B and IASB ED 2009/5 Appendix C. The adjusted probabilities are used to determine risk-adjusted expected cash flows, which are then discounted at the risk-free rate. The important aspect of this family of variants is that the method of calibration to market prices bypasses any need to determine or specify the size of the risk margin that is included.
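To make the calibration step concrete, here is a minimal Python sketch (not from the paper). The stock price of $95, strike of $100, and one-year term come from the example above; the observed option price of $8.50 and the 5% risk-free rate are hypothetical assumptions, and the standard European-call form of the Black-Scholes formula is used.

from math import log, sqrt, exp, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(spot, strike, rate, vol, t):
    # Standard Black-Scholes price of a European call option.
    d1 = (log(spot / strike) + (rate + 0.5 * vol * vol) * t) / (vol * sqrt(t))
    d2 = d1 - vol * sqrt(t)
    return spot * norm_cdf(d1) - strike * exp(-rate * t) * norm_cdf(d2)

def implied_vol(price, spot, strike, rate, t, lo=1e-4, hi=5.0):
    # Solve for the volatility that reproduces an observed market price (bisection).
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if bs_call(spot, strike, rate, mid, t) < price:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

spot, strike, term, rate = 95.0, 100.0, 1.0, 0.05   # stock and option from the text; rate assumed
observed_price = 8.50                               # hypothetical observed market price
sigma = implied_vol(observed_price, spot, strike, rate, term)

# The same implied volatility can now be reused to price an option with a different strike.
print(round(sigma, 4))
print(round(bs_call(spot, 110.0, rate, sigma, term), 2))

Note that nothing in this sketch requires knowing the real probability distribution of the stock price; the calibration to one observed price is what carries the provision for risk.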
2.2 Discount rates that vary by scenario

An added variation on the "expected present value" method comes into play in valuation of items whose cash flows depend on the level of future interest rates. Common methods for valuation of such instruments involve not only adjusted probabilities, but also discount rates that depend on the scenario of future interest rates. A common example of such an item is a fixed-rate home mortgage that can be prepaid without penalty at any time. Such a mortgage is more likely to be prepaid if interest rates are low (allowing the mortgagor to refinance at a lower rate) than if interest rates are high. Therefore the cash flows from such a mortgage are sensitive to the level of future interest rates. We will use an example based on a prepayable mortgage to illustrate how the combination of adjusted probabilities and scenario-specific discount rates is often used to include a provision for the market price of the prepayment risk. Similar techniques are often applied in valuation of insurance contracts with cash flows that depend on the level of future interest rates. However, an example of such a contract would be much more complex. The same principles and methods can be illustrated much more simply in the context of a prepayable mortgage.

For our example, we focus on a simple fixed-rate mortgage that requires annual payment of interest, with a balloon payment to repay the full principal at the end of its term. The mortgage will have a principal amount of $1000, an interest rate of 5.122%, and a maturity date of two years after the valuation date. The contractual cash flows are $51.22 at the end of one year and $1051.22 at the end of two years. We also assume that the risk-free yield curve on the valuation date is 5.0% for the first year and 5.25% for the second year. If there were no option to prepay, the present value at the risk-free yield curve of $51.22 due in one year and $1051.22 due in two years is $1000, and this would be the current value of the mortgage.

However, if there is an option to prepay at the end of one year without penalty, one might expect that if market interest rates decline, many mortgage holders would prepay and refinance their
mortgage at the lower interest rate. This option to prepay is a risk to an institution that holds the mortgage as an asset. Market provision for this risk should reduce the market value of the prepayable mortgage below $1000.

To value the prepayable mortgage we will, as a simplified example, use the expected present value technique with two scenarios. Under scenario 1 the risk-free rate rises from 5% to 6% after one year. Under scenario 2 the risk-free rate falls from 5% to 4% after one year. The first step in applying this technique is to calibrate the implied risk-neutral probability weights to market prices. The implied probability weights must be such that they properly price a fixed and certain cash flow at the end of two years, when path-specific discounting of the scenario cash flows is used. Exhibit 1 shows how this is done. Each line in Exhibit 1 corresponds to a present value calculation along a scenario.

Example 1.1 shows a single-scenario calculation of the present value of a fixed and certain cash flow of $1000 at the end of two years, with discounting at the current risk-free yield curve. The present value is $904.88. Example 1.2 shows what happens if we apply path-specific discounting using our two scenarios, and guess at the probabilities. As an initial guess we specify probabilities of 50% for each scenario. The probability-weighted present value is $907.11, which is incorrect. We therefore need to adjust the probabilities to produce the proper probability-weighted value of $904.88. Example 1.3 shows the corrected probabilities. Note that for the institution that holds a fixed-rate instrument as an asset, an increase in market interest rates is an "adverse" scenario because the return on the asset is locked in and does not rise with the market. The probability of the adverse scenario is increased to 62.9454% from 50%, thereby giving more weight to the adverse scenario. When these probability weights are used, the probability-weighted present value is $904.88, as it should be. These adjusted probabilities are often termed the "risk-neutral" probabilities. This process of calibrating the scenarios and probabilities to market prices is vitally important when including a provision for risk using "expected present value" techniques.

Now that our scenarios and probabilities have been calibrated, we can use them to value the mortgage, first assuming no prepayment risk and then assuming significant prepayment risk. Examples 2.1 and 2.2 are valuations assuming no prepayment risk. Example 2.1 does not use path-specific discount rates, and Example 2.2 does use path-specific discounting. Since the cash flows are fixed and do not depend on the scenario, the probability-weighted present value is the same under both methods at $1000.00.

Examples 2.3 and 2.4 are valuations assuming significant prepayment risk. We assume that if interest rates fall to 4% at the end of year 1, fully half of the mortgage principal will be prepaid, a prepayment of $500. In that case the cash flows are $551.22 at the end of year 1 and $525.61 at the end of year 2. Example 2.3 does not use path-specific discounting, and obtains a probability-weighted present value of $1000.22. Clearly this result must be incorrect because it is greater than the value of the non-prepayable mortgage. The fundamental reason it is incorrect is that it includes no provision for risk. Example 2.4 uses path-specific discounting to obtain a value of $998.10. This clearly
does make provision for the prepayment risk. The value of the prepayment option is $1000 − $998.10, or $1.90.

This example is not intended to fully explain the theory behind the use of scenario-specific discount rates. However, the reader should take away the following main ideas:

- Scenario-specific discounting is commonly used to include a provision for risk when future cash flows depend on the level of future interest rates.
- Scenario-specific discounting is used only in combination with careful calibration of the scenarios and the implied risk-neutral probability weights to market prices.
The second point above is particularly important because it is critical to the theory that supports scenario-specific discounting. There are many ways to calibrate the scenarios and the probability weights to market prices. A short description of a method commonly used in valuation of insurance liabilities may be helpful. In valuation of insurance liabilities whose cash flows depend on future interest rates, one frequently uses a very large number of interest rate scenarios for many months or years into the future. Rather than adjusting the probabilities of the scenarios, the calibration process adjusts the path of future interest rates in each scenario. This is done by adding a calibrated “drift” to the change in interest rates each period before any stochastic or random change is applied by the scenario generator. In that way, even though the probabilities of the scenarios are all treated as equal, the number of scenarios with increasing or decreasing interest rates is adjusted, thereby adjusting the overall probability that interest rates will rise or fall. The “drift” is calibrated so that the probability-weighted present value of a fixed and certain cash flow at any future date is equal to its current market price on the valuation date. To accomplish this for all future dates, the “drift” is not a constant, but a series of calibrated amounts for each future time period. Lastly, an important aspect of scenario-specific discounting is that the scenario-specific discount rate for each period in each scenario is the short-term one-period risk-free rate. While a scenario generator may provide a full yield curve at every point in time along each scenario, it is the path of the short-term one-period rates that needs to be used for discounting.
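As an illustration of the drift-calibration idea just described, here is a minimal Python sketch (not from the paper). All parameters are assumptions for illustration only: two one-year periods, forward rates of 5.00% and 5.25% as in Exhibit 1, normally distributed rate shocks, and equally weighted scenarios. The per-period drift is solved by bisection so that the average path-discounted value of a certain $1 matches the market discount factor at each horizon.

import random

random.seed(0)

n_scenarios, n_periods = 2000, 2
forward_rates = [0.05, 0.0525]     # assumed one-year forward rates, as in Exhibit 1
shock_sd = 0.01                    # assumed volatility of the one-period rate

# Market discount factors for a certain $1 paid at the end of each year.
market_df = []
acc = 1.0
for f in forward_rates:
    acc /= (1.0 + f)
    market_df.append(acc)

# Pre-draw the shocks so calibration and valuation use the same equally weighted scenarios.
shocks = [[random.gauss(0.0, shock_sd) for _ in range(n_periods)]
          for _ in range(n_scenarios)]

def model_df(drifts, horizon):
    # Average discounted value of $1 paid at `horizon`, with path-specific discounting
    # at the scenario's one-period rates (forward rate + calibrated drift + shock).
    total = 0.0
    for sc in shocks:
        df = 1.0
        for t in range(horizon):
            df /= (1.0 + forward_rates[t] + drifts[t] + sc[t])
        total += df
    return total / n_scenarios

# Calibrate one drift per period, period by period, with simple bisection.
drifts = [0.0, 0.0]
for t in range(n_periods):
    lo, hi = -0.05, 0.05
    for _ in range(60):
        drifts[t] = 0.5 * (lo + hi)
        if model_df(drifts, t + 1) > market_df[t]:
            lo = drifts[t]     # model value too high, so rates need to drift up
        else:
            hi = drifts[t]

print([round(d, 5) for d in drifts])
print(round(model_df(drifts, 2), 6), round(market_df[1], 6))   # should match closely

The calibrated drifts are small but not exactly zero, which reflects the same point made in the text: the adjustment is whatever is needed so that a fixed and certain cash flow is priced at its current market value.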
Summary

The purpose of this paper has been to outline several methods that can be used in valuation of items that involve risk and uncertainty, emphasizing their compliance with the principles of fair valuation as outlined in such existing accounting pronouncements as CON 7. There has been concern among some actuaries that certain methods that we believe are consistent with CON 7 could, in future accounting standards, be disallowed. There has been particular concern over methods for valuation of insurance liabilities that involve scenario path-specific discounting, or use of the insurer's portfolio rate, or methods used when non-guaranteed investment elements are present. These methods have been shown here to be consistent with CON 7 when properly applied.

A short paper such as this cannot possibly cover all the valuation methods that are in use, and neither can an accounting standard. That is why CON 7 is so important – it states the principles that must be followed, yet allows flexibility in the application of those principles as needed and appropriate. It is our hope that future accounting standards continue the practice of explicitly stating that alternate approaches for discounting and fair valuation are allowed.
Exhibit 1 - Valuation of a fixed and certain cash flow: calibration of risk-neutral probability weights

Columns: Scenario | Cash flow end yr 1 | Cash flow end yr 2 | Discount rate yr 1 | Discount rate yr 2 | Discount factor end yr 1 | Discount factor end yr 2 | Scenario present value | Probability weight | Weighted present value

Example 1.1 - No path-specific discounting
1 | $0 | $1,000 | 5.00% | 5.25% | 0.952381 | 0.904875 | $904.88 | 100.00% | $904.88

Example 1.2 - Path-specific discounting, "real" probabilities
1 | $0 | $1,000 | 5.00% | 6.00% | 0.952381 | 0.898473 | $898.47 | 50.0% | $449.24
2 | $0 | $1,000 | 5.00% | 4.00% | 0.952381 | 0.915751 | $915.75 | 50.0% | $457.88
Total weighted present value: $907.11 (Incorrect!)

Example 1.3 - Path-specific discounting, "risk-neutral" probabilities
1 | $0 | $1,000 | 5.00% | 6.00% | 0.952381 | 0.898473 | $898.47 | 62.9454% | $565.55
2 | $0 | $1,000 | 5.00% | 4.00% | 0.952381 | 0.915751 | $915.75 | 37.0546% | $339.33
Total weighted present value: $904.88

When discounting using the scenario path of risk-free rates, one must use calibrated "risk-neutral" probability weights.
Exhibit 2 - Valuation of $1000 mortgage

Columns: Scenario | Cash flow end yr 1 | Cash flow end yr 2 | Discount rate yr 1 | Discount rate yr 2 | Discount factor end yr 1 | Discount factor end yr 2 | Scenario present value | Probability weight | Weighted present value

If cash flows have no prepayment risk:

Example 2.1 - No path-specific discounting
1 | $51.22 | $1,051.22 | 5.00% | 5.25% | 0.952381 | 0.904875 | $1,000.00 | 62.9454% | $629.46
2 | $51.22 | $1,051.22 | 5.00% | 5.25% | 0.952381 | 0.904875 | $1,000.00 | 37.0546% | $370.55
Total weighted present value: $1,000.00

Example 2.2 - With path-specific discounting
1 | $51.22 | $1,051.22 | 5.00% | 6.00% | 0.952381 | 0.898473 | $993.27 | 62.9454% | $625.22
2 | $51.22 | $1,051.22 | 5.00% | 4.00% | 0.952381 | 0.915751 | $1,011.44 | 37.0546% | $374.78
Total weighted present value: $1,000.00

If cash flows have significant prepayment risk:

Example 2.3 - No path-specific discounting
1 | $51.22 | $1,051.22 | 5.00% | 5.25% | 0.952381 | 0.904875 | $1,000.00 | 62.9454% | $629.46
2 | $551.22 | $525.61 | 5.00% | 5.25% | 0.952381 | 0.904875 | $1,000.58 | 37.0546% | $370.76
Total weighted present value: $1,000.22 (Incorrect! No provision for risk.)

Example 2.4 - With path-specific discounting
1 | $51.22 | $1,051.22 | 5.00% | 6.00% | 0.952381 | 0.898473 | $993.27 | 62.9454% | $625.22
2 | $551.22 | $525.61 | 5.00% | 4.00% | 0.952381 | 0.915751 | $1,006.30 | 37.0546% | $372.88
Total weighted present value: $998.10 (Correct! The value of the option to prepay is $1.90.)
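The calibration and valuations shown in Exhibits 1 and 2 can be reproduced with a few lines of code. The following is a minimal Python sketch (not part of the original paper); the rates, cash flows, and prepayment assumption are taken from the example above, and the risk-neutral weight is solved in closed form rather than by trial and error.

# Two interest-rate scenarios after year 1, from the example: up to 6%, down to 4%.
r1 = 0.05
r2_up, r2_down = 0.06, 0.04
curve_y2 = 0.0525                       # year-2 rate on the current market yield curve

# Two-year discount factors along each path and on the market curve.
df_up = 1 / ((1 + r1) * (1 + r2_up))
df_down = 1 / ((1 + r1) * (1 + r2_down))
df_market = 1 / ((1 + r1) * (1 + curve_y2))

# Calibrate the risk-neutral probability of the "up" (adverse) scenario so that a
# fixed and certain $1 at the end of year 2 is priced correctly (Exhibit 1).
p_up = (df_down - df_market) / (df_down - df_up)
print(round(p_up, 6))                                         # ~0.629454

def value(cf_up, cf_down):
    # Probability-weighted PV with path-specific discounting; cash flows per path
    # are given as (end of year 1, end of year 2).
    pv_up = cf_up[0] / (1 + r1) + cf_up[1] * df_up
    pv_down = cf_down[0] / (1 + r1) + cf_down[1] * df_down
    return p_up * pv_up + (1 - p_up) * pv_down

# Example 2.2: no prepayment, contractual cash flows in both scenarios.
print(round(value((51.22, 1051.22), (51.22, 1051.22)), 2))    # ~1000.00

# Example 2.4: half the principal prepays at the end of year 1 in the "down" scenario.
print(round(value((51.22, 1051.22), (551.22, 525.61)), 2))    # ~998.10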
Bibliography

This bibliography lists some textbooks and a monograph relevant to the subject of this white paper. The material in these books provides an indication of the wide variety of methods that have been developed and are being used in valuations that involve risk and uncertainty.

Interest Rate Modeling, a textbook by Jessica James and Nick Webber. John Wiley & Sons, 2000. Part III of the book is titled "Valuation Methods".

Derivatives Markets, a textbook by Robert L. McDonald. Addison Wesley, 2006. Part 5 of the book is titled "Advanced Pricing Theory".

Models for Quantifying Risk, a textbook by Robin Cunningham, Thomas Herzog, and Richard L. London. ACTEX Publications, 2005. This book covers a wide variety of models used to characterize and quantify insurance risks.

Insurance Risk Models, a textbook by Harry H. Panjer and Gordon E. Willmot. Society of Actuaries, 1992. This is a classic educational text covering a wide variety of models used to characterize and quantify insurance risks.

Fair Valuation of Insurance Liabilities: Principles and Methods. This 2002 public policy monograph by the American Academy of Actuaries provides an introductory-level survey of many of the methods described in textbooks and other literature.

Measurement of Liabilities for Insurance Contracts: Current Estimates and Risk Margins. This 2009 International Actuarial Research Paper was prepared by the ad hoc Risk Margin Work Group of the International Actuarial Association.
For example, to have the caller pass space for the result:
char *itoa(int n, char *retbuf)
{
    sprintf(retbuf, "%d", n);
    return retbuf;
}

...

char str[20];
itoa(123, str);
To use malloc:
#include <stdio.h>
#include <stdlib.h>

char *itoa(int n)
{
    char *retbuf = malloc(20);
    if(retbuf != NULL)
        sprintf(retbuf, "%d", n);
    return retbuf;
}

...

char *str = itoa(123);

(In this last case, the caller must remember to free the returned pointer when it is no longer needed.)
See also question 20.1.
Frequently Asked Questions

Why the name jug?
The cluster I was using when I first started developing jug was called “juggernaut”. That is too long and there is a Unix tradition of 3-character programme names, so I abbreviated it to jug.
How to work with multiple computers?
The typical setting is that all the computers have access to a networked filesystem like NFS. This means that they all “see” the same files. In this case, the default file-based backend will work nicely.
You need to start separate jug execute processes on each node.
See also the answer to the next question if you are using a batch system or the bash utilities page if you are not.
Will jug work on batch cluster systems (like SGE/LSF/PBS)?
Yes, it was built for it.
The simplest way to do it is to use a job array.
On LSF, it would be run like this:
bsub -o output.txt -J "jug[1-100]" jug execute myscript.py
For SGE, you often need to write a script. For example:
cat >>jug1.sh <<EOF
#!/bin/bash
exec jug execute myscript.py
EOF
chmod +x jug1.sh
Now, you can run a job array:
qsub -t 1-100 ./jug1.sh
Alternatively, depending on your set up, you can pass in the script on STDIN:
echo jug execute myscript.py | qsub -t 1-100
In any case, 100 jobs would start running with jug synchronizing their outputs.
Given that jobs can join the computation at any time and all of the communication is through the backend (file system by default), jug is especially suited for these environments.
The project gridjug integrates jug with gridmap to help run jug on SGE clusters (this is an external project).
How do I clean up locks if jug processes are killed?
Jug will attempt to clean up when exiting, including if it receives a SIGTERM signal on Unix. However, there is nothing it can do if it receives a SIGKILL (or if the computer crashes).
The solution is to run jug cleanup to remove all the locks.
In some cases, you can avoid the problem in the first place by making sure that SIGTERM is being properly delivered to the jug process. For example, if you are executing a script that only runs jug (like in the previous question), then use exec to replace the script by the jug process.
Alternatively, in bash you can set a trap to catch and propagate the SIGTERM:
#!/bin/bash
N_JOBS=10
pids=""
for i in $(seq $N_JOBS); do
    jug execute &
    pids="$! $pids"
done
trap "kill -TERM $pids; exit 1" TERM
wait
It doesn’t work with random input!
Normally the problem boils down to the following:
from jug import Task
from random import random

def f(x):
    return x*2

result = Task(f, random())
Now, if you check jug status, you will see that you have one task, an f task. If you run jug execute, jug will execute your one task. But, now, if you check jug status again, there is still one task that needs to be run!

While this may be surprising, it is actually correct. Every time you run the script, you build a task that consists of calling f with a different number (because it's a randomly generated number). Given that tasks are defined as the combination of a Python function and its arguments, every time you run jug, you build a different task (unless you, by chance, draw twice the same number).
My solution is typically to set the random seed at the start of the computation explicitly:
from jug import Task
from random import random, seed

def f(x):
    return x*2

seed(123)  # <- set the random seed
result = Task(f, random())
Now, everything will work as expected.
(As an aside: given that jug was developed in a context where it is important to be able to reproduce your results, it is generally a good idea that if your computation depends on pseudo-random numbers, you be explicit about the seeds. So, this is a feature not a bug.)
Why does jug not check for code changes?
1) It is very hard to get this right. You can easily check Python code (with dependencies), but checking into compiled C is harder. If the system runs any command line programmes you need to check for them (including libraries) as well as any configuration/datafiles they touch.
You can do this by monitoring the programmes, but it is no longer portable (I could probably figure out how to do it on Linux, but not other operating systems) and it is a lot of work.
It would also slow things down. Even if it checked only the Python code: it would need to check the function code & all dependencies + global variables at the time of task generation.
I believe sumatra accomplishes this. Consider using it if you desire all this functionality.
2) I was also afraid that this would make people wary of refactoring their code. If improving your code even in ways which would not change the results (refactoring) makes jug recompute 2 hours of results, then you don’t do it.
3) Jug supports explicit invalidation with jug invalidate. This checks your dependencies. It is not automatic, but often you need a person to understand the code changes in any case.
Can jug handle non-pickle() objects?
Short answer: No.
Long answer: Yes, with a little bit of special code. If you have another way to get them from one machine to another, you could write a special backend for that. Right now, only numpy arrays are treated as a special case (they are not pickled, but rather saved in their native format), but you could extend this. Ask on the mailing list if you want to learn more.
Is jug based on a background server?
No. Jug processes do not need a server running. They need a shared backend. This may be the filesystem or a redis database. But jug does not need any sort of jug server.
Can I pass command line arguments to a Jugfile?
Yes. They will be available using sys.argv as usual.
If you need to pass arguments starting with a dash, you can use -- (double dash) to terminate option processing. For example, if your jugfile contains:
import sys
print(sys.argv)
Now you can call it as:
# Argv[0] is the name of the script
$ jug execute
['jugfile.py']
$ jug execute jugfile.py
['jugfile.py']
# Using a jug option does not change ``sys.argv``
$ jug execute --verbose=info jugfile.py
['jugfile.py']
$ jug execute --verbose=info jugfile.py argument
['jugfile.py', 'argument']
# Use -- to terminate argument processing
$ jug execute --verbose=info jugfile.py argument -- --arg --arg2=yes
['jugfile.py', 'argument', '--arg', '--arg2=yes']
When backgrounding.
So, here is how you can obscure the preview window when your app is backgrounded.
Blurring the preview on Android
The easiest way to obscure the preview window in Android is by setting two flags in your MainActivity's OnCreate
public class MainActivity : FormsApplicationActivity
{
    protected override void OnCreate(Bundle bundle)
    {
        base.OnCreate(bundle);
        Window.SetFlags(WindowManagerFlags.Secure, WindowManagerFlags.Secure);
    }
}
Blurring the preview on iOS
For iOS, none of the solutions I found worked in all cases. It should be simple, but because I was using Xamarin Forms it took me a while to get it working in all cases. I noticed that when using a Tabbed layout with a navigation structure, the default blurring mechanism gave some problems. The trick is to find the correct RootViewController before adding the blur.
In your AppDelegate add the following methods
private UIVisualEffectView _blurView = null;

public override void OnResignActivation(UIApplication uiApplication)
{
    // First find the correct RootViewController
    var window = UIApplication.SharedApplication.KeyWindow;
    var vc = window.RootViewController;
    while (vc.PresentedViewController != null)
    {
        vc = vc.PresentedViewController;
    }

    // Add the blur effect
    using (var blurEffect = UIBlurEffect.FromStyle(UIBlurEffectStyle.Light))
    {
        _blurView = new UIVisualEffectView(blurEffect);
        _blurView.Frame = UIApplication.SharedApplication.KeyWindow.RootViewController.View.Bounds;
        vc.View.AddSubview(_blurView);
    }

    base.OnResignActivation(uiApplication);
}

public override void OnActivated(UIApplication uiApplication)
{
    try
    {
        if (_blurView != null)
        {
            _blurView.RemoveFromSuperview();
            _blurView.Dispose();
            _blurView = null;
        }
    }
    catch
    {
    }

    base.OnActivated(uiApplication);
}
References
- Find the RootViewController code snippet from Adam Kemp
- Blurring on iOS code snippet from Danny Cabrera
18 March 2010 13:04 [Source: ICIS news]
(Adds CEO comments throughout)
Major projects such as Borouge 2 - a joint venture with Abu Dhabi National Oil Company (ADNOC) at Ruwais in the United Arab Emirates, which is expected to come online in mid-2010 - meant the company would incur costs in the months between start-ups and products finally reaching customers, as no value in terms of additional sales would be created, according to Garrett.
“We expect 2010 to be even tougher instead of being easier,” Garrett said.
“We think the economy is still nervous, and although we see some recovery, we believe it will be more difficult for our company because we are starting up both Borouge 2 and an LDPE [low density polyethylene] plant at Stenungsund in Sweden.”
However, Garrett added that once the company’s major projects were completed, Borealis would start to reap the rewards and the industry would see the group’s financial figures improve.
The company would continue to focus on innovation, improving operations, cash generation and cost cutting, Garrett said.
The polyolefin maker posted a fourth-quarter net profit of €13m ($17.8m), reversing the heavy €122m loss incurred in the same period of 2008.
It also recorded an operating profit of €11m in the last three months of 2009, against a €199m loss in the previous corresponding period, despite a 5.9% fall in sales to €1.27bn.
For the full year, Borealis’s net profit shrank 84% year on year to €38m as sales plunged 30% to €4.71bn, the company said. Operating profit last year declined 85.3% to €24m.
Reflecting on the results, Garrett was upbeat about Borealis’s performance.
“Despite a tough year for the plastics industry, Borealis achieved a positive result,” Garrett said.
“The figures are a much better result than what we achieved in the boom years... this result was much tougher to achieve because if you look back then, everyone was making money and now we are making money when no one else has been,” he added.
($1 = €0.73)
Additional reporting by Pearl Bantillo
29 June 2011 04:07 [Source: ICIS news]
SINGAPORE (ICIS)--Iran's Kharg Petrochemical is running its methanol plant at around 95% of capacity, a company source said.
The plant usually runs at full rates, but production was slightly decreased because machinery works less efficiently in the summer months, with temperatures of more than 46 degrees Celsius, the source added.
“The high temperature and humidity causes the operating rates to decrease, but only slightly,” she said.
Kharg Petrochemical has no plans for maintenance at the unit this year as of now, the source said.
Other methanol producers in the
The "What's New!" items were badly cluttering up the main index page.
I'd thought I'd take all but the most recent notices about changes to this web site and move them here, to provide a form of historical log (running oldest to new) for the web site.
::Brian::
The (unofficial) "I Love A Mystery" web site is officially open for business!
March 18th 1999:
Chapter 4 of my ILAM pastiche, The Ghost With Nine Fingers is now available for you to enjoy!
To find out who shot Nasha in this sequel to both BURY YOUR DEAD ARIZONA and TEMPLE OF VAMPIRES, you can access it via the Home Brew link. Please tell me if you want to read more!
I've added some more images to the ILAM Photo Gallery; there's also my review and synopsis of the Made-For-TV Pilot for "I Love A Mystery."
April 10th, 1999
I've added another short essay on the ages and backgrounds of Jack, Doc and Reggie in the "Rambling's Page". I've also included Jim Farst's synopsis of his recently unearthed ILAM fragment, Episode 13 of THE SNAKE WITH THE DIAMOND EYES. Look for it on the "Fragment's Page", and thanks for letting me reprint this, Jim!
April 27th 1999
I've since completed the second part of the ILAM log, incorporating the broadcasts dates and my comments on the MBS run of the show. Also, thanks to one of the site visitors (Keith Lee, take a bow!), I now have a new image in the ILAM photo gallery you might want to check out. Finally, Jim Farst also submitted an image of Mr. Morse in his later years, which is also in the gallery.
April 28th 1999
The 500th visitor to this ILAM web site arrived today!
I've been away from the web site for a little while, what with my new rotation in Cardiology, as well as a recent trip to Washington DC. At the Library of Congress, I found some further ILAM materials! Please follow the "ILAM at the Library of Congress" link for an essay on what I turned up!
Also new, is a new link to an FTP site, where MP3 versions of several ILAM stories are found, along with a couple of ILAM fragments and even complete ILA stories. Check it out (via the "Other ILAM Web Pages Link"), if you can afford the downloading time and space!
May 25th, 1999
The first week's worth of synopsized episodes of the "lost" ILAM story of THE GRAVES OF WHAMPERJAW, TEXAS is now available for your reading. While at the Library of Congress, I read and took notes (including direct quotes) on this story, and the results of this will be serialized over the next few weeks. Follow the link from the "ILAM at the Library of Congress" page to read the first five episodes of this spooky story!
The *second* week's worth of episodes, #6 through to #11 of the wonderfully eerie "lost" ILAM story, THE GRAVES OF WHAMPERJAW, TEXAS is now available for your reading pleasure; follow the new "Synopses" link! The third week is also completed, but I still have to convert it to HTML format.
June 15th, 1999
The *third* and final week's worth of episodes, #11 through to #15 of THE GRAVES OF WHAMPERJAW, TEXAS is now available for your reading pleasure! To find out the secret identity of the mysterious murderer who digs the graves of his victims first, and scribbles verse on their tombstones foretelling the murder, read on to the final chapter, #15!
Also, the 1000th visitor to the site arrived today!
June 17th, 1999
Having completed the synopsis for all 15 episodes of the "lost" ILAM THE GRAVES OF WHAMPERJAW, TEXAS, I've started working on one of the other stories, notes which I scribbled into a $1.50 spiral bound notebook when I did my research at the Library of Congress at the end of May. As a teaser for the next story, I've provided via the "Synopses of Lost Shows" link the first episode synopsis of the "lost" ILAM MURDER HOLLYWOOD STYLE.
I hope you enjoy reading it!
I've also cleaned up some errors in my ILAM Log (thanks to Jim Widner, for pointing these out), and I've added some further facts and rumors on my "Raiders of the Lost ILAMS" page, thanks to Ken Greenwald.
Finally, for all of you to whom I owe a reply, many apologies. Work has become very busy here in the Cardiology unit, and ironically, I get most of my work done on the web-site when I am on call in the Coronary Care Unit (in between such acute coronary syndromes as MI's and tachyarrhythmias!) where I can't always reply appropriately to my E-mail. Many apologies for those I owe letters or packages to; I don't mind a gentle reminder if I've been deficient!
I've started a new rotation, being one of the Senior Medical Residents at University Hospital (here in London, Ontario). Because of this, previous work I had started on MURDER HOLLYWOOD STYLE is stuck on the CCU's computer at a different hospital, a little difficult for me to access right now! To make up for that, I've created a synopsis for the first episode of another "lost" ILAM for you to enjoy, one from YOU CAN'T PIN A MURDER ON NEVADA. I hope you'll enjoy it; you'll find it in the same area as the others.
July 16th, 1999
With my new duties as the Senior Medicine Resident at very busy Clinical Teaching Unit here in London, I've found my spare time to work on the web-page to be very short. However, I have managed to create a synopsis of the first five episodes, or one week's worth, of the lost ILAM story, YOU CAN'T PIN A MURDER ON NEVADA. The next two weeks worth of episodes, #5-#15 will follow some time over the next month.
July 24th, 1999
The second week's worth of episodes of the "lost" ILAM, YOU CAN'T PIN A MURDER ON NEVADA is now available for your reading pleasure. Simply follow the link "Synopses of lost Shows." I'm working on the final week's worth of synopses right now, and it should be available for reading in about a week's time. Tell me what you think of it!
July 27th, 1999
The story synopsis for the lost ILAM YOU CAN'T PIN A MURDER ON NEVADA is now complete! The final weeks worth of synopsized episodes for this story of a western mob war, grisly murder and a devilish mystery for Jack Packard to solve is now available for your reading pleasure. As usual, simply follow the link of "Synopses of Lost Shows"!
Also, as an additional treat, my friend Jim Farst has provided us with some more ILAM images. On the "Photo Gallery" page, there are scans of two more lobby cards from the eponymous movie. And on the "Introduction" page, I have a very nice image of the cast from the second run of the show (Russel Thorson, who plays Jack Packard, is lighting a cigarette for Jim Boles, who plays Doc Long; Tony Randall, who played Reggie York, is seen looking on)! Give them all a gander, and thanks again, Jim, for your contribution!
Another surprise for your reading pleasure. I've finished the synopsis for the first week (or a big two episodes in total) of the final ILAM story, FIND ELSA HOLBERG, DEAD OR ALIVE. Because of a lack of time during my research at the Library of Congress, I unfortunately took very few notes (and even few direct quotes, alas) from this story, one that reads much more like an extended version of one of Morse's later "I Love Adventure" stories.
Still, in the synopsized final chapters of this story I *do* provide some extended direct quotes, including some that provide the most risqué remarks ever from Mary Kay Brown (the A-One's red-headed and final secretary), and some quotes which provide some of the most revealing insights into Jack Packard's character. Keep visiting this web site for my synopses of the next 10 episodes, which will appear over the next few weeks.
August 10th, 1999
Another surprise for all of you! I have the second week's worth of episodes, this week *5* chapters in total, of the *very last* ILAM story, FIND ELSA HOLBERG, DEAD OR ALIVE now available for your reading pleasure. Just follow the usual links to read it, and again, tell me what you think!
August 26th, 1999
Hi there! I was away last week for what was supposed to be a vacation week, but it turned out into interview week, for I'm applying for a Fellowship position in Geriatric Medicine. When I got back (after driving to Hamilton, then Toronto, then Ottawa, then back to London, blech!), I was astonished to find that the 2000th visitor had come to the site!
I also found in my mailbox a very nice gift from a visitor to this site. Gary Holman, who has several of the ILAM record albums made by RADIOLA and MARK 56 Records, made photocopies of the front and back of these items, and they are very interesting indeed. I'm trying to find a way to scan these in, but in the meanwhile, thanks Gary!
Finally, I'm still very busy filling out Clinical Fellowship application forms, forms for my Royal College exams, a couple of lectures, a manuscript that needs to be worked on, etc., but as a break for myself, and as a treat for all of you, I've finished the first four episodes of MURDER HOLLYWOOD STYLE. Fear not: I'll soon be getting back to finishing the last week's worth of episodes for FIND ELSA HOLBERG, DEAD OR ALIVE.
I have *finally* finished my synopsis of the third and final week's worth of episodes of the *very last* ILAM serial, FIND ELSA HOLBERG, DEAD OR ALIVE. Again, just follow the usual links to read it!
September 5th, 1999
With the completion of episode #5, the entire first week's worth of MURDER HOLLYWOOD STYLE is now complete! Follow the usual links!
September 7th, 1999
A breakdown of four more episodes, (#6-#9) of the "lost" ILAM serial MURDER HOLLYWOOD STYLE is now available for your reading pleasure. Simply follow the "Synopses of Lost Shows" link from the main page to read them!
September 9th, 1999
Honest to my Grandma, but Week #2 of MURDER HOLLYWOOD STYLE is now complete!
With the inclusion of Episode #10, all the synopsized episodes for this week's worth of this ILAM story are available for your reading pleasure. Additionally, I've finished typing the breakdown for chapters 11-15, too (but I still have to convert these to HTML). Stay tuned sometime next week for the conclusion of this baffling and eerie story!
September 11th, 1999
At last! The final five episodes for the lost "I Love A Mystery" Story, MURDER HOLLYWOOD STYLE have now been completed! Simply follow the usual links to read it!
Additionally, I've made a few changes to the web-site, cleaning up the intro page, creating a new "About the Web-Master" page, and so forth. Please tell me what you think of the changes, or if you have suggestions for more.
I've also tracked down some copyright information about the series, which I'll be posting soon, perhaps as a new page. I'm also thinking about developing an ILAM FAQ to house on the site; what do you think should be included? I've also learned that there is an Internet vendor who is claiming to have a nearly complete version of BATTLE OF THE CENTURY (missing only two chapters!). Stay tuned to find out more!
Finally, I'd like to publicly thank Tom Fetters for pointing out some errors in the first two weeks' worth of HOLLYWOOD, providing me a very detailed description of spelling mistakes, grammatical errors and the like, which I've promptly corrected. Thanks, Tom!
September 13th, 1999
You'll notice some great new images on the web-site, thanks to a very kind visitor to this site (and ILAM fan, to boot!)! To check them out, visit the "main index" page (here!), the "ILAM Log" page, and my "ILAM Holding's" page to see them all!
September 15th, 1999
I have a new page set up, one dealing with US copyright information about the series. Included is a list of ILAM (and other materials) registered and archived at the US Copyright Office, in Washington D.C. I was finally able to access the COHM database over the Internet, and have provided a listing of what they have there (essentially, every single ILAM episode!).

Also, I notice that when I use different computers, especially those with different screen sizes, some of the images look kind-a crummy (at least to my eye). Are there any particularly bad images or pages, and if so, do you have any advice for how I can correct this? Finally, there's an image of a microphone and script on the "ILAM at the Library of Congress" page that I want to shrink in size to best fit the page; any advice on how to do this (I don't have any imaging programs of my own, alas)?
September 30th, 1999
I've been unfortunately away from both the web-site and ILAM in general for the last few weeks. Keeping me busy were 1) two podium presentations I gave at the Royal College of Physicians & Surgeons of Canada's annual meeting, held last weekend in Montreal, Canada, and 2) pondering over offers for my Clinical Fellowship in Geriatric Medicine, which will start July 1st 2000 in either Toronto or Hamilton, Ontario.
Because of this, I have very little new to offer you at this point on the ILAM web-page itself. I *am* tracking down rumors of a 1950's(!) ILAM television pilot, scanning some more images to place on the page, figuring out how to place some contemporary *Variety* reviews of the radio series onto the web-site, reading Jim Bannon's (who played Jack Packard in the ILAM Columbia B's) autobiography, and so forth. Another fan of the series is currently reading some of the archived scripts at Thousand Oaks Public Library, and is providing me some fascinating tid-bits which I'll share with you all shortly. Stay tuned, ILAM-ophiles and MORSE-holics!
Well, a few new additions to the web-site! If you hit the new "ILAM at the Movies" link, you'll see a few new pages to follow. First of all, thanks to generous (and anonymous!) visitor to the site, I managed to get not only a copy of Jim Bannon's autobiography, but also a whack of new images which have allowed me to properly comment on the three Columbia B ILAM movies. Thank you very much!
Also, for those of you interested, I've finally accepted a position in Geriatric Medicine at McMaster University, to be held in Hamilton as of July 1st, 2000 (FYI, Hamilton is located half-way between Buffalo and Toronto, on the extreme West edge of Lake Ontario).
October 17th, 1999
Well, I should have been studying for my upcoming exams, but I've updated the web-site in several ways. Firstly, there's a new ILAM FAQ (Frequently Asked Questions) list, available on the links listing below. It's a beta-version now, but I'd thought I'd upload it now, and await some of your feedback for corrections, additions, and so on.
Secondly, I've cleaned up some of the over large image files on the "ILAM AT THE MOVIES" pages. There's also an image of myself (taken in New York City several years ago) for those of you wondering what I look like (look for this on the"ABOUT THE WEB-MASTER" link).
Finally, I've noticed that during the last few days, the 3000th visitor has arrived at the site! Not bad, considering I've only had this up and running since February (grin!). Thanks, everybody! And keep on writing!
October 22nd, 1999
Just a quick note to tell you that the version 0.91 of the Beta ILAM FAQ has just been uploaded. Also, there's a new page devoted to the creator of ILAM, Carlton E. Morse. Follow the links below to read these.
I've also learned by chance that the free web-host for this site, ANGELFIRE.COM, will be requiring all web pages to provide either banner or pop-up ads on their pages. Previously an elective option (which I elected not to do), I now have no choice in the matter. Does anyone have a preference as to whether pop-up's are preferable to banner ads? Which is the lesser of the two evils?
October 25th, 1999
A few minor changes and housekeeping duties to pass on to you.
I've taken a dead link off the "LINKS" page, the one to the MP3 ILAM files that was housed at another site. This link now requires a password (as several of you have let me know, thank you) which I don't have. If anyone knows of another site that has ILAM MP3's or RAs, please let me know so I can provide a link to it.
Secondly, I've made a few changes to version 0.92 of the FAQ. Finally, I have a little bit of a breather, since I just finished my MCCQE II exam (six hours of testing, blech) yesterday. The breather won't last, since I start my role as senior resident at Victoria Hospital SSC on November 1st, with very green student MD's I have to break in.
I've been away from the web-site for far too long, a combination of increased work responsibilities, as well as a well deserved vacation in Las Vegas with my wife (strangely, we don't gamble, drink or smoke to any extent), after which I had even MORE work to do on my return (sigh)!
I've a few new things to pass on. Firstly, I've cleaned up the first ILAM synopsis I wrote, THE GRAVES OF WHAMPERJAW, TEXAS, to add some simple images, and to set Mr. Morse's text apart from my own.
Secondly, I've added a new partial synopsis! If you follow the usual SYNOPSIS link (see below), you'll find the first episode of the lost classic, THE BLUE PHANTOM MURDERS, there for your reading pleasure.
Please tell me what you think of the new change in format.
Happy 2000 everyone! The new year has finally arrived, even though it isn't quite a new millennium yet (that's for next year). And so far, no Y2K glitches to spoil a brand-spanking new century!
I'm also glad I survived the last few weeks. The holiday season isn't a very kind one to a Senior Medical Resident, what with having to cover for absentee colleagues, working all Christmas Eve until noon Christmas day, dealing with Influenza A outbreaks in nursing home patients, and more. To be honest, I've been dead dog tired, and only over the last few days have I started feeling more like myself. My next two months, being spent in Dermatology, will be a much less hectic time.
Meanwhile, to my utter chagrin, I notice that it's been an entire month since I've last added any content to these pages. I've been slowly adding to the new partial synopsis of the lost classic, THE BLUE PHANTOM MURDERS; episodes #1 and #2 are now here for your reading pleasure. Simply follow the "Synopses of Lost Shows" link below!
I've also cleaned up another older synopsis, You Can't Pin A Murder On Nevada, making it easier to read (and cleaning up some typos). Many thanks to Tom Fetters for his help in this regard.
Finally, I also note that over 4000 persons have visited the web site since I created it last February. If all goes well, perhaps we'll have a full 5000 visitors for our first anniversary here together!
January 8th, 2000
I have a new High-Speed Internet access account, and I've been a little tardy in replying to all your recent E-messages to me (in particular, those surrounding the Don Sherwood "I Love A Mystery" comic art, which has recently appeared on E-Bay). Many apologies all around.
I'm still slowly adding to the new partial synopsis of the lost classic, THE BLUE PHANTOM MURDERS; episodes #1 to #3 are now here for your reading pleasure, and I'd like to thank Tom Fetters yet again for helping with editing my material. Simply follow the "Synopses of Lost Shows" page as before!
I've also cleaned up yet another older synopsis, Murder Hollywood Style, making it easier to read (and cleaning up some typos). Also, a new version of the ILAM FAQ is now up, including new material (thanks to James Herman's research at the Morse Collection, located at Thousand Oaks Public Library, in Thousand Oaks, California). I've also cleaned up the Miscellaneous Ramblings page, and have included a short essay on the relationship ILAM had with the genesis of that classic television cartoon series, Scooby-Doo!
Finally, I've been revisiting my old home-brew ILAM radio-play pastiche, I LUVVA MYSTERY: THE GHOST WITH NINE FINGERS. After re-reading this, and making many minor (and several major!) revisions, I've started again to complete writing Episode 5 of this story, which is a sequel to both BURY YOUR DEAD ARIZONA and TEMPLE OF VAMPIRES. I'm also rethinking the title of this parallel series, and will perhaps change it to THREE LOVE A MYSTERY. If all goes well, perhaps I'll have five "new and improved" episodes posted by the time of our first anniversary here together next month!
January 20th, 2000
Episodes #1-4 of THE BLUE PHANTOM MURDERS are now complete! Simply follow the "Synopses of Lost Shows" page as before. I've also cleaned up four previous chapters of my old home-brew ILAM radio-play pastiche I LUVVA MYSTERY: THE GHOST WITH NINE FINGERS, and have also added a fifth episode for your enjoyment. I've also re-titled this parallel series as THREE LOVE A MYSTERY.
Let me know if you want to read any more of these.
Happy Groundhog Day! Also, to mark the month of the 1st Anniversary of this ILAM web-site, the 5000th visitor arrived today!
While I don't have a new synopsis episode to share today, I do have some interesting news. Tom Brown, Director of the newly established First Generation Radio Archives, has recently informed me (January 30th, 2000) that they have an uncirculated "lost" ILAM in their collection! To find out more about this, follow the "Raiders of the Lost ILAMs" link below, and scroll down (and pay close attention to the new image at the top of the page!).
Secondly, I have a new essay on the Miscellaneous Ramblings page, regarding the ILAM comic strip! Simply follow the link to read this, and to examine a few examples of the original comic strip art based on our favorite "blood and thunder" radio show!
February 12th, 2000
I've picked up a new scanner, and with its help, I've made some changes and additions to the web site.
Firstly, with the OCR option, I've scanned in materials sent in by a very generous visitor to the site. If you check out the Miscellaneous Ramblings link, you'll find three new essays. One is a collection of contemporaneous reviews by radio critics about the series. The second is a recounting of how the Republican convention of 1940 wreaked havoc on the final episode of the ILAM serial, THE SNAKE WITH THE DIAMOND EYES.
The third essay on the Miscellaneous Ramblings link is a transcription of an interview with Mr. Carlton E. Morse himself, circa 1970! That's not all! I've changed the ILAM Photo Gallery into an area detailing the actors and actresses involved with both runs of "I Love A Mystery"! Not only do I have images, but brief biographical information I've collected. Check out The ILAM Cast below for further information!
February 15th, 2000
I have two very good pieces of news to share with you (and one bad piece).
Firstly, Tom Brown, Director of the First Generation Radio Archives, has informed me that the ILAM ET that they have is of a FULL THIRTY MINUTE show, the entire final episode of EIGHT KINDS OF MURDER (and not the 15 minutes I had erroneously supposed). Additionally, I've listened to part of the digital re-mastering that Mr. Brown has done, and the difference is striking. I can hardly wait until it is completed!
Secondly, I have a brand new synopsis of a 15 episode "lost" ILAM show for you all to enjoy, courtesy of a visitor to this site, Harold M. Hart! Mr. Hart visited the U.S. Library of Congress' Recorded Sound Department and took careful notes after reading Mr. Morse's script for THE GIRL IN THE GILDED CAGE. On Valentine's Day he mailed this out to me, and with some mad scrambling, I have converted them into HTML format tonight.
To read Mr. Hart's wonderful effort (and believe me, I *know* how hard a task this is!), simply follow the Synopses to Lost Shows link below, and scroll down to the bottom of the page. I'm sure you'll enjoy reading this as much as I have (and perhaps Harold's effort will inspire me to finish the synopsis for BLUE PHANTOM!).
Finally, the bad news (sigh). I've been inundated with requests to copy ILAM recordings, video tapes, scripts, etc., for others over the last few months, and as much as I'd like to, I can't accommodate the literally dozens of requests that have been made. Frankly, I have no more free time to do this (and I'm already disappointing a dozen or so persons who asked me to do this for them since September and who are still waiting, sigh), not to mention the close to 50 or so others who have asked.
Right now, I'm terribly, terribly busy. My situation may change some time in the next 6 months, after I start my Geriatrics Fellowship, but what little free time I have left in the last few months of my Internal Medicine postgraduate medical work is being eaten up with working on manuscripts I'm submitting for publication, talks I'm doing, and my medical studies and exam preparation (for my Royal College orals). What little free time I have is spent with my wife Caroline, and what's left over from her is spent fooling around on the computer, updating this web site at irregular intervals whenever I'm at work evenings in the hospital and have access to my dial-up Internet account.
So please don't feel bad if I can't comply with all your requests right now (I can truly sympathise with the frustration of tracking down ILAM materials, I really can!).
I'll try to answer each and every Email I still get, but sometimes I'm weeks behind on doing so (this evening I spent 45 minutes alone doing this, when I should be working on a talk on Common Skin Disorders in Older Persons), and for this I do apologize (and if I don't reply within a month, send it again!). Again, things may change, and I may be able to help you track down alternative sources for such things as the ILAM videos or sources for recordings or scripts. But for now, I'll only try to honor those requests made earlier, and place a halt on all future trading. I hope you'll all understand this new policy decision.
February 29th, 2000
Well, I have some very interesting news for all of you to leap about on this Leap Day!
James Herman, who is a regular visitor to this site, has visited the Morse Collection at Thousand Oaks Public Library on several occasions to read some of the ILAM scripts there. Happily for us, during one mad 2 hour dash, he jotted down enough notes to make a synopsis of yet another lost show!
The shortest ILAM of all, the five episode "The Corpse in Compartment C, Car 76", is now available on the Synopsis of Lost Shows page for you all to read and enjoy. Many thanks to James for all his hard work and effort.
Additionally, I've tracked down some more news about some of the "other" Jack Packards from the Hollywood run of the show, Jay Novello and John McIntire. As soon as Angelfire allocates some more space for our web-site (I'm nudging the 5 MB point now!), I'll add this information to the web site. I've also finished the first week's worth of episodes for THE BLUE PHANTOM MURDERS, all five episodes. You can read these in the usual place!
I'm delighted to present a new surprise on the web site, an interview with Jim Harmon, author of THE GREAT RADIO HEROES, and ILAM fan #1!
Mr. Harmon was the very first ILAM fan to meet Mr. Morse back in 1960, and it is largely because of him that we have many of the circulating programs that exist today. His book (written the same year I was born!) was my first introduction to Jack, Doc and Reggie, long before I heard a single episode. Mr. Harmon was generous enough to let me interview him about his relationship with ILAM, and with Carlton E. Morse himself. Head over to the Miscellaneous Ramblings page below, and follow the directions you find there to read this brand new interview!
This has been a long delay in updating the web-site, and it really isn't much of an update (other than the FAQ), as it is just this page. Part of the reason has been my busy rotation in Infectious Disease, and another reason has been planning an upcoming trip to the UK for two weeks (my wife and I fly out this weekend, so there will be an even longer hiatus).
The real reason has been four discouraging items related to ILAM and this web site.
Firstly, I've been right up at my limit with this Angelfire account for three months now. There's no room for more images, and I've even had to take down some older material I had archived away on the site. Despite repeated requests, they haven't responded to my plea for expanding the size of this free account.
Secondly, I have been in touch with the archives of Procter and Gamble, sponsors of ILAM for the last half of the first run of the series. After waiting for many months, I finally had their disappointing reply:
I have checked our database for "I Love a Mystery" and searched our files and unfortunately, no luck. P&G sponsored many TV and Radio Shows throughout its history and the Archives tries to cover everything. However, sometimes our searching is unsuccessful.
Thirdly, efforts by myself, Jim Farst, and Michael Simons to track down the ILAM comic created by Don Sherwood were unsuccessful. Michael sums up things very eloquently:
I've thrown in the towel. I have just finished what must have been my 10th hour of internet research. Not only can I not get any information about the ILAM strip, but absolutely nothing about Don Sherwood. I checked all the European sites I could in English, e-mailed the syndicate that distributed the strip and even contacted several British comic book stores for any kind of a lead! All were dead ends.
Fourthly, and most depressing, is the following that I've learned. I'll quote you a heavily edited E-mail I received from someone who wishes their identity concealed:
I have pretty much confirmed the rumor that an East Coast OTR trader truly has a copy of the lost ILAM, STAIRWAY TO THE SUN. He is reluctant to give up a copy of Stairway to the Sun, which he claims he has from disc, not paper tape. He does not appear to be interested in trading, nor is he interested in money.
Four strike outs, all in a row (sigh). To make up for this, and to get out of my blue funk, I'll try and get back to work on the synopsis for the second week of THE BLUE PHANTOM MURDERS.
And if Angelfire ever does expand my account size, I'll upload a new version of the page describing the actors of ILAM (including all the different Jack Packards from both runs of the show).
Finally, my friend Jim Farst has informed me of the following interesting information:
The latest SPERDVAC says that the Radio Enthusiasts of Puget Sound will definitely re-create an ILAM at their 2000 showcase June 30 - July 1. No other info is available at this time and their website hasn't been updated yet. We will probably have to contact them directly to find out exactly which episode will be done.
::Brian::
P.S. If anyone has any new information regarding ILAM (new fragment discoveries, new recreations, contemporary news articles, etc.), I'd be very interested in reading it. Also, any information surrounding Don Sherwood and how to contact him, as well as information on how to contact the widow of OTR trader Al Bloch (rumored to have had many ILAM treasures in his collection before his death), would be greatly appreciated. It may take a little time to get back to you all, with my vacation plans and all, so please be patient!
This has been another long delay in updating the web-site, but I really do have some good excuses! Firstly, I was away with my wife Caroline vacationing in the United Kingdom, where we had a very good time for two weeks. Of course, on our return, there was the usual backlog of work to clear away! I also began the final rotation of my core training in Internal Medicine, before I start my fellowship in Geriatric Medicine July 1st, 2000, at McMaster University, Ontario, Canada.
There were a few interesting developments in the ILAM world while I was away that have cheered me up from my previous doldrums.
Firstly, I had a nice E-mail from Pat Richoux, reminiscing about listening to the series when he was young. He also mentions another ILAM sighting, this time in the novel "Marathon Man" by noted author and screenwriter William Goldman (I'm still trying to find a local copy of this work, Pat!).
Secondly, Renee Hyatt sent me a few great images of Carlton E. Morse from his high school yearbook, which I'll try to upload on the site some time soon. There's also this wonderful item which he bestowed to all posterity in his class will: "I, Carlton Morse, will to whoever feels called upon to accept same, the large wad of gum which may be found under the first chair in the front row in Mr. Bender's room."
Thirdly, there has been an interesting discussion which I started on Charlie Sumner's great web page, The ILAM Phorum, regarding the level of interest in future ILAM recreations. You may want to read the thread there regarding this controversial topic.
Fourthly, while I was away, the counter tipped over 7000 visitors! Thanks to everyone who has written with all their kind words.
Fifthly, there seems to be a new way to add to the 5 Mb of space that Angelfire offers here to host this site. A new web service seems to be a way to add additional files and so on. If anyone has advice on how to use this so I can load some new pages devoted to the other actors involved with ILAM, the new Morse images, etc., I'd appreciate their input.
Finally, while I'll be busy this weekend and much of the next with my medical on-call duties and a few talks I have to finish, I'll shortly be posting the new synopses for the lost episodes of THE BLUE PHANTOM MURDERS, which I've dragged out far too long. After this is (finally!) done, is there a preference for which other circulating ILAM scripts become synopsized next? I was thinking of MURDER ON FEBRUARY ISLAND next, but I could be convinced to try one of the others.
July 31st, 2000
I started my new Fellowship in Geriatric Medicine in Hamilton, Ontario on July 1st, and I'm having a great time here. Unfortunately, I've had more than a few setbacks, too.
Perhaps the most serious of these was a break-in in my house in London, Ontario (which I commute to on weekends). Not only did they destroy the back door of our house in order to gain entry and grab all our CD's and jewelry, but they also grabbed the scanner and the Dell computer. The same computer that held not only a few years worth of medical essays, talks, manuscripts in process, E-mail addresses, software, backups, etc., but also all my ILAM materials, including images sent to me, E-mail regarding ILAM, and the synopsis I was working on for BLUE PHANTOM. Much of this material is lost forever now, including the software used to generate the site. Until I can shake some money out of our insurance company, it may take some time before I can get up to speed on expanding the web-site (sigh). As it is, I'm making these changes to the site the old fashioned way, fumbling over what little I remember of HTML.
I also have a few other things occupying my time, including difficulties getting a new E-mail address from McMaster U after losing the old one from Western, some car troubles (to the tune of one G note), the need to find an apartment here in Hamilton, commuting back and forth to visit my wife once a week in London, and (in the little time that is left over!) studying for my final written exams in Internal Medicine (held in September).
I also have a new E-mail address, since the old one is gone. I can be reached at [email protected] If you haven't heard from me in a while, please feel free to drop me a line!
November 3rd, 2000
It's been far too long since I've updated the site. Part of the problem is that since our robbery this summer, I don't have a computer of my own, being forced to use borrowed machines at the various hospitals I work in here in Hamilton. None of these machines have web-authoring software, and (to be honest) I'm not all that familiar with HTML coding.
Another problem is an utter lack of spare time; I live in Hamilton 5 days a week, and commute back to London (Ontario) on weekends (unless I am on call, or my wife is on call, or...). So what little time is left over, I do want to spend it with Caroline, and not trapped in front of a computer. Finally, the two of us were studying frantically for our Royal College Exams (she for Psychiatry, and myself for Internal Medicine) over much of the summer, and we finally wrote the two day exams in early September. Thankfully, we both passed (hurrah!), leaving only the oral exams for both of us next June!
I thought I would have a bit of a breather from all these setbacks, but this past Monday, "Devil's Night", Caroline urgently paged me with some more bad news:
We'd been broken into again....
Someone had smashed through the little front glass window of the front door, and unfastened the lock IN BROAD DAYLIGHT! They must have cut themselves, for they left a bloody trail throughout the house, including the upstairs bedroom, and all over our duvet and carpet. They tore into Caroline's jewelry box again (thankfully missing the diamonds I had given her for Christmas last year), but they did steal some of her other favourites. They also stole my CD player, and made a mess in the bathroom, taking medications, filling the toilet with pills they didn't want, and so on.
Caroline is understandably scared... and I don't blame her. The police took this robbery more seriously, and did fingerprinting, took blood samples, etc. She slept that night at a friend's place (and last night with a fireplace poker near her bed). I think we'll have to invest in some form of security system now...
However, there is some good news to announce today. Firstly, the number of visitors to this web-site has gone well over the 10,000 mark! Secondly, I've finished the latest chapter (episode 6) of my ILAM pastiche, "Three Love A Mystery: The Ghost With Nine Fingers", which I'll soon be posting. Thirdly, I've recently learned that SPERDVAC will be recreating a new "lost" ILAM, MURDER IN TURQUOISE PASS (and I'm frantically trying to supply them the final chapter of this script!).
Fourthly, I have an exciting announcement to make regarding ILAM scripts, but I will save this until December 1st, 2000!
Until then, please enjoy the web-site. I'll be getting a new computer shortly, and hopefully will get some web-authoring software to upgrade the site, add in some new essays, finish the synopsis for "THE BLUE PHANTOM MURDERS" and more.
I am also starting again to trade copies of the 1989 recreation of TEMPLE OF VAMPIRES, so for anyone I've been stalling, anyone I promised to trade this with in the past (i.e. Norm Cukras, John Callahan, Bob Boston, etc.), please get in touch with me ASAP!
November 24th, 2000
A mishap off an icy mountain road today with myself and my car (my car's mostly okay, ditto the driver), combined with some cancelled medical clinics, has allowed me some free time to complete the second week's worth of episodes for the synopsis of THE BLUE PHANTOM MURDERS, with three more episodes detailing the murders taking place on a lonely power yacht deep in the South Atlantic Ocean, just in time for the US Thanksgiving Day weekend! You can find this by following the links on the Synopsis page, below!
I note with interest that the site passed the 12,000 visit mark on January 18th, 2001, and it's still a few weeks until the second anniversary of the web-site! For everyone who has dropped in, take a bow!
Additionally, you may be interested in learning that the ILAM web-site has made it into the news... way back in April 1999! Norman A. Cukras, who used to write a weekly column (The County Line) for The News-Sun (located in Sebring, Florida), wrote an article about both the web-site and myself on April 16, 1999. Being humble, I didn't want to say much at the time, but Norm has graciously granted me permission to reproduce the column on my web-page, and you can read it yourself through either the "About the Web-Master" or the "Misc. Ramblings" pages. Thanks, Norm!
I also spent some time tracking down a rumour that ILAM disks may have made it as far as Japan, when the show was aired on the Far East Network (FEN). I wrote to the Armed Forces Radio & Television Service Broadcast Center, and this is the Affiliate Relations Customer Service Officer's reply to my query letter:
Those recordings of I Love A Mystery and all others were kept in the library "just in case" there was a mail interruption or war prevented FEN and others from getting a shipment from here. When technology changed and television was added, and space became an issue, those "old" electronic transcriptions (ETs) were destroyed per regulation. All this in accordance with Department of Defense regulations, copyright laws and our contractual agreements with the industry.
Unfortunately, not very good news for us ILAM fans. Our only hope is if anyone...
For a second bit of bad news, I've learned that while SPERDVAC was able to pull off a new "lost" ILAM recreation at the end of November 2000 (one for MURDER IN TURQUOISE PASS), SPERDVAC was told to send all scripts and tapes back to the Morse Family Trust... apparently there is a secret commercial venture making ILAM recreations somewhere out on the east coast, and they don't want other organizations such as SPERDVAC muddying up the waters by releasing materials such as the recreations...
Meanwhile, I'm putting the finishing touches on the latest installment of my ILAM pastiche, THE GHOST WITH NINE FINGERS, and will hopefully post this some time before the end of the next month or so. Finally, my next project will be beginning another "lost" ILAM synopsis (and if you have any preferences on which one you want to see me do, please write to let me know!).
Welcome back to the unofficial ILAM web-site! Today marks the second anniversary of the website, first created at 4 am on February 22nd, 1999, while trying to stay awake in the ICU (Intensive Care Unit), after having just admitted someone from the OR who had received a heart transplant. To mark the occasion (and the lucky 13,000 visitors to the web-site), I have a couple of treats in store:
Firstly, I've completed the 7th episode of my ILAM pastiche "Three Love A Mystery", a story entitled, The Ghost With Nine Fingers. I hope you'll enjoy reading it, and you can find it via the Home-Brew ILAM link below.
Secondly, I have a new section available on ILAM recreations! Not only do I have information available about previous projects, I also have some files available for downloading that may be of interest for those persons looking for a recreation of the lost interior episodes of TEMPLE OF VAMPIRES. Visit the Recreations link below for more information and details!
I hope you'll enjoy both these new items!
This is just a short note to explain my long absence from the web-site. I have my oral/practical exams in Internal Medicine with the Royal College of Physicians & Surgeons of Canada in late June, and a big move a week after that. As much as I would like to start the synopsis for the clear winner of your vote for most wanted lost ILAM, (THE PIRATE LOOT OF THE ISLAND OF SKULLS), this will have to wait until after July 2001.
When I find the time, I'll try and answer each and every one of your E-mails on questions surrounding ILAM that I can dash off in a jiffy. Questions (or rather, my answers!) surrounding tape trades, scripts, etc., will have to be deferred until after my chaotic work and personal life has settled down a mite.
In the meanwhile, enjoy visiting the site and check out
the action. I also have some questions on the ILAM
Phorum (follow the links far below) that some of you may
be able to answer for me!
Another short note. I passed my oral Internal Medicine exams in late June, so I am now a Fellow of the Royal College of Physicians and Surgeons of Canada.
Also, a big (and terrible) move for my wife and me just after that (and *still* no new computer) has slowed down my ability to update things here.
However, the next few weeks should see some new updates to the web-site, including the starting of a new synopsis, some new essays, an updated FAQ, changes to the log, and more.
On September 11th, 2001, the world changed forever
with the horrific terrorist attacks in the United States, in both New York
City and Washington D.C.
My wife Caroline & I have been in a state of perpetual shock up here in Canada, riveted to televised news reports of the terrible events unfolding that day just south of the border. Work crawled to a standstill in the hospital we both work in, as not only staff but patients too talked about little else. There were massive blood donor clinics, memorial services, fire-fighters at the mall soliciting funds to help their American brothers, and American flags popping up in windows, garage doors & on homes and businesses everywhere in support.
As a Canadian, I'd like to express my deepest and most profound sympathies to all my American friends who have been touched by this terrible tragedy. My thoughts are with you in this time of sorrow and anger.
http://www.angelfire.com/on/ilam/newpage.html
In this article you can download the very first alpha version of Aubergine, a BDD/DSL framework for .NET, initially based on Machine.Specifications, but later on heavily inspired by Cucumber. It is, AFAIK, the very first Cucumber-like environment available in .NET, and you will see that it is very easy to use. Because of its inspiration (Cucumber), I have decided to use the name Aubergine - I have no idea if they are actually related or not.
Please do note that it is an alpha version, so right now we only have a single test runner that outputs to the console (i.e. no unit test integration yet). In the article I do include a post-build step, which automatically makes my BDD tests run after each build and displays the output in Notepad, which is fine for me atm.
Anyway, enough with the talkin, Let's Get Busy !!!
First you need to download the example project; it includes all the binaries needed to do your BDD development. You can find it here :
Be.Corebvba.Aubergine.Examples.zip (16,43 kb)
Once you have the zip file, you can either explore the project, or walk through the following scenario to create your own test.
Add a class named "BrowserContext" to your project
Add a class named "Make_sure_my_website_gets_enough_visibility" , import the Be.Corebvba.Aubergine namespace, and derive your class from Story
Then you can start typing your story; the final file should look like this:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Be.Corebvba.Aubergine;
namespace Example
{
class Make_sure_my_website_gets_enough_visibility: Story<BrowserContext>
{
As_a website_owner;
I_want to_make_sure_that_I_get_enough_visibility;
So_that I_can_get_enough_traffic;
[Cols("searchengine","search_url","keywords","my_url")]
[Data("google","","core bvba tom janssens","")]
[Data("google","","BDD .Net","")]
[Data("bing","","core bvba tom janssens","")]
class Search_results_for_keywords_on_searchengine_should_contain_my_url : Scenario
{
Given current_url_is_search_url;
When searching_for_keywords;
Then result_should_contain_my_url;
}
}
}
After having created a story with scenarios we need to define the context for these scenarios.
You add all your domain objects that need to be tested to the "BrowserContext" class
In this case it is quite simple :
public class BrowserContext
{
public string Url { get; set; }
public string Result { get; set; }
private WebClient wc = new WebClient();
}
This should be enough to test, but wait, isn't there something missing ?
Next up, we need to define how to interpret the story. As I already mentioned, this is heavily inspired by Ruby/Cucumber: regular expressions are used to determine how all scenario steps should be matched to a real function. While this may sound complicated, it really isn't; this is the code:
public class BrowserContext
{
public string Url { get; set; }
public string Result { get; set; }
private WebClient wc = new WebClient();
[DSL("current_url_is_(.*)")]
void SetUrl(string url)
{
Url = url;
}
[DSL("searching_for_(.*)")]
void SearchForKeyWords(string keywords)
{
Result = wc.DownloadString(Url + HttpUtility.UrlEncode(keywords));
}
[DSL("result_should_contain_(.*)")]
void ResultShouldContain(string myurl)
{
Result.Contains(myurl).ShouldEqual(true);
}
}
That's all there is to it; you are ready to run your tests now !!!
Ok, since we do not want to run these tests manually each time, but at every build, we need to add a post-build step.
In visual studio you can do it like this :
"$(ProjectDir)\lib\Be.Corebvba.Aubergine.ConsoleRunner.exe" "$(TargetPath)" > "$(TargetDir)output.txt"
"$(TargetDir)output.txt"
exit 0
Now build your project, and the tests will be run! Your default text editor will start, and it should contain this text:
==STORY================================================================
Make_sure_my_website_gets_enough_visibility => OK
========================================================================
Search_results_for_core bvba tom janssens_on_google_should_contain_ => OK
Given current_url_is_ => OK
When searching_for_core bvba tom janssens => OK
Then result_should_contain_ => OK
Search_results_for_core bvba tom janssens_on_bing_should_contain_ => OK
Given current_url_is_ => OK
When searching_for_core bvba tom janssens => OK
Then result_should_contain_ => OK
Search_results_for_BDD .Net_on_google_should_contain_ => OK
Given current_url_is_ => OK
When searching_for_BDD .Net => OK
Then result_should_contain_ => OK
It's actually quite easy once you have the hang of it; this is pseudocode:
foreach (var story in AllClassesDerivedFromTheAubergineStoryClass)
    foreach (var scenario in AllPossibleScenariosFor(story))
        create a new context object
        foreach (var possiblestep in AllSteps(given, when, then) in story and scenario)
            find a regex DSL match for possiblestep.name in the context,
            extract all the regex groups, add them as string parameters
            to the matched function, and call that step function
If one of these steps fails, then the test fails and the reason is mentioned in the report.
If you expect a step to fail, then you should add a member variable named "<steptype>Exception"; if a step of that type fails, the step is marked as successful, but the Exception variable will contain the exception thrown. You can see an example of this in the Example zip file.
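To make the pseudocode above a little more concrete, here is a rough, stand-alone sketch of how such a regex-to-method dispatch can be written with plain reflection. The StepDispatcher class and the DSLAttribute stand-in are invented for illustration only; they are not the actual Aubergine source.

using System;
using System.Linq;
using System.Reflection;
using System.Text.RegularExpressions;

static class StepDispatcher
{
    // Finds the first context method whose DSL regex matches the step name,
    // turns the captured groups into string arguments, and invokes it.
    public static void Execute(object context, string stepName)
    {
        foreach (var method in context.GetType()
            .GetMethods(BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic))
        {
            var dsl = method.GetCustomAttributes(typeof(DSLAttribute), true)
                            .Cast<DSLAttribute>()
                            .FirstOrDefault();
            if (dsl == null) continue;

            var match = Regex.Match(stepName, dsl.Pattern);
            if (!match.Success) continue;

            // Group 0 is the whole match; every further group becomes a parameter.
            var args = match.Groups.Cast<Group>().Skip(1)
                            .Select(g => (object)g.Value)
                            .ToArray();
            method.Invoke(context, args);
            return;
        }
        throw new InvalidOperationException("No DSL match found for step: " + stepName);
    }
}

// Minimal stand-in for the [DSL] attribute used in the article.
[AttributeUsage(AttributeTargets.Method)]
class DSLAttribute : Attribute
{
    public string Pattern { get; private set; }
    public DSLAttribute(string pattern) { Pattern = pattern; }
}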
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
http://www.codeproject.com/Articles/43585/BDD-with-DSL-Aubergine-a-ruby-cucumber-like-altern
sunil langeh wrote:

import static java.lang.System.*;

class _ {
    static public void main(String... __A_V_) {
        String $ = "";
        for(int x=0; ++x < __A_V_.length; ) // Line 1
            $ += __A_V_[x];
        out.println($);
    }
}
I also have a doubt about the for loop syntax in Line 1: can we use both the condition and an increment/decrement operator together?
Sachin Adat wrote:Yes you can!!!
sunil langeh wrote:Thanks Sachin
sunil langeh wrote: Does the compiler evaluate the condition first, or the increment/decrement?
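For reference, the order in a for statement is fixed: the initialization runs once, then the condition is evaluated before every iteration (including the first), then the body, then the update expression, and then the condition again. In the quoted code the update section is empty and ++x does the incrementing inside the condition, so x is bumped before each comparison and __A_V_[0] is never appended. A small demo class (hypothetical, not from the original thread) that traces this order:

public class ForOrderDemo {
    public static void main(String[] args) {
        // Prints: init, condition, body, update, condition, body, update, condition
        for (int i = trace("init", 0); trace("condition", i) < 2; i = trace("update", i + 1)) {
            trace("body", i);
        }
    }

    // Logs which part of the for statement is running, then passes the value through.
    static int trace(String phase, int value) {
        System.out.println(phase + " (i=" + value + ")");
        return value;
    }
}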
http://www.coderanch.com/t/427414/java-programmer-SCJP/certification/explain-program-briefly
Introduction
When planning a software solution, you have many different programming languages to choose from, and it's easy to get lost in the intricacies of each one. Your choice of language can depend on many factors. If it's for a personal project or hobby, you may settle for a language you know. If your choice depends on available resources, you might end up with really cryptic approaches. Or, you could spend a lot of time developing reusable components, which can cause the documentation to become a nightmare.
This article doesn't contain a redundant comparison of procedural, object-oriented, and functional languages. With practical examples and scenarios, it shows how to select a language with maximum efficiency and ease of development for your project. It helps you examine several factors to consider while selecting a programming language, whether it is for personal use or for a large project within an organization.
Factors to consider
There isn't just one factor to think about when choosing a programming language. For example, while developing a dynamic web page, you might consider JavaServer Pages (JSP)/servlets as the best option, and others might prefer using PHP or a similar scripting language. No single language is the "best choice." Though you might give preference to certain factors, such as performance and security in enterprise applications, other factors, such as fewer lines of code, might be lower priorities. There's always some trade-off.
After you're given a project or assignment, there's often preparation work to be done before solving the actual problem. The choice of language is by far the most overlooked component of this preparation.
When selecting a language for a personal project, you may pick a personal favorite. Lines of code are important here; the obvious choice is a language that can get the work done in 10 instead of 20 lines of code. You want to get the solution out first, and then worry about the neatness, or performance.
For projects built for a large organization, it's a different scenario. Teams will build components that are going to interact and interconnect with each other to solve a particular problem. The choice of language might involve factors such as how easily the program can be ported to a different platform or the availability of resources.
Selecting the right programming language can yield solutions that are concise, easy to debug, easy to extend, easy to document, and easy to fix. Factors to consider when selecting a programming language are:
- The targeted platform
- The elasticity of a language
- The time to production
- The performance
- The support and community
Targeted platform
The most important factor to consider is the platform where the program will run. Think in terms of the Java™ language and C. If the program is written in C and needs to be run on Windows® and Linux® machines, it would require platform compilers and two different executables. With the Java language, the byte code generated would be enough to run the program on any machine with a Java Virtual Machine (JVM) installed.
A very similar argument applies to websites. They should all look and work the same across all browsers. Using CSS3 and HTML5 tags without checking browser compatibility will cause the same site to look and behave differently across browsers.
Elasticity
The "elasticity" of a language is the ease with which new features can be added to the existing program. Elasticity can involve adding a new set of functions, or using an existing library to add a new feature. Consider the following questions related to elasticity.
- Can I start using a capability of the language without including a new library?
- If not, is the capability available in the language library?
- If it's not a native capability and not available as a library, what is the effort to build the features from scratch?
Before making a decision, you should know how the program has been designed and what features have been set aside as future improvements.
Though a comparison of these languages is not technically correct, consider Perl and Python. Perl has regular expression support built in as a ready-to-use feature. In the case of Python, you have to import the re module from the standard library.
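For instance, here is a minimal sketch of the Python side of that comparison; the pattern and strings are made up purely for illustration:

# Regular expressions ship with Python's standard library,
# but unlike Perl they must be imported before use.
import re

match = re.search(r"(\d{4})-(\d{2})-(\d{2})", "released on 2015-01-26")
if match:
    year, month, day = match.groups()
    print(year, month, day)   # prints: 2015 01 26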
Time to production
The time to production is the time it takes to make the program go live—when the code is production-ready and will work the way it's intended. The presentation logic should be added to the control logic when calculating time to production.
Time to production is very dependent on the size of the code. Theoretically, the easier it is to learn a language, the smaller the amount of code and, hence, less time to go live.
For example, a content management site can be developed using PHP scripts in days compared to servlets code that can take months, assuming you are learning both languages from scratch.
Performance
You can squeeze only so much performance out of a program and a platform, and the language used to develop the program affects performance. There are many studies comparing how fast programming languages are in the same environment. You can use various computer benchmarks as a reference, though the figures are not concrete assessments of the performance of any language.
Consider a web application written in both Java code and Python. The performance data, as shown in the benchmark, would lead you to conclude that, given similar environments, the application written in the Java language should run faster than the one written in Python. But what about the environment itself? If the environment is an x86 Ubuntu Intel Q6600 one core, it's a fair game because the computational power is limited. What if the web application is in the cloud, running on Google App Engine? You now have access to virtually unlimited processing power, and both the programs are going to return results at almost the same time. The choice factor now revolves around lines of code and maintainability.
The performance of a language should matter when the target environment doesn't offer much scope for scaling. Hand-held devices are an example of such an environment.
Support and community
Just as good software needs a community following to help it grow, a programming language should also have a strong community behind it. A language with an active forum is likely to be more popular than even a great language that doesn't have help at hand.
Community support generates wikis, forums, tutorials, and, most importantly, additional libraries that help the language to grow. Gone are the days when people operate in silos. People don't want to skim through all the documentation to get one minor problem solved. If a language has a good following, the chances are good that someone else faced your same issue and wrote about it in a wiki or forum.
Perl is a good example of the importance of community. The Comprehensive Perl Archive Network (CPAN) is a community-driven effort. CPAN's main purpose is to help programmers locate modules and programs not included in the Perl standard distribution. Its structure is decentralized; authors maintain and improve their own modules. Forking, and creating competing modules for the same task or purpose, are common.
Scenarios
The project scenarios in this section illustrate different factors that affect the decision-making process when choosing a language.
- REST service for add operation
- A simple feed reader
- Enterprise applications
- Research projects
REST service for add operation
This scenario is for a service that will do addition in the format of a REST service. You'll invoke a URL, http://<url>?num1=number1&num2=number2, and the result should contain the sum of the two numbers passed to it. You could write the program using different languages. The example here uses JSP, as shown in Listing 1, and PHP, as shown in Listing 2. The JSP program was written in the Eclipse IDE.
Listing 1. REST service using JSP

<html>
<head>
<title>Sum</title>
</head>
<body>
<%
if (request.getParameter("num1") == null || request.getParameter("num2") == null) {
%>
<p><b>Wrong URL!!!</b></p>
<p>
<b>Enter URL in this format: </b>
<i>http://<url>?num1=number1&num2=number2</i>
</p>
<%
} else {
%>
<b>Number 1:</b> <i><%= request.getParameter("num1") %></i> <br>
<b>Number 2:</b> <i><%= request.getParameter("num2") %></i> <br>
<b>Sum:</b> <i><%= Integer.parseInt(request.getParameter("num1")) + Integer.parseInt(request.getParameter("num2")) %></i> <br>
<%
}
%>
</body>
</html>
Listing 2 shows the same program in PHP.
Listing 2. REST service using PHP
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<title>Sum</title>
</head>
<body>
<?php
if ($_GET["num1"] == NULL || $_GET["num2"] == NULL) {
?>
<p><b>Wrong URL!!!</b></p>
<p>
<b>Enter URL in this format: </b>
<i>http://<url>?num1=number1&num2=number2</i>
</p>
<?php
} else {
?>
<b>Number 1:</b> <i><?= $_GET["num1"] ?></i> <br>
<b>Number 2:</b> <i><?= $_GET["num2"] ?></i> <br>
<b>Sum:</b> <i><?= $_GET["num1"] + $_GET["num2"] ?></i> <br>
<?php
}
?>
</body>
</html>
There isn't much difference between the two examples. The program itself doesn't explore all the capabilities of the two languages. It demonstrates that, when it comes to basics, both the languages are at par.
Features of JSP allow it to be used more at an enterprise level. For example, with JSP, the very first time the program is called it's loaded into the memory as a servlet. For every subsequent request the program in the memory is called, giving better response time with subsequent calls. It's also ideal in a Java environment. With PHP, however, each time the program is called it's loaded into the memory, which might increase response time for critical applications.
Another notable feature that makes JSP a better choice in an enterprise is its multi-threading capabilities. PHP has no built-in support for multi-threading.
A simple feed reader
The goal in this scenario is to provide the program with a feed link. The program has to get the feed and list all the titles in the feed. To make it a bit more interesting, you'll subscribe to a JSON-formatted feed and not RSS.
The code snippet in Listing 3 is from O'Reilly and is written in Java code.
Listing 3. Feed reader using Java code
import java.io.InputStream;
import java.net.URL;
import java.net.URLConnection;
import org.apache.commons.io.IOUtils;
import net.sf.json.JSONArray;
import net.sf.json.JSONObject;
import net.sf.json.JSONSerializer;

public class JsonParser {
    public static void main(String[] args) throws Exception {
        String urlString = " _render=json&textinput1=and&urlinput1=http%3A%2F%2Ffeeds.wired.com%2Fwired%2Findex";
        URL url = new URL(urlString);
        URLConnection urlCon = url.openConnection();
        InputStream is = urlCon.getInputStream();
        String jsonTxt = IOUtils.toString(is);
        JSONObject json = (JSONObject) JSONSerializer.toJSON(jsonTxt);
        JSONObject value = json.getJSONObject("value");
        JSONArray items = value.getJSONArray("items");
        String title;
        for (Object item : items) {
            title = ((JSONObject) item).getString("title");
            System.out.println("\n" + title);
        }
    }
}
Listing 4 shows the program in Python.
Listing 4. Feed reader using Python
#!/usr/bin/python
import urllib.request

url = "? _id=df36e60df711191549cf529e1df96884&_render=json&textinput1=and&urlinput1=http%3A%2F%2Ffeeds.wired.com%2Fwired%2Findex"
HTTPdata = urllib.request.urlopen(url)
json_data = eval(HTTPdata.read())
for item in json_data['value']['items']:
    print (item['title'])
The Python program can be further abridged into just three lines. Retain the first two lines of Listing 4, and replace the rest of the code with the line in Listing 5.
Listing 5. Abridged 3rd line
for item in eval((urllib.request.urlopen("? _id=df36e60df711191549cf529e1df96884&_render=json&textinput1=and& urlinput1=http%3A%2F%2Ffeeds.wired.com%2Fwired%2Findex")) .read()))['value']['items']:print (item['title'])
The example application showed the elasticity of these languages. None of them had native support for all the required libraries; you have to import the necessary packages. With Python, it was even simpler because you could manipulate JSON by default. With Java code, it was more difficult because you had to get the JSON libraries and their dependencies to make the program work.
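As an aside, Listing 4 relies on eval() to turn the response into a dictionary, which only works because the feed text happens to be valid Python syntax. A sketch of the same program using the standard-library json module avoids executing the payload as code; the placeholder below stands in for the (elided) feed URL from Listing 4:

# Same job as Listing 4, but parsing with the json module instead of eval().
import json
import urllib.request

url = "..."   # the JSON feed URL from Listing 4
with urllib.request.urlopen(url) as response:
    data = json.loads(response.read().decode("utf-8"))

for item in data["value"]["items"]:
    print(item["title"])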
Enterprise applications
With enterprise applications, designers and programmers need to walk a tightrope when it comes to performance, security, maintainability, and development time. It's not just about using the programming language that can get you the best performance figures. Other important factors include: time to production, elasticity, and how well the program can integrate with the existing infrastructure.
The environment in which the program will be used also plays an important part. Programs written at an enterprise level are never stand-alone. Each program becomes part of an even larger goal, so interoperability becomes a factor.
Imagine that an enterprise with its web services implemented in Java code wants to add WebSphere® MQ as a reliable platform. It doesn't make any sense to use the C APIs of WebSphere MQ to write the application; the choice would have to be Java code.
Research projects
Suppose your next project is to do research in fields unrelated to information technology and computers. For example, the project might involve image processing, audio processing, watermarking, or possibly stock market research. You need to create code to simulate certain real-time behaviors, but you aren't much of a computer geek.
The project requires a lot of quick and dirty code. The most relevant factor is time to production. In this case, time to production means how soon you can make the component work so you can get back to the bigger task at hand. It's a lot like writing small stubs, without giving any attention to interoperability, at this stage. The project might become a full-fledged product, but right now it's in its initial stages. Your prime requirement is prototyping.
Languages such as MATLAB and LISP might come to the rescue. If you start prototyping in C, you'll delve into the details of the variables and pointers without seeing much of the actual result in terms of research. MATLAB has an integration with C/C++ and Fortran that allows C code to be called from MATLAB, and vice versa.
Conclusion
This article outlined some of the factors to consider when choosing a programming language. The factors discussed here are not the only ones to be considered, however. For example, if a very experienced programmer suggests a language that was not under consideration, perhaps you should assess that language, too.
We hope that the process of selecting a programming language for your next project will now be easier. There are always more languages evolving, and there's always room for improvement.
Resources
Learn
- The Computer Language Benchmarks Game provides provisional facts about the performance of programs written in approximately 24 different programming languages for a dozen simple tasks.
- Read about the Google App Engine, which lets you run your web applications on Google's infrastructure.
- One of the first benchmark results, the Computer Language Shootout Scorecard from 2003, summarizes the benchmark results and measures each language's performance.
- "The PHP Scalability Myth" maintains that PHP does scale.
- Learn more about Comprehensive Perl Archive Network (CPAN).
- Stack Overflow: Provides discussions about the speed of PHP, ASP, JSP, CGI, and so on.
- Get more information about MATLAB from MathWorks.
- How to Parse JSON in Java, the code snippet for Listing 3, is from O'Reilly answers.
http://www.ibm.com/developerworks/library/wa-optimal/
So that better model binder I built a couple of years ago to address conditional model binding in ASP.NET MVC 1-2 is obsolete with the release of ASP.NET MVC 3. Instead, the concept of a model binder provider allows this same functionality, fairly easily. Back in my Put Your Controllers on a Diet talk at MVCConf, I showed how we can get rid of all those pesky “GetEntityById” calls out of our controller actions. We wanted to turn this:
public ActionResult Show(Guid id)
{
    var conf = _repository.GetById(id);

    return AutoMapView<ConferenceShowModel>(View(conf));
}
Into this:
public ActionResult Show(Conference conf)
{
    return AutoMapView<ConferenceShowModel>(View(conf));
}
We can use a custom model binder to achieve this result. However, our original implementation used model binders per concrete entity type, not too efficient:
ModelBinders.Binders
    .Add(typeof(Conference), new ConferenceModelBinder());
The problem here is that we’d have to add a model binder for each concrete entity type. In the old solution from my post for a better model binder, we solved this problem with a model binder that also included a condition on whether or not the model binder applies:
public interface IFilteredModelBinder : IModelBinder
{
    bool IsMatch(ModelBindingContext bindingContext);
}
However, this is exactly what model binder providers can do for us. Let’s ditch the filtered model binder and go for the model binder provider route instead.
Custom model binder provider
Before we get to the model binder provider, let’s first build out a generic model binder. We want this model binder to not just accept a single concrete Entity type, but any Entity type we supply:
public class EntityModelBinder<TEntity> : IModelBinder
    where TEntity : Entity
{
    private readonly IRepository<TEntity> _repository;

    public EntityModelBinder(IRepository<TEntity> repository)
    {
        _repository = repository;
    }

    public object BindModel(ControllerContext controllerContext, ModelBindingContext bindingContext)
    {
        ValueProviderResult value = bindingContext.ValueProvider.GetValue(bindingContext.ModelName);

        var id = Guid.Parse(value.AttemptedValue);

        var entity = _repository.GetById(id);

        return entity;
    }
}
We took the model binder used from the original “controllers on a diet” talk, and extended it to be able to handle any kind of entity. In the example above, all entities derive from a common base class, “Entity”. Our entity repository implementation (although it could be any common data access gateway) allows us to retrieve the specific kind of entity (Customer, Order, Conference, whatever) by filling in the type. It’s just another example of using type information as a means of altering behavior in our system.
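For reference, the binder above assumes a small set of abstractions that live elsewhere in the application. Their exact definitions aren't shown in this post, so the following is only an illustrative guess at their shape:

// Hypothetical sketch of the abstractions the binder relies on;
// the real definitions belong to the original application.
public abstract class Entity
{
    public Guid Id { get; protected set; }
}

public interface IRepository<TEntity> where TEntity : Entity
{
    TEntity GetById(Guid id);
}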
In our previous incarnation, we would either create our IFilteredModelBinder to handle any base entity type or, without that kind of abstraction in place, register concrete implementations for each concrete entity type. Not ideal. Instead, let's build a model binder provider:
public class EntityModelBinderProvider : IModelBinderProvider
{
    public IModelBinder GetBinder(Type modelType)
    {
        if (!typeof(Entity).IsAssignableFrom(modelType))
            return null;

        Type modelBinderType = typeof(EntityModelBinder<>).MakeGenericType(modelType);

        var modelBinder = ObjectFactory.GetInstance(modelBinderType);

        return (IModelBinder) modelBinder;
    }
}
First, we create a class implementing IModelBinderProvider. This interface has one member, "GetBinder", whose parameter is the type of the model being bound. A model binder provider returns a model binder instance if it's able to bind based on the model type, and null otherwise. That allows IModelBinderProvider to serve the same function as our IFilteredModelBinder, just in a slightly modified form.
If it does match our condition, namely that the model type is derived from Entity, we can then use some generics magic to build up the closed generic type of the EntityModelBinder, build it from our IoC container of choice, and return that as our model binder.
Finally, we need to actually register the custom model binder provider:
protected void Application_Start()
{
    AreaRegistration.RegisterAllAreas();

    ModelBinderProviders.BinderProviders.Add(new EntityModelBinderProvider());

    RegisterGlobalFilters(GlobalFilters.Filters);
    RegisterRoutes(RouteTable.Routes);
}
We now have just one model binder provider that can handle any bound entity in our incoming model in our controller actions. Whereas MVC 1-2 forced us to either come up with a new abstraction or register specific types, the IModelBinderProvider allows us to make intelligent decisions on what to bind, without incurring a lot of duplication costs.
Yet another set of code to delete when moving to ASP.NET MVC 3!
http://lostechies.com/jimmybogard/2011/07/07/intelligent-model-binding-with-model-binder-providers/
csRenderBuffer::Props Struct Reference
To scrape off a few bytes use bitfields; assumes values are in sane limits. More...
#include <csgfx/renderbuffer.h>
Detailed Description
To scrape off a few bytes use bitfields; assumes values are in sane limits.
Definition at line 274 of file renderbuffer.h.
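As a general illustration of the bitfield technique that comment describes (and not the actual member layout of csRenderBuffer::Props), packing small enumerations and flags into bitfields looks roughly like this:

// Generic illustration of the bitfield-packing idea; NOT the real
// layout of csRenderBuffer::Props, just the technique it uses.
struct PackedProps
{
  unsigned usageHint      : 4;  // small enum: buffer-usage hint
  unsigned componentType  : 4;  // datatype id for each component
  unsigned componentCount : 8;  // components per element (sane limits assumed)
  unsigned copyData       : 1;  // copy data, or just wrap the supplied buffer
  unsigned doDelete       : 1;  // delete the buffer on deallocation
  unsigned isLocked       : 1;  // guards against recursive locking
  unsigned isIndexBuffer  : 1;  // index buffer vs. vertex data
};
// All of the above packs into a single 32-bit word on typical compilers,
// instead of one int or bool per field.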
Member Data Documentation
- hint about main usage (defined at line 277 of renderbuffer.h)
- number of components per element (line 282)
- datatype for each component (line 279)
- should we copy data, or just use supplied buffer (line 289)
- if buffer should be deleted on deallocation (line 291)
- if this is index-buffer (line 295)
- currently locked? (to prevent recursive locking) (line 293)
- last type of lock used (line 298)
- offset from buffer start to data (line 286)
- buffer stride (line 284)
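As a rough illustration of the packing technique described above – the field names below are hypothetical and are not the actual csRenderBuffer::Props members:

// Bitfields let several small values share one machine word, "scraping off a
// few bytes" as long as each value stays within its declared width.
struct PackedProps
{
  unsigned int usageHint      : 3;  // hint about main usage
  unsigned int componentCount : 5;  // number of components per element
  unsigned int componentType  : 4;  // datatype for each component
  bool         copyData       : 1;  // copy data, or just use the supplied buffer
  bool         deleteBuffer   : 1;  // delete buffer on deallocation
  bool         isIndexBuffer  : 1;  // is this an index buffer
  bool         isLocked       : 1;  // currently locked?
};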
The documentation for this struct was generated from the following file:
- csgfx/renderbuffer.h
http://www.crystalspace3d.org/docs/online/api-1.4/structcsRenderBuffer_1_1Props.html
A Quick Introduction to the Spring Python Framework
Introduction
The original Spring, written for Java, is a framework which at its heart uses an "Inversion of Control" (IoC) container. Inside Spring are several subframeworks for handling aspect-oriented programming (AOP), data access (DAO), transaction management, object-to-relational mapping (ORM), and other features. The Spring Python framework is a port of the original Java framework. Mind you, the developers of Spring Python didn't do a "direct port of existing code" from the Java counterpart, but rather rewrote it using Python idioms. Spring Python includes features for:
- IoC
- AOP
- Database Template
- Database Transactions
- Security
- Remoting
- Plugins/Command Line Tools
As with the Java version, Spring Python includes some samples: Petclinic, Spring Wiki, and Spring Bot. This article will mostly cover the basic use of the Inversion of Control container. The purpose of this article is to get you up and using Spring Python quickly without having to parse the in-depth theory.
Dependencies
To get started, you will need Pyro (Python Remote Objects) from .net, which is "an object-oriented form of RPC". Think of it as a Python version of Remote Method Invocation (RMI). You will also need Amara for XML processing. Amara itself has some dependencies, so if you want to make your life easier I highly recommend downloading and installing Setuptools from .python.org/pypi/setuptools. Setuptools has easy_install, which will take care of the dependencies.
$ sudo easy_install Amara-1.2.0.2-py2.5.egg
Listing 1. The dollar sign represents the Linux command line.
Inversion of Control and Dependency Injection
Inversion of Control is a design pattern that applies the Hollywood principle, "Don't call us, we'll call you"; in other words, the flow of control of a system is inverted. Using an adapted example from Martin Fowler: in a simple program that gets input from the command line, the program holds the control. Here, my program controls the flow.
name = raw_input('What is your name? ')
doSomethingWithName(name)
quest = raw_input('What is your quest? ')
doSomethingWithQuest(quest)
Listing 2. A simple procedural program that accepts some input, and processes the information. Martin Fowler's original example uses Ruby.
In a GUI run program, the control is held by the windowing system. Again paraphrasing Martin Fowler's example, with Python and Tkinter:
from Tkinter import *

def process_name(name):
    print "name is ", name

def process_quest(quest):
    print "quest is ", quest

root = Tk()
name_label = Label(root, text="What is Your Name?").pack()
name = Entry(root)
name.bind("<FocusOut>", (lambda event: process_name(name.get())))
name.pack()
quest_label = Label(root, text="What is Your Quest?").pack()
quest = Entry(root)
quest.bind("<FocusOut>", (lambda event: process_quest(quest.get())))
quest.pack()
root.mainloop()
Listing 3. Instead of the program controlling the flow, now the windowing system takes the control
So, the flow of control was "inverted"; instead of me doing the calling, the windowing system does the calling. Spring is known as an Inversion of Control container, abbreviated as IoC, with some added features, such as aspect-oriented programming.
In the case of Spring, what's being inverted is the process of obtaining an external dependency. This special case of IoC is known as Dependency Injection, or DI. In other words, a dependency is "injected" into a component. In practice, as you're about to see, this is much simpler than it sounds.
The DI pattern has three parts: a dependent, its dependencies, and an injector or container. Figure 1 illustrates this relationship. Objects A, B, and C (the dependencies) are injected into Component D (the dependent) via the container.
Figure 1. A simplified illustration of dependency injection.
A great advantage to using this pattern is being able to feed a program mock objects, which can be replaced later, without having to change the code. This lends itself well to unit testing.
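To make the mock-object advantage concrete, here is a minimal plain-Python sketch of dependency injection; it does not use the Spring Python API, and all class names are invented for illustration:

class SmtpMailer:
    """Real dependency: would talk to an SMTP server (omitted here)."""
    def send(self, to, body):
        raise NotImplementedError("would use smtplib in a real implementation")

class MockMailer:
    """Test double: records messages instead of sending them."""
    def __init__(self):
        self.sent = []
    def send(self, to, body):
        self.sent.append((to, body))

class Greeter:
    """The dependent component: the mailer is injected, not created inside."""
    def __init__(self, mailer):
        self.mailer = mailer
    def greet(self, who):
        self.mailer.send(who, "Hello, %s!" % who)

# In production a container would wire in SmtpMailer; in a unit test we
# inject the mock and assert on what would have been sent.
mailer = MockMailer()
Greeter(mailer).greet("Alice")
assert mailer.sent == [("Alice", "Hello, Alice!")]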
http://www.developer.com/open/article.php/3844801/A-Quick-Introduction-to-the-Spring-Python-Framework.htm
09 August 2012 04:05 [Source: ICIS news]
SINGAPORE (ICIS)--
The cargo fetched a premium of $1-3/tonne (€0.81-2.43/tonne) to
The company recently returned to the spot market to purchase material after an absence of nearly four months, according to traders.
Its last purchase was in early April for a 55,000-tonne full-range naphtha cargo for second-half May delivery from trading house Vitol.
The cargo fetched a premium of $9.00-13.00/tonne to CFR Japan
http://www.icis.com/Articles/2012/08/09/9585317/Taiwans-CPC-buys-30000-tonnes-full-range-naphtha.html
Build Tools Software
WhiteSnake Editor
WhiteSnake is a professional script editor for many scripting languages.
myApps
xApps, an enterprise universal platform (BPM), facilitates non-IT users building IT solutions flexibly with a form designer, workflow designer, report builder, etc. (We moved this project to a new place.)
SAP RFC/BAPI Proxy Builder
"SAP RFC Proxy Builder" is the easy way to generate the C# Proxy classes of SAP RFC/BAPI functions. Last Update, Please visit: weekly downloads
TFS Extra Build
A set of libraries and tools to enhance the build process for Team Foundation Server
Freeform II
Freeform II is a visual GUI editor for Liberty BASIC.
PHP++
PHP++ is a new programming language with a syntax similar to PHP, but it's completely rewritten in C++ and comes with a lot of new features like namespaces and its own, easily extendable object-oriented framework.
fxtool
The Open Source part of the product fxtool. Over time, more functions will move over to being Open Source plug-ins or modules. Visit the site for more information
http://sourceforge.net/directory/development/build/natlanguage:english/os:mswin_server2003/
Let’s start with a classical 1st year Computer Science homework assignment: a fibonacci series that doesn’t start with 0, 1 but that starts with 1, 1. So the series will look like: 1, 1, 2, 3, 5, 8, 13, … every number is the sum of the previous two.
In Java, we could do:
public int fibonacci(int i) {
    if (i < 0) return 0;
    switch(i) {
        case 0:
            return 1;
        case 1:
            return 1;
        default:
            return fibonacci(i-1) + fibonacci(i - 2);
    }
}
All straightforward. If 0 is passed in, it counts as the first element in the series, so 1 should be returned. Note: to add some more spice to the party and make things a little bit more interesting, I added a little bit of logic to return 0 if a negative number is passed in to our fibonacci method.
In Scala to achieve the same behaviour we would do:
def fibonacci(in: Int): Int = {
  in match {
    case n if n < 0 => 0
    case 0 | 1 => 1
    case n => fibonacci(n - 1) + fibonacci(n - 2)
  }
}
Key points:
- The return type of the recursive method fibonacci is an Int. Recursive methods must explicitly specify the return type (see: Odersky – Programming in Scala – Chapter 2).
- It is possible to test for multiple values on the one line using the | notation. I do this to return a 1 for both 0 and 1 on line 4 of the example.
- There is no need for multiple return statements. In Java you must use multiple return statements or multiple break statements.
- Pattern matching is an expression which always returns something.
- In this example, I employ a guard to check for a negative number; if the number is negative, zero is returned.
- In Scala it is also possible to check across different types. It is also possible to use the wildcard _ notation. We didn't use either in the fibonacci, but just to illustrate these features…
def multitypes(in: Any): String = in match {
  case i: Int => "You are an int!"
  case "Alex" => "You must be Alex"
  case s: String => "I don't know who you are but I know you are a String"
  case _ => "I haven't a clue who you are"
}
Pattern matching can be used with Scala Maps to useful effect. Suppose we have a Map to capture who we think should be playing in each position of the Lions backline for the Lions series in Australia. The keys of the map will be the position in the back line and the corresponding value will be the player who we think should be playing there. To represent a Rugby player we use a case class. Now now, you Java Heads, think of the case class as an immutable POJO written in an extremely concise way – they can be mutable too, but for now think immutable.
case class RugbyPlayer(name: String, country: String);

val robKearney = RugbyPlayer("Rob Kearney", "Ireland");
val georgeNorth = RugbyPlayer("George North", "Wales");
val brianODriscol = RugbyPlayer("Brian O'Driscol", "Ireland");
val jonnySexton = RugbyPlayer("Jonny Sexton", "Ireland");
val benYoungs = RugbyPlayer("Ben Youngs", "England");

// build a map
val lionsPlayers = Map("FullBack" -> robKearney, "RightWing" -> georgeNorth,
  "OutsideCentre" -> brianODriscol, "Outhalf" -> jonnySexton, "Scrumhalf" -> benYoungs);

// Note: Unlike Java HashMaps, Scala Maps can return nulls. This is achieved by returning
// an Option which can either be Some or None.

// So, if we ask for something that exists in the Map like below
println(lionsPlayers.get("Outhalf"));
// Outputs: Some(RugbyPlayer(Jonny Sexton,Ireland))

// If we ask for something that is not in the Map yet like below
println(lionsPlayers.get("InsideCentre"));
// Outputs: None
In this example we have players for every position except inside centre – which we can't make up our mind about. Scala Maps are allowed to store nulls as values. Now in our case we don't actually store a null for inside centre. So, instead of null being returned for inside centre (as would happen if we were using a Java HashMap), the type None is returned.

For the other positions in the back line, we have matching values and the type Some is returned, which wraps around the corresponding RugbyPlayer. (Note: both Some and None extend from Option.) We can write a function which pattern matches on the returned value from the Map and returns us something a little more user friendly.
def show(x: Option[RugbyPlayer]) = x match {
  case Some(rugbyPlayerExt) => rugbyPlayerExt.name // If a rugby player is matched return its name
  case None => "Not decided yet ?"
}

println(show(lionsPlayers.get("Outhalf")))      // outputs: Jonny Sexton
println(show(lionsPlayers.get("InsideCentre"))) // Outputs: Not decided yet
This example doesn’t just illustrate pattern matching but another concept known as extraction. The rugby player, when matched, is extracted and assigned to rugbyPlayerExt. We can then return the value of the rugby player’s name by getting it from rugbyPlayerExt. In fact, we can also add a guard and change around some logic. Suppose we had a biased journalist (Stephen Jones) who didn’t want any Irish players in the team. He could implement his own biased function to check for Irish players:
def biasedShow(x: Option[RugbyPlayer]) = x match {
  case Some(rugbyPlayerExt) if rugbyPlayerExt.country == "Ireland" =>
    rugbyPlayerExt.name + ", don't pick him."
  case Some(rugbyPlayerExt) => rugbyPlayerExt.name
  case None => "Not decided yet ?"
}

println(biasedShow(lionsPlayers.get("Outhalf")))   // Outputs Jonny... don't pick him
println(biasedShow(lionsPlayers.get("Scrumhalf"))) // Outputs Ben Youngs
Pattern matching Collections
Scala also provides some powerful pattern matching features for Collections. Here's a trivial example for getting the length of a list.
def length[A](list: List[A]): Int = list match {
  case _ :: tail => 1 + length(tail)
  case Nil => 0
}
And suppose we want to parse arguments from a tuple…
def parseArgument(arg: String, value: Any) = (arg, value) match {
  case ("-l", lang) => setLanguage(lang)
  case ("-o" | "--optim", n: Int) if ((0 < n) && (n <= 3)) => setOptimizationLevel(n)
  case ("-h" | "--help", null) => displayHelp()
  case bad => badArgument(bad)
}
Single Parameter functions
Consider a list of numbers from 1 to 10. The filter method takes a single parameter function that returns true or false. The single parameter function is applied to every element in the list and returns true or false for each element. The elements that return true will be filtered in; the elements that return false will be filtered out of the resultant list.
scala> val myList = List(1,2,3,4,5,6,7,8,9,10)
myList: List[Int] = List(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)

scala> myList.filter(x => x % 2 == 1)
res13: List[Int] = List(1, 3, 5, 7, 9)
Now now now, listen up and remember this. A pattern can be passed to any method that takes a single parameter function. Instead of passing a single parameter function which always returned true or false we could have used a pattern which always returns true or false.
scala> myList.filter {
     |   case i: Int => i % 2 == 1 // odd numbers will return true
     |   case _ => false           // anything else will return false
     | }
res14: List[Int] = List(1, 3, 5, 7, 9)
Use it later?
Scala compiles patterns to a PartialFunction. This means that not only can Scala pattern expressions be passed to other functions, but they can also be stored for later use.
scala> val patternToUseLater: PartialFunction[String, String] = {
     |   case "Dublin" => "Ireland"
     |   case _ => "Unknown"
     | }
What this example is saying is: patternToUseLater is a partial function that takes a string and returns a string. The last statement in a function is returned by default, and because the case expression is a partial function, it will be returned as a partial function and assigned to patternToUseLater, which of course can use it later.
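To round this off (my addition, not part of the original post), the stored pattern can then be applied directly or handed to methods such as collect that accept a partial function:

println(patternToUseLater("Dublin"))   // Ireland
println(patternToUseLater("Paris"))    // Unknown

// collect applies the partial function to each element it is defined for;
// with the catch-all case above it is defined for every String.
println(List("Dublin", "Paris").collect(patternToUseLater))
// List(Ireland, Unknown)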
Finally, Johnny Sexton is a phenomenal Rugby player and it is a shame to hear he is leaving Leinster. Obviously, with Sexton’s busy schedule we can’t be sure if Johnny is reading this blog but if he is, Johnny sorry to see you go we wish you all the best and hopefully will see you back one day in the Blue Jersey.
Reference: Scala pattern matching: A Case for new thinking? from our JCG partner Alex Staveley at the Dublin’s Tech Blog blog.
http://www.javacodegeeks.com/2013/01/scala-pattern-matching-a-case-for-new-thinking.html
What is your favorite language other than C/C++ and why?
From a nostalgic point of view, "Just Basic", because it was the first language I messed with.
Prolog is awesome, in it own, unique way.
Brainf..k, nuff said!...
Devoted my life to programming...
Russian, since I can understand 99% of things told in it
The first 90% of a project takes 90% of the time,
the last 10% takes the other 90% of the time.
I really want to learn russian and klingon.
for computer languages, C++ is my clear favorite, but I also really enjoy go and python.
Code:
namespace life { const bool change = true; }
Fortran (90 and later) by far. The real benefit from Fortran is the ability to do whole array operations with one line:

Code:
a = 5*b + c

where a, b, c could be huge multidimensional arrays. You can also do neat stuff like this:

Code:
WHERE (a /= 0.0)
    a = 1.0/a
ELSEWHERE
    a = HUGE(a)
END WHERE

which makes 2 partial array assignments. Some of the spoils of C++ are somewhat present, such as printing to places without worrying about type (like streaming to stdout) and built-in string operations:

Code:
PRINT *, a_str, a_int, a_float, a_bool
a_str = 'some text'//a_str//'some more text'//another_str
Edit: disregard the formatting of the code block, // is Fortran's concatenate operator
Last edited by Epy; 10-11-2013 at 10:39 AM.
I've been enjoying writing Octave code lately.
Code:
//try
//{
    if (a)
        do { f( b); } while(1);
    else
        do { f(!b); } while(1);
//}
My favorite language of the past was Basic09, but I have not used it in years.
It was the first real programming language I did much programming in.
It ran under Microware OS-9 on the Radio Shack Color Computer.
I often think of writing a translator or emulator for it.
Basic09 was Basic crossed with Pascal.
Tim S.
"Programming today is a race between software engineers striving to build bigger and better idiot-proof programs, and the universe trying to produce bigger and better idiots. So far, the Universe is winning." Rick Cook
C# has been my love for the past few years... despite not being able to separate declarations from definitions.
Last edited by Mario F.; 10-12-2013 at 07:03 PM..
VB is my fave language, because I can create Windows apps quickly and don't require extensive language-oriented knowledge.
EDIT: I did learn C a while back, and enjoyed it immensely, but had very little use for it in my life(since I can't do Windows programming), so switched to VB.
Last edited by cfanatic; 10-12-2013 at 08:25 PM.
Python Scripting Language is my other favourite one
http://cboard.cprogramming.com/general-discussions/159654-what-your-favorite-language-other-than-c-cplusplus-why.html
Download Hacking Tools at 'Tools Yard' – archive by The Hacker News for hacking tools, networking tools, ethical hacking, vulnerability assessment, penetration testing, and password hacking. Recent entries:

- The Social-Engineer Toolkit (SET) v4.7 released.
- Password cracking wordlist with millions of words – a Crackstation project.
- Keylogger Lite v1.0 download – Phrozen Keylogger Lite is finally available, developed by the DarkComet RAT developer. Phrozen Keylogger Lite is a powerful and user friendly keylogger especially created for Microsoft Windows systems. It is compatible with all currently supported versions of Windows, which effectively means Windows XP to the recently released Windows 8. […]
- 2013.0 RC1.1 released – Pentoo is a security-focused live CD based on Gentoo. It's basically a Gentoo install with lots of customized tools, a customized kernel, and much more. Pentoo 2013.0 RC1.1 features: changes saving, CUDA/OpenCL enhanced cracking software (John the Ripper, Hashcat suite of tools), kernel 3.7.5 and all needed patches for injection, XFCE 4.10, all the latest tools and a responsive development team.
- 2.9.4.1 – network intrusion detection system.
- Recon-ng: web reconnaissance framework for penetration testers – Recon-ng is a full-featured web reconnaissance framework written in Python. It has a look and feel similar to the Metasploit Framework, reducing the learning curve for leveraging the framework. Complete with independent modules, database interaction, built-in convenience functions, interactive help, and command completion, Recon-ng provides a powerful environment in which open source […]
- Forensic tool – find hidden processes and ports.
- v2.0: web application exploitation tool.
- Cracker tool Hashkill version 0.3.1.
- Weevely: stealth PHP web shell with telnet style console – Weevely is a stealth PHP web shell that provides a telnet-like console.
- HTTP enumeration tool – version 0.2 adds scanning.
- Web application fingerprinting.
- Latest version with new exploits released – a bash script to launch a soft AP, configurable with a wide variety of attack options. Includes a number of index.html and server php scripts for sniffing/phishing. Can act as a multi-client captive portal using php and iptables. Launches classic exploits such as evil-PDF. De-auth with aireplay, airdrop-ng or MDK3. Changes and new features: "hotspot_3" is a simple phishing web page, used […]
- PwnPi v2.0 – a pen test drop box distro for the Raspberry Pi. PwnPi is a Linux-based penetration testing dropbox distribution for the Raspberry Pi. It currently has 114 network security tools pre-installed to aid the penetration tester. It is built on the Debian squeeze image from the Raspberry Pi Foundation's website and uses Xfce as the window manager. Login username and password is root:root.
- v0.4.5 – man-in-the-middle attacks against SSL/TLS.
- NetSleuth: open source network forensics and analysis tools – NetSleuth identifies and fingerprints network devices by silent network monitoring or by processing data from PCAP files. NetSleuth is an open-source network forensics and analysis tool, designed for triage in incident response situations. It can identify and fingerprint network hosts and devices from pcap files captured from Ethernet or WiFi data (from tools like Kismet).
- v2.2.1 – aggressive multithreaded DNS digger. TXDNS is a Win32 aggressive multithreaded DNS digger, capable of placing, on the wire, thousands of DNS queries per minute. TXDNS's main goal is to expose a domain namespace through a number of techniques: typos (missed, double and transposed keystrokes), TLD/ccSLD rotation, dictionary attack, and full brute-force attack (alpha, numeric or alphanumeric charsets).
- PySQLi – Python SQL injection framework. PySQLi is a Python framework designed to exploit complex SQL injection vulnerabilities. It provides dedicated bricks that can be used to build advanced exploits or easily extended/improved to fit the case. PySQLi is meant to be easily modified and extended through derived classes and to be able to inject in various ways such as command line, custom network protocols and even […]
- ExploitShield Browser Edition – forget about browser vulnerabilities. ExploitShield Browser Edition protects against all known and unknown 0-day vulnerability exploits, protecting users where traditional antivirus and security products fail. It consists of an innovative patent-pending vulnerability-agnostic application shielding technology that prevents malicious vulnerability exploits from compromising computers. Includes "shields" for all major […]
- Updated – now can identify 673 Joomla vulnerabilities.
- BeEF 0.4.3.8 – Browser Exploitation Framework. The Browser Exploitation Framework (BeEF) is a powerful professional security tool. It is a penetration testing tool that focuses on the web browser. BeEF is pioneering techniques that provide the experienced penetration tester with practical client-side attack vectors. Unlike other security frameworks, BeEF focuses on leveraging browser vulnerabilities to assess the security posture of a […]
- 0.5.2 – automated spoofing or cloning of Bluetooth devices.
- Honey – creates fake APs using all encryption. This is a script an attacker can use to create fake APs using all encryption types and monitor them with airodump. It automates the setup process: it creates five monitor mode interfaces, four used as APs and the fifth for airodump-ng. To make things easier, rather than having five windows, all this is done in a screen session which allows you to switch between screens to see what is going on. […]
http://feeds.feedburner.com/PenetrationTestingTools
A new article about using Mimer Provider Manager in web development with ASP.NET has been published.
Regards,
Fredrik
--=20
********************************************
Fredrik Ålund
Mimer Information Technology AB
+46 (0)18 780 92 00
Fredrik.Al
_______________________________________________________________________
A new version of Mpm is now available. This beta version contains
several enhancements, mainly concerning ease-of-use and administration.
Among other things, the framework has been refactored in order to
further reflect the ADO.NET Provider architecture. A noticeable change
is that the Mpm specific parts have been moved to the
Mimer.Mpm.Data.Extensions namespace. This might break your existing
application. If you, for example, have used
Mimer.Mpm.Data.MpmProviderInfo you have to change that to
Mimer.Mpm.Data.Extensions.MpmInfo.
A new article about using Mimer Provider Manager has been published on C# Corner. You can read the article at
/Fredrik
--
********************************************
Fredrik Ålund
Mimer Information Technology AB
+46 (0)18 780 92 28
Fredrik.Alund@...
********************************************
Welcome to the mailing list for users of Mimer Provider Manager (Mpm).
This list is used to discuss matters concerning the usage of Mpm and to
ask questions to other users.
Regards,
Fredrik, Mimer Provider Manager Project Manager
http://sourceforge.net/p/mimerpm/mailman/mimerpm-user/
paul@... wrote:
>
> Anyway, if you take a look at my Web site () you'll
> find the almost-latest installment of my XMLForms package (the successor to my
> XForms package). It contains form descriptions in XML and some interesting
> applications of hidden fields in order to provide client-side persistence
> without cookies. Whilst I'm sure your time is precious, you may want to
> investigate some of the concepts - it's really quite interesting, honest! ;-)
yes, it works without problems (I uncommented the context stuff in
__init__.py
to use the examples and in products.py and sites.py I had to use
from WebKit.Examples.ExamplePage import ExamplePage
Strange enough in another Webware version it worked
whitout this changes. But my reference right now
is Webware-0.5.1rc3.tar.gz and Python-2.1 (Webware
needs some small tweeks to be completely 2.1 compliant.
regex -> re.. changes mostly).
XMLForms incorporates a lot of interesting ideas.
It is not really fast though but it's the
first time that PyXML/PYthon and
some application works without any problems
for me :-) I have the impression that
a lof of the data handling can be done with Cans,
but the more approaches out there the better.
I'm still looking for my favorite way to write
Webware applications (I havn't found it yet :-))
--
Tom Schwaller
tschwaller@...
> -----Original Message-----
> From: webware-discuss-admin@...
> [mailto:webware-discuss-admin@...]On Behalf Of Tavis
> Rudd
> Sent: 2. maí 2001 16:48
> To: Sasa Zivkov; webware-discuss@...
> Subject: Re: [Webware-discuss] Er, parsing?
>
>
>
> > It seems you can:
> > >>> import re
> > >>> > >>> re.findall("""\$\(\w* \w*="[^"]*"\)""", a)
> >
> > ['$(foo bar=")")']
> >
>
> yes, but what about:
> """ $(functionName( $anotherFunc(1234)))"""
> ?
I just answered Chuck's question :-)
Did not carefully read the whole discussion and did not know about all
possible cases.
Any way, nested parenthesis can not be expressed with a regular expression
if my memory works well.
- Sasa
> It seems you can:
> >>> import re
> >>> >>> re.findall("""\$\(\w* \w*="[^"]*"\)""", a)
>
> ['$(foo bar=")")']
>
yes, but what about:
""" $(functionName( $anotherFunc(1234)))"""
?
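(An editorial aside, not part of the archived thread: the usual alternative to a regex here is a small scanner that tracks nesting depth and quoted sections by hand. A rough sketch:)

def find_close(text, start):
    """Return the index of the ')' matching the '(' at `start`,
    skipping over double-quoted sections such as $(foo bar=")")."""
    depth = 0
    in_string = False
    for i in range(start, len(text)):
        ch = text[i]
        if in_string:
            if ch == '"':
                in_string = False
        elif ch == '"':
            in_string = True
        elif ch == '(':
            depth += 1
        elif ch == ')':
            depth -= 1
            if depth == 0:
                return i
    return -1

s = '$(foo bar=")")'
print(find_close(s, 1))   # 13: the final ')' closes the '(' at index 1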
At 08:10 AM 5/2/2001 -0700, Mike Orr wrote:
>What's the difference between webware-discuss and webware-devel in terms
>of what to post where?
[snip]
It's a good question and I think the answer is still being formed as we go.
One thing I notice is that detailed discussions of the implementation of
some feature (such as templates) usually result in many messages. On those
days that we have a lot of these messages, between 1 and 3 people drop off
of webware-discuss (I get notified when that happens).
In general, I think of "webware-discuss" as ordinary users and
"webware-devel" as people who are "lifting up the hood" and tinkering with
the engine. I think ordinary users are concerned with installation, usage,
future directions, posting feedback on API or behavior, etc.
I hope that helps in some fashion.
-Chuck
What's the difference between webware-discuss and webware-devel in terms
of what to post where? It seems like since Webware is a development
platform, most of the discussion is about development anyway. And when
talking about generic modules which may or may not be included in
Webware someday, it becomes difficult to decide whether you're
"using Webware for development" or "developing Webware", because they
both merge into each other.
The only thing I can tell is that basic installation issues and discussion
about third-party applications which will never be part of Webware
belong in webware-discuss.
--
-Mike (Iron) Orr, iron@... (if mail problems: mso@...) English * Esperanto * Russkiy * Deutsch * Espan~ol
It seems you can:
>>> import re
>>>>> re.findall("""\$\(\w* \w*="[^"]*"\)""", a)
['$(foo bar=")")']
>>>
- Sasa
> -----Original Message-----
> From: webware-discuss-admin@...
> [mailto:webware-discuss-admin@...]On Behalf Of Chuck
> Esterbrook
> Sent: 30. apríl 2001 15:38
> To: webware-discuss@...
> Subject: [Webware-discuss] Er, parsing?
>
>
> How do you parse something like:
>
> $(foo bar=")")
>
> with a regex? My impression is that you can't.
>
>
> _______________________________________________
> Webware-discuss mailing list
> Webware-discuss@...
>
>
Yesterday I installed 0.5.1 rc#3 and migrated my app to it. A few problems I
could workaround have disappeared. Fine!
But I saw that I still have problems with the OneShot.cgi on NT4/IIS4
(OneShot.exe on IIS4, made with py2exe 0.2.6). Until now I always have
killed the AppServer and restarted it, if I did changes in underlying
modules of my app. But to be honest, I long for OneShot.cgi, because it
would make life a bit easier.
I tried it on 2 different machines. The problem seems to lie within
PlugIn.py (around line 71):
# Make a directory for it in Cache/
cacheDir = os.path.join(os.path.dirname(__file__),'Cache', self._name)
if not os.path.exists(cacheDir):
os.mkdir(cacheDir)
__file__ has the value "<WebKit\PlugIn from archive>"
so cacheDir becomes "<WebKit\Cache\COMKit" which raises an exception, which
is not valid.
Or did I miss something?
Best regards
Franz Geiger
> -----Original Message-----
> From: webware-discuss-admin@...
> [mailto:webware-discuss-admin@...] On Behalf Of Chuck
> Esterbrook
> Sent: Wednesday, 02 May 2001 01:22
> To: webware-discuss@...;
> webware-devel@...
> Subject: [Webware-discuss] Cut final 0.5.1?
>
>
> Any objections to cutting the final release of 0.5.1 sometime on
> Wednesday
> and then announcing it to the world?
>
> The only outstanding recent problems I'm aware of:
>
> - OS/2 has URL path issues
>
> - Someone had issues setting multiple cookies
>
>
> I believe in both cases, the ball has been in the user's court to
> try out a
> patch or send back additional info.
>
>
> -Chuck
>
>
> _______________________________________________
> Webware-discuss mailing list
> Webware-discuss@...
>
>
At 08:32 PM 4/30/2001 -0500, Ian Bicking wrote:
>Anyway, I have two classes, one which is the container for another:
>
>Portfolio contains Pieces, (as in .pieces()), and Pieces have a
>reference to portfolio (as in .portfolio()).
>
>If I move a piece between portfolios, the .pieces() will be
>inaccurate, since it seems to cache these backreferences as
>self._pieces.
Yeah, I never dealt with moving objects between 2 lists before. e.g., not
in the test suite.
Seems like we need a removeFromPieces()
>Are these objects unique? Like, if I fetch the same object from the
>store twice, will they be equal (i.e., portfolio1 is portfolio2)?
Yes. This is called "uniquing" and MK does it.
>If so, should the generated code be such that:
>
>class Portfolio:
> def _removePiece(self, piece):
> """Semi-private because it must be called along with
> setPortfolio or self._pieces will be incorrect"""
> if self._pieces is not None:
> self._pieces.remove(piece)
>
>class Piece:
> def setPortfolio(self, portfolio):
> if self._portfolio is not None:
> self._portfolio._removePiece(self)
> self._portfolio = portfolio
>
>Except for all the asserts, and (I guess?) something where you can use
>objectRefs instead of the actual objects...? Maybe there needs to be
>a weak-fetch from the store, so that Piece can fetch its actual
>portfolio if it's been instantiated (in which case it may have an
>invalid cache), but if it hasn't then it doesn't matter.
I'd have to think about it some more. Geoff and I, in private discussions,
reworked the design for MKs list support. We need to enhance that with
moving an object between lists and then post it as a WEP for review.
My feeling is it needs to be tackled as a mini-project so that we nail down
all the semantics simultaneously and back them up with regression tests.
>And on a slightly related note -- if when I edit a Piece I run this
>method:
>
> def changePiece(self):
> piece = self._piece
> piece.setTitle(self.field('title'))
> piece.setName(self.field('name'))
> piece.setDescription(self.field('description'))
> piece.setDisplayOrder(self.field('displayOrder'))
> piece.setPortfolio(self.store().fetchObject('Portfolio',
> self.field('portfolio')))
> self.store().saveChanges()
> self.write('Changes saved.<p>\n')
>
>And I get this error from MySQLdb:
> Warning: Rows matched: 1 Changed: 1 Warnings: 1
>
>I don't know why MySQLdb even bothers giving an error message this
>lame (well Warning/error message)... but any idea what the warning is
>about, or how I can fix it?
I'm not sure. I spent several hours earlier trying to figure out how to get
MySQL to give me details for a warning. I read a bunch of docs, joined the
mailing list, posted my question, etc. I could never find out.
BTW You could rework some of your code like so:
for name in 'title name description displayOrder'.split():
piece.setAttr(name, self.field(name))
Make sure you are using Webware CVS for this. Obviously this only works for
"simple" attributes.
setAttr() will actually call your setFoo(), setBar(), etc.
(Previous setAttr() was called _set())
-Chuck
http://sourceforge.net/p/webware/mailman/webware-discuss/?viewmonth=200105&viewday=2
Exploring the API
If you're familiar with Google gadgets or you've created widgets for Google's OpenSocial platform, you'll feel right at home with the Google+ Hangouts API. Every Google+ Hangout extension or application begins with a gadget XML file.
The most basic gadget file is as follows:
<?xml version="1.0" encoding="UTF-8" ?> <Module> <ModulePrefs title="Your App Name"> <Require feature="rpc"/> <Require feature="views"/> </ModulePrefs> <Content type="html"> <![CDATA[ <script src="//talkgadget.google.com/hangouts/_/api/hangout.js?v=1.2"></script> <!-- Application HTML --> ]]> </Content> </Module>
A gadget file consists of two parts:
- The ModulePrefs section (short for module preferences) specifies metadata about the application, such as its title and the features it will use. This section can also include other metadata, such as the author, a description, and a thumbnail.
- The rpc feature enables your application to make remote procedure calls to the Google+ APIs. This is the only required feature.
- The views feature (optional) allows you to pass information directly to the app at startup.
- The Content section contains the HTML code that drives your application.
Basic Blackjack Details
Blackjack is a multiplayer casino card game where the player's goal is to have the numeric values of the cards in his hand total as close to 21 as possible—without going over that number (busting). After all bets are placed, the dealer deals two cards, one face down and one face up, to himself and each of the players. Aces are valued at 1 or 11, and face cards are valued at 10. Each player has the opportunity to hit (add cards to his hand) multiple times, thereby improving his chances of winning the hand, or choosing to stand (stay with the current cards). After all players are satisfied with their hands or can no longer play a hand, in the event that their hand total exceeds 21, the dealer's face-down card is revealed, and each player's hand is evaluated. The hand that is closest to 21 points wins. (Each player competes with only the dealer—not with the other players.)
Several other blackjack elements exist in a casino game, such as double-down, splits, and so on, but they aren't essential to the game we're building today, so I'll leave them as an exercise for you to explore.
Most blackjack games have general strategy suggestions for when players should hit or stand. A dealer might have additional restrictions specifying when he should stand on a particular hand. The code that evaluates player hands and gives suggestions on whether they should hit, stay, or double-down lives in the Evaluator.js file.
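As a concrete illustration of that evaluation logic, a hand-scoring helper might look like the sketch below; this is my own simplified version, not the project's actual Evaluator.js code:

// Score a blackjack hand: face cards count 10, each ace counts 11
// unless that would bust, in which case it drops back to 1.
function handValue(cards) {
  var total = 0;
  var aces = 0;
  cards.forEach(function (rank) {
    if (rank === 'A') {
      aces += 1;
      total += 11;
    } else if (rank === 'K' || rank === 'Q' || rank === 'J') {
      total += 10;
    } else {
      total += parseInt(rank, 10);
    }
  });
  while (total > 21 && aces > 0) {
    total -= 10;
    aces -= 1;
  }
  return total;
}

console.log(handValue(['A', 'K']));      // 21
console.log(handValue(['A', '9', 'A'])); // 21
console.log(handValue(['K', 'Q', '5'])); // 25 (bust)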
Cards for Use in the Game
Much of the core code to draw and interact with the cards comes from the War card game I created with Amino in a previous article. I had to tweak the presentation layer to use HTML5 Canvas instead of Amino, but that wasn't very difficult.
Managing Game State
Google+ Hangouts have a shared state in the gapi.hangout.data namespace that all participants can access and modify. The state object closely resembles a map object. The most important functions for our needs are setValue, getValue, and clearValue, which respectively set, get, and clear a key-value pair from the state object; and getState, which retrieves the complete object. In addition to the state, which persists all changes, you can send messages to all participants.
The code below demonstrates the creation and saving of a card deck to the hangout state object:
BlackJackGame.prototype.resetDeck = function(numDecks) {
  if (numDecks == undefined)
    numDecks = 2;
  groupDeck = new Deck(numDecks);
  gapi.hangout.data.setValue('numDecks', ''+numDecks);
  gapi.hangout.data.setValue('deck', groupDeck.toString());
}
An application that will be sending few updates can function just fine with the functions we've discussed previously. One thing that makes them inadequate, however, is if you have an application like ours that updates the UI whenever the state changes. Imagine the very likely case of a player choosing to hit (add another card to his hand). Several actions take place:
- A card is removed from the deck.
- The card is added to the player's hand.
- If the player busts (his card total exceeds 21), doubles-down, or stays, play transitions to the next player.
In this case, it's important that all actions take place before updating the game state and sending to other clients. Otherwise, depending on latency, you could end up in a state where play transitions to the next player and the deck has one less card, but the dealt card doesn't appear in the player's hand. When multiple changes need to happen at approximately the same time, we should use submitDelta rather than a series of setValue commands:
BlackJackGame.prototype.newRound = function() {
  var updates = {};
  updates['gameState'] = 'DEAL';
  this.players = this.loadPlayerData();
  _.each(this.players, function(player) {
    player.clearCards();
    updates[player.id] = player.toString();
  });
  this.dealer.clearCards();
  updates['dealer'] = this.dealer.toString();
  this.evaluator.setDealer(this.dealer.getCurrentHand());
  game.updateGameBoard();
  gapi.hangout.data.submitDelta(updates);
}
In the snippet above, if the hangout is full and all participants are playing the game, we need to execute twelve updates to the hangout state. It's much more efficient to batch all the changes into one change set to be evaluated all at once. submitDelta can take a second parameter, an array of values to remove from the hangout state.
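For instance, a single call can both update keys and drop keys that are no longer needed; the key names here are illustrative rather than taken from the game:

var updates = { gameState: 'DEAL', round: '3' };
var staleKeys = ['lastRoundResult'];            // hypothetical key to clear
gapi.hangout.data.submitDelta(updates, staleKeys);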
You can explore the API further by navigating to the Google+ Hangouts API developers page.
Responding to Events
Several events are fired when the state is updated or a message is sent, onStateChanged and onMessageReceived. We can add or remove an event handler by using the corresponding function:
gapi.hangout.data.onStateChanged.add(window.game.stateUpdated);
gapi.hangout.data.onStateChanged.remove(window.game.stateUpdated);
The event that's fired contains the current state object, the keys that were added and removed, and the state metadata. In the following snippet, we make use of the addedKeys property to determine whether there was a gameState transition and how to respond appropriately:
BlackJackGame.prototype.stateUpdated = function(evt) {
  var gameHost = game.getGameHost();
  var currentPlayer = gapi.hangout.getLocalParticipantId();
  if (game.isGameHost()) { // Only run on host
    // Manage game state and evaluators
    game.gameState = evt.state.gameState;
    if (game.gameState != undefined) {
      if (game.gameState.substr(0,5) == "DPLAY") {
        game.gameState = 'DPLAY';
        game.playDealerHand();
      } else if (game.gameState == 'EVAL') {
        // Evaluate game hands and do payouts
        var hand = game.loadState('dealer')[0];
        var handStatus = game.evaluator.evaluate(hand);
        game.evaluateHands(handStatus);
      } else {
        game.players = game.loadPlayerData();
        game.updateGameBoard();
      }
    }
  } else {
    game.players = game.loadPlayerData();
    game.updateGameBoard();
  }
}
onApiReady Event Handler
The onApiReady event handler is our main entry point into the application. It creates our BlackJackGame object and its associated properties, and then attaches our events to their associated handlers. The full onApiReady handler is listed below:
gapi.hangout.onApiReady.add(function(event) {
  console.log('gapi loaded');
  if(event.isApiReady) {
    window.game = new BlackJackGame();
    window.game.players = window.game.loadPlayerData();
    window.game.deck = new Deck(1, window.game.ctx);
    // check for saved deck
    if (gapi.hangout.data.getValue('deck') == undefined) {
      game.resetDeck(2);
    }
    gapi.hangout.data.onStateChanged.add(window.game.stateUpdated);
    gapi.hangout.onAppVisible.add(window.game.participantEnabledApp);
    gapi.hangout.onParticipantsEnabled.add(window.game.participantEnabledApp);
    gapi.hangout.onEnabledParticipantsChanged.add(window.game.participantsChanged);
  }
});
Adding Overlays
In addition to setting a video stream on or off, the Hangout API allows you to control your experience further by overlaying images on an individual video stream. You can see this feature at work if you've played with the Google Effects extension that lets you select crazy accessories to attach to your face.
In Blackjack, we won't use dynamic facial-tracking, as in the Google Effects extension; instead, we'll use an image with a static position. A small gray dot on a player's video stream will indicate that it's his or her turn.
First, we need to create an ImageResource by passing a publicly accessible URL to the createImageResource function in the gapi.hangout.av.effects namespace. We then call the createOverlay function on that object to instantiate the overlay that we'll superimpose on the video stream. You either pass a map containing the specifications of the new overlay, or apply them one at a time using set* functions.
BlackJackGame.prototype.createTurnIndicator = function () {
  var url = "";
  var temp = gapi.hangout.av.effects.createImageResource(url);
  this.overlay = temp.createOverlay({
    position: {x: -0.35, y: 0.25},
    scale: {
      magnitude: 0.25,
      reference: gapi.hangout.av.effects.ScaleReference.WIDTH
    }});
};
In the snippet above, we first set the image resource to scale itself to 25% of the stream size, based on the width. Next, we set the dot to appear in the lower-right corner of the video stream. The position values for x and y range from -1 to 1 with (0,0) at the center of the stream, positive y toward the bottom of the stream, and positive x toward the viewer's left.
Drawing the Gameboard
Developer Advocate Johnathan Beri notes that the minimum dimensions for a hangout app are 940 × 465, while an extension has minimum dimensions of 300 × 465. The best course of action is to assume that the user may resize the window at will and make the design responsive. Doing that is outside the scope of this article, so I'll leave it as an exercise for you: use responsive design to support all window sizes and to accommodate users who are viewing the hangout on a large screen.
Deploying the App
To make your application available to the public, you'll need to do the following:
- Create and verify a Chrome Web Store account.
- Add application icons in various sizes.
- Add URLs for your application's Privacy Policy, Support, and Terms of Service pages.
- Create an OAuth 2.0 client ID for the application.
For more details, read more about publishing Hangout apps and extensions, and be sure to check out the source code for the project created in this article.
http://www.informit.com/articles/article.aspx?p=1963536
PeriodStats
I am currently using Python 3.6, and things went smoothly with most of the Analyzer classes. However, there is one that is incompatible with Python 3.x.
I am not quite sure how to deal with this.
cerebro.addanalyzer(btanalyzers.PeriodStats, _name='PeriodStats')
strategies = cerebro.run()
strategy = strategies[0]
print('periodstats:', strategy.analyzers.PeriodStats.get_analysis())
In periodstats.py, it shows: NameError: name 'itervalues' is not defined
def stop(self):
    trets = self._tr.get_analysis()  # dict  key = date, value = ret
    pos = nul = neg = 0
    trets = list(itervalues(trets))
I tried to change itervalues to values, but in vain.
def stop(self):
    trets = self._tr.get_analysis()  # dict  key = date, value = ret
    pos = nul = neg = 0
    trets = list(values(trets))
It returns, NameError: name 'values' is not defined.
Please advise.
- backtrader administrators
Check Release 1.9.60.122, which addresses this issue. Community - Release 1.9.60.122
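For anyone pinned to an older release, a local workaround is to use the Python 3 dict API directly in periodstats.py; a sketch, assuming get_analysis() returns an ordinary dict-like object:

def stop(self):
    trets = self._tr.get_analysis()  # dict  key = date, value = ret
    pos = nul = neg = 0
    # Python 3: dict.values() replaces the removed itervalues() helper
    trets = list(trets.values())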
https://community.backtrader.com/topic/789/periodstats/1
Example: drawing "Hello World" on the M5Paper's screen.
#include <M5EPD.h>

M5EPD_Canvas canvas(&M5.EPD);

void setup() {
    M5.begin();
    M5.EPD.SetRotation(90);
    M5.EPD.Clear(true);
    M5.RTC.begin();
    canvas.createCanvas(540, 960);
    canvas.setTextSize(3);
    canvas.drawString("Hello World", 45, 350);
    canvas.pushCanvas(0, 0, UPDATE_MODE_DU4);
}

void loop() {
}
Github
When using FactoryTest to load special characters (such as Chinese or Japanese), put the font file on the TF card and name it font.ttf.
ttf file download address
Arduino API
Tools
https://docs.m5stack.com/en/quick_start/m5paper/arduino
In this blog, we will learn edge detection in OpenCV Python using Canny's edge detection algorithm. Edge detection has great importance in computer vision.
Edge detection deals with the contours of an image, usually denoted as the outline of a particular object.
There are a lot of edge detection algorithms, like Sobel, Laplacian, and Canny.
The Canny edge detection algorithm is the most commonly used because of its ease of use as well as its degree of accuracy.
Imports for Canny Edge Detection OpenCV Algorithm
import cv2 as cv
import numpy as np
from matplotlib import pyplot as plt
Composition of Canny Edge Detection
The Canny edge detection OpenCV algorithm is composed of 5 steps:
- Noise Reduction – if noise is not removed, it may cause incorrect edges to be detected.
- Gradient Calculation
- Non-maximum Suppressions
- Double threshold
- Edge Tracking by Hysteresis
Canny Edge Detection Code
First of all, the image is loaded into a variable using the OpenCV function cv.imread(). The image is loaded in grayscale, as edges can be easily identified in a grayscale image.
The canny() function takes 3 parameters from the user: first the image, then the first and second threshold values.
The edge detection relies on the threshold values, so suitable values are found by trying different threshold combinations.
After Canny edge detection is over, we store the titles and images in separate arrays and display them using the plt.subplot() function present in the matplotlib library.
def canny():
    img = cv.imread("./img/image.jpg", 0)
    canny = cv.Canny(img, 150, 200)

    title = ["Original Image", "Canny"]
    images = [img, canny]

    for i in range(len(images)):
        plt.subplot(2, 2, i+1), plt.imshow(images[i], 'gray')
        plt.title(title[i])
        plt.xticks([]), plt.yticks([])

    plt.show()
Edge Detected Image
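Because the result depends heavily on the two threshold values, it can help to render a couple of candidate pairs side by side; a small sketch with arbitrarily chosen thresholds:

import cv2 as cv
from matplotlib import pyplot as plt

img = cv.imread("./img/image.jpg", 0)   # grayscale
loose = cv.Canny(img, 50, 100)          # keeps more (noisier) edges
strict = cv.Canny(img, 150, 200)        # keeps only strong edges

for i, (title, image) in enumerate([("Loose", loose), ("Strict", strict)]):
    plt.subplot(1, 2, i + 1), plt.imshow(image, 'gray')
    plt.title(title), plt.xticks([]), plt.yticks([])
plt.show()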
https://hackthedeveloper.com/canny-edge-detection-opencv-python/
Tag:Recipient
Should the method be declared on T or *T? – David. In Go, for any type T, there is a type *T, which is the result […]
Rust programming video tutorial (Advanced) – 017_ 1 messaging 1Video address Headline address:…Station B address: Source address GitHub address:… Explanation content 1. One of the main tools for implementing message passing concurrency in rust is the channel. The channel consists of two parts, one is the sender and the other is the receiver. The sender is used to send messages and the receiver is […]
Understanding distributed consensus algorithmsStarting with rocketmq supporting automatic failover Before rocketmq version 4.5, rocketmq only had a master / slave deployment mode. There was one master in a group of brokers and there were zero to multiple slaves. The slave synchronized the master’s data through synchronous replication or asynchronous replication. Master / slave deployment mode provides certain high […]
[go language introduction series] (VII) how to use go?[go language introduction series] previous articles: [go language introduction series] (IV) use of map [go language introduction series] (V) the use of pointers and structures [go language introduction series] (VI) further exploration of functions This paper introduces the use of go language method. 1. Declaration If you have used an object-oriented language, such as Java, […]
Simple application of redisSpring boot integrates redis //Import dependency <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-data-redis</artifactId> </dependency> //File configuration spring: redis: host: 127.0.0.1 port: 6379 password: root //Write your own redistemplate package com.shuaikb.config; import com.fasterxml.jackson.annotation.JsonAutoDetect; import com.fasterxml.jackson.annotation.PropertyAccessor; import com.fasterxml.jackson.databind.ObjectMapper; import com.fasterxml.jackson.databind.jsontype.impl.LaissezFaireSubTypeValidator; import org.springframework.context.annotation.Bean; import org.springframework.context.annotation.Configuration; import org.springframework.data.redis.connection.RedisConnectionFactory; import org.springframework.data.redis.core.RedisTemplate; import org.springframework.data.redis.serializer.Jackson2JsonRedisSerializer; import org.springframework.data.redis.serializer.StringRedisSerializer; @Configuration public class RedisConfig { @Bean public RedisTemplate<String, Object> […]
Go quick start 04 | functions: what are the differences between functions and methods?Functions and methods are the first step towards code reuse and multi person collaborative development. Through the function, the development task can be divided into small units, which can be reused by other units, so as to improve the development efficiency and reduce the code coincidence. In addition, the ready-made functions have been fully tested […]
Things about the golang method set – The value of a type can also call the methods of the pointer receiver! Anyone who has learned a little about methods in golang has met the concept of a method set. The method set is defined as follows: Rules: The method set of a value of type contains […]
Kick you into the go language gate! Beginners must read, 10000 words long text, it is recommended to collect!@[toc] Hello, I’m clean! Part I: kick you into the door of go language! I. the foundation is not firm and the earth is shaking 1. First example: Hello World package main import “fmt” func main(){ fmt.Println(“Hello World”) } First line package mainIt represents which package the current file belongs to. Package is the keyword […]
Practical application of responsibility chain model – The chain of responsibility pattern creates a chain of receiver objects for a request. This pattern decouples the sender and receiver of a request based on the type of request. It belongs to the behavioral patterns. In this pattern, each recipient usually contains a reference to another recipient. If an object cannot process the […]
Design pattern learning 16 (Java implementation) — command modeWrite in front Take notes on learning design patterns Improve the flexible use of design patterns Learning address…… Reference articles… Project source code 18. Command mode 18.1 definition and characteristics of command mode The command mode is defined as follows:Encapsulating a request as an object separates the responsibility of issuing the request […]
Go quick start 05 struct and interface: what functions do structs and interfaces implement?structural morphology Structure definition A structure is an aggregation type, which can contain any type of values. These values are members of the structure defined by us, also known as fields. In go language, to customize a structure, you need to use the keyword combination of type and struct.A structure type named person is defined […]
Method value and method expression usage of goThe explanation of this part in the manual is not very detailed and clear. After several examples, I summarize the usage of this part. Method expression: to put it simply, it is actually the assignment of a method object to a variable. There are two ways to use it: 1) Method value: implicitly call the […]
|
https://developpaper.com/tag/recipient/
|
CC-MAIN-2021-43
|
en
|
refinedweb
|
A Short Overview of Typed Template Haskell
Welcome to our second post on Template Haskell!
Today we will take a quick look at typed Template Haskell. This article assumes some familiarity with Template Haskell (TH) already. If this is your first journey with TH, then check out our introduction to Template Haskell first.
For this article, we will be using GHC 8.10.4.
Why typed TH?
Typed TH, as the name implies, allows us to provide stronger, static guarantees about the correctness of the meta-program. With untyped TH, the generated expressions would be type-checked when they are spliced, i.e., during their usage, rather than their definition. With typed TH, those expressions are now type-checked at their definition site.
Like with anything else in computer science, there are advantages and disadvantages in using typed TH in comparison to ordinary TH, some of which are listed below.
Advantages:
- Greater type safety guarantees.
- Errors won’t be delayed until use; instead, they are reported on their definition.
Disadvantages:
- Must be used with the [|| ||] quoter.
- This means that we can't easily use Exp constructors directly.
- In comparison, for the untyped quoter, we could either use [| |] or directly call the Exp constructors.
- Alternatively, you may use unsafeCodeCoerce to work around this, if you're willing to use unsafe functions.
- Only supports a typed version of Exp (no typed version for Dec, Pat, etc.).
- Our previous TH tutorial could not have been written purely with typed TH, as it heavily uses Dec, for example.
- Requires that the type being used is known in advance, which may limit the kinds of TH programs you can make.
Before we begin, make sure you have the
template-haskell package installed, as well as the
TemplateHaskell language extension enabled.
>>> :set -XTemplateHaskell >>> import Language.Haskell.TH
Typed expressions
In our previous tutorial, we learned that we could use the
[e|...|] quoter (which is the same as
[|...|]) to create expressions of type
Q Exp. With typed TH, we will use
[e||...||] (which is the same as
[||...||]) to create expressions of type
Q (TExp a).
What is
TExp a, you might wonder? It’s simply a
newtype wrapper around our familiar
Exp:
type role TExp nominal
newtype TExp (a :: TYPE (r :: RuntimeRep)) = TExp { unType :: Exp }
The meaning of the
TYPE (r :: RuntimeRep) part is not important to us, but simply put, it allows GHC to describe how to represent some types (boxed, unboxed, etc) during runtime. For more information, see levity polymorphism.
This allows us to use our familiar constructions for
Exp, in addition to a type for
a which represents the type of the expression. This gives us stronger type-safety mechanisms for our TH application, which will cause the compiler to reject invalid TH programs during their construction.
In the example below,
template-haskell gladly accepts
42 :: String using an untyped expression, while the typed counterpart refuses it with a type error.
>>> runQ [|42 :: String|] SigE (LitE (IntegerL 42)) (ConT GHC.Base.String) >>> runQ [||42 :: String||] <interactive>:358:9: error: • Could not deduce (Num String) arising from the literal ‘42’ from the context: Language.Haskell.TH.Syntax.Quasi m bound by the inferred type of it :: Language.Haskell.TH.Syntax.Quasi m => m (TExp String) at <interactive>:358:1-23 • In the Template Haskell quotation [|| 42 :: String ||] In the first argument of ‘runQ’, namely ‘[|| 42 :: String ||]’ In the expression: runQ [|| 42 :: String ||]
Typed splices
Just like we had untyped splices such as
$foo, now we also have typed splices, written as
$$foo. Note, however, that if your GHC version is below 9.0, you may need to write
$$(foo) instead.
Example: calculating prime numbers
As an example, let's consider the following functions that implement prime number evaluation up to some number. We will create two versions, one with ordinary Haskell and another with Template Haskell, so we can see the differences between them. The implementation is somewhat more verbose than it needs to be, in order to demonstrate the techniques in typed TH and contrast them with an ordinary function.
First, create a file
Primes.hs containing two functions: one that checks whether a given number is prime, and another that generates prime numbers up to some given limit.
module Primes where

isPrime :: Integer -> Bool
isPrime n
  | n <= 1 = False
  | n == 2 = True
  | even n = False -- No even number except for 2 is prime
  | otherwise = go 3
  where
    go i
      | i >= n = True -- We saw all smaller numbers and no divisors, so it's prime
      | n `mod` i == 0 = False
      | otherwise = go (i + 2) -- Iterate through the odd numbers

primesUpTo :: Integer -> [Integer]
primesUpTo n = go 2
  where
    go i
      | i > n = []
      | isPrime i = i : go (i + 1)
      | otherwise = go (i + 1)
The first function checks whether a number has any divisors. If it has any divisor (apart from 1 and itself), then the number is composite and the function returns False; otherwise it keeps testing for more divisors. If we reach a number that is greater than or equal to the input, it means that we have checked all smaller numbers and found no divisors, so the number is prime and the function returns True.
The second function simply iterates through the numbers, collecting all primes. We start with 2 since it’s the first prime number.
Keep in mind that these functions are very inefficient, so make sure to use a more optimized version for anything serious!
Now for our Template Haskell version. As usual, let’s create two files,
TH.hs and
Main.hs, to work with through this example.
This is what should be in
TH.hs:
module TH where

import Language.Haskell.TH

import Primes (isPrime)

primesUpTo' :: Integer -> Q (TExp [Integer])
primesUpTo' n = go 2
  where
    go i
      | i > n = [||[]||]
      | isPrime i = [||i : $$(go (i + 1))||]
      | otherwise = [||$$(go (i + 1))||]
In general, it's the same thing as the ordinary version. The only difference is that we return a
Q (TExp [Integer]) and generate our list inside the typed expression quoter.
We wrap our recursive calls to
go inside splices. Since
go has a type of
Q (TExp [Integer]), if we didn’t splice it, we’d try to use the cons operator (
:) on an
Integer and a
Q (TExp [Integer]) which would not type-check. An error message might describe the problem quite well:
>>> :l TH [2 of 2] Compiling TH ( TH.hs, interpreted ) Failed, no modules loaded. TH.hs:15:21: error: • Couldn't match type ‘Q (TExp [Integer])’ with ‘[Integer]’ Expected type: Q (TExp [Integer]) Actual type: Q (TExp (Q (TExp [Integer]))) • In the Template Haskell quotation [|| (go (i + 1)) ||] In the expression: [|| (go (i + 1)) ||] In an equation for ‘go’: go i | i > n = [|| [] ||] | isPrime i = [|| i : $$(go (i + 1)) ||] | otherwise = [|| (go (i + 1)) ||] | 15 | | otherwise = [||(go (i + 1))||] | ^^^^^^^^^^^^^^^^^^
As a matter of fact, we could have written that branch above simply as
go (i + 1), without the quoter. Try it!
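For reference, that variant looks something like this (only the last guard changes):

primesUpTo' :: Integer -> Q (TExp [Integer])
primesUpTo' n = go 2
  where
    go i
      | i > n = [||[]||]
      | isPrime i = [||i : $$(go (i + 1))||]
      | otherwise = go (i + 1)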
And now we can use our new function in GHCi like so:
>>> $$(primesUpTo' 100) [2,3,5,7,11,13,17,19,23,29,31,37,41,43,47,53,59,61,67,71,73,79,83,89,97]
We can also inspect it as an untyped Template Haskell definition if we want, using the
unType function:
>>> runQ (unType <$> primesUpTo' 10) InfixE (Just (LitE (IntegerL 2))) (ConE GHC.Types.:) (Just (InfixE (Just (LitE (IntegerL 3))) (ConE GHC.Types.:) (Just (InfixE (Just (LitE (IntegerL 5))) (ConE GHC.Types.:) (Just (InfixE (Just (LitE (IntegerL 7))) (ConE GHC.Types.:) (Just (ConE GHC.Types.[]))))))))
Or, more simply put:
2 : 3 : 5 : 7 : []
Had we made any mistakes in the definition, for example, by using the following definition where we forget a recursive call:
primesUpTo' :: Integer -> Q (TExp [Integer])
primesUpTo' n = go 2
  where
    go i
      | i > n = [||[]||]
      | isPrime i = [||i||] -- We forgot to build a list here
      | otherwise = [||$$(go (i + 1))||]
Then we’d be immediately greeted with a type-error:
>>> :r [2 of 2] Compiling TH ( TH.hs, interpreted ) Failed, no modules loaded. TH.hs:14:21: error: • Couldn't match type ‘Integer’ with ‘[a]’ Expected type: Q (TExp [a]) Actual type: Q (TExp Integer) • In the Template Haskell quotation [|| i ||] In the expression: [|| i ||] In an equation for ‘go’: go i | i > n = [|| [] ||] | isPrime i = [|| i ||] | otherwise = [|| $$(go (i + 1)) ||] • Relevant bindings include go :: Integer -> Q (TExp [a]) (bound at TH.hs:43:5) | 14 | | isPrime i = [||i||] | ^^^^^^^
Our
primesUpTo' will generate the list of primes at compile-time, and now we can use this list to check the values at runtime.
With this, we can create our
Main.hs, where we can try our code:
import TH

main :: IO ()
main = do
  let numbers = $$(primesUpTo' 10000)
  putStrLn "Which prime number do you want to know?"
  input <- readLn -- n.b.: partial function
  if input < length numbers
    then print (numbers !! (input - 1))
    else putStrLn "Number too big!"
And that’s it! A very simple program using typed TH. Load
Main.hs in GHCi, and after a few seconds when it’s loaded, run our
main function. Once asked for an input, type a number such as 200, asking for the 200th prime. The function should output the correct result of 1223.
>>> main Which prime number do you want to know? 200 1223
Again, our algorithm is quite inefficient and this may take some seconds to compile (as it’s generating numbers as it compiles), and for further improvements, it may be a good idea to have a less naïve algorithm for generating primes, but for educational purposes, it will do for now.
The code used in this post can also be found in this GitHub gist.
A shorter implementation
As mentioned before, we could implement the functions above in a simpler manner, such as:
primesUpTo :: Integer -> [Integer]
primesUpTo n = filter isPrime [2 .. n]
And the corresponding TH function as:
primesUpTo' :: Integer -> Q (TExp [Integer])
primesUpTo' n = [|| primesUpTo n ||]
And with this, you should be ready to use typed Template Haskell in the wild.
Caveat
Typed Template Haskell may have some difficulties resolving overloads. Surprisingly, the following does not type-check:
>>> mempty' :: Monoid a => Q (TExp a) ... mempty' = [|| mempty ||] >>> x :: String ... x = id $$(mempty') <interactive>:549:11: error: • Ambiguous type variable ‘a0’ arising from a use of ‘mempty'’ prevents the constraint ‘(Monoid a0)’ from being solved. Probable fix: use a type annotation to specify what ‘a0’ should be. These potential instances exist: instance Monoid a => Monoid (IO a) -- Defined in ‘GHC.Base’ instance Monoid Ordering -- Defined in ‘GHC.Base’ instance Semigroup a => Monoid (Maybe a) -- Defined in ‘GHC.Base’ ...plus 7 others (use -fprint-potential-instances to see them all) • In the expression: mempty' In the Template Haskell splice $$(mempty') In the first argument of ‘id’, namely ‘$$(mempty')’
Annotating
mempty' may resolve it in this case:
>>> x :: String ... x = id $$(mempty' :: Q (TExp String)) >>> x ""
An open ticket exists describing the issue, but if you run into some strange errors, it’s a good idea to keep it in mind.
Further reading
In this post, we extended our Template Haskell knowledge with
TExp. We created a short example where we generated some values during compile time that can be later looked up at runtime. For more resources on typed Template Haskell, check out the following links:
- Using Template Haskell to generated static data
- A Little Bloop on Typed Template Haskell
- Statically checked overloaded string
For more Haskell tutorials, you can check out our Haskell articles or follow us on Twitter or Medium.
|
https://serokell.io/blog/typed-template-haskell-overview
|
CC-MAIN-2021-43
|
en
|
refinedweb
|
Converting to another web framework: Basic apps in Symfony and Django
Many times have I heard the following from a developer: “I am scared to change technologies”, “I am excited but I’m afraid it will be entirely different”, “I only know <insert web framework here>, I’ve never seen any <insert another web framework here>”. The examples in this article will be in Symfony2, a modern PHP web framework, and Django, another such framework for Python. Both frameworks are widely used in companies today, for the development of small to medium-sized web applications and backends. In fact, we have a few interesting articles in Python (mostly Django, since it’s one of our favourite frameworks) and PHP.
You can learn about the basic similarities between these two frameworks, but even if your speciality is Rails, Spring or any other web framework, you can use this article to better understand your specialty and to let go of any fears about converting. For the sake of good examples, I will be using the user stories and application development from the Symfony2 Jobeet tutorial, along with bits and pieces from the Django Write your first app tutorial.
Installing a web framework is easy
You can always experiment with features of a new web framework release using the modern installers. Nowadays specialised installers (package managers, if you wish) will automatically download the framework version of your choice and will bootstrap a project for immediate running. Usually the steps go like this: install the programming language and the installer, edit a settings file, then use a shell utility provided by your framework to run your project on a local server.
In Symfony, it goes like this:
sudo apt-get install php5
sudo curl -LsS -o /usr/local/bin/symfony
sudo chmod a+x /usr/local/bin/symfony
symfony new jobeet 2.8
php app/console server:run
Which leads you to a splash screen of your first Symfony application running on localhost . Sometimes, you need to go through additional steps like adding your timezone in the php.ini file, which takes a couple of minutes of your time. In case you haven’t noticed, our programming language is PHP (the php5 package), our installer is called symfony and we start our project using the 2.8 LTS version of the framework. Our shell utility is app/console, which we will use extensively throughout the development process using various commands. You will also find settings files like parameters.yml and config.yml in the app/config path, where you can edit your defaults.
In Django, the installation is similar:
sudo apt-get install python python-pip python-django-common python-django
django-admin startproject jobeet_py
python manage.py runserver
Again, we have installed python, the pip and django-admin installers, and we have started a project (called jobeet_py). We then run it using the manage.py shell utility (which we will use in Django development a lot) and we will see a welcome screen on localhost. Your settings file is in the root folder of your project, in settings.py.
The whole process is made this way because the programmers require confirmation that their settings and framework installation are correct. After making sure the framework is installed and properly configured, we can move to the next step.
Starting the application
Symfony organises code into bundles, while Django prefers the equivalent app naming. The first things to do after installing the framework itself is to start your own separate project, which will use but not rewrite elements from the framework.
Run this in your symfony project root folder and answer all questions using the default values.
php app/console generate:bundle --namespace=Ens/JobeetBundle --format=yml
It will generate a folder structure in your src folder, under Ens/JobeetBundle . An automatic action is also added, but we will go into more detail on routes, actions and views later. For now, know that your custom code will go inside this newly-created structure. Don’t forget to clear your cache like explained in the Jobeet tutorial.
In Django, we do more or less the same thing, using our shell utility to create an app:
python manage.py startapp jobeet
You also need to add it to your installed apps in the settings.py file.
INSTALLED_APPS = ( [...] 'jobeet' )
And now you’re good to go.
Hello World! : The triad of URL, Controller, View
You have now reached the essential point of MVC Web Frameworks. Understanding how to tie in the functionality from accessing a URL to computing and visualising the desired information is the crucial part of web development. Doing so will enable you to study related topics such as external libraries and custom handling of requests with much ease. Web frameworks work by mapping URL paths to Controller actions, which can be functions or classes and are written by the programmer to contain logical handling of data. Most of the times, the Controller will also return a view, which is the user-friendly display of computed data (as HTML and CSS). Sounds simple? Well, the most confusing part of this aspect is that different frameworks tend to name these concepts differently. For example, Symfony calls them Route-Controller-View, while Django calls them URL-View-Template. Whatever they are named, these three, combined with the Model part, represent the fundamentals of web framework development.
What I want to do using both frameworks is to make the root URL / display a simple page with a custom signature. Since projects may contain several sub-projects (bundles, apps), we delegate from the main URL configuration file to specialised ones, located in their corresponding sub-projects. First, let’s see how this looks in Symfony. Consider the main routing file app/config/routing.yml :
ens_jobeet: resource: "@EnsJobeetBundle/Resources/config/routing.yml" prefix: /
Here we defer routing of our Jobeet routes to the Jobeet Bundle. So next we create this new and specialised routing file in src/Ens/JobeetBundle/Resources/config/routing.yml :
ens_jobeet_homepage: path: / defaults: { _controller: EnsJobeetBundle:Default:index }
The components of a route definition are the path (here, root URL), the name of the route (here, ens_jobeet_homepage ) and the mapping to a controller action. Next, we write that controller action in src/Ens/JobeetBundle/DefaultController :
<?php namespace Ens\JobeetBundle\Controller; use Symfony\Bundle\FrameworkBundle\Controller\Controller; class DefaultController extends Controller { public function indexAction() { return $this->render('EnsJobeetBundle:Default:index.html.twig', array( 'signature' => 'C3-PO, Human-cyborg relations.' )); } }
As you can see, our index action is pretty simple. For our “Hello World!” example, we don’t need to connect to the database, change any available data or generally compute much. We simply render the index view and pass along the signature variable. The purpose is simply to demonstrate how variable transmission affects the view. So in our view, located at src/Ens/JobeetBundle/Resources/views/Default/index.html.twig we simply write:
Hello World! I am {{ signature }}.
Therefore in our template we can use the Twig templating engine’s syntax for printing a variable. The double curly braces output the signature variable we previously defined in the controller.
The result? Upon going into the browser at the address we will see a page that prints:
Hello World! I am C3-PO, Human-cyborg relations..
In Django, the process is almost identical. As we do in Symfony, first we defer app-related URL definition to the app itself. In our main urls.py file, located in the root folder of our Django project, we add this delegation:
from django.conf.urls import patterns, include, url from django.contrib import admin urlpatterns = patterns( '', url(r'^', include('jobeet.urls')), )
This indicates to our main URL definition file that it needs to add the patterns from the jobeet/urls.py file. We edit this file to contain a index route and tie it to an action.
from django.conf.urls import url from jobeet import views urlpatterns = [ url(r'^$', views.index, name='index'), ]
The defined URL pattern points the root URL to a function inside the jobeet/views.py file. Note the structure of imports, which considers the name of the app, then its contained files as importable. In our views.py file we can now write:
from django.shortcuts import render def index(request): return render(request, 'jobeet/index.html', { 'signature': 'C3-PO, Human-Cyborg relations.' })
Note the similarity between the two views, Symfony and Django. If the functionality is identical, the structure of the action will be similar in any chosen modern web framework. Again we use a rendering function to display a template file (which is HTML in nature) and pass it a custom variable. In Symfony, the context that we send to the view is an array used like a dictionary, while in Django we use Python’s dictionary construct in the same fashion.
The big surprise comes now. In jobeet/templates/jobeet/index.html we can input the exact same text as in our Symfony Twig view. This is made possible by the fact that Twig and Django’s templating engines are similar to a great extent and use the same curly braces syntax when outputting the value of a variable. Furthermore, there are a lot of similarities in the way the two templating engines handle iteration, inheritance and so on. The differences are mostly minor inconveniences, such as Twig maintaining that routes should be expressed with an echoing ({{ path(‘index’) }} ) syntax, while Django uses a block syntax ({% url ‘index’ %} ). After creating your triad of elements (URL, Action function and Template), you can check in your browser that, if you run the Django app using the runserver command, it will again print your greeting from C3-PO.
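For instance, a link back to the index route looks nearly identical in the two template languages (a hypothetical snippet, assuming a route named 'index' on the Symfony side and a URL pattern named 'index' on the Django side):

Twig (Symfony):   <a href="{{ path('index') }}">Home</a>
Django template:  <a href="{% url 'index' %}">Home</a>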
Wrap-up
I hope that this side-by-side comparison of some simple web framework features has helped you gain some insight into just how similar they can be. It is often not our fear of what lies ahead, but our fear of the unknown, that makes us reluctant to accept any technological change. However, in our times, flexibility and adaptability are essential to developers who desire a long and prosperous career. In this article, I have shown you how to install the Symfony and Django frameworks, how to start your project and how to handle basic URL-Controller-View definition. Unfortunately, I have barely touched the tip of the iceberg. Interesting similarities and differences between web frameworks can be seen in model handling, CRUD approaches, third-party providers, open source community and so on. If you want us to write more articles about today’s web frameworks and what makes them great, make sure you leave us a comment. Not too keen about PHP and Python? If Javascript is your backend flavour, be sure to check out Paul’s articles on NodeJS (building APIs and creating Admin Panels).
|
https://www.algotech.solutions/blog/php/converting-to-another-web-framework-basic-apps-in-symfony-and-django/
|
CC-MAIN-2021-43
|
en
|
refinedweb
|
Hello,
After the latest ReSharper update (C++ 2019.2.20191016.62802) I've got these nice highlights of matching #if #ifdef #ifndef blocks, too bad they are giving me eye bleeding.
I've been digging through all the color settings in Visual Studio to find them and change their background, with no luck. Can someone point me to the coloring label for these blocks or for a way to disable them?
Thank you
Hello,
You can use the "ReSharper Parameter Name Hint" color setting to change the color of the inlay hints. If you want to disable them, just right-click on the hint and there will be an option there.
Note that if you choose the "Dark" color theme in VS, the color of the hints is changed accordingly. Here's how they look with default "Dark" colors:
Hi Igor,
Thank you so much for the info. If set to a custom color it works correctly and all is well, my eyes are thanking you.
As a bit more info for you, I am not using the default Dark Theme, maybe that's where the conflict lies.
Oddly enough the default color in settings seems indeed to be closer to what you have shown. There must be a conflict arising between the settings and the editor with the "Default" color as it's not looking like that. Maybe that "Default" color is a reference to something else which is that bright grey in my case in the editor. This was the setting I had with the first screenshot. Hope it helps.
The name of the color setting is a bit misleading. The hints were originally used only for parameter names, but now there are many more kind of hints - namespace, directive, and type name. We'll probably change the name of the color to something like "ReSharper Inlay Hint" in the next release.
|
https://resharper-support.jetbrains.com/hc/en-us/community/posts/360006736599-Changing-bg-color-of-the-matching-preprocesor-ifdef-section-blocks
|
CC-MAIN-2021-43
|
en
|
refinedweb
|
GuwenBERT
Model description
This is a RoBERTa model pre-trained on Classical Chinese. You can fine-tune GuwenBERT for downstream tasks, such as sentence breaking, punctuation, named entity recognition, and so on.
For more information about RoBERTa, take a look at the RoBERTa's offical repo.
How to use
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("ethanyt/guwenbert-base")
model = AutoModel.from_pretrained("ethanyt/guwenbert-base")
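As a rough sketch of what a forward pass looks like (the sentence below is an arbitrary example, and depending on your transformers version the model may return a tuple instead of an object with named fields):

from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("ethanyt/guwenbert-base")
model = AutoModel.from_pretrained("ethanyt/guwenbert-base")

# Encode a short Classical Chinese sentence and run it through the encoder.
inputs = tokenizer("学而时习之，不亦说乎", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch size, sequence length, hidden size)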
Training data
The training data is the daizhige dataset (殆知阁古代文献), which consists of 15,694 books in Classical Chinese, covering Buddhism, Confucianism, Medicine, History, Zi, Yi, Yizang, Shizang, Taoism, and Jizang. 76% of them are punctuated. The total number of characters is 1.7B (1,743,337,673). All traditional characters are converted to simplified characters. The vocabulary is constructed from this data set and its size is 23,292.
Training procedure
The models are initialized with
hfl/chinese-roberta-wwm-ext and then pre-trained with a 2-step strategy.
In the first step, the model learns MLM with only word embeddings updated during training, until convergence. In the second step, all parameters are updated during training.
The models are trained on 4 V100 GPUs for 120K steps (20K for step#1, 100K for step#2) with a batch size of 2,048 and a sequence length of 512. The optimizer used is Adam with a learning rate of 2e-4, adam-betas of (0.9,0.98), adam-eps of 1e-6, a weight decay of 0.01, learning rate warmup for 5K steps, and linear decay of learning rate after.
Eval results
"Gulian Cup" Ancient Books Named Entity Recognition Evaluation
Second place in the competition. Detailed test results:
We are from Datahammer, Beijing Institute of Technology. For more cooperation, please contact email: ethanyt [at] qq.com
Created with ❤️ by Tan Yan
and Zewen Chi
|
https://huggingface.co/ethanyt/guwenbert-base
|
CC-MAIN-2021-43
|
en
|
refinedweb
|
I am working with the book Agile Web D. second edition, and on
page 110 there is an example of how to create the model CartItem.
I am using rails 2.0 and after adding the class CartItem I get an error
at the constructor Initialize
class CartItem < ActiveRecord::Base
attr_reader :product, :quantity
def initialize(product)
@product = product
@quantity = 1
end
it underlines the word “initialize” and says “Subclass does not call
super in constructor”
|
https://www.ruby-forum.com/t/subclass-does-not-call-super-in-constructor/144226
|
CC-MAIN-2021-43
|
en
|
refinedweb
|
Can anybody explain unwrapConnection in SimpleDataSource for me? The
code reads:
public static Connection unwrapConnection(Connection conn) {
if (conn instanceof SimplePooledConnection) {
return ((SimplePooledConnection) conn).getRealConnection();
}
...
But the private (argh!) SimplePooledConnection isn't a Connection object?
It extends Object and implements InvocationHandler... I'm guessing this
hasn't ever worked.
- Richard
|
https://mail-archives.us.apache.org/mod_mbox/ibatis-user-java/200504.mbox/%[email protected]%3E
|
CC-MAIN-2021-43
|
en
|
refinedweb
|
If you are just getting started with React JS, I understand that it can be really confusing getting to understand these concepts and how you can use them, so I decided to write this article to explain these concepts as simply as possible.
So to begin with, what do Props mean in React?
Props is the short form of properties and they are used to pass data from one component to another. The flow of this data is always in one direction (uni-directional) from parent to child component. It should also be noted that the data that is passed is always read-only and should not be changed.
Think of props as an object that contains the attribute and their values which have been passed from the parent component. Props make it possible to reuse components.
Let's take a look at an example;
We have a simple component
/SayHello.js that outputs a simple message
import React from 'react' const SayHello =()=>{ return( <div> <h1>Hello and welcome onboard</h1> </div> ) } export default SayHello;
Now we render this component in our
/App.js component
import React from 'react' import SayHello from './SayHello' const App=()=>{ return( <div> <SayHello /> </div> ) } export default App;
So this is a sample of a simple component without props. However, what if we would like to add a name property to the SayHello message, and we do not want to hardcode it into the h1, so that we can easily change the name we say hello to?
So this is where we introduce props into our components, so the
/SayHello.js will now look like this
import React from 'react' const SayHello =(props)=>{ return( <div> <h1>Hello and welcome onboard {props.name}</h1> </div> ) } export default SayHello;
While the name properties (props) will also be added to our
/App.js component in this way
import React from 'react' import SayHello from './SayHello' const App=(props)=>{ return( <div> <SayHello name="Martha" /> </div> ) } export default App;
So you can see how simple it is to introduce props into our components: we simply need to add the property (in this case name) to the component, and reference props.(whatever property) wherever we want to use it.
Let's also look at how we can use props in a class component (note that our first example is a functional component).
So in a class component, our
/SayHello.js will look like this
import React from 'react' class SayHello extends React.Component{ render(props){ return( <div> <h1>Hello and welcome onboard {this.props.name}</h1> </div> ) } } export default SayHello;
So we have seen how props work in both function and class components.
Now let's take a look at States
Just like Props, State holds information about a component. It allows components to create and manage their own data: while components pass data with Props, they create and manage data with State. This means that a component's State can change, and whenever it changes, the component re-renders.
let's take a look at an example of a Component creating and managing data with States.
import React from 'react'

class Record extends React.Component {
  constructor(props) {
    super(props)
    this.state = {
      count: 0
    }
    this.handleClick = this.handleClick.bind(this)
  }

  handleClick() {
    this.setState(prevState => {
      return {
        count: prevState.count + 1
      }
    })
  }

  render() {
    return (
      <div>
        <h1>{this.state.count}</h1>
        <button onClick={this.handleClick}>Change</button>
      </div>
    )
  }
}

export default Record;
From the above example, it can be seen that the Record component had a count state which is initially set to zero, but this state is changed by the action of a button click. You can see that the button has an onClick that calls the function "handleClick" which is set to change the initial state of count using the setState method.
One important thing to note is that, before now, State could only be used in class components and not in functional components (this is why functional components were referred to as stateless components), but with the introduction of React Hooks, State can now be used in functional components as well. I will write about React Hooks in my next article.
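As a quick preview, the counter above could be written as a functional component with the useState Hook roughly like this (a sketch; the details belong to that next article):

import React, { useState } from 'react'

const Record = () => {
  // useState returns the current state value and a function to update it
  const [count, setCount] = useState(0)

  return (
    <div>
      <h1>{count}</h1>
      <button onClick={() => setCount(count + 1)}>Change</button>
    </div>
  )
}

export default Record;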
From all we have looked at in this article we can draw the following differences between Props and State in React.
- Props are used to pass data while State is used to manage data.
- Components use Props to receive data from outside while components create and manage their own data with State.
- Props can only be passed from parent to child component and they are read-only.
- State can be modified in its own component and this must be done using the
setState() method.
Conclusion
Props and State are very important concepts in React JS, and understanding how to use them is crucial. Getting a solid understanding of these two will help your React journey. Feel free to leave a comment below; I would also like to hear from you about anything you need clarity on.
The complete project with everything in this article can be found here
|
https://practicaldev-herokuapp-com.global.ssl.fastly.net/martha/react-props-and-state-2f5e
|
CC-MAIN-2021-43
|
en
|
refinedweb
|
BBC micro:bit
Copy and test the following program. Then read the explanations before starting the task.
from microbit import *
import music

while True:
    if button_a.is_pressed():
        music.pitch(440,-1)
    else:
        music.stop()
    sleep(20)
The first two lines import the modules we need, the microbit module as always and the music module to be able to play sounds. The program has an infinite loop. Inside the loop, the program checks to see if the A button is held down. If the A button is held down, a tone is played. 440 is the frequency of the tone and -1 means to play it forever. If the A button is not held down, all sounds are stopped.
Adapt the program so that pressing the A button starts the tone playing. Pressing the B button should stop the tone. To do this, remove the ELSE clause and the statement that stops the tone from playing. Add an IF statement to check if button B was pressed. If it was, stop the tone from playing. This program will not work correctly if your second IF statement is not at the same level of indentation as your first.
Task 2
The following statement plays a 440Hz tone for 1000 milliseconds.
music.pitch(440,1000)
Adapt the following program to make a Morse Code sounder. The lines that start with a # are comments. Replace these with statements that do the job indicated by the comment. In Morse Code, a dash is meant to be 3 times longer than a dot. Around 100-300 milliseconds makes sense for a dot.
from microbit import *
import music

while True:
    if button_a.was_pressed():
        # play a 'dot' tone
    elif button_b.was_pressed():
        # play a 'dash' tone
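One possible completion of the skeleton (the 150 millisecond dot length is an arbitrary choice within the suggested range):

from microbit import *
import music

DOT = 150  # milliseconds; a dash is three times longer than a dot

while True:
    if button_a.was_pressed():
        music.pitch(440, DOT)      # play a 'dot' tone
    elif button_b.was_pressed():
        music.pitch(440, DOT * 3)  # play a 'dash' tone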
Task 3
The following program defines and plays a music tune.
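The tune below is only an example; any list of notes in the same format will work:

from microbit import *
import music

# Each note is written as NOTE OCTAVE : BEATS, so 'C4:4' is Middle C played for 4 beats.
tune = ['C4:4', 'D4:4', 'E4:4', 'C4:4',
        'C4:4', 'D4:4', 'E4:4', 'C4:4',
        'E4:4', 'F4:4', 'G4:8']
music.play(tune)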
Study the way that the tune has been defined. When you see something like 'C4:4', this means Middle C (4th octave on the piano), played for 4 beats. Use the table below to define the notes for a different tune.
Task 4
Find some sheet music (or other musical notation) for a tune that you want to encode. All sorts of weird and wonderful tunes can be played, along with the classics (popular and classical).
Task 5
The X and Y axes on the accelerometer return values from about -1000 to 1000.
Look back at a program where you read from the accelerometer, or check the following page.
Write a program with an infinite loop. Inside the loop, check if the A button is being pressed ('is' not 'was'). If it is, take a reading from the X or Y axis of the accelerometer. Write some code to convert the reading to a number that is positive - negative numbers are less than 0, multiplying a negative number by -1 will turn it into a positive number.
If the button is being pressed, use your accelerometer reading to play a frequency. You need the music.pitch statement to do this.
If the button is not being pressed, the music should be stopped.
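One possible sketch, using the X axis (the scaling of the reading into a frequency is an arbitrary choice):

from microbit import *
import music

while True:
    if button_a.is_pressed():
        reading = accelerometer.get_x()
        if reading < 0:
            reading = reading * -1      # make negative readings positive
        music.pitch(200 + reading, -1)  # roughly 200Hz to 1200Hz
    else:
        music.stop()
    sleep(20)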
Task 6
Copy and test the following code. It creates a dot and then moves it randomly every half a second.
from microbit import *
import music
from random import randint

x = 2
y = 2

while True:
    display.set_pixel(x,y,0)
    dx = randint(-1,1)
    dy = randint(-1,1)
    x += dx
    y += dy
    x = max(0, min(x, 4))
    y = max(0, min(y, 4))
    display.set_pixel(x,y,9)
    sleep(500)
The variables x and y store the position of the dot. An infinite loop is created. Inside the loop, the pixel is 'undrawn'. 2 random numbers are chosen from -1 to 1. These numbers are added to the x and y coordinates. The max and min statements are used to keep x and y in the 0-4 range. The pixel is redrawn at the end of the loop and a half second pause happens before the loop repeats.
Start by adding a simple beep once in each loop. Use the music.pitch statement. Be careful with using timings - starting and stopping the pitch using -1 as the length allows the same timings to continue in the program.
If dx and dy are both 0, the dot doesn't move - so there should not be a beep. The dot does not move if it is on the edge and dy/dx would take it off the edge. This should also not get a beep.
The final refinement is to vary the pitch of the beep depending on the direction of movement. The direction of movement can be worked from dy and dx.
Task 7
If you vary the pitch of a tone quickly up and down across a large range of frequencies, you can make a siren sound. Use a for loop that counts from a few hundred to around a 1000. Use the music.pitch statement to play the frequency with no delay. Experiment until you have an annoying siren/alarm sound.
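One possible sketch (the 400 to 1000 range is a starting point to experiment with):

from microbit import *
import music

while True:
    # sweep the pitch up and then back down again
    for freq in range(400, 1000):
        music.pitch(freq)
    for freq in range(1000, 400, -1):
        music.pitch(freq)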
Task 8
Study the Morse Code program on the following web page,
Copy and adapt the program so that it makes the appropriate beeping noises. You should remove any statements that print - you will not see this output using the browser-based editor. You will also need to add a line calling the procedure into your program. On the web page, you see this being typed somewhere else. You should include it at the end of the program.
|
http://www.multiwingspan.co.uk/micro.php?page=pyex2
|
CC-MAIN-2019-47
|
en
|
refinedweb
|
- Rafael J. Wysocki authored
ACPI container devices require special hotplug handling, at least on some systems, since generally user space needs to carry out system-specific cleanup before it makes sense to offline devices in the container. However, the current ACPI hotplug code for containers first attempts to offline devices in the container and only then it notifies user space of the container offline. Moreover, after commit 202317a5 (ACPI / scan: Add acpi_device objects for all device nodes in the namespace), ACPI device objects representing containers are present as long as the ACPI namespace nodes corresponding to them are present, which may be forever, even if the container devices are physically detached from the system (the return values of the corresponding _STA methods change in those cases, but generally the namespace nodes themselves are still there). Thus it is useful to introduce entities representing containers that will go away during container hot-unplug. The goal of this change is to address both the above issues. The idea is to create a "companion" container system device for each of the ACPI container device objects during the initial namespace scan or on a hotplug event making the container present. That system device will be unregistered on container removal. A new bus type for container devices is added for this purpose, because device offline and online operations need to be defined for them. The online operation is a trivial function that is always successful and the offline uses a callback pointed to by the container device's offline member. For ACPI containers that callback simply walks the list of ACPI device objects right below the container object (its children) and checks if all of their physical companion devices are offline. If that's not the case, it returns -EBUSY and the container system devivce cannot be put offline. Consequently, to put the container system device offline, it is necessary to put all of the physical devices depending on its ACPI companion object offline beforehand. Container system devices created for ACPI container objects are initially online. They are created by the container ACPI scan handler whose hotplug.demand_offline flag is set. That causes acpi_scan_hot_remove() to check if the companion container system device is offline before attempting to remove an ACPI container or any devices below it. If the check fails, a KOBJ_CHANGE uevent is emitted for the container system device in question and user space is expected to offline all devices below the container and the container itself in response to it. Then, user space can finalize the removal of the container with the help of its ACPI device object's eject attribute in sysfs. Tested-by:
Yasuaki Ishimatsu <[email protected]>
Signed-off-by: Rafael J. Wysocki <[email protected]>
Acked-by: Greg Kroah-Hartman <[email protected]>
caa73ea1
|
https://gitlab.flux.utah.edu/xcap/xcap-capability-linux/blob/41e1d4fd2978f1035ceb210a7482901434770c2d/include/linux/container.h
|
CC-MAIN-2019-47
|
en
|
refinedweb
|
Add User Data from Multiple Objects to a Null
On 17/05/2016 at 11:00, xxxxxxxx wrote:
Hi,
GOAL:
Ultimately I want to create an object plugin called "Rig."
- You place multiple objects that have User Data as children of the Rig object.
- The rig object copies the User Data of each child and then adds them to itself.
- It then creates an Xpresso tag and connects it's User Data to the childrens user data
- If a child is removed or deleted, the rig updates itself.
So if I have a two objects with user data and I put them under the "Rig" object, that rig Object will have the user data of the two objects and it can control them.
Current Code
I have a scene with a null named "Null" and two objects with User Data on them. When I run this the User Data from obj get overwritten by obj2. How can I add the obj2 user data and not overwrite the obj user data on the Null object?
import c4d
from c4d import gui
#Welcome to the world of Python
def main() :
target = doc.SearchObject("Null")
#print target
obj = doc.SearchObject("Red-EpicDragonBody")
#print obj
obj2 = doc.SearchObject("WoodenCamera-Baseplate19mmRedSet")
for id, bc in obj.GetUserDataContainer() :
target.SetUserDataContainer(id, bc)
print id
for id, bc in obj2.GetUserDataContainer() :
target.SetUserDataContainer(id, bc)
print id
if __name__=='__main__':
main()
On 18/05/2016 at 01:35, xxxxxxxx wrote:
Hello,
the user data is overwritten because you overwrite it. The IDs of user data parameters are unique on an object but not globally. So the user data parameters of object A have the IDs 1,2,3 ... and the user data parameters on object B also have the IDs 1, 2, 3 … .
So if you merge them together in one container you must create a new, offset ID for the user data of the second object or use AddUserData().
Best wishes,
Sebastian
On 27/05/2016 at 06:59, xxxxxxxx wrote:
Hello Matt,
was your question answered?
Best wishes,
Sebastian
On 27/05/2016 at 17:43, xxxxxxxx wrote:
Hey Sebastian,
Thanks for the reply. In theory, I understand that the ID have to be unique. But I need to figure how to actually implement it!
Thanks!
Matt
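For anyone reading along later, here is a rough sketch of the AddUserData() route Sebastian mentions (untested, and the object names are the ones from the script at the top of the thread):

import c4d

def main():
    target = doc.SearchObject("Null")
    sources = [doc.SearchObject("Red-EpicDragonBody"),
               doc.SearchObject("WoodenCamera-Baseplate19mmRedSet")]
    for obj in sources:
        for uid, bc in obj.GetUserDataContainer():
            new_id = target.AddUserData(bc)  # allocates a fresh, non-clashing ID on the target
            target[new_id] = obj[uid]        # copy the current value across
    c4d.EventAdd()

if __name__=='__main__':
    main()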
|
https://plugincafe.maxon.net/topic/9501/12744_add-user-data-from-multiple-objects-to-a-null/?page=1
|
CC-MAIN-2019-47
|
en
|
refinedweb
|
[Edit]
This post has generated a lot of buzz, spawned some great discussion and even a full length article on how to set up Sub Sonic as a stand-alone assembly. For the reader stumbling on this post via Google or what not, please ensure you read the full comments on this post. Also, ensure that you check out Willie's tutorial on how to set Sub Sonic as a standalone DAL.
Finally, to see how this ended up and where the conversation is leading, check out my follow up post here.
D'Arcy - January 4/2008
[/Edit]
I've become disenchanted with Sub Sonic. Well...not entirely disenchanted: what it does it does very well. If you want to couple your web application's DAL and Logic tiers into the same web-app assembly, with only namespaces acting as logical borders to the components, then go nuts. Sub Sonic is definately the tool for you.
But although I've appreciated using Sub Sonic's ability to create my DAL and entity objects quickly, there's one glaring issue that is just a huge problem for me...HUGE...as in, deal breaker. As in, if you want to architect an application with proper separation of responsibilities, you just can't do it with Sub Sonic.
I wanted to take my generated classes and put them into a separate assembly library. Why? Seriously, you're asking that? You shouldn't...you should know that you should ensure that all your code is highly cohesive and loosely coupled...you should make sure that your UI layer doesn't know anything about the database implementation, providers required, or connection string. For that matter, your business/logic/whatever-you-call-it tier shouldn't know any of the database stuff either...that's why it's called a DAL: its the DATA ACCESS LAYER.
Ok...rant done.
So I've been beating my head on my desk because for the life of me I can't realize why I keep getting the error:
"Can't find the SubSonicService in your application's config"
What do you mean you can't find it?! It's right there...I can see it...the section is THERE.
Alas, it matters not...because it will never read it from there. It will never look there, it doesn't even care that there's a config file associated with the library.
All it knows is that it needs to find the SubSonicService section in the Web.Config section. This was cemented by a post I found on the SubSonic forum where "spookytooth"...
[Edit] ahem...who happens to be "Rob"...who happens to be the main guy *behind* Sub Sonic...yeah...[/Edit]
who happens to be a SubSonic team member as well as on the ASP.NET team, clarified that indeed the execution environment is the web, and therefore the SubSonic dataservice will be looking for a web.config file to pull the values out.
[Edit] Took out this part because it was a misunderstanding between what I *thought* Rob was trying to say and what he meant...he explained it in the comments. [/Edit]
Look, SubSonic is a great tool if you want to create a web or windows based application that you fully expect to be tightly coupled with the data access and entity layers. But if you're looking for something with more flexibility, you're better off using other code generation tools that do the same thing as SubSonic without handcuffing you to their data service components.
[Note: strong ending statement...read the comments below for the full conversation]
D
|
http://gamecontest.geekswithblogs.net/dlussier/archive/2007/12/30/118069.aspx
|
CC-MAIN-2019-47
|
en
|
refinedweb
|
BBC micro:bit
RGB LED
Introduction
This is a photograph of an RGB LED.
If you work from left to right in the photograph, the 4 legs of the LED correspond to BLUE, GROUND, GREEN & RED. The ground pin is shared by all 3 LEDs. This is called a common cathode RGB LED. The RGB LED allows us to turn each of the colours on and off. We can also mix different amounts of red, green and blue light to make the exact colours we want.
Study the diagram of the circuit carefully.
- The 4 legs of the RGB LED are placed in the breadboard so that each leg is on a different row of the breadboard.
- The longest leg of the LED is connected directly to GND on the micro:bit.
- A 200 OHM resistor is connected to each of the other legs of the LED.
- The red LED is connected to pin2.
- The green LED is connected to pin1.
- The blue LED is connected to pin0.
Task 1
Your first program is all about testing the connections in your circuit.
from microbit import *

b = pin0
g = pin1
r = pin2

# red
r.write_digital(1)
sleep(1000)
r.write_digital(0)
sleep(1000)

# green
g.write_digital(1)
sleep(1000)
g.write_digital(0)
sleep(1000)

# blue
b.write_digital(1)
sleep(1000)
b.write_digital(0)
sleep(1000)
The 3 variables r,g and b are used in this program to make it easier to remember which pin has which colour of LED connected to it.
Although the LEDs are in the same package, we can turn them on and off individually just as if they were separate LEDs.
Task 2
Whilst you can use the LEDs separately, it is more interesting to combine them to make new colours. Write a program to help you fill in the following table with the colour you make from different combinations of the LEDs
Task 3
Write a program that changes the colour that you see on the RGB LED every time you press one of the micro:bit buttons. Decide whether or not you want the choice of colour to be predictable (hard-coded) or generated at random.
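A sketch of the hard-coded version, cycling through a fixed list of colours each time the A button is pressed (the colour list is an arbitrary choice):

from microbit import *

b = pin0
g = pin1
r = pin2

# each entry is (red, green, blue), with 1 meaning on and 0 meaning off
colours = [(1,0,0), (0,1,0), (0,0,1), (1,1,0), (0,1,1), (1,0,1), (1,1,1)]
index = 0

while True:
    if button_a.was_pressed():
        index = (index + 1) % len(colours)
    red, green, blue = colours[index]
    r.write_digital(red)
    g.write_digital(green)
    b.write_digital(blue)
    sleep(20)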
Task 4
The word 'digital' means 'on or off'. So far, our LEDs have always been in one of these two states, on or off.
The following program uses write_analog() to vary the brightness of each of the 3 LEDs. That allows us to blend them to make even more colours. The values you can write with this statement range from 0 to 1023.
from microbit import *

b = pin0
g = pin1
r = pin2

r.write_analog(0)
g.write_analog(800)
b.write_analog(300)
Task 5
The following program contains a subroutine with parameters that can be used to set the colour blend efficiently.
from microbit import *

def light(red,green,blue):
    pin0.write_analog(blue)
    pin1.write_analog(green)
    pin2.write_analog(red)

light(1023,0,0)
sleep(1000)
light(0,0,0)
The last 3 lines of the program turn on the RED LED for a second before turning all of the LEDs off. Write a more interesting program with this subroutine.
Task 6
Write a program that displays the traffic light sequence. Work out how to blend the colours to make the amber light.
Look up the complete sequence of traffic lights and replicate it perfectly in your program.
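One possible sketch using the light() procedure from Task 5 (the amber blend and the timings are rough guesses; check them against the real sequence):

from microbit import *

def light(red, green, blue):
    pin0.write_analog(blue)
    pin1.write_analog(green)
    pin2.write_analog(red)

while True:
    light(1023, 0, 0)     # red
    sleep(3000)
    light(1023, 250, 0)   # red and amber phase, shown here as an amber blend
    sleep(1000)
    light(0, 1023, 0)     # green
    sleep(3000)
    light(1023, 250, 0)   # amber
    sleep(1000)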
Task 7
The following snippet of code generates a random integer from 0 to 1023.
from microbit import *
import random

a = random.randint(0,1023)
Use this technique and the procedure from Task 5 to write a program that blends random amounts of red, green and blue.
Task 8
Adapt and extend your program from Task 7 so that the LEDs are all off when the micro:bit starts up. Use a while true loop and, if the A button was pressed, generate 3 random numbers and use them to blend a colour on the RGB LED. Pressing the button again should generate a new random colour.
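Putting the Task 5 procedure and the Task 7 snippet together, a sketch might look like this:

from microbit import *
import random

def light(red, green, blue):
    pin0.write_analog(blue)
    pin1.write_analog(green)
    pin2.write_analog(red)

light(0, 0, 0)  # start with all three LEDs off

while True:
    if button_a.was_pressed():
        light(random.randint(0,1023), random.randint(0,1023), random.randint(0,1023))
    sleep(20)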
|
http://www.multiwingspan.co.uk/micro.php?page=pyex5
|
CC-MAIN-2019-47
|
en
|
refinedweb
|
Gain Remote Access to the Get-ExCommand Exchange Command
Dr Scripto
Summary: Learn how to gain access to the Get-ExCommand Exchange command while in an implicit remote Windows PowerShell session.
Hey, Scripting Guy! I liked your idea about connecting remotely to Windows PowerShell on an Exchange Server. The problem is that I do not know all of the cmdlet names. When I am using RDP to connect to the Exchange server, there is a cmdlet named Get-ExCommand that I can use to find what I need. But when I use your technique of using Windows PowerShell remoting to connect to the Exchange Server, for some reason Get-ExCommand cmdlet does not work. Am I doing something wrong? Please help.
—JM
Microsoft Scripting Guy, Ed Wilson, is here. Well, it looks like my colleagues in Seattle are starting to dig out from the major snowstorm they received last week. Here in Charlotte, it has been sunny and cool. Of course, Seattle does not get a lot of 100 degrees Fahrenheit (37.7 degrees Celsius) days in the summer. Actually, the temperature is not what is so bad, but rather it is the humidity that is oppressive. A day that is 100 degrees Fahrenheit with 85% humidity makes a good day to spend in the pool, or to spend writing Windows PowerShell scripts whilst hugging an air conditioner. Back when I was traveling, the Scripting Wife and I usually ended up in Australia during our summer (and their winter)—it is our favorite way to escape the heat and the humidity. Thus, fall and winter in Charlotte is one of the reasons people move here—to escape the more rugged winters in the north. Anyway…
Yesterday, I wrote a useful function that makes a remote connection to a server running Exchange Server 2010 and brings all of the Exchange commands into the current session. This function uses a technique called implicit remoting.
It is unfortunate that the Get-ExCommand command is not available outside the native Exchange Server Management Shell, because the Exchange commands are not all that discoverable by using normal Windows PowerShell techniques. For example, I would expect to be able to find the commands via the Get-Command cmdlet, but as is shown here, nothing returns.
PS C:\> Get-Command -Module *exchange*
PS C:\>
The Get-ExCommand cmdlet is actually a function and not a Windows PowerShell cmdlet. In reality, it does not make much of a difference that Get-ExCommand is not a cmdlet, except that with a function, I can easily use the Get-Content cmdlet to figure out what the command actually accomplishes. The function resides on the function drive in Windows PowerShell, and therefore the command to retrieve the content of the Get-ExCommand function looks like this:
Get-Content Function:\Get-ExCommand
The command and output associated with that command when run from within the Exchange Server Management Shell are shown in the image that follows.
The following steps are needed to duplicate the Get-ExCommand function:
- Open the Windows PowerShell ISE (or some other script editor).
- Establish a remote session onto an Exchange Server. Use the New-ExchangeSession function from yesterday’s Hey, Scripting Guy! blog.
- Make an RDP connection to a remote Exchange Server and use the Get-Content cmdlet to determine the syntax for the new Get-ExCommand command.
- Use the Windows PowerShell ISE (or other script editor) to write a new function that contains the commands from Step 2 inside a new function named Get-ExCommand.
In the image that follows, I run the New-ExchangeSession function and make an implicit remoting session to the server named “ex1,” which is running Exchange Server 2010. This step brings the Exchange commands into the current Windows PowerShell environment and provides commands with which to work when I am creating the new Get-ExCommand function.
Here is a version of the Get-ExCommand function that retrieves all of the Microsoft Exchange commands.
Function Get-ExCommand
{
Get-Command -Module $global:importresults |
Where-Object { $_.commandtype -eq 'function' -AND $_.name -match '-'}
} #end function Get-ExCommand
I copied the portion of the function that retrieves the module name from the $global namespace. It came from the contents of the Get-ExCommand function from the server running Exchange Server 2010. One of the nice things about functions is that they allow the code to be read.
I added the Where-Object to filter out only the functions. In addition, I added the match clause to look for a “-“ in the function name. This portion arose because of the functions that set the working location to the various drive letters.
To search for Exchange cmdlets that work with the database requires the following syntax.
Get-ExCommand | where { $_.name -match 'database'}
That is not too bad, but if I need to type it on a regular basis, it rapidly becomes annoying.
In the original Get-ExCommand function, the function uses the $args automatic variable to determine the presence of an argument to the function. When an argument exists, the function uses that and attempts to use the Get-Command cmdlet to retrieve a CmdletInfo object for the command in question. This is helpful because it allows the use of wildcards to discover applicable Windows PowerShell cmdlets for specific tasks.
I decided to add a similar capability to my version of the Get-ExCommand function, but instead of using the $args variable, I created a command-line parameter named Name. To me, it makes the script easier to read. The following is the content of the Get-ExCommand function.
Function Get-ExCommand
{
Param ([string]$name)
If(!($name))
{Get-Command -Module $global:importresults |
Where-Object { $_.commandtype -eq 'function' -AND $_.name -match '-'} }
Else
{Get-Command -Module $global:importresults |
Where-Object { $_.commandtype -eq 'function' -AND
$_.name -match '-' -AND $_.name -match $name} }
} #end function Get-ExCommand
The first thing the Get-ExCommand function does is to create the $name parameter. Next, the if statement checks to see if the $name parameter exists on the command line. If it does not exist, the same syntax the previous version utilized appears. If the $name parameter does exist, an additional clause to match the value of the $name parameter appears.
The following code illustrates searching for all Exchange commands related to the database.
Get-ExCommand database
The image that follows illustrates using the Get-ExCommand function, and the associated output.
The complete Get-ExCommand function, including comment-based Help, appears in the Scripting Guys Script Repository.
JM, that is all there is to gaining access to the Get-ExCommand command in a remote Windows PowerShell session. Join me tomorrow for more cool stuff. Until then, keep on
|
https://devblogs.microsoft.com/scripting/gain-remote-access-to-the-get-excommand-exchange-command/
|
CC-MAIN-2019-47
|
en
|
refinedweb
|
Support for training models.
Modules
queue_runner module: Public API for tf.train.queue_runner namespace.
Classes
class AdadeltaOptimizer: Optimizer that implements the Adadelta algorithm.
class AdagradDAOptimizer: Adagrad Dual Averaging algorithm for sparse linear models.
class AdagradOptimizer: Optimizer that implements the Adagrad algorithm.
class AdamOptimizer: Optimizer that implements the Adam algorithm.
class BytesList: A ProtocolMessage
class Checkpoint: Groups checkpointable objects, saving and restoring them.
class CheckpointManager: Deletes old checkpoints.
class CheckpointSaverHook: Saves checkpoints every N steps or seconds.
class CheckpointSaverListener: Interface for listeners that take action before or after checkpoint save.
class ChiefSessionCreator: Creates a tf.Session for a chief.
class JobDef: A ProtocolMessage
class LoggingTensorHook: Prints the given tensors every N local steps, every N seconds, or at end.
class LooperThread: A thread that runs code repeatedly, optionally on a timer.
class MomentumOptimizer: Optimizer that implements the Momentum algorithm.
class MonitoredSession: Session-like object that handles initialization, recovery and hooks.
class NanLossDuringTrainingError: Unspecified run-time error.
class NanTensorHook: Monitors the loss tensor and stops training if loss is NaN.
class Optimizer: Base class for optimizers.
class ProfilerHook: Captures CPU/GPU profiling information every N steps or seconds.
class ProximalAdagradOptimizer: Optimizer that implements the Proximal Adagrad algorithm.
class ProximalGradientDescentOptimizer: Optimizer that implements the proximal gradient descent algorithm.
class QueueRunner: Holds a list of enqueue operations for a queue, each to be run in a thread.
class RMSPropOptimizer: Optimizer that implements the RMSProp algorithm.
class Saver: Saves and restores variables.
class SaverDef: A ProtocolMessage
class Scaffold: Structure to create or gather pieces commonly needed to train a model.
class SecondOrStepTimer: Timer that triggers at most once every N seconds or once every N steps.
class SequenceExample: A ProtocolMessage
class Server: An in-process TensorFlow server, for use in distributed training.
class ServerDef: A ProtocolMessage
class SessionCreator: A factory for tf.Session.
class SessionManager: Training helper that restores from checkpoint and creates session.
class SingularMonitoredSession: Session-like object that handles initialization, restoring, and hooks.
class StepCounterHook: Hook that counts steps per second.
class StopAtStepHook: Hook that requests stop at a specified step.
class SummarySaverHook: Saves summaries every N steps.
class Supervisor: A training helper that checkpoints models and computes summaries.
class SyncReplicasOptimizer: Class to synchronize, aggregate gradients and pass them to the optimizer.
class VocabInfo: Vocabulary information for warm-starting.
class WorkerSessionCreator: Creates a tf.Session for a worker.
Functions
MonitoredTrainingSession(...): Creates a MonitoredSession for training.
add_queue_runner(...): Adds a QueueRunner to a collection in the graph. (deprecated)
assert_global_step(...): Asserts global_step_tensor is a scalar int Variable or Tensor.
basic_train_loop(...): Basic loop to train a model.
batch(...): Creates batches of tensors in tensors. (deprecated)
batch_join(...): Runs a list of tensors to fill a queue to create batches of examples. (deprecated)
checkpoint_exists(...): Checks whether a V1 or V2 checkpoint exists with the specified prefix. (deprecated)
export_meta_graph(...): Returns MetaGraphDef proto. Optionally writes it to filename.
list_variables(...): Returns list of all variables in the checkpoint.
summary_iterator(...): An iterator for reading Event protocol buffers from an event file.
update_checkpoint_state(...): Updates the content of the 'checkpoint' file. (deprecated)
warm_start(...): Warm-starts a model using the given settings.
write_graph(...): Writes a graph proto to a file.
|
https://www.tensorflow.org/versions/r1.13/api_docs/python/tf/train?hl=pl
|
CC-MAIN-2019-47
|
en
|
refinedweb
|
Building Secure ASP.NET Applications: Authentication, Authorization, and Secure Communication
Cryptography and Certificates.
See the Landing Page for the starting point and a complete overview of Building Secure ASP.NET Applications.
Summary: This section provides an overview of certificates, certificate stores, cryptography and cryptography support in the .NET Framework. (7 printed pages)
Contents
Keys and Certificates
Cryptography
Summary
Keys and Certificates
Asymmetric encryption uses a public/private key pair. Data encrypted with the private key can be decrypted only with the corresponding public key and vice versa.
Public keys (as their name suggests) are made generally available. Conversely, a private key remains private to a specific individual. The distribution mechanism by which public keys are transported to users is a certificate. Certificates are normally signed by a certification authority (CA) in order to confirm that the public key is from the subject who claims to have sent the public key. The CA is a mutually trusted entity.
The typical implementation of digital certification involves a process for signing the certificate. The process is shown in Figure 1.
Figure 1. Digital certification process
The sequence of events shown in Figure 1 is as follows:
- Alice sends a signed certificate request containing her name, her public key, and perhaps some additional information to a CA.
- The CA creates a message from Alice's request. The CA signs the message with its private key, creating a separate signature. The CA returns the message and the signature to Alice. Together, the message and signature form Alice's certificate.
- Alice sends her certificate to Bob to give him access to her public key.
- Bob verifies the certificate's signature, using the CA's public key. If the signature proves valid, he accepts the public key in the certificate as Alice's public key.
As with any digital signature, any receiver with access to the CA's public key can determine whether a specific CA signed the certificate. This process requires no access to any secret information. The preceding scenario assumes that Bob has access to the CA's public key. Bob would have access to that key if he has a copy of the CA's certificate that contains that public key.
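The same sign-then-verify idea can be expressed directly with the .NET asymmetric classes discussed later in this appendix. The following is a minimal, illustrative sketch (my own addition, not part of the original text): it signs a byte array with a freshly generated RSA key pair and then verifies the signature with the public half of that same pair.

using System;
using System.Security.Cryptography;
using System.Text;

class SignatureDemo
{
    static void Main()
    {
        byte[] data = Encoding.UTF8.GetBytes("message from Alice");

        using (RSACryptoServiceProvider rsa = new RSACryptoServiceProvider())
        {
            // Sign the data with the private key held inside the provider...
            byte[] signature = rsa.SignData(data, new SHA1CryptoServiceProvider());

            // ...then verify it using the public portion of the same key pair.
            bool valid = rsa.VerifyData(data, new SHA1CryptoServiceProvider(), signature);
            Console.WriteLine("Signature valid: " + valid);
        }
    }
}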
X.509 Digital Certificates
X.509 digital certificates include not only a user's name and public key, but also other information about the user. These certificates are more than stepping stones in a digital hierarchy of trust. They enable the CA to give a certificate's receiver a means of trusting not only the public key of the certificate's subject, but also that other information about the certificate's subject. That other information can include, among other things, an e-mail address, an authorization to sign documents of a given value, or the authorization to become a CA and sign other certificates.
X.509 certificates and many other certificates have a valid time duration. A certificate can expire and no longer be valid. A CA can revoke a certificate for a number of reasons. To handle revocations, a CA maintains and distributes a list of revoked certificates called a Certificate Revocation List (CRL). Network users access the CRL to determine the validity of a certificate.
Certificate Stores
Certificates are stored in safe locations called certificate stores. A certificate store can contain certificates, CRLs, and Certificate Trust Lists (CTLs). Each user has a personal store (called the "MY store") where that user's certificates are stored. The MY store can be physically implemented in a number of locations including the registry, on a local or remote computer, a disk file, a database, a directory service, a smart device, or another location.
While any certificate can be stored in the MY store, this store should be reserved for a user's personal certificates, that is, the certificates used for signing and decrypting that particular user's messages.
In addition to the MY store, Windows also maintains the following certificate stores:
- CA and ROOT. This store contains the certificates of certificate authorities that the user trusts to issue certificates to others. A set of trusted CA certificates is supplied with the operating system, and others can be added by administrators.
- Other. This store contains the certificates of other people with whom the user exchanges signed messages.
The CryptoAPI provides functions to manage certificates. These APIs can be accessed only through unmanaged code. Also, CAPICOM is a COM-based API for the CryptoAPI, which can be accessed via COM Interop.
Note The .NET Framework 2.0 supports classes such as X509Store and X509Certificate2 in the System.Security.Cryptography.X509Certificates namespace for managing the certificates. You do not need to use unmanaged code. For more information, see "System.Security.Cryptography.X509Certificates namespace."
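As a rough usage sketch of those managed classes (my own illustration, not from the original text), the following opens the current user's MY store and validates each certificate's chain; the store location and console output are illustrative only.

using System;
using System.Security.Cryptography.X509Certificates;

class CertificateStoreDemo
{
    static void Main()
    {
        // Open the current user's personal ("MY") certificate store read-only.
        X509Store store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
        store.Open(OpenFlags.ReadOnly);

        foreach (X509Certificate2 cert in store.Certificates)
        {
            // Build and validate the certificate chain up to a trusted root CA.
            X509Chain chain = new X509Chain();
            bool isValid = chain.Build(cert);
            Console.WriteLine(cert.Subject + " valid: " + isValid);
        }

        store.Close();
    }
}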
More Information
For more information, see Cryptography, CryptoAPI, and CAPICOM.
Cryptography
Cryptography is used to provide the following:
- Confidentiality. To ensure data remains private. Confidentiality is usually provided by encryption.
- Data integrity. To ensure data is protected from accidental or deliberate (malicious) modification. Integrity is usually provided by message authentication codes.
- Authentication. To assure that data originates from a particular party. Digital certificates are used to provide authentication. Digital signatures are usually applied to hash values as these are significantly smaller than the source data that they represent.
Cryptography in .NET
The System.Security.Cryptography namespace provides cryptographic services, including secure encoding and decoding of data, hashing, random number generation, and message authentication.
The .NET Framework provides implementations of many standard cryptographic algorithms and these can be easily extended because of the well-defined inheritance hierarchy consisting of abstract classes that define the basic algorithm types—symmetric, asymmetric and hash algorithms, together with algorithm classes.
Table 1. Algorithms for which the .NET Framework provides implementation classes "out of the box"
Symmetric algorithm support
.NET provides the following implementation classes that provide symmetric, secret key encryption algorithms:
- DESCryptoServiceProvider
- RC2CryptoServiceProvider
- RijndaelManaged
- TripleDESCryptoServiceProvider
Note The classes that end with "CryptoServiceProvider" are wrappers that use the underlying services of the cryptographic service provider (CSP) and the classes that end with "Managed" are implemented in managed code.
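For illustration only (this sketch is mine, not from the original text), the following shows the typical pattern for using one of these symmetric classes together with a CryptoStream; the caller is assumed to supply a key and IV of valid sizes for the chosen algorithm.

using System.IO;
using System.Security.Cryptography;

static class SymmetricEncryptionDemo
{
    // Encrypts plaintext with Rijndael (AES); key and IV sizes must match the algorithm
    // (for example, a 32-byte key and a 16-byte IV).
    public static byte[] Encrypt(byte[] plaintext, byte[] key, byte[] iv)
    {
        using (SymmetricAlgorithm alg = new RijndaelManaged())
        using (ICryptoTransform encryptor = alg.CreateEncryptor(key, iv))
        using (MemoryStream output = new MemoryStream())
        {
            using (CryptoStream cryptoStream = new CryptoStream(output, encryptor, CryptoStreamMode.Write))
            {
                cryptoStream.Write(plaintext, 0, plaintext.Length);
            } // disposing the CryptoStream flushes the final padded block
            return output.ToArray();
        }
    }
}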
Figure 2 shows the inheritance hierarchy adopted by the .NET Framework. The algorithm type base class (for example, SymmetricAlgorithm) is abstract. A set of abstract algorithm classes derive from the abstract type base class. Algorithm implementation classes provide concrete implementations of the selected algorithm; for example DES, Triple-DES, Rijndael and RC2.
Figure 2. The symmetric crypto class inheritance hierarchy
Asymmetric algorithm support
.NET provides the following asymmetric (public/private key) encryption algorithms through the abstract base class (System.Security.Cryptography.AsymmetricAlgorithm):
- DSACryptoServiceProvider
- RSACryptoServiceProvider
These are used to digitally sign and encrypt data. Figure 3 shows the inheritance hierarchy.
Figure 3. The asymmetric crypto class inheritance hierarchy
Hashing algorithm support
.NET provides the following hash algorithms:
- SHA1, SHA256, SHA384, SHA512
- MD5
- HMACSHA1 (Keyed Hash algorithm)
- MACTripleDES (Keyed Hashed algorithm)
Figure 4 shows the inheritance hierarchy for the hash algorithm classes.
Figure 4. The hash crypto class inheritance hierarchy
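As a small usage sketch (again my own, not part of the original appendix), hashing and keyed hashing with these classes look like this; the input strings are placeholders.

using System;
using System.Security.Cryptography;
using System.Text;

class HashingDemo
{
    static void Main()
    {
        byte[] data = Encoding.UTF8.GetBytes("message to protect");

        // One-way digest with SHA-256.
        using (SHA256 sha = SHA256.Create())
        {
            Console.WriteLine(BitConverter.ToString(sha.ComputeHash(data)));
        }

        // Keyed hash (message authentication code) with HMAC-SHA1.
        byte[] key = Encoding.UTF8.GetBytes("shared secret");   // illustrative key only
        using (HMACSHA1 hmac = new HMACSHA1(key))
        {
            Console.WriteLine(BitConverter.ToString(hmac.ComputeHash(data)));
        }
    }
}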
Summary
Cryptography is an important technology for building secure Web applications. This appendix has covered some of the fundamentals of certificates and cryptography and has introduced some of the classes exposed by the System.Security.Cryptography namespace, which enable you to more easily incorporate cryptographic security solutions into your .NET applications.
For more information about cryptography in .NET, see .NET Framework Cryptography Model.
|
https://docs.microsoft.com/en-us/previous-versions/msp-n-p/ff649260(v=pandp.10)?redirectedfrom=MSDN
|
CC-MAIN-2019-47
|
en
|
refinedweb
|
Default Configuring Methods of Start-Up Class
When we create a new ASP.NET Core application, our Startup class looks like this.
public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc();
    }

    public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
    {
        app.UseMvc(routes =>
        {
            routes.MapRoute(
                name: "default",
                template: "{controller=Home}/{action=Index}/{id?}");
        });
    }
}
The ConfigureServices() method adds services to the container, and the Configure() method configures the request pipeline.
Configuring Multiple Environments
It’s possible we need to configure our application based on the environment where the application is running. We may want to use some specific services and features in development mode and some others in release mode. One option is to use default configuring methods and #ifdef checks, but these make our code look ugly.
ASP.NET Core allows us to define special versions of configuring methods:
- Configure<ENVIRONMENT>Services()
- Configure<ENVIRONMENT>()
<ENVIRONMENT> is the name of the environment where the application runs. For example, with debug builds, we can use Development as the environment name. It gives us the following two methods:
- ConfigureDevelopmentServices()
- ConfigureDevelopment()
In the same way, we can use Staging and Production as environment names. We can try out these methods with some lines of additional code.
public void ConfigureDevelopmentServices(IServiceCollection services)
{
    Debug.WriteLine("ConfigureDevelopmentServices");
    ConfigureServices(services);
}

public void ConfigureDevelopment(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
    Debug.WriteLine("ConfigureDevelopment");
    Configure(app, env, loggerFactory);
}
We can run the application and check the output in the debug window, or we can just use breakpoints to see if the application is stopping on these when we run it.
Adding Custom Environments
It’s possible we need support for additional environments. Behind the curtains, ASP.NET Core uses the ASPNETCORE_ENVIRONMENT environment variable to find out what type of environment it is currently running in. We can set a value to this variable on the Debug settings page of project properties.
And methods for environment called “Custom” are here:
public void ConfigureCustomServices(IServiceCollection services)
{
    Debug.WriteLine("ConfigureCustomServices");
    ConfigureServices(services);
}

public void ConfigureCustom(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
    Debug.WriteLine("ConfigureCustom");
    Configure(app, env, loggerFactory);
}
This way we can add methods for as many environments as we want.
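For comparison (this sketch is mine, not from the original post), the classic alternative is to branch on the environment name inside a single Configure() method via IHostingEnvironment. It works, but it is exactly the kind of repeated environment check that the per-environment methods let us avoid.

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage();
    }
    else if (env.IsEnvironment("Custom"))
    {
        // Middleware that should only run when ASPNETCORE_ENVIRONMENT=Custom.
    }

    // Common pipeline configuration continues here.
}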
Wrapping Up
We can write configuration methods based on the environment name and use default ones as fallbacks for other environments. This way we can avoid ugly code that has to go through several environment name checks. The environment name is held in the ASPNETCORE_ENVIRONMENT variable, and the value can be set on the project properties page. Using environment based configuration methods we can keep our code cleaner.
{{ parent.title || parent.header.title}}
{{ parent.tldr }}
{{ parent.linkDescription }}{{ parent.urlSource.name }}
|
https://dzone.com/articles/aspnet-core-environment-based-configuring-methods?fromrel=true
|
CC-MAIN-2019-47
|
en
|
refinedweb
|
Curated lists of tags and attributes for sanitizing html
Project description
A curated list of tags, attributes, and styles suitable for filtering user-provided HTML using bleach.
Currently, it consists of a basic set of tags suitable for rendering markdown, and markup intended for printing, as well as a list of all CSS styles. Please send pull requests with improvements or lists of tags and attributes for other purposes (wikis, comments, etc.).
Installation
pip install bleach-whitelist
Use
import bleach
from bleach_whitelist import print_tags, print_attrs, all_styles

bleach.clean(raw_html, print_tags, print_attrs, all_styles)
Properties:
- markdown_tags: Safe HTML tags needed to render markdown-style markup.
- markdown_attrs: Safe attributes needed to render markdown-style markup.
- print_tags: Safe HTML tags suitable for printing / PDFs.
- print_attrs: Safe attributes suitable for printing / PDFs.
- all_styles: A list of all CSS properties supported by major browsers.
- standard_styles: A list of standard (non-vendor-specific) CSS properties.
See bleach_whitelist.py for more.
Have improvements or lists of tags suitable for other purposes? Please send a pull request! Let’s build a few good task-specific whitelists rather than reinventing these lists every time.
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
|
https://pypi.org/project/bleach-whitelist/
|
CC-MAIN-2019-47
|
en
|
refinedweb
|
September 2017
Volume 32 Number 9
[Data Points]
DDD-Friendlier EF Core 2.0
By Julie Lerman
If you’ve been following this column for a while, you may have noticed quite a few articles about implementing Entity Framework (EF) when building solutions that lean on Domain-Driven Design (DDD) patterns and guidance. Even though DDD is focused on the domain, not on how data is persisted, at some point you need data to flow in and out of your software.
In past iterations of EF, its patterns (conventional or customized) have allowed users to map uncomplicated domain classes directly to the database without too much friction. And my guidance has generally been that if the EF mapping layer takes care of getting your nicely designed domain models into and out of the database without having to create an additional data model, this is sufficient. But at the point when you find yourself tweaking your domain classes and logic to make them work better with Entity Framework, that’s a red flag that it’s time to create a model for data persistence and then map from the domain model classes to the data model classes.
I was a little surprised to realize how long ago those articles about DDD and EF mappings came out. It’s been four years since I wrote the three-part series called “Coding for Domain-Driven Design: Tips for Data-Focused Devs,” which span the August, September and October 2013 issues of MSDN Magazine. Here’s a link to the first part, which includes links to the entire series: msdn.com/magazine/dn342868.
There were two specific columns that addressed DDD patterns and explained how EF did or didn’t easily map between your domain classes and database. As of EF6, one of the biggest issues was the fact that you couldn’t encapsulate and thereby protect a “child” collection. Using the known patterns for protecting the collection (most often this meant employing an IEnumerable) didn’t align with EF’s requirements, and EF wouldn’t even recognize that the navigation should be part of the model. Steve Smith and I spent a lot of time thinking about this when we created our Pluralsight course, Domain-Driven Design Fundamentals (bit.ly/PS-DDD) and eventually Steve came up with a nice workaround (bit.ly/2ufw89D).
EF Core finally solved this problem with version 1.1 and I wrote about this new feature in the January 2017 column (msdn.com/magazine/mt745093). EF Core 1.0 and 1.1 also resolved a few other DDD constraints but left some gaps—most notably the inability to map DDD value objects used in your domain types. The ability to do this had existed in EF since its beginning, but it hadn’t yet been brought to EF Core. But with the upcoming EF Core 2.0, that limitation is now gone.
What I’m going to do in this article is lay out the EF Core 2.0 features available to you that align with many of the DDD concepts. EF Core 2.0 is much friendlier to developers who are leveraging these concepts and perhaps it will introduce you to them for the first time. Even if you don’t plan to embrace DDD, you can still benefit from its many great patterns! And now you can do that even more with EF Core.
One-to-One Gets Smarter
In his book, “Domain-Driven Design,” Eric Evans says, “A bidirectional association means that both objects can be understood only together. When application requirements do not call for traversal in both directions, adding a traversal direction reduces interdependence and simplifies the design.” Following this guidance has indeed removed side effects in my code. EF has always been able to handle uni-directional relationships with one-to-many and one-to-one. In fact, while writing this article, I learned I was wrong in my longtime belief that a one-to-one relationship in which both ends are required forces you into a bi-directional relationship. However, you did have to explicitly configure those required relationships, and that’s something you don’t have to do now in EF Core, except in edge cases.
An unfavorable requirement in EF6 for one-to-one relationships was that the key property in the dependent type had to double as the foreign key back to the principal entity. This forced you to design classes in an odd way, even if you got used to it. Thanks to the introduction of support for unique foreign keys in EF Core, you can now have an explicit foreign key property in the dependent end of the one-to-one relationship. Having an explicit foreign key is more natural. And in most cases, EF Core should be able to correctly infer the dependent end of the relationship based on the existence of that foreign key property. If it doesn’t get it right because of some edge case, you’ll have to add a configuration, which I’ll demonstrate shortly when I rename the foreign key property.
To demonstrate a one-to-one relationship, I’ll use my favorite EF Core domain: classes from the movie “Seven Samurai”:
public class Samurai {
  public int Id { get; set; }
  public string Name { get; set; }
  public Entrance Entrance { get; set; }
}

public class Entrance {
  public int Id { get; set; }
  public string SceneName { get; set; }
  public int SamuraiId { get; set; }
}
Now with EF Core, this pair of classes—Samurai and Entrance (the character’s first appearance in the movie)—will be correctly identified as a uni-directional one-to-one relationship, with Entrance being the dependent type. I don’t need to include a navigation property in the Entrance and I don’t need any special mapping in the Fluent API. The foreign key (SamuraiId) follows convention, so EF Core is able to recognize the relationship.
EF Core infers that in the database, Entrance.SamuraiId is a unique foreign key pointing back to Samurai. Keep in mind something I struggled with because (as I have to continually remind myself), EF Core is not EF6! By default, .NET and EF Core will treat the Samurai.Entrance as an optional property at run time unless you have domain logic in place to enforce that Entrance is required. Starting with EF4.3, you had the benefit of the validation API that would respond to a [Required] annotation in the class or mapping. But there is no validation API (yet?) in EF Core to watch for that particular problem. And there are other requirements that are database-related. For example, Entrance.SamuraiId will be a non-nullable int. If you try to insert an Entrance without a SamuraiId value populated, EF Core won’t catch the invalid data, which also means that the InMemory provider currently doesn’t complain. But your relational database should throw an error for the constraint conflict.
From a DDD perspective, however, this isn’t really a problem because you shouldn’t be relying on the persistence layer to point out errors in your domain logic. If the Samurai requires an Entrance, that’s a business rule. If you can’t have orphaned Entrances, that’s also a business rule. So the validation should be part of your domain logic anyway.
For those edge cases I suggested earlier, here’s an example. If the foreign key in the dependent entity (for example, Entrance) doesn’t follow convention, you can use the Fluent API to let EF Core know. If Entrance.SamuraiId was, perhaps Entrance.SamuraiFK, you can clarify that FK via:
modelBuilder.Entity<Samurai>().HasOne(s=>s.Entrance) .WithOne().HasForeignKey<Entrance>(e=>e.SamuraiFK);
If the relationship is required on both ends (that is, Entrance must have a Samurai) you can add IsRequired after WithOne.
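For example, a minimal sketch of that required configuration (my own addition, building on the mapping above with the hypothetical SamuraiFK property) might look like this:

modelBuilder.Entity<Samurai>().HasOne(s => s.Entrance)
  .WithOne()
  .HasForeignKey<Entrance>(e => e.SamuraiFK)
  .IsRequired();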
Properties Can Be Further Encapsulated
DDD guides you to build aggregates (object graphs), where the aggregate root (the primary object in the graph) is in control of all of the other objects in the graph. That means writing code that prevents other code from misusing or even abusing the rules. Encapsulating properties so they can’t be randomly set (and, often, randomly read) is a key method of protecting a graph. In EF6 and earlier, it was always possible to make scalar and navigation properties have private setters and still be recognized by EF when it read and updated data, but you couldn’t easily make the properties private. A post by Rowan Miller shows one way to do it in EF6 and links back to some earlier workarounds (bit.ly/2eHTm2t). And there was no true way to protect a navigation collection in a one-to-many relationship. Much has been written about this latter problem. Now, not only can you easily have EF Core work with private properties that have backing fields (or inferred backing fields), but you can also truly encapsulate collection properties, thanks to support for mapping IEnumerable<T>. I wrote about the backing fields and IEnumerable<T> in my previously mentioned January 2017 column, so I won’t rehash the details here. However, this is very important to DDD patterns and therefore relevant to note in this article.
While you can hide scalars and collections, there’s one other type of property you may very well want to encapsulate—navigation properties. Navigation collections benefit from the IEnumerable<T> support, but navigation properties that are private, such as Samurai.Entrance, can’t be comprehended by the model. However, there is a way to configure the model to comprehend a navigation property that’s hidden in the aggregate root.
For example, in the following code I declared Entrance as a private property of Samurai (and I’m not even using an explicit backing field, though I could if needed). You can create a new Entrance with the CreateEntrance method (which calls a factory method in Entrance) and you can only read an Entrance’s SceneName property. Note that I’m employing the C# 6 null-conditional operator to prevent an exception if I haven’t yet loaded the Entrance:
private Entrance Entrance { get; set; }

public void CreateEntrance (string sceneName) {
  Entrance = Entrance.Create (sceneName);
}

public string EntranceScene => Entrance?.SceneName;
By convention, EF Core won’t make a presumption about this private property. Even if I had the backing field, the private Entrance wouldn’t be automatically discovered and you wouldn’t be able to use it when interacting with the data store. This is an intentional API design to help protect you from potential side effects. But you can configure it explicitly. Remember that when Entrance is public, EF Core is able to comprehend the one-to-one relationship. However, because it’s private you first need to be sure that EF knows about this.
In OnModelCreating, you need to add the HasOne/WithOne fluent mapping to make EF Core aware. Because Entrance is private, you can’t use a lambda expression as a parameter of HasOne. Instead, you have to describe the property by its type and its name. WithOne normally takes a lambda expression to specify the navigation property back to the other end of the pairing. But Entrance doesn’t have a Samurai navigation property, just the foreign key. That’s fine! You can leave the parameter empty because EF Core now has enough information to figure it out:
modelBuilder.Entity<Samurai> () .HasOne (typeof (Entrance), "Entrance").WithOne();
What if you use a backing property, such as _entrance in the Samurai class, as shown in these changes:
private Entrance _entrance;
private Entrance Entrance { get { return _entrance; } }

public void CreateEntrance (string sceneName) {
  _entrance = Entrance.Create (sceneName);
}

public string EntranceScene => _entrance?.SceneName;
EF Core will figure out that it needs to use the backing field when materializing the Entrance property. This is because, as Arthur Vickers explained in the very long conversation we had on GitHub while I was learning about this, if "there is a backing field and there is no setter, EF just uses the backing field [because] there is nothing else it can use." So it just works.
If that backing field name doesn’t follow convention, if, for example, you named it _foo, you will need a metadata configuration:
modelBuilder.Entity<Samurai> () .Metadata .FindNavigation ("Entrance") .SetField("_foo");
Now updates to the database and queries will be able to work out that relationship. Keep in mind that if you want to use eager loading, you’ll need to use a string for Entrance because it can’t be discovered by the lambda expression; for example:
var samurai = context.Samurais.Include("Entrance").FirstOrDefault();
You can use the standard syntax for interacting with backing fields for things like filters, as shown at the bottom of the Backing Fields documentation page at bit.ly/2wJeHQ7.
Value Objects Are Now Supported
Value objects are an important concept for DDD as they allow you to define domain models as value types. A value object doesn’t have its own identity and becomes part of the entity that uses it as a property. Consider the string value type, which is made up of a series of characters. Because changing even a single character changes the meaning of the word, strings are immutable. In order to change a string, you must replace the entire string object. DDD guides you to consider using value objects anywhere you’ve identified a one-to-one relationship. You can learn more about value objects in the DDD Fundamentals course I mentioned earlier.
EF always supported the ability to include value objects through its ComplexType type. You could define a type with no key and use that type as a property of an entity. That was enough to trigger EF to recognize it as a ComplexType and map its properties into the table to which the entity is mapped. You could then extend the type to also have features required of a value object, such as ensuring the type is immutable and a means to assess every property when determining equality and overriding the Hash. I often derive my types from Jimmy Bogard’s ValueObject base class to quickly adopt these attributes.
A person’s name is a type that’s commonly used as a value object. You can ensure that any time someone wants to have a person’s name in an entity, they always follow a common set of rules. Figure 1 shows a simple PersonName class that has First and Last properties—both fully encapsulated—as well as a property to return a FullName. The class is designed to ensure both parts of the name are always supplied.
Figure 1 The PersonName Value Object
public class PersonName : ValueObject<PersonName>
{
  public static PersonName Create (string first, string last)
  {
    return new PersonName (first, last);
  }

  private PersonName () { }

  private PersonName (string first, string last)
  {
    First = first;
    Last = last;
  }

  public string First { get; private set; }
  public string Last { get; private set; }

  public string FullName => First + " " + Last;
}
I can use PersonName as a property in other types and continue to flesh out additional logic in the PersonName class. The beauty of the value object over a one-to-one relationship here is that I don’t have to maintain the relationship when I’m coding. This is standard object-oriented programming. It’s just another property. In the Samurai class, I’ve added a new property of this type, made its setter private and provided another method named Identify to use instead of the setter:
public PersonName SecretIdentity { get; private set; }

public void Identify (string first, string last)
{
  SecretIdentity = PersonName.Create (first, last);
}
Until EF Core 2.0, there was no feature similar to ComplexTypes, so you couldn’t easily use value objects without adding in a separate data model. Rather than just reimplement the ComplexType in EF Core, the EF team created a concept called owned entities, which leverages another EF Core feature, shadow properties. Now, owned entities are recognized as additional properties of the types that own them and EF Core understands how they resolve in the database schema and how to build queries and updates that respect that data.
EF Core 2.0 convention won’t automatically discover that this new SecretIdentity property is a type to be incorporated into the persisted data. You’ll need to explicitly tell the DbContext that the Samurai.SecretIdentity property is an owned entity in DbContext.OnModelCreating using the OwnsOne method:
protected override void OnModelCreating (ModelBuilder modelBuilder)
{
  modelBuilder.Entity<Samurai>().OwnsOne(s => s.SecretIdentity);
}
This forces the properties of PersonName to resolve as properties of Samurai. While your code will use the Samurai.SecretIdentity type and navigate through that to the First and Last properties, those two properties will resolve as columns in the Samurais database table. EF Core convention will name them with the name of the property in Samurai (SecretIdentity) and the name of the owned entity property, as shown in Figure 2.
Figure 2 The Schema of the Samurais Table, Including the Properties of the Value
Now I can identify a Samurai’s secret name and save it with code similar to this:
using (var context = new SamuraiContext())
{
  var samurai = new Samurai { Name = "HubbieSan" };
  samurai.Identify ("Late", "Todinner");
  context.Samurais.Add (samurai);
  context.SaveChanges ();
}
In the data store, "Late" gets persisted into the SecretIdentity_First field and "Todinner" into the SecretIdentity_Last field.
Then I can simply query for a Samurai:
var samurai = await context.Samurais
  .FirstOrDefaultAsync (s => s.Name == "HubbieSan");
EF Core will assure that the resulting Samurai’s SecretIdentity property is populated and I can then see the identity by requesting:
samurai.SecretIdentity.FullName
EF Core requires that properties that are owned entities are populated. In the sample download, you’ll see how I designed the PersonName type to accommodate that.
Simple Classes for Simple Lessons
What I’ve shown you here are simple classes that leverage some of the core concepts of a DDD implementation in the most minimal way that lets you see how EF Core responds to those constructs. You’ve seen that EF Core 2.0 is able to comprehend one-to-one uni-directional relationships. It can persist data from entities where scalar, navigation and collection properties are fully encapsulated. It also allows you to use value objects in your domain model and is able to persist those, as well.
For this article, I’ve kept the classes simple and lacking in additional logic that more correctly constrains the entities and value objects using DDD patterns. This simplicity is reflected in the download sample, which is also on GitHub at bit.ly/2tDRXwi. There you’ll find both the simple version and an advanced branch where I’ve tightened down this domain model and applied some additional DDD practices to the aggregate root (Samurai), its related entity (Entrance) and the value object (PersonName) so you can see how EF Core 2.0 handles a more realistic expression of a DDD aggregate. In an upcoming column, I’ll discuss the advanced patterns applied in that branch.
Please keep in mind that I’m using a version of EF Core 2.0 shortly before its final release. While most of the behaviors I’ve laid out are solid, there’s still the possibility of minor tweaks before 2.0.0 is released.
Julie Lerman is a Microsoft Regional Director, Microsoft MVP, and software team mentor.
Discuss this article in the MSDN Magazine forum
|
https://docs.microsoft.com/en-us/archive/msdn-magazine/2017/september/data-points-ddd-friendlier-ef-core-2-0
|
CC-MAIN-2019-47
|
en
|
refinedweb
|
Windows Notepad Finally Supports Unix, Mac OS Line Endings (theregister.co.uk) 291
Microsoft's text editing app, Notepad, which has been shipping with Windows since version 1.0 in 1985, now supports line endings in text files created on Linux, Unix, Mac OS, and macOS devices. "This has been a major annoyance for developers, IT Pros, administrators, and end users throughout the community," Microsoft said in a blog post today. The Register reports:
Notepad++ ? (Score:5, Interesting)
Re: (Score:2)
All users caring about line endings had probably migrated to Notepad++ 10 years ago, right ?
And yet this is a godsend when working on other people's machines which *don't* have Notepad++
Even wordpad sucks these days.
Re: (Score:2)
And yet this is a godsend
Is it, though? I think it is worse than nothing.
The problem is that it will now read LF, but any new line you put into the text will still have a CR+LF.
So earlier, when you opened a Unix style text file in Notepad, you would notice that it was LF-based because everything was on one line. So you would open it with Wordpad or something else instead.
Now, on the other hand, you will open it, see nothing amiss, modify it, and save it, and because the new lines you made will have CR+LF, it may break the system
Re: (Score:2)
The problem is that it will now read LF, but any new line you put into the text will still have a CR+LF.
So don't edit python scripts. Seriously though if you're doing something sensitive to CR vs CR+LF then Notepad is the wrong thing to use to edit a file and you'll know it's wrong too. The biggest problem with CR vs CR+LF is being unable to read files (a universal problem) and not that the LF will break the system (a problem that affects an incredibly minor set of possible scenarios exposed to an incredible minor part of the userbase and a part of the user base that is a) most equipped to handle it, and b) l
Re: (Score:2)
That's worse than nothing in my opinion. A typical Microsoft "solution".
This typical Microsoft "solution": "New files created within Notepad will use Windows line ending (CRLF) by default, but it will now be possible to view, edit, and print existing files, correctly maintaining the file’s current line ending format."
It actually is funny to see prejudice just blow up in people's faces.... [microsoft.com]
Re:Notepad++ ? (Score:5, Funny)
it is a must have on your usb flash drive of tools and utilities
Lol, found the Windows admin.
Re: (Score:3)
Well yeah actually, because with Unix you either SSH into the box remotely, or your toolkit consists of a single liveUSB. Real Unix Admins(tm) can restore the whole system from deletion [ryerson.ca] with a half-working copy of cat and no filesystem, of course.
Re:Notepad++ ? (Score:5, Insightful)
it is a must have on your usb flash drive
It's faster to download it and run it as a portable than it is to mail a USB drive to the computer you're supporting.
Know what's even faster? Having the default text editor able to display text correctly.
Re: (Score:2)
If you're required to edit plaintext on other people's computers - I feel sorry for you.
Not needed for linux setups - you ssh in (or sshmount their fs) and do all work from your own office/computer. No getting used to their keyboard setup or whatever.
Well, it's great if that works in your setup. But you don't always have complete control.
We have some 10.000 Linux server appliances running within our customers networks. We don't have direct access to most of those. So if we need to troubleshoot anything, we need to ask our customers to grant us access to their local network via TeamViewer or such like and then connect with putty (as practically everyone uses Windows).
Re: (Score:2)
Next year Microsoft will release their new feature swollen text editor: Notepad#.
Re: (Score:2)
All users caring about line endings had probably migrated to Notepad++ 10 years ago, right ?
Creating a never-ending cycle. Notepad is the only editor you can count on in a workflow, as a result a dependency is built in to a lot of windows apps, and CRLF leaks all over. While on linux/bsd/osx most things obey the EDITOR ev, allowing the user to pick his favorite. As a result if Apple dropped/changed TextEdit, many people may not notice, but if MS drops or changes Notepad, major LOB apps are going to break
Re: (Score:2)
The first tech book on Unix I owned had a section on how you couldn't rely on this newfangled "vi" thing being available, or working from the console, on every system, and both taught and suggested as a default "ed", which is available everywhere.
I have recovered a very minimally-booting system with "ed" in anger. I don't want to have to do it again any time soon.
(Also had a great chapter on the joys of booting, and how to use repeated dcheck / icheck iterations to repair filesystems - unless you were on a
Re: Notepad++ ? (Score:2)
Re: (Score:2)
All users caring about line endings had probably migrated to Notepad++ 10 years ago, right ?
Nope. I just don’t open up anything but Word documents on my Windows machine. Visual Studio already handles the line endings, though it does always try to convert to Windows line endings.
Mac OS and macOS? (Score:2)
Wow. How are they different?
Re: (Score:3, Informative)
Re: (Score:2)
And did the line endings change between < 10.0.0 and >= 10.0.0?
ProDOS, UNIX, and CP/M newlines (Score:5, Informative)
Mac OS 1 through 9 use the same newline as ProDOS on the Apple IIe: $0D.
Mac OS X 10.0 through 10.11 and macOS 10.12 to present use the same newline as UNIX: $0A.
Traditionally, MS-DOS and Windows have used the same newline as Digital Research's CP/M: $0D $0A.
The $0D $0A sequence dates back to the Teletype Model 33 terminal [wikipedia.org], one of the first terminals to use ASCII. It could process a carriage return ($0D) and a line feed ($0A) in parallel, but because a return took longer than a line feed, computers sent the return first, then the line feed, then a split second of pausing before the next character so that it wouldn't get smeared across the page during the return. If your Model 33 had the optional ASR paper tape drive, you might have had to use the delete key to insert the pauses yourself.
UNIX relied on terminal drivers to convert a newline to whatever sequence a particular terminal needed. CP/M just encoded what the terminal expected directly into an application. MS-DOS was originally a clone of CP/M (and DR-DOS was forked from authentic CP/M), and Windows was originally a GUI shell around MS-DOS. Though MS-DOS 2 was sophisticated enough to use these sorts of drivers, it had to remain compatible with applications designed for the much more CP/M-like MS-DOS 1.
Re: (Score:2)
the line demarcating the change isn't exactly clean.
Yaz
iConfused
Re: (Score:2)
You can export a tab delimited file from FileMaker today, and still get CRs.
It's like, cute and quaint.
Ah, fond memories of booting Mac OS X Cheetah (or was it public beta?) on a Quadra 8500 with 130 odd MB of RAM, and it not crashing the whole system whenever an app crashed.
Re: (Score:2)
No, MacOS is ancient, macOS is new..ish.
Yeah..
Re:Mac OS and macOS? (Score:5, Funny)
Re: (Score:2)
Why has it been an annoyance? (Score:4, Informative)
If you want to do something more complex then download a non-minimal text editor. There are loads available for free.
Re: (Score:2)
Imagine those config files are shared with non-windows computers.
Re:Why has it been an annoyance? (Score:5, Interesting)
Notepad is a small simple text editor that exists because occasionally you might need to edit some text files (typically for config files or something). These will be in a Windows friendly text format. It doesn't pretend to do anything remotely sophisticated.
That's great if you're the one running the editor and doing the editing.
What's not so great is when you give a co-worker a bash script, and they open it in Notepad, and then complain to you about all the extra spacing -- forcing you to waste a ton of breath explaining why it's not a problem with the text file, but an issue with their editor.
I once had to send a developer at my employer a SQL script intended to be run on Linux, and they did just this. It was unbelievable how long it took me to finally convince them that Notepad was the issue. And it wasn't just the double-spacing; they early had a fit because the file showed up as "ANSI" encoding in Notepad, whereas the spec said the file had to be UTF-8. So not only did I have to convince them (with lots of references) that Notepad was rendering CR/LF as two lines whereas UNIX systems treat them as a single line ending pair, but then I ALSO had to waste a lot of time convincing them that not only is there no such encoding standard as "ANSI" (a very long-standing bug in Notepad Microsoft has never got around to fixing), but that ASCII and UTF-8 are identical for values between 0x00 and 0x7F (which every byte in the document were within).
It was extremely annoying, because even with lots of links to references as to why they shouldn't be using Notepad for UNIX text files in the first place (and why you can't trust its encoding field), in the end I couldn't convince them. Our DBA eventually had to tell them the file was just fine as-is. And sadly, this wasn't the first person I've had this problem with.
As such, as a non-Windows user I'm rather happy for this change. I can't believe how many developers I run into who have no notion of line termination or the actual details of encoding standards, and who simply trust whatever Notepad tell them. Hopefully it will save me some aggravation in the future.
Yaz
Re: (Score:2)
Significant figures (Score:2)
A fraction in ratio notation, such as 1/2, is assumed to be exact unless specified otherwise. A decimal, on the other hand, often represents an interval of real numbers based on significant figure conventions [wikipedia.org]. For example, 0.5 means "anything that rounds to 0.5", namely the interval 0.450 to 0.550, and 0.50 means "anything that rounds to 0.50", namely the interval 0.495 to 0.505.
Re: (Score:2)
Well Microsoft can't fix stupid people. Look on the bright side, due to this issue someone learned something, even brighter would be if you consulted at the time, because then that also translated to billable hours
:)
Re: (Score:3)
Fuck BOMs
Re: (Score:2)
You're correct about Notepad rendering CR/LF as a single break, but Unix is not a text editor. Notepad is the only editor I've used in modern times that cannot deal with mixed line breaks.
Re: (Score:2)
Wordpad is not a text editor. It's a *choke* word processor, and quite easily even more bad at doing its job than Notepad is.
Re: (Score:2)
Notepad is a small simple text editor that exists because occasionally you might need to edit some text files (typically for config files or something).
Just because it's simple and occasionally used doesn't mean it can't be annoying. Also just because there are alternatives doesn't mean I'm going to install them on every computer I touch (or even can install them).
... And Wordpad is now a mess.
Re: (Score:2)
Re: (Score:2)
Nothing really. It just isn't the program it used to be... [wikipedia.org]
You can change some settings force it to look like normal, live with the ribbon, but more fundamentally: It isn't the default text editor. None of this is really a problem, it's just irritating: Help some, open explorer, double click, "crap", close notepad, right click, open in word pad, change the view mode and the wordwrap settings, keep doing what it is you were doing in the first place.
Notepad not screwing up every
Re: (Score:2)
I've often encountered downloaded text files which aren't Windows-formatted. While there are many alternatives that do handle line ends correctly (the most readily available in Windows is WordPad), Notepad is a default for various file types and this added support will certainly help.
This really isn't something basic, not something sophisticated, and there's no particular reason not to include it. While Microsoft is very late to the party, it's a definite case of 'better late than never'.
Re: (Score:2)
Notepad is a small simple text editor that exists because occasionally you might need to edit some text files (typically for config files or something) on a machine that is not yours so doesn't have Notepad++ installed. These will be in a Windows friendly text format. It doesn't pretend to do anything remotely sophisticated.
If you want to do something more complex then download a non-minimal text editor. There are loads available for free.
TFTFY. If you're regularly editing text files, Notepad++ (or a contemporary) is essential. Notepad is for when you don't have anything like Notepad++
CRLF is technically correct (Score:5, Insightful)
You want the carriage to return and the paper moved up by one line, not print over the last line (CR only) or continue at the current position one line down (LF only). Imagine that, Microsoft doing something correctly.
Re: (Score:2)
When printing sure, but most text won't be printed and is just edited electronically. Using a single character makes more sense as it reduces file size, especially if you have short lines.
Re: (Score:2)
Where can I find this paper version of notepad you're talking about?
Re: (Score:2)
The deal for me is that, as an old MUD coder in the late 90's, I am so used to the VT100 convention that the Unix way of doing it baffles me. I'm too used to doing \n\r.
Re: (Score:2)
It's not a order of the characters that matters to me; as former email developer, I'm also used to standardizing on CRLF as per RFC. But there were enough non-standard clients out there that I was used to having to deal with either-or. What fucked it all up were those clients that only send bare LF's. "Be liberal in what you accept" except most of these were spam clients, anyhow.
Re:CRLF is technically correct (Score:5, Informative)
You want the carriage to return and the paper moved up by one line, not print over the last line (CR only) or continue at the current position one line down (LF only). Imagine that, Microsoft doing something correctly.
It's a holdover from the old mechanical printer / typewriter days. Since the LF and CR were handled by separate mechanisms separate commands allowed controlling them independently when needed.
While in general you wanted a CR and LF, they also had utility themselves. A LF allowed advancing paper without activating the CR mechanism if a CR was not needed, while a CR allows you to over print and blackout text, such as a password.
Re: (Score:2)
Specifically, it is a holdover from the Teletype Corporation telegraphs. Previous Murray telegraphs had used a single "Line" code for a new line.
The Teletype machines were electro-mechanical and while a character could be typed relatively quickly, the printer's carriage return operation was slow.
The "Line" code was split into two codes to allow the printer to keep up!
Re: (Score:2)
It's not just a holdover: it's also a compromise after different OS builders tried to simplify things the same way without coordinating and whose arbitrary choices happened to conflict. Once you have unixy LF and macish CR in the wild, reviving the old CRLF admixture made an equally unhappy compromised. That compromise was baked into telnet and subsequent protocols. By the time Microsoft brought MS-DOS to market, CRLF looked like the sensible, standards-compliant choice.
I am mostly summarizing the old EOLst [rfc-editor.org]
Re: (Score:2)
Re: (Score:2)
Even today it would make a difference on the web. This page that I am typing in is according to a combination of wget and wc 1213 lines long. That is 1.2KB less if using a LF formatted HTML file over a CR/LF formatted one. Multiply that by all the CR/LF formatted files being shoved around the internet and I would imagined it comes to many TB a day.
Finally, a reason to upgrade to Windows 10! (Score:5, Funny)
Or will this be backported to Windows 7?
Notepad a major annoyance for developers (Score:3)
You cannot be serious, what professional developer in his right mind would use Notepad?
Re: (Score:3)
Developers have clients who aren't developers. I don't use Windows, but I'm happy about this change because occasionally I've had clients who wanted to edit one of my files in Notepad and would find it looking broken to them because of lack of line break parsing.
Re: (Score:2)
You cannot be serious, what professional developer in his right mind would use Notepad?
Close to 100% of them. Just not necessarily while developing. Kind of like just because vi is my editor of choice doesn't mean that I don't frequently end up on a test system opening something in nano or *shudders* emacs.
Developers especially frequently send files cross platforms onto test systems they don't administer and need to use a standard OS image. God forbid they remotely access a file on another system, or their main OS from another OS.
Re: (Score:3)
You cannot be serious, what professional developer in his right mind would use Notepad?
Any developer having to do a change of an ini file or script on a locked down machine where no user software can be installed, such as a machine in a production environment or factory.
And any developer who has to guide a user in such a change over the phone or a remote connection.
Making Notepad actually useful is a huge step in reducing the pain of maintaining Windows based automation and enterprise solutions.
Re: (Score:2)
You cannot be serious, what professional developer in his right mind would use Notepad?
Those same senior developers that use pico and nano, I would assume.
Wordpad (Score:3)
Re: (Score:2)
WordPad works on Mac OS files just fine. I use notepad++ if it's available because WordPad defaults to a proportional font, which makes code and script really hard to read...but in a pinch, WordPad will do.
Azure (Score:2)
Drop the negativity - a good and useful thing has just happened. Thanks.
Re: (Score:2)
Just wait until notepad corrupts your file when it writes the file back to disk in CR/LF format... and this will be classed as a "feature".
Write (Score:2)
The funny thing... (Score:5, Informative)
Is that edit.exe -- the console-based editor that came out with DOS 5.0 -- *did* support UNIX EOL. Go figger.
Re: (Score:2)
Reason MS didn't do this earlier is because majority of people were using Windows
Re: (Score:2)
I have to disagree somewhat. While I will never be guilty of ascribing good things to MS while under Ballmer/Gates, once the web came along, UNIX EOL suddenly became righter -- or at least terribly common. I would have to say it was just sheer hamfisted bluster and pride, moreso than a desire to put the hurt on the (then) microscopic userbase of people like you and me.
But, really, barring an internal document showing this, it doesn't really matter what we think the reason was.
Yawn (Score:2, Interesting)
Wake me up when windows can read EXT4 filesystems, I mean it has only been around for 15 years, is an open standard which could easily have been coded for, and it would be just common sense to do so. Meanwhile linux has been able to read NTFS/FAT/FAT32 for 20+ years.
But oh yay, linebreaks, lookit all that progress..
see comment (Score:2)
WordPad stock plunges 17%... (Score:4, Funny)
...in after-hours trading.
Step one to being usable, done (Score:2)
Being able to handle large files by NOT trying to load a huge file into ram and only noticing after two minutes or 10 that it fails will probably take another 40 years.
huh? (Score:3)
the registry must be a nightmare (Score:2)
Why does this need to be disabled ever? How is it ever better to ignore obvious line breaks?
Hey wait... (Score:2)
I just heard that Windows notepad tried to replace MS-DOS edline (line editor... [wikipedia.org]), but failed as edline is still in Windows 10 !?
Re: Odd (Score:2)
Re: (Score:2)
Ubuntu Server, yes. Anything that relies on the presence of an X server, not quite yet. WSL users trying to run GUI apps have to obtain an X server elsewhere, which usually means a decade-old copy of Xming.
Re: (Score:2)
It is just odd that they would leave this out forever on purpose and then suddenly fix it. It has been literal decades, and the absence was obviously malicious.
Cloud is king and the writing is on the wall. You don't take your lead architects off core products unless your business strategy is changing. This is just another sign of the inevitable.
Re: (Score:3)
Re: (Score:2)
You are a modern human right? So you use Unicode, right?
U+2029
Re: (Score:2)
Yes this is probably yet another advantage due to the Linux subsystem and therefore (indirectly) Linux.
Re: (Score:2)
who cares?
Millions upon millions of MS Windows admins 'stuck' with Linux systems? It's actually kind of funny to watch them work, they are so used to point-n-click snap-in GUI interfaces that most of them don't even know how to write a script. Recognise a Windows admin worth having a conversation with by the fact that he scripts most of his work using VB or C# rather than sitting there for hours pounding a mouse button working a GUI management tool to do stuff a script can do in 10 minutes.
Re:too little, too late (Score:5, Insightful)
Yeah, but then.. Notepad++
Re: (Score:2)
Re: (Score:2)
Do you have a moment to talk about our lord and savior Sublime Text [sublimetext.com]?
Re:too little, too late (Score:5, Interesting)
Zaelath noted:
Yeah, but then.. Notepad++
Personally, I've used Alan Phillips' Programmer's File Editor [lancaster.ac.uk] in place of Notepad for almost 20 years now.
MS made it harder when they killed off support for the
.hlp helpfile format, but there are ways around that - and, in addition to a pretty useful feature set [lancaster.ac.uk], the program IS free, after all ...
Re: (Score:2)
Re:too little, too late (Score:4, Informative)
Re: (Score:2, Insightful)
I call bullshit. How often do you really script something robust in 10 minutes? Do you have proper error handling, have you considered the edge cases, what about notifications of failure and logging output? It can take hours.
For one-off jobs a shitty little brittle 10 minute script is fine, but for something of high importance 10 minutes is usually not enough.
Personally, I don't see speed as the primary benefit... reproducibility is what I care about. I can spend 10 minutes doing a daily task... or I ca
Re: (Score:2, Insightful)
Millions upon millions of MS Windows admins 'stuck' with Linux systems?
It's not called 'stuck' when you are too stupid to learn how to do your job which includes managing Windows, Linux, BSD, various router and switching platforms, etc... The word you're looking for is 'incompetence'. Millions upon millions of *incompetent* MS Windows admins don't know *how* to work on Linux systems....
Re: (Score:2)
> Recognise a Windows admin worth having a conversation with by the fact that he scripts most of his work using VB or C#
powershell. they should be scripting in powershell these days, which is kept up to date, has tons of built in functions and available modules for working in AD and just about anything on a windows computer or server already available, and can take advantage of
.Net libraries so you don't have to develop in c# to get something that powershell doesn't have as a native cmdlet. VB still has i
Re: (Score:2, Troll)
Exactly.
I'm not going to suddenly start editing text in Windoze. I mean, I'm not going to complain that they started actually ending lines properly, it only took forty-ish years, but they finally figured out how to do it.
Meanwhile, TeachText became SimpleText became TextEdit. The Macintosh user interface evolved through many generations. And now, finally, in 2018, MicroShit figures out how to do what they should have been able to do in 1984.
Idiots.
Re:too little, too late (Score:5, Informative)
CR: return to first character of the line.... [wikipedia.org]
LF: jump to the next line.... [wikipedia.org]
Perhaps you should read those articles (I've only verified the relevant parts so normal Wikipedia cautions apply), understand where the control characters came from, what they were used for and why there are different line endings out there? No "properly" about this.
That it has taken this long for MS to change something this trivial is strange though. Guess they always assumed nobody uses Notepad?
Re: (Score:2)
If it's good enough for RFC 5321 [ietf.org], it's good enough for me.
Re: (Score:2)
Guess they always assumed nobody uses Notepad?
Or maybe they are planning on screwing up wordpad even more.
:-/
Re: (Score:2)
Re: (Score:3)
Look, if you want to emulate ancient technology, you'd also better make sure that if you only send carriage-return, your emulation should smear the next character across the paper about 40 positions to the left of the prior character, and that every character past 72 should overwrite that 72nd position, getting darker and darker until the ink starts to spread. And your terminal emulator should make a terrible racket with every printable character, which by the way, only included UPPERCASE letters and run at
Re:too little, too late (Score:5, Interesting)
Let's be fair here. The correct implementation of a new line in a text file *IS* CRLF. It is the format you need to send to a printer to print the text. A single CR would just print all the text on a single line overwriting itself over and over, and an LF would make the text look like a staircase (until it ran off the side of the page). CRLF is therefore the correct way to end lines in a text file (or LF+CR which actually makes more sense, but I wasn't consulted when the standards started). Seriously, just go read any manual that describes the ASCII control characters and there will be no doubt left in your head about what SHOULD be the correct way.
Linux got it wrong because it copied it from Unix. Unix got it wrong because it got copied from Multics (some of the original devs working on Unix were also devs on Multics). Multics (most likely) got it wrong because it was a bad performance hack (using a single byte to end lines is easier).
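If you want to see the two conventions side by side, a quick Python check works on any OS. This is only an illustration of the byte-level difference, nothing Notepad-specific:

with open("unix.txt", "w", newline="\n") as f:
    f.write("hello\n")        # stored as LF only
with open("dos.txt", "w", newline="\r\n") as f:
    f.write("hello\n")        # stored as CR + LF

print(open("unix.txt", "rb").read())   # b'hello\n'
print(open("dos.txt", "rb").read())    # b'hello\r\n'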
Re: (Score:2)
who cares?
All they have to do now is replace the rest of their OS. And also get notepad to not output CRLF, because we don't need that in the world.
I mean if they want their OS to just be for games great, but anyone that can make a choice is selecting anything else. It's a horrible environment to get real work done on.
Re: (Score:2)
> It's a horrible environment to get real work done on.
this is sort of ridiculous, we are stuck on windows 7 at work and I can get all of my work done without an issue--my last job had 8.1 which i liked better, what do you really get out of linux that is so great? I got frustrated with linux a long time ago and have never looked back--to each his own, right? windows is not perfect, and *nix has had several features MS has been stupid slow to incorporate, but come on, to act like it is worthless is just s
Re: (Score:2)
Re: (Score:2)
Good thing you can just read TFA and learn there's an option to retain the old behavior
Re: (Score:2)
Good thing you can just read TFA and learn there's an option to retain the old behavior
I "could" RTFA but then I would not be able to post something stupid. Sigh. I'm just gunna fire up RS5 and play around with it instead
:)
Re: (Score:2)
"man unix2dos" and for good measure "man dos2unix"
Re: (Score:2)
To be honest, the CR + LF line ending is closer to be a standard for text interchange than any other combination of CR and/or LF. Many IETF RFCs mandate the use of CR + LF.
Re: (Score:2)
Tis the year that M$ embraces Linux...
It is the year of Linux on the Windows Desktop.
M$ is now extending its support for all things GNU / Linux in a bid to extinguish GNU / Linux once and for all.
Re: (Score:2)
Yep, that was the original idea of ASCII control codes to control teletypes and teleprinters over serial communication links.
|
https://tech.slashdot.org/story/18/05/08/2149216/windows-notepad-finally-supports-unix-mac-os-line-endings
|
CC-MAIN-2018-22
|
en
|
refinedweb
|
Note: The section on RNNs will span several posts, including this one covering basic RNN structure along with a naive and a TF implementation for character-level sequence generation. Subsequent posts on RNNs will cover more advanced topics like attention, using more complicated RNN architectures for tasks such as machine translation.
I. Overview:
There are many advantages to using a recurrent structure, but the obvious one is being able to keep a representation of the previous inputs in memory. With this, we can better predict the subsequent outputs. There are many problems that arise from keeping track of long streams of memory, such as vanishing gradients during BPTT. Luckily, there are architectural changes we can make to combat many of these issues.
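Before the implementations, it helps to see what a single recurrent update actually computes. The snippet below is just a toy numpy sketch of one step; the weight names mirror the Karpathy code used later in this post, and the sizes are made-up toy values.

import numpy as np

H, D = 4, 3                            # hidden size, input size (toy values)
Wxh = np.random.randn(H, D) * 0.01     # input  -> hidden
Whh = np.random.randn(H, H) * 0.01     # hidden -> hidden (the "memory" path)
bh = np.zeros((H, 1))

h_prev = np.zeros((H, 1))              # previous hidden state
x_t = np.random.randn(D, 1)            # current input

# one recurrent step: the new state mixes the current input with the old state
h_t = np.tanh(Wxh @ x_t + Whh @ h_prev + bh)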
II. Char-RNN
We will now implement both a naive and a TF implementation of a character-generating model. The model's objective is to read each input sequence (a stream of chars) one letter at a time and predict the next letter. During training, we will feed in the letter from our input to generate each output letter, but during inference (generation), we will feed in the previous output as the new input (starting with a random token as the first input).
There are a few preprocessing steps done to the text data, so please review the GitHub Repo for more detailed info.
Example input: Hello there Charlie, how are you? Today I want a nice day. All I want for myself is a car.
- DATA_SIZE = len(input)
- BATCH_SIZE = # of sequences per batch
- NUM_STEPS = # of tokens per split (aka seq_len)
- STATE_SIZE = # of hidden units PER hidden state = H
num_batches = number of batch-sized batches to split the input into
Note: the batch dimension needs to be in columns (above it is in rows) because we feed into the RNN cell row by row, so you will reshape the raw input accordingly. Also note that each letter will be fed in as a one-hot-encoded vector that will be embedded. Note that in the image above, each sentence is perfectly split into a batch. This is just for visualization purposes so you can see how an input would be split. In the actual char-rnn implementation, we don't care about sentences. We just split the entire input into num_batches and each batch is split so each input is of length num_steps (aka seq_len).
III. Backpropagation
The BPTT for the RNN structure can be a bit messy at first, especially when computing the influence on hidden states and inputs. Use the code below from Karpathy's naive numpy implementation to follow along with the math in my diagram.
Forward pass:
for t in xrange(len(inputs)):
    xs[t] = np.zeros((vocab_size, 1))  # encode in 1-of-k representation
    xs[t][inputs[t]] = 1
    hs[t] = np.tanh(np.dot(Wxh, xs[t]) + np.dot(Whh, hs[t-1]) + bh)  # hidden state
    ys[t] = np.dot(Why, hs[t]) + by  # unnormalized log probabilities for next chars
    ps[t] = np.exp(ys[t]) / np.sum(np.exp(ys[t]))  # probabilities for next chars
    loss += -np.log(ps[t][targets[t], 0])  # softmax (cross-entropy loss)
Backpropagation:
for t in reversed(xrange(len(inputs))):
    dy = np.copy(ps[t])
    dy[targets[t]] -= 1
    dWhy += np.dot(dy, hs[t].T)
    dby += dy
    dh = np.dot(Why.T, dy) + dhnext  # backprop into h
    dhraw = (1 - hs[t] * hs[t]) * dh  # backprop through tanh nonlinearity
    dbh += dhraw
    dWxh += np.dot(dhraw, xs[t].T)
    dWhh += np.dot(dhraw, hs[t-1].T)
    dhnext = np.dot(Whh.T, dhraw)
Learning about shapes
Before getting into the implementations, let’s talk about shapes. This char-rnn example is a bit odd in terms of shaping, so I’ll show you how we make batches here and how they are usually made for seq-seq tasks.
This task is a bit weird in that we feed the entire row of seq_len (all batch_size sequences) at once. Normally, we will just pass in one batch at once, where each batch will have batch_size sequences (batch_size, seq_len). We also don’t usually split by seq_len but just take the entire length of a sequence. With seq-seq tasks, as you will see in Part 2 and 3, we feed in a batch with batch_size sequences where each sequence is of length seq_len. We cannot dictate seq_len as we do there because seq_len will just be a max len from all the examples. We just PAD the sequences that do not match that max length. But we’ll take a closer look at this in the subsequent posts.
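As a rough illustration of that padding step (a hedged sketch, not code from the repo for this post): shorter sequences get filled with a reserved PAD id so the whole batch becomes a rectangular array.

import numpy as np

PAD = 0  # reserved index for padding (an assumption; pick any unused id)

def pad_batch(sequences):
    """Pad a list of integer-id sequences to the longest one in the batch."""
    max_len = max(len(seq) for seq in sequences)
    batch = np.full((len(sequences), max_len), PAD, dtype=np.int32)
    for i, seq in enumerate(sequences):
        batch[i, :len(seq)] = seq
    return batch

print(pad_batch([[5, 2, 9], [7, 1], [3, 3, 3, 3]]))
# [[5 2 9 0]
#  [7 1 0 0]
#  [3 3 3 3]]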
IV. Char-RNN TF Implementation (no RNN abstractions)
This implementation will be using tensorflow but none of the RNN classes for abstraction. We will just be using our own set of weights to really understand where the input data is going and how our output is generated. I will provide code and breakdown analysis with links but will talk about some significant highlights of the code here. If you want an implementation with TF RNN classes, go to section V.
Highlights:
The first thing I want to draw attention to is how we generate our batched data. You may notice that we have an additional step where we batch the data and then further split it into seq_lens. This is because of the vanishing gradient problem with BPTT in RNN structures. More information can be found in my blog post here. But essentially, we cannot process too many characters at once because during backprop, we will quickly diminish the gradients if the sequence is too long. So a simple trick is to save the output state of a seq_len long sequence and then feed that as the initial_state for the next seq_len. This is also referred to as truncated backpropagation where we choose how much we process (apply BPTT to) and also how often we update. The initial_state starts from zeros and is reset for every epoch. So, we are still able to hold some type of representation in a specific batch from previous seq_len sequences. We need to do this because at the char-level, a small sequence is not enough to really be able to learn adequate representations.
def generate_batch(FLAGS, raw_data):
    raw_X, raw_y = raw_data
    data_length = len(raw_X)

    # Create batches from raw data
    num_batches = FLAGS.DATA_SIZE // FLAGS.BATCH_SIZE  # tokens per batch
    data_X = np.zeros([num_batches, FLAGS.BATCH_SIZE], dtype=np.int32)
    data_y = np.zeros([num_batches, FLAGS.BATCH_SIZE], dtype=np.int32)
    for i in range(num_batches):
        data_X[i, :] = raw_X[FLAGS.BATCH_SIZE * i: FLAGS.BATCH_SIZE * (i+1)]
        data_y[i, :] = raw_y[FLAGS.BATCH_SIZE * i: FLAGS.BATCH_SIZE * (i+1)]

    # Even though we have BATCH_SIZE tokens per batch,
    # we only want to feed in SEQ_LEN tokens at a time
    feed_size = FLAGS.BATCH_SIZE // FLAGS.SEQ_LEN
    for i in range(feed_size):
        X = data_X[:, i * FLAGS.SEQ_LEN:(i+1) * FLAGS.SEQ_LEN]
        y = data_y[:, i * FLAGS.SEQ_LEN:(i+1) * FLAGS.SEQ_LEN]
        yield (X, y)
Below is the code that uses all of our weights. We have an rnn_cell that takes in the input and the state from the previous cell in order to generate the rnn output which is also the next cell’s input state. The next function, rnn_logits, converts our rnn output using weights to generate logits to be used for probability determination via softmax.
def rnn_cell(FLAGS, rnn_input, state):
    with tf.variable_scope('rnn_cell', reuse=True):
        W_input = tf.get_variable('W_input',
            [FLAGS.NUM_CLASSES, FLAGS.NUM_HIDDEN_UNITS])
        W_hidden = tf.get_variable('W_hidden',
            [FLAGS.NUM_HIDDEN_UNITS, FLAGS.NUM_HIDDEN_UNITS])
        b_hidden = tf.get_variable('b_hidden',
            [FLAGS.NUM_HIDDEN_UNITS],
            initializer=tf.constant_initializer(0.0))
    return tf.tanh(tf.matmul(rnn_input, W_input) + tf.matmul(state, W_hidden) + b_hidden)

def rnn_logits(FLAGS, rnn_output):
    with tf.variable_scope('softmax', reuse=True):
        W_softmax = tf.get_variable('W_softmax',
            [FLAGS.NUM_HIDDEN_UNITS, FLAGS.NUM_CLASSES])
        b_softmax = tf.get_variable('b_softmax',
            [FLAGS.NUM_CLASSES],
            initializer=tf.constant_initializer(0.0))
    return tf.matmul(rnn_output, W_softmax) + b_softmax
We take our input and one hot encode it and then reshape for batch processing in the RNN. We can then run our RNN to predict the next token using the rnn_cell and rnn_logits functions with softmax. You can see that we generate the state but that also is the same as our rnn output in this simple implementation here.
class model(object):
    def __init__(self, FLAGS):
        # Placeholders
        self.X = tf.placeholder(tf.int32, [None, None], name='input_placeholder')
        self.y = tf.placeholder(tf.int32, [None, None], name='labels_placeholder')
        self.initial_state = tf.zeros([FLAGS.NUM_BATCHES, FLAGS.NUM_HIDDEN_UNITS])

        # Prepare the inputs
        X_one_hot = tf.one_hot(self.X, FLAGS.NUM_CLASSES)
        rnn_inputs = [tf.squeeze(i, squeeze_dims=[1])
                      for i in tf.split(1, FLAGS.SEQ_LEN, X_one_hot)]

        # Define the RNN cell
        with tf.variable_scope('rnn_cell'):
            W_input = tf.get_variable('W_input',
                [FLAGS.NUM_CLASSES, FLAGS.NUM_HIDDEN_UNITS])
            W_hidden = tf.get_variable('W_hidden',
                [FLAGS.NUM_HIDDEN_UNITS, FLAGS.NUM_HIDDEN_UNITS])
            b_hidden = tf.get_variable('b_hidden',
                [FLAGS.NUM_HIDDEN_UNITS],
                initializer=tf.constant_initializer(0.0))

        # Creating the RNN
        state = self.initial_state
        rnn_outputs = []
        for rnn_input in rnn_inputs:
            state = rnn_cell(FLAGS, rnn_input, state)
            rnn_outputs.append(state)
        self.final_state = rnn_outputs[-1]

        # Logits and predictions
        with tf.variable_scope('softmax'):
            W_softmax = tf.get_variable('W_softmax',
                [FLAGS.NUM_HIDDEN_UNITS, FLAGS.NUM_CLASSES])
            b_softmax = tf.get_variable('b_softmax',
                [FLAGS.NUM_CLASSES],
                initializer=tf.constant_initializer(0.0))
        logits = [rnn_logits(FLAGS, rnn_output) for rnn_output in rnn_outputs]
        self.predictions = [tf.nn.softmax(logit) for logit in logits]

        # Loss and optimization
        y_as_list = [tf.squeeze(i, squeeze_dims=[1])
                     for i in tf.split(1, FLAGS.SEQ_LEN, self.y)]
        losses = [tf.nn.sparse_softmax_cross_entropy_with_logits(logit, label)
                  for logit, label in zip(logits, y_as_list)]
        self.total_loss = tf.reduce_mean(losses)
        self.train_step = tf.train.AdagradOptimizer(
            FLAGS.LEARNING_RATE).minimize(self.total_loss)
We also sample from our model every once in a while. For sampling, we can either choose to take the argmax (boring) of the logits or introduce some uncertainty in the chosen class using temperature.
def sample(self, FLAGS, sampling_type=1):
    initial_state = tf.zeros([1, FLAGS.NUM_HIDDEN_UNITS])
    predictions = []

    # Process preset tokens
    state = initial_state
    for char in FLAGS.START_TOKEN:
        idx = FLAGS.char_to_idx[char]
        idx_one_hot = tf.one_hot(idx, FLAGS.NUM_CLASSES)
        rnn_input = tf.reshape(idx_one_hot, [1, 65])
        state = rnn_cell(FLAGS, rnn_input, state)

    # Predict after preset tokens
    logit = rnn_logits(FLAGS, state)
    prediction = tf.argmax(tf.nn.softmax(logit), 1)[0]
    predictions.append(prediction.eval())

    for token_num in range(FLAGS.PREDICTION_LENGTH-1):
        idx_one_hot = tf.one_hot(prediction, FLAGS.NUM_CLASSES)
        rnn_input = tf.reshape(idx_one_hot, [1, 65])
        state = rnn_cell(FLAGS, rnn_input, state)
        logit = rnn_logits(FLAGS, state)

        # scale the distribution
        # for creativity, higher temperatures produce more nonexistent words
        # BUT more creative samples
        next_char_dist = logit/FLAGS.TEMPERATURE
        next_char_dist = tf.exp(next_char_dist)
        next_char_dist /= tf.reduce_sum(next_char_dist)
        dist = next_char_dist.eval()

        # sample a character
        if sampling_type == 0:
            prediction = tf.argmax(tf.nn.softmax(next_char_dist), 1)[0].eval()
        elif sampling_type == 1:
            prediction = FLAGS.NUM_CLASSES - 1
            point = random.random()
            weight = 0.0
            for index in range(0, FLAGS.NUM_CLASSES):
                weight += dist[0][index]
                if weight >= point:
                    prediction = index
                    break
        else:
            raise ValueError("Pick a valid sampling_type!")
        predictions.append(prediction)

    return predictions
Also take a look at how we pass in an initial_state parameter into the data flow. This is updated with the final_state after each sequence is processed. We need to do this in order to avoid vanishing gradients in our RNN. Notice that we feed in a zero initial state for the start and then for subsequent sequences, we take the final_state of the previous sequence as the new input state.
state = np.zeros([FLAGS.NUM_BATCHES, FLAGS.NUM_HIDDEN_UNITS])
for step, (input_X, input_y) in enumerate(epoch):
    predictions, total_loss, state, _ = model.step(sess, input_X, input_y, state)
    training_losses.append(total_loss)
V. TF RNN Library Implementation
In this implementation, in contrast with the one above, we will be using tensorflow's nn utilities to create the rnn abstraction classes. It's important that we understand what the inputs, internal operations and outputs are for each of these classes before we use them. We will still be using the basic rnn_cell here, so we will be employing truncated backpropagation, but if using GRU or LSTM, there is no need to use it. In fact, you can just split the entire data into batch_size sequences and process each sequence in full.
def rnn_cell(FLAGS):
    # Get the cell type
    if FLAGS.MODEL == 'rnn':
        rnn_cell_type = tf.nn.rnn_cell.BasicRNNCell
    elif FLAGS.MODEL == 'gru':
        rnn_cell_type = tf.nn.rnn_cell.GRUCell
    elif FLAGS.MODEL == 'lstm':
        rnn_cell_type = tf.nn.rnn_cell.BasicLSTMCell
    else:
        raise Exception("Choose a valid RNN unit type.")

    # Single cell
    single_cell = rnn_cell_type(FLAGS.NUM_HIDDEN_UNITS)

    # Dropout
    single_cell = tf.nn.rnn_cell.DropoutWrapper(single_cell,
        output_keep_prob=1-FLAGS.DROPOUT)

    # Each state as one cell
    stacked_cell = tf.nn.rnn_cell.MultiRNNCell([single_cell] * FLAGS.NUM_LAYERS)

    return stacked_cell
The code above is about creating our specific rnn architecture. We can choose from many different rnn cell types but here you can see three of the most common (basic, GRU, and LSTM). We create each cell with a certain number of hidden units. We can then add a dropout layer after the cell layer for regularization. Finally, we can make the stacked rnn architecture by replicating the single_cell. Note the state_is_tuple=True condition added to single_cell and stacked_cell. This ensures that we get a tuple return that contains the states after each input in a given sequence. The above statement will be true if using an LSTM unit, otherwise, please disregard.
def rnn_inputs(FLAGS, input_data):
    with tf.variable_scope('rnn_inputs', reuse=True):
        W_input = tf.get_variable("W_input",
            [FLAGS.NUM_CLASSES, FLAGS.NUM_HIDDEN_UNITS])

        # <BATCH_SIZE, seq_len, num_hidden_units>
        embeddings = tf.nn.embedding_lookup(W_input, input_data)

        # <seq_len, BATCH_SIZE, num_hidden_units>
        # BATCH_SIZE will be in columns bc we feed in row by row into RNN.
        # 1st row = 1st tokens from each batch
        # inputs = [tf.squeeze(i, [1]) for i in tf.split(1, FLAGS.SEQ_LEN, embeddings)]
        # NO NEED if using dynamic_rnn(time_major=False)
    return embeddings

def rnn_softmax(FLAGS, outputs):
    with tf.variable_scope('rnn_softmax', reuse=True):
        W_softmax = tf.get_variable("W_softmax",
            [FLAGS.NUM_HIDDEN_UNITS, FLAGS.NUM_CLASSES])
        b_softmax = tf.get_variable("b_softmax", [FLAGS.NUM_CLASSES])

    logits = tf.matmul(outputs, W_softmax) + b_softmax
    return logits
There are a couple of differences between the rnn_inputs function here and in the naive TF implementation. As you can see, we no longer have to reshape our inputs. This is because we will be receiving the output and state from our rnn by using tf.nn.dynamic_rnn. This is a very effective and efficient rnn abstraction that does not require the inputs to be reshaped before being fed in, so all we feed in are the embeddings. The rnn_softmax function, which gives us the logits, remains the same as in the previous implementation.
class model(object):
    def __init__(self, FLAGS):
        ''' Data placeholders '''
        self.input_data = tf.placeholder(tf.int32, [None, None])
        self.targets = tf.placeholder(tf.int32, [None, None])

        ''' RNN cell '''
        self.stacked_cell = rnn_cell(FLAGS)
        self.initial_state = self.stacked_cell.zero_state(
            FLAGS.NUM_BATCHES, tf.float32)

        ''' Inputs to RNN '''
        # Embedding (aka W_input weights)
        with tf.variable_scope('rnn_inputs'):
            W_input = tf.get_variable("W_input",
                [FLAGS.NUM_CLASSES, FLAGS.NUM_HIDDEN_UNITS])
        inputs = rnn_inputs(FLAGS, self.input_data)

        ''' Outputs from RNN '''
        # Outputs: <seq_len, BATCH_SIZE, num_hidden_units>
        # state: <BATCH_SIZE, num_layers*num_hidden_units>
        outputs, state = tf.nn.dynamic_rnn(cell=self.stacked_cell,
            inputs=inputs, initial_state=self.initial_state)

        # <seq_len*BATCH_SIZE, num_hidden_units>
        outputs = tf.reshape(tf.concat(1, outputs), [-1, FLAGS.NUM_HIDDEN_UNITS])

        ''' Process RNN outputs '''
        with tf.variable_scope('rnn_softmax'):
            W_softmax = tf.get_variable("W_softmax",
                [FLAGS.NUM_HIDDEN_UNITS, FLAGS.NUM_CLASSES])
            b_softmax = tf.get_variable("b_softmax", [FLAGS.NUM_CLASSES])

        # Logits
        self.logits = rnn_softmax(FLAGS, outputs)
        self.probabilities = tf.nn.softmax(self.logits)

        ''' Loss '''
        y_as_list = tf.reshape(self.targets, [-1])
        self.loss = tf.reduce_mean(
            tf.nn.sparse_softmax_cross_entropy_with_logits(
                self.logits, y_as_list))
        self.final_state = state

        ''' Optimization '''
        self.lr = tf.Variable(0.0, trainable=False)
        trainable_vars = tf.trainable_variables()
        # Clip the gradient to avoid vanishing or blowing up gradients
        grads, _ = tf.clip_by_global_norm(tf.gradients(self.loss, trainable_vars),
            FLAGS.GRAD_CLIP)
        optimizer = tf.train.AdamOptimizer(self.lr)
        self.train_optimizer = optimizer.apply_gradients(
            zip(grads, trainable_vars))

        # Components for model saving
        self.global_step = tf.Variable(0, trainable=False)
        self.saver = tf.train.Saver(tf.all_variables())
Also notice that we don’t manually do one-hot encoding on our input tokens before embeddings them. This is because tf.nn.embedding_lookup in rnn_inputs function above does this automatically for us.
For generating the outputs, we use tf.nn.dynamic_rnn, where the outputs will be the output for each input and the returned state is a tuple containing the last state for each input batch. Finally, we reshape the outputs so we can get the logits and compare them to the targets.
Notice the self.initial_state: with stacked_cell.zero_state, all we have to specify is the batch size. Here you will see NUM_BATCHES; please refer to the section above on shaping for clarification. Another alternative is not including initial_state at all! dynamic_rnn() will figure it out on its own; all we need to do is specify the data type (i.e. dtype=tf.float32, etc.). But we can't do that here because we pass in the final_state of a sequence as the initial_state of the next sequence. You may notice that we pass in the previous final_state as the new initial_state even though self.initial_state is not a placeholder. We can still feed in our own initial state just by redefining self.initial_state in step(). Whatever we need to calculate in our output_feed, the input_feed will be used, and if an entry is missing, it will just fall back to its predefined value (stacked_cell.zero_state in this case).
def step(self, sess, batch_X, batch_y, initial_state=None):
    if initial_state is None:
        input_feed = {self.input_data: batch_X,
                      self.targets: batch_y}
    else:
        input_feed = {self.input_data: batch_X,
                      self.targets: batch_y,
                      self.initial_state: initial_state}
    output_feed = [self.loss,
                   self.final_state,
                   self.logits,
                   self.train_optimizer]
    outputs = sess.run(output_feed, input_feed)
    return outputs[0], outputs[1], outputs[2], outputs[3]
Results:
Let’s take a look at a few results. This by no means going to be earth shattering creativity, but I did use temperature instead of argmax for reproduction. So we will see more creativity but more errors (grammar, spelling, ordering, etc.). I only let it train for 10 epochs but we can already start to see words and sentence structure and even the concept for acting lines for each character (data was shakespeare’s work). For decent results, let it train over-night on a GPU.
Looks like Shakespeare has lost his touch and ability to spell.
Update: I got a lot of questions about the shapes of a typical input, output and state.
- input – [num_batches, seq_len, num_classes]
- output – [num_batches, seq_len, num_hidden_units] (all outputs from each of the states)
- state – [num_batches, num_hidden_units] (this is just the output from the last state)
In the next blog post, we will be dealing with inputs that contain variable sequence lengths and show an implementation for text classification.
All Code:
GitHub Repo (Updating all repos, will be back up soon!)
3 thoughts on “Recurrent Neural Networks (RNN) – Part 1: Basic RNN / Char-RNN”
in train function, is it necessary to let state = None? I mean clear state after each epoch?
model = create_model(sess, FLAGS)
state = None
for epoch_num, epoch in enumerate(generate_epochs(FLAGS,
FLAGS.train_X, FLAGS.train_y)):
train_loss = []
####### clear state?
state = None
# Assign/update learning rate
sess.run(tf.assign(model.lr, FLAGS.LEARNING_RATE *
(FLAGS.DECAY_RATE ** epoch_num)))
# Training
for minibatch_num, (X, y) in enumerate(epoch):
loss, state, logits, _ = model.step(sess, X, y, state)
train_loss.append(loss)
Amazing tutorial thank you so much!
|
https://theneuralperspective.com/2016/10/04/05-recurrent-neural-networks-rnn-part-1-basic-rnn-char-rnn/
|
CC-MAIN-2018-22
|
en
|
refinedweb
|
Save your program in a file called proj.bf, in folders called proj1 and proj2 respectively for phases 1 and 2 of your project.
I will test your code in the same environment as the lab
machines in MI 302, using the command
/usr/bin/beef proj.bf
For phase 2 of your project, you will write a compiler from a highly simplified subset of the C programming language to brainfuck code. You will not submit brainfuck code for this part, but rather a scanner and parser written using flex and bison, in two files called proj.lpp and proj.ypp. I will compile your code to the executable compiler proj as follows:
flex -o proj.yy.cpp proj.lpp
bison -d proj.ypp
g++ -o proj proj.tab.cpp proj.yy.cpp
Then, if test.c contains a program in the C subset described below, I will test your compiler as follows:
./proj < test.c > test.bf
/usr/bin/beef test.bf
Of course, the result of the second command should be the same as if I had compiled the test.c program using gcc and run the resulting executable.
Now don't worry, the subset of C that we are going to compile from is going to be very restricted. Specifically, our C subset has the following properties:
Any line beginning with the # character is ignored by your compiler.
The whole program is a single int main() { ... } function. There must be no function prototypes or global variables. This main function must end with a return 0; statement, which is ignored by your compiler.
main.
Every variable has type int, but may never store any integer larger than 127.
A declaration like int x,y; is not allowed.
An initializer such as int x = 5; is OK, but int x = 5 + 2 is not.
An assignment statement has the form VAR = ANY OP ANY;. Here ANY is either a variable name or an integer in the range 0-127, and VAR must be a variable name. OP can be any of the following: +, -, >, <, ==. Addition and subtraction work as usual, and the comparison operators set the variable to 1 if the statement is true, and to 0 if it is false.
An input statement has the form VAR = getchar();, and an output statement has the form putchar(ANY);.
I suggest you write your program in the following steps. Of course you are free to develop however you wish. As always, you are encouraged to submit every time you get some small step of the program working.
You could use a single token OP for all the operators, but I advise against this. Ultimately your parser is going to have to be writing brainfuck code, and this code will be very different for something like 5 + 2 compared to 5 == 2. So it might make your job easier to just have every operator be a different token.
So for instance, given the following C program:
#include <stdio.h>
int main() {
    int x = 5;
    int y;
    y = getchar();
    x = x + y;
    putchar(x);
    return 0;
}
your program might produce (written to standard out) the brainfuck program
+++++ > , <[->>+>+<<<] >>>[-<<<+>>>] <<[->+>+<<] >>[-<<+>>] <<<[-] >>[-<<+>>] <<.
Of course your actual program might differ from this one. As long as they behave identically, it's fine.
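If you want to sanity-check generated brainfuck on a machine that doesn't have beef installed, a small interpreter is easy to write. The sketch below is a hedged example, not part of the assignment: it assumes 8-bit wrapping cells, a 30,000-cell zero-initialized tape, and EOF reading as 0, which may or may not match beef's exact semantics, so verify against /usr/bin/beef before trusting it.

import sys

def run_bf(code, stdin=sys.stdin):
    """Interpret a brainfuck program (assumed: 8-bit wrapping cells, EOF -> 0)."""
    # Pre-compute matching bracket positions for [ and ]
    jumps, stack = {}, []
    for i, c in enumerate(code):
        if c == '[':
            stack.append(i)
        elif c == ']':
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    tape, ptr, pc = [0] * 30000, 0, 0
    while pc < len(code):
        c = code[pc]
        if c == '>':
            ptr += 1
        elif c == '<':
            ptr -= 1
        elif c == '+':
            tape[ptr] = (tape[ptr] + 1) % 256
        elif c == '-':
            tape[ptr] = (tape[ptr] - 1) % 256
        elif c == '.':
            sys.stdout.write(chr(tape[ptr]))
        elif c == ',':
            ch = stdin.read(1)
            tape[ptr] = ord(ch) if ch else 0
        elif c == '[' and tape[ptr] == 0:
            pc = jumps[pc]
        elif c == ']' and tape[ptr] != 0:
            pc = jumps[pc]
        pc += 1

if __name__ == '__main__':
    with open(sys.argv[1]) as f:
        run_bf(f.read())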
|
https://www.usna.edu/Users/cs/roche/courses/f11si413/project/brainfuck.php.html
|
CC-MAIN-2018-22
|
en
|
refinedweb
|
The OESmartsParseOpts namespace encodes symbolic constants used as bit-masks to indicate how to interpret SMARTS and SMIRKS strings.
See also
This namespace contains constants.
None
Only constraints explicitly specified in the SMARTS and SMIRKS strings are added to the OEQMolBase query structure.
RingConstraint
Additional ring constraint is added to each ring atom of the generated OEQMolBase query molecule.
This can significantly increase the performance of the substructure search. For example, matching the C1CCCCC1 ring against the CCCCCCCC chain will fail much faster, since none of the query ring atoms can be mapped to any of the target chain atoms.
Default
Same as the OESmartsParseOpts.RingConstraint constant.
|
https://docs.eyesopen.com/toolkits/csharp/oechemtk/OEChemConstants/OESmartsParseOpts.html
|
CC-MAIN-2018-22
|
en
|
refinedweb
|
A library for Cayenne LPP
Hi all,
Recently I had to do a few projects using LoPy boards, The Things Network and its Cayenne Integration to quickly build some dashboard.
In order to use the integration, the packets sent by the LoPy should be in the Low Power Payload (LPP) format.
To facilitate that, I made a simple library and thought I would share it with you since it could be useful to someone else. It is available on GitHub.
The type of sensors compatible with this library are:
- digital input/output;
- analog input/output;
- luminosity (or illuminance) sensor;
- presence sensor;
- temperature sensor;
- humidity sensor;
- accelerometer;
- barometer;
- gyrometer;
- and gps.
Here is a small example of how it works, assuming that the network join has already been done:
# importing the modules
import socket
import cayenneLPP

# create a LoRa socket
s = socket.socket(socket.AF_LORA, socket.SOCK_RAW)
s.setsockopt(socket.SOL_LORA, socket.SO_DR, 0)
s.setblocking(True)

# creating Cayenne LPP packet
lpp = cayenneLPP.CayenneLPP(size=100, sock=s)

# adding 2 digital inputs, the first one uses the default channel
lpp.add_digital_input(True)
lpp.add_digital_input(False, channel=112)

# sending the packet via the socket
lpp.send()
There are some other examples in the GitHub repo.
Hope it helps :)
Cheers,
Johan
- miroslav.petrov
The problem is that I have insufficient knowledge of Python. I cannot write a working script that (for example) reads a DHT22 sensor and formats the data in LPP. That's why I want a working example with a real sensor.
Can you be a bit more specific when you say you have some difficulties using the library? Is it because you did not join the network? Or is it because you have troubles reading the data from a particular sensor?
An example is available here for using the library with TTN. You simply need to fill in your application credentials in lines 31 and 32. Please note that this example assumes that you are using the frequency plan for Australia.
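For anyone who wants a concrete end-to-end sketch with a real sensor, the snippet below is a hedged example rather than tested code: it assumes a DHT-style driver exposing a read() that returns temperature and humidity, and that the library exposes add_temperature/add_relative_humidity helpers (the supported sensor list above suggests it does, but check the GitHub repo for the exact names). The LoRa join is assumed to have been done already, as in the first example.

import socket
import cayenneLPP
# 'dht' is a stand-in for whatever DHT22 driver you use on the LoPy;
# its name, pin and API are assumptions, not part of the cayenneLPP library.
from dht import DHT

s = socket.socket(socket.AF_LORA, socket.SOCK_RAW)
s.setsockopt(socket.SOL_LORA, socket.SO_DR, 0)
s.setblocking(True)

lpp = cayenneLPP.CayenneLPP(size=100, sock=s)

sensor = DHT('P23')        # data pin is an assumption
result = sensor.read()     # assumed to return .temperature and .humidity

# channel numbers are free to choose; they just identify the sensor in Cayenne
lpp.add_temperature(result.temperature, channel=1)
lpp.add_relative_humidity(result.humidity, channel=2)
lpp.send()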
- miroslav.petrov
I have difficulties using the library. Can somebody share a working code with a real sensor(bme280, dht11/22,ds18b20 etc.)? I think many people would appreciate it!
@jojo said in A library for Cayenne LPP:
GitHub
Hi,
Thanks for sharing this with the rest of the community, it looks very useful and very well documented!
|
https://forum.pycom.io/topic/2545/a-library-for-cayenne-lpp
|
CC-MAIN-2018-22
|
en
|
refinedweb
|
ColdFusion 10 - XmlSearch() And XmlTransform() Now Support XPath 2.0
In today's world, we don't often work with XML; the majority of data exchange is done using JavaScript Object Notation (JSON). Even APIs that support both XML and JSON seem to be dropping XML support in their roadmap (I know this from personal experience). That said, XML is still a data type that will inevitably be a part of our lives for some time. That's why it's actually kind of exciting that ColdFusion 10 now supports XPath 2.0 in the xmlSearch() and xmlTransform() functions.
NOTE: At the time of this writing, ColdFusion 10 was in public beta.
I don't pretend to be an expert on XPath or XSLT (Extensible Stylesheet Language Transformations); so, rather than try to explain the differences between the versions of XPath, I figured I would just demonstrate some of the functionality that is now available in ColdFusion 10. In the following code, I'm creating a simple XML document and then using xmlSearch() to gather various nodes. I try to explain what's going on in the comments.
- <!---
- Create an XML document on which to test new XPath 2.0
- functionality support.
- --->
- <cfxml variable="bookData">
- <books>
- <book id="101" rating="4.5">
- <title>Muscle: Confessions of an Unlikely Bodybuilder</title>
- <author>Samuel W. Fussell</author>
- <published>August 1, 1992</published>
- <isbn>0380717638</isbn>
- </book>
- <book id="201" rating="4">
- <title>The Fountainhead</title>
- <author>Ayn Rand</author>
- <published>November 1, 1994</published>
- <isbn>0452273331</isbn>
- </book>
- <book id="301" rating="4.5">
- <title>It Was On Fire When I Lay Down On It</title>
- <author>Robert Fulghum</author>
- <isbn>0804105820</isbn>
- </book>
- </books>
- </cfxml>
- <!--- Groovy - now let's execute some XML Path queries. --->
- <cfscript>
- // Get all of the ratings that are greater than 4.0.
- results = xmlSearch(
- bookData,
- "//book/@rating[ number( . ) > 4.0 ]"
- );
- // Get the average rating of the reviews.
- results = xmlSearch(
- bookData,
- "avg( //book/@rating )"
- );
- // Get a compound result of the Title and Author nodes. Notice
- // that we can now create divergent results in the SAME path.
- // We don't need to create two completely different paths.
- results = xmlSearch(
- bookData,
- "//book/( title, author )"
- );
- // Get all of the book's children EXCEPT for the ISBN number.
- // XPath 2.0 introduces some interesting operators like "except",
- // "every", "some", etc.
- results = xmlSearch(
- bookData,
- "//book/( * except isbn )"
- );
- // XPath 2.0 now uses sequences instead of node-sets which allow
- // for more interesting data combinations. This only gets the
- // nodes from one collection that are NOT in the other collection.
- // We're using inline branching and merging!
- results = xmlSearch(
- bookData,
- "//book/( (title, published) except (isbn, published) )"
- );
- // Get all of the ISBN numbers that use a 10-digit ISBN. XPath
- // 2.0 now supports regular expression functions like matches(),
- // replace(), and tokenize() -- though it is quirky and a
- // bit limited in patterns.
- results = xmlSearch(
- bookData,
- "//book/isbn[ matches( text(), '^\d{10}$' ) ]"
- );
- // Iterate over one collection and map it onto the resultant
- // collection. We can now iterate inline within a path.
- results = xmlSearch(
- bookData,
- "for $b in (//book) return ( $b/published )"
- );
- // We can now pass in params into our xmlSearch() calls. Notice
- // that the key, "title" is quoted - that is because XPATH is
- // case-sensitive.
- results = xmlSearch(
- bookData,
- "//book/title[ . = $title ]",
- {
- "title": "The Fountainhead"
- }
- );
- // Get the given book, no matter what the casing. FINALLY, we
- // can do case-insensitive searching in XML :)
- results = xmlSearch(
- bookData,
- "//book[ upper-case( title ) = 'THE FOUNTAINHEAD' ]"
- );
- // Debug the results.
- writeDump( results );
- </cfscript>
From what I've read about the functionality in XPath 2.0, the biggest upgrades seem to be the use of sequences over node-sets and the use of inline path branching and logic. At a very practical level, XPath 2.0 simply supports more functions like lower-case() and upper-case() for case-insensitive matching - something many people have asked for in previous versions of ColdFusion.
Oh, and XPath 2.0 now supports Regular Expression matching as well - yeah boyyyyyy!
Well, that's probably about as much excitement as I can squeeze out of searching XML documents in ColdFusion 10. That is, of course, until you realize that ColdFusion 10 can now parse HTML... but more to come on that shortly.
Reader Comments
@All,
And here's part of why XML is getting more exciting in ColdFusion 10 - we can now "easily" convert dirty HTML into valid XML documents:
Due to the JAR files that now ship with ColdFusion 10 (i.e. TagSoup), we now have built-in Java classes that facilitate this kind of parsing.
It all looked good until you added the part about regular expression support. That really put it over the edge to greatness!
@Steve,
Heck yeah! Regular expressions are always groovy :) Unfortunately, it looks like the "\b" word-boundary construct is not supported, which I only realized because it was the first thing I tried. They have a slightly different notation for some things, which I haven't gone through yet.
But, good to know that it's there.
That is a disappointing omission. Still, I guess some regular expression support is a major improvement over no regular expression support at all.
Please provide an explanation of xml transform in ColdFusion
|
https://www.bennadel.com/blog/2340-coldfusion-10---xmlsearch-and-xmltransform-now-support-xpath-2-0.htm
|
CC-MAIN-2018-22
|
en
|
refinedweb
|
Lately I've been playing with Arduino and PHP to create simple demos and see how the interaction between them works. To my surprise it worked really well and I could at least do the blink example with a few lines of code.
From now on I'm assuming that you've played around with Arduino and feel familiar with some terms like execution loop, setup, pins and LED.
I used Arduino Uno to keep the things as simple as possible, but any Arduino family board is supported.
The first thing that came to my mind at least was to use the command line to interact with the serial port in the Arduino board. It could have worked as expected but for sure is not the best approach, since we are using a workaround.
I say workaround based on other languages where we can have native integration with those devices. Have you ever heard about pi4j? What about Johnny-Five? Those libraries provide a simple, elegant and, most importantly, native connection directly to the board.
To interact via the command line with our Arduino we would simply do something like this (assuming that the Arduino is connected via a USB cable):
echo '1' >> /dev/cu.wchusbserial1410
The command above will send 1 to the Arduino serial port and blink the RX LED in the Arduino board.
We have taken the first step and communicated successfully to our Arduino board, but it is terminal only. Our goal here is do it via PHP, and as programmers we usually take the easiest path. Which means to send this command line through PHP script.
<?php shell_exec("echo '1' >> /dev/cu.wchusbserial1410");
The PHP script above does the job well, but it's not elegant. The error handling is terrible.
This is where PHP streams come in; I'm sure you've used them before. PHP developers use streams all the time without even knowing it.
Streams are accessible by any function that handles files on PHP, such as fopen and file_get_contents. The magic happens when we use what is known as wrappers to handle different resources. Wrappers allow SSH connection, FTP handling or even read zipped files content without extracting them.
PHP comes with 12 wrappers ready to use, and if something more specific is needed that PHP does not provide, it is possible to create custom streams using the function stream_wrapper_register.
The magic here is to create an Arduino wrapper, so we could do something like
<?php $resource = fopen('arduino://ttyUSB0', 'r+'); print fread($resource, 1024);
PHP has a good documentation about creating your own wrapper, but I’m going to clarify a few points here.
The first is the class streamWrapper, which is a prototype to follow when creating our own class. The weird part is that we don't extend anything; we just need to declare the right methods and it'll just work.
The class is big, but luckily we don't need to implement every method; the ones we need are described below.
<?php
class streamWrapper {
    /* Properties */
    public resource $context;

    /* Methods */
    __construct ( void )
    /* ...the remaining method prototypes are omitted here... */
}
The code above was taken from the official PHP.net documentation.
The methods we are going to need are stream_open, stream_read, stream_write and stream_eof, as implemented below:
<?php

namespace Arduino;

class Wrapper
{
    private static $wrapperName = 'arduino';

    private $path;

    public function __construct()
    {
        self::register();
    }

    public function stream_open($path, $mode, $options = null, &$opened_path = null)
    {
        $realPath = str_replace('arduino://', '', $path);
        $this->path = fopen($realPath, 'r+');
        return true;
    }

    public function stream_read($count)
    {
        sleep(2);
        return fgets($this->path, $count);
    }

    public function stream_write($data)
    {
        sleep(2);
        return fwrite($this->path, $data);
    }

    public function stream_eof()
    {
        return fclose($this->path);
    }

    public static function register()
    {
        // if we already defined the wrapper just return false
        foreach (stream_get_wrappers() as $wrapper) {
            if ($wrapper == self::$wrapperName) {
                return false;
            }
        }
        stream_wrapper_register(self::$wrapperName, self::class);
    }
}
The last step is to register the wrapper before calling it. In order to do that, just call the function stream_wrapper_register, passing the wrapper name (alias) and the class name.
<?php stream_wrapper_register('arduino', Arduino\Wrapper::class);
In our Wrapper class we already have a register method to make the wrapper registration easier; instead of using the PHP function you could call \Arduino\Wrapper::register().
In conclusion, writing a simple wrapper is easy. I would say that it is easier than building a simple CRUD; the class we created is very small but does the job really well.
To test it, just connect your Arduino to any USB port and use the wrapper with any function that you like. I prefer to use fopen, but the examples below use both fopen and file_put_contents.
Are you wondering why I wrote the sleep function call twice? This is very simple: the Arduino takes at least 1 second to set everything up and start to respond through the serial port. By sleeping for 2 seconds we guarantee that all the data that we send to it will be received.
<?php
\Arduino\Wrapper::register();
$resource = fopen('arduino://ttyUSB0', 'r+');
print fwrite($resource, 'hello Arduino');
<?php
\Arduino\Wrapper::register();
print file_put_contents('arduino://ttyUSB0', 'hello Arduino');
A good example of using custom PHP streams is Amazon and its storage service called S3. Amazon provides an SDK in PHP which uses a custom wrapper to store data in the cloud.
Official documentation can be found at
With the S3 wrapper, downloading a file is a matter of calling the function file_get_contents
<?php $data = file_get_contents('s3://bucket/key');
|
https://marabesi.com/post/2017/03/01/running-php-with-arduino.html
|
CC-MAIN-2018-22
|
en
|
refinedweb
|
Welcome Sophie!
On Sun, Oct 24, 2004 at 03:52:25AM -0500, Sophie Italy wrote:
| Any timeline for a schema?
Realistically, I won't get to it for another 12 months; however, if
someone does some serious thinking and posts a strawman, I'll gladly
participate. I'd be looking for something based on RelaxNG.
| I hope you don't try to write the schema in
| YAML; XML schema concrete syntax was a disaster.
Well, I'd figure one would want to do something like RelaxNG, have a
YAML syntax (so that you can define a transform language on YAML and
have it apply to YAML schema). However, I also think that it should
have a shorter syntax, similar to what RelaxNG has done.
That said, it looks like you are making a great start.
Cheers!
Clark
Posted a shorter version of this on the wiki as well...
Any timeline for a schema? I hope you don't try to write the schema in
YAML; XML schema concrete syntax was a disaster. Just use a suitable
language. Perhaps base the schema on a pattern construct? Then perhaps
permit a distinguished "start" element like RelaxNG?
The following could be executable Ruby code:
p1 = pattern {...}
p2 = pattern {...}
p3 = ( p1 || p2 || pattern {...} ) && pattern {...}
start = p3 || pattern {...}
class Pattern
def match(y)... end # y is a YamlNode
def &&(p2) ... end
def ||(p2) ... end
end
class YamlNode
def method_missing(...) ... end # will search for attribute, accessor, hash-key
end
# pre-defined patterns: scalars, sequence, reference, optional, or, and, etc.
# some (e.g. Sequence, Reference, Optional) are type-parametric
def string ... end
def sequence(pattern) ... end
def optional(pattern) ... end
def reference(pattern) ... end
# general patterns for Objects or other explicitly required types
def pattern (type=:Object, &block)
p = Pattern.new(type, block)
def p.match(x)
x.instance_eval(p)
end
end
# users simply use "pattern" for user types
e.g.
person = pattern(:Person) { # must match !/ruby/Person
name string
friends sequence(reference(person))
home optional(:Home)
}
|
https://sourceforge.net/p/yaml/mailman/yaml-core/?viewmonth=200410&viewday=24
|
CC-MAIN-2018-22
|
en
|
refinedweb
|
This type defines an address of a particular socket; in particular, it defines the family of networking protocols to which the address belongs (for example, IP or IPv6), as well as the size of the address itself. This type can be safely ignored for most high-level (and, arguably, most low-level) networking operations.
public class SocketAddress {
// Public Constructors
public SocketAddress(System.Net.Sockets.AddressFamily family);
public SocketAddress(System.Net.Sockets.AddressFamily family, int size);
// Public Instance Properties
public AddressFamily Family{get; }
public int Size{get; }
public byte this[int offset]{set; get; }
// Public Instance Methods
public override bool Equals(object comparand);
// overrides object
public override int GetHashCode( );
// overrides object
public override string ToString( );
// overrides object
}
EndPoint.Serialize( )
EndPoint.Create( )
|
http://etutorials.org/Programming/C+in+a+nutshell+tutorial/Part+IV+API+Quick+Reference/Chapter+33.+System.Net/SocketAddress/
|
CC-MAIN-2018-22
|
en
|
refinedweb
|
Mark Brown Tuition
Physics, Mathematics and Computer
Science Tuition & Resources
Going in spirals with Turtle - Writing good code
Posted on 14-02-18 in Turtle Exercises
Using the Python Turtle library we draw spirals with ever increasing complexity. The aim is to see examples of good and bad code. In this post we'll look at the code in each case and observe the result. I would recommend running the code for yourself as you go to observe what is happening.
import turtle

step = 10
while True:
    turtle.forward(step)
    turtle.left(90)
    step += 2
This will produce
At each run of the loop we increase step by 2 pixels. This will cause the turtle to move further out each run.
The first improvement we will make is to remove all hardcoded numbers. A hardcoded number is one that isn't defined by a variable. This is generally bad practice as we may end up using the value in multiple places. Secondly, we don't really want infinite loops in our code with no way of ending. Let's change it so that it does a fixed number of loops before quitting.
import turtle

step = 1         # ever increasing distance
angle = 90       # turn by this angle each loop
loops = 5        # how many loops we wish to complete before quitting
increase_by = 2  # how many pixels to increase each run

steps_per_loop = 360 // angle  # how many steps we need to make a single loop

for loop in range(loops * steps_per_loop):
    turtle.forward(step)
    turtle.left(angle)
    step += increase_by

turtle.exitonclick()
This will work fine; however, we're more interested in how many sides there are, rather than the angle. So let's alter this slightly to become
import turtle

step = 1          # ever increasing distance
sides = 4         # how many sides per loop
loops = 50        # how many loops we wish to complete before quitting
increase_by = 10  # how many pixels to increase each loop

angle = 360 // sides  # angle to turn by each step
increase_by_per_step = increase_by // sides

for loop in range(loops * sides):
    turtle.forward(step)
    turtle.left(angle)
    step += increase_by_per_step

turtle.exitonclick()
Success! We can see now the code does one thing based on a set of inputs (our variables). Let's turn this into a function.
import turtle

def draw_spiral(step, sides, loops, increase_by):
    """
    Draws spirals using turtle

    step         # ever increasing distance
    sides        # how many sides per loop
    loops        # how many loops we wish to complete before quitting
    increase_by  # how many pixels to increase each loop
    """
    angle = 360 // sides
    increase_by_per_step = increase_by // sides
    for loop in range(loops * sides):
        turtle.forward(step)
        turtle.left(angle)
        step += increase_by_per_step

def move_to(x, y=0):
    """ Moves turtle to position without drawing """
    turtle.penup()
    turtle.goto(x, y)
    turtle.pendown()

x = -200
while True:
    move_to(x)
    draw_spiral(1, 4, 5, 10)
    x += 50

turtle.exitonclick()
Here we have used our new function to draw the spiral pattern like a stamp and move onto the next location. I've also written a short function called move_to. Note how we've used a default argument, y=0. In code where we have numbers that usually don't change, this is good practice.
User Exercises
Using the above code as a guide produce the following patterns
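The images of the target patterns are not reproduced in this capture, but as a hedged starting point you can simply call draw_spiral with different side counts and watch how the shapes change. The parameter values below are illustrative guesses, not the official exercise answers, and assume draw_spiral() and move_to() from the final example above are already defined.

import turtle

# Illustrative guesses only -- not the official exercise answers.
move_to(-250)
draw_spiral(step=1, sides=3, loops=30, increase_by=9)    # triangular spiral
move_to(0)
draw_spiral(step=1, sides=6, loops=20, increase_by=12)   # hexagonal spiral
move_to(250)
draw_spiral(step=1, sides=5, loops=25, increase_by=10)   # pentagonal spiral

turtle.exitonclick()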
|
https://markbrowntuition.co.uk/turtle-exercises/2018/02/14/going-in-spirals-with-turtle-writing-good-code/
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
and a time limit.
Use the
get-retention subcommand and specify the namespace.
Example
$ pulsar-admin namespaces get-retention my-tenant/my-ns
{
"retentionTimeInMinutes": 10,
"retentionSizeInMB": 500
}
REST API
GET /admin/v2/namespaces/:tenant/:namespace/retention
Java:
Use the set-backlog-quota subcommand and specify a namespace, a size limit using the -l/--limit flag, and a retention policy using the -p/--policy flag.
Example
$ pulsar-admin namespaces set-backlog-quota my-tenant/my-ns \
--limit 2G \
--limitTime 36000 \
--policy producer_request_hold
REST API
Java
Use the
get-backlog-quotas subcommand and specify a namespace. Here's an example:
$ pulsar-admin namespaces get-backlog-quotas my-tenant/my-ns
{
"destination_storage": {
"limit" : 2147483648,
"policy" : "producer_request_hold"
}
}
REST API
GET /admin/v2/namespaces/:tenant/:namespace/backlogQuotaMap
Java
Map<BacklogQuota.BacklogQuotaType,BacklogQuota> quotas =
admin.namespaces().getBacklogQuotas(namespace);
Remove backlog quotas
pulsar-admin
Use the
remove-backlog-quota subcommand and specify a namespace. Here's an example:
$ pulsar-admin namespaces remove-backlog-quota my-tenant/my-ns
REST API
DELETE /admin/v2/namespaces/:tenant/:namespace/backlogQuota
Java.
Set the TTL for a namespace
pulsar-admin
REST API
Java
admin.namespaces().setNamespaceMessageTTL(namespace, ttlInSeconds);
Get the TTL configuration for a namespace
pulsar-admin
Use the
get-message-ttl subcommand and specify a namespace.
Example
$ pulsar-admin namespaces get-message-ttl my-tenant/my-ns
60
REST API
GET /admin/v2/namespaces/:tenant/:namespace/messageTTL
Java
admin.namespaces().getNamespaceMessageTTL(namespace)
Remove the TTL configuration for a namespace
pulsar-admin
Use the
remove-message-ttl subcommand and specify a namespace.
Example
$ pulsar-admin namespaces remove-message-ttl my-tenant/my-ns
REST API
DELETE /admin/v2/namespaces/:tenant/:namespace/messageTTL
Java
admin.namespaces().removeNamespaceMessageTTL(namespace)
Delete)..
|
https://pulsar.apache.org/docs/2.8.0/cookbooks-retention-expiry/
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
Extract vector and raster data from OSM.
Project description
OSMxtract
Description
OSMxtract is a simple Python package that uses the Overpass API to fetch OpenStreetMap features and export them in a GeoJSON file.
Installation
Using
pip:
pip install osmxtract
Command-line interface
Usage
OSMxtract can guess the extent of your query based on three different options:
--fromfile: use the bounds from the input vector or raster file;
--latlon and --buffer: use the bounds of a buffer around a given point;
--address and --buffer: use the bounds of a buffer around a geocoded address.
Usage: osmxtract [OPTIONS] OUTPUT

  Extract GeoJSON features from OSM with the Overpass API.

Options:
  --fromfile PATH      Bounding box from input file.
  --latlon FLOAT...    Space-separated lat/lon coordinates.
  --address TEXT       Address to geocode.
  --buffer INTEGER     Buffer size in meters around lat/lon or address.
  --tag TEXT           OSM tag of interest (ex: "highway").
  --values TEXT        Comma-separated list of possible values (ex: "tertiary,primary").
  --case-insensitive   Make the first character of each value case insensitive.
  --geom [point|linestring|polygon|multipolygon]
                       Output geometry type.
  --help               Show this message and exit.
Examples
# buildings around the "Université Libre de Bruxelles" as polygons.
# save features in the file `buildings.geojson`. since no values
# are provided, all non-null values for the tag "building" are accepted.
osmxtract --address "Université Libre de Bruxelles" --buffer 5000 \
    --tag building --geom polygon buildings.geojson

# primary, secondary and tertiary roads based on the extent
# of an existing raster. save the result as linestrings in the
# `major_roads.geojson` file. we use the `--case-insensitive`
# flag to get roads tagged as "primary" as well as "Primary".
osmxtract --fromfile map.tif --tag highway \
    --values "primary,secondary,tertiary" \
    --case-insensitive --geom linestring \
    major_roads.geojson

# cafes and bars near "Atomium, Brussels"
osmxtract --address "atomium, brussels" --buffer 1000 \
    --tag amenity --values "cafe,bar" --geom point \
    cafes_and_bars.geojson
API
import json
from osmxtract import overpass, location
import geopandas as gpd

# Get bounding box coordinates from a 2km buffer
# around the Atomium in Brussels
lat, lon = location.geocode('Atomium, Brussels')
bounds = location.from_buffer(lat, lon, buffer_size=2000)

# Build an overpass QL query and get the JSON response
query = overpass.ql_query(bounds, tag='amenity', values=['cafe', 'bar'])
response = overpass.request(query)

# Process response manually...
for elem in response['elements']:
    print(elem['tags'].get('name'))

# Output:
# Au Bon Coin
# Aux 4 Coins du Monde
# Excelsior
# Welcome II
# Heymbos
# Games Café
# Stadium
# Le Beau Rivage
# The Corner
# None
# Expo
# Koning
# Centrum
# St. Amands
# Bij Manu

# ...or parse them as GeoJSON
feature_collection = overpass.as_geojson(response, 'point')

# Write as GeoJSON
with open('cafes_and_bars.geojson', 'w') as f:
    json.dump(feature_collection, f)

# To GeoPandas GeoDataFrame:
geodataframe = gpd.GeoDataFrame.from_features(feature_collection)
|
https://pypi.org/project/osmxtract/
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
Scatter plots are the go-to for illustrating the relationship between two variables. They can show huge amounts of data, but often at a cost of being able to tell the identity of any given data point.
Key data points can be highlighted with annotations, but when we have a smaller dataset and value in distinguishing each point, we might want to add images instead of anonymous points.
In this tutorial, we’re going to create a scatter plot of teams xG & xGA, but with club logos representing each one.
To do this, we’re going to go through the following steps:
- Prep our badge images
- Import and check data
- Plot a regular scatter chart
- Plot badges on top of the scatter points
- Tidy and improve our chart
All the data and images needed to follow this tutorial are available here.
Setting up our images
To automate plotting each image, we need to have some order to our image locations and names.
The simplest way to do this is to keep them all in a folder alongside our code and have a naming convention of ‘team name’.png. The team names match up to the data that we are going to use soon. All of this is already prepared for you in the Github folder.
Import data
To start with, our data has three columns: team name, xG for and xG against. Let’s import our modules, data and check the first few lines of the dataframe:
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.offsetbox import OffsetImage, AnnotationBbox

df = pd.read_csv('xGTable.csv')
df.head()
We have our numbers to plot, but we need to add a reference for each team’s badge location in a new column. As we took the time to match the badge file names against the team names, this is really simple – we just add ‘images/‘ before and ‘.png’ after the team name. Let’s save this in a new column called ‘path’:
df['path'] = 'images/' + df['Squad'] + '.png'
df.head()
Plot a regular scatter chart
Before making our plot with the badges, we need to create a regular scatter plot. This gives us the correct dimensions of the plot, the axes and other benefits of working with a matplotlib figure in Python. Once we have this, we can get fancy with our badges and other cosmetic changes.
We have covered scatter plots before here, so let’s get straight into it.
fig, ax = plt.subplots(figsize=(6, 4), dpi=120)
ax.scatter(df['xG'], df['xGA'])
Super simple chart, and without annotations or visual cues we cannot tell who any of the points are. Adding badges will hopefully add more value and information to our plot.
Adding badges to our plot
Our base figure provides the canvas for the club badges. Adding these requires a couple of extra matplotlib tools.
The first one we will use is ‘OffsetImage’, which creates a box with an image, allows us to edit the image and readies it to be added to our plot. Let’s add this to a function as we’ll use it a few times:
def getImage(path):
    return OffsetImage(plt.imread(path), zoom=.05, alpha=1)
OffsetImage takes a few arguments. Let’s look at them in order:
- The image. We use the plt.imread function to read in an image from the location that we provide. In this case, it will look in the path that we created in the dataframe earlier.
- Zoom level. The images are too big by default. .05 reduces their size to 5% of the original.
- Alpha level. Our badges are likely to overlap; if you want to make them transparent, change this figure to any number between 0 and 1.
This function prepares the image, but we still need to plot them. Let’s do this by creating a new plot, just as before, then iterating on our dataframe to plot each team crest.
fig, ax = plt.subplots(figsize=(6, 4), dpi=120)
ax.scatter(df['xG'], df['xGA'], color='white')

for index, row in df.iterrows():
    ab = AnnotationBbox(getImage(row['path']), (row['xG'], row['xGA']), frameon=False)
    ax.add_artist(ab)
What’s happening here? Firstly, we have created our scatter plot with white points to hide them against the background, rather than interfere with the club logos.
We then iterate through our dataframe with df.iterrows(). For each row of our data we create a new variable called ‘ab’ which uses the AnnotationBbox function from matplotlib to take the desired image and assign its x/y location. The ax.add_artist function then draws this on our plot.
This should give us something like this:
Great work! We can now see who all the points are!
Improving our chart
Clearly there is plenty to improve on this chart. I won’t go through everything individually, but I’ll share the commented code below for some of the essential changes – titles, colours, comments, etc.
# Set font and background colour
plt.rcParams.update({'font.family': 'Avenir'})
bgcol = '#fafafa'

# Create initial plot
fig, ax = plt.subplots(figsize=(6, 4), dpi=120)
fig.set_facecolor(bgcol)
ax.set_facecolor(bgcol)
ax.scatter(df['xG'], df['xGA'], c=bgcol)

# Change plot spines
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.spines['left'].set_color('#ccc8c8')
ax.spines['bottom'].set_color('#ccc8c8')

# Change ticks
plt.tick_params(axis='x', labelsize=12, color='#ccc8c8')
plt.tick_params(axis='y', labelsize=12, color='#ccc8c8')

# Plot badges
def getImage(path):
    return OffsetImage(plt.imread(path), zoom=.05, alpha=1)

for index, row in df.iterrows():
    ab = AnnotationBbox(getImage(row['path']), (row['xG'], row['xGA']), frameon=False)
    ax.add_artist(ab)

# Add average lines
plt.hlines(df['xGA'].mean(), df['xG'].min(), df['xG'].max(), color='#c2c1c0')
plt.vlines(df['xG'].mean(), df['xGA'].min(), df['xGA'].max(), color='#c2c1c0')

# Text
## Title & comment
fig.text(.15, .98, 'xG Performance, Weeks 1-6', size=20)
fig.text(.15, .93, 'Turns out some teams good, others bad', size=12)

## Axes titles
fig.text(.06, .14, 'xG Against', size=9, color='#575654', rotation=90)
fig.text(.12, 0.05, 'xG For', size=9, color='#575654')

## Avg line labels
fig.text(.76, .535, 'Avg. xG Against', size=6, color='#c2c1c0')
fig.text(.325, .17, 'Avg. xG For', size=6, color='#c2c1c0', rotation=90)

## Save plot
plt.savefig('xGChart.png', dpi=1200, bbox_inches="tight")
This should return something like this:
Conclusion
In this tutorial we have learned how to programmatically add images to a scatter plot. We created an underlying plot, then looped through the data to overlay a relevant image on each point.
This isn’t a good idea for every scatter chart, particularly when there are many points, as it will be an absolute mess. But with limited data points and value in distinguishing between them, I think we have a good use case for using club logos in our example.
You might also have luck using this method to distinguish between leagues, or drawing the image for just a few data points that you want to highlight in place of an annotation.
Interested in other visualisations with Python? Check out our other tutorials here!
|
https://fcpython.com/visualisation/creating-scatter-plots-with-club-logos-in-python
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
This article is a compilation of useful Wi-Fi functions for the ESP32. We’ll cover the following topics: scan Wi-Fi networks, connect to a Wi-Fi network, get Wi-Fi connection strength, check connection status, reconnect to the network after a connection is lost, Wi-Fi status, Wi-Fi modes, get the ESP32 IP address, set a fixed IP address and more.
This is not a novelty. There are plenty of examples of how to handle Wi-Fi with the ESP32. However, we thought it would be useful to compile some of the most used and practical Wi-Fi functions for the ESP32.
Table of Contents
Here’s a list of what will be covered in this tutorial (you can click on the links to go to the corresponding section):
- Wi-Fi Modes: station, access point and both (station + access point);
- Scan Wi-Fi networks;
- Connect to a Wi-Fi network;
- Get Wi-Fi connection status;
- Check Wi-Fi connection strength;
- Get ESP32 IP address;
- Set an ESP32 Static IP address;
- Disconnect Wi-Fi;
- Reconnect to Wi-Fi after connection is lost;
- ESP32 Wi-Fi Events;
- Reconnect to Wi-Fi Network After Lost Connection (Wi-Fi Events);
- ESP32 WiFiMulti
- Change ESP32 Hostname.
Including the Wi-Fi Library
The first thing you need to do to use the ESP32 Wi-Fi functionalities is to include the WiFi.h library in your code, as follows:
#include <WiFi.h>
This library is automatically “installed” when you install the ESP32 add-on in your Arduino IDE. If you don’t have the ESP32 installed, you can follow the next tutorial:
If you prefer to use VS Code + PlatformIO, you just need to start a new project with an ESP32 board to be able to use the WiFi.h library and its functions.
ESP32 Wi-Fi Modes
The ESP32 board can act as a Wi-Fi station, an access point, or both. To set the Wi-Fi mode, use WiFi.mode() and pass the desired mode as the argument: WIFI_STA (station), WIFI_AP (access point), or WIFI_AP_STA (both).
Wi-Fi Station
When the ESP32 is set as a Wi-Fi station, it can connect to other networks (like your router). In this scenario, the router assigns a unique IP address to your ESP board. You can communicate with the ESP using other devices (stations) that are also connected to the same network by referring to the ESP unique IP address.
The router is connected to the internet, so we can request information from the internet using the ESP32 board like data from APIs (weather data, for example), publish data to online platforms, use icons and images from the internet or include JavaScript libraries to build web server pages.
Set the ESP32 as a Station and Connect to Wi-Fi Network
Go to “Connect to Wi-Fi Network” to learn how to set the ESP32 as station and connect it to a network.
In some cases, this might not be the best configuration, for example when you don't have a network nearby but still want to connect to the ESP32 to control it. In that scenario, you must set your ESP board as an access point.
Access Point
When you set your ESP32 board as an access point, it creates its own Wi-Fi network, and nearby Wi-Fi devices (stations), such as your smartphone or computer, can connect to it directly. So, you don't need to be connected to a router to control it.
This can be also useful if you want to have several ESP32 devices talking to each other without the need for a router.
Because the ESP32 doesn’t connect further to a wired network like your router, it is called soft-AP (soft Access Point). This means that if you try to load libraries or use firmware from the internet, it will not work. It also doesn’t work if you make HTTP requests to services on the internet to publish sensor readings to the cloud or use services on the internet (like sending an email, for example).
Set the ESP32 as an Access Point
To set the ESP32 as an access point, set the Wi-Fi mode to access point:
WiFi.mode(WIFI_AP)
And then, use the softAP() method as follows:
WiFi.softAP(ssid, password);
ssid is the name you want to give to the ESP32 access point, and the password variable is the password for the access point. If you don’t want to set a password, set it to NULL.
There are also other optional parameters you can pass to the softAP() method. Here are all the parameters (a short example putting them together follows the list):
WiFi.softAP(const char* ssid, const char* password, int channel, int ssid_hidden, int max_connection)
- ssid: name for the access point – maximum of 63 characters;
- password: minimum of 8 characters; set to NULL if you want the access point to be open;
- channel: Wi-Fi channel number (1-13)
- ssid_hidden: (0 = broadcast SSID, 1 = hide SSID)
- max_connection: maximum simultaneous connected clients (1-4)
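Putting the parameters above together, here is a minimal, hedged example; the SSID, password, channel, and client limit below are arbitrary placeholders, not recommended values:

#include <WiFi.h>

void setup() {
  Serial.begin(115200);
  WiFi.mode(WIFI_AP);
  // SSID "ESP32-AP", password "123456789", channel 6, SSID broadcast (0), up to 4 clients
  WiFi.softAP("ESP32-AP", "123456789", 6, 0, 4);
  Serial.print("AP IP address: ");
  Serial.println(WiFi.softAPIP());
}

void loop() {
}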
We have a complete tutorial explaining how to set up the ESP32 as an access point:
Wi-Fi Station + Access Point
The ESP32 can be set as a Wi-Fi station and access point simultaneously. Set its mode to WIFI_AP_STA.
WiFi.mode(WIFI_AP_STA);
Scan Wi-Fi Networks
The ESP32 can scan nearby Wi-Fi networks within its Wi-Fi range. In your Arduino IDE, go to File > Examples > WiFi > WiFiScan. This will load a sketch that scans Wi-Fi networks within the range of your ESP32 board.
This can be useful to check whether the Wi-Fi network you're trying to reach is within range of your board. A Wi-Fi project often fails simply because the board cannot connect to the router due to insufficient signal strength.
The full sketch ships with the ESP32 Arduino core under File > Examples > WiFi > WiFiScan; its key calls are described below, and a minimal sketch that puts them together follows the list of encryption types.
You can upload it to your board and check the available networks as well as the RSSI (received signal strength indicator).
WiFi.scanNetworks() returns the number of networks found.
int n = WiFi.scanNetworks();
After the scanning, you can access the parameters about each network.
WiFi.SSID() prints the SSID for a specific network:
Serial.print(WiFi.SSID(i));
WiFi.RSSI() returns the RSSI of that network. RSSI stands for Received Signal Strength Indicator. It is an estimated measure of power level that an RF client device is receiving from an access point or router.
Serial.print(WiFi.RSSI(i));
Finally, WiFi.encryptionType() returns the network encryption type. That specific example puts a * in the case of open networks. However, that function can return one of the following options (not just open networks):
- WIFI_AUTH_OPEN
- WIFI_AUTH_WEP
- WIFI_AUTH_WPA_PSK
- WIFI_AUTH_WPA2_PSK
- WIFI_AUTH_WPA_WPA2_PSK
- WIFI_AUTH_WPA2_ENTERPRISE
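Putting those calls together, a minimal scan loop might look like the sketch below. This is a trimmed illustration based on the calls documented above, not a verbatim copy of the bundled example:

#include <WiFi.h>

void setup() {
  Serial.begin(115200);
  WiFi.mode(WIFI_STA);   // station mode is required for scanning
  WiFi.disconnect();     // make sure we are not connected to an access point
  delay(100);

  int n = WiFi.scanNetworks();   // returns the number of networks found
  Serial.printf("%d networks found\n", n);
  for (int i = 0; i < n; i++) {
    Serial.print(WiFi.SSID(i));  // network name
    Serial.print(" | RSSI: ");
    Serial.print(WiFi.RSSI(i));  // signal strength in dBm
    Serial.print(" | encryption: ");
    Serial.println(WiFi.encryptionType(i) == WIFI_AUTH_OPEN ? "open" : "secured");
  }
}

void loop() {
}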
Connect to a Wi-Fi Network
To connect the ESP32 to a specific Wi-Fi network, you must know its SSID and password. Additionally, that network must be within the ESP32 Wi-Fi range (to check that, you can use the previous example to scan Wi-Fi networks).
You can use the following function to connect the ESP32 to a Wi-Fi network initWiFi():
void initWiFi() {
  WiFi.mode(WIFI_STA);
  WiFi.begin(ssid, password);
  Serial.print("Connecting to WiFi ..");
  while (WiFi.status() != WL_CONNECTED) {
    Serial.print('.');
    delay(1000);
  }
  Serial.println(WiFi.localIP());
}
The ssid and password variables hold the SSID and password of the network you want to connect to.
// Replace with your network credentials
const char* ssid = "REPLACE_WITH_YOUR_SSID";
const char* password = "REPLACE_WITH_YOUR_PASSWORD";
Then, you simply need to call the initWiFi() function in your setup().
How Does It Work?
Let's take a quick look at how this function works.
First, set the Wi-Fi mode. If the ESP32 will connect to another network (access point/hotspot), it must be in station mode.
WiFi.mode(WIFI_STA);
Then, use WiFi.begin() to connect to a network. You must pass as arguments the network SSID and its password:
WiFi.begin(ssid, password);
Connecting to a Wi-Fi network can take a while, so we usually add a while loop that keeps checking if the connection was already established by using WiFi.status(). When the connection is successfully established, it returns WL_CONNECTED.
while (WiFi.status() != WL_CONNECTED) {
Get Wi-Fi Connection Status
To get the status of the Wi-Fi connection, you can use WiFi.status(). It returns WL_CONNECTED once the board is connected to an access point; other return values (for example WL_IDLE_STATUS, WL_NO_SSID_AVAIL, WL_CONNECT_FAILED, and WL_DISCONNECTED) describe the remaining connection states.
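As a minimal illustration of acting on that value:

if (WiFi.status() == WL_CONNECTED) {
  Serial.println("Connected to an access point");
} else {
  Serial.println("Not connected");
}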
Get WiFi Connection Strength
To get the WiFi connection strength, you can simply call WiFi.RSSI() after a WiFi connection.
Here’s an example:
#include <WiFi.h>

// Replace with your network credentials (STATION)
const char* ssid = "REPLACE_WITH_YOUR_SSID";
const char* password = "REPLACE_WITH_YOUR_PASSWORD";

void initWiFi() {
  WiFi.mode(WIFI_STA);
  WiFi.begin(ssid, password);
  Serial.print("Connecting to WiFi ..");
  while (WiFi.status() != WL_CONNECTED) {
    Serial.print('.');
    delay(1000);
  }
  Serial.println(WiFi.localIP());
}

void setup() {
  Serial.begin(115200);
  initWiFi();
  Serial.print("RSSI: ");
  Serial.println(WiFi.RSSI());
}

void loop() {
}
Insert your network credentials and upload the code.
Open the Serial Monitor and press the ESP32 on-board RST button. It will connect to your network and print the RSSI (received signal strength indicator).
A lower absolute value means a stronger Wi-Fi connection.
Get ESP32 IP Address
When the ESP32 is set as a Wi-Fi station, it can connect to other networks (like your router). In this scenario, the router assigns a unique IP address to your ESP32 board. To get your board’s IP address, you need to call WiFi.localIP() after establishing a connection with your network.
Serial.println(WiFi.localIP());
Set a Static ESP32 IP Address
Instead of getting a randomly assigned IP address, you can set an available IP address of your preference to the ESP32 using WiFi.config().
Outside the setup() and loop() functions, define the following variables with your own static IP address and the corresponding gateway IP address. By default, the following code assigns the IP address 192.168.1.184, which works with the gateway 192.168.1.1.
// Set your Static IP address
IPAddress local_IP(192, 168, 1, 184);
// Set your Gateway IP address
IPAddress gateway(192, 168, 1, 1);

IPAddress subnet(255, 255, 0, 0);
IPAddress primaryDNS(8, 8, 8, 8);   // optional
IPAddress secondaryDNS(8, 8, 4, 4); // optional
Then, in the setup() you need to call the WiFi.config() method to assign the configurations to your ESP32.
// Configures static IP address
if (!WiFi.config(local_IP, gateway, subnet, primaryDNS, secondaryDNS)) {
  Serial.println("STA Failed to configure");
}
The primaryDNS and secondaryDNS parameters are optional and you can remove them.
We recommend reading the following tutorial to learn how to set a static IP address:
Disconnect from Wi-Fi Network
To disconnect from a previously connected Wi-Fi network, use WiFi.disconnect():
WiFi.disconnect()
Reconnect to Wi-Fi Network After Lost Connection
To reconnect to Wi-Fi after a connection is lost, you can use WiFi.reconnect() to try to reconnect to the previously connected access point:
WiFi.reconnect()
Or, you can call WiFi.disconnect() followed by WiFi.begin(ssid,password).
WiFi.disconnect(); WiFi.begin(ssid, password);
Alternatively, you can also try to restart the ESP32 with ESP.restart() when the connection is lost.
You can add something like the snippet below to your loop() that checks once in a while if the board is connected.
unsigned long currentMillis = millis();
// if WiFi is down, try reconnecting
if ((WiFi.status() != WL_CONNECTED) && (currentMillis - previousMillis >= interval)) {
  Serial.print(millis());
  Serial.println("Reconnecting to WiFi...");
  WiFi.disconnect();
  WiFi.reconnect();
  previousMillis = currentMillis;
}
Don’t forget to declare the previousMillis and interval variables. The interval corresponds to the period of time between each check in milliseconds (for example 30 seconds):
unsigned long previousMillis = 0; unsigned long interval = 30000;
Here’s a complete example.
/*"; unsigned long previousMillis = 0; unsigned long interval = 30000; void initWiFi() { WiFi.mode(WIFI_STA); WiFi.begin(ssid, password); Serial.print("Connecting to WiFi .."); while (WiFi.status() != WL_CONNECTED) { Serial.print('.'); delay(1000); } Serial.println(WiFi.localIP()); } void setup() { Serial.begin(115200); initWiFi(); Serial.print("RSSI: "); Serial.println(WiFi.RSSI()); } void loop() { unsigned long currentMillis = millis(); // if WiFi is down, try reconnecting every CHECK_WIFI_TIME seconds if ((WiFi.status() != WL_CONNECTED) && (currentMillis - previousMillis >=interval)) { Serial.print(millis()); Serial.println("Reconnecting to WiFi..."); WiFi.disconnect(); WiFi.reconnect(); previousMillis = currentMillis; } }
This example shows how to connect to a network and checks every 30 seconds if it is still connected. If it isn’t, it disconnects and tries to reconnect again.
You can read our guide: [SOLVED] Reconnect ESP32 to Wi-Fi Network After Lost Connection.
Alternatively, you can also use Wi-Fi events to detect that the connection was lost and call a function that handles what to do when that happens (see the next section).
ESP32 Wi-Fi Events
The ESP32 can raise events for the main stages of a Wi-Fi connection (station start and stop, connected, got IP, lost IP, disconnected) as well as access-point and scan events.
For a complete example on how to use those events, in your Arduino IDE, go to File > Examples > WiFi > WiFiClientEvents.
That sketch defines the network credentials, calls WiFi.begin(ssid, password), registers handlers with WiFi.onEvent(), and prints a human-readable message for every Wi-Fi event the ESP32 raises.
With Wi-Fi Events, you don’t need to be constantly checking the Wi-Fi state. When a certain event happens, it automatically calls the corresponding handling function.
Reconnect to Wi-Fi Network After Lost Connection (Wi-Fi Events)
Wi-Fi events can be useful to detect that a connection was lost and try to reconnect right after (use the SYSTEM_EVENT_STA_DISCONNECTED event). Here's a sample:

#include <WiFi.h>

const char* ssid = "REPLACE_WITH_YOUR_SSID";
const char* password = "REPLACE_WITH_YOUR_PASSWORD";

void WiFiStationConnected(WiFiEvent_t event, WiFiEventInfo_t info){
  Serial.println("Connected to AP successfully!");
}

void WiFiGotIP(WiFiEvent_t event, WiFiEventInfo_t info){
  Serial.println("WiFi connected");
  Serial.println("IP address: ");
  Serial.println(WiFi.localIP());
}

void WiFiStationDisconnected(WiFiEvent_t event, WiFiEventInfo_t info){
  Serial.println("Disconnected from WiFi access point");
  Serial.print("WiFi lost connection. Reason: ");
  Serial.println(info.disconnected.reason);
  Serial.println("Trying to Reconnect");
  WiFi.begin(ssid, password);
}

void setup(){
  Serial.begin(115200);

  // delete old config
  WiFi.disconnect(true);
  delay(1000);

  WiFi.onEvent(WiFiStationConnected, SYSTEM_EVENT_STA_CONNECTED);
  WiFi.onEvent(WiFiGotIP, SYSTEM_EVENT_STA_GOT_IP);
  WiFi.onEvent(WiFiStationDisconnected, SYSTEM_EVENT_STA_DISCONNECTED);

  /* Remove WiFi event
  Serial.print("WiFi Event ID: ");
  Serial.println(eventID);
  WiFi.removeEvent(eventID); */

  WiFi.begin(ssid, password);

  Serial.println();
  Serial.println();
  Serial.println("Wait for WiFi... ");
}

void loop(){
  delay(1000);
}
How Does It Work?
In this example we’ve added three Wi-Fi events: when the ESP32 connects, when it gets an IP address, and when it disconnects: SYSTEM_EVENT_STA_CONNECTED, SYSTEM_EVENT_STA_GOT_IP, SYSTEM_EVENT_STA_DISCONNECTED.
When the ESP32 station connects to the access point (SYSTEM_EVENT_STA_CONNECTED event), the WiFiStationConnected() function will be called:
WiFi.onEvent(WiFiStationConnected, SYSTEM_EVENT_STA_CONNECTED);
The WiFiStationConnected() function simply prints that the ESP32 connected to an access point (for example, your router) successfully. However, you can modify the function to do any other task (like light up an LED to indicate that it is successfully connected to the network).
void WiFiStationConnected(WiFiEvent_t event, WiFiEventInfo_t info){ Serial.println("Connected to AP successfully!"); }
When the ESP32 gets its IP address, the WiFiGotIP() function runs.
WiFi.onEvent(WiFiGotIP, SYSTEM_EVENT_STA_GOT_IP);
That function simply prints the IP address on the Serial Monitor.
void WiFiGotIP(WiFiEvent_t event, WiFiEventInfo_t info){ Serial.println("WiFi connected"); Serial.println("IP address: "); Serial.println(WiFi.localIP()); }
When the ESP32 loses the connection with the access point (SYSTEM_EVENT_STA_DISCONNECTED), the WiFiStationDisconnected() function is called.
WiFi.onEvent(WiFiStationDisconnected, SYSTEM_EVENT_STA_DISCONNECTED);
That function prints a message indicating that the connection was lost and tries to reconnect:
void WiFiStationDisconnected(WiFiEvent_t event, WiFiEventInfo_t info){ Serial.println("Disconnected from WiFi access point"); Serial.print("WiFi lost connection. Reason: "); Serial.println(info.disconnected.reason); Serial.println("Trying to Reconnect"); WiFi.begin(ssid, password); }
ESP32 WiFiMulti
The ESP32 WiFiMulti allows you to register multiple networks (SSID/password combinations). The ESP32 will connect to the Wi-Fi network with the strongest signal (RSSI). If the connection is lost, it will connect to the next network on the list. This requires that you include the WiFiMulti.h library (you don’t need to install it, it comes by default with the ESP32 package).
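A minimal, hedged sketch of the idea follows; the network names and passwords are placeholders:

#include <WiFi.h>
#include <WiFiMulti.h>

WiFiMulti wifiMulti;

void setup() {
  Serial.begin(115200);
  WiFi.mode(WIFI_STA);

  // Register every network the board is allowed to use
  wifiMulti.addAP("ssid_network_1", "password_1");
  wifiMulti.addAP("ssid_network_2", "password_2");

  // run() connects to the strongest registered network
  if (wifiMulti.run() == WL_CONNECTED) {
    Serial.println(WiFi.localIP());
  }
}

void loop() {
  // Calling run() periodically lets the board reconnect or switch networks if the link drops
  wifiMulti.run();
  delay(1000);
}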
To learn how to use WiFiMulti, read the following tutorial:
Change ESP32 Hostname
To set a custom hostname for your board, call WiFi.setHostname(YOUR_NEW_HOSTNAME); before WiFi.begin();
The default ESP32 hostname is espressif.
There is a method provided by the WiFi.h library that allows you to set a custom hostname.
First, start by defining your new hostname. For example:
String hostname = "ESP32 Node Temperature";
Then, call the WiFi.setHostname() function before calling WiFi.begin(). You also need to call WiFi.config() as shown below:
WiFi.config(INADDR_NONE, INADDR_NONE, INADDR_NONE, INADDR_NONE); WiFi.setHostname(hostname.c_str()); //define hostname
You can copy the complete example below:
/*"; String hostname = "ESP32 Node Temperature";: }
You can use this previous snippet of code in your projects to set a custom hostname for the ESP32.
Important: you may need to restart your router for the changes to take effect.
After this, if you go to your router settings, you’ll see the ESP32 with the custom hostname.
Wrapping Up
This article was a compilation of some of the most used and useful ESP32 Wi-Fi functions. Although there are plenty of examples of using the ESP32 Wi-Fi capabilities, there is little documentation explaining how to use the Wi-Fi functions with the ESP32 using Arduino IDE. So, we’ve decided to put together this guide to make it easier to use ESP32 Wi-Fi related functions in your projects.
If you have other useful suggestions, you can share them on the comments’ section.
We hope you’ve found this tutorial useful.
Learn more about the ESP32 with our resources:
78 thoughts on “ESP32 Useful Wi-Fi Library Functions (Arduino IDE)”
Hi
Your blog is very interresting and helps a lot, but i miss the information to change the hostname of the esp32. Is it possible to change the Espressif unit name to a project specifish name.
I know i can change the name in the wlan router but the unit name still is Espressif
Regards
Hi,
You can change ESP32 hostname using commands like this:
String MA02_ID = “MA02_09”;
WiFi.setHostname(MA02_ID.c_str()); //define hostname
Hi Werner,
Try this, it should do the trick 😉
WiFi.mode(WIFI_MODE_STA);
WiFi.config(INADDR_NONE, INADDR_NONE, INADDR_NONE, INADDR_NONE);
WiFi.setHostname(YOUR_HOSTNAME);
WiFi.begin(YOUR_WIFI_SSID, YOUR_WIFI_PASS);
Hi,
Unfortunately doesn’t work for me – tried it on 2 different boards and with different sized hostnames. Small hostnames just continue to show expressif on network scan and larger hostname just gives a blank 🙁
Regards, Sandy
Sorry to hear that…
It works perfectly on my side!
Even with a rather long name with spaces.
Try to refresh the DHCP client list on your router.
Hi,
Sorry for the delay.
I rebooted my router and lo and behold the changed hostnames appeared – thank you 🙂
Sandy
I wanted to post an image to illustrate my point in my last comment, but it didn’t work. You can see it by clicking on the link.
The arduino-esp32 framework has just been updated to version 1.0.5. Some of the fixes include the following:
ad4cf146 Rework setHostname for WiFi STA
5de03a39 Fix WiFi STA config IP to INADDR_NONE results in 255.255.255.255
See if this solves your problem?
Is there a way to quickly do a WiFi Scan for a SPECIFIC SSID and, when it detects the WiFi SSID is available (or not), does something with this information in the sketch?
Hi Jim,
Something like that?
#include <Arduino.h>
#include <WiFi.h>
const char* SPECIFIC_SSID = "MyNetwork";
const char* ENC_TYPE[] = {
"Open",
"WEP",
"WPA_PSK",
"WPA2_PSK",
"WPA_WPA2_PSK",
"WPA2_ENTERPRISE",
"MAX"
};
struct WiFiInfo {
bool found;
int32_t channel;
int32_t rssi;
wifi_auth_mode_t auth_mode;
} wifi_info;
void findWiFi(const char *ssid, WiFiInfo *info) {
info->found = false;
int16_t n = WiFi.scanNetworks();
for (uint8_t i=0; i<n; ++i) {
if (strcmp(WiFi.SSID(i).c_str(), ssid) == 0) {
info->found = true;
info->channel = WiFi.channel(i);
info->rssi = WiFi.RSSI(i);
info->auth_mode = WiFi.encryptionType(i);
return;
}
}
}
void setup() {
Serial.begin(115200);
findWiFi(SPECIFIC_SSID, &wifi_info);
Serial.printf(wifi_info.found
? "SSID: %s, channel: %i, RSSI: %i dBm, encryption: %s\n"
: "SSID: %s ... could not be found\n",
SPECIFIC_SSID,
wifi_info.channel,
wifi_info.rssi,
ENC_TYPE[wifi_info.auth_mode]);
}
void loop() {}
Sorry… formatting with Markdown didn’t work properly :-/
But you should be able to format the code in your favorite editor.
I fixed that by publishing the code on GitHub Gist 😉
Hi Stéphane,
Thanks for sharing this.
Regards,
Sara
This worked perfectly. Thank you! I wonder if there is a faster way to get the SSID without having to scan for all networks first, and then isolating the network I’m searching for. Is there a way to do this?
hi, I hope someone can help me with this issue becasue I have weeks working on it and nothing looks to work….. I have a esp32 with my basic code just to connect with my wifi, not matter what I do, it does not connect, always shows me the error disconnected from AP (event 5)…. do you have any idea what is this happening? thanks
To use as a WiFi reference, there is a couple of things I would’ve liked to see included; i.e., commands to manipulate the MAC address of a board, and an example using the soft_AP and STA modes together.
As Werner noted, if there is a way to redefine the identifier, that would be great to know too!
That said, it is still a great article, and I would love to see it expanded with more examples of the more obscure commands’ responses. Maybe a downloadable table of the command/method/event, use format, possible responses, and any comments (such as “only valid in STA mode” or, a link to an example. Most of this is already in the article, just not well summarized, so hard to locate.
Cheers!
+1 for an example of how the combination WIFI_AP_STA works.
Thanks,
Dennis
Great tutorial!
Thanks for your work.
I found the automatic reconnection feature after the card disconnected very interesting. However, they seem to understand that this doesn’t always work. Especially when using Blynk. Has anyone had any experience in this regard?
Greetings. Mike.
Great article. Would have liked to see how to use both modes to make a WiFi extender on ESP32. There is little info on this on the web although I know it is possible.
Check this sentences pls: “Or, you can call WiFi.diconnect() followed by WiFi.begin(ssid,password)”. Must WiFi.disconnect
Hi.
Thanks. You are right.
It’s fixed now!
Regards,
Sara
Is it now possible to run ESP-Now together with WiFi in STA mode?
My last state was: ESP-Now and WiFi must share the same channel and the channel number must be 1. Even when the router starts a connection at channel 1 in STA mode, if the router changes the channel to avoid traffic collisions, the ESP-Now connection breaks.
Hi,
Really good – thanks!
I would echo Dave’s request for an example using the soft_AP and STA modes together.
Sandy
Thanks for the suggestion.
I’ll add that to my list.
Regards,
Sara
Great Tutorial!
I am having some trouble connecting to my local WiFi and I’m sure this info will help me understand what is happening.
Do you have something similar for the ESP8266?
What I would find most useful would be some sample code that:
attempted a connection with stored network ID and credentials
if this failed fall back to AP mode, so that a user can connect, login to a webUI, save new network details, and then reboot / attempt to reconnect.
No-one want to recompile code, just so a device can change networks.
Hi.
Thanks for the suggestion.
In fact, we have a project like that in our most recent eBook:
I’ll probably release that project or a simplified free version of that project soon.
Regards,
Sara
Very helpful artical. Thank you very much
It’s a pity that you omitted
WiFi.persistent()command. I think, that this function is one of the most important when using WiFi and literally nobody knows about it.
In short: This function is true on default and causes, that every time (on every reboot) when calling WiFi.begin the ssid+pass are written into flash. This is a flash killer when using WiFi and deepsleep together.
Thanks for sharing that.
I’ll add it to the tutorial in the next update.
Regards,
Sara
Hi,
Thank you for publishing the article.
Regarding Reconnect to Wi-Fi Network After Lost Connection code
I have a need to modify the while loop so it checks for two conditions for example
(WiFi.status() != WL_CONNECTED) or (WiFi.status() != WL_NO_SSID_AVAIL)
Could you advice me of the best approach
Best regards
I have found that setting a static IP address in Station mode sometimes works, and sometimes not. I guess it depends on the router and a bunch of advanced network stuff that I dont understand. Nevertheless: this means for a user who wants to access the ESP32 webserver page, he/she must know the IP. How do you solve this when that user does not have access to the serial print nor some ESP-attached display?
A static IP address would be a great solution, but it is just not reliable enough. I have also tried with mDNS and also that was unreliable (worked on iPhone but not Android).
This is a use case where I would give a project as a gift to someone who doesnt know Arduino and can’t expect them to read Serial monitor or something like that. I haven’t found a true solution to this problem anywhere. How do you solve it?
Hi Amin,
normally a DHCP server on the router supplies IP addresses to the clients from a list of allowed addresses. If you set a static IP address in Station mode, this address must be excluded from the list of addresses the router is allowed to supply, otherwise two same addresses clash. Look in the configuration of your router, search for a DHCP entry and exclude your static IP address. Then static IP addresses are really reliable.
Thank you for your reply Peter. That would totally make static IP addresses work. But my use case is when I give a project as a gift to someone to use in their home network. I can’t expect them (think your mother haha) to go in and mess in the config of their router. I am looking for a solid and simple solution that does not require reading the serial monitor, or attaching a screen (one time use only to find out the IP!) nor changing their router settings. Most people can’t do this and I want to build something for most people.
Hi Amin,
in your case you cannot go with a static IP address. You know the MAC (physical) address of your ESP32 board or you can set the MAC address of the ESP32 to your own address (see RUI’s tutorials for details). Then you need to identify the dynamic IP address corresponding to your MAC address within your client app. This is done by the ReverseARP protocol RARP. On Windows or Linux use “arp -a” which creates a list of all MAC addresses and the corresponding IP addresses in your network. See for details.
Peter
If you don’t want to attach an LCD or TFT screen to your device, you can flash a simple LED to reveal the last byte of the IP address, going through the 3 decimal digits that make it up for example… assuming the person knows the address class of their private network (which they can view on any of their connected devices) 😋
Hi
Thank you for publishing the article. Regarding the while loop I have a need to modify the while loop so it checks for two conditions for example
(WiFi.status() != WL_CONNECTED) or (WiFi.status() != WL_NO_SSID_AVAIL)
Could you advice me of the best approach
Best Regards
Is there a way to show my gratitude, Rui and Sara? The books and tutorials that you produce are nothing short of fantastic. It’s a joy to work on a project that incorporates your work knowing that you have published information and code that is accurate and complete. You two are making a huge impact on the world of IOT and data networking that will advance the technology as well as advance the knowledge of thousands of us nerds! Thank you for what you do for the world every day!
Hi Bob.
Thank you so much for your nice words.
I’m glad you enjoy our tutorials and eBooks.
There are several ways to support our work. See here:.
Regards,
Sara
I was looking for same type of content. Thanks for doing enhanced research for us. I will try it and give my feedback again.
Hi Sara & Rui,
Thank you for another great tutorial. You guys are really the best on the ESP tutorials for many reasons. Please forgive the length of this little comment. Note to all of you: I do not make a dime writing what I post below. I just love the work that Rui & Sara are doing and want them to keep doing it forever!
I have a stressful job that I actually love, (regulatory consultant helping people comply with impossible regulations), but as far as concentration zen time to re-load a kinder gentler me, reading & following the Random Nerds tutorials & courses is the best stress relief I have had in 50 years (neglecting some obvious things like swimming, fishing, time with wife, and family gatherings, etc.) No kidding!
Note to Bob above about support. You can’t go wrong purchasing RN courses at their.
The Random Nerd courses are very comprehensive, better organized, easy to follow, inexpensive, and just pure excellence compared to other courses on similar subjects. They have on-line and pdf version of the courses, as well as great videos for all the course material. They are the best organized courses with useful examples. Every time I have had a relevant question and email either Rui or Sara, they respond with really helpful info that is to the point and relevant. I just wish I had more time to spend reading their stuff and using it on my little projects.
As far as the boards used on their ESP32 courses, even though Rui & Sara mostly use the ESP-WROOM-32 varieties (30 pin and 38 pin Devkits mostly), their code runs on every ESP32 board that I have tried it on, by just paying attention to the pinout for the board and adapting the code a little.
Hope I did not miss Amin’s point, but maybe a suggestion for Amin: if you try out their code on a couple of ESP32’s with little on board displays, even though they are a little more expensive, like the M5Sticks, M5 Core, LILIGO TTGO T-Display, or LILIGO TTGO TS, or HelTec WiFi Kit V2, or even the MorphESP240, (Definitely use Rui & Sara’s Maker Advisor at for where to get a ESP32 board with display for the best price), then you will find one for a reasonable price that you can try your code on. Then when you are satisfied with how it works, just port it over to a very inexpensive DevKit V1 or whatever ESP board you like.
One last thing, I looked and seem to remember a Random Nerds tutorial for ESP32 autoconnect (initial connection and adding to existing code, something like the stuff on Hackster, IOTDesignpro, Circuit Digest, or Instructables:
And a great one by Frenoy Osburn,
Anyway, they work mostly, but I pray that Rui & Sara do a tutorial on that, because it it will be far superior to what I have tried so far. Maybe they have & I just can’t find it. Anyway, I might be confusing it with another source. Don’t get me wrong, the other authors tutorials & githubs are fine. Nonetheless, the way Rui & Sara do their stuff blows everyone else out of the water.
Thanks again Random Nerds. I wish words were sufficient to express my appreciation. I’ve already purchase and read all you courses (except 2).
Hi.
Thank you so much for your nice words and for supporting our work.
I’m speechless about your awesome testimonial. I just want to say “Thank you!”. It feels nice to know that people find value in our work.
The randomnerds website is not ours. I’ve deleted that line in your comment to not confused other people.
These are the sites we own:
Once again, thank you.
Regards,
Sara
I think the tutorial you are looking for is this one:
Regards,
Sara
Small correction
soft-AP stands for 'software enabled Access Point', meaning the feature is emulated in software. It has nothing to do with the fact that it's not connected to a wired network (in fact you can connect a softAP to a wired network and it's still a softAP).
have any example with “WiFi.mode (WIFI_STA_AP)”???
I’m just starting to use the Espressif ‘ESP-S2-SAOLA-1’ (ESP32-S2-WROVER)
I copied your demo but I’m not able to connect to my private WiFi using ‘SAOLA’ board. Using a different ESP32 board works fine.
Software never goes in WL_CONNECTED
‘ssid’ and ‘password’ are correct.
Serial.print(“Connecting to “);
Serial.println(ssid);
WiFi.begin(ssid, password);
while (WiFi.status() != WL_CONNECTED) {
delay(500);
Serial.print(“.”);
}
SAOLA board works fine for other functionality, also ‘Access Point’ implementation is fine! Only the ‘WIFI_STA’ give me trouble.
What’s wrong?
Something relate to ESP32-S2?
Thanks.
Hi, I’ve just started playing with ESP32s and this site has been a nice resource for me. I know that WPA2* is not very secure and I have discovered that EspressIf has added support for WPA3. I am however not sure if the Arduino IDE supports this yet, or even if they are working on adding this support.
Do you happen to know what is going on with WPA3 and Esp32 in relation to Arduino IDE?
Thanks!
Since arduino core 2.0, WPA3 is supported. I can't find a tutorial yet but hopefully soon here
Great tutorial!
Thanks for your work.
but, do you have any example “WiFi.mode (WIFI_AP_STA)” ??
For example, the user with a new ESP at home uses AP mode to select his home network and configure his email, after saving these data ESP restarts and enters Station operating mode connecting to this wifi network previously configured for send temperature data to a server using the e-mail registered as data, if it is necessary to reconfigure the wifi network again it should go back to the menu with the available wifi networks, it could have a physical button for this or a config/use jumper.
Thank you for this useful sample AP code.
It works with my WROOM board, except that ‘channel’ and ‘ssid_hidden’ parameters do nothing. I always get channel 1, and the SSID is always visible.
Seems the WiFi library is for 8266 devices, so thought I’d try for something more specific to the hardware. Unfortunately Espressif want us to use ESP-IDF. I don’t know how to apply their code to the Arduino IDE. (Could try with Eclipse, but it’s a bit clumsy.)
So I’ll have a play with the Esp32WifiManager and see what happens.
I was wondering if I could influence the signal strength, hence wanting to not use channel 1, which is pretty crowded in my house. Maybe not. Trying external antennas soldered to the boards. That helped a bit with a LilyGo ESP32+LCD board.
An update on my previous comment:
The ‘channel’ and ‘ssid_hidden’ parameters to WiFi.softAP() do take effect if the password is long enough. (A password of less than 8 chars doesn’t work, it is equivalent to no password.)
Excellent tutorial
My need is a bit similar to. ‘Admin’
I have an air quality sensor. CO2 VOCs etc.
Set as an AP
I want to go into a public place that has an open WiFi.
Scan for the open WiFi.
Select it with a cellphone
Then my unit would connect to that.
Ive spoken to some of the places i visit and so everyone would permit me to rest the air.
Hi.
Take a look at WiFiManager:
Regards,
Sara
great tutorial!
my system is not able to find the definition for this: WiFiEventInfo_t
I am including these headers: ESP8266WiFi.h ESPAsyncTCP.h ESPAsyncWebServer.h
any suggestions?
Hi.
This tutorial is for the ESP32, not for the ESP8266.
Regards,
Sara
Hi, this is very useful information. I wish I had found your article a few weeks ago when I started with my ESP32-C3 project. I’m working on a project that involves collecting CSI data between chips for distance ranging and I can collect CSI data at the moment only through the chips connecting one as station, the other as AP. It would be better if I could get the chips to broadcast packets and listen for these to generate the CSI data, rather than make individual connections so that the distance estimation can occur for multiple chips. Is this possible?
Hi.
Take a look at ESP-NOW and ESP-MESH communication:
–
–
Regards,
Sara
My understanding is that ESP-NOW won’t provide CSI data, but I haven’t played around with it yet myself. Thanks for the reply.
ESP-MESH looks promising thanks!
ATTN: Arduino IDE libraries now seem to require different constant and type names.
This code does not work when using the Arduino IDE (Nov 2021). The constant names that work are different and (to add to the confusion) these (SYSTEM) names are still defined. For example, instead of
SYSTEM_EVENT_STA_CONNECTED
use ARDUINO_EVENT_WIFI_STA_CONNECTED.
You can find all of the connect constants and types (some type names are also different) here:
Thanks for all the great tutorials.
Cheers,
–joey
Hi.
Thanks for pointing that out.
I’ll test it soon and update the tutorial if needed.
Regards,
Sara
WPA3 is now supported in arduino core 2.x. Can the tutorial be updated with this?
Is there a simple speed test I can run to test signal strength/quality? I’m trying to run an app where a camera image captured is uploaded to a cloud drive, but get beacon timeouts that trash the wifi connection. So I’m trying to diagnose things.
Hi.
Check example 5:
Regards,
Sara
ESP32 ESP-Now and channels
My ESP32 ESP-Now transmitter is always on in my shed, it transmits weather data. It was apparently set to channel 2.
My ESP-Now receiver is in my workroom and sometimes is turned off for new programming.
One time the receiver went off IP address and channel. I booted my network system and the receiver channel came back to channel 2.
I read through the article, but didn’t find anything about keeping the channel number between two devices.
Apparently in the article it mentioned doing something to the transmitter to change the channel (but I don’t know what).
Is there a way to make the router set the channel # on the receiver?
Or is there another way?
Thanks
Hi.
In the ESP-NOW Web Server example, there’s a section that shows how to put the sender and receiver on the same channel automatically.
That solution was presented to us by one of our readers. Maybe it is better to take a look at his explanation and examples here:
Regards,
Sara
As I see ESP32 supports explicit LongRange mode (ofcourse at a cost of compromised band width), but how to enable it?
Hi, I made everything like here, but I have an error: initWiFi() was not declared in this scope. Why? I copy everything like in ebook and it doesn’t work((
Hi.
what code are you using?
Where are you calling the initWiFi() function and where is it declared?
Regards,
Sara
Dear all, I have a strange problem, I have this code for a DOIT ESP32 DEV KIT V1:
// Connect to Wi-Fi
WiFi.mode(WIFI_STA);
WiFi.begin(ssid, password);
Inside setup(), and I have no problems at all to compile it.
If I import the same file from PlatformIO, I get the following error:
C:/Users/josem/Documents/PlatformIO/Projects/220726-183829-esp32doit-devkit-v1/src/_20220724_Telegram_Control_ESP32_ESP8266.ino: In function ‘void setup()’:
C:/Users/josem/Documents/PlatformIO/Projects/220726-183829-esp32doit-devkit-v1/src/_20220724_Telegram_Control_ESP32_ESP8266.ino:262:7: error: ‘class WiFiClass’
has no member named ‘mode’
WiFi.mode(WIFI_STA);
Any suggestion about?
I mean I have no problem to compile, under Arduino IDE, but at opposite, I’m not able to compile inside VSC – PlatformIO
Hi.
Did you have the right board in the platformio.ini file?
Regards,
Sara
Yes, I think so, this is my PlatformIO.ini
[env:esp32doit-devkit-v1]
platform = espressif32
board = esp32doit-devkit-v1
framework = arduino
lib_extra_dirs = ~/Documents/Arduino/libraries
; Serial Monitor options
monitor_speed = 115200
Do you thing that it could be a good idea to publish it on Facebook too?
I don’t think it is necessary.
However, if you’re not able to solve the issue, you can search for further help.
Regards.
Sara
Why are you using this:
lib_extra_dirs = ~/Documents/Arduino/libraries
?
I’ts not because of me, just appeared when I imported the project from Arduino
I suggest creating a new project from the start using VS Code.
Regards,
Sara
|
https://randomnerdtutorials.com/esp32-useful-wi-fi-functions-arduino/?replytocom=556040
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
Data Software Design Pitfalls on Java: Should We Have a Constructor on JPA?
In this article, explore details on code, especially inside the Jakarta EE world, mainly to answer the questions: should we have a constructor on JPA, and why?
Data in any modern, distributed architecture, such as microservices, works like the veins of a system: it plays the role of state in otherwise stateless applications. On the other hand, we have the most popular paradigms in code, especially enterprise OOP. How do we combine the two, data and software design, primarily in Java?
This article will explore more details in code, especially inside the Jakarta EE world, mainly to answer the questions from a previous Jakarta JPA discussion: should we have a constructor on JPA entities, and why?
Context Data and Java
When we talk about Java and databases, the most common way to integrate the two worlds is through frameworks. These frameworks fall into categories based on their communication level and the usability of their API.
- Communication level: it defines how close the code sits to the database or to the OOP domain.
- A driver sits closer to the database and farther from the OOP domain. With a driver we can work smoothly in a data-oriented way; however, it usually means more boilerplate to get the data into the domain (e.g., JDBC).
- A mapping framework goes in the other direction: closer to OOP and farther from the database. It reduces the boilerplate around the domain, but we may face the impedance mismatch and performance issues (e.g., Hibernate and Panache).
- Usability of the API: given an API, how widely can you reuse it across different databases? Because SQL is a standard for relational databases, we usually have one API for all of them.
- A specific API works exclusively with one database. It usually picks up that vendor's updates quickly; nonetheless, replacing the database means changing the whole API (e.g., Morphia, Neo4j-OGM Object Graph Mapper).
- An agnostic API is a shared API that covers many databases. It makes it easier to swap databases, but vendor-specific updates or behavior become more challenging. A short sketch contrasting the driver and mapping levels follows this list.
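As a hedged illustration of the two communication levels, the sketch below contrasts a JDBC lookup with a JPA one. The "product" table, its columns, and the helper class are assumptions for the example; the Product entity is the one defined later in this article, and depending on the platform the EntityManager import may be javax.persistence instead of jakarta.persistence.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import jakarta.persistence.EntityManager;
import javax.sql.DataSource;

public class ProductLookup {

    // Driver level (JDBC): we read columns by hand and map them to the domain ourselves.
    public String findProductName(DataSource dataSource, long id) throws SQLException {
        try (Connection connection = dataSource.getConnection();
             PreparedStatement statement =
                     connection.prepareStatement("SELECT name FROM product WHERE id = ?")) {
            statement.setLong(1, id);
            try (ResultSet resultSet = statement.executeQuery()) {
                return resultSet.next() ? resultSet.getString("name") : null;
            }
        }
    }

    // Mapping level (JPA): the provider translates the Product entity to and from its table.
    public Product findProduct(EntityManager entityManager, long id) {
        return entityManager.find(Product.class, id);
    }
}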
DDD vs. Data-Oriented
Whenever we talk about software design in Java, we mainly talk about the OOP paradigm, while a database usually follows a different paradigm. The main difference is what we call the impedance mismatch.
OOP brings several approaches and good practices, such as encapsulation, composition, inheritance, polymorphism, etc., which have no direct support in a database.
You might recall the book "Clean Code", where Uncle Bob notes that OOP hides data to expose behavior. DDD works in this direction, building a ubiquitous language and a domain model, usually on top of OOP.
In his book "Data-Oriented Programming", author Yehonathan Sharvit proposes reducing complexity by promoting and treating data as a "first-class citizen."
This pattern summarizes three principles (a small record-based sketch follows the list):
- The code is data separated.
- Data is immutable.
- Data has flexible access.
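As a hedged sketch of those principles in Java, a record keeps the data separate from behavior, immutable, and uniformly accessible; the City name mirrors the entity used later in this article, and the Example class is only for illustration.

public record City(String city, String country) {
}

class Example {
    public static void main(String[] args) {
        // Data is immutable and accessed through the generated accessors; there are no setters.
        City brussels = new City("Brussels", "Belgium");
        System.out.println(brussels.city() + ", " + brussels.country());
    }
}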
That is the biggest tension between the two paradigms: it is hard to follow both simultaneously, and which one fits depends on the context.
JPA and Data
JPA is the most popular Java solution for relational databases. It is a Java standard, and several platforms build on it, such as Quarkus, Spring, and so on.
To fight the impedance mismatch, JPA has several features that reduce this friction, such as inheritance, where the JPA implementation engine translates to/from the database.
@Entity
@Inheritance(strategy = InheritanceType.SINGLE_TABLE)
public class Product {

    @Id
    private long id;

    @Column
    private String name;
    //...
}

@Entity
public class Computer extends Product {

    @Column
    private String version;
}

@Entity
public class Food extends Product {

    @Column
    private LocalDate expiry;
}
JPA and Constructor
Once we have the context, let's discuss this great Jakarta EE Ambassador discussion, and we also have a GitHub issue.
We understand that there are always trade-offs when discussing software architecture and design. Thus, the enterprise architecture requires both DDD and a data-oriented approach based on the context.
Recently, Brian Goetz wrote about Data-Oriented Programming in Java, where he talks about how to achieve success with data programming using features such as records and sealed classes.
It would be nice if we could explore and reuse records with JPA, but we have a legacy problem because JPA requires a default constructor.
The question is: should that be enough? Or should JPA support more than OOP/DDD and stop ignoring data programming? In my opinion, we should push for data programming even if it breaks the previously required default constructor.
"JPA requiring default constructors pretty much everywhere is a severe limitation to the entity design for dozens of reasons. Records make that pretty obvious. So, while you can argue that Persistence doesn't 'need ' to do anything regarding this aspect, I think it should. Because improving on this would broadly benefit Persistence, not only in persisting records." Oliver Drotbohm
We can imagine several scenarios where we can have benefits from the code design approach:
- An immutable entity: We have a read-only entity. The source is the database.
public class City {

    private final String city;

    private final String country;

    public City(String city, String country) {
        this.city = city;
        this.country = country;
    }

    public String getCity() {
        return city;
    }

    public String getCountry() {
        return country;
    }
}
- Force a bullet-proof entity: imagine we want an immutable entity that is guaranteed to be consistent at the moment it is instantiated. We can combine the constructor with Bean Validation so an entity is only ever created with valid values.
public class Player {

    private final String name;

    private final String city;

    private final MonetaryAmount salary;

    private final int score;

    private final Position position;

    public Player(@Size(min = 5, max = 200) @NotBlank String name,
                  @Size(min = 5, max = 200) @NotBlank String city,
                  @NotNull MonetaryAmount salary,
                  @Min(0) int score,
                  @NotNull Position position) {
        this.name = name;
        this.city = city;
        this.salary = salary;
        this.score = score;
        this.position = position;
    }
}
JPA and Proposal
We learned from Agile methodology to release continuously and do a baby-step process. Consequently, we can start with support on two annotations, get feedback, fail-fast and then move it forward.
As a first step, we can introduce a new annotation: Constructor. Once it is placed on a constructor, the provider ignores the field annotations and uses the constructor parameters instead. We can start by supporting two annotations on those parameters: Id and Column.
@Entity
public class Person {

    private final Long id;

    private final String name;

    @Constructor
    public Person(@Id Long id, @Column String name) {
        this.id = id;
        this.name = name;
    }
    //...
}
We should also support Bean Validation in this step.
@Entity
public class Person {

    @Id
    private final Long id;

    @Column
    private final String name;

    @Constructor
    public Person(@NotNull @Id Long id, @NotBlank @Column String name) {
        this.id = id;
        this.name = name;
    }
    //...
}
You can explore records in this case as well.
@Entity public record Person(@Id @NotNull Long id, @NotBlank @Column String name){}
Annotations on a record component of a record class may be propagated to members and constructors of the record class as specified in 8.10.3.
The baby step is proposed and done. The next step is to receive feedback and points from the community.
Conclusion
Software design, mainly in OOP, is a rich world and keeps bringing new perspectives. It is customary to review old concepts to arrive at new ones. It happened with CDI, which improved constructor usage to express better design, and it should happen to JPA with this proposal.
Opinions expressed by DZone contributors are their own.
|
https://dzone.com/articles/jpa-constructor?fromrel=true
|
CC-MAIN-2022-33
|
en
|
refinedweb
|
Windows Services are applications that run in the background and perform various tasks. They have no user interface and produce no visual output. Windows Services start automatically when the computer boots, do not require a logged-in user in order to execute, and can run under the context of any user, including the system account. Windows Services are controlled through the Service Control Manager, where they can be stopped, paused, and started as needed.
Create a Windows Service
Creating Windows Service is very easy with visual studio just follow the below steps to create windows service
Open Visual Studio --> Select File --> New --> Project --> select Windows Service.
Give the project the name WinServiceSample.
After entering the name, click the OK button. The newly created project should look like this.
In Solution Explorer, select the Service1.cs file and rename it to ScheduledService.cs, because that is the name used in this project; if you want to use another name for your service, use that instead.
After renaming the service, open ScheduledService.cs in design view, right-click and select Properties. In the Properties window, change the Name value to ScheduledService and the ServiceName to ScheduledService. The Properties window should look like this.
After changing the Name and ServiceName properties, open ScheduledService.cs in design view again and right-click on it to add installer files to the application.
Windows Installer is an installation and configuration service provided with Windows. The installer service enables better corporate deployment and provides a standard format for component management.
After clicking Add Installer, a designer screen is added to the project with two controls: serviceProcessInstaller1 and ServiceInstaller1.
Now right-click serviceProcessInstaller1, select Properties, and change Account to LocalSystem.
After setting those properties, right-click ServiceInstaller1, change the StartType property to Automatic, and give a proper name for the DisplayName property.
After setting all the properties, we need to write the code to run the Windows service at scheduled intervals.
If you look at the project structure, it contains a Program.cs file with the Main() method; if it is missing, write the Main() method in Program.cs as sketched below.
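A Main() for a service project typically looks like the following sketch (ScheduledService is the service class created above):
using System.ServiceProcess;

static class Program
{
    static void Main()
    {
        // run the service(s) defined in this project
        ServiceBase[] servicesToRun = new ServiceBase[]
        {
            new ScheduledService()
        };
        ServiceBase.Run(servicesToRun);
    }
}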
After adding the Main() method, open ScheduledService.cs and add the required namespaces (for example System.Timers and System.IO) to the code-behind of ScheduledService.cs.
If you look at the code-behind file, you will find two methods: OnStart and OnStop.
We will write the code in these two methods to start and stop the Windows service. Write code along the lines of the sketch below to run the service at scheduled intervals.
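A minimal sketch of the scheduled-service code (the log file path and the TraceService helper are illustrative, not necessarily the article's exact code):
using System;
using System.IO;
using System.ServiceProcess;
using System.Timers;

public partial class ScheduledService : ServiceBase
{
    private Timer timer = new Timer();

    public ScheduledService()
    {
        InitializeComponent();
    }

    protected override void OnStart(string[] args)
    {
        TraceService("Service started");
        // run OnElapsedTime every minute
        timer.Elapsed += new ElapsedEventHandler(OnElapsedTime);
        timer.Interval = 60000;
        timer.Enabled = true;
    }

    private void OnElapsedTime(object source, ElapsedEventArgs e)
    {
        TraceService("Another entry at " + DateTime.Now);
    }

    protected override void OnStop()
    {
        timer.Enabled = false;
        TraceService("Service stopped");
    }

    private void TraceService(string content)
    {
        // append a line to the log file
        using (StreamWriter sw = File.AppendText(@"C:\ScheduledServiceLog.txt"))
        {
            sw.WriteLine(content);
        }
    }
}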
In the OnStart method above, the ElapsedEventHandler event is wired up; this event is used to run the Windows service code every minute.
After you finish writing the code, build the application and install the Windows service. To install the Windows service, check this post, where I explain clearly how to install a Windows service and how to start it.
Now the service is installed. To start and stop the service, go to Control Panel --> Administrative Tools --> Services, then right-click the service and select Start.
Now the service is started, and you will be able to see entries in the log file we defined in the code.
Now open the log file in your folder; the output of the file will look like this.
109 comments :
Gooood Article Suresh
Thanks Mehtab Keep visiting...
I always visit your site.....keep it up Suresh..:)
nice Article suresh
can we schedule our application daily on specific time using win services.
thank you
hi rajesh,
here if you observe this windows service will run for every one minute based on that you can adjust your time it will work for you
thanks and keep it up
Hi in my windows service in the onstart method im writing some data from one text file to other text file. so lets say it runs for every 5 mins i.e it keeps on checking for files in the dir every five min and then process...
so w.r.t to your above code done i need to write the same code in OnElapsedTime or is it fine if its just in onstart method(since after every 5 min the service starts from onstart method?
it's ok you can write your code in onstart method it will work for you. if you observe above code i used onstart method it will run for every one minute
Easy and informative samples.. thanks
Helpful article, thanks.
Thanks
Suri
Nicely Explained...KEEP IT UP :)
Hi suresh how r u...? your explaination is very nice...keep it up:)
nice example..
but i think u forgot to start the timer after assigning interval
sorry i missed that line
its really very help full............
thanks suresh garu..............
Hi sir,
Please Answer this if you know My requirement is like this " I have folder in System when images copied into that folder that images names automatically inserted into database.
How can I do th is?
Regards
VeerendraNadh
@VeerendraNadh...
if your using application to save images in folder check this link
if your directly copying the images into folder then you need to insert image names automatically into database then you need to write windows service to check all the images in folder and save those names in database
Thank you. really nice article.
excellent share!
Great post. Very interesting to see. I usually do not read blog, but this post definatelly caught my attention.
Nice work, is there a way to log as user, giving the username and password.
very nice article,
thanks for sharing...
keep it up well done nicely explained
hi suresh sir, this is very easy to learn to newer. very nice explanation.
sir my requirement is i have to insert record from one table to another table at particular interval. please answer if any solution/idea
thanks
Thank You!! :) Its really great suresh...
nice article
gr8 Suresh... ths article really helped me a lot..
hey iam using windows service to insert record from one to another table on particular interval but it eats lots of memory when i run ths service under services.msc... any solution to flush the memory used by ths service.Thanks!
Hi, This tutorial is relay helpful. But I want to read data from access DB using timer. I write code in OnElapsedTime() and also try onStart() but still got error. It was "the service on local computer started and then stopped. some services stop automatically...". Please tell me what's the problem. I clear my log file also and googling lot of blogs. Please help me.
suresh can u tel me how to send automatic email notification on weely bases or daily for multiple recipients ....
Thanks ! Good post suresh
Hi Suresh,
I want to implement timer in asp.net
so that user can see how much time is left for his test to complete
I want to set timer for 30 mins to complete online test.
can you please help me ?
I created a server in C# with the services it provides furnished
as a
WellKnownServiceTypeEntry,
and used
System.Runtime.Remoting.RemotingConfiguration.RegisterWellKnownServiceType()
to make it available to clients over the network.
It works fine when the server is made as a console application, but doesn't work when the server is made as a Windows service. Please advise.
The Windows service runs as LocalSystem, was created in Visual Studio 2012. Firewall has been disabled.
Gud 1
Hi sir
In Windows service, how to display popup alert or system tray alert. Please give some ideas sir
nice... simple & clear.
Hi,Suresh
Very Nice and Impressive artical for begginers.
Now, I want to execute the stored procedure for every 5minutes which have the one out put parameter. So that where i need the call the procedure from the above code.
please help
thnaks in advance
Srikanth
very Good Support My Heart Full Thanks By KANNAN
hello sir,
i created one GUI application, now i want that application run as windows service.. meaning it should start when system is started , how to make my application run as i told above??
regards,
harini bangalore
this is very useful information
Hi suresh,
Nice blog.. really helped me out
vidur:nice one
Now Its Working fine...Thanks..
Very useful , your site willing to learn new stuff. Thank you.
its really very help full...
but I want to take backup of my database Every Day ,I need to write Windows Service,Give Code for that in C# . please...
thanks
Thanks so much for sharing.
Hai!! Thanks alot for your articles..:)God Bless
my requirement is: create windows service to run daily ONCE c#, fetch records from DB and export records to excel and send email that excel as a attachment. Please do the needful.
HI Suresh,
Could u please explain how to debug window service by step by step.
Thanks in adv.
-MadhuKrishna
Hi Suresh,
It was a wonderful tutorial. I had just one problem. Service couldn't write to text file. Although same code worked in Windows Form Application. Please help me out.
Thanks
Error 2 Unable to copy file "obj\x86\Debug\WindowsService1.exe" to "bin\Debug\WindowsService1.exe". The process cannot access the file 'bin\Debug\WindowsService1.exe' because it is being used by another process. WindowsService1
Gaurav Sharma Please help the solve the error
hi suresh...
I am getting Bad image format exception at the time of installation what i have to do pls help me..
sir i have made service build it and also run it
it is displayed in the list of the services but sir when i start the service in will give give the error "Could Not Start the Winservice service on local computer
Error 1067: the process terminated Unexpectedly."
my service name is Winservice
sir whats gone wrong ?
Hi Suresh...Nice article....I want to ask can we use this for ASP.NET application? Means I want to ask that consider a web page or application that is using windows service for printing tasks..
thanks and regards
Archit
Good article...! Neatly explained..
how do i get the user's entry out time automatically???? for example : like LOGIN time and LOGOUT time...
how do i get the user's entry out time automatically???? for example : like LOGIN time and LOGOUT time...
Hello Sir i have problem in start the service...my code is...
-------------------------------------------------------------------------------------------------------
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Diagnostics;
using System.Linq;
using System.ServiceProcess;
using System.Text;
using System.IO;
using System.Timers;
namespace Database_Configuration_Wizard
{
partial class DBConfigService : ServiceBase
{
Timer timer = new Timer();
MsAccess acce = new MsAccess();
public DBConfigService()
{
InitializeComponent();
}
//public void onCall()
//{
// this.OnStart(null);
//}
protected override void OnStart(string[] args)
{
this.GetData();
timer.Elapsed += new ElapsedEventHandler(OnElapsedTime);
timer.Interval = 10000;
timer.Enabled = true;
// TODO: Add code here to start your service.
}
private void OnElapsedTime(object source, ElapsedEventArgs e)
{
GetData();
}
public DataTable GetData()
{
DataTable data = new DataTable();
acce.testConn(Properties.Settings.Default.ConnString);
data = acce.getTable("SELECT Employee_Info.EmpName, RFID_ATTEN.EnteryDate, RFID_ATTEN.RFIDReaderIp FROM Employee_Info INNER JOIN RFID_ATTEN ON Employee_Info.EmployeeId = RFID_ATTEN.EmployeeId");
acce.testConn(Properties.Settings.Default.DBbackup);
foreach (DataRow row in data.Rows)
{
int i = acce.excuteQuery("INSERT INTO Attendances ([employee_code],[name_date],[action_event]) VALUES ('" + row[0] + "','" + row[1] + "','" + row[2] + "')");
}
return data;
}
protected override void OnStop()
{
timer.Enabled = false;
// TODO: Add code here to perform any tear-down necessary to stop your service.
}
}
}
please give suggestion asap....
thanks in advance....
Hi,
Currently i do have a requirement to develop a console based windows application which will check the status of all scheduled jobs scheduled in windows task scheduler and based on the job status it will fire email .
Anyone have any idea on this please.
Many Thanks,
Sisir
Hi,
I do have requirement which is of two parts :
1.Need to schedule a job in remote machine.Get all scheduled jobs in windows task scheduler and check the status of them.
2.Send alert email based on the job status(whether ran successfully or failed),
Any suggestion on this please.
Many Thanks,
Sisir
Hi,
can you tell me how to send email automatically in windows application based on the date specified in database.
ie,database table named sponser has 2 columns named sponsershipdate and type,by comparing these two values i have to send email automatically
Thanks you very much suresh.You article helped me to create a service easily and quickly.Keep it up.
hi suresh is there any method in .net start ,stop ,restart services in system tray .
Sir, This is Abhinav Singh 993 , Sir, whenever I am in problem I just view your website and my problem gets solved.
by reading the time from the database based on the time value the windows service has to run. how to do can u please help me?
Hi sir I have one requirement.
Actually I have one requirement.
in database there is one table called device table.there is one column called flag.
I will have to create one windows service which will be monitoring the table ,the work for the service will be if the flag is false for 1 hour then it will turn it to true.
Hi, I did same steps as above. But i couldn't find my service name in services(control panel). Could any one please help me
hi sir i want to disply Messagebox in a windows Service .... can we....???? please....help me...
hello sir I want to to the following.....
Create a DB having Price & Currency table. Price & Currency tables will be linked using referential constraint for the key CurrencyID (PK in Currency table). Price table will have Prices (Bid rate & Offer rate) of currencies for the different time stamps. For this, write a window service that will insert random prices for all the currencies in every 60 secs
Thanks
hello sir,
in a window service there is a function with try catch and i want that if exception generates than the value should be save into the database so plz guide me.
because in exception part function is not called.
can i call a method in exception part in window service????
sir,
please tell me, how to call windows service on hosting site or is it possible or not.
Hi Suresh how to save Backup files from DB in folder on Button Click in C# code
can u pls suggest me or help me
Excellent articles all urs...
i need help on windows 8 app development please start a section on windows store app development using c# .
Hi suresh,
this is really nice article thank you very much for your help.
now i want execute this service on every day at 8 am only so is it possible ??
now what happens suppose my computer restart then service start automatic at whichever time but i want it run at specific time
please help
beleave me suresh ur way of explanation is very very excellent with screen shots and samples,downloads..carry on ur good work
hi suresh
i need your help some prblem in intervals time in timer with write trace file
this my tracefile code. my prblem was trace file write function not properly work include end of post
private void tracefile(string content)
{
FileStream fs = new FileStream(@"D:\logingdetails.txt", FileMode.Open, FileAccess.Write);
StreamWriter sw = new StreamWriter(fs);
sw.BaseStream.Seek(0, SeekOrigin.End);
sw.WriteLine(content);
sw.Flush();
sw.Close();
}
this my event handler
private void time_Elapsed(object sender, ElapsedEventArgs e)
{
tracefile("Another Login :: " +DateTime.Now);
time.Stop();
}
this my onstart() funtion
protected override void OnStart(string[] args)
{
time.Interval = 6000;
time.Enabled = true;
time.Elapsed += new ElapsedEventHandler(time_Elapsed);
}
this my program.cs code
if(System.Diagnostics.Debugger.IsAttached)
{
Schdservice myser = new Schdservice();
string[] arg = new string [] {"agrs1","args2"};
myser.startdebub(arg);
System.Diagnostics.Debugger.Break();
myser.stopdebub();
}
logfile data
Service Start
Another Login :: 10/25/2013 5:27:44 PM
Another Login :: 10/25/2013 5:33:49 PM
Service Stopped
help me
thank in advance
Hi Suresh.. This Is Raffi Your Code Work as Best but If Any Developer want particular time like 10 AM , every day fire time interval for that time we need this kind stuff in real time
protected override void OnStart(string[] args)
{
//add this line to text file during start of service
TraceService("start service");
//handle Elapsed event
//ElapsedEventHandler this event is used to run the windows service for every 24 Hours
timer.Elapsed += new ElapsedEventHandler(OnElapsedTime);
int hours = Convert.ToInt32(ConfigurationSettings.AppSettings["DailyEventTriggerTime"]);
//set the time to today 10-00
DateTime t = DateTime.Now.Date.Add(TimeSpan.FromHours(hours));
TimeSpan ts = new TimeSpan();
ts = t - System.DateTime.Now;
if (ts.TotalMilliseconds < 0)
{
// the time span between dates
ts = t.AddDays(1) - System.DateTime.Now;
}
double totalSeconds = ts.TotalMilliseconds;
if (totalSeconds == 86400000)
{
//This statement is used to set interval to 24 Hours (= 8,64,00,000 milliseconds)
timer.Interval = 86400000;
//enabling the timer
timer.Enabled = true;
}
}
Hello Sir
how can create formatted message box in windows services using vb.net..msg box pop on right side bottom with good design [email protected]
Hi Suresh Its not working for Hours
Can You tell me how to write a public and my ownmethod in service/cs and how to call it.
public void OnWriteErrorLog(string error)
{
using (System.IO.StreamWriter file = new System.IO.StreamWriter(Application.StartupPath+@"\ATELogCheck.txt", true))
{
file.WriteLine(error);
}
}
Very good article..
Excellent Article, Very Useful.
Thanks Suresh
Nice article
super article. i have read only once and i got the complete idea...
Such a wonderful article about windows service.
This is really helpful.
Thank you so much.
Good Article but after installing service there is start and stop all options are disabled in my case please help
Hi Suresh,
How to debug this service through visual studio. I had written all complex logic in the OnStart method. While putting break point and started debugging, getting error , service not installed. I want to debug the service without installing it.please help
Thank you.
Thank you Very much it helps me a lot
Very nice and detailed article..Thanks :)
Hi Suresh,
good article it helped me lot.
i just wanted to know is it possible to link window services to other application.
for ex., i have one tool which is done in C# window application which execute testing scripts.
i want to capture of the time of start time script execution, failed time of script and end time of script. can you please suggest me on this.
Nice
very very useful for all beginers thanks lot ......keep rocks
This article helps me a lot to develop custom services.
Nice article. Useful for all beginners to develop their custom services.
Thanks a lot..
Regards
Muhammad Rabie
Software Engineer
thank you sir to giving me solution by this article.
HI sir,
can be change the interval time of windows service using any UI . is possible or not.please suggest me
Thanks
Anand Upadhyay
how to show a windows form on the desktop on service start? please help
Hi,
How I can write a windows service so it will call my URL and runs it.
Nice blog..
Nice Article ,it help to understand how to create the window service
Hi,
How I can write a windows service so it will show latest inserted row in my database..?
Hi
i have facing a problem while installing window service
can you help me to solving this problem?????????????
i have the following exception
The source was not found, but some or all event logs could not be searched. Inaccessible logs: Security
how i solve this problem ????????????????
https://www.aspdotnet-suresh.com/2011/06/creating-windows-service-in-c-or.html?showComment=1354769329772
OpenReviewIO Python API
Project description
/!\ Still in alpha stage and on its way to becoming stable. All feedback is welcome!
Overview
OpenReviewIO is a standard that describes a format for sharing media (shots, animations...) review information. Its main purpose is to guarantee review information compatibility across media reviewing tools. Please read the specifications for more information.
OpenReviewIO Python API is the main Python API for the ORIO standard, maintained by the designer of the standard.
Version
The version of the API is related to the version of the standard.
API 1.x.y <=> Standard 1.x
While in the alpha stage, the API is versioned 0.x.y but relates to the 1.x standard. It will become 1.x.y with the first stable release.
Last standard version: 1.0
Usage
import openreviewio as orio
Create a media review
review = orio.MediaReview("/path/to/media.ext")
Create a note
note = orio.Note(author="Alice")
Create content
Contents are defined by the standard version
There is a naming convention about the contents:
-
Comment means something related to the whole media.
-
Annotation means something related to a specific frame and duration of the media.
Text comment
text_comment = orio.Content.TextComment(body="My text comment")
Text annotation
text_annotation = orio.Content.TextComment( body="My text comment", frame=17, duration=20 )
Image comment
image = orio.Content.Image(path_to_image="/path/to/image_comment.png")
Image annotation
image_annotation = orio.Content.ImageAnnotation( frame=17, duration=20, path_to_image="/path/to/image_annotation.png" )
Add content to note
# Single content
note.add_content(text_comment)

# Several contents
note.add_content([text_annotation, image, image_annotation])
Add note to review
review.add_note(note)
Write media review to disk
# Write next to the media
review.write()

# Specifying a directory
review.write("/path/to/review_dir")
Export/Import a note as zip
# Export
exported_note_path = note.export("/path/to/folder", compress=True)

# Import
review.import_note(exported_note_path)
Examples
From content to review
# Content
text = orio.Content.TextComment(body="Banana")

# Note
new_note = orio.Note("Michel", content=text)

# Review
review = orio.MediaReview("/path/to/media.ext", note=new_note)
Reply to a note
# Main note
text = orio.Content.TextComment(body="Make the logo bigger.")
main_note = orio.Note("Michel", content=text)

# Reply to the main note
reply = orio.Content.TextComment(body="Done, I'm waiting for my visibility payment.")
note_reply = orio.Note("Michel", content=reply, parent=main_note)
Add a reference image to an image annotation
Useful for keeping an image as reference of the drawing.
image_annotation = orio.Content.ImageAnnotation(
    frame=17,
    duration=20,
    path_to_image="/path/to/image_annotation.png",
    reference_image="/path/to/reference_image.png"
)

# Or
image_annotation.reference_image = "/path/to/reference_image.png"
https://pypi.org/project/openreviewio/
Get protocol-specific service information.
X/Open Transport Interface Library (libxti.a)
#include <xti.h>
int t_getinfo (fd, info) int fd; struct t_info *info;
The t_getinfo subroutine returns the current characteristics of the underlying transport protocol and/or transport connection associated with the file descriptor specified by the fd parameter. The pointer specified by the info parameter returns the same information returned by the t_open subroutine, although not necessarily precisely the same values. This subroutine enables a transport user to access this information during any phase of communication.
The values of the fields have the following meanings:
If a transport user is concerned with protocol independence, the above sizes may be accessed to determine how large the buffers must be to hold each piece of information. Alternatively, the t_alloc subroutine may be used to allocate these buffers. An error results if a transport user exceeds the allowed data size on any subroutine. The value of each field may change as a result of protocol option negotiation during connection establishment (the t_optmgmt call has no effect on the values returned by the t_getinfo subroutine). These values will only change from the values presented to the t_open subroutine after the endpoint enters the T_DATAXFER state.
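As an illustration, a minimal sketch of calling t_getinfo after t_open; the transport provider name /dev/tcp and the fields printed are examples only:
#include <xti.h>
#include <fcntl.h>
#include <stdio.h>

int main(void)
{
    struct t_info info;
    /* "/dev/tcp" is only an example transport provider name */
    int fd = t_open("/dev/tcp", O_RDWR, NULL);

    if (fd < 0) {
        t_error("t_open failed");
        return 1;
    }
    if (t_getinfo(fd, &info) < 0) {
        t_error("t_getinfo failed");
        return 1;
    }
    /* size buffers from the returned characteristics */
    printf("max address size: %ld, max TSDU size: %ld\n",
           (long) info.addr, (long) info.tsdu);
    t_close(fd);
    return 0;
}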
ALL - apart from T_UNINIT.
On failure, t_errno is set to one of the following:
The t_alloc subroutine, t_open subroutine.
http://ps-2.kev009.com/tl/techlib/manuals/adoclib/libs/commtrf2/tgetinfp.htm
C/C++ Application How-Tos for Code Assistance
- What To Do When Your Project Has a Question Mark in the Projects Window
- Configuring Build Analyzer for Code Assistance
- Configuring Code Assistance for a Multi-Platform Project
- Configuring Code Assistance When You Cannot Build the Project
- Using Hyperlinks to Navigate Between Invocations and Declarations
- Finding All Definitions of a Namespace
What To Do When Your Project Has a Question Mark in the Projects Window
A question mark badge on the project node usually indicates a code assistance problem, such as:
Wrong or insufficient user include paths specified in the project, logical folder, or file properties
Wrong or insufficient user-defined macros specified in the project, logical folder, or file properties
Source file is included in the project by mistake
Header file is not included in any source files and hence is included in the project by mistake
If you hold the mouse cursor over the project folder, a tooltip displays some information about the problem. For more information, you can right-click the project and select Code Assistance →.
Configuring Build Analyzer for Code Assistance:
New compilation units are added to the IDE project.
Existing compilation units are modified with new or changed user-defined includes and macros.
Compilation units that are excluded from building are not excluded from code assistance. To turn the Build Analyzer off:
Right-click the project node in the Projects window and select Properties.
In the Project Properties dialog box, click the Code Assistance category.
Deselect the Use Build Analyzer option.
Configuring Code Assistance for a Multi-Platform Project
If you are developing a multi-platform project from existing code, you can use the same IDE project for different platforms.
Using Hyperlinks to Navigate Between Invocations and Declarations
Hyperlink navigation lets you jump from the invocation of a function, class, method, variable, or constant to its declaration. To use a hyperlink, do one of the following:
Mouse over a class, method, variable, or constant while pressing Ctrl. A hyperlink appears along with a tooltip with information about the element. Click the hyperlink and the editor jumps to the declaration. Press Alt+Left to jump back to the invocation.
Mouse over an identifier and press Ctrl+B. The editor jumps to the declaration.
Press Alt+Left to jump back to the invocation. Press Alt+Left and Alt+Right to move backward and forward through the history of the cursor position.
You can also right-click the item and select Navigate > Go to Declaration/Definition, or other options to navigate through your code.
Finding All Definitions of a Namespace
A namespace can be defined in different files of the project. To navigate between different namespace definitions, use the Classes window (Ctrl-9). Right-click the namespace you are interested in and choose All Declarations. You will see a list of all definitions sorted by file names.
https://netbeans.apache.org/kb/docs/cnd/HowTos.html
NetBeans Nodes API Tutorial.
As its basis, this tutorial uses the source code created in the first tutorial and enhanced further in the second. If you have not yet done these tutorials, it is recommended to do them first.
Optionally, for troubleshooting purposes, you can download the completed sample.
Creating a Node subclass
As mentioned in the previous tutorial, Nodes are presentation objects. That means that they are not a data model themselves—rather, they are a presentation layer for an underlying data model. In the Projects or Files windows in the NetBeans IDE, you can see `Node`s used in a case where the underlying data model is files on disk. In the Services window in the IDE, you can see them used in a case where the underlying objects are configurable aspects of NetBeans runtime environment, such as available application servers and databases.
As a presentation layer, `Node`s wrap the objects in your data model. In the previous tutorial you used a plain AbstractNode together with the
MyChildren class to create `Node`s, by calling
new AbstractNode (new MyChildren(), Lookups.singleton(obj));
and then calling
setDisplayName(obj.toString()) to provide a basic display name. There is much more that can be done to make your
Node`s more user-friendly. First you will need to create a `Node subclass to work with:
In the My Editor project, right click the package
org.myorg.myeditorand choose New > Java Class.
When the wizard opens, name the class "MyNode" and press Enter or click Finish.
Change the signature and constructors of the class as follows:
public class MyNode extends AbstractNode {
    public MyNode(APIObject obj) {
        super(new MyChildren(), Lookups.singleton(obj));
        setDisplayName("APIObject " + obj.getIndex());
    }
    public MyNode() {
        super(new MyChildren());
        setDisplayName("Root");
    }
}
Open
MyEditorfrom the same package, in the code editor.
Replace these lines in the constructor:
mgr.setRootContext(new AbstractNode(new MyChildren())); setDisplayName ("My Editor");
with this single line of code:
mgr.setRootContext(new MyNode());
Now you will make a similar change to the Children object. Open the
MyChildrenclass in the editor, and change its
createNodesmethod as follows:
protected Node[] createNodes(Object o) { APIObject obj = (APIObject) o; return new Node[] { new MyNode(obj) }; }
Enhancing Display Names with HTML
`Node`s are shown in Explorer UI components, and their display names can use a limited subset of HTML. The following tags are supported:
font color—font size and face settings are not supported, but color is, using standard html syntax
font style tags—b,i,u and s tags—bold, italic, underline, strikethrough
A limited subset of SGML entities: ", <, &, ‘, ’, “, ”, –, —, ≠, ≤, ≥, ©, ®, ™, and
Since there is no terribly exciting data available from your
APIObject, which only has an integer and a creation date, you’ll extend this artificial example, and decide that odd numbered
APIObjects should appear with blue text.
Add the following method to
MyNode:
public String getHtmlDisplayName() {
    APIObject obj = getLookup().lookup(APIObject.class);
    if (obj != null && obj.getIndex() % 2 != 0) {
        return "<font color='0000FF'>APIObject " + obj.getIndex() + "</font>";
    } else {
        return null;
    }
}
Any MyNode whose APIObject has an index not divisible by 2 will have a non-null HTML display name; the others fall back to the plain display name.
Run the suite again and you should see the following:
Modify the
getHtmlDisplayName()method as follows:
public String getHtmlDisplayName() { APIObject obj = getLookup().lookup (APIObject.class); if (obj != null) { return "<font color='#0000FF'>APIObject " + obj.getIndex() + "</font>" + "<font color='AAAAAA'><i>" + obj.getDate() + "</i></font>"; } else { return null; } }
Run the suite again and now you should see the following:
Modify the
getHtmlDisplayName()method as follows:
public String getHtmlDisplayName() { APIObject obj = getLookup().lookup (APIObject.class); if (obj != null) { return "<font color='!textText'>APIObject " + obj.getIndex() + "</font>" + "<font color='!controlShadow'><i>" + obj.getDate() + "</i></font>"; } else { return null; } }
Run the suite again and now you should see the following:
Providing Icons:
Copy the image linked above, or another 16x16 PNG or GIF, into the same package as the
MyEditorclass.
Add the following method to the
MyNodeclass:
public Image getIcon (int type) { return Utilities.loadImage ("org/myorg/myeditor/icon.png"); }
Note that Utilities.loadImage() is more optimized, has better caching behavior, and supports branding of images.
Also add the following method to MyNode, so that the same icon is used when the node is expanded:
public Image getOpenedIcon(int i) { return getIcon (i); }
Now if you run the suite, all of the Nodes will have the correct icon, as shown below:
Actions and Nodes
The next aspect of `Node`s we will look at is Actions: the items offered in a node's popup menu.
First, let’s create a simple action for your nodes to provide:
Override the
getActions()method of
MyNodeas follows:
public Action[] getActions (boolean popup) { return new Action[] { new MyAction() }; }
Now, create the
MyActionclass as an inner class of
MyNode:
private class MyAction extends AbstractAction { public MyAction () { putValue (NAME, "Do Something"); } public void actionPerformed(ActionEvent e) { APIObject obj = getLookup().lookup (APIObject.class); JOptionPane.showMessageDialog(null, "Hello from " + obj); } }
Run the suite and try the new action. Next, have MyAction provide a submenu by making it implement Presenter.Popup:
Press Ctrl-Shift-I to fix imports.
Position the caret in the class signature line of
MyActionand press Alt-Enter when the lightbulb glyph appears in the margin, and accept the hint "Implement All Abstract Methods".
Implement the newly created method
getPopupPresenter()as follows:
public JMenuItem getPopupPresenter() {
    JMenu result = new JMenu("Submenu"); // remember JMenu is a subclass of JMenuItem
    result.add(new JMenuItem(this));
    result.add(new JMenuItem(this));
    return result;
}
Run the suite again and notice that you now have the following:
Properties and the Property Sheet
Support for displaying properties in the property sheet is built into `Node`s.
Override MyNode.createSheet() as follows:
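A minimal sketch of what createSheet() might look like, assuming APIObject exposes getIndex() and getDate() (the property names and error handling are illustrative, not the article's exact code):
@Override
protected Sheet createSheet() {
    Sheet sheet = Sheet.createDefault();
    Sheet.Set set = Sheet.createPropertiesSet();
    APIObject obj = getLookup().lookup(APIObject.class);
    try {
        // read-only properties backed by the getters on APIObject
        Property indexProp = new PropertySupport.Reflection(obj, Integer.class, "getIndex", null);
        Property dateProp = new PropertySupport.Reflection(obj, Date.class, "getDate", null);
        indexProp.setName("index");
        dateProp.setName("date");
        set.put(indexProp);
        set.put(dateProp);
    } catch (NoSuchMethodException ex) {
        Exceptions.printStackTrace(ex);
    }
    sheet.put(set);
    return sheet;
}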
Right click the module suite and choose Run to launch a copy of NetBeans with the suite’s modules installed.
Use File > Open Editor to show your editor.
Select Window > Properties to show the NetBeans property sheet.
Click in your editor window and move the selection between different nodes, and notice the property sheet updating, just as your
MyViewercomponent does, as shown below:
Read-Write Properties
To play with this concept further, what you really need is a read/write property. So the next step is to add some additional support to
APIObject to make the
Date property settable.
Open
org.myorg.myapi.APIObjectin the code editor.
Remove the
finalkeyword from the line declaring the
datefield
Add the following setter and property change support methods to APIObject, as sketched below:
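A minimal sketch of the property change support on APIObject, consistent with the fire() method used in the next step (the setter itself appears there):
private final java.beans.PropertyChangeSupport supp = new java.beans.PropertyChangeSupport(this);

public void addPropertyChangeListener(java.beans.PropertyChangeListener pcl) {
    supp.addPropertyChangeListener(pcl);
}

public void removePropertyChangeListener(java.beans.PropertyChangeListener pcl) {
    supp.removePropertyChangeListener(pcl);
}

protected void fire(String propertyName, Object old, Object nue) {
    supp.firePropertyChange(propertyName, old, nue);
}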
Now, within APIObject, call the fire method above:
public void setDate(Date d) { Date oldDate = date; date = d; fire("date", oldDate, date); }
In MyNode.createSheet(), change the way dateProp is declared, so that it will be writable as well as readable:
Property dateProp = new PropertySupport.Reflection(obj, Date.class, "date");
Now, rather than specifying explicit getters and setters, you are just providing the property name, and
PropertySupport.Reflection will find the getter and setter methods for us (and in fact it will also find the
addPropertyChangeListener() method automatically).
Re-run the module suite, and notice that you can now select an instance of MyNode in MyEditor and actually edit the date value, as shown below:
However, there is still one bug in this code: When you change the Date property, you should also update the display name of your node. So you will make one more change to
MyNode and have it listen for property changes on
APIObject.
Modify the signature of MyNode so that it implements java.beans.PropertyChangeListener:
public class MyNode extends AbstractNode implements PropertyChangeListener {
Press Ctrl-Shift-I to Fix Imports.
Placing the caret in the signature line, accept the hint "Implement All Abstract Methods".
Add the following line to the constructor which takes an argument of
APIObject:
obj.addPropertyChangeListener(WeakListeners.propertyChange(this, obj));
Note that here you are using a utility method on
org.openide.util.WeakListeners. This is a technique for avoiding memory leaks—an
APIObject will only weakly reference its
MyNode, so if the
Node’s parent is collapsed, the `Node can be garbage collected. If the
Node were still referenced in the list of listeners owned by
APIObject, it would be a memory leak. In your case, the
Node actually owns the
APIObject, so this is not a terrible situation—but in real world programming, objects in a data model (such as files on disk) may be much longer-lived than
Node`s.
Finally, implement the
propertyChange()method:
public void propertyChange(PropertyChangeEvent evt) { if ("date".equals(evt.getPropertyName())) { this.fireDisplayNameChange(null, getDisplayName()); } }
Run the module suite again, select a MyNode in the MyEditor window and change its Date property. Notice that the display name of the Node is now updated correctly, as shown below, where the year 2009 is now reflected both on the node and in the property sheet:
Grouping Property Sets
Property sets can be grouped under named tabs in the property sheet. In real world coding, the group name should be a localized string, not a hard-coded string.
Open MyNode in the code editor.
Modify the method createSheet() so that the properties are split into separate, named property sets (the original article highlighted the modified and added lines in blue).
Run the suite again, and notice that there are now buttons at the top of the property sheet, and there is one property under each, as seen here:
General Property Sheet Caveats.
Review of Concepts
This tutorial has sought to get across the following ideas:
Nodes are a presentation layer
The display names of Nodes can be customized using a limited subset of HTML
Nodes have icons, and you can provide custom icons for nodes you create
Nodes have Actions; an Action which implements Presenter.Popup can supply its own presenter, such as a submenu, for the popup menu.
https://netbeans.apache.org/tutorials/60/nbm-nodesapi2.html
In the last post we discussed the several components used in the data pipeline flow. We saw how the different services merged into one flow and finally ingested data into Snowflake. My colleague Mehani Hakim and I worked on the technical aspects of this data pipeline flow. Here we will see the implementation part along with the code we have used in the process.
The following components have been used in this pipeline:
Source bucket: salessnow
Bucket where source json file will be uploaded
Lambda function: kinesislambda
Lambda to read JSON file from S3 bucket and trigger the Kinesis data stream
Kinesis Stream: ksstream
Stream to capture the JSON file
Kinesis Firehose: firehosestream
Receive input from Kinesis stream and store output to kinesisdest bucket
Firehose Destination Bucket : kinesisdest
Bucket to store the file by Firehose
Lambda: kinesistoglue
Lambda triggers once a file is uploaded to the kinesisdest bucket
Calls the Glue job which will convert JSON to Parquet
GlueJob: JSONtoPARQUET
Job to convert JSON to Parquet and store the output in the parquebucket bucket
snowpipe: kinesispipe
Parquet Table: PARQUET_TABLE
5. Glue Script:
import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
args = getResolvedOptions(sys.argv, ['JOB_NAME'])
sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)
datasource0 = glueContext.create_dynamic_frame.from_catalog(database = "kinesis_db", table_name = "demo_json_json", transformation_ctx = "datasource0")
datasink4 = glueContext.write_dynamic_frame.from_options(frame = datasource0, connection_type = "s3", connection_options = {"path": "s3://parquebucket"}, format = "parquet", transformation_ctx = "datasink4")
job.commit()
7. Lambda : To Call Glue Service
import json
import boto3
def lambda_handler(event, context):
s3_client = boto3.client("glue");
s3_client.start_job_run(JobName="JSONtoPARQUET")
# TODO implement
return {
'statusCode': 200,
'body': json.dumps('Hello from Lambda!')
}
8. Snowflake: Finally, see the Snowflake commands below to load data into Snowflake. Moreover, the schema detection feature is used to generate the Parquet table structure at run time instead of creating the table manually.
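A hedged sketch of the Snowflake side (the stage and file format names and the credentials are placeholders; kinesispipe and PARQUET_TABLE come from the component list above, and INFER_SCHEMA provides the schema detection mentioned):
-- external stage over the Parquet output bucket
CREATE OR REPLACE FILE FORMAT parquet_ff TYPE = PARQUET;
CREATE OR REPLACE STAGE parquet_stage
  URL = 's3://parquebucket'
  CREDENTIALS = (AWS_KEY_ID = '...' AWS_SECRET_KEY = '...')
  FILE_FORMAT = parquet_ff;

-- schema detection: build the table definition from the Parquet files
CREATE OR REPLACE TABLE PARQUET_TABLE USING TEMPLATE (
  SELECT ARRAY_AGG(OBJECT_CONSTRUCT(*))
  FROM TABLE(INFER_SCHEMA(LOCATION => '@parquet_stage', FILE_FORMAT => 'parquet_ff'))
);

-- snowpipe that auto-ingests new files landing in the stage
CREATE OR REPLACE PIPE kinesispipe AUTO_INGEST = TRUE AS
  COPY INTO PARQUET_TABLE
  FROM @parquet_stage
  FILE_FORMAT = (FORMAT_NAME = 'parquet_ff')
  MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE;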
https://cloudyard.in/2021/12/data-pipeline-flow-snowflake-with-kinesis-glue-lambda-snowpipe-part2/
Adding Unit Tests to a C Project - NetBeans IDE Tutorial
- Requirements
- Introduction
- Install the CUnit Testing Framework
- Create the Project for the Tutorial
- Add CUnit Tests to the NetBeans Managed Project
- Run the C Unit Test
- Add Another CUnit Test
- Debug My CUnit Test
- Add a Simple Test
- Edit the C Simple Test
- Run Tests From the Command Line
- Adding Support for Other Test Frameworks
Requirements
To follow this tutorial, you need the following software.
See the NetBeans IDE Installation Instructions and Configuring the NetBeans IDE for C/C++/Fortran for information about downloading and installing the required NetBeans software.
Introduction
The IDE supports the following kinds of unit tests:
C simple test
C++ simple test
CUnit test
CppUnit test
CppUnit test runner.
Install the CUnit Testing Framework:
How to Install CUnit on Linux or Mac OS
$ sudo make install
When the 'make install' finishes, the CUnit test framework is ready to use in the IDE and you can continue on to Create the Project for the Tutorial.
How to Install CUnit on Oracle Solaris
$ make install
When the 'make install' finishes, the CUnit test framework is ready to use in the IDE and you can continue on to Create the Project for the Tutorial.
How to Install CUnit on Windows and MinGW
These instructions assume you downloaded the file CUnit-2.1-2-src.tar.bz2 into the directory C:/distr. If you use a different directory, substitute it for C:/distr in the examples.
Start the MinGW shell application in Windows by choosing Start > All Programs > MinGW > MinGW Shell.
In the MinGW Shell window, unpack the
CUnit-2.1-2-src.tar.bz2file as follows:
$ cd c:/distr $ bunzip2.exe CUnit-2.1-2-src.tar.bz2 $ tar xvf CUnit-2.1-2-src.tar $ cd /CUnit-2.1-2
Find the Unix path to MinGW using the mount command.
$ mount
You see output similar to the following:
The last line in bold above shows the Unix path is /mingw. Your system may report something different, so make a note of it because you need to specify the path in the next command.
Configure the Makefile with the following command. If your MinGW is not in /mingw, be sure to specify the appropriate Unix location of your MinGW with the --prefix= option.
$ libtoolize $ automake --add-missing $ autoreconf $ ./configure --prefix=/mingw _(lots of output about checking and configuring) ..._ config.status: executing depfiles commands config.status: executing libtool commands
Build the library for CUnit:
$ make make all-recursive make[1]: Entering directory 'c/distr/CUnit-2.1-2' Making all in CUnit ... _(lots of other output)_ make[1]: Leaving directory 'c/distr/CUnit-2.1-2' $
Install the CUnit library into C:/MinGW/include/CUnit, C:/MinGW/share/CUnit and C:/MinGW/doc/CUnit by running make install:
$ make install
If you use Java 7 update 21, 25, or 40 you must perform the following workaround due to issue 236867 in order to get CUnit and this tutorial to work.
Go to Tools > Options > C/C++ > Build Tools and select the MinGW tool collection.
Change the Make Command entry to make.exe without a complete path.
Exit the IDE.
On Windows 7 and above, type var in the Start menu’s search box to quickly find a link to Edit the system environment variables.
Select the Advanced tab and click Environment Variables.
In the System Variables panel of the Environment Variables dialog, click New.
Set the Variable Name to MAKE and the Variable Value to make.exe.
Click OK in each dialog to save the change.
Start the IDE and continue to the next section.
When the 'make install' finishes, your CUnit is ready to use in the IDE and you can continue on to Create the Project for the Tutorial.
How to Install CUnit on Windows and Cygwin.
Create the Project for the Tutorial
To explore the unit test features, you should first create a new C Application:
Choose File > New Project.
In the project wizard, click C/C++ and then select C/C++ Application.
In the New C/C++ Application dialog box, select Create Main file and select the C language. Accept the defaults for all other options.
Click Finish, and the Cpp_Application__x_ project is created.
In the Projects window, open the Source Files folder and double-click the
main.c file to open it in the editor. The file’s content is similar to that shown here:
To give the program something to do, replace the code in the
main.cfile with the following code to create a simple factorial calculator:
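A minimal factorial calculator consistent with the later steps (a factorial() function returning long, so that factorial(8) == 40320) might look like this:
#include <stdio.h>
#include <stdlib.h>

long factorial(int arg) {
    long result = 1;
    int i;
    for (i = 1; i <= arg; i++) {
        result = result * i;
    }
    return result;
}

int main(int argc, char** argv) {
    int arg;
    printf("Enter an integer: ");
    scanf("%d", &arg);
    printf("The factorial of %d is %ld\n", arg, factorial(arg));
    return (EXIT_SUCCESS);
}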
Save the file by pressing Ctrl+S.
Build and run the project to make sure it works by clicking the Run button in the IDE toolbar. The output should look similar to the following if you enter 8 as the integer:
You might need to press Enter twice on some platforms.
Add CUnit Tests to the NetBeans Managed Project
When you are developing an application, it is a good idea to add unit tests as part of your development process.
Each test should contain one
main function and generate one executable.
In the Projects window, right-click the
main.c source file and select Create Test > New CUnit Test.
A wizard opens to help you create the test.
In the wizard’s Select Elements window, click the checkbox for the main function. This causes all the functions within main to also be selected. In this program, there is only one other function, factorial().
Click Next.
Keep the default name New CUnit Test and click Finish.
The New CUnit Test node is displayed under the Test Files folder.
The New CUnit Test folder contains the template files for the test. You can add new files to the folder the same way you add source files to a project, by right-clicking the folder.
Expand the New CUnit Test folder, and see that it contains a file newcunittest.c, which should be open in the source editor.
In the newcunittest.c file, notice the #include "CUnit/Basic.h" statement to access the CUnit library. The newcunittest.c file contains an automatically generated test function, testFactorial, for the factorial() function of main.c.
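A typical CUnit test skeleton of this kind looks roughly like the following sketch (the exact generated stub may differ):
#include <stdio.h>
#include <stdlib.h>
#include "CUnit/Basic.h"

/* the function under test, defined in main.c */
long factorial(int arg);

void testFactorial() {
    CU_ASSERT(0);  /* the generated stub simply fails until you edit it */
}

int main() {
    /* initialize the CUnit test registry */
    if (CUE_SUCCESS != CU_initialize_registry())
        return CU_get_error();

    /* add a suite and the test to the registry */
    CU_pSuite pSuite = CU_add_suite("newcunittest", NULL, NULL);
    if (NULL == pSuite || NULL == CU_add_test(pSuite, "testFactorial", testFactorial)) {
        CU_cleanup_registry();
        return CU_get_error();
    }

    /* run all tests using the basic interface */
    CU_basic_set_mode(CU_BRM_VERBOSE);
    CU_basic_run_tests();
    CU_cleanup_registry();
    return CU_get_error();
}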
The generated test is a stub that you must edit to make useful tests, but the generated test can be run successfully even without editing.
Run the C Unit Test
The IDE provides a few ways to run tests. You can right-click the project node, or the Test Files folder, or a test subfolder, and select Test. You can also use the menu bar and select Run > Test Project, or press Alt+F6.
Run the test by right-clicking the New CUnit Test folder and selecting Test.
The IDE opens a new Test Results window, and you should see output similar to the following, which shows that the test fails.
If you do not see the Test Results window, open it by choosing Window > IDE Tools > Test Results or by pressing Alt+Shift+R.
Notice that the Test Results window is split into two panels. The right panel displays the console output from the tests. The left panel displays a summary of the passed and failed tests and the description of failed tests.
In the Test Results window, double-click the node testFactorial caused an ERROR to jump to the testFactorial function in the source editor. If you look at the function you can see that it does not actually test anything, but merely asserts that the unit test failed by setting CU_ASSERT(0). The condition evaluates to 0, which is equivalent to FALSE, so the CUnit framework interprets this as a test failure.
Change the line CU_ASSERT(0) to CU_ASSERT(1) and save the file (Ctrl+S).
Run the test again by right-clicking the New CUnit Test folder and selecting Test. The Test Results window should indicate that the test passed.
Add Another CUnit Test
Create a generic CUnit test template by right-clicking the Test Files folder and selecting New CUnit Test.
Name the test My CUnit Test and the test file name mycunittest, and click Finish.
A new test folder called My CUnit Test is created and it contains a mycunittest.c file, which opens in the editor.
Examine the mycunittest.c test file and see that it contains two tests. test1 will pass because it evaluates to TRUE, and test2 will fail because it evaluates to FALSE, since 2*2 does not equal 5.
void test1() { CU_ASSERT(2*2 == 4); } void test2() { CU_ASSERT(2*2 == 5); }
Run the test as before and you should see:
Run all the tests from the IDE main menu by selecting Run > Test Project (Cpp_Application__x_) and see that both test suites run and display their success and failure in the Test Results window.
Mouse over the failed test to see more information about the failure.
Click the buttons in the left margin of the Test Results window to show and hide tests that pass or fail.
Debug My CUnit Test
You can debug tests using the same techniques you use to debug your project source files, as described in the Debugging C/C++ Projects Tutorial.
In the Projects window, right-click the My CUnit Test folder and select Step Into Test.
You can also run the debugger by right-clicking a test in the Test Results window and selecting Debug.
The debugger toolbar is displayed.
Click the Step Into button to execute the program one statement at a time with each click of the button.
Open the Call Stack window by selecting Window > Debugging > Call Stack so you can watch the function calls as you step through the test.
Add a Simple Test
The C simple test uses the IDE’s own simple test framework. You do not need to download any test framework to use simple tests.
In the Projects window, right-click the main.c source file and select Create Test > New C Simple Test.
In the wizard’s Select Elements window, click the checkbox for the main function, then click Next.
In the Name and Location window, keep the default name New C Simple Test and click Finish.
The New C Simple Test node is displayed under the Test Files folder.
Expand the New C Simple Test folder, and see that it contains a file newsimpletest.c. This file should be open in the source editor.
Notice the newsimpletest.c file contains an automatically generated test function, testFactorial, for the factorial() function of main.c, just as with the CUnit test.
Run the test to see that it generates a failure shown in the Test Results window.
Next you edit the test file to see tests that pass.
Edit the C Simple Test
Copy and paste a new function below the
testFactorial function. The new function is:
void testNew() { int arg = 8; long result = factorial(arg); if(result != 40320) { printf("%%TEST_FAILED%% time=0 testname=testNew (newsimpletest) message=Error calculating %d factorial.\n", arg); } }
The
main function must also be modified to call the new test function.
In the
main function, copy the lines:
printf("%%TEST_STARTED%% testFactorial (newsimpletest)\n"); testFactorial(); printf("%%TEST_FINISHED%% time=0 testFactorial (newsimpletest)\n");
Paste the lines immediately below the ones you copied, and change the name testFactorial to testNew in the pasted lines. There are three occurrences that need to be changed. The complete newsimpletest.c file should look as follows:
In the Projects window, run the test by right-clicking New C Simple Test and choosing Test. The Test Results should look as follows:
Run Tests From the Command Line.
Open a terminal window in the IDE by selecting Window > Output and clicking the Terminal button in the left margin of the Output window. This opens a terminal window at the working directory of the current project.
In the terminal, type the commands shown in bold:
*make test*
The output of the test build and run should look similar to the following. Note that some
make output has been deleted.
Adding Support for Other Test Frameworks
You can add support for your favorite C/C++ test framework by creating a NetBeans module. See the NetBeans developers' C/C++ Unit Test Plugin Tutorial on the NetBeans wiki.
https://netbeans.apache.org/kb/docs/cnd/c-unit-test.html
To summarize Ben's mail, we have three options presented:
* Status quo: record only commit time; never set timestamps in wc
* Simple approach: record only commit time; set timestamp to commit
time in wc if asked
* Complex approach: record mod time of file when committing; set
timestamp to commit time or to recorded mod time if asked
I think the status quo has the important feature of being trivial to
implement. We are talking about libsvn_wc here, the library we're not
supposed to complicate before a post-1.0 redesign.
I think the simple approach has limited value, because the commit times
of a big collection of imported files are unlikely to have any useful
relationship.
I think the complex approach has value, because then you can:
* Import code, including generated files
* Make local mods and check them in
* Build the result (or a copy of the result) without trying to
recreate generated files
without requiring the code's build system to properly respect
"maintainer mode" vs. "non-maintainer-mode". However, it does require
modifying the entries file to remember the --timestamps option, and it
most likely requires timestamp handling sprinkled all over libsvn_wc
given how libsvn_wc is currently designed. So I'm not convinced that it
should be allowed as a 1.0 feature.
(I understand the make(1) risks of the simple approach and the client
clock synchronization risks of the complex approach. Because of those
risks, the status quo should be the default behavior.)
On Thu, 2003-06-19 at 11:37, Ben Collins-Sussman wrote:
> What exactly does CVS do, and how does it justify the behavior?
> What do CVS users think of it?
On the way in, CVS records only the commit time. The "-d" option to cvs
import will use the mod time of the imported files as the commit time,
which of course causes your repository's commit times to be a lie. (To
work around this problem, I have a perl script which "squeezes" the
timestamps of a directory so that they're all close to the current time,
but have the same relationships.)
On the way out, CVS sets mod times to the commit time when a wc file is
first created. However, when a file is updated, CVS leaves the mod time
at the current time. I have no idea what mbk's -M option is; like Jack
Repenning, I can't find it in my version of CVS.
So, if you import with the -d option and then make local mods, a
checkout will bear the same timestamp relationships, allowing you to do
the same thing as I said above. However, if you then "cvs update" the
working directory, you won't get useful timestamp relationships. So
you'd have to do a checkout from scratch each time.
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
https://svn.haxx.se/dev/archive-2003-06/1122.shtml
prosper nyamukondiwa656 Points
setting fuelLevel to the value of the intent
whats the missing part on my code
import android.os.Bundle;

public class FlightActivity extends AppCompatActivity {
    public int fuelLevel = 0;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_flight);
        Intent intent = getIntent();
        String fuelLevel = intent.getStringExtra("FUEL_LEVEL");
        // Add your code below!
    }
}
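One way to complete it, assuming the launching activity stored FUEL_LEVEL as an int extra, is to assign to the existing int field rather than shadowing it with a String local (android.content.Intent also needs to be imported):
import android.content.Intent;
import android.os.Bundle;

public class FlightActivity extends AppCompatActivity {
    public int fuelLevel = 0;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_flight);
        Intent intent = getIntent();
        // assign to the field instead of declaring a new String local
        fuelLevel = intent.getIntExtra("FUEL_LEVEL", 0);
    }
}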
https://teamtreehouse.com/community/setting-fuellevel-to-the-value-of-the-intent
Introduction.
Main Article.
First, let’s take a look at the way the JVM uses memory. There are two main areas of memory in the JVM – the ‘Heap’ and the ‘Permanent Generation.’ In the diagram below, the permanent generation is shown in green. The remainder (to the left) is the heap.
The permanent generation is used only by the JVM itself, to keep data that it requires. You cannot place any data in the permanent generation. One of the things the JVM uses this space for is keeping metadata about the objects you create. So every time you create an object, the JVM will store some information in the permanent generation. So the more objects you create, the more room you need in the permanent generation.
The size of the permanent generation is controlled by two JVM parameters. -XX:PermSize sets the minimum, or initial, size of the permanent generation, and -XX:MaxPermSize sets the maximum size. When running large Java applications, we often set these two to the same value, so that the permanent generation will be created at its maximum size initially. This can improve performance because resizing the permanent generation is an expensive (time consuming) operation. If you set these two parameters to the same size, you can avoid a lot of extra work in the JVM to figure out if it needs to resize, and actually performing resizes of, the permanent generation.
The heap is the main area of memory. This is where all of your objects will be stored. The heap is further divided into the ‘Old Generation’ and the ‘New Generation.’ The new generation in turn is divided into ‘Eden’ and two ‘Survivor’ spaces.
The size of the heap is also controlled by JVM parameters. You can see on the diagram above that the heap size is -Xms at minimum and -Xmx at maximum. Additional parameters control the sizes of the various parts of the heap. We will see one of those later on; the others are beyond the scope of this post.
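For example, a launch command that fixes both the heap and the permanent generation at their maximum sizes might look like this (the jar name and the sizes are placeholders):
java -Xms2g -Xmx2g -XX:PermSize=256m -XX:MaxPermSize=256m -jar myapp.jar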
When you create an object, e.g. when you say byte[] data = new byte[1024], that object is created in the area called Eden. New objects are created in Eden. In addition to the data for the byte array, there will also be a reference (pointer) for ‘data.’
The following explanation has been simplified for the purposes of this post. When you want to create a new object, and there is not enough room left in eden, the JVM will perform ‘garbage collection.’ This means that it will look for any objects in memory that are no longer needed and get rid of them.
Garbage collection is great! If you have ever programmed in a language like C or Objective-C, you will know that managing memory yourself is somewhat tedious and error prone. Having the JVM automatically find unused objects and get rid of them for you makes writing code much simpler and saves a lot of time debugging. If you have never used a language that does not have garbage collection – you might want to go write a C program – it will certainly help you to appreciate what you are getting from your language for free!
There are in fact a number of different algorithms that the JVM may use to do garbage collection. You can control which algorithms are used by changing the JVM parameters.
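For example (these flag names apply to the HotSpot JVMs of this era and are shown purely as an illustration), you could pick a collector explicitly when starting the JVM:

java -XX:+UseSerialGC ...          (the simple stop-the-world collectors)
java -XX:+UseParallelGC ...        (parallel, throughput-oriented collection)
java -XX:+UseConcMarkSweepGC ...   (the Concurrent Mark Sweep collector that appears in the gc.log example later)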
Let’s take a look at an example. Suppose we do the following:
String a = "hello";
String b = "apple";
String c = "banana";
String d = "apricot";
String e = "pear";
//
// do some other things
//
a = null;
b = null;
c = null;
e = null;
This will cause five objects to be created, or ‘allocated,’ in eden, as shown by the five yellow boxes in the diagram below. After we have done ‘some other things,’ we free a, b, c and e – by setting the references to null. Assuming there are no other references to these objects, they will now be unused. They are shown in red in the second diagram. We are still using String d, it is shown in green.
If we try to allocate another object, the JVM will find that eden is full, and that it needs to perform garbage collection. The simplest garbage collection algorithm is called ‘Copy Collection.’ It works as shown in the diagram above. In the first phase (‘Mark’) it will mark (illustrated by red colour) the unused objects. In the second phase (‘Copy’) it will copy the objects we still need (i.e. d) into a ‘survivor’ space – the little box on the right. There are two survivor spaces and they are smaller than eden in size. Now that all the objects we want to keep are safe in the survivor space, it can simply delete everything in eden, and it is done.
This kind of garbage collection creates something known as a ‘stop the world’ pause. While the garbage collection is running, all other threads in the JVM are paused. This is necessary so that no thread tries to change memory after we have copied it, which would cause us to lose the change. This is not a big problem in a small application, but if we have a large application, say with an 8GB heap for example, then it could actually take a significant amount of time to run this algorithm – seconds or even minutes. Having your application stop for a few minutes every now and then is not suitable for many applications. That is why other garbage collection algorithms exist and are often used. Copy Collection works well when there is a relatively large amount of garbage and a small amount of used objects.
In this post, we will just discuss two of the commonly used algorithms. For those who are interested, there is plenty of information available online and several good books if you want to know more!
The second garbage collection algorithm we will look at is called ‘Mark-Sweep-Compact Collection.’ This algorithm uses three phases. In the first phase (‘Mark’), it marks the unused objects, shown below in red. In the second phase (‘Sweep’), it deletes those objects from memory. Notice the empty slots in the diagram below. Then in the final phase (‘Compact’), it moves objects to ‘fill up the gaps,’ thus leaving the largest amount of contiguous memory available in case a large object is created.
So far this is all theoretical – let’s take a look at how this actually works with a real application. Fortunately, the JDK includes a nice visual tool for watching the behaviour of the JVM in ‘real time.’ This tool is called jvisualvm. You should find it in the bin directory of your JDK installation. We will use that a little later, but first, let’s create an application to test.
I used Maven to create the application and manage the builds and dependencies and so on. You don’t need to use Maven to follow this example. You can go ahead and type in the commands to compile and run the application if you prefer.
I created a new project using the Maven archetype generate goal:
mvn archetype:generate -DarchetypeGroupId=org.apache.maven.archetypes -DgroupId=com.redstack -DartifactId=memoryTool
I took type 98 – for a simple JAR – and the defaults for everything else. Next, I changed into my memoryTool directory and edited my pom.xml as shown below. I just added the part shown in red. That will allow me to run my application directly from Maven, passing in some memory configuration and garbage collection logging parameters.
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.redstack</groupId>
  <artifactId>memoryTool</artifactId>
  <version>1.0-SNAPSHOT</version>
  <packaging>jar</packaging>
  <name>memoryTool</name>
  <url></url>
  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
  </properties>
  <build>
    <plugins>
      <plugin>
        <artifactId>maven-compiler-plugin</artifactId>
        <version>2.0.2</version>
        <configuration>
          <source>1.6</source>
          <target>1.6</target>
        </configuration>
      </plugin>
      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>exec-maven-plugin</artifactId>
        <configuration>
          <executable>java</executable>
          <arguments>
            <argument>-Xms512m</argument>
            <argument>-Xmx512m</argument>
            <argument>-XX:NewRatio=3</argument>
            <argument>-XX:+PrintGCTimeStamps</argument>
            <argument>-XX:+PrintGCDetails</argument>
            <argument>-Xloggc:gc.log</argument>
            <argument>-classpath</argument>
            <classpath/>
            <argument>com.redstack.App</argument>
          </arguments>
        </configuration>
      </plugin>
    </plugins>
  </build>
  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>3.8.1</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
</project>
If you prefer not to use Maven, you can start the application using the following command:
java -Xms512m -Xmx512m -XX:NewRatio=3 -XX:+PrintGCTimeStamps -XX:+PrintGCDetails -Xloggc:gc.log -classpath <whatever> com.redstack.App
The switches are telling the JVM the following: -Xms512m and -Xmx512m fix the heap at 512 MB, so it is created at full size and never resized; -XX:NewRatio=3 sets the ratio of old generation to new generation to 3, so the new generation is one quarter of the heap; -XX:+PrintGCTimeStamps and -XX:+PrintGCDetails turn on detailed, timestamped garbage collection logging; and -Xloggc:gc.log writes that log to the file gc.log.
I have chosen these options so that you can see pretty clearly what is going on and you won't need to spend all day creating objects to make something happen!
Here is the code in that main class. This is a simple program that will allow us to create objects and throw them away easily, so we can understand how much memory we are using, and watch what the JVM does with it.
package com.redstack;

import java.io.*;
import java.util.*;

public class App {

    private static List objects = new ArrayList();
    private static boolean cont = true;
    private static String input;
    private static BufferedReader in = new BufferedReader(new InputStreamReader(System.in));

    public static void main(String[] args) throws Exception {
        System.out.println("Welcome to Memory Tool!");
        while (cont) {
            System.out.println(
                "\n\nI have " + objects.size() + " objects in use, about "
                + (objects.size() * 10) + " MB."
                + "\nWhat would you like me to do?\n"
                + "1. Create some objects\n"
                + "2. Remove some objects\n"
                + "0. Quit");
            input = in.readLine();
            if ((input != null) && (input.length() >= 1)) {
                if (input.startsWith("0")) cont = false;
                if (input.startsWith("1")) createObjects();
                if (input.startsWith("2")) removeObjects();
            }
        }
        System.out.println("Bye!");
    }

    private static void createObjects() {
        System.out.println("Creating objects...");
        for (int i = 0; i < 2; i++) {
            objects.add(new byte[10 * 1024 * 1024]);   // each object is a 10 MB byte array
        }
    }

    private static void removeObjects() {
        System.out.println("Removing objects...");
        int start = objects.size() - 1;
        int end = start - 2;
        for (int i = start; ((i >= 0) && (i > end)); i--) {
            objects.remove(i);
        }
    }
}
If you are using Maven, you can build, package and execute this code using the following command:
mvn package exec:exec
Once you have this compiled and ready to go, start it up, and fire up jvisualvm as well. You might like to arrange your screen so you can see both, as shown in the image below. If you have never used JVisualVM before, you will need to install the VisualGC plugin. Select Plugins from the Tools menu. Open the Available Plugins tab. Place a tick next to the entry for Visual GC. Then click on the Install button. You may need to restart it.
Back in the main panel, you should see a list of JVM processes. Double click on the one running your application, com.redstack.App in this example, and then open the Visual GC tab. You should see something like what is shown below.
Notice that you can visually see the permanent generation, the old generation and eden and the two survivor spaces (S0 and S1). The coloured bars indicate memory in use. On the right hand side, you can also see a historical view that shows you when the JVM spent time performing garbage collections, and the amount of memory used in each space over time.
In your application window, start creating some objects (by selecting option 1). Watch what happens in Visual GC. Notice how the new objects always get created in eden. Now throw away some objects (option 2). You will probably not see anything happen in Visual GC. That is because the JVM will not clean up that space until a garbage collection is performed.
To make it do a garbage collection, create some more objects until eden is full. Notice what happens when you do this. If there is a lot of garbage in eden, you should see the objects in eden move to a survivor space. However, if eden had little garbage, you will see the objects in eden move to the old generation. This happens when the objects you need to keep are bigger than the survivor space.
Notice as well that the permanent generation grows slowly while you work, as the JVM loads additional classes, rather than in proportion to the objects you create.
Try almost filling eden, don’t fill it completely, then throw away almost all of your objects – just keep 20MB. This will mean that eden is mostly full of garbage. Then create some more objects. This time you should see the objects in eden move into the survivor space.
Now, let’s see what happens when we run out of memory. Keep creating objects until you have around 460MB. Notice that both eden and the old generation are nearly full. Create a few more objects. When there is no more space left, your application will crash and you will get an OutOfMemoryError. You might have got those before and wondered what causes them – especially if you have a lot more physical memory on your machine, you may have wondered how you could possibly be ‘out of memory’ – now you know! If you happen to fill up your permanent generation (which will be pretty difficult to do in this example) you would get a different error telling you PermGen was full.
Finally, another way to look at this data is in that garbage collection log we asked for. Here are the first few lines from one run on my machine:
13.373: [GC 13.373: [ParNew: 96871K->11646K(118016K), 0.1215535 secs] 96871K->73088K(511232K), 0.1216535 secs] [Times: user=0.11 sys=0.07, real=0.12 secs]
16.267: [GC 16.267: [ParNew: 111290K->11461K(118016K), 0.1581621 secs] 172732K->166597K(511232K), 0.1582428 secs] [Times: user=0.16 sys=0.08, real=0.16 secs]
19.177: [GC 19.177: [ParNew: 107162K->10546K(118016K), 0.1494799 secs] 262297K->257845K(511232K), 0.1495659 secs] [Times: user=0.15 sys=0.07, real=0.15 secs]
19.331: [GC [1 CMS-initial-mark: 247299K(393216K)] 268085K(511232K), 0.0007000 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
19.332: [CMS-concurrent-mark-start]
19.355: [CMS-concurrent-mark: 0.023/0.023 secs] [Times: user=0.01 sys=0.01, real=0.02 secs]
19.355: [CMS-concurrent-preclean-start]
19.356: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
19.356: [CMS-concurrent-abortable-preclean-start]
CMS: abort preclean due to time 24.417: [CMS-concurrent-abortable-preclean: 0.050/5.061 secs] [Times: user=0.10 sys=0.01, real=5.06 secs]
24.417: [GC[YG occupancy: 23579 K (118016 K)]24.417: [Rescan (parallel) , 0.0015049 secs]24.419: [weak refs processing, 0.0000064 secs] [1 CMS-remark: 247299K(393216K)] 270878K(511232K), 0.0016149 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
24.419: [CMS-concurrent-sweep-start]
24.420: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
24.420: [CMS-concurrent-reset-start]
24.422: [CMS-concurrent-reset: 0.002/0.002 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
24.711: [GC [1 CMS-initial-mark: 247298K(393216K)] 291358K(511232K), 0.0017944 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
24.713: [CMS-concurrent-mark-start]
24.755: [CMS-concurrent-mark: 0.040/0.043 secs] [Times: user=0.08 sys=0.00, real=0.04 secs]
24.755: [CMS-concurrent-preclean-start]
24.756: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
24.756: [CMS-concurrent-abortable-preclean-start]
25.882: [GC 25.882: [ParNew: 105499K->10319K(118016K), 0.1209086 secs] 352798K->329314K(511232K), 0.1209842 secs] [Times: user=0.12 sys=0.06, real=0.12 secs]
26.711: [CMS-concurrent-abortable-preclean: 0.018/1.955 secs] [Times: user=0.22 sys=0.06, real=1.95 secs]
26.711: [GC[YG occupancy: 72983 K (118016 K)]26.711: [Rescan (parallel) , 0.0008802 secs]26.712: [weak refs processing, 0.0000046 secs] [1 CMS-remark: 318994K(393216K)] 391978K(511232K), 0.0009480 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
You can see from this log what was happening in the JVM. Notice it shows that the Concurrent Mark Sweep collector (the log calls it CMS) was being used. You can see when the different phases ran. Also, near the bottom, notice it is showing us the ‘YG’ (young generation) occupancy.
You can leave those same three settings on in production environments to produce this log. There are even some tools available that will read these logs and show you what was happening visually.
Well, that was a short, and by no means exhaustive, introduction to some of the basic theory and practice of JVM garbage collection. Hopefully the example application helped you to clearly visualise what happens inside the JVM as your applications run.
Thanks to Rupesh Ramachandran who taught me many of the things I know about JVM tuning and garbage collection.
http://www.ateam-oracle.com/visualising-garbage-collection-in-the-jvm
How to use a custom SVM with HOGDescriptor, in Python
Hi,
this is a bit against the idea here, but as it's a topic I struggled with quite a bit, I'll post it here anyway.
The problem is using a custom SVM in HOGDescriptor.detect[MultiScale]. Now, HOGDescriptor wants a float numpy array with mystery content for setSVMDetector(…). As to what the mystery-content is, we can look at the source:...
Now the problem is how to get the list of SVs and rho — while CvSVM seems to have facilities for this, they’re not exposed to Python. This is where a dirty but working hack comes in: Saving the SVM to XML, then parsing that file to get the parameters. Something like:
# collect your training data in descs and labels in resps (and your svm_params) as usual
import re
import pickle
import xml.etree.ElementTree as ET
import cv2

svm = cv2.SVM()
svm.train_auto(descs, resps, None, None, params=svm_params, k_fold=5)
svm.save("svm.xml")

tree = ET.parse('svm.xml')
root = tree.getroot()
# now this is really dirty, but after ~3h of fighting OpenCV its what happens :-)
SVs = root.getchildren()[0].getchildren()[-2].getchildren()[0]
rho = float(root.getchildren()[0].getchildren()[-1].getchildren()[0].getchildren()[1].text)
svmvec = [float(x) for x in re.sub('\s+', ' ', SVs.text).strip().split(' ')]
svmvec.append(-rho)
pickle.dump(svmvec, open("svm.pickle", 'w'))
And for using it:
import pickle
import numpy as np
import cv2

img = cv2.imread(inp)
hog = cv2.HOGDescriptor((32,64), (16,16), (8,8), (8,8), 9)
svm = pickle.load(open("svm.pickle"))
hog.setSVMDetector(np.array(svm))
del svm
found, w = hog.detectMultiScale(img)
This seems to work for me — so if anyone ever needs a custom SVM for HOG in OpenCV-Python without touching C++, I hope you can find this post!
Best Regards, hope this helps! Dario Ernst
Thank you! I've been searching for a solution to training a SVM for the python HOG descriptors but I wasn't getting much.
You're a lifesaver!
One question: what is ET? Guess some xml parser, but for completeness please add your import to the example. Thx!
Guanta: The ET stands for ElementTree, which is a very useful Python XML parser. The typical import is:
import xml.etree.ElementTree as ET
Dario: I tried implementing the code you suggested, but instead of getting actual targets I always seem to get a single return in the center of the image, regardless of the input. Do you know anything that might cause such a thing to happen?
Added the import — crd319 was right, thanks for telling quicker than me :-). @crd319: What exactly happens with those svm vectors in the depths of HOG.detectMultiScale is a big mystery for me too — I fear I can not be of much help. A few things I would check are the strides; meanShift can get funky results too. Also, I had „glitches” with too small scales where I'd always get full-frame BBoxes. But … all of those I didn't understand why. Sorry!
Question: what is re in the line 'svmvec = [float(x) for x in re.sub( '\s+', ' ', SVs.text ).strip().split(' ')]'?
The regular expression module, re.
Hi, I'm following your tutorial and it works (training only), but when it came to testing I got an error at hog.setSVMDetector(np.array(svm)). My system: OpenCV 3.2.0, OS: Windows 7 64-bit, Python 2.7.
@adamaulia, make sure that the winSize that you're using for your hog detector has the same dimensions as the training images you are using
@gravatar, I too got the same error, "OpenCV Error: Assertion failed (checkDetectorSize())". I had made sure that the winSize of the HOG has the same dimensions as the training images, and I also checked the size of np.array(svm): it is 15553, while the descriptor computed for a training image has size 15552. I think appending rho increased the length of np.array(svm). Can you suggest a solution for the error?
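A small diagnostic sketch that may help narrow this down; it assumes svmvec was built exactly as in the answer above and that the same HOG parameters are used for training and detection. As far as I can tell from the OpenCV sources, checkDetectorSize() accepts a detector whose length is either the descriptor size or the descriptor size plus one (the appended -rho), so a 15553-long vector against a 15552-long descriptor should normally pass; if it still fails, the HOGDescriptor used at detection time is probably computing a different descriptor size than the one used for training:

import numpy as np
import cv2

# same constructor arguments as used for training (window, block, block stride, cell, bins)
hog = cv2.HOGDescriptor((32, 64), (16, 16), (8, 8), (8, 8), 9)

detector = np.array(svmvec, dtype=np.float32)        # svmvec from the XML-parsing snippet above
print("descriptor size:", hog.getDescriptorSize())   # length of one HOG feature vector
print("detector length:", len(detector))             # should be descriptor size or descriptor size + 1

hog.setSVMDetector(detector)  # raises the checkDetectorSize() assertion if the sizes disagree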
https://answers.opencv.org/question/56655/how-to-use-a-custom-svm-with-hogdescriptor-in-python/
The loader registry allows applications to cherry-pick which loaders to include in their application bundle by importing just the loaders they need and registering them during initialization.
Applications can then make all those imported loaders available (via format autodetection) to all subsequent parse and load calls, without those calls having to specify which loaders to use.
Initialization imports and registers loaders:
import {registerLoaders} from '@loaders.gl/core';
import {CSVLoader} from '@loaders.gl/csv';

registerLoaders(CSVLoader);
Some other file that needs to load CSV:
import {load} from '@loaders.gl/core';

// The pre-registered CSVLoader gets auto selected based on file extension...
const data = await load('data.csv');
Registers one or more loader objects in a global loader registry; these loaders will be used if no loader object is supplied to parse and load.
loaders: a single loader or an array of loaders. The specified loaders will be added to any previously registered loaders.
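For instance, here is a sketch of registering more than one loader at once by passing an array; it assumes the @loaders.gl/json package is also installed, and the file names are just examples:

import {registerLoaders, load} from '@loaders.gl/core';
import {CSVLoader} from '@loaders.gl/csv';
import {JSONLoader} from '@loaders.gl/json';

// Add both loaders to the global registry during initialization
registerLoaders([CSVLoader, JSONLoader]);

// Later calls pick a loader from the registry based on the file extension
const table = await load('data.csv');
const metadata = await load('data.json');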
https://loaders.gl/docs/api-reference/core/register-loaders/