text
stringlengths 20
1.01M
| url
stringlengths 14
1.25k
| dump
stringlengths 9
15
⌀ | lang
stringclasses 4
values | source
stringclasses 4
values |
---|---|---|---|---|
Hi all First a brief question - is there a nicer way to do something like #ifdef __GLASGOW_HASKELL__ #include "GHCCode.hs" #else > import HugsCode #endif than that (i.e. code that needs to be different depending on if you are using GHC or HUGS)? Secondly, I don't know if this sort of thing is of interest to anyone, but inspired by the number of people who looked at the MD5 stuff I thought I might as well mention it. I've put all the Haskell stuff I've written at (although I'm new at this game so it may not be the best code in the world). At the moment this consists of a (very nearly complete) clone of GNU ls, and MD5 module and test program and the smae for SHA1 and DES. The ls clone needs a ptch to GHC for things like isLink (incidentally, would it be sensible to try and get this included with GHC? It is basically a simple set of changes to the PosixFiles module, but needs __USE_BSD defined (which I guess is the reason it is not in there, but it could have it's own file?)). Have fun Ian, wondering how this message got to be so long
|
http://www.haskell.org/pipermail/haskell-cafe/2001-February/001497.html
|
CC-MAIN-2014-15
|
en
|
refinedweb
|
The first article in this series explored LLVM
intermediate representation (IR). You hand-crafted a "Hello World" test
program; learned some of LLVM's nuances, like type casting; and finished
it off by creating the same program using LLVM application programming
interfaces (APIs). In the process, you also learned about LLVM tools such
as
llc and
lli and
figured out how to use llvm-gcc to emit LLVM IR for you. This second and
concluding part of the series explores some of the other cool things that
you can do with LLVM. Specifically, it looks at instrumenting
code—that is, adding information to the final executable that is
generated. It also explores a bit of clang, a front end for LLVM
supporting
C,
C++,
and Objective-C. You use the clang API to preprocess and generate an
abstract syntax tree (AST) for
C/C++ code.
LLVM passes
LLVM is known for the optimization features it provides. Optimizations are implemented as passes (for high-level details about LLVM passes, see Resources). The thing to note here is that LLVM provides you with the ability to create utility passes with minimum code. For example, if you don't want your function names to begin with "hello," you can have a utility pass to do just that.
Understanding the LLVM opt tool
From the man page for
opt, the
"
opt command is the modular LLVM optimizer and
analyzer." Once you have the code for the custom pass ready, you compile
it into a shared library and load it using
opt.
If your LLVM installation went well,
opt should
already be available in your system. The
opt
command accepts both LLVM IR (the .ll extension) and LLVM bit-code formats
(the .bc extension) and can generate the output as either LLVM IR or
bit-code. Here's how you use
opt to load your
custom shared library:
tintin# opt –load=mycustom_pass.so –help –S
Also note that running
opt –help from the
command line generates a laundry list of passes that LLVM performs. Using
the
load option with
help generates a help message that includes
information about your custom pass.
Creating a custom LLVM pass
You declare LLVM passes in the Pass.h file, which on my system is installed
under /usr/include/llvm. This file defines the interface for an individual
pass as part of the
Pass class. Individual pass
types, each derived from
Pass, are also
declared in this file. Pass types include:
BasicBlockPassclass. Used to implement local optimizations, with optimizations typically operating on a basic block or instruction at a time
FunctionPassclass. Used for global optimizations, one function at a time
ModulePassclass. Used to perform just about any unstructured interprocedural optimizations
Because you intend to create a pass that should object to any function
names beginning with "Hello," it's imperative that you create your own
pass by deriving from
FunctionPass. Copy the
code in Listing 1 from Pass.h.
Listing 1. Overriding the runOnFunction class in FunctionPass
Class FunctionPass : public Pass { /// explicit FunctionPass(char &pid) : Pass(PT_Function, pid) {} /// runOnFunction - Virtual method overridden by subclasses to do the /// per-function processing of the pass. /// virtual bool runOnFunction(Function &F) = 0; /// … };
Likewise, the
BasicBlockPass class declares a
runOnBasicBlock, and the
ModulePass class declares a
runOnModule pure virtual method. The child
class needs to provide a definition for the virtual method.
Coming back to the
runOnFunction method in
Listing 1, you see that the input is an object of type
Function. Dig into the
/usr/include/llvm/Function.h file to easily see that the
Function class is what LLVM uses to encapsulate
the functionality of a
C/C++ function.
Function in turn is derived from the
Value class defined in Value.h and supports a
getName method. Listing 2
shows the code.
Listing 2. Creating a custom LLVM pass
#include "llvm/Pass.h" #include "llvm/Function.h" class TestClass : public llvm::FunctionPass { public: virtual bool runOnFunction(llvm::Function &F) { if (F.getName().startswith("hello")) { std::cout << "Function name starts with hello\n"; } return false; } };
The code in Listing 2 misses out on two important details:
- The
FunctionPassconstructor needs a
char, which is used internally by LLVM. LLVM uses the address of the
char, so what you use to initialize it should not matter.
- You need some way for the LLVM system to understand that the class you created is a new pass. This is where the
RegisterPassLLVM template comes in. You declare the
RegisterPasstemplate in the PassSupport.h header file; this file is included in Pass.h, so you don't need additional headers.
Listing 3 shows the complete code.
Listing 3. Registering the LLVM Function pass
class TestClass : public llvm::FunctionPass { public: TestClass() : llvm::FunctionPass(TestClass::ID) { } virtual bool runOnFunction(llvm::Function &F) { if (F.getName().startswith("hello")) { std::cout << "Function name starts with hello\n"; } return false; } static char ID; // could be a global too }; char TestClass::ID = 'a'; static llvm::RegisterPass<TestClass> global_("test_llvm", "test llvm", false, false);
The
template parameter in the
RegisterPass template is the name of the pass
to be used on the command line with
opt. That's
it: All you need to do now is create a shared library out of the code in
Listing 3, and then run
opt to load the library
followed by the name of the command you registered using
RegisterPass—in this case,
test_llvm—and finally a bit-code file on
which your custom pass will run along with the other passes. The steps are
outlined in Listing 4.
Listing 4. Running the custom pass
bash$ g++ -c pass.cpp -I/usr/local/include `llvm-config --cxxflags` bash$ g++ -shared -o pass.so pass.o -L/usr/local/lib `llvm-config --ldflags -libs` bash$ opt -load=./pass.so –test_llvm < test.bc
Now look at the other side of the coin—the front end to the LLVM back end: clang.
Introducing clang
LLVM has its own front end—a tool (appropriately enough) called
clang. Clang is a powerful
C/C++/Objective-C compiler, with compilation
speeds comparable to or better than GNU Compiler Collection (GCC) tools
(see Resources for a link to more information).
More importantly, clang has a hackable code base, making for easy custom
extensions. Much like the way you used the LLVM back-end API for your
custom plug-in
in Part 1,
in this article you use the API for the LLVM front end and develop some small applications for
preprocessing and parsing.
Common clang classes
You need to familiarize yourself with some of the most common clang classes:
CompilerInstance
Preprocessor
FileManager
SourceManager
DiagnosticsEngine
LangOptions
TargetInfo
ASTConsumer
Sema
ParseASTis perhaps the most important clang method.
More on the
ParseAST method shortly.
For all practical purposes, consider
CompilerInstance the compiler proper. It
provides interfaces and manages access to the AST, preprocesses the input
sources, and maintains the target information. Typical applications need
to create a
CompilerInstance object to do
anything useful. Listing 5 provides a sneak peek into
the CompilerInstance.h header file.
Listing 5. The CompilerInstance class
class CompilerInstance : public ModuleLoader { /// The options used in this compiler instance. llvm::IntrusiveRefCntPtr<CompilerInvocation> Invocation; /// The diagnostics engine instance. llvm::IntrusiveRefCntPtr<DiagnosticsEngine> Diagnostics; /// The target being compiled for. llvm::IntrusiveRefCntPtr<TargetInfo> Target; /// The file manager. llvm::IntrusiveRefCntPtr<FileManager> FileMgr; /// The source manager. llvm::IntrusiveRefCntPtr<SourceManager> SourceMgr; /// The preprocessor. llvm::IntrusiveRefCntPtr<Preprocessor> PP; /// The AST context. llvm::IntrusiveRefCntPtr<ASTContext> Context; /// The AST consumer. OwningPtr<ASTConsumer> Consumer; /// \brief The semantic analysis object. OwningPtr<Sema> TheSema; //… the list continues };
Preprocessing a C file
There are at least two ways to create a preprocessor object in clang:
- Directly instantiate a
Preprocessorobject
- Use the
CompilerInstanceclass to create a
Preprocessorobject for you
Let's begin with the latter approach.
Helper and utility classes needed for preprocessing
The
Preprocessor alone won't be of much help:
You need the
FileManager and
SourceManager classes for reading files and
tracking source locations for diagnostics. The
FileManager class implements support for file
system lookup, file system caching, and directory search. Look into the
FileEntry class, which defines the clang
abstraction for a source file. Listing 6 provides an
excerpt from the FileManager.h header file.
Listing 6. The clang FileManager class
class FileManager : public llvm::RefCountedBase<FileManager> { FileSystemOptions FileSystemOpts; /// \brief The virtual directories that we have allocated. For each /// virtual file (e.g. foo/bar/baz.cpp), we add all of its parent /// directories (foo/ and foo/bar/) here. SmallVector<DirectoryEntry*, 4> VirtualDirectoryEntries; /// \brief The virtual files that we have allocated. SmallVector<FileEntry*, 4> VirtualFileEntries; /// NextFileUID - Each FileEntry we create is assigned a unique ID #. unsigned NextFileUID; // Statistics. unsigned NumDirLookups, NumFileLookups; unsigned NumDirCacheMisses, NumFileCacheMisses; // … // Caching. OwningPtr<FileSystemStatCache> StatCache;
The
SourceManager class is typically queried for
SourceLocation objects. From the
SourceManager.h header file, information about
SourceLocation objects is provided in Listing 7.
Listing 7. Understanding SourceLocation
/// There are three different types of locations in a file: a spelling /// location, an expansion location, and a presumed location. /// /// Given an example of: /// #define min(x, y) x < y ? x : y /// /// and then later on a use of min: /// #line 17 /// return min(a, b); /// /// The expansion location is the line in the source code where the macro /// was expanded (the return statement), the spelling location is the /// location in the source where the macro was originally defined, /// and the presumed location is where the line directive states that /// the line is 17, or any other line.
Clearly,
SourceManager depends on
FileManager behind the scenes; indeed, the
SourceManager class constructor accepts a
FileManager class as the input argument.
Finally, you need to keep track of errors you might encounter while
processing the source and report the same. You do so using the
DiagnosticsEngine class. As with
Preprocessor, you have two options:
- Create all the necessary objects on your own
- Let the
CompilerInstancedo everything for you
Let's stick with the latter option. Listing 8 shows the code for
the
Preprocessor; everything else has already
been explained.
Listing 8. Creating a preprocessor with the clang API
using namespace clang; int main() { CompilerInstance ci; ci.createDiagnostics(0,NULL); // create DiagnosticsEngine ci.createFileManager(); // create FileManager ci.createSourceManager(ci.getFileManager()); // create SourceManager ci.createPreprocessor(); // create Preprocessor const FileEntry *pFile = ci.getFileManager().getFile("hello.c"); ci.getSourceManager().createMainFileID(pFile); ci.getPreprocessor().EnterMainSourceFile(); ci.getDiagnosticClient().BeginSourceFile(ci.getLangOpts(), &ci.getPreprocessor()); Token tok; do { ci.getPreprocessor().Lex(tok); if( ci.getDiagnostics().hasErrorOccurred()) break; ci.getPreprocessor().DumpToken(tok); std::cerr << std::endl; } while ( tok.isNot(clang::tok::eof)); ci.getDiagnosticClient().EndSourceFile(); }
Listing 8 uses the
CompilerInstance class to
serially create the
DiagnosticsEngine (the
ci.createDiagnostics method call) and
FileManager
(
ci.createFileManager and
ci.CreateSourceManager) for you. Once the file
association is done using
FileEntry, continue
processing each token in the source file until you reach end of file
(EOF). The preprocessor's
DumpToken method
dumps the token on screen.
To compile and run the code in Listing 8, use the makefile provided in Listing 9, with appropriate adjustments for your clang and LLVM installation folders. The idea is to use the llvm-config tool to provide any necessary LLVM include paths and libraries: You should never attempt to hand link those on a g++ command line.
Listing 9. Makefile for building preprocessor code
CXX := g++ RTTIFLAG := -fno-rtti CXXFLAGS := $(shell llvm-config --cxxflags) $(RTTIFLAG) LLVMLDFLAGS := $(shell llvm-config --ldflags --libs) DDD := $(shell echo $(LLVMLDFLAGS)) SOURCES = main.cpp OBJECTS = $(SOURCES:.cpp=.o) EXES = $(OBJECTS:.o=) CLANGLIBS = \ -L /usr/local/lib \ -lclangFrontend \ -lclangParse \ -lclangSema \ -lclangAnalysis \ -lclangAST \ -lclangLex \ -lclangBasic \ -lclangDriver \ -lclangSerialization \ -lLLVMMC \ -lLLVMSupport \ all: $(OBJECTS) $(EXES) %: %.o $(CXX) -o $@ $< $(CLANGLIBS) $(LLVMLDFLAGS)
After compiling and running the above code, you should get the output in Listing 10.
Listing 10. Crash while running the coding in Listing 7
Assertion failed: (Target && "Compiler instance has no target!"), function getTarget, file /Users/Arpan/llvm/tools/clang/lib/Frontend/../.. /include/clang/Frontend/CompilerInstance.h, line 294. Abort trap: 6
What happened here is that you missed out on one last piece of the
CompilerInstance settings: the target platform
for which this code should be compiled. This is where the
TargetInfo and
TargetOptions classes come in. According to the
clang header TargetInfo.h, the
TargetInfo class
stores the necessary information about the target system for the code
generation and has to be created before compilation or preprocessing can
ensue. As expected,
TargetInfo seems to have
information about integer and float widths, alignments, and the like. Listing 11 provides an excerpt from the
TargetInfo.h header file.
Listing 11. The clang TargetInfo class
class TargetInfo : public llvm::RefCountedBase<TargetInfo> { llvm::Triple Triple; protected: bool BigEndian; unsigned char PointerWidth, PointerAlign; unsigned char IntWidth, IntAlign; unsigned char HalfWidth, HalfAlign; unsigned char FloatWidth, FloatAlign; unsigned char DoubleWidth, DoubleAlign; unsigned char LongDoubleWidth, LongDoubleAlign; // …
The
TargetInfo class takes in two arguments for
its initialization:
DiagnosticsEngine and
TargetOptions. Of these, the latter must have
the
Triple string set to the appropriate value
for the current platform. This is where LLVM comes in handy. Listing 12 shows the addition to Listing 9 to make
the preprocessor work.
Listing 12. Setting the target options for the compiler
int main() { CompilerInstance ci; ci.createDiagnostics(0,NULL); // create TargetOptions TargetOptions to; to.Triple = llvm::sys::getDefaultTargetTriple(); // create TargetInfo TargetInfo *pti = TargetInfo::CreateTargetInfo(ci.getDiagnostics(), to); ci.setTarget(pti); // rest of the code same as in Listing 9… ci.createFileManager(); // …
That's it. Run this code and see what output you get for the simple hello.c test:
#include <stdio.h> int main() { printf("hello world!\n"); }
Listing 13 shows the partial preprocessor output.
Listing 13. Preprocessor output (partial)
typedef 'typedef' struct 'struct' identifier '__va_list_tag' l_brace '{' unsigned 'unsigned' identifier 'gp_offset' semi ';' unsigned 'unsigned' identifier 'fp_offset' semi ';' void 'void' star '*' identifier 'overflow_arg_area' semi ';' void 'void' star '*' identifier 'reg_save_area' semi ';' r_brace '}' identifier '__va_list_tag' semi ';' identifier '__va_list_tag' identifier '__builtin_va_list' l_square '[' numeric_constant '1' r_square ']' semi ';'
Hand-crafting a Preprocessor object
One of the good things about clang libraries, you can achieve the same result in multiple
ways. In this section, you craft a
Preprocessor object but without making a direct
request to
CompilerInstance. From the
Preprocessor.h header file, Listing 14 shows the
constructor for the
Preprocessor.
Listing 14. Constructing a Preprocessor object
Preprocessor(DiagnosticsEngine &diags, LangOptions &opts, const TargetInfo *target, SourceManager &SM, HeaderSearch &Headers, ModuleLoader &TheModuleLoader, IdentifierInfoLookup *IILookup = 0, bool OwnsHeaderSearch = false, bool DelayInitialization = false);
Looking at the constructor, it's clear that you need to create six
different objects before this beast can start up. You already know
DiagnosticsEngine,
TargetInfo, and
SourceManager.
CompilerInstance is derived from
ModuleLoader. So you must create two new
objects—one for
LangOptions and another
for
HeaderSearch. The
LangOptions class lets you compile a range of
C/C++ dialects, including
C99,
C11, and
C++0x. Refer to the LangOptions.h and
LangOptions.def headers for more information. Finally, the
HeaderSearch class stores an
std::vector of directories to search, amidst
other things. Listing 15 shows the code for the
Preprocessor.
Listing 15. Hand-crafted preprocessor
using namespace clang; int main() { DiagnosticOptions diagnosticOptions; TextDiagnosticPrinter *printer = new TextDiagnosticPrinter(llvm::outs(), diagnosticOptions); llvm::IntrusiveRefCntPtr<clang::DiagnosticIDs> diagIDs; DiagnosticsEngine diagnostics(diagIDs, printer); LangOptions langOpts; clang::TargetOptions to; to.Triple = llvm::sys::getDefaultTargetTriple(); TargetInfo *pti = TargetInfo::CreateTargetInfo(diagnostics, to); FileSystemOptions fsopts; FileManager fileManager(fsopts); SourceManager sourceManager(diagnostics, fileManager); HeaderSearch headerSearch(fileManager, diagnostics, langOpts, pti); CompilerInstance ci; Preprocessor preprocessor(diagnostics, langOpts, pti, sourceManager, headerSearch, ci); const FileEntry *pFile = fileManager.getFile("test.c"); sourceManager.createMainFileID(pFile); preprocessor.EnterMainSourceFile(); printer->BeginSourceFile(langOpts, &preprocessor); // … similar to Listing 8 here on }
Note a few things about the code in Listing 15:
- You have not initialized
HeaderSearchto point to any specific directories. You should do so.
- The clang API requires that
TextDiagnosticPrinterbe allocated on the heap. Allocating on the stack causes a crash.
- You have not been able to get rid of
CompilerInstance. Because you are using
CompilerInstanceanyway, why bother to hand-craft it at all other than to be more comfortable with the clang API?
Language option: C++
You have worked with
C test code so far:
How about a bit of
C++, then? To the code in Listing 15, add
langOpts.CPlusPlus = 1;, and try it with the
test code in Listing 16.
Listing 16. C++ test code for the preprocessor
template <typename T, int n> struct s { T array[n]; }; int main() { s<int, 20> var; }
Listing 17 shows the partial output from your program.
Listing 17. Partial preprocessor output from the code in Listing 16
identifier 'template' less '<' identifier 'typename' identifier 'T' comma ',' int 'int' identifier 'n' greater '>' struct 'struct' identifier 's' l_brace '{' identifier 'T' identifier 'array' l_square '[' identifier 'n' r_square ']' semi ';' r_brace '}' semi ';' int 'int' identifier 'main' l_paren '(' r_paren ')'
Creating a parse tree
The
ParseAST method defined in
clang/Parse/ParseAST.h is one of the more important methods that clang
provides. Here's one of the declaration of the routine, copied from
ParseAST.h:
void ParseAST(Preprocessor &pp, ASTConsumer *C, ASTContext &Ctx, bool PrintStats = false, TranslationUnitKind TUKind = TU_Complete, CodeCompleteConsumer *CompletionConsumer = 0);
ASTConsumer provides you with an abstract
interface from which to derive. This is the right thing to do, because
different clients are likely to dump or process the AST in
different ways. Your client code will be derived from
ASTConsumer. The
ASTContext class stores—among other
things—information about type declarations. Here's the easiest
thing to try: Print a list of global variables in your code using the
clang ASTConsumer API. Many tech firms have strict rules about global
variable usage in
C++ code, and this could be
the starting point of creating your custom lint tool. The code for your
custom consumer is provided in Listing 18.
Listing 18. A custom AST consumer class
class CustomASTConsumer : public ASTConsumer { public: CustomASTConsumer () : ASTConsumer() { } virtual ~ CustomASTConsumer () { } virtual bool HandleTopLevelDecl(DeclGroupRef decls) { clang::DeclGroupRef::iterator it; for( it = decls.begin(); it != decls.end(); it++) { clang::VarDecl *vd = llvm::dyn_cast<clang::VarDecl>(*it); if(vd) std::cout << vd->getDeclName().getAsString() << std::endl;; } return true; } };
You are overriding the
HandleTopLevelDecl method
(originally provided in
ASTConsumer) with your
own version. Clang is passing the list of globals to you; you iterate over
the list and print the variable names. Excerpted from ASTConsumer.h,
Listing 19 shows several other methods that client
consumer code could override.
Listing 19. Other methods you could override in client code
/// HandleInterestingDecl - Handle the specified interesting declaration. This /// is called by the AST reader when deserializing things that might interest /// the consumer. The default implementation forwards to HandleTopLevelDecl. virtual void HandleInterestingDecl(DeclGroupRef D); /// HandleTranslationUnit - This method is called when the ASTs for entire /// translation unit have been parsed. virtual void HandleTranslationUnit(ASTContext &Ctx) {} /// HandleTagDeclDefinition - This callback is invoked each time a TagDecl /// (e.g. struct, union, enum, class) is completed. This allows the client to /// hack on the type, which can occur at any point in the file (because these /// can be defined in declspecs). virtual void HandleTagDeclDefinition(TagDecl *D) {} /// Note that at this point it does not have a body, its body is /// instantiated at the end of the translation unit and passed to /// HandleTopLevelDecl. virtual void HandleCXXImplicitFunctionInstantiation(FunctionDecl *D) {}
Finally, Listing 20 shows the actual client code using the custom AST consumer class that you developed.
Listing 20. Client code using a custom AST consumer
int main() { CompilerInstance ci; ci.createDiagnostics(0,NULL); TargetOptions to; to.Triple = llvm::sys::getDefaultTargetTriple(); TargetInfo *tin = TargetInfo::CreateTargetInfo(ci.getDiagnostics(), to); ci.setTarget(tin); ci.createFileManager(); ci.createSourceManager(ci.getFileManager()); ci.createPreprocessor(); ci.createASTContext(); CustomASTConsumer *astConsumer = new CustomASTConsumer (); ci.setASTConsumer(astConsumer); const FileEntry *file = ci.getFileManager().getFile("hello.c"); ci.getSourceManager().createMainFileID(file); ci.getDiagnosticClient().BeginSourceFile( ci.getLangOpts(), &ci.getPreprocessor()); clang::ParseAST(ci.getPreprocessor(), astConsumer, ci.getASTContext()); ci.getDiagnosticClient().EndSourceFile(); return 0; }
Conclusion
This two-part series covered a lot of ground: It explored LLVM IR, offered
ways to generate IR through hand-crafting and LLVM APIs, showed how to
create a custom plug-in for the LLVM back end, and explained the LLVM
front end and its rich set of headers. You also learned how to use this
front end for preprocessing and AST consumption. Creating a compiler and
extending it, particularly for complex languages like
C++, had seemed like rocket science earlier in
the history of computing, but with LLVM, life has been simplified.
Documentation is where LLVM and clang still need work, but until that's
sorted out, I recommend a fresh cup of brew and VIM/doxygen to browse the
headers. Have fun!
Resources
Learn
- Learn the basics of the LLVM in Create a working compiler with the LLVM framework, Part 1: Build a custom compiler with LLVM and its intermediate representation (Arpan Sen, developerWorks, June 2012). Optimize your applications regardless of the programming language you use with the powerful the LLVM compiler infrastructure. Building a custom compiler just got easier!
- Learn more about LLVM passes.
- Get on the clang developer mailing list.
- Read Getting Started: Building and Running Clang for detailed information on building and installing clang.
- Take the official LLVM Tutorial for a great introduction to LLVM.
- Dig into the LLVM Programmer's Manual, an indispensable resource for the LLVM API.
- The Open Source developerWorks zone provides a wealth of information on open source tools and using open source technologies.
- In the developerWorks Linux zone, find hundreds of how-to articles and tutorials, as well as downloads, discussion forums, and a wealth of other resources for Linux developers and administrators.
-
- Visit the LLVM project site and download the latest version.
- Find details about clang from the LLVM site.
-.
|
http://www.ibm.com/developerworks/linux/library/os-createcompilerllvm2/index.html?ca=drs-
|
CC-MAIN-2014-15
|
en
|
refinedweb
|
for...in Statement
Executes one or more statements for each property of an object, or each element of an array or collection..
The following example illustrates the use of the for ... in statement with an object used as an associative array.
This function returns the string that contains the following.
This example illustrates the use of the for ... in statement with a JScript Array object that has expando properties.
function ForInDemo2() { // Initialize the array. var arr = new Array("zero","one","two"); // Add a few expando properties to the array. arr["orange"] = "fruit"; arr["carrot"] = "vegetable"; // Iterate over the properties and elements // and create the string result. var s = ""; for (var key in arr) { s += key + ": " + arr[key]; s += "\n"; } return (s); }
This function returns the string that contains the following.
The following example illustrates the use of the for ... in statement with a collection. Here, the GetEnumerator method of the System.String object provides a collection of the characters in the string.
function ForInDemo3() { // Initialize the collection. var str : System.String = "Test."; var chars : System.CharEnumerator = str.GetEnumerator(); // Iterate over the collection elements and // create the string result. var s = ""; var i : int = 0; for (var elem in chars) { s += i + ": " + elem s += "\n"; i++; } return(s); }
This function returns the string that contains the following.
|
http://msdn.microsoft.com/en-us/library/4z08sst3.aspx
|
CC-MAIN-2014-15
|
en
|
refinedweb
|
hibernate firstExample not inserting data - Hibernate
hibernate firstExample not inserting data hello all ,
i followed... class FirstExample {
public static void main(String[] args) {
Session... Session
This section contains the explanation of hibernate session
hibernate session invalid in jpa
hibernate session invalid in jpa hibernate session invalid in jpa
Hibernate Session
In this section, you will learn about Hibernate Session Why to use Session interface in Hibernate?
It is the primary interface in Hibernate. It is a single... persistent objects.
Session session = sessionFactory.openSession
Session Interface in hibernate.
Session Interface in hibernate. Define the session interface?
Hi Samar,
Session interface is a single threaded object. It is the major.... It represents hibernate session, which perform the manipulation on the database
Hibernate Session Load
This section contain description of Hibernate session load
Hibernate Session Management
In this section we will discuss How to manage Hibernate session
Define the session factory interface in hibernate?
Define the session factory interface in hibernate? Define the session factory interface in hibernate?
Session factory is used for manageing the session objects.public interface SessionFactory
extends Referenceable
Hibernate: Session Caching
In this section we will discuss first type of caching in Hibernate that is Session caching
Hibernate Session Get
This part of discussion contain description of Hibernate session get () method
Why to use Session interface in Hibernate?
Why to use Session interface in Hibernate? Why to use Session interface in Hibernate?
Session interface is defined in org.hibernate.Session.
It is a single-threaded, short-lived object representing a conversation
Java - Hibernate
FirstExample {
public static void main(String[] args) {
Session session = null..., this type of output.
----------------------------
Inserting Record
Done
Hibernate...().configure().buildSessionFactory();
session =sessionFactory.openSession
How do we create session factory in hibernate?
How do we create session factory in hibernate? Hi,
How do we create session factory in hibernate?
thanks
Problem in running first hibernate program.... - Hibernate
/FirstExample
Exception in thread "main" "... FirstExample {
public static void main(String[] args) {
Session session = null...Problem in running first hibernate program.... Hi...I am using
Session Factory
Session Factory Define the session factory interface in hibernate?
Hi Samar,
It creates new hibernate sessions by referencing immutable and thread safe objects. Application using hibernate are usually allowed
I need hibernate session factory example.
I need hibernate session factory example. Hi,
I want a simple hibernate session factory example..
hello,
Here is a simple Hibernate SessionFactory Example
Also go through the Hibernate 4
Thanks
update count from session - : Stateless Session
In this section we will show how stateless session works
Hibernate session close
In this section, you will learn about session life cycle - from start to end(session close
Hibernate :Clear Session ,Close Session
This part contains description of Hibernate session.clear() and session.close. : Session Lock
This tutorial contains how Session.lock() method works in Hibernate
Hibernate : Flushing Session
In this section we will discuss about Hibernate session.flush
Hibernate : Session Save
In this section we will discuss how session.save() works in Hibernate;
import org.hibernate.Session;
import org.hibernate.*;
import... static void main(String[] args) {
Session session = null;
try{
SessionFactory...();
session =sessionFactory.openSession();
//Create Select Clause HQL
String SQL;
import org.hibernate.Session;
import org.hibernate.*;
import... static void main(String[] args) {
Session session = null;
try...()
.buildSessionFactory();
session =sessionFactory.openSession
Hibernate - Hibernate
Hibernate pojo example I need a simple Hibernate Pojo example ...[]){ Session session = null; try{ SessionFactory sessionFactory = new Configuration().configure().buildSessionFactory(); session =sessionFactory.openSession
hibernate - Hibernate
hibernate hi i am new to hibernate.
i wan to run a select query... class SelectHQLExample {
public static void main(String[] args) {
Session session = null;
try{
// This step will read hibernate.cfg.xml and prepare
why do we need session?
actually what stored in a session... and user would not able to retrieve the information. So, Session provides that facility to store information on server memory.
2)Variables are stored in session
Struts&Hibernate - Hibernate
Struts Hibernate Session Please give me an example of Struts and Hibernate session get and closed
session
session how to implement login-logout session????
Please visit the following links:
http Criteria Queries - Hibernate
Hibernate Criteria Queries Can I use the Hibernate Criteria Query features without creating the Hibernate.cfg.xml file?
Session session = null...().buildSessionFactory();
session = sessionFactory.openSession();
Criteria crit: core interfaces of hibernate framework
Hibernate: core interfaces of hibernate framework What are the core interfaces of hibernate framework
Most of hibernate based application code deals with following interfaces provided by the Hibernate Core
Session #1 - Hibernate introduction and creating CRUD application
Session #1 - Video tutorial of Hibernate introduction and creating CRUD....
Topics covered in this session are:
Data persistence
Hibernate... programs.
Following image shows the agenda of this Hibernate video training session code - Hibernate
hibernate code well deepak,Regarding the inputs u asked for my fisr... void main(String[] args) {
Session session = null;
try{
// This step will read hibernate.cfg.xml and prepare hibernate for use
SessionFactory
hibernate record not showing in database - Hibernate
hibernate record not showing in database session =sessionFactory.openSession(); //inserting rocords in Echo Message table...)); //It showing on console Records inserted 21 But not showing in database
Could not read mappings from resource: e1.hbm.xml - Hibernate
/*tHIS IS THE HIBERNATE cONFIGURATION*/
/*THIS IS THE HIBERNATE... org.hibernate.cfg.Configuration;
import org.hibernate.cfg.*;
public class FirstExample
Hibernate SessionFactory Example
In this example, we will learn about the getting session instance from the SessionFactory interface in Hibernate 4
java(Hibernate) - Hibernate
java(Hibernate) Hai Amardeep
This is jagadhish.Iam giving full code...();
SessionFactory factory=cf.buildSessionFactory();
Session session=factory.openSession...) {
// TODO Auto-generated method stub
Session session = null;
try{
// This step
java - Hibernate
getSessionFactory() is undefined for the type HibernateUtil
Session cannot be resolved... the code and run using Hibernate and annotation..plse help me... Hi friend,
Read for more information.
Configuring Hibernate
">
<hibernate-configuration>
<session-factory>
<property...
</session-factory>
</hibernate...Configuring Hibernate How to configure Hibern
in connectivity - Hibernate
the hibernate and postgresql
that progrram is running while showing no error... insertted Hi friend,
This is connectivity and hibernate configuration...(String[] args) {
// TODO Auto-generated method stub
Session session
Persist a List Object in Hibernate - Hibernate
on hibernate.
How to persist a List object in Hibernate ?
Can you give me...) {
// TODO Auto-generated method stub
Session sess = null;
try....
Searching - Hibernate
Searching How we can search the record through Hibernate. As we do rs.next() to
get the records in jdbc .Then how we can do this using Hibernate...{
public static void main(String[] arg){
Session session = null;
try
Please explain Hibernate Sessionfactory.
Please explain Hibernate Sessionfactory. Hi there,
Please explain Hibernate session factory in detail. I have just started learning hibernate so i... session factory -
Hibernate SessionFactory
Hibernate SessionFactory Example
Hibernate - Framework
) {
SessionFactory sessFact=null;
Session sess=null;
try {
sessFact=new... on hibenate to visit....
Thanks
hibernate annotations
hibernate annotations I am facing following problem, I have created... hibernate annotations to insert records into these tables. But it is trying...-generated method stub
Session session=HibernateSessionFactory.getSession
struts - hibernate doubt
that it's not necessary to close Hibernate session nor flush the session if you...struts - hibernate doubt Hello.
I'm developing a web application using Struts 1.0 and Hibernate (I'm a beginner).
I've been reading your web
jsf &hibernate integration
jsf &hibernate integration how can i get session factory from hibernate configuration file with java server faces application specially(bean methods delete a row error - Hibernate
Hibernate delete a row error Hello,
I been try with the hibernate delete example (... static void main(String[] args) {
Session sess = null;
try
Hibernate : Session evict
SessionFactory interface in Hibernate
SessionFactory interface in Hibernate What is SessionFactory interface in Hibernate?
The application obtains Session instances from... application.If your application accesses multiple databases using Hibernate, you'll need
Hiber Architecture
by an instance of Session interface. Transaction API in
Hibernate abstracts... Hibernate
obtains a JDBC connections.
Session : Session interface is a major...Hibernate Architecture
In this tutorial you will learn about the Hibernate
Dynamic-update not working in Hibernate.
Dynamic-update not working in Hibernate. Why is dynamic update not working in hibernate?
Dynamic-update is not working. It means when...());
Session session = sessionFactory.openSession();
Transaction tx
DriverClass hibernate mysql connection.
/hibernate-configuration-3.0.dtd">
<hibernate-configuration>
<session...="net.roseindia.table.Employee" />
</session-factory>
</hibernate...DriverClass hibernate mysql connection. What is DriverClass
Session in Php
Session in Php What Is a Session
session hanling
session hanling session handling in facelets/jsf
How to create sessionfactory in Hibernate 4.1?
Session Factory
How to create SessionFactory in Hibernate 4.3.1
Hibernate...How to create sessionfactory in Hibernate 4.1? hi,
I have worked on earlier versions of Hibernate but I am finding it difficult to work
Using sum() in hibernate criteria.
Using sum() in hibernate criteria. How to calculate sum() in hibernate criteria by using projection?
You can calculate sum of any numeric value in hibernate criteria by using projection. Projection is an interface
Hibernate SessionFactory
Hibernate SessionFactory
In this tutorial we will learn about how the SessionFactory creates the
session.
SessionFactory is an interface that extends the Referenceable and
Serializable interfaces and produces the Session instances
Benefits of detached objects in hibernate
Benefits of detached objects in hibernate What are benefits of detached objects in hibernate?
Detached objects can be passed across... session
How to create sessionfactory in Hibernate 4.1.1?
in hibernate 4.1.1:
Hibernate 4 create Session Factory
How to create SessionFactory...How to create sessionfactory in Hibernate 4.1.1? HI,
I need to create Sessionfactory in Hibernate 4.1.1. How can i do it?
Thanks.
Complete Hibernate 4.0 Tutorial
Hibernate Dependency
Hibernate Mapping
Hibernate Session...
Hibernate session close
Hibernate lazyinitializationexception...
This section contains the Complete Hibernate 4.0
Tutorial
Hibernate Architecture
and the related O/R mapping.
Hibernate Session: The Session interface provides the API...Hibernate Architecture - Understand the architecture of the Hibernate ORM... Hibernate
ORM framework. Hibernate is based on the Java technologies - Training
;
Session Reconnect
Hibernate Action Base Class ...
Hibernate Training
Hibernate
Training Course Objectives
|
http://www.roseindia.net/tutorialhelp/comment/37611
|
CC-MAIN-2014-15
|
en
|
refinedweb
|
On 18 Nov 2004 12:58:49 GMT, Duncan Booth <duncan.booth at invalid.invalid> wrote: > Carlos. I think that a better way to solve the problem is to create a names method on the tuple itself: return ('1', '2').names('ONE', 'TWO') It's shorter and clean, and avoids a potential argument against named parameters for the tuple constructor -- none of the standard contructors take named parameters to set extended behavior as far as I know. -- Carlos Ribeiro Consultoria em Projetos blog: blog: mail: carribeiro at gmail.com mail: carribeiro at yahoo.com
|
https://mail.python.org/pipermail/python-list/2004-November/276785.html
|
CC-MAIN-2014-15
|
en
|
refinedweb
|
Type: Posts; User: ciscoqueen90x
Wow, I did not know that. Thank you, I will have to try that.
What if there are no values listed in the watch window? Sometimes that happens to me.
#include <iostream>
using namespace std;
bool isPrime(int value); //Prototyle for "prime number function"
int reverse (int value2); //Prototype for "emirp function"
int main()
{
//Ask the...
Calculate the first N emirp (prime, spelled backwards) numbers, where N is a positive number that the
user provides as input. An Emirp is a prime number whose reversal is also a prime. For...
|
http://forums.codeguru.com/search.php?s=6f29854d87dc7e7419dd10b288f92787&searchid=2757591
|
CC-MAIN-2014-15
|
en
|
refinedweb
|
Products and Services
Downloads
Store
Support
Education
Partners
About
Oracle Technology Network
When IPv6 is enabled on Suse 9 (or JDS 3) it is not possible
to bind a ServerSocket to the local address.
App receives a "cannot assign requested address" error
###@###.### 2004-12-09 18:17:48 GMT
Just in case.., test case is here:
============ TestCase ===========
import java.net.*;
public class TestServer {
public static void main(String[] args) throws Exception {
if (args.length != 2) {
System.out.println("Usage: java TestServer <host_address_ipv6> <port>");
System.exit(1);
}
System.out.println("Host Address="+args[0]);
System.out.println("Port ="+args[1]);
ServerSocket ss = new ServerSocket();
ss.bind(new InetSocketAddress(InetAddress.getByName(args[0]),
Integer.parseInt(args[1])));
System.out.println("ServerSocket Bound");
ss.accept();
}
}
java TestServer <host ipv6 address> <port>
for host use local link-local address of the host port 5555
Verify:
On success, the server socket will be bound to the specified address and port. O
n failure some exception will be thrown. For
SuSE Linux ES 9.0, following is thrown:
Exception in thread "main" java.net.BindException: Cannot assign requested addre
ss
at java.net.PlainSocketImpl.socketBind(Native Method)
at java.net.PlainSocketImpl.bind(PlainSocketImpl.java:331)
at java.net.ServerSocket.bind(ServerSocket.java:318)
at java.net.ServerSocket.bind(ServerSocket.java:276)
at TestServer.main(TestServer.java:17)
--------------------------------------------------------------------------------
---
System Specification:
java version "1.4.2_05"
Java(TM) 2 Runtime Environment, Standard Edition (build 1.4.2_05-b04)
Java HotSpot(TM) Client VM (build 1.4.2_05-b04, mixed mode)
SUSE LINUX Enterprise Server 9 (i586) - Kernel 2.6.5-7.97-default (6)
----------------end
###@###.### 2004-12-15 19:15:49 GMT
EVALUATION
In the Linux native code, we use the kernel IPv6 routing table
to figure out which scope_id value to use when a given Inet6Address
has no scope_id set. For the local link-local address, this
always points to the "lo" loopback interface, which is correct
for destination addresses, ie. packets are sent out the loopback
interface when the destination is the local LL address. However,
this is not correct when binding to a local address. In this
case, it doesn't make sense to set the scope_id to the loopback
interface because that would seem to preclude binding on physical
interfaces (which is normally what you want to do). This worked
ok on Linux kernels prior to 2.6 because presumably they just ignored
the scope_id setting in the bind() call. Unfortunately, it is broken
now in 2.6.
Linux should allow the application to use a scope_id of zero
which would force the kernel to try an figure out which interface
to use (in most cases it would be possible to determine it).
Since it does not do this, we need to use a different method
for determining the scope_id when binding to a local address.
In this case, we will search /proc/net/if_inet6 instead of
/proc/net/ipv6_route.
###@###.### 2005-1-04 13:31:44 GMT
|
http://bugs.java.com/bugdatabase/view_bug.do?bug_id=6206527
|
CC-MAIN-2014-15
|
en
|
refinedweb
|
As author for ebook for epub, all carriage return (or enter) show as a question marks. Why?
I doubt you're using Authorware to make an eBook....? What software tool are you using?
Best to post in that forum....likely InDesign?
That said, it probably has something to do with the font you're using (embed it with the project?) or the encoding (ASCII vs UTF-8 vs Unicode) you have setup...maybe.
Erik
Hi Erik - I used Abobe Digital Editions.
Pam
Erik
I am on a mac and want to check my .ePub file. Opened manuscript in Digital Editions. The jacket did not show at all, some page breaks (or might be section breaks) are ignored and ALL of the carriage returns (or ENTERS) are question marks. Presumably Digital Editions is the right place, and not Authorware which I have only just found?
Pam
|
http://forums.adobe.com/thread/973600
|
CC-MAIN-2014-15
|
en
|
refinedweb
|
Code. Collaborate. Organize.
No Limits. Try it Today.
With the introduction of 2 in 1 devices, applications need to be able to toggle between "laptop mode" and "tablet mode" to provide the best possible user experience. Touch-optimized applications are much easier to use in "tablet mode" (without a mouse or keyboard), than applications originally written for use in "laptop mode" or "desktop mode" (with a mouse and keyboard). It is critical for applications to know when the device mode has changed and toggle between the two modes dynamically.
This paper describes a mechanism for detecting mode changes in a Windows* 8 or Windows* 8.1 Desktop application, and provides code examples from a sample application that was enhanced to provide this functionality.
Most people are familiar with Desktop mode applications. Windows* XP and Windows* 7 applications are the examples. These types of apps commonly use a mouse and a keyboard to input data, and often have very small icons to click on, menus that contain many items, sub-menus, etc. These items are usually too small and close together to be selected effectively using a touch interface.
Touch-optimized applications are developed with the touch interface in mind from the start. The icons are normally larger, and the number of small items are kept to a minimum. These optimizations to the user interface make using touch-based devices much easier. With these UI elements correctly sized, you should extend this attention to the usability of objects the application is handling. Therefore, graphic objects representing these items should also be adapted dynamically.
The MTI (MultiTouchInterface) sample application was originally written as part of the Intel® Energy Checker SDK (see Additional Resources) to demonstrate (among many other things) how the ambient light sensors can be used to change the application interface.
At its core, the MTI sample application allows the user to draw and manipulate a Bézier curve. The user simply defines in order, the first anchor point, the first control point, the second anchor point, and finally the second and last control point.
Figure 1 shows an example of a Bézier curve. Note that the size and color of each graphic element are designed to allow quick recognition—even via a computer vision system if required—and easy manipulation using touch.
Figure 2, Figure 3, Figure 4, and Figure 5 show the key interactions the user can have with the Bézier curve. An extra touch to the screen allows redrawing a new curve.
Support for Ambient Light Sensors (ALS) was added to the MTI sample application. Once the level of light is determined, the display dynamically changes to make it easier for the user to see and use the application in varying light situations. Microsoft recommends increasing the size of UI objects and color contrast as illumination increases.
MTI changed the interface in numerous stages, according to the light level. In a bright light situation, the MTI application changes the display to "high contrast" mode, increasing the size of the anchor and control points and fading the colors progressively to black and white. In a lower light situation, the application displays a more colorful (less contrasted) interface, with smaller anchor and control points.
Indeed, anyone who has used a device with an LCD screen, even with backlight, knows it may be difficult to read the screen on a sunny day. Figures 6 and Figure 7 show the issue clearly.
In our case, we decided to re-use the size change mechanism that we implemented for the ALS support. We are using only the two extremes of the display changes for the UI objects’ size that were introduced for the ALS support. We do this, simply by setting the UI objects’ size to the minimum when the system is in non-tablet mode, and to the maximum when it is in tablet mode (by convention, the unknown mode maps to the non-tablet mode).
Using the two extremes of the display shown above, the original MTI source code was modified to add new capabilities to toggle between the two contrast extremes based on a certain event. The event used to toggle between the two contrast extremes is the switch between tablet mode and laptop mode of a 2 in 1 device. Switches in the hardware signal the device configuration change to the software (Figure 8).
Upon starting the Bezier_MTI application, the initial status of the device is unknown (Figure 9). This is because the output of the API used to retrieve the configuration, is valid only when a switch notification has been received. At any other time, the output of the API is undefined.
Note that only the first notification is required since an application can memorize that it received a notification using a registry value. With this memorization mechanism, at next start, the application could detect its state using the API. If the application knows that it has received a notification in the past on this platform, then it can use the GetSystemMetrics function to detect its initial state. Such mechanism is not implemented in this sample.
GetSystemMetrics
When the mode of the device is changed, Windows sends a WM_SETTINGCHANGE message to the top level window only, with "ConvertibleSlateMode" in the LPARAM parameter. Bezier_MTI detects the configuration change notification from the OS via this message.
WM_SETTINGCHANGE
ConvertibleSlateMode
LPARAM
If LPARAM points to a string equal to "ConvertibleSlateMode", then the app should call GetSystemMetrics(SM_CONVERTIBLESLATEMODE). A "0" returned means it is in tablet mode. A "1" returned means it is in non-tablet mode (Figure 10).
GetSystemMetrics(SM_CONVERTIBLESLATEMODE)
...
//---------------------------------------------------------------------
// Process system setting update.
//---------------------------------------------------------------------
case WM_SETTINGCHANGE:
//-----------------------------------------------------------------
// Check slate status.
//-----------------------------------------------------------------
if(
((TCHAR *)lparam != NULL) &&
(
_tcsnccmp(
(TCHAR *)lparam,
CONVERTIBLE_SLATE_MODE_STRING,
_tcslen(CONVERTIBLE_SLATE_MODE_STRING)
) == 0
)
) {
//-------------------------------------------------------------
// Note:
// SM_CONVERTIBLESLATEMODE reflects the state of the
// laptop or slate mode. When this system metric changes,
// the system sends a broadcast message via WM_SETTING...
// CHANGE with "ConvertibleSlateMode" in the LPARAM.
// Source: MSDN.
//-------------------------------------------------------------
ret = GetSystemMetrics(SM_CONVERTIBLESLATEMODE);
if(ret == 0) {
data._2_in_1_data.device_configuration =
DEVICE_CONFIGURATION_TABLET
;
} else {
data._2_in_1_data.device_configuration =
DEVICE_CONFIGURATION_NON_TABLET
;
}
...
As good practice, Bezier_MTI includes an override button to manually set the device mode. The button is displayed as a Question Mark (Figure 11) at application startup; then changes to a Phone icon (Figure 12) or a Desktop icon (Figure 13) depending on the device mode at the time. The user is able to touch the icon to manually override the detected display mode. The application display changes according to the mode selected/detected. Note that in this sample, the mode annunciator is conveniently used as a manual override button.
A phone icon is displayed in tablet mode.
A desktop icon is displayed in non-tablet mode.
Most of the changes in this sample are graphics related. An adaptive UI should also change the nature and the number of the functions exposed to the user (this is not covered in this sample).
For the graphics, you should disassociate the graphics rendering code from the management code. Here, the drawing of the Bezier and other UI elements are separated from the geometry data computation.
In the graphics rendering code, you should avoid using static GDI objects. For example, the pens and brushes should be re-created each time a new drawing is performed, so the parameters can be adapted to the current status, or more generally to any sensor information. If no changes occur, there is no need to re-create the objects.
This way, as in the sample, the size of the UI elements adapt automatically to the device configuration readings. This not only impacts the color, but also the objects’ size. Note that the system display’s DPI (dots per inch) should be taken in account during the design of this feature. Indeed, small form factor devices have high DPI. This is not a new consideration, but it becomes more important as device display DPI is increasing.
In our case, we decided to re-use the size change mechanism that we implemented for the ALS support (Figure 14). We do this simply by setting the UI objects’ size to the minimum when the system is in non-tablet mode and to the maximum when it is in tablet mode (by convention, the unknown mode maps to the non-tablet mode).
...
ret = GetSystemMetrics(SM_CONVERTIBLESLATEMODE);
if(ret == 0) {
data._2_in_1_data.device_configuration =
DEVICE_CONFIGURATION_TABLET
;
//---------------------------------------------------------
shared_data.lux = MAX_LUX_VALUE;
shared_data.light_coefficient = NORMALIZE_LUX(shared_data.lux);
} else {
data._2_in_1_data.device_configuration =
DEVICE_CONFIGURATION_NON_TABLET
;
//---------------------------------------------------------
shared_data.lux = MIN_LUX_VALUE;
shared_data.light_coefficient = NORMALIZE_LUX(shared_data.lux);
}
...
The following code (Figure 15) shows how a set of macros makes this automatic. These macros are then used in the sample’s drawing functions.
...
#define MTI_SAMPLE_ADAPT_TO_LIGHT(v)
((v) + ((int)(shared_data.light_coefficient * (double)(v))))
#ifdef __MTI_SAMPLE_LINEAR_COLOR_SCALE__
#define MTI_SAMPLE_LIGHT_COLOR_COEFFICIENT
(1.0 - shared_data.light_coefficient)
#else // __MTI_SAMPLE_LINEAR_COLOR_SCALE__
#define MTI_SAMPLE_LIGHT_COLOR_COEFFICIENT
(log10(MAX_LUX_VALUE - shared_data.lux))
#endif // __MTI_SAMPLE_LINEAR_COLOR_SCALE__
#define MTI_SAMPLE_ADAPT_RGB_TO_LIGHT(r, g, b)
RGB(
(int)(MTI_SAMPLE_LIGHT_COLOR_COEFFICIENT * ((double)(r))),
(int)(MTI_SAMPLE_LIGHT_COLOR_COEFFICIENT * ((double)(g))),
(int)(MTI_SAMPLE_LIGHT_COLOR_COEFFICIENT * ((double)(b)))
...
Windows 8 and Windows 8.1 user interface allows developers to customize the user experience for 2 in 1 devices. The device usage mode change can be detected, and the application interface changed dynamically, resulting in a better user experience for the user.
Stevan Rogers has been with Intel for over 20 years. He specializes in systems configuration and lab management and develops marketing materials for mobile devices using Line Of Business applications.
Jamel Tayeb is the architect for the Intel® Energy Checker SDK. Jamel Tayeb is a software engineer in Intel's Software and Services Group. He has held a variety of engineering, marketing and PR roles over his 10 years at Intel. Jamel has been worked with enterprise and telecommunications hardware and software companies in optimizing and porting applications for/to Intel platforms, including Itanium and Xeon processors. Most recently, Jamel has been involved with several energy-efficiency projects at Intel. Prior to reaching Intel, Jamel was a professional journalist. Jamel earned a PhD in Computer Science from Université de Valenciennes, a Post-graduate diploma in Artificial Intelligence from Université Paris 8, and a Professional Journalist Diploma from CFPJ (Centre de formation et de perfectionnement des journalistes – Paris Ecole du Louvre).
Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and other countries. *Other names and brands may be claimed as the property of others Copyright© 2013 Intel Corporation. All rights reserved.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
C# 6: First reactions
|
http://www.codeproject.com/script/Articles/View.aspx?aid=679676
|
CC-MAIN-2014-15
|
en
|
refinedweb
|
edit Editing Undictionary: From A to Z, or Zed, or was it Zee?
Undictionary entries reside in Category:Undictionary (or under large rocks) and the "Undictionary:" prefix precedes the name of the individual entries.
When creating a new Undictionary entry, please use the namespace "Undictionary:" (with no trailing space) in front of the article title and include the {{dict}} tag in the entry itself.
By their nature, dictionary entries tend to be one or two paragraphs with rather simple formatting and no included media files.
A series of templates is used to aid in assembling the individual definitions into 26 main pages, Undictionary:A through Z.
edit Undictionary templates
There are currently three Undictionary-specific templates:
- {{dictionary}} is the table of contents header for Undictionary, included at the top of all main pages (only)
- {{dict}} is the [[Category:Undictionary]] category tag for inclusion in your individual entries
- {{def|Name of Your Word Definition...}} causes your text (from article Undictionary:Name of Your Word Definition...) to be included inline.
The main pages Undictionary:A - Undictionary: Z are constructed nearly entirely from templates; please do not add other body text (of your definition itself) there.
edit The list of definitions
The 26 main dictionary pages Undictionary:A through Undictionary:Z are built from templates which retrieve the definitions and display them inline. These pages, to be overwritten periodically with the latest list of Undictionary entries, internally look like:
- {{dictionary}}
- == Q ==
- {{def|Quarks}}
- {{def|Quirks}}
- ...
Effectively, each is a page of a dictionary on which the individual definitions are automatically inserted according to these tags.
All pretty boring and even a robot could generate these... and maybe a robot should generate these?
Any other added text on these main pages may and likely will be lost upon the next update of the generated indices. To view the full, current list of Undictionary entries, click here.
|
http://uncyclopedia.wikia.com/wiki/Help:Editing_Undictionary?diff=prev&oldid=415915
|
CC-MAIN-2014-15
|
en
|
refinedweb
|
12 August 2009 09:19 [Source: ICIS news]
SINGAPORE (ICIS news)--China’s East Hope group has delayed the commercial run of its new purified terephthalic acid (PTA) plant in Chongqing until at least end of the month, citing unspecified mechanical problems, said company sources on Wednesday.
The start-up of the yuan (CNY)2bn ($293m) project has been delayed several times, with the latest schedule set in June or July of this year.
“You can say the plant was largely completed already. We need to do some rectification works in some [areas] in order to prepare the start-up, perhaps in end-August,” said one of the sources.
The company remains committed to starting up the 600,000 tonne/year plant in southwestern China near the resource-rich ?xml:namespace>
East Hope is a Sichuan-based conglomerate with businesses in properties, speciality chemicals and metals. The PTA project is its first foray into petrochemicals.
|
http://www.icis.com/Articles/2009/08/12/9239120/Chinas-East-Hope-delays-PTA-start-up-to-end-Aug.html
|
CC-MAIN-2014-15
|
en
|
refinedweb
|
import "maze.io/x/pixel"
Package pixel contains common pixel formats.
doc.go draw.go fx.go fx_bit.go image.go util.go
Circle draws a circle at point p with radius r.
FadeOutDither applies gradual 4×4 Bayes dithering to all pixels off.
FillRectangle draws a filled rectangle between the two points.
HLine draws a horizontal line at p with the given width.
Line draws a line between two points.
Rectangle draws a rectangle outline between the two points.
VLine draws a vertical line at p with the given height.
func (b *Bitmap) SetBit(x, y int, c pixelcolor.Bit)
const ( UnknownFormat Format = iota MHMSBFormat MVLSBFormat RGB332Format RGB565Format RGB888Format RGBA4444Format RGBA5551Format )
Package pixel imports 7 packages (graph) and is imported by 2 packages. Updated 2020-07-28. Refresh now. Tools for package owners.
|
https://godoc.org/maze.io/x/pixel
|
CC-MAIN-2020-40
|
en
|
refinedweb
|
Marble::EditPlacemarkDialog
#include <EditPlacemarkDialog.h>
Detailed Description
The EditPlacemarkDialog class deals with customizing placemarks.
Definition at line 27 of file EditPlacemarkDialog.h.
Member Function Documentation
idFilter gets filter for id of placemark
- Returns
- QStringList of ids which could not be used as id.
Definition at line 258 of file EditPlacemarkDialog.cpp.
isIdFieldVisible tells if targetId field is shown.
Definition at line 273 of file EditPlacemarkDialog.cpp.
isTargetIdFieldVisible tells if targetId field is shown.
Definition at line 268 of file EditPlacemarkDialog.cpp.
relationCreated signals the annotate plugin that a new relation has been created( or modified ) within the relation editor
- Parameters
-
setIdFieldVisible tells the dialog whether id field should be shown.
Definition at line 299 of file EditPlacemarkDialog.cpp.
setIdFilter sets filter for id of placemark.
- Parameters
-
Definition at line 284 of file EditPlacemarkDialog.cpp.
setLabelColor tells the dialog what the label color is
Definition at line 252 of file EditPlacemarkDialog.cpp.
Protecting data from input fields changes.
Definition at line 304 of file EditPlacemarkDialog.cpp.
setTargetIdFieldVisible tells the dialog whether targetId field should be shown.
Definition at line 294 of file EditPlacemarkDialog.cpp.
setTargetIds sets ids which could be target of placemark.
- Parameters
-
Definition at line 289 of file EditPlacemarkDialog.cpp.
targetIds gets ids which could be target of placemark.
- Returns
- QStringList of ids which could be target of placemark.
Definition at line 263 of file EditPlacemarkDialog.cpp.
toogleDescriptionEditMode toggles edit mode for description field.
textAnnotationUpdated signals that some property of the PlacemarkTextAnnotation instance has changed.
- Parameters
-
updateDialogFields is connected to a signal from AnnotatePlugin in order to update some fields in the dialog as the user interacts directly with the text annotation item.
Definition at line 278 of file EditPlacemarkDialog.cpp.
The documentation for this class was generated from the following files:
Documentation copyright © 1996-2020 The KDE developers.
Generated on Fri Sep 18 2020 23:18:35 by doxygen 1.8.11 written by Dimitri van Heesch, © 1997-2006
KDE's Doxygen guidelines are available online.
|
https://api.kde.org/marble/html/classMarble_1_1EditPlacemarkDialog.html
|
CC-MAIN-2020-40
|
en
|
refinedweb
|
GraphQL is a dream for frontend developers and clients alike. After all, clients don’t care where data is coming from or what database format you’re using. They care about getting the data they’re asking for quickly, cleanly, and painlessly. Bonus points if it doesn’t put too heavy of a load on the server.
It’s been a goal for all APIs to be consumed in the GraphQL format,for developers who’ve grown accustomed to its simplicity and ease. That dream is now a reality thanks to a new library, GraphQL Mesh. GraphQL Mesh is a Rosetta Stone allowing all of your APIs and local databases to play nice together.
What is GraphQL?
GraphQL Mesh is a new library created by The Guild, an open-source group dedicated to empowering developers to take advantage of the many benefits of GraphQL.
The Guild are also responsible for popular GraphQL resources like GraphQL Code Editor, GraphQL Inspector, and GraphQL-CLI. They clearly know a thing or two about making GraphQL available for a wide pool of different developers, regardless of whether or not they’re previously familiar with the specification created by Facebook.
GraphQL Mesh is a dream come true for developers who’ve been wanting to try GraphQL but have been reluctant due to either lack of experience or having legacy products in older formats like REST. GraphQL Mesh is designed to act as an intermediary layer to receive data from nearly anywhere and translate it into a GraphQL format.
The goal of GraphQL Mesh is to take data from a wide array of different formats and integrate them with GraphQL so they can be modified with GraphQL queries and mutations.
So far, GraphQL Mesh has native support for:
- GraphQL
- gRPC
- JSON
- MongoDB
- OpenAPI/Swagger
- PostgreSQL
- SOAP/WSDL
- Apache Thrift
This makes it easy to modify output schema, link types across schema and merge schema types. It also gives you granular control over how you retrieve data, overcome backend limitations, as well as complications due to schema specifications and non-typed APIs.
GraphQL Mesh also acts as a proxy for your local data and lets you use common libraries with other APIs. You can use this proxy locally or you can call the service in other applications with an execute function.
Keep in mind that GraphQL Mesh is mainly intended as a background layer for your enterprise. If you want to serve the data to the public, you’ll most likely need to add an additional abstraction layer.
GraphQL starts by collecting API schemas from the services it communicates with. It then creates a runtime environment of fully-typed SDKs for those services. Then it translates various API specs into the GraphQL schema, where custom schema transformations and extensions can be performed. Finally, all of this is wrapped up into one SDK which is used to obtain data from the service you’re trying to communicate with.
This is achieved using local schema, which is created from the autogenerated directory when you install GraphQL Mesh.
This schema lets you use GraphQL’s execute function to run query and mutation functions locally in your application. This enables GraphQL to act as a central nervous system between your app and whatever you’re using to power it.
Benefits Of GraphQL Mesh
GraphQL allows clients and end users to integrate data of all kinds of different formats.
Users don’t need to have a thorough understanding of a complex API architecture to retrieve the data they need. It also makes rapid prototyping much quicker and more efficient since you don’t have to go under the hood of your API every time you want to make an insignificant change.
GraphQL is also much more efficient than other specifications like REST. REST returns all of the data stored in a database when it’s queried, which can result in overfetching and underfetching.
GraphQL only returns the exact data the user queries for. Not only does this save on resources, but it also makes an API easier to use since you spend less time looking for the data you need.
How GraphQL Mesh works
All of that data is returned to one place. While REST’s prolific use of endpoints definitely has its uses, it has its downsides as well. Having the ability to have all of that data routed to one endpoint is a major benefit and reason enough to give GraphQL Mesh a try in and of itself.
GraphQL Mesh translates APIs in nearly any given format into a GraphQL format. It’s an abstraction layer that can be overlaid nearly any source, including local files and databases.
Installing GraphQL Mesh
GraphQL Mesh comes available as several packages which you can install depending on your particular needs. We’ll show you how to set up a basic instance of GraphQL Mesh so you can get started with the library and try it out yourself.
To start, you’re going to need to install the
Yarn package handler, which makes packages available globally. For the sake of good housekeeping, create a new directory for this project in your development folder. We’ve called ours
GraphQL_mesh.
In the root directory of your project folder, create a file called
.meshrc.yaml using a text editor of your choosing.
We’re using Notepad++, an open source text editor that lets you save files in whatever file format you want. Paste the following into the file and then save:
sources: - name: Wiki handler: openapi: source:
Navigate to that directory using Terminal and input the following:
$ npm install yarn --global
To install the basic GraphQL Mesh package, type the following:
$ yarn add graphql @graphql-mesh/runtime @graphql-mesh/cli
Now you need to install a Mesh handler, depending on the needs of the specific API you’ll be using. For the sake of this example, we’ll be installing the Mesh handler for the OpenAPI spec:
$ yarn add graphql @graphql-mesh/openapi
To see a full list of the supported API specs, consult the GraphQL-mesh documentation.
Now you can run GraphQL. Type the following command:
$ yarn graphql-mesh serve
This serves an instance of GraphQL following the schema you’ve provided to, so you can test your code and make sure everything’s working as it should be.
How To Use GraphQL Mesh
Now let’s see an example of GraphQL Mesh in action to give you an idea of how you can integrate it into your development workflow. It’ll also help you visualize how GraphQL Mesh can make consolidating data from multiple API sources much easier and more intuitive than other languages.
To illustrate some of these concepts, we’re going to build a simple app that consolidates data from two different APIs and merges them together. We’re gathering data from a weather API and an API of geographic data.
For the sake of good housekeeping, let’s create a new directory for our project. We’ve named ours
locationweather. Navigate to this folder using Terminal.
Now we’ll start by re-installing our libraries and gathering the permissions we’ll need. Once you’re in your programming director, type:
npm install yarn --global yarn add graphql @graphql-mesh/runtime @graphql-mesh/cli yarn add apollo-server yarn add @graphql-mesh/openapi
This installs the libraries that will be called inside of our GraphQL function and makes them available globally.
Now open an instance of your preferred text editor for programming.
We’re using Notepad++, since it lets you save files in whatever file format you prefer.
Let’s start by making the GraphQL schema, which makes up the bulk of what your GraphQL function does. In the root directory of your project folder, create a file called
.meshrc.yaml using your text editor and save it. Then input the following:
sources: - name: Cities handler: openapi: source: operationHeaders: 'X-RapidAPI-Key': f93d3b393dmsh13fea7cb6981b2ep1dba0ajsn654ffeb48c26 - name: Weather context: apiKey: 971a693de7ff47a89127664547988be5 handler: openapi: source: transforms: - extend: | extend type PopulatedPlaceSummary { dailyForecast: [Forecast] todayForecast: Forecast } - cache: # Geo data doesn't change frequntly, so we can cache it forever - field: Query.* # Forecast data might change, so we can cache it for 1 hour only - field: PopulatedPlaceSummary.dailyForecast invalidate: ttl: 3600 - field: PopulatedPlaceSummary.todayForecast invalidate: ttl: 3600 require: - ts-node/register/transpile-only additionalResolvers: - ./src/mesh/additional-resolvers.ts
You can see this function is calling the APIs we’re working with for this project. It should give you an idea of how these principles can be applied to practically any API or data source you may want to work with.
Next, you’re going to create
package.json, which makes up much of the rest of this simple app. Create a blank file and put the following code into the body:
{ "name": "typescript-location-weather-example", "version": "0.0.20", "license": "MIT", "private": true, "scripts": { "predev": "yarn mesh:ts", "dev": "ts-node-dev src/index.ts", "prestart": "yarn mesh:ts", "start": "ts-node src/index.ts", "premesh:serve": "yarn mesh:ts", "mesh:serve": "graphql-mesh serve", "mesh:ts": "graphql-mesh typescript --output ./src/mesh/__generated__/types.ts" }, "devDependencies": { "@types/node": "13.9.0", "ts-node": "8.8.2", "ts-node-dev": "1.0.0-pre.44", "typescript": "3.8.3" }, "dependencies": { "@graphql-mesh/cli": "0.0.20", "@graphql-mesh/openapi": "0.0.20", "@graphql-mesh/runtime": "0.0.20", "@graphql-mesh/transform-cache": "0.0.20", "@graphql-mesh/transform-extend": "0.0.20", "apollo-server": "2.11.0", "graphql": "15.0.0" } }
You can see that most of the variables we’ll be using are defined in
package.json. This is another of GraphQL’s greatest strengths — its ability to be hard-typed. Things are much more settled and fixed and, thus, less likely to break using GraphQL’s JSON strings.
The last file of substance in our root directory is
tsconfig.ts. Create the file and insert these few short lines:
{ "compilerOptions": { "target": "es2015", "module": "commonjs", "moduleResolution": "node", /* Specify module resolution strategy: 'node' (Node.js) or 'classic' (TypeScript pre-1.6). */ "lib": [ "esnext" ], "sourceMap": true /* Generates corresponding '.map' file. */, }, "include": ["src"], "exclude": ["node_modules"] }
Now there’s just a tiny bit more housekeeping to do incase anyone uses this app after you. We’ll make a readme file,
README.md:
## Location-Weather Example
This example takes two API sources based on Openapi 3 and Swagger, and links between them.
It allows you to query for cities and locations, and include fields for the weather in that found place.
Finally, we’ll create a file responsible for some additional routing,called
.gitingore with no file extension.
__generated__ src/__generated__
We’re almost done! There’s just the tiniest bit of additional formatting we’ll want to incorporate. To do so, start off by creating a sub-folder called
SRC. Then make a file called
index.ts.
Insert the following code:
import { ApolloServer } from 'apollo-server'; import { getMesh, findAndParseConfig } from '@graphql-mesh/runtime'; async function main() { const meshConfig = await findAndParseConfig(); const { schema, contextBuilder } = await getMesh(meshConfig); const server = new ApolloServer({ schema, context: contextBuilder, }); server.listen().then(({ url }) => { console.log(`🚀 Server ready at ${url}`); }); } main().catch(err => console.error(err));
You can see that
index.ts imports the functions we installed earlier, like apollo-server and, of course, GraphQL Mesh, and makes them available to the rest of the functions.
Create one more additional sub-folder in the
src directory and call it
mesh. You’re going to make one final file, in that folder, called
additional-resolvers.ts:
import { Resolvers } from './__generated__/types'; export const resolvers: Resolvers = { PopulatedPlaceSummary: { dailyForecast: async (placeSummary, _, { Weather }) => { const forecast = await Weather.api.getForecastDailyLatLatLonLon({ lat: placeSummary.latitude!, lon: placeSummary.longitude!, key: Weather.config.apiKey, }); return forecast.data!; }, todayForecast: async (placeSummary, _, { Weather }) => { const forecast = await Weather.api.getForecastDailyLatLatLonLon({ lat: placeSummary.latitude!, lon: placeSummary.longitude!, key: Weather.config.apiKey, }); return forecast.data![0]!; }, }, };
That’s the last of the code! Now you can go to the command line and run:
yarn graphql-mesh serve
This will serve your app to, running on an instance of GraphQL, where you can perform your queries and mutations.
If you’d like to try out GraphQL Mesh without messing with any code, the entire project is available on codesandbox, including the code, so you can see GraphQL Mesh in action for yourself and get a sense of how you might integrate this clever translator into your existing workflow.
Conclusion
GraphQL Mesh is a dream come true for frontend developers and end users alike.
From the client’s perspective, they don’t have to know as much about the API structure to do what they’re trying to do. Instead, they just need to know what they’re querying for and GraphQL Mesh delivers.
From the programmer’s perspective, GraphQL Mesh makes code exponentially more robust and flexible. You won’t have to worry about reconfiguring your API every time you change your data. No more routing endless endpoints or constantly coding complex databases..
|
http://blog.logrocket.com/a-guide-to-the-graphql-mesh-library/
|
CC-MAIN-2020-40
|
en
|
refinedweb
|
This tutorial provides a basic Java programmer's introduction to working with protocol buffers. By walking through creating a simple example application, it shows you how to
- Define message formats in a
.protofile.
- Use the protocol buffer compiler.
- Use the Java protocol buffer API to write and read messages.
This isn't a comprehensive guide to using protocol buffers in Java. For more detailed reference information, see the Protocol Buffer Language Guide, the Java API Reference, the Java Java Serialization. This is the default approach since it's built into the language, but it has a host of well-known problems (see Effective Java, by Josh Bloch pp. 213), and also doesn't work very well if you need to share data with applications written in C++ or Python.
-.
After the package declaration, you can see two options that are Java-specific:
java_package and
java_outer_classname.
java_package specifies in what Java package name your generated classes should live. If you don't specify this explicitly, it simply matches the package name given by the
package declaration, but these names usually aren't appropriate Java package names (since they usually don't start with a domain name). The
java_outer_classname option defines the class name which should contain all of the classes in this file. If you don't give a
java_outer_classname explicitly, it will be generated by converting the file name to upper camel case. For example, "my_proto.proto" would, by default, use "MyProto" as the outer class:. --java_out=$DST_DIR $SRC_DIR/addressbook.protoBecause you want Java classes, you use the
--java_outoption – similar options are provided for other supported languages.
This generates
com/example/tutorial/AddressBookProtos.java in your specified destination directory.
The Protocol Buffer API
Let's look at some of the generated code and see what classes and methods the compiler has created for you. If you look in
AddressBookProtos.java, you can see that it defines a class called
AddressBookProtos, nested within which is a class for each message you specified in
addressbook.proto. Each class has its own
Builder class that you use to create instances of that class. You can find out more about builders in the Builders vs. Messages section below.
Both messages and builders have auto-generated accessor methods for each field of the message; messages have only getters while builders have both getters and setters. Here are some of the accessors for the
Person class (implementations omitted for brevity):
// required string name = 1; public boolean hasName(); public String getName(); // required int32 id = 2; public boolean hasId(); public int getId(); // optional string email = 3; public boolean hasEmail(); public String getEmail(); // repeated .tutorial.Person.PhoneNumber phones = 4; public List<PhoneNumber> getPhonesList(); public int getPhonesCount(); public PhoneNumber getPhones(int index);
Meanwhile,
Person.Builder has the same getters plus setters:
// required string name = 1; public boolean hasName(); public java.lang.String getName(); public Builder setName(String value); public Builder clearName(); // required int32 id = 2; public boolean hasId(); public int getId(); public Builder setId(int value); public Builder clearId(); // optional string email = 3; public boolean hasEmail(); public String getEmail(); public Builder setEmail(String value); public Builder clearEmail(); // repeated .tutorial.Person.PhoneNumber phones = 4; public List<PhoneNumber> getPhonesList(); public int getPhonesCount(); public PhoneNumber getPhones(int index); public Builder setPhones(int index, PhoneNumber value); public Builder addPhones(PhoneNumber value); public Builder addAllPhones(Iterable<PhoneNumber> value); public Builder clearPhones();
As you can see, there are simple JavaBeans-style getters and setters for each field. There are also
has getters for each singular field which return true if that field has been set. Finally, each field has a
clear method that un-sets the field back to its empty state.
Repeated fields have some extra methods – a
Count method (which is just shorthand for the list's size), getters and setters which get or set a specific element of the list by index, an
add method which appends a new element to the list, and an
addAll method which adds an entire container full of elements to the list.
Notice how these accessor methods use camel-case naming, even though the
.proto file uses lowercase-with-underscores. This transformation is done automatically by the protocol buffer compiler so that the generated classes match standard Java style conventions. You should always use lowercase-with-underscores for field names in your
.proto files; this ensures good naming practice in all the generated languages. See the style guide for more on good
.proto style.
For more information on exactly what members the protocol compiler generates for any particular field definition, see the Java generated code reference.
Enums and Nested Classes
The generated code includes a
PhoneType Java 5 enum, nested within
Person:
public static enum PhoneType { MOBILE(0, 0), HOME(1, 1), WORK(2, 2), ; ... }
The nested type
Person.PhoneNumber is generated, as you'd expect, as a nested class within
Person.
Builders vs. Messages
The message classes generated by the protocol buffer compiler are all immutable. Once a message object is constructed, it cannot be modified, just like a Java
String. To construct a message, you must first construct a builder, set any fields you want to set to your chosen values, then call the builder's
build() method.
You may have noticed that each method of the builder which modifies the message returns another builder. The returned object is actually the same builder on which you called the method. It is returned for convenience so that you can string several setters together on a single line of code.
Here's an example of how you would create an instance of
Person:
Person john = Person.newBuilder() .setId(1234) .setName("John Doe") .setEmail("[email protected]") .addPhones( Person.PhoneNumber.newBuilder() .setNumber("555-4321") .setType(Person.PhoneType.HOME)) .build();
Standard Message Methods
Each message and builder class also contains a number of other methods that let you check or manipulate the entire message, including:
isInitialized(): checks if all the required fields have been set.
toString(): returns a human-readable representation of the message, particularly useful for debugging.
mergeFrom(Message other): (builder only) merges the contents of
otherinto this message, overwriting singular scalar fields, merging composite fields, and concatenating repeated fields.
clear(): (builder only) clears all the fields back to the empty state.
These methods implement the
Message and
Message.Builder interfaces shared by all Java messages and builders. For more information, see the complete API documentation for.
Protocol Buffers and O-O Design Protocol buffer classes are basically dumb data holders (like structs in C); they don't make good first class citizens in an object model. If you want to add richer behaviour behaviourPerson( PromptForAddress(new BufferedReader(new InputStreamReader(System.in)), System.out)); // Write the new address book back to disk. FileOutputStream output = new FileOutputStream(args[0]); addressBook.build().writeTo(output); output.close(); } }
Reading A Message
Of course, an address book wouldn't be much use if you couldn't get any information out of it! This example reads the file created by the above example and prints all the information in it.
import com.example.tutorial.AddressBookProtos.AddressBook; import com.example.tutorial.AddressBookProtos.Person; import java.io.FileInputStream; import java.io.IOException; import java.io.PrintStream; class ListPeople { // Iterates though all people in the AddressBook and prints info about them. static void Print(AddressBook addressBook) { for (Person person: addressBook.getPeopleList()) { System.out.println("Person ID: " + person.getId()); System.out.println(" Name: " + person.getName()); if (person.hasEmail()) { System.out.println(" E-mail address: " + person.getEmail()); } for (Person.PhoneNumber phoneNumber : person.getPhonesList()) { switch (phoneNumber.getType()) { case MOBILE: System.out.print(" Mobile phone #: "); break; case HOME: System.out.print(" Home phone #: "); break; case WORK: System.out.print(" Work phone #: "); break; } System.out.println(phoneNumber.getNumber()); } } } // Main function: Reads the entire address book from a file and prints all // the information inside. public static void main(String[] args) throws Exception { if (args.length != 1) { System.err.println("Usage: ListPeople ADDRESS_BOOK_FILE"); System.exit(-1); } // Read the existing address book. AddressBook addressBook = AddressBook.parseFrom(new FileInputStream(args[0])); Print Java and
Message.Builder interfaces.
|
https://developers.google.cn/protocol-buffers/docs/javatutorial?hl=es-419
|
CC-MAIN-2020-40
|
en
|
refinedweb
|
JGit/New and Noteworthy/4.9
< JGit | New and Noteworthy
Contents
JGit: new ref storage format. Some repositories contain a lot of references (e.g. android at 866k, rails at 31k). The reftable format provides:
- Near constant time lookup for any single reference, even when the repository is cold and not in process or kernel cache.
- Near constant time verification a SHA-1 is referred to by at least one reference (for allow-tip-sha1-in-want).
- Efficient lookup of an entire namespace, such as `refs/tags/`.
- Support atomic push `O(size_of_update)` operations.
- Combine reflog storage with ref storage.
- bug 470318 - Fetch submodule repo before resolving commits
- bug 374703 - Handle SSL handshake failures in TransportHttp, use CredentialsProvider to inform the user
- Support http.<url>.* configs
- Add BlobObjectChecker
- bug 520978 - Improve getting typed values from a Config to enable handling invalid configuration options
- bug 496170 - Support most %-token substitutions in OpenSshConfig
- bug 490939 - Let Jsch know about ~/.ssh/config. Ensure the Jsch instance used knows about ~/.ssh/config. This enables Jsch to honor more user configurations, in particular also the UserKnownHostsFile configuration, or additional identities given via multiple IdentityFile entries.
- bug 465167 - Add support to follow HTTP redirects
- Implement config setting http.followRedirects
- Number of redirects followed can be limited by http.maxRedirects (default 5)
- bug 500106 - Send a detailed event on working tree modifications to provide the foundations for better file change tracking
- Add dfs fsck implementation
- bug 517128 - Support -merge attribute in binary macro. The merger is now able to react to the use of the merge attribute. The value unset and the custom value 'binary' are handled (-merge and merge=binary)
- bug 518377 - Support --match functionality in DescribeCommand
- bug 517847 - Allow to programmatically set FastForwardMode for PullCommand
- bug 474174 - Add support for config option "pull.ff"
-.
- bug 518377 - Add --match option for `jgit describe` to CLI
2 enhancement requests and 23 bugs were closed
- bug 521296 - Fix missing RefsChangedEvent when packed refs are used
- bug 376369 - Fix Daemon.stop() to actually stop the listener thread
-.
- bug 508801 - Don't assume name = path in .gitmodules
- bug 515325 - FetchCommand: pass on CredentialsProvider to submodule fetches
- bug 520920 - Exclude file matching: fix backtracking on match failures after "**"
- bug 508568 - Fix path pattern matching to work also for gitattributes
- bug 429625 - Ignore invalid TagOpt values
- bug 519883 - Fix default directory used to clone when setDirectory wasn't called
- bug 513043 - Do authentication re-tries on HTTP POST
- Fix exception handling for opening bitmap index files
- bug 393170 - Do not apply pushInsteadOf to existing pushUris
- bug 520702 - Record submodule paths with untracked changes as FileMode.GITLINK
- bug 520910 - Ensure EOL stream type is DIRECT when -text attribute is present. Otherwise fancy combinations of attributes (binary or -text in combination with crlf or eol) may result in the corruption of binary data.
- bug 520677 - Use relative paths for attribute rule matching
- Treat RawText of binary data as file with one single line. This avoids executing mergeAlgorithm.merge on binary data, which is unlikely to be useful.
- bug 510685 - Fix committing empty commits
- bug 519887 - Fix JGit set core.fileMode to false by default instead of true for non Windows OS.
- Fix matching ignores and attributes pattern of form a/b/**.
- Fix deleting symrefs
- bug 518377 - Fix bug in multiple tag handling on DescribeCommand
- bug 393170 -).
Contributors
The following 21 developers worked on this release of JGit :
Changcheng Xiao, Christian Halstrick, Dave Borowitz, David Pursehouse, David Turner, Dmitry Pavlenko, Han-Wen Nienhuys, Joan Goyeau, Jonathan Nieder, Masaya Suzuki, Mathieu Cartaud, Matthias Sohn, Mattias Neuling, Michael FIG, Ned Twigg, Oliver Lockwood, Robin Stocker, Shawn Pearce, Terry Parker, Thomas Wolf, Zhen Chen
|
https://wiki.eclipse.org/index.php?title=JGit/New_and_Noteworthy/4.9&printable=yes
|
CC-MAIN-2020-40
|
en
|
refinedweb
|
This package consist of all DesignSystem components from AdminBro so you can use all of them inside and outside the AdminBro environment.
It was created with the help of 2 amazing packages:
- styled-components which is a peerDependency
- styled-system
make sure to check them out in order to use full potential of this design system
Usage within the AdminBro
If you are using this module inside AdminBro there is no need to install anything, just use its components like this:
import { Box, Button } from '@admin-bro/design-system' // and here you can use them
Usage outside the AdminBro
Nothing stays in a way of using @admin-bro/design-system in your project which doesn't require AdminBro. Simply visit the module page and follow installation instructions
Changing theme
Design System provides you with the default theme. It contains all the parameters like paddings, colors, font sizes etc. For the list of all available parameters take a look at the Theme spec.
But nothing stands in a way for you to change the default theme. In order to override the Theme or its selected parameters use AdminBroOptions.branding theme property.
Changing particular components
Sometimes you might want to change the look and feel of a
particular component - not the entire theme. You can achieve that with
styled method:
import { Button } from '@admin-bro/design-system' import styled from 'styled-components' const MyRoundedButton = styled(Button)` border-radius: 10px; `
and then you can use it like a normal button component:
<MyRoundedButton variant="primary">Rounded I am</MyRoundedButton>
Components
This is the list of all our components
|
https://adminbro.com/section-design-system.html
|
CC-MAIN-2020-40
|
en
|
refinedweb
|
%matplotlib inline
import numpy as np import pandas as pd import theano import theano.tensor as T import matplotlib.pyplot as plt import keras from sklearn.preprocessing import StandardScaler from sklearn.preprocessing import LabelEncoder from keras.utils import np_utils from sklearn.cross_validation import train_test_split from keras.callbacks import EarlyStopping, ModelCheckpoint from keras.models import Sequential from keras.layers import Dense, Activation
Using Theano backend.
For this section we will use the Kaggle otto challenge. If you want to follow, Get the data from Kaggle:
The Otto Group is one of the world’s biggest e-commerce companies, A consistent analysis of the performance of products is crucial. However, due to diverse global infrastructure, many identical products get classified differently. For this competition, we have provided a dataset with 93 features for more than 200,000 products. The objective is to build a predictive model which is able to distinguish between our main product categories. Each row corresponds to a single product. There are a total of 93 numerical features, which represent counts of different events. All features have been obfuscated and will not be defined any further.
def load_data(path, train=True): """Load data from a CSV File Parameters ---------- path: str The path to the CSV file train: bool (default True) Decide whether or not data are *training data*. If True, some random shuffling is applied. Return ------ X: numpy.ndarray The data as a multi dimensional array of floats ids: numpy.ndarray A vector of ids for each sample """ df = pd.read_csv(path) X = df.values.copy() if train: np.random.shuffle(X) # X, labels = X[:, 1:-1].astype(np.float32), X[:, -1] return X, labels else: X, ids = X[:, 1:].astype(np.float32), X[:, 0].astype(str) return X, ids
def preprocess_data(X, scaler=None): """Preprocess input data by standardise features by removing the mean and scaling to unit variance""" if not scaler: scaler = StandardScaler() scaler.fit(X) X = scaler.transform(X) return X, scaler def preprocess_labels(labels, encoder=None, categorical=True): """Encode labels with values among 0 and `n-classes-1`""" if not encoder: encoder = LabelEncoder() encoder.fit(labels) y = encoder.transform(labels).astype(np.int32) if categorical: y = np_utils.to_categorical(y) return y, encoder
print("Loading data...") X, labels = load_data('train.csv', train=True) X, scaler = preprocess_data(X) Y, encoder = preprocess_labels(labels) X_test, ids = load_data('test.csv', train=False) X_test, ids = X_test[:1000], ids[:1000] #Plotting the data print(X_test[:1]) X_test, _ = preprocess_data(X_test, scaler) nb_classes = Y.shape[1] print(nb_classes, 'classes') dims = X.shape[1] print(dims, 'dims')
Loading data... [[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 3. 0. 0. 0. 3. 2. 1. 0. 0. 0. 0. 0. 0. 0. 5. 3. 1. 1. 0. 0. 0. 0. 0. 1. 0. 0. 1. 0. 1. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 3. 0. 0. 0. 0. 1. 1. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 11. 1. 20. 0. 0. 0. 0. 0.]] (9L, 'classes') (93L, 'dims')
Now lets create and train a logistic regression model.. ref:).
dims = X.shape[1] print(dims, 'dims') print("Building model...") nb_classes = Y.shape[1] print(nb_classes, 'classes') model = Sequential() model.add(Dense(nb_classes, input_shape=(dims,))) model.add(Activation('softmax')) model.compile(optimizer='sgd', loss='categorical_crossentropy') model.fit(X, Y)
(93L, 'dims') Building model... (9L, 'classes') Epoch 1/10 61878/61878 [==============================] - 1s - loss: 1.0574 Epoch 2/10 61878/61878 [==============================] - 1s - loss: 0.7730 Epoch 3/10 61878/61878 [==============================] - 1s - loss: 0.7297 Epoch 4/10 61878/61878 [==============================] - 1s - loss: 0.7080 Epoch 5/10 61878/61878 [==============================] - 1s - loss: 0.6948 Epoch 6/10 61878/61878 [==============================] - 1s - loss: 0.6854 Epoch 7/10 61878/61878 [==============================] - 1s - loss: 0.6787 Epoch 8/10 61878/61878 [==============================] - 1s - loss: 0.6734 Epoch 9/10 61878/61878 [==============================] - 1s - loss: 0.6691 Epoch 10/10 61878/61878 [==============================] - 1s - loss: 0.6657
<keras.callbacks.History at 0x23d330f0>
Simplicity is pretty impressive right? :)
Now lets understand:
The core data structure of Keras is a model, a way to organize layers. The main type of model is the Sequential model, a linear stack of layers.
What we did here is stacking a Fully Connected (Dense) layer of trainable weights from the input to the output and an Activation layer on top of the weights layer.
from keras.layers.core import Dense Dense(output_dim, init='glorot_uniform', activation='linear', weights=None, W_regularizer=None, b_regularizer=None, activity_regularizer=None, W_constraint=None, b_constraint=None, bias=True, input_dim=None)
from keras.layers.core import Activation Activation(activation)
If you need to, you can further configure your optimizer. A core principle of Keras is to make things reasonably simple, while allowing the user to be fully in control when they need to (the ultimate control being the easy extensibility of the source code). Here we used SGD (stochastic gradient descent) as an optimization algorithm for our trainable weights.
What we did here is nice, however in the real world it is not useable because of overfitting. Lets try and solve it with cross validation..
To avoid overfitting, we will first split out data to training set and test set and test out model on the test set. Next: we will use two of keras's callbacks EarlyStopping and ModelCheckpoint
X, X_test, Y, Y_test = train_test_split(X, Y, test_size=0.15, random_state=42) fBestModel = 'best_model.h5' early_stop = EarlyStopping(monitor='val_loss', patience=4, verbose=1) best_model = ModelCheckpoint(fBestModel, verbose=0, save_best_only=True) model.fit(X, Y, validation_data = (X_test, Y_test), nb_epoch=20, batch_size=128, verbose=True, validation_split=0.15, callbacks=[best_model, early_stop])
Train on 19835 samples, validate on 3501 samples Epoch 1/20 19835/19835 [==============================] - 0s - loss: 0.6391 - val_loss: 0.6680 Epoch 2/20 19835/19835 [==============================] - 0s - loss: 0.6386 - val_loss: 0.6689 Epoch 3/20 19835/19835 [==============================] - 0s - loss: 0.6384 - val_loss: 0.6695 Epoch 4/20 19835/19835 [==============================] - 0s - loss: 0.6381 - val_loss: 0.6702 Epoch 5/20 19835/19835 [==============================] - 0s - loss: 0.6378 - val_loss: 0.6709 Epoch 6/20 19328/19835 [============================>.] - ETA: 0s - loss: 0.6380Epoch 00005: early stopping 19835/19835 [==============================] - 0s - loss: 0.6375 - val_loss: 0.6716
<keras.callbacks.History at 0x1d7245f8>
So, how hard can it be to build a Multi-Layer percepton with keras? It is baiscly the same, just add more layers!
model = Sequential() model.add(Dense(100, input_shape=(dims,))) model.add(Dense(nb_classes)) model.add(Activation('softmax')) model.compile(optimizer='sgd', loss='categorical_crossentropy') model.fit(X, Y)
Your Turn!
Take couple of minutes and Try and optimize the number of layers and the number of parameters in the layers to get the best results.
model = Sequential() model.add(Dense(100, input_shape=(dims,))) # ... # ... # Play with it! add as much layers as you want! try and get better results. model.add(Dense(nb_classes)) model.add(Activation('softmax')) model.compile(optimizer='sgd', loss='categorical_crossentropy') model.fit(X, Y)
Building a question answering system, an image classification model, a Neural Turing Machine, a word2vec embedder or any other model is just as fast. The ideas behind deep learning are simple, so why should their implementation be painful?
Much has been studied about the depth of neural nets. Is has been proven mathematically[1] and empirically that convolutional neural network benifit from depth!
[1] - On the Expressive Power of Deep Learning: A Tensor Analysis - Cohen, et al 2015
One much quoted theorem about neural network states that:
Universal approximation theorem states[1] that a feed-forward network with a single hidden layer containing a finite number of neurons (i.e., a multilayer perceptron), can approximate continuous functions on compact subsets of $\mathbb{R}^n$, under mild assumptions on the activation function. The theorem thus states that simple neural networks can represent a wide variety of interesting functions when given appropriate parameters; however, it does not touch upon the algorithmic learnability of those parameters.
[1] - Approximation Capabilities of Multilayer Feedforward Networks - Kurt Hornik 1991
|
https://nbviewer.jupyter.org/github/donnemartin/data-science-ipython-notebooks/blob/master/deep-learning/keras-tutorial/1.3%20Introduction%20-%20Keras.ipynb
|
CC-MAIN-2020-40
|
en
|
refinedweb
|
From: Dave Harris (brangdon_at_[hidden])
Date: 2003-12-30 10:26:12
In-Reply-To: <bsq7jd$ruf$1_at_[hidden]>
eric_at_[hidden] (Eric Niebler) wrote (abridged):
> 2003\Vc7\atlmfc\include\afxtempl.h(398) : error C2668: 'std::max' :
> ambiguous call to overloaded function
Does adding:
#include <algorithm>
before boost/utility help? I don't currently have access to my system to
be sure, but I came across a similar issue and I think I decided boost had
defined just two overloads of max without defining the general template.
Adding the generic max template seemed to satisfy MFC. (I don't recall if
I needed a using-declaration.)
-- Dave Harris, Nottingham, UK
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
|
https://lists.boost.org/Archives/boost/2003/12/58230.php
|
CC-MAIN-2020-40
|
en
|
refinedweb
|
Laravel Eloquent Filter is a simple package that helps us filter Eloquent data using query filters.
This simple package helps you filter Eloquent data using query filters.
Run the following command:
composer require nahidulhasan/eloquent-filter
Use the trait
NahidulHasan\EloquentFilter\Filterable in your eloquent model.
Create a new class by extending the class
NahidulHasan\EloquentFilter\QueryFilters and define your custom filters as methods with one argument. Where function names are the filter argument name and the arguments are the value.
Let's assume you want to allow to filter articles data. Please see the following code.
<?php namespace App\Models;
use Illuminate\Database\Eloquent\Model;
use NahidulHasan\EloquentFilter\Filterable;
class Article extends Model
{ use Filterable;
/**
Create ArticleFilter class extending QueryFilters.
<?php
namespace App\Filters;
use Illuminate\Database\Eloquent\Builder;
use NahidulHasan\EloquentFilter\QueryFilters;
class ArticleFilters extends QueryFilters
{
/**
With this class we can use the http query string :
title=article_name or any combination of these filters. It is up to you to define if you will use AND wheres or OR.
Now in the controller you can apply these filters like as described in below :
<?php
namespace App\Http\Controllers;
use App\Filters\ArticleFilters;
use App\Models\Article;
use Illuminate\Http\Request;
/**
If you go to this link you will get all code:
Thanks to :
Thanks for reading ❤
If you liked this post, share it with all of your programming buddies!
Follow me on Facebook | Twitter
☞ What’s New in Laravel 6.0
☞ Laravel 6 CRUD Application Tutorial
☞ Laravel 6 Image Upload Tutorial
☞ Laravel 6 Authentication Tutorial...
|
https://morioh.com/p/4307feb43790
|
CC-MAIN-2020-40
|
en
|
refinedweb
|
It is important to learn about the foundations in each area. You need to have basic information to be a professional. Good usage of tools is almost as important as the foundation. Without good tools, your foundation won't be used well.
This chapter is about tools that will help to build better CSS code. It describes features of preprocessors and finally the foundation knowledge about SASS. In this chapter, you can get basic knowledge about automatization of repeatable processes in frontend development with
GULP.js. Finally, you can find an example of file structure, which will partialize your project into small, easy to edit, and maintainable files.
In this chapter, we will:
Create a CSS project with a proper structure.
Building CSS code is pretty simple. If you want to start, you just need a simple text editor and start writing your code. If you want to speed up the process, you will need to choose the right text editor or integrated development environment (IDE). Currently the most popular editors/IDEs for frontend developers are as follows:
Sublime Text
Atom
WebStorm/PHPStorm
Eclipse/Aptana
Brackets
Your choice will be based on price and quality. You should use the editor that you feel most comfortable with.
When you are creating a code, you have parts of codes that you repeat in all projects/files. You will need to create snippets that will help you to speed up the process of writing code. As a frontend developer, I recommend you to get a basic knowledge about
Emmet (previously Zen Coding). This is a collection of HTML/CSS snippets, which will help you build code faster. How to use it? It is basically included in modern frontend editors (Sublime Text, Atom, Brackets, WebStorm, and so on). If you want to check how Emmet works in CSS you need to start a declaration of some class for example
.className, open the brackets (
{}) and write for example:
pl
Then press the Tab button, which will trigger the Emmet snippet. As a result, you will get the following:
padding-left
Following are examples of the most used properties and values:
For a better understanding of Emmet and to get a full list of features, it is recommended to check the official website of the project at:.
Do you remember when you learned the most impressive keyboard shortcuts Ctrl + C ,Ctrl + V? It helped you to save about 2 seconds each time you wanted to make an operation of copying and pasting some text or any other element. But what about automizing some processes in building code? Yeah, it's going to be helpful and you can do it with keyboard shortcuts.
Shortcuts that you should know in your IDE are as follows:
Duplicating line
Deleting line
Moving line
Formatting code
To test your code, you will need all the modern web browsers. In your list, you should have the following browsers:
Google Chrome (newest version)
Mozilla Firefox (newest version)
Mozilla Firefox developers edition (newest version)
Opera (newest version)
Safari (newest version)
Internet Explorer
Internet Explorer (IE) is the biggest issue in frontend developers' lives because you will need a bunch of IEs on your machine, for example, 9, 10, and 11. The list is getting smaller because back in the days the list was longer. IE6, 7, 8, 9, and so on. Now IE6, 7, and 8 are mostly not supported by the biggest web projects like YouTube and Facebook. But it sometimes occurs in big companies in which the changing of operating systems is a pretty complicated process.
To easily test your code on a bunch of browsers, it is good to use online tools dedicated for this test:
But an easy and free way to do it is to create a virtual machine on your computer and use the system and browser which you need. To collect the required versions of IE, you can refer to. With
modern.ie, you can select the IE version you need and your version of virtual machine platform (VirtualBox, Parallels, Vagrant, VMware).
Dealing with HTML and CSS code is almost impossible nowadays without inspector. In this tool, you can see the markup and CSS. Additionally, you can see the box model. This is well known too in browsers for web developers. A few years ago, everybody was using Firebug dedicated for Firefox. Now each modern browser has its own built-in inspector, which helps you to debug a code.
The easiest way to invoke inspector is to right-click on an element and choose Inspect. In Chrome, you can do it with a key shortcut. In Windows, you have to press F12. In MAC OSX, you can use cmd + alt + I to invoke inspector.
A preprocessor is a program that will build CSS code from other syntax similar or almost identical to CSS. The main advantages of preprocessors are as follows:
Code nesting
Ability of using variables
Ability of creating mixins
Ability of using mathematical/logical operations
Ability of using loops and conditions
Joining of multiple files
Preprocessors give you the advantage of building code with nesting of declarations. In simple CSS, you have to write the following:
.class { property: value; } .class .insideClass { property: value; }
In the preprocessor, you just need to write the following:
.class { property: value; .insideClass { property: value; } }
Or in SASS with the following indentation:
.class property: value .insideClass property: value
And it will simply compile to code:
.class { property: value; } .class .insideClass { property: value; }
The proper usage of nesting will give you the best results. You need to know that good CSS code.
In good CSS code, there is no possibility to use variables in all browsers. Sometimes you are using same value in the few places, but when you have change requests from client/project manager/account manager, you just immediately need to change some colors/margins, and so on. In CSS, usage of variables is not supported in old versions of Internet Explorer. Usage of variables is possible with CSS preprocessors.
In classic programming language, you can use functions to execute some math operations or do something else like displaying text. In CSS, you haven't got this feature, but in preprocessors you can create mixins. For example, you need prefixes for border-radius (old IE, Opera versions):
-webkit-border-radius: 50%; -moz-border-radius: 50%; border-radius: 50%;
You can create a mixin (in SASS):
@mixin borderRadius($radius) { -webkit-border-radius: $radius; -moz-border-radius: $radius; border-radius: $radius; }
@include borderRadius(20px)
In preprocessors, you can use math operations like the following:
Addition
Subtraction
Multiplying
Dividing
As an example, we can create simple grid system. You will need, for example, 10 columns with a resolution of 1,000 pixels:
$wrapperWidth: 1000px; $columnsNumber: 10; $innerPadding: 10px; $widthOfColumn = $wrapperWidth / $columnsNumber; .wrapper { width: $wrapperWidth; } .column { width: $widthOfColumn; padding: 0 10px; }
Without a logical operator's comparison of operations and loops, you cannot create a good program in classic programming language. The same applies to preprocessors. You need them to automatize the creation of classes/mixins, and so on. The following is the list of possible operators and loops.
The list of comparison operators is as follows:
<: less than
>: greater than
==: equal to
!=: not equal to
<=: less or equal than
>=: greater or equal than
The list of logical operators is as follows:
and
or
not
The list of loops is as follows:
if
for
each
while
In classic CSS, you can import files into one CSS document. But in a browser, it still makes additional requests to the server. So, let's say when you have a file with the following content:
@import "typography.css" @import "blocks.css" @import "main.css" @import "single.css"
It will generate four additional requests to CSS files. With a preprocessor, each
@import makes a merging for you, and in this place you will have the content of the mentioned file. So, finally, you have four files in one.
Less is a preprocessor mainly used in a Bootstrap framework. It has all the features of a preprocessor (mixins, math, nesting, and variables).
One of the good features is the quick invoking of declared mixins. For example, you have created a class:
.text-settings { font-size: 12px; font-family: Arial; text-align: center; }
Then you can add declared properties with its values in other elements declared in your less file (it works like a mixin):
p { .text-settings; color: red; }
You will finally get the following:
p { font-size: 12px; font-family: Arial; text-align: center; color: red; }
Stylus has two versions of code (like SASS): one with braces/semicolons and the other without braces/semicolons. Additionally (over SASS), you can omit colons. If it continues to be developed and still retains its present features, it's going to be the biggest competitor for SASS.
SASS stands for Syntactically Awesome Stylesheets. It first appeared in 2006 and was mainly connected to Ruby on Rails (RoR) projects. Agile methodology used in RoR had an influence on frontend development. This is currently the best known CSS preprocessor used in the Foundation framework with the combination of Compass. A new version of the Twitter Bootstrap (fourth version) framework is going to be based on SASS too.
In SASS, you can write code in a CSS-like version called SCSS. This version of code looks pretty similar to CSS syntax:
a { color: #000; &:hover { color: #f00; } }
The second version of code is SASS. It uses indentations and is the same as the preceding code, but written in SASS:
a color: #000; &:hover color: #f00;
You can see bigger differences in mixins. To invoke a mixin in SCSS, write the following:
@include nameOfMixin()
To invoke a mixin in SASS, write the following:
+nameOfMixin()
As you can see, SASS is a shorter version than SCSS. Because of the shortcuts and the automatization processes it is highly recommend to use SASS over SCSSâwrite Lessâget more.
Personally I'm using SASS. Why? The first reason is its structure. It looks very similar to Jade (an HTML preprocessor). Both of them are based on indentation and it is easy stylize Jade code. The second reason is the shorter versions of functions (especially mixins). And the third reason is its readability. Sometimes, when your code is bigger, the nesting in SCSS looks like a big mess. If you want, for example, to change a nested class to be in any other element, you have to change your
{}. In SASS, you are just dealing with indentation.
I've been working a lot with Less and SASS. Why did I finally chose SASS? Because of the following reasons:
It's a mature preprocessor
It has very good math operations
It has extensions (Compass, Bourbon)
Usage of Compass is recommended because:
It has a collection of modern mixins
It creates sprites
Most preprocessors have the same options and the reason you will choose one is your own preferences. In this book, I will be using SASS and Compass. In the following table, you can find a short comparison:
Using the SASS preprocessor is really simple. You can use it in two ways: SCSS and SASS itself. Using the SASS preprocessor is really simple. You can use it in two ways: SCSS and SASS. The SCSS syntax looks like extended CSS. You can nest your definitions using new braces. SASS syntax is based on indent (similar for example to Python language).
Using variables is the essential feature of SASS, which is mostly impossible in CSS that is used on most modern browsers. Variables can be used in every element that you want to parametrize, such as colors, margins, paddings, and fonts.
To define variables in SASS, you just need to do it with the
$ sign and add the name of your variable after it.
In SCSS:
$color_blue: blue;
Usage:
.className { color: $color_blue; }
As mentioned in the previous section, variables can be used to parametrize the code. The second best known feature is to add some predefined block of code that you can invoke with some shorter version.
In SCSS, you can predefine it this way:
@mixin animateAll($time) { -webkit-transition: all $time ease-in-out; -moz-transition: all $time ease-in-out ; -o-transition: all $time ease-in-out; transition: all $time ease-in-out; }
And then invoke with:
@include animateAll(5s)
In the SASS version:
=animateAll($time) -webkit-transition: all $time ease-in-out -moz-transition: all $time ease-in-out -o-transition: all $time ease-in-out transition: all $time ease-in-out
+animateAll(5s)
Example:
SASS:
.animatedElement +animateAll(5s)
Compiled CSS:
.animatedElement { -webkit-transition: all 5s ease-in-out; -moz-transition: all 5s ease-in-out; -o-transition: all 5s ease-in-out; transition: all 5s ease-in-out; }
What does
@extend make in SASS code? For example, you have a part of code that is repeating in all fonts:
.font-small { font-family: Arial; font-size: 12px; font-weight: normal; }
And you don't want to repeat this part of code in the next selector. You will write in SASS:
.font-small-red { @extend .font-small; color: red; }
The code it will generate will look like the following:
.font-small, .font-small-red { font-family: Arial; font-size: 12px; font-weight: normal; } .font-small-red { color: red; }
This SASS feature is great to build optimized code. Remember to use it in your project over mixins, which will generate more code.
In CSS, you could import CSS files into one root file with
@import. For example:
@import "typography.css" @import "grid.css"
In SASS, you can import SASS/SCSS files into one with an automatic merge option. In case you have, for example, two files that you want to include in one SASS file, you need to write the following code:
@import "typography" @import "grid"
As you can see in the preceding code, you don't need to add an extension of the file into
import as it automatically loads the SASS or SCSS file. The only thing you need to remember is to have only one file in this example named,
typography.
Let's check how it will behave in real code. Imagine that we have two files,
_typography.sass and
_grid.sass.
File
_grid.sass:
.grid-1of2 float: left width: 50% .grid-1of4 float: left width: 25% .grid-1of5 float: left width: 20%
File
_typography.sass:
Now let's create a
style.sass file:
@import _typography @import _grid
After compilation of
style.sass, you will see a
style.css file:; }â© .grid-1of2 { float: left; width: 50%; } .grid-1of4 { float: left; width: 25%; } .grid-1of5 { float: left; width: 2%; }
As you can see, two files are merged into one CSS, so, additionally, we made a small optimization of code because we reduced the number of requests to the server. In case of three files, we have three requests (
style.css, then
typography.css, and
grid.css). Now there will be only one request.
Sometimes, in nesting, you will need to use the name of the selector that you are currently describing. As a best description of the problem, you need to first describe a link:
a { color: #000; }
and then:
a:hover { color: #f00; }
In SCSS, you can use
& to do that:
a { color: #000; &:hover { color: #f00; } }
In SASS:
a color: #000 &:hover color: #f00
You can resolve with this element other problems like combining names:
.classname {} .classname_inside {}
In SCSS:
.classname { &_inside { } }
In SASS:
.classname &_inside
This option has been possible since SASS 3.5. It will be very helpful in creating code build in BEM methodologies.
Compass is a very useful SASS framework, especially when you are working with a big list of icons/reusable images. What you need to do is gather all the images in one folder in your project. For example,
yourfolder/envelope.png and
yourfloder/star.png.
Then in your SASS code:
@import "compass/utilities/sprites" @import "yourfolder/*.png" @include all-yourfolder-sprites
Then in your code, you can use images as an example:
.simple-class-envelope @extend .yourfolder-envelope .simple-class-star @extend .yourfolder-star
And it will add a code to your classes:
.simple-class-envelope { background-image: url('spriteurl.png'); background-position: -100px -200px; }
Where
-100px and
-200px are examples of offset in your sprite.
Every time we are compiling project files (for example, Compass, Jade, image optimization, and so on), we are thinking about how we can automatize and speed up the process. The first ideaâsome terminal snippets and compiling invokers. But we can use
grunt.js and
gulp.js. What are Grunt and Gulp? In shortâtask runners. You can define a list of tasks, which you repeat all the time, group them into some logical structure, and run.
In most projects, you can use them to automatize a process of SASS/Compass compilation.
I assume that you have installed Node.js, Ruby, sass, and Compass. If not, I recommend you to do this first. To install all of the listed software, you need to visit: to install Node.js to install Ruby to install SASS to install Compass to install Gulp globally on your machine
On these pages, you can find guides and tutorials on how to install all of this software.
Then you will need to create a basic structure for your project. It is best to create folders:
src: In this folder we will keep our source files
dist: In this folder we will keep our compiled files
In the
src folder, please create a
css folder, which will keep our SASS files.
Then in the
root folder, run the following command line:
npm init npm install gulp-compass gulp --save-dev
In
gulpfile.js add the following lines of code:
var gulp = require('gulp'), compass = require('gulp-compass'); gulp.task('compass', function () { return gulp.src('src/styles/main.sass') .pipe(compass({ sass: 'src/styles', image: 'src/images', css: 'dist/css', sourcemap: true, style: 'compressed' })); }); gulp.task('default', function () { gulp.watch('src/css/**/*.sass', ['compass']); });
Now you can run your automatizer with the following in your command line:
gulp
This will run the
default task from your
gulpfile.js, which will add a watcher to the files with
.sass extensions, which are located in the
src/css folder. Every time you change any file in this location, your task
compass will run. It means that it will run the
compass task and create a sourcemap for us. We could use a default
compass command, but
gulp.js is a part of the modern frontend developer workflow. We will be adding new functions to this automatizer in the next chapters.
Let's analyze the code a little deeper:
gulp.task('default', function () { gulp.watch('src/css/**/*.sass', ['compass']); });
The preceding code defines the default task. It appends a watcher, which checks the
src/css/**/*.sass location for sass files. It means that every file in a
src/css folder and any subsequent folder, for example,
src/css/folder/file.sass, will have a watcher. When files in this location are changed, the task defined in the array
[compass]will run. Our
task compass is the only element in the array but it, of course, can be extended (we will do this in the next chapters).
Now let's analyze the
task compass:
gulp.task('compass', function () { return gulp.src('src/styles/main.sass') .pipe(compass({ sass: 'src/styles', image: 'src/images', css: 'dist/css', sourcemap: true, style: 'compressed' })); });
It will compile the
gulp.src('src/styles/main.sass)file and save the compiled file in
pipe (
gulp.dest('style.css')). The
compass task is defined in
pipe:
.pipe(compass({ sass: 'src/styles', image: 'src/images', css: 'dist/css', sourcemap: true, style: 'compressed' }))
The first line of this task defines the source folder for SASS files. The second line defines the images folder. The third line sets the destination of the CSS file. The fourth line is set to generate a source map for the file (for easier debugging).The fifth line defines the style of the saved CSS file; in this case, it will be compressed (it means that it will be ready for production code).
In a common workflow, a graphic designer creates the design of a website/application. Then, next in the process is the HTML/CSS coding. After the development process, the project is in the quality assurance (QA) phase. Sometimes it's focused only on the functional side of the project, but in a good workflow, it checks of graphic design phase. During the QA process, the designer is involved, he/she will find all pixels that are not good in your code. How would check all the details in a pixelperfect project?
The question is about mobile projects. How to check if it is still pixel perfect when it needs to be flexible in browsers? You will need to make it in described ranges. For example, you have to create HTML/CSS for the web page, which has three views for mobile, tablet, and desktop. You will need plugins, which will help you to build pixel perfect layouts.
Pixelperfect plugin will help you to compare design with your HTML/CSS in your browser. This plugin is available on Firefox and Chrome. To work with it, you need to make a screenshot of your design and add it in a plugin. Then you can set a position of image and opacity. This plugin is one of the most used by frontend developers to create pixel perfect HTML layouts.
This plugin will help you to keep proper distances between elements, fonts, and so on. As you can see in the following screenshot, it looks like a ruler over your web page. It is easy to useâjust click on the plugin icon in the browser and then click on the website (it will start the ruler), and move the cursor to the place to which you want to know the distance, and voila!
Some CSS features don't work in all browsers. Some new properties need browser-specific prefixes (like
-ms,
-o,
-webkit) to work properly across all modern browsers. But how to check if you can use some properties in your project? Of course, you can check it yourself, but the easiest way is to check it on. You can open this web page and check which properties you can use.
While you are creating CSS code, you have to remember initial assumptions that will help you to keep clear and very readable code. These assumptions are as follows:
Naming conventionâYou need to remember that your code needs to be the exact names of classes.
Use comments, but not everywhere, only in places where they are needed. Yeah, but when they are needed? They are especially needed when you have some exception or when you have some quick fixes for browsers. With comments, you can describe blocks of code, which describes the views, for example, of footer/header, or any other element.
Try to keep code which is readable and logical. But how does unlogical code look like? Look at the following two examples:
Example 1 is as follows:
.classname { font-size: 12px; color: red; font-weight: bold; text-align: center; margin: 10px; padding-left: 2px; text-transform: uppercase; }
Example 2 is as follows:
.classname { margin: 10px; padding-left: 2px; font-size: 12px; font-weight: bold; text-align: center; text-transform: uppercase; color: red; }
Which code looks better? Yeah, of course, the second example because it has grouped declarations. First the description of the box model, then the font and text behaviors, and finally color. You can try to keep it in another hierarchy which will be more readable for you.
Using sample 2 in SASS:
.classname margin: 10px padding: left: 2px font: size: 12px weight: bold text: align: center transform: uppercase color: red
Isn't it shorter and more logical?
Create proper selectors (this will be described later in this chapter).
Create an elastic structure for your files.
The main problem of the CSS coder is creating proper selectors. Knowledge about priors in selectors is mandatory. It will help you to omit the
!important statement in your code and will help you to create smaller and more readable files.
Using of IDs in CSS is rather bad behavior. The foundation of HTML says that an ID is unique and should be used only once in an HTML code. It is good to omit IDs in CSS and use them only when it is the only way to style some element:
#id_name { property: value; }
Usage of IDs in CSS code is bad behavior because selectors based on ID are stronger than selectors based on classes. This is confusing in legacy code when you see that some part of the code is still preceded by another selector because it is added in the ID's parents-based selector as follows:
#someID .class { /* your code */ }
It is good to omit this problem in your projects. First, think twice if a selector based on an ID is a good idea in this place and if this cannot be replaced with any other "weaker" selector.
Classes are the best friends of the HTML/CSS coder. They are reusable elements that you can define and then reuse as much as you want in your HTML code, for example:
.class_name { property: value; }
You can group and nest selectors. First, let's nest them:
.class_wrapper .class_nested { property: value; }
Then let's group them:
.class_wrapper_one, .class_wrapper_two { property: value; }
In CSS code, you need to be a selector specialist. It is a very important skill to make a right selector that will match a specific element in the DOM structure. Let's provide a little bit of fundamental knowledge about selectors.
The plus sign in CSS can be used in selectors in which you will need to select an element right after the element on the left side of the plus sign, for example:
p + a { property: value; }
This selector will return
a, which is right after the
p selector, like in the following example:
<p>Text</p> <a>Text</a>
But it won't work in the following case:
<p>Text</p> <h1>Text</h1> <a>Text</a>
With element (
>) in the selector, you can match every element that is right into the element. Let's analyze the following example:
p >a { property: value; }
This selector will return all
<a> elements which are into
<p> element but are not nested deeper, for example:
<p> <a>text</a> </p>
But this won't work in the following case:
<p> <span> <a>text</a> </span> </p>
With
~, you can create a selector that will match every element that is parallel in the DOM structure, for example:
p ~ a { color: pink; }
This selector will work in the following cases:
<p></p> <a></a>
and:
<p>Text</p> <span>Text</span> <a>Text</a>
Sometimes, there is no way to create a selector based on elements, classes, and IDs. So this is the moment when you need to search for any other possibility to create the right selector. It is possible to get elements by their attributes (
data,
href, and so on):
[attribute] { property: value; }
It will return the following:
<p attribute>text</p>
And will also return the following:
<p attribute="1">text</p>
In real CSS/HTML code, there are examples when you will need a selector which is based on attributes with an exact value like inputs with the type as text or when elements data attribute is set with some value. It is possible with a selector which is similar to this example code:
input[type="text"] { background: #0000ff; }
will match:
<input type="text">
This selector is very useful when you want to match elements with attributes that begin with some specific string. Let's check an example:
<div class="container"> <div class="grid-1of4">Grid 2</div> <div class="grid-1of2">Grid 1</div> <div class="grid-1of4">Grid 3</div> </div>
SASS code:
.grid-1of2 width: 50% background: blue .grid-1of4 width: 25% background: green [class^="grid"] float: left
Compiled CSS:
.grid-1of2 { width: 50%; background: blue; } .grid-1of4 { width: 25%; background: green; } [class^="grid"] { float: left; }
Let's analyze this fragment in SASS code:
[class^="grid"] float: left
This selector will match every element that has an attribute with a
grid word in the beginning of this attribute. This will match in our case:
.grid-1of2 and
.grid-1of4. Of course, we could do it with SASS:
.grid-1of2, .grid-1of4 float: left
And get it in compiled code:
.grid-1of2, .grid-1of4 { float: left; }
But let's imagine that we have about
10 or maybe
40 classes like the following:
.grid-2of4 width: 50% .grid-3of4 width: 75% .grid-1of5 width: 20% .grid-2of5 width: 40% .grid-3of5 width: 60% .grid-4of5 width: 80%
In compiled CSS:
.grid-2of4 { width: 50%; } .grid-3of4 { width: 75%; } .grid-1of5 { width: 20%; } .grid-2of5 { width: 40%; } .grid-3of5 { width: 60%; } .grid-4of5 { width: 80%; }
And now we want to apply a
float: left to these elements like:
.grid-1of2, .grid-1of4, .grid-2of4, .grid-3of4, .grid-1of5, .grid-2of5, .grid-3of5, .grid-4of5 float: left
In CSS:
.grid-1of2, .grid-1of4, .grid-2of4, .grid-3of4, .grid-1of5, .grid-2of5, .grid-3of5, .grid-4of5 { float: left; }
It is easier to use a selector based on
[attribute^="value"] and match all of the elements with a class which starts with a grid string:
[class^="grid"] float: left
With this selector you can match all elements which in list of "attributes" that contains a string described as a "value". Let's analyze the following example.
HTML:
<div class="container"> <div data-Element green font10</div> <div data-Element black font24</div> <div data-Element blue font17</div> </div>
Now in SASS:
[data-style~="green"] color: green [data-style~="black"] color: black [data-style~="blue"] color: blue [data-style~="font10"] font: size: 10px [data-style~="font17"] font: size: 17px [data-style~="font24"] font: size: 24px
Compiled CSS:
[data-style~="green"] { color: green; } [data-style~="black"] { color: black; } [data-style~="blue"] { color: blue; } [data-style~="font10"] { font-size: 10px; } [data-style~="font17"] { font-size: 17px; } [data-style~="font24"] { font-size: 24px; }
And the effect in the browser is as follows:
In one of the previous sections, we had an example of a selector based on beginning of an attribute. But what if we need an attribute ending? With this feature comes a selector based on a pattern
[attribute$="value"]. Let's check the following example code:
<div class="container"> <a href="/contact-form">Contact form</a><br> <a href="/contact">Contact page</a><br> <a href="/recommendation-form">Recommendation form</a> </div>
SASS:
[href$="form"] color: yellowgreen font: weight: bold
Compiled CSS:
[href$="form"] { color: yellowgreen; font-weight: bold; }
The effect in the browser is as follows:
With the selector
[href$="form"],we matched all elements whose attribute
href ends with the string
form.
With this selector, you can match every element that contains a string in a value in any place. Let's analyze the following example code.
HTML:
<div class="container"> <a href="/contact-form">Contact form</a><br> <a href="/form-contact">Contact form</a><br> <a href="/rocommendation-form">Recommendation form</a><br> <a href="/rocommendation-and-contact-form">Recommendation and contact form</a> </div>
SASS:
[href*="contact"] color: yellowgreen font: weight: bold
Compiled CSS:
[href*="contact"] { color: yellowgreen; font-weight: bold; }
In the browser we will see:
With the selector
[href*="contact"], we matched every element that contains the
contact string in the value of the attribute
href.
Hah⦠the magic word in CSS, which you can see in some special cases. With
!important, you can even overwrite inline code added by JavaScript in your HTML.
How to use it? It is very simple:
element { property: value !important; }
Remember to use it properly and in cases where you really need it. Don't overuse it in your code because it can have a big impact in the future, especially in cases when somebody will read your code and will try to debug it.
Starting your project and planning it is one of the most important processes. You need to create a simple strategy for keeping variables and mixins and also create a proper file structure. This chapter is about the most known problems in planning your file structure and the partialization of files in your project.
The most important thing when you are starting a project is to make a good plan of its process. First, you will need to separate settings:
Fonts
Variables
Mixins
Then you will need to partialize your project. You will need to create files for repeatable elements along all sites:
Header
Footer
Forms
Then you will need to prepare next partializationâspecific views of styling and elements, for example:
View home
View blog
View contact page
What can you keep in variables? Yeah, that is a good question, for sure:
In this file, you can collect your mostly used mixins. I've divided it into local and global. In global mixins, I'm gathering the most used mixins I'm using along all projects.
In local mixins, I recommend to gather those mixins that you will use only in this project:
Dedicated gradient
Font styling including font family size and so on
Hover/active states and so on
This file is dedicated for all the most important text elements:
h1-
h6
p
a
strong
span
Additionally, you can add classes like the following:
.h1-
h6
.red .blue(or any other which you know that will repeat in your texts)
.small,
.large
Why should you use classes like
.h1-
.h6?
Yeah, it's a pretty obvious question. Sometimes you cannot repeat
h1-
h6 elements, but, for example, on a blog, you need to make them the same font style as
h1. This is the best usage of this style, for example (HTML structure):
<h1> Main title</h1> <h2>Subtitle</h2> <p>... Text block ... </p> <h2>Second subtitle</h2> <p>... Text block ... </p> <p class="h2">Something important</p> <p>... Text block ... </p> <p class="h1">Something important</p> <p>... Text block ... </p>
In the following listed files, you can gather all elements that are visible in some specific views. For example, in a blog structure, you can have a view of single post or page view. So you need to create files:
_view_singlepost.sass _view_singlepage.sass _view_contactpage.sass
Tip.
You can also download the code files by clicking on the Code Files button on the book's webpage:
WinRAR / 7-Zip for Windows
Zipeg / iZip / UnRarX for Mac
7-Zip / PeaZip for Linux
In this chapter, you gathered information about the fundamentals of modern CSS workflow. We started with choosing an IDE and then we focused on speeding up the process through the usage of snippets, preprocessors, and processes automatization.
In the next chapter, we will focus on the basics of CSS theory, box models, positions, and displaying modes in CSS.
|
https://www.packtpub.com/product/professional-css3/9781785880940
|
CC-MAIN-2020-40
|
en
|
refinedweb
|
Software Engineer. Pythonista, JavaScripter. Find me on Twitter @melvinkcx2 😁.
(left) When an instructor is assigned,.. (right) an event is created automatically.
What I want:
Since we are using GraphQL, adding a hook into my Resolver is all I have to do. Whenever a Block is mutated, the logic to update the corresponding Google Calendar event is then triggered.
The requirements are straightforward, consuming Google APIs is not!
My main challenges were:
from google.oauth2 import service_account from googleapiclient.discovery import build SCOPES = [''] SERVICE_ACCOUNT_FILE = './xxxxxxxxxxxx.json' # You should make it an environment variable credentials = service_account.Credentials.from_service_account_file(SERVICE_ACCOUNT_FILE, scopes=SCOPES) delegated_credentials = credentials.with_subject('[email protected]') service = build('calendar', 'v3', credentials=delegated_credentials) print(service.calendarList().list().execute())
> Accessing calendar data with delegated credentials in Python
Up to this point, I know my service account is properly set up!.
const google = require("googleapis").google; const calendar = google.calendar("v3"); (async function () { const scopes = ['']; const keyFile = './xxxxxxxxxxx.json'; // Your should make it an environment variable const client = await google.auth.getClient({ keyFile, scopes, }); // Delegated Credential client.subject = "[email protected]"; const res = await calendar.calendarList.list({ auth: client }); console.log(JSON.stringify(res.data)); })();
>.
I set an expectation after understanding various authentication mechanism available given their complete step-by-step guide. The documentation of the library is however less anticipated. As a software engineer, keeping documentation updated is perhaps one of the ways to keep users from frustration.
|
https://hackernoon.com/my-journey-integrating-google-calendar-g-suite-in-node-62fbc8596455
|
CC-MAIN-2020-40
|
en
|
refinedweb
|
Show Table of Contents
Chapter 17.. For system administrators, using the nameserver allows them to change the IP address for a host without ever affecting the name-based queries, or to decide which machines handle these queries.
17.1. Introduction to DNS
DNS is usually implemented using one or more centralized servers that are authoritative for certain domains. When a client host requests information from a nameserver, it usually connects to port 53. The nameserver then attempts to resolve the name requested. If it does not have an authoritative answer, or does not already have the answer cached from an earlier query, it queries other nameservers, called root nameservers, to determine which nameservers are authoritative for the name in question, and then queries them to get the requested name.
17.1.1. Nameserver Zones
In a DNS server such as BIND (Berkeley Internet Name Domain), all information is stored in basic data elements called resource records (RR). The resource record is usually a fully qualified domain name (FQDN) of a host, and is broken down into multiple sections organized into a tree-like hierarchy. This hierarchy consists of a main trunk, primary branches, secondary branches, and so on.
Example 17.1. A simple resource record
bob.sales.example.com
Each level of the hierarchy is divided by a period (that is,
.). In Example 17.1, “A simple resource record”,
comdefines the top-level domain,
exampleits subdomain, and
salesthe subdomain of
example. In this case,
bobidentifies a resource record that is part of the
sales.example.comdomain. With the exception of the part furthest to the left (that is,
bob), each of these sections is called a zone and defines a specific namespace.
Zones are defined on authoritative nameservers through the use of zone files, which contain definitions of the resource records in each zone. Zone files are stored on primary nameservers (also called master nameservers), where changes are made to the files, and secondary nameservers (also called slave nameservers), which receive zone definitions from the primary nameservers. Both primary and secondary nameservers are authoritative for the zone and look the same to clients. Depending on the configuration, any nameserver can also serve as a primary or secondary server for multiple zones at the same time.
17.1.2. Nameserver Types
There.
17.1.3. BIND as a Nameserver.
|
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/deployment_guide/ch-dns_servers
|
CC-MAIN-2018-09
|
en
|
refinedweb
|
Data Templating Overview
The WPF data templating model provides you with great flexibility to define the presentation of your data. WPF controls have built-in functionality to support the customization of data presentation. This topic first demonstrates how to define a DataTemplate and then introduces other data templating features, such as the selection of templates based on custom logic and the support for the display of hierarchical data.
Prerequisites
This topic focuses on data templating features and is not an introduction of data binding concepts. For information about basic data binding concepts, see the Data Binding Overview.
DataTemplate is about the presentation of data and is one of the many features provided by the WPF styling and templating model. For an introduction of the WPF styling and templating model, such as how to use a Style to set properties on controls, see the Styling and Templating topic.
In addition, it is important to understand
Resources, which are essentially what enable objects such as Style and DataTemplate to be reusable. For more information on resources, see XAML Resources.
Data Templating Basics
To demonstrate why DataTemplate is important, let's walk through a data binding example. In this example, we have a ListBox that is bound to a list of
Task objects. Each
Task object has a
TaskName (string), a
Description (string), a
Priority (int), and a property of type
TaskType, which is an
Enum with values
Home and
Work.
<Window x: <Window.Resources> <local:Tasks x:
</Window.Resources> <StackPanel> <TextBlock Name="blah" FontSize="20" Text="My Task List:"/> <ListBox Width="400" Margin="10" ItemsSource="{Binding Source={StaticResource myTodoList}}"/>
</StackPanel> </Window>
Without a DataTemplate
Without a DataTemplate, our ListBox currently looks like this:
What's happening is that without any specific instructions, the ListBox by default calls
ToString when trying to display the objects in the collection. Therefore, if the
Task object overrides the
ToString method, then the ListBox displays the string representation of each source object in the underlying collection.
For example, if the
Task class overrides the
ToString method this way, where
name is the field for the
TaskName property:
public override string ToString() { return name.ToString(); }
Public Overrides Function ToString() As String Return _name.ToString() End Function
Then the ListBox looks like the following:
However, that is limiting and inflexible. Also, if you are binding to XML data, you wouldn't be able to override
ToString.
Defining a Simple DataTemplate
The solution is to define a DataTemplate. One way to do that is to set the ItemTemplate property of the ListBox to a DataTemplate. What you specify in your DataTemplate becomes the visual structure of your data object. The following DataTemplate is fairly simple. We are giving instructions that each item appears as three TextBlock elements within a StackPanel. Each TextBlock element>
The underlying data for the examples in this topic is a collection of CLR objects. If you are binding to XML data, the fundamental concepts are the same, but there is a slight syntactic difference. For example, instead of having
Path=TaskName, you would set XPath to
@TaskName (if
TaskName is an attribute of your XML node).
Now our ListBox looks like the following:
Creating the DataTemplate as a Resource
In the above example, we defined the DataTemplate inline. It is more common to define it in the resources section so it can be a reusable object, as in the following example:
<Window.Resources>
<DataTemplate x: <StackPanel> <TextBlock Text="{Binding Path=TaskName}" /> <TextBlock Text="{Binding Path=Description}"/> <TextBlock Text="{Binding Path=Priority}"/> </StackPanel> </DataTemplate>
</Window.Resources>
Now you can use
myTaskTemplate as a resource, as in the following example:
<ListBox Width="400" Margin="10" ItemsSource="{Binding Source={StaticResource myTodoList}}" ItemTemplate="{StaticResource myTaskTemplate}"/>
Because
myTaskTemplate is a resource, you can now use it on other controls that have a property that takes a DataTemplate type. As shown above, for ItemsControl objects, such as the ListBox, it is the ItemTemplate property. For ContentControl objects, it is the ContentTemplate property.
The DataType Property
The DataTemplate class has a DataType property that is very similar to the TargetType property of the Style class. Therefore, instead of specifying an
x:Key for the DataTemplate in the above example, you can do the following:
<DataTemplate DataType="{x:Type local:Task}"> <StackPanel> <TextBlock Text="{Binding Path=TaskName}" /> <TextBlock Text="{Binding Path=Description}"/> <TextBlock Text="{Binding Path=Priority}"/> </StackPanel> </DataTemplate>
This DataTemplate gets applied automatically to all
Task objects. Note that in this case the
x:Key is set implicitly. Therefore, if you assign this DataTemplate an
x:Key value, you are overriding the implicit
x:Key and the DataTemplate would not be applied automatically.
If you are binding a ContentControl to a collection of
Task objects, the ContentControl does not use the above Bind to a Collection and Display Information Based on Selection. Otherwise, you need to specify the DataTemplate explicitly by setting the ContentTemplate property.
The DataType property is particularly useful when you have a CompositeCollection of different types of data objects. For an example, see Implement a CompositeCollection.
Adding More to the DataTemplate
Currently the data appears with the necessary information, but there's definitely room for improvement. Let's improve on the presentation by adding a Border, a Grid, and some TextBlock elements that describe the data that is being displayed.
<DataTemplate x: <Border Name="border" BorderBrush="Aqua" BorderThickness="1" Padding="5" Margin="5"> <Grid> <Grid.RowDefinitions> <RowDefinition/> <RowDefinition/> <RowDefinition/> </Grid.RowDefinitions> <Grid.ColumnDefinitions> <ColumnDefinition /> <ColumnDefinition /> </Grid.ColumnDefinitions> <TextBlock Grid. <TextBlock Grid. <TextBlock Grid. <TextBlock Grid. <TextBlock Grid. <TextBlock Grid. </Grid> </Border>
</DataTemplate>
The following screenshot shows the ListBox with this modified DataTemplate:
We can set HorizontalContentAlignment to Stretch on the ListBox to make sure the width of the items takes up the entire space:
<ListBox Width="400" Margin="10" ItemsSource="{Binding Source={StaticResource myTodoList}}" ItemTemplate="{StaticResource myTaskTemplate}" HorizontalContentAlignment="Stretch"/>
With the HorizontalContentAlignment property set to Stretch, the ListBox now looks like this:
Use DataTriggers to Apply Property Values
The current presentation does not tell us whether a
Task is a home task or an office task. Remember that the
Task object has a
TaskType property of type
TaskType, which is an enumeration with values
Home and
Work.>
Our application now looks like the following. Home tasks appear with a yellow border and office tasks appear with an aqua border:
In this example the DataTrigger uses a Setter to set a property value. The trigger classes also have the EnterActions and ExitActions properties that allow you to start a set of actions such as animations. In addition, there is also a MultiDataTrigger class that allows you to apply changes based on multiple data-bound property values.
An alternative way to achieve the same effect is to bind the BorderBrush property to the
TaskType property and use a value converter to return the color based on the
TaskType value. Creating the above effect using a converter is slightly more efficient in terms of performance. Additionally, creating your own converter gives you more flexibility because you are supplying your own logic. Ultimately, which technique you choose depends on your scenario and your preference. For information about how to write a converter, see IValueConverter.
What Belongs in a DataTemplate?
In the previous example, we placed the trigger within the DataTemplate using the DataTemplate.Triggers property. The Setter of the trigger sets the value of a property of an element (the Border element) that is within the DataTemplate. However, if the properties that your
Setters are concerned with are not properties of elements that are within the current DataTemplate, it may be more suitable to set the properties using a Style that is for the ListBoxItem class (if the control you are binding is a ListBox). For example, if you want your Trigger to animate the Opacity value of the item when a mouse points to an item, you define triggers within a ListBoxItem style. For an example, see the Introduction to Styling and Templating Sample.
In general, keep in mind that the DataTemplate is being applied to each of the generated ListBoxItem (for more information about how and where it is actually applied, see the ItemTemplate page.). Your DataTemplate is concerned with only the presentation and appearance of the data objects. In most cases, all other aspects of presentation, such as what an item looks like when it is selected or how the ListBox lays out the items, do not belong in the definition of a DataTemplate. For an example, see the Styling and Templating an ItemsControl section.
Choosing a DataTemplate Based on Properties of the Data Object
In The DataType Property section, we discussed that you can define different data templates for different data objects. That is especially useful when you have a CompositeCollection of different types or collections with items of different types. In the Use DataTriggers to Apply Property Values section, we have shown that if you have a collection of the same type of data objects you can create a DataTemplate and then use triggers to apply changes based on the property values of each data object. However, triggers allow you to apply property values or start animations but they don't give you the flexibility to reconstruct the structure of your data objects. Some scenarios may require you to create a different DataTemplate for data objects that are of the same type but have different properties.
For example, when a
Task object has a
Priority value of
1, you may want to give it a completely different look to serve as an alert for yourself. In that case, you create a DataTemplate for the display of the high-priority
Task objects. Let's add the following DataTemplate to the resources section:
<DataTemplate x: <DataTemplate.Resources> <Style TargetType="TextBlock"> <Setter Property="FontSize" Value="20"/> </Style> </DataTemplate.Resources> <Border Name="border" BorderBrush="Red" BorderThickness="1" Padding="5" Margin="5"> <DockPanel HorizontalAlignment="Center"> <TextBlock Text="{Binding Path=Description}" /> <TextBlock>!</TextBlock> </DockPanel> </Border> </DataTemplate>
Notice this example uses the DataTemplate.Resources property. Resources defined in that section are shared by the elements within the DataTemplate.
To supply logic to choose which DataTemplate to use based on the
Priority value of the data object, create a subclass of DataTemplateSelector and override the SelectTemplate method. In the following example, the SelectTemplate method provides logic to return the appropriate template based on the value of the
Priority property. The template to return is found in the resources of the enveloping Window element.
using System.Windows; using System.Windows.Controls; namespace SDKSample { public class TaskListDataTemplateSelector : DataTemplateSelector { public override DataTemplate SelectTemplate(object item, DependencyObject container) { FrameworkElement element = container as FrameworkElement; if (element != null && item != null && item is Task) { Task taskitem = item as Task; if (taskitem.Priority == 1) return element.FindResource("importantTaskTemplate") as DataTemplate; else return element.FindResource("myTaskTemplate") as DataTemplate; } return null; } } }
Namespace SDKSample Public Class TaskListDataTemplateSelector Inherits DataTemplateSelector Public Overrides Function SelectTemplate(ByVal item As Object, ByVal container As DependencyObject) As DataTemplate Dim element As FrameworkElement element = TryCast(container, FrameworkElement) If element IsNot Nothing AndAlso item IsNot Nothing AndAlso TypeOf item Is Task Then Dim taskitem As Task = TryCast(item, Task) If taskitem.Priority = 1 Then Return TryCast(element.FindResource("importantTaskTemplate"), DataTemplate) Else Return TryCast(element.FindResource("myTaskTemplate"), DataTemplate) End If End If Return Nothing End Function End Class End Namespace
We can then declare the
TaskListDataTemplateSelector as a resource:
<Window.Resources>
<local:TaskListDataTemplateSelector x:
</Window.Resources>
To use the template selector resource, assign it to the ItemTemplateSelector property of the ListBox. The ListBox calls the SelectTemplate method of the
TaskListDataTemplateSelector for each of the items in the underlying collection. The call passes the data object as the item parameter. The DataTemplate that is returned by the method is then applied to that data object.
<ListBox Width="400" Margin="10" ItemsSource="{Binding Source={StaticResource myTodoList}}" ItemTemplateSelector="{StaticResource myDataTemplateSelector}" HorizontalContentAlignment="Stretch"/>
With the template selector in place, the ListBox now appears as follows:
This concludes our discussion of this example. For the complete sample, see Introduction to Data Templating Sample.
Styling and Templating an ItemsControl
Even though the ItemsControl is not the only control type that you can use a DataTemplate with, it is a very common scenario to bind an ItemsControl to a collection. In the What Belongs in a DataTemplate section we discussed that the definition of your DataTemplate should only be concerned with the presentation of data. In order to know when it is not suitable to use a DataTemplate it is important to understand the different style and template properties provided by the ItemsControl. The following example is designed to illustrate the function of each of these properties. The ItemsControl in this example is bound to the same
Tasks collection as in the previous example. For demonstration purposes, the styles and templates in this example are all declared inline.
<ItemsControl Margin="10" ItemsSource="{Binding Source={StaticResource myTodoList}}"> <!--The ItemsControl has no default visual appearance. Use the Template property to specify a ControlTemplate to define the appearance of an ItemsControl. The ItemsPresenter uses the specified ItemsPanelTemplate (see below) to layout the items. If an ItemsPanelTemplate is not specified, the default is used. (For ItemsControl, the default is an ItemsPanelTemplate that specifies a StackPanel.--> <ItemsControl.Template> <ControlTemplate TargetType="ItemsControl"> <Border BorderBrush="Aqua" BorderThickness="1" CornerRadius="15"> <ItemsPresenter/> </Border> </ControlTemplate> </ItemsControl.Template> <!--Use the ItemsPanel property to specify an ItemsPanelTemplate that defines the panel that is used to hold the generated items. In other words, use this property if you want to affect how the items are laid out.--> <ItemsControl.ItemsPanel> <ItemsPanelTemplate> <WrapPanel /> </ItemsPanelTemplate> </ItemsControl.ItemsPanel> <!--Use the ItemTemplate to set a DataTemplate to define the visualization of the data objects. This DataTemplate specifies that each data object appears with the Proriity and TaskName on top of a silver ellipse.--> <ItemsControl Path=Priority}"/> <TextBlock Margin="3,0,3,7" Text="{Binding Path=TaskName}"/> </StackPanel> </Grid> </DataTemplate> </ItemsControl.ItemTemplate> <!--Use the ItemContainerStyle property to specify the appearance of the element that contains the data. This ItemContainerStyle gives each item container a margin and a width. There is also a trigger that sets a tooltip that shows the description of the data object when the mouse hovers over the item container.--> <ItemsControl.ItemContainerStyle> <Style> <Setter Property="Control.Width" Value="100"/> <Setter Property="Control.Margin" Value="5"/> <Style.Triggers> <Trigger Property="Control.IsMouseOver" Value="True"> <Setter Property="Control.ToolTip" Value="{Binding RelativeSource={x:Static RelativeSource.Self}, Path=Content.Description}"/> </Trigger> </Style.Triggers> </Style> </ItemsControl.ItemContainerStyle> </ItemsControl>
The following is a screenshot of the example when it is rendered:
Note that instead of using the ItemTemplate, you can use the ItemTemplateSelector. Refer to the previous section for an example. Similarly, instead of using the ItemContainerStyle, you have the option to use the ItemContainerStyleSelector.
Two other style-related properties of the ItemsControl that are not shown here are GroupStyle and GroupStyleSelector.
Support for Hierarchical Data
So far we have only looked at how to bind to and display a single collection. Sometimes you have a collection that contains other collections. The HierarchicalDataTemplate class is designed to be used with HeaderedItemsControl types to display such data. In the following example,
ListLeagueList is a list of
League objects. Each
League object has a
Name and a collection of
Division objects. Each
Division has a
Name and a collection of
Team objects, and each
Team object has a
Name.
<Window x: <DockPanel> <DockPanel.Resources> <src:ListLeagueList x: > </DockPanel.Resources> <Menu Name="menu1" DockPanel. <MenuItem Header="My Soccer Leagues" ItemsSource="{Binding Source={StaticResource MyList}}" /> </Menu> <TreeView> <TreeViewItem ItemsSource="{Binding Source={StaticResource MyList}}" Header="My Soccer Leagues" /> </TreeView> </DockPanel> </Window>
The example shows that with the use of HierarchicalDataTemplate, you can easily display list data that contains other lists. The following is a screenshot of the example.
See Also
Data Binding
Find DataTemplate-Generated Elements
Styling and Templating
Data Binding Overview
GridView Column Header Styles and Templates Overview
|
https://docs.microsoft.com/en-us/dotnet/framework/wpf/data/data-templating-overview
|
CC-MAIN-2018-09
|
en
|
refinedweb
|
Extensible Stylesheet Language (XSL) is a templating system for XML documents, and with it you can process an XML document for output. With the same XML source, you might apply different XSL documents to format for the Web, PDAs, interactive television, and mobile phones.
Unfortunately, the details of XSL are beyond the scope of this book, but we can briefly examine PHP's support for it.
PHP's support for XSL is also currently in flux. The underlying library that PHP 5 now uses is libxslt (). This is a radical departure from previous versions of PHP, which worked with the Sablotron XSLT processor.
Because work is not yet complete, everything covered in this section is subject to change. Before using XSL in projects, you should visit the PHP manual () to check the current stability of support for the technology.
Although at the time of writing, XSL support is flagged as experimental and documentation is nonexistent, an easy-to-use and nicely integrated XSLT parser class is already available. Because libxslt is built on the libxml2 library that the DOM and Parser functions already use, PHP's XSL support now works directly with DomDocument objects.
At the time of writing, the libxslt library was not bundled with PHP 5; however, you can download it from. You also might need to compile PHP with XSL support. You should include the argument
--with-xsl
when you run the configure script.
In Listing 22.7, we apply a simple XSL document to the XML we created in Listing 22.1. It outputs a table for each article, adding formatting and changing the order of two of the siblings.
1: <?xml version="1.0"?>
2: <xsl:stylesheet
3: version="1.0"
4: xmlns:
5:
6: <xsl:output
7: <xsl:template
8: <table border="1">
9: <xsl:apply-templates
10: </table>
11: </xsl:template>
12:
13: <xsl:template
14: <tr><td>
15: <i><xsl:value-of</i>
16: <br />
17: <xsl:text> writes</xsl:text>
18: <b><xsl:value-of</b>
19: </td></tr>
20: </xsl:template>
21: </xsl:stylesheet>
Without getting in too deep with XSL, the purpose of this document should be relatively clear with a close look. Take a look at the first line. An XSL document is also an XML document! The root element
<xsl:stylesheet
version="1.0"
xmlns:
should always take this form. It establishes the XSL namespace and version number.
The <xsl:template> element on line 7 attempts to match the root element. After the match occurs, we establish some basic formatting and with <xsl:apply-templates> on line 9 we attempt to match <newsitem> elements and generate formatted XHTML for each one.
The HTML you see in Listing 22.7 is subject to the same rules as any XML document, which means that failure to close a <tr> or <td> element would cause a parser to generate an error message. The <xsl:value-of> tags (lines 15 and 18) are substituted by the value of the elements stipulated in their select attribute (<byline> and <headline>). Notice that we have switched the positions of byline and headline elements we are matching. XSL gives you control over the structure of data in output as well as its format.
Now that we have an XSL document, we can use it to transform our XML. In fact, to do this we only need to use a few functions. Listing 22.8 introduces them.
1: <?php
2: $xslt = new xsltprocessor();
3:
4: $xml_doc = new DomDocument();
5: $xml_doc->loadXML( file_get_contents("./listing22.1.xml") );
6:
7: $xsl_doc = new DomDocument();
8: $xsl_doc->loadXML( file_get_contents("./listing22.7.xsl") );
9:
10: $xslt->importStylesheet( $xsl_doc );
11: print $xslt->transformToXml( $xml_doc );
12: ?>
In Listing 22.8 we use the new XsltProcessor class to work with an XSL document and an XML document to produce formatted text. We initialize DomDocument objects to store our XSL and XML on lines 4 and 7. We then use the loadXML() method to acquire XML data on lines 5 and 8.
We now have an XsltProcessor object and two primed DomDocument objects. On line 10 we call XsltProcessor::importStylesheet(), passing it the DomDocument object containing our XSL. Finally, on line 11 we call transform_to_xml(), passing the method the DomDocument containing the XML to be transformed. The transformToXml() method returns the results of the transformation as a string we print.
|
http://books.gigatux.nl/mirror/php24hours/0672326191_ch22lev1sec4.html
|
CC-MAIN-2018-09
|
en
|
refinedweb
|
With apologies to the more advanced users, I have decided to proceed fairly slowly and cover many simple concepts with this series of plugins. Thus, this second post will not yet discuss plugins, but simply lay the groundwork for future posts. By the way, for those interested, and as pointed out by Lennart Regebro in his post, all the code samples that I will use can be browsed at, or retrieved from, my py-fun google code repository.
As a first step before comparing different approaches to dealing with plugins, I will take the sample application introduced in the first post and modularize it.
The core application (calculator.py) is as follows:.
import re
from plugins.base import OPERATORS, init_plugins
assert calculate("2**3") == 8
assert calculate("2*2**3") == 16
print "Done!"
Communication between plugins and the core application is ensured via an Application Programming Interface (API) unique to that application. In our example, the API is a simple Python dict (OPERATORS) written in capital letters only to make it stand out.
In a sub-directory (plugins), in addition to an empty __init__.py file, we include the following three files:
1. base.py
OPERATORS = {}
def init_plugins(expression):
'''simulated plugin initializer'''
from plugins import op_1, op_2
op_1.expression = expression
op_2.expression = expression
OPERATORS['+'] = op_1.operator_add_token
OPERATORS['-'] = op_1.operator_sub_token
OPERATORS['*'] = op_1.operator_mul_token
OPERATORS['/'] = op_1.operator_div_token
OPERATORS['**'] = op_2.operator_pow_token
2. op_1.py
class operator_add_token(object):
lbp = 10
def nud(self):
return expression(100)
def led(self, left):
return left + expression)
and 3. op_2.py
class operator_pow_token(object):
lbp = 30
def led(self, left):
return left ** expression(30-1)
The last two files have been simply extracted with no modification from the original application. Instead of having 2 such files containing classes of the form operator_xxx_token, I could have included them all in one file, or split into 5 different files. The number of files is irrelevant here: they are only introduced to play the role of plugins in this application.
The file base.py plays the role here of a plugin initialization module: it ensures that plugins are properly registered and made available to the core program.
Since I wanted to change the original code as little as possible, a "wart" is present in the code as written since it was never intended to be a plugin-based application: the function expression() was accessible to all objects in the initial single-file application. It is now needed in a number of modules. The file base.py takes care of ensuring that "plugin" modules have access to that function in a transparent way. This will need to be changed when using some standard plugin frameworks, as was done in the zca example or the grok one.
In the next post, I will show how to take this now modularized application and transform it into a proper plugin-based one.
|
https://aroberge.blogspot.com/2008/12/plugins-part-2-modularization.html
|
CC-MAIN-2018-09
|
en
|
refinedweb
|
How to Configure Features for dozens of team projects
May 31, 2012
In a previous post, I have told you how the Configure Features wizard works to upgrade the team projects on your TFS server. It works great. However if you are an administrator of dozens of team projects, you don’t want to walk through the wizard for each team project. Luckily we have a solution for you, which is outlined in this post.
You can either download the source code or execute the following steps.
Create and run the application
- Open Visual Studio 2012
- Create a new “C# Console Application” Project
- Open the folder C:\Program Files\Microsoft Team Foundation Server 11.0\Application Tier\Web Services\bin on the Application Tier.
- Copy the following files over to your local machine and add a reference to these local copies of the assembly
- Microsoft.TeamFoundation.Framework.Server.dll
- Microsoft.TeamFoundation.Server.Core.dll (Required if 2012 Update 2 was installed)
- Microsoft.TeamFoundation.Server.WebAccess.WorkItemTracking.Common.dll
- Add references to the following assemblies. These are located in the folder C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\ReferenceAssemblies\v2.0
- Microsoft.TeamFoundation.Client.dll
- Microsoft.TeamFoundation.Common.dll
- Microsoft.TeamFoundation.WorkItemTracking.Client.dll
- Open program.cs and overwrite the contents of the file with the following code and change the highlighted url.
- Copy your application to C:\Program Files\Microsoft Team Foundation Server 11.0\Application Tier\Web Services\bin on the Application Tier and run it from there.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Net;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.Framework.Server;
using Microsoft.TeamFoundation.Integration.Server;
using Microsoft.TeamFoundation.Server;
using Microsoft.TeamFoundation.Server.WebAccess.WorkItemTracking.Common;
namespace FeatureEnablement
{
class Program
{
static void Main(string[] args)
{
string urlToCollection = @"";
Guid instanceId;
// Get the TF Request Context
using (DeploymentServiceHost deploymentServiceHost = GetDeploymentServiceHost(urlToCollection, out instanceId))
{
using (TeamFoundationRequestContext context = GetContext(deploymentServiceHost, instanceId))
{
// For each team project in the collection
CommonStructureService css = context.GetService<CommonStructureService>();
foreach (var project in css.GetWellFormedProjects(context))
{
// Run the 'Configuration Features Wizard'
ProvisionProjectFeatures(context, project);
}
}
}
}
private static DeploymentServiceHost GetDeploymentServiceHost(string urlToCollection, out Guid instanceId)
{
using (var teamProjectCollection = new TfsTeamProjectCollection(new Uri(urlToCollection), CredentialCache.DefaultCredentials))
{
const string connectionStringPath = "/Configuration/Database/Framework/ConnectionString";
var registry = teamProjectCollection.ConfigurationServer.GetService<Microsoft.TeamFoundation.Framework.Client.ITeamFoundationRegistry>();
string connectionString = registry.GetValue(connectionStringPath);
instanceId = teamProjectCollection.InstanceId;
// Get the system context
TeamFoundationServiceHostProperties deploymentHostProperties = new TeamFoundationServiceHostProperties();
deploymentHostProperties.ConnectionString = connectionString;
deploymentHostProperties.HostType = TeamFoundationHostType.Application | TeamFoundationHostType.Deployment;
return new DeploymentServiceHost(deploymentHostProperties, false);
}
}
private static TeamFoundationRequestContext GetContext(DeploymentServiceHost deploymentServiceHost, Guid instanceId)
{
using (TeamFoundationRequestContext deploymentRequestContext = deploymentServiceHost.CreateSystemContext())
{
// Get the identity for the tf request context
TeamFoundationIdentityService ims = deploymentRequestContext.GetService<TeamFoundationIdentityService>();
TeamFoundationIdentity identity = ims.ReadRequestIdentity(deploymentRequestContext);
// Get the tf request context
TeamFoundationHostManagementService hostManagementService = deploymentRequestContext.GetService<TeamFoundationHostManagementService>();
return hostManagementService.BeginUserRequest(deploymentRequestContext, instanceId, identity.Descriptor);
}
}
private static void ProvisionProjectFeatures(TeamFoundationRequestContext context, CommonStructureProjectInfo project)
{
// Get the Feature provisioning service ("Configure Features")
ProjectFeatureProvisioningService projectFeatureProvisioningService = context.GetService<ProjectFeatureProvisioningService>();
if (!projectFeatureProvisioningService.GetFeatures(context, project.Uri.ToString()).Where(f => (f.State == ProjectFeatureState.NotConfigured && !f.IsHidden)).Any())
{
// When the team project is already fully or partially configured, report it
Console.WriteLine("{0}: Project is up to date.", project.Name);
}
else
{
// find the valid process templates
IEnumerable<IProjectFeatureProvisioningDetails> projectFeatureProvisioningDetails = projectFeatureProvisioningService.ValidateProcessTemplates(context, project.Uri);
int validProcessTemplateCount = projectFeatureProvisioningDetails.Where(d => d.IsValid).Count();
if (validProcessTemplateCount == 0)
{
// when there are no valid process templates found
Console.WriteLine("{0}: No valid process template found!");
});
}
else if (validProcessTemplateCount > 1)
{
// when multiple process templates found that closely match, report it
Console.WriteLine("{0}: Multiple valid process templates found!", project.Name);
}
}
}
}
}
NOTE: You need to run the code on a machine that has the Application Tier installed. So if you have Visual Studio on the Team Foundation Server you can run the code, else compile the code, copy the application to your Application Tier and run it from there.
Understand the application
The first four private methods (GetTfsTeamProjectCollection, GetDeploymentServiceHost, GetDeploymentRequestContext and GetTeamFoundationRequestContext) are only to create a requestContext, which you need to run the Configure Features. When we drill deeper into the ConfigureFeatures method. The method is using the class ProjectFeatureProvisioningService which is the service to Configure Features. Feature subscribe to that service, and the features that are subscribed is currently hard coded. Each feature implements an interface, including it is a hidden feature (that means that it is not critical to be configured), their State (NotConfigured, PartiallyConfigured or FullyConfigured), the Validation and the Provisioning.
The first step is to determine whether the team project needs to be configured. So we ask if all features are either configured (partially or fully) or hidden.
if (projectFeatureProvisioningService.GetFeatures(context, project.Uri.ToString()).All(f => (f.State != ProjectFeatureState.NotConfigured || f.IsHidden)))
Then it is validating all the process templates. During the validation the service is determining whether the settings stored in the process template can be applied to the current team project. Take a look at the deep dive post if you want to know more about the Validation process.
IEnumerable<IProjectFeatureProvisioningDetails> projectFeatureProvisioningDetails = projectFeatureProvisioningService.ValidateProcessTemplates(context, project.Uri);
That method returns the validation details, which contains information like whether the process template is valid and the errors and warnings that it encountered during the validation. It counts the number of valid process templates, and acts upon it.
int validProcessTemplateCount = projectFeatureProvisioningDetails.Where(d => d.IsValid).Count();
The current implementation logs a text message to the console window when there is no or multiple valid process templates, but you can use the details to find out why process templates are invalid in case a team project.
if (validProcessTemplateCount == 0)
{
// when there are no valid process templates found
Console.WriteLine(“{0}: No valid process template found!”, project.Name);
}
…
else if (validProcessTemplateCount > 1)
{
// when multiple process templates found that closely match, report it
Console.WriteLine(“{0}: Multiple valid process templates found!”, project.Name);
}
If there is only one valid process template, the current implementation provisions that team project to configure the new features for TFS 2012.);
}
Thanks for the code Ewald, but is there an updated version for the RTM / Update 1 bits? It doesn't appear that Microsoft.TeamFoundation.Server.WebAccess.WorkItemTracking.Common.dll is a valid assembly as of the RTM. And the MSDN Library has no API docs for the ProjectFeatureProvisioningService…
social.msdn.microsoft.com/…/en-US
(Did somebody forget to generate the TFS 2012 API docs…?)
Thanks,
Matt
My bad… I figured it out.
I was searching the assemblies at [TFS Install Path]Tools, but the DLL I was missing was under [TFS Install Path]ApplicationTierWeb Servicesbin.
This doesn't resolve the issue that the MSDN documentation is not up-to-date, but at least the binaries are there.
We had our own issues after Update 1 when returning from GetDeploymentHostServer:
Unhandled Exception: Microsoft.TeamFoundation.Framework.Server.DatabaseInstanceException: TF30045: The instance information has not been configured or is not available for this Team Foundation Server. Please contact your Team Foundation Server administrator.
This was a result of not having updated the DLLs in the project after the update, and keeping those DLLs in the bin folder on the app tier. Once I updated the project's dlls to the new version, the program began working again.
I'm surprised at how fragile TFS seems to be and how inscrutable its errors are.
I don't know if the Update level matters, but with Update 2, I also needed to copy Microsoft.TeamFoundation.Server.Core.dll to get this to build.
@Burton, you are correct. I updated the post to correct that.
Is there an updated version of this code to target TFS 2013?
@Angela – I replaced the above TFS assemblies with their v12.0 equivalents and that got me 90% of the way. Right now, I'm getting a compilation error on the return type of the GetContext method (which returns type TeamFoundationRequestContext). The error indicates:
"The type 'Microsoft.VisualStudio.Services.WebApi.VssHttpClientBase' is defined in an assembly that is not referenced. You must add a reference to assembly 'Microsoft.VisualStudio.Services.WebApi, Version=12.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a'."
I added this and was able to compile. But I don't see the association in the MSDN documentation and it adds a bunch of other assemblies to my project that I'm not using so I'm not sure why this namespace is needed.
If Ewald is still monitoring this, maybe he can elaborate…?
FYI… I've created an MSDN Forums question documenting where I'm at with this if anybody wants to contribute suggestions…
social.msdn.microsoft.com/…/feature-enablement-app-fails-with-tfs-2013
Thanks!
FYI: we have an updated version of the tool which works with TFS 2013. You can find source code here: features4tfs.codeplex.com
Thanks,
Oleg
How to update projects upgraded from TFS 2008 to TFS 2013.. will the above source code works for that too?
The code works only from 2010 and beyond. You first need to do the steps to add the changes between 2008 and 2010. There is a blog post that describes how to achieve this: blogs.msdn.com/…/configure-features-for-process-templates-based-on-v-4-2-process-templates.aspx
Hi, the path for "Microsoft.TeamFoundation.Server.WebAccess.WorkItemTracking.Common.dll"
is : C:Program FilesMicrosoft Team Foundation Server 12.0Application TierTFSJobAgentPlugins
I have tried given above all steps to update deployed process template of one team project. But I am unable to configure the features. It is given following error.
" There are no process template available with valid configuration settings for this ream project. Your team project cannot be configured automatically."
Please let me know if any body know the better solution to resolve this issue. I will be thankful to you.
Thanks.
Muhammad Waseem Alvi
TFS Administrator.
@Muhammad Waseem Alvi: You should see more information in the Event Viewer of the Application Tier. In the logs it shows why it does your installed process template don't meet the requirements for your team project(s)
We are triyng to adapt this script for TFS 2015 RTM, but we get this exception :
System.NotSupportedException: Specified method is not supported. at Microsoft.TeamFoundation.Integration.Server.CommonStructureService.Microsoft.TeamFoundation.Framework.Server.ITeamFoundationService.ServiceStart(TeamFoundationRequestContext systemRequestContext)
on this call
ProjectFeatureProvisioningService.GetFeatures(…).
Any idea?
|
https://blogs.msdn.microsoft.com/devops/2012/05/31/how-to-configure-features-for-dozens-of-team-projects/
|
CC-MAIN-2018-09
|
en
|
refinedweb
|
Objective-C: the More Flexible C++.
This thread is awesome.
I love open threads. Look at this below - beautiful. Spanning from 2002 to 2010, there is a gradient of acceptance and spite which spans the birth and explosion of the iPhone. Surprisingly, the questioning of retroactive political correctness (c.2003) got the most attention.
I'd like to learn Objective-C. But I have better things to do, and a heap of legacy Java code. I just "learned" J2ME, which is a dung heap basically built for the "halcyon" monochrome days of Nokia. Smartphones are the future. Too bad Apple is cock-blocking Java, although I have to say in the last 8 years Java has been bastardized beyond recognition (FX, packages, J2ME etc.). Perhaps in part due to the question: how could Java defend itself while not generating any direct revenue?
We'll see about Objective C, and whatever Objective C++. Seems like the iPhone is the biggest player there. Who ever thought that C++ was the end-all, be all of languages? I guess I did.
Maybe more flexible, but definately not productivity.
Objective-C, maybe more flexible (still arguable) OOP than C++,
but definately not for it Productivity using this language.
(length of the code, readability, maintenance, etc)
For eg, below is the code required just to get a date-time data from an index on array of DateTime in Obj-C, in IPhone Cocoa Platform.
// <<<<<<<<<<<<< start of code section >>>>>>>>>>>>>>>>>>
NSDateFormatter *df = [[NSDateFormatter alloc] init];
[df setDateFormat:@"EEE, dd MMM yyyy HH:mm:ss ZZ"];
NSLocale *local = [[NSLocale alloc] initWithLocaleIdentifier:@"en_US"];
[df setLocale:local];
NSDate *theDate = [df dateFromString:[rssData[m_selectedGroupIndex].pubDate objectAtIndex:m_selectedArticleIndex]];
int month,year, day, weekday, hour, minute, sec;
[df setDateFormat:@"yyyy"];
year = [[df stringFromDate:theDate] intValue];
[df setDateFormat:@"MM"];
month = [[df stringFromDate:theDate] intValue];
[df setDateFormat:@"dd"];
day = [[df stringFromDate:theDate] intValue];
[df setDateFormat:@"c"];
weekday = [[df stringFromDate:theDate] intValue];
[df setDateFormat:@"H"];
hour = [[df stringFromDate:theDate] intValue];
[df setDateFormat:@"m"];
minute = [[df stringFromDate:theDate] intValue];
[df setDateFormat:@"s"];
sec = [[df stringFromDate:theDate] intValue];
// do something....
[df release]; // must, otherwise program crash.
[local release]; // must, otherwise program crash.
// <<<<<<<<<<<<< end of code section >>>>>>>>>>>>>>>>>>
This definately is NOT the shortest code among the OOP variant (C++/Java/C#/Obj-C).
Another major problem with Obj-C is it memory management, it is not purely programmer skill that can decided, but the way the object handling the memory in some sort like 'black-box' is a big problem,
for eg, when can free the object ? there is only a method called 'retainCount', but even these, it is not working, offical state that stated so in the document. This forced the developer to implement own obj state - and this increase the code needed and is more error prone and reduce productivity, if the object forget to free, the app will crash. There is no constructor and destructor as in C++ and JAVA, and it is not possible to support something like
{
MyClass *obj1 = new MyClass("Hello");
// Obj-C:
// [MyClass *obj1 = [[MyClass alloc] initWithFormat:@"%@",@"Hello"];
... do something on myclass
delete obj1; // obj-c : [obj1 release]
obj1 = NULL; // this not work in Obj-C, cause you are sending a NULL message to obj1, not re-assign the pointer, telling other that obj1 instance is no more exist/valid.
.... do something
// end of the program, doing cleanup.
if (obj1) // This not work in Obj-C
delete obj1;
// Obj-C version:
// Ideally, work is way, but it is NOT, never rely on retainCount, not work, officially state in the official docs.
if ([obj1 retainCount] > 0])
[obj1 release];
}
Obj-C have exist for some time, not a new lang, but until now, the memory handling problem is still exist, nothing have been improved, nothing have been done to correct the 'retainCount' defect.
Alhtough there is now Obj-C++, but not available on IPhone platform, not a good news for those who want to be developer for IPhone.
programmer should focus as much as possible on logic/biz logic, not too much on handling simple memory allocaiton and deallocation and state
(of-course, not to abuse memory used and not for case like linked-list, binary-tree kind of code, that is unavoidable, but here is just a very basic memory management case).
Obj-C memory handling definately add up very much time on the development time and resource.
About Java, one of problem is the code cannot divide into separate file. (whereas all other C++ variant can, include C#).
Sun say reason behind it, but the real reason is that their architect problem/limitation, which, to rectify now is not much possible due to compatible with older program issue.
About Apple blocking Java and Flash:
1. Apple force developer using their XCode to develop App on their IPhone, for the reason ther can then generate revenue on Mac machine sale and XCode license, annual subscribtion fee.
Many not realize that IPhone SDK is run on Apple Machine only, not Windows, neither Linux, for the fact that 99% of the programmer is using Windows (mostly) and/or Linux (smaller part). So you need to buy the macine from Apple, just for developing IPhone App.
(Apple NOT EVEN mention supported platfrom on their SDK download, many innocent developer even not realized this after find some 'hard' way to successfully download the huge 2.4GB + SDK).
2. Apple block Flash revealed at last - to facitate their own Ad network - iAdd.
(I have develop in C/C++ for some time, not as long as Bill Gate and Steve Jobs, however, already > 15 Years in various platform, dated back since Commodore Amiga day, using AimgaBasic in 1988 since I'm still in secondary-school, playing EA game like F-18 Interceptor. I have writeen many Windowsce commercial, fat/thick-client application. Last year I wrote a local News App for a customer in Obj-C, and it ranked #1 in that category. I cannot disclose the App name, due to Non-disclose-Agreement with Apple, and comment non on favor on apple will make your app disappear on it App-Store. And my conclusion on comparing this these 2 platform and language is - Apple should use C++ for it SDK platform, not Obj-C, for programmer productivity and reliable App (not easy crash) reason.)
(5425)
Now let's rewrite that cleanly
I'm not an expert in ObjC (only having a few months of experience) and I never used date formatter class. Even I managed to see that you're wrongly blaming the language.
1. Use categories. You know that you'll be getting 10 integers with various formats from a date? Well, extend that NSDateFormatter class and add a function for grabbing the appropriate think.
2. You may even want to define k-onstants for various formats to improve readability. (I didn't do that.)
3. Use that autorelease pool. It's there for a reason, and Apple uses it extensively in their code. Autorelease pool exists in the mainloop and gets automatically flushed whenever your code returns to the mainloop. You can declare extra pools if you need them. See main.m in the templates; you can see an autorelease pool declared there since Apple APIs use it so extensively. (I declared one in my main(), below.)
Here's a full program and a Makefile. I'm even fairly certain it should work under GNUStep on Linux. (Although I didn't test, and I think categories might be a 2.0 addition that most of the articles I read claim to be unsupported with GNUStep.)
<<<<<<<<<<<<< objcdate.m >>>>>>>>>>>>>>>>>>>>>
#import
@interface NSDateFormatter(withIntVal)
-(int)intWithDate:(NSDate*)date format:(NSString*)format;
@end
@implementation NSDateFormatter(withIntVal)
-(int)intWithDate:(NSDate*)date format:(NSString*)format
{
[self setDateFormat:format];
return [[self stringFromDate:date] intValue];
}
@end
void testDateFromRSSString(NSString* dateString)
{
NSDateFormatter *df = [[[NSDateFormatter alloc] init] autorelease];
[df setDateFormat:@"EEE, dd MMM yyyy HH:mm:ss ZZ"];
NSLocale *loc = [[[NSLocale alloc] initWithLocaleIdentifier:@"en_US"] autorelease];
[df setLocale:loc];
NSDate *date = [df dateFromString:dateString]; // already autoreleased
int year = [df intWithDate:date format:@"yyyy"];
int month = [df intWithDate:date format:@"MM"];
int day = [df intWithDate:date format:@"dd"];
int weekday = [df intWithDate:date format:@"c"];
int hour = [df intWithDate:date format:@"H"];
int minute = [df intWithDate:date format:@"m"];
int sec = [df intWithDate:date format:@"s"];
printf("Input: %s\n", dateString.UTF8String);
printf("Year: %d Month: %d Day: %d Weekday: %d Hour: %d Minute: %d Sec: %d\n", year, month, day, weekday, hour, minute, sec);
}
int main()
{
NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
testDateFromRSSString(@"Tue, 09 Mar 2010 00:01:02 +0100");
[pool release];
return 0;
}
<<<<<<<<<<<<<<<<<<< end objcdate.m >>>>>>>>>>>>>>>>>>
And the makefile:
<<<<<<<<<<<<<<<<< Makefile >>>>>>>>>>>>>>>>>>>>>>
all: objcdate.m
gcc -x objective-c -framework Foundation objcdate.m -o objcdate
<<<<<<<<<<<<<<<<< end Makefile >>>>>>>>>>>>>>>>>
Now, wasn't that shorter and simpler?
The reason code crashes is due to programmer's carelessness. When refcounting is included in the API itself (practically in the language itself), yet the programmer retains fine control over it, then the places where the program can crash is much reduced. Along with smart behavior when it comes to null pointers being passed as "self" into functions, my ObjC code gets written much faster than it was when I wrote C++ code.
You say ObjC memory management adds to development time. Sure it does, since you obviously don't use autorelease. Having autorelease pool flushed at a sensible place like the main event loop means I can return an object with my mind at ease, knowing that whoever grabs it has a duty to retain it, or else it'll get flushed when the code goes back into the mainloop. And most of the time, the object does indeed get flushed very soon. But my mind is at ease, since whoever got the object can still retain it. Refcounting is magical, without hiding itself as with garbage collectors!
C++ cannot easily get refcounting implemented at such a low level. ObjC trains you to refcount properly!
You say that [obj1 release] does not work. It does, if you consistently use retain, release and autorelease according to specs. I never had a problem where it wasn't a human error, and I never had to use [obj1 retainCount] except while debugging.
Finally, I tried out Flash Builder in CS5. What it produced sucks horribly. I have a single bitmap image scaling and fading out with alpha blending over the course of 25 frames. I estimate I'm getting 6-7 FPS on an iPhone 2G. That's unacceptable; this is a single image over a black background. Deploying an app takes a minimum of 1m20s. That's not counting that you still didn't transfer the produced .ipa to the device. Flash on iPhone sucks.
My biggest problems with Cocoa and Objective-C are:
- Apple did not continue producing YellowBox (Cocoa for Windows)
- Cocotron is a third party implementation of Cocoa for Windows and still works unreliably (be prepared to fix the framework and submit the patches back to Cocotron's most amazing developers)
- GNUStep people are not trying to implement Cocoa, they are trying to implement OpenSTEP, meaning I cannot count on porting to GNUStep to be an easy endeavor as soon as I step away from Foundation frameworks. Maybe this will change, or has changed since most of the docs I read were written.
Cocoa does have a few gaping holes and so does Objective-C, but your article and code demonstrate none of those.
What you are primarily
What you are primarily attacking is an SDK. The primary language is the implementation of the compiler, not the libraries the compiler considers to be standard. Attacking a language based on its standard library is a ridiculous thing to do. If you have such an issue with the SDK I recommend you create a "wrapper" for it. By the way, your code looks ridiculously overcomplicated for providing the date.
(Apple NOT EVEN mention supported platfrom on their SDK download, many innocent developer even not realized this after find some 'hard' way to successfully download the huge 2.4GB + SDK).
If the developer in question isn't smart enough to know this, they're not smart enough to program for it. And what is this 'hard' way you speak of? Sounds like something illegal, making the developer not-so innocent.
when can free the object ?
Try something along the lines of [myobject release]; There are also auto-release pools that allow for the object to be released when the pool is released. This simplifies programming to an extent.
Obj-C memory handling definately add up very much time on the development time and resource.
Provide a solid example of this. Objects are allocated quite easily.
MyObject *pObj = [[MyObject alloc] init];
/* do stuff with the object */
[pObj release];
Apple blocked Java and Flash to keep other developers from executing code that Apple can't validate themselves. Although this was probably intentional to make Apple money, it also adds a security feature to the iPhone. The user can't get any app that hasn't been validated (unless they jailbreak their phone) and therefore can't get any sort of a virus unless they go out of their way, in theory. (In practice, this may be different.)
The number of years you've been programming provides no merit by the way. Your ability to program does and solve problems does. I'm not attacking you, but showing why your statement is irrelevant. Programming for years measures up only to as much quality as you've programmed and how far you've expanded, in my opinion.
Your English does not appear to be well structured. Normally this can indicate not putting an effort into it, which reflects poorly on your programming skills. So I recommend cleaning up on it a little bit. Again, not attacking you; just recommending something.
Those of you who view this post as harsh, go away.
Those of you who have genuine corrections and criticisms (either beneficial to me or others), please share them.
Cheers.
Don't forget to mention that
Don't forget to mention that autorelease is an even better way to approach most of the work with objects, as long as you are careful:
MyObject *obj = [[[MyObject alloc] initWithArgument:arg] autorelease];
or even, if you write object well (or if the object is written well):
MyObject *obj = [MyObject myObjectWithArgument:arg];
in which case the conventions say the object will be allocated and autoreleased. Then, one can retain the returned object, or just leave it alone. In mainloop it'll be released, and if it wasn't retained, it'll get deallocated as well.
I love autorelease.
Speaking language and Computer language is 2 different thing.
> Your English does not appear to be well structured. Normally this can indicate not putting an effort into it, which reflects poorly on your programming skills.
What a theory behind this ? English on forum and on program code is two totally different thing. we are not written an essay here, neither an English-Literature.
In code, of-course it have to well structured, otherwise hard to maintenance and read.
In an analogy, people with keypad only handphone wrote a SMS "whre r u, bro ?" does not mean his is not well structured, it only mean spelling error and/or limited time, or write in rush hour.
In second analogy, All non-Enlgish speaking programmer have bad-structured code, because their english is not well structured as appear on forum. So the German cannot write a good program (sorry Siemens, Sorry SAP), neither France and Russia. Neither the Jap, The Korea and the China etc.
> So I recommend cleaning up on it a little bit. Again, not attacking you; just recommending something.
Time problem .... just have time to clean up in the code, not in forum comment. Apology if it cause hard to read.
Your are being racist and
Your are being racist and offensive to people who don't speak English natively, also your comment is full of grammar errors.
Programming languages are not English, they just generally use English keywords. I have spoken to many people concerning this issue in the past, and all of the people claim that poor English skills do not affect their programming.
Please think about your responses more in the future.
Some of point is true, yes, I
Some of point is true, yes, I admit (most programmer actually) am attachking the IPhone SDK, Obj-c might be innocent on some point.
ie, it is implementation problem, not solely language itself.
Below are some comment, not personal attack, just some factfull info / feedback. Also, it seem you work lot on theory, not much on practical - theory and practical not always tally.
below as some comment:
>> when can free the object ?
> Try something along the lines of [myobject release]; There are also
> auto-release pools that allow for the object to be released when the
> pool is released. This simplifies programming to an extent.
Of course we know we can release like [obj release], the problem is we must keep a record of whether the 'myobject' have been released or not, otherwise if it already released, call release again will cause the App crash.
Not to forget that the "suppose to be" retainCount - is NOT working.
(or maybe, I'm wrong, but at least - it is NOT working on IPhone SDK).
Autorelease pool is also "suppose to work", but it is not, for eg, it release time is not determistic, many time, in a loop, we have to explicitly free the obj, thus memory after the obj have been used, not until the whole method, or even worst, the App exit, otherwise if the memory not released by autorelease in time, the memory will used up and the App crash also.
>> Obj-C memory handling definately add up very much time on the development time and resource.
Provide a solid example of this. Objects are allocated quite easily.
> MyObject *pObj = [[MyObject alloc] init];
This is only in the simple case, most obj have something to init during it allocation.
Of course number of years in develop program not equal expertise, I see many can only wrote in some language like VB6, VB.net stuff and only wrote simple App. However, number of years in writting proven commerce / industrial grade program/App is.
Typing Error Correction (2)
> if the object forget to free, the app will crash. There is no constructor and destructor as in C++ and JAVA, and it is not possible to support something like.
It read :
If the object forget to free, it cause memory leak, if the object is "over-free", it cause the App crash.
{
some statement ....
obj1 = [[theObject alloc] init];
< some code >
[obj1 release];
< some statement ... >
// Tentatively, this way should work, but it is NOT.
if [[obj1 retainCount] > 0]
[obj1 release];
}
The retainCount not work correctly can cannot be rely on (non arguable, officially confirmed). after the first obj1 release, retainCount NOT neccessary will be minus 1 after the release call. Anyone rely on the retainCount will cause either cause memory leak (not released due to the retainCount say == 0), or crash (called release cause the retainCount say it is > 0).
(5425)
Typing Error Correction
> About Java, one of problem is the code cannot divide into separate file. (whereas all other C++ variant can, include C#).
Typing correction, it read:
the "class" cannot divide into separate file (not for different class), ie the source code for a class must keep in the same files.
(5425)
Fail
This article is beyond fail. Quake was NOT developed with Objective-C.
Quake runs on the Quake Engine - developed in C and ASM. It also uses a scripting language called QuakeC for game logic.
Nice try trying to make Objective-C hip.
not exactly true, the
not exactly true, the original QuakeEd used objective-c
see:
so if you interpret the quote in the artical
"some famous games including Quake and NuclearStrike were developed using Objective-C."
as, objective-c was used during the development of quake, the statement is completely true.
A game and an application for
A game and an application for a game are very different. For games, you usually care a lot about speed. Implying that the game itself was made using Objective-C is saying "Objective-C is fast enough for demanding games" (as Quake was pretty demanding back in the day). Most game developers choose C++ because it's such a fast language that still offers object-oriented programming.
Now, speed may be less of a concern for a lot of games nowadays, but one has to wonder how much of a benefit Objective-C's benefits are to games. Accepting any method? Well, that's fairly neat. Most games use an event-based messaging system. In C++, this is coded as an inherited class attribute. It doesn't take a ton of time to hookup and the implementation is inherently faster than the Objective-C equivalent.
Jim
A language is not just it syntax, but many other things as well. When your choosing a language to learn and develop in it's vital to consider several factors first:
1) How big is the community behind the language? The bigger the community the higher the probability the language will survive. When a community is big you can count on ports to other platforms, new libraries, feedback on language issues, articles that instruct how to use the language. Essentially the bigger the community the better the support.
2) Is the language open-source or closed source? If closed source, be prepared for the possibility of paying ridiculous fees in the future to use essential tools, libraries, or IDEs
3) Does the language and it's environment rely on any proprietary, or closed-source code (i.e. dot net). If so you risk having to pay future fees to produce commercial software, or being sued.
4) How many developers are there to the language? If relativly few then the language risk being underdeveloped. If there is only one developer and the language is closed-source--what happens if that developer is hit by a bus and dies?
5) Does the language support constructs that make it easy to develop applications rapidly without a serious performance hit? OOP isn't just about pretty abstractions, it's also about efficient designs that catch bugs, in the context of large teams.
6) Wrestling with a compiler and linker in order to get a third-party library to work is hell. This is one of the key reasons so many libraries have reinvented the wheel.
7) What is the learning curve for the language? Does an entrant to the language have to know complex mathematical concepts, or anything else that might prove to be a barrier?
8) Is the language so flexible that it sacrifices performance, readability, and maintainability (I'm looking at you Lisp).
9) Is the standard designed by a single person (or a *very* small group) or is it designed by committee. Cplusplus suffers from the design-by-committee flaw. Look at the D programming language. I can't say it's elegant, but it certainly is less of a headache to work with.
10) Is the documentation and reference material clean, organized, up to date, and regularly maintained? Does it provide articles on complex issues, tutorials for beginners, examples on implementation, and is it easily accessible in multiple formats (web, pdf, etc).
11) I realize some relatively small, and powerful languages (like Eiffel or Ruby) come with books to assist developers, but you have to pay. This is not necessarily a bad thing, but it's not a good thing either. Look at C, and C++ and you will find plenty of web-based books that are free.
12) Is the language natural and intuitive? The reason OOP and Procedural languages are so popular is because people naturally think in terms of objects, and procedures. Yes, I'm aware some of you functional-programs only think and lists, maps, monads, and closures--but your the exception, not the rule.
Anonymous
"Quake and NuclearStrike were developed using Objective-C."
Famous game ? How about the top-10 game like 'Need for speed', 'Command and Conquer', "War Craf" ? There are in C++, not objective-C.
"A decade is an eternity in the software industry. If the framework (and its programming language--Objective C) came through untouched these past ten years, there must be something special about it."
Can be two reason:
1. It is too good that no need to change.
2. It is too bad that no nobody care, or use.
Small talk ? How many commercial, big-project is using small-talk nowaday ?
The top-10 language ranking itself told itself.
Objective-C only now get alive because of IPhone. but how long will it keep ? esp when rival strike back, like Nokia, Samsung, Blackberry, Window-Mobile 7. (no one is using Objective-C).
The lame strikes back? Hahaha.
Good luck with waiting for Nokia, Samsung, Blackberry, or Windows Mobile to strike back. They never understood how to make a good device or operating system and there is no reason to think they ever will. They have always had crap developer programs and expensive tool chains which is no doubt a strong factor in why apps on those platforms are so few and have so much suck value.
Besides you can compile C and C++ into Objective-C programs. Good programmers can work with multiple languages and environments. It's a rare day that I don't work in at least half a dozen different programming languages and four or five operating systems.
It is about productivity and reliability.
Come on, let think rationally, how much commercial and industry software is developed / using Objective-C ?
Even Apple and IPhone Kernel itself is not using Obj-C, only it App.
(IPhone and some Macintosh OS is running Linux, nothing to do with Obj-C).
What we see is Objective-C is used only in expendable platform, like IPhone, where it functionality is not critical and crashing the App will not cause fatal consequence, like Game, Leisure, News App, etc.
For eg, if your know SCADA system, there is zero (0) SCADA software developed using Obj-C. (otherwise, human-live have been eliminate from earth due the the SCADA program crash).
Just see the application crash ratio on App Store. Almost 100% of the App, especially version 1.0, having crash problem. This does not happen on other platform. and it clearly shown there inherited a problem on the language itself. For eg, the "retainCount" method - exist but not work, even offically say so.
About the number of App in AppStore is greater than other, this is questionable. for eg, WindowsCE does not have store, due to many reason, for eg,
Also, it is not true other platform need expensive tools to develop, I have been using Embedded VC++ for WindowsCe since 1999. And J2ME development tools since 2002, PalmOS toolkit since 2000, Symbian C++ SDK since 2003, non of them need to paid to distribute the app to device.
(But for XCode, we need to buy the cert for distribute to device, and NOT once, but every year as long as you want your app on App store).
Also, non of them need huge 2.4+ GB of download on each new version release, even just minor update, for eg, just from 3.0 to 3.1, the whole SDK need to redownload.
You can say the App crash is due to poor programmer skill, but the crash happen also for big-name App like CNN and BBC App, where there have resource to employ the best programmer around the world, so there no need to employ cheap one.
It is true that Good programmer can work with any language, but the different is productivity and time to develop. Of course you can use assembly lang to develop an accounting software, but does it make sense ?
(unless you are R&D student in U that use tax-payer funded government fund to do this way, for eg, use 10 years to write a simple accouting app for own use)
WindowsCE, not only exist on Windows-Mobile device, but in many embedded system as well, where it reliability have been proven and used in many commercial system, like industry grade handheld devices by Intermec, Held-Held Computing Dolphin series, Symbol, etc.
IPhone get popoluate also largely due to it come on the right time - with 3G network available and polular, whereas WindowsCE have been exist on time even GPRS does not exist.
Apple choose Obj-C for it SDK is also largely due to Apple own Objective-C (and pround of it), and it Mac App is also Obj-C based also.
Addendum
Some addendum
> For eg, if your know SCADA system, there is zero (0) SCADA software developed using Obj-C. (otherwise, human-live have been eliminated from earth due the the SCADA program crash).
I mean here is SCADA system used in Nuclear Power Plant.
> About the number of App in AppStore is greater than other, this is questionable. for eg, WindowsCE does not have store, due to many reason, for eg,
One of the reason is that there is too many WindowsCE program developed, almost every programmer in the earth involve partly, or fully in Windows platform, Just imaging, if there 1 IPhone App programmer, there is at least 100X WindowsApp programmer, if there is million of IPhone App, there is at least Trillion of WindowsCE App exist, this is not kidding, Microsoft will need a very very huge resource on evaluate, checking and approve the App and hosting the App, and the transaction if the app is a paid App. Also Windowsce, Java Mobile App etc, no need to depend on App Store to distribute, (can distribut directly) so there is no such a must need for App store, and the number of App on these platform (WindowsCE, Java, Symbian etc) cannot be counted.
> Apple choose Obj-C for it SDK is also largely due to Apple own Objective-C (and pround of it), and it Mac App is also Obj-C based also.
I mean here there (stupidly) pround of it due to it is developed from them (directly or indirectly), not because it is the developer favourite choice / prefered lang.
wp
Um
What are you, 12?
Be constructive and factful
Be constructive and factful debade, not personal attack. dont act like child.
LocalBeacon.
Source for the List example portion of this tutorial...
Source for the list portion of this project for NEXTSTEP 3.3 is now available at this link (or until the site closes)...
The Project Builder project I created is a command line tool, not an application. Also the classes have been named so there are no name collisions with the NS Foundation class library installed in NS 3.3 Developer.
The PB project is set to build quad fat, select Options in PB and deselect the platforms you don't want.
Here is the output of the run.
--- start paste of output
NS33> ./ListProject
5 3
sum: 10
no clue!
--- end paste of output
Enjoy
window maker is NOT a GNUstep application.
window maker has nothing to do with GNUstep except the looks and it being the only window manager that works well with GNUstep currently. hopefully it will soon be replaced by a real GNUstep window manager
Re: Objective-C: the More Flexible C++
What is this "she" crap all about? Can we please stick to programming and not social agendas? We just want to program.
Appreciated the info about Objective-C by the way.
What is this "What is this 'she' crap" crap all about?
Get a life and stop whining / nitpicking about small things like this.
Re: What is this
He wouldn't have nitpicked about it if the author hadn't.
That's backwards. Having to
That's backwards. Having to bend over backwards to say "he/she" every time is more nit-picky than just picking a pronoun and using it.
What possible "social agenda" does one promote by using "she"? (I guess if you're living in the 1950's, the idea that women are doing something outside of the kitchen is pretty radical, but we haven't been living in the 1950's for a couple years now.)
Oh come on, we all know the
Oh come on, we all know the vast majority of programmers are males.
The author is clearly trying to get a message across.
If I was writing an article about construction, I would use "he"; if I were writing about shopping, I'd use "she".
This is very strange. I am
This is very strange. I am a woman and I am learning Objective-C, as are several of my female friends. Seems very normal for women to do this.
Like minds tend to attract.
Like minds tend to attract. If you're learning a language, I'd expect a portion of your friends to be equally interested.
The majority of programmers are male, and this hasn't changed. Yes, I know a number of females who program, but I know a lot more males who do. Females also tend to be more hobbyist programmers than males.
Really? That's news to me.
Really? That's news to me. At every company I have ever worked, female developers have slightly outnumbered male developers. The ratio is roughly 1:1. I think you may be using an out-of-date version of reality. Try fetching the latest version from your distro's repos. I'm pretty sure sexism was deprecated in the '60s.
HAHAHA I love that reply :D
HAHAHA
I love that reply :D
Ditto
Nicely said.
either way
I find it sexist either way, if an author were to use he or she in such an article. Why not use they? it fits in the context just as well and keeps people from arguing about such nonsense
What about IT
Or we could just reply to everyone all the time as "IT" until someone had the opposite reaction and then we'd just go back to he/she as we felt and no one would even notice let alone care whether he or she was said.
In the end it doesn't really matter and I don't think any reasonable person is offended by the use of he or she interchangeably.
Re: Objective-C: the More Flexible C++
I don't think the author's choice of words took away from the article, but this comment does! If it wasn't for mine and your post all comments would have been about programming. I guess you got to follow your own "social agenda" huh?
What ?
The fact is the author initiated the query with the comment (for whatever reason ... which I believe is the original responders complaint/comment). It certainly didn't add to the article ... so why make the quip ?
Re: Objective-C: the More Flexible C++
as close to Smalltalk as a compiled language can be.
Uh, Smalltalk is a compiled language. So the closest thing to Smalltalk that is a compiled language is Smalltalk itself!
Re: Objective-C: the More Flexible C++
Uh... check your facts again. Smalltalk is interpreted.
Languages are neither. Imple
Languages are neither. Implementations are.
FWIW, most implementations are compiled (to bytecodes).
By this argument, Python,
By this argument, Python, JavaScript, and Perl are all compiled languages. The only useful distinction between a "compiled language" and an "interpreted language" is "a language whose standard implementation compiles source code to object code designed to be run directly by a hardware processor (with a caveat for VMs)" and "a language whose standard implementation either interprets source code directly or interprets a bytecode format derived therefrom."
A more useful distinction lies between "static" and "dynamic" languages, of which many languages are arguably hybrids.
</semantics>
Re: Objective-C: the More Flexible C++
> some famous games including Quake and NuclearStrike
> were developed using Objective-C.
Quake was not written in Objective-C. Anyone can verify this in two minutes by downloading the source code from Quake.com. Carmack did write a level editor called QuakeEd in Objective-C, but the game itself was not. Please update the article to avoid misleading people into thinking that Carmack would write a game in Objective-C.
> Objective-C is (rightly) getting more attention because
> it is more flexible than C++ at the cost of being slower.
Most of the Objective-C constructs can be implemented in C++ without much difficulty, but the inverse is not true. C++ is more flexible, end of discussion.
> Compile-time and link-time constraints are limiting because
> they force issues to be decided from information found in
> the programmer's source code, rather than from information
> obtained by the running program.
Wrong. See any major API written in C. Let us take Win32 as an example. It is written in C, a statically-typed language, yet I can superclass windows as I please while the API DLL's are running. That is not so odd once you consider that dynamically-typed langauges are implemented using statically-typed languages. Your own explanation of the C underpinnings of Objective-C features confirms my point.
How about no?
All languages are equal in expressive power. This is a core tenet of linguistics and computer science alike.
The bottom line is any Turing-complete language can accomplish any computable task. It is utter absurdity to claim that any one language is "more flexible" than any other. They can each do anything.
Just as surely as you could write a C++ compiler in Objective-C (though why you would do this is anyone's guess), you could similarly insanely write it in brainf*ck or Python.
So can we please top the language war flames? This is as insane as claiming that Chinese poetry is more flexible than Turkish poetry.
Re: Objective-C: the More Flexible C++
Most of the Objective-C constructs can be implemented in C++ without much difficulty, but the inverse is not true. C++ is more flexible, end of discussion.But C++ is ugly as ***** and confusing as hell. Objective-C is neither of these things. And you will never find a better set of frameworks than OpenStep.
Re: Objective-C: the More Flexible C++
> Most of the Objective-C constructs can be implemented in C++ without
> much difficulty, but the inverse is not true. C++ is more flexible, end
> of discussion.
By this "logic", assembly is the most flexible language.
Re: Objective-C: the More Flexible C++
> By this "logic", assembly is the most flexible language.
No.
Most of Objective-C's "dynamic typing" features hinge around being able to pass messages around via a very simple run-time system. This system can be easily and cleanly wrapped into C++ code that greatly resembles the Objective-C constructs in simplicity and usefulness.
You cannot draw the same comparison between C and C++, between Objective-C and C++, or even between assembly and most other languages. Hence, C++ is more flexible than Objective-C.
Re: All right, I'm curious...
Would you please give us some examples... I guess they would come up pretty ugly if you truly want to "emulate" Obj-C messages in C++..
How about protocols and "protocols-inheritance"?
If anyone is able to do it cleanly in C++ I would be _extremely_ surprised... so, surprise us!
C++ is more "flexible" when you make C++-oriented programs: Obj-C is different and it does not need much of C++ features (some are nice, I admit, you can use Obj-C++ for them)... If you compare the runtimes Obj-C is a lot more flexible than C++!
Re: All right, I'm curious...
*yawn* It sure is late! I have already implemented the message inheritance thingamajig already without much trouble, but I will post it sometime tomorrow once I have had a chance to clean up the development cruft.
Here is an example of dynamic dispatch...
// This is the class that makes it all possible:
class id
{
int t;
public:
id (int type) : t (type) {}
virtual id* dispatch (int action)
{
return on_unknown (action);
}
virtual id* on_unknown (int action)
{
throw exceptional::runtime_error ("object does not support this action");
}
int type () const { return t; }
};
// Here we have defined some objects and actions for convenience:
namespace objects {
enum {
nil,
example1,
example2
};
}
namespace actions {
enum {
foo,
bar
};
}
// And here are two example classes:
class example1
: public id
{
typedef id base;
public:
example1 () : base (objects::example1) {}
id* dispatch (int action)
{
switch (action)
{
case actions::foo: return on_foo ();
case actions::bar: return on_bar ();
default: return base::dispatch (action);
}
}
id* on_foo () { cout dispatch (actions::foo);
p->dispatch (actions::bar);
p->dispatch (392075);
}
// And our test main function - play around with it a bit!
int main ()
{
try
{
id* p1 = new example1;
id* p2 = new example2;
dispatch_foo_and_bar (p1);
dispatch_foo_and_bar (p2);
}
catch (runtime_error& e)
{
cout << "runtime error: " << e.what () << endl;
}
}
The basic idea is that we have swapped a whole mess of virtual functions for one virtual function, an integer and a switch statement. When a virtual function is called, the most derived class's dispatch function is called. The class attempts to handle the message, but if it cannot, it invokes its base class's dispatch function with the same action parameter. If the base class cannot handle the action, then it invokes *its* base class's dispatch function.
This process continues until either some base class handles the action or it reaches the root class. Naturally the root class, id, is not designed to handle any actions by itself, so it just throws a runtime error.
Re: Here is an example of dynamic dispatch...
Argh! the script ate my example classes and function. :-P
If you cannot guess the code in on_foo etc., please send an email to null_pointer_us AT yahoo DOT com and I will send you a text file attachment containing the code. I would post it on a public server, but unfortunately I do not have any.
|
http://www.linuxjournal.com/article/6009?page=0,2&quicktabs_1=2
|
CC-MAIN-2018-09
|
en
|
refinedweb
|
Issue with using text wrap for international language
I am having an issue with wrapping text within a QML Text element for foreign languages (language that uses Unicode characters like Korean or Chinese). I have the width defined and wrapMode set to Text.WordWrap for my Text element. It works for English language, but when I try to use it for another language, it doesn't work. The text doesn't properly wrap around a blank space as it should.
Any tip is truly appreciated.
Thanks!
Would you be able to paste an example showing this issue?
Thanks,
Michael
Sure, here is a simple example to illustrate the issue.
@import QtQuick 1.0
Rectangle {
width: 360
height: 360
Text { id: textDisplay anchors.centerIn: parent textFormat: Text.StyledText text: " width: 100 font.wordSpacing: 0 wrapMode: Text.WordWrap horizontalAlignment: Text.AlignLeft }
}@
Output:
해리 포터와 죽
음의 성물-1부
As you can see, the issue is that on the first line of the output, the line is being wrapped in the middle of the third word.
It works fine for English text or (text with Ascii characters). This issue is observed in language that is non-Ascii.
Please help.
Thanks!
Hi,
Chinese (and as I understand it Korean as well [edit: on further research, this is incorrect for Korean]) are typically written without explicit whitespace between "words" -- this may be why WordWrap allows wrapping at any character boundary. Do you have any experience with other toolkits and how they handle this? (this is Qt-wide behavior, and not specific to QML). Can you give further detail/links on how this mode of wrapping is used for these languages (maybe Qt needs a WrapAtWhitespace option?)?
Regards,
Michael
Michael, thanks for the reply. I thought Qt uses whitespace to detect the beginning/ending of a word and that's how it's supposed to work for WordWrap. What do you recommend for solving this issue at the QML level?
I wrote a function in C++ plugin that would take in the font pixel size and the field width and based on that it would break the string into separate lines by adding a QChar::LineSeparator to the original string.
It kind of works but it's not optimal because it doesn't take into account other things such as text font, italics, word space, boldness and other things.
What solution would you recommend?
Thanks much.
Hi,
I'm not sure what to suggest -- from what I understand the WordWrap behavior of Qt/QML is correct for these languages (they rightly break at any character boundary rather than just on explicit whitespace). If you could explain further why you need breaks only on whitespace (or could point to other toolkits/articles/etc regarding this) that would be very helpful.
Regards,
Michael
Hi Michael,
Here is the problem that we're having. We have an application that displays the title of a movie in a Text element with width of 100.
Let's take the movie, "Harry Potter And The Deathly." In English, this text is being wrapped correct at the word boundary.
In Korean, the text is " and as you notice, there's a space in between words. When you run the code I posted, you will see that the line breaks at the end of the first character of the third word. It doesn't look good on my application.
If Korean language is treated the same way as English by Qt, I would think it should break into a new line at the end of a word.
Thanks!
Hi,
Okay, thanks for the additional information. I'm not familiar with Korean, and mistakenly thought it was similar to Chinese in the use of whitespace.
Articles like seem to suggest that breaking between two characters is a more common case for Korean (which is different than English). Experimenting with e.g. TextEdit on OS X and IE on Windows seem to show similar default line-breaking for Korean (usually breaks on character rather than whitespace), while Office seems to break on whitespace by default.
If you can give examples or links showing that wrap-at-whitespace is a common/useful option for Korean (the above link at least suggests that it's an option in some programs), I'd highly suggest raising a bug for it using the "bug tracker":.
Unfortunately I can't think of any great solutions for getting WrapAtWhitespaceOnly-type behavior for Korean from Qt (though you could try experimenting with using the lower-level text APIs in Qt like QTextLayout, etc).
Regards,
Michael
As a work-around, would it be acceptable to use rich text, and put the different words in <span> elements for the time being? That should force wrapping at the spaces.
" would become
@
">"
@
For inserting this for single sentences (no line breaks), something like this might work:
@
#include <QStringBuilder>
QString spanify(const QString& input)
{
QStringList elements = input.split(QRegExp("\s"));
QString result = QLatin1String("<html>");
foreach (const QString element, elements) {
result = result % QLatin1String("<span>") % element % QLatin1String("</span> ");
}
result.chop(1); //remove the last space;
result = result % QLatin1String("</html>");
return result;
}
@ author="tuanster" date="1302192466"]
Sorry, I'm out of ideas. It seemed that that should work, but I guess I was wrong. Sorry to have led you down a blind alley.
Just a guess: try to replace
@
<span>xxx</span>
// with
<span style="white-space:nowrap">xxx</span>
@
Thanks for another suggestion.
I just tried it with the "white-space: nowrap" but no luck. It still doesn't work. If I put other attribute like color inside the span tag, the text's color is changed. But the text is still being wrapped in the middle of a word.
QML Text element has a mind of its own.
Any other ideas?
Hello,
Please try this.
I have used Webview. I have tried this and it seems to be working fine for me.
If this is feasible(changing from Text to Webview) or affordable for your application. This may work for you.
@import QtQuick 1.0
import QtWebKit 1.0
Rectangle {
width: 360
height: 360>" }
}@
sorry for wrong code. Please check updated.
>"
}
}@
i have given style width 100px to div but its not coming in comments.
Hi Ioma27,
Thanks for your recommendation. It turns out you also have to add the span tag for it to work.
A sample working code is this:
WebView { html : "<div>부</span></div>" preferredWidth: 140 preferredHeight: 400 }
|
https://forum.qt.io/topic/4797/issue-with-using-text-wrap-for-international-language
|
CC-MAIN-2018-09
|
en
|
refinedweb
|
Example code teaches you how you can download a page from website using URLConnection object. Learn how to download a file from web using Java program and then save into a directory.
AdsTutorials
How to download a file from URL?
If you are looking for example code in Java for downloading and saving a file on your hard disk then this tutorial is for you. You will learn how to write code in Java for downloading a file from URL and then saving the file on your computer.
In this tutorial you will learn how to write a code in Java which takes file URL, local file name and destination directory as argument and then saves the file in destination directory.
This example can be used to download any type of file from URL. It can be text file, image file or simple HTML pages.
Here is the explanation of program:
Program constructs the object of following two classes:
OutputStream outStream = null;
URLConnection uCon = null;
The OutputStream class is used to write in the file on the disk.
Here in this example URLConnection is used to read the data from the URL resource.
Here is the complete code of the program:
package net.roseindia; import java.io.*; import java.net.*; public class DownloadFile { final static int size = 1024; public void downloadFile(String url, String saveAs, String destDir) { OutputStream outStream = null; URLConnection uCon = null; InputStream is = null; try { URL urlFile; byte[] buf; int byteRead, byteWritten = 0; urlFile = new URL(url); outStream = new BufferedOutputStream(new FileOutputStream(destDir + File.separator + saveAs)); uCon = urlFile.openConnection(); is = uCon.getInputStream(); buf = new byte[size]; while ((byteRead = is.read(buf)) != -1) { outStream.write(buf, 0, byteRead); byteWritten += byteRead; } System.out.println("Downloaded Successfully."); System.out.println("File name:\"" + saveAs + "\"\nNo ofbytes :" + byteWritten); } catch (Exception e) { e.printStackTrace(); } finally { try { is.close(); outStream.close(); } catch (IOException e) { e.printStackTrace(); } } } public static void main(String[] args) { String fileUrl = ""; String localFileName = "url-file-download-directory.gif"; String destinationDir = "d:/111"; DownloadFile testDownloadFile = new DownloadFile(); testDownloadFile.downloadFile(fileUrl, localFileName, destinationDir); } }
Above example code downloads the url-file-download-directory.gif from and saves into d:/111 folder.
Read more Java tutorials at.
Posted on: November 8, 2013 If you enjoyed this post then why not add us on Google+? Add us to your Circles
Advertisements
Ads
Ads
Discuss: How to download a file from URL?
Post your Comment
|
http://roseindia.net/java/example/How-to-download-a-file-from-URL.shtml
|
CC-MAIN-2018-09
|
en
|
refinedweb
|
im just trying to open a file.
i have done it for 100 times, and then I sent SIGCHLD signal to other processes and i think right after that i couldn't open that file anymore.
#include <signal.h>
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#define FLAGS IPC_CREAT | 0644
int main() {
int res =open("results.txt",FLAGS);
if(res== -1) { printf("error!!")} //prints it every time
return 0;}
You're doing something strange with the flags. I think your intention is as per below code:
#define FLAGS O_CREAT #define MODE 0644 int main() { int res =open("results.txt",FLAGS,MODE); if(res== -1) { printf("error!!")} //prints it every time return 0; }
|
https://codedump.io/share/hqbOA9J0bxmV/1/open-file-linux-eclipse-c-error-after-getchld
|
CC-MAIN-2018-09
|
en
|
refinedweb
|
Jeff Garzik wrote:> > People from time to time point out a wart in ethernet initialization:> They sure do. You were away at the time, but I had a 94 file,140k patch late last year which fixed all this. It'sat the design doc is at a quick look, I think the only substantive differencehere is that my `prepare_etherdev()' function allocatesand reserves the device's name (eth0), but prevents itfrom being available in netdevice namespace lookups. Thiswas done because lots of drivers wanted to do: init_etherdev(); (Replaced with prepare_etherdev()) printk("%s: something", dev->name);The changes to dev.c and net_init.c were fairly subtleand took some thinking about - we should revisit themif you want to go ahead with this.The patch all worked OK, was back-compatible with unaltereddrivers, and indeed altered all the drivers. But it kind ofgot lost. Too big, too late and dev_probe_lock() was there.Now, Arjan says that this race is causing oopses. Thissurprises me, because current kernels have the the dev_probe_lock()hack which I put in. This fixes the problem for PCI and Cardbusdrivers. The ISA drivers generally use the dev->init() techniquewhich is not racy. There isn't a lot left over. Arjan? Which driver?The other reason I'm surprised that it's causing oopses: mostr becausethe open() routine hasn't been called, but it should hangin there. A subsequent close() of the interface *will*call dev->close, and I guess the driver is likely to getupset if its close() routine is called without a correspondingopen().Yes, we can fix this if we want, and kill off dev_probe_lock().It'll only take a few days. Do we want? If not, we canextend the dev_probe_lock() thing to cover probes forother busses. USB, I guess.--To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to [email protected] majordomo info at read the FAQ at
|
http://lkml.org/lkml/2001/3/7/212
|
CC-MAIN-2015-22
|
en
|
refinedweb
|
Dissecting the post-fix operator
Brian Lugo
Ranch Hand
Joined: Nov 10, 2000
Posts: 165
posted
Feb 06, 2002 17:35:00
0
Hello All!
I am going to try my best to dissect the post-fix operator.
First lets see what some people in the past have to say about this operator:
---------------------------------
Yuhri Hirata (June 15, 2000):
i = (i++);
The reason that the i++ doesn't increment despite the parentheses is that the purpose of the operator itself, ++, is to increment /after/ its use in the expression. Use of the parentheses won't negate that purpose.
Deepak M (July 25, 200):
Three steps in evaluting a = a++;
1. return value of a
2. increment a
3. assign the returned value of a in step 1 to LHS of = operator.
groggsy (August 30, 2000):
int x=0;
x=x++;
x Starts with 0
The right hand side of the = is evaluated (++ is post increment, so x is not incremented yet)
0 is put into a storage area which I will call - Store.
x is then incremented to 1 (because of ++).
Jane (August 31, 2000):
i = i++;
The assignment operator in the original example must somehow cause the postfix operation to complete abruptly; causing it to fail.
robl (September 11, 2000):
I would cahracterize this one as a bug in the JVM.
--------------------------------
These are only couple of things I could collect after doing a search on "Postfix". There were several more posts that I did not go through and hence the myth that I am about to dispel now may have already been dispelled.
Let me start this thread with the following code:
public class TestPostFix { public static void main(String[] args) { int a = 5; a++; // 1 System.out.println("After first increment a = " + a); a = a++; // 2 System.out.println("After second increment and assignment a = " + a); a = a++ + a; //3 System.out.println("a = a++ + a results in a being: " + a); System.out.println("a + a++ = " + (a + a++)); // 4 System.out.println("In the end a = " + a); } }
The output of the above code is:-
-----
After first increment a = 6
After second increment and assignment a = 6
a = a++ + a results in a being: 13
a + a++ = 26
In the end a = 14
----
Lets take a look at couple of things from the JLS Section 15.14.1 before I start rambling:
A. Also, as discussed in �15.8, names are not considered to be primary expressions, but are handled separately in the grammar to avoid certain ambiguities.
B. The value of the postfix increment expression is the value of the variable before the new value is stored.
Statement/Code Analysis:
1. The value of this postfix increment operation is 5. Please note that this increment expression is not assigned to any variable. In this statement the post fix increment expression is evaluated. The result of this evaluation i.e. "a = a + 1" is the value of "a"(variable a) is incremented and assigned to "a" i.e. a = 6 now.
2. This is the most notorious statement which took my afternoon!! Several things are happening here. Please keep in mind the statement B above and lets analyze:
a = a++;
This is an explicit assignement statement with Right hand side being a expression. Since the associativity of "=" is from right to left the right hand side expression must be evaluated first.
Now, one might think that all right lets increment the variable a on the right hand side and assign it back to a. Well why not, because the post fix operator definitely has the higher precedence. Well that's not exactly what happens even though the precedence of "++" is higher than "=".
Here is what I believe happens:
a++ is evaluted to the value stored in the variable "a" before incrementing.
So, now we have two expressions:
a = 6 and a = a+1 (i.e. the execution of post-fix operation)
Since the post-fix operation has the higher precedence it is executed first, i.e.
a = a + 1 i.e. 7 is stored in a.
After that a = 6 is executed.
Let me try to explain this wierd behavior in a better way:
For a moment assume that a++ is a function f() that always returns the value in "a" before incrementing. So,
a = f(); What happens in the function, in this assignment case, is irrelevant. What is important is what value is returned from the function. As seen from statement B above the value of the post-fix operator is the value of the variable before the increment occurs.
Hence, I strongly believe that the value of a is updated twice in the statement 2 of the code.
First a = a+1 i.e. a = 6 + 1.
and then final value is "a = 6" however.
You can call this a side-effect.
I believe it will help you if you see a++ as a function f() which works on a and returns a before it increments it.
3. The dreadly statement: a = a++ + a;
Remember the post-fix operator has a higher precedence
. So a++ is evaluated first, and it returns the original value of a i.e. 6. Or rather I should say, a++ returns 6 first and then a++ is executed. When a++ is executed it results in "a" being 7 but don't forget that the operation had returned the value of 6 and that's what will be used for a++. You can call this a side-effect. Now wait, don't go ahead and assign it to the variable "a" yet. Remember the addition operation is left and it has higher priority than the assignment operation. So, the value of a after a++ is executed is added to "a".
i.e. 6 + 7 Hence you get 13.
4. In statement no. 4 of the code there is no explicit assignment statement. The evaluation occurs in the following manner:
13 + 13(14) i.e. 13 is returned by a++ and after that a's value is incremented to 14.
5. In the end it does print a's value to be 14 even though there was no explicit assignemnt on statement 4.
I hope this helps,
Constructive criticism is welcome,
Brian
PS - Don't forget Maha Anna's wonderful discussion on the post-fix operators too:
[Edited by Val to correct the link to Maha's topic]
[ October 09, 2002: Message edited by: Valentin Crettaz ]
Valentin Crettaz
Gold Digger
Sheriff
Joined: Aug 26, 2001
Posts: 7610
posted
Feb 06, 2002 17:40:00
0
Good job Brian
SCJP 5, SCJD, SCBCD, SCWCD, SCDJWS, IBM XML
[Blogroll]
[My Reviews]
My Linked In
I agree. Here's the link:
subject: Dissecting the post-fix operator
Similar Threads
i++i+++;
i=i++
inc / dec Operations precedence
Ternary Operator
doubts getting exponentially INCREMENTed
All times are in JavaRanch time: GMT-6 in summer, GMT-7 in winter
JForum
|
Paul Wheaton
|
http://www.coderanch.com/t/236648/java-programmer-SCJP/certification/Dissecting-post-fix-operator
|
CC-MAIN-2015-22
|
en
|
refinedweb
|
Overriding the Authentication Provider Selection Page
In the first part of this series, we saw how we register and configure our SharePoint site to use Windows Live ID as an authentication provider.. Add references to System.Web, Microsoft.SharePoint.dll, and Microsoft.SharePoint.IdentityModel.dll. The identity model assembly is in the global assembly cache. Therefore, I had to get a copy and place it in the root of my drive to add my references. For a suggestion about how to find and copy the assembly, see the blog post Writing A Custom Forms Login Page for SharePoint 2010 Part 1.
4. Strong name the assembly that you are creating, because you will place it the global assembly cache later.
5. Add a new ASPX page to your project. Copy a page from an existing ASP.NET web application project; copy the .aspx, .aspx.cs, and .aspx.designer.cs files all at the same time. Remember, in this case we want a file that is named "default.aspx", and it will be easier if there is no code written in it yet and there is minimal markup in the page.
6. In the code-behind file (.aspx.cs file), change the namespace to match the namespace of your current project.
7. Change the class so that it inherits from Microsoft.SharePoint.IdentityModel.Pages.MultiLogonPage.
8. Override the OnLoad event. When a user hits a site that has multiple authentication providers enabled, the user is first sent to the /_login/default.aspx page (the page described in step 1). On that page, a user selects which authentication provider to use and then the user is redirected to the correct page to authenticate. In this scenario,.
c. You post back to /_login/default.aspx.
d. You are redirected to the correct login page..
|
http://blogs.technet.com/b/meamcs/archive/2012/05/31/internet-facing-sharepoint-2010-site-with-windows-live-id-part-2.aspx
|
CC-MAIN-2015-22
|
en
|
refinedweb
|
There are some computer games or applications where you need to repeatedly click on the same place on the screen many times. Imagine a game where you chop down a tree by clicking on it. Then a next tree appears on the same spot, so after a while, you need to click there again.
As we are human beings, we like to simplify boring tasks. As programmers, we can simplify this task by creating software to do it for us.
What do we need to do in the autoclicker? To keep things simple, we only set two things: where to click and how often to click. The point where to click can be stored in a variable of type Point, the interval will be set on our Timer.
Point
Timer
The first thing to do is to create a new Windows Form. Then we add our variable to hold the click location.
//this will hold the location where to click
Point clickLocation = new Point(0,0);
Next, we need to set the location for our clicking. One of the ways to do this is to start a countdown. While it is running, the user can point the mouse to the desired position and after the countdown ends, we get its coordinates.
We can do this by activating appropriately long timeout (for example 5 seconds) and then collecting the mouse location. So, we can add a button with following OnClick event handler:
OnClick
private void btnSetPoint_Click(object sender, EventArgs e)
{
timerPoint.Interval = 5000;
timerPoint.Start();
}
We also create a timer to help us get the mouse location. We can get the mouse location from the Position property of the Cursor class. The set location can be displayed for example on the window title.
Position
Cursor
So, let's create the second timer and add the following Tick handler to it:
Tick
private void timerPoint_Tick(object sender, EventArgs e)
{
clickLocation = Cursor.Position;
//show the location on window title
this.Text = "autoclicker " + clickLocation.ToString();
timerPoint.Stop();
}
Now we want to set the main timer interval (we can use the NumericUpDown control). Remember that the lower bound of its interval should be greater than zero, as we cannot set the timer interval to zero milliseconds.
NumericUpDown
Now to the clicking itself. In each time our main timer elapses (add another Timer control to the form), we want to click on a specified position. We cannot do this in pure managed code, we must use the import method SendInput of the user32.dll library. We will use it to synthesize the mouse click. Don't forget to place using System.Runtime.InteropServices; to the beginning of your code when you use DllImport.
SendInput
using System.Runtime.InteropServices;
DllImport
[DllImport("User32.dll", SetLastError = true)]
public static extern int SendInput(int nInputs, ref INPUT pInputs,
int cbSize);
The method SendInput uses three parameters:
nInputs
pInputs
pInputs
cbSize
To keep things simple, we will use only one INPUT structure. We will need to define this structure and some constants too:
INPUT
//mouse event constants
const int MOUSEEVENTF_LEFTDOWN = 2;
const int MOUSEEVENTF_LEFTUP = 4;
//input type constant
const int INPUT_MOUSE = 0;
public struct MOUSEINPUT
{
public int dx;
public int dy;
public int mouseData;
public int dwFlags;
public int time;
public IntPtr dwExtraInfo;
}
public struct INPUT
{
public uint type;
public MOUSEINPUT mi;
};
Each time our timer elapses, we want to click. We do it by moving the cursor to the memorized place, then setting up the INPUT structure and filling it with required values. Each click consists of pressing mouse button and releasing mouse button, so we need to send two messages - one for button down (press) and another for button up (release).
So add a handle to the Tick event of the main timer with this code:
private void timer1_Tick(object sender, EventArgs e)
{
//set cursor position to memorized location
Cursor.Position = clickLocation;
//set up the INPUT struct and fill it for the mouse down
INPUT i = new INPUT();
i.type = INPUT_MOUSE;
i.mi.dx = 0;
i.mi.dy = 0;
i.mi.dwFlags = MOUSEEVENTF_LEFTDOWN;
i.mi.dwExtraInfo = IntPtr.Zero;
i.mi.mouseData = 0;
i.mi.time = 0;
//send the input
SendInput(1, ref i, Marshal.SizeOf(i));
//set the INPUT for mouse up and send it
i.mi.dwFlags = MOUSEEVENTF_LEFTUP;
SendInput(1, ref i, Marshal.SizeOf(i));
}
Finally - we can use a button to start/stop the autoclicking feature. Let's create a button with the following handler:
private void btnStart_Click(object sender, EventArgs e)
{
timer1.Interval = (int)numericUpDown1.Value;
if (!timer1.Enabled)
{
timer1.Start();
this.Text = "autoclicker - started";
}
else
{
timer1.Stop();
this.Text = "autoclicker - stopped";
}
}
That's it. It's quite simple. How can we further improve this program?
We could...
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
[DllImport("User32.dll", SetLastError = true)]
public static extern int SendInput(int nInputs, ref INPUT pInputs,
int cbSize);
General News Suggestion Question Bug Answer Joke Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages.
|
http://www.codeproject.com/Articles/15406/Creating-a-Simple-Autoclicker
|
CC-MAIN-2015-22
|
en
|
refinedweb
|
29 October 2009 10:48 [Source: ICIS news]
LONDON (ICIS news)--Neste Oil's renewable fuels segment posted a third-quarter comparable operating loss of €6m ($8.8m) due to squeezed margins, the Finland-based refining and marketing company said on Thursday.
The comparable operating loss reported in the 2008 third quarter was €3m, Neste said, while the comparable operating loss in the 2009 second quarter was €7m.
The segment's third-quarter revenues were €59m, which was significantly higher than the €27m in revenues reported in the 2008 third quarter and up from the second quarter’s revenues of €38m.
The start-up of a second NExBTL renewable diesel plant at Porvoo, ?xml:namespace>
At the group level, comparable operating profit for the quarter was down 79% year on year at €42m, which was largely due to a 56% year-on-year drop in refining margins, the company said.
Quarterly revenues were down 44% at €2.5bn due to lower oil prices, Neste said.
The reported net profit for the quarter was €74m, up from €34m that was recorded in the 2008 third quarter, it said.
(
|
http://www.icis.com/Articles/2009/10/29/9259031/neste-oils-renewable-fuels-arm-posts-q3-operating-loss-of-6m.html
|
CC-MAIN-2015-22
|
en
|
refinedweb
|
15 June 2012 13:31 [Source: ICIS news]
LONDON (ICIS)--?xml:namespace>
“We've conducted numerous interviews and meetings with shareholders [and] we can see that many of them need additional time to make a decision, so we've opted to extend the duration of the share call,” said vice president of Acron Vladimir Kantor.
On Thursday, Acron criticised the Polish treasury ministry, which has a controlling holding of 32% in ZAT, for recommending shareholders refuse its offer, amounting to (Zl) Zl 1.5bn ($441m, €349m) for 66% of ZAT, as too low.
The offer was a response to
The ministry has stated it believes Acron ownership could disrupt the growth strategy of ZAT, a producer of nitrogen and multi-component fertilizers, caprolactam (capro), polyamide 6, oxo-alcohols, plasticisers and titanium dioxide (TiO2).
The question now is whether a counter-offer from a “white knight” bidder might be submitted and on what price terms, said Wood & Company investment bank analyst Piotr Drozd.
“At this point, with the strategy argument in hand, it seems that the political impact, and not the price, will be the state treasury’s key focus,” said Drozd.
“Without the treasury’s consent, and with counter-bids expected, it should be difficult for Acron, if not impossible, to achieve the targeted 50% (+1 share) minimum threshold [for a successful bid], even assuming a tender price hike to the current stock price level,” he added.
($1 = €0.79)
($1 = Zl 3.40, €1 = Zl 4
|
http://www.icis.com/Articles/2012/06/15/9570185/acron-extends-zat-bid-deadline-possible-counter-bid.html
|
CC-MAIN-2015-22
|
en
|
refinedweb
|
Details
- Type:
Bug
- Status: Open
- Priority:
Major
- Resolution: Unresolved
- Affects Version/s: 0.1
- Fix Version/s: None
- Component/s: Python - Compiler
- Labels:None
- Patch Info:Patch Available
Description
Esteve's last change to how default values are stored broke stuff. Here is a quick example:
{{
service Test
}}
generates
{{
class get_slice_args:
thrift_spec = None
def _init_(self, start=thrift_spec[-1][4],):
self.start = start
}}
which is obviously invalid.
I'm not sure how thrift_spec is supposed to be populated here so I'm unsure how to fix this.
Issue Links
- is related to
THRIFT-105 make a thrift_spec for a structures with negative tags
- Open
Activity
- All
- Work Log
- History
- Activity
- Transitions
Although I'm happy to admit any breakage I cause
I don't think this issue is valid (or at least, not for the reasons exposed). Note that you're not using a field key, and thus the thrift_spec variable is not populated at all. You can fix this using a positive field key:
service Test{ bool get_slice(1:i32 start = -1), }
which should generate this code:
class get_slice_args:
thrift_spec = (
None, # 0
(1, TType.I32, 'start', None, -1, ), # 1
)
I think the compiler should abort if it doesn't find a valid field key and don't generate any code, though.
i thought the field keys are supposed to be optional, which is good, because they fuglify things up quite well.
I spoke too early, it seems my patch DID actually break things
Here's the code that an old compiler generates:
thrift_spec = None
def _init_(self, d=None):
self.start = -1
if isinstance(d, dict):
if 'start' in d:
self.start = d['start']
so, unless we always generate thrift_spec or we use a sentinel, I don't know if this can be fixed. However, I wonder how the fastbinary extension could work if thrift_spec is None. I can change the compiler to generate the thrift_spec variable no matter what, but would like to hear the opinion of others.
Argh, I just realized that even if we always generated the thrift_spec variable, it wouldn't work as it would try to access thrift_spec[-1]
It's quite late here, so I'm probably asking something stupid, but why is thrift_spec a tuple? Using a dict keyed on field keys, would allow the fastbinary extension to support negative field keys, IMHO. But I'm sure there must be a reason why it isn't, apart from dictionaries being mutable. I wish we could use named tuples in Python 2.4 and 2.5
THRIFT-105 is slightly related to this, so Alexander can explain the problem with negative field keys much better than I.
I think it was a tuple for performance reasons. I just thought of a way that we might be able to simplify both this issue and THRIFT-105. tuples can take negative indexes (indices?) just like lists, and they are counted from the end. What if we just extended the thrift_spec tuple so that all of the negative fields were in the right place (after all of the positive fields)? So if you had a struct like
struct foo { i32 bar; i32 baz; 2: i32 qux; }
then the thrift_spec would look like
( BLANK, # 0 aka -5 BLANK, # 1 aka -4 spec_for_qux, # 2 aka -3 spec_for_baz, # 3 aka -2 spec_for_bar, # 4 aka -1 )
This would not disturb the existing thrift_spec fields at all, so the current fastbinary extension would work fine. It should be robust against adding new negative fields to the end, since the indexes (indices?) of the existing fields will not change. I think it would make the patch for THRIFT-105 a lot simpler too, though we might need a little bit of magic to make sure that negative indexes work in the C code.
Thoughts?
This patch should fix this issue, I tested it using both the Python-only binary protocol and the native extension. This should fix THRIFT-105 as well.
Argh, I forgot to add the changes I made to fastbinary.c in the previous patch, here they are.
I implemented it the way you suggested David, the changes to fastbinary.c were minimal and the compiler got a bit simpler. I've tested it, but it needs a review
This looks awesome. There is one thing that I'm a little uneasy about, though, which is that (in my example), a field with id 3 (which should be skipped) would be treated as -2 instead. I think the only solution for this is to provide some extra info to the extension, specifically the maximum expected positive field id, and use that to skip over anything larger.
Thanks! I think that information could be stored as the first member of the thrift_spec tuple (key 0), as it's already empty. It shouldn't take too much work (as it's already in the sorted_keys_pos variable). What do you think?
function result structures use field 0, so how about a new attribute?
Ouch, you're right. A new class attribute (e.g. thrift_tag_limits) for storing maximum and minimum field keys sounds good, but I worry we might end up having too many attributes in the future and I'd like to keep all this information in a single attribute. Anyway, I think I can implement it in a relatively short time.
This patch adds some checks on lower and upper field keys using a new attribute called thrift_limits, I tested it both with the native extension and the Python-based binary protocol
Doesn't fix anything, but makes the compiler source code a bit simpler and cleaner.
Something isn't right here. I think it has to do with resetting the sorted_key_pos inside the inner loop.
For this structure...
struct AllPos { 2: i32 foo; 3: i32 bar; }
I get this output...
thrift_spec = ( None, # 0 None, # 1 (2, TType.I32, 'foo', None, None, ), # 2 None, # 0 None, # 1 None, # 2 (3, TType.I32, 'bar', None, None, ), # 3 )
Fixes repeated fields in thrift_spec
Fixes an issue that with negative field keys, all tests pass.
I had to rewrite my latest patch to fix structures with negative field keys, if they end with something lower than -1 For example, the ThriftTest.testException method generated a thrift_spec like this:
thrift_spec = ( (-2, TType.STRUCT, 'err1', (Xception, Xception.thrift_spec, Xception.thrift_limits), None, ), # -2 )
which caused some errors (i.e. thrift_spec[-2] didn't exist)
It's not as clean or pretty as the previous patch
but it works.
The patch fix default values. The patch is depend on THRIFT-105 patch. The patch duplicate default values in constructor and thrift_spec. I found it far better compare to thrift_spec[xxxx][4].
The patch is small and (I believe) clean enough.
Esteve: We should be able to go back to patch #5 if my solution to
THRIFT-361 is accepted. Thanks for uncovering this bug!
Alexander: At the moment, I prefer the solution in Esteve's patch (I am also biased, because I tried to push him toward that implementation). The main reason is that it is very low impact. The old generated code will continue to work with the new extension and vice versa. This version allows you to simply index into the thrift_spec list with the field id and have it work (without having to add an offset). The limits are completely unnecessary as long as the data is assumed valid. Finally, it allows users to set a field to None in the constructor, overriding the default.
Alexander: I don't fully understand your patch. Actually, it breaks the compiler and default values are not properly handled. For example, given this structure:
struct Foo { 1:optional list<i32> l = [1,2,3,4], }
the code generated after applying your patch is:
def __init__(self, l=None): if l is None: l = [ 1, 2, 3, 4, ]
which is wrong, which makes it impossible to pass None as l to Foo. Also, it doesn't set self.l to l, but I guess it was just a slip. With the current behavior, the generated code is:
def __init__(self, l=thrift_spec[1][4],): if l is self.thrift_spec[1][4]: l = [ 1, 2, 3, 4, ] self.l = l
David: great! However, I think the thrift_limits variable is necessary because, as you pointed out, if we have a structure like the one you added, a field with id 3 would be treated as -2. So, if we had a newer version of that structure, which adds another field (with id 3), wouldn't it clash with the old version?
1. Oops... I have to sleep more. I've fixed the bug.
2. I can't imagine a scenario which makes you to pass None (instead of empty list/set/dict) to init. Can you give me one?
Using the aforementioned structure (Foo), if I want l (an optional member) to be None so that it's not serialized, it would still end up passing [1,2,3,4] to the underlying transport.
Actually, I don't know what's wrong with the current behavior. The problem is in how thrift_spec for negative field keys is being generated.
THRIFT-361 was applied. Does that mean we can apply patch #5 for this now, per David's comments above?
Esteve: Good call on 3 vs. -2.
Don't we also need to adjust type_to_spec_args in order to include the limits for nested structures?
Also, it is unfortunate that adding the limits broke compatibility with the old version of the extension module. It doesn't seem like it will be easy to maintain compatibility, so as long as we are breaking it, we might as well go wild. If you want to combine the field specifiers and limits into a single attribute, that would be okay.
Is this something we could/should commit in 0.1? If so we need to make a final decision today.
Another version of my patch + THRIFT-105.
Shooting for 0.1, if we get review.
I can be wrong, but Esteve's patch doesn't work with old generated code. I believe it's a problem (because we don't break backward compatibility in python library for 0.1).
The big chunk of my patch was reviewed by Mark Slee in THRIFT-105.
> I can be wrong, but Esteve's patch doesn't work with old generated code
I'm confused. The whole point of this ticket is so we can replace old, broken generated code with generated code that works. If you are regenerating what is there to not work with?
Personally I would be most comfortable to get Esteve's buy-in on whatever fix we go with since he was the author of the -242 patch IIRC.
The Esteve's patch tries to fix two (THRIFT-105 and THRIFT-339) issue.
The fix of fastbinary protocol requires three argument instead of two. The change breaks all old-generated code if you use fastbinary even if you hasn't got any fields with negative tags.
The fix from THRIFT-105 (oh, it was six months ago) works fine with old-generated code, but it makes natural fields order: (-1, 0, 1, 2). Esteve uses order (0, 1, 2, -1).
Negative field ids are deprecated, so this doesn't need to block the release.
My case does not deal with negative field ids. (In the example I gave, the field default is -1, not the field id.)
bool get_slice(i32 start = -1),
start has field id == -1
bool get_slice(1: i32 start = -1),
works fine
sync patch with head
I'm deprioritizing this for now because I'm not sure what the status is, or if there's a champion for it.
damn it, jira said {{ }} would monospace my code samples there.
at least they are short enough that it's still pretty clear what is going on.
|
https://issues.apache.org/jira/browse/THRIFT-339?focusedCommentId=12680113&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel
|
CC-MAIN-2015-22
|
en
|
refinedweb
|
Displaying an Image file inside the BorderLayout GUI
I have a quick question regarding the use of image files (.png) in Java's GUI.
I'm working on a public class that extends JPanel, which uses the BorderLayout to display a window. I know how to add sliders, buttons, etc to my BorderLayout window, but what I'm trying to do is display an image.
How can I do this?
Thanks for reading this.
edit:
Or is there any other way to display some sort of image in BorderLayout?
Message was edited by: lucas27
Hi,
JLabel is able to host images, so this is probably the shortest way, without creating a image-display widget:
ImageIcon icon = createImageIcon("images/middle.gif");
label3 = new JLabel(icon);
Found here:
lg Clemens
|
https://www.java.net/node/684529
|
CC-MAIN-2015-22
|
en
|
refinedweb
|
17 July 2009 17:29 [Source: ICIS news]
By John Richardson
SINGAPORE (ICIS news)--If you predict a collapse in any market for long enough it will surely happen - or maybe not in this case.
Many of us have had to wipe large amounts of egg off our faces as the Asian polyolefins price “bubble”, which began mid-November last year, has continued.
Asian olefin and polyolefins prices remained firm as this article was published – close to their levels for the week ending 10 July, despite the recent fall in oil prices.
High-density polyethylene (HDPE) film grade reached a low point of $750/tonne CFR (cost and freight) ?xml:namespace>
Markets have not looked back since, with prices more or less following a constant upward or stable trajectory. HDPE film for August delivery business was recently settled at $1,220-1,280/tonne CFR China.
Pessimists (or maybe realists) continue to question the sustainability of the recovery, however.
This questioning hadn’t lost any of its intensity on Friday 17 July, despite the official announcement that
“Monthly PE demand in
“It averaged 980,000 tonnes in 2007 and 970,000 tonnes in 2008 and yet soared to 1,270,000 tonnes up until May this year.”
This has occurred despite the steep fall in
The reductions seem to be far too big to be entirely replaced by stronger local demand in 2009, largely the result of huge government spending.
One often-expressed concern is that speculators, awash with credit as bank lending has risen, have poured into the local stock market (which has gone up by 75% this year) and all sorts of commodities and exchanges.
More than 80m tonnes of linear-low density PE (LLDPE) were traded on the Dalian Commodity Exchange during the second quarter – around four times global annual demand.
Rising oil prices have contributed to the polyolefins price-rally in the physical market.
“If a lot of this product has gone into storage as either as polymers or finished goods, then at sometime it will re-emerge on world markets,” adds Hodges.
“And if it all comes at one time, when 'everyone' decides prices are falling, there is clearly a risk that it might have a major deflationary impact.”
There are many other factors behind the extraordinary and largely unexpected rebound.
“I remember going to ChinaPlas in May (the big plastics exhibition). There were a lot of Doomsday scenarios being bandied about because of the coming supply surge and the unsustainability of the Chinese economic recovery,” said a Singapore-based polyolefins trader.
“I think the rally is mainly down to tight supply. Production cutbacks in Europe and the
PP has been a major beneficiary: of the 2.77m tonnes/year of capacity, or thereabouts, due on stream in
As little as 1.02m tonnes/year of the 2.77m tonnes/year was understood to have started up by mid-July.
“I think a reason has been that EPC (engineering, procurement and construction) resources were severely overstretched,” added the European source.
“You just couldn’t get enough experienced project managers to oversee these big investments.
“Cost constraints have caused problems as have efforts to commission entire complexes at once when phased start-ups might have been better.”
Gas feedstock shortages resulted in low operating rates at
A substantial maintenance shutdown programme has taken place in
“Operating rates at many crackers unaffected by maintenance were also turned down as rapidly strengthening feedstock prices squeezed cracker and integrated derivative margins close to breakeven,” added ChemSystems
The Middle East and
“When you think about the
“Despite the global economic problems the market is still growing.”
PP has also been tight because small plants in
Refineries have been running at low operating rates due to, first of all, weaker fuels demand and, more recently, higher crude prices.
Tight supply in all grades of polyolefins seems to have so far counter-balanced the $10/bbl or so decline in oil prices over the last few weeks.
“US dollar-priced material (for shipment to
He gave two further reasons for the strength of the market.
“The Chinese government has been building up stocks of goods for disaster relief. It doesn’t want to be embarrassed again by the shortages that followed last year’s
“HDPE yarn grade is very tight, for example, as it is used to manufacture tents.”
Imports of recycled or scrap plastic into
“Consumers in the West are buying fewer durable goods, such as electronics, which arrive wrapped in plastic,” continued the trader.
“During the economic mega-boom lots of this plastic was collected in the
Chinese traders handling recycled material went bust by the legion in 2008. This was the result of the great petrochemicals price collapse which left virgin resin cheaper than re-used product.
Stricter government regulations on scrap imports due to concerns over pollution have also slowed trade.
But still, you keep coming back to the apparent PE demand numbers at the beginning of this article. How can they be up on 2007-08 with the world economy still in deep crisis?
“I know - you have to be worried. Something doesn’t seem quite right,” added the
“The bonded warehouses (where US dollar material is stored) I visited in northern and southern
“But inventories of RMB-priced material held by local traders and distributors seemed to be very high indeed.”
Some contacts we spoke to want a sentiment survey on PE and PP inventory levels in
The feeling is that PP imports and production may also have also increased in 2009, but the data were not immediately available.
“I think many people are having a hard time applying logic to the current market,” the European producer added.
“This is more supply than demand driven and once the new capacities finally start up, we will face a bloodbath.”
Yes, but exactly when? Maybe, just maybe, if project delays continue and the global economic recovery is sustained the air will be gently released from the polyolefins price bubble.
Or, if we can get a handle on inventory levels in
Read John Richardson's Asian Chemical Connections blog and
|
http://www.icis.com/Articles/2009/07/17/9233430/insight-squeezing-the-china-polyolefins-price-bubble.html
|
CC-MAIN-2015-22
|
en
|
refinedweb
|
On Mon, Sep 11, 2006 at 01:01:18PM -0600, Eric W. Biederman wrote:> > Cedric you mentioned a couple of other patches that are in flight.> In the future could you please Cc: the containers list so independent> efforts are less likely to duplicate work, and we are more likely> to review each others patches instead?> > Cedric Le Goater <[email protected]> writes:> > > Eric W. Biederman wrote:> >> >>>> I was just about to send out this patch in a couple more hours.> >>> Well, you did the same with the usb/devio.c and friends :)> >> > >> Good. The you should be familiar enough with it to review my patch> >> and make certain I didn't do anything stupid :)> >> > well, the least i can try ...> >> > >>> * I started smbfs and realized it was useless.> >> > >> Killing the user space part is useless?> >> I thought that is what I saw happening.> >> > smb_fill_super() says :> >> > if (warn_count < 5) {> > warn_count++;> > printk(KERN_EMERG "smbfs is deprecated and will be removed in"> > " December, 2006. Please migrate to cifs\n");> > }> >> > So, i guess we should forget about it and spend our time on the cifs> > kthread instead.> > Sure. Although in this instance the changes are simple enough I will> probably send the patch anyway :) That at least explains why you> figured it was useless work.> > > >> Of course I don't frequently mount smbfs.> >> > >>> * in the following, the init process is being killed directly> >>> using 1. I'm not sure how useful it would be to use a struct pid.> >>> To begin with, may be they could use a :> >>>> >>> kill_init(int signum, int priv)> >> > >> An interesting notion. The other half of them use cad_pid.> >> > yes.> >> >> Converting that is going to need some sysctl work, so I have been> >> ignoring it temporarily.> >> > >> Filling in a struct pid through sysctl is extremely ugly at the> >> moment, plus cad_pid needs some locking.> >> > Which distros use /proc/sys/kernel/cad_pid and why ? I can image the> > need but i didn't find much on the topic.>> I'm not at all certain, and I'm not even certain I care. The concept> is there in the code so it needs to be dealt with. Although if I we> extend the cad_pid concept it may make a difference.>> >> My patch todo list (almost a series file) currently looks like:> >>> >>> n_r396r fs3270-Change-to-use-struct-pid.txt> >> > done that. will send to martin for review.>> Added to my queue of pending patches to look at review.>> >>> ncpfs-Use-struct-pid-to-track-the-userspace-watchdog-process.txt> >>>> >>> Don-t-use-kill_pg-in-the-sunos-compatibility-code.txt> >>>> >>> usbatm-use-kthread-api (I think I have this one)> >>> >> I did usbatm mostly to figure out why kthread conversions seem to> >> be so hard, and got lucky this one wasn't too ugly.> >> > argh. i've done also and i just send my second version of the patch> > to the maintainer Duncan Sands.> >> > This one might just be useless also because greg kh has a patch in> > -mm to enable multithread probing of USB devices.> > Added to my queue of pending patches to track down and reivew.> > > >>> The-dvb_core-needs-to-use-the-kthread-api-not-kernel-threads.txt> >>> nfs-Note-we-need-to-start-using-the-kthreads-api.txt> >> > >> dvb-core I have only started looking at.> >> > suka and i have sent patches to fix :> >> > drivers/media/video/tvaudio.c> > drivers/media/video/saa7134/saa7134-tvaudio.c> >> > we are no waiting for the maintainer feedback.> > Ok. I think I saw a little of that.> > >> nfs I noticed it is the svc stuff that matters.> >> > >> usbatm, dvb-core, and nfs are the 3 kernel_thread users> >> that also use kill_proc, and thus are high on my immediate hit list.> > nfs is also full of signal_pending() ...> Yes, there is a lot to read and understand before I can confidently> do much with nfs.I already did a lot of adjustments to the nfs system, andI poked aound in dvb-core before, so I will take a lookat this in the next few days, at least the switch to thekthread api should not be a big deal ...HTH,Herbert> >>> pid-Better-tests-for-same-thread-group-membership.txt> >>> pid-Cleanup-the-pid-equality-tests.txt> >>> pid-Track-the-sending-pid-of-a-queued-signal.txt> >> > is that about updating the siginfos in collect_signal() to hold the right> > pid value depending on the pid namespace they are being received ?> > Yes in send_signal, and in collect signal. To make it work easily I> needed to add a struct pid to struct sigqueue. So in send_signal I> generate the struct pid from the pid_t value and in collect signal I> regenerate the numeric value.> > > Eric> > _______________________________________________> Containers mailing list> [email protected]> unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to [email protected] majordomo info at read the FAQ at
|
https://lkml.org/lkml/2006/9/12/167
|
CC-MAIN-2015-22
|
en
|
refinedweb
|
Quoting David Howells ([email protected]):> Randy Dunlap <[email protected]> wrote:> > > > +Any task in or resource belonging to the initial user namespace will, to this> > > +new task, appear to belong to UID and GID -1 - which is usually known as> > > > that extra hyphen is confusing. how about:> > > > to UID and GID -1, which is> > 'which are'.> > DavidThis will hold some info about the design. Currently it containsfuture todos, issues and questions.Changelog: jul 26: incorporate feedback from David Howells. jul 29: incorporate feedback from Randy Dunlap.Signed-off-by: Serge E. Hallyn <[email protected]>Cc: Eric W. Biederman <[email protected]>Cc: David Howells <[email protected]>Cc: Randy Dunlap <[email protected]>--- Documentation/namespaces/user_namespace.txt | 107 +++++++++++++++++++++++++++ 1 files changed, 107 insertions(+), 0 deletions(-) create mode 100644 Documentation/namespaces/user_namespace.txtdiff --git a/Documentation/namespaces/user_namespace.txt b/Documentation/namespaces/user_namespace.txtnew file mode 100644index 0000000..b0bc480--- /dev/null+++ b/Documentation/namespaces/user_namespace.txt@@ -0,0 +1,107 @@+Description+===========++Traditionally, each task is owned by a user ID (UID) and belongs to one or more+groups (GID). Both are simple numeric IDs, though userspace usually translates+them to names. The user namespace allows tasks to have different views of the+UIDs and GIDs associated with tasks and other resources. (See 'UID mapping'+below for more.)++The user namespace is a simple hierarchical one. The system starts with all+tasks belonging to the initial user namespace. A task creates a new user+namespace by passing the CLONE_NEWUSER flag to clone(2). This requires the+creating task to have the CAP_SETUID, CAP_SETGID, and CAP_CHOWN capabilities,+but it does not need to be running as root. The clone(2) call will result in a+new task which to itself appears to be running as UID and GID 0, but to its+creator seems to have the creator's credentials.+.++When a task belonging to (for example) userid 500 in the initial user namespace.++Relationship between the User namespace and other namespaces+============================================================++Other namespaces, such as UTS and network, are owned by a user namespace. When+such a namespace is created, it is assigned to the user namespace of the task+by which it was created. Therefore, attempts to exercise privilege to+resources in, for instance, a particular network namespace, can be properly+validated by checking whether the caller has the needed privilege (i.e.+CAP_NET_ADMIN) targeted to the user namespace which owns the network namespace..++UID Mapping+===========+The current plan (see 'flexible UID mapping' at+) is:++The UID/GID stored on disk will be that in the init_user_ns. Most likely+UID/GID in other namespaces will be stored in xattrs. But Eric was advocating+(a few years ago) leaving the details up to filesystems while providing a lib/+stock implementation. See the thread around here:+ notes+=============+Capability checks for actions related to syslog must be against the+init_user_ns until syslog is containerized.++Same is true for reboot and power, control groups, devices, and time.++Perf actions (kernel/event/core.c for instance) will always be constrained to+init_user_ns.++Q:+Is accounting considered properly containerized with respect to pidns? (it+appears to be). If so, then we can change the capable() check in deferred some of commoncap.c. I'm punting on xattr stuff as they take+dentries, not inodes.++For drivers/tty/tty_io.c and drivers/tty/vt/vt.c, we'll want to (for some of+them) target the capability checks at the user_ns owning the tty. That will+have to wait until we get userns owning files straightened out.++We need to figure out how to label devices. Should we just toss a user_ns+right into struct device?++capable(CAP_MAC_ADMIN) checks are always to be against init_user_ns, unless+some day LSMs were to be containerized, near zero chance.++inode_owner_or_capable() should probably take an optional ns and cap parameter.+If cap is 0, then CAP_FOWNER is checked. If ns is NULL, we derive the ns from+inode. But if ns is provided, then callers who need to derive+inode_userns(inode) anyway can save a few cycles.-- 1.7.5.4
|
http://lkml.org/lkml/2011/7/29/296
|
CC-MAIN-2015-22
|
en
|
refinedweb
|
David Li Xing wrote:Hadoop is an outstanding big data solution. On one hand, its low cost and high scalability increases its popularity; on the other hand, its low development efficiency incurs user complaints.
Hadoop is based on the MapReduce framework for big data development and computation. Everything seems to be well if the computing task is simple. However, issues appear for those a little bit more complex computations. The poor development efficiency will bring more and more serious impacts with the growing difficulty of problem. One of the commonest computations is the "associative computing".
For example, in HDFS, there are 2 files holding the client data and the order data respectively, and the customerID is the associated field between them. How to perform the associated computation to add the client name to the order list?
The normal method is to input 2 source files first. Process each row of data in Map according to the file name. If the data is from Order, then mark the foreign key with ”O” to form the combined key; If the data is from Customer, then mark it with ”C”. After being processed with Map, the data is partitioned on keys, and then grouped and sorted on combined keys. Lastly, combine the result in the reduce and output. It is said that the below code is quite common:
public static class JMapper extends Mapper<LongWritable, Text, TextPair, Text> {
//mark every row with "O" or "C" according to file name
@Override
protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
String pathName = ((FileSplit) context.getInputSplit()).getPath().toString();
if (pathName.contains("order.txt")) {//identify order by file name
String values[] = value.toString().split("\t");
TextPair tp = new TextPair(new Text(values[1]), new Text("O"));//mark with "O"
context.write(tp, new Text(values[0] + "\t" + values[2]));
}
if (pathName.contains("customer.txt")) {//identify customer by file name
String values[] = value.toString().split("\t");
TextPair tp = new TextPair(new Text(values[0]), new Text("C"));//mark with "C"
context.write(tp, new Text(values[1]));
}
}
}
public static class JPartitioner extends Partitioner<TextPair, Text> {
//partition by key, i.e. customerID
@Override
public int getPartition(TextPair key, Text value, int numParititon) {
return Math.abs(key.getFirst().hashCode() * 127) % numParititon;
}
}
public static class JComparator extends WritableComparator {
//group by muti-key
public JComparator() {
super(TextPair.class, true);
}
@SuppressWarnings("unchecked")
public int compare(WritableComparable a, WritableComparable b) {
TextPair t1 = (TextPair) a;
TextPair t2 = (TextPair) b;
return t1.getFirst().compareTo(t2.getFirst());
}
}
public static class JReduce extends Reducer<TextPair, Text, Text, Text> {
//merge and output
protected void reduce(TextPair key, Iterable<Text> values, Context context) throws IOException,InterruptedException {
Text pid = key.getFirst();
String desc = values.iterator().next().toString();
while (values.iterator().hasNext()) {
context.write(pid, new Text(values.iterator().next().toString() + "\t" + desc));
}
}
}
public class TextPair implements WritableComparable<TextPair> {
//make muti-key
private Text first;
private Text second;
public TextPair() {
set(new Text(), new Text());
}
public TextPair(String first, String second) {
set(new Text(first), new Text(second));
}
public TextPair(Text first, Text second) {
set(first, second);
}
public void set(Text first, Text second) {
this.first = first;
this.second = second;
}
public Text getFirst() {
return first;
}
public Text getSecond() {
return second;
}
public void write(DataOutput out) throws IOException {
first.write(out);
second.write(out);
}
public void readFields(DataInput in) throws IOException {
first.readFields(in);
second.readFields(in);
}
public int compareTo(TextPair tp) {
int cmp = first.compareTo(tp.first);
if (cmp != 0) {
return cmp;
}
return second.compareTo(tp.second);
}
}
public static void main(String agrs[]) throws IOException, InterruptedException, ClassNotFoundException {
//job entrance
Configuration conf = new Configuration();
GenericOptionsParser parser = new GenericOptionsParser(conf, agrs);
String[] otherArgs = parser.getRemainingArgs();
if (agrs.length < 3) {
System.err.println("Usage: J <in_path_one> <in_path_two> <output>");
System.exit(2);
}
Job job = new Job(conf, "J");
job.setJarByClass(J.class);//Join class
job.setMapperClass(JMapper.class);//Map class
job.setMapOutputKeyClass(TextPair.class);//Map output key class
job.setMapOutputValueClass(Text.class);//Map output value class
job.setPartitionerClass(JPartitioner.class);//partition class
job.setGroupingComparatorClass(JComparator.class);//condition group class after partition
job.setReducerClass(Example_Join_01_Reduce.class);//reduce class
job.setOutputKeyClass(Text.class);//reduce output key class
job.setOutputValueClass(Text.class);//reduce ouput value class
FileInputFormat.addInputPath(job, new Path(otherArgs[0]));//one of source files
FileInputFormat.addInputPath(job, new Path(otherArgs[1]));//another file
FileOutputFormat.setOutputPath(job, new Path(otherArgs[2]));//output path
System.exit(job.waitForCompletion(true) ? 0 : 1);//run untill job ends
}
As can be seen above, to implement the associative computing, since programmers are unable to use the raw data directly, they have to compose some complex codes to handle the tag, bypass the original framework of MapReduce, and design and compute the associative relation between data from the bottom layer. Obviously, handling such computations by this way requires those programmers with strong programming skills. Plus, it is quite time-consuming, and there is no guarantee on computational efficiency. The above case is just the simplest kind of associated computation. As you can image, if using MapReduce for multi-table association or the associative computing with complex business logics, then the degree of complexity will rise in a geometric ratio. The difficulty and development efficiency becomes nearly unbearable.
In fact, the associative computing itself is common and by no means complex. The reason of the apparent difficulty is that MapReduce is not specialized enough in a certain sector though it has strong universality. Similarly, developing via MapReduce is also quite inefficient when it comes to the ordered computations like year-on-year comparison and median operation, and the align or enum grouping.
Although Hadoop has packaged Hive/Pig and other advanced solutions with MapReduce, these solutions on one hand is not powerful enough, on other hand, they only offer the rather simple and basic queries. To complete the business logics involving complex procedure, the hard coding is still unavoidable.
esProc is a pure Java parallel computation framework with the focus on boosting the capability of Hadoop and the basic ability to improve the development efficiency of Hadoop programmers.
Still the above example, esProc solution is shown below:
Main program:
Sub program:
The above are two methods to implement one task, people can choose either one as per the feature of your question.
|
http://www.coderanch.com/t/622201/big-data/databases/solutions-improve-developing-efficiency-Hadoop
|
CC-MAIN-2015-22
|
en
|
refinedweb
|
if, char array and or operator questionThanks a lot guys, you are right. That solved my problem. *cheers*
if, char array and or operator questionis it possible to make a statement like this:
[code]if( A[x] == 'a' || 'b' )[/code]
when I want to...
a problem with sin calculation.well, i bypassed it and it appears there is nothing wrong with calculation, there is something wrong...
a problem with sin calculation.#include <iostream.h>
#include <math.h>
#include <conio.h>
#include <fstream.h>
#include <iomani...
This user does not accept Private Messages
|
http://www.cplusplus.com/user/erga/
|
CC-MAIN-2015-22
|
en
|
refinedweb
|
This chapter includes the following sections:
About Developing Declaratively in JDeveloper
Creating an Application Workspace
Creating and Using Managed Beans
Viewing ADF Faces Javadoc
The ADF framework provides a visual and declarative approach to Java EE development. It supports rapid application development based on ready-to-use design patterns, metadata-driven and visual tools.
Using JDeveloper
Creating a View Page using either Facelets or JavaServer Pages (JSPs).
Deploying the application. See Deploying ADF Applications in Administering Oracle ADF Applications. If your application uses ADF Faces with the ADF Model layer, ADF Controller, and ADF Business Components, see the Deploying Fusion Web Applications chapter of Developing Fusion Web Applications with Oracle Application Development Framework.
Ongoing tasks throughout the development cycle will likely include the following:
Creating and Using Managed Beans
Viewing ADF Faces Javadoc
JDeveloper also includes debugging and testing capabilities. For more information, see the Testing and Debugging ADF Components chapter of Developing Fusion Web Applications with Oracle Application Development Framework.
An application workspace is a directory that helps the ADF application developer to gather various source code files and resources and work with them as a cohesive unit. It is the one of the first steps in building a new application., ADF Controller, and ADF Business Components, see Getting Started with Your Web Interface in Developing Fusion Web Applications with Oracle Application Development Framework.
You create an application workspace using the Create Application wizard.
To create an application:
In the menu, choose File > New > Application.
In the New Gallery, select Custom Application and click OK.
In the Create Custom Application dialog, set a name, directory location, and package prefix of your choice and click Next.
In the Name Your Project page, you can optionally change the name and location for your view project. On the Project Features tab, shuttle ADF then select ADF Faces for your project, JDeveloper creates a project that contains all the source and configuration files needed for an ADF Faces application. Additionally, JDeveloper adds the following libraries to your project:
JSF 2.2
JSTL 1.2
JSP Runtime
Once the projects are created for you, you can rename them. Figure 3-1 shows the workspace for a new ADF Faces application.
Figure 3-1 New Workspace for an). Following is the
web.xml file generated by JDeveloper when you create a new ADF Faces application.
< Allowing User Customization on JSF Pages. For comprehensive information about configuring an ADF Faces application, see ADF Faces Configuration.
Defining the page flow of your ADF Faces application refers to how the UI elements are arranged in pages, designing the right sequence of pages, how pages interact with each other and how they can be modularized and communicate with each other as well. Navigation cases and rules are used to define the page flow..
Figure 3-2 Navigation Diagram in JDeveloper
Note:
If you plan on using ADF Model data binding and ADF Controller, then you use ADF task flows to define your navigation rules. For more information, see the “Getting Started with ADF Task Flows" chapter of Developing Fusion Web Applications with Oracle Application Development Framework.
Best Practice.
Before you begin:
It may be helpful to have an understanding of page flows. For more information, see Defining Page Flows.
To create a page flow:
faces-config.xmlfile for your application. By default, this is in the Web Content/WEB-INF node of your project.
The components are contained in three accordion panels: Source Elements, Components, and Diagram Annotations. Figure 3-3 shows the Components window displaying JSF navigation components.
Figure 3-3 Components in JDeveloper
JDeveloper redraws the diagram with the newly added component.
Tip:
You can also use the overview editor to create navigation rules and navigation cases by clicking the Overview tab. For help with the editor, click Help or press F1. Working with Navigation Components.
When you use the diagrammer to create a page flow, JDeveloper creates the associated XML entries in the
faces-config.xml file. This code shows the XML generated for the navigation rules displayed in Figure 3-2.
>
The JSF pages you create for your ADF Faces application using JavaServer Faces can be Facelets documents (
.jsf) or JSP documents written in XML syntax (
.jspx).You can define the look and feel for the new page, and you can specify whether or not components on the page are exposed in a managed bean, to allow programmatic manipulation of the UI components. Organizing Content on Web Pages.
Figure 3-4 Quick Layouts Quick Start Layout Themes. For more information about themes, see.
Figure 3-5 Oracle Three Column Layout Template
Whenever a template is changed, for example if the layout changes, any page that uses the template will also be automatically updated. For more information about creating and using templates,).
Once your page files are created, you can add UI components and work with the page source.
You create JSF pages (either Facelets or JSP) using the Create JSF Page dialog.
Before you begin:
It may be helpful to have an understanding of the different options when creating a page. For more information, see Creating a View Page.
To create a JSF page:
OR
From a navigation diagram, double-click a page icon for a page that has not yet been created.
Note:
While a Facelets page can use any extension you'd like, a Facelets page must use the
.jsf extension to be customizable. For more information, see Facelets or JSP page.
The following shows the code for a Facelets page when it is first created by JDeveloper.
<?xml version='1.0' encoding='UTF-8'?> <!DOCTYPE html> <f:view xmlns: <af:document <af:form</af:form> </af:document> </f:view>
Below is the code for a
.jspx page when it is first created by JDeveloper.
<:form> </af:document> </f:view> </jsp:root>
If you chose to use one of the quick layouts, then JDeveloper also adds the components necessary to display the layout. For example, the code below is what JDeveloper generates when you choose a two-column layout where the first column is locked and the second column stretches to fill up available browser space, and you also choose to apply themes.
<:panelGridLayout <af:gridRow <af:gridCell <!-- Left --> </af:gridCell> <af:gridCell <af:decorativeBox <f:facet <af:decorativeBox <f:facet <!-- Content --> </f:facet> </af:decorativeBox> </f:facet> </af:decorativeBox> </af:gridCell> </af:gridRow> </af:panelGridLayout> </af:form> </af:document> </f:view> </jsp:root>
If you chose to automatically create a backing bean using the Managed Bean tab of the dialog, JDeveloper also creates and registers a backing bean for the page, and binds any existing components to the bean, as shown in the code below.. Following is the
web.xml file created once you create a JSF page.
<> <context-param> <param-name>javax.faces.FACELETS_VIEW_MAPPINGS</param-name> <param-value>*.jsf;*.xhtml</param-value> </context-param> > <description>Security precaution to prevent clickjacking: bust frames if the ancestor window domain(protocol, host, and port) and the frame domain are different. Another options for this parameter are always and never.</description> <param-name>org.apache.myfaces.trinidad.security.FRAME_BUSTING</param-name> <param-value>differentOrigin</param-value> </context-param> <context-param> <param-name>javax.faces.VALIDATE_EMPTY_FIELDS</param-name> <param-value>true</param-value> </context-param> <context-param> <param-name>oracle.adf.view.rich.geometry.DEFAULT_DIMENSIONS</param-name> <param-value>auto<> <context-param> <param-name>oracle.adf.view.rich.dvt.DEFAULT_IMAGE_FORMAT</param-name> <param-value>HTML5<> <dispatcher>ERROR</dispatcher> </filter below.
< below.
<?xml version="1.0" encoding="windows-1252"?> <trinidad-config <skin-family>skyros</skin-family> <skin-version>v1</skin-version> </trinidad-config>
When the page is first displayed in JDeveloper, it is displayed in the visual editor (accessed by clicking the Design tab), which allows you to view the page in a WYSIWYG environment. You can also view the source for the page in the source editor by clicking the Source tab. The Structure window located in the lower left-hand corner of JDeveloper, provides a hierarchical view of the page.
JSF 2.1 following code into your
web.xml page.
below.
<context-param> <param-name>javax.faces.PARTIAL_STATE_SAVING</param-name> <param-value>false</param-value> </context-param>
Once this incompatibility is resolved, you should re-enable partial state saving by removing the entry. Check your current release notes at for the latest information on partial state saving support.SF Object Scope Lifecycles).
Note:
JDeveloper does not create managed bean property entries in the
faces-config.xml file. If you wish the bean to be instantiated with certain property values, you must perform this configuration in the
faces-config.xml file manually. For more information, see How to Configure for ADF Faces in faces-config.xml.
On the newly created or selected bean, adds a property and accessor methods for each component tag you place on the JSF page. button component and then double-click it in the visual editor, the Bind Action Property dialog displays the page's backing bean along with a new skeleton action method, as shown in Figure 3-6.
Figure 3-6 Bind Action Property Dialogsf..
The following code from a JSF page uses automatic component binding, and contains
form,
inputText, and
button components.
<f:view> <af:document <af:form <af:inputText <af:button </af:form> </af:document> </f:view>
Following is the corresponding code on the backing bean.
package view.backing; import oracle.adf.view.rich.component.rich.RichDocument; import oracle.adf.view.rich.component.rich.RichForm; import oracle.adf.view.rich.component.rich.input.RichInputText; import oracle.adf.view.rich.component.rich.nav.RichButton; public class MyFile { private RichForm f1; private RichDocument d1; private RichInputText it1; private RichButton bB1(RichButton b1) { this.button1 = button1; } public RichButton getB1() { return b1; } public String b1_action() { // Add event code here... return null; } }
This code, added to the
faces-config.xml file, registers the page's backing bean as a managed bean.
<managed-bean> <managed-bean-name>backing_MyFile</managed-bean-name> <managed-bean-class>view.backing.MyFile</managed-bean-class> <managed-bean-scope>request</managed-bean-scope> </managed-bean>
Note:
Instead of registering the managed bean in the
faces-config.xml file,.
Figure 3-7 You Can Declaratively Create Skeleton Methods in the Source Editor
Once you create a page, you can turn automatic component binding off or on, and you can also change the backing bean to a different Java class. Open the JSF page Bean, and then choosing the bean from the list of beans associated with the JSF.
Once you have created a page, you can use the Components window to drag and drop components onto the page. JDeveloper then declaratively adds the necessary page code and sets certain values for component attributes.
Tip:
For detailed procedures and information about adding and using specific ADF Faces components, see.
Before you begin:
It may be helpful to have an understanding of creating a page. For more information, see Creating a View Page.
To add ADF Faces components to a page:
In the Applications window, double-click a JSF page to open it.
If the Components window is not displayed, from the menu choose Window > Components. By default, the Components window is displayed in the upper right-hand corner of JDeveloper.
In the Components window, use the dropdown menu to choose ADF Faces.
Tip:
If the ADF Faces page is not available in the Components window, then you need to add the ADF Faces tag library to the project.
For a Facelets file:
Right-click the project node and choose Project Properties.
Select JSP Tag Libraries to add the ADF Faces library to the project. For help, click Help or press F1.
For a JSPX file:
Right-click inside the Components window and choose Edit Tab Libraries.
In the Customize Components window dialog, shuttle ADF Faces Components to Selected Libraries, and click OK.
The components are contained in 6 accordion panels: General Controls (which contains components like buttons, icons, and menus), Text and Selection, Data Views (which contains components like tables and trees), Menus and Toolbars, Layout, and Operations.
Figure 3-8 shows the Components Window displaying the general controls for ADF Faces.
Figure 3-8 Components Window in JDeveloper Components window Components window onto a JSF page, JDeveloper adds the corresponding code to the JSF page. This code includes the tag necessary to render the component, as well as values for some of the component attributes. For example, the following shows the code when you drop an Input Text and a Button component from the palette.
<af:inputText <af:button
Note:
If you chose to use automatic component binding, then JDeveloper also adds the
binding attribute with its value bound to the corresponding property on the page's backing bean. For more information,.
Figure 3-9 Table Wizard in JDeveloper
Below is the code created when you use the wizard to create a table with three columns, each of which uses an
outputText component to display data.
<af:table var="row" rowBandingInterval="0" Properties window (displayed by default at the bottom right of JDeveloper) to set attribute values for each component.
Tip:
If the Properties window is not displayed, choose Window > Properties from the main menu.
Figure 3-10 shows the Properties window displaying the attributes for an
inputText component.
Figure 3-10 JDeveloper Properties Window
The Properties window has sections that group similar properties together. For example, the Properties window groups commonly used attributes for the
inputText component in the Common section, while properties that affect how the component behaves are grouped together in the Behavior section. Figure 3-11 shows the Behavior section of the Properties window for an
inputText component.
Figure 3-11 Behavior Section of the Properties window
Before you begin:
It may be helpful to have an understanding of the different options when creating a page. For more information, see Creating a View Page.
To set component attributes:
Tip:
Some attributes are displayed in more than one section. Entering or changing the value in one section will also change it in any other sections. You can search for an attribute by entering the attribute name in the search field at the top of the inspector.
Figure 3-12 Property Tools and Help
When you use the Properties window.
In ADF applications, Express Language (EL) expressions provide an important mechanism for enabling the presentation layer (web pages) to communicate with the application logic (managed beans). The EL expressions are used by both JavaServer Faces technology (JSF) and JavaServer Pages (JSP) technology. Creating and Using Managed Beans. For more information about EL expressions, see the Java EE 6 tutorial at.
Note:
When using an EL expression for the
value attribute of an editable component, you must have a corresponding set method for the that component, or else the EL expression will evaluate to read-only, and no updates to the value will be allowed.
For example, say you have an
inputText component (whose ID is
it1) on a page, and you have it's value set to
#{myBean.inputValue}. The
myBean managed bean would have to have get and set method as follows, in order for the
inputText value to be updated:
public void setIt1(RichInputText it1) { this.it1 = it1; } public RichInputText getIt1() { return it How to Use the EL Format Tags. For information about the time zone tags, see What You May Need to Know About Selecting Time Zones Without the inputDate Component.
You can create EL expressions declaratively using the JDeveloper Expression Builder. You can access the builder from the Properties window.
Before you begin:
It may be helpful to have an understanding of EL expressions. For more information, see Creating EL Expressions.
To use the Expression Builder:.
Figure 3-13 adfFacesContext Objects in the Expression Builder
Tip:
For more information about these objects, see the Java API Reference for Oracle ADF Faces..
Figure 3-14 The Expression Builder Dialog below.
. The following example shows a message that uses the
formatNamed2 tag to display the number of files on a specific disk. This message contains two parameters.
<af:outputText
While JDeveloper creates many needed EL expressions for you, and you can use the Expression Builder to create those not built for you, there may be times when you need to access, set, or invoke EL expressions within a managed bean.
The code below shows how you can get a reference to an EL expression and return (or create) the matching object. shows how you can resolve a method expression.); }
The code below shows how you can set); } }
A managed bean is created with a constructor with no arguments, a set of properties, and a set of methods that perform functions for a component. In the ADF framework, if you want to bind component values and objects to managed bean properties or to reference managed bean methods from component tags, you can use the EL syntax. ADF Model data binding and ADF Controller, then instead of registering managed beans in the
faces-config.xml file, you may need to register them within ADF task flows. For more information, refer to the “Using a Managed Bean in a Fusion Web Application" section in Developing Fusion Web Applications with Oracle Application Development Framework.
You can create a managed bean and register it with the JSF application at the same time using the overview editor for the
faces-config.xml file.
Before you begin:
It may be helpful to have an understanding of managed beans. For more information, see Creating and Using Managed Beans.
To create and register a managed bean:
Figure 3-15 shows the editor for the
faces-config.xml file used by the ADF Faces Components Demo application that contains the File Explorer application.
Figure 3-15 Managed Beans in the faces-config.xml File Object Scope Lifecycles. and
set methods for these bean properties.
When you create a managed bean and elect to generate the Java file, JDeveloper creates a stub class with the given name and a default constructor. Following is the code that would be added to the
MyBean class stored in the view package., as shown below.
<managed-bean> <managed-bean-name>my_bean</managed-bean-name> <managed-bean-class>view.MyBean</managed-bean-class> <managed-bean-scope>session</managed-bean-scope> </managed-bean> Developing Fusion Web Applications. The following shows how a managed bean might use the
UIComponentReference API to get and set values for a search field.
....
The ADF Faces Javadoc provides information required to work programmatically to develop ADF Faces applications using the ADF Faces API.:
oracle.adf.view.richpackage.
Tip:
When in a Java class file, you can go directly to the Javadoc for a class name reference or for a JavaScript function call by placing your cursor on the name or function and pressing Ctrl+D.
|
https://docs.oracle.com/middleware/12211/adf/develop-faces/GUID-2A82124E-633E-4787-B6EB-2E6DCE28F9F5.htm
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
Logger setUseParentHandlers() method in Java with Examples
setUseParentHandlers() method of a Logger class used to set the configuration which defines whether or not this logger should send its output to its parent Logger. if we want to send the output to its parent Logger then we have to set the parameter to this method equal to true. This means that any log records will also be written to the parent’s Handlers, and potentially to its parent, recursively up the namespace.UseParentHandlers(boolean useParentHandlers)
Parameters: This method accepts one parameter useParentHandlers which represents true if output is to be sent to the logger’s parent.
Return value: This method returns nothing.
Exception: This method throws SecurityException if a security manager exists, this logger is not anonymous, and the caller does not have LoggingPermission(“control”).
Below programs illustrate the setUseParentHandlers() method:
Program 1:
Output:
The output printed on console of Eclipse is shown below-
Program 2:
Output:
The output printed on console output is shown below-
Reference:
|
https://www.geeksforgeeks.org/logger-setuseparenthandlers-method-in-java-with-examples/
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
jMuxerjMuxer
jMuxer - a simple javascript mp4 muxer that works in both browser and node environment. It is communication protocol agnostic and it is intended to play media files on the browser with the help of the media source extension. It also can export mp4 on the node environment. It expects raw H264 video data and/or AAC audio data in ADTS container as an input.
Live DemoLive Demo
How to use?How to use?
A distribution version is available on dist folder.
<script type="text/javascript" src="dist/jmuxer.min.js"></script> var jmuxer = new JMuxer(option);
Available options are:
node - String ID of a video tag / Reference of the HTMLVideoElement. Required field for browsers.
mode - Available values are: both, video and audio. Default is both
flushingTime - Buffer flushing time in milliseconds. Default value is 1500 milliseconds.
maxDelay - Maximum delay time in milliseconds. Default value is 500 milliseconds.
clearBuffer - true/false. Either it will clear played media buffer automatically or not. Default is true.
fps - Optional value. Frame rate of the video if it is known/fixed value. It will be used to find frame duration if chunk duration is not available with provided media data.
readFpsFromTrack - true/false. Will read FPS from MP4 track data instead of using (above) fps value. Default is false.
onReady - function. Will be called once MSE is ready.
onError - function. Will be fired if jMuxer encounters any buffer error.
debug - true/false. Will print debug log in browser console. Default is false.
Complete example:
<script type="text/javascript" src="dist/jmuxer.min.js"></script> <video id="player"></video> <script> var jmuxer = new JMuxer({ node: 'player', mode: 'both', /* available values are: both, audio and video */ debug: false }); /* Now feed media data using feed method. audio and video is buffer data and duration is in milliseconds */ jmuxer.feed({ audio: audio, video: video, duration: duration }); </script>
Media dataObject may have following properties:
video - h264 buffer
audio - AAC buffer
duration - duration in milliseconds of the provided chunk. If duration is not provided, it will calculate frame duration wtih the provided frame rate (fps).
ES6 Example:
Install module through
npm
npm install --save jmuxer
import JMuxer from 'jmuxer'; const jmuxer = new JMuxer({ node: 'player', debug: true }); /* Now feed media data using feed method. audio and video is buffer data and duration is in milliseconds */ jmuxer.feed({ audio: audio, video: video, duration: duration });
Node Example:
Install module through
npm
npm install --save jmuxer
const JMuxer = require('jmuxer'); const jmuxer = new JMuxer({ debug: true }); /* Stream in Object mode. Please check the example file for more details */ let h264_feeder = getFeederStreamSomehow(); let http_or_ws_or_any = getWritterStreamSomehow(); h264_feeder.pipe(jmuxer.createStream()).pipe(http_or_ws_or_any);
Available Methods
Typescript definition
npm install --save @types/jmuxer
Compatibility
compatible with browsers supporting MSE with 'video/MP4. it is supported on:
- Chrome for Android 34+
- Chrome for Desktop 34+
- Firefox for Android 41+
- Firefox for Desktop 42+
- IE11+ for Windows 8.1+
- Edge for Windows 10+
- Opera for Desktop
- Safari for Mac 8+
Demo Server and player exampleDemo Server and player example
A simple node server and some demo media data are available in the example directory. In the example, each chunk/packet is consist of 4 bytes of header and the payload following the header. The first two bytes of the header contain the chunk duration and remaining two bytes contain the audio data length. Packet format is shown in the image below:
Packet format
A step guideline to obtain above the packet format from your mp4 file using ffmpeg:
- Spliting video into 2 seconds chunks:
ffmpeg -i input.mp4 -c copy -map 0 -segment_time 2 -f segment %03d.mp4
- Extracting h264 for all chunks:
for f in *.mp4; do ffmpeg -i "$f" -vcodec copy -an -bsf:v h264_mp4toannexb "${f:0:3}.h264"; done
- Extracting audio for all chunks:
for f in *.mp4; do ffmpeg -i "$f" -acodec copy -vn "${f:0:3}.aac"; done
- Extracting duration for all chunks:
for f in *.mp4; do ffprobe "$f" -show_format 2>&1 | sed -n 's/duration=//p'; done
(see)
How to run example?
Demo files are available in example directory. For running the example, first run the node server by following command:
cd example
node server.js
then, visit example/index.html page using any webserver.
Player Example for raw h264 onlyPlayer Example for raw h264 only
Assuming you are still in
example directory. Now run followngs:
node h264.js
then, visit example/h264.html page using any webserver.
How to build?How to build?
A distribution version is available inside dist directory. However, if you need to build, you can do as follows:
- git clone
- cd jmuxer
- npm install
- npm run build OR npm run pro
SupportSupport
If the project helps you, buy me a cup of coffee!
CreditsCredits
Proudly inspired by hls.js, rtsp player
Cobrowse.io - for sponsoring the adaptation of jMuxer for Node.js
|
https://www.npmjs.com/package/jmuxer
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
Factory that makes QImageFormat objects. More...
#include <qasyncimageio.h>
List of all member functions.
While, thus you can override current and future built-in formats.
[protected]
Constructs a factory. It automatically registers itself with QImageDecoder.
[virtual]
Destroys a factory. It automatically unregisters itself from QImageDecoder.
[virtual]
Returns a decoder for decoding an image which starts with the give bytes. This function should only return a decoder if it is definite that the decoder applies to data with the given header. Returns 0 if there is insufficient data in the header to make a positive identification, or if the data is not recognized.
[virtual]
Returns the name of the format supported by decoders from this factory. The string is statically allocated.
Search the documentation, FAQ, qt-interest archive and more (uses):
This file is part of the Qt toolkit, copyright © 1995-2005 Trolltech, all rights reserved.
|
https://doc.qt.io/archives/2.3/qimageformattype.html
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
Collections and Metadata¶
Keys¶
True-Q™ uses
Keys to store attributes of some objects like
Circuits and
Estimates. This is
done to enable efficient filtering and grouping of similar or dissimilar objects.
Keys are lightweight hashable dictionaries; their keys and values
must both be hashable. One main feature of keys is their ability to match against
patterns:
import trueq as tq key = tq.Key(fruit="banana", color="yellow", quantity=1) print(key.match(fruit="apple")) print(key.match(fruit="banana")) print(key.match(fruit="banana", color="brown")) print(key.match(fruit={"apple", "pear"})) print(key.match(fruit={"apple", "banana"}))
False True False False True
Various other conveniences are supported:
import trueq as tq key = tq.Key(fruit="banana", color="yellow", quantity=1) print("shape in key: \t", "shape" in key) print("quantity in key:\t", "quantity" in key) print("key.quantity: \t", key.quantity) print("key['quantity']:\t", key["quantity"]) print("key.names: \t", key.names)
shape in key: False quantity in key: True key.quantity: 1 key['quantity']: 1 key.names: ('fruit', 'color', 'quantity')
Special Keywords¶
True-Q™ uses the following keywords for
Keys internally. There are
no restrictions on using these keywords for your own purposes, or manually overriding
them, but doing so may cause fitting and other features to break.
Usually,
fit() produces a separate set of estimates
for every unique combination of custom keywords (i.e. not in the list above). The
exceptions to this rule are the following custom keywords, which will be ignored and
deleted in the returned fit object:
KeySets¶
KeySets are special set-like containers for
Keys. They exist (instead of using the built-in
set)
mainly to easily create subsets using the
match() functionality of
Keys. When we call the
subset() of a KeySet, a new KeySet
is returned that contains all
Keys that match the arguments. KeySets
are also useful for getting all unique values of a particular parameter:
import trueq as tq keys = tq.KeySet( tq.Key(fruit="banana", color="yellow"), tq.Key(fruit="apple", color="red"), tq.Key(fruit="apple", color="yellow"), tq.Key(size=1), ) print(keys.subset(color="yellow")) print(keys.subset(size=1)) print(keys.subset(shape="triangle")) print(keys.fruit)
KeySet( Key(fruit='apple', color='yellow'), Key(fruit='banana', color='yellow')) KeySet( Key(size=1)) KeySet() ItemSet({'banana', 'apple'})
Circuit Collections¶
Collections of circuits are stored in instances of
CircuitCollection, and this is the type that will be returned by the
functions that create circuits for diagnostic tools. Circuit
collections are iterable and maintain insertion order. Multiple protocols can (and
usually should) put their circuits into the same collection, so that their results can
be analyzed together if applicable. Different types of circuit are differentiated within
the collection by their
key attribute, and collections of
circuits are designed so that accessing subsets of circuits by properties of these keys
is convenient.
For example, circuits created with
make_srb() have keys with the
property
protocol="SRB", while circuits created with
make_xrb() have
keys with the property
protocol="XRB". We may access only those circuits with keys
having the property
protocol="SRB" as follows.
import trueq as tq circuits = tq.make_srb([0], [4, 100]) circuits += tq.make_xrb([0], [4, 100]) # see all the values of protocol inside keys in the circuit collection print(circuits.keys().protocol) print(circuits.subset(protocol='SRB').keys().protocol)
ItemSet({'SRB', 'XRB'}) ItemSet({'SRB'})
See the API documentation for
CircuitCollection for other key-related
convenience functions.
Perhaps most importantly, circuit collections implement the method
fit(), which takes all of the available data in the
collection and analyzes it, returning an
EstimateCollection
(see below). Similarly,
plot() contains methods to
visualize the results of any analysis.
Accessing Estimates and Circuits¶
Both
CircuitCollections and
EstimateCollections use flat structures to enable a simple
interface between different protocols performed on various subsets of the same quantum
device. For example, the parameters estimated by the XRB protocol depend on
whether the SRB protocol was also performed, and the
make_crosstalk_diagnostics() macro generates SRB
circuits in both simultaneous and isolated modes.
This guide supplements the API documentation for methods in the
CircuitCollection and
EstimateCollection classes. Skip ahead to
Circuit Collection Internals or EstimateCollection Internals if you want
to skip to useful examples immediately. Throughout this page, we will use the
following objects in our examples:
import trueq as tq # make some varied circuits in a single CircuitCollection circuits = tq.CircuitCollection() circuits += tq.make_srb([4, 5], [4, 12, 64]) circuits += tq.make_xrb([4, 5], [4, 12, 64]) circuits += tq.make_cb({(4, 5): tq.Gate.cnot}, [4, 12, 64], n_decays=5) # simulate them and put their results in an EstimateCollection tq.Simulator().add_stochastic_pauli(px=0.02).run(circuits) fit = circuits.fit()
Circuit Collection Internals¶
CircuitCollections are effectively lists of circuits with
convenience functions, where each circuit contains its own
Key. The
protocols built in to True-Q™ typically assign keys such that each key only appears
once per generated circuit collection.
If you happen to know the key of the list of circuits you are interested in, you can access it directly.
key = tq.Key(n_random_cycles=4, protocol='SRB', twirl=(('C', 4),), compiled_pauli="I") len(circuits[key])
0
However, it is typically inconvenient to manually construct keys in this way. There are four convenience functions for accessing circuits:
trueq.CircuitCollection.keys():
Returns a KeySet of all keys in the collection that match a given filter.
For example, let’s get all of the keys in the collection that came from the SRB protocol:
circuits.keys(protocol="SRB")True-Q formatting will not be loaded without trusting this notebook or rerunning the affected cells. Notebooks can be marked as trusted by clicking "File -> Trust Notebook".
Since the
keys()method returns a new
KeySet, we have access to all of its methods. For example, this is the easiest way to get all sequence lengths present in the SRB protocol:
circuits.keys(protocol="SRB").n_random_cycles
{4, 12, 64}
This enables convenient looping such as the following:
for m in circuits.keys(protocol="SRB").n_random_cycles: for key in circuits.keys(protocol="SRB", n_random_cycles=m): pass
Note that in this case
circuits.similar_keys("n_random_cycles", protocol="SRB")(see below) would be even more convenient:
trueq.CircuitCollection.subset():
Returns an iterable over all circuits whose key attribute matches a given filter. This may seamlessly join together circuits in different circuit lists.
For example, we can put all
XRBcircuits of this protocol into a new
CircuitCollectionlike this:
circuits.subset(protocol="XRB");
We can match on multiple values:
len(circuits.subset(protocol="XRB", n_random_cycles={4,5}))
90
trueq.CircuitCollection.similar_keys():
Returns an iterable over KeySets. Each KeySet contains keys that match the filter, but are additionally grouped by some equal (or unequal) estimate value.
It is often useful to both group and filter at the same time. Suppose we want all
XRBcircuits, and that we want to group them by their
n_random_cyclesvalue.
for keys in circuits.similar_keys("n_random_cycles", protocol="XRB"): # keys is a KeySet where protocol=XRB, and where every n_random_cycles is equal for key in keys: pass
As a convenience for more concise code, we can also group where everything except
n_random_cyclesis equal. This is done with the
invert=Trueflag. This is particularly useful for
seq_labelin XRB.
for keys in circuits.similar_keys("seq_label", invert=True, protocol="XRB"): # keys is a KeySet where protocol=XRB, and all other key fields except # seq_label are equal for key in keys: pass
EstimateCollection Internals¶
EstimateCollections are containers of
Estimates with fancy pattern matching functionality.
The most convenient way to view the output of a
EstimateCollection is its fancy HTML representation in an
IPython Notebook. If you want to access specific values then the
relevant methods are listed below, followed by some illustrative examples.
trueq.estimate.EstimateCollection.keys()
Calling syntax:
fit.keys(**filter)
trueq.estimate.EstimateCollection.subset()
Calling syntax:
fit.subset(pattern="*", **filter)
Suppose we are after the
SRB infidelity of the qubit with label 4. Our first step
might be to list all of the keys just to see what sorts of estimates they have:
fit.keys()
Looking at these keys we can choose a filter to pass to
keys() and then extract the corresponding
estimates:
[estimate.e_F for estimate in fit.subset(protocol="SRB", labels=(4,))]
[EstimateTuple(name='e_F', val=0.023552957885108, std=0.002394866491413562)]
You will notice that there are two entries returned in the list. This is because we
added SRB to the circuit collection in two different ways; we are seeing the estimates
of \(e_F\) when SRB is isolated on qubit
4, and when SRB is run simultaneously
on qubits
(4, 5). If this behaviour were undesired, we would have to filter
further. For example, we could filter by the twirling group of the Cliffords on qubit
4:
[estimate.e_F for estimate in fit.subset(protocol="SRB", twirl=(("C", 4),))]
[]
|
https://trueq.quantumbenchmark.com/guides/fundamentals/collections.html
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
Investors in PayPal Holdings Inc (Symbol: PYPL) saw new options begin trading today, for the May 31st expiration. At Stock Options Channel, our YieldBoost formula has looked up and down the PYPL options chain for the new May 31st contracts and identified one put and one call contract of particular interest.
The put contract at the $90.00 strike price has a current bid of 18 cents. If an investor was to sell-to-open that put contract, they are committing to purchase the stock at $90.00, but will also collect the premium, putting the cost basis of the shares at $89.82 (before broker commissions). To an investor already interested in purchasing shares of PYPL, that could represent an attractive alternative to paying $107.77/share today.
Because the $90 95%..59% annualized — at Stock Options Channel we call this the YieldBoost.
Below is a chart showing the trailing twelve month trading history for PayPal Holdings Inc, and highlighting in green where the $90.00 strike is located relative to that history:
Turning to the calls side of the option chain, the call contract at the $125.00 strike price has a current bid of 9 cents. If an investor was to purchase shares of PYPL stock at the current price level of $107.77/share, and then sell-to-open that call contract as a "covered call," they are committing to sell the stock at $125.00. Considering the call seller will also collect the premium, that would drive a total return (excluding dividends, if any) of 16.07% if the stock gets called away at the May 31st expiration (before broker commissions). Of course, a lot of upside could potentially be left on the table if PYPL shares really soar, which is why looking at the trailing twelve month trading history for PayPal Holdings Inc, as well as studying the business fundamentals becomes important. Below is a chart showing PYPL's trailing twelve month trading history, with the $125.00 strike highlighted in red:
Considering the fact that the $125.00 strike represents an approximate 89%..08% boost of extra return to the investor, or 0.66% annualized, which we refer to as the YieldBoost.
The implied volatility in the put contract example is 60%, while the implied volatility in the call contract example is 41%.
Meanwhile, we calculate the actual trailing twelve month volatility (considering the last 251 trading day closing values as well as today's price of $107.77).
|
https://www.nasdaq.com/articles/may-31st-options-now-available-paypal-holdings-pypl-2019-04-15
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
#include <CGAL/Residue.h>
The class
Residue represents a finite field \( \mathbb{Z}{/p\mathbb{Z}}\), for some prime number \( p\).
The prime number \( p\) is stored in a static member variable. The class provides static member functions to change this value..
The default primes \( p\) satisfy the additional property that \( 2\) is a primitive root of unity of maximum order (i.e., \( p-1\)) in \( \mathbb{Z}{/p\mathbb{Z}}\). Equivalently, \( 2\) generates the multiplicative group \( (\mathbb{Z}{/p\mathbb{Z}})^\times\).
Please.
In case the flag
CGAL_HAS_THREADS is undefined the prime is just stored in a static member of the class, that is,
Residue is not thread-safe in this case. In case
CGAL_HAS_THREADS the implementation of the class is thread safe using
boost::thread_specific_ptr. However, this may cause some performance penalty. Hence, it may be advisable to configure CGAL with
CGAL_HAS_NO_THREADS. See Section Thread Safety in the preliminaries.
Field
|
https://doc.cgal.org/latest/Modular_arithmetic/classCGAL_1_1Residue.html
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
Containers & Kubernetes
Use GKE usage metering to combat over-provisioning
As Kubernetes has surged in popularity, its users are no longer just early adopters. A growing number of enterprises and SaaS providers also rely on large, multi-tenant Kubernetes clusters to run their workloads, benefitting from increased resource utilization and reduced management overhead.
In many large and medium-sized enterprises, a centralized infrastructure team is responsible for creating, maintaining, and monitoring Kubernetes clusters for the entire firm, and these teams provision resources for individual teams and applications. Consequently, there is a strong need to have accountability for resource usage of the shared environment.
Earlier this year, we announced GKE usage metering, which brings fine-grained visibility to your Kubernetes clusters. With it, you can see your GKE clusters' resource usage broken down by namespaces and labels, and attribute it to meaningful entities (for example, department, customer, application, or environment). We’re excited to announce that GKE usage metering is now generally available. In addition, through conversations with early adopters, we’ve added new functionality to help reduce waste from over-provisioning, with the addition of consumption-based metrics that allow you to compare and contrast the resource requests with actual utilization.
To overprovision is human
Among developers, there’s a tendency to err on the conservative side and request more resources than an application needs. One GKE administrator told us internal customers sometimes requested as much as 20X their actual usage! But it can be hard for administrators to identify which applications and teams are being the most wasteful and how much they are wasting, as this information comes from multiple sources.
Fixing under- and over-provisioning: what happens under the hood
There are three kinds of resource metrics in Kubernetes—resource requests, resource limits, and resource consumption. A resource request is in essence a lower bound on the amount of resources a container will receive; a resource limit is an upper bound; and resource consumption is the actual amount used by a container at runtime.
A resource request is the primary driver of cost, because the Kubernetes scheduler uses the resource request of a new pod, compared to the capacity of a node and the resource requests of the pods running on the node, to know whether a new pod will fit on a node. And if the pod doesn’t fit on any of the available nodes, the cluster autoscaler (if enabled) adds nodes to the cluster. If a pod uses less resources than it requests, then the system is needlessly reserving resources. To reduce waste and keep costs down, it is beneficial to set resource requests to be in line with actual consumption. The opposite may also happen: for example, a common way to increase utilization in fixed-size clusters is to use burstable or best-effort pods, which can use resources in excess of the request if unconsumed resources are available on the machine. In such a scenario, the reserved resources may be a small fraction of used resources, and consumed resources can present a more accurate view of the cluster resource usage.
When you enable the GKE usage metering feature, the agent collects consumption metrics in addition to the resource requests, by polling PodMetrics objects from the metrics-server. The resource request records and resource consumption records are exported to two separate tables in a BigQuery dataset that you specify. Comparing requested with consumed resources makes it easy for you to spot waste, and take corrective measures. In addition, new and improved Data Studio templates are now available to make this analysis even easier for you.
What else is new GKE usage metering
In addition to the new consumption-based metrics, GKE usage metering also includes a simplified Data Studio dashboard setup experience. Now with Data Studio connectors, setup only takes a few clicks, and you no longer have to copy or edit queries manually.
In the documentation, you’ll find plug-and-play Google Data Studio templates and sample BigQuery queries that join GKE usage metering and GCP billing export data to estimate a cost breakdown by namespace and labels. They allow you to create dashboards such as these:
Further, GKE usage metering can now also monitor Cloud TPUs and custom machine types. For other changes and improvements, please see the documentation.
GKE usage metering makes it easy to attribute cluster usage by \Kubernetes namespaces and labels, map usage to cost, and detect resource overprovisioning. Getting started is a three-step process: 1) Creating a BigQuery dataset in which to store the data; 2) Enabling GKE usage metering; and 3) Setting up a Data Studio templates in which to visualize the data. Usage metering can be enabled on a per-cluster basis. For more detailed instructions and information, read the relevant documentation.
|
https://cloud.google.com/blog/products/containers-kubernetes/use-gke-usage-metering-to-combat-over-provisioning
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
Hey guys,
I’m currently building my first dash app which also features user managment.
Now I also want to integrate the possibility to amend user information which is shown in a datatable on the admin page. With the following code I already managed to update the database as soon as the table is altered, however, once i leave the admin page and go back to it the table doesn’t fetch the updated database data but instead the old data before the change. Is there a way to get the layout to reload the data from the database once it’s been updated?
def UpdateAdminRights(app): @app.callback( Output('users', 'data'), [Input('users', 'data_timestamp')], [State('users', 'data')]) def display_output(timestamp, df): old = show_users() #connects to the DB and fetches information changed_user = [change for user, change in zip(old,df) if user != change] if changed_user: username = changed_user[0]['username'] email = changed_user[0]['email'] admin = 1 if changed_user[0]['admin'] == "True" else 0 update_user(username, email, admin) #connects to the DB and updates information return df
It’s triggered as soon as this table is altered:
dbc.Container([ html.H3('View Users'), html.Hr(), dbc.Row([ dbc.Col([ dt.DataTable( id='users', columns = [{'name' : 'ID', 'id' : 'id'}, {'name' : 'Username', 'id' : 'username'}, {'name' : 'Email', 'id' : 'email'}, {'name' : 'Admin', 'id' : 'admin'}], data=show_users(), editable = True, ), ], md=12, className = 'twelve columns'), ]), ], className='pretty_container')
|
https://community.plotly.com/t/update-datatable-store-information-in-the-database-and-reload-layout-with-new-data/46308
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
Stochastic Calibration (SC)¶
This example will show how to characterize the noise acting in a system so that it can be corrected. We begin by initializing a noisy simulator with a 2-qubit rotation about \(Z\) by some angle \(\theta\). Then we show how to find \(\theta\) using stochastic calibration.
import numpy as np import trueq as tq # make a noisy simulator sim = tq.Simulator().add_depolarizing(p=0.001) sim.add_overrotation(single_sys=0.01, multi_sys=0.01) # adding a rotation by 12 degrees to the first qubit in every 2-qubit gate, # i.e. Z(12) is the noise we are going to try to characterize mat = tq.Gate.from_generators("Z", 12).mat sim.add_kraus([np.kron(mat, np.eye(2))])
Out:
<trueq.simulation.simulator.Simulator object at 0x7f8b7f555910>
We are going to perform stochastic calibration for a \(CZ\) gate. We choose to look at the \(XI\) Pauli decay because it anticommutes with our suspected error, \(ZI\). Equivalently, we could have have chosen to use the \(YI\) Pauli decay to characterize our \(ZI\) error. To minimize the experimental footprint, we use data at a single sequence length as in [15].
cycle = tq.Cycle({(0, 1): tq.Gate.cz}) # generate SC circuits with 24 random cycles to get decays associated with XI circuits = tq.make_sc(cycle, [24], pauli_decays=["XI"]) print(len(circuits))
Out:
30
Find the expectation value and standard deviation of the circuit when corrections of the form \(Z(\phi)\) are applied on the first qubit prior to each \(CZ\) gate. The expectation values correspond to the \(XI\) diagonal entry of the superoperator in the Pauli basis; values close to \(1\) indicate that the suspected error is small.
# 20 equidistant points between -40 and 40 for trial values of phi angles = np.linspace(-40, 40, 20) all_circuits = tq.CircuitCollection() for j, phi in enumerate(angles): # adds a Z(phi) rotation on qubit 0 gate before every CZ gate c = tq.compilation.CycleReplacement( cycle, replacement=[tq.Cycle({(0): tq.Gate.from_generators("Z", phi)}), cycle] ) new_circs = tq.CircuitCollection(map(c.apply, circuits)) # run circuit collection (with Z(phi)s inserted) sim.run(new_circs) # put all circuits into one callection, organized by the custom keyword "phi" all_circuits.append(new_circs.update_keys(phi=phi))
plot the expectation values as a function of \(\phi\):
all_circuits.plot.compare("f_24_XI", "phi")
The y-axis is the expectation value after 24 (randomized) applications of the cycle of interest, and the x-axis is the correction angle we have compiled into the circuit. The maximum expectation value corresponds to the value of \(XI\) closest to 1, so finding the angle at which the plot peaks tells us which angle to rotate by to correct the noise. The peak occurs at approximately \(\phi = -12\), which is consistent with the noise applied by the simulator.
Total running time of the script: ( 0 minutes 5.399 seconds)
Gallery generated by Sphinx-Gallery
|
https://trueq.quantumbenchmark.com/examples/error_suppression/sc.html
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
Optional Pylint checkers in the extensions module¶
Pylint provides the following optional plugins:
Deprecated Builtins checker
-
-
-
-
Comparison-Placement checker
-
Consider Ternary Expression checker
Parameter Documentation checker
-
-
Compare-To-Empty-String checker
Consider-Using-Any-Or-All checker
-
-
-
-
-
-
You can activate any or all of these extensions by adding a
load-plugins line to the
MASTER section of your
.pylintrc, for example:
load-plugins=pylint.extensions.docparams,pylint.extensions.docstyle
Broad Try Clause checker¶
This checker is provided by
pylint.extensions.broad_try_clause.
Verbatim name of the checker is
broad_try_clause.
Broad Try Clause checker Options¶
- max-try-statements
Maximum number of statements allowed in a try clause
Default:
1
Broad Try Clause checker Messages¶
- too-many-try-statements (W0717)
Try clause contains too many statements.
Code Style checker¶
This checker is provided by
pylint.extensions.code_style.
Verbatim name of the checker is
code_style.
Code Style checker Documentation¶
Checkers that can improve code consistency. As such they don't necessarily provide a performance benefit and are often times opinionated.
Code Style checker Options¶
- max-line-length-suggestions
Max line length for which to sill emit suggestions. Used to prevent optional suggestions which would get split by a code formatter (e.g., black). Will default to the setting for
max-line-length.
Code Style checker Messages¶
- consider-using-tuple (R6102)
Consider using an in-place tuple instead of list Only for style consistency! Emitted where an in-place defined
listcan be replaced by a
tuple. Due to optimizations by CPython, there is no performance benefit from it.
- consider-using-namedtuple-or-dataclass (R6101)
Consider using namedtuple or dataclass for dictionary values Emitted when dictionary values can be replaced by namedtuples or dataclass instances.
- consider-using-assignment-expr (R6103)
Use '%s' instead Emitted when an if assignment is directly followed by an if statement and both can be combined by using an assignment expression
:=. Requires Python 3.8 and
py-version >= 3.8.
Compare-To-Empty-String checker¶
This checker is provided by
pylint.extensions.emptystring.
Verbatim name of the checker is
compare-to-empty-string.
Compare-To-Empty-String checker Messages¶
- compare-to-empty-string (C1901)
Avoid comparisons to empty string Used when Pylint detects comparison to an empty string constant.
Compare-To-Zero checker¶
This checker is provided by
pylint.extensions.comparetozero.
Verbatim name of the checker is
compare-to-zero.
Compare-To-Zero checker Messages¶
- compare-to-zero (C2001)
Avoid comparisons to zero Used when Pylint detects comparison to a 0 constant.
Comparison-Placement checker¶
This checker is provided by
pylint.extensions.comparison_placement.
Verbatim name of the checker is
comparison-placement.
Comparison-Placement checker Messages¶
- misplaced-comparison-constant (C2201)
Comparison should be %s Used when the constant is placed on the left side of a comparison. It is usually clearer in intent to place it in the right hand side of the comparison.
Confusing Elif checker¶
This checker is provided by
pylint.extensions.confusing_elif.
Verbatim name of the checker is
confusing_elif.
Confusing Elif checker Messages¶
- confusing-consecutive-elif (R5601)
Consecutive elif with differing indentation level, consider creating a function to separate the inner elif Used when an elif statement follows right after an indented block which itself ends with if or elif. It may not be ovious if the elif statement was willingly or mistakenly unindented. Extracting the indented if statement into a separate function might avoid confusion and prevent errors.
Consider-Using-Any-Or-All checker¶
This checker is provided by
pylint.extensions.for_any_all.
Verbatim name of the checker is
consider-using-any-or-all.
Consider-Using-Any-Or-All checker Messages¶
- consider-using-any-or-all (C0501)
`for` loop could be `%s` A for loop that checks for a condition and return a bool can be replaced with any or all.
Consider Ternary Expression checker¶
This checker is provided by
pylint.extensions.consider_ternary_expression.
Verbatim name of the checker is
consider_ternary_expression.
Consider Ternary Expression checker Messages¶
- consider-ternary-expression (W0160)
Consider rewriting as a ternary expression Multiple assign statements spread across if/else blocks can be rewritten with a single assignment and ternary expression
Deprecated Builtins checker¶
This checker is provided by
pylint.extensions.bad_builtin.
Verbatim name of the checker is
deprecated_builtins.
Deprecated Builtins checker Documentation¶
This used to be the
bad-builtin core checker, but it was moved to
an extension instead. It can be used for finding prohibited used builtins,
such as
map or
filter, for which other alternatives exists.
If you want to control for what builtins the checker should warn about,
you can use the
bad-functions option:
$ pylint a.py --load-plugins=pylint.extensions.bad_builtin --bad-functions=apply,reduce ...
Deprecated Builtins checker Options¶
- bad-functions
List of builtins function names that should not be used, separated by a comma
Default:
map,filter
Deprecated Builtins checker Messages¶
- bad-builtin (W0141)
Used builtin function %s Used when a disallowed builtin function is used (see the bad-function option). Usual disallowed functions are the ones like map, or filter , where Python offers now some cleaner alternative like list comprehension.
Design checker¶
This checker is provided by
pylint.extensions.mccabe.
Verbatim name of the checker is
design.
Design checker Documentation¶
You can now use this plugin for finding complexity issues in your code base.
Activate it through
pylint --load-plugins=pylint.extensions.mccabe. It introduces
a new warning,
too-complex, which is emitted when a code block has a complexity
higher than a preestablished value, which can be controlled through the
max-complexity option, such as in this example:
$ cat a.py def f10(): """McCabe rating: 11""" myint = 2 if myint == 5: return myint elif myint == 6: return myint elif myint == 7: return myint elif myint == 8: return myint elif myint == 9: return myint elif myint == 10: if myint == 8: while True: return True elif myint == 8: with myint: return 8 else: if myint == 2: return myint return myint return myint $ pylint a.py --load-plugins=pylint.extensions.mccabe R:1: 'f10' is too complex. The McCabe rating is 11 (too-complex) $ pylint a.py --load-plugins=pylint.extensions.mccabe --max-complexity=50 $
Design checker Options¶
- max-complexity
McCabe complexity cyclomatic threshold
Default:
10
Design checker Messages¶
- too-complex (R1260)
%s is too complex. The McCabe rating is %d Used when a method or function is too complex based on McCabe Complexity Cyclomatic
Docstyle checker¶
This checker is provided by
pylint.extensions.docstyle.
Verbatim name of the checker is
docstyle.
Docstyle checker Messages¶
- bad-docstring-quotes (C0198)
Bad docstring quotes in %s, expected """, given %s Used when a docstring does not have triple double quotes.
- docstring-first-line-empty (C0199)
First line empty in %s docstring Used when a blank line is found at the beginning of a docstring.
Else If Used checker¶
This checker is provided by
pylint.extensions.check_elif.
Verbatim name of the checker is
else_if_used.
Else If Used checker Messages¶
- else-if-used (R5501)
Consider using "elif" instead of "else if" Used when an else statement is immediately followed by an if statement and does not contain statements that would be unrelated to it.
Multiple Types checker¶
This checker is provided by
pylint.extensions.redefined_variable_type.
Verbatim name of the checker is
multiple_types.
Multiple Types checker Messages¶
- redefined-variable-type (R0204)
Redefinition of %s type from %s to %s Used when the type of a variable changes inside a method or a function.
Overlap-Except checker¶
This checker is provided by
pylint.extensions.overlapping_exceptions.
Verbatim name of the checker is
overlap-except.
Overlap-Except checker Messages¶
- overlapping-except (W0714)
Overlapping exceptions (%s) Used when exceptions in handler overlap or are identical
Parameter Documentation checker¶
This checker is provided by
pylint.extensions.docparams.
Verbatim name of the checker is
parameter_documentation.
Parameter Documentation checker Documentation¶
If you document the parameters of your functions, methods and constructors and their types systematically in your code this optional component might be useful for you. Sphinx style, Google style, and Numpy style are supported. (For some examples, see .)
You can activate this checker by adding the line:
load-plugins=pylint.extensions.docparams
to the
MASTER section of your
.pylintrc.
This checker verifies that all function, method, and constructor docstrings include documentation of the
parameters and their types
return value and its type
exceptions raised
and can handle docstrings in
Sphinx style (
param,
type,
return,
rtype,
raise/
except):
def function_foo(x, y, z): '''function foo ... :param x: bla x :type x: int :param y: bla y :type y: float :param int z: bla z :return: sum :rtype: float :raises OSError: bla ''' return x + y + z
or the Google style (
Args:,
Returns:,
Raises:):
def function_foo(x, y, z): '''function foo ... Args: x (int): bla x y (float): bla y z (int): bla z Returns: float: sum Raises: OSError: bla ''' return x + y + z
or the Numpy style (
Parameters,
Returns,
Raises):
def function_foo(x, y, z): '''function foo ... Parameters ---------- x: int bla x y: float bla y z: int bla z Returns ------- float sum Raises ------ OSError bla ''' return x + y + z
You'll be notified of missing parameter documentation but also of
naming inconsistencies between the signature and the documentation which
often arise when parameters are renamed automatically in the code, but not in
the documentation.
Note: by default docstrings of private and magic methods are not checked.
To change this behaviour (for example, to also check
__init__) add
no-docstring-rgx=^(?!__init__$)_ to the
BASIC section of your
.pylintrc.
Constructor parameters can be documented in either the class docstring or
the
__init__ docstring, but not both:
class ClassFoo(object): '''Sphinx style docstring foo :param float x: bla x :param y: bla y :type y: int ''' def __init__(self, x, y): pass class ClassBar(object): def __init__(self, x, y): '''Google style docstring bar Args: x (float): bla x y (int): bla y ''' pass
In some cases, having to document all parameters is a nuisance, for instance if many of your functions or methods just follow a common interface. To remove this burden, the checker accepts missing parameter documentation if one of the following phrases is found in the docstring:
For the other parameters, see
For the parameters, see
(with arbitrary whitespace between the words). Please add a link to the docstring defining the interface, e.g. a superclass method, after "see":
def callback(x, y, z): '''Sphinx style docstring for callback ... :param x: bla x :type x: int For the other parameters, see :class:`MyFrameworkUsingAndDefiningCallback` ''' return x + y + z def callback(x, y, z): '''Google style docstring for callback ... Args: x (int): bla x For the other parameters, see :class:`MyFrameworkUsingAndDefiningCallback` ''' return x + y + z
Naming inconsistencies in existing parameter and their type documentations are still detected.
Parameter Documentation checker Options¶
- accept-no-param-doc
Whether to accept totally missing parameter documentation in the docstring of a function that has parameters.
Default:
yes
- accept-no-raise-doc
Whether to accept totally missing raises documentation in the docstring of a function that raises an exception.
Default:
yes
- accept-no-return-doc
Whether to accept totally missing return documentation in the docstring of a function that returns a statement.
Default:
yes
- accept-no-yields-doc
Whether to accept totally missing yields documentation in the docstring of a generator.
Default:
yes
- default-docstring-type
If the docstring type cannot be guessed the specified docstring type will be used.
Default:
default
Parameter Documentation checker Messages¶
- differing-param-doc (W9017)
"%s" differing in parameter documentation Please check parameter names in declarations.
- differing-type-doc (W9018)
"%s" differing in parameter type documentation Please check parameter names in type declarations.
- multiple-constructor-doc (W9005)
"%s" has constructor parameters documented in class and __init__ Please remove parameter declarations in the class or constructor.
- missing-param-doc (W9015)
"%s" missing in parameter documentation Please add parameter declarations for all parameters.
- missing-type-doc (W9016)
"%s" missing in parameter type documentation Please add parameter type declarations for all parameters.
- missing-raises-doc (W9006)
"%s" not documented as being raised Please document exceptions for all raised exception types.
- useless-param-doc (W9019)
"%s" useless ignored parameter documentation Please remove the ignored parameter documentation.
- useless-type-doc (W9020)
"%s" useless ignored parameter type documentation Please remove the ignored parameter type documentation.
- missing-any-param-doc (W9021)
Missing any documentation in "%s" Please add parameter and/or type documentation.
- missing-return-doc (W9011)
Missing return documentation Please add documentation about what this method returns.
- missing-return-type-doc (W9012)
Missing return type documentation Please document the type returned by this method.
- missing-yield-doc (W9013)
Missing yield documentation Please add documentation about what this generator yields.
- missing-yield-type-doc (W9014)
Missing yield type documentation Please document the type yielded by this method.
- redundant-returns-doc (W9008)
Redundant returns documentation Please remove the return/rtype documentation from this method.
- redundant-yields-doc (W9010)
Redundant yields documentation Please remove the yields documentation from this method.
Refactoring checker¶
This checker is provided by
pylint.extensions.empty_comment.
Verbatim name of the checker is
refactoring.
Refactoring checker Messages¶
- empty-comment (R2044)
Line with empty comment Used when a # symbol appears on a line not followed by an actual comment
Set Membership checker¶
This checker is provided by
pylint.extensions.set_membership.
Verbatim name of the checker is
set_membership.
Set Membership checker Messages¶
- use-set-for-membership (R6201)
Consider using set for membership test Membership tests are more efficient when performed on a lookup optimized datatype like
sets.
Typing checker¶
This checker is provided by
pylint.extensions.typing.
Verbatim name of the checker is
typing.
Typing checker Documentation¶
Find issue specifically related to type annotations.
Typing checker Options¶
- runtime-typing
Set to
noif the app / library does NOT need to support runtime introspection of type annotations. If you use type annotations exclusively for type checking of an application, you're probably fine. For libraries, evaluate if some users what to access the type hints at runtime first, e.g., through
typing.get_type_hints. Applies to Python versions 3.7 - 3.9
Default:
yes
Typing checker Messages¶
- deprecated-typing-alias (W6001)
'%s' is deprecated, use '%s' instead Emitted when a deprecated typing alias is used.
- consider-using-alias (R6002)
'%s' will be deprecated with PY39, consider using '%s' instead%s Only emitted if 'runtime-typing=no' and a deprecated typing alias is used in a type annotation context in Python 3.7 or 3.8.
- consider-alternative-union-syntax (R6003)
Consider using alternative Union syntax instead of '%s'%s Emitted when 'typing.Union' or 'typing.Optional' is used instead of the alternative Union syntax 'int | None'.
While Used checker¶
This checker is provided by
pylint.extensions.while_used.
Verbatim name of the checker is
while_used.
While Used checker Messages¶
- while-used (W0149)
Used `while` loop Unbounded while loops can often be rewritten as bounded for loops.
|
https://pylint.pycqa.org/en/stable/technical_reference/extensions.html
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
19. Linear State Space Models¶
Contents
“We may regard the present state of the universe as the effect of its past and the cause of its future” – Marquis de Laplace
In addition to what’s in Anaconda, this lecture will need the following libraries:
!conda install -y quantecon
Collecting package metadata (current_repodata.json): -
\
|
/
-
\
|
/
-
\
|
/
-
\
|
done Solving environment: -
\
|
/
-
\
|
/
-
\
|
/
-
\
|
/
-
\
|
/
-
\
|
/
done
# All requested packages already installed.
19.1. Overview¶
This lecture introduces the linear state space dynamic system.
The linear state space system is a generalization of the scalar AR(1) process we studied before..
Let’s start with some imports:
%matplotlib inline import matplotlib.pyplot as plt plt.rcParams["figure.figsize"] = (11, 5) #set default figure size import numpy as np from quantecon import LinearStateSpace from scipy.stats import norm import random
19.2. The Linear State Space Model¶
The objects in play are:
19.2 (19.1) pins down the values of the sequences \(\{x_t\}\) and \(\{y_t\}\).
Even without these draws, the primitives 1–3 pin down the probability distributions of \(\{x_t\}\) and \(\{y_t\}\).
Later we’ll see how to compute these distributions and their moments.
19.2.1.1.
This is a weaker condition than that \(\{w_t\}\) is IID with \(w_{t+1} \sim N(0,I)\).
19.2.2. Examples¶
By appropriate choice of the primitives, a variety of dynamics can be represented in terms of the linear state space model.
The following examples help to highlight this point.
They also illustrate the wise dictum finding the state is an art.
19.2.2.1. Second-order Difference Equation¶
Let \(\{y_t\}\) be a deterministic sequence that satisfies
To map (19.2) into our state space system (19.1), we set
You can confirm that under these definitions, (19.1) and (19.2) agree.
The next figure shows the dynamics of this process when \(\phi_0 = 1.1, \phi_1=0.8, \phi_2 = -0.8, y_0 = y_{-1} = 1\).
def plot_lss(A, C, G, n=3, ts_length=50): ar = LinearStateSpace(A, C, G, mu_0=np.ones(n)) x, y = ar.simulate(ts_length) fig, ax = plt.subplots() y = y.flatten() ax.plot(y, 'b-', lw=2, alpha=0.7) ax.grid() ax.set_xlabel('time', fontsize=12) ax.set_ylabel('$y_t$', fontsize=12) plt.show()
ϕ_0, ϕ_1, ϕ_2 = 1.1, 0.8, -0.8 A = [[1, 0, 0 ], [ϕ_0, ϕ_1, ϕ_2], [0, 1, 0 ]] C = np.zeros((3, 1)) G = [0, 1, 0] plot_lss(A, C, G)
Later you’ll be asked to recreate this figure.
19.2.2.2. Univariate Autoregressive Processes¶
We can use (19.1) to represent the model
where \(\{w_t\}\) is IID and standard normal.
To put this in the linear state space format we take \(x_t = \begin{bmatrix} y_t & y_{t-1} & y_{t-2} & y_{t-3} \end{bmatrix}'\) and
The matrix \(A\) has the form of the companion matrix to the vector \(\begin{bmatrix}\phi_1 & \phi_2 & \phi_3 & \phi_4 \end{bmatrix}\).
The next figure shows the dynamics of this process when
ϕ_1, ϕ_2, ϕ_3, ϕ_4 = 0.5, -0.2, 0, 0.5 σ = 0.2 A_1 = [[ϕ_1, ϕ_2, ϕ_3, ϕ_4], [1, 0, 0, 0 ], [0, 1, 0, 0 ], [0, 0, 1, 0 ]] C_1 = [[σ], [0], [0], [0]] G_1 = [1, 0, 0, 0] plot_lss(A_1, C_1, G_1, n=4, ts_length=200)
19.2.2.3. Vector Autoregressions¶
Now suppose that
\(y_t\) is a \(k \times 1\) vector
\(\phi_j\) is a \(k \times k\) matrix and
\(w_t\) is \(k \times 1\)
Then (19.3) is termed a vector autoregression.
To map this into (19.1), we set
where \(I\) is the \(k \times k\) identity matrix and \(\sigma\) is a \(k \times k\) matrix.
19.2.2.4. Seasonals¶
We can use (19.1) to represent
the deterministic seasonal \(y_t = y_{t-4}\)
the indeterministic seasonal \(y_t = \phi_4 y_{t-4} + w_t\)
In fact, both are special cases of (19.3).
With the deterministic seasonal, the transition matrix becomes
It is easy to check that \(A^4 = I\), which implies that \(x_t\) is strictly periodic with period 4:1
Such an \(x_t\) process can be used to model deterministic seasonals in quarterly time series.
The indeterministic seasonal produces recurrent, but aperiodic, seasonal fluctuations.
19.2.2.5. Time Trends¶
The model \(y_t = a t + b\) is known as a linear time trend.
We can represent this model in the linear state space form by taking
and starting at initial condition \(x_0 = \begin{bmatrix} 0 & 1\end{bmatrix}'\).
In fact, it’s possible to use the state-space system to represent polynomial trends of any order.
For instance, we can represent the model \(y_t = a t^2 + bt + c\) in the linear state space form by taking
and starting at initial condition \(x_0 = \begin{bmatrix} 0 & 0 & 1 \end{bmatrix}'\).
It follows that
Then \(x_t^\prime = \begin{bmatrix} t(t-1)/2 &t & 1 \end{bmatrix}\). You can now confirm that \(y_t = G x_t\) has the correct form.
19.2.3. Moving Average Representations¶
A nonrecursive expression for \(x_t\) as a function of \(x_0, w_1, w_2, \ldots, w_t\) can be found by using (19.1) repeatedly to obtain
Representation (19.5) is a moving average representation.
It expresses \(\{x_t\}\) as a linear function of
current and past values of the process \(\{w_t\}\) and
the initial condition \(x_0\)
As an example of a moving average representation, let the model be
You will be able to show that \(A^t = \begin{bmatrix} 1 & t \cr 0 & 1 \end{bmatrix}\) and \(A^j C = \begin{bmatrix} 1 & 0 \end{bmatrix}'\).
Substituting into the moving average representation (19.5), we obtain.
19.3. Distributions and Moments¶
19.3.1. Unconditional Moments¶
Using (19.1), it’s easy to obtain expressions for the (unconditional) means of \(x_t\) and \(y_t\).
We’ll explain what unconditional and conditional mean soon.
Letting \(\mu_t := \mathbb{E} [x_t]\) and using linearity of expectations, we find that
Here \(\mu_0\) is a primitive given in (19.1).
The variance-covariance matrix of \(x_t\) is \(\Sigma_t := \mathbb{E} [ (x_t - \mu_t) (x_t - \mu_t)']\).
Using \(x_{t+1} - \mu_{t+1} = A (x_t - \mu_t) + C w_{t+1}\), we can determine this matrix recursively via
As with \(\mu_0\), the matrix \(\Sigma_0\) is a primitive given in (19)\).
19.3.1.1. Moments of the Observations¶
Using linearity of expectations again we have
The variance-covariance matrix of \(y_t\) is easily shown to be
19.3.2.
In particular, given our Gaussian assumptions on the primitives and the linearity of (19 (19.6) and (19.7).
Letting \(\mu_t\) and \(\Sigma_t\) be as defined by these equations, we have
By similar reasoning combined with (19.8) and (19.9),
19.3.3. Ensemble Interpretations¶
How should we interpret the distributions defined by (19.11)–(19 (19.3).
The values of \(y_T\) are represented by black dots in the left-hand figure
def cross_section_plot(A, C, G, T=20, # Set the time ymin=-0.8, ymax=1.25, sample_size = 20, # 20 observations/simulations n=4): # The number of dimensions for the initial x0 ar = LinearStateSpace(A, C, G, mu_0=np.ones(n)) fig, axes = plt.subplots(1, 2, figsize=(16, 5)) for ax in axes: ax.grid(alpha=0.4) ax.set_ylim(ymin, ymax) ax = axes[0] ax.set_ylim(ymin, ymax) ax.set_ylabel('$y_t$', fontsize=12) ax.set_xlabel('time', fontsize=12) ax.vlines((T,), -1.5, 1.5) ax.set_xticks((T,)) ax.set_xticklabels(('$T$',)) sample = [] for i in range(sample_size): rcolor = random.choice(('c', 'g', 'b', 'k')) x, y = ar.simulate(ts_length=T+15) y = y.flatten() ax.plot(y, color=rcolor, lw=1, alpha=0.5) ax.plot((T,), (y[T],), 'ko', alpha=0.5) sample.append(y[T]) y = y.flatten() axes[1].set_ylim(ymin, ymax) axes[1].set_ylabel('$y_t$', fontsize=12) axes[1].set_xlabel('relative frequency', fontsize=12) axes[1].hist(sample, bins=16, density=True, orientation='horizontal', alpha=0.5) plt.show()
ϕ_1, ϕ_2, ϕ_3, ϕ_4 = 0.5, -0.2, 0, 0.5 σ = 0.1 A_2 = [[ϕ_1, ϕ_2, ϕ_3, ϕ_4], [1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]] C_2 = [[σ], [0], [0], [0]] G_2 = [1, 0, 0, 0] cross_section_plot(A_2, C_2, G_2)
In the right-hand figure, these values are converted into a rotated histogram that shows relative frequencies from our sample of 20 \(y_T\)’s.
Here is another figure, this time with 100 observations
t = 100 cross_section_plot(A_2, C_2, G_2, T=t)
Let’s now try with 500,000 observations, showing only the histogram (without rotation)
T = 100 ymin=-0.8 ymax=1.25 sample_size = 500_000 ar = LinearStateSpace(A_2, C_2, G_2, mu_0=np.ones(4)) fig, ax = plt.subplots() x, y = ar.simulate(sample_size) mu_x, mu_y, Sigma_x, Sigma_y, Sigma_yx = ar.stationary_distributions() f_y = norm(loc=float(mu_y), scale=float(np.sqrt(Sigma_y))) y = y.flatten() ygrid = np.linspace(ymin, ymax, 150) ax.hist(y, bins=50, density=True, alpha=0.4) ax.plot(ygrid, f_y.pdf(ygrid), 'k-', lw=2, alpha=0.8, label=r'true density') ax.set_xlim(ymin, ymax) ax.set_xlabel('$y_t$', fontsize=12) ax.set_ylabel('relative frequency', fontsize=12) ax.legend(fontsize=12) plt.show()
The black line is the population density of \(y_T\) calculated from (19.12).
The histogram and population distribution are close, as expected.
By looking at the figures and experimenting with parameters, you will gain a feel for how the population distribution depends on the model primitives listed above, as intermediated by the distribution’s sufficient statistics.
19.3.3.1.\)).
I = 20 T = 50 ymin = -0.5 ymax = 1.15 ar = LinearStateSpace(A_2, C_2, G_2, mu_0=np.ones(4)) fig, ax = plt.subplots().set_ylim(ymin, ymax) ax.set_xlabel('time', fontsize=12) ax.set_ylabel('$y_t$', fontsize=12) ax.legend(ncol=2) plt.show()
The ensemble mean for \(x_t\) is
The limit \(\mu_T\) is a “long-run average”.
(By long-run average we mean the average for an infinite (\(I = \infty\)) number of sample \(x_T\)’s)
Another application of the law of large numbers assures us that
19.3.4.
From this rule we get \(p(x_0, x_1) = p(x_1 \,|\, x_0) p(x_0)\).
The Markov property \(p(x_t \,|\, x_{t-1}, \ldots, x_0) = p(x_t \,|\, x_{t-1})\) and repeated applications of the preceding rule lead us to
The marginal \(p(x_0)\) is just the primitive \(N(\mu_0, \Sigma_0)\).
In view of (19.1), the conditional densities are
19.3.4.1. Autocovariance Functions¶
An important object related to the joint distribution is the autocovariance function
Elementary calculations show that
Notice that \(\Sigma_{t+j,t}\) in general depends on both \(j\), the gap between the two dates, and \(t\), the earlier date.
19.4. Stationarity and Ergodicity¶
Stationarity and ergodicity are two properties that, when they hold, greatly aid analysis of linear state space models.
Let’s start with the intuition.
19.4.1. Visualizing Stability¶
Let’s look at some more time series from the same model that we analyzed above.
This picture shows cross-sectional distributions for \(y\) at times \(T, T', T''\)
def cross_plot(A, C, G, steady_state='False', T0 = 10, T1 = 50, T2 = 75, T4 = 100): ar = LinearStateSpace(A, C, G, mu_0=np.ones(4)) if steady_state == 'True': μ_x, μ_y, Σ_x, Σ_y, Σ_yx = ar.stationary_distributions() ar_state = LinearStateSpace(A, C, G, mu_0=μ_x, Sigma_0=Σ_x) ymin, ymax = -0.6, 0.6 fig, ax = plt.subplots() ax.grid(alpha=0.4) ax.set_ylim(ymin, ymax) ax.set_ylabel('$y_t$', fontsize=12) ax.set_xlabel('$time$', fontsize=12) ax.vlines((T0, T1, T2), -1.5, 1.5) ax.set_xticks((T0, T1, T2)) ax.set_xticklabels(("$T$", "$T'$", "$T''$"), fontsize=12) for i in range(80): rcolor = random.choice(('c', 'g', 'b')) if steady_state == 'True': x, y = ar_state.simulate(ts_length=T4) else: x, y = ar.simulate(ts_length=T4) y = y.flatten() ax.plot(y, color=rcolor, lw=0.8, alpha=0.5) ax.plot((T0, T1, T2), (y[T0], y[T1], y[T2],), 'ko', alpha=0.5) plt.show()
cross_plot(A_2, C_2, G.
19.4.2. Stationary Distributions¶
In our setting, a distribution \(\psi_{\infty}\) is said to be stationary for \(x_t\) if
Since
in the present case, all distributions are Gaussian
a Gaussian distribution is pinned down by its mean and variance-covariance matrix
we can restate the definition as follows: \(\psi_{\infty}\) is stationary for \(x_t\) if
where \(\mu_{\infty}\) and \(\Sigma_{\infty}\) are fixed points of (19.6) and (19.7) respectively.
19.4.3. Covariance Stationary Processes¶
Let’s see what happens to the preceding figure if we start \(x_0\) at the stationary distribution.
cross_plot(A_2, C_2, G_2, steady_state='True') (19.6) and (19.7) respectively
we’ve ensured that
Moreover, in view of (19\).
19.4.4. Conditions for Stationarity¶
19.4.4 (19.7) also has a unique fixed point in this case, and, moreover?
19.4.4.2. Processes with a Constant State Component¶
To investigate such a process, suppose that \(A\) and \(C\) take the form
where
\(A_1\) is an \((n-1) \times (n-1)\) matrix
\(a\) is an \((n-1) \times 1\) column vector
Let \(x_t = \begin{bmatrix} x_{1t}' & 1 \end{bmatrix}'\) where \(x_{1t}\) is \((n-1) \times 1\).
It follows that
Let \(\mu_{1t} = \mathbb{E} [x_{1t}]\) and take expectations on both sides of this expression to get
Assume now that the moduli of the eigenvalues of \(A_1\) are all strictly less than one.
Then (19.15) has a unique stationary solution, namely,
The stationary value of \(\mu_t\) itself is then \(\mu_\infty := \begin{bmatrix} \mu_{1\infty}' & 1 \end{bmatrix}'\).
The stationary values of \(\Sigma_t\) and \(\Sigma_{t+j,t}\) satisfy (19.7) converge to the fixed point of the discrete Lyapunov equation in the first line of (19.16).
19.4.5. Ergodicity¶
Let’s suppose that we’re working with a covariance stationary process.
In this case, we know that the ensemble mean will converge to \(\mu_{\infty}\) as the sample size \(I\) approaches infinity.
19.4.5.1. Averages over Time¶
Ensemble averages across simulations are interesting theoretically, but in real life, we usually observe only a single realization \(\{x_t, y_t\}_{t=0}^T\).
So now let’s take a single realization and form the time-series averages.
19.5.
The sequence \(\{v_t\}\) is assumed to be independent of \(\{w_t\}\).
The process \(\{x_t\}\) is not modified by noise in the observation equation and its moments, distributions and stability properties remain the same.
The unconditional moments of \(y_t\) from (19.8) and (19.9) now become
The variance-covariance matrix of \(y_t\) is easily shown to be
The distribution of \(y_t\) is therefore
19.6. Prediction¶
The theory of prediction for linear state space systems is elegant and simple.
19.6.1. Forecasting Formulas – Conditional Means¶
The natural way to predict variables is to use conditional distributions.
For example, the optimal forecast of \(x_{t+1}\) given information known at time \(t\)
The covariance matrix of the forecast error is
More generally, we’d like to compute the \(j\)-step ahead forecasts \(\mathbb{E}_t [x_{t+j}]\) and \(\mathbb{E}_t [y_{t+j}]\).
With a bit of algebra, we obtain
The \(j\)-step ahead forecast of \(y\) is therefore
19.6.2. Covariance of Prediction Errors¶
It is useful to obtain the covariance matrix of the vector of \(j\)-step-ahead prediction errors
Evidently,
\(V_j\) defined in (19.21) can be calculated recursively via \(V_1 = CC'\) and
\(V_j\) is the conditional covariance matrix of the errors in forecasting \(x_{t+j}\), conditioned on time \(t\) information \(x_t\).
Under particular conditions, \(V_j\) converges to
Equation (19\).
19.7..
19.8. Exercises¶
19.8.1. Exercise 1¶
In several contexts, we want to compute forecasts of geometric sums of future random variables governed by the linear state-space system (19
Show that:
and
what must the modulus for every eigenvalue of \(A\) be less than?
19.9. Solutions¶
19.9.1. Exercise 1¶
- 1
The eigenvalues of \(A\) are \((1,-1, i,-i)\).
- 2
The correct way to argue this is by induction. Suppose that \(x_t\) is Gaussian. Then (19.1) and (19.10) imply that \(x_{t+1}\) is Gaussian. Since \(x_0\) is assumed to be Gaussian, it follows that every \(x_t\) is Gaussian. Evidently, this implies that each \(y_t\) is Gaussian.
|
https://python.quantecon.org/linear_models.html
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
AWS Amplify provides a declarative and easy-to-use interface across different categories of cloud operations. AWS Amplify goes well with any JavaScript based frontend workflow, and React Native for mobile developers.
Our default implementation works with Amazon Web Services (AWS), but AWS Amplify is designed to be open and pluggable for any custom backend or service.
Notice:
[email protected] has structural changes. For details please check Amplify Modularization.
AWS Amplify is available as
aws-amplify package on npm
Web
$ npm install aws-amplify --save
or you could install the module you want to use individually:
$ npm install @aws-amplify/auth --save
React
If you are developing a React app, you can install an additional package
aws-amplify-react containing Higher Order Components:
$ npm install aws-amplify --save $ npm install aws-amplify-react --save
Angular
If you are developing an Angular app, you can install an additional package
aws-amplify-angular. This package contains an Angular module with a provider and components:
$ npm install aws-amplify --save $ npm install aws-amplify-angular --save
Visit our Installation Guide for Web to start building your web app.
Vue
If you are developing a Vue app, you can install an additional package
aws-amplify-vue. This package contains a Vue plugin for the Amplify library along with Vue components.
$ npm install aws-amplify --save $ npm install aws-amplify-vue --save
Visit our Installation Guide for Web to start building your Vue app.
React Native
For React Native development, install
aws-amplify
$ npm install aws-amplify --save
If you are developing a React Native app, you can install an additional package
aws-amplify-react-native containing Higher Order Components:
$ npm install aws-amplify-react-native --save
Visit our Installation Guide for React Native to start building your web app.
Somewhere in your app, preferably at the root level, configure Amplify with your resources.
Using AWS Resources
import Amplify from 'aws-amplify'; import aws_exports from './aws-exports'; Amplify.configure(aws_exports); // or you don't want to install all the categories import Amplify from '@aws-amplify/core'; import Auth from '@aws-amplify/auth'; import aws_exports from './aws-exports'; // in this way you are only importing Auth and configuring it. Amplify.configure(aws_exports);
Without AWS
Amplify.configure({ API: { graphql_endpoint: '' } });
AWS Amplify supports many category scenarios such as Auth, Analytics, APIs and Storage as outlined in the Developer Guide. A couple of samples are below:
By default, AWS Amplify can collect user session tracking data with a few lines of code:
import Analytics from '@aws-amplify/analytics'; Analytics.record('myCustomEvent');
See our Analytics Developer Guide for detailed information.
Add user sign up and sign in using two of the many methods available to the Auth class:
import Auth from '@aws-amplify/auth'; Auth.signUp({ username: 'AmandaB', password: 'MyCoolPassword1!', attributes: { email: '[email protected]' } }); Auth.signIn(username, password) .then(success => console.log('successful sign in')) .catch(err => console.log(err));
See our Authentication Developer Guide for detailed information.
React / React Native
Adding authentication to your React or React Native app is as easy as wrapping your app's main component with our
withAuthenticator higher order component. AWS Amplify will provide you customizable UI for common use cases such as user registration and login.
// For React import { withAuthenticator } from 'aws-amplify-react'; // For React Native import { withAuthenticator } from 'aws-amplify-react-native'; export default withAuthenticator(App);
Angular
To add authentication to your Angular app you can also use the built-in service provider and components:
// app.component.ts import { AmplifyService } from 'aws-amplify-angular'; ... constructor( public amplify:AmplifyService ) { // handle auth state changes this.amplify.authStateChange$ .subscribe(authState => { this.authenticated = authState.state === 'signedIn'; if (!authState.user) { this.user = null; } else { this.user = authState.user; } }); } // app.component.html <amplify-authenticator></amplify-authenticator>
See our Angular Guide for more details on Angular setup and usage.
AWS Amplify automatically signs your REST requests with AWS Signature Version 4 when using the API module :
import API from '@aws-amplify/api'; let apiName = 'MyApiName'; let path = '/path'; let options = { headers: {...} // OPTIONAL } API.get(apiName, path, options).then(response => { // Add your code here });
See our API Developer Guide for detailed information.
To access a GraphQL API with your app, you need to make sure to configure the endpoint URL in your app’s configuration.
// configure a custom GraphQL endpoint Amplify.configure({ API: { graphql_endpoint: '' } }); // Or configure an AWS AppSync endpoint. let myAppConfig = { // ... 'aws_appsync_graphqlEndpoint': '', 'aws_appsync_region': 'us-east-1', 'aws_appsync_authenticationType': 'API_KEY', 'aws_appsync_apiKey': 'da2-xxxxxxxxxxxxxxxxxxxxxxxxxx', // ... }; Amplify.configure(myAppConfig);
queries
import API, { graphqlOperation } from "@aws-amplify/api"; const ListEvents = `query ListEvents { listEvents { items { id where description } } }`; const allEvents = await API.graphql(graphqlOperation(ListEvents));
mutations
import API, { graphqlOperation } from "@aws-amplify/api"; const CreateEvent = `mutation CreateEvent($name: String!, $when: String!, $where: String!, $description: String!) { createEvent(name: $name, when: $when, where: $where, description: $description) { id name where when description } }`; const eventDetails = { name: 'Party tonight!', when: '8:00pm', where: 'Ballroom', decription: 'Coming together as a team!' }; const newEvent = await API.graphql(graphqlOperation(CreateEvent, eventDetails));
subscriptions
import API, { graphqlOperation } from "@aws-amplify/api"; const SubscribeToEventComments = `subscription subscribeToComments { subscribeToComments { commentId content } }`; const subscription = API.graphql( graphqlOperation(SubscribeToEventComments) ).subscribe({ next: (eventData) => console.log(eventData) });
See our GraphQL API Developer Guide for detailed information.
AWS Amplify provides an easy-to-use API to store and get content from public or private storage folders:
Storage.put(key, fileObj, {level: 'private'}) .then (result => console.log(result)) .catch(err => console.log(err)); // Stores data with specifying its MIME type Storage.put(key, fileObj, { level: 'private', contentType: 'text/plain' }) .then (result => console.log(result)) .catch(err => console.log(err));
See our Storage Developer Guide for detailed information.
Gets the closest parent element that matches the passed selector.
The element whose parents to check.
The CSS selector to match against.
True if the selector should test against the passed element itself.
The matching element or undefined.
Delegates the handling of events for an element matching a selector to an ancestor of the matching element.
The ancestor element to add the listener to.
The event type to listen to.
A CSS selector to match against child elements.
A function to run any time the event happens.
A configuration options object. The available options:
- useCapture<boolean>: If true, bind to the event capture phase. - deep<boolean>: If true, delegate into shadow trees.
The delegate object. It contains a destroy method.
Dispatches an event on the passed element.
The DOM element to dispatch the event on.
The type of event to dispatch.
The return value of
element.dispatchEvent, which will
be false if any of the event listeners called
preventDefault.
Gets all attributes of an element as a plain JavaScriot object.
The element whose attributes to get.
An object whose keys are the attribute keys and whose values are the attribute values. If no attributes exist, an empty object is returned.
return the byte size of the string
get current time
check if passed value is an integer
Tests if a DOM elements matches any of the test DOM elements or selectors.
The DOM element to test.
A DOM element, a CSS selector, or an array of DOM elements or CSS selectors to match against.
True of any part of the test matches.
Tests whether a DOM element matches a selector. This polyfills the native Element.prototype.matches method across browsers.
The DOM element to test.
The CSS selector to test element against.
True if the selector matches.
Returns an array of a DOM element's parent elements.
An array of all parent elemets, or an empty array if no parent elements are found.
Parses the given url and returns an object mimicing a
Location object.
An object with the same properties as a
Location.
Sign a HTTP request, add 'Authorization' header to request param
HTTP request object
request: { method: GET | POST | PUT ... url: ..., headers: { header1: ... }, data: data }
AWS access credential info
access_info: { access_key: ..., secret_key: ..., session_token: ... }
Signed HTTP request
List of header keys included in the canonical headers.
Default cache config
provide an object as the in-memory cache
|
https://aws-amplify.github.io/amplify-js/api/globals.html
|
CC-MAIN-2019-04
|
en
|
refinedweb
|
class FunctionXXLForwarders
extends MiniPhase with IdentityDenotTransformer
This phase adds forwarder for XXL functions
apply methods that are implemented with a method
with explicit parameters (not in Array[Object]).
In particular for every method
def apply(x1: T1, ... xn: Tn): R in class
M subtype of
FunctionN[T1, ..., Tn, R] with
N > 22
a forwarder
def apply(xs: Array[Object]): R = this.apply(xs(0).asInstanceOf[T1], ..., xs(n-1).asInstanceOf[Tn]).asInstanceOf[R]
is generated.
ConstructorsConstructors
FunctionXXLForwarders ( )
MembersMembers
override def phaseName : String
A name given to the
Phase that can be used to debug the compiler. For
instance, it is possible to print trees after a given phase using:
$ ./bin/dotc -Xprint:<phaseNameHere> sourceFile.scala
|
http://dotty.epfl.ch/api/dotty/tools/dotc/transform/FunctionXXLForwarders.html
|
CC-MAIN-2019-04
|
en
|
refinedweb
|
.NET Interview Questions – Part 3
1.What is an extender class?
An extender class allows you to extend the functionality of an existing control. It is used in Windows forms applications to add properties to controls.A demonstration of extender classes can be found over here.
2.What is inheritance?
Inheritance represents the relationship between two classes where one type derives functionality from a second type and then extends it by adding new methods, properties, events, fields and constants.
C# support two types of inheritance:
· Implementation inheritance
· Interface inheritance
3.
4.
5.How do you prevent a class from being inherited?
In VB.NET you use the NotInheritable modifier to prevent programmers from using the class as a base class. In C#, use the sealed keyword.
6.
7.Can you use multiple inheritance in .NET?
.NET supports only single inheritance. However the purpose is accomplished using multiple interfaces.
8.
9..
11.What is an Interface?
An interface is a standard or contract that contains only the signatures of methods or events. The implementation is done in the class that inherits from this interface. Interfaces are primarily used to set a common standard or contract.
12.
13.What is business logic?
It is the functionality which handles the exchange of information between database and a user interface.
14.What is a component?
Component is a group of logically related classes and methods. A component is a class that implements the IComponent interface or uses a class that implements IComponent interface.
15.What is a control?
A control is a component that provides user-interface (UI) capabilities.
16.What are the differences between a control and a component?
The differences can be studied over here.
17.
18.What is the global assembly cache (GAC)?
GAC is a machine-wide cache of assemblies that allows .NET applications to share libraries. GAC solves some of the problems associated with dll’s (DLL Hell).
19.What is a stack? What is a heap? Give the differences between the two?
Stack is a place in the memory where value types are stored. Heap is a place in the memory where the reference types are stored.
20.What is instrumentation?
It is the ability to monitor an application so that information about the application’s progress, performance and status can be captured and reported.
21.What is code review?
The process of examining the source code generally through a peer, to verify it against best practices.
22.What is logging?
Logging is the process of persisting information about the status of an application.
23.What are mock-ups?
Mock-ups are a set of designs in the form of screens, diagrams, snapshots etc., that helps verify the design and acquire feedback about the application’s requirements and use cases, at an early stage of the design process.
24.What is a Form?
A form is a representation of any window displayed in your application. Form can be used to create standard, borderless, floating, modal windows.
25.What is a multiple-document interface(MDI)?
A user interface container that enables a user to work with more than one document at a time. E.g. Microsoft Excel.
26.What is a single-document interface (SDI) ?
A user interface that is created to manage graphical user interfaces and controls into single windows. E.g. Microsoft Word
27.What is BLOB ?
A BLOB (binary large object) is a large item such as an image or an exe represented in binary form.
28.What is ClickOnce?
ClickOnce is a new deployment technology that allows you to create and publish self-updating applications that can be installed and run with minimal user interaction.
29.What is object role modeling (ORM) ?
It is a logical model for designing and querying database models. There are various ORM tools in the market like CaseTalk, Microsoft Visio for Enterprise Architects, Infagon etc.
30.What is a private assembly?
A private assembly is local to the installation directory of an application and is used only by that application.
31.What is a shared assembly?
A shared assembly is kept in the global assembly cache (GAC) and can be used by one or more applications on a machine.
32.
33.What are design patterns?
Design patterns are common solutions to common design problems.
34.What is a connection pool?
A connection pool is a ‘collection of connections’ which are shared between the clients requesting one. Once the connection is closed, it returns back to the pool. This allows the connections to be reused.
35.What is a flat file?
A flat file is the name given to text, which can be read or written only sequentially.
36.Where do custom controls reside?
In the global assembly cache (GAC).
37.What is a third-party control ?
A third-party control is one that is not created by the owners of a project. They are usually used to save time and resources and reuse the functionality developed by others (third-party).
38.What is a binary formatter?
Binary formatter is used to serialize and deserialize an object in binary format.
39.What is Boxing/Unboxing?
Boxing is used to convert value types to object.
E.g. int x = 1;
object obj = x ;
Unboxing is used to convert the object back to the value type.
E.g. int y = (int)obj;
Boxing/unboxing is quiet an expensive operation.
40.What is a COM Callable Wrapper (CCW)?
CCW is a wrapper created by the common language runtime(CLR) that enables COM components to access .NET objects.
41.What is a Runtime Callable Wrapper (RCW)?
RCW is a wrapper created by the common language runtime(CLR) to enable .NET components to call COM components.
42.What is a digital signature?
A digital signature is an electronic signature used to verify/gurantee the identity of the individual who is sending the message.
43.What is garbage collection?
Garbage collection is the process of managing the allocation and release of memory in your applications. Read this article for more information.
44.What is globalization?
Globalization is the process of customizing applications that support multiple cultures and regions.
45.What is localization?
Localization is the process of customizing applications that support a given culture and regions.
46”.
47.
48.
49.
50.
51.
52.
53.
54.
55.
56.
57.
58.What is a dynamic assembly?
A dynamic assembly is created dynamically at run time when an application requires the types within these assemblies.
59.
60.
61.
62.What is CLS?
Common Language Specification (CLS) defines the rules and standards to which languages must adhere to in order to be compatible with other .NET languages. This enables C# developers to inherit from classes defined in VB.NET or other .NET compatible languages.
63assembly OR C:WINNTassembly) (shfusion.dll tool)
· gacutil -i abc.dll
64.What is the caspol.exe tool used for?
The caspol tool grants and modifies permissions to code groups at the user policy, machine policy, and enterprise policy levels.
65.What is a garbage collector?
A garbage collector performs periodic checks on the managed heap to identify objects that are no longer required by the program and removes them from memory.
66.
67
68.
69.
70.How can you detect if a viewstate has been tampered?
By setting the EnableViewStateMac to true in the @Page directive. This attribute checks the encoded and encrypted viewstate for tampering.
71.Can I use different programming languages in the same application?
Yes. Each page can be written with a different programming language in the same application. You can create a few pages in C# and a few in VB.NET.
72.
73.How do you secure your connection string information?
By using the Protected Configuration feature.
74.How do you secure your configuration files to be accessed remotely by unauthorized users?
ASP.NET configures IIS to deny access to any user that requests access to the Machine.config or Web.config files.
75.What is Ilasm.exe used for?
Ilasm.exe is a tool that generates PE files from MSIL code. You can run the resulting executable to determine whether the MSIL code performs as expected.
76.What is Ildasm.exe used for?
Ildasm.exe is a tool that takes a PE file containing the MSIL code as a parameter and creates a text file that contains managed code.
77.What is the ResGen.exe tool used for?
ResGen.exe is a tool that is used to convert resource files in the form of .txt or .resx files to common language runtime binary .resources files that can be compiled into satellite assemblies.
78.How can I configure ASP.NET applications that are running on a remote machine?
You can use the Web Site Administration Tool to configure remote websites..
80.I have created a configuration setting in my web.config and have kept it at the root level. How do I prevent it from being overridden by another web.config that appears lower in the hierarchy?
By setting the element’s Override attribute to false.
81.
82.Can you change a Master Page dynamically at runtime? How?
Yes. To change a master page, set the MasterPageFile property to point to the .master page during the PreInit page event.
83.How do you apply Themes to an entire application?
By specifying the theme in the web.config file.
Eg: <configuration>
<system.web>
<pages theme=”BlueMoon” />
</system.web>
</configuration>
84.How do you exclude an ASP.NET page from using Themes?
To remove themes from your page, use the EnableTheming attribute of the Page directive.
85.Your client complains that he has a large form that collects user input. He wants to break the form into sections, keeping the information in the forms related. Which control will you use?
The ASP.NET Wizard Control.
86.Do webservices support data reader?
No. However it does support a dataset.
87.
88.What happens when you change the web.config file at run time?
ASP.NET invalidates the existing cache and assembles a new cache. Then ASP.NET automatically restarts the application to apply the changes.
89.Can you programmatically access IIS configuration settings?
Yes. You can use ADSI, WMI, or COM interfaces to configure IIS programmatically.
90.What are the differences between ASP.NET 1.1 and ASP.NET 2.0?
A comparison chart containing the differences between ASP.NET 1.1 and ASP.NET 2.0 can be found over here.
91.
92.
93.
94.
95.How do you disable AutoPostBack?
Hence the AutoPostBack can be disabled on an ASP.NET page by disabling AutoPostBack on all the controls of a page. AutoPostBack is caused by a control on the page.
96.What are the different code models available in ASP.NET 2.0?
There are 2 code models available in ASP.NET 2.0. One is the single-file page and the other one is the code behind page.
97.Which base class does the web form inherit from?
Page class in the System.Web.UI namespace.
98.
99.
100.
101.
102.
103.How do you indentify that the page is post back?
By checking the IsPostBack property. If IsPostBack is True, the page has been posted back.
104.
105.How is a Master Page different from an ASP.NET page?
The MasterPage has a @Master top directive and contains ContentPlaceHolder server controls. It is quiet similar to an ASP.NET page.
106.How do you attach an exisiting page to a Master page?
By using the MasterPageFile attribute in the @Page directive and removing some markup.
107.Where do you store your connection string information?
The connection string can be stored in configuration files (web.config).
108FrameworkVersionCONFIG.
There can be multiple web.config files in an application nested at different hierarchies. However there can be only one machine.config file on a web server.
109.How do you set the title of an ASP.NET page that is attached to a Master Page?
By using the Title property of the @Page directive in the content page. Eg:
<@Page MasterPageFile=”Sample.master” Title=”I hold content” %>
110.
111.What are Themes?
Themes are a collection of CSS files, .skin files, and images. They are text based style definitions and are very similar to CSS, in that they provide a common look and feel throughout the website.
112.
113.What is the difference between Skins and Css files?
Css is applied to HTML controls whereas skins are applied to server controls.
114.What is a User Control?
User controls are reusable controls, similar to web pages. They cannot be accessed directly.
115.Explain briefly the steps in creating a user control?
· Create a file with .ascx extension and place the @Control directive at top of the page.
· Included the user control in a Web Forms page using a @Register directive
116.
117.
118.
119.What method do you use to explicitly kill a users session?
Session.Abandon().
120.
|
http://www.lessons99.com/dot-net-interview-questions.html
|
CC-MAIN-2019-04
|
en
|
refinedweb
|
androidStatusBarkey in
app.json. It exposes the following options:
light-content- The status bar content is light colored (usually white). This is the default value.
dark-content- The status bar content is dark colored (usually dark grey). This is only available on Android 6.0 onwards. It will fallback to
light-contentin older versions.
{ "expo": { "androidStatusBar": { "backgroundColor": "#C2185B" } } }
StatusBarAPI from React Native
StatusBarAPI allows you to dynamically control the appearance of the status bar. You can use it as component, or as an API. Check the documentation on the React Native website for examples.
Viewon top of your screen with a background color to act as a status bar, or set a top padding. You can get the height of the status bar with
Expo.Constants.statusBarHeight. Though this should be your last resort since this doesn't work very well when status bar's height changes.
import React from 'react'; import { StyleSheet, View } from 'react-native'; import { Constants } from 'expo'; const styles = StyleSheet.create({ statusBar: { backgroundColor: "#C2185B", height: Constants.statusBarHeight, }, // rest of the styles }); const MyComponent = () => { <View> <View style={styles.statusBar} /> {/* rest of the content */} </View> }
Viewinstead.
|
https://docs.expo.io/versions/v25.0.0/guides/configuring-statusbar
|
CC-MAIN-2019-04
|
en
|
refinedweb
|
This is the mail archive of the [email protected] mailing list for the GCC project.
I believe I've found the problem. Before gcse we had (insn 4062 9281 4065 383 0x20000053188 (set (reg/v:DI 82 [ swapped ]) (eq:DI (reg/v:DI 82 [ swapped ]) (const_int 0 [0x0]))) 149 {*setcc_internal} (nil) (nil)) (jump_insn 4065 4062 9282 383 0x20000053188 (set (pc) (if_then_else (eq (reg/v:DI 82 [ swapped ]) (const_int 0 [0x0])) (label_ref 4223) (pc))) 174 {*bcc_normal} (nil) (expr_list:REG_BR_PRED (concat (const_int 20 [0x14]) (const_int 7000 [0x1b58])) (nil))) Note that reg 82 is modified and then tested. Our use of get_condition looked back through the first instruction to return (ne:DI (reg:DI 82) (const_int 0)). So we wound up inferring the wrong condition. My find_reloads test case is fixed by the following patch. I'm going to start a full bootstrap shortly. This also brings up an interesting point: swapped is boolean (I don't know whether it's declared bool or not, but that's not relevant at the moment). Ideally, we'd infer equality with one on the other side of the branch. Dunno how often that would make a difference. In this case I think we were able to make substitutions only because we had swapped = 0; label: A goal_alternative_swapped = swapped; B swapped = !swapped; if (swapped) { C goto label; } with the inferrence at C of swapped == 0, we've got two identical sets that together dominate the copy to goal_alternative_swapped. r~ Index: gcse.c =================================================================== RCS file: /cvs/gcc/gcc/gcc/gcse.c,v retrieving revision 1.232 diff -c -p -d -r1.232 gcse.c *** gcse.c 27 Jan 2003 11:30:35 -0000 1.232 --- gcse.c 7 Feb 2003 01:37:46 -0000 *************** struct ls_expr *** 479,484 **** --- 479,487 ---- rtx reaching_reg; /* Register to use when re-writing. */ }; + /* Array of implicit set patterns indexed by basic block index. */ + static rtx *implicit_sets; + /* Head of the list of load/store memory refs. */ static struct ls_expr * pre_ldst_mems = NULL; *************** static int load_killed_in_block_p PAR *** 614,619 **** --- 617,624 ---- static void canon_list_insert PARAMS ((rtx, rtx, void *)); static int cprop_insn PARAMS ((rtx, int)); static int cprop PARAMS ((int)); + static rtx fis_get_condition PARAMS ((rtx)); + static void find_implicit_sets PARAMS ((void)); static int one_cprop_pass PARAMS ((int, int, int)); static bool constprop_register PARAMS ((rtx, rtx, rtx, int)); static struct expr *find_bypass_set PARAMS ((int, int)); *************** record_last_set_info (dest, setter, data *** 2470,2476 **** Currently src must be a pseudo-reg or a const_int. - F is the first insn. TABLE is the table computed. */ static void --- 2475,2480 ---- *************** compute_hash_table_work (table) *** 2532,2537 **** --- 2536,2547 ---- note_stores (PATTERN (insn), record_last_set_info, insn); } + /* Insert implicit sets in the hash table. */ + if (table->set_p + && implicit_sets[current_bb->index] != NULL_RTX) + hash_scan_set (implicit_sets[current_bb->index], + current_bb->head, table); + /* The next pass builds the hash table. */ for (insn = current_bb->head, in_libcall_block = 0; *************** cprop (alter_jumps) *** 4478,4483 **** --- 4488,4604 ---- return changed; } + /* Similar to get_condition, only the resulting condition must be + valid at JUMP, instead of at EARLIEST. + + This differs from noce_get_condition in ifcvt.c in that we prefer not to + settle for the condition variable in the jump instruction being integral. + We prefer to be able to record the value of a user variable, rather than + the value of a temporary used in a condition. This could be solved by + recording the value of *every* register scaned by canonicalize_condition, + but this would require some code reorganization. */ + + static rtx + fis_get_condition (jump) + rtx jump; + { + rtx cond, set, tmp, insn, earliest; + bool reverse; + + if (! any_condjump_p (jump)) + return NULL_RTX; + + set = pc_set (jump); + cond = XEXP (SET_SRC (set), 0); + + /* If this branches to JUMP_LABEL when the condition is false, + reverse the condition. */ + reverse = (GET_CODE (XEXP (SET_SRC (set), 2)) == LABEL_REF + && XEXP (XEXP (SET_SRC (set), 2), 0) == JUMP_LABEL (jump)); + + /* Use canonicalize_condition to do the dirty work of manipulating + MODE_CC values and COMPARE rtx codes. */ + tmp = canonicalize_condition (jump, cond, reverse, &earliest, NULL_RTX); + if (!tmp) + return NULL_RTX; + + /* Verify that the given condition is valid at JUMP by virtue of not + having been modified since EARLIEST. */ + for (insn = earliest; insn != jump; insn = NEXT_INSN (insn)) + if (INSN_P (insn) && modified_in_p (tmp, insn)) + break; + if (insn == jump) + return tmp; + + /* The condition was modified. See if we can get a partial result + that doesn't follow all the reversals. Perhaps combine can fold + them together later. */ + tmp = XEXP (tmp, 0); + if (!REG_P (tmp) || GET_MODE_CLASS (GET_MODE (tmp)) != MODE_INT) + return NULL_RTX; + tmp = canonicalize_condition (jump, cond, reverse, &earliest, tmp); + if (!tmp) + return NULL_RTX; + + /* For sanity's sake, re-validate the new result. */ + for (insn = earliest; insn != jump; insn = NEXT_INSN (insn)) + if (INSN_P (insn) && modified_in_p (tmp, insn)) + return NULL_RTX; + + return tmp; + } + + /* Find the implicit sets of a function. An "implicit set" is a constraint + on the value of a variable, implied by a conditional jump. For example, + following "if (x == 2)", the then branch may be optimized as though the + conditional performed an "explicit set", in this example, "x = 2". This + function records the set patterns that are implicit at the start of each + basic block. */ + + static void + find_implicit_sets () + { + basic_block bb, dest; + unsigned int count; + rtx cond, new; + + count = 0; + FOR_EACH_BB (bb) + /* Check for more than one sucessor. */ + if (bb->succ && bb->succ->succ_next) + { + cond = fis_get_condition (bb->end); + + if (cond + && (GET_CODE (cond) == EQ || GET_CODE (cond) == NE) + && GET_CODE (XEXP (cond, 0)) == REG + && REGNO (XEXP (cond, 0)) >= FIRST_PSEUDO_REGISTER + && CONSTANT_P (XEXP (cond, 1))) + { + dest = GET_CODE (cond) == EQ ? BRANCH_EDGE (bb)->dest + : FALLTHRU_EDGE (bb)->dest; + + if (dest && ! dest->pred->pred_next + && dest != EXIT_BLOCK_PTR) + { + new = gen_rtx_SET (VOIDmode, XEXP (cond, 0), + XEXP (cond, 1)); + implicit_sets[dest->index] = new; + if (gcse_file) + { + fprintf(gcse_file, "Implicit set of reg %d in ", + REGNO (XEXP (cond, 0))); + fprintf(gcse_file, "basic block %d\n", dest->index); + } + count++; + } + } + } + + if (gcse_file) + fprintf (gcse_file, "Found %d implicit sets\n", count); + } + /* Perform one copy/constant propagation pass. PASS is the pass count. If CPROP_JUMPS is true, perform constant propagation into conditional jumps. If BYPASS_JUMPS is true, *************** one_cprop_pass (pass, cprop_jumps, bypas *** 4496,4503 **** --- 4617,4633 ---- local_cprop_pass (cprop_jumps); + /* Determine implicit sets. */ + implicit_sets = (rtx *) xcalloc (last_basic_block, sizeof (rtx)); + find_implicit_sets (); + alloc_hash_table (max_cuid, &set_hash_table, 1); compute_hash_table (&set_hash_table); + + /* Free implicit_sets before peak usage. */ + free (implicit_sets); + implicit_sets = NULL; + if (gcse_file) dump_hash_table (gcse_file, "SET", &set_hash_table); if (set_hash_table.n_elems > 0)
|
http://gcc.gnu.org/ml/gcc-patches/2003-02/msg00468.html
|
CC-MAIN-2019-04
|
en
|
refinedweb
|
Spring Boot Exit Codes
Last modified: November 5, 2018
1. Overview
Every application returns an exit code on exit; this code can be any integer value including negative values.
In this quick tutorial, we’re going to find out how we can return exit codes from a Spring Boot application.
2. Spring Boot and Exit Codes
A Spring Boot application will exit with the code 1 if an exception occurs at startup. Otherwise, on a clean exit, it provides 0 as the exit code.
Spring registers shutdown hooks with the JVM to ensure the ApplicationContext closes gracefully on exit. In addition to that, Spring also provides the interface org.springframework.boot.ExitCodeGenerator. This interface can return the specific code when System.exit() is called.
3. Implementing Exit Codes
Boot provides three methods that allow us to work with exit codes.
The ExitCodeGenerator interface and ExitCodeExceptionMapper allow us to specify custom exit codes while the ExitCodeEvent allows us to read the exit code on exit.
3.1. ExitCodeGenerator
Let’s create a class that implements the ExitCodeGenerator interface. We have to implement the method getExitCode() which returns an integer value:
@SpringBootApplication public class DemoApplication implements ExitCodeGenerator { public static void main(String[] args) { System.exit(SpringApplication .exit(SpringApplication.run(DemoApplication.class, args))); } @Override public int getExitCode() { return 42; } }
Here, the DemoApplication class implements the ExitCodeGenerator interface. Also, we wrapped the call to SpringApplication.run() with SpringApplication.exit().
On exit, the exit code will now be 42.
3.2. ExitCodeExceptionMapper
Now let’s find out how we can return an exit code based on a runtime exception. For this, we implement a CommandLineRunner which always throws a NumberFormatException and then register a bean of type ExitCodeExceptionMapper:
@Bean CommandLineRunner createException() { return args -> Integer.parseInt("test") ; } @Bean ExitCodeExceptionMapper exitCodeToexceptionMapper() { return exception -> { // set exit code base on the exception type if (exception.getCause() instanceof NumberFormatException) { return 80; } return 1; }; }
Within the ExitCodeExceptionMapper, we simply map the exception to a certain exit code.
3.3. ExitCodeEvent
Next, we’ll capture an ExitCodeEvent to read the exit code of our application. For this, we simply register an event listener which subscribes to ExitCodeEvents (named DemoListener in this example):
@Bean DemoListener demoListenerBean() { return new DemoListener(); } private static class DemoListener { @EventListener public void exitEvent(ExitCodeEvent event) { System.out.println("Exit code: " + event.getExitCode()); } }
Now, when the application exits, the method exitEvent() will be invoked and we can read the exit code from the event.
4. Conclusion
In this article, we’ve gone through multiple options provided by Spring Boot to work with exit codes.
It’s very important for any application to return the right error code while exiting. The exit code determines the state of the application when the exit happened. In addition to that, it helps in troubleshooting.
Code samples can be found over on GitHub.
|
https://www.baeldung.com/spring-boot-exit-codes
|
CC-MAIN-2019-04
|
en
|
refinedweb
|
Wizards are great tools to guide the user through tedious tasks. In many other UI toolkits, you have to code a wizard framework. JFace saves you the trouble by providing a solid wizard framework. This chapter introduces you to the JFace wizard framework with a sample application. You learn how to do the following:
Wizards are used heavily in the Eclipse IDE. You use wizards to create your projects and Java classes in Eclipse. The JFace wizard framework provides an effective mechanism to guide the user in completing complex and tedious tasks.
In JFace, a wizard is represented by the IWizard interface. The Wizard class provides an abstract base implementation of the IWizard interface. To create your own wizards, you usually extend the Wizard class instead of implementing the IWizard interface from scratch.
A wizard consists of one or more wizard pages, which are represented by the IWizardPage interface. The WizardPage class provides an abstract implementation of the IWizardPage interface. You can easily create wizard pages by extending the WizardPage class.
In order to run a wizard, you need a wizard container. Represented by the IWizardContainer interface, a wizard container is used to display wizard pages and provide a page navigation mechanism. WizardDialog is a ready-to-use wizard container. The layout of a wizard dialog is shown in Figure 19-1. The current wizard page title, description, and image are shown at the top of the dialog. The wizard page content is displayed in the center. If you need to run a time-consuming task, you can optionally configure the dialog to display a progress indicator below the wizard page content. At the bottom of the dialog, there are several navigation buttons that the user can use to navigate among multiple wizard pages.
Figure 19-1
Building a custom wizard involves the following steps:
In this chapter, you learn how to use the wizard framework through a hotel reservation sample application (see Figure 19-2). The hotel reservation wizard gathers the user's reservation details, information about the user, and payment information, and stores all the information in a model data object. The wizard consists of three wizard pages. The first page gathers the basic room reservation details. The second page collects the user's information, such as the user's name, phone number, e-mail address, and so on. The last page queries the user for payment information.
Figure 19-2
Before creating the wizard, you first construct a class to model the data:
// The data model. class ReservationData { Date arrivalDate; Date departureDate; int roomType; String customerName; String customerPhone; String customerEmail; String customerAddress; int creditCardType; String creditCardNumber; String creditCardExpiration; public String toString() { StringBuffer sb = new StringBuffer(); sb.append("* HOTEL ROOM RESERVATION DETAILS * "); sb.append("Arrival date: " + arrivalDate.toString() + " "); sb.append("Departure date: " + departureDate.toString() + " "); sb.append("Room type: " + roomType + " "); sb.append("Customer name: " + customerName + " "); sb.append("Customer email: " + customerEmail + " "); sb.append("Credit card no.: " + creditCardNumber + " "); return sb.toString(); } }
Then you create the wizard by extending the Wizard class:
public class ReservationWizard extends Wizard { // the model object. ReservationData data = new ReservationData(); public ReservationWizard() { setWindowTitle("Hotel room reservation wizard"); setNeedsProgressMonitor(true); setDefaultPageImageDescriptor( ImageDescriptor.createFromFile(null, "icons/hotel.gif")); } // Overrides org.eclipse.jface.wizard.IWizard#addPages() public void addPages() { addPage(new FrontPage()); addPage(new CustomerInfoPage()); addPage(new PaymentInfoPage()); } // Overrides org.eclipse.jface.wizard.IWizard#performFinish() public boolean performFinish() { try { // puts the data into a database ... getContainer().run(true, true, new IRunnableWithProgress() { public void run(IProgressMonitor monitor) throws InvocationTargetException, InterruptedException { monitor.beginTask("Store data", 100); monitor.worked(40); // stores the data, and it may take long time. System.out.println(data); Thread.sleep(2000); monitor.done(); } }); } catch (InvocationTargetException e) { e.printStackTrace(); } catch (InterruptedException e) { e.printStackTrace(); } return true; } // Overrides org.eclipse.jface.wizard.IWizard#performCancel() public boolean performCancel() { boolean ans = MessageDialog.openConfirm(getShell(), "Confirmation", "Are you sure to cancel the task?"); if(ans) return true; else return false; } }
The model data object is created first. Within the constructor of the class, you call several methods to configure the wizard:
The ReservationWizard class overrides three methods from the Wizard class.
The addPages method is called before the wizard opens. The Wizard implementation does nothing. In the ReservationWizard class, you override this method to add three wizard pages with the addPage method:
public void addPage(IWizardPage page)
The addPage method appends the specified wizard page at the end of the page list. Implementation of those wizard page classes is discussed later in the chapter.
The performFinish method is invoked when the user clicks the Finish button. Because this method is declared as abstract, you have to implement it. You can return a Boolean flag to indicate whether the finish request is granted or not. If the finish request is accepted (i.e., performFinish returns true), the wizard is closed. Otherwise, the wizard remains open.
In the reservation wizard, the performFinish method tries to store the model data into a database. Because this procedure may take considerable time, it is put into an IRunnableWithProgress object and executed through the wizard container. During the execution, the progress monitor shows the progress. The getContainer method simply returns the container hosting the wizard:
public IWizardContainer getContainer()
Notice that the Finish button is not always in the enabled status. It is enabled only if the canFinish method (in the Wizard class) returns true:
public boolean canFinish()
The default implementation of this method returns true only if every wizard page in the wizard is completed. (Wizard page completion is covered later in the chapter.) You can override this method to modify the default behavior.
Similar to the performFinished method, the performCancel method is executed when the Cancel button is clicked. The wizard implementation of the performCancel method simply returns true to indicate that the cancel request is accepted. The ReservationWizard class overrides the performCancel method to display a confirmation dialog to the user. If the user confirms the cancel action, the performCancel method returns true and the wizard closes. Otherwise, the performCancel method returns false and the cancel request is rejected.
The Cancel button is always enabled when the wizard is open.
So far, this chapter had covered the methods used and overridden in the ReservationWizard class. In addition to those methods, the Wizard class provides many methods to access wizard pages.
The getPageCount method returns the number of pages in the wizard:
public int getPageCount()
To retrieve all the wizard pages as an array, you can use the getPages method:
public IWizardPage[] getPages()
The getPage method allows you to get an individual page by its name:
public IWizardPage getPage(String name)
The first page to be displayed in the wizard can be obtained through the getStartingPage method:
public IWizardPage getStartingPage()
The getNextPage method returns the successor of the specified wizard page or null if none:
public IWizardPage getNextPage(IWizardPage page)
Similarly, the getPreviousPage returns the predecessor of the given page or null if none:
public IWizardPage getPreviousPage(IWizardPage page)
To obtain the currently displayed wizard page from a container, you can use the getCurrentPage method of the IWizardPage interface:
public IWizardPage getCurrentPage()
The data model and wizard have been created. The next step is to create all the wizard pages used in the wizard.
The first wizard page (refer to Figure 19-2) gathers basic reservation information — arrival date, departure date, and room type.
The following is the implementation of the first wizard page:
public class FrontPage extends WizardPage { Combo comboRoomTypes; Combo comboArrivalYear; Combo comboArrivalMonth; Combo comboArrivalDay; Combo comboDepartureYear; Combo comboDepartureMonth; Combo comboDepartureDay; FrontPage() { super("FrontPage"); setTitle("Your reservation information"); setDescription( "Select the type of room and your arrival date & departure date"); } /* (non-Javadoc) * @see org.eclipse.jface.dialogs.IDialogPage#createControl(Composite) */ public void createControl(Composite parent) { Composite composite = new Composite(parent, SWT.NULL); GridLayout gridLayout = new GridLayout(2, false); composite.setLayout(gridLayout); new Label(composite, SWT.NULL).setText("Arrival date: "); Composite compositeArrival = new Composite(composite, SWT.NULL); compositeArrival.setLayout(new RowLayout()); String[] months = new String[]{"Jan", "Feb", "Mar", "Apr", "May", "Jun", "Jul", "Aug", "Sep", "Oct", "Nov", "Dec" }; Calendar calendar = new GregorianCalendar(); // today. ((ReservationWizard)getWizard()).data.arrivalDate = calendar.getTime(); comboArrivalMonth = new Combo(compositeArrival,SWT.BORDER | SWT.READ_ONLY); for(int i=0; i((ReservationWizard)getWizard()).data.roomType = comboRoomTypes.getSelectionIndex(); } }); setControl(composite); } // validates the dates and update the model data object. private void setDates(int arrivalDay, int arrivalMonth, int arrivalYear, int departureDay, int departureMonth, int departureYear) { Calendar calendar = new GregorianCalendar(); calendar.set(Calendar.DAY_OF_MONTH, arrivalDay); calendar.set(Calendar.MONTH, arrivalMonth); calendar.set(Calendar.YEAR, arrivalYear); Date arrivalDate = calendar.getTime(); calendar.set(Calendar.DAY_OF_MONTH, departureDay); calendar.set(Calendar.MONTH, departureMonth); calendar.set(Calendar.YEAR, departureYear); Date departureDate = calendar.getTime(); System.out.println(arrivalDate + " - " + departureDate); if(! arrivalDate.before(departureDate)) { // arrival date is before dep. date. setErrorMessage("The arrival date is not before the departure date"); setPageComplete(false); }else{ setErrorMessage(null); // clear error message. setPageComplete(true); ((ReservationWizard)getWizard()).data.arrivalDate = arrivalDate; ((ReservationWizard)getWizard()).data.departureDate = departureDate; } } }
The FrontPage class extends the WizardPage class and overrides the createControl method.
The createControl method will be called when the wizard is created. Within this method, the widget tree is created. Also, you call the setControl method to set the top-level control for this page. Selection event listeners are registered for the combos on the page. When an element in a combo is selected, the corresponding property in the model data object is updated. For arrival and departure dates, you use a function named setDates to validate the input and update the model object, if necessary.
In case of error (for example, the user sets the arrival data after the departure date), setErrorMessage is used to notify the wizard container to display an error message (see Figure 19-3):
public void setErrorMessage(String newMessage)
Figure 19-3
To remove the error message, you can call setErrorMessage with null as the argument.
In order to let the user resolve the error before proceeding to the next page or clicking the Finish button, you set the completion status of this page to true:
public void setPageComplete(boolean complete)
You can check the completion status by using the isPageCompleted method:
public boolean isPageComplete()
The Next button is enabled only if canFlipToNextPage returns true:
public boolean canFlipToNextPage()
The WizardPage implementation of this method returns true only when the page is completed and the next page exists. You may override this method to modify this behavior.
In the constructor of the FrontPage class, you called several methods to configure the wizard page. There are several methods that you can use to configure the wizard pages:
The two other wizard pages, CustomerInfoPage and PaymentInfoPage, can be implemented in similar ways.
You've created the wizard and added wizard pages. Now you are ready to run it in a wizard container. The following code shows the reservation wizard in a WizardDialog:
ReservationWizard wizard = new ReservationWizard(); WizardDialog dialog = new WizardDialog(getShell(), wizard); dialog.setBlockOnOpen(true); dialog.open();
First, an instance of the ReservationWizard class is created. The wizard instance is then used to create a WizardDialog object. After configuring the dialog, you bring the dialog up by calling its open method.
After the wizard dialog is open, you can then fill in necessary information and navigate among wizard pages using the Back and Next buttons. After entering all the required information correctly, you can finish your reservation by clicking Finish or you can cancel the task by clicking Cancel.
If the user uses a wizard regularly, have the wizard remember some dialog settings so that the user does not have to key in certain information repeatedly. In the hotel reservation dialog, customer information such as name, phone number, and e-mail address should be saved after the wizard is closed and loaded when the wizard is opened again.
The JFace wizard framework has built-in support for dialog setting persistence. The IDialogSettings interface represents a storage mechanism for making settings persistent. You can store a collection of key-value pairs in such stores. The key must be a string, and the values can be either a string or an array of strings. If you need to store other primitive types, such as int and double, you store them as strings and use some convenient functions declared in the interface to perform conversion. The DialogSettings class is a concrete implementation of the IDailogSettings interface. A DialogSettings store persists the settings in an XML file.
Usually, the dialog settings should be loaded before the wizard is opened and they should be saved when the wizard is closed. The following code is used in the sample wizard to load and save dialog settings:
public class ReservationWizard extends Wizard { static final String DIALOG_SETTING_FILE = "userInfo.xml"; static final String KEY_CUSTOMER_NAME = "customer-name"; static final String KEY_CUSTOMER_EMAIL = "customer-email"; static final String KEY_CUSTOMER_PHONE = "customer-phone"; static final String KEY_CUSTOMER_ADDRESS = "customer-address"; // the model object. ReservationData data = new ReservationData(); public ReservationWizard() { setWindowTitle("Hotel room reservation wizard"); setNeedsProgressMonitor(true); setDefaultPageImageDescriptor(ImageDescriptor.createFromFile(null, "icons/hotel.gif")); DialogSettings dialogSettings = new DialogSettings("userInfo"); try { // loads existing settings if any. dialogSettings.load(DIALOG_SETTING_FILE); } catch (IOException e) { e.printStackTrace(); } setDialogSettings(dialogSettings); } /* (non-Javadoc) * @see org.eclipse.jface.wizard.IWizard#performFinish() */ public boolean performFinish() { if(getDialogSettings() != null) { getDialogSettings().put(KEY_CUSTOMER_NAME, data.customerName); getDialogSettings().put(KEY_CUSTOMER_PHONE, data.customerPhone); getDialogSettings().put(KEY_CUSTOMER_EMAIL, data.customerEmail); getDialogSettings().put(KEY_CUSTOMER_ADDRESS, data.customerAddress); try { // Saves the dialog settings into the specified file. getDialogSettings().save(DIALOG_SETTING_FILE); } catch (IOException e1) { e1.printStackTrace(); } } ... return true; } ... }
In the preceding list, code in bold is the code inserted to support dialog setting persistence. A DialogSettings object is created with the section name userInfo. The load method of the DialogSettings class is used to load settings from an XML file:
public void load(String fileName) throws IOException
After the DialogSettings instance is created and loaded with existing settings, the setDialogSettings method of the Wizard class is invoked to register the dialog settings instance to the wizard:
public void setDialogSettings(IDialogSettings settings)
When the DialogSettings instance is registered, you can access it from wizard pages easily with the getDialogSettings method of the WizardPage class.
Finally, the dialog settings are saved to the file before the wizard closes with the save method of the DialogSettings class. The data stored in the XML file looks like the following:
Here is the code to obtain persisted dialog settings and use them to fill the UI fields in the CustomerInfoPage class:
public class CustomerInfoPage extends WizardPage { Text textName; Text textPhone; Text textEmail; Text textAddress; public CustomerInfoPage() { super("CustomerInfo"); setTitle("Customer Information"); setPageComplete(false); } /* * (non-Javadoc) * * @see org.eclipse.jface.dialogs.IDialogPage#createControl(Composite) */ public void createControl(Composite parent) { Composite composite = new Composite(parent, SWT.NULL); composite.setLayout(new GridLayout(2, false)); ... if (getDialogSettings() != null) { textName.setText( getDialogSettings().get( ReservationWizard.KEY_CUSTOMER_NAME)); textPhone.setText( getDialogSettings().get( ReservationWizard.KEY_CUSTOMER_PHONE)); textEmail.setText( getDialogSettings().get( ReservationWizard.KEY_CUSTOMER_EMAIL)); textAddress.setText( getDialogSettings().get( ReservationWizard.KEY_CUSTOMER_ADDRESS)); } setControl(composite); } private boolean validDialogSettings() { if(getDialogSettings().get(ReservationWizard.KEY_CUSTOMER_NAME) == null || getDialogSettings().get(ReservationWizard.KEY_CUSTOMER_ADDRESS)== null|| getDialogSettings().get(ReservationWizard.KEY_CUSTOMER_EMAIL)== null || getDialogSettings().get(ReservationWizard.KEY_CUSTOMER_PHONE) == null) return false; return true; } }
In the preceding code, if a specific record is available in the dialog settings store, it is used to fill the corresponding text field.
You should now know how to create a wizard, add wizard pages to it, and run it. The JFace wizard framework greatly simplifies the task of creating wizards. There are many such useful frameworks in JFace. The next chapter discusses JFace text, which is another important JFace framework.
Part I - Fundamentals
Part II - Design Basics
Part III - Dynamic Controls
Part IV - Application Development
|
https://flylib.com/books/en/1.70.1/jface_wizards.html
|
CC-MAIN-2019-04
|
en
|
refinedweb
|
A pure Dart library for Mixpanel analytics.
Add this to your package's pubspec.yaml file:
dependencies: pure_mixpanel: ^1.0.6
You can install packages from the command line:
with Flutter:
$ flutter packages get
Alternatively, your editor might support
flutter packages get.
Check the docs for your editor to learn more.
Now in your Dart code, you can use:
import 'package:pure_mixpanel/pure_mixpanel.dart';
We analyzed this package on Jan 15, 2019, and provided a score, details, and suggestions below. Analysis was completed with status completed using:
Detected platforms: Flutter
References Flutter, and has no conflicting libraries.
Format
lib/pure_mixpanel.dart.
Run
flutter format to format
lib/pure_mixpanel.dart.
Support latest stable Dart SDK in
pubspec.yaml. (-20 points)
The SDK constraint in pubspec.yaml doesn't allow the latest stable Dart SDK release.
pure_mixpanel.dart.
|
https://pub.dartlang.org/packages/pure_mixpanel
|
CC-MAIN-2019-04
|
en
|
refinedweb
|
Sorry for the confusing title, as I am new to C++
Basically, what I am trying to do is, to read a simple test.txt file's content, which are just two words actually: "Hello World" and desplay thier hexadecimal content in a shell, CMD or command line.(Whatever it is called), if the text thing is confusing you here, just think of it, as if I am trying to open a simple .wav file and read it's data and display it on CMD, like you would using
#include <iostream> #include <fstream> using namespace std; int main(){ fstream myFile; myFile.open("test.txt"); //reading goes here, the problem is displaying the hex data on the editor (CMD ) return 0; }
I am new to this, and downloaded codeblocks, so I can only work with the CMD as of now. I can open files with the fstream class, and write in it, but sadly I can not read it's hexa content.
Is there anyway to do this by simple means in C++?
|
https://www.daniweb.com/programming/software-development/threads/456871/reading-hexadecimal-numbers-into-shell
|
CC-MAIN-2019-04
|
en
|
refinedweb
|
Web Scraping Indeed.fi for Key Job Skills in Finland
As many of you probably know, being a data scientist requires a large skill set.
To master all of that at a high level would probably take a lifetime!
So which of these skills are most employers actually looking for in Finland (I am currently living in Helsinki, Finland?
To answer that question, I am going to scrape job postings from Indeed.fi and by job title.
Program Set Up
The basic workflow of the program is
- Enter the city and the job title we want to search for jobs skills in matching (in quotes so it is a direct match) on Indeed.fi
- See the list of job postings displayed by the website
- Access the link to each job posting
- Scrape all of content in the job posting
- Filter it to only include words
- Reduce the words to a set so that each word is only counted once
- Keep a running total of the words and see how often a job posting included them
The program is written in Python 2.7 using the Jupyter Notebook .
I will create two functions:
- The first will scape an individual job posting for the HTML, clean it up to get the words only, then output the final list of words.
- The second will manage which URLs to access via the job postings Indeed’s website links to, count the required skills and plot the skill frequency as the output.
Import the necessary library
In this post, I will use the urllib2 library to connect to the websites, the BeautifulSoup library to scrape the page content, the re library for parsing the words and filtering out other markup based on regular expressions, and pandas to manage and plot the final results
from bs4 import BeautifulSoup
import urllib2
import requests
import re
from time import sleep
from collections import Counter
from nltk.corpus import stopwords
import pandas as pd
%matplotlib inline
First Function: text_cleaner(website):
This function will be called every time we access a new job posting. Its input is a URL for a website, while the output will be a final set of words collected from that website.
Second Function: skill_info_fi(city = None, job = None):
This function will take a desired city and look for all new job postings on Indeed.fi. It will crawl all of the job postings and keep track of how many use a preset list of typical data science skills. The final percentage for each skill is then displayed at the end of the collation.
- Inputs: The location’s city and job. These are optional. If no city/state is input, the function will assume a national search for data scientist job (this can take a while!!!). Input the city/state as strings, such as skills_info(city = ‘Helsinki’, job = ‘Data Analytics’).
- Output: A bar chart showing the most commonly desired skills in the job market for a job title. Besides that, the function also export the plot to PNG file for later use
Result
Let’s now try running our new function on Helsinki, Espoo and Suomi (mean Finland for nationwide result) for 3 job in ‘Data scientist’, ‘Data analytics’ and ‘ machine learning’ to see what results we get. Just as a note, all of these results were run on September 11, 2018 .
Data Scientist Job
- Helsinki: There were 19 Data Scientists jobs found in Helsinki
- Espoo: There were 19 Data Scientists jobs found in Espoo
- Nationwide: There were 23 Data Scientists jobs found in Nationwide
Data Analytics Job
- Helsinki: There were 25 Data Analytics jobs found in Helsinki
- Espoo: There were 25 Data Analytics jobs found in Espoo
- Nationwide: There were 38 Data Analytics jobs found in Nationwide
Machine Learning Job
- Helsinki: There were 83 Machine Learning jobs found in Helsinki
- Espoo: There were 91 Machine Learning jobs found in Espoo
- Nationwide: There were 107 Machine Learning jobs found in Nationwide
Conclusion
There are not many job listing related to ‘Data analytics’, ‘Data scientists’ and ‘Machine learning’ in Indeed.fi (compared to Monster.fi or Indeed.com) . The next post , I will try to scrape content on some other major job listing websites in Finland to analyse and have more information.
The Jupyter Notebook and all the plot can be found in my GitHub
|
https://medium.com/@toan.tran/web-scraping-indeed-fi-for-key-job-skills-in-finland-5b9e08cbb8f6
|
CC-MAIN-2019-04
|
en
|
refinedweb
|
A reactive stream wrapper around sqflite inspired by sqlbrite.
In your flutter project, add the dependency to your
pubspec.yaml
dependencies: ... streamqflite: 0.2.0
Import
streamqflite.dart
import 'package:streamqflite/streamqflite.dart';
Wrap your database in a
StreamDatabase.
var streamDb = StreamDatabase(db);
You can then listen to a query
// Emits a single row, doesn't emit if the row dosen't exist. Stream<MyEntry> singleQuery = streamDb.createQuery("MyTable", where: 'id = ?', whereArgs: [id]) .mapToOne((row) => MyEntry(row)); // Emits a single row, or the given default value if the row doesn't exist. Stream<MyEntry> singleOrDefaultQuery = streamDb.createQuery("MyTable", where: 'id = ?', whereArgs: [id]) .mapToOneOrDefault((row) => MyEntry(row), MyEntry.empty()); // Emits a list of rows. Stream<List<MyEntry>> listQuery = streamDb.createQuery("MyTable", where: 'name LIKE ?', whereArgs: [query]) .mapToList((row) => MyEntry(row)); var flexibleQuery = streamDb.createQuery("MyTable", where: 'name LIKE ?', whereArgs: [query]) .asyncMap((query) => { // query is lazy, this lets you not even execute it if you don't need to. if (condition) { return query(); } else { return Stream.empty(); } }).map((rows) { // Do something with all the rows. return ...; });
These queries will run once to get the current data, then again whenever the given table is modified
though the
StreamDatabase.
Add this to your package's pubspec.yaml file:
dependencies: streamqflite: ^0.2.0
You can install packages from the command line:
with Flutter:
$ flutter packages get
Alternatively, your editor might support
flutter packages get.
Check the docs for your editor to learn more.
Now in your Dart code, you can use:
import 'package:streamqflite/streamqflite.dart';
We analyzed this package on Jan 15, 2019, and provided a score, details, and suggestions below. Analysis was completed with status completed using:
Detected platforms: Flutter
References Flutter, and has no conflicting libraries.
Document public APIs. (-1 points)
28 out of 28
streamqflite.dart.
|
https://pub.dartlang.org/packages/streamqflite
|
CC-MAIN-2019-04
|
en
|
refinedweb
|
pam_putenv - set or change PAM environment variable
#include <security/pam_appl.h> int pam_putenv(pam_handle_t *pamh, const char *name_value);
The pam_putenv function is used to add or change the value of PAM environment variables as associated with the pamh handle. The pamh argument is an authentication handle obtained by a prior call to pam_start(). The name_value argument is a single NUL terminated string of one of the following forms: NAME=value of variable In this case the environment variable of the given NAME is set to the indicated value: value of variable. If this variable is already known, it is overwritten. Otherwise it is added to the PAM environment. NAME= This function sets the variable to an empty value. It is listed separately to indicate that this is the correct way to achieve such a setting. NAME Without an '=' the pam_putenv() function will delete the corresponding variable from the PAM environment. pam_putenv() operates on a copy of name_value, which means in contrast to putenv(3), the application is responsible to free the data.
PAM_PERM_DENIED Argument name_value given is a NULL pointer. PAM_BAD_ITEM Variable requested (for deletion) is not currently set. PAM_ABORT The pamh handle is corrupt. PAM_BUF_ERR Memory buffer error. PAM_SUCCESS The environment variable was successfully updated.
pam_start(3), pam_getenv(3), pam_getenvlist(3), pam_strerror(3), pam(7)
|
http://huge-man-linux.net/man3/pam_putenv.html
|
CC-MAIN-2017-13
|
en
|
refinedweb
|
Jukka Zitting wrote:
> Hi all,
>
> I've just received committer status with Jackrabbit (thanks for the vote
> of confidence), and would be ready to migrate my private contrib/jcr-ext
> work to the Jackrabbit source repository. Before doing that, I'd like to
> clarify some policy issues raised by Roy.
>
> Roy T.Fielding wrote:
>
>>More code sounds great, but I don't want to see a big hierarchy
>>of extension directories. Please just name them
>>
>> org.apache.jackrabbit.{meaningful-name}
>>
>>and we'll all work on managing that space.
>
>
> I understand the requirement. What I was looking for with the .ext space
> was a grouping of packages that have no dependencies from or to the
> Jackrabbit core packages.
>
> What would be the policy for managing the org.apache.jackrabbit space?
> For example: Each new top-level package should be proposed on
> jackrabbit-dev before creation. The proposal should contain a
> description of the package (could be used also for package.html) and
> information about who will develop and maintain the package.
>
>
>>Contrib should only be used for things that are not managed
>>by the jackrabbit project, with the hopeful intention of moving
>>everything into the main project once the contributor has earned
>>commit status.
>
>
> How about the JCR-RMI contrib package? If you want, I could now migrate
> the RMI layer to org.apache.jackrabbit.rmi within the main source tree.
> The Jackrabbit jar would then directly contain the RMI layer.
>
> However, I'd need to setup separate maven goals to create jars
> containing just the RMI client and server classes. These extra jars
> could be used by other JCR implementations without having to include the
> entire Jackrabbit implementation.
>
> The same dilemma applies also to the decorator, xml, and other general
> packages I proposed for jcr-ext. Any ideas on how these should be
> handled?
My 2 cents.
I understand that your code is not tied to JackRabbit but only to the
JCR API layer, but as you seen in many other project implementing things
both underneath and on-top-of an API, they use a common namespace for
their stuff.
For example, the webdav servlet in Tomcat is under org.apache.tomcat
even if nobody has a problem understanding that there is nothing
tomcat-related in that servlet.
JCR is a way bigger beast than the Servlet API and it's clearly not as
widely known, but what we are trying to do here is to incubate the
project and the more 'diversity' in the naming and namespaces and
'identity' of what JackRabbit is, the harder it becomes.
If we think that JackRabbit is the 'open source reference implementation
of JCR' i think we will get much less traction than saying that
JackRabbit is 'the project where you go for JCR-related tecnologies'.
I personally think it would be much easier to incubate the second than
the first, and your contribution shows very well why.
So, I understand that you might have a dilemma putting your stuff under
org.apache.jackrabbit since you associate jackrabbit with a 'specific
implementation' of that API and you don't want people to believe it only
works with that, but if we think at jackrabbit as a 'repository of open
source technologies about JCR repositories', thus a little wider scope,
it will be easier to achive the critical community mass required to exit
incubation.
--
Stefano.
|
http://mail-archives.apache.org/mod_mbox/jackrabbit-dev/200502.mbox/%[email protected]%3E
|
CC-MAIN-2017-13
|
en
|
refinedweb
|
EaseXML: A Python Data-Binding Tool
July 27, 2005
EaseXML is an XML data-binding tool for Python, available under the Python Software Foundation License. The package used to be called "XMLObject," but that generic name led to the situation I mentioned in Location, Location, Location...
EaseXML at First Glance.
I'll start by showing the EaseXML binding schema used to process Listing 1, my usual address label example (labelsease.py) uses the EaseXML conventions to set up the data binding.Listing 2 (labelsease.py). EaseXML class definitions for address labels
from EaseXML import * class labels(XMLObject): labels = ListNode(u'label') class label(XMLObject): id = StringAttribute() added = StringAttribute(u'added') _nodesOrder = [u'name', u'address', u'quote'] name = TextNode() address = ItemNode(u'address') quote = ItemNode(u'quote', optional=True) class address(XMLObject): _nodesOrder = [u'street', u'city', u'state'] street = TextNode() city = TextNode() state = TextNode() class quote(XMLObject): _name = u'quote' content = ChoiceNode(['#PCDATA', 'emph'], optional=True, main=True, noLimit=True) emph = TextNode(optional=True)
The most important class
XMLObject still bears the name of the original
package. You have to subclass it to create your own specialized classes representing
elements. The top-level element
labels is defined using a class of the same
name. It expresses that its contents are a list of child elements (
EaseXML.ListNode) named
label. Each of these has an
id and
added attribute. Data binding tools have to deal with the
situation where XML's naming conventions don't match that of the host language. In
EaseXML,
the names of XML identifiers are usually assumed from the named of the matching Python
object references, but the definition of the
added attribute shows how you can
override that by specifying the actual XML identifier as the first argument. This
argument
is sometimes optional, as in
EaseXML.StringAttribute; but sometimes it's
mandatory, as in
EaseXML.ListNode and
EaseXML.ItemNode. You
specify the order of child nodes using the
_nodesOrder list, specifying XML
identifier names.
EaseXML.TextNode defines a simple node with text content
only. Such nodes do not require a separate Python class. The definition for the
quote element illustrates a few things. It uses the
name_
property to override the XML element identifier, which is derived form the class name
by
default (in this case, the override happens to be the same as the default).
quote is simple text in one of its occurrences in the XML example, and mixed
content in another. You define mixed content by using a
EaseXML.ChoiceNode,
with
#PCDATA as one of the entries. As in XML DTDs, this is a special
identifier for text content.
optional=True is specified for the mixed content
contsruct as a whole, indicating that the element can be empty, and for the
emph element, indicating that text alone can occur without any elements mixed
in.
Putting the Binding to Work
After you define the binding classes, you can use them to parse in XML. You can also use them to generate XML, but I don't cover that in this article. The following interactive session demonstrates reading XML with an EaseXML data binding.
$ python -i labelsease.py >>> XML = open('labels.xml', 'r').read() >>> doc = labels.fromXml(XML)
As you can see, I load Listing 2 upon starting the Python interpreter.
doc is
a data structure based on instances of those classes with the data from the XML document.
>>> #Print the ids of all the labels >>> for label in doc.labels: ... print label.id ... tse ep lh >>> #Print the first quote element's contents >>> doc.labels[0].quote.emph u'Midwinter Spring' >>> doc.labels[0].quote.content [u'is its own season\u2026']
I ran into all sorts of quirks when poking introspectively at the resulting data binding.
For example, I found a phantom processing instruction among the child nodes of the
quote element you see in the last snippet. The Unicode support seems to be
patchy, and I was unable to reserialize the quote element containing the ellipsis
character
… (I checked the
toxml method for encoding arguments but
didn't find any.) The API itself is a bit strange and hard to get your head around.
I
noticed that the
forEach method is the recommended way for walking EaseXML
objects. Keep in mind that it requires specialized callbacks to work.
I decided to write about EaseXML before I realized to what extent it's a young project. It needs quite a bit of work. Besides the quirks I mentioned above,.
More on Unicode: Character Information
In the last two articles, Unicode Secrets and More Unicode Secrets, I discussed Python's Unicode facilities, from the point of view of XML processing. There is one more useful part of Python's Unicode libraries that I want to cover.
There are hundreds of thousands of characters in Unicode, and the number grows with each version. There is also a complex internal structure of characters; they are classified as alphabetic, digits, control codes, combining characters, and more, and they have varying collation (sorting), directionality, etc. It can be quite overwhelming, and you can imagine why when you realize that Unicode aims to provide computer representation for just about every writing system on the planet. Developers need all the tools they can to deal with all this rich variety. A useful but not all that well-known resource is Python's built-in Unicode database, in the unicodedata module. It is a Python API for the character database provided by the Unicode Consortium, the definitive catalog of all the characters in Unicode, along with standard properties for each.
Every character has a name, and you can learn what it is with the
name
function.
>>> import unicodedata >>> unicodedata.name(u'a') 'LATIN SMALL LETTER A' >>> unicodedata.name(u'\u1000') 'MYANMAR LETTER KA' >>> unicodedata.name(u'\u00B0') 'DEGREE SIGN' >>>
Notice that the names are returned as strings, not Unicode objects. All Unicode character
names use what you can informally call the ASCII subset. You can basically reverse
this
operation, getting a Unicode character by name, using the
lookup function.
>>> unicodedata.lookup('DEGREE SIGN') u'\xb0' >>> unicodedata.lookup('LATIN SMALL LETTER A') u'a' >>>
You can really put this database to work giving your programs super duper powers
of
globalization, head and shoulders above the rest. For example, did you know that the
characters "0" through "9" are not the only form of digits used in
writing? Even though these European digit characters derive from historical Arabic
number
representations, modern Arabic scripts use a different set of characters sometimes
called
"Indic numerals." (Although these are distinct again from the digits used in
modern-day scripts from India. Is your head spinning, yet?) Unicode assigns these
digits the
appropriate decimal values, and you can effortlessly derive the decimal value of any
digit
regardless of script using the
decimal function.
>>> unicodedata.decimal(u'0') 0 >>> unicodedata.decimal(u'\u0660') 0 >>> unicodedata.decimal(u'1') 1 >>> unicodedata.decimal(u'\u0661') 1 >>> #If you pass an invalid digit, it lets you know >>> unicodedata.decimal(u'a') Traceback (most recent call last): File "<stdin>", line 1, in ? ValueError: not a decimal >>>
The
digit and
numeric functions are similar, but there are some
differences, and you should refer to the Unicode character database for details (one
obvious
difference from the Python point of view is that
numeric returns floating point
numbers). Unicode organizes characters into categories, such as "Letter,
Lowercase" (abbreviation "Ll"), "Symbol, Currency" (abbreviation
"Sc"), "Punctuation, Connector" (abbreviation "Pc"),
"Right-to-Left Arabic" (abbreviation "AL"), "Separator, Space"
(abbreviation "Zs"), etc. These categories are important for many
character-processing cases. As an example, you might want to be specific about what
you mean
by "white space" when writing Unicode-aware applications. There are more than just
the familiar space, newline, carriage return and tab from ASCII, or nonbreaking space
from
HTML. Interestingly, some of the characters we think of as spaces, such as tab, are
categorized as control codes in Unicode, and XML's own treatment of characters often
doesn't
fall along neat lines of Unicode categories. You can find the category of any character
using the
category function.
>>> unicodedata.category(u'a') 'Ll' >>> unicodedata.category(u'\u00B0') #DEGREE SIGN 'So' >>> unicodedata.category(u'\t') 'Cc' >>> unicodedata.category(u'$') 'Sc' >>>
There are other functions in
unicodedata, but I'll leave them to the reader's
attentions.
From the Community
I mentioned the CJKV writing systems and encodings of the Pacific Rim in my last article. There are many non-Unicode character encodings in heavy use in these regions. There have been several third-party packages supporting these encodings, and Python 2.4 incorporates codecs based on a patch by Hye-Shik Chang. These support the following encodings:
- Chinese: gb2312, gbk, gb18030, big5hkscs, hz, big5, cp950
- Japanese: cp932, euc-jis-2004, euc-jp, euc-jisx0213, iso-2022-jp,
Python 2.4 also adds a few other non-CJK encodings, and I recommend that everyone who is serious about internationalization upgrade to this version as soon as possible. discovered Ken Rimey's Personal Distributed Information Store (PDIS), which includes some XML tools for Nokia's Series 60 phones, which offer python support. This includes an XML parser based on PyExpat and an XPath implementation based on elementtree.
|
http://www.xml.com/pub/a/2005/07/27/py-xml.html
|
CC-MAIN-2017-13
|
en
|
refinedweb
|
An Introduction to Schematron. For example,
a
simple content model can be written like this: "The
Person element should in
the XML instance document have an attribute
Title and contain the elements
Name and
Gender in that order. If the value of the
Title attribute is 'Mr' the value of the
Gender element must be
'Male'."
In this sentence the context in which the assertions should be applied is clearly
stated as
the
Person element while there are four different assertions:
- The context element (
Person) should have an attribute
Title
- The context element should contain two child elements,
Nameand
Gender
- The child element
Nameshould appear before the child element
Gender
- If attribute
Titlehas the value 'Mr' the element
Gendermust have the value 'Male'.
It has already been mentioned that Schematron makes various assertions based on a specific context in a document. Both the assertions and the context make up two of the four layers in Schematron's fixed four-layer hierarchy:
- phases (top-level)
- patterns
- rules (defines the context)
- assertions
Schematron hierarchy
This introduction coves only three of these layers (patterns, rules and assertions); these are most important for using embedded Schematron rules in RELAX NG. For a full description of the Schematron schema language, see the Schematron specification.
The three layers covered in this section are constructed so that each assertion is grouped into rules and each rule defines a context. Each rule is then grouped into patterns, which are given a name that is displayed together with the error message (there is really more to patterns than just a grouping mechanism, but for this introduction this is sufficient).
The following XML document contains a very simple content model that helps explain the three layers in the hierarchy:
Assertions
The bottom layer in the hierarchy is the assertions, which are used to specify the
constraints that should be checked within a specific context of the XML instance document.
In a Schematron schema, the typical element used to define assertions is
assert. The
assert element has a
test attribute,
which is an XSLT
pattern. In the preceding example, there was four assertions made on the document in
order to specify the content model, namely:
- The context element (
Person) should have an attribute
Title
- The context element should contain two child elements,
Nameand
Gender
- The child element
Nameshould appear before the child element
Gender
- If attribute
Titlehas the value 'Mr' the element
Gendermust have the value 'Male'
Written using Schematron assertions this would be expressed as
If you are familiar with XPath, these assertions are easy to understand, but even
for
people with limited experience using XPath they are rather straightforward. The first
assertion simply tests for the occurrence of an attribute
Title. The second
assertion tests that the total number of children is equal to 2 and that there is
one
Name element and one
Gender element. The third assertion tests
that the first child element is
Name, and the last assertion tests that if the
person's title is 'Mr' the gender of the person must be 'Male'.
If the condition in the
test attribute is not fulfilled, the content of the
assertion element is displayed to the user. So, for example, if the third condition
was
broken (*[1] =
Name), the following message is displayed:
Each of these assertions has a condition that is evaluated, but the assertion does
not
define where in the XML instance document this condition should be checked. For example,
the
first assertion tests for the occurrence of the attribute
Title, but it is not
specified on which element in the XML instance document this assertion is applied.
The next
layer in the hierarchy, the rules, specifies the location of the contexts of assertions.
Rules
The rules in Schematron are declared by using the
rule element, which has a
context attribute. The value of the
context attribute must match
an XPath Expression that is
used to select one or more nodes in the document. Like the name suggests, the
context attribute is used to specify the context in the XML instance document
where the assertions should be applied. In the previous example the context was specified
to
be the
Person element, and a Schematron rule with the
Person
element as context would simply be
Since the rules are used to group together all the assertions that share the same
context,
the rules are designed so that the assertions are declared as children of the
rule element. For the previous example this means that the complete
Schematron rule would be
This means that all the assertions in the rule will be tested on every
Person
element in the XML instance document. If the context is not all the
Person
elements, it is easy to change the XPath location path to define a more restricted
context.
The value
Database/
Person for example sets the context to be all
the
Person elements that have the element
Database as its
parent.
Patterns
The third layer in the Schematron hierarchy is the pattern, declared using the
pattern element, which is used to group together different rules. The
pattern element also has a
name attribute that will be displayed
in the output when the pattern is checked. For the preceding assertions, you could
have two
patterns: one for checking the structure and another for checking the co-occurrence
constraint. Since patterns group together different rules, Schematron is designed
so that
rules are declared as children of the
pattern element. This means that the
previous example, using the two patterns, would look like
The name of the pattern will always be displayed in the output, regardless of whether
the
assertions fail or succeed. If the assertion fails, the output will also contain the
content
of the assertion element. However, there is also additional information displayed
together
with the assertion text to help you locate the source of the failed assertion. For
example,
if the co-occurrence constraint above was violated by having
Title='Mr' and
Gender='Female' then the following diagnostic would be generated by
Schematron:
The pattern names are always displayed, while the assertion text is only displayed
when the
assertion fails. The additional information starts with an XPath expression that shows
the
location of the context element in the instance document (in this case the first
Person element) and then on a new line the start tag of the context element
is displayed.
The assertion to test the co-occurrence constraint is not trivial, and in fact this
rule
could be written in a simpler way by using an XPath predicate when selecting the
context. Instead of having the context set to all
Person elements, the
co-occurrence constraint can be simplified by only specifying the context to be all
the
Person elements that have the attribute
Title='Mr'. If the rule
was specified using this technique the co-occurrence constraint could be described
like
this
By moving some of the logic from the assertion to the specification of the context, the complexity of the rule has been decreased. This technique is often very useful when writing Schematron schemas.
This concludes the introduction of patterns; now all that is left to do to complete
the
schema is to wrap the patterns in the Schematron schema in a
schema element,
and to specify that all the Schematron elements used should be defined in the Schematron
namespace,. The complete
Schematron schema for the example follows:
Namespaces and Schematron
Schematron can also be used to validate XML instance documents that use namespaces.
Each
namespace used in the XML instance document should be declared in the Schematron schema.
The
element used to declare namespaces are the
ns element which should appear as a
child of the
schema element. The
ns element has two attributes,
uri and
prefix, which are used to define the namespace URI and
the namespace prefix. If the XML instance document in the example were defined in
the
namespace, the Schematron schema
would look like this:
Note that all XPath expressions that test element values now include the namespace
prefix
ex.
This Schematron schema would now validate the following instance:
Schematron processing
Schematron processing using XSLT is trivial to implement and works in two steps:
- The Schematron schema is first turned into a validating XSLT stylesheet by transforming it with an XSLT stylesheet provided by Academica Sinica Computing Centre. These stylesheets (schematron-basic.xsl, schematron-message.xsl and schematron-report.xsl) can be found at the Schematron site and the different stylesheets generate different output. For example, the schematron-basic.xsl is used to generate simple text output as in the example already shown.
- This validating stylesheet is then used on the XML instance document and the result will be a report that is based on the rules and assertions in the original Schematron schema.:
ISO Schematron..
Include mechanism
An include mechanism will be added to ISO Schematron that will allow a Schematron schema to include Schematron constructs from different documents.
Variables using <let>.
For example, say that a simple
time element should be validated so that the
value always match the HH:MM:SS format where 0<=HH<=23, 0<=MM<=59 and
0<=SS<=59:
Using the new
let element this can be implemented like this in ISO
Schematron:
<value-of> in assertions
A change requested by many users is to allow
value-of elements in the
assertions so that value information can be shown in the result. The
value-of
element has a
select attribute specifying an XPath expression that selects the
correct information.
In the above schema the assertion that for example checks the hour could then be written so that the output result contain the erroneous value:
The following instance
would then generate this output:
Abstract patterns:
Instead of validating the concrete elements used to define the time this abstract pattern instead work on the abstraction of what makes up a time: hours, minutes and seconds.
If the XML document use the below syntax to describe a time
the concrete pattern that realises the abstract one above would look like this:
If the XML instead uses a different syntax to describe a time the abstract pattern can still be used for the validation and the only thing that need to change is the concrete implementation. For example, if the XML looks like this
the concrete pattern would instead be implemented as follows:.
|
http://www.xml.com/pub/a/2003/11/12/schematron.html
|
CC-MAIN-2017-13
|
en
|
refinedweb
|
im_copy, im_copy_set, im_copy_swap, im_copy_morph - copy an image
#include <vips/vips.h> int(3) copies the image held by the image descriptor in and writes the result to the image descriptor out. The input can be of any size and have any type. Does LABPACK coded images too!.
The function returns 0 on success and -1 on error.
im_extract(3), im_open(3) 11 April 1990 IM_COPY(3)
|
http://huge-man-linux.net/man3/im_copy_morph.html
|
CC-MAIN-2017-13
|
en
|
refinedweb
|
ReactJS, JSX, async/await, babel, webpack and getting it all workingSeptember 20, 2016 in
Stranger in a strange land
I’m not primarily a front-end dev so getting everything setup and configured for web development is particularly frustrating for me and most blog posts, tutorials don’t actually give explanations, just copy-pasting blindly tons of little configs.
Here’s my post that I’m using as reference for me and hopefully for any other non-frontend developer that wants to use the latest and greatest
JavaScript like
fetch,
async,
await,
React,
JSX.
Getting started, compiling JavaScript to …JavaScript
Because of fragmentation in implementations of the latest
JavaScript features, we’ll use
babel to compile our
JavaScript using the latest features to
JavaScript that will work in
Chrome,
Firefox and
Safari.
babel has a concept of
plugins. These are like features that you can turn on during the compilation steps and are pretty granular. Often you’ll want a whole bunch of plugins together and that is so common enough that
babel has something called
presets. You can put that in separate
.babelrc file, but I prefer not having so much silly little config files, so you can also put it in your package.json; example:
"babel": { "presets": [ "react", "es2015", "stage-3" ], "plugins": [ "transform-es2015-modules-commonjs", "transform-async-to-generator", "transform-runtime" ] }
These are the ones I’m using to compile
JSX, use ES6 modules, and
async,
await.
So when you invoke
babel, it will look at the
package.json, see the
babel field and turn on those features, so an example invocation is:
$ babel lib --out-dir dist
which will compile all the code in the
lib directory and output the results in the
dist directory. This process is the same for
node.
Bundling code
Now we have our legal
JavaScript for today’s browsers/node. We can bundle up everything as a single
JavaScript file using
webpack. I previously used
browserify but like all things web, apparently its not hot anymore. We can invoke it like so:
$ webpack --progress --colors dist/homepage.js bundle.js
where
bundle.js is the name of the single output file that we’ll get. You can apparently do some kind of config file for webpack, yet another config file, but this is enough for me right now.
Actual code/project with JSX
So let’s say we have these two files, one is
homepage.jsx and the other is
button.jsx. Note that I use a real example of
async,
await, for a great explanation see here, for
OCaml programmers,
await is basically
>>= or
let%lwt.
This is
button.jsx
'use strict'; import React from 'react'; class Button extends React.Component { async do_request(e) { let query = '' + '/ticker/global/USD'; let nonsense = ""; try { let pulled = await fetch(query); let body = await pulled.json(); console.log(body); await fetch(nonsense); } catch (e) { console.log("Exception raised:", e); console.log('Logic continued'); } } render () { let s = {color:'red'}; return ( <p style={s} onClick={this.do_request.bind(this)}> Click Me </p> ); } }; // Remember to put wrap in {} export {Button};
and
homepage.jsx
'use strict'; import React from 'react'; import ReactDOM from 'react-dom'; // REMEMBER to do {} since button.jsx doesn't do // export default import {Button} from './button'; class Page extends React.Component { render () { return ( <div> Hello World <Button/> </div> ); } }; ReactDOM.render(<Page/>, document.getElementById('cont'));
So all this will be compiled correctly and turned into one
bundle.js which we can use in this
index.html
<!DOCTYPE html> <meta charset="utf-8"> <body> <div id="cont"></div> <script src="bundle.js"></script> </body>
and when we click the button we see this in the Chrome dev tools:
Object {24h_avg: 614.98, ask: 613.72, bid: 613.05, last: 613.56, timestamp: "Tue, 20 Sep 2016 20:09:30 -0000"…} bundle.js:23085 GET net::ERR_NAME_NOT_RESOLVED_callee$ @ bundle.js:23085tryCatch @ bundle.js:23242invoke @ bundle.js:23516prototype.(anonymous function) @ bundle.js:23275step @ bundle.js:23872(anonymous function) @ bundle.js:23883 bundle.js:23095 Exception raised: TypeError: Failed to fetch(…) bundle.js:23096 Logic continued
Yay, things worked.
See the repo here for the full
package.json
|
http://hyegar.com/2016/09/20/webdev-setup/index.html
|
CC-MAIN-2017-13
|
en
|
refinedweb
|
This is the next step in my currently unresolved question in which I am attempting to sort the scores from 3 different teams. I have very limited knowledge of python because I am new to programming so my problem solving of this current project is quite difficult.
To begin I will need the example data (shown below) which are split over two cells to be sorted alphabetically according to the names, I will have this for 3 different teams in 3 different files. I am also trying to sort it out from highest to lowest depending on the score, this has proven of much difficulty to me so far.
Jake,5
Jake,3
Jake,7
Jeff,6
Jeff,4
Fred,5
admin_data = []
team_choice = input("Choose a team to sort")
if team_choice == 'Team 1':
path = 'team1scores.csv'
elif team_choice == 'Team 2':
path = 'team2scores.csv'
elif team_choice == 'Team 3':
path = 'team3scores.csv'
else:
print("--Error Defining File Path--")
print("As an admin you have access to sorting the data")
print("1 - Alpahbetical")
print("2 - Highest to Lowest")
print("3 - Average Score")
admin_int = int(input("Choose either 1, 2 or 3?"))
if sort_int == 1 and team_choice == 'Team 1':
do things
elif sort_int == 2 and team_choice == 'Team 1':
do things
elif sort_int == 3 and team_choice == 'Team 1':
do things
[['Fred', '9'], ['George', '7'], ['Jake', '5'], ['Jake', '4'], ['Derek', '4'], ['Jake', '2']]
The first step would be to break down the problem into small steps:
withstatement at the bottom of that section)
Expanding on the last one you can total up the scores as well as the number of entries for each name like this:
import csv import collections ... with open(path) as f: entries = collections.Counter() total_scores = collections.Counter() for name,score in csv.reader(f): total_scores[name] += int(score) entries[name] += 1
Then you can calculate the average score for each person with
total_scores[name] / entries[name]
for name in sorted(entries): ave_score = total_scores[name] / entries[name] print(name,ave_score) #sep=", ")
the other two actions are quite simple with a few of the steps listed above.
import csv import collections from operator import itemgetter ... if sort_int == 1: with open(path) as f: reader = csv.reader(f) for name, score in sorted(reader): print(name,score) elif sort_int == 2: with open(path) as f: entries = sorted(csv.reader(f), key=itemgetter(1), reverse=True) for name,score in entries: print(name,score) elif sort_int == 3: with open(path) as f: entries = collections.Counter() total_scores = collections.Counter() for name,score in csv.reader(f): score = int(score) total_scores[name] += score entries[name] += 1 for name in sorted(entries): ave_score = total_scores[name] / entries[name] print(name,ave_score)
If you want to apply the highest to lowest to the average scores then you will need to make a reference to all the averages such as a
dict:
ave_scores = {} for name in sorted(entries): ave_score = total_scores[name] / entries[name] ave_scores[name] = ave_score for name,ave_score in sorted(ave_scores.items(), key = itemgetter(1), reversed=True): print(name,ave_score)
|
https://codedump.io/share/ocD0AoMl7CWi/1/sorting-data-from-a-csv-alphabetically-highest-to-lowest-and-average
|
CC-MAIN-2017-13
|
en
|
refinedweb
|
Like.
To illustrate the concept of nested for loop, let us consider a program to generate a pyramid of numbers.
//Program to Display Pyramid
import java.util.Scanner; //Program uses Scanner class
public class pyramid
{
public static void main(String [] args)
{
int n,i,j;
//create scanner object to obtain input from keyboard
Scanner input =new Scanner(System.in);
System.out.print("Enter How Many Lines :"); // input
n =input.nextInt(); //Read number
for(i=1 ; i<=n ; i++ )
{
for( j=1; j<=i ; j++)
{
System.out.print(i + " ");
}
System.out.print("\n");
}
}
}
On execution, we first input the number of lines n (say 10). In this program, the inner j loop is nested inside the outer i loop. For each value of variable i of outer loop, the inner loop will be executed completely. For the first iteration of outer loop, the outer loop variable i is initialized to 1 and the inner loop is executed once as the condition (j < =i) is satisfied only once, thus prints the value of i (i.e. 1) once.
The statement Systam.out.print (“\n”) i must be outside the inner loop and inside the outer loop in order to produce exactly one line for each iteration of the outer loop.
On second iteration (pass) of outer loop, when i=2 then the inner loop executes 2 times and thus displaying value of i (i.e. 2) 2 times and process will continue until the condition in the outer loop becomes false.
|
http://ecomputernotes.com/java/control-structures-in-java/nested-for-loop-in-java-example
|
CC-MAIN-2017-13
|
en
|
refinedweb
|
#include <hallo.h> * Cameron Patrick [Fri, Nov 07 2003, 05:31:12PM]: >. That's not a problem with the certain terminal emulator, that is a problem with groff syntax not understood by many authors. Xterm works just fine as UTF-8 terminal as well as mlterm/pterm/konsole/gnome-terminal but the manpages simply specify the wrong char. It was promised that groff will recode hyphen to minus sign in some future version (maybe as an option) to work around broken manpages. MfG, Eduard.
|
https://lists.debian.org/debian-x/2003/11/msg00145.html
|
CC-MAIN-2017-13
|
en
|
refinedweb
|
Strange, I just gotten PIL here (through easy-install) and from PIL
import Image didn't work even in the command line... what worked was
'import Image' directly -- after investigating a bit, it seems
easy_install did the wrong thing (it created a
site-packages/PIL-1.1.7-py2.6-win32.egg, but inside that directory
there was no PIL/__init__.py, but the Image.py file directly), so, I
changed to the structure below and it seems to be working properly
now.
The way I'd expect it to be is:
/usr/lib/python2.7/dist-packages/ <-- added to the PYTHONPATH
/usr/lib/python2.7/dist-packages/PIL <- directory with PIL
/usr/lib/python2.7/dist-packages/PIL/__init__.py <-- specify that PIL
is a package
/usr/lib/python2.7/dist-packages/PIL/Image.py <-- Image module
Can you check if your config is like that?
Also, I noted you said you added /usr/lib/python2.7/dist-packages/PIL
to the PYTHONPATH, but that seems strange... if you actually have
/usr/lib/python2.7/dist-packages/PIL/__init__.py, the directory that
should be in the PYTHONPATH would be /usr/lib/python2.7/dist-packages
(unless the actual PIL is at:
/usr/lib/python2.7/dist-packages/PIL/PIL/__init__.py).
Cheers,
Fabio
On Mon, Oct 3, 2011 at 12:16 PM, John Smith <bitnuk3r@...> wrote:
> Hi there,
> I'm using Eclipse 3.7 (Indigo) and pyDev 2.2.2.2011082312 on Ubuntu 11.04.
> When I run the following code...
> [code]
> from PIL import Image
> im = Image.open("download.png")
> im.rotate(45).show()[/code]
>
> It works fine on my command line python interpreter 2.7.1+. However when I
> enter this code in a blank form on Eclipse I get the "Image" part of the
> first line underlined in red and Eclipse says "Unresolved Import". Eclipse
> will still run the code fine because I'm using the same 2.7.1+ interpreter
> through Eclipse. It seems like it's an issue of Eclipse not finding the PIL
> package. I tried going to Preferences->PyDev->Interpreter-Python-> and
> deleting and recreating it but that did not work. Also the
> /usr/lib/python2.7/dist-packages/PIL is in the System PYTHONPATH in Eclipse.
> What am I doing wrong?
> ------------------------------------------------------------------------------
> All the data continuously generated in your IT infrastructure contains a
> definitive record of customers, application performance, security
> threats, fraudulent activity and more. Splunk takes this data and makes
> sense of it. Business sense. IT sense. Common sense.
>
> _______________________________________________
> pydev-code mailing list
> pydev-code@...
>
>
>
I agree to receive quotes, newsletters and other information from sourceforge.net and its partners regarding IT services and products. I understand that I can withdraw my consent at any time. Please refer to our Privacy Policy or Contact Us for more details
|
https://sourceforge.net/p/pydev/mailman/pydev-code/?viewmonth=201110&viewday=4
|
CC-MAIN-2017-13
|
en
|
refinedweb
|
CodePlexProject Hosting for Open Source Software
I want to use a customized XMLSerilizer in an formatter for "application/xml" media types.
I´ve created a formatter, which adds the proper "SupportedMediaTypes". But the client requests get
redirected to the default XMLFormatter.
What can i do?
Background: i want to create hypermedia-based xml responses, see
Are you using the latest bits released yesterday? If so you need to change your MediaTypeFormatter to derive from XmlMediaTypeFormatter. There is some magic that will stop the default formatter from getting inserted into the formatter collection
if your class derives from XmlMediaTypeFormatter. Yes, seems crazy to me too!
If you are using earlier bits, then you need to make sure you "insert" your formatter into the beginning of the collection, rather than the end, so that your formatter gets priority.
humbrie/Darrrel you don't need to derive, you can insert your formatter to the beginning of the list so that it gets picked up. The collection exposes an Insert method, call it using 0 as the position.
Unfortunaly this applies not entirely. If you use a user agent like a browser, which accepts */* the MS XMLserializer will be used by default.
Even if i remove it from the Formatters collection, it gets still serialized with the build-in serializer. I gues this is because of this property: WebApiConfiguration.Formatters.XmlFormatter ? Unfortunaly, I cannot replace it, because the setter is missing
(or private)...
Did you try inserting your custom formatter to the beginning of the collection?
Glenn
EDIT: I guess this is related to this issue:
Yes, i did. I can see the Formatter Collection altered, when the server is starting. But i dont know, why it´s still not working...
snippet of global.asax
protected void Application_Start(object sender, EventArgs e)
{
// Set configuration & routes
RouteTable.Routes.SetDefaultHttpConfiguration(WebApiConfiguration);
RouteTable.Routes.MapServiceRoute<HomeController>(HomeController.ResourcePath, WebApiConfiguration);
}
private static WebApiConfiguration _WebApiConfiguration;
private static WebApiConfiguration WebApiConfiguration
{
get
{
if (_WebApiConfiguration == null)
{
_WebApiConfiguration = new WebApiConfiguration { EnableHelpPage = true, EnableTestClient = true };
_WebApiConfiguration.MessageHandlers.Add(typeof(LoggingHandler));
_WebApiConfiguration.MessageHandlers.Add(typeof(UriFormatHandler));
_WebApiConfiguration.Formatters.Insert(0, new HalMediaTypeFormatter());
}
return _WebApiConfiguration;
}
}
this is a snippet of the media tpye formatter:
public class HalMediaTypeFormatter : MediaTypeFormatter
{
public const string HalXmlMediaType = "application/hal+xml";
public const string XmlMediaType = "application/xml";
public const string XmlTextMediaType = "text/xml";
public HalMediaTypeFormatter(XmlSerializerNamespaces namespaces=null)
{
Namespaces = namespaces;
const string charset = "utf-8";
SupportedMediaTypes.Add(new MediaTypeHeaderValue(HalXmlMediaType) { CharSet = charset });
SupportedMediaTypes.Add(new MediaTypeHeaderValue(XmlMediaType) { CharSet = charset });
SupportedMediaTypes.Add(new MediaTypeHeaderValue(XmlTextMediaType) { CharSet = charset });
}
...
}
Hi,
Is it possible to get the Fiddler trace?
Regarding your issue, Can you try after making the following change in your custom media type formatter? Fyi...I removed the CharSet information here.
SupportedMediaTypes.Add(new MediaTypeHeaderValue(HalXmlMediaType));
SupportedMediaTypes.Add(new MediaTypeHeaderValue(XmlMediaType));
SupportedMediaTypes.Add(new MediaTypeHeaderValue(XmlTextMediaType));
Thanks,
Kiran Challa
Also be aware that content negotiation does not automatically match media ranges like */*. It is likely asking your custom formatter if it supports "*/*", finding no match, and moving on. We are currently discussing how to better handle the case
of matching no formatters, but I believe this is why you see the default Xml formatter come into play. It found not formatters and chose that as the backstop.
I'd recommend you look into the MediaTypeFormatter.MediaTypeMappings collection. I believe you want to add a 'new MediaRangeMapping("*/*", "application/xml")' (or whatever) to your custom formatter. Then, when a media range like "*/*"
comes through, your formatter will be picked.
Ron Cain
Well, i´ll try that (both)! From my point of view, i would expect, that any mediatype range would be handled by the framework by going through the formatters collection and choosing which type matches first (in case of */* it would be obviously the
first formatter in the collection).
A more complex scenario: If i´d have only one formatter in the list, like "application/atom+xml" and the client wants a "application/xml"...
What would the server do?
Sending atom (which is surely valid xml)?
With which media type declaration?
"application/atom+xml" would be correct, but this is not, what the client would likely expect.
As far as i know, there is no "media type specialization/derivation/inheritance" concept defined, which would apply to this scenario.
(i´ve already started a thread weeks ago on the rest-discuss group)
Are you sure you want to delete this post? You will not be able to recover it later.
Are you sure you want to delete this thread? You will not be able to recover it later.
|
http://wcf.codeplex.com/discussions/271271
|
CC-MAIN-2017-13
|
en
|
refinedweb
|
XML Namespaces
When dealing with XML documents in Mule you need to declare any namespaces used by the document. You can specify a namespace globally so that it can be used by XPath expressions across Mule. You can declare the namespace in any XML file in your Mule instance. To declare a namespace, include the
mule-xml.xsd schema in your XML file:
Next, specify the
<namespace-manager> element, and then add one or more
<namespace> elements within it to declare the prefix and URI of each namespace you want to add to the namespace manager. If you already declared a namespace at the top of the file in the
<mule> element, you can set the
includeConfigNamespaces attribute to
true to have the namespace manager pick up those namespaces as well.
You can also declare a namespace locally in an expression filter, router, or transformer using the
<namespace> element without the
<namespace-manager> element. You can then use that prefix within the XPath expression. For example, the following Jaxen filter declares a namespace with the prefix "e", which is then used in the filter expression:
If you had a global namespace with the "e" prefix, the local namespace URI would override the global namespace URI.
You can specify the namespace on any XML-based functionality in Mule, including the JXPath filter, Jaxen filter, XPath filter, filter-based splitter, expression splitter, round-robin splitter, JXPath extractor transformer, and XPath expression transformer in the XML Module Reference and XPath Annotation.
|
https://docs.mulesoft.com/mule-user-guide/v/3.3/xml-namespaces
|
CC-MAIN-2017-13
|
en
|
refinedweb
|
I am studying for the Spring Core certification an I have some doubts about how Spring handle the beans lifecycle and in particular about the bean post processor.
So I have this schema:
It is pretty clear for me what it means:
Into the Load Bean Definitions phase happens that:
Spring doc explains the BPPs under Customizing beans using BeanPostProcessor. BPP beans are a special kind of beans that get created before any other beans and interact with newly created beans. With this construct, Spring gives you means to hook-up to and customize the lifecycle behavior simply by implementing a
BeanPostProcessor yourself.
Having a custom BPP like
public class CustomBeanPostProcessor implements BeanPostProcessor { public CustomBeanPostProcessor() { System.out.println("0. Spring calls constructor"); } @Override public Object postProcessBeforeInitialization(Object bean, String beanName) throws BeansException { System.out.println(bean.getClass() + " " + beanName); return bean; } @Override public Object postProcessAfterInitialization(Object bean, String beanName) throws BeansException { System.out.println(bean.getClass() + " " + beanName); return bean; } }
would be called and print out the class and bean name for every created bean.
To undersand how the method fit the bean's lifecycle, and when exactly the method's get called check the docs
postProcessBeforeInitialization(Object bean, String beanName) Apply this BeanPostProcessor to the given new bean instance before any bean initialization callbacks (like InitializingBean's afterPropertiesSet or a custom init-method).
postProcessAfterInitialization(Object bean, String beanName) Apply this BeanPostProcessor to the given new bean instance after any bean initialization callbacks (like InitializingBean's afterPropertiesSet or a custom init-method).
The important bit is also that
The bean will already be populated with property values.
For what concerns the relation with the
@PostConstruct note that this annotation is a convenient way of declaring a
postProcessAfterInitialization method, and Spring becomes aware of it when you either by register
CommonAnnotationBeanPostProcessor or specify the
<context:annotation-config /> in bean configuration file. Whether the
@PostConstruct method will execute before or after any other
postProcessAfterInitialization depends on the
order property
You can configure multiple BeanPostProcessor instances, and you can control the order in which these BeanPostProcessors execute by setting the order property.
|
https://codedump.io/share/n0AdNEW5WVIr/1/how-exactly-works-the-spring-bean-post-processor
|
CC-MAIN-2017-13
|
en
|
refinedweb
|
The official source of product insight from the Visual Studio Engineering Team
Like
How to create a great online project template
Here’s how you can create an online project template like IE 8 Accelerator that has a clear purpose and shows you what to do next.
using System.Speech.Synthesis; d. Add the code below into the Main method of Program.cs. (The "using" statement automatically takes care of disposing of all the resources used by the speech synthesizer when it is done.)
using System.Speech.Synthesis; d.
using (SpeechSynthesizer synth = new System.Speech.Synthesis.SpeechSynthesizer()) { synth.Speak("Hello World"); }
e. Turn up the volume and debug your project. :).
|
http://blogs.msdn.com/b/visualstudio/archive/2010/05/13/how-to-create-a-great-online-project-template-or-how-to-make-applications-that-speak.aspx
|
CC-MAIN-2014-52
|
en
|
refinedweb
|
In the context of this document, configuration is the process of preparing an application or deployable resource for deployment to a WebLogic Server instance. Most configuration information for an application is provided in its deployment descriptors. Certain elements in these descriptors refer to external objects and may require special handling depending on the server vendor. WebLogic Server uses descriptor extensions—WebLogic Server specific deployment descriptors. The mapping between standard descriptors and WebLogic Server descriptors is managed using
DDBeans and
DConfigBeans.
The following sections describe how to configure an application for deployment using the WebLogic Deployment API:
Overview of the Configuration Process
Types of Configuration Information
Perform Front-end Configuration
Customizing Deployment Configuration
This section provides information on the basic steps a deployment tool must implement to configure an application for deployment:
Application Evaluation—Inspection and evaluation of application files to determine the structure of the application and content of the embedded descriptors.
Initialize a deployment session by obtaining a
WebLogicDeploymentManager. See Application Evaluation.
Create a
WebLogicJ2eeApplicationObject or
WebLogicDeployableObject to represent the Java EE configuration of an enterprise application (EAR) or stand-alone module (WAR, EAR, RAR, or CAR). If the object is an EAR, child objects are generated. See Java EE Deployment API standard (JSR-88) at and Create a Deployable Object.
Front-end Configuration—Creation of configuration information based on content embedded within the application. This content may be in the form of WebLogic Server descriptors, defaults, and user provided deployment plans.
Create a
WebLogicDeploymentConfiguration object to represent the WebLogic Server configuration of an application. This is the first step in creating a deployment plan for this object. See Deployment Configuration.
Restore existing WebLogic Server configuration values from an existing deployment plan, if available. See Perform Front-end Configuration.
Deployment Configuration—Modification of individual WebLogic Server configuration values based on user inputs and the selected WebLogic Server targets.
A deployment tool must provide the ability to modify individual WebLogic Server configuration values based on user inputs and selected WebLogic Server targets. See Customizing Deployment Configuration.
Deployment Preparation—Generation of the final deployment plan and preliminary client-side validation of the application.
A deployment tool must have the ability to save the modified WebLogic Server configuration information to a new deployment plan or to variable definitions in an existing Deployment Plan.
The following sections provide background information on the types of configuration information, how it is represented, and the relationship between Java EE and WebLogic Server descriptors:
WebLogic Server Configuration
Representing Java EE and WebLogic Server Configuration Information
The Relationship Between Java EE and WebLogic Server Descriptors
The Java EE configuration for an application defines the basic semantics and run-time behavior of the application, as well as the external resources that are required for the application to function. This configuration information is stored in the standard Java EE deployment descriptor files associated with the application, as listed in Table 3-1.
Complete and valid Java EE deployment descriptors are a required input to any application configuration session.
Because the Java EE configuration controls the fundamental behavior of an application, the Java EE descriptors are typically defined only during the application development phase, and are not modified when the application is later deployed to a different environment. For example, when you deploy an application to a testing or production domain, the application's behavior (and therefore its Java EE configuration) should remain the same as when application was deployed in the development domain. See Perform Front-end Configuration for more information.
The WebLogic Server descriptors provide for enhanced features, resolution of external resources, and tuning associated with application semantics. Applications may or may not have these descriptors embedded in the application. The WebLogic Server configuration for an application:
Binds external resource names to resource definitions in the Java EE deployment descriptor so that the application can function in a given WebLogic Server domain
Defines tuning parameters for the application containers
Provides enhanced features for Java EE applications and stand-alone modules
The attributes and values of a WebLogic Server configuration are stored in the WebLogic Server deployment descriptor files, as shown in Table 3-2.
Because different WebLogic Server domains provide different types of external resources and different levels of service for the application, the WebLogic Server configuration for an application typically changes when the application is deployed to a new environment. For example, a production staging domain might use a different database vendor and provide more usable memory than a development domain. Therefore, when moving the application from development to the staging domain, the application's WebLogic Server descriptor values need to be updated in order to make use of the new database connection and available memory.
The primary job of a deployment configuration tool is to ensure that an application's WebLogic Server configuration is valid for the selected WebLogic targets.
Both the Java EE deployment descriptors and any available WebLogic Server descriptors are used as inputs to the application configuration process. You use the deployment API to represent both the Java EE configuration and WebLogic Server configuration as Java objects.
The Java EE configuration for an application is obtained by creating either a
WebLogicJ2eeApplicationObject for an EAR, or a
WeblogicDeployableObject for a stand-alone module. (A
WebLogicJ2eeApplicationObject contains multiple
DeployableObject instances to represent individual modules included in the EAR.)
Each
WebLogicJ2eeApplicationObject or
WeblogicDeployableObject contains a
DDBeanRoot to represent a corresponding Java EE deployment descriptor file. Java EE descriptor properties for EARs and modules are represented by one or more
DDBean objects that reside beneath the
DDBeanRoot.
DDBean components provide standard getter methods to access individual deployment descriptor properties, values, and nested descriptor elements.
DDBeans are described by the
javax.enterprise.deploy.model package. These objects provide a generic interface to elements in standard deployment descriptors, but can also be used as an XPath based mechanism to access arbitrary XML files that follow the basic form of the standard descriptors. Examples of such files would be WebLogic Server descriptors and Web services descriptors.
The
DDBean representation of a descriptor is a tree of
DDBeans, with a specialized
DDBean, a
DDBeanRoot, at the root of the tree.
DDBeans provide accessors for the element name, ID attribute, root, and text of the descriptor element they represent.
The
DDBeans for an application are populated by the model plug-in, the tool provider implementation of
javax.enterprise.deploy.model. An application is represented by the
DeployableObject interface. The WebLogic Server implementation of this interface is a public class, weblogic.deploy.api.model.
WebLogicDeployableObject. A WebLogic Server based deployment tool acquires an instance of
WebLogicDeployableObject object for an application using the
createDeployableObject factory methods. This results in the
DDBean tree for the application being created and populated by the elements in the Java EE descriptors embedded in the application. If the application is an EAR, multiple
WebLogicDeployableObject objects are created. The root
WebLogicDeployableObject, extended as
WebLogicJ2eeApplicationObject, would represent the EAR module, with its child
WebLogicDeployableObject instances being the modules contained within the application, such as WARs, EJBs, RARs and CARs.
Java EE descriptors and WebLogic Server descriptors are directly related in the configuration of external resources. A Java EE descriptor defines the types of resources that the application requires to function, but it does not identify the actual resource names to use. The WebLogic Server descriptor binds the resource definition in the Java EE descriptor name to the name of an actual resource in the target domain.
The process of binding external resources is a required part of the configuration process. Binding resources to the target domain ensures that the application can locate resources and successfully deploy.
Java EE descriptors and WebLogic Server descriptors are also indirectly related in the configuration of tuning parameters for WebLogic Server. Although no elements in the standard Java EE descriptors require tuning parameters to be set in WebLogic Server, the presence of individual descriptor files indicates which tuning parameters are of interest during the configuration of an application. For example, although the
ejb.xml descriptor does not contain elements related to tuning the WebLogic Server EJB container, the presence of an
ejb.xml file in the Java EE configuration indicates that tuning properties can be configured before deployment.
DConfigBeans (config beans) are the objects used to convey server configuration requirements to a deployment tool, and are also the primary source of information used to create deployment plans. Config beans are Java Beans and can be introspected for their properties. They also provide basic property editing capabilities.
DConfigBeans are created from information in embedded WebLogic Server descriptors, deployment plans, and input from an IDE deployment tool.
A
DConfigBean is potentially created for every weblogic Descriptor element that is associated with a dependency of the application. Descriptors are entities that describe resources that are available to the application, represented by a JNDI name provided by the server.
Descriptors are parsed into memory as a typed bean tree while setting up a configuration session. The
DConfigBean implementation classes delegate to the WebLogic Server descriptor beans. Only beans with dependency properties, such as resource references, have a
DConfigBean. The root of descriptor always has a
DConfigBeanRoot.
Bean Property accessors return a child
DConfigBean for elements that require configuration or a descriptor bean for those that do not. Property accessors return data from the descriptor beans.
Modifications to bean properties result in plan overrides. Plan overrides for existing descriptors are handled using variable assignments. If the application does not come with the relevant WebLogic Server descriptors, they are automatically created and placed in an external plan directory. For external deployment descriptors, the change is made directly to the descriptor. Embedded descriptors are never modified on disk.
Application evaluation consists of obtaining a deployment manager and a deployable object container for your application. Use the following steps:
Obtain a deployment factory class by specifying its name,
weblogic.deployer.spi.factories.internal.DeploymentFactoryImpl.
Register the factory class with a
javax.enterprise.deploy.spi.DeploymentFactoryManager instance.
For instance:);
Obtain a Deployment Manager
Create a Deployable Object
The following sections provide information on how to obtain a deployment manager:
Types of Deployment Managers
Connected and Disconnected Deployment Manager URIs
Using SessionHelper to Obtain a Deployment Manager
WebLogic Server provides a single implementation for
javax.enterprise.deploy.spi.DeploymentManager that behaves differently depending on the
URI specified when instantiating the class from a factory. WebLogic Server provides two basic types of deployment manager:
A disconnected deployment manager has no connection to a WebLogic Server instance. Use a disconnected deployment manager to configure an application on a remote client machine. It cannot be used it to perform deployment operations. (For example, a deployment tool cannot use a disconnected deployment manager to distribute an application.)
A connected deployment manager has a connection to the Administration Server for the WebLogic Server domain, and by a deployment tool to both to configure and deploy applications.
A connected deployment manager is further classified as being either local to the Administration Server, or running on a remote machine that is connected to the Administration Server. The local or remote classification determines whether file references are treated as being local or remote to the Administration Server.
Table 3-3 summarizes deployment manager types.
Each
DeploymentManager obtained from the
WebLogicDeploymentFactory supports WebLogic Server extensions. When creating deployment tools, obtain a specific type of deployment manager by calling the correct method on the deployment factory instance and supplying a string constant defined in
weblogic.deployer.spi.factories.WebLogicDeploymentFactory that describes the type of deployment manager required. Connected deployment managers require a valid server
URI and credentials to the method in order to obtain a connection to the Administration Server.
Table 3-4 summarizes the method signatures and constants used to obtain the different types of deployment managers.
The sample code in Example 3-1 shows how to obtain a disconnected deployment manager.
Example 3-1 Obtaining a Disconnected Deployment Manager); WebLogicDeploymentManager myDisconnectedManager = (WebLogicDeploymentManager)myDeploymentFactory.getDisconnectedDeploymentManager(WebLogicDeploymentFactory.LOCAL_DM_URI);
The deployment factory contains a helper method,
createUri() to help you form the URI argument for creating connected deployment managers. For example, to create a disconnected remote deployment manager, replace the final line of code with:
(WebLogicDeploymentManager)myDeploymentFactory.getDeploymentManager(myDeploymentFactory.createUri(WebLogicDeploymentFactory.REMOTE_DM_URI, "localhost", "7001", "weblogic", "weblogic"));
The
SessionHelper helper class provides several convenience methods to help you easily obtain a deployment manager without manually creating and registering the deployment factories as shown in Example 3-1. The
SessionHelper code required to obtain a disconnected deployment manager consists of a single line:
DeploymentManager myDisconnectedManager = SessionHelper.getDisconnectedDeploymentManager();
You can use the
SessionHelper to obtain a connected deployment manager, as shown below:
DeploymentManager myConnectedManager = SessionHelper.getDeploymentManager("adminhost", "7001", "weblogic", "weblogic"));
This method assumes a remote connection to an Administration Server (
adminhost). See the Javadocs for more information about
SessionHelper.
The following sections provide information on how to create a deployable object, which is the container your deployment tool uses to deploy applications. Once you have initialized a configuration session by Obtain a Deployment Manager, create a deployable object for your deployment tool in one of the following ways:
Using the WebLogicDeployableObject class
Using SessionHelper to obtain a Deployable Object
The direct approach uses the
WebLogicDeployableObject class of the model package as shown below:
WebLogicDeployableObject myDeployableObject = WebLogicDeployableObject.createWebLogicDeployableObject("myAppFileName");
Once the deployable object is created, a configuration can be created for the applications deployment.
The
SessionHelper helper class provides a convenient method to obtain a deployable object. The
SessionHelper code required to obtain a deployable object is shown below:
SessionHelper.setApplicationRoot(root); WebLogicDeployableObject myDeployableObject = SessionHelper.getDeployableObject();
There is no application specified in the
getDeployableObject() call.
SessionHelper uses the application in the root directory set by
setApplicationRoot(). Once the application root directory is set,
SessionHelper can be used to perform other operations, such as explicitly naming the dispatch file location or the deployment plan location.
You can also set the application file name using the
setApplication method as shown below:
SessionHelper.setApplication(AppFileName);
This method allows you to continue using
SessionHelper independent of the directory structure. The
getDeployableObject method returns the application specified.
Front-end configuration involves creating a
WebLogicDeploymentPlan and populating it and its associated bean trees with configuration information:
What is Front-end Configuration
Validating a Configuration
Front-end configuration phase consists of two logical operations:
Loading information from a deployment plan to a deployment configuration. If a deployment configuration does not yet exist, this includes creating a
WebLogicDeploymentConfiguration object to represent the WebLogic Server configuration of an application. This is the first step in the process of process of creating a deployment plan for this object.
Restoring any existing WebLogic Server configuration values from an existing deployment plan.
A deployment tool must be able to:
Extract information from a deployment configuration. The deployment configuration is the active Java object that is used by the Deployment Manager to obtain configuration information. The deployment plan exists outside of the application so that it can be changed without manipulating the application.
A deployment plan is an XML document that contains the environmental configuration for an application and is sometimes referred to as an application's front-end configuration. A deployment plan:
Separates the environment specific details of an application from the logic of the application.
Is not required for every application. However, a deployment plan typically exists for each environment an application is deployed to.
Describes the application structure, such as what modules are in the application.
Allows developers and administrators to update the configuration of an application without modifying the application archive.
Contains environment-specific descriptor override information (tunables). By modifying a deployment plan, you can provide environment specific values for tunable variables in an application.
The server configuration for an application is encapsulated in the
javax.enterprise.deploy.spi.DeploymentConfiguration interface. A
DeploymentConfiguration provides an object representation of a deployment plan. A
DeploymentConfiguration is associated with a
DeployableObject using the
DeploymentManager.createConfiguration method. Once a
DeploymentConfiguration object is created, a
DConfigBean tree representing the configurable and tunable elements contained in any and all WebLogic Server descriptors is available. If there are no WebLogic Server descriptors for an application, then a
DConfigBean tree is created using available default values. Binding properties that have no defaults are left unset.
When creating a deployment tool, you must ensure that the
DConfigBean tree is fully populated before the tool distributes an application.
The following code provides an example on how to populate
DConfigBeans:
Example 3-2 Example Code to Populate DConfigBeans
public class DeploymentSession { DeploymentManager dm; DeployableObject dObject = null; DeploymentConfiguration dConfig = null; Map beanMap = new HashMap(); . . . // Assumes app is a Web app. public void initializeConfig(File app) throws Throwable { /** * Init the wrapper for the DDBeans for this module. This example assumes * it is using the WLS implementation of the model api. */ dObject= WebLogicDeployableObject.createDeployableObject(app); //Get basic configuration for the module dConfig = dm.createConfiguration(dObject); /** * At this point the DeployableObject is populated. Populate the * DeploymentConfigurationbased on its content. * We first ask the DeployableObject for its root. */ DDBeanRoot root = dObject.getDDBeanRoot(); /** * The root DDBean is used to start the process of identifying the * necessary DConfigBeans for configuring this module. */ System.out.println("Looking up DCB for "+root.getXpath()); DConfigBeanRoot rootConfig = dConfig.getDConfigBeanRoot(root); collectConfigBeans(root, rootConfig); /** * The DeploymentConfiguration is now initialized, although not necessarily * completely setup. */ FileOutputStream fos = new FileOutputStream("test.xml"); dConfig.save(fos); } // bean and dcb are a related DDBean and DConfigBean. private void collectConfigBeans(DDBean bean, DConfigBean dcb) throws Throwable{ DConfigBean configBean; DDBean[] beans; if (dcb == null) return; /** * Maintain some sort of mapping between DDBeans and DConfigBeans * for later processing. */ beanMap.put(bean,dcb); /** * The config bean advertises xpaths into the web.xml descriptor it * needs to know about. */ String[] xpaths = dcb.getXpaths(); if (xpaths == null) return; /** * For each xpath get the associated DDBean and collect its associated * DConfigBeans. Continue this recursively until we have all DDBeans and * DConfigBeans collected. */ for (int i=0; i<xpaths.length; i++) { beans = bean.getChildBean(xpaths[i]); for (int j=0; j<beans.length; j++) { /** * Init the DConfigBean associated with each DDBean */ System.out.println("Looking up DCB for "+beans[j].getXpath()); configBean = dcb.getDConfigBean(beans[j]); collectConfigBeans(beans[j], configBean); } }
This example merely iterates through the
DDBean tree, requesting the
DConfigBean for each
DDBean to be instantiated.
DeploymentConfiguration objects may be persisted as deployment plans using
DeploymentConfiguration.save(). A deployment tool may allow the user to import a saved deployment plan into the
DeploymentConfiguration object instead of populating it from scratch.
DeploymentConfiguration.restore() provides this capability. This supports the idea of having a repository of deployment plans for an application, with different plans being applicable to different environments.
Similarly the
DeploymentConfiguration may be pieced together using partial plans, which were presumably saved in a repository from a previous configuration session. A partial plan maps to a module-root of a
DConfigBean tree.
DeploymentConfiguration.saveDConfigBean() and
DeploymentConfiguration.restoreDConfigBean() provide this capability.
Parsing of the WebLogic Server descriptors in an application occurs automatically when a
DeploymentConfiguration is created. The descriptors ideally conform to the most current schema. For older applications that include descriptors based on WebLogic Server 8.1 and earlier DTDs, a transformation is performed. Old descriptors are supported but they cannot be modified using a deployment plan. Therefore, any DOCTYPE declarations must be converted to name space references and element specific transformations must be performed.
SessionHelper.initializeConfiguration processes all standard and WebLogic Server descriptors in the application.
Prior to invoking
initializeConfiguration, you can specify an existing deployment plan to associate with the application using the
SessionHelper.setPlan() method. With a plan set, you can read in a deployment plan using the
DeploymentConfiguration.restore() method. In addition, the
DeploymentConfiguration.initializeConfiguration() method automatically restores configuration information once a plan is set.
When initiating a configuration session with the
SessionHelper class, you can easily initiate and fill a
deploymentConfiguration object with deployment plan information as illustrated below:
DeploymentManager dm = SessionHelper.getDisconnectedDeploymentManager(); SessionHelper helper = SessionHelper.getInstance(dm); // specify location of archive helper.setApplication(app); // specify location of existing deployment plan helper.setPlan(plan); // initialize the configuration session helper.initializeConfiguration(); DeploymentConfiguration dc = helper.getConfiguration();
The above code produces the deployment configuration and its associated
WebLogicDDBeanTree.
Validation of the configuration occurs mostly during the parsing of the descriptors which occurs when an application's descriptors are processed. Validation consists of ensuring the descriptors are valid XML documents and that the descriptors conform to their respective schemas.
The Customizing Deployment Configuration phase involves modifying individual WebLogic Server configuration values based on user inputs and the selected WebLogic Server targets.
Modifying Configuration Values
In this phase, a configuration is only as good as the descriptors or pre-existing plan associated with the application. The
DConfigBeans are designed as Java Beans and can be introspected, allowing a tool to present their content in some meaningful way. The properties of a
DConfigBean are, for the most part, those that are configurable. Key properties (those that provide uniqueness) are also exposed. Setters are only exposed on those properties that can be safely modified. In general, properties that describe application behavior are not modifiable. All properties are typed as defined by the descriptor schemas.
The property getters return subordinate
DConfigBeans, arrays of
DConfigBeans, descriptor beans, arrays of descriptor beans, simple values (primitives and
java.lang objects), or arrays of simple values. Descriptor beans represent descriptor elements that, while modifiable, do not require
DConfigBean features, meaning there are no standard descriptor elements they are directly related to. Editing a configuration is accomplished by invoking the property setters.
The Java JSR-88
DConfigBean class allows a tool to access beans using the
getDConfigBean(DDBean) method or introspection. The former approach is convenient for a tool that presents the standard descriptor based on the
DDBeans in the application's
DeployableObject and provides direct access to each
DDBean's configuration (its
DConfigBean). This provides configuration of the essential resource requirements an application may have. Introspection allows a tool to present the application's entire configuration, while highlighting the required resource requirements.
Introspection is required in both approaches in order to present or modify descriptor properties. The difference is in how a tool presents the information:
Driven by standard descriptor content or
WebLogic Server descriptor content.
A system of modifying configuration information must include a user interface to ask for configuration changes. See Example 3-3.
Example 3-3 Code Example to Modify Configuration Information
. . . // Introspect the DConfigBean tree and ask for input on properties with setters private void processBean(DConfigBean dcb) throws Exception { if (dcb instanceof DConfigBeanRoot) { System.out.println("Processing configuration for descriptor: "+dcb.getDDBean().getRoot().getFilename()); } // get property descriptor for the bean BeanInfo info = Introspector.getBeanInfo(dcb.getClass(),Introspector.USE_ALL_BEANINFO); PropertyDescriptor[] props = info.getPropertyDescriptors(); String bean = info.getBeanDescriptor().getDisplayName(); PropertyDescriptor prop; for (int i=0;i<props.length;i++) { prop = props[i]; // only allow primitives to be updated Method getter = prop.getReadMethod(); if (isPrimitive(getter.getReturnType())) // see isPrimitive method below { writeProperty(dcb,prop,bean); //see writeProperty method below } // recurse on child properties Object child = getter.invoke(dcb,new Object[]{}); if (child == null) continue; // traversable if child is a DConfigBean. Class cc = child.getClass(); if (!isPrimitive(cc)) { if (cc.isArray()) { Object[] cl = (Object[])child; for (int j=0;j<cl.length;j++) { if (cl[j] instanceof DConfigBean) processBean((DConfigBean) cl[j]); } } else { if (child instanceof DConfigBean) processBean((DConfigBean) child); } } } } // if the property has a setter then invoke it with user input private void writeProperty(DConfigBean dcb, PropertyDescriptor prop, String bean) throws Exception { Method getter = prop.getReadMethod(); Method setter = prop.getWriteMethod(); if (setter != null) { PropertyEditor pe = PropertyEditorManager.findEditor(prop.getPropertyType()); if (pe == null && String[].class.isAssignableFrom(getter.getReturnType())) pe = new StringArrayEditor(); // see StringArrayEditor class below if (pe != null) { Object oldValue = getter.invoke(dcb,new Object[0]); pe.setValue(oldValue); String val = getUserInput(bean,prop.getDisplayName(),pe.getAsText()); // see getUserInput method below if (val == null || val.length() == 0) return; pe.setAsText(val); Object newValue = pe.getValue(); prop.getWriteMethod().invoke(dcb,new Object[]{newValue}); } } } private String getUserInput(String element, String property, String curr) { try { System.out.println("Enter value for "+element+"."+property+". Current value is: "+curr); return br.readLine(); } catch (IOException ioe) { return null; } } // Primitive means a java primitive or String object here private boolean isPrimitive(Class cc) { boolean prim = false; if (cc.isPrimitive() || String.class.isAssignableFrom(cc)) prim = true; if (!prim) { // array of primitives? if (cc.isArray()) { Class ccc = cc.getComponentType(); if (ccc.isPrimitive() || String.class.isAssignableFrom(ccc)) prim = true; } } return prim; } /** * Custom editor for string arrays. Input text is converted into tokens using * commas as delimiters */ private class StringArrayEditor extends PropertyEditorSupport { String[] curr = null; public StringArrayEditor() {super();} // comma separated string public String getAsText() { if (curr == null) return null; StringBuffer sb = new StringBuffer(); for (int i=0;i<curr.length;i++) { sb.append(curr[i]); sb.append(','); } if (curr.length > 0) sb.deleteCharAt(sb.length()-1); return sb.toString(); } public Object getValue() { return curr; } public boolean isPaintable() { return false; } public void setAsText(String text) { if (text == null) curr = null; StringTokenizer st = new StringTokenizer(text,","); curr = new String[st.countTokens()]; for (int i=0;i<curr.length;i++) curr[i] = new String(st.nextToken()); } public void setValue(Object value) { if (value == null) { curr = null; } else { String[] v = (String[])value; // let caller handle class cast issues curr = new String[v.length]; for (int i=0;i<v.length;i++) curr[i] = new String(v[i]); } } } . . .
Beyond the mechanics of the rudimentary user interface, any interface that enables changes to the configuration by an administrator or user can use the property setters shown in Example 3-3.
Targets are associated with WebLogic Servers, clusters, Web servers, virtual hosts and JMS servers. See
weblogic.deploy.api.spi.WebLogicTarget and Support for Querying WebLogic Target Types.
In WebLogic Server, application names are provided by a deployment tool. Names of modules contained within an application are based on the associated archive or root directory name of the modules. These names are persisted in the configuration
MBeans constructed for the application.
In Java EE deployment there is no mention of the configured name of an application or its constituent modules, other than in the
TargetModuleID object. Yet
TargetModuleIDs exist only for applications that have been distributed to a WebLogic Server domain. Hence there is a need to represent application and module names in a deployment tool prior to distribution. This representation should be consistent with the names assigned by the server when the application is finally distributed.
Your deployment tool plug-in must construct a view of an application using the
DeployableObject and
J2eeApplicationObject classes. These classes represent stand-alone modules and EARs, respectively. Each of these classes is directly related to a
DDBeanRoot object. When presented with a distribution where the name is not configured, the deployment tool must create a name for the distribution. If the distribution is a
File object, use the filename of the distribution. If an archive is offered as an input stream, a random name is used for the root module.
The deployment preparation phase involves saving the resulting plan from a configuration session. Use the
DeploymentConfiguration.save() method (a standard Java EE Deployment API method). You can also use the
SessionHelper.savePlan() method to save a new copy of deployment plan along with any external documents in the plan directory.
The
DeploymentConfiguration.save methods creates an XML file based on the deployment plan schema that consists of a serialization of the current collection of
DConfigBeans, along with any variable assignments and definitions.
DConfigBean trees are always saved as external descriptors. These descriptors are only be saved if they do not already exist in the application archive or the external configuration area, meaning a save operation does not overwrite existing descriptors. The
DeploymentConfiguration.saveDConfigBean method does overwrite files. This is does not mean that any changes made to a configuration are lost, it means that they are handled using variable assignments.
As noted before, the
DeploymentConfiguration.restore methods are used to create configuration beans based on a previously saved deployment plan (see Perform Front-end Configuration). You can restore an entire collection of configuration beans or you can restore a subset of the configuration beans. It is also possible to save or restore the configuration beans for a specific module in an application.
Temporary files are created during a configuration session. Archives are exploded into the temp area and can only be removed after session configuration is complete. There is no standard API defined to close out a session. Use the
close() methods to
WebLogicDeployableObject and
WebLogicDeploymentConfiguration.
SessionHelper.close() to clean up after a session. If you do not clean up after closing sessions, the disk containing your temp directories may fill up over time.
|
http://docs.oracle.com/cd/E23943_01/web.1111/e13703/configuration.htm
|
CC-MAIN-2014-52
|
en
|
refinedweb
|
. ;) Jean-Paul > >cheers >lvh > > > >On 20 Jul 2012, at 10:35, Glyph wrote: >> >>On Jul 20, 2012, at 1:11 AM, Laurens Van Houtven <_ at lvh.cc> wrote: >>>Hi, >>> >>> >>>Apparently AMPBoxes aren't Arguments. However, I kind of want an >>>AMPBox (like an AMPList, but only one). >>> >>>Use case: my responses have a "location", but a location is composed >>>of several sub-things: place name, country and postal code. >>>{"location": {"placeName": "Krakow", "countryCode": "PL", postalCode: >>>"30-015"}} would be a lot nicer than having those keys in the top >>>level namespace :) >>> >>>cheers >>>lvh >> >>Seems like an easy enough thing to write. Given that AMPList doesn't >>use a length prefix (it uses null-key box-termination, just like the >>rest of the protocol) the representation would be exactly the same. >>Just add a trivial wrapper that uses AMPList, unpacks its argument, >>and assert that there's only one of them? >> >>-glyph >> >> >>_______________________________________________ >>Twisted-Python mailing list >>Twisted-Python at twistedmatrix.com >> > > >_______________________________________________ >Twisted-Python mailing list >Twisted-Python at twistedmatrix.com >
|
http://twistedmatrix.com/pipermail/twisted-python/2012-July/025882.html
|
CC-MAIN-2014-52
|
en
|
refinedweb
|
One year ago I published the article Practical ASP.NET MVC (3) tips, which has been quite helpful for a lot of people. Since the article is also a good reference for myself, I thought that publishing another article with some new tips might be a useful again.
What changed is that this time we'll focus on ASP.NET MVC 4. Most of the tips should still be applicable to older (and / or future) versions of ASP.NET MVC. There will be some tips about JavaScript interaction with ASP.NET MVC. Most tips will deal with user interaction and building up custom controls - controls that follow the principles of the modern web and MVC.
Some tips will be longer than others, some will be more trivial than others. I hope that everyone will find at least one or the other tip useful. Personally I like having a kind of reference for important things. Fragmentation is always a hard thing to cope with, which boosts my motivation towards unification.
Like last time be aware of the following disclaimer: This article will not try to teach you MVC, HTML, JavaScript or CSS. In this article I will give you a series of (mostly not-connected) tips, which could be helpful while dealing with ASP.NET MVC. Some of those tips might become obsolete with time, however, every tip will contain a lesson (or did contain one for me when I've been caught!).
From the moment I've seen ASP.NET MVC I knew that this is the best solution for creating scalable, robust and elegant dynamic web applications. The separation of concerns makes it easy to keep track of everything - even in large web applications. A lot of smart people did a great job in engineering the ASP.NET MVC framework, which is (in its core) lightweight and flexible. This flexibility makes it easy to extend or bend to our needs.
The main problem, however, is that only a few people know about how to archive certain things. Personally, I always have a look at the source code of MVC, to get an idea of how things are implemented or how things are done there. In this article we are going to see some of the inner working of ASP.NET MVC, which will hopefully help us to understand why some code works and other does not.
In my work as a consultant I am doing more web lately than ever before. The web is moving fast and everyone wants to have a great web application it seems. This is a struggle from some companies, which will eventually learn (the hard way) that their architecture is too stiff, since it is only adjusted for client (desktop) applications. Things like stateless requests or multiple users are hard to implement upon their current architecture. Nevertheless in the end they always come up with an architecture that does not only fit their previous needs, but also all future needs.
So what is the real deal behind this article? These tips will go in the following directions:
If you haven't tried out ASP.NET MVC, but you do know C# or the .NET-Framework (or even ASP.NET), then you should give it a shot right away! This article is the right choice if you did this, have a clue right now whats going on and if you want to learn more just in case. I can also recommend my previous article on ASP.NET MVC: Practical ASP.NET MVC (3) tips.
There are some limitations of the ASP.NET MVC model builder. Even though the builder is doing an almost perfect (and surely incredible) job in instantiating real objects from parameter strings (received in form of a variety of things, like the URL itself, query parameters or request content) it cannot instantiate some very particular objects from everyday strings like a simple date.
For writing our own model binders we only have to do two things:
IModelBinder
ModelBinders
Let's make a sample implementation for the DateTime model binder. All we want is that by default the date format is given in the kind of weird format dd of MM (yyyy). This can be archieved by coding the following class:
DateTime
dd of MM (yyyy)
public class CustomDateBinder : IModelBinder
{
static readonly Regex check = new Regex(@"^([0-9]{1,2})\s?of(\s[0-9]{1,2})\s?\(([0-9]{4})\)$");
public object BindModel(ControllerContext controllerContext, ModelBindingContext bindingContext)
{
var value = bindingContext.ValueProvider.GetValue(bindingContext.ModelName);
var result = DateTime.Now;
if (check.IsMatch(value.AttemptedValue))
{
var matches = check.Matches(value.AttemptedValue);
if (matches.Count == 1 && matches[0].Groups.Count == 4)
{
try
{
int year = Int32.Parse(matches[0].Groups[3].Value);
int month = Int32.Parse(matches[0].Groups[2].Value);
int day = Int32.Parse(matches[0].Groups[1].Value);
return new DateTime(year, month, day);
}
catch { }
}
}
else if (DateTime.TryParse(value.AttemptedValue, CultureInfo.InvariantCulture, DateTimeStyles.None, out result))
return result;
bindingContext.ModelState.AddModelError(bindingContext.ModelName, "The value does not represent a valid date.");
return null;
}
}
We use a regular expression since the parsing mechanism of DateTime (usually by picking the ParseExact method with a formatting string) is not suited for our needs. We don't want to be caught off guard by inserting some invalid numbers, which is why we wrap the instantiation of the DateTime instance with a try-catch block. Still old formats should work which is why we just use a plain TryParse.
ParseExact
try-catch
TryParse
Registering the binder should be done in the Application_Start method found in the Global.asax.cs file. Usually one would create a method that does all the registering. For our case we only need to register two additional binders:
Application_Start
public static void RegisterBinders()
{
ModelBinders.Binders.Add(typeof(DateTime), new CustomDateBinder());
ModelBinders.Binders.Add(typeof(DateTime?), new CustomDateBinder());
}
Here we just register both, since Nullable<T> conversions will be performed automatically.
Nullable<T>
Remark One could of course grab the previously registered binder for DateTime and use it in the freshly created instance. Most of the time this makes much more sense than trying to do the usual binding in the rest of our own implementation.
Now there are several gotchas that should be noted:
The most critical point is actually client-side validation. Usually we want to activate this, to provide a much nicer user-experience. But what if this little script will block the user from submitting a valid form? So we have to extend the client-side validation (jQuery.validate) with our own code. Since we do not want change the original script directly (otherwise our changes would be overwritten once we receive an update), the best solution is actually to write another script file.
Let's call this script file jquery.validate.custom.js and let's add the following code:
(function ($, undefined) {
var oldDate = $.validator.methods['date'];
$.validator.addMethod(
"date",
function (value, element) {
if (/^([0-9]{1,2})\s?of(\s[0-9]{1,2})\s?\(([0-9]{4})\)$/.test(value)) {
alert('Hi from our own client-side validation !');
return true;
}
return oldDate(value, element);
}, "The given string is not a valid date ...");
})(jQuery);
The code looks actually more complicated than it is. Basically we are just fetching the (current) date validation function, adding (which is replacing) the new validation function and setting a validation message. If the string seems legit, we are also showing an alert (this is just as a proof and should be removed for any productive purposes).
Sometimes we just build a framework for our homepage. In this framework we will actually leave a lot of things open for areas or webpages to change. A good way to accomplish this flexibility is by using layouts in layouts.
For using a layout in a layout all we need to do is specifying the new layout within the view. If we do this within an area we might only specify the new area in the area's _Viewstart.cshtml file. Now the interesting part happens in this new layout:
@{
Layout = "~/Views/Shared/_Layout.cshtml";
}
<div class="row">
<div class="span2">
<!-- specify new stuff here -->
</div>
<div class="span6">
@RenderBody()
</div>
</div>
As usual for a layout we are calling the RenderBody method. However, the new thing here is that we are specifying another Layout again. This specification is very crucial and includes the current layout in the specified one.
RenderBody
Layout
Another thing to note here is that one can turn off the parent (or any parent) layout easily but just applying Layout = null, e.g.:
Layout = null
@{
Layout = null;
}
@* Start something completely new here! *@
jQuery is a really amazing JavaScript library. It provides a lot of interesting features and has a very good architecture. The whole design of jQuery is the foundation of its success, which also started a new wave of web development efforts. Even though I always felt that it is the best to use jQuery from a CDN, or (if not) obtain and update it via the NuGet feed I am now strongly against doing it that way.
There are several reasons for not updating jQuery (at least not automatically). Usually we are dealing with a bunch of jQuery plugins (which are either our own or the third-party) and we, as well as others, are making assumptions about the state of the jQuery API. However, sometimes the state of the API is much more fragile than one thinks. This results in a removal or modification of the current state.
If we now just update jQuery blindly (which might happen easily if we have it in NuGet) or instantly (over a CDN), we might get trouble with one or more of our plugins. This happened to me a couple of times and since I am not in the mood of debugging those third-party plugins (sometimes I am, but usually I do not have the time), I am no strongly against such (automatic) updates. These updates should be evaluated and tested first.
Removing jQuery from NuGet is quite easy and straight forward. We just fix the version and now that everything is in a determined state we should also think about determining the bundling. Of course just including a whole directory is very efficient, but sometimes the order is important. I always recommend a basic structure like:
Scripts/
Scripts/abilities
Scripts/plugins
Scripts/...
In scripts we place the main files, e.g. jquery.js or page.js (if you want to name the page's main JavaScript file that way). The plugins folder contains only jQuery plugins, which makes their order arbitrary. Here should not be any dependencies (other than files that are placed in the root directory).
Let's have a look at a sample configuration for the RegisterBundles method.
RegisterBundles
public class BundleConfig
{
public static void RegisterBundles(BundleCollection bundles)
{
//Determine the perfect ordering ourselves
bundles.FileSetOrderList.Clear();
//jQuery and its plugins!
bundles.Add(GetJquery());
//Separate the page JS from jQuery
bundles.Add(GetPage());
}
static ScriptBundle GetJquery()
{
var jquery = new ScriptBundle("~/bundles/jquery");
jquery.Include("~/Scripts/jquery.core.js");
jquery.Include("~/Scripts/jquery.validate.js");
jquery.Include("~/Scripts/jquery.validate.unobtrusive.js");
jquery.IncludeDirectory("~/Scripts/plugins/", "*.js");
//More to come - or even plugins of plugins (subdirectories of the plugins folder)
return jquery;
}
static Bundle GetPage()
{
var page = new ScriptBundle("~/bundles/page");
page.Include("~/Scripts/page.js");
//and maybe others
return page;
}
}
Why do we need to reset the ordering over FileSetOrderList? Well if we do not clear the default values then renaming jquery.js to jquery.core.js will (for instance) have the effect of loading jQuery after e.g. jquery.validate.js, since the name of this file is in the priority list, while jquery.core.js is not.
FileSetOrderList
If we derive our controllers directly from Controller we might get in trouble in the future. It is much better to use an abstraction that is (from the code's perspective) in our own hands.
Controller
Usually I call my own base controller just BaseController, but sometimes other names fit better. Such a controller would contain methods that are used across all other controllers. In principle such a controller might also contain actions, even though usually this is not the case.
BaseController
A quite useful start might be having the following structure:
public abstract class BaseController : Controller
{
protected static String MyName([CallerMemberName] String name = null)
{
return name;
}
}
This method could be used in any action (or other method) to determine the name of the action. The result is one string less, that could be wrong, if we are require to pass in the string of the current action somewhere.
One of the best things about the guys behind ASP.NET MVC is that they understand the web. The separation of concerns is not only important in the server-side architecture (Model-View-Controller), but also on the client-side (Description-Style-Interactivity, i.e. HTML-CSS-JavaScript). This means that HTML should not contain any CSS or JavaScript. Inline styling is really not a good thing to consider (even though there are situations where it makes sense, especially in debugging or playing around). The same applies for using JavaScript within HTML. This means that <script> tags which contain content (usually JavaScript) should be avoided.
<script>
The answer to this problem is of course unobtrusive JavaScript. Here we are setting our options in form of HTML attributes (usually data-* attributes). The JavaScript(s) pick up the element due to some class / and or other attributes being set. This is actually how the (client-side) validator (jquery.validate.js) gets its information. There is also another JavaScript file called jquery.validate.unobtrusive.js, which picks up those elements and feeds the original validator script with the found elements.
data-*
It makes sense to either build our own controls with an (optional) unobtrusive model, or to write our own wrapper. Let's consider the following example of a (third-party) datepicker control in the file jquery.datepicker.js. Actually the control itself is not unobtrusive, which is why we create a new file called jquery.datepicker.unobtrusive.js. All we need is the following content:
$(function () {
$('input.pickdate').each(function () {
var options = {};
for (var name in this.dataset)
options[name] = isNaN(this.dataset[name] * 1) ? this.dataset[name] : this.dataset[name] * 1;
$(this).datepicker(options);
});
});
What is done here? Not much, we pick up all elements that fit a certain unobtrusive criteria (in this case all <input> elements with the class pickdate being set) and iterate over them. We then get all data-* attributes and put them in an object called options. Finally we invoke the jQuery plugin with the created options.
<input>
pickdate
options
In our MVC view we can now write code like the following:
@Html.TextBox("mydate", DateTime.Now, new { @class = "pickdate", data_week_start = "4", data_format = "dd-mm-yyyy" })
And without any additional JavaScript code a datepicker control will be created - unobtrusively as preferred.
There are two things to remark here:
dataset
weekStart
Staying unobtrusive gives us more flexibility and easier maintenance.
Rarely we have a form that might contain an array. Even more rarely we have a form that contains an array, which again contains an array. In my case I had a complex JavaScript control, that could add, edit or remove entries. All operations would be tracked and send to server, once the user decides to save by clicking on a button. The submission had been done by an AJAX call (over jQuery).
What one would expect is a structure like IEnumerable<GridStateSaver<RowData>>. In this case RowData is just a model with some data (like an id, a name and so on). The generic class GridStateSaver looks the following:
IEnumerable<GridStateSaver<RowData>>
RowData
GridStateSaver
public class GridStateSaver<T>
{
public GridSaveState State
{
get;
set;
}
public IEnumerable<T> Rows
{
get;
set;
}
}
public enum GridSaveState
{
Added,
Updated,
Deleted
}
So all in all we are just enumerating over all possible changes, where we receive the whole batch of rows with the same modification type (add, update, delete). Using jQuery for the job we receive a quite nice response with everything being used with array index notation. However, even though ASP.NET MVC finds the right number of states being transmitted (e.g. 2 for only add and update or 1 for only delete etc.), it does not go further down the tree to instantiate rows or set the state.
The signature of the action in the MVC controller looks like this:
public ActionResult ActionName(IEnumerable<GridStateSaver<RowData>> states)
{
/* ... */
}
Now that we already dicussed that the straight forward way does not work, let's see a way that works. Suppose we have stored our information in an array called data. The following code would post this array as a stringified JSON object:
data
$.ajax({
contentType: 'application/json; charset=utf-8',
url: /* place URL to post here */,
type: 'POST',
success: /* place success callback here */,
error: /* place error callback here */,
data: JSON.stringify({ states : data })
});
This approach works quite nice, because MVC will automatically detect the transmission as being done via JSON. This way is also faster than the usual detection, because JSON has a direct array notation. Therefore use JSON for posting complex data with JavaScript.
A very important part of every web application is the JavaScript that basically connects all included JavaScript files and the webpage. Usually everything starts with one of the following blocks (using jQuery):
$(function() {}
/* Content */
);
$(document).ready(function() {
/* Content */
});
$(window).load(function() {
/* Content */
});
While this approach has a lot of benefits it has also one disadvantage: There is no object to communicate with (possible) other scripts. Or to say it differently: this method is not pluggable. This can be fixed by providing such an object, either by using the window object explicitly as a host, or by creating a global object. The global object could also be placed in another JavaScript file.
window
Another advantage of such an approach is that the global object could also give access for debugging information for instance. Let's design a very simple global container (just as an object, even though there are more advanced and better ways to do that):
var app = {
initialized: false,
path : '/',
name : 'myapp',
debug : [],
goto : function() { /* ... */ }
};
Such a central object has many other advantages as well. Of course it is most useful if we build something like a single-page-application, where additional JavaScripts might be required for some pages. In such cases one could do the following:
var app = {
queue : [],
load : function(callback) {
app.queue.push(callback);
},
run : function() {
for (var i = 0, n = app.queue.length; i < n; i++)
(app.queue[i])();
app.queue.splice(0, n);
},
/* ... */
};
So every (additional) JavaScript will run code like
app.load(function() {
/* additional code to load */
});
instead of the usual wrapped code
$(function() {
/* additional code to load */
});
Now such an architecture could then be used for any kind of modular experience one would wish. The loading functions could do additional bindings, activating some slick animations or just setting up some more specialized controls.
Sometimes customers have special requirements. They want to modularize their web project, but they do not want to include areas in the same project. Of course such a treatment is possible, however, archiving this is not straight forward. There are several possible ways and every way has benefits and disadvantages. Let's look at some of the possible solutions:
In my opinion the best solution is of course number 1. But this is not a solution for the original problem - to make everything pluggable by adding / removing just a single library (*.dll file)! Therefore option number 2-4 are also excluded, since these options have additional files to be transported. It should be noted, however, that NuGet would make such a process very elegant and easy.
So if a company would go for number 4, it would certainly have several benefits. The pluggable architecture would be provided by NuGet - if a (not-yet-used) NuGet package is found, it would be installed automatically (resources would be copied and the library would be placed). Otherwise NuGet packages could also be removed - which would result in a clean removal of the library, as well as the resources.
Nevertheless in this tip we will have a look at number 5. Since writing our own virtual path provider is tedious, we will use the MvcContrib library. What we get in the end is a web application that is centered around a central application, with pluggable modules being packaged in libraries.
The MvcContrib library does much more for us then only providing the abstract PortableAreaRegistration class, that we need to derive from for our portable area. It also provides the message bus, which is now included in the Mvc architecture out-of-the-box. The message bus couples two (otherwise loosly coupled) modules together, i.e. it helps us to establish a connection from the portable area to whatever web application and vice-versa.
PortableAreaRegistration
Nowadays nearly every (major) webpage offers a lot of interactivity and features. However, the real test is not if a webpage is really interactive and useful when users have JavaScript activated, but when JavaScript is not available. Of course this test will fail miserably in some obvious cases (try making a real-time game (like a jump and run) without JavaScript, or a painting program), where high interactivity is required.
However, in most cases the test should not fail. If the webpage is not usable any more without JavaScript, then something is terribly wrong. Think about Amazon requiring JavaScript for your checkout process. Most people would not be affected, but those few who either can't enable JavaScript in their browser (due to company policies) or don't want to turn it on (due to security concerns), cannot spend money on the webpage. In consequence Amazon will make less money.
If we have parts on the page that will be modified from a JavaScript it is quite easy to add some NoJS fallback. Consider the following:
<div class="loadfeed">
<noscript>This feature requires JavaScript.</noscript>
</div>
It's that easy! As long as the deactivated features are clearly marked and not required for operating the page everything's fine. However, the real NoJS has to come to our mind when thinking about form controls or related user interaction elements. Obviously our application has to be independent of such controls. If we use them, it must be self-explanatory that such controls would only enhance the user's experience, but not be mandatory for it.
Consider the following example: We include a datepicker control on our webpage.
<input type="date" class="datepicker" placeholder="Please enter a date in the format DD-MM-YYYY" />
If JavaScript is enabled we pick up all <input> tags with the datepicker class set. Then we will hide the original input and show a different one (with the datepicker). This solution is quite robust. Why?
datepicker
Of course sometimes more effort is required to provide such a flexible way of accessing things. Sometimes it might be impossible to provide a proper client-side solution for people without JavaScript. Nevertheless in most cases it is worth the additional effort.
This is how the demo looks without JavaScript being active:
It is worth testing the webpage / web application at least once without JavaScript being active.
In our code (C# or JavaScript) we always try to follow some principles like DRY or SOLID. We architecture everything and encapsulate data. Why aren't we doing the same thing with CSS? Variables would be a great starting point, followed by mixings and included hierarchy selectors. This is basically what LESS offers. I actually had my doubts due to the problems with distributing LESS stylesheets (a transpiler is required, since deliviering another JavaScript for it sounds like the completely wrong solution to me). Needless to say that Visual Studio has the perfect answer already included: a plugin called Web Essentials.
This plugin automatically saves LESS stylesheets additionally the CSS and minified CSS format. Therefore we can simply bundle / distribute CSS without having to think much about LESS or stylesheet preprocessors in general.
It's very similar with TypeScript. TypeScript is actually JavaScript and gives us a set of amazing features out-of-the-box. While LESS comes with web essentials, TypeScript (additionally) needs to be downloaded and installed. The whole process does not hurt and begins at the Download Center.
Again every TypeScript (.ts) file will be saved automatically as a JavaScript (.js) and minified version that ends with .min.js. So no real burden here, just use it!
As a final remark: If we want to use TypeScript efficiently then we might want to add TypeScript definition files *.d.ts. There is even a good database online. Additionally one should include references to other included JavaScript files by dragging them into the editor.
Quite often we want to group content in tabs. Tabs will require us to write 3 things:
Writing a little extension for generating such tabs sounds therefore like a good plan. In the end we want to generate HTML like this:
<div class="tabs">
<ul class="tabs-head">
<-- For every tab we need the following -->
<li>
<-- name of the tab -->
</li>
</ul>
<div class="tabs-body">
<-- For every tab we need the following -->
<div class="tab">
<-- content of the tab -->
</div>
</div>
</div>
This HTML could be styled the right way (to look like tabs) with the following CSS code:
ul.tabs-head {
display: block;
list-style: none;
border-bottom: 1px solid #ccc;
margin: 0;
padding: 0;
}
ul.tabs-head li {
display: inline-block;
margin: 0 10px;
border: 1px solid #ccc;
position: relative;
top: 1px;
height: 25px;
padding: 10px 20px 0 20px;
background: #eee;
cursor: pointer;
}
ul.tabs-head li:hover {
background: #fff;
}
ul.tabs-head li.active-tab {
border-bottom: 1px solid #fdfdfd;
background: #fff;
font-weight: bold;
}
div.tabs-body {
border: 1px solid #ccc;
border-top: 0;
padding: 10px;
}
Of course we also need a little bit of JavaScript to make this work smoothly. The simplest solution (without remembering the tab etc.) could be written like this:
; (function ($, undefined) {
$.fn.tabs = function () {
return this.each(function () {
var links = $('ul.tabs-head > li', this);
var tabs = $('.tab', this);
var showTab = function (i) {
links.removeClass('previous-tab next-tab active-tab')
.eq(i).addClass('active-tab');
if (i > 0) links.eq(i - 1).addClass('previous-tab');
if (i < links.length - 1) links.eq(i + 1).addClass('next-tab');
tabs.hide().eq(i).show(i);
};
links.each(function(i, v) {
$(v).click(function() {
showTab(i);
});
});
showTab(0);
});
};
})(jQuery);
Now we need to wire up everything. First we want to make an extension method that constructs such a tabs control. The problem here is, that the HTML is not sequential. We have to places where we need to enter data from our tabs (one place for all the titles and another one for all the content). Of course one could solve it by splitting the extension method into two parts, however, this would not be very elegant.
Therefore we go for a solution that will feel very close to the BeginForm extension method. The extension method is quite simple:
BeginForm
public static TabPanel Tabs(this HtmlHelper html)
{
return new TabPanel(html.ViewContext);
}
That does not look too complicated! In the end we use it like
@using(var tabs = Html.Tabs())
{
@tabs.NewTab("First tab",
@<text>
<strong>Some content (in first tab)...</strong>
</text>)
@tabs.NewTab("Second tab",
@<text>
<strong>More content (in second tab)...</strong>
</text>)
}
Obviously there is some magic going on in this TabPanel class. Let's see the implementation:
TabPanel
public sealed class TabPanel : IDisposable
{
Boolean _isdisposed;
ViewContext _viewContext;
List<Func<Object, Object>> _tabs;
internal TabPanel(ViewContext viewContext)
{
_viewContext = viewContext;
_viewContext.Writer.Write("<div class=\"tabs\"><ul class=\"tabs-head\">");
_tabs = new List<Func<Object, Object>>();
}
public MvcHtmlString NewTab(String name, Func<Object, Object> markup)
{
var tab = new TagBuilder("li");
tab.SetInnerText(name);
_tabs.Add(markup);
return MvcHtmlString.Create(tab.ToString(TagRenderMode.Normal));
}
public void Dispose()
{
if (!_isdisposed)
{
_isdisposed = true;
_viewContext.Writer.Write("</ul><div class=\"tabs-body\">");
for (int i = 0; i < _tabs.Count; i++)
{
_viewContext.Writer.Write("<div class=\"tab\">");
_viewContext.Writer.Write(_tabs[i].DynamicInvoke(_viewContext));
_viewContext.Writer.Write("</div>");
}
_viewContext.Writer.Write("</div></div>");
}
}
}
The main principle is quite easy: We are writing directly to the ViewContext. In order to archive this non-sequential stuff, we are buffering the contents of the tabs (while sequentially writing out the headers). In the end we are closing the head, flushing all the content and finalizing the container's HTML.
ViewContext
In order to buffer the content we are using a little trick with an automatic conversion to a function delegate by the view generator. This trick has one drawback: Other helpers that also write directly to the ViewContext are useless within tabs (most popular example: the BeginForm method). Here we would be required to write manual HTML or use another helper, which directly returns MvcHtmlString.
MvcHtmlString
Now we just need to wire up our jQuery tabs plugin with the generated content:
$(function() {
$('div.tabs').tabs();
});
Finally the outcome could look as shown in the image below.
One problem with this is that it is really not suited for smaller screen sizes (e.g. on mobile devices). This can be changed by simply adding the following CSS code:
@media only screen and (max-width: 540px) {
ul.tabs-head {
position: relative;
}
ul.tabs-head li {
display: none;
margin: 0;
height: auto;
padding: 0;
cursor: default;
overflow: hidden;
}
ul.tabs-head li:hover {
background: none;
}
ul.tabs-head li.active-tab {
display: block;
background: none;
font-weight: normal;
padding: 10px 50px;
font-size: 24px;
}
ul.tabs-head li.previous-tab, ul.tabs-head li.next-tab {
color: transparent;
display: block;
position: absolute;
width: 32px;
height: 32px;
top: 10px;
border: 0;
z-index: 100;
cursor: pointer;
}
ul.tabs-head li.previous-tab {
left: 10px;
background: url(images/back.png) #ffffff;
}
ul.tabs-head li.next-tab {
right: 10px;
background: url(images/next.png) #ffffff;
}
}
The 540px value determines the threshold. Below this value we will have the responsive design enabled. This value might be too low (depending on the amount of tabs), so a higher value might be better. This is how it looks:
One of the biggest performance killers of webpages is the database system. This is the central brain of the application and somehow fragmenting it into replicas with some internal synchronization is a real boost. The only thing that we can actually do for minimizing database load (and therefore minimizing page generation time as well as maximizing the number of requests per minute) is to improve the queries we write.
As discussed in the last set of tips we should always use a DAL to communicate with our database. A possible way is to use the Entity Framework. It is free, contains a lot of great features and is all-in-all a very robust implementation. Optimizations fall into the following categories:
Selects that do not change in a while (or only in certain time-intervals) could be cached. Some queries could be written with less statements and / or resulting in a much better query path. Also the number of returned values might be bigger than required, which is again another source of optimization. Of course if we could avoid a query completely or merge multiple queries into one query, we gain a lot of performance.
Let's have a look at some examples to understand where those categories can be applied. Let's start with the following uncached query:
User GetUser(Guid id)
{
return Db.Users.Where(m => m.Id == id).FirstOrDefault();
}
Now we might replace this with something that works like the following piece of code:
User GetUser(Guid id)
{
return HttpRuntime.Cache.GetOrStore<User>(
"User" + id,
() => Db.Users.Where(m => m.Id == id).FirstOrDefault()
);
}
A very simple implementation for this GetOrStore method could be done as shown below.
GetOrStore
public static class CacheExtensions
{
public static T GetOrStore<T>(this Cache cache, String key, Func<T> generator)
{
var result = cache[key];
if(result == null)
{
result = generator();
cache[key] = result;
}
return (T)result;
}
}
However, we should note that this caching algorithm does not contain any discard policy. Hence this is a memory leak. In productive environments one should always think about suitable discard policies before enabling a cache system.
So what is condensing about? Sometimes one writes a too complicated query. Simplifying the query or making it more lightweight is therefore one of the greatest possible performance boosts. It is hard to give an example in LINQ here, so we are just using some plain SQL.
Consider the following SQL:
FROM mytable mo
WHERE EXISTS
(
FROM othertable o
WHERE o.othercol = mo.col
)
Now we replace this with a JOIN:
SELECT mo.*
FROM mytable mo
INNER JOIN othertable o on o.othercol = mo.col
Overall condensing is not about writing the query shorter, but more efficiently. This should yield a faster execution plan.
Reducing does not require any example. Most of the time we are fetching too much data from the database. Even when we are interested in all the books that have been bought by a particular user, are we really interested in all the book data, too? Or would it be enough to return the ids and names of those books?
The fourth category, merging is explained by a very illustrative example.
public IEnumerable<Book> GetPurchasedBooksBy(String userName)
{
var userId = Db.Users.Where(m => m.Login == userName).FirstOrDefault();
if(userId != null)
{
var books = Db.Books.Where(m => m.FKBuyer == userId).ToEnumerable();
return books;
}
return Enumerable.Empty<Book>();
}
Why do we need two queries if everything can be done with one query? The code would also be much more straight forward then:
public IEnumerable<Book> GetPurchasedBooksBy(String userName)
{
return Db.Books.Join(
Db.Users.Where(m => m.Login == userName),
m => m.FKBuyer,
m => m.Id,
(book, user) => book
).ToEnumerable();
}
Here we are joining both tables on the foreign key that maps to the user's primary key. Additionally we are obeying our username constraint and we are just interested in the books.
Finally avoiding is just skipping queries that are actually not required. The king of such queries is usually executed by inexperienced users, who are using powerful frameworks:
public List<User> GetCreatedUsers(Guid id)
{
return Db.Users.Where(m => m.Creator.Id == id);
}
This only works, when the ORM is quite good. But even then there is a left outer join required to perform this query. It would be much better to use the (already placed) foreign key:
public List<User> GetCreatedUsers(Guid id)
{
return Db.Users.Where(m => m.FKCreator == id);
}
Not much difference but always a better choice (even though some ORM might optimize the case above).
There are cases where all we want to do is showing an overview of what actions are possible. Usually this is within the area of a particular controller. In such scenarios reflections comes in very handy.
If we combine reflection with the usage of attributes we are getting self-generating code. All we need to do is write a nice re-usable interface. Let's consider the following class:
public static class Generator<TController, TAttribute>
where TController : Controller
where TAttribute : Attribute
{
public static IEnumerable<Item> Create()
{
var controller = typeof(TController);
var attribute = typeof(TAttribute);
return controller.GetMethods()
.Where(m => m.DeclaringType == controller)
.Select(m => new
{
Method = m,
Attributes = m.GetCustomAttributes(attribute, false)
})
.Where(m => m.Attributes.Length == 1)
.Select(m => new Item
{
Action = m.Method,
Attribute = (TAttribute)m.Attributes[0]
});
}
public class Item
{
public MethodInfo Action
{
get;
set;
}
public TAttribute Attribute
{
get;
set;
}
}
}
With the information of the controller and the attribute type we are iterating over all actions of the given particular controller. Finally we are generating some kind of temporary object (but not anonymous, since otherwise we would lose the information) and returning this enumeration.
How can we use it? First let's see an example controller:
public MyController : Controller
{
public ViewResult Index()
{
return View();
}
[Item("First item", Description = "This is the first item")]
public ViewResult First()
{
/* ... */
}
[Item("2nd item", Description = "Another item - actually the second ...")]
public ViewResult Second()
{
/* ... */
}
[Item("Third item", Description = "This is most probably the last item")]
public ViewResult Third()
{
/* ... */
}
}
Okay, so all (shown) actions except the Index action are decorated with an ItemAttribute attribute. This makes sense, since we (most probably) want to get the listing of all the methods within the Index view. Also in other views we might be only interested in the sub-actions and not in the index-action. The implementation of the attribute class is given in the next code snippet.
Index
ItemAttribute
[AttributeUsage(AttributeTargets.Method)]
public sealed class ItemAttribute : Attribute
{
public ItemAttribute(String name)
{
Name = name;
}
public String Name
{
get;
private set;
}
public String Description
{
get;
set;
}
}
So how can we use our little generator? Actually it is not that hard. Let's see a sample generation:
var list = Generator<MyController, ItemAttribute>.Create()
.Select(m => new MyModel
{
Action = m.Action.Name,
Description = m.Attribute.Description,
Name = m.Attribute.Name
}).ToList();
The generator is independent of particular attributes, controllers or models. Therefore it could be used anywhere.
ASP.NET MVC 4 added some great features along the line. One of my favorite features is the all new ApiController, which is called the web api. This makes creating RESTful services that follow the CRUD (Create / POST, Read / GET, Update / PUT, Delete / DELETE) principles quite easy. Here we can embrace HTTP with great automatic behaviors like OData handling, protocol (e.g. JSON, XML, ...) detection or the usual model construction.
ApiController
Nevertheless if we provide an open API to specific functionality of our website's service, we are also required to provide a good and solid API, which lists and explains the various API calls. Writing documentation is hard enough, but Visual Studio helps us a lot in writing some inline documentation for our methods. As we already know, that inline documentation can be transformed to XML, which can be transported and read-out.
As shown above it is enough to activate the XML export and set a different path where the output should be written. The App_Data folder is a natural choice, since it is already configured to be used only internally. So access from the outside should be forbidden. This is what we want here, otherwise people could also see documentation for the rest of our code, which would give them indications on how our web application works.
Finally we might want to get a jump-start for a nice help page. The following NuGet command will do the trick:
Install-Package Microsoft.AspNet.WebApi.HelpPage
It installs a pre-configured help page. This help page is already installed if one started with a project of type MVC 4 web API application. Now only one more thing is required:
config.SetDocumentationProvider(new XmlDocumentationProvider(HttpContext.Current.Server.MapPath("~/App_Data/XmlDocument.xml")));
This snippet has to be placed inside the HelpPageConfig.cs file of the App_Start folder of the new help page. Additionally one step is required if one has not yet called the RegisterAllAreas method of the AreaRegistration class from within the global.asax.cs file.
RegisterAllAreas
AreaRegistration
MVC is really a beauty, however, sometimes it follows the web spirit to closely by relying too much on strings. In my opinion it should always be possible to specify things by a string OR by something that might be caught when compiling. This gives flexibility during runtime but ensures robustness at compile-time. One way of calling views in a strongly typed manner is the small, but helpful StronglyTypedViews T4 template, written by Omar Gamil Salem. We can install the template over NuGet, simply by running the following command:
Install-Package StronglyTypedViews
Now could replace the following statement,
public ViewResult Product(int id)
{
return View("Product", id);
}
with this version:
public ViewResult Product(int id)
{
return MVCStronglyTypedViews.Products.Product(id);
}
Now that looks much longer than before and quite useless. In the presented scenario we could have also written the following:
public ViewResult Product(int id)
{
return View(id);
}
Now there is also no string needed to specify the view (since we want the view that corresponds to the called action). But we have to remember two things here:
Int32
Object
The second point is the real killer argument here. Suppose we made a small change in our code:
public ViewResult Product(Guid id)
{
return View(id);
}
We would not see an error message. However, going on our webpage we would see one (and a really bad one: since this is during runtime!). So going for the strongly typed views will help us in three scenarios:
After having installed the NuGet package we have a new file called StronglyTypedViews.tt in the root directory of our solution.
Right clicking on the file as shown in the image above gives us the option of running it. Well, that's all we need to do after having added new views!
Internal exceptions in ASP.NET MVC will be handled quite nicely. Here the convention of the Error.cshtml file in the shared folder is enough. This convention is actually implemented by a filter - in form of the HandleErrorAttribute. The filter is integrated in the global.asax.cs file.
HandleErrorAttribute
public static void RegisterGlobalFilters(GlobalFilterCollection filters)
{
filters.Add(new HandleErrorAttribute());
}
Personally I believe that the custom errors mode should always be remote only. We do not want expose anything about our system to the outside. Nevertheless what we really want is to provide a custom error screen in any scenario.
<-- place this in the system.web node -->
<customErrors mode="RemoteOnly" />
To archieve this we have to install an EndRequest handler. The code is placed in the global.asax.cs file and looks like the following.
EndRequest
protected void Application_EndRequest(Object sender, EventArgs e)
{
ErrorConfig.Handle(Context);
}
Now the question is how the static Handle method of the ErrorConfig class is implemented. Here we simply look at the given status-code. We do not want to change the status-code, but we actually want to show a custom view. The best thing to do here is to create another controller and create it.
Handle
ErrorConfig
public class ErrorConfig
{
public static void Handle(HttpContext context)
{
switch (context.Response.StatusCode)
{
//Not authorized
case 401:
Show(context, 401);
break;
//Not found
case 404:
Show(context, 404);
break;
}
}
static void Show(HttpContext context, Int32 code)
{
context.Response.Clear();
var w = new HttpContextWrapper(context);
var c = new ErrorController() as IController;
var rd = new RouteData();
rd.Values["controller"] = "Error";
rd.Values["action"] = "Index";
rd.Values["id"] = code.ToString();
c.Execute(new RequestContext(w, rd));
}
}
The controller itself might be as simple as shown below.
internal class ErrorController : Controller
{
[HttpGet]
public ViewResult Index(Int32? id)
{
var statusCode = id.HasValue ? id.Value : 500;
var error = new HandleErrorInfo(new Exception("An exception with error " + statusCode + " occurred!"), "Error", "Index");
return View("Error", error);
}
}
What is really important here is to keep the ErrorController internal (here written explicitly for clarity). We don't want any user to invoke any action of this controller intentionally. Instead we only want that actions of this controller might be invoked if a real error occurs.
ErrorController
internal
Most people use ASP.NET MVC with the Razor view engine. There are quite some arguments to pick Razor over ASPX, however, the people who want to stick to ASPX are also free to do so. Other view engines exist as well, and might be better for some people or in some situations.
By default MVC comes with ASPX and the Razor view engine. The actual choice does not matter here, it only affects in which language the standard views (if any) will be generated. If a view is missing we actually see, that paths for *.aspx have been searched as well. This search is of course a little bit expensive.
The following code is enough to remove all view engines.
ViewEngines.Engines.Clear();
ViewEngines.Engines.Add(new RazorViewEngine());
We should place it somewhere to be executed within the Application_Start method. Additionally it makes sense to disable writing the standard MVC response header. Adding specific headers to the response is actually kind of a security issue (not a big one), since we are telling other people about the implementation of our system (here one could spy that obviously ASP.NET MVC is used).
MvcHandler.DisableMvcResponseHeader = true;
This leaves our web app behind with optimized headers (saves a little bit of output generation time) and optimized search paths (no *.aspx files will be searched before searching for *.cshtml files).
Razor allows us to define functions within the code. Of course we wouldn't use it to simply generate math functions or LINQ queries, but to generate functions that return HTML - without the burden of doing the concatenation and managing the tags.
In order to create a function within Razor we only need the @helper directive. Let's see a simple example.
@helper
You have @PluralPick(Model.Count, "octopus", "octopuses") in your collection.
@helper PluralPick(Int32 amount, String singular, String plural)
{
<span>
@amount @(amount == 1 ? singular : plural)
</span>
}
Now we can use the PluralPick function anywhere in the same view. What is even more useful is to create such helpers globally, i.e. for any view to use. How can this be done? Well, here the App_Code folder comes to rescue. This is kind of a special folder for ASP.NET MVC. Any *.cshtml file here will not derive from WebViewPage but from HelperPage.
PluralPick
WebViewPage
HelperPage
Such a view will create public static methods out of @helper directives. Therefore it makes sense to create files like Helper.cshtml within the App_Code folder and place all (probably) useful globally helper functions in there.
public static
Let's take the code above as an example and put the helper method inside a view called Helper.cshtml:
@helper PluralPick(Int32 amount, String singular, String plural)
{
<span>
@amount @(amount == 1 ? singular : plural)
</span>
}
This is exactly the same code as above! Now what do we have to change in our original view?
You have @Helper.PluralPick(Model.Count, "octopus", "octopuses") in your collection.
We did not change much, but we are required to specify the name of the HelperPage, where the helper function is defined.
I've compiled a small sample project, which contains actions for almost all tips here (or code fragments of the tip). You are free to use the code / adjust it or remove it as you wish.
Basically it is a MVC 4 web application that contains most of the tips. Some of the tips have been applied in some files, while others have been implemented as examples within the available actions.
Even though most tips will be known for every MVC developer I hope that some tips have been interesting and useful or at least fun reading.
If you have one or the other tip to share then go ahead and post it in the comments. As with the first article I would be more than happy to extend this article with your best tips and tricks around ASP.NET MVC.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
//_viewContext.Writer.Write("</ul><div class=\"tabs-body\">"); your one
_viewContext.Writer.Write("<ul><div class=\"tabs-body\">");
for (int i = 0; i < _tabs.Count; i++)
{
_viewContext.Writer.Write("<div class=\"tab\">");
_viewContext.Writer.Write(_tabs[i].DynamicInvoke(_viewContext));
_viewContext.Writer.Write("</div>");
}
//_viewContext.Writer.Write("</div></div>"); your One
_viewContext.Writer.Write("</div></ul>"); // Correct One
div
ul
General News Suggestion Question Bug Answer Joke Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages.
|
http://www.codeproject.com/Articles/635324/Another-set-of-ASP-NET-MVC-tips?msg=4661334
|
CC-MAIN-2014-52
|
en
|
refinedweb
|
Someone sent me email regarding my online calculator for computing the distance between to locations given their longitude and latitude values. He wants to do sort of the opposite. Starting with the longitude and latitude of one location, he wants to find the longitude and latitude of locations moving north/south or east/west of that location. I like to answer reader questions when I can, so here goes. I’ll give a theoretical derivation followed by some Python code.
Longitude and latitude and are usually measured in degrees, but theoretical calculations are cleaner in radians. Someone using the Python code below could think in terms of degrees; radians will only be used inside function implementations. We’ll use the fact that on a circle of radius r, an arc of angle θ radians has length rθ. We’ll assume the earth is a perfect sphere. See this post for a discussion of how close the earth is to being a sphere.
Moving North/South
I’ll start with moving north/south since that’s simpler. Let R be the radius of the earth. An arc of angle φ radians on the surface of the earth has length M = Rφ, so an arc M miles long corresponds to an angle of φ = M/R radians. Moving due north or due south does not change longitude.
Moving East/West
Moving east/west is a little more complicated. At the equator, the calculation is just like the calculation above, except that longitude changes rather than latitude. But the distance corresponding to one degree of longitude changes with latitude. For example, one degree of longitude along the Arctic Circle doesn’t take you nearly as far as it does at the equator.
Suppose you’re at latitude φ degrees north of the equator. The circumference of a circle at constant latitude φ, a circle parallel to the equator, is cos φ times smaller than the circumference of the equator. So at latitude φ an angle of θ radians describes an arc of length M = R θ cos φ. A distance M miles east or west corresponds to a change in longitude of θ = M/(R cos φ). Moving due east or due west does not change latitude.
Python code
The derivation above works with angles in radians. Python’s cosine function also works in radians. But longitude and latitude are usually expressed in degrees, so function inputs and outputs are in degrees.
import math # Distances are measured in miles. # Longitudes and latitudes are measured in degrees. # Earth is assumed to be perfectly spherical. earth_radius = 3960.0 degrees_to_radians = math.pi/180.0 radians_to_degrees = 180.0/math.pi def change_in_latitude(miles): "Given a distance north, return the change in latitude." return (miles/earth_radius)*radians_to_degrees def change_in_longitude(latitude, miles): "Given a latitude and a distance west, return the change in longitude." # Find the radius of a circle around the earth at given latitude. r = earth_radius*math.cos(latitude*degrees_to_radians) return (miles/r)*radians_to_degrees
13 thoughts on “Converting miles to degrees longitude or latitude”
I recently had a similar question, and found some great functions at
You may find math.radians() and math.degrees() useful.
Thanks. I hadn’t noticed those functions.
dr. cook i still need help on my math im in the 6th grade and i dont get this stuff … plz email me and tell me how in the world does this stuff work thnks so so much i really need ur help
kameran!!!!
Kameran, Are you wanting to understand longitude and latitude? You can send me email to discuss you questions. My email address and other contact info is listed here.
John,
Can you help me converting ***ft North , ****East into degress format ( I mean ** deg.N, ***Deg.E)
Thanks for your help
Hi,
Thanks for such a clear explanation and useful script.
Do you know how the script could be tweaked in order to get lat/long in some not completely spherical projection system- like WGS84 (1984 datum)?
Thanks very much.
tomas bar
Tomas, sorry, but I’m not familiar with that.
Thanks for the great explanation and code!
I’ve linked this from stackoverflow.com “Convet long/lat to pixel x/y on a given picure.” – maybe this article helps:
Thanks for the great explanation. I am referencing this blog and giving you credit in a blog I just finished: over here.
|
http://www.johndcook.com/blog/2009/04/27/converting-miles-to-degrees-longitude-or-latitude/
|
CC-MAIN-2014-52
|
en
|
refinedweb
|
java.lang.Object
org.apache.shiro.crypto.JcaCipherServiceorg.apache.shiro.crypto.JcaCipherService
org.apache.shiro.crypto.AbstractSymmetricCipherServiceorg.apache.shiro.crypto.AbstractSymmetricCipherService
org.apache.shiro.crypto.DefaultBlockCipherServiceorg.apache.shiro.crypto.DefaultBlockCipherService
org.apache.shiro.crypto.AesCipherServiceorg.apache.shiro.crypto.AesCipherService
public class AesCipherService
CipherService using the
AES cipher algorithm for all encryption, decryption, and key operations.
128,
192and
256bits*. This implementation defaults to 128 bits. Note that this class retains the parent class's default
CBCmode of operation instead of the typical JDK default of
ECB.
ECBshould not be used in security-sensitive environments because
ECBdoes not allow for initialization vectors, which are considered necessary for strong encryption. See the
parent class's JavaDoc and the
JcaCipherServiceJavaDoc for more on why the JDK default should not be used and is not used in this implementation. * Generating and using AES key sizes greater than 128 require installation of the Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy files.
public AesCipherService()
CipherServiceinstance using the
AEScipher algorithm with the following important cipher default attributes: * The
CBCoperation mode is used instead of the JDK default
ECBto ensure strong encryption.
ECBshould not be used in security-sensitive environments - see the
DefaultBlockCipherServiceclass JavaDoc's "Operation Mode" section for more. **In conjunction with the default
CBCoperation mode, initialization vectors are generated by default to ensure strong encryption. See the
JcaCipherServiceclass JavaDoc for more.
|
http://shiro.apache.org/static/current/apidocs/org/apache/shiro/crypto/AesCipherService.html
|
CC-MAIN-2014-52
|
en
|
refinedweb
|
Patent application title: Virtual Machine Monitor
Inventors:
Vladimir Grouzdev (Paris, FR)
Assignees:
VirtualLogix SA
IPC8 Class: AG06F944FI
USPC Class:
718 1
Class name: Electrical computers and digital processing systems: virtual machine task or process management or task management/control virtual machine task or process management
Publication date: 2010-06-24
Patent application number: 20100162242
Abstract:
A method for managing virtual machines, the method comprising providing a
virtual Advanced Configuration and Power Interface, ACPI, arranged to
interact with the virtual machines, and interacting with a real ACPI
based on interaction between the virtual ACPI and the plurality of
virtual machines. (FIG. 2)
Claims:
1. A method for managing virtual machines, the method comprising:providing
a virtual Advanced Configuration and Power Interface, ACPI, arranged to
interact with the virtual machines; andinteracting with a real ACPI based
on interaction between the virtual ACPI and the plurality of virtual
machines.
2. A method as claimed in claim 1, wherein a device is shared between multiple ones of the virtual machines and, and the method comprises maintaining a respective virtual device state of the device for each of the multiple ones of the virtual machines.
3. A method as claimed in claim 2, comprising determining a real device state for the device from the virtual device states.
4. A method as claimed in claim 3, wherein the virtual device state is a device power or performance state, and determining the real device state comprises determining a maximum power or performance state from the virtual device states.
5. A method as claimed in claim 2, wherein the device is one of a data processing system, on-board device, PCI device, power resource, CPU or other device.
6. A method as claimed in claim 1, comprising interacting with a leaf device based on interaction between the virtual machine monitor and one of the virtual machines to which the leaf device is assigned.
7. A method as claimed in claim 1, comprising providing virtual power buttons that are usable by the virtual machines to send ACPI power button notification events to others of the virtual machines.
8. A method as claimed in claim 1, comprising receiving a first event notification from the real ACPI, and providing a second event notification to one or more of the virtual machines based on the first event notification.
9. A method as claimed in claim 8, comprising determining an event number of the first event notification, and providing the event number to one of the virtual machines in response to a query from that virtual machine.
10. A method as claimed in claim 1, comprising providing to each virtual machine respective virtual ACPI tables corresponding to devices available to the virtual machine.
11. A method as claimed in claim 10, comprising adding one or more wrapper control methods corresponding to the devices available to one of the virtual machines to a virtual Differentiated System Description Table, vDSDT, for the virtual machine.
12. A virtual machine monitor arranged to carry out steps of a method as claimed in claim 1.
13. A data processing system arranged to perform a method as claimed in claim 1.
14. A computer program comprising code form implementing a method as claimed in claim 1.
15. Computer readable storage storing a computer program as claimed in claim 14.
Description:
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001]This U.S. patent application claims the benefit of priority from European patent application no. 08291242.9, filed Dec. 24, 2008.
BACKGROUND OF THE INVENTION
[0002]1. Field of the Invention
[0003]Embodiments of the invention relate to a virtual machine monitor and a method of managing virtual machines. In particular, embodiments of the invention relate to the provision of virtual ACPI functionality to virtual machines.
[0004]2. Description of the Related Art
[0005]Virtualisation may be used, for example, to use a single data processing system, such as a computer, to run multiple operating systems. The operating systems may be different, or may include multiple instances of a single operating system. One reason for using virtualisation is server consolidation, where servers executing in different operating systems are executed using a single data processing system. Such an approach may reduce the cost of implementing the servers as fewer data processing systems may be required, and/or may increase the utilisation of the components of the data processing system.
[0006]Operating systems and/or applications such as server applications that execute in the operating systems may execute normally under virtualisation with little or no modification to the operating systems or applications. Virtualisation software is provided that provides a virtual platform that is a simulation of some or all of the components of the data processing system to the operating systems. Therefore, the operating systems and applications use the "virtual" components of the virtual platform. The virtualisation software (often called a virtual machine monitor, VMM) monitors use of the virtual components of the various virtual platforms and allocates use of the "real" components of the data processing system to the operating systems based on use of the corresponding virtual components. An operating system and the applications executing in it are called a virtual machine.
[0007]Some data processing systems include Advanced Configuration and Power Interface (ACPI) capabilities. ACPI is a standard for device configuration and power management in data processing systems such as computers. ACPI may be used to manage power usage and performance of the components of the data processing system or the system itself. An operating system executing on the data processing system may provide commands to ACPI functions in a BIOS of the data processing system, causing the data processing system to, for example, report on status of components of the data processing system and/or change the power or performance state of the components or the system. The latest version of the ACPI Specification, currently version 3.0b, is incorporated herein by reference and is available from, for example,.
SUMMARY OF THE INVENTION
[0008]Aspects of embodiments of the invention are set out in the claims.
BRIEF DESCRIPTION OF THE DRAWINGS
[0009]Embodiments of the invention will now be described, by way of examples only, with reference to the accompanying drawings, in which:
[0010]FIG. 1 shows an example of a known virtualisation system;
[0011]FIG. 2 shows an example of a data processing system including virtualisation according to embodiments of the invention; and
[0012]FIG. 3 shows an example of a data processing system suitable for use with embodiments of the invention.
DETAILED DESCRIPTION OF EMBODIMENTS OF THE INVENTION
[0013]Embodiments of the invention provide a virtual Advanced Configuration and Power Interface (ACPI) to virtual machines in a data processing system. For example, software may be provided, for example as part of a virtual machine monitor (VMM) or otherwise, that provides ACPI functionality to one or more virtual machines such that the virtual machines may interact with the ACPI functionality. The software may then interact with the ACPI functionality of the data processing system, for example with the ACPI functions in the BIOS, based on the interaction with the virtual machines.
[0014]For example, a device may be shared between a plurality of virtual machines so that some or all of the functionality of that device is available to the virtual machines. Where multiple virtual machines assert using the virtual ACPI functionality that the device should be in a respective power state, then embodiments of the invention determine a power state for the device from the power states as asserted by the virtual machines. For example, the power state chosen for the device may be the maximum power state from the respective power states asserted by the virtual machines. Thus, power savings may be achieved, while the device is maintained in a power state that should not affect performance of the device for the virtual machine that asserted the highest power state. Embodiments of the invention may maintain "virtual" power states for the device in respect of each of the virtual machines. Therefore, the device may be in a first power state while appearing to a virtual machine to be in a different power state that was asserted by that virtual machine. A device may be shared between some or all of the virtual machines on the data processing system.
[0015]FIG. 1 shows an example of a known virtualisation system. A data processing system 100 includes hardware 102. The hardware 102 may comprise, for example, a CPU, on-board devices, expansion cards such as PCI devices and other hardware. A virtual machine monitor (VMM) 104 is executing on the data processing system 100 and monitors three virtual machines 106, 108 and 110. Each virtual machine includes a respective operating system kernel 112 that includes drivers 114. The drivers 114 may interact with the virtual machine monitor 104 to make use of devices and other hardware 102 in the data processing system 100. Each virtual machine 106, 108 and 110 may also include one or more applications 116 executing on the operating system within that virtual machine.
[0016]To make use of at least some of the devices and other hardware within the hardware 102, drivers 114 within a virtual machine 106, 108 or 110 interacts with the virtual machine monitor 104. This may be done transparently to the operating system or drivers within the virtual machine. The virtual machine monitor 104 then interacts with the hardware 102 such that functions or data requested by the virtual machine can be fulfilled. For example, an application 116 in a virtual machine may require data to be sent using a network interface card (NIC) in the hardware 102 of the data processing system 102. The virtual machine monitor 104 presents a virtual NIC to the virtual machine and optionally other virtual machines. The drivers 114 in the virtual machine send appropriate information to the virtual NIC in the virtual machine monitor 104. The virtual machine monitor 104 then provides appropriate information to the real NIC in the hardware 102, thus causing it to send the data required to be sent by the virtual machine.
[0017]FIG. 2 shows an example of a data processing system 200 according to embodiments of the invention. The data processing system 200 includes hardware 202 and a virtual machine monitor 204. The data processing system 200 also includes a BIOS 206 including ACPI routines 208 that may be used by an ACPI driver to interact with the configuration and power state of devices in the data processing system. As described herein, devices may include the data processing system itself as well as CPUs, on-board devices, expansion devices such as PCI cards, power resources and/or other devices. The virtual machine monitor 204 includes an ACPI driver 210 for interacting with the BIOS ACPI routines 208.
[0018]The data processing system 200 shown in FIG. 2 includes three virtual machines 212, 214 and 216 executing on the data processing system 200, although this is not a requirement and there may be zero, one or more than one virtual machine. Each virtual machine includes a kernel 218 including an ACPI driver 220 and may also include one or more applications 222. The virtual machine monitor 204 presents to each virtual machine 212, 214 and 216 a respective virtual Advanced Configuration and Power Interface (vACPI) 224, 226 and 228. That is, the vACPI appears to the associated virtual machine to be a real ACPI and the virtual machine can interact with the vACPI using the ACPI drivers 220. However, interaction between a virtual machine 212, 214 or 216 and its associated vACPI 224, 226 or 228 may or may not be passed to the BIOS ACPI routines 208 in altered or unaltered form by he virtual machine monitor ACPI driver 210.
[0019]The virtual machine monitor 204 may present a virtual ACPI to the virtual machines 212, 214 and 216 by providing virtual ACPI tables to the virtual machines. When the data processing system 200 is first powered on, real ACPI tables are created in memory (not shown) by the BIOS ACPI routines 208 that describe the ACPI capabilities and functions of the system 200. These include a Root System Description Pointer (RSDP) that points to a Root System Description Table (RSDT). The RSDT points to a Fixed ACPI Description Table (FADT), Firmware ACPI Control Structure (FACS) and Multiple APIC Description Table (MADT). The FADT points to a Differentiated System Description Table (DSDT) that includes the Differentiated Definition Block (DDB).
[0020]An operating system executing on a system without virtualisation locates the RSDP in system memory when it is started. The RSDT and other tables can then be located. The operating system typically creates an ACPI Namespace that is a hierarchical structure of all of the ACPI devices of the system that were described in the ACPI tables, particularly the DDB. The DDB contains a list of all of the ACPI devices and includes methods, in ACPI Machine Language (AML), for interacting with the ACPI BIOS routines 208 for controlling the ACPI devices. The operating system also copies these methods, called objects, into the ACPI Namespace, so the ACPI Namespace is a hierarchical list of the ACPI devices along with the methods (objects) that can be used to control and interact with them.
[0021]Embodiments of the invention provide virtual ACPI tables to each of the virtual machines. For example, for a virtual machine, the virtual machine monitor (VMM) 204 may provide a virtual RSDP (vRSDP), a virtual RSDT (vRSDT), a virtual FADT (vFADT), a virtual FACS (vFACS), a virtual MADT (vMADT) and a virtual DSDT (vDSDT). Thus, an operating system in a virtual machine may locate its vRSDP and from there locate the other virtual tables associated with that virtual machine. The VMM 204 may also provide virtual versions of other ACPI tables and data structures as appropriate.
[0022]Information in the virtual ACPI tables may be derived at least in part from the real ACPI tables. For example, input/output (I/O) ports specified in the vFADT may be directly inherited from the FADT. The virtual machine monitor 204 may provide virtual versions of the ACPI hardware register blocks of the data processing system 200, including, if available and defined in the real FADT, a SMI command port, PM1 and PM2 blocks, and General Purpose Event (GPE) 0 and 1 blocks.
[0023]The vFADT may also define an ACPI reset function that can be used by the associated virtual machine, even if the reset feature is not supported by the hardware of the data processing system 200. This is so that an operating system can reset itself and the virtual machine that contains it.
[0024]The virtual machine may also provide a virtual Differentiated System Description Table (vDSDT) containing a Differentiated Definition Block (DDB), and an example of creation of the vDSDT is as follows. On system startup, the real ACPI namespace may be created by the virtual machine monitor. For each object in the real ACPI namespace, the virtual machine monitor 204 determines whether the object is visible to the operating system in the virtual machine for which the vDSDT is being created. For a visible object, a wrapper method is created in the DDB of the vDSDT for that object. The wrapper method interacts with the virtual machine monitor 204 and may invoke methods in the virtual machine monitor 204 as described later.
[0025]The visibility of an object to a virtual machine may be determined based on one or more of the following criteria. A child of an invisible object is invisible. A leaf object (that is, an object associated with a device that is usable by only one of the virtual machines in the data processing system 200) with a non-public name is invisible. A processor object not executing the virtual machine and not usable by the virtual machine is invisible. A device object corresponding to a device "owned" by the virtual machine monitor, such as corresponding to XT-PIC, Programmable Interrupt Timer (PIT) or Real Time Clock (RTC) devices, is invisible. A device object corresponding to a non-PCI device using I/O ports or memory unavailable to the virtual machine is invisible. A device object corresponding to a PCI device located in a hidden PCI slot is invisible. The object may also be invisible due to other criteria as appropriate. Otherwise, the object is visible. Thus, the vDSDT may describe only those objects corresponding to physical devices that are available to the virtual machine, and the hardware described in the vDSDT may comprise some or all of the hardware described in the real DSDT.
[0026]The virtual machine monitor may also include in the vDSDT purely virtual devices such as a virtual XT-PIC, PIT, RTC and/or other devices.
[0027]The wrapper methods in the DDB of a virtual DSDT (vDSDT) are methods for interacting with the virtual machine monitor 204 rather than the BIOS ACPI routines 208. The wrapper methods may be used to obtain an object value for a data object or perform control of a device for a control method object. Therefore, on startup the operating system in a virtual machine creates its ACPI namespace that includes ACPI methods comprising the wrapper methods. When the operating system or an application executing in the virtual machine wishes to interact with a virtual device, it uses the ACPI drivers 220 in the kernel 218 to invoke the appropriate wrapper control method, which in turn invokes an appropriate action in the virtual machine monitor 204. In embodiments of the invention, the ACPI namespace includes ACPI methods that can be used by the operating system in the virtual machine to interact directly with one or more real physical devices in the data processing system. For example, such devices may be those devices that are managed by and/or used exclusively by the operating system and/or applications in that virtual machine.
[0028]In embodiments of the invention, a dedicated virtual I/O port, called the virtual ACPI (vACPI) interface port, is provided between the virtual machine monitor 204 and each virtual machine 212, 214 and 216, so that data can be exchanged between the AML wrapper methods within a virtual machine and the virtual machine monitor 204. When a wrapper method is called in a virtual machine, the object path and any arguments are transformed by an AML interface into a byte stream that is sent to the virtual machine monitor 204 over the vACPI interface port. A result is returned by the virtual machine monitor 204 as a byte stream over the vACPI interface port, and the AML interface transforms the byte stream into an AML data object (such as, for example, an integer, string, buffer, package or reference data object type).
[0029]The virtual machine monitor 204 then performs vACPI namespace object evaluation, whereby the vACPI namespace object that was invoked in the virtual machine is evaluated. This is a three-stage process, comprising preprocessing, object evaluation and post-processing. The preprocessing stage may modify the object path or arguments if required. For example, a _PRS (possible resource settings) object path can be replaced by a _CRS (current resource settings) object path when dynamic resource configuration is not supported.
[0030]The object evaluation stage is optional and is carried out when required. The object evaluation stage may, for example, invoke a real ACPI object for the virtual ACPI object visible in the vACPI namespace of the virtual machine.
[0031]The post-processing stage may modify a result returned by the object evaluation. For example, any objects in or referred to in the result that are invisible in the vACPI namespace are removed from the result, and/or some or all real ACPI objects are substituted with corresponding virtual ACPI objects. For visible PCI devices, for example, real IRQ values may be replaced with virtual IRQ values.
[0032]The data processing system 200 may include one or more leaf devices that are devices assigned to a maximum of one of the virtual machines 212, 214 and 216. The data processing system 200 may also include one or more shared nexus devices, which are devices that are shared between multiple ones of the virtual machines, for example some or all of the virtual machines. A shared nexus device is also a device that has at least two of its children assigned to different virtual machines or at least one of its children is a shared nexus device.
[0033]For a shared nexus device, embodiments of the invention maintain a respective virtual device state for each of the virtual machines that can use the device. The virtual device state for a virtual machine may be, for example, the state that the vACPI objects virtual machine have indicated to the virtual machine monitor 204 to be the state in which the device should be operating. The virtual state for a device may be different between virtual machines. Hence, the virtual machine monitor 204 may resolve the virtual device states into an actual device state, and may also invoke ACPI routines to put the device into the actual device state. The virtual device states may be maintained within the vACPI 224, 226 and 228 of the respective virtual machines 212, 214, and 216. In embodiments of the invention, the form of a device state may depend on the type of associated device. For example, a processor device may have a device state that comprises power, performance and throttling states, whereas another type of device may have a device state that comprises only a power state.
[0034]For example, the virtual machine monitor may use a "maximum" principle when deciding the actual device state from the virtual device state. A virtual device state may comprise, for example, a power state, performance state and/or clock throttling state of the device. The "maximum" principle selects the maximum power state and/or maximum performance state for the device, or the clock throttling state that results in maximum performance of the device. Thus, the device is not put into a state that results in lower power and/or lower performance of the device as requested by any of the virtual machines, while at the same time at least some power saving or performance reduction may be achieved. From the point of view of an operating system in a virtual machine, the device is in the virtual state, even if the actual state of the device is a higher power and/or performance state. In certain embodiments of the invention, where a virtual machine does not support or use functionality for changing the power, performance or throttling state of a device, it is assumed that the maximum power or performance state is required. For example, if an operating system in a virtual machine does not support power management of a CPU, then the CPU will always be in the C0 (highest power) state, irrespective of the virtual power states of the CPU from other virtual machines.
[0035]For example, wrapper methods of the vACPI namespace of an operating system in a virtual machine that attempt to change the state of a device may invoke methods in the virtual machine monitor 204 that change the virtual state of the virtual device and resolve all of the virtual device states into an actual device state. The virtual machine monitor 204 may then set the device state to the actual device state.
[0036]A general device (for example, an on-board device) may have a power state selected from D0, D1, D2 and D3 states. The D0 state is considered to be the maximum power state and D3 the minimum power state. For a CPU device, the device power states are C0, C1, C2 and C3, where C0 is the maximum power state and C3 is the minimum power state. For CPU performance states P0, P1, P2, . . . , P0 is considered to be the maximum performance state and P1, P2, . . . are progressively lower performance states. The clock throttling states T0, T1, T2, . . . include a minimum throttling state (maximum performance) T0 and T1, T2, . . . are states of progressively more throttling of the CPU clock frequency.
[0037]Each device may have one or more associated power resources that are required by the device to provide power in the various power states of the device. A power resource may be on, where it may be providing power to one or more devices, or off. Where a power resource is required by a single device in certain power state, the virtual machine monitor 204 may turn the device off using ACPI control methods when the device is in a power state that does not require the power resource.
[0038]A power resource is shared when it is required by at least one shared nexus device or by at least two leaf devices assigned to different virtual machines. The virtual machine monitor 204 may resolve an actual power state of a shared power resource from respective virtual power states associated with multiple ones of the virtual machines 212, 214 and 216 that can use the device or devices that require the shared power resource. For example, if at least one virtual power resource state requires that the power resource is on, then the actual state for the power resource is on, and if all of the virtual states are off then the actual power state will be off. The actual power state of the power resource is controlled by the virtual machine monitor 204.
[0039]In embodiments of the invention, where there are nexus PCI devices in the data processing system 200, the virtual machine monitor 204 controls the state of the PCI devices using a PCI configuration space. The virtual machine monitor 204 may control the PCI device to enter a particular power state by writing an appropriate command to the Power Management (PM) Control register. The state of the device can be queried using a PM Status register. For each nexus PCI device, the virtual machine monitor 204 maintains respective virtual PM Status and Control registers for the virtual machines 212, 214 and 216 that can use the PCI device. The virtual machine monitor 204 may then resolve the states in the virtual PM Control registers into an actual power state in a manner similar to that described above in respect of other devices.
[0040]Embodiments of the invention may also allow a virtual machine operating in a normal, working system state (S0) to suspend to a sleep state (S1, S2, S3 or S4) or power off (S5). Thus, a virtual system state of each of the virtual machines 212, 214 and 216 may be maintained by the virtual machine monitor 204, for example in each vACPI 224, 226 and 228. The virtual machine monitor 204 may also set the system state of the data processing system 200 to be the maximum of the virtual system states, where S0 is the highest state and S5 is the lowest state. A virtual machine entering a sleep state (S1, S2, S3 or S4) or power off state (S5) may instruct the devices used by that virtual machine to enter a lower power state, and thus the virtual state of these devices is set to a lower power state. If these devices are shared with one or more other virtual machines, then the virtual machine monitor resolves the actual state of the device as indicated above.
[0041]Embodiments of the virtual machine monitor may power off the data processing system into the power off (S5) state when, for example, all virtual machines are in the power off state or a selected one or more of the virtual machines are in the power off state.
[0042]A virtual machine in a sleep state (S1, S2 or S3) can awaken following an external event from either a power button or a waking device used by the virtual machine. In certain cases it may not be possible to wake from a waking device (a device that can wake the virtual machine) if, for example, the waking device was not put into a low power state when the virtual machine entered a sleep state, for example if the waking device is shared with other virtual machines. In this case, a waking mechanism of the device may not be operational. When a waking device indicates that a waking event has occurred, one or more virtual machines in a sleep state may be woken depending on which virtual machines use the device.
[0043]In certain embodiments, virtual power buttons may be provided that are usable by one or more of the virtual machines to control the power state of one or more other virtual machines, for example to wake another virtual machine from a sleep state or to power on to a working state (S0).
[0044]In certain embodiments, a device may notify an operating system of the data processing system 200 of a notification event. That is, the device notifies the operating system of an event relating to the device, such as, for example, change of a battery status, change of a thermal zone status or a power button press. The event triggers an appropriate real ACPI namespace object for the device. The virtual machine monitor 204 provides a virtual I/O port, the vACPI notification port, between the virtual machine monitor 204 and each of the virtual machines which can use a device for which a notification event can occur. The vACPI namespace for a virtual machine may include one or more objects that are General Purpose Event (GPE) handler methods. When a notification event occurs for a device, the event is detected by the virtual machine monitor 204. The virtual machine monitor 204 then emulates a GPE event to the virtual machine. The appropriate GPE handler method is invoked in the virtual machine. The handler method may then issue a request to the virtual machine monitor 204 using the vACPI notification port, and the virtual machine monitor 204 responds by sending the event number that is provided to or obtained by the virtual machine monitor 204 to the handler method. In this way, for example, an ACPI notification event is converted to an equivalent vACPI notification event.
[0045]FIG. 3 shows an example of a data processing system 300 that is suitable for use when implementing embodiments of the invention. The data processing system 300 includes a central processing unit (CPU) 302 and a main memory 304. The system 300 may also include a permanent storage device 306, such as a hard disk, and/or a communications device 308 such as a network interface controller (NIC). The system 300 may also include a display device 310 and/or an input device 312 such as a mouse and/or keyboard.
[00464748]Embodiments of the invention are not restricted to the details of any foregoing embodiments. Embodiments of the invention extend that fall within the scope of the claims.
Patent applications by Vladimir Grouzdev, Paris FR
Patent applications by VirtualLogix SA
Patent applications in class VIRTUAL MACHINE TASK OR PROCESS MANAGEMENT
Patent applications in all subclasses VIRTUAL MACHINE TASK OR PROCESS MANAGEMENT
User Contributions:
Comment about this patent or add new information about this topic:
|
http://www.faqs.org/patents/app/20100162242
|
CC-MAIN-2014-52
|
en
|
refinedweb
|
SelectAnalogInput(port);delay(MUXSETTLETIME);StartADConversion();delay(ADCONVERTTIME);return(readADC();
It's common for a microcontroller to have many fewer A-D converters than it has analog pins, with an analog multiplexer in between the pins and the A-D converter.
The STM32F4 actually has three separate A2D converters, each with an 8:1 mux (and some complex connections to pins that I didn't look into very much.)
So you could read up to three Analog inputs without messing separately with the muxes (except at setup time), or you'll have to do something similar to what the Arduino core SW does.
In general, it is a useful technique in learning a new processor/board to try to COPY (only be sure to call it "port") an existing familiar set of software. So the question shouldn't be "how do I read analog inputs on STM32F4?", but "How do I duplicated the AnalogRead() function on STM32F4?" It may seem very similar, but the process is different. Instead of starting from scratch, you get to look at the steps that the arduino code uses, and figure out whether they have equivalents on the STM. Since the individual steps are smaller, they may be easier to understand. Of course, you end up needing to understand both the existing Arduino code AND the new processor code. But ... it's good for you!
int analogRead(uint8_t pin){ uint8_t low, high;#if defined(__AVR_ATmega1280__) || defined(__AVR_ATmega2560__) if (pin >= 54) pin -= 54; // allow for channel or pin numbers#else if (pin >= 14) pin -= 14; // allow for channel or pin numbers#endif #if defined(__AVR_ATmega32U4__) pin = analogPinToChannel(pin); ADCSRB = (ADCSRB & ~(1 << MUX5)) | (((pin >> 3) & 0x01) << MUX5);#elif defined(ADCSRB) && defined(MUX5) // the MUX5 bit of ADCSRB selects whether we're reading from channels // 0 to 7 (MUX5 low) or 8 to 15 (MUX5 high). ADCSRB = (ADCSRB & ~(1 << MUX5)) | (((pin >> 3) & 0x01) << MUX5);#endif // set the analog reference (high two bits of ADMUX) and select the // channel (low 4 bits). this also sets ADLAR (left-adjust result) // to 0 (the default).#if defined(ADMUX) ADMUX = (analog_reference << 6) | (pin & 0x07);#endif // without a delay, we seem to read from the wrong channel //delay(1);#if defined(ADCSRA) && defined(ADCL) // start the conversion sbi(ADCSRA, ADSC); // ADSC is cleared when the conversion finishes while (bit_is_set(ADCSRA,;#else // we dont have an ADC, return 0 low = 0; high = 0;#endif // combine the two bytes return (high << 8) | low;}
It's common for a microcontroller to have many fewer A-D converters than it has analog pins, with an analog multiplexer in between the pins and the A-D converter. The Atmel AVR used on the Arduino only has ONE A-D converter, fronted by a 16:1 analog multiplexer (some of whose inputs are not connected)
I am glad you are making great progress!
while (1) {GPIO_ToggleBits(GPIOD, GPIO_Pin_15);/***************************Main Winding*******************************************************/ SINWt= sinf(n*Wt); IMAINREF= 340*SINWt; ADC_SoftwareStartConv(ADC1); while(ADC_GetFlagStatus(ADC1, ADC_FLAG_EOC) == RESET); MAINHALL = *ADC1_DATA*2950/4095 - MAINOFFSET; IMAIN=(34*MAINHALL/40-2125); MAINERR=IMAIN-IMAINREF; if(MAINERR<=-2){ GPIO_ResetBits(GPIOD, GPIO_Pin_9); } if(MAINERR>=2){ GPIO_SetBits(GPIOD, GPIO_Pin_9); }/*************************Aux Winding****************************************************/ COSWt =cosf(n*Wt); IAUXREF= 340*COSWt; ADC_SoftwareStartConv(ADC2); while(ADC_GetFlagStatus(ADC2, ADC_FLAG_EOC) == RESET); AUXHALL = *ADC2_DATA*2950/4095 - AUXOFFSET; IAUX=(34*AUXHALL/40-2125); AUXERR=IAUX-IAUXREF; if(AUXERR<=-2){ GPIO_ResetBits(GPIOD, GPIO_Pin_13); } if(AUXERR>=2){ GPIO_SetBits(GPIOD, GPIO_Pin_13); }/***************************************************************************************/ ADC_SoftwareStartConv(ADC3); while(ADC_GetFlagStatus(ADC3, ADC_FLAG_EOC) == RESET); VIN = *ADC3_DATA; int CHANGEHZ = (VIN*22/3390)-(FREFINIT-25); if(CHANGEHZ<=-1 || CHANGEHZ>=1){ GPIO_ResetBits(GPIOD, GPIO_Pin_13|GPIO_Pin_11|GPIO_Pin_9); VIN = *ADC3_DATA; FREFINIT= (VIN*22/3390); FREFINIT+= 25; if(FREFINIT>=47) {FREFINIT=47;} if(FREFINIT<=25) {FREFINIT=25;} FREF= FREFINIT; Wt = 157.0796327*FREF/1000000; } }
here is a link to some hardware designs
|
http://forum.arduino.cc/index.php?topic=106477.msg822085
|
CC-MAIN-2014-52
|
en
|
refinedweb
|
Type: Posts; User: MEversbergII
Oh. The book hadn't mentioned that yet, thanks. Do I just call it at the end?
To be honest, I'm still using the template I built up while in C++ 1 and 2 classes. During C++ 2 I didn't...pay...
I made a program to help me learn about Dynamic Arrays just now. The code:
#include <iostream> // std input and output
#include <string> // Allow string usage
#include <cmath> //...
As I've said, I've already tried Valve support.
M.
My Steam client crashed the other day, with reason: An unhandled win32 exception occurred in Steam.exe. Asked if I'd like to debug, and I hit no.
Kept doing this, so I deleted steam and...
All I can say is: I am afraid of my future.
I think.
M.
What happened to my reply in this thread?
M.
My professor went over these in class. I was wondering if they where commonly used? I'm no expert but it sounds like there would be clearer ways to do things without a recursion.
Could just be...
Pointers to objects of class Student? So, they are full objects? I thought pointers where holders for a memory address only. Or, will they, when I create objects in my main, be open to be assigned...
A class is a structure that contains variables and functions to perform manipulations on data.
Objects are creations with the data type of the defined class.
class:
class Thing
{
\ then /...
M.
I got this problem from my professor that has something to do with linked lists. His message goes:
The code he's talking about is thus:
#include <iostream>
#include <string>
Clearly, you would select "Other".
M.
I see. So I mixed that up? The way it sounded was that it essentialy achieves the same goal of being able to change data without a return _____;.
M.
I decided to go on ahead as a Computer Programming major.
M.
Germanic Pagan.
Or in a word, Asatru.
M.
That's not what I smell :|
M.
I was reading on pointers in a PDF I found on the tubes, and I noticed that a code example uses pointers to do a pass by reference.
In C++ I last spring, when we wanted to pass a value by...
To put it poetically:
Be not a cobbler nor a carver of shafts,
Except it be for yourself:
If a shoe fit ill or a shaft be crooked;
The maker gets curses and kicks.
--The Havamal
I graduated high school in 2006. I started college at a local community college that fall. My major was and still is Computer Science.
I took the major basically out of some notion that it would...
Too much?
M.
If you could pick any two languages for a up and coming programmer to really get to know, what would they be, and why?
I figure C++ is a must it's a wide spread, multi-purpose language. The...
Never mind I found it.
M.
I'm working on my Associates in Computer Science, and the school has required me to take a course in Computer Architecture. It's an online course ( sadly ) that seems to center around the PIC Micro...
And we programmers have our own definition of arguments too ;)
Welcome to the forum, how's Lebanon? I've an aunt from there -- considered visiting though it sounds a tad hostile at the moment :|....
This is a custom built PC I made a bit over a year ago. I came across some unexpected costs this semester for a computer architecture class and since this one is a spare, I'm selling it off to pay...
|
http://forums.codeguru.com/search.php?s=e45851be5da95c8b68def690bbd54357&searchid=5798457
|
CC-MAIN-2014-52
|
en
|
refinedweb
|
If you have ever played with Django's class-based views, you will know how tangled the view code can be when it comes to customizing the view. In recent projects when working with class-based views, I tend to look at the Django source code directly to determine the best place to put specific code snippets. A commenter on my previous class-based views article didn't understand why I couldn't just modify the object, rather than creating the object from an uncommitted form, then modifying the object, then finally saving it to the database. Class-based views tend to bring with it some complexities, since there is a specific order of operations, much like in math.
This article here will explain this order of operations when it comes with class-based views, and how you should go about overriding specific callables in the class. Lets start with my most used class-based view DetailView. This is how a request is handled when a DetailView class is being used:
- View.as_view.view(request, *args, **kwargs)
- This is the main entrance point for the view code, as_view() is the class constructor, and assigns self as the generated class with the keyword arguments sent in from the urls.py. It returns the output of the next function in this list.
- View.dispatch(request, *args, **kwargs)
- This function sets up the class instance variables self.request, self.args, and self.kwargs. This function checks to see if the request.method is in the list of available methods for this particular view, and calls it's function.
- BaseDetailView.get(request, *args, **kwargs)
- The HTTP method function called from View.dispatch. This function assigns the instance variable of self.object by calling get_object(). It also obtains the context by calling get_context_data(object=self.object). Finally this function returns the result from render_to_response(context).
- SingleObjectMixin.get_object(queryset=None)
- This function has the task of obtaining a model instance by looking at the various resources available, such as variables from the URL. It's query can be limited by sending it a different queryset. If queryset is None, which is normally the case, then it will call get_queryset() to obtain the queryset to use from the current class. It then checks for either the pk or slug from the self.kwargs dict, which is normally set from the URL. It will then apply a filter on the queryset using both the PK and SLUG, if they are available. It will use both. Then it does a queryset.get() and returns the object found back to BaseDetailView.get().
- SingleObjectMixin.get_queryset()
- This provides the initial queryset for the DetailView. It basically checks to see if your subclass has a queryset model instance variable and uses that, otherwise provides an appropriate error message about being misconfigurated. It is very handy to override this to limit what users can and cannot access.
- SingleObjectMixin.get_context_data(**kwargs)
- Here is where a context data Python dictionary is generated for the view. First it takes in all the kwargs and uses it as the initial dictionary. Since the get() function provides the context with object=self.object, it generates an initial dictionary with an object key, similar to how function-based generic views worked. This is also where it adds the context for the model instances object name, such todo or entry. It then returns the dictionary back to get().
- TemplateResponseMixin.render_to_response(context, **response_kwargs)
- Almost all class-based views use this Mixin to render the final response. This function calls response_class(request=self.request, template=self.get_template_names(), context=context, **response_kwargs). It then returns the result from this back to get(). You can override the response_class in your subclass if you choose, the default is TemplateResponse.
- TemplateResponseMixin.get_template_names()
- This functions checks for and uses the template_name variable which should be in the class instance. It returns a list, with one element being the template_name.
There you have an entire class-based view, right from the request, down to where it renders the template for the end-user's browser. A great use of response_class would be to return different document types, such as JSON, XML, or even PDF generated data. You just need to create your own response class and make sure it accepts what the TemplateResponseMixin sends to it as variables.
Okay, now that we got a fairly simple class-based view out of the way, yes this is a more simpler one. Lets tread into CreateView, which uses two HTTP methods and therefore branches conditionally.
- View.as_view.view(request, *args, **kwargs)
- Does the same thing as the previous view mentioned, since both view types subclass View.
- View.dispatch(request, *args, **kwargs)
- Same as before, since we haven't left the View class yet, this will dispatch between get() and post() this time around.
- BaseCreateView.get(request, *args, **kwargs)
- This method obviously works much differently than the one included with DetailView. Instead, during the get request, it sets self.object to None so that the rest of the class knows that we are going to be creating a new object. It then returns the super of this, which we continue...
- ProcessFormView.get(request, *args, **kwargs)
- This function first gets the form_class by calling get_form_class(). It then takes the returned form_class and hands it over to get_form(form_class). The context is generated the same way it is in the previous view mentioned, get_context_data(form=form). Finally, the context is handed over to render_to_response(context), which is eventually sent to the browser.
- ModelFormMixin.get_form_class()
- This function is used to get the class which is used to render the form in the browser. This should return a valid ModelForm with the Model your attempting to create an instance of. By default it obtains the class from the class instance variable form_class. If this is not set, then it attempts to use the provided Model as the form to display in the browser.
- FormMixin.get_form(form_class)
- This function uses the form_class and creates an instance of it using the result from get_form_kwargs() as the keyword variables to instantiate the form class.
- FormMixin.get_form_kwargs()
- You can use this to customize the form instance, but by default it does plenty of what you'll need. It uses the result from calling get_initial() to provide the form with some initial data, and place the data and files into the form instance if the HTTP method is either POST or PUT.
- FormMixin.get_initial()
- This does a copy of the class instance's variable of initial. You can override this function to provide some Python logic to dynamically generate a dictionary for the initial data.
- ModelFormMixin.get_context_data(**kwargs)
- This function does the exact same thing as the get_context_data() for SingleObjectMixin.
- TemplateResponseMixin.render_to_response(context, **response_kwargs)
- This View also uses the same function as the previous view.
- SingleObjectTemplateResponseMixin.get_template_names()
- Unlike the DetailView, this function does a little more, which includes using a suffix such as _form for the template name.
- BaseCreateView.post(request, *args, **kwargs)
- Here is where we start the POST method, this is only called when the user actually submits the form. Firstly, it also sets self.object to None like the get() method does. Then it calls super and returns the result to the browser.
- ProcessFormView.post(request, *args, **kwargs)
- For the most part, this is very similar to get(), it obtains a form_class, and then a form to use. Once the form is obtained, this is where things change. The function then checks the result of form.is_valid(), very standard Django stuff from normal views. If the form is valid, it calls and returns the result of form_valid(form). If the form is not valid, it calls and returns the result of form_invalid(form).
- ModelFormMixin.form_valid(form)
- If the form is valid, this function is called and it sets the variable self.object to the return of form.save(), again very standard Django form handling stuff. Once the object is returned, it does a super.
- FormMixin.form_valid(form)
- This function does one thing only, and that is return an HttpResponseRedirect object, which is the result of get_success_url().
- ModelFormMixin.get_success_url()
- The task of this function is to return a valid URL to direct the user to after the form is valid. First it tries to use self.success_url if it is available. If it is not available, then it attempts to use self.object.get_absolute_url(), and if this fails, returns a misconfigured error. You can override this in your class to generate the URL dynamically using Python code.
- FormMixin.form_invalid(form)
- This is only called if the form was not valid, it basically returns the form back to the template and provides appropriate validation errors. An idea would be to override this to set other variables on self.object and attempt to re-validate. An example is setting the request.user on an object's field. I personally override form_valid() and it makes more sense to me.
As you can see from these two very different view classes, the Django class-based views system is very modular and expandable. It comes with many base classes, and plenty of mixins to make using class-based views work for almost any scenario. To transition a customized function-based view over to a class-based view is pretty easy as you may think:
def function_view(request): # Do something fancy here. if request.method == 'POST': # Do something with the POST data. elif request.method == 'GET': # Do something with GET. else: raise Http404 # Transition to a class-based solution: class ClassView(TemplateResponseMixin, View): template_name = "custom_view.html" def get_context_data(self, **kwargs): # This allows this View to be easily subclassed in the future to interchange context data. context = kwargs return context def dispatch(self, request, *args, **kwargs): # Do something fancy here. return super(ClassView, self).dispatch(request, *args, **kwargs) def get(self, request, *args, **kwargs): # Do something with GET. context = self.get_context_data(**kwargs) return self.render_to_response(context) def post(self, request, *args, **kwargs): # Do something with the POST data. context = self.get_context_data(**kwargs) return self.render_to_response(context)
Made correction from commenter Visa, thanks!
Most POST data in Django will be through forms, so to limit all your work, just subclass the Form class-based views to minimize the work overhead. The above can definitely apply to custom GET requests where you are returning calculated scientific data. If you need more than one view to calculate different datasets, using classes will be much easier, since you just need to create a base class and subclass this for each calculation/dataset. Class-based views allow you to make your view code much more modular and to be more DRY when developing custom views.
Although at first class-based views might seem like more work, they will pay off in very large projects which use the same logic in multiple views. You can place a large chunk of your business logic into a custom Mixin, and don't need to worry about if you need to pass around a request variable or other data. Since the business logic will be part of your class when the Mixin is applied, it will have immediate access to the request and kwargs, and other data. This can lower the overall work and better organize your code and logic.
|
http://pythondiary.com/blog/Nov.11,2012/mapping-out-djangos-class-based-views.html
|
CC-MAIN-2014-52
|
en
|
refinedweb
|
XML Software
Showing page 2 of 3.
cat-RSSMaker
A tool to create and any kind of RSS-FEED e.g. PodCasts0 weekly downloads
HR-CRM
A Customer Relationship Management (CRM) webapplication that uses OASIS CIQ standards for administrating organisations, persons, memberships, as well as contact information.0 weekly downloads
Schau-Schaut
German: Schau-Schaut-projekt: lädt, organisiert und liest News die/über Schau-Schaut-Filme (tempoär:). English: bluebong-project: loads, organize and read news from (temporary adress:) weekly downloads
Appserver_S
A Client Server database management system to manage execution of repetitive data transfer over un-realiable networks. All SQL is stored server side and pushed to clients on demand.0 weekly downloads
Ajax XMLDB Connector
XML Based MySQL Interface with Ajax Frontend.0 weekly downloads
XML Source View
XML Source View provides a source viewer for XML files, written purely in XSLT. It includes syntax highlighting for popular XML languages like XHTML, XSL-FO and SVG as well as highlighting of a user selected namespace.3 weekly downloads
xsdTransformer
xsdTransformer generates xforms, xhtml, code, scripts and desriptors (e.g. xForms based xhtml sites) based on xml schemas.1 weekly downloads
ErgoTools
A collection of software utilities for owners of the Daum ergo_bike Premium 8i indoor bike and compatible devices.0 weekly downloads
MEX
MEX applies the international standards EAD, EAC and METS to edit structured Internet presentations of online finding aids including digital reproductions. For newer developments cf. incl. english documentation and contact.1 weekly downloads
GPX Track Editor
Java program to view and edit tracks in GPX format. Results can be saved in GPX format or stored into a database with JDBC connection
Example Code Manager
Example Code Manager is an Eclipse-Plugin for managing sample code and sample data from repositories around the world. Mainly subversion repositories, but support for flat file or CVS repositories are planned.0 weekly downloads
HL72XML.CPP Transformer
HL7 to XML Transformation toolkit. Using HL7 v2 XML
LoroDux
LoroDux is planned to be a OSM and Java based multi platform navigation software for mobile devices for blind and visually impaired persons. Release of first alpha version was September 10,NN Parser
Programm zum Konvertieren des FNN XML Formats in CSV Format
open geo coordinates database
At the current state, opengeodb provide geo coordinates and several other data (city name, zip) mainly for the german speaking area.13 weekly downloads
JOMM
JOMM is a "Java Object Model Mapping" Framework about generic persistence mapping between different worlds of models, such as Java model classes, SQL relational schema or XML
|
http://sourceforge.net/directory/development/data-formats/xml/natlanguage%3Agerman/?sort=rating&page=2
|
CC-MAIN-2014-52
|
en
|
refinedweb
|
cel.h File Reference
#include "behaviourlayer.h"
#include "celtool.h"
#include "physicallayer.h"
#include "propclass.h"
#include "tools.h"
Go to the source code of this file.
Detailed Description
CEL.
This header file essentially causes most of the CEL header files to be included, providing a convenient way to use any feature of CEL without having to worry about including the exact right header file(s).
Definition in file cel.h.
Generated for CEL: Crystal Entity Layer 2.0 by doxygen 1.6.1
|
http://crystalspace3d.org/cel/docs/online/api-2.0/cel_8h.html
|
CC-MAIN-2014-52
|
en
|
refinedweb
|
Go to the IRremote library folder -> look for IRremoteInt.h -> right click and select EDIT -> find WProgram.h and change it to Arduino.h -> Save file -> restart the arduino software.
#include <IRremote.h>#include <IRremoteInt.h>#include <Servo.h>int IR_PIN = 2; // IR Sensor pinIRrecv irrecv(IR_PIN);decode_results IRResult;int pos = 0;unsigned long IRValue;Servo myservo1;Servo myservo2;Servo myservo3;void setup(){ myservo1.attach(9); myservo2.attach(10); myservo3.attach(11); pinMode(IR_PIN, INPUT); irrecv.enableIRIn();}void loop(){ if (irrecv.decode(&IRResult)) { IRValue = IRResult.value; if ((IRValue == 1) || (IRValue == 2049)) { //If 1 is pressed } if ((IRValue == 2) || (IRValue == 2050)) { //If 2 is pressed } if((IRValue == 3) || (IRValue == 2051)) { //If 3 is pressed } delay(1000); irrecv.resume(); }}
when I verify this i geta lot of compiling error.
I have the IRremote library installed in the right place, although it is an older version that was not compatible with 1.0+.I pasted your code, and got a number of errors. I edited IRremoteInt.h, changed WProgramh to Arduino.h, and all the errors went away.Perhaps you need to do that, too.
Did you go into the .h file and actually change WProgram.h to Arduino.h?
Somewhere or other, a dot got lost.I change WProgram.h to Arduino.h, in an include statement.If you still have errors, you need to post them.
In file included from KI_Alat_Lipat_Baju.ino:2:D:\Arduino Libraries\libraries\IRremote/IRremoteInt.h:87: error: 'uint8_t' does not name a typeD:\Arduino Libraries\libraries\IRremote/IRremoteInt.h:88: error: 'uint8_t' does not name a typeD:\Arduino Libraries\libraries\IRremote/IRremoteInt.h:89: error: 'uint8_t' does not name a typeD:\Arduino Libraries\libraries\IRremote/IRremoteInt.h:92: error: 'uint8_t' does not name a type
i dont get it, how to do that?
Quotei dont get it, how to do that?Perhaps, if you used capital letters, like a big boy...Use any text editor.
#include <IRremote.h>#include <IRremoteInt.h>#include <Servo.h>#include "Arduino.h"
|
http://forum.arduino.cc/index.php?topic=154191.msg1156270
|
CC-MAIN-2014-52
|
en
|
refinedweb
|
{-# LANGUAGE CPP, RankNTypes, GADTs #-} {-# OPTIONS_GHC -fno-warn-unused-imports #-} -- Copyright .Patch.Split ( Splitter(..), rawSplitter, noSplitter, primSplitter ) where import Data.List ( intersperse ) import Darcs.Witnesses.Ordered import Darcs.Witnesses.Sealed import Darcs.Patch.Patchy ( ReadPatch(..), ShowPatch(..), Invert(..) ) import Darcs.Patch.Prim ( Prim(..), FilePatchType(..), canonize, canonizeFL )' False) str of Just C(x y) -> Maybe (B.ByteString, B.ByteString -> Maybe (FL Prim C(x y))) doPrimSplit (FP fn (Hunk 2 return (hunk before before' +>+ hunk before' after' +>+ hunk after' after)) where sep = BC.pack "==========================" helptext =" , "" ] hunk :: [B.ByteString] -> [B.ByteString] -> FL Prim C(a b) hunk b a = canonize (FP fn (Hunk :: Splitter Prim primSplitter = Splitter { applySplitter = doPrimSplit, canonizeSplit = canonizeFL }
|
http://hackage.haskell.org/package/darcs-2.5.2/docs/src/Darcs-Patch-Split.html
|
CC-MAIN-2014-52
|
en
|
refinedweb
|
Results 1 to 2 of 2
I'm trying to configure my Bind9 Server to resolve hostnames, lets say *.example.com (I'm using another domain with name servers setup, it's hosted at hostgator). example.com has public DNS entries ...
- Join Date
- Sep 2007
- 24
Bind9 DNS Question
example.com has public DNS entries like as server1.example.com, server2.example.com, server3.example.com, etc configured on that domain.
From inside my network, I want to be able to resolve server1.example.com to an IP that I specify, I also want to be able to add in my own domains, like voip.example.com to an IP (which isn't set on the public website) on my network. If an entry doesn't exist, like say server2.example.com on my DNS server, then it will resolve the IP address normally (Using what ever DNS server my DNS server uses).
So far, I have it working, but I must specify every domain. It is only resolving hostnames that I've specified for this domain. It doesn't resolve server2.example.com for example, unless I tell it how. I can't even ping example.com unless I specify the IP.
Basically: If Bind9 has entry for domain then give IP or else, use the name server specified (or the server's DNS server) to resolve the IP.
Code:
$ORIGIN example.com. ; designates the start of this zone file in the namespace $TTL 1h ; default expiration time of all resource records without their own TTL value ; ; BIND data file for example.com ; @ IN SOA ns.nsforexample.com. example.com. ( 2012112726 ; Serial 7200 ; Refresh 120 ; Retry 2419200 ; Expire 604800) ; Default TTL ; @ IN NS ns.nsforexample.com. @ IN NS ns2.nsforexample.com. server1 IN A 192.168.1.10 voip IN A 192.168.1.20
What you want to do is pretty common, you have a private DNS server that serves your own domains but is only visible on your LAN. All your on-LAN computers are then configured to use your internal DNS server (usually by DHCP, but you can do it statically).
In your internal DNS server's DNS configuration add 'forwarder' lines to the named.conf. This will tell it which DNS servers to look at for domains it doesn't control.
There's a bit of an explanation about how its done here: user #126863 - see
|
http://www.linuxforums.org/forum/servers/202618-bind9-dns-question.html
|
CC-MAIN-2014-52
|
en
|
refinedweb
|
#include <X11/extensions/XInput.h>-grab time is set to the time at which the button..PKey(3)
|
http://www.makelinux.net/man/3/X/XGrabDeviceButton
|
CC-MAIN-2014-52
|
en
|
refinedweb
|
I have a input file of 20 whole numbers, the user enters a number and it searchs the array for a match, if a match is found it tells you the number enterd was found and displays the number. What i need to be able to do is add a counter of how many times that number appears in the infile, currently it stops as soon as the number is found.
Code:#include <iostream> #include <fstream> #include <string> #include <cassert> using namespace std; const int MAXSIZE = 20; void get_score(int code[], int& count); int find_match(int inCode, int code[], int count); int main() { int code[MAXSIZE] = {0}; int inCode = 0, count = 0, location = 0; int total = 0; get_score(code, count); cout << "Enter a score: "; cin >> inCode; while(inCode != -1) { location = find_match(inCode, code, count); if(location < MAXSIZE) { cout << "Code: " << total << " "<< code[location] << endl; } else cout << "Code not found\n"; cout << "Enter a Code: "; cin >> inCode; } return 0; }//end main //------------------------------------------ void get_score(int code[], int & count) { ifstream fin; fin.open("pricelist.txt", ios::in); fin >> code[MAXSIZE]; assert(!fin.fail()); int c; while(fin >> ws && !fin.eof()) { if(count < MAXSIZE)//still room { fin >> code[count]; fin >> ws; count++; } else { fin >> c; fin >> ws; cout << "No room for " << c << endl; } } } //search int find_match(int inCode, int code[], int count) { int total = 0; int sub = 0; bool found = false; while(sub < count && ! found) { if(inCode == code[sub]) found = true; else sub++; } if(found) return sub; else return MAXSIZE; }
|
http://cboard.cprogramming.com/cplusplus-programming/59307-question.html
|
CC-MAIN-2014-52
|
en
|
refinedweb
|
On Sun, 25 May 2008 20:07:24 +0300, Adrian Bunk <[email protected]>. :-)
Anyway, here is a patch to fix the build failure. Thank you for reporting.
------------------------------------------------------
Subject: [PATCH] MIPS: Fix CONF_CM_DEFAULT build error
From: Atsushi Nemoto <[email protected]>
Signed-off-by: Atsushi Nemoto <[email protected]>
---
diff --git a/include/asm-mips/pgtable-bits.h b/include/asm-mips/pgtable-bits.h
index 60e2f93..8a75677 100644
--- a/include/asm-mips/pgtable-bits.h
+++ b/include/asm-mips/pgtable-bits.h
@@ -134,6 +134,6 @@
#define _PAGE_CHG_MASK (PAGE_MASK | _PAGE_ACCESSED | _PAGE_MODIFIED |
_CACHE_MASK)
-#define CONF_CM_DEFAULT (PAGE_CACHABLE_DEFAULT>>_CACHE_SHIFT)
+#define CONF_CM_DEFAULT (_page_cachable_default >> _CACHE_SHIFT)
#endif /* _ASM_PGTABLE_BITS_H */
|
http://www.linux-mips.org/archives/linux-mips/2008-06/msg00001.html
|
CC-MAIN-2014-52
|
en
|
refinedweb
|
Norbert Unterberg <nepo <at> gmx.net> writes:
> There remains an issue with the DIFFs. I have setup a diff.exe from the
> GnuWin32 tools which works with the command line from
> mailer.conf.example. The diffs are added to the commit mails.
> BUT: Each line from the diff contains an extra CRLF pair.
I saw the same extra lines in the emailed diffs. I don't know if this is
the correct fix or not, but changing the popen2/popen4 modes to text
instead of binary worked for me...
regards,
markt
Index: mailer.py
===================================================================
--- mailer.py (revision 12828)
+++ mailer.py (working copy)
@@ -107,9 +107,9 @@
cmd = argv_to_command_string(cmd)
if capturestderr:
self.fromchild, self.tochild, self.childerr \
- = popen2.popen3(cmd, mode='b')
+ = popen2.popen3(cmd, mode='t')
else:
- self.fromchild, self.tochild = popen2.popen2(cmd, mode='b')
+ self.fromchild, self.tochild = popen2.popen2(cmd, mode='t')
self.childerr = None
def wait(self):
@@ -123,7 +123,7 @@
def __init__(self, cmd):
if type(cmd) != types.StringType:
cmd = argv_to_command_string(cmd)
- self.fromchild, self.tochild = popen2.popen4(cmd, mode='b')
+ self.fromchild, self.tochild = popen2.popen4(cmd, mode='t')
def wait(self):
rv = self.fromchild.close()
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
Received on Sun Jan 23 22:33:14 2005
This is an archived mail posted to the Subversion Dev
mailing list.
|
http://svn.haxx.se/dev/archive-2005-01/0908.shtml
|
CC-MAIN-2014-52
|
en
|
refinedweb
|
Vass Lee wrote:
Hi All,
I am new guy for JAVA. I want to check my String Array is sorted or not ? how can i achieve this ?.Any one give some sample ?
import java.util.*;
import java.io.*;
import java.lang.String;
public class SortWords{
public static void main(String args[]){
String[] str = {"chanan","tapan","Amar","santosh","deepak"};
}
}
Here i want to check str is sorted or not ?.Values are stored in alphabetical ascending order or not ?
Thanks,
Vass Lee
Jesper de Jong wrote:There is a smarter way to check this than sorting the array yourself and comparing it to the original array. How do you think you could do this without sorting the array yourself?
Akhilesh Trivedi wrote:How is that Jesper?
Matthew Brown wrote:
Akhilesh Trivedi wrote:How is that Jesper?
Well think about it: {"chanan","tapan","Amar","santosh","deepak"} - did you need to put that in alphabetical order to tell that it wasn't already in order? Or was there an easier way to tell?
Akhilesh Trivedi wrote:I had to sort it. I can not think of any easier way... than to implement sorting like algorithm and do the check inside itself.
Matthew Brown wrote:
Akhilesh Trivedi wrote:I had to sort it. I can not think of any easier way... than to implement sorting like algorithm and do the check inside itself.
If I game you a thousand words in a random order, would you have to sort them all in order to tell that they weren't already sorted? I'm not talking about writing a program - I'm talking about doing it by sight.
(If you can sort a thousand words that quickly in your head, I'm impressed!
|
http://www.coderanch.com/t/560755/java/java/check-StringArray-sorted
|
CC-MAIN-2014-52
|
en
|
refinedweb
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.