text
stringlengths 454
608k
| url
stringlengths 17
896
| dump
stringlengths 9
15
⌀ | source
stringclasses 1
value | word_count
int64 101
114k
| flesch_reading_ease
float64 50
104
|
---|---|---|---|---|---|
Round-Tripping DatadiagramML Files
You can save Microsoft Visio drawings, stencils, and templates as XML files and then open them again in Visio without any loss of information. This is called round-tripping.
However, after Visio opens an XML file that was created in another application or opens a DatadiagramML file that was edited in another application and then resaves the document as a DatadiagramML file, the files are not guaranteed to be identical, although there should be no loss of data.
The following cases illustrate some differences that occur when an XML file that is created in another application or when a DatadiagramML file that is edited in another application is saved in Visio.
White space is not preserved in DatadiagramML files except within elements whose data type is String, including Text elements. Visio also preserves white space for any Solution XML or unknown XML because of the default xml:space='preserve' setting in all VDX files. In all other cases, for example in cell elements whose data type is NUM, white space is not preserved. By default, Visio typically does not display any formatting, such as carriage returns and tab indents, in the source code of XML files. This can make these files hard to read inside a general text editor such as Notepad. To instruct Visio to display formatting such as indents of child elements in XML files or other formatting that makes your files easier to read, modify the following registry key.
HKEY_CURRENT_USER\Software\Microsoft\Office\14.0\Visio\Application\XMLEmitIndents
To improve readability in Notepad and other text editors, set the value of XMLEmitIndents to 1.
Serious problems might occur if you modify the registry incorrectly by using Registry Editor or by using another method. These problems might require that you reinstall the operating system. Microsoft cannot guarantee that these problems can be solved. Modify the registry at your own risk.
Visio always emits local cell elements that have an explicit unit, value, and formula. If a third-party application omits portions of the element, Visio either picks default values or infers correct values for the missing components. Consider the following fragment that might be emitted from a third-party application.
Because only a formula was specified, Visio assigns the formula to the PinX element and then evaluates the formula to determine the value. Because the formula evaluates to a constant, Visio flags the element as a constant. A unit of measure was not specified, so Visio uses the default unit for the cell, which is inches (the default length unit or DL). When the file is resaved, the previous fragment appears similar to the following.
A DatadiagramML file can contain a series of elements that represent indexed rows. Consider the following file fragment.
When Visio reads the file and resaves it, it appears similar to the following.
Visio always writes indexed rows out in index order. If the original sequence of rows was indexed sparsely (there were gaps in the sequence), Visio implicitly creates the missing rows.
Named rows have no specific order. If a DatadiagramML file contains a series of named rows, their order is not guaranteed when the file is round-tripped.
Geometry sections follow the same round-tripping rules as indexed rows. Indexed sections can appear in any order, but when round-tripped, they are written out in index order and the sparse sections are implicitly created.
For more information about Geom elements, see Working with Geometry in DatadiagramML.
In the Visio engine, a block of text can contain ASCII control characters that represent tabs, line breaks, paragraph breaks, and field positions.
An ASCII 9, ASCII 10, or ASCII 13 character indicating a tab, paragraph break, or line break, respectively, is emitted as a normal ASCII character into DatadiagramML. These characters are considered legal Unicode characters by all XML parsers.
Any other ASCII control character between ASCII 0 and ASCII 31 (other than ASCII 9, 10, and 13) is considered an illegal Unicode character by some XML parsers. As a result, when you save a file that contains these control characters, Visio converts them into question-mark characters (?) and warns you about their existence.
For more information about working with text, see Working with a Shape's Text in DatadiagramML.
A third-party application can omit a document-level element from a DatadiagramML file, such as DocumentProperties or any of the document level child elements, such as Author. However, when the DatadiagramML file is round-tripped, Visio creates a document-level element for all non-empty document properties. For example, a DatadiagramML file can omit the TimeSaved element, but the round-tripped DatadiagramML file contains a TimeSaved element with an appropriate value.
Microsoft Visio 2010 introduces several New ShapeSheet cells and functions.
Support of new Visio 2010 ShapeSheet cells in Visio 2007
These new cells do not present a problem for documents saved in binary (.vsd) file format. However, the new XML elements that represent the new cells would not be recognized by Visio in files saved in XML format (.vdx files), because they are not included in the Visio 2007 XML schema. As a result, if you attempt to open a Visio 2010 file that is saved in XML format in Visio 2007, you get a warning if the XML is trusted (created by Visio); the file will fail to open if the XML is untrusted (created or edited outside Visio).
To address this issue, the elements that represent the new cells reference the "v14" namespace alias, and thus are considered to be unknown XML. The file opens, but Visio ignores the contents of these elements.
Support of new Visio 2010 ShapeSheet functions in Visio 2007
When Visio 2007 loads an XML file that was created in Visio 2010, it cannot parse new Visio 2010 ShapeSheet functions, and therefore does not recognize any formulas that contain references to new cells or new functions. To address this issue, when you save a Visio 2010 file in VDX format, anytime that a cell contains references to new Visio 2010 formulas and therefore requires that those cells be declared in extended XML format, Visio modifies the file as follows, and as shown in the accompanying example:
In the regular XML section, Visio writes the last valid computed value, but not the formula.
In the extended XML section, designated by the "x:" namespace, Visio writes both the last valid computed value and the new formula.
This ensures that Visio 2007 loads VSD and VDX files in the same manner. When it loads the files, Visio writes the extended XML sections as string data stored with an individual shape, as it would any other unknown XML. The last valid computed values are correct, but the formulas that are specific to Visio 2010 are re-evaluated in the event a user opens, edits, and then resaves the file in Visio 2007, which forces Visio to recalculate. In the event that you open a Visio 2010 VDX file in Visio 2007, save it as a Visio 2007 VDX file, and then reopen it in Visio 2010, the formulas in the extended XML sections of the VDX file match the original formulas specific to Visio 2010 only if both of the following conditions are met:
The last valid computed values in the regular and extended XML sections match each other exactly (indicating that no recalculation has taken place).
The current value is a trivial formula (for example, a formula that evaluates to a constant). | https://msdn.microsoft.com/en-us/library/ff768326.aspx | CC-MAIN-2017-34 | refinedweb | 1,246 | 50.16 |
Example Code: Code_050713.zip
In the previous post I talked about how you can use macro lists to solve the problem of wanting to generate a bunch of code based on the same set of data. This was useful for doing things like defining a list of resources a player could accumulate, and then being able to generate code to store and manipulate each resource type. You only had to update the resource list to add a new resource and the rest of the code would almost magically generate itself.
What if you wanted the reverse though? What if you had a fixed set of code that you want to apply to a bunch of different sets of data? This post is going to show you a way to do that.
In the example code, we are going to make a way to define several lists of items, and expand each list into an enum that also has a ToString and FromString function associated with it.
Another usage case for this technique might be to define lists of data fields, and expand each list into a data structure that contains serialization and deserialization functions. This would allow you to make data structures that could be saved and loaded to disk, or to sent and received over a network connection, just by defining what data fields they contained.
I haven’t yet seen this technique in the wild, and it kind of makes me wonder why since they are just two sides of the same coin.
GameEnums.h
In the last post, our data was always the same and we just applied it to different code. To do this, we had the code in one .h and the data in another .h that would get included multiple times. This allowed us to define different pieces of code in one .h, then include the other .h file to apply the fixed data to each piece of code.
In this post, it’s going to be the exact opposite. Our code will always stay the same and we will apply it to different data so our data will be in one .h and the code will be in another .h that gets included multiple times.
Here’s GameEnums.h:
////////////////////// // EDamageType ////////////////////// #define ENUMNAME DamageType #define ENUMLIST \ ENUMENTRY(Normal) \ ENUMENTRY(Electricity) \ ENUMENTRY(Fire) \ ENUMENTRY(BluntForce) #include "EnumBuilder.h" ////////////////////// // EDeathType ////////////////////// #define ENUMNAME DeathType #define ENUMLIST \ ENUMENTRY(Normal) \ ENUMENTRY(Electrocuted) \ ENUMENTRY(Incinerated) \ ENUMENTRY(Smashed) #include "EnumBuilder.h" ////////////////////// // EFruit ////////////////////// #define ENUMNAME Fruit #define ENUMLIST \ ENUMENTRY(Apple) \ ENUMENTRY(Banana) \ ENUMENTRY(Orange) \ ENUMENTRY(Kiwi) #include "EnumBuilder.h" ////////////////////// // EPlayers ////////////////////// #define ENUMNAME Player #define ENUMLIST \ ENUMENTRY(1) \ ENUMENTRY(2) \ ENUMENTRY(3) \ ENUMENTRY(4) #include "EnumBuilder.h"
EnumBuilder.h
This header file is where the real magic is; it’s responsible for taking the previously defined ENUMNAME and ENUMLIST macros as input, and turning them into an enum and the string functions. Here it is:
#include <string.h> // for _stricmp, for the enum Fromstring function // this EB_COMBINETEXT macro works in visual studio 2010. No promises anywhere else. // Check out the boost preprocessor library if this doesn't work for you. // BOOST_PP_CAT provides the same functionality, but ought to work on all compilers! #define EB_COMBINETEXT(a, b) EB_COMBINETEXT_INTERNAL(a, b) #define EB_COMBINETEXT_INTERNAL(a, b) a ## b // make the enum E<ENUMNAME> #define ENUMENTRY(EnumValue) EB_COMBINETEXT(e, EB_COMBINETEXT(ENUMNAME, EnumValue)), enum EB_COMBINETEXT(E,ENUMNAME) { EB_COMBINETEXT(EB_COMBINETEXT(e, ENUMNAME), Unknown) = -1, ENUMLIST EB_COMBINETEXT(EB_COMBINETEXT(e, ENUMNAME), Count), EB_COMBINETEXT(EB_COMBINETEXT(e, ENUMNAME), First) = 0, EB_COMBINETEXT(EB_COMBINETEXT(e, ENUMNAME), Last) = EB_COMBINETEXT(EB_COMBINETEXT(e, ENUMNAME), Count) - 1 }; #undef ENUMENTRY // make the E<ENUMNAME>ToString function const char *EB_COMBINETEXT(EB_COMBINETEXT(E,ENUMNAME), ToString)(EB_COMBINETEXT(E,ENUMNAME) value) { switch(value) { #define ENUMENTRY(EnumValue) \ case EB_COMBINETEXT(e, EB_COMBINETEXT(ENUMNAME, EnumValue)): \ return #EnumValue; ENUMLIST #undef ENUMENTRY } return "Unknown"; } // make the E<ENUMNAME>FromString function EB_COMBINETEXT(E,ENUMNAME) EB_COMBINETEXT(EB_COMBINETEXT(E,ENUMNAME), FromString)(const char *value) { #define ENUMENTRY(EnumValue) \ if(!_stricmp(value,#EnumValue)) \ return EB_COMBINETEXT(e, EB_COMBINETEXT(ENUMNAME, EnumValue)); ENUMLIST #undef ENUMENTRY return EB_COMBINETEXT(EB_COMBINETEXT(e, ENUMNAME), Unknown); } // clean up #undef EB_COMBINETEXT #undef EB_COMBINETEXT_INTERNAL // these were defined by the caller but clean them up for convinience #undef ENUMNAME #undef ENUMLIST
Main.cpp
Now, here’s how you can actually use this stuff!
#include "GameEnums.h" int main(int argc, char* argv[]) { EDamageType damageType = eDamageTypeBluntForce; EDeathType deathType = EDeathTypeFromString("smashed"); EFruit fruit = eFruitLast; EPlayer player = EPlayerFromString(EPlayerToString(ePlayer1)); return 0; }
Combining the files
As a quick aside, in both this and the last post, I separated the code and data files. This is probably how you would normally want to do things because it’ll usually be cleaner, but it isn’t required. Here’s a cool technique I came across today…
Here’s Macro.cpp:
#ifdef MACROHEADER // Put your header stuff here #else // Put cpp type stuff here // Include the "header" #define MACROHEADER #include "Macro.cpp" #undef MACROHEADER #endif
If you ever really just want to combine all your code and data into a single file and not muddy up a directory or project with more files, this technique can help you do that. IMO you really ought to just use separate files, but I wanted to share this for when there are exceptions to that rule (as there always seem to be for every rule!)
Being Data Driven
After my last post, a fellow game developer friend of mine pointed out..
I hope that one day we hurl C++ into a raging sea of fire.
I do like this technique, in theory at least, but whenever I feel like it’s the right solution, a voice in the back of my head yells that I’m digging too greedily and too deeply and should step back for a second and consider what design choices have lead me to this point.
And I think that usually that introspection ends up at the intersection of “we’re not data-driven enough but want to be” and “we decided to use C++ for our engine.”
— jsola
He does have a point. For instance, in the case of our resource list from last post, it would be better if you had some “source data”, such as an xml file, listing all the resources a player could have. The game should load that data in on startup and make a dynamic array, etc to handle those resources. When you had game actions that added or subtracted specific resources from a player, the details of which resources got modified, and by how much, should also be specified in data.
When you save or pack your data, or at runtime (as your situation calls for), it can verify that your data is well formed and makes sure that if a data field is meant to specify a resource type, that it actually corresponds to an actual resource type listed in the list of resources.
That is closer to the ideal situation when making a game – especially when making larger games with a lot of people.
But there are still some good usage cases for this kind of macro magic (and template metaprogramming as well). For instance, maybe you use macros to define your data schemas so that your application can be data driven in the first place – I’ve done that on several projects myself and have seen other well respected people do it as well. So, add these things to your toolbox I say, because you never know when you might need them!
Next Post…
The next post will be what i promised at the end of the last one. I’m going to talk about a way to define a list of lists and then be able to expand that list of lists in a single go, instead of having to do a file include for each list individually.
Example Code: Code_050713.zip | http://blog.demofox.org/2013/05/07/macro-lists-for-the-win-side-b/ | CC-MAIN-2017-22 | refinedweb | 1,294 | 50.16 |
i need a program for rational numbers to represent 500/1000 as 1/2 in java using java doc comments Hi Friend,
Try the following code:
public class RationalClass{
private int nr
java programs - Java Beginners
java programs 1) write a program to print prime numbers?
2) write a program to print factorial of given number?
3)Please provide complete material..., visit the following link: design a vehicle class hierachy in java.write a program to demonstrate polymorphism Hi Friend,
Try the following code:
class Vehicle {
void test(){}
}
class Bus extends Vehicle{
void test
java programs - Java Beginners
java programs Actually i am searching for the source code ,algorithm and flow chart for prime number ...can u suggest me where can i find it??? Hi Friend,
Try the following code:
import java.util.*;
class
java programs - Java Beginners
java programs
I am trying to write a program that prompts the user to enter the month and year and displays the number of days in the month. Need help Hi Friend,
Try the following code:
import java.util.
java programs - Java Beginners
java programs design a java interface for adt stack .develop two different classes that implement the interface one using array and another using linkedlist Hi Friend,
Try the following code:
import java.lang.
java
Programs in java
Programs in java Hi,
What are the best programs in java for a beginner?
Thanks
Java programs for beginners
RoseIndia provides a long list of Java programs for beginners that too... the Java programs provided for
beginners, as every latest topic and update... prepared these java programs describing every logic and method behind
java programs
java programs Why word "static" is used in java programs
what programs are needed in java programming? - Java Beginners
what programs are needed in java programming? What programs are needed in java programming? Hi friend,
For solving the problem visit to :]
Thanks Hi friend
Java Programs
Java Programs Hi,
What is Java Programs?
How to develop application for business in Java technology? Is there any tool to help Java programmer in development of Java Programs?
Thanks
java programs
java programs Explain types of Java programs. Also explain how to compile and run them.
Types of Java Programs:
Standalone Applications
Web Applications
Enterprise Applications
Console Application
Web services
Simple Java Programs for Beginners
RoseIndia brings you simple Java programs that can be used and downloaded... of
Java programs covering every topic, method, keywords, classes, functions, etc.
Programs on Hello World Array list, Linked list, Iterate list, Java swing Programs
Simple Java Programs
In this section we will discuss about the Java programs
This section will describe you the various Java programs that will help
java mail programs
java mail programs I got some codes from this about sending mail,forwarding mail,multipart mail etc..and I compiled and executed it,but it did not show any output,IDE(netbeans) showed "build successful" and no more
Collection of Large Number of Java Sample Programs and Tutorials
Collection of Large Number
of Java Sample Programs and Tutorials
Java Collection Examples
Java 6.0
New Features (Collection Framework... Java Program for beginners that prints HelloWorld! on
console
programs - Java Magazine
:
Thanks
Amardeep
Programs in java - Java Interview Questions
Program in Java decimal to binary i need a java example to String Reverse. Can you also explain the Java decimal to binary with the help.../java/example/java/util/CapturedText.shtml:
Sending SMS From Java Programs
Sending SMS From Java Programs I want to develop and application to send sms from my computer, can someone please help me, like tell me where to start and what i need
Installing programs over a network using java
Installing programs over a network using java Hi, i want to write a java program that will allow me to install programs from a server to a client machine. Any help will be appreciated. Thanks
java - Java Beginners
Java applets programs coding Java applets programs coding and example.Thanks in Advance
Complete Java programming tutorials for beginners
Complete collection of Java programming tutorials for beginners is available... videos, Java programs and Java examples acts as
an online tutor that helps... tutorials includes innumerous working
Java programs, Java examples, lessons and
Java
java - Java Beginners
java any programs on combination generation pls
Queue - Java Beginners
Queue i'm working with queue on java. since im beginners im asking for additional example programs on queue using java to enhance my knowledge. thanks so much for the help! God bless
Java guide for beginners
Java guide provided at RoseIndia for beginners is considered best to learn...
topic, Videos with voice overs on Java programs and hundreds of Java examples... and understand it completely.
Here is more tutorials for Java coding for beginners
How can java programs execute automatically when it connects to network
How can java programs execute automatically when it connects to network Good Day dears...
How can java programs execute automatically when... internet Connection through my java program
Programming in Java for beginners
Programming in Java can be difficult sometimes especially for beginners...
Java guide that has innumerous Java programs, examples, tutorials and videos.
This Java guide not only helps the beginners in Java to learn the language
java - Java Beginners
java why we use classpath.? Hi Friend,
We used classpath to tell the Java Virtual Machine about the location of user-defined classes and packages in Java programs.
Thanks
examples - Java Beginners
examples as am new to java can you please help me with basic programs on java with their examples Hi Friend,
Please visit the following link:
Hope
JAVA - Java Beginners
java 1.4 vs java 1.5 What is the difference between java 1.4 and java 1.5? Difference between java 1.4 and java 1.5Java programming language is simple,distributed , robust, object oriented & secure.The Java 2 SDK - Java Beginners
://... is simply the memory used by programs to store variables.
Element of the heap... information on Stack or Heap Visit to :
java - Java Beginners
java hello,
Please help me with code for the following
well,i am supposed to create a java program that recieves information from other programs and manupulates it.
its inteface will have various functions like:
input
java beginners - Java Beginners
the following links: beginners what is StringTokenizer?
what is the funciton
java - Java Beginners
one end of a two-way communications link
between two programs running...:
Thanks
java - Java Beginners
java i need help i realy need help m a first year student studying computer science i seem not to be understanding java quiet clearly i still fail to do simple programs what can i do , how can i understand java and how can i
methods type - Java Beginners
methods type in Java Give me an example programs of methods types in Java
JAVA - Java Beginners
JAVA Hi,
I have one small word(number type as password) of containing 6 digits. Now i want to encrypt to this password and decrypt the password in different programs. for encrypt and decrypt the public & private keys
core java - Java Beginners
core java 1. What are the Advantages of Java?
2. What are the Differences between c,c++ & java?
3. Where we need to Write Java Programs?
4... the following link:
Beginners in Java
Beginners in Java Hi, I am beginners in Java, can someone help me... tutorials for beginners in Java with example?
Thanks.
Hi, want to be command over Java, you should go on the link and follow the various beginners
java - Java Beginners
java give the code in java for the following programs:
1. 5
45
345
2345
12345
2. H
HE
HEL
HELL
HELLO Hi Friend,
1)
class Pyramid
{
public static
path - Java Beginners
path how to set the path in environment variables to run java programs in my pc?
Hi friend,
Read for more information.
java - Java Beginners
it gives the appearance of executing all of the programs at the same time. Multitasking allow processes (i.e. programs) to run
concurrently on the program....
For more information on Thread visit to :
HELP - Java Beginners
HELP Hello sir ,how i can make Java Programs Set up File ,Please give me steps to make
Java projects for beginners
In this tutorial you will be briefed about the Java projects for beginners... is more tutorials for Java coding for beginners
What... Hello World Java Program
Java Beginners tutorial Home page
Java - Java Beginners
codes (sorting and partitioning) to Java. It should be able
to execute any set... and contrast it to the quicksort
algorithm
(f) Write a java program...) Compare the running time for the above two programs for:
i. Small Inputs
Advertisements
If you enjoyed this post then why not add us on Google+? Add us to your Circles | http://www.roseindia.net/tutorialhelp/comment/90153 | CC-MAIN-2015-35 | refinedweb | 1,461 | 54.02 |
Streaming in WSGI is usually done by passing a generator function as a response. As HTTPResponse.write only accepts str and bytes, I tried this:
def hello(): yield b'hello' time.sleep(1) yield b'world' resp = wheezy.http.HTTPResponse() resp.buffer = hello() return resp
But this doesn't work, because HTTPResponse tries to calculate the length of the response, which is needed to emit the Content-Length header. So, I suggest:
- There needs to be some ways to disable emitting the header, by making an option, or by adding a subclass.
- HTTPResponse should also provide a method to replace its buffer. The direct assignment to HTTPResponse.buffer seems to be inconsistent with other methods it provides(write, write_bytes).
It seems that Werkzeug checks if the content is an iterator before setting Content-Length. The related code:
Bottle has a similar mechanism:
I believe HTTP response for streaming is somewhat unique and should not be generalized in HTTPResponse class. The user code is alway precise about doing regular response or streaming one, so I do not see a value in reuse of HTTPResponse and additional check for generator/iterator as WSGI response. Instead a separate response class is suggested that limit interface to only those operation that are applicable for streaming.
Initially I was thinking about something like this:
However later realized that it is unlikely to use status code other than 200, set cookies, use cache profile or policy.. so that reduced it down to this:
There is streaming demo here.
It would be convenient if the user can pass a
str(unicode) generator directly, rather than passing a
bytesgenerator. To do that, something like
would be appropriate.
If we do this, however, we should add an option to distinguish
striterators from
bytesiterators, as pre-knowing the type of the generated data from the iterators is impossible. For example, in Werkzeug, applying the
str->
bytesencoding iterator is the default, and the user must set
direct_passthrough=Trueto prevent this behavior. Personally, I think this '
str'-first approach is right, as the user more likely generates dynamic text content, except for some cases involving
bytes(sending a file, for example).
Second, I think there would be some use cases to set cookies from
HTTPStreamingResponse, though it might be a rare case.
And finally, the term "iterable" is more common than "iteratable" in the Python world. See:
I believe
textand
binaryare two very distinct use case and we should avoid to mix them. Here is an example for
HTTPTextStreamingResponse: | https://bitbucket.org/akorn/wheezy.http/issues/3/httpresponse-should-accept-generator-to | CC-MAIN-2019-43 | refinedweb | 417 | 53.92 |
On Wed, May 04, 2011 at 11:36:35AM +0200, Sorin Manolache wrote:
> On Wed, May 4, 2011 at 11:34, <[email protected]> wrote:
> > Hello list,
> >
> > as the subject line says, I'm trying to run a subrequest through
> > mod_proxy and need to post-process the subrequests response data.
> > Looking at older posts on this list it seems as if the only way to
> > accomplish this is:
> >
> > (1) create a subrequest with ap_sub_req_lookup_uri(...)
> >
> > (2) modify parts of the created subrequest (filename, handler, proxyreq
> > etc.)
> >
> > (3) Install a filter that captures the response data
> >
> > (4) run that subrequest
>
> Play it in conjunction to RewriteRules:
>
> RewriteCond %{IS_SUBREQ} true
> RewriteRule ^/some_name$
> [P]
Hmm, I don't seem to get what's you do different compared with my
approach:
> request_rec *subr = ap_sub_req_method_uri("GET", "/some_name", r, NULL);
Same as my (1)
Here, "/some_name" is still an arbitrary URI and _not_ the proxy URI I
want to query. BTW, this does clutter the URL namespace, a big no-no in
my usecase ...
> ap_add_output_filter(post_processing_filter_name, filter_context,
> subr, subr->connection);
Same as my (3)
> int status = ap_run_subreq(subr);
> int http_status = subr->status;
> // optional: subr->main = r;
> if (ap_is_HTTP_ERROR(status) || ap_is_HTTP_ERROR(http_status))
> // some error handling
> }
And you still need to _run_ the subrequest to get at the restponse
status etc.
>
> There are some subtleties here:
>
> 1. The rewrite rules are ran in the translate_name hook. If you want
> to use %{ENV:request_note_name} in your rewrite rule, you have to copy
> them somehow (for example in another translate_name callback that is
> run before the mod_rewrite callbacks) from the main request notes to
> the subrequest notes.
>
> 2. Subrequests are not kept alive. In order to keep them alive, you
> could try to hook APR_OPTIONAL_HOOK(proxy, fixups, &proxy_fixups,
> NULL, NULL, APR_HOOK_MIDDLE). In the proxy_fixups callback, you can
> set subr->main = NULL; Then, after ap_run_subreq, you can re-set
> subr->main = r (the "optional" line in the code example above). i
But that means loosing all request context in the subrequest! One of
tthe main reasons to use mod_proxy instead of
some-arbitrary-webclient-lib is the fact that mod_proxy passes all
incomming header to the backend server. A must in my case.
> I'm
> using this trick but I do not know all its consequences.
Hmmm - bold. The costs of server downtime might easily exeed my
monthly income in this case :-)
cheers, RalfD
> Sorin
>
>
> >
> > Now, (1) seems unelegant since it does need a valid URI which has
> > nothing to do with the final proxy request. Hence the value of the
> > subrequest's status has no meaning -- but isn't this exactly the purpose
> > of subrequests? To quote Nick Kew '....to run a fast partial request, to
> > gather information: what would happen if we ran thos request?'
> > Is there really no way to create a subrequest directly aiming at
> > mod_proxy.
> > It would be utterly nice to be able to access a (proxied) subrequests
> > metadata (content-type, etag etc.) before running the filter.
> >
> > Any ideas? Mabe a nice API extension for Apache or mod_proxy?
> >
> > TIA Ralf Mattes
> >
> > | http://mail-archives.apache.org/mod_mbox/httpd-modules-dev/201105.mbox/%[email protected]%3E | CC-MAIN-2016-44 | refinedweb | 504 | 61.46 |
Control.Concurrent.STM.Broadcast
Description
A Broadcast variable is a mechanism for communication between threads. Multiple reader threads can wait until a broadcaster thread writes a signal. The readers retry until the signal is received. When the broadcaster sends the signal all readers are woken.
This module is designed to be imported qualified. We suggest importing it like:
import Control.Concurrent.STM.Broadcast ( Broadcast ) import qualified Control.Concurrent.STM.Broadcast as Broadcast ( ... )
Synopsis
Documentation
A broadcast variable. It can be thought of as a box, which may be empty of full.
Instances
newWritten :: α -> STM (Broadcast α)Source
read :: Broadcast α -> STM αSource
tryRead :: Broadcast α -> STM (Maybe α)Source
write :: Broadcast α -> α -> STM ()Source | http://hackage.haskell.org/package/concurrent-extra-0.2/docs/Control-Concurrent-STM-Broadcast.html | CC-MAIN-2014-42 | refinedweb | 116 | 53.37 |
The MD5 hashing algorithm.
Implementing the algorithm itself was not as hard as I thought it would be. The algorithm is roughly like this:
- It starts with 4 32-bits integers which contain a predefined value.
- The algorithm contains 4 "rounds". On each of them, a different function is applied to those integers and the input string.
- The result of those rounds serves as input to the next ones.
- Add the 4 resulting 32-bit integers with the original, predefined integer values.
- Interpret the resulting integers as a chunk of 16 bytes.
The implementation
So basically the implementation was not that hard, some template metaprogramming and several constexpr functions. Since there was a pattern on the way the arguments were provided as input to the functions in each round, I used a specialization which avoided lots of repeated code.
The worst part was generating the input for that algorithm. The input is not just the string which is to be hashed. The steps to generate that input is roughly as follows:
- Create a buffer of 64 bytes, filled with zeros.
- Copy the input string into the start of the buffer.
- Add a 0x80 character on buffer[size_of_input_string].
- On buffer[56], the value sizeof_input_string * 8 must be stored.
In order to achieve this buffer initialization, I had to somehow decompose the input string into characters, and then join them and those special bytes, into an array which could be used during compile time. In order to achieve this, I implemented a constexpr_array, which is just a wrapper to a built-in array, but provides a data() constexpr member, something that std::array does not have(I'm not really sure why):
template<typename T, size_t n> struct constexpr_array { const T array[n]; constexpr const T *data() const { return array; } };The decomposition ended up pretty simple, but I had to struggle to figure out how the hell to implement it.
Finally, the interface for the compile time MD5 would be the following:
template<size_t n> constexpr md5_type md5(const char (&data)[n])
The typedef for the function's result is the following:
typedef std::array<char, 16> md5_type;
Note that everything in the implementation resides in namespace ConstexprHashes.
As an example, this code generates a hash and prints it:
#include <iostream> #include "md5.h" int main() { constexpr auto value = ConstexprHashes::md5("constexpr rulz"); std::cout << std::hex; for(auto v : value) { if(((size_t)v & 0xff) < 0x10) std::cout << '0'; std::cout << ((size_t)v & 0xff); } std::cout << std::endl; }
This prints out: "b8b4e2be16d2b11a5902b80f9c0fe6d6", which is the right hash for "constexpr rulz".
Unluckily, the only compiler that is able to compile this is clang 3.1(I haven't tested it on newer versions). GCC doesn't like the code, and believes some constexpr stuff is actually non-constexpr. I'm not 100% sure that clang is the one that's right, but it looks like it is.
You can find the MD5 implementation here. Maybe I'll implement SHA1 soon, whenever I have some time.
Well, this was the first time I used constexpr, it was a nice experience. Implementing this using only TMP would be an extreme pain in the ass.
looked at SHA1 yet? :)
Very cool. Works perfectly with Visual Studio 2015 Update 1 | http://average-coder.blogspot.nl/2012/08/compile-time-md5-using-constexpr.html | CC-MAIN-2018-17 | refinedweb | 543 | 54.93 |
// This is needed when accessing an XElement or Attribute that has a namespace on its name XNamespace xts = ""; List<Object> mySchedules = new List<object>(); XElement Root = xmlDoc.Root; List<DocumentSportsInfo> docSportsInfo = (from listing in xmlDoc.Descendants("sports-content") let SportsMetadata = listing.Elements("sports-metadata").First() let SportsContextCode = listing.Descendants("sports-content-code") select new DocumentSportsInfo { //QueryDateTime = myFodHelper.ConvertFodDateTime(Root.Attribute("query-date-time").Value), QueryString = Root.Attribute("query-string").Value, HostName = Root.Attribute("hostname").Value, ResultCount = Convert.ToInt32(Root.Attribute("result-count").Value), ErrorCount = Convert.ToInt32(Root.Attribute("error-count").Value), TotalCount = Convert.ToInt32(Root.Attribute("total-count").Value), ElapsedTime = Root.Attribute("elapsed-time").Value, DocumentId = SportsMetadata.Attribute("doc-id").Value, //DocumentDateTime = myFodHelper.ConvertFodDateTime(SportsMetadata.Attribute("date-time").Value), DocumentClass = SportsMetadata.Attribute("document-class").Value, FixtureKey = SportsMetadata.Attribute("fixture-key").Value, RevisionId = SportsMetadata.Attribute("revision-id").Value, DocumentSportsTitle = SportsMetadata.Element("sports-title").Value, Priority = SportsContextCode .Where(c => c.Attribute("code-type").Value == "priority") .Select(c => c.Attribute("code-key").Value).FirstOrDefault(), Sport = SportsContextCode .Where(c => c.Attribute("code-type").Value == "sport") .Select(c => c.Attribute("code-key").Value).FirstOrDefault(), League = SportsContextCode .Where(c => c.Attribute("code-type").Value == "league") .Select(c => c.Attribute("code-key").Value).FirstOrDefault(), Conference = SportsContextCode .Where(c => c.Attribute("code-type").Value == "conferenece") .Select(c => c.Attribute("code-key").Value).FirstOrDefault(), TeamKeys = SportsContextCode .Where(c => c.Attribute("code-type").Value == "team") .Select(c => c.Attribute("code-key").Value).ToList(), }).ToList(); mySchedules.Add(docSportsInfo); Console.WriteLine("Hello"); List<object> eventAndTeamInfo = (from sEvent in Root.Descendants("sports-event") select new List<object> { new EventSportsInfo { Key = sEvent.Element("event-metadata").Attribute("event-key").Value, //event-status Status = sEvent.Element("event-metadata").Attribute("event-status").Value, Week = Int32.Parse(sEvent.Element("event-metadata").Element("event-metadata-american-football").Attribute(xts + "week").Value), // Add the others here }, new List<TeamsSportsInfo> ( (from t in sEvent.Descendants("team") // The let's are here so that you do not have to write them 4 times and can // be used short string and can combine them as in FullName let FN = t.Element("team-metadata").Element("name").Attribute("first").Value let LN = t.Element("team-metadata").Element("name").Attribute("last").Value select new TeamsSportsInfo { // Add fileds here Key = t.Element("team-metadata").Attribute("team-key").Value, FirstName = FN, LastName = LN, FullName = FN + " " + LN // Add more fileds here }).ToList() ), } as object ).ToList(); mySchedules.Add(eventAndTeamInfo);
From novice to tech pro — start learning today.
First().Where(p => p != null)
First().DefaultIfEmpty()
Additional resources on LINQ:
LINQ to SQL: .NET Language-Integrated Query for Relational Data:
LINQ extensions referenced in existing solution:
"Microsoft has developed a set of extensions called Dynamic LINQ to do what you want to do. The extensions can be downloaded from this MS web site and I found the original link on this web site, "Dynamic LINQ (Part 1: Using the LINQ Dynamic Query Library)"."
LinqKit referenced in existing solution:
"Have you looked at LinqKit () or the work of Tomas Petricek (). Both places give good coverage of approaches to dynamic queries using Expression building. They both also provide some extensions (i.e. code) that make the job much easier. If I'm not mistaken, the tools in LinqKit are based on the work of Mr. Petricek, but are slightly newer and modified."
Let me know if you need additional information or which specific portion of the LINQ you are trying to protect from nulls.
Thanks, Eric
So, if we just look at the very first query for documentSportsInfo and break each of the areas down, we get....
1. from listing in xmlDoc.Descendants("sports
I believe we are ok with this statement, if no sports-content tags exist then nothing is returned and I believe the query will not go any further. If I'm mistaken, please let me know, haven't run into that yet.
2. let SportsMetadata = listing.Elements("sports-m
let SportsContextCode = listing.Descendants("sport
Again, I think (I maybe wrong and need this addressed too) we are ok with SportsContextCode since it uses Descendants, nothing will be returned if tag missing.
But SportsMetadata on the other hand will error out if this Element is missing. I tried using this approach:
let SportsMetadata = listing.Elements("sports-m
and it works, SportsMetadata is set to null if element missing and it keeps going with no error, but I get an error on any values below that try to use SportsMetaData, even when checking for nulls with inline if statement as seen below in #3.
3. FixtureKey = (SportsMetadata.Attribute(
RevisionId = (SportsMetadata.Attribute(
Next are assigning the values, lets stay with SportsMetadata as the example. These two statements above work fine if the attribute is missing in the element, it assigns null value if object is null. But as stated in #2, if SportsMetadata is null, then these statements don't work anymore and a null reference error is thrown.
Do I need to check every level of the structure for null values when assigning, doing a bunch of if statement nesting? This seems like it could get real messy. Anything better to use than if statements inline? And these are simple examples, I have others where its Elements("").Elements("").
There must be a best practice for each of these situations so the query doesn't break, I don't seem to understand the null value handling in LINQ very well yet.
Any ideas?
With monday.com’s project management tool, you can see what everyone on your team is working in a single glance. Its intuitive dashboards are customizable, so you can create systems that work for you.
2. Try inserting a Where statement as below (add your criteria for a good record):
let SportsMetadata = listing.Elements("sports-m
let SportsContextCode = listing.Descendants("sport
where SportsMetadata SportsMetadata.Attribute("
select new DocumentSportsInfo
3. See #2 above
Let me know if this helps at all.
Thanks, Eric
2. The Where statement won't really work because then it won't process the rest of the values if these conditions aren't meant. If the fixture-key attribute is null, I still want to assign the other values like revision-id. Each one needs to be independent of the other since I don't know between documents, which ones will be null.
The inline if statements seem to work just fine as-is, I'm more or less trying to figure out how to handle the Let keyword statements if they are null which will also not throw an error is used in assigning values. I only brought the assigning values up because the one solution for the Let statements I found (SingleOrDefault) that works then breaks the inline if statements because I'm not checking the outer most parent object for nulls.
Take this Let statement:
let SportsMetadata = listing.Elements("sports-m
I'm using the FirstOrDefault so if the element doesn't exist, it assigns null to SportsMetdata. This works fine, if this is not the correct way to do this, please let me know.
Now take the next part where I use SportsMetadata to assign a value to a collection list property:
FixtureKey = (SportsMetadata.Attribute(
Here I am checking if the attribute itself is null and assigning value. Now if SportsMetadata is null then I get a null reference error thrown on SportsMetadata.
I can resolve this by changing the null check for both the attribute and the element by nesting the statements like this:
FixtureKey = (SportsMetadata == null) ? null : (SportsMetadata.Attribute(
Now this works and all my issues are resolved but I want to know if there is a better way. Because as I pointed out, there are some attributes down 3 levels and nesting even 2 inline if statements let alone 3,4,5 etc.... gets very messy and hard to understand.
Just trying to find a better way.
Deleting this question and re-creating, need other input and won't get it with 7 comments on question already.
Thanks for the attempt. | https://www.experts-exchange.com/questions/25579650/LINQ-Let-Statement-Determine-Null-Elements.html | CC-MAIN-2018-09 | refinedweb | 1,321 | 50.63 |
Roman's Law and Fast Processing with Multiple CPU Cores
OpenMP and -xautopar seem to work pretty well for C, but what about C++? Will they mesh well with the kind of modern C++ usage peppered with generics and template metaprogramming? The short answer is, there's no short answer. But, let's see for ourselves with the following example of modern C++ [ab]use:
#include <vector> #include <iterator> #include <algorithm> #include <iostream> void standard_input_sorter() { using namespace std; vector<string> v; copy(istream_iterator<string>(cin), istream_iterator<string>(), back_inserter(v)); sort(v.begin(), v.end()); } $ CC -c -fast -xloopinfo -xautopar -xopenmp -library=stlport4 sorter.cc
The above produces a pretty long list of complaints, explaining why a particular section of the STLport library cannot be parallelized. The key issue here is that certain areas of C++ are notoriously difficult to parallelize by default. Even with OpenMP, things like concurrent container access are much more trouble than they are worth. Do we have to rewrite STL? Well, seems like Intel almost did. Intel has been working on what it calls the Thread Building Blocks (TBB) C++ library, and its claim to fame is exactly that—making modern C++ parallel. Give it a try, and see if it works for you. I especially recommend it if you're interested in exploiting task parallelism. But, then again, the amount of modern C++ that TBB throws at even the simplest of examples, such as calculating Fibonacci numbers, is something that really makes me sad. Tasks as defined in the upcoming OpenMP 3.0 standard seem far less threatening.
There is a fundamental trend toward concurrency in hardware. Multicore systems are now making their way into laptops and desktops. Unfortunately, unless software engineers start taking these trends into account, there's very little that modern hardware can do to make individual applications run faster. Of course, parallel programming is difficult and error-prone, but with the latest tools and programming techniques, there's much more to it than merely POSIX threads. Granted, this article scratches only the surface of what's available. Hopefully, the information presented here will be enough of a tipping point for most readers to start seriously thinking about concurrency in their applications. Our high-definition camcorders demand it and so does every gamer on earth.
Resources
Moore's Law:
Sun Studio Express: developers.sun.com/sunstudio/downloads/express/index.jsp
FFMPEG: ffmpeg.mplayerhq.hu
Intel VTune:
Intel Thread Checker:
TotalView Debugger:
POSIX Threads:
OpenMP:
Effective Use of OpenMP in Games:
Intel Thread Building Blocks: osstbb.intel.com
Cilk: supertech.csail.mit.edu/cilk
“The Problem with Threads”:
“The Free Lunch Is Over: A Fundamental Turn Toward Concurrency in Software”: gotw.ca/publications/concurrency-ddj.htm
Roman Shaposhnik started his career in compilers back in 1994 when he had to write a translator for the programming language he'd just invented (the language was so weird, nobody else wanted the job). His first UNIX exposure was with Slackware 3.0, and he's been hooked ever since. Currently, he works for Sun Microsystems in the Developer Products Group. He is usually found pondering the question of how to make computers faster yet not drive application developers insane. He runs a blog at blogs.sun.com/rvs and can be reached via e-mail at [email protected].?) | http://www.linuxjournal.com/article/9857?page=0,2 | CC-MAIN-2017-22 | refinedweb | 554 | 56.96 |
."
Catch: a method can catch an exception by providing an exception handler for that type
of exception.
Specify: if a method chooses not to catch an exception, the method must specify that it
can throw that exception. Why did the Java designers make this requirement? Because any
exception that can be thrown by a method is really part of the method's public programming
interface: callers of a method must know about the exceptions that a method can throw in
order to intelligently and consciously decide what to do about those exceptions. Thus, in the
method signature you specify the exceptions that the method can throw.
Checked exceptions: Java. The cost of checking for runtime exceptions often exceeds the benefit of catching or specifying them. Thus the compiler does not require that you catch or specify
runtime exceptions, although you can. Checked exceptions are exceptions that are not runtime
exceptions and are checked by the compiler; the compiler checks that these exceptions are
caught or specified.
Some consider this a loophole in Java's exception handling mechanism, and
programmers are tempted to make all exceptions runtime exceptions. In general, this is not
recommended.
Exceptions that can be thrown within the scope of the method: this statement may
seem obvious at first: just look for the throw statement. However, this statement includes
more than just the exceptions that can be thrown directly by the method: the key is in the
phrase within the scope of. This phrase includes any exception that can be thrown while the
flow of control remains within the method. This statement includes both:
The following example defines and implements a class named ListOfNumbers. The
ListOfNumbers class calls two methods from classes in the Java packages that can throw
exceptions.
// Note: This class won't compile by design!
// See ListOfNumbersDeclared.java or ListOfNumbers.java
// for a version of this
class that will compile.
import java.io.*;
import java.util.Vector;
public class ListOfNumbers {
private Vector victor;
private static final int size = 10;
public ListOfNumbers () {
victor = new Vector(size);
for (int i = 0; i < size; i++)
victor.addElement(new Integer(i));
}
public void writeList() {
PrintWriter out=
new PrintWriter(new FileWriter("OutFile.txt"));
for (int i = 0; i < size; i++)
out.println("Value at: " + i + " = " + victor.elementAt(i));
out.close();
}
}
Upon construction, ListOfNumbers creates a Vector that contains ten Integer
elements with sequential values 0 through 9. The ListOfNumbers class also defines a method
named writeList that writes the list of numbers into a text file called OutFile.txt.
The writeList method calls two methods that can throw exceptions. First, the following
line invokes the constructor for FileWriter, which throws an IOException if the file cannot
be opened for any reason:
out = new PrintWriter(new FileWriter("OutFile.txt"));
Second, the Vector class's elementAt method throws an ArrayIndexOutOfBoundsException
if you pass in an index whose value is too small (a negative number) or too large (larger than
the number of elements currently contained by the Vector). Here's how ListOfNumbers
invokes elementAt:
out.println("Value at: " + i + " = " +
victor.elementAt(i));
If you try to compile the ListOfNumbers class, the compiler prints an error message
about the exception thrown by the FileWriter constructor, but does not display an error
message about the exception thrown by elementAt. This is because the exception thrown by
the FileWriter constructor, IOException, is a checked exception and the exception thrown
by the elementAt method, ArrayIndexOutOfBoundsException, is a runtime exception. Java
requires that you catch or specify only checked exceptions.
RSS feed Java FAQ News | http://javafaq.nu/java-article1185.html | CC-MAIN-2017-43 | refinedweb | 588 | 54.83 |
📅 15 May, 2019 – Kyle Galbraith easy to miss.
So in the spirit of learning in public, I thought id write up my experience with this problem. We can dive into the problem and the solution so that you can avoid this scar tissue in the future.
First, let’s set the stage for the problem by getting into some of the background of the project. For the purposes of this blog post, I am going to use an example that mirrors the actual project.
The project is a web API that is built using dotnet core. The main endpoint in the API takes in an array of ids and checks a table in the database for those ids. In terms of APIs, this one is rather straightforward.
dotnet core
In developing this API, we had access to the database but not the original schema. That wasn’t a problem as we made use of Entity Framework code first to represent the table we want to query. We looked at the schema that is on the real database and mirrored those columns with their types into our Entity Framework model.
Here is what that model looked like after that initial phase.
namespace my_api.Data
{
public class FieldTable
{
public Int64 Id { get; set; }
public string FieldId { get; set; }
public string Description { get; set; }
public DateTime DateAdded { get; set; }
public bool Available { get; set; }
}
}
The FieldId is the column we query in the API to see if the array of ids that were passed in match any FieldId values in the database. The ones that match we return to the client. Here is the API logic code that does that.
FieldId
public IEnumerable<MatchedField> GetMatches(IEnumerable<string> fieldIds)
{
return _dataContext.FieldTable.Where(f => fieldIds.Contains(f.FieldId))
.Select(f =>
new MatchedField()
{
FirstSeen = f.DateAdded,
FieldId = f.FieldId.ToLower(),
Available = f.Known
}
).ToList();
}
At a high-level Entity Framework is going to map our code above into a raw SQL query that is going to look like this.
select DateAdded, FieldId, Known
from dbo.FieldData
where FieldId in (N'1', N'2', N'3', N'4', N'etc')
This query looks innocent enough right? It is doing a rather straightforward lookup on the table. But it’s actually doing a bit more than that, notice the N character in front of each id value.
N
This character in SQL is declaring the type of that string as nvarchar. If you’re not familiar with the nvarchar type, it allows you to store any Unicode value in a column. This is different from a varchar data type which only allows you to store ASCII.
nvarchar
varchar
This is where the differences between nvarchar and varchar become important. When we defined our FieldTable class up above, we didn’t specify the SQL types of the columns on the table. So what does that mean? It means that Entity Framework is going to use it’s default SQL types, for .NET strings the default type is nvarchar.
FieldTable
This is why we see the N character in front of each of our values that the raw SQL query is looking up.
Is that a problem? Not if the column on our table is of type nvarchar. Entity Framework is sending nvarchar ids in the query and our column has that type so were all good.
But what if the column is of type varchar instead?
If FieldId is actually of type varchar but Entity Framework sends a nvarchar set of ids, now we have a type mismatch. When this happens a conversion now has to happen at query time.
This is a subtle nuance that can often go unlooked. But, the performance impact can be huge if our query is looking up hundreds of values.
This subtle difference when looking up one FieldId value wasn’t all that noticeable. It seemed a bit slower than it should have been but not to bad overall.
But, querying for 500 FieldId values was not performant at all, it was on the order of 45-60 seconds. This led to the investigation laid out above. Looking at the raw query Entity Framework was generating we saw that the id values were being prefixed with N'1', N'2', N'3'. We assumed that those columns were in fact nvarchar so that shouldn’t be the performance bottleneck.
N'1', N'2', N'3'
But, then we ran the same query but instead of prefixing the id values with N we looked up normal varchar strings.
select DateAdded, FieldId, Known
from dbo.FieldData
where FieldId in ('1', '2', '3', '4', 'etc')
The results came back in less than 500 milliseconds.
Looking at the FieldId column we were able to confirm that it was actually of type varchar and not nvarchar. Performance bottleneck found ✅.
This wasn’t Entity Frameworks fault or even the fault of the database. It was a small bug in the data model that we created and was very easy to overlook. When we defined the table, FieldTable, in code we specified that the FieldId was of type string.
When it came time for EF to query that table it did what it does best, translate your code into a SQL query. But, it added to the WHERE IN clause the N prefix for each id. It operated under the assumption that the FieldId column was of type nvarchar. That is because nvarchar is the default SQL type for the .NET type string in Entity Framework.
WHERE IN
string
Bada bing, major performance problem. But why is that? Because now SQL Server has to convert each id that is declared as nvarchar in the query to a varchar type to query the column. That conversion with 500+ ids to lookup was very costly.
The fix was very straightforward to put in place. We needed to be explicit with our properties defined for FieldTable. This meant adding an attribute to each property that tells Entity Framework the exact SQL data type this represents.
namespace my_api.Data
{
public class FieldTable
{
public Int64 Id { get; set; }
[Column(TypeName="varchar(50)")]
public string FieldId { get; set; }
[Column(TypeName="varchar(500)")]
public string Description { get; set; }
public DateTime DateAdded { get; set; }
public bool Available { get; set; }
}
}
The change was to add the [Column(TypeName="varchar(50)")] to the FieldId property. This tells Entity Framework the exact SQL data type of this column. By doing that, Entity Framework now generates the appropriate SQL query, one with out nvarchar strings.
[Column(TypeName="varchar(50)")]
The result? Looking up 500+ ids at a time can now be done in 200-400 milliseconds instead of 45-60 seconds.
Building solutions where you don’t have access to all the pieces in play can be challenging. Duplicating a schema manually into your own code is error-prone as we have seen.
ORMs like Entity Framework are fantastic at hiding complexities around database access. Most of the time this is what we want. However, as we have seen, sometimes that hiding can introduce nuances that are easy to overlook. Use ORMs when needed but make sure you have a solid footing in the implicit decisions they may or may not | https://blog.kylegalbraith.com/2019/05/15/the-curious-case-of-nvarchar-and-varchar-in-entity-framework/ | CC-MAIN-2020-50 | refinedweb | 1,200 | 73.47 |
I’m just starting Python web development, and have chosen Bottle as my framework of choice.
I’m trying to have a project structure that is modular, in that I can have a ‘core’ application that has modules built around it, where these modules can be enabled/disabled during setup (or on the fly, if possible…not sure how I would set that up tho).
My ‘main’ class is the following:
from bottle import Bottle, route, run from bottle import error from bottle import jinja2_view as view from core import core app = Bottle() app.mount('/demo', core) #@app.route('/') @route('/hello/<name>') @view('hello_template') def greet(name='Stranger'): return dict(name=name) @error(404) def error404(error): return 'Nothing here, sorry' run(app, host='localhost', port=5000)
My ‘subproject’ (i.e. module) is this:
from bottle import Bottle, route, run from bottle import error from bottle import jinja2_view as view app = Bottle() @app.route('/demo') @view('demographic') def greet(name='None', yob='None'): return dict(name=name, yob=yob) @error(404) def error404(error): return 'Nothing here, sorry'
When I go to in my browser, it shows a 500 error. The output from the bottle server is:
localhost - - [24/Jun/2012 15:51:27] "GET / HTTP/1.1" 404 720 localhost - - [24/Jun/2012 15:51:27] "GET /favicon.ico HTTP/1.1" 404 742 localhost - - [24/Jun/2012 15:51:27] "GET /favicon.ico HTTP/1.1" 404 742 Traceback (most recent call last): File "/usr/local/lib/python2.7/dist-packages/bottle-0.10.9-py2.7.egg/bottle.py", line 737, in _handle return route.call(**args) File "/usr/local/lib/python2.7/dist-packages/bottle-0.10.9-py2.7.egg/bottle.py", line 582, in mountpoint rs.body = itertools.chain(rs.body, app(request.environ, start_response)) TypeError: 'module' object is not callable
The folder structure is:
index.py views (folder) |-->hello_template.tpl core (folder) |-->core.py |-->__init__.py |-->views (folder) |--|-->demographic.tpl
I have no idea what I’m doing (wrong) 馃檪
Anyone have any idea how this can/should be done?
Thanks!
Best answer
You are Passing the module “core” to the mount() function. Instead you have to pass the bottle app object to the mount() function, So the call would be like this.
app.mount("/demo",core.app)
Here are formal docs for the mount() function.
mount(prefix, app, **options)[source]
Mount an application (Bottle or plain WSGI) to a specific URL
prefix.
Example:
root_app.mount('/admin/', admin_app)
Parameters:
prefix 鈥?path prefix or mount-point. If it ends in a
slash, that slash is mandatory.
app 鈥?an instance of Bottle or a WSGI application | https://pythonquestion.com/post/creating-subprojects-in-bottle/ | CC-MAIN-2020-16 | refinedweb | 443 | 58.89 |
First time here? Check out the FAQ!
@berak Thank you so much...
I am developing commercial software for reading images and finding region of interest by passing top left corner coordinates and bottom right coordinates of rectangle then crop the image.Some client provides samples like 100 dpi, 200 dpi or 400 dpi images of same image. Because these kind of images, my coordinates lie somewhere else. How can i fix a problem to find actual area of interest for different dpi of images and find exact ROI? Give some rough idea to solve it.
first = [1025,1000]
last = [1550,1400]
image = cv2.imread("db/SGBAU/pink OMR/200.jpg")
def ROI_extract(first_corner,last_corner,image):
#croping ROI
ROI_img = image[first_corner[1]:last_corner[1],first_corner[0]:last_corner[0]]
cv2.imwrite(os.path.join('db/','ROI.png'), ROI_img)
return ROI_img
in this code i am not getting how are you generating result image where alphabets are printed on filled circle. after that how are you decoding alphabets in single string?
yeah..i solved this issue.Thank you very much.
pnts = np.array(points)
rect = cv2.minAreaRect(pnts)
box = cv2.cv.BoxPoints(rect)
box1 = np.int0(box)
cv2.drawContours(resized,[box1],0,(0,255,0),2)
rect = cv2.minAreaRect(points)
box = cv2.cv.BoxPoints(rect)
box = np.int0(box)
cv2.drawContours(img,[box],0,(0,0,255),2)
i have written like this but error is coming like TypeError: points is not a numpy array, neither a scalar
i have to first save this center points of rectangle in one variable then i can pass that variable inside minAreaRect. Is it r8?
I have image which contain 12 center point of rectangle. I am not getting how to code for image rotation from that point.C:\fakepath\deskew.png.Please help me with some code in opencv and python
i am going to make software which can detect image and check image alignment and make it proper. Is we require reference image? what will be the case if reference image is also not in correct position? I want to clear the concept of image alignment?
@essamzaky Is hough transform give best result for deskew of image?
@pi-null-mezon thank you for the solution.i will try and let you know how it working
In OMR, if scanned OMR is not in correct position then how can i deskew the image and search exact location of roll number rectangle?i want to extract information from different rectangle based on bubble selection. i need help to sort out this problem.
i need solution in python itself. in my case after detecting black circle it will be evaluated as candidate roll number. so code will print roll number. i am facing problem to decoding circle as it is image. i am very new in this domain. Please help me out
Can i get this code in python with open cv? i am not able to understand it..
i want solution in python as m not good in java
I am working on OMR and i am detecting circle who are selected by student. Now i am not getting how to get the selected numbered circle which marked by student.I am looking solution/idea for opencv 2.3.12 and python
C:\fakepath\circlesample1.png | https://answers.opencv.org/users/63672/rashmi/?sort=recent | CC-MAIN-2020-45 | refinedweb | 550 | 68.97 |
Subject: [Boost-bugs] [Boost C++ Libraries] #12790: Left shift does not work on 32 bit Linux and MSVC 2015 (32 and 64 bits)
From: Boost C++ Libraries (noreply_at_[hidden])
Date: 2017-01-25 16:06:03
#12790: Left shift does not work on 32 bit Linux and MSVC 2015 (32 and 64 bits)
------------------------------+----------------------------
Reporter: mika.fischer@⦠| Owner: johnmaddock
Type: Bugs | Status: new
Milestone: To Be Determined | Component: multiprecision
Version: Boost 1.62.0 | Severity: Problem
Keywords: |
------------------------------+----------------------------
The following test program creates an unsigned 160 bit unsigned integer
with all bits set and shifts it by six bits to the left.
On 64 bit Linux, the expected result (F...FC0) is returned. But on 32 bit
Linux and MSVC 2015 (32 bit as well as 64 bit), the incorrect result
(F...FC000000000) is returned.
{{{
#include <boost/multiprecision/cpp_int.hpp>
namespace mp = boost::multiprecision;
using namespace std;
typedef mp::number<mp::cpp_int_backend<20*8, 20*8, mp::unsigned_magnitude,
mp::unchecked, void> > number_t;
int main()
{
number_t n = -1;
cout << hex << n << endl;
n = n << 6;
cout << hex << n << endl;
}
}}}
-- Ticket URL: <> Boost C++ Libraries <> Boost provides free peer-reviewed portable C++ source libraries.
This archive was generated by hypermail 2.1.7 : 2017-02-16 18:50:20 UTC | https://lists.boost.org/boost-bugs/2017/01/47201.php | CC-MAIN-2019-43 | refinedweb | 207 | 62.07 |
Installable Units
From Eclipsepedia
Installable Unit
As the name implies, Installable Units (IUs for short)". The metadata allows dependencies to be structured as graphs without forcing containment relationships between nodes. Here is detailed presentation of what an installable unit is made of.
IU Identity
An IU is uniquely identified by an ID and a version.
Enablement filter
The enablement filter is of the form of an LDAP filter [1]. It indicates in which contexts an installable unit can be installed. The evaluation of this filter is done against a set of valued variables called an “evaluation context”.
IU dependencies and capabilities
In the same way bundles have import and export packages, IUs have dependencies to talk about their prerequisites and provide capabilities to tell others what they offer.
Capability
A capability has the three following attributes:
- A namespace
- A name
- A version
We often say that an IU provides capabilities.
Dependencies are expressed against those capabilities to express all the requirements of an IU. This approach offers great flexibility to express dependencies.
Requirement expression
A requirement expression is composed of two parts:
- An enablement filter of the form of an LDAP filter [1]. The absence of a filter is equivalent to a filter evaluating to true. When a filter evaluates to false, the requirement is ignored.
- A Conjunctive Normal Form of Requirements.
Requirement
A requirement has the following attributes:
- A namespace
- A name
- A version range
- A greediness flag, indicates whether or not a new IU should be added to the solution to define satisfy this requirement
- A multiplicity flag, indicates whether or not multiple IUs should be added to the solution to satisfy this requirement
- An optionality flag
Requirements are satisfied by capabilities.
Note that the id of the IU is also exposed as a capability in the org.eclipse.equinox.p2.iu.
These dependencies information are used by the agent to decide what needs to be installed. For example if you are installing the org.eclipse.jdt.ui IU, the dependencies expressed will cause the transitive closure of IUs reachable to be installed.
Example
The syntax used here is not normative. In fact p2 will not define a serialization format for IUs to allow for greater flexibility in storage and manipulation.
IU org.eclipse.swt v 3.2.0 Capabilities: {namespace=package, name=a, version=1.0.0} {namespace=foo, name=b, version=1.3.0} {namespace=package, name=c, version=4.1.0} Requirement expressions (true) -> {namespace=package, name=r1, range=[1.0.0, 2.0.0)} and {namespace=foo, name=r1, range=[3.2.0, 4.0.0)} (& (os=linux) (ws=gtk)) -> {namespace=package, name=r2, range=[1.0.0, 2.0.0)} or {namespace=foo, name=bar, range=[3.2.0, 4.0.0)} IU org.eclipse.jface v 3.3.0 Capabilities: {namespace=package, name=a, version=1.0.0} {namespace=package, name=jface, version=3.1.0} Requirement expressions: (true) -> {namespace=package, name=a, range=[1.0.0, 2.0.0)} and {namespace=foo, name=b, range=[1.0.0, 4.0.0)} (& (os=linux) (ws=gtk)) -> {namespace=package, name=a, range=[1.0.0, 1.1.0)} or {namespace=foo, name=bar, range=[3.2.0, 4.0.0)}
Content aspect
The IU does not deliver any content. Instead it refers to artifacts. The artifacts are mirrored from an artifact server into a local artifact server on the request of touchpoints.
Touchpoint, touchpoint data
IUs can be stamped with a type. Using this type, the engine identifies the touchpoint responsible for marrying the IU with the related system. The touchpoint data contains information that will be used to apply the software lifecycle (install, uninstall, update, configure, etc).
Update information
The lineage information of an IU is explicit. Each IU can express the IU(s) it is an update of. This information is stored in the form of one requirement, thus allowing for an IU to be an update of multiple of its predecessors.
We are contemplating supporting multiple requirements to allow for cases where an IU has been split into multiple IUs or where multiple IUs have merged into one. Another thing that we are contemplating the addition of "staged update" concept. This would allow for cases where an update must be applied even though an higher version exists.
Fixes
Support for fixes will be added, however the format has not been decided yet.
Properties
An IU can carry arbitrary properties. These properties are usually only considered by the user interface and the director. The properties targeted at the user are the one containing user readable name information (UI name, license, description, etc.) and the one allowing for better filtering of what is being shown to the user (see grouping section). The properties targeted at the director are usually used as hints/advices to the resolution process.
For properties influencing the director, they should be such that even if the director to which these properties are targeted at is not used, the Installable Unit should still be successfully resolvable.
Grouping
There are various circumstances where grouping is necessary. To address this, p2 does not call out for a specific construct. Instead in p2 groups are just IUs expressing requirements on other IUs. For example, here is an excerpt of the group representing the RCP functionality of eclipse
IU org.eclipse.rcp v 3.2.0 Requirement expressions (true) -> {namespace=iu, name=org.eclipse.osgi, range=[3.2.0, 3.3.0)} and {namespace=iu, name=org.eclipse.jface, range=[3.2.0, 3.3.0)} (& (os=linux) (ws=gtk)) -> {namespace=iu, name=org.eclipse.swt.linux.gtk, range=[3.3.0, 3.4.0)} (& (os=win32) (ws=win32) (arch=x86)) -> {namespace=iu, name=org.eclipse.swt.win32.win32.x86, range=[3.3.0, 3.4.0)}
For filtering in the user interface a property flagging group as such is set on the IU.
Installable Unit fragments
Installable unit fragments are installable units that complement an existing installable unit. When a fragment applies to an installable unit, it is being attached to this installable unit. A fragment can apply to multiple installable unit. Once the fragment has been attached its content is seamlessly accessible from the installable unit.
An installable unit fragment can not modify the dependencies of the installable unit to which it is attached.
Installable unit fragments are used to deliver touchpoint data common to multiple installable units. It could also be used to deliver metadata translation.
Installable Unit best practices
The information contained in an installable unit must be kept generic to allow for reuse. For example, the org.eclipse.equinox.common bundle needs to be started. However the start level at which it needs to be started differs based on the scenario where the bundle is being used (e.g. in a RCP context, it needs to be started at level 2, whereas it needs to be started at level 3 in the server side context).
Therefore for the IU to stay generic the touchpoint data for org.eclipse.equinox.common can not specify a start level. The IU should only contain information related to the dependencies and capabilities that the IU has.
In order to deliver the start level information, installable unit fragments will be created. In our example we would have one installable unit fragment to be used in an RCP Context and another one for the server side context. Each of these IU fragments will have its own unique ID and will not collide with each others. | http://wiki.eclipse.org/Installable_Units | crawl-002 | refinedweb | 1,251 | 50.02 |
A class is like structure in C. A class is a way to bind the data and its associated functions in a unit. It allows the data to be hidden. The keyword 'class' is used to define a class. The body of class is enclosed within braces and terminated by a semicolon. These functions and variables collectively called 'class members'. The class body contains the variables and functions All classes members are private by default. A class is inherited privately by default. The arrays can be used as member variables in class.
The class members that have been declared private can be accessed only within the class. On the other hand public class members can be accessed outside the class.
The variables defined in the class is called data members and the functions that are used in the class is celled member function. We can say that class is a collection of data member and member function.
Only the member function can have access to the private data members and private function, while the public member can be accessed from outside the class.
A class must have end with a semicolon.
class class_name
{
private :
variable declaration;
function declaration;
public :
variable declaration;
function declaration;
};
A class provides the blueprints for objects, so we can say an object is created from a class. In other word Class is a collection of object .Hance we can say an object is the smallest entity of class which connect with the real word entity.
class_name variable name;
// Simple example in c++ using Class and Objects
#include <iostream>
using namespace std;
class temp
{
private:
int val1;
float val2;
public:
void int_val(int d){
val1=d;
cout<<"Number: "<<val1;
}
float float_val(){
cout<<"\nEnter data: ";
cin>>val2;
return val2;
}
};
int main(){
temp obj1, obj2;
obj1.int_val(12);
cout<<"You entered "<<obj2.float_val();
return 0;
} | http://r4r.co.in/cpp/cpp_oops_tutorial/classes_and_Objects.html | CC-MAIN-2018-47 | refinedweb | 306 | 65.62 |
You might’ve run up against this concept in Redux called a “selector.”
In this short tutorial I’ll explain what selectors are, why they’re useful, and when (and when not) to use them.
What’s a Selector?
A selector is a small function you write that can take the entire Redux state, and pick out a value from it.
You know how
mapStateToProps works? How it takes the entire state and picks out values? Selectors basically do that. And, bonus, they improve performance too, by caching the values until state changes. Well – they can improve performance. We’ll talk about that later.
If you don’t know how mapStateToProps works, you don’t need selectors yet. I recommend you stop right here and go read my Complete Redux Tutorial for Beginners. Selectors are an advanced thing, an extra layer of abstraction on top of regular Redux. Not necessary to understand until you know the basics of Redux.
An Example State Layout
Redux gives you a store where you can put state. In a larger app, that state is usually an object, where each key of the object is managed by a separate reducer. Let’s say your state object looks like this:
{ currentUser: { token, userId, username }, shoppingCart: { itemIds, loading, error }, products: { itemsById, loading, error } }
In our made-up example, it’s keeping track of the logged-in user, the products in your shop, and the items in the user’s shopping cart.
The data here is normalized, so items in the shopping cart are referenced by ID. When a user adds an item to their cart, the item itself is not copied into the cart – only the item’s ID gets added to the
shoppingCart.itemIds array.
First, without a selector
When it comes time to get data out of the Redux state and into your React components, you’ll write a
mapStateToProps function that takes the entire state and cherry-picks the parts you need.
Let’s say you want to show the items in the shopping cart. To do that, you need the items. Buuut the
shoppingCart doesn’t have items. It only has item IDs. You have to take each ID and look it up in the
products.items array. Here’s how you might do that:
function mapStateToProps(state) { return { items: state.shoppingCart.itemIds.map(id => state.products.itemsById[id] ) } }
Changing the state shape breaks mapStateToProps
Now – what happens if you (or another dev on the team) decide “You know…
shoppingCart should really be a property of the
currentUser instead of a standalone thing.” And then they reorganize the state to look like this:
{ currentUser: { token, userId, username, shoppingCart: { itemIds, loading, error }, }, products: { itemsById, loading, error } }
Well, now your previous
mapStateToProps function is broken. It refers to
state.shoppingCart which is now held at
state.currentUser.shoppingCart.
If you had a bunch of places in your app that referred to
state.shoppingCart, it’ll be a pain to update all of them. Fear or avoidance of that annoying update process might even prevent you from reorganizing the state when you know you should.
If only we had a way to centralize the knowledge of the shape of the state… some kind of function we could call that knew how to find the data we wanted…
Well, that’s exactly what a selector is for :)
Refactor: Write a simple selector
Let’s rewrite the broken
mapStateToProps and pull out the state access into a selector.
// put this in some global-ish place, // like selectors.js, // and import it when you need to access this bit of state function selectShoppingCartItems(state) { return state.currentUser.shoppingCart.itemIds.map(id => state.products.itemsById[id] ); } function mapStateToProps(state) { return { items: selectShoppingCartItems(state) } }
Next time the state shape changes, you can update that one selector and you’re done. Easy peasy.
As far as naming goes, it’s pretty common to prefix your selector functions with
select or
get. Feel free to follow some other convention in your app, of course.
You Probably Don’t Need Selectors for Everything
You might notice that there’s a bit of overhead here. It’s another function to write and test. Another step to follow when you want to add a new piece of state. And potentially another file to think about.
So, for simple state access, or state access that you know is unlikely to be done in a lot of places, you probably don’t need a selector at all. I personally wouldn’t go “all or nothing” with selectors. Write one where it makes sense, and skip it otherwise. It’s an optional abstraction.
Using reselect for Better Performance
The selector we wrote earlier was just a plain function. It hid the detail of the state shape – great! – but it does nothing for performance. There’s no magical caching going on there.
The name is overloaded, too (yay confusion). It’s called a “selector” because it does select data from state, but most of the time when people talk about “selectors,” they mean a memoized one.
A memoized function will “remember” the last set of arguments it received, and the value it returned. Then, the next time it’s called, it’ll check – are these new arguments the same as last time? If they are, return the old value (without recomputing it). Otherwise, recompute, and remember the new set of arguments + return value. Memoization (not memorization… even though the concept is the same) is basically a fancy word for caching.
To create memoized selectors, you could write your own memoization function… or you can install the
reselect library. (there are other selector libraries too; reselect is probably the most popular)
yarn add reselect
Then you can use the
createSelector function provided by
reselect to create a memoized selector. We’re gonna split up our previous selector into a couple atomic, tiny selectors too. I’ll explain why in a second.
import { createSelector } from 'reselect'; const getProducts = state => state.products.itemsById; const getCartItemIds = state => state.currentUser.shoppingCart.itemIds; export const selectShoppingCartItems = createSelector( getProducts, getCartItemIds, (products, itemIds) => itemIds.map(id => products[id]) );
What’s going on here?
We’ve split the function up into tiny fragments. Each fragment is a standalone selector for one piece of data.
getProducts knows where to find products;
Then, we combine the fragments with the
createSelector function. It takes the fragments (as many as you have) and a transform function (that’s what we’re calling the last argument there).
That transform function receives the results from the fragments, and then it can do whatever it needs to do. Whatever it returns is what gets returned by the “master” selector.
createSelector returns the “master” selector which can take
state and optionally
props, and it passes
(state, props) to each of the fragment selectors.
All of this work gets you a nice benefit: the transform function, which might be expensive/slow to run, will only run if one of the fragments returns a different value than last time (make sure the fragments are fast – like plain property access, not like
mapping over an array).
I’ll mention again that if your transform function isn’t expensive (as in, it’s only accessing a property like
state.foo.bar, or adding a couple numbers together or whatever), then it probably isn’t worth the hassle of creating a memoized selector for it.
Memoize Once, or Memoize Everything?
The
reselect library only remembers one call + return – the very last one. If you want it to remember multiple sets of arguments and their return values, take a look at the re-reselect library. It’s a wrapper around
reselect, so it works the same on the outside, but on the inside it can cache more stuff.
That’s Selectors in a Nutshell
That pretty much covers the basics! Hopefully selectors make a little more sense now. | https://daveceddia.com/redux-selectors/ | CC-MAIN-2019-35 | refinedweb | 1,315 | 64.71 |
Chapter 15
Advanced Internet Topics
"Surfing on the Shoulders of Giants"
This chapter concludes our look at Python Internet programming by exploring a handful of Internet-related topics and packages. We've covered many Internet topics in the previous five chapters--socket basics, client and server-side scripting tools, and programming full-blown web sites with Python. Yet we still haven't seen many of Python's standard built-in Internet modules in action. Moreover, there is a rich collection of third-party extensions for scripting the Web with Python that we have not touched on at all.
In this chapter, we explore a grab-bag of additional Internet-related tools and third-party extensions of interest to Python Internet developers. Along the way, we meet larger Internet packages, such as HTMLgen, JPython, Zope, PSP, Active Scripting, and Grail. We'll also study standard Python tools useful to Internet programmers, including Python's restricted execution mode, XML support, COM interfaces, and techniques for implementing proprietary servers. In addition to their practical uses, these systems demonstrate just how much can be achieved by wedding a powerful object-oriented scripting language such as Python to the Web.
Before we start, a disclaimer: none of these topics is presented in much detail here, and undoubtedly some interesting Internet systems will not be covered at all. Moreover, the Internet evolves at lightning speed, and new tools and techniques are certain to emerge after this edition is published; indeed, most of the systems in this chapter appeared in the five years after the first edition of this book was written, and the next five years promise to be just as prolific. As always, the standard moving-target caveat applies: read the Python library manual's Internet section for details we've skipped, and stay in touch with the Python community at for information about extensions not covered due to a lack of space or a lack of clairvoyance.
Zope: A Web Publishing Framework
Zope is an open source web-application server and toolkit, written in and customizable with Python. It is a server-side technology that allows web designers to implement sites and applications by publishing Python object hierarchies on the Web. With Zope, programmers can focus on writing objects, and let Zope handle most of the underlying HTTP and CGI details. If you are interested in implementing more complex web sites than the form-based interactions we've seen in the last three chapters, you should investigate Zope: it can obviate many of the tasks that web scripters wrestle with on a daily basis.
Sometimes compared to commercial web toolkits such as ColdFusion, Zope is made freely available over the Internet by a company called Digital Creations and enjoys a large and very active development community. Indeed, many attendees at a recent Python conference were attracted by Zope, which had its own conference track. The use of Zope has spread so quickly that many Pythonistas now look to it as Python's "killer application"--a system so good that it naturally pushes Python into the development spotlight. At the least, Zope offers a new, higher-level way of developing sites for the Web, above and beyond raw CGI scripting.[1]
Zope Components
Zope began life as a set of tools (part of which was named "Bobo") placed in the public domain by Digital Creations. Since then, it has grown into a large system with many components, a growing body of add-ons (called "products" in Zope parlance), and a fairly steep learning curve. We can't do it any sort of justice in this book, but since Zope is one of the most popular Python-based applications at this writing, I'd be remiss if I didn't provide a few details here.
In terms of its core components, Zope includes the following parts:
- Zope Object Request Broker (ORB)
- At the heart of Zope, the ORB dispatches incoming HTTP requests to Python objects and returns results to the requestor, working as a perpetually running middleman between the HTTP CGI world and your Python objects. The Zope ORB is described further in the next section.
- HTML document templates
- Zope provides a simple way to define web pages as templates, with values automatically inserted from Python objects. Templates allow an object's HTML representation to be defined independently of the object's implementation. For instance, values of attributes in a class instance object may be automatically plugged into a template's text by name. Template coders need not be Python coders, and vice versa.
- Object database
- To record data persistently, Zope comes with a full object-oriented database system for storing Python objects. The Zope object database is based on the Python
pickleserialization module we'll meet in the next part of this book, but adds support for transactions, lazy object fetches (sometimes called delayed evaluation), concurrent access, and more. Objects are stored and retrieved by key, much as they are with Python's standard
shelvemodule, but classes must subclass an imported
Persistentsuperclass, and object stores are instances of an imported
PickleDictionaryobject. Zope starts and commits transactions at the start and end of HTTP requests.
Zope also includes a management framework for administrating sites, as well as a product API used to package components. Zope ships with these and other components integrated into a whole system, but each part can be used on its own as well. For instance, the Zope object database can be used in arbitrary Python applications by itself.
What's Object Publishing?
If you're like me, the concept of publishing objects on the Web may be a bit vague at first glance, but it's fairly simple in Zope: the Zope ORB automatically maps URLs requested by HTTP into calls on Python objects. Consider the Python module and function in Example 15-1.
Example 15-1: PP2E\Internet\Other\messages.py
"A Python module published on the Web by Zope"
def greeting(size='brief', topic='zope'):
"a published Python function"
return 'A %s %s introduction' % (size, topic)
This is normal Python code, of course, and says nothing about Zope, CGI, or the Internet at large. We may call the function it defines from the interactive prompt as usual:
C:\...\PP2E\Internet\Other>python
>>> import messages
>>> messages.greeting( )
'A brief zope introduction'
>>> messages.greeting(size='short')
'A short zope introduction'
>>> messages.greeting(size='tiny', topic='ORB')
'A tiny ORB introduction'
But if we place this module file, along with Zope support files, in the appropriate directory on a server machine running Zope, it automatically becomes visible on the Web. That is, the function becomes a published object --it can be invoked through a URL, and its return value becomes a response page. For instance, if placed in a
cgi-bindirectory on a server called
myserver.net, the following URLs are equivalent to the three calls above:
When our function is accessed as a URL over the Web this way, the Zope ORB performs two feats of magic:
- The URL is automatically translated into a call to the Python function. The first part of the URL after the directory path (
messages) names the Python module, the second (
greeting) names a function or other callable object within that module, and any parameters after the
?become keyword arguments passed to the named function.
- After the function runs, its return value automatically appears in a new page in your web browser. Zope does all the work of formatting the result as a valid HTTP response.
In other words, URLs in Zope become remote function calls, not just script invocations. The functions (and methods) called by accessing URLs are coded in Python, and may live at arbitrary places on the Net. It's as if the Internet itself becomes a Python namespace, with one module directory per server.
Zope is a server-side technology based on objects, not text streams; the main advantage of this scheme is that the details of CGI input and output are handled by Zope, while programmers focus on writing domain objects, not on text generation. When our function is accessed with a URL, Zope automatically finds the referenced object, translates incoming parameters to function call arguments, runs the function, and uses its return value to generate an HTTP response. In general, a URL like:
is mapped by the Zope ORB running on
servernameinto a call to a Python object in a Python module file installed in
dirpath, taking the form:
module.object1.object2.method(arg1=val1, arg2=val2)
The return value is formatted into an HTML response page sent back to the client requestor (typically a browser). By using longer paths, programs can publish complete hierarchies of objects; Zope simply uses Python's generic object-access protocols to fetch objects along the path.
As usual, a URL like those listed here can appear as the text of a hyperlink, typed manually into a web browser, or used in an HTTP request generated by a program (e.g., using Python's
urllibmodule in a client-side script). Parameters are listed at the end of these URLs directly, but if you post information to this URL with a form instead, it works the same way:
<form action="" method=POST>
Size: <input type=text name=size>
Topic: <input type=text name=topic value=zope>
<input type=submit>
</form>
Here, the
actiontag references our function's URL again; when the user fills out this form and presses its submit button, inputs from the form sent by the browser magically show up as arguments to the function again. These inputs are typed by the user, not hardcoded at the end of a URL, but our published function doesn't need to care. In fact, Zope recognizes a variety of parameter sources and translates them all into Python function or method arguments: form inputs, parameters at the end of URLs, HTTP headers and cookies, CGI environment variables, and more.
This just scratches the surface of what published objects can do, though. For instance, published functions and methods can use the Zope object database to save state permanently, and Zope provides many more advanced tools such as debugging support, precoded HTTP servers for use with the ORB, and finer-grained control over responses to URL requestors.
For all things Zope, visit. There, you'll find up-to-date releases, as well as documentation ranging from tutorials to references to full-blown Zope example sites. Also see this book's CD for a copy of the Zope distribution, current as of the time we went to press.TIP: Python creator Guido van Rossum and his Pythonlabs team of core Python developers have moved from BeOpen to Digital Creations, home of the Zope framework introduced here. Although Python itself remains an open source system, Guido's presence at Digital Creations is seen as a strategic move that will foster future growth of both Zope and Python.
HTMLgen: Web Pages from Objects
One of the things that makes CGI scripts complex is their inherent dependence on HTML: they must embed and generate legal HTML code to build user interfaces. These tasks might be easier if the syntax of HTML were somehow removed from CGI scripts and handled by an external tool.
HTMLgen is a third-party Python tool designed to fill this need. With it, programs build web pages by constructing trees of Python objects that represent the desired page and "know" how to format themselves as HTML. Once constructed, the program asks the top of the Python object tree to generate HTML for itself, and out comes a complete, legally formatted HTML web page.
Programs that use HTMLgen to generate pages need never deal with the syntax of HTML; instead, they can use the higher-level object model provided by HTMLgen and trust it to do the formatting step. HTMLgen may be used in any context where you need to generate HTML. It is especially suited for HTML generated periodically from static data, but can also be used for HTML creation in CGI scripts (though its use in the CGI context incurs some extra speed costs). For instance, HTMLgen would be ideal if you run a nightly job to generate web pages from database contents. HTMLgen can also be used to generate documents that don't live on the Web at all; the HTML code it produces works just as well when viewed offline.
A Brief HTMLgen Tutorial
We can't investigate HTMLgen in depth here, but let's look at a few simple examples to sample the flavor of the system. HTMLgen is shipped as a collection of Python modules that must be installed on your machine; once it's installed, you simply import objects from the
HTMLgenmodule corresponding to the tag you wish to generate, and make instances:
C:\Stuff\HTMLgen\HTMLgen>python
>>> from HTMLgen import *
>>> p = Paragraph("Making pages from objects is easy\n")
>>> p
<HTMLgen.Paragraph instance at 7dbb00>
>>> print p
<P>Making pages from objects is easy
</P>
Here, we make a
HTMLgen.Paragraphobject (a class instance), passing in the text to be formatted. All HTMLgen objects implement
__str__methods and can emit legal HTML code for themselves. When we print the
Paragraphobject, it emits an HTML paragraph construct. HTMLgen objects also define
appendmethods, which do the right thing for the object type;
Paragraphs simply add appended text to the end of the text block:
>>> p.append("Special < characters > are & escaped")
>>> print p
<P>Making pages from objects is easy
Special < characters > are & escaped</P>
Notice that HTMLgen escaped the special characters (e.g.,
<means
<) so that they are legal HTML; you don't need to worry about writing either HTML or escape codes yourself. HTMLgen has one class for each HTML tag; here is the
Listobject at work, creating an ordered list:
>>> choices = ['python', 'tcl', 'perl']
>>> print List(choices)
<UL>
<LI>python
<LI>tcl
<LI>perl
</UL>
In general, HTMLgen is smart about interpreting data structures you pass to it. For instance, embedded sequences are automatically mapped to the HTML code for displaying nested lists:
>>> choices = ['tools', ['python', 'c++'], 'food', ['spam', 'eggs']]
>>> l = List(choices)
>>> print l
<UL>
<LI>tools
<UL>
<LI>python
<LI>c++
</UL>
<LI>food
<UL>
<LI>spam
<LI>eggs
</UL>
</UL>
Hyperlinks are just as easy: simply make and print an
Hrefobject with the link target and text. (The text argument can be replaced by an image, as we'll see later in Example 15-3.)
>>> h = Href('', 'python')
>>> print h
<A HREF="">python</A>
To generate HTML for complete pages, we create one of the HTML document objects, append its component objects, and print the document object. HTMLgen emits a complete page's code, ready to be viewed in a browser:
>>> d = SimpleDocument(title='My doc')
>>> p = Paragraph('Web pages made easy')
>>> d.append(p)
>>> d.append(h)
>>> print d
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2//EN">
<HTML>
<!-- This file generated using Python HTMLgen module. -->
<HEAD>
<META NAME="GENERATOR" CONTENT="HTMLgen 2.2.2">
<TITLE>My doc</TITLE>
</HEAD>
<BODY>
<P>Web pages made easy</P>
<A HREF="">python</A>
</BODY> </HTML>
There are other kinds of document classes, including a
SeriesDocumentthat implements a standard layout for pages in a series.
SimpleDocumentis simple indeed: it's essentially a container for other components, and generates the appropriate wrapper HTML code. HTMLgen also provides classes such as
Table,
Form, and so on, one for each kind of HTML construct.
Naturally, you ordinarily use HTMLgen from within a script, so you can capture the generated HTML in a file or send it over an Internet connection in the context of a CGI application (remember, printed text goes to the browser in the CGI script environment). The script in Example 15-2 does roughly what we just did interactively, but saves the printed text in a file.
Example 15-2: PP2E\Internet\Other\htmlgen101.py
import sys
from HTMLgen import *
p = Paragraph('Making pages from objects is easy.\n')
p.append('Special < characters > are & escaped')
choices = ['tools', ['python', 'c++'], 'food', ['spam', 'eggs']]
l = List(choices)
s = SimpleDocument(title="HTMLgen 101")
s.append(Heading(1, 'Basic tags'))
s.append(p)
s.append(l)
s.append(HR( ))
s.append(Href('', 'Python home page'))
if len(sys.argv) == 1:
print s # send html to sys.stdout or real file
else:
open(sys.argv[1], 'w').write(str(s))
This script also uses the
HRobject to format a horizontal line, and
Headingto insert a header line. It either prints HTML to the standard output stream (if no arguments are listed) or writes HTML to an explicitly named file; the
strbuilt-in function invokes object
__str_ _methods just as
C:\...\PP2E\Internet\Other>python htmlgen101.py > htmlgen101.html
C:\...\PP2E\Internet\Other>python htmlgen101.py htmlgen101.html
Either way, the script's output is a legal HTML page file, which you can view in your favorite browser by typing the output filename in the address field or clicking on the file in your file explorer. Either way, it will look a lot like Figure 15-1.
See file htmlgen101.html in the examples distribution if you wish to inspect the HTML generated to describe this page directly (it looks much like the prior document's output). Example 15-3 shows another script that does something less hardcoded: it constructs a web page to display its own source code.
Example 15-3: PP2E\Internet\Other\htmlgen101-b.py
import sys
from HTMLgen import *
d = SimpleDocument(title="HTMLgen 101 B")
# show this script
text = open('htmlgen101-b.py', 'r').read( )
d.append(Heading(1, 'Source code'))
d.append(Paragraph( PRE(text) ))
# add gif and links
site = ''
gif = 'PythonPoweredSmall.gif'
image = Image(gif, alt='picture', align='left', hspace=10, border=0)
d.append(HR( ))
d.append(Href(site, image))
d.append(Href(site, 'Python home page'))
if len(sys.argv) == 1:
print d
else:
open(sys.argv[1], 'w').write(str(d))
We use the
PREobject here to specify preformatted text, and the
Imageobject to generate code to display a GIF file on the generated page. Notice that HTML tag options such as
altand
alignare specified as keyword arguments when making HTMLgen objects. Running this script and pointing a browser at its output yields the page shown in Figure 15-2; the image at the bottom is also a hyperlink, because it was embedded inside an
Hrefobject.
And that (along with a few nice advanced features) is all there is to using HTMLgen. Once you become familiar with it, you can construct web pages by writing Python code, without ever needing to manually type HTML tags again. Of course, you still must write code with HTMLgen instead of using a drag-and-drop page layout tool, but that code is incredibly simple and supports the addition of more complex programming logic where needed to construct pages dynamically.
In fact, now that you're familiar with HTMLgen, you'll see that many of the HTML files shown earlier in this book could have been simplified by recoding them to use HTMLgen instead of direct HTML code. The earlier CGI scripts could have used HTMLgen as well, albeit with additional speed overheads--printing text directly is faster than generating it from object trees, though perhaps not significantly so (CGI scripts are generally bound to network speeds, not CPU speed).
HTMLgen is open source software, but it is not a standard part of Python and must therefore be installed separately. You can find a copy of HTMLgen on this book's CD, but the Python web site should have its current location and version. Once installed, simply add the HTMLgen path to your PYTHONPATH variable setting to gain access to its modules. For more documentation about HTMLgen, see the package itself: its html subdirectory includes the HTMLgen manual in HTML format.
JPython ( Jython): Python for Java
JPython (recently renamed "Jython") is an entirely distinct implementation of the Python programming language that allows programmers to use Python as a scripting component in Java-based applications. In short, JPython makes Python code look like Java, and consequently offers a variety of technology options inherited from the Java world. With JPython, Python code may be run as client-side applets in web browsers, as server-side scripts, and in a variety of other roles. JPython is distinct from other systems mentioned in this section in terms of its scope: while it is based on the core Python language we've seen in this book, it actually replaces the underlying implementation of that language rather than augmenting it.[2]
This section briefly explores JPython and highlights some of the reasons you may or may not want to use it instead of the standard Python implementation. Although JPython is primarily of interest to programmers writing Java-based applications, it underscores integration possibilities and language definition issues that merit the attention of all Python users. Because JPython is Java-centric, you need to know something about Java development to make the most sense of JPython, and this book doesn't pretend to teach that in the next few pages. For more details, interested readers should consult other materials, including JPython documentation at.
TIP: The JPython port is now called "Jython." Although you are likely to still see it called by its original JPython name on the Net (and in this book) for some time, the new Jython title will become more common as time goes by.
A Quick Introduction to JPython
Functionally speaking, JPython is a collection of Java classes that run Python code. It consists of a Python compiler, written in Java, that translates Python scripts to Java bytecodes so they can be executed by a Java virtual machine --the runtime component that executes Java programs and is used by major web browsers. Moreover, JPython automatically exposes all Java class libraries for use in Python scripts. In a nutshell, here's what comes with the JPython system:
- Python-to-Java-bytecode compiler
- JPython always compiles Python source code into Java bytecode and passes it to a Java virtual machine (JVM) runtime engine to be executed. A command-line compiler program,
jpythonc, is also able to translate Python source code files into Java .class and .jar files, which can then be used as Java applets, beans, servlets, and so on. To the JVM, Python code run through JPython looks the same as Java code. Besides making Python code work on a JVM, JPython code also inherits all aspects of the Java runtime system, including Java's garbage collection and security models.
jpythoncalso imposes Java source file class rules.
- Access to Java class libraries (extending )
- JPython uses Java's reflection API (runtime type information) to expose all available Java class libraries to Python scripts. That is, Python programs written for the JPython system can call out to any resident Java class automatically simply by importing it. The Python-to-Java interface is completely automatic and remarkably seamless--Java class libraries appear as though they are coded in Python. Import statements in JPython scripts may refer to either JPython modules or Java class libraries. For instance, when a JPython script imports
java.awt, it gains access to all the tools available in the
awtlibrary. JPython internally creates a "dummy" Python module object to serve as an interface to
awtat import time. This dummy module consists of hooks for dispatching calls from JPython code to Java class methods and automatically converting datatypes between Java and Python representations as needed. To JPython scripts, Java class libraries look and feel exactly like normal Python modules (albeit with interfaces defined by the Java world).
- Unified object model
- JPython objects are actually Java objects internally. In fact, JPython implements Python types as instances of a Java
PyObjectclass. By contrast, C Python classes and types are still distinct in the current release. For instance, in JPython, the number 123 is an instance of the
PyIntegerJava class, and you can specify things like
[].__class__since all objects are class instances. That makes data mapping between languages simple: Java can process Python objects automatically, because they are Java objects. JPython automatically converts types between languages according to a standard type map as needed to call out to Java libraries, and selects among overloaded Java method signatures.
- API for running Python from Java (embedding )
- JPython also provides interfaces that allow Java programs to execute JPython code. As for embedding in C and C++, this allows Java applications to be customized by bits of dynamically written JPython code. For instance, JPython ships with a Java
PythonInterpreterclass, which allows Java programs to create objects that function as Python namespaces for running Python code. Each
PythonInterpreterobject is roughly a Python module, with methods such as
exec(a string of Python code
),
execfile(a Python filename
), and
getand
setmethods for assigning Python global variables. Because Python objects are really instances of a Java
PyObjectclass, an enclosing Java layer can access and process data created by Python code naturally.
- Interactive Python command line
- Like the standard Python implementation, JPython comes with an interactive command line that runs code immediately after it is typed. JPython's
jpythonprogram is equivalent to the
pythonexecutable we've been using in this book; without arguments, it starts an interactive session. Among other things, this allows JPython programmers to import and test class components actually written in Java. This ability alone is compelling enough to interest many Java programmers.
- Interface automations
- Java libraries are somewhat easier to use in JPython code than in Java. That's because JPython automates some of the coding steps Java implies. For instance, callback handlers for Java GUI libraries may be simple Python functions, even though Java coders need to provide methods in fully specified classes (Java does not have first-class function objects). JPython also makes Java class data members accessible as both Python attribute names (object.name) and object constructor keyword arguments (name=value); such Python syntax is translated into calls to
getNameand
setNameaccessor methods in Java classes. We'll see these automation tricks in action in the following examples. You don't have to use any of these (and they may confuse Java programmers at first glance), but they further simplify coding in JPython, and give Java class libraries a more Python-like flavor.
The net effect of all this is that JPython allows us to write Python programs that can run on any Java-aware machine--in particular, in the context of most web browsers. More importantly, because Python programs are translated into Java bytecodes, JPython provides an incredibly seamless and natural integration between the two languages. Both walk and talk in terms of the Java model, so calls across language boundaries are trivial. With JPython's approach, it's even possible to subclass a Java class in Python and vice versa.
So why go to all this trouble to mix Python into Java environments? The most obvious answer is that JPython makes Java components easier to use: JPython scripts are typically a fraction of the size of their Java equivalents, and much less complex. More generally, the answer is really the same as it is for C and C++ environments: Python, as an easy-to-use, object-oriented scripting language, naturally complements the Java programming language.
By now, it is clear to most people that Java is too complex to serve as a scripting or rapid-development tool. But this is exactly where Python excels; by adding Python to the mix with JPython, we add a scripting component to Java systems, exactly as we do when integrating Python with C or C++. For instance, we can use JPython to quickly prototype Java systems, test Java classes interactively, and open up Java systems for end-user customization. In general, adding Python to Java development can significantly boost programmer productivity, just as it does for C and C++ systems.
A Simple JPython Example
Once a Python program is compiled with JPython, it is all Java: the program is translated to Java bytecodes, it uses Java classes to do its work, and there is no Python left except for the original source code. Because the compiler tool itself is also written in Java, JPython is sometimes called "100% pure Java." That label may be more profound to marketeers than programmers, though, because JPython scripts are still written using standard Python syntax. For instance, Example 15-4 is a legal JPython program, derived from an example originally written by Guido van Rossum.
Example 15-4: PP2E\Internet\Other\jpython.py
############################################
# implement a simple calculator in JPython;
# evaluation runs a full expression all at
# once using the Python eval( ) built-in--
# JPython's compiler is present at run-time
############################################
from java import awt # get access to Java class libraries
from pawt import swing # they look like Python modules here
labels = ['0', '1', '2', '+', # labels for calculator buttons
'3', '4', '5', '-', # will be used for a 4x4 grid
'6', '7', '8', '*',
'9', '.', '=', '/' ]
keys = swing.JPanel(awt.GridLayout(4, 4)) # do Java class library magic
display = swing.JTextField( ) # Python data auto-mapped to Java
def push(event): # callback for regular keys
display.replaceSelection(event.actionCommand)
def enter(event): # callback for the '=' key
display.text = str(eval(display.text)) # use Python eval( ) to run expr
display.selectAll( )
for label in labels: # build up button widget grid
key = swing.JButton(label) # on press, invoke Python funcs
if label == '=':
key.actionPerformed = enter
else:
key.actionPerformed = push
keys.add(key)
panel = swing.JPanel(awt.BorderLayout( )) # make a swing panel
panel.add("North", display) # text plus key grid in middle
panel.add("Center", keys)
swing.test(panel) # start in a GUI viewer
The first thing you should notice is that this is genuine Python code--JPython scripts use the same core language that we've been using all along in this book. That's good news, both because Python is such an easy language to use and because you don't need to learn a new, proprietary scripting language to use JPython. It also means that all of Python's high-level language syntax and tools are available. For example, in this script, the Python
evalbuilt-in function is used to parse and evaluate constructed expressions all at once, saving us from having to write an expression evaluator from scratch.
Interface Automation Tricks
The previous calculator example also illustrates two interface automations performed by JPython: function callback and attribute mappings. Java programmers may have already noticed that this example doesn't use classes. Like standard Python and unlike Java, JPython supports but does not impose OOP. Simple Python functions work fine as callback handlers. In Example 15-4, assigning
key.actionPerformedto a Python function object has the same effect as registering an instance of a class that defines a callback handler method:
def push(event):
...
key = swing.JButton(label)
key.actionPerformed = push
This is noticeably simpler than the more Java-like:
class handler(awt.event.ActionListener):
def actionPerformed(self, event):
...
key = swing.JButton(label)
key.addActionListener(handler( ))
JPython automatically maps Python functions to the Java class method callback model. Java programmers may now be wondering why we can assign to something named
key.actionPerformedin the first place. JPython's second magic feat is to make Java data members look like simple object attributes in Python code. In abstract terms, JPython code of the form:
X = Object(argument)
X.property = value + X.property
is equivalent to the more traditional and complex Java style:
X = Object(argument)
X.setProperty(value + X.getProperty( ))
That is, JPython automatically maps attribute assignments and references to Java accessor method calls by inspecting Java class signatures (and possibly Java BeanInfo files if used). Moreover, properties can be assigned with keyword arguments in object constructor calls, such that:
X = Object(argument, property=value)
is equivalent to both this more traditional form:
X = Object(argument)
X.setProperty(value)
as well as the following, which relies on attribute name mapping:
X = Object(argument)
X.property = value
We can combine both callback and property automation for an even simpler version of the callback code snippet:
def push(event):
...
key = swing.JButton(label, actionPerformed=push)
You don't need to use these automation tricks, but again, they make JPython scripts simpler, and that's most of the point behind mixing Python with Java.
Writing Java Applets in JPython
I would be remiss if I didn't include a brief example of JPython code that more directly masquerades as a Java applet: code that lives on a server machine but is downloaded to and run on the client machine when its Internet address is referenced. Most of the magic behind this is subclassing the appropriate Java class in a JPython script, demonstrated in Example 15-5.
Example 15-5: PP2E\Internet\Other\jpython-applet.py
#######################################
# a simple java applet coded in Python
#######################################
from java.applet import Applet # get java superclass
class Hello(Applet):
def paint(self, gc): # on paint callback
gc.drawString("Hello applet world", 20, 30) # draw text message
if __name__ == '__main_ _': # if run standalone
import pawt # get java awt lib
pawt.test(Hello( )) # run under awt loop
The Python class in this code inherits all the necessary applet protocol from the standard Java
Appletsuperclass, so there is not much new to see here. Under JPython, Python classes can always subclass Java classes, because Python objects really are Java objects when compiled and run. The Python-coded
paintmethod in this script will be automatically run from the Java AWT event loop as needed; it simply uses the passed-in
gcuser-interface handle object to draw a text message.
If we use JPython's
jpythonccommand-line tool to compile this into a Java .class file and properly store that file on a web server, it can then be used exactly like applets written in Java. Because most web browsers include a JVM, this means that such Python scripts may be used as client-side programs that create sophisticated user-interface devices within the browser, and so on.
JPython Trade-offs
Depending on your background, though, the somewhat less good news about JPython is that even though the calculator and applet scripts discussed here are straight Python code, the libraries they use are different than what we've seen so far. In fact, the library calls employed are radically different. The calculator, for example, relies primarily on imported Java class libraries, not standard Python libraries. You really need to understand Java's
awtand
swinglibraries to make sense of its code, and this library skew between language implementations becomes more acute as programs grow larger. The applet example is even more Java-bound: it depends both on Java user-interface libraries and Java applet protocols.
If you are already familiar with Java libraries, this isn't an issue at all, of course. But because most of the work performed by realistic programs is done by using libraries, the fact that most JPython code relies on very different libraries makes compatibility with standard Python less potent than it may seem at first glance. To put that more strongly, apart from very trivial core language examples, many JPython programs won't run on the standard Python interpreter, and many standard Python programs won't work under JPython.
Generally, JPython presents a number of trade-offs, partly due to its relative immaturity as of this writing. I want to point out up front that JPython is indeed an excellent Java scripting tool--arguably the best one available, and most of its trade-offs are probably of little or no concern to Java developers. For instance, if you are coming to JPython from the Java world, the fact that Java libraries are at the heart of JPython scripts may be more asset than downside. But if you are presented with a choice between the standard and Java-based Python language implementations, some of JPython's implications are worth knowing about:
- JPython is not yet fully compatible with the standard Python language
- At this writing, JPython is not yet totally compatible with the standard Python language, as defined by the original C implementation. In subtle ways, the core Python language itself works differently in JPython. For example, until very recently, assigning file-like objects to the standard input
sys.stdinfailed, and exceptions were still strings, not class objects. The list of incompatibilities (viewable at) will likely shrink over time, but will probably never go away completely. Moreover, new language features are likely to show up later in JPython than in the standard C-based implementation.
- JPython requires programmers to learn Java development too
- Language syntax is only one aspect of programming. The library skew mentioned previously is just one example of JPython's dependence on the Java system. Not only do you need to learn Java libraries to get real work done in JPython, but you also must come to grips with the Java programming environment in general. Many standard Python libraries have been ported to JPython, and others are being adopted regularly. But major Python tools such as Tkinter GUIs may show up late or never in JPython (and instead are replaced with Java tools).[3] In addition, many core Python library features cannot be supported in JPython, because they would violate Java's security constraints. For example, the
os.systemcall for running shell commands may never become available in JPython.
- JPython applies only where a JVM is installed or shipped
- You need the Java runtime to run JPython code. This may sound like a non-issue given the pervasiveness of the Internet, but I have very recently worked in more than one company for which delivering applications to be run on JVMs was not an option. Simply put, there was no JVM to be found at the customer's site. In such scenarios, JPython is either not an option, or will require you to ship a JVM with your application just to run your compiled JPython code. Shipping the standard Python system with your products is completely free; shipping a JVM may require licensing and fees. This may become less of a concern as robust open source JVMs appear. But if you wish to use JPython today and can't be sure that your clients will be able to run your systems in Java-aware browsers (or other JVM components), you should consider the potential costs of shipping a Java runtime system with your products.[4]
- JPython doesn't support Python extension modules written in C or C++
- At present, no C or C++ extension modules written to work with the C Python implementation will work with JPython. This is a major impediment to deploying JPython outside the scope of applications run in a browser. To date, the half-million-strong Python user community has developed thousands of extensions written for C Python, and these constitute much of the substance of the Python development world. JPython's current alternative is to instead expose Java class libraries and ask programmers to write new extensions in Java. But this dismisses a vast library of prior and future Python art. In principle, C extensions could be supported by Java's native call interface, but it's complex, has not been done, and can negate Java portability and security.
- JPython is noticeably slower than C Python
- Today, Python code generally runs slower under the JPython implementation. How much slower depends on what you test, which JVM you use to run your test, whether a just-in-time ( JIT) compiler is available, and which tester you cite. Posted benchmarks have run the gamut from 1.7 times slower than C Python, to 10 times slower, and up to 100 times slower. Regardless of the exact number, the extra layer of logic JPython requires to map Python to the Java execution model adds speed overheads to an already slow JVM and makes it unlikely that JPython will ever be as fast as the C Python implementation. Given that C Python is already slower than compiled languages like C, the additional slowness of JPython makes it less useful outside the realm of Java scripting. Furthermore, the
SwingGUI library used by JPython scripts is powerful, but generally considered to be the slowest and largest of all Python GUI options. Given that Python's Tkinter library is a portable and standard GUI solution, Java's proprietary user-interface tools by themselves are probably not reason enough to use the JPython implementation.
- JPython is less robust than C Python
- At this writing, JPython is substantially more buggy than the standard C implementation of the language. This is certainly due to its younger age and smaller user base and varies from JVM to JVM, but you are more likely to hit snags in JPython. In contrast, C Python has been amazingly bug-free since its introduction in 1990.
- JPython may be less portable than C Python
- It's also worth noting that as of this writing, the core Python language is far more portable than Java (despite marketing statements to the contrary). Because of that, deploying standard Python code with the Java-based JPython implementation may actually lessen its portability. Naturally, this depends on the set of extensions you use, but standard Python runs today on everything from handheld PDAs and PCs to Cray supercomputers and IBM mainframes.
Some incompatibilities between JPython and standard Python can be very subtle. For instance, JPython inherits all of the Java runtime engine's behavior, including Java security constraints and garbage collection. Java garbage collection is not based on standard Python's reference count scheme, and therefore can automatically collect cyclic objects.[5] It also means that some common Python programming idioms won't work. For example, it's typical in Python to code file-processing loops in this form:
for filename in bigfilenamelist:
text = open(filename).read( )
dostuffwith(text)
That works because files are automatically closed when garbage-collected in standard Python, and we can be sure that the file object returned by the
opencall will be immediately garbage collected (it's a temporary, so there are no more references as soon as we call
read). It won't work in JPython, though, because we can't be sure when the temporary file object will be reclaimed. To avoid running out of file descriptors, we usually need to code this differently for JPython:
for filename in bigfilenamelist:
file = open(filename)
text = file.read( )
dostuffwith(text)
file.close( )
You may face a similar implementation mismatch if you assume that output files are immediately closed:
open(name,'w').write(bytes)collects and closes the temporary file object and hence flushes the bytes out to the file under the standard C implementation of Python only, while JPython instead collects the file object at some arbitrary time in the future. In addition to such file-closing concerns, Python
__del_ _class destructors are never called in JPython, due to complications associated with object termination.
Picking Your Python
Because of concerns such as those just mentioned, the JPython implementation of the Python language is probably best used only in contexts where Java integration or web browser interoperability are crucial design concerns. You should always be the judge, of course, but the standard C implementation seems better suited to most other Python applications. Still, that leaves a very substantial domain to JPython--almost all Java systems and programmers can benefit from adding JPython to their tool sets.
JPython allows programmers to write programs that use Java class libraries in a fraction of the code and complexity required by Java-coded equivalents. Hence, JPython excels as an extension language for Java-based systems, especially those that will run in the context of web browsers. Because Java is a standard component of most web browsers, JPython scripts will often run automatically without extra install steps on client machines. Furthermore, even Java-coded applications that have nothing to do with the Web can benefit from JPython's ease of use; its seamless integration with Java class libraries makes JPython simply the best Java scripting and testing tool available today.
For most other applications, though, the standard Python implementation, possibly integrated with C and C++ components, is probably a better design choice. The resulting system will likely run faster, cost less to ship, have access to all Python extension modules, be more robust and portable, and be more easily maintained by people familiar with standard Python.
On the other hand, I want to point out again that the trade-offs listed here are mostly written from the Python perspective; if you are a Java developer looking for a scripting tool for Java-based systems, many of these detriments may be of minor concern. And to be fair, some of JPython's problems may be addressed in future releases; for instance, its speed will probably improve over time. Yet even as it exists today, JPython clearly makes an ideal extension-language solution for Java-based applications, and offers a much more complete Java scripting solution than those currently available for other scripting languages.[6]
For more details, see the JPython package included on this book's CD, and consult the JPython home page, currently maintained at. At least one rumor has leaked concerning an upcoming JPython book as well, so check for developments on this front. See also the sidebar later in this chapter about the new Python implementation for the C#/.NET environment on Windows. It seems likely that there will be three Pythons to choose from very soon (not just two), and perhaps more in the future. All will likely implement the same core Python language we've used in this text, but may emphasize alternative integration schemes, application domains, development environments, and so on.
Grail: A Python-Based Web Browser
I briefly mentioned the Grail browser near the start of Chapter 10. Many of Python's Internet tools date back to and reuse the work that went into Grail, a full-blown Internet web browser that:
- Is written entirely in Python
- Uses the Tkinter GUI API to implement its user interface and render pages
- Downloads and runs Python/Tkinter scripts as client-side applets
As mentioned earlier, Grail was something of a proof-of-concept for using Python to code large-scale Internet applications. It implements all the usual Internet protocols and works much like common browsers such as Netscape and Internet Explorer. Grail pages are implemented with the Tk text widgets that we met in the GUI part of this book.
More interestingly, the Grail browser allows applets to be written in Python. Grail applets are simply bits of Python code that live on a server but are run on a client. If an HTML document references a Python class and file that live on a server machine, Grail automatically downloads the Python code over a socket and runs it on the client machine, passing it information about the browser's user interface. The downloaded Python code may use the passed-in browser context information to customize the user interface, add new kinds of widgets to it, and perform arbitrary client-side processing on the local machine. Roughly speaking, Python applets in Grail serve the same purposes as Java applets in common Internet browsers: they perform client-side tasks that are too complex or slow to implement with other technologies such as server-side CGI scripts and generated HTML.
A Simple Grail Applet Example
Writing Grail applets is remarkably straightforward. In fact, applets are really just Python/Tkinter programs; with a few exceptions, they don't need to "know" about Grail at all. Let's look at a short example; the code in Example 15-6 simply adds a button to the browser, which changes its appearance each time it's pressed (its bitmap is reconfigured in the button callback handler).
There are two components to this page definition: an HTML file and the Python applet code it references. As usual, the grail.html HTML file that follows describes how to format the web page when the HTML's URL address is selected in a browser. But here, the
APPtag also specifies a Python applet (class) to be run by the browser. By default, the Python module is assumed to have the same name as the class and must be stored in the same location (URL directory) as the HTML file that references it. Additional
APPtag options can override the applet's default location.
Example 15-6: PP2E\Internet\Other\grail.html
<HEAD>
<TITLE>Grail Applet Test Page</TITLE>
</HEAD>
<BODY>
<H1>Test an Applet Here!</h2>
Click this button!
<APP CLASS=Question>
</BODY>
The applet file referenced by the HTML is a Python script that adds widgets to the Tkinter-based Grail browser. Applets are simply classes in Python modules. When theExample 15-7: PP2E\Internet\Other\Question.py
APPtag is encountered in the HTML, the Grail browser downloads the Question.py source code module (Example 15-7) and makes an instance of its
Questionclass, passing in the browser widget as the master (parent). The master is the hook that lets applets attach new widgets to the browser itself; applets extend the GUI constructed by the HTML in this way.
# Python applet file: Question.py
# in the same location (URL) as the html file
# that references it; adds widgets to browser;
from Tkinter import *
class Question: # run by grail?
def __init_ _(self, parent): # parent=browser
self.button = Button(parent,
bitmap='question',
command=self.action)
self.button.pack( )
def action(self):
if self.button['bitmap'] == 'question':
self.button.config(bitmap='questhead')
else:
self.button.config(bitmap='question')
if __name__ == '__main_ _':
root = Tk( ) # run standalone?
button = Question(root) # parent=Tk: top-level
root.mainloop( )
Notice that nothing in this class is Grail- or Internet-specific; in fact, it can be run (and tested) standalone as a Python/Tkinter program. Figure 15-3 is what it looks like if run standalone on Windows (with a
Tkapplication root object as the master); when run by Grail (with the browser/page object as the master), the button appears as part of the web page instead. Either way, its bitmap changes on each press.
In effect, Grail applets are simply Python modules that are linked into HTML pages by using the
APPtag. The Grail browser downloads the source code identified by an
APPtag and runs it locally on the client during the process of creating the new page. New widgets added to the page (like the button here) may run Python callbacks on the client later, when activated by the user.
Applets interact with the user by creating one or more arbitrary Tk widgets. Of course, the previous example is artificial; but notice that the button's callback handler could do anything we can program in Python: updating persistent information, popping up new user interaction dialogs, calling C extensions, etc. However, by working in concert with Python's restricted execution mode (discussed later) applets can be prevented from performing potentially unsafe operations, like opening local files and talking over sockets.
Figure 15-4 shows a screen shot of Grail in action, hinting at what's possible with Python code downloaded to and run on a client. It shows the animated "game of life" demo; everything you see here is implemented using Python and the Tkinter GUI interface. To run the demo, you need to install Python with the Tk extension and download the Grail browser to run locally on your machine or copy it off the CD. Then point your browser to a URL where any Grail demo lives.
Having said all that, I should add that Grail is no longer formally maintained, and is now used primarily for research purposes (Guido never intended for Grail to put Netscape or Microsoft out of business). You can still get it for free (find it at) and use it for surfing the Web or experimenting with alternative web browser concepts, but it is not the active project it was a few years ago.
If you want to code web browser applets in Python, the more common approach today is to use the JPython system described previously to compile your scripts into Java applet bytecode files, and use Java libraries for your scripts' user-interface portions. Embedding Python code in HTML with the Active Scripting extension described later in this chapter is yet another way to integrate client-side code.
TIP: Alas, this advice may change over time too. For instance, if Tkinter is ever ported to JPython, you will be able to build GUIs in applet files with Tkinter, rather than with Java class libraries. In fact, as I wrote this, an early release of a complete Java JNI implementation of the Python built-in
_tkintermodule (which allows JPython scripts to import and use the Tkinter module in the standard Python library) was available on the Net at. Whether this makes Tkinter a viable GUI option under JPython or not, all current approaches are subject to change. Grail, for instance, was a much more prominent tool when the first edition of this book was written. As ever, be sure to keep in touch with developments in the Python community at; clairvoyance isn't all it's cracked up to be.
Python Restricted Execution Mode
In prior chapters, I've been careful to point out the dangers of running arbitrary Python code that was shipped across the Internet. There is nothing stopping a malicious user, for instance, from sending a string such as
os.system('rm *')in a form field where we expect a simple number; running such a code string with the built-in
evalfunction or
execstatement may, by default, really work--it might just delete all the files in the server or client directory where the calling Python script runs!
Moreover, a truly malicious user can use such hooks to view or download password files, and otherwise access, corrupt, or overload resources on your machine. Alas, where there is a hole, there is probably a hacker. As I've cautioned, if you are expecting a number in a form, you should use simpler string conversion tools such as
intor
string.atoiinstead of interpreting field contents as Python program syntax with
eval.
But what if you really want to run Python code transmitted over the Net? For instance, you may wish to put together a web-based training system that allows users to run code from a browser. It is possible to do this safely, but you need to use Python's restricted execution mode tools when you ask Python to run the code. Python's restricted execution mode support is provided in two standard library modules,
rexecand
bastion.
rexecis the primary interface to restricted execution, while
bastioncan be used to restrict and monitor access to object attributes.
On Unix systems, you can also use the standard
resourcemodule to limit things like CPU time and memory consumption while the code is running. Python's library manual goes into detail on these modules, but let's take a brief look at
rexechere.
Using rexec
The restricted execution mode implemented byExample 15-8: PP2E\Internet\Other\restricted.py
rexecis optional--by default, all Python code runs with full access to everything available in the Python language and library. But when we enable restricted mode, code executes in what is commonly called a "sandbox" model--access to components on the local machine is limited. Operations that are potentially unsafe are either disallowed or must be approved by code you can customize by subclassing. For example, the script in Example 15-8 runs a string of program code in a restricted environment and customizes the default
rexecclass to restrict file access to a single, specific directory.
#!/usr/bin/python
import rexec, sys
Test = 1
if sys.platform[:3] == 'win':
SafeDir = r'C:\temp'
else:
SafeDir = '/tmp/'
def commandLine(prompt='Input (ctrl+z=end) => '):
input = ''
while 1:
try:
input = input + raw_input(prompt) + '\n'
except EOFError:
break
print # clear for Windows
return input
if not Test:
import cgi # run on the web? - code from form
form = cgi.FieldStorage( ) # else input interactively to test
input = form['input'].value
else:
input = commandLine( )
# subclass to customize default rules: default=write modes disallowed
class Guard(rexec.RExec):
def r_open(self, name, mode='r', bufsz=-1):
if name[:len(SafeDir)] != SafeDir:
raise SystemError, 'files outside %s prohibited' % SafeDir
else:
return open(name, mode, bufsz)
# limit system resources (not available on Windows)
if sys.platform[:3] != 'win':
import resource # at most 5 cpu seconds
resource.setrlimit(resource.RLIMIT_CPU, (5, 5))
# run code string safely
guard = Guard( )
guard.r_exec(input) # ask guard to check and do opens
When we run Python code strings with this script on Windows, safe code works as usual, and we can read and write files that live in the C:\temp directory, because our custom
Guardclass's
r_openmethod allows files with names beginning with "C:\temp" to proceed. The default
r_openin
rexec.RExecallows all files to be read, but all write requests fail. Here, we type code interactively for testing, but it's exactly as if we received this string over the Internet in a CGI script's form field:
C:\...\PP2E\Internet\Other>python restricted.py
Input (ctrl+z=end) => x = 5
Input (ctrl+z=end) => for i in range(x): print 'hello%d' % i,
Input (ctrl+z=end) => hello0 hello1 hello2 hello3 hello4
C:\...\PP2E\Internet\Other>python restricted.py
Input (ctrl+z=end) => open(r'C:\temp\rexec.txt', 'w').write('Hello rexec\n')
Input (ctrl+z=end) =>
C:\...\PP2E\Internet\Other>python restricted.py
Input (ctrl+z=end) => print open(r'C:\temp\rexec.txt', 'r').read( )
Input (ctrl+z=end) => Hello rexec
On the other hand, attempting to access files outside the allowed directory will fail in our custom class, as will inherently unsafe things such as opening sockets, which
rexecalways makes out of bounds by default:
C:\...\PP2E\Internet\Other>python restricted.py
Input (ctrl+z=end) => open(r'C:\stuff\mark\hack.txt', 'w').write('BadStuff\n')) => open(r'C:\stuff\mark\secret.py', 'r').read( )) => from socket import *
Input (ctrl+z=end) => s = socket(AF_INET, SOCK_STREAM)
Input (ctrl+z=end) => Traceback (innermost last):
File "restricted.py", line 41, in ?
guard.r_exec(input) # ask guard to check and do opens
...part ommitted...
File "C:\Program Files\Python\Lib\ihooks.py", line 324, in load_module
exec code in m.__dict_ _
File "C:\Program Files\Python\Lib\plat-win\socket.py", line 17, in ?
_realsocketcall = socket
NameError: socket
And what of that nasty
rm
*problem? It's possible in normal Python mode like everything else, but not when running in restricted mode. Python makes some potentially dangerous attributes of the
osmodule, such as
system(for running shell commands), disallowed in restricted mode:
C:\temp>python
>>> import os
>>> os.system('ls -l rexec.txt')
-rwxrwxrwa 1 0 0 13 May 4 15:45 rexec.txt
0
>>>
C:\temp>python %X%\Part2\internet\other\restricted.py
Input (ctrl+z=end) => import os
Input (ctrl+z=end) => os.system('rm *.*')
Input (ctrl+z=end) => Traceback (innermost last):
File "C:\PP2ndEd\examples\Part2\internet\other\restricted.py", line 41, in ?
guard.r_exec(input) # ask guard to check and do opens
File "C:\Program Files\Python\Lib\rexec.py", line 253, in r_exec
exec code in m.__dict_ _
File "<string>", line 2, in ?
AttributeError: system
Internally, restricted mode works by taking away access to certain APIs (imports are controlled, for example) and changing the
__builtins_ _dictionary in the module where the restricted code runs to reference a custom and safe version of the standard
__builtin_ _built-in names scope. For instance, the custom version of name
_ _builtins_ _.openreferences a restricted version of the standard file open function.
rexecalso keeps customizable lists of safe built-in modules, safe
osand
sysmodule attributes, and more. For the rest of this story, see the Python library manual.
TIP: Restricted execution mode is not necessarily tied to Internet scripting. It can be useful any time you need to run Python code of possibly dubious origin. For instance, we will use Python's
evaland
execbuilt-ins to evaluate arithmetic expressions and input commands in a calculator program later in the book. Because user input is evaluated as executable code in this context, there is nothing preventing a user from knowingly or unknowingly entering code that can do damage when run (e.g., they might accidentally type Python code that deletes files). However, the risk of running raw code strings becomes more prevalent in applications that run on the Web, since they are inherently open to both use and abuse. Although JPython inherits the underlying Java security model, pure Python systems such as Zope, Grail, and custom CGI scripts can all benefit from restricted execution of strings sent over the Net.
XML Processing Tools
Python ships with XML parsing support in its standard library and plays host to a vigorous XML special-interest group. XML (eXtended Markup Language) is a tag-based markup language for describing many kinds of structured data. Among other things, it has been adopted in roles such as a standard database and Internet content representation by many companies. As an object-oriented scripting language, Python mixes remarkably well with XML's core notion of structured document interchange, and promises to be a major player in the XML arena.
XML is based upon a tag syntax familiar to web page writers. Python's
xmlliblibrary module includes tools for parsing XML. In short, this XML parser is used by defining a subclass of an
XMLParserPython class, with methods that serve as callbacks to be invoked as various XML structures are detected. Text analysis is largely automated by the library module. This module's source code, file xmllib.py in the Python library, includes self-test code near the bottom that gives additional usage details. Python also ships with a standard HTML parser,
htmllib, that works on similar principles and is based upon the
sgmllibSGML parser module.
Unfortunately, Python's XML support is still evolving, and describing it is well beyond the scope of this book. Rather than going into further details here, I will instead point you to sources for more information:
- Standard library
- First off, be sure to consult the Python library manual for more on the standard library's XML support tools. At the moment, this includes only the
xmllibparser module, but may expand over time.
- PyXML SIG package
- At this writing, the best place to find Python XML tools and documentation is at the XML SIG (Special Interest Group) web page at (click on the "SIGs" link near the top). This SIG is dedicated to wedding XML technologies with Python, and publishes a free XML tools package distribution called PyXML. That package contains tools not yet part of the standard Python distribution, including XML parsers implemented in both C and Python, a Python implementation of SAX and DOM (the XML Document Object Model), a Python interface to the
Expatparser, sample code, documentation, and a test suite.
- Third-party tools
- You can also find free, third-party Python support tools for XML on the Web by following links at the XML SIGs web page. These include a DOM implementation for CORBA environments (4DOM) that currently supports two ORBs (ILU and Fnorb) and much more.
- Documentation
- As I wrote these words, a book dedicated to XML processing with Python was on the eve of its publication; check the books list at or your favorite book outlet for details.
Given the rapid evolution of XML technology, I wouldn't wager on any of these resources being up to date a few years after this edition's release, so be sure to check Python's web site for more recent developments on this front.
TIP: In fact, the XML story changed substantially between the time I wrote this section and when I finally submitted it to O'Reilly. In Python 2.0, some of the tools described here as the PyXML SIG package have made their way into a standard
xmlmodule package in the Python library. In other words, they ship and install with Python itself; see the Python 2.0 library manual for more details. O'Reilly has a book in the works on this topic called Python and XML.
Windows Web Scripting Extensions
Although.
Active Scripting basics
Unfortunately, embedding Python in client-side HTML works only on machines where Python is installed and Internet Explorer is configured to know about the Python language (by installing the
win32allextension\Internet\Other\activescript-js.html
<HTML>
<BODY>
<H1>Embedded code demo: JavaScript</h2>
<SCRIPT>
// popup 3 alert boxes while this page is
// being constructed on client side by IE;
// JavaScript is the default script language,
// and alert is an automatically exposed name
function message(i) {
if (i == 2) {
alert("Finished!");
}
else {
alert("A JavaScript-generated alert => " + i);
}
}
for (count = 0; count < 3; count += 1) {
message(count);
}
</SCRIPT>
</BODY></HTML>
All the text between the
<SCRIPT>and
</SCRIPT.
The VBScript (Visual Basic) version of this example appears in Example 15-10, so you can compare and contrast.[7] It creates three similar pop-ups when run, but the windows say "VBScript" everywhere. Notice theExample 15-10: PP2E\Internet\Other\activescript-vb.html
Languageoption in the
<SCRIPT>tag here; it must be used to declare the language to IE (in this case, VBScript) unless your embedded scripts speak in its default tongue. In the JavaScript version in Example 15-9,
Languagewasn't needed, because JavaScript was the default language. Other than this declaration, IE doesn't care what language you insert between
<SCRIPT>and
</SCRIPT>; in principle, Active Scripting is a language-neutral scripting engine.
<HTML>
<BODY>
<H1>Embedded code demo: VBScript</h2>
<SCRIPT Language=VBScript>
' do the same but with embedded VBScript;
' the Language option in the SCRIPT tag
' tells IE which interpreter to use
sub message(i)
if i = 2 then
alert("Finished!")
else
alert("A VBScript-generated alert => " & i)
end if
end sub
for count = 0 to 2 step 1
message(count)
</SCRIPT>
</BODY><:
- First install the standard Python distribution your PC (you should have done this already by now--simply double-click the Python self-installer).
- Then download and install the
win32allpackage separately from (you can also find it on this book's CD).[8]
The
win32allpackage includes the
win32COMextensions for Python, plus the PythonWin IDE (a simple interface for editing and running Python programs, written with the MFC interfaces in
win32all) and lots of other Windows-specific tools not covered in this book. The relevant point here is that installing
win32allautomatically registers Python for use in IE. If needed, you can also perform this registration manually by running the following Python script file located in the win32 extensions package: python\win32comext\axscript\client\p\Internet\Other\activescript-py.html
<HTML>
<BODY>
<H1>Embedded code demo: Python</h2>
<SCRIPT Language=Python>
# do the same but with python, if configured;
# embedded python code shows three alert boxes
# as page is loaded; any Python code works here,
# and uses auto-imported global funcs and objects
def message(i):
if i == 2:
alert("Finished!")
else:
alert("A Python-generated alert => %d" % i)
for count in range(3): message(count)
</SCRIPT>
</BODY></HTML>
Figure 15-6 shows one of the three pop-ups you should see when you open this file in IE after installing the
win32allpackage moduleallextensions.
A short ASP example
We can't discuss ASP in any real detail here, but here's an example of what an ASP file looks like when Python code is embedded:
<HTML><BODY>
<SCRIPT RunAt=Server Language=Python>
#
# code here is run at the server
#
</SCRIPT>
</BODY></HTML>
As before, code may be embedded insideExample 15-12: PP2E\Internet\Other\asp-py.asp
SCRIPTtag pairs. This time, we tell ASP to run the code at the server with the
RunAtoption;.
<HTML><BODY>
<%@ Language=Python %>
<%
#
# Python code here, using global names Request (input), Response (output), etc.
#
Response.Write("Hello ASP World from URL %s" %
Request.ServerVariables("PATH_INFO"))
%>
</BODY></HTML>
However the code is marked, ASP executes it on the server after passing in a handful of named objects that the code may use to access input, output and server context. For instance, the automatically imported
Requestand
Responseobjects give access to input and output context. The code here calls a
Response.Writemethod to send text back to the browser on the client (much like a print statement in a simple Python CGI script), as well as
Request.ServerVariablesto access environment variable information. To make this script run live, you'll need to place it in the proper directory on a server machine running IIS with ASP support.in Example 15-11). Calls to such components from Python code are automatically routed through COM back to IE.
Python COM clients
With theExample 15-13: PP2E\Internet\Other\Com\comclient.py
win32allPythonattribute is set to 1, you can actually watch Word inserting and replacing text, saving files, etc., in response to calls in the script. (Alas, I couldn't quite figure out how to paste a movie clip in this book.)]
Luckily, we don't need to know which scheme will be used when we write client scripts. The
Dispatchcall properExample 15-14: PP2E\Internet\Other\Com\comserver.py
win32comregistration utility calls. Example 15-14 is a simple COM server coded in Python as a class.
################################################################
#:
- To register a server, simply call the
UseCommandLinefunction in the
win32com.server.registerpackage and pass in the Python server class. This function uses all the special class attribute settings to make the server known to COM. The file is set to automatically call the registration tools if it is run by itself (e.g., when clicked in a file explorer).
- To unregister a server, simply pass an
--unregisterargument on the command line when running this file. When run this way, the script automatically calls
UseCommandLineagain to unregister the server; as its name implies, this function inspects command-line arguments and knows to do the right thing when
--unregisteris passed. You can also unregister servers explicitly with the
UnregisterServercall demonstrated near the end of this script, though this is less commonly used.]
A:\>python
>>> import pythoncom
>>> pythoncom.CreateGuid( )
<iid:{1BA63CC0-7CF8-11D4-98D8-BB74DD3DDE3C}>Example 15-15: PP2E\Internet\Other\Com\comserver-test.py
PythonServers.MyServersymbolic program ID we gave the server (by setting class attribute
_reg_progid_) can be used to connect to this server from any language (including Python).
################################################################
#output\Internet\Other\Com\com
Helloreply messages are the same this time, because we make only one instance of the Python server class: in VB, by calling
CreateObject, with the program ID of the desired\Internet\Other\Com\comserver-test-vbs.html
<HTML><BODY>
<P>Run Python COM server from VBScript embedded in HTML via IE</P>
<SCRIPT Language=VBScript>
Sub runpyserver( )
' use python server from vb client
' alt-f8 in word to start macro editor
Set server = CreateObject("PythonServers.MyServer")
hello1 = server.hello( )
square = server.square(9)
pyattr = server.Version
hello2 = server.hello( )
sep = Chr(10)
Result = hello1 & sep & square & sep & pyattr & sep & hello2
MsgBox Result
End Sub
runpyserver( )
</SCRIPT>
</BODY></HTML>.
If your client code runs but generates a COM error, make sure that the
win32allpackage.
Python Server Pages
Though\Internet\Other\hello.psp
$[
# Generate a simple message page with the client's IP address
]$
<HTML><HEAD>
<TITLE>Hello PSP World</TITLE>
</HEAD>
<BODY>
$[include banner.psp]$
<H1>Hello PSP World</h2>
<BR>
$[
Response.write("Hello from PSP, %s." % (Request.server["REMOTE_ADDR"]) )
]$
<BR>
</BODY></HTML>
includestatement that simply inserts another PSP file's contents.
The third piece of embedded code is more useful. As in Active Scripting technologies, Python code embedded in HTML uses an exposed object API to interact with the execution context--in this case, the
Responseobject is used to write output to the client's browser (much like a
Requestis used to access HTTP headers for the request. The
Requestobject also has a
paramsdictionary containing
GETand
POSTinput parameters, as well as a
cookiesdictionary holding cookie information stored on the client by a PSP application.
Notice that the previous example could have just as easily been implemented with a Python CGI script using a Python.
Rolling Your Own Servers in Python
Most.
Coding Solutions
We saw the sort of code needed to build servers manually in Chapter 10, Network Scripting. Python programs typically implement servers either by using raw socket calls with threads, forks, or selects to handle clients in parallel, or by using the
SocketServermodule.implements the server itself; this class is derived from the standard
SocketServer.TCPServerclass.
SimpleHTTPServerand
CGIHTTPServerimplement standard handlers for incoming HTTP requests; the former handles simple web page file requests, while the latter also runs referenced CGI scripts on the server machine by forking processes.
For example, to start a CGI-capable HTTP server, simply run Python code like that shown in Example 15-19 on the server machine.Example 15-19: PP2E\Internet\Other\webserver.py
#!/usr/bin/python | http://oreilly.com/catalog/python2/chapter/ch15.html | crawl-002 | refinedweb | 12,136 | 51.38 |
This is a guest post by Tomas Mikula. It was initially published as a document in the hasheq. It has been slightly edited and is being republished here with the permission of the original author.
This article describes what we mean when we say that the data structures in this library are equivalence-aware in a type-safe fashion.
Set is a data structure that doesn’t contain duplicate elements. An implementation of Set must therefore have a way to compare elements for “sameness”. A useful notion of sameness is equivalence, i.e. a binary relation that is reflexive, symmetric and transitive. Any reasonable implementation of Set is equipped with some equivalence relation on its element type.
Here’s the catch: For any type with more than one inhabitant there are multiple valid equivalence relations. We cannot (in general) pick one that is suitable in all contexts. For example, are these two binary trees same?
+ + / \ / \ 1 + + 3 / \ / \ 2 3 1 2
It depends on the context. They clearly have different structure, but they are both binary search trees containing the same elements. For a balancing algorithm, they are different trees, but as an implementation of Set, they represent the same set of integers.
Despite the non-uniqueness, there is one equivalence relation that stands out: equality. Two objects are considered equal when they are indistinguishable to an observer. Formally, equality is required to have the substitution property:
\[ \forall a,b \in A, \forall f \in (A \to B): a=_A b \implies f(a)=_B f(b) \]
(Here, $=_A$ denotes equality on $A$, $=_B$ denotes equality on $B$.)
Equality is the finest equivalence: whenever two elements are equal, they are necessarily equivalent with respect to every equivalence.
Popular Scala libraries take one of these two approaches when dealing with comparing elements for “sameness”.
The current approach of cats is equality.
Instances of the
cats.Eq[A] typeclass are required to have all the properties of equality, including the substitution property above.
The problem with this approach is that for some types, such as
Set[Int], equality is too strict to be useful:
Are values
Set(1, 2) and
Set(2, 1) equal?
For that to be true, they have to be indistinguishable by any function.
Let’s try
(_.toList):
scala> Set(1, 2).toList == Set(2, 1).toList res0: Boolean = false
So,
Set(1, 2) and
Set(2, 1) are clearly not equal.
As a result, we cannot use
Set[Int] in a context where equality is required (without cheating).
On the other hand, scalaz uses unspecified equivalence.
Although the name
scalaz.Equal[A] might suggest equality, instances of this typeclass are only tested for properties of equivalence.
As mentioned above, there are multiple valid equivalence relations for virtually any type.
When there are also multiple useful equivalences for a type, we are at risk of mixing them up (and the fact that they are usually resolved as implicit arguments only makes things worse).
Let’s look at how we deal with this issue. We define typeclass
Equiv with an extra type parameter that serves as a “tag” identifying the meaning of the equivalence.
trait Equiv[A, Eq] { def equiv(a: A, b: A): Boolean } // defined trait Equiv
For the compiler, the “tag” is an opaque type. It only has specific meaning for humans. The only meaning it has for the compiler is that different tags represent (intensionally) different equivalence relations.
An equivalence-aware data structure then carries in its type the tag of the equivalence it uses.
import hasheq._ // import hasheq._ import hasheq.immutable._ // import hasheq.immutable._ import hasheq.std.int._ // import hasheq.std.int._
scala> HashSet(1, 2, 3, 4, 5) res0: hasheq.immutable.HashSet[Int] = HashSetoid(5, 1, 2, 3, 4)
What on earth is
HashSetoid?
A setoid is an equivalence-aware set.
HashSetoid is then just a setoid implementated using hash-table.
Let’s look at the definition of
HashSet:
type HashSet[A] = HashSetoid[A, Equality.type]
So
HashSet is just a
HashSetoid whose equivalence is equality.
To create an instance of
HashSet[Int] above, we needed to have an implicit instance of
Equiv[Int, Equality.type] in scope.
implicitly[Equiv[Int, Equality.type]]
For the compiler,
Equality is just a rather arbitrary singleton object.
It only has the meaning of mathematical equality for us, humans.
There is a convenient type alias provided for equality relation:
type Equal[A] = Equiv[A, Equality.type]
implicitly[Equal[Int]]
So how do we deal with the problem of set equality mentioned above, i.e. that
HashSet(1, 2) and
HashSet(2, 1) are not truly equal?
We just don’t provide a definition of equality for
HashSet[Int].
scala> implicitly[Equal[HashSet[Int]]] <console>:22: error: could not find implicit value for parameter e: hasheq.Equal[hasheq.immutable.HashSet[Int]] implicitly[Equal[HashSet[Int]]] ^
But that means we cannot have a
HashSet[HashSet[Int]]!
(Remember, for a
HashSet[A], we need an instance of
Equal[A], and we just showed we don’t have an instance of
Equal[HashSet[Int]].)
scala> HashSet(HashSet(1, 2, 3, 4, 5)) <console>:22: error: could not find implicit value for parameter A: hasheq.Hash[hasheq.immutable.HashSet[Int]] HashSet(HashSet(1, 2, 3, 4, 5)) ^
But we can have a
HashSetoid[HashSet[Int], E], where
E is some equivalence on
HashSet[Int].
scala> HashSet.of(HashSet(1, 2, 3, 4, 5)) res5: hasheq.immutable.HashSetoid[hasheq.immutable.HashSet[Int],hasheq.immutable.Setoid.ContentEquiv[Int,hasheq.Equality.type]] = HashSetoid(HashSetoid(5, 1, 2, 3, 4))
HashSet.of(elems) is like
HashSet(elems), except it tries to infer the equivalence on the element type, instead of requiring it to be equality.
Notice the equivalence tag:
Setoid.ContentEquiv[Int, Equality.type].
Its meaning is (again, for humans only) that two setoids are equivalent when they contain the same elements (here, of type
Int), as compared by the given equivalence of elements (here,
Equality).
The remaining question is: How does this work in the presence of multiple useful equivalences?
Let’s define another equivalence on
Int (in addition to the provided equality).
// Our "tag" for equivalence modulo 10. // This trait will never be instantiated. sealed trait Mod10 // Provide equivalence tagged by Mod10. implicit object EqMod10 extends Equiv[Int, Mod10] { def mod10(i: Int): Int = { val r = i % 10 if (r < 0) r + 10 else r } def equiv(a: Int, b: Int): Boolean = mod10(a) == mod10(b) } // Provide hash function compatible with equivalence modulo 10. // Note that the HashEq typeclass is also tagged by Mod10. implicit object HashMod10 extends HashEq[Int, Mod10] { def hash(a: Int): Int = EqMod10.mod10(a) }
Now let’s create a “setoid of sets of integers”, as before.
scala> HashSet.of(HashSet(1, 2, 3, 4, 5)) res13: hasheq.immutable.HashSetoid[hasheq.immutable.HashSet[Int],hasheq.immutable.Setoid.ContentEquiv[Int,hasheq.Equality.type]] = HashSetoid(HashSetoid(5, 1, 2, 3, 4))
This still works, because
HashSet requires an equality on
Int, and there is only one in the implicit scope (the newly defined equivalence
EqMod10 is not equality).
Let’s try to create a “setoid of setoids of integers”:
scala> HashSet.of(HashSet.of(1, 2, 3, 4, 5)) <console>:24: error: ambiguous implicit values: both method hashInstance in object int of type => hasheq.Hash[Int] and object HashMod10 of type HashMod10.type match expected type hasheq.HashEq[Int,Eq] HashSet.of(HashSet.of(1, 2, 3, 4, 5)) ^
This fails, because there are now more equivalences on
Int in scope.
(There are now also multiple hash functions, which is what the error message actually says.)
We need to be more specific:
scala> HashSet.of(HashSet.of[Int, Mod10](1, 2, 3, 4, 5)) res15: hasheq.immutable.HashSetoid[hasheq.immutable.HashSetoid[Int,Mod10],hasheq.immutable.Setoid.ContentEquiv[Int,Mod10]] = HashSetoid(HashSetoid(5, 1, 2, 3, 4))
Finally, does it prevent mixing up equivalences? Let’s see:
scala> val s1 = HashSet(1, 2, 3, 11, 12, 13 ) s1: hasheq.immutable.HashSet[Int] = HashSetoid(1, 13, 2, 12, 3, 11) scala> val s2 = HashSet( 2, 3, 4, 5, 13, 14) s2: hasheq.immutable.HashSet[Int] = HashSetoid(5, 14, 13, 2, 3, 4) scala> val t1 = HashSet.of[Int, Mod10](1, 2, 3, 11, 12, 13 ) t1: hasheq.immutable.HashSetoid[Int,Mod10] = HashSetoid(1, 2, 3) scala> val t2 = HashSet.of[Int, Mod10]( 2, 3, 4, 5, 13, 14) t2: hasheq.immutable.HashSetoid[Int,Mod10] = HashSetoid(5, 2, 3, 4)
Combining compatible setoids:
scala> s1 union s2 res16: hasheq.immutable.HashSetoid[Int,hasheq.Equality.type] = HashSetoid(5, 14, 1, 13, 2, 12, 3, 11, 4) scala> t1 union t2 res17: hasheq.immutable.HashSetoid[Int,Mod10] = HashSetoid(5, 1, 2, 3, 4)
Combining incompatible setoids:
scala> s1 union t2 <console>:26: error: type mismatch; found : hasheq.immutable.HashSetoid[Int,Mod10] required: hasheq.immutable.HashSetoid[Int,hasheq.Equality.type] s1 union t2 ^ scala> t1 union s2 <console>:26: error: type mismatch; found : hasheq.immutable.HashSet[Int] (which expands to) hasheq.immutable.HashSetoid[Int,hasheq.Equality.type] required: hasheq.immutable.HashSetoid[Int,Mod10] t1 union s2 ^
We went one step further in the direction of type-safe equivalence in Scala compared to what is typically seen out in the wild today. There is nothing very sophisticated about this encoding. I think the major win is that we can design APIs so that the extra type parameter (the “equivalence tag”) stays unnoticed by the user of the API as long as they only deal with equalities. As soon as the equivalence tag starts requesting our attention (via an ambiguous implicit or a type error), it is likely that the attention is justified.
This article was tested with Scala 2.11.8 and hasheq version 0.3.
Unless otherwise noted, all content is licensed under a Creative Commons Attribution 3.0 Unported License.Back to blog | https://typelevel.org/blog/2017/04/02/equivalence-vs-equality.html | CC-MAIN-2019-13 | refinedweb | 1,641 | 50.53 |
Java, J2EE & SOA Certification Training
- 35k Enrolled Learners
- Weekend
- Live Class
Java consists of several classes and interfaces to hold the objects before Java 1.2 version. Before this version, there was no existence of Collection Framework. Legacy classes and interfaces are used to hold objects in that scenario. This article on Legacy class in Java will let you understand the concept in depth.detail.
Let us study about the Legacy class.
Earlier versions of Java did not include the Collections Framework. Only in from version 1.2, you could actually use this Legacy class. In this, the original classes were reengineered to support the collection interface. These classes are also known as Legacy classes. All legacy classes and interfaces were redesigned by JDK 5 to support Generics.
Dictionary is an abstract class. The main work is to hold the data as the key or value pair. It works in the form of Map Collection.
Properties
Properties class is a thread-safe i.e., multiple threads that can share single properties objects without external Synchronization. The set of properties in this class will be held in the key or value pair. Properties class extends the Hashtable class. Example:
package lc; import java.util.Properties; import java.util.Set; public class Test { public static void main(String[] args) { Properties pr = new Properties(); pr.put("Joey", "Friends"); pr.put("Rachel", " Friends "); pr.put("Phoebe", " Friends "); pr.put("Chandler", " Friends "); Set<?> creator = pr.keySet(); for (Object ob : creator) { System.out.println(ob + " stars in " + pr.getProperty((String) ob)); } } }
Output:
Chandler stars in Friends
Phoebe stars in Friends
Rachel stars in Friends
Joey stars in Friends
Hashtable is a part of Java.util package and it is a concrete class that extends the dictionary class. Hashtable is synchronized. From Java 1.2 framework onwards, hash table class implements the map interface and it is the part of the collection framework.
Example of Hashtable
import java.util.*; class HashTableDemo { public static void main(String args[]) { Hashtable<String,Integer>ht = new Hashtable<String,Integer>(); ht.put("a",new Integer(10)); ht.put("b",new Integer(20)); ht.put("c",new Integer(30)); ht.put("d",new Integer(40)); Set st = ht.entrySet(); Iterator itr=st.iterator(); while(itr.hasNext()) { Map.Entry m=(Map.Entry)itr.next(); System.out.println(itr.getKey()+" "+itr.getValue()); } } }
Vector class is similar to the ArrayList class but there are certain differences. Vector is generally synchronized. It is used where the programmer doesn’t really have knowledge about the length of the Array.
Let us see some methods offered by this Vector method.
Example:
public class Test { public static void main(String[] args) { Vector ve = new Vector(); ve.add(1); ve.add(2); ve.add(3); ve.add(4); ve.add(5); ve.add(6); Enumeration en = ve.elements(); while(en.hasMoreElements()) { System.out.println(en.nextElement()); } } }
Stack represents LIFO. Stack class extends the Vector class mentioned above.
class Stack{ public static void main(String args[]) { Stack st = new Stack(); st.push(1); st.push(2); st.push(3); st.push(4); st.push(5); Enumeration e1 = st.elements(); while(e1.hasMoreElements()) System.out.print(e1.nextElement()+" "); st.pop(); st.pop(); System.out.println("nAfter popping out one element”); Enumeration e2 = st.elements(); while(e2.hasMoreElements()) System.out.print(e2.nextElement()+" "); } }
Output:
1 2 3 4 5
After popping out one element:
1 2 3 4
Now, let’s move on to the next segment that states the legacy interface.
Enumeration interface is used to enumerate elements of Vector and all values of hashtable and properties of the properties class. Operations are cloned by the iterator interface and there are several more operations added like remove method and many more.
With this, we come to the end of this article on the “Legacy Class in Java”. I hope that the concept is clear to you by now. Keep reading, keep exploring!
Now that you’ve gone through this Legacy Class in Java “Legacy Class in Java ”article and we will get back to you as soon as possible. | https://www.edureka.co/blog/legacy-classes-in-java/ | CC-MAIN-2020-05 | refinedweb | 676 | 53.07 |
Cross-platform colored terminal text.
Project description
- Download and docs:
-
-. It also provides some shortcuts to help generate ANSI sequences,().’.:
from colorama import init, AnsiToWin32 init(wrap=False) stream = AnsiToWin32(sys.stderr).stream print >>stream, Fore.BLUE + 'blue text on stderr'
Status & Known Problems. [ x;y H # position cursor at x,y #
Running tests requires:
- Michael Foord’s ‘mock’ module to be installed.
- Either to be run under Python2.7 or 3.1 stdlib unittest, or to have Michael Foord’s ‘unittest2’ module to be.
Thanks
Daniel Griffith for multiple fabulous patches. Oscar Lester for valuable fix to stop ANSI chars being sent to non-tty output. Roger Binns, for many suggestions, valuable feedback, & bug reports. Tim Golden for thought and much appreciated feedback on the initial idea.
Project details
Release history Release notifications | RSS feed
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/colorama/0.2.3/ | CC-MAIN-2021-21 | refinedweb | 157 | 70.9 |
Building on my last post (Move Your Client Logic to the Server - Windows Azure Mobile Services) I will show you how to create a Virtual (or dummy) table to give you a different view of your TodoItem table. This Virtual table will not contain any records however.
In this case, let’s assume you did not want to change your TodoItem table read script because you do need the full table with complete items elsewhere in your application.
Start by returning the read script back to it’s original form.
Navigate to the TodoItem read script and hit the ‘Clear’ icon at the bottom of the page to restore the script:
Your Read script should only contain the request.execute(); method as it originally did.
Next define a new table called ‘todoitemcomplete’ by choosing the CREATE icon in the DATA tab:
Be sure to set the appropriate permission levels as well
Next we will modify the four server scripts to use the TodoItem table and not the newly created table for their operations.
Here is the read script (actually the only one used in my example):
function read(query, user, request) { var todoItemsTable = tables.getTable('TodoItem'); todoItemsTable.where({ complete: false }).read({ success: respondOK }); function respondOK(results) { request.respond(statusCodes.OK, results); } // no! request.execute(); }
The next step is to setup for reading from this dummy table an mainpage.xaml.cs:
//Strongly typed class for dummy table is same structure as old table public class TodoItemComplete: TodoItem { } public sealed partial class MainPage : Page { //Changed the items collection to the new class private MobileServiceCollection<TodoItemComplete, TodoItemComplete> items; //Added a reference to the dummy table private IMobileServiceTable<TodoItemComplete> todoTableComplete = App.MobileService.GetTable<TodoItemComplete>(); //For now... keep a reference to the old table for Update and Insert operations private IMobileServiceTable<TodoItem> todoTable = App.MobileService.GetTable<TodoItem>();
Fix up any references that used a TodoItem in this collection by casting to a TodoItemComplete such as this:
items.Add(todoItem as TodoItemComplete);
items.Remove(item as TodoItemComplete);
And change the RefreshTodoItems() to use the new todoTableComplete:
private async void RefreshTodoItems() { MobileServiceInvalidOperationException exception = null; try { // This code refreshes the entries in the list view by querying the TodoItems table. // The query excludes completed TodoItems items = await todoTableComplete.ToCollectionAsync(); //items = await todoTable.ToCollectionAsync(); }
Now you have a Virtual table that will give you a view of complete items only with the logic on the server.
Further Discussion
I only discussed the Read script but you can extend this to the other scripts.
This has the downfall that the query parameters passed to the Read (if any) are difficult to extract and pass on to the table read.
A better option may be to use parameters in the query for the special cases (see my next post).
Let me know if this was useful to you!
-Jeff
Ref:
This is useful, but have a question regarding incrementally reading more items:
In a normal read() request to a "native" SQL Azure table, the mobile services client handles incrementally loading more items.
For instance, in C# we get back a MobileServiceCollection when doing .ToCollectionAsync()
If you "re-wire" the server-side to use a different table on read(), as you just did in this example, does this still work the same?
Do we still get back a collection that supports "pagination" ?
If so, I'm actually curious on how that's implemented!
A good question Miguel. +1
Extremely easy to test, right guys?
Hi Jeff,
I was thinking about doing the same in a .Net backend. Something like Where(a => a.UserId == currentUser.Id) to filter result server side.
For exemple, if you have a table with all records from all users and you want a view like MyRecords to get the current logged in user records, how would you proceed?
Hi Jeff! Any chance you can update us on how to implement on the latest App Service Editor? I am not familiar with Node.js and am struggling to figure out how to refactor this.
I will work on that! | https://blogs.msdn.microsoft.com/jpsanders/2013/06/12/using-virtual-tables-for-different-views-windows-azure-mobile-services/ | CC-MAIN-2019-04 | refinedweb | 670 | 63.9 |
05 January 2010 05:27 [Source: ICIS news]
By Felicia Loo
?xml:namespace>
Open-spec naphtha for second-half February delivery was assessed at $752.50-$755.50/tonne (€519.23-521.30/tonne) CFR (cost and freight)
Also up was the first-half March naphtha contract, quoted at $744.50-$747.50/tonne CFR Japan, while the second-half March contract was pegged at $739.50-$742.50/tonne CFR Japan.
The crack spread between naphtha and Brent crude futures was assessed healthy at $151/ tonne by midday Asian trade on Tuesday, compared to the 20-month high of $175.95/tonne on 22 December.
“In
Besides bullish demand, Asia was also facing low arbitrage supply as frigid temperatures in
Premiums for full-range naphtha were quoted at $20-$25/tonne to published CFR Japan quotes, while open-spec naphtha premiums were at $17-$18/tonne.
Downstream petrochemical prices seem to be racing ahead of naphtha due to strong domestic demand in
Spot paraxylene (PX) values for February delivery jumped $15/tonne to $1,125-1,135/tonne CFR Taiwan and/or China Main Port (CMP) on Monday after nearly two weeks of trading sideways.
“I am extremely optimistic for PX through the entire of the first quarter of this year,” said a source from Kuwait Aromatics, a greenfield PX producer.
Kuwait Aromatics had started up its new 830,000 tonne/year PX unit in Shuaiba two weeks ago and the company is expected to export up to 73,000 tonnes of material every month towards
“What we’ve been hearing from our customers in
A strong performing key downstream purified terephthalic acid (PTA) market was cited by market sources as being one of the key drivers for recent gains in regional PX prices.
Other market sources said that PX price would easily top the $1,150/tonne mark as end-users would be beginning to build inventory for the week-long Chinese New Year in mid-February.
“I’ve already bought a parcel for February arrival last week and we will be looking to buy more these two weeks,” said a source from China East Hope, which runs a 900,000 tonne/year PTA plant in
Tightness in the
“Everyone is bullish about everything. The market is looking good for the first quarter,” a Japanese benzene trader said.
The lack of March spot parcels dampened traders’ interest to sell, while buyers were more active to book fresh cargoes on expectation of higher benzene numbers in near term, market players said.
“Demand from the derivatives market is picking up particularly in SM [styrene monomer] sector,” a northeast Asian end-user said.
Spot benzene prices for March shipments jumped $40/tonne to $1,080-1,090/tonne FOB (free on board)
Price discussions for February lots were similar to March shipments, they added.
A Chinese major revised the January list price to yuan (CNY) 8,300/tonne ($1,215/tonne) ex-tank, up CNY300/tonne from December, traders said.
Toluene prices moved up in tandem with other aromatics despite weak Chinese demand. Bids for February shipments rose $25/tonne to $935/tonne FOB
An offer for a 3,000-tonne ethylene spot cargo was pegged at $1,250/tonne FOB SE (southeast)
($1 = €0.69 / $1 = CNY6.83)
With additional reporting by Loh Bohan and | http://www.icis.com/Articles/2010/01/05/9322511/strong-petchem-demand-to-push-asian-naphtha-higher.html | CC-MAIN-2014-23 | refinedweb | 555 | 57.5 |
Chris Kunicki OfficeZealot.com
September 2004
Applies to:
Microsoft Office 2003 Editions
Summary: One of four articles, read this article to discover the requirements and steps to build a new data provider plug in for use with custom data providers. The Microsoft Office 2003 Research Service Class Library (RSCL) Wizard includes a pluggable architecture to allow you to add new data providers to the wizard. (5 printed pages)
Download Office2003_ResearchServicesDevelopmentExtras.msi.
Using the Research Services Class Library Understanding the RSCLWizardInterface IPlugin Interface Walk Through the OleDB Data Provider Additional Documentation for the Research Service Development Extras Toolkit
The Research Services Class Library is an object model wrapper around the XML schema API for research services, as defined in the Microsoft Office 2003 Research Service SDK.
The Research Services Class Library simplifies the creation of research services by abstracting often-used features normally specified through the manipulation of XML into a simple object model wrapper. The wrapper produces the XML code required by the Research task pane. Using the Research Services Class Library, you can focus on the business logic of the solution rather than the XML code for the research service.
For more information about the Research Services Class Library, see Introduction to the Office 2003 Research Services Class Library.
The data provider plug-in is responsible for generating the data layer code used by the RSCL Wizard. The data providers included with the RSCL Wizard are actually plug-ins and can provide you with a good starting point to build a new data provider plug-in.
Once you install the download, you can find the source code to the included data provider plug-ins at the following location:
\Install_Path\Class Library Wizard Source Code\Plugins
A plug-in is a .NET Framework class library that exposes certain methods and properties, and a user interface in the form of a .NET UserControl. The plug-in must derive from the RSCLWizardInterface IPlugin interface. The RSCL Wizard loads all available plug-ins and displays the selected plug-in to the client. The RSCLWizardInterface IPlugin interface defines the methods and properties for the plug-in. The following is a list of interface property and method requirements for a plug-in:
Table 1. Properties and requirements for a plug-in
Note The recommended dimensions of the user control is no larger than 528 pixels wide by 256 pixels high; anything larger than this is truncated when displayed to the client.
Table 2. Methods and requirements for a plug-in
Note Because multiple plug-ins loaded at this time, do not ask for a response from the client computer.
First, consider what the OleDB Data Provider does. It is a very simple plug-in that collects two values from the client. The OleDB Data Provider collects a database connection string and a SQL statement to run against the database. The plug-in loads a code template and inputs the values entered by the client into the template. It then sets the public property Code to the value of the modified template. The RSCL Wizard uses the code for the data layer of the Research service project it generates.
To start, open the OLEDB Data Provider project by going to the following location:
Install_Path\Class Library Wizard Source Code\Plugins\OleDBProvider
And then open the OleDBProvider.sln solution file in Microsoft Visual Studio .NET.
Notice the RSCLWizardInterface reference under References. This reference points to
Install_Path\Class Library Wizard\Providers\RSCLWizardInterface.dll
This is the complied version of the RSCLWizardInterface. It is in the same directory as the release version of the plug-in. The RSCL Wizard iterates through this directory when it is looking for data provider plug-ins. For that reason, when you build your own plug-in, make sure the compiled version resides in this same directory.
The RSCL Wizard uses this class when it loads the plug-in. The first thing to notice in this file is the using statements at the top of the page. There are two noteworthy points: RSCLWizardInterface and System.Collections.Specialized for AssembliesToAdd. RSCLWizardInterface is used because it contains the IPlugin interface from which the OleDbProvider class is derived. Use System.Collections.Specialized for AssembliesToAdd as the StringDictionary collection of assemblies to add to the generated project.
Next, notice how the public class OleDbProvider inherits IPlugin. This is essential when the wizard loads. The RSCL Wizard scans the provider directory for all .dll files, then, using reflection, interrogates each DLL for a class inheriting from the IPlugin interface. If nothing inherits the IPlugin interface, the wizard will not load the plug-in.
All of the properties are GET or SET to an internal variable used by the plug-in. It is important to have the Name property set to a value right away because the wizard uses this name to display it values to the client.
Next, the methods, both Initialize and Finalize are present but do not do anything, because no setup or cleanup work was required for this plug-in. Both are required by the interface.
The SetValues method does a few things. It first determines the language in use then it selects a code template to use based on the chosen language. Let's review the code templates.
Open Constants.cs. There are two constants in this file. One is CSharp and the other is VB; these are code templates to generate for the data layer. There are two vital elements that the code for the data layer must have. First, the root namespace must be ResearchService. In Visual Basic .NET, this is specified for you, but in C #, you must specify the namespace. The second element is that there must be a public class called DataLayer with a method called RetrieveData accepting a single string parameter and returning a DataTable. If these two requirements are not present, the data layer fails and the generated Research service project will not build.
The SetValues method adds the client computer's input into the code template and sets the Code string to return to the wizard as the value of the modified code template.
Next, the SetValues method adds an entry into the AssembliesToAdd collection. This is a reference to the System.Data library that you need for the Research service to work. Using this method, you could add any assembly the plug-in requires to the generated Research service project.
UC.cs is a very simple UserControl with two labels and two textboxes. Your UserControl does not need to be simple; it could include many pages to collect whatever information from the client you need.
Read the following articles for more information about the Research Service Development Extras Toolkit:. | http://msdn.microsoft.com/en-us/library/aa159908(office.11).aspx | crawl-002 | refinedweb | 1,113 | 55.34 |
FOG Client / FOS report bios product key to database (Host) Activate through BIOS key (Deployment)
Hi,
i really would like to see the fog clients ability to read out the current windows key and report it back to the fog server, this information should then be added to each specific host definition.
When i first deploy a new computer, i use my tool setkey.exe as a snapin, this will activate the machine with it’s bios embedded key, if the fog client could report that key i can deploy the next time with the exact key instead of usage of my snapin:
That combined with a new report, “key report” would complete it.
My birthday is at december, 16th. so enough time ;)
What i cannot tell you is howto read out the bios key, i have tools for it but i don’t know howto do it yourself, for example if you use nirsoft’s key view there is a difference between the bios and the current registry key:
Every Win 10 that was activated by a bios key is showing the VK7JG key in its registry.
@Joe-Schmitt @tom-elliott @Wayne-Workman @george1421 @Sebastian-Roth
Edit:
Here is how it works:
import sys import ctypes import ctypes.wintypes ##################################################### #script to query windows 8.x OEM key from PC firmware #ACPI -> table MSDM -> raw content -> byte offset 56 to end #ck, 03-Jan-2014 ([email protected]) ##################################################### #for ref: common STR to DWORD conversions: ACPI: 1094930505 - FIRM: 1179210317 - RSMB: 1381190978 - FACP: 1178682192 - PCAF: 1346584902 - MSDM: 1297302605 - MDSM 1296323405 def EnumAcpiTables(): #returns a list of the names of the ACPI tables on this system FirmwareTableProviderSignature=ctypes.wintypes.DWORD(1094930505) pFirmwareTableBuffer=ctypes.create_string_buffer(0) BufferSize=ctypes.wintypes.DWORD(0) # EnumSystemFirmwareTables=ctypes.WinDLL("Kernel32").EnumSystemFirmwareTables ret=EnumSystemFirmwareTables(FirmwareTableProviderSignature, pFirmwareTableBuffer, BufferSize) pFirmwareTableBuffer=None pFirmwareTableBuffer=ctypes.create_string_buffer(ret) BufferSize.value=ret ret2=EnumSystemFirmwareTables(FirmwareTableProviderSignature, pFirmwareTableBuffer, BufferSize) return [pFirmwareTableBuffer.value[i:i+4] for i in range(0, len(pFirmwareTableBuffer.value), 4)] def FindAcpiTable(table): #checks if specific ACPI table exists and returns True/False tables = EnumAcpiTables() if table in tables: return True else: return False def GetAcpiTable(table,TableDwordID): #returns raw contents of ACPI table # GetSystemFirmwareTable=ctypes.WinDLL("Kernel32").GetSystemFirmwareTable FirmwareTableProviderSignature=ctypes.wintypes.DWORD(1094930505) FirmwareTableID=ctypes.wintypes.DWORD(int(TableDwordID)) pFirmwareTableBuffer=ctypes.create_string_buffer(0) BufferSize=ctypes.wintypes.DWORD(0) ret = GetSystemFirmwareTable(FirmwareTableProviderSignature, FirmwareTableID, pFirmwareTableBuffer, BufferSize) pFirmwareTableBuffer=None pFirmwareTableBuffer=ctypes.create_string_buffer(ret) BufferSize.value=ret ret2 = GetSystemFirmwareTable(FirmwareTableProviderSignature, FirmwareTableID, pFirmwareTableBuffer, BufferSize) return pFirmwareTableBuffer.raw def GetWindowsKey(): #returns Windows Key as string table=b"MSDM" TableDwordID=1296323405 if FindAcpiTable(table)==True: try: rawtable = GetAcpiTable(table, TableDwordID) # #byte offset 36 from beginning = Microsoft 'software licensing data structure' / 36 + 20 bytes offset from beginning = Win Key return rawtable[56:len(rawtable)].decode("utf-8") except: return False else: print("[ERR] - ACPI table " + str(table) + " not found on this system") return False try: WindowsKey=GetWindowsKey() if WindowsKey==False: print("unexpected error") sys.exit(1) else: print(str(WindowsKey)) except: print("unexpected error") sys.exit(1)
Additional another tool that can read out bios key:
(but it seems the source is missing)
Another python script:
Edit2:
Maybe this:
but i haven’t tried if the code is working. (Damn only registry)
Regards X23
In case anybody’s overly concerned. Here’s the OEM EULA. Notice, it states that you are allowed to transfer the license to another user so long as it’s with the device and the software is installed and the product key is given with it. There are no rules on what you can/cannot document. That’s like saying you can’t pull the serial number from the bios.
Just trying to get people to calm down. We know the device and software are owned by the person. We know how to obtain the information. We don’t know how the user intends to use it after it’s been stored, but that’s out of our hands. We cannot control what the admins/users do with the key once it’s know, but that’s out of our control to begin with. There’s plenty of ways for users to get that information to begin with, I don’t see anywhere in the EULA where it states we cannot store a copy of it for ourselves.
Well one of the most frequent feature requests of all time :D
- Sebastian Roth Developer last edited by
@george1421 said in FOG Client / FOS report bios product key to database (Host) Activate through BIOS key (Deployment):
Solution: Switch to Linux Mint and your M$ problems go away.
Thumbs up for that!!
@x23piracy said in FOG Client / FOS report bios product key to database (Host) Activate through BIOS key (Deployment):
Windows as subscription will come
Um, yeah its already here:
Solution: Switch to Linux Mint and your M$ problems go away.
@psycholiquid said in FOG Client / FOS report bios product key to database (Host) Activate through BIOS key (Deployment):.
As long as i own the machine, and i do and therefore feeling free to read any information whereever it’s stored as long as i don’t break any encryption or kind of protection this cannot be anything against a law, what ever whoever is writing into their EULA’s it’s up to the OEM and MS to protect this information if they really need this.
This sorry shit is only a try to gain their own profit against oem reselling or key stealing by removing a sticker or simply make a copy of the productkey only.
Windows as subscription will come :D
If the computers life time ends in our company we remove it from active directory, the antivirus software licensing and of course as host in the fog management, if done the key is been deleted, no fear ms i don’t collect your funny product keys :-p
Regards X23
- Psycholiquid Testers last edited by
@tom-elliott said in FOG Client / FOS report bios product key to database (Host) Activate through BIOS key (Deployment):.
@psycholiquid you are correct and on the same point like @george1421 but please show which law should be broken by reading that information out of the bios, there is no. Please send me links with the fact, we don’t need to talk about reimaging OEM i know the fact but in the end this is my problem ;).
Last week i purchased a VL of Windows 10 Enterprise, i will become legal but there is a bit of work to do.
I really hate the way MS is pushing all the middle class into the enterprise sector!
We had a SAM examination 2 years ago, and it was really easy to please them :D The only thing you need to do is beeing coorporative with em. In the end we purchased some SQL licenses (we had to less of them) and had to dig for some invoices for computers with oem os (about 5 of each kind)
All that bullshit storys about MS is coming into your company are fables, i don’t know a single person/admin where that happened. With which right (law) would they gain house right for deeper inspections?
Regards X23
- Psycholiquid Testers last edited by
OK after reading over all this. This is very illegal. You aren’t supposed to be pulling the OEM key out of the bios / firmware in the first place. Can you? Yes. Should you? No.
If you were to get audited and they saw the FOG system was doing this you could get in very deep trouble.
The real issue I see here is why. The reason I say this is the following. If you are sysprepping image with the OEM ISO and pushing that to each machine they will activate on their own without intervention you just have to set the rearm.
I would personally steer clear of this as I can see M$ taking a stand on this and would hate for them to even look at FOG for that reason. Seems to be a disconnect in how your image is made that they are not auto activating.
Personally I don’t like the rule of not imaging OEM machines. You bought them and they are all the same, they got their money you should be aloud to. It seems arbitrary that you have to load by hand…
Hi,
just some seconds ago i could test it and it is working.
To enable this you have to switch to working branch and enable key reading in the fog options:
When i run a quick reg with a host that has a productkey in bios i get this:
Thank you Tom for this really fast realization of the feature.
Regards X23
@george1421 I don’t think it matters, one way or the other. If the admins want to use the individual keys that ship with the systems, or if they want to use the VLK, I don’t see the harm. Automating it, I suppose, would actually be a good thing, as trying to keep track of Keys can become cumbersome, though with VLK it does make it easier.
I don’t think storing the Product Key’s is going against any legal issues here. You own the machines, and therefore own the keys for those machines. Storing them however you’d like is totally within your legal rights.
I’ve given a partial implementation of this feature already now. It does not store the product keys to the host in question by default though. This way you can still define how you’d like it. It only works for “quick registration” too.
The only “ramification” I can think this could cause is using the key may supersede your using a VLK as the product key field is meant to be a way for the client to “activate” the hosts in question as well.
@x23piracy said in FOG Client / FOS report bios product key to database (Host) Activate through BIOS key (Deployment):
what i am doing here is legal if the appropriate vl has been purchased
While this post is 4 years old, this is EXACTLY what I’ve been saying.
My previous post:.
From the article:
- The OEM and the VL license must be the same edition, e.g. you cannot deploy a Pro VL image to Home OEM licensed PCs using this licensing technique.
- You must ensure that the versions are matched, e.g. the OEM license entitles you to Windows 7 (including downgrades) if deploying Windows 7 images. For example, you can’t deploy a Windows 7 VL image to a PC with a Windows Vista OEM sticker/license using this licensing technique.
What if you company does not have a VL agreement? You need to 5 products to start one. You can buy a single copy of Windows (to get the ISO download and MAK/KMS keys) and 4 cheap dummy CALs – now you have a VL at minimum cost, and you can re-image your OEM-licensed PCs with an image made from your VL media.
You may deploy OEM media, as long as you have purchased a VLK key for that media. But then again once you have a VLK key you have access to download the volume media too. I have not tested it, but I assume a VLK key will activate an OEM image.
But again, if you purchased the VLK key and have it, there is no need to query the firmware for the bios OEM key. That key WILL NOT activate volume licensed media.
Understand I’m not saying no to this feature, I’m just not seeing the value in it. If you know what needs to be done, by all means fork the fog project make your changes and then submit the changes back to the project. That is one way to get your needed features back into the base code.
@Sebastian-Roth @george1421 please have a look in here:
Legally Deploying Images Windows To OEM Licensed PCs, what i am doing here is legal if the appropriate vl has been purchased.
Hi,
can i have some clues where is the right point to try to embed the command while doing an inventory? which file in the filesystem is doing all the commands while doing inventorisation?
If the team isn’t willing to integrate, i will do it on my own.
Regards X23
:( for me it’s hard to follow that position. But i have to respect it.
- george1421 Moderator last edited by george1421
The more I think about it, the less I’m inclined to say this is a needed feature. While its technically possible to add this to fog. I don’t see the value in having the devs spend their time to read out and store the bios activation key. That key is only of value to activate OEM images. The only way the OEM image can be deployed is via the original media is was delivered on. With OEM media you are not allowed to install, alter, capture and redeploy an OEM install. It may be only installed from the original OEM media. That process is not the intent of FOG Project..
When I get onto my other computer I’ll post a link to a post on Spiceworks that talks about what you can and can’t do (legally) with imaging MS products.
[update]: Here is the link I mentioned above
So I think if I had a vote, I would rather have the devs work on this unable to read inode from library issue than spend time adding a feature to FOG that only a limited number of people might use.
Yes it works,
i just created a FOS USB Stick, thank you @george1421 and booted it with a notebook that has a product key in it’s bios into kernel debug mode (i need shell).
Then i entered the following command:
tail -c+57 /sys/firmware/acpi/tables/MSDM
What i got was, surprise a product key:
To be sure that this is really our product key i also used the command i found and a key tool to crosscheck the key.
And yes it’s correct:
What we need now is the following @Sebastian-Roth:
- FOS ability to read and report bios product key to the host product key field in db (expand the inventory script with the command above to read the key and report it like any other inventory item)
- FOG Clients ability to also report product key (if not already done) for the case if the fog client has been mass deployed in existing environment where maybe not all host will be booted and inventoried by the FOG Boot Menu.
Afaik this should be all we need because if i fill the product key field of a host today with a product key and deploy a windows system, fog client will set this key into the system. So everything is prepared except the feature that we can read key from the bios and report them to the db.
Who is responsible from the dev team for the FOS?
Regarding to @george1421 post before, another solution could be a second product key field, one is for the manual known input, and another for determined bios product keys, now for each host there could be a switch in the options where we can decide which product key field to use?
Am i wrong?
Regards X23
@x23piracy said in FOG Client report Windows key to FOG WebIf (Host definition) Activate through BIOS key (Deployment):
Yes, i think thats the way to go but what if someone uses FOG in an already deployed but growing environment. Typically FOG Client would be mass installed, is the FOG Client reporting the same inventory stuff like the inventory been done by the boot menu? If not FOG Client also should have the ability to read the key from bios and report it.
I agree, but that is where your handy code comes into use. That can be integrated into the {next} fog client to update the bios key field if the developers see value in it.
can i try your usb FOS Image, maybe boot from the stick and try if i can get the serial from bios?
Yes that should work well. I forgot about that method of booting. There is a debug mode built into that usb stick. So it is pretty easy.
Lets say that works can’t we start with sending that key to fogs database for the current product key field? If this has been done we are good to go because currently fog client can activate a deployed system by an entered key in that host definition field.
The risk here is replacing a MAK or KMS key with the bios value may not be what all users consider useful. That is why I picked a new field, so it is stored. Then a crafty IT admin could write a simple mysql command to copy it over if its blank. I’m not seeing this as a widely used feature. But a useful one if you need it. | https://forums.fogproject.org/topic/10808/fog-client-fos-report-bios-product-key-to-database-host-activate-through-bios-key-deployment | CC-MAIN-2019-04 | refinedweb | 2,859 | 66.88 |
A huge portion of the data that exists today is textual and as a Data Scientist, it is very important to have the skill sets to process these textual data. Natural Language processing has been around for a long time and it has been growing in popularity. Today almost all tech devices have some sort of NLP technology that let them communicate with us.
NLP should be one of the most updated skill sets in a Data Scientist’s Tool kit. In this article, we will learn to implement Natural Language Processing in Machine Learning in the simplest way possible to solve MachineHack’s – Whose Line Is It Anyway: Identify The Author Hackathon
data-src="" width: 728px; height: class="lazyload" 90px;>
About The Data Set
The dataset we are going to use consists of sentences from thousands of books of 10 authors. The idea is to train our machine to predict which author has written a specific sentence. This is an NLP classification problem where the objective is to classify each sentence based on who wrote it.
Where to get the dataset?
Head to MachineHack, sign up and start the Whose Line Is It Anyway: Identify The Author Hackathon, you will find the dataset in the assignments page.
Natural Language Processing With Python
We will implement NLP in 8 simple steps as explained below.
Importing necessary libraries
import pandas as pd
import re
import nltk
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix
The above code block consists of the necessary libraries that we need to implement our NLP classifier. We will look into each of them as we come across various methods.
Importing the dataset
dataset = pd.read_csv('TRAIN.csv')
The above code block reads the data from the csv file and loads it into a pandas data-frame using the read_csv method of the pandas library that we imported earlier.
Let’s have a peek at the dataset :
Cleaning and preprocessing the data
Cleaning the data is one of the most essential tasks in not just Natural Language Processing but in the entire Data Science spectrum. In Natural Language Processing, there are various stages of cleaning. Some of the basic stages are listed below :
- Cleaning the test for unnecessary data (noises such as symbols, emojis, special characters, etc.)
- Stemming or lemmatization for reducing the words to its root form.
- Removing stopwords.
Note:
- Stemming is the process of reducing a word to its root form. This helps remove redundancy in words. For example, if the words ‘run’, ‘ran’ and ‘running’ are present in a sentence, each word is reduced to its base or root form ‘run’ and counted as 3 occurrences of the same word instead of counting each word as unique.
- Stopwords are the words that are too often used in a natural language and hence are useless when comparing documents or sentences. For example, ‘the’, ‘a’, ‘an’, ‘has’, ‘do’, ‘what’, etc are some of the stopwords. Such words are removed for NLP.
nltk.download('stopwords') #downloading the stopwords from nltk
corpus = [] # List for storing cleaned data
ps = PorterStemmer() #Initializing object for stemming
for i in range(len(dataset)): # for each obervation in the dataset
#Removing special characters
text = re.sub('[^a-zA-Z]', ' ', dataset['text'][i]).lower().split()
#Stemming and removing stop words
text = [ps.stem(word) for word in text if not word in set(stopwords.words('english'))]
#Joining all the cleaned words to form a sentence
text = ' '.join(text)
#Adding the cleaned sentence to a list
corpus.append(text)
The NLTK library comes with a collection of stopwords which we can use to clean the dataset. The PorterStemmer method of nltk.stem.porter library is used to perform stemming. In the above code block, we traverse through each observation in the dataset, removing special characters, performing stemming and removing stop words.
Let’s see the cleaned data :
Generating Count Vectors
cv = CountVectorizer(max_features = 120)
X = cv.fit_transform(corpus).toarray()
y = dataset.iloc[:, 1].values
With the above code block, we will create a Bag-of-Words model. The CountVectorizer method imported from sklearn.feature_extraction.text creates a matrix of vectors consisting of the counts of each word in a sentence. The parameter max_features = 120 selects a maximum of 120 unique words. We transform the cleaned data in corpus into CountVector X which is the independent variable set for the test classifier that we will build in the coming steps.
Here is what X looks like :
Each row represents a row in the actual observation and each column represents a word of the 120 selected words.
Splitting the dataset into the Training set and Validation set
from sklearn.model_selection import train_test_split
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size = 0.20, random_state = 0)
In the above code block, we split the dataset into training and validation sets. The parameter test_size = 0.2 specifies that the test set (X_val & y_val ) should consist of 20 % of the overall data in X and y. The random_state parameter allows us to set a seed value to reproduce the exact same results.
Building a classifier
classifier = SVC()
classifier.fit(X_train, y_train)
Since we are ready with the training data we can now use it to train a classifier. The above code block initializes a Support Vector Classifier and fits the training data for learning.
Predicting the author
y_pred = classifier.predict(X_val)
After training the classifier with X_train and y_train, we can now make the classifier to predict the authors for the texts in the validation set X_val.
Evaluating the model
After predicting for the validation set, we need to check how many of the predictions are actually right.To do this, we will make use of the confusion matrix.Using the confusion matrix we will compare the predicted values in y_pred and the actual values in y_val.The accuracy from a confusion matrix can be calculated by summing up the diagonal elements and diving it by the total sum of elements in the matrix. We define a method as shown below:
def accuracy(confusion_matrix):
diagonal_sum = confusion_matrix.trace()
sum_of_all_elements = confusion_matrix.sum()
return diagonal_sum / sum_of_all_elements
#Creating the confusion matrix with y_val and y_pred
cm = confusion_matrix(y_val, y_pred)
print("Accuracy : ", accuracy(cm))
Output :
Accuracy : 0.7461271882975549
This means that 74 % of the overall predictions were actually true when compared to the real observations.
That’s it! You have now created your first NLP project in Machine Learning. You can use the above code blocks as a basic template for working with NLP. Happy Coding! | https://www.analyticsindiamag.com/how-to-solve-your-first-ever-nlp-classification-challenge/ | CC-MAIN-2019-39 | refinedweb | 1,099 | 55.84 |
Introduction: Scrolling Text With Arduino and Adafruit TFT Shield
What we are going to do: demonstrate text scrolling with an Arduino and an Adafruit 2.8 inch TFT touch shield
What we will use
Arduino Uno
Adafruit 2.8 inch TFT touch shield
Assembly
Mount TFT shield on Arduino
Take care to avoid bending pins - it does go together
Step 1: Arduino Sketch and Ino File Attachment
The following Arduino sketch is the program that makes the text scroll.
/*
Arduino project
Scrolling Text on ADAFRUIT TFT Arduino Shield
sketch uses Adafruit libraries - for more information...
a Renfrew Arduino 2014 project - public domain
(scroll routine thanks to Andrew Wendt)
*/
// libraries
#include "SPI.h" // SPI display
#include "Adafruit_GFX.h" // Adafruit graphics
#include "Adafruit_ILI9341.h" // ILI9341 screen controller
// pin definitions
#define TFT_DC 9
#define TFT_CS 10
Adafruit_ILI9341 tft=Adafruit_ILI9341(TFT_CS, TFT_DC); // hardware SPI
void setup()
{
tft.begin();
tft.fillScreen(ILI9341_CYAN);
tft.fillScreen(ILI9341_BLUE);
tft.setTextColor(ILI9341_WHITE, ILI9341_BLACK); // White on black
tft.setTextWrap(false); // Don't wrap text to next line
tft.setTextSize(5); // large letters
tft.setRotation(1); // horizontal display
}
void loop()
{
String text = " . . Text scrolling on Adafruit TFT shield . ."; // sample text
const int width = 18; // width of the marquee display (in characters)
// Loop once through the string
for (int offset = 0; offset < text.length(); offset++)
{
// Construct the string to display for this iteration
String t = "";
for (int i = 0; i < width; i++)
t += text.charAt((offset + i) % text.length());
// Print the string for this iteration
tft.setCursor(0, tft.height()/2-10); // display will be halfway down screen
tft.print(t);
// Short delay so the text doesn't move too fast
delay(200);
}
}
Step 2: How It Works
In this demo we displayed this text string:
. . Text Scrolling with Adafruit TFT shield .. (47 characters?)
The text displays in scrolling marquee fashion, in large letters, with 18 characters across the screen.
You can replace the text with your own message that you want to scroll across the screen.
How the Sketch works
- most of the work in assembling the strings of characters to display is done in this line:
{
t+= text.charAt ((offset + i) % text.length());
}
If you understand that line, you don't need any further explanation; stop reading.
- continuing on
For everyone else, you have to understand how this routine works or it won't be interesting, so keep reading.
In this explanation we will uses a simpler example, with a shorter text.
String text = "Hello" (which is 5 characters).
And we will define a shorter marquee display width.
const int width = 10;
Our goal is to display these strings consecutively in the display window:
HelloHello
elloHelloH
lloHelloHe
etc.
To keep track of where we start each consecutive text string we use the variable 'offset'.
offset is incremented in the following line to change the starting point of the display string.
for (int offset = 0; offset < text.length(); offset ++)
example:
HelloHello offset == 0
elloHelloH offset == 1
lloHelloHe offset == 2
and so on until offset is equal to 5
As stated above, offset keeps track of the starting point of the string.
We use a loop with the counter i to assemble the rest of the text string each time the text is displayed.
for (int i = 0; i < width; i++)
width is 10. By stepping through this loop 10 times we will assemble a string of characters that is equal to the width of the marquee display window, which is 10 characters. We do all this before printing the text to the screen.
Next is the line mentioned above that assembles the string one character at a time as we step through the i loop:
t+= text.charAt ((offset + i) % text.length());
The first time through this loop t holds the string H. The second time He - and so on, for the 10 iterations of the loop, at which point t holds HelloHello.
When the loop is completed, the sketch prints the string to the screen.
tft.print(t);
On the first iteration offset == 0 and i == 0. text.length() == 5, which is the length of our string.
This gives the result t+= text.charAt(0 % 5);
0 % 5 uses the modulo operator %
modulo is the remainder when two integers are divided.
0 divided by 5 produces remainder 0.
So the result of 0 % 5 (read as 0 mod 5) is 0
And text.charAt(0) is the first character of the string: H.
The second time through this loop i is incremented to 1.
The result of 1 % 5 ( or 1 mod 5) is 1.
This adds the second character, e, to the string t. t now is He
After 10 iterations t holds the characters HelloHello.
Then we print t
tft.print(t);
And then the offset variable is incremented, we go through the i loop again, and we assemble the string elloHelloH. The display scrolls!
4 Discussions
Thank you!
Only thing that shows on my TFT is a white screen. I've tried every library I can find. Anyone know whats wrong? I'm using Arduino UNO and Seeed TFT v1.
As I understand it, the v1 shield has and 8-bit parallel pin connection rather than the SPI bus connection that the v2 uses. The defines in the above sketch reference the v2's chip select (CS) and the Data/Command (DC) pin connections to the UNO. I believe there is another older Adafruit library that will work with parallel connections, but I don't know for sure. There may be a way to use the current one as well, but you would have to have a look at the ILI9341 library to see what the initialization would be for the older shield. Seeed may still have their libraries for the older shield available, I think they were also based on the Adafruit libraries
Awesome project, thanks for posting! | https://www.instructables.com/id/Scrolling-text-on-TFT-screen/ | CC-MAIN-2018-39 | refinedweb | 972 | 74.49 |
Is your ice cream float bigger than mine
Posted May 27, 2013 at 07:46 AM | categories: math | tags: | View Comments
Updated May 28, 2013 at 08:59 AM
Float numbers (i.e. the ones with decimals) cannot be perfectly represented in a computer. This can lead to some artifacts when you have to compare float numbers that on paper should be the same, but in silico are not. Let us look at some examples. In this example, we do some simple math that should result in an answer of 1, and then see if the answer is “equal” to one.
print 3.0 * (1.0/3.0) print 1.0 == 3.0 * (1.0/3.0)
1.0 True
Everything looks fine. Now, consider this example.
print 49.0 * (1.0/49.0) print 1.0 == 49.0 * (1.0/49.0)
1.0 False
The first line looks like everything is find, but the equality fails!
1.0 False
You can see here why the equality statement fails. We will print the two numbers to sixteen decimal places.
print '{0:1.16f}'.format(49.0 * (1.0/49.0) ) print '{0:1.16f}'.format(1.0) print 1 - 49.0 * (1.0/49.0)
0.9999999999999999 1.0000000000000000 1.11022302463e-16
The two numbers actually are not equal to each other because of float math. They are very, very close to each other, but not the same.
This leads to the idea of asking if two numbers are equal to each other within some tolerance. The question of what tolerance to use requires thought. Should it be an absolute tolerance? a relative tolerance? How large should the tolerance be? We will use the distance between 1 and the nearest floating point number (this is
eps in Matlab).
numpy can tell us this number with the
np.spacing command.
Below, we implement a comparison function from 10.1107/S010876730302186X that allows comparisons with tolerance.
# Implemented from Acta Crystallographica A60, 1-6 (2003). doi:10.1107/S010876730302186X import numpy as np print np.spacing(1) def feq(x, y, epsilon): 'x == y' return not((x < (y - epsilon)) or (y < (x - epsilon))) print feq(1.0, 49.0 * (1.0/49.0), np.spacing(1))
2.22044604925e-16 True
For completeness, here are the other float comparison operators from that paper. We also show a few examples.
import numpy as np def flt(x, y, epsilon): 'x < y' return x < (y - epsilon) def fgt(x, y, epsilon): 'x > y' return y < (x - epsilon) def fle(x, y, epsilon): 'x <= y' return not(y < (x - epsilon)) def fge(x, y, epsilon): 'x >= y' return not(x < (y - epsilon)) print fge(1.0, 49.0 * (1.0/49.0), np.spacing(1)) print fle(1.0, 49.0 * (1.0/49.0), np.spacing(1)) print fgt(1.0 + np.spacing(1), 49.0 * (1.0/49.0), np.spacing(1)) print flt(1.0 - 2 * np.spacing(1), 49.0 * (1.0/49.0), np.spacing(1))
True True True True
As you can see, float comparisons can be tricky. You have to give a lot of thought to how to make the comparisons, and the functions shown above are not the only way to do it. You need to build in testing to make sure your comparisons are doing what you want.
Copyright (C) 2013 by John Kitchin. See the License for information about copying. | http://kitchingroup.cheme.cmu.edu/blog/2013/05/27/Is-your-ice-cream-float-bigger-than-mine/ | CC-MAIN-2017-39 | refinedweb | 574 | 76.52 |
This blog was originally published on Ales Nosek - The Software Practitioner.
Pods on Kubernetes are ephemeral and can be created and destroyed at any time. In order for Envoy to load balance the traffic across pods, Envoy needs to be able to track the IP addresses of the pods over time. In this blog post, I am going to show you how to leverage Envoy’s Strict DNS discovery in combination with a headless service in Kubernetes to accomplish this.
Overview
Envoy provides several options on how to discover back-end servers. When using the Strict DNSoption, Envoy will periodically query a specified DNS name. If there are multiple IP addresses included in the response to Envoy’s query, each returned IP address will be considered a back-end server. Envoy will load balance the inbound traffic across all of them.
How to configure a DNS server to return multiple IP addresses to Envoy? Kubernetes comes with a Service object which, roughly speaking, provides two functions. It can create a single DNS name for a group of pods for discovery and it can load balance the traffic across those pods. We are not interested in the load balancing feature as we aim to use Envoy for that. However, we can make a good use of the discovery mechanism. The Service configuration we are looking for is called a headless service with selectors.
The diagram below depicts how to configure Envoy to auto-discover pods on Kubernetes. We are combining Envoy’s Strict DNS service discovery with a headless service in Kubernetes:
Practical implementation
To put this configuration into practice, I used Minishift 3.11 which is a variant of Minikube developed by Red Hat. First, I deployed two replicas of the httpd server on Kubernetes to play the role of back-end services. Next, I created a headless service using the following definition:
Note that we are explicitly specifying “None” for the cluster IP in the service definition. As a result, Kubernetes creates the respective Endpoints object containing the IP addresses of the discovered httpd pods:
If you ssh to one of the cluster nodes or rsh to any of the pods running on the cluster, you can verify that the DNS discovery is working:
Next, I used the container image
docker.io/envoyproxy/envoy:v1.7.0 to create an Envoy proxy. I deployed the proxy into the same Kubernetes namespace called
mynamespace where I created the headless service before. A minimum Envoy configuration that can accomplish our goal looks as follows:
Note that in the above configuration, I instructed Envoy to use the Strict DNS discovery and pointed it to the DNS name
httpd-discovery that is managed by Kubernetes.
That’s all that was needed to be done! Envoy is load balancing the inbound traffic across the two httpd pods now. And if you create a third pod replica, Envoy is going to route the traffic to this replica as well.
Conclusion
In this article, I shared with you the idea of using Envoy’s Strict DNS service discovery in combination with the headless service in Kubernetes to allow Envoy to auto-discover the back-end pods. While writing this article, I discovered this blog post by Mark Vincze that describes the same idea and you should take a look at it as well.
This idea opens the door for you to utilize the advanced features of Envoy proxy in your microservices architecture. However, if you find yourself looking for a more complex solution down the road, I would suggest that you evaluate the Istio project. Istio provides a control plane that can manage Envoy proxies for you achieving the so called service mesh.
Hope you found this article useful. If you are using Envoy proxy on top of Kubernetes I would be happy to hear about your experiences. You can leave your comments in the comment section below. | https://www.redhat.com/ja/blog/configuring-envoy-auto-discover-pods-kubernetes | CC-MAIN-2020-45 | refinedweb | 652 | 61.06 |
Other Aliasco_create, co_call, co_resume, co_exit_to, co_exit, co_current
SYNOPSIS
#include <pcl.h>);
DESCRIPTIONThe. This document defines an API for the low level handling of coroutines i.e. creating and deleting coroutines and switching between them. Higher level functionality (scheduler, etc.) is not covered.
FunctionsThe following functions are defined:
- coroutine_t co_create(void *func, void *data, void *stack, int stacksize);.
- void co_delete(coroutine_t co);.
- void co_call(coroutine_t co);.
- void co_resume(void);
This function passes execution back to the coroutine which either initially started this one or restarted it after a prior co_resume.
- void co_exit_to(coroutine_t co);
This function does the same a co_delete(co_current()) followed by a co_call would do. That is, it deletes itself and then passes execution to another coroutine co.
- void co_exit(void);
This function does the same a co_delete(co_current()) followed by a co_resume would do. That is, it deletes itself and then passes execution back to the coroutine which either initially started this one or restarted it after a prior co_resume.
- coroutine_t co_current(void);
This function returns the currently running coroutine.
NotesSome interactions with other parts of the system are covered here.
- Signals
- First, a signal handler is not defined to run in any specific coroutine. The only way to leave the signal handler is by a return statement.)
- setjmp/longjmp
- The use of setjmp(2)/longjmp(2) is limited to jumping inside one coroutine. Never try to jump from one coroutine to another with longjmp(2).
DIAGNOSTICSSome fatal errors are caught by the library. If one occurs, a short message is written to file descriptor 2 (stderr) and a segmentation violation is generated.
- [PCL]: Cannot delete itself
- A coroutine has called co_delete with it's own handle.
- [PCL]: Resume to deleted coroutine
- A coroutine has deleted itself with co_exit or co_exit_to and the coroutine that was activated by the exit tried a co_resume.
- [PCL]: Stale coroutine called
- Someone tried to active a coroutine that has already been deleted. This error is only detected, if the stack of the deleted coroutine is still resident in memory.
- [PCL]: Context switch failed
- Low level error generated by the library in case a context switch between two coroutines failes.
AUTHORDeveloped by Davide Libenzi < [email protected] >. Ideas and man page base source taken by the coroutine library developed by E. Toernig < [email protected] >. Also some code and ideas comes from the GNU Pth library available at .
BUGSThere are no known bugs. But, this library is still in development even if it results very stable and pretty much ready for production use.
Bug reports and comments to Davide Libenzi < [email protected] >. | http://manpages.org/co_delete/3 | CC-MAIN-2020-50 | refinedweb | 428 | 58.79 |
Prefix all system groups with "_"
RESOLVED WONTFIX
Status
()
▸
Administration
People
(Reporter: Joel Peshkin, Unassigned)
Tracking
Details
To prevent collisions between user-defined groups and system groups, all system groups should start with an underscore. While administrators should not be blocked from using groups beginning with an underscore, it should be understood that such names are part of the system namespace. Migration code in checksetup should check for BOTH the presence of the underscore version AND the absence of the non-underscore version before renaming. administration
QA Contact: mattyt-bugzilla → default-qa
That's a pretty bad idea. We already have UI issues with products and components starting with an underscore, because Template Toolkit considers them as private keys (when used in a hash) and skip them. And TT doesn't offer any option to turn off this behavior. So till TT fixes that, assuming they want to in a future release, we should wontfix this bug.
Renaming all system groups would break many extensions and custom code. I don't see the benefit of doing this.
Status: NEW → RESOLVED
Last Resolved: 5 years ago
Resolution: --- → WONTFIX | https://bugzilla.mozilla.org/show_bug.cgi?id=247081 | CC-MAIN-2018-09 | refinedweb | 187 | 53.21 |
Bit-field
Declares a class data member with explicit size, in bits. Adjacent bit-field members may (or may not) be packed to share and straddle the individual bytes.
A bit-field declaration is a class data member declaration which uses the following declarator:
The type of the bit-field is introduced by the decl-specifier-seq of the declaration syntax.
[edit] Explanation
The type of a bit-field can only be integral or (possibly cv-qualified) enumeration type.
A bit-field cannot be a static data member.
There are no bit-field prvalues: lvalue-to-rvalue conversion always produces an object of the underlying type of the bit-field.
The number of bits in a bit-field sets the limit to the range of values it can hold:
#include <iostream> struct S { // three-bit unsigned field, allowed values are 0...7 unsigned int b : 3; }; int main() { S s = {6}; ++s.b; // store the value 7 in the bit-field std::cout << s.b << '\n'; ++s.b; // the value 8 does not fit in this bit-field std::cout << s.b << '\n'; // formally implementation-defined, typically 0 }
Possible output:
7 0
Multiple adjacent bit-fields are usually packed together (although this behavior is implementation-defined):
Possible output:
2 }
Possible output:, and whether int bit-fields that are not explicitly signed or unsigned are signed or unsigned is implementation-defined. For example, int b:3; may have the range of values 0..7 or -4..3 in C, but only the latter choice is allowed in C++.
[edit] Defect reports
The following behavior-changing defect reports were applied retroactively to previously published C++ standards.
[edit] References
- C++20 standard (ISO/IEC 14882:2020):
- 11.4.9 Bit-fields [class.bit]
- C++17 standard (ISO/IEC 14882:2017):
- 12.2.4 Bit-fields [class.bit]
- C++14 standard (ISO/IEC 14882:2014):
- 9.6 Bit-fields [class.bit]
- C++11 standard (ISO/IEC 14882:2011):
- 9.6 Bit-fields [class.bit]
- C++03 standard (ISO/IEC 14882:2003):
- 9.6 Bit-fields [class.bit]
- C++98 standard (ISO/IEC 14882:1998):
- 9.6 Bit-fields [class.bit] | https://en.cppreference.com/w/cpp/language/bit_field | CC-MAIN-2022-21 | refinedweb | 356 | 61.12 |
uiomove,
uiomovei
— move data described by a struct uio
#include
<sys/systm.h>
int
uiomove(void
*buf, size_t n,
struct uio *uio);
int
uiomovei(void
*buf, int n,
struct uio *uio);;/* associated process or NULL */ };
A struct uio typically describes data in motion. Several of the fields described below reflect that expectation.
struct iovec { void *iov_base; /* Base address. */ size_t iov_len; /* Length. */ };
uiomoveitself does not use this field if the area is in kernel-space, but other functions that take a struct uio may depend on this information..
The
uiomovei function is similar to
uiomove, but uses a signed integer as the byte
count. It is a temporary legacy interface and should not be used in new
code.
uiomove and
uiomovei return 0 on success or EFAULT if a bad
address is encountered. | https://man.openbsd.org/OpenBSD-5.9/uiomovei.9 | CC-MAIN-2020-05 | refinedweb | 132 | 74.08 |
Simple exceptions and messaging
Project description
EZCeption (and EZMessage)
These classes provide a simplified means to define exception classes and message attributes.
Goals
- Less boilerplate to define error classes so they'll be used more.
- Formatting does not happen at raise time so catching errors doesn't incur string computation cost.
- Exception fields are easier to probe by exception handlers.
- Class-bases exceptions are a snap to define.
- Works with regular Python exception classes.
- Interfaces with the gettext module by default.
- Allows for associating other i18n messaging in the error classes.
Installation.
pip install ezception
NOTE: ezception is not on pypi yet. Until official releases are available:
pip install
Usage
import ezception class MyWebRequester(object): class Error(ezception.EZCeption): 'A general error has occurred.' class OpenError(Error): 'An error opening {self.url!r} has occurred.' class NoURLError(OpenError): ''' No URL was provided. '''' class NoURLError(OpenError): ''' The scheme {self.scheme!r} is not acceptable. ''' ... def open(self, url): ... stuff happens. ... Oh noes! raise self.BadSchemeError(scheme=url.scheme)
So in the example the class defines a simple hierarchy of errors. Progreammers can trivially trap MyWebRequester.Error to catch anything generated by MyWebRequester. Because it's very simple to define those errors, a more robust exception hierarchy is more likely to emerge.
In addition, the message is NOT generated at exception time. Because of this, it is easy to catch and determine better handling code. The default Python exceptions use args, which requires the programmer to have tighter coupling with the error classes to acquire the parameters of the failure (or the error class needs to define more properties).
Only when the exception is printed does stringification occur. In addition, the message to be presented is processed using the gettext module, so adding support for other languages for such messages is possible using regular .po files. All messages are tracked by a global ezception.ALL_MSGS dict so generating base .po files is also simplified.
Project details
Release history Release notifications | RSS feed
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/ezception/0.0.2/ | CC-MAIN-2022-40 | refinedweb | 351 | 52.26 |
The .NET Framework is a new set of interfaces intended to replace the old Win32 and COM APIs. A couple of the major design goals for the .NET Framework were to make programming in a Windows environment much simpler and more consistent. The .NET Framework has two major components: the common language runtime (CLR) and the .NET Framework class library.
The CLR is the sandbox from which all .NET-based code, called managed code, is executed. The CLR is in charge of things such as memory management, security management, thread management, and other code management functions. One of the great benefits of the CLR is that different programming languages can develop code that runs in the CLR and can be used by other programming languages. That means you can develop managed Perl code that can be easily used by a C# application.
The other major component of the .NET Framework is the class library, which is a comprehensive set of object-oriented interfaces that replace the traditional Win32 API. The class library is divided up into namespaces. You can think of a namespace as a grouping of classes, properties, and methods that are targeted for a specific function. For example, the System.Text namespace contains classes for representing strings in ASCII, Unicode, and other character encoding systems. The namespace that is of the most interest to us is the System.DirectoryServices namespace, which contains all the classes necessary to query and manipulate a directory, such as Active Directory, using the .NET Framework. | http://etutorials.org/Server+Administration/Active+directory/Part+III+Scripting+Active+Directory+with+ADSI+ADO+and+WMI/Chapter+28.+Getting+Started+with+VB.NET+and+System.Directory+Services/28.1+The+.NET+Framework/ | CC-MAIN-2016-44 | refinedweb | 252 | 57.47 |
There are three ways a strongly typed DataSet class can be generated. The easiest method is to drop one or more DataAdapter objects from the Data tab in the Visual Studio .NET Toolbox onto a design surface such as a form or a component. Configure each DataAdapter to select data from one table. Right-click on the design surface and select Generate DataSet. Provide a name for the DataSet, select the tables to be included, and generate the new strongly typed DataSet. To relate the two tables, double-click on the XSD file for the new DataSet in the Solution Explorer window to open it. Right-click on the child table in XSD schema designer, select Add/New Relation... from the shortcut menu, and complete the dialog. Instances of the strongly typed DataSet can now be created programmatically or by using the DataSet object from the Data tab in the Visual Studio.NET Toolbox.
The other two methods are more involved, and both require an XSD schema file, which can be generated in a number of ways, e.g., using Visual Studio IDE tools, third-party tools, or the DataSet WriteXmlSchema( ) method. The following example shows a utility that uses the WriteXmlSchema( ) method to create an XSD schema based on the Categories and Products tables in the Northwind database:
String connString = "Data Source=localhost;" + "Initial Catalog=Northwind;Integrated Security=SSPI"; SqlDataAdapter daCategories = new SqlDataAdapter( "SELECT * FROM Categories", connString); SqlDataAdapter daProducts = new SqlDataAdapter( "SELECT * FROM Products", connString); SqlDataAdapter daOrders = new SqlDataAdapter( "SELECT * FROM Orders", connString); SqlDataAdapter daOrderDetails = new SqlDataAdapter( "SELECT * FROM [Order Details]", connString); DataSet ds = new DataSet("Northwind"); // load the schema information for the tables into the DataSet daCategories.FillSchema(ds, SchemaType.Mapped, "Categories"); daProducts.FillSchema(ds, SchemaType.Mapped, "Products"); daOrders.FillSchema(ds, SchemaType.Mapped, "Orders"); daOrderDetails.FillSchema(ds, SchemaType.Mapped, "Order Details"); // add the relations ds.Relations.Add("Categories_Products", ds.Tables["Categories"].Columns["CategoryID"], ds.Tables["Products"].Columns["CategoryID"]); ds.Relations.Add("Orders_OrderDetails", ds.Tables["Orders"].Columns["OrderID"], ds.Tables["Order Details"].Columns["OrderID"]); ds.Relations.Add("Products_OrderDetails", ds.Tables["Products"].Columns["ProductID"], ds.Tables["Order Details"].Columns["ProductID"]); // output the XSD schema ds.WriteXmlSchema(@"c:\Northwind.xsd");
The following code is a partial listing of the XSD schema for the Categories and Products tables in the Northwind database. Note that the msdata namespace is defined to add Microsoft-specific extensions, including the read-only and auto-increment column properties. These attributes are described in more detail later in Appendix B.
<?xml version="1.0" standalone="yes"?> <xs:schema <xs:element <xs:complexType> <xs:choice <xs:element <xs:complexType> <xs:sequence> :sequence> </xs:complexType> </xs:element> <xs:element <!-- Product definition omitted. --> <>
From this schema, a strongly typed DataSet can be created using Visual Studio .NET or the XML Schema Definition Tool.
To create a strongly typed DataSet from the XSD schema using Visual Studio .NET, right-click on the project in the Solution Explorer window, choose Add / Existing Item... from the shortcut menu, and select the XSD file to add it to the project. Double-click on the XSD file to open it in the designer window. Right-click on the designer window and select Generate DataSet. To see the strongly typed DataSet file in the Solution Explorer window, select Show All Files from the Project menu. The strongly typed DataSet class Northwind.cs lists as a child of the Northwind.xsd node.
The second way to create a strongly typed DataSet from an XSD schema is to use the XML Schema Definition Tool (XSD.EXE) found in the .NET Framework SDK bin directory. To generate the class file from Northwind.xsd, issue the following command from the command prompt:
xsd Northwind.xsd /d /l:CS
The /d switch specifies that source code for a DataSet should be created, while the /l switch specifies that the utility should use the C# language, which is the default if not specified. The XML Schema Definition Tool offers additional options that are documented in the .NET Framework SDK documentation.
The resulting class file for the strongly typed DataSet is named using the DataSet name in the XSD schema and an extension specifying the language. In this case, the file is named Northwind.cs because the DataSet was named Northwind when it was constructed. The strongly typed DataSet is ready to be added to a project.
As mentioned before, the strongly typed DataSet is simply a collection of classes extending the functionality of the untyped DataSet. Specifically, three classes are generated for each table in the DataSet: one for each DataTable, DataRow, and DataRowChangeEvent. This section provides a brief overview of the classes generated and discusses the more commonly used methods, properties, and events of those classes.
A class called TableNameDataTable is created for each table in the DataSet. This class inherits from DataTable and implements the IEnumerable interface. Table 13-1 lists the commonly used methods of this class specific to strongly typed DataSet objects.
A class called TableNameRow is created for the DataRow in each table. This class inherits from DataRow. The class also exposes a property for each column in the table with the same name as the column. Table 13-2 lists the commonly used methods of this class.
Finally, a class called TableNameRowChangeEvent is created for each table in the DataSet. This class inherits from EventArgs. Table 13-3 lists the method of this class. | http://etutorials.org/Programming/ado+net/Part+I+ADO.NET+Tutorial/Chapter+13.+Strongly+Typed+DataSets/13.1+Creating+a+Strongly+Typed+DataSet/ | CC-MAIN-2018-09 | refinedweb | 897 | 57.37 |
netCDF4 Files Creation and Conventions¶
The Salish Sea MEOPAR project uses netCDF4 files as input for the NEMO model and for other purposes, where appropriate. This section documents the recommended way of creating netCDF4 files with compression of variables, limitation of variables to appropriate precision, and appropriate metadata attributes for the variables and the dataset as a whole. The recommendations are based on the NetCDF Climate and Forecast (CF) Metadata Conventions, Version 1.6, 5 December, 2011. Use of the netCDF4-python library (included in Anaconda Python Distribution) is assumed.
The nc_tools Module in the SalishSeaTools Package is a library of Python functions for exploring and managing the attributes of netCDF files. The PrepareTS.ipynb notebook shows examples of the use of those functions.
Creating netCDF4 Files¶
All of the following code examples assume that the netcdf4-python library has been imported and aliased to nc:
import netCDF4 as nc
Datasets and Files¶
Create an “empty” netCDF4 dataset and store it on disk:
foo = nc.Dataset('foo.nc', 'w') foo.close()
The
Dataset constructor defaults to creating NETCDF4 format objects.
Other formats may be specified with the format keyword argument
(see the netCDF4-python docs).
The first argument that
Dataset takes is the path and name of the netCDF4 file that will be created, updated, or read.
The second argument is the mode with which to access the file.
Use:
- w (write mode) to create a new file, use clobber=True to over-write and existing one
- r (read mode) to open an existing file read-only
- r+ (append mode) to open an existing file and change its contents
Dimensions¶
Create dimensions on a dataset with the
createDimension() method,
for example:
foo.createDimension('t', None) foo.createDimension('z', 40) foo.createDimension('y', 898) foo.createDimension('x', 398)
The first dimension is called t with unlimited size (i.e. variable values may be appended along the this dimension). Unlimited size dimensions must be declared before (“to the left of”) other dimensions. NEMO supports only a single unlimited size dimension that is used for time.
The other 3 dimensions are obviously spatial dimensions with sizes of 40, 898, and 398, respectively.
The recommended maximum number of dimensions is 4. The recommended order of dimensions is t, z, y, x. Not all datasets are required to have all 4 dimensions.
Variables¶
Create variables on a dataset with the
createVariable() method,
for example:
lats = foo.createVariable('nav_lat', float, ('y', 'x'), zlib=True) lons = foo.createVariable('nav_lon', float, ('y', 'x'), zlib=True) depths = foo.createVariable('Bathymetry', float, ('y', 'x'), zlib=True, least_significant_digit=1, fill_value=0)
The first argument to
createVariable() is the variable name.
For files read by NEMO the variable names must be those that NEMO expects.
The second argument is the variable type. There are many way of specifying type, but Python built-in types work well in the absence of specific requirements.
The third argument is a tuple of previously defined dimension names. As noted above,
- The recommended maximum number of dimensions is 4
- The recommended order of dimensions is t, z, y, x
- Not all variables are required to have all 4 dimensions
All variables should be created with the zlib=True argument to enable data compression within the netCDF4 file.
When appropriate, the least_significant_digit argument should be used to improve compression and storage efficiency by quantizing the variable data to the specified precision. In the example above the depths data will be quantized such that a precision of 0.1 is retained.
When appropriate, the fill_value argument can be used to specify the value that the variable gets filled with before any data is written to it. Doing so overrides the default netCDF _FillValue (which depends on the type of the variable). If fill_value is set to False, then the variable is not pre-filled. In the example above the depths data will be initialized to zero, the appropriate value for grid points that are on land.
Writing and Retrieving Data¶
Variable data in netCDF4 datasets are stored in NumPy array or masked array objects.
An appropriately sized and shaped NumPy array can be loaded into a dataset variable by assigning it to a slice that span the variable:
import numpy as np d[:] = np.arange(48, 51.1, 0.1)
and values can be retrieved using most of the usual NumPy indexing and slicing techniques.
There are differences between the NumPy and netCDF variable slicing rules; see the netCDF4-python docs for details.
netCDF4 File Conventions¶
The NetCDF Climate and Forecast (CF) Metadata Conventions, Version 1.6, 5 December, 2011 has the following stated goal: sense that each variable in the file has an associated description of what it represents, including physical units if appropriate, and that each value can be located in space (relative to earth-based coordinates) and time.
Datasets created by the Salish Sea MEOPAR project shall conform to CF-1.6. NEMO results nominally conform to an ealier version, CF-1.1.
Global Attributes¶
Global attributes are on the dataset. The can be access individually as attributes using dotted notation:
foo.Conventions = 'CF-1.6'
or in code using the methods on a
Dataset object.
Required¶
All datasets should have values for the following attributes unless there is a very good reason not to.
The following are defined in CF-1.6. See that documentation for more details of the intent behind these attributes.
- Conventions
Identification of conventions.
Example:
foo.Conventions = 'CF-1.6'
- title
A succinct description of what is in the dataset.
Example:
foo.title = 'Salish Sea NEMO Bathymetry'
- institution
Specifies where the dataset was produced.
Example:
foo.institution = 'Dept of Earth, Ocean & Atmospheric Sciences, University of British Columbia'
- source
The method of production of the original dataset. For datasets created via IPython Notebooks or code modules this should be the URL of the source code in the tools on Bitbucket.
Example:
foo.source = ''
- references
Published or web-based references that describe the dataset or methods used to produce it. This should include the URL of the dataset in the appropriate repo (typically NEMO-forcing) on Bitbucket.
Example:
foo.references = ''
- history
Provides an audit trail for modifications to the original dataset. Each line should begin with a timestamp indicating the date and time of day when the modification was done.
Example:
foo.history = """ [2013-10-30 13:18] Created netCDF4 zlib=True dataset. [2013-10-30 15:22] Set depths between 0 and 4m to 4m and those >428m to 428m. [2013-10-31 17:10] Algorithmic smoothing. """
- comment
Miscellaneous information about the dataset or methods used to produce it.
Example:
foo.comment = 'Based on 1_bathymetry_seagrid_WestCoast.nc file from 2-Oct-2013 WCSD_PREP tarball provided by J-P Paquin.'
Variable Attributes¶
Variable attributes are on particular variables in the dataset. The can be access individually as attributes using dotted notation:
depths.units = 'm'
or in code using the methods on a
Variable object.
Required¶
All variables should have values for the following attributes unless there is a very good reason not to.
The following are defined in CF-1.6. See that documentation for more details of the intent behind these attributes.
- units
Required for all variables that represent dimensional quantities. The value of the units attribute is a string that can be recognized by UNIDATA’s Udunits package, with a few exceptions.
Example:
depths.units = 'm'
Exceptions and special cases:
- For latitude use
units = 'degrees_north'
- For longitude use
units = 'degrees_east'
- For time use
units = `seconds since yyyy-mm-dd HH:MM:SS'with an actual date/time
- For practical salinity use
units = 1and
long_name = 'Practical Salinity'
- long_name
A long descriptive name which may, for example, be used for labeling plots.
Example:
depths.long_name = 'Depth'
As Applicable¶
- calendar
The calendar to use on a time axis to calculate a new date and time given a base date, base time and a time increment.
Example:
time.calendar = 'gregorian'
- positive
The direction of positive (i.e., the direction in which the coordinate values are increasing) for a vertical coordinate. For Salish Sea MEOPAR files this is applicable to depths and a value of down is used, indicating that the depth of the surface is 0 and depth values increase downward.
Example:
depths.positive = 'down'
- valid_range
Smallest and largest valid values of a variable. If valid minimum and maximum values for a variable can be stated, use this instead of valid_min and valid_max.
Example:
depths.valid_range = np.array((0.0, 428.0))
- valid_min
Smallest valid value of a variable. Use this only if there is no value for valid_max, otherwise, use valid_range.
Example:
sal.valid_min = 0
- valid_max
Largest valid value of a variable. Use this only if there is no value for valid_min, otherwise, use valid_range.
Example:
foo.valid_max = 42
- _FillValue
- The value that a variable gets filled with before any data is loaded into it. Each data type has a default for _FillValue, but a variable-specific value can be specified in the
createVariable()method (see Variables).
- standard_name
A name used to identify the physical quantity. A standard name contains no whitespace and is case sensitive. The standard_name attribute is typically used where a descriptive, code-friendly alternative to the long_name or the variable name itself is needed.
Example:
sal.standard_name = 'practical_salinity'
Applying netCDF4 Variable-Level Compression¶
NEMO-3.4 produces netCDF files that use the 64-bit offset format. The size on disk of those files can be reduced by up to 90% (depending on the contents of the file) by converting them to netCDF-4 format and applying Lempel-Ziv compression to each variable. The ncks tool from the NCO package can be used to accomplish that:
$ ncks -4 -L4 -O SalishSea_1d_grid_T.nc SalishSea_1d_grid_T.nc
Note
The above command replaces the original version of the file with its netCDF4 compressed version.
The -4 argument tells ncks to produce a netCDF-4 format file.
The -L4 argument causes level 4 compression to be used. Level 4 is a good compromise between the amount of compression that is achieved and the amount of processing time required to do the compression.
The -O argument tells ncks to over-write existing file without asking for confirmation.
The file names are the input and output files, respectively.
NEMO-3.6 produces netCDF files that use the netCDF-4 format with level 1 Lempel-Ziv compression applied to each variable. As above, the size of those files on disk can be reduced by up to 90% (depending on the contents of the file) by increasing the compression level to 4. The command to do so is the same:
$ ncks -4 -L4 -O SalishSea_1d_grid_T.nc SalishSea_1d_grid_T.nc
Note
The above command replaces the original version of the file with its netCDF4 compressed version. | http://salishsea-meopar-tools.readthedocs.io/en/latest/netcdf4/index.html | CC-MAIN-2018-13 | refinedweb | 1,783 | 55.84 |
Introduction to Variables in C++
Variables in C++ acts as a memory location, it is nothing but the name of the container or element that stores the data or values that are being used in the program later for execution. It can be defined using the combination of letters digits, or special symbols like underscore(_), defined by using the data types like char, int, float, double. Variables can be anything except the reserved keyword, the first letter of the variables must start with the letter only.
Variables are the most important part of any programming language. Any programming language is incomplete without a variable. We can also say that without variables, the program cannot run. Like any other programming language, the C++ language also needs variables to run their program. Variables are not used to run the program, instead, they are used to store the value or string. Without storing value, the program cannot run. Hence, variables are known for the backbone of the programming language. In C++ any word except the keywords is used as a variable. To define variables we need to specify the type for the variable. Type can be anything int, double, char, float, long int, short int, etc. int is used to store integer value i.e. 5, 19, 519, 1000. Char is used to storing the character or string i.e. a, educate. Float is used to store the float values like 2.3, 3.679, 9.45. Long int is used to store long integer values. In this article, we are going to discuss how to initialize and declare the variables in the C++ language. And the types of variables.
Rules and Regulations for Defining Variables in C++ Language
- Variables can be a mixture of digits, special characters like and percent (&), underscore (_) or string.
- Upper and lower cases are treated as different variables as C++ is case sensitive language. Educba and eduCBA will be treated as two different variables.
- C++ variables must be started with the character. It will not consider the number as a first character. 6educba is not a valid variable because it starts with the number where educba6 can be a valid variable as it started with the character.
- variables in C++ language should not be a keyword. for, this, if, else, while, do, char, this, etc are the keywords that are used for the specific purpose. These keywords cannot be used as a variable in C++.
- Blank spaces are not allowed for the variables. Edu cba is not valid as there is space between edu and cba where educba is valid variable or edu_cba is also a valid variable as underscore sign is used to join the variable.
How do Variables Work in C++ Language?
- Declaration of variables informs compiler the types of data variables will be used in the program.
- Declaration of variables names informs compiler the name of the variables that are used to store the value in the program.
- While declaring variables I tell the compiler the storage that variables need. The compiler does not have to worry about the storage until it is declared.
How to Declare Variables in C++ Language?
Variables can be declared first before starting with the programs. The syntax for declaration of a variable is as follows
data_type variable_name;
where
data_type: Defines types of data for storing value. Data types can be int, char, float, double, short int, etc.
variable_name: Defines the name of the variables. It can be anything except the keyword.
For example,
1. int cab;
2. float 6.9, 7.3
For example 1, int is a data type and cab is a variable name. In the second example, we have declared two variables where the float is a data type and 6.9 and 7.3 are variables.
Once the variables are declared, the storage for those variables has been allocated by the compiler as it will be used for the program.
Program to Illustrate the Declaration of Variables in C++ Language
#include<iostream>
using namespace std;
int main()
{
int x, y, z;
x = 10;
y = 3;
z = x + y;
cout << "Sum of two numbers is: " << z;
return 0;
}
How to Initialize Variables in C++ Language?
In C++, variables can be initialized by assigning the values at the time of declaration. The syntax for initialization of variables in C++ language is –
data_type variable_name = value;
For example,
- int x = 10;
- char b = ‘eduCBA’
In example 1, we initialized variable x with value 10. In example 2, we have initialized b as a character with eduCBA value.
Program to Illustrate Initialization of Variables in C++ Language
#include<iostream>
using namespace std;
int main()
{
int x = 5, y = 15;
int z = x + y;
cout << "Sum of two numbers is: "<< z;
return 0;
}
Types of Variables in C++ Language
There are 5 types of variables in C++ language which are as follows:
1. Local Variables
Local variables are declared inside the function. Local variables must be declared before they have used in the program. Functions that are declared inside the function can change the value of variables. Functions outside cannot change the value of local variables.
Here is an example
int main()
{
int x = 2; //local variable
}
2. Global Variables
Global variables are declared outside the functions. Any functions i.e. both local function and global function can change the value of global variables.
Example is given as follows,
int y = 10; //global variable
int main()
{
int x = 5; //local variable
}
3. Static Variables
These variables are declared with the word static.
Example is given as follows,
int main()
{
int x = 5; //local variable
static y = 2; //static variable
}
4. Automatic Variables
Automatic variables are declared with the auto keyword. All the variables that are declared inside the functions are default considered as an automatic variable.
Example is given as follows,
int main()
{
int x = 20; //local variable (Automatic variable)
auto y = 12; //automatic variable
}
5. External Variables
By using the extern keyword, external variables are declared.
extern z = 4; //external variable
Conclusion
In this article, we have seen the importance of variables in C++ language and how to work with variables with the help of examples. Also, we have seen five different types of variables in the C++ language with examples. I hope you will find this article helpful.
Recommended Articles
This is a guide to Variables in C++. Here we discuss Introduction, how to use Variables in C++ along with Examples. You can also go through our other suggested articles – | https://www.educba.com/variables-in-c-plus-plus/ | CC-MAIN-2020-29 | refinedweb | 1,083 | 65.42 |
In this section we will discuss about how to construct file path in Java.
A file path can be constructed manually or by a Java program. It is best practice to construct a file path in Java by a Java Program. When you will construct a file path manually the path name will be system dependent because the file separator character is system dependent. For example : Windows uses '\' (separator character) and UNIX uses '/' (separator character). But, we can construct a file path system independent due to separator and separatorChar. These are public static fields of the java.io.File class. So, when such fields are used to construct a file path it separates the path name by the separator what the underlying system uses to separate the file path name.
Example
An example given below demonstrates that how to construct a file pathname by a Java program in Java. In this example I have created a Java class where I have write the code for command line input and then get the user's current working directory and uses the File.separator to apply the separator character in constructing of file pathname. Then used createNewFile() method to create a new file with the specified name in the specified directory. In this example a file name can be given at the time of execution of the class i.e. file name can be given through command line.
Source Code
JavaConstructFilePathnameExample.java
import java.io.File; import java.io.IOException; public class JavaConstructFilePathnameExample { public static void main(String args[]) { String f = args[0]; String currDir = System.getProperty("user.dir"); System.out.println("\n Current Working Directory = "+currDir); String fileName = currDir + File.separator + f; File file = new File(fileName); if(file.exists()) { System.out.println("\n File '"+file.getName()+"' already existed"); } else { try { file.createNewFile(); System.out.println("\n File '"+file.getName()+ "' created successfully "); System.out.println("\n Path of '"+file.getName()+ "' file is : "+file.getAbsolutePath()); } catch(IOException ioe) { System.out.print(ioe); } } }// main() closed }// class closed
Output
When you will execute the above example you will get the output (white marked on console) as follows : Construct File Path
Post your Comment | http://www.roseindia.net/java/example/java/io/constructFilePath.shtml | CC-MAIN-2016-30 | refinedweb | 359 | 51.44 |
NamedNodeMap::getAttributeItem() reasonably assumes that an attribute with null namespace will also have a null prefix. But we actually let one create nodes with null namespace and non-null prefix, due to a bug introduced in <>.
Patch forthcoming.
Created attachment 46727 [details]
proposed fix
Comment on attachment 46727 [details]
proposed fix
Normally if you are comparing an atomic string to a string constant, we use an atomic string constant to make the check less expensive.
I suggest we add the strings "xmlns", "", and "xml" all to XMLNames.h alongside xmlNamespaceURI and use named constants instead of literal strings.
I'm going to say review+ but I am close to a review- because I'd like us to use atomic strings when comparing with other atomic strings.
Comment on attachment 46727 [details]
proposed fix
Oops, I said review+ but then chose review-!
Committed <>.
Mass moving XML DOM bugs to the "DOM" Component. | https://bugs.webkit.org/show_bug.cgi?id=33752 | CC-MAIN-2020-05 | refinedweb | 151 | 61.36 |
wcpcpy - copy a wide-character string, returning a pointer to its end
Synopsis
Description
Colophon
#include <wchar.h>
wchar_t *wcpcpy(wchar_t *dest, const wchar_t *src);
Feature Test Macro Requirements for glibc (see feature_test_macros(7)):
wcpcpy():
The wcpcpy() function is the wide-character equivalent of the stpc.
wcpcpy() returns a pointer to the end of the wide-character string dest, that is, a pointer to the terminating null wide character.
POSIX.1-2008.
strcpy(3), wcscpy(3)
This page is part of release 3.44 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at. | http://manpages.sgvulcan.com/wcpcpy.3.php | CC-MAIN-2017-09 | refinedweb | 104 | 64.41 |
“Two cruise boats, two Lochs and some amazing cycling!”
I had heard about the two Lochs tour but wasn't quite sure what it entailed. I knew it involved joining Loch Lomond up with Loch Katrine but my geography let me down after that. Luckily the crew at Cruise Loch Lomond set me right and told me that it involved 2 cruise boats, 2 lochs and 33 k of cycling. A doddle!
Recruiting a friend, we set off down the A82 to Tarbet on Loch Lomond. Leaving the car in the car park (its still free parking) and grabbing an energy boosting hot chocolate and our cruise tickets at the Bonnie and Ben cafe, we made our way down to the Pier.
Taking the 10.00am boat from Tarbet, it was a half hour crossing to Inversnaid. This gave us plenty of time to make our next sailing at 11.30am at the Stronachlachar Pier.
What a lovely way to spend the first half hour listening to Archie our skipper regale us with stories and history from the Loch. Archie also told us that he had cycled the 2 Lochs tour the year before and assured us that 33k was not that long!
Disembarking at Inversnaid we made a quick toilet stop at the Inversnaid hotel, situated beside the beautiful Arklet waterfalls. Archie had warned us about the the very steep hill leaving Inversnaid, and boy, had he been right!
Managing the first tight corner we had to admit defeat and pushing our bikes we huffed and puffed to the top. It didn’t take long and once at the top we where able to catch our breath and enjoy the half hour cycle and 6k route down to Stronachlachar and our next boat to catch.
Grabbing a quick takeaway tea at the Stronachalar tea room we waited at the Pier for our next boat, The Sir Walter Scott steamboat. It was lovely watching it come into the pier, steam puffing gently out its funnels. We were joined by visitors from Lochs and Glens Holidays, who were very impressed with our grand day out and how far we were going to cycle!
Having experienced many cruises on Loch Lomond it was lovely to enjoy a different area of the National Park. As a born and bred east coaster it was like putting different parts of the jigsaw together in respect of geography. I now knew what was behind Inversnaid and it was even better knowing I had got there by pedal power and not by car.
The cruise from Stronachlachar to the Trossachs Pier takes an hour and from the boat we could see the Loch side route that we would be cycling, all 33k of it!
Disembarking at the pretty Trossachs pier we had time for a quick lunch at the Brenachoile Cafe and then we where on our way. Cycling on the road back to Inversnaid we were relaxed knowing we had a 3.5 hour window in which to reach our destination. We had been advised that the average cycle time for the route was 2.5 hours, so this gave us plenty of time for photos and pit stops.
There are no refreshment stops on the road to Stronachlachar so we made sure we had plenty of water with us.
As recreational cyclists this was perfect for us, undulating with some long uphill segments, and as soon as you felt your legs about to give up, the next corner would open up to a glorious downhill.
There were also lots of excuses to stop and take in the incredible views and as this is Rob Roy country, we were told to look out for historical signposts.
What we found to be especially welcome on this journey was the lack of traffic on the road. From the start we pretty much had the road to ourselves and in fact only passed one National Park Van the whole day.
It’s a bit of a sneaky section approaching the far end of the Loch, there’s a good 5 mile section where you are looking across at Stronachlachar, teasing you with its proximity. Its almost a stones throw away and with no bridge (why would there be?) we pushed on, looking forward to a cold drink on our return trip back to Tarbet.
Catching up with skipper Archie on board we had that satisfied glow you get after a day of fresh air, beautiful scenery and exercise. We absolutely deserved the Loch Lomond Ale we had in our hands and we were soon chatting about when we would do it again. That’s a sure sign of a good day out!?
Prices:
Cruise Loch Lomond £12.00 return (£1.00 extra with bikes)
Loch Katrine £14.00 Stronachlachar to Trossachs Pier.
Tickets sold separately and we advise you to book in advance to ensure a space.
Recommend for children 12+ who can cycle 33k
For more information and to book your tickets on the Two Lochs Tour go to:
w or call
01877 376 315
w or call
01301 702 356 | https://medium.com/@cruisell/2-cruise-boats-2-lochs-and-some-amazing-cycling-d71462e7ce4b?source=rss-9701916b0fdb------2 | CC-MAIN-2021-43 | refinedweb | 853 | 77.87 |
Mike
--
Mike Levin
[email protected]
I would recommend using OpenGL and its GLUT toolkit, which lets you create
simple applications with a window to draw in.
hth
meeroh
--
If this message helped you, consider buying an item
from my wish list: <>
the apple developer sw is free, last time i checked anyway.
when you install it, it also installs all the traditional
command line tools like cc (gcc), linker, standard libraries, etc.
you also want to make sure you install the BSD subsystem so
you get all the man entries.!
> programming from time to time). I've checked out the Apple developers kit on
> their website, and it looks like total overkill for me
If your prograqms are similar creating one template and then copy
and modifying it shouldn't be too hard. But the Dev kit does make
life easy even for plain C programming.
> Could someone give me some advice: what's the simplest way to write a simple
> C program to draw some color dots on a 2D square of the screen?
The simplest way is to join the program and use the Apple tools.
But you can do it by hand if you really want.
> can I just get a C compiler which would run from the shell
> (I need executables, maybe for gcc or something like this)?
gcc and the other tools are all there, just open a terminal
session, start vi or emacs and hack away. You need to find the
right libraries and include them of course but they too are there
to use.
But for casual use I'd almost never use C, it's the kind of task
scripting languages were made for!
Alan G.
Author of the Learn to Program website
>!
I already have a bunch of C code which does quite a bit of number
crunching (I'm moving from a different platform). I'd rather not change all
of it...
> The simplest way is to join the program and use the Apple tools.
> But you can do it by hand if you really want.
is this the Xcode package, or have I got the wrong thing?
Thanks for the help!
The question about IDE aside (I would recommend using "free" Xcode),
here is the basic app that draws pixel.
#include <Carbon.h>
int main(void)
{
WindowRef window;
Rect r = { 10, 50, 300, 300 };
RGBColor color = { 0xFFFF, 0, 0 }
window = NewCWindow(NULL, &r, "\pTitle", true, 0, (WindowPtr)-1, 0, 0);
SetPort(GetWindowPort(window));
SetCPixel(100, 100, &color);
QDFlushPortBuffer(GetWindowPort(window), NULL);
while (!Button()) {}
return 0;
}
--
Mike Kluev
PS. Remove "-DELETE-." part of my e-mail address to reply.
Yep. I'm using the XCode thing at the mo' and lovin' it.
James
Here's how to compile and link programs with quickdraw calls without
using a fancy IDE. I don't know how you find this kind of stuff out
other than asking on news groups...
cc tst.c -o tst -framework Carbon
Here's the test program posted earlier, with a few typos fixed...
#include <Carbon/Carbon.h>
int main(void)
{
WindowRef window;
Rect r = { 40, 50, 300, 300 };
RGBColor color = { 0xFFFF, 0, 0 };
window = NewCWindow(NULL, &r, 0, true, 0, (WindowPtr)-1, 0, 0);
SetPort(GetWindowPort(window));
SetCPixel(100, 100, &color);
QDFlushPortBuffer(GetWindowPort(window), NULL);
while (!Button()) {}
return 0;
}
rob
/* write directly to screen... */
/* scribble to first 100 rows... */
#include <Carbon/Carbon.h>
char *getfb(short *rowl,short *nrows,short *depth,short *rowbytes);
main()
{
short rowl,nrows,depth,rowbytes;
char *fb,val;
long i,j,jump;
fb = getfb(&rowl,&nrows,&depth,&rowbytes);
rowbytes &= 0x7fff;
jump = rowbytes - rowl*depth;
for(j=0;j<100;j++)
{
for(i=0;i<rowl*depth;i++)
*fb++ = val++;
fb += jump;
}
}
char *getfb(short *rowl,short *nrows,short *depth,short *rowbytes)
{
GDHandle theGDList;
Ptr theBase;
theGDList = GetDeviceList();
theBase = (*(*theGDList)->gdPMap)->baseAddr;
*rowl = (*(*theGDList)->gdPMap)->bounds.right;
*nrows = (*(*theGDList)->gdPMap)->bounds.bottom;
*depth = (*(*theGDList)->gdPMap)->pixelSize;
*rowbytes = (*(*theGDList)->gdPMap)->rowBytes;
return(theBase);
}
If you're working in and targeting OS X, then Xcode (or gcc as other
posters suggest) would be ideal. Otherwise, MPW (free download from ) is an excellent command
line-oriented development environment. It can create any kind of
Classic executable, and Carbon executables for OS X.
The example posted by Mike Kluev would be easily built under MPW. Note
that using SetCPixel() in this way would be very slow if you were
plotting many pixels. In that instance, a much faster method (with
Carbon) would be an offscreen GWorld, set memory directly to plot
pixels, then CopyBits the offscreen PixMap into your window.
Toby
I will also include the following CD:
The Macintosh Programmer's C and Object Pascal Workshop, Version 3.2
Bruce Sauls
Raleigh, NC | https://groups.google.com/g/comp.sys.mac.programmer/c/rXlAAJW7IxM | CC-MAIN-2021-49 | refinedweb | 787 | 71.75 |
By coincidence, today I was looking at Java bugs related to overloading. One report had a bad cast, like yours, and that value was passed to an overloaded method in Object and String. In Java 7, type inference inferred Object and the Object-taking method was picked; in Java 8, it could infer String, picked the other method, which failed. I think the lesson is that these “under-the-hood” casts, which seem to be just getting in your face, are actually doing useful type system work.
Cannot be cast to class scala.runtime.BoxedUnit
Ah, right, I forgot about those. In that case, I guess Unit must remain a legal argument for type parameters.
Makes me wonder how often T => Unit will return BoxedUnit when the author intended it to return nothing at all, and how that affects performance?
I’m assuming that the following will create a Seq with 100 references to BoxedUnit, right?
(1 to 100).map(println)
Yep. Note that
BoxedUnit is a singleton read from a Java static field, so it doesn’t itself increase memory usage (that’d be the data structure itself).
Right. And it’s also true that whenever we assign a
Unit to a value the actual type will be a
BoxedUnit. More than that, the actual value assigned will be a reference to the same singleton object:
def doSomething(): Unit = { new Object } val u1 = () // Debugger: {BoxedUnit@793} val u2 = doSomething() // Debugger: {BoxedUnit@793} val u3 = (new Object).asInstanceOf[Unit] // Debugger: {BoxedUnit@793}
decompiled Java code:
public void doSomething() { new Object(); } BoxedUnit u1 = BoxedUnit.UNIT; doSomething(); BoxedUnit u2 = BoxedUnit.UNIT; new Object(); BoxedUnit u3 = BoxedUnit.UNIT;
Note, we don’t see a cast to
BoxedUnit in any of those cases.
A different situation happens when we pass a
Unit as a type parameter:
def execute[R](): R = (new Object).asInstanceOf[R] val o1: Unit = execute[Object]() val o2: Unit = execute[String]() val o3: Unit = execute[Unit]() // class java.lang.Object cannot be cast to class scala.runtime.BoxedUnit
decompiled Java code:
public <R> R execute() { return (R)new Object(); } execute(); BoxedUnit o1 = BoxedUnit.UNIT; execute(); BoxedUnit o2 = BoxedUnit.UNIT; BoxedUnit o3 = (BoxedUnit)execute();
o3 is the only case when Scala doesn’t throw away the object but tries to cast it to
BoxedUnit.
Same here:
class MyFunc[R] extends Function0[R] { def apply(): R = (new Object).asInstanceOf[R] } val myFunc2 = new MyFunc[Unit] val r2: Unit = myFunc2() // java.lang.ClassCastException: class java.lang.Object cannot be cast to class scala.runtime.BoxedUnit val myFunc1 = new MyFunc[AnyVal] val r1: Unit = myFunc1() // SUCCESS!
decompiled Java code:
public class MyFunc<R> extends Object implements Function0<R> { public R apply() { return (R)new Object(); } } MyFunc myFunc2 = new MyFunc(); BoxedUnit r2 = (BoxedUnit)myFunc2.apply(); MyFunc myFunc1 = new MyFunc(); myFunc1.apply(); BoxedUnit r1 = BoxedUnit.UNIT;
Related:
**Welcome to Scala 2.13.0 (OpenJDK 64-Bit Server VM, Java 1.8.0_222). Type in expressions for evaluation. Or try :help. > 1.asInstanceOf[Double] res0: Double = 1.0 > def convert[A, B](x: A): B = x.asInstanceOf[B] convert: [A, B](x: A)B > convert[Int, Double](1) java.lang.ClassCastException: java.lang.Integer cannot be cast to java.lang.Double at scala.runtime.BoxesRunTime.unboxToDouble(BoxesRunTime.java:113) ... 28 elided**
There is clearly an inconsistency in how asInstanceOf works. Sometimes it converts the value to a value of another type, and sometimes it preserves the value and merely ascribes a new type to it.
I suppose if the target is a parameter, the only possible thing to do is to preserve the value. For the sake of consistency, it should always do that.
No, this is not what
Unit means in the spec.
Unit means you expect/request/require that the method returns
(), the singleton value of the
Unit type. Since it in fact returns an
Object, which is not an instance of
Unit, the cast
.asInstanceOf[R] is invalid, and from that point on the compiler is free to throw a
ClassCastException, either immediately or later, or not at all.
There is no other way to interpret the specification of the language. The compiler is not doing anything wrong.
What’s incorrect is the original definition of
execute. It should say that it returns an
Object. It shouldn’t lie by pretending it can return any
R. This is whether the code is written in Scala or in Java. This is the root of the problem. The behavior of the compiler with
Unit/
BoxedUnit has nothing to do with it.
That’s an entirely different problem, for which I completely agree that the compiler is wrong. It has nothing to do with the fact that
x is a parameter or not. This behavior happens when the static type of
x is known to be a primitive type, and the type in the brackets is also a primitive type. In that case, for some reason, it compiles it as a coercion (like
.toDouble) instead of a cast.
There is nothing in the spec supporting this behavior. Writing this code should warn that it doesn’t do the right thing, and eventually become a compile error.
Would you like to submit a PR?
Then why is the following cast valid?
val u: Unit = (new Object).asInstanceOf[Unit] // u: Unit = ()
Do you have reason to believe that it is? It looks to me like a compiler artifact that it happens to work…
I’ve never submitted a PR for the compiler before, but it sounds exciting and I might consider it if there is some guidance. Are there some instructions? Thanks!
This what @sjrd just described:
Except that the compiler actually always tries to do this when the type between brackets is a primitive type. But
Unit is the only primitive type for which the coercion works when type
x is not a primitive type.
The difference with
def execute[R](): R = (new Object).asInstanceOf[R] is that
R is not statically known to be a primitive type.
The difference with
execute[Unit]() is that as far as Scala-the-language is concerned there is no cast here. In the bytecode a cast gets inserted but that’s to satisfy the JVM.
You get the same thing with other primitive types:
scala> def foo[A]: A = { val i: Int = 42; i.asInstanceOf[A] } foo: [A]=> A scala> foo[Double] java.lang.ClassCastException: java.lang.Integer cannot be cast to java.lang.Double at scala.runtime.BoxesRunTime.unboxToDouble(BoxesRunTime.java:113) ... 36 elided scala> val i: Int = 42; i.asInstanceOf[Double] i: Int = 42 res3: Double = 42.0
The readme of, along with some pages it points to, is actually quite complete. It should get you through the basics. In particular is quite good!
I have a slightly different understanding, or, if you like, a different misunderstanding.
Here is the ticket for folks who want
asInstanceOf[Unit] to fail.
Here is Lukas’s improvement to boxing I mentioned before. It includes special casing
x.asInstanceOf[Unit] so that
null is handled correctly.
But I think it’s OK not to throw; maybe there should be a lint for this case, something like “dubious cast.” Because usually
asInstanceOf means you have more information than the compiler about the safety of the cast.
This thread began with “why is there too much unboxing of Unit”, but there is too little unboxing:
scala> def fromNull[A]: A = null.asInstanceOf[A] fromNull: [A]=> A scala> null.asInstanceOf[Unit] == fromNull[Unit] ^ warning: comparing values of types Unit and Unit using `==` will always yield true res0: Boolean = false scala> println(fromNull[Unit]) null
That is, it needs
box(unbox(x)), where
unbox could fail, as it ought to do here:
scala> def f[A]: A = new Object().asInstanceOf[A] f: [A]=> A scala> () == f[Unit] ^ warning: comparing values of types Unit and Unit using `==` will always yield true res0: Boolean = false scala> 42 == f[Int] java.lang.ClassCastException: class java.lang.Object cannot be cast to class java.lang.Integer (java.lang.Object and java.lang.Integer are in module java.base of loader 'bootstrap') at scala.runtime.BoxesRunTime.unboxToInt(BoxesRunTime.java:100) ... 28 elided
When you do a comparison the
BoxedRunTime.equals(Object, Object)Boolean method is called. It accepts parameters of type
Object, which is
Any in Scala. I that case, there is no need to downcast the actual values and the behaviour is equivalent to:
def f[A]: A = new Object().asInstanceOf[A] val a: Any = f[Unit]() // a: Any = java.lang.Object@4f45ed22
The behaviour is different for numbers, characters and null. If we compare numbers, then
BoxedRunTime will perform an actual cast.
Looking at all these corner cases I can see that the
Unit has a special place in the type hierarchy, regarding its conversion principles. Some of them are aimed at satisfying the JVM and others at making the life of the developers easier.
I don’t see any point in casting a value to a
BoxedUnit when the expected type is
Unit. For me, it’s one of the omitted cases, when the value discarding mechanism should take place.
I see your opinion is unchanged from your original post.
Unit is interesting only because it is so boring. Moreover, boxed unit is also boring. The utility of the behavior you decry is that it exposes your bug. There is some interest in avoiding needless paired calls to box/unbox the Unit value.
The other behavior you haven’t addressed is your example in
Nothing. It’s obvious the result of type
Nothing must throw. Your example throws
ClassCastException because of the bug, but if it did not unbox the value, you wouldn’t see the usual
NPE as thrown by
null.asInstanceOf[scala.runtime.Nothing$].
The value discard conversion only happens if conversion is required. Since
Nothing and
Unit already conform to
Unit, there can be no discard.
In a related ticket, someone suggests that
Unit should not be special in this regard, but that any
Singleton type should incur a discard. | https://contributors.scala-lang.org/t/cannot-be-cast-to-class-scala-runtime-boxedunit/3599?page=2 | CC-MAIN-2019-39 | refinedweb | 1,673 | 58.99 |
A guide to the Clojure Spec library
The clojure.spec library was introduced in the
1.9.0 version of the language. Born out of a recognition that documentation , of individual functions and collective behaviours , was not adequate for complex systems using clojure. This may have something to do with dynamic typing , being succinct as it is , the compiler does not have an exhaustive amount of information to work with. Furthermore, without the use of type hints and annotations it is up to the programmer to write extensive, well-thought-out unit tests to allow complex projects to survive.
Sign up to WorksHub to join our community of talented developers sharing insights and discovering jobs and opportunities
However this is arguably not the principle of clojure, to provide all these things , and that the true abstraction itself comes from the data that we organise and manipulate. To truly reap the rewards of using lisp , namely the flexibility of homoiconicity.
Spec is sort of like our battle-armour. It's a strong outer layer that wraps around us, stopping ourselves from getting injured unnecessarily from puny foes. Or to put it bluntly, spec provides checks around our defined data. Each spec, the checks we define , are the set of allowed rules or values that data passed to a spec must uphold. Specs can be defined through predicates, sets or other composed of other specs, and the functions available in the
clojure.spec.alpha library. It aims to be able to provide verification to the underlying properties and to uphold only those properties we care about. It certainly is more flexible than a traditional type system as we can do anything from uphold the value of a single key in a map to defining specs that uphold entire types.
Data verification isn't all that spec can do for us though. As specs are just rules, just clojure functions, they can be used to perform generative-testing. Example data is generated based on the specs we define (all the properties of our data we want to be generated), which is incredibly powerful as it produces a larger number of wider-ranged, more expressive tests. This is a good way of building upon docstrings, as they can't really be leveraged by programs or other testing facilities as all they can do is project information to the human consumer. Spec is a rarity in this sense as it is effectively allowing the computer and the programmer to understand the domain of the software without having to mention ad-hoc conventions that work for one party or only the other.
To recap, what we must define are the specs themselves. We define the properties to be upheld, so then we can use the validation functions that
clojure.spec.alpha,
clojure.spec.gen.alpha and
clojure.spec.test.alpha provides.
So, to get familiar with spec let's hop into a repl:
Now theNow the
;; require the spec library ;; note , you must have a version of clojure that has a version of 1.9.0 or above (require '[clojure.spec.alpha :as s])
valid?function on some example data.
We can pass any valid predicate to it really:We can pass any valid predicate to it really:
user=> (doc s/valid?) ------------------------- clojure.spec.alpha/valid? ([spec x] [spec x form]) Helper function that returns true when x is valid for spec. nil (s/valid? zero? 0) true (s/valid? zero? 1) false
;; it just checks that the data itself can pass the tests, by invoking the "spec" user=> (s/valid? #{:a :b :c} :c) true user=> (s/valid? #{:a :b :c} :d) false
Alright, now for an actual example. We'll make a file
core.clj in the folder
hello_world.
Now, we might want to know how exactly has our spec failed, and with more ambiguous functions especially so! In this case could use theNow, we might want to know how exactly has our spec failed, and with more ambiguous functions especially so! In this case could use the
(ns hello-world.core (:require [clojure.spec.alpha :as s] [clojure.string :refer [ends-with?]])) (defn punctuated? [s] "check if a string ends with a full-stop , question or exclamation mark." (let [punctuation #{"." "?" "!"}] (some (partial ends-with? s) punctuation))) (s/valid? punctuated "Hello World!") ;; => true (s/valid? punctuated "Hello World") ;; => false
s/explainfunction in an attempt to find out what is going wrong.
Well, that's about as much as we know about the error. What gives? Well, one way of getting a more in-depth look at the results of our evaluation would be to the useWell, that's about as much as we know about the error. What gives? Well, one way of getting a more in-depth look at the results of our evaluation would be to the use
;; just pass it the spec and the arguments to it and it will return some data on the evaluation ;; all operations to *out* return nil as IO is a side effect operation (s/explain punctuated? "Hello World") "Hello World" - failed: punctuated? nil
s/explain-datafunction:
(s/explain-data punctuated? "Hi") ;; #:clojure.spec.alpha{:problems [{:path [], :pred user/punctuated?, :val "Hi", :via [], :in []}], ;; :spec #object[user$punctuated_QMARK_ 0x6f1c3f18 "user$punctuated_QMARK_@6f1c3f18"], :value "Hi"}
This is better, but it is a bit cluttered. This is because we haven't registered the spec globally. Moreover, it deals with the spec as a normal clojure function, but by defining this as a spec we can get a slightly clearer report.
Keep in mind this isn't the main reason we define specs. While this is a nice add-on, we want to define specs so we can reuse them across our library or application. By using
s/def we give it a name , typically an auto-resolved keyword like
::punctuated? and then our spec. This globally defines the spec and it adds it to the central registry for which we can now reuse specs across our application. The clojure reader will resolve it to a fully qualified namespace, so in our case it would be
:hello-world.core/punctuated? . If we are writing code for our own use, or within a company, then it won't hurt to use these simpler conventions; if we are writing libraries for public use the we should take into account possible conflicts, and include things like , but not limited to : project name, url, organisation name etc.
But for brevity, and the fact this is a little project, I'm going to use auto-resolved keywords.
(s/def ::punctuated? punctuated?) ;; now it's defined, it allows functions like doc to print some info on it: ;; (doc ::punctuated?) ;; => :hello-world.core/punctuated? ;; Spec ;; punctuated? ;; nil
;; and now if we try and use s/explain (s/explain ::punctuated? "Hello World") ;; => "Hello World" - failed: punctuated? spec: :hello-world.core/punctuated? ;; how about s/explain-data ? ;; #:clojure.spec.alpha{:problems [{:path [], :pred hello-world.core/punctuated?, :val "Hi", ;; :via [:hello-world.core/punctuated?], :in []}], :spec :user/punctuated?, :value "Hi"}
Some differences to note. the
:via keyword's value vector is populated with where the spec came from, as it is now catalogued in our registry. Also the
:spec keyword is no longer a load of jargon. It's on the whole, a lot more readable.
Side note: the difference between
:val and
:value is that
:value is everything that was passed into the spec whereas
:val is the part of the input , the value, that failed. So the two could differ if we provide things like collections.
On the whole this extended, clearer report is of marginally greater use.
But this isn't the fault of spec.
This is a very simple function. It narrows down the number of culprits to one as there is only one function. Complexity is so minimal that spec can't really produce a more detailed report as we're only checking one property of the function. Of course it is important that with each predicate, that only one property is checked with it. As all our checks are pure functions, we can compose them very simply. And this is where the fun starts and spec becomes more useful.
So, we're happy we can check that a sentence is punctuated, but we would also like to check that a given sentence starts with a capital letter.
Now we can use theNow we can use the
(ns hello-world.core (:require [clojure.spec.alpha :as s] [clojure.string :refer [ends-with? ]])) (defn punctuated? [s] "check if a string ends with a full-stop , question or exclamation mark." (let [punctuation #{"." "?" "!"}] (some (partial ends-with? s) punctuation))) (defn starts-with-capital? [s] "Check if a given word , or sentence , starts with a capital letter." (Character/isUpperCase (first s))) (s/def ::punctuated? punctuated?) (s/def ::starts-with-capital? starts-with-capital?)
s/andfunction to check if a given input is valid by both specs.
(s/def ::proper-sentence? (s/and ::punctuated? ::starts-with-capital?))
(s/valid? ::proper-sentence? "Hello there") ;; => false (s/valid? ::proper-sentence? "hello there!") ;; => false (s/valid? ::proper-sentence? "Hello there!") ;; => true
We can make new specs from smaller ones, using their names. We don't need to use the functions directly, as it is more expressive, and the specs themselves may comprise other specs. Specs are not only composable, but re-usable. We can use the
::starts-with-capital? spec for validating a noun, as they enforce the same rules.
Now before we jump into an example withNow before we jump into an example with
(s/def ::noun? ::starts-with-capital?)
s/or, it is important to note that
s/or's functionality is a little different from
s/and. The latter will just take spec forms, one after another and return true or false on whether all of them were matched. However
s/oron the other hand, if given three specs for example and a given input only returns true for one of them then that is enough to suffice a pass (that or this or that). Meaning that for
s/orto fail, every given spec must fail, so make sure each spec enforces a single property that you're validating. Otherwise the
s/explain-datareport will be of less use.
This is why when we use it, for every spec (property) we include a key that names each check so that
s/or can easily show which properties the input passed on. Let's continue this grammatical theme and introduce a set of functions that can check what kind of a sentence has been entered. Either, they are continuing an idea with "Moreover", summarising a paragraph with "In conclusion" etc.
;; all in the file writing.clj , but I'll just reference a couple fns (ns spell-checker.writing (:require [clojure.spec.alpha :as s] [clojure.spec.test.alpha :as stest] [clojure.spec.gen.alpha :as gen] [clojure.string :refer [includes?]]))
;; for the purpose of this exercise the sets are extremely simple... (defn good-addition? [sentence] (some (partial includes? sentence) #{"Furthermore" "Therefore" "In addition" "Moreover"})) (defn bad-addition? [sentence] (some (partial includes? sentence) #{"And" "The" "Because"})) (s/def ::good-addition good-addition?) (s/def ::bad-addition bad-addition?) (s/def ::check-addition (s/or :good ::good-addition? :bad ::bad-addition?))
Now I am going to contradict some of my previous advice here, as you can see that I will be reusing a lot of the functionality that are in the predicate functions, but just changing the sets around. You could just group them and do a single
and check along the lines of
(and (good-set word) (nil? (bad-set word))). But it depends on the complexity, and how much you prioritise testing on a particular part of your program.
Now it would be nice to see the type of sentence that were passed to us, whether indicative of the start or the end of a paragraph or essay.But using valid isn't a good test as we want to see WHICH property passed and what kind of data it was.
But what if we want to see the data that got returned, what kind of sentence is it? Well, this is where theBut what if we want to see the data that got returned, what kind of sentence is it? Well, this is where the
user=> (s/valid? ::check-addition? "Furthermore") true
s/conformfunction comes in. Conform will take the data and mould it into the shape of our spec.
When we use it in this case, we get back:
We can actually see the type of sentence, and the data itself can be used as the input to other functions.We can actually see the type of sentence, and the data itself can be used as the input to other functions.
user=> (s/conform ::check-addition? "Furthermore") [:addition-of-idea "Furthermore"] user=> (s/conform ::check-summary? "Clearly") [:summary "Clearly"]
s/conform, in simple cases , will just return the data it was passed if there is no reformatting of de-structuring of any kind. If the spec fails it will return the
:clojure.spec.alpha/invalidkeyword.
So let's look at an example, where all of our specs fail:So let's look at an example, where all of our specs fail:
user=> (s/conform ::starts-with-capital? "Hello") "Hello" user=> (s/conform ::starts-with-capital? "ello") :clojure.spec.alpha/invalid
user=> (s/valid? ::good-opener? "Furthermo") false
user=> (s/explain-data ::good-opener? "Furthermo") #:clojure.spec.alpha{:problems ({:path [:addition-of-idea], :pred spell-checker.core/good-addition?, :val "Furthermo", :via [:spell-checker.core/good-opener?], :in []} {:path [:example-signalling], :pred spell-checker.core/good-example?, :val "Furthermo", :via [:spell-checker.core/good-opener?], :in []} {:path [:summary], :pred spell-checker.core/good-summary?, :val "Furthermo", :via [:spell-checker.core/good-opener?], :in []}), :spec :spell-checker.core/good-opener?, :value "Furthermo"}
As not a single spec succeeded we'll get back every spec in the
s/explain-data report. If it doesn't meet any criteria, then in order for
s/conformto return something useful (that other functions in the pipeline could work with) we could have a "last-resort spec" which just returns an empty map or some basic default settings. For example, if we are reading from the database and our spec which checks for data returns nil then we should have another spec which just returns an empty map. Then conform could return that, and other functions in this pipeline wouldn't have to worry about nil-objects.
user=> (s/conform (s/or :pass ::check-addition? :fail identity) "Furthermo") [:fail "Furthermo"]
Note , using s/nilable to wrap around our specs to accept nil values does not help the overall pipeline. As conform would still return nil which isn't good for other functions that may have expected some sort of map.
However it is debatable whether this sort of thing is actually useful in a production environment. Maybe you do want to let it crash, maybe you do want to show a nil value. Such things though are great for testing, if you are doing many tests at once, to see the value that fails instead of the
:clojure.spec.alpha/invalid keyword.
Now then, that about covers our introduction to the very basics of spec. I haven't really covered anything, but that's because I wanted you to get used to spec's philosophy and it's raison d'être. We're going to expand on this kind of spelling theme by introducing the Words API. Given a word it can provide us with it's synonyms, antonyms , definitions and much more. But now our little program is letting in all sorts of data from the outside world! To keep things solid we will need to define some specs. Just some of the issues we could have is sending a request with a misspelled word and getting back an error, we might call the wrong function but because of their similar response format it may slip through.
I won't be going through the actual implementation of the http client as I want to keep things focused on spec, but you can check it out here. But nevertheless, these are the main functions we will expose and their respective annotations, so we know what our result should come back as.
synonyms - takes a word and makes a request for all it's synonyms *the-word* -> {"word" *the-word* "synonyms" [all the synonyms]}
antonyms - takes a word and makes a request for all it's antonyms *the-word* -> {"word" *the-word* "antonyms" [all the antonyms]}
definitions - takes a word and makes a request for all it's definitions *the-word* -> {"word" *the-word* "definitions" [{"definition" "the-definition" "part-of-speech" "..."}]}
examples - takes a word and show example usages in a sentence or phrase *the-word* -> {"word" *the-word* "examples" ["the-example"]}
rhymes - takes a word and makes a request for all it's rhyming words. *the-word* -> {"word" *the-word* "rhymes" {"all" [all the rhymes]}}
syllables - takes a word and shows the number of syllables *the-word* -> {"word" *the-word* "syllables" {"count" the-count "list" [all the syllables]}}
frequency - takes a word and shows how common the word is on a scale of 1 - 10. *the-word* -> {"word" *the-word* "frequency" {"zipf" 1<=x<=10 "per-million" number "diversity" 0<=x<=1}}
And if we provide a word that isn't in the dictionary we get back a stacktrace error, with the HTTP request. In the body we would get
{"success" false "message" "word not found"} so we need to make sure that we check the word is in the dictionary before the request is issued.
Ok so let's construct the spec for checking words
(require '[clojure.string :refer [split-lines]]) (defn load-dictionary [] (-> (slurp "src/words.txt") (split-lines) (set)))
(def dictionary (load-dictionary))
;; check that a word is in the dictionary ;; all the words in our text file are lower case so it won't pass unless the other two checks pass. (defn in-dictionary? [word] (dictionary word))
And to check a word is valid.And to check a word is valid.
(defn all-lower-case? [word] (every? #(Character/isLowerCase %) word))
(require '[spell-checker.dictionary :refer [dictionary]])
(s/def ::dictionary dictionary) (s/def ::word ::dictionary)
We don't have to do any other checks, because if it is in the dictionary then it is correct. No need to add
string? ,
isLowerCase and with spec allowing sets as predicate checks we're left with a very nice solution (albeit to just one little portion of the api).
As we're defining an API it is only right we start following some idioms. For the betterment of the code and the consumer. Instead of just leaving it be at a fully resolved keyword, we should be specific:
;; we can still use the auto-resolve sugar on the second spec, as it is evaluated to the first kw. (s/def :spell-checker.dictionary/dictionary dictionary) (s/def :spell-checker.dictionary/word ::dictionary) ;; even better to do add the company name, then the application if applicable ;; (s/def :com.something.spell-checker.dictionary/word ::dictionary)
Aside from this, when I was making requests to the words API , in the response map the values , as well as the keys, were strings and not (un)qualified keywords. Now we shall see that when we try to check the map keys/values with a helpful function called
s/keys it doesn't work with strings, only (un)qualified keywords. We use
s/keys to desribe what we want our map to look like, typically with fully qualified keywords
::like-this-one as the keys and that will be used as the spec for the corresponding value.
For
s/keys to describe the map, one way we could do it is to transform the keys the api returns from strings to keys beforehand by utilising the
clojure.walk/keywordize-keys function, which can take the incoming response map and have all the keys formatted into keywords.
Now then it's time to inspect the specifics. The simplest functions to validate are the
synonyms,
antonyms , and
examples set of functions which return a simple structure to work with. Where they differ from other functions is that some will use maps , others will use vectors.
;; calling the synonyms function, which you can see in the handler.clj file (synonyms "garish") ;; returns , with the keywordizing-keys applied to it: {:word "garish", :synonyms ["brassy" "cheap" "flash" "flashy" "gaudy" "gimcrack" "loud" "meretricious" "tacky" "tatty" "tawdry" "trashy"]}
Looking at what we need to define here, the
:word key stays the regardless of any function that is called so we can re-use it:
Now as the value of the synonyms key is a vector, and we want to check every value is a string, we can use theNow as the value of the synonyms key is a vector, and we want to check every value is a string, we can use the
;; same spec as above, s/keys will use the spec name as the name of the key it expects to be passed to it. (s/def ::word dictionary)
s/coll-ofspec to say we want a collection of strings, each one being different, and the kind of collection we want to check is a vector.
And this would be how we could spec the entire map adding them together.And this would be how we could spec the entire map adding them together.
;; check that the value with the synonyms key is a vector of strings (s/def ::synonyms (s/coll-of string? :distinct true :kind vector?)) (s/valid? ::synonyms ["brassy" "cheap" "flash" "gaudy" ....]) ;; => true
So theSo the
(s/def ::check-synonyms (s/keys :req-un [::word ::synonyms))
:req-unkeyword means all the required keys that the map must have. The "un" on the end means the required key in question can be an unqualified keyword, like the ones we get back from the API, as otherwise we would have to restructure our whole app to allow for namespaced keywords.
Subsequently, the
opt-un key is for specifying all the optional keys a map could have. So if we had wished to make the spec a bit more modular we could have done:
HoweverHowever
(s/def ::check-spec (s/keys :req-un [::word] :opt-un [::synonyms ::antonyms ::rhymes ::examples])) ;; would also pass if we include a :synonyms key as well ;; any combination of optional keys would pass, which would be all wrong. (s/valid? ::check-spec {:word "garish", :antonyms ["brassy" "cheap" "flash" "flashy" "gaudy" "gimcrack" "loud" "meretricious" "tacky" "tatty" "tawdry" "trashy"]}) ;; => true
or, not
s/or, but
orwithin
opt-un. This is because opt-un matches against all the keywords that are given, by supplying
orit does the matching at that level, returning the only match left inside the
opt-unvector.
ThisThis
(s/def ::check-spec (s/keys :req-un [::word] :opt-un [(or ::synonyms ::antonyms ::rhymes ::examples)])) (s/valid? ::check-spec {:word "garish" :antonyms ["some" "antonym"] :synonyms ["some" "synonym"]}) ;; => false
::wordspec and specs like
::synonymsunder a joint roof like
::check-synonyms, and that will be the convention. Just need to spec the other keys now, next is the syllables function.
(syllables "condescending")
;; returns {:word "condescending", :syllables {:count 4, :list ["con" "de" "scend" "ing"]}}
And now the frequency function...And now the frequency function...
;; since specs are added to a global registry, I can specify them below... (s/def ::syllables (s/keys :req-un [::count ::list])) (s/def ::count int? #(> % 0)) (s/def ::list (s/coll-of string? :kind vector?)) ;; and to check the entire map (s/def ::check-syllables (s/keys :req-un [::word ::syllables]))
(frequency "monad") ;; returns {:word "monad", :frequency {:zipf 1.9, :perMillion 0.07, :diversity 0}} ;; so to describe the value of the :frequency key we can do (s/def ::frequency (s/keys :req-un [::zipf ::perMillion ::diversity])) (s/def ::zipf (s/and number? #(<= % 10))) (s/def ::perMillion (s/and number? #(<= % 1000000))) (s/def ::diversity (s/and float? #(<= 0 % 1)))
(s/def ::check-frequency (s/keys :req-un [::word ::frequency]))
Now if we run this as it is with "monad" we get back an error. Let's take a look at what
explain-data has to say:
(s/explain-data ::check-frequency frequency-result) #:clojure.spec.alpha{:problems ({:path [:frequency :diversity], :pred clojure.core/float?, :val 0, :via [:spell-checker.handler/check-frequency :spell-checker.handler/frequency :spell-checker.handler/diversity], :in [:frequency :diversity]}), :spec :spell-checker.handler/check-frequency, :value {:word "monad", :frequency {:zipf 1.9, :perMillion 0.07, :diversity 0}}}
So we can see the issues are to do with the
::diversity spec, the error is that while we accept any number including 0 or 1, 0 is not treated as float , which is fair enough. While it would be nice to keep the
float? check, since we are doing the additional check of
#(<= 0 % 1) we can get away with
number?
And with that it's time to take on the loch-nested monster. Bring on theAnd with that it's time to take on the loch-nested monster. Bring on the
;; so what we are saying is, unless it is a zero or a one, check it is a float, ;; within the range of zero to one inclusive, which allows the or check to pass too. (s/def ::diversity (s/and number? #(<= 0 % 1)))
definitionsfunction.
(definitions "monad") ;; returns {:word "monad", :definitions [{:definition "a singular metaphysical entity from which material properties are said to derive", :partOfSpeech "noun"} {:definition "(biology) a single-celled microorganism (especially a flagellate protozoan)", :partOfSpeech "noun"} {:definition "(chemistry) an atom having a valence of one", :partOfSpeech "noun"}]}
So this is the first and only time we've really come across nested collections.
s/keys made it simple, as the collection was separated almost by the key, but with the vector we have no such comfort, unless we hack away at working with indexes.
What we can do here is use the
s/coll-of function, that holds a spec containing all the checks we would perform on every map, using
s/keys. We will discuss the other ways in a moment.
The trick with these kind of collections is to break it down, and define the layers. So we start at the very top level keys which areThe trick with these kind of collections is to break it down, and define the layers. So we start at the very top level keys which are
(s/def ::check-definitions (s/keys :req-un [::word ::definitions])) ;; since it's a vector we don't have any keys that may constrict our naming (s/def ::definitions (s/coll-of ::each-definition)) (s/def ::each-definition (s/keys :req-un [::definition ::partOfSpeech])) (s/def ::definition string?) (s/def ::partOfSpeech #{"noun" "pronoun" "adjective" "adverb" "proverb" "verb" "preposition" "conjunction" "interjection"})
:wordand
:definitions. Now word is , as always , very straightforward and we can chalk it off at just
string?, but when we check the
:definitionskey we will have a vector of maps. Now given the vector, the spec we can use to go a layer further would be
s/coll-of. Taking a look at the docs of
coll-ofwe can see that it will exhaustively conform every value given to it, so coll-of will look at each value in the vector, in this case a map, and feed it to the spec. This brings us to our next layer, which is a map. Now we know what to do here from previous specs we have written, just model our ideal map, with the required (and optional) keys and define their constraints.
partOfSpeechis a bit more interesting as it uses a set to make sure that any value outside is rejected. This data was little harder than other nested collections, as each layer had a different set of requirements. We had to deal with a map , then a vector , than a map with different keys and values, meaning that we couldn't reuse much and had to define new layers as we went along.
Now let's write the spec for the
word function, which is essentially when we don't call a specific function like
synoyms, but get back as much as possible of any and all sorts of data.The data follows the exact same style as the
::definitions spec, but each map in the vector contains an arbitrary amount of keys on all the different properties of the given word, for a given definition.
What we can do is reuse the spec
::each-definition as it has the default keys which every map should contain, namely
::word and
::definition. But we also want to include the plethora of optional keys that could be present, which is a perfect time to use
s/merge to create an
s/keys spec which reflects all this:
(s/merge ::each-definition (s/keys :req-un [] :opt-un [::examples ::derivation ::typeOf ::similarTo ::entails ::inCategory]))
And then we can wrap all this in an
s/coll-of for checking every map this way.
(s/def ::results (s/coll-of (s/merge ::each-definition (s/keys :req-un [] :opt-un [::examples ::derivation ::typeOf ::similarTo ::entails ::inCategory]))))
So that has the vector covered, for the
::results key but there are other top level keys we need to spec like
::frequency,
::pronunciation ,
::syllables. At first glance I thought I could just reuse the
::frequency spec I defined earlier but it deals with the frequency format that comes as a map. What we can do is utilise the
s/or function to wrap a number-format into it, so we can work with either.
;; before (s/def ::frequency (s/keys :req-un [::zipf ::perMillion ::diversity])))
;; after (s/def ::frequency (s/or :number-format (s/and number? #(<= 0 % 10)) :map-format (s/keys :req-un [::zipf ::perMillion ::diversity])))
So we know the benefits of doing this: reusability, flexibility and succintness. But the downsides are only going to be realised further down the road. When you do this, we get test cases that pass when they otherwise shouldn't as they belonged to that of a different request. Moreover a number could be the key in a corresponding frequency request which wouldn't be right in that case, but still returning a pass. In this case it's important to clarify which frequency is meant for which type of request.
(s/def :frequency/frequency (s/keys :req-un [::zipf ::perMillion ::diversity]))
So now it is much clearer to someone reading the spec where and what the purpose of it is. Remembering that spec is also supposed to function as a supplement to documentation and seeing theSo now it is much clearer to someone reading the spec where and what the purpose of it is. Remembering that spec is also supposed to function as a supplement to documentation and seeing the
(s/def :word/frequency (s/and number? #(<= 0 % 10)))
orspec above can confuse, in which cases will it be a "number-format", and where is this used? But the name actually shows where this is referenced, whilst keeping the frequency name for
s/keysto perform checks.
And so the spec for checking the word should look like:.
(s/def ::check-word (s/keys :req-un [::word ::results ::syllables ::pronunciation :word/frequency]))
However this in no way solves our issue of checking the characteristics of the response map, why'd we bother writing out all those specs? Well, we've got all the properties we want to define, so what we could do instead is put theHowever this in no way solves our issue of checking the characteristics of the response map, why'd we bother writing out all those specs? Well, we've got all the properties we want to define, so what we could do instead is put the
(defn request [[header-name header-value] param word] {:pre [(s/valid? :spell-checker.dictionary/word word)] :post [(s/valid? map? %)]} (let [url "" req (str url word "/" param) options {:headers {header-name header-value} :accept :json}] (client/get req options)))
preand
postmap verifications in each api function, so
::check-rhymesas the
postcondition in the
rhymesfunction etc.
But is there a better way? What spec offers some APIs is the option to use
s/multi-spec. For APIs that communicate using a unique indentifier, so for example a key that is common in all maps of possible requests, but has differing values (attributes) which would point to the correct method.
s/multi-spec acts as a wrapper around the
defmulti and
defmethod functions , leveraging their polymorphism and putting our specs in their method bodies.
If you're familiar with multimethods you know that they do this based on a dispatch function which will return a dispatch tag, which is the value that dictates which method is the right one to use.
;; adapted example taken from doing (doc s/multi-spec) ;; just like the regular way of setting up defmulti we provide a function, we're being very succint ;; and leveraging the power of keywords to be able to pull whatever value in the map is at the :tag key. ;; this keyword can be resolved or un-resolved, but it's what must be seen in the data passed when multi-spec is called. ;; otherwise we can't pull out the value at that entry. ;; In this case the identifier is an un-resolved keyword , that should be in every map. (defmulti mspec :tag)
;; methods can ignore their argument, as s/multi-spec is called with the data ;; once the value is found and the correct method determined , the method will return the spec ;; so now whether we are using conform or valid the data and the spec will be checked. (defmethod mspec :int [_] (s/keys :req-un [::tag ::i])) ;; careful not to confuse this with :tag up above at defmulti, this is just the spec used for the map (s/def ::tag keyword?) (s/def ::i number?)
Using theUsing the
;; this is our wrapper. We use the *same tag* that our defmulti does, but this is simply to avoid errors down the line, ;; which I'll explain in the paragraph below. ;; multi-specs , for their second argument can take either a keyword or a function, ;; the difference between the two will become more apparent when we do generative testing... (s/def ::mmspec (s/multi-spec mspec :tag)) (s/valid? ::mmspec {:tag :int :i 4}) ;; => true
:tagkey is very succint , but it may cause some confusion.
defmultiwill just take a function like normal , and using the return value work out which method.
:tagitself could've been replaced with the more verbose
#(first (vals %)), just as long as we are identifying the right property.
Moving now toMoving now to
;; these two are functionally equivalent for maps of the form {:tag keyword? :i :number?} (defmulti mspec :tag) (defmulti mspec #(first (vals %)))
s/multi-specitself now , if you've never seen or used it before this is the part where you mind gets blown! For you see, the second argument of
s/multi-specis only important for generating values, testing . That's it. In fact, if you weren't doing any generation (which I'll cover further down in the article), just using
s/valid?,
s/conformthen you could just use
niland carry on...
The only reason you would would want to not have nil, is for the case of generating samples and checking that your specs do in fact pass the right properties, and so you would want to have the same key used so it passes theThe only reason you would would want to not have nil, is for the case of generating samples and checking that your specs do in fact pass the right properties, and so you would want to have the same key used so it passes the
;; the second arg, retag is only necessary for generation. nil won't change how the underlying ;; defmulti and methods work, as they are used simply to deduce , from the map , which is the right ;; spec to hand over. (s/def ::mmspec (s/multi-spec mspec nil)) (s/valid? ::mmspec {:tag :int :i 4}) ;; => true (s/conform ::mmspec {:tag :int :i 4}) ;; => {:tag :int :i 4}
s/keyscheck. Now then, let's take a look at the example used in the clojure spec guide and see if we can leverage this in our own spell-checker program. We see that
:event/typeis the omnipresent identifier, the key in every map, looking at our program, we don't have that sort of format. The only key that stays the same for us is
:wordbut we can't define methods for every word in the dictionary! At least manually...
We needed an identifier that persists in all requests and give us information on the "type" of data we have, so taking our usual map:
To implement this it would be just another wrapper in theTo implement this it would be just another wrapper in the
;; instead of this, {:word "Some word" :synonyms ["some" "synonyms"]} ;; we'd need this for multimethods {:type :synonyms :word "some-word" :synonyms ["some" "synonyms"]} ;; then s/multi-spec would look for a spec with :synonyms and that should work. ;; We do repeat ourself slightly, but if we wanted to use multimethods into this API ;; then this is the simplest way of doing it.
wrap-requestfunction, grabbing the second key and assoc'ing to the map for spec to check it.
;; ;; in our handler.clj file ;; (defn- show-type [m] (assoc m :type (second (keys m)))) (defn- request [[header-name header-value] param word] (let [url "" req (str url word "/" param) options {:headers {header-name header-value} :accept :json}] (client/get req options))) (defn- format-response [[header-name header-value] param word] (-> (request [header-name header-value] param word) (json->clj) (keywordize-keys) (show-type))) ;; ;; in our handler_spec.clj file ;; ;; key we expect in every map (defmulti spell-checker :type)
;; all the values , and given the data , the spec we would like to run. (defmethod spell-checker :synonyms [_] (s/keys :req-un [::type ::word ::synonyms])) (defmethod spell-checker :antonyms [_] (s/keys :req-un [::type ::word ::antonyms])) (defmethod spell-checker :rhymes [_] (s/keys :req-un [::type ::word ::rhymes])) (defmethod spell-checker :syllables [_] (s/keys :req-un [::type ::word ::syllables])) (defmethod spell-checker :frequency [_] (s/keys :req-un [::type ::word ::frequency])) (defmethod spell-checker :definitions [_] (s/keys :req-un [::type ::word ::definitions])) (defmethod spell-checker :word [_] (s/merge (s/keys :req-un [::type]) ::check-word)) ;; the spec for the type key itself, could use a set to define the only accepted vals... (s/def ::type keyword?) ;; and we'll use the below to put into each api function , around about this : {:pre [...] :post [(s/valid? handler-spec/response %)}.
;; keeping :type in for when we test this out later on (s/def ::response (s/multi-spec spell-checker :type)) (s/valid? :spell-checker.handler-spec/response {:type :antonyms :word "strong" :antonyms ["weak" "feeble"]}) ;; => true
In conclusion, think about whether your API would benefit from including this layer of abstraction and whether the API itself can cleanly accommodate it.
Testing our functions , using clojure.spec.gen.alpha and clojure.spec.test.alpha
Testing functions is slightly different from standard value checks, as we can make calls to foreign api's (as we will do in a moment) , there could be cases where we need to spec higher order functions which means another layer of complexity. Moreover,
s/valid? and
s/conform aren't enough for testing. What we can use is the
exercise-fn validator as well as the tools in the
clojure.spec.test.alpha namespace.
We'll define functions specs using
s/fdef which is what is says on the tin, a definition for a spec, tailored to functions. We can define specs for things like the arguments through
:args and the return type through
:ret. The properties of the function, the distinct features that link the input and output, wherever possible, can be specified through the
:fn keyword, all of this allowing us to see whether the function would work before we ship it to production.
We include the function name to the spec so it is registered to the appropriate fn, meaning when we do (doc my-fn) it will show the specs too this means the name is not an (un)qualified keyword, because the fn name isn't, and the registry we will match the specs for us. We reference arguments our actual function takes by keywordizing the names, so
:word for word, so if we were to write an fdef for the rhymes function:
I'll use all the specs that check the rhyme function:I'll use all the specs that check the rhyme function:
(s/fdef rhymes :args (s/cat :word ::word) :ret ::check-rhymes :fn '#(= (:word %) :word)) ;; note the :fn keyword essentially says , "take out the word in the map" ;; using the :word key and compare it to the :word parameter. ;; it is important to use some sort of sequence spec wrapper for :args so you can specify the ;; actual number of arguments, if you don't all the arguments will be put into one spec. ;; it should be one spec per argument, but of course you can use s/and to compose. ;; you could also use s/and outside of the sequencing function, meaning you would describe ;; the positioning of the arguments as well as any characteristic of the argument(s) ;; themselves. ;; :args is given a map , so if we did (rhymes "silly") it would be converted to ;; {:word "silly"} (s/fdef rhymes :args (s/and (s/cat :word ::word) #(not= (:word %) "orange")) :ret ::check-rhymes)
;; all the specs that we will use for the rhymes function (s/def :rhymes/all (s/coll-of string? :kind vector? :distinct true)) (s/def :rhymes/empty (s/and map? empty?)) (s/def ::rhymes (s/or :results (s/keys :req-un [:rhymes/all]) :empty :rhymes/empty))
(s/def ::check-rhymes (s/keys :req-un [::word ::rhymes]))
Now when we do useNow when we do use
(s/fdef rhymes :args (s/cat :word ::word) :ret ::check-rhymes)
s/exercise-fn, it will make actual HTTP requests and it will use the words in our own dictionary as the main input. The only problem is some words in our dictionary aren't words in their dictionary. Nonetheless this can make our errors harder to reason about, so in spec we trust that it is them and not us.
;; use the ` here as exercise-fn expects the symbol, which it uses to grab ;; the spec from the registry (s/exercise-fn `rhymes 2) ;; this will return anything , but when I tried it => ([("naphtha") {:word "naphtha", :rhymes {:all ["naphtha" "petroleum naphtha" "shale naphtha" "solvent naphtha" "wood naphtha"]}}] [("navet") {:word "navet", :rhymes {}}])
Instead of writing a function definition manually for each one, I think it would be rather nice to abstract it all out into a macro. Then we can use the macro, to dynamically generate tests, and use
exercise-fn to generate words and call them to see if they pass.
(defn param->spec "given a param, returns the appropriate spec check for that request." [param] (let [specs {"synonyms" ::check-synonyms "antonyms" ::check-antonyms "definitions" ::check-definitions "frequency" ::check-frequency "syllables" ::check-syllables "rhymes" ::check-rhymes nil ::check-word}] (specs param)))
(defmacro param->fdef [param] (list 's/fdef (symbol param) :args (s/cat :word ::word) :ret (param->spec param) :fn '#(= (:word %) :word)))
We can leverage the naming conventions of our parameters and our api functions to generate a complete function definition for a given function. We can call
param->fdef with say "synonyms" and it will add it to the global registry for the synonyms function, viewable by
doc.
Now that we have described the properties of our function, namely theNow that we have described the properties of our function, namely the
(param->fdef "synonyms") => spell-checker.handler/synonyms (doc synonyms) ------------------------- spell-checker.handler/synonyms ([word]) Spec args: (cat :word :spell-checker.handler/word) ret: (keys :req-un [:spell-checker.handler/word :spell-checker.handler/synonyms]) fn: (= (:word %) :word) nil (s/exercise-fn `synonyms 5) ([("sheepskins") {:word "sheepskin", :synonyms ["diploma" "lambskin" "parchment" "fleece"]}] [("herbert") {:word "herbert", :synonyms ["victor herbert"]}] [("cardamine") {:word "cardamine", :synonyms ["genus cardamine"]}] [("ranchman") {:word "ranchman", :synonyms []}] [("nonphilosophical") {:word "nonphilosophical", :synonyms []}])
:args.
:retand
:fnproperties, we can use some of the wrappers in the
clojure.spec.test.alphanamespace to enfore them.
instrumentfor example is a function, that when called, will "set the var's root binding to a fn that checks arg conformance (throwing an exception on failure) before delegating to the original fn" - as stated by the docs.
This means that when we doThis means that when we do
(require '[clojure.spec.test.alpha :as stest]) ;; do (doc "fn-name") to see if there is already a spec present (doc synonyms) ------------------------- spell-checker.handler/synonyms ([word]) nil
fdefor , use
param->fdefit needs to be done in the same namespace, otherwise the link doesn't form and it will expect there is a function called synonyms defined in whatever arbitrary namespace we happen to be in. This is why it can be handy to put all the functions and specs in the same namespace, so things like this don't happen. All we need to do though, if is switch into a repl and load the
spell-checker.handlernamespace, then
requirethings like
param->fdefand we are ready to get started.
Now if we want to call synonyms, it will validate the arguments before calling the real fn itself.Now if we want to call synonyms, it will validate the arguments before calling the real fn itself.
;; so, assuming the appropriate namespace, (param->fdef "synonyms") ;; => spell-checker.handler/synonyms (stest/instrument `synonyms) ;; => [spell-checker.handler/synonyms] ;; which confirms to us it has been found and linked. ;; otherwise we would be handed an empty vector
While this is nice, it's only leveraging a third of our specification. How about testing theWhile this is nice, it's only leveraging a third of our specification. How about testing the
(synonyms 3) ;; invalid argument to spell-checker.handler/synonyms .... ;; Spec assertion failed. ;; at [:word] :spec :spell-checker/handler-spec/dictionary (synonyms "functional") => {:word "functional" :synonyms ["operable" "operational" "usable" "useable" "operative" "running" "working"]}
:retand
:fnnamspaces, well the next step up would be the
stest/checkfunction, which is similar to the
exercise-fnwe saw earlier, but is more thorough and comes with a host of different options. But to be honest, all we need to do is call it with the symbol and watch the magic happen.
I just tried it now, and it went past twenty or so options, and I realised I was making a mistake with the
:fn property of the
fdef spec, as it passes a map of {:args "the-arg" , :ret "the-ret-val"} and I was too simplistic by just wanting to just pull the
:word out as it is in the value of those keys. So I changed it, and abstracted it out into a separate fn:
(defn fn-check "Is passed the value that :fn would from fdef, for all our api functions that would be:
{:args {:word \"the-word\"} , :ret {:word \"daphnis\"}}
Then I can stick that into theThen I can stick that into the
Given this I would take the value of the word in :args and the value of the word in :ret and check for equality. " [conformed-map] (let [args-word (:word (:args conformed-map)) ret-word (:word (:ret conformed-map))] (= args-word ret-word)))
:fnpart of my
param->fdefmacro and we can catch cases for where it does end up happening. For example, a word that I never would have thought of in a million years "jemadars" (meaning a minor official or junior officer) is actually tweaked to the singular. In the rare cases that the word is different between the two maps I may add a clause for things like plurals.
An incredibly useful tool, is the
stest/enumerate-namespace function. What this does is , you give it a namespace and it proceeds to grab every symbol that names vars in that ns, chucking them into a set to be passed to something like
stest/check to do many functions at once. Now this doesn't actually define
fdef's, so to give each function it's own spec I would have to keep calling
param->fdef, then call
stest/enumerate-namespace , passing the result to
stest/check giving me a complete rundown of each and every API function.
(stest/enumerate-namespace `spell-checker.handler) #{spell-checker.handler/rhymes spell-checker.handler/antonyms spell-checker.handler/defaults spell-checker.handler/*api-key* spell-checker.handler/json->clj spell-checker.handler/syllables spell-checker.handler/examples spell-checker.handler/set-api-key! spell-checker.handler/fn-ret spell-checker.handler/definitions spell-checker.handler/word spell-checker.handler/wrap-request spell-checker.handler/synonyms spell-checker.handler/frequency spell-checker.handler/request}
Of course you might not want every function in that namespace being checked, as some don't need to be, so before handing it over we could take out things like
json->clj,
defaults and focus on the core issues.
stest/check does produce quite a lot of noise, especially if we are passing it a big set of symbols like
enumerate-namespace wants to do. Add that with the fact I'm making HTTP requests, giving me a notification for each one, and the stacktraces that come along with any errors. A nice thing to use is
stest/abbrev-result to return a cleaner version of whatever
check returns, so you can easily see which property (or multiple) ended up failing.
Now all this isn't for production code. The role of these functions is to quickly and dynamically setup a testing environment for our specs , so they can be used in tandem with other things like generators, which I will get into more as we go on.
And this sums up this part of the spell-checker program. I'm going to move onto a different aspect, which is working more with the dictionary.
Let's say our dictionary gets an upgrade, where instead of a just a set of words, each word has a subsequent description of it's popularity and it's definitions. We could use a vector to comprise these three attributes. We could get by with
s/coll-of for this sort of thing as none of the attributes will change between a given word, but using the
s/tuple macro instead is much nicer. Essentially
s/tuple will conform each value of the vector with the predicate that is at the index. So if you gave it two predicates it would expect a vector of length 2, applying the first spec at index 0 etc.
;; so to describe our new dictionary, we will check index 0 for a word, ;; index 1 for the popularity and index 2 for the definitions. (s/def ::each-word (s/tuple ::word ::perMillion ::definitions)) ;; given an example element of our dictionary... (s/valid? ::each-word ["sorting" 6.78 [{:definition "..." :partOfSpeech "verb"}]]) ;; => true (s/conform ::each-word ["sorting" 6.78 [{:definition "..." :partOfSpeech "verb"}]]) ;; => ["sorting" 6.78 [{:definition "..." :partOfSpeech "verb"}]]
Then we can use
s/coll-of as the wrapper to
s/tuple to check every vector in our set. Now another way of doing this would be to use something like
s/cat which essentially combines the "tagging" functionality of
s/or with
s/tuple to provide more a reference for each element.
(doc s/cat) ------------------------- clojure.spec.alpha/cat ([& key-pred-forms]) Macro Takes key+pred pairs, e.g.
(s/cat :e even? :o odd?)
If we wanted a more descriptive specification, we would use this spec to include some tags, and it does the same sequential, index-style checking that we saw above.If we wanted a more descriptive specification, we would use this spec to include some tags, and it does the same sequential, index-style checking that we saw above.
Returns a regex op that matches (all) values in sequence, returning a map containing the keys of each pred and the corresponding value. nil ;; essentially like s/tuple, but we can tag pairs with keys for a better description of the data, ;; and it will conform to a map.
Keep in mind that by defining three keys we expect a collection of three elements, any more will throw an error because of "added input". Even if we do multiples of three it wouldn't work asKeep in mind that by defining three keys we expect a collection of three elements, any more will throw an error because of "added input". Even if we do multiples of three it wouldn't work as
(s/def ::each-word (s/cat :the-word ::word :popularity-per-million ::perMillion :definitions ::definitions)) (s/conform ::each-word ["siren" 9.56 [{:definition "loud noise" :partOfSpeech "noun"}]]) {:the-word "siren", :popularity-per-million 9.56, :definitions [{:definition "loud noise", :partOfSpeech "noun"}]} ;; in this simple example, data is remoulded into a map but specs in `s/cat` may do further modifications (s/conform ::each-word ["siren" 9.56 [{:definition "loud noise" :partOfSpeech "noun"}] "hello" 5.23 [{:definition "standard greeting" :partOfSpeech "exclamation"}]]) ;; => :clojure.spec.alpha/invalid
s/catexpected three elements. Luckily for us though we don't need to worry about this edge case as every vector has a count of three, meaning we can do
(s/coll-of ::each-word)on the dictionary without any worries.
But what would we have to do if we were to have our data structured like the second example? Well, up to now we have only used one of the regex functions that come with spec , namely
s/cat. But there are other functions that can help us when working with sequences:
s/* --> takes a predicate and matches against a sequence where 0 or more values fit the predicate. Conform to vector s/+ --> takes a predicate and matches against a sequence where 1 or more values fit the predicate. Conform to vector. s/? --> takes a predicate and matches against zero or no values, will return value if it fits predicate.
These are some of the functions that will help us to manipulate sequences like lists or vectors. Maps have things like
s/map-of,
s/keys to describe them, and in combination with those tools we can describe data in any combination of data structure.
Back to our dictionary, we expect 1 or more items, with the regular expression being "1 string , 1 number , 1 string". So we can use the
s/+ macro to achieve this.
(s/def ::an-entry (s/cat :the-word ::word :popularity-per-million ::perMillion :definitions ::definitions)) (s/def ::all-entries (s/+ ::an-entry))
Now then, let's try some example values:
(def mini-dictionary ["siren" 9.56 [{:definition "loud noise" :partOfSpeech "noun"}] "hello" 5.23 [{:definition "standard greeting" :partOfSpeech "exclamation"}]])
(s/conform ::all-entries mini-dictionary) [{:the-word "siren", :popularity-per-million 9.56, :definitions [{:definition "loud noise", :partOfSpeech "noun"}]} {:the-word "hello", :popularity-per-million 5.23, :definitions [{:definition "standard greeting", :partOfSpeech "exclamation"}]}]
Funnily enough the first time I did this spec it failed, as the word "hello" has the "exclamation" value with the
:partOfSpeech key, which is perfectly valid but wasn't in the spec. And I guess that was a mini-reminder as why spec is useful as it can bring up properties of the data that you forget about.
With
s/cat we define the regular expression to be 3 items, and then
s/+ takes that regular expression and matches it to 3n groups of elements in the sequence. There must be at least one item for the spec to run, and then there must be equal groups of three. If there isn't the spec will fail. If we didn't mind the incoming dictionary being empty, then we could have used
s/* instead, which just returns an empty vector if the dictionary is empty.
(s/def ::all-entries (s/* ::an-entry)) (s/conform ::all-entries []) ;; => []
Now this could be helpful to other functions that may use this in their pipeline, but it depends on the situation. I think in this case it would be better if we failed as other specs like
::word need the dictionary to validate things. We don't want
::word constantly giving
false if the word does exist, so let's crash and find out what went wrong.
Keep in mind using these two functions is only good for these single-layered collections, if each element were within it's own data structure then we would have to use
s/coll-of ,
s/map-of , or , as we shall we in a minute ,
s/spec.
The
s/? function is a bit different from the other two, as it just takes a sequence of zero or one values and checks it against the spec. Now whilst not massively useful on it's own, it is actually quite handy for one particular case. For some words in the dictionary, they could be trademarked which means they could be upper or lower case. But if we allowed some words to be up percase then our checks in
::word would cause those valid words to fail. What we can do is add the optional key/value pair
:trademark true/false , which would indicate that this word is special.
For some words in our dictionary, they could be trademarked which means they could be either upper or lower case. In the
words.txt. file that I'm using, all the words are lower-case but it could be useful to notify the end user that the word they searched for is trademarked. This would also be a handy check for
::word as another one of the specs included could be
::trademark which checks if the value is present. Combine this with
s/cat and we've got our functionality:
(s/def ::trademark (s/? boolean?)) ;; just need to tweak the ::an-e ntry spec (s/def ::an-entry (s/cat :the-word ::word :popularity-per-million ::perMillion :definitions ::definitions :trademark ::trademark)) (s/def ::all-entries (s/coll-of ::an-entry))
;; with trademark (s/conform ::all-entries ["Claxon" 3.89 [{:definition "type of air horn" :partOfSpeech "noun"}] true]) [{:the-word "claxon", :popularity-per-million 3.89, :definitions [{:definition "type of air horn", :partOfSpeech "noun"}], :trademark true}] ;; without trade [{:the-word "claxon", :popularity-per-million 3.89, :definitions [{:definition "type of air horn", :partOfSpeech "noun"}]}]
And lastly, we can use the
s/& function, which takes one of these regex operators and then applies any number of additional checks onto the collection the inital spec returns. So in the case of our dictionary, I want to make sure that every entry has a maximum of 4 elements:
(s/def ::an-entry (s/& (s/cat :the-word ::word :popularity-per-million ::perMillion :definitions ::definitions :trademark ::trademark) #(<= (count %) 4)))
Alright, now that aside let's move onto some other types of nesting. Say we have a collection like this one:
But this won't come as a surprise, because the last item is not a string. But if we were to simply "re-apply" the spec to that item,But this won't come as a surprise, because the last item is not a string. But if we were to simply "re-apply" the spec to that item,
;; word origins for the word mother (def word-origin [["Old english" "mōdor"] ["Dutch" "moeder"] ["German" "Mutter"]]) (s/def :spell-checker.core/origins (s/coll-of (s/+ string?))) (s/valid? :spell-checker.core/origins word-origin) ;; false
(s/coll-of string?)it would work, as that item is perfectly described by the entire spec. So we need something that , in the case of failure on the specific check, could pass an item that passes the entire check. What we can do is wrap the the code snippet above with
s/spec. Which, if we look at the docs:
But what about if we had a collection which was a bit more arbitrarily nested?But what about if we had a collection which was a bit more arbitrarily nested?
(s/def :spell-checker.core/origins (s/coll-of (s/spec (s/+ string)))) (s/valid? :spell-checker.core/origins word-origin) ;; true
(def arbitrary-nesting ["it" ["just" ["keeps" ["going" "but" "then" "it" "stops"]]]])
;; recall the spec and it will call this one and supply it with the remaining collection (s/def :spell-checker.core/check-nesting (s/coll-of (s/or :string string? :recur ::check-nesting))) (s/valid? :spell-checker.core/check-nesting arbitrary-nesting) ; => true
To round off this long tutorial on spec, we will see it's testing capabilities, and learn to wield the power of generators,
test.check and other goodies. We already started becoming somewhat familar with testing when I introduced function testing for development earlier on, with
s/fdef and
s/exercise-fn but that was only scratching the surface! Let's begin.
Every spec, when it is registered, like this simple spec:
(s/def ::check-map (s/keys :req-un [::word ::synonyms]))
will call it's underlying implementation when called, so as we are using
s/keys , it will call something called
map-spec-impl , which is essentially just how this spec adheres to the Spec protocol.
s/tuple for example has
tuple-impl. To adhere to the Spec protocol, you must also provide an implementation for the gen* function. Moreover, all specs have are wired to be given the capability, so that we can do generative testing. Using the specs that we've defined, let's generate some examples.
To return the generator of this spec, we use
s/gen, and to get some values out of this generator , we turn to the
clojure.spec.gen.alpha namespace, specifically the
generate function.
(require '[clojure.spec.alpha :as s] '[clojure.spec.gen.alpha :as gen] '[spell-checker.handler-spec :as hs])
;; to generate a single value (gen/generate (s/gen ::hs/word)) ;; => "unconstrainedly" in my case ;; to generate a sample (gen/sample (s/gen ::hs/word)) ;; => ("undiscontinued" "outcompete" "cofound" "maladjusted" "postsign" "pycnogonida" "multibreak" "unflickering" "hazinesses" "legislative")
One underlying aspect which is fairly important is that
::hs/word uses a set to generate it's values, so there is no indirection or blurring on how the spec will generate such a value. But if we had chosen to generate words using filters, like "must be lowercase" , "must not include numbers" etc, spec would just randomly generate values then apply these checks, so we could end up applying these checks to keywords, characters etc. That's what so great about sets in spec. If we were to generate words the latter way, we would need to specify , as the first check, that everything should be a string before evaluating all the unique criterium.
Now then, how about if we generate values using conformer specs, like
s/or ,
s/cat ,
s/alt ?
(s/def ::number-or-word (s/or :word ::hs/word :number number?)) (gen/generate (s/gen ::number-or-word)) "chrestomathic" (gen/sample (s/gen ::number-or-word)) ;; ("confrontation" -1 2.0 0.0 0 3.0 "aeroview" "katharina" 0 "theanthropos")
;; how about cat? (s/def ::number-or-word (s/cat :word ::hs/word :number number?)) (gen/sample (s/gen ::number-or-word)) ; => (("dulciloquy" -1.0) ("holyday" 0.5) ("frontierless" -2.0) ("vega" 3) ("diffinity" 0) ("tallowlike" -1) ("chirologies" 0) ("atlantomastoid" 2.4375) ("subfuscous" 50) ("dragginess" -2.5))
e, what we can use is the
exercise function, geared more toward r aw data than the
exercise-fn we made use of earlier:
(s/exercise ::number-or-word)
;; using "or" ;; => ([0.5 [:number 0.5]] ["chancelry" [:word "chancelry"]] [-1 [:number -1]] [-0.75 [:number -0.75]] ["gaddish" [:word "gaddish"]] ["psychorhythm" [:word "psychorhythm"]] [2.53125 [:number 2.53125]] ["gauntleting" [:word "gauntleting"]] ["inconcussible" [:word "inconcussible"]] [0 [:number 0]])
;; using "cat" ;; => ([("microfungal" -0.5) {:word "microfungal", :number -0.5}] [("anal" -0.5) {:word "anal", :number -0.5}] [("unharmonising" 1) {:word "unharmonising", :number 1}] [("pygmalion" 0) {:word "pygmalion", :number 0}] [("widowing" 0) {:word "widowing", :number 0}] [("sexualized" 3.5) {:word "sexualized", :number 3.5}] [("campana" 7) {:word "campana", :number 7}] [("ungelatinous" 1.0) {:word "ungelatinous", :number 1.0}] [("wei" -2) {:word "wei", :number -2}] [("rebloomed" 0.75) {:word "rebloomed", :number 0.75}])
Now let's try generating some more advanced ones:
(gen/generate (s/gen ::hs/check-frequency)) ;; => {:word "disculpatory", :frequency {:zipf -0.025056508740817662, :perMillion 47171, :diversity 0.6838045120239258}} (gen/generate (s/gen ::hs/check-word)) ;; too long to put here...
So far we've looked at how we can use these generators , given a spec, to work on generating some values for us. But what if we could meddle with the generation process itself? Fortunately spec comes with a couple ways of doing this, so as to help us create , when we need it , simpler , more streamlined processes that could return higher quality tests. What I plan to do is to create a generator, that takes each element of our word set , our dictionary , and puts it into the format we mentioned earlier of
[:word ::word :popularity ::perMillion :definitions ::definitions]. Now to actually do this, with accuracy, I would have to make two requests to the WordsAPI per word, and being on only the free tier myself I could only do 1250 words before hitting my limit. But as we're doing this solely with spec for the moment, there is no such limit we need to worry about.
The first tool we have in our box is the
s/with-gen function:
(doc s/with-gen) ------------------------- clojure.spec.alpha/with-gen ([spec gen-fn]) Takes a spec and a no-arg, generator-returning fn and returns a version of that spec that uses that generator nil
Essentially, we're taking a normal spec and attaching a tailored generator, filled with unique checks and such.
;; take this simple spec (s/def ::new-word string?) ;; what are the chances that it's going to generate a word? ;; without using our dictionary set, this is how we could help ;; this spec become a bit more useful (defn just-letters? [word] (every? #(Character/isLetter %) word)) (defn lower-case? [word] (every? #(Character/isLowerCase %) word))
(s/def ::new-word (s/with-gen string? #(s/gen (s/and string? just-letters? lower-case?))))
Whilst you can try all this out and get some stings that resemble something of words, it will be a long time before you get something actually in the dictionary. In fact, I tried this many times and got either "" or just a one letter word. What we can do is introduce a wrapper around our generator, using
gen/such-that to apply constraints. This is preffered over introducing more checks in the spec itself, which increases it's complexity , it's "noise" and it allows the spec itself to be cleaner and more reusable.
(s/def ::new-word (s/with-gen string? #(gen/such-that (complement empty?) (s/gen (s/and string? just-letters? lower-case?)))))
But then we enter another problem which is that we are more likely to get the
couldn't satisfy predicate after 100 tries error. This is ultimately why we use the dictionary, as the passing values are too niche and specific, and are not a "range" so to speak.
Moving on, toward our goal of a re-vamped dictionary, the aim is to have our original dictionary be the set, of which another function will take a value , alongside other specs and
s/cat them together. We can use
gen/tuple to setup the things we need for
s/cat.
gen/tuple takes a series of generator functions and then just pops them into a vector, which we can just pass along:
(s/def ::new-word (s/with-gen (s/cat :word ::word :popularity ::perMillion :definitions ::definitions) #(gen/tuple (s/gen ::word) (s/gen ::perMillion) (s/gen ::definitions)))) ;; but if we do (gen/generate (s/gen ::new-word)) ;; it will just return the vector of ["word" popularity definitions] ;; and so I would have to use conform to get the map that s/cat returns ;; one way of doing it would be to use the same spec, ;; just the generator for the value bit and conform using the spec. (s/conform ::new-word (gen/generate (s/gen ::new-word)))
By doing a bit of snooping I came across a function that's actually much nicer and simplifies this problem a lot, I don't really need a separate spec and generator, for testing all I need is a generator, which comes in the form of
gen/hash-map , which takes kv pairs, and every value should be a generator:
(gen/generate (gen/hash-map :word (s/gen ::word) :popularity (s/gen ::perMillion) :definitions (s/gen ::definitions)))
You can generate most types, otherwise use
s/gen, most data structures, and all sorts of nice helper functions. So far we've just been in the spec world, and only used spec functions, spec generators. But there is one function that helps us to bridge that gap, and allow us to mix in our own clojure functions right into the generation process. This is done with a handy little function called
gen/fmap.
Fmap leverages the functionalities of a basic map, and will take a function , not a spec , not necessarily a predicate either , and apply it to the generator. Since
gen/fmap itself returns a generator , we don't need to use an
s/gen wrapper, otherwise we will get an assertion error as all the spec would contain is the generator and no checks. Moreover, if we don't need to use
s/gen don't define functions like the one below as specs, as you'll get NPEs when trying to generate values from them. Although if you are using
gen/fmap as a component of a larger function, like if we were to incorporate with something like
s/with-gen , then that is indicative that we would go about defining it as a spec.
;; helper functions to help fmap generate accurate results (defn grab-popularity [word] (-> (frequency word) (:frequency) (:perMillion))) (defn grab-definitions [word] (-> (definitions word) (:definitions)))
;; so this could be how we would generate accurate words, using the API functions themselves, ;; but you might want to swap them out for generated results while testing. ;; we can keep the (s/gen ::word) set as the set is the same as our dictionary, and we need ;; a generator for it to work. (def new-word (gen/fmap (fn [word] [word (grab-popularity word) (grab-definitions word)]) (s/gen ::word)))
I guess the last step in our generatory journey would be to see if it can handle our
multi-spec we defined much much earlier on. We would hope to see maps of different keys, so of differing types , and in the case of the
:word method, we hope to see many different lengths of response (as some of the keys are optional). Firstly though, to illustrate all the technicalities and intricacies of spec, I'll show you a simpler multi-spec example and we'll study the different sorts of things we generate and the process of doing so.
;; these are just keys to differentiate between methods (s/def ::example-key keyword?) (s/def ::different-key keyword?) ;; spec which checks ::tag key (s/def ::tag keyword?) ;; grabbing value out of :tag , which is in all maps (defmulti example :tag) (defmethod example :a [_] (s/keys :req-un [::tag ::different-key])) (defmethod example :b [_] (s/keys :req-un [::tag ::example-key]))
Well, the second argument ofWell, the second argument of
;; not nil as we're going to be generating. (s/def ::example (s/multi-spec example *not nil! but what is it ?!*))
s/multi-specis called retag and it can be either a keyword or a function. In the case of a keyword, when the spec is created,
s/multi-specwill generate a function for us to use , and this is the default fn used.
;; taken from the clojure.spec.alpha source code, the impl for multi-spec ... ;; retag is either the dispatch tag , which is usually a keyword , or a function ;; which would then be the generator function ;; based on that value will be the method we call. (if (keyword? retag) #(assoc %1 retag %2) retag)
This generator function that spec would give us takes two args. I'll talk about the second argument first as it's more important , and sort of explains where the first is coming from. The second argument is the dispatch tag and one is picked each time we call (gen/generate ...), for the example above this would be
:a or
:b. Now we've got that tag, it then grabs the method with that key , so if the dispatch tag was
:a then it would grab the spec out of method
:a and generate some values using the spec from the method body. In this case it would use
s/keys and make a map of
:tag and
:different-key. This generated map is what is passed as the first argument. During generation, our function sits atop the generation process , accepting a map and the corresponding tag, and filter out (or reformat) certain attributes. But most of the time we just want the spec as it is. This
associng simply overrides the random value generated for the correct, corresponding dispatch tag. This is why it is called
retag , as we are retagging the randomly generated keyword for the actual, correct, tag , being whatever method attribute was picked.
And when we generate a value:And when we generate a value:
... ;; same code as above just with (s/def ::example (s/multi-spec example :tag)) ;; so we're going to using the default function, but doing ;; (s/def ::example (s/multi-spec example (fn [genv tag] (assoc genv :tag tag)))) ;; is equivalent.
For the eagle-eyed reader that managed to digest the last paragraph, I mentioned that the point ofFor the eagle-eyed reader that managed to digest the last paragraph, I mentioned that the point of
(gen/generate (s/gen ::example)) ;; => {:tag :b, :example-key :OOpmzE+?qRU?9fjYRg?PYEZ*i7/q_23g200}
associng was to have the
:tagmatch the correct method attribute. So what if we didn't assoc to
:tagbut to something like
:tagg? Well, depending on if our spec allows this is irrelevant as
:tagwill fail if the value isn't overriden.
But we see it isn't a problem withBut we see it isn't a problem with
(s/def ::example (s/multi-spec example (fn [genv tag] (assoc genv :tagg tag)))) (gen/generate (s/gen ::example)) ;; => couldn't satisfy predicate after 100 tries...
s/keysas it will let other keys through, if I
assoced tag properly and stuck something else onto
:taggwe pass.
What about when I decide to override the tag key for a different method attribute? So if I just wantedWhat about when I decide to override the tag key for a different method attribute? So if I just wanted
(s/def ::example (s/multi-spec example (fn [genv tag] (assoc (assoc genv :tag tag) :tagg "some val")))) (gen/generate (s/gen ::example)) ;; => {:tag :b, :example-key :-*--+!P-b?E?J81*R2, :tagg "some val"}
:ato be tested, could that work?
(s/def ::example (s/multi-spec example (fn [genv tag] (assoc (assoc genv :tag tag) :tag :a)))) (gen/generate (s/gen ::example)) ;; it does, it will just keep trying until it generates a set for a particular method ;; that has the same keyword as the one we assoced. ;; => {:tag :a, :different-key :bF.k1Es1?q0kk1Y_d*xDT2z+DzSA-!L*rO.fw**C22sGvZ2_R-JeP1*b.+R2!g!?V*PpWmdwp*._t!c2zM7_Oh3-78PI-7Y_N0*o0.PN33pxy0_y7L4-68*j.I6?.d3**-.C+.jio+S_*W5-_6v?5qq1lGhk.p0j.x621Wl21m*5*T+w09HdYI3_.i-?F_5?W8BHJ.!-2037H-*2UB4_f*V!8XL_/s++Tcc4++D*S2*?Q+Z_?52!84}
Up till now , all we have been doing is working with maps and
multi-spec. For this use case it is perfectly fine, and the default function is perfectly useful and we have no difficulty producing a range of samples, but what if we aren't working with maps? We may have to do something different. Take a look at the example, but be careful. Not supplying anything atop the generated value only works when the spec in the method does the job of matching the exact dispatch key. Here is the code from that example:
(defmulti foo first) (defmethod foo :so/one [_] (s/cat :typ #{:so/one} :num number?)) (defmethod foo :so/range [_] (s/cat :typ #{:so/range} :lo number? :hi number?))
defmultiwill look at the first value, and depending on it being
:so/oneor
:so/twoit will push to either method.
s/catthen does the work of making sure that the first argument is the same as the key that would be generated. Moreover, there is no point including any modifications atop the method's spec as it would cause issues.
In cases though where we are working with non-maps and don't have the spec itself producing perfect generations, it means we must include a generator function. Let's take a look at this multi-spec:
Say the data we're working with is in the form :
[{:tag string?}{:example string?}] or [{:tag string?}{:different-key string?}] we can see that
:tag will be the universal identifier, and that is what
defmulti needs to pull out.
(defmulti vectors-and-maps #(:tag (first %)))
Next up we need to define the methods, let's just say that for one
:tag we would like to see the value "a" and another we would like to see the value "b":
(defmethod vectors-and-maps "a" [_] (s/tuple ::tag-map ::example-map))
(defmethod vectors-and-maps "b" [_] (s/tuple ::tag-map ::different-map))
And now the specs for these...
(s/def ::tag (s/and string? #(<= (count %) 10))) (s/def ::tag-map (s/keys :req-un [::tag]))
(s/def ::example-key (s/and string? #(<= (count %) 10))) (s/def ::example-map (s/keys :req-un [::example-key]))
(s/def ::different-key (s/and string? #(<= (count %) 10))) (s/def ::different-map (s/keys :req-un [::different-key]))
And lastly,
(s/def ::example (s/multi-spec vectors-and-maps (fn [genv tag] (assoc-in genv [0 :tag] tag))))
Given that we are creating arbitrary strings we will need to retag, and that is the generator function to do it. Although, if you're thinking that you could just replace the
::tag spec with a set, you would be absolutely right! And you save yourself the hassle of retagging. The set only specifies the accepted method attributes, it won't take long to generate a value from the set that matches the dispatch tag.
(s/def ::tag #{"a" "b"}) ... (s/def ::example (s/multi-spec vectors-and-maps (fn [genv tag] genv))) (gen/generate (s/gen ::example)) ;; => [{:tag "b"} {:different-key "W7Ok86"}]
And that about wraps up about everyhting from me. I urge you to play around with all the generators spec, all the generators
gen has to offer, and by all means use the specs that I've been using in this guide, all of which can be found here. I'm afraid this is the end, but there is still lots to cover! I've missed some things about spec, such as validating macros, working with databases but I'm sure after reading all this you are ready to explore any and all things spec for yourself.
Before I head off, I do want to briefly speak of the changes that are coming to spec in the near future, in the
alpha2 version of the library. Things like
s/nest replacing
s/spec as it is more intuitive, and a whole host of changes are happening under the hood too, I mentioned earlier most specs have an underlying
impl , well that's changing to be more data oriented. You can see the direction of it by looking at the old implementation for
s/+ and the new implementation for it, which I do think will be having it much more aligned with the rest of clojure (data oriented and programmable). For a good explanation on this , see the InsideClojure journal entry on this shift. Other tools like
s/select are predicated to be quite big, a talk of what it does and it's ethos can be given from this talk, by the creator himself.
Further Reading:
Discussion on using fully-qualified namspaces for keys
Spec as a runtime transformation engine
Sign up to WorksHub to join our community of talented developers sharing insights and discovering jobs and opportunities | https://functional.works-hub.com/learn/a-guide-to-the-clojure-spec-library-606d6?utm_source=rss&utm_medium=automation&utm_content=606d6 | CC-MAIN-2020-16 | refinedweb | 13,473 | 60.35 |
Yhc/RTS/Exceptions
From HaskellWiki
1 RTS Exceptions
Support for 'imprecise exceptions' has recently been added to Yhc. Imprecise exceptions allow any kind of exception (including 'error') to be thrown from pure code and caught in the IO monad.
This page attempts to describe how imprecise exceptions are implemented in the Runtime System.
The most important haskell functions for imprecise exceptions are 'catch' and 'throw'.
catch :: IO a -> (Exception -> IO a) -> IO a throw :: Exception -> a
catch takes an IO actione to run, and an exception handler. It runs the IO action and if an exception occurs it runs the exception handler. throw simply throws an exception. For example
... catch doStuff $ \ e -> case e of ErrorCall _ -> return () _ -> throw e ...
This code will execute the IO action doStuff, and if an exception occurs it will catch it. If that exception resulted from a call to the 'error' function then it does nothing, otherwise it rethrows the exception.
1.1 The exception stack
catch blocks can be nested inside each other and throw returns control to the handler for the inner-most catch block. This structure naturally leads us to having a stack of exception handlers with each new catch block pushing a new handler on the stack at the beginning of the block and removing the top handler at the end of the block.
1.2 Haskell level implementation
1.2.1 throw
throw is implemented very simply
throw :: Exception -> a throw = primThrow primThrow :: a -> b
primCatch cannot be written in Haskell and is instead defined directly in bytecode (see src/runtime/BCKernel/primitive.c) as
primThrow e
PUSH_ZAP_ARG e THROW
The THROW instruction removes the value on the top of the program stack 'e' and removes the exception handler on the top of the exception stack. In then returns control to that exception handler, this strips the program stack back to the place where the exception handler was created. THROW then pushes 'e' on the new top of the program 'stack'.
Diagramatically (see Yhc/RTS/Machine for comparison). Before executing THROW we have:
================== | frame | +------------+ : stack-data : <- top exception handler points here ..............
. . .
+------------+ | frame | <- the frame for primThrow +------------+ : exception : ..............
afterwards we have:
================== | frame | +------------+ : stack-data : .............. : exception : ..............
1.2.2 catch
catch is implemented using YHC.Exception.catchException
catchException :: IO a -> (Exception -> IO a) -> IO a catchException action handler = IO $ \ w -> primCatch (unsafePerformIO action) (\e -> unsafePerformIO (handler e))
catchException simply converts from the IO monad into standard closures and passes them to the primitive 'primCatch'
primCatch :: a -> (b -> a) -> a
primCatch also cannot be written in Haskell, and is instead defined directly in bytecode:
primCatch act h
NEED_HEAP_32 CATCH_BEGIN handler PUSH_ZAP_ARG act EVAL CATCH_END RETURN handler: PUSH_ZAP_ARG h APPLY 1 RETURN_EVAL
'CATCH_BEGIN label' creates a new exception handler and pushes it on the top of the exception stack. The new exception handler will return control to the function that executed the CATCH_BEGIN instruction and resume execution at the code given by 'label'.
Having pushed the new exception handler on the stack, primCatch forces evaluation of the action, causing the code inside the catch block to be executed.
If evaluation of the action succeeds without throwing an exception then CATCH_END is executed which removes the handler on the top of the stack (which is necessarily the same handler as was pushed by CATCH_BEGIN).
However, if evaluation of the action results in a call to throw then execution returns to 'handler'. Here we need to remember that THROW pushes the exception thrown on the program stack after stripping back to the exception handler. Thus at 'handler' we know that the exception thrown is on the top of the program stack.
We thus 'PUSH_ZAP_ARG h' to push the handler function on the stack, and 'APPLY 1' to apply the handler function to the exception and finally 'RETURN_EVAL' to call the handler function.
1.3 Interpreter implementation
In the C interpreter exception handlers are stored in the heap as standard Haskell heap nodes. The structure is given in src/runtime/BCKernel/node.h
/* an exception handler */
typedef struct _ExceptionHandlerNode {
NodeHeader node; struct _ExceptionHandlerNode* next; /* next exception handler in the stack */ Node* vapptr; /* vapptr of the handler code */ CodePtr ip; /* ip to jump to for the handler code */ UInt spOffs; /* offset of sp from G_spBase, offset is easier than ptr here because of GC */ UInt fpOffs; /* offset of fp from G_spBase, again offsets easier */
}ExceptionHandlerNode;
vapptr, ip and fpOffs is basically the same information as stored in stack frames (see Yhc/RTS/Machine). spOffs is included for completeness although in practice it isn't strictly necessary.
fpOffs and spOffs are offsets from G_spBase rather than direct pointers because program stacks are stored in the heap, and thus might be moved by the garbage collector. The offset from the base of the stack, however, will not be changed by the garbage collector.
The 'next' field allows us to setup a stack of exception handlers. Each process has its own exception stack, so the handler on the top of the stack is stored in the process information structure (see Yhc/RTS/Concurrency).
Ensuring that the ExceptionHandlerNodes are treated correctly by the GC is ensured by simply having CATCH_BEGIN push the created ExceptionHandlerNode on the program stack. There it remains until it's either removed by the corresponding CATCH_END or it is stripped off the stack by THROW. This simple trick means we can avoid having to scan process information structures for pointers to heap nodes.
1.4 Changing the type of IO
Before imprecise exceptions the IO type was defined as:
newtype IO a = IO (World -> Either IOError a)
World is a dummy argument to prevent us from accidentally introducing a CAF, and the function either returns (Left err) if there was some error performing the IO action or (Right a) if the IO action succeeded with value 'a'.
However imprecise exceptions allow us to improve this to:
newtype IO a = IO (World -> a)
since we can simply implement throwing and catching IOErrors using 'throw' and 'catch'. This requires a slight change in the code for the IO monad to:
instance Monad IO where (IO x) >>= yf = IO $ \ w -> let xv = x w in xv `seq` case yf xv of IO y -> y w (IO x) >> (IO y) = IO $ \ w -> x w `seq` y w return a = IO $ \ w -> a
Here we use 'seq' to ensure that impure IO functions are executed in the correct order.
There are also some changes to src/packages/yhc-base-1.0/YHC/_Driver.hs to ensure that no exceptions can possibly 'escape' the program. | https://wiki.haskell.org/Yhc/RTS/Exceptions | CC-MAIN-2016-22 | refinedweb | 1,093 | 56.69 |
Object-Oriented Programming (OOP) is a widely popular programming paradigm used across many different languages. This method of structuring a program uses objects that have properties and behaviors. Each programming language handles the principles of OOP a little differently, so it’s important to learn OOP for each language you are learning. Today, we’ll discuss the basics of OOP in Python to propel your Python skills to the next level.
Whether you're new to OOP or just curious about its use in Python, this is the perfect article to get started. You'll learn the benefits of OOP in Python and how to apply OOP concepts in your code. By the end of this article, you'll be able to create classes, initialize objects, and apply inheritance to your Python projects.
Today we'll cover:
- What is Object-Oriented Programming?
- OOP in Python
- How to define a class in Python
- How to create an object in Python
- How to create instance methods in Python
- How to use inheritance in Python
- Putting it together: Calculator Problem
- Refresher and moving forward
Master Python object-oriented programming the easy way
Advance your Python skills with hands-on OOP lessons and practice problems.
Learn Object-Oriented Programming in Python
What is Object-Oriented Programming?
Object-Oriented Programming is a programming paradigm based on the creation of reusable "objects" that have their own properties and behavior that can be acted upon, manipulated, and bundled.
These objects package related data and behaviors into representations of real-life objects. OOP is a widely used paradigm across various popular programming languages like Python, C++, and Java.
Many developers use OOP because it makes your code reusable and logical, and it makes inheritance easier to implement. It follows the DRY principle, which makes programs much more efficient.
In OOP, every object is defined with its own properties. For example, say our object is an Employee. These properties could be their name, age, and role. OOP makes it easy to model real-world things and the relationships between them. Many beginners prefer to use OOP languages because they organize data much like how the human brain organizes information.
The four main principles of OOP are inheritance, encapsulation, abstraction, and polymorphism. To learn more about these, go read our article What is Object Oriented Programming? for a refresher before continuing here.
Let’s refresh our memory on the building blocks of OOP before seeing how it works in Python.
Properties
Properties are the data that describes an object. Each object has unique properties that can be accessed or manipulated by functions in the program. Think of these as variables that describe the individual object.
For example, a
sneaker1 object could have the properties
size and
isOnSale.
Methods
Methods define the behavior of an object. Methods are like functions that are within the scope of an object. They're often used to alter the object's properties.
For example, our
sneaker1 object would have the method
putOnSale that switches the
isOnSale property on or off.
They can also be used to report on a specific object's properties. For example, the same
sneaker1 object could also have a
printInfo method that displays its properties to the user.
Classes
Each object is created with a class. Think of this like the blueprint for a certain type of object. Classes list the properties essential to that type of object but do not assign them a value. Classes also define methods that are available to all objects of that type.
For example,
sneaker1 was created from the class
Shoe that defines our properties,
size and
isOnSale, and our methods,
putOnSale and
printInfo. All objects created from the
Shoe class blueprint will have those same fields defined.
Classes are like the umbrella category that each object falls under.
Instances
An object is an instance of its parent class with a unique name and property values. You can have multiple instances of the same class type in a single program. Program calls are directed to individual instances whereas the class remains unchanged.
For example, our
shoe class could have two instances
sneaker1, with a
size of
8, and
sneaker2, with a
size of
12. Any changes made to the instance
sneaker1 will not affect
sneaker2.
OOP in Python
Python is a multi-paradigm programming language, meaning it supports OOP as well as other paradigms. You use classes to achieve OOP in Python. Python provides all the standard features of object-oriented programming.
Developers often choose to use OOP in their Python programs because it makes code more reusable and makes it easier to work with larger programs. OOP programs prevent you from repeating code because a class can be defined once and reused many times. OOP, therefore, makes it easy to achieve the "Don't Repeat Yourself" (DRY) principle.
Let’s see an example of how OOP improves Python. Say you organize your data with a list instead of a class.
sneaker1 = [8, true, "leather", 60]
Here, the list
sneaker1 contains the values for properties
size,
isOnSale,
material and
cost. This approach doesn't use OOP and can lead to several problems:
- You must remember which index they used to store a certain type of data ie.
sneaker1[0] = size. This is not as intuitive as the object call of
sneaker1.size
- Not reusable. You have to create a new list for each item rather than just initializing a new object.
- Difficult to create object-specific behavior. Lists cannot contain methods. Each list must call the same global function to achieve a given behavior rather than an object-specific method.
Instead, with OOP, we could write this as a
Shoe class object to avoid these problems and make our code more useful down the line.
sneaker1 = Shoe(8, true, "leather", 60)
To avoid these problems, Python developers often use OOP over other available paradigms. Below we'll explore how you can implement OOP into your Python programs.
Keep learning Python OOP
Learn advanced Python OOP techniques from industry veterans. Educative's text-based courses are easily skimmed and give you the experience you need to land a job.
Learn Object-Oriented Programming in Python
How to define a class in Python
To create a class in Python, we use the
class keyword and a property like this:
class MyClass: x = 4
Then we use
MyClass to create an object like this:
p1 = MyClass() print(p1.x)
Let' take that bit deeper. For the following examples, imagine that you're hired to make an online store for a shoe store.
We'll learn how to use Python to define a
Shoe class and the properties each shoe must have listed on the site.
First, we use the
class keyword to begin our class and then set its name to
Shoe. Each instance of
Shoe will represent a different pair of shoes. We then list the properties each shoe will have,
size,
isOnSale,
material, and
price. Finally, we set each property to value
None. We'll set each of these property values when we initialize a
Shoe object.
class Shoe: # define the properties and assign none value size = None isOnSale= None material = None price = None
Note: Spacing
Remember to include four spaces before all properties or methods within a class so that Python recognizes they're all within the defined class.
How to create an object in Python
Now, we'll see how to initialize objects and set property values to represent each pair of shoes.
To create an object, we have to first set our initializer method. The initializer method is unique because it has a predefined name,
__init__, and does not have a return value. The program automatically calls the initializer method when a new object from that class is created.
An initializer method should accept the special
self parameter then all class properties as parameters. The
self parameter allows the initializer method to select the newly created object instance.
We then populate the initializer method with one instance variable initialization for each property. Each of these initializations sets a property of the created object to the value of the corresponding parameter.
For example, the first
self.size = sizesets the
sizeproperty of the created object to equal the
sizeparameter passed at object creation.
Once the initializer is set up, we can create an object with
[objectName] = Shoe() and pass the necessary parameters. On line 10, we create a
Shoe object called
sneaker3 with properties of
size = 11,
isOnSale = false,
material = "leather", and
price = 81
We can use this code to create as many instances of
Shoe that we need.
class Shoe: # defines the initializer method def __init__(self, size, isOnSale, material, price): self.size = size self.isOnSale = isOnSale self.material = material self.price = price # creates an object of the Shoe class and sets # each property to an appropriate value sneaker3 = Shoe(11, 'false', "leather", 81)
How to create instance methods in Python
Next, we'll add instance methods to our
Shoe class so we can interact with object properties in our shoe store program. The main advantage of instance methods is that they're all available for any
Shoe type object as soon as it is created.
To create instances, you call the class and pass the arguments that its
__init__method accepts.
class Shoe: # defines the initializer method def __init__(self, size, isOnSale, material, price): self.size = size self.isOnSale = isOnSale self.material = material self.price = price # Instance method def printInfo(self): return f" This pair of shoes are size {self.size}, are made of {self.material}, and costs ${self.price}" # Instance method def putOnSale(self): self.isOnSale = true sneaker3 = Shoe(11, 'false', "leather", 81) print (sneaker3.printInfo())
Our first instance method is
printInfo that lists all properties except
isOnSale. On line 10, we use the keyword
def to begin declaring a new method, then name that method
printInfo and finally list the special parameter
self.
In this case,
self allows the method to access any parameter within the object this method is called on. We then write out our message on line 11 using
self.[property] calls.
Note: This uses Python 3.6 f-string functionality. Any section of the message in curly brackets is not actually printed and instead prints the value of the stated property for the selected object.
Our second instance method,
putOnSale, changes the value of the
isOnSale property within the selected object to
true. On line 15, we use the keyword
def, our method name, and
self parameter to define a method.
Then we populate that method with a statement to change the
isOnSale property to
true on line 16. The first part of this statement selects the
isOnSale property of the currently selected object. The second part of this statement sets the value of that selected property to
true.
Changing the value of this
Shoe object's
isOnSale property does not change the default value within the
Shoe class. Python does not require a
return value within every method.
How to use inheritance in Python
Finally, we'll add the subcategory
Sandal of the
Shoe class using inheritance.
Inheritance allows a new class to take on the properties and behaviors from another class.
The class that is inherited from is called the parent class. Any class that inherits from a parent class is called a child class.
Child classes don't just inherit all properties and methods but can also expand or overwrite them.
Expand refers to the addition of properties or methods to the child class that are not present in the parent. Overwrite is the ability to redefine a method in a child class that has already been defined in the parent class.
The general syntax for single class inheritance is:
class BaseClass: Base class body class DerivedClass(BaseClass): Derived class body
We can also have multiple class inheritance:
class BaseClass1: Base class1 body class BaseClass: Base class2 body class DerivedClass(BaseClass1,BaseClass2): Derived class body
To implement inheritance in Python, define a class as normal but add the name of its parent class in parentheses before the final colon (line 2).
#Sandal is the child class to parent class Shoe class Sandal(Shoe): def __init__(self, size, isOnSale, material, price, waterproof): #inherit self, size, isOnSale, material, and price properties Shoe.__init__(self, size, isOnSale, material, price) #expands Sandal to contain additional property waterproof self.waterproof = waterproof #overwrites printInfo to reference "pair of sandals" rather than shoes def printInfo(self): return f" This pair of sandals are size {self.size}, are made of {self.material}, and costs ${self.price}" sandal1 = Sandal(11, False, "leather", 81, True) print (sandal1.printInfo())
We then define a new initializer method that takes all properties from
Shoe and adds an unique
waterproof property.
On line 3, we declare the initializer method for all properties we'll need from both parent and child classes. Then on line 5 we call the initializer method from the parent
Shoe class to handle shared properties. We then expand beyond the inherited properties to add the
waterproof property on line 7.
You can use expanded classes to reduce rewritten code. If our class did not inherit from
Shoe, we would need to recreate the entire initializer method to make just one small difference.
Next, we overwrite the
printInfo class defined in
Shoe to be
Sandal specific. Python will always use the most local definition of a method.
Therefore, Python will use the newly defined
printInfo method in
Sandal over the inherited
printInfo method from
Shoe when the
printInfo method is called.
Putting it all together: Calculator Problem
Let’s put the skills you learned into practice with a challenge. Your goal is to write a Python class called
Calculator. There will be two steps to this challenge: define the calculator's properties and add methods for each of the four operations.
Task 1
Write an initializer to initialize the values of
num1 and
num2. The properties are
num1 and
num2.
Task 2
Add four methods to the program:
add(), a method which returns the sum of
num1and
num2.
subtract(), a method which returns the subtraction of
num1from num2.
multiply(), a method which returns the product of
num1and
num2.
divide(), a method which returns the division of
num2by
num1.
Your input will be the object's property integers and your output will be addition, subtraction, division, and multiplication results for those numbers.
# Sample input obj = Calculator(10, 94); obj.add() obj.subtract() obj.multiply() obj.divide() # Sample output 104 84 940 9.4
Try it out on your own and check the solution if you get stuck.
Here's an empty program structure to get you started:
class Calculator: def __init__(self): pass def add(self): pass def subtract(self): pass def multiply(self): pass def divide(self): pass
Solution Breakdown
class Calculator: def __init__(self, num1, num2): self.num1 = num1 self.num2 = num2 def add(self): return (self.num2 + self.num1) def subtract(self): return (self.num2 - self.num1) def multiply(self): return (self.num2 * self.num1) def divide(self): return (self.num2 / self.num1) demo1 = Calculator(10, 94) print("Addition:", demo1.add()) print("Subtraction:", demo1.subtract()) print("Mutliplication:", demo1.multiply()) print("Division:", demo1.divide())
Let’s dive into the solution. It’s okay if you didn’t get it the first time around! Practice is how we learn.
- We first implement the
Calculatorclass with two properties:
num1and
num2.
- On line 3-4, we initialize both properties,
num1and
num2.
- On line 7, we implemented
add(), a method that returns the sum,
num1+
num1, of both properties.
- On line 10, we implement
subtraction(). This method returns the difference between
num1and
num2.
- On line 13, we implemented
multiplication(), a method that returns the product of
num2and
num1.
- On line 16, we implement
division(). This method returns the quotient of
num2by
num1.
Refresher and moving forward
You've now completed your dive into the world of object-oriented Python programming. Today, we broke down the definition of object-oriented programming, why it's popular in Python, and walked you through the key parts of an object-oriented program in Python.
However, the concepts covered in this article are just the beginning of what OOP is capable of. The next topics to learn about in your OOP journey are:
- Data Encapsulation
- Polymorphism
- Aggregation
- Operator Overloading
- Information Hiding
Educative's Learn Object-Oriented Programming in Python is the ideal tool for learning OOP concepts quickly. With interactive examples and quizzes, this course makes sure you have the practice experience you need to apply OOP in your own programs.
Happy Learning!
Discussion
Interesting article! I'd like to discuss on some points, however.
When you say:
This is debatable. First, because I don't think inheritance is something very valuable for a beginner; it's hard to use properly. OOP doesn't follow the DRY principle per se: if I want to create five different classes to "code" the same knowledge, I can if I want to.
Inheritance is definitely proper to OOP. However, polymorphism (inheritance is a flavor of it), abstraction, and encapsulation can be achieved with other paradigms. The real difference with OOP is this: local persistent variables (that's why you can have two properties with the same name in different objects and mutate one without affecting the other), inheritance indeed, and message passing (you can call two functions with the same name on different objects, you'll call two different functions; when you do obj.add(), you send the message add to an object and it might return something).
OOP is mainly used because, historically, C++ was heavily used. Why? Not because it was better than anything else, but mainly because it was derived from C and compatible with C. And C was heavily used. There are other reasons, too; the very good talk Why C++ Sails When the Vasa Sank gives very good insights on that.
From there, everybody was using OOP, so for a language to be successful, it had to have some implementation of OOP as well.
Last thing: a paradigm is a set of idea, and it's very difficult to define it. Smalltalk is totally different from C++ which is different from Golang which is different from Python, for example. But they are all considered OOP languages. | https://dev.to/educative/how-to-use-object-oriented-programming-in-python-3d6c | CC-MAIN-2020-50 | refinedweb | 3,044 | 56.55 |
Sometimes even the simple stuff can have some interesting unit tests and corner cases. I had a class that's basically a "Thing" with an X,Y position and an OnMoved event. In C# :
public class ThingMovedArgs {
public ThingMovedArgs(Thing t, Point ptOld, Point ptNew);
}
class Thing
{
int X; // just get/set X
int Y; // just get/set Y
Point XY; // get/set X and Y together as a Point
public event EventHandler<ThingMovedArgs> OnMoved;
}
The OnMoved handler provides the old location and new location. Assume this is all single-threaded.
It looks so simple. Yet there are some pretty interesting corner cases that would actually be reasonable for a well-intentioned developer to get wrong.
Here are the tests I wrote and some of the "specification holes" they reveal :
Point XY // get/set X and Y together as a Point
{
set // BUG!!! Fires OnMoved 2 times instead of just once
{
X = value.X; // <-- fires OnMoved once for changing X
Y = value.Y; // <-- fires OnMoved a 2nd time for changing Y
}
}
Anybody see others?
The lesson learned:There's an infinite number of possible bugs in a sufficiently dumb implementation. I think a key here is that many of these failures:- could occur with a bad corner-case from a reasonable attempt to implement the class. If you got 100 developers to implement the class, you'd probably hit all of these problems. Or you could probably hit these cases with just 5 implementations if you just asked the developers to try and optimize their code. :)- indicate spec holes. | http://blogs.msdn.com/b/jmstall/archive/2007/08/10/unit-tests-for-a-simple-onmoved-event-handler.aspx | CC-MAIN-2014-35 | refinedweb | 259 | 63.49 |
I wrote the slower table version first, then I realized if I had used a set, I would be able to avoid checking through the whole preceding s1 and s2. The set version did turn out to be significantly faster.
I believe both versions could be further tuned to run faster (for example, you do not really have to store tuples in the set, just an int should be enough, and this may actually eliminate the need for a set. I've not got time to try this though).
The faster set version:
def isInterleave(self, s1, s2, s3): if len(s1)+len(s2)!=len(s3): return False match=set([(0,0)]) # here 0 means including no element from s1 or s2. So (0,0) just means s1[:0] and s2[:0] form s3[:0] # So (k1, k2) will mean s[:k1] and s[:k2] form s3[:k3] for k3 in xrange(len(s3)): newMatch=set() for m in match: if len(s1)>m[0] and s1[m[0]]==s3[k3]: newMatch.add((m[0]+1, m[1])) if len(s2)>m[1] and s2[m[1]]==s3[k3]: newMatch.add((m[0], m[1]+1)) match=newMatch return (len(s1), len(s2)) in match
The slow table version:
def isInterleaveSlow(self, s1, s2, s3): if len(s1)+len(s2)!=len(s3): return False tmpBuf=[[0]*(len(s1)+1) for _ in xrange(len(s2)+1)] tmpBuf[0][0]=1 for k3 in xrange(len(s3)): newBuf=[[0]*(len(s1)+1) for _ in xrange(len(s2)+1)] for k1 in xrange(max(0, k3-len(s2)), min(k3+1, len(s1))): if s1[k1]==s3[k3]: newBuf[k3-k1][k1+1]=1 if tmpBuf[k3-k1][k1] else 0 for k2 in xrange(max(0, k3-len(s1)), min(k3+1, len(s2))): if s2[k2]==s3[k3]: newBuf[k2+1][k3-k2]=1 if (tmpBuf[k2][k3-k2] or newBuf[k2+1][k3-k2]) else 0 tmpBuf=newBuf return bool(tmpBuf[-1][-1])
Edited:
I found a lot of table solutions posted by other people already...much better than my clumsy table, I shouldn't have to go through layers of k3.
one example is here: | https://discuss.leetcode.com/topic/34214/2-python-dp-solutions-1-quick-44-ms-the-other-slow-156-ms | CC-MAIN-2017-51 | refinedweb | 368 | 67.28 |
On Fri, 2002-06-07 at 10:35, Peter Wächtler wrote:> Vladimir Zidar wrote:> > Nice to have everything as POSIX says, but how could process-shared> > mutex be usefull ? Imagine two processes useing one mutex to lock shared> > memory area. One process locks, and then dies (for example, it goes> > sigSEGV way). Second process could wait for ages (untill reboot ?) and> > it won't get lock() on that mutex ever. Wouldn't it be more usefull to> > have automatic mutex cleanup after process death ? Just make a cleanup,> > and mark it as 'damaged', so other processes will eventualy get error> > saying that something went wrong.> > > > > > Look at kernel/futex.c in 2.5 tree.> I vote for killing the "dangling" process - like it's done in IRIX. I don't like killing other processes just for that. I like the way file locks works. But they have some shortcomings: 1. they work for files only (and consume one file descriptor per lock) 2. they don't work as expected (hm, well, what is exactly expected Idon't know, but I don't like how they work) when used from threads. I don't like the way pshared pthread_mutex_t works 1. they are unnamed 2. there is no automatic cleanup I don't like the way sysv ipc works. 1. they are ... well, not exactly *named*, but have some twistedidentifiers generated with ftok() on files, messing with inodes and suchthat they look like one big kludge. 2. theres hard limit on how much of them can process create, use. 3. theres no automatic cleanups. So I had to invent (or at least to pick idea from other OS-es (-: )myself: Here is what I've implemented so far, for private use, but if somebodyis interested, I would be glad share the source, even to release itunder GPL. Process shared, thread shared, named mutexes. I call them nutexes. Every nutex has name, creator ownership andpermission bits much like files, with their names not in fs namespace,but rather somewhere else. Nutex works much like file lock, but it work in natural way, no matterfrom which user-space execution context (process, thread, anything else?) called. They can hold read or write lock, with write locks having higherpriority than read ones. Nutex connection to execution context is over one single filedescriptor ( "/dev/nutex" ), which is opened once from each context, atfirst access and stored for example, in static variable for processes,and for threads with pthread_set/getspecific(). On single file descriptor, caller can open/create as much nutexes as itlikes. There is no hard/implementation limits (soft-limits are to beimplemented - today maybe, over /proc/nutex interface ?). So far, so good. But, now it comes to abnormal program termination. Nutexes do threedifferent things in three different situations. When process terminates, /dev/nutex is automaticaly closed, and allassociated nutexes are automaticaly unlocked, BUT: 1. If process was holding READ lock, nothing special happens. 2. If process was holding WRITE lock, nutex is marked as 'damaged', andevery subsequent Lock() from other processes on that nutex will resultwith error EPIPE. 3. If process was *creator* of this nutex, it is marked as REMOVED, andall subsequent Lock() attempts from other processes on that nutex willresult with error EIDRM. Nice eh ? Also, there is early stage "/proc/nutex" interface, that can showstatus much like "/proc/locks" is doing now. All that is implemented as single kernel module which registers/dev/nutex and /proc/nutex at initialization, and do all hard work overfour IOCTLs. Anybody interesed can contact me for source in private mail, since itis not in final stage yet (two more things to implement), and I won'tpost it anywhere today. So, let me hear the comments... ?-- Bye, and have a very nice day !-To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to [email protected] majordomo info at read the FAQ at | http://lkml.org/lkml/2002/6/7/91 | CC-MAIN-2015-06 | refinedweb | 654 | 65.73 |
#include <deal.II/fe/mapping_q_cache.h>
This class implements a caching strategy for objects of the MappingQ family in terms of the MappingQGeneric::compute_mapping_support_points() function, which is used in all operations of MappingQGeneric. The information of the mapping is pre-computed by the MappingQCache::initialize() function.
The use of this class is discussed extensively in step-65.
Definition at line 46 of file mapping_q_cache.h.
Constructor.
polynomial_degree denotes the polynomial degree of the polynomials that are used to map cells from the reference to the real cell.
Definition at line 26 of file mapping_q_cache.cc.
Copy constructor.
Definition at line 34 of file mapping_q_cache.cc.
Destructor.
Definition at line 43 of file mapping_q_cache.cc.
clone() functionality. For documentation, see Mapping::clone().
Reimplemented from MappingQGeneric< dim, spacedim >.
Definition at line 57 of file mapping_q_cache.cc.
Returns
false because the preservation of vertex locations depends on the mapping handed to the reinit() function.
Reimplemented from MappingQGeneric< dim, spacedim >.
Definition at line 66 of file mapping_q_cache.cc.
Initialize the data cache by computing the mapping support points for all cells (on all levels) of the given triangulation. Note that the cache is invalidated upon the signal Triangulation::Signals::any_change of the underlying triangulation.
Definition at line 75 of file mapping_q_cache.cc.
Initialize the data cache by letting the function given as an argument provide the mapping support points for all cells (on all levels) of the given triangulation. The function must return a vector of
Point<spacedim> whose length is the same as the size of the polynomial space, \((p+1)^\text{dim}\), where \(p\) is the polynomial degree of the mapping, and it must be in the order the mapping or FE_Q sort their points, i.e., all \(2^\text{dim}\) vertex points first, then the points on the lines, quads, and hexes according to the usual hierarchical numbering. No attempt is made to validate these points internally, except for the number of given points.
Definition at line 91 of file mapping_q_cache.cc.
Return the memory consumption (in bytes) of the cache.
Definition at line 129 of file mapping_q_cache.cc.
This is the main function overriden from the base class MappingQGeneric.
Reimplemented from MappingQGeneric< dim, spacedim >.
Definition at line 142 of file mapping_q_cache.cc.
The point cache filled upon calling initialize(). It is made a shared pointer to allow several instances (created via clone()) to share this cache.
Definition at line 138 of file mapping_q_cache.h.
The connection to Triangulation::signals::any that must be reset once this class goes out of scope.
Definition at line 144 of file mapping_q_cache.h. | https://dealii.org/developer/doxygen/deal.II/classMappingQCache.html | CC-MAIN-2020-10 | refinedweb | 429 | 50.43 |
Evaluating ACPI Control Methods Synchronously
A device driver can use the following device control requests to synchronously evaluate control methods that are defined in the ACPI namespace of a device:
Starting with Windows 2000, this request evaluates a control method that is an immediate child object in the ACPI namespace of the device to which the request is sent.
IOCTL_ACPI_EVAL_METHOD_EX
Starting with Windows Server 2008 and Windows Vista, this request synchronously evaluates a control method that is supported by the device or a descendant child object of the device to which the request is sent.
The Windows ACPI driver, Acpi.sys, handles these requests on behalf of devices that are specified in the system description tables in the ACPI BIOS..
For example, a WDM driver performs the following sequence of operations to use one of these IOCTLs:
Calls IoBuildDeviceIoControlRequest to build the request.
Calls IoCallDriver to send the request down the device stack.
Waits for the I/O manager to signal the driver that the lower-level drivers have completed the request.
Checks the status of the request.
Checks the validity of the output arguments.
Processes the output arguments that are returned to the driver.
Completes the request.
To build a request, a driver calls IoBuildDeviceIoControlRequest and supplies the following parameters:
IoControlCode is set to IOCTL_ACPI_EVAL_METHOD. The ACPI driver supports methods that take no input arguments, that take a single integer, that take an ASCII string, or that take a custom array of input arguments. For more information about the supported input buffer structures, see Control Method Input Buffer Structures.
InputBufferLength is set to the size, in bytes, of the input buffer that is supplied by InputBuffer.
OutputBufferLength supplies the size, in bytes, of the output buffer that is supplied by OutputBuffer.
InternalDeviceIoControl is set to FALSE.
Event is set to a pointer to a caller-allocated and initialized event object. The driver waits until the I/O manager signals this event, which indicates that the lower-level drivers have completed the request.
OutputBuffer supplies a pointer to an ACPI_EVAL_OUTPUT_BUFFER structure that contains the output arguments from the control method. Output arguments are specific to a given control method. For a driver to return any output, it must allocate a buffer that is large enough to hold all the output arguments.
IoStatusBlock is set to an IO_STATUS_BLOCK structure. This returns the status of the request that was set by the lower-level drivers.
For a code example of how to evaluate a control method that does not take input arguments, see Evaluating a Control Method Without Input Arguments.
For a code example of how to evaluate a control method that takes input arguments, see Evaluating a Control Method that Takes Input Arguments. | https://docs.microsoft.com/en-us/windows-hardware/drivers/acpi/evaluating-acpi-control-methods-synchronously | CC-MAIN-2018-22 | refinedweb | 452 | 54.12 |
RoboFont can install a font directly in the OS for quick testing. This font will be directly available for use in any other application. The font will be installed only for the current user and it will be deactivated when the UFO document in RoboFont is closed, when the user quits RoboFont or when the user logs out.
RoboFont does not provide a print function. Many aspects could influence the proofing of type, so it does not seem feasible to design a specimen-maker tool that could fit all the different needs. The Test Install feature allows users to check their type where and how they prefer.
There are some settings you can alter in the Preferences like auto hint, remove overlap and font format (.otf or .ttf).
Test install through the graphical interface
You can test install a font on your machine using the option Test Install option from the Application Menu > File
Test install via script
Test installing a font through code is as simple as calling the
.testInstall() method on a
RFont
myFont = CurrentFont() myFont.testInstall()
Using a test installed font with Python
Differently from other applications installed on your machine (like InDesign or TextEdit), DrawBot is not directly able to detect changes in a test installed font. Once a font is inserted in the OS with a unique name, it become frozen from the OS user interface standpoint. It happens for safety reasons. There are a few different ways to overcome this issue, each one with pros and cons.
Unique postscript name
Do not change the font you already test installed, just pretend to create a new one! The postscript font name is the unique identifier for the macOS and DrawBot, so you might use a unique postscript name each time you want to test install and typeset the font. Assuming you are running this code from the DrawBot extension:
import time myFont = CurrentFont() originalPostScriptName = myFont.info.postscriptFontName # we update the postscriptFontName attribute with the current time myFont.info.postscriptFontName = f'{myFont.info.postscriptFontName}-{time.time()}' # test install and typesetting myFont.testInstall() font(myFont.info.postscriptFontName, 24) text("Hello, world!", (200, 200)) # we revert the original name back in our font myFont.info.postscriptFontName = originalPostScriptName
This is definitely the hackiest approach among the three, but sometimes it is good enough. It makes the test installed font accessible system wide and it does not require any extra configuration.
Generate a temporary file
DrawBot accepts a file path as
font() input, so you can avoid
.testInstall() entirely. Just generate a temporary binary file and use it with DrawBot. Assuming you are running this code from the DrawBot extension:
import tempfile binaryFormat = "otf" myFont = CurrentFont() with tempfile.NamedTemporaryFile(suffix=f".{binaryFormat}") as temp: myFont.generate(path=temp.name, format=binaryFormat, checkoutlines=True) newPage('A3Landscape') font(temp.name, 24) text("Hello, world!", (200, 200))
With this solution, the font will not be available system wide.
Launch your code from terminal
When using DrawBot to generate test specimens it is advised to install DrawBot as a module and run your DrawBot script from terminal.
The scope of the process ends when the script is done, any changes to the font and a new test installed version will be available in your script that runs in terminal.
When using the application DrawBot, the scope ends only when the application quits, so changes in the font will not be detected by DrawBot. This implies to manually test install your font in RoboFont.
import drawBot as dB dB.newDrawing() dB.newPage('A3Landscape') dB.font("MyFont-Regular", 24) dB.text("Hello, world!", (200, 200)) dB.saveImage('mySpecimen.pdf') dB.endDrawing() | https://doc.robofont.com/documentation/tutorials/using-test-install/ | CC-MAIN-2021-39 | refinedweb | 605 | 55.64 |
Developing 3D Games for Windows* 8 with C++ and Microsoft DirectX*
Published on July 24, 2014
By Bruno Sonnino
Game development is a perennial hot topic: everybody loves to play games, and they are the top sellers on every list. But when you talk about developing a good game, performance is always a requirement. No one likes to play a game that lags or has glitches, even on low-end devices.
You can use many languages and frameworks to develop a game, but when you want performance for a Windows* game, nothing beats the real thing: Microsoft DirectX* with C++. With these technologies, you’re close to the bare metal, you can use the full capabilities of the hardware, and you get excellent performance.
I decided to develop such a game, even though I’m mostly a C# developer. I have developed a lot in C++ in the past, but the language now is far from what it used to be. In addition, DirectX is a new subject for me, so this article is about developing games from a newbie standpoint. Experienced developers will have to excuse my mistakes.
In this article, I show you how to develop a soccer penalties shootout game. The game kicks the ball, and the user moves the goalkeeper to catch it. We won’t be starting from scratch. We’ll be using the Microsoft Visual Studio* 3D Starter Kit, a logical starting point for those who want to develop games for Windows 8.1.
The Microsoft Visual Studio* 3D Starter Kit
After you download the Starter Kit, you can extract it to a folder and open the
StarterKit.sln file. This solution has a Windows 8.1 C++ project ready to run. If you run it, you will see something like Figure 1.
Figure 1.Microsoft Visual Studio* 3D Starter Kit initial state
This Starter Kit program demonstrates several useful concepts:
- Five objects are animated: four shapes turning around the teapot and the teapot “dancing.”
- Each element has a different material: some are solid colors, and the cube has a bitmap material.
- Lighting comes from the top left of the scene.
- The bottom right corner contains a frames-per-second (FPS) counter.
- A score indicator is positioned at the top.
- Clicking an object highlights it, and the score is incremented.
- Right-clicking the game or swiping from the bottom calls up a bottom app bar with two buttons to cycle the color of the teapot.
You can use these features to create any game, but first you want to look at the files in the kit.
Let’s start with
App.xaml and its
cpp/h counterparts. When you start the
App.xaml application, it runs
DirectXPage. In
DirectXPage.xaml, you have a
SwapChainPanel and the app bar. The
SwapChainPanel is a hosting surface for DirectX graphics on an XAML page. There, you can add XAML objects that will be presented with the Microsoft Direct3D* scene—this is convenient for adding buttons, labels, and other XAML objects to a DirectX game without having to create your own controls from scratch. The Starter Kit also adds a
StackPanel, which you will use as a scoreboard.
DirectXPage.xaml.cpp has the initialization of the variables, hooking the event handlers for resizing and changing orientation, the handlers for the Click event of app bar buttons, and the rendering loop. All the XAML objects are handled like any other Windows 8 program. The file also processes the
Tapped event, checking whether a tap (or mouse click) hits an object. If it does, the event increments the score for that object.
You must tell the program that the
SwapChainPanel should render the DirectX content. To do that, according to the documentation, you must .” This is done in the
CreateWindowSizeDependentResources method in
DeviceResources.cpp.
The main game loop is in
StarterKitMain.cpp where the page and the FPS counter are rendered.
Game.cpp has the game loop and hit testing. It calculates the animation in the
Update method and draws all objects in the
Render method. The FPS counter is rendered in
SampleFpsTextRenderer.cpp.
The objects of the game are in the
Assets folder.
Teapot.fbx has the teapot, and
GameLevel.fbx has the four shapes that spin around the dancing teapot.
With this basic knowledge of the Starter Kit sample app, you can start to create your own game.
Add Assets to the Game
You’re developing a soccer game, so the first asset you need is a soccer ball, which you add to
Gamelevel.fbx. First, remove the four shapes from this file by selecting each and pressing Delete. In the Solution Explorer, delete
CubeUVImage.png as well because you won’t need it; it’s the texture used to cover the cube, which you just deleted.
The next step is to add a sphere to the model. Open the toolbox (if you don’t see it, click View > Toolbox) and double-click the sphere to add it to the model. If the ball seems too small, you can zoom in by clicking the second button in the editor’s top toolbar, pressing Z to zoom with the mouse (dragging to the center increases the zoom), or tapping the up and down arrows. You can also press Ctrl and use the mouse wheel to zoom. You should have something like Figure 2.
Figure 2. Model editor with a sphere shape
This sphere has just a white color with some lighting on it. It needs a soccer ball texture. My first attempt was to use a hexagonal grid, like the one shown in Figure 3.
Figure 3.Hexagonal grid for the ball texture: first attempt
To apply the texture to the sphere, select it and, in the Properties window, assign the
.png file to the
Texture1 property. Although that seemed like a good idea, the result was less than good, as you can see in Figure 4.
Figure 4.Sphere with texture applied
The hexagons are distorted because of the projections of the texture points in the sphere. You need a distorted texture, like the one in Figure 5.
Figure 5. Soccer ball texture adapted to the sphere
When you apply this texture, the sphere starts to look more like a soccer ball. You just need to adjust some properties to make it more real. To do that, select the ball and edit the Phong effect in the Properties window. The Phong lighting model includes directional and ambient lighting and simulates the reflective properties of the object. This is a shader included with Visual Studio that you can drag from the toolbox. If you want to know more about shaders and how to design them using the Visual Studio shader designer, click the link in “For More Information.” Set the
Red,
Green, and
Blue properties under MaterialSpecular to 0.2 and MaterialSpecularPower to 16. Now you have a better-looking soccer ball (Figure 6).
Figure 6. Finished soccer ball
If you don’t want to design your own models in Visual Studio, you can grab a premade model from the Web. Visual Studio accepts any model in the FBX, DAE, and OBJ formats: you just need to add them to your assets in the solution. As an example, you can use an
.obj file like the one in Figure 7 (a free model downloaded from).
Figure 7. Three-dimensional .obj ball model
Animate the Model
With the model in place, it’s time to animate it. Before that, though, I want to remove the teapot since it won’t be needed. In the Assets folder, delete
teapot.fbx. Next, delete its loading and animation. In
Game.cpp, the loading of the models is done asynchronously in
CreateDeviceDependentResources:
// Load the scene objects. auto loadMeshTask = Mesh::LoadFromFileAsync( m_graphics, L"gamelevel.cmo", L"", L"", m_meshModels) .then([this]() { // Load the teapot from a separate file and add it to the vector of meshes. return Mesh::LoadFromFileAsync(
You must change the model and remove the continuation of the task, so only the ball is loaded:
void Game::CreateDeviceDependentResources() { m_graphics.Initialize(m_deviceResources->GetD3DDevice(), m_deviceResources->GetD3DDeviceContext(), m_deviceResources->GetDeviceFeatureLevel()); // Set DirectX to not cull any triangles so the entire mesh will always be shown. CD3D11_RASTERIZER_DESC d3dRas(D3D11_DEFAULT); d3dRas.CullMode = D3D11_CULL_NONE; d3dRas.MultisampleEnable = true; d3dRas.AntialiasedLineEnable = true; ComPtr<ID3D11RasterizerState> p3d3RasState; m_deviceResources->GetD3DDevice()->CreateRasterizerState(&d3dRas, &p3d3RasState); m_deviceResources->GetD3DDeviceContext()->RSSetState(p3d3RasState.Get()); // Load the scene objects. auto loadMeshTask = Mesh::LoadFromFileAsync( m_graphics, L"gamelevel.cmo", L"", L"", m_meshModels); (loadMeshTask).then([this]() { // Scene is ready to be rendered. m_loadingComplete = true; }); }
Its counterpart,
ReleaseDeviceDependentResources, needs only to clear the meshes:
void Game::ReleaseDeviceDependentResources() { for (Mesh* m : m_meshModels) { delete m; } m_meshModels.clear(); m_loadingComplete = false; }
The next step is to change the
Update method so that only the ball is rotated:
void Game::Update(DX::StepTimer const& timer) { // Rotate scene. m_rotation = static_cast<float>(timer.GetTotalSeconds()) * 0.5f; }
You manipulate the speed of the rotation through the multiplier (0.5f). If you want the ball to rotate faster, just make this multiplier larger. This means that the ball will rotate at an angle of 0.5/(2 * pi) radians every second. The
Render method renders the ball at the desired rotation:
void Game::Render() { // Loading is asynchronous. Only draw geometry after it's loaded. if (!m_loadingComplete) { return; } auto context = m_deviceResources->GetD3DDeviceContext(); // Set render targets to the screen. auto rtv = m_deviceResources->GetBackBufferRenderTargetView(); auto dsv = m_deviceResources->GetDepthStencilView(); ID3D11RenderTargetView *const targets[1] = { rtv }; context->OMSetRenderTargets(1, targets, dsv); // Draw our scene models. XMMATRIX rotation = XMMatrixRotationY(m_rotation); for (UINT i = 0; i < m_meshModels.size(); i++) { XMMATRIX modelTransform = rotation; String^ meshName = ref new String(m_meshModels[i]->Name()); m_graphics.UpdateMiscConstants(m_miscConstants); m_meshModels[i]->Render(m_graphics, modelTransform); } }
ToggleHitEffect won’t do anything here; the ball won’t change the glow if it’s touched:
void Game::ToggleHitEffect(String^ object) { }
Although you don’t want the ball to change the glow, you still want to report that it’s been touched. To do that, use this modified
OnHitObject:
String^ Game::OnHitObject(int x, int y) { String^ result = nullptr; XMFLOAT3 point; XMFLOAT3 dir; m_graphics.GetCamera().GetWorldLine(x, y, &point, &dir); XMFLOAT4X4 world; XMMATRIX worldMat = XMMatrixRotationY(m_rotation); XMStoreFloat4x4(&world, worldMat); float closestT = FLT_MAX; for (Mesh* m : m_meshModels) { XMFLOAT4X4 meshTransform = world; auto name = ref new String(m->Name()); float t = 0; bool hit = HitTestingHelpers::LineHitTest(*m, &point, &dir, &meshTransform, &t); if (hit && t < closestT) { result = name; } } return result; }
Now you can run the project and see that the ball is spinning on its y-axis. Now, let’s make the ball move.
Move the Ball
To move the ball, you need to translate it, for example, make it move up and down. The first thing to do is declare the variable for the current position in
Game.h:
class Game { public: // snip private: // snip float m_translation;
Then, in the
Update method, calculate the current position:
void Game::Update(DX::StepTimer const& timer) { // Rotate scene. m_rotation = static_cast<float>(timer.GetTotalSeconds()) * 0.5f; const float maxHeight = 7.0f; auto totalTime = (float) fmod(timer.GetTotalSeconds(), 2.0f); m_translation = totalTime > 1.0f ? maxHeight - (maxHeight * (totalTime - 1.0f)) : maxHeight *totalTime; }
This way, the ball will go up and down every 2 seconds. In the first second, it will move up; in the next, down. The
Render method calculates the resulting matrix and renders the ball at the new position:
void Game::Render() { // snip // Draw our scene models. XMMATRIX rotation = XMMatrixRotationY(m_rotation); rotation *= XMMatrixTranslation(0, m_translation, 0);
If you run the project now, you will see that the ball is moving up and down at a constant speed. Now, you need to add some physics to the ball.
Add Physics to the Ball
To add some physics to the ball, you must simulate a force on it, representing gravity. From your physics lessons (you do remember them, don’t you?), you know that an accelerated movement follows these equations:
s = s0 + v0t + 1/2at2
v = v0 + at
where s is the position at the instant t, s0 is the initial position, v0 is the initial velocity, and a is the acceleration. For the vertical movement, a is the acceleration caused by gravity (−10 m/s2) and s0 is 0 (the ball starts on the floor). So, the equations become:
s = v0t -5t2
v = v0 -10t
You want to reach the maximum height in 1 second. At the maximum height, the velocity is 0. So, the second equation allows you to find the initial velocity:
0 = v0 – 10 * 1 => v0 = 10 m/s
And that gives you the translation for the ball:
s = 10t – 5t2
You modify the
Update method to use this equation:
void Game::Update(DX::StepTimer const& timer) { // Rotate scene. m_rotation = static_cast<float>(timer.GetTotalSeconds()) * 0.5f; auto totalTime = (float) fmod(timer.GetTotalSeconds(), 2.0f); m_translation = 10*totalTime - 5 *totalTime*totalTime; }
Now that the ball moves up and down realistically it’s time to add the soccer field.
Add the Soccer Field
To add the soccer field, you must create a new scene. In the Assets folder, right-click to add a new three-dimensional (3D) scene and name it
field.fbx. From the toolbox, add a plane and select it, changing its scale
X to 107 and
Z to 60. Set its
Texture1 property to a soccer field image. You can use the zoom tool (or press Z) to zoom out.
Then, you must load the mesh in
CreateDeviceDependentResources in
Game.cpp:
void Game::CreateDeviceDependentResources() { // snip // Load the scene objects. ); }); (loadMeshTask).then([this]() { // Scene is ready to be rendered. m_loadingComplete = true; }); }
If you run the program, you’ll see that the field bounces with the ball. To stop the field from moving, change the
Render method:
// Renders one frame using the Starter Kit helpers. void Game::Render() { // sn m_meshModels[i]->Render(m_graphics, XMMatrixIdentity()); } }
With this change, the transform is applied only to the ball. The field is rendered with no transform. If you run the code now, you will see that the ball bounces in the field, but it “enters” it at the bottom. Fix this bug by translating the field by −0.5 in the y-axis. Select the field and change its translation property on the y-axis to −0.5. Now, when you run the app, you can see the ball bouncing in the field, like Figure 8.
Figure 8.Ball bouncing in the field
Set the Camera and Ball Position
The ball is positioned at the center of the field, but you don’t want it there. For this game, the ball must be positioned at the penalty mark. If you look at the scene editor in Figure 9, you can see that to do that, you must translate the ball in the x-axis by changing the translation of the ball in the
Render method in
Game.cpp:
rotation *= XMMatrixTranslation(63.0, m_translation, 0);
The ball is translated by 63 units in the x-axis, which positions it at the penalty mark.
Figure 9.Field with Axis – X (red) and Z (blue)
With this change, you won’t see the ball anymore because the camera is positioned in another direction—at the middle of the field, looking at the center. You need to reposition the camera so that it points toward the goal line, which you do in
CreateWindowSizeDependentResources in
Game.cpp:
m_graphics.GetCamera().SetViewport((UINT) outputSize.Width, (UINT) outputSize.Height); m_graphics.GetCamera().SetPosition(XMFLOAT3(25.0f, 10.0f, 0.0f)); m_graphics.GetCamera().SetLookAt(XMFLOAT3(100.0f, 0.0f, 0.0f)); float aspectRatio = outputSize.Width / outputSize.Height; float fovAngleY = 30.0f * XM_PI / 180.0f; if (aspectRatio < 1.0f) { // Portrait or snap view m_graphics.GetCamera().SetUpVector(XMFLOAT3(1.0f, 0.0f, 0.0f)); fovAngleY = 120.0f * XM_PI / 180.0f; } else { // Landscape view. m_graphics.GetCamera().SetUpVector(XMFLOAT3(0.0f, 1.0f, 0.0f)); } m_graphics.GetCamera().SetProjection(fovAngleY, aspectRatio, 1.0f, 100.0f);
The position of the camera is between the center mark and the penalty mark, looking at the goal line. The new view is similar to Figure 10.
Figure 10. Ball repositioned with new camera position
Now, you need to add the goal.
Add the Goal Post
To add the goal to the field, you need a new 3D scene with the goal. You can design your own, or you can get a model ready to use. With the model in place, you must add it to the Assets folder so that it can be compiled and used.
The model must be loaded in the
CreateDeviceDependentResources method in
Game.cpp: ); });
Once loaded, position and draw it in the
Render method in
Game.cpp:
auto goalTransform = XMMatrixScaling(2.0f, 2.0f, 2.0f) * XMMatrixRotationY(-XM_PIDIV2)* XMMatrixTranslation(85.5f, -0.5, m_meshModels[i]->Render(m_graphics, goalTransform); }
This change applies a transform to the goal and renders it. The transform is a combination of three transforms: a scale (multiplying the original size by 2), a rotation of 90 degrees, and a translation of 85.5 units in the x-axis and −0.5 units in the y-axis because of the displacement that you gave to the field. That way, the goal is positioned facing the field, at the goal line, as in Figure 11. Note that the order of the transforms is important: if you apply the rotation after the translation, the goal will be rendered in a completely different position, and you won’t see anything.
Figure 11. Field with goal positioned
Shooting the Ball
All the elements are positioned, but the ball is still jumping. It’s time to kick it. To do that, you must sharpen your physics skills again. The kick of the ball looks something like Figure 12.
Figure 12. Schematics of a ball kick
The ball is kicked with an initial velocity of v0, with an α angle (if you don’t remember your physics classes, just play a bit of Angry Birds* to see this in action). The movement of the ball can be decomposed in two different movements: the horizontal movement is a movement with constant velocity (I admit that there is neither air friction nor wind effects), and the vertical movement is like the one used before. The horizontal movement equation is:
sX = s0 + v0*cos(α)*t
. . . and the vertical movement is:
sY = s0 + v0*sin(α)*t – ½*g*t2
Now you have two translations: one in the x-axis and other in the y-axis. Considering that the kick is at 45 degrees, cos(α) = sin(α) = sqrt(2)/2, so v0*cos(α) = v0*sin(α)*t. You want the kick to enter the goal, so the distance must be greater than 86 (the goal line is at 85.5). You want the ball flight to take 2 seconds, so when you substitute these values in the first equation, you get:
86 = 63 + v0 * cos(α) * 2 ≥ v0 * cos(α) = 23/2 = 11.5
Replacing the values in the equations, the translation equation for the y-axis is:
sY = 0 + 11.5 * t – 5 * t2
. . . and for the x-axis:
sX = 63 + 11.5 * t
With the y-axis equation, you know the time the ball will hit the ground again, using the solution for the second degree equation (yes, I know you thought you would never use it, but here it is):
(−b ± sqrt(b2 − 4*a*c))/2*a ≥ (−11.5 ± sqrt(11.52 – 4 * −5 * 0)/2 * −5 ≥ 0 or 23/10 ≥ 2.3s
With these equations, you can replace the translation for the ball. First, in
Game.h, create variables to store the translation in the three axes:
float m_translationX, m_translationY, m_translationZ;
Then, in the
Update method in
Game.cpp, add the equations:; }
The
Render method uses these new translations:
rotation *= XMMatrixTranslation(m_translationX, m_translationY, 0);
If you run the program now, you will see the goal with the ball entering the center of it. If you want the ball to go in other directions, you must add a horizontal angle for the kick. You do this with a translation in the z-axis.
Figure 13 shows the distance between the penalty mark and the goal is 22.5, and the distance between goal posts is 14. That makes α = atan(7/22.5), or 17 degrees. You could calculate the translation in the z-axis, but to make it simpler: the ball must travel to the goal line at the same time it reaches the goal post. That means it must travel 7/22.5 units in the z-axis while the ball travels 1 unit in the x-axis. So, the equation for the z-axis is:
sz = 11.5 * t/3.2 ≥ sz = 3.6 * t
Figure 13. Schematics of the distance to the goal
This is the translation to reach the goal post. Any translation with a lower velocity will have a smaller angle. So to reach the goal, the velocity must be between −3.6 (left post) and 3.6 (right post). If you consider that the ball must enter the goal entirely, the maximum distance is 6/22.5, and the velocity range is between 3 and −3. With these numbers, you can set an angle for the kick with this code in the
Update method:; m_translationZ = 3 * totalTime; }
The translation in the z-axis will be used in the
Render method:
rotation *= XMMatrixTranslation(m_translationX, m_translationY, m_translationZ); ….
You should have something like Figure 14.
Figure 14. Kick at an angle
Add a Goalkeeper
With the ball movement and goal in place, you now need to add a goalkeeper to catch the ball. The goalkeeper will be a distorted cube. In the Assets folder, add a new item—a new 3D scene—and call it
goalkeeper.fbx.
Add a cube from the toolbox and select it. Set its scale to 0.3 in the x-axis, 1.9 in the y-axis, and 1 in the z-axis. Change its
MaterialAmbient property to 1 for the
Red and 0 for the
Blue and
Green properties to make it Red. Change the
Red property in
MaterialSpecular to 1 and
MaterialSpecularPower to 0.2.
Load the new resource in the
CreateDeviceDependentResources method: ); }).then([this]() { return Mesh::LoadFromFileAsync( m_graphics, L"goalkeeper.cmo", L"", L"", m_meshModels, false // Do not clear the vector of meshes ); });
The next step is to position and render the goalkeeper in the center of the goal. You do this in the
Render method of
Game.cpp:
void Game::Render() { // snip auto goalTransform = XMMatrixScaling(2.0f, 2.0f, 2.0f) * XMMatrixRotationY(-XM_PIDIV2)* XMMatrixTranslation(85.5f, -0.5f, 0); auto goalkeeperTransform = XMMatrixTranslation(85.65f, 1.4f, if (String::CompareOrdinal(meshName, L"Cube_Node") == 0) m_meshModels[i]->Render(m_graphics, goalkeeperTransform); else m_meshModels[i]->Render(m_graphics, goalTransform); } }
With this code, the goalkeeper is positioned at the center of the goal, as shown in Figure 15 (note that the camera position is different for the screenshot).
Figure 15. The goalkeeper at the center of the goal
Now, you need to make the keeper move to the right and the left to catch the ball. The user will use the left and right arrow keys to change the keeper’s movement.
The movement of the goalkeeper is limited by the goal posts, positioned at +7 and −7 units in the z-axis. The goalkeeper has 1 unit in both directions, so it can be moved 6 units on either side.
The key press is intercepted in the XAML page (
DirectXPage.xaml) and will be redirected to the
Game class. You add a
KeyDown event handler in
DirectXPage.xaml:
<Page x:
The event handler in
DirectXPage.xaml.cpp is:
void DirectXPage::OnKeyDown(Platform::Object^ sender, Windows::UI::Xaml::Input::KeyRoutedEventArgs^ e) { m_main->OnKeyDown(e->Key); }
m_main is the instance of the
StarterKitMain class, which renders the game and the FPS scenes. You must declare a public method in
StarterKitMain.h:
class StarterKitMain : public DX::IDeviceNotify { public: StarterKitMain(const std::shared_ptr<DX::DeviceResources>& deviceResources); ~StarterKitMain(); // Public methods passed straight to the Game renderer. Platform::String^ OnHitObject(int x, int y) { return m_sceneRenderer->OnHitObject(x, y); } void OnKeyDown(Windows::System::VirtualKey key) { m_sceneRenderer->OnKeyDown(key); } ….
This method redirects the key to the
OnKeyDown method in the
Game class. Now, you must declare the
OnKeyDown method in
Game.h:
class Game { public: Game(const std::shared_ptr<DX::DeviceResources>& deviceResources); void CreateDeviceDependentResources(); void CreateWindowSizeDependentResources(); void ReleaseDeviceDependentResources(); void Update(DX::StepTimer const& timer); void Render(); void OnKeyDown(Windows::System::VirtualKey key); ….
This method processes the key pressed and moves the goalkeeper with the arrows. Before creating the method, you must declare a private field in
Game.h that stores the goalkeeper’s position:
class Game { // snip private: // snip float m_goalkeeperPosition;
The goalkeeper position is initially 0 and will be incremented or decremented when the user presses an arrow key. If the position is larger than 6 or smaller than −6, the goalkeeper’s position won’t change. You do this in the
OnKeyDown method in
Game.cpp:
void Game::OnKeyDown(Windows::System::VirtualKey key) { const float MaxGoalkeeperPosition = 6.0; const float MinGoalkeeperPosition = -6.0; if (key == Windows::System::VirtualKey::Right) m_goalkeeperPosition = m_goalkeeperPosition >= MaxGoalkeeperPosition ? m_goalkeeperPosition : m_goalkeeperPosition + 0.1f; else if (key == Windows::System::VirtualKey::Left) m_goalkeeperPosition = m_goalkeeperPosition <= MinGoalkeeperPosition ? m_goalkeeperPosition : m_goalkeeperPosition - 0.1f; }
The new goalkeeper position is used in the
Render method of
Game.cpp, where the goalkeeper transform is calculated:
auto goalkeeperTransform = XMMatrixTranslation(85.65f, 1.40f, m_goalkeeperPosition);
With these changes, you can run the game and see that the goalkeeper moves to the right and the left when you press the arrow keys (Figure 16).
Figure 16. Game with the goalkeeper in position
Until now, the ball keeps moving all the time, but that’s not what you want. The ball should move just after it’s kicked and stop when it reaches the goal. Similarly, the goalkeeper shouldn’t move before the ball is kicked.
You must declare a private field,
m_isAnimating in
Game.h, so the game knows when the ball is moving:
class Game { public: // snip private: // snip bool m_isAnimating;
This variable is used in the
Update and
Render methods in
Game.cpp so the ball moves only when
m_isAnimating is true:
void Game::Update(DX::StepTimer const& timer) { if (m_isAnimating) { m_rotation = static_cast<float>(timer.GetTotalSeconds()) * 0.5f; auto totalTime = (float) fmod(timer.GetTotalSeconds(), 2.3f); m_translationX = 63.0f + 11.5f * totalTime; m_translationY = 11.5f * totalTime - 5.0f * totalTime*totalTime; m_translationZ = 3.0f * totalTime; } } void Game::Render() { // snip XMMATRIX modelTransform; if (m_isAnimating) { modelTransform = XMMatrixRotationY(m_rotation); modelTransform *= XMMatrixTranslation(m_translationX, m_translationY, m_translationZ); } else modelTransform = XMMatrixTranslation(63.0f, 0.0f, 0.0f); ….
The variable
modelTransform is moved from the loop to the top. The arrow keys should only be processed in the
OnKeyDown method when
m_isAnimating is true:; } }
The next step is to kick the ball. This happens when the user presses the space bar. Declare a new private field,
m_isKick, in
Game.h:
class Game { public: // snip private: // snip bool m_isKick;
Set this field to true in the
OnKeyDown method in
Game.cpp:; } else if (key == Windows::System::VirtualKey::Space) m_isKick = true; }
When
m_isKick is true, the animation starts in the
Update method:
void Game::Update(DX::StepTimer const& timer) { if (m_isKick) { m_startTime = static_cast<float>(timer.GetTotalSeconds()); m_isAnimating = true; m_isKick = false; } if (m_isAnimating) { auto totalTime = static_cast<float>(timer.GetTotalSeconds()) - m_startTime; m_rotation = totalTime * 0.5f; m_translationX = 63.0f + 11.5f * totalTime; m_translationY = 11.5f * totalTime - 5.0f * totalTime*totalTime; m_translationZ = 3.0f * totalTime; if (totalTime > 2.3f) ResetGame(); } }
The initial time for the kick is stored in the variable
m_startTime (declared as a private field in
Game.h), which is used to compute the time for the kick. If it’s above 2.3 seconds, the game is reset (the ball should have reached the goal). You declare the
ResetGame method as private in
Game.h:
void Game::ResetGame() { m_isAnimating = false; m_goalkeeperPosition = 0; }
This method sets
m_isAnimating to false and resets the goalkeeper’s position. The ball doesn’t need to be repositioned because it will be drawn in the penalty mark if
m_isAnimating is false. Another change you need to make is the kick angle. This code fixes the kick near the right post:
m_translationZ = 3.0f * totalTime;
You must change it so that the kick is somewhat random and the user doesn’t know where it will be. You must declare a private field
m_ballAngle in
Game.h and initialize it when the ball is kicked in the
Update method:
void Game::Update(DX::StepTimer const& timer) { if (m_isKick) { m_startTime = static_cast<float>(timer.GetTotalSeconds()); m_isAnimating = true; m_isKick = false; m_ballAngle = (static_cast <float> (rand()) / static_cast <float> (RAND_MAX) -0.5f) * 6.0f; } …
Rand()/RAND_MAX results in a number between 0 and 1. Subtract 0.5 from the result so the number is between −0.5 and 0.5 and multiply per 6, so the final angle is between −3 and 3. To use different sequences every game, you must initialize the generator by calling
srand in the
CreateDeviceDependentResources method:
void Game::CreateDeviceDependentResources() { srand(static_cast <unsigned int> (time(0))); …
To call the time function, you must include
ctime. You use
m_ballAngle in the
Update method to use the new angle for the ball:
m_translationZ = m_ballAngle * totalTime;
Most of the code is now in place, but you must know whether the goalkeeper caught the ball or the user scored a goal. Use a simple method to find out: when the ball reaches the goal line, you check whether the ball rectangle intersects the goalkeeper rectangle. If you want, you can use more complex methods to determine a goal, but for our needs, this is enough. All the calculations are made in the
Update method:
void Game::Update(DX::StepTimer const& timer) { if (m_isKick) { m_startTime = static_cast<float>(timer.GetTotalSeconds()); m_isAnimating = true; m_isKick = false; m_isGoal = m_isCaught = false; m_ballAngle = (static_cast <float> (rand()) / static_cast <float> (RAND_MAX) -0.5f) * 6.0f; } if (m_isAnimating) { auto totalTime = static_cast<float>(timer.GetTotalSeconds()) - m_startTime; m_rotation = totalTime * 0.5f; if (!m_isCaught) { // ball traveling m_translationX = 63.0f + 11.5f * totalTime; m_translationY = 11.5f * totalTime - 5.0f * totalTime*totalTime; m_translationZ = m_ballAngle * totalTime; } else { // if ball is caught, position it in the center of the goalkeeper m_translationX = 83.35f; m_translationY = 1.8f; m_translationZ = m_goalkeeperPosition; } if (!m_isGoal && !m_isCaught && m_translationX >= 85.5f) { // ball passed the goal line - goal or caught auto ballMin = m_translationZ - 0.5f + 7.0f; auto ballMax = m_translationZ + 0.5f + 7.0f; auto goalkeeperMin = m_goalkeeperPosition - 1.0f + 7.0f; auto goalkeeperMax = m_goalkeeperPosition + 1.0f + 7.0f; m_isGoal = (goalkeeperMax < ballMin || goalkeeperMin > ballMax); m_isCaught = !m_isGoal; } if (totalTime > 2.3f) ResetGame(); } }
Declare two private fields in
Game.h:
m_isGoal and
m_IsCaught. These fields tell you whether the user scored a goal or the goalkeeper caught the ball. If both are false, the ball is still travelling. When the ball reaches the goalkeeper, the program calculates the ball and the goalkeeper’s bounds and determines whether the bounds of the ball overlap with the bounds of the goalkeeper. If you look at the code, you will see that I added 7.0 f to every bound. I did this because the bounds can be positive or negative, and that would complicate the overlapping calculation. By adding 7.0 f, you ensure that all numbers are positive, which simplifies the calculation. If the ball is caught, the ball position is set to the center of the goalkeeper.
m_isGoal and
m_IsCaught are reset when there is a kick. Now, it’s time to add the scoreboard to the game.
Add Scorekeeping
In a DirectX game, you can render the score with Direct2D, but when you’re developing a Windows 8 game, you have another way to do it: using XAML. You can overlap XAML elements in your game and create a bridge between the XAML elements and your game logic. This is an easier way to show information and interact with the user, as you won’t have to deal with element positions, renders, and update loops.
The Starter Kit comes with an XAML scoreboard (the one used to record hits on the elements). You simply need to modify it to keep the goal score. The first step is to change
DirectXPage.xaml to change the scoreboard:
<SwapChainPanel x: <Border VerticalAlignment="Top" HorizontalAlignment="Center" Padding="10" Background="Black" Opacity="0.7"> <StackPanel Orientation="Horizontal" > <TextBlock x: <TextBlock Text="x" Style="{StaticResource HudCounter}"/> <TextBlock x: </StackPanel> </Border> </SwapChainPanel>
While you’re here, you can remove the bottom app bar, as it won’t be used in this game. You have removed all hit counters in the score, so you just need to remove the code that mentions them in the
OnTapped hander in
DirectXPage.xaml.cpp:
void DirectXPage::OnTapped(Object^ sender, TappedRoutedEventArgs^ e) { }
You can also remove
OnPreviousColorPressed,
OnNextColorPressed, and
ChangeObjectColor from the
cpp and
h pages because these were used in the app bar buttons that you removed.
To update the score for the game, there must be some way to communicate between the
Game class and the XAML page. The game score is updated in the
Game class, while the score is shown in the XAML page. One way to do that is to create an event in the
Game class, but this approach has a problem. If you add an event to the
Game class, you get a compilation error: “a WinRT event declaration must occur in a WinRT class.” This is because
Game is not a
WinRT (
ref) class. To be a
WinRT class, you must define the event as a public
ref class and seal it:
public ref class Game sealed
You could change the class to do that, but let’s go in a different direction: create a new class—in this case, a
WinRT class—and use it to communicate between the
Game class and the XAML page. Create a new class and name it
ViewModel:
#pragma once ref class ViewModel sealed { public: ViewModel(); };
In
ViewModel.h, add the event and the properties needed to update the score:
#pragma once namespace StarterKit { ref class ViewModel sealed { private: int m_scoreUser; int m_scoreMachine; public: ViewModel(); event Windows::Foundation::TypedEventHandler<Object^, Platform::String^>^ PropertyChanged; property int ScoreUser { int get() { return m_scoreUser; } void set(int value) { if (m_scoreUser != value) { m_scoreUser = value; PropertyChanged(this, L"ScoreUser"); } } }; property int ScoreMachine { int get() { return m_scoreMachine; } void set(int value) { if (m_scoreMachine != value) { m_scoreMachine = value; PropertyChanged(this, L"ScoreMachine"); } } }; }; }
Declare a private field of type
ViewModel in
Game.h (you must include
ViewModel.h in
Game.h). You should also declare a public getter for this field:
class Game { public: // snip StarterKit::ViewModel^ GetViewModel(); private: StarterKit::ViewModel^ m_viewModel;
This field is initialized in the constructor of
Game.cpp:
Game::Game(const std::shared_ptr<DX::DeviceResources>& deviceResources) : m_loadingComplete(false), m_deviceResources(deviceResources) { CreateDeviceDependentResources(); CreateWindowSizeDependentResources(); m_viewModel = ref new ViewModel(); }
The getter body is:
StarterKit::ViewModel^ Game::GetViewModel() { return m_viewModel; }
When the current kick ends, the variables are updated in
ResetGame in
Game.cpp:
void Game::ResetGame() { if (m_isCaught) m_viewModel->ScoreUser++; if (m_isGoal) m_viewModel->ScoreMachine++; m_isAnimating = false; m_goalkeeperPosition = 0; }
When one of these two properties changes, the
PropertyChanged event is raised, which can be handled in the XAML page. There is still one indirection here: the XAML page doesn’t have access to
Game (a non-
ref class) directly but instead calls the
StarterKitMain class. You must create a getter for the
ViewModel in
StarterKitMain.h:
class StarterKitMain : public DX::IDeviceNotify { public: // snip StarterKit::ViewModel^ GetViewModel() { return m_sceneRenderer->GetViewModel(); }
With this infrastructure in place, you can handle the
ViewModel’s
PropertyChanged event in the constructor of
DirectXPage.xaml.cpp:
DirectXPage::DirectXPage(): m_windowVisible(true), m_hitCountCube(0), m_hitCountCylinder(0), m_hitCountCone(0), m_hitCountSphere(0), m_hitCountTeapot(0), m_colorIndex(0) { // snip m_main = std::unique_ptr<StarterKitMain>(new StarterKitMain(m_deviceResources)); m_main->GetViewModel()->PropertyChanged += ref new TypedEventHandler<Object^, String^>(this, &DirectXPage::OnPropertyChanged); m_main->StartRenderLoop(); }
The handler updates the score (you must also declare it in
DirectXPage.xaml.cpp.h):
void StarterKit::DirectXPage::OnPropertyChanged(Platform::Object ^sender, Platform::String ^propertyName) { if (propertyName == "ScoreUser") { auto scoreUser = m_main->GetViewModel()->ScoreUser; Dispatcher->RunAsync(CoreDispatcherPriority::Normal, ref new DispatchedHandler([this, scoreUser]() { ScoreUser->Text = scoreUser.ToString(); })); } if (propertyName == "ScoreMachine") { auto scoreMachine= m_main->GetViewModel()->ScoreMachine; Dispatcher->RunAsync(CoreDispatcherPriority::Normal, ref new DispatchedHandler([this, scoreMachine]() { ScoreMachine->Text = scoreMachine.ToString(); })); } }
Now the score gets updated every time the user scores a goal or the goalkeeper catches the ball (Figure 17).
Figure 17. Game with score updating
Using Touch and Sensors in the Game
The game is working well, but you can still add flair to it. New Ultrabook™ devices have touch input and sensors that you can use to enhance the game. Instead of using the keyboard to kick the ball and move the goalkeeper, a user can kick the ball by tapping the screen and move the goalkeeper by tilting the screen to the right or the left.
To kick the ball with a tap on the screen, use the
OnTapped event in
DirectXPage.cpp:
void DirectXPage::OnTapped(Object^ sender, TappedRoutedEventArgs^ e) { m_main->OnKeyDown(VirtualKey::Space); }
The code calls the
OnKeyDown method, passing the space key as a parameter—the same code as if the user had pressed the space bar. If you want, you can enhance this code to get the position of the tap and only kick the ball if the tap is on the ball. I leave that as homework for you. As a starting point, the Starter Kit has code to detect whether the user has tapped an object in the scene.
The next step is to move the goalkeeper when the user tilts the screen. For that, you use the inclinometer, which detects all the movement for the screen. This sensor returns three readings: pitch, roll, and yaw, corresponding to the rotations around the x-, y-, and z-axes, respectively. For this game, you only need the roll reading.
To use the sensor, you must obtain the instance for it, which you do using the
GetDefault method. Then, you set its reporting interval, like this code in
void Game::CreateDeviceDependentResources in
Game.cpp:
void Game::CreateDeviceDependentResources() { m_inclinometer = Windows::Devices::Sensors::Inclinometer::GetDefault(); if (m_inclinometer != nullptr) { // Establish the report interval for all scenarios uint32 minReportInterval = m_inclinometer->MinimumReportInterval; uint32 reportInterval = minReportInterval > 16 ? minReportInterval : 16; m_inclinometer->ReportInterval = reportInterval; } ...
m_inclinometer is a private field declared in
Game.h. In the
Update method, reposition the goalkeeper:
void Game::Update(DX::StepTimer const& timer) { // snip SetGoalkeeperPosition(); if (totalTime > 2.3f) ResetGame(); } }
SetGoalkeeperPosition repositions the goalkeeper, depending on the inclinometer reading:
void StarterKit::Game::SetGoalkeeperPosition() { if (m_isAnimating && m_inclinometer != nullptr) { Windows::Devices::Sensors::InclinometerReading^ reading = m_inclinometer->GetCurrentReading(); auto goalkeeperVelocity = reading->RollDegrees / 100.0f; if (goalkeeperVelocity > 0.3f) goalkeeperVelocity = 0.3f; if (goalkeeperVelocity < -0.3f) goalkeeperVelocity = -0.3f; m_goalkeeperPosition = fabs(m_goalkeeperPosition) >= 6.0f ? m_goalkeeperPosition : m_goalkeeperPosition + goalkeeperVelocity; } }
With this change, you can move the goalkeeper by tilting the screen. You now have a finished game.
Performance Measuring
With the game running well on your development system, you should now try it on a less powerful mobile device. It’s one thing to develop on a powerhouse workstation, with a top-of-the-line graphics processor and 60 FPS. It’s completely different to run your game on a device with an Intel® Atom™ processor and a built-in graphics card.
Your game should perform well on both machines. To measure performance, you can use the tools included in Visual Studio or the Intel® Graphics Performance Analyzers (Intel® GPA), a suite of graphics analyzers that can detect bottlenecks and improve the performance of your game. Intel GPA provides a graphical analysis of how your game is performing and can help you make it run faster and smoother.
Conclusion
Finally, you’ve reached the end of your journey. You started with a dancing teapot and ended with a DirectX game, with keyboard and sensor input. With the languages becoming increasingly similar, C++/CX was not too difficult to use for a C# developer.
The major difficulty is mastering the 3D models, making them move, and positioning them in a familiar way. For that, you had to use some physics, geometry, trigonometry, and mathematics.
The bottom line is that developing a game isn’t an impossible task. With some patience and the right tools, you can create great games that have excellent performance.
Special Thanks
I’d like to thank Roberto Sonnino for his writing tips and the technical review of this article.
Image Credits
- Ball texture:
- Soccer field:
- Net:
- Goal:
For More Information
- “Herb Sutter: (Not Your Father’s) C++” at-
- Visual Studio Windows 8 3D Starter Kit at
- “Working with Shaders” at
- Intel GPA at
About the Author
Bruno Sonnino is a Microsoft Most Valuable Professional (MVP) located in Brazil. He is a developer, consultant, and author of five Delphi books published in Portuguese by Pearson Education Brazil and numerous articles for Brazilian and American magazines and websites., Intel Atom, and Ultrabook are trademarks of Intel Corporation in the U.S. and/or other countries.
*Other names and brands may be claimed as the property of others.
1 commentTop
steven m. said on Aug 20,2014
Excellent overview. I look forward to a deeper dive, as time allows. Thank You.
Add a CommentSign in
Have a technical question? Visit our forums. Have site or software product issues? Contact support. | https://software.intel.com/en-us/articles/developing-3d-games-for-windows-8-with-c-and-microsoft-directx?language=ru | CC-MAIN-2018-13 | refinedweb | 6,820 | 57.06 |
I’ve been working on getting a better understanding of the Discrete Fourier Transform. I’ve figured out some things which have really helped my intuition, and made it a lot simpler in my head, so I wanted to write these up for the benefit of other folks, as well as for my future self when I need a refresher.
The discrete Fourier transform takes in data and gives out the frequencies that the data contains. This is useful if you want to analyze data, but can also be useful if you want to modify the frequencies then use the inverse discrete Fourier transform to generate the frequency modified data.
Multiplying By Sinusoids (Sine / Cosine)
If you had a stream of data that had a single frequency in it at a constant amplitude, and you wanted to know the amplitude of that frequency, how could you figure that out?
One way would be to make a cosine wave of that frequency and multiply each sample of your wave by the corresponding samples.
For instance, let’s say that we have a data stream of four values that represent a 1hz cosine wave: 1, 0, -1, 0.
We could then multiply each point by a corresponding point on a cosine wave, add sum them together:
We get.. 1*1 + 0*0 + -1*-1 + 0*0 = 2.
The result we get is 2, which is twice the amplitude of the data points we had. We can divide by two to get the real amplitude.
To show that this works for any amplitude, here’s the same 1hz cosine wave data with an amplitude of five: 5, 0, -5, 0.
Multiplying by the 1hz cosine wave, we get… 5*1 + 0*0 + -5*-1 + 0*0 = 10.
The actual amplitude is 5, so we can see that our result was still twice the amplitude.
In general, you will need to divide by N / 2 where N is the number of samples you have.
What happens if our stream of data has something other than a cosine wave in it though?
Let’s try a sine wave: 0, 1, 0, -1
When we multiply those values by our cosine wave values we get: 0*1 + 1*0 + 0*-1 + -1*0 = 0.
We got zero. Our method broke!
In this case, if instead of multiplying by cosine, we multiply by sine, we get what we expect: 0*0 + 1*1 + 0*0 + -1*-1 = 2. We get results consistent with before, where our answer is the amplitude times two.
That’s not very useful if we have to know whether we are looking for a sine or cosine wave though. We might even have some other type of wave that is neither one!
The solution to this problem is actually to multiply the data points by both cosine and sine waves, and keep both results.
Let’s see how that works.
For this example We’ll take a cosine wave and shift it 0.25 radians to give us samples: 0.97, -0.25, -0.97, 0.25. (The formula is cos(x*2*pi/4+0.25) from 0 to 3)
When we multiply that by a cosine wave we get: 0.97*1 + -0.25*0 + -0.97*-1 + 0.25*0 = 1.94
When we multiply it by a sine wave we get: 0.97*0 + -0.25*1 + -0.97*0 + 0.25*-1 = -0.5
Since we know both of those numbers are twice the actual amplitude, we can divide them both by two to get:
cosine: 0.97
sine: -0.25
Those numbers kind of makes sense if you think about it. We took a cosine wave and shifted it over a little bit so our combined answer is that it’s mostly a cosine wave, but is heading towards being a sine wave (technically a sine wave that has an amplitude of -1, so is a negative sine wave, but is still a sine wave).
To get the amplitude of our phase shifted wave, we treat those values as a vector, and get the magnitude. sqrt((0.97*0.97)+(-0.25*-0.25)) = 1.00. That is correct! Our wave’s amplitude is 1.0.
To get the starting angle (phase) of the wave, we use inverse tangent of sine over cosine. We want to take atan2(sine, cosine), aka atan2(-0.25, 0.97) which gives us a result of -0.25 radians. That’s the amount that we shifted our wave, so it is also correct!
Formulas So Far
Let’s take a look at where we are at mathematically. For
samples of data, we multiply each sample of data by a sample from a wave with the frequency we are looking for and sum up the results (Note that this is really just a dot product between N dimensional vectors!). We have to do this both with a cosine wave and a sine wave, and keep the two resulting values to be able to get our true amplitude and phase information.
For now, let’s drop the “divide by two” part and look at the equations we have.
Those equations above do exactly what we worked through in the samples.
The part that might be confusing is the part inside of the Cos() and Sin() so I’ll explain that real quick.
Looking at the graph cos(x) and sin(x) you’ll see that they both make one full cycle between the x values of 0 and 2*pi (aprox. 6.28):
If instead we change it to a graph of cos(2*pi*x) and sin(2*pi*x) you’ll notice that they both make one full cycle between the x values of 0 and 1:
This means that we can think of the wave forms in terms of percent (0 to 1) instead of in radians (0 to 2pi).
Next, we multiply by the frequency we are looking for. We do this because that multiplication makes the wave form repeat that many times between 0 and 1. It makes us a wave of the desired frequency, still remaining in “percent space” where we can go from 0 to 1 to get samples on our cosine and sine waves.
Here’s a graph of cos(2*pi*x*2) and sin(2*pi*x*2), to show some 2hz frequency waves:
Lastly, we multiply by n/N. In our sum, n is our index variable and it goes from 0 to N-1. This is very much like a for loop. n/N is our current percent done we are in the for loop, so when we multiply by n/N, we are just sampling at a different location (by percentage done) on our cosine and sine waves that are at the specified frequencies.
Not too difficult right?
Imaginary Numbers
Wouldn’t it be neat if instead of having to do two separate calculations for our cosine and sine values, we could just do a single calculation that would give us both values?
Well, interestingly, we can! This is where imaginary numbers come in. Don’t get scared, they are here to make things simpler and more convenient, not to make things more complicated or harder to understand (:
There is something called Euler’s Identity which states the below:
That looks pretty useful for our usage case doesn’t it? If you notice that our equations both had the same parameters for cos and sin, that means that we can just use this identity instead!
We can now take our two equations and combine them into a single equation:
When we do the above calculation, we will get a complex number out with a real and imaginary part. The real part of the complex number is just the cosine value, while the imaginary part of the complex number is the sine value. Nothing scary has happened, we’ve just used/abused complex numbers a bit to give us what we want in a more compact form.
Multiple Frequencies
Now we know how to look for a single, specific frequency.
What if you want to look for multiple frequencies though? Further more, what if you don’t even know what frequencies to look for?
When doing the discrete Fourier transform on a stream of samples, there are a specific set of frequencies that it checks for. If you have N samples, it only checks for N different frequencies. You could ask about other frequencies, but these frequencies are enough to reconstruct the original signal from only the frequency data.
The first N/2 frequencies are going to be the frequencies 0 (DC) through N/2. N/2 is the Nyquist frequency and is the highest frequency that your signal is able of holding.
After N/2 comes the negative frequencies, counting from a large negative frequency (N/2-1) up to the last frequency which is -1.
As an example, if you had 8 samples and ran the DFT, you’d get 8 complex numbers as outputs, which represent these frequencies:
[0hz, 1hz, 2hz, 3hz, 4hz, -3hz, -2hz, -1hz]
The important take away from this section is that if you have N samples, the DFT asks only about N very specific frequencies, which will give enough information to go from frequency domain back to time domain and get the signal data back.
That then leads to this equation below, where we just put a subscript on Value, to signify which frequency we are probing for.
Frequency is an integer, and is between 0 and N-1. We can say that mathematically by listing this information along with the equation:
Final Form
Math folk use letters and symbols instead of words in their equations.
The letter
is used instead of frequency.
Instead of
, the symbol is
.
Instead of
, the symbol is
.
This gives us a more formal version of the equation:
We are close to the final form, but we aren’t quite done yet!
Remember the divide by two? I had us take out the divide by two so that I could re-introduce it now in it’s correct form. Basically, since we are querying for N frequencies, we need to divide each frequency’s result by N. I mentioned that we had to divide by N/2 to make the amplitude information correct, but since we are checking both positive AND negative frequencies, we have to divide by 2*(N/2), or just N. That makes our equation become this:
Lastly, we actually want to make the waves go backwards, so we make the exponent negative. The reason for this is a deeper topic than I want to get into right now, but you can read more about why here if you are curious: DFT – Why are the definitions for inverse and forward commonly switched?
That brings us to our final form of the formula for the discrete Fourier transform:
Taking a Test Drive
If we use that formula to calculate the DFT of a simple, 4 sample, 1hz cosine wave (1, 0, -1, 0), we get as output [0, 0.5, 0, -0.5].
That means the following:
- 0hz : 0 amplitude
- 1hz : 0.5 amplitude
- 2hz : 0 amplitude
- -1hz : -0.5 amplitude
It is a bit strange that the simple 1hz, 1.0 amplitude cosine wave was split in half, and made into a 1hz cosine wave and -1hz cosine wave, each contributing half amplitude to the result, but that is “just how it works”. I don’t have a very good explanation for this though. My personal understanding just goes that if we are checking for both positive AND negative frequencies, that makes it ambiguous whether it’s a 1hz wave with 1 amplitude, or -1hz wave with -1 amplitude. Since it’s ambiguous, both must be true, and they get half the amplitude each. If I get a better understanding, I’ll update this paragraph, or make a new post on it.
Making The Inverse Discrete Fourier Transform
Making the formula for the Inverse DFT is really simple.
We start with our DFT formula, drop the 1/N from the front, and also make the exponent to e positive instead of negative. That is all! The intuition here for me is that we are just doing the reverse of the DFT process, to do the inverse DFT process.
While the DFT takes in real valued signals and gives out complex valued frequency data, the IDFT takes in complex valued frequency data, and gives out real valued signals.
A fun thing to try is to take some data, DFT it, modify the frequency data, then IDFT it to see what comes out the other side.
Other Notes
Here are some other things of note about the discrete Fourier transform and it’s inverse.
Data Repeating Forever
When you run the DFT on a stream of data, the math is such that it assumes that the stream of data you gave it repeats forever both forwards and backwards in time. This is important because if you aren’t careful, you can add frequency content to your data that you didn’t intend to be there.
For instance, if you tile a 1hz sine wave, it’s continuous, and there is only the 1hz frequency present:
However, if you tile a 0.9hz sine wave, there is a discontinuity, which means that there will be other, unintended frequencies present when you do a DFT of the data, to be able to re-create the discontinuity:
Fast Fourier Transform
There are a group of algorithms called “Fast Fourier Transforms”. You might notice that if we have N samples, taking the DFT is an O(N^2) operation. Fast Fourier transforms can bring it down to O(N log N).
DFT / IDFT Formula Variations
The formula we came up with is one possible DFT formula, but there are a handful of variations that are acceptable, even though different variations come up with different values!
The first option is whether to make the e exponent negative or not. Check out these two formulas to see what I mean.
vs
Either one is acceptable, but when providing DFT’d data, you should mention which one you did, and make sure and use the opposite one in your inverse DFT formula.
The next options is whether to divide by N or not, like the below:
vs
Again, either one is acceptable, but you need to make sure and let people know which one you did, and also make sure and use the opposite one in your inverse DFT formula.
Instead of dividing by N, some people actually divide by 1/sqrt(N) in both the DFT and inverse DFT. Wolfram alpha does this for instance!
You can read more about this stuff here: DFT – Why are the definitions for inverse and forward commonly switched?
One thing to note though is that if doing 1/N on the DFT (my personal preference), the 0hz (DC) frequency bin gives you the average value of the signal, and the amplitude data you get out of the other bins is actually correct (keeping in mind the amplitudes are split in half between positive and negative frequencies).
Why Calculate Negative Frequencies
The negative frequencies are able to be calculated on demand from the positive frequency information (eg complex conjugation), so why should we even bother calculating them? Sure it’d be more computationally efficient not to calculate them, especially when just doing frequency analysis, right?!
Here’s a discussion about that: Why calculate negative frequencies of DFT?
Higher Dimensions
It’s possible to DFT in 2d, 3d, and higher. The last blog post shows how to do this with 2d images, but I’d also like to write a blog post like this one specifically talking about the intuition behind multi dimensional DFTs.
Example Program Source Code
Here’s some simple C++ source code which calculates the DFT and inverse DFT, optionally showing work in case you want to try to work some out by hand to better understand this. Working a few out by hand really helped me get a better intuition for all this stuff.
Program Output:
Source Code:
#include <stdio.h> #include <complex> #include <vector> #include <fcntl.h> #include <io.h> // set to 1 to have it show you the steps performed. Set to 0 to hide the work. // useful if checking work calculated by hand. #define SHOW_WORK 1 #if SHOW_WORK #define PRINT_WORK(...) wprintf(__VA_ARGS__) #else #define PRINT_WORK(...) #endif // Use UTF-16 encoding for Greek letters static const wchar_t kPi = 0x03C0; static const wchar_t kSigma = 0x03A3; typedef float TRealType; typedef std::complex<TRealType> TComplexType; const TRealType c_pi = (TRealType)3.14159265359; const TRealType c_twoPi = (TRealType)2.0 * c_pi; //================================================================================= TComplexType DFTSample (const std::vector<TRealType>& samples, int k) { size_t N = samples.size(); TComplexType ret; for (size_t n = 0; n < N; ++n) { TComplexType calc = TComplexType(samples[n], 0.0f) * std::polar<TRealType>(1.0f, -c_twoPi * TRealType(k) * TRealType(n) / TRealType(N)); PRINT_WORK(L" n = %i : (%f, %f)\n", n, calc.real(), calc.imag()); ret += calc; } ret /= TRealType(N); PRINT_WORK(L" Sum the above and divide by %i\n", N); return ret; } //================================================================================= std::vector<TComplexType> DFTSamples (const std::vector<TRealType>& samples) { PRINT_WORK(L"DFT: X_k = 1/N %cn[0,N) x_k * e^(-2%cikn/N)\n", kSigma, kPi); size_t N = samples.size(); std::vector<TComplexType> ret; ret.resize(N); for (size_t k = 0; k < N; ++k) { PRINT_WORK(L" k = %i\n", k); ret[k] = DFTSample(samples, k); PRINT_WORK(L" X_%i = (%f, %f)\n", k, ret[k].real(), ret[k].imag()); } PRINT_WORK(L"\n"); return ret; } //================================================================================= TRealType IDFTSample (const std::vector<TComplexType>& samples, int k) { size_t N = samples.size(); TComplexType ret; for (size_t n = 0; n < N; ++n) { TComplexType calc = samples[n] * std::polar<TRealType>(1.0f, c_twoPi * TRealType(k) * TRealType(n) / TRealType(N)); PRINT_WORK(L" n = %i : (%f, %f)\n", n, calc.real(), calc.imag()); ret += calc; } PRINT_WORK(L" Sum the above and take the real component\n"); return ret.real(); } //================================================================================= std::vector<TRealType> IDFTSamples (const std::vector<TComplexType>& samples) { PRINT_WORK(L"IDFT: x_k = %cn[0,N) X_k * e^(2%cikn/N)\n", kSigma, kPi); size_t N = samples.size(); std::vector<TRealType> ret; ret.resize(N); for (size_t k = 0; k < N; ++k) { PRINT_WORK(L" k = %i\n", k); ret[k] = IDFTSample(samples, k); PRINT_WORK(L" x_%i = %f\n", k, ret[k]); } PRINT_WORK(L"\n"); return ret; } //================================================================================= template<typename LAMBDA> std::vector<TRealType> GenerateSamples (int numSamples, LAMBDA lambda) { std::vector<TRealType> ret; ret.resize(numSamples); for (int i = 0; i < numSamples; ++i) { TRealType percent = TRealType(i) / TRealType(numSamples); ret[i] = lambda(percent); } return ret; } //================================================================================= int main (int argc, char **argv) { // Enable Unicode UTF-16 output to console _setmode(_fileno(stdout), _O_U16TEXT); // You can test specific data samples like this: //std::vector<TRealType> sourceData = { 1, 0, 1, 0 }; //std::vector<TRealType> sourceData = { 1, -1, 1, -1 }; // Or you can generate data samples from a function like this std::vector<TRealType> sourceData = GenerateSamples( 4, [] (TRealType percent) { const TRealType c_frequency = TRealType(1.0); return cos(percent * c_twoPi * c_frequency); } ); // Show the source data wprintf(L"\nSource = [ "); for (TRealType v : sourceData) wprintf(L"%f ",v); wprintf(L"]\n\n"); // Do a dft and show the results std::vector<TComplexType> dft = DFTSamples(sourceData); wprintf(L"dft = [ "); for (TComplexType v : dft) wprintf(L"(%f, %f) ", v.real(), v.imag()); wprintf(L"]\n\n"); // Do an inverse dft of the dft data, and show the results std::vector<TRealType> idft = IDFTSamples(dft); wprintf(L"idft = [ "); for (TRealType v : idft) wprintf(L"%f ", v); wprintf(L"]\n"); return 0; }
Links
Explaining how to calculate the frequencies represented by the bins of output of DFT:
How do I obtain the frequencies of each value in an FFT?
Another good explanation of the Fourier transform if it isn’t quite sinking in yet:
An Interactive Guide To The Fourier Transform
Some nice dft calculators that also have inverse dft equivelants:
DFT Calculator 1
IDFT Calculator 1
DFT Calculator 2
IDFT Calculator 2
Wolfram alpha can also do DFT and IDFT, but keep in mind that the formula used there is different and divides the results by 1/sqrt(N) in both DFT and IDFT so will be different values than you will get if you use a different formula.
Wolfram Alpha: Fourier[{1, 0, -1, 0}] = [0,1,0,1]
Wolfram Alpha: Inverse Fourier[{0, 1 , 0, 1}] = [1, 0, -1, 0] | http://blog.demofox.org/2016/08/11/understanding-the-discrete-fourier-transform/ | CC-MAIN-2017-22 | refinedweb | 3,413 | 59.64 |
Using the Simulation Data Inspector or Simulink® Test™, you can import data from a Microsoft® Excel® file or export data to a Microsoft Excel file. You can also log data to an Excel file using the Record block. The Simulation Data Inspector, Simulink Test, and the Record block all use the same file format, so you can use the same Microsoft Excel file with multiple applications.
Tip
When the format of the data in your Excel file does not match the specification in this topic, you can write
your own file reader to import the data using the
io.reader class.
In the simplest format, the first row in the Excel imported from the Excel file render as missing data in the Simulation Data Inspector..
The file can include metadata for signals such as data type, units, and interpolation method. Metadata for each signal is listed in rows between the signal names and the signal data. You can specify any combination of metadata for each signal. Leave a blank cell for signals with less specified metadata.
Label each piece of metadata according to this table. The table also indicates which tools and operations support each piece of metadata.
When an imported file does not specify signal metadata,
double data
type,
linear interpolation, and
union
synchronization are used.
In addition to built-in data types, you can use other labels in place of the
DataType: label to specify fixed-point, enumerated, alias, and
bus data types.
When you specify the type using the name of a
Simulink.Bus object and
the object is not in the MATLAB workspace, the data still imports from the file. However, individual
signals in the bus use data types described in the file rather than data types defined
in the
Simulink.Bus object.
You can import and export complex, multidimensional, and bus signals using an Excel file. The signal name for a column of data indicates whether that data is part of a complex, multidimensional, or bus signal. Excel file import and export do not support array of bus signals.
Multidimensional signal names include index information in parentheses. For example,
the signal name for a column might be
signal1(2,3). When you import
data from a file that includes multidimensional signal data, elements in the data not
included in the file take zero sample values with the same data type and complexity as
the other elements.
Complex signal data is always in real-imaginary format. Signal names for columns
containing complex signal data include
(real) and
(imag) to indicate which data each column contains. When you
import data from a file that includes imaginary signal data without specifying values
for the real component of that signal, the signal values for the real component default
to zero.
Multidimensional signals can contain complex data. The signal name includes the
indication for the index within the multidimensional signal and the real or imaginary
tag. For example,
signal1(1,3)(real).
Dots in signal names specify the hierarchy for bus signals. For example:
bus.y.a
bus.y.b
bus.x
Tip
When the name of your signal includes characters that could make it appear as
though it were part of a matrix, complex signal, or bus, use the
Name metadata option to specify the name you want the
imported signal to use in the Simulation Data Inspector and Simulink
Test.
Signal data specified in columns before the first time column is imported as one or
more function-call signals. The data in the column specifies the times at which the
function-call signal was enabled. The imported signals have a value of
1 for the times specified in the column. The time values for
function-call signals must be double, scalar, and real, and must increase
monotonically.
When you export data from the Simulation Data Inspector, function-call signals are formatted the same as other signals, with a time column and a column for signal values.
You can import data for parameter values used in simulation. In the Simulation Data Inspector, the parameter values are shown as signals. Simulink Test uses imported parameter values to specify values for those parameters in the tests it runs based on imported data.
Parameter data is specified using two or three columns. The first column specifies the
parameter names, with the cell in the header row for that column labeled
Parameter:. The second column specifies the value used for each
parameter, with the cell in the header row labeled
Value:. Parameter
data may also include a third column that contains the block path associated with each
parameter, with the cell in the header row labeled
BlockPath:.
Specify names, values, and block paths for parameters starting in the first row that
contains signal data, below rows used to specify signal metadata. For example, this file
specifies values for two parameters,
X and
Y.
You can include data for multiple runs in a single file. Within a sheet, you can
divide data into runs by labeling data with a simulation number and a source type, such
as
Input or
Output. Specify the simulation number
and source type as additional signal metadata, using the label
Simulation: for the simulation number and the label
Source: for the source type. The Simulation Data Inspector uses
the simulation number and source type only to determine which signals belong in each
run. Simulink
Test uses the information to define inputs, parameters, and acceptance criteria
for tests to run based on imported data.
You do not need to specify the simulation number and output type for every signal. Signals to the right of a signal with a simulation number and source use the same simulation number and source until the next signal with a different source or simulation number. For example, this file defines data for two simulations and imports into four runs in the Simulation Data Inspector:
Run 1 contains
signal1 and
signal2.
Run 2 contains
signal3,
X, and
Y.
Run 3 contains
signal4.
Run 4 contains
signal5.
You can also use sheets within the Microsoft Excel file to divide the data into runs and tests. When you do not specify simulation number and source information, the data on each sheet is imported into a separate run in the Simulation Data Inspector. When you export multiple runs from the Simulation Data Inspector, the data for each run is saved on a separate sheet. When you import a Microsoft Excel file that contains data on multiple sheets into Simulink Test, you are prompted to specify how to import the data.
Simulink.sdi.createRun |
Simulink.sdi.exportRun | https://au.mathworks.com/help/simulink/ug/simulation-data-inspector-import-file-format.html | CC-MAIN-2021-21 | refinedweb | 1,096 | 53.31 |
Convert and import flipshare video to Quicktime.Mov Flipshare video to mov software supports convert flip video from flipshare to mov, import flip share video from flip ultra(HD), mino(HD), slide(HD) to mov multimedia device, say flip video to Quicktime mov, flip share video to iTunes, flipshare video to Window live movie maker.mov etc. Part one: Convert and import flip video to Quicktime.Mov
QuickTime(default format is mov.) as a great player can handle various formats of digital video, picture, sound, panoramic images, and interactivity. It is available for Mac OS X Leopard or earlier, Windows XP or later, System 7 or later. Tips:The latest version is QuickTime X (10.0) and is currently only available on Mac OS X v10.6. Flip camera record video as mp4 formats, can not be accepted by Quicktime directly, flipshare video to mov converter supports convert flipshare mp4 video to mov, flip ultra hd mp4 video to mov, flip mino hd mp4 video to mov, flip slidehd video to mov and import flip video to Quicktime etc mov video player, editor, dvd burner etc freely.
Part two: Import flip share video to mov.windows live movie maker/edit flip video with WLMM.
FAQ: I'm trying to edit a video with Windows Live Movie Maker, but with a FlipHD camera, the audio is sped up and it goes out of sync with the video, WHY? WLMM announced to accept avchd, mpeg-4, wmv, 3gp, dv-avi, mov etc formats video, but many studies suggest that the particular 'flavor' of Flip's .MP4 files is only kinda sorta compatible,
import fine, but when you play it, the audio is speeded up, and out of sync with the video. Tips:Windows live movie maker is a great video editing software that works on Windows Vista and Windows 7.
Fortunately, flipshare video to mov converter supports convert flip video to mov/mpeg-4/avchd(windows 7), flip video to wmv, flip video to dvavi, flip video to 3gp for importing to Windows Live Movie Maker without downloading any more codecs. Tips: flipshare to mov tool also can convert flip video to windows movie maker, and convert windows movie maker video to WLMM vice versa freely. Part three:Import flipshare video to itunes
iTunes is a digital media player application that used for playing and organizing digital audio and video files. Tips: Now we can import flip video to itunes then sync with ipod, ipad, iphone 3g/3gs/4g etc directly.
Surely, flip mp4 video can be accepted by itunes, however, itunes can not support flip avi files. Iorgsoft flip share to mov converter can help convert flip avi files to iTunes.mpeg-4, import flip share video to iTunes and extract audio from flip video and save as mp3, aiff, wav, aac etc for itunes. (itunes can play Quicktime audio). Part three: Step by step on converting flipshare video to mov Step1: Download flipshare video to mov converter software on your PC, run it.
Tips: free trial version and free e-mail Supports.
Click "Add video" to add flip video from flipshare, click "profile drop-down list" to set output as mov, wmv, dv-avi or 3gp etc, then click "output arrow" to save it. Step2:
Step3: Click"Start", whole progress will be complete automatically. Part four::more about flipshare video to mov converter In fact, flip share video to mov software also can convert flip video to mpg, flv, rm, mkv, m4v, dv, swf, divx, xvid etc, extract audio from video and save as wma, wav, wma, aiff, flac, mp3, m4a, mka, aac, ac3, amr, ra, etc, import flip video Adobe premiere, iPod, iPhone 3G/3GS/4G, iPad, Mobile Phone, Blackberry, PSP, PS3 , vegas etc. Edit flip video like: adjust video parameter(resolution, Bit Rate, Frame Rate and
Encoder), merge, join, crop/set aspect ratio (16:9/4:3/full screen/original), clip/split/trim video, apply effect(adjust brightness, contrast and saturation etc) etc also can using flipshare video to mov converter. | https://issuu.com/verywell/docs/flipshare_video_to_mov_123 | CC-MAIN-2018-09 | refinedweb | 673 | 66.37 |
Today, I'd like to share with you one of the items from my tools-belt, which I'm successfully using for years now. It is simply a react component. It is a form. But not just a form, it is a form that allows anyone independently of their React or HTML knowledge to build a sophisticated feature-rich form based on any arbitrary expected data in a consistent manner.
Behold, the React JSON Schema Form, or simply RJSF. Originally started and built as an Open Source project by the Mozilla team. Evolved into a separate independent project.
Out of the box, RJSF provides us with rich customization of different form levels, extensibility, and data validation. We will talk about each aspect separately.
Configuration
JSON Schema
The end goal of any web form is to capture expected user input. The RJSF will capture the data as a JSON object. Before capturing expected data we need to define how the data will look like. The rest RJSF will do for us. To define and annotate the data we will use another JSON object. Bear with me here...
We will be defining the shape (or the schema) of the JSON object (the data) with another JSON object. The JSON object that defines the schema for another JSON object is called -drumroll- JSON Schema and follows the convention described in the JSON Schema standard.
To make things clear, we have two JSON objects so far. One representing the data we are interested in, another representing the schema of the data we are interested in. The last one will help RJSF to decide which input to set for each data attribute.
A while ago in one of my previous articles I've touched base on the JSON Schema.
I'm not going to repeat myself, I'll just distill to what I think is the most valuable aspect of it.
JSON Schema allows us to capture changing data and keep it meaningful. Think of arbitrary address data in the international application. Address differs from country to country, but the ultimate value doesn't. It represents a point in the world that is described with different notations. Hence even though the address format in the USA, Spain, Australia, or China is absolutely different, the ultimate value -from an application perspective- is the same-- a point on the Globe. It well might be employee home address, parcel destination or anything else and notation does not change this fact.
So if we want to capture, let's say, the first and last name and telephone number of a person. The expected data JSON object will look like
{ "firstName": "Chuck", "lastName": "Norris", "telephone": "123 456 789" }
And the JSON Schema object to define the shape of the data object above will look like
{ "title": "A person information", "description": "A simple person data.", "type": "object", "properties": { "firstName": { "type": "string", "title": "First name", }, "lastName": { "type": "string", "title": "Last name" }, "telephone": { "type": "string", "title": "Telephone", "minLength": 10 } } }
Something to keep in mind.
JSON Schema is following a permissive model. Meaning out of the box everything is allowed. The more details you specify, the more restrictions you put in place. So it is worth sometimes religiously define the expected data.
This is the bare minimum we need to start. Let's look at how the JSON Schema from the above will look like as a form. Just before let's also look at the code...
import Form from "@rjsf/core"; // ... <Form schema={schema}> <div /> </Form> // ...
Yup, that's it, now let's check out the form itself
UI Schema
Out of the box, the RJSF makes a judgment on how to render one field or another. Using JSON Schema you primarily control what to render, but using UI Schema you can control how to render.
UI Schema is yet another JSON that follows the tree structure of the JSON data, hence form. It has quite some stuff out of the box.
You can be as granular as picking a color for a particular input or as generic as defining a template for all fields for a
string type.
Let's try to do something with our demo form and say disable the first name and add help text for the phone number.
{ "firstName": { "ui:disabled": true }, "telephone": { "ui:help": "The phone number that can be used to contact you" } }
Let's tweak our component a bit
import Form from "@rjsf/core"; // ... <Form schema={schema} uiSchema={uiSchema} > <div /> </Form> // ...
And here is the final look
Nice and easy. There's a lot of built-in configurations that are ready to be used, but if nothing suits your needs, you can build your own...
Customization.
Another way to think of it is field includes label and other stuff around, while widget only the interaction component or simply input.
For the sake of example let's create a simple text widget that will make the input red and put a dash sign (-) after every character.
To keep things light and simple let's imagine that the whole form will be a single red field. The JSON Schema will look as follows
const schema = { title: "Mad Field", type: "string" };
Forgot to say that widgets are just components, that will be mounted in and will receive a standard set of
props. No limits, just your imagination ;)
const MadTextWidget = (props) => { return ( <input type="text" style={{backgroundColor: "red"}} className="custom" value={props.value} required={props.required} onChange={(event) => props.onChange(event.target.value + " - ")} /> ); };
The next step is to register the widget so that we can use it in the UI Schema
const widgets = { madTextWidget: MadTextWidget }
Finally, we can define the UI Schema
const uiSchema = { "ui:widget": "madTextWidget" };
And the full code with the RJSF
const schema = { title: "Mad Field", type: "string" }; const MadTextWidget = (props) => { return ( <input type="text" style={{backgroundColor: "red"}} className="custom" value={props.value} required={props.required} onChange={(event) => props.onChange(event.target.value + " - ")} /> ); }; const widgets = { madTextWidget: MadTextWidget } const uiSchema = { "ui:widget": "madTextWidget" }; ReactDOM.render(( <Form schema={schema} uiSchema={uiSchema} widgets={widgets} /> ), document.getElementById("app"));
It will look like this
Here, try it yourself. The field will be pretty similar but will have a wider impact area so to speak. As been said the field will include labels and everything around the input itself.
Custom templates allows you to re-define the layout for certain data types (simple field, array or object) on the form level.
Finally, you can build your own Theme which will contain all your custom widgets, fields, template other properties available for a
Form component.
Validation
As was mentioned before the JSON Schema defines the shape of the JSON data that we hope to capture with the form. JSON Schema allows us to define the shape fairly precisely. We can tune the definition beyond the expected type, e.g. we can define a length of the string or an email regexp or a top boundary for a numeric value and so forth.
const Form = JSONSchemaForm.default; const schema = { type: "string", minLength: 5 }; const formData = "Hi"; ReactDOM.render(( <Form schema={schema} formData={formData} liveValidate /> ), document.getElementById("app"));
Will end up looking like this
Of course, we can re-define messages, configure when, where, and how to show the error messages.
Out of the box our data will be validated against the JSON Schema using the (Ajv) A JSON Schema validator library. However, if we want to, we can implement our own custom validation process.
Dependencies
Dependencies allow us to add some action to the form. We can dynamically change form depending on the user input. Basically, we can request extra information depending on what the user enters.
Before we will get into dependencies, we need to get ourselves familiar with dynamic schema permutation. Don't worry, it is easier than it sounds. We just need to know what four key-words mean
allOf: Must be valid against all of the subschemas
anyOf: Must be valid against any of the subschemas
oneOf: Must be valid against exactly one of the subschemas
not: Must not be valid against the given schema ___
Although dependencies have been removed in the latest JSON Schema standard versions, RJSF still supports it. Hence you can use it, there are no plans for it to be removed so far.
Property dependencies
We may define that if one piece of the data has been filled, the other piece becomes mandatory. There are two ways to define this sort of relationship: unidirectional and bidirectional. Unidirectional as you might guess from the name will work in one direction. Bidirectional will work in both, so no matter which piece of data you fill in, the other will be required as well.
Let's try to use bidirectional dependency to define address in the shape of coordinates. The dependency will state that if one of the coordinates has been filled, the other one has to be filled in either. But if none is filled, none is required.
{ "type": "object", "title": "Longitude and Latitude Values", "description": "A geographical coordinate.", "properties": { "latitude": { "type": "number", "minimum": -90, "maximum": 90 }, "longitude": { "type": "number", "minimum": -180, "maximum": 180 } }, "dependencies": { "latitude": [ "longitude" ], "longitude": [ "latitude" ] }, "additionalProperties": false }
See lines 17 to 24. That's all there is to it, really. Once we will pass this schema to the form, we will see the following (watch for an asterisk (*) near the label, it indicates whether the field is mandatory or not).
Schema dependencies
This one is more entertaining, we can actually control visibility through the dependencies. Let's follow up on the previous example and for the sake of the example show longitude only if latitude is filled in.
{ "type": "object", "title": "Longitude and Latitude Values", "description": "A geographical coordinate.", "properties": { "latitude": { "type": "number", "minimum": -90, "maximum": 90 } }, "dependencies": { "latitude": { "properties": { "longitude": { "type": "number", "minimum": -180, "maximum": 180 } } } }, "additionalProperties": false }
No code changes are required, just a small dependency configuration tweak (lines 12 to 22).
Dynamic schema dependencies
So far so good, pretty straightforward. We input the data, we change the expected data requirements. But we can go one step further and have multiple requirements. Not only based on whether the data is presented or not but on the value of presented data.
Once again, no code, only JSON Schema modification
{ "title": "How many inputs do you need?", "type": "object", "properties": { "How many inputs do you need?": { "type": "string", "enum": [ "None", "One", "Two" ], "default": "None" } }, "required": [ "How many inputs do you need?" ], "dependencies": { "How many inputs do you need?": { "oneOf": [ { "properties": { "How many inputs do you need?": { "enum": [ "None" ] } } }, { "properties": { "How many inputs do you need?": { "enum": [ "One" ] }, "First input": { "type": "number" } } }, { "properties": { "How many inputs do you need?": { "enum": [ "Two" ] }, "First input": { "type": "number" }, "Second input": { "type": "number" } } } ] } } }
Bottom line
Even though we went through some major concepts and features, we are far away from covering everything that RJSF empowers us to do.
I'd encourage you to check out official documentation for more insights and examples, GitHub repository for undocumented goodies and live playground to get your hands dirty. Finally, worth mentioning that the Open Source community keeps things going, so look outside these resources, there are quite a few good things over there.
RJSF is a ridiculously powerful thing if you need to customize and capture meaningful data. Enjoy!
Discussion (0) | https://practicaldev-herokuapp-com.global.ssl.fastly.net/vudodov/react-json-schema-form-39da | CC-MAIN-2021-25 | refinedweb | 1,880 | 63.59 |
In this tutorial, we are going to cover how we can add sound effects to our Ionic applications. This particular example is going to cover adding a “click” sound effect that is triggered when switching between tabs, but we will be creating a service that can be used generally to play many different sound effects in whatever situation you would like.
I wrote a tutorial on this subject a while ago, but I wanted to publish a new tutorial for a couple of reasons:
- It was a long time ago and I think there is room for improvement
- The Cordova plugin that Ionic Native uses for native audio does not appear to work with Capacitor, so I wanted to adapt the service to be able to rely entirely on web audio if desired.
We will be creating an audio service that can handle switching between native and web audio depending on the platform that the application is running on. However, we will also be adding an option to force web audio even in a native environment which can be used in the case of Capacitor (at least until I can find a simple solution for native audio in Capacitor).
In any case, I don’t think using web audio for sound effects really has any obvious downsides in a native environment anyway.
Before We Get Started
Last updated for Ionic 4, beta.16
This tutorial assumes that you already have a decent working knowledge of the Ionic framework. If you need more of an introduction to Ionic I would recommend checking out my book or the Ionic tutorials on my website.
1. Install the Native Audio Plugin
In order to enable support for native audio, we will need to install the Cordova plugin for native audio as well as the Ionic Native package for that plugin.
Keep in mind that if you are using Capacitor, plugins should be installed using
npm install not
ionic cordova plugin add.
2. Creating an Audio Service
First, we are going to add an Audio service to our application. We can do that by running the following command:
ionic g service services/Audio
The role of this service will be to preload our audio assets and to play them on demand. It will be smart enough to handle preloading the audio in the correct manner, as well as using the correct playing mechanism based on the platform.
Modify src/app/services/audio.service.ts to reflect the following:
import { Injectable } from '@angular/core'; import { Platform } from '@ionic/angular'; import { NativeAudio } from '@ionic-native/native-audio/ngx'; interface Sound { key: string; asset: string; isNative: boolean } @Injectable({ providedIn: 'root' }) export class AudioService { private sounds: Sound[] = []; private audioPlayer: HTMLAudioElement = new Audio(); private forceWebAudio: boolean = true; constructor(private platform: Platform, private nativeAudio: NativeAudio){ } preload(key: string, asset: string): void { if(this.platform.is('cordova') && !this.forceWebAudio){ this.nativeAudio.preloadSimple(key, asset); this.sounds.push({ key: key, asset: asset, isNative: true }); } else { let audio = new Audio(); audio.src = asset; this.sounds.push({ key: key, asset: asset, isNative: false }); } } play(key: string): void { let soundToPlay = this.sounds.find((sound) => { return sound.key === key; }); if(soundToPlay.isNative){ this.nativeAudio.play(soundToPlay.asset).then((res) => { console.log(res); }, (err) => { console.log(err); }); } else { this.audioPlayer.src = soundToPlay.asset; this.audioPlayer.play(); } } }
In this service, we are keeping track of an array of
sounds which will contain all of the audio assets that we want to play.
We create an
audioPlayer object for playing web audio (which is the same as having an
<audio src="whatever.mp3"> element). We will use this single audio object and switch out the
src to play various sounds. The
forceWebAudio flag will always use web audio to play sounds when enabled.
The purpose of the
preload method is to load the sound assets and make them available for use. We supply it with a
key that we will use to reference a particular sound effect and an
asset which will link to the actual audio asset.
In the case of native audio, the sound is preloaded with the native plugin. For web audio, we create a new audio object and set its
src which will load the audio asset into the cache - this will allow us to immediately play the sound later rather than having to load it on demand.
The
play method allows us to pass it the
key of the sound we want to play, and then it will find it in the
sounds array. We when either play that sound using the native audio plugin, or we set the
src property on the
audioPlayer to that asset and then call the
play method.
3. Preloading Sound Assets
Before we can trigger playing our sounds, we need to pass the sound assets to the
preload method we just created. Since we want to play a sound when switching tabs for this example, we could preload the sound in the TabsPage component.
Modify src/app/tabs/tabs.page.ts to reflect the following:
import { Component, AfterViewInit } from '@angular/core'; import { AudioService } from '../services/audio.service'; @Component({ selector: 'app-tabs', templateUrl: 'tabs.page.html', styleUrls: ['tabs.page.scss'] }) export class TabsPage implements AfterViewInit { constructor(private audio: AudioService){ } ngAfterViewInit(){ this.audio.preload('tabSwitch', 'assets/audio/clickSound.mp3'); } }
This will cause the sound to be loaded (whether through web or native audio) and be available for use. You don’t have to preload the sound here - you could preload the sound from anywhere (perhaps you would prefer to load all of your sounds in the root component).
NOTE: You will need to add a sound asset to your
assets folder. For this example, I used a “click” sound from freesound.org. Make sure to check that the license for the sounds you want to use match your intended usage.
4. Triggering Sounds
Finally, we just need to trigger the sound in some way. All we need to do to achieve this is to call the
play method on the audio service like this:
this.audio.play('tabSwitch');
Let’s complete our example of triggering the sound on tab changes.
Modify src/app/tabs/tabs.page.ts to reflect the following:
import { Component, AfterViewInit, ViewChild } from '@angular/core'; import { Tabs } from '@ionic/angular'; import { AudioService } from '../services/audio.service'; import { skip } from 'rxjs/operators'; @Component({ selector: 'app-tabs', templateUrl: 'tabs.page.html', styleUrls: ['tabs.page.scss'] }) export class TabsPage implements AfterViewInit { @ViewChild(Tabs) tabs: Tabs; constructor(private audio: AudioService){ } ngAfterViewInit(){ this.audio.preload('tabSwitch', 'assets/audio/clickSound.mp3'); this.tabs.ionChange.pipe(skip(1)).subscribe((ev) => { this.audio.play('tabSwitch'); }); } }
We are grabbing a reference to the tabs component here, and then we subscribe to the
ionChange observable which triggers every time the tab changes. The
ionChange will also fire once upon initially loading the app, so to avoid playing the sound the first time it is triggered we
pipe the
skip operator on the observable to ignore the first time it is triggered.
If you are interested in learning more about the mechanism I am using here to detect tab switching, you might be interested in one of my recent videos: Using Tab Badges in Ionic.
Summary
Loading and playing audio programmatically can be a bit cumbersome, but with the service we have created we can break it down into just two simple method calls - one to load the asset, and one to play it. | https://www.joshmorony.com/adding-sound-effects-to-an-ionic-application/ | CC-MAIN-2021-04 | refinedweb | 1,235 | 50.77 |
hello all,
i'm a biology student currently taking programming in uni, and i have a homework question to solve.
i have to write a programme to create and print a one-month calendar which can be used as month and year calendars as well.
the programme should be able to;
1. detect the leap years and non-leap years
2. identify the first day (mon, tue, wed, etc)
3. identify the days in a month (number of days)
4. additional features such as help menu and other creative functions
for this assignment, my other group members and i are doing the different parts. i have written the codes for the first part (leap years) but i'm not very sure if it would work.
my greatest concern is whether we would be able to compile all the three different functions into one working programme, since we are all new to C++. i need some advice as to how to compile them
example of a code my instructor printed out for me;
#include <stdio.h> /* 1' == leap year, 0' != leap year */ int is_leap_year (int year); int main (void) { int years [4] = {2000, 2004, 2008, 2012); int i = 0; for (i = 0; i < 4; i++) { if (is_leap_year (years[i])==1) printf ("%d is a leap year\n", years [i]); else printf ("%d is not a leap year\n", years [i]); } return 0; }
we are using the C software but saving it under C++ since we don't have C++ in our school lab.
the actual codes i have written is different from this as i use the (year%4), (year%100==0) and (year%400==0) formula. i just need to know if the codes i typed earlier are more appropriate to be used in my assignment, because i do not understand them well
i also hope you can suggest to me some creative features to be added to my calendar to make it more interesting
any help is greatly appreciated, thank you
*added
i'm very sorry if i did not make myself clear, i mean we are using the C software in our school, so i have to use C code, but we are learning C++. i don't really understand this myself, my lecturer explained it to me. and i have solved the first part of the project. i'll be very thankful for everyone's help ^^ | https://www.daniweb.com/programming/software-development/threads/205037/compiling-different-parts-of-a-calendar | CC-MAIN-2017-34 | refinedweb | 399 | 74.12 |
TracUserPage
Description
Defines a namespace for user pages, where users can place personal wiki pages that can be public or private. A private page can only be opened by TRAC_ADMIN or the user whose login name matches the wiki page. Users can use this page to store a personal set of reports, queries, macros, etc.
Bugs/Feature Requests
Existing bugs and feature requests for TracUserPagePlugin are here.
If you have any issues, create a new ticket.
Download
Download the zipped source from [download:tracuserpageplugin here].
Source
You can check out TracUserPagePlugin from here using Subversion, or browse the source with Trac.
Example
<pre> [components] tracuserpage.* = enabled
[userpage] default = private error = u_private root = /wiki/u/ </pre>
This defines the prefix as a namespace for user pages. /wiki/u/santa would be a page belonging to the user 'santa'. Because pages are 'default = private', santa would be able to view this page when logged in, and so would TRAC_ADMIN users, but everyone else would receive the /wiki/u_private page instead.
Recent Changes
Author/Contributors
Author: dgc
Contributors: | https://trac-hacks.org/wiki/TracUserPagePlugin?version=2 | CC-MAIN-2017-26 | refinedweb | 175 | 57.27 |
I have no idea what does this code means even though the book explained it. I think maybe it didn't explain well enough. Can anyone tell me what does this code means?
I also have no idea about the char 256 thing =,=I also have no idea about the char 256 thing =,=Code://This programme shows an example of using an if statement to check for no input #include <iostream> #include <string> int main() { using namespace std; char response[256]; cout<< "What is your name? "; cin.getline(response,256); if (strlen(response) == 0) cout<< "You must tell me your name..."; else cout<< "It's nice to meet you, "<< response; return 0; } | http://cboard.cprogramming.com/cplusplus-programming/97876-what-does-code-means.html | CC-MAIN-2015-27 | refinedweb | 112 | 79.3 |
PyBytes MQTT integration
Is it possible to have an MQTT client watch data coming into PyBytes from registered devices and consume that data in real time? This client would be running on premise (not AWS). I've seen nothing in the docs on this so far. (It's the approach we use with TTN but in this case we'll be using NB-IoT not LoRa.)
Hi @paul_tanner
No Pybytes will not transmit location unless you request that under Pybytes.
basically, we use this to identify some actions on Pybytes section for example sending a location command. the pin is used to know which pin is used under this command (also can be used as an identifier a random number)
The part that holds the message details is the payload.
Best Regards,
Ahmad EL Masri
@ahmadelmasri thanks for that. Does PyBytes hold the device location (if not transmitted by the device) and if so is that accessible though this API?
Also, could you clarify "command" and "pin" in this context please.
Hi @paul_tanner
So we pack the message. it contains the command, the pin, and the payload (encoded also, but based on the type int, string, tuple, float, ...) check below link for more details on how to decode the message.
Reference:
Best Regards,
Ahmad El Masri
- Gijs Global Moderator last edited by Gijs
Hi,
The MQTT messages are all formatted in JSON. They contain a header, message type etc. You could turn on pybytes debugging on the device to see all content as well, using
import pycom pycom.nvs_set('pybytes_debug', 99)
Features like OTA firmware update and Pymakr Online are structured differently (outside of the scope of MQTT)
Thx @ahmadelmasri
Having a secure connection is good so we would do that.
However, there should be a shared secret between the two servers so that other people cannot subscribe to our data.
Please note my other question: What would the data coming back look like?
I am assuming it will be a json object. But what will the contents be?
Hi @paul_tanner
Yes, you can do this on a private server, if you want to listen to multiple devices you should do multiple subscriptions!
BTW. This doesn't sound secure enough to meet GDPR requirements.
We are trying to enhance our security level to match GDPR requirements. knowing that if you use the secure connection between device and MQTT which make the communication more secure, however, in this case, you should listen to another port 8883
Best Regards,
Ahmad El Masri
- paul_tanner last edited by paul_tanner
Thx @Ahmad,
The connection I am talking about is between an application on a private server and Pybytes. So this application could subscribe to mqtt.pybytes.pycom.io. If there were multiple devices it sounds like it would need to subscribe once for each device. What would the data coming back look like?
BTW. This doesn't sound secure enough to meet GDPR requirements.
Regards.
Hi @paul_tanner
You can use
MQTTXto see
MQTTdata between your device and Pybytes.
name: up to you
client-id: up to you
host: mqtt.pybytes.pycom.io
port: 1883
username: your email address
password: your device token
Link:
Please let me know if you require additional details | https://forum.pycom.io/topic/6457/pybytes-mqtt-integration/9?lang=en-US | CC-MAIN-2021-04 | refinedweb | 537 | 63.39 |
<!doctype html public "-//W3C//DTD HTML 4.01 Transitional//EN" ""> <html> <head> <title>Postfix IPv6 Support</title> <meta http- </head> <body> <h1><img src="postfix-logo.jpg" width="203" height="98" ALT="">Postfix IPv6 Support</h1> <hr> <h2>Introduction</h2> <p> Postfix 2.2 introduces support for the IPv6 (IP version 6) protocol. IPv6 support for older Postfix versions was available as an add-on patch. The section "<a href="#compat">Compatibility with Postfix <2.2 IPv6 support</a>" below discusses the differences between these implementations. </p> <p>. </p> <p>. </p> <p> This document provides information on the following topics: </p> <ul> <li><a href="#platforms">Supported platforms</a> <li><a href="#configuration">Configuration</a> <li><a href="#limitations">Known limitations</a> <li><a href="#compat">Compatibility with Postfix <2.2 IPv6 support</a> <li><a href="#porting">IPv6 Support for unsupported platforms</a> <li><a href="#credits">Credits</a> </ul> <h2><a name="platforms">Supported Platforms</a></h2> <p> Postfix version 2.2 supports IPv4 and IPv6 on the following platforms: </p> <ul> <li> AIX 5.1+ <li> Darwin 7.3+ <li> FreeBSD 4+ <li> Linux 2.4+ <li> NetBSD 1.5+ <li> OpenBSD 2+ <li> Solaris 8+ <li> Tru64Unix V5.1+ </ul> <p> On other platforms Postfix will simply use IPv4 as it has always done. </p> <p> See <a href="#porting">below</a> for tips how to port Postfix IPv6 support to other environments. </p> <h2><a name="configuration">Configuration</a></h2> <p> Postfix IPv6 support introduces two new main.cf configuration parameters, and introduces an important change in address syntax notation in match lists such as mynetworks or debug_peer_list. </p> <p> Postfix IPv6 address syntax is a little tricky, because there are a few places where you must enclose an IPv6 address inside "<tt>[]</tt>" characters, and a few places where you must not. It is a good idea to use "<tt>[]</tt>" only in the few places where you have to. Check out the postconf(5) manual whenever you do IPv6 related configuration work with Postfix. </p> <ul> <li> <p>. </p> <li> <p> The first new parameter is called inet_protocols. This specifies what protocols Postfix will use when it makes or accepts network connections, and also controls what DNS lookups Postfix will use when it makes network connections. </p> <blockquote> <pre> ) </pre> </blockquote> <p> By default, Postfix uses IPv4 only, because most systems aren't attached to an IPv6 network. </p> <ul> <li> <p> On systems with combined IPv4/IPv6 stacks, attempts to deliver mail via IPv6 would always fail with "network unreachable", and those attempts would only slow down Postfix. </p> <li> <p> Linux kernels don't even load IPv6 protocol support by default. Any attempt to use it would fail immediately. </p> </ul> <p> Note 1: you must stop and start Postfix after changing the inet_protocols configuration parameter. </p> <p>. </p> <blockquote> <pre> postconf: warning: inet_protocols: IPv6 support is disabled: Address family not supported by protocol postconf: warning: inet_protocols: configuring for IPv4 support only </pre> </blockquote> <p>. </p> <li> <p> The other new parameter is smtp_bind_address6. This sets the local interface address for outgoing IPv6 SMTP connections, just like the smtp_bind_address parameter does for IPv4: </p> <blockquote> <pre> /etc/postfix/main.cf: smtp_bind_address6 = 2001:240:587:0:250:56ff:fe89:1 </pre> </blockquote> <li> <p> If you left the value of the mynetworks parameter at its default (i.e. no mynetworks setting in main.cf) Postfix will figure out by itself what its network addresses are. This is what a typical setting looks like: </p> <blockquote> <pre> % postconf mynetworks mynetworks = 127.0.0.0/8 168.100.189.0/28 [::1]/128 [fe80::]/10 [2001:240:587::]/64 </pre> </blockquote> <p> If you did specify the mynetworks parameter value in main.cf, you need update the mynetworks value to include the IPv6 networks the system is in. Be sure to specify IPv6 address information inside "<tt>[]</tt>", like this: </p> <blockquote> <pre> /etc/postfix/main.cf: mynetworks = ...<i>IPv4 networks</i>... [::1]/128 [2001:240:587::]/64 ... </pre> </blockquote> </ul> <p> <b> NOTE: when configuring Postfix match lists such as mynetworks or debug_peer_list, you must specify IPv6 address information inside "<tt>[]</tt>" in the main.cf parameter value and in files specified with a "<i>/file/name</i>" pattern. IPv6 addresses contain the ":" character, and would otherwise be confused with a "<i>type:table</i>" pattern. </b> </p> <h2><a name="limitations">Known Limitations</a></h2> <ul> <li> <p> The order of IPv6/IPv4 outgoing connection attempts is not yet configurable. Currently, IPv6 is tried before IPv4. </p> <li> <p> Postfix currently does not support DNSBL (real-time blackhole list) lookups for IPv6 client IP addresses; currently there are no blacklists that cover the IPv6 address space. </p> <li> <p> IPv6 does not have class A, B, C, etc. networks. With IPv6 networks, the setting "mynetworks_style = class" has the same effect as the setting "mynetworks_style = subnet". </p> <li> <p> On Tru64Unix and AIX, Postfix can't figure out the local subnet mask and always assumes a /128 network. This is a problem only with "mynetworks_style = subnet" and no explicit mynetworks setting in main.cf. </p> </ul> <h2> <a name="compat">Compatibility with Postfix <2.2 IPv6 support</a> </h2> <p> Postfix version 2.2 IPv6 support is based on the Postfix/IPv6 patch by Dean Strik and others, but differs in a few minor ways. </p> <ul> <li> <p> main.cf: The inet_interfaces parameter does not support the notation "ipv6:all" or "ipv4:all". Use the inet_protocols parameter instead. </p> <li> <p> main.cf: Specify "inet_protocols = all" or "inet_protocols = ipv4, ipv6" in order to enable both IPv4 and IPv6 support. </p> <li> <p> main.cf: The inet_protocols parameter also controls what DNS lookups Postfix will attempt to make when delivering or receiving mail. </p> <li> <p> main.cf: Specify "inet_interfaces = loopback-only" to listen on loopback network interfaces only. </p> <li> <p> The lmtp_bind_address and lmtp_bind_address6 features were omitted. The Postfix LMTP client will be absorbed into the SMTP client, so there is no reason to keep adding features to the LMTP client. </p> <li> <p> The SMTP server now requires that IPv6 addresses in SMTP commands are specified as [ipv6:<i>ipv6address</i>], as described in RFC 2821. </p> <li> <p> The IPv6 network address matching code was rewritten from the ground up, and is expected to be closer to the specification. The result may be incompatible with the Postfix/IPv6 patch. </p> </ul> <h2><a name="porting">IPv6 Support for unsupported platforms</a></h2> <p> Getting Postfix IPv6 working on other platforms involves the following steps: </p> <ul> <li> <p> Specify how Postfix should find the local network interfaces. Postfix needs this information to avoid mailer loops and to find out if mail for <i>user@[ipaddress]</i> is a local or remote destination. </p> <p> If your system has the getifaddrs() routine then add the following to your platform-specific section in src/util/sys_defs.h: </p> <blockquote> <pre> #ifndef NO_IPV6 # define HAS_IPV6 # define HAVE_GETIFADDRS #endif </pre> </blockquote> <p> Otherwise, if your system has the SIOCGLIF ioctl() command in /usr/include/*/*.h, add the following to your platform-specific section in src/util/sys_defs.h: </p> <blockquote> <pre> #ifndef NO_IPV6 # define HAS_IPV6 # define HAS_SIOCGLIF #endif </pre> </blockquote> <p>: </p> <blockquote> <pre> #ifndef NO_IPV6 # define HAS_IPV6 #endif </pre> </blockquote> <li> <p> Test if Postfix can figure out its interface information. </p> <p> After compiling Postfix in the usual manner, step into the src/util directory and type "<b>make inet_addr_local</b>". Running this file by hand should produce all the interface addresses and network masks, for example: </p> <blockquote> <pre> % </pre> </blockquote> <p> The above is for an old FreeBSD machine. Other systems produce slightly different results, but you get the idea. </p> </ul> <p> If none of all this produces a usable result, send email to the [email protected] mailing list and we'll try to help you through this. </p> <h2><a name="credits">Credits</a></h2> <p> The following information is in part based on information that was compiled by Dean Strik. </p> <ul> <li> <p> Mark Huizer wrote the original Postfix IPv6 patch. </p> <li> <p> Jun-ichiro 'itojun' Hagino of the KAME project made substantial improvements. Since then, we speak of the KAME patch. </p> <li> <p> The PLD Linux Distribution ported the code to other stacks (notably USAGI). We speak of the PLD patch. A very important feature of the PLD patch was that it can work with Lutz Jaenicke's TLS patch for Postfix. </p> <li> <p>. </p> <li> . </p> </ul> </body> </html> | http://opensource.apple.com//source/postfix/postfix-247/postfix/proto/IPV6_README.html | CC-MAIN-2016-40 | refinedweb | 1,441 | 56.55 |
Technical Support
Support Resources
Product Information
How do I locate an initialized array in XDATA memory. I am declaring an array thus:
char * lcd_message = { "message 1", "message 2", ... "last message" };
and need to get the messages and pointers into XDATA as they may be changed during runtime.
The best way to do this is to declare the messages separately from the array. For example:
#include <stdio.h>
char xdata m1 [] = "Message One";
char xdata m2 [] = "Message Two";
char xdata m3 [] = "Message Three";
xdata char xdata *message_table [] =
{
m1,
m2,
m3,
NULL,
};
char xdata NULL_PLACE_HOLDER _at_ 0;
This is the only way to force the constant strings into XDATA instead of CONSTANT (code) memory.
Note that the NULL_PLACE_HOLDER is declared to reside at 0 in XDATA. That is probably useful if you will test for NULL pointers.
Last Reviewed: Monday, June 26, 2000 | http://www.keil.com/support/docs/299.htm | crawl-003 | refinedweb | 141 | 64.1 |
Created on 2008-03-30 22:23 by belopolsky, last changed 2010-06-25 21:06 by r.david.murray.
Opening a new issue per Raymond's request at msg64764:
"""
It would be *much* more useful to direct effort improving the mis-
reporting of the number of arguments given versus those required for
instance methods:
>>> a.f(1, 2)
TypeError: f() takes exactly 1 argument (3 given)
"""
I would suggest that this misreporting may be dear to old-beards
reminding the time when there was not as much difference between methods
and functions as there is now.
It does not seem to be that hard to change the diagnostic to
>>> a.f(1, 2)
TypeError: f() takes no arguments (2 given)
but the novice users would much rather see "a.f() takes no arguments (2
given)." The later is unfortunately not possible.
Raymond, what would you say to "<class 'A' instance>.f() takes no
arguments (2 given)" diagnostic?
Attached patch (issue2516poc.diff) presents proof-of-concept code which
changes the problematic reporting as follows:
>>> a.f(1,2)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: <A object>.f() takes exactly 0 arguments (2 given)
More effort is needed to make this patch ready: support default/keyword
arguments, respect English grammar in 1-argument case, etc. Before I do
that, however, I would like to hear that this is a worthwhile fix and
that I've chosen the right place in the code to implement it.
You have +1 from me to continue developing this patch.
I am uploading another work in progress patch because the problem proved
to be more difficult than I thought in the beginning. The new patch
addresses two issues.
1. a.f(..) -> A.f(a, ..) transformation is performed is several places
in the current code. I have streamlined CALL_* opcodes processing to
make all calls go through PyObject_Call. This eliminated some
optimizations that can be put back in once the general framework is
accepted.
2. The only solution I could find to fixing reporting from
instancemethod_call was to add expected number of arguments information
to the exception raised in ceval and use it to reraise with a proper
message. Obviously, putting the necessary info into the argument tuple
is a hack and prevents reproducing original error messages from the
regular function calls. I see two alternatives: (a) parsing the error
string to extract the needed information (feels like even a bigger
hack), and (b) creating an ArgumentError subclass of TypeError and have
its instances store needed information in additional attributes (talking
about a canon and a mosquito!)
3. If a solution that requires extra information in an exception is
accepted, PyArg_Parse* functions should be similarly modified to add the
extra info when raising an error.
Finally, let's revisit whether this mosquito deserves to die. After
all, anyone looking at method definition sees the self argument, so
saying that a.f(1, 2) provides 3 arguments to f() is not such a stretch
of the truth.
It is also possible that I simply fail to see a simpler solution. It
this case, please enlighten me!
PS: The patch breaks cProfile and several other tests that check error
messages. I am making it available for discussion only.
Guido, what's your opinion on this? Is this a bug, and should it be fixed?
It's definitely a bug, but I think the reason it has been around so
long is that no-one has offerred a clean solution.
I was hoping for something along the lines of functions raising an
ArgumentError (a new subclass of TypeError) that could be trapped by
the __call__ slot for bound methods and then reraised with a new
argument count. The key is to find a *very* lightweight and minimal
solution; otherwise, we should just live with it for another 18 years :-
)
On Thu, Apr 3, 2008 at 9:39 PM, Raymond Hettinger
<[email protected]> wrote:
> It's definitely a bug
What would you say to the following:
def f(x):
pass
class X:
xf = f
x = X()
x.xf(1,2)
--> TypeError: f() takes exactly 1 argument (3 given)
Is this correct?
..
> I was hoping for something along the lines of functions raising an
> ArgumentError (a new subclass of TypeError) that could be trapped by
> the __call__ slot for bound methods and then reraised with a new
> argument count.
This would be my first choice for a clean solution as well. Since it
will require a change to the exception hierarchy, should we discuss a
modification to PEP 348 on python-dev? I would rather finish the
patch first and then make a concrete proposal.
> The key is to find a *very* lightweight and minimal solution;
.. according to what metric? Are you talking about the amount of code
that needs to be changed, the number of API changes or run-time
overhead? I don't think run-time overhead is an issue: argument
errors are rarely used for flow control because it is hard to predict
what a failed attempt to call a function would do.
The same subject on python-dev raised last month received positive feedbacks:
All that is needed now is a working patch... :)
Here is a similar issue which may be easier to fix:
>>> def f(a, b=None, *, c=None, d=None):
... pass
>>> f(1,2,3)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: f() takes at most 4 arguments (3 given)
Should be "f() takes at most 2 positional arguments (3 given)"
Fixed in r82220.
Maybe I'm misunderstanding what issue we're talking about here, but wasn't this supposed to be fixed?
giampaolo@ubuntu:~/svn/python-3.2$ python3.2
Python 3.2a0 (py3k:82220M, Jun 25 2010, 21:38:56)
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> class A:
... def foo(self, x):
... pass
...
>>> A().foo()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: foo() takes exactly 2 arguments (1 given)
>>>
I presume Benjamin meant he fixed the special case Alexander reported. | http://bugs.python.org/issue2516 | crawl-003 | refinedweb | 1,024 | 62.38 |
You want to create a custom control
that remembers its state between
postbacks of a form, like the server controls
provided with ASP.NET.
Create a custom control like the one described in Recipe 5.2, implement the
IPostBackDataHandler interface to add
the functionality to retrieve the
values posted to the server, and then update the values in the custom
control from the
postback data.
Use the .NET language of your choice to:
Create a class that inherits from the
Control
class in the
System.Web.UI namespace.
Implement support for HTML-style attributes by adding properties to the class.
Implement an
IPostBackDataHandler as necessary to
update the state of the control with the posted data.
Override the
Render method to have it render the
HTML output of the control using the values of the properties.
To use the custom control in an ASP.NET page:
Register the assembly containing the control.
Insert the tag for the custom control anywhere in the page and set the attributes appropriately.
Example 5-7 and Example 5-8 show the VB and C# class files for a custom control that maintains state. Example 5-9 shows how we use the custom control in an ASP.NET page.
A version of the custom control that maintains state and provides the
added ability to raise an event when the control data has changed is
shown in Example 5-10 (VB) and Example 5-11 (C#). Example 5-12
through Example 5-14 show the
.aspx and code-behind files of an application that uses ...
No credit card required | https://www.oreilly.com/library/view/aspnet-cookbook/0596003781/ch05s04.html | CC-MAIN-2019-18 | refinedweb | 263 | 64.2 |
Introduction: Make a Soil Moisture Meter With the Help of Arduino
I am an academician in Faculty of Agriculture. We have a lot of experiments in field and measure some parameters that are constraint to do. One of the compulsory measurement is determining of soil moisture. Estimation of soil moisture as using soil sampler is more accurate measurement, but need to so much labor and time. So I decided to make a soil moisture meter with the aid of arduino and soil moisture sensor. I hope it will be useful for you.
Make it yourself
Sincerely..
Step 1: What You Need For?
Inventory:
- Arduino Uno R3
- Arduino IDE
- multimeter
- Breadboard and jumper cables
- soldering equipment
- dremel with part of spiral and drill bit
- calipers, pliers, tweezers, screwdriver, cutter (knife, scissors)
- lighter
- hot glue gun
Consumables:
- Plastic box (dimensions about 13 x 7 x 5 cm)
- Attiny85 micro controller (DIP socket) and suitable 8 pin DIP socket
- 74HC595 (8-bit shift register, DIP socket)
- Single side copper plate (dimensions about 30 x 50 mm)
- 15 x male header 180 degree
- 8 x male header 90 degree
- 2 x 10K resistor (1/4 watt)
- 20 x female to female jumper cables (approx. 18 cm)
- LM393 Soil Moisture Sensor
- push button
- switch button
- 12v A23 battery and holder
- USB male and female sockets
- stripboard
Step 2: Creating a Controller of Sensor
You can see above diagram of controller that contains of Attiny85 and 74HC595. You can also directly print it to copper plate by using PCB design (see below) as well as use strip-board. There are so many instructables how to make PCB.
Be patient, do it carefully and sensitively.
Step 3: Code to Attiny85 for Measuring
Check your Arduino IDE. If there is no Attiny85 in the cards, you need to update it and introduce with Attiny85.
1- Get the Arduino ISP codes from examples due to we use Arduino Uno as programmer.
2- Adjust the programmer as AVRISP mkII
3- Upload the sketches to Arduino Uno
4- Wiring the pins as following:
Attiny85 Pin 2 --> Arduino Uno Pin 13
Attiny85 Pin 1 --> Arduino Uno Pin 12
Attiny85 Pin 0 --> Arduino Uno Pin 11
Attiny85 Pin Reset --> Arduino Uno Pin 10
Attiny85 VCC --> Arduino Uno +5V
Attiny85 GND --> Arduino Uno GND
*** Connect 10uF electrolytic capacitor between Reset and GND in Arduino Uno, otherwise it always reset Arduino Uno.
5- Select Attiny85 as your card and adjust programmer as Arduino as ISP.
6- Upload sketch as follows:
#include <liquidcrystal595.h> LiquidCrystal595 lcd(0,1,2); // datapin, latchpin, clockpin
int sensor_pin=3; int battery_pin=4; int measure_btn_pin=4; int measure_buttonState=0; int measure_lastButtonState=0; int lcd_initiating=0; int measurement_value;
float battery_level; float ks; float battery_level_percentage;
void setup() { pinMode(battery_pin, OUTPUT); pinMode(measure_btn_pin, INPUT); lcd.begin(16,2); lcd.clear(); }
void loop() { battery_level=1024-analogRead(battery_pin);
ks=battery_level/1024.0; ///////// Measure Button//////////////////////////// measure_buttonState=digitalRead(measure_btn_pin); if(measure_buttonState != measure_lastButtonState) { if(measure_buttonState ==1){ measurement_value = round(analogRead(sensor_pin)/ks); lcd.clear(); lcd.setCursor(0,0); lcd.print("%Nem:"); lcd.print(measurement_value); lcd.display(); } delay(50); } measure_lastButtonState=measure_buttonState; if(lcd_initiating==0) { lcd.setCursor(0,0); lcd.print("NemOlcer"); lcd.setCursor(0,1); lcd.print("Bat:%"); lcd.print(battery_level_percentage); lcd_initiating=1; } }
Step 4: Prepare to Plastic Box
You can arrange differently to the slot of LCD display, push button, switch button and USB socket.
1- Draw a rectangular for LCD slot and cut it by using spiral.
2- Make a hole for push button and a rectangular for switch button via dremel.
3- Lastly, perform a slot for USB female socket by dremel.
4- Be sure edges of slots are clean appropriate mounting.
Step 5: Performing Some Components
It depends on you what you use. I used small pieces of strip-board.
1- I have preferred 12V A23 battery and LM7805 (5V regulator) due to A23 battery is the smallest one. You can see pin out of LM7805 above. I didn't have A23 battery slot and made one of them for myself.
2- Prepare push button in order to use with jumper cables.
3- Solder USB female socket to strip-board. We will use it as data collector from sensor.
4- We had already have component that read the conductivity between legs then transfer it to our micro controller.
5- Connect USB male socket and sensor leg to cables.
Step 6: Mounting All of the Components to Plastic Box
Mounting components is need a lot of steps and I don't wanna do images and data crowd in my instructable. So it is easier and suitable to demonstrate it by preparing a video.
Do it yourself..
Regards..
Recommendations
We have a be nice policy.
Please be positive and constructive. | http://www.instructables.com/id/Make-a-Soil-Moisture-Meter-With-the-Help-of-Arduin/ | CC-MAIN-2018-09 | refinedweb | 781 | 54.93 |
I was trying to setup a project for unit testing using Typescript, Mocha, Chai, Sinon, and Karma and I quickly realized that there were so many moving parts that made it a bit challenging to setup the project. Here are the detailed steps for successfully creating a JavaScript unit testing project using the aforementioned technologies. TL/DR: The code is available on GitHub.
Note: Project can be run using both VS Code as well as Visual Studio 2015 using Task Runner Explorer.
The NPM Packages
Run npm install command which will restore all the dependencies included in the package.json file below. Note that we are using the new feature under Typescript 2 which is the @types namespace.
The TypeScript Configuration File
The Karma Configuration File
Start by setting up the required frameworks and the file dependencies.
Next inform Karma to use the typescript preprocessor to transpile the .ts files to .js files on the fly
The Gulp Configuration File | https://blogs.msdn.microsoft.com/wael-kdouh/2017/06/27/unit-testing-using-typescript-mocha-chai-sinon-and-karma/ | CC-MAIN-2019-18 | refinedweb | 160 | 65.32 |
-Bsymbolicoption or Sun Studio compiler's
-xldscope=symbolicoption, all symbols of a library can be made non-interposable (those symbols are called protected symbols, since no one else can interpose on them). If the targeted routine is interposable, dynamic linker simply passes the control to whatever symbol it encounters first, that matches the function call (callee). Now with the preloaded library in force, hacker gets control over the routine. At this point, it is upto the hacker whether to pass the control to the actual routine that the client is intended to call. If the intention is just to collect data and let go, the required data can be collected and the control will be passed to the actual routine with the help of
libdlroutines. Note that the control has to be passed explicitly to the actual routine; and as far as dynamic linker is concerned, it is done with its job once it passes the control to the function (interposer in this case). If the idea is to completely change the behavior of the routine (easy to write a new routine with the new behavior, but the library and the clients have to be re-built to make use of the new routine), the new implementation will be part of the interposing routine and the control will never be passed to the actual routine. Yet in worst cases, a malicious hacker can intercept data that is supposed to be confidential (eg., passwords, account numbers etc.,) and may do more harm at his wish.
fopen(). The idea is to collect the number of calls to
fopen()and to find out the files being opened. Our interceptor, simply prints a message on the console with the file name to be opened, everytime there is a call to
fopen()from the application. Then it passes the control to
fopen()routine of
libc. For this, first we need to get the signature of
fopen().
fopen()is declared in
stdio.has follows:
FILE *fopen(const char *filename, const char *mode);
% cat interceptfopen.c
#include <stdio.h>
#include <dlfcn.h>
FILE *fopen(const char *filename, const char *mode) {
FILE *fd = NULL;
static void *(*actualfunction)();
if (!actualfunction) {
actualfunction = (void *(*)()) dlsym(RTLD_NEXT, "fopen");
}
printf("\nfopen() has been called. file name = %s, mode = %s \n
Forwarding the control to fopen() of libc", filename, mode);
fd = actualfunction(filename, mode);
return(fd);
}
% cc -G -o libfopenhack.so interceptfopen.c
% ls -lh libfopenhack.so
-rwxrwxr-x 1 build engr 3.7K May 19 19:02 libfopenhack.so*
actualfunctionis a function pointer to the actual
fopen()routine, which is in
libc.
dlsymis part of
libdland the
RTLD_NEXTargument directs the dynamic linker (
ld.so.1) to find the next reference to the specified function, using the normal dynamic linker search sequence.
% cat fopenclient.c
#include <stdio.h>
int main () {
FILE * pFile;
char string[30];
pFile = fopen ("myfile.txt", "w");
if (pFile != NULL) {
fputs ("Some Random String", pFile);
fclose (pFile);
}
pFile = fopen ("myfile.txt", "r");
if (pFile != NULL) {
fgets (string , 30 , pFile);
printf("\nstring = %s", string);
fclose (pFile);
} else {
perror("fgets(): ");
}
return 0;
}
% cc -o fopenclient fopenclient.c
% ./fopenclient
string = Some Random String
% setenv LD_PRELOAD ./libfopenhack.so
% ./fopenclient
fopen() has been called. file name = myfile.txt, mode = w
Forwarding the control to fopen() of libc
fopen() has been called. file name = myfile.txt, mode = r
Forwarding the control to fopen() of libc
string = Some Random String
%unsetenv LD_PRELOAD
fopen(), instead of the actual implementation in
libc. And the advantages of this technique is evident from this simple example, and it is up to the hacker to take advantage or abuse the flexibility of symbol interposition. | http://technopark02.blogspot.com/2005/05/solaris-hijacking-function-call.html | CC-MAIN-2015-22 | refinedweb | 601 | 65.62 |
Lever's libuv integration
Although I integrated libuv into Lever weeks ago, I figured out some improvements only about a week ago. I was motivated to do a walkthrough from the implementation details. This may help you if you are studying Lever's internals in general because we go through lot of details.
Lets study a bit what happens when we issue a read call.
Currently our libuv read function is accessible through
fs.raw_read as a temporary measure.
fs.raw_read(fd, [buffer], 0)
When the evaluator runs the 'call' opcode, it does an
internal
.call(argv). The code presented below is RPython
code. It is 'source code' of the Lever runtime.
Here's the implementation of the
raw_read, located in
runtime/stdlib/fs.py:
@builtin
@signature(Integer, List, Integer)
def raw_read(fileno, arrays, offset):
L = len(arrays.contents)
bufs = lltype.malloc(rffi.CArray(uv.buf_t), L, flavor='raw', zero=True)
try:
i = 0
for obj in arrays.contents:
obj = cast(obj, Uint8Array, u"raw_read expects uint8arrays")
bufs[i].c_base = rffi.cast(rffi.CCHARP, obj.uint8data)
bufs[i].c_len = rffi.r_size_t(obj.length)
i += 1
return Integer(rffi.r_long(fs_read(
fileno.value, bufs, L, offset.value)))
finally:
lltype.free(bufs, flavor='raw')
The
@builtin and
@signature are python decorators that
have ran before the rpython translates this program. Roughly
they wrap the raw_read with the signature and builtin
object, then place it inside the fs -module.
Objects that can be called in the runtime have a
.call(argv) -method. The argv is a list of objects passed
as arguments. The signature points out they have to be
converted to be (Integer, List, Integer) before they are
passed to the implementation.
Most of the code in
raw_read is just doing conversions to
place values into their places. The
fs_read is doing the
actual work. Here is the implementation of the
fs_read:
def fs_read(fileno, bufs, nbufs, offset):
req = lltype.malloc(uv.fs_ptr.TO, flavor='raw', zero=True)
try:
response = uv_callback.fs(req)
response.wait(uv.fs_read(response.ec.uv_loop, req,
fileno, bufs, nbufs, offset,
uv_callback.fs.cb))
if req.c_result < 0:
raise uv_callback.to_error(req.c_result)
return req.c_result
finally:
uv.fs_req_cleanup(req)
lltype.free(req, flavor='raw')
Here it actually calls
uv_fs_read. The C type signature of
that function is:
int uv_fs_read(uv_loop_t* loop,
uv_fs_t* req,
uv_file file,
const uv_buf_t bufs[],
unsigned int nbufs,
int64_t offset,
uv_fs_cb cb)
Libuv requires that it gets a callback, that is called when
the function completes. You should note we pass
response.ec.uv_loop as a loop, and
uv_callback.fs.cb as
a callback.
ec stands for the execution context. It contains the
context for the interpreter. The contents are described in
the
runtime/core.py. Lever starts up directly into the
libuv event loop. In identical manner to node.js. The call
to
uv_run can be found from the
runtime/main.py.
The
uv_callback.fs.cb is created with a decorator, and the
implementation of the callback inside the decorator looks
like this:
def _callback_(handle, *data):
ec = core.get_ec()
resp = pop_handle(getattr(ec, uv_name), handle)
if len(data) > 0:
resp.data = data
resp.pending = False
if resp.greenlet is not None:
core.root_switch(ec, [resp.greenlet])
core.get_ec()call returns the current execution context. We expect to return in the same thread as where we started a request.
pop_handleretrieves a response object for the handle from the execution context. The
getattr(ec, uv_name)is equivalent to
ec.uv__fs. This is roughly equivalent to
ec.uv__fs.pop(rffi.cast_ptr_to_adr(handle))but the cast operation is not understood by the JIT, therefore it is cloaked from the JIT generator into a small wrapper function.
- If there's arguments to the callback, they are passed into
.data-field. The resp.wait() returns these fields as a tuple.
resp.pending = Falsemeans that response was processed. It is possible for libuv callbacks to invoke as soon as the request was made. In such cases we can avoid switching into the eventloop.
- If the response didn't return immediately, the
resp.waithas set a greenlet where to resume after the request has completed. In that case the call is coming from the eventloop so it needs some handling for exceptions. They are provided by the
core.root_switch.
I found it necessary that the callback handler is not doing other extra work in the callback handler than forwarding the information.
There's yet one detail unmentioned here that is important for the libuv integration. This:
response = uv_callback.fs(req)
response.wait(...)
The
uv_callback.fs is a class. It is derived from a
decorator just like the fs.cb was. Here's the
implementation:
class response:
_immutable_fields_ = ["ec", "handle"]
cb = _callback_
def __init__(self, handle):
self.ec = core.get_ec()
self.handle = handle
self.data = None
self.greenlet = None
self.pending = True
push_handle(getattr(self.ec, uv_name), handle, self)
def wait(self, status=0):
if status < 0:
pop_handle(getattr(self.ec, uv_name), self.handle)
raise to_error(status)
elif self.pending:
self.greenlet = self.ec.current
core.switch([self.ec.eventloop])
# TODO: prepare some mechanic, so that
# we can be interrupted.
return self.data
The
.wait() jumps into the eventloop if the request has
not completed when it runs.
There are potentially complex interactions between the requests and the greenlets. For example, greenlets may require locks that prevent them from being switched into when they're waiting for content from an eventloop.
Another question is whether synchronously running requests should be allowed. I will wait for the implementations of such complex concepts until I find out more in practice.
Summary
In short, when Lever calls an async libuv function. It stores the current greenlet into a request and switches into the eventloop. Eventually the eventloop calls back as response and switches back into the greenlet that made the request. | http://boxbase.org/entries/2017/jan/16/lever-libuv-integration/ | CC-MAIN-2018-34 | refinedweb | 967 | 53.07 |
A more liberal autolink extension for python Markdown
Project description
A more liberal autolink extension for python Markdown - inspired by Django’s urlize function`
Requires:
To make this extension loadable by Mardown, just drop mdx_urlize.py into your PYTHONPATH or projects root.
From:
Once installed, you can use it in a Django template like this:
{{ value|markdown:”urlize” }}
Or in Python code like this:
import markdown md = markdown.Markdown(safe_mode=True, extensions=[‘urlize’]) converted_text = md.convert(text)
Here is the start of the [Markdown extension docs]() in case you need more info.
Project details
Release history Release notifications | RSS feed
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/markdown-urlize/ | CC-MAIN-2020-24 | refinedweb | 121 | 55.54 |
Firstly, my apologies to any one who may feel like this is a very dumb question or a question that has already been answered. I'm very new to MVVM, XAML and having read everything on StackOverflow, I'm still failing to understand what I'm sure is a simple concept.
So I have a MVVM set up where an Observable Collection is put into a ComboBox. Here is the view.
public class AppSettings : INotifyPropertyChanged
{
public class Regions
{
public int code { get; set; }
public string region { get; set; }
}
private ObservableCollection<Regions> _ItemsRegions = new ObservableCollection<Regions>
{
new Regions() { code = 0, region = "Metro : All Areas" },
new Regions() { code = 25, region = "Metro : North of River"},
new Regions() { code = 26, region = "Metro : South of River"},
new Regions() { code = 27, region = "Metro : East/Hills"},
};
public ObservableCollection<Regions> Region
{
get { return this._ItemsRegions; }
}
private Regions _selectedRegion;
public Regions SelectedRegion
{
get { return _selectedRegion; }
set
{
if (value != _selectedRegion)
{
_selectedRegion = value;
RaisePropertyChanged("SelectedRegion");
}
}
}
And for reference, here is my ComboBox.
<ComboBox Name="Regions" ItemsSource="{Binding Region}" DisplayMemberPath="region" SelectedItem="{Binding SelectedRegion, Mode=TwoWay}"/>
Now the good news is that I have the SelectedItem working and I'm able to with the MVVM obtain the values I need to make changes etc. which is great! That concept was a little difficult to grasp at first but once I had it, I knew what was going on. So I can save the data in any way at this point.... The eventual plan is to put it in roaming data. This is a Universal App.
So the problem is the user is finished with this page and navigates away from it . When he/she comes back to it the ComboBox is loaded with all the data (as expected) but the SelectedItem that I just saved is not. As in, the "IsSelected = True" item is null so it just says "choose an item".
What do I need to do in order to on page load, get the SelectedItem from save and set it to the ComboBox. I've been stupid and tried setting the Region.ComboBox.SelectedItem to a "string", but of course that defeats and destroys the whole MVVM purpose!
So, what do I need to do? I'm happy to be just pointed at some documentation for me to read.
Update Edit
So following advise from a user on here I took my get; set INotifyPropertyChanged logic and put it in a blank app. Just to see that I'm not doing anything stupid.
Unfortunately the blank app also has the same problem where navigation away from the page and back to it has none of my previously selected values... To illustrate this, here is a quick screenshot.
To navigate back, I'm using the hardware back button.
I'm guessing that I'm just missing a piece of the puzzle to make this work. This whole MVVM XAML concept is new to me and while I'm learning a lot about it, I'm genuinely stuck on this!
Any takers? Tips? Ideas? Examples?
Update 2
Okay so following some more advise posted in the comments on this question, I went and added a debugger track on the Object ID for "this._selectedRegion".
The watch of the Object ID shows that the data is in fact set correctly when a user selects one of the items from the combobox. Great! But when the user navigates away from the page and comes back to it, it's set to null. Not Great!
With my bland/stupid face am I missing the point completely? Am I meant to be on "OnNavigatedTo" aka, fist load setting the SelectedItem for the combobox myself? As in the data is never saved on navigation away and coming back to it is setting is null is in fact expected in MVVM? Do I manipulate the "get" statement and specify the ObservableCollection Item it's meant to be? Crude way, but like
_selectedRegion = something
return _selectedRegion
This sounds awfully crude to me and not what it should be, but maybe I'm just not appreciating the MVVM set up properly?
Thoughts anyone?
For a
ComboBox in WPF it is very important to set
SelectedItem to one of the items in
ItemsSource.
Make sure that
SelectedItem has the correct value (or is set) when he/she comes back.
Your binding
SelectedItem="{Binding SelectedRegion, Mode=TwoWay}"
will call the getter of
SelectedRegion when the control is built. It will call the setter when the value in the ComboBox changes. | http://m.dlxedu.com/m/askdetail/3/d3b5cea3e6cff94cfb7cf30064a1edde.html | CC-MAIN-2018-22 | refinedweb | 748 | 61.77 |
Can we make the permissions for deleting changes a bit more flexible?
Currently changes can only be deleted by administrators, or by the change owner if they have the "delete own changes" permission.
This means that for example in a system where projects are separated by namespaces and we assign permissions to "owner" groups based on those namespaces, we can't allow such an "owner" to delete changes.
This would be more flexible if we also had a "Delete Changes" permission that could be assigned.
Note that here we're only talking about open changes; permission to delete merged changes should probably remain possible only for admins. | https://bugs.chromium.org/p/gerrit/issues/detail?id=9354 | CC-MAIN-2018-43 | refinedweb | 107 | 67.08 |
LOG(9) NetBSD Kernel Developer's Manual LOG(9)Powered by man-cgi (2021-06-01). Maintained for NetBSD by Kimmo Suominen. Based on man-cgi by Panagiotis Christias.
NAME
log -- log a message from the kernel through the /dev/klog device
SYNOPSIS
#include <sys/syslog.h> void log(int level, const char *format, ...);
DESCRIPTION
The log() function allows the kernel to send messages to user processes listening on /dev/klog. Usually syslogd(8) monitors /dev/klog for these messages and writes them to a log file. All messages are logged using facility LOG_KERN. See syslog(3) for a listing of log levels.
SEE ALSO
syslog(3), syslogd(8) NetBSD 9.99 May 12, 1997 NetBSD 9.99 | https://man.netbsd.org/hppa/log.9 | CC-MAIN-2021-49 | refinedweb | 118 | 53.98 |
This is the mail archive of the [email protected] mailing list for the GCC project.
On 08/18/18 18:46, Richard Sandiford wrote: > Bernd Edlinger <[email protected]> writes: >> On 08/18/18 12:40, Richard Sandiford wrote: >>> Bernd Edlinger <[email protected]> writes: >>>> Hi everybody, >>>> >>>> On 08/16/18 08:36, Bernd Edlinger wrote: >>>>> Jeff Law wrote: >>>>>> I wonder if the change to how we set up the initializers is ultimately >>>>>> changing the section those go into and ultimately causing an overflow of >>>>>> the .sdata section. >>>>> >>>>> >>>>> Yes, that is definitely the case. >>>>> Due to the -fmerge-all-constants option used >>>>> named arrays with brace initializer look like string initializers >>>>> and can go into the merge section if there are no embedded nul chars. >>>>> But the string constants can now be huge. >>>>> >>>>> See my other patch about string merging: >>>>> [PATCH] Handle not explicitly zero terminated strings in merge sections >>>>> >>>>> >>>>> >>>>> Can this section overflow? >>>>> >>>> >>>> >>>> could someone try out if this (untested) patch fixes the issue? >>>> >>>> >>>> Thanks, >>>> Bernd. >>>> >>>> >>>> 2018-08-18 Bernd Edlinger <[email protected]> >>>> >>>> * expmed.c (simple_mem_bitfield_p): Do shift right signed. >>>> * config/alpha/alpha.h (CONSTANT_ADDRESS_P): Avoid signed >>>> integer overflow. >>>> >>>> Index: gcc/config/alpha/alpha.h >>>> =================================================================== >>>> --- gcc/config/alpha/alpha.h (revision 263611) >>>> +++ gcc/config/alpha/alpha.h (working copy) >>>> @@ -678,7 +678,7 @@ enum reg_class { >>>> >>>> #define CONSTANT_ADDRESS_P(X) \ >>>> (CONST_INT_P (X) \ >>>> - && (unsigned HOST_WIDE_INT) (INTVAL (X) + 0x8000) < 0x10000) >>>> + && (UINTVAL (X) + 0x8000) < 0x10000) >>>> >>>> /* The macros REG_OK_FOR..._P assume that the arg is a REG rtx >>>> and check its validity for a certain class. >>>> Index: gcc/expmed.c >>>> =================================================================== >>>> --- gcc/expmed.c (revision 263611) >>>> +++ gcc/expmed.c (working copy) >>>> @@ -579,8 +579,12 @@ static bool >>>> simple_mem_bitfield_p (rtx op0, poly_uint64 bitsize, poly_uint64 bitnum, >>>> machine_mode mode, poly_uint64 *bytenum) >>>> { >>>> + poly_int64 ibit = bitnum; >>>> + poly_int64 ibyte; >>>> + if (!multiple_p (ibit, BITS_PER_UNIT, &ibyte)) >>>> + return false; >>>> + *bytenum = ibyte; >>>> return (MEM_P (op0) >>>> - && multiple_p (bitnum, BITS_PER_UNIT, bytenum) >>>> && known_eq (bitsize, GET_MODE_BITSIZE (mode)) >>>> && (!targetm.slow_unaligned_access (mode, MEM_ALIGN (op0)) >>>> || (multiple_p (bitnum, GET_MODE_ALIGNMENT (mode)) >>> >>> Do we have a genuinely negative bit offset here? Seems like the callers >>> would need to be updated if so, since the code is consistent in treating >>> the offset as unsigned. >>> >> >> Aehm, yes. >> >> The test case plural.i contains this: >> >> static const yytype_int8 yypgoto[] = >> { >> -10, -10, -1 >> }; >> >> static const yytype_uint8 yyr1[] = >> { >> 0, 16, 17, 18, 18, 18, 18, 18, 18, 18, >> 18, 18, 18, 18 >> }; >> >> yyn = yyr1[yyn]; >> >> yystate = yypgoto[yyn - 16] + *yyssp; >> >> >> There will probably a reason why yyn can never be 0 >> in yyn = yyr1[yyn]; but it is not really obvious. >> >> In plural.i.228t.optimized we have: >> >> pretmp_400 = yypgoto[-16]; >> _385 = (int) pretmp_400; >> goto <bb 69>; [100.00%] > > Ah, ok. > > [...] > >> (gdb) frame 26 >> #26 0x000000000082f828 in expand_expr_real_1 (exp=<optimized out>, target=<optimized out>, >> tmode=<optimized out>, modifier=EXPAND_NORMAL, alt_rtl=0x0, inner_reference_p=<optimized out>) >> at ../../gcc-trunk/gcc/expr.c:10801 >> 10801 ext_mode, ext_mode, reversep, alt_rtl); >> (gdb) list >> 10796 reversep = TYPE_REVERSE_STORAGE_ORDER (type); >> 10797 >> 10798 op0 = extract_bit_field (op0, bitsize, bitpos, unsignedp, >> 10799 (modifier == EXPAND_STACK_PARM >> 10800 ? NULL_RTX : target), >> 10801 ext_mode, ext_mode, reversep, alt_rtl); >> 10802 >> 10803 /* If the result has a record type and the mode of OP0 is an >> 10804 integral mode then, if BITSIZE is narrower than this mode >> 10805 and this is for big-endian data, we must put the field >> (gdb) p bitpos >> $1 = {<poly_int_pod<1u, long>> = {coeffs = {-128}}, <No data fields>} > > The get_inner_reference->store_field path in expand_assignment has: > > /* Make sure bitpos is not negative, it can wreak havoc later. */ > if (maybe_lt (bitpos, 0)) > { > gcc_assert (offset == NULL_TREE); > offset = size_int (bits_to_bytes_round_down (bitpos)); > bitpos = num_trailing_bits (bitpos); > } > > So maybe this is what havoc looks like. > > It's not my area, but I think we should be doing something similar for > the get_inner_reference->expand_bit_field path in expand_expr_real_1. > Haven't checked whether the offset == NULL_TREE assert would be > guaranteed there though. > Yes, I come to the same conclusion. The offset==NULL assertion reflects the logic around the "wreak havoc" comment in get_inner_reference, so that is guaranteed to hold, at least immediately after get_inner_reference returns. >> $5 = {<poly_int_pod<1u, unsigned long>> = {coeffs = {0x1ffffffffffffff0}}, <No data fields>} >> >> The byte offset is completely wrong now, due to the bitnum was >> initially a negative integer and got converted to unsigned. At the >> moment when that is converted to byte offsets it is done wrong. I >> think it is too much to change everything to signed arithmetics, but >> at least when converting a bit pos to a byte pos, it should be done in >> signed arithmetics. > > But then we'd mishandle a real 1ULL << 63 bitpos (say). Realise it's > unlikely in real code, but it'd still probably be possible to construct > a variant of the original test case in which the bias was > +0x1000000000000000 rather than -16. > Yes, I see, thanks. Additionally I think we need some assertions, when signed bit quantities are passed to store_bit_field and expand_bit_field. Thanks Bernd.
2018-08-19 Bernd Edlinger <[email protected]> * expr.c (expand_assignment): Assert that bitpos is positive. (store_field): Likewise (expand_expr_real_1): Make sure that bitpos is positive. Index: gcc/expr.c =================================================================== --- gcc/expr.c (Revision 263644) +++ gcc/expr.c (Arbeitskopie) @@ -5270,6 +5270,7 @@ expand_assignment (tree to, tree from, bool nontem MEM_VOLATILE_P (to_rtx) = 1; } + gcc_assert (known_ge (bitpos, 0)); if (optimize_bitfield_assignment_op (bitsize, bitpos, bitregion_start, bitregion_end, mode1, to_rtx, to, from, @@ -7046,6 +7047,7 @@ store_field (rtx target, poly_int64 bitsize, poly_ } /* Store the value in the bitfield. */ + gcc_assert (known_ge (bitpos, 0)); store_bit_field (target, bitsize, bitpos, bitregion_start, bitregion_end, mode, temp, reverse); @@ -10545,6 +10547,14 @@ expand_expr_real_1 (tree exp, rtx target, machine_ mode2 = CONSTANT_P (op0) ? TYPE_MODE (TREE_TYPE (tem)) : GET_MODE (op0); + /* Make sure bitpos is not negative, it can wreak havoc later. */ + if (maybe_lt (bitpos, 0)) + { + gcc_assert (offset == NULL_TREE); + offset = size_int (bits_to_bytes_round_down (bitpos)); + bitpos = num_trailing_bits (bitpos); + } + /* If we have either an offset, a BLKmode result, or a reference outside the underlying object, we must force it to memory. Such a case can occur in Ada if we have unchecked conversion @@ -10795,6 +10805,7 @@ expand_expr_real_1 (tree exp, rtx target, machine_ && GET_MODE_CLASS (ext_mode) == MODE_INT) reversep = TYPE_REVERSE_STORAGE_ORDER (type); + gcc_assert (known_ge (bitpos, 0)); op0 = extract_bit_field (op0, bitsize, bitpos, unsignedp, (modifier == EXPAND_STACK_PARM ? NULL_RTX : target), | https://gcc.gnu.org/legacy-ml/gcc-patches/2018-08/msg01097.html | CC-MAIN-2021-49 | refinedweb | 1,001 | 52.9 |
With the final release of Python 2.5 we thought it was about time Builder AU gave our readers an overview of the popular programming language. Builder AU's Nick Gibson has stepped up to the plate to write this introductory article for beginners.
With the final release of Python 2.5 we thought it was about time Builder AU gave our readers an overview of the popular programming language. Builder AU's Nick Gibson has stepped up to the plate to write this introductory article for beginners.
Python is a high level, dynamic, object-oriented programming language similiar in some ways to both Java and Perl, which runs on a variety of platforms including Windows, Linux, Unix and mobile devices. Created over 15 years ago as an application specific scripting language, Python is now a serious choice for scripting and dynamic content frameworks. In fact it is being used by some of the world's dynamic programming shops including NASA and Google, among others. Python is also the language behind Web development frameworks Zope and Django. With a healthy growth rate now could be the perfect time to add Python to your tool belt. This quickstart guide will give you an overview of the basics of Python, from variables and control flow statements to exceptions and file input and output. In subsequent articles I'll build upon this foundation and offer more complex and specific code and advice for using Python in real world development.
Why learn Python?
- It's versatile: Python can be used as a scripting language, the "glue" that sticks other components together in a software system, much in the same way as Perl. It can be used as an applications development language, just like Java or C#. It can be used as a Web development language, similiarly to how you'd use PHP. Whatever you need to do, chances are you can use Python.
- It's free: Python is fully open source, meaning that its free to download and completely free to use, throw in a range of free tools and IDE's and you can get started in Python development on a shoestring.
- It's stable: Python has been around for more than fifteen years now, which makes it older than Java, and despite regular development it has only just has reached version 2.5. Any code you write now will work with future versions of Python for a long time.
- It plays well with others: It's easy to integrate Python code with C or Java code, through SWIG for C and Jython for Java, which allows Python code to call C or Java functions and vice versa. This means that you can incorporate Python in current projects, or embed C into your Python projects whenever you need a little extra speed.
- It's easy to learn and use: Python's syntax closely resembles pseudo code, meaning that writing Python code is straightforward. This also makes Python a great choice for rapid application development and prototyping, due to the decrease in development time.
- It's easy to read: It's a simple concept, a language that is easy to write should also be easy to read, which makes it easier for Python developers to work together.
Okay, enough already, I'm sold. Where do I get it?
Easy, in fact if you're running Mac or Unix, chances you've already got it, just pull up a terminal and type "python" to load up the interpreter. If you don't have it, or you're looking to upgrade to the latest version head on over to the download page.
Alternatively you could install ActivePython, a binary python distribution that smooths out many of the hassles. There is a graphical installer under most platforms, all that you need to do is click through the dialogues, setting the install path and components. In Windows, you can then start up the interpreter by browsing to its entry in the start menu, or on any system simply by typing 'python' in a terminal. While ActivePython is generally easier to set up, it tends to lag behind official Python releases, and at the time of writing is only available for Python 2.4.
Interactive Mode
Now it's time to load up the interpreter in interactive mode, this gives you a prompt, similiar to a command line where you can run Python expressions. This lets you run simple expressions without having to write a Python program every time, let's try this out:
Python 2.5 (r25:51908, Oct 6 2006, 15:24:43) [GCC 4.1.2 20060928 (prerelease) (Ubuntu 4.1.1-13ubuntu4)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> print "Hello World" Hello World >>>
The first two lines are the Python environment information, and are specific to my install, so your mileage may vary. Interactive mode is useful for more than just friendly greetings however, it also makes a handy calculator in a pinch, and being part of a programming language allows you to use intermediate variables for more complicated calculations.
>>> 2+2 4 >>> 2.223213 * 653.9232 1453.8105592415998 >>> x,y = 5,20 >>> x + y 25 >>> tax = 52000 * (8.5/100) >>> print tax 4420.0 >>> "hello" + "world" 'helloworld' >>> "ring " * 7 'ring ring ring ring ring ring ring '
Another Python type that is useful to know is the list; a sequence of other types in order. Lists can be added and multiplied like strings, and can also be indexed and cut into sublists, called slices:
>>> x = [1,2,3,4,5,6,7,8,9,10] >>> x [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] >>> x + [11] [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11] >>> x + [12] * 2 [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 12] >>> x[0], x[1], x[9] (1, 2, 10) >>> x[1:3], x[4:], x[2:-2] ([2, 3], [5, 6, 7, 8, 9, 10], [3, 4, 5, 6, 7, 8]) >>> x[:] [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
The interactive mode is good for quick and dirty calculations, but once you start writing anything longer than a couple lines, or you would like to save your programs to use again later, it's time to let it go and start writing python programs. Thankfully this is easy, just type your commands into a .py file and run it with the same command.
Control Flow
Just like your favourite programming language, Python has all the usual ways of controlling the execution of programs through if, while and for statements. In Python, an if statement looks like this:
if a > b: print a else: print b
You'll see that unlike languages such as C or Java, Python does not use braces to group statements together. Instead statements are grouped together by how far they are indented from the margin, the body of a statement extends for as long as there are lines below it with the same or greater indentation.
x = 3 y = 4 if x > y: x = x + 1 y = y + 1
For example, in the above code x = 3 and y = 4 at the end of the program.
x = 3 y = 4 if x > y: x = x + 1 y = y + 1
In the second example, however, y finishes equal to 5.
Python also has loops, both of the while and for variety. While loops are straightforward:
while a > b: a = a + 1
For loops work a little differently from what you might be used to from programming in other languages, rather than increasing a counter they iterate over sucessive items in a sequence, similiar to a Perl or PHP foreach. If you do need to have that counter its no problem, you can use the built-in range function to generate a list of numbers like so:
for i in range(10): print i
If you need it, you can have finer control over your loops in Python by using either break or continue statements. A continue statement jumps execution to the top of the loop, whilst a break statement finishes the loop prematurely.
for x in range(10): if x % 2 == 0: continue if x > 6: break; print x
This code will produce the following output:
1 3 5
A real program: cat
Now we almost have the tools at our disposal to write a complete, albeit small, program, such as the common unix filter cat (or type in Windows). cat takes the names of text files as arguments and prints their contents, or if no filenames are given, repeats user input back into the terminal. Before we can write this program however, we need to introduce a few new things:
Opening and reading files.
In Python files are objects like any other type, and have methods for reading and writing. Say for example we have a file called lines.txt which contains the following:
line1 line2 line3
There are two main ways we can view the contents of this file, by character or by line. The following code demonstrates the difference:
>>> lines1 = file("lines.txt") >>> lines1.read() 'line1\nline2\nline3\n' >>> lines2 = file("lines.txt") >>> lines2.readlines() ['line1\n', 'line2\n', 'line3\n']
The read() method of a file object reads the file into a string, whilst the readlines() method returns a list of strings, one per line. It is possible to have finer control over file objects, for example reading less than the entire file into memory at once, for more information see the python library documentation ().
Importing modules and querying command line arguments.
Python comes with a wide variety of library modules containing functions for commonly used tasks, such as string and text processing, support for common internet protocols, operating system operations and file compression - a complete list of standard library modules can be found here. In order to use these functions you must import their module into your programs namespace, like so:
>>> import math >>> math.sin(math.radians(60)) 0.8660254037844386
In this example you can see that we refer to the imported functions by their module name (ie. math.sin). If you are planning to use a module a lot and don't want to go to the trouble of typing it's module name every time you use it then you can import the module like this:
>>> from math import * >>> sin(radians(60)) 0.8660254037844386
Access to command line arguments is in another module: sys, which provides access to the python interpreter. Command line arguments are kept in a list in this module called argv, the following example demonstrates a program that simply prints out all the command line arguments one per line.
import sys for argument in sys.argv: print argument
This produces the following output when run with multiple arguments, the first argument is always the name of the script.
% python args.py many command line arguments args.py many command line arguments
Input from the console
It seems maybe a little archaic in the era of GUIs and Web delivered content, but console input is a necessity for any program that is intended to be used with pipes. Python has a number of methods for dealing with input depending on the amount of control you need over the standard input buffer, however for most purposes a simple call to
raw_input() will do.
raw_input() works just like a readline in C or Java, capturing each character until a newline and returning a string containing them, like so:
name = raw_input() print "Hello ", name
Error handling with exceptions
When you're a programmer, runtime errors are a fact of life; even the best and most robust code can be susceptible to user error, hardware failures or simply conditions you haven't thought of. For this reason, Python, like most modern languages, provides runtime error handling via exceptions. Exceptions have the advantage that they can be raised at one level of the program and then caught further up the stack, meaning that errors that may be irretrievable at a deep level but can be dealt with elsewhere need not crash the program. Using exceptions in Python is simple:
try: filename = raw_input() filehandle = file(filename) print len(filehandle.readlines()) except EOFErrror: print "No filename specified" except IOError: print filename, ": cannot be opened"
In this example, a block of code is placed inside a try statement, indicating that exceptions may be raised during the execution of the block. Then two types of exceptions are caught, the first EOFError, is raised when the
raw_input() call reaches an end of line character before a newline, the second, IOError is raised when there is a problem opening the file. In either case if an exception is raised then the line that prints the number of lines in the file will not be reached. If any other exceptions are raised other than the two named, then they will be passed up to a higher level of the stack. Exceptions are such a clean way to deal with runtime errors that soon enough you'll be wanting to raise them in your own functions, and you'll be pleased to know that this is easy enough in Python.
try: raise ValueError, "Invalid type" except ValueError: print "Exception Caught"
The Final Product
Now that you've got all the tools at your disposal to write our filter. Let's take a look at the program in full first, before reading on, put your new knowledge to use by working out what each line does:
import sys if len(sys.argv)
The program is simple, if there are no command line arguments (except for the script name, of course), then the program starts reading lines from the console until an interrupt is given. If there are command line arguments, then they are opened and printed in order.
So there you have it, Python in a nutshell. For my next article I'll be walking you through the development of a Python program for finding all of the images used on a web page. If you'd like to suggest a topic for me to cover on Python drop me a line at [email protected]. | https://www.techrepublic.com/article/a-quick-start-to-python/ | CC-MAIN-2022-05 | refinedweb | 2,373 | 64.24 |
An input_event is _any_ kind of input from the user, including keys, screen resizes, etc. If NoKey is the type, then the event should be ignored. More...
#include <cursorwindow.h>
An input_event is _any_ kind of input from the user, including keys, screen resizes, etc. If NoKey is the type, then the event should be ignored.
Keys are categorized in to one of the type_constants described below. Further categorization of keys can be done using the CursorWindow::func -- which is a map of key values into logical rather than physical function keys. For example, there is a physical F1 function key, which is mapped to CursorWindow::key_f1. However, what does F1 mean? By default, it has no mapping to any edit function that would be used by a CursorWindow::dialog object. You could map key_f1 to something in your program using a switch statement -- or you could add a mapping between f1 and some standard edit function -- anything in the edit_func_names list or anything you add to that list.
Definition at line 273 of file cursorwindow.h.
Definition at line 314 of file cursorwindow.h.
Symbolic names for event types.
Definition at line 293 of file cursorwindow.h.
construct an event
Definition at line 321 of file cursorwindow.h.
default constructor
Definition at line 329 of file cursorwindow.h.
valid only on mouse events
Definition at line 312 of file cursorwindow.h.
valid only on mouse vents
Definition at line 311 of file cursorwindow.h.
data key, function key, resize
Definition at line 305 of file cursorwindow.h.
key value -- values are unique across function and data keys. value is the cursesinterface.h 'mouse_infostate_' and reflects a bit mask of the left,middle,right buttons are down flags... plus '@' -- so as to give a valid ascii char that is readable
Definition at line 306 of file cursorwindow.h. | http://www.bordoon.com/tools/structcxxtls_1_1CursorWindow_1_1input__event.html | CC-MAIN-2019-18 | refinedweb | 308 | 67.04 |
IRC log of xmlsec on 2007-09-25
Timestamps are in UTC.
15:57:53 [RRSAgent]
RRSAgent has joined #xmlsec
15:57:53 [RRSAgent]
logging to
15:57:59 [tlr]
Meeting: XML Security Workshop
15:58:07 [ht]
Scribe: Henry S. Thompson
15:58:11 [ht]
ScribeNick: ht
15:58:20 [tlr]
bridge is setup
15:58:36 [Zakim]
-Ed_Simon
15:59:08 [Zakim]
+Ed_Simon
15:59:17 [ht]
Agenda:
15:59:59 [smullan]
smullan has joined #xmlsec
16:00:10 [esimon2]
esimon2 has joined #xmlsec
16:00:36 [esimon2]
Hi, I've phoned in and can hear voices; not sure if you can hear me.
16:00:43 [tlr]
we haven't heard you so far
16:00:51 [esimon2]
I'm talking now.
16:00:56 [tlr]
we don't hear you
16:01:22 [esimon2]
Don't worry too much about not hearing me, I'll type in anything I need to say.
16:01:28 [esimon2]
I can hear you.
16:02:20 [esimon2]
I can hear you.
16:02:23 [ht]
We can't hear you :-(
16:02:37 [esimon2]
Don't worry too much about not hearing me, I'll type in anything I need to say.
16:02:54 [PHB]
Wait a mo, Chris will turn on the voice from God feature
16:02:56 [PHB]
Speak now
16:03:05 [ht]
you can stop now
16:03:07 [esimon2]
I'm countin
16:04:12 [esimon2]
Was there a web link to see slides; I think Phill had mentioned something about a Sharepoint server. Again, if it is not set up, that is fine. I'm OK with just listening in.
16:04:54 [esimon2]
Yes, I can hear Frederick.
16:05:12 [Zakim]
-Ed_Simon
16:06:42 [hiroki]
hiroki has joined #xmlsec
16:06:47 [Zakim]
+Ed_Simon
16:07:40 [tlr]
what?
16:08:06 [ht]
FH: The purpose is not to do the work here, but to decide if and how to take work forward -- how much interest is there in participating in a follow-on to the XMLSec Maint WG -- what would the charter look like, what issues we would addresss
16:08:20 [ht]
... We will use IRC to log and to provide background info
16:08:36 [sdw]
sdw has joined #xmlsec
16:08:48 [ht]
Chair: Frederick Hirsch
16:09:17 [ht]
s/The purpose/The purpose of this workshop/
16:10:43 [MikeMc]
MikeMc has joined #xmlsec
16:10:48 [ht]
ht has changed the topic to: Agenda:
16:11:26 [ht]
FH: Please consider joining the XMLSecMaint WG
16:11:39 [ht]
... Weekly call, interop wkshp on Thursday
16:11:54 [ht]
... Thanks to the members of the WG who reviewed papers for this workshop
16:12:48 [ht]
ThomasRoessler: Existing WG has a limited charter, maintenance work only
16:12:58 [esimon2]
I can hear very well, thanks.
16:13:11 [ht]
... ALso chartered to propose a charter for a followon WG
16:13:37 [ht]
... We won't draft a charter at this workshop, but we hope to produce a report which indicates support and directions
16:13:44 [gpilz]
gpilz has joined #xmlsec
16:13:49 [ht]
... That in turn will turn into a charter, if the outcome is positive
16:14:12 [ht]
... Which then goes to the Advisory Committee for a decision
16:14:23 [ht]
... The timescale is next year and beyond
16:15:17 [ht]
s/what?/Topic: Introduction/
16:15:42 [ht]
TR: [walks throught the agenda]
16:17:04 [ht]
Attendance: approx. 25 people in the room
16:19:47 [ht]
TR: Slides which are on the web, drop URI here; otherwise send in email to [email protected] and [email protected]
16:22:25 [FrederickHirsch]
FrederickHirsch has joined #xmlsec
16:25:05 [FrederickHirsch]
zakim, who is here?
16:25:06 [Zakim]
On the phone I see [Workshop], Ed_Simon
16:25:06 [Zakim]
On IRC I see FrederickHirsch, gpilz, MikeMc, sdw, hiroki, esimon2, RRSAgent, ht, rdmiller, tlr, trackbot-ng, Zakim
16:25:25 [esimon2]
I spoke, did you hear me?
16:25:38 [tlr]
ed, you were 100% clear
16:27:02 [cgi-irc]
cgi-irc has joined #xmlsec
16:27:12 [FrederickHirsch]
16:27:45 [ht_]
ht_ has joined #xmlsec
16:27:52 [ht_]
[Talks will not be scribed, scribing will resume for discussion]
16:28:14 [tlr]
16:29:17 [cgi-irc]
zakim, cgi-irc is brich
16:29:17 [Zakim]
sorry, cgi-irc, I do not recognize a party named 'cgi-irc'
16:29:29 [tlr]
bruce, don't worry. ;)
16:30:00 [klanz2]
klanz2 has joined #xmlsec
16:32:33 [klanz2]
test ...
16:33:35 [FrederickHirsch]
can use xslt and xpath2.0 to create signature malware
16:38:43 [esimon2]
Audio has ended for me.
16:38:58 [MikeMc]
we can tellsomething changed - phil is looking into it
16:39:08 [esimon2]
audio is back!
16:39:16 [MikeMc]
switched mics
16:39:47 [FrederickHirsch]
ed do you have audio now?
16:41:42 [ht_]
[Following is approximate, apologies for mistakes]
16:41:45 [ht_]
Attendance: Frederick Hirsch, Konrad Lanz, Juan Carlos Cruelas, Bruce Rich, Mike McIntosh, Hugo Krawczyk, Gilbert Pilz, Hiroki Ito, Brad Hill, Rob Miller, Jeannine Schmidt, Henry Thompson, Sean Mullan, Gary Chung, Michael Leventhal, Steven Williams, Pratik Data, ? Chen, Phillip Hallam-Baker, Thomas Roessler, Ed Simon (via phone)
16:45:52 [ht_]
TR: Questions of clarification
16:46:12 [ht_]
FH: Restricting to only a few transformations?
16:46:32 [ht_]
BH: Yes, restrict to just small well-known set
16:47:10 [ht_]
SC: Mostly to implementations, rather than specs
16:47:18 [smullan]
smullan has joined #xmlsec
16:47:29 [ht_]
... Need to reduce the attack surface of implementations
16:47:39 [ht_]
... so we need an implementors guide, right?
16:48:12 [ht_]
BH: Right
16:48:39 [ht_]
KL: XSLT should default to _dis_able features, not _en_able
16:49:26 [ht_]
s/Sean Mullan/Scott Cantor, Sean Mullan/
16:50:01 [ht_]
s/Pratik Data/Pratik Datta/
16:50:46 [tlr]
16:55:04 [tlr]
zakim, code?
16:55:04 [Zakim]
the conference code is 965732 (tel:+1.617.761.6200 tel:+33.4.89.06.34.99 tel:+44.117.370.6152), tlr
16:59:02 [esimon2]
Ed's comment: If the structure of a document is important to the meaning of the document (as shown in the examples), then signing by ID (which is movable) is insufficient.
16:59:22 [ht_]
BH: How would you compare doing a hashed retrieval compared to ???
17:00:02 [esimon2]
Presentation highlights the need to rethink the XPointer functionality.
17:00:05 [ht_]
[scribe didn't get the question]
17:00:25 [ht_]
MM: Apps certainly need to interact better with signature processing
17:01:07 [ht_]
... Need for overlapping signatures implies a need for a signature object model, so you can iterate over all the signatures and treat them independently
17:02:18 [esimon2]
Ed says: I don't know that apps need to interact with signature processing better; rather, apps need to ensure the signatures they use sign all the critical information -- content as well as structure.
17:02:32 [ht_]
TR: Open up discussion of security vulnerabilities, other than crypto
17:03:17 [ht_]
MM: It's a pain that I have to encrypt the signature block
17:04:42 [tlr]
MM: DigestValue should be optional
17:04:55 [ht_]
ScribeNick: tlr
17:05:07 [klanz2]
q+
17:05:52 [tlr]
ack klanz
17:06:34 [tlr]
MM: presence of DigestValue means that plaintext guessing attack is possible if plaintext encrypted
17:06:51 [tlr]
... therefore, would have to encrypt the signature as well ...
17:06:55 [tlr]
FH: why is tha tpainful?
17:06:58 [tlr]
MM: tried xml ecn?
17:07:01 [tlr]
s/ecn/enc/
17:07:37 [FrederickHirsch]
Konrad: having digest necessary for manifest procesing
17:07:51 [FrederickHirsch]
Scott: should be optional to have digests
17:08:18 [FrederickHirsch]
Konrad: also verification on constant parts that are archived separately etc
17:08:29 [tlr]
klanz: Know of manifest use in electronic billing context
17:10:08 [ht]
s/Datta/Datta, Corinna Witt, Jeff Hodges, Jimmy Zhang/
17:11:59 [klanz2]
17:12:17 [klanz2]
Signing XML Documents and the Concept of “What You See Is What You Sign”
17:12:47 [tlr]
scott: need profiles *and* implementation guidelines
17:13:25 [klanz2]
q+
17:14:45 [FrederickHirsch]
frederick: asks about clarifying what implementation guide is versus profiling
17:15:12 [FrederickHirsch]
Scott: need to have hooks in code that enable best practices to be followed ,implementation guide
17:15:55 [FrederickHirsch]
... for example, saying signature is valid isn't enough if you are not sure what has been signed, hooks may be needed for this
17:16:59 [klanz2]
ack klanz2
17:17:05 [klanz2]
q-
17:18:49 [esimon2]
Ed: I am OK with just listening in and typing comments on IRC. No need to complicate things for others on my account.
17:19:46 [klanz2]
q+
17:19:52 [FrederickHirsch]
Symon: policy can be used to limit what is done with xml security, anohter approach to avoid problems
17:20:29 [FrederickHirsch]
ack klanz
17:20:59 [FrederickHirsch]
discussion as to whether xmlsig spec is broken
17:21:14 [esimon2]
The XSLT 2.0 specification mentions a number of security considerations dealing with issues raised earlier.
17:21:31 [esimon2]
Agree with Konrad.
17:22:24 [tlr]
Hal Lockhart presenting; slides later
17:23:48 [esimon2]
yes
17:24:24 [tlr]
q?
17:25:01 [esimon2]
Thanks very much.
17:27:12 [jcc]
jcc has joined #xmlsec
17:37:55 [FrederickHirsch]
hal: notes various issues have been document in ws-security, ws-i basic security profile and other places
17:38:08 [FrederickHirsch]
frederick: also liberty alliance work
17:41:53 [klanz2]
q+
17:42:09 [FrederickHirsch]
ack klanz2
17:42:36 [FrederickHirsch]
klanz2: false negatives will be perceived very badly
17:43:21 [FrederickHirsch]
... need to focus on what you see is what you sign, then false negatives main issue
17:43:32 [FrederickHirsch]
hal: agrees
17:44:34 [FrederickHirsch]
hal: challenge is interface between applicatoin and security processing to get proper security for applciation
17:45:23 [FrederickHirsch]
henry thomson: liaison issues - schema, processing model wg,
17:45:48 [FrederickHirsch]
... say to validate you must decrypt, perhaps
17:45:57 [klanz2]
q+
17:47:08 [esimon2]
I agree, I think, with Henry re his comments about XPointer to help resolve the ID issue.
17:47:15 [FrederickHirsch]
... re id issue , maybe new xpointer version?
17:47:50 [ht]
s/xpointer version/xpointer scheme/
17:48:12 [FrederickHirsch]
scott: +1 to klanz, concern about false positivies, issues for adoption
17:48:26 [FrederickHirsch]
ack klanz2
17:48:30 [FrederickHirsch]
ack klanz
17:48:40 [FrederickHirsch]
ack klanz
17:49:28 [FrederickHirsch]
scott: most xml processing is not schema aware, xsi:type is not visible to processing
17:50:24 [FrederickHirsch]
ht: would issue be solved if sig re-worked to be signing of Infosets
17:52:03 [FrederickHirsch]
klanz: tradeoff performance & infoset signing
17:53:47 [PHB]
PHB has joined #xmlsec
17:54:19 [PHB]
q+
18:02:22 [FrederickHirsch]
ack PHB
18:02:59 [FrederickHirsch]
PHB: could use some examples of difference between infoset and current signing approach. What is really different.
18:05:43 [Symon]
Symon has joined #xmlsec
18:06:08 [tlr]
q-
18:30:58 [tlr]
ScribeNick: tlr
18:31:03 [tlr]
Topic: cryptographic aspects
18:31:27 [tlr]
18:31:35 [tlr]
Hugo Krawczyk presenting
18:31:46 [MikeMc]
the slides for this session are at
18:32:47 [tlr]
hugo: post-wang trauma, how do we deal with it...
18:33:01 [MikeMc]
actually - the papers are there - not the slides - sorry
18:33:22 [tlr]
slides are at the link I gave above
18:33:42 [sean]
sean has joined #xmlsec
18:35:55 [sean]
sean has joined #xmlsec
18:36:06 [FH]
FH has joined #xmlsec
18:36:36 [tlr]
zakim, who is on the phone?
18:36:36 [Zakim]
On the phone I see [Workshop], Ed_Simon
18:36:58 [FH]
zakim, who is here?
18:36:58 [Zakim]
On the phone I see [Workshop], Ed_Simon
18:36:59 [Zakim]
On IRC I see FH, sean, Symon, PHB, jcc, smullan, ht, cgi-irc, gpilz, MikeMc, sdw, hiroki, esimon2, RRSAgent, rdmiller, tlr, trackbot-ng, Zakim
18:48:13 [FrederickH]
FrederickH has joined #xmlsec
18:48:21 [klanz2]
klanz2 has joined #xmlsec
18:49:14 [tlr]
hal: any attacks for which we need to check whether random strings are diferent?
18:49:25 [tlr]
hugo: critical for the signer to check that these strings are different
18:49:35 [tlr]
hal: if random value same for every signature, then can do offline attacks
18:49:51 [tlr]
mike: every time you create new signature, you create new value
18:49:58 [tlr]
hal: how important is it to the verifier that this is the case?
18:50:05 [tlr]
... suppose there's no real signer, just a blackhat sending messages ...
18:50:15 [tlr]
... do you have to keep track of fact that he sends same random number? ...
18:50:53 [tlr]
hugo: if you don't find 2nd preimage on one-way function, then attacker can't
18:51:03 [tlr]
hal: thinking about guessing attack or so
18:51:17 [tlr]
... there are attacks against CBC if IV isn't always different ...
18:51:30 [tlr]
hugo: uniqueness of randomness per signature is not requirement
18:51:41 [tlr]
... requirement is that the attacker must not know randomness that legitimate signer is going to use ...
18:52:09 [tlr]
... question is a valid concern, though ...
18:52:14 [tlr]
... in this case, there's no more to it ..
18:52:36 [tlr]
phb: fuzzy about what security advantage is ...
18:52:55 [tlr]
... we're nervous about hash functions for which malicious signer can create signature collisions ...
18:53:01 [tlr]
... that's attack we're concerned with ...
18:53:26 [tlr]
... randomness proposal makes this the same difficulty as the legitimate signer signing document, and attacker tries to do duplicate ...
18:53:44 [tlr]
... how does this make anything more secure against malicious signers? ...
18:54:04 [tlr]
hugo: technique does not prevent legitimate signer from finding two messages that have same hash value ...
18:54:17 [tlr]
... legitimate (not honest) signer could in principle find two messages that map to same hash value ...
18:54:26 [tlr]
... can't be case if hash function is collision resistant ..
18:54:33 [tlr]
... if it isn't, problem could in principle occur ...
18:54:48 [tlr]
... if you receive message with signature, then signer is committed to that signature ...
18:55:01 [tlr]
... (example) ...
18:56:12 [tlr]
... point is: every message that has legitimate signature commits signer ...
18:56:41 [tlr]
... note that hash function might be collision-resistant, but signature algo might not be ...
18:57:15 [tlr]
hal: attack is to get somebody to sign a document, and have that signature make something else
18:57:19 [tlr]
phb: ok, now i get it
18:57:28 [tlr]
... more relevant to XML than certificate world ...
18:58:02 [tlr]
"not *any* randomness" backup slide
18:58:40 [tlr]
phb: what I can see as attractive here is -- once SHA3 discussions -- ....
18:59:01 [tlr]
... instead of having standard compressor, have compressor, MAC, randomized digest all at once ...
18:59:04 [tlr]
... with parameters ...
18:59:49 [tlr]
frederick: time!
19:00:13 [tlr]
hugo: re nist doc, it applies to any hash function
19:00:32 [tlr]
... exactly like CBC and block cyphers ...
19:01:12 [FrederickH]
q?
19:01:48 [tlr]
mcIntosh on implementing it
19:02:04 [tlr]
... implemented preprocessing as Transform
19:02:29 [tlr]
(occurs after c14n on slide)
19:07:50 [tlr]
19:08:54 [tlr]
hugo: rsa-pss doesn't solve same problem as previous randomization scheme ...
19:08:57 [tlr]
... orthogonal problem ...
19:09:01 [tlr]
konrad: ack
19:14:16 [FrederickH]
second hash function in diagram for RSA-PSS
19:21:35 [FrederickH]
tlr: asks about unique urls for two different randomizations, yet could they be combined?
19:22:03 [FrederickH]
e.g. RSA-PSS vs Randomized hashing as described by Hugo
19:22:07 [tlr]
tlr: these are two different randomization schemes, they're orthogonal to each other, yet both affect the same URI space to be addressed
19:22:15 [tlr]
... so the proposed integrations can't be integrated ...
19:22:25 [tlr]
konrad: ??
19:22:30 [tlr]
hugo: streaming issue (?)
19:22:56 [tlr]
s/??/maybe can share randomness between two approaches/
19:23:20 [tlr]
s/streaming issue (?)/want randomness in different places from ops perspective; streaming issue/
19:27:20 [FrederickH]
sean: why did tls not adopt RSA-PSS
19:27:58 [FrederickH]
hugo: inertia, people also are staying with SHA-! versus SHA-256
19:29:27 [FrederickH]
phill: tls different in terms requirements it is meeting. Documents different than handshake reuqiremnets
19:30:10 [tlr]
konrad: moving defaults...
19:30:12 [tlr]
... time for that
19:31:15 [tlr]
zakim, who is on the phone?
19:31:16 [Zakim]
On the phone I see [Workshop], Ed_Simon
19:31:40 [tlr]
19:31:47 [tlr]
jeanine schmidt presenting.
19:32:41 [tlr]
jeanine: Crypto Suite B algorithms ...
19:32:45 [tlr]
... regrets from Sandi ...
19:34:57 [FrederickH]
use of 1024 through 2010 by NIST, indicates potential key size growth issue
19:35:46 [FrederickH]
ecc offers benefits for key size and processing
19:36:33 [tlr]
looking for convergence of standards in suite B
19:37:02 [FrederickH]
NSA would like to see Suite B incorporated in XML Security
19:37:28 [FrederickH]
DoD requirements aligned with this
19:38:27 [tlr]
details could be worked out in collaboration
19:39:10 [tlr]
hugo: specifically saying key agreement is ECDH?
19:39:15 [tlr]
jeanine: yes, preliminarily
19:39:22 [tlr]
hugo: IP issue behind not talking about ECMQV?
19:39:31 [tlr]
jeanine: yeah, that's an issue ...
19:39:36 [tlr]
... but ECDH might be more appropriate algorithm for XML ...
19:39:44 [tlr]
... whether one or both is a question for future work ...
19:39:50 [tlr]
hugo: Can you make this analysis available?
19:40:08 [tlr]
jeanine: this is something that should be worked out betw w3c and nsa
19:40:12 [tlr]
... preliminary recommendation ...
19:40:35 [tlr]
tlr: w3c would need to mean "community as a whole"
19:40:50 [tlr]
frederick: I hear "nsa could participate in WG"?
19:40:52 [tlr]
jeanine: yes
19:41:03 [tlr]
phb: ECC included with recent versions of Windows ...
19:41:10 [tlr]
... doesn't believe they've licensed that from Certicom ...
19:41:16 [tlr]
... given MS's caution in areas to do with IP ...
19:41:23 [tlr]
... maybe ask them how they navigate this particular minefield ...
19:41:36 [tlr]
... if there is a least encumbered version ...
19:41:45 [tlr]
... then will follow the unencumbered path ...
19:41:56 [tlr]
hal: what is involved here in terms of spec?
19:42:02 [tlr]
jeanine: primarily identifiers
19:42:48 [tlr]
frederick: some unifying effort for identifiers might be needed
19:43:04 [tlr]
konrad: spirit of specs is to reuse identifiers
19:43:18 [tlr]
frederick: also recommended vs required
19:43:43 [FrederickH]
rfc 4050 has identifiers
19:43:46 [tlr]
sean: in RFC 4054, there's already identifiers for ECDSA with SHA-1
19:44:13 [tlr]
phb: keyprov would like to track down as many of algo ids as possible
19:44:26 [tlr]
... if you have uncovered any (OIDs, URIs), please send a link
19:44:34 [tlr]
hal: start with gutmann's list
19:44:42 [tlr]
frederick: please share with xmlsec WG
19:44:50 [sean]
has URIs for ECDSA-SHA1
19:45:01 [tlr]
frederick: what is next step for NSA at this point -- see what happens here?
19:45:05 [tlr]
jeanine: yes
19:45:38 [tlr]
lunch break; reconvene at 1:30
19:46:12 [esimon2]
yes
19:46:28 [tlr]
reconvene: 1:45
20:06:39 [Zakim]
-Ed_Simon
20:18:17 [smullan]
smullan has joined #xmlsec
20:45:12 [Zakim]
+Ed_Simon
20:46:42 [esimon2]
I am on the call.
20:47:54 [sdw]
scribe: sdw
20:48:13 [esimon2]
are slides available online?
20:49:14 [sdw]
Must Implement
20:49:30 [sdw]
A-la-cart - Combinatorial explosion
20:50:08 [sdw]
Hidden Constraints: Often not any to any for implementations
20:51:33 [tlr]
20:51:57 [esimon2]
Thanks
20:51:58 [sdw]
Result: many variations to test, many configurations for analysis, deviation from specification
20:52:28 [sdw]
Proposal: Quantum Profiles
20:53:02 [sdw]
Unique URI for profile that fully specifies choices at each level.
20:55:46 [sdw]
Discrete options combinations, modes are more complicated.
20:59:09 [sdw]
Negotiation of specific combinations.
20:59:29 [sdw]
URIs that are intentionally opaque, not sub-parsed.
21:02:11 [FrederickH]
sdw: a possible analogy is font strings in X11
21:02:57 [FrederickH]
konrad: would be useful to have uri's that indicate strength (eg weakest key length)
21:02:59 [sdw]
Partial ordering of profiles may make sense, but might not be good.
21:03:58 [smullan]
smullan has joined #xmlsec
21:04:15 [sdw]
Meeting certain requirements, such as for a country, may be more of a private profile, possibly including country name, for instance.
21:04:42 [sdw]
Hugo: Are you making similar proposals in other groups such as IETF?
21:05:43 [sd.
21:06:21 [sdw]
Where picking profiles, CFIG and other groups would likely participate to define OIDs, etc. for certain coherent suites.
21:06:53 [sdw]
How is that approach applicable to signature? We are already in A-la-carte situation.
21:10:23 [sdw]
Presentation: The Importance of incorporating XAdES extensions into ongoing XML-Sig work
21:13:24 [FrederickHirsch]
FrederickHirsch has joined #xmlsec
21:13:48 [tlr]
21:13:48 [esimon2]
slides?
21:13:57 [esimon2]
thanks
21:14:15 [ht]
Agenda now has more slide pointers. . .
21:14:47 [FH]
FH has joined #xmlsec
21:14:49 [gpilz]
gpilz has joined #xmlsec
21:16:06 [sdw]
Defines XAdES forms that incorporates specific combinations of properties.
21:16:28 [FH]
Agenda:
21:17:11 [sdw]
Use of these profiles allows much later use and auditing of signed data.
21:18:07 [FH]
jcc slides
21:21:02 [sdw]
Supports signer, verifier, and storage service.
21:23:46 [sdw]
Signature policy identifier references specific rules that are followed when generating and verifying signature.
21:24:00 [sdw]
Includes digest of policy document.
21:26:50 [sdw]
SignatureTimeStamp verifies that signature was performed earlier.
21:27:17 [sdw]
CompleteCertificateRefs has references to certificates in certpath that must be checked.
21:31:38 [sdw]
Has change been made to change countersignatures to include whole message rather than just original signature?
21:31:47 [sdw]
Don't believe that has been done yet.
21:34:24 [sdw]
Report in ETSI summarizes state of current cryptographic algorithms and makes certain recommendations.
21:35:40 [sdw]
Only minor changes to the standards are in process.
21:39:12 [sdw]
Can individuals use these signatures with the force of laws?
21:41:47 [sdw]
Depends on legal system: Rathole.
21:42:36 [esimon2]
Thanks.
21:47:16 [sdw]
Presentation: XML Signature Performance and One-Pass Processing issues
21:47:30 [klanz2]
klanz2 has joined #xmlsec
21:48:14 [sdw]
DOM provided good implementation but has performance issues
21:50:20 [sdw]
Event processing requires one or more passes.
21:51:12 [sdw]
Two passes, 1+, cache all elements with ID, or use profile-specific knowledge
21:56:04 [sdw]
Signature information needed before data vs. signature data etc. needed after data.
21:56:12 [sdw]
Can't do with current XML Signature standards.
21:57:38 [sdw]
XML DSig Streaming Impl.: STaX, JSR 105 API, exclusive C14N, forward references, enveloping signatures, Bas64 transform
21:58:28 [FH]
sean: recommend best practices for streaming implementations
21:58:47 [sdw]
Apache project...
21:59:07 [FH]
hal: integrity protecting data stream?
21:59:15 [FH]
... example is movie
22:00:09 [FH]
ht: w3c xml pipelining language wg
22:01:00 [FH]
q+
22:01:08 [sdw]
q+
22:02:22 [ht]
ack FH
22:03:27 [ht]
ack sdw
22:03:39 [klanz2]
q?
22:04:21 [FH]
steve: xml fragments are similar to streaming, but can sign/integrity protect fragments
22:04:34 [FH]
s/similar to/can be used in/
22:06:58 [ht]
ScribeNick: ht
22:07:44 [ht]
FH: The combination of streaming and signature is odd -- you can't release the beginning of the document until you've verified the signature at the end
22:07:54 [FH]
pratik: streaming is for performance, rationale for doing it
22:08:14 [FH]
one point I was making is that sometime you do not need integrity protection for streaming, e.g. in cases where it is ok to drop data
22:08:46 [ht]
HT: Following on, it's precisely for that reason that not doing signature generation is at least odd, since in that case you surely can ship the beginning of the doc while still working on the signature
22:09:18 [FH]
brad: +1 to pratik, value of streaming is performance
22:09:40 [ht]
various: Dispute the relevance of signature to streaming XML and/or dispute the value of streaming at all
22:10:30 [hal]
hal has joined #xmlsec
22:11:00 [ht]
HT: Requirements on XML Pipeline to support streaming of simple XML operations, interesting to understand how to integrate some kind of integrity confirmation _while_ streaming XML
22:11:51 [FH]
s/FH: The combination/?? The combination/
22:12:08 [jcc]
jcc has joined #xmlsec
22:12:08 [esimon2]
Audio seems to be dead.
22:12:30 [esimon2]
yes i can, thanks
22:12:42 [sdw]
Streaming is important in memory constrained or bandwidth / processing constrained applications.
22:13:11 [esimon2]
Yes, thanks.
22:14:17 [ht]
RRSAgent, pointer
22:14:17 [RRSAgent]
See
22:15:24 [FH]
scott: notes adoption in scripiting languages an issue, using c library not good enough
22:15:39 [FH]
jeff: example is use of XMLSig is barrier to saml adoption in OpenID
22:18:16 [FH]
peter gutman, "why xml security is broken"
22:18:50 [FH]
s/gutman/Gutmann/
22:18:54 [FH]
s/peter/Peter/
22:19:51 [FH]
Scott: Liberty Alliance worked at producing xml signature case that addresses many of the threats discussed.
22:20:02 [FH]
s/case/usage/
22:20:59 [FH]
scott: need simpler way of conveying bare public keys
22:21:10 [FH]
... eg pem block
22:21:43 [tlr]
22:21:43 [bhill]
bhill has joined #xmlsec
22:22:29 [FH]
scott: Retrieval method point to KeyInfo or child, issue with spec
22:26:03 [FH]
simplesign - sign whole piece of xml as a blob
22:27:30 [esimon2]
Ed: I agree with the above. If the XML is not going to be transformed by intermediate processes, one can just sign the XML as one does text. And use a detached signature.
22:28:17 [bhill]
have seen this approach successfully in use with XML in DRM and payment systems as well
22:28:30 [esimon2]
What is needed is perhaps a packaging convention like ODF and OOXML use.
22:28:52 [MikeMc]
how is this different from PKCS7 detached? is it the embedding of the signature in the signed data?
22:30:19 [esimon2]
I would have to review PKCS7 detached but I would say the idea is quite similar.
22:30:59 [FH]
konrad: need XML Core to allow nesting of XML, e.g. no prolog etc
22:32:24 [FH]
jeff: using for protocols is different use case than docs, sign before sending to receiver
22:33:20 [tlr]
q?
22:34:10 [tlr]
jimmy: how aboutnamespaces?
22:34:21 [tlr]
jeff: well, we don't care.
22:34:52 [tlr]
jimmy: has to be processed in context of original XML
22:35:09 [tlr]
mike: Why not PKCS#7 detached?
22:35:09 [bhill]
re: PKCS#7 - average Web-era developer doesn't like ASN.1
22:35:25 [bhill]
XML is successful and text wrangling is simple in any scripting language
22:35:37 [tlr]
cantor&hodges: this is for an as simple as possible use case
22:35:59 [tlr]
... point is, people tend to back off from XML Signature in certain use cases ...
22:36:05 [tlr]
... perhaps find a common way for the very simple cases ...
22:36:48 [tlr]
mike: well, there's a simple library, and then there's been 90% of the way to an XML Signature gone
22:37:05 [tlr]
sdw: want to emphasize that there are a number of different situations where you just simply want to encrypt a blob
22:37:07 [tlr]
... or sign it ...
22:37:18 [tlr]
... and be able to validate later without necessarily having complexity ...
22:37:26 [tlr]
... not only protocol-like situations (WS being a good example) ...
22:37:42 [tlr]
... but also in cases where you have sth that resembles more a traditional signed document ...
22:38:06 [tlr]
... store in a database, that way, archival ...
22:39:30 [tlr]
scott: what may be needed to solve my problem is basically a lot more ID attributes than schema (?)
22:39:37 [FH]
scott: more id atttributes in xml sig schema might be helpful
22:39:43 [tlr]
... s/than schema (?)/than in the current schema/
22:39:59 [tlr]
... there is room for improvement here for the ID attributes ...
22:40:08 [tlr]
... with more of these, a lot of referencing is likely to become possible ...
22:40:11 [tlr]
konrad: xml:id?
22:40:18 [tlr]
scott: might be a rationalization here
22:40:30 [tlr]
... if I want to say "htis key is the same as that key", ...
22:40:40 [tlr]
... looks like you need to reference keyInfo and then find the child with XPath ...
22:40:45 [tlr]
... which seems to be a heck of a lot of work ...
22:41:34 [tlr]
konrad: historic context -- at the time, wary of using mechanisms, hence "reference + transform" element
22:41:52 [bhill]
Dinner: anyone coming back by SJC airport?
22:43:20 [MikeMc]
Dinner @
22:43:52 [gpilz]
gpilz has joined #xmlsec
22:45:36 [tlr]
rragent, please make log public
22:45:38 [ht]
RRSAgent, make logs world-visible
22:48:23 [tlr]
(unminuted discussion about xpath vs id attributes)
22:49:19 [tlr]
scott: standard minimal version of xpath?
22:49:25 [tlr]
... preferably not implement the whole pile of work ...
22:49:38 [tlr]
... all of this is begging the question: ...
22:49:45 [tlr]
... ought to be standardized profiles for different problem domains ...
22:49:48 [esimon2]
Ed: ID is simple, but flawed for apps. XPath can be complicated but applications, including XML Signature, can profile its use for specific uses.
22:50:19 [bhill]
+1 for minimal XPath
22:50:41 [tlr]
... without standardized profiles for specific problem domains, a bit too much ...
22:50:55 [esimon2]
What time do we resume?
22:51:15 [sdw]
We called our implementation of "Simplified XPath" Spath.
22:51:43 [esimon2]
OK
22:51:44 [tlr]
sdw, is that publicly visible anywhere?
22:55:48 [brich]
brich has joined #xmlsec
23:27:24 [sdw]
Not currently.
23:31:04 [esimon2]
OK
23:33:49 [esimon2]
I am interested.
23:34:48 [FH]
topic: profiles summary
23:35:00 [FH]
basic robust profile
23:35:05 [klanz2]
klanz2 has joined #xmlsec
23:35:05 [FH]
bulk signing - blob signing
23:35:11 [FH]
use specific?
23:36:14 [FH]
metadata driven implementation
23:36:20 [FH]
brad - like policy
23:39:18 [bhill]
konrad: can this be done with / expressed as a schema?
23:40:29 [bhill]
FH: policy implies a general language vs. a hard/closed specification for a profile
23:41:05 [bhill]
tlr: difference between runtime and non-runtime profiles
23:41:07 [esimon2]
I believe the next version of XML Signature and XML Encryption should have an attribute designating the profile. I have also pondered whether this should not even be in XML Core.
23:41:59 [bhill]
tlr: implementation time avoids unwanted complexity - teach how to do this with use case examples
23:42:54 [bhill]
scott: implementers want to build a general library and constrain behavior, rather than many implementations
23:44:01 [bhill]
phb: profile reuse: catalog, wiki
23:45:07 [Pratik_]
Pratik_ has joined #xmlsec
23:45:54 [PratikDatta]
PratikDatta has joined #xmlsec
23:45:57 [Pratik_]
Pratik_ has left #xmlsec
23:46:24 [bhill]
michael leaventhal: robust is misleading, ease more important than flexibility, more performance and interop fundamentals over flexibility
23:47:53 [bhill]
jz (?): keep spec understandable and as short as possible
23:47:57 [FrederickH]
FrederickH has joined #xmlsec
23:51:02 [bhill]
brad: be able to limit total resource consumption even in languages like Java and .Net where platform services to limit low-level resource usage do not exist
23:51:20 [bhill]
konrad: some of these issues belong to Core, not just XML Security
23:51:54 [bhill]
fh: need to support scripting languages like Python
23:53:48 [bhill]
bhill: implementation guidelines to partition attack surface, order of operations
23:53:54 [bhill]
tlr: wrapping countermeasures
23:54:46 [bhill]
eric: possible to make it easier to verify than to sign?
23:56:20 [bhill]
konrad and jimmy: what is the scope/charter?
23:56:36 [bhill]
tlr: exploring interest, from profiling to deep refactoring
23:57:22 [hiroki]
hiroki has joined #xmlsec
23:58:28 [bhill]
fh: how to really do see what you sign?
23:59:37 [bhill]
topic: referencing model | http://www.w3.org/2007/09/25-xmlsec-irc | CC-MAIN-2016-26 | refinedweb | 5,726 | 59.64 |
Local continuous test runner with pytest and watchdog.
pytest-watch a zero-config CLI tool that runs pytest, and re-runs it when a file in your project changes. It beeps on failures and can run arbitrary commands on each passing and failing test run.
Whether or not you use the test-driven development method, running tests continuously is far more productive than waiting until you're finished programming to test your code. Additionally, manually running
pytesteach time you want to see if any tests were broken has more wait-time and cognitive overhead than merely listening for a notification. This could be a crucial difference when debugging a complex problem or on a tight deadline.
$ pip install pytest-watch
$:
$ ptw --onpass "say passed" --onfail "say failed"
$ ptw --onpass "growlnotify -m \"All tests passed!\"" \ --onfail "growlnotify -m \"Tests failed\""
using GrowlNotify.
>testas an argument.
$ ptw --afterrun cleanup_db.py
You can also use a custom runner script for full
pytestcontrol:
$ ptw --runner "python custom_pytest_runner.py"
Here's an minimal runner script that runs
pytestand prints its exit code:
# custom_pytest_runner.py
import sys import pytest
print('pytest exited with code:', pytest.main(sys.argv[1:]))
Need to exclude directories from being observed or collected for tests?
$ ptw --ignore ./deep-directory --ignore ./integration_tests
See the full list of options:
$ ptw --help Usage: ptw [options] [--ignore
...] [...] [-- ...]
Options: --ignore
Ignore directory from being watched and during collection (multi-allowed). --ext Comma-separated list of file extensions that can trigger a new test run when changed (default: .py). Use --ext=* to allow any file (including .pyc). --config Load configuration from
fileinstead of trying to locate one of the implicit configuration files. -c --clear Clear the screen before each run. -n --nobeep Do not beep on failure. -w --wait Waits for all tests to complete before re-running. Otherwise, tests are interrupted on filesystem events. --beforerun Run arbitrary command before tests are run. --afterrun Run arbitrary command on completion or interruption. The exit code of "pytest" is passed as an argument. --onpass Run arbitrary command on pass. --onfail Run arbitrary command on failure. --onexit Run arbitrary command when exiting pytest-watch. --runner Run a custom command instead of "pytest". --pdb Start the interactive Python debugger on errors. This also enables --wait to prevent pdb interruption. --spool.
CLI options can be added to a
[pytest-watch]section in your pytest.ini file to persist them in your project. For example:
# pytest.ini
[pytest] addopts = --maxfail=2
[pytest-watch] ignore = ./integration-tests nobeep = True
-test, whereas pytest-watch doesn't.
If you want to edit the README, be sure to make your changes to
README.mdand run the following to regenerate the
README.rstfile:
$ pandoc -t rst -o README.rst README.md
If your PR has been waiting a while, feel free to ping me on Twitter.
Use this software often?
:smiley: | https://xscode.com/joeyespo/pytest-watch | CC-MAIN-2020-50 | refinedweb | 473 | 60.82 |
audiocutter
A plugin to cut audio files. You pass in a path to a file, start and end times and the plugin will do the rest. For now, it automatically will reduce the quality quite a bit to ensure optimal file size for storage/streaming.
Getting Started
Take a look at the requirements and any gotchas for your platform at the flutter_ffmpeg package page. There can be some stumbling blocks getting it working on iOS. (Works on my machine ¯\_(ツ)_/¯)
Import and cut!
import 'package:audiocutter/audiocutter.dart'; {...} var start = 15.0; var end = 25.5; var path = 'path/to/audio/file.mp3'; // Get path to cut file and do whatever you want with it. var outputFilePath = await AudioCutter.cutAudio(path, start, end); | https://pub.dev/documentation/audiocutter/latest/ | CC-MAIN-2020-34 | refinedweb | 125 | 70.5 |
In this article, I’ll show you the easiest way possible to create a chat application using React.js. It’ll be done entirely without server-side code, as we’ll let the Chatkit API handle the back-end.
I’m assuming that you know basic JavaScript and that you’ve encountered a little bit of React.js before. Other than that, there are no prerequisites.
Note: I’ve also created a free full-length course on how to create a React.js chat app here:
If you follow along with this tutorial you’ll end up with your very own chat application at the end, which you then can build further upon if you’d like to.
Let’s get started!
Step 1: Breaking the UI into components
React is built around components, so the first thing you want to do when creating an app is to break its UI into components.
Let’s start by drawing a rectangle around the entire app. This is your root component and the common ancestor for all other components. Let’s call it
App:
Once you’ve defined your root component, you need to ask yourself the following question:
Which direct children does this component have?
In our case, it makes sense to give it three child components, which we’ll call the following:
Title
MessagesList
SendMessageForm
Let’s draw a rectangle for each of these:
This gives us a nice overview of the different components and the architecture behind our app.
We could have continued asking ourselves which children these components again have. Thus we could have broken the UI into even more components, for example through turning each of the messages into their own components. However, we’ll stop here for the sake of simplicity.
Step 2: Setting up the codebase
Now we’ll need to setup our repository. We’ll use the simplest structure possible: an *index.html *file with links to a JavaScript file and a stylesheet. We’re also importing the Chatkit SDK and Babel, which is used to transform our JSX:
Here’s a Scrimba playground with the final code for the tutorial. I’d recommend you to open it up in a new tab and play around with it whenever you feel confused.
Alternatively, you can download the Scrimba project as a .zip file and run a simple server to get it up and running locally.
Step 3: Creating the root component
With the repository in place, we’re able to start writing some React code, which we’ll do inside the *index.js *file.
Let’s start with the main component,
App. This will be our only “smart” component, as it’ll handle the data and the connection with the API. Here’s the basic setup for it (before we’ve added any logic):
class App extends React.Component { render() { return ( <div className="app"> <Title /> <MessageList /> <SendMessageForm /> </div> ) } }
As you can see, it simply renders out three children: the
<Title>,
<MessageList>, and the
<SendMessageForm> components.
We’re going to make it a bit more complex though, as the chat messages will need to be stored inside the state of this
App component. This will enable us to access the messages through
this.state.messages, and thus pass them around to other components.
We’ll begin with using dummy data so that we can understand the data flow of the app. Then we’ll swap this out with real data from the Chatkit API later on.
Let’s create a
DUMMY_DATA variable:
const DUMMY_DATA = [ { senderId: "perborgen", text: "who'll win?" }, { senderId: "janedoe", text: "who'll win?" } ]
Then we’ll add this data to the state of
App and pass it down to the
MessageList component as a prop.
class App extends React.Component { constructor() { super() this.state = { messages: DUMMY_DATA } } render() { return ( <div className="app"> <MessageList messages={this.state.messages}/> <SendMessageForm /> </div> ) } }
Here, we’re initializing the state in the
constructor and we’re also passing
this.state.messages down to
MessageList.
Note that we’re calling
super() in the constructor. You must do that if you want to create a stateful component.
Step 4: Rendering dummy messages
Let’s see how we can render these messages out in the
MessageList component. Here’s how it looks:
class MessageList extends React.Component { render() { return ( <ul className="message-list"> {this.props.messages.map(message => { return ( <li key={message.id}> <div> {message.senderId} </div> <div> {message.text} </div> </li> ) })} </ul> ) } }
This is a so-called stupid component. It takes one prop,
messages, which contains an array of objects. And then we’re simply rendering out the
text and
senderId properties from the objects.
With our dummy data flowing into this component, it will render the following:
So now we have the basic structure for our app, and we’re also able to render out messages. Great job!
Now let’s replace our dummy data with actual messages from a chat room!
Step 5: Fetching API-keys from Chatkit
In order to get fetch messages, we’ll need to connect with the Chatkit API. And to do so, we need to obtain API keys.
At this point, I want to encourage you to follow my steps so that you can get your own chat application up and running. You can use my Scrimba playground in order to test your own API keys.
Start by creating a free account here. Once you’ve done that you’ll see your dashboard. This is where you create new Chatkit instances. Create one and give it whatever name you want:
Then you’ll be navigated to your newly created instance. Here you’ll need to copy four values:
- Instance Locator
- Test Token Provider
- Room id
- Username
We’ll start with the Instance Locator:
You can copy using the icon on the right side of the Instance Locator.
And if you scroll a bit down you’ll find the Test Token Provider:
The next step is to create a User* *and a Room, which is done on the same page. Note that you’ll have to create a user first, and then you’ll be able to create a room, which again gives you access to the room identifier.
So now you’ve found your four identifiers. Well done!
However, before we head back to the codebase, I want you to manually send a message from the Chatkit dashboard as well, as this will help us in the next chapter.
Here’s how to do that:
This is so that we actually have a message to render out in the next step.
Step 6: Rendering real chat messages
Now let’s head back to our index.js file and store these four identifiers as variables at the top of our file.
Here are mine, but I’d encourage you to create your own:
const instanceLocator = "v1:us1:dfaf1e22-2d33-45c9-b4f8-31f634621d24" const testToken = "" const username = "perborgen" const roomId = 9796712
And with that in place, we’re finally ready to connect with Chatkit. This will happen in the
App component, and more specifically in the
componentDidMount method. That’s the method you should use when connecting React.js components to API’s.
First we’ll create a
chatManager:
componentDidMount() { const chatManager = new Chatkit.ChatManager({ instanceLocator: instanceLocator, userId: userId, tokenProvider: new Chatkit.TokenProvider({ url: testToken }) })
… and then we’ll do
chatManager.connect() to connect with the API:
chatManager.connect().then(currentUser => { currentUser.subscribeToRoom({ roomId: roomId, hooks: { onNewMessage: message => { this.setState({ messages: [...this.state.messages, message] }) } } }) }) }
This gives us access to the
currentUser object, which is the interface for interacting with the API.
Note: As we’ll need to use
currentUser later on, well store it on the instance by doing
this.currentUser = ``currentUser.
Then, we’re calling
currentUser.subscribeToRoom() and pass it our
roomId and an
onNewMessage hook.
The
onNewMessage hook is triggered every time a new message is broadcast to the chat room. So every time it happens, we’ll simply add the new message at the end of
this.state.messages.
This results in the app fetching data from the API and then rendering it out on the page.
This is awesome, as we now have the skeleton for our client-server connection.
Woohoo!
Step 7: Handling user input
The next thing we’ll need to create is the
SendMessageForm component. This will be a so-called controlled component, meaning the component controls what’s being rendered in the input field via its state.
Take a look at the
render() method, and pay special attention to the lines I’ve highlighted:
class SendMessageForm extends React.Component { render() { return ( <form className="send-message-form"> <input onChange={this.handleChange} value={this.state.message} </form> ) } }
We’re doing two things:
- Listening for user inputs with the
onChangeevent listener, so that we can
trigger the
handleChangemethod
- Setting the
valueof the input field explicitly using
this.state.message
The connection between these two steps is found inside the
handleChange method. It simply updates the state to whatever the user types into the input field:
handleChange(e) { this.setState({ message: e.target.value }) }
This triggers a re-render, and since the input field is set explicitly from the state using
value={this.state.message}, the input field will be updated.
So even though the app feels instant for the user when they type something into the input field, the data actually goes via the state before React updates the UI.
To wrap up this feature, we need to give the component a
constructor. In it, we’ll both initialize the state and bind
this in the
handleChange method:
constructor() { super() this.state = { message: '' } this.handleChange = this.handleChange.bind(this) }
We need to bind the
handleChangemethod so that we’ll have access to the
this keyword inside of it. That’s how JavaScript works: the
this keyword is by default undefined inside the body of a function.
Step 8: Sending messages
Our
SendMessageForm component is almost finished, but we also need to take care of the form submission. We need fetch the messages and send them off!
To do this we’ll hook a
handleSubmit even handler up with the
onSubmit event listener in the
<form>.
render() { return ( <form onSubmit={this.handleSubmit} <input onChange={this.handleChange} value={this.state.message} </form> ) }
As we have the value of the input field stored in
this.state.message, it’s actually pretty easy to pass the correct data along with the submission. We’ll
simply do:
handleSubmit(e) { e.preventDefault() this.props.sendMessage(this.state.message) this.setState({ message: '' }) }
Here, we’re calling the
sendMessage prop and passing in
this.state.message as a parameter. You might be a little confused by this, as we haven’t created the
sendMessage method yet. However, we’ll do that in the next section, as that method lives inside the
App component. So don’t worry!
Secondly, we’re clearing out the input field by setting
this.state.message to an empty string.
Here’s the entire
SendMessageForm component. Notice that we’ve also bound
this to the
handleSubmit method:
class SendMessageForm extends React.Component { constructor() { super() this.state = { message: '' } this.handleChange = this.handleChange.bind(this) this.handleSubmit = this.handleSubmit.bind(this) } handleChange(e) { this.setState({ message: e.target.value }) } handleSubmit(e) { e.preventDefault() this.props.sendMessage(this.state.message) this.setState({ message: '' }) } render() { return ( <form onSubmit={this.handleSubmit} <input onChange={this.handleChange} value={this.state.message} </form> ) } }
Step 9: Sending the messages to Chatkit
We’re now ready so send the messages off to Chatkit. That’s done up in the
App component, where we’ll create a method called
this.sendMessage:
sendMessage(text) { this.currentUser.sendMessage({ text: text, roomId: roomId }) }
It takes one parameter (the text) and it simply calls
this.currentUser.sendMessage().
The final step is to pass this down to the
<SendMessageForm> component as a prop:
/* App component */ render() { return ( <div className="app"> <Title /> <MessageList messages={this.state.messages} /> <SendMessageForm sendMessage={this.sendMessage} /> ) }
And with that, we’ve passed down the handler so that
SendMessageForm can invoke it when the form is submitted.
Step 10: Creating the Title component
To finish up, let’s also create the Title component. It’s just a simple functional component, meaning a function which returns a JSX expression.
function Title() { return <p class="title">My awesome chat app</p> }
It’s a good practice to use functional components, as they have more constraints than class components, which makes them less prone to bugs.
The result
And with that in place you have your own chat application which you can use to chat with your friends!
Give yourself a pat on the back if you’ve coded along until the very end.
If you want to learn how to build further upon this example, then check out my free course on how to create a chat app with React here.
Thanks for reading and happy coding :) | https://www.freecodecamp.org/news/how-to-build-a-react-js-chat-app-in-10-minutes-c9233794642b/ | CC-MAIN-2020-45 | refinedweb | 2,149 | 66.03 |
Odoo Help
Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps:
CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc.
Which table stores image in openerp
Hi All,
I have created module where employee can upload their image. Module works perfectly. No issue with saving /displaying employee record including image.
But I would like to know where exactly image file is stored ?
Thanks. Here is my class and img field holds file name. Please let me know in which table image is stored and how it is related to test_base table ?
from osv import osv from osv import fields class test_base(osv.osv): _name='test.base' _columns={ 'name':fields.char('Name'), 'email':fields.char('Email'), 'mydate':fields.date('My Date'), 'code':fields.integer('Unique ID'), 'sal':fields.integer('Salary'), 'rate':fields.selection([(10,'10'), (20,'20'),(30,'30')],'Percentage of Deduction'), 'ded':fields.float('Deduction'), 'bdisplay':fields.float('Net Salary'), 'image': fields.binary("Image", help="Select image here"), 'skillid':fields.one2many('test.skill', table that store the image is test_base. Your self defined this in the columns('image': fields.binary("Image", help="Select image here")). This table of model test.base, have binary field and in this fields is where the image is stored in binary format. | https://www.odoo.com/forum/help-1/question/which-table-stores-image-in-openerp-42436 | CC-MAIN-2017-09 | refinedweb | 216 | 54.49 |
flask_restful Unable to install with StaSh
Guys, I search the forum about this but could not find anything. But using StaSh to pip install flask-restful fails as it has to run a setup file. Not sure what the setup file is doing, but it appears that the installation and dependencies get installed ok.
To test I just used the 'hello'a world' starter script below. Anyway, it fails. It sees flask_restful module, but something is not connected correctly.
Is there a alternative way I can install this module with StaSh? I really don't like asking without doing my own research, but this is sort of a black pit for me. The answer could be staring me in the face and I probably would not see it.
Thanks in advance...
btw, in this case I am looking for a solution for py3, but would be great to know if the same solution works under 2.7. If not does not matter so much, but plan to use one to run flask and the other to test query it.
from flask import Flask from flask_restful import Resource, Api app = Flask(__name__) api = Api(app) class HelloWorld(Resource): def get(self): return {'hello': 'world'} api.add_resource(HelloWorld, '/') if __name__ == '__main__': app.run(debug=True)
flask-restful looks like a regular pure Python package, no idea why Stash is having trouble with the
setup.py. You should be able to install the package by hand, by downloading the source code and moving the package into
site-packages:
$ wget "" $ tar -xzf Flask-RESTful-0.3.6.tar.gz $ mv Flask-RESTful-0.3.6/flask_restful ~/Documents/site-packages/flask_restful
@dgelessus , thanks for helping out. But still no go. I pip removed flask-restful, then did the manual install using the above cmds you wrote. The one thing I noted, and I guess is normal is that the dependencies that first installed with flask-restful do not get uninstalled. I have add a pic below of what I am getting, I though this might be better than the trace back. The code should work, it is the example code from the flask-restful site, I also have had in running in PyCharm on the mac. Any further hints would be great.
OK, it's showing a syntax error in the Flask source code, which is strange. Can you check if you have a
flaskfolder in any of your
site-packagesfolders, and delete it if there is one? Flask is already pre-installed in Pythonista, but Stash
pipprobably installed it again and might have broken something by doing that.
@dgelessus , I did have a flask folder in site-packages. I got rid of it and restarted Pythonista and got the below trace back.
I tried the exact same thing in the Pythonista 2 app. Exactly same error. Also, same error if I run it in the 3 app using py2.7
Look I don't want to waste your time. It would be nice to get it running. But I can still use it by doing it on my laptop.
- Running on
- Restarting with reloader
Traceback (most recent call last):
File "/private/var/mobile/Containers/Shared/AppGroup/3533032E-E336-4C25-BBC4-112A6BF2AF75/Pythonista3/Documents/MyProjects/scratch/flask_restful_test.py", line 14, in <module>
app.run(debug=True)
File "/var/containers/Bundle/Application/2A23FD4D-8164-4F78-9144-DD77D005D434/Pythonista3.app/Frameworks/Py3Kit.framework/pylib/site-packages/flask/app.py", line 773, in run
run_simple(host, port, self, **options)
File "/private/var/mobile/Containers/Shared/AppGroup/3533032E-E336-4C25-BBC4-112A6BF2AF75/Pythonista3/Documents/site-packages/werkzeug/serving.py", line 708, in run_simple
run_with_reloader(inner, extra_files, reloader_interval)
File "/private/var/mobile/Containers/Shared/AppGroup/3533032E-E336-4C25-BBC4-112A6BF2AF75/Pythonista3/Documents/site-packages/werkzeug/serving.py", line 617, in run_with_reloader
sys.exit(restart_with_reloader())
File "/private/var/mobile/Containers/Shared/AppGroup/3533032E-E336-4C25-BBC4-112A6BF2AF75/Pythonista3/Documents/site-packages/werkzeug/serving.py", line 601, in restart_with_reloader
exit_code = subprocess.call(args, env=new_environ)
File "/var/containers/Bundle/Application/2A23FD4D-8164-4F78-9144-DD77D005D434/Pythonista3.app/Frameworks/Py3Kit.framework/pylib/subprocess.py", line 268, in call
with Popen(*popenargs, **kwargs) as p:
File "/var/containers/Bundle/Application/2A23FD4D-8164-4F78-9144-DD77D005D434/Pythonista3.app/Frameworks/Py3Kit.framework/pylib/subprocess.py", line 708, in init
restore_signals, start_new_session)
File "/var/containers/Bundle/Application/2A23FD4D-8164-4F78-9144-DD77D005D434/Pythonista3.app/Frameworks/Py3Kit.framework/pylib/subprocess.py", line 1261, in _execute_child
restore_signals, start_new_session, preexec_fn)
PermissionError: [Errno 1] Operation not permitted
@dgelessus , btw I copied the output when I installed it on Pythonista app 2.
[~/Documents]$ pip install flask-restful
Querying PyPI ...
Downloading package ...
Opening:
Save as: /private/var/mobile/Containers/Data/Application/20091A2E-FA21-4354-B157-CF8CBF605DC4/tmp//Flask-RESTful-0.3.6.tar.gz (103092 bytes)
103092 [100.00%]
Extracting archive file ...
Archive extracted.
Running setup file ...
Package installed: Flask-RESTful
Installing dependency: aniso8601[('>=', '0.82')]
Querying PyPI ...
Downloading package ...
Opening:
Save as: /private/var/mobile/Containers/Data/Application/20091A2E-FA21-4354-B157-CF8CBF605DC4/tmp//aniso8601-1.2.1.tar.gz (62369 bytes)
62369 [100.00%]
Extracting archive file ...
Archive extracted.
Running setup file ...
Package installed: aniso8601
Installing dependency: python-dateutil
Querying PyPI ...
Downloading package ...
Opening:
Save as: /private/var/mobile/Containers/Data/Application/20091A2E-FA21-4354-B157-CF8CBF605DC4/tmp//python-dateutil-2.6.1.tar.gz (241428 bytes)
241428 [100.00%]
Extracting archive file ...
Archive extracted.
Running setup file ...
Package installed: python-dateutil
Dependency available in Pythonista bundle : six
Installing dependency: Flask[('>=', '0.8')]
Querying PyPI ...
Downloading package ...
Opening:
Save as: /private/var/mobile/Containers/Data/Application/20091A2E-FA21-4354-B157-CF8CBF605DC4/tmp//Flask-0.9.tar.gz (481982 bytes)
481982 [100.00%]
Extracting archive file ...
Archive extracted.
Running setup file ...
Package installed: Flask
Installing dependency: Werkzeug[('>=', '0.7')]
Querying PyPI ...
Downloading package ...
Opening:
Save as: /private/var/mobile/Containers/Data/Application/20091A2E-FA21-4354-B157-CF8CBF605DC4/tmp//Werkzeug-0.9.6.tar.gz (1128428 bytes)
1128428 [100.00%]
Extracting archive file ...
Archive extracted.
Running setup file ...
Package installed: Werkzeug
Installing dependency: Jinja2[('>=', '2.4')]
Querying PyPI ...
Downloading package ...
Opening:
Save as: /private/var/mobile/Containers/Data/Application/20091A2E-FA21-4354-B157-CF8CBF605DC4/tmp//Jinja2-2.9.6.tar.gz (437659 bytes)
437659 [100.00%]
Extracting archive file ...
Archive extracted.
Running setup file ...
TypeError('string indices must be integers, not str',)
Failed to run setup.py
Fall back to directory guessing ...
Error: Cannot locate packages. Manual installation required.
Ah, it's not possible to use Flask's debug mode in Pythonista. Among other things, debug mode enables automatic reloading of the server when you change the source code, which requires the ability to start new Python processes, which isn't possible in Pythonista.
@dgelessus , thanks so much. Took the debug out and it works. But I am almost sure that it used to work. Maybe I am wrong.
But really, thank you. It's great it's working.
I finally got PythonAnywhere working today the way I want. I just had to watch a video :).
here I just have the basic todo list example from the docs working. But I finally figured out how to install and configure a virtualenv there. | https://forum.omz-software.com/topic/4229/flask_restful-unable-to-install-with-stash | CC-MAIN-2021-17 | refinedweb | 1,189 | 53.78 |
prestapyt 0.1.1
A library to access Prestashop Web Service from Python.
prestapyt is a library for Python to interact with the PrestaShop's Web Service API.
Learn more about the PrestaShop Web Service from the [Official Documentation]().
prestapyt is a direct port of the PrestaShop PHP API Client, PSWebServiceLibrary.php Similar to PSWebServiceLibrary.php, prestapyt is a thin wrapper around the PrestaShop Web Service: it takes care of making the call to your PrestaShop instance's Web Service, supports the Web Service's HTTP-based CRUD operations (handling any errors) and then returns the XML ready for you to work with in Python (as well as prestasac if you work with scala)
Beta version, the post and put doesn't yet work.
#Installation TODO
#Usage
from prestapyt import PrestaShopWebServiceError, PrestaShopWebService
prestashop = PrestaShopWebService('', 'BVWPFFYBT97WKM959D7AVVD0M4815Y1L')
# get all addresses prestashop.get('addresses') # returns ElementTree
# get address 1 prestashop.get('addresses', resource_id=1) prestashop.get('addresses/1')
# full url prestashop.get('')
#filters prestashop.get('addresses', options={'limit': 10})
# head print prestashop.head('addresses')
# delete a resource prestashop.delete('addresses', resource_ids=4)
# delete many resources prestashop.delete('addresses', resource_ids=[5,6])
# add prestashop.add('addresses', xml)
# edit prestashop.edit('addresses', 5, xml)
# get a blank xml prestashop.get('addresses', options={'schema': 'blank'})
#API Documentation Documentation for the PrestaShop Web Service can be found on the PrestaShop wiki: [Using the REST webservice]()
#Credits: Thanks to Prestashop SA for their PHP API Client PSWebServiceLibrary.php
Thanks to Alex Dean for his port of PSWebServiceLibrary.php to the Scala language, prestasac () from which I also inspired my library.
prestapyt is copyright (c) 2012 Guewen Baconnier
prestapyt is free software: you can redistribute it and/or modify it under the terms of the GNU Affero General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
prestapy prestapyt. If not, see [GNU licenses]().
- Author: Guewen Baconnier
- Keywords: prestashop api client rest
- License: GNU AGPL-3
- Categories
- Package Index Owner: Guewen.Baconnier
- DOAP record: prestapyt-0.1.1.xml | http://pypi.python.org/pypi/prestapyt/0.1.1 | crawl-003 | refinedweb | 343 | 58.89 |
FUNOPEN(3) BSD Programmer's Manual FUNOPEN(3)
funopen, fropen, fwopen - open a stream
#include <stdio.h> FILE * funopen(const void *cookie, int (*readfn)(void *, char *, int), int (*writefn)(void *, const char *, int), fpos_t (*seekfn)(void *, fpos_t, int), int (*closefn)(void *)); FILE * fropen(const void *cookie, int (*readfn)(void *, char *, int)); FILE * fwopen(const void *cookie, int (*writefn)(void *, const char *, int));
The funopen() function associates a stream with up to four "I/O functions". Either readfn or writefn must be specified; the others may be given as NULL pointers. These I/O functions will be used to read, write, seek, and close the new stream. exceptions that they are passed the cookie argument specified to funopen() in place of the traditional file descriptor argument and that the seek function takes an fpos_t argument and not an off_t re- cently.
Upon successful completion, funopen() returns a FILE pointer. Otherwise, NULL is returned and the global variable errno is set to indicate the er- ror.
[EINVAL] The funopen() function was called without either a read or write function. The funopen() function may also fail and set errno for any of the errors specified for the routine malloc(3).
fcntl(2), open(2), fclose(3), fopen(3), fseek(3), setbuf(3)
The funopen() functions first appeared in 4.4BSD.
The funopen() function may not be portable to systems other than BSD.. | https://www.mirbsd.org/htman/i386/man3/funopen.htm | CC-MAIN-2015-40 | refinedweb | 229 | 69.41 |
Up till now, you have used variables to store a single value. C++ offers the flexibility to store multiple values of the same type, and address them with a single name. The mechanism that enables this is arrays.
An Array is a variable that allows you to store multiple values of the same type. Let’s say you need to store three integer values. You can either create three different integer variables or create an integer array that stores three values.
11.1 Declaring Arrays
Similar to other variables, in C++, you need define an array before you can use it. Like other definitions, an array has a data type and a name. In addition, you need specify the size of an array. The size of an array is the no of values (data items) it will hold. It immediately follows the array name and is enclosed in square brackets “[]”.
Syntax for declaring an array is as follows:
datatype arrayname[size];
Let’s say you want to create an array that holds 3 integer values; following is the definition:
int arr[3];
In the above example, the name of the array is “arr”, its data type is integer and size is “3”.
11.2 Array Elements
Each value (data item) that an array holds is referred as an array element. All elements of an array are of the same data type. You can reference all the elements individually or together. Let’s create an array that stores the marks of three subjects:
int marks[3];
- The array name is “marks” and size of the array is 3. Thus, the array (marks) can hold three integer data items.
- When the compiler comes across the above statement it sets aside memory for three integers.
11.3 Array Index
Each array element has an array index. You can use the array index to reference each element individually. You can think of an array as a set of boxes. Each box has a label and contains an article. The label on the box is the array index and the article in the box is the array element.
A pictorial representation of the above example is as follows:
For all arrays, the array index begins from zero. So if you want to reference the 2nd array element its array index would be “1” and not “2”. Similarly, the array index to reference the 3rd array element would be “2” (not 3).
Let’s create an array that holds three integers and print each array element on screen. The C++ statements to do so are as follows:
1. int arr[3]={12,24,36}; 2. cout<< arr[0]<<endl; 3. cout<< arr[1]<<endl; 4. cout<< arr[2]<< endl;
Understanding the C++ statements:
- In the first statement, you create an integer array “arr” whose size is 3, that is; it can hold three integer values. You also initialize all three array elements
- The second statement displays “12” - the first array element on the screen. Note that the array index of the first element is zero “arr[0]”
- The third statement displays “24” - the second array element on the screen
- The fourth statement displays “36” - the third array element on the screen
11.4 Accessing Elements of Arrays
You can access individual array elements using the array index. For all arrays, the array index starts from zero. The syntax for referencing an array element is as follows:
arrayname[arrayindex];
Let’s declare and initialize an array that stores the marks of four subjects; the C++ statements are as follows:
int marks[4]={10,20,30,40};
A pictorial representation of the above array is as follows:
Let’s say you want to print the values of individual array elements on the screen; the C++ statements to do so are as follows:
cout<<marks[0]<<endl; cout<<marks[1]<<endl; cout<<marks[2]<<endl; cout<<marks[3]<<endl;
The above statements will display the following values on the screen:
10
20
30
40
The array index begins from zero. The index of the last element (based on the above example is “3”) and not “4”; even though the array stores four integer values. Similarly, if you were to access the 2nd element its index would be marks[1] and not marks[2].
11.5 Initialising an Array
Similar to other variables, you initialize arrays. When you initialize an array, you assign values to each array element. You can initialize array elements in the following ways:
- Array Initializer List: Let’s say you have defined an array to store the marks of three subjects. When you initialize this array, you will specify the actual marks for each subject. The C++ statement to create and initialize the array is as follows:
int marks[3]= {75,95,65};
The list of numbers enclosed in curly brackets “{}” to the right of “=” is known as an Array Initializer list.
- Loop: You can initialize array elements using a loop. This is useful when you have to initialize bigger arrays; for example, arrays that need to hold 100 or 1000 elements. To initialize bigger arrays, you can use an array initializer list too but it’s a tedious process. The C++ statements to initialize an array using a “for” loop are as follows:
1. int marks[20]; 2. for(int x=0;x<20;<<x++) 3. { 4. marks[x]=100; 5. cout<<”Element”<<x<<”:”<<”Value”<<marks[x]<<endl; 6. }
The output of the above program is as follows:
Understanding the statements of the above example:
- In the first statement, you declare an integer array named “marks” whose size is 10; that is, it can hold 10 integers. When the compiler comes across this statement, it sets aside memory for 10 integer data items.
- In the next statement, you set the “for” loop to iterate 10 times and increment the value of “x” (loop variable) by one with each iteration
- In the third statement, you initialize the value of each element to “100”. Thus, all array elements ( for the above example) will have the value of “100”
11.6 Multi-Dimensional Arrays
A multi-dimensional array contains other arrays. You can think of it as box that contains other smaller boxes in it. In this section, you will learn about two-dimensional array, which is a form of multi-dimensional array. A two-dimensional array is similar to a table or a grid. It contains two arrays; the first array is used to reference the rows and the second array the columns. Similar to a grid, you can access the elements of an array using the row and column coordinates.
The syntax for declaring a two-dimensional array is as follows:
datatype arrayname[no of rows][no of columns];
Let’s look at an example of a two-dimensional array:
int newarr[2][3];
In the above statement , you have declared an array “newarr” that has two rows and three columns. Now, let’s initialize the elements of the array “newarr”:
Following is a diagrammatic representation of the above example:
The above array has two rows and three columns. Thus, it can hold six data items. Each array forms a separate row and each element forms a separate column.
11.7 Accessing Elements of Two-Dimensional Arrays
To access an element of a two-dimensional array, you need to specify which row the element is in, and then the column. Let’s look at the following table:
Let’s say you want to print the value “4”. The element is in the row “0”, and column “2”. The C++ statement to print the element “4” is as follows:
int newarr[2][3]={{2,3,4},{8,9,10}}; //declare and initialise array
cout<<newarr[0][2]<<endl;
Let’s say you want to print the data element “9”. Now, “9” is in the first row and first column (based on the above table). The C++ statement to print the element “9” on screen is as follows:
cout<<newarr[1][1]<<endl;
11.8 Printing out Multi-Dimensional Array
To print out a multi-dimensional array on the screen you need to use loops. Let’s look at the following two-dimensional array:
The above table has two rows and each row has three columns. To print this table (two-dimensional array) on the screen, you need loop through each row (one by one). Next, loop through each column of that row. Let’s say you want to print the elements of the first row, in the above example, you will loop through the first row and every column of the first row.
11.9 C++ Example of Multi-Dimensional Arrays
The C++ statements to print the above two-dimensional array are as follows:
1. #include<iostream.h> 2. using namespace std; 3. int main() 4. { 5. int multiarr[2][3]={{1,2,3},{7,8,9}}; 6. for(row=0;row<2;row++) 7. { 8. for(column=0;column<3;column++) 9. { 10. cout<<multiarr[row][column]<<” ”; 11. } 12. cout<<endl; 13. } 14. }
Understanding the statements of the above program:
- In the 5th statement, of the above program, you create an array (“multiarr”) that has two rows and three columns and initialize the elements of the arrays. The multi-dimensional array “multiarr” contains two arrays - the first array represents the rows and the second array represents the elements of that row (columns).
- To print this array on screen (in a table/grid) format, you loop through each row one at a time and print the elements of that row (the row you are looping) on the screen. To do so, you need two “for” loops. The first one loops through each row (one at a time), and the second “for” loop is used to loop through each column of a row.
- In the 6th statement, you declare a “for” loop for each row. The name of loop variable of the “for” loop is “row”. Its starting value is zero (as all arrays start from zero) and maximum value is two (the no of rows in your array). Each iteration, increments the value of the loop variable “row” by one.
- Similarly, in the 8th statement, you declare a “for” loop for each column of a row. The name of loop variable of the “for” loop is “column”. Its starting value is zero (as all arrays start from zero) and maximum value is three (the no of columns in your array). Each iteration, increments the value of the loop variable “column” by one.
- The “cout” statement, the 12th statement of the program, prints the values of each column (of the row you are looping). For example, if you looping through the row”0”, then the “cout” statement will print the values “1,2,3” on the screen (with each iteration of the “for” loop, which references the columns).
- Let’s look at an example , where the value of row variable is set to “1” and that of the column variable is “0”
for (row=1, row<2; row++) { for(column=0; column<3;column++) { cout<< multarr[row][column]<<” “; } }
Based on above C++ statements, the “cout” will print the value “8” on the screen. This is because the coordinates of element “8” are row[1]and column[0]
- In the 12th statement of the above program, you include a statement to start a new line (endl). This is done so that the elements (values) of each row are displayed on separate lines | http://www.wideskills.com/c-plusplus/c-plusplus-arrays | CC-MAIN-2018-09 | refinedweb | 1,913 | 61.46 |
Recursively build tree.
- find midpoint by fast/slow method, use middle node as root.
- build left child by first half of the list
- build right child by second half of the list (head is midpoint->next)
<cpp> class Solution { public: TreeNode *sortedListToBST(ListNode *head) { if(!head) return NULL; if(!head->next) return new TreeNode(head->val); // fast/slow pointer to find the midpoint auto slow = head; auto fast = head; auto pre = head; while(fast && fast->next) { pre = slow; slow = slow->next; fast = fast->next->next; } pre->next = 0; // break two halves // slow is the midpoint, use as root TreeNode* root = new TreeNode(slow->val); root->left = sortedListToBST(head); root->right = sortedListToBST(slow->next); return root; } };
we share the same idea, but in my solution,
pre is not needed:
class Solution { public: TreeNode* sortedListToBST(ListNode* head) { if (head == NULL) return NULL; if (head->next == NULL) return new TreeNode(head->val); ListNode *fast = head->next->next, *slow = head; while (fast != NULL && fast->next != NULL) { slow = slow->next; fast = fast->next->next; } TreeNode* root = new TreeNode(slow->next->val); root->right = sortedListToBST(slow->next->next); slow->next = NULL; root->left = sortedListToBST(head); return root; } };
I don't think it's a proper way to solve the problem, though you can temporarily "solve" it in OJ. After constructing the tree, the list will be destroyed permanently. It causes serious memory leak, which can't be permitted in C++ programming.
pre->next = 0;
This statement will disconnect the two nodes. After building the tree, you can try to traverse the "list" from head and you will find there's only one node in the "list". Where are others? They are in the memory but you can no longer find them.
If you doesn't get it, you can try it and you will understand then.
You are right, I never thought of that, previously I think that we can still access list from
head pointer. Thank you.
The title of this problem is Convert Sorted List to Binary Search Tree. So I think we no longer need to access the original list after the conversion.
I did this by counting the number of nodes and making a separate case for even and odd number of nodes.
This does not destroy the linked list. And there is no memory loss.
TreeNode* bst (ListNode* head, int n) { if (n<=0) return NULL; if (n==1) { TreeNode* root = new TreeNode(head->val); return root; } /* To get to the middle point, traverse the list*/ int count=0; ListNode* current = head; while (current!=NULL && count<(n/2-1)) { current = current->next; count++; } /*if even nodes are there*/ if (n%2==0) { TreeNode* root = new TreeNode(current->val); root->left = bst(head,n/2-1); root->right = bst(current->next,n/2); return root; } /*if odd nodes are there*/ else { TreeNode* root = new TreeNode(current->next->val); root->left = bst(head,n/2); root->right = bst(current->next->next,n/2); return root; } } TreeNode* sortedListToBST(ListNode* head) { if (head==NULL) return NULL; ListNode* current = head; /*First getting the total number of nodes in list*/ int count = 0; while (current!=NULL) { current=current->next; count++; } return (bst(head,count)); }
I have the simliar idea. But I meet a problem that I used to use the dummy node to find the middle node.
So I have implemented almost the same code except that I use the dummy node to find the middle node.
So the only difference happens when the linked-list length is even. My implementation find the left middle
pointer while you find the right pointer. So I will meet the corner case. So I am wondering any one can
help me solve the corner cases with dummy pointer coding style.
Thank you very much !
class Solution { public: TreeNode* sortedListToBST(ListNode* head) { if(!head) return NULL; if(!head->next) return new TreeNode(head->val); ListNode *dummy = new ListNode(-1); dummy->next=head; /** find the middle pointer **/ ListNode *slow=dummy, *fast=dummy; while(fast && fast->next){ slow=slow->next; fast=fast->next->next; } /** set the pointer previous to the middle pointer point to NULL **/ ListNode *temp=dummy; while(temp->next!=slow){ temp=temp->next; } temp->next=NULL; /** revursively build the BST **/ ListNode *left=head, *right=slow->next; TreeNode *root = new TreeNode(slow->val); root->left=sortedListToBST(left); root->right=sortedListToBST(right); return root; } };
use pointer to pointer
TreeNode* sortedListToBST(ListNode* head) { if (head == nullptr) return nullptr; ListNode* fast = head; ListNode** slow = &head; while (fast->next != nullptr && fast->next->next != nullptr) { fast = fast->next->next; slow = &((*slow)->next); } TreeNode* root = new TreeNode((*slow)->val); root->right = sortedListToBST((*slow)->next); *slow = nullptr; root->left = sortedListToBST(head); return root; }
@StrayWarrior You are right, but we can easily fix this issue by first copy a same list from head , then call this function.For memory leak issue we can call delete the slow->next node to deallocate it. So there is no problem with the algorithm, but we need to consider more detail when implement it,isn't it?
- @RainbowSecret try using: slow = head, fast = head->next; then slow is always the middle node, use prev which starts from dummy, prev is the former node of middle. like this:
class Solution { public: TreeNode* sortedListToBST(ListNode* head) { if(!head) return NULL; if(!head->next) return new TreeNode(head->val); ListNode *dummy = new ListNode(-1),*p1 = head, *p2 = head->next, *prev; dummy->next = head; prev = dummy; while(p2 && p2->next){ p1 = p1->next; p2 = p2->next->next; prev = prev->next; } prev->next = NULL; TreeNode *root = new TreeNode(p1->val); root->left = sortedListToBST(dummy->next); root->right = sortedListToBST(p1->next); return root; } };
Looks like your connection to LeetCode Discuss was lost, please wait while we try to reconnect. | https://discuss.leetcode.com/topic/10979/clean-c-solution-recursion-o-nlogn-with-comment | CC-MAIN-2018-05 | refinedweb | 949 | 60.24 |
The process to set up a client library varies by programming language. Select the tab for the language you're using for development. If you are using a language not available here, see the complete table list of available Downloads.
Java
Using the Google APIs Client Library for Java requires that you download the core Java client library and the Google Calendar API Java library .jar files. Download the files and include them in your build path:
You can now import the classes you will need using the following statements:
import com.google.api.client.http.HttpTransport; import com.google.api.client.http.javanet.NetHttpTransport; import com.google.api.client.json.jackson.JacksonFactory; import com.google.api.services.calendar.Calendar; import com.google.api.services.calendar.model.*;
Python
To install the Google API Python Client on a system, you must use either the
pip command or
easy_install command.
easy_install --upgrade google-api-python-client
pip install --upgrade google-api-python-client
You can now import the classes you will need using the following statements:
from apiclient.discovery import build from oauth2client.client import OAuth2WebServerFlow import httplib2
If you need to access the Google API Python Client from a Google App Engine project, you can follow the instructions here.
PHP
There are multiple ways to install the Google APIs Client Library for PHP. See the installation page for the details. One method is to clone the github repository, as shown below.
$ git clone google-api-php-client
You can now import the classes you will need using the following statement:
require_once 'google-api-php-client/autoload.php';
.NET
Install the Google Calendar NuGet package.
Ruby
We recommend using RVM to configure and manage your Ruby and gem installation. Once you have RVM installed, proceed to install the google-api-client gem:
gem install google-api-client
You can now import the classes you will need using the following import statements:
require 'rubygems' require 'google/api_client' | https://developers.google.com/google-apps/calendar/setup | CC-MAIN-2015-14 | refinedweb | 324 | 57.47 |
The phrase “better safe than sorry” gets thrown around whenever people talk about monitoring or getting observability into your AWS resources but the truth is that you can’t sit around and wait until a problem arises, you need to proactively look for opportunities to improve your application in order to stay one step ahead of the competition. Setting up alerts that go off whenever a particular event happens is a great way to keep tabs on what’s going on behind the scenes of your serverless applications and this is exactly what I’d like to tackle in this article.
AWS Lambda Metrics
AWS Lambda is monitoring functions for you automatically, while it reports metrics through the Amazon CloudWatch. The metrics we speak of consist of total invocations, throttles, duration, error, DLQ errors, etc. You should consider CloudWatch as a metrics repository, being that metrics are the basic concept in CloudWatch and they represent a set of data points which are time-ordered. Metrics are defined by name, one or even more dimensions, as well as a namespace. Every data point has an optional unit of measure and a time stamp.
And while Cloudwatch is a good tool to get the metrics of your functions, Dashbird takes it up a notch by providing that missing link that you’d need in order to properly debug those pesky Lambda issues. It allows you to detect any kinds of failures within all programming languages supported by the platform. This includes crashes, configuration errors, timeouts, early exits, etc. Another quite valuable thing that Dashbird offers is Error Aggregation that allows you to see immediate metrics about errors, memory utilization, duration, invocations as well as code execution.
AWS Lambda metrics explained
Before we jump in I feel like we should discuss the metrics themselves to make sure we all understand and know what every term means or what they refer too.
From there, we’ll take a peek at some of the namespace metrics inside the AWS Lambda, and we’ll explain how do they operate. For example
Invocations will calculate the number of times a function has been invoked in response to invocation API call or to an event which substitutes the RequestCount metric. All of this includes the successful and failed invocations, but it doesn’t include the throttled attempts. You should note that AWS Lambda will send mentioned metrics to CloudWatch only if their value is at the point of nonzero.
Errors will primarily measure the number of failed invocations that happened because of the errors in the function itself which is a substitution for ErrorCount metric. Failed invocations are able to start a retry attempt which can be successful.
There are limitations we must mention:
- Doesn’t include the invocations that have failed because of the invocation rates exceeded the concurrent limits which were set by default (429 error code).
- Doesn’t include failures that occurred because of the internal service errors (500 error code).
DeadLetterErrors can start a discrete increase in numbers when Lambda is not able to write the failed payload event to your pre-configured DeadLetter lines. This incursion could happen due to permission errors, misconfigured resources, timeouts or even because of the throttles from downstream services.
Duration will measure the real-time beginning when the function code starts performing as a result of an invocation up until it stops executing. The duration will be rounded up to closest 100 milliseconds for billing. It’s notable that AWS Lambda sends these metrics to CloudWatch only if the value is nonzero.
Throttles will calculate the number of times a Lambda function has attempted an invocation and were throttled by the invocation rates that exceed the users’ concurrent limit (429 error code). You should also be aware that the failed invocations may trigger retry attempts automatically which can be successful.
Iterator Age is used for stream-based invocations only. These functions are triggered by one of the two streams: Amazon’s DynamoDB or Kinesis stream. Measuring the age of the last record for every batch of record processed. Age is the sole difference from the time Lambda receives the batch and the time the last record from the batch was written into the stream.
Concurrent Executions are basically an aggregate metric system for all functions inside the account, as well as for all other functions with a custom concurrent pre-set limit. Concurrent executions are not applicable for different forms and versions of functions. Basically, this means that it measures the sum of concurrent executions in a particular function from a certain point in time. It is crucial for it to be viewed as an average metric considering its aggregated across the time period.
Unreserved Concurrent Executions are almost the same as Concurrent Executions, but they represent the sum of the concurrency of the functions that don’t have custom concurrent limits specified. They apply only to the user’s account, and they need to be viewed as an average metric if they’re aggregated across the period of time.
Where do you start?
Cloudwatch
In order to access the metrics using the CloudWatch console, you should open the console and in the navigational panel choose the metrics option. Furthermore, in the CloudWatch Metrics by Category panel, you should select the Lambda Metrics option.
Dashbird
To access your metrics you need to log in the app and the first screen will show you a bird’s eye view of all the important stats of your functions. From cost, invocations, memory utilization, function duration as well as errors. Everything is conveniently packed on to a single screen.
Setting Up Metric Based Alarms For Lambda Functions
It is essential to set up alarms that will notify you when your Lambda function ends up with an error, so you’ll be able to react proficiently.
Cloudwatch
To set up an alarm for failed function (can be caused by the fall of the entire website or even an error in the code) you should go to the CloudWatch console, choose Alarms on your left and click Create Alarm. Choose the “Lambda Metrics,” and from there, you should look for your Lambda name in the list. From there, check the box of a row where the metric name is “Error.” Then just click Next.
Now, you’ll be able to put a name and a description for the alarm. From here, you should set up the alarm to be triggered every time “Errors” are over 0, for one continuous period. As the Statistic, select the “sum” and the minutes required for your particular case in the dropdown “Period” window.
Inside the Notification box, choose the “select notification list” in a dropdown menu and choose your SNS endpoint. The last step in this setup is to click the “Create Alarm” button.
Dashbird
Setting metric-based alerts with Dashbird is not as complicated, in fact it’s quite the opposite. While in the app, go to the Alerts menu and click on the add button on the right side of your screen and give it a name. After that you select the metric you are interested in, which can either be a coldstart, retry, invocation and of course error. All you have to do is select the rules (eg: whenever the number of coldstards are over 5 in a 10-minute window alert me) and you are done.
How do you pick the right solution for your metric based alerts?
Though question. While Cloudwatch is a great tool, the second you have more lambdas in your system you’ll find it very hard to debug or even understand your errors due to the large volume of information. Dashbird, on the other hand, offers details about your invocations and errors that are simple and concise and have a lot more flexibility when it comes to customization. My colleague Renato made a simple table that compares the two services.
I’d be remiss not to make an observation: with AWS CloudWatch whenever a function is invoked, they spin up a micro-container to serve the request and open a log stream in CloudWatch for it. They re-use the same log stream as long as this container remains alive. This means the same log stream gets logs from multiple invocations in one place.
This quickly gets very messy and it’s hard to debug issues because you need to open the latest log stream and browse all the way down to the latest invocations logs While in Dashbird we show individual invocations ordered by time which makes it a lot easier for developers to understand what’s going on at any point in time.
Have anything useful to add? Please do so in the comment box below.
read original article here | https://coinerblog.com/getting-down-and-dirty-with-metric-based-alerting-for-aws-lambda-44dee79df49a/ | CC-MAIN-2019-43 | refinedweb | 1,464 | 58.52 |
Earlier this week we discussed how to build fully-featured React forms with KendoReact, which is another great React forms tutorial. In this article, we’ll take a step a back and discuss the challenges inherent to building forms with just React, such as state management and validation, and then how to solve them with the KendoReact Form component.
Forms are hard, regardless of the framework or libraries you use to create them. But with React forms are especially tricky, as the official React form documentation is brief, and doesn’t discuss topics that real-world forms always need, such as form validation.
In this article you’ll learn how to build React forms the easy way using the newly released KendoReact Form component. You’ll learn how to simplify your form’s state management, how to integrate with custom components such as date pickers and drop-down lists, and how to implement robust form validation.
TIP: Check out the KendoReact Form Design Guidelines for best practices and usage examples for building great forms in React.
Let’s get started.
For this article’s demo we’ll look at a few different ways to implement the sign-up form below.
Let’s start by looking at an implementation of this form with no libraries, as it’ll show some of the challenges inherent to building forms with React today. The code to implement the form is below. Don’t worry about understanding every detail, as we’ll discuss the important parts momentarily.
import React from "react"; import countries from "./countries"; export default function App() { const [email, setEmail] = React.useState(""); const [password, setPassword] = React.useState(""); const [country, setCountry] = React.useState(""); const [acceptedTerms, setAcceptedTerms] = React.useState(false); const handleSubmit = (event) => { console.log(` Email: ${email} Password: ${password} Country: ${country} Accepted Terms: ${acceptedTerms} `); event.preventDefault(); } return ( <form onSubmit={handleSubmit}> <h1>Create Account</h1> <label> Email: <input name="email" type="email" value={email} onChange={e => setEmail(e.target.value)} required /> </label> <label> Password: <input name="password" type="password" value={password} onChange={e => setPassword(e.target.value)} required /> </label> <label> Country: <select name="country" value={country} onChange={e => setCountry(e.target.value)} required> <option key=""></option> {countries.map(country => ( <option key={country}>{country}</option> ))} </select> </label> <label> <input name="acceptedTerms" type="checkbox" onChange={e => setAcceptedTerms(e.target.value)} required /> I accept the terms of service </label> <button>Submit</button> </form> ); }
You can also try out this code on StackBlitz using the embedded sample below.
For this example, the first thing to note is how much code it takes to track the state of the form fields. For instance, to track the state of the email address this example uses a hook.
const [email, setEmail] = React.useState("");
Next, to ensure the email remains up-to-date as the user interacts with the form, you must add
value and
onChange attributes to the email address
<input>.
<input name="email" type="email" value={email} onChange={e => setEmail(e.target.value)} required />
Every field requires the same chunks of code, which can easily get verbose as your forms get more complex. And this verbosity has consequences, as verbose code is harder to maintain, and is also harder to refactor as your business requirements change.
Also, consider that this example’s sign-up form is purposefully simple to make this article easier to follow. Most real-world forms have far more fields and far more business logic, and as complexity rises, so does the importance of reducing the amount of code you need to write and maintain.
In order to clean up our example form’s logic, and in order to add powerful features like form validation and custom components, let’s look at how to refactor this form to use the KendoReact Form component.
The KendoReact Form is a small (5KB minified and gzipped) and fast package for state management with zero dependencies.
You can install the package into your own app from npm.
npm install --save @progress/kendo-react-form
The package contains two main components, Form and Field. The basic idea is you wrap your HTML
<form> with the Form component, and then use one Field component for each field in your form. The structure looks like this.
<Form ...> <form> <Field name="email" /> <Field name="password" /> ... <button>Submit</button> </form> </Form>
With that basic structure in mind, next, take a look at the code below, which shows our sign-up form example adapted to use the KendoReact Form and Field components. Again, don’t worry about understanding all the details here yet, as we’ll discuss the important parts momentarily.
import React from "react"; import { Form, Field } from "@progress/kendo-react-form"; import countries from "./countries"; export default function App() { const handleSubmit = (data, event) => { console.log(` Email: ${data.email} Password: ${data.password} Country: ${data.country} Accepted Terms: ${data.acceptedTerms} `); event.preventDefault(); } return ( <Form onSubmit={handleSubmit} initialValues={{}} render={(formRenderProps) => ( <form onSubmit={formRenderProps.onSubmit}> <h1>Create Account</h1> <Field label="Email:" name="email" fieldType="email" component={Input} /> <Field label="Password:" name="password" fieldType="password" component={Input} /> <Field label="Country:" name="country" component={DropDown} options={countries} /> <Field label="I accept the terms of service" name="acceptedTerms" component={Checkbox} /> <button>Submit</button> </form> )}> </Form> ); }
The first thing to note about this code is the lack of verbose state-management code. In fact, to get your form’s data, all you need to do is provide on
onSubmit prop on the Form component.
<Form onSubmit={handleSubmit}
And then, make sure each Field you use has a
name attribute.
<Field name="email" ... /> <Field name="password" ... />
If you do, the Form component will pass the
onSubmit handler an object that contains all the form’s data when the user submits the form. Here’s what that looks like in a live version of the form.
The other thing the Form component provides is the ability to render your fields using custom components, which our example does through the
component prop.
<Field ... component={Input} /> <Field ... component={Input} /> <Field ... component={DropDown} /> <Field ... component={Checkbox} />
The Form passes these custom components a variety of props, and these props allow you to render your fields according to your design and business requirements. For example, here’s how our example renders the custom
Input component.
const Input = (fieldProps) => { const { fieldType, label, onChange, value } = fieldProps; return ( <div> <label> { label } <input type={fieldType} value={value} onChange={onChange} /> </label> </div> ); };
NOTE: Although you have full control over how you render your fields, all KendoReact Fields require you to use controlled components. You can read more about controlled components on the React documentation.
And here’s what that example looks like on StackBlitz.
This ability to render custom components gives you the ability to consolidate how you display form controls throughout your application. It also gives you a logical place to implement more advanced form functionality, such as form validation. Let’s look at how to do that next.
The KendoReact Form provides a series of APIs that make it easy to add custom form validation logic. To see what this looks like, let’s return to our email input, which currently looks like this.
<Field label="Email:" name="email" fieldType="email" component={Input} />
To add validation, let’s start by adding a
validator prop to the field, which we’ll point at a function that determines whether the field’s contents are valid. For example, here is how you can ensure that email is a required field.
// Returning an empty string indicates that the field is valid. // Returning a non-empty string indicates that the field is NOT valid, // and the returned string serves as a validation message. const requiredValidator = (value) => { return value ? "" : "This field is required"; }
<Field label="Email:" name="email" fieldType="email" component={Input} validator={requiredValidator} />
In our example we want to enforce that the user provided an email address, and also that the user provided a valid email address. To do that we’ll tweak add a new email validator using the code below.
const emailValidator = (value) => ( new RegExp(/\S+@\S+\.\S+/).test(value) ? "" : "Please enter a valid email." );
And then pass both the required and email validators for the
validator prop.
<Field label="Email:" name="email" fieldType="email" component={Input} validator={[requiredValidator, emailValidator]} />
Now that you have a way of determining whether fields are valid, your last step is visually displaying that information to your users. Let’s do that by returning to your custom Input component, which currently looks like this.
const Input = (fieldProps) => { const { fieldType, label, onChange, value } = fieldProps; return ( <div> <label> { label } <input type={fieldType} value={value} onChange={onChange} /> </label> </div> ); };
To add a validation message you’ll need to use three additional provided props:
valid,
visited, and
validationMessage. The code below takes these new props, and uses them to display a validation message to the user on fields with errors.> ); };
The KendoReact form also provides a helpful
allowSubmit prop, making it easy for you to disable form submission until the user fixes all problems.
<Form render={(renderProps) => ( ... <button disabled={!renderProps.allowSubmit}> Submit </button> )}> </Form>
Here’s what all of this looks like in action.
The beauty of the KendoReact Form is just how easy it is to customize everything you see to meet your real-world requirements.
Don’t want to disable your app’s submit button? Then don’t include the
allowSubmit logic. Want to show your validation messages in a different place, or use different class names? Then adjust the logic in your custom components.
By using the KendoReact Form you get all this, and you also benefit from the easy state management that the Form provides. Before we wrap up, let’s look at one additional KendoReact Form benefit: how easily the Form integrates with the rest of KendoReact.
TIP: The validation we covered in this article was done at the Field level, but the KendoReact Form also allows you to perform Form-level validation, which can be useful for complex validation that spans many fields.
The KendoReact Form is a lightweight and standalone package, but it includes the ability to integrate with the rest of KendoReact.
And for good reason, as KendoReact provides a rich suite of form controls, allowing you to do so much more than what’s possible with built-in browser elements.
In the case of our example, using the KendoReact form controls will help you simplify our form’s logic, as well as allow us to add on some rich functionality.
For instance, recall that our previous custom Input implementation looked like this.> ); };
To enhance this Input let’s use the KendoReact Input, which you can add to your project by installing its package from npm.
npm install @progress/kendo-react-inputs
With the package installed, your only other step is importing the Input component into your application.
import { Input } from "@progress/kendo-react-inputs";
With that setup out of the way, rewriting the custom input is as simple as swapping
<input> for
<Input>, and removing some of the boilerplate props that KendoReact now handles for you. Here’s what that looks like.
const CustomInput = (fieldProps) => { const { fieldType, valid, visited, validationMessage, ...others } = fieldProps; const invalid = !valid && visited; return ( <div> <Input type={fieldType} {...others} /> { invalid && (<div className="required">{validationMessage}</div>) } </div> ); };
Just by doing this you get some new behavior for free, such as Material-Design-inspired floating labels.
If you take the next step and switch to using the KendoReact DropDownList and Checkbox, you also gain the ability to easily style your form controls.
Here’s what all of this looks like in the final version of this app in StackBlitz.
We’ve implemented a lot, but really we’re just getting started. For your more advanced needs you might want to bring in a ColorPicker, MaskedTextBox, Slider, Switch, DatePicker, TimePicker, DateRangePicker, AutoComplete, ComboBox, DropDownList, MultiSelect or Editor.
All the KendoReact form controls work with the KendoReact Form, and all adhere to KendoReact’s strict accessibility standards. It’s everything you need to build the rich forms your applications need.
Building forms with React can seem hard, but it doesn’t have to be. By using the KendoReact Form you can simplify your state management, implement form validation, and easily bring in custom components, such as additional KendoReact form controls.
The KendoReact Form is part of the KendoReact UI library for React, which contains 80+ similarly handy components. When you’re ready to get started with the Form, or want to check out the many other KendoReact components, go ahead and start a 30-day trial of KendoReact to see them in action.. | https://www.telerik.com/blogs/how-to-build-forms-with-react-the-easy-way?utm_medium=cpm&utm_source=jsweekly&utm_campaign=kendo-ui-react-blog-easy-react-form&utm_content=primary | CC-MAIN-2021-10 | refinedweb | 2,086 | 54.42 |
hey how do u use amouse in c++,is it possible and if yes to what extent,right click??
[/color][glowpurple]The ladder of sucess is never crowded at top:Napolean[/glowpurple]
What do you mean by "use a mouse"???
C++ is a programming language..
You can write code to take advantage of mouse clicks, mouse movement and everything in between..
Are you talking about your C++ IDE? Then the answer would be it depends on the IDE.. but 99.9% of the time yes you could use the mouse.(unless you're using a console editor like vi or pico (at a console without a mouse))
Peace,
HT
IT Blog: .:Computer Defense:.
PnCHd (Pronounced Pinched): Acronym - Point 'n Click Hacked. As in: "That website was pinched" or "The skiddie pinched my computer because I forgot to patch".
could u gimme an example or tell me where i cud get some info abt this
Originally Posted by phoenixmajestic
could u gimme an example or tell me where i cud get some info abt this
About what??? (How about you type with proper spelling and punctuation as well)... There's nothing to tell you.. You still haven't said which you are talking about... If you want to utilize the mouse in a C++ program that you're writing... then go look up documentation based on the graphics libraries and GUI support that you are including...
Originally Posted by phoenixmajestic
could u gimme an example or tell me where i cud get some info abt this
Oliver's Law:
Experience is something you don't get until just after you need it.
Try:
If you wish to use mouse clicks (this just works for clicking anywhere), then there is a way to do it in C++ (this will briefly tell you what to do):
#define KEY_DOWN(key_code)((GetAsyncKeyState(key_code)&0x8000)?1:0)
#define KEY_UP(key_code)((GetAsyncKeyState(key_code)&0x8000)?0:1)
#define KEY_LBUTTON 1
#define KEY_RBUTTON 2
#include <windows.h>
#include <iostream>
using namespace std;
int main()
{
while(1)
{
if(KEY_DOWN(KEY_LBUTTON)){cout<<"You have left mouse clicked"<<endl; Sleep(105);}
if(KEY_DOWN(KEY_RBUTTON)){cout<<"You have right mouse click"<<endl; Sleep(105);}
}
}
However bare in mind that this will work anywhere you click (outside of console, when console is minimised etc).
Hope this helps.
thanks guys
Forum Rules | http://www.antionline.com/showthread.php?274063-Mouse-in-c | CC-MAIN-2016-40 | refinedweb | 387 | 73.07 |
Hello.
I came across a serious problem with the rich data table when running on Mojarra 2.1.2. I would like to discuss possible solutions, b/c it is not obvious to me.
The problem can be recreated by having a data table with an ajax delete link on every row. The ajax re-renders the whole table. With Mojarra, you will find that even though the row is deleted in the data model, the table still shows the deleted row.
This happens b/c UIDataAdapter.processEvent() listens for PreRenderComponentEvent, which triggers the data model reset. Mojarra appears to have a bug, which prevents this event from being processed by the data table on postbacks. You can find the details about this Mojarra bug in the JIRA item I created:
To summarize the problem, the PreRenderComponentEvent is delivered to the parent of the data table rather than the data table itself, b/c the system event restore mechanism relies on grabbing the top component on the EL stack. The data table is restored before it is pushed onto the EL stack, so the PreRenderComponentEvent is never delivered to it on a postback and so the data model is not refreshed.
As a workaround I extended UIDataTable like this:
{code}
public class FixedUIDataAdapter extends UIDataTable {
@Override
public void restoreState(FacesContext context, Object stateObject) {
pushComponentToEL(context, null);
super.restoreState(context, stateObject);
popComponentFromEL(context);
}
}
{code}
This fixes the problem, but makes me worry about possible side effects. Can anyone comment on the possible repercussions of this fix?
In any case, this problem will need to be addressed. Ideally, the fix would be done on the Mojarra side, but failing that Richfaces might need to do a workaround...
Should I create a Richfaces jira to track this?
Val | https://developer.jboss.org/thread/170052 | CC-MAIN-2017-34 | refinedweb | 295 | 53.51 |
This article will continue to expand the capabilities of my reflection based event tests. Although my
VerifyEventCallbacks test method can inventory collections of EventHandler based events declared directly on a class, it only works on events backed by EventHandler delegates, and it won’t detect events declared on base classes. To be truly useful, these shortcomings need to be addressed.
If you have read the previous articles in this series, then I hope you have gained a decent grasp of some fundamental event system concepts. Here is a short table of contents for.
For your enjoyment and education, you can can get the code associated with these articles from GitHub. However, remember that this code is nothing more than a reimplementation of features already available in the ApprovalTests library, which is a free, open source library you can use to enhance your tests. If your primary interest is to use these features, then don’t bother with cut-and-paste, just get yourself a copy of ApprovalTests from SourceForge or NuGet.
Don’t know what ApprovalTests are? You will get more out of this article if you take a moment to watch a few videos in Llewellyn Falco’s ApprovalTests tutorial series on YouTube.
A Comprehensive Inventory
So far my tests against
Poco look pretty nice. I tried to keep the test general, because it would be nice to reuse my extension methods on objects besides
Poco instances. I’m worried that
Poco doesn’t represent objects I might find in the real world.
Here are a couple traits of
Poco that indicate it may not be complicated enough to model real world classes:
Poco‘s events are only based on
EventHandler. I should add some events based on
EventHandler<T>or the always popular (hated?)
PropertyChangedEventHandler.
Poco‘s events are all declared on
Poco. I should introduce a class that inherits from some of its events from a base class.
I’ll create a class with both these features by inheriting from
Poco and implementing
INotifyPropertyChanged on the descendant.
using System.ComponentModel; public class PocoExtension : Poco, INotifyPropertyChanged { public event PropertyChangedEventHandler PropertyChanged; protected virtual void OnPropertyChanged(object sender, PropertyChangedEventArgs e) { var handler = this.PropertyChanged; if (handler != null) { handler(sender, e); } } }
And I’ll write a test for this class.
[TestMethod] public void PocoExtensionTest() { var target = new PocoExtension(); target.ProcessStarted += Domain.HandleProcessStarted; target.PropertyChanged += Domain.HandlePropertyChanged; EventUtility.VerifyEventCallbacks(target); }
As a developer using
PocoExtension I can wire up a handler to
ProcessStarted (which is inherited from
Poco) just as easily as I can wire up a handler to
PropertyChanged (which is declared on
PocoExtension). Both events are part of the same object, so why should I need to worry about whether they are part of the same class? My intuition is that both events should show up in the inventory. Likewise, I wire up my handlers in the exact same manner, even though
ProcessStarted and
PropertyChanged leverage different delegate types to specify compatible handlers. They are both events, why should I have to worry about the delegate type? My intuition remains that both events should show up in the inventory. The test results do not meet my intuitive expectation:
Event callbacks for PocoExtension
With all the preamble in this article, it shouldn’t surprise you that neither event was found, but here is the final proof. More importantly, I have a failing test that I can use to guide me toward a solution. I have two problems to solve:
- Detect inherited events.
- Detect events not based on
EventHandler
Which one to attack first? The second requirement seems harder because all delegates inherit from
MulticastDelegate whether they are associated with events or not. It’s natural to think that
EventHandler<T> derives from
EventHandler but other than similar names (and sharing
MulticastDelegate as a base type) these two delegates have no relationship. Remember, delegates are their own types, and one of the rules for the delegate type is that they are all implicitly sealed (at the language level, the compiler can do what it likes when implementing delegates in IL). So, there is no least derived type that I could use to find all the “event” delegates because none of them can even derive from each other in the first place.
So, without inheritance to lean on, I’ll need some other way to filter for delegates that are related to events. I’ll come back to this problem after dealing with the easier problem of detecting inherited events on
PocoExtension.
Inherited Events
I wired up two event handlers in my test:
ProcessStarted and
PropertyChanged. Neither was detected. With
ProcessStarted I know that the problem is not the delegate type, because it is backed by an
EventHandler delegate. So, the problem with this event must be inheritance. In other words, my test does not detect
ProcessStarted because it is declared on the base class (
Poco).
Nothing has changed about the backing field I’m looking for, it’s still a private instance field on the declaring class. Although the reflection API provides a
BindingFlag that will flatten class hierarchies, private fields are not included, so this field is not showing up in my query. I need to implement this capability myself.
The procedure seems straightforward. Given an instance of
Type, I can use the
Type.BaseType property to get the less derived type. I can crawl up this inheritance chain in a loop until it ends, collecting private fields as I go. My guess is that I’m probably not the first developer to come up with this idea, and I wonder if maybe someone out there has a better solution than simple iteration. However, after some research, it looks like everyone is just iterating so that’s what I’ll do.
Here is my current
GetEventHandlers implementation.
public static IEnumerable<EventCallback> GetEventHandlers(this object value) { if (value == null) { return null; } return from fieldInfo in value.GetType().GetFields(NonPublicInstance) where typeof(EventHandler).IsAssignableFrom(fieldInfo.FieldType) let callback = fieldInfo.GetValue<EventHandler>(value) where callback != null select new EventCallback(fieldInfo.Name, callback); }
The problem is that
GetFields does not include inherited private fields (“inherited private” is a weird thing to say—all that I mean is these fields exist at runtime and have values). I can’t change the behavior of
GetFields, so I need to replace it. I’ll create an extension method on
Type that does what I need:
public static IEnumerable<FieldInfo> EnumerateFieldsWithInherited( this Type typeInfo, BindingFlags bindingFlags) { for (var type = typeInfo; type != null; type = type.BaseType) { foreach (var fieldInfo in type.GetFields(bindingFlags)) { yield return fieldInfo; } } }
This method is more or less implements the procedure described above, but instead of collecting the private fields, it streams them out as they are needed. Now I can test whether updating
GetEventHandlers to use this method will result in any changes to my results.
New results:
Event callbacks for PocoExtension ProcessStarted [0] Void HandleProcessStarted(System.Object, System.EventArgs)
The test found the inherited event, excellent. My previous tests still pass, so I haven’t broken anything either. That’s one down, on to the next challenge.
Find events of any type
The current
GetEventHandlers method does just what it advertises. It gets fields declared as
EventHandler. Unfortunately, nothing requires that events be declared as
EventHandler and there are many other options. Because delegates have no meaningful inheritance relationships, these other options don’t even inherit from
EventHandler.
I’ll approach this problem by refactoring the out the “defective” part of
GetEventHandlers. I’ll end up with two methods,
GetEventHandlers will continue to work as advertised, but it will use a new method,
GetEventsForType, for the heavy lifting.
public static IEnumerable<EventCallback> GetEventHandlers(this object value) { return value.GetEventsForType(typeof(EventHandler)); } public static IEnumerable<EventCallback> GetEventsForType( this object value, Type type) { if (value == null) { return null; } return from fieldInfo in value.GetType().EnumerateFieldsWithInherited(NonPublicInstance) where type.IsAssignableFrom(fieldInfo.FieldType) let callback = fieldInfo.GetValue<EventHandler>(value) where callback != null select new EventCallback(fieldInfo.Name, callback); }
After making this change, my existing test on
Poco still passes, and the output for my new test on
PocoExtension is the same. Technically,
PocoExtensionTest is failing, but that’s only because I haven’t approved anything yet. So this change hasn’t broken anything.
To get my test into a state where I can approve the result, I need to keep working on
GetEventsForType. In it’s current form,
GetEventsForType lets me specify a type to filter for, but I know that
PocoExtension uses more than one delegate type for its events. I would rather pass a collection of delegate types to
GetEventsForType (and pluralize the name). Once I have a collection of types, I can change the query to collect backing fields for any type in the collection.
public static IEnumerable<EventCallback> GetEventsForTypes( this object value, params Type[] types) { if (value == null) { return null; } return from fieldInfo in value.GetType().EnumerateFieldsWithInherited(NonPublicInstance) where types.Any(t => t == fieldInfo.FieldType) let callback = fieldInfo.GetValue<EventHandler>(value) where callback != null select new EventCallback(fieldInfo.Name, callback); }
Keeping in mind that delegate inheritance is restricted, the old query’s use of
IsAssignableFrom started to smell. If there are no inheritance chains for delegates, then a simple equality check should suffice. My existing tests are happy with this change, but
PocoExtensionTest still doesn’t detect the
PropertyChanged handlers.
The last piece of the puzzle is to create the collection of delegate types associated with
PocoExtension’s backing fields and pass it to
GetEventsForTypes. I know the reflection API has
GetProperties,
GetConstructors,
GetFields, why not
GetEvents? As a matter of fact, such a method exists. It returns an
EventInfo array, and each member includes an
EventHandlerType property that I can use to create a collection of types. Another extension method is in order, but naming it is hard.
GetEventHandlers still seems like the best name because its so general, but it implies a false relationship with the
EventHandler delegate. So, I’ll go with
Callback.
public static IEnumerable<EventCallback> GetEventCallbacks( this object value) { var types = value.GetType().GetEvents() .Select(ei => ei.EventHandlerType).Distinct(); return value.GetEventsForTypes(types.ToArray()); }
Notice that I don’t pass any binding flags to
GetEvents. Events should be public. They could be protected or perhaps even private but that seems kind of weird, so I’m not going to worry too much about it. I update
VerifyEventCallbacks to use this new method:
public static void VerifyEventCallbacks(object value) { // ... foreach (var callback in value.GetEventCallbacks()) { buffer.AppendLine(callback.ToString()); } // ... }
Making this change doesn’t break any passing tests, but it does “break” the test I haven’t approved yet. Instead of showing me any results, that test now throws an
InvalidCastException. Turns out that I did not pay enough attention when I let my refactoring tool extract
GetEventsForTypes.
Although the method takes an array of types, the query still attempts to cast everything to
EventHandler. Again, since inheritance relationships don’t exist between delegate types,
PropertyChangedEventHandler can’t cast to
EventHandler. My only choices are to use
Delegate or
MulticastDelegate for my cast.
Delegate will work fine because the only thing I need to do is call
GetInvocationList.
public static IEnumerable<EventCallback> GetEventsForTypes( this object value, params Type[] types) { // ... return from fieldInfo in value.GetType().EnumerateFieldsWithInherited(NonPublicInstance) where types.Any(t => t == fieldInfo.FieldType) let callback = fieldInfo.GetValue<Delegate>(value) where callback != null select new EventCallback(fieldInfo.Name, callback); }
Now my test makes it all the way to the call to
Approvals.Verify and produces output:
Event callbacks for PocoExtension PropertyChanged [0] Void HandlePropertyChanged(System.Object, System.ComponentModel.PropertyChangedEventArgs) ProcessStarted [0] Void HandleProcessStarted(System.Object, System.EventArgs)
More importantly, it’s output that I can approve. Cue Borat: “Great Success!”
Cleanup
Hopefully cleanup will be easier this time when compared to the “Making it Better” section in the last segment of this series. But a little cleanup is necessary because once again
GetType is called before making a null check, this time in
GetEventCallbacks.
Here’s a test to detect the defect:
[TestMethod] public void NullHasNoEventCallbacks() { Assert.IsFalse(ReflectionUtility.GetEventCallbacks(null).Any()); }
For previous null-checks I used a if-null-return-null pattern to handle the null case. However, as these null checks multiply I’m getting tired of writing the same code over and over again, simple as it may be. So in this test I’m not going to look for null when null is returned, I’m going to look for an empty collection. If I can come up with a NullObject solution to the null cases that I like, then I’ll refactor my null checks to use that instead of returning null.
I want a NullObject that is a do-nothing implementation of
Type. The
NullType should respond to
GetFields and
GetEvents calls with empty arrays. Before I run off an create one, I should see if the framework already has a
Type that would work as a
NullType. Turns out that a suitable type does exist:
typeof(void). MSDN says
System.Void is rarely useful in a typical application and that it is used by classes in
System.Reflection. I’m doing reflection, so I’ll use it too. I just need to make sure that when I try to get a value’s type, that
System.Void is used when that value is null.
public static Type GetType(object value) { return value == null ? typeof(void) : value.GetType(); }
Notice that this method is not an extension method. Now I change
GetEventCallbacks to use the new method. That gets me past my first null reference exception, but my test is still hitting a null reference when it tries to call
Any on the results from
GetEventCallbacks. I need to follow the execution a little further an make sure that
GetEventCallbacks never returns null. (By the way Code Contracts would be a great way to help diagnose and resolve this type of issue, but I’ll just investigate it “by hand” for this example.)
Remember that
GetEventsForTypes will return null when value is null. I’ll update this method to use the new
GetType method instead of a local null check and see what happens.
public static IEnumerable<EventCallback> GetEventsForTypes( this object value, params Type[] types) { return from fieldInfo in GetType(value).EnumerateFieldsWithInherited(NonPublicInstance) // ... }
NullHasNoEventCallbacks passes after making this update! However, an older test,
NullHasNoProcessCompletedHandler, fails now. This test is expecting
GetEventHandlers to return null, so I need to update its expectation.
[TestMethod] public void NullHasNoProcessCompletedHandler() { Assert.IsFalse(ReflectionUtility.GetEventHandlers(null).Any()); }
This new version of the test passes. Now I’ll just search for any more null-checks and see if I can use the new method there. The last candidate null check is in
VerifyEventCallbacks. I put a null check in there to make it safe to retrieve the type name. If I use
System.Void then instead of verifying an empty result I’ll get a result like this:
Event callbacks for Void
I’m on the fence on whether that is better than the empty result, but I lean toward the empty result. I think that seeing an empty result is more likely to make me consider that I passed in a null value rather than seeing “Void”. So I leave
VerifyEventCallbacks alone.
Relationship with EventApprovals
Unfortunately these scenarios are not supported in ApprovalTests 2.0. When I wrote EventApprovals, I needed to inventory the event handlers on a WinForms application. Neither inheritance problems nor problems with delegate types surfaced while using EventApprovals against a WinForms target, because of the custom event implementation WinForms uses.
Eventually, when I used EventApprovals with a POCO class which extended an
INotifyPropertyChanged implementer, I encountered these problems and figured this stuff out. The good news is that Llewellyn and I got together recently and these fixes have made their way upstream into ApprovalTests 2.0+. As of this writing, you should compile ApprovalTests from source if you need these features immediately. If you’re reading this later on, and you have ApprovalTests 2.1 or greater, then you already have these features. Once you have a version of ApprovalTests with these fixes, you don’t have to do anything special.
EventApprovals.VerifyEvents takes advantage of them automatically.
In terms of implementation, the delegate type issue is solved almost identically to what I’ve shown here. For inheritance, Llewellyn thought it would be fun to solve the problem with recursion, so that’s a little different.
Up Next
I’m feeling better about my tools now. To review, these extension methods can dynamically find all event-backing delegates, regardless of type. And they can find delegates no matter where they are declared in the class hierarchy. It looks like this set of extension methods can handle all my event testing needs for any object, but is that really so?
It turns out that there is large and important set of events which these methods will completely fail to find events for. I’ve mentioned it a couple times already: Windows Forms. I’ll take a look at WinForms events next time in: “Beyond the Event Horizon: WinForms Event System” | https://ihadthisideaonce.com/2012/09/09/beyond-the-event-horizon-event-complications/ | CC-MAIN-2020-05 | refinedweb | 2,853 | 56.96 |
Hello… I am writing a simple chat program, but i have encountered a
problem. I ask for a client to provide a handle, but it he disconnects
during a gets call, it does weird things.
def get_handle(session) for attempt in (0..2) session.print "Please Login\n" response=session.gets.strip # The problem re=/(Login)\s(.+)/ md=re.match(response) response=md[2] if([email protected]_value?(response)) # This just checks to make sure the handle isn't in use. return response else session.print "Handle alerady in use... " end end end
If the client disconnects at this point, ruby seems to be still waiting
for a response. Also, sometimes it sends ruby’s processor usage to 100%.
Is there a better way to do this? Or is there a way to give gets a
timeout? Thanks!
-Jon | https://www.ruby-forum.com/t/monitoring-socket-disconnect-during-gets-or-readline/70750 | CC-MAIN-2021-21 | refinedweb | 139 | 79.06 |
Path Class
Definition
public ref class Path abstract sealed
[System.Runtime.InteropServices.ComVisible(true)] public static class Path
type Path = class
Public Class Path
- Inheritance
-
- Attributes
-
Examples
The following example demonstrates some of the main members of the
Path class.
using namespace System; using namespace System::IO; int.):" ); Collections::IEnumerator^ myEnum = Path::InvalidPathChars->GetEnumerator(); while ( myEnum->MoveNext() ) { Char c = *safe_cast<Char^>(myEnum->Current); Console::WriteLine(. End Sub End Class
Remarks.
.NET Core 1.1 and later versions and .NET Framework 4.6.2 and later versions also support access to file system objects that are device names, such as "\?\C:".
For more information on file path formats on Windows, see File path formats on Windows systems. exception if the string contains characters that are not valid in path strings, as defined in the characters returned from the GetInvalidPathChars method. a list of common I/O tasks, see Common I/O Tasks. | https://docs.microsoft.com/en-us/dotnet/api/system.io.path?view=netframework-1.1 | CC-MAIN-2020-10 | refinedweb | 151 | 60.21 |
You want to pick a random element from a collection, where each element in the collection has a different probability of being chosen.
Store the elements in a hash, mapped to their relative probabilities. The following code will work with a hash whose keys are mapped to relative integer probabilities:
def choose_ weighted(weighted) sum = weighted.inject(0) do |sum, item_and_weight| sum += item_and_weight[1] end target = rand(sum) weighted.each do |item, weight| return item if target <= weight target -= weight end end
For instance, if all the keys in the hash map to 1, the keys will be chosen with equal probability. If all the keys map to 1, except for one which maps to 10, that key will be picked 10 times more often than any single other key. This algorithm lets you simulate those probability problems that begin like, "You have a box containing 51 black marbles and 17 white marbles…":
marbles = { :black => 51, :white => 17 } 3.times { puts choose_weighted(marbles) } # black # white # black
I'll use it to simulate a lottery in which the results have different probabilities of showing up:
lottery_probabilities = { "You've wasted your money!" => 1000, "You've won back the cost of your ticket!" => 50, "You've won two shiny zorkmids!" => 20, "You've won five zorkmids!" => 10, "You've won ten zorkmids!" => 5, "You've won a hundred zorkmids!" => 1 } # Let's buy some lottery tickets. 5.times { puts choose_weighted(lottery_probabilities) ...
No credit card required | https://www.safaribooksonline.com/library/view/ruby-cookbook/0596523696/ch05s11.html | CC-MAIN-2017-13 | refinedweb | 242 | 65.73 |
Template Matching
Template matching (a.k.a. Match by example) is a simple way to query the space - The template is a PONO.FirstName = "John"; Person person = spaceProxy.Read(template);
Read an entry of type Person whose FirstName is John and LastName is Smith:
Person template = new Person(); template.FirstName = "John"; template.LastName = "Smith"; Person person = spaceProxy.Read(template);
If none of the properties are set, all the entries of the type are matched. For example, to count all entries of type Person:
int numOfPersons = spaceProxy.Count(new Person());
If the template class is null, all the entries in the space are matched. For example, to clear all entries from the space:
spaceProxy.Clear(null);
Indexes
GigaSpaces XAP. For more information see Indexing.
Inheritance Support
Template Matching support inheritance relationships, so that entries of a sub-class are visible in the context of the super class, but not the other way around. For example, suppose class Citizen extends class Person:
spaceProxy.Write(new Person()); spaceProxy.Write(new Citizen()); // Count persons - should return 2: int numberOfPersons = spaceProxy.Count(new Person()); // Count citizends - should return 1: int numberOfCitizens = spaceProxy.Age = 30; spaceProxy.Write(p1); // Read person from space: Person p = spaceProxy = ?)] attribute. For example:
public class Person { private int age = -1; [SpaceProperty(NullValue = -1)] public int Age { get; set; } ....... }
We’ve indicated that
-1 should be treated as
null when performing template matching, and initialized age to
-1 so users of Person class need not set it explicitly whenever they use it. For more information refer to Object Metadata.
Properties of primitive types are implicitly boxed when stored in the space and unboxed when reconstructed to a PONO. It is highly recommended to use the primitive wrapper classes instead of primitives to simplify the code and avoid user errors. | https://docs.gigaspaces.com/xap/10.1/dev-dotnet/query-template-matching.html | CC-MAIN-2021-49 | refinedweb | 297 | 51.55 |
Here's a little details on my Raspberry Pi Midi-keyboard project I made as a degree work at school. This is my first attempt on making something with a RaspPi.
I've been interested in music and technology (and music technology) for almost my entire life, and when we had to come up with something to create as a degree work at our (finnish) college, I thougt it would be awesome to combine those two and create an instrument. As I thought of a method of implementation, I came across to my RaspPi I'd had for a while, which was perfect for the project.
I'm using Debian Wheezy as the distro, for it suits well with my needs. I made the program with Python, which I'd never used before. Luckily I learned it a little along the way. As the sound driver I used ALSA which comes within the distro. The software-synthesis happens with Timidity, a software synthesizer that renders midi data in real time without an external synthesizer (please correct me if I said it wrong). With Timidy its simple to change your soundfonts by just changing the .sf2-file path to timidity's configuration file.
The code:
The program itself is quite simple. I use Pygame to recognize the keypresses and output the midi data. If the key is pressed, the defined midi value plays. If the key is raised, the sound turns off.
The notes-variable contains all the key values, for example:
Code: Select all
def keypress(): event = pg.event.wait() if event.type == pg.KEYDOWN: if notes.get(key, 0): midiout.note_on(notes.get(key) + noteOffset, velocity) return "d_" + key if event.type == pg.KEYUP: if notes.get(key, 0): midiout.note_off(notes.get(key) + noteOffset, 0) return "u_" + key
etc. You can just put the key names there, and the number defines what note plays (72 is C in the 4th octave).
Code: Select all
notes = { "n":72, "j":73 }
The rest of the code just defines different fuctions for some keys on the keyboard. With the up and down arrow keys you can change volume, left and right changes the instrument and so forth. You can really change them to do anything you like. As the keyboards layout I used a pattern made by Paul von Janko (google it). I also made a layout like in an accordion. You can change it to anything.
Mechanics:
For the project I used an old Keytronic keyboard that I bought from a local flea market for 1€. The RaspPi is a model B+, which fits under the keyboard (with the help of two drawer handles
Pics: ... 6.jpg?dl=0 ... 5.jpg?dl=0 ... 4.jpg?dl=0
So, there's a quick introduction to my project. Please feel free to leave feedback, good or bad. Cheers!
EDIT (3.10.2017):
The needed packages for the project are Python 2.7 and Pygame for it, ALSA (Advanced Linux Sound Architecture) and Timidity. You can install them straight through apt-get or by downloading them with wget.
Timidity needs to be set to use ALSA to produce the sounds and this can be done by going to /etc/init.d/timidity (It’s there so it launches when the device boots) and adding parameters
”-iAD -B2,10 -Os”
The parameters “iAD” and “-Os” make Timidity use ALSA as the sound driver and “-B2,10” sets the buffer size to 1024. The bigger the buffer size is, the more there’s latency (if it’s too small the sound distorts).
You can change the sound “fonts” by changing the .sf2-file path in Timidity’s config file at /etc/timidity/timidity.cfg.
The program’s source code can be found at my GitHub: ... eyboard.py
Images: ... 1.jpg?dl=0 ... 2.jpg?dl=0 ... 3.jpg?dl=0 ... 4.jpg?dl=0 ... 5.jpg?dl=0 ... 6.jpg?dl=0
Hopefully this clears out the project a little.
| https://www.raspberrypi.org/forums/viewtopic.php?f=38&p=1219820&sid=dcd4a26a9b1bdb92b601034f2391e105 | CC-MAIN-2017-43 | refinedweb | 665 | 74.79 |
Francois Wirth wrote:
>
> Hi,
>
> Just want to know if anyone considered developing an XML driven database
> like Tamino. I would be nice to work on an open source database that uses
> Xerces, Xalan etc. for the XML processing. I know this is a huge project,
> but I think there could be a lot of uses for this database and it would be a
> challenging project. It could just be based on XML technology XQL, Soap, XML
> Shemas etc.
>
> What do you think?
almost everybody in the Java/XML world, sooner or later, happen to
think: placing objects or trees in a relational database is a pain in
the ass. It's like mixing apples with oranges.
DBMS research created OODBMS along with things like ODMG, object
oriented query languages and such. Another derivation is EJB.
Now we have tree-stuctured documents.
Suppose you have a million XML pages, these are you data, you content.
The nice thing about trees is that you can add nodes at will, the nice
thing about namespaces is you can have multiple dimensions without
worrying on name collisions.. and XMLSchema still being able to validate
them.
So, you have n documents and you do
<xdb:database xmlns:
<xdb:section xdb:
<xdb:tree xdb:
<page>
<title>this is one article</title>
...
</page>
</xdb:tree>
...
</xdb:section>
<xdb:section xdb:
<xdb:tree xdb:
<news title="ASF starts an XML database">
blah blah
</news>
...
</xdb:tree>
<xdb:section>
</xdb:database>
this is the XML "dump" of your database while, internally, it should be
able to do special indexing to optmizize queries and all that, just like
any DBMS does.
What do you use as a query language?
Possible usages are:
- xpath
- xpointer
- xql
XPointer extends XPath with ranges (which might be very useful in this
case), but is only for "pop data", nothing to "push data".
XQL will sure add the notion of "joins" "insert" and all that but I
don't have ideas on its status.
Anyway, yes, something like this is _incredibly_ important indeed.
> What is the possibility of this happening?).
Of course, I see lots of uses of Prowler both from the Cocoon project as
well as independently as it stands now, but I see a bright future of
something like this since it covers a particular aspect of XML that we
do not yet cover and I believe very important.
I'm happy you started this discussion so that now I can see your
Again, this is open development: the fact that we start with some code
is _NOT_ to "stamp" a project with the Apache quality label, but to fuel
innovation, increase visibility and accelerate development.
And since I (rather egoistically, I admit :) need such technology, I'd
rather see it happening here with the Apache spirit rather than
somewhere else.
What do you!
------------------------- --------------------- | http://mail-archives.apache.org/mod_mbox/xml-general/200007.mbox/%[email protected]%3E | CC-MAIN-2015-48 | refinedweb | 471 | 70.73 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.