Guideline 2 encourages you to think of objects as bundles of services, not bundles of data. This guideline irreverently suggests that sometimes it's OK to design objects that are bundles of data.
On occasion, you will see objects that are nothing more than containers for data. I like to call such objects Messengers. A messenger allows you to package and send bundles of data. Often the data is passed to the messenger's constructor, and the messenger is sent along its way. Recipients of the messenger access the data via get methods. Messengers are usually short-lived objects. Once a recipient retrieves the information contained in a messenger, it usually kills the messenger (even if the news is good).
In general, you should move code to data as described in Guideline 1. Most of your object designs should be service-oriented objects, as described in Guideline 2. But on occasion, you may find yourself with some data that you don't know what to do with. Sometimes you know some information, but you don't know what behavior that information implies. You can't move the code to the data, because even though you have the data, you don't know what the code should do. In such cases, you can encapsulate the data inside a messenger object, and send the messenger to some recipient that does know the behavior implied by the data. The recipient extracts the data from the messenger and takes appropriate action.
One common example of messengers is exceptions. As mentioned in Guideline ?, exception objects are usually composed of little more than a small amount of data (if any), which is passed to the constructor, and some access methods to let catch clauses get at that data. The primary information in an exception is carried in the name of the exception itself.
package com.artima.examples.account.ex1;

/**
 * Exception thrown by <code>Account</code>s to indicate that
 * a requested withdrawal has failed because of insufficient funds.
 */
public class InsufficientFundsException extends Exception {

    /**
     * Minimum additional balance required for the requested
     * withdrawal to succeed.
     */
    private long shortfall;

    /**
     * Constructs an <code>InsufficientFundsException</code> with
     * the passed shortfall and no specified detail message.
     *
     * @param shortfall the amount in excess of available funds
     *     that caused a withdrawal request to fail.
     */
    public InsufficientFundsException(long shortfall) {
        this.shortfall = shortfall;
    }

    /**
     * Constructs an <code>InsufficientFundsException</code> with
     * the passed detail message and shortfall.
     *
     * @param message the detail message
     * @param shortfall the amount in excess of available funds
     *     that caused a withdrawal request to fail.
     */
    public InsufficientFundsException(String message, long shortfall) {
        super(message);
        this.shortfall = shortfall;
    }

    /**
     * Returns the shortfall that caused a withdrawal request
     * to fail. The shortfall is the minimum additional balance
     * required for the requested withdrawal to succeed.
     *
     * @return the amount in excess of available funds
     *     that caused a withdrawal request to fail.
     */
    public long getShortfall() {
        return shortfall;
    }
}
Events are another example of Messengers. Like exceptions, event objects often just encapsulate some data, which is passed to the constructor, and offer access methods to let listeners get at the data. For examples of this, see the Event Generator idiom.
Like exceptions and events, multivalued return objects are Messengers. A multivalued return object holds some data, which must be passed to its constructor, and offers access methods so the calling method can get at the returned data.
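For instance, a hypothetical multivalued return messenger (this class is not one of the guideline's listings; the name and fields are made up) might look like this:

// Hypothetical example: an immutable messenger used as a multivalued return object.
public final class DivisionResult {

    private final long quotient;
    private final long remainder;

    public DivisionResult(long quotient, long remainder) {
        this.quotient = quotient;
        this.remainder = remainder;
    }

    public long getQuotient() {
        return quotient;
    }

    public long getRemainder() {
        return remainder;
    }
}

A method such as a hypothetical divide(long dividend, long divisor) could construct one of these, and the caller would pull both values out via the get methods and then discard the messenger.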
Messengers are usually immutable (but not always: AWT events, for example, can be consumed).
In parting, I want to warn you to be suspicious of messengers when they appear in your designs. Challenge their existence. A messenger makes sense when you don't know the behavior that should accompany some data. If you do know the behavior, then you know the code that should use that data. In this case, you should refactor. You should move the code to the data, which will transform the messenger into a service-oriented object. This is, in fact, the very process demonstrated in Guideline 2: the Matrix of Listing 2-1 was a messenger, which was transformed into a service-oriented Matrix in Listing 2-3. To get from Listing 2-1 to Listing 2-3, I moved the code for matrix addition, subtraction, multiplication, conversion to string, etc., to the Matrix class that contained the matrix data.
Probably mention the attribute classes in the Service UI API, and talk about EJB Blueprints "value" objects. Or perhaps value objects belong in Immutable.
https://www.artima.com/interfacedesign/Messenger.html
Yes, btw: there will be an automatic converter from .tmLanguage to .sublime-syntax files.
Syntax Fun
Tmlanguage to sublime-syntax convertor
Textmate 2 has a nice feature that I’d like to see in Sublime Text, and that’s being able to generate the syntax name based on what you’re matching. Here is a cool example from the Markdown syntax in TextMate:
heading = {
    name = 'markup.heading.${1/(#)(#)?(#)?(#)?(#)?(#)?/${6:?6:${5:?5:${4:?4:${3:?3:${2:?2:1}}}}}/}.markdown';
    begin = '(?:^|\G)(#{1,6})\s*(?=\S^#]])';
    end = '\s*(#{1,6})?$\n?';
    captures = { 1 = { name = 'punctuation.definition.heading.markdown'; }; };
    contentName = 'entity.name.section.markdown';
    patterns = ( { include = '#inline'; } );
}
It adds scope names like
markup.heading.2.markdown and
markup.heading.3.markdown depending on the heading level of the markdown. I’ve recreated these scopes by duplicating the match definition 6 times.
It would be awesome to have some basic string replacement capabilities based on the match regex in the scope name.
[quote=“jps”]
Yes, btw: there will be an automatic converter from .tmLanguage to .sublime-syntax files.[/quote]
Excellent. Since the new system looks much nicer to work with, I hope this will stimulate community involvement in the creation of new and bugfix/update of existing language definitions.
To return to an earlier point, I’m wondering just how much room there is for improvement in Sublime’s syntax parser and what benefits such improvements could bring. I referred previously to expanding APIs in this area to facilitate the development of language aware plugins like code-refactoring, but it also struck me that more precise and detailed scopes could allow for the implementation of useful editor functions, for example “goto reference” (inverse of goto definition), in a lightweight manner. Thoughts?
[quote=“jps”]
Unfortunately not. It can only recognise LL(1) grammars, so the contents of the following lines can’t influence the prior lines.[/quote]
Do you know if an LL(1) parser is sufficient for syntax-highlighting C++? I know that C++ can’t be lexed with anything less than a Turing Machine, but I don’t know if syntax highlighting is as hard as lexing.
Great work so far! I absolutely love the new syntax file spec. I’ve been terrified about writing syntax files till now but I think I will give it another go once this is done. There are a lot of issues with C++ syntax highlighting that I want to fix.
First of, awesome!
Secondly, but muh time.
.
By the way, SyntaxHighlightTools goes in a different direction that defines a custom YAML-like syntax but it’s not really suited for the context model imo.
It’s nice that it bundles the tmPreferences stuff however which “works” but is still pretty clunky. It’s afaik the only extension file type that is still only available using ye old textmate-compatible plists besides tmTheme. Any plans for working on that in the future?
That sounds great! I would also like to know how much the usage of atomic groups for the popular keyword matches would improve performance compared to normal grouped matches because I never see them used anyway, besides maybe my own syntaxes.
find_by_selector or extract_scope are really useful in that regard, but sometimes a slightly better syntax crawling could be useful. Something similar to what jQuery does maybe with the parent, child or sibling queries. This should all be doable with the current APIs and some neat algorithms however, except for the weird case that facelessuser described. If I ever needed something more complex I’d likely write some of these algorithms myself but it would be interesting to know how taxing these two API calls are (for large files) and if it makes sense to implement more performant variants in ST itself.
Now to my own feedback regarding the new syntax format, which is a bit on the technical part.
The possibility to match multi-line stuff with barer context control is great.
The possibility to more or less “inject” matches into included contexts or other languages is also great with the prototype stuff.
Have you thought of including/pushing a specific context of a different syntax definition? Such as
push: "Packages/HTML/HTML.sublime-syntax:tag"or using another character that is usually invalid in filepaths. That would allow more flexible and organised syntax management and not require to define many multiple hidden syntax definitions.
Would it be feasible to allow some kind of reflexive context-awareness for these with_prototype-injected patterns? E.g. some way to only apply the pattern if “string” matches the entire current scope or only the deepest scope name. Or maybe just make it apply only in certain contexts?
Are named capture groups an option instead of always using numbers? It would surely be cleaner to have named capture groups and assign scope names to those groups by their name instead of the number sometimes. Oniguruma seems to support this according to the spec, but it’s not enabled for syntax definitions and I don’t know if the options are exclusive.
What happens when both YAML.tmLanguage and YAML.sublime-syntax are present?
The current YAML syntax def is still not exactly correct, but it’s better than the previous one. I’ll wait until you’ve settled the “bringing the syntax definitions up-to-date” part (which hopefully results in some form of open sourcing for contributions) before fixing it, however.
Will there be a .sublime-something replacement for the .tmPreferences file format?
[quote=“FichteFoll”]First of, awesome!
At this stage, no. YAML is a great format if you’re familiar with it, but the syntax can be quite unintuitive at times (e.g., exactly when a string needs to be quoted is not always obvious). I’m not ruling it out, it’s just something I’m approaching with caution.
[/quote]
Now that I’ve spent more time working with YAML, I’m still not sure how I feel about it. I don’t like the way it effectively forces 2 space indenting, and I don’t like that there’s just so much stuff in the spec, but once you know what’s going on it is nice to write. It’s easy enough to justify for things like syntax definitions, but not so much for everything else. Who would ever guess that double quotes and single quotes have different escaping rules, for example?
It’s not mentioned in the documentation, but you can use contexts in other .sublime-syntax files in the same way as local ones (e.g., push them or include them). They can be referenced via “Packages/Foo/Foo.sublime-syntax”, “Packages/Foo/Foo.sublime-syntax#main” (#main is implied if not present), or “Packages/Foo/Foo.sublime-syntax#strings”.
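For illustration, a minimal sketch of such a cross-file reference inside a context might look like this (Foo and the quote-matching pattern are just placeholders, not taken from any real package):

contexts:
  main:
    - match: '"'
      push: Packages/Foo/Foo.sublime-syntax#strings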
Currently the named context, and the transitive closure of the contexts it references are cloned and rewritten with the patterns mentioned in with_prototype. I’m open to extending this so that some contexts won’t be rewritten, but a use case would be nice
[quote=“FichteFoll”]
[/quote]
If the pop pattern (this processing is only done to the first, btw) has any backrefs, then when the context is pushed onto the stack, the pop pattern will be rewritten with the values that the push or set captured when the context was pushed onto the stack. If the regex used to push the context doesn’t define a corresponding capture, then the empty string is substituted.
This also has the implication that pop regexes are unable to use backrefs in the normal way, as they’ll be rewritten before the regex engine has a chance to see them. You can work around this with named captures though, which are ignored by the pop regex rewriting logic.
With regards to set vs push, they operate in exactly the same way, set just pops the current context and then pushes the given context(s) on the stack.
In principle they could be, I just haven’t done the work to enable them.
No special handling, they’ll just appear in the menu as two separate syntax definitions. My current recommendation is to leave both, but mark the tmLanguage as hidden.
That doesn’t surprise me, YAML is not easy to get right!
[sublime-syntax] Allow \1 in patterns that don't pop
Just a heads up, there may be some changes to the regex flavour used by sublime-syntax files in the next build, so you may not want to spend too much time playing with them in the current form.
I just discovered the conversion code and inspected a bit. (Initially because it didn’t work for me.)
- The reason why I looked into it in the first place: It doesn’t work for me.
Traceback (most recent call last):
File "C:\Program Files\Sublime Text 3\sublime_plugin.py", line 535, in run_
return self.run()
File "convert_syntax in C:\Program Files\Sublime Text 3\Packages\Default.sublime-package", line 415, in run
File "convert_syntax in C:\Program Files\Sublime Text 3\Packages\Default.sublime-package", line 344, in convert
File "convert_syntax in C:\Program Files\Sublime Text 3\Packages\Default.sublime-package", line 205, in make_context
File "convert_syntax in C:\Program Files\Sublime Text 3\Packages\Default.sublime-package", line 264, in make_context
File "convert_syntax in C:\Program Files\Sublime Text 3\Packages\Default.sublime-package", line 127, in format_external_syntax
File "convert_syntax in C:\Program Files\Sublime Text 3\Packages\Default.sublime-package", line 113, in syntax_for_scope
File "convert_syntax in C:\Program Files\Sublime Text 3\Packages\Default.sublime-package", line 103, in build_scope_map
File "./plistlib.py", line 104, in readPlistFromBytes
raise AttributeError(attr)
File "./plistlib.py", line 76, in readPlist
#
File "./plistlib.py", line 378, in parse
raise ValueError("unexpected key at line %d" %
xml.parsers.expat.ExpatError: not well-formed (invalid token): line 233, column 24
So, it appears that I have an invalid plist file in my packages somewhere. This needs to be caught and the conversion must continue. Furthermore, it would be nice if the name of the badly formatted file was displayed.
The format_regex function strips all leading whitespace of regular expressions and destroys indentation. I suggest using a function that detects the base indentation of a string (while skipping the first line, since that will most likely start with (?x)) and then strips only that indent instead of everything. Something like:
import textwrap

def format_regex(s):
    if "\n" in s:
        lines = s.splitlines(True)
        s = lines[0] + textwrap.dedent(''.join(lines[1:]))
        s = s.rstrip("\n") + "\n"
    return s
You use plistlib for parsing. I’ve had problems with that before, notably that import xml.parsers.expat, which plistlib uses, failed on some Linux distros. This was all on ST2 though, but you might still want to look into it. I also had problems with plistlib sometimes reporting an “unknown encoding” for files with <?xml version="1.0" encoding="UTF-8"?>. I have no idea why that would have happened, since that’s what it emits with plistlib.writePlist too.
ConvertSyntaxCommand is a WindowCommand, despite only really operating on a view. Is there a certain reason for this? If it’s because TextCommands automatically come with an edit parameter (and create an undo point, though I’m not sure about that) then I propose to add a new “ViewCommand” that has the view object in self.view but does not automatically create an edit. The reason is that window.active_view() is not “portable” and doesn’t work as expected if run with
some_view.run_command("") and some_view is inactive.
import yaml should really be part of the if __name__ == "__main__": section.
Edit: I discovered the $base include a while ago but never found out what it actually does. Considering it’s translated to $top_level_main, I assume that this is only relevant if the syntax def this include is in has been included externally?
By the way, I’m willing to work on an “official” Python.sublime-syntax file. I have a lot of experience with syntax definitions in general and hacked on the PythonImproved syntax to a decent extent. Because of copyright issues I’d start from scratch, which would also result in a clean structure I suppose.
Excellent work Jon, already much better than working with the tmLanguage files.
Would it be possible to ignore spaces in the regular expressions? This would help to break up regular expressions into groups to make reading and editing them easier. Currently they are just big regexes on one line. If I have missed a way to break them up visually please let me know.
@315234 (awesome name): Use the
(?x) switch. It will ignore all spaces after it, except for those within character sets. geocities.jp/kosako3/oniguruma/doc/RE.txt
Allowing a captured subexpression in the scope would allow creating the following syntax for sublime-syntax files.
contexts:
  main:
    - match: ""
      push: Packages/YAML/YAML.sublime-syntax
      with_prototype:
        - match: ([a-z]+(?:\.[a-z]+)+)
          scope: \1
Basically it would assign to every scope expression (dot separated list of words) that precise expression as its scope.
Besides being a file that describes its own syntax (which is neat) and a nice example of a templating language, it would be useful when editing other syntax files, because you would be able to see how your current theme would color the scope that you are assigning, on the syntax file itself.
@jps,
In sublimetext.com/docs/3/syntax.html, in the “header” section where it says “file_line_match” should be “first_line_match”. Found out the hard way…
An update on this: I’m currently doing some work that I expect will give a significant speedup for syntax highlighting. This will primarily benefit loading large files and file indexing. I don’t expect this to have any compatibility issues wrt regex flavour. Early results are promising, but there’s more work to be done until it’s ready. Next build won’t be until next week at the earliest, and that may well turn out to be optimistic.
@FichteFoll thanks for looking into it, will fix
@farsouth, thanks, will fix
@jps do you anticipate ever superseding all the TextMate formats (tmTheme, tmPreferences, …)? Not sure there’s a business case for doing so, but it would be cool to have a “pure Sublime” environment
Are there plans for a substitution syntax in regular expressions?
As I am crawling the YAML spec I find myself needing certain regex patterns like yaml.org/spec/1.2/spec.html#ns-uri-char frequently. It’s a pain to copy&paste these patterns all the time. It would be a lot easier if there was some syntax where I could write e.g.
{{ns-uri-char}} inside a regular expression and have it substituted by the actual pattern which I defined elsewhere. Other syntax suggestions welcome.
Hopefully you add named capture groups.
[quote=“simonzack”.[/quote]
I created a syntax highlighting using the new file format which is able to detect argument lists across lines like this, for Fortran. It pushes a new scope when it sees the equivalent of “(a,” and only pops it when it sees the closing parenthesis. You could try something similar. My code is here:
github.com/315234/SublimeFortran
That’s the idea generally, but the reason this is an unusual case, for ES anyway, is that it’s a (fortunately rare) bit of syntax where you need to look at a later token, which is permitted to fall on another line, in order to disambiguate the initial construct. Until you hit either a rest operator or the fat arrow, the syntax of the parameter group could as easily be a parenthesized expression. Since arrow functions are themselves expressions, and can appear in exactly the same places parenthesized expressions may appear, there’s no outer context you can use to help.
The good news here is that in ES it’s honestly pretty weird to have a line break before the arrow, and this is likewise true for the other small handful of potentially ambiguous cases, like binding patterns (for destructured assignment) vs literal objects and arrays, so you can use an Akira Tanaka Special to match braces and get it right 99.9% of the time. Here’s what I used when I was attempting an ES6 sublime-syntax (since given up temporarily, because I kept running into problems with hanging Sublime that I couldn’t successfully debug):
arrowFunctionWithParens:
  - match: '(?x) \( (?= (?<parens> [^\(\)] | \( \g<parens>* \) )* \)\s*=> )'
    scope: punctuation.definition.parameters.begin
    set: [arrowFunction_AFTER_PARAMS, params]
For binding patterns vs literals it’s basically the same deal, except the lookahead wants to match {} or [] and see if it is followed by the vanilla assignment operator. In this case admittedly it’s a little more likely to span lines, but I still think that’s a weird enough case not to fret over. Plus, most of the time a destructured assignment will appear in a declaration, not an expression, in which case there’s no need for a lookahead at all; for example the opening bracket in the following unambiguously begins an array binding pattern:
let
An update on where I’m at: I’ve built a new regex engine, replacing Oniguruma for the majority of the matching tasks within the syntax highlighter. Speed of the hybrid system is around twice that of the one in 3084, which will make large file loading and file indexing substantially more efficient. Aside from speed, there should be no user visible changes, and the regex syntax is identical.
Plan is to tie off the loose ends and get a new build out next week, and then get back to updating the syntax definitions.
In hindsight, given the regex flavour did end up being identical, I would have put off this work until later, but I wasn’t sure that was going to be the case until recently.
Can you send me an email at [email protected] with your .sublime-syntax file and a file that triggers a lockup? I’ll see what I can do to make sure it doesn’t happen.
https://forum.sublimetext.com/t/syntax-fun/14830/37
The following enum describes the possible states of a Client:

public enum ClientStates
{
    Ordinary,
    HasDiscount,
    IsSupplier,
    IsBlackListed,
    IsOverdrawn
}

As you can see, these options could be combined in several ways. Rather than creating separate properties in the Client class, we could enable a bitwise combination of enumeration values by using the FlagsAttribute when defining our enum type. See below.

[Flags]
public enum ClientStates
{
    Ordinary,
    HasDiscount,
    IsSupplier,
    IsBlackListed,
    IsOverdrawn
}

This means we can now use the bitwise OR operator to combine these enum values. For instance:

Client c = new Client();
c.ClientState = (ClientStates.HasDiscount | ClientStates.IsSupplier | ClientStates.IsOverdrawn);

By setting the FlagsAttribute, the values for each enum value now effectively become bitflag patterns that look as follows if you see them in binary form:

00000000   0
00000001   1
00000010   2
00000100   4
00001000   16
00010000   32
00100000   64
01000000   128

As you can see, this bit pattern in our enum values enables us to use the bitwise operator and combine enum values. It is plain to see that all enum values combined, using the bitwise OR, would result in 11111111.

So, how do we check what particular ClientState values have been set? The key is the bitwise AND operator. Let's take our existing Client object 'c'.

[Flags]
public enum ClientStates
{
    Ordinary,      // 0000
    HasDiscount,   // 0001
    IsSupplier,    // 0010
    IsBlackListed, // 0100
    IsOverdrawn    // 1000
}

So for instance, if we want to check c.ClientState for ClientStates.HasDiscount, we can use the following expression:

(c.ClientState & ClientStates.HasDiscount) == ClientStates.HasDiscount

Assuming that c.ClientState contains HasDiscount, IsSupplier and IsOverdrawn, the bit patterns would look like:

c.ClientState:            1011
ClientStates.HasDiscount: 0001
(Logical AND):            0001

So by AND'ing the bit value to check for with the combined bit pattern, we either get the same enum bit pattern value back, or we get a bit pattern which only contains zeros.

Here's a little helper method you can use to check for particular ClientStates enum flag values:

public static bool ContainsClientState(ClientStates combined, ClientStates checkagainst)
{
    return ((combined & checkagainst) == checkagainst);
}

That's it for now. Feel free to leave any comments. Cheers!
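A quick usage sketch of the helper above (added for illustration; as several of the comments below point out, this only behaves as intended when the enum members are explicitly assigned power-of-two values):

Client c = new Client();
c.ClientState = ClientStates.HasDiscount | ClientStates.IsOverdrawn;

if (ContainsClientState(c.ClientState, ClientStates.IsOverdrawn))
{
    // the IsOverdrawn flag is set; react accordingly
}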
Great article - exactly what I was looking for. Thanks very much
So if I want to pass values around, am I passing the enum descriptors, the integers or the binary ox0100?
Great article. Found what I was searching for.
The "And" usage was good.
Thanks.
The IL code suggests that decorating an enumeration with the Flags attribute does not automatically assign 2^n values. The FlagsAttribute remains in the disassembled code, so it's possible the runtime still fiddles with the values.
My personal opinion is that implicit language features can be handy but it's better to be explicit for whoever's maintaining the code.
Starting with this enum...
[Flags]
public enum MyEnum
{
None,
First,
Second,
Third,
Fourth,
All = First | Second | Third | Fourth
}
...disassembles to this class:
.class public auto ansi sealed EnumBestPractices.MyEnum2
extends [mscorlib]System.Enum
{
.custom instance void [mscorlib]System.FlagsAttribute::.ctor() = ( 01 00 00 00 )
.field public specialname rtspecialname int32 value__
.field public static literal valuetype EnumBestPractices.MyEnum None = int32(0x00000000)
.field public static literal valuetype EnumBestPractices.MyEnum First = int32(0x00000001)
.field public static literal valuetype EnumBestPractices.MyEnum Second = int32(0x00000002)
.field public static literal valuetype EnumBestPractices.MyEnum Third = int32(0x00000003)
.field public static literal valuetype EnumBestPractices.MyEnum Fourth = int32(0x00000004)
.field public static literal valuetype EnumBestPractices.MyEnum All = int32(0x00000007)
}
cigarettes.blogbugs.org - cigarettes blog
Thank you, this was very helpful. I'm doing:
public enum CommitType {
Insert = 1,
Update = 2,
Remove = 4 };
You gotta use 8 after 4 as enum values. So the line does not go on like 0,1,2,4,16,32, instead it must be 0,1,2,4,8,16,32.
How can i check if an enum contains other enum?
example:
[Flags]
public enum MyEnum
{
    None,
    First,
    Second,
    All = First | Second
}
.....
public MyEnum one = MyEnum.First;
public MyEnum two = MyEnum.All;
if (one is in two) // pseudo code
...
To enter bit values more easily, use the left shift operator, like this:
[FlagsAttribute]
public enum MyFlagsEnum
{
    None = 0,
    One = 1 << 1,
    Two = 1 << 2,
    Three = 1 << 3,
    Four = 1 << 4
}
@unknown programmer
Technically you don't "gotta use 8", you can use any combination of numbers that are single bits. You could choose 1 & 256 if you like. But it does appear the original poster screwed up the binary -> decimal conversion.
00000000 0
00000001 1
00000010 2
00000100 4
00001000 16 <-- 1000 = 8, rest of sequence is off
00010000 32 by a factor of 2
00100000 64
01000000 128
@original poster
Using ClientState(s?) as a parameter/variable is a bad idea if you want to do the bitwise combinations which I think was the whole point of this article.
ClientState state = IsSupplier | IsBlackListed;
This makes an illegal enum in C/C++, and your compiler should complain! 0110 was not assigned a value in the enum.
Storing into an int is legal.
int state = IsSupplier | IsBlackListed;
To make the other work, you have to add definitions to your enum. Which are not really something you want to do -- it would be tedious to map out all 255 combinations for an 8 bit enum.
IsSupplier_AND_IsBlackListed = 0110
or maybe
DUMMY1 = 0110
ClientState state = IsSupplier | IsBlackListed;
That IS a valid enum as long as you use the [Flags] attribute, which is the point of the article.
Here's how I roll:
public enum Column
{
None = 0,
Priority = 1 << 0,
Customer = 1 << 1,
Contract = 1 << 2,
Description = 1 << 3,
Tech = 1 << 4,
Created = 1 << 5,
Scheduled = 1 << 6,
DueDate = 1 << 7,
All = int.MaxValue
};
The [Flags] attribute allows you to do this:
Column MyColumns = Column.Customer | Column.Contract;
To check for a flag:
if((MyColumns & Column.Customer) != 0)
To set a flag:
MyColumns |= Column.Tech;
To clear a flag:
MyColumns &= ~Column.Tech;
To toggle (flip) a flag:
MyColumns ^= Column.Contract;
To clear all flags:
MyColumns = Column.None;
To set all flags:
MyColumns = Column.All;
To set all flags EXCEPT one or more:
MyColumns = Column.All ^ Column.Tech ^ Column.Status;
It's probably my Borland past, but I found the implementation of sets in C# rather weak. Delphi has
Pingback from Using Flagged Enums - Quickduck
For Java programmers, EnumSet looks like a much more civilized alternative to bit flags, and since it's implemented with bit arrays, it's as performant: cristianvrabie.blogspot.com/.../bitflags-and-enumset.html
This will not work, as [Flags] attribute is solely instructs ToString() of the enum to recognize combined values.
You still need to set values manually to powers of 2 to use it like flags.
Since you know how to toggle (flip) a flag with:
Then, to clear all flags:
MyColumns ^= MyColumns;
Another way to check for a flag:
if((MyColumns & Column.Customer) == Column.Customer)
To shorten your statements you can use extensions like this:
public static class EnumExtensions
{
    public static bool Is(this object value, object value2)
    {
        return (Convert.ToInt32(value2) & Convert.ToInt32(value)) == Convert.ToInt32(value2);
    }
}
if (MyColumns.Is(Column.Customer)) ...
Pingback from Enum values as bit flags « Keep it together man!
This article is completely wrong. Maybe next time you could test what you say before wasting everyone's time and making yourself look like a fool in the process.
http://weblogs.asp.net/wim/archive/2004/04/07/109095.aspx
Daniel Caujolle-Bert wrote:
> Hi Dirk,
>
> On Sun, 30 Nov 2003 at 20:19, Dirk Meyer wrote:
>> Hi,
>>
> ...
> In CVS.
Thanks. And it may be interesting for you to know that the current
Freevo cvs fully supports xine as video player (latest release only
used xine for dvd playback). Except one thing: how can I play a
specific track of a vcd? vcd:///dev/cdrom:1 doesn't work.
Dischi
-- 
Education is what you get from reading the small print; experience is
what you get from not reading it.
Hi Dirk,
On Sun, 30 Nov 2003 at 20:19, Dirk Meyer wrote:
> Hi,
>
...
In CVS.
Cheers.
-- 
73's de Daniel "Der Schreckliche", F1RMB.
-=- Daniel Caujolle-Bert -=- segfault@... -=-
-=- -=-
I have a Phantom of Inferno DVD that will not play properly using Xine. If
I donate a copy, or if I lend a copy (with an SASE to return it when done),
is there anyone who would work on getting the DVD to play?
The DVD is an adventure game which lets the user choose branches and which
plays a lot of audio over still scenes. My Apex player refuses to even play
it. It does appear to work on my PS2.
Miguel Freitas wrote:
>
> Anyway good luck with your new project, i know sometimes new projects
> are very fun! ;-)
>
> regards,
>
> Miguel
>
>
Well, my new pet project ( ) might help xine
indirectly in the future. ;-)
Cheers
James
Hi list,
after playing around a bit with xine's vidix output, I found a solution
for my problem with strange colors in the video (it appeared like a
color remapping, having blue-skined people, red sky, ...). After my
small change (see below), the colors are correct now. I have a ATI
graphics adaptor in my notebook, from which lspci says:
01:00.0 VGA compatible controller: ATI Technologies Inc Radeon Mobility
M6 LY (prog-if 00 [VGA])
Subsystem: Dell Computer Corporation: Unknown device 00: 0x08 (32 bytes)
Interrupt: pin A routed to IRQ 11
Region 0: Memory at e0000000 (32-bit, prefetchable) [size=128M]
Region 1: I/O ports at c000 [size=256]
Region 2: Memory at fcff0000 (32-bit, non-prefetchable) [size=64K]
Expansion ROM at <unassigned> -
Here my change to the current cvs of xine-lib:
Index: src/video_out/vidix/drivers/radeon_vid.c
===================================================================
RCS file: /cvsroot/xine/xine-lib/src/video_out/vidix/drivers/radeon_vid.c,v
retrieving revision 1.13
diff -u -r1.13 radeon_vid.c
--- src/video_out/vidix/drivers/radeon_vid.c 16 Nov 2003 17:18:10
-0000 1.13
+++ src/video_out/vidix/drivers/radeon_vid.c 1 Dec 2003 16:15:10 -0000
@@ -806,7 +806,7 @@
CAdjBCb = sat * OvHueCos * trans[ref].RefBCb;
CAdjBCr = sat * OvHueSin * trans[ref].RefBCb;
-#if 0 /* default constants */
+/* #if 0 /\* default constants *\/ */
CAdjLuma = 1.16455078125;
CAdjRCb = 0.0;
@@ -815,7 +815,7 @@
CAdjGCr = -0.8125;
CAdjBCb = 2.01708984375;
CAdjBCr = 0;
-#endif
+/* #endif */
OvLuma = CAdjLuma;
OvRCb = CAdjRCb;
OvRCr = CAdjRCr;
Maybe someone of you can check it and integrate it in future versions of xine, so that other users with the same hardware could benefit from it.
CU/all
P.S.: As I do not have the time for reading xine-devel, I would be happy
if someone could drop me a short note, if the patch is
accepted/rejected/etc.
--
Patrick Cernko | mailto:errror@... |
Quote of the Week: "/vmlinuz does not exist.
Installing from scratch, eh?"
(Debian Kernel Installation)
Hello guys,
I thought I'd clean up a bit the video bits in gnome-mime-data. Ended up
looking over the web for a bunch of video files to test out the magic.
Here are a few files that xine can't play.
SGI Movie (video/x-sgi-movie):
Vivo video (video/vnd.vivo):
Enjoy :)
---
Bastien Nocera <hadess@...>
"Oh, Jason, take me!" she panted, her breasts heaving like a student on
31p-a-pint night.
Hi James,
On Sun, 2003-11-30 at 22:08, James Courtier-Dutton wrote:
>.
I guess i owe apologies for the confusion. I never followed dvdnav
project closely and i assumed (wrongly it seems) that Rich was the
driving force behind the original xine-dvdnav plugin (before libdvdnav
was created).
I remember your messages here at xine-devel when you started moving code
from the dvdnav plugin into the library, and i know that you contributed
_a lot_ to xine to cleanly integrate the dvd support. So i'm sorry, i
feel bad about having mentioned Rich and not you. I'm asking John Knight
if it is possible to make a correction statement at least in the online
version of the article.
> My interest in xine development has currently lost momentum. I have
> started on another pet project of mine which is not related to xine in
> anyway.
I hope this is unrelated to the interview.
I'm sure that all your work to implement, debug and fix the dvd support
during the last two years or so was greatly appreciated. There are
several happy xine users around that may confirm that.
Anyway good luck with your new project, i know sometimes new projects
are very fun! ;-)
regards,
Miguel
Hi,
Having recompiled xine against my new glibc-2.3.2 +
linux-2.6.0-test11 headers, I have found that I can no
longer play MOV files that need Quicktime codecs in
Win32 DLLs. Instead, xine just hangs and must be
forcibly aborted:
-[ xiTK version 0.10.6 [XShm][XMB]]-
-[ WM type: (GnomeCompliant) Unknown ]-
Display is not using Xinerama.
XF86VidMode Extension (2.2) detected, trying to use
it.
XF86VidMode Extension: 34 modelines found.
XF86VidModeModeLine 1280x1024 isn't valid: discarded.
XF86VidModeModeLine 1280x960 isn't valid: discarded.
xine_interface: unknown param 10
xine_interface: unknown param 10
xine_interface: unknown param 10
xine_interface: unknown param 10
[1]+ Stopped xine
$ kill -abrt %
The last time I saw this, it was due to incorrect
optimisations in the libw32dll directory, but that is
long fixed. Does anyone have any other suggestions,
please?
Cheers,
Chris
Siggi Langauf wrote:
> On Sun, 30 Nov 2003, Miguel Freitas wrote:
>
>
>>John Knight of Linmagau.org made a great interview with xine developers.
>>I guess this must be a good reading for our users and contributors,
>>there are several interesting insights here... :)
>>
>>
>
>
> *sniff*, they didn't ask me :-(
> But it's a really nice article :-)
>
> [...]
>
>>PS: Siggi, may you add the link to xinehq when the machine migration is
>>complete?
>
>
> Well, yes, it's done.
> But unfortunately, the new machine didn't come back to life when I
> rebooted it.
> So right now, the old machine is back in DNS and I'm just hoping it'll be
> stable enough till somebody looks after the new one.
>
> Feel free to put a news item on the old xinehq for the time being!
> Right now, I'm too frustrated to do it myself...
>
> So long,
> Siggi
>.
My interest in xine development has currently lost momentum. I have
started on another pet project of mine which is not related to xine in
anyway.
Cheers
James
P.S. I will still fix bugs in xine-lib, but I will probably never
contribute to it as much as I did in the past. I will leave fast
forward, trick mode, and Software DTS decode to other people.
https://sourceforge.net/p/xine/mailman/xine-devel/?viewmonth=200312&viewday=1
Hello all. I’m having trouble understanding how to set up multiple
relationships between two models.
I have a database for an art gallery. It will be used to track people
(artists, buyers, donors, press contacts, etc) and artworks. Thus, I
have three main tables: “contacts,” “roles,” and “items,” plus a join
table “contact_roles.” A contact person can have multiple roles, i.e.
an artist or press contact could also buy an artwork, or a donor could
also be a volunteer.
Here are the tables and the classes:
create_table :contacts do |t|
t.column :first_name, :string
t.column :last_name, :string
end
create_table :roles do |t|
t.column :name, :string
end
create_table :contact_roles do |t|
t.column :contact_id, :string
t.column :role_id, :string
end
create_table :items do |t|
t.column :name, :string
t.column :dimensions, :string
t.column :medium, :string
t.column :artist_id, :string
t.column :buyer_id, :string
end
class Contact < ActiveRecord::Base
has_and_belongs_to_many :roles
def full_name_last
last_name + ", " + first_name
end
end
class Artist < Contact
has_many :items
end
class Buyer < Contact
has_many :items
end
class Role < ActiveRecord::Base
has_and_belongs_to_many :contacts
end
class Item < ActiveRecord::Base
belongs_to :artists
belongs_to :buyers
end
My main question is: should I be using polymorphic associations to link
the contacts and the items? I don’t quite understand polymorphic
associations - any suggestions on web pages with good explanations for
an object-oriented newbie?
If I don’t need to use polymorphism, then do I need to use parameters
like :foreign_key on the has_many and belongs_to due to the fact that
artist/buyer are not semantically identical to “contacts”?
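For concreteness, a sketch of the :class_name/:foreign_key variant being asked about could look like the following (the association names here are made up, and this treats artist and buyer as roles of Contact rather than subclasses):

class Item < ActiveRecord::Base
  # Two separate links from Item back to Contact, distinguished by column
  belongs_to :artist, :class_name => "Contact", :foreign_key => "artist_id"
  belongs_to :buyer,  :class_name => "Contact", :foreign_key => "buyer_id"
end

class Contact < ActiveRecord::Base
  has_and_belongs_to_many :roles
  has_many :works_created, :class_name => "Item", :foreign_key => "artist_id"
  has_many :works_bought,  :class_name => "Item", :foreign_key => "buyer_id"
end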
Thanks
David
https://www.ruby-forum.com/t/multiple-relationships-between-two-models/127704
New in Symfony 3.3: Optional class for named services
Contributed by
Martin Hasoň and Nicolas Grekas
in #21133.
Services in Symfony applications are traditionally defined in YAML, XML or PHP configuration files. A typical but simple service definition looks like this in YAML:
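A sketch of the kind of definition being described (the app.mailer id, AppBundle\Mailer class and sendmail argument are placeholder names):

services:
    app.mailer:
        class: AppBundle\Mailer
        arguments: ['sendmail']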
And like this in XML:
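A sketch of the equivalent XML form (same placeholder names):

<services>
    <service id="app.mailer" class="AppBundle\Mailer">
        <argument>sendmail</argument>
    </service>
</services>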
In Symfony 3.3, we're adding some new features to the Dependency Injection component
that allow working with services in a different manner. For that reason, in
Symfony 3.3, the
class argument of the services is now optional. When it's
undefined, Symfony considers that the
id of the service is the PHP class:
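For example, a sketch of such a class-less definition (again with a placeholder class name):

services:
    AppBundle\Mailer:
        arguments: ['sendmail']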
When using this new feature, getting services from the container requires passing the full PHP namespace:
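Roughly like this (placeholder class name again):

$mailer = $container->get(\AppBundle\Mailer::class);
// or, equivalently, using the FQCN as a plain string id
$mailer = $container->get('AppBundle\Mailer');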
The traditional service definition will keep working as always (and it's even mandatory to use it in cases like decorating services). However, this new feature together with other optional features such as autowiring and defining default service options per file, will enable RAD ("rapid application development") for those projects and developers that need it:
You put an example of the new service definition in Yaml but not in Xml, maybe you could add it to show people it works too :)
I'm pretty sure we will have fun with the additional \ to not forget when using class name as id in Service tags properties..
Can anyone please tell me what do we gain with this?
Of course, this is my opinion, but don't let Symfony fall in the easy way of doing everything RAD, and focus on DX, improve your code in order to make it accessible for more people and give the community the tools to make this important RAD part of the ecosystem, not part of the core :)
Thanks for the Response, Fabien.
http://symfony.com/blog/new-in-symfony-3-3-optional-class-for-named-services
module Lava2000.Ref
  ( Ref
  , ref
  , deref
  , memoRef

  , TableIO
  , tableIO
  , extendIO
  , findIO
  , memoRefIO

  , TableST
  , tableST
  , extendST
  , findST
  , memoRefST
  )
 where

import Lava2000.MyST
import System.IO
import System.IO.Unsafe
import Data.IORef

unsafeCoerce :: a -> b
unsafeCoerce a = unsafePerformIO $
  do writeIORef ref a
     readIORef ref
 where
  ref = unsafePerformIO $ newIORef undefined
  -- Defined here because Unsafe.Coerce doesn't exist in Hugs.

{-
  Warning! One should regard this module as a portable extension to
  the Haskell language. It is not Haskell.
-}

{-
  Here is how we implement Tables of Refs:

  A Table is nothing but a unique tag, of type TableTag. TableTag can be
  anything, as long as it is easy to create new ones, and we can compare
  them for equality. (I chose IORef ()).

  So how do we store Refs in a Table? We do not want the Tables keeping
  track of their Refs (which would be disastrous when the table becomes
  big, and we would not have any garbage collection). Instead, every Ref
  keeps track of the value it has in each table it is in.

  This has the advantage that we have a constant lookup time (if the
  number of Tables we are using is small), and we get garbage collection
  of table entries for free. The disadvantage is that, since the types of
  the Tables vary, the Ref has no idea what type of values it is supposed
  to store. So we use dynamic types.

  A Ref is implemented as follows: it has two pieces of information. The
  first one is an updatable list of entries for each table it is a member
  in. Since it is an updatable list, it is an IORef, which we also use to
  compare two Refs. The second part is just the value the Ref is pointing
  at (this can never change anyway).
-}

-----------------------------------------------------------------
-- Ref

data Ref a = Ref (IORef [(TableTag, Dyn)]) a

instance Eq (Ref a) where
  Ref r1 _ == Ref r2 _ = r1 == r2

instance Show a => Show (Ref a) where
  showsPrec _ (Ref _ a) = showChar '{' . shows a . showChar '}'

ref :: a -> Ref a
ref a = unsafePerformIO $
  do r <- newIORef []
     return (Ref r a)

deref :: Ref a -> a
deref (Ref _ a) = a

-----------------------------------------------------------------
-- Table IO

type TableTag = IORef ()

newtype TableIO a b = TableIO TableTag deriving Eq

tableIO :: IO (TableIO a b)
tableIO = TableIO `fmap` newIORef ()

findIO :: TableIO a b -> Ref a -> IO (Maybe b)
findIO (TableIO t) (Ref r _) =
  do list <- readIORef r
     return (fromDyn `fmap` lookup t list)

extendIO :: TableIO a b -> Ref a -> b -> IO ()
extendIO (TableIO t) (Ref r _) b =
  do list <- readIORef r
     writeIORef r ((t,toDyn b) : filter ((/= t) . fst) list)

-----------------------------------------------------------------
-- Table ST

newtype TableST s a b = TableST (TableIO a b) deriving Eq

tableST :: ST s (TableST s a b)
tableST = unsafeIOtoST (TableST `fmap` tableIO)

findST :: TableST s a b -> Ref a -> ST s (Maybe b)
findST (TableST tab) r = unsafeIOtoST (findIO tab r)

extendST :: TableST s a b -> Ref a -> b -> ST s ()
extendST (TableST tab) r b = unsafeIOtoST (extendIO tab r b)

-----------------------------------------------------------------
-- Memo

memoRef :: (Ref a -> b) -> (Ref a -> b)
memoRef f = unsafePerformIO . memoRefIO (return . f)

memoRefIO :: (Ref a -> IO b) -> (Ref a -> IO b)
memoRefIO f = unsafePerformIO $
  do tab <- tableIO
     let f' r = do mb <- findIO tab r
                   case mb of
                     Just b  -> do return b
                     Nothing -> fixIO $ \b ->
                       do extendIO tab r b
                          f r
     return f'

memoRefST :: (Ref a -> ST s b) -> (Ref a -> ST s b)
memoRefST f = unsafePerformST $
  do tab <- tableST
     let f' r = do mb <- findST tab r
                   case mb of
                     Just b  -> do return b
                     Nothing -> fixST $ \b ->
                       do extendST tab r b
                          f r
     return f'

-----------------------------------------------------------------
-- Dyn

data Dyn = Dyn

toDyn :: a -> Dyn
toDyn = unsafeCoerce

fromDyn :: Dyn -> a
fromDyn = unsafeCoerce

-----------------------------------------------------------------
-- the end.
http://hackage.haskell.org/package/chalmers-lava2000-1.0.2/docs/src/Lava2000-Ref.html
// wu.cpp -- Unprotect Microsoft Word/Winword Document
// Marc Thibault
//
// Word protects a document by XOR'ing a 16-byte key repetitively
// through the document file, starting at byte 40. The header (0x180 bytes)
// is filthy with zeros including what appears to be over 48 of them at
// the end of the header. This program hopes that the last 32 are zeros
// (it checks) and extracts the key from this area. Improvements can be
// made to this if it ever turns out that these bytes are used for
// something.
//
// The encryption key is derived from the user's passphrase by some means
// I have not attempted to discover. It is unnecessary, since the
// encryption key can be directly discovered and applied.
//
// Call:
//      wu infile outfile
//
// Exit Status:
//      1  too few arguments
//      2  can't open given file for input
//      3  can't open given file for output
//      4  can't find a key (last two rows of header aren't the same)
//      5  too short to be a Word file
//      6  Problem writing to output file

#include
#include

#ifdef __TURBOC__
#include
#endif

#ifdef __ZTC__
#include
#endif

#define Version "1.2"
#define VersionDate "26 January 1993"

#define keyLength 0x10
#define bufferLength 0x180
#define headerLength 0x180

int findKey(unsigned char buffer[], unsigned char key[]);
void fixHeader(unsigned char buffer[], unsigned char key[]);
void fixBuffer(unsigned char buffer[], unsigned char key[]);

#ifdef debug
void showBuffer(unsigned char buf[]);
#endif

char *copyLeft[] =
    {"\nMarc Thibault \n",
     " Oxford Mills, Ontario \n",
     " This work is released to the public domain. \n",
     " It may be copied and distributed freely \n",
     " with appropriate attribution to the author.\n"};

void main(int argc, char *argv[])
{
    unsigned char buffer[bufferLength];     // data buffer
    unsigned char key[keyLength];           // encryption key
    size_t count, check;
    int i;
    FILE *crypt, *plain;

    // ----------------------

    if( argc < 3)                           // file names must be present
    {
        cout << "\n Word Unprotect -- Version " << Version;
        cout << "\n by Marc Thibault, " << VersionDate;
        cout << "\n Syntax: wu infile outfile \n";
        exit (1);
    }

    // Open files

    if( NULL == (crypt = fopen(argv[1], "rb")))
    {
        cout << "\n wu error: can't open the input file\n";
        exit (2);
    }

    if( NULL == (plain = fopen(argv[2], "wb")))
    {
        cout << "\n wu error: can't open the output file\n";
        exit (3);
    }

    // Read header from input file

    count = fread(buffer,1,headerLength,crypt);
    if(count != bufferLength)
    {
        cout << "\n wu error: Input file too short to be a Word File\n";
        exit(5);
    }

    // Extract the encryption key

    if(findKey(buffer,key))
    {
        cout << "\n wu error: Couldn't find a key \n";
        exit(4);
    }

#ifdef debug
    cout << "\n Key in hexadecimal is";
    for (i=0; i
http://read.pudn.com/downloads2/sourcecode/hack/crack/3228/WU.CPP__.htm
missing namespaces in schema
Question
Hi there,
I use a schema which, when doing "create an instance", creates an XML that looks like this:
<?xml version="1.0"?>
<ns0:employee xmlns:
<ns0:city>City</ns0:city>
....
</ns0:employee>
When running in BizTalk, the generated XML looks like this:
<employee xmlns="/employee">
<city>City</city>
....
</employee>
What did I misconfigure? What do I need to change?
Thanks in advance, Daniel
All replies
The problem was the Sendpipeline, which was set to PassThrough.
I changed that, but now I get so-called zombies. :-/
'The instance completed without consuming all of its messages. The instance and its unconsumed messages have been suspended.'
But it's a Static Solicit-Response-Sendport. So I simply send a Request to it, and wait for a Response.
But the orchestration does not wait. So the correct Response arrives, but nobody is listening.
I have used send ports with Request/Response before, but I never had this effect.
What causes this?
When changing the Messagetype to XmlDocument, it works.
Is there a way to check the Response against the Receive, to find out what causes the Receive not to accept the Response?
Most probably the message type of the message from the response does not match the message type that your Orchestration is expecting.
Or the fact that your Orchestration is not waiting, means you've not configured something there correctly. Again, without more information, we can't really tell you what is wrong.
https://social.technet.microsoft.com/Forums/azure/en-US/841f3802-0cc9-4b51-afc1-b28401235ca3/missing-namespaces-in-schema?forum=biztalkgeneral
Graph Processing With Apache Flink
Gelly uses the Flink API to process large-scale graphs, provides a simple API to create and edit graphs, and has handy algorithms for different graph processing tasks.
Graphs are everywhere. The Internet, maps, and social networks, to name just a few, are all examples of massive graphs that contain vast amounts of useful information. Since the size of these networks is growing and processing them is becoming more and more ubiquitous, we need better tools to do the job.
In this article, I’ll describe how we can use the Flink Gelly library to process large graphs and will provide a simple example of how we can find the shortest path between two users in the Twitter graph.
Introduction to Gelly
In a nutshell, Flink Gelly is a library for graph processing implemented on top of the batch processing API in Apache Flink. It allows us to process huge graphs in a distributed fashion using the Apache Flink API.
You may wonder why we need one more graph library. Since there are other existing graph processing systems (for example, Apache Giraph) and general-purpose Big Data systems that can be used for some intermediate graph processing, it may seem that a new graph processing library is superfluous.
Gelly still has one important advantage over other systems. Since it is part of Apache Flink, it allows us to preprocess graph data, process graphs, and transform result graphs using one system and one API. This is convenient both from a development and an operational standpoint since in this case we only have one API to learn and one system to operate.
Graph Intro
As you probably know, a graph is a set vertices connected by edges. Flink represents graphs as two datasets: a dataset of vertices and a dataset of edges.
/**
 * @param <K> the key type for edge and vertex identifiers
 * @param <VV> the value type for vertices
 * @param <EV> the value type for edges
 */
public class Graph<K, VV, EV> {
    private final DataSet<Vertex<K, VV>> vertices;
    private final DataSet<Edge<K, EV>> edges;
    ...
}
The Graph class has three generic arguments that specify the type of the keys that uniquely identify vertices, the type of the values associated with vertices, and the type of the values associated with edges. So the following Graph definition:
Graph<Long, String, Integer> graph = ...
...represents a graph whose vertices have keys of type Long and values of type String, and whose edges have values of type Integer.
The Vertex in Gelly is essentially a tuple with two values: the id of a vertex and an optional value associated with it. Similarly, the Edge is a tuple with three elements.
Since the DataSet is immutable, the Graph class is immutable as well, and all operations that change a graph create a new instance. Notice that the Graph class in Gelly is always directed.
Creating Graph
We can create a graph in one of three ways:
- Generate a graph using one of the existing graph generators.
- Create a graph from one or two DataSet instances.
- Read a graph from a CSV file.
Let’s see how we can create a graph in Gelly in different ways.
Graph Generators
Flink supports a number of graph generators to generate star graphs, cycle graphs, complete graphs, and so on. Here is an example of how to generate a complete graph where every vertex is connected to all other vertices:
Graph<LongValue, NullValue, NullValue> graph =
    new CompleteGraph(env, vertexCount).generate();
Graph generators are useful for testing and allow you to quickly create a graph for your experiments.
Create a Graph From Datasets
The most common way to create a graph with Gelly is from one or several DataSet instances. They are usually read from an external system, such as a distributed filesystem or a database.
Gelly allows a lot of freedom here. The simplest way is to create a dataset of vertices and a dataset of edges and pass them to the Graph.fromDataSet method:
ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

DataSet<Vertex<Long, NullValue>> vertices = ...
DataSet<Edge<Long, NullValue>> edges = ...

Graph<Long, NullValue, NullValue> graph = Graph.fromDataSet(vertices, edges, env);
For testing purposes, we can use the fromCollection method, which allows us to create small graphs from in-memory collections:
List<Vertex<Long, NullValue>> vertices = ...
List<Edge<Long, NullValue>> edges = ...

Graph<Long, NullValue, NullValue> graph = Graph.fromCollection(vertices, edges, env);
We can also create a graph using only a dataset of edges:
DataSet<Edge<Long, String>> edges = ...

Graph<Long, NullValue, String> graph = Graph.fromDataSet(edges, env);
In this case, Gelly extracts vertex keys from the edges and assigns no values to the vertices.
Gelly provides more methods to create a graph, with variations that accept datasets of Tuples and collections. You can find them in the official documentation.
Reading Graph From CSV files
Of course, we can create a DataSet instance from a CSV file and then create a Graph instance out of it. But to simplify this common case, Gelly has special support for it: the Graph.fromCsvReader method reads vertex and edge data from two CSV files:
Graph<Long, Double, String> graph = Graph.fromCsvReader("vertices.csv", "edges.csv", env)
    .types(Long.class, Double.class, String.class);
The types method is used to specify the types of keys and values in the graph.
As before, Gelly can create a graph from edges only:
Graph<String, NullValue, NullValue> simpleGraph = Graph.fromCsvReader("edges.csv", env)
    .keyType(String.class);
Working With Graphs
Once we have a graph, it would be handy to be able to process it in some way. Let’s briefly go through what we can do with Gelly graphs:
Graph Properties
These are the most straightforward methods; they allow querying basic graph properties, such as the number of vertices, the number of edges, in-degrees, and so on:
Graph<Long, Double, String> graph = ...

// Get graph vertices
DataSet<Vertex<Long, Double>> vertices = graph.getVertices();

// Get graph edges
DataSet<Edge<Long, String>> edges = graph.getEdges();

// Get the IDs of the vertices as a DataSet
DataSet<Long> vertexIds = graph.getVertexIds();

// Get a DataSet of <vertex ID, in-degree> pairs for all vertices
DataSet<Tuple2<Long, LongValue>> inDegrees = graph.inDegrees();

// Get a DataSet of <vertex ID, out-degree> pairs for all vertices
DataSet<Tuple2<Long, LongValue>> outDegrees = graph.outDegrees();
Graph Transformations
With these methods, we can update vertices or edges. The group includes methods like:
- map: Changes the values associated with edges or vertices.
- filter: Keeps only the edges and vertices that match a predefined predicate.
- reverse: Creates a graph where the direction of edges is reversed.
- union: Merges two graphs together.
- difference: Keeps only the vertices and edges that do not exist in another graph.
Here is an example of filtering edges that keeps only edges where the source and target vertices are different:
Graph<Integer, NullValue, NullValue> graph = ...

Graph<Integer, NullValue, NullValue> filteredGraph = graph.filterOnEdges(
    new FilterFunction<Edge<Integer, NullValue>>() {
        @Override
        public boolean filter(Edge<Integer, NullValue> edge) throws Exception {
            // Keep only edges where source and target are different
            return !edge.getSource().equals(edge.getTarget());
        }
    });
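Other transformations compose in the same way. For example, reversing all edge directions and then merging the result with a second graph might look like this (otherGraph is assumed to already exist and to have the same type parameters):
Graph<Integer, NullValue, NullValue> reversed = filteredGraph.reverse();
Graph<Integer, NullValue, NullValue> merged = reversed.union(otherGraph);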
Graph Mutations
This group includes methods that add or remove edges and vertices:
Graph<Integer, Double, String> graph = ...

// Add edge to the graph
graph.addEdge(new Edge<Integer, String>(1, 2, "1-2"));

// Add vertex to the graph
graph.addVertex(new Vertex<Integer, Double>(1, 4.2));

// Remove edge from the graph
graph.removeEdge(new Edge<Integer, String>(1, 2, "1-2"));
Shortest Path in Twitter
Let's get more practical. In the following example, I'll show how to find the shortest path between two users in the Twitter social graph. To do this, I'll use the Graph methods mentioned above and the SingleSourceShortestPaths algorithm, which calculates the path length from a source vertex to all vertices in a graph.
Instead of reading data from Twitter, I use the Stanford Twitter Dataset. If you want to know more about the data format and how to read it, please refer to my previous post. All that is important to know here is that, before processing the dataset, I load the data into a dataset of TwitterFollower objects:
final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

DataSet<TwitterFollower> twitterFollowers = env.createInput(
    new StanfordTweetsDataSetInputFormat("/Users/ivanmushketyk/Flink/twitter"));
Every TwitterFollower object contains a pair of Twitter users: a follower and the one he/she follows.
To process this data, we first need to convert it into a dataset of Edges that we will use later to create a graph instance. To do this, we can use the map method:
DataSet<Edge<Integer, NullValue>> twitterEdges = twitterFollowers
    .map(new MapFunction<TwitterFollower, Edge<Integer, NullValue>>() {
        @Override
        public Edge<Integer, NullValue> map(TwitterFollower value) throws Exception {
            Edge<Integer, NullValue> edge = new Edge<>();
            edge.setSource(value.getFollower());
            edge.setTarget(value.getUser());

            return edge;
        }
    });

Graph<Integer, NullValue, NullValue> followersGraph = Graph.fromDataSet(twitterEdges, env);
When we create a graph from a dataset of edges Gelly populates a dataset of vertices using keys specified in input edges.
To calculate the shortest path, we will use the SingleSourceShortestPaths algorithm, which calculates the shortest path from a source vertex to all other vertices in the graph. The problem is that the SingleSourceShortestPaths algorithm only works on a weighted graph, meaning that every edge should have a Double value associated with it. Since our graph so far has no values associated with edges, we need to add them first.
To do this, we can use the mapEdges method, which updates all edges in the graph. In this case, we simply set 1.0 as the weight for every edge:
Graph<Integer, NullValue, Double> weightedFollowersGraph = followersGraph.mapEdges(
    new MapFunction<Edge<Integer, NullValue>, Double>() {
        @Override
        public Double map(Edge<Integer, NullValue> edge) throws Exception {
            return 1.0;
        }
    });
Now that we have a weighted graph, we can use the SingleSourceShortestPaths algorithm.
The process is pretty straightforward. First, we need to initialize the algorithm with a starting node and the maximum number of iterations the algorithm will perform before it returns a result:
// @fourzerotwo
int sourceVertex = 3359851;
int maxIterations = 10;

SingleSourceShortestPaths<Integer, NullValue> singleSourceShortestPaths =
    new SingleSourceShortestPaths<>(sourceVertex, maxIterations);

DataSet<Vertex<Integer, Double>> result = singleSourceShortestPaths.run(weightedFollowersGraph);
Once we have an instance of SingleSourceShortestPaths, we need to call the run method and pass the weighted graph to it. This method returns a dataset of vertices. The IDs of these vertices correspond to vertex IDs from the original graph, while the values represent distances from the source vertex.
The SingleSourceShortestPaths algorithm calculates the shortest path to all vertices in the network. Just out of curiosity, we can display the path length from the source vertex to one particular Twitter user:
// @soulpancake
int targetVertex = 19636959;

result.filter(vertex -> vertex.getId().equals(targetVertex))
    .print();
Here is the output that I got:
(19636959,3.0)
Conclusion
Graph processing is ubiquitous and can be applied in many domains. To tackle this, Gelly lets us use the power of the Flink API to process large-scale graphs. It provides a simple API to create and edit graphs and has a plethora of handy algorithms for different graph processing tasks.
You can find the full code of the example from this post in my Git repository with other Flink examples. Also, please check out my Understanding Apache Flink course available through Pluralsight. If you'd like a preview of what's going to be covered, take a look at this video. Thanks!
Red Hat Bugzilla – Bug 891952
Review Request: perl-ExtUtils-Typemaps - Reads, modifies, creates and writes Perl XS typemap files
Last modified: 2013-02-28 02:11:08 EST
Spec URL:
SRPM URL:
Description:
ExtUtils::Typemaps can read, modify, create and write Perl XS typemap files.
The module is not entirely round-trip safe: For example it currently simply
strips all comments. The order of entries in the maps is, however, preserved.
We check for duplicate entries in the typemap, but do not check for missing
TYPEMAP entries for INPUTMAP or OUTPUTMAP entries since these might be hidden
in a different typemap.
Fedora Account System Username: churchyard
Note: This package is meant for Fedora 17 only. Newer releases have perl-ExtUtils-ParseXS 3.x providing this.
I know this package is just needed as a stopgap measure for F17, but it generates some nasty conflicts in rawhide:
INFO: mock.py version 1.1.28 starting...
Start: init plugins
Finish: init plugins
Start: run
Mock Version: 1.1.28
INFO: Mock Version: 1.1.28
Start: lock buildroot
INFO: installing package(s): /home/fedora/patches/891952-perl-ExtUtils-Typemaps/results/perl-ExtUtils-Typemaps-3.18-3.fc19.noarch.rpm
ERROR: Command failed:
# ['/usr/bin/yum', '--installroot', '/var/lib/mock/fedora-rawhide-x86_64/root/', 'install', '/home/fedora/patches/891952-perl-ExtUtils-Typemaps/results/perl-ExtUtils-Typemaps-3.18-3.fc19.noarch.rpm']
================================================================================
Package Arch Version Repository Size
================================================================================
Installing:
perl-ExtUtils-Typemaps noarch 3.18-3.fc19 /perl-ExtUtils-Typemaps-3.18-3.fc19.noarch
101 k
Installing for dependencies:
perl-ExtUtils-Install noarch 1.58-246.fc19 fedora 67 k
perl-ExtUtils-MakeMaker noarch 6.64-1.fc19 fedora 271 k
perl-ExtUtils-Manifest noarch 1.61-240.fc19 fedora 29 k
perl-ExtUtils-ParseXS noarch 1:3.16-246.fc19 fedora 98 k
perl-Test-Harness noarch 3.23-246.fc19 fedora 283 k
perl-devel x86_64 4:5.16.2-246.fc19 fedora 479 k
python x86_64 2.7.3-14.fc19 fedora 78 k
systemtap-sdt-devel x86_64 2.1-0.198.g4c5d990.fc19 fedora 69 k
Transaction Summary
================================================================================
Install 1 Package (+8 Dependent packages)
Total size: 1.4 M
Installed size: 3.7 M
Transaction Check Error:
file /usr/share/man/man3/ExtUtils::ParseXS::Constants.3pm.gz conflicts between attempted installs of perl-ExtUtils-Typemaps-3.18-3.fc19.noarch and perl-ExtUtils-ParseXS-1:3.16-246.fc19.noarch
file /usr/share/man/man3/ExtUtils::ParseXS::Utilities::Cmd::Input::Output::Type.3pm.gz conflicts between attempted installs of perl-ExtUtils-Typemaps-3.18-3.fc19.noarch and perl-ExtUtils-ParseXS-1:3.16-246.fc19.noarch
Stuff like this can cause users upgrading from F17 a lot of pain, so we need to be careful and do this right. I think there will probably need to be some explicit Conflicts in this package and Obsoletes in perl-ExtUtils-ParseXS.
CCing my sponsor, who will almost certainly know what to do here. Michael, would you mind providing some guidance?
Exactly, perl-ExtUtils-ParseXS (from perl src package) will conflict with this, but at the same time, it provides some necessary things. Long story short, there is no way to build this in rawhide or f18.
Adding Obsoletes to perl-ExtUtils-ParseXS in f18 and rawhide would be probably necessary.
Hmm, I suggest asking the Fedora Packaging Committee for a better explanation when setting a "Conflicts:" tag is permitted and what Anaconda will do in that case.
Implicit conflicts are never acceptable.
Btw, it's not just the man pages that conflict. perl-ExtUtils-ParseXS in F18 is older than this perl-ExtUtils-Typemaps (3.16 < 3.18), and the packages install their Perl module files in competing paths (/usr/share/perl5 vs. /usr/share/perl5/vendor_perl) where the files don't conflict but override each other.
Preferably, and as you say, perl-ExtUtils-ParseXS from "perl" src.rpm in Fedora >= 18 would add a "Obsoletes" tag (possibly versioned) to replace the perl-ExtUtils-Typemaps package for a sane upgrade path.
So apparently we need FPC guidance on how to proceed.
You can contact them on their mailing list:
Or by filing a ticket on their trac instance:
BTW, have you talked to the Perl SIG about this? Perhaps they can backport or provide some other solution to this.
In my opinion the solution is straightforward:
(1) Put `Conflicts: perl-ExtUtils-ParseXS >= 3.14' into perl-ExtUtils-Typemaps.
(2) Put `Obsoletes: perl-ExtUtils-Typemaps' into perl-ExtUtils-ParseXS in F≥18.
(3) Block perl-ExtUtils-Typemaps in F≥18. Preferably before building the package.
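In spec terms, (1) and (2) would look roughly like this (version numbers as suggested above):
# perl-ExtUtils-Typemaps.spec
Conflicts:  perl-ExtUtils-ParseXS >= 3.14

# perl.spec in F18 and rawhide, ExtUtils-ParseXS subpackage
Obsoletes:  perl-ExtUtils-Typemaps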
I can do (2), but perl has now security update in testing, so I have to postpone this change until stabilizing current F18 update.
1) How does Anaconda handle such Conflicts during a distribution upgrade? And what do fedup and other tools do in such a case?
2) The way to go, IMO.
3) If you refer to koji inheritance, blocking it may become necessary indeed. And depending on how the initial src.rpm gets imported into git, marking it a dead.package may be necessary, too.
(In reply to comment #6)
> 1) How does Anaconda handle such Conflicts during a distribution upgrade?
> And what do fedup and other tools do in such a case?
>
Frankly, this is problem of anaconda and fedup. We have RPM specification which defines Conflicts: so it's clear what's intended behaviour.
Pragmatically, I use Conflicts here and there when needed, I usually test the transitions and I've never seen any problem. If there is Obsolete statement at the same time, there will be obvious solution.
> If there is Obsolete statement at the same time
Of course!
Because of that, setting the Conflicts tag is of very limited use. Only the Obsoletes statement is the important one for all sorts of dist updates/upgrades where packages are to be replaced (plus a "Provides" where appropriate). The target dist (here F17) will not include a pair of conflicting packages in its repositories.
| As a general rule, Fedora packages must NOT contain any usage of the
| Conflicts: field. [...] It confuses depsolvers and end-users for no
| good reason.
> I usually test the transitions and I've never seen any problem
Consider yourself lucky. ;) Implicit *and* explicit conflicts stop at the transaction check already, leaving the problem to the user, who must figure out how to alter the package set to not suffer from conflicts. This is extremely annoying if it's a large package set with complex inter-dependencies.
"Obsoletes" is the way to go here. I dunno whether a "perl-ExtUtils-Typemaps" package might ever want to return with changed/non-conflicting contents, so it may be okay to make the Obsoletes tag non-versioned.
Michael, you are mixing two independent things.
One is the conflict, which is there regardless of any possible replacement. You can never know the user's package set (e.g. upgrading into a stable distribution will bring perl without the Obsoletes). If the conflict is there, it should be declared. It's much friendlier to hit the problem sooner, while solving dependencies, than later at the file system level.
The other one is the replacement. In this case there is a possible replacement. But that is only a hint for the dependency solver to select the better choice.
I'm really sorry yum is so archaic that it cannot offer more solutions as other package managers can. But this is not an excuse to sweep known conflicts under the carpet. Especially in this case, where it does not make things worse, as you already pointed out.
FPC has spoken:
Please work with the Perl SIG to get the proper Obsoletes/Provides in the Perl SRPM. I'll proceed with the review shortly.
(In reply to comment #11)
> FPC has spoken:
>
>
> Please work with the Perl SIG to get the proper Obsoletes/Provides in the
> Perl SRPM. I'll proceed with the review shortly.
Petr, could you do it, or I have to ask someone else?
Package Review
==============
Key:
[x] = Pass
[!] = Fail
[-] = Not applicable
[?] = Not evaluated
[ ] = Manual review needed
Status: NEEDS WORK
==== Issues =====
[!]: Obsoletes/Provides needed in F18 so as not to break upgrade path.
[!]: This package will generate Conflicts in F18+.
Please file a ticket with release engineering after review approval to
ensure this package is blocked in F18+.
[!]: rpmlint output is not clean.
See the rpmlint section at the bottom of the review for details.
==== Things to Consider ====
[ ]: This package does not include a license file.
Consider querying.
[!]: Changelog in prescribed format.
See rpmlint warnings below.
/FedoraReview/891952-perl-ExtUtils-Typemaps/licensecheck.txt
License "same as Perl itself" == GPL+ or Artistic so OK
This package will generate Conflicts and should be blocked in F18+.
(ExtUtils-ParseXS-3.18: perl-ExtUtils-Typemaps-3.18-3.fc17.src.rpm
perl-ExtUtils-Typemaps-3.18-3.fc17.noarch.rpm
perl-ExtUtils-Typemaps.src: W: spelling-error Summary(en_US) typemap -> type map, type-map, typeface
perl-ExtUtils-Typemaps.src: W: spelling-error %description -l en_US typemap -> type map, type-map, typeface
perl-ExtUtils-Typemaps.src: E: specfile-error warning: bogus date in %changelog: Tue Sep 28 2012 Miro Hrončok <[email protected]> 3.15-10
perl-ExtUtils-Typemaps.noarch: W: spelling-error Summary(en_US) typemap -> type map, type-map, typeface
perl-ExtUtils-Typemaps.noarch: W: spelling-error %description -l en_US typemap -> type map, type-map, typeface
2 packages and 0 specfiles checked; 1 errors, 4 warnings.
Please fix the bogus date in changelog, everything else is false positive.
Rpmlint (installed packages)
----------------------------
# rpmlint perl-ExtUtils-Typemaps
perl-ExtUtils-Typemaps.noarch: W: spelling-error Summary(en_US) typemap -> type map, type-map, typeface
perl-ExtUtils-Typemaps.noarch: W: spelling-error %description -l en_US typemap -> type map, type-map, typeface
1 packages and 0 specfiles checked; 0 errors, 2 warnings.
# echo 'rpmlint-done:'
False positives.
Requires
--------
perl-ExtUtils-Typemaps-3.18-3.fc17.noarch.rpm (rpmlib, GLIBC filtered):
perl >= 0:5.006001
perl(:MODULE_COMPAT_5.14.3)
perl(Exporter)
perl(ExtUtils::ParseXS)
perl(ExtUtils::ParseXS::Constants)
perl(ExtUtils::Typemaps)
perl(ExtUtils::Typemaps::InputMap)
perl(ExtUtils::Typemaps::OutputMap)
perl(ExtUtils::Typemaps::Type)
perl(File::Spec)
perl(Symbol)
perl(lib)
perl(re)
perl(strict)
perl(warnings)
Provides
--------
perl-ExtUtils-Typemaps-3.18-3.fc17.noarch.rpm:
perl(ExtUtils::ParseXS::Constants) = 3.18
perl(ExtUtils::ParseXS::CountLines) = 3.18
perl(ExtUtils::ParseXS::Utilities) = 3.18
perl(ExtUtils::Typemaps) = 3.18
perl(ExtUtils::Typemaps::Cmd) = 3.18
perl(ExtUtils::Typemaps::InputMap) = 3.18
perl(ExtUtils::Typemaps::OutputMap) = 3.18
perl(ExtUtils::Typemaps::Type) = 3.18
perl-ExtUtils-Typemaps = 3.18-3.fc17 (f4bc12d) last change: 2012-10-16
Buildroot used: fedora-17-x86_64
Command line :./try-fedora-review -b891952 -mfedora-17-x86_64
(In reply to comment #13)
> [!]: rpmlint output is not clean.
>
> See the rpmlint section at the bottom of the review for details.
All false positives, as you also agree.
> False positives.
I meant this one:
perl-ExtUtils-Typemaps.src: E: specfile-error warning: bogus date in %changelog: Tue Sep 28 2012 Miro Hrončok <[email protected]> 3.15-10
The 28th was a Friday. ;-)
Sorry, I've missed that rpmlint output is separated to source and installed.
It's good that the FPC has answered like that.
[...]
Here's something unrelated to the reviewing guidelines. Not a blocker, just a recommendation:
> # Modifiy Makefile.PL
> sed … Makefile.PL
> sed … Makefile.PL
> sed …
> # Remove ExtUtils::ParseXS tests
> rm …
Among many (most?) programmers, comments of that sort are considered superfluous. Worthless. And creating RPM Spec files is similar to programming scripts. Obviously, "sed" commands run on a Makefile.PL "modify" the file, so the comment doesn't need to point that out. Similarly for the "rm" command that removes several files.
Much more interesting, and possibly even important, would be to explain _why_ that is being done? Why are the makefiles modified? Why is it necessary? And _what_ is the goal of those commands? Similarly for the "rm" command. _Why_ are the tests deleted? (especially if the package includes ExtUtils/ParseXS*)
As the author of the spec file, you may be intimately familiar with what it does and why it does that. Perhaps you don't need any helpful comments in the spec file yourself, perhaps you would still remember even after a year what the commands do and why they do it.
Nevertheless, I suggest replacing those comments with a more helpful rationale.
(In reply to comment #17)
> > # Remove ExtUtils::ParseXS tests
> > rm …
I got your point however, right this one seems useful: it removes ExtUtils::ParseXS tests, while ExtUtils::Typemaps test stays.
Just a little bit. Agreed. "rm -f t/00*.t t/1*.t" alone is not self-explaining and deserves a comment. The better comment would tell _why_ these tests are removed.
$
(In reply to comment #12)
> (In reply to comment #11)
> > FPC has spoken:
> >
> >
> > Please work with the Perl SIG to get the proper Obsoletes/Provides in the
> > Perl SRPM. I'll proceed with the review shortly.
>
> Petr, could you do it, or I have to ask someone else?
Implemented in perl-5.16.2-254.fc19 and perl-5.16.2-238.fc18.
perl-5.16.2-238.fc18 has been submitted as an update for Fedora 18.
Just curious here, but why not just reinstate dual-lived perl-ExtUtils-ParseXS for F-17 and update it to 3.18 rather than create perl-ExtUtils-Typemaps package?
Because there are changes in the API.
.
> This is OK.
What is okay? In case you refer to file conflicts, I do not.
I still wonder _why_ the tests are deleted? Especially since some of the tests are about modules included in the package. Several of the tests fail (why?), but many other tests pass:
t/101-standard_typemap_locations.t ....... ok
t/102-trim_whitespace.t .................. ok
t/103-tidy_type.t ........................ ok
t/104-map_type.t ......................... ok
t/105-valid_proto_string.t ............... ok
t/106-process_typemaps.t ................. ok
t/107-make_targetable.t .................. ok
t/108-map_type.t ......................... ok
t/109-standard_XS_defs.t ................. ok
t/110-assign_func_args.t ................. ok
t/111-analyze_preprocessor_statements.t .. ok
t/112-set_cond.t ......................... ok
t/113-check_cond_preproc_statements.t .... ok
t/114-blurt_death_Warn.t ................. ok
(In reply to comment #25)
> Especially since some of the
> tests are about modules included in the package.
If so, that was a mistake. Anyway, now it should be fixed.
> Several of the tests fail
> (why?)
Because they are meant for ExtUtils:ParseXS 3.18, but instead ParseXS 2.x is used from Fedora 17. I suppose.
Spec URL:
SRPM URL:
* Fri Feb 08 2013 Miro Hrončok <[email protected]> - 3.18-4
- %%{_perl} to perl
- Updated comments
- Updated bogus date in %%changelog
- %%{perl_vendorlib}/ExtUtils/ParseXS* - removed asterisk, it is 1 dir
- Remove tests in much more cooler way
(In reply to comment #24)
> .
I agree that it's a little strange that this RPM is shipping modules in a completely different namespace usually provided by a different RPM.
Are these modules even used? Will this cause problems with ExtUtils::ParseXS?
> Are these modules even used?
Yes.
> Will this cause problems with
> ExtUtils::ParseXS?
No. F17's ParseXS doesn't know they are there and doesn't include/use them.
(In reply to comment #13)
>
> [!]: This package will generate Conflicts in F18+.
>
> Please file a ticket with release engineering after review approval to
> ensure this package is blocked in F18+.
>
>
What exactly should I request? I've never done this before. Thanks
Just set the component to koji and ask them to block the package in F18 and rawhide. You'll need to do so AFTER git is done so koji knows the package exists.
BTW all issues seem to be accounted for, so this package is APPROVED.
New Package SCM Request
=======================
Package Name: perl-ExtUtils-Typemaps
Short Description: Reads, modifies, creates and writes Perl XS typemap files
Owners: churchyard
Branches: f17
InitialCC: perl-sig
Git done (by process-git-requests).
Thanks to all.
perl-5.16.2-238.fc18 has been pushed to the Fedora 18 stable repository. If problems still persist, please make note of it in this bug report.
perl-ExtUtils-Typemaps-3.18-5.fc17 has been submitted as an update for Fedora 17.
perl-ExtUtils-Typemaps-3.18-5.fc17 has been pushed to the Fedora 17 testing repository.
perl-ExtUtils-Typemaps-3.18-5.fc17 has been pushed to the Fedora 17 stable repository.
A Primer on Automating Chaos
This Gremlin Time Travel Experiment Pack shares how you can utilize the Gremlin Time Travel attack to change the clock time of cloud infrastructure instances. This attack is cloud-agnostic and will work across AWS, GCP, Azure, DigitalOcean, Linode and more. There are many reasons to regularly use the Time Travel attack. One important reason is to ensure your systems can effectively handle certificate expiration.
With Gremlin, you have the ability to time travel any instance wherever it may reside.
This pack includes 3 x 5-minute experiments:
After you have created your Gremlin account (sign up here) you will need to get your Gremlin Daemon credentials. Time Travel requires a full account, contact our team to get an upgrade: [email protected]
Login to the Gremlin App using your Company name and sign-on credentials. These details were emailed to you when you signed up to start using Gremlin.
To install the Gremlin agent and Kubernetes client, you will need your Gremlin Team ID and Secret Key. If you don’t know what those are, you can get them from the Gremlin web app.
Visit the Teams page in Gremlin, and then click on your team’s name in the list.
On the Teams screen click on Configuration.
Make a note of your Team ID.
If you don’t know your Secret Key, you will need to reset it from this screen.
Initialize Gremlin by running the following command and follow the prompts to enter your Team ID and Secret Key.
gremlin init
Now you’re ready to run attacks using Gremlin.
Using the built in Linux date tool check the current system time:
$ date
You will see a result similar to below:
Sat Mar 2 00:44:08 UTC 2019
Disable NTP on the instance:
sudo timedatectl set-ntp false
First click Create Attack.
First choose your target by selecting the host you registered with Gremlin.
Next we will use the Gremlin App to create a Time Travel Attack. Choose the State Category and Select the Time Travel Attack.
Click Unleash Gremlin and the Gremlin Time Travel Attack will time travel your host.
Using the built in Linux date tool check the adjusted system time:
$ date
In this step, you’ll install Docker.
Add Docker’s official GPG key:
curl -fsSL | sudo apt-key add -
Use the following command to set up the stable repository.
sudo add-apt-repository "deb [arch=amd64] $(lsb_release -cs) stable"
Update the apt package index:
sudo apt-get update
Make sure you are about to install from the Docker repo instead of the default Ubuntu 16.04 repo:
apt-cache policy docker-ce
Install the latest version of Docker CE:
sudo apt-get install docker-ce
Docker should now be installed, the daemon started, and the process enabled to start on boot. Check that it’s running:
sudo systemctl status docker
Make sure you are in the Docker usergroup, replace $USER with your username:
sudo usermod -aG docker $USER
Log out and back in for your permissions to take effect, or type the following:
su - ${USER}
Use the built in Linux date tool check the current system time
$ date
You will see a result similar to the following:
Sat Mar 2 00:44:08 UTC 2019
Disable NTP on the instance:
sudo timedatectl set-ntp false
Using your Gremlin login credentials (which were emailed to you when you created your account), log in to the Gremlin App. Open Settings and copy your Team ID and Secret.
Set the following export variables:
export GREMLIN_TEAM_ID=your_team_id
export GREMLIN_TEAM_SECRET=your_team_secret
Use docker run to pull the official Gremlin Docker image and run the Gremlin daemon:
$ sudo docker run -d \
    --net=host \
    --pid=host \
    --cap-add=NET_ADMIN \
    --cap-add=SYS_BOOT \
    --cap-add=SYS_TIME \
    --cap-add=KILL \
    -e GREMLIN_TEAM_ID="${GREMLIN_TEAM_ID}" \
    -e GREMLIN_TEAM_SECRET="${GREMLIN_TEAM_SECRET}" \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v /var/log/gremlin:/var/log/gremlin \
    -v /var/lib/gremlin:/var/lib/gremlin \
    gremlin/gremlin attack time_travel
Make sure to pass in the environment variables you set above. If you don't, the Gremlin daemon cannot connect to the Gremlin backend.
Use docker ps to see all running Docker containers:
$ sudo docker ps
You will see a result similar to the following:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7167cacb2536 gremlin/gremlin "/entrypoint.sh daem…" 40 seconds ago Up 39 seconds practical_benz
Using the built in Linux date tool check the adjusted system time:
$ date
Kubernetes is a container management system which is built with reliability in mind. Architecture is commonly 1 primary and 2 or more nodes which are replicated from the master. When the primary dies the nodes are ready to replace it. When one node dies another will be ready to replace it.
To create a Kubernetes cluster follow our guide on "How to Use and Install Kubernetes with Weave Net". Alternatively you can use a managed Kubernetes service such as GKE, EKS and AKS.
The simplest way to install the Gremlin agent on your Kubernetes cluster is to use Helm. If you do not already have Helm installed, go here to get started. Once Helm is installed and configured, the next steps are to add the Gremlin repo and install the client.
Add the Gremlin Helm chart:
helm repo add gremlin
Create a namespace for the Gremlin Kubernetes client:
kubectl create namespace gremlin
Next you will run the helm command to install the Gremlin client. In this command there are three placeholder variables that you will need to replace with real data. Replace $GREMLIN_TEAM_ID with your Team ID from Step 1.1, and replace $GREMLIN_TEAM_SECRET with your Secret Key from Step 1.1. Replace $GREMLIN_CLUSTER_ID with a name for the cluster.
If you are using Helm v3, run this command:
helm install gremlin gremlin/gremlin \
    --namespace gremlin \
    --set gremlin.secret.managed=true \
    --set gremlin.secret.type=secret \
    --set gremlin.secret.teamID=$GREMLIN_TEAM_ID \
    --set gremlin.secret.clusterID=$GREMLIN_CLUSTER_ID \
    --set gremlin.secret.teamSecret=$GREMLIN_TEAM_SECRET
For older versions of Helm, use the --name option:
helm install gremlin/gremlin \
    --name gremlin \
    --namespace gremlin \
    --set gremlin.secret.managed=true \
    --set gremlin.secret.type=secret \
    --set gremlin.secret.teamID=$GREMLIN_TEAM_ID \
    --set gremlin.secret.clusterID=$GREMLIN_CLUSTER_ID \
    --set gremlin.secret.teamSecret=$GREMLIN_TEAM_SECRET
If you’re not sure which version of Helm you’re using, run this command:
helm version
For more information on the Gremlin Helm chart, including more configuration options, check out the chart on Github.
Use the built in Linux date tool check the current system time
$ date
You will see a result similar to the following:
Sat Mar 2 00:44:08 UTC 2019
Disable NTP on the instance:
sudo timedatectl set-ntp false
You can use the Gremlin App or the Gremlin API to trigger Gremlin attacks. You can view the available range of Gremlin Attacks in Gremlin Help.
To create a Time Travel Attack, click Attacks in the left Navigation bar and New Attack.
Host targeting should be selected by default. Click on the Exact button to expand the list of available hosts, and select one of them. You’ll see the Blast Radius for the attack is limited to 1 host.
Click “Choose a Gremlin,” and then select State and Time Travel.
Leave the Length set to 60 seconds. Leave the radio button for NTP set to "No," as we've already disabled NTP on the host. Leave the offset set to 86400 seconds; that's the amount of clock drift that will be introduced. Then hit the green Unleash Gremlin button.
When your attack is finished it will move to Completed Attacks in the Gremlin App. To view the logs of the Attack, click on the Attack in Completed Attacks then click to the arrow to view the logs.
Using the built in Linux date tool check the adjusted system time:
$ date
Gremlin free unlocks the ability to perform Shutdown and CPU attacks. To unlock Time Travel upgrade your Gremlin account by contacting our team [email protected].
Gremlin empowers you to proactively root out failure before it causes downtime. See how you can harness chaos to build resilient systems by requesting a demo of Gremlin.Request a Demo
Users of C#, VB.NET and MC++ have a nice feature available: delegates. The C++
language does not support this construct. But fortunately, there is a way to
implement rudimentary delegates using templates and some clever tricks borrowed
from the boost library.
I assume that you have a solid C++ background. In this article I will solve the
delegate problem using member function pointers and templates. You may want to
read up on these topics before you read any further.
For those who have not yet been acquainted with .NET-languages, here's a short
explanation.
Put simply, delegates are objects which allow calling methods on objects. Big
deal? It is a big deal, since these objects masquerade as free functions with
no coupling whatsoever to a specific object. As the name implies, it delegates
method calls to a target object.
Since it is possible to take the address of a member function, and apply that
member function on any object of the class which defined the member function,
it is logical that one should be able to make a delegate construct. One way would
be to store the address of an object alongside one of its member functions.
The storage could be an object which overloads operator(). The
type signature (return type and argument types) of operator() should
match the type signature of the member function which we use for
delegation. A very non-dynamic version could be:
struct delegate {
    type* obj;                    // The object which we delegate the call to
    int (type::* method)(int);    // Method belonging to type, taking an int
                                  // and returning an int

    delegate(type* obj_, int (type::* method_)(int))
        : obj(obj_), method(method_) { }

    int operator()(int x) {       // See how this operator() matches
                                  // the method type signature
        return (obj->*method)(x); // Make the call
    }
};
The above solution is not dynamic in any way. It can only deal with objects of
type type, methods taking an int and returning an
int. This forces
us to either write a new delegate type for each object type/method combination
we wish to delegate for, or use object polymorphism where all classes derive
from type - and be satisfied with only being able to delegate
virtual methods defined in type which matches the int/int type
signature! Clearly, this is not a good solution.
Obviously we need to parameterize the object type, parameter type and return
type. The only way to do that in C++ is to use templates. A second attempt may
look like this:
template <typename Class, typename T1, typename Result>
struct delegate {
    typedef Result (Class::* MethodType)(T1);

    Class* obj;
    MethodType method;

    delegate(Class* obj_, MethodType method_) : obj(obj_), method(method_) { }

    Result operator()(T1 v1) {
        return (obj->*method)(v1);
    }
};
Much better! Now we can delegate any object and any method with one parameter
in that object. This is a clear improvement over the previous implementation.
Unfortunately, it is not possible to write the delegate so that it can handle any
number of arguments. To solve this problem for methods taking two parameters,
one has to write a new delegate which handles two parameters. To solve the
problem for methods taking three parameters, one has to write a new delegate
which handles three parameters - and so on. This is however not a big problem.
If you need to cover all your methods, you will most likely not need more than
ten such delegate templates. How many of your methods have more than ten
parameters? If they do, are you sure they should have more than ten? Also,
you'd only need to write these ten delegate once - the sweet power of
templates.
However, a small problem, besides the parameter problem, still remains. When
this delegate template is instantiated, the resulting delegate type will only
be able to handle delegations for the class you supplied as template parameter.
The delegate<A, int, int> type is different from delegate<B,
int, int>. They are similar in that they delegate method calls
taking an int and returning an int. They are
dissimilar in that they do not delegate for methods of the same class. .NET
delegates ignore this dissimilarity, and so should we!
To remove this type dissimilarity, it is obvious that we need to remove the
class type as a template parameter. This is best accomplished by using object
polymorphism and template constructors. This is a technique which I've borrowed
from the boost template library. More
specifically, I borrowed it from the implementation of the any class in
that library.
Since I'm not a native English writer, I will not attempt to describe the final
code with words. I could try but I think I'd just make it more complex than it
is. Put simply, I use polymorphism and a templated constructor to gain an extra
level of indirection so that I can "peel" away the class information from the
delegate. Here's the code:
// The polymorphic base
template <typename T1, typename Result>
struct delegate_base {               // Ref counting added 2002-09-22
    int ref_count;                   // delegate_base's are refcounted

    delegate_base() : ref_count(0) { }

    void addref() {
        ++ref_count;
    }

    void release() {
        if(--ref_count <= 0)         // delete when the last owner lets go
            delete this;
    }

    virtual ~delegate_base() { }     // Added 2002-09-22

    virtual Result operator()(T1 v1) = 0;
};

// The actual implementation of the delegate
template <typename Class, typename T1, typename Result>
struct delegate_impl : public delegate_base<T1, Result> {
    typedef Result (Class::* MethodType)(T1);

    Class* obj;
    MethodType method;

    delegate_impl(Class* obj_, MethodType method_) : obj(obj_), method(method_) { }

    Result operator()(T1 v1) {
        return (obj->*method)(v1);
    }
};

template <typename T1, typename Result>
struct delegate {
    // Notice the type: delegate_base<T1, Result> - no Class in sight!
    delegate_base<T1, Result>* pDelegateImpl;

    // The templated constructor - The presence of Class
    // does not "pollute" the class itself
    template <typename Class>
    delegate(Class* obj, Result (Class::* method)(T1))
        : pDelegateImpl(new delegate_impl<Class, T1, Result>(obj, method)) {
        pDelegateImpl->addref();     // Added 2002-09-22
    }

    // Copy constructor and assignment operator added 2002-09-27
    delegate(const delegate<T1, Result>& other) {
        pDelegateImpl = other.pDelegateImpl;
        pDelegateImpl->addref();
    }

    delegate<T1, Result>& operator=(const delegate<T1, Result>& other) {
        other.pDelegateImpl->addref();   // addref first, so self-assignment stays safe
        pDelegateImpl->release();
        pDelegateImpl = other.pDelegateImpl;
        return *this;
    }

    ~delegate() { pDelegateImpl->release(); }    // Added & modified 2002-09-22

    // Forward the delegate to the delegate implementation
    Result operator()(T1 v1) {
        return (*pDelegateImpl)(v1);
    }
};
There, .NET delegate requirements satisfied! For information on how
to actually use the delegates, see the demo source code available
for download at the top of this article.
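In case you cannot grab the demo right away, here is a rough usage sketch (the Button type and its method are invented for illustration):
struct Button {
    int clicks;
    int on_click(int amount) { clicks += amount; return clicks; }
};

int main() {
    Button b = { 0 };
    delegate<int, int> d(&b, &Button::on_click);  // bind object + member function
    int total = d(3);                             // calls b.on_click(3) through the delegate
    return 0;
}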
Because I think delegates can be quite powerful, and I for one like to have powerful tools in the toolbox. They might be useful some day.
SWAPON(2)
Section: Linux Programmer's Manual (2)
Updated: 2004-10-10
NAME
swapon, swapoff - start/stop swapping to file/device
SYNOPSIS
#include <unistd.h>
#include <asm/page.h> /* to find PAGE_SIZE */
#include <sys/swap.h>
int swapon(const char *path, int swapflags);
int swapoff(const char *path);
DESCRIPTION
swapon() sets the swap area to the file or block device specified by path. swapoff() stops swapping to the file or block device specified by path.
Priority
Each swap area has a priority, either high or low.
ERRORS
- EINVAL
- The file path exists but refers neither to a regular file nor to a block device; or, for swapon(), the indicated path does not contain a valid swap signature; or, for swapoff(), path is not currently a swap area.
- ENFILE
- The system limit on the total number of open files has been reached.
- ENOENT
- The file path does not exist.
- ENOMEM
- The system has insufficient memory to start swapping.
- EPERM
- The caller does not have the CAP_SYS_ADMIN capability, or all MAX_SWAPFILES (earlier 8; 32 since Linux 2.4.10) are in use.
CONFORMING TO
These functions are Linux-specific and should not be used in programs intended to be portable. The second swapflags argument was introduced in Linux 1.3.2.
NOTES
The partition or path must be prepared with mkswap(8).
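EXAMPLE
A minimal sketch of calling these functions (the device name is illustrative and must already have been prepared with mkswap(8)):

#include <stdio.h>
#include <sys/swap.h>

int
main(void)
{
    /* Ask for a preferred swap area with priority 10. */
    int flags = SWAP_FLAG_PREFER |
                ((10 << SWAP_FLAG_PRIO_SHIFT) & SWAP_FLAG_PRIO_MASK);

    if (swapon("/dev/sda2", flags) == -1)   /* start swapping to the device */
        perror("swapon");

    if (swapoff("/dev/sda2") == -1)         /* stop swapping to the device */
        perror("swapoff");

    return 0;
}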
SEE ALSO
mkswap(8), swapoff(8), swapon(8)
ok, yes this is a homework help question, and no i don't want just the answer. i actually really want to learn this.
Ok, so I have to use SimpleDateFormat to display today's date as "week day month day, year".
so here is what i got:
import java.text.*; //for SimpleDateFormat
import java.util.*; // for Date

public class date {

    public static void main(String[] args) {
        date today;
        today = new date();
        sdf = new SimpleDateFormat ("EEEE MMMM DD, YYYY");
        System.out.println("Today is" + sdf.format(today));
        /**
         * @param args
         */
        // TODO Auto-generated method stub
    }
}
ok, so it looks pretty good, except that stinkin java won't read the "sdf". So I don't know what I am doing wrong with it. Any help or tips would be greatly appreciated.
Remember, I am not looking just for the answer, I really do want to learn this stuff.
Edited by Dani: Formatting fixed
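For reference, the piece the compiler is complaining about is just the missing declaration for sdf, plus the fact that format() needs a java.util.Date rather than your own date class. A minimal sketch of those two lines (lowercase dd and yyyy are the usual pattern letters for day-of-month and year):
SimpleDateFormat sdf = new SimpleDateFormat("EEEE MMMM dd, yyyy");
System.out.println("Today is " + sdf.format(new Date()));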
Preprocessing
dask_ml.preprocessing contains some scikit-learn style transformers that can be used in Pipelines to perform various data transformations as part of the model fitting process. These transformers will work well on dask collections (dask.array, dask.dataframe), NumPy arrays, or pandas dataframes. They'll fit and transform in parallel.
Scikit-Learn Clones
Some of the transformers are (mostly) drop-in replacements for their scikit-learn counterparts.
These can be used just like the scikit-learn versions, except that:
- They operate on dask collections in parallel
- .transform will return a dask.array or dask.dataframe when the input is a dask collection
See sklearn.preprocessing for more information about any particular transformer. Scikit-learn does have some transforms that are alternatives to the large-memory tasks that Dask serves. These include FeatureHasher (a good alternative to DictVectorizer and CountVectorizer) and HashingVectorizer (best suited for use on text over CountVectorizer). They are not stateful, which allows easy use with Dask with map_partitions:

import dask.bag as db
from sklearn.feature_extraction import FeatureHasher

D = [{'dog': 1, 'cat': 2, 'elephant': 4}, {'dog': 2, 'run': 5}]
b = db.from_sequence(D)

h = FeatureHasher()

b.map_partitions(h.transform).compute()
Note
dask_ml.preprocessing.LabelEncoder and dask_ml.preprocessing.OneHotEncoder will use the categorical dtype information for a dask or pandas Series with a pandas.api.types.CategoricalDtype. This improves performance, but may lead to different encodings depending on the categories. See the class docstrings for more.
Encoding Categorical Features
dask_ml.preprocessing.OneHotEncoder can be useful for “one-hot” (or
“dummy”) encoding features.
See the scikit-learn documentation for a full discussion. This section focuses only on the differences from scikit-learn.
Dask-ML Supports pandas' Categorical dtype
Dask-ML supports and uses the type information from the pandas Categorical dtype. See the pandas documentation on categorical data for an introduction. For large datasets, using categorical dtypes is crucial for achieving performance.
This will have a couple effects on the learned attributes and transformed values.
- The learned categories_ may differ. Scikit-Learn requires the categories to be sorted. With a CategoricalDtype the categories do not need to be sorted.
- The output of OneHotEncoder.transform() will be the same type as the input. Passing a pandas DataFrame returns a pandas DataFrame, instead of a NumPy array. Likewise, a Dask DataFrame returns a Dask DataFrame; a short sketch of both effects follows.
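A small sketch (the column and category names are made up for illustration):

import pandas as pd
from dask_ml.preprocessing import OneHotEncoder

df = pd.DataFrame({"A": pd.Categorical(["b", "a", "b"], categories=["b", "a"])})

enc = OneHotEncoder(sparse=False)
transformed = enc.fit_transform(df)   # a pandas DataFrame comes back, not an ndarray
enc.categories_                       # categories keep the dtype's order: ['b', 'a']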
Dask-ML's Sparse Support
The default behavior of OneHotEncoder is to return a sparse array. Scikit-Learn returns a SciPy sparse matrix for ndarrays passed to transform.
When passed a Dask Array, OneHotEncoder.transform() returns a Dask Array where each block is a scipy sparse matrix. SciPy sparse matrices don't support the same API as the NumPy ndarray, so most methods won't work on the result. Even basic things like compute will fail. To work around this, we currently recommend converting the sparse matrices to another format.

from dask_ml.preprocessing import OneHotEncoder
import dask.array as da
import numpy as np

enc = OneHotEncoder(sparse=True)
X = da.from_array(np.array([['A'], ['B'], ['A'], ['C']]), chunks=2)
enc = enc.fit(X)
result = enc.transform(X)
result
Each block of result is a scipy sparse matrix:

result.blocks[0].compute()

# This would fail!
# result.compute()

# Convert to, say, pydata/sparse COO matrices instead
from sparse import COO

result.map_blocks(COO.from_scipy_sparse, dtype=result.dtype).compute()
Dask-ML’s sparse support for sparse data is currently in flux. Reach out if you have any issues.
Additional Transformers
Other transformers are specific to dask-ml.
Both dask_ml.preprocessing.Categorizer and dask_ml.preprocessing.DummyEncoder deal with converting non-numeric data to numeric data. They are useful as a preprocessing step in a pipeline where you start with heterogeneous data (a mix of numeric and non-numeric), but the estimator requires all numeric data.
In this toy example, we use a dataset with two columns. 'A' is numeric and 'B' contains text data. We make a small pipeline to
- Categorize the text data
- Dummy encode the categorical data
- Fit a logistic regression

from dask_ml.preprocessing import Categorizer, DummyEncoder
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
import pandas as pd
import dask.dataframe as dd

df = pd.DataFrame({"A": [1, 2, 1, 2], "B": ["a", "b", "c", "c"]})
X = dd.from_pandas(df, npartitions=2)
y = dd.from_pandas(pd.Series([0, 1, 1, 0]), npartitions=2)

pipe = make_pipeline(
    Categorizer(),
    DummyEncoder(),
    LogisticRegression()
)

pipe.fit(X, y)
Categorizer will convert a subset of the columns in X to a categorical dtype (see here for more about how pandas handles categorical data). By default, it converts all the object dtype columns.
DummyEncoder will dummy (or one-hot) encode the dataset. This replaces a categorical column with multiple columns, where the values are either 0 or 1, depending on whether the value in the original column matches that category.

df['B']
pd.get_dummies(df['B'])
Wherever the original was 'a', the transformed now has a 1 in the a column and a 0 everywhere else.
Why was the Categorizer step necessary? Why couldn't we operate directly on the object (string) dtype column? Doing this would be fragile, especially when using dask.dataframe, since the shape of the output would depend on the values present. For example, suppose that we just saw the first two rows in the training dataset, and the last two rows in the test dataset. Then, when training, our transformed columns would be:

pd.get_dummies(df.loc[[0, 1], 'B'])
while on the test dataset, they would be:
pd.get_dummies(df.loc[[2, 3], 'B'])
Which is incorrect! The columns don’t match.
When we categorize the data, we can be confident that all the possible values have been specified, so the output shape no longer depends on the values in whatever subset of the data we currently see. Instead, it depends on the categories, which are identical in all the subsets.
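One way to make that concrete is to pin the categories up front, so that every subset is converted against the same dtype. A sketch (the column name and category values are illustrative):

import pandas as pd
from dask_ml.preprocessing import Categorizer, DummyEncoder

b_dtype = pd.CategoricalDtype(categories=["a", "b", "c"])
cat = Categorizer(categories={"B": b_dtype})

train = cat.fit_transform(pd.DataFrame({"B": ["a", "b"]}))
test = cat.transform(pd.DataFrame({"B": ["c", "c"]}))

# Both splits now dummy-encode to the same three columns
enc = DummyEncoder().fit(train)
enc.transform(test)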
I'm working on a UI for an app, and I'm attempting to use grayscale icons, and allow the user to change the theme to a color of their choosing. To do this, I'm trying to just apply a ColorFilter of some sort to overlay a color on top of the drawable. I've tried using PorterDuff.Mode.MULTIPLY, and it works almost exactly as I need, except that whites get overlayed with the color as well. What I'm ideally looking for is something like the "Color" blending mode in Photoshop, where the graphic retains its transparency and luminosity, and only modifies the color of the image. For example:
[grayscale icon] becomes [color-tinted icon]
After doing some research, it appears that the ColorMatrixColorFilter class may do what I need, but I can't seem to find any resources pointing to how the matrix is used. It's a 4x5 matrix, but what I need to know is how I go about designing the matrix. Any ideas?
EDIT: So okay, what I've found so far on this is as follows:
The identity matrix leaves a color unchanged; applying
1 0 0 0 0 //red
0 1 0 0 0 //green
0 0 1 0 0 //blue
0 0 0 1 0 //alpha
to the color [0.2, 0.5, 0.8, 1] gives back [0.2, 0.5, 0.8, 1]. Doubling the red coefficient in the first row,
2 0 0 0 0
0 1 0 0 0
0 0 1 0 0
0 0 0 1 0
turns the same color into [0.4, 0.5, 0.8, 1], since each row computes one output channel from the input channels plus a constant offset (the fifth column).
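To feed a hand-built matrix like the ones above into a filter, the four rows are flattened into a single 20-element array. A sketch of applying the identity matrix (imageView stands in for whatever view holds the drawable):
float[] identity = {
    1, 0, 0, 0, 0,   // red
    0, 1, 0, 0, 0,   // green
    0, 0, 1, 0, 0,   // blue
    0, 0, 0, 1, 0    // alpha
};
imageView.setColorFilter(new ColorMatrixColorFilter(new ColorMatrix(identity)));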
This is what I use for my game. This is the compilation of various part found on various articles on websites. Credits goes to the original author from the @see links. Note that a lot more can be done with color matrices. Including inverting, etc...
public class ColorFilterGenerator {
    /**
     * Creates a HUE adjustment ColorFilter
     * @see
     * @see
     * @param value degrees to shift the hue.
     * @return
     */
    public static ColorFilter adjustHue( float value ) {
        ColorMatrix cm = new ColorMatrix();
        adjustHue(cm, value);
        return new ColorMatrixColorFilter(cm);
    }

    /**
     * @see
     * @see
     * @param cm
     * @param value
     */
    public static void adjustHue(ColorMatrix cm, float value) {
        value = cleanValue(value, 180f) / 180f * (float) Math.PI;
        if (value == 0) {
            return;
        }
        float cosVal = (float) Math.cos(value);
        float sinVal = (float) Math.sin(value);
        float lumR = 0.213f;
        float lumG = 0.715f;
        float lumB = 0.072f;
        float[] mat = new float[] {
            lumR + cosVal * (1 - lumR) + sinVal * (-lumR),
            lumG + cosVal * (-lumG) + sinVal * (-lumG),
            lumB + cosVal * (-lumB) + sinVal * (1 - lumB), 0, 0,
            lumR + cosVal * (-lumR) + sinVal * (0.143f),
            lumG + cosVal * (1 - lumG) + sinVal * (0.140f),
            lumB + cosVal * (-lumB) + sinVal * (-0.283f), 0, 0,
            lumR + cosVal * (-lumR) + sinVal * (-(1 - lumR)),
            lumG + cosVal * (-lumG) + sinVal * (lumG),
            lumB + cosVal * (1 - lumB) + sinVal * (lumB), 0, 0,
            0f, 0f, 0f, 1f, 0f,
            0f, 0f, 0f, 0f, 1f };
        cm.postConcat(new ColorMatrix(mat));
    }

    protected static float cleanValue(float p_val, float p_limit) {
        return Math.min(p_limit, Math.max(-p_limit, p_val));
    }
}
To complete this I should add an example:
ImageView Sun = (ImageView)findViewById(R.id.sun);
Sun.setColorFilter(ColorFilterGenerator.adjustHue(162)); // 162 degree rotation
What it is :
A package of dynamic scene instant update (so you don’t have to shutdown Panda and reload it again and again everytime you make minor/major changes) and a little attempt to minimize Panda3D - IDE windows switch, if you have 1 small monitor like mine.
It’s a 1 application, 1 window solution.
Full reason why :
What’s missing :
[-] line wrap
[-] unicode support
[-] code folding
[-] code browser
[-] and anything not mentioned down below…
FEATURES :
[+] create new file, open, save, and save duplicate
[+] error pinpoint
[+] output capture
[+] line:column bookmarks
[+] auto & smart indent
[+] repetitive characters insertion
[+] Python syntax highlight
[+] matching brackets highlight & selection
[+] 2-ways grow & shrink selection
[+] edit history
[+] color chooser
[+] macro recording, replaying, and editing (visually, so you don’t need macro editing manual) [1] [2]
Works with undo/redo during recording, the recorded actions are popped out/in upon undo/redo. The last recorded action type is displayed at status bar.
[+] find & replace, with Regular Expressions
Able to replace in all opened files, and selectable files under a directory.
[+] 3 mouse selection modes (character, word, line), switched by another mouse click
[+] easy launch of PStats server (on local machine) and auto-establish connection
[+] code completion
[+] save to snippets and its completion [1] [2] [3]
[+] import completion [Python] [Panda3D] [new Panda3D]
[+] call tips [Python functions] [Panda3D functions] [individual argument insertion]
[+] Preferences GUI [1] [2] [3] [4] [5] [6] [7]
[+] portable app (can live in removable media)
[+] single instance app, all files passed to IDE_STARTER or using the Welcome Screen will be opened by the running instance of IDE
[+] per-file CWD and arguments
[+] SceneGraph browser and Node Properties panel [1] [2]
[+] software auto-upgrade [1] [2] [3] [4]
Dependency : wxPython
Download:
Since v0.1, there is C++ extension to speed up text generation.
Source code : Text Drawer extension
Windows binaries : for Panda3D 1.5.3, 1.6.0, 1.6.2, 1.7.0, 1.7.2, 1.8.0.
NOTE : if you up/downgrade your Panda3D version, you should rebuild the extension, or it would be done in python, which is approx. 4x slower.
Read /TD/HOW_TO_USE.txt for further info.
IDE codes : OnscreenIDEdynamic.zip [UPDATED Jun/26/2012]
images : IDEimages.zip [UPDATED Aug/31/2011]
models : IDEmodels.zip [UPDATED 3/2/2010]
slider skins : IDEsliderSkins.zip [NEW 6/3/09]
test scene : testDynScene.zip [UPDATED 5/18/09]
fonts : IDEfonts.zip [UPDATED 12/09/08]
tab skins : IDEtabSkins.zip [NEW 07/27/08]
sounds : IDEsounds.zip [NEW 03/31/08]
Put all the .zip files in the directory of your choice and extract them, so that the fonts, models, sounds, images, and skins directories are at the same level with the IDE codes.
You can put the test scene somewhere else.
Screenshots :
(1st week)
(Oct 2008)
(June 2009)
See my later posts for more shots.
KNOWN RESTRICTIONS & LIMITATIONS :
RESTRICTION <1> :
In your main script, you have to protect your World instantiation, because it’s done by the IDE.
This is a sample of it :
if __name__=='__main__':
    print '\n', '@'*10, "\n I'M PROCESSED !!!\n", '@'*10
    if not hasattr(help,'IDE'):
        World()
    run()
So you must isolate World instantiation so it won’t be done twice each time you update the scene. The rest of your code will be executed normally.
World instantiation is done by the IDE, to isolate the instance, to ease the pain when searching for the must-be-destroyed class instances.
The run() call is safe and meaningless when you use the IDE, since it’s redirected to a dummy run() function.
The test scene is not already use this blocker, so add it yourself.
But, in case you need to instantiate it yourself, save the instance in global namespace as “winst”, like this :
if __name__=='__main__':
    winst = World( arg1, arg2, etc. )
    run()
RESTRICTION <2> :
If you leave World instantiation to the IDE, you should name the class “World”.
Note that you don’t need to have a World class to instantiate. If you’re trying some very simple modules, you can just do everything in module global namespace.
LIMITATION <1> :
The destructors are still very limited to the commonly used ones, so if you think some destructors need to be added (in myFinder.py), let me know.
IDE START-UP INSTRUCTIONS :
- run IDE_STARTER.pyw
- for testing only :
2a. select dyn1.py and (optional) all other test scene files to be edited
2b. select the main script file : dyn1.py
And the IDE will be opened in a new fresh Python session.
NOTE : the default key to open Preferences window is Shift-Ctrl-P
CRITICAL SITUATION (when it stops responding to any key) :
Alt-M : print current mode
Shift-Alt-R : restore to correct mode
Once you’re ready to update your scene, press your beloved F9. All edited documents will be saved, and all changes will be instantly updated.
PS. : code donation is always welcome
PPS. : You don’t have to read my rant below this line.
The reason why :
this IDE is generally a pain killer for me, on both Panda and editing sides.
PANDA3D side :
If I use large models or textures, they're loaded from disk only once at first, and the next scene updates will load them from the pools. So, I don't have to waste my time waiting for my scene to be loaded from disk each time I want to see some changes. This saves lots of time when debugging or developing shaders.
EDITING side :
Every IDE I've tried has some unbelievably unique ways of hurting me. I can't do this or that, or have to do it in a weird way. At the end I can only sighhhhhhhhh.
To mention some (in contrast with mine) :
[1] Very small range of recent files, and there is no way to adjust it.
[2] Some IDE’s don’t save bookmarks, and some forcefully save bookmarks even if I don’t want to save the file.
[3] I can’t easily stop at the edge when cycling over bookmarks, and some IDE’s simply forbid me to wrap to the other edge.
[4] So far, no IDE I’ve tried offers me easy macro editing. All of them force me to read the commands reference, since there is no completion for IDE commands itself. It sounds like slouuuuuu hell if I only need to fix my mistakes when recording it, since I’m very aware that I’m just a mistakes factory.
[5] So far, no IDE I’ve tried adjusts the recorded macro commands upon undo/redo, it just runs 1 way straight forward, blindfold without looking back, so I have to go through point (4) if I made mistakes.
[6] I don’t know if there is IDE which displays indentation helper line only when needed, i.e. when the alignment notch is off the screen. The worst thing is if it’s offscreen, I can’t even see what it is. So, why don’t I just close my eyes instead ? Opening my eyes wouldn’t change anything.
[7] I don’t know if there is IDE which shows me other than start-matched completion.
[8] Any other IDE’s do multiple lines operation only on lines which have at least 1 selected character in it.
Imagine this, I put the cursor at the 1st column, and then select 2 lines downward, the line on which the cursor is now, won’t be included in the process. Now imagine if I do that in a macro, and I don’t put the cursor first at the first column, the result will be different at macro replay time.
Case 1: cursor is at column #1, there will be 2 processed lines
Case 2: cursor is at column #2, there will be 3 processed lines !!!
[9] I don’t know any IDE which is able to insert the call arguments into the script easily. The worst thing is it’s mostly not even wrapped.
https://discourse.panda3d.org/t/onscreen-ide-dynamic-instant-update-v0-5-4/3534/1
Opened 9 years ago
Closed 9 years ago
#3504 closed (fixed)
Missing self in authentication doc
Description
There is a typo in the authentication doc, in the part on writing an authentication backend:
class MyBackend:
    def authenticate(username=None, password=None):
        # Check the username/password and return a User.
class MyBackend:
    def authenticate(token=None):
        # Check the token and return a User.
There should be a self in the argument list, as these are instance methods.
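For reference, this is roughly what the corrected snippets would look like with self added (the second class is renamed here only so the two sketches can coexist; a real backend would typically return None when the credentials are invalid):

class MyBackend:
    def authenticate(self, username=None, password=None):
        # Check the username/password and return a User, or None if invalid.
        return None

class MyTokenBackend:
    def authenticate(self, token=None):
        # Check the token and return a User, or None if invalid.
        return None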
Attachments (1)
Change History (5)
comment:1 Changed 9 years ago by Michael Radziej <mir@…>
- Needs documentation unset
- Needs tests unset
- Patch needs improvement unset
- Triage Stage changed from Unreviewed to Accepted
Changed 9 years ago by Robert Myers <myer0052@…>
comment:2 Changed 9 years ago by Robert Myers <myer0052@…>
- Has patch set
comment:3 Changed 9 years ago by Per Jonsson <poj@…>
- Triage Stage changed from Accepted to Ready for checkin
Looks good, patch is clean.
comment:4 Changed 9 years ago by jacob
- Resolution set to fixed
- Status changed from new to closed
Added a patch to fix the documentation errors.
https://code.djangoproject.com/ticket/3504
GDB (GNU Project debugger) is a command line base debugger that is good at analyzing running and cored programs. According to the user manual GDB supports C, C++, D, Go, Objective-C, Fortran, Java, OpenCL C, Pascal, Rust, assembly, Modula-2, and Ada.
GDB has the same feature set as most debuggers but is different from most that I have used in that is all based on typing commands instead of clicking on GUI elements. Some of these features include:
To start GDB, in the terminal,
gdb <executable name>
For the above example with a program named main, the command becomes
gdb main
Setting Breakpoints
You'll probably want your program to stop at some point so that you can review the condition of your program. The line at which you want the program to temporarily stop is called the breakpoint.
break <source code line number>
Running your program
To run your program, the command is, as you guessed,
run
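For example, a minimal session (the line number and variable name below are only illustrative) sets a breakpoint, starts the program, inspects state once execution stops, then resumes:

(gdb) break 12
(gdb) run
(gdb) print some_variable
(gdb) continue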
Opening a core
gdb -c coreFile pathToExecutable
GDB, short for GNU Debugger, is the most popular debugger for UNIX systems for debugging C and C++ programs.
Created this really bad program
#include <stdio.h>
#include <ctype.h>

// forward declarations

void bad_function()
{
    int *test = 5;
    free(test);
}

int main(int argc, char *argv[])
{
    bad_function();
    return 0;
}

gcc -g ex1.c
./a.out    //or whatever gcc creates
Segmentation fault (core dumped)
gdb -c core a.out
Core was generated by `./a.out'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0 __GI___libc_free (mem=0x5) at malloc.c:2929
2929 malloc.c: No such file or directory.
(gdb) where
#0 __GI___libc_free (mem=0x5) at malloc.c:2929
#1 0x0000000000400549 in bad_function () at ex1.c:12
#2 0x0000000000400564 in main (argc=1, argv=0x7fffb825bd68) at ex1.c:19
Since I compiled with -g, you can see that calling where tells me that it didn't like the code on line 12, inside bad_function().
Then I can examine the test variable that I tried to free
(gdb) up
#1 0x0000000000400549 in bad_function () at ex1.c:12
12          free(test);
(gdb) print test
$1 = (int *) 0x5
(gdb) print *test
Cannot access memory at address 0x5
In this case the bug is pretty obvious: I tried to free a pointer that was just assigned the address 5, which wasn't created by malloc, so free has no idea what to do with it.
https://riptutorial.com/gdb
Difference between revisions of "Middle School Computing with Robots: 2010"
Latest revision as of 13:05, 29 April 2010
Section Two
Our last formal class. Next week is the field trip to the BMC labs!
Strong Areas
They liked the lesson plan (I was worried the boys might think it wasn't "cool" enough for them) and most took to the project with relative ease.
Weak Areas
There were a lot of distractions in the room that the school provided for me. It was a high-traffic area and some students from outside the class were there, which caused some issues. Many of the students got sidetracked talking about other technology or even looking it up online. Although this was outside of the curriculum, it did cause some interesting discussion.
Class Six
I used Ashley's six lesson plan with no changes. I also used the attached worksheet about Judith Bishop and took the students into the lab to see and experience the Microsoft Surface.
Strong Areas
Even though we had not had class for two weeks, they still retained most of the syntax knowledge. They also readily understood the simple algorithms behind the assignment and used pen and paper to explore different ways of writing the code. They were really excited about the Microsoft Surface and asked lots of questions about the class I am taking that is using it.
seventh lesson plan but instead of doing the kinesthetic activity, I had them actually.
Class Eight
I used the 8th lesson plan (which is mostly a make-your-own program day with some suggestions) and I remembered to give out the woman of the week handout for week 7.
Strong Areas
They were really excited to start messing around with the Obama programs we wrote last class. Most of the students chose to work with robot vision for the whole class period and had a really great time. One other student wrote a robot play based on one of her class assignments. At the end of class we discussed how this course in computer science has changed or not changed their perceptions of this field of study. All of the students agreed that it really, really changed their minds about a lot of the stereotypes and content of computer science. They said they would love to take more computer science courses in the future.
Weak Areas
There was some procrastinating. Some students didn't know what they wanted to work on, but overall it went well.
Sample Student Code
Robot Dances
def Sonnywithachance():
wait(3) for i in range(1): turnRight(2,4) forward(2,4) for i in range(1): turnLeft(2,4) forward(2,4) for i in range(3): forward(2,.5) backward(2,.5) wait(.5) forward(2,.5) backward(2,.5) wait(1) for i in range(2): forward(2,1) turnRight(2,1) turnLeft(2,1) forward(2,1) backward(2,1) forward(2,1) backward(2,1) turnRight(1,.2) forward(1,3) turnLeft(1,.2) forward(1,3)
def Duck():
forward(1,3) turnLeft(5,3) for i in range(3): turnRight(7,2) backward(5,4) wait(5) forward(7,4) turnRight(6,3) for i in range(4): turnLeft(10,5) wait(4) backward(2,5) turnRight(8,5) wait(3) forward(4,7) turnLeft(5,6) wait(10) forward(5,4) backward(1,8) turnLeft(4,2) wait(7) turnRight(5,7) for i in range(5): wait(2) backward(1,4) forward(10,5) wait(5) forward(8,2) backward(2,3) turnRight(2,5) turnLeft(8,4) backward(1,5)
Maze Code
def aMAZEing():
    while timeRemaining(240):
        if getObstacle("center") > 555:
            turnLeft(1,1)
        else:
            forward(2,1)
def Maze():
    while timeRemaining(300):
        if getIR("left")==0 and getIR("right")==0:
            turnRight(0.8,0.3)
            backward(0.2,0.4)
        elif getIR("left")==0:
            turnRight(0.7,0.4)
        elif getIR("right")==0:
            turnLeft(0.5,0.7)
        else:
            forward(0.3,0.4)
        if getLight("center")<300:
            speak("Ha! I beat you, sucker!")
            stop()
def Mazealoo():
    while timeRemaining(120):
        if getIR("left")==0:
            turnRight(.5,1)
        if getIR("right")==0:
            turnLeft(.5,1)
        if getIR("left") and getIR("right")==0:
            backward(1,1)
            turnLeft
        else:
            forward(1,1)
        if getLight("center")<300:
            speak("Benvolio, Romio I have found Juliet! She is here with our enemy, her kinsman, Tiibalt! HOW NOW BROWN COW named Tiibalt!")
Pixel Code
myPic = makePicture(pickAFile()) def Obama(myPic):
black = [0, 51, 76] red = [217, 26, 33] blue = [112, 150, 158] yellow = [252, 227, 116],"Obama Me.jpg")
myPic = makePicture(pickAFile()) def obamify(myPic):
blue = (112, 150, 158) black = (0, 51, 76) red = (217, 26, 33) yellow = (252, 227, 166),"jack donaghy.jpg")
Academic Work
The following link will take you to a short paper I wrote comparing my experiences as a Teaching Assistant at Bryn Mawr College to my experiences teaching two middle school classes.
[Observations in Computing Education]
Source Materials and Useful Links
[1] Blank, D., and Kumar, D. (2010) Assessing the Impact of using Robots in Education, or: How We Learned to Stop Worrying and Love the Chaos. To appear in American Association of Artificial Intelligence, 2010 Spring symposium series: Educational Robotics and Beyond: Design and Evaluation (SS03). AAAI Press.
[2] Kinesthetic Learning Activities. [[1]]
[3] Guzdial, Mark. Computing Education Blog. [[2]]
[4] Davis, B., Sumara, D., & Luce-Kapler, R. "Learning Theories" from "Engaging Minds: Learning and Teaching in a Complex World"
[5] Elby, Andrew. Another Reason that Physics Students Learn By Rote. 14 May 1999
[6] Kendall, Lori. Geeks May Be Chic, But Negative Nerd Stereotype Still Exists. [[3]]
[7] Konidari, Eleni and Louridas, Panos. "When Students Are Not Programmers" ACM Inroads Vol.1 No.1 2010 March: 55-60
[8] Cooper, Steve and Cunningham, Steve. "Teaching Computer Science in Context" ACM Inroads Vol.1 No.1 2010 March: 5-8
http://wiki.roboteducation.org/index.php?title=Middle_School_Computing_with_Robots:_2010&diff=9588&oldid=8001
C++11 Hash Containers and Debug Mode
Microsoft has never been a slacker in the C++ department — it has always worked hard to provide a top-notch, compliant product. Visual Studio 10 supports its current incarnation, and for the most part it is up to Microsoft's usual standards. It's a great development environment, and I am a dedicated user, but I have to give Microsoft a demerit in one area: Its C++11 hash containers have some serious performance problems — so much that the debug versions of the containers may well be unusable in your application.
Background
I first noticed the problem with unordered_map when I was working on the code for my updated LZW article. I found that when running in the debugger, my program would hang after exiting the compression routine. A little debugging showed that the destructor for my hash table was taking a long time to run. And by a long time, I mean it was approaching an hour.
Destroying a hash table didn't seem to be a complicated task, so I decided to see if I could come up with a reasonable benchmark. I wrote a test program that does a simple word frequency count. As a starter data set, I used the first one million whitespace-delimited words in the 2010 CIA factbook, as published by Project Gutenberg. This data set yields 74,208 unique tokens.
I wrote a simple test rig that I used to test the word count program using four different containers:
- unordered_map indexed by std::string
- unordered_map indexed by std::string *
- map indexed by std::string
- map indexed by std::string *
Testing with std::string * reduces the cost of copying strings into the hash table as it was filled, and then reduces the cost of destroying those strings when the table was destroyed.
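The article doesn't show the exact instantiations, but the four containers under test presumably looked something like the following (the mapped type and the typedef names are my assumptions):

#include <map>
#include <string>
#include <unordered_map>

// Word-frequency counters keyed four different ways (illustrative):
typedef std::unordered_map<std::string, int> hash_by_value;     // hashes and stores a copy of each word
typedef std::unordered_map<std::string *, int> hash_by_pointer; // hashes the pointer, no string copies
typedef std::map<std::string, int> tree_by_value;               // ordered tree, O(log N) insertion
typedef std::map<std::string *, int> tree_by_pointer;           // ordered by pointer value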
I ran tests against map, expecting to see a pretty big difference in performance. Because map is normally implemented using a balanced binary tree structure, it has O(log(N)) performance on insertions. A sparsely populated hash table can have O(1) performance. By using fairly large data sets, I expected to see a big difference between the two.
I tried to eliminate a few obvious sources for error in my test function — and I used a template function so that I could use the same code on all the different container types:
template<class CONTAINER, class DATA>
void test( const DATA &data, const char *test_name )
{
    std::cout << "Testing container: " << test_name << std::endl;
#ifdef _DEBUG
    const int passes = 2;
#else
    const int passes = 10;
#endif
    double fill_times = 0;
    double delete_times = 0;
    size_t entries;
    for ( int i = 0 ; i < passes ; i++ ) {
        CONTAINER *container = new CONTAINER();
        std::cout << "Filling... " << std::flush;
        clock_t t0 = clock();
        for ( auto ii = data.begin() ; ii != data.end() ; ii++ )
            (*container)[*ii]++;
        double span = double(clock() - t0)/CLOCKS_PER_SEC;
        fill_times += span;
        entries = container->size();
        std::cout << " " << span << " Deleting... " << std::flush;
        t0 = clock();
        delete container;
        span = double(clock() - t0)/CLOCKS_PER_SEC;
        delete_times += span;
        std::cout << span << " " << std::endl;
    }
    std::cout << "Entries: " << entries
              << ", Fill time: " << (fill_times/passes)
              << ", Delete time: " << (delete_times/passes) << std::endl;
}
I didn't go overboard when it came to instrumenting this problem, I just used the timing functions built into the C++ library. On my Windows and Linux test systems, the values of CLOCKS_PER_SEC are both high enough that I'm not worried about granularity issues.
http://www.drdobbs.com/cpp/c11-hash-containers-and-debug-mode/232200410?pgno=1
Error building empty project
Hi,
In order to solve another issue, I tried to build an empty Qt console project (not even a single line added, regular .pro file).
I received the following error:
13:25:57: Running steps for project test1Hello...
13:25:57: Configuration unchanged, skipping qmake step.
13:25:57: Starting: "C:\Qt\Qt5.7.0\Tools\QtCreator\bin\jom.exe"
C:\Qt\Qt5.7.0\Tools\QtCreator\bin\jom.exe -f Makefile.Debug
link /NOLOGO /DYNAMICBASE /NXCOMPAT /MACHINE:X64 /DEBUG /SUBSYSTEM:CONSOLE "/MANIFESTDEPENDENCY:type='win32' name='Microsoft.Windows.Common-Controls' version='6.0.0.0' publicKeyToken='6595b64144ccf1df' language='' processorArchitecture=''" /MANIFEST:embed /OUT:debug\test1Hello.exe @C:\Users\nir\AppData\Local\Temp\test1Hello.exe.5048.0.jom
LINK : fatal error LNK1104: cannot open file 'kernel32.lib'
jom: C:\Temp\build-test1Hello-Desktop_Qt_5_7_0_MSVC2013_64bit-Debug\Makefile.Debug [debug\test1Hello.exe] Error 1104
jom: C:\Temp\build-test1Hello-Desktop_Qt_5_7_0_MSVC2013_64bit-Debug\Makefile [debug] Error 2
13:25:58: The process "C:\Qt\Qt5.7.0\Tools\QtCreator\bin\jom.exe" exited with code 2.
Error while building/deploying project test1Hello (kit: Desktop Qt 5.7.0 MSVC2013 64bit)
When executing step "Make"
13:25:58: Elapsed time: 00:00.
This is my .pro file:
QT += core
QT -= gui
CONFIG += c++11
TARGET = test1Hello
CONFIG += console
CONFIG -= app_bundle
TEMPLATE = app
SOURCES += main.cpp
And this is the main.cpp file:
#include <QCoreApplication>
int main(int argc, char *argv[])
{
QCoreApplication a(argc, argv);
return a.exec();
}
Probably this is something wrong with the environment?!
What am I missing here?
Thanks,
Nirh
I receive the same error when trying to build Qt example code on Windows.
I am probably missing something here!
Help will be most appreciated!
Thanks,
nirh
Hi, if you open Visual Studio 2013 and try to build an empty C++ project, does it fail the same way?
Also you can try to run Maintenance Tool and install the 32-bit MSVC2013 package, and build an empty project with that and see if it fails the same way.
https://forum.qt.io/topic/72932/error-building-empty-project
iObjectWatcher Struct Reference
This is a generic object watcher.
[Crystal Space 3D Engine]
#include <iengine/objwatch.h>
Inheritance diagram for iObjectWatcher:
Detailed Description
This is a generic object watcher.
Currently it can watch on light and movable changes. You can query if something has changed by examining the 'number' or else you can register a listener and get notified when one of the objects changes. This object will not keep real references to the objects it is watching but it will clean up the watcher for some object if that object is removed.
Main creators of instances implementing this interface:
Main users of this interface:
- Application
Definition at line 110 of file objwatch.h.
Member Function Documentation
Get the last light.
Only valid if the last operation (GetLastOperation()) is one of CS_WATCH_LIGHT_....
Get the last mesh.
Only valid if the last operation (GetLastOperation()) is one of CS_WATCH_SECTOR_....
Get the last movable.
Only valid if the last operation (GetLastOperation()) is one of CS_WATCH_MOVABLE_....
Get the last operation that occured.
This will be one of:
- CS_WATCH_NONE: nothing happened yet.
- CS_WATCH_LIGHT_DESTROY: light is destroyed.
- CS_WATCH_LIGHT_MOVE: light has moved.
- CS_WATCH_LIGHT_COLOR: light has changed color.
- CS_WATCH_LIGHT_SECTOR: light has changed sector.
- CS_WATCH_LIGHT_RADIUS: light has changed radius.
- CS_WATCH_LIGHT_ATTENUATION: light has changed radius.
- CS_WATCH_MOVABLE_DESTROY: movable is destroyed.
- CS_WATCH_MOVABLE_CHANGED: movable is changed.
- CS_WATCH_SECTOR_NEWMESH: sector has a new mesh.
- CS_WATCH_SECTOR_REMOVEMESH: a mesh got removed from the sector.
Get the last sector.
Only valid if the last operation (GetLastOperation()) is one of CS_WATCH_SECTOR_....
Get the specified watched light.
Get the specified watched movable.
Get the specified watched sector.
Get the number of watched lights.
Get the number of watched movables.
Get the number of watched sectors.
Get the current number for his watcher.
This number will increase as soon as some of the watched objects change. When this happens you can query the last change (only the last change!) by calling GetLastOperation() and/or GetLastLight() or GetLastMovable(). Note that if the operation indicates that something is destroyed then you should no longer use the pointer returned by GetLastLight() or GetLastMovable() as the object will already be gone by then. You can only use the returned pointer to clean up from internal data structures.
Remove a light to watch.
Remove a listener.
Remove a movable to watch.
Remove a sector to watch.
Reset. Remove all watched objects from this watcher.
Add a light to watch.
Add a movable to watch.
Add a sector to watch for meshes.
The documentation for this struct was generated from the following file:
- iengine/objwatch.h
http://www.crystalspace3d.org/docs/online/api-1.0/structiObjectWatcher.html
On Wed, May 21, 2008 at 05:18:49PM +0200, Jim Meyering wrote:
> "Richard W.M. Jones" <rjones redhat com> wrote:
> > On Tue, May 20, 2008 at 03:51:53PM +0100, Daniel P. Berrange wrote:
> >> +if GLIBC_RPCGEN
> >> + mv $@ $ bak
> >> + sed -e 's/\t/ /g' $ bak > $@
> >> +endif
> >
> > I guess it doesn't matter in a generated file, but is the above
> > correct? Probably better to use the 'expand' command, if it is
> > available.
>
> Thanks for bringing that up.
> While I don't know how portable expand is,
> (I've never used it in a portable build script or Makefile)
> it's certain to be less portable than sed.
>
> However, (to my constant chagrin) \t is not portable in sed regexps.
> Makes me want to use perl -pe 's/...//' FILE instead.
> With Perl, "\t" *does* work portably.
> On the other hand, maybe using perl (or expand) would be ok here,
> since the only time an end user runs those rules is if/when they
> modify a dependent .x file.

We already use Perl extensively to post-process the .c file generated from rpcgen, so I just switched to use perl for the .h file too, and we can use its inplace edit to avoid the temporary file too, so I committed with:

if GLIBC_RPCGEN
    perl -i -p -e 's/\t/ /g' $@
endif
https://www.redhat.com/archives/libvir-list/2008-May/msg00396.html
BackTracking:
Find a solution by trying one of several choices. If the choice proves incorrect, computation backtracks or restarts at the point of choice and tries another choice. It is often convenient to maintain choice points.
In an exhaustive search algorithm you search every possible choice to reach to the goal state, however, the backtracking algorithm is an extension, where you realize a bad choice by using some constraint and do not explore that choice any further. Hence, this is optimized by reducing the number of choices that you would explore in case of exhaustive search.
Example:
The example can help you understand the backtracking algorithm. Almost all the problems have similar structure where you have to explore various options available and the choices are in the form of tree above.
Starting at Root, your options are A and B. You choose A.
At A, your options are C and D. You choose C.
C is a bad choice and it is decided by some constraint in the real world problems. Go back to A.
At A, you have already tried option C, and it failed. Try option D.
D is bad. Go back to A.
At A, you have no options left to try. Go back to Root.
At Root, you have already tried A. Try B.
At B, your options are E and F. Try E.
E is good which is your goal. Congratulations, you have solved the backtracking problem !
Backtracking Solution Structure:
ALGORITHM Backtrack(X[1..i])
    // Gives a template of a generic backtracking algorithm
    // Input: X[1..i] specifies first i promising components of a solution
    // Output: All the tuples representing the problem's solutions
    if X[1..i] is a solution
        write X[1..i]
    else
        for each element x ∈ Si+1 consistent with X[1..i] and the constraints do
            X[i + 1] ← x
            Backtrack(X[1..i + 1])
N Queens Problem:
The problem is to place n queens on an nxn chessboard so that no two queens attack each other by being in the same row or in the same column or on the same diagonal.
Solution:
We start with the empty board and then place queen 1 in the first possible position of its row, which is in column 1 of row 1. Then we place queen 2, after trying unsuccessfully columns 1 and 2, in the first acceptable position for it, which is square (2, 3), the square in row 2 and column 3. This proves to be a dead end because there is no acceptable position for queen 3. So, the algorithm backtracks and puts queen 2 in the next possible position at (2, 4). Then queen 3 is placed at (3, 2), which proves to be another dead end. The algorithm then backtracks all the way to queen 1 and moves it to (1, 2). Queen 2 then goes to (2, 4), queen 3 to (3, 1), and queen 4 to (4, 3), which is a solution to the problem.
import java.util.ArrayList;
import java.util.List;

public class NQueens {

    List<String[]> res = new ArrayList<String[]>();

    /**** number of possible solutions ****/
    int count = 0;

    public List<String[]> solveNQueens(int n) {
        int A[] = new int[n];
        for (int i = 0; i < n; i++) {
            A[i] = -1;
        }
        solve(A, 0, n);
        return res;
    }

    public void printBoard(int[] A, int n) {
        String str[] = new String[n];
        for (int i = 0; i < n; i++) {
            String temp = "";
            for (int j = 0; j < n; j++) {
                if (A[i] == j) {
                    temp += "Q";
                } else {
                    temp += ".";
                }
            }
            str[i] = temp;
        }
        res.add(str);
    }

    public boolean isValid(int[] A, int col) {
        // A queen in an earlier column conflicts if it shares a row or a diagonal.
        for (int i = 0; i < col; i++) {
            if (A[i] == A[col] || Math.abs(A[i] - A[col]) == (col - i)) {
                return false;
            }
        }
        return true;
    }

    public void solve(int[] A, int col, int n) {
        if (col == n) {
            printBoard(A, n);
            count++;
        } else {
            for (int i = 0; i < n; i++) {
                A[col] = i;
                if (isValid(A, col)) {
                    System.out.print(col + " ");  // debug trace from the original post
                    solve(A, col + 1, n);
                }
            }
        }
    }
}
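A minimal driver for the class above (not part of the original post; the 4x4 board size is just an example) could look like this:

public class NQueensDemo {
    public static void main(String[] args) {
        NQueens solver = new NQueens();
        // Print every placement found for a 4x4 board.
        for (String[] board : solver.solveNQueens(4)) {
            for (String row : board) {
                System.out.println(row);
            }
            System.out.println();
        }
    }
}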
http://www.crazyforcode.com/queens-backtracking-problems/
Opened 4 years ago
Closed 4 years ago
Last modified 3 years ago
#16563 closed Bug (fixed)
Error pickling request.user
Description (last modified by jezdez)
Trying to pickle a request.user in trunk raises: TypeError, can't pickle function objects.
Looking at it shows that request.user is a SimpleLazyObject and it wraps a <lambda>, so it can not be pickled.
try:
    import cPickle as pickle
except:
    import pickle

def some_view(request):
    pickle.dumps(request.user)  # raise type error …
Attachments (2)
Change History (25)
comment:1 Changed 4 years ago by PaulM
- Needs documentation unset
- Needs tests unset
- Patch needs improvement unset
- Resolution set to wontfix
- Status changed from new to closed
comment:2 Changed 4 years ago by SmileyChris
- milestone set to 1.4
- Resolution wontfix deleted
- Status changed from closed to reopened
- Triage Stage changed from Unreviewed to Design decision needed
I'm going to reopen for now due to the fact this is a regression in 1.4.
Previously, this was possible:
def my_view(request):
    obj = MyObj(user=request.user, amount=1)
    request.session['unsaved_obj'] = obj
    # ...
Now request.user has been changed to a SimpleLazyObject rather than changing the request class to actually lazily return the true user object, this will choke.
Maybe this is an acceptable regression, but it's something I tripped up on in a project recently.
comment:3 Changed 4 years ago by russellm
- Severity changed from Normal to Release blocker
If this is a regression, then it should be tracked as a release blocker.
comment:4 Changed 4 years ago by jezdez
comment:5 Changed 4 years ago by bronger
- Cc bronger added
comment:6 Changed 4 years ago by jacob
- Triage Stage changed from Design decision needed to Accepted
comment:7 Changed 4 years ago by jacob
- milestone 1.4 deleted
Milestone 1.4 deleted
comment:8 Changed 4 years ago by anonymous
Same problem with Django 1.4 (trunk), but in Django 1.3 all fine
from django.core.cache import cache

car = Car.objects.get(pk=8809)
car.name = 'test'
car.save()
cache.set('test', request.user, 1)
Raise error:
can't pickle function objects
C:\Python\lib\site-packages\django\core\cache\backends\locmem.py in set
self._set(key, pickle.dumps(value), timeout)
C:\Python\lib\copy_reg.py in _reduce_ex
raise TypeError, "can't pickle %s objects" % base.name
comment:9 Changed 4 years ago by PaulM
Is this regression related to the change to using pickle.HIGHEST_PROTOCOL throughout Django? The memcached backend used that, but locmem didn't. Does that mean this isn't a regression after all, and depended on the non-canonical behavior of locmem using the lowest pickle protocol?
comment:10 Changed 4 years ago by SmileyChris
No, it's due to the fact that a SimpleLazyObject isn't picklable.
comment:11 Changed 4 years ago by tobias
- Owner changed from nobody to tobias
- Status changed from reopened to new
comment:12 Changed 4 years ago by tobias
- Owner changed from tobias to nobody
comment:13 Changed 4 years ago by poirier
I wouldn't call this a regression unless Django promised you could do it in 1.3, at least implicitly. Not every behavior change is a regression.
comment:14 follow-up: ↓ 18 Changed 4 years ago by poirier
FYI, the current code that makes request.user a SimpleLazyObject is from changeset [16305], which was a re-working of [16297], to fix #15929.
A comment cleaned up along the way said:
# If we access request.user, request.session is accessed, which results in
# 'Vary: Cookie' being sent in every request that uses this context
# processor, which can easily be every request on a site if
# TEMPLATE_CONTEXT_PROCESSORS has this context processor added. This kills
# the ability to cache. So, we carefully ensure these attributes are lazy.
which seems like a pretty good reason for making request.user lazy.
Would it be possible to fix this instead by fixing the chain of events somewhere else? e.g. should any access of request.session result in setting the Vary: Cookie header?
comment:15 follow-up: ↓ 16 Changed 4 years ago by tobias
As a work around, can't you just pickle a real User model object instead of request.user? E.g., the dumb way to do this might look something like:
def some_view(request):
    pickle.dumps(User.objects.get(pk=request.user.pk))
comment:16 in reply to: ↑ 15 ; follow-up: ↓ 17 Changed 4 years ago by poirier
comment:17 in reply to: ↑ 16 Changed 4 years ago by tobias
As a work around, can't you just pickle a real User model object instead of request.user?
That would still require a code change when upgrading to 1.4, right?
Yes, I meant it as a complement to your original argument. As you said, being able to pickle request.user directly was never documented, so whether or not it's a release blocking regression is debatable.
comment:18 in reply to: ↑ 14 Changed 4 years ago by carljm
Would it be possible to fix this instead by fixing the chain of events somewhere else?
No, I don't think so.
e.g. should any access of request.session result in setting the Vary: Cookie header?
Yes, it should. Any access of the session means the response you are generating is almost certainly dependent in some way on values in the session, which means serving that same response as a cached response to other users would be at best wrong, and at worst a security issue. This applies even more strongly, if anything, to accessing request.user in particular. So it's quite important that request.user remain lazy, and that accessing it trigger Vary: Cookie on the response.
comment:19 Changed 4 years ago by kmike
- Cc kmike84@… added
Are there obstacles for making LazyObject instances picklable? E.g.:
class LazyObject(object):
    # ...
    def __getstate__(self):
        if self._wrapped is None:
            self._setup()
        return self._wrapped

    def __setstate__(self, state):
        self._wrapped = state
Changed 4 years ago by lukeplant
Patch with test
comment:20 Changed 4 years ago by lukeplant
@kmike: That doesn't quite work, but it is close. My attached patch works in some cases, if someone can verify that it fixes the issue with a real User instance in environment showing the bug, I will commit it.
I added the code to SimpleLazyObject not LazyObject, to minimize impact with other subclasses of LazyObject. SimpleLazyObject is not designed to be subclassed really - you are supposed to use it just by passing a callable to the constructor.
There is one potential backwards compatibility concern here: if you had used a SimpleLazyObject with a callable that was itself pickleable, it would previously have been possible to pickle that SimpleLazyObject without the patch. Potentially, the object could have been pickled in the 'lazy' state, without the callable having been called.
However, I've tested this, and it seems that - for whatever reason - calling pickle.dumps causes the callable to be called. Therefore, the object is not serialised in a 'lazy' state - it is always 'evaluated'. And this is just the same as it is with the patch, except I have made it explicit. The only difference now is that SimpleLazyObject._setupfunc is no longer available after unpickling, but that shouldn't be a problem since it is never used.
I'm therefore fairly confident that this patch shouldn't cause other problems.
Changed 4 years ago by lukeplant
Small fix to patch (unneeded import)
comment:21 Changed 4 years ago by lukeplant
OK, this has turned out to be rather tricky.
The previous patch only works for objects that have no state. The basic problem is that due to proxying the __class__ attribute, the pickling machinery is confused (I think), and so it stores a reference to the class of the wrapped object, rather than SimpleLazyObject.
So, we need to define the __reduce__ method for full customization, rather than the simpler __getstate__ method. However, for some reason, the presence of the __class__ attribute proxying causes this to fail - if you define __reduce__ on SimpleLazyObject, it is never called when pickling unless you remove the __class__ hacking.
This takes us back to using __getstate__ (which, for some reason, doesn't suffer from the same problem as __reduce__). But now we have to cooperate with the fact that pickling will skip SimpleLazyObject and store a reference to the wrapped class, and define __getstate__ so that it returns the state from the wrapped object. This appears to work:
def __getstate__(self):
    if self._wrapped is empty:
        self._setup()
    return self._wrapped.__dict__
The result is that on unpickling, the SimpleLazyObject wrapper disappears entirely, which should be OK.
I still get failure if SimpleLazyObject is wrapping a builtin, and nothing but removing the __class__ proxying appears to be able to fix that. But that bug has always been there, and this change doesn't affect it.
I've confirmed that with this patch you can pickle request.user, and on pickling get an object that compares equal to the original, so I'm therefore committing my improved patch.
comment:22 Changed 4 years ago by lukeplant
- Resolution set to fixed
- Status changed from new to closed
comment:23 Changed 3 years ago by jdunck
See related ticket:
Some things aren't pickleable. It's a known limitation at the Python level, and is unlikely to change. Is Django itself trying to do this somewhere? Is there an extremely good reason this should be possible? I'm closing this as wontfix. Please feel free to reply if you've got good answers to these questions.
https://code.djangoproject.com/ticket/16563?cversion=0&cnum_hist=21
What is Liquibase?
Liquibase is an open-source database-independent library for tracking, managing and applying database schema changes. It was started in 2006 to allow easier tracking of database changes, especially in an agile software development environment.
I find Liquibase as a neat tool to migrate your database automatically. DB migration itself is a very complicated topic anyway, outside the scope of this article.
Liquibase can run as a standalone tool or it can be integrated into your application. It's easy to add it to Spring context.
Spring Boot makes it even easier. Liquibase is autoconfigured if you enable it in the properties file, have Liquibase on the classpath, and have a DataSource in the context:

liquibase.change-log=classpath:changelog.xml
liquibase.enabled=true
Liquibase makes MockMvc testing very simple, too. One can configure it to create the H2 database for testing purposes.
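One way this might look (a sketch; exact property names depend on your Spring Boot version, and the in-memory URL is illustrative) is a test profile that points the same changelog at H2:

spring.datasource.url=jdbc:h2:mem:testdb;DB_CLOSE_DELAY=-1
spring.datasource.driver-class-name=org.h2.Driver
liquibase.change-log=classpath:changelog.xml
liquibase.enabled=true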
How Liquibase works?
Liquibase reads the xml changelog file and figures out what changesets it needs to apply. It uses the DATABASECHANGELOG table in your DB (DataSource) for this purpose. The DATABASECHANGELOG table contains the list of changesets that are already applied, with their ID, FILENAME, MD5SUM, AUTHOR and a few other properties.
The logic is relatively simple. Just by comparing the changelog with the table Liquibase knows what changesets it needs to apply. There are, however, few gotchas ...
- Liquibase can only take one changelog file
- Liquibase determines the list of changesets to apply before applying them
- the actual DB can get out of sync with the DATABASECHANGELOG table, e.g. if you manually modify the database
- if Liquibase fails to apply a changeset, it fails immediately and won't continue with the next changesets
- if Liquibase is running in a Spring app as a bean, it executes during application startup, hence if it fails, the application won't start
- changesets are not atomic. It can happen that part of the changeset passes and modifies the DB properly, and the next part fails. The changeset record won't go into the DATABASECHANGELOG table, hence it leaves the DB in a state that requires manual repair (e.g. reverting part of the changeset and letting Liquibase run again)
- the changesets can't be modified. If you modify a changeset after it was applied in your DB, then Liquibase fails, stating that the MD5SUM doesn't match
- the ID is not the unique identifier of a changeset; it is in fact the combination of ID, FILENAME and AUTHOR
- changesets that are in the DATABASECHANGELOG and are not in the changelog files are ignored
Of course, Liquibase has much more functionality. Just read the documentation. This is also out of scope of this article.
The changelog file grows over time
Yep, if you don't define some strategy at the beginning, your changelog file will just grow bigger and bigger. On a large project it can be a couple of thousands of lines long with hundreds of changesets. There is a high code churn on the changelog file, too, so it will cause you some merging effort.
There are a few alternatives that you should consider early on to avoid this.
Define multiple changelog files
... and <include> them in the master changelog, e.g.:

<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog xmlns="" xmlns:
    <include file="feature1.xml" relativeToChangelogFile="true"/>
    <include file="feature2.xml" relativeToChangelogFile="true"/>
    <include file="feature3.xml" relativeToChangelogFile="true"/>
</databaseChangeLog>
The benefit of this one is obvious - less code churn, better organization. The problem comes if there are any logical dependencies between changesets across the files - e.g. if there are any relations defined between tables of multiple files. Since Liquibase executes the changesets in sequence, it starts with feature1.xml and continues with feature2.xml.
Perhaps you can find out a better split key - based on target releases perhaps?
<include file="release_0.1.0.0.xml" relativeToChangelogFile="true"/>
<include file="release_0.1.0.1.xml" relativeToChangelogFile="true"/>
<include file="release_1.0.0.0.xml" relativeToChangelogFile="true"/>
Configure multiple Liquibase runs
Since one run can only take one changelog file, just define multiple changelog files and let Liquibase run multiple times. In your Spring (Boot) app just define multiple liquibase beans:
import liquibase.integration.spring.SpringLiquibase;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.DependsOn;

import javax.sql.DataSource;

@Configuration
public class MultipleLiquiaseConfiguration {

    @Bean
    public SpringLiquibase liquibaseRelease1(DataSource dataSource) {
        SpringLiquibase liquibase = new SpringLiquibase();
        liquibase.setDataSource(dataSource);
        liquibase.setChangeLog("classpath:release_v1.xml");
        return liquibase;
    }

    @Bean
    public SpringLiquibase liquibaseRelease2(DataSource dataSource) {
        SpringLiquibase liquibase = new SpringLiquibase();
        liquibase.setDataSource(dataSource);
        liquibase.setChangeLog("classpath:release_v2.xml");
        return liquibase;
    }
}
Both beans will be created in the context, hence 2 Liquibase runs will be performed. If you rely on Spring Boot's autoconfiguration, your entityManager bean will force you to have one bean called liquibase. This is easy to do. Also, if your changelogs need to run in a certain order, you can solve this with @DependsOn:
@Configuration
public class MultipleLiquiaseConfiguration {

    @Bean
    public SpringLiquibase liquibaseV1(DataSource dataSource) {
        SpringLiquibase liquibase = new SpringLiquibase();
        liquibase.setDataSource(dataSource);
        liquibase.setChangeLog("classpath:release_v1.xml");
        return liquibase;
    }

    @Bean
    @DependsOn("liquibaseV1")
    public SpringLiquibase liquibase(DataSource dataSource) {
        SpringLiquibase liquibase = new SpringLiquibase();
        liquibase.setDataSource(dataSource);
        liquibase.setChangeLog("classpath:release_v2.xml");
        return liquibase;
    }
}
Note that the last to run is called liquibase (which your entityManager depends on) and it points to the previous-to-run with the @DependsOn annotation.
How to deal with long changelog?
If you haven't applied any strategy early on, or you just joined a running project with legacy code, your changelog is already too big. Now, how to reduce it?
You might say - well, I just split it to multiple files and use either of the 2 strategies as mentioned above. Well, not so fast! :) I mentioned earlier that the filename is important as it is used to determine if a changeset was applied or not. If you simply move existing changesets to another file, Liquibase would think that those changesets were not applied and in fact will try to apply them again. And it will fail as the DB already contains the changes.
To describe the issue a bit better, just imagine a model situation having this changelog.xml:
<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog xmlns="" xmlns:
    <changeSet author="me" id="changeset1">
        <createTable tableName="TABLE1">
            <column name="COLUMN1" type="VARCHAR2(10)"/>
        </createTable>
    </changeSet>
    <changeSet author="me" id="changeset2">
        <createTable tableName="TABLE2">
            <column name="COLUMN1" type="VARCHAR2(10)"/>
        </createTable>
    </changeSet>
</databaseChangeLog>
And you do move the second changeset to changelog2.xml and include changelog2.xml in changelog.xml. Starting your app will fail with an exception similar to:

Table "TABLE2" already exists;
Ok, it will work just fine in your unit tests, since the DB is created from scratch, but will fail if you run Liquibase to migrate the DB of your deployed instance. We all agree that this is bad ;)
Luckily, we still have a few options left ;)
Change the
logicalFilePath
Liquibase allows you to define a so-called logical file path for your changelog. This allows you to fake Liquibase into thinking that the changesets actually come from the same file. Imagine that changelog2.xml now looks like this:
<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog xmlns="" xmlns:
    <changeSet author="me" id="changeset2">
        <createTable tableName="TABLE2">
            <column name="COLUMN1" type="VARCHAR2(10)"/>
        </createTable>
    </changeSet>
</databaseChangeLog>
Note the logicalFilePath value there. Yes, this will work: Liquibase will treat this changeset2 as if it was previously defined in changelog.xml. Perfect.
Actually, this approach also has a few drawbacks that might (but might not) stop you from applying it. If you don't store your changelog in the resources, but rather elsewhere in the filesystem, your DATABASECHANGELOG will contain the full path to the file. If you then have multiple environments where you want to migrate the DB and your changelog file locations vary, you have no way to set the logicalFilePath. Remember that it must match the previous value.
Another issue is that this approach is not the best if your intent to split the changelog is to move the part of it to another package, module, and so on.
Use intermediate changelog
If you intend to move part of your changelog to another module (e.g. you finally want to break that nasty monolith of yours into a few microservices having their own databases), this approach might suit you best. It contains some intermediate and temporary steps, but the outcome is what you want :)
The first step is to move all the relevant changesets to another file elsewhere. In our example above we just move changeset2 to changelog2.xml. Now we need to fake Liquibase into thinking that those changesets didn't change. We do it by modifying the FILENAME value in the database as part of the Liquibase changelog itself ;)
Create one more (intermediate/temporary) changelog (let's call it tmp-migration.xml) with just this one changeset:
<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog xmlns="" xmlns:
    <changeSet id="moving changesets from changelog to changelog2" author="Maros Kovacme">
        <sql>
            UPDATE DATABASECHANGELOG
            SET FILENAME = REPLACE(FILENAME, 'changelog.xml', 'changelog2.xml')
            WHERE ID IN (
                'changeset2'
            );
        </sql>
    </changeSet>
</databaseChangeLog>
This changeset will replace the FILENAME column value in the DB from classpath:changelog.xml to classpath:changelog2.xml. When we then run Liquibase with changelog2.xml, it will think that all changesets are already applied. It is not possible to use just 2 changelog files for this purpose: Liquibase first calculates the list of changesets to be applied (per changelog file) and only then applies them. We need to modify the FILENAME before it processes the second file.
The last step we have to apply is to define the corresponding beans in our context in the right order:
@Configuration
public class MultipleLiquiaseConfiguration {

    @Bean
    public SpringLiquibase liquibaseChangelog(DataSource dataSource) {
        SpringLiquibase liquibase = new SpringLiquibase();
        liquibase.setDataSource(dataSource);
        liquibase.setChangeLog("classpath:changelog.xml");
        return liquibase;
    }

    @Bean
    @DependsOn("liquibaseChangelog")
    public SpringLiquibase liquibaseMigration(DataSource dataSource) {
        SpringLiquibase liquibase = new SpringLiquibase();
        liquibase.setDataSource(dataSource);
        liquibase.setChangeLog("classpath:tmp-migration.xml");
        return liquibase;
    }

    @Bean("liquibase")
    @DependsOn("liquibaseMigration")
    public SpringLiquibase liquibaseChangelog2(DataSource dataSource) {
        SpringLiquibase liquibase = new SpringLiquibase();
        liquibase.setDataSource(dataSource);
        liquibase.setChangeLog("classpath:changelog2.xml");
        return liquibase;
    }
}
The changelog.xml will run first. The changeset2 exists in the DATABASECHANGELOG but not in the file, hence it is ignored. Then the tmp-migration.xml runs and changes the FILENAME column. Last, the changelog2.xml runs, but Liquibase will treat changeset2 as already applied.
Some time later (when you believe that all affected databases are already migrated) you might remove the tmp-migration.xml together with its bean. The changeset will stay in the DATABASECHANGELOG table, but that's just a minor thing I believe.
And then the next step could be to move the definition of beans to the contexts of your concrete microservices.
Conclusion
There is always some way ;)
https://dev.to/vladonemo/splitting-liquibase-changelong-no-problem-2a4l
In a previous example, “3D Rotating objects in Flex 4 using the Spark Rotate3D effect and Flash Player 10”, we saw how you could rotate objects in 3D space from 0 to 360 degrees along the X, Y, and Z axis using the Spark Rotate3D effect in the Flex 4 SDK.
The following example shows how you can rotate an image relative to its current rotation by setting the yBy property on the Spark Rotate3D instance.
16 thoughts on “Incrementally 3D rotating objects using the Spark Rotate3D effect in Flex 4”
Nice,
Is there an event you can catch when the animation is finished, I want to do a multi part animation, like
part 1 (then when this ends)
part 2 (then when this ends)
part 3
etc.
Also, can you rotate in the Z, ie bring things closer / further from the camera. I would try myself, but not on the gumbo betas, looking forward to Flex 4.
Yes, just listen for the ‘end’ event.
Here is an example listening for an end event. (right mouse click for code).
Why & When has been the ‘Application’ tag changed to ‘FxApplication’ why can’t my FB handle that? Yes, I do have one of the last SDKs (4.0.0.3132).
Any suggestions?
Greetings,
Johannes
Johannes,
The <Application> (or <mx:Application>) tag is the Halo version of the root application. Flex “Gumbo” introduced a new root application tag, <FxApplication>. You can use either one in Flex Gumbo projects.
Flex Builder 3 can use either the Application or FxApplication tag, assuming you’re using a Flex 4.0.0.x build and set the Flash Player version to 10.0.0.
Peter
Why doesn't Flex Builder help when you write a script tag with some functionality inside it? I feel like I'm writing in Notepad and know that I can make very many mistakes. Who can help me?
Examples updated to Flex SDK build 4.0.0.6298 (but didn’t update SWF yet).
Peter
I just downloaded the very latest and greatest Gumbo sdk version 4.0.0.6310….Your latest name space related code tweaks are still quite happy!….. off the the races….Web 3.0, Social Decision systems with a Adobe Flex based 3D RIA front end, here we come!
-Peter L.
Peter Lindener,
Yeah, I changed the image asset to something I had handy in my current Flex Builder project. And I am planning on going through all 160+ Gumbo/Spark beta examples and updating them to the latest API changes and namespace changes and updating/creating SWFs, but it’s just something I do in my spare time and isn’t a huge priority at the moment.
Peter
Hi
Peter –
Thanks for helping me get through the more challenging aspects of coming up to speed of Flex development right on the leading edge….. I will do my best to return the favor, by helping look out for any Gumbo implementation issues still needing to be debugged….
I’m not sure if my current challenge is name space related or not… My hunch is that it may be… I’m trying to take as much of the Action Script code in your AS oriented Rotate3D example…and move as much of the AS code out of the MXML file and instead place most of it in a separate “.as” class file….. I’m then attempting to place this top level Action script file in the top level of the “src” folder…..
I’m having a bit of trouble setting up the related top level package/class callijng implementation such that an instance of this top level class gets created so that it is visible on the screen, as before when all was in one MXML file and the AS was declared as a script…… Maybe, it just becuase I’m new to this, of perhaps there are some additonal Name space related thing that have come about recently…. I relize I’m asking a bit here with regards to your assistance…. I just figured I should best learn some of the more prefured coding habits early on… any input on how best to layout ones AS src files, as they would then relate to the top level MXML, would be quite instructive….. at this point my separate AS code file seems to run, but nothing appears on the screen. I currently have it extending the VGroup class…. Then, I’m not sure that is what I should be doing.
To keep things simple, perhaps in a better coding style, you AS centric example could also be configured to employ a seprate “.as” file…
Just food for thought.
All the best
-Peter
I have it right, In some ways, my prior post is in essance asking about best practices for packaging and then invoking “.as” defined components at the top level as would best be employed within the latest Gumbo beta sdk..
-Peter L.
I’m still trying to figure out how to pass or make reference to the base Application object from an Action Scrip class defined out side the MXML file, but instead an external “.as” file…
It seems to do so I need to declare a class ( not just a function) in AS3?….. then how does one get/ or pass reference to the base Application object so that one can add things to it ?
I’m trying….. but still know sucsess at moving the AS code to a separate “.as” file,
-Peter L.
P.S. it seems the “id” attribute is not allowed on the root tage of a component”
… I must be missing something here.
-Peter
I managed to get an external .as” file version of your code working by passing a referance to a Panel (with an attached id), that I added to the MXML file…. I then pass this referance into an external AS class’s constructor.. it works! then I have a couple of questions: does all of this UI creation via an exturnal “.as” file, require as I have done, that one actually define and construct a full package/class?
I was unable to get the “[Embed(“assets/Fx2.png”)]” logic to work correctly in the “.as” file, thus I moved its creation back into the MXML file and just passed a reference to the image into the “.as” class file’s logic……
While I do now have the external “.as” file’s logic working…. I guess it would be good to see how the “pro”s would suggest structuring this approach to AS based component creation … I do see that regularizing the outside UI container was likely a good thing ( that is, not directly referencing the outside app object )….. but my hunch is that there are cleaner ways of getting all of this to work well… Your thoughts on any of this ?
Cheers
-Peter L.
I manged to get it all figured out, most all the action script for the AS centric version now lives in a “.as” class file, I’m pretty sure the example coded is now what would be considered as best practice, Then you feedback on how well I have done would always be valued, I have exported the two projects as FLEX zip archives, How do I Upload these zipped code changes back to your blog ?
-Peter L.
Example(s) updated to Flex SDK build 4.0.0.8974 (but didn’t update SWF yet).
The angleXBy, angleYBy, and angleZBy properties were removed around 6/17; added a workaround in the example above.
Peter
http://blog.flexexamples.com/2008/10/25/incrementally-3d-rotating-objects-in-flex-using-the-fxrotate3d-in-flex/
Kubernetes: Getting Started
Getting started with Kubernetes might seem a bit daunting at first. Fortunately, this collection of guides, tips, and advice will help out.
Getting Started with Kubernetes sounds like quite a daunting feat. How do you get started with “an open-source system for automating deployment, scaling, and management of containerized applications”? Let’s examine Kubernetes’ beginning.
Containers have been in use for a very long time in the Unix world. Linux containers are popular thanks to projects like Docker.
Google created Process Containers in 2006 and later realized they needed a way to maintain all these containers. Borg was born as an internal Google project and many tools sprang from its users. Omega was then built iterating on Borg. Omega maintained cluster state separate from the cluster members, thus breaking Borg’s monolith. Finally, Kubernetes sprung from Google. Kubernetes is now maintained by Cloud Native Computing Foundation’s members and contributors.
If you want an “Explain Like I’m Five” guide to what Kubernetes is and some of its primitives, take a look at “The Children’s Illustrated Guide to Kubernetes.” The Guide (PDF) features a cute little giraffe that represents a tiny PHP app that is looking for a home. Core Kubernetes primitives like pods, replication controllers, services, volumes, and namespaces are covered in the guide. It’s a good way to wrap your mind around the why and how of Kubernetes. Fair warning though, it does not cover Kubernetes networking components.
Let’s break down the two areas you can get started with Kubernetes. The first area is maintaining or operating the Kubernetes cluster itself. The second area is deploying and maintaining applications running in a Kubernetes cluster. The distinction here is to provide compartmentalization when learning Kubernetes. To be proficient at Kubernetes, you should know both, but you can get started knowing one area or the other.
To learn how the internals of Kubernetes works, I would recommend Kelsey Hightower’s “Kubernetes The Hard Way”. It is a hands-on series of labs to bringing up Kubernetes with zero automation. If you want to know how to stand up all the pieces that make a full Kubernetes cluster, then this is the path for you.
If you want to get started with deploying containerized apps to Kubernetes, then minikube is the way to go. minikube is a tool that helps you deploy Kubernetes locally. You have to be able to run a hypervisor on your host, but most modern devices can. Each OS is different with regards to setting up everything for minikube, but you can run minikube on Linux, macOS, or Windows so the sky’s the limit. Deploying Docker (or rkt) containers to minikube is easy. It’s the things that make your container more resilient in a Kubernetes cluster
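As a rough sketch (commands and flags vary across minikube and kubectl versions, and the deployment name here is made up), a first deployment could look like this: start the local cluster, run an image, expose it, and open the service.

minikube start
kubectl run hello --image=nginx
kubectl expose deployment hello --type=NodePort --port=80
minikube service hello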
After kicking the tires on minikube, if you feel like it is missing a few components, then I would recommend minishift or CoreOS Tectonic. minishift is the minikube of Red Hat OpenShift. OpenShift has a fantastic UI and many features that make Kubernetes a little better. CoreOS Tectonic is a more opinionated, enterprise-ready Kubernetes. Luckily, CoreOS Tectonic has a free sandbox version. The nice thing about CoreOS Tectonic is the networking and monitoring that come baked into this iteration of Kubernetes. CoreOS has been very thoughtful about the decisions made in Tectonic and it shows.
Regardless of how you get started learning Kubernetes, now is the time to start. There are so many places to deploy Kubernetes now that it doesn’t make sense to not kick the tires before determining if it is a great fit for your use cases. Before you deploy to AWS, Google Cloud, or Azure, make sure you’re not wasting your time.
Published at DZone with permission of Chris Short . See the original article here.
Opinions expressed by DZone contributors are their own.
Details
- Type: New Feature
- Status: Closed
- Priority: Major
- Resolution: Fixed
- Affects Version/s: None
- Component/s: None
- Labels: None
- Hadoop Flags: Incompatible change, Reviewed
- Release Note: Added.
Description.
Issue Links
- blocks:
- is depended upon by: HADOOP-4077 Access permissions for setting access times and modification times for files (Closed)
- is related to: HADOOP-4099 HFTP interface compatibility with older releases broken (Closed); HADOOP-4986 FSNamesystem.getBlockLocations sets access time without holding the namespace locks (Closed); HADOOP-3336 Direct a subset of namenode RPC events for audit logging (Closed)
- relates to:
Activity
Currently on our bigger grids, we have a significant amount of files that we aren't sure whether anyone is actually using or not (e.g., /tmp). While I recognize that atime is a huge performance killer, whenever one deals with users who have free reign over their space, it is an incredibly important tool to help maintain the system.
This is especially important given the lack of ACLs. On our larger grids, there are many files that are just kind of scattered all over that we have no real insight as to their purpose, much less their usage pattern. All we know is that we didn't put them there.
Having an access log would at least tell us whether something is being used. Once users get added to the system, having the user information combined with whether a file was touched will be extremely handy.
Operationally, I see this being used by dumping the data on a regular interval into an RDBMS or perhaps even inside HDFS itself. It is then fairly trivial to create tools and form policies around data retention.
I am not too worried about data structure size. But if we have to write a transaction for every file access, that could be a performance killer, do you agree? My proposal was to somehow batch access times updates to the disk (using the access bit). Using the access-bit and a cron job that runs every hour could effectively provide a coarse-grain-access-time.
> if we have to write a transaction for every file access, that could be a performance killer, do you agree?
I don't know. Logging file opens should be good enough, right? How much is transaction logging a bottleneck currently? How much worse would this make it? If files average ten or more blocks, and we're reading files not much more often than we're writing them, then the impact might be small.
Another option to consider is making this a separate log that's buffered, since its data is not as critical to filesystem function. We could flush the buffer every minute or so, so that when the namenode crashes we'd lose only the last minute of access time updates. Might that be acceptable?
> Another option to consider is making this a separate log that's buffered
This is a pretty good option. It should cost almost nothing to write the access to a buffered log. I suspect that for the use case Allen describes it would be sufficient to discover the outliers i.e. files and directories that haven't been accessed in months.
I agree that we can record the timestamp when an "open" occurred in namenode memory, mark the inode as dirty and then allow a async flush daemon to persist these dirty inodes to disk once every period. That should be optimal and should be light-weight.
Working on some jobs, I have realized that nearly all the MR jobs somehow create temporary directories/files. Due to various reasons (program bug, program crash, etc.), these may not be deleted properly. We may add a FileSystem#createTempFile() and attach it with a timestamp. And after some period, these can be deleted by batch tasks.
Now that we have permissions and ownerships, such an access log would also need to differentiate between reads, writes, ownership changes, and permission changes. This would be extremely helpful in case forensics need to be performed to track down someone doing Bad Things(tm).
I plan on doing the following:
1. Add a 4 byte field to the in-memory inode to maintain access time.
2. Create a new transaction OP_SET_ACCESSTIME to the edits log.
3. Every access to read data/metadata for a file will do the following:
– update the in-memory access time of the inode.
– write a OP_SET_ACCESSTIME entry for this inode to the edits log buffer, do not sync or flush buffer
4. Enhance the dfs shell/webUI to display access times of files/directories
This should not adversely impact the transaction processing rate of the namenode. Other types of transactions (e.g. file creation) will anyway cause the transaction-log-buffer to get synced to disk pretty quickly. This implementation will not distinguish between different kind of metadata accesses and is primarily targeted to weed out files that are not used for a long long time.
Most tests seem to indicate amount of data synced matters as well (along with number of syncs). I will be surprised if a benchmark that tests mixed load (say 10% writes, 90% reads) is not impacted.
+1.
This looks good enough. I am wondering if you can also log the username somewhere (either the edit log or the namenode log). We are potentially interested in userids reading specific parts of the file system (and then maintaining last-reader information at the directory level), so this may allow us to tail the appropriate log and pick such information up.
Doesn't HADOOP-3336 do this already in a separate log file?
I agree that HADOOP-3336 does this from an auditing perspective.
I am interested in making some form of archival store in HDFS. Files that are not used for a long time can automatically be moved to slower and/or denser storage. Given the rate at which a cluster size increases, and given the fact that the cost to store data for infinitely long time is very low, it makes sense for the file system to make intelligent storage decisions based on how/when data was accessed. This argues for "access time" to be stored in the file system itself.
HADOOP-3336 can be used to accomplish this to some extent... the separate log that it generates can be periodically merged with the file system image. But, I feel that design is a little awkward and not too elegant.
I think this proposal is in the right direction.
According to HADOOP-3860, the name-node can currently perform 20-25 times more opens per second than creates.
Which means that if we let every open / getBlockLocation be logged and flushed we lose big.
Another observation is that map-reduce does a lot of ls operations both for directories and individual files.
I have seen 20,000 per second. This is done when the job starts and depends on the user input data and on how many tasks should the job be running.
So maybe we should not log file access for ls, permission checking, etc. I think it would be sufficient to write OP_SET_ACCESSTIME only in the case of getBlockLocations().
Also I think we should not support access time for directories only for regular files.
Another alternative would be to keep the access time only in the name-node memory. Would that be sufficient enough to detect "malicious"
behavior of some users? Name-nodes usually run for months, right? So before say upgrading the name-node or simply every (other) week
administrators may look at files that have never been touched during that period and act accordingly.
My main concern is that even though with Dhruba's approach we will batch access operations and would not lose time on flushing them directly, the journaling traffic will double; that is, with each flush more bytes need to be flushed. Meaning increased latency for each flush, and bigger edits files.
It would be good to have some experimental data measuring throughput and latency for getBlockLocation with and without ACCESSTIME
transactions. The easy way to test would be to use NNThroughputBenchmark.
I agree with Konstantin on most counts. I will probably implement access times for files and directories for getblocklocations RPC only and then run NNThroughputBenchmark to determine its impact.
Did you mean files only, directories don't have getBlockLocations()?
Yes, Konstantin, I meant "access time for files for the getblocklocations RPC only".
Regarding Joydeep's requirement about recording the user-name of last access, I agree with Raghu that it is more likely a case for HADOOP-3336.
If and only if increase in EditsLog data noticeably affects performance (which I suspect it will) : Couple of options :
- Provide an accuracy setting : For e.g. if an admin is interested only in accuracy of days, this setting could be 24 hours and an access to a file with 24hours will be recorded only once.
- Maintain only the in-memory access times. Since NameNode is not restarted very often, admins could always run a script every week or so to get a snap shot of access times. When the NameNode restarts similar script could update the access times from prev snap-shot.
Hi Raghu, the idea you propose "update access times olce every 24 hours" sounds good. However, how will the namenode remember which inodes are dirty and need to be flushed? It can keep a data structure to record all inodes that have "dirty" access times, but it needs memory. Another option would be to "walk" the entire tree looking for "dirty" inodes. Both approaches are not very appealing. Do you have any other options?
> However, how will the namenode remember which inodes are dirty and need to be flushed?
It does not need any more structures than what you have proposed ("Add a 4 byte field to the in-memory inode to maintain access time."). So at each access, you check if this 4-byte field is older than "accuracy setting", then you add a EditLog entry. If you want to keep the in-memory accurately (which I don't think is required), then you need to add 4 bytes more to record "last logged time". Right?
Until we get volumes or the equivalent, it would be good to have the accuracy setting take a path. For example, I might want to have a more accurate setting for data outside a user's home directory.
Just to clarify once again, the first option above does not require any more memory or tree traversals. It just reduces the number of entries to EditsLog. Its actually just a tweak.
Raghu, just to make sure I understand right: let's say the accuracy setting is 24 hours. Now, suppose I read the contents of a file /tmp/foo now at 1PM. The in-memory inode is updated with the access time of 1PM. But it is not recorded in the transaction log. Let's assume that no other files are accessed in the file system for the entire next day.
When it is 1 PM tomorrow, the system has to remember that /tmp/foo needs to be flushed. How does this occur? How does hdfs find out that the inode /tmp/foo is dirty and has to be flushed to the transaction log?
> The in-memory inode is updated with the accesstime of 1PM. But it is not recorded in the transaction log.
It is recorded in the transaction log at this time (assuming it was not accessed in 24 hours prior to that).
> When it is 1 PM tomorrow, the system has to remember that /tmp/foo needs to be flushed. How does this occur? [...]
It does not need to remember, since the transaction was written at 1 PM previous day.
I am trying to see if I am missing something here. Note that effect of not sync-ing the editslog file for each access is same as before.
IOW, a last access time of 't' returned by NameNode for file implies "this file was last accessed during [t, t+24h)".
> IOW, a last access time of 't' returned by NameNode for file implies "this file was last accessed during [t, t+24h)".
Basically, it is recording "access date" for the case above.
BTW, is it expensive to invoke System.currentTimeMillis()? I don't have any idea.
I got it Raghu. Thanks for the tip. I like your proposal. +1.
This patch does the following:
1. Implements access time for files. Directories do not have access times.
2. The access times of files is precise upto an hour boundary. This means that all accesses to a file between 2Pm and 2:59PM will have an access time of 2PM.
3. The transaction to write access time to edits is not synced. It is assumed that it will get synced as part of other transactions.
4. There is a configuration parameter to switch off access-times. By default, it is on.
5. There is a new FileSystem API setAccessTime() to set the access time on a file.
6.
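A minimal sketch of the hour-precision check described in item 2, using illustrative names rather than the actual patch code (logAccessTime() is referenced elsewhere in this thread; the field and method names here are assumptions):

// Illustrative only -- not the actual HADOOP-1869 patch.
private static final long ACCESS_TIME_PRECISION = 60 * 60 * 1000L; // one hour

void setAccessTimeIfNeeded(INodeFile inode, long now) {
  // Round down to the hour so every access between 2:00PM and 2:59PM
  // records the same value and generates at most one edits entry.
  long rounded = now - (now % ACCESS_TIME_PRECISION);
  if (rounded > inode.getAccessTime()) {
    inode.setAccessTime(rounded);      // update the in-memory inode
    logAccessTime(inode, rounded);     // buffered edits-log entry, not synced here
  }
}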
> 2. The access times of files is precise upto an hour boundary
Can this be made configurable? If not now, I am 100% certain it will be made configurable in near future. I don't see any advantage to not making this a config variable (but probably there is one). In fact we might have a feature request to make it runtime-configurable.. but not required in this jira.
A few preliminary comments.
- I agree we should better make INode.accessTimePrecision a configuration parameter. In any case I did not expect it to be an extra field in INode (not even static!!).
- If we introduce hdfs.access.time.precision as a config parameter, then you do not need dfs.support.accessTime because you can either set hdfs.access.time.precision to a big (unreachable) number or pick a special value say "0" to indicate the atime should not be supported.
- FileSystem.setAccessTime() and related changes to client classes including ClientProtocol seem to be redundant. We already have such method, just call getBlockLocations().
- In FSDirectory.setAccessTime() I would modify atime in memory every time it's requested. This is free once you already have the INode. Only calling of logAccessTime() should be done once an hour.
Incorporated most review comments. I do not update the in-memory access time every time. The in-memory access time is in sync with the value persisted on disk. Otherwise, the access time of a file could move back in time when a namenode restarts!
I also ran benchmarks with NNThroughputBenchmark. All benchmarks remain at practically the same performance. In particular, the "open benchmark with 300 threads and 100K files" is as follows:
patch trunk
----------------------------
59916 59865 ops/sec
59171 59191 ops/sec
-1 overall. Here are the results of testing the latest attachment
against trunk revision 689230.
+1 @author. The patch does not contain any @author tags.
+1 tests included. The patch appears to include 4 new or modified tests.
-1 javadoc. The javadoc tool appears to have generated 1 warning messages.
-1 javac. The applied patch generated 484 javac compiler warnings (more than the trunk's current 480). The benchmark data (within 1-2%) looks good.
- FSConstants
- accessTimePrecision is not a constant and is not used anywhere.
- white space added.
- FileSystem
- I strongly disagree with introduction of FileSystem.setAccessTime().
File systems usually do not provide such an interface.
Access time is changed to current when you access a file (call getBlockLocations() in our case), but the clients should not be able to set atime. It is not secure. Users can mistakenly set it to some time in the past or maliciously to a future value and screw the system up.
- white space added.
- FSNamesystem. It is better to replace boolean member supportAccessTime by a method
boolean isAccessTimeSupported() { return accessTimePrecision > 0; }
- INode. Please revert changes to this line
+ /** * Get last modification time of inode.
- FSDirectory. The name-node will print the message below every time you access the file.
NameNode.stateChangeLog.info("File " + src + " already has an precise access time."
This is going to be a problem for log watchers, and will affect the performance.
Is it just a debug message? I'd remove it.
- I thought that ls should print aTime. Did not find any references to FileStatus.getAccessTime() in shell files.
Address review comments.
Incorporated all of Konstantin's review comments other than the one that said that setAccessTime() call should be removed.
The setAccessTime() call is a utility method that allows an application to set the access time of a file without having to "open" the file. The permission-access checks are precisely the same as those for "opening" a file, so there isn't any security concern IMO. Most file systems support setting access times/modification times on a file; see, for example, the POSIX utime() call.
I purposely did not add a command to the FsShell to display access times. An application can fetch the accessTime using a programmatic API FileSystem.getFileStatus().
Mistakenly closed issue, re-opening.
I don't understand. There must be some secret use case that you don't want to talk about or something. I have all those questions
- Why do we need to be able to setAccessTime() to Au 31, 1991 or Jul 15, 2036?
- Why do we need setAccessTime() but never needed setModificationTime()?
- My understanding is that access time's main use case is for ops to be able to recognize files that have not been used for the last 6 months and remove them on the basis that they are old. So a user can lose files if he by mistake sets aTime to e.g. 1 B.C. Or alternately a user can set aTime on files 1 year in advance and that will keep ops from removing them for the next 1.5 years.
- More. Local file system, KFS, S3 all will not support setAccessTime(), but HDFS will. Is it right to make it a generic FileSystem interface?
- Pointing to utime() is the same as pointing to FSNamesystem.unprotectedSetAccessTime(). I still cannot change aTime or mTime using bash.
I guess I am saying I am ok with a touchAC() method (as in touch -ac), but it is already there, called getBlockLocations(), and I don't see why you need more.
The rest looks great.
I can see where the capability to set access time would be extremely useful for the FUSE case. (and the NFSv4 proxy case, should that come to pass)
I can see where the capability to set access time would be extremely useful for the FUSE case. (and the NFSv4 proxy case, should that come to pass)
+1 it would be nice as "touch /export/hdfs/foo" seems to be a common way of checking fuse-dfs and the returned IO Error is confusing since the module is working but just can't implement touch. I guess it could be a no op but when access time is configured, it would be nice to have.
> it would be nice as "touch /export/hdfs/foo"
Isn't touch just 'append(f); close(f)'? That's what command line touch seems to do.
Edit: I am just talking about touch. no strong opinion about w.r.t. setModificationTime() and setAccessTime() .
I dunno about touch, but I was thinking of tar, etc case where atime can be restored as part of the unpack operation.
in fuse-dfs, this is the error I get:
touch /export/hdfs/user/pwyckoff/foo
touch: setting times of `/export/hdfs/user/pwyckoff/foo': Function not implemented
So, I assumed one needs to implement some attribute setting function. But, this is against 0.17 so appends also give me an IO Error.
touch() is getBlockLocations() in terms of hdfs.
Preserving file times while copying or tar-ing is useful, no doubt about it, but is rather different from being able to set it to an arbitrary value.
setAccessTime() gives more than what is needed.
Hi Konstantin, I am not fixated on providing the setAccessTime method. However, I think it is a powerful feature that can be used by any hierarchical storage system. It even has precedent on all Unix and Windows systems (for example, the POSIX utime()/utimes() calls).
If more people feel strongly about not having the setAccessTime API, I will remove it from the patch.
+1 for setAccessTime() in HDFS.
-1 for adding it to FileSystem API. As we move towards stable APIs we really need to be more cautious about adding new APIs. Once there is enough utility demonstrated by this API in HDFS, it can move up (perhaps in a different form).
regd touch, its main function is to change modification time (and create the file if it does not exist).
> regd touch, its main function is to change modification time
i.e. getBlockLocations() does not seem sufficient or correct.
-1 overall. Here are the results of testing the latest attachment
against trunk revision 689666.
Here is what I get from the discussion above.
- touchAC() method is useful and can find immediate application in fuse.
- for archival and (remote) copy purposes we will need a flag that would create new files with time (and may be other) attributes having THE SAME value as the source files.
- there is no reasonable use case for a setAccessTime method at the moment. I mean nobody needs to set aTime to yesterday or tomorrow unless it is the time of the file being copied or untarred.
In the spirit of minimizing impact on external APIs I propose to
- introduce FileSystem.touchAC() and implement it in HDFS as a call to getBlockLocations().
- not introduce setAccessTime() anywhere in hdfs client code including ClientProtocol.
We can always introduce setAccessTime() when it will really be necessary. Because once introduced it's hard to take it back.
+1 to not unduly adding stuff to the API, but when the thing in mind is something needed to support a posix API, shouldn't there be an exception? We already know as Allen points out that setAccessTime is needed to implement tar properly, so isn't that an important enough application?
I still vote for having FileSystem.setAccessTime(). It is very much like POSIX and I do not see a security issue anywhere at all. Also, most archiving/unarchiving utilities set the access time. For example, a restore utility will need to set the access time of the file.
As I said before setAccessTime has wider semantics than it is required for tar .
setAccessTime lets you set an arbitrary value to aTime, while in tar you just need to replicate the time of an existing file.
On the other hand setting only access time is not sufficient for tar. You also want to keep the same modification time, owner, group and permissions. So to me it makes more sense to introduce a new create method with FileStatus of the existing file as a parameter or a copy method with an option to replicate FileStatus fields.
Posix is not an argument in connection with hdfs.
I always find it hard to argue with my kind when after all pros and cons they say " I still want it".
What restore utility?
If we want to provide POSIX like functionality when ever possible (which I think is good idea), it would make more sense to keep the names/parameter etc similar as well. Though not part of this jira, I am +1 for FileSystem.utimes() and -1 for FileSystem.setAccessTime().
Sorry I created a confusion. When I talked about touch() I meant changing access time to current, but Raghu just corrected me that touch() in posix it can also change modification time. Just to clarify I did not want to say anything about modification time, everything I talked about was access times. Precisely, I meant
touch -ac foo
I'll replace touch() with touchAC() in my earlier posts to clarify that if anybody will be curious enough to read it.
Maybe this is a dumb question but how will hadoop archives (htars?) interact with access times?
Also, at least to me, an API called touchAC() seems very non-obvious as to its purpose. (esp if I'm doing this on Windows)
yes, I don't think we should have a touchAC()... touch would be a utility that could be implemented with other FileSystem API.
I like Raghu's proposal that FileSystem.setAccessTime() can be renamed as FileSystem.utimes(FileStatus). But it creates some other issues:
1. The FileStatus object has blockSize of the file. The blockSize cannot be changed. Similarly, the FileStatus object has a field called 'isdir". What happens to this one?
2. Similarly, the FileStatus has the length of the file. Are we going to truncate the file (or create a sparse file with holes if the user sets a longer length)?
3. There are existing APIs FileSystem.setReplication(), FileSystem.setOwner(), setGroup(), setPermissions(). etc. Will these be deprecated or coexist with the new API?
I prefer adding a setAccessTime because it allows an application to set the access time to an arbitrary value. If we want to merge all the above APIs into FileSystem.utimes(), I can do it as part of a separate JIRA.
Raghu, Konstanin: does it sound ok?
I don't know how FileStatus, blockSize etc matters. I would have thought it would be something like FileSystem.utimes(path, modTime, aTime) and keep the behaviour as close to as possible to posix man page, just like FileSystem.setAccessTime() interface you added.
I suggested the name utimes() since you gave utimes() as the justification. If you think setAccessTime() should be the name, that s alright I guess.
The API stuff could better done in a different jira IMHO.
Edit : minor
Hi Raghu, Thanks. I like your proposal of having
FileSystem.utimes(path, modTime, aTime).
I can do this as part of this JIRA. Further cleanup of the API can be done as part of another JIRA. Do we have consensus now?
+1 for me.. in this jira or a different one, does not matter. sticking to posix-like when possible helps since that interface has already gone through the arguments..
Actually I wanted to edit my comment make it clear that I am not too opposed to setAccessTime()...
> setAccessTime because it allows an application to set the access time to an arbitrary value.
This is exactly the reason why I am against introducing setAccessTime. No arbitrary value to access time. And therefore no utimes() for me.
I do not oppose to set aTime to current.
Ok, from Raghu's and Konstantin's comments, I guess nobody is stuck on how the API looks like. Whether it is setAccessTime() or utimes(), everybody is ok with it. It appears that both Raghu and Konstantin are +1 on this one. Please let me know if this is not the case.
The point that is being discussed is whether utimes/setAccessTime allows setting the time to any user-specified value or whether it sets it to current time on namenode. I still vote for allowing an user to set any access time... this is what POSIX does and it allows restore utilities to use a standard API. From Raghu's comments, it appears to me that he is +1 on it too.
>So to me it makes more sense to introduce a new create method with FileStatus of the existing file as a parameter or a
> copy method with an option to replicate FileStatus fields.
I do not like the idea of having a custom API as described above.
Clarifying.
> setAccessTime() or utimes(), everybody is ok with it. It appears that both Raghu and Konstantin are +1 on this one.
-1 on both.
> I still vote for allowing an user to set any access time.
-1.
Seriously, it's like nobody is listening to others. Maybe we need a meeting.
Just to add noise to the fire, I'm +1 to setAccessTime. I also think it is a very good idea to be able to configure it off at the namenode. My case for setAccessTime is that if you expand an archive or do distcp, it is really nice to be able to optionally set all of the times to match the copied files. That includes access time.
> a file create (such as expansion of an archive and a distcp), [...]
During create is not enough even for these use cases. Say distcp copies 10GB file and sets Mod time at create time (to t - 1month), and the last block is written 1 min later.. then the mod time after Distcp will be (t + 1min) rather than (t - 1month). -1 for extra options to create, or close, etc.
Why not just provide utimes().. since we are using POSIX as a tie breaker?
Another Konstantin's point is that FS should not allow setting future time.. which sounds ok.. but it is just a file attribute to help users not something filesystem inherently depends upon. I don't see need to police it that much .. and since POSIX is a tie breaker we could just stick to it functionality. Note that all the use cases we need to be able to set modtime too.
Given that Raghu, Owen and Allen commented that it is better to follow the POSIX semantics of allowing an user to set either access time or modification time to any arbitrary value he/she likes, I change my earlier patch sightly to add the following API:
/**
 * Set access time of a file.
 * @param p The path
 * @param mtime Set the modification time of this file.
 *              The number of milliseconds since Jan 1, 1970.
 *              A value of -1 means that this call should not set modification time.
 * @param atime Set the access time of this file.
 *              The number of milliseconds since Jan 1, 1970.
 *              A value of -1 means that this call should not set access time.
 */
public void setTimes(Path p, long mtime, long atime) throws IOException;
This is precisely similar to the POSIX utimes call, but follows the Hadoop naming pattern for method names. This allows setting access time or modification time or both.
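A hypothetical usage sketch of the new call (fs stands for an existing FileSystem handle; the path and time values are made up):

Path p = new Path("/user/alice/archive/part-00000");
long restoredMtime = 1220000000000L;   // e.g. taken from a tar header
long restoredAtime = 1220000100000L;

// Restore both times:
fs.setTimes(p, restoredMtime, restoredAtime);

// Or touch only the access time and leave the modification time unchanged:
fs.setTimes(p, -1, System.currentTimeMillis());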
Submitting patch for HadoopQA tests.
What are the permissions required for setting arbitrary accessTime? just read permission does not seem enough at least on Linux box.
From my understanding, (and this is what I have implemented in this patch), a read access is required to be able to set access time on files. A write-access is required to be able to set modification time on files.
That is mostly not correct.. may be it needs to be changed later.
Another option would be to allow changing access times and modifications times by the owner of the file and the superuser. But this patch does not do this. This patch "a read access is required to be able to set access time on files. A write-access is required to be able to set modification time on files".
The main use cases are distcp, restore (or untar).
Konstantine raises 2 good points:
- restrict to the create operation. In order to make this work the time has to be applied at the close event, otherwise you run into the situation that Raghu raises about the file taking a long time to write its last block.
- restrict times to be <= the NN's current time. This could run into problems with distcp between two hdfs clusters with clocks out of sync.
While the extended create operation works for our use case, there are few advantages to the utimes() approach:
- handles other use cases we haven't thought of today
- if we provide partial posix compatibility in the future, one could use posix's restore/untar tools
Hence I am in favour of:
FileSystem.utimes(path, modTime, aTime).
+1
Edit : its fine. getBlockLocations calls internal dir.setTimes().
Regd setTimes() implementation : We should have a private setTimes that does not do security checks and audit logging since most common use is internal (as in getBlockLocations()) . Security checks and logging is needed only when user actively invokes setTimes().. btw, should it be setUTimes()? I haven't looked at rest of the patch thoroughly.
Hi Raghu, thanks for reviewing this patch. The current patch does not do any additional security checks or audit logging while setting access times when invoked from getBlockLocations. In the case when FileSystem.setTimes() is called, it checks access privileges and does audit logging. So, it behaves precisely the way you described in your comment. Please let me know if you have any additional comments.
Hadoop QA did not pick up this patch for tests. Resubmitting....
Hadoop QA did not pick up this patch for tests. Resubmitting....
-1 overall. Here are the results of testing the latest attachment
against trunk revision 6922.
-1 core tests. The patch failed core unit tests.
+1 contrib tests. The patch passed contrib unit tests.
This message is automatically generated.
I created HADOOP-4077 to debate the access permissions that are required to invoke the setTimes API.
The failed test is TestKosmosFileSystem and it has been failing for the last 5 builds. This failure is not part of this patch.
The findbugs warnings are not introduced by this patch. I believe that the test-patch process is getting confused while diffing the findbugs output on trunk with the findbugs output from this patch.
Konstantin has reviewed this patch earlier. Please let me know if somebody else wants to review this patch. I would like to get it committed by Friday, Sept 5 so that it can make it into the 0.19 release.
Dhruba, I looked at setTimes() mainly it looks good. Since the rest of the patch hasn't changed, you can commit it.
Thanks Raghu. I will commit this patch.
I just committed this.
Integrated in Hadoop-trunk #595.
I have committed this.
Another approach might be to process the namenode logs (if they're kept) to find which files have been accessed when.
Also, in HDFS, would access times really be that expensive? We have relatively few files and relatively many blocks. So increasing the data structure size of a file shouldn't be that costly. The larger expense might be logging each time a file is opened. How bad would that be? Perhaps we could make it optional?
I'm just brainstorming...
28 February 2013 17:37 [Source: ICIS news]
LONDON (ICIS)--The European polyethylene terephthalate (PET) market is waiting for March contract news from the upstream paraxylene (PX) and monoethylene glycol (MEG) industry, in what is a turbulent time for exchange rates, sources said on Thursday.
The market is suffering from "pure knee-jerk reactions" when it comes to the euro/US dollar exchange rate volatility, according to a PET producer. Asian spot MEG prices collapsed by 6-8% in two days this week.
A second MEG buyer said it was expecting a €15-20/tonne decrease on MEG contracts from February to March, adding that without the ethylene increase, MEG should have dropped by €30-40/tonne.
A €50/tonne increase in the cost of feedstock ethylene is one argument European MEG producers are using to support the idea of a higher price in March.
"I see an increase [on March MEG] based on increased costs, asian prices more or less similar to jan level if you take feb ave and and slightly increasing spot values in Europe," a producer of MEG. It is highly pressured. We will be lucky if raw materials rolled over," a second PET producer said.
A PET customer countered this by saying, "it's not only feedstock prices that are falling, but PET also," referring to import prices.
February PET prices were mostly at €1,310-1,330/tonne FD Europe, sources agreed.
"Nobody says the market is doing well. We are at unsustainable levels," a third PET producer said.
($1 = €0
I have a C++ object that creates a thread to read from a blocking UDP socket:
mRunning.store(true);
while (mRunning.load(boost::memory_order_consume)) {
    ...
    int size = recvfrom(mSocket, buf, kTextBufSize, 0,
                        (struct sockaddr *) &packet->mReplyAddr.mSockAddr,
                        (socklen_t*) &packet->mReplyAddr.mSockAddrLen);
    if (size > 0) {
        //do stuff
    }
}
return 0;
(mRunning is a boost::atomic<bool>)
The object's destructor is called from another thread and does this:
mRunning.store(false);
#ifdef WIN32
if (mSocket != -1) closesocket(mSocket);
#else
if (mSocket != -1) close(mSocket);
#endif
pthread_join(mThread, NULL);
This seems to work, but one of my colleagues suggested that there might be a problem if recv is interrupted in the middle of reading something. Is this thread safe? What's the correct way of closing a blocking UDP socket? (Needs to be cross-platform: OSX/Linux/Windows)
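One commonly suggested alternative (not from the original post, and only a sketch) is to poll the socket with a timeout so the reader thread can observe mRunning itself, instead of relying on another thread's close() to unblock recvfrom(). It reuses the same buffer, packet, and socket members as above:

// Sketch only: the reader loop re-checks mRunning every 100 ms.
while (mRunning.load(boost::memory_order_consume)) {
    fd_set readSet;
    FD_ZERO(&readSet);
    FD_SET(mSocket, &readSet);

    struct timeval tv;
    tv.tv_sec = 0;
    tv.tv_usec = 100 * 1000;   // wake up periodically to re-check mRunning

    int ready = select(mSocket + 1, &readSet, NULL, NULL, &tv);
    if (ready > 0 && FD_ISSET(mSocket, &readSet)) {
        int size = recvfrom(mSocket, buf, kTextBufSize, 0,
                            (struct sockaddr *) &packet->mReplyAddr.mSockAddr,
                            (socklen_t*) &packet->mReplyAddr.mSockAddrLen);
        if (size > 0) {
            // do stuff
        }
    }
    // ready == 0 is just the timeout; loop around and check mRunning again
}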
How to deploy a reference database with an app for Windows Phone 8
[ This article is for Windows Phone 8 developers. If you’re developing for Windows 10, see the latest documentation. ]
Starting with Windows Phone OS 7.1, you can store reference data in a local database and deploy it with your Windows Phone app. After your app is installed on the device, you can leave the reference database in the installation folder for read-only connections, or copy it to the local folder for read-write operations. This topic describes the process of creating a reference database and using it in your app. For more information about using a local database, see Local database for Windows Phone 8.
This topic contains the following sections.
A helper app is required to create the local database that you will deploy with the primary app. The helper app runs on your development computer, creates the local database in the local folder, and loads the database with the desired reference data.
In this section, you create the helper app, use the helper app to create the reference database, and then use Isolated Storage Explorer (ISETool.exe) to extract the local database file and save it to your computer. For more information about Isolated Storage Explorer, see How to use the Isolated Storage Explorer tool for Windows Phone 8.
To create the reference database
Create the helper app that creates the local database and loads it with reference data. For more information, see How to create a basic local database app for Windows Phone 8 and How to create a local database app with MVVM for Windows Phone 8.
Deploy the helper app to the Windows Phone Emulator or a Windows Phone device.
Run the helper app as appropriate to create the local database and load it with reference data. All local databases are created in the local folder.
Get the Product GUID for the app specified in the ProductID attribute of App element of the WMAppManifest.xml file. You will need this when you copy the local database file from the local folder.
While the tethered device or emulator is still running, using Isolated Storage Explorer to copy the local database to your computer. For more information, see How to use the Isolated Storage Explorer tool for Windows Phone 8.
After you save the local database file to your computer, you can add it to your primary app in the same way that you would add other types of existing files.
To add the reference database to your app
With Visual Studio, create a project for the Windows Phone app that consumes the reference database. This app, the primary app, is a different app than the helper app.
From the Project menu of the primary app, select Add Existing Item.
From the Add Existing Item menu, select the local database file that you saved to your computer with Isolated Storage Explorer, then click Add. This will add the local database to the project.
In Solution Explorer, right-click the local database file and set the file properties so that the file is built as Content and always copied to the output directory (Copy always).
When a local database is deployed with an app, it is stored in the installation folder after deployment. The installation folder is read-only. The primary app can connect to it there in a read-only fashion or copy it to the local folder for read-write operations. This section describes these two options in greater detail.
To read from the installation folder
When connecting to a reference database in the installation folder, you must use the File Mode property in the connection string to specify the connection as read-only. The following example demonstrates how to make a read-only connection to the installation folder. For more information about connection strings, see Local database connection strings for Windows Phone 8.
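The code sample itself appears to be missing from this copy; a minimal sketch of such a read-only connection might look like the following, where ReferenceDataContext stands in for your own DataContext subclass:

// Sketch only: ReferenceDataContext is a placeholder for your DataContext class.
// The appdata prefix points the connection at the read-only installation folder.
string connectionString = "Data Source='appdata:/ReferenceDB.sdf';File Mode=read only;";

using (ReferenceDataContext db = new ReferenceDataContext(connectionString))
{
    // Read-only queries work as usual; calling SubmitChanges here would fail
    // because the installation folder cannot be written to.
}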
In this example, the appdata prefix is used in the file path to distinguish a path in the installation folder (appdata) from a path in the local folder (isostore). Without a prefix, the data context applies the path to the local folder.
To copy the reference database to the local folder
To copy the reference database from the installation folder to the local folder, perform a stream-based copy. The following example shows a method named MoveReferenceDatabase that copies a local database file named ReferencedDB.sdf from the root of the installation folder to the root of the local folder.
using System;
using System.IO;
using System.IO.IsolatedStorage;
using System.Windows;

namespace PrimaryApplication
{
    public class DataHelper
    {
        public static void MoveReferenceDatabase()
        {
            // Obtain the virtual store for the application.
            IsolatedStorageFile iso = IsolatedStorageFile.GetUserStoreForApplication();

            // Create a stream for the file in the installation folder.
            using (Stream input = Application.GetResourceStream(
                new Uri("ReferenceDB.sdf", UriKind.Relative)).Stream)
            {
                // Create a stream for the new file in the local folder.
                using (IsolatedStorageFileStream output = iso.CreateFile("ReferenceDB.sdf"))
                {
                    // Initialize the buffer.
                    byte[] readBuffer = new byte[4096];
                    int bytesRead = -1;

                    // Copy the file from the installation folder to the local folder.
                    while ((bytesRead = input.Read(readBuffer, 0, readBuffer.Length)) > 0)
                    {
                        output.Write(readBuffer, 0, bytesRead);
                    }
                }
            }
        }
    }
}
How to disable model timestamps in Laravel 5.3?
Sometimes we need to disable the created_at and updated_at timestamps on a model in Laravel. We can do this simply by using the $timestamps property of the model. It is a small thing, but it is important to understand how to use it.
When you create a new item or user using the model, the created_at and updated_at columns are set to the current time by default, but you can prevent this by setting the $timestamps property to false.
I am creating new records using the create method of the model, as in the example below:
Item::create(['title'=>'ItSolutionStuff.com']);
OK, the above code will simply add a new record to the items table with the current timestamp in created_at and updated_at. But you can prevent this by setting the $timestamps property to false, so your model will look like this:
app/Item.php
<?php
namespace App;
use Illuminate\Database\Eloquent\Model;
class Item extends Model
{
public $fillable = ['title'];
public $timestamps = false;
}
Now you can check: created_at and updated_at will be null.
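For example, a quick check with the model above might look like this:

$item = Item::create(['title' => 'ItSolutionStuff.com']);

// Both columns stay null because $timestamps is false:
dd($item->created_at, $item->updated_at);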
General Discussion
Hello Everyone,
I want to get the NFS shares which are in NetApp. Can someone please tell me the command through which I can find all NFS shares listed under NetApp?
Thanks..
The "volume show -vserver * -junction" CLI command will show you all volumes in a given cluster, and their junction path as it is mounted in the NAS namespace. Is that what you are looking for?
Yes Donny that's what I needed, Thanks a lot !! 🙂
Notes For Contact and Attachments Need To Trickle Up To Parent Accounts
I need to trickle up the internal notes and attachments to the parent account. For example, if Branch A is the parent company of Branch B and we have an employee Mr. X from Branch B, we need to trickle up all of the internal notes and attachments of Mr. X, not only in Branch B but also in Branch A.
You can add a new method on the employee model that returns the internal notes and attachments. If you search for the employee record, you can call this method to get the data from anywhere:

def get_employee_notes_attachments(self):
    # self is the employee record, so return its own fields
    return self.internal_notes, self.attachments

and call it like this:

# replace 'employee.model' with your employee model name, e.g. hr.employee
employee = self.env['employee.model'].search([('id', '=', employee_id)])
notes, attachments = employee.get_employee_notes_attachments()

I don't know what Odoo version you use; this answer would be for v8 or newer.
Article Body: An Arc Core Component
What does this do?
When used in a Fusion Project, this Core Component can be used to render content elements of an ANS Story.
The Article Body currently supports the following ANS elements:
- Text
- Images
- Corrections
- Interstitial Links
- Lists
- Raw HTML
- Quotes
- BlockQuotes (Quotes that have subtype "blockquote" on them)
- Headers
- Oembeds (Facebook, Twitter, Instagram, Youtube)
How do I use it?
import ArticleBody from '@arc-core-components/feature_article-body'

<ArticleBody data={yourArrayOfContentElements} renderElement={yourCustomRenderElementFunction} />
Take a look in the src/mocks file to see a suggested implementation, in article-body.mdx.
How do I add my own classes to the article body elements?
See Default rendering with element classes in src/article-body.mdx.
How do I add my own custom components (such as a Video Player or a Gallery)?
See the Extending default rendering example in src/article-body.mdx.
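As a rough illustration only (verify the exact renderElement contract against the docz examples; MyVideoPlayer is a made-up component), a custom renderer could look something like this:

// Sketch: assumes renderElement receives each ANS content element.
const renderElement = (element, index) => {
  if (element.type === "video") {
    // Handle a type the default renderer does not cover
    return <MyVideoPlayer key={index} data={element} />;
  }
  // Fall through so the Core Component applies its default rendering
  return undefined;
};

<ArticleBody data={contentElements} renderElement={renderElement} />;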
How do I render elements inline (such as ads, related content or a call to action)?
The Core Component will render any type of element that has a type of inline_element. So something like this:

const inlineAd = {
  type: "inline_element",
  element: <div>Any JSX element</div>
};
Will be rendered within the Core Component.
Your job is to place it in the proper place in the content_elements array from within your Component repo.
Please note that you must pass in a completely constructed inline element -- the Core Component will render it as is.
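For instance, splicing the ad into the array before passing it to the component could look like this (the index is arbitrary and the story variable is assumed to hold the ANS Story):

// Insert the inline ad after the third content element of the ANS story.
const contentElements = [
  ...story.content_elements.slice(0, 3),
  inlineAd,
  ...story.content_elements.slice(3)
];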
You can use the renderElement method to change the default implementation of the Core Component, if you need to.
How can I view what's in there quickly?
Run npm i && npm run docz:dev after cloning to see what is within.
Testing & Linting
We are using Jest and XO for testing and linting.
We are using Husky to run a pre-push hook, preventing un-linted code or code that fails tests from making it into the repo.
To test: npm test
To lint: npm run lint - This will also fix any simple linter errors automatically.
To push without testing or linting: git push --no-verify - This can often be helpful if you just need to push a branch for demonstration purposes or for review.
The MultiResolutionImage changes the used image based on the display size to improve performance and memory usage.
A MultiResolutionImage changes automatically with different screen sizes: This concept is called dynamic image switching.
Felgo adds support for dynamic image switching based on the screen size, on top of Qt 5. This allows you to:
This concept is used by iOS and Android too, with the disadvantage that the respective approaches are not cross-platform. For example, on iOS you provide multiple images with increased resolution with an added file prefix like @2x or @4x. Android uses a folder approach, where resources can be moved into the best-matching folder like hdpi or xhdpi.
Felgo adds cross-platform support based on Qt 5 File Selectors, which is similar to the Android folder approach but with the added benefit that it works across all platforms.
To take advantage of dynamic image switching in Felgo, create 3 versions of your images:
- sd: for a Scene size of 480x320. E.g. your button could be 100x50 px.
- hd: for a Scene size of 960x640. E.g. your button would then be 200x100 px.
- hd2: for a Scene size of 1920x1080. E.g. your button would then be 400x200 px.
Then move the hd image version to a +hd subfolder and your hd2 version to a +hd2 subfolder. Felgo will then use the best image version automatically.
Note: If you only create an sd image version and no hd and hd2 versions, the sd version is used also on hd and hd2 screens. It will look blurry though, but this may be useful in quickly testing different images during development and prototyping. To reduce app size, you can decide to not provide hd2 image versions, or only provide hd2 detail of some images. The hd image will not be crisp on hd2 devices, but is still a good compromise between image quality and app size. This allows you to decide based on your requirements if app size or image quality is more important on a per-image basis.
To save your time of manually downscaling hd2 resolution images, we have prepared a shell script doing that for you automatically.
The script has two use cases:
The resulting images, e.g. output.png, are stored in +hd2/output.png, +hd/output.png and output.png, ready to be used with MultiResolutionImage or any of the Felgo multi-resolution sprite components AnimatedSprite or SpriteSequence.
Download the Felgo Sprite Script here.
The zip archive contains shell script for Windows, Mac and Linux and requires the ImageMagick command-line tools to be installed.
You can then create a sprite sheet from your sprite frames starting for example with "bird_" with the following command:
vplayspritecreator "bird_*.png" birdSprite 2048
By default the script creates downscaled +hd and +sd versions of the birdSprite in the correct folder structure. You can avoid this auto-scaling by adding the -noscale parameter:
vplayspritecreator "bird_*.png" birdSprite 2048 -noscale
To simulate different file selectors call vplayApplication.setContentScaleAndFileSelectors(x); in your main.cpp file, with x being the content scale factor you want to test. The +hd file selector is used at a scale >= 2, and the +hd2 selector is used at a scale >= 2.8.
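A minimal sketch of that call in main.cpp, assuming the application object from the standard Felgo/V-Play project template (the class and header names are assumptions and may differ in your template version):

// Sketch only: the surrounding setup comes from the generated project template.
#include <QApplication>
#include <VPApplication>   // assumption: the Felgo/V-Play application header

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);
    VPApplication vplayApplication;

    // Force the +hd file selector (content scale >= 2) to test hd assets:
    vplayApplication.setContentScaleAndFileSelectors(2);

    // ... the remaining template setup (QML engine, main.qml) stays unchanged
    return app.exec();
}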
Suppose you have an image of a button, which should have the size 100x50 px on a screen with 480x320 pixels (this is called sd, single-definition, in Felgo). When you are running your app on a hd screen like the iPhone 4, your normal Image element would be scaled up to twice the size and will look blurry. To avoid that, you can create an image that is 200x100 px and thus exactly matches the display size. Now save the low resolution image as button.png and the hd image as button.png in a +hd subfolder. The same applies for a full-hd, or hd2, image with the size of 400x200: save it as button.png in a +hd2 subfolder.
Note: During development and quick testing, start with placeholder graphics and use a normal Image element, or a MultiResolutionImage together with a call to vplayApplication.setContentScaleAndFileSelectors(1); in your main.cpp file, so you only need to provide a single image instead of three versions with three different sizes.
We recommend creating your images in a high resolution initially. That way you do not lose quality by scaling them up, but instead can just scale them down in your preferred imaging program like Adobe Photoshop or Gimp. For crisp images on all currently available devices, including iPad Retina devices for example, use the hd2 scene size of 1920x1080 as a reference, and 2280x1440 for your BackgroundImage. See the guide How to create mobile games for different screen sizes and resolutions for more information about this topic.
With the MultiResolutionImage, you can now access these 3 images with a single element:
import Felgo 3.0

GameWindow {
  Scene {
    MultiResolutionImage {
      // position based on the logical scene size
      x: 30
      y: 10
      // the logical size (i.e. this width & height) will always be 100 x 50
      // you can also set it explicitly to a logical size which then scales the button
      // based on the used contentScaleFactor, one of these files will be used:
      // button.png, +hd/button.png or +hd2/button.png
      source: "button.png"
    }
  }
}
You can position the image based on your logical scene size, and the correctly sized image is automatically loaded.
asynchronous: See Image::asynchronous.
cache: See Image::cache.
fillMode: See Image::fillMode.
horizontalAlignment: See Image::horizontalAlignment.
mirror: This property holds whether the image should be horizontally mirrored.
paintedHeight: See Image::paintedHeight.
paintedWidth: See Image::paintedWidth.
progress: See Image::progress.
smooth: See Image::smooth.
source: The source of the MultiResolutionImage. Based on the content scale, an image in the +hd or +hd2 subfolders might be used.
sourceSize: See Image::sourceSize.
status: See Image::status.
verticalAlignment: See Image::verticalAlignment.
Full pytest documentation
Table Of Contents
- Installation and Getting Started
- Usage and Invocations
- Calling pytest through
python -m pytest
- Getting help on version, option names, environment variables
- Stopping after the first (or N) failures
- Specifying tests / selecting tests
- Modifying Python traceback printing
- Dropping to PDB (Python Debugger) on failures
- Setting a breakpoint / aka set_trace()
- Profiling test execution duration
- Creating JUnitXML format files
- Creating resultlog format files
- Sending test report to online pastebin service
- Disabling plugins
- Calling pytest from Python code
- The writing and reporting of assertions in tests
- Pytest API and builtin fixtures
- pytest fixtures: explicit, modular, scalable
- Fixtures as Function arguments
- “Funcargs” a prime example of dependency injection
- Sharing a fixture across tests in a module (or class/session)
- Fixture finalization / executing teardown code
- Fixtures can introspect the requesting test context
- Parametrizing fixtures
- Modularity: using fixtures from a fixture function
- Automatic grouping of tests by fixture instances
- Using fixtures from classes, modules or projects
- Autouse fixtures (xUnit setup on steroids)
- Shifting (visibility of) fixture functions
- Overriding fixtures on various levels
- Monkeypatching/mocking modules and environments
- Temporary directories and files
- Capturing of the stdout/stderr output
- Asserting Warnings
- Doctest integration for modules and test files
- Marking test functions with attributes
- Skip and xfail: dealing with tests that cannot succeed
- Marking a test function to be skipped
- Skip all test functions of a class or module
- Mark a test function as expected to fail
- Skip/xfail with parametrize
- Imperative xfail from within a test or setup function
- Skipping on a missing import dependency
- specifying conditions as strings versus booleans
- Summary
- Parametrizing fixtures and test functions
- Cache: working with cross-testrun state
- Support for unittest.TestCase / Integration of fixtures
- Running tests written for nose
- classic xunit-style setup
- Installing and Using plugins
- Writing plugins
- Writing hook functions
- pytest hook reference
- Reference of objects involved in hooks
- Usages and Examples
- Good Integration Practices
- Basic test configuration
- Setting up bash completion
- Backwards Compatibility Policy
- License
- Contribution getting started
- Talks and Tutorials
- Project examples
- Some Issues and Questions
- Contact channels
- Changelog history
- 3.0.7 (2017-03-14)
- 3.0.6 (2017-01-22)
- 3.0.5 (2016-12-05)
- 3.0.4 (2016-11-09)
- 3.0.3 (2016-09-28)
- 3.0.2 (2016-09-01)
- 3.0.1 (2016-08-23)
- 3.0.0 (2016-08-18)
- 2.9.2 (2016-05-31)
- 2.9.1 (2016-03-17)
- 2.9.0 (2016-02-29)
- 2.8.7 (2016-01-24)
- 2.8.6 (2016-01-21)
- 2.8.5 (2015-12-11)
- 2.8.4 (2015-12-06)
- 2.8.3 (2015-11-18)
- 2.8.2 (2015-10-07)
- 2.8.1 (2015-09-29)
- 2.8.0 (2015-09-18)
- 2.7.3 (2015-09-15)
- 2.7.2 (2015-06-23)
- 2.7.1 (2015-05-19)
- 2.7.0 (2015-03-26)
- 2.6.4 (2014-10-24)
- 2.6.3 (2014-09-24)
- 2.6.2 (2014-09-05)
- 2.6.1 (2014-08-07)
- 2.6
- 2.5.2 (2014-01-29)
- 2.5.1 (2013-12-17)
- 2.5.0 (2013-12-12)
- 2.4.2 (2013-10-04)
- 2.4.1 (2013-10-02)
- 2.4
- 2.3.5 (2013-04-30)
- 2.3.4 (2012-11-20)
- 2.3.3 (2012-11-06)
- 2.3.2 (2012-10-25)
- 2.3.1 (2012-10-20)
- 2.3.0 (2012-10-19)
- 2.2.4 (2012-05-22)
- 2.2.3 (2012-02-05)
- 2.2.2 (2012-02-05)
- 2.2.1 (2011-12-16)
- 2.2.0 (2011-11-18)
- 2.1.3 (2011-10-18)
- 2.1.2 (2011-09-24)
- 2.1.1
- 2.1.0 (2011-07-09)
- 2.0.3 (2011-05-11)
- 2.0.2 (2011-03-09)
- 2.0.1 (2011-02-07)
- 2.0.0 (2010-11-25)
- 1.3.4 (2010-09-14)
- 1.3.3 (2010-07-30)
- 1.3.2 (2010-07-08)
- 1.3.1 (2010-05-25)
- 1.3.0 (2010-05-05)
- 1.2.0 (2010-01-18)
- 1.1.1 (2009-11-24)
- 1.1.0 (2009-11-05)
- 1.0.3
- 1.0.2 (2009-08-27)
- 1.0.1 (2009-08-19)
- 1.0.0 (2009-08-04)
- 1.0.0b9 (2009-07-31)
- 1.0.0b8 (2009-07-22)
- 1.0.0b7
- 1.0.0b3 (2009-06-19)
- 1.0.0b1
- 0.9.2
- 0.9.1
Download latest version as PDF
Installation and Getting Started
Pythons: Python 2.6,2.7,3.3,3.4,3.5, Jython, PyPy-2.3
Platforms: Unix/Posix and Windows
PyPI package name: pytest
dependencies: py, colorama (Windows), argparse (py26).
documentation as PDF: download latest
Installation¶
Installation:
pip install -U pytest
To check that you have installed the correct version:
$ pytest --version
This is pytest version 3.0.7, imported from $PYTHON_PREFIX/lib/python3.5/site-packages/pytest.py
Our first test run¶
Let’s create a first test file with a simple test function:
# content of test_sample.py
def func(x):
    return x + 1

def test_answer():
    assert func(3) == 5
That’s it. You can execute the test function now:
$ pytest
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 items
test_sample.py F
======= FAILURES ========
_______ test_answer ________
    def test_answer():
>       assert func(3) == 5
E       assert 4 == 5
E        +  where 4 = func(3)
test_sample.py:5: AssertionError
======= 1 failed in 0.12 seconds ========
We got a failure report because our little
func(3) call did not return
5.
Note
You can simply use the
assert statement for asserting test
expectations. pytest’s Advanced assertion introspection will intelligently
report intermediate values of the assert expression freeing
you from the need to learn the many names of JUnit legacy methods.
Running multiple tests¶
pytest will run all files in the current directory and its subdirectories of the form test_*.py or *_test.py. More generally, it follows standard test discovery rules.
Asserting that a certain exception is raised¶
If you want to assert that some code raises an exception you can
use the
raises helper:
# content of test_sysexit.py
import pytest

def f():
    raise SystemExit(1)

def test_mytest():
    with pytest.raises(SystemExit):
        f()
Running it, this time in “quiet” reporting mode:
$ pytest -q test_sysexit.py
.
1 passed in 0.12 seconds
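If you also want to inspect the raised exception, pytest.raises can be used with as. Here is a small sketch written for this edit (parse_age is a hypothetical helper, not part of the docs):
import pytest

def parse_age(text):   # hypothetical code under test
    value = int(text)
    if value < 0:
        raise ValueError("age must be non-negative, got %d" % value)
    return value

def test_parse_age_rejects_negative():
    with pytest.raises(ValueError) as excinfo:
        parse_age("-3")
    # the ExceptionInfo object gives access to the raised exception
    assert "non-negative" in str(excinfo.value)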
Grouping multiple tests in a class¶
Once you start to have more than a few tests it often makes sense to group tests logically, in classes and modules. Let’s write a class containing two tests:
# content of test_class.py
class TestClass:
    def test_one(self):
        x = "this"
        assert 'h' in x

    def test_two(self):
        x = "hello"
        assert hasattr(x, 'check')
The two tests are found because of the standard Conventions for Python test discovery. There is no need to subclass anything. We can simply run the module by passing its filename:
$ pytest -q test_class.py .F ======= FAILURES ======== _______ TestClass.test_two ________ self = <test_class.TestClass object at 0xdeadbeef> def test_two(self): assert hasattr(x, 'check') E AssertionError: assert False E + where False = hasattr('hello', 'check') test_class.py:8: AssertionError 1 failed, 1 passed in 0.12 seconds
The first test passed, the second failed. Again we can easily see the intermediate values used in the assertion, helping us to understand the reason for the failure.
Going functional: requesting a unique temporary directory¶
For functional tests one often needs to create some files and pass them to application objects. pytest provides builtin fixtures/function arguments which allow you to request arbitrary resources, for example a unique temporary directory:
# content of test_tmpdir.py
def test_needsfiles(tmpdir):
    print (tmpdir)
    assert 0
We list the name
tmpdir in the test function signature and
pytest will lookup and call a fixture factory to create the resource
before performing the test function call. Let’s just run it:
$ pytest -q test_tmpdir.py F ======= FAILURES ======== _______ test_needsfiles ________ tmpdir = local('PYTEST_TMPDIR/test_needsfiles0') def test_needsfiles(tmpdir): print (tmpdir) > assert 0 E assert 0 test_tmpdir.py:3: AssertionError --------------------------- Captured stdout call --------------------------- PYTEST_TMPDIR/test_needsfiles0 1 failed in 0.12 seconds
Before the test runs, a unique-per-test-invocation temporary directory was created. More info at Temporary directories and files.
You can find out what kinds of builtin fixtures exist by typing:
pytest --fixtures # shows builtin and custom fixtures
Where to go next¶
Here are a few suggestions where to go next:
- Calling pytest through python -m pytest for command line invocation examples
- good practices for virtualenv, test layout
- pytest fixtures: explicit, modular, scalable for providing a functional baseline to your tests
- Writing plugins managing and writing plugins
Usage and Invocations¶
Calling pytest through
python -m pytest¶
New in version 2.0.
You can invoke testing through the Python interpreter from the command line:
python -m pytest [...]
This is almost equivalent to invoking the command line script
pytest [...]
directly, except that python will also add the current directory to
sys.path.
Getting help on version, option names, environment variables¶
pytest --version # shows where pytest was imported from pytest --fixtures # show available builtin function arguments pytest -h | --help # show help on command line and config file options
Stopping after the first (or N) failures¶
To stop the testing process after the first (N) failures:
pytest -x # stop after first failure pytest --maxfail=2 # stop after two failures
Specifying tests / selecting tests¶
Several test run options:
pytest test_mod.py # run tests in module pytest somepath # run all tests below somepath pytest -k stringexpr # only run tests with names that match the # "string expression", e.g. "MyClass and not method" # will select TestMyClass.test_something # but not TestMyClass.test_method_simple pytest test_mod.py::test_func # only run tests that match the "node ID", # e.g. "test_mod.py::test_func" will select # only test_func in test_mod.py pytest test_mod.py::TestClass::test_method # run a single method in # a single class
Import ‘pkg’ and use its filesystem location to find and run tests:
pytest --pyargs pkg # run all tests found below directory of pkg
Modifying Python traceback printing¶
Examples for modifying traceback printing:
pytest --showlocals # show local variables in tracebacks pytest -l # show local variables (shortcut) pytest --tb=auto # (default) 'long' tracebacks for the first and last # entry, but 'short' style for the other entries pytest --tb=long # exhaustive, informative traceback formatting pytest --tb=short # shorter traceback format pytest --tb=line # only one line per failure pytest --tb=native # Python standard library formatting pytest --tb=no # no traceback at all
The
--full-trace causes very long traces to be printed on error (longer
than
--tb=long). It also ensures that a stack trace is printed on
KeyboardInterrupt (Ctrl+C).
This is very useful if the tests are taking too long and you interrupt them
with Ctrl+C to find out where the tests are hanging. By default no output
will be shown (because KeyboardInterrupt is caught by pytest). By using this
option you make sure a trace is shown.
Dropping to PDB (Python Debugger) on failures¶
Python comes with a builtin Python debugger called PDB.
pytest
allows one to drop into the PDB prompt via a command line option:
pytest --pdb
This will invoke the Python debugger on every failure. Often you might only want to do this for the first failing test to understand a certain failure situation:
pytest -x --pdb # drop to PDB on first failure, then end test session
On any failure the exception information is also stored on sys.last_value, sys.last_type and sys.last_traceback; you can access it manually,
for example:
>>> import sys >>> sys.last_traceback.tb_lineno 42 >>> sys.last_value AssertionError('assert result == "ok"',)
Setting a breakpoint / aka
set_trace()¶
If you want to set a breakpoint and enter the
pdb.set_trace() you
can use a helper:
import pytest

def test_function():
    ...
    pytest.set_trace()    # invoke PDB debugger and tracing
Prior to pytest version 2.0.0 you could only enter PDB tracing if you disabled
capturing on the command line via
pytest -s. In later versions, pytest
automatically disables its output capture when you enter PDB tracing:
- Output capture in other tests is not affected.
- Any prior test output that has already been captured will be processed as such.
- Any later output produced within the same test will not be captured and will instead get sent directly to
sys.stdout. Note that this holds true even for test output occurring after you exit the interactive PDB tracing session and continue with the regular test run.
Since pytest version 2.4.0 you can also use the native Python
import pdb;pdb.set_trace() call to enter PDB tracing without having to use
the
pytest.set_trace() wrapper or explicitly disable pytest’s output
capturing via
pytest -s.
Profiling test execution duration¶
To get a list of the slowest 10 test durations:
pytest --durations=10
Creating JUnitXML format files¶
To create result files which can be read by Jenkins or other Continuous integration servers, use this invocation:
pytest --junitxml=path
to create an XML file at
path.
record_xml_property¶
New in version 2.8.
If you want to log additional information for a test, you can use the
record_xml_property fixture:
def test_function(record_xml_property):
    record_xml_property("example_key", 1)
    assert 0
This will add an extra property
example_key="1" to the generated
testcase tag:
<testcase classname="test_function" file="test_function.py" line="0" name="test_function" time="0.0009"> <properties> <property name="example_key" value="1" /> </properties> </testcase>
Warning
This is an experimental feature, and its interface might be replaced by something more powerful and general in future versions. The functionality per-se will be kept, however.
Currently it does not work when used with the
pytest-xdist plugin.
Also please note that using this feature will break any schema verification. This might be a problem when used with some CI servers.
LogXML: add_global_property¶
New in version 3.0.
If you want to add a properties node at the testsuite level, which may contain properties
that are relevant to all testcases, you can use
LogXML.add_global_property:
import pytest

@pytest.fixture(scope="session")
def log_global_env_facts(f):
    if pytest.config.pluginmanager.hasplugin('junitxml'):
        my_junit = getattr(pytest.config, '_xml', None)
        my_junit.add_global_property('ARCH', 'PPC')
        my_junit.add_global_property('STORAGE_TYPE', 'CEPH')

@pytest.mark.usefixtures("log_global_env_facts")
def start_and_prepare_env():
    pass

class TestMe:
    def test_foo(self):
        assert True
This will add a property node below the testsuite node to the generated xml:
<testsuite errors="0" failures="0" name="pytest" skips="0" tests="1" time="0.006"> <properties> <property name="ARCH" value="PPC"/> <property name="STORAGE_TYPE" value="CEPH"/> </properties> <testcase classname="test_me.TestMe" file="test_me.py" line="16" name="test_foo" time="0.000243663787842"/> </testsuite>
Warning
This is an experimental feature, and its interface might be replaced by something more powerful and general in future versions. The functionality per-se will be kept.
Creating resultlog format files¶
Deprecated since version 3.0: This option is rarely used and is scheduled for removal in 4.0.
To create plain-text machine-readable result files you can issue:
pytest --resultlog=path
and look at the content at the
path location. Such files are used e.g.
by the PyPy-test web page to show test results over several revisions.
Sending test report to online pastebin service¶
Creating a URL for each test failure:
pytest --pastebin=failed
Currently only pasting to the service is implemented.
Disabling plugins¶
To disable loading specific plugins at invocation time, use the
-p option
together with the prefix
no:.
Example: to disable loading the plugin
doctest, which is responsible for
executing doctest tests from text files, invoke pytest like this:
pytest -p no:doctest
Calling pytest from Python code¶
New in version 2.0.
You can invoke
pytest from Python code directly:
pytest.main()
this acts as if you would call “pytest” from the command line.
It will not raise
SystemExit but return the exitcode instead.
You can pass in options and arguments:
pytest.main(['-x', 'mytestdir'])
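Since pytest.main returns the exit code rather than raising SystemExit, a wrapper script can forward it. The following is a sketch written for this edit; "mytestdir" is the placeholder path from the example above:
# content of runtests.py (hypothetical wrapper)
import sys
import pytest

if __name__ == "__main__":
    exit_code = pytest.main(["-x", "mytestdir"])
    sys.exit(exit_code)   # 0 means all tests passed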
You can specify additional plugins to
pytest.main:
# content of myinvoke.py
import pytest

class MyPlugin:
    def pytest_sessionfinish(self):
        print("*** test run reporting finishing")

pytest.main(["-qq"], plugins=[MyPlugin()])
Running it will show that
MyPlugin was added and its
hook was invoked:
$ python myinvoke.py *** test run reporting finishing
Besides the context-manager form shown above, pytest.raises can also be called with a callable or a code string:
pytest.raises(ExpectedException, func, *args, **kwargs)
pytest.raises(ExpectedException, "func(*args, **kwargs)")
both of which execute the specified function with args and kwargs and
assert that the given ExpectedException is raised.
If you want to test that a regular expression matches on the string
representation of an exception (like the
TestCase.assertRaisesRegexp method
from
unittest) you can use the
ExceptionInfo.match method:
import pytest

def myfunc():
    raise ValueError("Exception 123 raised")

def test_match():
    with pytest.raises(ValueError) as excinfo:
        myfunc()
    excinfo.match(r'.* 123 .*')
The regexp parameter of the
match method is matched with the
re.search
function. So in the above example
excinfo.match(r'123') would have worked as well.
Note
pytest rewrites test modules on import. It does this by using an import
hook to write new pyc files. Most of the time this works transparently.
However, if you are messing with import yourself, the import hook may
interfere. If this is the case, use
--assert=plain. Additionally,
rewriting will fail silently if it cannot write new pycs, e.g. in a read-only filesystem or a zipfile.
Pytest API and builtin fixtures¶
This is a list of
pytest.* API functions and fixtures.
For information on plugin hooks and objects, see Writing plugins.
For information on the
pytest.mark mechanism, see Marking test functions with attributes.
For the below objects, you can also interactively ask for help, e.g. by typing on the Python interactive prompt something like:
import pytest
help(pytest)
Invoking pytest interactively¶
More examples at Calling pytest from Python code
Helpers for assertions about Exceptions/Warnings¶
raises(expected_exception, *args, **kwargs)[source]¶
Assert that a code block/function call raises
expected_exception and raise a failure exception otherwise.
This helper produces a
ExceptionInfo() object (see below).
If using Python 2.5 or above, you may use this function as a context manager:
>>> with raises(ZeroDivisionError):
...     1/0
Changed in version 2.10.
In the context manager form you may use the keyword argument
message to specify a custom failure message:
>>> with raises(ZeroDivisionError, message="Expecting ZeroDivisionError"):
...     pass
Traceback (most recent call last):
...
Failed: Expecting ZeroDivisionError
Note that inside the with block the ExceptionInfo object is not yet populated, so an assertion placed there will never run:
with raises(ValueError) as exc_info:
    if value > 10:
        raise ValueError("value must be <= 10")
    assert str(exc_info.value) == "value must be <= 10"  # this will not execute
Instead, the following approach must be taken (note the difference in scope):
>>> with raises(ValueError) as exc_info:
...     if value > 10:
...         raise ValueError("value must be <= 10")
...
>>> assert str(exc_info.value) == "value must be <= 10"
Or you can specify a callable to be invoked:
>>> raises(ZeroDivisionError, lambda: 1/0)
<ExceptionInfo ...>
A third possibility is to use a string to be executed:
>>> raises(ZeroDivisionError, "f(0)")
<ExceptionInfo ...>
- class
ExceptionInfo(tup=None, exprinfo=None)[source]¶
wraps sys.exc_info() objects and offers help for navigating the traceback.
exconly(tryshort=False)[source]¶
Return the exception as a string.
getrepr(showlocals=False, style='long', abspath=False, tbfilter=True, funcargs=False)[source]¶
Return a str()able representation of this exception info.
showlocals: show locals per traceback entry
style: long|short|no|native traceback style
tbfilter: hide entries where __tracebackhide__ is true
In case of style==native, tbfilter and showlocals are ignored. See the official Python
try statement documentation for more detailed information.
Examples at Assertions about expected exceptions.
deprecated_call(func=None, *args, **kwargs)[source]¶
assert that calling
func(*args, **kwargs) triggers a DeprecationWarning or PendingDeprecationWarning.
Note: we cannot use WarningsRecorder here because it is still subject to the mechanism that prevents warnings of the same type from being triggered twice for the same module. See #1190.
Comparing floating point numbers¶
- class
approx(expected, rel=None, abs=None)[source]¶
Assert that two numbers (or two sets of numbers) are equal to each other within some tolerance. It also works on sequences of numbers:
>>> (0.1 + 0.2, 0.2 + 0.4) == approx((0.3, 0.6))
True
Infinite numbers are another special case: they are only considered equal to themselves, regardless of the relative tolerance.
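As a small usage sketch written for this edit (assuming pytest >= 3.0; the tolerance values shown are assumptions used for illustration):
from pytest import approx

def test_float_sums():
    # scalar comparison with the default relative tolerance
    assert 0.1 + 0.2 == approx(0.3)
    # sequences are compared element-wise
    assert (0.1 + 0.2, 0.2 + 0.4) == approx((0.3, 0.6))
    # an explicit relative tolerance can be passed
    assert 1.0001 == approx(1, rel=1e-3)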
Raising a specific test outcome¶
You can use the following functions in your test, fixture or setup functions to force a certain test outcome. Note that most often you can rather use declarative marks, see Skip and xfail: dealing with tests that cannot succeed.
fail(msg='', pytrace=True)[source]¶
Explicitly fail the currently-executing test with the given message.
skip(msg='')[source]¶
skip an executing test with the given message. Note: it’s usually better to use the pytest.mark.skipif marker to declare a test to be skipped under certain conditions like mismatching platforms or dependencies. See the pytest_skipping plugin for details.
importorskip(modname, minversion=None)[source]¶
Return the imported module if it has at least “minversion” as its __version__ attribute. If no minversion is specified, a skip is only triggered if the module cannot be imported.
Fixtures and requests¶
To mark a fixture function:
fixture(scope='function', params=None, autouse=False, ids=None, name=None)[source]¶
(return a) decorator to mark a fixture factory function.
This decorator can be used (with or without parameters) to define a fixture function. Fixture functions can optionally provide their values to test functions using a
yield statement, instead of
return. In this case, the code block after the
yield statement is executed as teardown code regardless of the test outcome. A fixture function must yield exactly once.
Tutorial at pytest fixtures: explicit, modular, scalable.
The
request object that can be used from fixture functions.
- class
FixtureRequest[source]¶
A request for a fixture from a test or fixture function.
A request object gives access to the requesting test context and has an optional
param attribute in case the fixture is parametrized indirectly.
addfinalizer(finalizer)[source]¶
add finalizer/teardown function to be called after the last test within the requesting test context finished execution.
applymarker(marker)[source]¶
Apply a marker to a single test function invocation. This method is useful if you don’t want to have a keyword/marker on all function invocations.
cached_setup(setup, teardown=None, scope='module', extrakey=None)[source]¶
(deprecated) Return a testing resource managed by setup & teardown calls. scope and extrakey determine when the teardown function will be called so that subsequent calls to setup would recreate the resource. With pytest-2.3 you often do not need cached_setup() as you can directly declare a scope on a fixture function and register a finalizer through request.addfinalizer().
getfixturevalue(argname)[source]¶
Dynamically run a named fixture function.
Declaring fixtures via function argument is recommended where possible. But if you can only decide whether to use another fixture at test setup time, you may use this function to retrieve it inside a fixture or test function body.
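A minimal sketch of that pattern, written for this edit (the db_backend option and the sqlite_db/postgres_db fixtures are hypothetical and would need to be defined elsewhere):
import pytest

@pytest.fixture
def db(request):
    # decide only at setup time which concrete database fixture to pull in;
    # "db_backend" is assumed to be registered via pytest_addoption elsewhere
    if request.config.getoption("db_backend", default="sqlite") == "postgres":
        return request.getfixturevalue("postgres_db")
    return request.getfixturevalue("sqlite_db")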
Builtin fixtures/function arguments¶
You can ask for available builtin or project-custom fixtures by typing:
$ pytest -q --fixtures
capsys
    Enable capturing of writes to sys.stdout/sys.stderr and make captured output available via ``capsys.readouterr()`` method calls which return a ``(out, err)`` tuple.
capfd
    Enable capturing of writes to file descriptors 1 and 2 and make captured output available via ``capfd.readouterr()`` method calls which return a ``(out, err)`` tuple.
doctest_namespace
    Inject names into the doctest namespace.
pytestconfig
    the pytest config object with access to command line opts.
record_xml_property
    Add extra xml properties to the tag for the calling test. The fixture is callable with ``(name, value)``, with value being automatically xml-encoded.
recwarn
    Return a WarningsRecorder instance that provides these methods:
    * ``pop(category=None)``: return last warning matching the category.
    * ``clear()``: clear list of warnings
tmpdir_factory
    Return a TempdirFactory instance for the test session.
tmpdir
    Return a temporary directory path object which is unique to each test function invocation, created as a sub directory of the base temporary directory. The returned object is a `py.path.local`_ path object.
no tests ran in 0.12 seconds
# content of ./test_smtpsimple.py
import pytest

@pytest.fixture
def smtp():
    import smtplib
    return smtplib.SMTP("smtp.gmail.com")

def test_ehlo(smtp):
    response, msg = smtp.ehlo()
    assert response == 250
    assert 0 # for demo purposes
Here, the
test_ehlo needs the
smtp fixture value. pytest
will discover and call the
@pytest.fixture
marked
smtp fixture function. Running the test looks like this:
$ pytest test_smtpsimple.py ======= test session starts ======== platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0 rootdir: $REGENDOC_TMPDIR, inifile: collected 1 items test_smtpsimple.py F ======= FAILURES ======== _______ test_ehlo ________ smtp = <smtplib.SMTP object at 0xdeadbeef> def test_ehlo(smtp): response, msg = smtp.ehlo() assert response == 250 > assert 0 # for demo purposes E assert 0 test_smtpsimple.py:11: AssertionError ======= 1 failed in 0.12 seconds ========
In the failure traceback we see that the test function was called with a
smtp argument, the smtplib.SMTP() instance created by the fixture function. The test function fails on our deliberate assert 0. Here is the exact protocol used by pytest to call the test function this way:
- pytest finds the test_ehlo because of the test_ prefix. The test function needs a function argument named smtp. A matching fixture function is discovered by looking for a fixture-marked function named smtp.
- smtp() is called to create an instance.
- test_ehlo(<SMTP instance>) is called and fails on the last line of the test function.
In versions prior to 2.3 there was no
@pytest.fixture marker
and you had to use a magic
pytest_funcarg__NAME prefix
for the fixture factory. This remains and will remain supported
but is no longer advertised as the primary means of declaring fixture
functions.
“Funcargs” a prime example of dependency injection¶
When injecting fixtures to test functions, pytest-2.0 introduced the term “funcargs” or “funcarg mechanism” which continues to be present also in docs today. It now refers to the specific case of injecting fixture values as arguments to test functions. With pytest-2.3 there are more possibilities to use fixtures but “funcargs” remain as the main way as they allow to directly state the dependencies of a test function.
As the following examples show in more detail, funcargs allow test functions to easily receive and work against specific pre-initialized application objects without having to care about import/setup/cleanup details. pytest also supports executing fixture-specific teardown code: by using a yield statement instead of return inside the fixture, all the code after the yield serves as teardown code:
# content of conftest.py
import smtplib
import pytest

@pytest.fixture(scope="module")
def smtp(request):
    smtp = smtplib.SMTP("smtp.gmail.com")
    yield smtp  # provide the fixture value
    print("teardown smtp")
    smtp.close()
Note that we can also seamlessly use the yield syntax with with statements:
# content of test_yield2.py
import smtplib
import pytest

@pytest.fixture(scope="module")
def smtp(request):
    with smtplib.SMTP("smtp.gmail.com") as smtp:
        yield smtp  # provide the fixture value
The
smtp connection will be closed after the test finished execution
because the
smtp object automatically closes when
the
with statement ends.
Note
As a historical note, another way to write teardown code is
to accept a
request object in your fixture function and call its
request.addfinalizer method one or multiple times:
# content of conftest.py
import smtplib
import pytest

@pytest.fixture(scope="module")
def smtp(request):
    smtp = smtplib.SMTP("smtp.gmail.com")
    def fin():
        print ("teardown smtp")
        smtp.close()
    request.addfinalizer(fin)
    return smtp  # provide the fixture value
The
fin function will execute when the last test in the module has finished execution.
This method is still fully supported, but
yield is recommended from 2.10 onward because
it is considered simpler and better describes the natural code flow.
Fixtures can introspect the requesting test context¶
Fixture functions can accept the
request object
to introspect the “requesting” test function, class or module context.
Further extending the previous
smtp fixture example, let’s
read an optional server URL from the test module which uses our fixture:
# content of conftest.py
import pytest
import smtplib

@pytest.fixture(scope="module")
def smtp(request):
    server = getattr(request.module, "smtpserver", "smtp.gmail.com")
    smtp = smtplib.SMTP(server)
    yield smtp
    print ("finalizing %s (%s)" % (smtp, server))
    smtp.close()
We use the request.module attribute to optionally obtain an smtpserver attribute from the test module. Let's quickly create another test module that actually sets the server URL in its module namespace:
# content of test_anothersmtp.py
smtpserver = "mail.python.org"  # will be read by smtp fixture

def test_showhelo(smtp):
    assert 0, smtp.helo()
Running it:
$ pytest -qq --tb=short test_anothersmtp.py F ======= FAILURES ======== _______ test_showhelo ________ test_anothersmtp.py:5: in test_showhelo assert 0, smtp.helo() E AssertionError: (250, b'mail.python.org') E assert 0 ------------------------- Captured stdout teardown ------------------------- finalizing <smtplib.SMTP object at 0xdeadbeef> (mail.python.org)
voila! The
smtp fixture function picked up our mail server name
from the module namespace.
Parametrizing fixtures¶
Fixture functions can be parametrized, in which case they will be called multiple times, each time executing the set of dependent tests, i.e. the tests that depend on this fixture. Test functions usually do not need to be aware of their re-running. Fixture parametrization helps to write exhaustive functional tests for components which themselves can be configured in multiple ways.
Extending the previous example, we can flag the fixture to create two
smtp fixture instances, which will cause all tests using the fixture to run twice. The fixture function gets access to each parameter through the special request object:
# content of conftest.py
import pytest
import smtplib

@pytest.fixture(scope="module",
                params=["smtp.gmail.com", "mail.python.org"])
def smtp(request):
    smtp = smtplib.SMTP(request.param)
    yield smtp
    print ("finalizing %s" % smtp)
    smtp.close()
Running the tests now produces four failures: test_ehlo and test_noop each run once against smtp.gmail.com and once against mail.python.org, with test IDs such as test_ehlo[smtp.gmail.com] and test_ehlo[mail.python.org]. The runs against mail.python.org additionally fail the assert b"smtp.gmail.com" in msg check.
Modularity: using fixtures from a fixture function¶
In addition to using fixtures in test functions, fixture functions can use other fixtures themselves. Extending the previous example, we can instantiate an App object and stick the already defined smtp resource into it:
# content of test_appsetup.py
import pytest

class App:
    def __init__(self, smtp):
        self.smtp = smtp

@pytest.fixture(scope="module")
def app(smtp):
    return App(smtp)

def test_smtp_exists(app):
    assert app.smtp
Here we declare an
app fixture which receives the previously defined
smtp fixture and instantiates an
App object with it. Let’s run it:
$ pytest -v test_appsetup.py ======= test session starts ======== platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0 -- $PYTHON_PREFIX/bin/python3.5 cachedir: .cache rootdir: $REGENDOC_TMPDIR, inifile: collecting ... collected 2 items test_appsetup.py::test_smtp_exists[smtp.gmail.com] PASSED test_appsetup.py::test_smtp_exists[mail.python.org] PASSED ======= 2 passed in 0.12 seconds ========
Due to the parametrization of
smtp the test will run twice with two
different
App instances and respective smtp servers. There is no
need for the
app fixture to be aware of the
smtp parametrization
as pytest will fully analyse the fixture dependency graph.
Note that the
app fixture has a scope of
module and uses a
module-scoped
smtp fixture. The example would still work if
the smtp fixture was cached on a session scope: it is fine for fixtures to use “broader” scoped fixtures but not the other way round; a session-scoped fixture could not use a module-scoped one in a meaningful way.
You can specify multiple fixtures like this:
@pytest.mark.usefixtures("cleandir", "anotherfixture").
Lastly you can put fixtures required by all tests in your project into an ini-file:
# content of pytest.ini
[pytest]
usefixtures = cleandir
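For reference, a fixture with that name might look like the following sketch, written for this edit (the original docs define cleandir differently, so treat this only as an illustration of the idea):
import pytest

@pytest.fixture
def cleandir(tmpdir):
    # run the test with a fresh temporary directory as the current working directory
    with tmpdir.as_cwd():
        yield tmpdir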
Autouse fixtures (xUnit setup on steroids)¶
Occasionally, you may want to have fixtures get invoked automatically without a usefixtures or funcargs reference. As a practical example, suppose we have a database fixture with a begin/rollback architecture and we want to automatically surround every test method with a transaction and a rollback:
# content of test_db_transact.py
import pytest

class DB:
    def __init__(self):
        self.intransaction = []
    def begin(self, name):
        self.intransaction.append(name)
    def rollback(self):
        self.intransaction.pop()

@pytest.fixture(scope="module")
def db():
    return DB()

class TestClass:
    @pytest.fixture(autouse=True)
    def transact(self, request, db):
        db.begin(request.function.__name__)
        yield
        db.rollback()
If you want to make such a transact fixture available in your project without having it generally active, you can put its definition into a conftest.py file without using autouse=True,
and then e.g. have a TestClass using it by declaring the need:
@pytest.mark.usefixtures("transact") class TestClass: def test_method1(self): ...
All test methods in this TestClass will use the transaction fixture while
other test classes or functions in the module will not use it unless
they also add a
transact reference.
Shifting (visibility of) fixture functions¶
If during implementing your tests you realize that you
want to use a fixture function from multiple test files you can move it
to a conftest.py file or even separately installable
plugins without changing test code. The discovery of
fixture functions starts at test classes, then test modules, then
conftest.py files and finally builtin and third party plugins.
Monkeypatching/mocking modules and environments¶
Sometimes tests need to invoke functionality which depends on global settings or which invokes code which cannot easily be tested, such as network access. The monkeypatch fixture helps you to safely set/delete an attribute, dictionary item or environment variable, or to modify sys.path for importing.
Method reference of the monkeypatch fixture¶
- class
MonkeyPatch[source]¶
Object returned by the
monkeypatchfixture keeping a record of setattr/item/env/syspath changes.
setattr(target, name, value=<notset>, raising=True)[source]¶
Set attribute value on target, memorizing the old value. By default raise AttributeError if the attribute did not exist.
For convenience you can specify a string as
targetwhich will be interpreted as a dotted import path, with the last part being the attribute name. Example:
monkeypatch.setattr("os.getcwd", lambda x: "/")would set the
getcwdfunction of the
osmodule.
The
raising value determines if the setattr should fail if the attribute is not already present (defaults to True which means it will raise).
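As a small illustrative sketch written for this edit (not taken from the docs; getssh is a hypothetical function under test):
import os

def getssh():   # hypothetical code under test
    return os.path.join(os.getcwd(), ".ssh")

def test_getssh(monkeypatch):
    # replace os.getcwd for the duration of this test; the change is undone automatically afterwards
    monkeypatch.setattr(os, "getcwd", lambda: "/abc")
    assert getssh() == "/abc/.ssh"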
delattr(target, name=<notset>, raising=True)[source]¶
Delete attribute
name from
target; by default raise AttributeError if the attribute did not previously exist.
If no
nameis specified and
targetis a string it will be interpreted as a dotted import path with the last part being the attribute name.
If
raisingis set to False, no exception will be raised if the attribute is missing.
delitem(dic, name, raising=True)[source]¶
Delete name from dict. Raise KeyError if it doesn't exist.
If
raisingis set to False, no exception will be raised if the key is missing.
setenv(name, value, prepend=None)[source]¶
Set environment variable name to value. If prepend is a character, read the current environment variable value and prepend the value adjoined with the prepend character.
delenv(name, raising=True)[source]¶
Delete name from the environment. Raise KeyError if it does not exist.
If
raisingis set to False, no exception will be raised if the environment variable is missing.
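A brief usage sketch written for this edit (the APP_MODE variable is hypothetical):
import os

def test_env_handling(monkeypatch):
    monkeypatch.setenv("APP_MODE", "testing")
    monkeypatch.delenv("HOME", raising=False)            # ignore if HOME is not set
    monkeypatch.setenv("PATH", "/opt/fake/bin", prepend=os.pathsep)
    assert os.environ["APP_MODE"] == "testing"
    assert "HOME" not in os.environ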
chdir(path)[source]¶
Change the current working directory to the specified path. Path can be a string or a py.path.local object.
undo()[source]¶
Undo previous changes. This call consumes the undo stack; calling it a second time has no effect unless you do more monkeypatching after the undo call.
monkeypatch.setattr/delattr/delitem/delenv() all
by default raise an Exception if the target does not exist.
Pass
raising=False if you want to skip this check.
Temporary directories and files¶
The ‘tmpdir’ fixture¶
You can use the
tmpdir fixture which will
provide a temporary directory unique to the test invocation,
created in the base temporary directory.
tmpdir is a py.path.local object which offers
os.path methods
and more. Here is an example test usage:
# content of test_tmpdir.py
import os

def test_create_file(tmpdir):
    p = tmpdir.mkdir("sub").join("hello.txt")
    p.write("content")
    assert p.read() == "content"
    assert len(tmpdir.listdir()) == 1
    assert 0
Running this would result in a passed test except for the last assert 0 line which we use to look at values:
$ pytest test_tmpdir.py
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 items
test_tmpdir.py F
======= FAILURES ========
_______ test_create_file ________
tmpdir = local('PYTEST_TMPDIR/test_create_file0')
    def test_create_file(tmpdir):
        p = tmpdir.mkdir("sub").join("hello.txt")
        p.write("content")
        assert p.read() == "content"
        assert len(tmpdir.listdir()) == 1
>       assert 0
E       assert 0
test_tmpdir.py:7: AssertionError
======= 1 failed in 0.12 seconds ========
The ‘tmpdir_factory’ fixture¶
New in version 2.8.
The
tmpdir_factory is a session-scoped fixture which can be used
to create arbitrary temporary directories from any other fixture or test.
For example, suppose your test suite needs a large image on disk, which is
generated procedurally. Instead of computing the same image for each test
that uses it into its own
tmpdir, you can generate it once per-session
to save time:
# contents of conftest.py import pytest @pytest.fixture(scope='session') def image_file(tmpdir_factory): img = compute_expensive_image() fn = tmpdir_factory.mktemp('data').join('img.png') img.save(str(fn)) return fn # contents of test_image.py def test_histogram(image_file): img = load_image(image_file) # compute and test histogram
tmpdir_factory instances have the following methods:
TempdirFactory.
mktemp(basename, numbered=True)[source]¶
Create a subdirectory of the base temporary directory and return it. If
numbered, ensure the directory is unique by adding a number prefix greater than any existing one.
The default base temporary directory¶
Temporary directories are by default created as sub-directories of
the system temporary directory. The base name will be
pytest-NUM where
NUM will be incremented with each test run. Moreover, entries older
than 3 temporary directories will be removed.
You can override the default temporary directory setting like this:
pytest --basetemp=mydir
When distributing tests on the local machine,
pytest takes care to
configure a basetemp directory for the sub processes such that all temporary
data lands below a single per-test run basetemp directory.
Capturing of the stdout/stderr output¶
During test execution any output sent to stdout and stderr is captured. If a test or a setup method fails, its captured output will usually be shown along with the failure traceback. For example, a failing test with a print in its setup shows the captured output below the traceback:
$ pytest
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile:
collected 2 items
test_module.py .F
======= FAILURES ========
_______ test_func2 ________
    def test_func2():
>       assert False
E       assert False
test_module.py:9: AssertionError
-------------------------- Captured stdout setup ---------------------------
setting up <function test_func2 at 0xdeadbeef>
======= 1 failed, 1 passed in 0.12 seconds ========
Accessing captured output from a test function¶
The
capsys and
capfd fixtures allow to access stdout/stderr
output created during test execution. Here is an example test function
that performs some output related checks:
def test_myoutput(capsys): # or use "capfd" for fd-level
    print ("hello")
    sys.stderr.write("world\n")
    out, err = capsys.readouterr()
    assert out == "hello\n"
    assert err == "world\n"
    print ("next")
    out, err = capsys.readouterr()
    assert out == "next\n"
If you want to capture at the file descriptor level you can use the capfd function argument, which offers the exact
same interface but allows to also capture output from
libraries or subprocesses that directly write to operating
system level output streams (FD1 and FD2).
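A short sketch of fd-level capturing, written for this edit:
import os
import sys

def test_fd_output(capfd):
    os.write(1, b"to fd1\n")          # bypasses sys.stdout but is still captured by capfd
    sys.stderr.write("to stderr\n")
    out, err = capfd.readouterr()
    assert out == "to fd1\n"
    assert err == "to stderr\n"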
New in version 3.0.
Asserting Warnings¶
Asserting warnings with the warns function¶
New in version 2.8.
You can check that code raises a particular warning using
pytest.warns,
which works in a similar manner to raises:
import warnings import pytest def test_warning(): with pytest.warns(UserWarning): warnings.warn("my warning", UserWarning)
The test will fail if the warning in question is not raised.
You can also call pytest.warns on a function or code string:
pytest.warns(expected_warning, func, *args, **kwargs)
pytest.warns(expected_warning, "func(*args, **kwargs)")
Note
DeprecationWarning and
PendingDeprecationWarning are treated
differently; see Ensuring a function triggers a deprecation warning.
Recording warnings¶
You can record raised warnings either using
pytest.warns or with
the
recwarn fixture.
To record with pytest.warns without asserting anything about the warnings, pass None as the expected warning type. Both recwarn and pytest.warns return the same interface for recorded warnings: a WarningsRecorder instance. To view the recorded warnings, you can iterate over this instance, call len on it to get the number of recorded warnings, or index into it to get a particular recorded warning. It also
provides these methods:
- class
WarningsRecorder[source]¶
A context manager to record raised warnings.
Adapted from warnings.catch_warnings.
pop(cls=<type 'exceptions.Warning'>)[source]¶
Pop the first recorded warning, raise exception if not exists.
Each recorded warning has the attributes message,
category,
filename,
lineno,
file, and
line. The
category is the
class of the warning. The
message is the warning itself; calling
str(message) will return the actual message of the warning.
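For example, a recording test might look like this sketch, which is based on the behaviour described above:
import warnings
import pytest

def test_record_warnings():
    with pytest.warns(None) as record:   # record everything, assert nothing up front
        warnings.warn("runtime", RuntimeWarning)
        warnings.warn("user", UserWarning)
    assert len(record) == 2
    assert str(record[0].message) == "runtime"
    assert str(record[1].message) == "user"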
Note
DeprecationWarning and
PendingDeprecationWarning are treated
differently; see Ensuring a function triggers a deprecation warning.
Ensuring a function triggers a deprecation warning¶
You can also call a global helper for checking
that a certain function call triggers a
DeprecationWarning or
PendingDeprecationWarning:
import pytest

def test_global():
    pytest.deprecated_call(myfunction, 17)
By default,
DeprecationWarning and
PendingDeprecationWarning will not be
caught when using
pytest.warns or
recwarn because default Python warnings filters hide
them. If you wish to record them in your own code, use the
command
warnings.simplefilter('always'):
import warnings
import pytest

def test_deprecation(recwarn):
    warnings.simplefilter('always')
    warnings.warn("deprecated", DeprecationWarning)
    assert len(recwarn) == 1
    assert recwarn.pop(DeprecationWarning)
You can also use it as a contextmanager:
def test_global(): with pytest.deprecated_call(): myobject.deprecated_method()
Doctest integration for modules and test files¶
By default all files matching the
test*.txt pattern will
be run through the python standard
doctest module. You
can change the pattern by issuing:
pytest --doctest-glob='*.rst'
on the command line. Since version
2.9,
--doctest-glob
can be given multiple times in the command-line.
You can also trigger running of doctests from docstrings in all python modules (including regular python test modules):
pytest --doctest-modules
You can make these changes permanent in your project by putting them into a pytest.ini file like this:
# content of pytest.ini [pytest] addopts = --doctest-modules
If you then have a text file like this:
# content of example.rst
hello this is a doctest
>>> x = 3
>>> x
3
and another like this:
# content of mymodule.py
def something():
    """ a doctest in a docstring
    >>> something()
    42
    """
    return 42
then you can just invoke
pytest without command line options:
$ pytest ======= test session starts ======== platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0 rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini collected 1 items mymodule.py . ======= 1 passed in 0.12 seconds ========
It is possible to use fixtures using the
getfixture helper:
# content of example.rst >>> tmp = getfixture('tmpdir') >>> ... >>>
Also, Using fixtures from classes, modules or projects and Autouse fixtures (xUnit setup on steroids) fixtures are supported when executing text doctest files.
The standard
doctest module provides some setting flags to configure the
strictness of doctest tests. In pytest, you can enable those flags
using the configuration file. To make pytest ignore trailing whitespace and
ignore lengthy exception stack traces you can just write:
[pytest] doctest_optionflags= NORMALIZE_WHITESPACE IGNORE_EXCEPTION_DETAIL
pytest also introduces new options to allow doctests to run in Python 2 and Python 3 unchanged:
ALLOW_UNICODE: when enabled, the
uprefix is stripped from unicode strings in expected doctest output.
ALLOW_BYTES: when enabled, the
bprefix is stripped from byte strings in expected doctest output.
As with any other option flag, these flags can be enabled in
pytest.ini using
the
doctest_optionflags ini option:
[pytest] doctest_optionflags = ALLOW_UNICODE ALLOW_BYTES
Alternatively, it can be enabled by an inline comment in the doc test itself:
# content of example.rst >>> get_unicode_greeting() # doctest: +ALLOW_UNICODE 'Hello'
The ‘doctest_namespace’ fixture¶
New in version 3.0.
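The body of this section did not survive extraction. As a sketch of the typical usage (modelled on the pytest 3.0 docs; the numpy import is just an illustrative choice), a conftest.py can inject names that become available inside doctests:
# content of conftest.py
import numpy
import pytest

@pytest.fixture(autouse=True)
def add_np(doctest_namespace):
    doctest_namespace["np"] = numpy   # "np" can now be used directly in doctests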
Output format¶
New in version 3.0.
Marking test functions with attributes¶
By using the
pytest.mark helper you can easily set
metadata on your test functions. There are
some builtin markers, for example:
- skip - always skip a test function
- skipif - skip a test function if a certain condition is met
- xfail - produce an “expected failure” outcome if a certain condition is met
- parametrize - perform multiple calls to the same test function
It's easy to create custom markers or to apply markers to whole test classes or modules.
Skip and xfail: dealing with tests that cannot succeed¶
If you have test functions that cannot be run on certain platforms or that you expect to fail, you can mark them accordingly or call helper functions during execution of setup or test functions. pytest counts and lists skipped and xfailed tests separately. Detailed information about them is not shown by default to avoid cluttering the output, but you can use the -r option to see the details:
pytest -rxs # show extra info on skips and xfails
(See How to change command line options defaults)
Marking a test function to be skipped¶
New in version 2.9.
The simplest way to skip a test function is to mark it with the
skip decorator
which may be passed an optional
reason:
@pytest.mark.skip(reason="no way of currently testing this")
def test_the_unknown():
    ...
skipif¶
New in version 2.0, 2.4.
If you wish to skip something conditionally then you can use
skipif instead. Here is an example of marking a test function to be skipped when run on an interpreter earlier than Python 3.3:
import sys
@pytest.mark.skipif(sys.version_info < (3,3),
                    reason="requires python3.3")
def test_function():
    ...
Skip all test functions of a class or module¶
You can use the
skipif marker (as with any other marker) on classes: if the condition is true, it will produce a skip result for each of the test methods of that class. If you want to skip all test functions of a module, you may set the module-level pytestmark variable to a skip or skipif marker.
Mark a test function as expected to fail¶
You can use the
xfail marker to indicate that you
expect a test to fail:
@pytest.mark.xfail def test_function(): ...
This test will be run but no traceback will be reported
when it fails. Instead terminal reporting will list it in the
“expected to fail” (
XFAIL) or “unexpectedly passing” (
XPASS) sections.
strict parameter¶
New in version 2.9.
Both
XFAIL and
XPASS don’t fail the test suite, unless the
strict keyword-only
parameter is passed as True; in that case an XPASS (“unexpectedly passing”) result from this test will fail the test suite.
reason parameter¶
As with skipif you can also mark your expectation of a failure on a particular platform:
@pytest.mark.xfail(sys.version_info >= (3,3), reason="python3.3 api changes") def test_function(): ...
raises parameter¶
If you want to be more specific as to why the test is failing, you can specify
a single exception, or a list of exceptions, in the raises argument; the test will then be reported as a regular failure if it fails with an exception not mentioned there. If a test should be marked as xfail but not even be executed, use run=False; this is especially useful for marking crashing tests for later inspection.
Ignoring xfail marks¶
By specifying on the commandline:
pytest --runxfail
you can force the running and reporting of an
xfail marked test
as if it weren’t marked at all.
Examples¶
Here is a simple test file with the several usages:
import pytest
xfail = pytest.mark.xfail

@xfail
def test_hello():
    assert 0
Running it with the report-on-xfail option gives this output:
example $ pytest -rx xfail_demo.py ======= test session starts ======== platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0 rootdir: $REGENDOC_TMPDIR/example, inif seconds ========
Skip/xfail with parametrize¶
Imperative xfail from within a test or setup function¶
If you cannot declare xfail or skipif conditions at import time, you can also imperatively produce a corresponding outcome at test run time:
def test_function():
    if not valid_config():
        pytest.xfail("failing configuration (but should work)")
Note that calling
pytest.skip at the module level
is not allowed since pytest 3.0. If you are upgrading
and
pytest.skip was being used at the module level, you can set a
pytestmark variable:
# before pytest 3.0 pytest.skip('skipping all tests because of reasons') # after pytest 3.0 pytestmark = pytest.mark.skip('skipping all tests because of reasons')
pytestmark applies a mark or list of marks to all tests in a module.
Skipping on a missing import dependency¶
You can use the following helper at module level or within a test or test setup function:
docutils = pytest.importorskip("docutils")
If docutils cannot be imported here, this will lead to a skip outcome of the test. You can also skip based on the version number of a library:
docutils = pytest.importorskip("docutils", minversion="0.3")
The version will be read from the specified module's __version__ attribute.
specifying conditions as strings versus booleans¶
Prior to pytest-2.4 the only way to specify skipif/xfail conditions was to use strings:
import sys @pytest.mark.skipif("sys.version_info >= (3,3)") def test_function(): ...
During test function setup the skipif condition is evaluated by calling
eval('sys.version_info >= (3,0)', namespace). The namespace contains
all the module globals, and
os and
sys as a minimum.
Since pytest-2.4, condition booleans are considered preferable because markers can then be freely imported between test modules. With strings you need to import not only the marker but all variables used by the marker, which violates encapsulation.
The reason for specifying the condition as a string was that
pytest can
report a summary of skip conditions based purely on the condition string.
With conditions as booleans you are required to specify a
reason string.
Note that string conditions will remain fully supported and you are free to use them if you have no need for cross-importing markers.
The evaluation of a condition string in
pytest.mark.skipif(conditionstring)
or
pytest.mark.xfail(conditionstring) takes place in a namespace
dictionary which is constructed as follows:
- the namespace is initialized by putting the
sysand
osmodules and the pytest
configobject into it.
- updated with the module globals of the test function for which the expression is applied.
The pytest
config object allows you to skip based on a test
configuration value which you might have added:
@pytest.mark.skipif("not config.getvalue('db')") def test_function(...): ...
The equivalent with “boolean conditions” is:
@pytest.mark.skipif(not pytest.config.getvalue("db"), reason="--db was not specified") def test_function(...): pass
Note
You cannot use
pytest.config.getvalue() in code
imported before pytest’s argument parsing takes place. For example,
conftest.py files are imported before command line parsing and thus
config.getvalue() will not execute correctly.
- Skip tests on a platform-specific basis, e.g.:
@pytest.mark.skipif(sys.platform == 'win32', reason='tests for linux only')
- Skip all tests in a module if some import is missing:
pexpect = pytest.importorskip('pexpect')
Parametrizing fixtures and test functions¶
pytest supports test parametrization in several well-integrated ways:
- pytest.fixture() allows one to define parametrization at the level of fixture functions.
- @pytest.mark.parametrize allows to define parametrization at the function or class level, provides multiple argument/fixture sets for a particular test function or class.
- pytest_generate_tests enables implementing your own custom dynamic parametrization scheme or extensions.
@pytest.mark.parametrize: parametrizing test functions¶
New in version 2.2.
Changed in version 2.4: Several improvements.
The builtin
pytest.mark.parametrize decorator enables
parametrization of arguments for a test function. Here is a typical example
of a test function that implements checking that a certain input leads
to an expected output:
# content of test_expectation.py
import pytest

@pytest.mark.parametrize("test_input,expected", [
    ("3+5", 8),
    ("2+4", 6),
    ("6*9", 42),
])
def test_eval(test_input, expected):
    assert eval(test_input) == expected
Here, the
@parametrize decorator defines three different
(test_input,expected)
tuples so that the
test_eval function will run three times using
them in turn:
$ pytest ======= test session starts ======== platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0 rootdir: $REGENDOC_TMPDIR, inifile: collected 3 items test_expectation.py ..F ======= FAILURES ======== _______ test_eval[6*9-42] ________ test_input = '6*9', expected = 42 @pytest.mark.parametrize("test_input,expected", [ ("3+5", 8), ("2+4", 6), ("6*9", 42), ]) def test_eval(test_input, expected): > assert eval(test_input) == expected E AssertionError: assert 54 == 42 E + where 54 = eval('6*9') test_expectation.py:8: AssertionError ======= 1 failed, 2 passed in 0.12 seconds ========
As designed in this example, only one pair of input/output values fails
the simple test function. And as usual with test function arguments, you can see the input and output values in the traceback. It is also possible to mark individual test instances within parametrize, for example with the builtin mark.xfail:
# content of test_expectation.py
import pytest

@pytest.mark.parametrize("test_input,expected", [
    ("3+5", 8),
    ("2+4", 6),
    pytest.mark.xfail(("6*9", 42)),
])
def test_eval(test_input, expected):
    assert eval(test_input) == expected
Let’s run this:
$ pytest ======= test session starts ======== platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0 rootdir: $REGENDOC_TMPDIR, inifile: collected 3 items test_expectation.py ..x ======= 2 passed, 1 xfailed in 0.12 seconds ========
The one parameter set which caused a failure previously now shows up as an “xfailed (expected to fail)” test.
To get all combinations of multiple parametrized arguments you can stack
parametrize decorators:
import pytest @pytest.mark.parametrize("x", [0, 1]) @pytest.mark.parametrize("y", [2, 3]) def test_foo(x, y): pass
This will run the test with the arguments set to x=0/y=2, x=0/y=3, x=1/y=2 and x=1/y=3.
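As a further sketch written for this edit (the ids argument is an assumption not shown above, though it is part of the parametrize signature):
import pytest

@pytest.mark.parametrize("x", [0, 1])
@pytest.mark.parametrize("y", [2, 3], ids=["two", "three"])
def test_foo(x, y):
    # the ids appear in the generated test IDs, making reports easier to read
    assert x + y in (2, 3, 4)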
Note
In versions prior to 2.4 one needed to specify the argument
names as a tuple. This remains valid but the simpler
"name1,name2,..."
comma-separated-string syntax is now advertised first because
it’s easier to write and produces less line noise.
Basic
pytest_generate_tests example¶
Sometimes you may want to implement your own parametrization scheme
or implement some dynamism for determining the parameters or scope
of a fixture. For this, you can use the
pytest_generate_tests hook
which is called when collecting a test function. Through the passed in
metafunc object you can inspect the requesting test context and, most
importantly, you can call
metafunc.parametrize() to cause
parametrization.
For example, let’s say we want to run a test taking string inputs which
we want to set via a new
pytest command line option. Let's first write a simple test accepting a
stringinput fixture function argument:
# content of test_strings.py
def test_valid_string(stringinput):
    assert stringinput.isalpha()
Now we add a conftest.py file containing the addition of a command line option and the parametrization of our test function:
# content of conftest.py
def pytest_addoption(parser):
    parser.addoption("--stringinput", action="append", default=[],
                     help="list of stringinputs to pass to test functions")

def pytest_generate_tests(metafunc):
    if 'stringinput' in metafunc.fixturenames:
        metafunc.parametrize("stringinput",
                             metafunc.config.option.stringinput)
If we now pass two stringinput values, our test will run twice:
$ pytest -q --stringinput="hello" --stringinput="world" test_strings.py .. 2 passed in 0.12 seconds
Let’s also run with a stringinput that will lead to a failing test:
$ pytest -q --stringinput="!" test_strings.py F ======= FAILURES ======== _______ test_valid_string[!] ________ stringinput = '!' def test_valid_string(stringinput): > assert stringinput.isalpha() E AssertionError: assert False E + where False = <built-in method isalpha of str object at 0xdeadbeef>() E + where <built-in method isalpha of str object at 0xdeadbeef> = '!'.isalpha test_strings.py:3: AssertionError 1 failed in 0.12 seconds
As expected our test function fails.
If you don’t specify a stringinput it will be skipped because
metafunc.parametrize() will be called with an empty parameter
list:
$ pytest -q -rs test_strings.py s ======= short test summary info ======== SKIP [1] test_strings.py:1: got empty parameter set ['stringinput'], function test_valid_string at $REGENDOC_TMPDIR/test_strings.py:1 1 skipped in 0.12 seconds
For further examples, you might want to look at more parametrization examples.
The metafunc object¶
- class
Metafunc(function, fixtureinfo, config, cls=None, module=None)[source]¶
Metafunc objects are passed to the
pytest_generate_testshook. They help to inspect a test function and to generate tests according to test configuration or values specified in the class or module where a test function is defined.
config= None¶
access to the
_pytest.config.Configobject for the test session
parametrize(argnames, argvalues, indirect=False, ids=None, scope=None)[source]¶
Add new invocations to the underlying test function using the list of argvalues for the given argnames. Parametrization is performed during the collection phase.
addcall(funcargs=None, id=<object object>, param=<object object>)[source]¶
(deprecated, use parametrize) Add a new call to the underlying test function during the collection phase of a test run. Note that request.addcall() is called during the test collection phase prior and independently to actual test execution. You should only use addcall() if you need to specify multiple arguments of a test function.
Cache: working with cross-testrun state¶
New in version 2.8.
Warning
The functionality of this core plugin was previously distributed
as a third party plugin named
pytest-cache. The core plugin
is compatible regarding command line options and API usage except that you
can only store/receive data between test runs that is json-serializable.
Usage¶
The plugin provides two command line options to rerun failures from the
pytest invocation:
--lf,
--last-failed- to only re-run the failures.
--ff,
--failed-first- to run the failures first and then the rest of the tests.
For cleanup (usually not needed), a
--cache-clear option allows to remove
all cross-session cache contents ahead of a test run.
Other plugins may access the config.cache object to set/get
json encodable values between
pytest invocations.
Note
This plugin is enabled by default, but can be disabled if needed: see
Deactivating / unregistering a plugin by name (the internal name for this plugin is
cacheprovider).
Rerunning only failures or failures first¶
First, let's create 50 test invocations of which only 2 fail:
# content of test_50.py
import pytest

@pytest.mark.parametrize("i", range(50))
def test_num(i):
    if i in (17, 25):
        pytest.fail("bad luck")
If you run this for the first time you will see two failures:
$ pytest -q .................F.......F........................ =======
If you then run it with
--lf:
$ pytest --lf ======= test session starts ======== platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0 run-last-failure: rerun last 2 failures ======= 48 tests deselected ======== ======= 2 failed, 48 deselected in 0.12 seconds ========
You have run only the two failing test from the last run, while 48 tests have not been run (“deselected”).
Now, if you run with the
--ff option, all tests will be run but the first
previous failures will be executed first (as can be seen from the series
of
FF and dots):
$ pytest --ff ======= test session starts ======== platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0 run-last-failure: rerun last 2 failures first ========
The new config.cache object¶
Plugins or conftest.py support code can get a cached value using the
pytest
config object. Here is a basic example plugin which
implements a pytest fixtures: explicit, modular, scalable which re-uses previously created state
across pytest invocations:
# content of test_caching.py
import pytest
import time

@pytest.fixture
def mydata(request):
    val = request.config.cache.get("example/value", None)
    if val is None:
        time.sleep(9*0.6) # expensive computation :)
        val = 42
        request.config.cache.set("example/value", val)
    return val

def test_function(mydata):
    assert mydata == 23
If you run this command once, it will take a while because of the sleep:
$ pytest -q F ======= FAILURES ======== _______ test_function ________ mydata = 42 def test_function(mydata): > assert mydata == 23 E assert 42 == 23 test_caching.py:14: AssertionError 1 failed in 0.12 seconds
If you run it a second time the value will be retrieved from the cache and this will be quick:
$ pytest -q F ======= FAILURES ======== _______ test_function ________ mydata = 42 def test_function(mydata): > assert mydata == 23 E assert 42 == 23 test_caching.py:14: AssertionError 1 failed in 0.12 seconds
See the cache-api for more details.
Inspecting Cache content¶
You can always peek at the content of the cache using the
--cache-show command line option:
$ py.test --cache-show ======= test session starts ======== platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0 rootdir: $REGENDOC_TMPDIR, inifile: cachedir: $REGENDOC_TMPDIR/.cache ------------------------------- cache values ------------------------------- cache/lastfailed contains: {'test_caching.py::test_function': True} example/value contains: 42 ======= no tests ran in 0.12 seconds ========
Clearing Cache content¶
You can instruct pytest to clear all cache files and values
by adding the
--cache-clear option like this:
pytest --cache-clear
This is recommended for invocations from Continuous Integration servers where isolation and correctness is more important than speed.
config.cache API¶
The
config.cache object allows other plugins,
including
conftest.py files,
to safely and flexibly store and retrieve values across
test runs because the
config object is available
in many places.
Under the hood, the cache plugin uses the simple dumps/loads API of the json stdlib module.
Cache.
get(key, default)[source]¶
return cached value for the given key. If no value was yet cached or the value cannot be read, the specified default is returned.
Support for unittest.TestCase / Integration of fixtures¶
pytest has support for running Python unittest.py style tests.
It’s meant for leveraging existing unittest-style projects
to use pytest features. Concretely, pytest will automatically
collect
unittest.TestCase subclasses and their
test methods in
test files. It will invoke typical setup/teardown methods and
generally try to make test suites written to run on unittest, to also
run using
pytest. We assume here that you are familiar with writing
unittest.TestCase style tests and rather focus on
integration aspects.
Note that this is meant as a provisional way of running your test code
until you fully convert to pytest-style tests. To fully take advantage of
fixtures, parametrization and
hooks you should convert (tools like unittest2pytest are helpful).
Also, not all third party plugins are expected to work best with
unittest.TestCase style tests.
Usage¶
After Installation type:
pytest
and you should be able to run your unittest-style tests if they
are contained in
test_* modules. If that works for you then
you can make use of most pytest features, for example
--pdb debugging in failures, using plain assert-statements,
more informative tracebacks, stdout-capturing or
distributing tests to multiple CPUs via the
-nNUM option if you
installed the
pytest-xdist plugin. Please refer to
the general
pytest documentation for many more examples.
Note
Running tests from
unittest.TestCase subclasses with
--pdb will
disable tearDown and cleanup methods for the case that an Exception
occurs. This allows proper post mortem debugging for all applications
which have significant logic in their tearDown machinery. However,
supporting this feature has the following side effect: If people
overwrite
unittest.TestCase
__call__ or
run, they need to
to overwrite
debug in the same way (this is also true for standard
unittest).
Mixing pytest fixtures into unittest.TestCase style tests¶
Running your unittest with pytest allows you to use its fixture mechanism with unittest.TestCase style tests. For example, a class-scoped fixture defined in a conftest.py file can be applied to a TestCase subclass with @pytest.mark.usefixtures. Running the corresponding example from the docs yields two failures:
$ pytest test_unittest_db.py
======= test session starts ========
platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0
rootdir: $REGENDOC_TMPDIR, inifile:
collected 2 items
test_unittest_db.py FF
======= FAILURES ========
...
test_unittest_db.py:12: AssertionError
======= 2 failed in 0.12 seconds ========
This default pytest traceback shows that the two test methods
share the same
self.db instance which was our intention
when writing the class-scoped fixture function above.
You can also define autouse fixtures directly on a unittest.TestCase subclass, for example to prepare a working directory for each test:
# content of test_unittest_cleandir.py
import pytest
import unittest

class MyTest(unittest.TestCase):
    @pytest.fixture(autouse=True)
    def initdir(self, tmpdir):
        tmpdir.chdir()   # change to pytest-provided temporary directory
        tmpdir.join("samplefile.ini").write("# testdata")

    def test_method(self):
        s = open("samplefile.ini").read()
        assert "testdata" in s
Running this test module ...
$ pytest -q test_unittest_cleandir.py
.
1 passed in 0.12 seconds
This gives us one passed test because the
initdir fixture function
was executed ahead of the
test_method.
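A sketch of what such an autouse arrangement can look like, assuming an initdir fixture that prepares samplefile.ini in a fresh temporary directory (the file name, contents and MyTest class are illustrative):
# content of test_unittest_cleandir.py (sketch)
import unittest
import pytest

class MyTest(unittest.TestCase):
    @pytest.fixture(autouse=True)
    def initdir(self, tmpdir):
        tmpdir.chdir()  # change to the pytest-provided temporary directory
        tmpdir.join("samplefile.ini").write("# testdata")

    def test_method(self):
        s = open("samplefile.ini").read()
        assert "testdata" in s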
Note
While pytest supports receiving fixtures via test function arguments for non-unittest test methods,
unittest.TestCase methods cannot directly receive fixture
function arguments, as implementing that would likely infringe
on the ability to run general unittest.TestCase test suites.
Maybe optional support would be possible, though. If unittest finally
grows a plugin system that should help as well. In the meanwhile, the
above
usefixtures and
autouse examples should help to mix in
pytest fixtures into unittest suites. And of course you can also start
to selectively drop the
unittest.TestCase subclassing, use
plain asserts and get the unlimited pytest feature set.
Running tests written for nose¶
pytest has basic support for running tests written for nose.
Usage¶
After installation, run pytest instead of nosetests; your nose-style tests will be collected and run.
Unsupported idioms / known issues¶
yield-based methods don't support
setup properly because the
setup method is always called in the same class instance. There are no plans to fix this currently because
yield-tests are deprecated in pytest 3.0, with
pytest.mark.parametrize being the recommended alternative.
Installing and Using plugins¶
Here is a little annotated list for some popular plugins:
- pytest-django: write tests for django apps, using pytest integration.
- pytest-twisted: write tests for twisted apps, starting a reactor and processing deferreds from test functions.
- pytest-catchlog: capture and assert about messages from the logging module.
To de-activate/unregister a plugin by name, pass
-p no:name on the command line.
See Finding out which plugins are active for how to obtain the name of a plugin.
Pytest default plugin reference¶
You can find the source code for the following plugins in the pytest repository.
Writing your own plugin¶
If you want to write a plugin, there are many real-life examples you can copy from:
- a custom collection example plugin: A basic example for specifying tests in Yaml files
- the roughly 20 builtin plugins listed in the Pytest default plugin reference, which provide pytest’s own functionality
- many external plugins providing additional features
All of these plugins implement the documented, well-specified hooks to extend and add functionality.
Note
Make sure to check out the excellent cookiecutter-pytest-plugin project, which is a cookiecutter template for authoring plugins.
The template provides an excellent starting point with a working plugin, tests running with tox, a comprehensive README and entry points already pre-configured.
Assertion rewriting¶
pytest installs an import hook at start-up which performs assertion
re-writing when modules get imported. However, since we do not want to test different bytecode
than what you will run in production, this hook only re-writes test modules
themselves as well as any modules which are part of plugins. Any
other imported module will not be re-written and normal assertion
behaviour will happen.
If you have assertion helpers in other modules where you would need
assertion rewriting to be enabled you need to ask
pytest
explicitly to re-write this module before it gets imported.
register_assert_rewrite(*names)[source]¶
register one or more module names to be re-written on import. If a
helper module also contains assert statements which need to be
re-written it needs to be marked as such, before it gets imported.
This is easiest done by calling pytest.register_assert_rewrite() for the helper module from a conftest.py file, before the module is imported. Shared fixtures can in turn be provided by plugin modules which test modules or conftest.py files require through the pytest_plugins variable.
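A small conftest.py sketch combining both mechanisms; the module names mypkg.checks and myapp.testsupport.myplugin are illustrative stand-ins for your own helper and plugin modules:
# content of conftest.py (sketch)
import pytest

# give assert statements inside the shared helper module full introspection output
pytest.register_assert_rewrite("mypkg.checks")

# require an additional plugin module that provides shared fixtures
pytest_plugins = "myapp.testsupport.myplugin"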
This mechanism makes it easy to share fixtures within applications or even
external applications without the need to create external plugins using
the
setuptools' entry point technique.
If a plugin wants to collaborate with code from another plugin it can obtain a reference through the plugin manager, for example plugin = config.pluginmanager.getplugin("name_of_plugin").
If you want to look at the names of existing plugins, use
the
--trace-config option.
Testing plugins¶
pytest comes with some facilities that you can enable for testing your
plugin. Given that you have an installed plugin you can enable the
testdir fixture via specifying a
command line option to include the pytester plugin (
-p pytester) or
by putting
pytest_plugins = "pytester" into your test or
conftest.py file. You then will have a
testdir fixture which you
can use like this:
# content of test_myplugin.py
pytest_plugins = "pytester"  # to get testdir fixture

def test_myplugin(testdir):
    testdir.makepyfile("""
        def test_example():
            pass
    """)
    result = testdir.runpytest("--verbose")
    result.stdout.fnmatch_lines("""
        test_example*
    """)
Note that by default
testdir.runpytest() will perform a pytest
in-process. You can pass the command line option
--runpytest=subprocess
to have it happen in a subprocess.
Also see the
RunResult for more
methods of the result object that you get from a call to
runpytest.
New in version 2.7.
pytest plugins can implement hook wrappers which wrap the execution of other hook implementations. A hook wrapper is a generator function which yields exactly once; code before the yield runs ahead of the remaining hook implementations, and the wrapper resumes after they have completed:
import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_pyfunc_call(pyfuncitem):
    # do whatever you want before the next hook executes
    outcome = yield
    # outcome.excinfo may be None or a (cls, val, tb) tuple
    res = outcome.get_result()  # will raise if outcome was exception
    # postprocess result
Note that hook wrappers don’t return results themselves, they merely perform tracing or other side effects around the actual hook implementations. If the result of the underlying hook is a mutable object, they may modify that result but it’s probably better to avoid it.
Declaring new hooks¶
Plugins and
conftest.py files may declare new hooks that can then be
implemented by other plugins in order to alter behaviour or interact with
the new plugin:
pytest_addhooks(pluginmanager)[source]¶
called at plugin registration time to allow adding new hooks via a call to pluginmanager.add_hookspecs(module_or_class, prefix).
Hooks are usually declared as do-nothing functions that contain only documentation describing when the hook will be called and what return values are expected.
For an example, see newhooks.py from xdist.
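A minimal sketch of declaring and registering a custom hook; the hook and module names are illustrative:
# content of newhooks.py (sketch)
def pytest_my_post_collection(config, items):
    """ called by our plugin after collection; implementations may inspect items. """

# content of the plugin module declaring the hook (sketch)
def pytest_addhooks(pluginmanager):
    import newhooks
    pluginmanager.add_hookspecs(newhooks)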
pytest hook reference¶
Initialization, command line and configuration hooks¶
pytest_load_initial_conftests(early_config, parser, args)[source]¶
implements the loading of initial conftest files ahead of command line option parsing.
pytest_cmdline_preparse(config, args)[source]¶
(deprecated) modify command line arguments before option parsing.
pytest_cmdline_parse(pluginmanager, args)[source]¶
return initialized config object, parsing the specified args.
pytest_namespace()[source]¶
return dict of name->object to be made globally available in the pytest namespace. This hook is called at plugin registration time.
pytest_addoption(parser)[source]¶
register argparse-style options and ini-style config values, called once at the beginning of a test run. Registered options can later be read from the config object, for example via config.getoption(name), or accessed via the (deprecated)
pytest.config global.
pytest_cmdline_main(config)[source]¶
called for performing the main command line action. The default implementation will invoke the configure hooks and runtest_mainloop.
pytest_configure(config)[source]¶
called after command line options have been parsed and all plugins and initial conftest files been loaded. This hook is called for every plugin.
Generic “runtest” hooks¶
All runtest related hooks receive a
pytest.Item object.
pytest_runtest_protocol(item, nextitem)[source]¶
implements the runtest_setup/call/teardown protocol for the given test item, including capturing exceptions and calling reporting hooks.
pytest_runtest_makereport(item, call)[source]¶
return a
_pytest.runner.TestReport object for the given
pytest.Item and
_pytest.runner.CallInfo.
For deeper understanding you may look at the default implementation of
these hooks in
_pytest.runner and maybe also
in
_pytest.pdb which interacts with
_pytest.capture
and its input/output capturing in order to immediately drop
into interactive debugging when a test failure occurs.
The
_pytest.terminal reporter specifically uses
the reporting hook to print information about a test run.
Collection hooks¶
pytest calls the following hooks for collecting files and directories:
pytest_ignore_collect(path, config)[source]¶
return True to prevent considering this path for collection. This hook is consulted for all files and directories prior to calling more specific hooks.
pytest_collect_directory(path, parent)[source]¶
called before traversing a directory for collection files.
pytest_collect_file(path, parent)[source]¶
return collection Node or None for the given path. Any new node needs to have the specified
parent as a parent.
For influencing the collection of objects in Python modules you can use the following hook:
pytest_pycollect_makeitem(collector, name, obj)[source]¶
return custom item/collector for a python object in a module, or None.
pytest_make_parametrize_id(config, val)[source]¶
Return a user-friendly string representation of the given
val that will be used by @pytest.mark.parametrize calls. Return None if the hook doesn’t know about
val.
After collection is complete, you can modify the order of items, delete or otherwise amend the test items:
pytest_collection_modifyitems(session, config, items)[source]¶
called after collection has been performed; may filter or re-order the items in-place.
Reporting hooks¶
Session related reporting hooks:
pytest_report_header(config, startdir)[source]¶
return a string to be displayed as header info for terminal reporting.
Note
This function should be implemented only in plugins or
conftest.py files situated at the tests root directory due to how pytest discovers plugins during startup.
pytest_report_teststatus(report)[source]¶
return result-category, shortletter and verbose word for reporting.
pytest_terminal_summary(terminalreporter, exitstatus)[source]¶
add additional section in terminal summary reporting.
pytest_fixture_post_finalizer(fixturedef)[source]¶
called after fixture teardown, but before the cache is cleared so the fixture result cache
fixturedef.cached_result can still be accessed.
And here is the central hook for reporting about test execution:
pytest_runtest_logreport(report)[source]¶
process a test setup/call/teardown report relating to the respective phase of executing a test.
You can also use this hook to customize assertion representation for some types:
pytest_assertrepr_compare(config, op, left, right)[source]¶
return explanation for comparisons in failing assert expressions.
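A conftest.py sketch of providing a custom comparison explanation for one of your own types; the Foo class is illustrative:
# content of conftest.py (sketch)
class Foo:
    def __init__(self, val):
        self.val = val

    def __eq__(self, other):
        return self.val == other.val

def pytest_assertrepr_compare(config, op, left, right):
    if isinstance(left, Foo) and isinstance(right, Foo) and op == "==":
        return ["Comparing Foo instances:",
                "   vals: %s != %s" % (left.val, right.val)]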
Debugging/Interaction hooks¶
There are few hooks which can be used for special reporting or interaction with exceptions:
pytest_exception_interact(node, call, report)[source]¶
called when an exception was raised which can potentially be interactively handled.
This hook is only called if an exception was raised that is not an internal exception like
skip.Exception.
Reference of objects involved in hooks¶
- class
Config[source]¶
access to configuration values, pluginmanager and plugin hooks.
option= None¶
access to command line option as attributes. (deprecated), use
getoption()instead
add_cleanup(func)[source]¶
Add a function to be called when the config object gets out of use (usually coinciding with pytest_unconfigure).
addinivalue_line(name, line)[source]¶
add a line to an ini-file option. The option must have been declared but might not yet be set, in which case the line becomes the first line in its value.
getini(name)[source]¶
return configuration value from an ini file. If the specified name hasn’t been registered through a prior
parser.addini call (usually from a plugin), a ValueError is raised.
- class
Parser[source]¶
Parser for command line arguments and ini-file values.
getgroup(name, description='', after=None)[source]¶
get (or create) a named option Group.
The returned group object has an
addoption method with the same signature as
parser.addoption but will be shown in the respective group in the output of
pytest --help.
addoption(*opts, **attrs)[source]¶
register a command line option.
After command line parsing options are available on the pytest config object via
config.option.NAME where
NAME is usually set by passing a
dest attribute, for example
addoption("--long", dest="NAME", ...).
parse_known_args(args, namespace=None)[source]¶
parses and returns a namespace object with known arguments at this point.
parse_known_and_unknown_args(args, namespace=None)[source]¶
parses and returns a namespace object with known arguments, and the remaining arguments unknown at this point.
addini(name, help, type=None, default=None)[source]¶
register an ini-file option.
The value of ini-variables can be retrieved via a call to
config.getini(name).
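A short sketch of registering and reading a custom ini value; the api_url name and its default are illustrative:
# content of conftest.py (sketch)
import pytest

def pytest_addoption(parser):
    parser.addini("api_url", help="base url of the service under test",
                  default="http://localhost:8080")

@pytest.fixture
def api_url(pytestconfig):
    return pytestconfig.getini("api_url")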
- class
Node[source]¶
base class for Collector and Item, the components of the test collection tree. Collector subclasses have children; Items are terminal nodes.
listchain()[source]¶
return list of all parent collectors up to self, starting from root of collection tree.
add_marker(marker)[source]¶
dynamically add a marker object to the node.
marker can be a string or pytest.mark.* instance.
get_marker(name)[source]¶
get a marker object from this node or None if the node doesn’t have a marker with that name.
addfinalizer(fin)[source]¶
register a function to be called when this node is finalized.
This method can only be called when this node is active in a setup chain, for example during self.setup().
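A sketch of reading marker information from a node inside a fixture, using the add_marker/get_marker methods above; the timeout marker is a hypothetical example, not a built-in:
# content of conftest.py (sketch)
import pytest

@pytest.fixture(autouse=True)
def enforce_timeout(request):
    marker = request.node.get_marker("timeout")   # None if the test is not marked
    limit = marker.args[0] if marker is not None and marker.args else 10.0
    # configure whatever enforcement mechanism you use with `limit` here
    yield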
- class
Collector[source]¶
Bases:
_pytest.main.Node
Collector instances create children through collect() and thus iteratively build a tree.
- exception
CollectError[source]¶
Bases:
exceptions.Exception
an error during collection, contains a custom message.
Collector.
collect()[source]¶
returns a list of children (items and collectors) for this collection node.
- class
Item[source]¶
Bases:
_pytest.main.Node
a basic test invocation item. Note that for a single function there might be multiple test invocation items.
- class
Module[source]¶
Bases:
_pytest.main.File,
_pytest.python.PyCollector
Collector for test classes and functions.
- class
Function[source]¶
Bases:
_pytest.python.FunctionMixin,
_pytest.main.Item,
_pytest.fixtures.FuncargnamesCompatAttr
a Function Item is responsible for setting up and executing a Python test function.
originalname= None¶
original function name, without any decorations (for example parametrization adds a
"[...]"suffix to function names).
New in version 3.0.
- class
CallInfo[source]¶
Result/Exception info of a function invocation.
- class
TestReport[source]¶
Basic test report object (also used for setup and teardown calls if they fail).
location= None¶
a (filesystempath, lineno, domaininfo) tuple indicating the actual location of a test item - it might be different from the collected one e.g. if a method is inherited from a different module.
keywords= None¶
a name -> value dictionary containing all keywords and markers associated with a test invocation.
sections= None¶
list of pairs
(str, str) of extra information which needs to be marshallable. Used by pytest to add captured text from
stdout and
stderr, but may be used by other plugins to add arbitrary information to reports.
- class
_CallOutcome[source]¶
Outcome of a function call, either an exception or a proper result. Calling the
get_result method will return the result or reraise the exception raised when the function was called.
get_plugin_manager()[source]¶
Obtain a new instance of the
_pytest.config.PytestPluginManager, with default plugins already loaded.
This function can be used for integration with other tools, for example hooking into pytest to run tests from an IDE.
- class
PytestPluginManager[source]¶
Bases:
_pytest.vendored_packages.pluggy.PluginManager
Overwrites
pluggy.PluginManager to add pytest-specific functionality:
- loading plugins from the command line, the PYTEST_PLUGINS env variable and
pytest_plugins global variables found in plugins being loaded;
- conftest.py loading during start-up;
addhooks(module_or_class)[source]¶
Deprecated since version 2.8.
Use
pluggy.PluginManager.add_hookspecs() instead.
- class
PluginManager[source]¶
Core Pluginmanager class which manages registration of plugin objects and 1:N hook calling.
You can register new hooks by calling
add_hookspec(module_or_class). You can register plugin objects (which contain hooks) by calling
register(plugin). The Pluginmanager is initialized with a prefix that is searched for in the names of the dict of registered plugin objects. An optional excludefunc allows blacklisting names which are not considered as hooks despite a matching prefix.
For debugging purposes you can call
enable_tracing() which will subsequently send debug information to the trace helper.
register(plugin, name=None)[source]¶
Register a plugin and return its canonical name or None if the name is blocked from registering. Raise a ValueError if the plugin is already registered.
unregister(plugin=None, name=None)[source]¶
unregister a plugin object and all its contained hook implementations from internal data structures.
add_hookspecs(module_or_class)[source]¶
add new hook specifications defined in the given module_or_class. Functions are recognized if they have been decorated accordingly.
get_canonical_name(plugin)[source]¶
Return canonical name for a plugin object. Note that a plugin may be registered under a different name which was specified by the caller of register(plugin, name). To obtain the name of a registered plugin use
get_name(plugin) instead.
check_pending()[source]¶
Verify that all hooks which have not been verified against a hook specification are optional, otherwise raise PluginValidationError
load_setuptools_entrypoints(entrypoint_name)[source]¶
Load modules from querying the specified setuptools entrypoint name. Return the number of loaded plugins.
list_plugin_distinfo()[source]¶
return list of distinfo/plugin tuples for all setuptools registered plugins.
add_hookcall_monitoring(before, after)[source]¶
add before/after tracing functions for all hooks and return an undo function which, when called, removes them again. The after function additionally receives a
_CallOutcome object which represents the result of the overall hook call.
- class
Testdir[source]¶
Temporary test directory with tools to test/run pytest itself.
This is based on the
tmpdir fixture but provides a number of methods which aid with testing pytest itself. Unless
chdir() is used all methods will use
tmpdir as current working directory.
Attributes:
runpytest_inprocess(*args, **kwargs)[source]¶
Return result of running pytest in-process, providing a similar interface to what self.runpytest() provides.
runpytest(*args, **kwargs)[source]¶
Run pytest inline or in a subprocess, depending on the command line option “–runpytest” and return a
RunResult.
runpytest_subprocess(*args, **kwargs)[source]¶
Run pytest as a subprocess with given arguments.
Any plugins added to the
plugins list will be added using the
-p command line option. Additionally
--basetemp is used to put any temporary files and directories in a numbered directory prefixed with "runpytest-" so they do not conflict with the normal numbered pytest location for temporary files and directories.
- class
RunResult[source]¶
The result of running a command.
Attributes:
parseoutcomes()[source]¶
Return a dictionary of outcomestring->num from parsing the terminal output that the test process produced.
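A sketch of asserting on aggregate outcomes in a plugin test via parseoutcomes(); the file and test names are illustrative:
# content of test_outcome_counts.py (sketch)
pytest_plugins = "pytester"  # provides the testdir fixture

def test_outcome_counts(testdir):
    testdir.makepyfile("""
        def test_ok():
            assert True
        def test_bad():
            assert False
    """)
    result = testdir.runpytest()
    outcomes = result.parseoutcomes()   # e.g. {'passed': 1, 'failed': 1, 'seconds': 0.12}
    assert outcomes["passed"] == 1
    assert outcomes["failed"] == 1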
- class
LineMatcher[source]¶
Flexible matching of text.
This is a convenience class to test large texts like the output of commands.
The constructor takes a list of lines without their trailing newlines, i.e.
text.splitlines().
fnmatch_lines_random(lines2)[source]¶
Check lines exist in the output.
The argument is a list of lines which have to occur in the output, in any order. Each line can contain glob wildcards.
get_lines_after(fnline)[source]¶
Return all lines following the given line in the text.
The given line can contain glob wildcards.
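A small sketch of using LineMatcher directly; note that result.stdout and result.stderr returned by testdir.runpytest() are already LineMatcher instances, and that importing the class from _pytest.pytester relies on an internal module path:
from _pytest.pytester import LineMatcher

lm = LineMatcher([
    "collected 2 items",
    "test_example.py ..",
    "2 passed in 0.03 seconds",
])
# lines may occur in any order and can use glob wildcards
lm.fnmatch_lines_random(["*2 passed*", "collected * items"])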
Usages and Examples
- Support for unittest.TestCase / Integration of fixtures for basic unittest integration
- Running tests written for nose for basic nosetests integration
The following examples aim at various use cases you might encounter.
Demo of Python failure reports with pytest¶
Here is a nice run of several tens of failures
and how
pytest presents things (unfortunately
not showing the nice colors here in the HTML that you
get on the terminal - we are working on that):
assertion $ pytest failure_demo.py ======= test session starts ======== platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0 rootdir: $REGENDOC_TMPDIR/assertion, inifile: collected 42 items failure_demo.py FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF ======= FAILURES ======== _______ test_generative[0] ________ param1 = 3, param2 = 6 def test_generative(param1, param2): > assert param1 * 2 < param2 E assert (3 * 2) < 6 failure_demo.py:16: AssertionError _______ TestFailing.test_simple ________ self = <failure_demo.TestFailing object at 0xdeadbeef> def test_simple(self): def f(): return 42 def g(): return 43 > assert f() == g() E assert 42 == 43 E + where 42 = <function TestFailing.test_simple.<locals>.f at 0xdeadbeef>() E + and 43 = <function TestFailing.test_simple.<locals>.g at 0xdeadbeef>() failure_demo.py:29: AssertionError _______ TestFailing.test_simple_multiline ________ self = <failure_demo.TestFailing object at 0xdeadbeef> def test_simple_multiline(self): otherfunc_multi( 42, > 6*9) failure_demo.py:34: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ a = 42, b = 54 def otherfunc_multi(a,b): > assert (a == b) E assert 42 == 54 failure_demo.py:12: AssertionError _______ TestFailing.test_not ________ self = <failure_demo.TestFailing object at 0xdeadbeef> def test_not(self): def f(): return 42 > assert not f() E assert not 42 E + where 42 = <function TestFailing.test_not.<locals>.f at 0xdeadbeef>() failure_demo.py:39: AssertionError _______ TestSpecialisedExplanations.test_eq_text ________ self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef> def test_eq_text(self): > assert 'spam' == 'eggs' E AssertionError: assert 'spam' == 'eggs' E - spam E + eggs failure_demo.py:43: AssertionError _______ TestSpecialisedExplanations.test_eq_similar_text ________ self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef> def test_eq_similar_text(self): > assert 'foo 1 bar' == 'foo 2 bar' E AssertionError: assert 'foo 1 bar' == 'foo 2 bar' E - foo 1 bar E ? ^ E + foo 2 bar E ? ^ failure_demo.py:46: AssertionError _______ TestSpecialisedExplanations.test_eq_multiline_text ________ self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef> def test_eq_multiline_text(self): > assert 'foo\nspam\nbar' == 'foo\neggs\nbar' E AssertionError: assert 'foo\nspam\nbar' == 'foo\neggs\nbar' E foo E - spam E + eggs E bar failure_demo.py:49: AssertionError _______ TestSpecialisedExplanations.test_eq_long_text ________ self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef> def test_eq_long_text(self): assert 'foo' not in text E AssertionError: assert 'foo' not in 'some multiline\ntext\nw...ncludes foo\nand a\ntail' E 'foo' is contained here: E some multiline E text E which E includes foo E ? +++ E and a E tail failure_demo.py:83: AssertionError _______ TestSpecialisedExplanations.test_not_in_text_single ________ self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef> def test_not_in_text_single(self): assert 'foo' not in text E AssertionError: assert 'foo' not in 'single foo line' E 'foo' is contained here: E single foo line E ? 
+++ failure_demo.py:87: AssertionError _______ TestSpecialisedExplanations.test_not_in_text_single_long ________ self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef> def test_not_in_text_single_long(self):() failure_demo.py:108: AssertionError _______ test_attribute_failure ________ def test_attribute_failure(): class Foo(object): def _get_b(self): raise Exception('Failed to get attrib') b = property(_get_b) i = Foo() > assert i.b == 2 failure_demo.py:117: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <failure_demo.test_attribute_failure.<locals>.Foo object at 0xdeadbeef> def _get_b(self): > raise Exception('Failed to get attrib') E Exception: Failed to get attrib failure_demo.py:114: Exception _______ test_attribute_multiple ________ def test_attribute_multiple(): class Foo(object): b = 1 class Bar(object): b = 2 > assert Foo().b == Bar().b E AssertionError: assert 1 == 2 E + where 1 = <failure_demo.test_attribute_multiple.<locals>.Foo object at 0xdeadbeef>.b E + where <failure_demo.test_attribute_multiple.<locals>.Foo object at 0xdeadbeef> = <class 'failure_demo.test_attribute_multiple.<locals>.Foo'>() E + and 2 = <failure_demo.test_attribute_multiple.<locals>.Bar object at 0xdeadbeef>.b E + where <failure_demo.test_attribute_multiple.<locals>.Bar object at 0xdeadbeef> = <class 'failure_demo.test_attribute_multiple.<locals>.Bar'>() failure_demo.py:125: AssertionError _______ TestRaises.test_raises ________ self = <failure_demo.TestRaises object at 0xdeadbeef> def test_raises(self): raises(TypeError, "int(s)") failure_demo.py:134: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ > int(s) E ValueError: invalid literal for int() with base 10: 'qwe' <0-codegen $PYTHON_PREFIX/lib/python3.5/site-packages/_pytest/python.py:1207>:1: ValueError _______ TestRaises.test_raises_doesnt ________ self = <failure_demo.TestRaises object at 0xdeadbeef> def test_raises_doesnt(self): > raises(IOError, "int('3')") E Failed: DID NOT RAISE <class 'OSError'> failure_demo.py:137: Failed _______ TestRaises.test_raise ________ self = <failure_demo.TestRaises object at 0xdeadbeef> def test_raise(self): > raise ValueError("demo error") E ValueError: demo error failure_demo.py:140: ValueError _______ TestRaises.test_tupleerror ________ self = <failure_demo.TestRaises object at 0xdeadbeef> def test_tupleerror(self): > a,b = [1] E ValueError: not enough values to unpack (expected 2, got 1) failure_demo.py:143: ValueError ______ TestRaises.test_reinterpret_fails_with_print_for_the_fun_of_it ______ self = <failure_demo.TestRaises object at 0xdeadbeef> def test_reinterpret_fails_with_print_for_the_fun_of_it(self): l = [1,2,3] print ("l is %r" % l) > a,b = l.pop() E TypeError: 'int' object is not iterable failure_demo.py:148: TypeError --------------------------- Captured stdout call --------------------------- l is [1, 2, 3] _______ TestRaises.test_some_error ________ self = <failure_demo.TestRaises object at 0xdeadbeef> def test_some_error(self): > if namenotexi: E NameError: name 'namenotexi' is not defined failure_demo.py:151: NameError _______ test_dynamic_compile_shows_nicely ________ def test_dynamic_compile_shows_nicely():.a failure_demo.py:222: AssertionError _______ TestCustomAssertMsg.test_multiline ________ self = <failure_demo.TestCustomAssertMsg object at 0xdeadbeef> def test_multiline(self): class A: a = 1 b = 2 > assert A.a == b, "A.a appears not to be b\n" \ "or does not appear to be b\none of those" E AssertionError: A.a 
appears not to be b E or does not appear to be b E one of those E assert 1 == 2 E + where 1 = <class 'failure_demo.TestCustomAssertMsg.test_multiline.<locals>.A'>.a failure_demo.py:228: AssertionError _______ TestCustomAssertMsg.test_custom_repr ________ self = <failure_demo.TestCustomAssertMsg object at 0xdeadbeef> def test_custom_repr(self): class JSON: a = 1 def __repr__(self): return "This is JSON\n{\n 'foo': 'bar'\n}" a = JSON() b = 2 > assert a.a == b, a E AssertionError: This is JSON E { E 'foo': 'bar' E } E assert 1 == 2 E + where 1 = This is JSON\n{\n 'foo': 'bar'\n}.a failure_demo.py:238: AssertionError ======= 42 failed in 0.12 seconds ========
Basic patterns and examples¶
Pass different values to a test function, depending on command line options¶
Suppose we want to write a test that depends on a command line option. Here is a basic pattern to achieve this:
# content of test_sample.py
def test_answer(cmdopt):
    if cmdopt == "type1":
        print ("first")
    elif cmdopt == "type2":
        print ("second")
    assert 0 # to see what was printed

# content of conftest.py
import pytest

def pytest_addoption(parser):
    parser.addoption("--cmdopt", action="store", default="type1",
        help="my option: type1 or type2")

@pytest.fixture
def cmdopt(request):
    return request.config.getoption("--cmdopt")
Let’s run this without supplying our new option:
$ pytest -q test_sample.py F ======= FAILURES ======== _______ test_answer ________ cmdopt = 'type1' def test_answer(cmdopt): if cmdopt == "type1": print ("first") elif cmdopt == "type2": print ("second") > assert 0 # to see what was printed E assert 0 test_sample.py:6: AssertionError --------------------------- Captured stdout call --------------------------- first 1 failed in 0.12 seconds
And now with supplying a command line option:
$ pytest -q --cmdopt=type2 F ======= FAILURES ======== _______ test_answer ________ cmdopt = 'type2' def test_answer(cmdopt): if cmdopt == "type1": print ("first") elif cmdopt == "type2": print ("second") > assert 0 # to see what was printed E assert 0 test_sample.py:6: AssertionError --------------------------- Captured stdout call --------------------------- second 1 failed in 0.12 seconds
You can see that the command line option arrived in our test. This completes the basic pattern. However, one often rather wants to process command line options outside of the test and rather pass in different or more complex objects.
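One way to do that is to have the fixture hand the test a richer object built from the raw option value; the Runner class below is an illustrative stand-in for your own abstraction:
# content of conftest.py (sketch)
import pytest

def pytest_addoption(parser):
    parser.addoption("--cmdopt", action="store", default="type1",
                     help="my option: type1 or type2")

class Runner:
    def __init__(self, mode):
        self.mode = mode

@pytest.fixture
def cmdopt(request):
    # hand the test a configured object instead of the raw string
    return Runner(request.config.getoption("--cmdopt"))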
Dynamically adding command line options¶
Through
addopts you can statically add command line
options for your project. You can also dynamically modify
the command line arguments before they get processed:
# content of conftest.py
import sys

def pytest_cmdline_preparse(args):
    if 'xdist' in sys.modules: # pytest-xdist plugin
        import multiprocessing
        num = max(multiprocessing.cpu_count() // 2, 1)
        args[:] = ["-n", str(num)] + args

This automatically adds a -n option (distribute tests via pytest-xdist) with a value adapted to your CPU count whenever that plugin is installed. Running in an empty directory with the above conftest.py:
$ pytest ======= test session starts ======== platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0 rootdir: $REGENDOC_TMPDIR, inifile: collected 0 items ======= no tests ran in 0.12 seconds ========
Control skipping of tests according to command line option¶
Here is a
conftest.py file adding a
--runslow command
line option to control skipping of
slow marked tests:
# content of conftest.py
import pytest

def pytest_addoption(parser):
    parser.addoption("--runslow", action="store_true",
        help="run slow tests")
We can now write a test module like this:
# content of test_module.py
import pytest

slow = pytest.mark.skipif(
    not pytest.config.getoption("--runslow"),
    reason="need --runslow option to run"
)

def test_func_fast():
    pass

@slow
def test_func_slow():
    pass
and when running it will see a skipped “slow” test:
$ pytest -rs # "-rs" means report details on the little 's' ======= test session starts ======== platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0 rootdir: $REGENDOC_TMPDIR, inifile: collected 2 items test_module.py .s ======= short test summary info ======== SKIP [1] test_module.py:13: need --runslow option to run ======= 1 passed, 1 skipped in 0.12 seconds ========
Or run it including the
slow marked test:
$ pytest --runslow ======= test session starts ======== platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0 rootdir: $REGENDOC_TMPDIR, inifile: collected 2 items test_module.py .. ======= 2 passed in 0.12 seconds ========
Writing well integrated assertion helpers¶
If you have a test helper function called from a test you can
use the
pytest.fail marker to fail a test with a certain message.
The test support function will not show up in the traceback if you
set the
__tracebackhide__ option somewhere in the helper function.
Example:
# content of test_checkconfig.py
import pytest

def checkconfig(x):
    __tracebackhide__ = True
    if not hasattr(x, "config"):
        pytest.fail("not configured: %s" %(x,))

def test_something():
    checkconfig(42)
The
__tracebackhide__ setting influences
pytest showing
of tracebacks: the
checkconfig function will not be shown
unless the
--full-trace command line option is specified.
Let’s run our little function:
$ pytest -q test_checkconfig.py F ======= FAILURES ======== _______ test_something ________ def test_something(): > checkconfig(42) E Failed: not configured: 42 test_checkconfig.py:8: Failed 1 failed in 0.12 seconds
If you only want to hide certain exceptions, you can set
__tracebackhide__
to a callable which gets the
ExceptionInfo object. You can for example use
this to make sure unexpected exception types aren’t hidden:
import operator
import pytest

class ConfigException(Exception):
    pass

def checkconfig(x):
    __tracebackhide__ = operator.methodcaller('errisinstance', ConfigException)
    if not hasattr(x, "config"):
        raise ConfigException("not configured: %s" %(x,))

def test_something():
    checkconfig(42)
This will avoid hiding the exception traceback on unrelated exceptions (i.e. bugs in assertion helpers).
Detect if running from within a pytest run¶
Usually it is a bad idea to make application code behave differently if called from a test. But if you absolutely must find out if your application code is running from a test you can do something like this:
# content of conftest.py
def pytest_configure(config):
    import sys
    sys._called_from_test = True

def pytest_unconfigure(config):
    import sys
    del sys._called_from_test
and then check for the
sys._called_from_test flag:
import sys

if hasattr(sys, '_called_from_test'):
    # called from within a test run
    ...
else:
    # called "normally"
    ...

and behave accordingly in your application. It’s also a good idea
to use your own application module rather than
sys
for handling the flag.
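A sketch of keeping the flag in an application-owned module instead; myapp.runtime_flags and CALLED_FROM_TEST are illustrative names:
# content of myapp/runtime_flags.py (sketch)
CALLED_FROM_TEST = False

# content of conftest.py (sketch)
def pytest_configure(config):
    import myapp.runtime_flags
    myapp.runtime_flags.CALLED_FROM_TEST = True

def pytest_unconfigure(config):
    import myapp.runtime_flags
    myapp.runtime_flags.CALLED_FROM_TEST = False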
Adding info to test report header¶
It’s easy to present extra information in a
pytest run:
# content of conftest.py
def pytest_report_header(config):
    return "project deps: mylib-1.1"
which will add the string to the test header accordingly:
$ pytest ======= test session starts ======== platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0 project deps: mylib-1.1 rootdir: $REGENDOC_TMPDIR, inifile: collected 0 items ======= no tests ran in 0.12 seconds ========
It is also possible to return a list of strings which will be considered as several
lines of information. You may consider
config.getoption('verbose') in order to
display more information if applicable:
# content of conftest.py
def pytest_report_header(config):
    if config.getoption('verbose') > 0:
        return ["info1: did you know that ...", "did you?"]
which will add info only when run with "-v":
$ pytest -v ======= test session starts ======== platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0 -- $PYTHON_PREFIX/bin/python3.5 cachedir: .cache info1: did you know that ... did you? rootdir: $REGENDOC_TMPDIR, inifile: collecting ... collected 0 items ======= no tests ran in 0.12 seconds ========
and nothing when run plainly:
$ pytest ======= test session starts ======== platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0 rootdir: $REGENDOC_TMPDIR, inifile: collected 0 items ======= no tests ran in 0.12 seconds ========
profiling test duration¶
If you have a slow running large test suite you might want to find out which tests are the slowest. Let’s make an artificial test suite:
# content of test_some_are_slow.py
import time

def test_funcfast():
    pass

def test_funcslow1():
    time.sleep(0.1)

def test_funcslow2():
    time.sleep(0.2)
Now we can profile which test functions execute the slowest:
$ pytest --durations=3 ======= test session starts ======== platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0 rootdir: $REGENDOC_TMPDIR, inifile: collected 3 items test_some_are_slow.py ... ======= slowest 3 test durations ======== 0.20s call test_some_are_slow.py::test_funcslow2 0.10s call test_some_are_slow.py::test_funcslow1 0.00s setup test_some_are_slow.py::test_funcfast ======= 3 passed in 0.12 seconds ========
incremental testing - test steps¶
Sometimes you may have a testing situation which consists of a series
of test steps. If one step fails it makes no sense to execute further
steps as they are all expected to fail anyway and their tracebacks
add no insight. Here is a simple
conftest.py file which introduces
an
incremental marker which is to be used on classes:
# content of conftest.py
import pytest

def pytest_runtest_makereport(item, call):
    if "incremental" in item.keywords:
        if call.excinfo is not None:
            parent = item.parent
            parent._previousfailed = item

def pytest_runtest_setup(item):
    if "incremental" in item.keywords:
        previousfailed = getattr(item.parent, "_previousfailed", None)
        if previousfailed is not None:
            pytest.xfail("previous test failed (%s)" % previousfailed.name)
These two hook implementations work together to abort incremental-marked tests in a class. Here is a test module example:
# content of test_step.py
import pytest

@pytest.mark.incremental
class TestUserHandling:
    def test_login(self):
        pass
    def test_modification(self):
        assert 0
    def test_deletion(self):
        pass

def test_normal():
    pass
If we run this:
$ pytest -rx ======= test session starts ======== platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0 rootdir: $REGENDOC_TMPDIR, inifile: collected 4 items test_step.py .Fx. ======= short test summary info ======== XFAIL test_step.py::TestUserHandling::()::test_deletion reason: previous test failed (test_modification) ======= FAILURES ======== _______ TestUserHandling.test_modification ________ self = <test_step.TestUserHandling object at 0xdeadbeef> def test_modification(self): > assert 0 E assert 0 test_step.py:9: AssertionError ======= 1 failed, 2 passed, 1 xfailed in 0.12 seconds ========
We’ll see that
test_deletion was not executed because
test_modification
failed. It is reported as an "expected failure".
Package/directory-level fixtures (setups)¶
If you have nested test directories, you can have per-directory fixture scopes by placing fixture functions in a conftest.py file in that directory. It is however recommended to have explicit fixture references in your
tests or test classes rather than relying on implicitly executing
setup/teardown functions, especially if they are far away from the actual tests.
Here is an example for making a
db fixture available in a directory:
# content of a/conftest.py
import pytest

class DB:
    pass

@pytest.fixture(scope="session")
def db():
    return DB()
and then a test module in that directory:
# content of a/test_db.py
def test_a1(db):
    assert 0, db  # to show value
another test module:
# content of a/test_db2.py
def test_a2(db):
    assert 0, db  # to show value
and then a module in a sister directory which will not see
the
db fixture:
# content of b/test_error.py
def test_root(db):  # no db here, will error out
    pass
We can run this:
$ pytest ======= test session starts ======== platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0 rootdir: $REGENDOC_TMPDIR, inifile: collected 7 items test_step.py .Fx. a/test_db.py F a/test_db2.py F b/test_error.py E ======= ERRORS ======== _______ ERROR at setup of test_root ________ file $REGENDOC_TMPDIR/b/test_error.py, line 1 def test_root(db): # no db here, will error out E fixture 'db' not found > available fixtures: cache, capfd, capsys, doctest_namespace, monkeypatch, pytestconfig, record_xml_property, recwarn, tmpdir, tmpdir_factory > use 'pytest --fixtures [testpath]' for help on them. $REGENDOC_TMPDIR/b/test_error.py:1 ======= FAILURES ======== _______ TestUserHandling.test_modification ________ self = <test_step.TestUserHandling object at 0xdeadbeef> def test_modification(self): > assert 0 E assert 0 test_step.py:9: AssertionError _______ test_a1 ________ db = <conftest.DB object at 0xdeadbeef> def test_a1(db): > assert 0, db # to show value E AssertionError: <conftest.DB object at 0xdeadbeef> E assert 0 a/test_db.py:2: AssertionError _______ test_a2 ________ db = <conftest.DB object at 0xdeadbeef> def test_a2(db): > assert 0, db # to show value E AssertionError: <conftest.DB object at 0xdeadbeef> E assert 0 a/test_db2.py:2: AssertionError ======= 3 failed, 2 passed, 1 xfailed, 1 error in 0.12 seconds ========
The two test modules in the
a directory see the same
db fixture instance
while the one test in the sister-directory
b doesn’t see it. We could of course
also define a
db fixture in that sister directory’s
conftest.py file.
Note that each fixture is only instantiated if there is a test actually needing
it (unless you use "autouse" fixtures, which are always executed ahead of the first test
executing).
post-process test reports / failures¶
If you want to postprocess test reports and need access to the executing
environment you can implement a hook that gets called when the test
“report” object is about to be created. Here we write out all failing
test calls and also access a fixture (if it was used by the test) in
case you want to query/look at it during your post processing. In our
case we just write some information out to a
failures file:
# content of conftest.py
import pytest
import os.path

@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    # execute all other hooks to obtain the report object
    outcome = yield
    rep = outcome.get_result()

    # we only look at actual failing test calls, not setup/teardown
    if rep.when == "call" and rep.failed:
        mode = "a" if os.path.exists("failures") else "w"
        with open("failures", mode) as f:
            # let's also access a fixture for the fun of it
            if "tmpdir" in item.fixturenames:
                extra = " (%s)" % item.funcargs["tmpdir"]
            else:
                extra = ""
            f.write(rep.nodeid + extra + "\n")
if you then have failing tests:
# content of test_module.py
def test_fail1(tmpdir):
    assert 0

def test_fail2():
    assert 0
and run them:
$ pytest test_module.py ======= test session starts ======== platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0 rootdir: $REGENDOC_TMPDIR, inifile: collected 2 items test_module.py FF ======= FAILURES ======== _______ test_fail1 ________ tmpdir = local('PYTEST_TMPDIR/test_fail10') def test_fail1(tmpdir): > assert 0 E assert 0 test_module.py:2: AssertionError _______ test_fail2 ________ def test_fail2(): > assert 0 E assert 0 test_module.py:4: AssertionError ======= 2 failed in 0.12 seconds ========
you will have a “failures” file which contains the failing test ids:
$ cat failures test_module.py::test_fail1 (PYTEST_TMPDIR/test_fail10) test_module.py::test_fail2
Making test result information available in fixtures¶
If you want to make test result reports available in fixture finalizers here is a little example implemented via a local plugin:
# content of conftest.py
import pytest

@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    # execute all other hooks to obtain the report object
    outcome = yield
    rep = outcome.get_result()

    # set a report attribute for each phase of a call, which can
    # be "setup", "call", "teardown"
    setattr(item, "rep_" + rep.when, rep)

@pytest.fixture
def something(request):
    yield
    # request.node is an "item" because we use the default "function" scope
    if request.node.rep_setup.failed:
        print ("setting up a test failed!", request.node.nodeid)
    elif request.node.rep_setup.passed:
        if request.node.rep_call.failed:
            print ("executing test failed", request.node.nodeid)
if you then have failing tests:
# content of test_module.py
import pytest

@pytest.fixture
def other():
    assert 0

def test_setup_fails(something, other):
    pass

def test_call_fails(something):
    assert 0

def test_fail2():
    assert 0
and run it:
$ pytest -s test_module.py ======= test session starts ======== platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0 rootdir: $REGENDOC_TMPDIR, inifile: collected 3 items test_module.py Esetting up a test failed! test_module.py::test_setup_fails Fexecuting test failed test_module.py::test_call_fails F ======= ERRORS ======== _______ ERROR at setup of test_setup_fails ________ @pytest.fixture def other(): > assert 0 E assert 0 test_module.py:6: AssertionError ======= FAILURES ======== _______ test_call_fails ________ something = None def test_call_fails(something): > assert 0 E assert 0 test_module.py:12: AssertionError _______ test_fail2 ________ def test_fail2(): > assert 0 E assert 0 test_module.py:15: AssertionError ======= 2 failed, 1 error in 0.12 seconds ========
You’ll see that the fixture finalizers could use the precise reporting information.
Freezing pytest¶
If you freeze your application using a tool like PyInstaller in order to distribute it to your end-users, it is a good idea to also package your test runner and run your tests using the frozen application. This way packaging errors such as dependencies not being included into the executable can be detected early while also allowing you to send test files to users so they can run them in their machines, which can be useful to obtain more information about a hard to reproduce bug.
Fortunately recent
PyInstaller releases already have a custom hook
for pytest, but if you are using another tool to freeze executables
such as
cx_freeze or
py2exe, you can use
pytest.freeze_includes()
to obtain the full list of internal pytest modules. How to configure the tools
to find the internal modules varies from tool to tool, however.
Instead of freezing the pytest runner as a separate executable, you can make your frozen program work as the pytest runner by some clever argument handling during program startup. This allows you to have a single executable, which is usually more convenient.
# contents of app_main.py
import sys

if len(sys.argv) > 1 and sys.argv[1] == '--pytest':
    import pytest
    sys.exit(pytest.main(sys.argv[2:]))
else:
    # normal application execution: at this point argv can be parsed
    # by your argument-parsing library of choice as usual
    ...

This makes it possible to invoke the frozen application with app_main --pytest <pytest arguments> to run its bundled tests.
Parametrizing tests¶
pytest makes it easy to parametrize test functions.
For basic docs, see Parametrizing fixtures and test functions.
In the following we provide some examples using the builtin mechanisms.
Generating parameters combinations, depending on command line¶
Let’s say we want to execute a test with different computation parameters and the parameter range shall be determined by a command line argument. Let’s first write a simple (do-nothing) computation test:
# content of test_compute.py
def test_compute(param1):
    assert param1 < 4
Now we add a test configuration like this:
# content of conftest.py
def pytest_addoption(parser):
    parser.addoption("--all", action="store_true",
        help="run all combinations")

def pytest_generate_tests(metafunc):
    if 'param1' in metafunc.fixturenames:
        if metafunc.config.option.all:
            end = 5
        else:
            end = 2
        metafunc.parametrize("param1", range(end))
This means that we only run 2 tests if we do not pass
--all:
$ pytest -q test_compute.py .. 2 passed in 0.12 seconds
We run only two computations, so we see two dots. Let's run the full monty:
$ pytest -q --all ....F ======= FAILURES ======== _______ test_compute[4] ________ param1 = 4 def test_compute(param1): > assert param1 < 4 E assert 4 < 4 test_compute.py:3: AssertionError 1 failed, 4 passed in 0.12 seconds
As expected when running the full range of
param1 values
we’ll get an error on the last one.
Different options for test IDs¶
pytest will build a string that is the test ID for each set of values in a parametrized test. The following module defines the same test three times with different ways of specifying the IDs:
# content of test_time.py
import pytest
from datetime import datetime, timedelta

testdata = [
    (datetime(2001, 12, 12), datetime(2001, 12, 11), timedelta(1)),
    (datetime(2001, 12, 11), datetime(2001, 12, 12), timedelta(-1)),
]

@pytest.mark.parametrize("a,b,expected", testdata)
def test_timedistance_v0(a, b, expected):
    diff = a - b
    assert diff == expected

@pytest.mark.parametrize("a,b,expected", testdata, ids=["forward", "backward"])
def test_timedistance_v1(a, b, expected):
    diff = a - b
    assert diff == expected

def idfn(val):
    if isinstance(val, (datetime,)):
        # note this wouldn't show any hours/minutes/seconds
        return val.strftime('%Y%m%d')

@pytest.mark.parametrize("a,b,expected", testdata, ids=idfn)
def test_timedistance_v2(a, b, expected):
    diff = a - b
    assert diff == expected
In
test_timedistance_v0, we let pytest generate the test IDs.
In
test_timedistance_v1, we specified
ids as a list of strings which were
used as the test IDs. These are succinct, but can be a pain to maintain.
In
test_timedistance_v2, we specified
ids as a function that can generate a
string representation to make part of the test ID. So our
datetime values use the
label generated by
idfn, but because we didn’t generate a label for
timedelta
objects, they are still using the default pytest representation:
$ pytest test_time.py --collect-only ======= test session starts ======== platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0 rootdir: $REGENDOC_TMPDIR, inifile: collected 6 items <Module 'test_time.py'> <Function 'test_timedistance_v0[a0-b0-expected0]'> <Function 'test_timedistance_v0[a1-b1-expected1]'> <Function 'test_timedistance_v1[forward]'> <Function 'test_timedistance_v1[backward]'> <Function 'test_timedistance_v2[20011212-20011211-expected0]'> <Function 'test_timedistance_v2[20011211-20011212-expected1]'> ======= no tests ran in 0.12 seconds ========
A quick port of “testscenarios”¶
Here is a quick port to run tests configured with test scenarios,
an add-on from Robert Collins for the standard unittest framework:
# content of test_scenarios.py
def pytest_generate_tests(metafunc):
    idlist = []
    argvalues = []
    for scenario in metafunc.cls.scenarios:
        idlist.append(scenario[0])
        items = scenario[1].items()
        argnames = [x[0] for x in items]
        argvalues.append(([x[1] for x in items]))
    metafunc.parametrize(argnames, argvalues, ids=idlist, scope="class")

scenario1 = ('basic', {'attribute': 'value'})
scenario2 = ('advanced', {'attribute': 'value2'})

class TestSampleWithScenarios:
    scenarios = [scenario1, scenario2]

    def test_demo1(self, attribute):
        assert isinstance(attribute, str)

    def test_demo2(self, attribute):
        assert isinstance(attribute, str)
this is a fully self-contained example which you can run with:
$ pytest test_scenarios.py ======= test session starts ======== platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0 rootdir: $REGENDOC_TMPDIR, inifile: collected 4 items test_scenarios.py .... ======= 4 passed in 0.12 seconds ========
If you just collect tests you’ll also nicely see ‘advanced’ and ‘basic’ as variants for the test function:
$ pytest --collect-only test_scenarios.py ======= test session starts ======== platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0 rootdir: $REGENDOC_TMPDIR, inifile: collected 4 items <Module 'test_scenarios.py'> <Class 'TestSampleWithScenarios'> <Instance '()'> <Function 'test_demo1[basic]'> <Function 'test_demo2[basic]'> <Function 'test_demo1[advanced]'> <Function 'test_demo2[advanced]'> ======= no tests ran in 0.12 seconds ========
Note that we told
metafunc.parametrize() that your scenario values
should be considered class-scoped. With pytest-2.3 this leads to a
resource-based ordering.
Deferring the setup of parametrized resources¶
The parametrization of test functions happens at collection
time. It is a good idea to setup expensive resources like DB
connections or subprocesses only when the actual test is run.
Here is a simple example how you can achieve that, first
the actual test requiring a
db object:
# content of test_backends.py
import pytest

def test_db_initialized(db):
    # a dummy test
    if db.__class__.__name__ == "DB2":
        pytest.fail("deliberately failing for demo purposes")
We can now add a test configuration that generates two invocations of
the
test_db_initialized function and also implements a factory that
creates a database object for the actual test invocations:
# content of conftest.py
import pytest

def pytest_generate_tests(metafunc):
    if 'db' in metafunc.fixturenames:
        metafunc.parametrize("db", ['d1', 'd2'], indirect=True)

class DB1:
    "one database object"

class DB2:
    "alternative database object"

@pytest.fixture
def db(request):
    if request.param == "d1":
        return DB1()
    elif request.param == "d2":
        return DB2()
    else:
        raise ValueError("invalid internal test config")
Let's first see how it looks at collection time:
$ pytest test_backends.py --collect-only ======= test session starts ======== platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0 rootdir: $REGENDOC_TMPDIR, inifile: collected 2 items <Module 'test_backends.py'> <Function 'test_db_initialized[d1]'> <Function 'test_db_initialized[d2]'> ======= no tests ran in 0.12 seconds ========
And then when we run the test:
$ pytest -q test_backends.py .F ======= FAILURES ======== _______ test_db_initialized[d2] ________ db = <conftest.DB2 object at 0xdeadbeef> def test_db_initialized(db): # a dummy test if db.__class__.__name__ == "DB2": > pytest.fail("deliberately failing for demo purposes") E Failed: deliberately failing for demo purposes test_backends.py:6: Failed 1 failed, 1 passed in 0.12 seconds
The first invocation with
db == "DB1" passed while the second with
db == "DB2" failed. Our
db fixture function has instantiated each of the DB values during the setup phase while the
pytest_generate_tests hook generated the two corresponding calls to
test_db_initialized during the collection phase.
Apply indirect on particular arguments¶
Very often parametrization uses more than one argument name. You can apply the
indirect
parameter to particular arguments only. This is done by passing a list or tuple of
arguments’ names to
indirect. In the example below there is a function
test_indirect which uses
two fixtures:
x and
y. Here we give to indirect the list, which contains the name of the
fixture
x. The indirect parameter will be applied to this argument only, and the value
a
will be passed to the respective fixture function:
# content of test_indirect_list.py
import pytest

@pytest.fixture(scope='function')
def x(request):
    return request.param * 3

@pytest.fixture(scope='function')
def y(request):
    return request.param * 2

@pytest.mark.parametrize('x, y', [('a', 'b')], indirect=['x'])
def test_indirect(x, y):
    assert x == 'aaa'
    assert y == 'b'
The result of this test will be successful:
$ pytest test_indirect_list.py --collect-only ======= test session starts ======== platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0 rootdir: $REGENDOC_TMPDIR, inifile: collected 1 items <Module 'test_indirect_list.py'> <Function 'test_indirect[a-b]'> ======= no tests ran in 0.12 seconds ========
Parametrizing test methods through per-class configuration¶
Here is an example
pytest_generate_tests function implementing a
parametrization scheme similar to Michael Foord’s unittest
parametrizer but in a lot less code:
# content of ./test_parametrize.py
import pytest

def pytest_generate_tests(metafunc):
    # called once per each test function
    funcarglist = metafunc.cls.params[metafunc.function.__name__]
    argnames = sorted(funcarglist[0])
    metafunc.parametrize(argnames, [[funcargs[name] for name in argnames]
            for funcargs in funcarglist])

class TestClass:
    # a map specifying multiple argument sets for a test method
    params = {
        'test_equals': [dict(a=1, b=2), dict(a=3, b=3), ],
        'test_zerodivision': [dict(a=1, b=0), ],
    }

    def test_equals(self, a, b):
        assert a == b

    def test_zerodivision(self, a, b):
        pytest.raises(ZeroDivisionError, "a/b")
Our test generator looks up a class-level definition which specifies which argument sets to use for each test function. Let’s run it:
$ pytest -q F.. ======= FAILURES ======== _______ TestClass.test_equals[1-2] ________ self = <test_parametrize.TestClass object at 0xdeadbeef>, a = 1, b = 2 def test_equals(self, a, b): > assert a == b E assert 1 == 2 test_parametrize.py:18: AssertionError 1 failed, 2 passed in 0.12 seconds
Indirect parametrization with multiple fixtures¶
Here is a stripped down real-life example of using parametrized
testing for testing serialization of objects between different python
interpreters. We define a
test_basic_objects function which
is to be run with different sets of arguments for its three arguments:
python1: first python interpreter, run to pickle-dump an object to a file
python2: second interpreter, run to pickle-load an object from a file
obj: object to be dumped/loaded
""" module containing a parametrized tests testing cross-python serialization via the pickle module. """ import py import pytest import _pytest._code pythonlist = ['python2.6', 'python2.7', 'python3.4', 'python3.5'] @pytest.fixture(params=pythonlist) def python1(request, tmpdir): picklefile = tmpdir.join("data.pickle") return Python(request.param, picklefile) @pytest.fixture(params=pythonlist) def python2(request, python1): return Python(request.param, python1.picklefile) class Python: def __init__(self, version, picklefile): self.pythonpath = py.path.local.sysfind(version) if not self.pythonpath: pytest.skip("%r not found" %(version,)) self.picklefile = picklefile def dumps(self, obj): dumpfile = self.picklefile.dirpath("dump.py") dumpfile.write(_pytest._code.Source(""" import pickle f = open(%r, 'wb') s = pickle.dump(%r, f, protocol=2) f.close() """ % (str(self.picklefile), obj))) py.process.cmdexec("%s %s" %(self.pythonpath, dumpfile)) def load_and_is_true(self, expression): loadfile = self.picklefile.dirpath("load.py") loadfile.write(_pytest._code.Source(""" import pickle f = open(%r, 'rb') obj = pickle.load(f) f.close() res = eval(%r) if not res: raise SystemExit(1) """ % (str(self.picklefile), expression))) print (loadfile) py.process.cmdexec("%s %s" %(self.pythonpath, loadfile)) @pytest.mark.parametrize("obj", [42, {}, {1:3},]) def test_basic_objects(python1, python2, obj): python1.dumps(obj) python2.load_and_is_true("obj == %s" % obj)
Running it results in some skips if we don't have all the python interpreters installed and otherwise runs all combinations (4 interpreters times 4 interpreters times 3 objects to serialize/deserialize):
$ pytest -rs -q multipython.py
sssssssssssssss.........sss.........sss.........
======= short test summary info ========
SKIP [21] $REGENDOC_TMPDIR/CWD/multipython.py:23: 'python2.6' not found
27 passed, 21 skipped in 0.12 seconds
Indirect parametrization of optional implementations/imports¶
If you want to compare the outcomes of several implementations of a given API, you can write test functions that receive the already imported implementations and get skipped in case the implementation is not importable/available. Let’s say we have a “base” implementation and the other (possibly optimized ones) need to provide similar results:
# content of conftest.py
import pytest

@pytest.fixture(scope="session")
def basemod(request):
    return pytest.importorskip("base")

@pytest.fixture(scope="session", params=["opt1", "opt2"])
def optmod(request):
    return pytest.importorskip(request.param)
And then a base implementation of a simple function:
# content of base.py
def func1():
    return 1
And an optimized version:
# content of opt1.py
def func1():
    return 1.0001
And finally a little test module:
# content of test_module.py
def test_func1(basemod, optmod):
    assert round(basemod.func1(), 3) == round(optmod.func1(), 3)
If you run this with reporting for skips enabled:
$ pytest -rs test_module.py ======= test session starts ======== platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0 rootdir: $REGENDOC_TMPDIR, inifile: collected 2 items test_module.py .s ======= short test summary info ======== SKIP [1] $REGENDOC_TMPDIR/conftest.py:10: could not import 'opt2' ======= 1 passed, 1 skipped in 0.12 seconds ========
You'll see that we don't have an
opt2 module and thus the second test run
of our
test_func1 was skipped. A few notes:
- the fixture functions in the
conftest.pyfile are “session-scoped” because we don’t need to import more than once
- if you have multiple test functions and a skipped import, you will see the
[1] count increasing in the report
- you can put @pytest.mark.parametrize style parametrization on the test functions to parametrize input/output values as well.
Working with custom markers¶
Here are some examples using the Marking test functions with attributes mechanism.
Marking test functions and selecting them for a run¶
You can “mark” a test function with custom metadata like this:
# content of test_server.py
import pytest

@pytest.mark.webtest
def test_send_http():
    pass  # perform some webtest test for your app

def test_something_quick():
    pass

def test_another():
    pass

class TestClass:
    def test_method(self):
        pass
New in version 2.2.
You can then restrict a test run to only run tests marked with
webtest:
$ pytest -v -m webtest
(only the webtest-marked test_send_http is collected and run; the remaining tests are reported as deselected)
Or the inverse, running all tests except the webtest ones:
$ pytest -v -m "not webtest"
(runs the three tests that are not marked webtest)
Selecting tests based on their node ID¶
You can provide one or more node IDs as positional arguments to select only specified tests. This makes it easy to select tests based on their module, class, method, or function name:
$ pytest -v test_server.py::TestClass::test_method ======= test session starts ======== platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0 -- $PYTHON_PREFIX/bin/python3.5 cachedir: .cache rootdir: $REGENDOC_TMPDIR, inifile: collecting ... collected 5 items test_server.py::TestClass::test_method PASSED ======= 1 passed in 0.12 seconds ========
You can also select on the class:
$ pytest -v test_server.py::TestClass ... test_server.py::TestClass::test_method PASSED ======= 1 passed in 0.12 seconds ========
Or select multiple nodes:
$ pytest -v test_server.py::TestClass test_server.py::test_send_http ======= test session starts ======== platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0 -- $PYTHON_PREFIX/bin/python3.5 cachedir: .cache rootdir: $REGENDOC_TMPDIR, inifile: collecting ... collected 8 items test_server.py::TestClass::test_method PASSED test_server.py::test_send_http PASSED ======= 2 passed in 0.12 seconds ========
Note
Node IDs are of the form
module.py::class::method or
module.py::function. Node IDs control which tests are
collected, so
module.py::class will select all test methods
on the class. Nodes are also created for each parameter of a
parametrized fixture or test, so selecting a parametrized test
must include the parameter value, e.g.
module.py::function[param].
Node IDs for failing tests are displayed in the test summary info
when running pytest with the
-rf option. You can also
construct Node IDs from the output of
pytest --collectonly.
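For example, with a hypothetical parametrized test (module, function and id names below are illustrative, not taken from this document), a single parameter instance can be selected by its full node ID:

# content of test_expr.py -- illustrative only
import pytest

@pytest.mark.parametrize("expr", ["1+1", "2*2"], ids=["add", "mul"])
def test_eval(expr):
    assert eval(expr) > 0

$ pytest -q "test_expr.py::test_eval[add]"   # runs only the "add" instance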
Using
-k expr to select tests based on their name¶
You can use the
-k command line option to specify an expression
which implements a substring match on the test names instead of the
exact match on markers that
-m provides. This makes it easy to
select tests based on their names:
$ pytest -v -k http # running with the above defined example module
(selects test_send_http; the other tests are reported as deselected)
And you can also run all tests except the ones that match the keyword:
$ pytest -k "not send_http" ========
Or to select “http” and “quick” tests:
$ pytest -k "http or quick" test_server.py::test_something_quick PASSED ======= 2 tests deselected ======== ======= 2 passed, 2 deselected in 0.12 seconds ========
Note
If you are using expressions such as “X and Y” then both X and Y need to be simple non-keyword names. For example, “pass” or “from” will result in SyntaxErrors because “-k” evaluates the expression.
However, if the “-k” argument is a simple string, no such restrictions apply. Also “-k ‘not STRING’” has no restrictions. You can also specify numbers like “-k 1.3” to match tests which are parametrized with the float “1.3”.
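As an illustration of such an expression (reusing the test_server.py module above; the exact selection depends on your test names):

$ pytest -v -k "TestClass or send_http"   # selects test_method and test_send_http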
Registering markers¶
New in version 2.2.
Registering markers for your test suite is simple:
# content of pytest.ini
[pytest]
markers =
    webtest: mark a test as a webtest.
You can ask which markers exist for your test suite - the list includes our just defined
webtest markers:
$ pytest --markers @pytest.mark.webtest: mark a test as a webtest. .
For an example on how to add and work with markers from a plugin, see Custom marker and command line option to control test runs.
Note
It is recommended to explicitly register markers so that:
- there is one place in your test suite defining your markers
- asking for existing markers via
pytest --markers gives good output
- typos in function markers are treated as an error if you use the
--strict option. Future versions of
pytest are probably going to start treating non-registered markers as errors at some point.
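A minimal sketch of enabling that strict behaviour permanently, assuming the pytest.ini shown above already registers the webtest marker:

# content of pytest.ini
[pytest]
addopts = --strict
markers =
    webtest: mark a test as a webtest.

With this in place, a typo such as @pytest.mark.webtset in a test module makes the run error out instead of silently introducing an unregistered marker.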
Marking whole classes or modules¶
You may use
pytest.mark decorators with classes to apply markers to all of
its test methods:
# content of test_mark_classlevel.py
import pytest

@pytest.mark.webtest
class TestClass:
    def test_startup(self):
        pass
    def test_startup_and_more(self):
        pass
This is equivalent to directly applying the decorator to the two test functions.
To remain backward-compatible with Python 2.4 you can also set a
pytestmark attribute on a TestClass like this:
import pytest class TestClass: pytestmark = pytest.mark.webtest
or if you need to use multiple markers you can use a list:
import pytest class TestClass: pytestmark = [pytest.mark.webtest, pytest.mark.slowtest]
You can also set a module level marker:
import pytest pytestmark = pytest.mark.webtest
in which case it will be applied to all functions and methods defined in the module.
Marking individual tests when using parametrize¶
When using parametrize, applying a mark will make it apply to each individual test. However it is also possible to apply a marker to an individual test instance:
import pytest

@pytest.mark.foo
@pytest.mark.parametrize(("n", "expected"), [
    (1, 2),
    pytest.mark.bar((1, 3)),
    (2, 3),
])
def test_increment(n, expected):
    assert n + 1 == expected
In this example the mark “foo” will apply to each of the three tests, whereas the “bar” mark is only applied to the second test. Skip and xfail marks can also be applied in this way, see Skip/xfail with parametrize.
Note
If the data you are parametrizing happen to be single callables, you need to be careful when marking these items. pytest.mark.xfail(my_func) won’t work because it’s also the signature of a function being decorated. To resolve this ambiguity, you need to pass a reason argument: pytest.mark.xfail(func_bar, reason=”Issue#7”).
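For illustration, here is a minimal sketch of that situation with two hypothetical callables used as parametrized data (the names and the failing behaviour are made up for the example):

import pytest

def func_ok():
    return 1

def func_bar():
    raise NotImplementedError("known problem, see Issue#7")

@pytest.mark.parametrize("func", [
    func_ok,
    # the reason argument disambiguates this from decorating func_bar directly
    pytest.mark.xfail(func_bar, reason="Issue#7"),
])
def test_calls(func):
    assert func() == 1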
Custom marker and command line option to control test runs¶
Plugins can provide custom markers and implement specific behaviour based on them. This is a self-contained example which adds a command line option and a parametrized test function marker to run tests specified via named environments:
# content of conftest.py

import pytest

def pytest_addoption(parser):
    parser.addoption("-E", action="store", metavar="NAME",
        help="only run tests matching the environment NAME.")

def pytest_configure(config):
    # register an additional marker
    config.addinivalue_line("markers",
        "env(name): mark test to run only on named environment")

def pytest_runtest_setup(item):
    envmarker = item.get_marker("env")
    if envmarker is not None:
        envname = envmarker.args[0]
        if envname != item.config.getoption("-E"):
            pytest.skip("test requires env %r" % envname)
A test file using this local plugin:
# content of test_someenv.py import pytest @pytest.mark.env("stage1") def test_basic_db_operation(): pass
and an example invocation specifying a different environment than what the test needs:
$ pytest -E stage2 ======= test session starts ======== platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0 rootdir: $REGENDOC_TMPDIR, inifile: collected 1 items test_someenv.py s ======= 1 skipped in 0.12 seconds ========
and here is one that specifies exactly the environment needed:
$ pytest -E stage1 ======= test session starts ======== platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0 rootdir: $REGENDOC_TMPDIR, inifile: collected 1 items test_someenv.py . ======= 1 passed in 0.12 seconds ========
The
--markers option always gives you a list of available markers:
$ pytest --markers @pytest.mark.env(name): mark test to run only on named environment .
Reading markers which were set from multiple places¶
If you are heavily using markers in your test suite you may encounter the case where a marker is applied several times to a test function. From plugin code you can read over all such settings. Example:
# content of test_mark_three_times.py
import pytest

pytestmark = pytest.mark.glob("module", x=1)

@pytest.mark.glob("class", x=2)
class TestClass:
    @pytest.mark.glob("function", x=3)
    def test_something(self):
        pass
Here we have the marker “glob” applied three times to the same test function. From a conftest file we can read it like this:
# content of conftest.py
import sys

def pytest_runtest_setup(item):
    g = item.get_marker("glob")
    if g is not None:
        for info in g:
            print ("glob args=%s kwargs=%s" % (info.args, info.kwargs))
            sys.stdout.flush()
Let’s run this without capturing output and see what we get:
$ pytest -q -s glob args=('function',) kwargs={'x': 3} glob args=('class',) kwargs={'x': 2} glob args=('module',) kwargs={'x': 1} . 1 passed in 0.12 seconds
marking platform specific tests with pytest¶
Consider you have a test suite which marks tests for particular platforms,
namely
pytest.mark.darwin,
pytest.mark.win32 etc. and you
also have tests that run on all platforms and have no specific
marker. If you now want to have a way to only run the tests
for your particular platform, you could use the following plugin:
# content of conftest.py
#
import sys
import pytest

ALL = set("darwin linux win32".split())

def pytest_runtest_setup(item):
    if isinstance(item, item.Function):
        plat = sys.platform
        if not item.get_marker(plat):
            if ALL.intersection(item.keywords):
                pytest.skip("cannot run on platform %s" % (plat))
then tests will be skipped if they were specified for a different platform. Let’s do a little test file to show how this looks like:
# content of test_plat.py import pytest @pytest.mark.darwin def test_if_apple_is_evil(): pass @pytest.mark.linux def test_if_linux_works(): pass @pytest.mark.win32 def test_if_win32_crashes(): pass def test_runs_everywhere(): pass
then you will see two tests skipped and two executed tests as expected:
$ pytest -rs # this option reports skip reasons ======= test session starts ======== platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0 rootdir: $REGENDOC_TMPDIR, inifile: collected 4 items test_plat.py s.s. ======= short test summary info ======== SKIP [2] $REGENDOC_TMPDIR/conftest.py:12: cannot run on platform linux ======= 2 passed, 2 skipped in 0.12 seconds ========
Note that if you specify a platform via the marker-command line option like this:
$ pytest -m linux ======= test session starts ======== platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0 rootdir: $REGENDOC_TMPDIR, inifile: collected 4 items test_plat.py . ======= 3 tests deselected ======== ======= 1 passed, 3 deselected in 0.12 seconds ========
then the unmarked-tests will not be run. It is thus a way to restrict the run to the specific tests.
Automatically adding markers based on test names¶
If you have a test suite where test function names indicate a certain
type of test, you can implement a hook that automatically defines
markers so that you can use the
-m option with it. Let’s look
at this test module:
# content of test_module.py def test_interface_simple(): assert 0 def test_interface_complex(): assert 0 def test_event_simple(): assert 0 def test_something_else(): assert 0
We want to dynamically define two markers and can do it in a
conftest.py plugin:
# content of conftest.py
import pytest

def pytest_collection_modifyitems(items):
    for item in items:
        if "interface" in item.nodeid:
            item.add_marker(pytest.mark.interface)
        elif "event" in item.nodeid:
            item.add_marker(pytest.mark.event)
We can now use the
-m option to select one set:
$ pytest -m interface --tb=short ======= test session starts ======== platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0 rootdir: $REGENDOC_TMPDIR, inifile: collected 4 items test_module.py FF ======= FAILURES ======== _______ test_interface_simple ________ test_module.py:3: in test_interface_simple assert 0 E assert 0 _______ test_interface_complex ________ test_module.py:6: in test_interface_complex assert 0 E assert 0 ======= 2 tests deselected ======== ======= 2 failed, 2 deselected in 0.12 seconds ========
or to select both “event” and “interface” tests:
$ pytest -m "interface or event" --tb=short ======= test session starts ======== platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0 rootdir: $REGENDOC_TMPDIR, inifile: collected 4 items test_module.py FFF ======= FAILURES ======== _______ test_interface_simple ________ test_module.py:3: in test_interface_simple assert 0 E assert 0 _______ test_interface_complex ________ test_module.py:6: in test_interface_complex assert 0 E assert 0 _______ test_event_simple ________ test_module.py:9: in test_event_simple assert 0 E assert 0 ======= 1 tests deselected ======== ======= 3 failed, 1 deselected in 0.12 seconds ========
A session-fixture which can look at all collected tests¶
A session-scoped fixture effectively has access to all
collected test items. Here is an example of a fixture
function which walks all collected tests and looks
if their test class defines a
callme method and
calls it:
# content of conftest.py import pytest @pytest.fixture(scope="session", autouse=True) def callattr_ahead_of_alltests(request): print ("callattr_ahead_of_alltests called") seen = set(:
# content of test_module.py") # works with unittest as well ... import unittest class SomeTest(unittest.TestCase): @classmethod def callme(self): print ("SomeTest callme called") def test_unit1(self): print ("test_unit1 method called")
If you run this without output capturing:
$ pytest -q -s test_module.py callattr_ahead_of_alltests called callme called! callme other called SomeTest callme called test_method1 called .test_method1 called .test other .test_unit1 method called . 4 passed in 0.12 seconds
Changing standard (Python) test discovery¶
Ignore paths during test collection¶
You can easily ignore certain test directories and modules during collection
by passing the --ignore=path option on the command line; the option can be given multiple times.
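For orientation, the collection output below is consistent with a layout roughly like this (reconstructed for illustration; the original directory listing was not preserved, and the contents of tests/hello/ are unknown):

tests/
    example/test_example_01.py
    example/test_example_02.py
    example/test_example_03.py
    foobar/test_foobar_01.py
    foobar/test_foobar_02.py
    foobar/test_foobar_03.py
    hello/...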
Now if you invoke
pytest with
--ignore=tests/foobar/test_foobar_03.py --ignore=tests/hello/,
you will see that
pytest only collects test-modules, which do not match the patterns specified:
========= test session starts ========== platform darwin -- Python 2.7.10, pytest-2.8.2, py-1.4.30, pluggy-0.3.1 rootdir: $REGENDOC_TMPDIR, inifile: collected 5 items tests/example/test_example_01.py . tests/example/test_example_02.py . tests/example/test_example_03.py . tests/foobar/test_foobar_01.py . tests/foobar/test_foobar_02.py . ======= 5 passed in 0.02 seconds =======
Keeping duplicate paths specified from command line¶
Default behavior of
pytest is to ignore duplicate paths specified from the command line.
Example:
py.test path_a path_a ... collected 1 item ...
Just collect tests once.
To collect duplicate tests, use the
--keep-duplicates option on the cli.
Example:
py.test --keep-duplicates path_a path_a ... collected 2 items ...
Because the duplicate check works on directories, if you specify a single test file twice, pytest will still collect it twice, even when --keep-duplicates is not given.
Example:
py.test test_a.py test_a.py ... collected 2 items ...
Changing directory recursion¶
You can set the norecursedirs option in an ini-file (for example the pytest.ini in your project root directory) to keep pytest from recursing into certain directories, such as build or temporary directories, while collecting tests.
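A sketch of such a setting (the directory patterns shown are the usual illustration, not taken from this document):

# content of pytest.ini
[pytest]
norecursedirs = .svn _build tmp*

With this in place pytest will not recurse into typical subversion or sphinx-build directories or into any tmp-prefixed directory during collection.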
Changing naming conventions¶
You can configure different naming conventions by setting
the
python_files,
python_classes and
python_functions configuration options. Example:
# content of pytest.ini
# can also be defined in tox.ini or setup.cfg file, although the section
# name in setup.cfg files should be "tool:pytest"
[pytest]
python_files=check_*.py
python_classes=Check
python_functions=*_check
This would make
pytest look for tests in files that match the
check_*
.py glob-pattern,
Check prefixes in classes, and functions and methods
that match
*_check. For example, if we have:
# content of check_myapp.py class CheckMyApp: def simple_check(self): pass def complex_check(self): pass
then the test collection looks like this:
$ pytest --collect-only ======= test session starts ======== platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0 rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini collected 2 items <Module 'check_myapp.py'> <Class 'CheckMyApp'> <Instance '()'> <Function 'simple_check'> <Function 'complex_check'> ======= no tests ran in 0.12 seconds ========
Note
the
python_functions and
python_classes options have no effect
for
unittest.TestCase test discovery because pytest delegates
detection of test case methods to unittest code.
Interpreting cmdline arguments as Python packages¶
You can use the
--pyargs option to make
pytest try
interpreting arguments as python package names, deriving
their file system path and then running the test. For
example if you have unittest2 installed you can type:
pytest --pyargs unittest2.test.test_skipping -q
which would run the respective test module. Like with
other options, through an ini-file and the
addopts option you
can make this change more permanently:
# content of pytest.ini [pytest] addopts = --pyargs
Now a simple invocation of
pytest NAME will check
if NAME exists as an importable package/module and otherwise
treat it as a filesystem path.
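For instance, with a hypothetical installed package mypkg that ships its tests, the invocation below resolves the dotted name to its file system location and runs it (the package and module names are illustrative):

$ pytest --pyargs mypkg.tests.test_config -q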
Finding out what is collected¶
You can always peek at the collection tree without running tests like this:
. $ pytest --collect-only pythoncollection.py ======= test session starts ======== platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0 rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini collected 3 items <Module 'CWD/pythoncollection.py'> <Function 'test_function'> <Class 'TestClass'> <Instance '()'> <Function 'test_method'> <Function 'test_anothermethod'> ======= no tests ran in 0.12 seconds ========
customizing test collection to find all .py files¶
You can easily instruct
pytest to discover tests from every python file:
# content of pytest.ini [pytest] python_files = *.py
However, many projects will have a
setup.py which they don’t want to be imported. Moreover, there may be files that are only importable by a specific python version.
For such cases you can dynamically define files to be ignored by listing
them in a
conftest.py file:
# content of conftest.py import sys collect_ignore = ["setup.py"] if sys.version_info[0] > 2: collect_ignore.append("pkg/module_py2.py")
And then if you have a module file like this:
# content of pkg/module_py2.py def test_only_on_python2(): try: assert 0 except Exception, e: pass
and a setup.py dummy file like this:
# content of setup.py 0/0 # will raise exception if imported
then a pytest run on Python2 will find the one test and will leave out the setup.py file:
#$ pytest --collect-only ====== test session starts ====== platform linux2 -- Python 2.7.10, pytest-2.9.1, py-1.4.31, pluggy-0.3.1 rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini collected 1 items <Module 'pkg/module_py2.py'> <Function 'test_only_on_python2'> ====== no tests ran in 0.04 seconds ======
If you run with a Python3 interpreter both the one test and the setup.py file will be left out:
$ pytest --collect-only ======= test session starts ======== platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0 rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini collected 0 items ======= no tests ran in 0.12 seconds ========
Working with non-python tests¶
A basic example for specifying tests in Yaml files¶
Here is an example
conftest.py (extracted from Ali Afshar's special purpose pytest-yamlwsgi plugin). This
conftest.py will collect
test*.yml files and will execute the yaml-formatted content as custom tests:
# content of conftest.py
import pytest

def pytest_collect_file(parent, path):
    if path.ext == ".yml" and path.basename.startswith("test"):
        return YamlFile(path, parent)

class YamlFile(pytest.File):
    def collect(self):
        import yaml  # we need a yaml parser, e.g. PyYAML
        raw = yaml.safe_load(self.fspath.open())
        for name, spec in sorted(raw.items()):
            yield YamlItem(name, self, spec)

class YamlItem(pytest.Item):
    def __init__(self, name, parent, spec):
        super(YamlItem, self).__init__(name, parent)
        self.spec = spec

    def runtest(self):
        for name, value in sorted(self.spec.items()):
            # some custom test execution (dumb example follows)
            if name != value:
                raise YamlException(self, name, value)

    def repr_failure(self, excinfo):
        """ called when self.runtest() raises an exception. """
        if isinstance(excinfo.value, YamlException):
            return "\n".join([
                "usecase execution failed",
                "   spec failed: %r: %r" % excinfo.value.args[1:3],
                "   no further details known at this point."
            ])

    def reportinfo(self):
        return self.fspath, 0, "usecase: %s" % self.name

class YamlException(Exception):
    """ custom exception for error reporting. """
You can create a simple example file:
# test_simple.yml
ok:
    sub1: sub1
hello:
    world: world
    some: other
and if you installed PyYAML or a compatible YAML-parser you can now execute the test specification:
nonpython $ pytest test_simple.yml ======= test session starts ======== platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0 rootdir: $REGENDOC_TMPDIR/nonpython, inifile: collected 2 items test_simple.yml F. ======= FAILURES ======== _______ usecase: hello ________ usecase execution failed spec failed: 'some': 'other' no further details known at this point. ======= 1 failed, 1 passed in 0.12 seconds ========
You get one dot for the passing
sub1: sub1 check and one failure.
Obviously in the above
conftest.py you’ll want to implement a more
interesting interpretation of the yaml-values. You can easily write
your own domain specific testing language this way.
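As a sketch of such an extension (purely illustrative and not part of the plugin above), runtest could instead treat every value as a Python expression that has to evaluate to a truthy result:

# illustrative variation of YamlItem.runtest from the conftest.py above
class ExprYamlItem(YamlItem):
    def runtest(self):
        for name, value in sorted(self.spec.items()):
            # the key is only a label; the value is evaluated as an expression
            if not eval(value, {}):
                raise YamlException(self, name, value)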
Note
repr_failure(excinfo) is called for representing test failures.
If you create custom collection nodes you can return an error
representation string of your choice. It
will be reported as a (red) string.
reportinfo() is used for representing the test location and is also
consulted when reporting in
verbose mode:
nonpython $ pytest -v ======= test session starts ======== platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0 -- $PYTHON_PREFIX/bin/python3.5 cachedir: .cache rootdir: $REGENDOC_TMPDIR/nonpython, inifile: collecting ... collected 2 items test_simple.yml::hello FAILED test_simple.yml::ok PASSED ======= FAILURES ======== _______ usecase: hello ________ usecase execution failed spec failed: 'some': 'other' no further details known at this point. ======= 1 failed, 1 passed in 0.12 seconds ========
While developing your custom test collection and execution it’s also interesting to just look at the collection tree:
nonpython $ pytest --collect-only ======= test session starts ======== platform linux -- Python 3.5.2, pytest-3.0.7, py-1.4.32, pluggy-0.4.0 rootdir: $REGENDOC_TMPDIR/nonpython, inifile: collected 2 items <YamlFile 'test_simple.yml'> <YamlItem 'hello'> <YamlItem 'ok'> ======= no tests ran in 0.12 seconds ========
License¶
Distributed under the terms of the MIT license, pytest is free and open source software.
The MIT License (MIT) Copyright (c) 2004-2016 Holger Krekel.
Contribution getting started¶
Contributions are highly welcomed and appreciated. Every little help counts, so do not hesitate!
Contribution links
Feature requests and feedback¶
Do you like pytest? Share some love on Twitter or in your blog posts!
We’d also like to hear about your propositions and suggestions. Feel free to submit them as issues and:
- Explain in detail how they should work.
- Keep the scope as narrow as possible. This will make it easier to implement.
Report bugs¶
Report bugs for pytest in the issue tracker.
If you are reporting a bug, please include:
- Your operating system name and version.
- Any details about your local setup that might be helpful in troubleshooting, specifically Python interpreter version, installed libraries and pytest version.
- Detailed steps to reproduce the bug.
If you can write a demonstration test that currently fails but should pass (xfail), that is a very useful commit to make as well, even if you can’t find how to fix the bug yet.
Fix bugs¶
Look through the GitHub issues for bugs. Here is a filter you can use:
Talk to developers to find out how you can fix specific bugs.
Don’t forget to check the issue trackers of your favourite plugins, too!
Implement features¶
Look through the GitHub issues for enhancements. Here is a filter you can use:
Talk to developers to find out how you can implement specific features.
Write documentation¶
Pytest could always use more documentation. What exactly is needed?
- More complementary documentation. Have you perhaps found something unclear?
- Documentation translations. We currently have only English.
- Docstrings. There can never be too many of them.
- Blog posts, articles and such – they’re all very appreciated.
You can also edit documentation files directly in the GitHub web interface, without using a local copy. This can be convenient for small fixes.
Note
Build the documentation locally with the following command:
$ tox -e docs
The built documentation should be available in the
doc/en/_build/.
Where ‘en’ refers to the documentation language.
Submitting Plugins to pytest-dev¶
Pytest development of the core, some plugins and support code happens
in repositories living under the
pytest-dev organisations:
All pytest-dev Contributors team members have write access to all contained repositories. Pytest core and plugins are generally developed using pull requests to respective repositories.
The objectives of the
pytest-dev organisation are:
- Having a central location for popular pytest plugins
- Sharing some of the maintenance responsibility (in case a maintainer no longer wishes to maintain a plugin)
You can submit your plugin by subscribing to the pytest-dev mail list and writing a mail pointing to your existing pytest plugin repository which must have the following:
- PyPI presence with
If no contributor strongly objects and two agree, the repository can then be
transferred to the
pytest-dev organisation.
Here’s a rundown of how a repository transfer usually proceeds
(using a repository named
joedoe/pytest-xyz as example):
- joedoe transfers repository ownership to the pytest-dev administrator calvin.
- calvin creates the pytest-xyz-admin and pytest-xyz-developers teams, inviting joedoe to both as maintainer.
- calvin transfers the repository to pytest-dev and configures team access:
  - pytest-xyz-admin: admin access;
  - pytest-xyz-developers: write access.
The plugin author remains the maintainer; pytest-dev administrators do not interfere or take ownership in any way, except in rare cases
where someone becomes unresponsive after months of contact attempts.
As stated, the objective is to share maintenance and avoid “plugin-abandon”.
Preparing Pull Requests on GitHub¶
Note
What is a “pull request”? It informs project’s core developers about the changes you want to review and merge. Pull requests are stored on GitHub servers. Once you send a pull request, we can discuss its potential modifications and even add more commits to it later on.
There’s an excellent tutorial on how Pull Requests work in the GitHub Help Center, but here is a simple overview:
Fork the pytest GitHub repository. It’s fine to use
pytest as your fork repository name because it will live under your user.
Clone your fork locally using git and create a branch:
$ git clone git@github.com:YOUR_GITHUB_USERNAME/pytest.git
$ cd pytest
# now, create a branch off "master" (for a bugfix) or off "features" (for a new feature):
$ git checkout -b your-branch-name master
Then install tox ($ pip install tox) so it can create the virtualenvs and run the test suite; you need Python 2.7 and 3.5 available in your system. Now running tests is as simple as issuing this command:
$ tox -e linting,py27,py35
This command will run tests via the “tox” tool against Python 2.7 and 3.5 and also perform “lint” coding-style checks.
You can now edit your local working copy.
You can now make the changes you want and run the tests again as necessary.
To run tests on Python 2.7 and pass options to pytest (e.g. enter pdb on failure) to pytest you can do:
$ tox -e py27 -- --pdb
Or to only run tests in a particular test module on Python 3.5:
$ tox -e py35 -- testing/test_config.py
Commit and push once your tests pass and you are happy with your change(s):
$ git commit -a -m "<commit message>" $ git push -u
Make sure you add a message to
CHANGELOG.rst and add yourself to
AUTHORS. If you are unsure about either of these steps, submit your pull request and we’ll help you fix it up.
Finally, submit a pull request through the GitHub website using this data:
head-fork: YOUR_GITHUB_USERNAME/pytest compare: your-branch-name base-fork: pytest-dev/pytest base: master # if it's a bugfix base: features # if it's a feature
Talks and Tutorials¶
Talks and blog postings¶
- pytest fixtures: explicit, modular, scalable
Project examples¶
Here are some examples of projects using
pytest (please send notes via Contact channels):
- PyPy, Python with a JIT compiler, running over 21000 tests
- the MoinMoin Wiki Engine
- sentry, realtime app-maintenance and exception tracking
- Astropy and affiliated packages
- tox, virtualenv/Hudson integration tool
- PIDA framework for integrated development
- Pacha configuration management in five minutes
- bbfreeze create standalone executables from Python scripts
- pdb++ a fancier version of PDB
- kss plugin timer
- Tandberg
- Shootq
- Stups department of Heinrich Heine University Duesseldorf
- cellzome
- Open End, Gothenborg
- Laboratory of Bioinformatics, Warsaw
- merlinux, Germany
- ESSS, Brazil
- many more ... (please be so kind to send a note via Contact channels)
See also Parametrizing tests for more examples.
Contact channels¶
- the pytest issue tracker and repository on bitbucket (including using git via gitifyhg)
- #pylib on irc.freenode.net IRC channel for random questions.
- private mail to Holger.Krekel at gmail com if you want to communicate sensitive issues
- merlinux.eu offers pytest and tox-related professional teaching and consulting.
Release announcements¶
pytest-3.0.7¶
pytest 3.0.7 has just been released to PyPI.
This is a bug-fix release, being a drop-in replacement. To upgrade:
pip install --upgrade pytest
The full changelog is available at.
Thanks to all who contributed to this release, among them:
- Anthony Sottile
- Barney Gale
- Bruno Oliveira
- Florian Bruhin
- Floris Bruynooghe
- Ionel Cristian Mărieș
- Katerina Koukiou
- NODA, Kai
- Omer Hadari
- Patrick Hayes
- Ran Benita
- Ronny Pfannschmidt
- Victor Uriarte
- Vidar Tonaas Fauske
- Ville Skyttä
- fbjorn
- mbyt
Happy testing, The pytest Development Team
pytest-3.0.6¶
pytest 3.0.6 has just been released to PyPI.
This is a bug-fix release, being a drop-in replacement. To upgrade:
pip install --upgrade pytest
The full changelog is available at.
Thanks to all who contributed to this release, among them:
- Andreas Pelme
- Bruno Oliveira
- Dmitry Malinovsky
- Eli Boyarski
- Jakub Wilk
- Jeff Widman
- Loïc Estève
- Luke Murphy
- Miro Hrončok
- Oscar Hellström
- Peter Heatwole
- Philippe Ombredanne
- Ronny Pfannschmidt
- Rutger Prins
- Stefan Scherfke
Happy testing, The pytest Development Team
pytest-3.0.5¶
pytest 3.0.5 has just been released to PyPI.
This is a bug-fix release, being a drop-in replacement. To upgrade:
pip install --upgrade pytest
The changelog is available at.
Thanks to all who contributed to this release, among them:
- Ana Vojnovic
- Bruno Oliveira
- Daniel Hahler
- Duncan Betts
- Igor Starikov
- Ismail
- Luke Murphy
- Ned Batchelder
- Ronny Pfannschmidt
- Sebastian Ramacher
- nmundar
Happy testing, The pytest Development Team
pytest-3.0.4¶
pytest 3.0
- Dan Wandschneider
- Florian Bruhin
- Georgy Dyuldin
- Grigorii Eremeev
- Jason R. Coombs
- Manuel Jacob
- Mathieu Clabaut
- Michael Seifert
- Nikolaus Rath
- Ronny Pfannschmidt
- Tom V
Happy testing, The pytest Development Team
pytest-3.0.3¶
pytest 3.0
- Florian Bruhin
- Floris Bruynooghe
- Huayi Zhang
- Lev Maximov
- Raquel Alegre
- Ronny Pfannschmidt
- Roy Williams
- Tyler Goodlet
- mbyt
Happy testing, The pytest Development Team
pytest-3.0.2¶
pytest 3.0.2 has just been released to PyPI.
This release fixes some regressions and bugs reported in version 3.0.1, being a drop-in replacement. To upgrade:
pip install --upgrade pytest
The changelog is available at.
Thanks to all who contributed to this release, among them:
- Ahn Ki-Wook
- Bruno Oliveira
- Florian Bruhin
- Jordan Guymon
- Raphael Pierzina
- Ronny Pfannschmidt
- mbyt
Happy testing, The pytest Development Team
pytest-3.0.1¶
pytest 3.0.1 has just been released to PyPI.
This release fixes some regressions reported in version 3.0.0, being a drop-in replacement. To upgrade:
pip install --upgrade pytest
The changelog is available at.
Thanks to all who contributed to this release, among them:
Adam Chainz Andrew Svetlov Bruno Oliveira Daniel Hahler Dmitry Dygalo Florian Bruhin Marcin Bachry Ronny Pfannschmidt matthiasha
Happy testing, The py.test Development Team
python testing sprint June 20th-26th 2016¶
pytest-2.9:.
pytest-2.9:
Bruno Oliveira Daniel Hahler Dmitry Malinovsky Florian Bruhin Floris Bruynooghe Matt Bachmann Ronny Pfannschmidt TomV Vladimir Bolshakov Zearin palaviv
Happy testing, The py.test Development Team
2.9.1 (compared to 2.9.0.
pytest-2.9.0:
Anatoly Bubenkov Bruno Oliveira Buck Golemon David Vierra Florian Bruhin Galaczi Endre Georgy Dyuldin Lukas Bednar Luke Murphy Marcin Biernat Matt Williams Michael Aquilina Raphael Pierzina Ronny Pfannschmidt Ryan Wooden Tiemo Kieft TomV holger krekel jab
Happy testing, The py.test Development Team
2.9.0 (compared to 2.8.7 strips
bprefixv isn’t used. Thanks @The-Compiler for the PR.
- --lf and --ff now support long names: --last-failed and --failed-first respectively. Thanks @MichaelAquilina for the PR.
Added expected exceptions to pytest.raises fail.
pytest-2.8.7¶
This is a hotfix release to solve a regression in the builtin monkeypatch plugin that got introduced in 2.8.6.:
Ronny Pfannschmidt
Happy testing, The py.test Development Team
pytest-2.8.6¶:
AMiT Kumar Bruno Oliveira Erik M. Bray Florian Bruhin Georgy Dyuldin Jeff Widman Kartik Singhal Loïc Estève Manu Phatak Peter Demin Rick van Hattem Ronny Pfannschmidt Ulrich Petri foxx
Happy testing, The py.test Development Team
2.8.6 (compared to 2.8.5.
pytest-2.8.5¶
pytest is a mature Python testing tool with more than a 1100 tests against itself, passing on many different interpreters and platforms. This release is supposed to be drop-in compatible to 2.8.4.
See below for the changes and see docs at:
As usual, you can upgrade from pypi via:
pip install -U pytest
Thanks to all who contributed to this release, among them:
Alex Gaynor aselus-hub Bruno Oliveira Ronny Pfannschmidt
Happy testing, The py.test Development Team
2.8.5 (compared to 2.8.4.
pytest-2.8.4 Jeff Widman Mehdy Khoshnoody Nicholas Chammas Ronny Pfannschmidt Tim Chan
Happy testing, The py.test Development Team
2.8.4 (compared to 2.8.3.
pytest-2.8.3: bug fixes Gabe Hollombe Gabriel Reis Hartmut Goebel John Vandenberg Lee Kamentsky Michael Birtwell Raphael Pierzina Ronny Pfannschmidt William Martin Stewart
Happy testing, The py.test Development Team
2.8.3 (compared to 2.8.2)
pytest-2.8.2: bug fixes¶
pytest is a mature Python testing tool with more than a 1100 tests against itself, passing on many different interpreters and platforms. This release is supposed to be drop-in compatible to 2.8.1.
See below for the changes and see docs at:
As usual, you can upgrade from pypi via:
pip install -U pytest
Thanks to all who contributed to this release, among them:
Bruno Oliveira Demian Brecht Florian Bruhin Ionel Cristian Mărieș Raphael Pierzina Ronny Pfannschmidt holger krekel
Happy testing, The py.test Development Team
2.8.2 (compared to 2.7.2.
pytest-2.7.2: bug fixes¶
pytest is a mature Python testing tool with more than a 1100 tests against itself, passing on many different interpreters and platforms. This release is supposed to be drop-in compatible to 2.7.1.
See below for the changes and see docs at:
As usual, you can upgrade from pypi via:
pip install -U pytest
Thanks to all who contributed to this release, among them:
Bruno Oliveira Floris Bruynooghe Punyashloka Biswal Aron Curzon Benjamin Peterson Thomas De Schampheleire Edison Gustavo Muenz Holger Krekel
Happy testing, The py.test Development Team
2.7.2 (compared to 2.7.1.
pytest-2.7.1: bug fixes¶
pytest is a mature Python testing tool with more than a 1100 tests against itself, passing on many different interpreters and platforms. This release is supposed to be drop-in compatible to 2.7.0.
See below for the changes and see docs at:
As usual, you can upgrade from pypi via:
pip install -U pytest
Thanks to all who contributed to this release, among them:
Bruno Oliveira Holger Krekel Ionel Maries Cristian Floris Bruynooghe
Happy testing, The py.test Development Team
2.7.1 (compared to 2.7.
pytest-2.7.0: fixes, features, speed improvements¶
pytest is a mature Python testing tool with more than a 1100 tests against itself, passing on many different interpreters and platforms. This release is supposed to be drop-in compatible to 2.6.X.
See below for the changes and see docs at:
As usual, you can upgrade from pypi via:
pip install -U pytest
Thanks to all who contributed, among them:
Anatoly Bubenkoff Floris Bruynooghe Brianna Laugher Eric Siegerman Daniel Hahler Charles Cloud Tom Viner Holger Peters Ldiary Translations almarklein
have fun, holger krekel
2.7.0 (compared to 2.6.4).
pytest-2.6.3: fixes and little improvements¶
pytest is a mature Python testing tool with more than a 1100 tests against itself, passing on many different interpreters and platforms. This release is drop-in compatible to 2.5.2 and 2.6.X. See below for the changes and see docs at:
As usual, you can upgrade from pypi via:
pip install -U pytest
Thanks to all who contributed, among them:
Floris Bruynooghe Oleg Sinyavskiy Uwe Schmitt Charles Cloud Wolfgang Schnerring
have fun, holger krekel
Changes 2.6.3¶
-.
pytest-2.6.2: few fixes and cx_freeze support¶
pytest is a mature Python testing tool with more than a 1100 tests against itself, passing on many different interpreters and platforms. This release is drop-in compatible to 2.5.2 and 2.6.X. It also brings support for including pytest with cx_freeze or similar freezing tools into your single-file app distribution. For details see the CHANGELOG below.
See docs at:
As usual, you can upgrade from pypi via:
pip install -U pytest
Thanks to all who contributed, among them:
Floris Bruynooghe Benjamin Peterson Bruno Oliveira
have fun, holger krekel
2.6.2¶
-.
pytest-2.6.1: fixes and new xfail feature¶
pytest is a mature Python testing tool with more than a 1100 tests against itself, passing on many different interpreters and platforms. The 2.6.1 release is drop-in compatible to 2.5.2 and actually fixes some regressions introduced with 2.6.0. It also brings a little feature to the xfail marker which now recognizes expected exceptions, see the CHANGELOG below.
See docs at:
As usual, you can upgrade from pypi via:
pip install -U pytest
Thanks to all who contributed, among them:
Floris Bruynooghe Bruno Oliveira Nicolas Delaby
have fun, holger krekel
Changes 2.6.1¶
-.
pytest-2.6.0: shorter tracebacks, new warning system, test runner compat¶
pytest is a mature Python testing tool with more than a 1000 tests against itself, passing on many different interpreters and platforms.
The 2.6.0 release should be drop-in backward compatible to 2.5.2 and fixes a number of bugs and brings some new features, mainly:
- shorter tracebacks by default: only the first (test function) entry and the last (failure location) entry are shown, the ones between only in “short” format. Use
--tb=long to get back the old behaviour of showing “long” entries everywhere.
- a new warning system which reports oddities during collection and execution. For example, ignoring collecting Test* classes with an
__init__ now produces a warning.
- various improvements to nose/mock/unittest integration
Note also that 2.6.0 departs with the “zero reported bugs” policy because it has been too hard to keep up with it, unfortunately. Instead we are for now rather bound to work on “upvoted” issues in the issue tracker.
See docs at:
As usual, you can upgrade from pypi via:
pip install -U pytest
Thanks to all who contributed, among them:
Benjamin Peterson Jurko Gospodnetić Floris Bruynooghe Marc Abramowitz Marc Schlaich Trevor Bekolay Bruno Oliveira Alex Groenholm
have fun, holger krekel
2.6.0¶
- fix issue537: Avoid importing old assertion reinterpretation code by default. Thanks Benjamin Peterson.
- Thanks Benjamin Peterson.
-.
- avoid importing “py.test” (an old alias module for “pytest”)
pytest-2.5.2: fixes¶
pytest is a mature Python testing tool with more than a 1000 tests against itself, passing on many different interpreters and platforms.
The 2.5.2 release fixes a few bugs with two maybe-bugs remaining and actively being worked on (and waiting for the bug reporter’s input). We also have a new contribution guide thanks to Piotr Banaszkiewicz and others.
See docs at:
As usual, you can upgrade from pypi via:
pip install -U pytest
Thanks to the following people who contributed to this release:
Anatoly Bubenkov Ronny Pfannschmidt Floris Bruynooghe Bruno Oliveira Andreas Pelme Jurko Gospodnetić Piotr Banaszkiewicz Simon Liedtke lakka Lukasz Balcerzak Philippe Muller Daniel Hahler
have fun, holger krekel
2.5.2¶
-
pytest-2.5.1: fixes and new home page styling¶
pytest is a mature Python testing tool with more than a 1000 tests against itself, passing on many different interpreters and platforms.
The 2.5.1 release maintains the “zero-reported-bugs” promise by fixing the three bugs reported since the last release a few days ago. It also features a new home page styling implemented by Tobias Bieniek, based on the flask theme from Armin Ronacher:
If you have anything more to improve styling and docs, we’d be very happy to merge further pull requests.
On the coding side, the release also contains a little enhancement to fixture decorators allowing to directly influence generation of test ids, thanks to Floris Bruynooghe. Other thanks for helping with this release go to Anatoly Bubenkoff and Ronny Pfannschmidt.
As usual, you can upgrade from pypi via:
pip install -U pytest
have fun and a nice remaining “bug-free” time of the year :) holger krekel
2.5.1¶
-.
pytest-2.5.0: now down to ZERO reported bugs!¶
pytest-2.5.0 is a big fixing release, the result of two community bug fixing days plus numerous additional works from many people and reporters. The release should be fully compatible to 2.4.2, existing plugins and test suites. We aim at maintaining this level of ZERO reported bugs because it’s no fun if your testing tool has bugs, is it? Under a condition, though: when submitting a bug report please provide clear information about the circumstances and a simple example which reproduces the problem.
The issue tracker is of course not empty now. We have many remaining “enhancement” issues which we hopefully can tackle in 2014 with your help.
For those who use older Python versions, please note that pytest is not automatically tested on python2.5 due to virtualenv, setuptools and tox not supporting it anymore. Manual verification shows that it mostly works fine but it’s not going to be part of the automated release process and thus likely to break in the future.
As usual, current docs are at
and you can upgrade from pypi via:
pip install -U pytest
Particular thanks for helping with this release go to Anatoly Bubenkoff, Floris Bruynooghe, Marc Abramowitz, Ralph Schmitt, Ronny Pfannschmidt, Donald Stufft, James Lan, Rob Dennis, Jason R. Coombs, Mathieu Agopian, Virgil Dupras, Bruno Oliveira, Alex Gaynor and others.
have fun, holger krekel
2.5.0¶
pytest-2.4.2: colorama on windows, plugin/tmpdir fixes¶
pytest-2.4.2 is another bug-fixing release:
-
as usual, docs at and upgrades via:
pip install -U pytest
have fun, holger krekel
pytest-2.4.1: fixing three regressions compared to 2.3.5¶
pytest-2.4.1 is a quick follow up release to fix three regressions compared to 2.3.5 before they hit more people:
-.
- also merge doc typo fixes, thanks Andy Dirnberger
as usual, docs at and upgrades via:
pip install -U pytest
have fun, holger krekel
pytest-2.4.0: new fixture features/hooks and bug fixes¶
The just released pytest-2.4.0 brings many improvements and numerous bug fixes while remaining plugin- and test-suite compatible apart from a few supposedly very minor incompatibilities. See below for a full list of details. A few feature highlights:
- new yield-style fixtures pytest.yield_fixture, allowing to use existing with-style context managers in fixture functions.
- improved pdb support:
import pdb ; pdb.set_trace() now works without requiring prior disabling of stdout/stderr capturing. Also the
--pdb option works now on collection and internal errors and we introduced a new experimental hook for IDEs/plugins to intercept debugging:
pytest_exception_interact(node, call, report).
- shorter monkeypatch variant to allow specifying an import path as a target, for example:
monkeypatch.setattr("requests.get", myfunc)
- better unittest/nose compatibility: all teardown methods are now only called if the corresponding setup method succeeded.
- integrate tab-completion on command line options if you have argcomplete configured.
- allow boolean expression directly with skipif/xfail if a “reason” is also specified (see the sketch after this list).
- a new hook
pytest_load_initial_conftests allows plugins like pytest-django to influence the environment before conftest files import
django.
- reporting: color the last line red or green depending if failures/errors occurred or everything passed.
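A minimal sketch of the skipif form with a boolean expression and a reason (the condition shown is illustrative):

import sys
import pytest

@pytest.mark.skipif(sys.platform == "win32",
                    reason="this example is not expected to run on Windows")
def test_posix_only():
    assert True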
The documentation has been updated to accommodate the changes, see
To install or upgrade pytest:
pip install -U pytest # or easy_install -U pytest
Many thanks to all who helped, including Floris Bruynooghe, Brianna Laugher, Andreas Pelme, Anthon van der Neut, Anatoly Bubenkoff, Vladimir Keleshev, Mathieu Agopian, Ronny Pfannschmidt, Christian Theunert and many others.
may passing tests be with you,
holger krekel
Changes between 2.3.5 and.
py
-
pytest-2.3.4: stabilization, more flexible selection via “-k expr”¶
pytest-2.3.4 is a small stabilization release of the py.test tool which offers uebersimple assertions, scalable fixture mechanisms and deep customization for testing with Python. This release comes with the following fixes and features:
- make “-k” option accept an expressions the same as with “-m” so that one can write: -k “name1 or name2” etc. This is a slight usage incompatibility if you used special syntax like “TestClass.test_method” which you now need to write as -k “TestClass and test_method” to match a certain method in a certain test class.
- allow to dynamically define markers via item.keywords[...]=assignment integrating with “-m” option
- a a/conftest.py file and tests in a/tests/test_some.py
- fix issue226 - LIFO ordering for fixture teardowns
- fix issue224 - invocations with >256 char arguments now work
- fix issue91 - add/discuss package/directory level setups in example
- fixes related to autouse discovery and calling
Thanks in particular to Thomas Waldmann for spotting and reporting issues.
See
for general information. To install or upgrade pytest:
pip install -U pytest # or easy_install -U pytest
best, holger krekel
pytest-2.3.3: integration fixes, py24 support,
*/** shown in traceback¶
pytest-2.3.3 is another stabilization release of the py.test tool which offers uebersimple assertions, scalable fixture mechanisms and deep customization for testing with Python. Particularly, this release provides:
- integration fixes and improvements related to flask, numpy, nose, unittest, mock
- makes pytest work on py24 again (yes, people sometimes still need to use it)
- show
*,**args in pytest tracebacks
Thanks to Manuel Jacob, Thomas Waldmann, Ronny Pfannschmidt, Pavel Repin and Andreas Taumoefolau for providing patches and all for the issues.
See
for general information. To install or upgrade pytest:
pip install -U pytest # or easy_install -U pytest
best, holger krekel
Changes between 2.3.2 and 2.3.3¶
-.
pytest-2.3.2: some fixes and more traceback-printing speed¶
pytest-2.3.2 is another stabilization release:
- issue 205: fixes a regression with conftest detection
- issue 208/29: fixes traceback-printing speed in some bad cases
- fix teardown-ordering for parametrized setups
- fix unittest and trial compat behaviour with respect to runTest() methods
- issue 206 and others: some improvements to packaging
- fix issue127 and others: improve some docs
See
for general information. To install or upgrade pytest:
pip install -U pytest # or easy_install -U pytest
best, holger krekel
Changes between 2.3.1 and 2.3.2¶
-
pytest-2.3.1: fix regression with factory functions¶
pytest-2.3.1 is a quick follow-up release:
- fix issue202 - regression with fixture functions/funcarg factories: using “self” is now safe again and works as in 2.2.4. Thanks to Eduard Schettino for the quick bug report.
- disable pexpect pytest self tests on Freebsd - thanks Koob for the quick reporting
- fix/improve interactive docs with –markers
See
for general information. To install or upgrade pytest:
pip install -U pytest # or easy_install -U pytest
best, holger krekel
Changes between 2.3.0 and 2.3.1¶
-.
pytest-2.3: improved fixtures / better unittest integration¶
pytest-2.3 comes with many major improvements for fixture/funcarg management and parametrized testing in Python. It is now easier, more efficient and more predictable to re-run the same tests with different fixture instances. Also, you can directly declare the caching “scope” of fixtures so that dependent tests throughout your whole test suite can re-use database or other expensive fixture objects with ease. Lastly, it’s possible for fixture functions (formerly known as funcarg factories) to use other fixtures, allowing for a completely modular and re-usable fixture design.
For detailed info and tutorial-style examples, see:
Moreover, there is now support for using pytest fixtures/funcargs with unittest-style suites, see here for examples:
Besides, more unittest-test suites are now expected to “simply work” with pytest.
All changes are backward compatible and you should be able to continue to run your test suites and 3rd party plugins that worked with pytest-2.2.4.
If you are interested in the precise reasoning (including examples) of the pytest-2.3 fixture evolution, please consult
For general info on installation and getting started:
Docs and PDF access as usual at:
and more details for those already in the knowing of pytest can be found in the CHANGELOG below.
Particular thanks for this release go to Floris Bruynooghe, Alex Okrushko Carl Meyer, Ronny Pfannschmidt, Benjamin Peterson and Alex Gaynor for helping to get the new features right and well integrated. Ronny and Floris also helped to fix a number of bugs and yet more people helped by providing bug reports.
have fun, holger krekel
Changes between 2.2.4 and 2.3.0¶
- fix issue202 - better automatic names for parametrized test functions
- fix issue139 - introduce @pytest.fixture which allows direct scoping and parametrization of funcarg factories. Introduce new @pytest.setup marker to allow the writing of setup functions which accept funcargs.
-.setup
pytest-2.2.4: bug fixes, better junitxml/unittest/python3 compat¶
pytest-2.2.4 is a minor backward-compatible release of the versatile py.test testing tool. It contains bug fixes and a few refinements to junitxml reporting, better unittest- and python3 compatibility.
For general information see here:
To install or upgrade pytest:
pip install -U pytest # or easy_install -U pytest
Special thanks for helping on this release to Ronny Pfannschmidt and Benjamin Peterson and the contributors of issues.
best, holger krekel
Changes between 2.2.3 and 2.2.4¶
-
pytest-2.2.2: bug fixes¶
pytest-2.2.2 (updated to 2.2.3 to fix packaging issues) is a minor backward-compatible release of the versatile¶
-)
pytest-2.2.1: bug fixes, perfect teardowns¶
pytest-2.2.1 is a minor backward-compatible release of the py.test testing tool. It contains bug fixes and little improvements, including documentation fixes. If you are using the distributed testing plugin, make sure to upgrade it as well.
-)
py.test 2.2.0: test marking++, parametrization++ and duration profiling¶
pytest-2.2.0 is a test-suite compatible release of the popular py.test testing tool. Plugins might need upgrades. It comes with these improvements:
- easier and more powerful parametrization of tests:
- new @pytest.mark.parametrize decorator to run tests with different arguments
- new metafunc.parametrize() API for parametrizing arguments independently
- see examples at
- NOTE that parametrize() related APIs are still a bit experimental and might change in future releases.
- improved handling of test markers and refined marking mechanism:
- “-m markexpr” option for selecting tests according to their mark
- a new “markers” ini-variable for registering test markers for your project
- the new “–strict” bails out with an error if using unregistered markers.
- see examples at
- duration profiling: new “–duration=N” option showing the N slowest test execution or setup/teardown calls. This is most useful if you want to find out where your slowest test code is.
- also 2.2.0 performs more eager calling of teardown/finalizers functions resulting in better and more accurate reporting when they fail
Besides there is the usual set of bug fixes along with a cleanup of pytest’s own test suite allowing it to run on a wider range of environments.
For general information, see extensive docs with examples here:
If you want to install or upgrade pytest you might just type:
pip install -U pytest # or easy_install -U pytest
Thanks to Ronny Pfannschmidt, David Burns, Jeff Donner, Daniel Nouri, Alfredo Deza and all who gave feedback or sent bug reports.
best, holger krekel
notes on incompatibility¶
While test suites should work unchanged you might need to upgrade plugins:
- You need a new version of the pytest-xdist plugin (1.7) for distributing test runs.
- Other plugins might need an upgrade if they implement the
pytest_runtest_logreport hook which now is called unconditionally for the setup/teardown fixture phases of a test. You may choose to ignore setup/teardown failures by inserting “if rep.when != ‘call’: return” or something similar. Note that most code probably “just” works because the hook was already called for failing setup/teardown phases of a test so a plugin should have been ready to grok such reports already.
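A minimal sketch of that guard inside a plugin hook (only the hook name and the rep.when check come from the text above; the rest is illustrative):

def pytest_runtest_logreport(report):
    # skip the setup/teardown phase reports that 2.2.0 now emits unconditionally
    if report.when != "call":
        return
    # ... handle the report for the actual test call here ...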
Changes between 2.1.3 and 2.2.0¶
- examples at and its links.
- issue50: introduce “-m marker” option to select tests based on markers (this is a stricter and more predictable version of “
py.test 2.1.3: just some more fixes¶
pytest-2.1.3 is a minor backward compatible maintenance release of the popular py.test testing tool. It is commonly used for unit, functional- and integration testing. See extensive docs with examples here:
The release contains another fix to the perfected assertions introduced with the 2.1 series as well as the new possibility to customize reporting for assertion expressions on a per-directory level.
If you want to install or upgrade pytest, just type one of:
pip install -U pytest # or easy_install -U pytest
Thanks to the bug reporters and to Ronny Pfannschmidt, Benjamin Peterson and Floris Bruynooghe who implemented the fixes.
best, holger krekel
Changes between 2.1.2 and 2.1.3¶
- fix issue79: assertion rewriting failed on some comparisons in boolops,
- correctly handle zero length arguments (a la pytest ‘’)
- fix issue67 / junitxml now contains correct test durations
- fix issue75 / skipping test failure on jython
- fix issue77 / Allow assertrepr_compare hook to apply to a subset of tests
py.test 2.1.2: bug fixes and fixes for jython¶
pytest-2.1.2 is a minor backward compatible maintenance release of the popular py.test testing tool. pytest is commonly used for unit, functional- and integration testing. See extensive docs with examples here:
Most bug fixes address remaining issues with the perfected assertions introduced in the 2.1 series - many thanks to the bug reporters and to Benjamin Peterson for helping to fix them. pytest should also work better with Jython-2.5.1 (and Jython trunk).
If you want to install or upgrade pytest, just type one of:
pip install -U pytest # or easy_install -U pytest
best, holger krekel /
Changes between 2.1.1 and 2.1.2¶
-
py.test 2.1.1: assertion fixes and improved junitxml output¶
pytest-2.1.1 is a backward compatible maintenance release of the popular py.test testing tool. See extensive docs with examples here:
Most bug fixes address remaining issues with the perfected assertions introduced with 2.1.0 - many thanks to the bug reporters and to Benjamin Peterson for helping to fix them. Also, junitxml output now produces system-out/err tags which lead to better displays of tracebacks with Jenkins.
Also a quick note to package maintainers and others interested: there now is a “pytest” man page which can be generated with “make man” in doc/.
If you want to install or upgrade pytest, just type one of:
pip install -U pytest # or easy_install -U pytest
best, holger krekel /
Changes between 2.1.0 and”
py.test 2.1.0: perfected assertions and bug fixes¶
Welcome to the release of pytest-2.1, a mature testing tool for Python, supporting CPython 2.4-3.2, Jython and latest PyPy interpreters. See the improved extensive docs (now also as PDF!) with tested examples here:
The single biggest news about this release is perfected assertions, courtesy of Benjamin Peterson. You can now safely use assert statements in test modules without having to worry about side effects or python optimization ("-OO") options. This is achieved by rewriting assert statements in test modules upon import, using a PEP302 hook. See for detailed information. The work has been partly sponsored by my company, merlinux GmbH.
For further details on bug fixes and smaller enhancements see below.
If you want to install or upgrade pytest, just type one of:
pip install -U pytest # or easy_install -U pytest
best, holger krekel /
Changes between 2.0.3 and 2.1.0¶
- initialization
py.test 2.0.3: bug fixes and speed ups¶
Welcome to pytest-2.0.3,
There also is a bugfix release 1.6 of pytest-xdist, the plugin that enables seamless distributed and “looponfail” testing for Python.
best, holger krekel
Changes between 2.0.2 and 2.0.3¶
-
py.test 2.0.2: bug fixes, improved xfail/skip expressions, speed ups¶
Welcome to pytest-2.0.2,
Many thanks to all issue reporters and people asking questions or complaining, particularly Jurko for his insistence, Laura, Victor and Brianna for helping with improving, and Ronny for his general advice.
best, holger krekel
Changes between 2.0.1 and 2.0.2¶
See the py history for more detailed changes.
3.0.7 (2017-03-14)¶
- Fix issue in assertion rewriting breaking due to modules silently discarding other modules when importing fails. Notably, importing the anydbm module is fixed (#2248). Thanks @pfhayes for the PR.
- junitxml: Fix problematic case where the system-out tag occurred
To be able to use a custom UINavigationBar, the UINavigationController has a constructor that takes types:
new UINavigationController(typeof (BankingNavigationBar), typeof (UIToolbar));
Using the UINavigationController in this way requires the UINavigationBar subclass to have an IntPtr constructor, but that constructor on the base class is internal protected, so I cannot access it.
[Register ("BankingNavigationBar")]
public class BankingNavigationBar : UINavigationBar
{
// does not exist base (nHandle)
public BankingNavigationBar (IntPtr nHandle) : base (nHandle) //
{
}
public BankingNavigationBar () : base () //
{
}
}
Hello David
While your statement is true that all of our IntPtr constructors are now protected internal (this was introduced with the unified API [1]), nothing stops you from using/exposing them in any subclass of any NSObject-derived type. We even use it in our default templates:
>public partial class ViewController : UIViewController {
> protected ViewController (IntPtr handle) : base (handle)
> {
> // Note: this .ctor should not contain any initialization logic.
> }
>
> public override void ViewDidLoad ()
> {
> base.ViewDidLoad ();
> // Perform any additional setup after loading the view, typically from a nib.
> }
>}
So I really do not see why you could not use it the way you want; the code you posted in comment #0 should compile just fine.
Could you please let me know how this isn't working for you? Even better, could you provide me with a test case?
Cheers!
[1]:
Hi Alex,
You're absolutely right, I should have spotted that; I was perhaps a little too sleep deprived from young babies etc.
I was just going on what IntelliSense was telling me, saying there is no base(handle) on the () constructor.
Thanks
Dave
(Made me smile to see your name, and thinking back to the PSPDFKit with Rene.)
Great to see that you sorted that out.
Cheers!
(Good old times :D I wasn't sure it was you heh glad to see you are doing Xamarin!)
Finds edges in an image using the [Canny86] algorithm.
The function finds edges in the input image image and marks them in the output map edges using the Canny algorithm. The smallest value between threshold1 and threshold2 is used for edge linking. The largest value is used to find initial segments of strong edges. See
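A minimal usage sketch (file names and threshold values are only illustrative):

#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>

using namespace cv;

int main()
{
    Mat src = imread("input.png", 0);   // load as grayscale
    if( src.empty() )
        return -1;
    Mat edges;
    blur( src, edges, Size(3, 3) );     // reduce noise before edge detection
    Canny( edges, edges, 50, 150, 3 );  // the smaller threshold is used for edge linking
    imwrite( "edges.png", edges );
    return 0;
}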
Calculates eigenvalues and eigenvectors of image blocks for corner detection.
For every pixel $p$, the function cornerEigenValsAndVecs considers a blockSize $\times$ blockSize neighborhood $S(p)$. It calculates the covariation matrix of derivatives over the neighborhood as:

$$M = \begin{bmatrix} \sum_{S(p)} (dI/dx)^2 & \sum_{S(p)} (dI/dx)(dI/dy) \\ \sum_{S(p)} (dI/dx)(dI/dy) & \sum_{S(p)} (dI/dy)^2 \end{bmatrix}$$

where the derivatives are computed using the Sobel() operator.

After that, it finds eigenvectors and eigenvalues of $M$ and stores them in the destination image as $(\lambda_1, \lambda_2, x_1, y_1, x_2, y_2)$, where $\lambda_1, \lambda_2$ are the non-sorted eigenvalues of $M$, $(x_1, y_1)$ is the eigenvector corresponding to $\lambda_1$, and $(x_2, y_2)$ is the eigenvector corresponding to $\lambda_2$.

The output of the function can be used for robust edge or corner detection.
See also
cornerMinEigenVal(), cornerHarris(), preCornerDetect()
Harris edge detector.
The function runs the Harris edge detector on the image. Similarly to cornerMinEigenVal() and cornerEigenValsAndVecs(), for each pixel $(x, y)$ it calculates a $2 \times 2$ gradient covariance matrix $M^{(x,y)}$ over a blockSize $\times$ blockSize neighborhood. Then, it computes the following characteristic:

$$\mathtt{dst}(x, y) = \det M^{(x,y)} - k \cdot \left(\mathrm{tr}\, M^{(x,y)}\right)^2$$
Corners in the image can be found as the local maxima of this response map.
Calculates the minimal eigenvalue of gradient matrices for corner detection.
The function is similar to cornerEigenValsAndVecs() but it calculates and stores only the minimal eigenvalue of the covariance matrix of derivatives, that is, $\min(\lambda_1, \lambda_2)$ in terms of the formulae in the cornerEigenValsAndVecs() description.
Refines the corner locations.
The function iterates to find the sub-pixel accurate location of corners or radial saddle points, as shown on the figure below.
Sub-pixel accurate corner location is based on the observation that every vector from the center $q$ to a point $p$ located within a neighborhood of $q$ is orthogonal to the image gradient at $p$, subject to image and measurement noise. Consider the expression:

$$\epsilon_i = {DI_{p_i}}^T \cdot (q - p_i)$$

where $DI_{p_i}$ is an image gradient at one of the points $p_i$ in a neighborhood of $q$. The value of $q$ is to be found so that $\epsilon_i$ is minimized. A system of equations may be set up with $\epsilon_i$ set to zero:

$$\left(\sum_i DI_{p_i} \cdot {DI_{p_i}}^T\right) q - \left(\sum_i DI_{p_i} \cdot {DI_{p_i}}^T \cdot p_i\right) = 0$$

where the gradients are summed within a neighborhood ("search window") of $q$. Calling the first gradient term $G$ and the second gradient term $b$ gives:

$$q = G^{-1} \cdot b$$

The algorithm sets the center of the neighborhood window at this new center $q$ and then iterates until the center stays within a set threshold.
Determines strong corners on an image.
The function finds the most prominent corners in the image or in the specified image region, as described in [Shi94]:
The function can be used to initialize a point-based tracker of an object.
Note
If the function is called with different values A and B of the parameter qualityLevel, and A > B, the vector of returned corners with qualityLevel=A will be the prefix of the output vector with qualityLevel=B.
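A short usage sketch (parameter values are only illustrative):

Mat gray = imread( "input.png", 0 );
vector<Point2f> corners;
goodFeaturesToTrack( gray, corners, 100, 0.01, 10 );
// optionally refine the detected corners to sub-pixel accuracy
if( !corners.empty() )
    cornerSubPix( gray, corners, Size(5, 5), Size(-1, -1),
                  TermCriteria(CV_TERMCRIT_ITER | CV_TERMCRIT_EPS, 30, 0.01) );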
Finds circles in a grayscale image using the Hough transform.
The function finds circles in a grayscale image using a modification of the Hough transform.
Example:
#include <cv.h>
#include <highgui.h>
#include <math.h>

using namespace cv;

int main(int argc, char** argv)
{
    Mat img, gray;
    if( argc != 2 || !(img=imread(argv[1], 1)).data)
        return -1;
    cvtColor(img, gray, CV_BGR2GRAY);
    // smooth it, otherwise a lot of false circles may be detected
    GaussianBlur( gray, gray, Size(9, 9), 2, 2 );
    vector<Vec3f> circles;
    HoughCircles(gray, circles, CV_HOUGH_GRADIENT,
                 2, gray.rows/4, 200, 100 );
    for( size_t i = 0; i < circles.size(); i++ )
    {
        Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
        int radius = cvRound(circles[i][2]);
        // draw the circle center
        circle( img, center, 3, Scalar(0,255,0), -1, 8, 0 );
        // draw the circle outline
        circle( img, center, radius, Scalar(0,0,255), 3, 8, 0 );
    }
    namedWindow( "circles", 1 );
    imshow( "circles", img );
    return 0;
}
Note
Usually the function detects the centers of circles well. However, it may fail to find correct radii. You can assist to the function by specifying the radius range ( minRadius and maxRadius ) if you know it. Or, you may ignore the returned radius, use only the center, and find the correct radius using an additional procedure.
See also
fitEllipse(), minEnclosingCircle()
Finds lines in a binary image using the standard Hough transform.
The function implements the standard or standard multi-scale Hough transform algorithm for line detection. See for a good explanation of Hough transform. See also the example in HoughLinesP() description.
Finds line segments in a binary image using the probabilistic Hough transform.
The function implements the probabilistic Hough transform algorithm for line detection, described in [Matas00]. See the line detection example below:
/* This is a standalone program. Pass an image name as the first parameter
   of the program. Switch between standard and probabilistic Hough transform
   by changing "#if 1" to "#if 0" and back */
#include <cv.h>
#include <highgui.h>
#include <math.h>

using namespace cv;

int main(int argc, char** argv)
{
    Mat src, dst, color_dst;
    if( argc != 2 || !(src=imread(argv[1], 0)).data)
        return -1;

    Canny( src, dst, 50, 200, 3 );
    cvtColor( dst, color_dst, CV_GRAY2BGR );

#if 0
    vector<Vec2f> lines;
    HoughLines( dst, lines, 1, CV_PI/180, 100 );
    for( size_t i = 0; i < lines.size(); i++ )
    {
        float rho = lines[i][0];
        float theta = lines[i][1];
        double a = cos(theta), b = sin(theta);
        double x0 = a*rho, y0 = b*rho;
        Point pt1(cvRound(x0 + 1000*(-b)), cvRound(y0 + 1000*(a)));
        Point pt2(cvRound(x0 - 1000*(-b)), cvRound(y0 - 1000*(a)));
        line( color_dst, pt1, pt2, Scalar(0,0,255), 3, 8 );
    }
#else
    vector<Vec4i> lines;
    HoughLinesP( dst, lines, 1, CV_PI/180, 80, 30, 10 );
    for( size_t i = 0; i < lines.size(); i++ )
    {
        line( color_dst, Point(lines[i][0], lines[i][1]),
              Point(lines[i][2], lines[i][3]), Scalar(0,0,255), 3, 8 );
    }
#endif

    namedWindow( "Source", 1 );
    imshow( "Source", src );
    namedWindow( "Detected Lines", 1 );
    imshow( "Detected Lines", color_dst );
    waitKey(0);
    return 0;
}
This is a sample picture the function parameters have been tuned for:
And this is the output of the above program in case of the probabilistic Hough transform:
Calculates a feature map for corner detection.
The function calculates the complex spatial derivative-based function of the source image:

$$\mathtt{dst} = (D_x \mathtt{src})^2 \cdot D_{yy} \mathtt{src} + (D_y \mathtt{src})^2 \cdot D_{xx} \mathtt{src} - 2 D_x \mathtt{src} \cdot D_y \mathtt{src} \cdot D_{xy} \mathtt{src}$$

where $D_x$, $D_y$ are the first image derivatives, $D_{xx}$, $D_{yy}$ are the second image derivatives, and $D_{xy}$ is the mixed derivative.

The corners can be found as local maximums of the function, as shown below:
Mat corners, dilated_corners;
preCornerDetect(image, corners, 3);
// dilation with 3x3 rectangular structuring element
dilate(corners, dilated_corners, Mat(), 1);
Mat corner_mask = corners == dilated_corners;
This is the mail archive of the archer@sourceware.org mailing list for the Archer project.
On Fri, 27 Feb 2009 23:03:26 +0100, Sami Wagiaalla wrote:
> Sami Wagiaalla wrote:
>> I introduced this problem earlier when i was trying to resolve a
>> conflict that I had with a patch of Jan's.
>> 1de38657622396795ce681e64b03fb74e81e6c3d vs
>> 60eb8684d0d85d0884aca7a2f013e5eb16a51d47
>>
>> I'll check in a fix for this. The issue is that sometimes a function
>> Die might now have high/low pc tags but it has an abstract origin tag
>> pointing to useful namespace information.
>
> I have check in a fix for this.

[archer-keiths-expr-cumulative] 3513c57475d20e65ac2fb2be097227011c4b8b5a

It still has a lot of regressions (tried archer-keiths-expr-cumulative) like:

info function operator\*(^M
memory clobbered past end of allocated block^M
FAIL: gdb.cp/cplusfuncs.exp: info function for "operator%(" (timeout)

(at least on F10.x86_64 using mcheck: LDFLAGS="-lmcheck" ./configure...).

Possible quick fix is attached (not checked-in) - my code moved into explore_abstract_origin() needs to know (the `die_children' count is only read) the number of children DIEs - to allocate appropriately sized array.

Unfortunately even after this attached fix the branch is FAILing on:
+FAIL: gdb.cp/namespace-using.exp: print _a
+FAIL: gdb.cp/namespace-using.exp: print x

* Every test name should be unique - this is the reason the names exist. You do not follow it.
* You use `send_gdb' without waiting on the response of each command - some wait on "$gdb_prompt $". The waiting can be usually done by `gdb_test', in more complicated cases by `gdb_test_multiple'. Otherwise the results get random as dejagnu is not in sync with the GDB output.
* There are even specific library functions `gdb_breakpoint' and `gdb_continue_to_breakpoint' which you use but only sometimes.
* Pattern start `"\\$\[0-9\].* =' should have been more `"\\$\[0-9\]+ =' (although it is also considered safe to use just `=').

There is now a change
KFAIL: gdb.cp/templates.exp: constructor breakpoint (PRMS: gdb/1062)
->
FAIL: gdb.cp/templates.exp: constructor breakpoint
although IMO the current output should be considered as PASS now.

The branch is still pretty old 6.8.50.20090127-cvs and while I would update it but as it currently has regressions I cannot verify I would not break it.

Regards,
Jan
diff --git a/gdb/dwarf2read.c b/gdb/dwarf2read.c
index 3e17767..a9251cb 100644
--- a/gdb/dwarf2read.c
+++ b/gdb/dwarf2read.c
@@ -3317,6 +3317,12 @@ read_func_scope (struct die_info *die, struct dwarf2_cu *cu)
   if (name == NULL || !dwarf2_get_pc_bounds (die, &lowpc, &highpc, cu, NULL)){
     /* explore abstract origins if present. They might contain useful
        information such as import statements. */
+    child_die = die->child;
+    while (child_die && child_die->tag)
+      {
+        child_die = sibling_die (child_die);
+        die_children++;
+      }
     explore_abstract_origin(die, cu, &die_children);
     return;
   }
puka 0.0.7
Puka - the opinionated RabbitMQ client
======================================
Puka is yet-another Python client library for RabbitMQ. But as opposed
to similar libraries, it does not try to expose a generic AMQP
API. Instead, it takes an opinionated view on how the user should
interact with RabbitMQ.
Puka is simple
--------------
Puka exposes a simple, easy to understand API. Take a look at the
`publisher` example:
import puka
client = puka.Client("amqp://localhost/")
promise = client.connect()
client.wait(promise)
promise = client.queue_declare(queue='test')
client.wait(promise)
promise = client.basic_publish(exchange='', routing_key='test',
body='Hello world!')
client.wait(promise)
Puka is asynchronous
--------------------
Puka is fully asynchronous. Although, as you can see in example
above, it can behave synchronously. That's especially useful for
simple tasks when you don't want to introduce callbacks.
Here's the same code written in an asynchronous way:
import puka

def on_connection(promise, result):
    client.queue_declare(queue='test', callback=on_queue_declare)

def on_queue_declare(promise, result):
    client.basic_publish(exchange='', routing_key='test',
                         body="Hello world!",
                         callback=on_basic_publish)

def on_basic_publish(promise, result):
    print " [*] Message sent"
    client.loop_break()

client = puka.Client("amqp://localhost/")
client.connect(callback=on_connection)
client.loop()
You can mix synchronous and asynchronous programming styles if you want
to.
Puka never blocks
-----------------
In the pure asynchronous programming style Puka never blocks your
program waiting for network. However it is your responsibility to
notify when new data is available on the network socket. To allow that
Puka allows you to access the raw socket descriptor. With that in hand
you can construct your own event loop. Here's an event loop that
may replace `wait_for_any` from the previous example:
fd = client.fileno()
while True:
    client.run_any_callbacks()
    r, w, e = select.select([fd],
                            [fd] if client.needs_write() else [],
                            [fd])
    if r or e:
        client.on_read()
    if w:
        client.on_write()
Puka is fast
------------
Puka is asynchronous and has no trouble in handling many requests at a
time. This can be exploited to achieve a degree of parallelism. For
example, this snippet creates 1000 queues in parallel:
promises = [client.queue_declare(queue='a%04i' % i) for i in range(1000)]
for promise in promises:
    client.wait(promise)
Puka also has a nicely optimized AMQP codec, but don't expect miracles
- it can't go faster than Python.
Puka is sensible
----------------
Puka does expose only a sensible subset of AMQP, as judged by the author.
The major differences between Puka and normal AMQP libraries include:
- Puka doesn't expose AMQP channels to the users.
- Puka treats `basic_publish` as a synchronous method. You can wait
on it and make sure that your data is delivered. Alternatively,
you may ignore the promise and treat it as an asynchronous command.
- Puka tries to cope with the AMQP exceptions and expose them
to the user in a predictable way. Unlike other libraries it's
possible (and recommended!) to recover from AMQP errors.
Puka is experimental
--------------------
Puka is a side project, written mostly to prove if it is possible to
create a reasonable API on top of the AMQP protocol.
I like it! Show me more!
------------------------
The best examples to start with are in the
[rabbitmq-tutorials repo]()
More code can be found in the `./examples` directory. Some
interesting bits:
- `./examples/send.py`: sends one message
- `./examples/receive_one.py`: receives one message
- `./examples/stress_amqp_consume.py`: a script used to
benchmark the throughput of the server
There is also a bunch of fairly complicated examples hidden in the
tests (see the `./tests` directory).
I want to install Puka
----------------------
Puka works with Python 2.6 and 2.7.
You can install Puka system-wide using pip:
sudo pip install puka
Alternatively to install it in the `virtualenv` local environment:
virtualenv my_venv
pip -E my_venv install puka
Or if you need the code from trunk:
sudo pip install -e git+
I want to run the examples
--------------------------
Great. Make sure you have `rabbitmq` server installed and follow this
steps:
git clone
cd puka
make
cd examples
Now you're ready to run the examples, start with:
python send.py
I want to see the API documentation
-----------------------------------
The easiest way to get started is to take a look at the examples and
tweak them to your needs. Detailed documentation doesn't exist
now. If it existed it would live here:
[]()
- Downloads (All Versions):
- 27 downloads in the last day
- 522 downloads in the last week
- 2112 downloads in the last month
- Author: Marek Majkowski
- License: MIT
- Platform: any
- Categories
- Development Status :: 3 - Alpha
- Intended Audience :: Developers
- License :: OSI Approved :: MIT License
- Operating System :: OS Independent
- Programming Language :: Python :: 2
- Programming Language :: Python :: 2.6
- Programming Language :: Python :: 2.7
- Topic :: Software Development :: Libraries :: Python Modules
- Package Index Owner: majek
- DOAP record: puka-0.0.7.xml
Published: 18 Oct 2006
By: Azhar Khan
This article provides the steps needed to solve the "Unrecognized Tag" problem for an Atlas control.
With the extender class in place, the page tag and behavior were rendered as:
Since I had changed the Root namespace, I had to change the extender file so that it renders the following:
Step 1
Modifications to project properties:
Step 2
Modify the "Assembly Resource Attribute" region, click on the + sign to open the region, and change the namespace qualifier of the JavaScript file from <Project Name> to the root namespace. This is necessary because web resources must be referenced with their fully qualified name.
The sections marked in Red were replaced with those marked in Blue
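Since the colored listing is not reproduced here, a hypothetical sketch of the change (all names are illustrative):

' Before: resource name qualified with the project name
' <Assembly: System.Web.UI.WebResource("MyAtlasProject.AmericanSSNBehavior.js", "text/javascript")>

' After: resource name qualified with the root namespace
<Assembly: System.Web.UI.WebResource("MyCompany.Web.Controls.AmericanSSNBehavior.js", "text/javascript")>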
Step 3
Modifications to behavior file (AmericanSSNBehavior.vb):
The sections marked in Red were replaced with those marked in Blue at different places in the JavaScript file.
This article described how to change the namespace from the default namespace to a more realistic one for an Atlas Extender Control project.
I have not seen this be much of a debate externally, but it has been pretty hotly debated internally. We just closed down on this guideline… As always, your comments are welcome:

2.3.4 Assembly/DLL Naming Guidelines

An assembly is the unit of deployment and security for managed code projects. Typically an assembly contains managed code and maps 1-1 with a DLL, although assemblies can span one or more files and may contain managed and unmanaged code as well as non-code files such as resources.

Assemblies are self-describing units of compiled managed code. When assemblies are in more than one file, there is one file that contains the entry point and describes the other files in the assembly. A Module is compiled code that can be added to an assembly but has no assembly or version information of its own.

You should choose names for your assemblies that suggest large chunks of functionality, such as System.Data. Assembly and DLL names don't correspond to namespace names, but it is reasonable to follow your namespace conventions [mcm1] when naming assemblies.

Consider naming managed DLLs according to the following pattern:

<Company>.<Component>.dll

Where <Component> contains one or more dot-separated clauses. For example,

Microsoft.VisualBasic.dll
Microsoft.VisualBasic.Vsa.dll
Microsoft.Directx.dll
System.Web.dll
System.Web.Services.dll
Fabrikam.Security.dll
Litware.Controls.dll

FxCop rule.

[mcm1] Make this a hyperlink to 2.3 Namespaces
As the number of frameworks increases, is there a consensus on a naming convention that identifies the framework that the assembly targets? Currently we are building different assemblies for .net 1.0, .net 1.1, .net compact framework 1.0, sscli 1.0 and mono. We build from the same source with very minor conditional compilation for each framework.
So how should we identify the assemblies built for each framework?
Hi Nicko – While your dependencies will of course be bound to a specific version of the framework, you should choose different names only if you’re trying to aid developers in adding references to your assembly. If they typically have to choose from the different flavors above on the same machine, maybe choosing a name that calls out the flavor (neoworks.feature.sscli.dll, neoworks.feature.mono.dll) would help. $0.05CDN – Michael
Michael, thanks for the reply. Currently we are using the following assembly names:
log4net-net-1.0.dll
log4net-net-1.1.dll
log4net-netcf-1.0.dll
log4net-mono-0.23.dll
log4net-sscli-1.0.dll
As there is no assembly-level identifier that can be used to identify which framework the assembly is built for, we are left with no alternative but to use the assembly name itself. This does cause some confusion for users because they assume that the number in the assembly name is the version of our assembly rather than the version of the target framework.
So far I have seen no public discussion of this issue, which I see as becoming more important with the release of the Compact Framework.
Hey – Sorry for the delay :). Perhaps your convention above with a small tweak would help?
log4net-[version]-fw-[version].dll
or something like that, and document the convention. Not the nicest, but…
I'll keep your example in mind though, it is a good one – thanks, m.
I think that version will be fine with private assemblies, but if you use these assemblies in the GAC … I think you must use the same name and delegate the organization of assembly versions to the GAC.
I have a question about what to write down in <Component>. Layers?? Structural components?? Architectural elements?? Both??
You need to write a program similar to NOTEPAD. The program will allow the user to type anything he/she wants. Letters, digits, special characters. In as many rows as he/she wants.
The user then can save the text he/she wrote, then later on load it to continue working on. Your program must have the following features:
Make sure you use your own code, do not use any reserved functions from external libraries. The idea is to get you to code Linked Lists :)
You can use this code to manipulate the interface:
#include "stdafx.h"
#include <windows.h>   // COORD, HANDLE, GetStdHandle, SetConsoleCursorPosition
#include <iostream>
using namespace std;

void gotoxy(int x, int y) {
    COORD pos = { (SHORT)x, (SHORT)y };
    HANDLE output = GetStdHandle(STD_OUTPUT_HANDLE);
    SetConsoleCursorPosition(output, pos);
}
Pop out Kerning and Groups
As I understand the OS X hierarchy of windows versus panels, a panel modifies the object on the main window, and a window holds its own, separate data. This makes sense with Features, Font Info, Sort and Mark. Space Center is its own window because it displays the same data in a different manner, which is also logical. The exceptions to this are Kerning and Groups. These two panels currently block any interaction with the rest of the font, and would also make more sense as pop-out windows – like Space Center. It would also make it possible to keep an eye on the groups, and modify Space Center while kerning.
Crazy? Weird? Can I change it with a script?
I see. That part is already possible :)
from lib.UI.kerningPreviewSheet import KerningPreview
KerningPreview(CurrentFont().naked())
I will also open the Group view and make them nicely available in mojo.UI so you don't have to worry about the internal naked object.
Like a charm. Thanks a lot!
Using COM Interop in .NET Compact Framework 2.0
Maarten Struys
PTS Software bv
November 2005
Applies to:
Microsoft .NET Compact Framework version 2.0
Microsoft Windows CE
Microsoft Visual Studio 2005
Windows Mobile version 5.0
Summary: With the arrival of the .NET Compact Framework 2.0, interoperability between managed code and native code has tremendously improved. This article explores the new COM Interop capabilities of the .NET Compact Framework 2.0. You will learn about using existing COM objects from inside managed code. You will also learn how the new Windows Mobile 5.0 managed APIs make it even easier to use a number of existing COM objects, for instance, to access data in Pocket Outlook. This article is filled with code examples that are taken from the download sample code, which you may want to have available when reading this article. All sample code runs in the new Windows Mobile 5.0 emulators, which are available as part of the Windows Mobile 5.0 SDK that you should install after installing Visual Studio 2005. Reading this article will help you to make maximum use of existing COM components. (31 printed pages)
Download the COM_Interop.msi code sample.
Contents
Introduction
Simple COM Object
Using COM Objects with the .NET Compact Framework 1.0
Using COM Objects with the .NET Compact Framework 2.0
Using COM Interop to Access Pocket Outlook
Using the New Windows Mobile 5.0 Managed APIs to Access Pocket Outlook
Conclusion
Introduction
Many people who are using the .NET Compact Framework version 1.0 have asked for ways to incorporate existing Component Object Model (COM) components in new managed applications. Unfortunately, the .NET Compact Framework 1.0 does not support COM Interop. To be able to use existing COM objects inside a managed application, the approach is to write a native C++ wrapper DLL around the COM object. The wrapper exposes simple C functions that internally call into the COM object and pass the result to the caller of the DLL. In that way, a managed application can use platform invoke to call functions inside the wrapper DLL.
This article shows you how you can use a simple COM object in the .NET Compact Framework 1.0 by using platform invoke in combination with a wrapper DLL. After that, the article describes COM Interop, available in the Microsoft .NET Compact Framework 2.0. You will experience how COM Interop makes using existing COM objects much easier. The next part of this article explains how you can use COM Interop to interact with Microsoft Pocket Outlook. Finally, you will learn how it is even simpler to use managed wrappers around Pocket Outlook that are available with the new Windows Mobile version 5.0 managed APIs.
Simple COM Object
To see COM Interop in action, you will use a very simple COM object, as shown in Figure 1.
Figure 1. A simple COM object
As you can see, the COM object implements a simple calculator, called SimpleCalc, that exposes two different interfaces apart from IUnknown. Right now, you can concentrate on the Add, Subtract, Multiply, and Divide methods of the ISimpleCalc interface because the user of a COM object doesn't have to worry about the implementation details of the component. Implementing the Add, Subtract, Multiply and Divide methods is very simple. Each method takes two numeric input parameters and passes the result in an output parameter. A method inside a COM object is expected to return an HRESULT value to indicate the success or failure of the method. In the following code example, you can see the implementation of the Add method inside the SimpleCalc COM component.
The entire COM object, written in C++ by means of the Active Template Library (ATL), is available as part of this article's download code sample. The SimpleCalc COM object and its type library containing the description of the SimpleCalc COM object including its interfaces are compiled into Calculator.dll. If you are not interested in how to call COM objects inside .NET Compact Framework 1.0 applications, you can skip the next section of this article and continue to the section called Using COM Objects with the .NET Compact Framework 2.0.
Using COM Objects with the .NET Compact Framework 1.0
Because the .NET Compact Framework 1.0 does not support COM Interop, work is involved whenever you want to use existing COM objects inside a managed .NET Compact Framework 1.0 application. Because there is (limited) support for platform invoke, you can call unmanaged or native functions that reside in DLLs from within a managed application. For more information about platform invoke in the .NET Compact Framework 1.0, see the article An Introduction to P/Invoke and Marshaling on the Microsoft .NET Compact Framework.
If you want to use a COM object in a managed .NET Compact Framework 1.0 application, the only possibility you have is to make use of platform invoke. By using platform invoke, you can only call native functions inside DLLs. If you write a DLL in C++ with functions that wrap around all of the interfaces of the COM object you want to use, you have created a way to call into your COM object. However, there is one problem to be solved: because there is no managed method available to instantiate the COM object you want to use, you will have to do so in the DLL as well. Typically, when you are using Microsoft Visual Studio .NET 2003 with the .NET Compact Framework 1.0, you need a separate development environment, eMbedded Visual C++ version 4.0, to create the native wrapper DLL. However, with Visual Studio 2005, you can create both managed and native applications for smart devices, as long as your device runs Windows Mobile 2003-based software or later. You can even use Visual Studio 2005 to create .NET Compact Framework 1.0 applications for these devices.
Because Visual Studio 2005 also supports writing native C++ applications for smart devices, this article uses Visual Studio 2005 to show you how to create a native wrapper DLL around a COM object for those cases where you want to interoperate with a COM object inside a .NET Compact Framework 1.0 application. Using Visual Studio 2005, the first thing you need to do if you want to use an existing COM object inside a .NET Compact Framework 1.0 application is to create a new Visual C++ smart device project, as shown in Figure 2.
Stepping through the New Project wizard, you need to specify a number of settings. The project you will create must target the correct platform because, unlike managed code, native code is platform dependent. Right now, you should select the Windows Mobile 5.0 Pocket PC SDK as target platform SDK. You also need to specify the application type for the project. Because you will create a wrapper DLL for a COM object, the application type must be DLL. After you step through the wizard and set the correct options, Visual Studio 2005 creates all files necessary for your wrapper DLL, which is called CalculatorFlatInterface in this sample.
To use the SimpleCalc object, you must first initialize the COM library by using the API CoInitializeEx. After you finish using the SimpleCalc object, you must close the COM library by using the API CoUninitialize. Because you are using a wrapper DLL to continuously communicate with the SimpleCalc object, the DLL entry function is a good location to call CoInitializeEx and CoUninitialize. If you are using multiple COM wrappers inside your application, you should call these APIs during application initialization and application termination respectively. The following code shows how to initialize and close the COM library inside the DllMain function.
Note If Windows Mobile 5.0 Pocket PC SDK does not appear as an option, this is an indication that the Windows Mobile 5.0 Pocket PC SDK has not yet been installed. The SDK can be downloaded from the Microsoft Download Center.
HRESULT hr;

BOOL APIENTRY DllMain (HANDLE hModule,
                       DWORD ul_reason_for_call,
                       LPVOID lpReserved)
{
    switch (ul_reason_for_call)
    {
    case DLL_PROCESS_ATTACH:
        hr = CoInitializeEx(0, COINIT_MULTITHREADED);
        break;
    case DLL_THREAD_ATTACH:
        break;
    case DLL_THREAD_DETACH:
        break;
    case DLL_PROCESS_DETACH:
        CoUninitialize();
        break;
    }
    return TRUE;
}
After initializing the COM library, you can call into the SimpleCalc object. In the CalculatorFlatInterface wrapper DLL, you can now create a number of functions that call into the different methods that are exposed by the interfaces of the SimpleCalc object and that return the results of these calls to the calling application. The entire CalculatorFlatInterface wrapper DLL is available as part of this article's download code sample.
An important thing to think about is that C++ uses name mangling which significantly modifies the publicly visible function name to create unique names for each function—something that is necessary to support function overloading. To suppress name mangling, you need to precede function declarations in the wrapper DLL that you want to be available for platform invoke by managed code with extern "C". In the download code sample, you suppress name mangling by including extern "C" in the CALCULATORFLATINTERFACE_API macro definition.
Typically, you write a function in your wrapper DLL for each method of the SimpleCalc object you want to expose to a managed application, as shown in the following code.
CALCULATORFLATINTERFACE_API int Add(int iFirstValue, int iSecondValue, int* piResult)
{
    HRESULT hrLocal;
    ISimpleCalc *ISimpleCalc = NULL;
    int iRetVal = -1;

    // Part of the original listing is elided in this copy; a typical version
    // creates the COM object roughly as follows before calling into it.
    if (SUCCEEDED(hr))
    {
        hrLocal = CoCreateInstance(CLSID_SimpleCalc, NULL, CLSCTX_INPROC_SERVER,
                                   IID_ISimpleCalc, (LPVOID *)&ISimpleCalc);
        if (SUCCEEDED(hrLocal))
        {
            ISimpleCalc->Add(iFirstValue, iSecondValue, piResult);
            ISimpleCalc->Release();
            iRetVal = 0;
        }
    }
    return iRetVal;
}
At first sight, you might feel that this code is not too difficult to write, but you can see that a considerable amount of extra code is necessary to call into a COM object from managed code if you are creating a .NET Compact Framework 1.0 application. You have to know how to call a COM object from native code, and you have to keep in mind that the API you expose in the wrapper DLL can be called from managed code. Additional work might be necessary—especially if you have to pass complex structures as parameters—because of limitations in data marshalling between managed and native code.
In the preceding code, you can see that the SimpleCalc object is created each time a method in the wrapper DLL is called. This is not the most efficient way to use COM objects, and this way will only work properly if the COM object is stateless. However, it shows you all that is needed to create, call, and destroy the object in one single function. A more efficient way to use a COM object would be to provide additional wrapper functions to create the object and to destroy the object. The create function should pass an IntPtr back to the managed application and should be called before the first method in the COM object is used. The IntPtr can then be used as a reference to the live COM object and should be passed to each individual method call. The destroy function should be called when the object is no longer needed.
Things become even more complex if your COM object also has a connection point with which it can call back into an application. In the .NET Compact Framework 1.0, you cannot pass delegates to a native function by using platform invoke. So whenever a COM object contains entry points for callback functions, that functionality should be implemented entirely in the wrapper DLL. You should use another mechanism—for instance, a MessageWindow class—to pass information from the COM object back to the managed application through the wrapper DLL. In this article's download code sample, you will find an example of this mechanism as well. For more information about using the MessageWindow class, see the article Using the Microsoft .NET Compact Framework MessageWindow Class.
The Divide method that is part of the ISimpleCalc interface and that is implemented in the SimpleCalc object optionally makes use of a callback function. When the Divide method detects a division-by-zero error, it not only sets an error return value, but it also calls a callback function (when present) to pass a friendly error message back to the caller.
As shown earlier in Figure 1, the ISimpleCalcCallBack interface expects a ShowMsg function. This function acts as a callback function. It should have the following signature.
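The signature listing is omitted in this copy; based on the description it is roughly the following, where the exact string type is an assumption:

HRESULT ShowMsg(LPCTSTR szMessage);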
When the SimpleCalc object calls ShowMsg, it passes a Unicode string to it. The implementation of ShowMsg determines what to do with the passed string. Because a delegate to a ShowMsg implementation cannot be passed from managed code to native code, the wrapper DLL is responsible for providing the callback function, and it should find a way to pass the Unicode string back to the managed application. The following code example shows a wrapper function to call into the SimpleCalc object and to set a callback function.
CALCULATORFLATINTERFACE_API int Divide(int iFirstValue, int iSecondValue, int* piResult)
{
    HRESULT hrLocal;
    ISimpleCalc *ISimpleCalc = NULL;
    int iRetVal = -1;

    // Instantiate a new C++ object that implements the
    // ISimpleCalcCallBack interface
    ISimpleCalcCallBack *callBack = new CallBack(hWndManaged);

    // Part of the original listing is elided in this copy; a typical version
    // creates the COM object roughly as follows before calling into it.
    if (SUCCEEDED(hr))
    {
        hrLocal = CoCreateInstance(CLSID_SimpleCalc, NULL, CLSCTX_INPROC_SERVER,
                                   IID_ISimpleCalc, (LPVOID *)&ISimpleCalc);
        if (SUCCEEDED(hrLocal))
        {
            // Pass the callback object to the Calculator
            ISimpleCalc->SetCallback(callBack);
            ISimpleCalc->Divide(iFirstValue, iSecondValue, piResult);
            // Make sure the callback object is removed from the
            // SimpleCalc object if you no longer need it
            ISimpleCalc->ResetCallback();
            ISimpleCalc->Release();
            iRetVal = 0;
        }
    }
    return iRetVal;
}
Instead of calling only one method in the SimpleCalc object, the preceding code calls additional methods to set a callback function prior to calling into the Divide method. When the Divide method returns, the callback function is reset again. In this particular implementation, the callback function is set and reset again each time the Divide method is called.
The callback function could have been set once during initialization of the SimpleCalc object, but this download code sample shows how you can combine several calls into the SimpleCalc object from within one single wrapper function. The actual callback function is a method inside a Callback object that is part of the wrapper DLL. The following code example shows the ShowMsg method that is called back from inside the SimpleCalc object.
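The listing is omitted in this copy; a sketch of such an implementation, with WM_COM_CALLBACKDATA and the managed window handle taken from the surrounding text and the remaining names assumed, might look like this:

STDMETHODIMP CallBack::ShowMsg(LPCTSTR szMessage)
{
    // Forward the Unicode string to the managed application's
    // MessageWindow by using the user-defined message.
    SendMessage(m_hWndManaged, WM_COM_CALLBACKDATA, 0, (LPARAM)szMessage);
    return S_OK;
}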
This implementation of ShowMsg is very simple. The Unicode string that is passed as parameter is simply sent to a window by using a user defined message WM_COM_CALLBACKDATA. However, to be able to show the string inside a managed application, the managed application has to use a MessageWindow class to be able to expose a window handle and to act on the Microsoft Windows message that it receives from the CalculatorFlatInterface DLL. The window handle of the mainform of the managed application is passed to the constructor of the CallBack object of the CalculatorFlatInterface DLL.
After this long introduction about how to create a wrapper DLL to be able to use COM objects from inside managed code, there is one more thing you need to know: how to use the wrapper DLL inside a .NET Compact Framework 1.0 application.
Calling COM Objects with the .NET Compact Framework 1.0 and a Wrapper DLL
To be able to use the SimpleCalc object in combination with the wrapper DLL, you need to create a managed application by using Visual Studio .NET 2003 or Visual Studio 2005. To do so, you can simply create a new C# or Visual Basic .NET application. All code examples in this article are shown in C#, but the download code samples are available in Visual Basic .NET as well.
Figure 3 illustrates calling a COM component from a managed .NET Compact Framework 1.0 application.
Figure 3. Calling a COM component from a managed .NET Compact Framework 1.0 application
To begin, you can create a very simple user interface with four different buttons that correspond to the methods exposed by the wrapper DLL that calls into the SimpleCalc object. Figure 4 shows such an interface.
The code to call functions in the wrapper DLL is implemented in the different button_Click event handlers. To get an idea about how to call functionality in the SimpleCalc object by means of the wrapper DLL, look at the following code example. It shows how to make use of platform invoke to add two values and receive the result. Note that the function Add in the CalculatorFlatInterface DLL is responsible for calling the SimpleCalc COM object's Add function that in turn will perform the requested computation.
[DllImport("CalculatorFlatInterface.dll")]
private extern static int Add(int firstValue, int secondValue, ref int result);

private void btnAdd_Click(object sender, EventArgs e)
{
    int result = 0;
    int firstValue = Convert.ToInt32(tbValue1.Text);
    int secondValue = Convert.ToInt32(tbValue2.Text);
    if (Add(firstValue, secondValue, ref result) == 0)
    {
        tbResult.Text = result.ToString();
    }
    else
    {
        tbResult.Text = "???";
    }
}
The amount of work to create a wrapper DLL as previously described is not trivial. Therefore, third-party solutions, like CFCOM from Odyssey Software, are available to automate this work. Also, third-party wrapper DLLs, like PocketOutlook In The Hand, are available to wrap around a particular COM object.
The fact that you need to use a separate DLL to be able to use COM objects also has an impact on the memory consumption of a Windows Mobile–based device. This impact is especially relevant if you want to use several COM objects and thus need several wrapper DLLs as well, as is explained in the article Windows CE.NET Advanced Memory Management. Luckily, the .NET Compact Framework 2.0 has much better support for interoperability between managed code and native code, including COM Interop and the possibility to pass managed functions—by means of delegates—to native code, with which you can call back from native code into managed code.
Using COM Objects with the .NET Compact Framework 2.0
With the arrival of the .NET Compact Framework 2.0, which ships with Visual Studio 2005, interoperating with existing COM objects becomes much easier because the .NET Compact Framework 2.0 supports COM Interop. For a developer, this means that you no longer need to create your own wrapper DLL in native code to communicate with a COM object. Instead, you can directly communicate with the COM object from within managed code. Of course, this direct communication has a lot of advantages:
- You have less code to write.
- You don't have to know about low-level COM details.
- You can call methods inside COM objects as though they were managed methods.
- You have full IntelliSense technology available for COM object methods.
To use existing COM objects from managed code, you can add a reference to your COM object in your managed project, as shown in Figure 5. You can add references to files of type DLL, executable, and type library—a binary file that includes information about interfaces, types, and objects that a COM object exposes. Visual Studio 2005 then creates an interop assembly and automatically adds the assembly as a reference. The interop assembly contains managed types that allow you to program against unmanaged COM types. The interop assembly does not communicate directly with the COM object. Instead the Common Language Runtime COM Interop layer inserts a special proxy between the caller and the COM object, the so-called runtime callable wrapper. Thanks to the managed type definitions in the interop assembly, a COM object looks exactly like a managed object to a managed application.
Figure 5. COM Interop in version 2.0 of the .NET Compact Framework
To make use of the SimpleCalc object inside a managed application, you can simply create a new smart device project in Visual Studio 2005. Because all code examples in this article are in C#, you should choose a C# smart device project. Be sure to target the Windows Mobile 5.0 Pocket PC.
Note To be able to use the SimpleCalc COM object, you must download and install the download code sample that comes with this article.
To use the SimpleCalc object, you should create a user interface according to Figure 6. Be sure to give the different UI controls names as shown.
You also need to add button_Click event handlers for each of the buttons by double-clicking them in the form's Design view.
Note You have to switch back to Design view each time you add an event handler.
To be able to use the SimpleCalc object, you can simply add a reference to the DLL file containing the Calculator type library that provides a description of the COM object and its interfaces, as shown in Figure 7. To add a reference you need to perform the following actions:
- Right-click the project (CalculatorAppV2) in Solution Explorer
- Click Add Reference
- Browse to the folder where the SimpleCalc object's DLL file and type library (.tlb) file are located
- Select the type library file.
Note The actual path of the SimpleCalc object depends on where you installed it.
When you click OK, Visual Studio 2005 creates an interop assembly that you can use to access methods of the SimpleCalc object. To find out what methods are available in the interop assembly, you can change to Class view, expand the Project References, and explore the CalculatorLib assembly, as shown in Figure 8. If you select the ISimpleCalc interface, you will see all methods that it exposes.
It is also possible to use a tool called tlbimp (type library importer) to generate an interop assembly from a type library. You can find Tlbimp in the .NET Framework Software Developer Kit. There are advantages in using tlbimp over adding a reference to a type library in Visual Studio 2005. By using tlbimp, you have the possibility to sign the resulting assembly and place it in the global assembly cache. This allows you to share the interop assembly among multiple applications. Because the SimpleCalc object is not meant to be shared over several different applications, adding a direct reference to its type library as shown in Figure 7 makes perfect sense.
Before being able to use the methods of the SimpleCalc object, you have to provide using directives for the following namespaces in your managed application.
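The directives themselves are omitted in this copy; from the following paragraph they are:

using System.Runtime.InteropServices;
using CalculatorLib;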
The "System.Runtime.InteropServices" namespace contains data types to support COM interop and platform invoke. The "CalculatorLib" namespace contains managed types to expose the SimpleCalc COM object class and interfaces. After providing the using directives for these namespaces, you can now use the methods from SimpleCalc as if it was a managed object, as you will see when you are adding functionality to the button_Click event handlers that you created for the CalculatorAppV2 application. Note that the following code example shows only one of the button_Click handlers. Implementing the other click handlers is similar.
namespace CalculatorAppV2
{
    public partial class Form1 : Form
    {
        private ISimpleCalc calculator = new SimpleCalc();

        public Form1()
        {
            InitializeComponent();
        }

        private void btnAdd_Click(object sender, EventArgs e)
        {
            int firstValue = Convert.ToInt32(tbValue1.Text);
            int secondValue = Convert.ToInt32(tbValue2.Text);
            int result = calculator.Add(firstValue, secondValue);
            tbResult.Text = result.ToString();
        }
    }
}
In the preceding code, a private variable of type ISimpleCalc is assigned to a new instance of type SimpleCalc. This is, in fact, an instantiation of the SimpleCalc COM object. If you entered the preceding code yourself, you can also see that the Add method for the variable of type SimpleCalc just looks like a managed object and even has full IntelliSense support.
Figure 9 illustrates calling a COM component from a managed .NET Compact Framework 2.0 application.
Figure 9. Calling a COM component from a managed .NET Compact Framework 2.0 application
If you compare Figure 9 with Figure 3, you can see immediately that you have to write much less code yourself to use an existing COM object. As a matter of fact, you only need to concentrate on your own application's code. Visual Studio 2005 provides you with the interface to the COM object. If you compare the managed call to the Add method to the Add method as defined in the SimpleCalc COM object (see the download code sample), you will see a remarkable difference.
The signature of the managed Add method is as follows.
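The listing is not reproduced in this copy; judging from the call in the earlier example, it is roughly:

int Add(int firstValue, int secondValue);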
The signature of the SimpleCalc object's Add method is as follows.
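Again a sketch based on the wrapper code shown earlier; the COM method returns an HRESULT and passes the result through an output parameter:

HRESULT Add(int iFirstValue, int iSecondValue, int* piResult);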
The translation of the Add method in the SimpleCalc object to the managed method happened automatically when you added a reference to the type library file.
To recap, all it takes to use an existing COM object in a managed application is adding a reference to the COM object to generate an interop assembly. After that, you can use the COM object, just as you would use a managed object that exists in another assembly.
However, there are some restrictions about using COM objects in the .NET Compact Framework 2.0. You cannot refer to an existing ActiveX control from within a managed project because a managed application cannot act as an ActiveX host. There are (unsupported) ways to host ActiveX controls inside managed .NET Compact Framework 2.0 applications, but they are beyond the scope of this article. Alex Feinman's article, "Hosting ActiveX Controls in the .NET Compact Framework 2.0", covers this topic, but it will be published on MSDN in November 2005. Unlike in the full .NET Framework, the .NET Compact Framework 2.0 does not support hosting the runtime from native code. As a result, you cannot have an existing native application call into managed code. However, you can call back into managed code from within a COM object by using COM callable wrappers (CCWs) if the COM object is being hosted by a managed application. A CCW is created by the Common Language Runtime and is invisible to other classes in the .NET Compact Framework. An important function of a CCW is to marshal calls between managed and unmanaged code, thus allowing a COM object to call managed functions that are exposed through COM interfaces.
Calling a Managed Method from Inside a COM Object
Even though there is only limited CCW support, a COM object can expose managed interfaces that expect a callback function. Since it is possible with the .NET Compact Framework 2.0 to pass delegates to native code, you can also pass a delegate to an exposed interface, resulting in the possibility for a COM object to call back into managed code.
In the SimpleCalc COM object, you will find this scenario implemented. SimpleCalc has two different interfaces: ISimpleCalc and ISimpleCalcCallBack (as shown in Figure 10).
Figure 10. Calling .NET objects from COM
You use the ISimpleCalcCallBack interface to call a method outside the actual COM object. The way you will use this functionality in the sample code is to allow the COM object to pass a friendly error message in case a divide-by-zero error is trapped in the Divide method that is part of the ISimpleCalc interface. Before being able to use the callback interface in the SimpleCalc object, you have to pass a reference to an implementation of the callback interface to SimpleCalc. The implementation of the ISimpleCalcCallBack callback interface is provided by the managed MsgNotification class.
The following code shows the managed object that will be called from COM.
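The class listing is not reproduced here; a minimal sketch of a managed implementation of the callback interface, with the method name taken from the earlier ShowMsg discussion and everything else illustrative:

public class MsgNotification : ISimpleCalcCallBack
{
    // Called by the SimpleCalc COM object through the
    // ISimpleCalcCallBack interface.
    public void ShowMsg(string message)
    {
        MessageBox.Show(message, "SimpleCalc");
    }
}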
To use the callback interface inside the SimpleCalc COM object, you have to pass a reference to the implementation of the callback interface from your managed application. The ISimpleCalc interface provides the SetCallback method for just this purpose, as shown in the following code. For example, you can call the SetCallback method in the main form's constructor of the CalculatorAppV2 managed application to pass the managed implementation of the ISimpleCalcCallBack interface, which is the MsgNotification class.
public partial class Form1 : Form
{
    private ISimpleCalc calculator = new SimpleCalc();
    private MsgNotification msgNotification = new MsgNotification();

    public Form1()
    {
        InitializeComponent();
        // Use a method of ISimpleCalc to pass a callback object to
        // the SimpleCalc COM object
        calculator.SetCallback((ISimpleCalcCallBack)msgNotification);
    }

    // Remaining functionality of class Form1
}
The preceding code is all you need to pass an implementation of the callback interface to the SimpleCalc COM object. Of course, the COM object determines when to actually call back into the method provided by the interface. In the sample SimpleCalc object, the callback interface's method will be invoked when SetCallback is executed to send a notification that the callback interface is ready to use.
The SimpleCalc object will also use the callback interface's method when a divide-by-zero error is trapped in the SimpleCalc object's Divide method. Of course, calling the Divide method inside managed code does not give any indication that the callback interface will be used; it is entirely up to the SimpleCalc object to determine when to use the callback interface. In fact, calling the Divide method of SimpleCalc from inside managed code looks exactly like calling any other method in SimpleCalc, as shown in the following code.
private void btnDivide_Click(object sender, EventArgs e)
{
    int firstValue = Convert.ToInt32(tbValue1.Text);
    int secondValue = Convert.ToInt32(tbValue2.Text);
    try
    {
        int result = calculator.Divide(firstValue, secondValue);
        tbResult.Text = result.ToString();
    }
    catch (ExternalException)
    {
        tbResult.Text = "Undefined";
    }
}
One thing you will note is that the btnDivide_Click handler of CalculatorAppV2 contains a try…catch block to handle exceptions. Particularly, an ExternalException exception is caught. The runtime automatically throws this exception when the method called inside the COM object returns anything other than S_OK. If the Divide method in the SimpleCalc object traps a divide-by-zero error, it will return an E_FAIL value, causing ExternalException to be thrown.
It is always a good idea to surround calls to COM objects by a try…catch block, especially because you are not always aware of the actual implementation of the COM object that you are using. The ExternalException exception contains an ErrorCode property that stores an integer value (HRESULT) that identifies the error.
Using COM Interop to Access Pocket Outlook
So far, you have learned how to use your own existing COM objects in a managed .NET Compact Framework 2.0 application. However, with COM Interop support, you also have easy access to Pocket Outlook by means of the Pocket Outlook Object Model (POOM). POOM is a COM component that exposes functionality of Pocket Outlook, so you can use its functionality in your own managed or native application.
In the .NET Compact Framework 1.0, using POOM was not straightforward. You either had to create a wrapper DLL in native code that flattened the POOM component or use a third-party library that flattened the POOM component for you. As you will see in this section, with the .NET Compact Framework 2.0, using POOM inside a managed application is now relatively easy because of COM Interop.
To see POOM in action, you will need to create a new managed application. With the code examples presented in this article, you will be able to browse and modify the contacts database from within your own application, and you will be able to make appointments for a particular contact. Of course, Pocket Outlook has much more functionality, but working through this section of the article should help you understand how to use Pocket Outlook inside your own application.
In the code examples presented in this article, you are not going to concentrate on a fancy user interface, but you will concentrate on using POOM inside a managed application. Figure 11 shows the user interface for a .NET Compact Framework 2.0 application that targets a Pocket PC 2003 Second Edition emulator.
The reason to choose the Pocket PC 2003 SE emulator instead of the Windows Mobile 5.0–based Pocket PC emulator is that you typically access Pocket Outlook inside a Windows Mobile 5.0–based device by using the new Windows Mobile 5.0 managed APIs, as explained later in this article. However, those APIs are available only on Windows Mobile 5.0–based devices. If you want to use Pocket Outlook in applications that target other devices, you need to take the approach that is described here. Note that the device must be capable of running .NET Compact Framework 2.0 applications to be able to use COM Interop. Devices that can run .NET Compact Framework 2.0 applications are devices that run one of the following platforms:
- Windows Mobile 5.0
- Windows Mobile 2003 software for Pocket PCs
- Windows Mobile 2003 Second Edition software for Pocket PCs
- Generic Windows CE 5.0
If you don't want to enter all of the code necessary for this application yourself, the application is also part of this article's download code sample. The code sample is available in both C# and Visual Basic .NET.
If you want to enter all of the code for the application yourself, you should change the names of the four buttons according to Figure 11. The next thing you need to do is add a Form_Load event handler by clicking somewhere on the form outside the controls, and then clicking the Events button on the toolbar that is above Properties. If you now locate the Load event and double-click it, Visual Studio will generate a Form1_Load event handler for you. You should repeat the same action to add a Form_Closing event handler to the application. Finally, you need to add click event handlers for all four buttons by double-clicking them in the Form1.cs Design window. Note that you have to switch back to Design view each time you double-click a button. For the time being, you have finished creating the user interface part of the UsingPOOM application.
The easiest way to call a COM component from managed code (thus also POOM) is to start with a type library. Unfortunately, the type library for POOM is not included in either the Pocket PC 2003 or Smartphone 2003 SDKs, so you'll have to build your own. Fortunately, building the type library is easy. You can create a type library for POOM by using the Microsoft Interface Definition Language compiler (midl.exe). You typically need to complete this step for all COM components that you want to access from inside a managed application and that don't have a type library available.
The interface definition file for POOM is pimstore.idl. It does not ship with the Pocket PC 2003 SDK, so you have to download it separately from the Windows Mobile Team blog.
Assuming you have downloaded pimstore.idl and stored it in the Include folder of the Pocket PC 2003 SDK that is installed automatically when you installed Visual Studio 2005, you can create a type library for pimstore.idl at a command prompt. To make sure that you correctly set all environment variables that enable you to use particular command-line tools that come with Visual Studio 2005, you need to open a Visual Studio 2005 command prompt. On the Windows XP Start menu, point to All Programs, point to Microsoft Visual Studio 2005, point to Visual Studio Tools, and then click Visual Studio 2005 Command Prompt.
In the command prompt window that opens, you need to change the working directory to the directory in which you stored the pimstore.idl file. To create a type library for POOM, you can simply type the following command.
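Assuming pimstore.idl is in the current directory, the command is simply midl followed by the file name:

midl pimstore.idl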
Carrying out the preceding command creates a type library file for you with the name pimstore.tlb. If midl shows warnings with numbers 2400 and 2401, you can simply ignore them. They are generated because some of the interface definitions in pimstore.idl are specified with an "optional" keyword that is already implied by another keyword in the interface definition (defaultvalue). The warnings have no impact on the generated type library.
The next step is to add a reference to pimstore.tlb in your Visual Studio 2005 managed project. If you have successfully added this reference to your project (as shown in Figure 12), you now should see a newly added reference to PocketOutlook in the Solution Explorer of your managed project. If you now open the class view, you can expand the PocketOutlook reference and examine the interfaces, objects, methods, and properties that are available for use inside your managed application.
Now that you have created a type library and added a reference to it in your managed project, you are ready to use functionality from Pocket Outlook inside your application. The first thing to do in your application is add a number of namespaces, particularly the "System.Runtime.InteropServices" and "PocketOutlook" namespaces. The latter is available to you because you added a reference to the pimstore type library to your project. To be able to access Pocket Outlook methods inside your application, you also need to create a variable of type ApplicationClass, which is part of the PocketOutlook namespace. This type encapsulates Pocket Outlook functionality. You can think about it as your connection to Pocket Outlook.
To be able to use functionality from Pocket Outlook you need to log on to it. When you finish using Pocket Outlook, you need to log off from it again. In the sample application, you will use Pocket Outlook throughout the entire lifetime of the application, so a good place to log on to Pocket Outlook is in the Form.Load event handler. The Form.Closing event handler can be used to log off from Pocket Outlook.
The following code shows how to use the ApplicationClass type to log on to and log off from Pocket Outlook.
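A minimal sketch of that pattern follows. The outlookApp field name is an assumption, and the Logon argument (an owner window handle, passed here as 0) may need adjusting depending on how the pimstore type library was imported.

private PocketOutlook.ApplicationClass outlookApp = new PocketOutlook.ApplicationClass();

private void Form1_Load(object sender, EventArgs e)
{
    // Connect to Pocket Outlook before any POOM calls are made.
    outlookApp.Logon(0);
}

private void Form1_Closing(object sender, CancelEventArgs e)
{
    // Disconnect from Pocket Outlook when the application shuts down.
    outlookApp.Logoff();
}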
You can now use functionality from Pocket Outlook. Keep in mind that you are using POOM in combination with COM Interop to be able to communicate with Pocket Outlook. For instance, showing all contacts that are currently available in the Pocket Outlook contacts database is simply a matter of retrieving the location where Pocket Outlook contacts are stored and walking through the Items collection to retrieve individual contacts. You can, for instance, create in your application a separate method for this, such as the ShowContacts method shown in the following code.
private void ShowContacts()
{
    // Fill the list box with contact information from Pocket Outlook
    Folder contactsFolder = outlookApp.GetDefaultFolder(OlDefaultFolders.olFolderContacts);
    PocketOutlook.Items contacts = (PocketOutlook.Items)contactsFolder.Items;

    listBox1.BeginUpdate();
    listBox1.Items.Clear();
    foreach (ContactItem contact in contacts)
    {
        string name = contact.FirstName + " " + contact.LastName;
        listBox1.Items.Add(name);
    }
    listBox1.EndUpdate();
}
As you can see in the preceding code, accessing contacts from Pocket Outlook is simple and straightforward. Accessing appointments and tasks is just as easy. Retrieving detailed information—for instance, from ContactItem data—is also very simple. In this article's download code sample, you will find separate forms to display contact details, to add a new contact, and to create appointments. To get a sense of how easy it is to add new contact information to the Pocket Outlook contacts database by using a managed application, look at the following code.
private void AddContact()
{
    // Locate the Contacts folder and add a new empty contact to the
    // collection
    Folder contactsFolder = outlookApp.GetDefaultFolder(OlDefaultFolders.olFolderContacts);
    PocketOutlook.Items contacts = (PocketOutlook.Items)contactsFolder.Items;
    ContactItem newContact = (ContactItem)contacts.Add();

    // Fill in contact details and save the newContact information
    newContact.FirstName = firstName;
    newContact.LastName = lastName;
    newContact.CompanyName = companyName;
    newContact.BusinessTelephoneNumber = phoneNumber;
    newContact.Email1Address = emailAddress;
    newContact.Save();
}
As you can see, adding new contact information is a straightforward operation. First, you have to access the Pocket Outlook Contacts folder to get access to the collection of contacts. Then, you can simply add a new item to the collection, fill in the contact details, and save the newly added contact in the collection.
In the download code sample, you will see a similar approach to store new contact information or modify existing contact information. The only difference is that the download code sample uses a separate form with a number of text boxes to enter or modify contact information. Creating a new appointment follows the same pattern, as in the following fragment (the method name AddAppointment is assumed).

private void AddAppointment()
{
    Folder apptsFolder = outlookApp.GetDefaultFolder(OlDefaultFolders.olFolderCalendar);
    PocketOutlook.Items appts = (PocketOutlook.Items)apptsFolder.Items;
    AppointmentItem newAppointment = (AppointmentItem)appts.Add();

    // Fill in appointment details and save the newAppointment
    // information
    newAppointment.Subject = subject;
    newAppointment.Start = DateTime.Now;
    newAppointment.Save();
}
To delete an existing Pocket Outlook contact, you select the contact from the contact collection, and then you call the Delete method, as shown in the following code.
private void DeleteContact()
{
    Folder contactsFolder = outlookApp.GetDefaultFolder(OlDefaultFolders.olFolderContacts);
    PocketOutlook.Items contacts = (PocketOutlook.Items)contactsFolder.Items;

    // Assume that contact information is shown in a list box and the
    // currently selected item should be deleted
    ContactItem ci = (ContactItem)contacts.Item(listBox1.SelectedIndex + 1);
    ci.Delete();

    // Refresh the list box's contact information (the deleted contact
    // should no longer be visible)
    ShowContacts();
}
Even if you are not going to use your own application to maintain Pocket Outlook data, having access to Pocket Outlook data can add a lot of functionality to your managed application. Suppose you are creating a mobile line-of-business application and you want to store sales information for clients whose details are already stored in your contacts database. Instead of having to reenter the name of the client, you can simply access the contact information in Pocket Outlook to retrieve the name and any other information you need for that particular client. This ability reduces the amount of data entry for the user, which is very important in a mobile application, and it helps ensure that client information is consistent.
Using the New Windows Mobile 5.0 Managed APIs to Access Pocket Outlook
In the previous section of this article, you saw how the .NET Compact Framework 2.0 contains great functionality to make interoperability with native code easier and more complete. Because of COM Interop, using existing COM objects from inside a managed application is relatively easy. If you are developing applications that exclusively target Windows Mobile 5.0–based devices, it will even be easier to access a number of native objects by means of managed APIs that ship as part of the Windows Mobile 5.0 SDKs.
Particularly, using managed Windows Mobile 5.0 APIs will make using Telephony and Pocket Outlook very easy. For example, you can create a managed application, similar to the one you saw in the previous section of this article. Instead of using COM Interop, this time you will use the Windows Mobile 5.0 managed APIs. You do not need to create a type library to be able to use Pocket Outlook, and you do not need to use the COM–based Pocket Outlook Object Model. Instead, you can simply add a reference to the managed API you need and start using it. All managed APIs of Windows Mobile 5.0–based devices are available through the Microsoft.WindowsMobile class library.
To compare Windows Mobile 5.0 managed APIs with .NET Compact Framework COM Interop, you need to create a new managed project, this time targeting a Windows Mobile 5.0–based Pocket PC. You now should create a user interface similar to the one shown earlier in Figure 11. The next step is to implement exactly the same functionality as you had in the previous sample. Again, the complete source code is available in the download code sample in either C# or Visual Basic .NET.
To be able to create and access Pocket Outlook data items like contacts, appointments, and tasks, you need to add a reference to "Microsoft.WindowsMobile.PocketOutlook" in your managed project. The next thing to do in your application is add the "Microsoft.WindowsMobile.PocketOutlook" namespace.
To be able to access Pocket Outlook by using Windows Mobile 5.0 managed APIs inside your application, you need to create an instance of an OutlookSession object, as shown in the following code. Instantiating this object automatically establishes a connection between your application and Pocket Outlook.
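A minimal sketch is shown below; the olSession field name matches the one used in the later listings in this section, and disposing of the session in the form's Closing handler is one reasonable place to release it.

private OutlookSession olSession = new OutlookSession();

private void Form1_Closing(object sender, CancelEventArgs e)
{
    // Release the connection to Pocket Outlook when the form closes.
    olSession.Dispose();
}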
If you compare the preceding code to the earlier code example about using the ApplicationClass type, you will see that they are almost identical. Using the Windows Mobile 5.0 managed APIs is a little easier because there is no need to separately log on to and log off from an ApplicationClass object. Instead, you are now dealing with an OutlookSession state object that connects to Pocket Outlook when it is instantiated.
To clean up resources after using the OutlookSession object, you should call the Dispose method when you no longer need it. When you have an OutlookSession object instantiated, you can use functionality from Pocket Outlook. For instance, showing all contacts that are currently available in the Pocket Outlook contacts database is simply a matter of retrieving the location where Pocket Outlook contacts are stored and walking through a contacts item collection to retrieve individual contacts. You can, for instance, create a method similar to the earlier ShowContacts method, this time implemented by using the Windows Mobile 5.0 managed APIs.
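A minimal sketch of such a method follows, based on the types described in this section (OutlookSession.Contacts and its Items collection); it is not the download sample's exact listing.

private void ShowContacts()
{
    // Fill the list box with contact information from Pocket Outlook
    listBox1.BeginUpdate();
    listBox1.Items.Clear();
    foreach (Contact contact in olSession.Contacts.Items)
    {
        listBox1.Items.Add(contact.FirstName + " " + contact.LastName);
    }
    listBox1.EndUpdate();
}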
If you compare the preceding code to the earlier POOM-based code for retrieving contacts from Pocket Outlook, you can see that accessing contacts from Pocket Outlook is even simpler when you use the new Windows Mobile 5.0 managed APIs. Accessing appointments and tasks is just as easy. Retrieving detailed information—for instance, from contact data—is also very simple. In the download code sample, you will find separate forms to display contact details, to add a new contact, and to create appointments. To get a sense of how easy it is to add new contact information to the contacts database by using the Windows Mobile 5.0 managed APIs, look at the following code.
private void AddContact()
{
    // Locate the Contacts folder and add a new empty contact to the
    // collection
    ContactFolder cFolder = olSession.Contacts;
    Contact newContact = cFolder.Items.AddNew();

    // Fill in contact details and save the newContact information
    newContact.FirstName = firstName;
    newContact.LastName = lastName;
    newContact.CompanyName = companyName;
    newContact.BusinessTelephoneNumber = phoneNumber;
    newContact.Email1Address = emailAddress;
    newContact.Update();
}
As you can see, adding new contact information is a simple, straightforward operation. First, you have to access the Pocket Outlook Contacts folder to get access to the collection of contacts. Then, you can simply add a new Contact to the collection, fill in the contact details, and save the newly added item data in the collection by using the Update method.
In the download code sample, you will see a similar approach to store new contact information or modify existing contact information. The only difference is that the sample application uses a separate form with a number of text boxes to enter or modify contact information.
If you compare the preceding code—using the Windows Mobile 5.0 managed APIs—to the earlier code—using POOM—to add contact information, you will see that the majority of the code is identical. The biggest difference is that you now use a Contact object, instead of a ContactItem object, to refer to contact information. Another difference is that you use an Update method, instead of a Save method, to store changes to a contact. Creating an appointment follows the same pattern, as in the following fragment (the method name AddAppointment is assumed).

private void AddAppointment()
{
    AppointmentFolder aFolder = olSession.Appointments;
    Appointment newAppointment = aFolder.Items.AddNew();

    // Fill in appointment details and save the newAppointment
    // information
    newAppointment.Subject = subject;
    newAppointment.Start = DateTime.Now;
    newAppointment.Update();
}
Because the Windows Mobile 5.0 managed APIs use strongly typed collections (like AppointmentCollection or ContactCollection), you do not need to cast to a specific type when accessing members of a collection as you did when accessing Pocket Outlook through COM Interop.
To delete an existing Pocket Outlook contact, you select the contact from the contact collection, and then you call the Delete method, as shown in the following code.
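A minimal sketch of that operation is shown below; it mirrors the earlier POOM version, and the indexer access on the Items collection is an assumption, so adjust it to however the contact is selected in your own code.

private void DeleteContact()
{
    // Assume that contact information is shown in a list box and the
    // currently selected item should be deleted
    Contact contact = olSession.Contacts.Items[listBox1.SelectedIndex];
    contact.Delete();

    // Refresh the list box so the deleted contact is no longer visible
    ShowContacts();
}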
Using the Windows Mobile 5.0 Managed APIs makes the process of using Pocket Outlook functionality even easier than using POOM in combination with COM Interop.
Conclusion
With the .NET Compact Framework 2.0, you can use existing COM objects in managed applications. Even though there are a few limitations (no hosting of the common language runtime and no out-of-the-box support of ActiveX controls), COM Interop is a great extension to the .NET Compact Framework. Using existing COM objects inside managed applications enables you to reuse existing code, thus helping to make sure that previous investments can still be used with a minimal amount of work. The new Managed APIs that are exclusively available for Windows Mobile 5.0–based devices create wrappers around existing functionality. They make using that particular functionality extremely simple, for example giving you easy access to Pocket Outlook.
In combination with Visual Studio 2005, the .NET Compact Framework gives you a truly high-productivity development environment. You don't need to spend time creating wrapper DLLs to be able to use COM objects. Instead, you can concentrate on the functionality that is needed for your own application, with the possibility to extensively reuse existing COM objects and native DLLs.
http://msdn.microsoft.com/en-us/library/aa446497.aspx
What's the best way for me to access an API using a React app? The API is currently developed in Golang using kami & mgo for the POST/GET/DELETE requests.
I want to be able to make a GET request to the following URL:
on my React app and store the result in a state attribute:
this.state = {
data: //store the data here
}
The data returned from the API looks like this:

{ systems : [ //list of objects ] }

This is what I have tried so far:

fetch(myRequest)
.then(result => {
console.log(result);
//this.setState({ data: result.json() });
});
You will have to decide on a library to do API calls. A simple way is to use fetch, which is built in to modern browsers. There's a polyfill to cover older ones. jQuery's AJAX or SuperAgent are two alternatives. Here's a simple example using fetch. You'll only have to change the URL of the request.
class Example extends React.Component {
  constructor() {
    super();
    this.state = { data: {} };
  }

  componentDidMount() {
    var self = this;
    fetch('')
      .then(function(response) {
        return response.json();
      }).then(function(data) {
        self.setState({ data }, () => console.log(self.state));
      });
  }

  render() {
    return (
      <div/>
    );
  }
}

ReactDOM.render(<Example/>, document.getElementById('View'));
<script src=""></script>
<script src=""></script>
<div id="View"></div>
https://codedump.io/share/3xahzEZrVhNm/1/best-way-to-access-api-from-react-app
Details of the Python generator send() method
- 2020-05-30 20:33:52
- OfStack
A casual search online turns up explanations that go on for a while without making things clear, so here is a short write-up.
def generator():
    while True:
        receive = yield 1
        print('extra' + str(receive))

g = generator()
print(next(g))
print(g.send(111))
print(next(g))
Output:
1
extra111
1
extraNone
1
Why is that? Look up send in the documentation and you get one sentence:
send: Resumes the execution and "sends" a value into the generator function. The value argument becomes the result of the current yield expression.
So yield 1 as a whole is treated as an expression, and the value you pass to send() becomes the value of that expression, whether or not you assign it to something on the left. Execution then resumes right after the yield: the print call runs with the received value, the loop comes back around to the next yield, and the value of that yield (1 here) is what send() returns.
Of course, in real code you normally don't yield a constant; you yield something computed from the value you receive, otherwise there is not much point in sending anything in.
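For example, a generator that accumulates whatever it is sent might look like this (a small illustration, not from the original post):

def running_total():
    total = 0
    while True:
        # Whatever is passed to send() becomes the value of this yield
        # expression; the current total is what send() returns.
        value = yield total
        if value is not None:
            total += value

acc = running_total()
next(acc)            # prime the generator, returns 0
print(acc.send(5))   # 5
print(acc.send(10))  # 15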
https://ofstack.com/python/22070/python-generation-formula-send-of-method-of-details.html
To be honest, the recent period has been hard for my creativity for various reasons. In spite of the ongoing pandemic, which is hard to get through with kids under the roof, I've started working on something that might be interesting for you. But that's a story to be released in 2 weeks 🤘
Creating patterns
This example may seem strange at first, but the fact is that it's been the actual production usage of scapy in my case. I needed to sniff the network and discover situations where a certain sequence of control bytes was sent, extract some information out of it and send it to an external server.
In this example I'll create such a sequence, add some valid data, save it as a pcap file and then extract it with scapy. Let's begin by adding the sequence 0x01 0x02 0x03 0x04 0x05 to the packets' payload, in front of each byte of actual data to send.
import socket, sys

HOST = 'localhost'
PORT = 4321

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sender:
    sender.connect((HOST, PORT))
    for char in sys.argv[1]:
        sender.sendall(b'\x01\x02\x03\x04\x05' + char.encode('utf-8'))
Now we can start a netcat server:
$ nc -l 127.0.0.1 4321
And listen to the network with Wireshark, filtering connections on the loopback interface (localhost) and on port 4321.

What we also need is to send some actual data. Let's run the script prepared earlier:
$ python send.py "I made it! 🤘"
Scapy – pcap file analysis – code
We need to modify the code a little bit. First of all, we need to sniff offline data saved in pcap files instead of live network communication. To achieve this, we pass the offline argument with the name of the file we want to analyze. To extract the desired packets, we pass a proper function reference as the lfilter argument, and prn to process the filtered packets.
class Sniffer:
    def __init__(self):
        super().__init__()
        self._message = b''

    # ...

    def run(self):
        sniff(
            offline='dump.pcapng',
            lfilter=self.my_filter,
            prn=self.display_filtered_packet
        )
Next, we need to filter the packets that contain the sequence defined earlier:
def my_filter(self, packet: Packet) -> bool:
    return b'\x01\x02\x03\x04\x05' in raw(packet.payload)
The last step is to clean up the data and display it on exit:
class Sniffer:
    # ...

    def __exit__(self, exc_type, exc_val, exc_tb):
        print(self._message.decode('utf-8'))

    def display_filtered_packet(self, packet: Packet):
        raw_packet = raw(packet.payload)
        position = raw_packet.find(b'\x01\x02\x03\x04\x05')
        raw_packet = raw_packet[position:].replace(b'\x01\x02\x03\x04\x05', b'').replace(b'\r', b'')
        self._message += raw_packet
The display_filtered_packet method builds a message, but does not show it immediately, since that would result in showing it byte by byte, which is a problem with UTF-8 characters that are encoded across several bytes.
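Driving the class then looks roughly like this; the sketch assumes the full class on the author's GitLab also defines __enter__ returning self, which is what makes the with statement (and the printing in __exit__) work:

if __name__ == '__main__':
    with Sniffer() as sniffer:
        sniffer.run()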
Execute
Having built our custom pcap file analyzer, let’s run it against the prepared file.
$ python analyze.py
I made it! 🤘
As you can see, I really made it; the message was decoded correctly, including the UTF-8 characters I entered.
Learn more about scapy
I strongly encourage you to visit the scapy project page. You'll learn more about making HTTP requests or fuzzing. If you have any interesting tool built with scapy, let me know in the comments so that we can share more knowledge. You'll find the full code example on my GitLab.
See also
You can also subscribe to the newsletter, and go through previous parts of the HackPy series:
https://blacksheephacks.pl/hackpy-part-4-pcap-files-analysis-with-scapy/
Inset map showing a rectangular region
The pygmt.Figure.inset method adds an inset figure inside a larger figure. The function is called using a with statement, and its position, box, offset, and margin can be customized. Plotting methods called within the with statement plot into the inset figure.
import pygmt

# Set the region of the main figure
region = [137.5, 141, 34, 37]

fig = pygmt.Figure()

# Plot the base map of the main figure. Universal Transverse Mercator (UTM)
# projection is used and the UTM zone is set to be "54S".
fig.basemap(region=region, projection="U54S/12c", frame=["WSne", "af"])

# Set the land color to "lightbrown", the water color to "azure1", the
# shoreline width to "2p", and the area threshold to 1000 km^2 for the main
# figure
fig.coast(land="lightbrown", water="azure1", shorelines="2p", area_thresh=1000)

# Create an inset map, setting the position to bottom right, the width to
# 3 cm, the height to 3.6 cm, and the x- and y-offsets to 0.1 cm,
# respectively. Draws a rectangular box around the inset with a fill
# color of "white" and a pen of "1p".
with fig.inset(position="jBR+w3c/3.6c+o0.1c", box="+gwhite+p1p"):
    # Plot the Japan main land in the inset using coast. "U54S/?" means UTM
    # projection with map width automatically determined from the inset width.
    # Highlight the Japan area in "lightbrown" and draw its outline with a
    # pen of "0.2p".
    fig.coast(
        region=[129, 146, 30, 46],
        projection="U54S/?",
        dcw="JP+glightbrown+p0.2p",
        area_thresh=10000,
    )
    # Plot a rectangle ("r") in the inset map to show the area of the main
    # figure. "+s" means that the first two columns are the longitude and
    # latitude of the bottom left corner of the rectangle, and the last two
    # columns the longitude and latitude of the upper right corner.
    rectangle = [[region[0], region[2], region[1], region[3]]]
    fig.plot(data=rectangle, style="r+s", pen="2p,blue")

fig.show()
https://www.pygmt.org/dev/gallery/embellishments/inset_rectangle_region.html
Saturday, August 25, 2012
Wow,
Posted On Saturday, August 25, 2012 4:24 PM | Comments (0) |
Filed Under [ .Net, Win8 ]
Thursday, July 15, 2010
Hey.
Posted On Thursday, July 15, 2010 3:45 PM | Comments (1) |
Filed Under [ .Net ]
Monday, July 13, 2009
*taptaptap* Is this thing on?
I ran into an interesting bug recently, where you would get 2 identical requests to the page you were visiting. I wasn't sure where this was coming from, so I poked around a bit. It turns out that having an <img /> tag on a page with an empty src attribute (like so: <img src="" />) causes that image tag to point to the containing page, causing the entire page to get requested again (causing any server-side code you may have to get executed again), but the response just gets thrown away, since the img tag is only expecting an image back from the server, not an HTML document.
The way around this is to remove the src attribute altogether, and you can still set the image source through javascript.
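For example (a small illustration, not from the original post), leave the attribute out of the markup and assign the source from script:

<img id="logo" alt="logo" />
<script type="text/javascript">
    // Assign the source later instead of using src=""
    document.getElementById('logo').src = '/images/logo.gif';
</script>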
Posted On Monday, July 13, 2009 9:55 AM | Comments (0) |
Monday, March 5, 2007
Recently).
Posted On Monday, March 5, 2007 2:08 PM | Comments (17) |
Monday, January 29, 2007
In this post, I talked about enabling or disabling a validator client-side. Someone had commented asking how to change the status of a control from valid to invalid. The following javascript works at least for required field validators, I haven't tested it with anything else (but I imagine it should work just fine). Again, this is 2.0, I haven't tested this in 1.1:
var myValidator = document.getElementById('<%=reqField.ClientID%>');
ValidatorValidate(myValidator);
For a required field validator at least, this causes the same actions as clicking on a button on the page.
Hope this helps!
Posted On Monday, January 29, 2007 3:45 PM | Comments (1) |
Wednesday, August 16, 2006
One. :)
Posted On Wednesday, August 16, 2006 2:48 PM | Comments (0) |
Friday, August 11, 2006
If you ever find yourself needing to selectively disable an asp.net validator through javascript, you can do the following (in 2.0, not sure if this exists in 1.x):
function doSomething(){
    var myVal = document.getElementById('myValidatorClientID');
    ValidatorEnable(myVal, false);
}
Quick and easy! Sadly, not as easy to find through Google, so hopefully this post will help that. :)
Posted On Friday, August 11, 2006 8:57 AM | Comments (96) |
Thursday, July 27, 2006
The).
Posted On Thursday, July 27, 2006 10:10 PM | Comments (85) |
Tuesday, May 9, 2006
In the helps for HtmlInputControl.Name (for .NET 2.0):
Gets or sets the unique identifier name for the HtmlInputControl control.
Later on, in that same help file:
In this implementation, the get accessor returns the value of the Control.UniqueID property. However, the set accessor does not assign a value to this property.
So then why say that it sets the property, if it doesn't actually set the property?
Posted On Tuesday, May 9, 2006 1:37 PM | Comments (1) |
Monday, May 8, 2006
I...
Posted On Monday, May 8, 2006 2:08 PM | Comments (3) |
Wednesday, May 3, 2006
Recently I had to retrieve some data from SQL that was tucked away inside XML. We have a “Settings” class inside our application that is a bunch of public properties, which we then serialize to XML to store inside the DB. This lets us easily add/remove fields to store without having to modify the database.
This can be a problem if you need to access those one of those field values from within SQL, but SQL 2005 provides some methods to do this (note that your data column must be the xml data type, I haven't found a way to get this to work by casting an ntext field to xml):
Providing your class looks like this:
[XmlRoot]
public class GlobalSettings
{
    [XmlElement]
    public int SomeValue;
}
Then when serialized, your XML will look similar to this:
<GlobalSettings>
  <SomeValue>5</SomeValue>
</GlobalSettings>
In SQL Management studio, you can write this (assuming GlobalSettings is the table and GlobalSettingsXML is the column where this data is stored):
SELECT GlobalSettingsXML.query('(/GlobalSettings/SomeValue)') FROM GlobalSettings
This will return:
<SomeValue>5</SomeValue>
Which may not be as useful as we want. :) To get the actual value (5) from this, we need to use the value() function:
SELECT GlobalSettingsXML.value('(/GlobalSettings/SomeValue)[1]', 'int') FROM GlobalSettings
This will give us '5', cast to an int. Per the MSDN helps on value(), the [1] is required after your xpath expression because the expression is supposed to return a singleton. I'm not quite sure what that means or how the [1] denotes it, but it works (if you know, by all means tell me!).
Hopefully this will be of use to you.
Posted On Wednesday, May 3, 2006 1:07 PM | Comments (0) |
Monday, April 3, 2006
Posted On Monday, April 3, 2006 2:10 PM | Comments (0) |
Wednesday, February 15, 2006
After).
Posted On Wednesday, February 15, 2006 11:18 AM | Comments (0) |
Monday, February 13, 2006
In my inaugural post, I described a problem I had with a WebMethod erroring out instantly on the client, accompanied by a puzzling message in the Event Log, regarding viewstate.
On a whim, on that site, I decided to “publish” the website from within VS 2005 to a separate directory. I set up another virtual directory within IIS to point to that directory, loaded up the site and IE, and all my WebMethod calls were working!
From IE6, that is.
From FireFox, none of the calls work on the published website....
...the first time. If I click on the link that calls the WebMethod again (after getting the instant error), it works fine. Very weird.
Posted On Monday, February 13, 2006 7:47 AM | Comments (0) |.
Posted On Monday, February 13, 2006 7:07 AM | Comments (0) |
http://geekswithblogs.net/jonasb/Default.aspx
Jakarta Commons Online Bookshelf: XML parsing with Digester. Part 1
Written by Vikram Goyal and reproduced from "Jakarta Commons Online Bookshelf" by permission of Manning Publications Co. ISBN 1932394524, copyright 2005. All rights reserved. See for more information.
XML parsing with Digester
Configuration files are used in all sorts of applications, allowing the application user to modify the details of an application or the way an application starts or behaves. Since storing these configuration details in the runtime code is neither possible nor desirable, these details are stored in external files. These files take a variety of shapes and forms. Some, like the simple name-value pair, let you specify the name of a property followed by the current assigned value. Others, like those based on XML, let you use XML to create hierarchies of configuration data using elements, attributes, and body data.
For application developers, parsing configuration files and modifying the behavior of the application isn't an easy task. Although the details of the configuration file are known in advance (after all, the developers created the basic structure of these files), objects specified within these configuration files may need to be created, set, modified, or deleted. Doing this at runtime is arduous.
The Digester component from Jakarta Commons is used to read XML files and act on the information contained within it using a set of predefined rules. Note that we say XML files, not XML configuration files. Digester can be used on all sorts of XML files, configuration or not.
The Digester component came about because of the need to parse the Struts configuration file. Like so many of the Jakarta Commons projects, its usability in Struts development made it a clear winner for the parsing of other configuration files as well.
In this module, we'll look at the Digester component. We'll start with the basics of Digester by looking at a simple example. We'll then look at the Digester stack and help you understand how it performs pattern matching. This will allow us to tackle all the rules that are prebuilt into Digester and demonstrate how they're useful. We'll use this knowledge to create a rule of our own. We'll round out the module by looking at how the Digester component can be externalized and made namespace-aware.
4.1 The Digester component
As we said, the Digester component came about because of the need to parse the Struts Configuration file. The code base for the parsing the Struts configuration file was dependent on several other parts of Struts. Realizing the importance of making this code base independent of Struts led to the creation of the Digester component. Struts was modified to reuse the Digester component so as not to include any dependencies. In essence, Digester is an XML -> Java object-mapping package. However, it's much more than that. Unlike other similar packages, it's highly configurable and extensible. It's event-driven in the sense that it lets you process objects based on events within the XML configuration file. Events are equivalent to patterns in the Digester world. Unlike the Simple API for XML (SAX), which is also event-driven, Digester provides a highlevel view of the events. This frees you from having to understand SAX events and, instead, lets you concentrate on the processing to be done.
Let's start our exploration of the Digester component with a simple example. Listing 4.1 shows an XML file that we'll use to parse and create objects. This XML file contains bean information for the JavaBeans listed in listings 4.2 and 4.3. Listing 4.4 shows the Digester code required to parse this file and create the JavaBean objects.
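As a flavor of what such code looks like, here is a small self-contained sketch (not the book's Listings 4.1 through 4.4) that maps a simple XML document onto JavaBeans using Digester's rule API; the element and bean names are invented for illustration.

import org.apache.commons.digester.Digester;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;

public class DigesterSketch {

    // Simple JavaBeans that the rules below populate.
    public static class Book {
        private String title;
        public void setTitle(String title) { this.title = title; }
        public String getTitle() { return title; }
    }

    public static class Catalog {
        private final List<Book> books = new ArrayList<Book>();
        public void addBook(Book book) { books.add(book); }
        public List<Book> getBooks() { return books; }
    }

    public static void main(String[] args) throws Exception {
        String xml = "<catalog>"
                   + "<book title='Digester in Action'/>"
                   + "<book title='Commons Cookbook'/>"
                   + "</catalog>";

        Digester digester = new Digester();
        // When <catalog> starts, push a new Catalog onto Digester's object stack.
        digester.addObjectCreate("catalog", Catalog.class);
        // When a nested <book> starts, push a new Book...
        digester.addObjectCreate("catalog/book", Book.class);
        // ...copy matching XML attributes (title) onto its bean properties...
        digester.addSetProperties("catalog/book");
        // ...and when </book> is reached, call catalog.addBook(book).
        digester.addSetNext("catalog/book", "addBook");

        Catalog catalog = (Catalog) digester.parse(new StringReader(xml));
        System.out.println(catalog.getBooks().size() + " books parsed");
    }
}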
Created: March 27, 2003
Revised: May 2, 2005
http://www.webreference.com/programming/jakarta/index.html
An encapsulated persistence layer for Python
A Python encapsulated persistence layer for supporting many data access layers.
Components
### DataManager
The DataManager is the central object of Polydatum. It is a top-level registry for Services, Resources, and Middleware. Typically an application has one DataManager per process. The DataManager also manages Contexts and gives access the DAL.
### Context
The Context contains the current state for the active request. It also provides access to Resources. When used in an HTTP framework typically one context is created at the start of the HTTP request and it ends before the HTTP response is sent.
When used with task managers such as Celery, the Context is created at the start of a task and ends before the task result is returned.
### DAL
The DAL is the "Data Access Layer". The DAL is the registry for all Services. To call a method on a Service, you start with the DAL.
result = dal.someservice.somemethod()
### Service
Services encapsulate business logic and data access. They are the Controller of MVC-like applications. Services can be nested within other services.
dal.register_services(
    someservice=SomeService().register_services(
        subservice=SubService()
    )
)

result = dal.someservice.subservice.somemethod()
### Meta
Meta is data about the context and usually includes things like the active user or HTTP request. Meta is read only and can not be modified inside the context.
class UserService(Service):
    def get_user(self):
        return self._ctx.meta.user

dm = DataManager()
dm.register_services(users=UserService())

with dm.context(meta={'user': 'bob'}) as ctx:
    assert ctx.dal.users.get_user() == 'bob'
### Resource
Resources are on-demand access to data backends such as SQL databases, key stores, and blob stores. Resources have a setup and teardown phase. Resources are only initialized and setup when they are first accessed within a context. This lazy loading ensures that only the Resources that are needed for a particular request are initialized.
The setup/teardown phases are particularly good for checking connections out from a connection pool and checking them back in at the end of the request.
def db_pool(context):
    conn = db.checkout_connection()
    yield conn
    db.checkin_connection(conn)

class ItemService(Service):
    def get_item(self, id):
        return self._data_manager.db.query(
            'SELECT * FROM table WHERE id={id}',
            id=id
        )

dm = DataManager()
dm.register_services(items=ItemService())
dm.register_resources(db=db_pool)

with dm.dal() as dal:
    item = dal.items.get_item(1)
### Middleware
Middleware have a setup and teardown phase for each context. They are particularly useful for managing transactions or error handling.
Context Middleware may only see and modify the Context. With the Context, Context Middleware can gain access to Resources.
def transaction_middleware(context):
    trans = context.db_resource.new_transaction()
    trans.start()
    try:
        yield trans
    except:
        trans.abort()
    else:
        trans.commit()

dm = DataManager()
dm.register_context_middleware(transaction_middleware)
Principles
- Methods that get an object should return None if an object can not be found.
- Methods that rely on an object existing to work (such as create that relies on a parent object) should raise NotFound if the parent object does not exist.
- All data access (SQL, MongoDB, Redis, S3, etc) must be done within a Service.
Considerations
### Middleware vs Resource
A Resource is created on demand. Its purpose is to create a needed resource for a request and clean it up when done. It is created inside the context (and possibly by middleware). Errors that occur during Resource teardown are suppressed.

Middleware is run on every context. It is set up before the context is active and torn down before resources are torn down. Its purpose is to do setup/teardown within the context. Errors that occur in-context are propagated to middleware. Errors that occur in middleware are also propagated.
Testing
To run tests you’ll need to install the test requirements:
pip install -e .
pip install -r src/tests/requirements.txt
Run tests:
cd src/tests && py.test
https://pypi.org/project/polydatum/
27 July 2009 17:54 [Source: ICIS news]
NEW DELHI (ICIS news)--India's Gujarat Alkalies and Chemicals Ltd (GACL) posted a 30.4% year-on-year drop in first-quarter net profit to Indian rupees (Rs) 379.1m ($7.9m), down from Rs544.6m in the same period in 2008, due to the global economic slowdown, the company said on Monday.
Operating profit during the period that ended 30 June declined by 26.3% from Rs818.3m to Rs602.4m, while net sales decreased by 4.6% from Rs3.44bn to Rs3.28bn.
In an attempt to increase sales, GACL said it was outsourcing the production of some chemicals.
The products to be outsourced included calcium chloride, bleaching powder and toluene-based chemicals such as monochloroacetic acid (MCAA).
The company said it planned to commission its Rs26bn multi-product project at Dahej.
The project would comprise plants to manufacture polyols, phenol, caustic soda, chlorine, hydrogen peroxide, hydrazine and a captive power station.
GACL also said it was acquiring 400 acres at Dahej for its joint venture with Gujarat State Fertilizers & Chemicals and Gujarat Narmada Valley Fertilizers Co to set up an Rs100bn petrochemical plant.
GACL said engineering activities were moving forward at its 200,000 tonne/year chloromethanes joint venture with Dow Chemical.
The Dow-GACL SolVenture Ltd is scheduled to commission its Rs6bn plant at Dahej in 2011.
GACL operates manufacturing complexes at Dahej and
($1 = Rs48
http://www.icis.com/Articles/2009/07/27/9235370/gujarat-alkalies-q1-net-profit-falls-30-to-rs379m.html
Exercise 9 is a remake of exercise 6, a common theme in this chapter. Only a few things need to be swapped in order to meet the requirements. We declare a candyBar structure as before, then use new to allocate a dynamic array of three candyBar elements; bar points to the first element of that array, so each structure can be accessed through bar[0], bar[1], and bar[2]. Here is my solution:
Do Programming Exercise 6, but, instead of declaring an array of three CandyBar structures,
use new to allocate the array dynamically.
#include <iostream>
#include <string>
using namespace std;

// Candy bar structure
struct candyBar {
    string brand;
    double weight;
    int calories;
};

int main()
{
    // create three members, but use new to allocate
    candyBar * bar = new candyBar[3];

    bar[0].brand = "Crunch";
    bar[0].weight = 1.7;
    bar[0].calories = 275;

    bar[1].brand = "Heath";
    bar[1].weight = 2.3;
    bar[1].calories = 400;

    bar[2].brand = "Rolo";
    bar[2].weight = 2.5;
    bar[2].calories = 350;

    // Output bars (the original output statements were lost; this loop is a
    // reconstruction that simply prints each member)
    for (int i = 0; i < 3; i++)
    {
        cout << "Brand: " << bar[i].brand << "\n"
             << "Weight: " << bar[i].weight << "\n"
             << "Calories: " << bar[i].calories << "\n\n";
    }

    delete [] bar; // free memory
    cin.get();
    return 0;
}
https://rundata.wordpress.com/2012/10/31/c-primer-chapter-4-exercise-9/
Beginning ASP.NET 2.0
E-Commerce in C# 2005
From Novice to Professional
■ ■ ■
Cristian Darie and Karli Watson
Assistant Production Director: Kari Brooks-Copony
Production Editor: Linda Marousek
Compositor: Susan Glinert Stevens
Proofreader: Nancy Sixsmith
Indexer: Broccoli Information Management
Artist: Kinetic Publishing Services, LLC
■ ■ ■
C H A P T E R 2
Laying Out the Foundations
Now that you've convinced the client that you can create a cool web site to complement the client's store activity, it's time to stop celebrating and start thinking about how to put into practice all the promises made to the client. As usual, when you lay down on paper the technical requirements you must meet, everything starts to seem a bit more complicated than initially anticipated.
■ Note  It is strongly recommended to consistently follow an efficient project-management methodology to maximize the chances of the project's success, on budget and on time. Most project-management theories imply that an initial requirements/specifications document containing the details of the project you're about to create has been signed by you and the client. You can use this document as a guide while creating the solution, and it also allows you to charge extra in case the client brings new requirements or requests changes after development has started. See Appendix B for more details.
To ensure this project's success, you need to come up with a smart way to implement what you've signed the contract for. You want to make your life easy and develop the project smoothly and quickly, but the ultimate goal is to make sure the client is satisfied with your work. Consequently, you should aim to provide your site's increasing number of visitors with a pleasant web experience by creating a nice, functional, and responsive web site by implementing each one of the three development phases described in the first chapter.
The requirements are high, but this is normal for an e-commerce site today. To maximize the chances of success, we'll try to analyze and anticipate as many of the technical requirements as possible, and implement the solution in a way that supports changes and additions with minimal effort.
In this chapter, we'll lay down the foundations for the future BalloonShop web site. We'll talk about what technologies and tools you'll use, and even more important, how you'll use them. Let's consider a quick summary of the goals for this chapter before moving on:
• Analyze the project from a technical point of view.
• Analyze and choose an architecture for your application.
• Decide which technologies, programming languages, and tools to use.
• Discuss naming and coding conventions.
• Create the basic structure of the web site and set up the database.
Designing for Growth
The word "design" in the context of a Web Application can mean many things. Its most popular usage probably refers to the visual and user interface (UI) design of a web site.
This aspect is crucial because, let's face it, the visitor is often more impressed with how a site looks and how easy it is to use than about which technologies and techniques are used behind the scenes, or what operating system the web server is running. If the site is hard to use and easy to forget, it just doesn't matter what rocket science was used to create it.
Unfortunately, this truth makes many inexperienced programmers underestimate the importance of the way the invisible part of the site is implemented—the code, the database, and so on. The visual part of a site gets visitors interested to begin with, but its functionality makes them come back. A web site can sometimes be implemented very quickly based on certain initial requirements, but if not properly architected, it can become difficult, if not impossible, to change.
For any project of any size, some preparation must be done before starting to code. Still, no matter how much planning and design work is done, the unexpected does happen and hidden catches, new requirements, and changing rules always seem to work against deadlines. Even without these unexpected factors, site designers are often asked to change or add new functionality after the project is finished and deployed. This also will be the case for BalloonShop, which you'll implement in three separate stages, as discussed in Chapter 1.
You'll learn how to create the web site so that the site (or you) will not fall apart when functionality is extended or updates are made. Because this is a programming book, it doesn't address important aspects of e-commerce, such as designing the UI, marketing techniques, or legal issues. You'll need additional material to cover that ground. Instead, in this book, we'll pay close attention to constructing the code that makes the site work.
The phrase "designing the code" can have different meanings; for example, we'll need to have a short talk about naming conventions. Still, the most important aspect that we need to look at is the architecture to use when writing the code. The architecture refers to the way you split the code for a simple piece of functionality (for example, the product search feature) into smaller, interconnected components. Although it might be easier to implement that functionality as quickly and as simply as possible, in a single component, you gain great long-term advantages by creating more components that work together to achieve the desired result.

Before considering the architecture itself, you must determine what you want from this architecture.
Meeting Long-Term Requirements with Minimal Effort
Apart from the fact that you want a fast web site, each of the phases of development we talked about in Chapter 1 brings new requirements that must be met.

Every time you proceed to a new stage, you want to reuse most of the already existing solution. It would be very inefficient to redesign the site (not just the visual part, but the code as well!) just because you need to add a new feature. You can make it easier to reuse the solution by planning ahead so that any new functionality that needs to be added can slot in with ease, rather than each change causing a new headache.
When building the web site, implementing a flexible architecture composed of pluggable components allows you to add new features—such as the shopping cart, the departments list, or the product search feature—by coding them as separate components and plugging them into the existing application. Achieving a good level of flexibility is one of the goals regarding the application's architecture, and this chapter shows how you can put this into practice. You'll see that the level of flexibility is proportional to the amount of time required to design and implement it, so we'll try to find a compromise that provides the best gains without complicating the code too much.
Another major requirement that is common to all online applications is to have a scalable architecture. Scalability is defined as the capability to increase resources to yield a linear increase in service capacity. In other words, in a scalable system, the ratio (proportion) between the number of client requests and the hardware resources required to handle those requests is constant, even when the number of clients increases (ideally). An unscalable system can't deal with an increasing number of clients, no matter how many hardware resources are provided. Because we're optimistic about the number of customers, we must be sure that the site will be able to deliver its functionality to a large number of clients without throwing out errors or performing sluggishly.
Reliability is also a critical aspect for an e-commerce application. With the help of a coherent error-handling strategy and a powerful relational database, you can ensure data integrity and ensure that noncritical errors are properly handled without bringing the site to its knees.
The Magic of the Three-Tier Architecture
Generally, the architecture refers to splitting each piece of the application's functionality into separate components based on what they do and grouping each kind of component into a single logical tier.

The three-tier architecture has become popular today because it answers most of the problems discussed so far by splitting an application's functionality unit into three logical tiers:
• The presentation tier
• The business tier
• The data tier
The presentation tier contains the UI elements of the site, and includes all the logic that manages the interaction between the visitor and the client's business. This tier makes the whole site feel alive, and the way you design it is crucially important to the site's success. Because your application is a web site, its presentation tier is composed of dynamic web pages.
The business tier (also called the middle tier) receives requests from the presentation tier and returns a result to the presentation tier depending on the business logic it contains. Almost any event that happens in the presentation tier results in the business tier being called (except events that can be handled locally by the presentation tier, such as simple input data validation). For example, if the visitor is doing a product search, the presentation tier calls the business tier and says, "Please send me back the products that match this search criterion." Almost always, the business tier needs to call the data tier for information to respond to the presentation tier's request.
The data tier (sometimes referred to as the database tier) is responsible for storing the application's data and sending it to the business tier when requested. For the BalloonShop e-commerce site, you'll need to store data about products (including their categories and their departments), users, shopping carts, and so on. Almost every client request finally results in the data tier being interrogated for information (except when previously retrieved data has been cached at the business tier or presentation tier levels), so it's important to have a fast database system. In Chapters 3 and 4, you'll learn how to design the database for optimum performance.
These tiers are purely logical—there is no constraint on the physical location of each tier. You're free to place all the application, and implicitly all its tiers, on a single server machine. Alternatively, you can place each tier on a separate machine or even split the components of a single tier over multiple machines. Your choice depends on the particular performance requirements of the application. This kind of flexibility allows you to achieve many benefits, as you'll soon see.
An important constraint in the three-layered architecture model is that information must flow in sequential order between tiers. The presentation tier is only allowed to access the business tier and never directly the data tier. The business tier is the "brain" in the middle that communicates with the other tiers and processes and coordinates all the information flow. If the presentation tier directly accessed the data tier, the rules of three-tier architecture programming would be broken. When you implement a three-tier architecture, you must be consistent and obey its rules to reap the benefits.
Figure 2-1 is a simple representation of how data is passed in an application that implements the three-tier architecture.
Figure 2-1. Simple representation of the three-tier architecture
A Simple Scenario
It's easier to understand how data is passed and transformed between tiers if you take a closer look at a simple example. To make the example even more relevant to the project, let's analyze a situation that will actually happen in BalloonShop. This scenario is typical for three-tier applications.
Like most e-commerce sites, BalloonShop will have a shopping cart, which we'll discuss later in the book. For now, it's enough to know that the visitor will add products to the shopping cart by clicking an Add to Cart button. Figure 2-2 shows how the information flows through the application when that button is clicked.
Figure 2-2. Internet visitor interacting with a three-tier application
When the user clicks the Add to Cart button for a specific product (Step 1), the presenta-
tion tier (which contains the button) forwards the request to the business tier—“Hey, I want
this product added to the visitor’s shopping cart!” (Step 2). The business tier receives the request,
understands that the user wants a specific product added to the shopping cart, and handles the
request by telling the data tier to update the visitor’s shopping cart by adding the selected
product (Step 3). The data tier needs to be called because it stores and manages the entire web
site’s data, including users’ shopping cart information.
The data tier updates the database (Step 4) and eventually returns a success code to the
business tier. The business tier (Step 5) handles the return code and any errors that might
have occurred in the data tier while updating the database and then returns the output to the
presentation tier.
Finally, the presentation tier generates an updated view of the shopping cart (Step 6). The
results of the execution are wrapped up by generating an HTML (Hypertext Markup Language)
web page that is returned to the visitor (Step 7), where the updated shopping cart can be seen
in the visitor’s favorite web browser.
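To make the flow a bit more concrete, here is a minimal C# sketch of how such a request might travel through the tiers. The class and method names (ShoppingCartBusiness, ShoppingCartData, and so on) are made up for this illustration; they are not the actual BalloonShop classes you'll build later in the book:
// Presentation tier: the Add to Cart button's Click event handler (Steps 1 and 2)
protected void addToCartButton_Click(object sender, EventArgs e)
{
   string productId = "123"; // in reality, read from the page that raised the event
   // forward the request to the business tier (never call the data tier from here)
   ShoppingCartBusiness.AddProduct(productId);
   // Steps 6 and 7: regenerate the shopping cart view for the visitor
}
// Business tier: handles the request and asks the data tier to do the storage work (Steps 3 and 5)
public static class ShoppingCartBusiness
{
   public static void AddProduct(string productId)
   {
      ShoppingCartData.AddProduct(productId);
      // handle any return codes or errors coming from the data tier here
   }
}
// Data tier: updates the database (Step 4)
public static class ShoppingCartData
{
   public static void AddProduct(string productId)
   {
      // build and execute the database command that adds the product
      // to the visitor's shopping cart
   }
}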
Note that in this simple example, the business tier doesn’t do a lot of processing, and its
business logic isn’t very complex. However, if new business rules appear for your application,
you would change the business tier. If, for example, the business logic specified that a product
could only be added to the shopping cart if its quantity in stock were greater than zero, an
additional data tier call would need to be made to determine the quantity. The data tier would
only be requested to update the shopping cart if products were in stock. In any case, the
presentation tier is informed about the status and provides human-readable feedback to the
visitor.
What’s in a Number?
It’s interesting to note how each tier interprets the same piece of information differently. For
the data tier, the numbers and information it stores have no significance because this tier is an
engine that saves, manages, and retrieves numbers, strings, or other data types—not product
quantities or product names. In the context of the previous example, a product quantity of 0
represents a simple, plain number without any meaning to the data tier (it is simply 0, a 32-bit
integer).
The data gains significance when the business tier reads it. When the business tier asks the
data tier for a product quantity and gets a “0” result, this is interpreted by the business tier as
“Hey, no products in stock!” This data is finally wrapped in a nice, visual form by the presenta-
tion tier, for example, a label reading, “Sorry, at the moment the product cannot be ordered.”
Even though it's unlikely that you'd want to forbid a customer from adding a product to the shopping
cart just because the product isn't currently in stock, the example (described in Figure 2-3) illustrates
once more how each of the three tiers has a different purpose.
Figure 2-3. Internet visitor interacting with a three-tier application
The Right Logic for the Right Tier
Because each layer contains its own logic, sometimes it can be tricky to decide where exactly
to draw the line between the tiers. In the previous scenario, instead of reading the product’s
quantity in the business tier and deciding whether the product is available based on that number
(resulting in two data tier, and implicitly database, calls), you could have a single data tier method
named AddProductIfAvailable that adds the product to the shopping cart only if it’s available
in stock.
In this scenario, some logic is transferred from the business tier to the data tier. In many
other circumstances, you might have the option to place the same logic in one tier or another,
or maybe in both. In most cases, there is no single best way to implement the three-tier architec-
ture, and you’ll need to make a compromise or a choice based on personal preference or
external constraints.
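To illustrate the trade-off with a hedged example (the method and class names below are made up, not actual BalloonShop code), the same business rule could be placed in either of two ways inside a business-tier class:
// Option 1: the rule lives in the business tier, at the cost of two data tier calls
public static bool AddProductIfAvailable(string cartId, string productId)
{
   int quantityInStock = CatalogData.GetProductQuantity(productId);
   if (quantityInStock > 0)
   {
      ShoppingCartData.AddProduct(cartId, productId);
      return true;
   }
   return false;
}
// Option 2: a single data tier call; the availability check happens inside the
// data tier (for example, in an AddProductIfAvailable stored procedure)
public static bool AddProductToCart(string cartId, string productId)
{
   return ShoppingCartData.AddProductIfAvailable(cartId, productId);
}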
Occasionally, even though you know the right way (in respect to the architecture) to
implement something, you might choose to break the rules to get a performance gain. As a
general rule, if performance can be improved this way, it’s okay to break the strict limits between
tiers just a little bit (for example, add some of the business rules to the data tier or vice versa),
if these rules are not likely to change in time. Otherwise, keeping all the business rules in the
middle tier is preferable because it generates a “cleaner” application that is easier to maintain.
A Three-Tier Architecture for BalloonShop
Implementing a three-tiered architecture for the BalloonShop web site will help you achieve
the goals listed at the beginning of the chapter. The coding discipline imposed by a system that
might seem rigid at first sight allows for excellent levels of flexibility and extensibility in the
long run.
Splitting major parts of the application into separate, smaller components also encourages
reusability. More than once when adding new features to the site you’ll see that you can reuse
some of the already existing bits. Adding a new feature without needing to change much of
what already exists is, in itself, a good example of reusability. Also, smaller pieces of code placed
in their correct places are easier to document and analyze later.
Another advantage of the three-tiered architecture is that, if properly implemented, the
overall system is resistant to changes. When bits in one of the tiers change, the other tiers
usually remain unaffected, sometimes even in extreme cases. For example, if for some reason
the backend database system is changed (say, the manager decides to use Oracle instead of
SQL Server), you only need to update the data tier. The existing business tier should work the
same with the new database.
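A common way to obtain this kind of isolation is to have the business tier depend on an abstraction of the data tier rather than on a particular database product. The following C# sketch only illustrates the principle; it is not the mechanism BalloonShop itself uses:
// The business tier works against this abstraction...
public interface ICatalogData
{
   System.Data.DataTable GetDepartments();
}
// ...so switching to another database engine means writing a new implementation
// of the interface, without touching the business tier at all.
public class SqlServerCatalogData : ICatalogData
{
   public System.Data.DataTable GetDepartments()
   {
      // SQL Server-specific data access code would go here
      return new System.Data.DataTable();
   }
}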
Why Not Use More Tiers?
The three-tier architecture we’ve been talking about so far is a particular (and the most popular)
version of the n-Tier Architecture, which is a commonly used buzzword these days. n-Tier
architecture refers to splitting the solution into a number (n) of logical tiers. In complex projects,
sometimes it makes sense to split the business layer into more than one layer, thus resulting in
an architecture with more than three layers. However, for this web site, it makes most sense to
stick with the three-layered design, which offers most of the benefits while not requiring too
many hours of design or a complex hierarchy of framework code to support the architecture.
Maybe with a more involved and complex architecture, you would achieve even higher
levels of flexibility and scalability for the application, but you would need much more time for
design before starting to implement anything. As with any programming project, you must
find a fair balance between the time required to design the architecture and the time spent to
implement it. The three-tier architecture is best suited to projects with average complexity, like
the BalloonShop web site.
You also might be asking the opposite question, “Why not use fewer tiers?” A two-tier
architecture, also called client-server architecture, can be appropriate for less-complex projects. In
short, a two-tier architecture requires less time for planning and allows quicker development
in the beginning, although it generates an application that’s harder to maintain and extend in
the long run. Because we’re expecting to extend the application in the future, the client-server
architecture isn’t appropriate for this application, so it won’t be discussed further in this book.
Now that you know the general architecture, let’s see what technologies and tools you’ll
use to implement it. After a brief discussion of the technologies, you’ll create the foundation of
the presentation and data tiers by creating the first page of the site and the backend database.
You’ll start implementing some real functionality in each of the three tiers in Chapter 3 when
you start creating the web site’s product catalog.
Choosing Technologies and Tools
No matter which architecture is chosen, a major question that arises in every development
project is which technologies, programming languages, and tools are going to be used, bearing
in mind that external requirements can seriously limit your options.
■Note
In this book, we’re creating a web site using Microsoft technologies. Keep in mind, however, that
when it comes to technology, problems often have more than one solution, and rarely is there only a single
best way to solve the problem. Although we really like Microsoft’s technologies as presented in this book, it
doesn’t necessarily mean they’re the best choice for any kind of project, in any circumstances. Additionally,
in many situations, you must use specific technologies because of client requirements or other external constraints.
The System Requirements and Software Requirements stages in the software development process will
determine which technologies you must use for creating the application. See Appendix B for more details.
This book is about programming e-commerce web sites with ASP.NET 2.0 (Active Server
Pages .NET 2.0) and C#. The tools you’ll use are Visual Web Developer 2005 Express Edition
and SQL Server 2005 Express Edition, which are freely available from Microsoft’s web site. See
Appendix A for installation instructions. Although the book assumes a little previous experi-
ence with each of these, we’ll take a quick look at them and see how they fit into the project and
into the three-tier architecture.
■Note
This book builds on Beginning ASP.NET 1.1 E-Commerce: From Novice to Professional (Apress, 2004),
which used ASP.NET 1.1, Visual Studio .NET 2003, and SQL Server 2000. If you’re an open source fan, you
might also want to check out Beginning PHP 5 and MySQL E-Commerce: From Novice to Professional
(Apress, 2004).
Using ASP.NET 2.0
ASP.NET 2.0 is Microsoft’s latest technology set for building dynamic, interactive web content.
Compared to its previous versions, ASP.NET 2.0 includes many new features aimed at increasing
the web developer’s productivity in building web applications.
Because this book is targeted at both existing ASP.NET 1.1 and existing ASP.NET 2.0
developers, we’ll highlight a number of ASP.NET 2.0-specific techniques along the way and try
to provide useful tips and tricks that increase your coding efficiency by making the most of this
technology. However, do keep in mind that while building your e-commerce web site with this
book, we only cover a subset of the vast number of features ASP.NET 2.0 has to offer. Therefore,
you still need additional ASP.NET 2.0 books (or other resources) to use as a reference and to
complete your knowledge on theory issues that didn’t make it into this book. In the Apress
technology tree, reading this book comes naturally after Beginning ASP.NET 2.0 in C#: From
Novice to Professional (Apress, 2005), but you can always use the beginners’ books of your
choice instead.
ASP.NET is not the only server-side technology around for creating professional e-commerce
web sites. Among its most popular competitors are PHP (Hypertext Preprocessor), JSP (JavaServer
Pages), ColdFusion, and even the outdated ASP 3.0 and CGI (Common Gateway Interface).
Among these technologies are many differences, but also some fundamental similarities. For
example, pages written with any of these technologies are composed of basic HTML, which
draws the static part of the page (the template), and code that generates the dynamic part.
Web Clients and Web Servers
You probably already know the general principles about how dynamic web pages work. However,
as a short recap, Figure 2-4 shows what happens to an ASP.NET web page from the moment the
client browser (no matter if it’s Internet Explorer, Mozilla Firefox, or any other web browser)
requests it to the moment the browser actually receives it.
Figure 2-4. Web server processing client requests
After the request, the page is first processed at the server before being returned to the
client (this is the reason ASP.NET and the other mentioned technologies are called server-side
technologies). When an ASP.NET page is requested, its underlying code is first executed on the
server. After the final page is composed, the resulting HTML is returned to the visitor’s browser.
The returned HTML can optionally contain client-side script code, which is directly inter-
preted by the browser. The most popular client-side scripting technologies are JavaScript and
VBScript. JavaScript is usually the better choice because it has wider acceptance, whereas only
Internet Explorer recognizes VBScript. Other important client-side technologies are Macromedia
Flash and Java applets, but these are somewhat different because the web browser does not
directly parse them—Flash requires a specialized plug-in and Java applets require a JVM (Java
Virtual Machine). Internet Explorer also supports ActiveX controls and .NET assemblies.
The Code Behind the Page
From its first version, ASP.NET encouraged (and helped) developers to keep the code of a web
page physically separated from the HTML layout of that page. Keeping the code that gives life
to a web page in a separate file from the HTML layout of the page was an important improve-
ment over other server-side web-development technologies whose mix of code and HTML in
the same file often led to long and complicated source files that were hard to document, change,
and maintain. Also, a file containing both code and HTML is the subject of both programmers’
and designers’ work, which makes team collaboration unnecessarily complicated and increases
the chances of the designer creating bugs in the code logic while working on cosmetic changes.
ASP.NET 1.0 introduced a code-behind model, used to separate the HTML layout of a web
page from the code that gives life to that page. Although it was possible to write the code and
HTML in the same file, Visual Studio .NET 2002 and Visual Studio .NET 2003 always automati-
cally generated two separate files for a Web Form: the HTML layout resided in the .ASPX file
and the code resided in the code-behind file. Because ASP.NET allowed the developer to write
the code in the programming language of his choice (such as C# or VB .NET), the code-behind
file’s extension depended on the language it was written in (such as .ASPX.CS or .ASPX.VB).
ASP.NET 2.0 uses a refined code-behind model. Although the new model is more
powerful, the general principles (to help separate the page’s looks from its brain) are still the
same. We’ll look over the differences a bit later, especially for existing ASP.NET 1.x developers
migrating to ASP.NET 2.0.
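As a quick illustration of the model (the names below are simply the defaults Visual Web Developer generates, shown here as an example rather than code you need to type in yet), a Web Form named Default.aspx is typically paired with a Default.aspx.cs code-behind file along these lines:
// Default.aspx.cs -- the code-behind file of Default.aspx
using System;
public partial class _Default : System.Web.UI.Page
{
   // executed on the server every time the page is requested
   protected void Page_Load(object sender, EventArgs e)
   {
      // the page's server-side logic lives here, physically separated
      // from the HTML layout stored in Default.aspx
   }
}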
Before moving on, let’s summarize the most important general features of ASP.NET:
• The server-side code can be written in the .NET language of your choice. By default, you
can choose from C#, VB .NET, and J#, but the whole infrastructure is designed to support
additional languages. These languages are powerful and fully object oriented.
• The server-side code of ASP.NET pages is fully compiled and executed—as opposed to
being interpreted line by line—which results in optimal performance and offers the
possibility to detect a number of errors at compile-time instead of runtime.
• The concept of code-behind files helps separate the visual part of the page from the
(server-side) logic behind it. This is an advantage over other technologies, in which both
the HTML and the server-side code reside in the same file (often resulting in the popular
“spaghetti code”).
• Visual Web Developer 2005 is an excellent and complete visual editor that represents a
good weapon in the ASP.NET programmer’s arsenal (although you don’t need it to
create ASP.NET Web Applications). Visual Web Developer 2005 Express Edition is free,
and you can use it to develop the examples in this book.
ASP.NET Web Forms, Web User Controls, and Master Pages
ASP.NET web sites are developed around ASP.NET Web Forms. ASP.NET Web Forms have the
.aspx extension and are the standard way to provide web functionality to clients. A request to
an ASPX resource (a URL ending in .aspx, such as default.aspx) results
in the default.aspx file being executed on the server (together with its code-behind file) and
the results being composed as an HTML page that is sent back to the client. Usually, the .aspx
file has an associated code-behind file, which is also considered part of the Web Form.
Web User Controls and Master Pages are similar to Web Forms in that they are also
composed of HTML and code (they also support the code-behind model), but they can’t be
directly accessed by clients. Instead, they are used to compose the content of the Web Forms.
Web User Controls are files with the .ascx extension that can be included in Web Forms,
with the parent Web Form becoming the container of the control. Web User Controls allow you
to easily reuse pieces of functionality in a number of Web Forms.
Master Pages are a new feature of ASP.NET 2.0. A Master Page is a template that can be
applied to a number of Web Forms in a site to ensure a consistent visual appearance and function-
ality throughout the various pages of the site. Updating the Master Page has an immediate
effect on every Web Form built on top of that Master Page.
Web User Controls, Web Server Controls, and HTML Server Controls
It's worth taking a second look
at Web User Controls from another perspective. Web User Controls are a particular type of
server-side control. Server-side controls generically refer to three kinds of controls: Web User
Controls, Web Server Controls, and HTML Server Controls. All these kinds of controls can be
used to reuse pieces of functionality inside Web Forms.
As stated in the previous section, Web User Controls are files with the .ascx extension that
have a structure similar to the structure of Web Forms, but they can’t be requested directly by
a client web browser; instead, they are meant to be included in Web Forms or other Web User
Controls.
Web Server Controls are compiled .NET classes that, when executed, generate HTML
output (eventually including client-side script). You can use them in Web Forms or in Web
User Controls. The .NET Framework ships with a large number of Web Server Controls (many
of which are new to version 2.0 of the framework), including simple controls such as Label,
TextBox, or Button, and more complex controls, such as validation controls, data controls, the
famous GridView control (which is meant to replace the old DataGrid), and so on. Web Server
Controls are powerful, but they are more complicated to code because all their functionality
must be implemented manually. Among other features, you can programmatically declare and
access their properties, make these properties accessible through the Visual Web Developer
designer, and add the controls to the toolbox, just as in Windows Forms applications or old
VB6 programs.
HTML Server Controls allow you to programmatically access HTML elements of the page
from code (such as from the code-behind file). You transform an HTML control to an HTML
Server Control by adding the runat="server" attribute to it. Most HTML Server Controls are
doubled by Web Server Controls (such as labels, buttons, and so on). For consistency, we’ll
stick with Web Server Controls most of the time, but you’ll need to use HTML Server Controls
in some cases.
For the BalloonShop project, you’ll use all kinds of controls, and you’ll create a number of
Web User Controls.
Because you can develop Web User Controls independently of the main web site and then
just plug them in when they’re ready, having a site structure based on Web User Controls
provides an excellent level of flexibility and reusability.
ASP.NET and the Three-Tier Architecture
The collection of Web Forms, Web User Controls, and Master Pages form the presentation tier
of the application. They are the part that creates the HTML code loaded by the visitor’s browser.
The logic of the UI is stored in the code-behind files of the Web Forms, Web User Controls,
and Master Pages. Note that although you don’t need to use code-behind files with ASP.NET
(you’re free to mix code and HTML just as you did with ASP), we’ll exclusively use the code-
behind model for the presentation-tier logic.
In the context of a three-tier application, the logic in the presentation tier usually refers to
the various event handlers, such as Page_Load and someButton_Click. As you learned earlier,
these event handlers should call business-tier methods to get their jobs done (and never call
the data tier directly).
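In practice such a handler stays thin and simply delegates the work, as in this sketch (the control names and the SearchBusiness class are hypothetical):
protected void searchButton_Click(object sender, EventArgs e)
{
   // ask the business tier for the results; the presentation tier
   // never talks to the database directly
   System.Data.DataTable results = SearchBusiness.Search(searchTextBox.Text);
   searchResultsGrid.DataSource = results;
   searchResultsGrid.DataBind();
}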
Using C# and VB .NET
C# and VB .NET are languages that can be used to code the Web Forms’ code-behind files. In
this book, we’re using C#; in a separate version of this book called Beginning ASP.NET E-Commerce
in VB .NET: From Novice to Professional, we’ll present the same functionality using VB .NET.
Unlike its previous version (VB6), VB .NET is a fully object-oriented language and takes advan-
tage of all the features provided by the .NET Framework.
ASP.NET 2.0 even allows you to write the code for various elements inside a project in
different languages, but we won’t use this feature in this book. Separate projects written in
different .NET languages can freely interoperate, as long as you follow some basic rules. For
more information about how the .NET Framework works, you should read a general-purpose
.NET book.
■Note  Just because you can use multiple languages in a single project doesn't mean you should
overuse that feature, if you have a choice. Being consistent is more important than playing with diversity if you
care for long-term ease of maintenance and prefer to avoid unnecessary headaches (which is something that
most programmers do).
In this book, apart from using C# for the code-behind files, you’ll use the same language to
code the middle tier classes. You’ll create the first classes in Chapter 3 when building the product
catalog, and you’ll learn more details there, including a number of new features that come with
.NET 2.0.
Using Visual Studio 2005 and Visual Web Developer 2005
Express Edition
Visual Studio 2005 is by far the most powerful tool you can find to develop .NET applications.
Visual Studio is a complete programming environment capable of working with many types of
projects and files, including Windows and Web Forms projects, setup and deployment projects,
and many others. Visual Studio also can be used as an interface to the database to create tables
and stored procedures, implement table relationships, and so on.
Visual Web Developer 2005 Express Edition is a free version of Visual Studio 2005, focused
on developing Web Applications with ASP.NET 2.0. Because the code in this book can be
built with any of these products, we’ll use the terms Visual Web Developer and Visual Studio
interchangeably.
A significant new feature in Visual Studio .NET 2005 and Visual Web Developer 2005
compared to previous versions of Visual Studio is the presence of an integrated web server,
which permits you to execute your ASP.NET Web Applications even if you don’t have IIS
(Internet Information Services) installed on your machine. This is good news for Windows XP
Home Edition users, who can’t install IIS on their machines because it isn’t supported by the
operating system.
Although we’ll use Visual Web Developer 2005 Express Edition for writing the BalloonShop
project, it’s important to know that you don’t have to. ASP.NET and the C# and VB .NET compilers
are available as free downloads from Microsoft as part of the .NET Framework
SDK (Software Developers Kit), and a simple editor such as Notepad is enough to create any
kind of web page.
■Tip  In the ASP.NET 1.x days when there were no free versions of Visual Studio, many developers preferred
to use a neat program called Web Matrix, a free ASP.NET development tool whose installer
was only 1.3MB. Development of Web Matrix has
been discontinued though, because Visual Web Developer 2005 Express Edition is both powerful and free.
Visual Studio 2005 and Visual Web Developer 2005 come with many new features compared to
their earlier versions, and we'll cover a number of them while creating the BalloonShop project.
Using SQL Server 2005
Along with .NET Framework 2.0 and Visual Studio 2005, Microsoft also released a new version
of its player in the Relational Database Management Systems (RDBMS) field—SQL Server 2005.
This complex software program’s purpose is to store, manage, and retrieve data as quickly and
reliably as possible. You’ll use SQL Server to store all the information regarding your web site,
which will be dynamically placed on the web page by the application logic. Simply said, all data
regarding the products, departments, users, shopping carts, and so on will be stored and managed
by SQL Server.
The good news is that a lightweight version of SQL Server 2005, named SQL Server 2005
Express Edition, is freely available. Unlike the commercial versions, SQL Server 2005 Express
Edition doesn’t ship by default with any visual-management utilities. However, a very nice tool
called SQL Server Express Manager is also freely available. Appendix A contains details for
installing both SQL Server 2005 Express Edition and SQL Server Express Manager.
■Tip  To learn more about the differences between SQL Server 2005 Express Edition and the other versions,
check Microsoft's web site.
The first steps in interacting with SQL Server come a bit later in this chapter when you
create the BalloonShop database.
SQL Server and the Three-Tier Architecture
It should be clear by now that SQL Server is somehow related to the data tier. However, if you
haven’t worked with databases until now, it might be less than obvious that SQL Server is more
than a simple store of data. Apart from the actual data stored inside, SQL Server is also capable
of storing logic in the form of stored procedures, maintaining table relationships, ensuring that
various data integrity rules are obeyed, and so on.
You can communicate with SQL Server through a language called T-SQL (Transact-SQL),
which is the SQL dialect recognized by SQL Server. SQL, or Structured Query Language, is the
language used to interact with the database. SQL is used to transmit to the database instructions
such as “Send me the last 10 orders” or “Delete product #123.”
Although it’s possible to compose T-SQL statements in your C# code and then submit
them for execution, this is generally a bad practice, because it incurs security, consistency, and
performance penalties. In our solution, we’ll store all data tier logic using stored procedures.
Historically, stored procedures were programs that were stored internally in the database and
were written in T-SQL. This still stands true with SQL Server 2005, which also brings the notion
of managed stored procedures that can be written in a .NET language such as C# and VB.NET
and are, as a result, compiled instead of interpreted.
■Note
Writing stored procedures in C#, also called managed stored procedures, doesn’t just sound inter-
esting, it actually is. However, managed stored procedures are very powerful weapons, and as with any
weapon, only particular circumstances justify using them. Typically it makes sense to use managed stored
procedures when you need to perform complex mathematical operations or complex logic that can’t be easily
implemented with T-SQL. However, learning how to do these tasks the right way requires a good deal of
research, which is outside the scope of this book. Moreover, the data logic in this book didn’t justify adding
any managed stored procedures, and as a result you won’t see any here. Learning how to program managed
stored procedures takes quite a bit of time, and you might want to check out one of the books that are dedicated
to writing managed code under SQL Server.
The stored procedures are stored internally in the database and can be called from external
programs. In your architecture, the stored procedures will be called from the business tier. The
stored procedures in turn manipulate or access the data store, get the results, and return them
to the business tier (or perform the necessary operations).
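As an example of what such a call might look like, the following business-tier sketch invokes a hypothetical GetDepartments stored procedure through ADO.NET; the class name and connection string are placeholders, and you'll build the real catalog code starting with Chapter 3:
using System.Data;
using System.Data.SqlClient;
public static class CatalogAccess
{
   // business tier method that calls a data tier stored procedure
   public static DataTable GetDepartments()
   {
      using (SqlConnection connection = new SqlConnection(
         "Server=(local);Database=BalloonShop;Integrated Security=True"))
      using (SqlCommand command = new SqlCommand("GetDepartments", connection))
      {
         // we're calling a stored procedure, not sending inline T-SQL
         command.CommandType = CommandType.StoredProcedure;
         DataTable departments = new DataTable();
         connection.Open();
         departments.Load(command.ExecuteReader());
         return departments;
      }
   }
}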
Figure 2-5 shows the technologies associated with every tier in the three-tier architecture.
SQL Server contains the data tier of the application (stored procedures that contain the logic to
access and manipulate data) and also the actual data store.
Figure 2-5. Using Microsoft technologies and the three-tier architecture
Following Coding Standards
Although coding and naming standards might not seem that important at first, they definitely
shouldn’t be overlooked. Not following a set of rules for your code almost always results in
code that’s hard to read, understand, and maintain. On the other hand, when you follow a
consistent way of coding, you can say your code is already half documented, which is an
important contribution toward the project’s maintainability, especially when many people are
working at the same project at the same time.
■Tip  Some companies have their own policies regarding coding and naming standards, whereas in other
cases you'll have the flexibility to use your own preferences. In either case, the golden rule to follow is be
consistent in the way you code. Check out Microsoft's suggested naming conventions at
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/cpgenref/html/cpconnamingguidelines.asp.
Naming conventions refer to many elements within a project, simply because almost all of
a project’s elements have names: the project itself, namespaces, Web Forms, Web User Controls,
instances of Web User Controls and other interface elements, classes, variables, methods,
method parameters, database tables, database columns, stored procedures, and so on. Without
some discipline when naming all those elements, after a week of coding, you won’t understand a
line of what you’ve written.
This book tries to stick to Microsoft’s recommendations regarding naming conventions.
Now the philosophy is that a variable name should express what the object does and not its
data type. We’ll talk more about naming conventions while building the site. Right now, it’s
time to play.
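Before we do, here is a one-line illustration of that naming philosophy (made-up names, not project code):
// less helpful: the name only repeats the data type
string strTemp;
// better: the name describes what the value represents
string departmentName;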
Creating the Visual Web Developer Project
Our favorite toy is, of course, Visual Web Developer. It allows you to create all kinds of projects,
including Web Site projects (formerly known as Web Application projects). The other necessary
toy is SQL Server, which will hold your web site’s data. We’ll deal with the database a bit later in
this chapter.
■Note
At this point, we assume you have Visual Web Developer 2005 Express Edition and SQL Server 2005
Express Edition installed on your computer. It’s okay if you use the commercial versions of Visual Studio 2005
or SQL Server 2005, in which case the exercise steps you need to take might be a bit different from what is
presented in the book. Consult Appendix A for more details about the installation work.
The first step toward building the BalloonShop site is to open Visual Web Developer and
create a new ASP.NET Web Site project. Whereas with previous versions of Visual Studio you needed
to have IIS installed, thanks to the integrated web server of Visual Studio 2005 (named Cassini)
you can now run the ASP.NET Web Application from any physical folder on your disk. As a result,
when creating the Web Site project, you can specify as its destination either a web (HTTP) location
or a physical folder on your disk (such as C:\BalloonShop).
If you have a choice, usually the preferred solution is still to use IIS because of its better
performance and because it guarantees that the pages will display the same as the deployed
solution. Cassini (the integrated web server) does an excellent job of simulating IIS, but it still
shouldn’t be your first option. For this book, you can use either option, although our final tests
and screenshots were done using IIS.
You’ll create the BalloonShop project step-by-step in the exercises that follow. To ensure
that you always have the code we expect you to have and to eliminate any possible frustrations
or misunderstandings, we’ll always include the steps you must follow to build your project in
separate Exercise sections. We know it’s very annoying when a book tells you something, but
the computer’s monitor shows you another thing, so we did our best to eliminate this kind of
problem.
Let’s go.
Exercise: Creating the BalloonShop Project
Follow the steps in this exercise to create the ASP.NET Web Site project.
1.Start Visual Web Developer 2005 Express Edition, choose File ➤ New Web Site. In the dialog box that
opens, select ASP.NET Web Site from the Templates panel, and Visual C# for the Language.
2.In the first Location combo box, you can choose from File System, HTTP, and FTP, which determine how
your project is executed. If you choose to install the project on the File System, you need to choose a
physical location on your disk, such as C:\BalloonShop\. In this case, the Web Application is exe-
cuted using Visual Web Developer's integrated web server (Cassini). If you choose an HTTP location
instead, the Web Application will be executed through IIS.
Make a choice that fits you best. If you go the HTTP way and you’re developing on your local machine,
make sure that your machine has IIS installed (see Appendix A). For the purpose of this exercise, we’re
creating the project at an HTTP location on the local IIS server, as shown in Figure 2-6.
Figure 2-6. Creating the Visual Studio .NET project
■Note  When creating the project on an HTTP location with the local IIS server, the project is physically
created, by default, under the \InetPub\wwwroot folder. If you prefer to use another folder, use the Internet
Information Services applet (by choosing Control Panel ➤ Administrative Tools) to create a virtual folder pointing to
the physical folder of your choice prior to creating the Web Application with Visual Web Developer. If this note
doesn’t make much sense to you, ignore it for now.
3.Click OK. Visual Studio now creates the new project in the BalloonShop folder you specified.
In the new project, a new Web Form called Default.aspx is created by default, as shown in Figure 2-7.
Figure 2-7. The BalloonShop project in Visual Web Developer 2005 Express Edition
4.Execute the project in debug mode by pressing F5. At this point, Visual Web Developer will complain (as
shown in Figure 2-8) that it can’t debug the project as long as debugging is not enabled in web.config
(actually, at this point, the web.config file doesn’t even exist). Click OK to allow Visual Studio to enable
debug mode for you. Feel free to look at the newly created web.config file to see what has been done
for you.
Figure 2-8. Debugging must be enabled in web.config
5.When executing the project, a new and empty Internet Explorer should open. Closing the window will
also stop the project from executing (the Break and Stop Debugging symbols disappear from the Visual
Web Developer toolbar, and the project becomes fully editable again).
■Note
When executing the project, the web site is loaded in your system’s default web browser. For the
purposes of debugging your code, we recommend configuring Visual Web Developer to use Internet Explorer
by default, even if your system’s preferred browser is (for example) Mozilla Firefox. The reason is that Internet
Explorer integration seems to work better. For example, Visual Web Developer knows when you close the
Internet Explorer window and automatically stops the project from debug mode so you can continue develop-
ment work normally; however, with other browsers, you may need to manually Stop Debugging (click the Stop
square button in the toolbar, or press Shift+F5 by default). To change the default browser to be used by Visual
Web Developer, right-click the root node in Solution Explorer, choose Browse With, select a browser from the
Browsers tab, and click Set as Default.
How It Works: Your Visual Web Developer Project
Congratulations! You have just completed the first step in creating your e-commerce store!
Unlike with previous versions of ASP.NET, you don’t need an IIS virtual directory (or IIS at all, for that matter) to run
a Web Application, because you can create the ASP.NET Web Site project in a physical location on your drive. Now
it’s up to you where and how you want to debug and execute your Web Application!
When not using IIS and executing the project, you’ll be pointed to an address like
http://localhost:<port>/BalloonShop/Default.aspx, which corresponds to the location of the integrated web server.
At this moment your project contains three files:
• Default.aspx is your Web Form.
• Default.aspx.cs is the code-behind file of the Web Form.
• web.config is the project’s configuration file.
We’ll have a closer look at these files later.
Implementing the Site Skeleton
The visual design of the site is usually agreed upon after a discussion with the client and in
collaboration with a professional web designer. Alternatively, you can buy a web site template
from one of the many companies that offer this kind of service for a reasonable price.
Because this is a programming book, we won’t discuss web design issues. Furthermore, we
want a simple design that allows you to focus on the technical details of the site. A simplistic
design will also make your life easier if you’ll need to apply your layout on top of the one we’re
creating here.
All pages in BalloonShop, including the first page, will have the structure shown in Figure 2-9.
In later chapters, you’ll add more components to the scheme (such as the login box or shopping
cart summary box), but for now, these are the pieces we’re looking to implement in the next
few chapters.
Figure 2-9. Structure of web pages in BalloonShop
Although the detailed structure of the product catalog is covered in the next chapter, right
now you know that the main list of departments needs to be displayed on every page of the site.
You also want the site header to be visible in any page the visitor browses.
You’ll implement this structure by creating the following:
• A Master Page containing the general structure of all the web site’s pages, as shown in
Figure 2-9
• A number of Web Forms that use the Master Page to implement the various locations of
the web site, such as the main page, the department pages, the search results page, and
so on
• A number of Web User Controls to simplify reusing specific pieces of functionality (such
as the departments list box, the categories list box, the search box, the header, and so on)
Figure 2-10 shows a few of the Web User Controls you’ll create while developing
BalloonShop.
Figure 2-10. Using Web User Controls to generate content
Using Web User Controls to implement different pieces of functionality has many long-
term advantages. Logically separating different, unrelated pieces of functionality from one
another gives you the flexibility to modify them independently and even reuse them in other
pages without having to write HTML code and the supporting code-behind file again. It’s also
extremely easy to extend the functionality or change the place of a feature implemented as a
user control in the parent web page; changing the location of a Web User Control is anything
but a complicated and lengthy process.
In the remainder of this chapter, we’ll write the Master Page of the site, a Web Form for the
first page that uses the Master Page, and the Header Web User Control. We’ll deal with the
other user controls in the following chapters. Finally, at the end of the chapter, you’ll create the
BalloonShop database, which is the last step in laying the foundations of the project.
Building the First Page
At the moment, you have a single Web Form in the site, Default.aspx, which Visual Web Developer
automatically created when you created the project. By default, Visual Web Developer didn’t
generate a Master Page for you, so you’ll do this in the following exercise.
Exercise: Creating the Main Web Page
1.Click Website ➤ Add New Item (or press Ctrl+Shift+A). In the dialog box that opens, choose Master
Page from the Visual Studio Installed Templates list.
2.Choose Visual C# for the language, check the Place code in a separate file check box, and change the
page name to BalloonShop.master (the default name MasterPage.master isn’t particularly
expressive). The Add New Item dialog box should now look like Figure 2-11.
Figure 2-11. Adding a new Master Page to the project
3.Click Add to add the new Master Page to the project. The new Master Page will be opened with some
default code in Source View. If you switch to Design View, you’ll see the ContentPlaceHolder object
that it contains. While in Source View, update its code like this:
<%@ Master Language="C#" AutoEventWireup="true"
CodeFile="BalloonShop.master.cs" Inherits="BalloonShop" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN"
"http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
<title>BalloonShop</title>
</head>
<body>
<form id="Form1" runat="server">
<table cellspacing="0" cellpadding="0" width="770" border="0">
<tr>
<td width="220" valign="top">
List of Departments
<br />
List of Categories
<br />
</td>
<td valign="top">
Header
<asp:ContentPlaceHolder ID="contentPlaceHolder" runat="server">
</asp:ContentPlaceHolder>
</td>
</tr>
</table>
</form>
</body>
</html>
4.Now switch again to Design View; you should see something like Figure 2-12.
Figure 2-12. Your New Master Page in Design View
Master Pages are not meant to be accessed directly by clients, but to be implemented in Web Forms.
You’ll use the Master Page you’ve just created to establish the template of the Default.aspx Web
Form. Because the Default.aspx page that Visual Web Developer created for you was not meant to
be used with Master Pages (it contains code that should be inherited from the Master Page), it’s easier
to delete and re-create the file.
5.Right-click Default.aspx in Solution Explorer and choose Delete. Confirm the deletion.
6.Right-click the project root in Solution Explorer and select Add New Item. Choose the Web Form tem-
plate, leave the default name Default.aspx, make sure the Place code in separate file and Select master
page check boxes are both checked, and click Add. When asked to select a Master Page, choose
BalloonShop.master and click OK. The newly created Default.aspx contains the following code:
<%@ Page Language="C#" MasterPageFile="~/BalloonShop.master" AutoEventWireup="true"
CodeFile="Default.aspx.cs" Inherits="_Default" Title="Untitled Page" %>
<asp:Content ID="Content1" ContentPlaceHolderID="contentPlaceHolder" Runat="Server">
</asp:Content>
When you switch to Design View, Default.aspx will look like Figure 2-13.
Figure 2-13. Default.aspx in Design View
7.Change the page title from "Untitled Page" to "Welcome to BalloonShop!", either by editing the Title
attribute of the @ Page directive in Source View or by changing the Title property of the DOCUMENT using
the Properties window (see Figure 2-14). The @ Page directive should now end like this:
... Title="Welcome to BalloonShop!" %>
Figure 2-14. Changing the form name using the Properties window
8.Press F5 to execute the project. You should get a page similar to the one in Figure 2-15.
Figure 2-15. Default.aspx in action
■Note
You need to close the browser window or manually stop the project from running before you can use
the Designer window to edit your forms at full power again.
How It Works: The Main Web Page
Right now you have the skeleton of the first BalloonShop page in place. Perhaps it’s not apparent right now, but
working with Master Pages will save you a lot of headaches later on when you extend the site.
The Master Page establishes the layout of the pages that implement it, and these pages have the freedom to update
the contents of the ContentPlaceHolder elements. In our case, the header, the list of departments, and the list
of categories are standard elements that will appear in every page of the web site (although the list of categories will
have blank output in some cases and will appear only when a department is selected—you’ll see more about this
in the next chapter). For this reason, we included these elements directly in the Master Page, and they are not
editable when you’re designing Default.aspx. The actual contents of every section of the web site (such as the
search results page, the department and category pages, and so on) will be generated by separate Web Forms that
will be differentiated by the code in the ContentPlaceHolder object.
■Note
A Master Page can contain more than one ContentPlaceHolder object.
The list of departments and the list of categories will be implemented as Web User Controls that generate their
output based on data read from the database. You’ll implement this functionality in Chapters 3 and 4.
Adding the Header to the Main Page
After so much theory about how useful Web User Controls are, you finally get to create one. The
Header control will populate the upper-right part of the main page and will look like Figure 2-16.
Figure 2-16. The BalloonShop logo
To keep your site’s folder organized, you’ll create a separate folder for all the user controls.
Having them in a centralized location is helpful, especially when the project grows and contains
a lot of files.
Exercise: Creating the Header Web User Control
Follow these steps to create the Web User Control and add it to the Master Page:
1.Download the code for this book from the Source Code area of the Apress web site, unzip it
somewhere on your disk, and copy the ImageFolders\Images folder to your project’s directory
(which will be \Inetpub\wwwroot\BalloonShop\ if you used the default options when creating the
project). The Images folder contains, among other files, a file named BalloonShopLogo.png, which
is the logo of your web site. Now, if you save, close, and reload your solution, the Images folder will
show up in Solution Explorer.
2.Make sure that the project isn’t currently running (if it is, the editing capabilities are limited), and that
the Solution Explorer window is visible (if it isn’t, choose View ➤ Solution Explorer or use the default
Ctrl+Alt+L shortcut). Right-click the root entry and select Add Folder ➤ Regular Folder.
3.Enter UserControls as the name of the new folder, as shown in Figure 2-17.
Figure 2-17. Adding a new folder to the BalloonShop project
4.Create the Header.ascx user control in the UserControls folder. Right-click UserControls in Solution
Explorer and click Add New Item. In the form that appears, choose the Web User Control template and
change the default name to Header.ascx. Leave the other options in place (as shown in Figure 2-18),
and click Add.
Figure 2-18. Creating the Header.ascx Web User Control
5.The Header Web User Control automatically opens in Source View. Modify the HTML code like this:
<%@ Control Language="C#" AutoEventWireup="true" CodeFile="Header.ascx.cs"
Inherits="Header" %>
<p align="center">
<a href="Default.aspx">
<img src="Images/BalloonShopLogo.png" border="0">
</a>
</p>
■Note
If you switch the control to Design View right now, you won’t see the image because the relative path
to the Images folder points to a different absolute path at designtime than at runtime. At runtime, the control
is included and run from within BalloonShop.master, not from its current location (the UserControls
folder).
6.Open BalloonShop.master in Design View, drag Header.ascx from Solution Explorer, drop it near the
“Header” text, and then delete the “Header” text from the cell. The Design view of BalloonShop.master
should now look like Figure 2-19.
Figure 2-19. Adding Header.ascx to the Master Page
7.Click Debug ➤ Start (F5 by default) to execute the project. The web page will look like Figure 2-20.
Figure 2-20. BalloonShop in action
How It Works: The Header Web User Control
Congratulations once again! Your web site has a perfectly working header! If you don’t find it all that exciting, then
get ready for the next chapter, where you’ll get to write some real code and show the visitor dynamically generated
pages with data extracted from the database. The final step you’ll make in this chapter is to create the BalloonShop
database (in the next exercise), so everything will be set for creating your product catalog!
Until that point, make sure you clearly understand what happens in the project you have at hand. The Web User
Control you just created is included in the BalloonShop.master Master Page, so it applies to all pages that use
this Master Page, such as Default.aspx. Having that bit of HTML written as a separate control will make your life
just a little bit easier when you need to reuse the header in other parts of the site. If at any point the company
decides to change the logo, changing it in one place (the Header.ascx control) will affect all pages that use it.
This time you created the control by directly editing its HTML source, but it’s always possible to use the Design View
and create the control visually. The HTML tab of the Toolbox window in Visual Studio contains all the basic HTML elements,
including Image, which generates an img HTML element.
Let’s move on.
Creating the SQL Server Database
The final step in this chapter is to create the SQL Server database, although you won’t get to
effectively use it until the next chapter. SQL Server 2005 Express Edition, the free version of SQL
Server, doesn’t ship with the SQL Server Management Studio (formerly known as the Enterprise
Manager). However, now you can also create databases using Visual Web Developer’s features.
All the information that needs to be stored for your site, such as data about products,
customers, and so on, will be stored in a database named, unsurprisingly, BalloonShop.
Exercise: Creating a New SQL Server Database
The following steps show how to create the BalloonShop database using Visual Studio. However, feel free to use the
tool of your choice.
1.In Visual Web Developer, open the Database Explorer window (called Server Explorer in the full Visual Studio),
right-click the Data Connections entry, and choose Create New SQL Server Database.
2.In the dialog box that opens (see Figure 2-21), enter the name of the SQL Server instance where you want
to create the database (note that you can use (local) instead of the local computer's name), the login
information, and the name of the new database: BalloonShop. Click OK.
Figure 2-21. Creating a new SQL Server database using Visual Web Developer
■Note  Unless you changed the authentication settings when installing SQL Server, you'll need to connect
using Windows authentication, because SQL Server (mixed-mode) authentication mode is disabled by default in SQL Server.
How It Works: The SQL Server Database
That’s it! You’ve just created a new SQL Server database! The Server Explorer window in Visual Studio allows you
to control many details of the SQL Server instance. After creating the BalloonShop database, it appears under the
Data Connections node in Server Explorer. Expanding that node reveals a wide area of functionality you can access
directly from Visual Web Developer (see Figure 2-22). Because the database you just created is (obviously) empty,
its subnodes are also empty, but you’ll take care of this detail in the following chapters.
Figure 2-22. Accessing the BalloonShop database from the Database Explorer
Downloading the Code
The code you have just written is available in the Source Code area of the Apress web site and on the authors' web site. It should
be easy for you to read through this book and build your solution as you go; however, if you
want to check something from our working version, you can. Instructions on loading the chapters
are available in the Welcome.html document in the download. An online demo version of
BalloonShop is also available.
Summary
We covered a lot of ground in this chapter, didn’t we? We talked about the three-tier architec-
ture and how it helps you create powerful, flexible, and scalable applications. You also saw how
each of the technologies used in this book fits into the three-tier architecture.
So far you have a very flexible and scalable application because it only has a main web
page formed from a Web Form, a Master Page, and the Header Web User Control, but you’ll feel
the real advantages of using a disciplined way of coding in the next chapters. In this chapter,
you have only coded the basic, static part of the presentation tier and created the BalloonShop
database, which is the support for the data tier. In the next chapter, you’ll start implementing
the product catalog and learn a lot about how to dynamically generate visual content using
data stored in the database with the help of the middle tier and with smart and fast presenta-
tion tier controls and components.
- Type: Improvement
- Status: Open
- Priority: Major
- Resolution: Unresolved
- Affects Version/s: None
- Fix Version/s: None
- Component/s: None
- Labels: None
I recently obtained a big (over 60GiB) heap dump from a customer and analyzed it using jxray. One source of memory waste that the tool detected is arrays of length 1 that come from BlockInfo[] org.apache.hadoop.hdfs.server.namenode.INodeFile.blocks and INode$Feature[] org.apache.hadoop.hdfs.server.namenode.INodeFile.features. Only a small fraction of these arrays (less than 10%) have a length greater than 1. Collectively these arrays waste 5.5GiB, or 9.2% of the heap. See the attached screenshot for more details.
The reason why an array of length 1 is problematic is that every array in the JVM has a header, that takes between 16 and 20 bytes depending on the JVM configuration. For a big enough array this 16-20 byte overhead is not a concern, but if the array has only one element (that takes 4-8 bytes depending on the JVM configuration), the overhead becomes bigger than the array's "workload".
In such a situation it makes sense to replace the array data field Foo[] ar with an Object obj, that would contain either a direct reference to the array's single workload element, or a reference to the array if there is more than one element. This change will require further code changes and type casts. For example, code like return ar[i]; becomes return (obj instanceof Foo) ? (Foo) obj : ((Foo[]) obj)[i]; and so on. This doesn't look very pretty, but as far as I see, the code that deals with e.g. INodeFile.blocks already contains various null checks, etc. So we will not make the code much less readable.
My goal for today was to populate a table with all of the Australian postcodes and their corresponding suburb and state descriptions. To do this I had to complete the following:
- 1: Download the postcode/location csv
- 2: Create new model to contain locations
- 3: Create a script to import a csv
- 4: Run the script in order to populate the database
- 5: Cleanse the data
Step #1: Download the .csv
I came across the following files after browsing a few of the Australian tech forums. I chose to use the Corra ones due to the claim that there are no license restrictions.
- Corra.com also provides a few PHP function templates for tasks such as distance estimation. I’m sure these could quite easily be converted to rails for anyone who’s chasing something similar.
- If for some reason you’d prefer to use an Australia Post .csv, they’re available at the following link:
Once you’ve chosen your .csv, just save it in your project folder i.e. C:my_app
Step #2: Create a Model for your Locations
Secondly, you’ll need to create a new migration for your locations. I’ve simply called mine Code:
def change
  create_table :codes do |t|
    t.string "Pcode"
    t.string "Locality"
    t.string "State"
    t.string "Comments"
    t.string "DeliveryOffice"
    t.string "PresortIndicator"
    t.string "ParcelZone"
    t.string "BSPnumber"
    t.string "BSPname"
    t.string "Category"
    t.timestamps
  end
end
Step #3: Create a Script to Import the .csv
The third step is to create a script that allows you to import the csv. This is made simple thanks to a comment left by “Gianluca” on the following blog post:
Simply create a new file within your tasks folder and add the following code:
#my_app/lib/tasks/import.rake
require "csv"

desc "Import CSV file into an Active Record table"
task :csv_model_import, [:filename, :model] => :environment do |task,args|
  firstline=0
  keys = {}
  CSV.foreach(args[:filename]) do |row|
    if (firstline==0)
      keys = row
      firstline=1
      next
    end
    params = {}
    keys.each_with_index do |key,i|
      params[key] = row[i]
    end
    Module.const_get(args[:model]).create(params)
  end
end
Step #4: Run the script in order to populate the database
To run the script simply run the following command:
chris@chris-VirtualBox:~/my_app$ rake csv_model_import[codes.csv,Code]
Step #5: Cleanse the data
Cleansing the data is a little tedious; however, one tip is to remove all locations that do not have a category value of "Delivery Area", as sketched below.
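A minimal sketch of that cleanup (assuming Rails 4+ and the column names from the migration above; adjust to your schema):

# Drop every row that isn't a deliverable location
Code.where.not(Category: "Delivery Area").delete_all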
Ahwell, that’s all I’ve got for now – let me know if you have any trouble.
A recent tweet by Fermat's Library noted that the Fundamental theorem of arithmetic provides a novel (if inefficient) way of determining whether two words are anagrams of one another.
The Fundamental theorem of arithmetic states that every integer greater than 1 is either a prime number itself or can be represented as the unique product of prime numbers.
First, one assigns distinct prime numbers to each letter (e.g. a=2, b=3, c=5, ...). Then form the product of the numbers corresponding to each letter in each of the two words. If the two products are equal, the words are anagrams. For example,
'cat': $5 \times 2 \times 71 = 710$
'act': $2 \times 5 \times 71 = 710$
The following code implements and tests this algorithm.
from functools import reduce

a = 'abcdefghijklmnopqrstuvwxyz'
p = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61,
     67, 71, 73, 79, 83, 89, 97, 101]
d = dict(zip(a, p))

def prod(s):
    return reduce((lambda n1, n2: n1*n2), [d[lett] for lett in s])

def is_anagram(s1, s2):
    return prod(s1) == prod(s2)
Here, d is a dictionary mapping the letters to their primes. The functools.reduce method applies a provided function cumulatively to the items of a sequence.
To see that it works, try:
is_anagram('cat', 'act')
True
is_anagram('tea', 'tee')
False
Note that for longer words the products formed get quite large:
prod('floccinaucinihilipilification')
35334111214198884032311058457421182500
Scheduling posts 2: the Rakening
Yesterday I covered how I'm handling scheduling with my Jekyll-based blog. The at command I mentioned there could be used in tandem with any static blogging system. Today I'm dropping in the "publish" task from my Rakefile, so you can see how I apply it specifically with Jekyll. The concepts are still portable, though.
Processing
The rake task can be run with or without an argument. If the argument is there and it’s a filename (and the file exists), it operates directly on that file. If no argument is passed, it offers a menu of available drafts to choose from.
To schedule a post, I just set the "date:" field in the YAML headers to a date in the future. That triggers all of the scheduling features. If the task is being run from the shell, it double checks with you for confirmation that you want to schedule a deploy. If confirmed (or forced), at is run and reads from a file with the necessary commands to generate and deploy the task. In my case, it bumps the site version (used to bust cache on any CSS/JS files), runs a generate task and deploys the site using Rsync.
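The post does not show the contents of atjob.sh itself; a minimal sketch of what such a file might contain (an assumption, not Brett's actual script; the site path matches the one used in the rake task below, the log path is made up) would be:

#!/bin/bash
# Hypothetical atjob.sh, run by `at` at the scheduled publish time.
cd ~/Sites/dev/brettterpstra.com
rake gen_deploy >> ~/Library/Logs/atjob.log 2>&1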
I’ll be covering my “draft” system in more detail in a future post. I’ll mention a relevant part of it now, though. The _drafts folder is a symlink from my Dropbox writing folder. If a post shows up in there with a “publish_” prefix in the filename, Hazel triggers this system automatically. It passes a filename directly, bypassing the need for any shell interaction. The file gets published and the site gets deployed. There’s also an incomplete “preview_” mode that will generate the staging site but not deploy.
Here’s the relevant part of the Rakefile with lots of comments. For people who would be implementing something like this, they should be explanatory enough. Because it’s a work-in-progress, I’m posting its current state directly. When it’s closer to finished it will be included in a full Git repo of all of my hacks with its most current version.
The “publish” rake task
desc "Publish a draft"
task :publish, :filename do |t, args|
  # if there's a filename passed (rake publish[filename])
  # use it. Otherwise, list all available drafts in a menu
  unless args.filename
    file = choose_file(File.join(source_dir,'_drafts'))
    Process.exit unless file # no file selected
  else
    file = args.filename if File.exists?(File.expand_path(args.filename))
    raise "Specified file not found" unless file
  end
  now = Time.now
  short_date = now.strftime("%Y-%m-%d")
  long_date = now.strftime("%Y-%m-%d %H:%M")
  # separate the YAML headers
  contents = File.read(file).split(/^---\s*$/)
  if contents.count < 3 # Expects the draft to be properly formatted
    puts "Invalid header format on post #{File.basename(file)}"
    Process.exit
  end
  # parse the YAML. So much better than regex search and replaces
  headers = YAML::load("---\n"+contents[1])
  content = contents[2].strip
  should = { :generate => false, :deploy => false, :schedule => false, :limit => 0 }
  # For use with a Dropbox/Hazel system. _drafts is a symlink from Dropbox,
  # posts dropped into it prefixed with "publish_" are automatically
  # published via Hazel script.
  # Checks for a "preview" argument, currently unimplemented
  if File.basename(file) =~ /^preview_/ or args.preview == "true"
    headers['published'] = false
    should[:generate] = true
    should[:limit] = 10
  elsif File.basename(file) =~ /^publish_/ and args.preview != "false"
    headers['published'] = true
    should[:generate] = true
    should[:deploy] = true
  end
  #### deploy scheduling ###
  # if there's a date set in the draft...
  if headers.key? "date"
    pub_date = Time.parse(headers['date'])
    if pub_date > Time.now # and the date is in the future (at time of task)
      headers['date'] = pub_date.strftime("%Y-%m-%d %H:%M") # reformat date to standard
      short_date = pub_date.strftime("%Y-%m-%d") # for renaming the file to the publish date
      # offer to schedule a generate and deploy at the time of the future pub date
      # skip asking if we're creating from a scripted file (publish_*)
      should[:schedule] = should[:generate] and should[:deploy] ? true : ask("Schedule deploy for #{headers['date']}?", ['y','n']) == 'y'
      system("at -f ~/Sites/dev/brettterpstra.com/atjob.sh #{pub_date.strftime('%H%M %m%d%y')}") if should[:schedule]
    end
  end
  ### draft publishing ###
  # fall back to current date and title-based slug
  headers['date'] ||= long_date
  headers['slug'] ||= headers['title'].to_url.downcase
  # write out the modified YAML and post contents back to the original file
  File.open(file,'w+') {|file| file.puts YAML::dump(headers) + "---\n" + content + "\n"}
  # move the file to the posts folder with a standardized filename
  target = "#{source_dir}/#{posts_dir}/#{short_date}-#{headers['slug']}.#{new_post_ext}"
  mv file, target
  puts %Q{Published "#{headers['title']}" to #{target}}
  # auto-generate[/deploy] for non-future publish_ and preview_ files
  if should[:generate] && should[:deploy]
    Rake::Task[:gen_deploy].execute
  elsif should[:generate]
    if should[:limit] > 0
      # my generate task accepts two optional arguments:
      # posts to limit jekyll to, and whether it's preview mode
      Rake::Task[:generate].invoke(should['limit'], true)
    else
      Rake::Task[:generate].execute
    end
  end
end
Additional functions
choose_file
# Creates a user selection menu from directory listing
def choose_file(dir)
  puts "Choose file:"
  @files = Dir["#{dir}/*"]
  @files.each_with_index { |f,i| puts "#{i+1}: #{f}" }
  print "> "
  num = STDIN.gets
  return false if num =~ /^[a-z ]*$/i
  file = @files[num.to_i - 1]
end
ask
This is borrowed from the OctoPress Rakefile.
def ask(message, valid_options)
  return true if $skipask
  if valid_options
    answer = get_stdin("#{message} #{valid_options.delete_if{|opt| opt == ''}.to_s.gsub(/"/, '').gsub(/, /,'/')} ") while !valid_options.map{|opt| opt.nil? ? '' : opt.upcase }.include?(answer.nil? ? answer : answer.upcase)
  else
    answer = get_stdin(message)
  end
  answer
end
Best way to structure a tkinter application
The following is the overall structure of my typical python tkinter program.
def funA():
    def funA1():
        def funA12():
            # stuff
    def funA2():
        # stuff

def funB():
    def funB1():
        # stuff
    def funB2():
        # stuff

def funC():
    def funC1():
        # stuff
    def funC2():
        # stuff

root = tk.Tk()

button1 = tk.Button(root, command=funA)
button1.pack()
button2 = tk.Button(root, command=funB)
button2.pack()
button3 = tk.Button(root, command=funC)
button3.pack()
funA, funB, and funC will bring up other Toplevel windows with widgets when the user clicks on buttons 1, 2, and 3.
I am wondering if this is the right way to write a Python tkinter program. Sure, it will work even if I write it this way, but is it the best way? It sounds stupid, but when I see the code other people have written, their code is not messed up with a bunch of functions and mostly they have classes.
Is there any specific structure that we should follow as good practice? How should I plan before starting to write a Python program?
I know there is no such thing as best practice in programming and I am not asking for it either. I just want some advice and explanations to keep me on the right direction as I am learning Python by myself.
I advocate an object oriented approach. This is the template that I start out with:
# Use Tkinter for python 2, tkinter for python 3
import tkinter as tk

class MainApplication(tk.Frame):
    def __init__(self, parent, *args, **kwargs):
        tk.Frame.__init__(self, parent, *args, **kwargs)
        self.parent = parent
        <create the rest of your GUI here>

if __name__ == "__main__":
    root = tk.Tk()
    MainApplication(root).pack(side="top", fill="both", expand=True)
    root.mainloop()
The important things to notice are:
I don't use a wildcard import. I import the package as "tk", which requires that I prefix all commands with tk. This prevents global namespace pollution, plus it makes the code completely obvious when you are using Tkinter classes, ttk classes, or some of your own.
The main application is a class. I make the class inherit from tk.Frame just because I typically start by creating a frame, but it is by no means necessary.
If your app has additional toplevel windows, I recommend making each of those a separate class, inheriting from tk.Toplevel. This gives you all of the same advantages mentioned above -- the windows are atomic, they have their own namespace, and the code is well organized. Plus, it makes it easy to put each into its own module once the code starts to get large.
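A small illustrative sketch of that idea (my own example, not from the original answer; the window contents are arbitrary):

import tkinter as tk

class AboutWindow(tk.Toplevel):
    """A self-contained secondary window; illustrative only."""
    def __init__(self, parent, *args, **kwargs):
        tk.Toplevel.__init__(self, parent, *args, **kwargs)
        self.title("About")
        tk.Label(self, text="My App 1.0").pack(padx=20, pady=20)
        tk.Button(self, text="Close", command=self.destroy).pack(pady=(0, 10))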
Finally, you might want to consider using classes for every major portion of your interface. For example, if you're creating an app with a toolbar, a navigation pane, a statusbar, and a main area, you could make each one of those classes. This makes your main code quite small and easy to understand:
class Navbar(tk.Frame): ...
class Toolbar(tk.Frame): ...
class Statusbar(tk.Frame): ...
class Main(tk.Frame): ...

class MainApplication(tk.Frame):
    def __init__(self, parent, *args, **kwargs):
        tk.Frame.__init__(self, parent, *args, **kwargs)
        self.statusbar = Statusbar(self, ...)
        self.toolbar = Toolbar(self, ...)
        self.navbar = Navbar(self, ...)
        self.main = Main(self, ...)

        self.statusbar.pack(side="bottom", fill="x")
        self.toolbar.pack(side="top", fill="x")
        self.navbar.pack(side="left", fill="y")
        self.main.pack(side="right", fill="both", expand=True)
Since all of those instances share a common parent, the parent effectively becomes the "controller" part of a model-view-controller architecture. So, for example, the main window could place something on the statusbar by calling self.parent.statusbar.set("Hello, world"). This allows you to define a simple interface between the components, helping to keep coupling to a minimum.
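A hedged sketch of what such a Statusbar component could look like (the set method is an assumption of this example, defined by the class itself, not something tkinter provides):

import tkinter as tk

class Statusbar(tk.Frame):
    """Minimal status bar with a set() helper; illustrative only."""
    def __init__(self, parent, *args, **kwargs):
        tk.Frame.__init__(self, parent, *args, **kwargs)
        self.label = tk.Label(self, anchor="w")
        self.label.pack(fill="x")

    def set(self, text):
        # Update the displayed message.
        self.label.config(text=text)

    def clear(self):
        self.label.config(text="")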
From: stackoverflow.com/q/17466561
Holds information about the inheritance path to a virtual base or function table pointer. More...
#include "clang/AST/VTableBuilder.h"
Holds information about the inheritance path to a virtual base or function table pointer.
A record may contain as many vfptrs or vbptrs as there are base subobjects.
Definition at line 446 of file VTableBuilder.h.
Definition at line 447 of file VTableBuilder.h.
Definition at line 449 of file VTableBuilder.h.
The vptr is stored inside the non-virtual component of this virtual base.
Definition at line 490 of file VTableBuilder.h.
References ContainingVBases.
The set of possibly indirect vbases that contain this vbtable.
When a derived class indirectly inherits from the same vbase twice, we only keep vtables and their paths from the first instance.
Definition at line 478 of file VTableBuilder.h.
Referenced by getVBaseWithVPtr().
Static offset from the top of the most derived class to this vfptr, including any virtual base offset.
Only used for vftables.
Definition at line 487 of file VTableBuilder.h.
This is the class that introduced the vptr by declaring new virtual methods or virtual bases.
Definition at line 459 of file VTableBuilder.h.
Referenced by selectBestPath().
The bases from the inheritance path that got used to mangle the vbtable name.
This is not really a full path like a CXXBasePath. It holds the subset of records that need to be mangled into the vbtable symbol name in order to get a unique name.
Definition at line 469 of file VTableBuilder.h.
The next base to push onto the mangled path if this path is ambiguous in a derived class.
If it's null, then it's already been pushed onto the path.
Definition at line 473 of file VTableBuilder.h.
IntroducingObject is at this offset from its containing complete object or virtual base.
Definition at line 463 of file VTableBuilder.h.
This is the most derived class that has this vptr at offset zero.
When single inheritance is used, this is always the most derived class. If multiple inheritance is used, it may be any direct or indirect base.
Definition at line 455 of file VTableBuilder.h.
This holds the base classes path from the complete type to the first base with the given vfptr offset, in the base-to-derived order.
Only used for vftables.
Definition at line 483 of file VTableBuilder.h.
Back in Seattle…
Wow, a whirlwind week in Vegas meeting with customers, colleagues, the community and of course conference-goers. If you didn’t catch the conference, last week was the SharePoint Conference 2012. It was a great week full of new areas for the SharePoint community to explore. For me, it was especially interesting as this conference brought together two keen areas of focus for me into focus: SharePoint and Windows Azure. This intersection manifested through the introduction of the new SharePoint cloud-hosted app model to the SharePoint community, and it is here where you really find these two platforms intersecting.
It started with the keynote, which was headlined by Jeff Teper, top brass in Office, who with the help of the SharePoint gang showed off some of the new Social, Mobile and Cloud-centric aspects of SharePoint 2013. To the theme above, it was also good to see Scott Guthrie there as well—introducing the 10,000-ish people to the cloud developer story. The discussion and buzz around this new app model continued throughout the week with many questions focusing on the what and how of these new apps.
Discussing Autohosted and Provider-Hosted…
Having spent some time in this area, I had the chance to present a session on the topic as well as deliver a post-con workshop to a group of very enthusiastic developers with tons of questions about the new cloud app model. I’ve posted my deck below here so for those that couldn’t attend, you can peruse the deck and at least get a sense for what we were discussing—which focused on the Autohosted app and the Provider-Hosted app. These are in essence the two new app models introduced in SharePoint 2013 having strong integration with Windows Azure.
The Autohosted app model natively leverages Windows Azure when you deploy the app to SharePoint, and the Provider-Hosted app enables you to use Windows Azure or other Web technologies (such as PhP). Now one of the questions that kept coming up this week was as follows: “if they both use Windows Azure, then what is the difference between the two?” And this is where things get interesting.
Both hosting models are intended to move code off of the server; this is part of the reason for the move to a new cloud-hosted model. For you SharePoint admins out there, this should make you happy—this abstraction of code away from the server should mitigate ill-performing or malicious code from running on the server. However, this is only part of the reason. The other part is in the new world of Cloud computing, services are rolled out in a more frequent cadence. And at the same time you’re introducing value to the customer through the ability to leverage services that constantly improve and evolve over time (and in a much shorter cycle), you don’t want those updates to cross-sect with your customizations. Updates should be seamless, and abstracting the code from the server helps support this process.
Okay, so we’ve moved off of the server into these two models. You still haven’t answered how they’re different? And the Autohosted and Provider-hosted models are different in a number of ways:
- The Autohosted: 1) the Autohosted app model uses the Web Sites and SQL Database services of Windows Azure, and 2) it is deployed to Windows Azure (and of course to the SharePoint site that is hosting the app). If you're building departmental apps or light data-driven apps, the Autohosted app model is a good fit.
So, let’s spend a little time in this post on the Autohosted app model, and then I’ll return in a follow-up post with the Provider-hosted app model.
Building your first Autohosted App…
First, open up Visual Studio 2012 and click File, New Project. Select Office/SharePoint, and then select Apps and then App for SharePoint 2013. Provide a name for the app and click OK.
When prompted in the New app for SharePoint dialog, leave the default name for the app, select your O365 tenancy and then select Autohosted as the project type. Click Finish when done.
Visual Studio creates two projects for you: a SharePoint app project and a Web project. Right-click the project and select Add, Class. Call the class Sales, and then add the bolded code below. This is a simple class with three properties: an ID, a Quarter (which represents a fiscal quarter), and TotalSales (representing the total sales for that quarter).
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
namespace MyFirstAutohostedAppWeb
{
public class Sales
{
public int ID { get; set; }
public string Quarter { get; set; }
public string TotalSales { get; set; }
}
}
Then double-click the Default.aspx page and replace the <form> element contents with the following bolded code. This provides you with a GridView object, to which you will bind some in-memory data, and a LinkButton, which will trigger the binding.
<%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Default.aspx.cs" Inherits="MyFirstAutohostedAppWeb.Pages.Default" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "">
<html xmlns="">
<head runat="server">
<title></title>
</head>
<body>
<form id="form1" runat="server">
<div>
<p style="font-family: calibri">Simple Autohosted Sales App</p>
<asp:GridView ID="salesGridView" runat="server">
<AlternatingRowStyle BackColor="White" ForeColor="#284775" />
</asp:GridView>
<br />
<asp:LinkButton ID="lnkGetSalesData" runat="server" OnClick="lnkGetSalesData_Click">Get Sales</asp:LinkButton>
</div>
</form>
</body>
</html>
Right-click the Default.aspx page and in the code-behind file (Default.aspx.cs), amend the file as per the bolded code below. You’ll see a List collection here, the object you’ll use for the binding, a number of key objects that are used by OAuth to generate the access token for the app and some simple logic to create an in-memory data object and bind it to the GridView.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;
namespace MyFirstAutohostedAppWeb.Pages
{
public partial class Default : System.Web.UI.Page
{
List<Sales> mySalesData = new List<Sales>();
SharePointContextToken contextToken;
string accessToken;
Uri sharepointUrl;
protected void Page_Load(object sender, EventArgs e)
{
TokenHelper.TrustAllCertificates();
string contextTokenString = TokenHelper.GetContextTokenFromRequest(Request);
if (contextTokenString != null)
{
contextToken = TokenHelper.ReadAndValidateContextToken(contextTokenString, Request.Url.Authority);
sharepointUrl = new Uri(Request.QueryString["SPHostUrl"]);
accessToken = TokenHelper.GetAccessToken(contextToken, sharepointUrl.Authority).AccessToken;
lnkGetSalesData.CommandArgument = accessToken;
}
}
protected void lnkGetSalesData_Click(object sender, EventArgs e)
{
string accessToken = ((LinkButton)sender).CommandArgument;
if (IsPostBack)
{
sharepointUrl = new Uri(Request.QueryString["SPHostUrl"]);
}
Sales FY11 = new Sales();
Sales FY12 = new Sales();
Sales FY13 = new Sales();
FY11.ID = 1;
FY11.Quarter = "FY11";
FY11.TotalSales = "$2,002,102.00";
mySalesData.Add(FY11);
FY12.ID = 2;
FY12.Quarter = "FY12";
FY12.TotalSales = "$2,500,201.00";
mySalesData.Add(FY12);
FY13.ID = 3;
FY13.Quarter = "FY13";
FY13.TotalSales = "$2,902,211.00";
mySalesData.Add(FY13);
salesGridView.DataSource = mySalesData;
salesGridView.DataBind();
}
}
}
When you’re done, hit F6 and build the app. Then, double-click the AppManifest.xml file to open the designer, click the Permissions tab and set the Web scope to have Read permissions.
At this point, you can right-click and Deploy (and sign into your tenancy to deploy) or right-click and Publish, navigate to your SharePoint site, click new app to deploy, and then upload, click Deploy and then trust your app.
The result should be a deployed app, and when you click it and hit the Get Sales link button, the result should be something fantabulous like the following:
Now if you inspect the URL for this new app, you’ll see something interesting. As per the URL below, you’ll see the GUID for the app, the fact that it’s being hosted in the o365apps.net domain (in Windows Azure), and then you have token(s) and data that are embedded within the URL such as the SPHostUrl, the Language, and so on.
So, congrats if this was your first Autohosted app you built. It was a relatively simple one, but in time you’ll build some interesting apps that leverage more complex patterns. You can download the code from here. (Open the project, click the App project and then change the SharePoint Site URL to point to your SharePoint Online site before you build and deploy.)
For those that want to do more, there’s also a great sample from the SDK that illustrates OAuth and OData, which you can find here. Nice walkthrough that actually works without any rework save for changing the site property because it uses an existing hidden list.
Well, that’s it for today. More to come on the Provider-hosted app model soon.
Happy coding!
Steve
The default Window Title is “Panda”… is it possible to change to something else?
eg. “My First Panda” ?
Here you go:
# get "WindowProperties" instance
WinProps = base.win.getProperties()
WinProps.setTitle("My First Panda")
base.win.requestProperties(WinProps)
You can use this to set other window related properties as well.
See “WindowProperties” in the reference section of the manual.
Basically it worked, but there would be a lag (the old title still showed) while my scene and models were still loading; it would be set to the new title when completed.
Any ways to improve it?
If you want to change it like it's the default, you should change it before importing DirectStart.
from pandac.PandaModules import *
loadPrcFileData("", "win-title My First Panda")
import direct.directbase.DirectStart
Read for more :
You can get the list of the config variables :
cvMgr.listVariables()
Thanks so much. It works perfectly now!
Also note you can edit the config file manually for config variables, something people sometimes tend to forget. Open your config.prc with a text editor and find the line starting with “window-title” and change the value to the title of your choice. If the line doesn’t exist, you can add it yourself.
I find it better practice to make static changes like this in the config file since you won't be loading up the default value first and then re-loading the new value in code. More importantly, if someone else has to look at your code, and wants to change a config variable, they won't be confused when the variable doesn't change because you overwrote it in separate code. After all, the config file was meant to store the configuration values.
It doesn’t work at all for me. Any ideas why? I know the code is being executed because the fullscreen part works fine. I still get the “Panda” title bar.
# Standard imports
import sys
import math

# Panda imports
from pandac.PandaModules import *
loadPrcFileData("", "win-title TITLE GOES HERE")
loadPrcFileData("", "fullscreen 0")
import direct.directbase.DirectStart
from direct.task import Task
from direct.showbase.DirectObject import *

# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Define the scene
class Scene(DirectObject):
    def __init__(self):
        self.mTimeStep = 1
        self.mTimeAccum = 0
        taskMgr.add(self.timeUpdate, 'TimeUpdate')

    def timeUpdate(self, task):
        self.mTimeAccum += globalClock.getDt()
        if self.mTimeAccum > self.mTimeStep:
            print task.time
            self.mTimeAccum = 0
        return Task.cont

# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Create the scene
scene = Scene()

# Run the viz
run()
I figured it out … the tag “win-title” you write of above is incorrect. It is actually “window-title”.
loadPrcFileData("", "win-title TITLE GOES HERE")
Becomes
loadPrcFileData("", "window-title TITLE GOES HERE")
Oh, my bad. Of course it's 'window-title'. Sorry, I didn't actually look at my config vars list. BTW, why are there two prefixes, 'win-' and 'window-', anyway? It could drive us into confusion like this.
IMO, editing the config.prc is only for the very general config. For this case, editing the config.prc is not a good idea, because each project has specific and different config values that need to be changed, without having to make the change (or copy the file) every time one installs a new version.
Yep, it sure can. Sorry about that; this is one of those things that happens when a project (like Panda3D) is developed over a long period of time by several different people. It also becomes difficult to reconcile these things as more and more projects start to code to the way it is now.
Actually, the intended design is that each application should have its own, particular prc file (in addition to the very general config.prc file). Panda is designed to find and load multiple different prc files, in a deterministic, hierarchical order; the idea is that you would have (for instance) an airblade.prc file which contains the specific variables that airblade needs, and adds on to (or overrides) different variables already set in config.prc, which contains the generic variables that everyone probably wants.
David
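To make that concrete, loading a per-application prc file might look like the following sketch (the file name follows the airblade example above; loadPrcFile is the standard Panda3D call, but check your version's API):

from pandac.PandaModules import loadPrcFile

# Load the app-specific prc (window-title, fullscreen, ...) before DirectStart
# so its variables are in effect when the window opens.
loadPrcFile("airblade.prc")

import direct.directbase.DirectStart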
Right. But for me, the fastest way to enable/disable/switch between config settings while the project is under development is putting those changes in the main script. So I can simply jump to the top of it and make adjustments for particular testing purposes, and then run it immediately, without any need to open the config file all the time, or go looking for it when I need it.
The permanent changes can then be saved to the project's config file once the project is completed.
Hello everyone,
I have completed my first small project using Python 3. It is the coin flip project. During this project we were also provided hints on what functions to use, but I did not look at any of the hints. I did, however, sometimes have to search up how to use a certain feature (such as random.choice); other than that, I planned everything I needed to do in my head. I had to complete the basic requirements, but I went up to the additional requirements as well. I also have a solution for the advanced requirements in my head, but I didn't write it down. As I was doing this project my brain did not want to use any functions (such as def function():). The next day I tried to integrate more use of def function(): and I was successful until I got down to the Intermediate Challenge, where my brain again wanted to get rid of def function(); so I did, and I have attached my code to this prompt. I know my code is not as flexible as it can be, and I know everyone thinks differently, so could you please give me some feedback on how I did and whether my code is good enough, considering what I was required to do?
Here is my screen shot of my code:
If the screenshot does not work here is my code:
import random

num = 0
g = 0
while (g != 2):
    print("Guess heads by entering 1 or tails by entering 2 for this coin flip.")  # asking first question
    answer = int(input())  # waiting for the user's response (converted to int so it can be compared to the flip)
    if(answer == 1):  # printing heads if user chose heads
        print("You guessed heads.")
    else:  # printing tails if user chose tails
        print("You guessed tails.")
    flip = random.choice([1, 2])  # flips the coin
    if(flip == 1):  # determines the outcome when flip is equal to 1 or something else and prints heads or tails
        print("The coin landed on Heads.")
    else:
        print("The coin landed on Tails.")
    if(answer == flip):  # tells the user whether their guess was right or wrong
        print("Congrats you guessed right.")
    else:
        print("Sorry your guess was wrong.")
    num += 1
    print("You have guessed " + str(num) + " times.")
    print("Would you like to guess again? Enter 1 to continue or 2 to exit.")
    g = int(input())
    if(g == 2):  # exit when the user chooses 2
        break
    else:
        continue
Here are the Requirements:
Basic Requirements
- User Story: As a user I want to be able to guess the outcome of a random coin flip(heads/tails).
- User Story: As a user I want to clearly see the result of the coin flip.
- User Story: As a user I want to clearly see whether or not I guessed correctly.
Additional Challenges
Intermediate Challenge
- User Story: As a user I want to clearly see the updated guess history (correct count/total count).
- User Story: As a user I want to be able to quit the game or go again after each cycle.
Advanced Challenge
Let’s see if we can expand upon this challenge - what if instead of 2 options, there were 6?
User Story: As a user I want to be able to guess the outcome of a 6-sided dice roll (1-6), with the same feature set as the coin flip (see above).
- You can add this directly to the existing program you've already written! As an additional challenge see if you can build the program such that the user can choose between the two guessing games at startup, and possibly even switch after each cycle (a rough sketch follows below).
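For what it's worth, here is a hedged sketch of how the same guessing loop generalizes from 2 outcomes to 6 (names are illustrative and not from the Codecademy starter code):

import random

def play_round(num_sides):
    # Works for both the coin (num_sides=2) and the dice (num_sides=6).
    guess = int(input("Guess a number from 1 to " + str(num_sides) + ": "))
    result = random.randint(1, num_sides)
    print("The result was " + str(result) + ".")
    if guess == result:
        print("Congrats, you guessed right.")
        return True
    print("Sorry, your guess was wrong.")
    return False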
templar 1.0.0.dev4.
Setup
-----
### Installation
To begin, clone this repo into your working directory (this README
assumes Templar will be installed in `<project>/lib/`):
cd path/to/project
git clone lib/templar
Alternatively, add this repo as a git submodule:
cd path/to/project
git submodule add lib/templar
### config.py
Choose a directory in which you will be publishing work using Templar
(this can be any directory in your project structure). For example,
suppose we want to publish in a directory called `<project>/blog/src/`.
Create a `config.py` file with the following command:
python3 templar config blog/src
This will create a file called `config.py` in `<project>/blog/src/`
with the following (paraphrased) contents:
FILEPATH = ... # filepath of config.py
TEMPLATE_DIRS = [
FILEPATH,
# list of directories that contain a templates directory
]
VARIABLES = {
# variables used by templates
}
SUBSTITUTIONS = [
# (regex, sub) pairs
]
This basic setup is enough to start using Templar, but you can
customize each of the variables found in `config.py`. `TEMPLATE_DIRS`
is explained next; other customizations are described later.
### `TEMPLATE_DIRS`
Suppose we change `TEMPLATE_DIRS` to the following:
TEMPLATE_DIRS = [
FILEPATH,
os.path.join(FILEPATH, 'example'),
]
Continuing from our previous example, `FILEPATH` is
`<project>/blog/src` (since that is where `config.py` is located).
Templar will now look in two directories to find templates:
1. `<project>/blog/src/templates/`
2. `<project>/blog/src/example/templates/`
Templates will be searched in that order.
Notice that each filepath in `TEMPLATE_DIRS` is assumed to have a
`templates` directory -- this is where template files should be placed.
**Note**: The `FILEPATH` variable is not required, though it is
helpful as a reference point for other directories that contain
templates.
Usage
-----
### Basic Templating
Continuing from our example above, suppose we are publishing content
from the filepath `<project>/blog/src`.
First, create a content file anywhere -- for example, we'll create a
Markdown file called `example.md`:
~ title: This is a Templar example
This Markdown file is a *Templar example*.
This file contains standard Markdown, except for the first line `~
title: ...`. This is a *variable* declaration, which has the following
syntax:
~ variable name: variable value
Variables can be referenced from within templates (explained later) and
are useful for storing metadata about the content. Some things to note:
*
Once we are done with the content file, let's create a directory called
`templates`:
mkdir templates
Add a sample template file called `template.html` to the `templates`
directory:
<html>
<head>
<title>{{ title }}</title>
</head>
<body>
<h1>{{ title }}</h1>
<p>Published: <i>{{ datetime.now() }}</i></p>
{{ :all }}
</body>
</html>
This template demonstrates three fundamental features:
* **Variables** (`{{ title }}`): variables can be defined in either the
content file (as seen above in `example.md`) or in `config.py`
(explained later). Variables can be reused (e.g. multiple references
to `title`).
* **Python expressions** (`{{ datetime.now() }}`): any valid Python
*expression* can be used -- the `str` of the final value will be used
in place of the `{{ ... }}`. For example, `datetime.now()` will
evaluate to the current time (`datetime` is imported automatically by
Templar)
* **Blocks** (`{{ :all }}`): a **block** is a section of the content
file (see the section on Blocks below). All block expressions in
templates start with a colon (`:`).
Templar reserves the special block name `:all` to mean the entire
content file. In this case, we are replacing `{{ :all }}` with all of
the contents of our file.
To publish the content, use the following command:
python3 ../../lib/templar compile template.html -s example.md -m -d result.html
* `../../lib/templar` is used because we are currently in
`<project>/blog/src`
* `compile` tells Templar to compile a template
* `template.html` is the template that we are using
* `-s example.md` specifies the source content. This option is optional
(Templar allows templating without use of a source file; this is
useful if you still want to take advantage of Templar's template
inheritance)
* `-m` tells Templar to convert the contents of `example.md` from
Markdown into HTML before templating occurs
* `-d result.html` tells Templar to write the result to a file called
`result.html`. This option is optional -- if omitted, Templar will
print the result of templating to standard out.
That's it! There should now be a file in `<project>/blog/src/` called
`result.html` that contains the following:
<html>
<head>
<title>This is a Templar example</title>
</head>
<body>
<h1>This is a Templar example</h1>
<p>Published: <i>2014-06-08</i></p>
<p>This Markdown file is a <em>Templar example</em>.</p>
</body>
</html>
Notice that all `{{ ... }}` expressions have been replaced accordingly
(the `{{ datetime.now() }}` expression will be different depending on
when you publish).
Our final directory structure looks like this:
<project>/
lib/
templar/
src/
templates/
template.html
example.md
result.html
Example
-------
For a more extensive example of how to use Templar, see the repository
for my [personal website]()
It is often helpful to use a Makefile instead of directly running
Templar every time; a sketch is shown below.
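For instance, a hypothetical Makefile for the blog/src example above (not shipped with Templar; recipe lines must be indented with tabs):

# Hypothetical Makefile, following the example layout above
all:
	python3 ../../lib/templar compile template.html -s example.md -m -d result.html

clean:
	rm -f result.html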
Templates
---------
Templates are stored in a directory called `templates`, which can be
located anywhere as long as the parent directory is listed in the
`TEMPLATE_DIRS` variable in `config.py`. The most basic "template" is
simply a regular HTML file. In addition, you can add *expressions* in
the templates that the compiler will resolve. For example, the
following template can be used to fill in contact information:
<ul>
<li>Name: {{ name }}</li>
<li>Age: {{ age }}</li>
<li>Occupation: {{ job }}</li>
</ul>
Expressions are denoted by two sets of curly braces, such as `{{ name
}}` or `{{ age }}`. The expression within the curly braces can be one
of the following:
* A **variable** defined either within a Markdown source file or
`config.py`. These "variable" names are more flexible than Python
variable names, as they can include spaces and hyphens (the only
restriction is they cannot contain newlines or colons).
* A **Python expression**. Any valid Python *expression* (not
statements) can be used -- the `str` of the final value will be used
in place of the `{{ ... }}`:
<p>Date published: {{ datetime.now() }}</p>
**Note**: `{{ ... }}` expressions will always be treated as variables
first; if no such variable exists, Templar will evaluate the
expression as a Python expression instead.
* A block defined within a Markdown source file. For example, suppose a
source file has the following block:
<block example>
Some Markdown here.
</block example>
The block `example` can then be used in an expression like so:
<body>
<p>Some HTML here</p>
{{ :example }}
Notice the colon that precedes the block name.
### Template Inheritance
Templar also supports template inheritance. A "child" template can
specify which "parent" template to inherit by including the following
on the *very first line of the child template*:
<% extends parent.html %>
In the "parent" template, you can define labels that "child" templates
can fill. Suppose the following content is found in `parent.html`:
<div id="nav-bar">
<h3>Title</h3>
{% nav-bar %}
</div>
The `{% nav-bar %}` tag allows child templates to "override" parent
labels. Suppose the following content is found in `child.html`:
<% extends parent.html %>
<% nav-bar %>
<h3>Some other stuff</h3>
<h3>Some more stuff</h3>
<%/ nav-bar %>
The result of compiling `child.html` will look like this:
<div id="nav-bar">
<h3>Title</h3>
<h3>Some other stuff</h3>
<h3>Some more stuff</h3>
</div>
If a child template chooses not to inherit a tag, that tag will simply
be removed from the final document.
Source files
------------
### Markdown
A basic source file can simply contain regular Markdown. Templar uses a
Markdown converter that follows the [Daring
Fireball]()
specification.
In addition, Templar's Markdown parser supports variable definitions:
~ variable name: variable value
That is, a tilde (`~`) followed by at least one space, a variable name,
a colon, and a variable value. Variable declarations have the following
rules:
*
You can tell Templar to parse a content file as Markdown with use of
the `-m` flag:
python3 templar compile template.html -s example.md -m
python3 templar link example.md -m
### Other filetypes
Templar can also use non-Markdown files as content sources. For
example, we have a template called `homework.py` with the following:
"""Python homework"""
{{ :all }}
if __name__ == '__main__':
main()
and a source file called `hw1.py` with the following:
def question1(args):
pass
def question2(args):
pass
Then the following command would publish a file called `pub/hw1.py`
python3 templar compile homework.py -s hw1.py -d pub/hw1.py
Notice we omit the use of `-m`, since we are not publishing Markdown.
The contents of `pub/hw1.py` will look like the following:
"""Python homework"""
def question1(args):
pass
def question2(args):
pass
if __name__ == '__main__':
main()
* * *
There are also two special tags that can be used: the `<block>` tag and
the `<include>` tag. These tags can be used in any kind of source file
(Markdown or not).
### `block` tag
The `<block>` tag allows you to name a certain section of Markdown:
Some Markdown out here
<block example>
Example
-------
This Markdown is within the block.
</block example>
The opening `block` tag consists of triangular braces, `< >`, the word
`block`, followed by a space, and then the name of the block. In the
example above, the name of the block is `example`.
The closing `block` tag uses a forward slash and also needs to contain
the name of the block that it closes. This allows you to nest blocks
inside of each other:
<block outer>
<block inner>
This stuff here would be included in both the inner block and the
outer block
</block inner>
But this stuff would only be included in the outer block.
</block outer>
Block names must be unique within a single file.
The `<block>` tag can also be surrounded by extra characters on the same
line:
# <block python>
def hello(world):
return hi
# </block python>
This allows blocks to be defined in source code (e.g. Python scripts)
as comments, so that the source code can be executed.
### `include` tag
The `<include>` tag allows you to link different sources
together. The idea is that sometimes, it is useful to write modular
Markdown sources to make it easier to manage directories. This also
makes it faster to refer to the same Markdown file without duplicating
its contents. Here is an example:
Topics
------
<include path/to/topics.md>
Examples
--------
<include path/to/example.py>
References
----------
<include path/to/references.md:blockA>
In the example above, the first and second `include` tags simply use a
filepath. As in the example, any filetype can be included, irrespective
of the original source filetype (e.g. a Python file can be included in
a Markdown file), as long as the files are plain text.
The filepaths can be written in the following ways
* *relative to the directory in which you will run Templar*: this is
always assumed by Templar first
* *relative to the current source file*: this is useful if the source
file is close to other files that it is linking. Templar will assume
the filepath is relative to the source file only if it cannot find
the filepath relative to the working directory.
The first two `include` tags will simply copy all of the contents
listed inside of `path/to/topics.md` and `path/to/example.py` into the
locations of the `include` tags.
The third `include` tag also references a `blockA` inside of the file
`path/to/references.md`. This is useful if you only want to include a
subsection of another Markdown file. The syntax is the following:
<include path/to/file:block-name>
#### Regular expressions
The block name in an `include` tag can also be a regular expression:
<include path/to/file:block\d+>
Here, the regular expression will be `block\d+`. All blocks in
`path/to/file` whose names match the regular expression will be
included in place of the `include` tag. The blocks will be included in
the order they are defined in `path/to/file` (using their opening
`block` tag to define order).
Suppose the regular expression matches `block42`, `block2`, `block3`,
in that order. The `include` tag will be expanded into the following:
<include path="" to="" file:
<include path="" to="" file:
<include path="" to="" file:
### Custom patterns
You can define custom patterns in your source files, like the
following:
Question 1
----------
A question here.
<solution>
This is the solution to the question: `f = lambda x: f(x)`.
</solution>
You can specify how to convert the `<solution>...</solution>` pattern
by defining a regular expression in `config.py`. For example,
solution_re = re.compile(r"<solution>(.*?)</solution>", re.S)
def solution_sub(match):
return "<b>Solution</b>: " + match.group(1)
SUBSTITUTIONS = [
(solution_re, solution_sub),
]
would replace the `solution` tag with a boldface "Solution: " followed
by the contents within the solution tag. All regular expressions should
be listed inside of the `SUBSTITUTIONS` list (in `config.py`), along
with the corresponding substitution function.
**Important note**: the custom patterns are evaluated **after** all
linking (`include` tags) and all Markdown parsing has occurred. Thus,
in the example above, `solution_re` should expect the contents of the
solution tag to look like this:
<p>this is the solution to the question: <code>f = lambda x:
f(x)</code>.</p>
Other Features
--------------
### Linking
Templar's primary publishing method is to use templates via the
`compile` subcommand. However, if you do not need to use a template,
and simply want to link a source content file (through use of `include`
tags), you can use the `link` subcommand:
python3 templar link example.md -m -d result.html
* `example.md` is the source file to link. This is a required argument
* `-m` tells Templar to parse the content as Markdown. You can omit
this flag to skip Markdown parsing
* `-d result.html` tells Templar to write the result to a file called
`result.html`. If this argument is omitted, Templar will print the
result to standard out.
### Table of Contents
Templar has the capability to scrape headers to create a table of
contents. You can specify exactly *what* is defined to be a header in
`config.py`:
header_regex = r"..."
def header_translate(match):
return ...
def table_of_contents(lst):
...
* `header_regex` can either be a string or a `RegexObject`, and tell
Templar how to recognize a "header".
* `header_translate` takes a regular expression match and extracts
information (e.g. the `id` attribute and title of an HTML `<h1>` tag)
* `table_of_contents` takes a list of expressions returned by
`header_translate` and compiles the final table of contents.
The table of contents that is returned by `table_of_contents` is
available to templates with the expression
`{{ table-of-contents }}`
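To make the skeleton above concrete, here is a hedged example of how these three hooks might be filled in for `<h2>` headers (my own illustration, not shipped with Templar; adapt the regex and markup to your needs):

import re

# Match <h2 id="...">...</h2> headers emitted by the Markdown parser.
header_regex = re.compile(r'<h2 id="(?P<id>[^"]+)">(?P<title>.*?)</h2>')

def header_translate(match):
    # Extract a (slug, title) pair from each header match.
    return match.group('id'), match.group('title')

def table_of_contents(lst):
    # Build a simple unordered list linking to each header.
    items = ['<li><a href="#{0}">{1}</a></li>'.format(slug, title)
             for slug, title in lst]
    return '<ul>\n' + '\n'.join(items) + '\n</ul>'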
### Header slugs
Templar's Markdown parser adds slugs to headers (`<h[1-6]>`). These
slugs are added as `id` attributes of the headers.
### Multiple `config.py` files
It can be useful to have a hierarchical `config` structure if you have
multiple directories with content, each with their own publishing
configurations. For example, suppose we have the following directory
structure:
<project>/
lib/
templar/
articles/
config.py
config.py
projects/
config.py
We have a directory for articles that contains some general
configurations (e.g. a table of contents scraper). In the `blog` and
`projects` directories, we have more specific config files (e.g.
substitutions for different types of articles).
If we run Templar from the `blog` directory
python3 ../../lib/templar compile ...
Templar will use the following method to search for configs:
* Find the lowest common ancestor of `<project>/articles/blog/` and
`<project>/lib/templar/` (in this case, the ancestor is `<project>/`
* Starting from the ancestor (`<project>/`), Templar will traverse down
to `<project>/articles/blog/`
* At each intermediate directory, Templar will scan for a `config.py`
file. If one exists, it will accumulate the contents of that
`config.py` with all the other configs it has seen before.
Acknowledgements
----------------
Templar is an extension of the program that used to generate my
personal website. The idea for linking (the `include` and `block` tags)
was conceived while developing the publisher for UC Berkeley's CS 61A labs.
The Markdown parser is a re-implementation of
[markdown2]() a Python
implementation of the original Perl Markdown parser. The variant of
Markdown that Templar supports is a subset of [Daring Fireball's
Markdown
specification]()
Syntax for template inheritance and expression substitution are
inspired by [Django]()'s templating
syntax, as well as [ejs]()'s templating syntax.
- Author: Albert Wu
- Keywords: templating,static template,markdown
- License: MIT
- Categories
- Package Index Owner: albert12132
- DOAP record: templar-1.0.0.dev4.xml
Commits
Files changed (54)
- +0 -0.DS_Store
- +1230 -0Concurrency.rst
- +2 -2DataTypes.rst
- +2 -2DatabasesAndJython.rst
- +2 -2DefiningFunctionsandUsingBuilt-Ins.rst
- +820 -0DeploymentTargets.rst
- +2 -2ExceptionHandlingDebug.rst
- +441 -0GUIApplications.rst
- +2 -2InputOutput.rst
- +1083 -0IntroToPylons.rst
- +3 -3JythonAndJavaIntegration.rst
- +1586 -0JythonDjango.rst
- +3 -0JythonGUI.rst
- +2 -2JythonIDE.rst
- +3 -0JythonPylons.rst
- +2 -2LangSyntax.rst
- +2 -2ModulesPackages.rst
- +2 -2ObjectOrientedJython.rst
- +2 -2OpsExpressPF.rst
- +2 -2Scripting.rst
- +2 -2SimpleWebApps.rst
- +1444 -0TestingIntegration.rst
- +16 -0_templates/layout.html
- +61 -0aboutTheAuthors.rst
- +261 -209appendixA.rst
- +440 -547appendixB.rst
- — —appendixC.rst
- — —conf.py
- — —images/11-18.jpg
- — —images/14-14.png
- — —images/14-15.png
- — —images/chapter14-conn-pool.png
- — —images/chapter14-conn-props.png
- — —images/chapter14-glassfish-jdbc.png
- — —images/chapter14-ping.png
- — —images/picture_0.png
- — —images/picture_1.png
- — —images/picture_10.png
- — —images/picture_11.png
- — —images/picture_12.png
- — —images/picture_13.png
- — —images/picture_14.png
- — —images/picture_15.png
- — —images/picture_16.png
- — —images/picture_17.jpg
- — —images/picture_2.jpg
- — —images/picture_3.jpg
- — —images/picture_4.jpg
- — —images/picture_5.jpg
- — —images/picture_6.jpg
- — —images/picture_7.png
- — —images/picture_8.png
- — —images/picture_9.png
- — —index.rst
.DS_Store
Binary file modified.
Concurrency.rst
.
+The semiconductor industry continues to work feverishly to uphold Moore's Law of exponential increase in chip density.
is arguably the most robust environment for running concurrent code today, and this functionality can readily be used from Jython.
+This is especially true with respect to a concurrency model based on threads, which is what today.) If you attempt to solve concurrency issues through synchronization, you run into other problems: besides the potential performance hit, there are opportunities for deadlock and livelock.
+ Queues and related objects -- like synchronization barriers -- provide a structured mechanism to hand over objects between threads.
+One issue that you will have to consider in writing concurrent code is how much to make your implementation dependent on the Java platform.
+-âs.)
+ This means you can just use these standard Python types, and still get high performance concurrency.
+ So if it fits your app's needs, you may want to consider using such collections as CopyOnWriteArrayList and ConcurrentSkipListMap (new in Java 6).
+ This is particularly true of the executor services for running and managing tasks against thread pools.
+ So for example, avoid using threading.Timer, because you can use timed execution services in its place.
+ In particular, these constructs have been optimized to work in the context of a with-statement, as we will discuss.
+In practice, using Java's support for higher level primitives should not impact the portability of your code so much.
+Using tasks in particular tends to keep all of this well isolated, and such thread safety considerations as thread confinement and safe publication remain the same.
__).
+As we will see, publishing results into variables is safe in Jython, but it's not the nicest way.
+ Upon JVM shutdown, any daemon threads are simply terminated, without an opportunity--or need--to perform cleanup or orderly shutdown.
+ This lack of cleanup means it's important that daemon threads never hold any external resources, such as database connections or file handles.
+ For similar reasons, a daemon thread should never make an import attempt, as this can interfere with Jython's orderly shutdown.
+ In production, the only use case for daemon threads is when they are strictly used to work with in-memory objects, typically for some sort of housekeeping.
+ Likewise, a later example demonstrating deadlock uses daemon threads to enable shutdown without waiting on these deadlocked threads.
+The threading.local class enables each thread to have its own instances of some objects in an otherwise shared environment.
+Its usage is deceptively simple: simply create an instance of threading.local, or a subclass, and assign it to a variable or other name.
+Threads can then share the variable, but with a twist: each thread will see a different, thread-specific version of the object.
+This object can have arbitrary attributes added to it, each of which will not be visible to other threads.
+But one unique, and potentially useful, aspect is that any attributes specified in __slots__ will be *shared* across threads.
+Usually they don't make sense because threads are not the right scope, but an object or a function is, especially through a closure.
+If you are using thread locals, you are implicitly adopting a model where threads are partitioning the work.
+ Historically, this may have been in fact faster, but it now slows things down, and unnecessarily limits what a given thread can do.
+ A future refactoring of Jython will likely remove the use of "ThreadState" completely, simultaneously speeding and cleaning things up.
+ And of course, if you are using code whose architecture mandates thread locals, it's just something you will have to work with.
+Take advantage of the fact that Python is a dynamic language, with strong support for metaprogramming, and remember that the Jython implementation makes these techniques accessible when working with even recalcitrant Java code.
+They do not work well in a task-oriented model, because you don't want to associate context with a worker thread that will be assigned to arbitrary tasks.
's runtime.
+Instead, developers typically will use a process-oriented model to evade the restrictiveness.
+This is true whether the import goes through the import statement, the equivalent __import__ builtin, or related code.
+It's important to note that even if the corresponding module has already been imported, the module import lock will still be acquired, if only briefly.
+Just keep in mind that thread(s) performing such imports will be forced to run single threaded because of this lock.
+So as you can see, you need to do at least two imports of a given module; one in the background thread; the other in the actual place(s) where the moduleâs namespace is being used.
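A small sketch of that two-import pattern (the module chosen here is arbitrary):

import threading

def preload():
    # First import: pay the initialization cost in a background thread.
    import xml.dom.minidom

threading.Thread(target=preload).start()

def parse(text):
    # Second import at the point of use: nearly free once sys.modules is
    # populated, though it still briefly takes the module import lock.
    import xml.dom.minidom
    return xml.dom.minidom.parseString(text)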
+Note that in the current implementation, the module import lock is global for the entire Jython runtime.
+Although there are other options, the object you submit to be executed should implement Java's Callable interface (a call method without arguments), as this best maps into working with a Python method or function.
+Tasks move through the states of being created, submitted (to an executor), started, and completed.
+We are going to look at how we can use this functionality by using the example of downloading web pages.
+We will wrap this up so it's easy to work with, tracking the state of the download, as well as any timing information.
+In Jython any other task could be done in this fashion, whether it is a database query or a computationally intensive task written in Python.
+Upon completion of a future, either the result is returned, or an exception is thrown into the caller.
+(This pushing of the exception into the asynchronous caller is thus similar to how a coroutine works when send is called on it.)
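Here is a hedged sketch of that idea in Jython; the URLs and pool size are arbitrary, and the Download class is only a stand-in for the more complete wrapper described above:

from java.util.concurrent import Executors, Callable
import urllib2

class Download(Callable):
    def __init__(self, url):
        self.url = url
    def call(self):                    # runs on a pool thread
        return self.url, len(urllib2.urlopen(self.url).read())

pool = Executors.newFixedThreadPool(4)
futures = [pool.submit(Download(url))
           for url in ("http://www.jython.org", "http://www.python.org")]
for future in futures:
    print future.get()                 # blocks; re-raises the task's exception here
pool.shutdown()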
+However, you may need to take into account that this shutdown can happen during extraordinary times in your code.
+Here's the Jython version of a robust shutdown function, shutdown_and_await_termination, as provided in the standard Java docs.
+The scenario is that instead of waiting for all the futures to complete, as our code did with invokeAll, or otherwise polling them, the completion service will push futures as they are completed onto a synchronized queue.
+Although it may be tempting to then schedule everything through the completion service's queue, there are limits.
+For example, if you're writing a scalable web spider, you would want to externalize this work queue, but for simple management, it would certainly suffice.
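A sketch of the completion-service variant, reusing the hypothetical Download callable from the previous sketch and assuming urls is a list of URL strings:

from java.util.concurrent import Executors, ExecutorCompletionService

pool = Executors.newFixedThreadPool(4)
completion = ExecutorCompletionService(pool)
for url in urls:
    completion.submit(Download(url))
for _ in urls:
    print completion.take().get()      # take() blocks until the next task finishes
pool.shutdown()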
+ Why use tasks instead of threads? A common practice too often seen in production code is the addition of threading in a haphazard fashion:
+ This can result in a rats' nest of threads synchronizing on a variety of objects, often with timers and other event sources thrown in the mix.
+ It's certainly possible to make this sort of setup work (just debug away), but using tasks, with explicit wait-on dependencies and time scheduling, makes it far simpler to build a simple, scalable system.
+- Can the (unintended) interaction of two or more threads corrupt a mutable object? This is especially dangerous for a collection like a list or a dictionary, because such corruption could potentially render the underlying data structure unusable or even produce infinite loops when traversing it.
+ In this case, there can be a data race with another thread in the time between retrieving the current value, and then updating with the incremented value.
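A minimal sketch of that read-modify-write race on a shared counter (the counts are arbitrary; without a lock the final value often falls short of 200000):

import threading

counter = [0]                          # a shared, mutable cell

def hammer():
    for _ in range(100000):
        counter[0] = counter[0] + 1    # the read and the write can interleave badly

threads = [threading.Thread(target=hammer) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print counter[0]                       # frequently not 200000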
+Jython ensures that its underlying mutable collection types -- dict, list, and set -- cannot be corrupted.
+However, other Java collection objects that your code might use would typically not have such no-corruption guarantees.
+If you need to use LinkedHashMap, so as to support an ordered dictionary, you will need to consider thread safety if it will be both shared and mutated.
+For example, we use this idea in Jython to test that certain operations on the list type are atomic.
+The net result should be right where you started, an empty list, which is what the test code asserts.
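A sketch of such a test; the thread and iteration counts are arbitrary:

import threading

shared = []

def push_pop(n):
    for i in range(n):
        shared.append(i)               # each append is atomic on Jython's list
        shared.pop()                   # as is each pop

threads = [threading.Thread(target=push_pop, args=(10000,)) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert shared == []                    # balanced appends and pops leave it empty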
+Commonly used objects like strings, numbers, datetimes, tuples, and frozen sets are immutable, and you can create your own immutable objects too.
+We use synchronization to control the entry of threads into code blocks corresponding to synchronized resources.
+(In Jython, but unlike CPython, such locks are always reentrant; there's no distinction between threading.Lock and threading.RLock.) Other threads have to wait until that thread exits the lock.
+You should generally manage the entry and exit of such locks through a with-statement; failing that, you must use a try-finally to ensure that the lock is always released when exiting a block of code.
+It's actually slower than the with-statement, and using the with-statement version also results in more idiomatic Python code.
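For example, a sketch of both styles guarding the same shared dictionary:

import threading

lock = threading.Lock()
shared = {}

def update(key, value):
    with lock:                         # acquired here, always released on exit
        shared[key] = value

def update_try_finally(key, value):
    lock.acquire()
    try:                               # equivalent, but noisier and easier to get wrong
        shared[key] = value
    finally:
        lock.release()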
+This module provides a ``make_synchronized`` decorator function, which wraps any callable in Jython in a synchronized block.
+Even in the case of an exception, the synchronization lock is always released upon exit from the function.
+Again, this version is also slower than the with-statement form, and it doesn't use explicit locks.
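Assuming the module in question is Jython's synchronize module (as its name suggests), usage looks roughly like this:

from synchronize import make_synchronized

totals = {}

@make_synchronized
def tally(key):
    # Only one thread at a time can be anywhere in this function, and the
    # underlying lock is released even if an exception escapes.
    totals[key] = totals.get(key, 0) + 1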
+ The with-statement's semantics make it relatively easy for us to do that when working with built-in types like threading.Lock, while avoiding the overhead of Java runtime reflection.
+ In the future, support of the new invokedynamic bytecode should collapse these performance differences.
+You may want to use the synchronizers in java.util.concurrent instead of their wrapped versions in threading.
+Also, you may want to use factories like Collections.synchronizedMap, when applicable, to ensure the underlying Java object has the desired synchronization.
+Without a timeout or other change in strategy (Alice just gets tired of waiting on Bob!), the deadlocked threads will wait forever.
+(Synchronized queues are also called blocking queues, and that's how they are described in java.util.concurrent.) Such queues represent a thread-safe way to send objects from one or more producing threads to one or more consuming threads.
+notify is used to wake up one thread that's waiting on a condition; notifyAll is used to wake up all such threads.
+Your code needs to bracket waiting and notifying the condition by acquiring the corresponding lock, then finally (as always!) releasing it.
+For example, here's how we actually implement a Queue in the standard library of Jython (just modified here to use the with-statement).
+We can't use a standard Java blocking queue, because the requirement of being able to join on the queue when there's no more work to be performed requires a third condition variable.
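In miniature, the wait/notify bracketing described here looks like the following (a deliberately tiny unbounded queue, not the real implementation):

import threading

items = []
condition = threading.Condition()

def put(value):
    with condition:                    # must hold the lock to notify
        items.append(value)
        condition.notify()

def get():
    with condition:                    # must hold the lock to wait
        while not items:               # re-check the predicate after every wakeup
            condition.wait()
        return items.pop(0)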
+You can use semaphores to describe scenarios where it's possible for multiple threads to enter, or use locks that are set up to distinguish reads from writes.
+Data races and object corruption do not occur, and it's not possible for other threads to see an inconsistent view.
+In addition, atomic operations will often use underlying support in the CPU, such as a compare-and-swap instruction.
+Python guarantees the atomicity of certain operations, although at best it's only informally documented.
+Fredrik Lundh's article on "Thread Synchronization Methods in Python" summarizes the mailing list discussions and the state of the CPython implementation.
+In particular, because dict is backed by a ConcurrentHashMap, we also expose methods to atomically update dictionaries.
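As a hedged sketch of what that enables, setdefault and pop with a default can be used as atomic check-then-act operations on a shared dict:

registry = {}

def register(key, value):
    # Acts like put-if-absent: exactly one thread's value wins for a given key.
    return registry.setdefault(key, value)

def unregister(key):
    # Atomic check-and-remove; returns None if another thread got there first.
    return registry.pop(key, None)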
+Often, you still need to use synchronization to prevent data races, and this has to be done with care to avoid deadlocks and starvation.
+In practice, you probably don't need to share a large percentage of the mutable objects used in your code.
+ For example, if you are building up a buffer that is only pointed to by a local variable, you don't need to synchronize.
+ It's an easy prescription to follow, so long as you are not trying to keep around these intermediate objects to avoid allocation overhead: don't do that.
+ For example, if you are using modjy, then the database connection pools and thread pools are the responsibility of the servlet container.
+ (But don't do things like share database connections across threads.) Caches and databases then are where you will see shared state.
+ Send and receive messages to an actor (effectively an independent thread) and let it manipulate any objects it owns on your behalf.
+ The message queue can then ensure any accesses are appropriately serialized, so there are no thread safety issues.
+For example, if you use StringIO, you have to pay the cost that this class uses list, which is synchronized.
+Python code in Jython is comparatively easy to reason about, because its memory model is not as surprising to our conventional reasoning about how programs operate: once a thread updates a shared object, the update is visible to other threads.
+(Java's own memory model, in order to maximize performance, makes no such guarantee.)
+Of course, this visibility only applies to changes made to non-local objects; thread confinement still applies.
+In particular, this means you cannot rely on the apparent sequential ordering of Java code when looking at two or more threads.
+If you need to set up shared state when a module is first loaded, such as module-level "singletons", then you should do this in the top-level script of the module so that the module import lock is in effect.
+In particular, if a thread is waiting on most any synchronizers, such as a condition variable or on file I/O, this action will cause the waited-on method to exit with an InterruptedException.
+(Unfortunately lock acquisition, except under certain cases such as using lockInterruptibly on the underlying Java lock, is not interruptible.)
+Although Python's threading module does not itself support interruption, it is available through the standard Java thread API.
+First, let's import this class (we will rename it to JThread so it doesn't conflict with Python's version).
+So logically you should be able to do the converse: use Python threads as if they are Java threads.
+ Incidentally, this formulation, instead of obj.interrupt(), looks like a static method on a class, as long as we pass in the object as the first argument.
+As of the latest released version (Jython 2.5.1), we forgot to include an appropriate __tojava__ method on the Thread class! So this looks like you can't do this trick after all.
+Or can you? What if you didn't have to wait until we fix this bug? You could explore the source code -- or look at the class with dir.
+We can *monkey patch* the Thread class such that it has an appropriate __tojava__ method, but only if it doesn't exist.
+So this patching is likely to work with a future version of Jython because we are going to fix this missing method before we even consider changing its implementation and removing _thread.
+But again, you shouldn't worry too much when you keep such fixes to a minimum, especially when it's essentially a bug fix like this one.
+In our case, we will use a variant, the monkeypatch_method_if_not_set decorator, to ensure we only patch if it has not been fixed by a later version.
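A sketch of the patch plus the static-looking interrupt call; the decorator below is a hypothetical stand-in for the variant mentioned above, and _thread is a Jython 2.5.1 implementation detail:

from java.lang import Thread as JThread
import threading

def monkeypatch_method_if_not_set(cls):
    def patch(func):
        if func.__name__ not in cls.__dict__:   # leave fixed Jython versions alone
            setattr(cls, func.__name__, func)
        return func
    return patch

@monkeypatch_method_if_not_set(threading.Thread)
def __tojava__(self, java_class):
    return self._thread                         # hand Java the backing java.lang.Thread

worker = threading.Thread(target=lambda: None)
worker.start()
JThread.interrupt(worker)                       # coercion now succeeds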
+You can also use the standard Python threading constructs, which in most cases just wrap the corresponding Java functionality.
+The standard mutable Python collection types have been implemented in Jython with concurrency in mind; and Python's sequential consistency removes some potential bugs.
DataTypes.rst
DatabasesAndJython.rst
DefiningFunctionsandUsingBuilt-Ins.rst
DeploymentTargets.rst
+However, they are all very similar and usually allow deployment of WAR file or exploded directory web applications.
+Some cloud environments have typical Java application servers available for hosting, while others such as the Google App Engine run a bit differently.
+In this chapter, we'll cover the most common of these deployment targets.
+Placing the *jython.jar* directly into each web application is a good idea because it allows the web application to follow the Java paradigm of "deploy anywhere." You do not need to worry whether you are deploying to Tomcat or Glassfish because the Jython runtime is embedded in your application.
+Lastly, this section will briefly cover some of the reasons why mobile deployment is not yet a viable option for Jython.
+While a couple of targets exist in the mobile world, namely Android and JavaFX, both environments are still very new and Jython has not yet been optimized to run on either.
+As with any Java web application, the standard web archive (WAR) files are universal throughout the Java application servers available today.
+This is good because it makes things a bit easier when it comes to the "write once run everywhere" philosophy. This section will discuss how to deploy a WAR file on each of the three most widely used Java application servers.
+Not all application servers are covered in this section, mainly due to the number of servers available today.
+However, you should be able to follow similar deployment instructions as those discussed here for any of the application servers available today for deploying Jython web applications in the WAR file format.
+For the purposes of this section, we've used Netbeans 6.7, so there may be some references to it.
+To get started, download the Apache Tomcat server from the Apache Tomcat web site. Tomcat is constantly evolving, so we'll note that when writing this book the deployment procedures were targeted for the 6.0.20 release.
+Once you have downloaded the server and placed it into a location on your hard drive, you may have to change permissions.
+We had to use the *chmod +x* command on the entire apache-tomcat-6.0.20 directory before we were able to run the server.
+You will also need to configure an administrative account by going into the */conf/tomcat-users.xml* file and adding one.
+After this has been done, you can add the installation to an IDE environment of your choice if you'd like.
+For instance, if you wish to add to Netbeans 6.7 you will need to go to the "Services" tab in the navigator, right-click on servers, choose the "Tomcat 6.x" option, and then fill in the appropriate information pertaining to your environment.
+To deploy a web-start application, you copy its files into the "<tomcat-root>/webapps/ROOT" directory.
+For instance, if you have a web-start application, then you would package the JAR file along with the JNLP and HTML file for the application into a directory and then place that directory into the "<tomcat-root>/webapps/ROOT" directory.
+Once the application has been copied to the appropriate locations, you should be able to access it via the web if Tomcat is started.
+The URL will consist of the server name and the port that you are using along with the appropriate JNLP name for your application.
+You can either use a WAR file including all content for your entire web application, or you can deploy an exploded directory application, which is basically copy-and-paste for your entire web application directory structure into the "<tomcat-root>/webapps/ROOT" directory.
+For manual deployment of a web application, you can copy either your exploded directory web application or your WAR file into the "<tomcat-root>/webapps" directory.
+This means that you can have Tomcat started when you copy your WAR or exploded directory into the "webapps" location.
+Once you've done this, you should see some feedback from the Tomcat server if you have a terminal open (or from within the IDE).
+The bonus to deploying exploded directory applications is that you can take any file within the application and change it at will.
+If you do not wish to have autodeploy enabled (perhaps in a production environment), then you can deploy applications on startup of the server.
+This process is basically the same as "autodeploy," except any new applications that are copied into the "webapps" directory are not deployed until the server is restarted.
+To do this, open your web browser to the index page of Tomcat and then click on the "Manager" link in the left-hand menu.
+You will need to authenticate at that point using your administrator password, but once you are in the console deployment is quite easy.
+In an effort to avoid redundancy, we will once again redirect you to the Tomcat documentation for more information on deploying a web application via the Tomcat manager console.
+The Glassfish V3 server was still in preview mode, but showed a lot of potential for Jython application deployment.
+In this section, we will cover WAR and web start deployment to Glassfish V2, because it is the most widely used version.
+We will also discuss deployment for Django on Glassfish V3; otherwise, we recommend downloading V2, because it is the most widely used at the time of this writing.
+The installation of Glassfish will not be covered in this text, because it varies depending upon which version you are using.
+There are detailed instructions for each version located on the Glassfish website, so we recommend following those. To register the server with Netbeans, go to the "Services" tab in the Netbeans navigator, right-click on "Servers" and then add the version you are planning to register.
+Once the "Add Server Instance" window appears, simply fill in the information depending upon your environment.
+There is an administrative user named "admin" that is set up by default with a Glassfish installation.
+In order to change the default password, it is best to startup Glassfish and log into the administrative console.
+Deploying a web start application is basically the same as any other web server, you simply make the web start JAR, JNLP, and HTML file accessible via the web.
+On Glassfish, you need to traverse into your "domain" directory and you will find a "docroot" inside.
+The path should be similar to "<glassfish-install-loc>/domains/domain1/docroot".
+Assuming that you are using V2, you have the option to "hot deploy" or use the Glassfish Admin Console to deploy your application.
+By default, the Glassfish "autodeploy" directory is "<glassfish-install-loc>/domains/domain1/autodeploy."
+The Glassfish V3 server has some capabilities built into it to help facilitate the process of deploying a Django application.
+Other application servers differ in some of the details, as mentioned in the introduction, but for the most part deployment is the same.
+However, we have run into cases with some application servers such as JBoss where it wasn't so cut-and-dry to run a Jython application.
+For instance, we have tried to deploy a Jython servlet application on JBoss application server 5.1.0 GA and had lots of issues.
+For one, we had to manually add the Jython libraries to the application because we were unable to compile the application in Netbeans without doing so...this was not the case with Tomcat or Glassfish.
+Similarly, we had issues trying to deploy a Jython web application to JBoss, as there were several errors incurred when the container was scanning the deployed archive.
+If you deploy to another service that lives in "the cloud," you have very little control over the environment.
+In the next section, we'll study one such environment by Google which is known as the Google App Engine.
+While this "cloud" service is an entirely different environment than your basic Java web application server, it contains some nice features that allow one to test applications prior to deployment in the cloud.
+Fresh to the likes of the Java platform, the Google App Engine can be used for deploying applications written in just about any language that runs on the JVM, Jython included.
+The App Engine went live in April of 2008, allowing Python developers to begin using its services to host Python applications and libraries.
+Entire books could be written on the subject of developing Jython applications to run on the App Engine.
+With that said, we will cover the basics to get you up and running with developing Jython applications for the App Engine.
+Once you've read through this section, we suggest going to the Google App Engine documentation for further details.
+We will start by running the demo application known as "guestbook" that comes with the Google App Engine SDK.
+This is a very simple Java application that allows one to sign in using an email address and post messages to the screen.
+In order to start the SDK web server and run the "guestbook" application, you run the development server script that ships with the SDK; once you've issued the command it will only take a second or two before the web server starts.
+You can then open a browser and traverse to the local development server's URL to invoke the "guestbook" application.
+This is a basic JSP-based Java web application, but we can deploy a Jython application and use it in the same manner as we will see in a few moments.
+To use the App Engine itself, go to the App Engine site and click "Sign Up." Enter your existing account information or create a new account to get started.
+After your account has been activated you will need to create an application by clicking on the "Create Application" button.
+You have a total of 10 available application slots to use if you are making use of the free App Engine account.
+The Google App Engine provides project templates to get you started developing using the correct directory structure.
+Eclipse has a plug-in that makes it easy to generate Google App Engine projects and deploy them to the App Engine.
+If interested in making use of the plug-in, please visit the plug-in's web site to read more information and download the plug-in.
+Similarly, Netbeans has an App Engine plug-in, named nbappengine, that is available on the Kenai site.
+In this text we will cover the use of Netbeans 6.7 to develop a simple Jython servlet application to deploy on the App Engine.
+You can either download and use the template available with one of these IDE plug-ins, or simply create a new Netbeans project and make use of the template provided with the App Engine SDK (<app-engine-base-directory/demos/new_project_template>) to create your project directory structure.
+If you are using Eclipse you will find a section following this tutorial that provides some Eclipse plug-in specifics.
+In order to install the nbappengine plug-in, you add the "App Engine" update center to the Netbeans plug-in center by choosing the Settings tab and adding the update center with the URL provided by the nbappengine project.
+Once you've added the new update center you can select the Available Plugins tab and add all of the plug-ins in the "Google App Engine" category, then choose Install.
+After doing so, you can add the "App Engine" as a server in your Netbeans environment using the "Services" tab.
+Once you have added the App Engine server to Netbeans, it will become an available deployment option for your web applications.
+For the deployment server, choose "Google App Engine," and you will notice that when your web application is created an additional file will be created within the WEB-INF directory named appengine-web.xml.
+At this point we will need to create a couple of additional directories within our WEB-INF project directory.
+We should create a *lib* directory and place *jython.jar* and *appengine-api-1.0-sdk-1.2.2.jar* into the directory.
+In a traditional Jython servlet application we need to ensure that the *PyServlet* class is initialized at startup and that all files ending in *.py* are passed to it.
+We found some inconsistencies while deploying against the Google App Engine development server and deploying to the cloud.
+For this reason, we will show you the way that we were able to get the application to function as expected in both the production and development Google App Engine environments.
+If this same pattern is applied to Jython servlet applications, then we can use the factories to coerce our Jython servlet into Java byte code at runtime.
+We then map the resulting coerced class to a servlet mapping in the application's web.xml deployment descriptor.
+We can also deploy our Jython applets and make use of *PyServlet* mapping to the *.py* extension in the *web.xml*.
+Copy the PlyJy JAR into the lib directory that we created previously to ensure it is bundled with our application.
+There is a Java servlet contained within the PlyJy project, and what this Java servlet does is essentially use an object factory to coerce a named Jython servlet and then invoke its resulting methods.
+There is also a simple Java interface in the project, and it must be implemented by our Jython servlet in order for the coercion to work as expected.
+When we use the PyServlet mapping implementation, there is no need to coerce objects using factories.
+You simply set up a servlet mapping within web.xml and use your Jython servlets directly with the .py extension in the URL.
+However, we chose to implement the object factory solution for Jython servlet to App Engine deployment.
+In this example, we'll make use of a simple servlet that displays some text as well as the same example that was used in Chapter 13 with JSP and Jython.
+The first servlet simply displays some output, the next two perform some mathematical logic, and then there is a JSP to display the results for the mathematical servlets.
+Note that when using the PyServlet implementation you should exclude those portions in the *web.xml* above that are used for the object factory implementation.
+That's it, now you can deploy the application to your Google App Engine development environment, and it should run without any issues.
+You can deploy directly to the cloud by right-clicking the application and choosing the "Deploy to App Engine" option.
+If you wish to use the Eclipse IDE for development, you should definitely download the Google App Engine plug-in using the link provided earlier in the chapter.
+You should also use the PyDev plug-in, which is available from the PyDev site. For the purposes of this section, we used Eclipse Galileo and started a new project named "JythonGAE". Once you are ready to deploy the application, you can choose to use the Google App Engine development environment or deploy to the cloud.
+You can run the application by right-clicking on the project and choosing *Run As* option and then choose the Google Web Application option.
+If you are ready to deploy to the cloud, you can right-click on the project and choose the *Google* -> *Deploy to App Engine* option.
+According to the modjy web site, you need to obtain the source for Jython, then zip the directory and place it into another directory along with a file that will act as a pointer to the zip archive.
+The modjy site suggests names for the directory and for the pointer file; the pointer file can be named anything as long as it uses the expected suffix. Inside the pointer file you need to explicitly name the zip archive that you created for the directory contents.
+Let's assume you named it lib.zip; in this case we will put the text "lib.zip" without the quotes into the file.
+Now if we add the modjy demonstration application to the project then our directory structure should look as follows:
+Likewise, we can run it using the Google App Engine SDK web server and it should provide the expected results.
+Google offers free hosting for smaller applications, and they also base account pricing on bandwidth.
+Most importantly, you can deploy Django, Pylons, and other applications via Jython to the App Engine by setting up your App Engine applications like the examples we had shown in this chapter.
+The Java Store is a storefront application where people can go to search for applications that have been submitted by developers.
+Deploying there should be as easy as generating a JAR file that contains a Jython application and deploying it to the Java Store.
+Unfortunately, because the program is still in alpha mode at this time, this book will not discuss such aspects of the program as memberships or fees that may be incurred for hosting your applications on the Java Store.
+- Graphic image files used for icons and to give the consumer an idea of your application's look.
+This approach has similarities and differences to using the Jython standalone JAR technique.
+We've already discussed packaging Jython applications into a JAR file using the Jython standalone method in Chapter 13. In this section, you will learn how to make use of the One-JAR product to distribute client-based Jython applications.
+There are a few options available on the download site, but for our purposes we will package an application using the source files for One-JAR.
+Next, we need to create separate source directories for both our Jython source and our Java source.
+Lastly, we'll create a *lib* directory into which we will place all of the required JAR files for the application.
+In order to run a Jython application, we'll need to package the Jython project source into a JAR file for our application.
+The easiest way to obtain a standalone Jython JAR is to run the installer and choose the standalone option.
+As you can see from the depiction of the file structure in this example, the *src* directory will contain our Jython source files.
+He has a detailed explanation of using One-Jar on his blog, and we've replicated some of his work in this example.
+The project layout includes a version of the build.xml that we will put together in order to build the application.
+In this example we are using Apache Ant for the build system, but you could choose something different if you'd like.
+In this case, we'll use the *PythonInterpreter* inside of our *Main.java* to invoke our simple Jython Swing application.
+In this example we are using the same simple Jython Swing application that we wrote for Chapter 13.
https://bitbucket.org/javajuneau/jythonbook/commits/28b0486ae6c10c366f4da396f96f2ca91b228ba2
On Wed, 18 Jun 2008, Adam Spragg wrote:
Hi,
On Tuesday 17 June 2008 14:57:01 Thomas Dickey wrote:
> The resolution was basically saying that the form library gets compiled to support wide-character mode, and that _it_ knows only about byte-at-a-time adds via addch, but that (with the limitation of not moving the cursor in the middle of the operation), should work.
I've compiled my own ncurses library from the gnu sources, rebuilt the test program against that, linked with it, run it and got the same problem - E_UNKNOWN_COMMAND when adding the first character of a '£', random accented vowels from unicode code points 192-255, and '€' - the euro symbol. When trying to enter the Euro symbol, the status line prints out the correct character id of 8364 (0x20ac) and points out that I got the E_UNKNOWN_COMMAND on byte 0 of 3, which is the correct number of bytes for the utf-8 encoding of that character. I configured ncurses with the command line:
./configure --prefix=/home/adam --enable-widec --with-shared
I didn't get that far (was doing maintenance for $dayjob last night).
and checked the test program was loading the new copy of the library with strace. Anything else I can check on my end?
Aside from perhaps suspecting the chunk at the end of form_driver (which looks okay at the moment):
/*
 * If we're using 8-bit characters, iscntrl+isprint cover the whole set.
 * But with multibyte characters, there is a third possibility, i.e.,
 * parts of characters that build up into printable characters which are
 * not considered printable.
 *
 * FIXME: the wide-character branch should also use Check_Char().
 */
#if USE_WIDEC_SUPPORT
    if (!iscntrl(UChar(c)))
#else
    if (isprint(UChar(c)) &&
        Check_Char(form->current->type, c,
                   (TypeArgument *)(form->current->arg)))
#endif
        res = Data_Entry(form, c);
}
For investigating your core dump, I intended to build a debug-version, and run it with valgrind (perhaps this evening).
Adam
p.s. Just out of curiosity, are you seeing the '£' (pound) and '€' (euro) characters correctly in the emails I'm sending? I've checked my outgoing mail and as far as I can tell it's being sent in utf-8 and is identifying itself as utf-8. But your earlier reply had the charset set to X-UNKNOWN which caused some problems displaying the pound character, and the archived version at lists.gnu.org[0] isn't quite right either.
At the moment, I'm connected to my mail-provider in pine, and that shows the Latin-1 codes (no UTF-8). On my home machine, that would work properly.
-- Thomas E. Dickey
http://lists.gnu.org/archive/html/bug-ncurses/2008-06/msg00021.html
This is an alternative to using the GoogleMapAPI to retrieve the geo codes (latitude and longitude) from zip codes. This website allows batch processing of the zip codes, which makes it very convenient for automated batch processing.
The steps below illustrate the general flow for retrieving the data from the website, which involves just entering the zip codes, pressing the "geocode" button, and reading the output from the secondary text box.
The above tasks can be automated using Selenium and Python, which can emulate the user's actions with just a few lines of code. A preview of the code is shown below. You will notice that it calls each element [textbox, button etc] by id. This is also an advantage of this website, which provides the id tag for each required element. The data retrieved are converted to a Pandas object for easy processing.
Currently, the waiting time is set manually by the user. The script can be further modified to retrieve the number of entries being processed before retrieving the final output. Another issue is that this website also makes use of the GoogleMapAPI engine, which restricts the number of queries (~2500 per day). If you require massive amounts of data, one way is to schedule the script to run at a fixed interval each day or perhaps query from multiple websites that have this conversion feature.
For my project, I may need to pull more than 100,000 data points. Pulling only 2500 queries per day is relatively limited even though I can run it on multiple computers. I would welcome suggestions.
import re, os, sys, datetime, time
import pandas as pd
from selenium import webdriver
from selenium.webdriver import Firefox
from time import gmtime, strftime

def retrieve_geocode_fr_site(postcode_list):
    """ Retrieve batch of geocode based on postcode list.
        Based on site:
        Args:
            postcode_list (list): list of postcode.
        Returns:
            (Dataframe): dataframe containing postcode, lat, long
        Note: need to calculate the time -- 100 entries take 94s
    """
    ## need to convert input to str
    postcode_str = '\n'.join([str(n) for n in postcode_list])

    # target website
    target_url = ''
    driver = webdriver.Firefox()
    driver.get(target_url)

    # input the query to the text box
    inputElement = driver.find_element_by_id("batch_in")
    inputElement.send_keys(postcode_str)

    # press button
    driver.find_element_by_id("geocode_btn").click()

    # allocate enough time for data to complete
    # 100 inputs take around 2-3 min, adjust accordingly
    time.sleep(60*10)

    # retrieve output
    output_data = driver.find_element_by_id("batch_out").get_attribute("value")
    output_data_list = [n.split(',') for n in output_data.splitlines()]

    # processing the output
    # convert to a pandas dataframe object for easy processing.
    headers = output_data_list.pop(0)
    geocode_df = pd.DataFrame(output_data_list, columns=headers)
    geocode_df['Postcode'] = geocode_df['"original address"'].str.strip('"')
    geocode_df = geocode_df.drop('"original address"', 1)

    ## printing a subset
    print geocode_df.head()

    driver.close()
    return geocode_df
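A short usage sketch (the zip codes and output filename below are placeholders, and batching keeps each run under the site's daily query limit):

if __name__ == '__main__':
    sample_postcodes = [96800 + n for n in range(50)]   # placeholder zip codes
    geocode_df = retrieve_geocode_fr_site(sample_postcodes)
    geocode_df.to_csv('geocode_output.csv', index=False)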
https://simply-python.com/tag/scrape/
MPI_Get_count - Gets the number of top-level elements received.
#include <mpi.h>
int MPI_Get_count(MPI_Status *status, MPI_Datatype datatype,
int *count)
INCLUDE 'mpif.h'
MPI_GET_COUNT(STATUS, DATATYPE, COUNT, IERROR)
INTEGER STATUS(MPI_STATUS_SIZE), DATATYPE, COUNT, IERROR
#include <mpi.h>
int Status::Get_count(const Datatype& datatype) const
status Return status of receive operation (status).
datatype Datatype of each receive buffer element (handle).
count Number of received elements (integer).
IERROR Fortran only: Error status (integer).
DESCRIPTION
A message might be received without counting the number of elements it contains, and the count value is often not needed. Also, this allows the same function to be used after a call to MPI_Probe.
SEE ALSO
MPI_Get_elements
Open MPI 1.2 September 2006 MPI_Get_count(3OpenMPI)
http://icl.cs.utk.edu/open-mpi/doc/v1.2/man3/MPI_Get_count.3.php
Ticket #2432 (closed defect: fixed)
call_on_startup functions do not successfully modify things in tg.config
Description
I have a library that does a few things for a TG2 app. To make things easy on the people who will be using my library in their apps I have a function to call on application startup to set a few things up. Among other things, this function attempts to modify base_config.variable_providers and base_config.ignore_parameters. Here's the code:
def enable_csrf():
    # Ignore the _csrf_token parameter
    ignore = config.get('ignore_parameters', [])
    print 'before', config.get('ignore_parameters', [])
    print 'before', config.get('variable_provider', [])
    if '_csrf_token' not in ignore:
        ignore.append('_csrf_token')
        config['ignore_parameters'] = ignore

    # Add a function to the template tg stdvars that looks up a template.
    var_provider = config.get('variable_provider', None)
    if var_provider:
        config['variable_provider'] = lambda: \
            var_provider().update({'fedora_template': fedora_template})
    else:
        config['variable_provider'] = lambda: {'fedora_template': fedora_template}
    print 'after', config.get('ignore_parameters', [])
    print 'after', config.get('variable_provider', [])

config/app_cfg.py:

base_config.call_on_startup = [enable_csrf]
When used like this, call_on_startup runs the enable_csrf() function but the modifications of tg.config['ignore_parameters'] and tg.config['variable_providers'] do not show up outside of the enable_csrf() function.
elpargo took an initial look at the function over IRC and had these comments:
right. It indeed seems like a) a bug in startup or b) we need a separate hook. take a look at tg/configuration. make_load_environment sets up the functions at setup_startup_and_shutdown the comment stays it will Register the functions however it is actually calling them. however in setup_tg_wsgi_app/make_base_app which is when things are actually created there is no call for that. to be honest I have never needed those, but this code doesn't seems right. all those setup_* functions are supposed to pull things in rather than execute them.
I have reworked my code to call my startup function in an overridden AppConfig, which may work out better for me in the long-run, but thought this should either be addressed as a code or documentation bug. I can provide more code examples if needed but I'm out of town until January 4th.
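Roughly, the workaround looks like the sketch below; enable_csrf is the library function shown above, and the exact hook to override may differ between TG2 releases:

from tg.configuration import AppConfig

class CSRFAppConfig(AppConfig):
    def setup_startup_and_shutdown(self):
        # run the normal registration, then apply our tweaks while the
        # config objects are still being assembled
        super(CSRFAppConfig, self).setup_startup_and_shutdown()
        enable_csrf()   # imported from the library described above

base_config = CSRFAppConfig()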
Attachments
Change History
comment:2 Changed 9 years ago by toshio
With current tip, ignore_parameters appears to be broken altogether.
Fresh paster quickstart with this addition to app_cfg.py:
base_config.ignore_parameters = ['yum']
Browse to the application with a yum parameter in the URL and get a traceback:
TypeError: index() got an unexpected keyword argument 'yum'
Will attach the full traceback.
Changed 9 years ago by toshio
traceback with current tip when giving a parameter specified via base_config.ignore_parameters
comment:4 Changed 9 years ago by percious
Hey, i made some considerable strides with dispatch today. If you could try again with tip, and if you are still failing please provide me with the controller method code. At the very least your code should now 404 instead of 500.
http://trac.turbogears.org/ticket/2432
Tax
Have a Tax Question? Ask a Tax Expert
My work for the year was all performed in SC - Just my residency changed during the year from NC (first 5 months) to SC. On the W2 all tax was paid to SC (which I think is correct) and when I filed a yearly return (NC+SC) splitting the income between the states per residency I received $3200 refund from SC and paid $3100 to NC. I provided all this paperwork to the SC officer (including cashed NC check) who advised i need amended W2 which is not possible based on employer feedback. My total 2006 income was approx 100k. All paperwork was filed on time (march 2007).
Hi Arthur - does the additional information above help to clarify the situation?
http://www.justanswer.com/tax/3tzpn-2006-w2-states-income-sc-year-resident.html
Using ASP.NET Code-Behind Without Visual Studio.NET
by John Peterson
Introduction
I'm sure you've probably heard (since I've done nothing but talk about it) that we recently went to San Jose to put on our very own ASP.NET Developer Conference & Expo. While we were there, we talked to a number of developers and got a good feel for what the attendees were most looking forward to in ASP.NET. One of the things that kept coming up was the ability to use code-behind to separate the display and layout of an ASP.NET page from it's code and application logic.
In Visual Studio.NET (VS.NET), this magic is all done for you. Now here's the rub... what if you don't have VS.NET? We've always maintained that while all the fancy tools are nice and might make you more productive, you don't need them to develop working solutions using ASP or ASP.NET. An email from one show attendee brought this whole topic to a head... here's an excerpt:
hey, john - i was in the San Jose developer's conference, i believe i got a chance to talk to you there a bit...i was searching through asp101 and not quite able to come up with an answer to a question that's bothering me, maybe it's so obvious no one is answering it, but i can't quite work through it alone.
i understand that there is a way to run a code-behind page inheriting from a class in a compiled DLL, yes? from what i've read visual studio .net does this 'automagically', but i can't find any explanation as to what the magic is, really. i've got VS on order, but anyway i'd like to understand how this stuff works with or without it.
To address the email and since I don't like to be proven wrong... here's how you can implement code-behind using just a plain old text editor and the tools included in the .NET Framework... no VS.NET required!
What Code-Behind Looks Like in Visual Studio.NET
Before we start, let me take this opportunity to illustrate what we're talking about. (I know I said no VS.NET, but it's just for illustration.)
I've fired off VS.NET and added a blank VB Web Form to my project (named boringly enough WebForm1.aspx). Next I add a button (named Button1) to the page. Here's what it looks like:
Figure 1
We're still basically in that same one file. When I click the button to add code to it, all of a sudden, a new file (named WebForm1.aspx.vb after our WebForm1.aspx) opens and VS.NET drops me into it in order to write the code for the button's click event.
Figure 2
Here are the code listings for the two resulting files:
Once I have VS.NET build my project and request it from a browser, whatever I type in the button click event handler (Button1_Click) in WebForm1.aspx.vb will execute when the button on WebForm1.aspx is clicked. Let's look at how this is accomplished.
Hot Wiring Our Own Code-Behind Page
Just looking at the listings above... most of this stuff has
nothing to do with the actual code-behind process. The obvious
command to investigate is
Codebehind="WebForm1.aspx.vb".
Oddly enough however this doesn't do anything in ASP.NET... it's there only
so VS.NET can find the source code! Let me illustrate.
I stripped down the above files and added a label control. I also added a command, which modifies the label, to the button click event handler (in the code-behind file) so we can tell when the two files are communicating and have some indication that the event handler is actually running. As a final step I renamed the files and classes to WebForm2 to prevent it from working because of anything VS.NET does behind the scenes... we're trying to accomplish this on our own! Here are the resulting 2 file listings:
Naturally that would be too easy... when you try and run WebForm2.aspx, you'll get an error something like this:
Parser Error Message: Could not load type 'WebForm2'.
Don't worry... all is not lost. The problem is simply that it can't find the class we define in our code-behind file. Normally VS.NET will automatically compile the .vb file into a .dll and place it in your application's /bin directory. Since we're not using VS.NET, it wasn't compiled and the application can't find the appropriate class. There are two solutions - compile it manually or tell the .aspx file where to find the .vb source file.
Compiling it manually is really pretty easy. The command will look something like this:
vbc /t:library /out:bin\WebForm2.dll /r:System.dll /r:System.Web.dll WebForm2.aspx.vb
I'm not going to go into all the compiler options, but basically we're taking WebForm2.aspx.vb and compiling it into a dll named WebForm2.dll and placing it in the application's /bin directory.
This option is the better approach if you need complete control over your compiler options or if you will be distributing the application without the .vb source files. Being the lazy type, I tend to go for option 2... check out this code listing:
Looks just like the last one doesn't it... well not quite... notice that instead
of the
Codebehind="WebForm2.aspx.vb" we now have
Src="WebForm2.aspx.vb". While
Codebehind doesn't mean
anything to ASP.NET,
Src (short for source) does and the code-behind file will
compile on the fly just like the .aspx file.
Note: Don't worry... you're not missing a file... I've got WebForm3.aspx running off the same code-behind file as WebForm2.aspx since the two .vb files would be identical anyway.
Some Final Notes
In order for .NET to find your classes, make sure your compiled files are stored in the /bin directory off the root of your application. You need to make sure you've set your directory up as an IIS application or else ASP.NET will go up the tree until it finds one and end up at the /bin directory of the root application if it doesn't find one sooner.
Those of you using VS.NET might have noticed my
Inherits
statements are short a project name. That's because VS.NET creates a
separate namespace for each project it creates. It's easy enough to do just use
the Namespace command, but that's beyond the scope of this article. I'm only
mentioning it so you don't panic when you see an inherits line that looks like this:
<%@ Page Language="vb" Inherits="ProjectName.WebForm2"%>
Wrap Up
So to sum everything up... all it really takes to do code-behind is one little inherits attribute in your page declaration line specifying the name of the class you want to inherit. If you're willing to precompile your classes into .dll files, that's where it stops. If you're lazy like me or like the "edit and run" simplicity that classic ASP gave you, add a src attribute pointed at your code-behind file and ASP.NET will compile it for you. That's really all there is to it.
If you don't want to copy and paste the code listings, you can get all 5 files in zip file format below.
Update: Separating Code-Behind and Web Form Files
A reader recently wrote:
Hi John,
I've read several of your articles, and I think you are the right person for my question.
Here is my case:
By default, code behind files (*.cs) and *.aspx files are located in the same folder. I want to separate my code behind files to a different folder. Is there any way to make it work?
If you can, please help.
Thank you in advance,
An
Well it's actually quite easy... just specify the full or relative virtual path to your code-behind file in your Web Form and you should be good to go. Continuing with the example used in the article, if you moved all your .vb files to the /codebehind folder, this script should find them:
Other Code-Behind Resources On The Web
- ASP.NET Code Behind Pages from 4GuysFromRolla
- INFO: ASP.NET Code-Behind Model Overview
- ASP.NET Unleashed Sample Chapter 6 - the section on code-behind is Separating Code from Presentation
- Reusability in ASP.NET: Code-behind Classes and Pagelets
- Developing User Controls in a Code-Behind File
- Working with Single-File Web Forms Pages in Visual Studio .NET - a little on not using code-behind in VS.NET
http://www.codeguru.com/csharp/.net/net_asp/tutorials/article.php/c19337/Using-ASPNET-CodeBehind-Without-Visual-StudioNET.htm
NAME
ares_parse_a_reply - Parse a reply to a DNS query of type A
SYNOPSIS
#include <ares.h>
int ares_parse_a_reply(const unsigned char *abuf, int alen, struct hostent **host, struct ares_addrttl *addrttls, int *naddrttls);
DESCRIPTION
The ares_parse_a_reply function parses the response to a query of type A into a struct hostent and/or an array of struct ares_addrttl records. The parameters abuf and alen give the contents of the response. The result is stored in allocated memory and a pointer to it stored into the variable pointed to by host, if host is nonnull. It is the caller's responsibility to free the resulting host structure using ares_free_hostent(3) when it is no longer needed.
If addrttls and naddrttls are both nonnull, then up to *naddrttls struct ares_addrttl records are stored in the array pointed to by addrttls, and then *naddrttls is set to the number of records so stored. Note that the memory for these records is supplied by the caller.
RETURN VALUES
ares_parse_a_reply returns ARES_SUCCESS on success, or a nonzero ares status code if the reply could not be parsed or memory was exhausted.
SEE ALSO
ares_gethostbyname(3), ares_free_hostent(3)
AUTHOR
Greg Hudson, MIT Information Systems
https://manpages.debian.org/testing/libc-ares-dev/ares_parse_a_reply.3.en.html
Hi, is there any "smart" way of checking if the current nengo .py script is being run in the GUI or not?
I've looked through the GUI source code but don't see an obvious way to do this.
I’m not sure if there’s a supported way of doing this, but an unsupported way would be to check:
__name__ == 'builtins'
I'm not sure how robust this is, but I tested it out on nengo-gui==0.4.6 and this is True if the current script is being run in the GUI, and False if being run as a python script from the command line.
I’ve also used:
from nengo.simulator import Simulator as NengoSimulator

if nengo.Simulator is not NengoSimulator:
    ...
elif __name__ == "__main__":
    ...
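If it helps, here's an unsupported helper that combines these checks (all of the names involved are implementation details, so this may break with other backends or future releases):

import nengo
from nengo.simulator import Simulator as NengoSimulator

def running_in_gui():
    # Heuristic only: GUI scripts see __name__ == 'builtins', a __page__
    # global, and a replaced nengo.Simulator.
    return (__name__ == 'builtins'
            or '__page__' in globals()
            or nengo.Simulator is not NengoSimulator)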
Here’s yet another way:
if '__page__' in locals():
Here's the part of the nengo_gui code which initializes locals when executing the code.
We set the __file__ variable here, and we also create this other variable __page__ which refers to the nengo_gui.page.Page object that's running that page.
It'd still be nicer to have a better way to do that… something like a nengo_gui.is_running_in_ui flag that could be checked… But we don't have anything like that yet.
https://forum.nengo.ai/t/check-if-code-is-running-in-gui/1433
An initial prototype of the Rivet's FormBroker is now available from
trunk/rivet/packages/formbroker.tcl
The code is inspired by Karl's original code but it goes further ahead, trying to become a form definition repository. The overall style of the package has an OOP flavor even though none of the OOP environments available for Tcl was used. It's just 'namespace ensemble' based. I will henceforth use the word 'object' meaning any instance of a form descriptor created by the FormBroker package.
Form definition objects are referenced through commands generated by the FormBroker package with the 'create' call
set fbobj [::FormBroker create \
    {var1 string bounds 10 constrain quote} \
    {var2 email} \
    {var3 integer bounds 10} \
    {var4 unsigned bounds {10 100} constrain}]

which is quite similar to the original form broker: each element in the argument list is a list in its own right in which the first and second element must be the form variable name and type. At the moment supported types are 'string', 'integer', 'unsigned', 'email'. Each of them has its own validation procedure. The supported variable types can be extended easily, but non portably: I mean that writing a validator requires explicit manipulation of the dictionary that provides a form variable internal representation. (As such it's a design flaw, at the moment.) The keyword 'constrain' means that, when possible, a value is brought within its assigned bounds. For a 'string' it means the string has to be truncated to be n characters when longer.

A form response is then checked calling

$fbobj validate response

where 'response' is the usual array of variables made by ::rivet::load_response. This method returns 'true' if the array validates. If the validation fails the method

$fbobj failing

returns a list of variable names with the validation error codes. Variables can be quoted and the quoting function can be customized (the internal function just puts a variable value between single quotes). A custom function for quoting must have the very basic form

set quoted_string [<quoting-proc> $orig_string]

The need for quotation can be variable specific (in that case 'validate' quotes in the 'response' array only the variables eligible to be quoted). Overall quoting can be forced by calling

$fbobj validate -forcequote response

There is more to say but I don't want to bother you any further. I will answer your questions with pleasure. The namespace ensemble API is open to be amended if you have some strong idea on how to redesign it. I won't set out writing the documentation any soon: I'm going to allow more time to see if the design settles down using the package in regular development (which I still have to do!). If you're interested to write a specific data type validator I will show you how to (there's an example in trunk/contrib/validate_mac.tcl which shows how to validate a mac address).

-- Massimo
P.S. I will certainly remove the 'namespace export *' line from the package in order to keep private all the methods not intended for application level programming
https://www.mail-archive.com/[email protected]/msg02405.html
I have seen many tutorials on ASP.NET, but most of them start with coding and writing your first ASP.NET program. I have written this tutorial to explain why there is a need for ASP.NET when classic ASP is working fine, what the underlying technologies behind ASP.NET are, and what programming model ASP.NET provides to programmers. Now let us get started.
ASP.NET is the new offering for Web developers from the Microsoft .It is not simply the next-generation of ASP; in fact, it is a completely re-engineered and enhanced technology that offers much, much more than traditional ASP and can increase productivity significantly.
Because it has evolved from ASP, ASP.NET looks very similar to its predecessor—but only at first sight. Some items look very familiar, and they remind us of ASP. But concepts like Web Forms, Web Services, or Server Controls gives ASP.NET the power to build real Web applications.
Microsoft Active Server Pages (ASP) is a server-side scripting technology. ASP is a technology that Microsoft created to ease the development of interactive Web applications. With ASP you can use client-side scripts as well as server-side scripts. Maybe you want to validate user input or access a database. ASP provides solutions for transaction processing and managing session state. ASP is one of the most successful languages used in web development.
There are many problems with ASP if you think of the needs of today's powerful Web applications.
ASP.NET was developed in direct response to the problems that developers had with classic ASP. Since ASP is in such wide use, however, Microsoft ensured that ASP scripts execute without modification on a machine with the .NET Framework (the ASP engine, ASP.DLL, is not modified when installing the .NET Framework). Thus, IIS can house both ASP and ASP.NET scripts on the same machine.
Here are some points that give a quick overview of ASP.NET.
ASP.NET is based on the fundamental architecture of the .NET Framework. Visual Studio provides a uniform way to combine the various features of this architecture.
The architecture is explained from bottom to top in the following discussion.
At the bottom of the architecture is the Common Language Runtime. The .NET Framework common language runtime resides on top of the operating system services. The common language runtime loads and executes code that targets the runtime. This code is therefore called managed code. The runtime gives you, for example, the ability for cross-language integration.
.NET Framework provides a rich set of class libraries. These include base classes, like networking and input/output classes, a data class library for data access, and classes for use by programming tools, such as debugging services. All of them are brought together by the Services Framework, which sits on top of the common language runtime.
ADO.NET is Microsoft's ActiveX Data Object (ADO) model for the .NET Framework. ADO.NET is not simply the migration of the popular ADO model to the managed environment but a completely new paradigm for data access and manipulation. ADO.NET is intended specifically for developing web applications. This is evident from its two major design principles. ASP.NET also ships with a set of server controls; they mirror typical HTML widgets like text boxes or buttons. If these controls do not fit your needs, you are free to create your own user controls. Web Services brings you a model to bind different applications over the Internet. This model is based on existing infrastructure and applications and is therefore standard-based, simple, and adaptable. Web Services are software solutions delivered via the Internet to any device. Today, that means Web browsers on computers, for the most part, but the device-agnostic design of .NET will eliminate this limitation.
After this short excursion with some background information on the .NET Framework, we will now focus on ASP.NET.
Web applications written with ASP.NET will consist of many files with different file name extensions. The most common are listed here. Native ASP.NET files by default have the extension .aspx (which is, of course, an extension to .asp) or .ascx. Web Services normally have the extension .asmx.
Your file names containing the business logic will depend on the language you use. So, for example, a C# file would have the extension .aspx.cs. You already learned about the configuration file Web.Config.
Another one worth mentioning is the ASP.NET application file Global.asax - in the ASP world formerly known as Global.asa. But now there is also a code behind file Global.asax.vb, for example, if the file contains Visual Basic.NET code. Global.asax is an optional file that resides in the root directory of your application, and it contains global logic for your application.
All of these files are text files, and therefore human readable and writable.
Code declaration blocks are lines of code enclosed in <script> tags. They contain the runat=server attribute, which tells ASP.NET that these controls can be accessed on the server and on the client. Optionally you can specify the language for the block. The code block itself consists of the definition of member variables and methods.
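As a quick illustration, a minimal code declaration block might look like the following sketch; the member variable and method names here are just placeholders, not part of the original article:

<script runat="server" language="C#">
    int visitCount = 0;

    void IncrementVisitCount()
    {
        // Member variables and methods declared here are available to the page.
        visitCount = visitCount + 1;
    }
</script>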
Render blocks contain inline code or inline expressions enclosed by the character sequences <% and %>. The language used inside those blocks can be specified through a directive like the one shown before.
You can declare several standard HTML elements as HTML server controls. Use the element as you are familiar with in HTML and add the attribute runat=server. This causes the HTML element to be treated as a server control. It is now programmatically accessible by using a unique ID. HTML server controls must reside within a <form> section that also has the attribute runat=server.
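For example, a plain HTML input element becomes a server control simply by adding runat=server inside a server-side form; the IDs below are invented for this sketch:

<form id="MainForm" runat="server">
    <input type="text" id="UserName" runat="server" />
    <input type="submit" id="SendButton" runat="server" value="Send" />
</form>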
There are two different kinds of custom controls. On the one hand there are the controls that ship with .NET, and on the other hand you can create your own custom controls. Using custom server controls is the best way to encapsulate common programmatic functionality.
Just specify elements as you did with HTML elements, but add a tag prefix, which is an alias for the fully qualified namespace of the control. Again you must include the runat=server attribute. If you want to get programmatic access to the control, just add an Id attribute.
You can include properties for each server control to characterize its behavior. For example, you can set the maximum length of a TextBox. Those properties might have sub properties; you know this principle from HTML. Now you have the ability to specify, for example, the size and type of the font you use (font-size and font-type).
The last attribute is dedicated to event binding. This can be used to bind the control to a specific event. If you implement your own method MyClick, this method will be executed when the corresponding button is clicked if you use the server control event binding shown in the slide.
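A hedged sketch of what that event binding looks like with a web server control; the control ID is hypothetical, and MyClick is the server-side method mentioned above:

<asp:Button id="OrderButton" runat="server" Text="Order" OnClick="MyClick" />

<script runat="server" language="C#">
    void MyClick(object sender, EventArgs e)
    {
        // Runs on the server when OrderButton is clicked.
    }
</script>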
You can create bindings between server controls and data sources. The data binding expression is enclosed by the character sequences <%# and %>. The data-binding model provided by ASP.NET is hierarchical. That means you can create bindings between server control properties and superior data sources.
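For instance, a label's Text property could be bound to a property of a page-level data source; the names below are invented for illustration, and the expression is only evaluated when DataBind() is called:

<asp:Label id="TotalLabel" runat="server" Text='<%# ShoppingCart.Total %>' />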
If you need to create an instance of an object on the server, use server-side object tags. When the page is compiled, an instance of the specified object is created. To specify the object, use the identifier attribute. You can declare (and instantiate) .NET objects using class as the identifier, and COM objects using either progid or classid.
<% @Page Language="C#" Inherits="MoviePage" Src="SimpleWebForm.cs" %>
<html>
<body background="Texture.bmp">
<TITLE>Supermegacineplexadrome!</TITLE>
<H1 align="center"><FONT color="white" size="7">Welcome to <br> Supermegacineplexadrome!</FONT></H1>
using System;
using System.Web.UI;
using System.Web.UI.WebControls;
public class MoviePage:Page
{
protected void WriteDate()
{
Response.Write(DateTime.Now.ToString());
}
protected void WriteMovies()
{
Response.Write("<P>The Glass Ghost (R) 1:05 pm, 3:25 pm, 7:00 pm</P>");
Response.Write("<P>Untamed Harmony (PG-13) 12:50 pm, 3:25 pm, " + <br> "6:55 pm</P>");
Response.Write("<P>Forever Nowhere (PG) 3:30 pm, 8:35 pm<.
Like ASP, ASP.NET encapsulates its entities within a web application. A web application is an abstract term for all the resources available within the confines of an IIS virtual directory. For example, a web application may consist of one or more ASP.NET pages, assemblies, web services configuration files, graphics, and more. In this section we explore two fundamental components of a web application, namely global application files (Global.asax) and configuration files (Web.config).
Global.asax is a file used to declare application-level events and objects. Global.asax is the ASP.NET extension of the ASP Global.asa file. Code to handle application events (such as the start and end of an application) reside in Global.asax. Such event code cannot reside in the ASP.NET page or web service code itself, since during the start or end of the application, its code has not yet been loaded (or unloaded). Global.asax is also used to declare data that is available across different application requests or across different browser sessions. This process is known as application and session state management.
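A minimal Global.asax might look like the following sketch; the event handler bodies and the HitCounter key are placeholders added for illustration:

<%@ Application Language="C#" %>

<script runat="server">
    void Application_Start(object sender, EventArgs e)
    {
        // Runs once, when the first request for the application arrives.
        Application["HitCounter"] = 0;
    }

    void Session_Start(object sender, EventArgs e)
    {
        // Runs at the start of each new browser session.
    }
</script>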
The Global.asax file must reside in the IIS virtual root. Remember that a virtual root can be thought of as the container of a web application. Events and state specified in the global file are then applied to all resources housed within the web application. If, for example, Global.asax defines a state application variable, all .aspx files within the virtual root will be able to access the variable.
Like an ASP.NET page, the Global.asax file is compiled upon the arrival of the first request for any resource in the application. The similarity continues when changes are made to the Global.asax file; ASP.NET automatically notices the changes, recompiles the file, and directs all new requests to the newest compilation. A Global.asax file is automatically created when you create a new web application project in the VS.NET IDE.
Application directives are placed at the top of the Global.asax file and provide information used to compile the global file. Three application directives are defined, namely Application, Assembly, and Import. Each directive is applied with the following syntax:
<%@ appDirective appAttribute=Value ...%>
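For example, the three directives could appear at the top of Global.asax like this; the description text and the chosen namespace/assembly are arbitrary examples, not requirements:

<%@ Application Description="Sample ASP.NET application" %>
<%@ Import Namespace="System.Data" %>
<%@ Assembly Name="System.Data" %>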
In ASP, configuration settings for an application (such as session state) are stored in the IIS metabase. There are two major disadvantages with this scheme. First, settings are not stored in a human-readable manner but in a proprietary, binary format. Second, the settings are not easily ported from one host machine to another.(It is difficult to transfer information from an IIS’s metabase or Windows Registry to another machine, even if it has the same version of Windows.)
Web.config solves both of the aforementioned issues by storing configuration information as XML. Unlike Registry or metabase entries, XML documents are human-readable and can be modified with any text editor. Second, XML files are far more portable, involving a simple file transfer to switch machines.
Unlike Global.asax, Web.config can reside in any directory, which may or may not be a virtual root. The Web.config settings are then applied to all resources accessed within that directory, as well as its subdirectories. One consequence is that an IIS instance may have many web.config files. Attributes are applied in a hierarchical fashion. In other words, the web.config file at the lowest level directory is used.
Since Web.config is based on XML, it is extensible and flexible for a wide variety of applications. It is important, however, to note that the Web.config file is optional. A default Web.config file, used by all ASP.NET application resources, can be found on the local machine at:
\%winroot%\Microsoft.Net\Framework\version\CONFIG\machine.config
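A small application-level Web.config might look like the following sketch; the key/value pair is hypothetical, while appSettings, sessionState, and customErrors are standard configuration sections:

<?xml version="1.0"?>
<configuration>
  <appSettings>
    <add key="SiteTitle" value="My ASP.NET Site" />
  </appSettings>
  <system.web>
    <sessionState mode="InProc" timeout="20" />
    <customErrors mode="RemoteOnly" />
  </system.web>
</configuration>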
ASP.NET is an evolution of Microsoft's Active Server Pages (ASP) technology. Using ASP.NET, you can rapidly develop highly advanced web applications based on the .NET Framework. ASP.NET also integrates with the Visual Studio Web Form Designer, which allows the design of web applications in an intuitive, graphical way similar to Visual Basic 6. ASP.NET ships with web controls wrapping each of the standard HTML controls, in addition to several controls specific to .NET. One such example is validation controls, which intuitively validate user input without the need for extensive client-side script.
In many respects, ASP.NET provides major improvements over ASP, and can definitely be considered a viable alternative for rapidly developing web-based applications.
I have written this tutorial to share my knowledge of ASP.NET with you. You can find more articles and software projects with free source code on my web site.
https://www.codeproject.com/Articles/4468/Beginners-Introduction-to-ASP-NET?fid=16098&df=90&mpp=10&sort=Position&spc=None&select=4139973&noise=1&prof=True&view=None
These are chat archives for ProtoDef-io/node-protodef
rak_net_worker?
def start_link(opts \\ []) do
@magic
invalid write attribute syntax, you probably meant to use: @response expression
=
(MatchError) no match of right hand side value: <<1, 0, 0, 0, 0, 0 ...
<< @id_unconnected_pong, 1 :: size(64), 1 :: size(64), @magic, @response >>
21:15:47.905 [info] Got unconnected ping! With ID of 879658, <<0, 0, 0, 0, 0, 13, 108, 42, 0, 255, 255, 0, 254, 254, 254, 254, 253, 253, 253, 253, 18, 52, 86, 120, 0, 0, 0, 0, 16, 20, 214, 159>>
21:15:47.905 [info] Sent back <<28, 0, 0, 0, 0, 0, 13, 108, 42, 0, 0, 0, 0, 0, 0, 0, 1, 0, 255, 255, 0, 254, 254, 254, 254, 253, 253, 253, 253, 18, 52, 86, 120, 77, 67, 80, 69, 59, 65, 32, 77, 105, 110, 101, 99, 114, 97, 102, 116, 58, ...>>
buffer.writeInt32BE(value[0], offset); buffer.writeInt32BE(value[1], offset + 4);
<<int::unsigned-integer-size(64)>> = <<339724::signed-integer-size(32), -6627871::signed-integer-size(32)>>).
https://gitter.im/ProtoDef-io/node-protodef/archives/2017/03/20
I am trying to port some code written against Go 1.3 to current versions and ran into a case where the JSON parsing behavior is different between versions. We are using a custom unmarshaller for parsing a specific date format. It looks like recent versions pass in the string with the surrounding quotes, which 1.3 did not.
Is this a bug or an intentional change? And what's the best way of writing code which is compatible with different versions in this situation? Should I just go looking for all places where a custom unmarshaller is in use and always strip out extra quotes, if any? It would be a pity to have to do that, so I am hoping there is a better way.
package main
import "encoding/json"
import "fmt"
import "time"
type Timestamp1 time.Time
func (t *Timestamp1) UnmarshalJSON(b []byte) (err error) {
fmt.Println("String to parse as timestamp:", string(b))
parsedTime, err := time.Parse("2006-01-02T15:04:05", string(b))
if err == nil {
*t = Timestamp1(parsedTime)
return nil
} else {
return err
}
}
type S struct {
LastUpdatedDate Timestamp1 `json:"last_updated_date,string"`
}
func main() {
s := `{"last_updated_date" : "2015-11-03T10:00:00"}`
var s1 S
err := json.Unmarshal([]byte(s), &s1)
fmt.Println(err)
fmt.Println(s1)
}
There was a bug concerning the json:",string" tag that was fixed in 1.5. If there isn't a particular reason you need it, you can remove it and simply adjust your format:

// N.B. time is in quotes.
parsedTime, err := time.Parse(`"2006-01-02T15:04:05"`, string(b))
Playground:.
This should work in 1.3 as well as 1.5.
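Another hedged option, if you want a single UnmarshalJSON that tolerates both the quoted and unquoted forms (for example while supporting multiple Go versions), is to strip any surrounding quotes before parsing. This is only a sketch based on the types from the question (with the ",string" tag option dropped, as suggested above), not the only way to do it:

package main

import (
	"encoding/json"
	"fmt"
	"strings"
	"time"
)

type Timestamp1 time.Time

func (t *Timestamp1) UnmarshalJSON(b []byte) error {
	// Trim surrounding quotes if present, so the same code works whether
	// the raw bytes arrive quoted or unquoted.
	s := strings.Trim(string(b), `"`)
	parsedTime, err := time.Parse("2006-01-02T15:04:05", s)
	if err != nil {
		return err
	}
	*t = Timestamp1(parsedTime)
	return nil
}

type S struct {
	LastUpdatedDate Timestamp1 `json:"last_updated_date"`
}

func main() {
	var s1 S
	err := json.Unmarshal([]byte(`{"last_updated_date" : "2015-11-03T10:00:00"}`), &s1)
	fmt.Println(err, time.Time(s1.LastUpdatedDate))
}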
https://codedump.io/share/kFwxuvdhfvpF/1/differences-in-parsing-json-with-a-custom-unmarshaller-between-golang-versions
I got a couple of comments in response to my previous post about "what kind of PCLs are these?" and "what's a 'Universal PCL'?" and so I thought I'd dig into that a little bit here, as there are a number of ways of re-using .NET code in Windows/Phone 8.1 that weren't all available to a Windows Phone 8.0 developer.
I’ll start with a blank, universal project which has the 2 separate Windows/Phone projects (or “heads” as you might hear Microsoft people calling them in //Build sessions) and a shared folder.
Adding a Portable Class Library
I think the key word with “portable class libraries” is the word “portable” and the question I always find myself asking is “portable across what?”. That is – I can make a portable class library to share across Windows/Phone 8.1 by doing this;
and that creates a .NET class libary (i.e. an assembly) which can be referenced by either/both of the Windows/Phone projects (usually, I think you’d be creating this library to reference from both). You can see this from the Properties page;
The important things about portable class libraries come from the documentation, which tells me that the types for a PCL;
They must be shared across the targets you selected.
They must behave similarly across those targets.
They must not be candidates for deprecation.
They must make sense in a portable environment, especially when supporting members are not portable.
That portable class library that I make above is configured for Windows 8.1 and Windows Phone 8.1. I’m not sure of the best way to do this but if I look at the Object Browser in Visual Studio, it suggests that what I’ve got available to me here as a “surface area” of common APIs includes;
So that shows that I’ve got this “.NET Core” from .NET 4.5.1 framework and it also shows that I’ve got the common pieces of WinRT that Windows 8.1 contributes;
and I’ve also got the common pieces of WinRT that Windows Phone 8.1 contributes;
and if I take some representative class like XDocument (LINQ to XML) then I can use that from that portable class library;
namespace Portable {
  public class Class1 {
    public static void DoSomething() {
      XDocument xDoc = XDocument.Parse("<foo/>");
    }
  }
}
equally, if I use some piece of WinRT that’s available to both Windows/Phone then I can do that too. For example;
namespace Portable {
  public class Class1 {
    public static void DoSomething() {
      Button button = new Button();
      button.Content = "Hello Portable XAML World";
    }
  }
}
but what I can’t do is make use of some UI control that doesn’t exist to both Windows/Phone. For example;
namespace Portable {
  public class Class1 {
    public static void DoSomething() {
      // This doesn't compile. Pivot doesn't exist on Windows 8.1.
      Pivot pivot = new Pivot();
    }
  }
}
or for an example the other way around;
namespace Portable {
  public class Class1 {
    public static void DoSomething() {
      // This doesn't compile. SearchBox doesn't exist on Windows Phone 8.1.
      SearchBox searchBox = new SearchBox();
    }
  }
}
or for a non-UI example I can use a shared API like;
namespace Portable {
  public class Class1 {
    public static async Task DoSomething() {
      StorageFolder photos = KnownFolders.PicturesLibrary;
      StorageFile file = await photos.CreateFileAsync(
        "myNewFile.txt", CreationCollisionOption.GenerateUniqueName);
    }
  }
}
but I can’t use a non-UI API that’s not available in both places;
namespace Portable {
  public class Class1 {
    public static void DoSomething() {
      // This doesn't compile. ToastNotificationManager can't do History
      // on Windows 8.1.
      ToastNotificationManager.History.Remove("burntToast");
    }
  }
}
and for one the other way around;
namespace Portable {
  public class Class1 {
    public static void DoSomething() {
      // This doesn't compile. SearchPane doesn't exist on Windows Phone 8.1.
      SearchPane searchPane = SearchPane.GetForCurrentView();
    }
  }
}
Where can I use this “Universal Portable Class Library” from? I can really only use it from 2 places – a Windows 8.1 project and a Windows Phone 8.1 project or from another class library that’s targeting the same target platforms.
If I try to reference it from (e.g.) a Windows Console Application then I’m going to get an error;
what if I tried to reference this library from a Silverlight 8.1 project? Same error. But maybe I can change that, perhaps I can go and change the target platforms for my portable library;
and then, sure enough, I can reference this library now from Silverlight 8.1, Windows 8.1, Windows Phone 8.1 but what does this mean to the API set that I can call? As you’d expect, it reduces it because we’re intersecting an intersection or subsetting a subset or whatever you want to call it.
Can I now create a Button in my portable code? No, because a Silverlight 8.1 Button is not a Windows Phone 8.1 Button.
namespace Portable {
  public class Class1 {
    public static void DoSomething() {
      // This does not compile. Silverlight's Button is not Windows/Phone's
      // Button.
      Button b = new Button();
    }
  }
}
But can I still use XDocument? You bet;
namespace Portable {
  public class Class1 {
    public static void DoSomething() {
      // This compiles fine. XDocument is available across all three targets.
      XDocument xDoc = XDocument.Parse("<foo/>");
    }
  }
}
and can I still use async and Tasks etc? Yes;
namespace Portable {
  public class Class1 {
    public static async Task DoSomethingAsync() {
      await Task.Delay(1000);
    }
  }
}
and can I still use portable WinRT APIs that are available across Windows/Phone/Silverlight8.1 ? Yes;
namespace Portable {
  public class Class1 {
    public static void DoSomething() {
      var someTileXml = TileUpdateManager.GetTemplateContent(
        TileTemplateType.TileSquare150x150Block);
    }
  }
}
and so, when I’m creating a portable class library with that option in Visual Studio;
I don’t really think it’s doing anything different than creating a portable library for “Windows Desktop”;
and then pre-selecting these 2 target platforms for you;
As an aside, while I’ve got Xamarin.Android and Xamarin.iOS on the screen unless some magic happened at //Build that I didn’t see yet (it’s always possible!) if I add those 2 platforms here then I’m going to be subsetting down my available APIs such as to remove the possibility of invoking WinRT APIs as, of course, they aren’t portable across to Android/iOS. As I’ve said before if they were then things might be very interesting indeed
but they aren’t.
What About WinRT Components?
You can write WinRT components in .NET. A similar looking dialog can be used to do that portably. Like this;
I want to point out that this is not interchangeable with making a .NET class library. A WinRT component is a different beast. I’ll try and illustrate that like this;
namespace PortableWinRT {
  public sealed class Class1 {
    public static async Task<int> DoSomethingAsync() {
      await Task.Delay(5000);
      return (42);
    }
  }
}
This code doesn’t compile. Why? Because we’re making a WinRT component. A WinRT component’s type system is not the .NET type system which makes sense because a WinRT component can be used from a JavaScript, C++ or .NET application and so tying it to the .NET type system (or the JS or C++ type system) wouldn’t make sense. It has its own type system.
Consequently, I can’t use Task<int> as a return type. Now…there are fairly simple ways around this in this instance. I can so something like this;
namespace PortableWinRT {
  public sealed class Class1 {
    public static IAsyncOperation<int> DoSomethingAsync() {
      return (InternalDoSomethingAsync().AsAsyncOperation<int>());
    }
    static async Task<int> InternalDoSomethingAsync() {
      await Task.Delay(5000);
      return (42);
    }
  }
}
but the point is more that I wouldn’t just go and make a custom WinRT component unless I had a reason to do that. For me, that generally boils down to one of;
- You want to use the component in different language environments.
- You need to write a custom WinRT component because the system needs one – a prime example for me would be writing a background task implementation which (I think) has to be a custom WinRT component.
No doubt there are other reasons to make them but those are the primary two that come to mind for me at the time of writing the post and I’d take care with doing (1).
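For reference on point (2), a minimal background task sketch looks something like this; the class and namespace names are made up, but a sealed class implementing IBackgroundTask is the shape the system expects:

using Windows.ApplicationModel.Background;

namespace MyBackgroundTasks {
  public sealed class ExampleTask : IBackgroundTask {
    public void Run(IBackgroundTaskInstance taskInstance) {
      // Take a deferral if the work done here is asynchronous.
      BackgroundTaskDeferral deferral = taskInstance.GetDeferral();

      // ... do the background work ...

      deferral.Complete();
    }
  }
}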
I hope I got that right – feel free to let me know if I’ve goofed and I’ll fix.
https://mtaulty.com/2014/04/04/m_15165/
Summary: In this article, Microsoft Scripting Guy Ed Wilson begins part 1 of a multipart WMI helper function module for Windows PowerShell.
Microsoft Scripting Guy Ed Wilson here. While I was teaching my Windows PowerShell Best Practices class in Montreal, we spent an entire day talking about Windows Management Instrumentation (WMI). While WMI seems to have a “bad reputation” in terms of complexity, consistency, and discoverability, Windows PowerShell has done much to make WMI more consistent, less complex, and definitely more discoverable. Personally, I love WMI not only because it provides ready access to reams of documentation about my computer systems, but also because it exposes many methods and writable properties that allow me to quickly and accurately configure many aspects of my systems. Ever since I began working on my Windows PowerShell Step By Step book for Microsoft Press, I have written hundreds of WMI scripts and helper functions. This week, I am going to collect together many of the helper functions into a single module that will make access to WMI information easier.
The first function I add to my WMI module is the Get-WMIClassesWithQualifiers function that I wrote for the Use a PowerShell Function to Find Specific WMI Classes post last Saturday. See that article for details on the function.
Creating a Windows PowerShell module is really easy. I open the Windows PowerShell ISE and paste my Get-WMIClassesWithQualifiers function into the script pane. This technique is shown in the following figure.
After I have pasted the first function into the module, I need to save the module so that I do not lose my work. I save the module into my scratch directory (called FSO off the root drive), giving me a place to work. After I have completed the module, I will use my Copy-Modules function to install the module into my user module location. The key thing to remember when saving a module is that the file extension must be .psm1. By default, the Windows PowerShell ISE saves files with a .ps1 extension, which is a Windows PowerShell script. A .psm1 file extension indicates a Windows PowerShell module. This technique is shown in the following figure.
I decided that I would also like to include the Get-WmiClassMethods function and the Get-WMIClassProperties functions in my WMI module. The great thing about these two functions is that they make it really easy to find methods and properties that are implemented and writable. This is really important! Because of the way that the Get-Member cmdlet works, it shows everything in WMI as read/write. And though this is technically correct (at least to a point in that I can update an object in memory), it is of very little practical value because I cannot update all WMI objects. For example, if I use the Get-Member cmdlet on the Win32_LogicalDisk WMI class, it reports that all of the properties are Get/Set. Obviously, I cannot use WMI to change the size of my disk drive, and I am not certain I want to attempt to change the amount of free space on my drive by using WMI. This is shown in the following figure.
Therefore, I need a better methodology for obtaining this information. I can use the Windows Management Instrumentation Tester (WbemTest) tool, but unfortunately, it requires me to examine every property in an individual manner to discover if a property is writable. This is not an acceptable solution when attempting to do a bit of quick scripting work.
Back in March of 2011, I wrote a series of Hey, Scripting Guy! posts where I explored writable WMI properties and implemented WMI methods. I decided to adapt those scripts to meet the need of my HSGWMIModule. The original script from March 12, 2011, is shown on the Scripting Guys Script Repository. I uploaded the modified functions to the Scripting Guys Script Repository so that you will have them if you wish to follow along.
I made the following changes to the original functions:
- It no longer checks for all WMI classes in a particular namespace. Instead, I have limited the scope to a single class.
- I made the $class parameter a mandatory parameter. You must supply a WMI class name to use the modules.
- I added comment-based help, and included examples of use.
- I changed the default computer name from “.” to $env:computername so that an actual computer name is used.
- I perform a [wmiclass] cast of the string supplied to $class so that it converts the string into a management object. This precludes the use of the Get-WmiObject cmdlet to return management objects.
- I removed the “entry” to the script, so running the script loads the functions into memory, but does not execute any searches.
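To make those changes concrete, here is a stripped-down skeleton in the spirit of the revised functions (not the actual code from the Script Repository); the function name and the WMI namespace are just placeholders:

Function Get-WmiClassMethodsSketch
{
  Param(
    [Parameter(Mandatory = $true)]
    [string]$class,

    [string]$computer = $env:computername
  )

  # Cast the class name string to a management object; no Get-WmiObject call required.
  $managementClass = [wmiclass]"\\$computer\root\cimv2:$class"
  $managementClass.Options.UseAmendedQualifiers = $true

  # The real function goes on to filter for methods carrying the Implemented qualifier.
  $managementClass.Methods | Select-Object -ExpandProperty Name
}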
So I have added the newly revised modules to my HSGWMIModule. I import the module from my scratch location, and use the Get-Command cmdlet (gcm is alias) to see the names of the functions contained in the module. Here are the commands and associated results:
PS C:\Users\edwils> Import-Module C:\fso\HSGWMImodule.psm1
PS C:\Users\edwils> gcm -Module hsg* | select name
Name
Get-WMIClassesWithQualifiers
Get-WmiClassMethods
Get-WmiClassProperties
New-Underline
The complete HSGWMIModuleV1 appears on the Scripting Guys Script Repository. You should download it, install it, and play with it.

A very useful addition to the already present WMI functions!
It brings a bit more light into the sometimes a bit obscure world of WMI classes, properties and methods. I don't expect that every scripter will love to dive deeper into the world of abstract classes, unimplemented methods, superclasses and so on.
But knowing the implemented methods and writable properties is surely most desirable!
Klaus.
https://blogs.technet.microsoft.com/heyscriptingguy/2011/10/24/a-powershell-wmi-helper-module-described/
Dear friends,
I don't know, I cannot seem to figure this out. Should I use arrays, or what? Here is my problem: the user is allowed to press the button as many times as they want, and I have to tally up the amounts.
My code is wild I know, still learning, and probably going about it the wrong way.
But it is the only way I know how to make it work,
Perhaps you can help me, please see my code:
class ButtonHandler implements ActionListener {
// moved to global scope and not block scope.
int num1, num2, num3, num4, total;
public int addMoney(){
total = num1 + num2 + num3 + num4;
return (total);
//infoMessage.setText(total);
}
public void actionPerformed(ActionEvent evt) {
if (evt.getSource() == bttnItem1) {
System.out.println("\nUser has chosen Chocolate.");
infoMessage.setText("User chosen Chocolate.");
} else if (evt.getSource() == bttnItem2) {
System.out.println("\nUser has chosen Peanuts.");
infoMessage.setText("User chosen Peanuts.");
} else if (evt.getSource() == bttnItem3) {
System.out.println("\nUser has chosen Candy.");
infoMessage.setText("User chosen Candy.");
} else if (evt.getSource() == bttnItem4) {
System.out.println("\nUser has chosen Soda Pop.");
infoMessage.setText("User chosen Soda Pop.");
} else if (evt.getSource() == bttnFive) {
System.out.println("\nUser has added 5 cents.");
moneyAmount.setText("5");
num1 = 5;
addMoney();
} else if (evt.getSource() == bttnTen) {
System.out.println("\nUser has added 10 cents.");
moneyAmount.setText("10");
num2 = 10;
addMoney();
}
}
}
I am just trying to add up the buttons clicked, how shall I go about this??
Any ideas would be greatly appreciated, as I am lost and need some advice,
professional pointers to how I should do this?
Please help?
Thanks
Brad
http://forums.devx.com/showthread.php?29138-Using-Buttons-to-count-(functions)
Given Binary Tree [3,9,20,null,null,15,7]

    3
   / \
  9  20
     / \
    15  7
return its zigzag level order traversal as:

[
  [3],
  [20,9],
  [15,7]
]
Java Solution :
import java.util.ArrayList;
import java.util.List;
import java.util.Stack;

class TreeNode {
    int val;
    TreeNode left;
    TreeNode right;
    TreeNode(int x) { val = x; }
}

public class Solution {
    public List<List<Integer>> zigzagLevelOrder(TreeNode root) {
        List<List<Integer>> result = new ArrayList<List<Integer>>();
        if (root == null) {
            return result;
        }
        Stack<TreeNode> currLevel = new Stack<TreeNode>();
        Stack<TreeNode> nextLevel = new Stack<TreeNode>();
        Stack<TreeNode> tmp;
        currLevel.push(root);
        boolean normalOrder = true;
        while (!currLevel.isEmpty()) {
            ArrayList<Integer> currLevelResult = new ArrayList<Integer>();
            while (!currLevel.isEmpty()) {
                TreeNode node = currLevel.pop();
                currLevelResult.add(node.val);
                if (normalOrder) {
                    if (node.left != null) {
                        nextLevel.push(node.left);
                    }
                    if (node.right != null) {
                        nextLevel.push(node.right);
                    }
                } else {
                    if (node.right != null) {
                        nextLevel.push(node.right);
                    }
                    if (node.left != null) {
                        nextLevel.push(node.left);
                    }
                }
            }
            result.add(currLevelResult);
            tmp = currLevel;
            currLevel = nextLevel;
            nextLevel = tmp;
            normalOrder = !normalOrder;
        }
        return result;
    }
}
Time Complexity : O(n) , Space Complexity : O(n) + O(n) = O(n)
Explanation
The problem can be solved easily using two stacks. Let us say the two stacks are currLevel and nextLevel. We would also need a variable normalOrder to keep track of the current level order (whether it is left to right or right to left). We pop from the currLevel stack and add it to the ArrayList currLevelResult. Whenever the current level order is from left to right, push the nodes left child, then its right child to stack nextLevel.Since a stack is a Last In First Out(LIFO) structure, next time when nodes are popped off nextLevel, it will be in the reverse order. On the other hand, when the current level is from right to left, we would push the nodes right child first, then its left child. Finally, don't forget to swap those two stacks at the end of each level (i.e when currLevel is empty). Please mention in the comments if you have any other optimized solution.
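As a quick usage check, a small driver like the one below (which assumes the TreeNode and Solution classes defined above are on the classpath) builds the sample tree [3,9,20,null,null,15,7] and prints the zigzag traversal:

import java.util.List;

public class ZigzagDemo {
    public static void main(String[] args) {
        TreeNode root = new TreeNode(3);
        root.left = new TreeNode(9);
        root.right = new TreeNode(20);
        root.right.left = new TreeNode(15);
        root.right.right = new TreeNode(7);

        List<List<Integer>> levels = new Solution().zigzagLevelOrder(root);
        System.out.println(levels); // prints [[3], [20, 9], [15, 7]]
    }
}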
http://javahungry.blogspot.com/2016/09/zigzag-level-order-traversal-java-leetcode.html
On 03/24/2011 08:23 AM, Hannes Reinecke wrote:
> That is the approach I've been following for SUSE.
> The UUID is assumed to be of this syntax:
>
> <type>-<identifier>

There were several discussions, I think even unofficial definition tries, but nobody documented that properly. So thanks for opening this here, it should be formalized (Alasdair?).

What I remember from discussions:

- all devices should set DM-UUID (not required still though) (btw available now in sysfs /sys/block/dm-X/dm/uuid in udev db, no need to use dm-ioctl)
- everything is supposed to set a prefix to identify subsystem (owner)
- DM-UUID can contain multiple namespaces - the first part (prefix) says which subsystem the second part belongs to (e.g. CRYPT-PLAIN-blabla - CRYPT says it was created by cryptsetup; it is up to the subsystem to handle the content, here we can parse the crypt device type from it, for example)
- you can stack it (like partition/kpartx over other subsystem) (problem of prefix separator, now we will get probably something like part1-CRYPT-LUKS-00000000...)
- kpartx uses a prefix in the format part%N-, where N is the part number (my opinion is that it is a bug, but that is up to discussion; my suggestion is to use KPARTX-part%N*) - some use lowercase here
- there is a limit of 128 characters for DM-UUID (so stacking should handle this somehow)

To my knowledge, these prefixes are in the wild:

LVM- (lvm2)
DMRAID- (dmraid)
CRYPT- (cryptsetup)
mpath- (multipath)
part%N- (kpartx)

Milan
https://www.redhat.com/archives/dm-devel/2011-March/msg00128.html
December 2018 (version 1.20.0)
1.20.0 Update
Hi!
This was quite a year. We would like to thank everyone who has visited and used us so far.
Wishing you a happy and exciting new year! Thank you.
We will keep up the hard work to provide top-notch services for JavaScript.
Please read on for the highlights of the latest release.
Release Summary
This version includes a number of updates that we hope you will find helpful.
The key highlights are:
- New rules - New rules for common pitfalls.
- Improved rules - Rules have been improved for more coverage and some false alarms are fixed.
- Extended language support - We've added support for useful language features in ESNext and Flow.
New Rules
New rules introduced in this release:
- USELESS_CATCH - Do not just rethrow the caught exception in the catch clause
Improved Rules
The following rules have been improved:
- Extend REACT_STATIC_PROPERTY_IN_INSTANCE coverage for the defaultProps and displayName properties
- Extend UNDEFINED_IMPORT coverage for namespace imports
Extended Language Support
We have added support for useful language features currently in ESNext and Flow.
- Support private instance methods and private static fields
- Support Flow inexact object type syntax
Miscellaneous
- Support for React v16.6 features like memo() and lazy()
- Disallow using _ as a Flow type identifier because it is reserved for implicit instantiation (SYNTAX_ERROR)
Bug Fixes
- A false alarm for REACT_EVENT_HANDLER_INVALID_THIS occurs when autobind decorators are used
- A false alarm for STRICT_MODE_ASSIGN_TO_READONLY_VAR occurs when ES6 import and TypeScript namespace has the same name
- A false alarm for STRICT_MODE_INVALID_THIS may occur at bind() calls
- A false alarm for SYNTAX_ERROR occurs for the optional or rest element of TypeScript tuple type
- A false alarm for SYNTAX_ERROR occurs when destructuring is used in function type
- Alarm message is reversed between getter/setter for GETTER_SETTER_RECURSION
https://deepscan.io/docs/updates/2018-12/
I know that it's possible to open/change a scene using scripts, like clicking buttons, pressing a keyboard key, or even using the mouse buttons.
But is it possible to open/change a scene in game by typing a word? For example "TEST": when the player types "TEST" on the keyboard, he is automatically moved to a new scene, like a cheat code. Is this possible? And if it is, what do I need to do to make a script that works this way?
Answer by Firedan1176 · Jul 08, 2016 at 01:45 AM
All you need is an InputField. Just add a method for when the text entry is finished, under the InputField component. You'll also want to detect when they press enter, so they can finish typing it before confirming they want to load that level. Then, all you need to do is load the level with the string that is provided.
Edit
In your scene, create an InputField. Add a C# script called LoadLevelCheatCode. In the code, type this:
using UnityEngine;
using UnityEngine.SceneManagement;
using System.Collections;
public class LoadLevelCheatCode : MonoBehaviour {
public string cheatcode = "tilt";
public void SubmitCode(string code) {
if (code.Equals(cheatcode) && SceneManager.GetSceneByName(cheatcode) != null) SceneManager.LoadScene(cheatcode);
}
}
In the script you put on the InputField, fill in the "Cheatcode" field with the level name. It must match case too. Then, on the InputField component, under the "End Edit" field, add a new entry. Drag the C# script you made into the target field. That's it! If you still need help, just leave another comment.
You can also personally contact me on Skype: Firedan1176 if you have other questions, or want a more instructive way to do this.
I found and updated a script that I found, but I can't get it to work; I think it's because of Unity 5.
Here's the question:
Are you wanting the cheatcode to be activated as soon as they type in tilt, or do they need to press 'enter' to submit it?
As soon as they type
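Since the requirement is to load the scene as soon as the code is typed (without pressing Enter), one possible sketch is to listen to the InputField's onValueChanged event instead of End Edit. The class and field names here are made up, and onValueChanged assumes a reasonably recent Unity 5.x UI version:

using UnityEngine;
using UnityEngine.SceneManagement;
using UnityEngine.UI;

public class CheatCodeWatcher : MonoBehaviour {

    public string cheatcode = "tilt";
    public InputField input;   // assign the InputField in the Inspector

    void Start() {
        // Fires on every keystroke, so the scene loads the moment the code matches.
        input.onValueChanged.AddListener(OnTextChanged);
    }

    void OnTextChanged(string text) {
        if (text == cheatcode) SceneManager.LoadScene(cheatcode);
    }
}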
https://answers.unity.com/questions/1213127/open-scene-by-typing-a-word.html
Because I'm a geek, I enjoy learning about the sometimes-subtle differences between easily-confused things. For example:
- I'm still not super-clear in my head on the differences between a hub, router and switch and how it relates to the gnomes that live inside of each.
- Hunks of minerals found in nature are rocks; as soon as you put them in a garden or build a bridge out of them, suddenly they become stones.
- When a pig hits 120 pounds, it's a hog.
I thought I might do an occasional series on easily confounded concepts in programming language design.
Here’s a question I get fairly often:
public class C
{
public static void DoIt<T>(T t)
{
ReallyDoIt(t);
}
private static void ReallyDoIt(string s)
{
System.Console.WriteLine("string");
}
private static void ReallyDoIt<T>(T t)
{
System.Console.WriteLine("everything else");
}
}
What happens when you call C.DoIt<string>? Many people expected that “string” is printed, when in fact “everything else” is always printed, no matter what T is.
The C# specification says that when you have a choice between calling ReallyDoIt<string>(string) and ReallyDoIt(string) – that is, when the choice is between two methods that have identical signatures, but one gets that signature via generic substitution – then we pick the “natural” signature over the “substituted” signature. Why don’t we do that in this case?
Because that’s not the choice that is presented. If you had said
ReallyDoIt("hello world");
then we would pick the “natural” version. But you didn’t pass something known to the compiler to be a string. You passed something known to be a T, an unconstrained type parameter, and hence it could be anything. So, the overload resolution algorithm reasons, is there a method that can always take anything? Yes, there is.
This illustrates that generics in C# are not like templates in C++. You can think of templates as a fancy-pants search-and-replace mechanism. When you say DoIt<string> in a template, the compiler conceptually searches out all uses of “T”, replaces them with “string”, and then compiles the resulting source code. Overload resolution proceeds with the substituted type arguments known, and the generated code then reflects the results of that overload resolution.
That’s not how generic types work; generic types are, well, generic. We do the overload resolution once and bake in the result. We do not change it at runtime when someone, possibly in an entirely different assembly, uses string as a type argument to the method. The IL we’ve generated for the generic type already has the method its going to call picked out. The jitter does not say “well, I happen to know that if we asked the C# compiler to execute right now with this additional information then it would have picked a different overload. Let me rewrite the generated code to ignore the code that the C# compiler originally generated...” The jitter knows nothing about the rules of C#.
Essentially, the case above is no different from this:
public class C
{
public static void DoIt(object t)
{
ReallyDoIt(t);
}
private static void ReallyDoIt(string s)
{
System.Console.WriteLine("string");
}
private static void ReallyDoIt(object t)
{
System.Console.WriteLine("everything else");
}
}
When the compiler generates the code for the call to ReallyDoIt, it picks the object version because that’s the best it can do. If someone calls this with a string, then it still goes to the object version.
Now, if you do want overload resolution to be re-executed at runtime based on the runtime types of the arguments, we can do that for you; that’s what the new “dynamic” feature does in C# 4.0. Just replace “object” with “dynamic” and when you make a call involving that object, we’ll run the overload resolution algorithm at runtime and dynamically spit code that calls the method that the compiler would have picked, had it known all the runtime types at compile time.
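For instance, a minimal sketch of that approach applied to the original example might look like the following; it is the same program as before, with the call site casting to dynamic so that the binder re-runs overload resolution at runtime and picks the string overload when T happens to be string:

public class C
{
    public static void DoIt<T>(T t)
    {
        // Overload resolution for this call is deferred until runtime.
        ReallyDoIt((dynamic)t);
    }
    private static void ReallyDoIt(string s)
    {
        System.Console.WriteLine("string");
    }
    private static void ReallyDoIt<T>(T t)
    {
        System.Console.WriteLine("everything else");
    }
}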
Ethernet used to consist of a single piece of cable trailing round the building with each computer attached to it by a short piece of cable. So only one computer could (usefully) transmit at a time.
A hub replaces the long cable. Each computer connects to the hub using twisted pair cable.
A switch is a smart hub that allows different pairs of computers to communicate at the same time.
A router routes packets between networks.
This doesn’t tell the whole story (e.g. hubs can be arranged in a tree). And yes, it has nothing to do with the point you were making.
Using dynamic in this way is how I now choose to implement the C# equivalent of partial specialization. Prior to dynamic, the only alternatives were to use extension methods (which was fragile and confusing) or implement your own dynamic dispatch using delegates.
What happens if you have a type constraint applied to T? Such as:
public static void DoIt<T>(T t) where T : String
… bad example constraining to the type string (since string is sealed it makes for a *very* specific constraint), but you get the idea.
Such a constraint is illegal. If you wanted T to be only string then why would you make it generic in the first place? — Eric
I didn’t realize that such a constraint was actually illegal. I’ve never encountered it since it doesn’t make sense.
A constraint of “object” is also illegal, for related reasons. It is possible to produce an otherwise illegal constraint on a generic method type parameter, but you have to work at it:
class B<T> { public virtual void M<U>() where U : T { } }
class D: B<string> { public override void M<U>() {} }
The constraint on U in D’s version of M<U> is sealed type string, which would not be legal in any other situation. This oddity hits some corner cases in the CLR and causes a great deal of difficulty in type analysis and code gen; I’ll blog someday about how I’ve screwed it up multiple times. — Eric
Here is a legal example:
interface IThing {}
public class C
{
public static void DoIt<T>(T t) where T: IThing
{
ReallyDoIt(t);
}
private static void ReallyDoIt(IThing t)
{
System.Console.WriteLine(“IThing”);
}
private static void ReallyDoIt<T>(T t)
{
System.Console.WriteLine(“everything else”);
}
}
Answer: IThing is printed, as should be expected.
Exactly. At compile time we know that the specific version is better than the generic version. — Eric
eric – why do these posts not appear simultaneously in your RSS feed?
A hub is an indiscreet gossip that repeats everything it hears on one port to all its other ports.
A switch is a discreet messenger which delivers anything it hears on any port only to those ports that need to know.
A router is an awkward postmaster that won't deliver anything if it isn't properly addressed.
David Morton actually had a great blog post about how it will be different in C# 4.0 using the dynamic keyword (what Eric mentioned at the end of his post). See it here:
Just a "reminder" to everyone…
dynamic will NOT come close to duplicating what C++ template do….it is something completely different [not better or worse…but very different]
C++ templates do ALL of their work at compile time. There are NO runtime decisions made. So from a performance perspective, templares and (pre-dynamic) generics have very similar performance characteristics.
When dynamic is used, all of the work gets moved into the runtime. Of course much will depend on the internals of C#/CLR and other aspects will depend on usage; but just consider what would happen (to performance) if a person writes image procesing code there each pixel gets passed to a method as a parameter that is "dynamic"…..
I am just waiting until we see an "explosion" of places where dynamic is used because it "seemed like a good idea", and the application suffers enough that my company gets called in to address "performance issues".
TheCPUWizard said: So from a performance perspective, templares and (pre-dynamic) generics have very similar performance characteristics.
This is definitely not true..
.NET manages to do spot optimization of generics only when working with type parameters which are value types, otherwise the optimizer is pretty much helpless.
And this doesn’t even begin to address the things that parameters that aren’t types enable for templates… many of which are huge perf wins.
Of course, templates being fully instantiated at compile-time, don’t provide any mechanism for extensibility. Dynamic does (at a price of course).
Oh, and for the other matter:
Hubs repeat all incoming traffic to all other ports. Bridging hubs are a little better, they work on a packet level and can queue up traffic when there’s a collision instead of discarding it (needed for say translating between segments with different speeds).
Switches look at the layer 2 (ethernet, typically) destination address to decide which port to forward a packet through. Usually the list of addresses reachable via each port is learned automatically by observing traffic. Packets where the connectivity of the destination isn’t known are flooded to all ports, like the hub.
Routers look at the layer 3 (IP, typically) destination address to decide which port to forward a packet through. The list of addresses (grouped into binary blocks) reachable via each port is managed either through manual configuration or exchange with peer routers. Packets where the connectivity of the destination isn’t known are dropped.
Hi Eric.
Great Post (as usual).
I’d really appreciate it if you could do a post on this (if you haven’t already).
—————————————————————————————
using System;
namespace OhDear
{
class Program
{
static void Main()
{
Do(() => { }, question => { });
Do(() => { throw new Exception("test"); }, question => { });
Do(() => { }, (Exception question) => { });
Do(() => { throw new Exception("test"); }, (Exception question) => { });
}
static void Do(Action action, Action<Exception> errorHandler)
{
Console.WriteLine("ONE");
}
static void Do<T>(Func<T> action, Action<T> callBack)
{
Console.WriteLine("TWO");
}
}
}
Expected output;
ONE
ONE
ONE
ONE
Actual output;
ONE
ONE
ONE
TWO
"This illustrates that generics in C# are not like templates in C++."
So was the choice to give them a syntax very much like templates in C++ done out of a desire to deliberately confuse programmers?
Something I’ve picked up in my years as a developer is to not create something, e.g. an API, that looks and feels a lot like an existing something else which is similar (call it "A"), but whose use cases and/or implementation is actually different in subtle and confusing ways. Either make it so the implementation is not different from "A" at all, or make it so that the implementation notices and squawks loudly if you try to use it as if it were an "A", or if none of those are possible make it so that it looks as different from "A" as you possibly can.
Has anyone else found this?
"So was the choice to give them a syntax very much like templates in C++ done out of a desire to deliberately confuse programmers?"
C# has other confusing areas. For instance, C# uses the C++ ~destructor syntax, but the behavior is (as we know) very different. This is very confusing for C++ programmers.
Ben Voight wrote: ."
You are 100% correct. Your example brings in other differences between C++ and C# (e.g. how arrrays are handled, differences between value and reference). Also the fact that std::vector is itself a template, and C++ can do "wonderful" things then templates are combined.
On the other hand, iy you used an example where you have (psuedo code)
class Base { vitrual void f(); }
class Leaf : Base { virtual void f(); }
And create a template <Leaf &> or generic <Leaf>, then the effects of calling "f()" will be identical between the two (both will be virtual calls)
The main point I was trying to illustrate, is that NEITHER C++ templates not C# generics make any "decisions" at execution time of a method.
Steven said:"C# has other confusing areas. For instance, C# uses the C++ ~destructor syntax, but the behavior is (as we know) very different. This is very confusing for C++ programmers."
Why not: "C++ has other confusing areas. For instance, C++ uses the C# ~destructor syntax, but the behavior is (as we know) very different. This is very confusing for C# programmers."
Over the past 37 years, I have programmed in dozens of "high level" languages, not to mention close to 50 different assembly languages. I really hate to think how convoluted things would be if each environment avoided syntactical constructs (e.g. assembly language mnemonics) that had previously been used every time there was a difference in behaviour……
‘Why not: "C++ has other confusing areas. For instance, C++ uses the C# ~destructor syntax, but the behavior is (as we know) very different. This is very confusing for C# programmers."’
Uh, because C++ was written first, and C#’s syntax was clearly based on C++, not vice-versa. C++ could not have been written differently in order to avoid confusion with C#, as C# syntax had not been invented yet. It has something to do with the linearity of time, cause and effect, and other related concepts.
Karellen, the comment you quoted had nothing to do with decisions made at the time the language was authored. It had everything to do with the experience of a person who has worked in one of the languages and sees the other for the first time.
Given recent (past 5 year) trends, I am willing to bet that there are significantly more C# programmers who have never analyzed a single line of C++ code than there are C++ programmers who have never analyzed a single line of C# code.
And I think I know the answer Eric would give to Karellen: Contrary to (still, unfortunately) popular belief, the C# language designers were not attempting to replicate the functionality of C++ or even use it as a model. They were creating a new language, period, and any resemblance to any other language is purely coincidental.
C# generics have the same syntax as Java generics, which work similarly. So what might be confusing to one class of users is perfectly natural to others. The bottom line is that the world at large does not necessarily see everything the same way you do – most of us have learned to deal with that.
Ben Voigt [C++ MVP] said: "With templates, I get efficient array iteration with no bounds checking. With generics, I get virtual calls to MoveNext and Current."
I wouldn’t assume that. If the JIT can figure out statically what the type is, it may in theory use static calls instead of virtual, and it may even do some inlining. And if it doesn’t do it today, it may do in a future version. So the architecture of generics doesn’t rule out optimisation. On the contrary, by capturing a high-level description of our intentions, it potentially has *greater* scope for optimisation, by doing it later, when the maximum amount of information is available.
(I don’t use Java much in anger, but I believe Sun has been very agressive with this kind of thing in the ‘server’ flavour of their VM.)
Re: the syntax debate, regardless of the (very different) details of how generics/templates work in Java, C# and C++, there is one thing they all have in common. They all need a way to specify a list of type parameters or arguments: <T1, T2> or <int, double> – so naturally they all use the same syntax for that basic idea. It would have been perverse, given C++ is the most widely used generic system and is their closest syntactical ancestor language, for them to use different syntaxes for a particular small piece of the puzzle that happens to be common to them all, even though the rest of it differs. I wouldn’t expect C# to invent a different symbol for addition because its operator overloading works differently. Also note that neither C# or Java uses the keyword ‘template’ to introduce a declaration, so they don’t pretend to be precisely the same as templates.
@James Miles – That’s interesting (to me, anyway). I’ve boiled it down to:
static void Main()
{
Do(() => { throw new Exception(); });
}
static void Do(Action a)
{
Console.WriteLine("Expected");
}
static void Do(Func<string> d)
{
Console.WriteLine("Unexpected (but actual)");
}
In other words, a lambda has a throw making the end unreachable, and that makes it match Func<ANYTHING> better than Action, which does seem strange.
Even more strangely, you can get your expected behaviour by sticking a ‘return;’ at the end of the lambda:
Do(() => { throw new Exception(); return; });
Though I bet Eric will tell us that there’s some other situation where we’d be amazed if it *didn’t* behave like this! 🙂
This also helps me clarify overload resolution with respect to named parameters in C#4.0
In particular, I’ve been scratching my head over this code bit :
If you realize that overload resolution happens at compile time, along with the specific overload resolution rule for this case as specified in the C# spec, it’s crystal clear. Thanks for the post!
"Even more strangely, you can get your expected behaviour by sticking a ‘return;’"
Daniel Earwicker – Yes it is strange isn’t it 😉
I think you’ve missed the core difference between a template and a generic — at least for C++ people. The parameter of a template can be anything — a number, a type, a global variable, etc. And it can apply to a free function or a class. This allows us to do computations at compile time. See boost and loki for examples of the power this allows us — inline LL parsers, simple lambda functions, and precompiled regular expressions for just a few examples. These things are not possible using generics — they require other language features. They aren’t necessarily better in all cases, but they are certainly different at a more fundamental level than pointed out in the article.
Joel makes a good point — C++ templates are themselves a pure-functional Turing-complete language. A C++ compiler can compute anything, but you cannot tell whether the C++ compiler will be able to compile a program without running it (here’s a proof:).
So while some might enjoy the flexibility of a language-within-a-language that templates offer, others dislike the unmaintainability and undecipherable error messages that accompany it. I think C# generics strike a happy medium between C++ templates and Java generics.
> I’m still not super-clear in my head on the differences between a hub, router and switch and how it relates to the gnomes that live inside of each.
You have a right to be confused, despite the (completely valid) definitions given above, they keep changing what different bits of equipment do.
So, theoretically, Hubs are layer 1 (hardware) level devices, Switches are layer 2, and Routers are layer 3. Which doesn't really account for Layer 3 Switches, which are like Routers but faster. Once upon a time, there were just Hubs and Routers. Then someone came up with Switches, which were designed to make networks more efficient by only sending data where it was needed. Then 'Switch' became a marketing term and at that point you can kiss any technical validity goodbye.
Regarding the gnomes: Hub gnomes have been lobotomised, Router gnomes are a little slow, but have huge memories. Switch gnomes are hyperactive and schizophrenic.
@Joel Redman – I think Eric does explain that essential difference, in this quote:
"When you say DoIt<string> in a template, the compiler conceptually searches out all uses of “T”, replaces them with “string”, and then compiles the resulting source code."
The great value of CLR generics is that they allow us to define a generic type or method and expose it to other languages, without those languages needing to reparse the code of the language in which the generic entity was defined. So C++/CLI, C#,VB.NET, F# and countless other languages can share the same generic libraries.
And although I think C++ compile time metaprogramming is great, like many things in C++ it is mostly great relative to the other limitations of C++. std::tr1::shared_ptr is great if you don’t have GC! And similarly, C++ doesn’t have a standard API for type reflection or code generation.
The CLR has an extremely rich and powerful model for doing reflection and code generation at runtime, and the "wizards" in the C# world frequently come up with new ways to perform "magic" using this, and (perhaps surprisingly) still maintaining static type safety and high performance. It allows them to effectively extend the language, and blur the boundary between compilation and interpretation, much as templates do for C++.
So despite C# lacking the full power of templates, it is no less of a playground for advanced library authors who enjoy generating and picking through incomprehensible error messages.
@ Daniel
He does say that, but completely misses the major implications of it. The template is fundamentally more powerful than the C# generic, but at the same time, it cannot be used cross-language. It isn't better or worse. While this may not be quite as efficient overall as the C# model, for many applications it is preferable to be predictable. Real-time apps and OSs come to mind. Shared_ptr is a compromise in this direction, and certainly has its place. Again, for some apps, better, for others worse.
That’s why we like .NET. It allows us to use the paradigms we need to get the job done.
"C# generics have the same syntax as Java generics, which work similarly." – uh? Java generics and C# generics work completely differently!
Java generics are purely a compile time illusion: there is no such thing, at runtime, as an ArrayList<String> – just an ArrayList. The compiler keeps track of the compile time types and gives you some helpful error messages if you bypass type-safety, but you can bypass the error messages – eg by casting your ArrayList<String> to an ArrayList and then to an ArrayList<Object>, and put in something other than strings – and they will be *accepted* at runtime because the runtime isn’t even aware that ArrayList was generic in the first place!
Also, Java allows types to be partially unbound, so ArrayList<?> is also a legal type which you can have instances of and perform operations on.
C# does not have any concept of List<?> at compiletime except as an argument to typeof(). Not only that, but it doesn’t have a concept of List<?> at runtime, either. Any given instance of List<T> has a *particular* T that it applies to. Attempting to cast that list to a List of a different<T> would fail. Attempting to put other kinds of object other than String into a List<String> would be impossible (because to do so you’d have to write code that would fail with a class cast exception before it even got to the Add() call).
Java and C# generics are just as different as C# generics are from C++ templates. They use the same syntax because they have more in common than different. And that’s as it should be. Sure, it’s confusing when you encounter one of the comparatively obscure things that they do differently. But that’s outweighed by the fact that you can get a general sense of the code based on what they are doing the same.
A halfway decent programmer learns a new language, initially, by learning how to map its syntax to concepts he or she is familiar with from other languages, ANYWAY. Even if C# had decided to say that generics should be expressed as List~T~, C++ programmers coming to C# would still mentally map that to List<T> to start with to be able to get the basic idea of what it’s for. And only AFTER that get a deep enough understanding to grasp the subtle differences.
"They were creating a new language, period, and any resemblance to any other language is purely coincidental."
Come on. We all know other C-style languages influenced C#. They could’ve chosen the VB syntax for generics, but they didn’t. It’s not purely coincidental, if it were C# wouldn’t look like it did.
@author:
Templates like in C++ are a bit more than fancy-pants search&replace you know 🙂
I’m just writing to ask you:
What has stopped the C#/CLR designers from actually allowing generic methods to be specialized? Given method<T>(T arg), method(string arg) and method(int arg), the compiler knows about ALL possible overloads. The easiest way would be to emit a prologue for method<T>(T) that checks "if arg is string", "if arg is int", and thus calls the appropriate 'overload' at runtime even if someone calls method((object)someString). Now programmers must do that by hand all the time if they want any specialization-like features :/
IMHO it is not the C# compiler but rather the CLR that should perform such a lookup and choose the tightest match at runtime; given all the other languages on the platform, I can understand the option taken there. But why not in C#? Rarely do you have more than 1-6 specializations, so it is not that big a performance hit at all.
@ComeOn
Agreed with that. Its very name and symbol claim that. An early marketing joke was that C# is actually C++++ (two rows of ++). The evolution of the language shows the heavy impact of changes made to the independently evolving C++ and Java. Heh, I even remember that when C# was born, it was claimed that it was being created in such a way that existing C++ and Java programmers would not notice much difference and could start coding in C# right away with little fuss. Maybe it's still written somewhere on Microsoft's pages?
@Joel Redman – this is now off topic and probably too late for you to read it, but I gotta take issue with this:
."
I hear/read that a lot, and I don’t agree. When you use shared_ptr (or any kind of ref-counting smart ptr) you’re doing so because you *don’t* know locally whether it is time to destroy the object. Therefore when the program exits the scope of the shared_ptr, you *don’t* know for sure that the object has been destroyed. So in practise there’s nothing deterministic about it. It may happen sooner than it would with real GC, but it’s no more deterministic in reality. If you want to know for sure that an object is definitely destroyed when you exit a specific scope, then use a normal variable or auto_ptr. Don’t use shared_ptr.
C++/CLI is a great resource for clarifying the distinction and integration point between RAII and GC, because it quite beautifully combines the two in a way that makes me rather envious. I wish C# could write Dispose methods for me! So far it only has the ‘using’ statement to make it easy to consume disposable objects, but nothing much to assist with implementing them.
whoa. where do you see sensible RAII in C#? in using/Dispose? the 'dispose' thing is a patch for the nonexistence of destructors, and 'using' is a patch for the lack of deterministic cleanup, and it does no more than call Dispose anyway.. I lately heard a lot from other programmers about using-this, using-that, use-using-don't-call-dispose-manually; they really think that using() does some cleanup and frees memory.. please, finally stop publishing this rubbish, people really might start believing it
https://blogs.msdn.microsoft.com/ericlippert/2009/07/30/whats-the-difference-part-one-generics-are-not-templates/?replytocom=7326
(Time=0212 ET) no AOB items.
[] (11.10 + 5) postponed.
[] (11.15 + 10) (Time=0217) Misc. notes on action items.
2002/09/04: DavidF (done) Contact implementers regarding updates to their implementation of table 2 of the implementation table. DavidF has posted all received, made corrections. Some reports still expected. BEA will provide a report by end of day Thu 10/24, but DavidO will not be at F2F to discuss.
2002/10/16: Henrik Send email to xmlp-comment to close issue 384; rationale is that we stick with status quo and don't want to introduce the notion of a gateway in the SOAP 1.2 spec. Henrik posted a comment to the xmlp list to discuss an editorial issue related to this. There are two defns of gateway. Henrik proposes to delete the defn in section 2. No opinions expressed otherwise. Pending
(Time=0224)
-- Primer
Nilo: has all LC comments included except i218 (remove text for processing intermediaries from Part 1 to Primer.) Nilo waiting for WG to resolve issue 394 first. See Primer change log.
-- Spec
A snapshot has been published for F2F. It is up to date, see IRC for editor todo list URL
-- Test Collection: status of update based on LC issue resolutions
Anish: (regrets)
DF: A snapshot for the f2f was published late Monday. I assume it is up to date
-- Attachment Feature
nothing to report
-- LC Issue List
Carine says it is up to date.
-- Responses to new charter, especially Call for Responses and IPR statements.
DF: we are now operating under our new charter. There has been a new (AC) call for participation. We have received some responses ....
Yves: 6 replies. Only 2 companies did not reply, and I have sent private email to their AC reps. I hope to have resolution by F2F.
DF: F2F agenda item on 3rd day to discuss IPR, and CR->PR. RayW will not be at F2F, but he has same IPR concerns as before (issue 327).
-- Implementation tracking []. We need a volunteer to peruse the list of features in Table 2 and (a) identify any features that are no longer valid due to LC decisions, and (b) generate new features due to LC decisions. This work should be completed by the f2f so that we can more accurately identify the status of our implementation efforts.
DF: I have updated the list of features and their implementations. It is looking good, there are only a small number of features without 2 or more implementations, and we have yet to add BEA. I have updated/moved some of the features but we need a more careful job done to assess accuracy of the feature list. Is there a volunteer to do this?
No one volunteers.
DF: The work needs to be done by the F2F so we can decide at F2F on PR vs CR, etc. We can only skip CR if there is evidence of interop implementation of each feature. Need accurate list to make decisions.
DF randomly selects a WG member to pursue the feature list: Mike Champion.
-- Media type draft, IANA application
DF: we will discuss at F2F, not now
-- Oct f2f [], questions?
Yves: XMLPS code for bridge dial-in to F2F
JI: is the registration list up to date? (yes)
(Time=0239)
-- AF issues without proposed resolutions. WG members will be assigned to create resolutions for each of these in time for next week's f2f meeting
o 385, add a conformance clause. From QA working group. AI: Camilo
o 388, explicitly reference equivalence rules for URIs. From QA WG. 2 choices: RFC2119 or w3c namespace spec. 3rd option: AF spec. Cross into implementation? AI: Marc Hadley
o 390, use of URIs for referencing secondary parts. e.g., URL in SOAP with http:... but cid: in MIME attachment. May be implementation issue. AI: Carine
o 391, how to dereference IDREFs and URIs. two parts: 1. Hugo asks for clarification from deref from soap encoding, 2. editorial term "attachment" if secondary part not with envelope. AI: Nilo.
o 392, intermediaries and secondary parts. From Hugo re intermediaries and secondary parts, can intermediaries remove attachments, etc? AI: Jean-Jacques
o 393, specify attachment implementation; this question will be discussed at the f2f meeting. Hugo/WSAWG wants concrete AF spec. This is on F2F agenda. Need proposal to discuss at F2F. E.g. Add subsection to AF doc? One or multiple implementations? AI: Mark Jones
MJ: One proposal, or range?
DF: Up to Mark to propose. Need to frame discussion. Size level of effort.
MJ: Keep within charter duration, or feel free to go beyond Dec 2002?
Yves: concrete AF does not need to go out with SOAP 1.2 specs.
DO: WSAWG feels it would be good to keep XMLP WG intact to work on this - it has the right people, and avoids overhead of new WG creation. WSAWG would probably support XMLP charter time extension to finish concrete.
NM: Against making work on concrete AF spec if it delays progress to CR/PR.
DO: No one wants to slow down CR/PR
MC: WSAWG never discussed delaying SOAP 1.2. Try to send to public list before F2F.
-- The following issues are listed as "editorial". What is their status, and do we need proposals to resolve them? (0252)
o 235, missing assertion. AI: Henrik to alert Anish to do the edit
o 237, assertion 70 missing a bullet. test collection. cut and paste problem. AI: Henrik to alert Anish to do the edit. AI: Carine - mark issue list as test collection for 235, 237
o 248, use same labelling placement in spec and primer. Nilo: we should decline doing this. DF: Any objections to Nilo's suggestion? No objections. DF: issue 248 so closed. AI: Nilo send to xmlp comment and originator
o 262, clarify when whitespace is ignored in primer. Nilo: not really a primer issue. Should be directed at main spec. Gudge to rework proposal. AI: Henrik alert Gudge that re-write needs to handle 262 also.
o 309, commentator suggests SOAP uses URIs rather than QNames; the Chair. DF: I suggest closing this issue as a duplicate of 277. No objections. DF: issue closed as a dupe of 277. AI: DF to send to xmlp comment and orig.
o 357, request to change 2 faultnames, "DataEncodingUnknown" -> "SOAPEncodingUnknown" and "Misunderstood" -> "NotUnderstood". DF: editors have discussed this editorial issue and closed it already. They decided it is a generic fault and so keep DataEncodingUnknown, and they decided to change to NotUnderstood. Anyone object to this resolution? No objections.
-- Discussion of pushback to any issues we have closed (Time=0301)
No such issues reported.
-- Potential new issues. For each potential new issue, we will first decide whether or not to accept it as an issue (based on severity, "8-ball" impact, time to resolve, etc). (Time=0302). Since last week's telcon discussion on this subject, there has been a lot of email discussion.
An artefact of that discussion is a table that summarises the implications of each role, see: the discussants have found the table to be useful. Using this table as a starting point, I suggest we first attempt to summarise what are the discussants' positions so we can identify their differences and hence the questions we need to answer.
NM: I am not a strong advocate for any of the options, and I think we have near agreement on tech characteristics. The sender/receiver contracts implied by some proposed designs may be questionable.
Discussion with regard to the role=relay solution. The WG chose a two level mechanism (1. role, 2. mu) and this solution puts both levels in the role.
Discussion with regard to a solution involving addition of an attribute that overrides the default. It would work, but adds a lot of work to put it in spec.
HFN: an override attribute may not solve all use cases
JJ: proposal collapses two different concepts, targetting and forwarding; if forwarding required, must use generic role name ("relay"), can't use application defined role (e.g. "cacheManager")
JK: agrees with JJ, new role also does not solve all scenarios. only a new attrib solves all use cases
HFN: went with relay role due to 8-ball question. It possibly solves all questions. It would be nice to have reading from 8-ball about what would be affected.
????: interaction with previous info items. look for hidden impacts.
Yves (aka 8-ball): adding new role is more orthogonal, less interaction. Adding new attrib would have more interaction
NM: does not agree that one is less/more orthogonal than another.
JJ: both proposals (role, new attribute) require changes to spec; amount of changes roughly the same
DF: question of clarification regarding new role vs new attrib, are both solutions completely fleshed out? Is there proposed text for each?
HFN: Yes, they are both fairly close to being fleshed out.
HFN: mail list appears to show more people liking the addition of an attribute. We should use the table from JJ/Gudge to make it clear. I want to avoid another Last Call.
NM: Making the default the opposite seems more natural, but not at this late date.
DF: to summarise options, Opt 1: new relay role. Opt 2: new attrib, perhaps called relayIfNotProcessed or theNewAttribute (TNA)
MH: how about another option where we change mU to new "disposition" attribute with values = mu | relay | ??
JK: relay has no meaning when mu=true
NM: such a proposal would change the crisp semantic of mu; does not ...
AI: Marc to write up proposal
DF takes a non-binding vote (approx counts)
- replace mustUnderstand with new attrib with three values: 2
- new relay role: 2
- new attrib: 5
- status quo: 1
DF: Clearly more in favor of new attribute option.
AI: JJ, create proposal for new attribute (TNA).
AI: Carine, generate new issue
JK: How will we handle implementation of this new feature?
DF: We will need to add a new feature to the table and show it is implemented (Time=0329)
Reject because too late in cycle
DO: reject because wsawg is taking on usage scenarios
AI: JI to send to xmlp comment and Hal
-- Spec issues with proposals (Time=0333)
Request from editors to re-instate text that was deleted when we changed spec after resolving issue 220. No objections from WG to re-instating the text. (Time=0335)
o 389, per last week's telcon, Jacek has proposed text for the spec, see. There has been a little discussion of this, see.
Regarding Herve's response to JK's proposal, JK agrees with one point (optional lexical value), but disagrees with Herve's proposal for "1 or more" (outbound edges when representing an array) because an empty array is still an array. JK suggests to add "optional" to his proposal.
DF: any objection to adopting JK's proposal, with "optional"? No objections. DF: Issue 389 so closed. AI: JK to send to xmlp comment ...
o 300 (and 359). Glen has proposed text, see.
MH: final sentence of para 1 in proposal is wrong. SOAP 1.1 behavior. MH, HFN: the text needs tuning up.
DF: but the gist is correct, yes?
MH: it should refer to cases of other media types
HFN: it is a useful starting point
DF: we will not close this issue now, but we'll tune-up the text and discuss at F2F
AI: Henrik to tune up Glen's text by deleting last sentence of 1st para, and ???
o 355, CIIs in SOAP infoset. The issue originator asks whether a SOAP infoset may contain Comment Info Items. At the last f2f we gave Gudge a broad mandate to create a proposal, which he has done []. This proposal should also clarify editorial issue 262 regarding whitespace significance as written in the Primer. Noah has described [] 3 changes to the proposal (disallow intermediaries to remove HEADERs, include references to addt'l rules, editorial). At last week's telcon we asked Gudge to renew his proposal in order to clarify what is novel and what is responding directly to the issues. We are awaiting the renewed proposal.
DF: we will postpone discussion of this because we are awaiting text from Gudge
o 277, general comments. Proposal from Herve [] regarding use of QNames. Proposal from Herve [] regarding use of namespaces. DF notes that Herve is not on call.
MH: thread on list had too many branches, we need crisp summary
HFN: do you want to make it prettier, or is something broken? Are you concerned about processing overhead?
DF confirmed with MH that the thread is too fragmented, and needs work to find main points
JJ: Herve will be at F2F, but on vacation now.
AI: JJ to inform Herve to be ready to discuss at F2F
o 294, MEC motivation? Marc proposes to add a MEC definition in Part 2, sec 5.1.2, and to modify the HTTP binding to address MEC, see.
MH: msg exch context (MEC), there is no formal definition, vague. There is no def of how to initialize and pass between nodes. I agree on no formal def. We should add MEC to glossary (used only in part 2, so don't add to glossary in Part 1). Instead, create subsections in Part 2. 6.2.3 and 6.3.3 talk about initializing abstractly, HTTP binding does not specify MEP
????: Need to specify how binding knows which MEP is to be used
HFN: Web method property?
MH: Orthogonal to web method (see thread with MB, NM)
NM: Thread not settled. MH on right track. HTTP binding needs to address it.
NM: MEP must be dealt with by binding. MEC okay to be adjunct.
No objection to closing with MH's proposal, using HTTP Method. Agree with spirit, but do not close until spec text is reviewed.
AI: MH, send spec text. (MH will work it on Mon 28 Oct.)
o 367, 368, 369, various QA concerns. JohnI has laid out a number of discussion points [ and ], there is also a proposal and some discussion.
JI: 368 and 369 combined, what is meant by conformance. circular def. DF proposal. NM objections. inbound messages. too narrow to talk about receivers. initial sender responsibilities. proceed cautiously. traffic light scenario. MEPs may influence conformance
DF: did not push back on QA group
HFN: traditionally, do all MUST.
Qualify "if" you are e.g., initial sender. AI: DF to re-write (368, 369), also clarify which part of text addresses which issue. Adjourn. Time: 0409
http://www.w3.org/2000/xp/Group/2/10/23-minutes.html
#include <line_int.h>
Inheritance diagram for line_int::
The linear interpolation wavelet uses a predict phase that "predicts" that an odd element in the data set will lie on the line between its two even neighbors.
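As a rough illustration (not taken from this class's source, and written in JavaScript rather than C++ purely for readability), the predict step can be sketched as follows; the function name and the example data are made up:
// Predict each odd sample as the average of its two even neighbours and
// keep only the prediction error (the detail coefficient).
function linearPredict(data) {
  const out = data.slice();
  for (let i = 1; i < data.length - 1; i += 2) {
    const predicted = (data[i - 1] + data[i + 1]) / 2;
    out[i] = data[i] - predicted;
  }
  return out;
}
// Samples that already lie on a straight line produce zero detail coefficients:
console.log(linearPredict([1, 2, 3, 4, 5])); // [1, 0, 3, 0, 5]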
This is an integer version of the linear interpolation wavelet. It is interesting to note that unlike the S transform (the integer version of the Haar wavelet) or the TS transform (an integer version of the CDF(3,1) transform) this algorithm does not preserve the mean. That is, when the transform is calculated, the first element of the result array will not be the mean.
Definition at line 60 of file line_int.h.
http://www.bearcave.com/misl/misl_tech/wavelets/packet/doc/classline__int.html
Write wide-character formatted output to a buffer (varargs)
#include <wchar.h> #include <stdarg.h> int vswprintf( wchar_t * buf, size_t n, const wchar_t * format, va_list arg );
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
The vswprintf() function formats data under control of the format control string, and writes the result to buf.
The vswprintf() function is the wide-character version of vsprintf(), and is a varargs version of swprintf().
The number of wide characters written, excluding the terminating NUL, or a negative number if an error occurred (errno is set).
It's safe to call vswprintf() in a signal handler if the data isn't floating point.
http://www.qnx.com/developers/docs/7.0.0/com.qnx.doc.neutrino.lib_ref/topic/v/vswprintf.html
What's the best way of checking if an object property in JavaScript is undefined?
Detecting an undefined object property
Use:
if (typeof something === "undefined") { alert("something is undefined"); }
If you have an object variable with some properties, you can use the same thing like this:
if (typeof my_obj.someproperties === "undefined"){ console.log('the property is not available...'); // print into console }
I believe there are a number of incorrect answers to this topic. Contrary to common belief, "undefined" is not a keyword in JavaScript and can in fact have a value assigned to it.
Correct Code
The most robust way to perform this test is:
if (typeof myVar === "undefined")
This will always return the correct result, and even handles the situation where
myVar is not declared.
Degenerate code. DO NOT USE.
var undefined = false; // Shockingly, this is completely legal! if (myVar === undefined) { alert("You have been misled. Run away!"); }
Additionally,
myVar === undefined will raise an error in the situation where myVar is undeclared.
Despite being vehemently recommended by many other answers here,
typeof is a bad choice. It should never be used for checking whether variables have the value
undefined, because it acts as a combined check for the value
undefined and for whether a variable exists. In the vast majority of cases, you know when a variable exists, and
typeof will just introduce the potential for a silent failure if you make a typo in the variable name or in the string literal
'undefined'.
var snapshot = …; if (typeof snaposhot === 'undefined') { // ^ // misspelled¹ – this will never run, but it won’t throw an error! }
var foo = …; if (typeof foo === 'undefned') { // ^ // misspelled – this will never run, but it won’t throw an error! }
So unless you’re doing feature detection², where there’s uncertainty whether a given name will be in scope (like checking
typeof module !== 'undefined' as a step in code specific to a CommonJS environment),
typeof is a harmful choice when used on a variable, and the correct option is to compare the value directly:
var foo = …; if (foo === undefined) { ? }
Some common misconceptions about this include:
that reading an “uninitialized” variable (
var foo) or parameter (
function bar(foo) { … }, called as
bar()) will fail. This is simply not true – variables without explicit initialization and parameters that weren’t given values always become
undefined, and are always in scope.
that
undefinedcan be overwritten. There’s a lot more to this.
undefinedis not a keyword in JavaScript. Instead, it’s a property on the global object with the Undefined value. However, since ES5, this property has been read-only and non-configurable. No modern browser will allow the
undefinedproperty to be changed, and as of 2017 this has been the case for a long time. Lack of strict mode doesn’t affect
undefined’s behaviour either – it just makes statements like
undefined = 5do nothing instead of throwing. Since it isn’t a keyword, though, you can declare variables with the name
undefined, and those variables could be changed, making this once-common pattern:
(function (undefined) { // … })()
more dangerous than using the global
undefined. If you have to be ES3-compatible, replace
undefinedwith
void 0– don’t resort to
typeof. (
voidhas always been a unary operator that evaluates to the Undefined value for any operand.)
With how variables work out of the way, it’s time to address the actual question: object properties. There is no reason to ever use
typeof for object properties. The earlier exception regarding feature detection doesn’t apply here –
typeof only has special behaviour on variables, and expressions that reference object properties are not variables.
This:
if (typeof foo.bar === 'undefined') { ? }
is always exactly equivalent to this³:
if (foo.bar === undefined) { ? }
and taking into account the advice above, to avoid confusing readers as to why you’re using
typeof, because it makes the most sense to use
=== to check for equality, because it could be refactored to checking a variable’s value later, and because it just plain looks better, you should always use
=== undefined³ here as well.
Something else to consider when it comes to object properties is whether you really want to check for
undefined at all. A given property name can be absent on an object (producing the value
undefined when read), present on the object itself with the value
undefined, present on the object’s prototype with the value
undefined, or present on either of those with a non-
undefined value.
'key' in obj will tell you whether a key is anywhere on an object’s prototype chain, and
Object.prototype.hasOwnProperty.call(obj, 'key') will tell you whether it’s directly on the object. I won’t go into detail in this answer about prototypes and using objects as string-keyed maps, though, because it’s mostly intended to counter all the bad advice in other answers irrespective of the possible interpretations of the original question. Read up on object prototypes on MDN for more!
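As a quick illustration of that difference (the object and property names here are invented for the example):
var proto = { inherited: 1 };
var obj = Object.create(proto);
obj.own = undefined; // present directly on obj, but with the value undefined
console.log('inherited' in obj); // true - found via the prototype chain
console.log(Object.prototype.hasOwnProperty.call(obj, 'inherited')); // false
console.log('own' in obj); // true
console.log(Object.prototype.hasOwnProperty.call(obj, 'own')); // true
console.log(obj.own === undefined); // true
console.log(obj.missing === undefined); // true - reading alone can't tell "absent" from "set to undefined"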
¹ unusual choice of example variable name? this is real dead code from the NoScript extension for Firefox.
² don’t assume that not knowing what’s in scope is okay in general, though. bonus vulnerability caused by abuse of dynamic scope: Project Zero 1225
³ once again assuming an ES5+ environment and that
undefined refers to the
undefined property of the global object. substitute
void 0 otherwise.
In JavaScript there is null and there is undefined. They have different meanings.
- undefined means that the variable value has not been defined; it is not known what the value is.
- null means that the variable value is defined and set to null (has no value).
What does this mean: "undefined object property"?
Actually it can mean two quite different things! First, it can mean a property that has never been defined in the object and, second, it can mean a property that has an undefined value. Let's look at an example:
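(The object and property names below are placeholders; any object behaves the same way.)
var obj = { prop: undefined }; // 'prop' is defined with the value undefined; 'nosuch' was never defined
console.log(typeof obj.prop == 'undefined'); // true
console.log(typeof obj.nosuch == 'undefined'); // true
console.log(obj.prop === undefined); // true
console.log(obj.nosuch === undefined); // true
console.log('prop' in obj); // true
console.log('nosuch' in obj); // false - only the in check separates the two situations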
We can clearly see that
typeof obj.prop == 'undefined' and
obj.prop === undefined are equivalent, and they do not distinguish those different situations. And
'prop' in obj can detect the situation when a property hasn't been defined at all and doesn't pay attention to the property value which may be undefined.
So what to do?
1) You want to know if a property is undefined by either the first or second meaning (the most typical situation).
obj.prop === undefined // IMHO, see "final fight" below
2) You want to just know if object has some property and don't care about its value.
'prop' in obj
Notes:
- You can't check an object and its property at the same time. For example, this x.a === undefined or this typeof x.a == 'undefined' raises ReferenceError: x is not defined if x is not defined.
- Variable undefined is a global variable (so actually it is window.undefined in browsers). It has been supported since ECMAScript 1st Edition and since ECMAScript 5 it is read only. So in modern browsers it can't be redefined to true as many authors love to frighten us with, but this is still true for older browsers.
Final fight:
obj.prop === undefined vs
typeof obj.prop == 'undefined'
Pluses of
obj.prop === undefined:
- It's a bit shorter and looks a bit prettier
- The JavaScript engine will give you an error if you have misspelled
undefined
Minuses of
obj.prop === undefined:
undefined can be overridden in old browsers
Pluses of
typeof obj.prop == 'undefined':
- It is really universal! It works in new and old browsers.
Minuses of
typeof obj.prop == 'undefined':
'undefned'(misspelled) here is just a string constant, so the JavaScript engine can't help you if you have misspelled it like I just did.
Update (for server-side JavaScript):
Node.js supports the global variable
undefined as
global.undefined (it can also be used without the 'global' prefix). I don't know about other implementations of server-side JavaScript.
The issue boils down to three cases (a small helper sketch follows the list):
- The object has the property and its value is not
undefined.
- The object has the property and its value is
undefined.
- The object does not have the property. ( typeof( something ) == "undefined")
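A small helper that tells these three cases apart might look like this (a sketch; the function name and message strings are invented):
function describeProperty(obj, key) {
  if (!(key in obj)) return 'no such property'; // note: in also reports inherited properties
  if (obj[key] === undefined) return 'property exists, value is undefined';
  return 'property exists, value is defined';
}
console.log(describeProperty({ a: 1 }, 'a')); // property exists, value is defined
console.log(describeProperty({ a: undefined }, 'a')); // property exists, value is undefined
console.log(describeProperty({}, 'a')); // no such property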
This worked for me while the others didn't.
If you do
if (myvar == undefined ) { alert('var does not exists or is not initialized'); }
it will fail when the variable myvar has never been declared, even though (for globals) myvar is the same as window.myvar or window['myvar']
To avoid errors when testing whether a global variable exists, you had better use:
if(window.myvar == undefined ) { alert('var does not exists or is not initialized'); }
The question whether a variable really exists doesn't matter; what matters is whether its value is correct. Otherwise, it is silly to initialize variables with undefined, and it is better to use the value false to initialize. When you know that all variables that you declare are initialized with false, you can simply check for that value in the next tests.
It is always better to use the instance/object of the variable to check if it got a valid value. It is more stable and is a better way of programming.
In the article Exploring the Abyss of Null and Undefined in JavaScript I read that frameworks like Underscore.js use this function:
function isUndefined(obj){ return obj === void 0; }
Simply put, anything that is not defined in JavaScript is undefined; it doesn't matter whether it's a property inside an Object/Array or just a simple variable...
JavaScript has
typeof which make it very easy to detect an undefined variable.
Simply check if
typeof whatever === 'undefined' and it will return a boolean.
That's how the famous function
isUndefined() in AngularJs v.1x is written:
function isUndefined(value) {return typeof value === 'undefined';}
So as you see the function receive a value, if that value is defined, it will return
false, otherwise for undefined values, return
true.
So let's have a look what gonna be the results when we passing values, including object properties like below, this is the list of variables we have:
var stackoverflow = {}; stackoverflow.javascipt = 'javascript'; var today; var self = this; var num = 8; var list = [1, 2, 3, 4, 5]; var y = null;
and we check them as below, you can see the results in front of them as a comment:
isUndefined(stackoverflow); //false isUndefined(stackoverflow.javascipt); //false isUndefined(today); //true isUndefined(self); //false isUndefined(num); //false isUndefined(list); //false isUndefined(y); //false isUndefined(stackoverflow.java); //true isUndefined(stackoverflow.php); //true isUndefined(stackoverflow && stackoverflow.css); //true
As you see, we can check anything using something like this in our code. As mentioned, you can simply use typeof in your code, but if you are using it over and over, create a function like the Angular sample which I shared and keep reusing it, following the DRY code pattern.
One more thing: when checking a property on an object in a real application, if you are not sure whether the object even exists, check that the object exists first.
If you check a property on an object and the object doesn't exist, it will throw an error and stop the whole application from running.
isUndefined(x.css); VM808:2 Uncaught ReferenceError: x is not defined(…)
So simple you can wrap inside an if statement like below:
if(typeof x !== 'undefined') { //do something }
Which also equal to isDefined in Angular 1.x...
function isDefined(value) {return typeof value !== 'undefined';}
Other JavaScript frameworks like Underscore have a similar check, but I recommend you use typeof if you are not already using any frameworks.
I also add this section from MDN which has got useful information about typeof, undefined and void(0).
Strict equality and undefined
You can use undefined with the strict equality and inequality operators to determine whether a variable has a value. Note that if the variable x has not been declared at all, comparing it with === raises a ReferenceError (in contrast to `typeof`).
'if (window.x) { }' is error safe
Most likely you want
if (window.x). This check is safe even if x hasn't been declared (
var x;) - browser doesn't throw an error.
Example: I want to know if my browser supports History API
if (window.history) { history.call_some_function(); }
How this works:
window is an object which holds all global variables as its members, and it is legal to try to access a non-existing member. If x hasn't been declared or hasn't been set then
window.x returns undefined. undefined leads to false when if() evaluates it.
"propertyName" in obj //-> true | false
Reading through this, I'm amazed I didn't see this. I have found multiple algorithms that would work for this.
Never defined
if(!("prop" in obj)) console.log("The property was Never Defined");
Defined as undefined Or never Defined
If you want it to result as
true for values defined with the value of
undefined, or never defined, you can simply use
=== undefined
if(obj.prop === undefined) console.log("The value is defined as undefined, or never defined");
Defined as a falsy value, undefined,null, or never defined.
Commonly, people have asked me for an algorithm to figure out if a value is either falsy,
undefined, or
null. The following works.
if(obj.prop == false || obj.prop === null || obj.prop === undefined) { console.log("The value is falsy, null, or undefined"); }
Compare with
void 0, for terseness.
if (foo !== void 0)
It's not as verbose as
if (typeof foo !== 'undefined')
The solution is incorrect. In JavaScript,
null == undefined
will return true, because the loose == operator treats null and undefined as equal to each other. The correct way would be to check
if (something === undefined)
which is the identity operator...
You can get an array all undefined with path using the.
There is a nice and elegant way to assign a defined property to a new variable, or a default value as a fallback if it is undefined (a sketch of that follows the next snippet). If you want to check whether a value is undefined or null:
//Just in JavaScript var s; // Undefined if (typeof s == "undefined" || s === null){ alert('either it is undefined or value is null') }
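And the fallback assignment mentioned above can be sketched like this (user and 'n/a' are placeholders chosen for the example):
var user = { name: 'Ann' }; // example object; 'phone' was never defined
var phone = user.phone || 'n/a'; // falls back on any falsy value ('', 0, null, undefined)
var strictPhone = (user.phone !== undefined) ? user.phone : 'n/a'; // falls back only when undefined
console.log(phone, strictPhone); // 'n/a' 'n/a'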
If you are using jQuery Library then
jQuery.isEmptyObject() will suffice for both cases.
If you are using Angular:
angular.isUndefined(obj) angular.isUndefined(obj.prop)
Underscore.js:
_.isUndefined(obj) _.isUndefined(obj.prop).
I provide three ways here for those who expect weird answers:
function isUndefined1(val) { try { val.a; } catch (e) { return /undefined/.test(e.message); } return false; } function isUndefined2(val) { return !val && val+'' === 'undefined'; } function isUndefined3(val) { const defaultVal={}; return ((input=defaultVal)=>input===defaultVal)(val); } function test(func){ console.group(`test start :`+func.name); console.log(func(undefined)); console.log(func(null)); console.log(func(1)); console.log(func("1")); console.log(func(0)); console.log(func({})); console.log(func(function () { })); console.groupEnd(); } test(isUndefined1); test(isUndefined2); test(isUndefined3);
isUndefined1:
Try to get a property of the input value, and check the error message if it throws. If the input value is undefined, the error message would be Uncaught TypeError: Cannot read property 'a' of undefined
isUndefined2:
Convert the input value to a string to compare with "undefined", and ensure the value is falsy.
isUndefined3:
In js, optional parameter works when the input value is exactly
undefined.
function isUnset(inp) { return (typeof inp === 'undefined') }
Returns false if the variable is set, and true if it is undefined.
Then use:
if (isUnset(myVar)) { // initialize the variable here }
you can also use Proxy, it will work with nested calls, but will require one extra check:
function resolveUnknownProps(obj, resolveKey) { const handler = { get(target, key) { if ( target[key] !== null && typeof target[key] === 'object' ) { return resolveUnknownProps(target[key], resolveKey); } else if (!target[key]) { return resolveUnknownProps({ [resolveKey]: true }, resolveKey); } return target[key]; }, }; return new Proxy(obj, handler); } const user = {}; console.log(resolveUnknownProps(user, 'isUndefined').personalInfo.name.something.else); // { isUndefined: true }
so you will use it like:
const { isUndefined } = resolveUnknownProps(user, 'isUndefined').personalInfo.name.something.else; if (!isUndefined) { // do someting }
From lodash.js.
var undefined; function isUndefined(value) { return value === undefined; }
It creates a LOCAL variable named
undefined which is initialized with the default value -- the real
undefined, then compares
value with the variable
undefined.
Update 9/9/2019
I found lodash updated its implementation. See my issue and the code.
To be bullet-proof, simply use:
function isUndefined(value) { return value === void 0; }
ECMAScript 2020 introduced a new feature, optional chaining, which you can use to access a property of an object only when the object is defined, like this:
const userPhone = user?.contactDetails?.phone;
It will read the phone property only when user and contactDetails are not null or undefined; otherwise the whole expression evaluates to undefined.
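Combined with a default value, a sketch (user here is a made-up object) could look like this; the ?? operator, also added in ECMAScript 2020, only falls back for null or undefined:
const user = { contactDetails: null };
const phone = user?.contactDetails?.phone; // undefined - no TypeError is thrown
const display = user?.contactDetails?.phone ?? 'unknown'; // 'unknown'
if (phone === undefined) {
  console.log('no phone number available');
}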
https://xjavascript.com/view/5158/detecting-an-undefined-object-property
Here are introduced mainly for Ophone, of course, a combination of android's source code (there are android2.2 the following source code.) Ophone is also first in class by android.hardware.Camera to control the camera equipment, to use only by androi
Posted: September 10, 2010 – 10:59 am | Author: benn | Filed under: Mobile Security This is a bit of a follow up to our previous post, but we thought it would be interesting to dissect the source code of two of the recent Android root attacks: the ud
Android PC-side shots sometimes we just need the source code, screenshots, no need to even open with DDMS, so stripping the screenshot of the code, of course, is not native, ah, is written under the principle of their own, for your reference the firs
This is mainly directed against android2.2 the first, starting from the manifest.xml. Can see: <Activity android: name = "com.android.ginwave.camera.Camera" android: configChanges = "orientation | keyboardHidden" android: theme = &q
1. Download android's source code, step reference: Since I was ubuntu, all in accordance with the configuration of ubuntu: A download code, reference Downlo
In the out / target / product / generic image file generated under the three: ramdisk.img, system.img, userdata.img and their corresponding tree root, system, data. ramdisk.img is the root file system, system.img including the main package, libraries
When you debug Android programs encounter "source not found" error should be very crazy bar, Goolge, when the release of SDK did not really go in the source code contains a bit confusing for many people, Git is undoubtedly a jerky thing, but fra
android's source code is very large Official Website The following passage To clone one of these trees, install git, and Run: git clone git://android.git.kernel.org/ + project path. To clone the entire Platform, install
1. Install cygwin, because the installation time special for a long time, so to choose the local download (supports breakpoints, that is, reinstall). So downloaded and then run setup.exe, select the local installation. About 5hour + 2.cygwin gain roo
Download the appropriate version of the android's source code on the appropriate version of Mine is D: \ android-sdk-windows-1.6_r1 \ platforms \ android-1.6 \ sources (Direct use of someone else's map of the ...) Then, right in your project selectio
First, the source code for Android Git is a linux Torvalds (Linux father) to help manage Linux kernel development and the development of an open source distributed version control software, which is different from Subversion, CVS such a centralized v
When you debug the program encountered Android "source not found" error should be very crazy, right, Goolge the release of SDK contains the source code did not really get a little confusing, no doubt to many people, Git is a jerky thing, and add
To be able to view the eclipse source code of Android SDK, we can do the following methods: 1, according to preferences of each version of the android sdk download the source code 1.5_R3:
Of: lizongbo Posted: 01:59. Saturday, January 1st, 2011 Copyright : free to reprint, reprint, sure to articles marked hyperlink original source and author information and this copyright notice. In the Go
1. Download android's source code, step reference: Since I was ubuntu, all in accordance with the configuration of ubuntu: A download code, reference Downlo
In the Android development process, often encounter a lot of hidden top-level programmers API can not be used, this time we can consider the Android source code under a "system-level" development, the so-called "system-level development&quo
[Reserved] 1. Download Android source code of the process is not said, a lot of online information 2. The steps to install jdk1.5 not say, a lot of online information 3. Download the source code I put in the directory is / home / wuyutaott / android
provided by google Android Android includes the original target machine code, the host compiler tools, simulation environment, through the decompression code package, the first-level directories and files as follows: | - Makefile (global Makefile) |
Using eclipse + ADT as an android development tool, it can be said is very convenient, in HelloActivity applet where we feel the eclipse feature rich and powerful. So, we can use eclipse to develop android source code do? If we directly import the an li
<classpathentry kind="src" path="packages/apps/AlarmClock/src"/> <classpathentry kind="src" path="out/target/common/obj/APPS/Launcher2_intermediates/src"/> <classpathentry kind="src" path=&q
Using Cygwin on Windows, to obtain the source code for Android 1 in preparation for Cygwin environment, which should curl, wget, python and other basic tools. 2, ready to store source code directory (eg: c: \ myeclair), into the Cygwin Shell environm
Made using Cygwin in Windows, Android source code 1, ready to Cygwin environment, which should curl, wget, python and other basic tools. 2, ready source storage directory (eg: c: \ myeclair), into the Cygwin Shell environment, execute the following c
android mobile development package download the source code case
Android IPC communication mechanism for source code analysis Binder Communications Description: Linux system, inter-process communication methods are: socket, named pipe, message queque, signal, share memory. Java System inter-process communication w
First download cygwin, cygwin is a linux platform for a class. Environment that is simulated in the windows linux terminal. Compared to the virtual machine to run linux, is a lightweight solution. In addition to this source code to download android,
Must first download a GIT in the windows can be PortableGit . With git can manually download the android source of input to a certain part, but from the following address get all you can download the sou
1. Download Msysgit, msysgit Google for the Windows environment is the development of Git client 2. Msysgit software installed, all the way next, skip this ... 3. Create a new directory will be used to store the down
Article from: collection of about Use Git to download Google Android source code Preparation of resources Android Resources Distribution Description: # TOC-External-projects An
Broncho A1 does not support WIFI base station and positioning, Android old version and there is NetworkLocationProvider, it realized the WIFI base station and positioning, but after the android 1.5 was removed. Originally wanted to achieve in the bro
Android with eclipse + ADT as a development tool, can be very convenient, in HelloActivity applet in eclipse features we feel great. So, we can use eclipse to develop android source it? If we direct the android source code in a project into eclipse,
This transferred from William Hua's Blog , the original in this . ---------------- Git is Linux Torvalds developed to help manage Linux kernel developed an open source distributed version control software, which is different from Subversion, CVS such
Android within the system provides a good text to read and write txt class, but there is no publicly available to the standard SDK, FileUtils class source code below, can operate well under Linux text file. public class FileUtils ( public static fina
Development, debugging Android procedures sometimes need to see when the source code of android sdk, Goolge did not put in the released SDK contains the source code into the event, rather perplexing to many people, Git is a jerky thing no doubt, the
In Ubuntu 9.04 compile Android Source] [reserved Source: blog.douhua.im of: douhua Brush the official version has been prepared to look at their compiled Brush. First, download, Android source code is hosted in the Linux Kernel source site, so versio
How to compile Android source code, what is the information from the network to find, after their successful certification to obtain and compile the source. 1, need to install some additional packages, in Ubuntu (I use the system) can be used under t
How to compile Android source code, what is the information from the network to find, through their source code verification to obtain and compile successfully. 1, the need to install some additional packages, in Ubuntu (I use the system) can be used
git source code for Android: 1. git clone git://android.git.kernel.org/ + project path. 2. Mkdir mydroid cd mydroid repo init-u git: / / android.git.kernel.org / platform / manifest.git repo sync git clone for the larger source, it can not HTTP, more
Android is finally the courage to look at the source code, OK ~ according to the steps given in the Internet: Steps are: 1, first download the SDK source code. android-1.5 download address the following
Such as the title, the previous 1.5 version has already been collated source code, but some people say that you want to reply to 1.6. Just tried it today, all versions of source code, running well. So simply compiled a download link and installation
Download the source code to find more than 2.0 SDK installation directory D: \ android-sdk-windows2.1 \ Locate the directory structure is as follows: D: \ android-sdk-windows2.1 \ -Platforms + Android-2 + Android-3 + Android-.... + Android-8 (figures
1. Install Cygwin details, see Annex 2.PATH = "$ PATH": / dir to set environment variables can not add a space before and after = number Install cygwin, "android. The first is the question how to modify. In the / frameworks / base / Android.mk, locate the following line: packages_to_document: = In the final assignment of the variable to add xxxxx (This is the
Android source code now requires Java Development Environment version 1.5. At the time when the development environment to build, did not notice the java version of the requirement to install the 1.6 version of java. Results compiled a java version w
Such as the title, has been finishing off the previous 1.5 version of the source code, but some people replied that want 1.6. Just tried it today, all versions of source code, running well. So simply compiled a download link and installation method.
How to use the TextView, including source code. TextView android is the most commonly used, but used to swing with people who may try to new a Textview,, but android have their own 'rules' for view management. /** * */ publi
Android, how to view the eclipse of the version of Android source code? See: Details
Steps are: 1. First download the SDK source code. (I'm under here, but do not know the full SDK source code, ) 2. To extract the source files, and in your SDK installation directory inside the android-sdk-windo
http://www.codeweblog.com/stag/android-adb-source-code/
$ cnpm install @jbboehr/acorn-jsx
This is plugin for Acorn - a tiny, fast JavaScript parser, written completely in JavaScript.
It was created as an experimental alternative, faster React.js JSX parser.
According to benchmarks, Acorn-JSX is 2x faster than official Esprima-based parser when location tracking is turned on in both (call it "source maps enabled mode"). At the same time, it consumes all the ES6+JSX syntax that can be consumed by Esprima-FB (this is proved by official tests).
UPDATE [14-Apr-2015]: Facebook implementation started deprecation process in favor of Acorn + Acorn-JSX + Babel for parsing and transpiling JSX syntax.
Please note that this tool only parses source code to JSX AST, which is useful for various language tools and services. If you want to transpile your code to regular ES5-compliant JavaScript with source map, check out the babel transpiler which uses
acorn-jsx under the hood.
You can use module directly in order to get Acorn instance with plugin installed:
var acorn = require('acorn-jsx');
Or you can use
inject.js for injecting plugin into your own version of Acorn like following:
var acorn = require('acorn-jsx/inject')(require('./custom-acorn'));
Then, use
plugins option whenever you need to support JSX while parsing:
var ast = acorn.parse(code, { plugins: { jsx: true } });
Note that official spec doesn't support mix of XML namespaces and object-style access in tag names (#27) like in
<namespace:Object.Property />, so it was deprecated in
[email protected]. If you still want to opt-in to support of such constructions, you can pass the following option:
var ast = acorn.parse(code, { plugins: { jsx: { allowNamespacedObjects: true } } });
Also, since most apps use pure React transformer, a new option was introduced that allows to prohibit namespaces completely:
var ast = acorn.parse(code, { plugins: { jsx: { allowNamespaces: false } } });
Note that by default
allowNamespaces is enabled for spec compliancy.
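As a quick sanity check of the API shown above, parsing a small JSX snippet and inspecting the top of the AST might look like this (the printed node type names are what the plugin is expected to produce, not something documented on this page):
var acorn = require('acorn-jsx');
var ast = acorn.parse('<div className="greeting">Hello</div>;', {
  plugins: { jsx: true }
});
console.log(ast.body[0].type); // 'ExpressionStatement'
console.log(ast.body[0].expression.type); // 'JSXElement'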
This plugin is issued under the MIT license.
https://npm.taobao.org/package/@jbboehr/acorn-jsx
On Wed, Jul 25, 2001 at 06:38:58PM -0400, Stephen Frost wrote: > * Ben Collins ([email protected]) wrote: > > On Wed, Jul 25, 2001 at 09:38:13PM +0200, Guido Guenther wrote: > > > Hi, > > > the attached patch removes some previous workarounds by adding proper > > > functions to compiler.h for mips(el). This obsoletes the disabling of > > > the ati driver on mips(351) and the int10 module(350) on mipsel as well > > > as the mem_barrier patch(353). I've also attached the newport range fix > > > for completeness - both are against pre5. So these should be the only > > > patches needed to get XFree86 going on mips/mipsel for now. > > > -- Guido > > > > Are the below asm functions going to affect other architectures? I don't > > see them wrapped in any sort of arch specific defines. > > ---- > +#endif /* !linux */ > #endif /* __mips__ */ > ---- Ok, smack me with the blind stick. Sorry. Ben -- -----------=======-=-======-=========-----------=====------------=-=------ / Ben Collins -- ...on that fantastic voyage... -- Debian GNU/Linux \ ` [email protected] -- [email protected] -- [email protected] ' `---=========------=======-------------=-=-----=-===-======-------=--=---'
https://lists.debian.org/debian-x/2001/07/msg00107.html
reggae 0.5.0
A build system in D
To use this package, put the following dependency into your project's dependencies section:
Reggae
A build system written in the D programming language. This is alpha software, only tested on Linux and likely to have breaking changes made.
Features
- Write readable build descriptions in D, Python, Ruby, JavaScript or Lua
- Out-of-tree builds
- Backends for GNU make, ninja, tup and a custom binary executable.
- User-defined variables like CMake in order to choose features before compile-time
- Low-level DAG build descriptions + high-level convenience rules to build C, C++ and D
- Automatic header/module dependency detection for C, C++ and D
- Automatically runs itself if the build description changes
- Rules for using dub build targets in your own build decription - use dub with ninja, add to the dub description, ...
Not all features are available on all backends. Executable D code commands (as opposed to shell commands) are only supported by the binary backend, and due to tup's nature dub support and a few other features can't be supported. When using the tup backend, simple is better.
The recommended backends are ninja and binary.
Usage
Reggae is actually a meta build system and works similarly to CMake or Premake. Those systems require writing configuration files in their own proprietary languages. The configuration files for Reggae are written in D, Python, Ruby, JavaScript or Lua
From a build directory (usually not the same as the source one), type
reggae -b <ninja|make|tup|binary> </path/to/project>. This will create
the actual build system depending on the backend chosen, for
Ninja,
GNU Make,
tup, or a runnable
executable, respectively. The project path passed must either:
Dub projects with no
reggaefile.d will have one generated for them in the build directory.
How to write build configurations
The best examples can be found in the features directory; they illustrate the low-level primitives. To build D apps with no external dependencies, this will suffice and is similar to using rdmd:
import reggae; alias app = scriptlike!(App(SourceFileName("src/main.d"), BinaryFileName("app")));
There is detailed documentation in markdown format.
For C and C++, the main high-level rules to use are
targetsFromSourceFiles and
link, but of course they can also be hand-assembled from
Target structs. Here is an
example C++ build:
import reggae;
alias objs = objectFiles!(Sources!(["."]), // a list of directories
                          Flags("-g -O0"),
                          IncludePaths(["inc1", "inc2"]));
alias app = link!(ExeName("app"), objs);
Sources can also be used like so:
Sources!(Dirs([/*directories to look for sources*/]), Files([/*list of extra files to add*/]), Filter!(a => a != "foo.d")); //get rid of unwanted files
objectFiles isn't specific to C++, it'll create object file targets
for all supported languages (currently C, C++ and!(".5.0 released 4 years ago
http://code.dlang.org/packages/reggae/0.5.0
I was writing a little .NET app in C# and was looking for a small library that would let me read and write zip files. That's when my first Google search gave me a link to Using the Zip Classes in the J# Class Libraries to Compress Files and Data with C#. Great! - that was my first reaction, but it didn't last too long. You can read zip files alright, but when you write zip files, if you have binary files (as you most often will), the generated zip file will be corrupt and you can't read it back. Funnily, WinZip does open it (ignoring the header errors), and the suggested solution by a Microsoftie in one of the MS newsgroups was to extract all the files using WinZip and then zip them back up. LOL. Why would anyone want to use the zipping libraries at all if they had the option of bringing WinZip into the picture?

There is not even a KB article on this bug - bad! I lost about half a day when I was using these classes because I initially thought something was wrong with my own code. Anyway, this is a warning to everyone: do not use the J# class libraries for zipping/unzipping if you are using the BCL 1.1 version. Heath Stewart (C# MVP and CodeProject editor) has informed me that the BCL 2.0 includes zipping functionality - so that's really good news. Until that comes out, a very good free alternative to the J# classes is SharpZipLib, which I very strongly recommend (see the sketch after the bug demo below).

The following small program demonstrates the bug:
using System;
using System.IO;
using java.util.zip;

class MainClass
{
    [STAThread]
    static void Main(string[] args)
    {
        BugDemo();
    }

    static void BugDemo()
    {
        // Write a zip file with a single DEFLATED binary entry
        ZipOutputStream os = new ZipOutputStream(
            new java.io.FileOutputStream("test.zip"));
        ZipEntry ze = new ZipEntry("gluk.gif");
        ze.setMethod(ZipEntry.DEFLATED);
        os.putNextEntry(ze);

        // Copy the source file into the entry, 1 KB at a time
        java.io.FileInputStream fs =
            new java.io.FileInputStream("gluk.gif");
        sbyte[] buff = new sbyte[1024];
        int n = 0;
        while ((n = fs.read(buff, 0, buff.Length)) > 0)
        {
            os.write(buff, 0, n);
        }
        fs.close();
        os.closeEntry();
        os.close();
        Console.WriteLine("Okay, the zip has been written.");

        // Read the zip file back - the bug shows up here
        try
        {
            ZipFile zf = new ZipFile("test.zip");
        }
        catch
        {
            Console.WriteLine("Bug confirmed!");
        }
    }
}
Now make sure you have a gluk.gif in the same folder as the executable.
Compile and run.
There is no real workaround to the problem. You can use the alternative library I mentioned above or wait for Whidbey.
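To give an idea of what the alternative looks like, here is a rough sketch of writing the same archive with SharpZipLib. It targets the classic ZipOutputStream/PutNextEntry API of that era; the exact method names and the compression level chosen are from memory and may differ slightly in the version you pick up.

using System;
using System.IO;
using ICSharpCode.SharpZipLib.Zip;

class SharpZipDemo
{
    static void Main()
    {
        // Create the archive and add one entry for the source file
        using (ZipOutputStream zipOut = new ZipOutputStream(File.Create("test.zip")))
        {
            zipOut.SetLevel(6); // 0 = store, 9 = best compression

            ZipEntry entry = new ZipEntry("gluk.gif");
            zipOut.PutNextEntry(entry);

            // Copy the file into the entry, 1 KB at a time
            using (FileStream source = File.OpenRead("gluk.gif"))
            {
                byte[] buffer = new byte[1024];
                int read;
                while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
                {
                    zipOut.Write(buffer, 0, read);
                }
            }

            zipOut.Finish();
        }

        Console.WriteLine("Zip written with SharpZipLib.");
    }
}

Unlike the J# classes, the resulting archive opens cleanly in WinZip and can be read back by the same library.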
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
From: Shankar Adhi [MSFT] ([email protected])
Subject: Re: java.util.zip problem
Newsgroups: microsoft.public.dotnet.vjsharp
Date: 2003-05-09 06:40:56 PST

This is a known issue in J# class library. It will be fixed possibly in the next release of J#.

Work Around: Decompress your zip file and again compress it using winzip, the new zip file should work. Thanks for using J#.

ShankarA
| https://www.codeproject.com/Articles/6989/Bug-when-using-the-java-util-zip-classes-to-write?fid=44526&df=10000&mpp=50&sort=Position&spc=Relaxed&select=817619&tid=817596 | CC-MAIN-2017-30 | refinedweb | 547 | 75.1 |
A well-hidden typo:
xslns:xsl=""
^^^^^
Your fingers have obviously learned to type "xsl" without engaging brain...
Michael Kay
> -----Original Message-----
> From: Dan Chandler [mailto:daniel.chandler@xxxxxxxxx]
> Sent: 22 March 2005 19:53
> To: xsl-list@xxxxxxxxxxxxxxxxxxxxxx
> Subject: [xsl] undeclared namespace error
>
> Hi all,
>
> I am working on a .NET application that displays an XML file. I have
> an XSLT file that is supposed to format the XML file into a table.
> The following is the xslt file:
>
> <?xml version="1.0" encoding="UTF-8" ?>
> <xsl:stylesheet xslns:xsl="...">
>   <xsl:template match="...">
>     <xsl:for-each select="...">
>       <TR>
>         ...
>       </TR>
>     </xsl:for-each>
>   </xsl:template>
> </stylesheet>
>
> I am betting I have several problems in there, but the one I am stuck
> on right now is that when I try to run the application I get the error:
>
> "'xsl' is an undeclared namespace. Line 2, position 2."
>
> I've looked online and it seems like this error has something to do
> with the stylesheet declaration line, but I can't quite figure out
> what I am doing wrong. Any help is appreciated. Thanx.
>
> Dan
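For reference, the declaration the parser expects uses the xmlns attribute rather than xslns, roughly like this (the match value below is a placeholder, since the original attribute values were not preserved in the archive):

<?xml version="1.0" encoding="UTF-8" ?>
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/">
    <!-- build the table rows here -->
  </xsl:template>
</xsl:stylesheet>

With xmlns:xsl declared, the xsl prefix is bound to the XSLT namespace and the "'xsl' is an undeclared namespace" error goes away; note also that the closing tag must be </xsl:stylesheet>, not </stylesheet>.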
| http://www.oxygenxml.com/archives/xsl-list/200503/msg01000.html | CC-MAIN-2013-20 | refinedweb | 186 | 69.72 |