The iterator protocol

The abstract base class Iterator, in the collections.abc module, defines the iterator protocol in Python. As mentioned, it must have a __next__ method that the for loop (and other features that support iteration) can call to get a new element from the sequence. In addition, every iterator must also fulfill the Iterable interface. Any class that provides an __iter__ method is iterable; that method must return an Iterator instance that will cover all the elements in that class. Since an iterator is already looping over elements, its __iter__ function traditionally returns itself.

This might sound a bit confusing, so have a look at the following example, but note that this is a very verbose way to solve this problem. It clearly explains iteration and the two protocols in question, but we'll be looking at several more readable ways to get this effect later in this chapter:

```python
class CapitalIterable:
    def __init__(self, string):
        self.string = string

    def __iter__(self):
        return CapitalIterator(self.string)


class CapitalIterator:
    def __init__(self, string):
        self.words = [w.capitalize() for w in string.split()]
        self.index = 0

    def __next__(self):
        if self.index == len(self.words):
            raise StopIteration()

        word = self.words[self.index]
        self.index += 1
        return word

    def __iter__(self):
        return self
```
This example defines a CapitalIterable class whose job is to loop over each of the words in a string and output them with the first letter capitalized. Most of the work of that iterable is passed off to the CapitalIterator implementation. The canonical way to interact with this iterator is as follows:

```python
>>> iterable = CapitalIterable('the quick brown fox jumps over the lazy dog')
>>> iterator = iter(iterable)
>>> while True:
...     try:
...         print(next(iterator))
...     except StopIteration:
...         break
...
The
Quick
Brown
Fox
Jumps
Over
The
Lazy
Dog
```

This example first constructs an iterable and retrieves an iterator from it. The distinction may need explanation. The iterable is an object with elements that can be looped over. Normally, these elements can be looped over multiple times, maybe even at the same time or in overlapping code. The iterator, on the other hand, represents a specific location in that iterable; some of the items have been consumed and some have not. Two different iterators might be at different places in the list of words, but any one iterator can mark only one place.

Each time next() is called on the iterator, it returns another token from the iterable, in order. Eventually, the iterator will be exhausted (won't have any more elements to return), in which case StopIteration is raised, and we break out of the loop.
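The claim that two iterators can mark different places in the same iterable is easy to verify; here is a minimal sketch reusing the CapitalIterable class defined above:

```python
iterable = CapitalIterable('the quick brown fox')
first = iter(iterable)   # one iterator...
second = iter(iterable)  # ...and an independent one over the same iterable

print(next(first))   # The
print(next(first))   # Quick
print(next(second))  # The  (the second iterator starts from the beginning)
print(next(first))   # Brown
```

Each call to iter() asks the iterable's __iter__ method for a fresh CapitalIterator, so the two iterators advance independently.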
Of course, we already know a much simpler syntax for constructing an iterator from an iterable:

```python
>>> for i in iterable:
...     print(i)
...
The
Quick
Brown
Fox
Jumps
Over
The
Lazy
Dog
```

As you can see, the for statement, in spite of not looking terribly object-oriented, is actually a shortcut to some obviously object-oriented design principles. Keep this in mind as we discuss comprehensions, as they, too, appear to be the polar opposite of an object-oriented tool. Yet, they use the exact same iteration protocol as for loops and are just another kind of shortcut.

Comprehensions

Comprehensions are simple, but powerful, syntaxes that allow us to transform or filter an iterable object in as little as one line of code. The resultant object can be a perfectly normal list, set, or dictionary, or it can be a generator expression that can be efficiently consumed in one go.

List comprehensions

List comprehensions are one of the most powerful tools in Python, so people tend to think of them as advanced. They're not. Indeed, I've taken the liberty of littering previous examples with comprehensions and assuming you'd understand them. While it's true that advanced programmers use comprehensions a lot, it's not because they're advanced; it's because they're trivial, and handle some of the most common operations in software development.
Let's have a look at one of those common operations; namely, converting a list of items into a list of related items. Specifically, let's assume we just read a list of strings from a file, and now we want to convert it to a list of integers. We know every item in the list is an integer, and we want to do some activity (say, calculate an average) on those numbers. Here's one simple way to approach it:

```python
input_strings = ['1', '5', '28', '131', '3']

output_integers = []
for num in input_strings:
    output_integers.append(int(num))
```

This works fine and it's only three lines of code. If you aren't used to comprehensions, you may not even think it looks ugly! Now, look at the same code using a list comprehension:

```python
input_strings = ['1', '5', '28', '131', '3']
output_integers = [int(num) for num in input_strings]
```

We're down to one line and, importantly for performance, we've dropped an append method call for each item in the list. Overall, it's pretty easy to tell what's going on, even if you're not used to comprehension syntax.

The square brackets indicate, as always, that we're creating a list. Inside this list is a for loop that iterates over each item in the input sequence. The only thing that may be confusing is what's happening between the list's opening brace and the start of the for loop. Whatever happens here is applied to each of the items in the input list. The item in question is referenced by the num variable from the loop. So, it's converting each individual element to an int data type.

That's all there is to a basic list comprehension. They are not so advanced after all. Comprehensions are highly optimized code; list comprehensions are far faster than for loops when looping over a huge number of items. If readability alone isn't a convincing reason to use them as much as possible, speed should be.

Converting one list of items into a related list isn't the only thing we can do with a list comprehension. We can also choose to exclude certain values by adding an if statement inside the comprehension. Have a look:

```python
output_ints = [int(n) for n in input_strings if len(n) < 3]
```
I shortened the name of the variable from num to n and the result variable to output_ints so it would still fit on one line. Other than this, all that's different between this example and the previous one is the if len(n) < 3 part. This extra code excludes any strings with more than two characters. The if statement is applied before the int function, so it's testing the length of a string. Since our input strings are all integers at heart, it excludes any number over 99.

Now that is all there is to list comprehensions! We use them to map input values to output values, applying a filter along the way to include or exclude any values that meet a specific condition.

Any iterable can be the input to a list comprehension; anything we can wrap in a for loop can also be placed inside a comprehension. For example, text files are iterable; each call to __next__ on the file's iterator will return one line of the file. We could load a tab-delimited file where the first line is a header row into a dictionary using the zip function:

```python
import sys

filename = sys.argv[1]

with open(filename) as file:
    header = file.readline().strip().split('\t')
    contacts = [
        dict(
            zip(header, line.strip().split('\t'))
        ) for line in file
    ]

for contact in contacts:
    print("email: {email} -- {last}, {first}".format(**contact))
```

This time, I've added some whitespace to make it somewhat more readable (list comprehensions don't have to fit on one line). This example creates a list of dictionaries from the zipped header and split lines for each line in the file.

Er, what? Don't worry if that code or explanation doesn't make sense; it's a bit confusing. One list comprehension is doing a pile of work here, and the code is hard to understand, read, and, ultimately, maintain. This example shows that list comprehensions aren't always the best solution; most programmers would agree that a for loop would be more readable than this version.
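To back up that claim, here is one possible for loop version of the same logic (my own sketch, not from the original text); it is longer, but each step is obvious:

```python
import sys

filename = sys.argv[1]

contacts = []
with open(filename) as file:
    header = file.readline().strip().split('\t')
    for line in file:
        # pair each header field with the matching column in this row
        contacts.append(dict(zip(header, line.strip().split('\t'))))

for contact in contacts:
    print("email: {email} -- {last}, {first}".format(**contact))
```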
Remember: the tools we are provided with should not be abused! Always pick the right tool for the job, which is always to write maintainable code.

Set and dictionary comprehensions

Comprehensions aren't restricted to lists. We can use a similar syntax with braces to create sets and dictionaries as well. Let's start with sets. One way to create a set is to wrap a list comprehension in the set() constructor, which converts it to a set. But why waste memory on an intermediate list that gets discarded, when we can create a set directly?

Here's an example that uses a named tuple to model author/title/genre triads, and then retrieves a set of all the authors that write in a specific genre:

```python
from collections import namedtuple

Book = namedtuple("Book", "author title genre")
books = [
    Book("Pratchett", "Nightwatch", "fantasy"),
    Book("Pratchett", "Thief Of Time", "fantasy"),
    Book("Le Guin", "The Dispossessed", "scifi"),
    Book("Le Guin", "A Wizard Of Earthsea", "fantasy"),
    Book("Turner", "The Thief", "fantasy"),
    Book("Phillips", "Preston Diamond", "western"),
    Book("Phillips", "Twice Upon A Time", "scifi"),
]

fantasy_authors = {b.author for b in books if b.genre == 'fantasy'}
```

The highlighted set comprehension sure is short in comparison to the demo-data setup! If we were to use a list comprehension, of course, Terry Pratchett would have been listed twice. As it is, the nature of sets removes the duplicates, and we end up with:

```python
>>> fantasy_authors
{'Turner', 'Pratchett', 'Le Guin'}
```

We can introduce a colon to create a dictionary comprehension. This converts a sequence into a dictionary using key: value pairs. For example, it may be useful to quickly look up the author or genre in a dictionary if we know the title. We can use a dictionary comprehension to map titles to book objects:

```python
fantasy_titles = {b.title: b for b in books if b.genre == 'fantasy'}
```
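For instance, a lookup against this dictionary uses ordinary subscript syntax (values here come from the books list defined above):

```python
print(fantasy_titles['The Thief'])
# Book(author='Turner', title='The Thief', genre='fantasy')
```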
Now, we have a dictionary, and can look up books by title using the normal syntax.

In summary, comprehensions are not advanced Python, nor are they "non-object-oriented" tools that should be avoided. They are simply a more concise and optimized syntax for creating a list, set, or dictionary from an existing sequence.

Generator expressions

Sometimes we want to process a new sequence without placing a new list, set, or dictionary into system memory. If we're just looping over items one at a time, and don't actually care about having a final container object created, creating that container is a waste of memory. When processing one item at a time, we only need the current object stored in memory at any one moment. But when we create a container, all the objects have to be stored in that container before we start processing them.

For example, consider a program that processes log files. A very simple log might contain information in this format:

```
Jan 26, 2015 11:25:25    DEBUG      This is a debugging message.
Jan 26, 2015 11:25:36    INFO       This is an information method.
Jan 26, 2015 11:25:46    WARNING    This is a warning. It could be serious.
Jan 26, 2015 11:25:52    WARNING    Another warning sent.
Jan 26, 2015 11:25:59    INFO       Here's some information.
Jan 26, 2015 11:26:13    DEBUG      Debug messages are only useful if you want to figure something out.
Jan 26, 2015 11:26:32    INFO       Information is usually harmless, but helpful.
Jan 26, 2015 11:26:40    WARNING    Warnings should be heeded.
Jan 26, 2015 11:26:54    WARNING    Watch for warnings.
```

Log files for popular web servers, databases, or e-mail servers can contain many gigabytes of data (I recently had to clean nearly 2 terabytes of logs off a misbehaving system). If we want to process each line in the log, we can't use a list comprehension; it would create a list containing every line in the file. This probably wouldn't fit in RAM and could bring the computer to its knees, depending on the operating system.

If we used a for loop on the log file, we could process one line at a time before reading the next one into memory. Wouldn't it be nice if we could use comprehension syntax to get the same effect?

This is where generator expressions come in. They use the same syntax as comprehensions, but they don't create a final container object. To create a generator expression, wrap the comprehension in () instead of [] or {}.
The following code parses a log file in the previously presented format, and outputs a new log file that contains only the WARNING lines:

```python
import sys

inname = sys.argv[1]
outname = sys.argv[2]

with open(inname) as infile:
    with open(outname, "w") as outfile:
        warnings = (l for l in infile if 'WARNING' in l)
        for l in warnings:
            outfile.write(l)
```

This program takes the two filenames on the command line, uses a generator expression to filter out the warnings (in this case, it uses the if syntax, and leaves the line unmodified), and then outputs the warnings to another file. If we run it on our sample file, the output looks like this:

```
Jan 26, 2015 11:25:46    WARNING    This is a warning. It could be serious.
Jan 26, 2015 11:25:52    WARNING    Another warning sent.
Jan 26, 2015 11:26:40    WARNING    Warnings should be heeded.
Jan 26, 2015 11:26:54    WARNING    Watch for warnings.
```

Of course, with such a short input file, we could have safely used a list comprehension, but if the file is millions of lines long, the generator expression will have a huge impact on both memory and speed.

Generator expressions are frequently most useful inside function calls. For example, we can call sum, min, or max on a generator expression instead of a list, since these functions process one object at a time. We're only interested in the result, not any intermediate container.

In general, a generator expression should be used whenever possible. If we don't actually need a list, set, or dictionary, but simply need to filter or convert items in a sequence, a generator expression will be most efficient. If we need to know the length of a list, or sort the result, remove duplicates, or create a dictionary, we'll have to use the comprehension syntax.
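As a minimal sketch of that idea (the filename 'example.log' is a stand-in), a generator expression can be fed straight into a built-in aggregate function without ever building a list:

```python
with open('example.log') as infile:
    # count WARNING lines one at a time; no intermediate list is created
    warning_count = sum(1 for l in infile if 'WARNING' in l)
print(warning_count)
```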
Generators

Generator expressions are actually a sort of comprehension too; they compress the more advanced (this time it really is more advanced!) generator syntax into one line. The greater generator syntax looks even less object-oriented than anything we've seen, but we'll discover that once again, it is a simple syntax shortcut to create a kind of object.

Let's take the log file example a little further. If we want to delete the WARNING column from our output file (since it's redundant: this file contains only warnings), we have several options, at various levels of readability. We can do it with a generator expression:

```python
import sys

inname, outname = sys.argv[1:3]

with open(inname) as infile:
    with open(outname, "w") as outfile:
        warnings = (
            l.replace('\tWARNING', '')
            for l in infile
            if 'WARNING' in l
        )
        for l in warnings:
            outfile.write(l)
```

That's perfectly readable, though I wouldn't want to make the expression much more complicated than that. We could also do it with a normal for loop:

```python
import sys

inname, outname = sys.argv[1:3]

with open(inname) as infile:
    with open(outname, "w") as outfile:
        for l in infile:
            if 'WARNING' in l:
                outfile.write(l.replace('\tWARNING', ''))
```

That's maintainable, but so many levels of indent in so few lines is kind of ugly. More alarmingly, if we wanted to do something different with the lines, rather than just printing them out, we'd have to duplicate the looping and conditional code, too.

Now let's consider a truly object-oriented solution, without any shortcuts:

```python
import sys

inname, outname = sys.argv[1:3]


class WarningFilter:
    def __init__(self, insequence):
        self.insequence = insequence

    def __iter__(self):
        return self

    def __next__(self):
        l = self.insequence.readline()
        while l and 'WARNING' not in l:
            l = self.insequence.readline()
        if not l:
            raise StopIteration
        return l.replace('\tWARNING', '')


with open(inname) as infile:
    with open(outname, "w") as outfile:
        filter = WarningFilter(infile)
        for l in filter:
            outfile.write(l)
```

No doubt about it: that is so ugly and difficult to read that you may not even be able to tell what's going on. We created an object that takes a file object as input, and provides a __next__ method like any iterator.

This __next__ method reads lines from the file, discarding them if they are not WARNING lines. When it encounters a WARNING line, it returns it. Then the for loop will call __next__ again to process the next WARNING line. When we run out of lines, we raise StopIteration to tell the loop we're finished iterating. It's pretty ugly compared to the other examples, but it's also powerful; now that we have a class in our hands, we can do whatever we want with it.

With that background behind us, we finally get to see generators in action. This next example does exactly the same thing as the previous one: it creates an object with a __next__ method that raises StopIteration when it's out of inputs:

```python
import sys

inname, outname = sys.argv[1:3]


def warnings_filter(insequence):
    for l in insequence:
        if 'WARNING' in l:
            yield l.replace('\tWARNING', '')


with open(inname) as infile:
    with open(outname, "w") as outfile:
        filter = warnings_filter(infile)
        for l in filter:
            outfile.write(l)
```

OK, that's pretty readable, maybe... at least it's short. But what on earth is going on here? It makes no sense whatsoever. And what is yield, anyway?

In fact, yield is the key to generators. When Python sees yield in a function, it takes that function and wraps it up in an object not unlike the one in our previous example. Think of the yield statement as similar to the return statement; it exits the function and returns a line. Unlike return, however, when the function is called again (via next()), it will start where it left off--on the line after the yield statement--instead of at the beginning of the function. In this example, there is no line "after" the yield statement, so it jumps to the next iteration of the for loop. Since the yield statement is inside an if statement, it only yields lines that contain WARNING.

While it looks like this is just a function looping over the lines, it is actually creating a special type of object, a generator object:

```python
>>> print(warnings_filter([]))
<generator object warnings_filter at 0x...>
```

I passed an empty list into the function to act as an iterator. All the function does is create and return a generator object. That object has __iter__ and __next__ methods on it, just like the one we created in the previous example. Whenever __next__ is called, the generator runs the function until it finds a yield statement. It then returns the value from yield, and the next time __next__ is called, it picks up where it left off.

This use of generators isn't that advanced, but if you don't realize the function is creating an object, it can seem like magic. This example was quite simple, but you can get really powerful effects by making multiple calls to yield in a single function; the generator will simply pick up at the most recent yield and continue to the next one.

Yield items from another iterable

Often, when we build a generator function, we end up in a situation where we want to yield data from another iterable object, possibly a list comprehension or generator expression we constructed inside the generator, or perhaps some external items that were passed into the function. This has always been possible by looping over the iterable and individually yielding each item. However, in Python version 3.3, the Python developers introduced a new syntax to make this a little more elegant.
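Before adapting the log example, here is a minimal sketch of the yield from syntax on its own (the values are illustrative):

```python
def chain_two(first, second):
    # equivalent to looping over each iterable and yielding item by item
    yield from first
    yield from second

print(list(chain_two([1, 2], [3, 4])))  # [1, 2, 3, 4]
```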
Let's adapt the generator example a bit so that instead of accepting a sequence of lines, it accepts a filename. This would normally be frowned upon as it ties the object to a particular paradigm. When possible we should operate on iterators as input; this way the same function could be used regardless of whether the log lines came from a file, memory, or a web-based log aggregator. So the following example is contrived for pedagogical reasons.

This version of the code illustrates that your generator can do some basic setup before yielding information from another iterable (in this case, a generator expression):

```python
import sys

inname, outname = sys.argv[1:3]


def warnings_filter(infilename):
    with open(infilename) as infile:
        yield from (
            l.replace('\tWARNING', '')
            for l in infile
            if 'WARNING' in l
        )


filter = warnings_filter(inname)
with open(outname, "w") as outfile:
    for l in filter:
        outfile.write(l)
```

This code combines the for loop from the previous example into a generator expression. Notice how I put the three clauses of the generator expression (the transformation, the loop, and the filter) on separate lines to make them more readable. Notice also that this transformation didn't help enough; the previous example with a for loop was more readable.

So let's consider an example that is more readable than its alternative. It can be useful to construct a generator that yields data from multiple other generators. The itertools.chain function, for example, yields data from iterables in sequence until they have all been exhausted. This can be implemented far too easily using the yield from syntax, so let's consider a classic computer science problem: walking a general tree.

A common implementation of the general tree data structure is a computer's filesystem. Let's model a few folders and files in a Unix filesystem so we can use yield from to walk them effectively:

```python
class File:
    def __init__(self, name):
        self.name = name


class Folder(File):
    def __init__(self, name):
        super().__init__(name)
        self.children = []


root = Folder('')
etc = Folder('etc')
root.children.append(etc)
etc.children.append(File('passwd'))
etc.children.append(File('groups'))
httpd = Folder('httpd')
etc.children.append(httpd)
httpd.children.append(File('http.conf'))
var = Folder('var')
root.children.append(var)
log = Folder('log')
var.children.append(log)
log.children.append(File('messages'))
log.children.append(File('kernel'))
```

This setup code looks like a lot of work, but in a real filesystem, it would be even more involved. We'd have to read data from the hard drive and structure it into the tree. Once in memory, however, the code that outputs every file in the filesystem is quite elegant:

```python
def walk(file):
    if isinstance(file, Folder):
        yield file.name + '/'
        for f in file.children:
            yield from walk(f)
    else:
        yield file.name
```

If this code encounters a directory, it recursively asks walk() to generate a list of all files subordinate to each of its children, and then yields all that data plus its own filename. In the simple case that it has encountered a normal file, it just yields that name.
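Running the walker over the tree built above is a one-liner; given that data, the output should look like the comment below, with folders marked by a trailing slash:

```python
for name in walk(root):
    print(name)
# /
# etc/
# passwd
# groups
# httpd/
# http.conf
# var/
# log/
# messages
# kernel
```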
As an aside, solving the preceding problem without using a generator is tricky enough that it is a common interview question. If you answer it as shown like this, be prepared for your interviewer to be both impressed and somewhat irritated that you answered it so easily. They will likely demand that you explain exactly what is going on. Of course, armed with the principles you've learned in this chapter, you won't have any problem.

The yield from syntax is a useful shortcut when writing chained generators, but it is more commonly used for a different purpose: piping data through coroutines. We'll see many examples of this when we discuss concurrency, but for now, let's discover what a coroutine is.

Coroutines

Coroutines are extremely powerful constructs that are often confused with generators. Many authors inappropriately describe coroutines as "generators with a bit of extra syntax." This is an easy mistake to make, as, way back in Python 2.5 when coroutines were introduced, they were presented as "we added a send method to the generator syntax."

This is further complicated by the fact that when you create a coroutine in Python, the object returned is a generator. The difference is actually a lot more nuanced and will make more sense after you've seen a few examples. While coroutines in Python are currently tightly coupled to the generator syntax, they are only superficially related to the iterator protocol we have been discussing. The upcoming (as this is published) Python 3.5 release makes coroutines a truly standalone object and will provide a new syntax to work with them.

The other thing to bear in mind is that coroutines are pretty hard to understand. They are not used all that often in the wild, and you could likely skip this section and happily develop in Python for years without missing or even encountering them. There are a couple libraries that use coroutines extensively (mostly for concurrent or asynchronous programming), but they are normally written such that you can use coroutines without actually understanding how they work! So if you get lost in this section, don't despair. But you won't get lost, having studied the following examples.

Here's one of the simplest possible coroutines; it allows us to keep a running tally that can be increased by arbitrary values:

```python
def tally():
    score = 0
    while True:
        increment = yield score
        score += increment
```

This code looks like black magic that couldn't possibly work, so we'll see it working before going into a line-by-line description. This simple object could be used by a scoring application for a baseball team. Separate tallies could be kept for each team, and their score could be incremented by the number of runs accumulated at the end of every half-inning. Look at this interactive session:

```python
>>> white_sox = tally()
>>> blue_jays = tally()
>>> next(white_sox)
0
>>> next(blue_jays)
0
>>> white_sox.send(3)
3
>>> blue_jays.send(2)
2
>>> white_sox.send(2)
5
>>> blue_jays.send(4)
6
```

First we construct two tally objects, one for each team. Yes, they look like functions, but as with the generator objects in the previous section, the fact that there is a yield statement inside the function tells Python to put a great deal of effort into turning the simple function into an object.

We then call next() on each of the coroutine objects. This does the same thing as calling next on any generator, which is to say, it executes each line of code until it encounters a yield statement, returns the value at that point, and then pauses until the next next() call.

So far, then, there's nothing new. But look back at the yield statement in our coroutine:

```python
increment = yield score
```
Unlike with generators, this yield function looks like it's supposed to return a value and assign it to a variable. This is, in fact, exactly what's happening. The coroutine is still paused at the yield statement and waiting to be activated again by another call to next() or, rather, as you see in the interactive session, a call to a method called send(). The send() method does exactly the same thing as next() except that in addition to advancing the generator to the next yield statement, it also allows you to pass in a value from outside the generator. This value is assigned to the left side of the yield statement.

The thing that is really confusing for many people is the order in which this happens:

1. yield occurs and the generator pauses
2. send() occurs from outside the function and the generator wakes up
3. The value sent in is assigned to the left side of the yield statement
4. The generator continues processing until it encounters another yield statement

So, in this particular example, after we construct the coroutine and advance it to the yield statement with a call to next(), each successive call to send() passes a value into the coroutine, which adds this value to its score, goes back to the top of the while loop, and keeps processing until it hits the yield statement. The yield statement returns a value, and this value becomes the return value of the most recent call to send. Don't miss that: the send() method does not just submit a value to the generator, it also returns the value from the upcoming yield statement, just like next(). This is how we define the difference between a generator and a coroutine: a generator only produces values, while a coroutine can also consume them.

The behavior and syntax of next(g), g.__next__(), and g.send(value) are rather unintuitive and frustrating. The first is a normal function, the second is a special method, and the last is a normal method. But all three do the same thing: advance the generator until it yields a value and pause. Further, the next() function and associated method can be replicated by calling send(None). There is value to having two different method names here, since it helps the reader of our code easily see whether they are interacting with a coroutine or a generator. I just find the fact that in one case it's a function call and in the other it's a normal method somewhat irritating.
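The claim that next() can be replicated with send(None) is easy to check against the tally coroutine above; a minimal sketch:

```python
scores = tally()
print(scores.send(None))  # equivalent to next(scores): runs to the first yield, prints 0
print(scores.send(5))     # 5
print(scores.send(2))     # 7
```

Note that send(None) is also the only send() value a just-started coroutine will accept; Python raises a TypeError if you try to send a real value before the first yield has been reached.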
Back to log parsing

Of course, the previous example could easily have been coded using a couple integer variables and calling x += increment on them. Let's look at a second example where coroutines actually save us some code. This example is a somewhat simplified (for pedagogical reasons) version of a problem I had to solve in my real job. The fact that it logically follows from the earlier discussions about processing a log file is completely serendipitous; those examples were written for the first edition of this book, whereas this problem came up four years later!

The Linux kernel log contains lines that look somewhat, but not quite completely, unlike this:

```
unrelated log messages
sd 0:0:0:0 Attached Disk Drive
unrelated log messages
sd 0:0:0:0 (SERIAL=ZZ12345)
unrelated log messages
sd 0:0:0:0 [sda] Options
unrelated log messages
XFS ERROR [sda]
unrelated log messages
sd 2:0:0:1 Attached Disk Drive
unrelated log messages
sd 2:0:0:1 (SERIAL=ZZ67890)
unrelated log messages
sd 2:0:0:1 [sdb] Options
unrelated log messages
sd 3:0:1:8 Attached Disk Drive
unrelated log messages
sd 3:0:1:8 (SERIAL=WW11111)
unrelated log messages
sd 3:0:1:8 [sdc] Options
unrelated log messages
XFS ERROR [sdc]
unrelated log messages
```

There are a whole bunch of interspersed kernel log messages, some of which pertain to hard disks. The hard disk messages might be interspersed with other messages, but they occur in a predictable format and order, in which a specific drive with a known serial number is associated with a bus identifier (such as 0:0:0:0), and a block device identifier (such as sda) is associated with that bus. Finally, if the drive has a corrupt filesystem, it might fail with an XFS error.
Now, given the preceding log file, the problem we need to solve is how to obtain the serial number of any drives that have XFS errors on them. This serial number might later be used by a data center technician to identify and replace the drive.

We know we can identify the individual lines using regular expressions, but we'll have to change the regular expressions as we loop through the lines, since we'll be looking for different things depending on what we found previously. The other difficult bit is that if we find an error string, the information about which bus contains that string, and what serial number is attached to the drive on that bus, has already been processed. This can easily be solved by iterating through the lines of the file in reverse order.

Before you look at this example, be warned--the amount of code required for a coroutine-based solution is scarily small:

```python
import re


def match_regex(filename, regex):
    with open(filename) as file:
        lines = file.readlines()
    for line in reversed(lines):
        match = re.match(regex, line)
        if match:
            regex = yield match.groups()[0]


def get_serials(filename):
    ERROR_RE = 'XFS ERROR (\[sd[a-z]\])'
    matcher = match_regex(filename, ERROR_RE)
    device = next(matcher)
    while True:
        bus = matcher.send(
            '(sd \S+) {}.*'.format(re.escape(device)))
        serial = matcher.send('{} \(SERIAL=([^)]*)\)'.format(bus))
        yield serial
        device = matcher.send(ERROR_RE)


for serial_number in get_serials('EXAMPLE_LOG.log'):
    print(serial_number)
```

This code neatly divides the job into two separate tasks. The first task is to loop over all the lines and spit out any lines that match a given regular expression. The second task is to interact with the first task and give it guidance as to what regular expression it is supposed to be searching for at any given time.
Look at the match_regex coroutine first. Remember, it doesn't execute any code when it is constructed; rather, it just creates a coroutine object. Once constructed, someone outside the coroutine will eventually call next() to start the code running, at which point it stores the state of two variables, filename and regex. It then reads all the lines in the file and iterates over them in reverse. Each line is compared to the regular expression that was passed in until it finds a match. When the match is found, the coroutine yields the first group from the regular expression and waits.

At some point in the future, other code will send in a new regular expression to search for. Note that the coroutine never cares what regular expression it is trying to match; it's just looping over lines and comparing them to a regular expression. It's somebody else's responsibility to decide what regular expression to supply.

In this case, that somebody else is the get_serials generator. It doesn't care about the lines in the file; in fact, it isn't even aware of them. The first thing it does is create a matcher object from the match_regex coroutine constructor, giving it a default regular expression to search for. It advances the coroutine to its first yield and stores the value it returns. It then goes into a loop that instructs the matcher object to search for a bus ID based on the stored device ID, and then a serial number based on that bus ID. It idly yields that serial number to the outside for loop before instructing the matcher to find another device ID and repeat the cycle.

Basically, the coroutine's (match_regex, as it uses the regex = yield syntax) job is to search for the next important line in the file, while the generator's (get_serials, which uses the yield syntax without assignment) job is to decide which line is important. The generator has information about this particular problem, such as what order lines will appear in the file. The coroutine, on the other hand, could be plugged into any problem that required searching a file for given regular expressions.

Closing coroutines and throwing exceptions

Normal generators signal their exit from inside by raising StopIteration. If we chain multiple generators together (for example, by iterating over one generator from inside another), the StopIteration exception will be propagated outward. Eventually, it will hit a for loop that will see the exception and know that it's time to exit the loop.

Coroutines don't normally follow the iteration mechanism; rather than pulling data through one until an exception is encountered, data is usually pushed into it (using send). The entity doing the pushing is normally the one in charge of telling the coroutine when it's finished; it does this by calling the close() method on the coroutine in question.
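As a minimal sketch of what that looks like in practice, here is the tally coroutine from earlier with a try/finally block added for cleanup (my own variation, not from the original text):

```python
def tally():
    score = 0
    try:
        while True:
            increment = yield score
            score += increment
    finally:
        # runs when close() raises GeneratorExit at the paused yield
        print("final score was", score)

scores = tally()
next(scores)
scores.send(3)
scores.send(2)
scores.close()  # prints: final score was 5
```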
When called, the close() method will raise a GeneratorExit exception at the point the coroutine was waiting for a value to be sent in. It is normally good policy for coroutines to wrap their yield statements in a try...finally block so that any cleanup tasks (such as closing associated files or sockets) can be performed.

If we need to raise an exception inside a coroutine, we can use the throw() method in a similar way. It accepts an exception type with optional value and traceback arguments. The latter is useful when we encounter an exception in one coroutine and want to cause an exception to occur in an adjacent coroutine while maintaining the traceback.

Both of these features are vital if you're building robust coroutine-based libraries, but we are unlikely to encounter them in our day-to-day coding lives.

The relationship between coroutines, generators, and functions

We've seen coroutines in action, so now let's go back to that discussion of how they are related to generators. In Python, as is so often the case, the distinction is quite blurry. In fact, all coroutines are generator objects, and authors often use the two terms interchangeably. Sometimes, they describe coroutines as a subset of generators (only generators that return values from yield are considered coroutines). This is technically true in Python, as we've seen in the previous sections.

However, in the greater sphere of theoretical computer science, coroutines are considered the more general principle, and generators are a specific type of coroutine. Further, normal functions are yet another distinct subset of coroutines.

A coroutine is a routine that can have data passed in at one or more points and get it out at one or more points. In Python, the point where data is passed in and out is the yield statement.

A function, or subroutine, is the simplest type of coroutine. You can pass data in at one point, and get data out at one other point when the function returns. While a function can have multiple return statements, only one of them can be called for any given invocation of the function.

Finally, a generator is a type of coroutine that can have data passed in at one point, but can pass data out at multiple points. In Python, the data would be passed out at a yield statement, but you can't pass data back in. If you called send, the data would be silently discarded.
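That last point is easy to demonstrate; a minimal sketch, with illustrative values:

```python
def plain_generator():
    yield 1
    yield 2

g = plain_generator()
print(next(g))            # 1
print(g.send("ignored"))  # 2 -- nothing captures the yield's value,
                          # so the sent-in data is silently discarded
```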
So in theory, generators are types of coroutines, functions are types of coroutines, and there are coroutines that are neither functions nor generators. That's simple enough, eh? So why does it feel more complicated in Python?

In Python, generators and coroutines are both constructed using a syntax that looks like we are constructing a function. But the resulting object is not a function at all; it's a totally different kind of object. Functions are, of course, also objects. But they have a different interface: functions are callable and return values, generators have data pulled out using next(), and coroutines have data pushed in using send.

Case study

One of the fields in which Python is the most popular these days is data science. Let's implement a basic machine learning algorithm! Machine learning is a huge topic, but the general idea is to make predictions or classifications about future data by using knowledge gained from past data. Uses of such algorithms abound, and data scientists are finding new ways to apply machine learning every day. Some important machine learning applications include computer vision (such as image classification or facial recognition), product recommendation, identifying spam, and speech recognition. We'll look at a simpler problem: given an RGB color definition, what name would humans identify that color as?

There are more than 16 million colors in the standard RGB color space, and humans have come up with names for only a fraction of them. While there are thousands of names (some quite ridiculous; just go to any car dealership or makeup store), let's build a classifier that attempts to divide the RGB space into the basic colors:

- Red
- Purple
- Blue
- Green
- Yellow
- Orange
- Grey
- White
- Pink
The first thing we need is a dataset to train our algorithm on. In a production system, you might scrape a list-of-colors website or survey thousands of people. Instead, I created a simple application that renders a random color and asks the user to select one of the preceding nine options to classify it. This application is included with the example code for this chapter in the kivy_color_classifier directory, but we won't be going into the details of this code as its only purpose here is to generate sample data.

Kivy has an incredibly well-engineered object-oriented API that you may want to explore on your own time. If you would like to develop graphical programs that run on many systems, from your laptop to your cell phone, you might want to check out my book, Creating Apps in Kivy, O'Reilly.

For the purposes of this case study, the important thing about that application is the output, which is a comma-separated value (CSV) file that contains four values per row: the red, green, and blue values (represented as a floating-point number between zero and one), and one of the preceding nine names that the user assigned to that color. The dataset looks something like this:

```
0.30,0.75,0.32,Green
0.49,0.63,0.63,Grey
0.21,0.33,0.70,Blue
0.72,0.40,0.50,Pink
0.35,0.21,0.54,Purple
0.80,0.78,0.29,Yellow
0.41,0.25,0.62,Purple
0.44,0.27,0.59,Purple
0.27,0.35,0.69,Blue
0.29,0.76,0.35,Green
```

I made 200 datapoints (a very few of them untrue) before I got bored and decided it was time to start machine learning on this dataset. These datapoints are shipped with the examples for this chapter if you would like to use my data (nobody's ever told me I'm color-blind, so it should be somewhat reasonable).

We'll be implementing one of the simpler machine-learning algorithms, referred to as k-nearest neighbor. This algorithm relies on some kind of "distance" calculation between points in the dataset. In our case, we can use a three-dimensional version of the Pythagorean theorem: the distance between two colors is sqrt((r1-r2)**2 + (g1-g2)**2 + (b1-b2)**2). Given a new datapoint, it finds a certain number (referred to as k, as in k-nearest neighbors) of datapoints that are closest to it when measured by that distance calculation. Then it combines those datapoints in some way (an average might work for linear calculations; for our classification problem, we'll use the mode), and returns the result.
We won't go into too much detail about what the algorithm does; rather, we'll focus on some of the ways we can apply the iterator pattern or iterator protocol to this problem. Let's now write a program that performs the following steps in order:

1. Load the sample data from the file and construct a model from it.
2. Generate 100 random colors.
3. Classify each color and output it to a file in the same format as the input.

Once we have this second CSV file, another Kivy program can load the file and render each color, asking a human user to confirm or deny the accuracy of the prediction, thus informing us of how accurate our algorithm and initial data set are.

The first step is a fairly simple generator that loads CSV data and converts it into a format that is amenable to our needs:

```python
import csv

dataset_filename = 'colors.csv'


def load_colors(filename):
    with open(filename) as dataset_file:
        lines = csv.reader(dataset_file)
        for line in lines:
            yield tuple(float(y) for y in line[0:3]), line[3]
```

We haven't seen the csv.reader function before. It returns an iterator over the lines in the file. Each value returned by the iterator is a list of strings. In our case, we could have just split on commas and been fine, but csv.reader also takes care of managing quotation marks and various other nuances of the comma-separated value format.

We then loop over these lines and convert them to a tuple of color and name, where the color is a tuple of three floating-point values. This tuple is constructed using a generator expression. There might be more readable ways to construct this tuple; do you think the code brevity and the speed of a generator expression is worth the obfuscation? Instead of returning a list of color tuples, it yields them one at a time, thus constructing a generator object.

Now, we need a hundred random colors. There are so many ways this can be done:

- A list comprehension with a nested generator expression: [tuple(random() for r in range(3)) for r in range(100)]
- A basic generator function
- A class that implements the __iter__ and __next__ protocols
- Pushing the data through a pipeline of coroutines
- Even just a basic for loop

The generator version seems to be most readable, so let's add that function to our program:

```python
from random import random


def generate_colors(count=100):
    for i in range(count):
        yield (random(), random(), random())
```

Notice how we parameterize the number of colors to generate. We can now reuse this function for other color-generating tasks in the future.

Now, before we do the classification step, we need a function to calculate the "distance" between two colors. Since it's possible to think of colors as being three dimensional (red, green, and blue could map to x, y, and z axes, for example), let's use a little basic math:

```python
import math


def color_distance(color1, color2):
    channels = zip(color1, color2)
    sum_distance_squared = 0
    for c1, c2 in channels:
        sum_distance_squared += (c1 - c2) ** 2
    return math.sqrt(sum_distance_squared)
```

This is a pretty basic-looking function; it doesn't look like it's even using the iterator protocol. There's no yield function, no comprehensions. However, there is a for loop, and that call to the zip function is doing some real iteration as well (remember that zip yields tuples containing one element from each input iterator).

Note, however, that this function is going to be called a lot of times inside our k-nearest neighbors algorithm. If our code ran too slow and we were able to identify this function as a bottleneck, we might want to replace it with a less readable, but more optimized, generator expression:

```python
def color_distance(color1, color2):
    return math.sqrt(sum(
        (x[0] - x[1]) ** 2 for x in zip(color1, color2)))
```
However, I strongly recommend not making such optimizations until you have proven that the readable version is too slow.

Now that we have some plumbing in place, let's do the actual k-nearest neighbor implementation. This seems like a good place to use a coroutine. Here it is with some test code to ensure it's yielding sensible values:

```python
def nearest_neighbors(model_colors, num_neighbors):
    model = list(model_colors)
    target = yield
    while True:
        distances = sorted(
            (color_distance(c[0], target), c) for c in model
        )
        target = yield [
            d[1] for d in distances[0:num_neighbors]
        ]


model_colors = load_colors(dataset_filename)
target_colors = generate_colors(3)
get_neighbors = nearest_neighbors(model_colors, 5)
next(get_neighbors)

for color in target_colors:
    distances = get_neighbors.send(color)
    print(color)
    for d in distances:
        print(color_distance(color, d[0]), d[1])
```

The coroutine accepts two arguments: the list of colors to be used as the model, and the number of neighbors to query. It converts the model to a list because it's going to be iterated over multiple times. In the body of the coroutine, it accepts a tuple of RGB color values using the target = yield syntax. Then it combines a call to sorted with an odd generator expression. See if you can figure out what that generator expression is doing.

It returns a tuple of (distance, color_data) for each color in the model. Remember, the model itself contains tuples of (color, name), where color is a tuple of three RGB values. Therefore, the generator is returning an iterator over a weird data structure that looks like this:

```
(distance, ((r, g, b), color_name))
```
The sorted call then sorts the results by their first element, which is distance. This is a complicated piece of code and isn't object-oriented at all. You may want to break it down into a normal for loop to ensure you understand what the generator expression is doing. It might also be a good exercise to imagine how this code would look if you were to pass a key argument into the sorted function instead of constructing a tuple.

The yield statement is a bit less complicated; it pulls the second value from each of the first k (distance, color_data) tuples. In more concrete terms, it yields the ((r, g, b), color_name) tuple for the k values with the lowest distance. Or, if you prefer more abstract terms, it yields the target's k-nearest neighbors in the given model.

The remaining code is just boilerplate to test this method; it constructs the model and a color generator, primes the coroutine, and prints the results in a for loop.

The two remaining tasks are to choose a color based on the nearest neighbors, and to output the results to a CSV file. Let's make two more coroutines to take care of these tasks. Let's do the output first because it can be tested independently:

```python
def write_results(filename="output.csv"):
    with open(filename, "w") as file:
        writer = csv.writer(file)
        while True:
            color, name = yield
            writer.writerow(list(color) + [name])


results = write_results()
next(results)
for i in range(3):
    print(i)
    results.send(((i, i, i), i * 10))
```

This coroutine maintains an open file as state and writes lines to it as they are sent in using send(). The test code ensures the coroutine is working correctly, so now we can connect the two coroutines with a third one.

The second coroutine uses a bit of an odd trick:

```python
from collections import Counter


def name_colors(get_neighbors):
    color = yield
    while True:
        near = get_neighbors.send(color)
        name_guess = Counter(
            n[1] for n in near).most_common(1)[0][0]
        color = yield name_guess
```
This coroutine accepts, as its argument, an existing coroutine. In this case, it's an instance of nearest_neighbors. This code basically proxies all the values sent into it through that nearest_neighbors instance. Then it does some processing on the result to get the most common color out of the values that were returned. In this case, it would probably make just as much sense to adapt the original coroutine to return a name, since it isn't being used for anything else. However, there are many cases where it is useful to pass coroutines around; this is how we do it.

Now all we have to do is connect these various coroutines and pipelines together, and kick off the process with a single function call:

```python
def process_colors(dataset_filename="colors.csv"):
    model_colors = load_colors(dataset_filename)
    get_neighbors = nearest_neighbors(model_colors, 5)
    get_color_name = name_colors(get_neighbors)
    output = write_results()
    next(output)
    next(get_neighbors)
    next(get_color_name)

    for color in generate_colors():
        name = get_color_name.send(color)
        output.send((color, name))


process_colors()
```

So, this function, unlike almost every other function we've defined, is a perfectly normal function without any yield statements. It doesn't get turned into a coroutine or generator object. It does, however, construct a generator and three coroutines. Notice how the get_neighbors coroutine is passed into the constructor for name_colors? Pay attention to how all three coroutines are advanced to their first yield statements by calls to next.

Once all the pipes are created, we use a for loop to send each of the generated colors into the get_color_name coroutine, and then we pipe each of the values yielded by that coroutine to the output coroutine, which writes it to a file.

And that's it! I created a second Kivy app that loads the resulting CSV file and presents the colors to the user. The user can select either Yes or No depending on whether they think the choice made by the machine-learning algorithm matches the choice they would have made. This is not scientifically accurate (it's ripe for observation bias), but it's good enough for playing around. Using my eyes, it succeeded about 80 percent of the time, which is better than my grade 12 average. Not bad for our first ever machine learning experience, eh?
You might be wondering, "What does this have to do with object-oriented programming? There isn't even one class in this code!" In some ways, you'd be right; neither coroutines nor generators are commonly considered object-oriented. However, the functions that create them return objects; in fact, you could think of those functions as constructors. The constructed object has appropriate send() and __next__() methods. Basically, the coroutine/generator syntax is a syntax shortcut for a particular kind of object that would be quite verbose to create without it.

This case study has been an exercise in bottom-up design. We created various low-level objects that did specific tasks and hooked them all together at the end. I find this to be a common practice when developing with coroutines. The alternative, top-down design, sometimes results in more monolithic pieces of code instead of unique individual pieces. In general, we want to find a happy medium between methods that are too large and methods that are too small and it's hard to see how they fit together. This is true, of course, regardless of whether the iterator protocol is being used as we did here.

Exercises

If you don't use comprehensions in your daily coding very often, the first thing you should do is search through some existing code and find some for loops. See if any of them can be trivially converted to a generator expression or a list, set, or dictionary comprehension.

Test the claim that list comprehensions are faster than for loops. This can be done with the built-in timeit module. Use the help documentation for the timeit.timeit function to find out how to use it. Basically, write two functions that do the same thing, one using a list comprehension, and one using a for loop. Pass each function into timeit.timeit, and compare the results. If you're feeling adventurous, compare generators and generator expressions as well. Testing code using timeit can become addictive, so bear in mind that code does not need to be hyperfast unless it's being executed an immense number of times, such as on a huge input list or file.

Play around with generator functions. Start with basic iterators that require multiple values (mathematical sequences are canonical examples; the Fibonacci sequence is overused if you can't think of anything better). Try some more advanced generators that do things like take multiple input lists and somehow yield values that merge them. Generators can also be used on files; can you write a simple generator that shows those lines that are identical in two files?
Coroutines abuse the iterator protocol but don't actually fulfill the iterator pattern. Can you build a non-coroutine version of the code that gets a serial number from a log file? Take an object-oriented approach so that you can store additional state on a class. You'll learn a lot about coroutines if you can create an object that is a drop-in replacement for the existing coroutine.

See if you can abstract the coroutines used in the case study so that the k-nearest-neighbor algorithm can be used on a variety of datasets. You'll likely want to construct a coroutine that accepts other coroutines or functions that do the distance and recombination calculations as parameters, and then calls into those functions to find the actual nearest neighbors.

Summary

In this chapter, we learned that design patterns are useful abstractions that provide "best practice" solutions for common programming problems. We covered our first design pattern, the iterator, as well as numerous ways that Python uses and abuses this pattern for its own nefarious purposes. The original iterator pattern is extremely object-oriented, but it is also rather ugly and verbose to code around. However, Python's built-in syntax abstracts the ugliness away, leaving us with a clean interface to these object-oriented constructs.

Comprehensions and generator expressions can combine container construction with iteration in a single line. Generator objects can be constructed using the yield syntax. Coroutines look like generators on the outside but serve a much different purpose.

We'll cover several more design patterns in the next two chapters.
In the last chapter, we were briefly introduced to design patterns, and covered the iterator pattern, a pattern so useful and common that it has been abstracted into the core of the programming language itself. In this chapter, we'll be reviewing other common patterns, and how they are implemented in Python. As with iteration, Python often provides an alternative syntax to make working with such problems simpler. We will cover both the "traditional" design, and the Python version for these patterns. In summary, we'll see:

- Numerous specific patterns
- A canonical implementation of each pattern in Python
- Python syntax to replace certain patterns

The decorator pattern

The decorator pattern allows us to "wrap" an object that provides core functionality with other objects that alter this functionality. Any object that uses the decorated object will interact with it in exactly the same way as if it were undecorated (that is, the interface of the decorated object is identical to that of the core object).

There are two primary uses of the decorator pattern:

- Enhancing the response of a component as it sends data to a second component
- Supporting multiple optional behaviors
The second option is often a suitable alternative to multiple inheritance. We can construct a core object, and then create a decorator around that core. Since the decorator object has the same interface as the core object, we can even wrap the new object in other decorators. Here's how it looks in UML:

[UML class diagram: an Interface declaring someAction(), implemented by a Core class and by Decorator1 and Decorator2 classes; each decorator holds a reference to another implementation of the Interface.]

Here, Core and all the decorators implement a specific Interface. The decorators maintain a reference to another instance of that Interface via composition. When called, the decorator does some added processing before or after calling its wrapped interface. The wrapped object may be another decorator, or the core functionality. While multiple decorators may wrap each other, the object in the "center" of all those decorators provides the core functionality.

Decorator example

Let's look at an example from network programming. We'll be using a TCP socket. The socket.send() method takes a string of input bytes and outputs them to the receiving socket at the other end. There are plenty of libraries that accept sockets and access this function to send data on the stream. Let's create such an object; it will be an interactive shell that waits for a connection from a client and then prompts the user for a string response:

```python
import socket


def respond(client):
    response = input("Enter a value: ")
    client.send(bytes(response, 'utf8'))
    client.close()


server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(('localhost', 2401))
server.listen(1)
try:
    while True:
        client, addr = server.accept()
        respond(client)
finally:
    server.close()
```

The respond function accepts a socket parameter and prompts for data to be sent as a reply, then sends it. To use it, we construct a server socket and tell it to listen on port 2401 (I picked the port randomly) on the local computer. When a client connects, it calls the respond function, which requests data interactively and responds appropriately. The important thing to notice is that the respond function only cares about two methods of the socket interface: send and close.

To test this, we can write a very simple client that connects to the same port and outputs the response before exiting:

```python
import socket

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(('localhost', 2401))
print("Received: {0}".format(client.recv(1024)))
client.close()
```

To use these programs:

1. Start the server in one terminal.
2. Open a second terminal window and run the client.
3. At the Enter a value: prompt in the server window, type a value and press Enter.
4. The client will receive what you typed, print it to the console, and exit. Run the client a second time; the server will prompt for a second value.

Now, looking again at our server code, we see two sections. The respond function sends data into a socket object. The remaining script is responsible for creating that socket object. We'll create a pair of decorators that customize the socket behavior without having to extend or modify the socket itself.

Let's start with a "logging" decorator. This object outputs any data being sent to the server's console before it sends it to the client:

```python
class LogSocket:
    def __init__(self, socket):
        self.socket = socket

    def send(self, data):
        print("Sending {0} to {1}".format(
            data, self.socket.getpeername()[0]))
        self.socket.send(data)

    def close(self):
        self.socket.close()
```

This class decorates a socket object and presents the send and close interface to client sockets. A better decorator would also implement (and possibly customize) all of the remaining socket methods. It should properly implement all of the arguments to send() (which actually accepts an optional flags argument) as well, but let's keep our example simple! Whenever send is called on this object, it logs the output to the screen before sending data to the client using the original socket.

We only have to change one line in our original code to use this decorator. Instead of calling respond with the socket, we call it with a decorated socket:

```python
respond(LogSocket(client))
```

While that's quite simple, we have to ask ourselves why we didn't just extend the socket class and override the send method. We could call super().send to do the actual sending, after we logged it. There is nothing wrong with this design either.

When faced with a choice between decorators and inheritance, we should only use decorators if we need to modify the object dynamically, according to some condition. For example, we may only want to enable the logging decorator if the server is currently in debugging mode. Decorators also beat multiple inheritance when we have more than one optional behavior. As an example, we can write a second decorator that compresses data using gzip compression whenever send is called:

```python
import gzip
from io import BytesIO


class GzipSocket:
    def __init__(self, socket):
        self.socket = socket

    def send(self, data):
        buf = BytesIO()
        zipfile = gzip.GzipFile(fileobj=buf, mode="w")
        zipfile.write(data)
        zipfile.close()
        self.socket.send(buf.getvalue())

    def close(self):
        self.socket.close()
```
The send method in this version compresses the incoming data before sending it on to the client. Now that we have these two decorators, we can write code that dynamically switches between them when responding. This example is not complete, but it illustrates the logic we might follow to mix and match decorators:

client, addr = server.accept()
if log_send:
    client = LogSocket(client)
if client.getpeername()[0] in compress_hosts:
    client = GzipSocket(client)
respond(client)

This code checks a hypothetical configuration variable named log_send. If it's enabled, it wraps the socket in a LogSocket decorator. Similarly, it checks whether the client that has connected is in a list of addresses known to accept compressed content. If so, it wraps the client in a GzipSocket decorator. Notice that none, either, or both of the decorators may be enabled, depending on the configuration and connecting client. Try writing this using multiple inheritance and see how confused you get!

Decorators in Python

The decorator pattern is useful in Python, but there are other options. For example, we may be able to use monkey-patching, which we discussed in Python Object-Oriented Shortcuts, to get a similar effect. Single inheritance, where the "optional" calculations are done in one large method, can be an option, and multiple inheritance should not be written off just because it's not suitable for the specific example seen previously.

In Python, it is very common to use this pattern on functions. As we saw in a previous chapter, functions are objects too. In fact, function decoration is so common that Python provides a special syntax to make it easy to apply such decorators to functions.

For example, we can look at the logging example in a more general way. Instead of logging only send calls on sockets, we may find it helpful to log all calls to certain functions or methods. The following example implements a decorator that does just this:

import time

def log_calls(func):
    def wrapper(*args, **kwargs):
        now = time.time()
        print("Calling {0} with {1} and {2}".format(
            func.__name__, args, kwargs))
        return_value = func(*args, **kwargs)
        print("Executed {0} in {1}ms".format(
            func.__name__, time.time() - now))
        return return_value
    return wrapper

def test1(a, b, c):
    print("\ttest1 called")

def test2(a, b):
    print("\ttest2 called")

def test3(a, b):
    print("\ttest3 called")
    time.sleep(1)

test1 = log_calls(test1)
test2 = log_calls(test2)
test3 = log_calls(test3)

test1(1, 2, 3)
test2(4, b=5)
test3(6, 7)

This decorator function is very similar to the example we explored earlier; in those cases, the decorator took a socket-like object and created a socket-like object. This time, our decorator takes a function object and returns a new function object. This code comprises three separate tasks:

1. A function, log_calls, that accepts another function.
2. This function defines (internally) a new function, named wrapper, that does some extra work before calling the original function.
3. This new function is returned.

Three sample functions demonstrate the decorator in use. The third one includes a sleep call to demonstrate the timing test. We pass each function into the decorator, which returns a new function. We assign this new function to the original variable name, effectively replacing the original function with a decorated one.
Assigning the wrapper back to the original variable name in this way allows us to build up decorated function objects dynamically, just as we did with the socket example; if we don't replace the name, we can even keep decorated and non-decorated versions for different situations.

Often these decorators are general modifications that are applied permanently to different functions. In this situation, Python supports a special syntax to apply the decorator at the time the function is defined. We've already seen this syntax when we discussed the property decorator; now, let's understand how it works. Instead of applying the decorator function after the method definition, we can use the @decorator syntax to do it all at once:

@log_calls
def test1(a, b, c):
    print("\ttest1 called")

The primary benefit of this syntax is that we can easily see that the function has been decorated at the time it is defined. If the decorator is applied later, someone reading the code may miss that the function has been altered at all. Answering a question like, "Why is my program logging function calls to the console?" can become much more difficult! However, the syntax can only be applied to functions we define, since we don't have access to the source code of other modules. If we need to decorate functions that are part of somebody else's third-party library, we have to use the earlier syntax.

There is more to the decorator syntax than we've seen here. We don't have room to cover the advanced topics here, so check the Python reference manual or other tutorials for more information. Decorators can be created as callable objects, not just functions that return functions. Classes can also be decorated; in that case, the decorator returns a new class instead of a new function. Finally, decorators can take arguments to customize them on a per-function basis.

The observer pattern

The observer pattern is useful for state monitoring and event handling situations. This pattern allows a given object to be monitored by an unknown and dynamic group of "observer" objects. Whenever a value on the core object changes, it lets all the observer objects know that a change has occurred, by calling an update() method. Each observer may be responsible for different tasks whenever the core object changes; the core object doesn't know or care what those tasks are, and the observers don't typically know or care what other observers are doing.
Here it is in UML:

[UML diagram: a Core object holds a collection of objects implementing an Observer interface; the concrete Observer1 and Observer2 classes each implement the update notification]

An observer example

The observer pattern might be useful in a redundant backup system. We can write a core object that maintains certain values, and then have one or more observers create serialized copies of that object. These copies might be stored in a database, on a remote host, or in a local file, for example. Let's implement the core object using properties:

class Inventory:
    def __init__(self):
        self.observers = []
        self._product = None
        self._quantity = 0

    def attach(self, observer):
        self.observers.append(observer)

    @property
    def product(self):
        return self._product

    @product.setter
    def product(self, value):
        self._product = value
        self._update_observers()

    @property
    def quantity(self):
        return self._quantity

    @quantity.setter
    def quantity(self, value):
        self._quantity = value
        self._update_observers()

    def _update_observers(self):
        for observer in self.observers:
            observer()

This object has two properties that, when set, call the _update_observers method on itself. All this method does is loop over the available observers and let each one know that something has changed. In this case, we call the observer object directly; the object will have to implement __call__ to process the update. This would not be possible in many object-oriented programming languages, but it's a useful shortcut in Python that can help make our code more readable.

Now let's implement a simple observer object; this one will just print out some state to the console:

class ConsoleObserver:
    def __init__(self, inventory):
        self.inventory = inventory

    def __call__(self):
        print(self.inventory.product)
        print(self.inventory.quantity)

There's nothing terribly exciting here; the observed object is set up in the initializer, and when the observer is called, we do "something". We can test the observer in an interactive console:

>>> i = Inventory()
>>> c = ConsoleObserver(i)
>>> i.attach(c)
>>> i.product = "Widget"
Widget
0
>>> i.quantity = 5
Widget
5

After attaching the observer to the Inventory object, whenever we change one of the two observed properties, the observer is called and its action is invoked. We can even add two different observer instances:

>>> i = Inventory()
>>> c1 = ConsoleObserver(i)
>>> c2 = ConsoleObserver(i)
>>> i.attach(c1)
>>> i.attach(c2)
>>> i.product = "Gadget"
Gadget
0
Gadget
0

This time when we change the product, there are two sets of output, one for each observer. The key idea here is that we can easily add totally different types of observers that back up the data in a file, database, or Internet application at the same time.

The observer pattern detaches the code being observed from the code doing the observing. If we were not using this pattern, we would have had to put code in each of the properties to handle the different cases that might come up: logging to the console, updating a database or file, and so on. The code for each of these tasks would all be mixed in with the observed object. Maintaining it would be a nightmare, and adding new monitoring functionality at a later date would be painful.

The strategy pattern

The strategy pattern is a common demonstration of abstraction in object-oriented programming. The pattern implements different solutions to a single problem, each in a different object. The client code can then choose the most appropriate implementation dynamically at runtime. Typically, different algorithms have different trade-offs; one might be faster than another, but uses a lot more memory, while a third algorithm may be most suitable when multiple CPUs are present or a distributed system is provided. Here is the strategy pattern in UML:

[UML diagram: a User holds a reference to an Abstraction declaring someAction(); Implementation1 and Implementation2 each provide their own someAction()]
The user code connecting to the strategy pattern simply needs to know that it is dealing with the Abstraction interface. The actual implementation chosen performs the same task, but in different ways; either way, the interface is identical.

A strategy example

The canonical example of the strategy pattern is sort routines; over the years, numerous algorithms have been invented for sorting a collection of objects. Quick sort, merge sort, and heap sort are all fast sort algorithms with different features, each useful in its own right, depending on the size and type of the inputs, how out of order they are, and the requirements of the system.

If we have client code that needs to sort a collection, we could pass it to an object with a sort() method. This object may be a QuickSorter or MergeSorter object, but the result will be the same in either case: a sorted list. The strategy used to do the sorting is abstracted from the calling code, making it modular and replaceable.

Of course, in Python, we typically just call the sorted function or list.sort method and trust that it will do the sorting in a near-optimal fashion. So, we really need to look at a better example.

Let's consider a desktop wallpaper manager. When an image is displayed on a desktop background, it can be adjusted to the screen size in different ways. For example, assuming the image is smaller than the screen, it can be tiled across the screen, centered on it, or scaled to fit. There are other, more complicated, strategies that can be used as well, such as scaling to the maximum height or width, combining it with a solid, semi-transparent, or gradient background color, or other manipulations. While we may want to add these strategies later, let's start with the basic ones.

Our strategy objects take two inputs: the image to be displayed, and a tuple of the width and height of the screen. They each return a new image the size of the screen, with the image manipulated to fit according to the given strategy. You'll need to install the pillow module with pip install pillow for this example to work:

from PIL import Image

class TiledStrategy:
    def make_background(self, img_file, desktop_size):
        in_img = Image.open(img_file)
        out_img = Image.new('RGB', desktop_size)
        num_tiles = [
            o // i + 1 for o, i in zip(out_img.size, in_img.size)
        ]
        for x in range(num_tiles[0]):
            for y in range(num_tiles[1]):
                out_img.paste(
                    in_img,
                    (
                        in_img.size[0] * x,
                        in_img.size[1] * y,
                        in_img.size[0] * (x + 1),
                        in_img.size[1] * (y + 1),
                    ),
                )
        return out_img

class CenteredStrategy:
    def make_background(self, img_file, desktop_size):
        in_img = Image.open(img_file)
        out_img = Image.new('RGB', desktop_size)
        left = (out_img.size[0] - in_img.size[0]) // 2
        top = (out_img.size[1] - in_img.size[1]) // 2
        out_img.paste(
            in_img,
            (left, top, left + in_img.size[0], top + in_img.size[1]),
        )
        return out_img

class ScaledStrategy:
    def make_background(self, img_file, desktop_size):
        in_img = Image.open(img_file)
        out_img = in_img.resize(desktop_size)
        return out_img
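Before examining each strategy in detail, here is a minimal usage sketch showing the "replaceable" part in action (the configuration key, filename, and screen size are illustrative, not part of the original example). The calling code picks whichever strategy object fits a hypothetical user preference and uses it through the shared make_background interface:

# hypothetical user preference; any of the three classes will do
strategies = {
    'tiled': TiledStrategy(),
    'centered': CenteredStrategy(),
    'scaled': ScaledStrategy(),
}
strategy = strategies['centered']
background = strategy.make_background('wallpaper.jpg', (1920, 1080))
background.save('background.png')

Swapping strategies is a one-line change to the lookup key; the calling code never needs to know which concrete class it is holding.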
Each of these three strategies uses PIL to perform its task. Individual strategies have a make_background method that accepts the same set of parameters. Once selected, the appropriate strategy can be called to create a correctly sized version of the desktop image. TiledStrategy loops over the number of input images that would fit in the width and height of the output and copies the image into each location, repeatedly. CenteredStrategy figures out how much space needs to be left on the four edges of the image to center it. ScaledStrategy forces the image to the output size (ignoring aspect ratio).

Consider how switching between these options would be implemented without the strategy pattern. We'd need to put all the code inside one great big method and use an awkward if statement to select the expected one. Every time we wanted to add a new strategy, we'd have to make the method even more ungainly.

Strategy in Python

The preceding canonical implementation of the strategy pattern, while very common in most object-oriented libraries, is rarely seen in Python programming. These classes each represent objects that do nothing but provide a single function. We could just as easily call that function __call__ and make the object callable directly. Since there is no other data associated with the object, we need do no more than create a set of top-level functions and pass them around as our strategies instead.

Opponents of design pattern philosophy will therefore say, "because Python has first-class functions, the strategy pattern is unnecessary". In truth, Python's first-class functions allow us to implement the strategy pattern in a more straightforward way. Knowing the pattern exists can still help us choose a correct design for our program, but implement it using a more readable syntax. The strategy pattern, or a top-level function implementation of it, should be used when we need to allow client code or the end user to select from multiple implementations of the same interface.

The state pattern

The state pattern is structurally similar to the strategy pattern, but its intent and purpose are very different. The goal of the state pattern is to represent state-transition systems: systems where it is obvious that an object can be in a specific state, and that certain activities may drive it to a different state.

To make this work, we need a manager, or context class that provides an interface for switching states. Internally, this class contains a pointer to the current state; each state knows what other states it is allowed to be in and will transition to those states depending on actions invoked upon it.

So we have two types of classes: the context class and multiple state classes. The context class maintains the current state, and forwards actions to the state classes. The state classes are typically hidden from any other objects that are calling the context; it acts like a black box that happens to perform state management internally. Here's how it looks in UML:

[UML diagram: a User calls process() on a Context, which holds a current_state reference to a State interface declaring process(context); the concrete State1 and State2 classes each implement process(context)]

A state example

To illustrate the state pattern, let's build an XML parsing tool. The context class will be the parser itself. It will take a string as input and place the tool in an initial parsing state. The various parsing states will eat characters, looking for a specific value, and when that value is found, change to a different state. The goal is to create a tree of node objects for each tag and its contents. To keep things manageable, we'll parse only a subset of XML tags and tag names. We won't be able to handle attributes on tags. It will parse text content of tags, but won't attempt to parse "mixed" content, which has tags inside of text. Here is an example "simplified XML" file that we'll be able to parse:

<book>
    <author>Dusty Phillips</author>
    <publisher>Packt Publishing</publisher>
    <title>Python 3 Object Oriented Programming</title>
    <content>
        <chapter>
            <number>1</number>
            <title>Object Oriented Design</title>
        </chapter>
        <chapter>
            <number>2</number>
            <title>Objects In Python</title>
        </chapter>
    </content>
</book>

Before we look at the states and the parser, let's consider the output of this program. We know we want a tree of Node objects, but what does a Node look like? Well, clearly it'll need to know the name of the tag it is parsing, and since it's a tree, it should probably maintain a pointer to the parent node and a list of the node's children in order. Some nodes have a text value, but not all of them. Let's look at this Node class first:

class Node:
    def __init__(self, tag_name, parent=None):
        self.parent = parent
        self.tag_name = tag_name
        self.children = []
        self.text = ""

    def __str__(self):
        if self.text:
            return self.tag_name + ": " + self.text
        else:
            return self.tag_name

This class sets default attribute values upon initialization. The __str__ method is supplied to help visualize the tree structure when we're finished.

Now, looking at the example document, we need to consider what states our parser can be in. Clearly it's going to start in a state where no nodes have yet been processed. We'll need a state for processing opening tags and closing tags. And when we're inside a tag with text contents, we'll have to process that as a separate state, too. Switching states can be tricky; how do we know if the next node is an opening tag, a closing tag, or a text node? We could put a little logic in each state to work this out, but it actually makes more sense to create a new state whose sole purpose is figuring out which state we'll be switching to next. If we call this transition state ChildNode, we end up with the following states:

- FirstTag
- ChildNode
- OpenTag
- CloseTag
- Text

The FirstTag state will switch to ChildNode, which is responsible for deciding which of the other three states to switch to; when those states are finished, they'll switch back to ChildNode. The following state-transition diagram shows the available state changes:

[State-transition diagram: FirstTag leads to ChildNode; ChildNode leads to OpenTag, Text, and CloseTag; each of those three leads back to ChildNode]

The states are responsible for taking "what's left of the string", processing as much of it as they know what to do with, and then telling the parser to take care of the rest of it. Let's construct the Parser class first:

class Parser:
    def __init__(self, parse_string):
        self.parse_string = parse_string
        self.root = None
        self.current_node = None
        self.state = FirstTag()

    def process(self, remaining_string):
        remaining = self.state.process(remaining_string, self)
        if remaining:
            self.process(remaining)

    def start(self):
        self.process(self.parse_string)

The initializer sets up a few variables on the class that the individual states will access. The parse_string instance variable is the text that we are trying to parse. The root node is the "top" node in the XML structure. The current_node instance variable is the one that we are currently adding children to.
The important feature of this parser is the process method, which accepts the remaining string, and passes it off to the current state. The parser (the self argument) is also passed into the state's process method so that the state can manipulate it. The state is expected to return the remainder of the unparsed string when it is finished processing. The parser then recursively calls the process method on this remaining string to construct the rest of the tree.

Now, let's have a look at the FirstTag state:

class FirstTag:
    def process(self, remaining_string, parser):
        i_start_tag = remaining_string.find('<')
        i_end_tag = remaining_string.find('>')
        tag_name = remaining_string[i_start_tag + 1:i_end_tag]
        root = Node(tag_name)
        parser.root = parser.current_node = root
        parser.state = ChildNode()
        return remaining_string[i_end_tag + 1:]

This state finds the index (the i_ stands for index) of the opening and closing angle brackets on the first tag. You may think this state is unnecessary, since XML requires that there be no text before an opening tag. However, there may be whitespace that needs to be consumed; this is why we search for the opening angle bracket instead of assuming it is the first character in the document. Note that this code is assuming a valid input file. A proper implementation would be rigorously testing for invalid input, and would attempt to recover or display an extremely descriptive error message.

The method extracts the name of the tag and assigns it to the root node of the parser. It also assigns it to current_node, since that's the one we'll be adding children to next. Then comes the important part: the method changes the current state on the parser object to a ChildNode state. It then returns the remainder of the string (after the opening tag) to allow it to be processed.

The ChildNode state, which seems quite complicated, turns out to require nothing but a simple conditional:

class ChildNode:
    def process(self, remaining_string, parser):
        stripped = remaining_string.strip()
        if stripped.startswith("</"):
            parser.state = CloseTag()
        elif stripped.startswith("<"):
            parser.state = OpenTag()
        else:
            parser.state = TextNode()
        return stripped

The strip() call removes whitespace from the string. Then the parser determines if the next item is an opening or closing tag, or a string of text. Depending on which possibility occurs, it sets the parser to a particular state, and then tells it to parse the remainder of the string.

The OpenTag state is similar to the FirstTag state, except that it adds the newly created node to the previous current_node object's children and sets it as the new current_node. It places the processor back in the ChildNode state before continuing:

class OpenTag:
    def process(self, remaining_string, parser):
        i_start_tag = remaining_string.find('<')
        i_end_tag = remaining_string.find('>')
        tag_name = remaining_string[i_start_tag + 1:i_end_tag]
        node = Node(tag_name, parser.current_node)
        parser.current_node.children.append(node)
        parser.current_node = node
        parser.state = ChildNode()
        return remaining_string[i_end_tag + 1:]

The CloseTag state basically does the opposite; it sets the parser's current_node back to the parent node so any further children in the outside tag can be added to it:

class CloseTag:
    def process(self, remaining_string, parser):
        i_start_tag = remaining_string.find('<')
        i_end_tag = remaining_string.find('>')
        assert remaining_string[i_start_tag + 1] == "/"
        tag_name = remaining_string[i_start_tag + 2:i_end_tag]
        assert tag_name == parser.current_node.tag_name
        parser.current_node = parser.current_node.parent
        parser.state = ChildNode()
        return remaining_string[i_end_tag + 1:].strip()

The two assert statements help ensure that the parse strings are consistent. Note that the root node's parent is None; once the document's outermost tag is closed, current_node becomes None, nothing remains to parse, and the processor terminates.

Finally, the TextNode state very simply extracts the text before the next close tag and sets it as a value on the current node:

class TextNode:
    def process(self, remaining_string, parser):
        i_start_tag = remaining_string.find('<')
        text = remaining_string[:i_start_tag]
        parser.current_node.text = text
        parser.state = ChildNode()
        return remaining_string[i_start_tag:]

Now we just have to set up the initial state on the parser object we created. The initial state is a FirstTag object, so just add the following to the __init__ method:

        self.state = FirstTag()

To test the class, let's add a main script that opens a file from the command line, parses it, and prints the nodes:

if __name__ == "__main__":
    import sys
    with open(sys.argv[1]) as file:
        contents = file.read()
        p = Parser(contents)
        p.start()

        nodes = [p.root]
        while nodes:
            node = nodes.pop(0)
            print(node)
            nodes = node.children + nodes

This code opens the file, loads the contents, and parses the result. Then it prints each node and its children in order. The __str__ method we originally added on the Node class takes care of formatting the nodes for printing. If we run the script on the earlier example, it outputs the tree as follows:

book
author: Dusty Phillips
publisher: Packt Publishing
title: Python 3 Object Oriented Programming
content
chapter
number: 1
title: Object Oriented Design
chapter
number: 2
title: Objects In Python

Comparing this to the original simplified XML document tells us the parser is working.

State versus strategy

The state pattern looks very similar to the strategy pattern; indeed, the UML diagrams for the two are identical. The implementation, too, is identical; we could even have written our states as first-class functions instead of wrapping them in objects, as was suggested for strategy.

While the two patterns have identical structures, they solve completely different problems. The strategy pattern is used to choose an algorithm at runtime; generally, only one of those algorithms is going to be chosen for a particular use case. The state pattern, on the other hand, is designed to allow switching between different states dynamically, as some process evolves. In code, the primary difference is that the strategy pattern is not typically aware of other strategy objects. In the state pattern, either the state or the context needs to know which other states it can switch to.

State transition as coroutines

The state pattern is the canonical object-oriented solution to state-transition problems. However, the syntax for this pattern is rather verbose. You can get a similar effect by constructing your objects as coroutines. Remember the regular expression log file parser we built in The Iterator Pattern? That was a state-transition problem in disguise. The main difference between that implementation and one that defines all the objects (or functions) used in the state pattern is that the coroutine solution allows us to encode more of the boilerplate in language constructs. There are two implementations, but neither one is inherently better than the other. You may find that coroutines are more readable, for a given definition of "readable" (you have to understand the syntax of coroutines first!).

The singleton pattern

The singleton pattern is one of the most controversial patterns; many have accused it of being an "anti-pattern", a pattern that should be avoided, not promoted. In Python, if someone is using the singleton pattern, they're almost certainly doing something wrong, probably because they're coming from a more restrictive programming language.

So why discuss it at all? Singleton is one of the most famous of all design patterns. It is useful in overly object-oriented languages, and is a vital part of traditional object-oriented programming. More relevantly, the idea behind singleton is useful, even if we implement that idea in a totally different way in Python.

The basic idea behind the singleton pattern is to allow exactly one instance of a certain object to exist. Typically, this object is a sort of manager class like those we discussed in When to Use Object-Oriented Programming. Such objects often need to be referenced by a wide variety of other objects, and passing references to the manager object around to the methods and constructors that need them can make code hard to read. Instead, when a singleton is used, the separate objects request the single instance of the manager object from the class, so a reference to it need not be passed around. The UML diagram doesn't fully describe it, but here it is for completeness:

[UML diagram: a Singleton class with a static instance attribute and a static get_instance() method]

In most programming environments, singletons are enforced by making the constructor private (so no one can create additional instances of it), and then providing a static method to retrieve the single instance. This method creates a new instance the first time it is called, and then returns that same instance each time it is called again.

Singleton implementation

Python doesn't have private constructors, but for this purpose, it has something even better. We can use the __new__ class method to ensure that only one instance is ever created:

class OneOnly:
    _singleton = None

    def __new__(cls, *args, **kwargs):
        if not cls._singleton:
            cls._singleton = super(OneOnly, cls).__new__(
                cls, *args, **kwargs)
        return cls._singleton

When __new__ is called, it normally constructs a new instance of that class. When we override it, we first check if our singleton instance has been created; if not, we create it using a super call. Thus, whenever we call the constructor on OneOnly, we always get the exact same instance:

>>> o1 = OneOnly()
>>> o2 = OneOnly()
>>> o1 == o2
True
>>> o1
<__main__.OneOnly object at 0xb71c008c>
>>> o2
<__main__.OneOnly object at 0xb71c008c>

(The exact address will differ on your machine.) The two objects are equal and located at the same address; thus, they are the same object. This particular implementation isn't very transparent, since it's not obvious that a singleton object has been created. Whenever we call a constructor, we expect a new instance of that object; in this case, that contract is violated. Perhaps, good docstrings on the class could alleviate this problem if we really think we need a singleton.

But we don't need it. Python coders frown on forcing the users of their code into a specific mindset. We may think only one instance of a class will ever be required, but other programmers may have different ideas. Singletons can interfere with distributed computing, parallel programming, and automated testing, for example. In all those cases, it can be very useful to have multiple or alternative instances of a specific object, even though "normal" operation may never require one.

Module variables can mimic singletons

Normally, in Python, the singleton pattern can be sufficiently mimicked using module-level variables. It's not as "safe" as a singleton in that people could reassign those variables at any time, but as with the private variables we discussed in Objects in Python, this is acceptable in Python. If someone has a valid reason to change those variables, why should we stop them? It also doesn't stop people from instantiating multiple instances of the object, but again, if they have a valid reason to do so, why interfere? Ideally, we should give them a mechanism to get access to the "default singleton" value, while also allowing them to create other instances if they need them. While technically not a singleton at all, it provides the most Pythonic mechanism for singleton-like behavior.
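As a minimal sketch of that idea (the module and class names here are hypothetical, not from the book's example), a module can expose both the class and a ready-made default instance:

# config.py -- hypothetical module
class Configuration:
    def __init__(self, path="settings.ini"):
        self.path = path

# the "default singleton" value, created once at import time
default_configuration = Configuration()

Client code can then do from config import default_configuration for the common case, yet remains free to construct Configuration("other.ini") whenever it genuinely needs a separate instance.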
To use module-level variables instead of a singleton, we instantiate an instance of the class after we've defined it. We can improve our state pattern to use singletons. Instead of creating a new object every time we change states, we can create a module-level variable that is always accessible:

class FirstTag:
    def process(self, remaining_string, parser):
        i_start_tag = remaining_string.find('<')
        i_end_tag = remaining_string.find('>')
        tag_name = remaining_string[i_start_tag + 1:i_end_tag]
        root = Node(tag_name)
        parser.root = parser.current_node = root
        parser.state = child_node
        return remaining_string[i_end_tag + 1:]

class ChildNode:
    def process(self, remaining_string, parser):
        stripped = remaining_string.strip()
        if stripped.startswith("</"):
            parser.state = close_tag
        elif stripped.startswith("<"):
            parser.state = open_tag
        else:
            parser.state = text_node
        return stripped

class OpenTag:
    def process(self, remaining_string, parser):
        i_start_tag = remaining_string.find('<')
        i_end_tag = remaining_string.find('>')
        tag_name = remaining_string[i_start_tag + 1:i_end_tag]
        node = Node(tag_name, parser.current_node)
        parser.current_node.children.append(node)
        parser.current_node = node
        parser.state = child_node
        return remaining_string[i_end_tag + 1:]

class TextNode:
    def process(self, remaining_string, parser):
        i_start_tag = remaining_string.find('<')
        text = remaining_string[:i_start_tag]
        parser.current_node.text = text
        parser.state = child_node
        return remaining_string[i_start_tag:]

class CloseTag:
    def process(self, remaining_string, parser):
        i_start_tag = remaining_string.find('<')
        i_end_tag = remaining_string.find('>')
        assert remaining_string[i_start_tag + 1] == "/"
        tag_name = remaining_string[i_start_tag + 2:i_end_tag]
        assert tag_name == parser.current_node.tag_name
        parser.current_node = parser.current_node.parent
        parser.state = child_node
        return remaining_string[i_end_tag + 1:].strip()

first_tag = FirstTag()
child_node = ChildNode()
text_node = TextNode()
open_tag = OpenTag()
close_tag = CloseTag()

All we've done is create instances of the various state classes that can be reused. Notice how we can access these module variables inside the classes, even before the variables have been defined? This is because the code inside the classes is not executed until the method is called, and by this point, the entire module will have been defined.

The difference in this example is that instead of wasting memory creating a bunch of new instances that must be garbage collected, we are reusing a single state object for each state. Even if multiple parsers are running at once, only these state objects need to be used.

When we originally created the state-based parser, you may have wondered why we didn't pass the parser object to __init__ on each individual state, instead of passing it into the process method as we did. The state could then have been referenced as self.parser. This is a perfectly valid implementation of the state pattern, but it would not have allowed leveraging the singleton pattern. If the state objects maintain a reference to the parser, then they cannot be used simultaneously to reference other parsers.

Remember, these are two different patterns with different purposes; the fact that singleton's purpose may be useful for implementing the state pattern does not mean the two patterns are related.
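To see why that matters, here is a small sketch (not from the original text, and assuming Parser.__init__ has been updated to use the module-level first_tag instead of constructing FirstTag()) showing two parsers safely sharing the very same state objects; because each parser is passed into process rather than stored on the state, the shared states stay stateless:

# assumes Parser.__init__ now reads: self.state = first_tag
p1 = Parser("<a><b>text</b></a>")
p2 = Parser("<x><y>other</y></x>")
assert p1.state is p2.state      # one shared FirstTag instance
p1.start()
p2.start()
print(p1.root, p2.root)          # two independent trees: a and x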
The template pattern

The template pattern is useful for removing duplicate code; it's an implementation to support the Don't Repeat Yourself principle we discussed in When to Use Object-Oriented Programming. It is designed for situations where we have several different tasks to accomplish that have some, but not all, steps in common. The common steps are implemented in a base class, and the distinct steps are overridden in subclasses to provide custom behavior. In some ways, it's like a generalized strategy pattern, except similar sections of the algorithms are shared using a base class. Here it is in the UML format:

[UML diagram: a BaseClass with do_process() plus step1() through step4(); the Task1 and Task2 subclasses each override a different subset of the steps]

A template example

Let's create a car sales reporter as an example. We can store records of sales in an SQLite database table. SQLite is a simple file-based database engine that allows us to store records using SQL syntax. Python includes SQLite in its standard library, so there are no extra modules required.

We have two common tasks we need to perform:

- Select all sales of new vehicles and output them to the screen in a comma-delimited format.
- Output a comma-delimited list of all salespeople with their gross sales and save it to a file that can be imported to a spreadsheet.

These seem like quite different tasks, but they have some common features. In both cases, we need to perform the following steps:

1. Connect to the database.
2. Construct a query for new vehicles or gross sales.
3. Issue the query.
4. Format the results into a comma-delimited string.
5. Output the data to a file or e-mail.

The query construction and output steps are different for the two tasks, but the remaining steps are identical. We can use the template pattern to put the common steps in a base class, and the varying steps in two subclasses.

Before we start, let's create a database and put some sample data in it, using a few lines of SQL:

import sqlite3

conn = sqlite3.connect("sales.db")

conn.execute("CREATE TABLE Sales (salesperson text, "
             "amt currency, year integer, model text, new boolean)")
conn.execute("INSERT INTO Sales VALUES"
             "('Tim', 16000, 2010, 'Honda Fit', 'true')")
conn.execute("INSERT INTO Sales VALUES"
             "('Tim', 9000, 2006, 'Ford Focus', 'false')")
conn.execute("INSERT INTO Sales VALUES"
             "('Gayle', 8000, 2004, 'Dodge Neon', 'false')")
conn.execute("INSERT INTO Sales VALUES"
             "('Gayle', 28000, 2009, 'Ford Mustang', 'true')")
conn.execute("INSERT INTO Sales VALUES"
             "('Gayle', 50000, 2010, 'Lincoln Navigator', 'true')")
conn.execute("INSERT INTO Sales VALUES"
             "('Don', 20000, 2008, 'Toyota Prius', 'false')")
conn.commit()
conn.close()

Hopefully you can see what's going on here even if you don't know SQL; we've created a table to hold the data, and used six insert statements to add sales records. The data is stored in a file named sales.db. Now we have a sample we can work with in developing our template pattern.

Since we've already outlined the steps that the template has to perform, we can start by defining the base class that contains the steps. Each step gets its own method (to make it easy to selectively override any one step), and we have one more managerial method that calls the steps in turn. Without any method content, here's how it might look:

class QueryTemplate:
    def connect(self):
        pass

    def construct_query(self):
        pass

    def do_query(self):
        pass

    def format_results(self):
        pass

    def output_results(self):
        pass

    def process_format(self):
        self.connect()
        self.construct_query()
        self.do_query()
        self.format_results()
        self.output_results()

The process_format method is the primary method to be called by an outside client. It ensures each step is executed in order, but it does not care if that step is implemented in this class or in a subclass. For our examples, we know that three methods are going to be identical between our two classes:

import sqlite3

class QueryTemplate:
    def connect(self):
        self.conn = sqlite3.connect("sales.db")

    def construct_query(self):
        raise NotImplementedError()

    def do_query(self):
        results = self.conn.execute(self.query)
        self.results = results.fetchall()

    def format_results(self):
        output = []
        for row in self.results:
            row = [str(i) for i in row]
            output.append(", ".join(row))
        self.formatted_results = "\n".join(output)

    def output_results(self):
        raise NotImplementedError()

To help with implementing subclasses, the two methods that are not specified raise NotImplementedError. This is a common way to specify abstract interfaces in Python when abstract base classes seem too heavyweight. The methods could have empty implementations (with pass), or could be fully unspecified. Raising NotImplementedError, however, helps the programmer understand that the class is meant to be subclassed and these methods overridden; empty methods, or methods that do not exist, are harder to identify as needing to be implemented, and harder to debug if we forget to implement them.

Now we have a template class that takes care of the boring details, but is flexible enough to allow the execution and formatting of a wide variety of queries. The best part is, if we ever want to change our database engine from SQLite to another database engine (such as py-postgresql), we only have to do it here, in this template class, and we don't have to touch the two (or two hundred) subclasses we might have written.

Let's have a look at the concrete classes now:

import datetime

class NewVehiclesQuery(QueryTemplate):
    def construct_query(self):
        self.query = "select * from Sales where new='true'"

    def output_results(self):
        print(self.formatted_results)

class UserGrossQuery(QueryTemplate):
    def construct_query(self):
        self.query = ("select salesperson, sum(amt) " +
                      "from Sales group by salesperson")

    def output_results(self):
        filename = "gross_sales_{0}".format(
            datetime.date.today().strftime("%Y%m%d"))
        with open(filename, 'w') as outfile:
            outfile.write(self.formatted_results)

These two classes are actually pretty short, considering what they're doing: connecting to a database, executing a query, formatting the results, and outputting them. The superclass takes care of the repetitive work, but lets us easily specify those steps that vary between tasks. Further, we can also easily change steps that are provided in the base class. For example, if we wanted to output something other than a comma-delimited string (for example, an HTML report to be uploaded to a website), we can still override format_results.
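Running the two reports is then just a matter of instantiating a subclass and calling the shared managerial method; a quick usage sketch (not shown in the original text, but using only the classes defined above):

# print new-vehicle sales to the screen
NewVehiclesQuery().process_format()

# write each salesperson's gross sales to a dated file
UserGrossQuery().process_format()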
Exercises

While writing this chapter, I discovered that it can be very difficult, and extremely educational, to come up with good examples where specific design patterns should be used. Instead of going over current or old projects to see where you can apply these patterns, as I've suggested in previous chapters, think about the patterns and different situations where they might come up. Try to think outside your own experiences. If your current projects are in the banking business, consider how you'd apply these design patterns in a retail or point-of-sale application. If you normally write web applications, think about using design patterns while writing a compiler.

Look at the decorator pattern and come up with some good examples of when to apply it. Focus on the pattern itself, not the Python syntax we discussed; it's a bit more general than the actual pattern. The special syntax for decorators is, however, something you may want to look for places to apply in existing projects too.

What are some good areas to use the observer pattern? Why? Think about not only how you'd apply the pattern, but how you would implement the same task without using observer. What do you gain, or lose, by choosing to use it?

Consider the difference between the strategy and state patterns. Implementation-wise, they look very similar, yet they have different purposes. Can you think of cases where the patterns could be interchanged? Would it be reasonable to redesign a state-based system to use strategy instead, or vice versa? How different would the design actually be?

The template pattern is such an obvious application of inheritance to reduce duplicate code that you may have used it before, without knowing its name. Try to think of at least half a dozen different scenarios where it would be useful. If you can do that, you'll be finding places for it in your daily coding all the time.

Summary

This chapter discussed several common design patterns in detail, with examples, UML diagrams, and a discussion of the differences between Python and statically typed object-oriented languages. The decorator pattern is often implemented using Python's more generic decorator syntax. The observer pattern is a useful way to decouple events from actions taken on those events. The strategy pattern allows different algorithms to be chosen to accomplish the same task. The state pattern looks similar, but is used instead to represent systems that can move between different states using well-defined actions. The singleton pattern, popular in some statically typed languages, is almost always an anti-pattern in Python.

In the next chapter, we'll wrap up our discussion of design patterns.

In this chapter we will be introduced to several more design patterns. Once again, we'll cover the canonical examples as well as any common alternative implementations in Python. We'll be discussing:

- The adapter pattern
- The facade pattern
- Lazy initialization and the flyweight pattern
- The command pattern
- The abstract factory pattern
- The composition pattern

The adapter pattern

Unlike most of the patterns we reviewed in Strings and Serialization, the adapter pattern is designed to interact with existing code. We would not design a brand new set of objects that implement the adapter pattern. Adapters are used to allow two pre-existing objects to work together, even if their interfaces are not compatible. Like the display adapters that allow VGA projectors to be plugged into HDMI ports, an adapter object sits between two different interfaces, translating between them on the fly. The adapter object's sole purpose is to perform this translation job. Adapting may entail a variety of tasks, such as converting arguments to a different format, rearranging the order of arguments, calling a differently named method, or supplying default arguments.
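As a minimal sketch of the idea (the printer classes here are hypothetical, not part of the examples that follow), an adapter can translate a differently named method and reorder its arguments behind the interface the client expects:

class OldPrinter:
    def print_doc(self, text, copies):
        print(copies * (text + "\n"), end="")

class PrinterAdapter:
    """Exposes a print(copies, text) interface over OldPrinter."""
    def __init__(self, printer):
        self.printer = printer

    def print(self, copies, text):
        # reorder the arguments and delegate to the adapted object
        self.printer.print_doc(text, copies)

PrinterAdapter(OldPrinter()).print(2, "hello")

All of the translation logic lives in the adapter; the client code and the pre-existing class are both left untouched.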
In structure, the adapter pattern is similar to a simplified decorator pattern. Decorators typically provide the same interface that they replace, whereas adapters map between two different interfaces. Here it is in UML form:

[UML diagram: an Interface1 with make_action(some, arguments) is implemented by an Adapter, which holds a reference to an Interface2 providing different_action(other, arguments)]

Here, Interface1 is expecting to call a method called make_action(some, arguments). We already have this perfect Interface2 class that does everything we want (and to avoid duplication, we don't want to rewrite it!), but it provides a method called different_action(other, arguments) instead. The Adapter class implements the make_action interface and maps the arguments to the existing interface.

The advantage here is that the code that maps from one interface to another is all in one place. The alternative would be really ugly; we'd have to perform the translation in multiple places whenever we need to access this code.

For example, imagine we have the following preexisting class, which takes a string date in the format "YYYY-MM-DD" and calculates a person's age on that day:

class AgeCalculator:
    def __init__(self, birthday):
        self.year, self.month, self.day = (
            int(x) for x in birthday.split('-'))

    def calculate_age(self, date):
        year, month, day = (
            int(x) for x in date.split('-'))
        age = year - self.year
        if (month, day) < (self.month, self.day):
            age -= 1
        return age

This is a pretty simple class that does what it's supposed to do. But we have to wonder what the programmer was thinking, using a specifically formatted string instead of Python's incredibly useful built-in datetime library. As conscientious programmers who reuse code whenever possible, most of the programs we write will interact with datetime objects, not strings.

We have several options to address this scenario. We could rewrite the class to accept datetime objects, which would probably be more accurate anyway. But if this class had been provided by a third party and we don't know or can't change its internal structure, we need to try something else. We could use the class as it is, and whenever we want to calculate the age on a datetime.date object, we could call datetime.date.strftime('%Y-%m-%d') to convert it to the proper format. But that conversion would be happening in a lot of places, and worse, if we mistyped the %m as %M, it would give us the current minute instead of the entered month. Imagine if you wrote that in a dozen different places only to have to go back and change it when you realized your mistake. It's not maintainable code, and it breaks the DRY principle.

Instead, we can write an adapter that allows a normal date to be plugged into a normal AgeCalculator class:

import datetime

class DateAgeAdapter:
    def _str_date(self, date):
        return date.strftime("%Y-%m-%d")

    def __init__(self, birthday):
        birthday = self._str_date(birthday)
        self.calculator = AgeCalculator(birthday)

    def get_age(self, date):
        date = self._str_date(date)
        return self.calculator.calculate_age(date)

This adapter converts datetime.date and datetime.time (they have the same interface to strftime) into a string that our original AgeCalculator can use. Now we can use the original code with our new interface. I changed the method signature to get_age to demonstrate that the calling interface may also be looking for a different method name, not just a different type of argument.
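A quick usage sketch (the birthday here is just an illustrative value): the adapter accepts real date objects, while AgeCalculator keeps seeing the strings it was written for:

import datetime

adapter = DateAgeAdapter(datetime.date(1975, 6, 14))
print(adapter.get_age(datetime.date.today()))  # the person's current age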
Creating a class as an adapter is the usual way to implement this pattern, but, as usual, there are other ways to do it in Python. Inheritance and multiple inheritance can be used to add functionality to a class. For example, we could add an adapter on the date class so that it works with the original AgeCalculator class:

import datetime

class AgeableDate(datetime.date):
    def split(self, char):
        return self.year, self.month, self.day

It's code like this that makes one wonder if Python should even be legal. We have added a split method to our subclass that takes a single argument (which we ignore) and returns a tuple of year, month, and day. This works flawlessly with the original AgeCalculator class because the code calls split on a specially formatted string, and split, in that case, returns a tuple of year, month, and day. The AgeCalculator code only cares if split exists and returns acceptable values; it doesn't care if we really passed in a string. It really works:

>>> bd = AgeableDate(1975, 6, 14)
>>> today = AgeableDate.today()
>>> a = AgeCalculator(bd)
>>> a.calculate_age(today)
40

It works. But it's a stupid idea. In this particular instance, such an adapter would be hard to maintain. We'd soon forget why we needed to add a split method to a date class. The method name is ambiguous. That can be the nature of adapters, but creating an adapter explicitly instead of using inheritance usually clarifies its purpose.

Instead of inheritance, we can sometimes also use monkey-patching to add a method to an existing class. It won't work with the datetime object, as it doesn't allow attributes to be added at runtime, but in normal classes, we can just add a new method that provides the adapted interface that is required by calling code. Alternatively, we could extend or monkey-patch the AgeCalculator itself to replace the calculate_age method with something more amenable to our needs.

Finally, it is often possible to use a function as an adapter; this doesn't obviously fit the actual design of the adapter pattern, but if we recall that functions are essentially objects with a __call__ method, it becomes an obvious adaptation.
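For example, here is a small sketch of a function used as an adapter (the helper name is hypothetical, not from the original text): a plain function can hold the conversion in one place so any code with a date object can call it directly:

import datetime

def age_from_date(calculator, date):
    # adapt a datetime.date to AgeCalculator's string interface
    return calculator.calculate_age(date.strftime("%Y-%m-%d"))

calc = AgeCalculator("1975-06-14")
print(age_from_date(calc, datetime.date.today()))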
The facade pattern

The facade pattern is designed to provide a simple interface to a complex system of components. For complex tasks, we may need to interact with these objects directly, but there is often a "typical" usage for the system for which these complicated interactions aren't necessary. The facade pattern allows us to define a new object that encapsulates this typical usage of the system. Any time we want access to common functionality, we can use the single object's simplified interface. If another part of the project needs access to more complicated functionality, it is still able to interact with the system directly. The UML diagram for the facade pattern is really dependent on the subsystem, but in a cloudy way, it looks like this:

[UML diagram: a Facade exposing simple_task() and other_simple_task(), sitting in front of a big system made up of several complex components]

A facade is, in many ways, like an adapter. The primary difference is that the facade is trying to abstract a simpler interface out of a complex one, while the adapter is only trying to map one existing interface to another.

Let's write a simple facade for an e-mail application. The low-level library for sending e-mail in Python, as we saw in Python Object-Oriented Shortcuts, is quite complicated. The two libraries for receiving messages are even worse. It would be nice to have a simple class that allows us to send a single e-mail, and list the e-mails currently in the inbox on an IMAP or POP3 connection. To keep our example short, we'll stick with IMAP and SMTP: two totally different subsystems that happen to deal with e-mail. Our facade performs only two tasks: sending an e-mail to a specific address, and checking the inbox on an IMAP connection. It makes some common assumptions about the connection, such as the host for both SMTP and IMAP being at the same address, the username and password for both being the same, and both using standard ports. This covers the case for many e-mail servers, but if a programmer needs more flexibility, they can always bypass the facade and access the two subsystems directly.

The class is initialized with the hostname of the e-mail server, a username, and a password to log in:

import smtplib
import imaplib

class EmailFacade:
    def __init__(self, host, username, password):
        self.host = host
        self.username = username
        self.password = password

The send_email method formats the e-mail address and message, and sends it using smtplib. This isn't a complicated task, but it requires quite a bit of fiddling to massage the "natural" input parameters that are passed into the facade into the correct format to enable smtplib to send the message:

    def send_email(self, to_email, subject, message):
        if not "@" in self.username:
            from_email = "{0}@{1}".format(
                self.username, self.host)
        else:
            from_email = self.username
        message = ("From: {0}\r\n"
                   "To: {1}\r\n"
                   "Subject: {2}\r\n\r\n{3}").format(
                       from_email, to_email, subject, message)

        smtp = smtplib.SMTP(self.host)
        smtp.login(self.username, self.password)
        smtp.sendmail(from_email, [to_email], message)

The if statement at the beginning of the method is catching whether or not the username is the entire "from" e-mail address or just the part on the left side of the @ symbol; different hosts treat the login details differently.

Finally, the code to get the messages currently in the inbox is a ruddy mess; the IMAP protocol is painfully over-engineered, and the imaplib standard library is only a thin layer over the protocol:

    def get_inbox(self):
        mailbox = imaplib.IMAP4(self.host)
        mailbox.login(bytes(self.username, 'utf8'),
                      bytes(self.password, 'utf8'))
        mailbox.select()
        x, data = mailbox.search(None, 'ALL')
        messages = []
        for num in data[0].split():
            x, message = mailbox.fetch(num, '(RFC822)')
            messages.append(message[0][1])
        return messages

Now, if we add all this together, we have a simple facade class that can send and receive messages in a fairly straightforward manner, much simpler than if we had to interact with these complex libraries directly.

Although it is rarely named in the Python community, the facade pattern is an integral part of the Python ecosystem. Because Python emphasizes language readability, both the language and its libraries tend to provide easy-to-comprehend interfaces to complicated tasks. For example, for loops, list comprehensions, and generators are all facades into a more complicated iterator protocol. The defaultdict implementation is a facade that abstracts away annoying corner cases when a key doesn't exist in a dictionary. The third-party requests library is a powerful facade over less readable libraries for HTTP requests.

The flyweight pattern

The flyweight pattern is a memory optimization pattern. Novice Python programmers tend to ignore memory optimization, assuming the built-in garbage collector will take care of them. This is often perfectly acceptable, but when developing larger applications with many related objects, paying attention to memory concerns can have a huge payoff.

The flyweight pattern basically ensures that objects that share a state can use the same memory for that shared state. It is often implemented only after a program has demonstrated memory problems. It may make sense to design an optimal configuration from the beginning in some situations, but bear in mind that premature optimization is the most effective way to create a program that is too complicated to maintain.
3,565 | let' have look at the uml diagram for the flyweight patternflyweightfactory +getflyweight(flyweight specificstate +shared_state +shared_action(specific_stateeach flyweight has no specific stateany time it needs to perform an operation on specificstatethat state needs to be passed into the flyweight by the calling code traditionallythe factory that returns flyweight is separate objectits purpose is to return flyweight for given key identifying that flyweight it works like the singleton pattern we discussed in python design patterns iif the flyweight existswe return itotherwisewe create new one in many languagesthe factory is implementednot as separate objectbut as static method on the flyweight class itself think of an inventory system for car sales each individual car has specific serial number and is specific color but most of the details about that car are the same for all cars of particular model for examplethe honda fit dx model is bare-bones car with few features the lx model has /ctiltcruiseand power windows and locks the sport model has fancy wheelsa usb chargerand spoiler without the flyweight patterneach individual car object would have to store long list of which features it did and did not have considering the number of cars honda sells in yearthis would add up to huge amount of wasted memory using the flyweight patternwe can instead have shared objects for the list of features associated with modeland then simply reference that modelalong with serial number and colorfor individual vehicles in pythonthe flyweight factory is often implemented using that funky __new__ constructorsimilar to what we did with the singleton pattern unlike singletonwhich only needs to return one instance of the classwe need to be able to return different instances depending on the keys we could store the items in dictionary and look them up based on the key this solution is problematichoweverbecause the item will remain in memory as long as it is in the dictionary if we sold out of lx model fitsthe fit flyweight is no longer necessaryyet it will still be in the dictionary we couldof courseclean this up whenever we sell carbut isn' that what garbage collector is for |
3,566 | we can solve this by taking advantage of python' weakref module this module provides weakvaluedictionary objectwhich basically allows us to store items in dictionary without the garbage collector caring about them if value is in weak referenced dictionary and there are no other references to that object stored anywhere in the application (that iswe sold out of lx models)the garbage collector will eventually clean up for us let' build the factory for our car flyweights firstimport weakref class carmodel_models weakref weakvaluedictionary(def __new__(clsmodel_name*args**kwargs)model cls _models get(model_nameif not modelmodel super(__new__(clscls _models[model_namemodel return model basicallywhenever we construct new flyweight with given namewe first look up that name in the weak referenced dictionaryif it existswe return that modelif notwe create new one either waywe know the __init__ method on the flyweight will be called every timeregardless of whether it is new or existing object our __init__ method can therefore look like thisdef __init__(selfmodel_nameair=falsetilt=falsecruise_control=falsepower_locks=falsealloy_wheels=falseusb_charger=false)if not hasattr(self"initted")self model_name model_name self air air self tilt tilt self cruise_control cruise_control self power_locks power_locks self alloy_wheels alloy_wheels self usb_charger usb_charger self initted=true the if statement ensures that we only initialize the object the first time __init__ is called this means we can call the factory later with just the model name and get the same flyweight object back howeverbecause the flyweight will be garbagecollected if no external references to it existwe have to be careful not to accidentally create new flyweight with null values |
3,567 | let' add method to our flyweight that hypothetically looks up serial number on specific model of vehicleand determines if it has been involved in any accidents this method needs access to the car' serial numberwhich varies from car to carit cannot be stored with the flyweight thereforethis data must be passed into the method by the calling codedef check_serial(selfserial_number)print("sorrywe are unable to check "the serial number { on the { "at this timeformatserial_numberself model_name)we can define class that stores the additional informationas well as reference to the flyweightclass cardef __init__(selfmodelcolorserial)self model model self color color self serial serial def check_serial(self)return self model check_serial(self serialwe can also keep track of the available models as well as the individual cars on the lotdx carmodel("fit dx"lx carmodel("fit lx"air=truecruise_control=truepower_locks=truetilt=truecar car(dx"blue"" "car car(dx"black"" "car car(lx"red"" "nowlet' demonstrate the weak referencing at workid(lx del lx del car import gc gc collect( |
3,568 |

    >>> lx = CarModel("FIT LX", air=True, cruise_control=True,
    ... power_locks=True, tilt=True)
    >>> id(lx)
    3071576140
    >>> lx = CarModel("FIT LX")
    >>> id(lx)
    3071576140
    >>> lx.air
    True

The id function tells us the unique identifier for an object (the values above are representative; they will differ on each run). When we call it a second time, after deleting all references to the LX model and forcing garbage collection, we see that the id has changed. The value in the CarModel __new__ factory dictionary was deleted and a fresh one created. If we then try to construct a second CarModel instance, however, it returns the same object (the ids are the same), and, even though we did not supply any arguments in the second call, the air variable is still set to True. This means the object was not initialized the second time, just as we designed.

Obviously, using the flyweight pattern can be more complicated than just storing features on a single car class. When should we choose to use it? The flyweight pattern is designed for conserving memory; if we have hundreds of thousands of similar objects, combining similar properties into a flyweight can have an enormous impact on memory consumption. It is common for programming solutions that optimize CPU, memory, or disk space to result in more complicated code than their unoptimized brethren. It is therefore important to weigh up the tradeoffs when deciding between code maintainability and optimization. When choosing optimization, try to use patterns such as flyweight to ensure that the complexity introduced by optimization is confined to a single (well documented) section of the code.

The command pattern

The command pattern adds a level of abstraction between actions that must be done and the object that invokes those actions, normally at a later time. In the command pattern, client code creates a Command object that can be executed at a later date. This object knows about a receiver object that manages its own internal state when the command is executed on it. The Command object implements a specific interface (typically it has an execute or do_action method), and also keeps track of any arguments required to perform the action. Finally, one or more Invoker objects execute the command at the correct time.
3,569 | Here's the UML diagram:

[UML: the Client creates a Command implementing a CommandInterface with execute(); an Invoker holds the command and calls execute(), which acts on a Receiver.]

A common example of the command pattern is actions on a graphical window. Often, an action can be invoked by a menu item on the menu bar, a keyboard shortcut, a toolbar icon, or a context menu. These are all examples of Invoker objects. The actions that actually occur, such as Exit, Save, or Copy, are implementations of CommandInterface. A GUI window to receive exit, a document to receive save, and a ClipboardManager to receive copy commands, are all examples of possible Receivers.

Let's implement a simple command pattern that provides commands for Save and Exit actions. We'll start with some modest receiver classes:

    import sys

    class Window:
        def exit(self):
            sys.exit(0)

    class Document:
        def __init__(self, filename):
            self.filename = filename
            self.contents = "This file cannot be modified"

        def save(self):
            with open(self.filename, 'w') as file:
                file.write(self.contents)

These mock classes model objects that would likely be doing a lot more in a working environment. The window would need to handle mouse movement and keyboard events, and the document would need to handle character insertion, deletion, and selection. But for our example, these two classes will do what we need.
3,570 | Now let's define some invoker classes. These will model toolbar, menu, and keyboard events that can happen; again, they aren't actually hooked up to anything, but we can see how they are decoupled from the command, receiver, and client code:

    class ToolbarButton:
        def __init__(self, name, iconname):
            self.name = name
            self.iconname = iconname

        def click(self):
            self.command.execute()

    class MenuItem:
        def __init__(self, menu_name, menuitem_name):
            self.menu = menu_name
            self.item = menuitem_name

        def click(self):
            self.command.execute()

    class KeyboardShortcut:
        def __init__(self, key, modifier):
            self.key = key
            self.modifier = modifier

        def keypress(self):
            self.command.execute()

Notice how the various action methods each call the execute method on their respective commands. This code doesn't show the command attribute being set on each object. They could be passed into the __init__ function, but because they may be changed (for example, with a customizable keybinding editor), it makes more sense to set the attributes on the objects afterwards.

Now, let's hook up the commands themselves:

    class SaveCommand:
        def __init__(self, document):
            self.document = document

        def execute(self):
            self.document.save()

    class ExitCommand:
3,571 |

        def __init__(self, window):
            self.window = window

        def execute(self):
            self.window.exit()

These commands are straightforward; they demonstrate the basic pattern, but it is important to note that we can store state and other information with the command, if necessary. For example, if we had a command to insert a character, we could maintain state for the character currently being inserted.

Now all we have to do is hook up some client and test code to make the commands work. For basic testing, we can just include this at the end of the script:

    window = Window()
    document = Document("a_document.txt")
    save = SaveCommand(document)
    exit = ExitCommand(window)

    save_button = ToolbarButton('save', 'save.png')
    save_button.command = save
    save_keystroke = KeyboardShortcut("s", "ctrl")
    save_keystroke.command = save
    exit_menu = MenuItem("File", "Exit")
    exit_menu.command = exit

First we create two receivers and two commands. Then we create several of the available invokers and set the correct command on each of them. To test, we can use python3 -i filename.py and run code like exit_menu.click(), which will end the program, or save_keystroke.keypress(), which will save the fake file.

Unfortunately, the preceding examples do not feel terribly Pythonic. They have a lot of "boilerplate code" (code that does not accomplish anything, but only provides structure to the pattern), and the Command classes are all eerily similar to each other. Perhaps we could create a generic command object that takes a function as a callback; in fact, why bother? Can we just use a function or method object for each command? Instead of an object with an execute() method, we can write a function and use that as the command directly. This is a common paradigm for the command pattern in Python:

    import sys

    class Window:
3,572 |

        def exit(self):
            sys.exit(0)

    class MenuItem:
        def click(self):
            self.command()

    window = Window()
    menu_item = MenuItem()
    menu_item.command = window.exit

Now that looks a lot more like Python. At first glance, it looks like we've removed the command pattern altogether, and we've tightly connected the menu_item and Window classes. But if we look closer, we find there is no tight coupling at all. Any callable can be set up as the command on the MenuItem, just as before. And the Window.exit method can be attached to any invoker. Most of the flexibility of the command pattern has been maintained. We have sacrificed complete decoupling for readability, but this code is, in my opinion, and that of many Python programmers, more maintainable than the fully abstracted version.

Of course, since we can add a __call__ method to any object, we aren't restricted to functions. The previous example is a useful shortcut when the method being called doesn't have to maintain state, but in more advanced usage, we can use this code as well:

    class Document:
        def __init__(self, filename):
            self.filename = filename
            self.contents = "This file cannot be modified"

        def save(self):
            with open(self.filename, 'w') as file:
                file.write(self.contents)

    class KeyboardShortcut:
        def keypress(self):
            self.command()

    class SaveCommand:
        def __init__(self, document):
            self.document = document

        def __call__(self):
3,573 |

            self.document.save()

    document = Document("a_file.txt")
    shortcut = KeyboardShortcut()
    save_command = SaveCommand(document)
    shortcut.command = save_command

Here we have something that looks like the first command pattern, but a bit more idiomatic. As you can see, making the invoker call a callable instead of a command object with an execute method has not restricted us in any way. In fact, it's given us more flexibility. We can link to functions directly when that works, yet we can build a complete callable command object when the situation calls for it.

The command pattern is often extended to support undoable commands. For example, a text program may wrap each insertion in a separate command with not only an execute method, but also an undo method that will delete that insertion. A graphics program may wrap each drawing action (rectangle, line, freehand pixels, and so on) in a command that has an undo method that resets the pixels to their original state. In such cases, the decoupling of the command pattern is much more obviously useful, because each action has to maintain enough of its state to undo that action at a later date (a minimal sketch of such an undoable command follows below).

The abstract factory pattern

The abstract factory pattern is normally used when we have multiple possible implementations of a system that depend on some configuration or platform issue. The calling code requests an object from the abstract factory, not knowing exactly what class of object will be returned. The underlying implementation returned may depend on a variety of factors, such as the current locale, operating system, or local configuration.

Common examples of the abstract factory pattern include code for operating-system-independent toolkits, database backends, and country-specific formatters or calculators. An operating-system-independent GUI toolkit might use an abstract factory pattern that returns a set of WinForm widgets under Windows, Cocoa widgets under Mac, GTK widgets under Gnome, and QT widgets under KDE. Django provides an abstract factory that returns a set of object relational classes for interacting with a specific database backend (MySQL, PostgreSQL, SQLite, and others) depending on a configuration setting for the current site. If the application needs to be deployed in multiple places, each one can use a different database backend by changing only one configuration variable. Different countries have different systems for calculating taxes, subtotals, and totals on retail merchandise; an abstract factory can return a particular tax calculation object.
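Returning to the command pattern for a moment, here is the minimal sketch of an undoable command promised above. It is illustrative only, not one of this book's examples: the TextInsertCommand name, the document interface, and the history stack are all assumptions.

    # A hedged sketch of an undoable command, assuming a document
    # object that exposes insert(position, text) and
    # delete(position, length) methods.
    class TextInsertCommand:
        def __init__(self, document, position, text):
            self.document = document
            self.position = position
            self.text = text

        def execute(self):
            self.document.insert(self.position, self.text)

        def undo(self):
            # Remove exactly the characters this command inserted
            self.document.delete(self.position, len(self.text))

    # An invoker might keep a history stack so the most recent
    # command can be undone later:
    history = []

    def run(command):
        command.execute()
        history.append(command)

    def undo_last():
        if history:
            history.pop().undo()

The key point the sketch shows is that each command carries enough state (here, the position and the inserted text) to reverse itself.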
3,574 | The UML class diagram for an abstract factory pattern is hard to understand without a specific example, so let's turn things around and create a concrete example first. We'll create a set of formatters that depend on a specific locale and help us format dates and currencies. There will be an abstract factory class that picks the specific factory, as well as a couple of example concrete factories, one for France and one for the USA. Each of these will create formatter objects for dates and times, which can be queried to format a specific value. Here's the diagram:

[UML: a FormatterFactory interface declares create_date_formatter() and create_currency_formatter(); USAFormatterFactory and FranceFormatterFactory implement it, producing USADateFormatter/USACurrencyFormatter and FranceDateFormatter/FranceCurrencyFormatter, which in turn implement the DateFormatter (format_date()) and CurrencyFormatter (format_currency()) interfaces.]

Comparing that image to the earlier, simpler text shows that a picture is not always worth a thousand words, especially considering we haven't even allowed for factory selection code here. Of course, in Python, we don't have to implement any interface classes, so we can discard DateFormatter, CurrencyFormatter, and FormatterFactory. The formatting classes themselves are pretty straightforward, if verbose:

    class FranceDateFormatter:
        def format_date(self, y, m, d):
            y, m, d = (str(x) for x in (y, m, d))
            y = '20' + y if len(y) == 2 else y
            m = '0' + m if len(m) == 1 else m
            d = '0' + d if len(d) == 1 else d
            return("{0}/{1}/{2}".format(d, m, y))

    class USADateFormatter:
3,575 |

        def format_date(self, y, m, d):
            y, m, d = (str(x) for x in (y, m, d))
            y = '20' + y if len(y) == 2 else y
            m = '0' + m if len(m) == 1 else m
            d = '0' + d if len(d) == 1 else d
            return("{0}-{1}-{2}".format(m, d, y))

    class FranceCurrencyFormatter:
        def format_currency(self, base, cents):
            base, cents = (str(x) for x in (base, cents))
            if len(cents) == 0:
                cents = '00'
            elif len(cents) == 1:
                cents = '0' + cents
            digits = []
            for i, c in enumerate(reversed(base)):
                if i and not i % 3:
                    digits.append(' ')
                digits.append(c)
            base = ''.join(reversed(digits))
            return "{0}€{1}".format(base, cents)

    class USACurrencyFormatter:
        def format_currency(self, base, cents):
            base, cents = (str(x) for x in (base, cents))
            if len(cents) == 0:
                cents = '00'
            elif len(cents) == 1:
                cents = '0' + cents
            digits = []
            for i, c in enumerate(reversed(base)):
                if i and not i % 3:
                    digits.append(',')
                digits.append(c)
            base = ''.join(reversed(digits))
            return "${0}.{1}".format(base, cents)
3,576 | These classes use some basic string manipulation to try to turn a variety of possible inputs (integers, strings of different lengths, and others) into the following formats:

                  USA           France
    Date          mm-dd-yyyy    dd/mm/yyyy
    Currency      $14,500.50    14 500€50

There could obviously be more validation on the input in this code, but let's keep it simple and dumb for this example.

Now that we have the formatters set up, we just need to create the formatter factories:

    class USAFormatterFactory:
        def create_date_formatter(self):
            return USADateFormatter()

        def create_currency_formatter(self):
            return USACurrencyFormatter()

    class FranceFormatterFactory:
        def create_date_formatter(self):
            return FranceDateFormatter()

        def create_currency_formatter(self):
            return FranceCurrencyFormatter()

Now we set up the code that picks the appropriate formatter. Since this is the kind of thing that only needs to be set up once, we could make it a singleton (except singletons aren't very useful in Python). Let's just make the current formatter a module-level variable instead:

    country_code = "US"
    factory_map = {
        "US": USAFormatterFactory,
        "FR": FranceFormatterFactory
    }
    formatter_factory = factory_map.get(country_code)()

In this example, we hardcode the current country code; in practice, it would likely introspect the locale, the operating system, or a configuration file to choose the code. This example uses a dictionary to associate the country codes with factory classes. Then, we grab the correct class from the dictionary and instantiate it.
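To see the factory in action, a quick interactive session helps. This session is illustrative rather than from the original text, but the return values follow directly from the formatter code above:

    >>> formatter_factory.create_date_formatter().format_date(2018, 4, 9)
    '04-09-2018'
    >>> formatter_factory.create_currency_formatter().format_currency(14500, 50)
    '$14,500.50'

The client code never names USADateFormatter or USACurrencyFormatter; switching country_code to "FR" would transparently produce the French formats instead.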
3,577 | It is easy to see what needs to be done when we want to add support for more countries: create the new formatter classes and the abstract factory itself. Bear in mind that formatter classes might be reused; for example, Canada formats its currency the same way as the USA, but its date format is more sensible than its southern neighbor's.

Abstract factories often return a singleton object, but this is not required; in our code, it's returning a new instance of each formatter every time it's called. There's no reason the formatters couldn't be stored as instance variables and the same instance returned for each factory.

Looking back at these examples, we see that, once again, there appears to be a lot of boilerplate code for factories that just doesn't feel necessary in Python. Often, the requirements that might call for an abstract factory can be more easily fulfilled by using a separate module for each factory type (for example, the USA and France), and then ensuring that the correct module is being accessed in a factory module. The package structure for such modules might look like this:

    localize/
        __init__.py
        backends/
            __init__.py
            usa.py
            france.py

The trick is that __init__.py in the localize package can contain logic that redirects all requests to the correct backend. There is a variety of ways this could be done. If we know that the backend is never going to change dynamically (that is, without a restart), we can just put some if statements in __init__.py that check the current country code, and use the usually unacceptable from .backends.usa import * syntax to import all variables from the appropriate backend. Or, we could import each of the backends and set a current_backend variable to point at a specific module:

    from .backends import usa, france

    if country_code == "US":
        current_backend = usa

Depending on which solution we choose, our client code would have to call either localize.format_date or localize.current_backend.format_date to get a date formatted in the current country's locale. The end result is much more Pythonic than the original abstract factory pattern, and, in typical usage, just as flexible.
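If the backend does need to be chosen at startup from configuration, the same idea can be expressed dynamically. The following sketch is one possible implementation, not code from this book; the LOCALE environment variable, the module map, and the assumption that each backend module defines format_date and format_currency are all hypothetical:

    # localize/__init__.py -- a hedged sketch of dynamic backend selection
    import importlib
    import os

    # Map country codes to backend module names inside localize/backends
    _backends = {"US": "usa", "FR": "france"}

    country_code = os.environ.get("LOCALE", "US")
    backend_name = _backends.get(country_code, "usa")

    # Import the chosen backend module once, at package import time
    current_backend = importlib.import_module(
        "localize.backends." + backend_name)

Client code then calls localize.current_backend.format_date exactly as before, and deployment only has to change one environment variable.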
3,578 | The composite pattern

The composite pattern allows complex tree-like structures to be built from simple components. These components, called composite objects, are able to behave sort of like a container and sort of like a variable, depending on whether they have child components. Composite objects are container objects, where the content may actually be another composite object.

Traditionally, each component in a composite object must be either a leaf node (that cannot contain other objects) or a composite node. The key is that both composite and leaf nodes can have the same interface. The UML diagram is very simple:

[UML: a Component interface with some_action(); Leaf implements some_action(); Composite implements some_action() and add_child(), and holds child Components.]

This simple pattern, however, allows us to create complex arrangements of elements, all of which satisfy the interface of the component object. Here is a concrete instance of such a complicated arrangement:

[Diagram: a tree of nested composite nodes, each containing a mixture of leaf nodes and further composite nodes.]

The composite pattern is commonly useful in file/folder-like trees. Regardless of whether a node in the tree is a normal file or a folder, it is still subject to operations such as moving, copying, or deleting the node. We can create a component interface that supports these operations, and then use a composite object to represent folders, and leaf nodes to represent normal files.
3,579 | Of course, in Python, once again, we can take advantage of duck typing to implicitly provide the interface, so we only need to write two classes. Let's define these interfaces first:

    class Folder:
        def __init__(self, name):
            self.name = name
            self.children = {}

        def add_child(self, child):
            pass

        def move(self, new_path):
            pass

        def copy(self, new_path):
            pass

        def delete(self):
            pass

    class File:
        def __init__(self, name, contents):
            self.name = name
            self.contents = contents

        def move(self, new_path):
            pass

        def copy(self, new_path):
            pass

        def delete(self):
            pass

For each folder (composite) object, we maintain a dictionary of children. Often, a list is sufficient, but in this case, a dictionary will be useful for looking up children by name. Our paths will be specified as node names separated by the / character, similar to paths in a Unix shell.
3,580 | Thinking about the methods involved, we can see that moving or deleting a node behaves in a similar way, regardless of whether or not it is a file or folder node. Copying, however, has to do a recursive copy for folder nodes, while copying a file node is a trivial operation.

To take advantage of the similar operations, we can extract some of the common methods into a parent class. Let's take that discarded Component interface and change it to a base class:

    class Component:
        def __init__(self, name):
            self.name = name

        def move(self, new_path):
            new_folder = get_path(new_path)
            del self.parent.children[self.name]
            new_folder.children[self.name] = self
            self.parent = new_folder

        def delete(self):
            del self.parent.children[self.name]

    class Folder(Component):
        def __init__(self, name):
            super().__init__(name)
            self.children = {}

        def add_child(self, child):
            pass

        def copy(self, new_path):
            pass

    class File(Component):
        def __init__(self, name, contents):
            super().__init__(name)
            self.contents = contents

        def copy(self, new_path):
            pass

    root = Folder('')

    def get_path(path):
3,581 |

        names = path.split('/')[1:]
        node = root
        for name in names:
            node = node.children[name]
        return node

We've created the move and delete methods on the Component class. Both of them access a mysterious parent variable that we haven't set yet. The move method uses a module-level get_path function that finds a node from a predefined root node, given a path. All files will be added to this root node or a child of that node. For the move method, the target should be a currently existing folder, or we'll get an error. As with many of the examples in technical books, error handling is woefully absent, to help focus on the principles under consideration.

Let's set up that mysterious parent variable first; this happens in the folder's add_child method:

        def add_child(self, child):
            child.parent = self
            self.children[child.name] = child

Well, that was easy enough. Let's see if our composite file hierarchy is working properly (the object addresses below are representative):

    $ python3 -i add_child.py

    >>> folder1 = Folder('folder1')
    >>> folder2 = Folder('folder2')
    >>> root.add_child(folder1)
    >>> root.add_child(folder2)
    >>> folder11 = Folder('folder11')
    >>> folder1.add_child(folder11)
    >>> file111 = File('file111', 'contents')
    >>> folder11.add_child(file111)
    >>> file21 = File('file21', 'other contents')
    >>> folder2.add_child(file21)
    >>> folder2.children
    {'file21': <__main__.File object at 0xb7220a4c>}
    >>> folder2.move('/folder1/folder11')
    >>> folder11.children
    {'folder2': <__main__.Folder object at 0xb722080c>,
    'file111': <__main__.File object at 0xb72209ec>}
3,582 |

    >>> file21.move('/folder1')
    >>> folder1.children
    {'file21': <__main__.File object at 0xb7220a4c>,
    'folder11': <__main__.Folder object at 0xb722080c>}

Yes, we can create folders, add folders to other folders, add files to folders, and move them around! What more could we ask for in a file hierarchy?

Well, we could ask for copying to be implemented, but to conserve trees, let's leave that as an exercise.

The composite pattern is extremely useful for a variety of tree-like structures, including GUI widget hierarchies, file hierarchies, tree sets, graphs, and HTML DOM. It can be a useful pattern in Python when implemented according to the traditional implementation, as the example earlier demonstrated. Sometimes, if only a shallow tree is being created, we can get away with a list of lists or a dictionary of dictionaries, and do not need to implement custom component, leaf, and composite classes. Other times, we can get away with implementing only one composite class, and treating leaf and composite objects as a single class. Alternatively, Python's duck typing can make it easy to add other objects to a composite hierarchy, as long as they have the correct interface.

Exercises

Before diving into exercises for each design pattern, take a moment to implement the copy method for the File and Folder objects in the previous section. The File method should be quite trivial; just create a new node with the same name and contents, and add it to the new parent folder. The copy method on Folder is quite a bit more complicated, as you first have to duplicate the folder, and then recursively copy each of its children to the new location. You can call the copy() method on the children indiscriminately, regardless of whether each is a file or folder object. This will drive home just how powerful the composite pattern can be.

Now, as with the previous chapter, look at the patterns we've discussed, and consider ideal places where you might implement them. You may want to apply the adapter pattern to existing code, as it is usually applicable when interfacing with existing libraries, rather than new code. How can you use an adapter to force two interfaces to interact with each other correctly?

Can you think of a system complex enough to justify using the facade pattern? Consider how facades are used in real-life situations, such as the driver-facing interface of a car, or the control panel in a factory. It is similar in software, except the users of the facade interface are other programmers, rather than people trained to use them. Are there complex systems in your latest project that could benefit from the facade pattern?
3,583 | It's possible you don't have any huge, memory-consuming code that would benefit from the flyweight pattern, but can you think of situations where it might be useful? Anywhere that large amounts of overlapping data need to be processed, a flyweight is waiting to be used. Would it be useful in the banking industry? In web applications? At what point does the flyweight pattern make sense? When is it overkill?

What about the command pattern? Can you think of any common (or better yet, uncommon) examples of places where the decoupling of action from invocation would be useful? Look at the programs you use on a daily basis, and imagine how they are implemented internally. It's likely that many of them use the command pattern for one purpose or another.

The abstract factory pattern, or the somewhat more Pythonic derivatives we discussed, can be very useful for creating one-touch-configurable systems. Can you think of places where such systems are useful?

Finally, consider the composite pattern. There are tree-like structures all around us in programming; some of them, like our file hierarchy example, are blatant; others are fairly subtle. What situations might arise where the composite pattern would be useful? Can you think of places where you can use it in your own code? What if you adapted the pattern slightly; for example, to contain different types of leaf or composite nodes for different types of objects?

Summary

In this chapter, we went into detail on several more design patterns, covering their canonical descriptions as well as alternatives for implementing them in Python, which is often more flexible and versatile than traditional object-oriented languages. The adapter pattern is useful for matching interfaces, while the facade pattern is suited to simplifying them. Flyweight is a complicated pattern and only useful if memory optimization is required. In Python, the command pattern is often more aptly implemented using first class functions as callbacks. Abstract factories allow run-time separation of implementations depending on configuration or system information. The composite pattern is used universally for tree-like structures.

In the next chapter, we'll discuss how important it is to test Python programs, and how to do it.
3,584 | Testing Object-oriented Programs

Skilled Python programmers agree that testing is one of the most important aspects of software development. Even though this chapter is placed near the end of the book, it is not an afterthought; everything we have studied so far will help us when writing tests. We'll be studying:

- The importance of unit testing and test-driven development
- The standard unittest module
- The py.test automated testing suite
- The mock module
- Code coverage
- Cross-platform testing with tox

Why test?

A large collection of programmers already know how important it is to test their code. If you're among them, feel free to skim this section. You'll find the next section, where we actually see how to do the tests in Python, much more scintillating. If you're not convinced of the importance of testing, I promise that your code is broken, you just don't know it. Read on!
3,585 | Some people argue that testing is more important in Python code because of its dynamic nature; compiled languages such as Java and C++ are occasionally thought to be somehow "safer" because they enforce type checking at compile time. However, Python tests rarely check types. They're checking values. They're making sure that the right attributes have been set at the right time or that the sequence has the right length, order, and values. These higher-level things need to be tested in any language. The real reason Python programmers test more than programmers of other languages is that it is so easy to test in Python!

But why test? Do we really need to test? What if we didn't test? To answer those questions, write a tic-tac-toe game from scratch without any testing at all. Don't run it until it is completely written, start to finish. Tic-tac-toe is fairly simple to implement if you make both players human players (no artificial intelligence). You don't even have to try to calculate who the winner is. Now run your program and fix all the errors. How many were there? I recorded eight on my tic-tac-toe implementation, and I'm not sure I caught them all. Did you?

We need to test our code to make sure it works. Running the program, as we just did, and fixing the errors is one crude form of testing. Python programmers are able to write a few lines of code and run the program to make sure those lines are doing what they expect. But changing a few lines of code can affect parts of the program that the developer hadn't realized will be influenced by the changes, and that therefore won't be tested. Furthermore, as a program grows, the various paths that the interpreter can take through that code also grow, and it quickly becomes impossible to manually test all of them.

To handle this, we write automated tests. These are programs that automatically run certain inputs through other programs or parts of programs. We can run these test programs in seconds and cover more possible input situations than one programmer would think to test every time they change something. There are four main reasons to write tests:

- To ensure that code is working the way the developer thinks it should
- To ensure that code continues working when we make changes
- To ensure that the developer understood the requirements
- To ensure that the code we are writing has a maintainable interface
3,586 | The first point really doesn't justify the time it takes to write a test; we can simply test the code directly in the interactive interpreter. But when we have to perform the same sequence of test actions multiple times, it takes less time to automate those steps once and then run them whenever necessary. It is a good idea to run tests whenever we change code, whether it is during initial development or maintenance releases. When we have a comprehensive set of automated tests, we can run them after code changes and know that we didn't inadvertently break anything that was tested.

The last two points are more interesting. When we write tests for code, it helps us design the API, interface, or pattern that code takes. Thus, if we misunderstood the requirements, writing a test can help highlight that misunderstanding. On the other side, if we're not certain how we want to design a class, we can write a test that interacts with that class, so we have an idea of the most natural way to test it. In fact, it is often beneficial to write the tests before we write the code we are testing.

Test-driven development

"Write tests first" is the mantra of test-driven development. Test-driven development takes the "untested code is broken code" concept one step further and suggests that only unwritten code should be untested. Do not write any code until you have written the tests for this code. So the first step is to write a test that proves the code would work. Obviously, the test is going to fail, since the code hasn't been written. Then write the code that ensures the test passes. Then write another test for the next segment of code (a minimal sketch of one such cycle appears below).

Test-driven development is fun. It allows us to build little puzzles to solve. Then we implement the code to solve the puzzles. Then we make a more complicated puzzle, and we write code that solves the new puzzle without unsolving the previous one.

There are two goals to the test-driven methodology. The first is to ensure that tests really get written. It's so very easy, after we have written code, to say: "Hmm, it seems to work. I don't have to write any tests for this. It was just a small change; nothing could have broken." If the test is already written before we write the code, we will know exactly when it works (because the test will pass), and we'll know in the future if it is ever broken by a change we, or someone else, has made.

Secondly, writing tests first forces us to consider exactly how the code will be interacted with. It tells us what methods objects need to have and how attributes will be accessed. It helps us break up the initial problem into smaller, testable problems, and then to recombine the tested solutions into larger, also tested, solutions. Writing tests can thus become part of the design process. Often, if we're writing a test for a new object, we discover anomalies in the design that force us to consider new aspects of the software.
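To make the red-green loop itself concrete, here is a tiny illustrative cycle. It is not an example from this book; the multiply function and file names are hypothetical, and any test runner that executes functions named test_* would drive it:

    # test_multiply.py -- written first; running it fails because
    # multiply does not exist yet (the "red" step)
    from multiply import multiply

    def test_multiply():
        assert multiply(3, 4) == 12

    # multiply.py -- the simplest code that makes the test pass
    # (the "green" step); the next test then drives the next
    # piece of behavior, such as handling non-numeric input
    def multiply(a, b):
        return a * b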
3,587 | As a concrete example, imagine writing code that uses an object-relational mapper to store object properties in a database. It is common to use an automatically assigned database ID in such objects. Our code might use this ID for various purposes. If we are writing a test for such code, before we write it, we may realize that our design is faulty because objects do not have these IDs until they have been saved to the database. If we want to manipulate an object without saving it in our test, it will highlight this problem before we have written code based on the faulty premise.

Testing makes software better. Writing tests before we release the software makes it better before the end user sees or purchases the buggy version. (I have worked for companies that thrive on the "the users can test it" philosophy. It's not a healthy business model!) Writing tests before we write software makes it better the first time it is written.

Unit testing

Let's start our exploration with Python's built-in test library. This library provides a common interface for unit tests. Unit tests focus on testing the least amount of code possible in any one test. Each one tests a single unit of the total amount of available code.

The Python library for this is called, unsurprisingly, unittest. It provides several tools for creating and running unit tests, the most important being the TestCase class. This class provides a set of methods that allow us to compare values, set up tests, and clean up when they have finished.

When we want to write a set of unit tests for a specific task, we create a subclass of TestCase, and write individual methods to do the actual testing. These methods must all start with the name test. When this convention is followed, the tests automatically run as part of the test process. Normally, the tests set some values on an object and then run a method, and use the built-in comparison methods to ensure that the right results were calculated. Here's a very simple example:

    import unittest

    class CheckNumbers(unittest.TestCase):
        def test_int_float(self):
            self.assertEqual(1, 1.0)

    if __name__ == "__main__":
        unittest.main()
3,588 | This code simply subclasses the TestCase class and adds a method that calls the TestCase.assertEqual method. This method will either succeed or raise an exception, depending on whether the two parameters are equal. If we run this code, the main function from unittest will give us the following output:

    .
    --------------------------------------------------------------
    Ran 1 test in 0.000s

    OK

Did you know that floats and integers can compare as equal? Let's add a failing test:

        def test_str_float(self):
            self.assertEqual(1, "1")

The output of this code is more sinister, as integers and strings are not considered equal:

    .F
    ============================================================
    FAIL: test_str_float (__main__.CheckNumbers)
    ------------------------------------------------------------
    Traceback (most recent call last):
      File "simplest_unittest.py", line 8, in test_str_float
        self.assertEqual(1, "1")
    AssertionError: 1 != '1'

    ------------------------------------------------------------
    Ran 2 tests in 0.001s

    FAILED (failures=1)

The dot on the first line indicates that the first test (the one we wrote before) passed successfully; the letter F after it shows that the second test failed. Then, at the end, it gives us some informative output telling us how and where the test failed, along with a summary of the number of failures.
3,589 | We can have as many test methods on one TestCase class as we like; as long as the method name begins with test, the test runner will execute each one as a separate test. Each test should be completely independent of other tests. Results or calculations from a previous test should have no impact on the current test. The key to writing good unit tests is to keep each test method as short as possible, testing a small unit of code with each test case. If your code does not seem to naturally break up into such testable units, it's probably a sign that your design needs rethinking.

Assertion methods

The general layout of a test case is to set certain variables to known values, run one or more functions, methods, or processes, and then "prove" that correct expected results were returned or calculated by using TestCase assertion methods.

There are a few different assertion methods available to confirm that specific results have been achieved. We just saw assertEqual, which will cause a test failure if the two parameters do not pass an equality check. The inverse, assertNotEqual, will fail if the two parameters do compare as equal. The assertTrue and assertFalse methods each accept a single expression, and fail if the expression does not pass an if test. These tests are not checking for the Boolean values True or False. Rather, they test the same condition as though an if statement were used: False, None, 0, or an empty list, dictionary, string, set, or tuple would pass a call to the assertFalse method, while nonzero numbers, containers with values in them, or the value True would succeed when calling the assertTrue method.

There is an assertRaises method that can be used to ensure a specific function call raises a specific exception or, optionally, it can be used as a context manager to wrap inline code. The test passes if the code inside the with statement raises the proper exception; otherwise, it fails. Here's an example of both versions:

    import unittest

    def average(seq):
        return sum(seq) / len(seq)

    class TestAverage(unittest.TestCase):
        def test_zero(self):
            self.assertRaises(ZeroDivisionError,
                    average,
3,590 |

                    [])

        def test_with_zero(self):
            with self.assertRaises(ZeroDivisionError):
                average([])

    if __name__ == "__main__":
        unittest.main()

The context manager allows us to write the code the way we would normally write it (by calling functions or executing code directly), rather than having to wrap the function call in another function call.

There are also several other assertion methods, summarized in the following table:

    Methods                 Description
    assertGreater           Accept two comparable objects and ensure
    assertGreaterEqual      the named inequality holds.
    assertLess
    assertLessEqual

    assertIn                Ensure an element is (or is not) an
    assertNotIn             element in a container object.

    assertIsNone            Ensure an element is (or is not) the
    assertIsNotNone         exact value None (but not another
                            falsey value).

    assertSameElements      Ensure two container objects have the
                            same elements, ignoring the order.

    assertSequenceEqual     Ensure two containers have the same
    assertDictEqual         elements in the same order. If there's
    assertSetEqual          a failure, show a code diff comparing
    assertListEqual         the two lists to see where they differ.
    assertTupleEqual        The last four methods also test the
                            type of the list.

Each of the assertion methods accepts an optional argument named msg. If supplied, it is included in the error message if the assertion fails. This is useful for clarifying what was expected or explaining where a bug may have occurred to cause the assertion to fail.
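A few of these methods in action may help. The following snippet is illustrative rather than from the original text, but it uses only the standard unittest methods listed in the table above:

    import unittest

    class TestAssertions(unittest.TestCase):
        def test_container_checks(self):
            seq = [1, 2, 3]
            self.assertIn(2, seq)
            self.assertNotIn(7, seq)
            self.assertGreater(len(seq), 2)
            self.assertListEqual(seq, [1, 2, 3])

        def test_message_argument(self):
            result = None
            # The msg text is included in the report if this fails
            self.assertIsNone(result,
                    msg="expected no result before first call")

    if __name__ == "__main__":
        unittest.main()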
3,591 | Reducing boilerplate and cleaning up

After writing a few small tests, we often find that we have to do the same setup code for several related tests. For example, the following list subclass has three methods for statistical calculations:

    from collections import defaultdict

    class StatsList(list):
        def mean(self):
            return sum(self) / len(self)

        def median(self):
            if len(self) % 2:
                return self[int(len(self) / 2)]
            else:
                idx = int(len(self) / 2)
                return (self[idx] + self[idx-1]) / 2

        def mode(self):
            freqs = defaultdict(int)
            for item in self:
                freqs[item] += 1
            mode_freq = max(freqs.values())
            modes = []
            for item, value in freqs.items():
                if value == mode_freq:
                    modes.append(item)
            return modes

Clearly, we're going to want to test situations with each of these three methods that have very similar inputs; we'll want to see what happens with empty lists, or with lists containing non-numeric values, or with lists containing a normal dataset. We can use the setUp method on the TestCase class to do initialization for each test. This method accepts no arguments, and allows us to do arbitrary setup before each test is run. For example, we can test all three methods on identical lists of integers as follows:

    from stats import StatsList
    import unittest

    class TestValidInputs(unittest.TestCase):
        def setUp(self):
            self.stats = StatsList([1, 2, 2, 3, 3, 4])

        def test_mean(self):
3,592 |

            self.assertEqual(self.stats.mean(), 2.5)

        def test_median(self):
            self.assertEqual(self.stats.median(), 2.5)
            self.stats.append(4)
            self.assertEqual(self.stats.median(), 3)

        def test_mode(self):
            self.assertEqual(self.stats.mode(), [2, 3])
            self.stats.remove(2)
            self.assertEqual(self.stats.mode(), [3])

    if __name__ == "__main__":
        unittest.main()

If we run this example, it indicates that all tests pass. Notice first that the setUp method is never explicitly called inside the three test_* methods. The test suite does this on our behalf. More importantly, notice how test_median alters the list, by adding an additional 4 to it, yet when test_mode is called, the list has returned to the values specified in setUp (if it had not, there would be two fours in the list, and the mode method would have returned three values). This shows that setUp is called individually before each test, to ensure the test class starts with a clean slate. Tests can be executed in any order, and the results of one test should not depend on any other tests.

In addition to the setUp method, TestCase offers a no-argument tearDown method, which can be used for cleaning up after each and every test on the class has run. This is useful if cleanup requires anything other than letting an object be garbage collected. For example, if we are testing code that does file I/O, our tests may create new files as a side effect of testing; the tearDown method can remove these files and ensure the system is in the same state it was before the tests ran. Test cases should never have side effects.

In general, we group test methods into separate TestCase subclasses depending on what setup code they have in common. Several tests that require the same or similar setup will be placed in one class, while tests that require unrelated setup go in another class.

Organizing and running tests

It doesn't take long for a collection of unit tests to grow very large and unwieldy. It quickly becomes complicated to load and run all the tests at once. This is a primary goal of unit testing: it should be trivial to run all tests on our program and get a quick "yes or no" answer to the question, "did my recent changes break any existing tests?"
3,593 | Python's discover module basically looks for any modules in the current folder or subfolders with names that start with the characters test. If it finds any TestCase objects in these modules, the tests are executed. It's a painless way to ensure we don't miss running any tests. To use it, ensure your test modules are named test_<something>.py and then run the command python3 -m unittest discover.

Ignoring broken tests

Sometimes, a test is known to fail, but we don't want the test suite to report the failure. This may be because a broken or unfinished feature has had tests written, but we aren't currently focusing on improving it. More often, it happens because a feature is only available on a certain platform, Python version, or for advanced versions of a specific library. Python provides us with a few decorators to mark tests as expected to fail or to be skipped under known conditions.

The decorators are:

- expectedFailure()
- skip(reason)
- skipIf(condition, reason)
- skipUnless(condition, reason)

These are applied using the Python decorator syntax. The first one accepts no arguments, and simply tells the test runner not to record the test as a failure when it fails. The skip method goes one step further and doesn't even bother to run the test. It expects a single string argument describing why the test was skipped. The other two decorators accept two arguments: a Boolean expression that indicates whether or not the test should be run, and a similar description. In use, these decorators might be applied like this:

    import unittest
    import sys

    class SkipTests(unittest.TestCase):
        @unittest.expectedFailure
        def test_fails(self):
            self.assertEqual(False, True)

        @unittest.skip("Test is useless")
3,594 |

        def test_skip(self):
            self.assertEqual(False, True)

        @unittest.skipIf(sys.version_info.minor == 4,
                "broken on 3.4")
        def test_skipif(self):
            self.assertEqual(False, True)

        @unittest.skipUnless(sys.platform.startswith('linux'),
                "broken unless on linux")
        def test_skipunless(self):
            self.assertEqual(False, True)

    if __name__ == "__main__":
        unittest.main()

The first test fails, but it is reported as an expected failure; the second test is never run. The other two tests may or may not be run depending on the current Python version and operating system. On my Linux system running Python 3.4, the output looks like this:

    xssF
    ============================================================
    FAIL: test_skipunless (__main__.SkipTests)
    ------------------------------------------------------------
    Traceback (most recent call last):
      File "skipping_tests.py", line 21, in test_skipunless
        self.assertEqual(False, True)
    AssertionError: False != True

    ------------------------------------------------------------
    Ran 4 tests in 0.001s

    FAILED (failures=1, skipped=2, expected failures=1)

The x on the first line indicates an expected failure; the two s characters represent skipped tests, and the F indicates a real failure, since the conditional to skipUnless was True on my system.
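The contrived conditions above stand in for more realistic ones. A common real-world use of skipUnless is guarding tests that depend on an optional third-party library. The following sketch is illustrative only, using NumPy as a hypothetical optional dependency:

    import unittest

    try:
        import numpy
        HAS_NUMPY = True
    except ImportError:
        HAS_NUMPY = False

    class OptionalDependencyTests(unittest.TestCase):
        @unittest.skipUnless(HAS_NUMPY, "numpy not installed")
        def test_array_mean(self):
            # Only runs on systems where numpy imported successfully
            self.assertEqual(numpy.mean([1, 2, 3]), 2.0)

    if __name__ == "__main__":
        unittest.main()

On machines without the library, the test is reported as skipped rather than as a crash, so the rest of the suite still gives a meaningful result.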
3,595 | Testing with py.test

The Python unittest module requires a lot of boilerplate code to set up and initialize tests. It is based on the very popular JUnit testing framework for Java. It even uses the same method names (you may have noticed they don't conform to the PEP 8 naming standard, which suggests underscores rather than CamelCase to separate words in a method name) and test layout. While this is effective for testing in Java, it's not necessarily the best design for Python testing.

Because Python programmers like their code to be elegant and simple, other test frameworks have been developed, outside the standard library. Two of the more popular ones are py.test and nose. The former is more robust and has had Python 3 support for much longer, so we'll discuss it here.

Since py.test is not part of the standard library, you'll need to download and install it yourself; you can get it from the py.test home page at http://pytest.org/. The website has comprehensive installation instructions for a variety of interpreters and platforms, but you can usually get away with the more common Python package installer, pip. Just type pip install pytest on your command line and you'll be good to go.

py.test has a substantially different layout from the unittest module. It doesn't require test cases to be classes. Instead, it takes advantage of the fact that Python functions are objects, and allows any properly named function to behave like a test. Rather than providing a bunch of custom methods for asserting equality, it uses the assert statement to verify results. This makes tests more readable and maintainable.

When we run py.test, it will start in the current folder and search for any modules in that folder or subpackages whose names start with the characters test_. If any functions in this module also start with test, they will be executed as individual tests. Furthermore, if there are any classes in the module whose name starts with Test, any methods on that class that start with test_ will also be executed in the test environment.

Let's port the simplest possible unittest example we wrote earlier to py.test:

    def test_int_float():
        assert 1 == 1.0

For the exact same test, we've written two lines of more readable code, in comparison to the six lines required in our first unittest example.
3,596 | However, we are not forbidden from writing class-based tests. Classes can be useful for grouping related tests together or for tests that need to access related attributes or methods on the class. This example shows an extended class with a passing and a failing test; we'll see that the error output is more comprehensive than that provided by the unittest module:

    class TestNumbers:
        def test_int_float(self):
            assert 1 == 1.0

        def test_int_str(self):
            assert 1 == "1"

Notice that the class doesn't have to extend any special objects to be picked up as a test (although py.test will run standard unittest TestCases just fine). If we run py.test, the output looks like this (version numbers and addresses are representative):

    ============== test session starts ==============
    platform linux -- Python 3.4.1 -- pytest-2.6.4
    test object 1: class_pytest.py

    class_pytest.py .F

    ================== FAILURES ===================
    ___________ TestNumbers.test_int_str ____________

    self = <class_pytest.TestNumbers object at 0x85b4fac>

        def test_int_str(self):
    >       assert 1 == "1"
    E       assert 1 == '1'

    class_pytest.py:10: AssertionError
    ====== 1 failed, 1 passed in 0.10 seconds ======
3,597 | The output starts with some useful information about the platform and interpreter. This can be useful for sharing bugs across disparate systems. The third line tells us the name of the file being tested (if there are multiple test modules picked up, they will all be displayed), followed by the familiar .F we saw in the unittest module; the . character indicates a passing test, while the letter F demonstrates a failure.

After all tests have run, the error output for each of them is displayed. It presents a summary of local variables (there is only one in this example: the self parameter passed into the function), the source code where the error occurred, and a summary of the error message. In addition, if an exception other than an AssertionError is raised, py.test will present us with a complete traceback, including source code references.

By default, py.test suppresses output from print statements if the test is successful. This is useful for test debugging; when a test is failing, we can add print statements to the test to check the values of specific variables and attributes as the test runs. If the test fails, these values are output to help with diagnosis. However, once the test is successful, the print statement output is not displayed, and they can be easily ignored. We don't have to "clean up" the output by removing print statements. If the tests ever fail again, due to future changes, the debugging output will be immediately available.

One way to do setup and cleanup

py.test supports setup and teardown methods similar to those used in unittest, but it provides even more flexibility. We'll discuss these briefly, since they are familiar, but they are not used as extensively as in the unittest module, as py.test provides us with a powerful funcargs facility, which we'll discuss in the next section.

If we are writing class-based tests, we can use two methods called setup_method and teardown_method in basically the same way that setUp and tearDown are called in unittest. They are called before and after each test method in the class, to perform setup and cleanup duties. There is one difference from the unittest methods, though. Both methods accept an argument: the function object representing the method being called.

In addition, py.test provides other setup and teardown functions to give us more control over when setup and cleanup code is executed. The setup_class and teardown_class methods are expected to be class methods; they accept a single argument (there is no self argument) representing the class in question.
3,598 | Finally, we have the setup_module and teardown_module functions, which are run immediately before and after all tests (in functions or classes) in that module. These can be useful for "one time" setup, such as creating a socket or database connection that will be used by all tests in the module. Be careful with this one, as it can accidentally introduce dependencies between tests if the object being set up stores state.

That short description doesn't do a great job of explaining exactly when these methods are called, so let's look at an example that illustrates exactly when it happens:

    def setup_module(module):
        print("setting up MODULE {0}".format(
            module.__name__))

    def teardown_module(module):
        print("tearing down MODULE {0}".format(
            module.__name__))

    def test_a_function():
        print("RUNNING TEST FUNCTION")

    class BaseTest:
        def setup_class(cls):
            print("setting up CLASS {0}".format(
                cls.__name__))

        def teardown_class(cls):
            print("tearing down CLASS {0}\n".format(
                cls.__name__))

        def setup_method(self, method):
            print("setting up METHOD {0}".format(
                method.__name__))

        def teardown_method(self, method):
            print("tearing down METHOD {0}".format(
                method.__name__))

    class TestClass1(BaseTest):
        def test_method_1(self):
            print("RUNNING METHOD 1-1")

        def test_method_2(self):
3,599 |

            print("RUNNING METHOD 1-2")

    class TestClass2(BaseTest):
        def test_method_1(self):
            print("RUNNING METHOD 2-1")

        def test_method_2(self):
            print("RUNNING METHOD 2-2")

The sole purpose of the BaseTest class is to extract four methods that would be otherwise identical to the test classes, and use inheritance to reduce the amount of duplicate code. So, from the point of view of py.test, the two subclasses have not only two test methods each, but also two setup and two teardown methods (one at the class level, one at the method level).

If we run these tests using py.test with the print function output suppression disabled (by passing the -s or --capture=no flag), they show us when the various functions are called in relation to the tests themselves:

    $ py.test setup_teardown.py -s
    setting up MODULE setup_teardown
    RUNNING TEST FUNCTION
    setting up CLASS TestClass1
    setting up METHOD test_method_1
    RUNNING METHOD 1-1
    tearing down METHOD test_method_1
    setting up METHOD test_method_2
    RUNNING METHOD 1-2
    tearing down METHOD test_method_2
    tearing down CLASS TestClass1

    setting up CLASS TestClass2
    setting up METHOD test_method_1
    RUNNING METHOD 2-1
    tearing down METHOD test_method_1
    setting up METHOD test_method_2
    RUNNING METHOD 2-2
    tearing down METHOD test_method_2
    tearing down CLASS TestClass2

    tearing down MODULE setup_teardown