The setup and teardown methods for the module are executed at the beginning and end of the session. Then the lone module-level test function is run. Next, the setup method for the first class is executed, followed by the two tests for that class; each of these tests is individually wrapped in separate setup_method and teardown_method calls. After the tests have executed, the class teardown method is called. The same sequence happens for the second class, before the teardown_module method is finally called, exactly once.

A completely different way to set up variables

One of the most common uses for the various setup and teardown functions is to ensure certain class or module variables are available with a known value before each test method is run. py.test offers a completely different way to do this, using what are known as funcargs, short for function arguments. Funcargs are basically named variables that are predefined in a test configuration file. This allows us to separate the configuration from the execution of tests, and allows the funcargs to be used across multiple classes and modules.

To use them, we add parameters to our test functions. The names of the parameters are used to look up specific arguments in specially named functions. For example, if we wanted to test the StatsList class we used while demonstrating unittest, we would again want to repeatedly test a list of valid integers. But we can write our tests like so, instead of using a setup method:

    from stats import StatsList

    def pytest_funcarg__valid_stats(request):
        return StatsList([1, 2, 2, 3, 3, 4])

    def test_mean(valid_stats):
        assert valid_stats.mean() == 2.5

    def test_median(valid_stats):
        assert valid_stats.median() == 2.5
        valid_stats.append(4)
        assert valid_stats.median() == 3

    def test_mode(valid_stats):
        assert valid_stats.mode() == [2, 3]
        valid_stats.remove(2)
        assert valid_stats.mode() == [3]
Each of the three test methods accepts a parameter named valid_stats; this parameter is created by calling the pytest_funcarg__valid_stats function defined at the top of the file. It can also be defined in a file called conftest.py if the funcarg is needed by multiple modules. The conftest.py file is parsed by py.test to load any "global" test configuration; it is a sort of catch-all for customizing the py.test experience.

As with other py.test features, the name of the factory for returning a funcarg is important; funcargs are functions named pytest_funcarg__<identifier>, where <identifier> is a valid variable name that can be used as a parameter in a test function. This function accepts the mysterious request parameter, and returns the object to be passed as an argument into the individual test functions. The funcarg is created afresh for each call to an individual test function; this allows us, for example, to change the list in one test and know that it will be reset to its original values in the next test.

Funcargs can do a lot more than return basic variables. The request object passed into the funcarg factory provides some extremely useful methods and attributes to modify the funcarg's behavior. The module, cls, and function attributes allow us to see exactly which test is requesting the funcarg. The config attribute allows us to check command-line arguments and other configuration data.

More interestingly, the request object provides methods that allow us to do additional cleanup on the funcarg, or to reuse it across tests, activities that would otherwise be relegated to setup and teardown methods of a specific scope.

The request.addfinalizer method accepts a callback function that performs cleanup after each test function that uses the funcarg has been called. This provides the equivalent of a teardown method, allowing us to clean up files, close connections, empty lists, or reset queues. For example, the following code tests the os.mkdir functionality by creating a temporary directory funcarg:

    import tempfile
    import shutil
    import os.path

    def pytest_funcarg__temp_dir(request):
        dir = tempfile.mkdtemp()
        print(dir)

        def cleanup():
            shutil.rmtree(dir)
        request.addfinalizer(cleanup)
        return dir

    def test_osfiles(temp_dir):
        os.mkdir(os.path.join(temp_dir, 'a'))
        os.mkdir(os.path.join(temp_dir, 'b'))
        dir_contents = os.listdir(temp_dir)
        assert len(dir_contents) == 2
        assert 'a' in dir_contents
        assert 'b' in dir_contents

The funcarg creates a new empty temporary directory for files to be created in. Then it adds a finalizer call to remove that directory (using shutil.rmtree, which recursively removes a directory and anything inside it) after the test has completed. The filesystem is then left in the same state in which it started.

We can use the request.cached_setup method to create function argument variables that last longer than one test. This is useful when setting up an expensive operation that can be reused by multiple tests, as long as the resource reuse doesn't break the atomic or unit nature of the tests (so that one test does not rely on, and is not impacted by, a previous one). For example, if we were to test the following echo server, we may want to run only one instance of the server in a separate process, and then have multiple tests connect to that instance:

    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(('localhost', 1028))
    s.listen(1)

    while True:
        client, address = s.accept()
        data = client.recv(1024)
        client.send(data)
        client.close()

All this code does is listen on a specific port and wait for input from a client socket. When it receives input, it sends the same value back. To test this, we can start the server in a separate process and cache the result for use in multiple tests. Here's how the test code might look:

    import subprocess
    import socket
    import time

    def pytest_funcarg__echoserver(request):
        def setup():
            p = subprocess.Popen(['python3', 'echo_server.py'])
            time.sleep(1)
            return p

        def cleanup(p):
            p.terminate()

        return request.cached_setup(
            setup=setup,
            teardown=cleanup,
            scope="session")

    def pytest_funcarg__clientsocket(request):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.connect(('localhost', 1028))
        request.addfinalizer(lambda: s.close())
        return s

    def test_echo(echoserver, clientsocket):
        clientsocket.send(b"abc")
        assert clientsocket.recv(3) == b'abc'

    def test_echo2(echoserver, clientsocket):
        clientsocket.send(b"def")
        assert clientsocket.recv(3) == b'def'

We've created two funcargs here. The first runs the echo server in a separate process, and returns the process object. The second instantiates a new socket object for each test, and closes the socket when the test has completed, using addfinalizer. The first funcarg is the one we're currently interested in. It looks much like a traditional unit test setup and teardown. We create a setup function that accepts no parameters and returns the correct argument, in this case a process object that is actually ignored by the tests, since they only care that the server is running. Then, we create a cleanup function (the name of the function is arbitrary, since it's just an object we pass into another function), which accepts a single argument: the argument returned by setup. This cleanup code terminates the process.
Instead of returning a funcarg directly, the parent function returns the result of a call to request.cached_setup. It accepts two arguments for the setup and teardown functions (which we just created), as well as a scope argument. This last argument should be one of the three strings "function", "module", or "session"; it determines just how long the argument will be cached. We set it to "session" in this example, so it is cached for the duration of the entire py.test run. The process will not be terminated or restarted until all tests have run. The "module" scope, of course, caches it only for tests in that module, and the "function" scope treats the object more like a normal funcarg, in that it is reset after each test function is run.

Skipping tests with py.test

As with the unittest module, it is frequently necessary to skip tests in py.test, for a variety of reasons: the code being tested hasn't been written yet, the test only runs on certain interpreters or operating systems, or the test is time consuming and should only be run under certain circumstances.

We can skip tests at any point in our code using the py.test.skip function. It accepts a single argument: a string describing why it has been skipped. This function can be called anywhere; if we call it inside a test function, the test will be skipped. If we call it at the module level, all the tests in that module will be skipped. If we call it inside a funcarg function, all tests that call that funcarg will be skipped.

Of course, in all these locations, it is often desirable to skip tests only if certain conditions are or are not met. Since we can execute the skip function at any place in Python code, we can execute it inside an if statement. So we may write a test that looks like this:

    import sys
    import py.test

    def test_simple_skip():
        if sys.platform != "fakeos":
            py.test.skip("Test works only on fakeOS")

        fakeos.do_something_fake()
        assert fakeos.did_not_happen

That's some pretty silly code, really. There is no Python platform named fakeos, so this test will skip on all operating systems. It shows how we can skip conditionally, and since the if statement can check any valid conditional, we have a lot of power over when tests are skipped. Often, we check sys.version_info to check the Python interpreter version, sys.platform to check the operating system, or some_library.__version__ to check whether we have a recent enough version of a given API.
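As a small illustration of the module-level case, a sketch like the following (the condition and message here are hypothetical, not from the original example) would skip every test in the module on Windows:

    import sys
    import py.test

    # Hypothetical module-level skip: because this runs at import time,
    # every test defined in this module is skipped on Windows.
    if sys.platform.startswith("win"):
        py.test.skip("These tests exercise POSIX-only behaviour")

Note that current pytest releases require pytest.skip(..., allow_module_level=True) for a module-level call like this; the bare call shown follows the older py.test API described in this chapter.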
Since skipping an individual test method or function based on a certain conditional is one of the most common uses of test skipping, py.test provides a convenience decorator that allows us to do this in one line. The decorator accepts a single string, which can contain any executable Python code that evaluates to a boolean value. For example, the following test will only run on Python 3 or higher:

    import py.test

    @py.test.mark.skipif("sys.version_info < (3, 0)")
    def test_python3():
        assert b"hello".decode() == "hello"

The py.test.mark.xfail decorator behaves similarly, except that it marks a test as expected to fail, similar to unittest's expectedFailure(). If the test is successful, it will be recorded as a failure; if it fails, it will be reported as expected behavior. In the case of xfail, the conditional argument is optional; if it is not supplied, the test will be marked as expected to fail under all conditions.

Imitating expensive objects

Sometimes, we want to test code that requires an object be supplied that is either expensive or difficult to construct. While this may mean your API needs rethinking to have a more testable interface (which typically also means a more usable interface), we sometimes find ourselves writing test code that has a ton of boilerplate to set up objects that are only incidentally related to the code under test.

For example, imagine we have some code that keeps track of flight statuses in a key-value store (such as redis or memcache), such that we can store the timestamp and the most recent status. A basic version of such code might look like this:

    import datetime
    import redis

    class FlightStatusTracker:
        ALLOWED_STATUSES = {'CANCELLED', 'DELAYED', 'ON TIME'}

        def __init__(self):
            self.redis = redis.StrictRedis()

        def change_status(self, flight, status):
            status = status.upper()
            if status not in self.ALLOWED_STATUSES:
                raise ValueError(
                    "{} is not a valid status".format(status))

            key = "flightno:{}".format(flight)
            value = "{}|{}".format(
                datetime.datetime.now().isoformat(), status)
            self.redis.set(key, value)

There are a lot of things we ought to test in that change_status method. We should check that it raises the appropriate error if a bad status is passed in. We need to ensure that it converts statuses to uppercase. We can see that the key and value have the correct formatting when the set() method is called on the redis object.

One thing we don't have to check in our unit tests, however, is that the redis object is properly storing the data. This is something that absolutely should be tested in integration or application testing, but at the unit test level, we can assume that the py-redis developers have tested their code and that this method does what we want it to. As a rule, unit tests should be self-contained and not rely on the existence of outside resources, such as a running redis instance. Instead, we only need to test that the set() method was called the appropriate number of times and with the appropriate arguments. We can use Mock() objects in our tests to replace the troublesome method with an object we can introspect. The following example illustrates the use of Mock:

    from unittest.mock import Mock
    import py.test

    def pytest_funcarg__tracker(request):
        return FlightStatusTracker()

    def test_mock_method(tracker):
        tracker.redis.set = Mock()
        with py.test.raises(ValueError) as ex:
            tracker.change_status("AC101", "lost")
        assert ex.value.args[0] == "LOST is not a valid status"
        assert tracker.redis.set.call_count == 0
This test, written using py.test syntax, asserts that the correct exception is raised when an inappropriate argument is passed in. In addition, it creates a Mock object for the set method and makes sure that the mock is never called. If it was, it would mean there was a bug in our exception handling code.

Simply replacing the method worked fine in this case, since the object being replaced was destroyed in the end. However, we often want to replace a function or method only for the duration of a test. For example, if we want to test the timestamp formatting in the mocked method, we need to know exactly what datetime.datetime.now() is going to return. However, this value changes from run to run. We need some way to pin it to a specific value so we can test it deterministically.

Remember monkey-patching? Temporarily setting a library function to a specific value is an excellent use of it. The mock library provides a patch context manager that allows us to replace attributes on existing libraries with mock objects. When the context manager exits, the original attribute is automatically restored so as not to impact other test cases. Here's an example (the particular date and flight number are arbitrary):

    from unittest.mock import patch

    def test_patch(tracker):
        tracker.redis.set = Mock()
        fake_now = datetime.datetime(2015, 4, 1)
        with patch('datetime.datetime') as dt:
            dt.now.return_value = fake_now
            tracker.change_status("AC102", "on time")
        dt.now.assert_called_once_with()
        tracker.redis.set.assert_called_once_with(
            "flightno:AC102",
            "2015-04-01T00:00:00|ON TIME")

In this example, we first construct a value called fake_now, which we will set as the return value of the datetime.datetime.now function. We have to construct this object before we patch datetime.datetime, because otherwise we'd be calling the patched now function before we constructed it!

The with statement invites the patch to replace the datetime.datetime class with a mock object, which is returned as the value dt. The neat thing about mock objects is that any time you access an attribute or method on them, they return another mock object. Thus, when we access dt.now, it gives us a new mock object. We set the return_value of that object to our fake_now object; that way, whenever the datetime.datetime.now function is called, it will return our object instead of a new mock object.
Then, after calling our change_status method with known values, we use the Mock class's assert_called_once_with function to ensure that the now function was indeed called exactly once with no arguments. We then call the same assertion on redis.set to prove that it was called with arguments that were formatted as we expected them to be.

The previous example is a good indication of how writing tests can guide our API design. The FlightStatusTracker object looks sensible at first glance; we construct a redis connection when the object is constructed, and we call into it when we need it. When we write tests for this code, however, we discover that even if we mock out the self.redis variable on FlightStatusTracker, the redis connection still has to be constructed. This call actually fails if there is no redis server running, and our tests also fail.

We could solve this problem by mocking out the redis.StrictRedis class to return a mock in a setup method. A better idea, however, might be to rethink our example. Instead of constructing the redis instance inside __init__, perhaps we should allow the user to pass one in, as in the following example:

    def __init__(self, redis_instance=None):
        self.redis = redis_instance if redis_instance else redis.StrictRedis()

This allows us to pass a mock in when we are testing, so the StrictRedis class never gets constructed. However, it also allows any client code that talks to FlightStatusTracker to pass in their own redis instance. There are a variety of reasons they might want to do this. They may have already constructed one for other parts of their code. They may have created an optimized implementation of the redis API. Perhaps they have one that logs metrics to their internal monitoring systems. By writing a unit test, we've uncovered a use case that makes our API more flexible from the start, rather than waiting for clients to demand we support their exotic needs.

This has been a brief introduction to the wonders of mocking code. Mocks have been part of the standard unittest library since Python 3.3, but as you see from these examples, they can also be used with py.test and other libraries. Mocks have other, more advanced features that you may need to take advantage of as your code gets more complicated. For example, you can use the spec argument to invite a mock to imitate an existing class, so that it raises an error if code tries to access an attribute that does not exist on the imitated class. You can also construct mock methods that return different values each time they are called by passing a list as the side_effect argument. The side_effect parameter is quite versatile; you can also use it to execute arbitrary functions when the mock is called, or to raise an exception.
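A minimal sketch of those two features, using a made-up service class purely for illustration:

    from unittest.mock import Mock

    class WeatherService:
        def current_temp(self, city):
            ...

    # spec restricts the mock to attributes that exist on WeatherService
    service = Mock(spec=WeatherService)
    # side_effect as a list: successive calls return successive values,
    # and any exception in the list is raised instead of returned
    service.current_temp.side_effect = [21, 22, ConnectionError("link down")]

    service.current_temp("Edmonton")    # returns 21
    service.current_temp("Victoria")    # returns 22
    # a third call would raise ConnectionError
    # service.tomorrow_temp("Regina")   # AttributeError, thanks to spec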
In general, we should be quite stingy with mocks. If we find ourselves mocking out multiple elements in a given unit test, we may end up testing the mock framework rather than our real code. This serves no useful purpose whatsoever; after all, mocks are well-tested already! If our code is doing a lot of this, it's probably another sign that the API we are testing is poorly designed. Mocks should exist at the boundaries between the code under test and the libraries it interfaces with. If this isn't happening, we may need to change the API so that the boundaries are redrawn in a different place.

How much testing is enough?

We've already established that untested code is broken code. But how can we tell how well our code is tested? How do we know how much of our code is actually being tested and how much is broken? The first question is the more important one, but it's hard to answer. Even if we know we have tested every line of code in our application, we do not know that we have tested it properly. For example, if we write a stats test that only checks what happens when we provide a list of integers, it may still fail spectacularly if used on a list of floats or strings or self-made objects. The onus of designing complete test suites still lies with the programmer.

The second question, how much of our code is actually being tested, is easy to verify. Code coverage is essentially an estimate of the number of lines of code that are executed by a program. If we know that number and the number of lines that are in the program, we can get an estimate of what percentage of the code was really tested, or covered. If we additionally have an indicator as to which lines were not tested, we can more easily write new tests to ensure those lines are less broken.

The most popular tool for testing code coverage is called, memorably enough, coverage.py. It can be installed like most other third-party libraries, using the command pip install coverage.

We don't have space to cover all the details of the coverage API, so we'll just look at a few typical examples. If we have a Python script that runs all our unit tests for us (for example, using unittest.main, a custom test runner, or discover), we can use the following command to perform a coverage analysis:

    coverage run coverage_unittest.py

This command will exit normally, but it creates a file named .coverage that holds the data from the run. We can now use the coverage report command to get an analysis of code coverage:

    coverage report
The output looks like the following, with one row per executed file (the numeric columns will, of course, depend on your own code):

    Name                Stmts   Exec   Cover
    -----------------------------------------
    coverage_unittest     ...    ...     ...
    stats                 ...    ...     ...
    -----------------------------------------
    TOTAL                 ...    ...     ...

This basic report lists the files that were executed (our unit test and the module it imported). The number of lines of code in each file, and the number that were executed by the test, are also listed. The two numbers are then combined to estimate the amount of code coverage. If we pass the -m option to the report command, it will additionally add a Missing column that lists the ranges of lines in the stats module that were not executed during the test run.

The example we just ran the code coverage tool on uses the same stats module we created earlier in the chapter. However, it deliberately uses a single test that fails to exercise a lot of the code in the file. Here's the test:

    from stats import StatsList
    import unittest

    class TestMean(unittest.TestCase):
        def test_mean(self):
            self.assertEqual(StatsList([1, 2, 2, 3, 3, 4]).mean(), 2.5)

    if __name__ == "__main__":
        unittest.main()

This code doesn't test the median or mode functions, which correspond to the line numbers that the coverage output told us were missing.
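To drive the coverage of those missing lines up, tests along the following lines could be added to the same test file (a sketch only; the expected values assume the same six-element list used throughout this chapter):

    class TestMedianMode(unittest.TestCase):
        def setUp(self):
            self.stats = StatsList([1, 2, 2, 3, 3, 4])

        def test_median(self):
            self.assertEqual(self.stats.median(), 2.5)

        def test_mode(self):
            self.assertEqual(self.stats.mode(), [2, 3])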
The textual report is sufficient, but if we use the command coverage html, we can get an even fancier interactive HTML report that we can view in a web browser. The web page even highlights which lines in the source code were and were not tested.

We can use the coverage.py module with py.test as well. We'll need to install the py.test plugin for code coverage, using pip install pytest-coverage. The plugin adds several command-line options to py.test, the most useful being --cover-report, which can be set to html, report, or annotate (the latter actually modifies the original source code to highlight any lines that were not covered).

Unfortunately, if we could somehow run a coverage report on this section of the chapter, we'd find that we have not covered most of what there is to know about code coverage. It is possible to use the coverage API to manage code coverage from within our own programs (or test suites), and coverage.py accepts numerous configuration options that we haven't touched on. We also haven't discussed the difference between statement coverage and branch coverage (the latter is much more useful), or other styles of code coverage.
Bear in mind that while 100 percent code coverage is a lofty goal that we should all strive for, 100 percent coverage is not enough! Just because a statement was tested does not mean that it was tested properly for all possible inputs.

Case study

Let's walk through test-driven development by writing a small, tested, cryptography application. Don't worry, you won't need to understand the mathematics behind complicated modern encryption algorithms such as Threefish or RSA. Instead, we'll be implementing a sixteenth-century algorithm known as the Vigenère cipher. The application simply needs to be able to encode and decode a message, given an encoding keyword, using this cipher.

First, we need to understand how the cipher works if we apply it manually (without a computer). We start with a table like the classic tabula recta, which is too large to reproduce here: it has twenty-six rows, each containing the full alphabet, with each row shifted one letter further along than the row above it, so the row that starts with B continues C, D, E, and so on, wrapping around to end with A.
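If you want to see the full table, a throwaway snippet like this (not part of the case study code) prints the same twenty-six shifted alphabets:

    import string

    alphabet = string.ascii_uppercase
    for shift in range(26):
        # each row is the alphabet rotated one position further than the last
        print(alphabet[shift:] + alphabet[:shift])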
Given a keyword TRAIN, we can encode the message ENCODED IN PYTHON as follows:

1. Repeat the keyword and message together, such that it is easy to map letters from one to the other:

    E N C O D E D I N P Y T H O N
    T R A I N T R A I N T R A I N

2. For each letter in the plaintext, find the row that begins with that letter in the table.
3. Find the column headed by the keyword letter associated with the chosen plaintext letter.
4. The encoded character is at the intersection of this row and column.

For example, the row starting with E intersects the column starting with T at the character X. So, the first letter in the ciphertext is X. The row starting with N intersects the column starting with R at the character E, leading to the ciphertext XE. C intersects A at C, and O intersects I at W. D and N map to Q, while E and T map to X. The full encoded message is XECWQXUIVCRKHWA.

Decoding basically follows the opposite procedure. First, find the row for the shared keyword character (the T row), then find the location in that row where the encoded character (the X) is located. The plaintext character is at the top of the column for that row (the E).

Implementing it

Our program will need an encode method that takes a keyword and plaintext and returns the ciphertext, and a decode method that accepts a keyword and ciphertext and returns the original message. But rather than just writing those methods, let's follow a test-driven development strategy. We'll be using py.test for our unit testing. We need an encode method, and we know what it has to do; let's write a test for that method first:

    def test_encode():
        cipher = VigenereCipher("TRAIN")
        encoded = cipher.encode("ENCODEDINPYTHON")
        assert encoded == "XECWQXUIVCRKHWA"
This test fails, naturally, because we aren't importing a VigenereCipher class anywhere. Let's create a new module to hold that class. Let's start with the following VigenereCipher class:

    class VigenereCipher:
        def __init__(self, keyword):
            self.keyword = keyword

        def encode(self, plaintext):
            return "XECWQXUIVCRKHWA"

If we add a from vigenere_cipher import VigenereCipher line to the top of our test module and run py.test, the preceding test will pass. We've finished our first test-driven development cycle.

Obviously, returning a hardcoded string is not the most sensible implementation of a cipher class, so let's add a second test:

    def test_encode_character():
        cipher = VigenereCipher("TRAIN")
        encoded = cipher.encode("E")
        assert encoded == "X"

Ah, now that test will fail. It looks like we're going to have to work harder. But I just thought of something: what if someone tries to encode a string with spaces or lowercase characters? Before we start implementing the encoding, let's add some tests for these cases, so we don't forget them. The expected behavior will be to remove spaces, and to convert lowercase letters to capitals:

    def test_encode_spaces():
        cipher = VigenereCipher("TRAIN")
        encoded = cipher.encode("ENCODED IN PYTHON")
        assert encoded == "XECWQXUIVCRKHWA"

    def test_encode_lowercase():
        cipher = VigenereCipher("TRain")
        encoded = cipher.encode("encoded in Python")
        assert encoded == "XECWQXUIVCRKHWA"

If we run the new test suite, we find that the new tests pass (they expect the same hardcoded string). But they ought to fail later if we forget to account for these cases.
Now that we have some test cases, let's think about how to implement our encoding algorithm. Writing code to use a table like the one we used in the earlier manual algorithm is possible, but seems complicated, considering that each row is just an alphabet rotated by an offset number of characters. It turns out (I asked Wikipedia) that we can use modular arithmetic to combine the characters instead of doing a table lookup. Given plaintext and keyword characters, if we convert the two letters to their numerical values (with A being 0 and Z being 25), add them together, and take the remainder mod 26, we get the ciphertext character! This is a straightforward calculation, but since it happens on a character-by-character basis, we should probably put it in its own function. And before we do that, we should write a test for the new function:

    from vigenere_cipher import combine_character

    def test_combine_character():
        assert combine_character("E", "T") == "X"
        assert combine_character("N", "R") == "E"

Now we can write the code to make this function work. In all honesty, I had to run the test several times before I got this function completely correct. First I returned an integer, and then I forgot to shift the character back up to the normal ASCII scale from the zero-based scale. Having the test available made it easy to test and debug these errors. This is another bonus of test-driven development.

    def combine_character(plain, keyword):
        plain = plain.upper()
        keyword = keyword.upper()
        plain_num = ord(plain) - ord('A')
        keyword_num = ord(keyword) - ord('A')
        return chr(ord('A') + (plain_num + keyword_num) % 26)

Now that combine_character is tested, I thought we'd be ready to implement our encode function. However, the first thing we want inside that function is a repeating version of the keyword string that is as long as the plaintext. Let's implement a function for that first. Oops, I mean, let's implement the test first!

    def test_extend_keyword():
        cipher = VigenereCipher("TRAIN")
        extended = cipher.extend_keyword(16)
        assert extended == "TRAINTRAINTRAINT"
Before writing this test, I expected to write extend_keyword as a standalone function that accepted a keyword and an integer. But as I started drafting the test, I realized it made more sense to use it as a helper method on the VigenereCipher class. This shows how test-driven development can help design more sensible APIs. Here's the method implementation:

    def extend_keyword(self, number):
        repeats = number // len(self.keyword) + 1
        return (self.keyword * repeats)[:number]

Once again, this took a few runs of the test to get right. I ended up adding a second version of the test, one with fifteen letters and one with sixteen, to make sure it works whether or not the integer division comes out even.

Now we're finally ready to write our encode method:

    def encode(self, plaintext):
        cipher = []
        keyword = self.extend_keyword(len(plaintext))
        for p, k in zip(plaintext, keyword):
            cipher.append(combine_character(p, k))
        return "".join(cipher)

That looks correct. Our test suite should pass now, right? Actually, if we run it, we'll find that two tests are still failing. We totally forgot about the spaces and lowercase characters! It is a good thing we wrote those tests to remind us. We'll have to add this line at the beginning of the method:

    plaintext = plaintext.replace(" ", "").upper()

If we have an idea about a corner case in the middle of implementing something, we can create a test describing that idea. We don't even have to implement the test; we can just run assert False to remind us to implement it later. The failing test will never let us forget the corner case, and it can't be ignored the way a filed task can be. If it takes a while to get around to fixing the implementation, we can mark the test as an expected failure.

Now all the tests pass successfully. This chapter is getting pretty long, so we'll condense the examples for decoding. Here are a couple of tests:

    def test_separate_character():
        assert separate_character("X", "T") == "E"
        assert separate_character("E", "R") == "N"

    def test_decode():
        cipher = VigenereCipher("TRAIN")
        decoded = cipher.decode("XECWQXUIVCRKHWA")
        assert decoded == "ENCODEDINPYTHON"

Here's the separate_character function:

    def separate_character(cypher, keyword):
        cypher = cypher.upper()
        keyword = keyword.upper()
        cypher_num = ord(cypher) - ord('A')
        keyword_num = ord(keyword) - ord('A')
        return chr(ord('A') + (cypher_num - keyword_num) % 26)

And the decode method:

    def decode(self, ciphertext):
        plain = []
        keyword = self.extend_keyword(len(ciphertext))
        for p, k in zip(ciphertext, keyword):
            plain.append(separate_character(p, k))
        return "".join(plain)

These methods have a lot of similarity to those used for encoding. The great thing about having all these tests written and passing is that we can now go back and modify our code, knowing it is still safely passing the tests. For example, if we replace our existing encode and decode methods with these refactored methods, our tests still pass:

    def _code(self, text, combine_func):
        text = text.replace(" ", "").upper()
        combined = []
        keyword = self.extend_keyword(len(text))
        for p, k in zip(text, keyword):
            combined.append(combine_func(p, k))
        return "".join(combined)

    def encode(self, plaintext):
        return self._code(plaintext, combine_character)

    def decode(self, ciphertext):
        return self._code(ciphertext, separate_character)
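As a quick sanity check of the finished class, a round trip at the interactive prompt (expected results shown as comments) behaves like this:

    cipher = VigenereCipher("TRAIN")
    ciphertext = cipher.encode("ENCODED IN PYTHON")
    print(ciphertext)                  # XECWQXUIVCRKHWA
    print(cipher.decode(ciphertext))   # ENCODEDINPYTHON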
This is the final benefit of test-driven development, and the most important. Once the tests are written, we can improve our code as much as we like and be confident that our changes didn't break anything we have been testing for. Furthermore, we know exactly when our refactor is finished: when the tests all pass.

Of course, our tests may not comprehensively test everything we need them to; maintenance or code refactoring can still cause undiagnosed bugs that don't show up in testing. Automated tests are not foolproof. If bugs do occur, however, it is still possible to follow a test-driven plan. Step one is to write a test (or multiple tests) that duplicates or "proves" that the bug in question is occurring. This will, of course, fail. Then write the code to make the tests stop failing. If the tests were comprehensive, the bug will be fixed, and we will know if it ever happens again as soon as we run the test suite.

Finally, we can try to determine how well our tests operate on this code. With the py.test coverage plugin installed, py.test --coverage-report=report tells us that our test suite has 100 percent code coverage. This is a great statistic, but we shouldn't get too cocky about it. Our code hasn't been tested when encoding messages that contain numbers, and its behavior with such inputs is thus undefined.

Exercises

Practice test-driven development. That is your first exercise. It's easier to do this if you're starting a new project, but if you have existing code you need to work on, you can start by writing tests for each new feature you implement. This can become frustrating as you become more enamored with automated tests. The old, untested code will start to feel rigid and tightly coupled, and will become uncomfortable to maintain; you'll start feeling like changes you make are breaking the code and you have no way of knowing, for lack of tests. But if you start small, adding tests improves the codebase over time.

So, to get your feet wet with test-driven development, start a fresh project. Once you've started to appreciate the benefits (you will), and realize that the time spent writing tests is quickly regained in terms of more maintainable code, you'll want to start writing tests for existing code. This is when you should start doing it, not before. Writing tests for code that we "know" works is boring. It is hard to get interested in the project until you realize just how broken the code we thought was working really is.

Try writing the same set of tests using both the built-in unittest module and py.test. Which do you prefer? unittest is more similar to test frameworks in other languages, while py.test is arguably more Pythonic. Both allow us to write object-oriented tests and to test object-oriented programs with ease.
We used py.test in our case study, but we didn't touch on any features that wouldn't have been easily testable using unittest. Try adapting the tests to use test skipping or funcargs. Try the various setup and teardown methods, and compare their use to funcargs. Which feels more natural to you?

In our case study, we have a lot of tests that use a similar VigenereCipher object; try reworking this code to use a funcarg. How many lines of code does it save?

Try running a coverage report on the tests you've written. Did you miss testing any lines of code? Even if you have 100 percent coverage, have you tested all the possible inputs? If you're doing test-driven development, 100 percent coverage should follow quite naturally, as you will write a test before the code that satisfies that test. However, if writing tests for existing code, it is more likely that there will be edge conditions that go untested.

Think carefully about the values that are somehow different: empty lists when you expect full ones; zero or one or infinity compared to intermediate integers; floats that don't round to an exact decimal place; strings when you expected numerals; or the ubiquitous None value when you expected something meaningful. If your tests cover such edge cases, your code will be in good shape.

Summary

We have finally covered the most important topic in Python programming: automated testing. Test-driven development is considered a best practice. The standard library unittest module provides a great out-of-the-box solution for testing, while the py.test framework has some more Pythonic syntaxes. Mocks can be used to emulate complex classes in our tests. Code coverage gives us an estimate of how much of our code is being run by our tests, but it does not tell us that we have tested the right things.

In the next chapter, we'll jump into a completely different topic: concurrency.
Concurrency is the art of making a computer do (or appear to do) multiple things at once. Historically, this meant asking the processor to switch between different tasks many times per second. In modern systems, it can also literally mean doing two or more things simultaneously on separate processor cores.

Concurrency is not inherently an object-oriented topic, but Python's concurrent systems are built on top of the object-oriented constructs we've covered throughout the book. This chapter will introduce you to the following topics:

- Threads
- Multiprocessing
- Futures
- AsyncIO

Concurrency is complicated. The basic concepts are fairly simple, but the bugs that can occur are notoriously difficult to track down. However, for many projects, concurrency is the only way to get the performance we need. Imagine if a web server couldn't respond to a user's request until the previous one was completed! We won't be going into all the details of just how hard it is (another full book would be required), but we'll see how to do basic concurrency in Python, and some of the most common pitfalls to avoid.
Threads

Most often, concurrency is created so that work can continue happening while the program is waiting for I/O to happen. For example, a server can start processing a new network request while it waits for data from a previous request to arrive. An interactive program might render an animation or perform a calculation while waiting for the user to press a key. Bear in mind that while even a fast typist enters only a few hundred characters per minute, a computer can perform billions of instructions per second. Thus, a ton of processing can happen between individual key presses, even when typing quickly.

It's theoretically possible to manage all this switching between activities within your program, but it would be virtually impossible to get right. Instead, we can rely on Python and the operating system to take care of the tricky switching part, while we create objects that appear to be running independently, but simultaneously. These objects are called threads; in Python they have a very simple API. Let's take a look at a basic example:

    from threading import Thread

    class InputReader(Thread):
        def run(self):
            self.line_of_text = input()

    print("Enter some text and press enter: ")
    thread = InputReader()
    thread.start()

    count = result = 1
    while thread.is_alive():
        result = count * count
        count += 1

    print("calculated squares up to {0} * {0} = {1}".format(
        count, result))
    print("while you typed '{}'".format(thread.line_of_text))

This example runs two threads. Can you see them? Every program has one thread, called the main thread. The code that executes from the beginning is happening in this thread. The second thread, more obviously, exists as the InputReader class.

To construct a thread, we must extend the Thread class and implement the run method. Any code inside the run method (or that is called from within that method) is executed in a separate thread.
The new thread doesn't start running until we call the start() method on the object. In this case, the thread immediately pauses to wait for input from the keyboard. In the meantime, the original thread continues executing at the point start was called. It starts calculating squares inside a while loop. The condition in the while loop checks whether the InputReader thread has exited its run method yet; once it does, it outputs some summary information to the screen.

If we run the example and type the string "hello world", the output looks as follows (the exact numbers depend on typing speed and processor speed):

    Enter some text and press enter:
    hello world
    calculated squares up to ... * ... = ...
    while you typed 'hello world'

You will, of course, calculate more or fewer squares while typing the string, as the numbers are related to both our relative typing speeds and to the processor speeds of the computers we are running.

A thread only starts running in concurrent mode when we call the start method. If we want to take out the concurrent call to see how it compares, we can call thread.run() in the place that we originally called thread.start(). The output is telling:

    Enter some text and press enter:
    hello world
    calculated squares up to 1 * 1 = 1
    while you typed 'hello world'

In this case, the thread never becomes alive and the while loop never executes. We wasted a lot of CPU power sitting idle while we were typing.

There are a lot of different patterns for using threads effectively. We won't be covering all of them, but we will look at a common one so we can learn about the join method. Let's check the current temperature in the capital city of every province in Canada:

    from threading import Thread
    import json
    from urllib.request import urlopen
    import time

    cities = [
        'Edmonton', 'Victoria', 'Winnipeg', 'Fredericton',
        "St. John's", 'Halifax', 'Toronto', 'Charlottetown',
        'Quebec City', 'Regina']

    class TempGetter(Thread):
        def __init__(self, city):
            super().__init__()
            self.city = city

        def run(self):
            url_template = (
                'http://api.openweathermap.org/data/2.5/'
                'weather?q={},CA&units=metric')
            response = urlopen(url_template.format(self.city))
            data = json.loads(response.read().decode())
            self.temperature = data['main']['temp']

    threads = [TempGetter(c) for c in cities]
    start = time.time()
    for thread in threads:
        thread.start()

    for thread in threads:
        thread.join()

    for thread in threads:
        print(
            "it is {0.temperature:.0f}°C in {0.city}".format(thread))
    print(
        "Got {} temps in {} seconds".format(
            len(threads), time.time() - start))

This code constructs 10 threads before starting them. Notice how we can override the constructor to pass data into the Thread object, remembering to call super to ensure the Thread is properly initialized. Pay attention to this: the new thread isn't running yet, so the __init__ method is still executing from inside the main thread. Data we construct in one thread is accessible from other running threads.

After the threads have been started, we loop over them again, calling the join() method on each. This method essentially says "wait for the thread to complete before doing anything". We call this ten times in sequence; the for loop won't exit until all ten threads have completed.
At this point, we can print the temperature that was stored on each thread object. Notice once again that we can access data that was constructed within the thread from the main thread. In threads, all state is shared by default.

Executing this code on my broadband connection takes about two tenths of a second, and prints one temperature line for each of the ten capitals:

    it is ...°C in Edmonton
    it is ...°C in Victoria
    (one line for each of the remaining cities)
    Got 10 temps in 0.2 seconds

If we run this code in a single thread (by changing the start() call to run() and commenting out the join() call), it takes closer to 2 seconds, because each 0.2 second request has to complete before the next one begins. This speedup of roughly 10 times shows just how useful concurrent programming can be.

The many problems with threads

Threads can be useful, especially in other programming languages, but modern Python programmers tend to avoid them for several reasons. As we'll see, there are other ways to do concurrent programming that are receiving more attention from the Python developers. Let's discuss some of these pitfalls before moving on to more salient topics.

Shared memory

The main problem with threads is also their primary advantage. Threads have access to all the memory and thus all the variables in the program. This can too easily cause inconsistencies in the program state. Have you ever encountered a room where a single light has two switches and two different people turn them on at the same time? Each person (thread) expects their action to turn the lamp (a variable) on, but the resulting value (the lamp is off) is inconsistent with those expectations. Now imagine if those two threads were transferring funds between bank accounts or managing the cruise control in a vehicle.
The solution to this problem in threaded programming is to "synchronize" access to any code that reads or writes a shared variable. There are a few different ways to do this, but we won't go into them here, so we can focus on more Pythonic constructs. The synchronization solution works, but it is way too easy to forget to apply it. Worse, bugs due to inappropriate use of synchronization are really hard to track down because the order in which threads perform operations is inconsistent. We can't easily reproduce the error. Usually, it is safest to force communication between threads to happen using a lightweight data structure that already uses locks appropriately. Python offers the queue.Queue class to do this; its functionality is basically the same as the multiprocessing.Queue that we will discuss in the next section.

In some cases, these disadvantages might be outweighed by the one advantage of allowing shared memory: it's fast. If multiple threads need access to a huge data structure, shared memory can provide that access quickly. However, this advantage is usually nullified by the fact that, in Python, it is impossible for two threads running on different CPU cores to be performing calculations at exactly the same time. This brings us to our second problem with threads.

The global interpreter lock

In order to efficiently manage memory, garbage collection, and calls to machine code in libraries, Python has a utility called the global interpreter lock, or GIL. It's impossible to turn off, and it means that threads are useless in Python for the one thing they excel at in other languages: parallel processing. The GIL's primary effect, for our purposes, is to prevent any two threads from doing work at the exact same time, even if they have work to do. In this case, "doing work" means using the CPU, so it's perfectly OK for multiple threads to access the disk or network; the GIL is released as soon as the thread starts to wait for something.

The GIL is quite highly disparaged, mostly by people who don't understand what it is or all the benefits it brings to Python. It would definitely be nice if our language didn't have this restriction, but the Python reference developers have determined that, for now at least, it brings more value than it costs. It makes the reference implementation easier to maintain and develop, and during the single-core processor days when Python was originally developed, it actually made the interpreter faster. The net result of the GIL, however, is that it limits the benefits that threads bring us, without alleviating the costs.
While the GIL is a problem in the reference implementation of Python that most people use, it has been solved in some of the nonstandard implementations, such as IronPython and Jython. Unfortunately, at the time of publication, none of these support Python 3.

Thread overhead

One final limitation of threads, as compared to the asynchronous system we will be discussing later, is the cost of maintaining each thread. Each thread takes up a certain amount of memory (both in the Python process and in the operating system kernel) to record the state of that thread. Switching between the threads also uses a (small) amount of CPU time. This work happens seamlessly without any extra coding (we just have to call start() and the rest is taken care of), but the work still has to happen somewhere.

This can be alleviated somewhat by structuring our workload so that threads can be reused to perform multiple jobs. Python provides a ThreadPool feature to handle this. It is shipped as part of the multiprocessing library and behaves identically to the ProcessPool that we will discuss shortly, so let's defer that discussion until the next section.

Multiprocessing

The multiprocessing API was originally designed to mimic the thread API. However, it has evolved, and in recent versions of Python it supports more features more robustly. The multiprocessing library is designed for when CPU-intensive jobs need to happen in parallel and multiple cores are available (and given that a four-core Raspberry Pi can currently be purchased for about $35, there are usually multiple cores available). Multiprocessing is not useful when the processes spend a majority of their time waiting on I/O (for example, network, disk, database, or keyboard), but it is the way to go for parallel computation.

The multiprocessing module spins up new operating system processes to do the work. On Windows machines, this is a relatively expensive operation; on Linux, processes are implemented in the kernel the same way threads are, so the overhead is limited to the cost of running separate Python interpreters in each process.
Let's try to parallelize a compute-heavy operation using similar constructs to those provided by the threading API:

    from multiprocessing import Process, cpu_count
    import time
    import os

    class MuchCPU(Process):
        def run(self):
            print(os.getpid())
            for i in range(200000000):
                pass

    if __name__ == '__main__':
        procs = [MuchCPU() for f in range(cpu_count())]
        t = time.time()
        for p in procs:
            p.start()
        for p in procs:
            p.join()
        print('work took {} seconds'.format(time.time() - t))

This example just ties up the CPU for 200 million iterations. You may not consider this to be useful work, but it's a cold day and I appreciate the heat my laptop generates under such load.

The API should be familiar; we implement a subclass of Process (instead of Thread) and implement a run method. This method prints out the process ID (a unique number the operating system assigns to each process on the machine) before doing some intense (if misguided) work.

Pay special attention to the if __name__ == '__main__': guard around the module-level code, which prevents it from running if the module is being imported, rather than run as a program. This is good practice in general, but when using multiprocessing on some operating systems, it is essential. Behind the scenes, multiprocessing may have to import the module inside the new process in order to execute the run() method. If we allowed the entire module to execute at that point, it would start creating new processes recursively until the operating system ran out of resources.

We construct one process for each processor core on our machine, then start and join each of those processes. On my quad-core laptop, the output looks like this:
    (four process IDs, one per line)
    work took ... seconds

The first four lines are the process IDs that were printed inside each MuchCPU instance. The last line shows how long the 200 million iterations took to run on my machine. During those seconds, my process monitor indicated that all four of my cores were running at 100 percent.

If we subclass threading.Thread instead of multiprocessing.Process in MuchCPU, the output looks like this:

    (the same process ID, printed four times)
    work took ... seconds

This time, the four threads are running inside the same process and take close to three times as long to run. This is the cost of the global interpreter lock; in other languages or implementations of Python, the threaded version would run at least as fast as the multiprocessing version. We might expect it to be four times as long, but remember that many other programs are running on my laptop. In the multiprocessing version, these programs also need a share of the four CPUs. In the threading version, those programs can use the other three CPUs instead.

Multiprocessing pools

In general, there is no reason to have more processes than there are processors on the computer. There are a few reasons for this:

- Only cpu_count() processes can run simultaneously
- Each process consumes resources with a full copy of the Python interpreter
- Communication between processes is expensive
- Creating processes takes a non-zero amount of time

Given these constraints, it makes sense to create at most cpu_count() processes when the program starts and then have them execute tasks as needed. It is not difficult to implement a basic series of communicating processes that does this, but it can be tricky to debug, test, and get right. Of course, Python being Python, we don't have to do all this work, because the Python developers have already done it for us in the form of multiprocessing pools.
The primary advantage of pools is that they abstract away the overhead of figuring out what code is executing in the main process and which code is running in the subprocess. As with the threading API that multiprocessing mimics, it can often be hard to remember who is executing what. The pool abstraction restricts the number of places in which code in different processes interacts with each other, making it much easier to keep track of.

Pools also seamlessly hide the process of passing data between processes. Using a pool looks much like a function call: you pass data into a function, it is executed in another process or processes, and when the work is done, a value is returned. It is important to understand that under the hood, a lot of work is being done to support this: objects in one process are being pickled and passed into a pipe; another process retrieves data from the pipe and unpickles it; work is done in the subprocess and a result is produced; the result is pickled and passed into a pipe; eventually, the original process unpickles it and returns it.

All this pickling and passing data into pipes takes time and memory. Therefore, it is ideal to keep the amount and size of data passed into and returned from the pool to a minimum, and it is only advantageous to use the pool if a lot of processing has to be done on the data in question.

Armed with this knowledge, the code to make all this machinery work is surprisingly simple. Let's look at the problem of calculating all the prime factors of a list of random numbers. This is a common and expensive part of a variety of cryptography algorithms (not to mention attacks on those algorithms!). It requires years of processing power to crack the extremely large numbers used to secure your bank accounts. The following implementation, while readable, is not at all efficient, but that's OK, because we want to see it using lots of CPU time:

    import random
    from multiprocessing.pool import Pool

    def prime_factor(value):
        factors = []
        for divisor in range(2, value - 1):
            quotient, remainder = divmod(value, divisor)
            if not remainder:
                factors.extend(prime_factor(divisor))
                factors.extend(prime_factor(quotient))
                break
        else:
            factors = [value]
        return factors

    if __name__ == '__main__':
        pool = Pool()

        to_factor = [
            random.randint(100000, 50000000) for i in range(20)
        ]
        results = pool.map(prime_factor, to_factor)
        for value, factors in zip(to_factor, results):
            print("The factors of {} are {}".format(value, factors))

Let's focus on the parallel processing aspects, as the brute force recursive algorithm for calculating factors is pretty clear. We first construct a multiprocessing Pool instance. By default, this pool creates a separate process for each of the CPU cores in the machine it is running on.

The map method accepts a function and an iterable. The pool pickles each of the values in the iterable and passes it into an available process, which executes the function on it. When that process is finished doing its work, it pickles the resulting list of factors and passes it back to the pool. Once all the processes have finished their work (which could take some time), the results list is passed back to the original process, which has been waiting patiently for all this work to complete.

It is often more useful to use the similar map_async method, which returns immediately even though the processes are still working. In that case, the results variable would not be a list of values, but a promise to return a list of values later by calling results.get(). This promise object also has methods such as ready() and wait(), which allow us to check whether all the results are in yet.

Alternatively, if we don't know all the values we want to get results for in advance, we can use the apply_async method to queue up a single job. If the pool has a process that isn't already working, it will start immediately; otherwise, it will hold onto the task until there is a free process available.

Pools can also be closed, which refuses to take any further tasks, but processes everything currently in the queue, or terminated, which goes one step further and refuses to start any jobs still on the queue, although any jobs currently running are still permitted to complete.
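For a feel of the asynchronous variants, here is a sketch that reuses the prime_factor function above with map_async (the numeric ranges are the same arbitrary ones used earlier):

    if __name__ == '__main__':
        pool = Pool()
        to_factor = [
            random.randint(100000, 50000000) for i in range(20)
        ]
        # map_async returns immediately with a promise-like result object
        async_results = pool.map_async(prime_factor, to_factor)
        pool.close()             # no further jobs will be submitted
        print("the main process is free to do other work here")
        async_results.wait()     # block until every job has finished
        for value, factors in zip(to_factor, async_results.get()):
            print("The factors of {} are {}".format(value, factors))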
Queues

If we need more control over communication between processes, we can use a Queue. Queue data structures are useful for sending messages from one process into one or more other processes. Any picklable object can be sent into a Queue, but remember that pickling can be a costly operation, so keep such objects small. To illustrate queues, let's build a little search engine for text content that stores all relevant entries in memory.

This is not the most sensible way to build a text-based search engine, but I have used this pattern to query numerical data that needed CPU-intensive processes to construct a chart that was then rendered to the user.

This particular search engine scans all files in the current directory in parallel. A process is constructed for each core on the CPU. Each of these is instructed to load some of the files into memory. Let's look at the function that does the loading and searching:

    def search(paths, query_q, results_q):
        lines = []
        for path in paths:
            lines.extend(l.strip() for l in path.open())

        query = query_q.get()
        while query:
            results_q.put([l for l in lines if query in l])
            query = query_q.get()

Remember, this function is run in a different process (in fact, it is run in cpu_count() different processes) from the main thread. It is passed a list of path.path objects, and two multiprocessing.Queue objects: one for incoming queries and one to send outgoing results. These queues have a similar interface to the Queue class we discussed in the chapter on Python data structures. However, they are doing extra work to pickle the data in the queue and pass it into the subprocess over a pipe. These two queues are set up in the main process and passed through the pipes into the search function inside the child processes.

The search code is pretty dumb, both in terms of efficiency and of capabilities; it loops over every line stored in memory and puts the matching ones in a list. The list is placed on a queue and passed back to the main process.

Let's look at the main process, which sets up these queues:

    if __name__ == '__main__':
        from multiprocessing import Process, Queue, cpu_count
        from path import path
        cpus = cpu_count()
        pathnames = [f for f in path('.').listdir() if f.isfile()]
        paths = [pathnames[i::cpus] for i in range(cpus)]
        query_queues = [Queue() for p in range(cpus)]
        results_queue = Queue()

        search_procs = [
            Process(target=search, args=(p, q, results_queue))
            for p, q in zip(paths, query_queues)
        ]
        for proc in search_procs:
            proc.start()

For easier description, let's assume cpu_count is four. Notice how the import statements are placed inside the if guard; this is a small optimization that prevents them from being imported in each subprocess (where they aren't needed) on certain operating systems. We list all the paths in the current directory and then split the list into four approximately equal parts. We also construct a list of four Queue objects to send data into each subprocess. Finally, we construct a single results queue; this is passed into all four of the subprocesses. Each of them can put data into the queue, and it will be aggregated in the main process.

Now let's look at the code that makes a search actually happen:

        for q in query_queues:
            q.put("def")
            q.put(None)  # signal process termination

        for i in range(cpus):
            for match in results_queue.get():
                print(match)

        for proc in search_procs:
            proc.join()

This code performs a single search for "def" (because it's a common phrase in a directory full of Python files!). In a more production-ready system, we would probably hook a socket up to this search code. In that case, we'd have to change the inter-process protocol so that the message coming back on the return queue contained enough information to identify which of many queries the results were attached to.

This use of queues is actually a local version of what could become a distributed system. Imagine if the searches were being sent out to multiple computers and then recombined. We won't discuss it here, but the multiprocessing module includes a manager class that can take a lot of the boilerplate out of the preceding code. There is even a version of the multiprocessing manager that can manage subprocesses on remote systems, to construct a rudimentary distributed application. Check the Python multiprocessing documentation if you are interested in pursuing this further.
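As a taste of what the manager provides, here is a minimal, self-contained sketch (not the search example rewritten) in which several processes append to a single manager-backed list without any explicit queue handling:

    from multiprocessing import Manager, Process

    def record(results, worker_id):
        # proxies returned by a Manager can be mutated safely
        # from several processes at once
        results.append("worker {} done".format(worker_id))

    if __name__ == '__main__':
        with Manager() as manager:
            results = manager.list()
            procs = [
                Process(target=record, args=(results, i))
                for i in range(4)
            ]
            for p in procs:
                p.start()
            for p in procs:
                p.join()
            print(list(results))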
The problems with multiprocessing

As threads do, multiprocessing also has problems, some of which we have already discussed. There is no best way to do concurrency; this is especially true in Python. We always need to examine the parallel problem to figure out which of the many available solutions is the best one for that problem. Sometimes, there is no best solution.

In the case of multiprocessing, the primary drawback is that sharing data between processes is very costly. As we have discussed, all communication between processes, whether by queues, pipes, or a more implicit mechanism, requires pickling the objects. Excessive pickling quickly dominates processing time. Multiprocessing works best when relatively small objects are passed between processes and a tremendous amount of work needs to be done on each one. On the other hand, if no communication between processes is required, there may not be any point in using the module at all; we can spin up four separate Python processes and use them independently.

The other major problem with multiprocessing is that, like threads, it can be hard to tell which process a variable or method is being accessed in. In multiprocessing, if you access a variable from another process, it will usually overwrite the variable in the currently running process while the other process keeps the old value. This is really confusing to maintain, so don't do it.

Futures

Let's start looking at a more asynchronous way of doing concurrency. Futures wrap either multiprocessing or threading depending on what kind of concurrency we need (tending towards I/O versus tending towards CPU). They don't completely solve the problem of accidentally altering shared state, but they allow us to structure our code such that it is easier to track down when we do so. Futures provide distinct boundaries between the different threads or processes. Similar to the multiprocessing pool, they are useful for "call and answer" type interactions, in which processing can happen in another thread and then, at some point in the future (they are aptly named, after all), you can ask it for the result. It's really just a wrapper around multiprocessing pools and thread pools, but it provides a cleaner API and encourages nicer code.

A future is an object that basically wraps a function call. That function call is run in the background, in a thread or process. The future object has methods to check whether the future has completed and to get the result after it has completed.
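Before the larger example, here is a minimal sketch of that future API; the slow_square function is just an illustrative placeholder for real work:

from concurrent.futures import ThreadPoolExecutor
import time

def slow_square(x):
    time.sleep(1)            # stand-in for real work or blocking I/O
    return x * x

with ThreadPoolExecutor(max_workers=2) as executor:
    future = executor.submit(slow_square, 7)   # starts running in a background thread
    print(future.done())     # probably False: the work is still in progress
    print(future.result())   # blocks until the result is ready, then prints 49
    print(future.done())     # True

The submit() call returns immediately; only result() waits for the background work to finish.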
3,634 | let' do another file search example in the last sectionwe implemented version of the unix grep command this timelet' do simple version of the find command the example will search the entire filesystem for paths that contain given string of charactersfrom concurrent futures import threadpoolexecutor from pathlib import path from os path import sep as pathsep from collections import deque def find_files(pathquery_string)subdirs [for in path iterdir()full_path str( absolute()if is_dir(and not is_symlink()subdirs append(pif query_string in full_pathprint(full_pathreturn subdirs query pyfutures deque(basedir path(pathsepabsolute(with threadpoolexecutor(max_workers= as executorfutures appendexecutor submit(find_filesbasedirquery)while futuresfuture futures popleft(if future exception()continue elif future done()subdirs future result(for subdir in subdirsfutures append(executor submitfind_filessubdirquery)elsefutures append(future |
This code consists of a function named find_files that is run in a separate thread (or process, if we used ProcessPoolExecutor). There isn't anything particularly special about this function, but note how it does not access any global variables. All interaction with the external environment is passed into the function or returned from it. This is not a technical requirement, but it is the best way to keep your brain inside your skull when programming with futures.

Accessing outside variables without proper synchronization results in something called a race condition. For example, imagine two concurrent writes trying to increment an integer counter. They start at the same time and both read the current value (say, 10); then they both increment the value and write back the same result (11). But two increments happened, so the counter should have ended up two higher (12). Modern wisdom is that the easiest way to avoid this is to keep as much state as possible private and share it through known-safe constructs, such as queues.

We set up a couple of variables before we get started; we'll be searching for all files that contain the characters '.py' for this example. We have a queue of futures that we'll discuss shortly. The basedir variable points to the root of the filesystem: '/' on Unix machines and probably C:\ on Windows.

First, let's have a short course on search theory. This algorithm implements breadth first search in parallel. Rather than recursively searching every directory using a depth first search, it adds all the subdirectories in the current folder to the queue, then all the subdirectories of each of those folders, and so on.

The meat of the program is known as an event loop. We can construct a ThreadPoolExecutor as a context manager so that it is automatically cleaned up and its threads closed when it is done. It requires a max_workers argument to indicate the number of threads running at a time; if more than this many jobs are submitted, it queues up the rest until a worker thread becomes available. When using ProcessPoolExecutor, this is normally constrained to the number of CPUs on the machine, but with threads it can be much higher, depending on how many are waiting on I/O at a time. Each thread takes up a certain amount of memory, so the number shouldn't be too high; it doesn't take all that many threads before the speed of the disk, rather than the number of parallel requests, becomes the bottleneck.
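To see such a race in action, here is a small, self-contained sketch (separate from the search example) that increments a shared counter from several threads. Whether the unsafe version actually loses updates depends on the interpreter and timing, which is exactly the problem; the locked version is always correct.

import threading

counter = 0

def unsafe_increment(n):
    global counter
    for _ in range(n):
        counter += 1              # read-modify-write: not atomic across threads

def safe_increment(n, lock):
    global counter
    for _ in range(n):
        with lock:                # only one thread may update at a time
            counter += 1

threads = [threading.Thread(target=unsafe_increment, args=(100000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                    # may be less than 400000: some updates were lost

counter = 0
lock = threading.Lock()
threads = [threading.Thread(target=safe_increment, args=(100000, lock)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                    # always 400000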
3,636 | once the executor has been constructedwe submit job to it using the root directory the submit(method immediately returns future objectwhich promises to give us result eventually the future is placed on the queue the loop then repeatedly removes the first future from the queue and inspects it if it is still runningit gets added back to the end of the queue otherwisewe check if the function raised an exception with call to future exception(if it didwe just ignore it (it' usually permission erroralthough real app would need to be more careful about what the exception wasif we didn' check this exception hereit would be raised when we called result(and could be handled through the normal try except mechanism assuming no exception occurredwe can call result(to get the return value of the function call since the function returns list of subdirectories that are not symbolic links (my lazy way of preventing an infinite loop)result(returns the same thing these new subdirectories are submitted to the executor and the resulting futures are tossed onto the queue to have their contents searched in later iteration so that' all that is required to develop future-based / -bound application under the hoodit' using the same thread or process apis we've already discussedbut it provides more understandable interface and makes it easier to see the boundaries between concurrently running functions (just don' try to access global variables from inside the future!asyncio asyncio is the current state of the art in python concurrent programming it combines the concept of futures and an event loop with the coroutines we discussed in the iterator pattern the result is about as elegant and easy to understand as it is possible to get when writing concurrent codethough that isn' saying lotasyncio can be used for few different concurrent tasksbut it was specifically designed for network / most networking applicationsespecially on the server sidespend lot of time waiting for data to come in from the network this can be solved by handling each client in separate threadbut threads use up memory and other resources asyncio uses coroutines instead of threads the library also provides its own event loopobviating the need for the several lines long while loop in the previous example howeverevent loops come with cost when we run code in an async task on the event loopthat code must return immediatelyblocking neither on / nor on long-running calculations this is minor thing when writing our own codebut it means that any standard library or third-party functions that block on / have to have non-blocking versions created |
3,637 | asyncio solves this by creating set of coroutines that use the yield from syntax to return control to the event loop immediately the event loop takes care of checking whether the blocking call has completed and performing any subsequent tasksjust like we did manually in the previous section asyncio in action canonical example of blocking function is the time sleep call let' use the asynchronous version of this call to illustrate the basics of an asyncio event loopimport asyncio import random @asyncio coroutine def random_sleep(counter)delay random random( print("{sleeps for { fsecondsformat(counterdelay)yield from asyncio sleep(delayprint("{awakensformat(counter)@asyncio coroutine def five_sleepers()print("creating five tasks"tasks asyncio async(random_sleep( )for in range( )print("sleeping after starting five tasks"yield from asyncio sleep( print("waking and waiting for five tasks"yield from asyncio wait(tasksasyncio get_event_loop(run_until_complete(five_sleepers()print("done five tasks"this is fairly basic examplebut it covers several features of asyncio programming it is easiest to understand in the order that it executeswhich is more or less bottom to top the second last line gets the event loop and instructs it to run future until it is finished the future in question is named five_sleepers once that future has done its workthe loop will exit and our code will terminate as asynchronous programmerswe don' need to know too much about what happens inside that run_ until_complete callbut be aware that lot is going on it' souped up coroutine version of the futures loop we wrote in the previous that knows how to deal with iterationexceptionsfunction returnsparallel callsand more |
3,638 | now look little more closely at that five_sleepers future ignore the decorator for few paragraphswe'll get back to it the coroutine first constructs five instances of the random_sleep future the resulting futures are wrapped in an asyncio async taskwhich adds them to the loop' task queue so they can execute concurrently when control is returned to the event loop that control is returned whenever we call yield from in this casewe call yield from asyncio sleep to pause execution of this coroutine for two seconds during this breakthe event loop executes the tasks that it has queued upnamely the five random_sleep futures these coroutines each print starting messagethen send control back to the event loop for specific amount of time if any of the sleep calls inside random_sleep are shorter than two secondsthe event loop passes control back into the relevant futurewhich prints its awakening message before returning when the sleep call inside five_sleepers wakes upit executes up to the next yield from callwhich waits for the remaining random_sleep tasks to complete when all the sleep calls have finished executingthe random_sleep tasks returnwhich removes them from the event queue once all five of those are completedthe asyncio wait call and then the five_sleepers method also return finallysince the event queue is now emptythe run_until_complete call is able to terminate and the program ends the asyncio coroutine decorator mostly just documents that this coroutine is meant to be used as future in an event loop in this casethe program would run just fine without the decorator howeverthe asyncio coroutine decorator can also be used to wrap normal function (one that doesn' yieldso that it can be treated as future in this casethe entire function executes before returning control to the event loopthe decorator just forces the function to fulfill the coroutine api so the event loop knows how to handle it reading an asyncio future an asyncio coroutine executes each line in order until it encounters yield from statementat which point it returns control to the event loop the event loop then executes any other tasks that are ready to runincluding the one that the original coroutine was waiting on whenever that child task completesthe event loop sends the result back into the coroutine so that it can pick up executing until it encounters another yield from statement or returns this allows us to write code that executes synchronously until we explicitly need to wait for something this removes the nondeterministic behavior of threadsso we don' need to worry nearly so much about shared state |
3,639 | it' still good idea to avoid accessing shared state from inside coroutine it makes your code much easier to reason about more importantlyeven though an ideal world might have all asynchronous execution happen inside coroutinesthe reality is that some futures are executed behind the scenes inside threads or processes stick to "share nothingphilosophy to avoid ton of difficult bugs in additionasyncio allows us to collect logical sections of code together inside single coroutineeven if we are waiting for other work elsewhere as specific instanceeven though the yield from asyncio sleep call in the random_sleep coroutine is allowing ton of stuff to happen inside the event loopthe coroutine itself looks like it' doing everything in order this ability to read related pieces of asynchronous code without worrying about the machinery that waits for tasks to complete is the primary benefit of the asyncio module asyncio for networking asyncio was specifically designed for use with network socketsso let' implement dns server more accuratelylet' implement one extremely basic feature of dns server the domain name system' basic purpose is to translate domain namessuch as www amazon com into ip addresses such as it has to be able to perform many types of queries and know how to contact other dns servers if it doesn' have the answer required we won' be implementing any of thisbut the following example is able to respond directly to standard dns query to look up ips for my three most recent employersimport asyncio from contextlib import suppress ip_map 'facebook com ' ' 'yougov com ' ' 'wipo int ' def lookup_dns(data)domain 'pointerpart_length data[ while part_length |
3,640 | domain +data[pointer:pointer+part_lengthbpointer +part_length part_length data[pointer ip ip_map get(domain 'return domainip def create_response(dataip)ba bytearray packet ba(data[: ]ba([ ]data[ : packet +ba( data[ :packet +ba([ ]for in ip split(')packet append(int( )return packet class dnsprotocol(asyncio datagramprotocol)def connection_made(selftransport)self transport transport def datagram_received(selfdataaddr)print("received request from {}format(addr[ ])domainip lookup_dns(dataprint("sending ip {for {to {}formatdomain decode()ipaddr[ ])self transport sendtocreate_response(dataip)addrloop asyncio get_event_loop(transportprotocol loop run_until_completeloop create_datagram_endpointdnsprotocollocal_addr=( ' ))print("dns server running"with suppress(keyboardinterrupt)loop run_forever(transport close(loop close( |
3,641 | this example sets up dictionary that dumbly maps few domains to ipv addresses it is followed by two functions that extract information from binary dns query packet and construct the response we won' be discussing theseif you want to know more about dns read rfc ("request for comment"the format for defining most internet protocols and you can test this service by running the following command in another terminalnslookup -port= facebook com localhost let' get on with the entree asyncio networking revolves around the intimately linked concepts of transports and protocols protocol is class that has specific methods that are called when relevant events happen since dns runs on top of udp (user datagram protocol)we build our protocol class as subclass of datagramprotocol this class has variety of events that it can respond towe are specifically interested in the initial connection occurring (solely so we can store the transport for future useand the datagram_received event for dnseach received datagram must be parsed and responded toat which point the interaction is over sowhen datagram is receivedwe process the packetlook up the ipand construct response using the functions we aren' talking about (they're black sheep in the familythen we instruct the underlying transport to send the resulting packet back to the requesting client using its sendto method the transport essentially represents communication stream in this caseit abstracts away all the fuss of sending and receiving data on udp socket on an event loop there are similar transports for interacting with tcp sockets and subprocessesfor example the udp transport is constructed by calling the loop' create_datagram_endpoint coroutine this constructs the appropriate udp socket and starts listening on it we pass it the address that the socket needs to listen onand importantlythe protocol class we created so that the transport knows what to call when it receives data since the process of initializing socket takes non-trivial amount of time and would block the event loopthe create_datagram_endpoint function is coroutine in our examplewe don' really need to do anything while we wait for this initializationso we wrap the call in loop run_until_complete the event loop takes care of managing the futureand when it' completeit returns tuple of two valuesthe newly initialized transport and the protocol object that was constructed from the class we passed in |
3,642 | behind the scenesthe transport has set up task on the event loop that is listening for incoming udp connections all we have to dothenis start the event loop running with the call to loop run_forever(so that task can process these packets when the packets arrivethey are processed on the protocol and everything just works the only other major thing to pay attention to is that transports (andindeedevent loopsare supposed to be closed when we are finished with them in this casethe code runs just fine without the two calls to close()but if we were constructing transports on the fly (or just doing proper error handling!)we' need to be quite bit more conscious of it you may have been dismayed to see how much boilerplate is required in setting up protocol class and underlying transport asyncio provides an abstraction on top of these two key concepts called streams we'll see an example of streams in the tcp server in the next example using executors to wrap blocking code asyncio provides its own version of the futures library to allow us to run code in separate thread or process when there isn' an appropriate non-blocking call to be made this essentially allows us to combine threads and processes with the asynchronous model one of the more useful applications of this feature is to get the best of both worlds when an application has bursts of / -bound and cpubound activity the / -bound portions can happen in the event-loop while the cpu-intensive work can be spun off to different process to illustrate thislet' implement "sorting as serviceusing asyncioimport asyncio import json from concurrent futures import processpoolexecutor def sort_in_process(data)nums json loads(data decode()curr while curr len(nums)if nums[curr>nums[curr- ]curr + elsenums[curr]nums[curr- nums[curr- ]nums[currif curr |
3,643 | curr - return json dumps(numsencode(@asyncio coroutine def sort_request(readerwriter)print("received connection"length yield from reader read( data yield from reader readexactlyint from_bytes(length'big')result yield from asyncio get_event_loop(run_in_executornonesort_in_processdataprint("sorted list"writer write(resultwriter close(print("connection closed"loop asyncio get_event_loop(loop set_default_executor(processpoolexecutor()server loop run_until_completeasyncio start_server(sort_request ' )print("sort service running"loop run_forever(server close(loop run_until_complete(server wait_closed()loop close(this is an example of good code implementing some really stupid ideas the whole idea of sort as service is pretty ridiculous using our own sorting algorithm instead of calling python' sorted is even worse the algorithm we used is called gnome sortor in some cases"stupid sortit is slow sort algorithm implemented in pure python we defined our own protocol instead of using one of the many perfectly suitable application protocols that exist in the wild even the idea of using multiprocessing for parallelism might be suspect herewe still end up passing all the data into and out of the subprocesses sometimesit' important to take step back from the program you are writing and ask yourself if you are trying to meet the right goals but let' look at some of the smart features of this design firstwe are passing bytes into and out of the subprocess this is lot smarter than decoding the json in the main process it means the (relatively expensivedecoding can happen on different cpu alsopickled json strings are generally smaller than pickled listsso less data is passing between processes |
3,644 | secondthe two methods are very linearit looks like code is being executed one line after another of coursein asynciothis is an illusionbut we don' have to worry about shared memory or concurrency primitives streams the previous example should look familiar by now as it has similar boilerplate to other asyncio programs howeverthere are few differences you'll notice we called start_server instead of create_server this method hooks into asyncio' streams instead of using the underlying transport/protocol code instead of passing in protocol classwe can pass in normal coroutinewhich receives reader and writer parameters these both represent streams of bytes that can be read from and written like files or sockets secondbecause this is tcp server instead of udpthere is some socket cleanup required when the program finishes this cleanup is blocking callso we have to run the wait_closed coroutine on the event loop streams are fairly simple to understand reading is potentially blocking call so we have to call it with yield from writing doesn' blockit just puts the data on queuewhich asyncio sends out in the background our code inside the sort_request method makes two read requests firstit reads bytes from the wire and converts them to an integer using big endian notation this integer represents the number of bytes of data the client intends to send so in the next callto readexactlyit reads that many bytes the difference between read and readexactly is that the former will read up to the requested number of byteswhile the latter will buffer reads until it receives all of themor until the connection closes executors now let' look at the executor code we import the exact same processpoolexecutor that we used in the previous section notice that we don' need special asyncio version of it the event loop has handy run_in_executor coroutine that we can use to run futures on by defaultthe loop runs code in threadpoolexecutorbut we can pass in different executor if we wish oras we did in this examplewe can set different default when we set up the event loop by calling loop set_default_ executor(as you probably recall from the previous sectionthere is not lot of boilerplate for using futures with an executor howeverwhen we use them with asynciothere is none at allthe coroutine automatically wraps the function call in future and submits it to the executor our code blocks until the future completeswhile the event loop continues processing other connectionstasksor futures when the future is donethe coroutine wakes up and continues on to write the data back to the client |
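To tie the streams and executor ideas together, here is a minimal, hypothetical echo-style server written in the same Python 3.4-era style as the book's examples (newer Python would use async def and await instead of the coroutine decorator and yield from). The crunch function, host, and port are illustrative placeholders, not part of the sorting service.

import asyncio
from concurrent.futures import ProcessPoolExecutor

def crunch(data):
    # Stand-in for CPU-bound work; it runs in a separate process.
    return data.upper()

@asyncio.coroutine
def handler(reader, writer):
    data = yield from reader.readline()      # non-blocking read from the stream
    result = yield from asyncio.get_event_loop().run_in_executor(
        None, crunch, data)                  # offload the CPU-bound part
    writer.write(result)                     # queued; asyncio sends it in the background
    yield from writer.drain()
    writer.close()

loop = asyncio.get_event_loop()
loop.set_default_executor(ProcessPoolExecutor())
server = loop.run_until_complete(
    asyncio.start_server(handler, '127.0.0.1', 8888))
try:
    loop.run_forever()
finally:
    server.close()
    loop.run_until_complete(server.wait_closed())
    loop.close()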
3,645 | you may be wondering ifinstead of running multiple processes inside an event loopit might be better to run multiple event loops in different processes the answer is"maybehoweverdepending on the exact problem spacewe are probably better off running independent copies of program with single event loop than to try to coordinate everything with master multiprocessing process we've hit most of the high points of asyncio in this sectionand the has covered many other concurrency primitives concurrency is hard problem to solveand no one solution fits all use cases the most important part of designing concurrent system is deciding which of the available tools is the correct one to use for the problem we have seen advantages and disadvantages of several concurrent systemsand now have some insights into which are the better choices for different types of requirements case study to wrap up this and the booklet' build basic image compression tool it will take black and white images (with bit per pixeleither on or offand attempt to compress it using very basic form of compression known as run-length encoding you may find black and white images bit far-fetched if soyou haven' enjoyed enough hours at 've included some sample black and white bmp images (which are easy to read data into and leave lot of opportunity to improve on file sizewith the example code for this we'll be compressing the images using simple technique called run-length encoding this technique basically takes sequence of bits and replaces any strings of repeated bits with the number of bits that are repeated for examplethe string might be replaced with to indicate that zeros are followed by ones and then more zeroes to make things little more interestingwe will break each row into bit chunks didn' pick bits arbitrarily different values can be encoded into bitswhich means that if row contains all ones or all zeroswe can store it in single bytethe first bit indicating whether it is row of or row of sand the remaining bits indicating how many of that bit exists breaking up the image into blocks has another advantagewe can process individual blocks in parallel without them depending on each other howeverthere' major disadvantage as wellif run has just few ones or zeros in itthen it will take up more space in the compressed file when we break up long runs into blockswe may end up creating more of these small runs and bloat the size of the file |
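Before diving into the real compressor, here is a tiny, stand-alone illustration of the run-length idea on a plain string of bits; it is only a warm-up for the concept, not the code we are about to write.

def runs(bits):
    """Collapse a string like '0001100' into (value, length) pairs."""
    result = []
    count, last = 1, bits[0]
    for bit in bits[1:]:
        if bit == last:
            count += 1
        else:
            result.append((last, count))
            count, last = 1, bit
    result.append((last, count))
    return result

print(runs('000000000000111111100000'))   # [('0', 12), ('1', 7), ('0', 5)]

A long run of identical bits collapses into a single pair, which is why mostly black or mostly white rows compress so well, and why noisy rows can actually grow.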
3,646 | when dealing with fileswe have to think about the exact layout of the bytes in the compressed file our file will store two byte little-endian integers at the beginning of the file representing the width and height of the completed file then it will write bytes representing the bit chunks of each row now before we start designing concurrent system to build such compressed imageswe should ask fundamental questionis this application / -bound or cpu-boundmy answerhonestlyis " don' knowi' not sure whether the app will spend more time loading data from disk and writing it back or doing the compression in memory suspect that it is cpu bound app in principlebut once we start passing image strings into subprocesseswe may lose any benefit of parallelism the optimal solution to this problem is probably to write or cython extensionbut let' see how far we can get in pure python we'll build this application using bottom-up design that way we'll have some building blocks that we can combine into different concurrency patterns to see how they compare let' start with the code that compresses -bit chunk using run-length encodingfrom bitarray import bitarray def compress_chunk(chunk)compressed bytearray(count last chunk[ for bit in chunk[ :]if bit !lastcompressed append(count ( last)count last bit count + compressed append(count ( last)return compressed this code uses the bitarray class for manipulating individual zeros and ones it is distributed as third-party modulewhich you can install with the command pip install bitarray the chunk that is passed into compress_chunks is an instance of this class (although the example would work just as well with list of booleansthe primary benefit of the bitarray in this case is that when pickling them between processesthey take up an th of the space of list of booleans or bytestring of and thereforethey pickle faster they are also bit (pun intendedeasier to work with than doing ton of bitwise operations |
3,647 | the method compresses the data using run-length encoding and returns bytearray containing the packed data where bitarray is like list of ones and zerosa bytearray is like list of byte objects (each byteof coursecontaining ones or zerosthe algorithm that performs the compression is pretty simple (although ' like to point out that it took me two days to implement and debug it simple to understand does not necessarily imply easy to write!it first sets the last variable to the type of bit in the current run (either true or falseit then loops over the bitscounting each oneuntil it finds one that is different when it doesit constructs new byte by making the leftmost bit of the byte (the positioneither zero or onedepending on what the last variable contained then it resets the counter and repeats the operation once the loop is doneit creates one last byte for the last runand returns the result while we're creating building blockslet' make function that compresses row of image datadef compress_row(row)compressed bytearray(chunks split_bits(row for chunk in chunkscompressed extend(compress_chunk(chunk)return compressed this function accepts bitarray named row it splits it into chunks that are each bits wide using function that we'll define very shortly then it compresses each of those chunks using the previously defined compress_chunkconcatenating the results into bytearraywhich it returns we define split_bits as simple generatordef split_bits(bitswidth)for in range( len(bits)width)yield bits[ : +widthnowsince we aren' certain yet whether this will run more effectively in threads or processeslet' wrap these functions in method that runs everything in provided executordef compress_in_executor(executorbitswidth)row_compressors [for row in split_bits(bitswidth)compressor executor submit(compress_rowrow |
3,648 | row_compressors append(compressorcompressed bytearray(for compressor in row_compressorscompressed extend(compressor result()return compressed this example barely needs explainingit splits the incoming bits into rows based on the width of the image using the same split_bits function we have already defined (hooray for bottom-up design!note that this code will compress any sequence of bitsalthough it would bloatrather than compress binary data that has frequent changes in bit values black and white images are definitely good candidates for the compression algorithm in question let' now create function that loads an image file using the third-party pillow moduleconverts it to bitsand compresses it we can easily switch between executors using the venerable comment statementfrom pil import image def compress_image(in_filenameout_filenameexecutor=none)executor executor if executor else processpoolexecutor(with image open(in_filenameas imagebits bitarray(image convert(' 'getdata()widthheight image size compressed compress_in_executor(executorbitswidthwith open(out_filename'wb'as filefile write(width to_bytes( 'little')file write(height to_bytes( 'little')file write(compresseddef single_image_main()in_filenameout_filename sys argv[ : #executor threadpoolexecutor( executor processpoolexecutor(compress_image(in_filenameout_filenameexecutorthe image convert(call changes the image to black and white (one bitmodewhile getdata(returns an iterator over those values we pack the results into bitarray so they transfer across the wire more quickly when we output the compressed filewe first write the width and height of the image followed by the compressed datawhich arrives as bytearraywhich can be written directly to the binary file |
3,649 | having written all this codewe are finally able to test whether thread pools or process pools give us better performance created large ( pixelsblack and white image and ran it through both pools the processpool takes about seconds to process the image on my systemwhile the threadpool consistently takes about thusas we suspectedthe cost of pickling bits and bytes back and forth between processes is eating almost all the efficiency gains from running on multiple processors (though looking at my cpu monitorit does fully utilize all four cores on my machineso it looks like compressing single image is most effectively done in separate processbut only barely because we are passing so much data back and forth between the parent and subprocesses multiprocessing is more effective when the amount of data passed between processes is quite low so let' extend the app to compress all the bitmaps in directory in parallel the only thing we'll have to pass into the subprocesses are filenamesso we should get speed gain compared to using threads alsoto be kind of crazywe'll use the existing code to compress individual images this means we'll be running processpoolexecutor inside each subprocess to create even more subprocesses don' recommend doing this in real lifefrom pathlib import path def compress_dir(in_dirout_dir)if not out_dir exists()out_dir mkdir(executor processpoolexecutor(for file in for in in_dir iterdir(if suffix =bmp')out_file (out_dir file namewith_suffix(rle'executor submitcompress_imagestr(file)str(out_file)def dir_images_main()in_dirout_dir (path(pfor in sys argv[ : ]compress_dir(in_dirout_dirthis code uses the compress_image function we defined previouslybut runs it in separate process for each image it doesn' pass an executor into the functionso compress_image creates processpoolexecutor once the new process has started running |
Now that we are running executors inside executors, there are four combinations of threads and process pools that we can be using to compress images. They each have quite different timing profiles:

                              process pool per row    thread pool per row
    process pool per image         … seconds               … seconds
    thread pool per image          … seconds               … seconds

As we might expect, using threads for each image and again using threads for each row is the slowest, since the GIL prevents us from doing any work in parallel. Given that we were slightly faster when using separate processes for each row when we were processing a single image, you may be surprised to see that it is faster to use the ThreadPool feature for rows if we are processing each image in a separate process.

Take some time to understand why this might be. My machine contains only four processor cores. Each row in each image is being processed in a separate pool, which means that all those rows are competing for processing power. When there is only one image, we get a (very modest) speedup by running each row in parallel. However, when we increase the number of images being processed at once, the cost of passing all that row data into and out of a subprocess is actively stealing processing time from each of the other images. So, if we can process each image on a separate processor, where the only thing that has to get pickled into the subprocess pipe is a couple of filenames, we get a solid speedup.

Thus, we see that different workloads require different concurrency paradigms. Even if we are just using futures, we have to make informed decisions about what kind of executor to use.

Also note that for typically sized images, the program runs quickly enough that it really doesn't matter which concurrency structure we use. In fact, even if we didn't use any concurrency at all, we'd probably end up with about the same user experience.

This problem could also have been solved using the threading and/or multiprocessing modules directly, though there would have been quite a bit more boilerplate code to write. You may be wondering whether or not asyncio would be useful here. The answer is: "probably not". Most operating systems don't have a good way to do non-blocking reads from the filesystem, so the library ends up wrapping all the calls in futures anyway.
For completeness, here is the code I used to decompress the RLE images to confirm that the algorithm was working correctly (indeed, it wasn't until I fixed bugs in both compression and decompression, and I'm still not sure if it is perfect; I should have used test-driven development!):

from PIL import Image
import sys

def decompress(width, height, bytes):
    image = Image.new('1', (width, height))
    col = 0
    row = 0
    for byte in bytes:
        color = (byte & 128) >> 7       # the high bit holds the run's color
        count = byte & ~128             # the remaining seven bits hold the run length
        for i in range(count):
            image.putpixel((row, col), color)
            row += 1
        if not row % width:
            col += 1
            row = 0
    return image

with open(sys.argv[1], 'rb') as file:
    width = int.from_bytes(file.read(2), 'little')
    height = int.from_bytes(file.read(2), 'little')
    image = decompress(width, height, file.read())
    image.save(sys.argv[2], 'bmp')

This code is fairly straightforward. Each run is encoded in a single byte; it uses some bitwise math to extract the color of the pixel and the length of the run. Then it sets each pixel from that run in the image, incrementing the row and column of the next pixel to check at appropriate intervals.
3,652 | exercises we've covered several different concurrency paradigms in this and still don' have clear idea of when each one is useful as we saw in the case studyit is often good idea to prototype few different strategies before committing to one concurrency in python is huge topic and an entire book of this size could not cover everything there is to know about it as your first exercisei encourage you to check out several third-party libraries that may provide additional contextexecneta library that permits local and remote share-nothing concurrency parallel pythonan alternative interpreter that can execute threads in parallel cythona python-compatible language that compiles to and has primitives to release the gil and take advantage of fully parallel multi-threading pypy-stman experimental implementation of software transactional memory on top of the ultra-fast pypy implementation of the python interpreter gevent if you have used threads in recent applicationtake look at the code and see if you can make it more readable and less bug-prone by using futures compare thread and multiprocessing futures to see if you can gain anything by using multiple cpus try implementing an asyncio service for some basic http requests you may need to look up the structure of an http request on the webthey are fairly simple ascii packets to decipher if you can get it to the point that web browser can render simple get requestyou'll have good understanding of asyncio network transports and protocols make sure you understand the race conditions that happen in threads when you access shared data try to come up with program that uses multiple threads to set shared values in such way that the data deliberately becomes corrupt or invalid remember the link collector we covered for the case study in python data structurescan you make it run faster by making requests in parallelis it better to use raw threadsfuturesor asyncio for thistry writing the run-length encoding example using threads or multiprocessing directly do you get any speed gainsis the code easier or harder to reason aboutis there any way to speed up the decompression script by using concurrency or parallelism |
3,653 | summary this ends our exploration of object-oriented programming with topic that isn' very object-oriented concurrency is difficult problem and we've only scratched the surface while the underlying os abstractions of processes and threads do not provide an api that is remotely object-orientedpython offers some really good object-oriented abstractions around them the threading and multiprocessing packages both provide an object-oriented interface to the underlying mechanics futures are able to encapsulate lot of the messy details into single object asyncio uses coroutine objects to make our code read as though it runs synchronouslywhile hiding ugly and complicated implementation details behind very simple loop abstraction thank you for reading python object-oriented programmingsecond edition hope you've enjoyed the ride and are eager to start implementing object-oriented software in all your future projects |
3,659 | outline some examples of pytorch syntax the main oo concepts pre-defined and programmer-supplied attributes function objects vs callables defining class in python how python creates an instance defining methods creating class hierarchy multiple-inheritance class hierarchies making class instance iterable purdue university new(vs init( |
3,660 | some examples of pytorch syntax if you are not already well-schooled in the syntax of object-oriented pythonyou might find the following examples somewhat befuddlingimport torchvision transforms as tvt xform tvt compose[tvt grayscale(num_output_channels )tvt resize(( , )out_image xforminput_image_pil the statement in the third line appears to indicate that we are using xform as function which is being returned by the statement in the second line does that mean functions in python return functionsto fully understand what' going on here you have to know what' meant by an object being callable python makes distinction between function objects and callables while all function objects are callablesnot all callables are function objects purdue university |
3,661 | some examples of pytorch syntax (contd now consider the following exampleclass encoderrnn(torch nn module)def __init__(selfinput_sizehidden_size)super(encoderrnnself__init__(==the rest of the definition ==we are obviously trying to define new class named encoderrnn as subclass of torch nn module and the method init (is there to initialize an instance object constructed from this class but why are we making the call super(encoderrnnselfinit (and supplying the name of the subclass again to this methodto understand this syntaxyou have to know how you can ask method to get part of the work done by method defined for one of its superclasses how that works is different for single-inheritance class hierarchy and for multiple-inheritance class hierarchy purdue university |
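For readers new to this pattern, here is a small, hypothetical single-inheritance sketch of the same delegation idea: the subclass asks its superclass to perform its share of the initialization. In Python 3 the zero-argument form super().__init__(...) does the same thing; spelling out the class and instance, as the PyTorch snippet does, also works and mirrors the older Python 2 style.

class Base:
    def __init__(self, name):
        self.name = name

class Derived(Base):
    def __init__(self, name, extra):
        # Let the superclass do its part of the initialization first.
        super(Derived, self).__init__(name)   # equivalently: super().__init__(name)
        self.extra = extra

d = Derived("demo", 42)
print(d.name, d.extra)       # demo 42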
3,662 | some examples of pytorch syntax (contd for another examplethe two layers in the following neural network (from pytorch tutorialare declared in lines (aand (band how the network is connected is declared in forward(in line (cwe push data through the network by calling model(xin line (dbut we never call forward(how is one supposed to understand thisclass twolayernet(torch nn module)def __init__(selfd_inhd_out)torch nn module __init__self self linear torch nn linear(d_inhself linear torch nn linear(hd_outdef forward(selfx)h_relu self linear (xclamp(min= using clamp(for nn relu y_pred self linear (h_relureturn y_pred nd_inhd_out torch randn(nd_inn is batch size torch randn(nd_outmodel twolayernet(d_inhd_outcriterion torch nn mseloss(reduction='sum'optimizer torch optim sgd(model parameters()lr= - for in range( )y_pred model(xloss criterion(y_predyoptimizer zero_grad(loss backward(optimizer step(purdue university #( #( #( #( |
3,663 | some examples of pytorch syntax (contd for another example that may confuse beginning python programmerconsider the following syntax for constructing data loader in pytorch scripttraining_data torchvision datasets cifar root=self dataroottrain=truedownload=truetransform=transform_traintrain_data_loader torch utils data dataloadertraining_databatch_size=self batch_size,shuffle=truenum_workers= subsequentlyyou may see the following sorts of callsdataiter iter(train_data_loaderimageslabels dataiter next(or calls like for data in train_data_loaderinputs,labels data outputs model(inputs(continued on next slidepurdue university |
3,664 | some examples of pytorch syntax (contd (continued from previous slidefor novice python programmera construct like for in somethingto make sense"somethinngis likely to be one of the typical storage containerslike listtuplesetetc but 'train data loaderdoes not look like any of those storage containers so what' going on herepurdue university |
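The short answer, developed later in this lecture, is that the for loop does not require a storage container at all; it works with any object that supports Python's iteration protocol. Here is a minimal, purely illustrative sketch (not PyTorch's actual implementation) of a class whose instances can be used with both iter()/next() and a for loop, just like train_data_loader:

class MiniLoader:
    def __init__(self, samples, batch_size):
        self.samples = samples
        self.batch_size = batch_size

    def __iter__(self):
        # Called by iter() and by 'for ... in'; reset and hand back an iterator.
        self.index = 0
        return self

    def __next__(self):
        if self.index >= len(self.samples):
            raise StopIteration
        batch = self.samples[self.index : self.index + self.batch_size]
        self.index += self.batch_size
        return batch

loader = MiniLoader(list(range(10)), batch_size=4)
for batch in loader:            # prints [0, 1, 2, 3], then [4, 5, 6, 7], then [8, 9]
    print(batch)
dataiter = iter(loader)
print(next(dataiter))           # [0, 1, 2, 3]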
3,666 | the main oo concepts the following fundamental notions of object-oriented programming in general apply to object-oriented python alsoclassinstancesand attributes encapsulation inheritance polymorphism purdue university |
3,667 | what are classes and instancesat high level of conceptualizationa class can be thought of as category we may think of "catas class specific cat would then be an instance of this class for the purpose of writing codea class is data structure with attributes an instance constructed from class will have specific values for the attributes to endow instances with behaviorsa class can be provided with methods purdue university |
3,668 | attributesmethodsinstance variablesand class variables method is function you invoke on an instance of the class or the class itself method that is invoked on an instance is sometimes called an instance method you can also invoke method directly on classin which case it is called class method or static method attributes that take data values on per-instance basis are frequently referred to as instance variables attributes that take on values on per-class basis are called class attributes or static attributes or class variables purdue university |
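A small illustrative sketch of the distinction (the class and the names are made up for this example):

class Robot:
    population = 0                  # class variable: one value shared by all instances

    def __init__(self, name):
        self.name = name            # instance variable: a separate value per instance
        Robot.population += 1

r1 = Robot("R2")
r2 = Robot("C3PO")
print(r1.name, r2.name)             # R2 C3PO   (per-instance values)
print(Robot.population)             # 2         (single shared value)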
3,669 | encapsulation hiding or controlling access to the implementation-related attributes and the methods of class is called encapsulation with appropriate data encapsulationa class will present well-defined public interface for its clientsthe users of the class client should only access those data attributes and invoke those methods that are in the public interface purdue university |
3,670 | inheritance and polymorphism inheritance in object-oriented code allows subclass to inherit some or all of the attributes and methods of its superclass(espolymorphism basically means that given category of objects can exhibit multiple identities at the same timein the sense that cat instance is not only of type catbut also of type fourlegged and animalall at the same time purdue university |
3,671 | polymorphism (contd as an example of polymorphismsuppose we declare list animals ['kitty''fido''tabby''quacker''spot'of catsdotsand duck -instances made from different classes in some animal hierarchy -and if we were to invoke method calculateiq(on this list of animals in the following fashion for item in animalsitem calculateiq(polymorphism would cause the correct implementation code for calculateiq(to be automatically invoked for each of the animals purdue university |
3,672 | regarding the previous example on polymorphism in many object-oriented languagesa method such as calculateiq(would need to be declared for the root class animal for the control loop shown on the previous slide to work properly all of the public methods and attributes defined for the root class would constitute the public interface of the class hierarchy and each class in the hierarchy would be free to provide its own implementation for the methods declared in the root class polymorphism in nutshell allows us to manipulate instances belonging to the different classes of hierarchy through common interface defined for the root class purdue university |
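Here is a minimal sketch of the animal example described above; the class names and the list of pet names follow the slide, while the method bodies are invented purely for illustration:

class Animal:
    def __init__(self, name):
        self.name = name
    def calculateIQ(self):
        raise NotImplementedError

class Cat(Animal):
    def calculateIQ(self):
        return self.name + " the cat: IQ measured in naps per day"

class Dog(Animal):
    def calculateIQ(self):
        return self.name + " the dog: IQ measured in fetches per minute"

class Duck(Animal):
    def calculateIQ(self):
        return self.name + " the duck: IQ measured in quacks per minute"

animals = [Cat('kitty'), Dog('fido'), Cat('tabby'), Duck('quacker'), Dog('spot')]
for item in animals:
    print(item.calculateIQ())       # the override for each instance's own class runs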
3,674 | attributespre-defined vs programmer-supplied class in python comes with certain pre-defined attributes the pre-defined attributes of class are not to be confused with the programmer-supplied attributes such as the class and instance variables and the programmer-supplied methods by the same tokenan instance constructed from class is an object with certain pre-defined attributes that again are not be confused with the programmer-supplied instance and class variables associated with the instance and the programmer-supplied methods that can be invoked on the instance purdue university |
3,675 | attributespre-defined vs programmer-supplied (contd note that in python the word attribute is used to describe any propertyvariable or method that can be invoked with the dot operator on either the class or an instance constructed from class what have mentioned above is worthy of note in case you have previously run into an object-oriented language in which distinction was made between attributes and methods in pythonthe attributes available for class include the programmer-supplied class and instance variables and methods this usage of attribute makes it all encompassingin the sense that it includes the pre-defined data attributes and methodsthe programmer-supplied class and instance variablesandof coursethe programmer-supplied methods basicallyas mentioned abovean attribute is anything that can be invoked using the dot operator on either class or an instance purdue university |
3,676 | pre-defined and programmer-supplied methods for class to define it formallya method is function that can be invoked on an object using the object-oriented call syntax that for python is of the form obj method()where obj may either be an instance of class or the class itself thereforethe pre-defined functions that can be invoked on either the class itself or on class instance using the object-oriented syntax are also methods the pre-defined attributesboth variables and methodsemploy special naming conventionthe names begin and end with two underscores python makes distinction between function objects and callables while all function objects are callablesnot all callables are function objects purdue university |
3,678 | function objects vs callables function object can only be created with def statement on the other handa callable is any object that can be called like function for examplea class name can be called directly to yield an instance of class thereforea class name is callable an instance object can also be called directlywhat that yields depends on whether or not the underlying class provides definition for the system-supplied call (method purdue university |
3,679 | in python'()is an operator -the function call operator you will see objects that may be called with or without the '()operator andwhen they are called with '()'there may or may not exist any arguments inside the parentheses for class with method foocalling just foo returns result different from what is returned by foo(the former returns the method object itself that foo stands for and the latter will cause execution of the function object associated with the method call purdue university |
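A quick sketch of that difference, using a made-up class:

class Greeter:
    def foo(self):
        return "hello"

g = Greeter()
print(g.foo)      # <bound method Greeter.foo of ...>   the method object itself
print(g.foo())    # hello                                the '()' operator executes it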
3,680 | an example of class with callable instances import random random seed( class class xdef __init__selfarr self arr arr def get_num(selfi)return self arr[idef __call__(self)return self arr end of class definition xobj xrandom sample(range( , ) print(xobj get_num( )print(xobj() [ if you execute this codeyou will see the output shown in the commented out portions of the last two lines in the last line of the codenote how we are calling the function call operator '()on the instance constructed from the class purdue university |
3,681 | the same example but with no def for call import random random seed( class class xdef __init__selfarr self arr arr def get_num(selfi)return self arr[idef __call__(self)return self arr end of class definition xobj xrandom sample(range( , ) print(xobj get_num( ) print(xobj()traceback (most recent call lastfile "usingcall py"line in print(xobj()typeerror'xobject is not callable purdue university |
3,683 | Defining a Class in Python
I'll present the full definition of a Python class in stages. I'll start with a very simple example of a class to make the reader familiar with the pre-defined __init__() method, whose role is to initialize the instance returned by a call to the constructor. First, here is the simplest possible definition of a class in Python:

class SimpleClass:
    pass

An instance of this class may be constructed by invoking its pre-defined default constructor:

x = SimpleClass()
3,684 | Defining a Class in Python (contd.)
Here is a class with a user-supplied __init__(). This method is automatically invoked to initialize the state of the instance returned by a call to Person():

#------------ class Person -----------
class Person:
    def __init__(self, a_name, an_age):
        self.name = a_name
        self.age  = an_age
#------- end of class definition ------

# Test code:
a_person = Person("zaphod", ...)    # age argument lost in extraction
print(a_person.name)                # zaphod
print(a_person.age)                 # (the age value)
3,685 | Pre-defined Attributes for a Class
Being an object in its own right, every Python class comes equipped with the following pre-defined attributes:
__name__   : string name of the class
__doc__    : documentation string for the class
__bases__  : tuple of parent classes of the class
__dict__   : dictionary whose keys are the names of the class variables and the methods of the class, and whose values are the corresponding bindings
__module__ : module in which the class is defined
3,686 | Pre-defined Attributes for an Instance
Since every class instance is also an object in its own right, it also comes equipped with certain pre-defined attributes. We will be particularly interested in the following two:
__class__ : the class from which the instance was constructed
__dict__  : dictionary whose keys are the names of the instance variables
It is important to realize that the namespace as represented by the dictionary __dict__ for a class object is not the same as the namespace as represented by the dictionary __dict__ for an instance object constructed from the class.
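To make the two namespaces concrete, here is a short sketch reusing the Person class from the earlier slide (the age value 100 is made up, since the original literal was lost):

class Person:
    def __init__(self, a_name, an_age):
        self.name = a_name
        self.age  = an_age

p = Person("zaphod", 100)
print(list(Person.__dict__.keys()))   # class namespace: includes '__init__' among others
print(p.__dict__)                     # instance namespace: {'name': 'zaphod', 'age': 100}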
3,687 | __dict__ vs. dir()
As an alternative to invoking __dict__ on a class name, one can also use the built-in global dir(), as in dir(MyClass), which returns a list of all the attribute names, for variables and for methods, for the class (both directly defined for the class and inherited from the class's superclasses). If we had called print(dir(Person)) for the Person class defined earlier, the system would have returned:
['__class__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__']
3,688 | Illustrating the Values for System-Supplied Attributes
#------------ class Person -----------
class Person:
    'A very simple class'
    def __init__(self, nam, yy):
        self.name = nam
        self.age  = yy
#------- end of class definition ------

# Test code:
a_person = Person("zaphod", ...)    # age argument lost in extraction
print(a_person.name)                # zaphod
print(a_person.age)                 # (the age value)

# Class attributes:
print(Person.__name__)              # Person
print(Person.__doc__)               # A very simple class
print(Person.__module__)            # __main__
print(Person.__bases__)             # (<class 'object'>,)
print(Person.__dict__)              # {'__module__': '__main__', '__doc__': 'A very simp...', '__init__': <function __init..., ...}

# Instance attributes:
print(a_person.__class__)           # <class '__main__.Person'>
print(a_person.__dict__)            # {'age': ..., 'name': 'zaphod'}
3,689 | Class Definition: More General Syntax
Schematically (this is a template, not runnable code):

class MyClass:
    'optional documentation string'
    class_var1
    class_var2 = var2
    def __init__(self, var3 = default3):
        'optional documentation string'
        attribute3 = var3
        rest_of_construction_init_suite
    def some_method(self, some_parameters):
        'optional documentation string'
        method_code
3,690 | Class Definition: More General Syntax (contd.)
Regarding the syntax shown on the previous slide, note the class variables class_var1 and class_var2. Such variables exist on a per-class basis, meaning that they are static. A class variable can be given a value in the class definition, as shown for class_var2. In general, the header of __init__() may look like:

def __init__(self, var1, var2, var3 = default3):
    body_of_init

This constructor (initializer) could be for a class with three instance variables, with the last default-initialized as shown. The first parameter, typically named self, is set implicitly to the instance under construction.
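Here is a small sketch of my own (the Counter class is hypothetical) showing that a class variable is shared by all instances, while instance variables are per-instance:

class Counter:
    num_created = 0                  # class variable: one copy shared by all instances
    def __init__(self, label):
        self.label = label           # instance variable: one copy per instance
        Counter.num_created += 1

a = Counter("first")
b = Counter("second")
print(Counter.num_created)           # 2
print(a.label, b.label)              # first second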
3,691 | Class Definition: More General Syntax (contd.)
If you do not provide a class with its own __init__(), the system will provide the class with a default __init__(). You override the default definition by providing your own implementation for __init__(). The syntax for a user-defined method for a class is the same as for stand-alone Python functions, except for the special significance accorded to the first parameter, typically named self. It is meant to be bound to a reference to the instance on which the method is invoked.
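To illustrate the role of self, here is a sketch of my own (Speaker and greet are made-up names): a call through the dot operator implicitly passes the instance as the first argument, so the two calls below are equivalent:

class Speaker:
    def greet(self, whom):
        return "hello, " + whom

s = Speaker()
print(s.greet("world"))             # self is bound to s implicitly
print(Speaker.greet(s, "world"))    # the equivalent explicit form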
3,692 | The Root Class object
All classes are subclassed, either directly or indirectly, from the root class object. The object class defines a set of methods with default implementations that are inherited by all classes derived from object. The list of attributes defined for the object class can be seen by printing out the list returned by the built-in dir() function:
print(dir(object))
This call returns:
['__class__', '__delattr__', '__doc__', '__getattribute__', '__hash__', '__init__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__str__', ...]
We can also examine the attribute list available for the object class by printing out the contents of its __dict__ attribute with print(object.__dict__). This will print out both the attribute names and their bindings.
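As a quick check of my own (the Widget class is hypothetical), a user-defined class inherits from object even when no base class is written explicitly:

class Widget:
    pass

print(Widget.__bases__)              # (<class 'object'>,)
print(issubclass(Widget, object))    # True
print(isinstance(Widget(), object))  # True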
3,694 | __new__() vs. __init__(): How Python Creates an Instance from a Class
Python uses the following two-step procedure for constructing an instance from a class:
Step 1: The call to the constructor creates what may be referred to as a generic instance from the class definition. The generic instance's memory allocation is customized with the code in the method __new__() of the class. This method may either be defined directly for the class, or the class may inherit it from one of its parent classes.
3,695 | Creating an Instance from a Class (contd.)
The method __new__() is implicitly considered by Python to be a static method. Its first parameter is meant to be set equal to the name of the class whose instance is desired, and it must return the instance created. If a class does not provide its own definition for __new__(), a search is conducted for this method in the parent classes of the class.
Step 2: Then the instance method __init__() of the class is invoked to initialize the instance returned by __new__().
3,696 | An Example Showing __new__() and __init__() Working Together for Instance Creation
The script shown in the example that follows defines a class X and provides it with a static method __new__() and an instance method __init__(). We do not need any special declaration for __new__() to be recognized as static, because this method is special-cased by Python. Note the contents of the namespace dictionary __dict__ created for class X, as printed out by X.__dict__. This dictionary shows the names created specifically for class X. On the other hand, dir(X) also shows the names inherited by X.
3,697 | Instance Construction Example (contd.)
In the script on the next slide, also note that the namespace dictionary xobj.__dict__ created at runtime for the instance xobj is empty, for obvious reasons. As stated earlier, when dir() is called on a class, it returns a list of all the attributes that can be invoked on the class and on the instances made from that class. The returned list also includes the attributes inherited from the class's parents. When called on an instance, as in dir(xobj), the returned list is the same as above plus any instance variables defined for the class.
3,698 | Instance Construction Example (contd.)
#--------------- class X --------------
class X(object):                     # X is derived from the root class object
    def __new__(cls):                # the param 'cls' is set to the name of the class
        print("__new__ invoked")
        return object.__new__(cls)
    def __init__(self):
        print("__init__ invoked")

# Test code:
xobj = X()            # __new__ invoked
                      # __init__ invoked
print(X.__dict__)     # {'__module__': '__main__', ..., '__new__': ..., ...}
print(dir(X))         # ['__class__', '__delattr__', ..., '__getattribute__', ..., '__hash__', '__init__', ..., '__module__', ..., '__new__', ...]
print(dir(xobj))      # ['__class__', '__delattr__', ..., '__getattribute__', ..., '__hash__', '__init__', ..., '__module__', ..., '__new__', ...]
3,699 | Instance Construction Example (contd.)
#--------------- class X --------------
class X:                             # X is still derived from the root class object
    def __new__(cls):                # the param 'cls' is set to the name of the class
        print("__new__ invoked")
        return object.__new__(cls)
    def __init__(self):
        print("__init__ invoked")

# Test code:
xobj = X()            # __new__ invoked
                      # __init__ invoked
print(X.__dict__)     # {'__module__': '__main__', ..., '__new__': ..., ...}
print(dir(X))         # ['__class__', '__delattr__', ..., '__getattribute__', ..., '__hash__', '__init__', ..., '__module__', ..., '__new__', ...]
print(dir(xobj))      # ['__class__', '__delattr__', ..., '__getattribute__', ..., '__hash__', '__init__', ..., '__module__', ..., '__new__', ...]