{"commit":"48ca20f466be048d5f8e896f5d6d653a4c84a4a8","subject":"","message":"\n\ngit-svn-id: http:\/\/svn.sqlalchemy.org\/sqlalchemy\/trunk@331 8cd8332f-0806-0410-a4b6-96f4b9520244\n","repos":"obeattie\/sqlalchemy,obeattie\/sqlalchemy,obeattie\/sqlalchemy","old_file":"doc\/build\/content\/document_base.myt","new_file":"doc\/build\/content\/document_base.myt","new_contents":"<%flags>inherit=\"doclib.myt\"<\/%flags>\n\n<%python scope=\"global\">\n\n\tfiles = [\n\t\t'roadmap',\n\t\t'pooling',\n\t\t'dbengine',\n\t\t'metadata',\n\t\t'sqlconstruction',\n\t\t'datamapping',\n\t\t'adv_datamapping',\n\t\t\n\t\t]\n\n<\/%python>\n\n<%attr>\n\tfiles=files\n\twrapper='section_wrapper.myt'\n\tonepage='documentation'\n\tindex='index'\n\ttitle='SQLAlchemy Documentation'\n\tversion = '0.91'\n<\/%attr>\n\n\n\n\n\n\n","old_contents":"<%flags>inherit=\"doclib.myt\"<\/%flags>\n\n<%python scope=\"global\">\n\n\tfiles = [\n\t\t'roadmap',\n\t\t'pooling',\n\t\t'dbengine',\n\t\t'metadata',\n\t\t'sqlconstruction',\n\t\t'datamapping',\n\t\t'adv_datamapping',\n\t\t'activerecord',\n\t\t\n\t\t]\n\n<\/%python>\n\n<%attr>\n\tfiles=files\n\twrapper='section_wrapper.myt'\n\tonepage='documentation'\n\tindex='index'\n\ttitle='SQLAlchemy Documentation'\n\tversion = '0.91'\n<\/%attr>\n\n\n\n\n\n\n","returncode":0,"stderr":"","license":"mit","lang":"Myghty"} {"commit":"1298a7587fd82a75115f27bf47b68d02057a8c7c","subject":"doc dev...","message":"doc dev...\n\n\ngit-svn-id: 655ff90ec95d1eeadb1ee4bb9db742a3c015d499@1200 8cd8332f-0806-0410-a4b6-96f4b9520244\n","repos":"obeattie\/sqlalchemy,obeattie\/sqlalchemy,obeattie\/sqlalchemy","old_file":"doc\/build\/content\/unitofwork.myt","new_file":"doc\/build\/content\/unitofwork.myt","new_contents":"<%flags>inherit='document_base.myt'<\/%flags>\n<%attr>title='Unit of Work'<\/%attr>\n\n<&|doclib.myt:item, name=\"unitofwork\", description=\"Unit of Work\" &>\n <&|doclib.myt:item, name=\"overview\", description=\"Overview\" &>\n
The concept behind the Unit of Work is to track modifications to a set of objects, and then be able to commit all those changes to the database in a single operation. There are many advantages to this: your application doesn't need to worry about individual save operations on objects, nor about the required order of those operations, nor about excessive repeated calls to save operations that would be more efficiently aggregated into one step. It also simplifies database transactions, providing a neat package of changes to apply within a traditional database begin\/commit phase.\n <\/p>\n
SQLAlchemy's unit of work includes the following functions, each described in the sections below:\n
The current unit of work is accessed via a Session object. The Session is available in a thread-local context from the objectstore module as follows:<\/p>\n <&|formatting.myt:code&>\n # get the current thread's session\n session = objectstore.get_session()\n <\/&>\n
The Session object acts as a proxy to an underlying UnitOfWork object. Common methods include commit(), begin(), clear(), and delete(). Most of these methods are also available at the module level in the objectstore module; these operate upon the Session returned by the get_session() function:\n <\/p>\n <&|formatting.myt:code&>\n # this...\n objectstore.get_session().commit()\n \n # is the same as this:\n objectstore.commit()\n <\/&>\n\n
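This module-level proxying can be pictured with a small thread-local registry. The following is a toy sketch in plain Python; the names `Session`, `get_session`, and `commit` are illustrative stand-ins for the objectstore pattern described above, not the actual implementation:

```python
import threading

class Session:
    """Toy stand-in for the objectstore Session."""
    def __init__(self):
        self.commits = 0
    def commit(self):
        self.commits += 1

_registry = threading.local()

def get_session():
    # create the current thread's Session on first access
    if not hasattr(_registry, "session"):
        _registry.session = Session()
    return _registry.session

def commit():
    # module-level convenience: delegates to the thread-local Session
    get_session().commit()

commit()
assert get_session().commits == 1  # same underlying Session either way
```

Each thread that calls `get_session()` receives its own Session, which is why the module-level functions are safe to call from request-handling threads.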
A description of the most important methods and concepts follows.<\/p>\n\n <&|doclib.myt:item, name=\"identitymap\", description=\"Identity Map\" &>\n
The first concept to understand about the Unit of Work is that it keeps track of all mapped objects which have been loaded from the database, as well as all mapped objects which have been saved to the database in the current session, checking its records every time an object is loaded or saved. In particular, it is ensuring that only <b>one<\/b> instance of a particular object, corresponding to a particular database identity, exists within the Session at one time. By \"database identity\" we mean a table or relational concept in the database combined with a particular primary key in that table. The session accomplishes this task using a dictionary known as an <b>Identity Map<\/b>. When a select operation returns a row whose identity is already present in the map, the existing instance is returned rather than a new one being constructed. Example:<\/p>\n <&|formatting.myt:code&>\n mymapper = mapper(MyClass, mytable)\n \n obj1 = mymapper.selectfirst(mytable.c.id==15)\n obj2 = mymapper.selectfirst(mytable.c.id==15)\n \n >>> obj1 is obj2\n True\n <\/&>\n The Identity Map is an instance of a weak-referencing dictionary, so that unreferenced instances may be garbage collected, and it is accessible via the Session. The next concept is that in addition to the Session storing a record of all objects loaded or saved, it also stores records of all <b>newly created<\/b> objects, records of all objects whose attributes have been <b>modified<\/b>, records of all objects that have been marked as <b>deleted<\/b>, and records of all <b>modified list-based attributes<\/b> where additions or deletions have occurred. These lists are consulted when a commit occurs, to determine what SQL statements to issue. The commit operation is the main gateway to what the Unit of Work does best, which is to save everything at once. 
It should be clear by now that a commit looks like:\n <\/p>\n <&|formatting.myt:code&>\n objectstore.get_session().commit()\n <\/&>\n It can also be called with a list of objects; in this form, the commit operation will be limited to the objects specified in the list, as well as any dependent child objects of those. This second form of commit should be used more carefully, as it will not necessarily locate other dependent objects within the session whose database representation may have foreign constraint relationships with the objects being operated upon.<\/p>\n \n <&|doclib.myt:item, name=\"whatis\", description=\"What Commit is, and Isn't\" &>\n The purpose of the Commit operation is to instruct the Unit of Work to analyze its lists of modified objects, assemble them into a dependency graph, fire off the appropriate INSERT, UPDATE, and DELETE statements via the mappers related to those objects, and update the identifying object attributes that correspond directly to database columns. <b>And that's it.<\/b> It is not going to change anything else about your objects as they exist in memory; the exception is synchronizing the identifier attributes on saved and updated objects, as they correspond directly to newly inserted or updated rows, and these typically include only primary key and foreign key attributes that in most cases are integers.<\/p>\n So the primary guideline for dealing with commit() is: <b>the developer is responsible for maintaining in-memory objects and their relationships to each other; the unit of work is responsible for maintaining the database representation of the in-memory objects.<\/b> The typical pattern is that the manipulation of objects *is* the way that changes get communicated to the unit of work, so that when the commit occurs, the objects are already in their correct in-memory representation and problems don't arise. 
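The dependency analysis that commit performs can be pictured as a topological sort over table-level save operations: every parent row must be inserted before the child rows that reference it. The sketch below shows only that ordering step; `sort_tasks` is a hypothetical helper for illustration, not SQLAlchemy's internal API:

```python
def sort_tasks(dependencies):
    """Order table-level save operations so that every table precedes
    the tables that depend on it (a plain Kahn topological sort).
    dependencies: dict mapping table name -> set of tables it depends on."""
    pending = {t: set(deps) for t, deps in dependencies.items()}
    ordered = []
    while pending:
        # tables with no unsatisfied dependencies can be saved now
        ready = [t for t, deps in pending.items() if not deps]
        if not ready:
            raise ValueError("circular dependency among tables")
        for t in sorted(ready):
            ordered.append(t)
            del pending[t]
        for deps in pending.values():
            deps.difference_update(ready)
    return ordered

# addresses and orders reference users; items references orders
order = sort_tasks({
    "users": set(),
    "addresses": {"users"},
    "orders": {"users"},
    "items": {"orders"},
})
assert order.index("users") < order.index("addresses")
assert order.index("orders") < order.index("items")
```

DELETE statements run the same analysis in reverse: children are removed before the parents they reference.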
The manipulation of identifier attributes such as integer key values, as well as deletes in particular, is a frequent source of confusion.<\/p>\n \n A terrific feature of SQLAlchemy which is also a supreme source of confusion is the backreference feature, described in <&formatting.myt:link, path=\"datamapping_relations_backreferences\"&>. This feature allows two types of objects to maintain attributes that reference each other, typically one object maintaining a list of elements of the other side, each of which contains a scalar reference back to the list-holding object. When you append an element to the list, the element gets a \"backreference\" back to the object which has the list. When you set the list-holding object onto the child element's scalar attribute, the child element gets attached to the list. <b>This feature has nothing whatsoever to do with the Unit of Work.<\/b> It is strictly a small convenience feature intended to support the developer's manual manipulation of in-memory objects, and the backreference operation happens at the moment objects are attached to or removed from each other, independent of any kind of database operation. It does not change the golden rule, that the developer is responsible for maintaining in-memory object relationships.<\/p>\n <\/&>\n <\/&>\n\n <&|doclib.myt:item, name=\"delete\", description=\"Delete\" &>\n The delete call places an object or objects into the Unit of Work's list of objects to be marked as deleted:<\/p>\n <&|formatting.myt:code&>\n # mark three objects to be deleted\n objectstore.get_session().delete(obj1, obj2, obj3)\n \n # commit\n objectstore.get_session().commit()\n <\/&>\n When objects which contain references to other objects are deleted, the mappers for those related objects will issue UPDATE statements for those objects that should no longer contain references to the deleted object, setting foreign key identifiers to NULL. 
Similarly, when a mapper contains relations that mark child objects as private to their parent, deleting the parent will issue DELETE statements for those child objects as well. As stated before, the purpose of delete is strictly to issue DELETE statements to the database. It does not affect the in-memory structure of objects, other than changing the identifying attributes on objects, such as setting foreign key identifiers on updated rows to None. It has no effect on the status of references between object instances, nor any effect on the Python garbage-collection status of objects.<\/p>\n <\/&>\n\n <&|doclib.myt:item, name=\"clear\", description=\"Clear\" &>\n To clear out the current thread's UnitOfWork, which has the effect of discarding the Identity Map and the lists of all objects that have been modified, just issue a clear:\n <\/p>\n <&|formatting.myt:code&>\n # via module\n objectstore.clear()\n \n # or via Session\n objectstore.get_session().clear()\n <\/&>\n This is the easiest way to \"start fresh\", as in a web application that wants to have a newly loaded graph of objects on each request. Any object instances in use before the clear operation should be discarded.<\/p>\n <\/&>\n\n <&|doclib.myt:item, name=\"refreshexpire\", description=\"Refresh \/ Expire\" &>\n To assist with the Unit of Work's \"sticky\" behavior, individual objects can have all of their attributes immediately re-loaded from the database, or marked as \"expired\", which will cause a re-load to occur upon the next access of any of the object's mapped attributes. This includes all relationships, so lazy loaders will be re-initialized and eager relationships will be repopulated. Any changes marked on the object are discarded:<\/p>\n <&|formatting.myt:code&>\n # immediately re-load attributes on obj1, obj2\n session.refresh(obj1, obj2)\n \n # expire objects obj1, obj2, obj3; attributes will be reloaded\n # on the next access:\n session.expire(obj1, obj2, obj3)\n <\/&>\n <\/&>\n \n <&|doclib.myt:item, name=\"expunge\", description=\"Expunge\" &>\n Expunge simply removes all record of an object from the current Session. 
This includes the identity map, and all history-tracking lists:<\/p>\n <&|formatting.myt:code&>\n session.expunge(obj1)\n <\/&>\n Use expunge when an object should no longer be tracked by the Session in any way.<\/p>\n The current thread's UnitOfWork object keeps track of objects that are modified. It maintains the following lists:<\/p>\n <&|formatting.myt:code&>\n # new objects that were just constructed\n objectstore.get_session().new\n \n # objects that exist in the database, that were modified\n objectstore.get_session().dirty\n \n # objects that have been marked as deleted via objectstore.delete()\n objectstore.get_session().deleted\n <\/&>\n To commit the changes stored in those lists, just issue a commit. This can be called via objectstore.get_session().commit(), or through the module-level convenience method in the objectstore module:<\/p>\n <&|formatting.myt:code&>\n objectstore.commit()\n <\/&>\n The commit operation takes place within a SQL-level transaction, so any failures that occur will roll back the state of everything to before the commit took place.<\/p>\n When mappers are created for classes, new object construction automatically places objects in the \"new\" list on the UnitOfWork, and object modifications automatically place objects in the \"dirty\" list. 
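That bookkeeping can be illustrated with a minimal tracker. This is a toy sketch only; in the real system, attribute instrumentation on the mapped class performs the registration automatically, which is omitted here:

```python
class ToyUnitOfWork:
    """Minimal sketch of the three lists a UnitOfWork maintains."""
    def __init__(self):
        self.new = []      # constructed in this session, never saved
        self.dirty = []    # saved previously, attributes since modified
        self.deleted = []  # marked for deletion

    def register_new(self, obj):
        self.new.append(obj)

    def register_dirty(self, obj):
        # a brand-new object is already pending an INSERT; don't double-list it
        if obj not in self.new and obj not in self.dirty:
            self.dirty.append(obj)

    def delete(self, *objs):
        self.deleted.extend(objs)

    def commit(self):
        # a real commit issues INSERT/UPDATE/DELETE statements here,
        # inside a single database transaction, then resets the lists
        self.new.clear()
        self.dirty.clear()
        self.deleted.clear()

uow = ToyUnitOfWork()
class User:
    pass
u = User()
uow.register_new(u)
uow.register_dirty(u)            # already "new", so not listed as dirty
assert uow.new == [u] and uow.dirty == []
uow.commit()
assert not (uow.new or uow.dirty or uow.deleted)
```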
To mark objects to be deleted, use the \"delete\" method on UnitOfWork, or the module level version:<\/p>\n <&|formatting.myt:code&>\n objectstore.delete(myobj1, myobj2, ...)\n <\/&>\n \n Commit() can also take a list of objects, which narrows its scope to just those objects to save:<\/p>\n <&|formatting.myt:code&>\n objectstore.commit(myobj1, myobj2, ...)\n <\/&>\n Committing just a subset of instances should be done carefully, as it may result in an inconsistent save state between dependent objects (it should manage to locate loaded dependencies and save those also, but it hasn't been tested much).<\/p>\n \n <&|doclib.myt:item, name=\"begin\", description=\"Controlling Scope with begin()\" &>\n <b>status<\/b> - release 0.1.1\/SVN head<\/p>\n The \"scope\" of the unit of work commit can be controlled further by issuing a begin(). A begin operation constructs a new UnitOfWork object and sets it as the currently used UOW. It maintains a reference to the original UnitOfWork as its \"parent\", and shares the same \"identity map\" of objects that have been loaded from the database within the scope of the parent UnitOfWork. However, the \"new\", \"dirty\", and \"deleted\" lists are empty. This has the effect that only changes that take place after the begin() operation get logged to the current UnitOfWork, and therefore those are the only changes that get committed. When the commit is complete, the \"begun\" UnitOfWork removes itself and places the parent UnitOfWork as the current one again.<\/p>\n The begin() method returns a transactional object, upon which you can call commit() or rollback(). 
<b>Only this transactional object controls the transaction<\/b> - commit() upon the Session will do nothing until commit() or rollback() is called upon the transactional object.<\/p>\n <&|formatting.myt:code&>\n # modify an object\n myobj1.foo = \"something new\"\n \n # begin an objectstore scope\n # this is equivalent to objectstore.get_session().begin()\n trans = objectstore.begin()\n \n # modify another object\n myobj2.lala = \"something new\"\n \n # only 'myobj2' is saved\n trans.commit()\n <\/&>\n begin\/commit supports the same \"nesting\" behavior as the SQLEngine (note this behavior is not the original \"nested\" behavior), meaning that many begin() calls can be made, but only the outermost transactional object will actually perform a commit(). Similarly, calls to the commit() method on the Session, which might occur in function calls within the transaction, will not do anything; this allows an external function caller to control the scope of transactions used within the functions.<\/p>\n <\/&>\n <&|doclib.myt:item, name=\"transactionnesting\", description=\"Nesting UnitOfWork in a Database Transaction\" &>\n The UOW commit operation places its INSERT\/UPDATE\/DELETE operations within the scope of a database transaction controlled by a SQLEngine:\n <&|formatting.myt:code&>\n engine.begin()\n try:\n # run objectstore update operations\n except:\n engine.rollback()\n raise\n engine.commit()\n <\/&>\n If you recall from the <&formatting.myt:link, path=\"dbengine_transactions\"&> section, the engine's begin()\/commit() methods support reentrant behavior. This means you can nest begins and commits and only have the outermost begin\/commit pair actually take effect (rollbacks, however, abort the whole operation at any stage). 
From this it follows that the UnitOfWork commit operation can be nested within a transaction as well:<\/p>\n <&|formatting.myt:code&>\n engine.begin()\n try:\n # perform custom SQL operations\n objectstore.commit()\n # perform custom SQL operations\n except:\n engine.rollback()\n raise\n engine.commit()\n <\/&>\n \n <\/&>\n <\/&>\n <&|doclib.myt:item, name=\"identity\", description=\"The Identity Map\" &>\n All object instances which are saved to the database, or loaded from the database, are given an identity by the mapper\/objectstore. This identity is available via the _instance_key property attached to each object instance, and is a tuple consisting of the object's class, the SQLAlchemy-specific \"hash key\" of the table it is persisted to, and an additional tuple of primary key values, in the order that they appear within the table definition:<\/p>\n <&|formatting.myt:code&>\n >>> obj._instance_key \n <\/&>\n Note that this identity is a database identity, not an in-memory identity. An application can have several different objects in different unit-of-work scopes that have the same database identity, or an object can be removed from memory, and constructed again later, with the same database identity. What can <b>never<\/b> happen is for two copies of the same object to exist in the same unit-of-work scope with the same database identity; this is guaranteed by the <b>identity map<\/b>.\n <\/p>\n \n At the moment that an object is assigned this key, it is also added to the current thread's unit-of-work's identity map. The identity map is just a WeakValueDictionary which maintains the one and only reference to a particular object within the current unit of work scope. It is used when result rows are fetched from the database to ensure that only one copy of a particular object actually comes from that result set in the case that eager loads or other joins are used, or if the object had already been loaded from a previous result set. 
The get() method on a mapper, which retrieves an object based on primary key identity, also checks in the current identity map first to save a database round-trip if possible. In the case of an object lazy-loading a single child object, the get() method is also used.\n <\/p>\n Methods on mappers and the objectstore module, which are relevant to identity include the following:<\/p>\n <&|formatting.myt:code&>\n # assume 'm' is a mapper\n m = mapper(User, users)\n \n # get the identity key corresponding to a primary key\n key = m.identity_key(7)\n \n # for composite key, list out the values in the order they\n # appear in the table\n key = m.identity_key(12, 'rev2')\n\n # get the identity key given a primary key \n # value as a tuple, a class, and a table\n key = objectstore.get_id_key((12, 'rev2'), User, users)\n \n # get the identity key for an object, whether or not it actually\n # has one attached to it (m is the mapper for obj's class)\n key = m.instance_key(obj)\n \n # same thing, from the objectstore (works for any obj type)\n key = objectstore.instance_key(obj)\n \n # is this key in the current identity map?\n objectstore.has_key(key)\n \n # is this object in the current identity map?\n objectstore.has_instance(obj)\n\n # get this object from the current identity map based on \n # singular\/composite primary key, or if not go \n # and load from the database\n obj = m.get(12, 'rev2')\n <\/&>\n <\/&>\n <&|doclib.myt:item, name=\"import\", description=\"Bringing External Instances into the UnitOfWork\" &>\n The _instance_key attribute is designed to work with objects that are serialized into strings and brought back again. As it contains no references to internal structures or database connections, applications that use caches or session storage which require serialization (i.e. pickling) can store SQLAlchemy-loaded objects. 
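The mechanics of this section, a weak-referencing dictionary keyed by database identity and consulted before loading, can be sketched in a few lines of plain Python. The `Record` class and `get` helper are illustrative stand-ins, not the mapper's actual implementation:

```python
import weakref

class Record:
    """Stand-in for a mapped instance."""
    def __init__(self, key):
        self._instance_key = key

identity_map = weakref.WeakValueDictionary()

def get(key, loader):
    """Return the one canonical instance for a database identity,
    invoking the loader only when the identity is not already mapped."""
    try:
        return identity_map[key]
    except KeyError:
        obj = loader(key)
        identity_map[key] = obj
        return obj

key = (Record, "users", (15,))   # class, table, primary-key tuple
a = get(key, Record)
b = get(key, Record)             # second lookup hits the map, no load
assert a is b                    # one instance per database identity

# because the map holds only weak references, dropping the last strong
# reference lets the instance be garbage collected (CPython collects
# it immediately here), and its entry disappears from the map
del a, b
assert key not in identity_map
```

This is why the identity map does not by itself keep objects alive; the application's own references determine an object's lifetime.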
However, as mentioned earlier, an object with a particular database identity is only allowed to exist uniquely within the current unit-of-work scope. So, upon deserializing such an object, it has to \"check in\" with the current unit-of-work\/identity map combination, to ensure that it is the only unique instance. This is achieved via the import_instance() function in objectstore:<\/p>\n <&|formatting.myt:code&>\n # deserialize an object\n myobj = pickle.loads(mystring)\n \n # \"import\" it. if the objectstore already had this object in the \n # identity map, then you get back the one from the current session.\n myobj = objectstore.import_instance(myobj)\n <\/&>\n Note that the import_instance() function will either mark the deserialized object as the official copy in the current identity map, which includes updating its _instance_key with the current application's class instance, or it will discard it and return the corresponding object that was already present.<\/p>\n <\/&>\n\n <&|doclib.myt:item, name=\"advscope\", description=\"Advanced UnitOfWork Management\"&>\n\n <&|doclib.myt:item, name=\"object\", description=\"Per-Object Sessions\" &>\n Sessions can be created on an ad-hoc basis and used for individual groups of objects and operations. 
This has the effect of bypassing the entire \"global\"\/\"threadlocal\" UnitOfWork system and explicitly using a particular Session:<\/p>\n <&|formatting.myt:code&>\n # make a new Session with a global UnitOfWork\n s = objectstore.Session()\n \n # make objects bound to this Session\n x = MyObj(_sa_session=s)\n \n # perform mapper operations bound to this Session\n # (this function coming soon)\n r = MyObj.mapper.using(s).select_by(id=12)\n \n # get the session that corresponds to an instance\n s = objectstore.get_session(x)\n \n # commit \n s.commit()\n\n # perform a block of operations with this session set within the current scope\n objectstore.push_session(s)\n try:\n r = mapper.select_by(id=12)\n x = MyObj()\n objectstore.commit()\n finally:\n objectstore.pop_session()\n <\/&>\n <\/&>\n\n <&|doclib.myt:item, name=\"scope\", description=\"Custom Session Objects\/Custom Scopes\" &>\n\n For users who want to make their own Session subclass, or replace the algorithm used to return scoped Session objects (i.e. the objectstore.get_session() method):<\/p>\n <&|formatting.myt:code&>\n # make a new Session\n s = objectstore.Session()\n \n # set it as the current thread-local session\n objectstore.session_registry.set(s)\n\n # set the objectstore's session registry to a different algorithm\n \n def create_session():\n \"\"\"creates new sessions\"\"\"\n return objectstore.Session()\n def mykey():\n \"\"\"creates contextual keys to store scoped sessions\"\"\"\n return \"mykey\"\n \n objectstore.session_registry = sqlalchemy.util.ScopedRegistry(createfunc=create_session, scopefunc=mykey)\n <\/&>\n <\/&>\n\n <&|doclib.myt:item, name=\"logging\", description=\"Analyzing Object Commits\" &>\n The objectstore module can log an extensive display of its \"commit plans\", which is a graph of its internal representation of objects before they are committed to the database. 
To turn this logging on:\n <&|formatting.myt:code&>\n # make an engine with echo_uow\n engine = create_engine('myengine...', echo_uow=True)\n \n # globally turn on echo\n objectstore.LOG = True\n <\/&>\n Commits will then dump displays like the following to standard output:<\/p>\n <&|formatting.myt:code, syntaxtype=None&>\n Task dump:\n UOWTask(6034768) 'User\/users\/6015696'\n |\n |- Save elements\n |- Save: UOWTaskElement(6034800): User(6016624) (save)\n |\n |- Save dependencies\n |- UOWDependencyProcessor(6035024) 'addresses' attribute on saved User's (UOWTask(6034768) 'User\/users\/6015696')\n | |-UOWTaskElement(6034800): User(6016624) (save)\n |\n |- Delete dependencies\n |- UOWDependencyProcessor(6035056) 'addresses' attribute on User's to be deleted (UOWTask(6034768) 'User\/users\/6015696')\n | |-(no objects)\n |\n |- Child tasks\n |- UOWTask(6034832) 'Address\/email_addresses\/6015344'\n | |\n | |- Save elements\n | |- Save: UOWTaskElement(6034864): Address(6034384) (save)\n | |- Save: UOWTaskElement(6034896): Address(6034256) (save)\n | |----\n | \n |----\n <\/&>\n The above graph can be read straight downwards to determine the order of operations. It indicates \"save User 6016624, process each element in the 'addresses' list on User 6016624, save Address 6034384, save Address 6034256\".\n <\/&>\n \n <\/&>\n<\/&>\n","old_contents":"
The get() method on a mapper, which retrieves an object based on primary key identity, also checks in the current identity map first to save a database round-trip if possible. In the case of an object lazy-loading a single child object, the get() method is also used.\n <\/p>\n Methods on mappers and the objectstore module, which are relevant to identity include the following:<\/p>\n <&|formatting.myt:code&>\n # assume 'm' is a mapper\n m = mapper(User, users)\n \n # get the identity key corresponding to a primary key\n key = m.identity_key(7)\n \n # for composite key, list out the values in the order they\n # appear in the table\n key = m.identity_key(12, 'rev2')\n\n # get the identity key given a primary key \n # value as a tuple, a class, and a table\n key = objectstore.get_id_key((12, 'rev2'), User, users)\n \n # get the identity key for an object, whether or not it actually\n # has one attached to it (m is the mapper for obj's class)\n key = m.instance_key(obj)\n \n # same thing, from the objectstore (works for any obj type)\n key = objectstore.instance_key(obj)\n \n # is this key in the current identity map?\n objectstore.has_key(key)\n \n # is this object in the current identity map?\n objectstore.has_instance(obj)\n\n # get this object from the current identity map based on \n # singular\/composite primary key, or if not go \n # and load from the database\n obj = m.get(12, 'rev2')\n <\/&>\n <\/&>\n <&|doclib.myt:item, name=\"import\", description=\"Bringing External Instances into the UnitOfWork\" &>\n The _instance_key attribute is designed to work with objects that are serialized into strings and brought back again. As it contains no references to internal structures or database connections, applications that use caches or session storage which require serialization (i.e. pickling) can store SQLAlchemy-loaded objects. 
However, as mentioned earlier, an object with a particular database identity is only allowed to exist uniquely within the current unit-of-work scope. So, upon deserializing such an object, it has to \"check in\" with the current unit-of-work\/identity map combination, to insure that it is the only unique instance. This is achieved via the import_instance()<\/span> function in objectstore:<\/p>\n <&|formatting.myt:code&>\n # deserialize an object\n myobj = pickle.loads(mystring)\n \n # \"import\" it. if the objectstore already had this object in the \n # identity map, then you get back the one from the current session.\n myobj = objectstore.import_instance(myobj)\n <\/&>\n Note that the import_instance() function will either mark the deserialized object as the official copy in the current identity map, which includes updating its _instance_key with the current application's class instance, or it will discard it and return the corresponding object that was already present.<\/p>\n <\/&>\n\n <&|doclib.myt:item, name=\"advscope\", description=\"Advanced UnitOfWork Management\"&>\n\n <&|doclib.myt:item, name=\"object\", description=\"Per-Object Sessions\" &>\n Sessions can be created on an ad-hoc basis and used for individual groups of objects and operations. 
This has the effect of bypassing the entire \"global\"\/\"threadlocal\" UnitOfWork system and explicitly using a particular Session:<\/p>\n <&|formatting.myt:code&>\n # make a new Session with a global UnitOfWork\n s = objectstore.Session()\n \n # make objects bound to this Session\n x = MyObj(_sa_session=s)\n \n # perform mapper operations bound to this Session\n # (this function coming soon)\n r = MyObj.mapper.using(s).select_by(id=12)\n \n # get the session that corresponds to an instance\n s = objectstore.get_session(x)\n \n # commit \n s.commit()\n\n # perform a block of operations with this session set within the current scope\n objectstore.push_session(s)\n try:\n r = mapper.select_by(id=12)\n x = new MyObj()\n objectstore.commit()\n finally:\n objectstore.pop_session()\n <\/&>\n <\/&>\n\n <&|doclib.myt:item, name=\"scope\", description=\"Custom Session Objects\/Custom Scopes\" &>\n\n For users who want to make their own Session subclass, or replace the algorithm used to return scoped Session objects (i.e. the objectstore.get_session() method):<\/p>\n <&|formatting.myt:code&>\n # make a new Session\n s = objectstore.Session()\n \n # set it as the current thread-local session\n objectstore.session_registry.set(s)\n\n # set the objectstore's session registry to a different algorithm\n \n def create_session():\n \"\"\"creates new sessions\"\"\"\n return objectstore.Session()\n def mykey():\n \"\"\"creates contextual keys to store scoped sessions\"\"\"\n return \"mykey\"\n \n objectstore.session_registry = sqlalchemy.util.ScopedRegistry(createfunc=create_session, scopefunc=mykey)\n <\/&>\n <\/&>\n\n <&|doclib.myt:item, name=\"logging\", description=\"Analyzing Object Commits\" &>\n The objectstore module can log an extensive display of its \"commit plans\", which is a graph of its internal representation of objects before they are committed to the database. 
To turn this logging on:\n <&|formatting.myt:code&>\n # make an engine with echo_uow\n engine = create_engine('myengine...', echo_uow=True)\n \n # globally turn on echo\n objectstore.LOG = True\n <\/&>\n Commits will then dump to the standard output displays like the following:<\/p>\n <&|formatting.myt:code, syntaxtype=None&>\n Task dump:\n UOWTask(6034768) 'User\/users\/6015696'\n |\n |- Save elements\n |- Save: UOWTaskElement(6034800): User(6016624) (save)\n |\n |- Save dependencies\n |- UOWDependencyProcessor(6035024) 'addresses' attribute on saved User's (UOWTask(6034768) 'User\/users\/6015696')\n | |-UOWTaskElement(6034800): User(6016624) (save)\n |\n |- Delete dependencies\n |- UOWDependencyProcessor(6035056) 'addresses' attribute on User's to be deleted (UOWTask(6034768) 'User\/users\/6015696')\n | |-(no objects)\n |\n |- Child tasks\n |- UOWTask(6034832) 'Address\/email_addresses\/6015344'\n | |\n | |- Save elements\n | |- Save: UOWTaskElement(6034864): Address(6034384) (save)\n | |- Save: UOWTaskElement(6034896): Address(6034256) (save)\n | |----\n | \n |----\n <\/&>\n The above graph can be read straight downwards to determine the order of operations. It indicates \"save User 6016624, process each element in the 'addresses' list on User 6016624, save Address 6034384, Address 6034256\".\n <\/&>\n \n <\/&>\n<\/&>\n","returncode":0,"stderr":"","license":"mit","lang":"Myghty"}
{"commit":"3afa14f0c583aca05e10d6aa3f2fbc6a3c01a830","subject":"dev on uow docs","message":"dev on uow docs\n\n\ngit-svn-id: 655ff90ec95d1eeadb1ee4bb9db742a3c015d499@880 8cd8332f-0806-0410-a4b6-96f4b9520244\n","repos":"obeattie\/sqlalchemy,obeattie\/sqlalchemy,obeattie\/sqlalchemy","old_file":"doc\/build\/content\/unitofwork.myt","new_file":"doc\/build\/content\/unitofwork.myt","new_contents":"<%flags>inherit='document_base.myt'<\/%flags>\n<%attr>title='Unit of Work'<\/%attr>\n\n<&|doclib.myt:item, name=\"unitofwork\", description=\"Unit of Work\" &>\n <&|doclib.myt:item, name=\"overview\", description=\"Overview\" &>\n The concept behind Unit of Work is to track modifications to a field of objects, and then be able to commit those changes to the database in a single operation. Theres a lot of advantages to this, including that your application doesn't need to worry about individual save operations on objects, nor about the required order for those operations, nor about excessive repeated calls to save operations that would be more efficiently aggregated into one step. It also simplifies database transactions, providing a neat package with which to insert into the traditional database begin\/commit phase.\n <\/p>\n SQLAlchemy's unit of work includes these functions:\n To get a hold of the current unit of work, its available inside a thread local registry object (an instance of sqlalchemy.util.ScopedRegistry<\/span>) in the objectstore package:<\/p>\n <&|formatting.myt:code&>\n u = objectstore.uow()\n <\/&>\n You can also construct your own UnitOfWork object. 
However, to get your mappers to talk to it, it has to be placed in the current thread-local scope:<\/p>\n <&|formatting.myt:code&>\n u = objectstore.UnitOfWork()\n objectstore.uow.set(u)\n <\/&>\n Whatever unit of work is present in the registry can be cleared out, which will create a new one upon the next access:<\/p>\n <&|formatting.myt:code&>\n objectstore.uow.clear()\n <\/&>\n The uow attribute also can be made to use \"application\" scope, instead of \"thread\" scope, meaning all threads will access the same instance of UnitOfWork:<\/p>\n <&|formatting.myt:code&>\n objectstore.uow.defaultscope = 'application'\n <\/&>\n Although theres not much advantage to doing so, and also would make mapper usage not thread safe.<\/p>\n \n The objectstore package includes many module-level methods which all operate upon the current UnitOfWork object. These include begin(), commit(), clear(), delete(), has_key(), and import_instance(), which are described below.<\/p>\n <\/&>\n <&|doclib.myt:item, name=\"begincommit\", description=\"Begin\/Commit\" &>\n The current thread's UnitOfWork object keeps track of objects that are modified. It maintains the following lists:<\/p>\n <&|formatting.myt:code&>\n # new objects that were just constructed\n objectstore.uow().new\n \n # objects that exist in the database, that were modified\n objectstore.uow().dirty\n \n # objects that have been marked as deleted via objectstore.delete()\n objectstore.uow().deleted\n <\/&>\n To commit the changes stored in those lists, just issue a commit. 
This can be called via objectstore.uow().commit()<\/span>, or through the module-level convenience method in the objectstore module:<\/p>\n <&|formatting.myt:code&>\n objectstore.commit()\n <\/&>\n The commit operation takes place within a SQL-level transaction, so any failures that occur will roll back the state of everything to before the commit took place.<\/p>\n When mappers are created for classes, new object construction automatically places objects in the \"new\" list on the UnitOfWork, and object modifications automatically place objects in the \"dirty\" list. To mark objects as to be deleted, use the \"delete\" method on UnitOfWork, or the module level version:<\/p>\n <&|formatting.myt:code&>\n objectstore.delete(myobj1, myobj2, ...)\n <\/&>\n \n Commit() can also take a list of objects which narrow its scope to looking at just those objects to save:<\/p>\n <&|formatting.myt:code&>\n objectstore.commit(myobj1, myobj2, ...)\n <\/&>\n This feature should be used carefully, as it may result in an inconsistent save state between dependent objects (it should manage to locate loaded dependencies and save those also, but it hasnt been tested much).<\/p>\n \n <&|doclib.myt:item, name=\"begin\", description=\"Controlling Scope with begin()\" &>\n \n The \"scope\" of the unit of work commit can be controlled further by issuing a begin(). A begin operation constructs a new UnitOfWork object and sets it as the currently used UOW. It maintains a reference to the original UnitOfWork as its \"parent\", and shares the same \"identity map\" of objects that have been loaded from the database within the scope of the parent UnitOfWork. However, the \"new\", \"dirty\", and \"deleted\" lists are empty. This has the effect that only changes that take place after the begin() operation get logged to the current UnitOfWork, and therefore those are the only changes that get commit()ted. 
When the commit is complete, the \"begun\" UnitOfWork removes itself and places the parent UnitOfWork as the current one again.<\/p>\n <&|formatting.myt:code&>\n # modify an object\n myobj1.foo = \"something new\"\n \n # begin an objectstore scope\n # this is equivalent to objectstore.uow().begin()\n objectstore.begin()\n \n # modify another object\n myobj2.lala = \"something new\"\n \n # only 'myobj2' is saved\n objectstore.commit()\n <\/&>\n As always, the actual database transaction begin\/commit occurs entirely within the objectstore.commit() operation.<\/p>\n \n Since the begin\/commit paradigm works in a stack-based manner, it follows that any level of nesting of begin\/commit can be used:<\/p>\n <&|formatting.myt:code&>\n # start with UOW #1 as the thread-local UnitOfWork\n a = Foo()\n objectstore.begin() # push UOW #2 on the stack\n b = Foo()\n objectstore.begin() # push UOW #3 on the stack\n c = Foo()\n\n # saves 'c'\n objectstore.commit() # commit UOW #3\n\n d = Foo()\n\n # saves 'b' and 'd'\n objectstore.commit() # commit UOW #2\n \n # saves 'a', everything else prior to it\n objectstore.commit() # commit thread-local UOW #1\n <\/&>\n <\/&>\n <&|doclib.myt:item, name=\"transactionnesting\", description=\"Nesting UnitOfWork in a Database Transaction\" &>\n The UOW commit operation places its INSERT\/UPDATE\/DELETE operations within the scope of a database transaction controlled by a SQLEngine:\n <&|formatting.myt:code&>\n engine.begin()\n try:\n # run objectstore update operations\n except:\n engine.rollback()\n raise\n engine.commit()\n <\/&>\n If you recall from the <&formatting.myt:link, path=\"dbengine_transactions\"&> section, the engine's begin()\/commit() methods support reentrant behavior. This means you can nest begin and commits and only have the outermost begin\/commit pair actually take effect (rollbacks however, abort the whole operation at any stage). 
From this it follows that the UnitOfWork commit operation can be nested within a transaction as well:<\/p>\n <&|formatting.myt:code&>\n engine.begin()\n try:\n # perform custom SQL operations\n objectstore.commit()\n # perform custom SQL operations\n except:\n engine.rollback()\n raise\n engine.commit()\n <\/&>\n \n <\/&>\n <\/&>\n <&|doclib.myt:item, name=\"identity\", description=\"The Identity Map\" &>\n <\/&>\n <&|doclib.myt:item, name=\"import\", description=\"Bringing External Instances into the UnitOfWork\" &>\n <\/&>\n <&|doclib.myt:item, name=\"rollback\", description=\"Rollback\" &>\n <\/&>\n<\/&>\n","old_contents":"<%flags>inherit='document_base.myt'<\/%flags>\n<%attr>title='Unit of Work'<\/%attr>\n\n<&|doclib.myt:item, name=\"unitofwork\", description=\"Unit of Work\" &>\n <&|doclib.myt:item, name=\"overview\", description=\"Overview\" &>\n The concept behind Unit of Work is to track modifications to a field of objects, and then be able to commit those changes to the database in a single operation. Theres a lot of advantages to this, including that your application doesn't need to worry about individual save operations on objects, nor about the required order for those operations, nor about excessive repeated calls to save operations that would be more efficiently aggregated into one step. It also simplifies database transactions, providing a neat package with which to insert into the traditional database begin\/commit phase.\n <\/p>\n SQLAlchemy's unit of work includes these functions:\n A database engine is a subclass of sqlalchemy.engine.SQLEngine<\/span>, and is the starting point for where SQLAlchemy provides a layer of abstraction on top of the various DBAPI2 database modules. 
It serves as an abstract factory for database-specific implementation objects as well as a layer of abstraction over the most essential tasks of a database connection, including connecting, executing queries, returning result sets, and managing transactions.<\/p>\n \n \n The average developer doesn't need to know anything about the interface or workings of a SQLEngine in order to use it. Simply creating one, and then specifying it when constructing tables and other SQL objects is all that's needed. <\/p>\n \n A SQLEngine is also a layer of abstraction on top of the connection pooling described in the previous section. While a DBAPI connection pool can be used explicitly alongside a SQLEngine, its not really necessary. Once you have a SQLEngine, you can retrieve pooled connections directly from its underlying connection pool via its own connection()<\/span> method. However, if you're exclusively using SQLALchemy's SQL construction objects and\/or object-relational mappers, all the details of connecting are handled by those libraries automatically.\n <\/p>\n <&|doclib.myt:item, name=\"establishing\", description=\"Establishing a Database Engine\" &>\n \n Engines exist for SQLite, Postgres, MySQL, and Oracle, using the Pysqlite, Psycopg (1 or 2), MySQLDB, and cx_Oracle modules. Each engine imports its corresponding module which is required to be installed. 
For Postgres and Oracle, an alternate module may be specified at construction time as well.\n <\/p>\n An example of connecting to each engine is as follows:<\/p>\n \n <&|formatting.myt:code&>\n from sqlalchemy import *\n\n # sqlite in memory \n sqlite_engine = create_engine('sqlite', {'filename':':memory:'}, **opts)\n \n # sqlite using a file\n sqlite_engine = create_engine('sqlite', {'filename':'querytest.db'}, **opts)\n\n # postgres\n postgres_engine = create_engine('postgres', \n {'database':'test', \n 'host':'127.0.0.1', \n 'user':'scott', \n 'password':'tiger'}, **opts)\n\n # mysql\n mysql_engine = create_engine('mysql',\n {\n 'db':'mydb',\n 'user':'scott',\n 'passwd':'tiger',\n 'host':'127.0.0.1'\n }\n **opts)\n # oracle\n oracle_engine = create_engine('oracle', \n {'dsn':'mydsn', \n 'user':'scott', \n 'password':'tiger'}, **opts)\n \n\n <\/&>\n Note that the general form of connecting to an engine is:<\/p>\n <&|formatting.myt:code&>\n engine = create_engine(\n The second argument is a dictionary whose key\/value pairs will be passed to the underlying DBAPI connect() method as keyword arguments. Any keyword argument supported by the DBAPI module can be in this dictionary.<\/p>\n Engines can also be loaded by URL. 
The above format is converted into <% ' A few useful methods off the SQLEngine are described here:<\/p>\n <&|formatting.myt:code&>\n engine = create_engine('postgres:\/\/hostname=localhost&user=scott&password=tiger&database=test')\n \n # get a pooled DBAPI connection\n conn = engine.connection()\n \n # create\/drop tables based on table metadata objects\n # (see the next section, Table Metadata, for info on table metadata)\n engine.create(mytable)\n engine.drop(mytable)\n \n # get the DBAPI module being used\n dbapi = engine.dbapi()\n \n # get the default schema name\n name = engine.get_default_schema_name()\n \n # execute some SQL directly, returns a ResultProxy (see the SQL Construction section for details)\n result = engine.execute(\"select * from table where col1=:col1\", {'col1':'foo'})\n \n # log a message to the engine's log stream\n engine.log('this is a message')\n \n <\/&> \n <\/&>\n \n <&|doclib.myt:item, name=\"options\", description=\"Database Engine Options\" &>\n The remaining arguments to create_engine<\/span> are keyword arguments that are passed to the specific subclass of sqlalchemy.engine.SQLEngine<\/span> being used, as well as the underlying sqlalchemy.pool.Pool<\/span> instance. All of the options described in the previous section <&formatting.myt:link, path=\"pooling_configuration\"&> can be specified, as well as engine-specific options:<\/p>\n A SQLEngine also provides an interface to the transactional capabilities of the underlying DBAPI connection object, as well as the connection object itself. Note that when using the object-relational-mapping package, described in a later section, basic transactional operation is handled for you automatically by its \"Unit of Work\" system; the methods described here will usually apply just to literal SQL update\/delete\/insert operations or those performed via the SQL construction library.<\/p>\n \n Typically, a connection is opened with \"autocommit=False\". 
So to perform SQL operations and just commit as you go, you can simply pull out a connection from the connection pool, keep it in the local scope, and call commit() on it as needed. As long as the connection remains referenced, all other SQL operations within the same thread will use this same connection, including those used by the SQL construction system as well as the object-relational mapper, both described in later sections:<\/p>\n <&|formatting.myt:code&>\n conn = engine.connection()\n\n # execute SQL via the engine\n engine.execute(\"insert into mytable values ('foo', 'bar')\")\n conn.commit()\n\n # execute SQL via the SQL construction library \n mytable.insert().execute(col1='bat', col2='lala')\n conn.commit()\n \n <\/&>\n \n There is a more automated way to do transactions, and that is to use the engine's begin()\/commit() functionality. When the begin() method is called off the engine, a connection is checked out from the pool and stored in a thread-local context. That way, all subsequent SQL operations within the same thread will use that same connection. Subsequent commit() or rollback() operations are performed against that same connection. In effect, its a more automated way to perform the \"commit as you go\" example above. 
<\/p>\n \n <&|formatting.myt:code&>\n engine.begin()\n engine.execute(\"insert into mytable values ('foo', 'bar')\")\n mytable.insert().execute(col1='foo', col2='bar')\n engine.commit()\n <\/&>\n\n A traditional \"rollback on exception\" pattern looks like this:<\/p> \n\n <&|formatting.myt:code&>\n engine.begin()\n try:\n engine.execute(\"insert into mytable values ('foo', 'bar')\")\n mytable.insert().execute(col1='foo', col2='bar')\n except:\n engine.rollback()\n raise\n engine.commit()\n <\/&>\n \n An shortcut which is equivalent to the above is provided by the transaction<\/span> method:<\/p>\n \n <&|formatting.myt:code&>\n def do_stuff():\n engine.execute(\"insert into mytable values ('foo', 'bar')\")\n mytable.insert().execute(col1='foo', col2='bar')\n\n engine.transaction(do_stuff)\n <\/&>\n An added bonus to the engine's transaction methods is \"reentrant\" functionality; once you call begin(), subsequent calls to begin() will increment a counter that must be decremented corresponding to each commit() statement before an actual commit can happen. This way, any number of methods that want to insure a transaction can call begin\/commit, and be nested arbitrarily:<\/p>\n <&|formatting.myt:code&>\n \n # method_a starts a transaction and calls method_b\n def method_a():\n engine.begin()\n try:\n method_b()\n except:\n engine.rollback()\n raise\n engine.commit()\n\n # method_b starts a transaction, or joins the one already in progress,\n # and does some SQL\n def method_b():\n engine.begin()\n try:\n engine.execute(\"insert into mytable values ('bat', 'lala')\")\n mytable.insert().execute(col1='bat', col2='lala')\n except:\n engine.rollback()\n raise\n engine.commit()\n \n # call method_a \n method_a() \n \n <\/&>\n Above, method_a is called first, which calls engine.begin(). Then it calls method_b. When method_b calls engine.begin(), it just increments a counter that is decremented when it calls commit(). 
If either method_a or method_b calls rollback(), the whole transaction is rolled back. The transaction is not committed until method_a calls the commit() method.<\/p>\n \n The object-relational-mapper capability of SQLAlchemy includes its own commit()<\/span> method that gathers SQL statements into a batch and runs them within one transaction. That transaction is also invokved within the scope of the \"reentrant\" methodology above; so multiple objectstore.commit() operations can also be bundled into a larger database transaction via the above methodology.<\/p>\n <\/&>\n<\/&>\n","old_contents":"<%flags>inherit='document_base.myt'<\/%flags>\n<%attr>title='Database Engines'<\/%attr>\n\n<&|doclib.myt:item, name=\"dbengine\", description=\"Database Engines\" &>\n A database engine is a subclass of sqlalchemy.engine.SQLEngine<\/span>, and is the starting point for where SQLAlchemy provides a layer of abstraction on top of the various DBAPI2 database modules. It serves as an abstract factory for database-specific implementation objects as well as a layer of abstraction over the most essential tasks of a database connection, including connecting, executing queries, returning result sets, and managing transactions.<\/p>\n \n \n The average developer doesn't need to know anything about the interface or workings of a SQLEngine in order to use it. Simply creating one, and then specifying it when constructing tables and other SQL objects is all that's needed. <\/p>\n \n A SQLEngine is also a layer of abstraction on top of the connection pooling described in the previous section. While a DBAPI connection pool can be used explicitly alongside a SQLEngine, its not really necessary. Once you have a SQLEngine, you can retrieve pooled connections directly from its underlying connection pool via its own connection()<\/span> method. 
However, if you're exclusively using SQLALchemy's SQL construction objects and\/or object-relational mappers, all the details of connecting are handled by those libraries automatically.\n <\/p>\n <&|doclib.myt:item, name=\"establishing\", description=\"Establishing a Database Engine\" &>\n \n Engines exist for SQLite, Postgres, MySQL, and Oracle, using the Pysqlite, Psycopg (1 or 2), MySQLDB, and cx_Oracle modules. Each engine imports its corresponding module which is required to be installed. For Postgres and Oracle, an alternate module may be specified at construction time as well.\n <\/p>\n An example of connecting to each engine is as follows:<\/p>\n \n <&|formatting.myt:code&>\n from sqlalchemy import *\n\n # sqlite in memory \n sqlite_engine = create_engine('sqlite', {'filename':':memory:'}, **opts)\n \n # sqlite using a file\n sqlite_engine = create_engine('sqlite', {'filename':'querytest.db'}, **opts)\n\n # postgres\n postgres_engine = create_engine('postgres', \n {'database':'test', \n 'host':'127.0.0.1', \n 'user':'scott', \n 'password':'tiger'}, **opts)\n\n # mysql\n mysql_engine = create_engine('mysql',\n {\n 'db':'mydb',\n 'user':'scott',\n 'passwd':'tiger',\n 'host':'127.0.0.1'\n }\n **opts)\n # oracle\n oracle_engine = create_engine('oracle', \n {'dsn':'mydsn', \n 'user':'scott', \n 'password':'tiger'}, **opts)\n \n\n <\/&>\n Note that the general form of connecting to an engine is:<\/p>\n <&|formatting.myt:code&>\n engine = create_engine(\n The second argument is a dictionary whose key\/value pairs will be passed to the underlying DBAPI connect() method as keyword arguments. Any keyword argument supported by the DBAPI module can be in this dictionary.<\/p>\n Engines can also be loaded by URL. 
The above format is converted into <% ' The remaining arguments to create_engine<\/span> are keyword arguments that are passed to the specific subclass of sqlalchemy.engine.SQLEngine<\/span> being used, as well as the underlying sqlalchemy.pool.Pool<\/span> instance. All of the options described in the previous section <&formatting.myt:link, path=\"pooling_configuration\"&> can be specified, as well as engine-specific options:<\/p>\n The package sqlalchemy.types<\/span> defines the datatype identifiers which may be used when defining <&formatting.myt:link, path=\"metadata\", text=\"table metadata\"&>. This package includes a set of generic types, a set of SQL-specific subclasses of those types, and a small extension system used by specific database connectors to adapt these generic types into database-specific type objects.\n<\/p>\n<&|doclib.myt:item, name=\"standard\", description=\"Built-in Types\" &>\n\n SQLAlchemy comes with a set of standard generic datatypes, which are defined as classes. They are specified to table meta data using either the class itself, or an instance of the class. Creating an instance of the class allows you to specify parameters for the type, such as string length, numerical precision, etc. 
\n<\/p>\n The standard set of generic types are:<\/p>\n<&|formatting.myt:code&>\n# sqlalchemy.types package:\nclass String(TypeEngine):\n def __init__(self, length=None)\n \nclass Integer(TypeEngine)\n \nclass Numeric(TypeEngine): \n def __init__(self, precision=10, length=2)\n \nclass Float(TypeEngine):\n def __init__(self, precision=10)\n \nclass DateTime(TypeEngine)\n \nclass Binary(TypeEngine): \n def __init__(self, length=None)\n \nclass Boolean(TypeEngine)\n<\/&>\n More specific subclasses of these types are available, to allow finer grained control over types:<\/p>\n<&|formatting.myt:code&>\nclass FLOAT(Numeric)\nclass TEXT(String)\nclass DECIMAL(Numeric)\nclass INT(Integer)\nINTEGER = INT\nclass TIMESTAMP(DateTime)\nclass DATETIME(DateTime)\nclass CLOB(String)\nclass VARCHAR(String)\nclass CHAR(String)\nclass BLOB(Binary)\nclass BOOLEAN(Boolean)\n<\/&>\n When using a specific database engine, these types are adapted even further via a set of database-specific subclasses defined by the database engine.<\/p>\n<\/&>\n\n<&|doclib.myt:item, name=\"custom\", description=\"Creating your Own Types\" &>\n Types also support pre-processing of query parameters as well as post-processing of result set data. You can make your own classes to perform these operations. They are specified by subclassing the desired type class as well as the special mixin TypeDecorator, which manages the adaptation of the underlying type to a database-specific type:<\/p>\n<&|formatting.myt:code&>\n import sqlalchemy.types as types\n\n class MyType(types.TypeDecorator, types.String):\n \"\"\"basic type that decorates String, prefixes values with \"PREFIX:\" on \n the way in and strips it off on the way out.\"\"\"\n def convert_bind_param(self, value, engine):\n return \"PREFIX:\" + value\n def convert_result_value(self, value, engine):\n return value[7:]\n<\/&>\n Another example, which illustrates a fully defined datatype. 
This just overrides the base type class TypeEngine:<\/p>\n<&|formatting.myt:code&>\n import sqlalchemy.types as types\n\n class MyType(types.TypeEngine):\n def __init__(self, precision = 8):\n self.precision = precision\n def get_col_spec(self):\n return \"MYTYPE(%s)\" % self.precision\n def convert_bind_param(self, value, engine):\n return value\n def convert_result_value(self, value, engine):\n return value\n def adapt(self, typeobj):\n \"\"\"produces an adaptation of this object given a type which is a subclass of this object\"\"\"\n return typeobj(self.precision)\n def adapt_args(self):\n \"\"\"allows for the adaptation of this TypeEngine object into a new kind of type depending on its arguments.\"\"\"\n return self\n<\/&>\n<\/&>\n<\/&>","old_contents":"<%flags>inherit='document_base.myt'<\/%flags>\n<%attr>title='The Types System'<\/%attr>\n\n<&|doclib.myt:item, name=\"types\", description=\"The Types System\" &>\n The package sqlalchemy.types<\/span> defines the datatype identifiers which may be used when defining <&formatting.myt:link, path=\"metadata\", text=\"table metadata\"&>. This package includes a set of generic types, a set of SQL-specific subclasses of those types, and a small extension system used by specific database connectors to adapt these generic types into database-specific type objects.\n<\/p>\n<&|doclib.myt:item, name=\"standard\", description=\"Built-in Types\" &>\n\n SQLAlchemy comes with a set of standard generic datatypes, which are defined as classes. They are specified to table meta data using either the class itself, or an instance of the class. Creating an instance of the class allows you to specify parameters for the type, such as string length, numerical precision, etc. 
\n<\/p>\n The standard set of generic types are:<\/p>\n<&|formatting.myt:code&>\n# sqlalchemy.types package:\nclass String(TypeEngine):\n def __init__(self, length=None)\n \nclass Integer(TypeEngine)\n \nclass Numeric(TypeEngine): \n def __init__(self, precision=10, length=2)\n \nclass Float(TypeEngine):\n def __init__(self, precision=10)\n \nclass DateTime(TypeEngine)\n \nclass Binary(TypeEngine): \n def __init__(self, length=None)\n \nclass Boolean(TypeEngine)\n<\/&>\n More specific subclasses of these types are available, to allow finer grained control over types:<\/p>\n<&|formatting.myt:code&>\nclass FLOAT(Numeric)\nclass TEXT(String)\nclass DECIMAL(Numeric)\nclass INT(Integer)\nINTEGER = INT\nclass TIMESTAMP(DateTime)\nclass DATETIME(DateTime)\nclass CLOB(String)\nclass VARCHAR(String)\nclass CHAR(String)\nclass BLOB(Binary)\nclass BOOLEAN(Boolean)\n<\/&>\n When using a specific database engine, these types are adapted even further via a set of database-specific subclasses defined by the database engine.<\/p>\n<\/&>\n\n<&|doclib.myt:item, name=\"custom\", description=\"Creating your Own Types\" &>\n Types also support pre-processing of query parameters as well as post-processing of result set data. You can make your own classes to perform these operations. They are specified by subclassing the desired type class as well as the special mixin TypeDecorator, which manages the adaptation of the underlying type to a database-specific type:<\/p>\n<&|formatting.myt:code&>\n import sqlalchemy.types as types\n\n class MyType(types.TypeDecorator, types.String):\n \"\"\"basic type that decorates String, prefixes values with \"PREFIX:\" on \n the way in and strips it off on the way out.\"\"\"\n def convert_bind_param(self, value):\n return \"PREFIX:\" + value\n def convert_result_value(self, value):\n return value[7:]\n<\/&>\n Another example, which illustrates a fully defined datatype. 
This just overrides the base type class TypeEngine:<\/p>\n<&|formatting.myt:code&>\n import sqlalchemy.types as types\n\n class MyType(types.TypeEngine):\n def __init__(self, precision = 8):\n self.precision = precision\n def get_col_spec(self):\n return \"MYTYPE(%s)\" % self.precision\n def convert_bind_param(self, value):\n return value\n def convert_result_value(self, value):\n return value\n def adapt(self, typeobj):\n \"\"\"produces an adaptation of this object given a type which is a subclass of this object\"\"\"\n return typeobj(self.precision)\n def adapt_args(self):\n \"\"\"allows for the adaptation of this TypeEngine object into a new kind of type depending on its arguments.\"\"\"\n return self\n<\/&>\n<\/&>\n<\/&>","returncode":0,"stderr":"","license":"mit","lang":"Myghty"}
{"commit":"12d77467d32fdf17e66844aeb22326ae5ac008a1","subject":"","message":"\n\ngit-svn-id: http:\/\/svn.sqlalchemy.org\/sqlalchemy\/trunk@521 8cd8332f-0806-0410-a4b6-96f4b9520244\n","repos":"obeattie\/sqlalchemy,obeattie\/sqlalchemy,obeattie\/sqlalchemy","old_file":"doc\/build\/content\/roadmap.myt","new_file":"doc\/build\/content\/roadmap.myt","new_contents":"<%flags>inherit='document_base.myt'<\/%flags>\n<&|doclib.myt:item, name=\"roadmap\", description=\"Roadmap\" &>\n SQLAlchemy includes several components, each of which are useful by themselves to give varying levels of assistance to a database-enabled application. Below is a roadmap of the \"knowledge dependencies\" between these components indicating the order in which concepts may be learned. \n<\/p>\n\n SQLAlchemy includes several components, each of which are useful by themselves to give varying levels of assistance to a database-enabled application. Below is a roadmap of the \"knowledge dependencies\" between these components indicating the order in which concepts may be learned. \n<\/p>\n\n Data mapping describes the process of defining Mapper<\/b> objects, which associate table metadata with user-defined classes. \n\nThe Mapper's role is to perform SQL operations upon the database, associating individual table rows with instances of those classes, and individual database columns with properties upon those instances, to transparently associate in-memory objects with a persistent database representation. <\/p>\n\n When a Mapper is created to associate a Table object with a class, all of the columns defined in the Table object are associated with the class via property accessors, which add overriding functionality to the normal process of setting and getting object attributes. These property accessors also keep track of changes to object attributes; these changes will be stored to the database when the application \"commits\" the current transactional context (known as a Unit of Work<\/b>). 
The __init__()<\/span> method of the object is also decorated to communicate changes when new instances of the object are created.<\/p>\n\n The Mapper also provides the interface by which instances of the object are loaded from the database. The primary method for this is its select()<\/span> method, which has similar arguments to a sqlalchemy.sql.Select<\/span> object. But this select method executes automatically and returns results, instead of awaiting an execute() call. Instead of returning a cursor-like object, it returns an array of objects.<\/p>\n\n The three elements to be defined, i.e. the Table metadata, the user-defined class, and the Mapper, are typically defined as module-level variables, and may be defined in any fashion suitable to the application, with the only requirement being that the class and table metadata are described before the mapper. For the sake of example, we will be defining these elements close together, but this should not be construed as a requirement; since SQLAlchemy is not a framework, those decisions are left to the developer or an external framework.\n<\/p>\n<&|doclib.myt:item, name=\"synopsis\", description=\"Synopsis\" &>\n This is the simplest form of a full \"round trip\" of creating table meta data, creating a class, mapping the class to the table, getting some results, and saving changes. 
For each concept, the following sections will dig in deeper to the available capabilities.<\/p>\n <&|formatting.myt:code&>\n from sqlalchemy import *\n \n # engine\n engine = create_engine(\"sqlite:\/\/mydb.db\")\n \n # table metadata\n users = Table('users', engine, \n Column('user_id', Integer, primary_key=True),\n Column('user_name', String(16)),\n Column('password', String(20))\n )\n\n # class definition \n class User(object):\n pass\n \n # create a mapper\n usermapper = mapper(User, users)\n \n # select\n<&formatting.myt:poplink&>user = usermapper.select_by(user_name='fred')[0] \n<&|formatting.myt:codepopper, link=\"sql\" &>\nSELECT users.user_id AS users_user_id, users.user_name AS users_user_name, \nusers.password AS users_password \nFROM users \nWHERE users.user_name = :users_user_name ORDER BY users.oid\n\n{'users_user_name': 'fred'}\n <\/&>\n # modify\n user.user_name = 'fred jones'\n \n # commit - saves everything that changed\n<&formatting.myt:poplink&>objectstore.commit() \n<&|formatting.myt:codepopper, link=\"sql\" &>\nUPDATE users SET user_name=:user_name \n WHERE users.user_id = :user_id\n\n[{'user_name': 'fred jones', 'user_id': 1}] \n <\/&>\n \n \n <\/&>\n <&|doclib.myt:item, name=\"attaching\", description=\"Attaching Mappers to their Class\"&>\n For convenience's sake, the Mapper can be attached as an attribute on the class itself as well:<\/p>\n <&|formatting.myt:code&>\n User.mapper = mapper(User, users)\n \n userlist = User.mapper.select_by(user_id=12)\n <\/&>\n There is also a full-blown \"monkeypatch\" function that creates a primary mapper, attaches the above mapper class property, and also the methods get, get_by, select, select_by, selectone, commit<\/span> and delete<\/span>:<\/p>\n <&|formatting.myt:code&>\n assign_mapper(User, users)\n userlist = User.select_by(user_id=12)\n <\/&>\n Other methods of associating mappers and finder methods with their corresponding classes, such as via common base classes or mixins, can be devised as 
well. SQLAlchemy does not aim to dictate application architecture and will always allow the broadest variety of architectural patterns, but may include more helper objects and suggested architectures in the future.<\/p>\n <\/&>\n <&|doclib.myt:item, name=\"overriding\", description=\"Overriding Properties\"&>\n A common request is the ability to create custom class properties that override the behavior of setting\/getting an attribute. Currently, the easiest way to do this in SQLAlchemy is just how its done normally; define your attribute with a different name, such as \"_attribute\", and use a property to get\/set its value. The mapper just needs to be told of the special name:<\/p>\n <&|formatting.myt:code&>\n class MyClass(object):\n def _set_email(self, email):\n self._email = email\n def _get_email(self, email):\n return self._email\n email = property(_get_email, _set_email)\n \n m = mapper(MyClass, mytable, properties = {\n # map the '_email' attribute to the \"email\" column\n # on the table\n '_email': mytable.c.email\n })\n <\/&>\n In a later release, SQLAlchemy will also allow _get_email and _set_email to be attached directly to the \"email\" property created by the mapper, and will also allow this association to occur via decorators.<\/p>\n <\/&>\n<\/&>\n<&|doclib.myt:item, name=\"selecting\", description=\"Selecting from a Mapper\" &>\n There are a variety of ways to select from a mapper. These range from minimalist to explicit. 
Below is a synopsis of the these methods:<\/p>\n <&|formatting.myt:code&>\n # select_by, using property names or column names as keys\n # the keys are grouped together by an AND operator\n result = mapper.select_by(name='john', street='123 green street')\n\n # select_by can also combine SQL criterion with key\/value properties\n result = mapper.select_by(users.c.user_name=='john', \n addresses.c.zip_code=='12345, street='123 green street')\n \n # get_by, which takes the same arguments as select_by\n # returns a single scalar result or None if no results\n user = mapper.get_by(id=12)\n \n # \"dynamic\" versions of select_by and get_by - everything past the \n # \"select_by_\" or \"get_by_\" is used as the key, and the function argument\n # as the value\n result = mapper.select_by_name('fred')\n u = mapper.get_by_name('fred')\n \n # get an object directly from its primary key. this will bypass the SQL\n # call if the object has already been loaded\n u = mapper.get(15)\n \n # get an object that has a composite primary key of three columns.\n # the order of the arguments matches that of the table meta data.\n myobj = mapper.get(27, 3, 'receipts')\n \n # using a WHERE criterion\n result = mapper.select(or_(users.c.user_name == 'john', users.c.user_name=='fred'))\n \n # using a WHERE criterion to get a scalar\n u = mapper.selectone(users.c.user_name=='john')\n \n # using a full select object\n result = mapper.select(users.select(users.c.user_name=='john'))\n \n # using straight text \n result = mapper.select_text(\"select * from users where user_name='fred'\")\n\n # or using a \"text\" object\n result = mapper.select(text(\"select * from users where user_name='fred'\", engine=engine))\n <\/&> \n The last few examples above show the usage of the mapper's table object to provide the columns for a WHERE Clause. These columns are also accessible off of the mapped class directly. 
When a mapper is assigned to a class, it also attaches a special property accessor c<\/span> to the class itself, which can be used just like the table metadata to access the columns of the table:<\/p>\n <&|formatting.myt:code&>\n User.mapper = mapper(User, users)\n \n userlist = User.mapper.select(User.c.user_id==12)\n <\/&> \n<\/&>\n<&|doclib.myt:item, name=\"saving\", description=\"Saving Objects\" &>\n When objects corresponding to mapped classes are created or manipulated, all changes are logged by a package called sqlalchemy.mapping.objectstore<\/span>. The changes are then written to the database when an application calls objectstore.commit()<\/span>. This pattern is known as a Unit of Work<\/b>, and has many advantages over saving individual objects or attributes on those objects with individual method invocations. Domain models can be built with far greater complexity with no concern over the order of saves and deletes, excessive database round-trips and write operations, or deadlocking issues. The commit() operation uses a transaction as well, and will also perform \"concurrency checking\" to insure the proper number of rows were in fact affected (not supported with the current MySQL drivers). 
Transactional resources are used effectively in all cases; the unit of work handles all the details.<\/p>\n \n When a mapper is created, the target class has its mapped properties decorated by specialized property accessors that track changes, and its __init__()<\/span> method is also decorated to mark new objects as \"new\".<\/p>\n <&|formatting.myt:code&>\n User.mapper = mapper(User, users)\n\n # create a new User\n myuser = User()\n myuser.user_name = 'jane'\n myuser.password = 'hello123'\n\n # create another new User \n myuser2 = User()\n myuser2.user_name = 'ed'\n myuser2.password = 'lalalala'\n\n # load a third User from the database \n<&formatting.myt:poplink&>myuser3 = User.mapper.select(User.c.user_name=='fred')[0] \n<&|formatting.myt:codepopper, link=\"sql\" &>\nSELECT users.user_id AS users_user_id, \nusers.user_name AS users_user_name, users.password AS users_password\nFROM users WHERE users.user_name = :users_user_name\n{'users_user_name': 'fred'}\n<\/&>\n myuser3.user_name = 'fredjones'\n\n # save all changes \n<&formatting.myt:poplink&>objectstore.commit() \n<&|formatting.myt:codepopper, link=\"sql\" &>\nUPDATE users SET user_name=:user_name\nWHERE users.user_id =:users_user_id\n[{'users_user_id': 1, 'user_name': 'fredjones'}]\n\nINSERT INTO users (user_name, password) VALUES (:user_name, :password)\n{'password': 'hello123', 'user_name': 'jane'}\n\nINSERT INTO users (user_name, password) VALUES (:user_name, :password)\n{'password': 'lalalala', 'user_name': 'ed'}\n<\/&>\n <\/&>\n In the examples above, we defined a User class with basically no properties or methods. Theres no particular reason it has to be this way, the class can explicitly set up whatever properties it wants, whether or not they will be managed by the mapper. 
It can also specify a constructor, with the restriction that the constructor is able to function with no arguments being passed to it (this restriction can be lifted with some extra parameters to the mapper; more on that later):<\/p>\n <&|formatting.myt:code&>\n class User(object):\n def __init__(self, user_name = None, password = None):\n self.user_id = None\n self.user_name = user_name\n self.password = password\n def get_name(self):\n return self.user_name\n def __repr__(self):\n return \"User id %s name %s password %s\" % (repr(self.user_id), \n repr(self.user_name), repr(self.password))\n User.mapper = mapper(User, users)\n\n u = User('john', 'foo')\n<&formatting.myt:poplink&>objectstore.commit() \n<&|formatting.myt:codepopper, link=\"sql\" &>\nINSERT INTO users (user_name, password) VALUES (:user_name, :password)\n{'password': 'foo', 'user_name': 'john'}\n<\/&>\n >>> u\n User id 1 name 'john' password 'foo'\n \n <\/&>\n\n Recent versions of SQLAlchemy will only put modified object attributes columns into the UPDATE statements generated upon commit. This is to conserve database traffic and also to successfully interact with a \"deferred\" attribute, which is a mapped object attribute against the mapper's primary table that isnt loaded until referenced by the application.<\/p>\n<\/&>\n\n<&|doclib.myt:item, name=\"relations\", description=\"Defining and Using Relationships\" &>\n So that covers how to map the columns in a table to an object, how to load objects, create new ones, and save changes. The next step is how to define an object's relationships to other database-persisted objects. This is done via the relation<\/span> function provided by the mapper module. So with our User class, lets also define the User has having one or more mailing addresses. 
First, the table metadata:<\/p>\n <&|formatting.myt:code&>\n from sqlalchemy import *\n engine = create_engine('sqlite', {'filename':'mydb'})\n \n # define user table\n users = Table('users', engine, \n Column('user_id', Integer, primary_key=True),\n Column('user_name', String(16)),\n Column('password', String(20))\n )\n \n # define user address table\n addresses = Table('addresses', engine,\n Column('address_id', Integer, primary_key=True),\n Column('user_id', Integer, ForeignKey(\"users.user_id\")),\n Column('street', String(100)),\n Column('city', String(80)),\n Column('state', String(2)),\n Column('zip', String(10))\n )\n <\/&>\n Of importance here is the addresses table's definition of a foreign key<\/b> relationship to the users table, relating the user_id column into a parent-child relationship. When a Mapper wants to indicate a relation of one object to another, this ForeignKey object is the default method by which the relationship is determined (although if you didn't define ForeignKeys, or you want to specify explicit relationship columns, that is available as well). <\/p>\n So then lets define two classes, the familiar User class, as well as an Address class:\n\n <&|formatting.myt:code&>\n class User(object):\n def __init__(self, user_name = None, password = None):\n self.user_name = user_name\n self.password = password\n \n class Address(object):\n def __init__(self, street=None, city=None, state=None, zip=None):\n self.street = street\n self.city = city\n self.state = state\n self.zip = zip\n <\/&>\n And then a Mapper that will define a relationship of the User and the Address classes to each other as well as their table metadata. 
We will add an additional mapper keyword argument properties<\/span> which is a dictionary relating the name of an object property to a database relationship, in this case a relation<\/span> object against a newly defined mapper for the Address class:<\/p>\n <&|formatting.myt:code&>\n User.mapper = mapper(User, users, properties = {\n 'addresses' : relation(mapper(Address, addresses))\n }\n )\n <\/&>\n Lets do some operations with these classes and see what happens:<\/p>\n\n <&|formatting.myt:code&>\n u = User('jane', 'hihilala')\n u.addresses.append(Address('123 anywhere street', 'big city', 'UT', '76543'))\n u.addresses.append(Address('1 Park Place', 'some other city', 'OK', '83923'))\n\n objectstore.commit() \n<&|formatting.myt:poppedcode, link=\"sql\" &>INSERT INTO users (user_name, password) VALUES (:user_name, :password)\n{'password': 'hihilala', 'user_name': 'jane'}\n\nINSERT INTO addresses (user_id, street, city, state, zip) VALUES (:user_id, :street, :city, :state, :zip)\n{'city': 'big city', 'state': 'UT', 'street': '123 anywhere street', 'user_id':1, 'zip': '76543'}\n\nINSERT INTO addresses (user_id, street, city, state, zip) VALUES (:user_id, :street, :city, :state, :zip)\n{'city': 'some other city', 'state': 'OK', 'street': '1 Park Place', 'user_id':1, 'zip': '83923'}\n<\/&>\n <\/&>\n A lot just happened there! The Mapper object figured out how to relate rows in the addresses table to the users table, and also upon commit had to determine the proper order in which to insert rows. After the insert, all the User and Address objects have all their new primary and foreign keys populated.<\/p>\n\n Also notice that when we created a Mapper on the User class which defined an 'addresses' relation, the newly created User instance magically had an \"addresses\" attribute which behaved like a list. 
This list is in reality a property accessor function, which returns an instance of sqlalchemy.util.HistoryArraySet<\/span>, which fulfills the full set of Python list accessors, but maintains a unique<\/b> set of objects (based on their in-memory identity), and also tracks additions and deletions to the list:<\/p>\n <&|formatting.myt:code&>\n del u.addresses[1]\n u.addresses.append(Address('27 New Place', 'Houston', 'TX', '34839'))\n\n objectstore.commit() \n\n<&|formatting.myt:poppedcode, link=\"sql\" &>UPDATE addresses SET user_id=:user_id\n WHERE addresses.address_id = :addresses_address_id\n[{'user_id': None, 'addresses_address_id': 2}]\n\nINSERT INTO addresses (user_id, street, city, state, zip) \nVALUES (:user_id, :street, :city, :state, :zip)\n{'city': 'Houston', 'state': 'TX', 'street': '27 New Place', 'user_id': 1, 'zip': '34839'}\n<\/&> \n\n <\/&>\n<&|doclib.myt:item, name=\"private\", description=\"Useful Feature: Private Relations\" &>\n So our one address that was removed from the list, was updated to have a user_id of None<\/span>, and a new address object was inserted to correspond to the new Address added to the User. But now, theres a mailing address with no user_id floating around in the database of no use to anyone. How can we avoid this ? 
This is acheived by using the private=True<\/span> parameter of relation<\/span>:\n\n <&|formatting.myt:code&>\n User.mapper = mapper(User, users, properties = {\n 'addresses' : relation(mapper(Address, addresses), private=True)\n }\n )\n del u.addresses[1]\n u.addresses.append(Address('27 New Place', 'Houston', 'TX', '34839'))\n\n objectstore.commit() <&|formatting.myt:poppedcode, link=\"sql\" &>\nINSERT INTO addresses (user_id, street, city, state, zip) \nVALUES (:user_id, :street, :city, :state, :zip)\n{'city': 'Houston', 'state': 'TX', 'street': '27 New Place', 'user_id': 1, 'zip': '34839'}\n\nDELETE FROM addresses WHERE addresses.address_id = :address_id\n[{'address_id': 2}]\n<\/&> \n\n <\/&>\n In this case, with the private flag set, the element that was removed from the addresses list was also removed from the database. By specifying the private<\/span> flag on a relation, it is indicated to the Mapper that these related objects exist only as children of the parent object, otherwise should be deleted.<\/p>\n<\/&>\n<&|doclib.myt:item, name=\"backreferences\", description=\"Useful Feature: Backreferences\" &>\n By creating relations with the backref<\/span> keyword, a bi-directional relationship can be created which will keep both ends of the relationship updated automatically, even without any database queries being executed. 
Below, the User mapper is created with an \"addresses\" property, and the corresponding Address mapper receives a \"backreference\" to the User object via the property name \"user\":\n <&|formatting.myt:code&>\n Address.mapper = mapper(Address, addresses)\n User.mapper = mapper(User, users, properties = {\n 'addresses' : relation(Address.mapper, backref='user')\n }\n )\n\n u = User('fred', 'hi')\n a1 = Address('123 anywhere street', 'big city', 'UT', '76543')\n a2 = Address('1 Park Place', 'some other city', 'OK', '83923')\n \n # append a1 to u\n u.addresses.append(a1)\n \n # attach u to a2\n a2.user = u\n \n # the bi-directional relation is maintained\n >>> u.addresses == [a1, a2]\n True\n >>> a1.user is user and a2.user is user\n True\n <\/&>\n\n The backreference feature also works with many-to-many relationships, which are described later. When creating a backreference, a corresponding property is placed on the child mapper. The default arguments to this property can be overridden using the backref()<\/span> function:\n <&|formatting.myt:code&>\n Address.mapper = mapper(Address, addresses)\n \n User.mapper = mapper(User, users, properties = {\n 'addresses' : relation(Address.mapper, \n backref=backref('user', lazy=False, private=True))\n }\n )\n <\/&>\n<\/&>\n<&|doclib.myt:item, name=\"cascade\", description=\"Creating Relationships Automatically with cascade_mappers\" &>\n The mapper package has a helper function cascade_mappers()<\/span> which can simplify the task of linking several mappers together. Given a list of classes and\/or mappers, it identifies the foreign key relationships between the given mappers or corresponding class mappers, and creates relation() objects representing those relationships, including a backreference. Attempts to find\nthe \"secondary\" table in a many-to-many relationship as well. The names of the relations\nare a lowercase version of the related class. 
In the case of one-to-many or many-to-many,\nthe name is \"pluralized\", which currently is based on the English language (i.e. an 's' or \n'es' added to it):<\/p>\n <&|formatting.myt:code&>\n # create two mappers. the 'users' and 'addresses' tables have a foreign key\n # relationship\n mapper1 = mapper(User, users)\n mapper2 = mapper(Address, addresses)\n \n # cascade the two mappers together (can also specify User, Address as the arguments)\n cascade_mappers(mapper1, mapper2)\n \n # two new object instances\n u = User('user1')\n a = Address('test')\n \n # \"addresses\" and \"user\" property are automatically added\n u.addresses.append(a)\n print a.user\n <\/&>\n\n<\/&>\n <&|doclib.myt:item, name=\"lazyload\", description=\"Selecting from Relationships: Lazy Load\" &>\n We've seen how the relation<\/span> specifier affects the saving of an object and its child items, how does it affect selecting them? By default, the relation keyword indicates that the related property should be attached a Lazy Loader<\/b> when instances of the parent object are loaded from the database; this is just a callable function that when accessed will invoke a second SQL query to load the child objects of the parent.<\/p>\n \n <&|formatting.myt:code&>\n # define a mapper\n User.mapper = mapper(User, users, properties = {\n 'addresses' : relation(mapper(Address, addresses), private=True)\n })\n \n # select users where username is 'jane', get the first element of the list\n # this will incur a load operation for the parent table\n user = User.mapper.select(user_name='jane')[0] \n \n<&|formatting.myt:poppedcode, link=\"sql\" &>SELECT users.user_id AS users_user_id, \nusers.user_name AS users_user_name, users.password AS users_password\nFROM users WHERE users.user_name = :users_user_name ORDER BY users.oid\n{'users_user_name': 'jane'}\n<\/&>\n\n # iterate through the User object's addresses. 
this will incur an\n # immediate load of those child items\n for a in user.addresses: \n<&|formatting.myt:poppedcode, link=\"sql\" &>SELECT addresses.address_id AS addresses_address_id, \naddresses.user_id AS addresses_user_id, addresses.street AS addresses_street, \naddresses.city AS addresses_city, addresses.state AS addresses_state, \naddresses.zip AS addresses_zip FROM addresses\nWHERE addresses.user_id = :users_user_id ORDER BY addresses.oid\n{'users_user_id': 1}<\/&> \n print repr(a)\n\n <\/&> \n <&|doclib.myt:item, name=\"relselectby\", description=\"Useful Feature: Creating Joins via select_by\" &>\n In mappers that have relationships, the select_by<\/span> method and its cousins include special functionality that can be used to create joins. Just specify a key in the argument list which is not present in the primary mapper's list of properties or columns, but *is* present in the property list of one of its relationships:\n <&|formatting.myt:code&>\n <&formatting.myt:poplink&>l = User.mapper.select_by(street='123 Green Street')\n<&|formatting.myt:codepopper, link=\"sql\" &>SELECT users.user_id AS users_user_id, \nusers.user_name AS users_user_name, users.password AS users_password\nFROM users, addresses \nWHERE users.user_id=addresses.user_id\nAND addresses.street=:addresses_street\nORDER BY users.oid\n{'addresses_street', '123 Green Street'}\n<\/&>\n <\/&>\n The above example is shorthand for:<\/p>\n <&|formatting.myt:code&>\n l = User.mapper.select(and_(\n Address.c.user_id==User.c.user_id, \n Address.c.street=='123 Green Street')\n )\n <\/&>\n \n <\/&>\n <&|doclib.myt:item, name=\"refreshing\", description=\"How to Refresh the List?\" &>\n Once the child list of Address objects is loaded, it is done loading for the lifetime of the object instance. Changes to the list will not be interfered with by subsequent loads, and upon commit those changes will be saved. 
Similarly, if a new User object is created and child Address objects added, a subsequent select operation which happens to touch upon that User instance, will also not affect the child list, since it is already loaded.<\/p>\n \n The issue of when the mapper actually gets brand new objects from the database versus when it assumes the in-memory version is fine the way it is, is a subject of transactional scope<\/b>. Described in more detail in the Unit of Work section, for now it should be noted that the total storage of all newly created and selected objects, within the scope of the current thread<\/b>, can be reset via releasing or otherwise disregarding all current object instances, and calling:<\/p>\n <&|formatting.myt:code&>\n objectstore.clear()\n <\/&>\n This operation will clear out all currently mapped object instances, and subsequent select statements will load fresh copies from the databse.<\/p>\n \n To operate upon a single object, just use the remove<\/span> function:<\/p>\n <&|formatting.myt:code&>\n # (this function coming soon)\n objectstore.remove(myobject)\n <\/&>\n \n <\/&>\n <\/&>\n <&|doclib.myt:item, name=\"eagerload\", description=\"Selecting from Relationships: Eager Load\" &>\n With just a single parameter \"lazy=False\" specified to the relation object, the parent and child SQL queries can be joined together.\n\n <&|formatting.myt:code&>\n Address.mapper = mapper(Address, addresses)\n User.mapper = mapper(User, users, properties = {\n 'addresses' : relation(Address.mapper, lazy=False)\n }\n )\n\n user = User.mapper.get_by(user_name='jane')\n\n<&|formatting.myt:poppedcode, link=\"sql\" &>SELECT users.user_id AS users_user_id, users.user_name AS users_user_name, \nusers.password AS users_password, \naddresses.address_id AS addresses_address_id, addresses.user_id AS addresses_user_id, \naddresses.street AS addresses_street, addresses.city AS addresses_city, \naddresses.state AS addresses_state, addresses.zip AS addresses_zip\nFROM users LEFT 
OUTER JOIN addresses ON users.user_id = addresses.user_id\nWHERE users.user_name = :users_user_name ORDER BY users.oid, addresses.oid\n{'users_user_name': 'jane'}\n<\/&>\n for a in user.addresses: \n print repr(a)\n\n <\/&>\n Above, a pretty ambitious query is generated just by specifying that the User should be loaded with its child Addresses in one query. When the mapper processes the results, it uses an Identity Map<\/b> to keep track of objects that were already loaded, based on their primary key identity. Through this method, the redundant rows produced by the join are organized into the distinct object instances they represent.<\/p>\n \n The generation of this query is also immune to the effects of additional joins being specified in the original query. To use our select_by example above, joining against the \"addresses\" table to locate users with a certain street results in this behavior:\n <&|formatting.myt:code&>\n users = User.mapper.select_by(street='123 Green Street')\n\n<&|formatting.myt:poppedcode, link=\"sql\" &>SELECT users.user_id AS users_user_id, \nusers.user_name AS users_user_name, users.password AS users_password, \naddresses.address_id AS addresses_address_id, \naddresses.user_id AS addresses_user_id, addresses.street AS addresses_street, \naddresses.city AS addresses_city, addresses.state AS addresses_state, \naddresses.zip AS addresses_zip\nFROM addresses AS addresses_417c, \nusers LEFT OUTER JOIN addresses ON users.user_id = addresses.user_id\nWHERE addresses_417c.street = :addresses_street \nAND users.user_id = addresses_417c.user_id \nORDER BY users.oid, addresses.oid\n{'addresses_street': '123 Green Street'}\n<\/&>\n <\/&> \n The join implied by passing the \"street\" parameter is converted into an \"aliasized\" clause by the eager loader, so that it does not conflict with the join used to eager load the child address objects.<\/p>\n <\/&>\n <&|doclib.myt:item, name=\"options\", description=\"Switching Lazy\/Eager, No Load\" &>\n The 
options<\/span> method of mapper provides an easy way to get alternate forms of a mapper from an original one. The most common use of this feature is to change the \"eager\/lazy\" loading behavior of a particular mapper, via the functions eagerload()<\/span>, lazyload()<\/span> and noload()<\/span>:\n <\/p>\n <&|formatting.myt:code&>\n # user mapper with lazy addresses\n User.mapper = mapper(User, users, properties = {\n 'addresses' : relation(mapper(Address, addresses))\n }\n )\n \n # make an eager loader \n eagermapper = User.mapper.options(eagerload('addresses'))\n u = eagermapper.select()\n \n # make another mapper that wont load the addresses at all\n plainmapper = User.mapper.options(noload('addresses'))\n \n # multiple options can be specified\n mymapper = oldmapper.options(lazyload('tracker'), noload('streets'), eagerload('members'))\n\n # to specify a relation on a relation, separate the property names by a \".\"\n mymapper = oldmapper.options(eagerload('orders.items'))\n\n <\/&> \n \n <\/&>\n\n<\/&>\n\n\n<&|doclib.myt:item, name=\"onetoone\", description=\"One to One\/Many to One\" &>\n The above examples focused on the \"one-to-many\" relationship. 
To do other forms of relationship is easy, as the relation<\/span> function can usually figure out what you want:<\/p>\n\n <&|formatting.myt:code&>\n # a table to store a user's preferences for a site\n prefs = Table('user_prefs', engine,\n Column('pref_id', Integer, primary_key = True),\n Column('stylename', String(20)),\n Column('save_password', Boolean, nullable = False),\n Column('timezone', CHAR(3), nullable = False)\n )\n\n # user table gets 'preference_id' column added\n users = Table('users', engine, \n Column('user_id', Integer, primary_key = True),\n Column('user_name', String(16), nullable = False),\n Column('password', String(20), nullable = False),\n Column('preference_id', Integer, ForeignKey(\"prefs.pref_id\"))\n )\n \n # class definition for preferences\n class UserPrefs(object):\n pass\n UserPrefs.mapper = mapper(UserPrefs, prefs)\n \n # address mapper\n Address.mapper = mapper(Address, addresses)\n \n # make a new mapper referencing everything.\n m = mapper(User, users, properties = dict(\n addresses = relation(Address.mapper, lazy=True, private=True),\n preferences = relation(UserPrefs.mapper, lazy=False, private=True),\n ))\n \n # select\n<&formatting.myt:poplink&>user = m.get_by(user_name='fred')\n<&|formatting.myt:codepopper, link=\"sql\" &>\nSELECT users.user_id AS users_user_id, users.user_name AS users_user_name, \nusers.password AS users_password, users.preference_id AS users_preference_id, \nuser_prefs.pref_id AS user_prefs_pref_id, user_prefs.stylename AS user_prefs_stylename, \nuser_prefs.save_password AS user_prefs_save_password, user_prefs.timezone AS user_prefs_timezone \nFROM users LEFT OUTER JOIN user_prefs ON user_prefs.pref_id = users.preference_id \nWHERE users.user_name = :users_user_name ORDER BY users.oid, user_prefs.oid\n\n{'users_user_name': 'fred'}\n <\/&>\n save_password = user.preferences.save_password\n \n # modify\n user.preferences.stylename = 
'bluesteel'\n<&formatting.myt:poplink&>user.addresses.append(Address('freddy@hi.org')) \n<&|formatting.myt:codepopper, link=\"sql\" &>\nSELECT email_addresses.address_id AS email_addresses_address_id, \nemail_addresses.user_id AS email_addresses_user_id, \nemail_addresses.email_address AS email_addresses_email_address \nFROM email_addresses \nWHERE email_addresses.user_id = :users_user_id \nORDER BY email_addresses.oid, email_addresses.oid\n\n{'users_user_id': 1}\n <\/&>\n # commit\n <&formatting.myt:poplink&>objectstore.commit() \n<&|formatting.myt:codepopper, link=\"sql\" &>\nUPDATE user_prefs SET stylename=:stylename\nWHERE user_prefs.pref_id = :pref_id\n\n[{'stylename': 'bluesteel', 'pref_id': 1}]\n\nINSERT INTO email_addresses (address_id, user_id, email_address) \nVALUES (:address_id, :user_id, :email_address)\n\n{'email_address': 'freddy@hi.org', 'address_id': None, 'user_id': 1}\n<\/&>\n <\/&>\n<\/&>\n\n<&|doclib.myt:item, name=\"manytomany\", description=\"Many to Many\" &>\n The relation<\/span> function handles a basic many-to-many relationship when you specify the association table:<\/p>\n <&|formatting.myt:code&>\n articles = Table('articles', engine,\n Column('article_id', Integer, primary_key = True),\n Column('headline', String(150), key='headline'),\n Column('body', TEXT, key='body'),\n )\n\n keywords = Table('keywords', engine,\n Column('keyword_id', Integer, primary_key = True),\n Column('keyword_name', String(50))\n )\n\n itemkeywords = Table('article_keywords', engine,\n Column('article_id', Integer, ForeignKey(\"articles.article_id\")),\n Column('keyword_id', Integer, ForeignKey(\"keywords.keyword_id\"))\n )\n\n # class definitions\n class Keyword(object):\n def __init__(self, name = None):\n self.keyword_name = name\n\n class Article(object):\n pass\n \n # define a mapper that does many-to-many on the 'itemkeywords' association \n # table\n Article.mapper = mapper(Article, articles, properties = dict(\n keywords = relation(mapper(Keyword, 
keywords), itemkeywords, lazy=False)\n )\n )\n\n article = Article()\n article.headline = 'a headline'\n article.body = 'this is the body'\n article.keywords.append(Keyword('politics'))\n article.keywords.append(Keyword('entertainment'))\n <&formatting.myt:poplink&>\n objectstore.commit() <&|formatting.myt:codepopper, link=\"sql\" &>\nINSERT INTO keywords (name) VALUES (:name)\n\n{'name': 'politics'}\n\nINSERT INTO keywords (name) VALUES (:name)\n\n{'name': 'entertainment'}\n\nINSERT INTO articles (article_headline, article_body) VALUES (:article_headline, :article_body)\n\n{'article_body': 'this is the body', 'article_headline': 'a headline'}\n\nINSERT INTO article_keywords (article_id, keyword_id) VALUES (:article_id, :keyword_id)\n\n[{'keyword_id': 1, 'article_id': 1}, {'keyword_id': 2, 'article_id': 1}]\n<\/&>\n # select articles based on a keyword. select_by will handle the extra joins.\n <&formatting.myt:poplink&>articles = Article.mapper.select_by(keyword_name='politics')\n<&|formatting.myt:codepopper, link=\"sql\" &>\nSELECT articles.article_id AS articles_article_id, \narticles.article_headline AS articles_article_headline, \narticles.article_body AS articles_article_body, \nkeywords.keyword_id AS keywords_keyword_id, \nkeywords.keyword_name AS keywords_keyword_name \nFROM keywords AS keywords_f008, \narticle_keywords AS article_keywords_dbf0, \narticles LEFT OUTER JOIN article_keywords ON \narticles.article_id = article_keywords.article_id \nLEFT OUTER JOIN keywords ON \nkeywords.keyword_id = article_keywords.keyword_id \nWHERE (keywords_f008.keyword_name = :keywords_keyword_name \nAND articles.article_id = article_keywords_dbf0.article_id) \nAND keywords_f008.keyword_id = article_keywords_dbf0.keyword_id \nORDER BY articles.oid, article_keywords.oid \n{'keywords_keyword_name': 'politics'}\n<\/&>\n # modify\n a = articles[0]\n del a.keywords[:]\n a.keywords.append(Keyword('topstories'))\n a.keywords.append(Keyword('government'))\n\n # commit. 
individual INSERT\/DELETE operations will take place only for the list\n # elements that changed.\n<&formatting.myt:poplink&> \n objectstore.commit() \n<&|formatting.myt:codepopper &>\nINSERT INTO keywords (name) VALUES (:name)\n\n{'name': 'topstories'}\n\nINSERT INTO keywords (name) VALUES (:name)\n\n{'name': 'government'}\n\nDELETE FROM article_keywords \nWHERE article_keywords.article_id = :article_id \nAND article_keywords.keyword_id = :keyword_id\n\n[{'keyword_id': 1, 'article_id': 1}, {'keyword_id': 2, 'article_id': 1}]\n\nINSERT INTO article_keywords (article_id, keyword_id) VALUES (:article_id, :keyword_id)\n\n[{'keyword_id': 3, 'article_id': 1}, {'keyword_id': 4, 'article_id': 1}]\n<\/&>\n\n \n <\/&>\n<\/&>\n<&|doclib.myt:item, name=\"association\", description=\"Association Object\" &>\n\n Many to Many can also be done with an association object, that adds additional information about how two items are related. This association object is set up in basically the same way as any other mapped object. However, since an association table typically has no primary key columns, you have to tell the mapper what columns will compose its \"primary key\", which are the two (or more) columns involved in the association. 
Also, the relation function needs an additional hint as to the fact that this mapped object is an association object, via the \"association\" argument which points to the class or mapper representing the other side of the association.<\/p>\n <&|formatting.myt:code&>\n # add \"attached_by\" column which will reference the user who attached this keyword\n itemkeywords = Table('article_keywords', engine,\n Column('article_id', Integer, ForeignKey(\"articles.article_id\")),\n Column('keyword_id', Integer, ForeignKey(\"keywords.keyword_id\")),\n Column('attached_by', Integer, ForeignKey(\"users.user_id\"))\n )\n\n # define an association class\n class KeywordAssociation(object):\n pass\n\n # mapper for KeywordAssociation\n # specify \"primary key\" columns manually\n KeywordAssociation.mapper = mapper(KeywordAssociation, itemkeywords,\n primary_key = [itemkeywords.c.article_id, itemkeywords.c.keyword_id],\n properties={\n 'keyword' : relation(Keyword, lazy = False), # uses primary Keyword mapper\n 'user' : relation(User, lazy = True) # uses primary User mapper\n }\n )\n \n # mappers for Users, Keywords\n User.mapper = mapper(User, users)\n Keyword.mapper = mapper(Keyword, keywords)\n \n # define the mapper. 
\n m = mapper(Article, articles, properties={\n 'keywords':relation(KeywordAssociation.mapper, lazy=False, association=Keyword)\n\t\t}\n )\n \n # bonus step - well, we do want to load the users in one shot, \n # so modify the mapper via an option.\n # this returns a new mapper with the option switched on.\n m2 = mapper.options(eagerload('keywords.user'))\n \n # select by keyword again\n <&formatting.myt:poplink&>alist = m2.select_by(keyword_name='jacks_stories')\n<&|formatting.myt:codepopper, link=\"sql\" &>\nSELECT articles.article_id AS articles_article_id, \narticles.article_headline AS articles_article_headline, \narticles.article_body AS articles_article_body, \narticle_keywords.article_id AS article_keywords_article_id, \narticle_keywords.keyword_id AS article_keywords_keyword_id, \narticle_keywords.attached_by AS article_keywords_attached_by, \nusers.user_id AS users_user_id, users.user_name AS users_user_name, \nusers.password AS users_password, users.preference_id AS users_preference_id, \nkeywords.keyword_id AS keywords_keyword_id, keywords.name AS keywords_name \nFROM article_keywords article_keywords_3a64, keywords keywords_11b7, \narticles LEFT OUTER JOIN article_keywords ON articles.article_id = article_keywords.article_id \nLEFT OUTER JOIN users ON users.user_id = article_keywords.attached_by \nLEFT OUTER JOIN keywords ON keywords.keyword_id = article_keywords.keyword_id \nWHERE keywords_11b7.keyword_id = article_keywords_3a64.keyword_id \nAND article_keywords_3a64.article_id = articles.article_id \nAND keywords_11b7.name = :keywords_name \nORDER BY articles.oid, article_keywords.oid, users.oid, keywords.oid\n\n{'keywords_name': 'jacks_stories'}\n<\/&>\n # user is available\n for a in alist:\n for k in a.keywords:\n if k.keyword.name == 'jacks_stories':\n print k.user.user_name\n \n<\/&>\n \n<\/&>\n\n<\/&>\n","old_contents":"<%flags>inherit='document_base.myt'<\/%flags>\n<%attr>title='Data Mapping'<\/%attr>\n\n<&|doclib.myt:item, 
name=\"datamapping\", description=\"Basic Data Mapping\" &>\n Data mapping describes the process of defining Mapper<\/b> objects, which associate table metadata with user-defined classes. \n\nThe Mapper's role is to perform SQL operations upon the database, associating individual table rows with instances of those classes, and individual database columns with properties upon those instances, to transparently associate in-memory objects with a persistent database representation. <\/p>\n\n When a Mapper is created to associate a Table object with a class, all of the columns defined in the Table object are associated with the class via property accessors, which add overriding functionality to the normal process of setting and getting object attributes. These property accessors also keep track of changes to object attributes; these changes will be stored to the database when the application \"commits\" the current transactional context (known as a Unit of Work<\/b>). The __init__()<\/span> method of the object is also decorated to communicate changes when new instances of the object are created.<\/p>\n\n The Mapper also provides the interface by which instances of the object are loaded from the database. The primary method for this is its select()<\/span> method, which has similar arguments to a sqlalchemy.sql.Select<\/span> object. But this select method executes automatically and returns results, instead of awaiting an execute() call. Instead of returning a cursor-like object, it returns an array of objects.<\/p>\n\n The three elements to be defined, i.e. the Table metadata, the user-defined class, and the Mapper, are typically defined as module-level variables, and may be defined in any fashion suitable to the application, with the only requirement being that the class and table metadata are described before the mapper. 
For the sake of example, we will be defining these elements close together, but this should not be construed as a requirement; since SQLAlchemy is not a framework, those decisions are left to the developer or an external framework.\n<\/p>\n<&|doclib.myt:item, name=\"synopsis\", description=\"Synopsis\" &>\n This is the simplest form of a full \"round trip\" of creating table meta data, creating a class, mapping the class to the table, getting some results, and saving changes. For each concept, the following sections will dig in deeper to the available capabilities.<\/p>\n <&|formatting.myt:code&>\n from sqlalchemy import *\n \n # engine\n engine = create_engine(\"sqlite:\/\/mydb.db\")\n \n # table metadata\n users = Table('users', engine, \n Column('user_id', Integer, primary_key=True),\n Column('user_name', String(16)),\n Column('password', String(20))\n )\n\n # class definition \n class User(object):\n pass\n \n # create a mapper\n usermapper = mapper(User, users)\n \n # select\n<&formatting.myt:poplink&>user = usermapper.select_by(user_name='fred')[0] \n<&|formatting.myt:codepopper, link=\"sql\" &>\nSELECT users.user_id AS users_user_id, users.user_name AS users_user_name, \nusers.password AS users_password \nFROM users \nWHERE users.user_name = :users_user_name ORDER BY users.oid\n\n{'users_user_name': 'fred'}\n <\/&>\n # modify\n user.user_name = 'fred jones'\n \n # commit - saves everything that changed\n<&formatting.myt:poplink&>objectstore.commit() \n<&|formatting.myt:codepopper, link=\"sql\" &>\nUPDATE users SET user_name=:user_name \n WHERE users.user_id = :user_id\n\n[{'user_name': 'fred jones', 'user_id': 1}] \n <\/&>\n \n \n <\/&>\n <&|doclib.myt:item, name=\"attaching\", description=\"Attaching Mappers to their Class\"&>\n For convenience's sake, the Mapper can be attached as an attribute on the class itself as well:<\/p>\n <&|formatting.myt:code&>\n User.mapper = mapper(User, users)\n \n userlist = User.mapper.select_by(user_id=12)\n <\/&>\n There is 
also a full-blown \"monkeypatch\" function that creates a primary mapper, attaches the above mapper class property, and also the methods <span>get, get_by, select, select_by, selectone, commit<\/span> and <span>delete<\/span>:<\/p>\n <&|formatting.myt:code&>\n assign_mapper(User, users)\n userlist = User.select_by(user_id=12)\n <\/&>\n Other methods of associating mappers and finder methods with their corresponding classes, such as via common base classes or mixins, can be devised as well. SQLAlchemy does not aim to dictate application architecture and will always allow the broadest variety of architectural patterns, but may include more helper objects and suggested architectures in the future.<\/p>\n <\/&>\n <&|doclib.myt:item, name=\"overriding\", description=\"Overriding Properties\"&>\n A common request is the ability to create custom class properties that override the behavior of setting\/getting an attribute. Currently, the easiest way to do this in SQLAlchemy is just as it's done normally; define your attribute with a different name, such as \"_attribute\", and use a property to get\/set its value. The mapper just needs to be told of the special name:<\/p>\n <&|formatting.myt:code&>\n class MyClass(object):\n def _set_email(self, email):\n self._email = email\n def _get_email(self):\n return self._email\n email = property(_get_email, _set_email)\n \n m = mapper(MyClass, mytable, properties = {\n # map the '_email' attribute to the \"email\" column\n # on the table\n '_email': mytable.c.email\n })\n <\/&>\n In a later release, SQLAlchemy will also allow _get_email and _set_email to be attached directly to the \"email\" property created by the mapper, and will also allow this association to occur via decorators.<\/p>\n <\/&>\n<\/&>\n<&|doclib.myt:item, name=\"selecting\", description=\"Selecting from a Mapper\" &>\n There are a variety of ways to select from a mapper. These range from minimalist to explicit. 
Below is a synopsis of these methods:<\/p>\n <&|formatting.myt:code&>\n # select_by, using property names or column names as keys\n # the keys are grouped together by an AND operator\n result = mapper.select_by(name='john', street='123 green street')\n\n # select_by can also combine SQL criterion with key\/value properties\n result = mapper.select_by(users.c.user_name=='john', \n addresses.c.zip_code=='12345', street='123 green street')\n \n # get_by, which takes the same arguments as select_by\n # returns a single scalar result or None if no results\n user = mapper.get_by(id=12)\n \n # \"dynamic\" versions of select_by and get_by - everything past the \n # \"select_by_\" or \"get_by_\" is used as the key, and the function argument\n # as the value\n result = mapper.select_by_name('fred')\n u = mapper.get_by_name('fred')\n \n # get an object directly from its primary key. this will bypass the SQL\n # call if the object has already been loaded\n u = mapper.get(15)\n \n # get an object that has a composite primary key of three columns.\n # the order of the arguments matches that of the table meta data.\n myobj = mapper.get(27, 3, 'receipts')\n \n # using a WHERE criterion\n result = mapper.select(or_(users.c.user_name == 'john', users.c.user_name=='fred'))\n \n # using a WHERE criterion to get a scalar\n u = mapper.selectone(users.c.user_name=='john')\n \n # using a full select object\n result = mapper.select(users.select(users.c.user_name=='john'))\n \n # using straight text \n result = mapper.select_text(\"select * from users where user_name='fred'\")\n\n # or using a \"text\" object\n result = mapper.select(text(\"select * from users where user_name='fred'\", engine=engine))\n <\/&> \n The last few examples above show the usage of the mapper's table object to provide the columns for a WHERE clause. These columns are also accessible off of the mapped class directly. 
When a mapper is assigned to a class, it also attaches a special property accessor c<\/span> to the class itself, which can be used just like the table metadata to access the columns of the table:<\/p>\n <&|formatting.myt:code&>\n User.mapper = mapper(User, users)\n \n userlist = User.mapper.select(User.c.user_id==12)\n <\/&> \n<\/&>\n<&|doclib.myt:item, name=\"saving\", description=\"Saving Objects\" &>\n When objects corresponding to mapped classes are created or manipulated, all changes are logged by a package called sqlalchemy.mapping.objectstore<\/span>. The changes are then written to the database when an application calls objectstore.commit()<\/span>. This pattern is known as a Unit of Work<\/b>, and has many advantages over saving individual objects or attributes on those objects with individual method invocations. Domain models can be built with far greater complexity with no concern over the order of saves and deletes, excessive database round-trips and write operations, or deadlocking issues. The commit() operation uses a transaction as well, and will also perform \"concurrency checking\" to insure the proper number of rows were in fact affected (not supported with the current MySQL drivers). 
Transactional resources are used effectively in all cases; the unit of work handles all the details.<\/p>\n \n When a mapper is created, the target class has its mapped properties decorated by specialized property accessors that track changes, and its __init__()<\/span> method is also decorated to mark new objects as \"new\".<\/p>\n <&|formatting.myt:code&>\n User.mapper = mapper(User, users)\n\n # create a new User\n myuser = User()\n myuser.user_name = 'jane'\n myuser.password = 'hello123'\n\n # create another new User \n myuser2 = User()\n myuser2.user_name = 'ed'\n myuser2.password = 'lalalala'\n\n # load a third User from the database \n<&formatting.myt:poplink&>myuser3 = User.mapper.select(User.c.user_name=='fred')[0] \n<&|formatting.myt:codepopper, link=\"sql\" &>\nSELECT users.user_id AS users_user_id, \nusers.user_name AS users_user_name, users.password AS users_password\nFROM users WHERE users.user_name = :users_user_name\n{'users_user_name': 'fred'}\n<\/&>\n myuser3.user_name = 'fredjones'\n\n # save all changes \n<&formatting.myt:poplink&>objectstore.commit() \n<&|formatting.myt:codepopper, link=\"sql\" &>\nUPDATE users SET user_name=:user_name\nWHERE users.user_id =:users_user_id\n[{'users_user_id': 1, 'user_name': 'fredjones'}]\n\nINSERT INTO users (user_name, password) VALUES (:user_name, :password)\n{'password': 'hello123', 'user_name': 'jane'}\n\nINSERT INTO users (user_name, password) VALUES (:user_name, :password)\n{'password': 'lalalala', 'user_name': 'ed'}\n<\/&>\n <\/&>\n In the examples above, we defined a User class with basically no properties or methods. Theres no particular reason it has to be this way, the class can explicitly set up whatever properties it wants, whether or not they will be managed by the mapper. 
It can also specify a constructor, with the restriction that the constructor is able to function with no arguments being passed to it (this restriction can be lifted with some extra parameters to the mapper; more on that later):<\/p>\n <&|formatting.myt:code&>\n class User(object):\n def __init__(self, user_name = None, password = None):\n self.user_id = None\n self.user_name = user_name\n self.password = password\n def get_name(self):\n return self.user_name\n def __repr__(self):\n return \"User id %s name %s password %s\" % (repr(self.user_id), \n repr(self.user_name), repr(self.password))\n User.mapper = mapper(User, users)\n\n u = User('john', 'foo')\n<&formatting.myt:poplink&>objectstore.commit() \n<&|formatting.myt:codepopper, link=\"sql\" &>\nINSERT INTO users (user_name, password) VALUES (:user_name, :password)\n{'password': 'foo', 'user_name': 'john'}\n<\/&>\n >>> u\n User id 1 name 'john' password 'foo'\n \n <\/&>\n\n Recent versions of SQLAlchemy will only put modified object attributes columns into the UPDATE statements generated upon commit. This is to conserve database traffic and also to successfully interact with a \"deferred\" attribute, which is a mapped object attribute against the mapper's primary table that isnt loaded until referenced by the application.<\/p>\n<\/&>\n\n<&|doclib.myt:item, name=\"relations\", description=\"Defining and Using Relationships\" &>\n So that covers how to map the columns in a table to an object, how to load objects, create new ones, and save changes. The next step is how to define an object's relationships to other database-persisted objects. This is done via the relation<\/span> function provided by the mapper module. So with our User class, lets also define the User has having one or more mailing addresses. 
First, the table metadata:<\/p>\n <&|formatting.myt:code&>\n from sqlalchemy import *\n engine = create_engine('sqlite', {'filename':'mydb'})\n \n # define user table\n users = Table('users', engine, \n Column('user_id', Integer, primary_key=True),\n Column('user_name', String(16)),\n Column('password', String(20))\n )\n \n # define user address table\n addresses = Table('addresses', engine,\n Column('address_id', Integer, primary_key=True),\n Column('user_id', Integer, ForeignKey(\"users.user_id\")),\n Column('street', String(100)),\n Column('city', String(80)),\n Column('state', String(2)),\n Column('zip', String(10))\n )\n <\/&>\n Of importance here is the addresses table's definition of a foreign key<\/b> relationship to the users table, relating the user_id column into a parent-child relationship. When a Mapper wants to indicate a relation of one object to another, this ForeignKey object is the default method by which the relationship is determined (although if you didn't define ForeignKeys, or you want to specify explicit relationship columns, that is available as well). <\/p>\n So then lets define two classes, the familiar User class, as well as an Address class:\n\n <&|formatting.myt:code&>\n class User(object):\n def __init__(self, user_name = None, password = None):\n self.user_name = user_name\n self.password = password\n \n class Address(object):\n def __init__(self, street=None, city=None, state=None, zip=None):\n self.street = street\n self.city = city\n self.state = state\n self.zip = zip\n <\/&>\n And then a Mapper that will define a relationship of the User and the Address classes to each other as well as their table metadata. 
We will add an additional mapper keyword argument properties<\/span> which is a dictionary relating the name of an object property to a database relationship, in this case a relation<\/span> object against a newly defined mapper for the Address class:<\/p>\n <&|formatting.myt:code&>\n User.mapper = mapper(User, users, properties = {\n 'addresses' : relation(mapper(Address, addresses))\n }\n )\n <\/&>\n Lets do some operations with these classes and see what happens:<\/p>\n\n <&|formatting.myt:code&>\n u = User('jane', 'hihilala')\n u.addresses.append(Address('123 anywhere street', 'big city', 'UT', '76543'))\n u.addresses.append(Address('1 Park Place', 'some other city', 'OK', '83923'))\n\n objectstore.commit() \n<&|formatting.myt:poppedcode, link=\"sql\" &>INSERT INTO users (user_name, password) VALUES (:user_name, :password)\n{'password': 'hihilala', 'user_name': 'jane'}\n\nINSERT INTO addresses (user_id, street, city, state, zip) VALUES (:user_id, :street, :city, :state, :zip)\n{'city': 'big city', 'state': 'UT', 'street': '123 anywhere street', 'user_id':1, 'zip': '76543'}\n\nINSERT INTO addresses (user_id, street, city, state, zip) VALUES (:user_id, :street, :city, :state, :zip)\n{'city': 'some other city', 'state': 'OK', 'street': '1 Park Place', 'user_id':1, 'zip': '83923'}\n<\/&>\n <\/&>\n A lot just happened there! The Mapper object figured out how to relate rows in the addresses table to the users table, and also upon commit had to determine the proper order in which to insert rows. After the insert, all the User and Address objects have all their new primary and foreign keys populated.<\/p>\n\n Also notice that when we created a Mapper on the User class which defined an 'addresses' relation, the newly created User instance magically had an \"addresses\" attribute which behaved like a list. 
This list is in reality a property accessor function, which returns an instance of sqlalchemy.util.HistoryArraySet<\/span>, which fulfills the full set of Python list accessors, but maintains a unique<\/b> set of objects (based on their in-memory identity), and also tracks additions and deletions to the list:<\/p>\n <&|formatting.myt:code&>\n del u.addresses[1]\n u.addresses.append(Address('27 New Place', 'Houston', 'TX', '34839'))\n\n objectstore.commit() \n\n<&|formatting.myt:poppedcode, link=\"sql\" &>UPDATE addresses SET user_id=:user_id\n WHERE addresses.address_id = :addresses_address_id\n[{'user_id': None, 'addresses_address_id': 2}]\n\nINSERT INTO addresses (user_id, street, city, state, zip) \nVALUES (:user_id, :street, :city, :state, :zip)\n{'city': 'Houston', 'state': 'TX', 'street': '27 New Place', 'user_id': 1, 'zip': '34839'}\n<\/&> \n\n <\/&>\n<&|doclib.myt:item, name=\"private\", description=\"Useful Feature: Private Relations\" &>\n So our one address that was removed from the list, was updated to have a user_id of select<\/code> call to a mapper which returns results, all of those objects are now installed within the current Session, mapped to their identity.<\/p>\n \n
<p>While <code>select<\/code> or <code>get<\/code> calls on mappers issue queries to the database, and will in nearly all cases go out to the database on each call to fetch results, when the mapper <b>instantiates<\/b> objects corresponding to the result set rows it receives, it will check the current identity map <b>first<\/b> before instantiating a new object, and return the <b>same instance<\/b> already present in the identity map if it already exists.<\/p> \n \n
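The "check the identity map first" behavior described above can be sketched in plain stdlib Python. This is only an illustration of the idea, not SQLAlchemy's internals; the class and row data are invented for the example:

```python
# Illustrative sketch: an identity map keyed on primary key identity
# ensures redundant rows (as produced by a joined eager load) collapse
# into a single instance per key.
class User:
    def __init__(self, user_id, user_name):
        self.user_id = user_id
        self.user_name = user_name
        self.addresses = []

# rows as (user_id, user_name, street) tuples; hypothetical data
rows = [
    (1, 'jane', '123 anywhere street'),
    (1, 'jane', '1 Park Place'),
]

identity_map = {}
for user_id, user_name, street in rows:
    key = (User, user_id)
    user = identity_map.get(key)
    if user is None:                 # instantiate only on first sight
        user = identity_map[key] = User(user_id, user_name)
    user.addresses.append(street)

users = list(identity_map.values())  # one User carrying both addresses
```

Both result rows map to the same in-memory User instance; only the child list grows.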
<p>The identity map holds its objects within a <code>weakref.WeakValueDictionary<\/code>, so that when an in-memory object falls out of scope, it will be removed automatically. However, this may not be instant if there are circular references upon the object. The current SA attributes implementation places some circular refs upon objects, although this may change in the future. There are other ways to remove object instances from the current session, as well as to clear the current session entirely, which are described later in this section.<\/p>\n
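The weak-referencing behavior can be demonstrated with a short stdlib sketch (the class here is a stand-in, not a mapped class; the point is only that entries vanish once the last strong reference does):

```python
import weakref

class MappedThing:
    """Stand-in for a mapped instance; any ordinary class works."""
    def __init__(self, ident):
        self.ident = ident

# the registry holds values weakly, like the session's identity map
registry = weakref.WeakValueDictionary()

obj = MappedThing(1)
registry[1] = obj
same = registry.get(1) is obj    # same in-memory instance comes back

del obj                          # drop the only strong reference
gone = registry.get(1) is None   # entry removed automatically (CPython,
                                 # no circular refs on this object)
```

As the text notes, objects carrying circular references may linger until garbage collection runs, rather than disappearing instantly.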
<p>The map itself is available via the <code>identity_map<\/code> accessor, and is an instance of <code>
weakref.WeakValueDictionary<\/code>:<\/p>\n <&|formatting.myt:code&><% \"\"\"\n >>> objectstore.get_session().identity_map.values()\n [<__main__.User object at 0x712630>, <__main__.Address object at 0x712a70>]\n \"\"\" %>\n <\/&>\n \n <\/&>\n \n <&|doclib.myt:item, name=\"changed\", description=\"Whats Changed ?\" &>\n
<p>The Session also maintains lists of new, modified, and deleted objects, which are used when a <code>commit()<\/code> call is issued to save all changes. After the commit occurs, these lists are all cleared out.<\/p>\n \n
<p>These lists are maintained as <code>Set<\/code> objects (which are a SQLAlchemy-specific instance called a <code>HashSet<\/code>) that are also viewable off the Session:<\/p>\n <&|formatting.myt:code&>\n # new objects that were just constructed\n session.new\n \n # objects that exist in the database, that were modified\n session.dirty\n \n # objects that have been marked as deleted via session.delete(obj)\n session.deleted\n \n # list-based attributes that have been appended to\n session.modified_lists\n <\/&>\n
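The lifecycle of these change-tracking lists can be imitated with a toy sketch. This is a deliberately minimal stand-in, not SQLAlchemy's actual Session; the `ToySession` class and its methods are invented for the example:

```python
# Toy unit-of-work: collect new/dirty/deleted objects, "flush" them in
# one batch, and clear the lists afterward -- mirroring the lifecycle
# of session.new, session.dirty and session.deleted shown above.
class ToySession:
    def __init__(self):
        self.new = set()
        self.dirty = set()
        self.deleted = set()

    def save(self, obj):
        self.new.add(obj)

    def mark_dirty(self, obj):
        if obj not in self.new:      # brand-new objects are INSERTed anyway
            self.dirty.add(obj)

    def delete(self, obj):
        self.new.discard(obj)
        self.dirty.discard(obj)
        self.deleted.add(obj)

    def commit(self):
        ops = (['INSERT'] * len(self.new)
               + ['UPDATE'] * len(self.dirty)
               + ['DELETE'] * len(self.deleted))
        self.new.clear(); self.dirty.clear(); self.deleted.clear()
        return ops

session = ToySession()
a, b = object(), object()
session.save(a)                      # a is new
session.mark_dirty(b)                # b is pre-existing and modified
ops = session.commit()
emptied = not (session.new or session.dirty or session.deleted)
```

One commit emits the whole batch of pending operations and leaves all three lists empty, just as the interactive example below shows for the real session.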
<p>Here is an interactive example, using the <code>User<\/code> and <code>
Address<\/code> mapper setup first outlined in <&formatting.myt:link, path=\"datamapping_relations\"&>:<\/p>\n <&|formatting.myt:code&>\n \">>>\" # get the current thread's session\n \">>>\" session = objectstore.get_session()\n\n \">>>\" # create a new object, with a list-based attribute \n \">>>\" # containing two more new objects\n \">>>\" u = User(user_name='Fred')\n \">>>\" u.addresses.append(Address(city='New York'))\n \">>>\" u.addresses.append(Address(city='Boston'))\n \n \">>>\" # objects are in the \"new\" list\n \">>>\" session.new\n [<__main__.User object at 0x713630>, \n <__main__.Address object at 0x713a70>, \n <__main__.Address object at 0x713b30>]\n \n \">>>\" # view the \"modified lists\" member, \n \">>>\" # reveals our two Address objects as well, inside of a list\n \">>>\" session.modified_lists\n [[<__main__.Address object at 0x713a70>, <__main__.Address object at 0x713b30>]]\n\n \">>>\" # lets view what the class\/ID is for the list object\n \">>>\" [\"%s %s\" % (l.__class__, id(l)) for l in session.modified_lists]\n ['sqlalchemy.mapping.unitofwork.UOWListElement 7391872']\n \n \">>>\" # now commit\n \">>>\" session.commit()\n \n \">>>\" # the \"new\" list is now empty\n \">>>\" session.new\n []\n \n \">>>\" # the \"modified lists\" list is now empty\n \">>>\" session.modified_lists\n []\n \n \">>>\" # now lets modify an object\n \">>>\" u.user_name='Ed'\n \n \">>>\" # it gets placed in the \"dirty\" list\n \">>>\" session.dirty\n [<__main__.User object at 0x713630>]\n \n \">>>\" # delete one of the addresses \n \">>>\" session.delete(u.addresses[0])\n \n \">>>\" # and also delete it off the User object, note that\n \">>>\" # this is *not automatic* when using session.delete()\n \">>>\" del u.addresses[0]\n \">>>\" session.deleted\n [<__main__.Address object at 0x713a70>] \n \n \">>>\" # commit\n \">>>\" session.commit()\n \n \">>>\" # all lists are cleared out\n \">>>\" session.new, session.dirty, session.modified_lists, session.deleted\n 
([], [], [], [])\n \n \">>>\" # identity map has the User and the one remaining Address\n \">>>\" session.identity_map.values()\n [<__main__.Address object at 0x713b30>, <__main__.User object at 0x713630>]\n <\/&>\n
new<\/code>,
dirty<\/code>,
modified_lists<\/code>, and
deleted<\/code> lists are not weak-referencing.<\/b>  This means if you abandon all references to new or modified objects within a session, they are still present<\/b> and will be saved on the next commit operation, unless they are removed from the Session explicitly (more on that later).  The 
new<\/code> list may change in a future release to be weak-referencing; however, for the 
deleted<\/code> list, one can see that it's quite natural for an object marked as deleted to have no references in the application, yet a DELETE operation is still required.<\/p>\n        <\/&>\n    \n    <&|doclib.myt:item, name=\"commit\", description=\"Commit\" &>\n        
private<\/code> relationships for a delete operation:<\/p>\n <&|formatting.myt:code&>\n # saves only user1 and address2. all other modified\n # objects remain present in the session.\n objectstore.get_session().commit(user1, address2)\n <\/&>\n
\n
address.user_id<\/code> to 5, that integer attribute will be saved, but it will not place an
Address<\/code> object in the
addresses<\/code> attribute of the corresponding
User<\/code> object.  In some cases there may be a lazy-loader still attached to an object attribute which, when first accessed, performs a fresh load from the database and creates the appearance of this behavior, but this behavior should not be relied upon as it is specific to lazy loading and also may disappear in a future release.  Similarly, if the 
Address<\/code> object is marked as deleted and a commit is issued, the correct DELETE statements will be issued, but if the object instance itself is still attached to the
User<\/code>, it will remain.<\/p>\n
private=True<\/code> option, DELETE statements will be issued for objects within that relationship in addition to that of the primary deleted object; this is called a cascading delete<\/b>.<\/p>\n
expunge<\/code> when you'd like to remove an object altogether from memory, such as before calling 
del<\/code> on it, which will prevent any \"ghost\" operations occurring when the session is committed.<\/p>\n        <\/&>\n\n        <&|doclib.myt:item, name=\"import\", description=\"Import Instance\" &>\n        <\/&>\n        \n    <\/&>\n    <&|doclib.myt:item, name=\"begincommit\", description=\"Begin\/Commit\" &>\n        
\n
select<\/code> call to a mapper which returns results, all of those objects are now installed within the current Session, mapped to their identity.<\/p>\n \n
select<\/code> or
get<\/code> calls on mappers issue queries to the database, they will in nearly all cases go out to the database on each call to fetch results.  However, when the mapper instantiates<\/b> objects corresponding to the result set rows it receives, it will check the current identity map first<\/b> before instantiating a new object, and return the same instance<\/b> already present in the identity map if it already exists.<\/p> \n        \n        
weakref.WeakValueDictionary<\/code>, so that when an in-memory object falls out of scope, it will be removed automatically. However, this may not be instant if there are circular references upon the object. The current SA attributes implementation places some circular refs upon objects, although this may change in the future. There are other ways to remove object instances from the current session, as well as to clear the current session entirely, which are described later in this section.<\/p>\n
identity_map<\/code> accessor, and is an instance of
weakref.WeakValueDictionary<\/code>:<\/p>\n        <&|formatting.myt:code&><% \"\"\"\n            >>> objectstore.get_session().identity_map.values()\n            [<__main__.User object at 0x712630>, <__main__.Address object at 0x712a70>]\n        \"\"\" %>\n        <\/&>\n        \n    <\/&>\n    \n    <&|doclib.myt:item, name=\"changed\", description=\"What's Changed?\" &>\n        
commit()<\/code> call is issued to save all changes. After the commit occurs, these lists are all cleared out.<\/p>\n \n
Set<\/code> objects (which are a SQLAlchemy-specific instance called a
HashSet<\/code>) that are also viewable off the Session:<\/p>\n        <&|formatting.myt:code&>\n            # new objects that were just constructed\n            session.new\n            \n            # objects that exist in the database, that were modified\n            session.dirty\n            \n            # objects that have been marked as deleted via session.delete(obj)\n            session.deleted\n            \n            # list-based attributes that have been appended to\n            session.modified_lists\n        <\/&>\n        
User<\/code> and
Address<\/code> mapper setup first outlined in <&formatting.myt:link, path=\"datamapping_relations\"&>:<\/p>\n <&|formatting.myt:code&>\n \">>>\" # get the current thread's session\n \">>>\" session = objectstore.get_session()\n\n \">>>\" # create a new object, with a list-based attribute \n \">>>\" # containing two more new objects\n \">>>\" u = User(user_name='Fred')\n \">>>\" u.addresses.append(Address(city='New York'))\n \">>>\" u.addresses.append(Address(city='Boston'))\n \n \">>>\" # objects are in the \"new\" list\n \">>>\" session.new\n [<__main__.User object at 0x713630>, \n <__main__.Address object at 0x713a70>, \n <__main__.Address object at 0x713b30>]\n \n \">>>\" # view the \"modified lists\" member, \n \">>>\" # reveals our two Address objects as well, inside of a list\n \">>>\" session.modified_lists\n [[<__main__.Address object at 0x713a70>, <__main__.Address object at 0x713b30>]]\n\n \">>>\" # lets view what the class\/ID is for the list object\n \">>>\" [\"%s %s\" % (l.__class__, id(l)) for l in session.modified_lists]\n ['sqlalchemy.mapping.unitofwork.UOWListElement 7391872']\n \n \">>>\" # now commit\n \">>>\" session.commit()\n \n \">>>\" # the \"new\" list is now empty\n \">>>\" session.new\n []\n \n \">>>\" # the \"modified lists\" list is now empty\n \">>>\" session.modified_lists\n []\n \n \">>>\" # now lets modify an object\n \">>>\" u.user_name='Ed'\n \n \">>>\" # it gets placed in the \"dirty\" list\n \">>>\" session.dirty\n [<__main__.User object at 0x713630>]\n \n \">>>\" # delete one of the addresses \n \">>>\" session.delete(u.addresses[0])\n \n \">>>\" # and also delete it off the User object, note that\n \">>>\" # this is *not automatic* when using session.delete()\n \">>>\" del u.addresses[0]\n \">>>\" session.deleted\n [<__main__.Address object at 0x713a70>] \n \n \">>>\" # commit\n \">>>\" session.commit()\n \n \">>>\" # all lists are cleared out\n \">>>\" session.new, session.dirty, session.modified_lists, session.deleted\n 
([], [], [], [])\n \n \">>>\" # identity map has the User and the one remaining Address\n \">>>\" session.identity_map.values()\n [<__main__.Address object at 0x713b30>, <__main__.User object at 0x713630>]\n <\/&>\n
new<\/code>,
dirty<\/code>,
modified_lists<\/code>, and
deleted<\/code> lists are not weak-referencing.<\/b>  This means if you abandon all references to new or modified objects within a session, they are still present<\/b> and will be saved on the next commit operation, unless they are removed from the Session explicitly (more on that later).  The 
new<\/code> list may change in a future release to be weak-referencing; however, for the 
deleted<\/code> list, one can see that it's quite natural for an object marked as deleted to have no references in the application, yet a DELETE operation is still required.<\/p>\n        <\/&>\n    \n    <&|doclib.myt:item, name=\"commit\", description=\"Commit\" &>\n        
private<\/code> relationships for a delete operation:<\/p>\n <&|formatting.myt:code&>\n # saves only user1 and address2. all other modified\n # objects remain present in the session.\n objectstore.get_session().commit(user1, address2)\n <\/&>\n
\n
user.address_id<\/code> to 5, that will be saved, but it will not place an
Address<\/code> object on the
user<\/code> object. Similarly, if the
Address<\/code> object is deleted but is still attached to the
User<\/code>, it will remain.<\/li>\n <\/ul>\n
\n
\n
\n
\n
\nStart\n |\n |\n |--- <&formatting.myt:link, path=\"pooling_establishing\" &>\n | |\n | |\n | |------ <&formatting.myt:link, path=\"pooling_configuration\" &>\n | | \n | |\n +--- <&formatting.myt:link, path=\"dbengine_establishing\" &> |\n | |\n | | \n |--------- <&formatting.myt:link, path=\"dbengine_options\" &>\n |\n |\n +---- <&formatting.myt:link, path=\"metadata_tables\" &>\n |\n |\n |---- <&formatting.myt:link, path=\"metadata_creating\" &>\n | \n | \n |---- <&formatting.myt:link, path=\"sql\" &>\n | | \n | | \n +---- <&formatting.myt:link, path=\"datamapping\"&> | \n | | | \n | | | \n | <&formatting.myt:link, path=\"unitofwork\"&> | \n | | | \n | | | \n | +----------- <&formatting.myt:link, path=\"adv_datamapping\"&>\n | \n +----- <&formatting.myt:link, path=\"types\"&>\n<\/pre>\n<\/&>\n","old_contents":"<%flags>inherit='document_base.myt'<\/%flags>\n<&|doclib.myt:item, name=\"roadmap\", description=\"Roadmap\" &>\n
\nStart\n |\n |\n |--- <&formatting.myt:link, path=\"pooling_establishing\" &>\n | |\n | |\n | |------ <&formatting.myt:link, path=\"pooling_configuration\" &>\n | | \n | |\n +--- <&formatting.myt:link, path=\"dbengine_establishing\" &> |\n | |\n | | \n |--------- <&formatting.myt:link, path=\"dbengine_options\" &>\n |\n |\n +---- <&formatting.myt:link, path=\"metadata_tables\" &>\n |\n |\n |---- <&formatting.myt:link, path=\"metadata_building\" &>\n | \n | \n |---- <&formatting.myt:link, path=\"sql\" &>\n | | \n | | \n +---- <&formatting.myt:link, path=\"datamapping\"&> | \n | | | \n | | | \n | <&formatting.myt:link, path=\"unitofwork\"&> | \n | | | \n | | | \n | +----------- <&formatting.myt:link, path=\"adv_datamapping\"&>\n | \n +----- <&formatting.myt:link, path=\"types\"&>\n<\/pre>\n<\/&>\n","returncode":0,"stderr":"","license":"mit","lang":"Myghty"}
{"commit":"7a69b3b88e31731bf2c4734c61b6fb63c1c607d1","subject":"doc","message":"doc\n\n\ngit-svn-id: 655ff90ec95d1eeadb1ee4bb9db742a3c015d499@456 8cd8332f-0806-0410-a4b6-96f4b9520244\n","repos":"obeattie\/sqlalchemy,obeattie\/sqlalchemy,obeattie\/sqlalchemy","old_file":"doc\/build\/content\/pooling.myt","new_file":"doc\/build\/content\/pooling.myt","new_contents":"<%flags>inherit='document_base.myt'<\/%flags>\n<&|doclib.myt:item, name=\"pooling\", description=\"Connection Pooling\" &>\n <&|doclib.myt:item, name=\"establishing\", description=\"Establishing a Transparent Connection Pool\" &>\n <\/&>\n\n <&|doclib.myt:item, name=\"configuration\", description=\"Connection Pool Configuration\" &>\n <\/&>\n<\/&>","old_contents":"<%flags>inherit='document_base.myt'<\/%flags>\n<&|doclib.myt:item, name=\"pooling\", description=\"Connection Pooling\" &>\n<\/&>","returncode":0,"stderr":"","license":"mit","lang":"Myghty"}
{"commit":"fc43d50cec01cf3adf7a37b799f470f9445f3313","subject":"formatting...","message":"formatting...\n","repos":"robin900\/sqlalchemy,ThiefMaster\/sqlalchemy,alex\/sqlalchemy,monetate\/sqlalchemy,276361270\/sqlalchemy,graingert\/sqlalchemy,Cito\/sqlalchemy,elelianghh\/sqlalchemy,pdufour\/sqlalchemy,Akrog\/sqlalchemy,ThiefMaster\/sqlalchemy,276361270\/sqlalchemy,epa\/sqlalchemy,halfcrazy\/sqlalchemy,inspirehep\/sqlalchemy,epa\/sqlalchemy,itkovian\/sqlalchemy,davidfraser\/sqlalchemy,bootandy\/sqlalchemy,elelianghh\/sqlalchemy,alex\/sqlalchemy,bdupharm\/sqlalchemy,dstufft\/sqlalchemy,j5int\/sqlalchemy,Cito\/sqlalchemy,davidfraser\/sqlalchemy,wfxiang08\/sqlalchemy,sqlalchemy\/sqlalchemy,WinterNis\/sqlalchemy,dstufft\/sqlalchemy,olemis\/sqlalchemy,hsum\/sqlalchemy,halfcrazy\/sqlalchemy,sandan\/sqlalchemy,alex\/sqlalchemy,davidjb\/sqlalchemy,bootandy\/sqlalchemy,pdufour\/sqlalchemy,zzzeek\/sqlalchemy,EvaSDK\/sqlalchemy,brianv0\/sqlalchemy,Cito\/sqlalchemy,wujuguang\/sqlalchemy,graingert\/sqlalchemy,wfxiang08\/sqlalchemy,olemis\/sqlalchemy,brianv0\/sqlalchemy,bdupharm\/sqlalchemy,hsum\/sqlalchemy,davidjb\/sqlalchemy,robin900\/sqlalchemy,EvaSDK\/sqlalchemy,inspirehep\/sqlalchemy,sandan\/sqlalchemy,monetate\/sqlalchemy,WinterNis\/sqlalchemy,itkovian\/sqlalchemy,wujuguang\/sqlalchemy,j5int\/sqlalchemy,Akrog\/sqlalchemy","old_file":"doc\/build\/content\/datamapping.myt","new_file":"doc\/build\/content\/datamapping.myt","new_contents":"<%flags>inherit='document_base.myt'<\/%flags>\n<%attr>title='Data Mapping'<\/%attr>\n\n<&|doclib.myt:item, name=\"datamapping\", description=\"Basic Data Mapping\" &>\n