Column            Type           Range / values
author            int64          658 to 755k
date              stringlengths  19 to 19
timezone          int64          -46,800 to 43.2k
hash              stringlengths  40 to 40
message           stringlengths  5 to 490
mods              list
language          stringclasses  20 values
license           stringclasses  3 values
repo              stringlengths  5 to 68
original_message  stringlengths  12 to 491
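The rows below are individual records laid out field by field. As a minimal sketch (not the official loader) of how one record can be walked in Python, using the column names above; the record dict and the JSON-serialized form of `mods` are hypothetical illustrations, with the diff text elided:

    import json

    # Hypothetical record following the columns above; scalar values are
    # taken from the first row below, and the diff text is elided.
    record = {
        "author": 89741,
        "date": "31.03.2017 09:08:28",
        "timezone": -7200,
        "hash": "85f64865dee362fdf8cb367dd3b947c5a3e4842e",
        "message": "doc: clean up docs and integrate README.rst and CONTRIBUTING.rst",
        "mods": '[{"change_type": "MODIFY", "old_path": "docs/index.rst", '
                '"new_path": "docs/index.rst", "diff": "..."}]',
        "language": "Python",
        "license": "Apache License 2.0",
        "repo": "biosustain/cameo",
        "original_message": "doc: clean up docs and integrate README.rst and CONTRIBUTING.rst",
    }

    # `mods` is a list of file changes (change_type, old_path, new_path, diff).
    # If the loader already yields it as a list of dicts, skip the json.loads.
    mods = record["mods"]
    if isinstance(mods, str):
        mods = json.loads(mods)

    for mod in mods:
        print(mod["change_type"], mod["old_path"], "->", mod["new_path"])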

author: 89,741
date: 31.03.2017 09:08:28
timezone: -7,200
hash: 85f64865dee362fdf8cb367dd3b947c5a3e4842e
message: doc: clean up docs and integrate README.rst and CONTRIBUTING.rst
mods:
[ { "change_type": "MODIFY", "old_path": "docs/cobrapy_difference.rst", "new_path": "docs/cobrapy_difference.rst", "diff": "@@ -47,4 +47,4 @@ cobrapy:\n# proceed\nIt is important to note that cameo models maintain `optimize` to maintain\n-compatibility with cobrapy but we discourage its use.\n+compatibility with cobrapy.\n" }, { "change_type": "MODIFY", "old_path": "docs/contributing.rst", "new_path": "docs/contributing.rst", "diff": "-=====================\n-Contributing to cameo\n-=====================\n-\n-...\n+.. include:: ../CONTRIBUTING.rst\n\\ No newline at end of file\n" }, { "change_type": "MODIFY", "old_path": "docs/development.rst", "new_path": "docs/development.rst", "diff": "@@ -7,22 +7,25 @@ and/or that would like to contribute to its development.\ncameo vs. cobrapy\n-=================\n+~~~~~~~~~~~~~~~~~\n+While cameo uses and extends the same data structures as cobrapy, there exist a few notable differences.\nThe following provides a comprehensive side-by-side comparison of cobrapy and cameo aiming to make it easier for users\n-who are already familiar with cobrapy to get started with cameo. While cameo uses and extends the same data structures\n-as provided by cobrapy (`~cameo.core.Reaction`, `~cameo.core.SolverBasedModel`, etc.) and is thus backwards-compatible to it, it deviates\n+who are already familiar with cobrapy to get started with cameo.\nSolver interface\n----------------\nCameo deviates from cobrapy in the way optimization problems are solved by using a separate solver interface provided by\n-the optlang package (see :ref:`optlang_interface`). The following benefits ....\n+the optlang package (see :ref:`optlang_interface`), which has the following benefits:\n-* Methods that require solving multiple succession will run a lot faster since previously found solution will be reused.\n-* Implementation of novel or published becomes a lot easier since optlang (based on the very popular symbolic math library sympy) facilitates the formulation of constraints and objectives using equations (similar to GAMS) instead of matrix formalism.\n-* Adding of additional constraints (even non-metabolic ), is straight forwards and eliminates the problem in cobrapy of having to define opti (check out the ice cream sandwich ...)\n-* The optimization problem is always accessible and has a one-to-one correspondence to the model.\n+* Methods that require solving a model multiple times will run faster since previously found solutions will be\n+ automatically re-used by the solvers to warm-start the next optimization.\n+* Implementation of novel or published methods becomes easier since optlang (based on the popular symbolic math\n+ library sympy) facilitates the formulation of constraints and objectives using equations (similar to GAMS)\n+ instead of matrix formalism.\n+* Adding additional constraints (even non-metabolic) is straight forwards and eliminates the problem in cobrapy\n+ of having to define dummy metabolites etc.\nImporting a model\n-----------------\n@@ -65,7 +68,7 @@ user to determine if the problem was successfully solved.\nif solution.status == 'optimal':\n# proceed\n-In our personal opinion, we believe that the more pythonic way is to raise an Exception if the problem could not be solved.\n+In order to avoid users accidentally working with non-optimal solutions, cameo will raise an exception instead.\n.. 
code-block:: python\n@@ -82,23 +85,11 @@ compatibility with cobrapy but we discourage its use.\noptlang\ncopy_vs_time_machine\n-Convenience functions\n----------------------\n-\n-Cameo implements a number of convenience functions that are (currently) not available in cobrapy. For example, instead of\n-running flux variability analysis, one can quickly obtain the effective lower and upper bound\n-\n-.. code-block:: python\n-\n- model.reaction.PGK.effective_lower_bound\n-\n-\n-.. _optlang_interface\nThe optlang solver interface\n-============================\n+~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n-For efficiency reasons, cameo does not utilize the cobrapy's interface to LP and MILP solver.\n+For efficiency reasons, cameo does not utilize cobrapy's interfaces to LP and MILP solvers.\nInstead it utilizes optlang_, which is a generic interface to a number of free and commercial optimization solvers.\nIt is based on the popular symbolic math library sympy_ and thus enables the formulation of optimization problems\nusing equations instead of matrix formalism.\n@@ -112,7 +103,7 @@ The LP/MILP solver can be changed in the following way.\nmodel.solver = 'cplex'\n-Currently `cplex` and `glpk` are supported.\n+Currently `cplex`, `glpk`, and `gurobi` are supported.\nManipulating the solver object\n------------------------------\n@@ -125,14 +116,17 @@ For example, one can inspect the optimization problem in CPLEX LP format by prin\nprint(model.solver)\nHaving access to the `optlang`_ solver object provides for a very convenient way for manipulating the optimization problem.\n-It is straightforward to add additional constraints, for example, a flux ratio constraint.\n+For example, it is straightforward to add additional constraints, for example, a flux ratio constraint.\n.. code-block:: python\nreaction1 = model.reactions.PGI\nreaction2 = model.reactions.G6PDH2r\nratio = 5\n- flux_ratio_constraint = model.solver.interface.Constraint(reaction1.flux_expression - ratio * reaction2.flux_expression, lb=0, ub=0)\n+ flux_ratio_constraint = model.solver.interface.Constraint(\n+ reaction1.flux_expression - ratio * reaction2.flux_expression,\n+ lb=0,\n+ ub=0)\nmodel.solver.add(flux_ratio_constraint)\nThis will constrain the flux split between glycolysis and pentose phosphate patwhay to 20.\n@@ -142,7 +136,8 @@ This will constrain the flux split between glycolysis and pentose phosphate patw\nGood coding practices\n=====================\n-Cameo developers and user are encouraged to avoid making expensive copies of models and other data structures. Instead, we put forward a design pattern based on transactions.\n+Cameo developers and users are encouraged to avoid making copies of models and other data structures. Instead, we put\n+ forward a design pattern based on transactions.\n.. code-block:: python\n" }, { "change_type": "MODIFY", "old_path": "docs/index.rst", "new_path": "docs/index.rst", "diff": "-=================\n-Welcome to cameo!\n-=================\n+=====================================================\n+Computer Aided Metabolic Engineering and Optimization\n+=====================================================\n-|PyPI| |License| |Build Status| |Coverage Status| |DOI|\n+.. include:: ../README.rst\n+ :start-after: summary-start\n+ :end-before: summary-end\n-**Cameo** is a high-level python library developed to aid the strain\n-design process in metabolic engineering projects. 
The library provides a\n-modular framework of simulation methods, strain design methods, and access\n-to models, that targets developers that want custom analysis workflows.\n-Furthermore, it exposes a high-level API to users that just want to\n-compute promising strain designs.\n+.. include:: ../README.rst\n+ :start-after: showcase-start\n+ :end-before: showcase-end\n-You got curious? Head over to `try.cameo.bio <http://try.cameo.bio>`__\n-and give it a try.\nUser's guide\n-============\n+~~~~~~~~~~~~\n.. toctree::\n:maxdepth: 2\ninstallation\n- FAQ <FAQ>\ntutorials\nDevelopers's guide\n-==================\n+~~~~~~~~~~~~~~~~~~\n.. toctree::\n:maxdepth: 2\n- development\ncontributing\n+ development\nAPI\n-===\n+~~~\n.. toctree::\n:maxdepth: 2\n@@ -43,20 +39,11 @@ API\nIndices and tables\n-==================\n+~~~~~~~~~~~~~~~~~~\n* :ref:`genindex`\n* :ref:`modindex`\n* :ref:`search`\n-\n-.. |PyPI| image:: https://img.shields.io/pypi/v/cameo.svg\n- :target: https://pypi.python.org/pypi/cameo\n-.. |License| image:: http://img.shields.io/badge/license-APACHE2-blue.svg\n- :target: http://img.shields.io/badge/license-APACHE2-blue.svg\n-.. |Build Status| image:: https://travis-ci.org/biosustain/cameo.svg?branch=master\n- :target: https://travis-ci.org/biosustain/cameo\n-.. |Coverage Status| image:: https://coveralls.io/repos/biosustain/cameo/badge.svg?branch=devel\n- :target: https://coveralls.io/r/biosustain/cameo?branch=devel\n-.. |DOI| image:: https://zenodo.org/badge/5031/biosustain/cameo.svg\n- :target: https://zenodo.org/badge/latestdoi/5031/biosustain/cameo\n+.. include:: ../README.rst\n+ :start-after: url-marker\n\\ No newline at end of file\n" }, { "change_type": "MODIFY", "old_path": "docs/installation.rst", "new_path": "docs/installation.rst", "diff": "Installation\n============\n+Basic installation\n+==================\n+\n+.. include:: ../README.rst\n+ :start-after: installation-start\n+ :end-before: installation-end\n+\nSetting up a virtual environment first\n======================================\n@@ -9,111 +16,48 @@ We highly recommended installing cameo inside a virtual environment (virtualenv_\nvirtualenvwrapper_ tremendously simplifies using virtualenv_ and can easily\nbe installed using virtualenv-burrito_. Once you installed virtualenv_ and virtualenvwrapper_, run\n-.. code-block:: bash\n+.. code-block:: guess\n$ mkvirtualenv cameo # or whatever you'd like to call your virtual environment\n$ workon cameo\n-and then continue with the installation instructions described below.\n+and then continue with the installation instructions described above.\n-Alternatively you can use ``conda`` if you're already an anaconda user (there is no conda recipe for cameo though so you'll\n+Alternatively you can use ``conda`` if you are an `Anaconda <https://anaconda.org/>`__ user (there is no conda recipe for cameo though so you'll\nstill need to install it using ``pip``). Do the following to create a virtual environment and get some of the heavier dependencies out of the way.\n-.. code-block:: bash\n-\n- $ conda create -y -n cameo3.4 python=3.4 scipy numpy pandas numexpr matplotlib\n-\n-Non-python dependencies\n-=======================\n-\n-Cameo relies on optlang_ to solve optimization problems. Currently, optlang supports either glpk_ (open source) or cplex_\n-(academic licenses available), which are not python tools. 
At least one of them has to be installed before one can proceed\n-with the cameo installation.\n-\n-GLPK\n-----\n-\n-Using cameo with glpk_ also requires swig_ to be installed (in order to generate python bindings).\n-On ubuntu (or other similar linux platforms) we recommend using :code:`apt-get`:\n-\n-.. code-block:: bash\n+.. code-block:: guess\n- $ sudo apt-get install libglpk-dev glpk-utils swig\n+ $ conda create -y -n cameo3.4 python=3.4 lxml scipy pandas numexpr matplotlib\n-On macs we recommend using homebrew_.\n+Then follow the basic installation instructions described above.\n-.. code-block:: bash\n-\n- $ brew install swig\n- $ brew install glpk\n-\n-CPLEX\n------\n-\n-The cplex_ contains a python directory (similar to :code:`IBM/ILOG/CPLEX_Studio1251/cplex/python/x86-64_osx`). Inside\n-this directory run\n-\n-.. code-block:: bash\n+Soft dependencies\n+=================\n- $ python setup.py install\n+The following soft dependencies can be installed all at once using\n-to install the python bindings.\n+.. code-block:: guess\n-Installation\n-============\n+ $ pip install cameo[all]\n-Cameo can be installed using ``pip`` (don't forget to activate your virtual environment in case you created one).\n+or individually by specifying individual categories of dependencies. For example\n-.. code-block:: bash\n+.. code-block:: guess\n- $ pip install cameo\n+ $ pip install cameo[test, sbml, ...]\n-\n-Soft dependencies\n-=================\n-\n-The following soft dependencies can be installed all at once using ``pip install cameo[all]`` or individually\n-by specifying individual categories of dependencies (for example ``pip install cameo[swiglpk, sbml, ...]``).\nThe following categories are available::\n'docs': ['Sphinx>=1.3.5', 'numpydoc>=0.5'],\n- 'swiglpk': ['swiglpk>=1.2.14'],\n'plotly': ['plotly>=1.9.6'],\n- 'bokeh': ['bokeh>=0.11.1'],\n+ 'bokeh': ['bokeh<=0.12.1'],\n'jupyter': ['jupyter>=1.0.0', 'ipywidgets>=4.1.1'],\n- 'test': ['nose>=1.3.7', 'rednose>=0.4.3', 'coverage>=4.0.3'],\n+ 'test': ['pytest', 'pytest-cov'],\n'parallel': ['redis>=2.10.5', 'ipyparallel>=5.0.1'],\n'sbml': ['python-libsbml>=5.13.0', 'lxml>=3.6.0']\n-\n-Development setup\n-=================\n-\n-``pip`` can also be used to install cameo directly from the `github repository <https://github.com/biosustain/cameo>`_.\n-\n-.. code-block:: bash\n-\n- $ pip install -e git+https://github.com/biosustain/cameo.git@devel#egg=cameo\n-\n-Alternatively, you can clone the repository (or your fork) and then run\n-\n-.. code-block:: bash\n-\n- $ pip install -e .\n-\n-within the cameo directory.\n-\n-.. _homebrew: http://brew.sh/\n-.. _swig: http://www.swig.org/\n-.. _glpk: https://www.gnu.org/software/glpk/\n-.. _cplex: http://www-01.ibm.com/software/commerce/optimization/cplex-optimizer/\n.. _optlang: https://github.com/biosustain/optlang\n.. _virtualenv-burrito: https://github.com/brainsik/virtualenv-burrito\n.. _virtualenv: https://pypi.python.org/pypi/virtualenv\n.. _virtualenvwrapper: https://pypi.python.org/pypi/virtualenvwrapper\n\\ No newline at end of file\n-\n-.. _sphinx: https://pypi.python.org/pypi/sphinx\n-.. _numpydoc: https://pypi.python.org/pypi/numpydoc\n-.. _nose: https://pypi.python.org/pypi/nose/\n-.. _rednose: https://pypi.python.org/pypi/rednose\n-.. 
_codecov: https://pypi.python.org/pypi/codecov\n" }, { "change_type": "MODIFY", "old_path": "docs/tutorials.rst", "new_path": "docs/tutorials.rst", "diff": "@@ -15,8 +15,6 @@ Furthermore, `course materials`_ are available for a 2-day course in cell factor\n07-predict-heterologous-pathways\n08-high-level-API\n09-vanillin-production\n- 11-multiprocess\n- 12-advanced-usage-of-heuristic-optimization.rst\n.. _try.cameo.bio: http://try.cameo.bio\n" } ]
language: Python
license: Apache License 2.0
repo: biosustain/cameo
original_message: doc: clean up docs and integrate README.rst and CONTRIBUTING.rst
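The docs/development.rst hunk in this record reflows cameo's flux-ratio example. Pulled out of the diff as a self-contained sketch; model loading follows cameo's own docstring, and the reaction ids assume the bundled E. coli core model:

    from cameo import load_model

    # E. coli core model shipped with cameo (same model used in cameo's docstrings).
    model = load_model('EcoliCore.xml')

    # Constrain the flux split between glycolysis (PGI) and the pentose
    # phosphate pathway (G6PDH2r) to a fixed 5:1 ratio, exactly as in the
    # development.rst example above.
    reaction1 = model.reactions.PGI
    reaction2 = model.reactions.G6PDH2r
    ratio = 5
    flux_ratio_constraint = model.solver.interface.Constraint(
        reaction1.flux_expression - ratio * reaction2.flux_expression,
        lb=0,
        ub=0)
    model.solver.add(flux_ratio_constraint)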

author: 89,741
date: 31.03.2017 15:05:17
timezone: -7,200
hash: ad8aa45bd5331147916fcc745cd4200a3821793e
message: fix: deploy docs even if it is not a tagged commit
mods:
[ { "change_type": "MODIFY", "old_path": ".travis.yml", "new_path": ".travis.yml", "diff": "@@ -38,7 +38,7 @@ before_install:\n- 'echo \"this is a build for: $TRAVIS_BRANCH\"'\n- 'if [[ \"$TRAVIS_BRANCH\" != \"devel\" ]]; then bash ./.travis/install_cplex.sh; fi'\ninstall:\n-- pip install flake8 numpy scipy pyzmq pandas pytest pytest-cov travis-sphinx\n+- pip install flake8 numpy scipy pyzmq pandas pytest pytest-cov\n- pip install .[swiglpk,test,parallel,cli,doc]\nbefore_script:\n- if [[ $TRAVIS_PYTHON_VERSION == \"3.6\" ]]; then flake8 .; fi\n@@ -77,10 +77,10 @@ deploy:\ngithub_token: $GH_TOKEN # Set in travis-ci.org dashboard\ntarget-branch: gh-pages\non:\n- tags: true\nbranch:\n- master\n- devel\n+ - feat-use-travis-gh-pages-deploy\ncondition: $TRAVIS_PYTHON_VERSION == \"3.6\"\nrepo: biosustain/cameo\n# - provider: releases\n" } ]
language: Python
license: Apache License 2.0
repo: biosustain/cameo
original_message: fix: deploy docs even if it is not a tagged commit (#150)

author: 89,741
date: 31.03.2017 18:35:36
timezone: -7,200
hash: f5d489b991fe1874b472e78a2d17c4a7bd36c8aa
message: fix: return to build dir on travis after building docs
mods:
[ { "change_type": "MODIFY", "old_path": ".travis.yml", "new_path": ".travis.yml", "diff": "@@ -47,6 +47,7 @@ script:\nafter_success:\n- codecov\n- if [[ $TRAVIS_PYTHON_VERSION == \"3.6\" ]]; then cd docs && make html && echo \"cameo.bio\" > _build/html/CNAME; fi\n+ - cd $TRAVIS_BUILD_DIR\nnotifications:\nslack:\nrooms:\n@@ -77,7 +78,6 @@ deploy:\ngithub_token: $GH_TOKEN # Set in travis-ci.org dashboard\ntarget-branch: gh-pages\non:\n- tags: true\nbranch:\n- master\n- devel\n" }, { "change_type": "MODIFY", "old_path": "README.rst", "new_path": "README.rst", "diff": "@@ -51,7 +51,7 @@ for further details.\nDocumentation and Examples\n~~~~~~~~~~~~~~~~~~~~~~~~~~\n-Documentation is available on `http://cameo.bio`__. Numerous `Jupyter notebooks <http://nbviewer.ipython.org/github/biosustain/cameo-notebooks/tree/master/>`__\n+Documentation is available on `cameo.bio <http://cameo.bio>`__. Numerous `Jupyter notebooks <http://nbviewer.ipython.org/github/biosustain/cameo-notebooks/tree/master/>`__\nprovide examples and tutorials and also form part of the documentation. They are also availabe in executable form on (`try.cameo.bio <http://try.cameo.bio>`__).\nFurthermore, course materials for a two day cell factory engineering course are available `here <https://biosustain.github.io/cell-factory-design-course/>`__.\n" } ]
language: Python
license: Apache License 2.0
repo: biosustain/cameo
original_message: fix: return to build dir on travis after building docs

author: 89,741
date: 31.03.2017 19:15:36
timezone: -7,200
hash: 9d1705f6121c139e78ceb9bb284e7bdd37347d33
message: fix: diagnose problem with gh-pages deployment
mods:
[ { "change_type": "MODIFY", "old_path": ".travis.yml", "new_path": ".travis.yml", "diff": "@@ -73,7 +73,7 @@ deploy:\nrepo: biosustain/cameo\ndocs_dir: docs/_build/html\n- provider: pages\n- local_dir: docs/_build/html\n+ local_dir: docs\nskip_cleanup: true\ngithub_token: $GH_TOKEN # Set in travis-ci.org dashboard\ntarget-branch: gh-pages\n" } ]
language: Python
license: Apache License 2.0
repo: biosustain/cameo
original_message: fix: diagnose problem with gh-pages deployment

author: 89,741
date: 31.03.2017 19:26:44
timezone: -7,200
hash: 6f9887764aef7b37f391dbc8e7fbc330732b4ace
message: fix: build docs right before deployment
mods:
[ { "change_type": "MODIFY", "old_path": ".travis.yml", "new_path": ".travis.yml", "diff": "@@ -46,8 +46,6 @@ script:\n- pytest -v -rsx --cov --cov-report=xml\nafter_success:\n- codecov\n- - if [[ $TRAVIS_PYTHON_VERSION == \"3.6\" ]]; then cd docs && make html && echo \"cameo.bio\" > _build/html/CNAME; fi\n- - cd $TRAVIS_BUILD_DIR\nnotifications:\nslack:\nrooms:\n@@ -57,6 +55,8 @@ notifications:\nbefore_deploy:\n- pip install twine\n- python setup.py sdist bdist_wheel\n+ - if [[ $TRAVIS_PYTHON_VERSION == \"3.6\" ]]; then cd docs && make html && echo \"cameo.bio\" > _build/html/CNAME; fi\n+ - cd $TRAVIS_BUILD_DIR\nenv:\nglobal:\nsecure: QgrOXEgpcH6xgToVfWIX6j6CPvycKMPtNnoYAxPrZjkMzd2aCHHeokv0FZkCn3uePO0I8W8TkKBxilGZbWYoseDq+Snds18sBTG9u2NHvYHnDQb4Oki7+NoxhlnGIOj/8ADONOpc0n7PyFDPK8zmKVZvv9p78OHZO5CmV/ktOeg=\n" } ]
language: Python
license: Apache License 2.0
repo: biosustain/cameo
original_message: fix: build docs right before deployment

author: 89,741
date: 31.03.2017 19:29:38
timezone: -7,200
hash: bcaa1b4eecd18145e275bebf3dbb9abdc158b7b1
message: fix: install sphinx to fix deployment
mods:
[ { "change_type": "MODIFY", "old_path": ".travis.yml", "new_path": ".travis.yml", "diff": "@@ -39,7 +39,7 @@ before_install:\n- 'if [[ \"$TRAVIS_BRANCH\" != \"devel\" ]]; then bash ./.travis/install_cplex.sh; fi'\ninstall:\n- pip install flake8 numpy scipy pyzmq pandas pytest pytest-cov\n-- pip install .[swiglpk,test,parallel,cli,doc]\n+- pip install .[test,paralleldocs]\nbefore_script:\n- if [[ $TRAVIS_PYTHON_VERSION == \"3.6\" ]]; then flake8 .; fi\nscript:\n@@ -73,7 +73,7 @@ deploy:\nrepo: biosustain/cameo\ndocs_dir: docs/_build/html\n- provider: pages\n- local_dir: docs\n+ local_dir: docs/_build/html\nskip_cleanup: true\ngithub_token: $GH_TOKEN # Set in travis-ci.org dashboard\ntarget-branch: gh-pages\n@@ -81,7 +81,6 @@ deploy:\nbranch:\n- master\n- devel\n- - feat-use-travis-gh-pages-deploy\ncondition: $TRAVIS_PYTHON_VERSION == \"3.6\"\nrepo: biosustain/cameo\n# - provider: releases\n" } ]
language: Python
license: Apache License 2.0
repo: biosustain/cameo
original_message: fix: install sphinx to fix deployment

author: 89,741
date: 31.03.2017 19:42:48
timezone: -7,200
hash: 7a67a2b7e77d52b52693f7070ba1708a78bb85cf
message: chore: set CNAME via provider
mods:
[ { "change_type": "MODIFY", "old_path": ".travis.yml", "new_path": ".travis.yml", "diff": "@@ -39,7 +39,7 @@ before_install:\n- 'if [[ \"$TRAVIS_BRANCH\" != \"devel\" ]]; then bash ./.travis/install_cplex.sh; fi'\ninstall:\n- pip install flake8 numpy scipy pyzmq pandas pytest pytest-cov\n-- pip install .[test,paralleldocs]\n+- pip install .[test,parallel,docs]\nbefore_script:\n- if [[ $TRAVIS_PYTHON_VERSION == \"3.6\" ]]; then flake8 .; fi\nscript:\n@@ -55,7 +55,7 @@ notifications:\nbefore_deploy:\n- pip install twine\n- python setup.py sdist bdist_wheel\n- - if [[ $TRAVIS_PYTHON_VERSION == \"3.6\" ]]; then cd docs && make html && echo \"cameo.bio\" > _build/html/CNAME; fi\n+ - if [[ $TRAVIS_PYTHON_VERSION == \"3.6\" ]]; then cd docs && make html; fi\n- cd $TRAVIS_BUILD_DIR\nenv:\nglobal:\n@@ -73,6 +73,7 @@ deploy:\nrepo: biosustain/cameo\ndocs_dir: docs/_build/html\n- provider: pages\n+ fqdn: cameo.bio\nlocal_dir: docs/_build/html\nskip_cleanup: true\ngithub_token: $GH_TOKEN # Set in travis-ci.org dashboard\n" } ]
language: Python
license: Apache License 2.0
repo: biosustain/cameo
original_message: chore: set CNAME via provider

author: 89,741
date: 31.03.2017 20:01:23
timezone: -7,200
hash: 387524e3b50cead585b6cee52f99d517f7a4d7d7
message: fix: create .nojekyll file in docs
mods:
[ { "change_type": "MODIFY", "old_path": ".travis.yml", "new_path": ".travis.yml", "diff": "@@ -55,7 +55,7 @@ notifications:\nbefore_deploy:\n- pip install twine\n- python setup.py sdist bdist_wheel\n- - if [[ $TRAVIS_PYTHON_VERSION == \"3.6\" ]]; then cd docs && make html; fi\n+ - if [[ $TRAVIS_PYTHON_VERSION == \"3.6\" ]]; then cd docs && make html && touch _build/html/.nojekyll; fi\n- cd $TRAVIS_BUILD_DIR\nenv:\nglobal:\n" } ]
language: Python
license: Apache License 2.0
repo: biosustain/cameo
original_message: fix: create .nojekyll file in docs

author: 89,737
date: 03.04.2017 11:20:05
timezone: -7,200
hash: 6ac3a268b382c3df15469da7db0a9a58fd9b63cb
message: feat: add a time limit to heuristic optimization Also added fix documentation and unit tests.
mods:
[ { "change_type": "MODIFY", "old_path": "cameo/api/designer.py", "new_path": "cameo/api/designer.py", "diff": "@@ -74,9 +74,11 @@ class _OptimizationRunner(object):\nclass _OptGeneRunner(_OptimizationRunner):\ndef __call__(self, strategy):\nmax_evaluations = 15000\n+ max_time = (45, 0)\nif self.debug:\nmax_evaluations = 1000\n+ max_time = (5, 0)\n(model, pathway, aerobic) = (strategy[1], strategy[2], strategy[3])\nmodel = model.copy()\n@@ -91,7 +93,7 @@ class _OptGeneRunner(_OptimizationRunner):\nmodel.objective = model.biomass\nopt_gene = OptGene(model=model, plot=False)\ndesigns = opt_gene.run(target=pathway.product.id, biomass=model.biomass, substrate=model.carbon_source,\n- max_evaluations=max_evaluations, max_knockouts=15)\n+ max_evaluations=max_evaluations, max_knockouts=15, max_time=max_time)\nreturn designs\n" }, { "change_type": "MODIFY", "old_path": "cameo/strain_design/heuristic/evolutionary/optimization.py", "new_path": "cameo/strain_design/heuristic/evolutionary/optimization.py", "diff": "from __future__ import absolute_import, print_function\n+import collections\nimport logging\nimport time\nimport types\n@@ -194,7 +195,30 @@ class HeuristicOptimization(object):\nraise TypeError(\"single objective heuristics do not support multiple objective functions\")\nself._heuristic_method = heuristic_method(self.random)\n- def run(self, evaluator=None, generator=None, view=config.default_view, maximize=True, **kwargs):\n+ def run(self, evaluator=None, generator=None, view=config.default_view, maximize=True, max_time=None, **kwargs):\n+ \"\"\"\n+ Runs the evolutionary algorithm.\n+\n+ Parameters\n+ ----------\n+ evaluator : function\n+ A function that evaluates candidates.\n+ generator : function\n+ A function that yields candidates.\n+ view : cameo.parallel.SequentialView, cameo.parallel.MultiprocessingView\n+ A view for single or multiprocessing.\n+ maximize : bool\n+ The sense of the optimization algorithm.\n+ max_time : tuple\n+ A tuple with (minutes, seconds) or (hours, minutes, seconds)\n+ kwargs : dict\n+ See inspyred documentation for more information.\n+\n+ Returns\n+ -------\n+ list\n+ A list of individuals from the last iteration.\n+ \"\"\"\nif isinstance(self.heuristic_method.archiver, archives.BestSolutionArchive):\nself.heuristic_method.archiver.reset()\n@@ -205,7 +229,21 @@ class HeuristicOptimization(object):\nfor observer in self.observers:\nobserver.reset()\n+\nt = time.time()\n+\n+ if max_time is not None:\n+ terminator = self.heuristic_method.terminator\n+ if isinstance(terminator, collections.Iterable):\n+ terminator = list(terminator)\n+ terminator.append(inspyred.ec.terminators.time_termination)\n+ else:\n+ terminator = [terminator, inspyred.ec.terminators.time_termination]\n+\n+ self.heuristic_method.terminator = terminator\n+ kwargs['start_time'] = t\n+ kwargs['max_time'] = max_time\n+\nprint(time.strftime(\"Starting optimization at %a, %d %b %Y %H:%M:%S\", time.localtime(t)))\nres = self.heuristic_method.evolve(generator=generator,\nmaximize=maximize,\n@@ -260,7 +298,9 @@ class TargetOptimization(HeuristicOptimization):\nAttributes\n----------\nsimulation_method : see flux_analysis.simulation\n- wt_reference: dict\n+ The method used to simulate the model.\n+ wt_reference : dict, cameo.flux_analysis.simulation.FluxDistributionResult\n+ A dict (dict-like) object with flux values from a reference state.\nsimulation_method : method\nthe simulation method to use for evaluating results\nevaluator : TargetEvaluator\n@@ -346,17 +386,26 @@ class 
TargetOptimization(HeuristicOptimization):\nif self.progress:\nself.observers.append(observers.ProgressObserver())\n- def run(self, view=config.default_view, max_size=10, variable_size=True, diversify=False, **kwargs):\n+ def run(self, max_size=10, variable_size=True, diversify=False, view=config.default_view, **kwargs):\n\"\"\"\n+ Runs the evolutionary algorithm.\n+\nParameters\n----------\nmax_size : int\n- Maximum size of a solution, e.g., the maximum number of reactions or genes to knock-out or swap\n+ Maximum size of a solution, e.g., the maximum number of reactions or genes to knock-out or swap.\nvariable_size : boolean\nIf true, the solution size can change meaning that the combination of knockouts can have different sizes up\nto max_size. Otherwise it only produces knockout solutions with a fixed number of knockouts.\ndiversify : bool\nIt true, the generator will not be allowed to generate repeated candidates in the initial population.\n+ view : cameo.parallel.SequentialView, cameo.parallel.MultiprocessingView\n+ A view for single or multiprocessing.\n+\n+ Returns\n+ -------\n+ TargetOptimizationResult\n+ The result of the optimization.\n\"\"\"\nif kwargs.get('seed', None) is None:\n" }, { "change_type": "MODIFY", "old_path": "tests/test_strain_design_heuristics.py", "new_path": "tests/test_strain_design_heuristics.py", "diff": "@@ -16,6 +16,7 @@ from __future__ import absolute_import, print_function\nimport os\nimport pickle\n+import time\nfrom collections import namedtuple\nfrom math import sqrt\nfrom tempfile import mkstemp\n@@ -27,6 +28,7 @@ import six\nfrom inspyred.ec import Bounder\nfrom inspyred.ec.emo import Pareto\nfrom ordered_set import OrderedSet\n+from six.moves import range\nfrom cameo import config, fba\nfrom cameo.core.manipulation import swap_cofactors\n@@ -67,7 +69,6 @@ from cameo.strain_design.heuristic.evolutionary.variators import (_do_set_n_poin\nset_n_point_crossover)\nfrom cameo.util import RandomGenerator as Random\nfrom cameo.util import TimeMachine\n-from six.moves import range\ntry:\nfrom cameo.parallel import RedisQueue\n@@ -934,6 +935,20 @@ class TestReactionKnockoutOptimization:\nassert results.seed == expected_results.seed\n+ def test_run_with_time_limit(self, model):\n+ # TODO: make optlang deterministic so this results can be permanently stored.\n+ objective = biomass_product_coupled_yield(\n+ \"Biomass_Ecoli_core_N_lp_w_fsh_GAM_rp__Nmet2\", \"EX_ac_lp_e_rp_\", \"EX_glc_lp_e_rp_\")\n+\n+ rko = ReactionKnockoutOptimization(model=model,\n+ simulation_method=fba,\n+ objective_function=objective)\n+ start_time = time.time()\n+ rko.run(max_evaluations=3000000, pop_size=10, view=SequentialView(), seed=SEED, max_time=(1, 0))\n+ elapsed_time = time.time() - start_time\n+\n+ assert elapsed_time < 1.25 * 60\n+\ndef test_run_multi_objective(self, model):\n# TODO: make optlang deterministic so this results can be permanently stored.\n_, result_file = mkstemp('.pkl')\n" } ]
language: Python
license: Apache License 2.0
repo: biosustain/cameo
original_message: feat: add a time limit to heuristic optimization Also added fix documentation and unit tests.
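The max_time option added in this commit is exercised by the new test_run_with_time_limit test. A condensed sketch of the same call, assuming the module paths, model, and objective ids used by cameo's test suite at the time:

    from cameo import fba, load_model
    from cameo.parallel import SequentialView
    from cameo.strain_design.heuristic.evolutionary.objective_functions import (
        biomass_product_coupled_yield)
    from cameo.strain_design.heuristic.evolutionary.optimization import (
        ReactionKnockoutOptimization)

    model = load_model('EcoliCore.xml')
    objective = biomass_product_coupled_yield(
        "Biomass_Ecoli_core_N_lp_w_fsh_GAM_rp__Nmet2", "EX_ac_lp_e_rp_", "EX_glc_lp_e_rp_")
    rko = ReactionKnockoutOptimization(model=model, simulation_method=fba,
                                       objective_function=objective)

    # max_time is a (minutes, seconds) tuple; inspyred's time_termination
    # terminator stops the run after roughly one minute even though
    # max_evaluations would allow a much longer search.
    results = rko.run(max_evaluations=3000000, pop_size=10,
                      view=SequentialView(), max_time=(1, 0))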

author: 89,733
date: 11.04.2017 15:14:30
timezone: -7,200
hash: db8da3430741ec0fa6b5fdd719429c2766a4ad35
message: fix: remove duplicated target entries And unit tests.
mods:
[ { "change_type": "MODIFY", "old_path": "cameo/strain_design/heuristic/evolutionary/optimization.py", "new_path": "cameo/strain_design/heuristic/evolutionary/optimization.py", "diff": "@@ -625,9 +625,9 @@ class TargetOptimizationResult(Result):\nfor index, solution in enumerate(solutions):\ntargets = self._decoder(solution.candidate, flat=True)\nif len(targets) > 0:\n- decoded_solutions.loc[index] = [targets, solution.fitness]\n+ decoded_solutions.loc[index] = [tuple(targets), solution.fitness]\n- decoded_solutions.drop_duplicates(inplace=True)\n+ decoded_solutions.drop_duplicates(inplace=True, subset=\"targets\")\ndecoded_solutions.reset_index(inplace=True)\nreturn decoded_solutions\n" }, { "change_type": "MODIFY", "old_path": "tests/test_strain_design_heuristics.py", "new_path": "tests/test_strain_design_heuristics.py", "diff": "@@ -924,6 +924,9 @@ class TestReactionKnockoutOptimization:\nresults = rko.run(max_evaluations=3000, pop_size=10, view=SequentialView(), seed=SEED)\n+ assert len(results.data_frame.targets) > 0\n+ assert len(results.data_frame.targets) == len(results.data_frame.targets.apply(tuple).unique())\n+\nwith open(result_file, 'wb') as in_file:\npickle.dump(results, in_file)\n@@ -943,6 +946,7 @@ class TestReactionKnockoutOptimization:\nrko = ReactionKnockoutOptimization(model=model,\nsimulation_method=fba,\nobjective_function=objective)\n+\nstart_time = time.time()\nrko.run(max_evaluations=3000000, pop_size=10, view=SequentialView(), seed=SEED, max_time=(1, 0))\nelapsed_time = time.time() - start_time\n@@ -968,6 +972,8 @@ class TestReactionKnockoutOptimization:\nresults = rko.run(max_evaluations=3000, pop_size=10, view=SequentialView(), seed=SEED)\n+ assert len(results.data_frame.targets) == len(results.data_frame.targets.apply(tuple).unique())\n+\nwith open(result_file, 'wb') as in_file:\npickle.dump(results, in_file)\n@@ -1005,6 +1011,8 @@ class TestGeneKnockoutOptimization:\nresults = rko.run(max_evaluations=3000, pop_size=10, view=SequentialView(), seed=SEED)\n+ assert len(results.data_frame.targets) == len(results.data_frame.targets.apply(tuple).unique())\n+\nwith open(result_file, 'wb') as in_file:\npickle.dump(results, in_file)\n@@ -1032,6 +1040,8 @@ class TestGeneKnockoutOptimization:\nresults = rko.run(max_evaluations=3000, pop_size=10, view=SequentialView(), seed=SEED)\n+ assert len(results.data_frame.targets) == len(results.data_frame.targets.apply(tuple).unique())\n+\nwith open(result_file, 'wb') as in_file:\npickle.dump(results, in_file)\n@@ -1043,6 +1053,21 @@ class TestGeneKnockoutOptimization:\nassert results.seed == expected_results.seed\n+ def test_run_with_time_limit(self, model):\n+ # TODO: make optlang deterministic so this results can be permanently stored.\n+ objective = biomass_product_coupled_yield(\n+ \"Biomass_Ecoli_core_N_lp_w_fsh_GAM_rp__Nmet2\", \"EX_ac_lp_e_rp_\", \"EX_glc_lp_e_rp_\")\n+\n+ rko = ReactionKnockoutOptimization(model=model,\n+ simulation_method=fba,\n+ objective_function=objective)\n+\n+ start_time = time.time()\n+ rko.run(max_evaluations=3000000, pop_size=10, view=SequentialView(), seed=SEED, max_time=(1, 0))\n+ elapsed_time = time.time() - start_time\n+\n+ assert elapsed_time < 1.25 * 60\n+\nclass TestVariator:\ndef test_set_n_point_crossover(self):\n" } ]
language: Python
license: Apache License 2.0
repo: biosustain/cameo
original_message: fix: remove duplicated target entries (#155) And unit tests.
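The fix works because pandas can only de-duplicate hashable column values: a list of knockout targets is unhashable, while a tuple is. A small standalone pandas illustration with toy data (not taken from the dataset):

    import pandas as pd

    # Toy frame mimicking TargetOptimizationResult's decoded solutions.
    df = pd.DataFrame({
        "targets": [("PGI", "PGK"), ("PGK", "G6PDH2r"), ("PGI", "PGK")],
        "fitness": [0.4, 0.3, 0.4],
    })

    # With tuples the duplicate row is dropped; with lists in the column,
    # drop_duplicates would raise "TypeError: unhashable type: 'list'".
    deduped = df.drop_duplicates(subset="targets")
    print(deduped)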

author: 89,737
date: 24.05.2017 14:56:02
timezone: -7,200
hash: 1cfa12ff020cce0cc914a5e49d63b581a60e65d9
message: add make api
mods:
[ { "change_type": "MODIFY", "old_path": ".travis.yml", "new_path": ".travis.yml", "diff": "@@ -55,7 +55,7 @@ notifications:\nbefore_deploy:\n- pip install twine\n- python setup.py sdist bdist_wheel\n- - if [[ $TRAVIS_PYTHON_VERSION == \"3.6\" ]]; then cd docs && make html && touch _build/html/.nojekyll; fi\n+ - if [[ $TRAVIS_PYTHON_VERSION == \"3.6\" ]]; then cd docs && make apidoc && make html && touch _build/html/.nojekyll; fi\n- cd $TRAVIS_BUILD_DIR\nenv:\nglobal:\n" } ]
language: Python
license: Apache License 2.0
repo: biosustain/cameo
original_message: add make api

author: 89,734
date: 29.05.2017 13:09:21
timezone: -7,200
hash: 6d409284d407ea60054e1d8d97ec1ed27f89e905
message: feat: move travis testing to being tox based
mods:
[ { "change_type": "ADD", "old_path": null, "new_path": ".tox_install.sh", "diff": "+#!/usr/bin/env bash\n+\n+pip \"$@\"\n+\n+# On Travis CI quickly install python-libsbml wheels.\n+if [[ -n \"${CI}\" && \"${TOXENV}\" != \"flake8\" ]]; then\n+ pip install --find-links https://s3.eu-central-1.amazonaws.com/moonlight-science/wheelhouse/index.html --no-index python-libsbml==5.12.1\n+fi\n" }, { "change_type": "MODIFY", "old_path": ".travis.yml", "new_path": ".travis.yml", "diff": "@@ -2,14 +2,24 @@ language: python\nsudo: required\ndist: trusty\n-python:\n- - '2.7'\n- - '3.4'\n- - '3.5'\n- - '3.6'\n-\nmatrix:\nfast_finish: true\n+ include:\n+ - python: '3.5'\n+ env:\n+ - TOXENV=flake8\n+ - python: '2.7'\n+ env:\n+ - TOXENV=py27\n+ - python: '3.4'\n+ env:\n+ - TOXENV=py34\n+ - python: '3.5'\n+ env:\n+ - TOXENV=py35\n+ - python: '3.6'\n+ env:\n+ - TOXENV=py36\nbranches:\nonly:\n@@ -20,8 +30,10 @@ branches:\ncache:\n- pip: true\n+\nservices:\n- redis-server\n+\naddons:\napt:\npackages:\n@@ -32,20 +44,18 @@ addons:\n- glpk-utils\n- pandoc\n- openbabel\n+\nbefore_install:\n-- pip install pip --upgrade\n-- pip install codecov\n+- travis_retry pip install --upgrade pip setuptools wheel tox\n- 'echo \"this is a build for: $TRAVIS_BRANCH\"'\n- 'if [[ \"$TRAVIS_BRANCH\" != \"devel\" ]]; then bash ./.travis/install_cplex.sh; fi'\n+\ninstall:\n-- pip install flake8 numpy scipy pyzmq pandas pytest pytest-cov\n- pip install .[test,parallel,docs]\n-before_script:\n-- if [[ $TRAVIS_PYTHON_VERSION == \"3.6\" ]]; then flake8 .; fi\n+\nscript:\n- - pytest -v -rsx --cov --cov-report=xml\n-after_success:\n- - codecov\n+ - tox\n+\nnotifications:\nslack:\nrooms:\n@@ -55,7 +65,9 @@ notifications:\nbefore_deploy:\n- pip install twine\n- python setup.py sdist bdist_wheel\n- - if [[ $TRAVIS_PYTHON_VERSION == \"3.6\" ]]; then cd docs && make apidoc && make html && touch _build/html/.nojekyll; fi\n+ - if [[ $TRAVIS_PYTHON_VERSION == \"3.6\" ]]; then\n+ cd docs && make apidoc && make html && touch _build/html/.nojekyll;\n+ fi\n- cd $TRAVIS_BUILD_DIR\nenv:\nglobal:\n" }, { "change_type": "MODIFY", "old_path": "tox.ini", "new_path": "tox.ini", "diff": "envlist = flake8, py27, py34, py35, py36\n[testenv]\n+passenv =\n+ CI\n+ TRAVIS\n+ TRAVIS_*\ndeps =\n- -r{toxinidir}/requirements_dev.txt\n-commands =\npytest\n+ pytest-cov\n+ codecov\n+ redis\n+ ipyparallel\n+whitelist_externals =\n+ obabel\n+install_command = ./.tox_install.sh install {opts} {packages}\n+commands =\n+ pytest -v -rsx --cov --cov-report=xml\n+ - codecov\n[testenv:flake8]\nskip_install = True\n" } ]
language: Python
license: Apache License 2.0
repo: biosustain/cameo
original_message: feat: move travis testing to being tox based

author: 89,741
date: 06.06.2017 16:46:27
timezone: -7,200
hash: aaa2e3c9f621563232fadccdf678b1822071cb25
message: fix: add blank line after markup to avoid rst warning Thank you for spotting this.
mods:
[ { "change_type": "MODIFY", "old_path": "README.rst", "new_path": "README.rst", "diff": "@@ -56,6 +56,7 @@ provide examples and tutorials and also form part of the documentation. They are\nFurthermore, course materials for a two day cell factory engineering course are available `here <https://biosustain.github.io/cell-factory-design-course/>`__.\n.. showcase-start\n+\nHigh-level API (for users)\n^^^^^^^^^^^^^^^^^^^^^^^^^^\n@@ -136,4 +137,3 @@ Contributions\n:target: https://coveralls.io/r/biosustain/cameo?branch=devel\n.. |DOI| image:: https://zenodo.org/badge/5031/biosustain/cameo.svg\n:target: https://zenodo.org/badge/latestdoi/5031/biosustain/cameo\n\\ No newline at end of file\n-\n" } ]
language: Python
license: Apache License 2.0
repo: biosustain/cameo
original_message: fix: add blank line after markup to avoid rst warning (#171) Thank you for spotting this.

author: 89,735
date: 20.03.2017 12:49:08
timezone: -3,600
hash: 8e0ae78b0b5664d83c283432e1449c06c5715410
message: refactor: remove gene/metabolite id Moved to cobra.core
mods:
[ { "change_type": "MODIFY", "old_path": "cameo/core/gene.py", "new_path": "cameo/core/gene.py", "diff": "@@ -41,25 +41,25 @@ class Gene(cobra.core.Gene):\nif model is not None:\nnew_gene._model = model\nreturn new_gene\n-\n- @property\n- def id(self):\n- return getattr(self, \"_id\", None) # Returns None if _id is not set\n-\n- @id.setter\n- def id(self, value):\n- if value == self.id:\n- pass\n- elif not isinstance(value, six.string_types):\n- raise TypeError(\"ID must be a string\")\n- elif getattr(self, \"_model\", None) is not None: # (= if hasattr(self, \"_model\") and self._model is not None)\n- if value in self.model.genes:\n- raise ValueError(\"The model already contains a gene with the id:\", value)\n-\n- self._id = value\n- self.model.genes._generate_index()\n- else:\n- self._id = value\n+ #\n+ # @property\n+ # def id(self):\n+ # return getattr(self, \"_id\", None) # Returns None if _id is not set\n+ #\n+ # @id.setter\n+ # def id(self, value):\n+ # if value == self.id:\n+ # pass\n+ # elif not isinstance(value, six.string_types):\n+ # raise TypeError(\"ID must be a string\")\n+ # elif getattr(self, \"_model\", None) is not None: # (= if hasattr(self, \"_model\") and self._model is not None)\n+ # if value in self.model.genes:\n+ # raise ValueError(\"The model already contains a gene with the id:\", value)\n+ #\n+ # self._id = value\n+ # self.model.genes._generate_index()\n+ # else:\n+ # self._id = value\ndef knock_out(self, time_machine=None):\n\"\"\"Knockout gene by marking as non-functional and set all its affected reactions' bounds to zero\n" }, { "change_type": "MODIFY", "old_path": "cameo/core/metabolite.py", "new_path": "cameo/core/metabolite.py", "diff": "@@ -120,25 +120,25 @@ class Metabolite(cobra.core.Metabolite):\nelse:\nself._relax_mass_balance_constrain(time_machine)\n- @property\n- def id(self):\n- return getattr(self, \"_id\", None) # Returns None if _id is not set\n-\n- @id.setter\n- def id(self, value):\n- if value == self.id:\n- pass\n- elif not isinstance(value, six.string_types):\n- raise TypeError(\"ID must be a string\")\n- elif getattr(self, \"_model\", None) is not None: # (= if hasattr(self, \"_model\") and self._model is not None)\n- if value in self.model.metabolites:\n- raise ValueError(\"The model already contains a metabolite with the id:\", value)\n- self.model.solver.constraints[self.id].name = value\n-\n- self._id = value\n- self.model.metabolites._generate_index()\n- else:\n- self._id = value\n+ # @property\n+ # def id(self):\n+ # return getattr(self, \"_id\", None) # Returns None if _id is not set\n+ #\n+ # @id.setter\n+ # def id(self, value):\n+ # if value == self.id:\n+ # pass\n+ # elif not isinstance(value, six.string_types):\n+ # raise TypeError(\"ID must be a string\")\n+ # elif getattr(self, \"_model\", None) is not None: # (= if hasattr(self, \"_model\") and self._model is not None)\n+ # if value in self.model.metabolites:\n+ # raise ValueError(\"The model already contains a metabolite with the id:\", value)\n+ # self.model.solver.constraints[self.id].name = value\n+ #\n+ # self._id = value\n+ # self.model.metabolites._generate_index()\n+ # else:\n+ # self._id = value\ndef _repr_html_(self):\nreturn \"\"\"\n" }, { "change_type": "MODIFY", "old_path": "cameo/flux_analysis/analysis.py", "new_path": "cameo/flux_analysis/analysis.py", "diff": "@@ -492,9 +492,9 @@ class _PhenotypicPhasePlaneChunkEvaluator(object):\nfloat\ngram product per 1 g of feeding source or nan if more than one product or feeding source\n\"\"\"\n- too_long = 
(len(self.source.metabolites) > 1,\n- len(self.product_reaction.metabolites) > 1)\n- if any(too_long):\n+ not_unique = (len(self.source.metabolites) != 1,\n+ len(self.product_reaction.metabolites) != 1)\n+ if any(not_unique):\nreturn numpy.nan\ntry:\nsource_flux = self.source.flux\n" } ]
language: Python
license: Apache License 2.0
repo: biosustain/cameo
original_message: refactor: remove gene/metabolite id Moved to cobra.core

author: 89,735
date: 20.03.2017 13:28:12
timezone: -3,600
hash: 32d28531add11a31b1223d9404159f4dcd9d34ec
message: refactor: remove reaction bound setters Moved cobra.core.Reaction
mods:
[ { "change_type": "MODIFY", "old_path": "cameo/core/reaction.py", "new_path": "cameo/core/reaction.py", "diff": "@@ -106,203 +106,204 @@ class Reaction(_cobrapy.core.Reaction):\nname : str, optional\nThe name of the reaction.\n\"\"\"\n- super(Reaction, self).__init__(id=id, name=name, subsystem=subsystem)\n- self._lower_bound = lower_bound\n- self._upper_bound = upper_bound\n- self._model = None\n- self._reverse_variable = None\n- self._forward_variable = None\n+ super(Reaction, self).__init__(id=id, name=name, subsystem=subsystem,\n+ lower_bound=lower_bound, upper_bound=upper_bound)\n+ # self._lower_bound = lower_bound\n+ # self._upper_bound = upper_bound\n+ # self._model = None\n+ # self._reverse_variable = None\n+ # self._forward_variable = None\ndef __str__(self):\nreturn ''.join((self.id, \": \", self.build_reaction_string()))\n- @_cobrapy.core.Reaction.gene_reaction_rule.setter\n- def gene_reaction_rule(self, rule):\n- _cobrapy.core.Reaction.gene_reaction_rule.fset(self, rule)\n- self._clone_genes(self.model)\n+ # @_cobrapy.core.Reaction.gene_reaction_rule.setter\n+ # def gene_reaction_rule(self, rule):\n+ # _cobrapy.core.Reaction.gene_reaction_rule.fset(self, rule)\n+ # self._clone_genes(self.model)\n- @property\n- def reversibility(self):\n- return self._lower_bound < 0 < self._upper_bound\n-\n- @property\n- def flux_expression(self):\n- \"\"\"An optlang variable representing the forward flux (if associated with model), otherwise None.\n- Representing the net flux if model.reversible_encoding == 'unsplit'\"\"\"\n- model = self.model\n- if model is not None:\n- return 1. * self.forward_variable - 1. * self.reverse_variable\n- else:\n- return None\n-\n- @property\n- def forward_variable(self):\n- \"\"\"An optlang variable representing the forward flux (if associated with model), otherwise None.\"\"\"\n- model = self.model\n- if model is not None:\n- if self._forward_variable is None:\n- self._forward_variable = model.solver.variables[self.id]\n- assert self._forward_variable.problem is self.model.solver\n-\n- return self._forward_variable\n- else:\n- return None\n-\n- @property\n- def reverse_variable(self):\n- \"\"\"An optlang variable representing the reverse flux (if associated with model), otherwise None.\"\"\"\n- model = self.model\n- if model is not None:\n- if self._reverse_variable is None:\n- self._reverse_variable = model.solver.variables[self.reverse_id]\n- assert self._reverse_variable.problem is self.model.solver\n- return self._reverse_variable\n- else:\n- return None\n-\n- def __copy__(self):\n- cop = copy(super(Reaction, self))\n- cop._reset_var_cache()\n- return cop\n-\n- def __deepcopy__(self, memo):\n- cop = deepcopy(super(Reaction, self), memo)\n- cop._reset_var_cache()\n- return cop\n-\n- @property\n- def lower_bound(self):\n- return self._lower_bound\n-\n- @property\n- def functional(self):\n- \"\"\" reaction is functional\n-\n- Returns\n- -------\n- bool\n- True if the gene-protein-reaction (GPR) rule is fulfilled for this reaction, or if reaction is not\n- associated to a model, otherwise False.\n- \"\"\"\n- if self._model:\n- tree, _ = parse_gpr(self.gene_reaction_rule)\n- return eval_gpr(tree, {gene.id for gene in self.genes if not gene.functional})\n- return True\n-\n- @lower_bound.setter\n- def lower_bound(self, value):\n- model = self.model\n-\n- if model is not None:\n-\n- forward_variable, reverse_variable = self.forward_variable, self.reverse_variable\n- if self._lower_bound < 0 < self._upper_bound: # reversible\n- if value < 0:\n- reverse_variable.ub 
= -1 * value\n- elif value >= 0:\n- reverse_variable.ub = 0\n- try:\n- forward_variable.lb = value\n- except ValueError:\n- forward_variable.ub = value\n- self._upper_bound = value\n- forward_variable.lb = value\n- elif self._lower_bound == 0 and self._upper_bound == 0: # knockout\n- if value < 0:\n- reverse_variable.ub = -1 * value\n- elif value >= 0:\n- forward_variable.ub = value\n- self._upper_bound = value\n- forward_variable.lb = value\n- elif self._lower_bound >= 0: # forward irreversible\n- if value < 0:\n- reverse_variable.ub = -1 * value\n- forward_variable.lb = 0\n- else:\n- try:\n- forward_variable.lb = value\n- except ValueError:\n- forward_variable.ub = value\n- self._upper_bound = value\n- forward_variable.lb = value\n-\n- elif self._upper_bound <= 0: # reverse irreversible\n- if value > 0:\n- reverse_variable.lb = 0\n- reverse_variable.ub = 0\n- forward_variable.ub = value\n- self._upper_bound = value\n- forward_variable.lb = value\n- else:\n- try:\n- reverse_variable.ub = -1 * value\n- except ValueError:\n- reverse_variable.lb = -1 * value\n- self._upper_bound = value\n- reverse_variable.ub = -1 * value\n- else:\n- raise ValueError('lower_bound issue')\n-\n- self._lower_bound = value\n+ # @property\n+ # def reversibility(self):\n+ # return self._lower_bound < 0 < self._upper_bound\n- @property\n- def upper_bound(self):\n- return self._upper_bound\n+ # @property\n+ # def flux_expression(self):\n+ # \"\"\"An optlang variable representing the forward flux (if associated with model), otherwise None.\n+ # Representing the net flux if model.reversible_encoding == 'unsplit'\"\"\"\n+ # model = self.model\n+ # if model is not None:\n+ # return 1. * self.forward_variable - 1. * self.reverse_variable\n+ # else:\n+ # return None\n+ #\n+ # @property\n+ # def forward_variable(self):\n+ # \"\"\"An optlang variable representing the forward flux (if associated with model), otherwise None.\"\"\"\n+ # model = self.model\n+ # if model is not None:\n+ # if self._forward_variable is None:\n+ # self._forward_variable = model.solver.variables[self.id]\n+ # assert self._forward_variable.problem is self.model.solver\n+ #\n+ # return self._forward_variable\n+ # else:\n+ # return None\n+ #\n+ # @property\n+ # def reverse_variable(self):\n+ # \"\"\"An optlang variable representing the reverse flux (if associated with model), otherwise None.\"\"\"\n+ # model = self.model\n+ # if model is not None:\n+ # if self._reverse_variable is None:\n+ # self._reverse_variable = model.solver.variables[self.reverse_id]\n+ # assert self._reverse_variable.problem is self.model.solver\n+ # return self._reverse_variable\n+ # else:\n+ # return None\n- @upper_bound.setter\n- def upper_bound(self, value):\n- model = self.model\n- if model is not None:\n+ # def __copy__(self):\n+ # cop = copy(super(Reaction, self))\n+ # cop._reset_var_cache()\n+ # return cop\n+ #\n+ # def __deepcopy__(self, memo):\n+ # cop = deepcopy(super(Reaction, self), memo)\n+ # cop._reset_var_cache()\n+ # return cop\n- forward_variable, reverse_variable = self.forward_variable, self.reverse_variable\n- if self._lower_bound < 0 < self._upper_bound: # reversible\n- if value > 0:\n- forward_variable.ub = value\n- elif value <= 0:\n- forward_variable.ub = 0\n- try:\n- reverse_variable.lb = -1 * value\n- except ValueError:\n- reverse_variable.ub = -1 * value\n- self._lower_bound = value\n- reverse_variable.lb = -1 * value\n- elif self._lower_bound == 0 and self._upper_bound == 0: # knockout\n- if value > 0:\n- forward_variable.ub = value\n- elif value <= 
0:\n- reverse_variable.ub = -1 * value\n- self._lower_bound = value\n- reverse_variable.lb = -1 * value\n- elif self._lower_bound >= 0: # forward irreversible\n- if value > 0:\n- try:\n- forward_variable.ub = value\n- except ValueError:\n- forward_variable.lb = value\n- self._lower_bound = value\n- forward_variable.ub = value\n- else:\n- forward_variable.lb = 0\n- forward_variable.ub = 0\n- reverse_variable.ub = -1 * value\n- self._lower_bound = value\n- reverse_variable.lb = -1 * value\n-\n- elif self._upper_bound <= 0: # reverse irreversible\n- if value < 0:\n- try:\n- reverse_variable.lb = -1 * value\n- except ValueError:\n- reverse_variable.ub = -1 * value\n- self._lower_bound = value\n- reverse_variable.lb = -1 * value\n- else:\n- forward_variable.ub = value\n- reverse_variable.lb = 0\n- else:\n- raise ValueError('upper_bound issue')\n+ # @property\n+ # def lower_bound(self):\n+ # return self._lower_bound\n- self._upper_bound = value\n+ # @property\n+ # def functional(self):\n+ # \"\"\" reaction is functional\n+ #\n+ # Returns\n+ # -------\n+ # bool\n+ # True if the gene-protein-reaction (GPR) rule is fulfilled for this reaction, or if reaction is not\n+ # associated to a model, otherwise False.\n+ # \"\"\"\n+ # if self._model:\n+ # tree, _ = parse_gpr(self.gene_reaction_rule)\n+ # return eval_gpr(tree, {gene.id for gene in self.genes if not gene.functional})\n+ # return True\n+\n+ # @lower_bound.setter\n+ # def lower_bound(self, value):\n+ # model = self.model\n+ #\n+ # if model is not None:\n+ #\n+ # forward_variable, reverse_variable = self.forward_variable, self.reverse_variable\n+ # if self._lower_bound < 0 < self._upper_bound: # reversible\n+ # if value < 0:\n+ # reverse_variable.ub = -1 * value\n+ # elif value >= 0:\n+ # reverse_variable.ub = 0\n+ # try:\n+ # forward_variable.lb = value\n+ # except ValueError:\n+ # forward_variable.ub = value\n+ # self._upper_bound = value\n+ # forward_variable.lb = value\n+ # elif self._lower_bound == 0 and self._upper_bound == 0: # knockout\n+ # if value < 0:\n+ # reverse_variable.ub = -1 * value\n+ # elif value >= 0:\n+ # forward_variable.ub = value\n+ # self._upper_bound = value\n+ # forward_variable.lb = value\n+ # elif self._lower_bound >= 0: # forward irreversible\n+ # if value < 0:\n+ # reverse_variable.ub = -1 * value\n+ # forward_variable.lb = 0\n+ # else:\n+ # try:\n+ # forward_variable.lb = value\n+ # except ValueError:\n+ # forward_variable.ub = value\n+ # self._upper_bound = value\n+ # forward_variable.lb = value\n+ #\n+ # elif self._upper_bound <= 0: # reverse irreversible\n+ # if value > 0:\n+ # reverse_variable.lb = 0\n+ # reverse_variable.ub = 0\n+ # forward_variable.ub = value\n+ # self._upper_bound = value\n+ # forward_variable.lb = value\n+ # else:\n+ # try:\n+ # reverse_variable.ub = -1 * value\n+ # except ValueError:\n+ # reverse_variable.lb = -1 * value\n+ # self._upper_bound = value\n+ # reverse_variable.ub = -1 * value\n+ # else:\n+ # raise ValueError('lower_bound issue')\n+ #\n+ # self._lower_bound = value\n+ #\n+ # @property\n+ # def upper_bound(self):\n+ # return self._upper_bound\n+ #\n+ # @upper_bound.setter\n+ # def upper_bound(self, value):\n+ # model = self.model\n+ # if model is not None:\n+ #\n+ # forward_variable, reverse_variable = self.forward_variable, self.reverse_variable\n+ # if self._lower_bound < 0 < self._upper_bound: # reversible\n+ # if value > 0:\n+ # forward_variable.ub = value\n+ # elif value <= 0:\n+ # forward_variable.ub = 0\n+ # try:\n+ # reverse_variable.lb = -1 * value\n+ # except 
ValueError:\n+ # reverse_variable.ub = -1 * value\n+ # self._lower_bound = value\n+ # reverse_variable.lb = -1 * value\n+ # elif self._lower_bound == 0 and self._upper_bound == 0: # knockout\n+ # if value > 0:\n+ # forward_variable.ub = value\n+ # elif value <= 0:\n+ # reverse_variable.ub = -1 * value\n+ # self._lower_bound = value\n+ # reverse_variable.lb = -1 * value\n+ # elif self._lower_bound >= 0: # forward irreversible\n+ # if value > 0:\n+ # try:\n+ # forward_variable.ub = value\n+ # except ValueError:\n+ # forward_variable.lb = value\n+ # self._lower_bound = value\n+ # forward_variable.ub = value\n+ # else:\n+ # forward_variable.lb = 0\n+ # forward_variable.ub = 0\n+ # reverse_variable.ub = -1 * value\n+ # self._lower_bound = value\n+ # reverse_variable.lb = -1 * value\n+ #\n+ # elif self._upper_bound <= 0: # reverse irreversible\n+ # if value < 0:\n+ # try:\n+ # reverse_variable.lb = -1 * value\n+ # except ValueError:\n+ # reverse_variable.ub = -1 * value\n+ # self._lower_bound = value\n+ # reverse_variable.lb = -1 * value\n+ # else:\n+ # forward_variable.ub = value\n+ # reverse_variable.lb = 0\n+ # else:\n+ # raise ValueError('upper_bound issue')\n+ #\n+ # self._upper_bound = value\n@property\ndef model(self):\n" }, { "change_type": "MODIFY", "old_path": "tests/data/iJO1366.pickle", "new_path": "tests/data/iJO1366.pickle", "diff": "Binary files a/tests/data/iJO1366.pickle and b/tests/data/iJO1366.pickle differ\n" }, { "change_type": "MODIFY", "old_path": "tests/data/salmonella.pickle", "new_path": "tests/data/salmonella.pickle", "diff": "Binary files a/tests/data/salmonella.pickle and b/tests/data/salmonella.pickle differ\n" }, { "change_type": "MODIFY", "old_path": "tests/test_webmodels.py", "new_path": "tests/test_webmodels.py", "diff": "@@ -30,6 +30,7 @@ class TestWebModels:\nwith pytest.raises(requests.ConnectionError):\nget_sbml_file(1, host=\"http://blabla\")\n+ @pytest.mark.skipif(True, reason='too slow for testing')\ndef test_index(self):\ntry:\nindex = index_models_minho()\n@@ -40,6 +41,7 @@ class TestWebModels:\nassert list(index.columns) == [\"id\", \"name\", \"doi\", \"author\", \"year\", \"formats\", \"organism\", \"taxonomy\",\n\"validated\"]\n+ @pytest.mark.skipif(True, reason='too slow for testing')\ndef test_get_sbml(self):\ntry:\ntmp = get_sbml_file(1)\n" } ]
language: Python
license: Apache License 2.0
repo: biosustain/cameo
original_message: refactor: remove reaction bound setters Moved cobra.core.Reaction

author: 89,735
date: 20.03.2017 18:09:23
timezone: -3,600
hash: ab2afce9a245f960646947a0c8447dc7d745e0a6
message: refactor: replace model.solve() w model.optimize() cobra.core.Model.optimize has been improved, removing the need for model.solve.
mods:
[ { "change_type": "MODIFY", "old_path": "cameo/__init__.py", "new_path": "cameo/__init__.py", "diff": "@@ -29,8 +29,8 @@ from cameo import load_model\n# load a model from SBML format (can be found under cameo/tests/data)\nmodel = load_model('EcoliCore.xml')\n-# solve the model and print the objective value\n-solution = model.solve()\n+# optimize the model and print the objective value\n+solution = model.optimize()\nprint 'Objective value:', solution.f\n# Determine a set of gene deletions that will optimize the production\n" }, { "change_type": "MODIFY", "old_path": "cameo/core/solver_based_model.py", "new_path": "cameo/core/solver_based_model.py", "diff": "@@ -21,8 +21,6 @@ from __future__ import absolute_import, print_function\nimport csv\nimport logging\n-import time\n-import types\nfrom copy import copy, deepcopy\nfrom functools import partial\n@@ -35,15 +33,15 @@ from pandas import DataFrame, pandas\nfrom sympy import Add\nfrom sympy import Mul\nfrom sympy.core.singleton import S\n+from cobra.core import get_solution\nfrom cameo import config\n-from cameo import exceptions\nfrom cameo.core.gene import Gene\nfrom cameo.core.metabolite import Metabolite\nfrom cameo.exceptions import SolveError, Infeasible\n-from cameo.util import TimeMachine, inheritdocstring, AutoVivification\n+from cameo.util import TimeMachine, inheritdocstring\nfrom .reaction import Reaction\n-from .solution import LazySolution, Solution\n+from .solution import LazySolution\n__all__ = ['to_solver_based_model', 'SolverBasedModel']\n@@ -389,7 +387,11 @@ class SolverBasedModel(cobra.core.Model):\nfix_objective_name = 'Fixed_objective_{}'.format(self.objective.name)\nif fix_objective_name in self.solver.constraints:\nself.solver.remove(fix_objective_name)\n- objective_value = self.solve().objective_value * fraction\n+ self.solver.optimize()\n+ if self.solver.status == 'optimal':\n+ objective_value = self.objective.value * fraction\n+ else:\n+ raise Infeasible('failed to fix objective')\nconstraint = self.solver.interface.Constraint(self.objective.expression,\nname=fix_objective_name)\nif self.objective.direction == 'max':\n@@ -464,42 +466,42 @@ class SolverBasedModel(cobra.core.Model):\nself.solver.add(ratio_constraint, sloppy=True)\nreturn ratio_constraint\n- def optimize(self, objective_sense=None, solution_type=Solution, **kwargs):\n- \"\"\"OptlangBasedModel implementation of optimize.\n-\n- Exists only for compatibility reasons. Uses model.solve() instead.\n- \"\"\"\n- self._timestamp_last_optimization = time.time()\n- if objective_sense is not None:\n- original_direction = self.objective.direction\n- self.objective.direction = {'minimize': 'min', 'maximize': 'max'}[objective_sense]\n- self.solver.optimize()\n- self.objective.direction = original_direction\n- else:\n- self.solver.optimize()\n- solution = solution_type(self)\n- self.solution = solution\n- return solution\n-\n- def solve(self, solution_type=LazySolution, *args, **kwargs):\n- \"\"\"Optimize model.\n-\n- Parameters\n- ----------\n- solution_type : Solution or LazySolution, optional\n- The type of solution that should be returned (defaults to LazySolution).\n-\n- Returns\n- -------\n- Solution or LazySolution\n- \"\"\"\n- solution = self.optimize(solution_type=solution_type, *args, **kwargs)\n- if solution.status is not 'optimal':\n- raise exceptions._OPTLANG_TO_EXCEPTIONS_DICT.get(solution.status, SolveError)(\n- 'Solving model %s did not return an optimal solution. 
The returned solution status is \"%s\"' % (\n- self, solution.status))\n- else:\n- return solution\n+ # def optimize(self, objective_sense=None, solution_type=Solution, **kwargs):\n+ # \"\"\"OptlangBasedModel implementation of optimize.\n+ #\n+ # Exists only for compatibility reasons. Uses model.optimize() instead.\n+ # \"\"\"\n+ # self._timestamp_last_optimization = time.time()\n+ # if objective_sense is not None:\n+ # original_direction = self.objective.direction\n+ # self.objective.direction = {'minimize': 'min', 'maximize': 'max'}[objective_sense]\n+ # self.solver.optimize()\n+ # self.objective.direction = original_direction\n+ # else:\n+ # self.solver.optimize()\n+ # solution = solution_type(self)\n+ # self.solution = solution\n+ # return solution\n+ #\n+ # def solve(self, solution_type=LazySolution, *args, **kwargs):\n+ # \"\"\"Optimize model.\n+ #\n+ # Parameters\n+ # ----------\n+ # solution_type : Solution or LazySolution, optional\n+ # The type of solution that should be returned (defaults to LazySolution).\n+ #\n+ # Returns\n+ # -------\n+ # Solution or LazySolution\n+ # \"\"\"\n+ # solution = self.optimize(solution_type=solution_type, *args, **kwargs)\n+ # if solution.status is not 'optimal':\n+ # raise exceptions._OPTLANG_TO_EXCEPTIONS_DICT.get(solution.status, SolveError)(\n+ # 'Solving model %s did not return an optimal solution. The returned solution status is \"%s\"' % (\n+ # self, solution.status))\n+ # else:\n+ # return solution\ndef __dir__(self):\n# Hide 'optimize' from user.\n@@ -543,8 +545,10 @@ class SolverBasedModel(cobra.core.Model):\n# Essential metabolites are only in reactions that carry flux.\nmetabolites = set()\n- solution = self.solve()\n-\n+ self.solver.optimize()\n+ if self.solver.status != 'optimal':\n+ raise SolveError('optimization failed')\n+ solution = get_solution(self)\nfor reaction_id, flux in six.iteritems(solution.fluxes):\nif abs(flux) > 0:\nreaction = self.reactions.get_by_id(reaction_id)\n@@ -553,13 +557,9 @@ class SolverBasedModel(cobra.core.Model):\nfor metabolite in metabolites:\nwith TimeMachine() as tm:\nmetabolite.knock_out(time_machine=tm, force_steady_state=force_steady_state)\n- try:\n- solution = self.solve()\n- if solution.f < threshold:\n- essential_metabolites.append(metabolite)\n- except Infeasible:\n+ self.solver.optimize()\n+ if self.solver.status != 'optimal' or self.objective.value < threshold:\nessential_metabolites.append(metabolite)\n-\nreturn essential_metabolites\ndef essential_reactions(self, threshold=1e-6):\n@@ -577,19 +577,17 @@ class SolverBasedModel(cobra.core.Model):\n\"\"\"\nessential = []\ntry:\n- solution = self.solve()\n-\n+ self.solver.optimize()\n+ if self.solver.status != 'optimal':\n+ raise SolveError('optimization failed')\n+ solution = get_solution(self)\nfor reaction_id, flux in six.iteritems(solution.fluxes):\nif abs(flux) > 0:\nreaction = self.reactions.get_by_id(reaction_id)\nwith TimeMachine() as tm:\nreaction.knock_out(time_machine=tm)\n- try:\n- sol = self.solve()\n- except Infeasible:\n- essential.append(reaction)\n- else:\n- if sol.f < threshold:\n+ self.solver.optimize()\n+ if self.solver.status != 'optimal' or self.objective.value < threshold:\nessential.append(reaction)\nexcept SolveError as e:\n@@ -613,7 +611,10 @@ class SolverBasedModel(cobra.core.Model):\n\"\"\"\nessential = []\ntry:\n- solution = self.solve()\n+ self.solver.optimize()\n+ if self.solver.status != 'optimal':\n+ raise SolveError('optimization failed')\n+ solution = get_solution(self)\ngenes_to_check = set()\nfor 
reaction_id, flux in six.iteritems(solution.fluxes):\nif abs(flux) > 0:\n@@ -621,12 +622,8 @@ class SolverBasedModel(cobra.core.Model):\nfor gene in genes_to_check:\nwith TimeMachine() as tm:\ngene.knock_out(time_machine=tm)\n- try:\n- sol = self.solve()\n- except Infeasible:\n- essential.append(gene)\n- else:\n- if sol.f < threshold:\n+ self.solver.optimize()\n+ if self.solver.status != 'optimal' or self.objective.value < threshold:\nessential.append(gene)\nexcept SolveError as e:\n" }, { "change_type": "MODIFY", "old_path": "cameo/flux_analysis/analysis.py", "new_path": "cameo/flux_analysis/analysis.py", "diff": "@@ -24,10 +24,11 @@ from functools import partial, reduce\nimport numpy\nimport pandas\nimport six\n-from cobra.core import Reaction, Metabolite\n+from cobra.core import Reaction, Metabolite, get_solution\nfrom numpy import trapz\nfrom six.moves import zip\nfrom sympy import S\n+from optlang.interface import UNBOUNDED\nimport cameo\nfrom cameo import config\n@@ -250,12 +251,12 @@ def _flux_variability_analysis(model, reactions=None):\nfva_sol[reaction.id] = dict()\nmodel.solver.objective.set_linear_coefficients({reaction.forward_variable: 1.,\nreaction.reverse_variable: -1.})\n- try:\n- solution = model.solve()\n- fva_sol[reaction.id]['lower_bound'] = solution.f\n- except Unbounded:\n+ model.solver.optimize()\n+ if model.solver.status == 'optimal':\n+ fva_sol[reaction.id]['lower_bound'] = model.objective.value\n+ elif model.solver.status == UNBOUNDED:\nfva_sol[reaction.id]['lower_bound'] = -numpy.inf\n- except Infeasible:\n+ else:\nlb_flags[reaction.id] = True\nmodel.solver.objective.set_linear_coefficients({reaction.forward_variable: 0.,\nreaction.reverse_variable: 0.})\n@@ -268,12 +269,12 @@ def _flux_variability_analysis(model, reactions=None):\nmodel.solver.objective.set_linear_coefficients({reaction.forward_variable: 1.,\nreaction.reverse_variable: -1.})\n- try:\n- solution = model.solve()\n- fva_sol[reaction.id]['upper_bound'] = solution.f\n- except Unbounded:\n+ model.solver.optimize()\n+ if model.solver.status == 'optimal':\n+ fva_sol[reaction.id]['upper_bound'] = model.objective.value\n+ elif model.solver.status == UNBOUNDED:\nfva_sol[reaction.id]['upper_bound'] = numpy.inf\n- except Infeasible:\n+ else:\nub_flag = True\nif lb_flags[reaction.id] is True and ub_flag is True:\n@@ -316,7 +317,7 @@ def _get_c_source_reaction(model):\n\"\"\"\nmedium_reactions = [model.reactions.get_by_id(reaction) for reaction in model.medium.reaction_id]\ntry:\n- model.solve()\n+ model.optimize()\nexcept (Infeasible, AssertionError):\nreturn None\nsource_reactions = [(reaction, reaction.flux * reaction.n_carbon) for reaction in medium_reactions if\n@@ -348,19 +349,19 @@ def _cycle_free_fva(model, reactions=None, sloppy=True, sloppy_bound=666):\nfva_sol[reaction.id] = dict()\nmodel.objective = reaction\nmodel.objective.direction = 'min'\n- try:\n- solution = model.solve()\n- except Unbounded:\n+ model.solver.optimize()\n+ if model.solver.status == UNBOUNDED:\nfva_sol[reaction.id]['lower_bound'] = -numpy.inf\ncontinue\n- except Infeasible:\n+ elif model.solver.status != 'optimal':\nfva_sol[reaction.id]['lower_bound'] = 0\ncontinue\n- bound = solution.f\n+ bound = model.objective.value\nif sloppy and abs(bound) < sloppy_bound:\nfva_sol[reaction.id]['lower_bound'] = bound\nelse:\nlogger.debug('Determine if {} with bound {} is a cycle'.format(reaction.id, bound))\n+ solution = get_solution(model)\nv0_fluxes = solution.x_dict\nv1_cycle_free_fluxes = remove_infeasible_cycles(model, v0_fluxes)\nif 
abs(v1_cycle_free_fluxes[reaction.id] - bound) < 10 ** -6:\n@@ -376,31 +377,30 @@ def _cycle_free_fva(model, reactions=None, sloppy=True, sloppy_bound=666):\nknockout_reaction = model.reactions.get_by_id(key)\nknockout_reaction.knock_out(time_machine=tm)\nmodel.objective.direction = 'min'\n- try:\n- solution = model.solve()\n- except Unbounded:\n+ model.solver.optimize()\n+ if model.solver.status == 'optimal':\n+ fva_sol[reaction.id]['lower_bound'] = model.objective.value\n+ elif model.solver.status == UNBOUNDED:\nfva_sol[reaction.id]['lower_bound'] = -numpy.inf\n- except Infeasible:\n- fva_sol[reaction.id]['lower_bound'] = 0\nelse:\n- fva_sol[reaction.id]['lower_bound'] = solution.f\n+ fva_sol[reaction.id]['lower_bound'] = 0\nfor reaction in reactions:\nmodel.objective = reaction\nmodel.objective.direction = 'max'\n- try:\n- solution = model.solve()\n- except Unbounded:\n+ model.solver.optimize()\n+ if model.solver.status == UNBOUNDED:\nfva_sol[reaction.id]['upper_bound'] = numpy.inf\ncontinue\n- except Infeasible:\n+ elif model.solver.status != 'optimal':\nfva_sol[reaction.id]['upper_bound'] = 0\ncontinue\n- bound = solution.f\n+ bound = model.objective.value\nif sloppy and abs(bound) < sloppy_bound:\nfva_sol[reaction.id]['upper_bound'] = bound\nelse:\nlogger.debug('Determine if {} with bound {} is a cycle'.format(reaction.id, bound))\n+ solution = get_solution(model)\nv0_fluxes = solution.x_dict\nv1_cycle_free_fluxes = remove_infeasible_cycles(model, v0_fluxes)\nif abs(v1_cycle_free_fluxes[reaction.id] - bound) < 1e-6:\n@@ -416,14 +416,13 @@ def _cycle_free_fva(model, reactions=None, sloppy=True, sloppy_bound=666):\nknockout_reaction = model.reactions.get_by_id(key)\nknockout_reaction.knock_out(time_machine=tm)\nmodel.objective.direction = 'max'\n- try:\n- solution = model.solve()\n- except Unbounded:\n+ model.solver.optimize()\n+ if model.solver.status == 'optimal':\n+ fva_sol[reaction.id]['upper_bound'] = model.objective.value\n+ elif model.solver.status == UNBOUNDED:\nfva_sol[reaction.id]['upper_bound'] = numpy.inf\n- except Infeasible:\n- fva_sol[reaction.id]['upper_bound'] = 0\nelse:\n- fva_sol[reaction.id]['upper_bound'] = solution.f\n+ fva_sol[reaction.id]['upper_bound'] = 0\ndf = pandas.DataFrame.from_dict(fva_sol, orient='index')\nlb_higher_ub = df[df.lower_bound > df.upper_bound]\n@@ -516,10 +515,10 @@ class _PhenotypicPhasePlaneChunkEvaluator(object):\ndef _interval_estimates(self):\ntry:\n- flux = self.model.solve().f\n+ flux = self.model.optimize().f\ncarbon_yield = self.carbon_yield()\nmass_yield = self.mass_yield()\n- except Infeasible:\n+ except (AssertionError, Infeasible):\nflux = 0\ncarbon_yield = 0\nmass_yield = 0\n" }, { "change_type": "MODIFY", "old_path": "cameo/flux_analysis/simulation.py", "new_path": "cameo/flux_analysis/simulation.py", "diff": "@@ -40,6 +40,8 @@ import sympy\nfrom sympy import Add\nfrom sympy import Mul\nfrom sympy.parsing.sympy_parser import parse_expr\n+from cobra.core import get_solution\n+from cobra.exceptions import OptimizationError\nfrom optlang.interface import OptimizationExpression\nfrom cameo.config import ndecimals\n@@ -78,7 +80,10 @@ def fba(model, objective=None, reactions=None, *args, **kwargs):\nif objective is not None:\ntm(do=partial(setattr, model, 'objective', objective),\nundo=partial(setattr, model, 'objective', model.objective))\n- solution = model.solve()\n+ model.solver.optimize()\n+ if model.solver.status != 'optimal':\n+ raise SolveError('optimization failed')\n+ solution = get_solution(model)\nif reactions is not 
None:\nresult = FluxDistributionResult({r: solution.get_primal_by_id(r) for r in reactions}, solution.f)\nelse:\n@@ -151,7 +156,7 @@ def pfba(model, objective=None, reactions=None, fraction_of_optimum=1, *args, **\nwith TimeMachine() as tm:\nadd_pfba(model, objective=objective, fraction_of_optimum=fraction_of_optimum, time_machine=tm)\ntry:\n- solution = model.solve()\n+ solution = model.optimize()\nif reactions is not None:\nresult = FluxDistributionResult({r: solution.get_primal_by_id(r) for r in reactions}, solution.f)\nelse:\n@@ -226,7 +231,7 @@ def moma(model, reference=None, cache=None, reactions=None, *args, **kwargs):\ncache.add_objective(create_objective, None, cache.variables.values())\n- solution = model.solve()\n+ solution = model.optimize()\nif reactions is not None:\nresult = FluxDistributionResult({r: solution.get_primal_by_id(r) for r in reactions}, solution.f)\nelse:\n@@ -323,7 +328,7 @@ def lmoma(model, reference=None, cache=None, reactions=None, *args, **kwargs):\ntry:\n- solution = model.solve()\n+ solution = model.optimize()\nif reactions is not None:\nresult = FluxDistributionResult({r: solution.get_primal_by_id(r) for r in reactions}, solution.f)\nelse:\n@@ -418,7 +423,7 @@ def room(model, reference=None, cache=None, delta=0.03, epsilon=0.001, reactions\nmodel.objective = model.solver.interface.Objective(add([mul([One, var]) for var in cache.variables.values()]),\ndirection='min')\ntry:\n- solution = model.solve()\n+ solution = model.optimize()\nif reactions is not None:\nresult = FluxDistributionResult({r: solution.get_primal_by_id(r) for r in reactions}, solution.f)\nelse:\n" }, { "change_type": "MODIFY", "old_path": "cameo/flux_analysis/structural.py", "new_path": "cameo/flux_analysis/structural.py", "diff": "@@ -349,7 +349,7 @@ class ShortestElementaryFluxModes(six.Iterator):\ndef __generate_elementary_modes(self):\nwhile True:\ntry:\n- self.model.solve()\n+ self.model.optimize()\nexcept SolveError:\nraise StopIteration\nelementary_flux_mode = list()\n@@ -463,12 +463,11 @@ class MinimalCutSetsEnumerator(ShortestElementaryFluxModes): # pragma: no cover\nwith TimeMachine() as tm:\nfor reac_id in mcs:\nself._primal_model.reactions.get_by_id(reac_id).knock_out(tm)\n- try:\n- self._primal_model.solve()\n- except Infeasible:\n- return None\n- else:\n+ self._primal_model.solver.optimize()\n+ if self._primal_model.solver.status == 'optimal':\nreturn mcs\n+ else:\n+ return None\nelse:\nreturn mcs\n@@ -629,9 +628,8 @@ class MinimalCutSetsEnumerator(ShortestElementaryFluxModes): # pragma: no cover\n# of knockouts can be feasible either.\nwith TimeMachine() as tm:\nreaction.knock_out(tm)\n- try:\n- self._primal_model.solve()\n- except Infeasible:\n+ self._primal_model.solver.optimize()\n+ if self._primal_model.solver.status != 'optimal':\nillegal_knockouts.append(reaction.id)\nself._illegal_knockouts = illegal_knockouts\nreturn cloned_constraints\n" }, { "change_type": "MODIFY", "old_path": "cameo/flux_analysis/util.py", "new_path": "cameo/flux_analysis/util.py", "diff": "@@ -91,7 +91,7 @@ def remove_infeasible_cycles(model, fluxes, fix=()):\nundo=partial(setattr, reaction_to_fix, 'upper_bound', reaction_to_fix.upper_bound))\ntry:\n- solution = model.solve()\n+ solution = model.optimize()\nexcept SolveError as e:\nlogger.warning(\"Couldn't remove cycles from reference flux distribution.\")\nraise e\n" }, { "change_type": "MODIFY", "old_path": "cameo/strain_design/deterministic/flux_variability_based.py", "new_path": 
"cameo/strain_design/deterministic/flux_variability_based.py", "diff": "@@ -122,7 +122,7 @@ class DifferentialFVA(StrainDesignMethod):\n>>> from cameo.strain_design.deterministic import DifferentialFVA\n>>> model = models.bigg.e_coli_core\n>>> reference_model = model.copy()\n- >>> reference_model.reactions.Biomass_Ecoli_core_w_GAM.lower_bound = reference_model.solve().objective_value\n+ >>> reference_model.reactions.Biomass_Ecoli_core_w_GAM.lower_bound = reference_model.optimize().objective_value\n>>> diffFVA = DifferentialFVA(design_space_model=model,\nreference_model=reference_model,\nobjective=model.reactions.EX_succ_e,\n" }, { "change_type": "MODIFY", "old_path": "cameo/strain_design/deterministic/linear_programming.py", "new_path": "cameo/strain_design/deterministic/linear_programming.py", "diff": "@@ -245,7 +245,7 @@ class OptKnock(StrainDesignMethod):\ncount = 0\nwhile count < max_results:\ntry:\n- solution = self._model.solve()\n+ solution = self._model.optimize()\nexcept SolveError as e:\nlogger.debug(\"Problem could not be solved. Terminating and returning \" + str(count) + \" solutions\")\nlogger.debug(str(e))\n" }, { "change_type": "MODIFY", "old_path": "cameo/strain_design/pathway_prediction/pathway_predictor.py", "new_path": "cameo/strain_design/pathway_prediction/pathway_predictor.py", "diff": "@@ -314,7 +314,7 @@ class PathwayPredictor(StrainDesignMethod):\nwhile counter <= max_predictions:\nlogger.debug('Predicting pathway No. %d' % counter)\ntry:\n- self.model.solve()\n+ self.model.optimize()\nexcept SolveError as e:\nlogger.error('No pathway could be predicted. Terminating pathway predictions.')\nlogger.error(e)\n" }, { "change_type": "MODIFY", "old_path": "cameo/stuff/distance.py", "new_path": "cameo/stuff/distance.py", "diff": "@@ -136,7 +136,7 @@ class ManhattanDistance(Distance):\ndef minimize(self, *args, **kwargs):\nself.model.objective.direction = 'min'\n- solution = self.model.solve()\n+ solution = self.model.optimize()\nresult = FluxDistributionResult(solution)\nreturn result\n@@ -192,6 +192,6 @@ class RegulatoryOnOffDistance(Distance):\ndef minimize(self, *args, **kwargs):\nself.model.objective.direction = 'min'\n- solution = self.model.solve()\n+ solution = self.model.optimize()\nresult = FluxDistributionResult(solution)\nreturn result\n" }, { "change_type": "MODIFY", "old_path": "cameo/stuff/stuff.py", "new_path": "cameo/stuff/stuff.py", "diff": "@@ -25,7 +25,7 @@ from cameo.util import TimeMachine\ndef gene_knockout_growth(gene_id, model, threshold=10 ** -6, simulation_method=fba,\nnormalize=True, biomass=None, biomass_flux=None, *args, **kwargs):\nif biomass_flux is None:\n- s = model.solve()\n+ s = model.optimize()\nbiomass_flux = s.f\nif 'reference' not in kwargs:\nkwargs['reference'] = s.x_dict\n@@ -65,7 +65,7 @@ def reaction_component_production(model, reaction):\ntm(do=partial(model.add_reactions, [test]), undo=partial(model.remove_reactions, [test]))\ntm(do=partial(setattr, model, 'objective', test.id), undo=partial(setattr, model, 'objective', model.objective))\ntry:\n- print(metabolite.id, \"= \", model.solve().f)\n+ print(metabolite.id, \"= \", model.optimize().f)\nexcept SolveError:\nprint(metabolite, \" cannot be produced (reactions: %s)\" % metabolite.reactions)\nfinally:\n" }, { "change_type": "MODIFY", "old_path": "tests/test_flux_analysis.py", "new_path": "tests/test_flux_analysis.py", "diff": "@@ -25,6 +25,7 @@ import numpy as np\nimport pandas\nimport pytest\nfrom sympy import Add\n+from cobra.util import 
create_stoichiometric_matrix\nimport cameo\nfrom cameo.flux_analysis import remove_infeasible_cycles, structural\n@@ -201,9 +202,9 @@ class TestSimulationMethods:\ndef test_pfba(self, core_model):\noriginal_objective = core_model.objective\nfba_solution = fba(core_model)\n- fba_flux_sum = sum((abs(val) for val in list(fba_solution.fluxes.values())))\n+ fba_flux_sum = sum((abs(val) for val in list(fba_solution.fluxes.values)))\npfba_solution = pfba(core_model)\n- pfba_flux_sum = sum((abs(val) for val in list(pfba_solution.fluxes.values())))\n+ pfba_flux_sum = sum((abs(val) for val in list(pfba_solution.fluxes.values)))\n# looks like GLPK finds a parsimonious solution without the flux minimization objective\nassert (pfba_flux_sum - fba_flux_sum) < 1e-6, \\\n\"FBA sum is suppose to be lower than PFBA (was %f)\" % (pfba_flux_sum - fba_flux_sum)\n@@ -218,9 +219,9 @@ class TestSimulationMethods:\ndef test_pfba_ijo1366(self, ijo1366):\noriginal_objective = ijo1366.objective\nfba_solution = fba(ijo1366)\n- fba_flux_sum = sum((abs(val) for val in fba_solution.fluxes.values()))\n+ fba_flux_sum = sum((abs(val) for val in fba_solution.fluxes.values))\npfba_solution = pfba(ijo1366)\n- pfba_flux_sum = sum((abs(val) for val in pfba_solution.fluxes.values()))\n+ pfba_flux_sum = sum((abs(val) for val in pfba_solution.fluxes.values))\nassert (pfba_flux_sum - fba_flux_sum) < 1e-6, \\\n\"FBA sum is suppose to be lower than PFBA (was %f)\" % (pfba_flux_sum - fba_flux_sum)\nassert ijo1366.objective is original_objective\n@@ -327,8 +328,7 @@ class TestSimulationMethods:\nwith TimeMachine() as tm:\ntoy_model.reactions.v6.knock_out(tm)\nresult_changed = moma(toy_model, reference=reference_changed)\n-\n- assert expected != result_changed.fluxes\n+ assert np.all([expected != result_changed.fluxes])\nclass TestRemoveCycles:\n@@ -338,7 +338,7 @@ class TestRemoveCycles:\noriginal_objective = copy.copy(core_model.objective)\ncore_model.objective = core_model.solver.interface.Objective(\nAdd(*core_model.solver.variables.values()), name='Max all fluxes')\n- solution = core_model.solve()\n+ solution = core_model.optimize()\nassert abs(solution.data_frame.fluxes.abs().sum() - 2508.293334) < 1e-6\nfluxes = solution.fluxes\ncore_model.objective = original_objective\n@@ -364,7 +364,7 @@ class TestStructural:\ndef test_find_coupled_reactions(self, core_model):\ncouples = structural.find_coupled_reactions(core_model)\n- fluxes = core_model.solve().fluxes\n+ fluxes = core_model.optimize().fluxes\nfor coupled_set in couples:\ncoupled_set = list(coupled_set)\nassert round(abs(fluxes[coupled_set[0].id] - fluxes[coupled_set[1].id]), 7) == 0\n" }, { "change_type": "MODIFY", "old_path": "tests/test_solver_based_model.py", "new_path": "tests/test_solver_based_model.py", "diff": "@@ -55,7 +55,7 @@ ESSENTIAL_REACTIONS = ['GLNS', 'Biomass_Ecoli_core_N_LPAREN_w_FSLASH_GAM_RPAREN_\ndef solved_model(request, data_directory):\ncore_model = load_model(os.path.join(data_directory, 'EcoliCore.xml'), sanitize=False)\ncore_model.solver = request.param\n- solution = core_model.solve()\n+ solution = core_model.optimize()\nreturn solution, core_model\n@@ -72,22 +72,22 @@ def tiny_toy_model(request):\nreturn tiny\n-class TestLazySolution:\n- def test_self_invalidation(self, solved_model):\n- solution, model = solved_model\n- assert abs(solution.f - 0.873921506968431) < 0.000001\n- model.optimize()\n- with pytest.raises(UndefinedSolution):\n- getattr(solution, 'f')\n-\n- def test_solution_contains_only_reaction_specific_values(self, solved_model):\n- 
solution, model = solved_model\n- reaction_ids = set([reaction.id for reaction in model.reactions])\n- assert set(solution.fluxes.keys()).difference(reaction_ids) == set()\n- assert set(solution.reduced_costs.keys()).difference(reaction_ids) == set()\n- assert set(solution.reduced_costs.keys()).difference(reaction_ids) == set()\n- metabolite_ids = set([metabolite.id for metabolite in model.metabolites])\n- assert set(solution.shadow_prices.keys()).difference(metabolite_ids) == set()\n+# class TestLazySolution:\n+# def test_self_invalidation(self, solved_model):\n+# solution, model = solved_model\n+# assert abs(solution.f - 0.873921506968431) < 0.000001\n+# model.optimize()\n+# with pytest.raises(UndefinedSolution):\n+# getattr(solution, 'f')\n+#\n+# def test_solution_contains_only_reaction_specific_values(self, solved_model):\n+# solution, model = solved_model\n+# reaction_ids = set([reaction.id for reaction in model.reactions])\n+# assert set(solution.fluxes.keys()).difference(reaction_ids) == set()\n+# assert set(solution.reduced_costs.keys()).difference(reaction_ids) == set()\n+# assert set(solution.reduced_costs.keys()).difference(reaction_ids) == set()\n+# metabolite_ids = set([metabolite.id for metabolite in model.metabolites])\n+# assert set(solution.shadow_prices.keys()).difference(metabolite_ids) == set()\nclass TestReaction:\n@@ -110,26 +110,27 @@ class TestReaction:\nassert reaction.upper_bound == cloned_reaction.upper_bound\nassert reaction.lower_bound == cloned_reaction.lower_bound\n- def test_gene_reaction_rule_setter(self, core_model):\n- rxn = Reaction('rxn')\n- rxn.add_metabolites({Metabolite('A'): -1, Metabolite('B'): 1})\n- rxn.gene_reaction_rule = 'A2B1 or A2B2 and A2B3'\n- assert hasattr(list(rxn.genes)[0], 'knock_out')\n- core_model.add_reaction(rxn)\n- with cameo.util.TimeMachine() as tm:\n- core_model.genes.A2B1.knock_out(time_machine=tm)\n- assert not core_model.genes.A2B1.functional\n- core_model.genes.A2B3.knock_out(time_machine=tm)\n- assert not rxn.functional\n- assert core_model.genes.A2B3.functional\n- assert rxn.functional\n- core_model.genes.A2B1.knock_out()\n- assert not core_model.genes.A2B1.functional\n- assert core_model.reactions.rxn.functional\n- core_model.genes.A2B3.knock_out()\n- assert not core_model.reactions.rxn.functional\n- non_functional = [gene.id for gene in core_model.non_functional_genes]\n- assert all(gene in non_functional for gene in ['A2B3', 'A2B1'])\n+ # test moved to cobra\n+ # def test_gene_reaction_rule_setter(self, core_model):\n+ # rxn = Reaction('rxn')\n+ # rxn.add_metabolites({Metabolite('A'): -1, Metabolite('B'): 1})\n+ # rxn.gene_reaction_rule = 'A2B1 or A2B2 and A2B3'\n+ # assert hasattr(list(rxn.genes)[0], 'knock_out')\n+ # core_model.add_reaction(rxn)\n+ # with cameo.util.TimeMachine() as tm:\n+ # core_model.genes.A2B1.knock_out(time_machine=tm)\n+ # assert not core_model.genes.A2B1.functional\n+ # core_model.genes.A2B3.knock_out(time_machine=tm)\n+ # assert not rxn.functional\n+ # assert core_model.genes.A2B3.functional\n+ # assert rxn.functional\n+ # core_model.genes.A2B1.knock_out()\n+ # assert not core_model.genes.A2B1.functional\n+ # assert core_model.reactions.rxn.functional\n+ # core_model.genes.A2B3.knock_out()\n+ # assert not core_model.reactions.rxn.functional\n+ # non_functional = [gene.id for gene in core_model.non_functional_genes]\n+ # assert all(gene in non_functional for gene in ['A2B3', 'A2B1'])\ndef test_gene_reaction_rule_setter_reaction_already_added_to_model(self, core_model):\nrxn = 
Reaction('rxn')\n@@ -825,22 +826,22 @@ class TestSolverBasedModel:\n# def test_solver_change(self, core_model):\n# solver_id = id(core_model.solver)\n# problem_id = id(core_model.solver.problem)\n- # solution = core_model.solve().x_dict\n+ # solution = core_model.optimize().x_dict\n# core_model.solver = 'glpk'\n# assert id(core_model.solver) != solver_id\n# assert id(core_model.solver.problem) != problem_id\n- # new_solution = core_model.solve()\n+ # new_solution = core_model.optimize()\n# for key in list(solution.keys()):\n# assert round(abs(new_solution.x_dict[key] - solution[key]), 7) == 0\n#\n# def test_solver_change_with_optlang_interface(self, core_model):\n# solver_id = id(core_model.solver)\n# problem_id = id(core_model.solver.problem)\n- # solution = core_model.solve().x_dict\n+ # solution = core_model.optimize().x_dict\n# core_model.solver = optlang.glpk_interface\n# assert id(core_model.solver) != solver_id\n# assert id(core_model.solver.problem) != problem_id\n- # new_solution = core_model.solve()\n+ # new_solution = core_model.optimize()\n# for key in list(solution.keys()):\n# assert round(abs(new_solution.x_dict[key] - solution[key]), 7) == 0\n@@ -920,21 +921,21 @@ class TestSolverBasedModel:\ncp = model.copy()\nratio_constr = cp.add_ratio_constraint(cp.reactions.PGI, cp.reactions.G6PDH2r, 0.5)\nassert ratio_constr.name == 'ratio_constraint_PGI_G6PDH2r'\n- solution = cp.solve()\n+ solution = cp.optimize()\nassert round(abs(solution.f - 0.870407873712), 7) == 0\nassert round(abs(2 * solution.x_dict['PGI'] - solution.x_dict['G6PDH2r']), 7) == 0\ncp = model.copy()\nratio_constr = cp.add_ratio_constraint(cp.reactions.PGI, cp.reactions.G6PDH2r, 0.5)\nassert ratio_constr.name == 'ratio_constraint_PGI_G6PDH2r'\n- solution = cp.solve()\n+ solution = cp.optimize()\nassert round(abs(solution.f - 0.870407873712), 7) == 0\nassert round(abs(2 * solution.x_dict['PGI'] - solution.x_dict['G6PDH2r']), 7) == 0\ncp = model.copy()\nratio_constr = cp.add_ratio_constraint('PGI', 'G6PDH2r', 0.5)\nassert ratio_constr.name == 'ratio_constraint_PGI_G6PDH2r'\n- solution = cp.solve()\n+ solution = cp.optimize()\nassert abs(solution.f - 0.870407) < 1e-6\nassert abs(2 * solution.x_dict['PGI'] - solution.x_dict['G6PDH2r']) < 1e-6\n@@ -942,7 +943,7 @@ class TestSolverBasedModel:\nratio_constr = cp.add_ratio_constraint([cp.reactions.PGI, cp.reactions.ACALD],\n[cp.reactions.G6PDH2r, cp.reactions.ACONTa], 0.5)\nassert ratio_constr.name == 'ratio_constraint_PGI+ACALD_G6PDH2r+ACONTa'\n- solution = cp.solve()\n+ solution = cp.optimize()\nassert abs(solution.f - 0.872959) < 1e-6\nassert abs((solution.x_dict['PGI'] + solution.x_dict['ACALD']) -\n0.5 * (solution.x_dict['G6PDH2r'] + solution.x_dict['ACONTa'])) < 1e-5\n" }, { "change_type": "MODIFY", "old_path": "tests/test_targets.py", "new_path": "tests/test_targets.py", "diff": "@@ -60,7 +60,7 @@ class TestTargets:\ndown_reg_target.apply(model, time_machine=tm)\nassert model.reactions.PGI.upper_bound == 3.4\nassert model.reactions.PGI.lower_bound == -1000\n- assert abs(model.solve().f - 0.8706) < 0.0001\n+ assert abs(model.optimize().f - 0.8706) < 0.0001\nassert model.reactions.PGI.upper_bound == 1000\nassert model.reactions.PGI.lower_bound == -1000\n@@ -77,7 +77,7 @@ class TestTargets:\ndown_reg_target.apply(model, time_machine=tm)\nassert model.reactions.RPI.lower_bound == -1.5\nassert model.reactions.RPI.upper_bound == 1000\n- assert abs(model.solve().f - 0.8691) < 0.0001\n+ assert abs(model.optimize().f - 0.8691) < 0.0001\nassert model.reactions.RPI.lower_bound 
== -1000\nassert model.reactions.RPI.upper_bound == 1000\n@@ -188,7 +188,7 @@ class TestTargets:\nknockout_target.apply(model, time_machine=tm)\nassert model.reactions.PGI.lower_bound == 0\nassert model.reactions.PGI.upper_bound == 0\n- assert abs(model.solve().f - 0.8631) < 0.0001\n+ assert abs(model.optimize().f - 0.8631) < 0.0001\nassert model.reactions.PGI.lower_bound == -1000\nassert model.reactions.PGI.upper_bound == 1000\n" }, { "change_type": "MODIFY", "old_path": "tests/test_util.py", "new_path": "tests/test_util.py", "diff": "@@ -31,7 +31,7 @@ SEED = 1234\[email protected](scope=\"function\")\ndef problem_cache_trial(core_model):\n- reference = core_model.solve().fluxes\n+ reference = core_model.optimize().fluxes\nn_constraints = len(core_model.solver.constraints)\nn_variables = len(core_model.solver.variables)\nreturn core_model, reference, n_constraints, n_variables\n" } ]
Python
Apache License 2.0
biosustain/cameo
refactor: replace model.solve() with model.optimize(). cobra.core.Model.optimize has been improved, removing the need for model.solve.
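The record above replaces cameo's model.solve() with cobrapy's model.optimize() and, inside library code, with a bare model.solver.optimize() followed by an explicit status check. A minimal sketch of that pattern, assuming an SBML file such as EcoliCore.xml is available locally (the path is an assumption; any model works):

from cameo import load_model
from cobra.core import get_solution

model = load_model('EcoliCore.xml')

# old (removed): solution = model.solve()  # raised SolveError on non-optimal status
# new: run the solver directly and inspect its status before reading results
model.solver.optimize()
if model.solver.status == 'optimal':
    solution = get_solution(model)  # bundles fluxes, reduced costs, shadow prices
    print('objective value:', model.objective.value)
else:
    raise RuntimeError('optimization failed: %s' % model.solver.status)

For the simple case, plain model.optimize() is enough and is what most call sites in the diff switch to.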
89,735
20.03.2017 18:09:56
-3,600
34ee2dbc3c201ad439d41b5b8e41da20e0915baf
refactor: remove model.S. Avoid capital-letter attributes and use the cobra implementation of the S matrix.
[ { "change_type": "MODIFY", "old_path": "cameo/core/solver_based_model.py", "new_path": "cameo/core/solver_based_model.py", "diff": "@@ -632,18 +632,18 @@ class SolverBasedModel(cobra.core.Model):\nreturn essential\n- @property\n- def S(self):\n- metabolite_index = {metabolite.id: index for index, metabolite in enumerate(self.metabolites)}\n- stoichiometric_matrix = np.zeros((len(self.metabolites), len(self.reactions)))\n-\n- for i, reaction in enumerate(self.reactions):\n- for metabolite, coefficient in six.iteritems(reaction.metabolites):\n- j = metabolite_index[metabolite.id]\n- stoichiometric_matrix[j, i] = coefficient\n-\n- return stoichiometric_matrix\n-\n+ # @property\n+ # def S(self):\n+ # metabolite_index = {metabolite.id: index for index, metabolite in enumerate(self.metabolites)}\n+ # stoichiometric_matrix = np.zeros((len(self.metabolites), len(self.reactions)))\n+ #\n+ # for i, reaction in enumerate(self.reactions):\n+ # for metabolite, coefficient in six.iteritems(reaction.metabolites):\n+ # j = metabolite_index[metabolite.id]\n+ # stoichiometric_matrix[j, i] = coefficient\n+ #\n+ # return stoichiometric_matrix\n+ #\n@property\ndef medium(self):\n" }, { "change_type": "MODIFY", "old_path": "tests/test_flux_analysis.py", "new_path": "tests/test_flux_analysis.py", "diff": "@@ -454,6 +454,6 @@ class TestNullSpace:\nassert round(abs(np.dot(ns.T, a[1])[0] - 0), 10) == 0\ndef test_with_core_model(self, core_model):\n- s = core_model.S\n+ s = create_stoichiometric_matrix(core_model)\nns = nullspace(s)\nassert round(abs(np.abs(s.dot(ns)).max() - 0), 10) == 0\n" } ]
Python
Apache License 2.0
biosustain/cameo
refactor: remove model.S. Avoid capital-letter attributes and use the cobra implementation of the S matrix.
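For code that relied on the removed model.S property, the same metabolite-by-reaction matrix is available from cobrapy, which is what the updated test in the diff above uses. A small sketch (the model file name is again an assumption):

from cobra.util import create_stoichiometric_matrix
from cameo import load_model

model = load_model('EcoliCore.xml')

# dense array with one row per metabolite and one column per reaction,
# the same orientation as the removed model.S property
s = create_stoichiometric_matrix(model)
assert s.shape == (len(model.metabolites), len(model.reactions))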
89,735
21.03.2017 09:00:23
-3,600
a96d72443d57c8869ae62ab29640cc6a3279354b
refactor: remove fix_objective_as_constraint. Replace with the cobrapy implementation (which is pretty much identical).
[ { "change_type": "MODIFY", "old_path": "cameo/core/solver_based_model.py", "new_path": "cameo/core/solver_based_model.py", "diff": "@@ -372,37 +372,37 @@ class SolverBasedModel(cobra.core.Model):\nself.add_reactions([reaction])\nreturn reaction\n- def fix_objective_as_constraint(self, time_machine=None, fraction=1):\n- \"\"\"Fix current objective as an additional constraint (e.g., ..math`c^T v >= max c^T v`).\n-\n- Parameters\n- ----------\n- time_machine : TimeMachine, optional\n- A TimeMachine instance can be provided, making it easy to undo this modification.\n-\n- Returns\n- -------\n- None\n- \"\"\"\n- fix_objective_name = 'Fixed_objective_{}'.format(self.objective.name)\n- if fix_objective_name in self.solver.constraints:\n- self.solver.remove(fix_objective_name)\n- self.solver.optimize()\n- if self.solver.status == 'optimal':\n- objective_value = self.objective.value * fraction\n- else:\n- raise Infeasible('failed to fix objective')\n- constraint = self.solver.interface.Constraint(self.objective.expression,\n- name=fix_objective_name)\n- if self.objective.direction == 'max':\n- constraint.lb = objective_value\n- else:\n- constraint.ub = objective_value\n- if time_machine is None:\n- self.solver._add_constraint(constraint, sloppy=True)\n- else:\n- time_machine(do=partial(self.solver._add_constraint, constraint, sloppy=True),\n- undo=partial(self.solver.remove, constraint))\n+ # def fix_objective_as_constraint(self, time_machine=None, fraction=1):\n+ # \"\"\"Fix current objective as an additional constraint (e.g., ..math`c^T v >= max c^T v`).\n+ #\n+ # Parameters\n+ # ----------\n+ # time_machine : TimeMachine, optional\n+ # A TimeMachine instance can be provided, making it easy to undo this modification.\n+ #\n+ # Returns\n+ # -------\n+ # None\n+ # \"\"\"\n+ # fix_objective_name = 'Fixed_objective_{}'.format(self.objective.name)\n+ # if fix_objective_name in self.solver.constraints:\n+ # self.solver.remove(fix_objective_name)\n+ # self.solver.optimize()\n+ # if self.solver.status == 'optimal':\n+ # objective_value = self.objective.value * fraction\n+ # else:\n+ # raise Infeasible('failed to fix objective')\n+ # constraint = self.solver.interface.Constraint(self.objective.expression,\n+ # name=fix_objective_name)\n+ # if self.objective.direction == 'max':\n+ # constraint.lb = objective_value\n+ # else:\n+ # constraint.ub = objective_value\n+ # if time_machine is None:\n+ # self.solver._add_constraint(constraint, sloppy=True)\n+ # else:\n+ # time_machine(do=partial(self.solver._add_constraint, constraint, sloppy=True),\n+ # undo=partial(self.solver.remove, constraint))\ndef add_ratio_constraint(self, expr1, expr2, ratio, prefix='ratio_constraint_'):\n\"\"\"Adds a ratio constraint (expr1/expr2 = ratio) to the model.\n" }, { "change_type": "MODIFY", "old_path": "cameo/flux_analysis/analysis.py", "new_path": "cameo/flux_analysis/analysis.py", "diff": "@@ -25,6 +25,7 @@ import numpy\nimport pandas\nimport six\nfrom cobra.core import Reaction, Metabolite, get_solution\n+from cobra.util import fix_objective_as_constraint\nfrom numpy import trapz\nfrom six.moves import zip\nfrom sympy import S\n@@ -100,13 +101,12 @@ def flux_variability_analysis(model, reactions=None, fraction_of_optimum=0., pfb\nview = config.default_view\nif reactions is None:\nreactions = model.reactions\n- with TimeMachine() as tm:\n+ with model:\nif fraction_of_optimum > 0.:\n- model.fix_objective_as_constraint(fraction=fraction_of_optimum, time_machine=tm)\n+ fix_objective_as_constraint(model, 
fraction=fraction_of_optimum)\nif pfba_factor is not None:\n# don't add the objective-constraint again so fraction_of_optimum=0\n- fix_pfba_as_constraint(model, multiplier=pfba_factor, time_machine=tm, fraction_of_optimum=0)\n- tm(do=int, undo=partial(setattr, model, \"objective\", model.objective))\n+ fix_pfba_as_constraint(model, multiplier=pfba_factor, fraction_of_optimum=0)\nreaction_chunks = (chunk for chunk in partition(reactions, len(view)))\nif remove_cycles:\nfunc_obj = _FvaFunctionObject(model, _cycle_free_fva)\n" }, { "change_type": "MODIFY", "old_path": "cameo/flux_analysis/simulation.py", "new_path": "cameo/flux_analysis/simulation.py", "diff": "@@ -34,13 +34,13 @@ import cameo\nimport logging\nfrom functools import partial\n-from itertools import chain\nimport sympy\nfrom sympy import Add\nfrom sympy import Mul\nfrom sympy.parsing.sympy_parser import parse_expr\nfrom cobra.core import get_solution\n+from cobra.flux_analysis.parsimonious import add_pfba\nfrom cobra.exceptions import OptimizationError\nfrom optlang.interface import OptimizationExpression\n@@ -91,41 +91,6 @@ def fba(model, objective=None, reactions=None, *args, **kwargs):\nreturn result\n-def add_pfba(model, objective=None, fraction_of_optimum=1.0, time_machine=None):\n- \"\"\"Add pFBA objective\n-\n- Add objective to minimize the summed flux of all reactions to the\n- current objective.\n-\n- Parameters\n- ----------\n- model : cameo.core.SolverBasedModel\n- The model to add the objective to\n- objective :\n- An objective to set in combination with the pFBA objective.\n- fraction_of_optimum : float\n- Fraction of optimum which must be maintained. The original objective\n- reaction is constrained to be greater than maximal_value *\n- fraction_of_optimum.\n- time_machine : cameo.util.TimeMachine\n- A time machine to undo the added pFBA objective\n- \"\"\"\n- if objective is not None:\n- model.change_objective(objective, time_machine=time_machine)\n- if model.solver.objective.name == '_pfba_objective':\n- raise ValueError('model already has pfba objective')\n- if fraction_of_optimum > 0:\n- model.fix_objective_as_constraint(fraction=fraction_of_optimum, time_machine=time_machine)\n- reaction_variables = ((rxn.forward_variable, rxn.reverse_variable)\n- for rxn in model.reactions)\n- variables = chain(*reaction_variables)\n- pfba_objective = model.solver.interface.Objective(add(\n- [mul((sympy.singleton.S.One, variable))\n- for variable in variables]), direction='min', sloppy=True,\n- name=\"_pfba_objective\")\n- model.change_objective(pfba_objective, time_machine=time_machine)\n-\n-\ndef pfba(model, objective=None, reactions=None, fraction_of_optimum=1, *args, **kwargs):\n\"\"\"Parsimonious Enzyme Usage Flux Balance Analysis [1].\n@@ -153,15 +118,16 @@ def pfba(model, objective=None, reactions=None, fraction_of_optimum=1, *args, **\ngenome-scale models. Molecular Systems Biology, 6, 390. 
doi:10.1038/msb.2010.47\n\"\"\"\n- with TimeMachine() as tm:\n- add_pfba(model, objective=objective, fraction_of_optimum=fraction_of_optimum, time_machine=tm)\n+ with model:\n+ add_pfba(model, objective=objective, fraction_of_optimum=fraction_of_optimum)\ntry:\n- solution = model.optimize()\n+ model.solver.optimize()\n+ solution = get_solution(model)\nif reactions is not None:\nresult = FluxDistributionResult({r: solution.get_primal_by_id(r) for r in reactions}, solution.f)\nelse:\nresult = FluxDistributionResult.from_solution(solution)\n- except SolveError as e:\n+ except (SolveError, OptimizationError) as e:\nlogger.error(\"pfba could not determine an optimal solution for objective %s\" % model.objective)\nraise e\nreturn result\n" }, { "change_type": "MODIFY", "old_path": "cameo/flux_analysis/util.py", "new_path": "cameo/flux_analysis/util.py", "diff": "@@ -99,7 +99,7 @@ def remove_infeasible_cycles(model, fluxes, fix=()):\nreturn result\n-def fix_pfba_as_constraint(model, multiplier=1, fraction_of_optimum=1, time_machine=None):\n+def fix_pfba_as_constraint(model, multiplier=1, fraction_of_optimum=1):\n\"\"\"Fix the pFBA optimum as a constraint\nUseful when setting other objectives, like the maximum flux through given reaction may be more realistic if all\n@@ -113,21 +113,15 @@ def fix_pfba_as_constraint(model, multiplier=1, fraction_of_optimum=1, time_mach\nThe multiplier of the minimal sum of all reaction fluxes to use as the constraint.\nfraction_of_optimum : float\nThe fraction of the objective value's optimum to use as constraint when getting the pFBA objective's minimum\n- time_machine : TimeMachine, optional\n- A TimeMachine instance can be provided, making it easy to undo this modification.\n\"\"\"\nfix_constraint_name = '_fixed_pfba_constraint'\nif fix_constraint_name in model.solver.constraints:\nmodel.solver.remove(fix_constraint_name)\n- with TimeMachine() as tm:\n- add_pfba(model, time_machine=tm, fraction_of_optimum=fraction_of_optimum)\n+ with model:\n+ add_pfba(model, fraction_of_optimum=fraction_of_optimum)\npfba_objective_value = model.optimize().objective_value * multiplier\nconstraint = model.solver.interface.Constraint(model.objective.expression,\nname=fix_constraint_name,\nub=pfba_objective_value)\n- if time_machine is None:\n- model.solver._add_constraint(constraint, sloppy=True)\n- else:\n- time_machine(do=partial(model.solver._add_constraint, constraint, sloppy=True),\n- undo=partial(model.solver.remove, constraint))\n+ model.add_cons_vars(constraint, sloppy=True)\n" }, { "change_type": "MODIFY", "old_path": "cameo/strain_design/deterministic/flux_variability_based.py", "new_path": "cameo/strain_design/deterministic/flux_variability_based.py", "diff": "@@ -34,8 +34,10 @@ except ImportError:\nfrom IProgress import ProgressBar\nfrom pandas import DataFrame, pandas\n-from cameo.visualization.plotting import plotter\n+from cobra.util import fix_objective_as_constraint\n+\n+from cameo.visualization.plotting import plotter\nfrom cameo import config\nfrom cameo.ui import notice\n@@ -141,8 +143,7 @@ class DifferentialFVA(StrainDesignMethod):\nself.design_space_nullspace = nullspace(create_stoichiometric_array(self.design_space_model))\nif reference_model is None:\nself.reference_model = self.design_space_model.copy()\n- self.reference_model.fix_objective_as_constraint()\n- self.reference_nullspace = self.design_space_nullspace\n+ fix_objective_as_constraint(self.reference_model)\nelse:\nself.reference_model = reference_model\nself.reference_nullspace = 
nullspace(create_stoichiometric_array(self.reference_model))\n" }, { "change_type": "MODIFY", "old_path": "cameo/strain_design/deterministic/linear_programming.py", "new_path": "cameo/strain_design/deterministic/linear_programming.py", "diff": "@@ -24,6 +24,8 @@ from IProgress.widgets import Bar, Percentage\nfrom pandas import DataFrame\nfrom sympy import Add\n+from cobra.util import fix_objective_as_constraint\n+\nfrom cameo import config\nfrom cameo import ui\nfrom cameo.core.solver_based_model_dual import convert_to_dual\n@@ -101,7 +103,7 @@ class OptKnock(StrainDesignMethod):\nself._model.solver.interface.__name__.split(\".\")[-1])\nif fraction_of_optimum is not None:\n- self._model.fix_objective_as_constraint(fraction=fraction_of_optimum)\n+ fix_objective_as_constraint(self._model, fraction=fraction_of_optimum)\nif remove_blocked:\nself._remove_blocked_reactions()\nif not exclude_reactions:\n" }, { "change_type": "MODIFY", "old_path": "cameo/strain_design/heuristic/evolutionary/evaluators.py", "new_path": "cameo/strain_design/heuristic/evolutionary/evaluators.py", "diff": "# limitations under the License.\nimport logging\n+from cobra.exceptions import OptimizationError\n+\nfrom cameo.core.manipulation import swap_cofactors\nfrom cameo.exceptions import SolveError\nfrom cameo.strain_design.heuristic.evolutionary.decoders import SetDecoder\n@@ -122,7 +124,7 @@ class KnockoutEvaluator(TargetEvaluator):\nreactions=self.objective_function.reactions,\n**self.simulation_kwargs)\nfitness = self.objective_function(self.model, solution, targets)\n- except SolveError as e:\n+ except (SolveError, OptimizationError) as e:\nlogger.debug(e)\nfitness = self.objective_function.worst_fitness()\nreturn fitness\n" }, { "change_type": "MODIFY", "old_path": "tests/test_flux_analysis.py", "new_path": "tests/test_flux_analysis.py", "diff": "@@ -25,7 +25,8 @@ import numpy as np\nimport pandas\nimport pytest\nfrom sympy import Add\n-from cobra.util import create_stoichiometric_matrix\n+from cobra.util import create_stoichiometric_matrix, fix_objective_as_constraint\n+from cobra.flux_analysis.parsimonious import add_pfba\nimport cameo\nfrom cameo.flux_analysis import remove_infeasible_cycles, structural\n@@ -33,7 +34,7 @@ from cameo.flux_analysis.analysis import (find_blocked_reactions,\nflux_variability_analysis,\nphenotypic_phase_plane,\nfix_pfba_as_constraint)\n-from cameo.flux_analysis.simulation import fba, lmoma, moma, pfba, room, add_pfba\n+from cameo.flux_analysis.simulation import fba, lmoma, moma, pfba, room\nfrom cameo.flux_analysis.structural import nullspace\nfrom cameo.parallel import MultiprocessingView, SequentialView\nfrom cameo.util import TimeMachine, current_solver_name, pick_one\n@@ -81,13 +82,13 @@ class TestFluxVariabilityAnalysis:\nassert sum(abs(pfba_fva.lower_bound)) - 518.422 < .001\nassert sum(abs(pfba_fva.upper_bound)) - 518.422 < .001\n- def test_add_remove_pfb(self, core_model):\n- with TimeMachine() as tm:\n- add_pfba(core_model, time_machine=tm)\n+ def test_add_remove_pfba(self, core_model):\n+ with core_model:\n+ add_pfba(core_model)\nassert '_pfba_objective' == core_model.objective.name\nassert '_pfba_objective' != core_model.solver.constraints\n- with TimeMachine() as tm:\n- fix_pfba_as_constraint(core_model, time_machine=tm)\n+ with core_model:\n+ fix_pfba_as_constraint(core_model)\nassert '_fixed_pfba_constraint' in core_model.solver.constraints\nassert '_fixed_pfba_constraint' not in core_model.solver.constraints\n@@ -190,14 +191,14 @@ class 
TestSimulationMethods:\noriginal_objective = core_model.objective\nassert abs(solution.objective_value - 0.873921) < 0.000001\nassert len(solution.fluxes) == len(core_model.reactions)\n- assert core_model.objective is original_objective\n+ assert core_model.objective.expression == original_objective.expression\ndef test_fba_with_reaction_filter(self, core_model):\noriginal_objective = core_model.objective\nsolution = fba(core_model, reactions=['EX_o2_LPAREN_e_RPAREN_', 'EX_glc_LPAREN_e_RPAREN_'])\nassert abs(solution.objective_value - 0.873921) < 0.000001\nassert len(solution.fluxes) == 2\n- assert core_model.objective is original_objective\n+ assert core_model.objective.expression == original_objective.expression\ndef test_pfba(self, core_model):\noriginal_objective = core_model.objective\n@@ -208,13 +209,13 @@ class TestSimulationMethods:\n# looks like GLPK finds a parsimonious solution without the flux minimization objective\nassert (pfba_flux_sum - fba_flux_sum) < 1e-6, \\\n\"FBA sum is suppose to be lower than PFBA (was %f)\" % (pfba_flux_sum - fba_flux_sum)\n- assert core_model.objective is original_objective\n+ assert core_model.objective.expression == original_objective.expression\ndef test_pfba_with_reaction_filter(self, core_model):\noriginal_objective = core_model.objective\npfba_solution = pfba(core_model, reactions=['EX_o2_LPAREN_e_RPAREN_', 'EX_glc_LPAREN_e_RPAREN_'])\nassert len(pfba_solution.fluxes) == 2\n- assert core_model.objective is original_objective\n+ assert core_model.objective.expression == original_objective.expression\ndef test_pfba_ijo1366(self, ijo1366):\noriginal_objective = ijo1366.objective\n@@ -224,7 +225,7 @@ class TestSimulationMethods:\npfba_flux_sum = sum((abs(val) for val in pfba_solution.fluxes.values))\nassert (pfba_flux_sum - fba_flux_sum) < 1e-6, \\\n\"FBA sum is suppose to be lower than PFBA (was %f)\" % (pfba_flux_sum - fba_flux_sum)\n- assert ijo1366.objective is original_objective\n+ assert ijo1366.objective.expression == original_objective.expression\ndef test_lmoma(self, core_model):\noriginal_objective = core_model.objective\n@@ -232,7 +233,7 @@ class TestSimulationMethods:\nsolution = lmoma(core_model, reference=pfba_solution)\ndistance = sum((abs(solution[v] - pfba_solution[v]) for v in pfba_solution.keys()))\nassert abs(0 - distance) < 1e-6, \"lmoma distance without knockouts must be 0 (was %f)\" % distance\n- assert core_model.objective is original_objective\n+ assert core_model.objective.expression == original_objective.expression\ndef test_lmoma_change_ref(self, core_model):\noriginal_objective = core_model.objective\n@@ -241,7 +242,7 @@ class TestSimulationMethods:\nsolution = lmoma(core_model, reference=fluxes)\ndistance = sum((abs(solution[v] - pfba_solution[v]) for v in pfba_solution.keys()))\nassert abs(0 - distance) > 1e-6, \"lmoma distance without knockouts must be 0 (was %f)\" % distance\n- assert core_model.objective is original_objective\n+ assert core_model.objective.expression == original_objective.expression\ndef test_lmoma_with_reaction_filter(self, core_model):\noriginal_objective = core_model.objective\n@@ -249,7 +250,7 @@ class TestSimulationMethods:\nsolution = lmoma(core_model, reference=pfba_solution,\nreactions=['EX_o2_LPAREN_e_RPAREN_', 'EX_glc_LPAREN_e_RPAREN_'])\nassert len(solution.fluxes) == 2\n- assert core_model.objective is original_objective\n+ assert core_model.objective.expression == original_objective.expression\ndef test_moma(self, core_model):\nif current_solver_name(core_model) == 'glpk':\n@@ -259,7 
+260,7 @@ class TestSimulationMethods:\nsolution = moma(core_model, reference=pfba_solution)\ndistance = sum((abs(solution[v] - pfba_solution[v]) for v in pfba_solution.keys()))\nassert abs(0 - distance) < 1e-6, \"moma distance without knockouts must be 0 (was %f)\" % distance\n- assert core_model.objective is original_objective\n+ assert core_model.objective.expression == original_objective.expression\ndef test_room(self, core_model):\noriginal_objective = core_model.objective\n@@ -267,7 +268,7 @@ class TestSimulationMethods:\nsolution = room(core_model, reference=pfba_solution)\nassert abs(0 - solution.objective_value) < 1e-6, \\\n\"room objective without knockouts must be 0 (was %f)\" % solution.objective_value\n- assert core_model.objective is original_objective\n+ assert core_model.objective.expression == original_objective.expression\ndef test_room_with_reaction_filter(self, core_model):\noriginal_objective = core_model.objective\n@@ -275,7 +276,7 @@ class TestSimulationMethods:\nsolution = room(core_model, reference=pfba_solution,\nreactions=['EX_o2_LPAREN_e_RPAREN_', 'EX_glc_LPAREN_e_RPAREN_'])\nassert len(solution.fluxes) == 2\n- assert core_model.objective is original_objective\n+ assert core_model.objective.expression == original_objective.expression\ndef test_room_shlomi_2005(self, toy_model):\noriginal_objective = toy_model.objective\n@@ -288,7 +289,7 @@ class TestSimulationMethods:\nfor k in reference.keys():\nassert abs(expected[k] - result.fluxes[k]) < 0.1, \"%s: %f | %f\"\n- assert toy_model.objective is original_objective\n+ assert toy_model.objective.expression == original_objective.expression\ndef test_moma_shlomi_2005(self, toy_model):\nif current_solver_name(toy_model) == 'glpk':\n@@ -333,8 +334,8 @@ class TestSimulationMethods:\nclass TestRemoveCycles:\ndef test_remove_cycles(self, core_model):\n- with TimeMachine() as tm:\n- core_model.fix_objective_as_constraint(time_machine=tm)\n+ with core_model:\n+ fix_objective_as_constraint(core_model)\noriginal_objective = copy.copy(core_model.objective)\ncore_model.objective = core_model.solver.interface.Objective(\nAdd(*core_model.solver.variables.values()), name='Max all fluxes')\n" }, { "change_type": "MODIFY", "old_path": "tests/test_solver_based_model.py", "new_path": "tests/test_solver_based_model.py", "diff": "@@ -27,6 +27,8 @@ import pandas\nimport pytest\nimport six\n+from cobra.util import fix_objective_as_constraint\n+\nimport cameo\nfrom cameo import Model, load_model\nfrom cameo.config import solvers\n@@ -950,13 +952,13 @@ class TestSolverBasedModel:\ndef test_fix_objective_as_constraint(self, core_model):\n# with TimeMachine\n- with TimeMachine() as tm:\n- core_model.fix_objective_as_constraint(time_machine=tm)\n+ with core_model:\n+ fix_objective_as_constraint(core_model)\nconstraint_name = core_model.solver.constraints[-1]\nassert core_model.solver.constraints[-1].expression - core_model.objective.expression == 0\nassert constraint_name not in core_model.solver.constraints\n# without TimeMachine\n- core_model.fix_objective_as_constraint()\n+ fix_objective_as_constraint(core_model)\nconstraint_name = core_model.solver.constraints[-1]\nassert core_model.solver.constraints[-1].expression - core_model.objective.expression == 0\nassert constraint_name in core_model.solver.constraints\n" }, { "change_type": "MODIFY", "old_path": "tests/test_strain_design_deterministic.py", "new_path": "tests/test_strain_design_deterministic.py", "diff": "@@ -49,7 +49,7 @@ class TestFSEOF:\nfseof = FSEOF(model)\nfseof_result = 
fseof.run(target=\"EX_succ_lp_e_rp_\")\nassert isinstance(fseof_result, FSEOFResult)\n- assert objective is model.objective\n+ assert objective.expression == model.objective.expression\ndef test_fseof_result(self, model):\nfseof = FSEOF(model)\n" } ]
Python
Apache License 2.0
biosustain/cameo
refactor: remove fix_objective_as_constraint. Replace with the cobrapy implementation (which is pretty much identical).
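The replacement helper lives in cobra.util and takes the model as its first argument; combined with the model's context manager, which the diff above uses in place of TimeMachine, the added constraint is reverted automatically on exit. A sketch with an arbitrarily chosen fraction:

from cobra.util import fix_objective_as_constraint
from cameo import load_model

model = load_model('EcoliCore.xml')

with model:
    # fix the current objective to within 90 % of its optimum
    fix_objective_as_constraint(model, fraction=0.9)
    # a secondary objective would typically be set and optimized here
    model.optimize()

# leaving the block removes the fixed-objective constraint again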
89,735
21.03.2017 16:50:04
-3,600
27361401619a33c2285fa4025dc95dd6ebe4666e
refactor: remove non_functional_genes. This method was unused and untested. Remove in favor of a new implementation elsewhere.
[ { "change_type": "MODIFY", "old_path": "cameo/core/solver_based_model.py", "new_path": "cameo/core/solver_based_model.py", "diff": "@@ -21,11 +21,9 @@ from __future__ import absolute_import, print_function\nimport csv\nimport logging\n-from copy import copy, deepcopy\nfrom functools import partial\nimport cobra\n-import numpy as np\nimport optlang\nimport six\nimport sympy\n@@ -38,7 +36,7 @@ from cobra.core import get_solution\nfrom cameo import config\nfrom cameo.core.gene import Gene\nfrom cameo.core.metabolite import Metabolite\n-from cameo.exceptions import SolveError, Infeasible\n+from cameo.exceptions import SolveError\nfrom cameo.util import TimeMachine, inheritdocstring\nfrom .reaction import Reaction\nfrom .solution import LazySolution\n@@ -122,32 +120,32 @@ class SolverBasedModel(cobra.core.Model):\nself._timestamp_last_optimization = None\nself.solution = LazySolution(self)\n- @property\n- def non_functional_genes(self):\n- \"\"\"All non-functional genes in this model\n- Returns\n- -------\n- frozenset\n- set with the genes that are marked as non-functional\n- \"\"\"\n- return frozenset(gene for gene in self.genes if not gene.functional)\n-\n- def __copy__(self):\n- return self.__deepcopy__()\n-\n- def __deepcopy__(self):\n- return self.copy()\n-\n- def copy(self):\n- \"\"\"Needed for compatibility with cobrapy.\"\"\"\n- model_copy = super(SolverBasedModel, self).copy()\n- for reac in model_copy.reactions:\n- reac._reset_var_cache()\n- try:\n- model_copy._solver = deepcopy(self.solver)\n- except Exception: # pragma: no cover # Cplex has an issue with deep copies\n- model_copy._solver = copy(self.solver) # pragma: no cover\n- return model_copy\n+ # @property\n+ # def non_functional_genes(self):\n+ # \"\"\"All non-functional genes in this model\n+ # Returns\n+ # -------\n+ # frozenset\n+ # set with the genes that are marked as non-functional\n+ # \"\"\"\n+ # return frozenset(gene for gene in self.genes if not gene.functional)\n+ #\n+ # def __copy__(self):\n+ # return self.__deepcopy__()\n+ #\n+ # def __deepcopy__(self):\n+ # return self.copy()\n+ #\n+ # def copy(self):\n+ # \"\"\"Needed for compatibility with cobrapy.\"\"\"\n+ # model_copy = super(SolverBasedModel, self).copy()\n+ # for reac in model_copy.reactions:\n+ # reac._reset_var_cache()\n+ # try:\n+ # model_copy._solver = deepcopy(self.solver)\n+ # except Exception: # pragma: no cover # Cplex has an issue with deep copies\n+ # model_copy._solver = copy(self.solver) # pragma: no cover\n+ # return model_copy\ndef _repr_html_(self): # pragma: no cover\ntemplate = \"\"\"<table>\n" } ]
Python
Apache License 2.0
biosustain/cameo
refactor: remove non_functional_genes. This method was unused and untested. Remove in favor of a new implementation elsewhere.
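The removed property was a thin wrapper around each gene's functional flag, so the same set can be rebuilt in one line wherever it is still needed. A sketch (the knocked-out gene is an arbitrary example):

from cameo import load_model

model = load_model('EcoliCore.xml')
model.genes[0].knock_out()

# one-line equivalent of the removed SolverBasedModel.non_functional_genes property
non_functional = frozenset(gene for gene in model.genes if not gene.functional)
print(sorted(gene.id for gene in non_functional))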
89,735
22.03.2017 14:37:37
-3,600
ef1310ae3977f695ad6e0d7fee3e35d833df488b
refactor: move add_exchange and _reaction_for. Move these functions out of SolverBasedModel and refactor them to use new cobrapy functionality.
[ { "change_type": "MODIFY", "old_path": "cameo/core/metabolite.py", "new_path": "cameo/core/metabolite.py", "diff": "@@ -17,6 +17,7 @@ from functools import partial\nimport cobra\nimport six\nfrom cameo.util import inheritdocstring\n+from cameo.core.utils import add_exchange\nlogger = logging.getLogger(__name__)\n@@ -116,7 +117,7 @@ class Metabolite(cobra.core.Metabolite):\nelse:\nreaction.change_bounds(ub=0, time_machine=time_machine)\nif force_steady_state:\n- self.model.add_exchange(self, prefix=\"KO_\", time_machine=time_machine)\n+ add_exchange(self._model, self, prefix=\"KO_\")\nelse:\nself._relax_mass_balance_constrain(time_machine)\n" }, { "change_type": "MODIFY", "old_path": "cameo/core/reaction.py", "new_path": "cameo/core/reaction.py", "diff": "from __future__ import absolute_import, print_function\n-import hashlib\nimport logging\n-from copy import copy, deepcopy\nfrom functools import partial\nimport cobra as _cobrapy\nimport six\n-from cobra.manipulation.delete import parse_gpr, eval_gpr\nimport cameo\nfrom cameo import flux_analysis\n" }, { "change_type": "MODIFY", "old_path": "cameo/flux_analysis/analysis.py", "new_path": "cameo/flux_analysis/analysis.py", "diff": "@@ -34,11 +34,12 @@ from optlang.interface import UNBOUNDED\nimport cameo\nfrom cameo import config\nfrom cameo.core.result import Result\n-from cameo.exceptions import Infeasible, Unbounded\n+from cameo.exceptions import Infeasible\nfrom cameo.flux_analysis.util import remove_infeasible_cycles, fix_pfba_as_constraint\nfrom cameo.parallel import SequentialView\nfrom cameo.ui import notice\nfrom cameo.util import TimeMachine, partition, _BIOMASS_RE_\n+from cameo.core.utils import get_reaction_for, add_exchange\nfrom cameo.visualization.plotting import plotter\nlogger = logging.getLogger(__name__)\n@@ -163,13 +164,13 @@ def phenotypic_phase_plane(model, variables=[], objective=None, source=None, poi\nif view is None:\nview = config.default_view\n- with TimeMachine() as tm:\n+ with TimeMachine() as tm, model:\nif objective is not None:\nif isinstance(objective, Metabolite):\ntry:\nobjective = model.reactions.get_by_id(\"DM_%s\" % objective.id)\nexcept KeyError:\n- objective = model.add_exchange(objective, time_machine=tm)\n+ objective = add_exchange(model, objective)\n# try:\n# objective = model.reaction_for(objective, time_machine=tm)\n# except KeyError:\n@@ -178,7 +179,7 @@ def phenotypic_phase_plane(model, variables=[], objective=None, source=None, poi\nmodel.change_objective(objective, time_machine=tm)\nif source:\n- source_reaction = model._reaction_for(source)\n+ source_reaction = get_reaction_for(model, source)\nelse:\nsource_reaction = _get_c_source_reaction(model)\n" }, { "change_type": "MODIFY", "old_path": "cameo/strain_design/deterministic/flux_variability_based.py", "new_path": "cameo/strain_design/deterministic/flux_variability_based.py", "diff": "@@ -47,6 +47,7 @@ from cameo.parallel import SequentialView\nfrom cameo.core.reaction import Reaction\nfrom cameo.core.metabolite import Metabolite\n+from cameo.core.utils import get_reaction_for\nfrom cameo.visualization.escher_ext import NotebookBuilder\nfrom cameo.visualization.palette import mapper, Palette\n@@ -824,7 +825,7 @@ class FSEOF(StrainDesignMethod):\n\"\"\"\nmodel = self.model\n- target = model._reaction_for(target)\n+ target = get_reaction_for(model, target)\nsimulation_kwargs = simulation_kwargs if simulation_kwargs is not None else {}\nsimulation_kwargs['objective'] = self.primary_objective\n" }, { "change_type": "MODIFY", "old_path": 
"cameo/strain_design/deterministic/linear_programming.py", "new_path": "cameo/strain_design/deterministic/linear_programming.py", "diff": "@@ -31,6 +31,7 @@ from cameo import ui\nfrom cameo.core.solver_based_model_dual import convert_to_dual\nfrom cameo.core.strain_design import StrainDesignMethodResult, StrainDesignMethod, StrainDesign\nfrom cameo.core.target import ReactionKnockoutTarget\n+from cameo.core.utils import get_reaction_for\nfrom cameo.exceptions import SolveError\nfrom cameo.flux_analysis.analysis import phenotypic_phase_plane, flux_variability_analysis\nfrom cameo.flux_analysis.simulation import fba\n@@ -134,7 +135,7 @@ class OptKnock(StrainDesignMethod):\nself.essential_reactions = self._model.essential_reactions() + self._model.exchanges\nif essential_reactions:\n- self.essential_reactions += [self._model._reaction_for(r) for r in essential_reactions]\n+ self.essential_reactions += [get_reaction_for(self._model, r) for r in essential_reactions]\nreactions = set(self._model.reactions) - set(self.essential_reactions)\nif use_nullspace_simplification:\n@@ -232,9 +233,11 @@ class OptKnock(StrainDesignMethod):\n-------\nOptKnockResult\n\"\"\"\n-\n- target = self._model._reaction_for(target, add=False)\n- biomass = self._model._reaction_for(biomass, add=False)\n+ # TODO: why not required arguments?\n+ if biomass is None or target is None:\n+ raise ValueError('missing biomass and/or target reaction')\n+ target = get_reaction_for(self._model, target, add=False)\n+ biomass = get_reaction_for(self._model, biomass, add=False)\nknockout_list = []\nfluxes_list = []\n" }, { "change_type": "MODIFY", "old_path": "cameo/strain_design/heuristic/evolutionary_based.py", "new_path": "cameo/strain_design/heuristic/evolutionary_based.py", "diff": "@@ -38,6 +38,7 @@ from cameo.strain_design.heuristic.evolutionary.processing import process_reacti\nprocess_gene_knockout_solution, process_reaction_swap_solution\nfrom cameo.util import TimeMachine\nfrom cameo.visualization.plotting import plotter\n+from cameo.core.utils import get_reaction_for\n__all__ = [\"OptGene\"]\n@@ -86,7 +87,7 @@ class OptGene(StrainDesignMethod):\ndef run(self, target=None, biomass=None, substrate=None, max_knockouts=5, variable_size=True,\nsimulation_method=fba, growth_coupled=False, max_evaluations=20000, population_size=200,\n- time_machine=None, max_results=50, use_nullspace_simplification=True, seed=None, **kwargs):\n+ max_results=50, use_nullspace_simplification=True, seed=None, **kwargs):\n\"\"\"\nParameters\n----------\n@@ -108,8 +109,6 @@ class OptGene(StrainDesignMethod):\nNumber of evaluations before stop\npopulation_size : int\nNumber of individuals in each generation\n- time_machine : TimeMachine\n- See TimeMachine\nmax_results : int\nMax number of different designs to return if found.\nkwargs : dict\n@@ -125,9 +124,10 @@ class OptGene(StrainDesignMethod):\n-------\nOptGeneResult\n\"\"\"\n- target = self._model._reaction_for(target, time_machine=time_machine)\n- biomass = self._model._reaction_for(biomass, time_machine=time_machine)\n- substrate = self._model._reaction_for(substrate, time_machine=time_machine)\n+\n+ target = get_reaction_for(self._model, target)\n+ biomass = get_reaction_for(self._model, biomass)\n+ substrate = get_reaction_for(self._model, substrate)\nif growth_coupled:\nobjective_function = biomass_product_coupled_min_yield(biomass, target, substrate)\n@@ -371,9 +371,9 @@ class HeuristicOptSwap(StrainDesignMethod):\nHeuristicOptSwapResult\n\"\"\"\n- target = self._model._reaction_for(target, 
time_machine=time_machine)\n- biomass = self._model._reaction_for(biomass, time_machine=time_machine)\n- substrate = self._model._reaction_for(substrate, time_machine=time_machine)\n+ target = get_reaction_for(self._model, target)\n+ biomass = get_reaction_for(self._model, biomass)\n+ substrate = get_reaction_for(self._model, substrate)\nif growth_coupled:\nobjective_function = biomass_product_coupled_min_yield(biomass, target, substrate)\n" }, { "change_type": "MODIFY", "old_path": "cameo/strain_design/pathway_prediction/pathway_predictor.py", "new_path": "cameo/strain_design/pathway_prediction/pathway_predictor.py", "diff": "@@ -34,6 +34,7 @@ from cameo.core.reaction import Reaction\nfrom cameo.core.result import Result, MetaInformation\nfrom cameo.core.strain_design import StrainDesignMethodResult, StrainDesign, StrainDesignMethod\nfrom cameo.core.target import ReactionKnockinTarget\n+from cameo.core.utils import add_exchange\nfrom cameo.data import metanetx\nfrom cameo.exceptions import SolveError\nfrom cameo.strain_design.pathway_prediction import util\n@@ -300,14 +301,14 @@ class PathwayPredictor(StrainDesignMethod):\nproduct = self._find_product(product)\npathways = list()\n- with TimeMachine() as tm:\n+ with TimeMachine() as tm, self.model:\ntm(do=partial(setattr, self.model.solver.configuration, 'timeout', timeout),\nundo=partial(setattr, self.model.solver.configuration, 'timeout',\nself.model.solver.configuration.timeout))\ntry:\nproduct_reaction = self.model.reactions.get_by_id('DM_' + product.id)\nexcept KeyError:\n- product_reaction = self.model.add_exchange(product, time_machine=tm)\n+ product_reaction = add_exchange(self.model, product)\nproduct_reaction.change_bounds(lb=min_production, time_machine=tm)\ncounter = 1\n" }, { "change_type": "MODIFY", "old_path": "tests/test_solver_based_model.py", "new_path": "tests/test_solver_based_model.py", "diff": "@@ -35,6 +35,7 @@ from cameo.config import solvers\nfrom cameo.core.gene import Gene\nfrom cameo.core.metabolite import Metabolite\nfrom cameo.core.solver_based_model import Reaction\n+from cameo.core.utils import get_reaction_for, add_exchange\nfrom cameo.exceptions import UndefinedSolution\nfrom cameo.flux_analysis.structural import create_stoichiometric_array\nfrom cameo.util import TimeMachine\n@@ -758,7 +759,7 @@ class TestSolverBasedModel:\ndef test_add_exchange(self, core_model):\nfor demand, prefix in {True: 'DemandReaction_', False: 'SupplyReaction_'}.items():\nfor metabolite in core_model.metabolites:\n- demand_reaction = core_model.add_exchange(metabolite, demand=demand, prefix=prefix)\n+ demand_reaction = add_exchange(core_model, metabolite, demand=demand, prefix=prefix)\nassert core_model.reactions.get_by_id(demand_reaction.id) == demand_reaction\nassert demand_reaction.reactants == [metabolite]\nassert core_model.solver.constraints[metabolite.id].expression.has(\n@@ -766,9 +767,9 @@ class TestSolverBasedModel:\ndef test_add_exchange_time_machine(self, core_model):\nfor demand, prefix in {True: 'DemandReaction_', False: 'SupplyReaction_'}.items():\n- with TimeMachine() as tm:\n+ with core_model:\nfor metabolite in core_model.metabolites:\n- demand_reaction = core_model.add_exchange(metabolite, demand=demand, prefix=prefix, time_machine=tm)\n+ demand_reaction = add_exchange(core_model, metabolite, demand=demand, prefix=prefix)\nassert core_model.reactions.get_by_id(demand_reaction.id) == demand_reaction\nassert demand_reaction.reactants == [metabolite]\nassert 
-core_model.solver.constraints[metabolite.id].expression.has(\n@@ -779,9 +780,9 @@ class TestSolverBasedModel:\ndef test_add_existing_exchange(self, core_model):\nfor metabolite in core_model.metabolites:\n- core_model.add_exchange(metabolite, prefix=\"test\")\n+ add_exchange(core_model, metabolite, prefix=\"test\")\nwith pytest.raises(ValueError):\n- core_model.add_exchange(metabolite, prefix=\"test\")\n+ add_exchange(core_model, metabolite, prefix=\"test\")\ndef test_objective(self, core_model):\nobj = core_model.objective\n@@ -821,7 +822,7 @@ class TestSolverBasedModel:\ndef test_invalid_objective_raises(self, core_model):\nwith pytest.raises(ValueError):\n- setattr(core_model, 'objective', 'This is not a valid objective!')\n+ core_model.objective = 'This is not a valid objective!'\nwith pytest.raises(TypeError):\nsetattr(core_model, 'objective', 3.)\n#\n@@ -912,7 +913,7 @@ class TestSolverBasedModel:\ndef test_add_exchange_for_non_existing_metabolite(self, core_model):\nmetabolite = Metabolite(id=\"a_metabolite\")\n- core_model.add_exchange(metabolite)\n+ add_exchange(core_model, metabolite)\nassert core_model.solver.constraints[metabolite.id].expression.has(\ncore_model.solver.variables[\"DM_\" + metabolite.id])\n@@ -963,21 +964,21 @@ class TestSolverBasedModel:\nassert core_model.solver.constraints[-1].expression - core_model.objective.expression == 0\nassert constraint_name in core_model.solver.constraints\n- def test_reactions_for(self, core_model):\n- with TimeMachine() as tm:\n+ def test_get_reaction_for(self, core_model):\n+ with core_model:\nfor r in core_model.reactions:\n- assert isinstance(core_model._reaction_for(r.id, time_machine=tm), Reaction)\n- assert isinstance(core_model._reaction_for(r, time_machine=tm), Reaction)\n+ assert isinstance(get_reaction_for(core_model, r.id), Reaction)\n+ assert isinstance(get_reaction_for(core_model, r), Reaction)\nfor m in core_model.metabolites:\n- assert isinstance(core_model._reaction_for(m.id, time_machine=tm), Reaction)\n- assert isinstance(core_model._reaction_for(m, time_machine=tm), Reaction)\n+ assert isinstance(get_reaction_for(core_model, m.id), Reaction)\n+ assert isinstance(get_reaction_for(core_model, m), Reaction)\n+ with pytest.raises(TypeError):\n+ get_reaction_for(core_model, None)\nwith pytest.raises(KeyError):\n- core_model._reaction_for(None)\n- with pytest.raises(KeyError):\n- core_model._reaction_for(\"blablabla\")\n+ get_reaction_for(core_model, \"blablabla\")\nwith pytest.raises(KeyError):\n- core_model._reaction_for(\"accoa_lp_c_lp_\", add=False)\n+ get_reaction_for(core_model, \"accoa_lp_c_lp_\", add=False)\ndef test_stoichiometric_matrix(self, core_model):\nstoichiometric_matrix = create_stoichiometric_array(core_model)\n" }, { "change_type": "MODIFY", "old_path": "tests/test_strain_design_deterministic.py", "new_path": "tests/test_strain_design_deterministic.py", "diff": "@@ -118,7 +118,7 @@ class TestOptKnock:\ndef test_invalid_input(self, cplex_optknock):\n_, optknock = cplex_optknock\n- with pytest.raises(KeyError):\n+ with pytest.raises(ValueError):\noptknock.run(target=\"EX_ac_lp_e_rp_\")\n- with pytest.raises(KeyError):\n+ with pytest.raises(ValueError):\noptknock.run(biomass=\"Biomass_Ecoli_core_N_lp_w_fsh_GAM_rp__Nmet2\")\n" } ]
Python
Apache License 2.0
biosustain/cameo
refactor: move add_exchange and _reaction_for Move these functions out of SolverBasedModel and refactor them to use new cobrapy functionality.
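A minimal usage sketch for the relocated helpers, based on the call sites in the diff above. The textbook test model and its identifiers (`ac_c`, `EX_glc__D_e`) are illustrative assumptions; exact behaviour depends on the cameo/cobrapy versions on this branch.

```python
import cobra.test

from cameo.core.utils import add_exchange, get_reaction_for

# Illustrative model; any cameo-compatible cobrapy model works the same way.
model = cobra.test.create_test_model("textbook")

# Previously model.add_exchange(metabolite, ...); now a module-level helper.
demand = add_exchange(model, model.metabolites.ac_c, demand=True, prefix="DM_")
print(demand.id)  # DM_ac_c

# Previously model._reaction_for(...); resolves a reaction id or object to a Reaction.
target = get_reaction_for(model, "EX_glc__D_e")
print(target.id)
```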
89,735
22.03.2017 14:54:27
-3,600
927e03ff624bf32ea0676e765b0498c30800671a
refactor: remove add_ratio_constraint. This method is not actually used anywhere in cameo and is somewhat convoluted. Remove it completely in favor of directly formulating constraints and adding them to the model, which is only marginally more effort but considerably more explicit.
[ { "change_type": "MODIFY", "old_path": "cameo/core/solver_based_model.py", "new_path": "cameo/core/solver_based_model.py", "diff": "@@ -26,11 +26,9 @@ from functools import partial\nimport cobra\nimport optlang\nimport six\n-import sympy\nfrom pandas import DataFrame, pandas\nfrom sympy import Add\nfrom sympy import Mul\n-from sympy.core.singleton import S\nfrom cobra.core import get_solution\nfrom cameo import config\n@@ -404,67 +402,67 @@ class SolverBasedModel(cobra.core.Model):\n# time_machine(do=partial(self.solver._add_constraint, constraint, sloppy=True),\n# undo=partial(self.solver.remove, constraint))\n- def add_ratio_constraint(self, expr1, expr2, ratio, prefix='ratio_constraint_'):\n- \"\"\"Adds a ratio constraint (expr1/expr2 = ratio) to the model.\n-\n- Parameters\n- ----------\n- expr1 : str, Reaction, list or sympy.Expression\n- A reaction, a reaction ID or a linear expression.\n- expr2 : str, Reaction, list or sympy.Expression\n- A reaction, a reaction ID or a linear expression.\n- ratio : float\n- The ratio in expr1/expr2 = ratio\n- prefix : str\n- The prefix that will be added to the constraint ID (defaults to 'ratio_constraint_').\n-\n- Returns\n- -------\n- optlang.Constraint\n- The constraint name will be composed of `prefix`\n- and the two reaction IDs (e.g. 'ratio_constraint_reaction1_reaction2').\n-\n- Examples\n- --------\n- >>> model.add_ratio_constraint('r1', 'r2', 0.5)\n- >>> print(model.solver.constraints['ratio_constraint_r1_r2'])\n- ratio_constraint: ratio_constraint_r1_r2: 0 <= -0.5 * r1 + 1.0 * PGI <= 0\n- \"\"\"\n- if isinstance(expr1, six.string_types):\n- prefix_1 = expr1\n- expr1 = self.reactions.get_by_id(expr1).flux_expression\n- elif isinstance(expr1, Reaction):\n- prefix_1 = expr1.id\n- expr1 = expr1.flux_expression\n- elif isinstance(expr1, list):\n- prefix_1 = \"+\".join(r.id for r in expr1)\n- expr1 = sum([r.flux_expression for r in expr1], S.Zero)\n- elif not isinstance(expr1, sympy.Expr):\n- raise ValueError(\"'expr1' is not a valid expression\")\n- else:\n- prefix_1 = str(expr1)\n-\n- if isinstance(expr2, six.string_types):\n- prefix_2 = expr2\n- expr2 = self.reactions.get_by_id(expr2).flux_expression\n- elif isinstance(expr2, Reaction):\n- prefix_2 = expr2.id\n- expr2 = expr2.flux_expression\n- elif isinstance(expr2, list):\n- prefix_2 = \"+\".join(r.id for r in expr2)\n- expr2 = sum([r.flux_expression for r in expr2], S.Zero)\n- elif not isinstance(expr2, sympy.Expr):\n- raise ValueError(\"'expr2' is not a valid expression\")\n- else:\n- prefix_2 = str(expr2)\n-\n- ratio_constraint = self.solver.interface.Constraint(expr1 - ratio * expr2,\n- lb=0,\n- ub=0,\n- name=prefix + prefix_1 + '_' + prefix_2)\n-\n- self.solver.add(ratio_constraint, sloppy=True)\n- return ratio_constraint\n+ # def add_ratio_constraint(self, expr1, expr2, ratio, prefix='ratio_constraint_'):\n+ # \"\"\"Adds a ratio constraint (expr1/expr2 = ratio) to the model.\n+ #\n+ # Parameters\n+ # ----------\n+ # expr1 : str, Reaction, list or sympy.Expression\n+ # A reaction, a reaction ID or a linear expression.\n+ # expr2 : str, Reaction, list or sympy.Expression\n+ # A reaction, a reaction ID or a linear expression.\n+ # ratio : float\n+ # The ratio in expr1/expr2 = ratio\n+ # prefix : str\n+ # The prefix that will be added to the constraint ID (defaults to 'ratio_constraint_').\n+ #\n+ # Returns\n+ # -------\n+ # optlang.Constraint\n+ # The constraint name will be composed of `prefix`\n+ # and the two reaction IDs (e.g. 
'ratio_constraint_reaction1_reaction2').\n+ #\n+ # Examples\n+ # --------\n+ # >>> model.add_ratio_constraint('r1', 'r2', 0.5)\n+ # >>> print(model.solver.constraints['ratio_constraint_r1_r2'])\n+ # ratio_constraint: ratio_constraint_r1_r2: 0 <= -0.5 * r1 + 1.0 * PGI <= 0\n+ # \"\"\"\n+ # if isinstance(expr1, six.string_types):\n+ # prefix_1 = expr1\n+ # expr1 = self.reactions.get_by_id(expr1).flux_expression\n+ # elif isinstance(expr1, Reaction):\n+ # prefix_1 = expr1.id\n+ # expr1 = expr1.flux_expression\n+ # elif isinstance(expr1, list):\n+ # prefix_1 = \"+\".join(r.id for r in expr1)\n+ # expr1 = sum([r.flux_expression for r in expr1], S.Zero)\n+ # elif not isinstance(expr1, sympy.Expr):\n+ # raise ValueError(\"'expr1' is not a valid expression\")\n+ # else:\n+ # prefix_1 = str(expr1)\n+ #\n+ # if isinstance(expr2, six.string_types):\n+ # prefix_2 = expr2\n+ # expr2 = self.reactions.get_by_id(expr2).flux_expression\n+ # elif isinstance(expr2, Reaction):\n+ # prefix_2 = expr2.id\n+ # expr2 = expr2.flux_expression\n+ # elif isinstance(expr2, list):\n+ # prefix_2 = \"+\".join(r.id for r in expr2)\n+ # expr2 = sum([r.flux_expression for r in expr2], S.Zero)\n+ # elif not isinstance(expr2, sympy.Expr):\n+ # raise ValueError(\"'expr2' is not a valid expression\")\n+ # else:\n+ # prefix_2 = str(expr2)\n+ #\n+ # ratio_constraint = self.solver.interface.Constraint(expr1 - ratio * expr2,\n+ # lb=0,\n+ # ub=0,\n+ # name=prefix + prefix_1 + '_' + prefix_2)\n+ #\n+ # self.solver.add(ratio_constraint, sloppy=True)\n+ # return ratio_constraint\n# def optimize(self, objective_sense=None, solution_type=Solution, **kwargs):\n# \"\"\"OptlangBasedModel implementation of optimize.\n" }, { "change_type": "MODIFY", "old_path": "tests/test_solver_based_model.py", "new_path": "tests/test_solver_based_model.py", "diff": "@@ -917,39 +917,39 @@ class TestSolverBasedModel:\nassert core_model.solver.constraints[metabolite.id].expression.has(\ncore_model.solver.variables[\"DM_\" + metabolite.id])\n- def test_add_ratio_constraint(self, solved_model):\n- solution, model = solved_model\n- assert round(abs(solution.f - 0.873921506968), 7) == 0\n- assert 2 * solution.x_dict['PGI'] != solution.x_dict['G6PDH2r']\n- cp = model.copy()\n- ratio_constr = cp.add_ratio_constraint(cp.reactions.PGI, cp.reactions.G6PDH2r, 0.5)\n- assert ratio_constr.name == 'ratio_constraint_PGI_G6PDH2r'\n- solution = cp.optimize()\n- assert round(abs(solution.f - 0.870407873712), 7) == 0\n- assert round(abs(2 * solution.x_dict['PGI'] - solution.x_dict['G6PDH2r']), 7) == 0\n- cp = model.copy()\n-\n- ratio_constr = cp.add_ratio_constraint(cp.reactions.PGI, cp.reactions.G6PDH2r, 0.5)\n- assert ratio_constr.name == 'ratio_constraint_PGI_G6PDH2r'\n- solution = cp.optimize()\n- assert round(abs(solution.f - 0.870407873712), 7) == 0\n- assert round(abs(2 * solution.x_dict['PGI'] - solution.x_dict['G6PDH2r']), 7) == 0\n-\n- cp = model.copy()\n- ratio_constr = cp.add_ratio_constraint('PGI', 'G6PDH2r', 0.5)\n- assert ratio_constr.name == 'ratio_constraint_PGI_G6PDH2r'\n- solution = cp.optimize()\n- assert abs(solution.f - 0.870407) < 1e-6\n- assert abs(2 * solution.x_dict['PGI'] - solution.x_dict['G6PDH2r']) < 1e-6\n-\n- cp = model.copy()\n- ratio_constr = cp.add_ratio_constraint([cp.reactions.PGI, cp.reactions.ACALD],\n- [cp.reactions.G6PDH2r, cp.reactions.ACONTa], 0.5)\n- assert ratio_constr.name == 'ratio_constraint_PGI+ACALD_G6PDH2r+ACONTa'\n- solution = cp.optimize()\n- assert abs(solution.f - 0.872959) < 1e-6\n- assert abs((solution.x_dict['PGI'] + 
solution.x_dict['ACALD']) -\n- 0.5 * (solution.x_dict['G6PDH2r'] + solution.x_dict['ACONTa'])) < 1e-5\n+ # def test_add_ratio_constraint(self, solved_model):\n+ # solution, model = solved_model\n+ # assert round(abs(solution.f - 0.873921506968), 7) == 0\n+ # assert 2 * solution.x_dict['PGI'] != solution.x_dict['G6PDH2r']\n+ # cp = model.copy()\n+ # ratio_constr = cp.add_ratio_constraint(cp.reactions.PGI, cp.reactions.G6PDH2r, 0.5)\n+ # assert ratio_constr.name == 'ratio_constraint_PGI_G6PDH2r'\n+ # solution = cp.optimize()\n+ # assert round(abs(solution.f - 0.870407873712), 7) == 0\n+ # assert round(abs(2 * solution.x_dict['PGI'] - solution.x_dict['G6PDH2r']), 7) == 0\n+ # cp = model.copy()\n+ #\n+ # ratio_constr = cp.add_ratio_constraint(cp.reactions.PGI, cp.reactions.G6PDH2r, 0.5)\n+ # assert ratio_constr.name == 'ratio_constraint_PGI_G6PDH2r'\n+ # solution = cp.optimize()\n+ # assert round(abs(solution.f - 0.870407873712), 7) == 0\n+ # assert round(abs(2 * solution.x_dict['PGI'] - solution.x_dict['G6PDH2r']), 7) == 0\n+ #\n+ # cp = model.copy()\n+ # ratio_constr = cp.add_ratio_constraint('PGI', 'G6PDH2r', 0.5)\n+ # assert ratio_constr.name == 'ratio_constraint_PGI_G6PDH2r'\n+ # solution = cp.optimize()\n+ # assert abs(solution.f - 0.870407) < 1e-6\n+ # assert abs(2 * solution.x_dict['PGI'] - solution.x_dict['G6PDH2r']) < 1e-6\n+ #\n+ # cp = model.copy()\n+ # ratio_constr = cp.add_ratio_constraint([cp.reactions.PGI, cp.reactions.ACALD],\n+ # [cp.reactions.G6PDH2r, cp.reactions.ACONTa], 0.5)\n+ # assert ratio_constr.name == 'ratio_constraint_PGI+ACALD_G6PDH2r+ACONTa'\n+ # solution = cp.optimize()\n+ # assert abs(solution.f - 0.872959) < 1e-6\n+ # assert abs((solution.x_dict['PGI'] + solution.x_dict['ACALD']) -\n+ # 0.5 * (solution.x_dict['G6PDH2r'] + solution.x_dict['ACONTa'])) < 1e-5\ndef test_fix_objective_as_constraint(self, core_model):\n# with TimeMachine\n@@ -1005,7 +1005,11 @@ class TestSolverBasedModel:\nassert len(medium[medium.reaction_id == rid]) == 1\ndef test_solver_change_preserves_non_metabolic_constraints(self, core_model):\n- core_model.add_ratio_constraint(core_model.reactions.PGK, core_model.reactions.PFK, 1 / 2)\n+ with core_model:\n+ constraint = core_model.problem.Constraint(core_model.reactions.PGK.flux_expression -\n+ 0.5 * core_model.reactions.PFK.flux_expression,\n+ lb=0, ub=0)\n+ core_model.add_cons_vars(constraint)\nall_constraint_ids = core_model.solver.constraints.keys()\nassert all_constraint_ids[-1], 'ratio_constraint_PGK_PFK'\nresurrected = pickle.loads(pickle.dumps(core_model))\n" } ]
Python
Apache License 2.0
biosustain/cameo
refactor: remove add_ratio_constraint This method is not actually used anywhere in cameo and is somewhat convoluted. Remove completely in favor of directly formulating constraints and adding them as only marginally more effort but considerably more explicit.
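As a hedged sketch of the replacement pattern (mirroring the updated `test_solver_change_preserves_non_metabolic_constraints` above), a PGI/G6PDH2r ratio of 0.5 can be expressed directly through the solver interface. The textbook model and its reaction IDs are illustrative choices.

```python
import cobra.test

model = cobra.test.create_test_model("textbook")

# Encode v_PGI - 0.5 * v_G6PDH2r = 0 directly instead of calling add_ratio_constraint.
ratio_constraint = model.problem.Constraint(
    model.reactions.PGI.flux_expression
    - 0.5 * model.reactions.G6PDH2r.flux_expression,
    lb=0,
    ub=0,
    name="ratio_constraint_PGI_G6PDH2r",
)
model.add_cons_vars(ratio_constraint)

solution = model.optimize()
# The two fluxes now satisfy the ratio up to solver tolerance.
print(solution.fluxes["PGI"], 0.5 * solution.fluxes["G6PDH2r"])
```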
89,735
22.03.2017 15:28:44
-3,600
6c180f1b49652c2a5b02a786f604731e6dfa44ca
refactor: move essential_{genes, reactions, meta}. Simply move these out of the SolverBasedModel class as a step towards removing this class altogether. These functions could eventually be moved into cobrapy itself.
[ { "change_type": "MODIFY", "old_path": "cameo/strain_design/deterministic/linear_programming.py", "new_path": "cameo/strain_design/deterministic/linear_programming.py", "diff": "@@ -33,7 +33,7 @@ from cameo.core.strain_design import StrainDesignMethodResult, StrainDesignMetho\nfrom cameo.core.target import ReactionKnockoutTarget\nfrom cameo.core.utils import get_reaction_for\nfrom cameo.exceptions import SolveError\n-from cameo.flux_analysis.analysis import phenotypic_phase_plane, flux_variability_analysis\n+from cameo.flux_analysis.analysis import phenotypic_phase_plane, flux_variability_analysis, find_essential_reactions\nfrom cameo.flux_analysis.simulation import fba\nfrom cameo.flux_analysis.structural import find_coupled_reactions_nullspace\nfrom cameo.util import TimeMachine, reduce_reaction_set, decompose_reaction_groups\n@@ -133,7 +133,7 @@ class OptKnock(StrainDesignMethod):\ndef _build_problem(self, essential_reactions, use_nullspace_simplification):\nlogger.debug(\"Starting to formulate OptKnock problem\")\n- self.essential_reactions = self._model.essential_reactions() + self._model.exchanges\n+ self.essential_reactions = find_essential_reactions(self._model) + self._model.exchanges\nif essential_reactions:\nself.essential_reactions += [get_reaction_for(self._model, r) for r in essential_reactions]\n" }, { "change_type": "MODIFY", "old_path": "cameo/strain_design/heuristic/evolutionary/multiprocess/optimization.py", "new_path": "cameo/strain_design/heuristic/evolutionary/multiprocess/optimization.py", "diff": "@@ -23,6 +23,7 @@ from cameo import config\nfrom cameo import parallel\nfrom cameo import util\nfrom cameo.flux_analysis.simulation import pfba\n+from cameo.flux_analysis.analysis import find_essential_genes, find_essential_reactions\nfrom cameo.strain_design.heuristic.evolutionary import ReactionKnockoutOptimization, GeneKnockoutOptimization\nfrom cameo.strain_design.heuristic.evolutionary.multiprocess.migrators import MultiprocessingMigrator\nfrom cameo.strain_design.heuristic.evolutionary.multiprocess.observers import \\\n@@ -213,7 +214,7 @@ class MultiprocessReactionKnockoutOptimization(MultiprocessKnockoutOptimization)\nself.reactions = reactions\nif essential_reactions is None:\n- self.essential_reactions = set([r.id for r in self.model.essential_reactions()])\n+ self.essential_reactions = set([r.id for r in find_essential_reactions(self.model)])\nelse:\nself.essential_reactions = essential_reactions\n@@ -256,7 +257,7 @@ class MultiprocessGeneKnockoutOptimization(MultiprocessKnockoutOptimization):\nself.genes = genes\nif essential_genes is None:\n- self.essential_genes = set([g.id for g in self.model.essential_genes()])\n+ self.essential_genes = set([g.id for g in find_essential_genes(self.model)])\nelse:\nself.essential_genes = essential_genes\n" }, { "change_type": "MODIFY", "old_path": "cameo/strain_design/heuristic/evolutionary/optimization.py", "new_path": "cameo/strain_design/heuristic/evolutionary/optimization.py", "diff": "@@ -29,9 +29,10 @@ from cameo import config\nfrom cameo.core.result import Result\nfrom cameo.core.solver_based_model import SolverBasedModel\nfrom cameo.flux_analysis.simulation import pfba, lmoma, moma, room, logger as simulation_logger\n-from cameo.flux_analysis.structural import find_blocked_reactions_nullspace, find_coupled_reactions_nullspace, \\\n- nullspace, \\\n- create_stoichiometric_array\n+from cameo.flux_analysis.structural import (find_blocked_reactions_nullspace, find_coupled_reactions_nullspace,\n+ nullspace,\n+ 
create_stoichiometric_array)\n+from cameo.flux_analysis.analysis import find_essential_genes, find_essential_reactions\nfrom cameo.strain_design.heuristic.evolutionary import archives\nfrom cameo.strain_design.heuristic.evolutionary import decoders\nfrom cameo.strain_design.heuristic.evolutionary import evaluators\n@@ -670,9 +671,9 @@ class ReactionKnockoutOptimization(KnockoutOptimization):\nself.reactions = reactions\nlogger.debug(\"Computing essential reactions...\")\nif essential_reactions is None:\n- self.essential_reactions = set(r.id for r in self.model.essential_reactions())\n+ self.essential_reactions = set(r.id for r in find_essential_reactions(self.model))\nelse:\n- self.essential_reactions = set([r.id for r in self.model.essential_reactions()] + essential_reactions)\n+ self.essential_reactions = set([r.id for r in find_essential_reactions(self.model)] + essential_reactions)\nif use_nullspace_simplification:\nns = nullspace(create_stoichiometric_array(self.model))\n@@ -763,9 +764,9 @@ class GeneKnockoutOptimization(KnockoutOptimization):\nelse:\nself.genes = genes\nif essential_genes is None:\n- self.essential_genes = {g.id for g in self.model.essential_genes()}\n+ self.essential_genes = {g.id for g in find_essential_genes(self.model)}\nelse:\n- self.essential_genes = set([g.id for g in self.model.essential_genes()] + essential_genes)\n+ self.essential_genes = set([g.id for g in find_essential_genes(self.model)] + essential_genes)\n# TODO: use genes from groups\nif use_nullspace_simplification:\n" }, { "change_type": "MODIFY", "old_path": "tests/test_flux_analysis.py", "new_path": "tests/test_flux_analysis.py", "diff": "@@ -36,6 +36,7 @@ from cameo.flux_analysis.analysis import (find_blocked_reactions,\nfix_pfba_as_constraint)\nfrom cameo.flux_analysis.simulation import fba, lmoma, moma, pfba, room\nfrom cameo.flux_analysis.structural import nullspace\n+from cameo.flux_analysis.analysis import find_essential_reactions\nfrom cameo.parallel import MultiprocessingView, SequentialView\nfrom cameo.util import TimeMachine, current_solver_name, pick_one\n@@ -393,7 +394,7 @@ class TestStructural:\ndef test_coupled_reactions(self, core_model):\n# If a reaction is essential, all coupled reactions are essential\n- essential_reactions = core_model.essential_reactions()\n+ essential_reactions = find_essential_reactions(core_model)\ncoupled_reactions = structural.find_coupled_reactions_nullspace(core_model)\nfor essential_reaction in essential_reactions:\nfor group in coupled_reactions:\n@@ -404,7 +405,7 @@ class TestStructural:\n# # FIXME: this test has everything to run, but sometimes removing the reactions doesn't seem to work.\n# @pytest.mark.skipif(TRAVIS, reason=\"Inconsistent behaviour (bug)\")\ndef test_reactions_in_group_become_blocked_if_one_is_removed(self, core_model):\n- essential_reactions = core_model.essential_reactions()\n+ essential_reactions = find_essential_reactions(core_model)\ncoupled_reactions = structural.find_coupled_reactions_nullspace(core_model)\nfor group in coupled_reactions:\nrepresentative = pick_one(group)\n" }, { "change_type": "MODIFY", "old_path": "tests/test_solver_based_model.py", "new_path": "tests/test_solver_based_model.py", "diff": "@@ -38,6 +38,7 @@ from cameo.core.solver_based_model import Reaction\nfrom cameo.core.utils import get_reaction_for, add_exchange\nfrom cameo.exceptions import UndefinedSolution\nfrom cameo.flux_analysis.structural import create_stoichiometric_array\n+from cameo.flux_analysis.analysis import find_essential_genes, 
find_essential_metabolites, find_essential_reactions\nfrom cameo.util import TimeMachine\nTRAVIS = bool(os.getenv('TRAVIS', False))\n@@ -876,32 +877,32 @@ class TestSolverBasedModel:\nassert not any(abs_diff > 1e-6)\ndef test_essential_genes(self, core_model):\n- observed_essential_genes = [g.id for g in core_model.essential_genes()]\n+ observed_essential_genes = [g.id for g in find_essential_genes(core_model)]\nassert sorted(observed_essential_genes) == sorted(ESSENTIAL_GENES)\nwith pytest.raises(cameo.exceptions.SolveError):\ncore_model.reactions.Biomass_Ecoli_core_N_LPAREN_w_FSLASH_GAM_RPAREN__Nmet2.lower_bound = 999999.\n- core_model.essential_genes()\n+ find_essential_genes(core_model)\ndef test_essential_reactions(self, core_model):\n- observed_essential_reactions = [r.id for r in core_model.essential_reactions()]\n+ observed_essential_reactions = [r.id for r in find_essential_reactions(core_model)]\nassert sorted(observed_essential_reactions) == sorted(ESSENTIAL_REACTIONS)\nwith pytest.raises(cameo.exceptions.SolveError):\ncore_model.reactions.Biomass_Ecoli_core_N_LPAREN_w_FSLASH_GAM_RPAREN__Nmet2.lower_bound = 999999.\n- core_model.essential_reactions()\n+ find_essential_reactions(core_model)\ndef test_essential_metabolites(self, core_model):\n- essential_metabolites_unbalanced = [m.id for m in core_model.essential_metabolites(force_steady_state=False)]\n- essential_metabolites_balanced = [m.id for m in core_model.essential_metabolites(force_steady_state=True)]\n+ essential_metabolites_unbalanced = [m.id for m in find_essential_metabolites(core_model, force_steady_state=False)]\n+ essential_metabolites_balanced = [m.id for m in find_essential_metabolites(core_model, force_steady_state=True)]\nassert sorted(essential_metabolites_unbalanced) == sorted(ESSENTIAL_METABOLITES)\nassert sorted(essential_metabolites_balanced) == sorted(ESSENTIAL_METABOLITES)\nwith pytest.raises(cameo.exceptions.SolveError):\ncore_model.reactions.Biomass_Ecoli_core_N_LPAREN_w_FSLASH_GAM_RPAREN__Nmet2.lower_bound = 999999.\n- core_model.essential_metabolites(force_steady_state=False)\n+ find_essential_metabolites(core_model, force_steady_state=False)\nwith pytest.raises(cameo.exceptions.SolveError):\ncore_model.reactions.Biomass_Ecoli_core_N_LPAREN_w_FSLASH_GAM_RPAREN__Nmet2.lower_bound = 999999.\n- core_model.essential_metabolites(force_steady_state=True)\n+ find_essential_metabolites(core_model, force_steady_state=True)\ndef test_effective_bounds(self, core_model):\ncore_model.reactions.Biomass_Ecoli_core_N_LPAREN_w_FSLASH_GAM_RPAREN__Nmet2.lower_bound = 0.873921\n" }, { "change_type": "MODIFY", "old_path": "tests/test_strain_design_heuristics.py", "new_path": "tests/test_strain_design_heuristics.py", "diff": "@@ -68,6 +68,7 @@ from cameo.strain_design.heuristic.evolutionary.variators import (_do_set_n_poin\nset_indel,\nset_mutation,\nset_n_point_crossover)\n+from cameo.flux_analysis.analysis import find_essential_genes, find_essential_reactions\nfrom cameo.util import RandomGenerator as Random\nfrom cameo.util import TimeMachine\n@@ -914,7 +915,7 @@ class TestOptimizationResult:\nclass TestReactionKnockoutOptimization:\ndef test_initializer(self, model):\n- essential_reactions = set([r.id for r in model.essential_reactions()])\n+ essential_reactions = set([r.id for r in find_essential_reactions(model)])\nobjective = biomass_product_coupled_yield(\n\"Biomass_Ecoli_core_N_lp_w_fsh_GAM_rp__Nmet2\", \"EX_ac_lp_e_rp_\", \"EX_glc_lp_e_rp_\")\nrko = ReactionKnockoutOptimization(model=model,\n@@ -1013,7 +1014,7 
@@ class TestReactionKnockoutOptimization:\nclass TestGeneKnockoutOptimization:\ndef test_initializer(self, model):\n- essential_genes = set([r.id for r in model.essential_genes()])\n+ essential_genes = set([r.id for r in find_essential_genes(model)])\nobjective = biomass_product_coupled_yield(\n\"Biomass_Ecoli_core_N_lp_w_fsh_GAM_rp__Nmet2\", \"EX_ac_lp_e_rp_\", \"EX_glc_lp_e_rp_\")\nrko = GeneKnockoutOptimization(model=model,\n" } ]
Python
Apache License 2.0
biosustain/cameo
refactor: move essential_{genes, reactions, meta} Simply move these out of the SolverBasedModel class in a step towards removing this class altogether. These functions could possibly be further moved to cobrapy altogether.
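A short sketch of the new call sites: the essentiality helpers now live in `cameo.flux_analysis.analysis` and take the model as their first argument. The textbook model is an illustrative stand-in for whatever model is being analysed.

```python
import cobra.test

from cameo.flux_analysis.analysis import (find_essential_genes,
                                          find_essential_reactions)

model = cobra.test.create_test_model("textbook")

# Previously model.essential_reactions() / model.essential_genes().
essential_reactions = find_essential_reactions(model)
essential_genes = find_essential_genes(model)

print(len(essential_reactions), "essential reactions")
print(len(essential_genes), "essential genes")
```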
89,735
23.03.2017 10:54:42
-3,600
f7821d7348326175010778c6df81e613aab01b7b
refactor: remove model._ids_to_reactions. Replaced with `DictList.get_by_any`, which does essentially the same thing.
[ { "change_type": "MODIFY", "old_path": "cameo/core/solver_based_model.py", "new_path": "cameo/core/solver_based_model.py", "diff": "@@ -499,11 +499,11 @@ class SolverBasedModel(cobra.core.Model):\n# else:\n# return solution\n- def __dir__(self):\n- # Hide 'optimize' from user.\n- fields = sorted(dir(type(self)) + list(self.__dict__.keys()))\n- fields.remove('optimize')\n- return fields\n+ # def __dir__(self):\n+ # # Hide 'optimize' from user.\n+ # fields = sorted(dir(type(self)) + list(self.__dict__.keys()))\n+ # fields.remove('optimize')\n+ # return fields\n# def essential_metabolites(self, threshold=1e-6, force_steady_state=False):\n# \"\"\"Return a list of essential metabolites.\n@@ -702,17 +702,17 @@ class SolverBasedModel(cobra.core.Model):\nreturn model\n- def _ids_to_reactions(self, reactions):\n- \"\"\"Translate reaction IDs into reactions (skips reactions).\"\"\"\n- clean_reactions = list()\n- for reaction in reactions:\n- if isinstance(reaction, six.string_types):\n- clean_reactions.append(self.reactions.get_by_id(reaction))\n- elif isinstance(reaction, Reaction):\n- clean_reactions.append(reaction)\n- else:\n- raise Exception('%s is not a reaction or reaction ID.' % reaction)\n- return clean_reactions\n+ # def _ids_to_reactions(self, reactions):\n+ # \"\"\"Translate reaction IDs into reactions (skips reactions).\"\"\"\n+ # clean_reactions = list()\n+ # for reaction in reactions:\n+ # if isinstance(reaction, six.string_types):\n+ # clean_reactions.append(self.reactions.get_by_id(reaction))\n+ # elif isinstance(reaction, Reaction):\n+ # clean_reactions.append(reaction)\n+ # else:\n+ # raise Exception('%s is not a reaction or reaction ID.' % reaction)\n+ # return clean_reactions\ndef change_objective(self, value, time_machine=None):\n\"\"\"\n" }, { "change_type": "MODIFY", "old_path": "cameo/flux_analysis/analysis.py", "new_path": "cameo/flux_analysis/analysis.py", "diff": "@@ -314,7 +314,7 @@ def phenotypic_phase_plane(model, variables=[], objective=None, source=None, poi\nelse:\nsource_reaction = _get_c_source_reaction(model)\n- variable_reactions = model._ids_to_reactions(variables)\n+ variable_reactions = model.reactions.get_by_any(variables)\nvariables_min_max = flux_variability_analysis(model, reactions=variable_reactions, view=SequentialView())\ngrid = [numpy.linspace(lower_bound, upper_bound, points, endpoint=True) for\nreaction_id, lower_bound, upper_bound in\n@@ -371,7 +371,7 @@ def _flux_variability_analysis(model, reactions=None):\nif reactions is None:\nreactions = model.reactions\nelse:\n- reactions = model._ids_to_reactions(reactions)\n+ reactions = model.reactions.get_by_any(reactions)\nfva_sol = OrderedDict()\nlb_flags = dict()\nwith TimeMachine() as tm:\n@@ -475,7 +475,7 @@ def _cycle_free_fva(model, reactions=None, sloppy=True, sloppy_bound=666):\nif reactions is None:\nreactions = model.reactions\nelse:\n- reactions = model._ids_to_reactions(reactions)\n+ reactions = model.reactions.get_by_any(reactions)\nfva_sol = OrderedDict()\nfor reaction in reactions:\nfva_sol[reaction.id] = dict()\n@@ -726,7 +726,7 @@ def _fbid_fva(model, knockouts, view):\nreachable_reactions = wt_fva.data_frame.query(\"lower_bound != 0 | upper_bound != 0\")\n- for reaction in model._ids_to_reactions(knockouts):\n+ for reaction in model.reactions.get_by_any(knockouts):\nreaction.knock_out(tm)\nmt_fva = flux_variability_analysis(model, reactions=reachable_reactions.index, view=view, remove_cycles=False)\n" }, { "change_type": "MODIFY", "old_path": "cameo/visualization/visualization.py", 
"new_path": "cameo/visualization/visualization.py", "diff": "@@ -103,7 +103,7 @@ def draw_knockout_result(model, map_name, simulation_method, knockouts, *args, *\ntm = TimeMachine()\ntry:\n- for reaction in model._ids_to_reactions(knockouts):\n+ for reaction in model.reactions.get_by_any(knockouts):\ntm(do=partial(setattr, reaction, 'lower_bound', 0),\nundo=partial(setattr, reaction, 'lower_bound', reaction.lower_bound))\ntm(do=partial(setattr, reaction, 'upper_bound', 0),\n" } ]
Python
Apache License 2.0
biosustain/cameo
refactor: remove model._ids_to_reactions Replaced with `DictList.get_by_any` which does pretty much the same thing.
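A sketch of the replacement call: `DictList.get_by_any` accepts IDs, objects or indices and returns a list, which covers what `_ids_to_reactions` did. The reaction IDs are from the textbook model and purely illustrative.

```python
import cobra.test

model = cobra.test.create_test_model("textbook")

# Mixed input of an id and a Reaction object, as _ids_to_reactions used to allow.
reactions = model.reactions.get_by_any(["PGI", model.reactions.PFK])
print([reaction.id for reaction in reactions])  # ['PGI', 'PFK']
```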
89,735
23.03.2017 12:03:47
-3,600
70be2c50e622ab25e59bdd7fb0cad325f71eb04b
refactor: remove model.change_objective. Replaced with the model.objective setter, which now has context support.
[ { "change_type": "MODIFY", "old_path": "cameo/core/solver_based_model.py", "new_path": "cameo/core/solver_based_model.py", "diff": "@@ -21,7 +21,6 @@ from __future__ import absolute_import, print_function\nimport csv\nimport logging\n-from functools import partial\nimport cobra\nimport optlang\n@@ -714,16 +713,16 @@ class SolverBasedModel(cobra.core.Model):\n# raise Exception('%s is not a reaction or reaction ID.' % reaction)\n# return clean_reactions\n- def change_objective(self, value, time_machine=None):\n- \"\"\"\n- Changes the objective of the model to the given value. Allows passing a time machine to\n- revert the change later\n- \"\"\"\n- if time_machine is None:\n- self.objective = value\n- else:\n- time_machine(do=partial(setattr, self, \"objective\", value),\n- undo=partial(setattr, self, \"objective\", self.objective))\n+ # def change_objective(self, value, time_machine=None):\n+ # \"\"\"\n+ # Changes the objective of the model to the given value. Allows passing a time machine to\n+ # revert the change later\n+ # \"\"\"\n+ # if time_machine is None:\n+ # self.objective = value\n+ # else:\n+ # time_machine(do=partial(setattr, self, \"objective\", value),\n+ # undo=partial(setattr, self, \"objective\", self.objective))\n# def _reaction_for(self, value, time_machine=None, add=True):\n# \"\"\"\n" }, { "change_type": "MODIFY", "old_path": "cameo/flux_analysis/analysis.py", "new_path": "cameo/flux_analysis/analysis.py", "diff": "@@ -295,7 +295,7 @@ def phenotypic_phase_plane(model, variables=[], objective=None, source=None, poi\nif view is None:\nview = config.default_view\n- with TimeMachine() as tm, model:\n+ with model:\nif objective is not None:\nif isinstance(objective, Metabolite):\ntry:\n@@ -307,7 +307,7 @@ def phenotypic_phase_plane(model, variables=[], objective=None, source=None, poi\n# except KeyError:\n# pass\n- model.change_objective(objective, time_machine=tm)\n+ model.objective = objective\nif source:\nsource_reaction = get_reaction_for(model, source)\n@@ -374,8 +374,8 @@ def _flux_variability_analysis(model, reactions=None):\nreactions = model.reactions.get_by_any(reactions)\nfva_sol = OrderedDict()\nlb_flags = dict()\n- with TimeMachine() as tm:\n- model.change_objective(S.Zero, time_machine=tm)\n+ with model:\n+ model.objective = S.Zero\nmodel.objective.direction = 'min'\nfor reaction in reactions:\n" }, { "change_type": "MODIFY", "old_path": "tests/data/make-data-sets.py", "new_path": "tests/data/make-data-sets.py", "diff": "@@ -18,6 +18,6 @@ ppp2d.data_frame.to_csv(os.path.join(TESTDIR, 'REFERENCE_PPP_o2_glc_EcoliCore.cs\nmodel = CORE_MODEL.copy()\nmodel.solver = 'glpk'\nobjective = model.add_demand(model.metabolites.ac_c)\n-model.change_objective(objective)\n+model.objective = objective\nppp = phenotypic_phase_plane(model, ['EX_o2_LPAREN_e_RPAREN_'])\nppp.data_frame.to_csv(os.path.join(TESTDIR, 'REFERENCE_PPP_o2_EcoliCore_ac.csv'), float_format='%.3f')\n" }, { "change_type": "MODIFY", "old_path": "tests/test_solver_based_model.py", "new_path": "tests/test_solver_based_model.py", "diff": "@@ -797,15 +797,15 @@ class TestSolverBasedModel:\ncore_model.solver.objective = core_model.solver.interface.Objective(expression)\nassert core_model.solver.objective.expression == expression\n- core_model.change_objective(\"ENO\")\n+ core_model.objective = \"ENO\"\neno_obj = core_model.solver.interface.Objective(\ncore_model.reactions.ENO.flux_expression, direction=\"max\")\npfk_obj = core_model.solver.interface.Objective(\ncore_model.reactions.PFK.flux_expression, 
direction=\"max\")\nassert core_model.solver.objective == eno_obj\n- with TimeMachine() as tm:\n- core_model.change_objective(\"PFK\", tm)\n+ with core_model:\n+ core_model.objective = \"PFK\"\nassert core_model.solver.objective == pfk_obj\nassert core_model.solver.objective == eno_obj\n" }, { "change_type": "MODIFY", "old_path": "tests/test_strain_design_heuristics.py", "new_path": "tests/test_strain_design_heuristics.py", "diff": "@@ -586,8 +586,8 @@ class TestSwapOptimization:\ncofactors = ((model.metabolites.nad_c, model.metabolites.nadh_c),\n(model.metabolites.nadp_c, model.metabolites.nadph_c))\n- with TimeMachine() as tm:\n- model.change_objective(model.reactions.EX_etoh_lp_e_rp_, time_machine=tm)\n+ with TimeMachine() as tm, model:\n+ model.objective = model.reactions.EX_etoh_lp_e_rp_\nswap_cofactors(model.reactions.ALCD2x, model, cofactors, inplace=True, time_machine=tm)\nreactions = ['GAPD', 'AKGDH', 'PDH', 'GLUDy', 'MDH']\noptimization = CofactorSwapOptimization(model=model, objective_function=py, candidate_reactions=reactions)\n" } ]
Python
Apache License 2.0
biosustain/cameo
refactor: remove model.change_objective Replaced with model.objective setter which now has context support.
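A minimal sketch of the setter-plus-context pattern that replaces `change_objective`, matching the updated tests above; the textbook model and the `PFK` objective are illustrative.

```python
import cobra.test

model = cobra.test.create_test_model("textbook")
print(model.objective.expression)   # biomass objective

with model:                         # the model context replaces the TimeMachine
    model.objective = "PFK"         # previously model.change_objective("PFK", tm)
    print(model.objective.expression)

print(model.objective.expression)   # biomass objective restored after the context
```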
89,735
23.03.2017 16:17:45
-3,600
f25951e055b9bca77dcda74921f35f6767c9f9cc
refactor: remove reaction.add_metabolites. The override is no longer needed now that the cobrapy base class keeps the solver constraints in sync.
[ { "change_type": "MODIFY", "old_path": "cameo/core/reaction.py", "new_path": "cameo/core/reaction.py", "diff": "@@ -18,7 +18,7 @@ from __future__ import absolute_import, print_function\nimport logging\nfrom functools import partial\n-import cobra as _cobrapy\n+import cobra\nimport six\nimport cameo\n@@ -31,7 +31,7 @@ logger.setLevel(logging.DEBUG)\[email protected]_metaclass(inheritdocstring)\n-class Reaction(_cobrapy.core.Reaction):\n+class Reaction(cobra.core.Reaction):\n\"\"\"This class extends the cobrapy Reaction class to work with SolverBasedModel.\nNotes\n@@ -368,28 +368,28 @@ class Reaction(_cobrapy.core.Reaction):\ndef is_exchange(self):\nreturn (len(self.reactants) == 0 or len(self.products) == 0) and len(self.metabolites) == 1\n- def add_metabolites(self, metabolites, combine=True, **kwargs):\n- if combine:\n- old_coefficients = self.metabolites\n- super(Reaction, self).add_metabolites(metabolites, combine=combine, **kwargs)\n- model = self.model\n- if model is not None:\n- for metabolite, coefficient in six.iteritems(metabolites):\n-\n- if isinstance(metabolite, six.string_types): # support metabolites added as strings.\n- metabolite = model.metabolites.get_by_id(metabolite)\n- if combine:\n- try:\n- old_coefficient = old_coefficients[metabolite]\n- except KeyError:\n- pass\n- else:\n- coefficient = coefficient + old_coefficient\n-\n- model.solver.constraints[metabolite.id].set_linear_coefficients({\n- self.forward_variable: coefficient,\n- self.reverse_variable: -coefficient\n- })\n+ # def add_metabolites(self, metabolites, combine=True, **kwargs):\n+ # if combine:\n+ # old_coefficients = self.metabolites\n+ # super(Reaction, self).add_metabolites(metabolites, combine=combine, **kwargs)\n+ # model = self.model\n+ # if model is not None:\n+ # for metabolite, coefficient in six.iteritems(metabolites):\n+ #\n+ # if isinstance(metabolite, six.string_types): # support metabolites added as strings.\n+ # metabolite = model.metabolites.get_by_id(metabolite)\n+ # if combine:\n+ # try:\n+ # old_coefficient = old_coefficients[metabolite]\n+ # except KeyError:\n+ # pass\n+ # else:\n+ # coefficient = coefficient + old_coefficient\n+ #\n+ # model.solver.constraints[metabolite.id].set_linear_coefficients({\n+ # self.forward_variable: coefficient,\n+ # self.reverse_variable: -coefficient\n+ # })\ndef knock_out(self, time_machine=None):\n\"\"\"Knockout reaction by setting its bounds to zero.\n" } ]
Python
Apache License 2.0
biosustain/cameo
refactor: remove reaction.add_metabolites Not needed anymore
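A hedged sketch of why the override could go: the plain cobrapy `Reaction.add_metabolites` is assumed to keep the solver constraints in sync on its own. The textbook model, the `PGI` reaction and the added proton are illustrative choices.

```python
import cobra.test

model = cobra.test.create_test_model("textbook")
reaction = model.reactions.PGI
proton = model.metabolites.h_c

# Plain cobrapy call; no cameo-specific override is involved anymore.
reaction.add_metabolites({proton: 1})                 # combine with existing coefficients
reaction.add_metabolites({proton: 2}, combine=False)  # overwrite the coefficient
print(reaction.metabolites[proton])                   # 2
```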
89,735
24.03.2017 17:50:15
-3,600
1fcf3c6cd2969034cb1e4e7016c43a3d59e20102
refactor: move metabolite.knock_out and add_exchange. Move these to cobra. metabolite.knock_out must remain a method on the metabolite to keep symmetry with target.knock_out. Depends on add_exchange.
[ { "change_type": "MODIFY", "old_path": "cameo/core/metabolite.py", "new_path": "cameo/core/metabolite.py", "diff": "# See the License for the specific language governing permissions and\n# limitations under the License.\nimport logging\n-from functools import partial\nimport cobra\nimport six\nfrom cameo.util import inheritdocstring\nfrom cameo.core.utils import add_exchange\n+from cobra.util.context import get_context\nlogger = logging.getLogger(__name__)\n@@ -59,67 +59,65 @@ class Metabolite(cobra.core.Metabolite):\n\"\"\"\nreturn self.elements.get('C', 0)\n- @property\n- def constraint(self):\n- if self.model is not None:\n- return self.model.solver.constraints[self.id]\n- else:\n- return None\n-\n- def _relax_mass_balance_constrain(self, time_machine):\n- if time_machine:\n- time_machine(do=partial(setattr, self.constraint, \"lb\", -1000 * len(self.reactions)),\n- undo=partial(setattr, self.constraint, \"lb\", self.constraint.lb))\n- time_machine(do=partial(setattr, self.constraint, \"ub\", 1000 * len(self.reactions)),\n- undo=partial(setattr, self.constraint, \"ub\", self.constraint.ub))\n- else:\n- self.constraint.lb = None\n- self.constraint.ub = None\n-\n- def knock_out(self, time_machine=None, force_steady_state=False):\n- \"\"\"'Knockout' a metabolite. This can be done in 2 ways:\n-\n- 1. Implementation follows the description in [1]\n- \"All fluxes around the metabolite M should be restricted to only produce the metabolite,\n- for which balancing constraint of mass conservation is relaxed to allow nonzero values\n- of the incoming fluxes whereas all outgoing fluxes are limited to zero.\"\n-\n- 2. Force steady state\n- All reactions consuming the metabolite are restricted to only produce the metabolite. A demand\n- reaction is added to sink the metabolite produced to keep the problem feasible under\n- the S.v = 0 constraint.\n-\n-\n- Knocking out a metabolite overrules the constraints set on the reactions producing the metabolite.\n-\n- Parameters\n- ----------\n- time_machine : TimeMachine\n- An action stack to reverse actions\n- force_steady_state: bool\n- If True, uses approach 2.\n-\n- References\n- ----------\n- .. [1] Kim, P.-J., Lee, D.-Y., Kim, T. Y., Lee, K. H., Jeong, H., Lee, S. Y., & Park, S. (2007).\n- Metabolite essentiality elucidates robustness of Escherichia coli metabolism. 
PNAS, 104(34), 13638-13642\n- \"\"\"\n- # restrict reactions to produce metabolite\n- for reaction in self.reactions:\n- if reaction.metabolites[self] > 0: # for positive stoichiometric coefficient set lb to 0\n- if reaction.upper_bound < 0:\n- reaction.change_bounds(lb=0, ub=0, time_machine=time_machine)\n- else:\n- reaction.change_bounds(lb=0, time_machine=time_machine)\n- elif reaction.metabolites[self] < 0: # for negative stoichiometric coefficient set ub to 0\n- if reaction.lower_bound > 0:\n- reaction.change_bounds(lb=0, ub=0, time_machine=time_machine)\n- else:\n- reaction.change_bounds(ub=0, time_machine=time_machine)\n- if force_steady_state:\n- add_exchange(self._model, self, prefix=\"KO_\")\n- else:\n- self._relax_mass_balance_constrain(time_machine)\n+ # @property\n+ # def constraint(self):\n+ # if self.model is not None:\n+ # return self.model.solver.constraints[self.id]\n+ # else:\n+ # return None\n+\n+ # def _relax_mass_balance_constrain(self, time_machine):\n+ # if time_machine:\n+ # time_machine(do=partial(setattr, self.constraint, \"lb\", -1000 * len(self.reactions)),\n+ # undo=partial(setattr, self.constraint, \"lb\", self.constraint.lb))\n+ # time_machine(do=partial(setattr, self.constraint, \"ub\", 1000 * len(self.reactions)),\n+ # undo=partial(setattr, self.constraint, \"ub\", self.constraint.ub))\n+ # else:\n+ # self.constraint.lb = None\n+ # self.constraint.ub = None\n+ #\n+ # def knock_out(self, force_steady_state=False):\n+ # \"\"\"'Knockout' a metabolite. This can be done in 2 ways:\n+ #\n+ # 1. Implementation follows the description in [1]\n+ # \"All fluxes around the metabolite M should be restricted to only produce the metabolite,\n+ # for which balancing constraint of mass conservation is relaxed to allow nonzero values\n+ # of the incoming fluxes whereas all outgoing fluxes are limited to zero.\"\n+ #\n+ # 2. Force steady state\n+ # All reactions consuming the metabolite are restricted to only produce the metabolite. A demand\n+ # reaction is added to sink the metabolite produced to keep the problem feasible under\n+ # the S.v = 0 constraint.\n+ #\n+ #\n+ # Knocking out a metabolite overrules the constraints set on the reactions producing the metabolite.\n+ #\n+ # Parameters\n+ # ----------\n+ # force_steady_state: bool\n+ # If True, uses approach 2.\n+ #\n+ # References\n+ # ----------\n+ # .. [1] Kim, P.-J., Lee, D.-Y., Kim, T. Y., Lee, K. H., Jeong, H., Lee, S. Y., & Park, S. (2007).\n+ # Metabolite essentiality elucidates robustness of Escherichia coli metabolism. 
PNAS, 104(34), 13638-13642\n+ # \"\"\"\n+ # # restrict reactions to produce metabolite\n+ # for reaction in self.reactions:\n+ # if reaction.metabolites[self] > 0: # for positive stoichiometric coefficient set lb to 0\n+ # reaction.bounds = (0, 0) if reaction.upper_bound < 0 else (0, reaction.upper_bound)\n+ # elif reaction.metabolites[self] < 0: # for negative stoichiometric coefficient set ub to 0\n+ # reaction.bounds = (0, 0) if reaction.lower_bound > 0 else (reaction.lower_bound, 0)\n+ # if force_steady_state:\n+ # add_exchange(self._model, self, prefix=\"KO_\")\n+ # else:\n+ # previous_bounds = self.constraint.lb, self.constraint.ub\n+ # self.constraint.lb, self.constraint.ub = None, None\n+ # context = get_context(self)\n+ # if context:\n+ # def reset():\n+ # self.constraint.lb, self.constraint.ub = previous_bounds\n+ # context(reset)\n# @property\n# def id(self):\n" }, { "change_type": "MODIFY", "old_path": "cameo/flux_analysis/analysis.py", "new_path": "cameo/flux_analysis/analysis.py", "diff": "@@ -95,8 +95,8 @@ def find_essential_metabolites(model, threshold=1e-6, force_steady_state=False):\nmetabolites.update(reaction.metabolites.keys())\nfor metabolite in metabolites:\n- with TimeMachine() as tm:\n- metabolite.knock_out(time_machine=tm, force_steady_state=force_steady_state)\n+ with model:\n+ metabolite.knock_out(force_steady_state=force_steady_state)\nmodel.solver.optimize()\nif model.solver.status != 'optimal' or model.objective.value < threshold:\nessential.append(metabolite)\n" }, { "change_type": "MODIFY", "old_path": "tests/test_solver_based_model.py", "new_path": "tests/test_solver_based_model.py", "diff": "@@ -21,7 +21,8 @@ import os\nimport pickle\nimport cobra.test\n-from cobra.util import SolverNotFound\n+from cobra.util import SolverNotFound, add_exchange\n+\nimport numpy\nimport pandas\nimport pytest\n@@ -35,7 +36,7 @@ from cameo.config import solvers\nfrom cameo.core.gene import Gene\nfrom cameo.core.metabolite import Metabolite\nfrom cameo.core.solver_based_model import Reaction\n-from cameo.core.utils import get_reaction_for, add_exchange\n+from cameo.core.utils import get_reaction_for\nfrom cameo.exceptions import UndefinedSolution\nfrom cameo.flux_analysis.structural import create_stoichiometric_array\nfrom cameo.flux_analysis.analysis import find_essential_genes, find_essential_metabolites, find_essential_reactions\n@@ -277,11 +278,11 @@ class TestReaction:\ndef test_change_bounds(self, core_model):\nreac = core_model.reactions.ACALD\n- reac.change_bounds(lb=2, ub=2)\n+ reac.bounds = (2, 2)\nassert reac.lower_bound == 2\nassert reac.upper_bound == 2\n- with TimeMachine() as tm:\n- reac.change_bounds(lb=5, time_machine=tm)\n+ with core_model:\n+ reac.lower_bound = 5\nassert reac.lower_bound == 5\nassert reac.upper_bound == 5\nassert reac.lower_bound == 2\n@@ -757,34 +758,6 @@ class TestSolverBasedModel:\nassert reaction2.model == core_model\nassert reaction2 == core_model.reactions.get_by_id(reaction2.id)\n- def test_add_exchange(self, core_model):\n- for demand, prefix in {True: 'DemandReaction_', False: 'SupplyReaction_'}.items():\n- for metabolite in core_model.metabolites:\n- demand_reaction = add_exchange(core_model, metabolite, demand=demand, prefix=prefix)\n- assert core_model.reactions.get_by_id(demand_reaction.id) == demand_reaction\n- assert demand_reaction.reactants == [metabolite]\n- assert core_model.solver.constraints[metabolite.id].expression.has(\n- core_model.solver.variables[prefix + metabolite.id])\n-\n- def 
test_add_exchange_time_machine(self, core_model):\n- for demand, prefix in {True: 'DemandReaction_', False: 'SupplyReaction_'}.items():\n- with core_model:\n- for metabolite in core_model.metabolites:\n- demand_reaction = add_exchange(core_model, metabolite, demand=demand, prefix=prefix)\n- assert core_model.reactions.get_by_id(demand_reaction.id) == demand_reaction\n- assert demand_reaction.reactants == [metabolite]\n- assert -core_model.solver.constraints[metabolite.id].expression.has(\n- core_model.solver.variables[prefix + metabolite.id])\n- for metabolite in core_model.metabolites:\n- assert prefix + metabolite.id not in core_model.reactions\n- assert prefix + metabolite.id not in core_model.solver.variables.keys()\n-\n- def test_add_existing_exchange(self, core_model):\n- for metabolite in core_model.metabolites:\n- add_exchange(core_model, metabolite, prefix=\"test\")\n- with pytest.raises(ValueError):\n- add_exchange(core_model, metabolite, prefix=\"test\")\n-\ndef test_objective(self, core_model):\nobj = core_model.objective\nassert {var.name: coef for var, coef in obj.expression.as_coefficients_dict().items()} == \\\n@@ -912,12 +885,6 @@ class TestSolverBasedModel:\nassert abs(reaction.effective_upper_bound - REFERENCE_FVA_SOLUTION_ECOLI_CORE['upper_bound'][\nreaction.id]) < 0.000001\n- def test_add_exchange_for_non_existing_metabolite(self, core_model):\n- metabolite = Metabolite(id=\"a_metabolite\")\n- add_exchange(core_model, metabolite)\n- assert core_model.solver.constraints[metabolite.id].expression.has(\n- core_model.solver.variables[\"DM_\" + metabolite.id])\n-\n# def test_add_ratio_constraint(self, solved_model):\n# solution, model = solved_model\n# assert round(abs(solution.f - 0.873921506968), 7) == 0\n@@ -1029,24 +996,6 @@ class TestMetabolite:\nassert \"test2\" in core_model.metabolites\nassert \"test\" not in core_model.metabolites\n- def test_knock_out(self, core_model):\n- rxn = Reaction('rxn', upper_bound=10, lower_bound=-10)\n- metabolite_a = Metabolite('A')\n- metabolite_b = Metabolite('B')\n- rxn.add_metabolites({metabolite_a: -1, metabolite_b: 1})\n- core_model.add_reaction(rxn)\n- with TimeMachine() as tm:\n- metabolite_a.knock_out(time_machine=tm)\n- assert rxn.upper_bound == 0\n- metabolite_b.knock_out(time_machine=tm)\n- assert rxn.lower_bound == 0\n- assert metabolite_a.constraint.lb == -1000 * len(metabolite_a.reactions)\n- assert metabolite_a.constraint.ub == 1000 * len(metabolite_a.reactions)\n- assert metabolite_a.constraint.lb == 0\n- assert metabolite_a.constraint.ub == 0\n- assert rxn.upper_bound == 10\n- assert rxn.lower_bound == -10\n-\ndef test_remove_from_model(self, core_model):\nmet = core_model.metabolites.get_by_id(\"g6p_c\")\nmet.remove_from_model()\n" } ]
Python
Apache License 2.0
biosustain/cameo
refactor: move metabolite.knock_out add_exchange Move these to cobra. metabolite.knock_out must be metabolite method to enable target.knock_out symmetry. Depends on add_exchange
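The pattern below mirrors the updated `find_essential_metabolites` loop from the diff above. It assumes the cobra branch targeted here actually provides `Metabolite.knock_out`, as the commit message intends; the textbook model and the ten-metabolite subset are illustrative.

```python
import cobra.test

model = cobra.test.create_test_model("textbook")

essential = []
for metabolite in list(model.metabolites)[:10]:   # small subset, just for the sketch
    with model:                                   # the knockout is reverted on exit
        metabolite.knock_out(force_steady_state=True)
        model.solver.optimize()
        if model.solver.status != 'optimal' or model.objective.value < 1e-6:
            essential.append(metabolite.id)
print(essential)
```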
89,735
24.03.2017 18:01:37
-3,600
33ca70a657e5fc54b1391193aa7d7ae656baea86
refactor: remove reaction.change_bounds. Obsolete now that reaction.bounds can be assigned directly.
[ { "change_type": "MODIFY", "old_path": "cameo/api/designer.py", "new_path": "cameo/api/designer.py", "diff": "@@ -332,9 +332,9 @@ class Designer(object):\nif isinstance(host, Model):\nhost = Host(name='UNKNOWN_HOST', models=[host])\nfor model in list(host.models):\n- with TimeMachine() as tm:\n+ with model:\nif not aerobic and \"EX_o2_e\" in model.reactions:\n- model.reactions.EX_o2_e.change_bounds(lb=0, time_machine=tm)\n+ model.reactions.EX_o2_e.lower_bound = 0\nidentifier = searching()\nlogging.debug('Processing model {} for host {}'.format(model.id, host.name))\nnotice('Predicting pathways for product %s in %s (using model %s).'\n" }, { "change_type": "MODIFY", "old_path": "cameo/core/reaction.py", "new_path": "cameo/core/reaction.py", "diff": "from __future__ import absolute_import, print_function\nimport logging\n-from functools import partial\nimport cobra\nimport six\n@@ -449,17 +448,17 @@ class Reaction(cobra.core.Reaction):\n# # if remove_orphans:\n# model.solver.remove([metabolite.model.solver for metabolite in self.metabolites.keys()])\n- def change_bounds(self, lb=None, ub=None, time_machine=None):\n- \"\"\"Changes one or both of the reaction bounds and allows the changes to be reversed with a TimeMachine\"\"\"\n- if time_machine is not None:\n- time_machine(do=int,\n- undo=partial(setattr, self, \"lower_bound\", self.lower_bound))\n- time_machine(do=int,\n- undo=partial(setattr, self, \"upper_bound\", self.upper_bound))\n- if lb is not None:\n- self.lower_bound = lb\n- if ub is not None:\n- self.upper_bound = ub\n+ # def change_bounds(self, lb=None, ub=None, time_machine=None):\n+ # \"\"\"Changes one or both of the reaction bounds and allows the changes to be reversed with a TimeMachine\"\"\"\n+ # if time_machine is not None:\n+ # time_machine(do=int,\n+ # undo=partial(setattr, self, \"lower_bound\", self.lower_bound))\n+ # time_machine(do=int,\n+ # undo=partial(setattr, self, \"upper_bound\", self.upper_bound))\n+ # if lb is not None:\n+ # self.lower_bound = lb\n+ # if ub is not None:\n+ # self.upper_bound = ub\n@property\ndef n_carbon(self):\n" }, { "change_type": "MODIFY", "old_path": "cameo/flux_analysis/analysis.py", "new_path": "cameo/flux_analysis/analysis.py", "diff": "@@ -192,9 +192,9 @@ def find_blocked_reactions(model):\nA list of reactions.\n\"\"\"\n- with TimeMachine() as tm:\n+ with TimeMachine() as tm, model:\nfor exchange in model.exchanges:\n- exchange.change_bounds(-9999, 9999, tm)\n+ exchange.bounds = (-9999, 9999)\nfva_solution = flux_variability_analysis(model)\nreturn frozenset(reaction for reaction in model.reactions\nif round(fva_solution.lower_bound(reaction.id), config.ndecimals) == 0 and\n@@ -657,13 +657,13 @@ class _PhenotypicPhasePlaneChunkEvaluator(object):\nreturn flux, carbon_yield, mass_yield\ndef _production_envelope_inner(self, point):\n- with TimeMachine() as tm:\n+ original_direction = self.model.objective.direction\n+ with self.model:\nfor (reaction, coordinate) in zip(self.variable_reactions, point):\n- reaction.change_bounds(coordinate, coordinate, time_machine=tm)\n+ reaction.bounds = (coordinate, coordinate)\ninterval = []\ninterval_carbon_yield = []\ninterval_mass_yield = []\n- tm(do=int, undo=partial(setattr, self.model.objective, 'direction', self.model.objective.direction))\nself.model.objective.direction = 'min'\nflux, carbon_yield, mass_yield = self._interval_estimates()\n@@ -676,6 +676,7 @@ class 
_PhenotypicPhasePlaneChunkEvaluator(object):\ninterval.append(flux)\ninterval_carbon_yield.append(carbon_yield)\ninterval_mass_yield.append(mass_yield)\n+ self.model.objective.direction = original_direction\nintervals = tuple(interval) + tuple(interval_carbon_yield) + tuple(interval_mass_yield)\nreturn point + intervals\n@@ -712,13 +713,13 @@ def flux_balance_impact_degree(model, knockouts, view=config.default_view, metho\ndef _fbid_fva(model, knockouts, view):\n- with model, TimeMachine() as tm:\n+ with model:\nfor reaction in model.reactions:\nif reaction.reversibility:\n- reaction.change_bounds(-1, 1, time_machine=tm)\n+ reaction.bounds = (-1, 1)\nelse:\n- reaction.change_bounds(0, 1, time_machine=tm)\n+ reaction.bounds = (0, 1)\nwt_fva = flux_variability_analysis(model, view=view, remove_cycles=False)\nwt_fva._data_frame['upper_bound'] = wt_fva._data_frame.upper_bound.apply(numpy.round)\n" }, { "change_type": "MODIFY", "old_path": "cameo/strain_design/heuristic/evolutionary/objective_functions.py", "new_path": "cameo/strain_design/heuristic/evolutionary/objective_functions.py", "diff": "@@ -20,7 +20,7 @@ import six\nfrom inspyred.ec.emo import Pareto\nfrom cameo import config, flux_variability_analysis\n-from cameo.core.reaction import Reaction\n+from cobra.core import Reaction\n__all__ = ['biomass_product_coupled_yield', 'product_yield', 'number_of_knockouts']\n" }, { "change_type": "MODIFY", "old_path": "cameo/strain_design/pathway_prediction/pathway_predictor.py", "new_path": "cameo/strain_design/pathway_prediction/pathway_predictor.py", "diff": "@@ -135,7 +135,7 @@ class PathwayResult(Pathway, Result, StrainDesign):\nif exchanges:\ntm(do=partial(model.add_reactions, self.exchanges),\nundo=partial(model.remove_reactions, self.exchanges, delete=False, remove_orphans=True))\n- self.product.change_bounds(lb=0, time_machine=tm)\n+ self.product.lower_bound = 0\ntry:\ntm(do=partial(model.add_reactions, [self.product]),\nundo=partial(model.remove_reactions, [self.product], delete=False, remove_orphans=True))\n@@ -310,7 +310,7 @@ class PathwayPredictor(StrainDesignMethod):\nexcept KeyError:\nproduct_reaction = add_exchange(self.model, product)\n- product_reaction.change_bounds(lb=min_production, time_machine=tm)\n+ product_reaction.lower_bound = min_production\ncounter = 1\nwhile counter <= max_predictions:\nlogger.debug('Predicting pathway No. %d' % counter)\n" }, { "change_type": "MODIFY", "old_path": "tests/test_solver_based_model.py", "new_path": "tests/test_solver_based_model.py", "diff": "@@ -935,11 +935,11 @@ class TestSolverBasedModel:\ndef test_get_reaction_for(self, core_model):\nwith core_model:\nfor r in core_model.reactions:\n- assert isinstance(get_reaction_for(core_model, r.id), Reaction)\n- assert isinstance(get_reaction_for(core_model, r), Reaction)\n+ assert isinstance(get_reaction_for(core_model, r.id), cobra.core.Reaction)\n+ assert isinstance(get_reaction_for(core_model, r), cobra.core.Reaction)\nfor m in core_model.metabolites:\n- assert isinstance(get_reaction_for(core_model, m.id), Reaction)\n- assert isinstance(get_reaction_for(core_model, m), Reaction)\n+ assert isinstance(get_reaction_for(core_model, m.id), cobra.core.Reaction)\n+ assert isinstance(get_reaction_for(core_model, m), cobra.core.Reaction)\nwith pytest.raises(TypeError):\nget_reaction_for(core_model, None)\n" } ]
Python
Apache License 2.0
biosustain/cameo
refactor: remove reaction.change_bounds obsolete with reaction.bounds
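The diff above drops cameo's Reaction.change_bounds(..., time_machine=tm) helper in favour of assigning lower_bound or bounds directly inside the model's context manager, which reverts the change on exit. A minimal sketch of the new idiom, assuming the bundled iJO1366 model can be loaded by ID (model and reaction IDs are illustrative only):

from cameo import load_model

model = load_model("iJO1366")  # any cobrapy/cameo model would do here

# Old (removed): model.reactions.EX_o2_e.change_bounds(lb=0, time_machine=tm)
# New: plain attribute assignment; the model context rolls it back on exit.
with model:
    model.reactions.EX_o2_e.lower_bound = 0  # switch off oxygen uptake
    print(model.reactions.EX_o2_e.bounds)    # temporarily (0, original upper bound)

# Outside the block the original bounds are restored automatically.
print(model.reactions.EX_o2_e.bounds)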
89,735
24.03.2017 18:03:00
-3,600
8118b56bd6034a82392dfa1cd67bf58c03d3ab45
refactor: remove reaction.{flux, reduced_costs} in favor of the identical properties from super
[ { "change_type": "MODIFY", "old_path": "cameo/core/reaction.py", "new_path": "cameo/core/reaction.py", "diff": "@@ -349,19 +349,19 @@ class Reaction(cobra.core.Reaction):\nremove_cycles=False)\nreturn fva_result['upper_bound'][self.id]\n- @property\n- def flux(self):\n- if self.model is not None:\n- return self.forward_variable.primal - self.reverse_variable.primal\n- else:\n- return None\n-\n- @property\n- def reduced_cost(self):\n- if self.model is not None and self.forward_variable.dual is not None:\n- return self.forward_variable.dual - self.reverse_variable.dual\n- else:\n- return None\n+ # @property\n+ # def flux(self):\n+ # if self.model is not None:\n+ # return self.forward_variable.primal - self.reverse_variable.primal\n+ # else:\n+ # return None\n+ #\n+ # @property\n+ # def reduced_cost(self):\n+ # if self.model is not None and self.forward_variable.dual is not None:\n+ # return self.forward_variable.dual - self.reverse_variable.dual\n+ # else:\n+ # return None\n@property\ndef is_exchange(self):\n" }, { "change_type": "MODIFY", "old_path": "tests/test_solver_based_model.py", "new_path": "tests/test_solver_based_model.py", "diff": "@@ -383,8 +383,10 @@ class TestReaction:\nassert isinstance(reaction.reduced_cost, float)\nfor reaction in model.reactions:\nmodel.remove_reactions([reaction])\n- assert reaction.flux is None\n- assert reaction.reduced_cost is None\n+ with pytest.raises(RuntimeError):\n+ assert reaction.flux\n+ with pytest.raises(RuntimeError):\n+ assert reaction.reduced_cost\ndef test_knockout(self, core_model):\noriginal_bounds = dict()\n" } ]
Python
Apache License 2.0
biosustain/cameo
refactor: remove reaction.{flux, reduced_costs} in favor of the identical properties from super
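Since the cameo-specific flux and reduced_cost properties are gone, the versions inherited from the cobrapy superclass are used, and, as the test change shows, a reaction detached from its model now raises instead of returning None. A hedged sketch of that behaviour, with an illustrative model and reaction:

from cameo import load_model

model = load_model("iJO1366")  # illustrative; any model with a feasible objective
model.optimize()               # populate primal and dual values on the solver

pgi = model.reactions.PGI
print(pgi.flux)                # forward minus reverse primal, from the superclass
print(pgi.reduced_cost)        # forward minus reverse dual, from the superclass

model.remove_reactions([pgi])
try:
    pgi.flux                   # orphaned reactions raise instead of returning None
except RuntimeError as error:
    print("reaction is no longer attached to a model:", error)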
89,735
24.03.2017 18:04:34
-3,600
248e72ebd5761e5773ec75b128996b62f43946de
refactor: remove reaction.effective_bounds; cobrapy's model.summary does something very similar, so use that instead.
[ { "change_type": "MODIFY", "old_path": "cameo/core/reaction.py", "new_path": "cameo/core/reaction.py", "diff": "@@ -335,19 +335,19 @@ class Reaction(cobra.core.Reaction):\n# model.objective += coef_difference * self.flux_expression\n# self._objective_coefficient = value\n- @property\n- def effective_lower_bound(self):\n- model = self.model\n- fva_result = flux_analysis.flux_variability_analysis(model, reactions=[self], view=SequentialView(),\n- remove_cycles=False)\n- return fva_result['lower_bound'][self.id]\n-\n- @property\n- def effective_upper_bound(self):\n- model = self.model\n- fva_result = flux_analysis.flux_variability_analysis(model, reactions=[self], view=SequentialView(),\n- remove_cycles=False)\n- return fva_result['upper_bound'][self.id]\n+ # @property\n+ # def effective_lower_bound(self):\n+ # model = self.model\n+ # fva_result = flux_analysis.flux_variability_analysis(model, reactions=[self], view=SequentialView(),\n+ # remove_cycles=False)\n+ # return fva_result['lower_bound'][self.id]\n+ #\n+ # @property\n+ # def effective_upper_bound(self):\n+ # model = self.model\n+ # fva_result = flux_analysis.flux_variability_analysis(model, reactions=[self], view=SequentialView(),\n+ # remove_cycles=False)\n+ # return fva_result['upper_bound'][self.id]\n# @property\n# def flux(self):\n" }, { "change_type": "MODIFY", "old_path": "tests/test_solver_based_model.py", "new_path": "tests/test_solver_based_model.py", "diff": "@@ -879,13 +879,13 @@ class TestSolverBasedModel:\ncore_model.reactions.Biomass_Ecoli_core_N_LPAREN_w_FSLASH_GAM_RPAREN__Nmet2.lower_bound = 999999.\nfind_essential_metabolites(core_model, force_steady_state=True)\n- def test_effective_bounds(self, core_model):\n- core_model.reactions.Biomass_Ecoli_core_N_LPAREN_w_FSLASH_GAM_RPAREN__Nmet2.lower_bound = 0.873921\n- for reaction in core_model.reactions:\n- assert abs(reaction.effective_lower_bound - REFERENCE_FVA_SOLUTION_ECOLI_CORE['lower_bound'][\n- reaction.id]) < 0.000001\n- assert abs(reaction.effective_upper_bound - REFERENCE_FVA_SOLUTION_ECOLI_CORE['upper_bound'][\n- reaction.id]) < 0.000001\n+ # def test_effective_bounds(self, core_model):\n+ # core_model.reactions.Biomass_Ecoli_core_N_LPAREN_w_FSLASH_GAM_RPAREN__Nmet2.lower_bound = 0.873921\n+ # for reaction in core_model.reactions:\n+ # assert abs(reaction.effective_lower_bound - REFERENCE_FVA_SOLUTION_ECOLI_CORE['lower_bound'][\n+ # reaction.id]) < 0.000001\n+ # assert abs(reaction.effective_upper_bound - REFERENCE_FVA_SOLUTION_ECOLI_CORE['upper_bound'][\n+ # reaction.id]) < 0.000001\n# def test_add_ratio_constraint(self, solved_model):\n# solution, model = solved_model\n" } ]
Python
Apache License 2.0
biosustain/cameo
refactor: remove reaction.effective_bounds Cobrapy's model.summary does something very similar, use that instead.
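The removed effective_lower_bound and effective_upper_bound properties were thin wrappers around a single-reaction FVA call, so the same numbers remain one explicit call away. A short sketch, with an illustrative model and reaction:

from cameo import load_model, flux_variability_analysis

model = load_model("iJO1366")   # illustrative model
reaction = model.reactions.PGI  # illustrative reaction

# Equivalent of the removed properties: run FVA for just this reaction.
fva_result = flux_variability_analysis(model, reactions=[reaction], remove_cycles=False)
print(fva_result['lower_bound'][reaction.id])
print(fva_result['upper_bound'][reaction.id])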
89,735
24.03.2017 18:07:35
-3,600
18254adfa65183a81fc9c548fd7d3ebf66cd9d46
refactor: remove reaction.pop; it was unused
[ { "change_type": "MODIFY", "old_path": "cameo/core/reaction.py", "new_path": "cameo/core/reaction.py", "diff": "@@ -21,8 +21,6 @@ import cobra\nimport six\nimport cameo\n-from cameo import flux_analysis\n-from cameo.parallel import SequentialView\nfrom cameo.util import inheritdocstring\nlogger = logging.getLogger(__name__)\n@@ -416,20 +414,20 @@ class Reaction(cobra.core.Reaction):\n# else:\n# super(Reaction, self).knock_out()\n- def pop(self, metabolite_id):\n- \"\"\"Removes a given metabolite from the reaction stoichiometry, and returns the coefficient.\n- \"\"\"\n- if self._model is None:\n- return super(Reaction, self).pop(metabolite_id)\n- else:\n- if isinstance(metabolite_id, six.string_types):\n- met = self.model.metabolites.get_by_id(metabolite_id)\n- else:\n- met = metabolite_id\n- coef = self.metabolites[met]\n- self.add_metabolites({met: -coef}, combine=True)\n- return coef\n-\n+ # def pop(self, metabolite_id):\n+ # \"\"\"Removes a given metabolite from the reaction stoichiometry, and returns the coefficient.\n+ # \"\"\"\n+ # if self._model is None:\n+ # return super(Reaction, self).pop(metabolite_id)\n+ # else:\n+ # if isinstance(metabolite_id, six.string_types):\n+ # met = self.model.metabolites.get_by_id(metabolite_id)\n+ # else:\n+ # met = metabolite_id\n+ # coef = self.metabolites[met]\n+ # self.add_metabolites({met: -coef}, combine=True)\n+ # return coef\n+ #\n# def remove_from_model(self, model=None, remove_orphans=False):\n# reaction_model = self.model\n# forward = self.forward_variable\n" }, { "change_type": "MODIFY", "old_path": "tests/test_solver_based_model.py", "new_path": "tests/test_solver_based_model.py", "diff": "@@ -563,24 +563,24 @@ class TestReaction:\nassert core_model.solver.constraints[already_included_metabolite.id].expression.has(\n-10 * reaction.reverse_variable)\n- def test_pop(self, core_model):\n- pgi = core_model.reactions.PGI\n- g6p = core_model.metabolites.get_by_id(\"g6p_c\")\n- f6p = core_model.metabolites.get_by_id(\"f6p_c\")\n- g6p_expr = core_model.solver.constraints[\"g6p_c\"].expression\n- g6p_coef = pgi.pop(\"g6p_c\")\n- assert g6p not in pgi.metabolites\n- actual = core_model.solver.constraints[\"g6p_c\"].expression.as_coefficients_dict()\n- expected = (g6p_expr - g6p_coef * pgi.flux_expression).as_coefficients_dict()\n- assert actual == expected\n- assert pgi.metabolites[f6p] == 1\n-\n- f6p_expr = core_model.solver.constraints[\"f6p_c\"].expression\n- f6p_coef = pgi.pop(f6p)\n- assert f6p not in pgi.metabolites\n- assert core_model.solver.constraints[\"f6p_c\"].expression.as_coefficients_dict() == (\n- f6p_expr - f6p_coef * pgi.flux_expression\n- ).as_coefficients_dict()\n+ # def test_pop(self, core_model):\n+ # pgi = core_model.reactions.PGI\n+ # g6p = core_model.metabolites.get_by_id(\"g6p_c\")\n+ # f6p = core_model.metabolites.get_by_id(\"f6p_c\")\n+ # g6p_expr = core_model.solver.constraints[\"g6p_c\"].expression\n+ # g6p_coef = pgi.pop(\"g6p_c\")\n+ # assert g6p not in pgi.metabolites\n+ # actual = core_model.solver.constraints[\"g6p_c\"].expression.as_coefficients_dict()\n+ # expected = (g6p_expr - g6p_coef * pgi.flux_expression).as_coefficients_dict()\n+ # assert actual == expected\n+ # assert pgi.metabolites[f6p] == 1\n+ #\n+ # f6p_expr = core_model.solver.constraints[\"f6p_c\"].expression\n+ # f6p_coef = pgi.pop(f6p)\n+ # assert f6p not in pgi.metabolites\n+ # assert core_model.solver.constraints[\"f6p_c\"].expression.as_coefficients_dict() == (\n+ # f6p_expr - f6p_coef * pgi.flux_expression\n+ # ).as_coefficients_dict()\ndef 
test_remove_from_model(self, core_model):\npgi = core_model.reactions.PGI\n" } ]
Python
Apache License 2.0
biosustain/cameo
refactor: remove reaction.pop unused
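Reaction.pop only read a stoichiometric coefficient and then cancelled it out, so the same effect is available through the standard cobrapy calls used in its removed body. A minimal sketch (model, reaction and metabolite IDs follow the E. coli test fixtures and are illustrative):

from cameo import load_model

model = load_model("iJO1366")
pgi = model.reactions.PGI
g6p = model.metabolites.get_by_id("g6p_c")

# Replacement for the removed pop("g6p_c"): read the coefficient, then cancel it.
coefficient = pgi.metabolites[g6p]
pgi.add_metabolites({g6p: -coefficient}, combine=True)
assert g6p not in pgi.metabolites
print("removed g6p_c with coefficient", coefficient)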
89,735
24.03.2017 18:11:40
-3,600
d630a3cbf6f58cb4ddb1132c4e1247570db6f9b8
refactor: remove redundant reaction.is_exchange; the superclass already provides the equivalent boundary property
[ { "change_type": "MODIFY", "old_path": "cameo/core/reaction.py", "new_path": "cameo/core/reaction.py", "diff": "@@ -361,9 +361,9 @@ class Reaction(cobra.core.Reaction):\n# else:\n# return None\n- @property\n- def is_exchange(self):\n- return (len(self.reactants) == 0 or len(self.products) == 0) and len(self.metabolites) == 1\n+ # @property\n+ # def boundary(self):\n+ # return (len(self.reactants) == 0 or len(self.products) == 0) and len(self.metabolites) == 1\n# def add_metabolites(self, metabolites, combine=True, **kwargs):\n# if combine:\n" }, { "change_type": "MODIFY", "old_path": "cameo/core/solver_based_model.py", "new_path": "cameo/core/solver_based_model.py", "diff": "@@ -231,7 +231,7 @@ class SolverBasedModel(cobra.core.Model):\n#\n# Reactions that either don't have products or substrates.\n# \"\"\"\n- # return [reaction for reaction in self.reactions if reaction.is_exchange]\n+ # return [reaction for reaction in self.reactions if reaction.boundary]\n# def add_metabolites(self, metabolite_list):\n# super(SolverBasedModel, self).add_metabolites(metabolite_list)\n" }, { "change_type": "MODIFY", "old_path": "cameo/strain_design/heuristic/evolutionary/objective_functions.py", "new_path": "cameo/strain_design/heuristic/evolutionary/objective_functions.py", "diff": "@@ -196,14 +196,14 @@ class biomass_product_coupled_yield(YieldFunction):\nbiomass_flux = round(solution.fluxes[self.biomass], config.ndecimals)\nif self.carbon_yield:\nproduct = model.reaction.get_by_id(self.product)\n- if product.is_exchange:\n+ if product.boundary:\nproduct_flux = round(solution.fluxes[self.product], config.ndecimals) * product.n_carbon\nelse:\nproduct_flux = round(solution.fluxes[self.product], config.ndecimals) * product.n_carbon / 2\nsubstrate_flux = 0\nfor substrate_id in self.substrates:\nsubstrate = model.reactions.get_by_id(substrate_id)\n- if substrate.is_exchange:\n+ if substrate.boundary:\nsubstrate_flux += abs(solution.fluxes[substrate_id]) * substrate.n_carbon\nelse:\nsubstrate_flux += abs(solution.fluxes[substrate_id]) * substrate.n_carbon / 2\n@@ -269,14 +269,14 @@ class biomass_product_coupled_min_yield(biomass_product_coupled_yield):\nmin_product_flux = round(fva_res[\"lower_bound\"][self.product], config.ndecimals)\nif self.carbon_yield:\nproduct = model.reactions.get_by_id(self.product)\n- if product.is_exchange:\n+ if product.boundary:\nproduct_flux = min_product_flux * product.n_carbon\nelse:\nproduct_flux = min_product_flux * product.n_carbon / 2\nsubstrate_flux = 0\nfor substrate_id in self.substrates:\nsubstrate = model.reactions.get_by_id(substrate_id)\n- if substrate.is_exchange:\n+ if substrate.boundary:\nsubstrate_flux += abs(solution.fluxes[substrate_id]) * substrate.n_carbon\nelse:\nsubstrate_flux += abs(solution.fluxes[substrate_id]) * substrate.n_carbon / 2\n@@ -337,14 +337,14 @@ class product_yield(YieldFunction):\ndef __call__(self, model, solution, targets):\nif self.carbon_yield:\nproduct = model.reactions.get_by_id(self.product)\n- if product.is_exchange:\n+ if product.boundary:\nproduct_flux = round(solution.fluxes[self.product], config.ndecimals) * product.n_carbon\nelse:\nproduct_flux = round(solution.fluxes[self.product], config.ndecimals) * product.n_carbon / 2\nsubstrate_flux = 0\nfor substrate_id in self.substrates:\nsubstrate = model.reactions.get_by_id(substrate_id)\n- if substrate.is_exchange:\n+ if substrate.boundary:\nsubstrate_flux += abs(solution.fluxes[substrate_id]) * substrate.n_carbon\nelse:\nsubstrate_flux += abs(solution.fluxes[substrate_id]) * 
substrate.n_carbon / 2\n" } ]
Python
Apache License 2.0
biosustain/cameo
refactor: remove redundant reaction.is_exchange super has boundary which is equivalent
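With is_exchange gone, cobrapy's Reaction.boundary property is used everywhere instead. A small sketch of the replacement (the model name is illustrative):

from cameo import load_model

model = load_model("iJO1366")

# boundary is True for reactions that have only reactants or only products,
# i.e. exchanges, demands and sinks - roughly the set is_exchange used to flag.
boundary_reactions = [reaction for reaction in model.reactions if reaction.boundary]
print(len(boundary_reactions), "boundary reactions, e.g.", boundary_reactions[0].id)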
89,735
27.03.2017 09:26:11
-7,200
fe66e621950ad0cb9aabe48f3f9a7666e5a614e2
refactor: remove n_carbon; a newly introduced but hardly used attribute on metabolite and reaction, removed in favor of computing it from the more general elements property.
[ { "change_type": "MODIFY", "old_path": "cameo/core/metabolite.py", "new_path": "cameo/core/metabolite.py", "diff": "# Copyright 2016 Novo Nordisk Foundation Center for Biosustainability, DTU.\n-\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n@@ -16,8 +15,6 @@ import logging\nimport cobra\nimport six\nfrom cameo.util import inheritdocstring\n-from cameo.core.utils import add_exchange\n-from cobra.util.context import get_context\nlogger = logging.getLogger(__name__)\n@@ -48,16 +45,16 @@ class Metabolite(cobra.core.Metabolite):\n# super(Metabolite, self).remove_from_model(method, **kwargs)\n# model.solver.remove(model.solver.constraints[self.id])\n- @property\n- def n_carbon(self):\n- \"\"\"number of carbon atoms\n-\n- Returns\n- -------\n- int\n- number of carbons in this metabolite\n- \"\"\"\n- return self.elements.get('C', 0)\n+ # @property\n+ # def n_carbon(self):\n+ # \"\"\"number of carbon atoms\n+ #\n+ # Returns\n+ # -------\n+ # int\n+ # number of carbons in this metabolite\n+ # \"\"\"\n+ # return self.elements.get('C', 0)\n# @property\n# def constraint(self):\n" }, { "change_type": "MODIFY", "old_path": "cameo/core/reaction.py", "new_path": "cameo/core/reaction.py", "diff": "@@ -458,16 +458,16 @@ class Reaction(cobra.core.Reaction):\n# if ub is not None:\n# self.upper_bound = ub\n- @property\n- def n_carbon(self):\n- \"\"\"number of carbon atoms\n-\n- Returns\n- -------\n- int\n- number of carbons for all metabolites involved in a reaction\n- \"\"\"\n- return sum(metabolite.n_carbon for metabolite in self.metabolites)\n+ # @property\n+ # def n_carbon(self):\n+ # \"\"\"number of carbon atoms\n+ #\n+ # Returns\n+ # -------\n+ # int\n+ # number of carbons for all metabolites involved in a reaction\n+ # \"\"\"\n+ # return sum(metabolite.n_carbon for metabolite in self.metabolites)\ndef _repr_html_(self):\nreturn \"\"\"\n" }, { "change_type": "MODIFY", "old_path": "cameo/flux_analysis/analysis.py", "new_path": "cameo/flux_analysis/analysis.py", "diff": "@@ -452,7 +452,8 @@ def _get_c_source_reaction(model):\nmodel.optimize()\nexcept (Infeasible, AssertionError):\nreturn None\n- source_reactions = [(reaction, reaction.flux * reaction.n_carbon) for reaction in medium_reactions if\n+\n+ source_reactions = [(reaction, reaction.flux * n_carbon(reaction)) for reaction in medium_reactions if\nreaction.flux < 0]\nsorted_sources = sorted(source_reactions, key=lambda reaction_tuple: reaction_tuple[1])\nreturn sorted_sources[0][0]\n@@ -589,7 +590,7 @@ class _PhenotypicPhasePlaneChunkEvaluator(object):\n-------\nfloat\nreaction flux multiplied by number of carbon in reactants\"\"\"\n- carbon = sum(metabolite.n_carbon for metabolite in reaction.reactants)\n+ carbon = sum(metabolite.elements.get('C', 0) for metabolite in reaction.reactants)\ntry:\nreturn reaction.flux * carbon\nexcept AssertionError:\n@@ -953,3 +954,7 @@ class FluxBalanceImpactDegreeResult(Result):\ndef plot(self, grid=None, width=None, height=None, title=None):\npass\n+\n+\n+def n_carbon(reaction):\n+ return sum(metabolite.elements.get('C', 0) for metabolite in reaction.metabolites)\n" }, { "change_type": "MODIFY", "old_path": "cameo/strain_design/heuristic/evolutionary/objective_functions.py", "new_path": "cameo/strain_design/heuristic/evolutionary/objective_functions.py", "diff": "@@ -197,16 +197,16 @@ class biomass_product_coupled_yield(YieldFunction):\nif self.carbon_yield:\nproduct = 
model.reaction.get_by_id(self.product)\nif product.boundary:\n- product_flux = round(solution.fluxes[self.product], config.ndecimals) * product.n_carbon\n+ product_flux = round(solution.fluxes[self.product], config.ndecimals) * n_carbon(product)\nelse:\n- product_flux = round(solution.fluxes[self.product], config.ndecimals) * product.n_carbon / 2\n+ product_flux = round(solution.fluxes[self.product], config.ndecimals) * n_carbon(product) / 2\nsubstrate_flux = 0\nfor substrate_id in self.substrates:\nsubstrate = model.reactions.get_by_id(substrate_id)\nif substrate.boundary:\n- substrate_flux += abs(solution.fluxes[substrate_id]) * substrate.n_carbon\n+ substrate_flux += abs(solution.fluxes[substrate_id]) * n_carbon(substrate)\nelse:\n- substrate_flux += abs(solution.fluxes[substrate_id]) * substrate.n_carbon / 2\n+ substrate_flux += abs(solution.fluxes[substrate_id]) * n_carbon(substrate) / 2\nsubstrate_flux = round(substrate_flux, config.ndecimals)\nelse:\n@@ -270,16 +270,16 @@ class biomass_product_coupled_min_yield(biomass_product_coupled_yield):\nif self.carbon_yield:\nproduct = model.reactions.get_by_id(self.product)\nif product.boundary:\n- product_flux = min_product_flux * product.n_carbon\n+ product_flux = min_product_flux * n_carbon(product)\nelse:\n- product_flux = min_product_flux * product.n_carbon / 2\n+ product_flux = min_product_flux * n_carbon(product) / 2\nsubstrate_flux = 0\nfor substrate_id in self.substrates:\nsubstrate = model.reactions.get_by_id(substrate_id)\nif substrate.boundary:\n- substrate_flux += abs(solution.fluxes[substrate_id]) * substrate.n_carbon\n+ substrate_flux += abs(solution.fluxes[substrate_id]) * n_carbon(substrate)\nelse:\n- substrate_flux += abs(solution.fluxes[substrate_id]) * substrate.n_carbon / 2\n+ substrate_flux += abs(solution.fluxes[substrate_id]) * n_carbon(substrate) / 2\nsubstrate_flux = round(substrate_flux, config.ndecimals)\nelse:\nproduct_flux = min_product_flux\n@@ -338,16 +338,16 @@ class product_yield(YieldFunction):\nif self.carbon_yield:\nproduct = model.reactions.get_by_id(self.product)\nif product.boundary:\n- product_flux = round(solution.fluxes[self.product], config.ndecimals) * product.n_carbon\n+ product_flux = round(solution.fluxes[self.product], config.ndecimals) * n_carbon(product)\nelse:\n- product_flux = round(solution.fluxes[self.product], config.ndecimals) * product.n_carbon / 2\n+ product_flux = round(solution.fluxes[self.product], config.ndecimals) * n_carbon(product) / 2\nsubstrate_flux = 0\nfor substrate_id in self.substrates:\nsubstrate = model.reactions.get_by_id(substrate_id)\nif substrate.boundary:\n- substrate_flux += abs(solution.fluxes[substrate_id]) * substrate.n_carbon\n+ substrate_flux += abs(solution.fluxes[substrate_id]) * n_carbon(substrate)\nelse:\n- substrate_flux += abs(solution.fluxes[substrate_id]) * substrate.n_carbon / 2\n+ substrate_flux += abs(solution.fluxes[substrate_id]) * n_carbon(substrate) / 2\nsubstrate_flux = round(substrate_flux, config.ndecimals)\nelse:\nproduct_flux = round(solution.fluxes[self.product], config.ndecimals)\n@@ -418,3 +418,14 @@ class number_of_knockouts(ObjectiveFunction):\nreturn 0\nelse:\nreturn np.inf\n+\n+\n+def n_carbon(reaction):\n+ \"\"\"number of carbon atoms\n+\n+ Returns\n+ -------\n+ int\n+ number of carbons for all metabolites involved in a reaction\n+ \"\"\"\n+ return sum(metabolite.elements.get('C', 0) for metabolite in reaction.metabolites)\n" } ]
Python
Apache License 2.0
biosustain/cameo
refactor: remove n_carbon Newly introduced but hardly used attribute for metabolite and reaction. Remove it in favor of considering more general property.
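The commit replaces the n_carbon attributes with small module-level helpers built on the standard Metabolite.elements mapping. A self-contained sketch of that helper, applied to an illustrative exchange reaction:

from cameo import load_model

model = load_model("iJO1366")


def n_carbon(reaction):
    """Carbon atoms summed over all metabolites taking part in a reaction.

    Mirrors the helper introduced in the diff above; relies only on the
    standard Metabolite.elements dictionary.
    """
    return sum(metabolite.elements.get('C', 0) for metabolite in reaction.metabolites)


glucose_exchange = model.reactions.get_by_id("EX_glc__D_e")  # illustrative ID
print(n_carbon(glucose_exchange))  # 6 for a glucose exchange reaction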
89,735
27.03.2017 09:54:07
-7,200
0a402205be303cdc9bd43a977576a029997e7195
refactor: move medium definitions in favor of cobrapy's medium definition.
[ { "change_type": "MODIFY", "old_path": "cameo/core/solver_based_model.py", "new_path": "cameo/core/solver_based_model.py", "diff": "from __future__ import absolute_import, print_function\n-import csv\nimport logging\nimport cobra\nimport optlang\nimport six\n-from pandas import DataFrame, pandas\nfrom sympy import Add\nfrom sympy import Mul\n@@ -640,66 +638,66 @@ class SolverBasedModel(cobra.core.Model):\n# return stoichiometric_matrix\n#\n- @property\n- def medium(self):\n- \"\"\"Current medium.\"\"\"\n- reaction_ids = []\n- reaction_names = []\n- lower_bounds = []\n- upper_bounds = []\n- for ex in self.exchanges:\n- metabolite = list(ex.metabolites.keys())[0]\n- coeff = ex.metabolites[metabolite]\n- if coeff * ex.lower_bound > 0:\n- reaction_ids.append(ex.id)\n- reaction_names.append(ex.name)\n- lower_bounds.append(ex.lower_bound)\n- upper_bounds.append(ex.upper_bound)\n-\n- return DataFrame({'reaction_id': reaction_ids,\n- 'reaction_name': reaction_names,\n- 'lower_bound': lower_bounds,\n- 'upper_bound': upper_bounds},\n- index=None, columns=['reaction_id', 'reaction_name', 'lower_bound', 'upper_bound'])\n-\n- # TODO: describe the formats in doc\n- def load_medium(self, medium, copy=False, delimiter=\"\\t\"):\n- \"\"\"\n- Loads a medium into the model. If copy is true it will return\n- a copy of the model. Otherwise it applies the medium to itself.\n- Supported formats\n- TODO\n-\n- Parameters\n- ----------\n- medium: str, pandas.DataFrame, dict.\n-\n- copy: boolean, optional\n- If True copies the model, otherwise the changes will happen inplace.\n- delimiter: str\n- Only if loading the medium from a file.\n-\n- Returns\n- -------\n- SolverBasedModel\n- If copy=True, returns a copy of the model.\n-\n- \"\"\"\n-\n- if copy:\n- model = self.copy()\n- else:\n- model = self\n- if isinstance(medium, dict):\n- model._load_medium_from_dict(medium)\n- elif isinstance(medium, pandas.DataFrame):\n- model._load_medium_from_dataframe(medium)\n- elif isinstance(medium, six.string_types):\n- model._load_medium_from_file(medium, delimiter=delimiter)\n- else:\n- raise AssertionError(\"input type (%s) is not valid\" % type(medium))\n-\n- return model\n+ # @property\n+ # def medium(self):\n+ # \"\"\"Current medium.\"\"\"\n+ # reaction_ids = []\n+ # reaction_names = []\n+ # lower_bounds = []\n+ # upper_bounds = []\n+ # for ex in self.exchanges:\n+ # metabolite = list(ex.metabolites.keys())[0]\n+ # coeff = ex.metabolites[metabolite]\n+ # if coeff * ex.lower_bound > 0:\n+ # reaction_ids.append(ex.id)\n+ # reaction_names.append(ex.name)\n+ # lower_bounds.append(ex.lower_bound)\n+ # upper_bounds.append(ex.upper_bound)\n+ #\n+ # return DataFrame({'reaction_id': reaction_ids,\n+ # 'reaction_name': reaction_names,\n+ # 'lower_bound': lower_bounds,\n+ # 'upper_bound': upper_bounds},\n+ # index=None, columns=['reaction_id', 'reaction_name', 'lower_bound', 'upper_bound'])\n+ #\n+ # # TODO: describe the formats in doc\n+ # def load_medium(self, medium, copy=False, delimiter=\"\\t\"):\n+ # \"\"\"\n+ # Loads a medium into the model. If copy is true it will return\n+ # a copy of the model. 
Otherwise it applies the medium to itself.\n+ # Supported formats\n+ # TODO\n+ #\n+ # Parameters\n+ # ----------\n+ # medium: str, pandas.DataFrame, dict.\n+ #\n+ # copy: boolean, optional\n+ # If True copies the model, otherwise the changes will happen inplace.\n+ # delimiter: str\n+ # Only if loading the medium from a file.\n+ #\n+ # Returns\n+ # -------\n+ # SolverBasedModel\n+ # If copy=True, returns a copy of the model.\n+ #\n+ # \"\"\"\n+ #\n+ # if copy:\n+ # model = self.copy()\n+ # else:\n+ # model = self\n+ # if isinstance(medium, dict):\n+ # model._load_medium_from_dict(medium)\n+ # elif isinstance(medium, pandas.DataFrame):\n+ # model._load_medium_from_dataframe(medium)\n+ # elif isinstance(medium, six.string_types):\n+ # model._load_medium_from_file(medium, delimiter=delimiter)\n+ # else:\n+ # raise AssertionError(\"input type (%s) is not valid\" % type(medium))\n+ #\n+ # return model\n# def _ids_to_reactions(self, reactions):\n# \"\"\"Translate reaction IDs into reactions (skips reactions).\"\"\"\n@@ -779,29 +777,29 @@ class SolverBasedModel(cobra.core.Model):\n# raise KeyError(None)\n#\n# return value\n-\n- def _load_medium_from_dict(self, medium):\n- assert isinstance(medium, dict)\n- for ex_reaction in self.exchanges:\n- ex_reaction.lower_bound = medium.get(ex_reaction.id, 0)\n-\n- def _load_medium_from_file(self, file_path, delimiter=\"\\t\"):\n- medium = {}\n-\n- with open(file_path, \"rb\") as csv_file:\n- reader = csv.reader(csv_file, delimiter=delimiter)\n-\n- for row in reader:\n- self.reactions.get_by_id(row[0])\n- medium[row[0]] = row[1]\n-\n- self._load_medium_from_dict(medium)\n-\n- def _load_medium_from_dataframe(self, medium):\n- assert isinstance(medium, DataFrame)\n- for ex_reaction in self.exchanges:\n- if ex_reaction.id in medium.reaction_id.values:\n- medium_row = medium[medium.reaction_id == ex_reaction.id]\n- ex_reaction.lower_bound = medium_row.lower_bound.values[0]\n- else:\n- ex_reaction.lower_bound = 0\n+ #\n+ # def _load_medium_from_dict(self, medium):\n+ # assert isinstance(medium, dict)\n+ # for ex_reaction in self.exchanges:\n+ # ex_reaction.lower_bound = medium.get(ex_reaction.id, 0)\n+ #\n+ # def _load_medium_from_file(self, file_path, delimiter=\"\\t\"):\n+ # medium = {}\n+ #\n+ # with open(file_path, \"rb\") as csv_file:\n+ # reader = csv.reader(csv_file, delimiter=delimiter)\n+ #\n+ # for row in reader:\n+ # self.reactions.get_by_id(row[0])\n+ # medium[row[0]] = row[1]\n+ #\n+ # self._load_medium_from_dict(medium)\n+ #\n+ # def _load_medium_from_dataframe(self, medium):\n+ # assert isinstance(medium, DataFrame)\n+ # for ex_reaction in self.exchanges:\n+ # if ex_reaction.id in medium.reaction_id.values:\n+ # medium_row = medium[medium.reaction_id == ex_reaction.id]\n+ # ex_reaction.lower_bound = medium_row.lower_bound.values[0]\n+ # else:\n+ # ex_reaction.lower_bound = 0\n" }, { "change_type": "MODIFY", "old_path": "cameo/core/utils.py", "new_path": "cameo/core/utils.py", "diff": "+import six\n+import csv\n+\nfrom cobra.util import add_exchange\n+from pandas import DataFrame\ndef get_reaction_for(model, value, add=True):\n@@ -37,3 +41,95 @@ def get_reaction_for(model, value, add=True):\nelse:\nraise KeyError('Invalid target %s' % value)\nreturn reactions[0]\n+\n+\n+def medium(model):\n+ \"\"\"Current medium for this model.\"\"\"\n+ reaction_ids = []\n+ reaction_names = []\n+ lower_bounds = []\n+ upper_bounds = []\n+ for ex in model.exchanges:\n+ metabolite = list(ex.metabolites.keys())[0]\n+ coeff = ex.metabolites[metabolite]\n+ if coeff * 
ex.lower_bound > 0:\n+ reaction_ids.append(ex.id)\n+ reaction_names.append(ex.name)\n+ lower_bounds.append(ex.lower_bound)\n+ upper_bounds.append(ex.upper_bound)\n+\n+ return DataFrame({'reaction_id': reaction_ids,\n+ 'reaction_name': reaction_names,\n+ 'lower_bound': lower_bounds,\n+ 'upper_bound': upper_bounds},\n+ index=None, columns=['reaction_id', 'reaction_name', 'lower_bound', 'upper_bound'])\n+\n+\n+def load_medium(model, medium_def, copy=False, delimiter=\"\\t\"):\n+ \"\"\"\n+ Loads a medium into the model. If copy is true it will return\n+ a copy of the model. Otherwise it applies the medium to itself.\n+ Supported formats\n+ TODO\n+\n+ Parameters\n+ ----------\n+ model : cameo.core.SolverBasedModel\n+ The model to load medium for\n+ medium_def: str, pandas.DataFrame, dict.\n+ The medium to load\n+ copy: boolean, optional\n+ If True copies the model, otherwise the changes will happen inplace.\n+ delimiter: str\n+ Only if loading the medium from a file.\n+\n+ Returns\n+ -------\n+ SolverBasedModel\n+ If copy=True, returns a copy of the model.\n+\n+ \"\"\"\n+\n+ if copy:\n+ model = model.copy()\n+ else:\n+ model = model\n+ if isinstance(medium_def, dict):\n+ _load_medium_from_dict(model, medium_def)\n+ elif isinstance(medium_def, DataFrame):\n+ _load_medium_from_dataframe(model, medium_def)\n+ elif isinstance(medium_def, six.string_types):\n+ _load_medium_from_file(model, medium_def, delimiter=delimiter)\n+ else:\n+ raise AssertionError(\"input type (%s) is not valid\" % type(medium))\n+\n+ return model\n+\n+\n+def _load_medium_from_dict(model, medium_def):\n+ assert isinstance(medium_def, dict)\n+ for ex_reaction in model.exchanges:\n+ ex_reaction.lower_bound = medium_def.get(ex_reaction.id, 0)\n+\n+\n+def _load_medium_from_file(model, file_path, delimiter=\"\\t\"):\n+ this_medium = {}\n+\n+ with open(file_path, \"rb\") as csv_file:\n+ reader = csv.reader(csv_file, delimiter=delimiter)\n+\n+ for row in reader:\n+ model.reactions.get_by_id(row[0])\n+ this_medium[row[0]] = row[1]\n+\n+ _load_medium_from_dict(model, this_medium)\n+\n+\n+def _load_medium_from_dataframe(model, medium_df):\n+ assert isinstance(medium_df, DataFrame)\n+ for ex_reaction in model.exchanges:\n+ if ex_reaction.id in medium_df.reaction_id.values:\n+ medium_row = medium_df[medium_df.reaction_id == ex_reaction.id]\n+ ex_reaction.lower_bound = medium_row.lower_bound.values[0]\n+ else:\n+ ex_reaction.lower_bound = 0\n" }, { "change_type": "MODIFY", "old_path": "cameo/flux_analysis/analysis.py", "new_path": "cameo/flux_analysis/analysis.py", "diff": "@@ -447,13 +447,12 @@ def _get_c_source_reaction(model):\nReaction\nThe medium reaction with highest input carbon flux\n\"\"\"\n- medium_reactions = [model.reactions.get_by_id(reaction) for reaction in model.medium.reaction_id]\ntry:\nmodel.optimize()\nexcept (Infeasible, AssertionError):\nreturn None\n-\n- source_reactions = [(reaction, reaction.flux * n_carbon(reaction)) for reaction in medium_reactions if\n+ model_reactions = model.reactions.get_by_any(list(model.medium))\n+ source_reactions = [(reaction, reaction.flux * n_carbon(reaction)) for reaction in model_reactions if\nreaction.flux < 0]\nsorted_sources = sorted(source_reactions, key=lambda reaction_tuple: reaction_tuple[1])\nreturn sorted_sources[0][0]\n" }, { "change_type": "MODIFY", "old_path": "tests/test_solver_based_model.py", "new_path": "tests/test_solver_based_model.py", "diff": "@@ -21,7 +21,7 @@ import os\nimport pickle\nimport cobra.test\n-from cobra.util import SolverNotFound, add_exchange\n+from 
cobra.util import SolverNotFound\nimport numpy\nimport pandas\n@@ -36,11 +36,11 @@ from cameo.config import solvers\nfrom cameo.core.gene import Gene\nfrom cameo.core.metabolite import Metabolite\nfrom cameo.core.solver_based_model import Reaction\n-from cameo.core.utils import get_reaction_for\n+from cameo.core.utils import get_reaction_for, load_medium, medium\nfrom cameo.exceptions import UndefinedSolution\nfrom cameo.flux_analysis.structural import create_stoichiometric_array\nfrom cameo.flux_analysis.analysis import find_essential_genes, find_essential_metabolites, find_essential_reactions\n-from cameo.util import TimeMachine\n+\nTRAVIS = bool(os.getenv('TRAVIS', False))\nTESTDIR = os.path.dirname(__file__)\n@@ -964,15 +964,15 @@ class TestSolverBasedModel:\nassert stoichiometric_matrix[j, i] == coefficient\ndef test_set_medium(self, core_model):\n- medium = core_model.medium\n+ this_medium = medium(core_model)\nfor reaction in core_model.exchanges:\nif reaction.lower_bound == 0:\n- assert reaction.id not in medium.reaction_id.values\n+ assert reaction.id not in this_medium.reaction_id.values\nif reaction.lower_bound < 0:\n- assert reaction.id in medium.reaction_id.values\n- core_model.load_medium(medium)\n- for rid in core_model.medium.reaction_id:\n- assert len(medium[medium.reaction_id == rid]) == 1\n+ assert reaction.id in this_medium.reaction_id.values\n+ load_medium(core_model, this_medium)\n+ for rid in medium(core_model).reaction_id:\n+ assert len(this_medium[this_medium.reaction_id == rid]) == 1\ndef test_solver_change_preserves_non_metabolic_constraints(self, core_model):\nwith core_model:\n" } ]
Python
Apache License 2.0
biosustain/cameo
refactor: move medium definitions In favor of cobrapy's medium definition.
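After this change the DataFrame-based medium helpers live in cameo.core.utils, while code that only needs the active uptake reactions can rely on cobrapy's model.medium dictionary. A hedged sketch of both, with an illustrative model:

from cameo import load_model
from cameo.core.utils import medium, load_medium  # relocated helpers from the diff

model = load_model("iJO1366")

# cobrapy's medium: a dict mapping active exchange reaction IDs to uptake bounds.
print(model.medium)

# The relocated cameo helpers keep the DataFrame view and the loader around.
medium_frame = medium(model)      # DataFrame with reaction_id, name and bounds
load_medium(model, medium_frame)  # re-apply the same medium in place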
89,735
27.03.2017 10:14:46
-7,200
a4c814d394f4f2f459c9707a4fb53afa930a1936
refactor: remove solution classes; they were merged and improved upstream, so there is no need to subclass
[ { "change_type": "MODIFY", "old_path": "cameo/core/solution.py", "new_path": "cameo/core/solution.py", "diff": "# See the License for the specific language governing permissions and\n# limitations under the License.\n-from __future__ import absolute_import, print_function\n-\n-import datetime\n-import logging\n-import time\n-from collections import OrderedDict\n-\n-import cobra\n-from pandas import DataFrame, Series\n-\n-import cameo\n-from cameo.exceptions import UndefinedSolution\n-\n-logger = logging.getLogger(__name__)\n-\n-\n-class SolutionBase(object):\n- def __new__(cls, *args, **kwargs):\n- # this is a cobrapy compatibility hack\n- if len(args) == 1 and not isinstance(args[0], cameo.core.solver_based_model.SolverBasedModel):\n- cobrapy_solution = super(SolutionBase, cls).__new__(cobra.core.Solution)\n- cobrapy_solution.__init__(*args, **kwargs)\n- return cobrapy_solution\n- else:\n- return super(SolutionBase, cls).__new__(cls)\n-\n- def __init__(self, model):\n- self.model = model\n- self._x = None\n- self._y = None\n- self._x_dict = None\n- self._y_dict = None\n-\n- @property\n- def data_frame(self):\n- return DataFrame({'fluxes': Series(self.fluxes), 'reduced_costs': Series(self.reduced_costs)})\n-\n- def __str__(self):\n- \"\"\"A pandas DataFrame representation of the solution.\n-\n- Returns\n- -------\n- pandas.DataFrame\n- \"\"\"\n- return str(self.data_frame)\n-\n- def _repr_html_(self):\n- return self.data_frame._repr_html_()\n-\n- def as_cobrapy_solution(self):\n- \"\"\"Convert into a cobrapy Solution.\n-\n- Returns\n- -------\n- cobra.core.Solution.Solution\n- \"\"\"\n- return Solution(self.f, x=self.x,\n- x_dict=self.x_dict, y=self.y, y_dict=self.y_dict,\n- the_solver=None, the_time=0, status=self.status)\n-\n- def get_primal_by_id(self, reaction_id):\n- \"\"\"Return a flux/primal value for a reaction.\n-\n- Parameters\n- ----------\n- reaction_id : str\n- A reaction ID.\n- \"\"\"\n- return self.x_dict[reaction_id]\n-\n- @property\n- def x_dict(self):\n- if self._x_dict is None:\n- return self.fluxes\n- else:\n- return self._x_dict\n-\n- @x_dict.setter\n- def x_dict(self, value):\n- self._x_dict = value\n-\n- @property\n- def x(self):\n- if self._x is None:\n- return self.fluxes.values()\n- else:\n- return self._x\n-\n- @x.setter\n- def x(self, value):\n- self._x = value\n-\n- @property\n- def y_dict(self):\n- if self._y_dict is None:\n- return self.reduced_costs\n- else:\n- return self._y_dict\n-\n- @y_dict.setter\n- def y_dict(self, value):\n- self._y_dict = value\n-\n- @property\n- def y(self):\n- if self._y is None:\n- return self.reduced_costs.values()\n- else:\n- return self._y\n-\n- @y.setter\n- def y(self, value):\n- self._y = value\n-\n- @property\n- def objective_value(self):\n- return self.f\n-\n-\n-class Solution(SolutionBase):\n- \"\"\"This class mimicks the cobrapy Solution class.\n-\n- Attributes\n- ----------\n- fluxes : OrderedDict\n- A dictionary of flux values.\n- reduced_costs : OrderedDict\n- A dictionary of reduced costs.\n-\n- Notes\n- -----\n- See also documentation for cobra.core.Solution.Solution for an extensive list of inherited attributes.\n- \"\"\"\n-\n- def __init__(self, model, *args, **kwargs):\n- \"\"\"\n- Parameters\n- ----------\n- model : SolverBasedModel\n- \"\"\"\n- super(Solution, self).__init__(model, *args, **kwargs)\n- self.f = model.solver.objective.value\n- self.fluxes = OrderedDict()\n- self.shadow_prices = model.solver.shadow_prices\n- self.reduced_costs = OrderedDict()\n- self._primal_values = model.solver.primal_values\n- 
self._reduced_values = model.solver.reduced_costs\n-\n- for reaction in model.reactions:\n- self.fluxes[reaction.id] = self._primal_values[reaction.id] - self._primal_values[\n- reaction.reverse_id]\n-\n- self.reduced_costs[reaction.id] = self._reduced_values[reaction.id] - self._reduced_values[\n- reaction.reverse_id]\n-\n- self.status = model.solver.status\n- self._reaction_ids = [r.id for r in self.model.reactions]\n- self._metabolite_ids = [m.id for m in self.model.metabolites]\n-\n- def __dir__(self):\n- # Hide 'cobrapy' attributes and methods from user.\n- fields = sorted(dir(type(self)) + list(self.__dict__.keys()))\n- fields.remove('x')\n- fields.remove('y')\n- fields.remove('x_dict')\n- fields.remove('y_dict')\n- return fields\n-\n- def _repr_html_(self):\n- return \"%s: %f\" % (self.model.objective.expression, self.f)\n-\n-\n-class LazySolution(SolutionBase):\n- \"\"\"This class implements a lazy evaluating version of the cobrapy Solution class.\n-\n- Attributes\n- ----------\n- model : SolverBasedModel\n- fluxes : OrderedDict\n- A dictionary of flux values.\n- reduced_costs : OrderedDict\n- A dictionary of reduced costs.\n-\n- Notes\n- -----\n- See also documentation for cobra.core.Solution.Solution for an extensive list of inherited attributes.\n- \"\"\"\n-\n- def __init__(self, model, *args, **kwargs):\n- \"\"\"\n- Parameters\n- ----------\n- model : SolverBasedModel\n- \"\"\"\n- super(LazySolution, self).__init__(model, *args, **kwargs)\n- if self.model._timestamp_last_optimization is not None:\n- self._time_stamp = self.model._timestamp_last_optimization\n- else:\n- self._time_stamp = time.time()\n- self._f = None\n- self._primal_values = None\n- self._reduced_values = None\n-\n- def _check_freshness(self):\n- \"\"\"Raises an exceptions if the solution might have become invalid due to re-optimization of the attached model.\n-\n- Raises\n- ------\n- UndefinedSolution\n- If solution has become invalid.\n- \"\"\"\n- if self._time_stamp != self.model._timestamp_last_optimization:\n- def timestamp_formatter(timestamp):\n- datetime.datetime.fromtimestamp(timestamp).strftime(\n- \"%Y-%m-%d %H:%M:%S:%f\")\n-\n- raise UndefinedSolution(\n- 'The solution (captured around %s) has become invalid as the model has been '\n- 're-optimized recently (%s).' 
% (\n- timestamp_formatter(self._time_stamp),\n- timestamp_formatter(self.model._timestamp_last_optimization))\n- )\n-\n- @property\n- def status(self):\n- self._check_freshness()\n- return self.model.solver.status\n-\n- @property\n- def f(self):\n- self._check_freshness()\n- if self._f is None:\n- return self.model.solver.objective.value\n- else:\n- return self._f\n-\n- @f.setter\n- def f(self, value):\n- self._f = value\n-\n- @property\n- def fluxes(self):\n- self._check_freshness()\n- primal_values = self.model.solver.primal_values\n-\n- fluxes = OrderedDict()\n- for reaction in self.model.reactions:\n- fluxes[reaction.id] = primal_values[reaction.id] - primal_values[reaction.reverse_id]\n-\n- return fluxes\n-\n- @property\n- def reduced_costs(self):\n- self._check_freshness()\n- reduced_values = self.model.solver.reduced_costs\n-\n- reduced_costs = OrderedDict()\n- for reaction in self.model.reactions:\n- reduced_costs[reaction.id] = reduced_values[reaction.id] - reduced_values[\n- reaction.reverse_id]\n- return reduced_costs\n-\n- @property\n- def shadow_prices(self):\n- self._check_freshness()\n- return self.model.solver.shadow_prices\n-\n- def get_primal_by_id(self, reaction_id):\n- \"\"\"Return a flux/primal value for a reaction.\n+# from __future__ import absolute_import, print_function\n+#\n+# import datetime\n+# import logging\n+# import time\n+# from collections import OrderedDict\n+#\n+# import cobra\n+# from pandas import DataFrame, Series\n+#\n+# import cameo\n+# from cameo.exceptions import UndefinedSolution\n+#\n+# logger = logging.getLogger(__name__)\n+#\n+#\n+# class SolutionBase(object):\n+# def __new__(cls, *args, **kwargs):\n+# # this is a cobrapy compatibility hack\n+# if len(args) == 1 and not isinstance(args[0], cameo.core.solver_based_model.SolverBasedModel):\n+# cobrapy_solution = super(SolutionBase, cls).__new__(cobra.core.Solution)\n+# cobrapy_solution.__init__(*args, **kwargs)\n+# return cobrapy_solution\n+# else:\n+# return super(SolutionBase, cls).__new__(cls)\n+#\n+# def __init__(self, model):\n+# self.model = model\n+# self._x = None\n+# self._y = None\n+# self._x_dict = None\n+# self._y_dict = None\n+#\n+# @property\n+# def data_frame(self):\n+# return DataFrame({'fluxes': Series(self.fluxes), 'reduced_costs': Series(self.reduced_costs)})\n+#\n+# def __str__(self):\n+# \"\"\"A pandas DataFrame representation of the solution.\n+#\n+# Returns\n+# -------\n+# pandas.DataFrame\n+# \"\"\"\n+# return str(self.data_frame)\n+#\n+# def _repr_html_(self):\n+# return self.data_frame._repr_html_()\n+#\n+# def as_cobrapy_solution(self):\n+# \"\"\"Convert into a cobrapy Solution.\n+#\n+# Returns\n+# -------\n+# cobra.core.Solution.Solution\n+# \"\"\"\n+# return Solution(self.f, x=self.x,\n+# x_dict=self.x_dict, y=self.y, y_dict=self.y_dict,\n+# the_solver=None, the_time=0, status=self.status)\n+#\n+# def get_primal_by_id(self, reaction_id):\n+# \"\"\"Return a flux/primal value for a reaction.\n+#\n+# Parameters\n+# ----------\n+# reaction_id : str\n+# A reaction ID.\n+# \"\"\"\n+# return self.x_dict[reaction_id]\n+#\n+# @property\n+# def x_dict(self):\n+# if self._x_dict is None:\n+# return self.fluxes\n+# else:\n+# return self._x_dict\n+#\n+# @x_dict.setter\n+# def x_dict(self, value):\n+# self._x_dict = value\n+#\n+# @property\n+# def x(self):\n+# if self._x is None:\n+# return self.fluxes.values()\n+# else:\n+# return self._x\n+#\n+# @x.setter\n+# def x(self, value):\n+# self._x = value\n+#\n+# @property\n+# def y_dict(self):\n+# if self._y_dict is None:\n+# return 
self.reduced_costs\n+# else:\n+# return self._y_dict\n+#\n+# @y_dict.setter\n+# def y_dict(self, value):\n+# self._y_dict = value\n+#\n+# @property\n+# def y(self):\n+# if self._y is None:\n+# return self.reduced_costs.values()\n+# else:\n+# return self._y\n+#\n+# @y.setter\n+# def y(self, value):\n+# self._y = value\n+#\n+# @property\n+# def objective_value(self):\n+# return self.f\n+#\n+#\n+# class Solution(SolutionBase):\n+# \"\"\"This class mimicks the cobrapy Solution class.\n+#\n+# Attributes\n+# ----------\n+# fluxes : OrderedDict\n+# A dictionary of flux values.\n+# reduced_costs : OrderedDict\n+# A dictionary of reduced costs.\n+#\n+# Notes\n+# -----\n+# See also documentation for cobra.core.Solution.Solution for an extensive list of inherited attributes.\n+# \"\"\"\n+#\n+# def __init__(self, model, *args, **kwargs):\n+# \"\"\"\n+# Parameters\n+# ----------\n+# model : SolverBasedModel\n+# \"\"\"\n+# super(Solution, self).__init__(model, *args, **kwargs)\n+# self.f = model.solver.objective.value\n+# self.fluxes = OrderedDict()\n+# self.shadow_prices = model.solver.shadow_prices\n+# self.reduced_costs = OrderedDict()\n+# self._primal_values = model.solver.primal_values\n+# self._reduced_values = model.solver.reduced_costs\n+#\n+# for reaction in model.reactions:\n+# self.fluxes[reaction.id] = self._primal_values[reaction.id] - self._primal_values[\n+# reaction.reverse_id]\n+#\n+# self.reduced_costs[reaction.id] = self._reduced_values[reaction.id] - self._reduced_values[\n+# reaction.reverse_id]\n+#\n+# self.status = model.solver.status\n+# self._reaction_ids = [r.id for r in self.model.reactions]\n+# self._metabolite_ids = [m.id for m in self.model.metabolites]\n+#\n+# def __dir__(self):\n+# # Hide 'cobrapy' attributes and methods from user.\n+# fields = sorted(dir(type(self)) + list(self.__dict__.keys()))\n+# fields.remove('x')\n+# fields.remove('y')\n+# fields.remove('x_dict')\n+# fields.remove('y_dict')\n+# return fields\n+#\n+# def _repr_html_(self):\n+# return \"%s: %f\" % (self.model.objective.expression, self.f)\n- Parameters\n- ----------\n- reaction_id : str\n- A reaction ID.\n- \"\"\"\n- self._check_freshness()\n- return self.model.reactions.get_by_id(reaction_id).flux\n- def __dir__(self):\n- # Hide 'cobrapy' attributes and methods from user.\n- fields = sorted(dir(type(self)) + list(self.__dict__.keys()))\n- fields.remove('x')\n- fields.remove('y')\n- fields.remove('x_dict')\n- fields.remove('y_dict')\n- return fields\n+# class LazySolution(SolutionBase):\n+# \"\"\"This class implements a lazy evaluating version of the cobrapy Solution class.\n+#\n+# Attributes\n+# ----------\n+# model : SolverBasedModel\n+# fluxes : OrderedDict\n+# A dictionary of flux values.\n+# reduced_costs : OrderedDict\n+# A dictionary of reduced costs.\n+#\n+# Notes\n+# -----\n+# See also documentation for cobra.core.Solution.Solution for an extensive list of inherited attributes.\n+# \"\"\"\n+#\n+# def __init__(self, model, *args, **kwargs):\n+# \"\"\"\n+# Parameters\n+# ----------\n+# model : SolverBasedModel\n+# \"\"\"\n+# super(LazySolution, self).__init__(model, *args, **kwargs)\n+# if self.model._timestamp_last_optimization is not None:\n+# self._time_stamp = self.model._timestamp_last_optimization\n+# else:\n+# self._time_stamp = time.time()\n+# self._f = None\n+# self._primal_values = None\n+# self._reduced_values = None\n+#\n+# def _check_freshness(self):\n+# \"\"\"Raises an exceptions if the solution might have become invalid due to re-optimization of the attached model.\n+#\n+# 
Raises\n+# ------\n+# UndefinedSolution\n+# If solution has become invalid.\n+# \"\"\"\n+# if self._time_stamp != self.model._timestamp_last_optimization:\n+# def timestamp_formatter(timestamp):\n+# datetime.datetime.fromtimestamp(timestamp).strftime(\n+# \"%Y-%m-%d %H:%M:%S:%f\")\n+#\n+# raise UndefinedSolution(\n+# 'The solution (captured around %s) has become invalid as the model has been '\n+# 're-optimized recently (%s).' % (\n+# timestamp_formatter(self._time_stamp),\n+# timestamp_formatter(self.model._timestamp_last_optimization))\n+# )\n+#\n+# @property\n+# def status(self):\n+# self._check_freshness()\n+# return self.model.solver.status\n+#\n+# @property\n+# def f(self):\n+# self._check_freshness()\n+# if self._f is None:\n+# return self.model.solver.objective.value\n+# else:\n+# return self._f\n+#\n+# @f.setter\n+# def f(self, value):\n+# self._f = value\n+#\n+# @property\n+# def fluxes(self):\n+# self._check_freshness()\n+# primal_values = self.model.solver.primal_values\n+#\n+# fluxes = OrderedDict()\n+# for reaction in self.model.reactions:\n+# fluxes[reaction.id] = primal_values[reaction.id] - primal_values[reaction.reverse_id]\n+#\n+# return fluxes\n+#\n+# @property\n+# def reduced_costs(self):\n+# self._check_freshness()\n+# reduced_values = self.model.solver.reduced_costs\n+#\n+# reduced_costs = OrderedDict()\n+# for reaction in self.model.reactions:\n+# reduced_costs[reaction.id] = reduced_values[reaction.id] - reduced_values[\n+# reaction.reverse_id]\n+# return reduced_costs\n+#\n+# @property\n+# def shadow_prices(self):\n+# self._check_freshness()\n+# return self.model.solver.shadow_prices\n+#\n+# def get_primal_by_id(self, reaction_id):\n+# \"\"\"Return a flux/primal value for a reaction.\n+#\n+# Parameters\n+# ----------\n+# reaction_id : str\n+# A reaction ID.\n+# \"\"\"\n+# self._check_freshness()\n+# return self.model.reactions.get_by_id(reaction_id).flux\n+#\n+# def __dir__(self):\n+# # Hide 'cobrapy' attributes and methods from user.\n+# fields = sorted(dir(type(self)) + list(self.__dict__.keys()))\n+# fields.remove('x')\n+# fields.remove('y')\n+# fields.remove('x_dict')\n+# fields.remove('y_dict')\n+# return fields\n" }, { "change_type": "MODIFY", "old_path": "cameo/core/solver_based_model.py", "new_path": "cameo/core/solver_based_model.py", "diff": "@@ -32,7 +32,6 @@ from cameo.core.gene import Gene\nfrom cameo.core.metabolite import Metabolite\nfrom cameo.util import inheritdocstring\nfrom .reaction import Reaction\n-from .solution import LazySolution\n__all__ = ['to_solver_based_model', 'SolverBasedModel']\n@@ -110,8 +109,8 @@ class SolverBasedModel(cobra.core.Model):\n# self._solver = solver_interface.Model()\n# self._solver.objective = solver_interface.Objective(S.Zero)\n# self._populate_solver(self.reactions, self.metabolites)\n- self._timestamp_last_optimization = None\n- self.solution = LazySolution(self)\n+ # self._timestamp_last_optimization = None\n+ # self.solution = LazySolution(self)\n# @property\n# def non_functional_genes(self):\n" }, { "change_type": "MODIFY", "old_path": "tests/data/iJO1366.pickle", "new_path": "tests/data/iJO1366.pickle", "diff": "Binary files a/tests/data/iJO1366.pickle and b/tests/data/iJO1366.pickle differ\n" }, { "change_type": "MODIFY", "old_path": "tests/data/salmonella.pickle", "new_path": "tests/data/salmonella.pickle", "diff": "Binary files a/tests/data/salmonella.pickle and b/tests/data/salmonella.pickle differ\n" } ]
Python
Apache License 2.0
biosustain/cameo
refactor: remove solution classes merged and improved upstream, no need to subclass
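With cameo's Solution and LazySolution wrappers removed, optimisation is expected to hand back the upstream cobrapy solution object directly. A sketch of the attributes that replace the old ones (assuming optimize now returns cobrapy's Solution, as the commit suggests; model and reaction IDs are illustrative):

from cameo import load_model

model = load_model("iJO1366")

solution = model.optimize()
print(solution.status)           # 'optimal' on success
print(solution.objective_value)  # replaces the old solution.f
print(solution.fluxes["PGI"])    # fluxes come as a pandas Series keyed by reaction ID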
89,735
27.03.2017 10:24:17
-7,200
84b775c9f027dbfd38f6f8b4d2e61f423291d0ed
refactor: move _repr_html_; move these upstream
[ { "change_type": "MODIFY", "old_path": "cameo/core/reaction.py", "new_path": "cameo/core/reaction.py", "diff": "@@ -468,27 +468,27 @@ class Reaction(cobra.core.Reaction):\n# number of carbons for all metabolites involved in a reaction\n# \"\"\"\n# return sum(metabolite.n_carbon for metabolite in self.metabolites)\n-\n- def _repr_html_(self):\n- return \"\"\"\n- <table>\n- <tr>\n- <td><strong>Id</strong></td><td>%s</td>\n- </tr>\n- <tr>\n- <td><strong>Name</strong></td><td>%s</td>\n- </tr>\n- <tr>\n- <td><strong>Stoichiometry</strong></td><td>%s</td>\n- </tr>\n- <tr>\n- <td><strong>GPR</strong></td><td>%s</td>\n- </tr>\n- <tr>\n- <td><strong>Lower bound</strong></td><td>%f</td>\n- </tr>\n- <tr>\n- <td><strong>Upper bound</strong></td><td>%f</td>\n- </tr>\n- </table>\n- \"\"\" % (self.id, self.name, self.reaction, self.gene_reaction_rule, self.lower_bound, self.upper_bound)\n+ #\n+ # def _repr_html_(self):\n+ # return \"\"\"\n+ # <table>\n+ # <tr>\n+ # <td><strong>Id</strong></td><td>%s</td>\n+ # </tr>\n+ # <tr>\n+ # <td><strong>Name</strong></td><td>%s</td>\n+ # </tr>\n+ # <tr>\n+ # <td><strong>Stoichiometry</strong></td><td>%s</td>\n+ # </tr>\n+ # <tr>\n+ # <td><strong>GPR</strong></td><td>%s</td>\n+ # </tr>\n+ # <tr>\n+ # <td><strong>Lower bound</strong></td><td>%f</td>\n+ # </tr>\n+ # <tr>\n+ # <td><strong>Upper bound</strong></td><td>%f</td>\n+ # </tr>\n+ # </table>\n+ # \"\"\" % (self.id, self.name, self.reaction, self.gene_reaction_rule, self.lower_bound, self.upper_bound)\n" }, { "change_type": "MODIFY", "old_path": "cameo/core/solver_based_model.py", "new_path": "cameo/core/solver_based_model.py", "diff": "@@ -139,28 +139,28 @@ class SolverBasedModel(cobra.core.Model):\n# model_copy._solver = copy(self.solver) # pragma: no cover\n# return model_copy\n- def _repr_html_(self): # pragma: no cover\n- template = \"\"\"<table>\n-<tr>\n-<td>Name</td>\n-<td>%(name)s</td>\n-</tr>\n-<tr>\n-<td>Number of metabolites</td>\n-<td>%(num_metabolites)s</td>\n-</tr>\n-<tr>\n-<td>Number of reactions</td>\n-<td>%(num_reactions)s</td>\n-</tr>\n-<tr>\n-<td>Reactions</td>\n-<td><div style=\"width:100%%; max-height:300px; overflow:auto\">%(reactions)s</div></td>\n-</tr>\n-</table>\"\"\"\n- return template % {'name': self.id, 'num_metabolites': len(self.metabolites),\n- 'num_reactions': len(self.reactions),\n- 'reactions': '<br>'.join([r.build_reaction_string() for r in self.reactions])}\n+# def _repr_html_(self): # pragma: no cover\n+# template = \"\"\"<table>\n+# <tr>\n+# <td>Name</td>\n+# <td>%(name)s</td>\n+# </tr>\n+# <tr>\n+# <td>Number of metabolites</td>\n+# <td>%(num_metabolites)s</td>\n+# </tr>\n+# <tr>\n+# <td>Number of reactions</td>\n+# <td>%(num_reactions)s</td>\n+# </tr>\n+# <tr>\n+# <td>Reactions</td>\n+# <td><div style=\"width:100%%; max-height:300px; overflow:auto\">%(reactions)s</div></td>\n+# </tr>\n+# </table>\"\"\"\n+# return template % {'name': self.id, 'num_metabolites': len(self.metabolites),\n+# 'num_reactions': len(self.reactions),\n+# 'reactions': '<br>'.join([r.build_reaction_string() for r in self.reactions])}\n# @property\n# def objective(self):\n" } ]
Python
Apache License 2.0
biosustain/cameo
refactor: move _repr_html_ move these upstream
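Only the HTML representations are commented out here; the table for a reaction can still be produced with a standalone helper based on the removed template, for example until an equivalent _repr_html_ exists upstream. A sketch grounded in the commented-out code above:

def reaction_as_html(reaction):
    """Rebuild the HTML table the removed Reaction._repr_html_ used to emit."""
    template = (
        "<table>"
        "<tr><td><strong>Id</strong></td><td>%s</td></tr>"
        "<tr><td><strong>Name</strong></td><td>%s</td></tr>"
        "<tr><td><strong>Stoichiometry</strong></td><td>%s</td></tr>"
        "<tr><td><strong>GPR</strong></td><td>%s</td></tr>"
        "<tr><td><strong>Lower bound</strong></td><td>%f</td></tr>"
        "<tr><td><strong>Upper bound</strong></td><td>%f</td></tr>"
        "</table>"
    )
    return template % (reaction.id, reaction.name, reaction.reaction,
                       reaction.gene_reaction_rule, reaction.lower_bound,
                       reaction.upper_bound)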
89,735
27.03.2017 12:56:15
-7,200
09b7f9aa713b15a0de79df012a5296f8f7e9561b
refactor: favour model context over time-machine; where straightforward, replace the use of TimeMachine with the model as context.
[ { "change_type": "MODIFY", "old_path": "cameo/api/designer.py", "new_path": "cameo/api/designer.py", "diff": "@@ -46,7 +46,6 @@ from cameo.exceptions import SolveError\nfrom cameo.strain_design import OptGene, DifferentialFVA\nfrom cameo.ui import notice, searching, stop_loader\nfrom cameo.strain_design import pathway_prediction\n-from cameo.util import TimeMachine\nfrom cameo.models import universal\nfrom cameo.strain_design.heuristic.evolutionary.objective_functions import biomass_product_coupled_min_yield\nfrom cameo.strain_design.heuristic.evolutionary.objective_functions import product_yield\n" }, { "change_type": "MODIFY", "old_path": "cameo/flux_analysis/analysis.py", "new_path": "cameo/flux_analysis/analysis.py", "diff": "@@ -38,7 +38,7 @@ from cameo.exceptions import Infeasible, SolveError\nfrom cameo.flux_analysis.util import remove_infeasible_cycles, fix_pfba_as_constraint\nfrom cameo.parallel import SequentialView\nfrom cameo.ui import notice\n-from cameo.util import TimeMachine, partition, _BIOMASS_RE_\n+from cameo.util import partition, _BIOMASS_RE_\nfrom cameo.core.utils import get_reaction_for, add_exchange\nfrom cameo.visualization.plotting import plotter\n@@ -192,7 +192,7 @@ def find_blocked_reactions(model):\nA list of reactions.\n\"\"\"\n- with TimeMachine() as tm, model:\n+ with model:\nfor exchange in model.exchanges:\nexchange.bounds = (-9999, 9999)\nfva_solution = flux_variability_analysis(model)\n" }, { "change_type": "MODIFY", "old_path": "cameo/flux_analysis/simulation.py", "new_path": "cameo/flux_analysis/simulation.py", "diff": "@@ -32,8 +32,6 @@ import numpy\nimport logging\n-from functools import partial\n-\nimport sympy\nfrom sympy import Add\nfrom sympy import Mul\n@@ -44,7 +42,7 @@ from cobra.exceptions import OptimizationError\nfrom optlang.interface import OptimizationExpression\nfrom cameo.config import ndecimals\n-from cameo.util import TimeMachine, ProblemCache, in_ipnb\n+from cameo.util import ProblemCache, in_ipnb\nfrom cameo.exceptions import SolveError\nfrom cameo.core.result import Result\nfrom cameo.visualization.palette import mapper, Palette\n@@ -75,10 +73,9 @@ def fba(model, objective=None, reactions=None, *args, **kwargs):\nContains the result of the linear solver.\n\"\"\"\n- with TimeMachine() as tm:\n+ with model:\nif objective is not None:\n- tm(do=partial(setattr, model, 'objective', objective),\n- undo=partial(setattr, model, 'objective', model.objective))\n+ model.objective = objective\nmodel.solver.optimize()\nif model.solver.status != 'optimal':\nraise SolveError('optimization failed')\n" }, { "change_type": "MODIFY", "old_path": "cameo/strain_design/deterministic/linear_programming.py", "new_path": "cameo/strain_design/deterministic/linear_programming.py", "diff": "@@ -36,7 +36,7 @@ from cameo.exceptions import SolveError\nfrom cameo.flux_analysis.analysis import phenotypic_phase_plane, flux_variability_analysis, find_essential_reactions\nfrom cameo.flux_analysis.simulation import fba\nfrom cameo.flux_analysis.structural import find_coupled_reactions_nullspace\n-from cameo.util import TimeMachine, reduce_reaction_set, decompose_reaction_groups\n+from cameo.util import reduce_reaction_set, decompose_reaction_groups\nfrom cameo.visualization.plotting import plotter\nlogger = logging.getLogger(__name__)\n@@ -244,7 +244,7 @@ class OptKnock(StrainDesignMethod):\nproduction_list = []\nbiomass_list = []\nloader_id = ui.loading()\n- with TimeMachine() as tm:\n+ with self._model:\nself._model.objective = 
target.id\nself._number_of_knockouts_constraint.lb = self._number_of_knockouts_constraint.ub - max_knockouts\ncount = 0\n@@ -280,9 +280,7 @@ class OptKnock(StrainDesignMethod):\nif len(knockouts) < max_knockouts:\nself._number_of_knockouts_constraint.lb = self._number_of_knockouts_constraint.ub - len(knockouts)\n-\n- tm(do=partial(self._model.solver.add, integer_cut),\n- undo=partial(self._model.solver.remove, integer_cut))\n+ self._model.add_cons_vars(integer_cut)\ncount += 1\nui.stop_loader(loader_id)\n@@ -354,9 +352,9 @@ class OptKnockResult(StrainDesignMethodResult):\nreturn self._target\ndef display_on_map(self, index=0, map_name=None, palette=\"YlGnBu\"):\n- with TimeMachine() as tm:\n+ with self._model:\nfor ko in self.data_frame.loc[index, \"reactions\"]:\n- self._model.reactions.get_by_id(ko).knock_out(tm)\n+ self._model.reactions.get_by_id(ko).knock_out()\nfluxes = fba(self._model)\nfluxes.display_on_map(map_name=map_name, palette=palette)\n" }, { "change_type": "MODIFY", "old_path": "cameo/strain_design/heuristic/evolutionary/processing.py", "new_path": "cameo/strain_design/heuristic/evolutionary/processing.py", "diff": "@@ -95,17 +95,16 @@ def process_gene_knockout_solution(model, solution, simulation_method, simulatio\nA list with: reactions, genes, size, fva_min, fva_max, target flux, biomass flux, yield, fitness\n\"\"\"\n- with TimeMachine() as tm:\n+ with model:\ngenes = [model.genes.get_by_id(gid) for gid in solution]\nreactions = find_gene_knockout_reactions(model, solution)\nfor reaction in reactions:\n- reaction.knock_out(tm)\n+ reaction.knock_out()\nreaction_ids = [r.id for r in reactions]\nflux_dist = simulation_method(model, reactions=objective_function.reactions,\nobjective=biomass, **simulation_kwargs)\n- tm(do=partial(setattr, model, \"objective\", biomass),\n- undo=partial(setattr, model, \"objective\", model.objective))\n+ model.objective = biomass\nfva = flux_variability_analysis(model, fraction_of_optimum=0.99, reactions=[target])\ntarget_yield = flux_dist[target] / abs(flux_dist[substrate])\n" }, { "change_type": "MODIFY", "old_path": "cameo/strain_design/heuristic/evolutionary_based.py", "new_path": "cameo/strain_design/heuristic/evolutionary_based.py", "diff": "@@ -476,17 +476,17 @@ class HeuristicOptSwapResult(StrainDesignMethodResult):\nself._processed_solutions = processed_solutions\ndef display_on_map(self, index=0, map_name=None, palette=\"YlGnBu\"):\n- with TimeMachine() as tm:\n+ with self._model:\nfor ko in self.data_frame.loc[index, \"reactions\"]:\n- self._model.reactions.get_by_id(ko).swap_cofactors(self._swap_pairs, tm)\n+ self._model.reactions.get_by_id(ko).swap_cofactors(self._swap_pairs)\nfluxes = self._simulation_method(self._model, **self._simulation_kwargs)\nfluxes.display_on_map(map_name=map_name, palette=palette)\ndef plot(self, index=0, grid=None, width=None, height=None, title=None, palette=None, **kwargs):\nwt_production = phenotypic_phase_plane(self._model, objective=self._target, variables=[self._biomass])\n- with TimeMachine() as tm:\n+ with self._model:\nfor ko in self.data_frame.loc[index, \"reactions\"]:\n- self._model.reactions.get_by_id(ko).swap_cofactors(self._swap_pairs, tm)\n+ self._model.reactions.get_by_id(ko).swap_cofactors(self._swap_pairs)\nmt_production = phenotypic_phase_plane(self._model, objective=self._target, variables=[self._biomass])\nif title is None:\n" }, { "change_type": "MODIFY", "old_path": "cameo/visualization/visualization.py", "new_path": "cameo/visualization/visualization.py", "diff": "@@ -25,10 +25,8 
@@ import networkx as nx\nfrom cobra.core import Metabolite, Reaction\n-from functools import partial\nfrom io import BytesIO\nfrom escher import Builder\n-from cameo.util import TimeMachine\ntry:\nfrom IPython.display import HTML, SVG\n@@ -101,24 +99,12 @@ cdf.embed(\"%s\", 942, 678);\ndef draw_knockout_result(model, map_name, simulation_method, knockouts, *args, **kwargs):\n- tm = TimeMachine()\n-\n- try:\n+ with model:\nfor reaction in model.reactions.get_by_any(knockouts):\n- tm(do=partial(setattr, reaction, 'lower_bound', 0),\n- undo=partial(setattr, reaction, 'lower_bound', reaction.lower_bound))\n- tm(do=partial(setattr, reaction, 'upper_bound', 0),\n- undo=partial(setattr, reaction, 'upper_bound', reaction.upper_bound))\n-\n+ reaction.knock_out()\nsolution = simulation_method(model, *args, **kwargs).x_dict\n- tm.reset()\n-\nreturn Builder(map_name, reaction_data=solution)\n- except Exception as e:\n- tm.reset()\n- raise e\n-\ndef inchi_to_svg(inchi, file=None, debug=False, three_d=False):\n\"\"\"Generate an SVG drawing from an InChI string.\n" }, { "change_type": "MODIFY", "old_path": "tests/test_flux_analysis.py", "new_path": "tests/test_flux_analysis.py", "diff": "@@ -418,10 +418,9 @@ class TestStructural:\nfor group in coupled_reactions:\nrepresentative = pick_one(group)\nif representative not in essential_reactions:\n- with TimeMachine() as tm:\n+ with core_model:\nassert core_model == representative.model\n- tm(do=partial(core_model.remove_reactions, [representative], delete=False),\n- undo=partial(core_model.add_reactions, [representative]))\n+ core_model.remove_reactions([representative])\n# # FIXME: Hack because of optlang queue issues with GLPK\n# core_model.solver.update()\nassert representative not in core_model.reactions\n@@ -437,12 +436,11 @@ class TestStructural:\nfor group in coupled_reactions:\nrepresentative = pick_one(group)\nif representative not in essential_reactions:\n- with TimeMachine() as tm:\n+ with core_model:\nfwd_var_name = representative.forward_variable.name\nrev_var_name = representative.reverse_variable.name\nassert core_model == representative.model\n- tm(do=partial(core_model.remove_reactions, [representative], delete=False),\n- undo=partial(core_model.add_reactions, [representative]))\n+ core_model.remove_reactions([representative])\n# # FIXME: Hack because of optlang queue issues with GLPK\n# core_model.solver.update()\nassert representative not in core_model.reactions\n" } ]
Python
Apache License 2.0
biosustain/cameo
refactor: favour model context over time-machine Where straightforward, replace the use of TimeMachine with model as context.
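A minimal, self-contained sketch of the context-manager pattern this commit adopts; the cobrapy textbook model and the PGI reaction id are illustrative choices, not part of the change:

    import cobra.test

    model = cobra.test.create_test_model("textbook")
    with model:
        model.reactions.PGI.knock_out()        # change is queued for automatic rollback
        print(model.reactions.PGI.bounds)      # (0, 0) while the context is active
    print(model.reactions.PGI.bounds)          # original bounds restored on exit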
89,735
29.03.2017 14:59:21
-7,200
0b8e2a77874b0ede25e1000d55e75a9b18f26233
refactor: use optlang constants check status against optlang constants
[ { "change_type": "MODIFY", "old_path": "cameo/flux_analysis/analysis.py", "new_path": "cameo/flux_analysis/analysis.py", "diff": "@@ -29,7 +29,7 @@ from cobra.util import fix_objective_as_constraint\nfrom numpy import trapz\nfrom six.moves import zip\nfrom sympy import S\n-from optlang.interface import UNBOUNDED\n+from optlang.interface import UNBOUNDED, OPTIMAL\nimport cameo\nfrom cameo import config\n@@ -86,7 +86,7 @@ def find_essential_metabolites(model, threshold=1e-6, force_steady_state=False):\n# Essential metabolites are only in reactions that carry flux.\nmetabolites = set()\nmodel.solver.optimize()\n- if model.solver.status != 'optimal':\n+ if model.solver.status != OPTIMAL:\nraise SolveError('optimization failed')\nsolution = get_solution(model)\nfor reaction_id, flux in six.iteritems(solution.fluxes):\n@@ -98,7 +98,7 @@ def find_essential_metabolites(model, threshold=1e-6, force_steady_state=False):\nwith model:\nmetabolite.knock_out(force_steady_state=force_steady_state)\nmodel.solver.optimize()\n- if model.solver.status != 'optimal' or model.objective.value < threshold:\n+ if model.solver.status != OPTIMAL or model.objective.value < threshold:\nessential.append(metabolite)\nreturn essential\n@@ -121,7 +121,7 @@ def find_essential_reactions(model, threshold=1e-6):\nessential = []\ntry:\nmodel.solver.optimize()\n- if model.solver.status != 'optimal':\n+ if model.solver.status != OPTIMAL:\nraise SolveError('optimization failed')\nsolution = get_solution(model)\nfor reaction_id, flux in six.iteritems(solution.fluxes):\n@@ -130,7 +130,7 @@ def find_essential_reactions(model, threshold=1e-6):\nwith model:\nreaction.knock_out()\nmodel.solver.optimize()\n- if model.solver.status != 'optimal' or model.objective.value < threshold:\n+ if model.solver.status != OPTIMAL or model.objective.value < threshold:\nessential.append(reaction)\nexcept SolveError as e:\n@@ -158,7 +158,7 @@ def find_essential_genes(model, threshold=1e-6):\nessential = []\ntry:\nmodel.solver.optimize()\n- if model.solver.status != 'optimal':\n+ if model.solver.status != OPTIMAL:\nraise SolveError('optimization failed')\nsolution = get_solution(model)\ngenes_to_check = set()\n@@ -169,7 +169,7 @@ def find_essential_genes(model, threshold=1e-6):\nwith model:\ngene.knock_out()\nmodel.solver.optimize()\n- if model.solver.status != 'optimal' or model.objective.value < threshold:\n+ if model.solver.status != OPTIMAL or model.objective.value < threshold:\nessential.append(gene)\nexcept SolveError as e:\n@@ -384,7 +384,7 @@ def _flux_variability_analysis(model, reactions=None):\nmodel.solver.objective.set_linear_coefficients({reaction.forward_variable: 1.,\nreaction.reverse_variable: -1.})\nmodel.solver.optimize()\n- if model.solver.status == 'optimal':\n+ if model.solver.status == OPTIMAL:\nfva_sol[reaction.id]['lower_bound'] = model.objective.value\nelif model.solver.status == UNBOUNDED:\nfva_sol[reaction.id]['lower_bound'] = -numpy.inf\n@@ -402,7 +402,7 @@ def _flux_variability_analysis(model, reactions=None):\nreaction.reverse_variable: -1.})\nmodel.solver.optimize()\n- if model.solver.status == 'optimal':\n+ if model.solver.status == OPTIMAL:\nfva_sol[reaction.id]['upper_bound'] = model.objective.value\nelif model.solver.status == UNBOUNDED:\nfva_sol[reaction.id]['upper_bound'] = numpy.inf\n@@ -485,7 +485,7 @@ def _cycle_free_fva(model, reactions=None, sloppy=True, sloppy_bound=666):\nif model.solver.status == UNBOUNDED:\nfva_sol[reaction.id]['lower_bound'] = -numpy.inf\ncontinue\n- elif model.solver.status != 
'optimal':\n+ elif model.solver.status != OPTIMAL:\nfva_sol[reaction.id]['lower_bound'] = 0\ncontinue\nbound = model.objective.value\n@@ -510,7 +510,7 @@ def _cycle_free_fva(model, reactions=None, sloppy=True, sloppy_bound=666):\nknockout_reaction.knock_out()\nmodel.objective.direction = 'min'\nmodel.solver.optimize()\n- if model.solver.status == 'optimal':\n+ if model.solver.status == OPTIMAL:\nfva_sol[reaction.id]['lower_bound'] = model.objective.value\nelif model.solver.status == UNBOUNDED:\nfva_sol[reaction.id]['lower_bound'] = -numpy.inf\n@@ -524,7 +524,7 @@ def _cycle_free_fva(model, reactions=None, sloppy=True, sloppy_bound=666):\nif model.solver.status == UNBOUNDED:\nfva_sol[reaction.id]['upper_bound'] = numpy.inf\ncontinue\n- elif model.solver.status != 'optimal':\n+ elif model.solver.status != OPTIMAL:\nfva_sol[reaction.id]['upper_bound'] = 0\ncontinue\nbound = model.objective.value\n@@ -549,7 +549,7 @@ def _cycle_free_fva(model, reactions=None, sloppy=True, sloppy_bound=666):\nknockout_reaction.knock_out()\nmodel.objective.direction = 'max'\nmodel.solver.optimize()\n- if model.solver.status == 'optimal':\n+ if model.solver.status == OPTIMAL:\nfva_sol[reaction.id]['upper_bound'] = model.objective.value\nelif model.solver.status == UNBOUNDED:\nfva_sol[reaction.id]['upper_bound'] = numpy.inf\n" }, { "change_type": "MODIFY", "old_path": "cameo/flux_analysis/simulation.py", "new_path": "cameo/flux_analysis/simulation.py", "diff": "@@ -40,7 +40,7 @@ from cobra import get_solution, Reaction\nfrom cobra.flux_analysis.parsimonious import add_pfba\nfrom cobra.exceptions import OptimizationError\n-from optlang.interface import OptimizationExpression\n+from optlang.interface import OptimizationExpression, OPTIMAL\nfrom cameo.config import ndecimals\nfrom cameo.util import ProblemCache, in_ipnb\nfrom cameo.exceptions import SolveError\n@@ -77,7 +77,7 @@ def fba(model, objective=None, reactions=None, *args, **kwargs):\nif objective is not None:\nmodel.objective = objective\nmodel.solver.optimize()\n- if model.solver.status != 'optimal':\n+ if model.solver.status != OPTIMAL:\nraise SolveError('optimization failed')\nsolution = get_solution(model)\nif reactions is not None:\n" }, { "change_type": "MODIFY", "old_path": "cameo/flux_analysis/structural.py", "new_path": "cameo/flux_analysis/structural.py", "diff": "@@ -22,6 +22,7 @@ from itertools import product\nimport numpy as np\nimport optlang\n+from optlang.interface import OPTIMAL\nimport pandas\nimport six\nimport sympy\n@@ -461,7 +462,7 @@ class MinimalCutSetsEnumerator(ShortestElementaryFluxModes): # pragma: no cover\nfor reac_id in mcs:\nself._primal_model.reactions.get_by_id(reac_id).knock_out()\nself._primal_model.solver.optimize()\n- if self._primal_model.solver.status == 'optimal':\n+ if self._primal_model.solver.status == OPTIMAL:\nreturn mcs\nelse:\nreturn None\n@@ -626,7 +627,7 @@ class MinimalCutSetsEnumerator(ShortestElementaryFluxModes): # pragma: no cover\nwith self._primal_model:\nreaction.knock_out()\nself._primal_model.solver.optimize()\n- if self._primal_model.solver.status != 'optimal':\n+ if self._primal_model.solver.status != OPTIMAL:\nillegal_knockouts.append(reaction.id)\nself._illegal_knockouts = illegal_knockouts\nreturn cloned_constraints\n" } ]
Python
Apache License 2.0
biosustain/cameo
refactor: use optlang constants check status against optlang constants
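The status check introduced here, in isolation; the textbook model is only an example, any cobrapy/cameo model works the same way:

    import cobra.test
    from optlang.interface import OPTIMAL

    model = cobra.test.create_test_model("textbook")
    model.solver.optimize()
    if model.solver.status != OPTIMAL:     # compare against the constant, not the string 'optimal'
        raise RuntimeError('optimization failed: %s' % model.solver.status)
    print(model.solver.objective.value)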
89,735
29.03.2017 15:00:04
-7,200
337c829a9f053b24fb29091e113580883fd55223
fix: swap_cofactors Fix remnant use of reaction.swap_cofactors (old bug)
[ { "change_type": "MODIFY", "old_path": "cameo/strain_design/heuristic/evolutionary/processing.py", "new_path": "cameo/strain_design/heuristic/evolutionary/processing.py", "diff": "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n-from functools import partial\n-\nfrom cobra.manipulation.delete import find_gene_knockout_reactions\n+from cameo.core.manipulation import swap_cofactors\nfrom cameo import flux_variability_analysis\n-from cameo.util import TimeMachine\ndef process_reaction_knockout_solution(model, solution, simulation_method, simulation_kwargs,\n@@ -147,16 +145,14 @@ def process_reaction_swap_solution(model, solution, simulation_method, simulatio\n[fitness, [fitness]]\n\"\"\"\n- with TimeMachine() as tm:\n+ with model:\nreactions = [model.reactions.get_by_id(rid) for rid in solution]\nfor reaction in reactions:\n- reaction.swap_cofactors(tm, swap_pairs)\n+ swap_cofactors(reaction, model, swap_pairs)\nflux_dist = simulation_method(model, reactions=objective_function.reactions,\nobjective=biomass, **simulation_kwargs)\n- tm(do=partial(setattr, model, \"objective\", biomass),\n- undo=partial(setattr, model, \"objective\", model.objective))\n-\n+ model.objective = biomass\nfva = flux_variability_analysis(model, fraction_of_optimum=0.99, reactions=[target])\ntarget_yield = flux_dist[target] / abs(flux_dist[substrate])\nreturn [solution, fva.lower_bound(target),\n" }, { "change_type": "MODIFY", "old_path": "cameo/strain_design/heuristic/evolutionary_based.py", "new_path": "cameo/strain_design/heuristic/evolutionary_based.py", "diff": "@@ -26,6 +26,7 @@ from pandas import DataFrame\nfrom cobra import Model\nfrom cameo.core.strain_design import StrainDesignMethod, StrainDesignMethodResult, StrainDesign\nfrom cameo.core.target import ReactionKnockoutTarget, GeneKnockoutTarget, ReactionCofactorSwapTarget\n+from cameo.core.manipulation import swap_cofactors\nfrom cameo.exceptions import SolveError\nfrom cameo.flux_analysis.analysis import phenotypic_phase_plane\nfrom cameo.flux_analysis.simulation import fba\n@@ -478,7 +479,7 @@ class HeuristicOptSwapResult(StrainDesignMethodResult):\ndef display_on_map(self, index=0, map_name=None, palette=\"YlGnBu\"):\nwith self._model:\nfor ko in self.data_frame.loc[index, \"reactions\"]:\n- self._model.reactions.get_by_id(ko).swap_cofactors(self._swap_pairs)\n+ swap_cofactors(self._model.reactions.get_by_id(ko), self._model, self._swap_pairs)\nfluxes = self._simulation_method(self._model, **self._simulation_kwargs)\nfluxes.display_on_map(map_name=map_name, palette=palette)\n@@ -486,7 +487,7 @@ class HeuristicOptSwapResult(StrainDesignMethodResult):\nwt_production = phenotypic_phase_plane(self._model, objective=self._target, variables=[self._biomass])\nwith self._model:\nfor ko in self.data_frame.loc[index, \"reactions\"]:\n- self._model.reactions.get_by_id(ko).swap_cofactors(self._swap_pairs)\n+ swap_cofactors(self._model.reactions.get_by_id(ko), self._model, self._swap_pairs)\nmt_production = phenotypic_phase_plane(self._model, objective=self._target, variables=[self._biomass])\nif title is None:\n" } ]
Python
Apache License 2.0
biosustain/cameo
fix: swap_cofactors Fix remnant use of reaction.swap_cofactors (old bug)
89,735
29.03.2017 16:22:10
-7,200
fe9797ad4b3ff722afe6c170ccddab5a433439e6
refactor: move solver exceptions to cobra These exceptions are necessary in cobra as well, so let's define them there and use cobra's `assert_optimal` to handle which error to throw.
[ { "change_type": "MODIFY", "old_path": "cameo/exceptions.py", "new_path": "cameo/exceptions.py", "diff": "@@ -27,24 +27,3 @@ class IncompatibleTargets(Exception):\nclass SolveError(Exception):\ndef __init__(self, message):\nsuper(SolveError, self).__init__(message)\n-\n-\n-class Infeasible(SolveError):\n- pass\n-\n-\n-class Unbounded(SolveError):\n- pass\n-\n-\n-class FeasibleButNotOptimal(SolveError):\n- pass\n-\n-\n-class UndefinedSolution(SolveError):\n- pass\n-\n-\n-_OPTLANG_TO_EXCEPTIONS_DICT = dict((\n- (optlang.interface.INFEASIBLE, Infeasible), (optlang.interface.UNBOUNDED, Unbounded),\n- (optlang.interface.FEASIBLE, FeasibleButNotOptimal), (optlang.interface.UNDEFINED, UndefinedSolution)))\n" }, { "change_type": "MODIFY", "old_path": "cameo/flux_analysis/analysis.py", "new_path": "cameo/flux_analysis/analysis.py", "diff": "@@ -24,8 +24,10 @@ from functools import reduce\nimport numpy\nimport pandas\nimport six\n-from cobra import Reaction, Metabolite, get_solution\n-from cobra.util import fix_objective_as_constraint\n+from cobra import Reaction, Metabolite\n+from cobra.core import get_solution\n+from cobra.util import fix_objective_as_constraint, assert_optimal\n+from cobra.exceptions import Infeasible\nfrom numpy import trapz\nfrom six.moves import zip\nfrom sympy import S\n@@ -34,7 +36,7 @@ from optlang.interface import UNBOUNDED, OPTIMAL\nimport cameo\nfrom cameo import config\nfrom cameo.core.result import Result\n-from cameo.exceptions import Infeasible, SolveError\n+from cameo.exceptions import SolveError\nfrom cameo.flux_analysis.util import remove_infeasible_cycles, fix_pfba_as_constraint\nfrom cameo.parallel import SequentialView\nfrom cameo.ui import notice\n@@ -86,8 +88,7 @@ def find_essential_metabolites(model, threshold=1e-6, force_steady_state=False):\n# Essential metabolites are only in reactions that carry flux.\nmetabolites = set()\nmodel.solver.optimize()\n- if model.solver.status != OPTIMAL:\n- raise SolveError('optimization failed')\n+ assert_optimal(model)\nsolution = get_solution(model)\nfor reaction_id, flux in six.iteritems(solution.fluxes):\nif abs(flux) > 0:\n@@ -121,8 +122,7 @@ def find_essential_reactions(model, threshold=1e-6):\nessential = []\ntry:\nmodel.solver.optimize()\n- if model.solver.status != OPTIMAL:\n- raise SolveError('optimization failed')\n+ assert_optimal(model)\nsolution = get_solution(model)\nfor reaction_id, flux in six.iteritems(solution.fluxes):\nif abs(flux) > 0:\n@@ -158,8 +158,7 @@ def find_essential_genes(model, threshold=1e-6):\nessential = []\ntry:\nmodel.solver.optimize()\n- if model.solver.status != OPTIMAL:\n- raise SolveError('optimization failed')\n+ assert_optimal(model)\nsolution = get_solution(model)\ngenes_to_check = set()\nfor reaction_id, flux in six.iteritems(solution.fluxes):\n" }, { "change_type": "MODIFY", "old_path": "cameo/flux_analysis/simulation.py", "new_path": "cameo/flux_analysis/simulation.py", "diff": "@@ -36,11 +36,13 @@ import sympy\nfrom sympy import Add\nfrom sympy import Mul\nfrom sympy.parsing.sympy_parser import parse_expr\n-from cobra import get_solution, Reaction\n+from cobra import Reaction\n+from cobra.core import get_solution\n+from cobra.util import assert_optimal\nfrom cobra.flux_analysis.parsimonious import add_pfba\nfrom cobra.exceptions import OptimizationError\n-from optlang.interface import OptimizationExpression, OPTIMAL\n+from optlang.interface import OptimizationExpression\nfrom cameo.config import ndecimals\nfrom cameo.util import ProblemCache, in_ipnb\nfrom cameo.exceptions 
import SolveError\n@@ -77,8 +79,7 @@ def fba(model, objective=None, reactions=None, *args, **kwargs):\nif objective is not None:\nmodel.objective = objective\nmodel.solver.optimize()\n- if model.solver.status != OPTIMAL:\n- raise SolveError('optimization failed')\n+ assert_optimal(model)\nsolution = get_solution(model)\nif reactions is not None:\nresult = FluxDistributionResult({r: solution.get_primal_by_id(r) for r in reactions}, solution.f)\n" }, { "change_type": "MODIFY", "old_path": "tests/test_solver_based_model.py", "new_path": "tests/test_solver_based_model.py", "diff": "@@ -30,13 +30,11 @@ import six\nfrom cobra.util import fix_objective_as_constraint\nfrom cobra import Model, Reaction, Metabolite\n+from cobra.exceptions import OptimizationError\n-import cameo\n-import cameo.core\nfrom cameo import load_model\nfrom cameo.config import solvers\nfrom cameo.core.utils import get_reaction_for, load_medium, medium\n-from cameo.exceptions import UndefinedSolution\nfrom cameo.flux_analysis.structural import create_stoichiometric_array\nfrom cameo.flux_analysis.analysis import find_essential_genes, find_essential_metabolites, find_essential_reactions\n@@ -853,14 +851,14 @@ class TestModel:\ndef test_essential_genes(self, core_model):\nobserved_essential_genes = [g.id for g in find_essential_genes(core_model)]\nassert sorted(observed_essential_genes) == sorted(ESSENTIAL_GENES)\n- with pytest.raises(cameo.exceptions.SolveError):\n+ with pytest.raises(OptimizationError):\ncore_model.reactions.Biomass_Ecoli_core_N_LPAREN_w_FSLASH_GAM_RPAREN__Nmet2.lower_bound = 999999.\nfind_essential_genes(core_model)\ndef test_essential_reactions(self, core_model):\nobserved_essential_reactions = [r.id for r in find_essential_reactions(core_model)]\nassert sorted(observed_essential_reactions) == sorted(ESSENTIAL_REACTIONS)\n- with pytest.raises(cameo.exceptions.SolveError):\n+ with pytest.raises(OptimizationError):\ncore_model.reactions.Biomass_Ecoli_core_N_LPAREN_w_FSLASH_GAM_RPAREN__Nmet2.lower_bound = 999999.\nfind_essential_reactions(core_model)\n@@ -870,11 +868,11 @@ class TestModel:\nassert sorted(essential_metabolites_unbalanced) == sorted(ESSENTIAL_METABOLITES)\nassert sorted(essential_metabolites_balanced) == sorted(ESSENTIAL_METABOLITES)\n- with pytest.raises(cameo.exceptions.SolveError):\n+ with pytest.raises(OptimizationError):\ncore_model.reactions.Biomass_Ecoli_core_N_LPAREN_w_FSLASH_GAM_RPAREN__Nmet2.lower_bound = 999999.\nfind_essential_metabolites(core_model, force_steady_state=False)\n- with pytest.raises(cameo.exceptions.SolveError):\n+ with pytest.raises(OptimizationError):\ncore_model.reactions.Biomass_Ecoli_core_N_LPAREN_w_FSLASH_GAM_RPAREN__Nmet2.lower_bound = 999999.\nfind_essential_metabolites(core_model, force_steady_state=True)\n" }, { "change_type": "MODIFY", "old_path": "tests/test_strain_design_deterministic.py", "new_path": "tests/test_strain_design_deterministic.py", "diff": "@@ -21,15 +21,15 @@ import pytest\nfrom pandas import DataFrame\nfrom pandas.util.testing import assert_frame_equal\n+from cobra.exceptions import Infeasible\n+\nimport cameo\nfrom cameo import fba\nfrom cameo.config import solvers\nfrom cameo.strain_design.deterministic.flux_variability_based import (FSEOF,\nDifferentialFVA,\nFSEOFResult)\n-from cameo.exceptions import Infeasible\nfrom cameo.strain_design.deterministic.linear_programming import OptKnock\n-from cameo.util import TimeMachine\nTRAVIS = bool(os.getenv('TRAVIS', False))\nTESTDIR = os.path.dirname(__file__)\n" } ]
Python
Apache License 2.0
biosustain/cameo
refactor: move solver exceptions to cobra These exceptions are necessary in cobra as well, so let's define them there and use cobra's `assert_optimal` to handle which error to throw.
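A small sketch of the new error handling, again on the textbook model; `assert_optimal` raises the appropriate cobra exception for the current solver status:

    import cobra.test
    from cobra.exceptions import OptimizationError
    from cobra.util import assert_optimal

    model = cobra.test.create_test_model("textbook")
    model.solver.optimize()
    try:
        assert_optimal(model)              # raises an OptimizationError subclass if not optimal
    except OptimizationError as error:
        print('solve failed:', error)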
89,735
29.03.2017 17:00:19
-7,200
6a307e3de8da3259f25908951b76f135fa97e761
fix: re-enable tests for all solvers
[ { "change_type": "MODIFY", "old_path": "tests/conftest.py", "new_path": "tests/conftest.py", "diff": "@@ -10,7 +10,6 @@ from cameo.config import solvers\ncameo_directory = abspath(join(dirname(abspath(__file__)), \"..\"))\ncameo_location = abspath(join(cameo_directory, \"..\"))\ndata_dir = join(cameo_directory, \"tests\", \"data\", \"\")\n-solvers = ['cplex'] #TODO: remove this\[email protected](scope=\"session\")\n@@ -63,7 +62,7 @@ def toy_model(request, data_directory):\n# FIXME: should be possible to scope at least to class\[email protected](scope=\"function\", params=['glpk'])#params=list(solvers))\[email protected](scope=\"function\", params=list(solvers))\ndef core_model(request, data_directory):\necoli_core = load_model(join(data_directory, 'EcoliCore.xml'), sanitize=False)\necoli_core.solver = request.param\n" } ]
Python
Apache License 2.0
biosustain/cameo
fix: re-enable tests for all solvers
89,735
29.03.2017 17:19:31
-7,200
b7d0efe714c2be1387fc73af426d4d206ea97014
fix: skip test for essential metabolites; that one still needs some work.
[ { "change_type": "MODIFY", "old_path": "tests/test_solver_based_model.py", "new_path": "tests/test_solver_based_model.py", "diff": "@@ -862,19 +862,24 @@ class TestModel:\ncore_model.reactions.Biomass_Ecoli_core_N_LPAREN_w_FSLASH_GAM_RPAREN__Nmet2.lower_bound = 999999.\nfind_essential_reactions(core_model)\n- def test_essential_metabolites(self, core_model):\n- essential_metabolites_unbalanced = [m.id for m in find_essential_metabolites(core_model, force_steady_state=False)]\n- essential_metabolites_balanced = [m.id for m in find_essential_metabolites(core_model, force_steady_state=True)]\n- assert sorted(essential_metabolites_unbalanced) == sorted(ESSENTIAL_METABOLITES)\n+ def test_essential_metabolites_steady_state(self, core_model):\n+ essential_metabolites_balanced = [m.id for m in find_essential_metabolites(core_model,\n+ force_steady_state=True)]\nassert sorted(essential_metabolites_balanced) == sorted(ESSENTIAL_METABOLITES)\nwith pytest.raises(OptimizationError):\ncore_model.reactions.Biomass_Ecoli_core_N_LPAREN_w_FSLASH_GAM_RPAREN__Nmet2.lower_bound = 999999.\n- find_essential_metabolites(core_model, force_steady_state=False)\n+ find_essential_metabolites(core_model, force_steady_state=True)\n+\n+ @pytest.mark.xfail(reason='needs some refactoring, uses missing bounds, not allowed by cplex')\n+ def test_essential_metabolites(self, core_model):\n+ essential_metabolites_unbalanced = [m.id for m in find_essential_metabolites(core_model,\n+ force_steady_state=False)]\n+ assert sorted(essential_metabolites_unbalanced) == sorted(ESSENTIAL_METABOLITES)\nwith pytest.raises(OptimizationError):\ncore_model.reactions.Biomass_Ecoli_core_N_LPAREN_w_FSLASH_GAM_RPAREN__Nmet2.lower_bound = 999999.\n- find_essential_metabolites(core_model, force_steady_state=True)\n+ find_essential_metabolites(core_model, force_steady_state=False)\n# def test_effective_bounds(self, core_model):\n# core_model.reactions.Biomass_Ecoli_core_N_LPAREN_w_FSLASH_GAM_RPAREN__Nmet2.lower_bound = 0.873921\n" } ]
Python
Apache License 2.0
biosustain/cameo
fix: skip test for essential metabolites; that one still needs some work.
89,735
29.03.2017 16:57:10
-7,200
b4d2f191a8a83af9197a7440f400967f9f25bd34
chore: schedule devel06 for ci and use cameo branch of cobrapy. Also only pull cplex for py27, 34 as unavailable for higher versions.
[ { "change_type": "MODIFY", "old_path": ".travis.yml", "new_path": ".travis.yml", "diff": "@@ -21,6 +21,7 @@ branches:\n- master\n- devel\n- devel-nonfree\n+ - devel06\n- /^[0-9]+\\.[0-9]+\\.[0-9]+[.0-9ab]*$/\ncache:\n" }, { "change_type": "MODIFY", "old_path": "requirements.txt", "new_path": "requirements.txt", "diff": "@@ -4,7 +4,7 @@ blessings>=1.5.1\npandas>=0.15.2\nordered-set==1.2\ninspyred>=1.0\n-cobra>=0.6.0a2\n+cobra>=0.6.0a4\noptlang>=0.3.0\nescher>=1.0.0\nnumexpr>=2.4\n" }, { "change_type": "MODIFY", "old_path": "tests/data/update-pickles.py", "new_path": "tests/data/update-pickles.py", "diff": "@@ -11,4 +11,3 @@ with open('iJO1366.pickle', 'wb') as out:\nsalmonella = cobra.test.create_test_model('salmonella')\nwith open('salmonella.pickle', 'wb') as out:\npickle.dump(salmonella, out)\n-\n" } ]
Python
Apache License 2.0
biosustain/cameo
chore: schedule devel06 for ci and use cameo branch of cobrapy. Also only pull cplex for py27, 34 as unavailable for higher versions.
89,735
30.03.2017 09:16:19
-7,200
2cd69efb57bfbf16c4621e63718a68ecf6792008
fix: save pickles in protocol 2 / use glpk To enable reading in old pythons and with missing cplex
[ { "change_type": "MODIFY", "old_path": "tests/data/iJO1366.pickle", "new_path": "tests/data/iJO1366.pickle", "diff": "Binary files a/tests/data/iJO1366.pickle and b/tests/data/iJO1366.pickle differ\n" }, { "change_type": "MODIFY", "old_path": "tests/data/salmonella.pickle", "new_path": "tests/data/salmonella.pickle", "diff": "Binary files a/tests/data/salmonella.pickle and b/tests/data/salmonella.pickle differ\n" }, { "change_type": "MODIFY", "old_path": "tests/data/update-pickles.py", "new_path": "tests/data/update-pickles.py", "diff": "import pickle\nimport cobra.test\n+import optlang\nfrom cameo import load_model\n-ijo = load_model('iJO1366.xml')\n+ijo = load_model('iJO1366.xml', solver_interface=optlang.glpk_interface)\nwith open('iJO1366.pickle', 'wb') as out:\n- pickle.dump(ijo, out)\n+ pickle.dump(ijo, out, protocol=2)\nsalmonella = cobra.test.create_test_model('salmonella')\n+salmonella.solver = 'glpk'\nwith open('salmonella.pickle', 'wb') as out:\n- pickle.dump(salmonella, out)\n+ pickle.dump(salmonella, out, protocol=2)\n" } ]
Python
Apache License 2.0
biosustain/cameo
fix: save pickles in protocol 2 / use glpk To enable reading in old pythons and with missing cplex
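The same idea outside the repository's update script, assuming the cobrapy textbook model and an installed glpk:

    import pickle
    import cobra.test

    model = cobra.test.create_test_model("textbook")
    model.solver = 'glpk'                           # avoid a hard dependency on cplex
    with open('textbook.pickle', 'wb') as handle:
        pickle.dump(model, handle, protocol=2)      # protocol 2 loads on both Python 2 and 3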
89,735
30.03.2017 09:16:39
-7,200
35db1fddac6b1cbb1e42203df95bff4157d21fa7
fix: catch SolverNotFound error
[ { "change_type": "MODIFY", "old_path": "cameo/strain_design/pathway_prediction/pathway_predictor.py", "new_path": "cameo/strain_design/pathway_prediction/pathway_predictor.py", "diff": "@@ -26,6 +26,7 @@ from cobra import DictList\nfrom sympy import Add, Mul, RealNumber\nfrom cobra import Model, Metabolite, Reaction\n+from cobra.util import SolverNotFound\nfrom cameo import fba\nfrom cameo import models, phenotypic_phase_plane\n@@ -260,7 +261,7 @@ class PathwayPredictor(StrainDesignMethod):\ntry:\nlogger.info('Trying to set solver to cplex to speed up pathway predictions.')\nself.model.solver = 'cplex'\n- except ValueError:\n+ except SolverNotFound:\nlogger.info('cplex not available for pathway predictions.')\nself.new_reactions = self._extend_model(model.exchanges)\n" } ]
Python
Apache License 2.0
biosustain/cameo
fix: catch SolverNotFound error
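Roughly what the fixed code does, shown standalone; the model creation line is illustrative:

    import cobra.test
    from cobra.util import SolverNotFound

    model = cobra.test.create_test_model("textbook")
    try:
        model.solver = 'cplex'         # prefer the commercial solver when it is installed
    except SolverNotFound:
        print('cplex not available, keeping the current solver')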
89,735
30.03.2017 10:35:12
-7,200
725e4e2d4df836ef54fe2e777ec80b631ce19c44
fix: use iteritems instead of items; iteritems is also supported by pandas Series
[ { "change_type": "MODIFY", "old_path": "cameo/strain_design/deterministic/flux_variability_based.py", "new_path": "cameo/strain_design/deterministic/flux_variability_based.py", "diff": "@@ -872,7 +872,7 @@ class FSEOF(StrainDesignMethod):\ntarget.lower_bound = level\ntarget.upper_bound = level\nsolution = simulation_method(model, **simulation_kwargs)\n- for reaction_id, flux in solution.fluxes.items():\n+ for reaction_id, flux in solution.fluxes.iteritems():\nresults[reaction_id].append(round(flux, ndecimals))\n# Test each reaction\n" } ]
Python
Apache License 2.0
biosustain/cameo
fix: use iteritems instead of items; iteritems is also supported by pandas Series
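Why it matters: solution.fluxes is a pandas Series here, and iteritems is available on the pandas releases cameo targeted at the time (it was removed from pandas much later). A toy illustration with made-up flux values:

    import pandas as pd

    fluxes = pd.Series({'PGI': 4.86, 'PFK': 7.48})   # hypothetical flux values
    for reaction_id, flux in fluxes.iteritems():
        print(reaction_id, round(flux, 6))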
89,735
30.03.2017 14:39:18
-7,200
e877911290f85f0c5afc9f0d7c24cb4a56d65c6a
fix: reaction.copy fails across models Reaction.copy in cobrapy needs refactoring.
[ { "change_type": "MODIFY", "old_path": "cameo/strain_design/pathway_prediction/pathway_predictor.py", "new_path": "cameo/strain_design/pathway_prediction/pathway_predictor.py", "diff": "@@ -20,6 +20,7 @@ import warnings\nfrom collections import Counter\nfrom functools import partial\nfrom math import ceil\n+from copy import copy\nimport six\nfrom cobra import DictList\n@@ -454,7 +455,7 @@ class PathwayPredictor(StrainDesignMethod):\nif metabolite.id in original_model_metabolites:\ncontinue\n- new_reactions.append(reaction.copy())\n+ new_reactions.append(copy(reaction))\nself.model.add_reactions(new_reactions)\nreturn new_reactions\n" } ]
Python
Apache License 2.0
biosustain/cameo
fix: reaction.copy fails across models Reaction.copy in cobrapy needs refactoring.
89,735
31.03.2017 10:43:36
-7,200
4a2c90f7f823a5a47daeeee5adfc077af8519bea
fix: sloppy=True for objective def fails w cplex
[ { "change_type": "MODIFY", "old_path": "cameo/flux_analysis/simulation.py", "new_path": "cameo/flux_analysis/simulation.py", "diff": "@@ -285,8 +285,7 @@ def lmoma(model, reference=None, cache=None, reactions=None, *args, **kwargs):\ndef create_objective(model, variables):\nreturn model.solver.interface.Objective(add([mul((One, var)) for var in variables]),\ndirection=\"min\",\n- sloppy=True)\n-\n+ sloppy=False)\ncache.add_objective(create_objective, None, cache.variables.values())\ntry:\n" } ]
Python
Apache License 2.0
biosustain/cameo
fix: sloppy=True for objective def fails w cplex
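A sketch of building such an objective with validation enabled (sloppy=False); the textbook model, the PGI variables, and the sympy Add expression mirror the era's optlang API and are assumptions here:

    import cobra.test
    from sympy import Add

    model = cobra.test.create_test_model("textbook")
    variables = [model.reactions.PGI.forward_variable,
                 model.reactions.PGI.reverse_variable]
    objective = model.solver.interface.Objective(Add(*variables),
                                                 direction="min", sloppy=False)
    model.objective = objective
    model.solver.optimize()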
89,735
31.03.2017 10:44:48
-7,200
98f749d80439d4d8de8208a4a355cf8762b72f1e
fix: pathway tests manipulate the model Some tests change the model, causing others to fail. Change the scope.
[ { "change_type": "MODIFY", "old_path": "tests/test_pathway_predictions.py", "new_path": "tests/test_pathway_predictions.py", "diff": "@@ -38,7 +38,7 @@ def pathway_predictor(request, data_directory, universal_model):\nreturn core_model, predictor\[email protected](scope=\"module\")\[email protected](scope=\"function\")\ndef pathway_predictor_result(pathway_predictor):\ncore_model, predictor = pathway_predictor\nreturn core_model, predictor.run(product='L-Serine', max_predictions=1)\n" } ]
Python
Apache License 2.0
biosustain/cameo
fix: pathway tests manipulate the model Some tests change the model, causing others to fail. Change the scope.
89,741
05.04.2017 10:32:59
-7,200
7cc84417803d6bef08fc726cd451a7777f4cbc48
fix: make PathwayPredictor work with custom models
[ { "change_type": "MODIFY", "old_path": "cameo/strain_design/pathway_prediction/pathway_predictor.py", "new_path": "cameo/strain_design/pathway_prediction/pathway_predictor.py", "diff": "@@ -27,7 +27,7 @@ from cobra import DictList\nfrom sympy import Add, Mul, RealNumber\nfrom cobra import Model, Metabolite, Reaction\n-from cobra.util import SolverNotFound\n+from cobra.util import SolverNotFound, assert_optimal\nfrom cameo import fba\nfrom cameo import models, phenotypic_phase_plane\n@@ -38,7 +38,7 @@ from cameo.core.strain_design import StrainDesignMethodResult, StrainDesign, Str\nfrom cameo.core.target import ReactionKnockinTarget\nfrom cameo.core.utils import add_exchange\nfrom cameo.data import metanetx\n-from cameo.exceptions import SolveError\n+from cobra.exceptions import OptimizationError\nfrom cameo.strain_design.pathway_prediction import util\nfrom cameo.util import TimeMachine\nfrom cameo.visualization.plotting import plotter\n@@ -316,9 +316,10 @@ class PathwayPredictor(StrainDesignMethod):\ncounter = 1\nwhile counter <= max_predictions:\nlogger.debug('Predicting pathway No. %d' % counter)\n+ self.model.solver.optimize()\ntry:\n- self.model.optimize()\n- except SolveError as e:\n+ assert_optimal(self.model)\n+ except OptimizationError as e:\nlogger.error('No pathway could be predicted. Terminating pathway predictions.')\nlogger.error(e)\nbreak\n@@ -376,7 +377,7 @@ class PathwayPredictor(StrainDesignMethod):\npathway.apply(self.original_model)\ntry:\nsolution = fba(self.original_model, objective=pathway.product.id)\n- except SolveError as e:\n+ except OptimizationError as e:\nlogger.error(e)\nlogger.error(\n\"Addition of pathway {} made the model unsolvable. \"\n" } ]
Python
Apache License 2.0
biosustain/cameo
fix: make PathwayPredictor work with custom models
89,741
05.04.2017 14:38:01
-7,200
ae4c3adbbb9ab1b3fe8be0cc2648cf65a7de5fde
fix: SolveError -> OptimizationError to catch problems during strain design
[ { "change_type": "MODIFY", "old_path": "cameo/api/designer.py", "new_path": "cameo/api/designer.py", "diff": "@@ -42,7 +42,7 @@ from cameo import fba\nfrom cameo import config, util\nfrom cameo.api.hosts import hosts as HOSTS, Host\nfrom cameo.api.products import products\n-from cameo.exceptions import SolveError\n+from cobra.exceptions import OptimizationError\nfrom cameo.strain_design import OptGene, DifferentialFVA\nfrom cameo.ui import notice, searching, stop_loader\nfrom cameo.strain_design import pathway_prediction\n@@ -304,7 +304,7 @@ class Designer(object):\n_pyield = pyield(model, solution, strain_design.targets)\ntarget_flux = solution.fluxes[pyield.product]\nbiomass = solution.fluxes[bpcy.biomass]\n- except SolveError:\n+ except OptimizationError:\n_bpcy, _pyield, target_flux, biomass = np.nan, np.nan, np.nan, np.nan\nreturn _bpcy, _pyield, target_flux, biomass\n" } ]
Python
Apache License 2.0
biosustain/cameo
fix: SolveError -> OptimizationError to catch problems during strain design
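The catch-and-fallback pattern from this change, written against the textbook model; the acetate exchange target is only an example:

    import numpy as np
    import cobra.test
    from cobra.exceptions import OptimizationError
    from cameo import fba

    model = cobra.test.create_test_model("textbook")
    try:
        flux_dist = fba(model, objective=model.reactions.EX_ac_e)
        target_flux = flux_dist['EX_ac_e']
    except OptimizationError:
        target_flux = np.nan               # keep going when a design cannot be solved
    print(target_flux)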
89,741
05.04.2017 14:47:30
-7,200
d6094c8e29b3b0d38d3ab5899a79c87424c9f7c2
fix: found another SolveError -> OptimizationError
[ { "change_type": "MODIFY", "old_path": "cameo/api/designer.py", "new_path": "cameo/api/designer.py", "diff": "@@ -447,7 +447,7 @@ class Designer(object):\ntry:\nflux_dist = fba(model, objective=product)\nreturn flux_dist[product.id] / abs(flux_dist[source.id])\n- except SolveError:\n+ except OptimizationError:\nreturn 0.0\n" } ]
Python
Apache License 2.0
biosustain/cameo
fix: found another SolveError -> OptimizationError
89,735
25.04.2017 11:12:52
-7,200
77e03737bb4038d21ae6bfd6cae429821954e597
fix: match to work with pre-release cobrapy use the to_frame() method instead of data_frame property of solution use the model's add_boundary instead of the temporary add_exchange
[ { "change_type": "MODIFY", "old_path": "cameo/core/utils.py", "new_path": "cameo/core/utils.py", "diff": "import six\nimport csv\n-from cobra.util import add_exchange\nfrom pandas import DataFrame\n@@ -37,7 +36,7 @@ def get_reaction_for(model, value, add=True):\nreactions = model.reactions.query(\"^(EX|DM)_{}$\".format(metabolite.id))\nif len(reactions) == 0:\nif add:\n- reactions = [add_exchange(model, metabolite)]\n+ reactions = [model.add_boundary(metabolite, type='demand')]\nelse:\nraise KeyError('Invalid target %s' % value)\nreturn reactions[0]\n" }, { "change_type": "MODIFY", "old_path": "cameo/flux_analysis/analysis.py", "new_path": "cameo/flux_analysis/analysis.py", "diff": "@@ -26,8 +26,8 @@ import pandas\nimport six\nfrom cobra import Reaction, Metabolite\nfrom cobra.core import get_solution\n-from cobra.util import fix_objective_as_constraint, assert_optimal\n-from cobra.exceptions import Infeasible\n+from cobra.util import fix_objective_as_constraint, assert_optimal, get_context\n+from cobra.exceptions import Infeasible, OptimizationError\nfrom numpy import trapz\nfrom six.moves import zip\nfrom sympy import S\n@@ -36,12 +36,11 @@ from optlang.interface import UNBOUNDED, OPTIMAL\nimport cameo\nfrom cameo import config\nfrom cameo.core.result import Result\n-from cobra.exceptions import OptimizationError\nfrom cameo.flux_analysis.util import remove_infeasible_cycles, fix_pfba_as_constraint\nfrom cameo.parallel import SequentialView\nfrom cameo.ui import notice\nfrom cameo.util import partition, _BIOMASS_RE_\n-from cameo.core.utils import get_reaction_for, add_exchange\n+from cameo.core.utils import get_reaction_for\nfrom cameo.visualization.plotting import plotter\nlogger = logging.getLogger(__name__)\n@@ -50,6 +49,59 @@ __all__ = ['find_blocked_reactions', 'flux_variability_analysis', 'phenotypic_ph\n'flux_balance_impact_degree']\n+def knock_out_metabolite(metabolite, force_steady_state=False):\n+ \"\"\"'Knockout' a metabolite. This can be done in 2 ways:\n+\n+ 1. Implementation follows the description in [1] \"All fluxes around\n+ the metabolite M should be restricted to only produce the\n+ metabolite, for which balancing constraint of mass conservation is\n+ relaxed to allow nonzero values of the incoming fluxes whereas all\n+ outgoing fluxes are limited to zero.\"\n+\n+ 2. Force steady state All reactions consuming the metabolite are\n+ restricted to only produce the metabolite. A demand reaction is\n+ added to sink the metabolite produced to keep the problem feasible\n+ under the S.v = 0 constraint.\n+\n+\n+ Knocking out a metabolite overrules the constraints set on the\n+ reactions producing the metabolite.\n+\n+ Parameters\n+ ----------\n+ force_steady_state: bool\n+ If True, uses approach 2.\n+\n+ References\n+ ----------\n+ .. [1] Kim, P.-J., Lee, D.-Y., Kim, T. Y., Lee, K. H., Jeong, H.,\n+ Lee, S. Y., & Park, S. (2007). Metabolite essentiality elucidates\n+ robustness of Escherichia coli metabolism. 
PNAS, 104(34), 13638-13642\n+\n+ \"\"\"\n+ # restrict reactions to produce metabolite\n+ for rxn in metabolite.reactions:\n+ if rxn.metabolites[metabolite] > 0:\n+ rxn.bounds = (0, 0) if rxn.upper_bound < 0 \\\n+ else (0, rxn.upper_bound)\n+ elif rxn.metabolites[metabolite] < 0:\n+ rxn.bounds = (0, 0) if rxn.lower_bound > 0 \\\n+ else (rxn.lower_bound, 0)\n+ if force_steady_state:\n+ metabolite._model.add_boundary(metabolite, type=\"knock-out\",\n+ lb=0, ub=1000,\n+ reaction_id=\"KO_{}\".format(metabolite.id))\n+ else:\n+ previous_bounds = metabolite.constraint.lb, metabolite.constraint.ub\n+ metabolite.constraint.lb, metabolite.constraint.ub = None, None\n+ context = get_context(metabolite)\n+ if context:\n+ def reset():\n+ metabolite.constraint.lb, metabolite.constraint.ub = previous_bounds\n+\n+ context(reset)\n+\n+\ndef find_essential_metabolites(model, threshold=1e-6, force_steady_state=False):\n\"\"\"Return a list of essential metabolites.\n@@ -97,7 +149,7 @@ def find_essential_metabolites(model, threshold=1e-6, force_steady_state=False):\nfor metabolite in metabolites:\nwith model:\n- metabolite.knock_out(force_steady_state=force_steady_state)\n+ knock_out_metabolite(metabolite, force_steady_state=force_steady_state)\nmodel.solver.optimize()\nif model.solver.status != OPTIMAL or model.objective.value < threshold:\nessential.append(metabolite)\n@@ -300,7 +352,7 @@ def phenotypic_phase_plane(model, variables=[], objective=None, source=None, poi\ntry:\nobjective = model.reactions.get_by_id(\"DM_%s\" % objective.id)\nexcept KeyError:\n- objective = add_exchange(model, objective)\n+ objective = model.add_boundary(objective, type='demand')\n# try:\n# objective = model.reaction_for(objective, time_machine=tm)\n# except KeyError:\n" }, { "change_type": "MODIFY", "old_path": "cameo/strain_design/pathway_prediction/pathway_predictor.py", "new_path": "cameo/strain_design/pathway_prediction/pathway_predictor.py", "diff": "@@ -28,6 +28,7 @@ from sympy import Add, Mul, RealNumber\nfrom cobra import Model, Metabolite, Reaction\nfrom cobra.util import SolverNotFound, assert_optimal\n+from cobra.exceptions import OptimizationError\nfrom cameo import fba\nfrom cameo import models, phenotypic_phase_plane\n@@ -36,9 +37,7 @@ from cameo.core.pathway import Pathway\nfrom cameo.core.result import Result, MetaInformation\nfrom cameo.core.strain_design import StrainDesignMethodResult, StrainDesign, StrainDesignMethod\nfrom cameo.core.target import ReactionKnockinTarget\n-from cameo.core.utils import add_exchange\nfrom cameo.data import metanetx\n-from cobra.exceptions import OptimizationError\nfrom cameo.strain_design.pathway_prediction import util\nfrom cameo.util import TimeMachine\nfrom cameo.visualization.plotting import plotter\n@@ -310,7 +309,7 @@ class PathwayPredictor(StrainDesignMethod):\ntry:\nproduct_reaction = self.model.reactions.get_by_id('DM_' + product.id)\nexcept KeyError:\n- product_reaction = add_exchange(self.model, product)\n+ product_reaction = self.model.add_boundary(product, type='demand')\nproduct_reaction.lower_bound = min_production\ncounter = 1\n" }, { "change_type": "MODIFY", "old_path": "requirements.txt", "new_path": "requirements.txt", "diff": "@@ -4,7 +4,7 @@ blessings>=1.5.1\npandas>=0.15.2\nordered-set==1.2\ninspyred>=1.0\n-cobra>=0.6.0a4\n+cobra>=0.6.0a7\noptlang>=0.3.0\nescher>=1.0.0\nnumexpr>=2.4\n" }, { "change_type": "MODIFY", "old_path": "setup.py", "new_path": "setup.py", "diff": "@@ -28,7 +28,7 @@ requirements = 
['numpy>=1.9.1',\n'blessings>=1.5.1',\n'pandas>=0.15.2',\n'ordered-set>=1.2',\n- 'cobra>=0.6.0a2',\n+ 'cobra>=0.6.0a7',\n'optlang>=0.4.2',\n'requests>=2.5.0',\n'numexpr>=2.4',\n" }, { "change_type": "MODIFY", "old_path": "tests/test_flux_analysis.py", "new_path": "tests/test_flux_analysis.py", "diff": "@@ -349,7 +349,7 @@ class TestRemoveCycles:\ncore_model.objective = core_model.solver.interface.Objective(\nAdd(*core_model.solver.variables.values()), name='Max all fluxes')\nsolution = core_model.optimize()\n- assert abs(solution.data_frame.fluxes.abs().sum() - 2508.293334) < 1e-6\n+ assert abs(solution.to_frame().fluxes.abs().sum() - 2508.293334) < 1e-6\nfluxes = solution.fluxes\ncore_model.objective = original_objective\nclean_fluxes = remove_infeasible_cycles(core_model, fluxes)\n" }, { "change_type": "MODIFY", "old_path": "tests/test_solver_based_model.py", "new_path": "tests/test_solver_based_model.py", "diff": "@@ -410,6 +410,7 @@ class TestReaction:\nassert reaction.lower_bound == original_bounds[reaction.id][0]\nassert reaction.upper_bound == original_bounds[reaction.id][1]\n+ @pytest.mark.xfail(reason=\"to be implemented in cobra\")\ndef test_repr_html_(self, core_model):\nassert '<table>' in core_model.reactions[0]._repr_html_()\n@@ -1006,6 +1007,7 @@ class TestMetabolite:\nassert not (met.id in core_model.metabolites)\nassert not (met.id in core_model.solver.constraints)\n+ @pytest.mark.xfail(reason='to be implemented in cobra')\ndef test_notebook_repr(self):\nmet = Metabolite(id=\"test\", name=\"test metabolites\", formula=\"CH4\")\nexpected = \"\"\"\n" } ]
Python
Apache License 2.0
biosustain/cameo
fix: match to work with pre-release cobrapy - use the to_frame() method instead of data_frame property of solution - use the model's add_boundary instead of the temporary add_exchange
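Both replacements in one small example; the textbook model and the pyruvate metabolite are illustrative choices:

    import cobra.test

    model = cobra.test.create_test_model("textbook")
    solution = model.optimize()
    print(solution.to_frame().head())      # was: solution.data_frame
    demand = model.add_boundary(model.metabolites.pyr_c, type='demand')
    print(demand.id)                       # 'DM_pyr_c', was: add_exchange(model, metabolite)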
89,735
16.05.2017 15:47:24
-7,200
f71ac71b16f2b2a68b62b3e7d27f897f5460fe21
fix: don't use the removed (unused) argument delete The delete argument had no function in remove_reactions in cobra 0.6 and was therefore removed.
[ { "change_type": "MODIFY", "old_path": "cameo/core/pathway.py", "new_path": "cameo/core/pathway.py", "diff": "@@ -128,7 +128,7 @@ class Pathway(object):\nif tm is not None:\ntm(do=partial(model.add_reactions, self.reactions),\n- undo=partial(model.remove_reactions, self.reactions, delete=False))\n+ undo=partial(model.remove_reactions, self.reactions))\nelse:\nmodel.add_reactions(self.reactions)\n" }, { "change_type": "MODIFY", "old_path": "cameo/strain_design/pathway_prediction/pathway_predictor.py", "new_path": "cameo/strain_design/pathway_prediction/pathway_predictor.py", "diff": "@@ -129,17 +129,17 @@ class PathwayResult(Pathway, Result, StrainDesign):\nwarnings.warn(\"The 'plug_model' method as been deprecated. Use apply instead.\", DeprecationWarning)\nif tm is not None:\ntm(do=partial(model.add_reactions, self.reactions),\n- undo=partial(model.remove_reactions, self.reactions, delete=False, remove_orphans=True))\n+ undo=partial(model.remove_reactions, self.reactions, remove_orphans=True))\nif adapters:\ntm(do=partial(model.add_reactions, self.adapters),\n- undo=partial(model.remove_reactions, self.adapters, delete=False, remove_orphans=True))\n+ undo=partial(model.remove_reactions, self.adapters, remove_orphans=True))\nif exchanges:\ntm(do=partial(model.add_reactions, self.exchanges),\n- undo=partial(model.remove_reactions, self.exchanges, delete=False, remove_orphans=True))\n+ undo=partial(model.remove_reactions, self.exchanges, remove_orphans=True))\nself.product.lower_bound = 0\ntry:\ntm(do=partial(model.add_reactions, [self.product]),\n- undo=partial(model.remove_reactions, [self.product], delete=False, remove_orphans=True))\n+ undo=partial(model.remove_reactions, [self.product], remove_orphans=True))\nexcept Exception:\nlogger.warning(\"Exchange %s already in model\" % self.product.id)\npass\n" }, { "change_type": "MODIFY", "old_path": "tests/test_solver_based_model.py", "new_path": "tests/test_solver_based_model.py", "diff": "@@ -685,7 +685,7 @@ class TestModel:\nassert coefficients_dict[core_model.reactions.r2.reverse_variable] == -3.\ndef test_remove_reactions_1(self, core_model):\n- core_model.remove_reactions([core_model.reactions.PGI, core_model.reactions.PGK], delete=False)\n+ core_model.remove_reactions([core_model.reactions.PGI, core_model.reactions.PGK])\nassert \"PGI\" not in core_model.reactions\nassert \"PGK\" not in core_model.reactions\nassert \"PGI\" not in core_model.reactions\n@@ -708,7 +708,7 @@ class TestModel:\ndef test_remove_and_add_reactions(self, core_model):\nmodel_copy = core_model.copy()\npgi, pgk = model_copy.reactions.PGI, model_copy.reactions.PGK\n- model_copy.remove_reactions([pgi, pgk], delete=False)\n+ model_copy.remove_reactions([pgi, pgk])\nassert \"PGI\" not in model_copy.reactions\nassert \"PGK\" not in model_copy.reactions\nassert \"PGI\" in core_model.reactions\n" } ]
Python
Apache License 2.0
biosustain/cameo
fix: don't use the removed (unused) argument delete The delete argument had no function in remove_reactions in cobra 0.6 and was therefore removed.
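The updated call in isolation; the textbook model and the PGI/PGK reaction ids are assumptions:

    import cobra.test

    model = cobra.test.create_test_model("textbook")
    model.remove_reactions([model.reactions.PGI, model.reactions.PGK])   # no `delete=` keyword anymore
    assert 'PGI' not in model.reactions and 'PGK' not in model.reactions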
89,735
16.05.2017 15:48:21
-7,200
cf76129b9d2830fab7d950f70e3be8efd3d2a5c5
refactor: improve ModelDB when failed connection Instead of making a mock list, when failing to connect, just don't make an index at all and update the status flag to facilitate skipping tests.
[ { "change_type": "MODIFY", "old_path": "cameo/models/webmodels.py", "new_path": "cameo/models/webmodels.py", "diff": "@@ -173,13 +173,14 @@ class ModelDB(object):\nself._index_key = index_key\nself._get_model_method = get_model_method\nself._index = None\n+ self.status = ''\ndef _index_models(self):\ntry:\nself._index = self._index_method()\n+ self.status = 'indexed'\nexcept requests.ConnectionError:\n- self._index = [\"no_models_available\"]\n- self.no_models_available = \"The server could not be reached. Make sure you are connected to the internet\"\n+ self.status = \"The server could not be reached. Make sure you are connected to the internet\"\ndef __dir__(self):\nif self._index is None:\n" }, { "change_type": "MODIFY", "old_path": "tests/test_io.py", "new_path": "tests/test_io.py", "diff": "@@ -66,8 +66,10 @@ class TestModelLoading(object):\[email protected](libsbml is None, reason=\"minho has fbc < 2, requiring missing lisbml\")\ndef test_import_model_minho(self):\n- model = cameo.models.minho.__getattr__('Ecoli core Model')\n- assert model.id == 'Ecoli_core_model'\n+ model = cameo.models.minho\n+ if model.status != 'indexed':\n+ pytest.skip('failed to index minho db')\n+ assert model.__getattr__('Ecoli core Model').id == 'Ecoli_core_model'\ndef test_invalid_path(self):\nwith pytest.raises(Exception):\n" } ]
Python
Apache License 2.0
biosustain/cameo
refactor: improve ModelDB when failed connection Instead of making a mock list, when failing to connect, just don't make an index at all and update the status flag to facilitate skipping tests.
89,735
14.06.2017 13:56:15
-7,200
7dc125b59ac9e27001bf019fc962b4f9905262e0
fix: adjust phpp to cobrapy model optimization should use model.solver.optimize and set or not set fluxes depending on result, rather than fetching all values (no lazysolution anymore)
[ { "change_type": "MODIFY", "old_path": "cameo/flux_analysis/analysis.py", "new_path": "cameo/flux_analysis/analysis.py", "diff": "@@ -697,11 +697,12 @@ class _PhenotypicPhasePlaneChunkEvaluator(object):\nreturn [self._production_envelope_inner(point) for point in points]\ndef _interval_estimates(self):\n- try:\n- flux = self.model.optimize().f\n+ self.model.solver.optimize()\n+ if self.model.solver.status == OPTIMAL:\n+ flux = self.model.solver.objective.value\ncarbon_yield = self.carbon_yield()\nmass_yield = self.mass_yield()\n- except (AssertionError, Infeasible):\n+ else:\nflux = 0\ncarbon_yield = 0\nmass_yield = 0\n" }, { "change_type": "MODIFY", "old_path": "cameo/strain_design/deterministic/flux_variability_based.py", "new_path": "cameo/strain_design/deterministic/flux_variability_based.py", "diff": "@@ -144,6 +144,7 @@ class DifferentialFVA(StrainDesignMethod):\nif reference_model is None:\nself.reference_model = self.design_space_model.copy()\nfix_objective_as_constraint(self.reference_model)\n+ self.reference_nullspace = self.design_space_nullspace\nelse:\nself.reference_model = reference_model\nself.reference_nullspace = nullspace(create_stoichiometric_array(self.reference_model))\n" }, { "change_type": "MODIFY", "old_path": "tests/test_flux_analysis.py", "new_path": "tests/test_flux_analysis.py", "diff": "@@ -19,7 +19,6 @@ from __future__ import absolute_import\nimport copy\nimport os\nimport re\n-from functools import partial\nimport numpy as np\nimport pandas\n@@ -39,7 +38,7 @@ from cameo.flux_analysis.simulation import fba, lmoma, moma, pfba, room\nfrom cameo.flux_analysis.structural import nullspace\nfrom cameo.flux_analysis.analysis import find_essential_reactions\nfrom cameo.parallel import MultiprocessingView, SequentialView\n-from cameo.util import TimeMachine, current_solver_name, pick_one\n+from cameo.util import current_solver_name, pick_one\nTRAVIS = 'TRAVIS' in os.environ\nTEST_DIR = os.path.dirname(__file__)\n" } ]
Python
Apache License 2.0
biosustain/cameo
fix: adjust phpp to cobrapy model optimization should use model.solver.optimize and set or not set fluxes depending on result, rather than fetching all values (no lazysolution anymore)
89,735
27.06.2017 15:56:05
-7,200
4ea73c5d21ef0c03675e834a989c7e17911e09a1
fix: avoid mutables in defaults and required argument so no default
[ { "change_type": "MODIFY", "old_path": "cameo/flux_analysis/analysis.py", "new_path": "cameo/flux_analysis/analysis.py", "diff": "@@ -301,7 +301,7 @@ def flux_variability_analysis(model, reactions=None, fraction_of_optimum=0., pfb\nreturn FluxVariabilityResult(solution)\n-def phenotypic_phase_plane(model, variables=[], objective=None, source=None, points=20, view=None):\n+def phenotypic_phase_plane(model, variables, objective=None, source=None, points=20, view=None):\n\"\"\"Phenotypic phase plane analysis [1].\nImplements a phenotypic phase plan analysis with interpretation same as\n" } ]
Python
Apache License 2.0
biosustain/cameo
fix: avoid mutables in defaults and required argument so no default
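The general pitfall behind this one-line change, shown with a toy function unrelated to cameo's API:

    def collect_bad(item, bucket=[]):        # the [] is created once and shared by every call
        bucket.append(item)
        return bucket

    def collect_good(item, bucket=None):     # required argument or None default, fresh list per call
        bucket = [] if bucket is None else bucket
        bucket.append(item)
        return bucket

    print(collect_bad(1), collect_bad(2))    # [1, 2] [1, 2]  state leaks between calls
    print(collect_good(1), collect_good(2))  # [1] [2]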
89,734
18.07.2017 12:04:16
-7,200
7e6845e3d665f1f44d51c28ab8e57a636c0442de
feat: quick fix for Python 2 on Windows The stdlib gzip module always adds binary mode 'b' to the read flag. This was causing a `ValueError` on Python 2 under Windows.
[ { "change_type": "MODIFY", "old_path": "cameo/data/metanetx.py", "new_path": "cameo/data/metanetx.py", "diff": "@@ -26,7 +26,13 @@ import pandas\nimport cameo\n-with gzip.open(os.path.join(cameo._cameo_data_path, 'metanetx.json.gz'), 'rt') as f:\n+if six.PY2:\n+ flag = 'r'\n+else:\n+ flag = 'rt'\n+\n+with gzip.open(os.path.join(cameo._cameo_data_path, 'metanetx.json.gz'),\n+ flag) as f:\n_METANETX = json.load(f)\nbigg2mnx = _METANETX['bigg2mnx']\n@@ -34,5 +40,6 @@ mnx2bigg = _METANETX['mnx2bigg']\nall2mnx = _METANETX['all2mnx']\nmnx2all = {v: k for k, v in six.iteritems(all2mnx)}\n-with gzip.open(os.path.join(cameo._cameo_data_path, 'metanetx_chem_prop.json.gz'), 'rt') as f:\n+with gzip.open(os.path.join(cameo._cameo_data_path,\n+ 'metanetx_chem_prop.json.gz'), flag) as f:\nchem_prop = pandas.read_json(f)\n" } ]
Python
Apache License 2.0
biosustain/cameo
feat: quick fix for Python 2 on Windows The stdlib gzip module always adds binary mode 'b' to the read flag. This was causing a `ValueError` on Python 2 under Windows.
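The portability trick reduced to its essence; the file name below is a placeholder for any gzipped JSON file:

    import gzip
    import json
    import six

    flag = 'r' if six.PY2 else 'rt'    # Python 2's gzip rejects text mode, so fall back to 'r'
    with gzip.open('metanetx.json.gz', flag) as handle:
        data = json.load(handle)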
89,735
14.07.2017 16:35:50
-7,200
9cef65ee840d8038c4fe8c0cb254b0b2811ac213
refactor: check compartment regex valid cryptic failures if compartment regex was not valid, raise some sensible error instead
[ { "change_type": "MODIFY", "old_path": "cameo/strain_design/pathway_prediction/util.py", "new_path": "cameo/strain_design/pathway_prediction/util.py", "diff": "# limitations under the License.\nfrom __future__ import absolute_import, print_function\n-\n-__all__ = ['create_adapter_reactions', 'display_pathway']\n-\nimport re\nfrom cobra import Reaction\n@@ -24,9 +21,11 @@ from cameo.util import in_ipnb\ntry:\nfrom IPython.display import display\n-except:\n+except ImportError:\npass\n+__all__ = ['create_adapter_reactions', 'display_pathway']\n+\ndef create_adapter_reactions(original_metabolites, universal_model, mapping, compartment_regexp):\n\"\"\"Create adapter reactions that connect host and universal model.\n@@ -34,13 +33,13 @@ def create_adapter_reactions(original_metabolites, universal_model, mapping, com\nArguments\n---------\noriginal_metabolites : list\n- List of host metababolites.\n+ List of host metabolites.\nuniversal_model : cobra.Model\nThe universal model.\nmapping : dict\nA mapping between between host and universal model metabolite IDs.\ncompartment_regexp : regex\n- A compile regex that matches metabolites that should be connected to the universal model.\n+ A compiled regex that matches metabolites that should be connected to the universal model.\nReturns\n-------\n@@ -48,8 +47,10 @@ def create_adapter_reactions(original_metabolites, universal_model, mapping, com\nThe list of adapter reactions.\n\"\"\"\nadapter_reactions = []\n- for metabolite in original_metabolites: # model is the original host model\n- if compartment_regexp.match(metabolite.compartment):\n+ metabolites_in_main_compartment = [m for m in original_metabolites if compartment_regexp.match(m.compartment)]\n+ if len(metabolites_in_main_compartment) == 0:\n+ raise ValueError('no metabolites matching regex for main compartment %s' % compartment_regexp)\n+ for metabolite in metabolites_in_main_compartment: # model is the original host model\nname = re.sub('_{}$'.format(metabolite.compartment), '', metabolite.id) # TODO: still a hack\nmapped_name = None\nfor prefix in ['bigg:', 'kegg:', 'rhea:', 'brenda:', '']: # try no prefix at last\n" }, { "change_type": "MODIFY", "old_path": "tests/test_pathway_predictions.py", "new_path": "tests/test_pathway_predictions.py", "diff": "@@ -45,11 +45,13 @@ def pathway_predictor_result(pathway_predictor):\nclass TestPathwayPredictor:\n- def test_setting_incorrect_universal_model_raises(self, pathway_predictor):\n- model, predictor = pathway_predictor\n+ def test_incorrect_arguments_raises(self, pathway_predictor):\n+ model, _ = pathway_predictor\nwith pytest.raises(ValueError) as excinfo:\nPathwayPredictor(model, universal_model='Mickey_Mouse')\nassert re.search(r'Provided universal_model.*', str(excinfo.value))\n+ with pytest.raises(ValueError) as excinfo:\n+ PathwayPredictor(model, compartment_regexp='Mickey_Mouse')\ndef test_predict_non_native_compound(self, pathway_predictor):\nmodel, predictor = pathway_predictor\n" } ]
Python
Apache License 2.0
biosustain/cameo
refactor: check compartment regex valid cryptic failures if compartment regex was not valid, raise some sensible error instead
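A simplified sketch of the guard added in the commit above: fail fast with a clear ValueError when the compartment regex matches no metabolite, instead of failing cryptically later. The dict-based metabolites are toy stand-ins for cobra objects.

import re

def metabolites_in_main_compartment(metabolites, compartment_regexp):
    """Return the metabolites whose compartment matches the compiled regex."""
    matching = [m for m in metabolites if compartment_regexp.match(m["compartment"])]
    if not matching:
        raise ValueError("no metabolites matching regex for main compartment %s"
                         % compartment_regexp.pattern)
    return matching

toy_metabolites = [{"id": "glc__D_c", "compartment": "c"},
                   {"id": "glc__D_e", "compartment": "e"}]
print([m["id"] for m in metabolites_in_main_compartment(toy_metabolites, re.compile("^c$"))])
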
89,735
17.07.2017 11:06:05
-7,200
ce7d0edb5d039420cb65e0298917fd0cec1bb4bf
refactor: avoid printing long reaction strings The biomass reaction string is too long for any axis, so auto-shorten it to the id
[ { "change_type": "MODIFY", "old_path": "cameo/flux_analysis/analysis.py", "new_path": "cameo/flux_analysis/analysis.py", "diff": "@@ -403,8 +403,10 @@ def _nice_id(reaction):\nif isinstance(reaction, Reaction):\nif hasattr(reaction, 'nice_id'):\nnice_id = reaction.nice_id\n- else:\n+ elif len(reaction.metabolites) < 5:\nnice_id = reaction\n+ else:\n+ nice_id = reaction.id\nelse:\nnice_id = str(reaction)\nreturn nice_id\n@@ -871,8 +873,7 @@ class PhenotypicPhasePlaneResult(Result):\n'carbon yield, src={}'.format(self.source_reaction),\n'[mmol(C)/mmol(C(src)) h^-1]')}\nif estimate not in possible_estimates:\n- raise Exception('estimate must be one of %s' %\n- ', '.join(possible_estimates.keys()))\n+ raise ValueError('estimate must be one of %s' % ', '.join(possible_estimates.keys()))\nupper, lower, description, unit = possible_estimates[estimate]\nif title is None:\ntitle = \"Phenotypic Phase Plane ({})\".format(description)\n" } ]
Python
Apache License 2.0
biosustain/cameo
refactor: avoid printing long reaction biomass reaction string too long for any axis, auto-shorten to id
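The labelling rule from the commit above, reduced to a toy sketch: reactions with many metabolites (such as biomass) are labelled by their bare id, while short reactions keep the full reaction string. ToyReaction is an invented stand-in, not cobra's Reaction class.

class ToyReaction(object):
    def __init__(self, id, metabolites):
        self.id = id
        self.metabolites = metabolites

    def __str__(self):
        return "%s (%d metabolites)" % (self.id, len(self.metabolites))

def nice_id(reaction):
    # Short reactions keep their full string; big ones fall back to the bare id.
    if len(reaction.metabolites) < 5:
        return str(reaction)
    return reaction.id

print(nice_id(ToyReaction("PGI", {"g6p_c": -1, "f6p_c": 1})))
print(nice_id(ToyReaction("BIOMASS", dict((i, -1) for i in range(60)))))
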
89,735
18.07.2017 13:55:57
-7,200
6b25a201a7c1be1b8f3674491b84c45fd535b44b
fix: respect essential_genes in optgene The `essential_genes` argument is expected to be a list of identifiers; fix a bug that assumed a list of gene objects.
[ { "change_type": "MODIFY", "old_path": "cameo/strain_design/heuristic/evolutionary/optimization.py", "new_path": "cameo/strain_design/heuristic/evolutionary/optimization.py", "diff": "@@ -771,8 +771,9 @@ class GeneKnockoutOptimization(KnockoutOptimization):\nif use_nullspace_simplification:\nns = nullspace(create_stoichiometric_array(self.model))\ndead_end_reactions = find_blocked_reactions_nullspace(self.model, ns=ns)\n- dead_end_genes = {g for g in self.model.genes if all(r in dead_end_reactions for r in g.reactions)}\n- genes = [g for g in self.model.genes if g not in self.essential_genes and g.id not in dead_end_genes]\n+ dead_end_genes = {g.id for g in self.model.genes if all(r in dead_end_reactions for r in g.reactions)}\n+ exclude_genes = self.essential_genes.union(dead_end_genes)\n+ genes = [g for g in self.model.genes if g.id not in exclude_genes]\nself.representation = [g.id for g in genes]\nelse:\nself.representation = list(self.genes.difference(self.essential_genes))\n" } ]
Python
Apache License 2.0
biosustain/cameo
fix: respect essential_genes in optgene `essential_genes` arg expected to be a list of identifiers, fix bug assuming list of gene objects.
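A small sketch of the fix above: since `essential_genes` holds identifiers, genes are excluded by id rather than by object identity. ToyGene and the gene ids are illustrative only.

class ToyGene(object):
    def __init__(self, id):
        self.id = id

genes = [ToyGene("b0001"), ToyGene("b0002"), ToyGene("b0003")]
essential_genes = {"b0002"}   # identifiers, as the optgene argument expects
dead_end_genes = {"b0003"}

exclude_genes = essential_genes.union(dead_end_genes)
representation = [g.id for g in genes if g.id not in exclude_genes]
print(representation)  # ['b0001']
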
89,735
18.07.2017 14:09:16
-7,200
e1ade4cbf65ffc60153218fa14c4df44fba850ac
refactor: remove calls to removed add_demand Replace `add_demand` with `add_boundary`. It then also became necessary to remove the time machine argument from the deprecated `plug_model`.
[ { "change_type": "MODIFY", "old_path": "cameo/core/pathway.py", "new_path": "cameo/core/pathway.py", "diff": "@@ -109,10 +109,9 @@ class Pathway(object):\nreaction.name + sep +\nreaction.notes.get(\"pathway_note\", \"\") + \"\\n\")\n- def plug_model(self, model, tm=None):\n+ def plug_model(self, model):\n\"\"\"\nPlugs the pathway to a model.\n- If a TimeMachine is provided, the reactions will be removed after reverting the TimeMachine.\nMetabolites are matched in the model by id. For metabolites with no ID in the model, an exchange reaction\nis added to the model\n@@ -121,19 +120,11 @@ class Pathway(object):\n---------\nmodel: cobra.Model\nThe model to plug in the pathway\n- tm: TimeMachine\n- Optionally, a TimeMachine object can be added to the operation\n-\n\"\"\"\n- if tm is not None:\n- tm(do=partial(model.add_reactions, self.reactions),\n- undo=partial(model.remove_reactions, self.reactions))\n- else:\nmodel.add_reactions(self.reactions)\n-\nmetabolites = set(reduce(lambda x, y: x + y, [list(r.metabolites.keys()) for r in self.reactions], []))\n- exchanges = [model.add_demand(m, prefix=\"EX_\", time_machine=tm) for m in metabolites if len(m.reactions) == 1]\n+ exchanges = [model.add_boundary(m) for m in metabolites if len(m.reactions) == 1]\nfor exchange in exchanges:\nexchange.lower_bound = 0\n" }, { "change_type": "MODIFY", "old_path": "cameo/strain_design/deterministic/flux_variability_based.py", "new_path": "cameo/strain_design/deterministic/flux_variability_based.py", "diff": "@@ -153,11 +153,11 @@ class DifferentialFVA(StrainDesignMethod):\nself.objective = objective.id\nelif isinstance(objective, Metabolite):\ntry:\n- self.reference_model.add_demand(objective)\n+ self.reference_model.add_boundary(objective, type='demand')\nexcept ValueError:\npass\ntry:\n- self.objective = self.design_space_model.add_demand(objective).id\n+ self.objective = self.design_space_model.add_boundary(objective, type='demand').id\nexcept ValueError:\nself.objective = self.design_space_model.reactions.get_by_id(\"DM_\" + objective.id).id\nelif isinstance(objective, six.string_types):\n" }, { "change_type": "MODIFY", "old_path": "cameo/strain_design/pathway_prediction/pathway_predictor.py", "new_path": "cameo/strain_design/pathway_prediction/pathway_predictor.py", "diff": "@@ -125,25 +125,8 @@ class PathwayResult(Pathway, Result, StrainDesign):\nself.apply(model)\nreturn phenotypic_phase_plane(model, variables=[objective], objective=self.product.id)\n- def plug_model(self, model, tm=None, adapters=True, exchanges=True):\n+ def plug_model(self, model, adapters=True, exchanges=True):\nwarnings.warn(\"The 'plug_model' method as been deprecated. 
Use apply instead.\", DeprecationWarning)\n- if tm is not None:\n- tm(do=partial(model.add_reactions, self.reactions),\n- undo=partial(model.remove_reactions, self.reactions, remove_orphans=True))\n- if adapters:\n- tm(do=partial(model.add_reactions, self.adapters),\n- undo=partial(model.remove_reactions, self.adapters, remove_orphans=True))\n- if exchanges:\n- tm(do=partial(model.add_reactions, self.exchanges),\n- undo=partial(model.remove_reactions, self.exchanges, remove_orphans=True))\n- self.product.lower_bound = 0\n- try:\n- tm(do=partial(model.add_reactions, [self.product]),\n- undo=partial(model.remove_reactions, [self.product], remove_orphans=True))\n- except Exception:\n- logger.warning(\"Exchange %s already in model\" % self.product.id)\n- pass\n- else:\nmodel.add_reactions(self.reactions)\nif adapters:\nmodel.add_reactions(self.adapters)\n@@ -168,10 +151,10 @@ class PathwayPredictions(StrainDesignMethodResult):\ndef pathways(self):\nreturn self._designs\n- def plug_model(self, model, index, tm=None):\n+ def plug_model(self, model, index):\nwarnings.warn(\"The 'plug_model' method as been deprecated. You can use result[i].apply instead\",\nDeprecationWarning)\n- self.pathways[index].plug_model(model, tm)\n+ self.pathways[index].plug_model(model)\ndef __getitem__(self, item):\nreturn self.pathways[item]\n" }, { "change_type": "MODIFY", "old_path": "scripts/parse_metanetx.py", "new_path": "scripts/parse_metanetx.py", "diff": "@@ -132,9 +132,9 @@ def construct_universal_model(list_of_db_prefixes, reac_xref, reac_prop, chem_pr\nmodel = Model('metanetx_universal_model_' + '_'.join(list_of_db_prefixes))\nmodel.add_reactions(reactions)\n- # Add sinks for all metabolites\n+ # Add demands for all metabolites\nfor metabolite in model.metabolites:\n- model.add_demand(metabolite)\n+ model.add_boundary(metabolite, type='demand')\nreturn model\n" }, { "change_type": "MODIFY", "old_path": "tests/test_pathway_predictions.py", "new_path": "tests/test_pathway_predictions.py", "diff": "@@ -105,7 +105,7 @@ class TestPathwayResult:\nassert set(r.id for r in result[0].exchanges) == set(r.id for r in result_recovered.exchanges)\nassert result[0].product.id == result_recovered.product.id\n- def test_plug_model_without_time_machine(self, pathway_predictor_result):\n+ def test_plug_model_without_context(self, pathway_predictor_result):\nmodel, result = pathway_predictor_result\nmodel = model.copy()\nresult[0].plug_model(model)\n@@ -118,10 +118,10 @@ class TestPathwayResult:\nfor reaction in result[0].adapters:\nassert reaction in model.reactions\n- def test_plug_model_with_time_machine(self, pathway_predictor_result):\n+ def test_plug_model_with_context(self, pathway_predictor_result):\nmodel, result = pathway_predictor_result\n- with TimeMachine() as tm:\n- result[0].plug_model(model, tm=tm)\n+ with model:\n+ result[0].plug_model(model)\nfor reaction in result[0].reactions:\nassert reaction not in model.reactions\n" } ]
Python
Apache License 2.0
biosustain/cameo
refactor: remove calls to removed add_demand Replace `add_demand` with `add_boundary`. Also then became necessary to remove the time machine argument from deprecated `plug_model`.
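A minimal sketch of the replacement pattern used in the commit above: cobra's `add_boundary` plus the model context manager take over from the removed `add_demand`/`TimeMachine` combination. The toy model is invented, and the example assumes a cobrapy version with context-manager support.

from cobra import Metabolite, Model

model = Model("toy")
acetate = Metabolite("ac_c", compartment="c")
model.add_metabolites([acetate])

with model:  # all changes inside the block are rolled back on exit
    demand = model.add_boundary(acetate, type="demand")
    print(demand.id in model.reactions)   # True inside the context

print("DM_ac_c" in model.reactions)       # False again after the context
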
89,735
18.07.2017 16:48:42
-7,200
183a1ba5441afc5dcd8e24ccc6c173a16937f9c6
fix: adjust to new solution object Fluxes are now a pandas Series by default, so allow them as input to room and moma
[ { "change_type": "MODIFY", "old_path": "cameo/flux_analysis/simulation.py", "new_path": "cameo/flux_analysis/simulation.py", "diff": "@@ -81,7 +81,7 @@ def fba(model, objective=None, reactions=None, *args, **kwargs):\nassert_optimal(model)\nsolution = get_solution(model)\nif reactions is not None:\n- result = FluxDistributionResult({r: solution.get_primal_by_id(r) for r in reactions}, solution.f)\n+ result = FluxDistributionResult({r: solution[r] for r in reactions}, solution.f)\nelse:\nresult = FluxDistributionResult.from_solution(solution)\nreturn result\n@@ -235,7 +235,7 @@ def lmoma(model, reference=None, cache=None, reactions=None, *args, **kwargs):\ncache.begin_transaction()\n- if not isinstance(reference, (dict, FluxDistributionResult)):\n+ if not isinstance(reference, (dict, pandas.Series, FluxDistributionResult)):\nraise TypeError(\"reference must be a flux distribution (dict or FluxDistributionResult\")\ntry:\n@@ -337,7 +337,7 @@ def room(model, reference=None, cache=None, delta=0.03, epsilon=0.001, reactions\ncache.begin_transaction()\n- if not isinstance(reference, (dict, FluxDistributionResult)):\n+ if not isinstance(reference, (dict, pandas.Series, FluxDistributionResult)):\nraise TypeError(\"reference must be a flux distribution (dict or FluxDistributionResult\")\ntry:\n@@ -438,7 +438,7 @@ class FluxDistributionResult(Result):\n@property\ndef data_frame(self):\n- return pandas.DataFrame(list(self._fluxes.values()), index=list(self._fluxes.keys()), columns=['flux'])\n+ return pandas.DataFrame(list(self._fluxes.values), index=list(self._fluxes.keys()), columns=['flux'])\n@property\ndef fluxes(self):\n" } ]
Python
Apache License 2.0
biosustain/cameo
fix: adjust to new solution object Fluxes are now default Series so allow them to be as input to room and moma
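Sketch of the relaxed type check from the commit above: a reference flux distribution may be a plain dict or the pandas.Series that cobrapy solutions now expose as `fluxes`. The reaction ids and flux values are made up.

import pandas

def check_reference(reference):
    if not isinstance(reference, (dict, pandas.Series)):
        raise TypeError("reference must be a flux distribution (dict or pandas.Series)")
    return {reaction_id: float(flux) for reaction_id, flux in reference.items()}

print(check_reference({"PGI": 4.86, "PFK": 7.48}))
print(check_reference(pandas.Series({"PGI": 4.86, "PFK": 7.48})))
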
89,735
19.07.2017 09:08:39
-7,200
105381c6c2c7601f5ea1f5a1b576f9b90d6f2469
refactor: remove debug print statements
[ { "change_type": "MODIFY", "old_path": "cameo/strain_design/deterministic/linear_programming.py", "new_path": "cameo/strain_design/deterministic/linear_programming.py", "diff": "@@ -115,14 +115,12 @@ class OptKnock(StrainDesignMethod):\nself._build_problem(exclude_reactions, use_nullspace_simplification)\ndef _remove_blocked_reactions(self):\n- print(len(self._model.reactions))\nfva_res = flux_variability_analysis(self._model, fraction_of_optimum=0)\nblocked = [\nself._model.reactions.get_by_id(reaction) for reaction, row in fva_res.data_frame.iterrows()\nif (round(row[\"lower_bound\"], config.ndecimals) ==\nround(row[\"upper_bound\"], config.ndecimals) == 0)]\nself._model.remove_reactions(blocked)\n- print(len(self._model.reactions))\ndef _reduce_to_nullspace(self, reactions):\nself.reaction_groups = find_coupled_reactions_nullspace(self._model)\n" } ]
Python
Apache License 2.0
biosustain/cameo
refactor: remove debug print statements
89,735
19.07.2017 12:24:19
-7,200
cf0c9ec209cf478219db12e5b01fa5aad0fab5c3
refactor: simplify optimization with error Use the new cobrapy feature to raise an error if the model is infeasible. Avoid utility imports from cobrapy.
[ { "change_type": "MODIFY", "old_path": "cameo/flux_analysis/analysis.py", "new_path": "cameo/flux_analysis/analysis.py", "diff": "@@ -27,7 +27,7 @@ import pandas\nimport six\nfrom cobra import Reaction, Metabolite\nfrom cobra.core import get_solution\n-from cobra.util import fix_objective_as_constraint, assert_optimal, get_context\n+from cobra.util import fix_objective_as_constraint, get_context\nfrom cobra.exceptions import OptimizationError\nfrom numpy import trapz\nfrom six.moves import zip\n@@ -140,9 +140,7 @@ def find_essential_metabolites(model, threshold=1e-6, force_steady_state=False):\nessential = []\n# Essential metabolites are only in reactions that carry flux.\nmetabolites = set()\n- model.solver.optimize()\n- assert_optimal(model)\n- solution = get_solution(model)\n+ solution = model.optimize(raise_error=True)\nfor reaction_id, flux in six.iteritems(solution.fluxes):\nif abs(flux) > 0:\nreaction = model.reactions.get_by_id(reaction_id)\n@@ -174,9 +172,7 @@ def find_essential_reactions(model, threshold=1e-6):\n\"\"\"\nessential = []\ntry:\n- model.solver.optimize()\n- assert_optimal(model)\n- solution = get_solution(model)\n+ solution = model.optimize(raise_error=True)\nfor reaction_id, flux in six.iteritems(solution.fluxes):\nif abs(flux) > 0:\nreaction = model.reactions.get_by_id(reaction_id)\n@@ -210,9 +206,7 @@ def find_essential_genes(model, threshold=1e-6):\n\"\"\"\nessential = []\ntry:\n- model.solver.optimize()\n- assert_optimal(model)\n- solution = get_solution(model)\n+ solution = model.optimize(raise_error=True)\ngenes_to_check = set()\nfor reaction_id, flux in six.iteritems(solution.fluxes):\nif abs(flux) > 0:\n@@ -502,8 +496,7 @@ def get_c_source_reaction(model):\nThe medium reaction with highest input carbon flux\n\"\"\"\ntry:\n- model.solver.optimize()\n- assert_optimal(model)\n+ model.slim_optimize(error_value=None)\nexcept OptimizationError:\nreturn None\nmedium_reactions = model.reactions.get_by_any(list(model.medium))\n" }, { "change_type": "MODIFY", "old_path": "cameo/flux_analysis/simulation.py", "new_path": "cameo/flux_analysis/simulation.py", "diff": "@@ -37,8 +37,6 @@ from sympy import Add\nfrom sympy import Mul\nfrom sympy.parsing.sympy_parser import parse_expr\nfrom cobra import Reaction\n-from cobra.core import get_solution\n-from cobra.util import assert_optimal\nfrom cobra.flux_analysis.parsimonious import add_pfba\nfrom cobra.exceptions import OptimizationError\n@@ -77,9 +75,7 @@ def fba(model, objective=None, reactions=None, *args, **kwargs):\nwith model:\nif objective is not None:\nmodel.objective = objective\n- model.solver.optimize()\n- assert_optimal(model)\n- solution = get_solution(model)\n+ solution = model.optimize(raise_error=True)\nif reactions is not None:\nresult = FluxDistributionResult({r: solution[r] for r in reactions}, solution.f)\nelse:\n@@ -117,8 +113,7 @@ def pfba(model, objective=None, reactions=None, fraction_of_optimum=1, *args, **\nwith model:\nadd_pfba(model, objective=objective, fraction_of_optimum=fraction_of_optimum)\ntry:\n- model.solver.optimize()\n- solution = get_solution(model)\n+ solution = model.optimize(raise_error=True)\nif reactions is not None:\nresult = FluxDistributionResult({r: solution.get_primal_by_id(r) for r in reactions}, solution.f)\nelse:\n@@ -193,7 +188,8 @@ def moma(model, reference=None, cache=None, reactions=None, *args, **kwargs):\ncache.add_objective(create_objective, None, cache.variables.values())\n- solution = model.optimize()\n+ solution = model.optimize(raise_error=True)\n+\nif 
reactions is not None:\nresult = FluxDistributionResult({r: solution.get_primal_by_id(r) for r in reactions}, solution.f)\nelse:\n@@ -289,7 +285,7 @@ def lmoma(model, reference=None, cache=None, reactions=None, *args, **kwargs):\ntry:\n- solution = model.optimize()\n+ solution = model.optimize(raise_error=True)\nif reactions is not None:\nresult = FluxDistributionResult({r: solution.get_primal_by_id(r) for r in reactions}, solution.f)\nelse:\n@@ -384,7 +380,7 @@ def room(model, reference=None, cache=None, delta=0.03, epsilon=0.001, reactions\nmodel.objective = model.solver.interface.Objective(add([mul([One, var]) for var in cache.variables.values()]),\ndirection='min')\ntry:\n- solution = model.optimize()\n+ solution = model.optimize(raise_error=True)\nif reactions is not None:\nresult = FluxDistributionResult({r: solution.get_primal_by_id(r) for r in reactions}, solution.f)\nelse:\n" }, { "change_type": "MODIFY", "old_path": "cameo/flux_analysis/structural.py", "new_path": "cameo/flux_analysis/structural.py", "diff": "@@ -347,7 +347,7 @@ class ShortestElementaryFluxModes(six.Iterator):\ndef __generate_elementary_modes(self):\nwhile True:\ntry:\n- self.model.optimize()\n+ self.model.slim_optimize(error_value=None)\nexcept OptimizationError:\nraise StopIteration\nelementary_flux_mode = list()\n" }, { "change_type": "MODIFY", "old_path": "cameo/flux_analysis/util.py", "new_path": "cameo/flux_analysis/util.py", "diff": "@@ -73,7 +73,7 @@ def remove_infeasible_cycles(model, fluxes, fix=()):\nreaction_to_fix = model.reactions.get_by_id(reaction_id)\nreaction_to_fix.bounds = (fluxes[reaction_id], fluxes[reaction_id])\ntry:\n- solution = model.optimize()\n+ solution = model.optimize(raise_error=True)\nexcept OptimizationError as e:\nlogger.warning(\"Couldn't remove cycles from reference flux distribution.\")\nraise e\n@@ -102,7 +102,7 @@ def fix_pfba_as_constraint(model, multiplier=1, fraction_of_optimum=1):\nmodel.solver.remove(fix_constraint_name)\nwith model:\nadd_pfba(model, fraction_of_optimum=fraction_of_optimum)\n- pfba_objective_value = model.optimize().objective_value * multiplier\n+ pfba_objective_value = model.slim_optimize(error_value=None) * multiplier\nconstraint = model.solver.interface.Constraint(model.objective.expression,\nname=fix_constraint_name,\nub=pfba_objective_value)\n" }, { "change_type": "MODIFY", "old_path": "cameo/strain_design/deterministic/linear_programming.py", "new_path": "cameo/strain_design/deterministic/linear_programming.py", "diff": "@@ -16,7 +16,6 @@ from __future__ import print_function\nimport logging\nimport warnings\n-from functools import partial\nimport numpy\nfrom IProgress.progressbar import ProgressBar\n@@ -248,7 +247,7 @@ class OptKnock(StrainDesignMethod):\ncount = 0\nwhile count < max_results:\ntry:\n- solution = self._model.optimize()\n+ solution = self._model.optimize(raise_error=True)\nexcept OptimizationError as e:\nlogger.debug(\"Problem could not be solved. 
Terminating and returning \" + str(count) + \" solutions\")\nlogger.debug(str(e))\n" }, { "change_type": "MODIFY", "old_path": "cameo/strain_design/pathway_prediction/pathway_predictor.py", "new_path": "cameo/strain_design/pathway_prediction/pathway_predictor.py", "diff": "@@ -27,7 +27,7 @@ from cobra import DictList\nfrom sympy import Add, Mul, RealNumber\nfrom cobra import Model, Metabolite, Reaction\n-from cobra.util import SolverNotFound, assert_optimal\n+from cobra.util import SolverNotFound\nfrom cobra.exceptions import OptimizationError\nfrom cameo import fba\n@@ -298,9 +298,8 @@ class PathwayPredictor(StrainDesignMethod):\ncounter = 1\nwhile counter <= max_predictions:\nlogger.debug('Predicting pathway No. %d' % counter)\n- self.model.solver.optimize()\ntry:\n- assert_optimal(self.model)\n+ self.model.slim_optimize(error_value=None)\nexcept OptimizationError as e:\nlogger.error('No pathway could be predicted. Terminating pathway predictions.')\nlogger.error(e)\n" } ]
Python
Apache License 2.0
biosustain/cameo
refactor: simplify optimization with error Use new feature in cobrapy to raise error if model is infeasible. Avoid utility imports from cobrapy.
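A self-contained sketch of the cobrapy calls the commit above switches to: `optimize(raise_error=True)` (and, for bare objective values, `slim_optimize(error_value=None)`) raise an OptimizationError for infeasible problems instead of returning silently. The deliberately infeasible toy model is invented for illustration.

from cobra import Metabolite, Model, Reaction
from cobra.exceptions import OptimizationError

model = Model("infeasible_toy")
a = Metabolite("A_c", compartment="c")
sink = Reaction("SINK_A", lower_bound=1.0, upper_bound=10.0)  # forces consumption of A_c
sink.add_metabolites({a: -1})                                 # but nothing produces A_c
model.add_reactions([sink])
model.objective = "SINK_A"

try:
    model.optimize(raise_error=True)
except OptimizationError as error:
    print("caught: %s" % error)
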
89,735
19.07.2017 12:25:09
-7,200
e16802dff0aeacccbd16f33b538dc0bff2f87542
refactor: remove cameo.stuff This is untested code that is not used in the package or the associated notebooks; let's put it somewhere else.
[ { "change_type": "DELETE", "old_path": "cameo/stuff/__init__.py", "new_path": null, "diff": "-# Copyright 2014 Novo Nordisk Foundation Center for Biosustainability, DTU.\n-#\n-# Licensed under the Apache License, Version 2.0 (the \"License\");\n-# you may not use this file except in compliance with the License.\n-# You may obtain a copy of the License at\n-#\n-# http://www.apache.org/licenses/LICENSE-2.0\n-#\n-# Unless required by applicable law or agreed to in writing, software\n-# distributed under the License is distributed on an \"AS IS\" BASIS,\n-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n-# See the License for the specific language governing permissions and\n-# limitations under the License.\n" }, { "change_type": "DELETE", "old_path": "cameo/stuff/distance.py", "new_path": null, "diff": "-# Copyright 2014 Novo Nordisk Foundation Center for Biosustainability, DTU.\n-#\n-# Licensed under the Apache License, Version 2.0 (the \"License\");\n-# you may not use this file except in compliance with the License.\n-# You may obtain a copy of the License at\n-#\n-# http://www.apache.org/licenses/LICENSE-2.0\n-#\n-# Unless required by applicable law or agreed to in writing, software\n-# distributed under the License is distributed on an \"AS IS\" BASIS,\n-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n-# See the License for the specific language governing permissions and\n-# limitations under the License.\n-\n-from __future__ import absolute_import, print_function\n-\n-\"\"\"Methods for manipulating a model to compute a set of fluxes that minimize (or maximize)\n-different notions of distance (L1, number of binary changes etc.) to a given reference flux distribution.\"\"\"\n-\n-import six\n-\n-from sympy import Add\n-from sympy import Mul\n-\n-from cameo.flux_analysis.simulation import FluxDistributionResult\n-from cameo.core.solution import SolutionBase\n-\n-add = Add._from_args\n-mul = Mul._from_args\n-\n-import logging\n-\n-logger = logging.getLogger(__name__)\n-\n-\n-class Distance(object):\n- \"\"\"Abstract distance base class.\"\"\"\n-\n- @staticmethod\n- def __check_valid_reference(reference):\n- if not isinstance(reference, (dict, SolutionBase, FluxDistributionResult)):\n- raise ValueError('%s is not a valid reference flux distribution.'\n- 'Needs to be either a dict or Solution or FluxDistributionResult.')\n-\n- def __init__(self, model, reference=None, *args, **kwargs):\n- super(Distance, self).__init__(*args, **kwargs)\n- self.__check_valid_reference(reference)\n- self._reference = reference\n- self._model = model.copy()\n-\n- @property # read-only\n- def model(self):\n- return self._model\n-\n- @property\n- def reference(self):\n- return self._reference\n-\n- @reference.setter\n- def reference(self, value):\n- self.__check_valid_reference(value)\n- self._set_new_reference(value)\n-\n- def _set_new_reference(self, reference):\n- raise NotImplementedError\n-\n- def _prep_model(self, *args, **kwargs):\n- raise NotImplementedError\n-\n- def minimize(self, *args, **kwargs):\n- raise NotImplementedError\n-\n- def maximize(self, *args, **kwargs):\n- raise NotImplementedError\n-\n-\n-class ManhattanDistance(Distance):\n- \"\"\"Compute steady-state fluxes that minimizes the Manhattan distance (L1 norm)\n- to a reference flux distribution.\n-\n- Parameters\n- ----------\n- model : Model\n- reference : dict or Solution\n- A reference flux distribution.\n-\n- Attributes\n- ----------\n- model : Model\n- reference : dict\n- 
\"\"\"\n-\n- def __init__(self, model, reference=None, *args, **kwargs):\n- super(ManhattanDistance, self).__init__(model, reference=reference, *args, **kwargs)\n- self._aux_variables = dict()\n- self._deviation_constraints = dict()\n- self._prep_model()\n-\n- def _prep_model(self, *args, **kwargs):\n- for rid, flux_value in six.iteritems(self.reference):\n- self._add_deviavtion_constraint(rid, flux_value)\n- objective = self.model.solver.interface.Objective(add(self._aux_variables.values()), name='deviations')\n- self.model.objective = objective\n-\n- def _set_new_reference(self, reference):\n- # remove unnecessary constraints\n- constraints_to_remove = list()\n- aux_vars_to_remove = list()\n- for key in self._deviation_constraints.keys():\n- if key not in reference:\n- constraints_to_remove.extend(self._deviation_constraints.pop(key))\n- aux_vars_to_remove.append(self._aux_variables[key])\n- self.model.solver._remove_constraints(constraints_to_remove)\n- self.model.solver._remove_variables(aux_vars_to_remove)\n- # Add new or adapt existing constraints\n- for key, value in six.iteritems(reference):\n- try:\n- (lb_constraint, ub_constraint) = self._deviation_constraints[key]\n- lb_constraint.lb = value\n- ub_constraint.ub = value\n- except KeyError:\n- self._add_deviavtion_constraint(key, value)\n-\n- def _add_deviavtion_constraint(self, reaction_id, flux_value):\n- reaction = self.model.reactions.get_by_id(reaction_id)\n- aux_var = self.model.solver.interface.Variable('aux_' + reaction_id, lb=0)\n- self._aux_variables[reaction_id] = aux_var\n- self.model.solver._add_variable(aux_var)\n- constraint_lb = self.model.solver.interface.Constraint(reaction.flux_expression - aux_var, ub=flux_value,\n- name='deviation_lb_' + reaction_id)\n- self.model.solver._add_constraint(constraint_lb, sloppy=True)\n- constraint_ub = self.model.solver.interface.Constraint(reaction.flux_expression + aux_var, lb=flux_value,\n- name='deviation_ub_' + reaction_id)\n- self.model.solver._add_constraint(constraint_ub, sloppy=True)\n- self._deviation_constraints[reaction_id] = (constraint_lb, constraint_ub)\n-\n- def minimize(self, *args, **kwargs):\n- self.model.objective.direction = 'min'\n- solution = self.model.optimize()\n- result = FluxDistributionResult(solution)\n- return result\n-\n-\n-class RegulatoryOnOffDistance(Distance):\n- \"\"\"Minimize the number of reactions that need to be activated in order for a model\n- to compute fluxes that are close to a provided reference flux distribution (none need to be activated\n- if, for example, the model itself had been used to produce the reference flux distribution).\n-\n- Parameters\n- ----------\n- model : Model\n- reference : dict or Solution\n- A reference flux distribution.\n-\n- Attributes\n- ----------\n- model : Model\n- reference : dict\n- \"\"\"\n-\n- def __init__(self, model, reference=None, delta=0.03, epsilon=0.001, *args, **kwargs):\n- super(RegulatoryOnOffDistance, self).__init__(model, reference=reference, *args, **kwargs)\n- self._aux_variables = dict()\n- self._switch_constraints = dict()\n- self._prep_model(delta=delta, epsilon=epsilon)\n-\n- def _prep_model(self, delta=None, epsilon=None):\n- for rid, flux_value in six.iteritems(self.reference):\n- self._add_switch_constraint(rid, flux_value, delta, epsilon)\n- objective = self.model.solver.interface.Objective(add(self._aux_variables.values()), name='switches')\n- self.model.objective = objective\n-\n- def _add_switch_constraint(self, reaction_id, flux_value, delta, epsilon):\n- reaction = 
self.model.reactions.get_by_id(reaction_id)\n- switch_variable = self.model.solver.interface.Variable(\"y_\" + reaction_id, type=\"binary\")\n- self.model.solver._add_variable(switch_variable)\n- self._aux_variables[switch_variable.name] = switch_variable\n-\n- w_u = flux_value + delta * abs(flux_value) + epsilon\n- expression = reaction.flux_expression - switch_variable * (reaction.upper_bound - w_u)\n- constraint_a = self.model.solver.interface.Constraint(expression, ub=w_u,\n- name=\"switch_constraint_%s_lb\" % reaction_id)\n- self.model.solver._add_constraint(constraint_a)\n- self._switch_constraints[constraint_a.name] = constraint_a\n-\n- w_l = flux_value - delta * abs(flux_value) - epsilon\n- expression = reaction.flux_expression - switch_variable * (reaction.lower_bound - w_l)\n- constraint_b = self.model.solver.interface.Constraint(expression, lb=w_l,\n- name=\"switch_constraint_%s_ub\" % reaction_id)\n- self._switch_constraints[constraint_b.name] = constraint_b\n- self.model.solver._add_constraint(constraint_b)\n-\n- def minimize(self, *args, **kwargs):\n- self.model.objective.direction = 'min'\n- solution = self.model.optimize()\n- result = FluxDistributionResult(solution)\n- return result\n" }, { "change_type": "DELETE", "old_path": "cameo/stuff/stuff.py", "new_path": null, "diff": "-# Copyright 2014 Novo Nordisk Foundation Center for Biosustainability, DTU.\n-#\n-# Licensed under the Apache License, Version 2.0 (the \"License\");\n-# you may not use this file except in compliance with the License.\n-# You may obtain a copy of the License at\n-#\n-# http://www.apache.org/licenses/LICENSE-2.0\n-#\n-# Unless required by applicable law or agreed to in writing, software\n-# distributed under the License is distributed on an \"AS IS\" BASIS,\n-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n-# See the License for the specific language governing permissions and\n-# limitations under the License.\n-\n-from __future__ import absolute_import, print_function\n-\n-from functools import partial\n-from cobra.manipulation.delete import find_gene_knockout_reactions\n-from cobra import Reaction\n-from cameo.flux_analysis.simulation import fba\n-from cobra.exceptions import OptimizationError\n-from cameo.util import TimeMachine\n-\n-\n-def gene_knockout_growth(gene_id, model, threshold=10 ** -6, simulation_method=fba,\n- normalize=True, biomass=None, biomass_flux=None, *args, **kwargs):\n- if biomass_flux is None:\n- s = model.optimize()\n- biomass_flux = s.f\n- if 'reference' not in kwargs:\n- kwargs['reference'] = s.x_dict\n- gene = model.genes.get_by_id(gene_id)\n- knockouts = find_gene_knockout_reactions(model, [gene])\n- tm = TimeMachine()\n-\n- for reaction in knockouts:\n- tm(do=partial(setattr, reaction, 'lower_bound', 0),\n- undo=partial(setattr, reaction, 'lower_bound', reaction.lower_bound))\n- tm(do=partial(setattr, reaction, 'upper_bound', 0),\n- undo=partial(setattr, reaction, 'upper_bound', reaction.upper_bound))\n-\n- try:\n- s = simulation_method(model, *args, **kwargs)\n- f = s.get_primal_by_id(biomass)\n- if f >= threshold:\n- if normalize:\n- f = f / biomass_flux\n- else:\n- f = 0\n- except OptimizationError:\n- f = float('nan')\n- finally:\n- tm.reset()\n-\n- return f\n-\n-\n-def reaction_component_production(model, reaction):\n- tm = TimeMachine()\n- for metabolite in reaction.metabolites:\n- test = Reaction(\"EX_%s_temp\" % metabolite.id)\n- test._metabolites[metabolite] = -1\n- # hack frozen set from cobrapy to be able to add a reaction\n- 
metabolite._reaction = set(metabolite._reaction)\n- tm(do=partial(model.add_reactions, [test]), undo=partial(model.remove_reactions, [test]))\n- tm(do=partial(setattr, model, 'objective', test.id), undo=partial(setattr, model, 'objective', model.objective))\n- try:\n- print(metabolite.id, \"= \", model.optimize().f)\n- except OptimizationError:\n- print(metabolite, \" cannot be produced (reactions: %s)\" % metabolite.reactions)\n- finally:\n- tm.reset()\n" } ]
Python
Apache License 2.0
biosustain/cameo
refactor: remove cameo.stuff Untested code not used in the package or associated notebooks, let's put this somewhere else.
89,735
18.07.2017 14:19:19
-7,200
076d34cfa84683995d26dca9aff042a69ec25900
refactor: avoid divide by zero Probably due to new numpy behavior, a UserWarning is now triggered instead of a ZeroDivisionError being raised; adjust to this.
[ { "change_type": "MODIFY", "old_path": "cameo/flux_analysis/structural.py", "new_path": "cameo/flux_analysis/structural.py", "diff": "@@ -164,7 +164,7 @@ def find_coupled_reactions_nullspace(model, ns=None, tol=1e-10):\n\"\"\"\nif ns is None:\nns = nullspace(create_stoichiometric_array(model))\n- mask = (np.abs(ns) <= tol).all(1) # Mask for blocked reactions\n+ mask = (np.abs(ns) <= tol).all(axis=1) # Mask for blocked reactions\nnon_blocked_ns = ns[~mask]\nnon_blocked_reactions = np.array(list(model.reactions))[~mask]\n@@ -190,7 +190,7 @@ def find_coupled_reactions_nullspace(model, ns=None, tol=1e-10):\ncontinue\nreaction_j = non_blocked_reactions[j]\nright = non_blocked_ns[j]\n- ratio = left / right\n+ ratio = np.apply_along_axis(lambda x: x[0] / x[1] if abs(x[1]) > 0. else np.inf, 0, np.array([left, right]))\nif abs(max(ratio) - min(ratio)) < tol * 100:\ngroup[reaction_j] = round(ratio.mean(), 10)\n" }, { "change_type": "MODIFY", "old_path": "cameo/strain_design/deterministic/flux_variability_based.py", "new_path": "cameo/strain_design/deterministic/flux_variability_based.py", "diff": "@@ -329,11 +329,13 @@ class DifferentialFVA(StrainDesignMethod):\nsol['gaps'] = gaps\nif self.normalize_ranges_by is not None:\nnormalizer = sol.lower_bound[self.normalize_ranges_by]\n+ if normalizer > non_zero_flux_threshold:\nnormalized_intervals = sol[['lower_bound', 'upper_bound']].values / normalizer\n- normalized_gaps = [self._interval_gap(interval1, interval2) for interval1, interval2 in\n+ sol['normalized_gaps'] = [self._interval_gap(interval1, interval2) for interval1, interval2 in\nmy_zip(reference_intervals, normalized_intervals)]\n- sol['normalized_gaps'] = normalized_gaps\n+ else:\n+ sol['normalized_gaps'] = [numpy.nan] * len(sol.lower_bound)\nelse:\nsol['normalized_gaps'] = gaps\n" }, { "change_type": "MODIFY", "old_path": "cameo/strain_design/heuristic/evolutionary/objective_functions.py", "new_path": "cameo/strain_design/heuristic/evolutionary/objective_functions.py", "diff": "@@ -208,15 +208,13 @@ class biomass_product_coupled_yield(YieldFunction):\nelse:\nsubstrate_flux += abs(solution.fluxes[substrate_id]) * n_carbon(substrate) / 2\nsubstrate_flux = round(substrate_flux, config.ndecimals)\n-\nelse:\nproduct_flux = round(solution.fluxes[self.product], config.ndecimals)\nsubstrate_flux = round(sum(abs(solution.fluxes[s]) for s in self.substrates), config.ndecimals)\n- try:\n+ if substrate_flux > config.non_zero_flux_threshold:\nreturn (biomass_flux * product_flux) / substrate_flux\n-\n- except ZeroDivisionError:\n- return 0.0\n+ else:\n+ return 0.\ndef _repr_latex_(self):\nif self.carbon_yield:\n" } ]
Python
Apache License 2.0
biosustain/cameo
refactor: avoid divide by zero Probably new numpy behavior, instead of raising ZeroDivisionError, a UserWarning is triggered, adjust to this.
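The numerical guard from the objective-function change above, in isolation: divide only when the substrate flux is clearly non-zero. The threshold constant stands in for cameo's `config.non_zero_flux_threshold`, and its value here is an assumption.

NON_ZERO_FLUX_THRESHOLD = 1e-6  # assumed value, stand-in for config.non_zero_flux_threshold

def biomass_product_coupled_yield(biomass_flux, product_flux, substrate_flux):
    # Only divide when the substrate uptake is clearly non-zero.
    if abs(substrate_flux) > NON_ZERO_FLUX_THRESHOLD:
        return (biomass_flux * product_flux) / abs(substrate_flux)
    return 0.0

print(biomass_product_coupled_yield(0.5, 3.2, 10.0))  # 0.16
print(biomass_product_coupled_yield(0.5, 3.2, 0.0))   # 0.0 instead of a divide-by-zero
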
89,735
19.07.2017 13:25:57
-7,200
2c9bd1d5d5b9c722595be455842f690722e47adc
refactor: replace cameo pfba with cobra pfba The functions are identical except for the return value, so use cobrapy's pfba and re-class the return value.
[ { "change_type": "MODIFY", "old_path": "cameo/flux_analysis/simulation.py", "new_path": "cameo/flux_analysis/simulation.py", "diff": "@@ -37,7 +37,7 @@ from sympy import Add\nfrom sympy import Mul\nfrom sympy.parsing.sympy_parser import parse_expr\nfrom cobra import Reaction\n-from cobra.flux_analysis.parsimonious import add_pfba\n+from cobra.flux_analysis import pfba as cobrapy_pfba\nfrom cobra.exceptions import OptimizationError\nfrom optlang.interface import OptimizationExpression\n@@ -77,7 +77,7 @@ def fba(model, objective=None, reactions=None, *args, **kwargs):\nmodel.objective = objective\nsolution = model.optimize(raise_error=True)\nif reactions is not None:\n- result = FluxDistributionResult({r: solution[r] for r in reactions}, solution.f)\n+ result = FluxDistributionResult({r: solution[r] for r in reactions}, solution.objective_value)\nelse:\nresult = FluxDistributionResult.from_solution(solution)\nreturn result\n@@ -110,18 +110,8 @@ def pfba(model, objective=None, reactions=None, fraction_of_optimum=1, *args, **\ngenome-scale models. Molecular Systems Biology, 6, 390. doi:10.1038/msb.2010.47\n\"\"\"\n- with model:\n- add_pfba(model, objective=objective, fraction_of_optimum=fraction_of_optimum)\n- try:\n- solution = model.optimize(raise_error=True)\n- if reactions is not None:\n- result = FluxDistributionResult({r: solution.get_primal_by_id(r) for r in reactions}, solution.f)\n- else:\n- result = FluxDistributionResult.from_solution(solution)\n- except OptimizationError as e:\n- logger.error(\"pfba could not determine an optimal solution for objective %s\" % model.objective)\n- raise e\n- return result\n+ solution = cobrapy_pfba(model, objective=objective, fraction_of_optimum=fraction_of_optimum, reactions=reactions)\n+ return FluxDistributionResult.from_solution(solution)\ndef moma(model, reference=None, cache=None, reactions=None, *args, **kwargs):\n" }, { "change_type": "MODIFY", "old_path": "cameo/flux_analysis/util.py", "new_path": "cameo/flux_analysis/util.py", "diff": "@@ -17,7 +17,7 @@ from cobra.exceptions import OptimizationError\nimport sympy\nfrom sympy import Add, Mul\n-from cameo.flux_analysis.simulation import add_pfba\n+from cobra.flux_analysis.parsimonious import add_pfba\nimport logging\n" } ]
Python
Apache License 2.0
biosustain/cameo
refactor: replace cameo pfba with cobra pfba Identical functions except the return value, use cobrapy's pfba and re-class the return value.
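Sketch of calling cobrapy's pfba directly, as the commit above now does; the two-reaction toy model is invented and has an obvious parsimonious optimum.

from cobra import Metabolite, Model, Reaction
from cobra.flux_analysis import pfba

model = Model("tiny")
a = Metabolite("A_c", compartment="c")
source = Reaction("SRC_A", lower_bound=0.0, upper_bound=10.0)
source.add_metabolites({a: 1})
demand = Reaction("DM_A", lower_bound=0.0, upper_bound=1000.0)
demand.add_metabolites({a: -1})
model.add_reactions([source, demand])
model.objective = "DM_A"

solution = pfba(model)          # a cobra Solution; cameo re-classes it as a result object
print(solution.fluxes["DM_A"])  # 10.0
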
89,735
20.07.2017 13:01:51
-7,200
e17b15790427eb8b1367af76d60e65e8d47674ef
refactor: use cobra find_essential_{genes, rxn} The functions are equivalent except that the cobra version returns a set while cameo returned a list
[ { "change_type": "MODIFY", "old_path": "cameo/flux_analysis/analysis.py", "new_path": "cameo/flux_analysis/analysis.py", "diff": "@@ -155,76 +155,6 @@ def find_essential_metabolites(model, threshold=1e-6, force_steady_state=False):\nreturn essential\n-def find_essential_reactions(model, threshold=1e-6):\n- \"\"\"Return a list of essential reactions.\n-\n- Parameters\n- ----------\n- model : cobra.Model\n- The model to find the essential reactions for.\n- threshold : float (default 1e-6)\n- Minimal objective flux to be considered viable.\n-\n- Returns\n- -------\n- list\n- List of essential reactions\n- \"\"\"\n- essential = []\n- try:\n- solution = model.optimize(raise_error=True)\n- for reaction_id, flux in six.iteritems(solution.fluxes):\n- if abs(flux) > 0:\n- reaction = model.reactions.get_by_id(reaction_id)\n- with model:\n- reaction.knock_out()\n- model.solver.optimize()\n- if model.solver.status != OPTIMAL or model.objective.value < threshold:\n- essential.append(reaction)\n-\n- except OptimizationError as e:\n- logger.error('Cannot determine essential reactions for un-optimal model.')\n- raise e\n-\n- return essential\n-\n-\n-def find_essential_genes(model, threshold=1e-6):\n- \"\"\"Return a list of essential genes.\n-\n- Parameters\n- ----------\n- model : cobra.Model\n- The model to find the essential genes for.\n- threshold : float (default 1e-6)\n- Minimal objective flux to be considered viable.\n-\n- Returns\n- -------\n- list\n- List of essential genes\n- \"\"\"\n- essential = []\n- try:\n- solution = model.optimize(raise_error=True)\n- genes_to_check = set()\n- for reaction_id, flux in six.iteritems(solution.fluxes):\n- if abs(flux) > 0:\n- genes_to_check.update(model.reactions.get_by_id(reaction_id).genes)\n- for gene in genes_to_check:\n- with model:\n- gene.knock_out()\n- model.solver.optimize()\n- if model.solver.status != OPTIMAL or model.objective.value < threshold:\n- essential.append(gene)\n-\n- except OptimizationError as e:\n- logger.error('Cannot determine essential genes for un-optimal model.')\n- raise e\n-\n- return essential\n-\n-\ndef find_blocked_reactions(model):\n\"\"\"Determine reactions that cannot carry steady-state flux.\n" }, { "change_type": "MODIFY", "old_path": "cameo/strain_design/deterministic/linear_programming.py", "new_path": "cameo/strain_design/deterministic/linear_programming.py", "diff": "@@ -24,6 +24,8 @@ from pandas import DataFrame\nfrom sympy import Add\nfrom cobra.util import fix_objective_as_constraint\n+from cobra.exceptions import OptimizationError\n+from cobra.flux_analysis import find_essential_reactions\nfrom cameo import config\nfrom cameo import ui\n@@ -31,8 +33,7 @@ from cameo.core.model_dual import convert_to_dual\nfrom cameo.core.strain_design import StrainDesignMethodResult, StrainDesignMethod, StrainDesign\nfrom cameo.core.target import ReactionKnockoutTarget\nfrom cameo.core.utils import get_reaction_for\n-from cobra.exceptions import OptimizationError\n-from cameo.flux_analysis.analysis import phenotypic_phase_plane, flux_variability_analysis, find_essential_reactions\n+from cameo.flux_analysis.analysis import phenotypic_phase_plane, flux_variability_analysis\nfrom cameo.flux_analysis.simulation import fba\nfrom cameo.flux_analysis.structural import find_coupled_reactions_nullspace\nfrom cameo.util import reduce_reaction_set, decompose_reaction_groups\n@@ -130,11 +131,11 @@ class OptKnock(StrainDesignMethod):\ndef _build_problem(self, essential_reactions, use_nullspace_simplification):\nlogger.debug(\"Starting to 
formulate OptKnock problem\")\n- self.essential_reactions = find_essential_reactions(self._model) + self._model.exchanges\n+ self.essential_reactions = find_essential_reactions(self._model).union(self._model.exchanges)\nif essential_reactions:\n- self.essential_reactions += [get_reaction_for(self._model, r) for r in essential_reactions]\n+ self.essential_reactions.update(set(get_reaction_for(self._model, r) for r in essential_reactions))\n- reactions = set(self._model.reactions) - set(self.essential_reactions)\n+ reactions = set(self._model.reactions) - self.essential_reactions\nif use_nullspace_simplification:\nreactions = self._reduce_to_nullspace(reactions)\nelse:\n" }, { "change_type": "MODIFY", "old_path": "cameo/strain_design/heuristic/evolutionary/multiprocess/optimization.py", "new_path": "cameo/strain_design/heuristic/evolutionary/multiprocess/optimization.py", "diff": "@@ -23,7 +23,7 @@ from cameo import config\nfrom cameo import parallel\nfrom cameo import util\nfrom cameo.flux_analysis.simulation import pfba\n-from cameo.flux_analysis.analysis import find_essential_genes, find_essential_reactions\n+from cobra.flux_analysis import find_essential_genes, find_essential_reactions\nfrom cameo.strain_design.heuristic.evolutionary import ReactionKnockoutOptimization, GeneKnockoutOptimization\nfrom cameo.strain_design.heuristic.evolutionary.multiprocess.migrators import MultiprocessingMigrator\nfrom cameo.strain_design.heuristic.evolutionary.multiprocess.observers import \\\n" }, { "change_type": "MODIFY", "old_path": "cameo/strain_design/heuristic/evolutionary/optimization.py", "new_path": "cameo/strain_design/heuristic/evolutionary/optimization.py", "diff": "@@ -31,7 +31,7 @@ from cameo.flux_analysis.simulation import pfba, lmoma, moma, room, logger as si\nfrom cameo.flux_analysis.structural import (find_blocked_reactions_nullspace, find_coupled_reactions_nullspace,\nnullspace,\ncreate_stoichiometric_array)\n-from cameo.flux_analysis.analysis import find_essential_genes, find_essential_reactions\n+from cobra.flux_analysis import find_essential_genes, find_essential_reactions\nfrom cameo.strain_design.heuristic.evolutionary import archives\nfrom cameo.strain_design.heuristic.evolutionary import decoders\nfrom cameo.strain_design.heuristic.evolutionary import evaluators\n" }, { "change_type": "MODIFY", "old_path": "tests/test_flux_analysis.py", "new_path": "tests/test_flux_analysis.py", "diff": "@@ -28,6 +28,7 @@ from cobra.util import create_stoichiometric_matrix, fix_objective_as_constraint\nfrom cobra.flux_analysis.parsimonious import add_pfba\nfrom cobra import Metabolite, Reaction\n+from cobra.flux_analysis import find_essential_reactions\nfrom cameo.flux_analysis import remove_infeasible_cycles, structural\nfrom cameo.flux_analysis.analysis import (find_blocked_reactions,\n@@ -36,7 +37,6 @@ from cameo.flux_analysis.analysis import (find_blocked_reactions,\nfix_pfba_as_constraint)\nfrom cameo.flux_analysis.simulation import fba, lmoma, moma, pfba, room\nfrom cameo.flux_analysis.structural import nullspace\n-from cameo.flux_analysis.analysis import find_essential_reactions\nfrom cameo.parallel import MultiprocessingView, SequentialView\nfrom cameo.util import current_solver_name, pick_one\n" }, { "change_type": "MODIFY", "old_path": "tests/test_solver_based_model.py", "new_path": "tests/test_solver_based_model.py", "diff": "@@ -36,7 +36,8 @@ from cameo import load_model\nfrom cameo.config import solvers\nfrom cameo.core.utils import get_reaction_for, load_medium, medium\nfrom 
cameo.flux_analysis.structural import create_stoichiometric_array\n-from cameo.flux_analysis.analysis import find_essential_genes, find_essential_metabolites, find_essential_reactions\n+from cameo.flux_analysis.analysis import find_essential_metabolites\n+from cobra.flux_analysis import find_essential_genes, find_essential_reactions\nTRAVIS = bool(os.getenv('TRAVIS', False))\n" }, { "change_type": "MODIFY", "old_path": "tests/test_strain_design_heuristics.py", "new_path": "tests/test_strain_design_heuristics.py", "diff": "@@ -68,7 +68,7 @@ from cameo.strain_design.heuristic.evolutionary.variators import (_do_set_n_poin\nset_indel,\nset_mutation,\nset_n_point_crossover)\n-from cameo.flux_analysis.analysis import find_essential_genes, find_essential_reactions\n+from cobra.flux_analysis import find_essential_genes, find_essential_reactions\nfrom cameo.util import RandomGenerator as Random\n" } ]
Python
Apache License 2.0
biosustain/cameo
refactor: use cobra find_essential_{genes, rxn} Functions equivalent except that cobra version returns a set and cameo returned a list
89,735
20.07.2017 15:01:38
-7,200
712f552a87d309d5c7ee8e1226e132735b5793ec
refactor: do not use mutables as defaults
[ { "change_type": "MODIFY", "old_path": "cameo/api/hosts.py", "new_path": "cameo/api/hosts.py", "diff": "@@ -35,7 +35,10 @@ MODEL_DIRECTORY = os.path.join(os.path.join(cameo.__path__[0]), 'models/json')\nclass Host(object):\n- def __init__(self, name='', models=[], biomass=[], carbon_sources=[]):\n+ def __init__(self, name='', models=None, biomass=None, carbon_sources=None):\n+ models = models or []\n+ biomass = biomass or []\n+ carbon_sources = carbon_sources or []\nself.name = name\nself.models = util.IntelliContainer()\nfor id, biomass, carbon_source in zip(models, biomass, carbon_sources):\n" } ]
Python
Apache License 2.0
biosustain/cameo
refactor: do not use mutables as defaults
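The pitfall removed by the commit above, shown in isolation: a mutable default argument is created once and shared across calls, while the `arg = arg or []` idiom from the fix gives every call its own list. The function names are illustrative.

def add_model_buggy(name, models=[]):      # anti-pattern: one shared default list
    models.append(name)
    return models

def add_model_fixed(name, models=None):    # idiom used in the commit
    models = models or []
    models.append(name)
    return models

print(add_model_buggy("iJO1366"), add_model_buggy("iMM904"))  # second call sees both names
print(add_model_fixed("iJO1366"), add_model_fixed("iMM904"))  # each call gets a fresh list
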
89,735
21.07.2017 12:42:05
-7,200
95d5f30d73e02e68ab2b3c430c220864e50c51ce
chore: DRY up requirements No need to repeat info in setup.py in requirements.txt, following recommendations in
[ { "change_type": "MODIFY", "old_path": "requirements.txt", "new_path": "requirements.txt", "diff": "-numpy>=1.9.1\n-scipy>=0.9.0\n-blessings>=1.5.1\n-pandas>=0.18.1\n-ordered-set==1.2\n-inspyred>=1.0\n-cobra>=0.6.0a7\n-optlang>=0.3.0\n-escher>=1.0.0\n-numexpr>=2.4\n-networkx>=1.9.1\n-six>=1.9.0\n-future>=0.15.2\n-lazy-object-proxy>=1.2.0\n-IProgress>=0.2\n-palettable>=2.1.1\n-requests>=2.10.0\n+-e .\n" }, { "change_type": "MODIFY", "old_path": "setup.py", "new_path": "setup.py", "diff": "@@ -26,12 +26,13 @@ import versioneer\nrequirements = ['numpy>=1.9.1',\n'scipy>=0.14.0',\n'blessings>=1.5.1',\n- 'pandas>=0.15.2',\n+ 'pandas>=0.20.2',\n'ordered-set>=1.2',\n- 'cobra>=0.6.0a7',\n- 'optlang>=0.4.2',\n- 'requests>=2.5.0',\n+ 'cobra>=0.8.0',\n+ 'future>=0.15.2',\n+ 'optlang>=1.2.1',\n'numexpr>=2.4',\n+ 'requests>=2.10.0',\n'networkx>=1.9.1',\n'six>=1.9.0',\n'escher>=1.1.2',\n" } ]
Python
Apache License 2.0
biosustain/cameo
chore: DRY up requirements No need to repeat info in setup.py in requirements.txt, following recommendations in https://caremad.io/posts/2013/07/setup-vs-requirement/
89,735
21.07.2017 14:14:12
-7,200
ee6965e2be1abb143abb0644fe5770f15ee431b2
chore: remove unused plotting_old.py
[ { "change_type": "DELETE", "old_path": "cameo/visualization/plotting_old.py", "new_path": null, "diff": "-# Copyright 2015 Novo Nordisk Foundation Center for Biosustainability, DTU.\n-\n-# Licensed under the Apache License, Version 2.0 (the \"License\");\n-# you may not use this file except in compliance with the License.\n-# You may obtain a copy of the License at\n-\n-# http://www.apache.org/licenses/LICENSE-2.0\n-\n-# Unless required by applicable law or agreed to in writing, software\n-# distributed under the License is distributed on an \"AS IS\" BASIS,\n-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n-# See the License for the specific language governing permissions and\n-# limitations under the License.\n-\n-from cameo.util import partition\n-from cameo import config, util\n-\n-import logging\n-\n-logger = logging.getLogger(__name__)\n-logger.setLevel(logging.WARNING)\n-\n-__all__ = ['plot_production_envelope', 'plot_2_production_envelopes', 'plot_flux_variability_analysis']\n-\n-MISSING_PLOTTING_BACKEND_MESSAGE = \"No supported plotting backend could be found. Please install bokeh if you'd like to generate plots (other backends will be supported in the future).\"\n-GOLDEN_RATIO = 1.618033988\n-\n-try:\n- from bokeh import plotting\n- from bokeh.models import GridPlot\n-except ImportError:\n- pass\n-else:\n- def _figure(title, width, height, **kwargs):\n- return plotting.figure(title=title,\n- tools=\"save\",\n- plot_width=width if width is not None else 700,\n- plot_height=height if height is not None else 700,\n- **kwargs)\n-\n-\n- def _add_production_envelope(plot, envelope, key, patch_color=\"#99d8c9\", patch_alpha=0.3, line_color=\"blue\"):\n- ub = envelope[\"objective_upper_bound\"].values\n- lb = envelope[\"objective_lower_bound\"].values\n- var = envelope[key].values\n-\n- x = [v for v in var] + [v for v in reversed(var)]\n- y = [v for v in lb] + [v for v in reversed(ub)]\n-\n- plot.patch(x=x, y=y, color=patch_color, alpha=patch_alpha)\n-\n- if \"label\" in envelope.columns:\n- plot.text(envelope[\"label\"].values, var, ub)\n-\n- plot.line(var, ub, color=line_color)\n- plot.line(var, lb, color=line_color)\n- if ub[-1] != lb[-1]:\n- plot.line((var[-1], var[-1]), (ub[-1], lb[-1]), color=line_color)\n-\n-\n- def plot_flux_variability_analysis_bokeh(fva_result, grid=None, width=None, height=None, title=None,\n- axis_font_size=None, color=\"blue\"):\n-\n- title = \"Flux Variability Analysis\" if title is None else title\n-\n- factors = list(fva_result.index)\n- x0 = list(fva_result.lower_bound.values)\n- x1 = list(fva_result.upper_bound.values)\n- min_value = min([min(x0), min(x1)])\n- max_value = max([max(x0), max(x1)])\n-\n- p = _figure(title, width, height, y_range=factors, x_range=[min_value - 5, max_value + 5])\n-\n- line_width = (width - 30) / len(factors) / 2\n-\n- p.segment(x0, factors, x1, factors, line_width=line_width, line_color=color)\n- for x_0, x_1, f in zip(x0, x1, factors):\n- if x_0 == x_1:\n- p.segment([x_0 - 0.01], [f], [x_1 + 0.01], [f], line_width=line_width, color=color)\n- p.line([0, 0], [0, len(factors) + 1], line_color=\"black\", line_width=1, line_alpha=0.3)\n-\n- if axis_font_size is not None:\n- p.xaxis.axis_label_text_font_size = axis_font_size\n- p.yaxis.axis_label_text_font_size = axis_font_size\n-\n- if grid is not None:\n- grid.append(p)\n- else:\n- plotting.show(p)\n-\n-\n- def plot_2_flux_variability_analysis_bokeh(fva_result1, fva_result2, grid=None, width=None, height=None, title=None,\n- axis_font_size=None, 
color1=\"blue\", color2=\"orange\"):\n-\n- factors = list(fva_result1.index)\n-\n- left1 = list(fva_result1.upper_bound)\n- right1 = list(fva_result1.lower_bound)\n- top1 = [i for i in range(0, len(factors))]\n- bottom1 = [i + 0.5 for i in range(0, len(factors))]\n-\n- left2 = list(fva_result2.upper_bound)\n- right2 = list(fva_result2.lower_bound)\n- top2 = [i for i in range(0, len(factors))]\n- bottom2 = [i - 0.5 for i in range(0, len(factors))]\n-\n- x_range = [min([min(bottom1), min(bottom2)]) - 5, max([max(top1), max(top2)]) + 5]\n-\n- title = \"Comparing Flux Variability Results\" if title is None else title\n-\n- p = _figure(title, width, height, x_range=x_range, y_range=factors)\n-\n- p.quad(top=top1, bottom=bottom1, left=left1, right=right1, color=color1)\n- p.quad(top=top2, bottom=bottom2, left=left2, right=right2, color=color2)\n-\n- if axis_font_size is not None:\n- p.xaxis.axis_label_text_font_size = axis_font_size\n- p.yaxis.axis_label_text_font_size = axis_font_size\n-\n- if grid is not None:\n- grid.append(p)\n- else:\n- plotting.show(p)\n-\n-\n- def plot_production_envelope_bokeh(envelope, objective, key, grid=None, width=None, height=None, title=None,\n- points=None, points_colors=None, axis_font_size=None, color=\"blue\"):\n-\n- title = \"Production Envelope\" if title is None else title\n- p = _figure(title, width, height)\n-\n- p.xaxis.axis_label = key\n- p.yaxis.axis_label = objective\n-\n- _add_production_envelope(p, envelope, key, patch_color=color, patch_alpha=0.3, line_color=color)\n-\n- if axis_font_size is not None:\n- p.xaxis.axis_label_text_font_size = axis_font_size\n- p.yaxis.axis_label_text_font_size = axis_font_size\n-\n- if points is not None:\n- p.scatter(*zip(*points), color=\"green\" if points_colors is None else points_colors)\n-\n- if grid is not None:\n- grid.append(p)\n- else:\n- plotting.show(p)\n-\n-\n- def plot_2_production_envelopes_bokeh(envelope1, envelope2, objective, key, grid=None, width=None,\n- height=None, title=None, points=None, points_colors=None,\n- axis_font_size=None, color1=\"blue\", color2=\"orange\"):\n-\n- title = \"2 Production Envelopes\" if title is None else title\n- p = _figure(title, width, height)\n-\n- p.xaxis.axis_label = key\n- p.yaxis.axis_label = objective\n-\n- _add_production_envelope(p, envelope1, key, patch_color=color1, patch_alpha=0.3, line_color=color1)\n- _add_production_envelope(p, envelope2, key, patch_color=color2, patch_alpha=0.3, line_color=color2)\n-\n- if axis_font_size is not None:\n- p.xaxis.axis_label_text_font_size = axis_font_size\n- p.yaxis.axis_label_text_font_size = axis_font_size\n-\n- if points is not None:\n- p.scatter(*zip(*points), color=\"green\" if points_colors is None else points_colors)\n-\n- if grid is not None:\n- grid.append(p)\n- else:\n- plotting.show(p)\n-\n-try:\n- from bashplotlib import scatterplot\n-except ImportError:\n- pass\n-else:\n- def plot_production_envelope_cli(envelope, objective, key, grid=None, width=None, height=None, title=None,\n- points=None, points_colors=None, axis_font_size=None, color=\"blue\"):\n- scatterplot.plot_scatter(None, envelope[key], envelope[\"objective_upper_bound\"], \"*\")\n-\n-\n-def plot_production_envelope(envelope, objective, key, grid=None, width=None, height=None, title=None,\n- points=None, points_colors=None, axis_font_size=None, color=\"blue\"):\n- if width is None and height is None:\n- width = 700\n- if width is None or height is None:\n- width, height = _golden_ratio(width, height)\n- try:\n- if config.use_bokeh:\n- 
plot_production_envelope_bokeh(envelope, objective, key, grid=grid, width=width, height=height,\n- title=title, points=points, points_colors=points_colors,\n- axis_font_size=axis_font_size, color=color)\n- else:\n- plot_production_envelope_cli(envelope, objective, key, width=width, height=height, title=title,\n- points=points,\n- points_colors=points_colors, axis_font_size=axis_font_size, color=color)\n- except NameError:\n- logger.logger.warn(MISSING_PLOTTING_BACKEND_MESSAGE)\n-\n-\n-def plot_2_production_envelopes(envelope1, envelope2, objective, key, grid=None, width=None, height=None, title=None,\n- points=None, points_colors=None, axis_font_size=None, color1=\"blue\", color2=\"orange\"):\n- if width is None and height is None:\n- width = 700\n- if width is None or height is None:\n- width, height = _golden_ratio(width, height)\n- try:\n- if config.use_bokeh:\n- plot_2_production_envelopes_bokeh(envelope1, envelope2, objective, key, grid=grid, width=width,\n- height=height,\n- title=title, points=points, points_colors=points_colors,\n- axis_font_size=axis_font_size, color1=color1, color2=color2)\n- else:\n- logger.warn(MISSING_PLOTTING_BACKEND_MESSAGE)\n- except NameError:\n- logger.warn(MISSING_PLOTTING_BACKEND_MESSAGE)\n-\n-\n-def plot_flux_variability_analysis(fva_result, grid=None, width=None, height=None, title=None, axis_font_size=None,\n- color=\"blue\"):\n- if width is None and height is None:\n- width = 700\n- if width is None or height is None:\n- width, height = _golden_ratio(width, height)\n- try:\n- if config.use_bokeh:\n- plot_flux_variability_analysis_bokeh(fva_result, grid=grid, width=width, height=height, title=title,\n- axis_font_size=axis_font_size, color=color)\n- else:\n- logger.warn(MISSING_PLOTTING_BACKEND_MESSAGE)\n- except NameError:\n- logger.warn(MISSING_PLOTTING_BACKEND_MESSAGE)\n-\n-\n-def plot_2_flux_variability_analysis(fva_result1, fva_result2, grid=None, width=None, height=None, title=None,\n- axis_font_size=None, color1=\"blue\", color2=\"orange\"):\n- if width is None and height is None:\n- width = 700\n- if width is None or height is None:\n- width, height = _golden_ratio(width, height)\n- try:\n- if config.use_bokeh:\n- plot_2_flux_variability_analysis_bokeh(fva_result1, fva_result2, grid=grid, width=width, height=height,\n- title=title, axis_font_size=axis_font_size, color1=color1,\n- color2=color2)\n- else:\n- logger.warn(MISSING_PLOTTING_BACKEND_MESSAGE)\n- except NameError:\n- logger.warn(MISSING_PLOTTING_BACKEND_MESSAGE)\n-\n-\n-class Grid(object):\n- def __init__(self, nrows=1, title=None):\n- self.plots = []\n- if nrows <= 1:\n- nrows = 1\n- self.nrows = nrows\n- self.title = title\n-\n- def __enter__(self):\n- return self\n-\n- def append(self, plot):\n- self.plots.append(plot)\n-\n- def __exit__(self, exc_type, exc_val, exc_tb):\n- self._plot_grid()\n-\n- def _plot_grid(self):\n- if util.in_ipnb():\n- if config.use_bokeh:\n- self._plot_bokeh_grid()\n- elif config.use_matplotlib:\n- self._plot_matplotlib_grid()\n- else:\n- self._plot_cli_grid()\n-\n- def _plot_bokeh_grid(self):\n- if len(self.plots) > 0:\n- grid = GridPlot(children=partition(self.plots, self.nrows), title=self.title)\n- plotting.show(grid)\n-\n-\n-def _golden_ratio(width, height):\n- if width is None:\n- width = int(height + height / GOLDEN_RATIO)\n-\n- elif height is None:\n- height = int(width / GOLDEN_RATIO)\n-\n- return width, height\n" } ]
Python
Apache License 2.0
biosustain/cameo
chore: remove unused plotting_old.py
89,735
14.03.2017 10:56:54
-3,600
aa9cab02fe8f6e5ce7b64283d57b5deb2990d7b0
fix: only cplex on 2.7, 3.4 cplex in binary we pull in is only for python 2.7 and 3.4
[ { "change_type": "MODIFY", "old_path": ".travis.yml", "new_path": ".travis.yml", "diff": "@@ -44,7 +44,7 @@ addons:\nbefore_install:\n- travis_retry pip install --upgrade pip setuptools wheel tox\n- 'echo \"this is a build for: $TRAVIS_BRANCH\"'\n-- 'if [[ \"$TRAVIS_BRANCH\" != \"devel\" ]]; then bash ./.travis/install_cplex.sh; fi'\n+- 'if [[ \"$TRAVIS_BRANCH\" != \"devel\" && ($TRAVIS_PYTHON_VERSION == \"3.4\" || $TRAVIS_PYTHON_VERSION == \"2.7\") ]]; then bash ./.travis/install_cplex.sh; fi'\nscript:\n- tox\n" } ]
Python
Apache License 2.0
biosustain/cameo
fix: only cplex on 2.7, 3.4 (#133) cplex in binary we pull in is only for python 2.7 and 3.4
89,735
28.07.2017 11:15:14
-7,200
62684d6227bbb50b06c053e93b72dc027e3f4fec
docs: add release notes Let's keep track of what changes between the releases in md files rather than only github releases.
[ { "change_type": "ADD", "old_path": null, "new_path": "release-notes/0.11.0.md", "diff": "+# Release notes for cameo 0.11.0\n+\n+## Highlights\n+\n+This a major release with substantial changes to several areas. Cameo pioneered the use of a model class that tightly integrates with the underlying solver and called this the `SolverBasedModel` which inherited from cobrapy's regular `Model`. Since this innovation is useful in more places than cameo, we have now moved all this functionality to cobrapy and consequently there is no longer any need for the extra class in cameo. In this release, we provide a refactored cameo that no longer has its own core classes but directly imports all of these from cobrapy.\n+\n+Additionally, we now use [cobrapy's context manager](http://cobrapy.readthedocs.io/en/latest/getting_started.html#Making-changes-reversibly-using-models-as-contexts) instead of the `TimeMachine` (which inspired it) so method's that took a `time_machine` argument no longer do so.\n+\n+We have also started the process to better adhere to the scope of cameo, which is strain design, and to move all basic analysis and simulation to cobrapy. Since these changes are substantial, this is still a work in progress but we wanted to already make a release so that cameo users and contributors more easily can follow the direction we are taking.\n+\n+Overall, although the changes are substantial, the actual changes to user workflows should be fairly small but with a few backwards incompatible changes which are outlined below. We hope that the added advantage of a better integration with cobrapy, and other packages that depend on cobrapy will make up for any inconvenience.\n+\n+## New features\n+\n+- pandas `0.20.2` support\n+\n+## Fixes\n+\n+- `phenotypic_phase_plane` no longer assumed that exchange reactions are formulated to have negative stoichiometry.\n+- `Python 2` on Windows no longer fails when reading zipped jsons.\n+- `essential_genes` as argument to optgene is now respected, previously ignored.\n+- divide by zero error now handled properly in optgene/optknock\n+\n+## Backwards incompatible changes\n+\n+- `SolverBasedModel`, `Gene` and `Metabolite` are no longer defined in\n+ cameo and must be imported from cobrapy.\n+- `cobra.Model` does not have `change_bounds` method. Instead use\n+ direct assignment and the context manager instead of `TimeMachine`.\n+- `cobra.Model` does not have `solve` method. Use `optimize`.\n+- `cobra.Model` does not have `S` property. Use `cobra.util.create_stoichimetric_matrix`\n+- `cobra.Model` does not have `fix_objective_as_constraint`. Use\n+ `cobra.util.fix_objective_as_constraint`.\n+- `cobra.Model` does not have `non_functional_genes`. No replacement.\n+- `cobra.Model` does not have `add_exchange`. Use `cobra.Model.add_boundary` instead.\n+- `cobra.Model` does not have `add_ratio_constraint`. Use `cobra.Model.add_cons_vars`.\n+- `cobra.Model` does not have `essential_{genes,reactions}`. Use\n+ `cobra.flux_analysis.find_essential_{genes,reactions}`.\n+- `cobra.Model` does not have `change_objective`. Use direct\n+ assignment to objective.\n+- `.knock_out()` methods do not accept `time_machine`. Use context\n+ manager.\n+- `cobra.Reaction` does not have `effective_bounds`. No replacement.\n+- `cobra.Reaction` does not have `is_exchange`. Use `boundary` instead.\n+- `cobra.Model` does not have a `solution` property. 
User is expectd\n+ to keep track of solutions.\n+- `SolveError` is replaced by `cobra.exceptions.OptimizationError`\n+- `cameo.stuff` was removed.\n+- `cameo.visualization.plotting_old` was removed.\n" }, { "change_type": "ADD", "old_path": null, "new_path": "release-notes/next-release.md", "diff": "+# Release notes for cameo x.y.z\n+\n+## Fixes\n+\n+## New features\n+\n+## Deprecated features\n" } ]
Python
Apache License 2.0
biosustain/cameo
docs: add release notes Let's keep track of what changes between the releases in md files rather than only github releases.
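The 0.11.0 release notes above replace `TimeMachine` with cobrapy's context manager for reversible model changes. Below is a minimal sketch of that pattern, assuming a BiGG model identifier and reaction IDs that are placeholders chosen purely for illustration.

```python
from cameo import load_model

model = load_model("iJO1366")  # placeholder model identifier

with model:  # every change made inside the block is reverted on exit
    model.reactions.PGI.knock_out()
    model.reactions.EX_o2_e.lower_bound = 0
    print(model.optimize().objective_value)

# Outside the block the knockout and the bound change have been undone.
print(model.reactions.PGI.bounds)
```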
89,733
01.08.2017 11:33:33
-7,200
ff60b56074f8d2f93bcdf4af445362b78681f510
fix: validate left[i] == right[i]
[ { "change_type": "MODIFY", "old_path": "cameo/flux_analysis/structural.py", "new_path": "cameo/flux_analysis/structural.py", "diff": "@@ -191,6 +191,25 @@ def find_coupled_reactions_nullspace(model, ns=None, tol=1e-10):\nreaction_j = non_blocked_reactions[j]\nright = non_blocked_ns[j]\nratio = np.apply_along_axis(lambda x: x[0] / x[1] if abs(x[1]) > 0. else np.inf, 0, np.array([left, right]))\n+\n+ # special case:\n+ # if ratio is 1 (a/b == 1) then a == b.\n+ # but if a = 0 and b = 0, then a/b = np.inf.\n+ # solution:\n+ # mask inf from ratio\n+ # check if non-inf elements ratio is ~1\n+ # check if left and right values are 0 for indices with inf ratio\n+ # if yes, replace with 1\n+ inf_mask = np.isinf(ratio)\n+ non_inf = ratio[~inf_mask]\n+\n+ if (abs((non_inf - 1)) < tol * 100).all():\n+ right_is_zero = (abs(right[inf_mask]) < tol).all()\n+ left_is_zero = (abs(left[inf_mask]) < tol).all()\n+\n+ if right_is_zero and left_is_zero:\n+ ratio[inf_mask] = 1\n+\nif abs(max(ratio) - min(ratio)) < tol * 100:\ngroup[reaction_j] = round(ratio.mean(), 10)\n" } ]
Python
Apache License 2.0
biosustain/cameo
fix: validate left[i] == right[i]
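The special case added above (0/0 entries treated as ratio 1 when all finite ratios are ~1 and both coefficients are zero) can be reproduced in isolation with plain numpy; the vectors below are toy values, not taken from any model.

```python
import numpy as np

# Toy stand-ins for two rows of the nullspace; values chosen only for illustration.
left = np.array([2.0, 4.0, 0.0])
right = np.array([2.0, 4.0, 0.0])
tol = 1e-10

nonzero = np.abs(right) > 0
ratio = np.where(nonzero, left / np.where(nonzero, right, 1.0), np.inf)

inf_mask = np.isinf(ratio)
if (np.abs(ratio[~inf_mask] - 1) < tol * 100).all():
    # 0/0 entries: both coefficients vanish, so they do not break the coupling.
    if (np.abs(left[inf_mask]) < tol).all() and (np.abs(right[inf_mask]) < tol).all():
        ratio[inf_mask] = 1

print(ratio)  # [1. 1. 1.] -> the pair is still reported as fully coupled
```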
89,737
26.08.2017 20:07:25
-7,200
2abe3eb4f169eb85c6fa2eb7b69329919e5d9703
fix: essential metabolites method breaks with some solvers
[ { "change_type": "MODIFY", "old_path": "cameo/flux_analysis/analysis.py", "new_path": "cameo/flux_analysis/analysis.py", "diff": "@@ -94,7 +94,7 @@ def knock_out_metabolite(metabolite, force_steady_state=False):\nreaction_id=\"KO_{}\".format(metabolite.id))\nelse:\nprevious_bounds = metabolite.constraint.lb, metabolite.constraint.ub\n- metabolite.constraint.lb, metabolite.constraint.ub = None, None\n+ metabolite.constraint.lb, metabolite.constraint.ub = -1000, 1000\ncontext = get_context(metabolite)\nif context:\ndef reset():\n" } ]
Python
Apache License 2.0
biosustain/cameo
fix: essential metabolites method breaks with some solvers
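The one-line change above swaps unbounded (`None`) constraint bounds for wide finite ones when knocking out a metabolite. A standalone optlang sketch of the same relaxation follows; the variable and constraint names are made up, and the rationale (some solver backends reject `None` bounds) is inferred from the commit message.

```python
from optlang import Constraint, Model, Variable

flux = Variable("demo_flux", lb=-10, ub=10)                      # made-up flux variable
balance = Constraint(flux, lb=0, ub=0, name="demo_met_balance")  # mass-balance-like constraint

model = Model(name="metabolite knockout demo")
model.add(flux)
model.add(balance)

# Relax the constraint with wide finite bounds rather than None.
balance.lb, balance.ub = -1000, 1000
```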
89,737
25.08.2017 11:15:47
-7,200
e894bd2638625f43acdda90dfbc228d0e5e3c3df
fix: problem cache refactored to use cobrapy context manager Updated unit tests Self validation
[ { "change_type": "MODIFY", "old_path": "cameo/flux_analysis/simulation.py", "new_path": "cameo/flux_analysis/simulation.py", "diff": "@@ -24,26 +24,23 @@ Currently implements:\nfrom __future__ import absolute_import, print_function\n+import logging\nimport os\n-import six\n-import pandas\nimport numpy\n-\n-import logging\n-\n+import pandas\n+import six\nimport sympy\n+from cobra import Reaction\n+from cobra.flux_analysis import pfba as cobrapy_pfba\n+from optlang.interface import OptimizationExpression\nfrom sympy import Add\nfrom sympy import Mul\nfrom sympy.parsing.sympy_parser import parse_expr\n-from cobra import Reaction\n-from cobra.flux_analysis import pfba as cobrapy_pfba\n-from cobra.exceptions import OptimizationError\n-from optlang.interface import OptimizationExpression\nfrom cameo.config import ndecimals\n-from cameo.util import ProblemCache, in_ipnb\nfrom cameo.core.result import Result\n+from cameo.util import ProblemCache, in_ipnb\nfrom cameo.visualization.palette import mapper, Palette\n__all__ = ['fba', 'pfba', 'moma', 'lmoma', 'room']\n@@ -185,6 +182,9 @@ def moma(model, reference=None, cache=None, reactions=None, *args, **kwargs):\nelse:\nresult = FluxDistributionResult.from_solution(solution)\nreturn result\n+ except Exception as e:\n+ cache.rollback()\n+ raise e\nfinally:\nif volatile:\ncache.reset()\n@@ -250,7 +250,7 @@ def lmoma(model, reference=None, cache=None, reactions=None, *args, **kwargs):\nname=constraint_id)\nreturn constraint\n- cache.add_constraint(\"c_%s_ub\" % rid, create_upper_constraint, update_upper_constraint,\n+ cache.add_constraint(\"lmoma_const_%s_ub\" % rid, create_upper_constraint, update_upper_constraint,\ncache.variables[pos_var_id], reaction, flux_value)\ndef update_lower_constraint(model, constraint, var, reaction, flux_value):\n@@ -264,7 +264,7 @@ def lmoma(model, reference=None, cache=None, reactions=None, *args, **kwargs):\nname=constraint_id)\nreturn constraint\n- cache.add_constraint(\"c_%s_lb\" % rid, create_lower_constraint, update_lower_constraint,\n+ cache.add_constraint(\"lmoma_const_%s_lb\" % rid, create_lower_constraint, update_lower_constraint,\ncache.variables[neg_var_id], reaction, flux_value)\ndef create_objective(model, variables):\n@@ -273,16 +273,13 @@ def lmoma(model, reference=None, cache=None, reactions=None, *args, **kwargs):\nsloppy=False)\ncache.add_objective(create_objective, None, cache.variables.values())\n- try:\n-\nsolution = model.optimize(raise_error=True)\nif reactions is not None:\nresult = FluxDistributionResult({r: solution.get_primal_by_id(r) for r in reactions}, solution.f)\nelse:\nresult = FluxDistributionResult.from_solution(solution)\nreturn result\n- except OptimizationError as e:\n- raise e\n+\nexcept Exception as e:\ncache.rollback()\nraise e\n@@ -345,10 +342,10 @@ def room(model, reference=None, cache=None, delta=0.03, epsilon=0.001, reactions\ndef update_upper_constraint(model, constraint, reaction, variable, flux_value, epsilon):\nw_u = flux_value + delta * abs(flux_value) + epsilon\n- constraint._set_coefficients_low_level({variable: reaction.upper_bound - w_u})\n+ constraint.set_linear_coefficients({variable: reaction.upper_bound - w_u})\nconstraint.ub = w_u\n- cache.add_constraint(\"c_%s_upper\" % rid, create_upper_constraint, update_upper_constraint,\n+ cache.add_constraint(\"room_const_%s_upper\" % rid, create_upper_constraint, update_upper_constraint,\nreaction, cache.variables[\"y_%s\" % rid], flux_value, epsilon)\ndef create_lower_constraint(model, constraint_id, reaction, variable, 
flux_value, epsilon):\n@@ -361,24 +358,21 @@ def room(model, reference=None, cache=None, delta=0.03, epsilon=0.001, reactions\ndef update_lower_constraint(model, constraint, reaction, variable, flux_value, epsilon):\nw_l = flux_value - delta * abs(flux_value) - epsilon\n- constraint._set_coefficients_low_level({variable: reaction.lower_bound - w_l})\n+ constraint.set_linear_coefficients({variable: reaction.lower_bound - w_l})\nconstraint.lb = w_l\n- cache.add_constraint(\"c_%s_lower\" % rid, create_lower_constraint, update_lower_constraint,\n+ cache.add_constraint(\"room_const_%s_lower\" % rid, create_lower_constraint, update_lower_constraint,\nreaction, cache.variables[\"y_%s\" % rid], flux_value, epsilon)\nmodel.objective = model.solver.interface.Objective(add([mul([One, var]) for var in cache.variables.values()]),\ndirection='min')\n- try:\n+\nsolution = model.optimize(raise_error=True)\nif reactions is not None:\nresult = FluxDistributionResult({r: solution.get_primal_by_id(r) for r in reactions}, solution.f)\nelse:\nresult = FluxDistributionResult.from_solution(solution)\nreturn result\n- except OptimizationError as e:\n- logger.error(\"room could not determine an optimal solution for objective %s\" % model.objective)\n- raise e\nexcept Exception as e:\ncache.rollback()\n@@ -531,7 +525,6 @@ class FluxDistributionResult(Result):\nif __name__ == '__main__':\nimport time\nfrom cobra.io import read_sbml_model\n- from cobra.flux_analysis.parsimonious import optimize_minimal_flux\nfrom cameo import load_model\n# sbml_path = '../../tests/data/EcoliCore.xml'\n" }, { "change_type": "MODIFY", "old_path": "cameo/util.py", "new_path": "cameo/util.py", "diff": "@@ -32,6 +32,7 @@ import numpy\nimport pandas\nimport pip\nimport six\n+from cobra.util.context import HistoryManager\nfrom numpy.random import RandomState\nfrom six.moves import range\n@@ -120,50 +121,51 @@ class ProblemCache(object):\n\"\"\"\ndef __init__(self, model):\n- self.time_machine = None\n+ self.history_manager = None\nself._model = model\nself.variables = {}\nself.constraints = {}\nself.objective = None\n- self.original_objective = model.objective\n- self.time_machine = TimeMachine()\n+ self.original_objective = model.solver.objective\n+ self._contexts = [HistoryManager()]\nself.transaction_id = None\ndef begin_transaction(self):\n\"\"\"\nCreates a time point. 
If rollback is called, the variables and constrains will be reverted to this point.\n\"\"\"\n- self.transaction_id = uuid1()\n- self.time_machine(do=int, undo=int, bookmark=self.transaction_id)\n+ self._contexts.append(HistoryManager())\n@property\ndef model(self):\nreturn self._model\ndef _append_constraint(self, constraint_id, create, *args, **kwargs):\n- self.constraints[constraint_id] = create(self.model, constraint_id, *args, **kwargs)\n- self._model.solver.add(self.constraints[constraint_id])\n+ constraint = self.constraints[constraint_id] = create(self._model, constraint_id, *args, **kwargs)\n+ assert constraint_id in self.constraints\n+ self._model.solver.add(constraint)\ndef _remove_constraint(self, constraint_id):\nconstraint = self.constraints.pop(constraint_id)\n- self.model.solver.remove(constraint)\n+ self._model.solver.remove(constraint)\ndef _append_variable(self, variable_id, create, *args, **kwargs):\n- self.variables[variable_id] = create(self.model, variable_id, *args, **kwargs)\n- self.model.solver.add(self.variables[variable_id])\n+ variable = self.variables[variable_id] = create(self._model, variable_id, *args, **kwargs)\n+ assert variable_id in self.variables\n+ self._model.solver.add(variable)\ndef _remove_variable(self, variable_id):\nvariable = self.variables.pop(variable_id)\n- self.model.solver.remove(variable)\n+ self._model.solver.remove(variable)\ndef _rebuild_variable(self, variable):\n(type, lb, ub, name) = variable.type, variable.lb, variable.ub, variable.name\ndef rebuild():\n- self.model.solver.remove(variable)\n+ self._model.solver.remove(variable)\nnew_variable = self.model.solver.interface.Variable(name, lb=lb, ub=ub, type=type)\nself.variables[name] = variable\n- self.model.solver.add(new_variable, sloppy=True)\n+ self._model.solver.add(new_variable, sloppy=True)\nreturn rebuild\n@@ -186,14 +188,15 @@ class ProblemCache(object):\nupdate : function\na function that updates an optlang.interface.Constraint\n\"\"\"\n- if constraint_id in self.constraints:\n- if update is not None:\n- update(self.model, self.constraints[constraint_id], *args, **kwargs)\n- else:\n- self.time_machine(\n- do=partial(self._append_constraint, constraint_id, create, *args, **kwargs),\n- undo=partial(self._remove_constraint, constraint_id)\n- )\n+ context = self._contexts[-1]\n+\n+ if constraint_id not in self.constraints:\n+ self._append_constraint(constraint_id, create, *args, **kwargs)\n+ context(partial(self._remove_constraint, constraint_id))\n+ elif update is not None:\n+ update(self._model, self.constraints[constraint_id], *args, **kwargs)\n+\n+ assert constraint_id in self.constraints\ndef add_variable(self, variable_id, create, update, *args, **kwargs):\n\"\"\"\n@@ -207,60 +210,60 @@ class ProblemCache(object):\nArguments\n---------\n- constraint_id: str\n+ variable_id : str\nThe identifier of the constraint\ncreate : function\nA function that creates an optlang.interface.Variable\nupdate : function\na function that updates an optlang.interface.Variable\n\"\"\"\n- if variable_id in self.variables:\n- if update is not None:\n- self.time_machine(\n- do=partial(update, self.model, self.variables[variable_id], *args, **kwargs),\n- undo=self._rebuild_variable(self.variables[variable_id]))\n- else:\n- self.time_machine(\n- do=partial(self._append_variable, variable_id, create, *args, **kwargs),\n- undo=partial(self._remove_variable, variable_id)\n- )\n+ context = self._contexts[-1]\n+ if variable_id not in self.variables:\n+ self._append_variable(variable_id, create, *args, 
**kwargs)\n+ context(partial(self._remove_variable, variable_id))\n+ elif update is not None:\n+ # rebuild_function = self._rebuild_variable(self.variables[variable_id])\n+ update(self._model, self.variables[variable_id], *args, **kwargs)\n+ # context(rebuild_function)\n+\n+ assert variable_id in self.variables\ndef add_objective(self, create, update, *args):\n+ context = self._contexts[-1]\nif self.objective is None:\n- self.objective = create(self.model, *args)\n- self.time_machine(\n- do=partial(setattr, self.model, 'objective', self.objective),\n- undo=partial(setattr, self.model, 'objective', self.model.objective)\n- )\n- else:\n- if update:\n- self.objective = update(self.model, *args)\n- self.time_machine(\n- do=partial(setattr, self.model, 'objective', self.objective),\n- undo=partial(setattr, self.model, 'objective', self.model.objective)\n- )\n+ previous_objective = self._model.solver.objective\n+ self.model.solver.objective = self.objective = create(self._model, *args)\n+ context(partial(setattr, self._model.solver, 'objective', previous_objective))\n+\n+ elif update:\n+ previous_objective = self._model.solver.objective\n+ self.model.solver.objective = self.objective = update(self._model, *args)\n+ context(partial(setattr, self._model.solver, 'objective', previous_objective))\ndef reset(self):\n\"\"\"\nRemoves all constraints and variables from the cache.\n\"\"\"\n- self.model.solver.remove(self.constraints.values())\n- self.model.solver.remove(self.variables.values())\n- self.model.objective = self.original_objective\n+ variables = self.variables.keys()\n+ constraints = self.constraints.keys()\n+ while len(self._contexts) > 0:\n+ manager = self._contexts.pop()\n+ manager.reset()\n+ self._contexts.append(HistoryManager())\n+ assert all(var_id not in self._model.solver.variables for var_id in variables)\n+ assert all(const_id not in self._model.solver.constraints for const_id in constraints)\nself.variables = {}\n- self.objective = None\nself.constraints = {}\n- self.transaction_id = None\n- self.time_machine.history.clear()\n+ self._model.objective = self.original_objective\n+ self.objective = None\ndef rollback(self):\n\"\"\"\nReturns to the previous transaction start point.\n\"\"\"\n- if self.transaction_id is None:\n+ if len(self._contexts) < 2:\nraise RuntimeError(\"Start transaction must be called before rollback\")\n- self.time_machine.undo(self.transaction_id)\n- self.transaction_id = None\n+ self._contexts.pop().reset()\ndef __enter__(self):\n\"\"\"\n@@ -271,6 +274,7 @@ class ProblemCache(object):\nYou want to run room/lmoma for every single knockout.\n>>> with ProblemCache(model) as cache:\n>>> for reaction in reactions:\n+ >>> reaction.knock_out()\n>>> result = lmoma(model, reference=reference, cache=cache)\nReturns\n" }, { "change_type": "MODIFY", "old_path": "tests/test_flux_analysis.py", "new_path": "tests/test_flux_analysis.py", "diff": "@@ -23,12 +23,12 @@ import re\nimport numpy as np\nimport pandas\nimport pytest\n-from sympy import Add\n-from cobra.util import create_stoichiometric_matrix, fix_objective_as_constraint\n-from cobra.flux_analysis.parsimonious import add_pfba\n-\nfrom cobra import Metabolite, Reaction\n+from cobra.exceptions import OptimizationError\nfrom cobra.flux_analysis import find_essential_reactions\n+from cobra.flux_analysis.parsimonious import add_pfba\n+from cobra.util import create_stoichiometric_matrix, fix_objective_as_constraint\n+from sympy import Add\nfrom cameo.flux_analysis import remove_infeasible_cycles, structural\nfrom 
cameo.flux_analysis.analysis import (find_blocked_reactions,\n@@ -38,7 +38,7 @@ from cameo.flux_analysis.analysis import (find_blocked_reactions,\nfrom cameo.flux_analysis.simulation import fba, lmoma, moma, pfba, room\nfrom cameo.flux_analysis.structural import nullspace\nfrom cameo.parallel import MultiprocessingView, SequentialView\n-from cameo.util import current_solver_name, pick_one\n+from cameo.util import current_solver_name, pick_one, ProblemCache\nTRAVIS = 'TRAVIS' in os.environ\nTEST_DIR = os.path.dirname(__file__)\n@@ -244,6 +244,8 @@ class TestSimulationMethods:\ndistance = sum((abs(solution[v] - pfba_solution[v]) for v in pfba_solution.keys()))\nassert abs(0 - distance) > 1e-6, \"lmoma distance without knockouts must be 0 (was %f)\" % distance\nassert core_model.objective.expression == original_objective.expression\n+ assert not any(v.name.startswith(\"u_\") for v in core_model.solver.variables)\n+ assert not any(c.name.startswith(\"lmoma_const_\") for c in core_model.solver.constraints)\ndef test_lmoma_with_reaction_filter(self, core_model):\noriginal_objective = core_model.objective\n@@ -252,6 +254,8 @@ class TestSimulationMethods:\nreactions=['EX_o2_LPAREN_e_RPAREN_', 'EX_glc_LPAREN_e_RPAREN_'])\nassert len(solution.fluxes) == 2\nassert core_model.objective.expression == original_objective.expression\n+ assert not any(v.name.startswith(\"u_\") for v in core_model.solver.variables)\n+ assert not any(c.name.startswith(\"lmoma_const_\") for c in core_model.solver.constraints)\ndef test_moma(self, core_model):\nif current_solver_name(core_model) == 'glpk':\n@@ -262,6 +266,8 @@ class TestSimulationMethods:\ndistance = sum((abs(solution[v] - pfba_solution[v]) for v in pfba_solution.keys()))\nassert abs(0 - distance) < 1e-6, \"moma distance without knockouts must be 0 (was %f)\" % distance\nassert core_model.objective.expression == original_objective.expression\n+ assert not any(v.name.startswith(\"moma_aux_\") for v in core_model.solver.variables)\n+ assert not any(c.name.startswith(\"moma_const_\") for c in core_model.solver.constraints)\ndef test_room(self, core_model):\noriginal_objective = core_model.objective\n@@ -270,6 +276,8 @@ class TestSimulationMethods:\nassert abs(0 - solution.objective_value) < 1e-6, \\\n\"room objective without knockouts must be 0 (was %f)\" % solution.objective_value\nassert core_model.objective.expression == original_objective.expression\n+ assert not any(v.name.startswith(\"y_\") for v in core_model.solver.variables)\n+ assert not any(c.name.startswith(\"moma_const_\") for c in core_model.solver.constraints)\ndef test_room_with_reaction_filter(self, core_model):\noriginal_objective = core_model.objective\n@@ -278,12 +286,71 @@ class TestSimulationMethods:\nreactions=['EX_o2_LPAREN_e_RPAREN_', 'EX_glc_LPAREN_e_RPAREN_'])\nassert len(solution.fluxes) == 2\nassert core_model.objective.expression == original_objective.expression\n+ assert not any(v.name.startswith(\"y_\") for v in core_model.solver.variables)\n+\n+ def test_moma_with_cache(self, core_model):\n+ if current_solver_name(core_model) == 'glpk':\n+ pytest.skip('glpk does not support qp')\n+ original_objective = core_model.objective\n+ pfba_solution = pfba(core_model)\n+ essential_reactions = find_essential_reactions(core_model)\n+ cache = ProblemCache(core_model)\n+ for r in core_model.reactions:\n+ if r not in essential_reactions:\n+ with core_model:\n+ r.knock_out()\n+ moma(core_model, reference=pfba_solution, cache=cache)\n+ assert any(v.name.startswith(\"moma_aux_\") for v in 
core_model.solver.variables)\n+ assert any(c.name.startswith(\"moma_const_\") for c in core_model.solver.constraints)\n+ cache.reset()\n+ assert core_model.objective.expression == original_objective.expression\n+ assert not any(v.name.startswith(\"moma_aux_\") for v in core_model.solver.variables)\n+ assert not any(c.name.startswith(\"moma_const_\") for c in core_model.solver.constraints)\n+\n+ def test_lmoma_with_cache(self, core_model):\n+ original_objective = core_model.objective\n+ pfba_solution = pfba(core_model)\n+ essential_reactions = find_essential_reactions(core_model)\n+ cache = ProblemCache(core_model)\n+ for r in core_model.reactions:\n+ if r not in essential_reactions:\n+ with core_model:\n+ r.knock_out()\n+ lmoma(core_model, reference=pfba_solution, cache=cache)\n+ assert any(v.name.startswith(\"u_\") for v in core_model.solver.variables)\n+ assert any(c.name.startswith(\"lmoma_const_\") for c in core_model.solver.constraints)\n+ cache.reset()\n+ assert core_model.objective.expression == original_objective.expression\n+ assert not any(v.name.startswith(\"u_\") for v in core_model.solver.variables)\n+ assert not any(c.name.startswith(\"lmoma_const_\") for c in core_model.solver.constraints)\n+\n+ def test_room_with_cache(self, core_model):\n+ original_objective = core_model.objective\n+ pfba_solution = pfba(core_model)\n+ essential_reactions = find_essential_reactions(core_model)\n+ cache = ProblemCache(core_model)\n+ for r in core_model.reactions:\n+ if r not in essential_reactions:\n+ with core_model:\n+ r.knock_out()\n+ try:\n+ room(core_model, reference=pfba_solution, cache=cache)\n+ assert any(v.name.startswith(\"y_\") for v in core_model.solver.variables)\n+ assert any(c.name.startswith(\"room_const_\") for c in core_model.solver.constraints)\n+ except OptimizationError: # TODO: room shouldn't return infeasible for non-essential reacitons\n+ continue\n+ cache.reset()\n+ assert core_model.objective.expression == original_objective.expression\n+ assert not any(v.name.startswith(\"y_\") for v in core_model.solver.variables)\n+ assert not any(c.name.startswith(\"room_const_\") for c in core_model.solver.constraints)\ndef test_room_shlomi_2005(self, toy_model):\noriginal_objective = toy_model.objective\nreference = {\"b1\": 10, \"v1\": 10, \"v2\": 5, \"v3\": 0, \"v4\": 0, \"v5\": 0, \"v6\": 5, \"b2\": 5, \"b3\": 5}\nexpected = {'b1': 10.0, 'b2': 5.0, 'b3': 5.0, 'v1': 10.0,\n'v2': 5.0, 'v3': 0.0, 'v4': 5.0, 'v5': 5.0, 'v6': 0.0}\n+ assert not any(v.name.startswith(\"y_\") for v in toy_model.solver.variables)\n+\nwith toy_model:\ntoy_model.reactions.v6.knock_out()\nresult = room(toy_model, reference=reference, delta=0, epsilon=0)\n@@ -291,6 +358,7 @@ class TestSimulationMethods:\nfor k in reference.keys():\nassert abs(expected[k] - result.fluxes[k]) < 0.1, \"%s: %f | %f\"\nassert toy_model.objective.expression == original_objective.expression\n+ assert not any(v.name.startswith(\"y_\") for v in toy_model.variables)\ndef test_moma_shlomi_2005(self, toy_model):\nif current_solver_name(toy_model) == 'glpk':\n@@ -308,6 +376,7 @@ class TestSimulationMethods:\nfor k in reference.keys():\nassert abs(expected[k] - result.fluxes[k]) < 0.1, \"%s: %f | %f\"\nassert toy_model.objective.expression == original_objective.expression\n+ assert not any(v.name.startswith(\"u_\") for v in toy_model.solver.variables)\ndef test_moma_shlomi_2005_change_ref(self, toy_model):\nif current_solver_name(toy_model) == 'glpk':\n@@ -325,6 +394,7 @@ class TestSimulationMethods:\nfor k in 
reference.keys():\nassert abs(expected[k] - result.fluxes[k]) < 0.1, \"%s: %f | %f\"\nassert toy_model.objective.expression == original_objective.expression\n+ assert not any(v.name.startswith(\"u_\") for v in toy_model.solver.variables)\n# TODO: this test should be merged with the one above but problem cache is not resetting the model properly anymore.\ndef test_moma_shlomi_2005_change_ref_1(self, toy_model):\n@@ -338,6 +408,7 @@ class TestSimulationMethods:\ntoy_model.reactions.v6.knock_out()\nresult_changed = moma(toy_model, reference=reference_changed)\nassert np.all([expected != result_changed.fluxes])\n+ assert not any(v.name.startswith(\"u_\") for v in toy_model.solver.variables)\nclass TestRemoveCycles:\n" } ]
Python
Apache License 2.0
biosustain/cameo
fix: problem cache refactored to use cobrapy context manager Updated unit tests Self validation
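The updated docstring above spells out the intended `ProblemCache` usage; here is that pattern as a self-contained sketch. The model identifier and the slice of reactions are placeholders.

```python
from cameo import load_model
from cameo.flux_analysis.simulation import lmoma, pfba
from cameo.util import ProblemCache

model = load_model("e_coli_core")  # placeholder model identifier
reference = pfba(model)            # wild-type flux distribution used as the reference

with ProblemCache(model) as cache:         # auxiliary variables/constraints are reused ...
    for reaction in model.reactions[:5]:   # ... across iterations and removed on exit
        with model:                        # each knockout is reverted after the simulation
            reaction.knock_out()
            result = lmoma(model, reference=reference, cache=cache)
            print(reaction.id, result.objective_value)
```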
89,737
30.08.2017 11:08:43
-7,200
97843f21354fe7e0cc5d996b0db74ad1348218a0
fix: problem cache reset self-validation improved
[ { "change_type": "MODIFY", "old_path": "cameo/util.py", "new_path": "cameo/util.py", "diff": "@@ -244,8 +244,8 @@ class ProblemCache(object):\n\"\"\"\nRemoves all constraints and variables from the cache.\n\"\"\"\n- variables = self.variables.keys()\n- constraints = self.constraints.keys()\n+ variables = list(self.variables.keys())\n+ constraints = list(self.constraints.keys())\nwhile len(self._contexts) > 0:\nmanager = self._contexts.pop()\nmanager.reset()\n" } ]
Python
Apache License 2.0
biosustain/cameo
fix: problem cache reset self-validation improved
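The change above materialises the cache's variable and constraint names with `list(...)` before tear-down. The general Python 3 pitfall it guards against, shown in isolation:

```python
# In Python 3, .keys() returns a live view tied to the dict, not a copy.
variables = {"u_succ": 1, "u_ac": 2}   # made-up cache entries

snapshot = list(variables.keys())      # materialise the names first, as in the fix
variables.clear()                      # tear-down empties the original mapping

# The snapshot still lists what was there, so the self-validation asserts check something.
assert all(name not in variables for name in snapshot)
print(snapshot)  # ['u_succ', 'u_ac']
```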
89,737
31.08.2017 14:09:02
-7,200
4a34ca413277ac98e35683dd01a023e991c1a151
fix: problem cache reset still doesn't work on a single test
[ { "change_type": "MODIFY", "old_path": "tests/test_flux_analysis.py", "new_path": "tests/test_flux_analysis.py", "diff": "@@ -329,6 +329,7 @@ class TestSimulationMethods:\npfba_solution = pfba(core_model)\nessential_reactions = find_essential_reactions(core_model)\ncache = ProblemCache(core_model)\n+ infeasible = 0\nfor r in core_model.reactions:\nif r not in essential_reactions:\nwith core_model:\n@@ -337,14 +338,18 @@ class TestSimulationMethods:\nroom(core_model, reference=pfba_solution, cache=cache)\nassert any(v.name.startswith(\"y_\") for v in core_model.solver.variables)\nassert any(c.name.startswith(\"room_const_\") for c in core_model.solver.constraints)\n- except OptimizationError: # TODO: room shouldn't return infeasible for non-essential reacitons\n+ except OptimizationError: # TODO: room shouldn't return infeasible for non-essential reactions\n+ infeasible += 1\ncontinue\n+ assert infeasible < len(core_model.reactions)\ncache.reset()\nassert core_model.objective.expression == original_objective.expression\nassert not any(v.name.startswith(\"y_\") for v in core_model.solver.variables)\nassert not any(c.name.startswith(\"room_const_\") for c in core_model.solver.constraints)\ndef test_room_shlomi_2005(self, toy_model):\n+ if current_solver_name(toy_model) == \"glpk\":\n+ pytest.xfail(\"this test doesn't work with glpk\")\noriginal_objective = toy_model.objective\nreference = {\"b1\": 10, \"v1\": 10, \"v2\": 5, \"v3\": 0, \"v4\": 0, \"v5\": 0, \"v6\": 5, \"b2\": 5, \"b3\": 5}\nexpected = {'b1': 10.0, 'b2': 5.0, 'b3': 5.0, 'v1': 10.0,\n" } ]
Python
Apache License 2.0
biosustain/cameo
fix: problem cache reset still doesn't work on a single test
89,735
31.08.2017 14:48:47
-7,200
f6fc50853e852936f78cb7b4062b9410ec30048e
release: 0.11.2
[ { "change_type": "ADD", "old_path": null, "new_path": "release-notes/0.11.2", "diff": "+# Release notes for cameo 0.11.2\n+\n+## Fixes\n+\n+- Fix the ProblemCache used in moma, lmoma and room to use cobrapy's\n+ HistoryManager instead of the TimeMachine, resolving an issue which\n+ caused repeated application of these methods on the same model to\n+ fail.\n" } ]
Python
Apache License 2.0
biosustain/cameo
release: 0.11.2
89,736
18.10.2017 18:03:39
-7,200
31c80e4b9152c2dbd91ef15a4f6b5fe5e1d6136f
fix: failing to run evolution after pickling
[ { "change_type": "MODIFY", "old_path": "cameo/strain_design/pathway_prediction/pathway_predictor.py", "new_path": "cameo/strain_design/pathway_prediction/pathway_predictor.py", "diff": "@@ -132,14 +132,12 @@ class PathwayResult(Pathway, Result, StrainDesign):\nmodel.add_reactions(self.adapters)\nif exchanges:\nmodel.add_reactions(self.exchanges)\n-\n- self.product.lower_bound = 0\ntry:\nmodel.add_reaction(self.product)\nexcept Exception:\nlogger.warning(\"Exchange %s already in model\" % self.product.id)\npass\n-\n+ self.product.lower_bound = 0\nclass PathwayPredictions(StrainDesignMethodResult):\n__method_name__ = \"PathwayPredictor\"\n" }, { "change_type": "MODIFY", "old_path": "tests/test_strain_design_heuristics.py", "new_path": "tests/test_strain_design_heuristics.py", "diff": "@@ -925,14 +925,14 @@ class TestOptimizationResult:\nassert solutions.archive.count(individual) == 1, \"%s is unique in archive\" % individual\[email protected](scope=\"session\")\[email protected](scope=\"function\")\ndef reaction_ko_single_objective(model):\nobjective = biomass_product_coupled_yield(\n\"Biomass_Ecoli_core_N_lp_w_fsh_GAM_rp__Nmet2\", \"EX_ac_lp_e_rp_\", \"EX_glc_lp_e_rp_\")\nreturn ReactionKnockoutOptimization(model=model, simulation_method=fba, objective_function=objective)\[email protected](scope=\"session\")\[email protected](scope=\"function\")\ndef reaction_ko_multi_objective(model):\nobjective1 = biomass_product_coupled_yield(\n\"Biomass_Ecoli_core_N_lp_w_fsh_GAM_rp__Nmet2\", \"EX_ac_lp_e_rp_\", \"EX_glc_lp_e_rp_\")\n@@ -1025,14 +1025,14 @@ class TestReactionKnockoutOptimization:\nbenchmark(reaction_ko_multi_objective.run, max_evaluations=3000, pop_size=10, view=SequentialView(), seed=SEED)\[email protected](scope=\"session\")\[email protected](scope=\"function\")\ndef gene_ko_single_objective(model):\nobjective = biomass_product_coupled_yield(\n\"Biomass_Ecoli_core_N_lp_w_fsh_GAM_rp__Nmet2\", \"EX_ac_lp_e_rp_\", \"EX_glc_lp_e_rp_\")\nreturn GeneKnockoutOptimization(model=model, simulation_method=fba, objective_function=objective)\[email protected](scope=\"session\")\[email protected](scope=\"function\")\ndef gene_ko_multi_objective(model):\nobjective1 = biomass_product_coupled_yield(\n\"Biomass_Ecoli_core_N_lp_w_fsh_GAM_rp__Nmet2\", \"EX_ac_lp_e_rp_\", \"EX_glc_lp_e_rp_\")\n" } ]
Python
Apache License 2.0
biosustain/cameo
fix: failing to run evolution after pickling
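The fix above narrows the pytest fixture scope from `session` to `function`, so every test receives a freshly built optimization object instead of a shared one. A generic illustration of the difference, with made-up names:

```python
import pytest


@pytest.fixture(scope="function")  # one instance per test; scope="session" would share it
def run_state():
    return {"evaluations": 0}


def test_first_run(run_state):
    run_state["evaluations"] += 1
    assert run_state["evaluations"] == 1


def test_second_run(run_state):
    run_state["evaluations"] += 1
    assert run_state["evaluations"] == 1  # holds only because the fixture is re-created
```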
89,741
07.11.2017 15:17:30
-3,600
5ef403db97a572a6b283580ae70f1d5967a47457
doc: docs need jupyter installed because of ipython3 code cell tag
[ { "change_type": "MODIFY", "old_path": ".travis.yml", "new_path": ".travis.yml", "diff": "@@ -59,7 +59,7 @@ before_deploy:\n- pip install twine\n- python setup.py sdist bdist_wheel\n- if [[ $TRAVIS_PYTHON_VERSION == \"3.6\" ]]; then\n- pip install .[docs];\n+ pip install .[docs,jupyter];\ncd docs && make apidoc && make html && touch _build/html/.nojekyll;\nfi\n- cd $TRAVIS_BUILD_DIR\n" } ]
Python
Apache License 2.0
biosustain/cameo
doc: docs need jupyter installed because of ipython3 code cell tag
89,733
06.02.2018 12:45:48
-3,600
6826ce258412dde3229cc9e6b54db64cab0bef17
Fix: publication fixes * fix: update cobra version * fix: add docstring to the package To remove any confusion from the reviewers' heads.
[ { "change_type": "MODIFY", "old_path": "cameo/strain_design/deterministic/linear_programming.py", "new_path": "cameo/strain_design/deterministic/linear_programming.py", "diff": "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n+\"\"\"\n+This module contains algorithms based on linear programming techniques, including mixed-integer linear programming\n+\"\"\"\nfrom __future__ import print_function\n" }, { "change_type": "MODIFY", "old_path": "setup.py", "new_path": "setup.py", "diff": "@@ -28,7 +28,7 @@ requirements = ['numpy>=1.9.1',\n'blessings>=1.5.1',\n'pandas>=0.20.2',\n'ordered-set>=1.2',\n- 'cobra>=0.8.0',\n+ 'cobra>=0.11.1',\n'future>=0.15.2',\n'optlang>=1.2.1',\n'numexpr>=2.4',\n" } ]
Python
Apache License 2.0
biosustain/cameo
Fix: publication fixes (#198) * fix: update cobra version * fix: add docstring to the package To remove any confusion from the reviewers' heads.
89,736
04.04.2018 15:22:01
-7,200
96d408c6478bd1af282b7a25be61815405c5c868
hotfix: six string types
[ { "change_type": "MODIFY", "old_path": "cameo/api/designer.py", "new_path": "cameo/api/designer.py", "diff": "@@ -181,7 +181,7 @@ class Designer(object):\nnotice(\"Starting searching for compound %s\" % product)\ntry:\nproduct = self.__translate_product_to_universal_reactions_model_metabolite(product, database)\n- except Exception:\n+ except KeyError:\nraise KeyError(\"Product %s is not in the %s database\" % (product, database.id))\npathways = self.predict_pathways(product, hosts=hosts, database=database, aerobic=aerobic)\noptimization_reports = self.optimize_strains(pathways, view, aerobic=aerobic)\n@@ -354,7 +354,7 @@ class Designer(object):\ndef __translate_product_to_universal_reactions_model_metabolite(self, product, database):\nif isinstance(product, Metabolite):\nreturn product\n- elif isinstance(product, six.text_type) or isinstance(product, six.string_type):\n+ elif isinstance(product, six.text_type) or isinstance(product, six.string_types):\nsearch_result = products.search(product)\nsearch_result = search_result.loc[[i for i in search_result.index if i in database.metabolites]]\nif len(search_result) == 0:\n" } ]
Python
Apache License 2.0
biosustain/cameo
hotfix: six string types
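The one-word fix above corrects `six.string_type` (which does not exist) to `six.string_types`. The corrected check on its own, with a placeholder product name:

```python
import six

product = u"vanillin"  # placeholder compound name

# six.string_types is a tuple of the str-like types for the running Python version.
if isinstance(product, six.text_type) or isinstance(product, six.string_types):
    print("look the product up by name in the compound database")
```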
89,741
11.04.2018 10:16:17
-7,200
1146ba933f24ba7ed97613d316f135eceb16144f
doc: add citation information to README
[ { "change_type": "MODIFY", "old_path": "README.rst", "new_path": "README.rst", "diff": "@@ -19,6 +19,8 @@ compute promising strain designs.\nCurious? Head over to `try.cameo.bio <http://try.cameo.bio>`__\nand give it a try.\n+Please cite https://doi.org/10.1021/acssynbio.7b00423 if you've used cameo in a scientific publication.\n+\n.. summary-end\nInstallation\n@@ -120,7 +122,7 @@ Predict heterologous pathways for a desired chemical.\nContributions\n~~~~~~~~~~~~~\n-..are very welcome! Please read the `guideline <CONTRIBUTING.rst>`__ for instructions how to contribute.\n+... are very welcome! Please read the `guideline <CONTRIBUTING.rst>`__ for instructions how to contribute.\n.. url-marker\n" } ]
Python
Apache License 2.0
biosustain/cameo
doc: add citation information to README
89,736
21.06.2018 15:17:15
-7,200
94ce3c1177b85df3edb4065ef0ff4cca0c94d0db
fix: cplex installation
[ { "change_type": "MODIFY", "old_path": ".gitignore", "new_path": ".gitignore", "diff": "@@ -54,3 +54,6 @@ docs/_build\n/.DS_Store\n.cache\n/coverage.xml\n+\n+cplex/\n+cplex*.tar.gz\n\\ No newline at end of file\n" }, { "change_type": "MODIFY", "old_path": ".travis.yml", "new_path": ".travis.yml", "diff": "@@ -38,7 +38,7 @@ addons:\n- openbabel\nbefore_install:\n-- pip install --upgrade pip setuptools wheel tox\n+- pip install --upgrade pip setuptools wheel tox requests\n- bash ./.travis/install_cplex.sh\nscript:\n" }, { "change_type": "MODIFY", "old_path": ".travis/install_cplex.sh", "new_path": ".travis/install_cplex.sh", "diff": "#!/usr/bin/env bash\n-set -exu\n+set -eu\n# Build on master and tags.\n# $CPLEX_URL is defined in the Travis repository settings.\nif [[ (\"${TRAVIS_BRANCH}\" == \"master\" || -n \"${TRAVIS_TAG}\") \\\n&& (\"${TRAVIS_PYTHON_VERSION}\" == \"2.7\" || \"${TRAVIS_PYTHON_VERSION}\" == \"3.5\") ]];then\n- curl -L \"${CPLEX_URL}\" -o cplex.tar.gz\n- tar xzf cplex.tar.gz\n- cd \"cplex/python/${TRAVIS_PYTHON_VERSION}/x86-64_linux\"\n+ python ./.travis/load_dependency.py \"${GH_TOKEN}\" \"cplex-python${TRAVIS_PYTHON_VERSION}.tar.gz\"\n+ tar xzf cplex-python${TRAVIS_PYTHON_VERSION}.tar.gz\n+ cd \"cplex/python/3.4/x86-64_linux\"\npython setup.py install\ncd \"${TRAVIS_BUILD_DIR}\"\nfi\n" }, { "change_type": "ADD", "old_path": null, "new_path": ".travis/load_dependency.py", "diff": "+# -*- coding: utf-8 -*-\n+\n+from __future__ import absolute_import, print_function\n+\n+import os\n+import shutil\n+import sys\n+try:\n+ from urllib.parse import urljoin\n+except ImportError:\n+ from urlparse import urljoin\n+\n+import requests\n+\n+\n+def main(token, filepath):\n+ assert 'CPLEX_URL' in os.environ\n+ url = urljoin(os.environ['CPLEX_URL'], 'contents')\n+ sha_url = urljoin(os.environ['CPLEX_URL'], 'git/blobs/{}')\n+ headers = {\n+ 'Authorization': 'token {}'.format(token),\n+ 'Accept': 'application/vnd.github.v3.raw'\n+ }\n+ files_meta = requests.get(url, headers=headers).json()\n+ sha = [i for i in files_meta if i['path'] == filepath][0]['sha']\n+ response = requests.get(sha_url.format(sha), headers=headers, stream=True)\n+ with open(filepath, 'wb') as out_file:\n+ response.raw.decode_content = True\n+ shutil.copyfileobj(response.raw, out_file)\n+\n+if __name__ == \"__main__\":\n+ if len(sys.argv) != 3:\n+ print(\"Usage:\\n{} <GitHub Token> <Archive Name>\")\n+ sys.exit(2)\n+ main(*sys.argv[1:])\n" } ]
Python
Apache License 2.0
biosustain/cameo
fix: cplex installation
89,736
21.06.2018 15:35:01
-7,200
cf29bdfdb0eee5b42a8669cbc0115c996c83be0e
fix: cameo for python2.7
[ { "change_type": "MODIFY", "old_path": ".travis/install_cplex.sh", "new_path": ".travis/install_cplex.sh", "diff": "set -eu\n# Build on master and tags.\n-# $CPLEX_URL is defined in the Travis repository settings.\n+# $CPLEX_URL and $GH_TOKEN is defined in the Travis repository settings.\nif [[ (\"${TRAVIS_BRANCH}\" == \"master\" || -n \"${TRAVIS_TAG}\") \\\n&& (\"${TRAVIS_PYTHON_VERSION}\" == \"2.7\" || \"${TRAVIS_PYTHON_VERSION}\" == \"3.5\") ]];then\npython ./.travis/load_dependency.py \"${GH_TOKEN}\" \"cplex-python${TRAVIS_PYTHON_VERSION}.tar.gz\"\ntar xzf cplex-python${TRAVIS_PYTHON_VERSION}.tar.gz\n- cd \"cplex/python/3.4/x86-64_linux\"\n+ PYTHON_VERSION=${TRAVIS_PYTHON_VERSION}\n+ if [ \"${PYTHON_VERSION}\" == \"3.5\" ]\n+ then\n+ PYTHON_VERSION=\"3.4\"\n+ fi\n+ cd \"cplex/python/${PYTHON_VERSION}/x86-64_linux\"\npython setup.py install\ncd \"${TRAVIS_BUILD_DIR}\"\nfi\n" } ]
Python
Apache License 2.0
biosustain/cameo
fix: cameo for python2.7
89,736
21.06.2018 15:38:56
-7,200
451690bba787e724f74af2e591703d8c61b60b2e
fix: cameo for python2.7 - reset the variable
[ { "change_type": "MODIFY", "old_path": ".travis/install_cplex.sh", "new_path": ".travis/install_cplex.sh", "diff": "@@ -12,6 +12,8 @@ if [[ (\"${TRAVIS_BRANCH}\" == \"master\" || -n \"${TRAVIS_TAG}\") \\\nif [ \"${PYTHON_VERSION}\" == \"3.5\" ]\nthen\nPYTHON_VERSION=\"3.4\"\n+ else\n+ PYTHON_VERSION=\"2.6\"\nfi\ncd \"cplex/python/${PYTHON_VERSION}/x86-64_linux\"\npython setup.py install\n" } ]
Python
Apache License 2.0
biosustain/cameo
fix: cameo for python2.7 - reset the variable
89,736
21.06.2018 19:38:51
-7,200
5a1e6c2559410fcb07a1714482d2327309acb80e
fix: travis build, skip existing
[ { "change_type": "MODIFY", "old_path": ".travis.yml", "new_path": ".travis.yml", "diff": "@@ -58,6 +58,7 @@ deploy:\nsecure: nxjszXtUzQfnLlfg0cmFjd9gRekXDog6dkkN1rMc7CIWH2gZ1gAX4sNETVChnuSmu9egzhuIkviHstRrdyGoEZ7ZkHlTXmpVAs9AY96eMSejnwHHODhYno0jB7DjGcfejodLF+lo6lWz7S7mXXwML6YLM3xxG+AOjLHlHbPTaKc=\ndistributions: sdist bdist_wheel\nskip_cleanup: true\n+ skip_existing: true\non:\nbranch: master\ntags: true\n" } ]
Python
Apache License 2.0
biosustain/cameo
fix: travis build, skip existing
89,734
05.11.2018 10:06:27
-3,600
8f65273eb81c4e90987103881d036628f734b7e5
chore: extend gitignore with PyCharm template
[ { "change_type": "MODIFY", "old_path": ".gitignore", "new_path": ".gitignore", "diff": "@@ -54,3 +54,109 @@ docs/_build\n/.DS_Store\n.cache\n/coverage.xml\n+### Python template\n+# Byte-compiled / optimized / DLL files\n+__pycache__/\n+*.py[cod]\n+*$py.class\n+\n+# C extensions\n+*.so\n+\n+# Distribution / packaging\n+.Python\n+build/\n+develop-eggs/\n+dist/\n+downloads/\n+eggs/\n+.eggs/\n+lib/\n+lib64/\n+parts/\n+sdist/\n+var/\n+wheels/\n+*.egg-info/\n+.installed.cfg\n+*.egg\n+MANIFEST\n+\n+# PyInstaller\n+# Usually these files are written by a python script from a template\n+# before PyInstaller builds the exe, so as to inject date/other infos into it.\n+*.manifest\n+*.spec\n+\n+# Installer logs\n+pip-log.txt\n+pip-delete-this-directory.txt\n+\n+# Unit test / coverage reports\n+htmlcov/\n+.tox/\n+.coverage\n+.coverage.*\n+.cache\n+nosetests.xml\n+coverage.xml\n+*.cover\n+.hypothesis/\n+.pytest_cache/\n+\n+# Translations\n+*.mo\n+*.pot\n+\n+# Django stuff:\n+*.log\n+local_settings.py\n+db.sqlite3\n+\n+# Flask stuff:\n+instance/\n+.webassets-cache\n+\n+# Scrapy stuff:\n+.scrapy\n+\n+# Sphinx documentation\n+docs/_build/\n+\n+# PyBuilder\n+target/\n+\n+# Jupyter Notebook\n+.ipynb_checkpoints\n+\n+# pyenv\n+.python-version\n+\n+# celery beat schedule file\n+celerybeat-schedule\n+\n+# SageMath parsed files\n+*.sage.py\n+\n+# Environments\n+.env\n+.venv\n+env/\n+venv/\n+ENV/\n+env.bak/\n+venv.bak/\n+\n+# Spyder project settings\n+.spyderproject\n+.spyproject\n+\n+# Rope project settings\n+.ropeproject\n+\n+# mkdocs documentation\n+/site\n+\n+# mypy\n+.mypy_cache/\n+\n" } ]
Python
Apache License 2.0
biosustain/cameo
chore: extend gitignore with PyCharm template
89,734
05.11.2018 10:54:19
-3,600
0d08eaab4621dd768226c5ef5f4f23028fe728db
style: conform with flake8 settings
[ { "change_type": "MODIFY", "old_path": "cameo/core/pathway.py", "new_path": "cameo/core/pathway.py", "diff": "@@ -102,12 +102,14 @@ class Pathway(object):\nwith open(file_path, \"w\") as output_file:\nfor reaction in self.reactions:\nequation = _build_equation(reaction.metabolites)\n- output_file.write(reaction.id + sep +\n- equation + sep +\n- reaction.lower_bound + sep +\n- reaction.upper_bound + sep +\n- reaction.name + sep +\n- reaction.notes.get(\"pathway_note\", \"\") + \"\\n\")\n+ output_file.write(sep.join(map(str, [\n+ reaction.id,\n+ equation,\n+ reaction.lower_bound,\n+ reaction.upper_bound,\n+ reaction.name,\n+ reaction.notes.get(\"pathway_note\", \"\")\n+ ])) + \"\\n\")\ndef plug_model(self, model):\n\"\"\"\n" }, { "change_type": "MODIFY", "old_path": "cameo/core/target.py", "new_path": "cameo/core/target.py", "diff": "@@ -342,8 +342,11 @@ class GeneModulationTarget(FluxModulationTarget):\ndef __eq__(self, other):\nif isinstance(other, GeneModulationTarget):\n- return (self.id == other.id and self._value == other._value and\n+ return (\n+ (self.id == other.id) and (\n+ self._value == other._value) and (\nself._reference_value == other._reference_value)\n+ )\nelse:\nreturn False\n@@ -425,8 +428,11 @@ class ReactionModulationTarget(FluxModulationTarget):\ndef __eq__(self, other):\nif isinstance(other, ReactionModulationTarget):\n- return (self.id == other.id and self._value == other._value and\n+ return (\n+ (self.id == other.id) and (\n+ self._value == other._value) and (\nself._reference_value == other._reference_value)\n+ )\nelse:\nreturn False\n" }, { "change_type": "MODIFY", "old_path": "cameo/flux_analysis/analysis.py", "new_path": "cameo/flux_analysis/analysis.py", "diff": "@@ -172,9 +172,13 @@ def find_blocked_reactions(model):\nfor exchange in model.exchanges:\nexchange.bounds = (-9999, 9999)\nfva_solution = flux_variability_analysis(model)\n- return frozenset(reaction for reaction in model.reactions\n- if round(fva_solution.lower_bound(reaction.id), config.ndecimals) == 0 and\n- round(fva_solution.upper_bound(reaction.id), config.ndecimals) == 0)\n+ return frozenset(\n+ reaction for reaction in model.reactions\n+ if round(\n+ fva_solution.lower_bound(reaction.id),\n+ config.ndecimals) == 0 and round(\n+ fva_solution.upper_bound(reaction.id), config.ndecimals) == 0\n+ )\ndef flux_variability_analysis(model, reactions=None, fraction_of_optimum=0., pfba_factor=None,\n@@ -303,14 +307,16 @@ def phenotypic_phase_plane(model, variables, objective=None, source=None, points\nnice_variable_ids = [_nice_id(reaction) for reaction in variable_reactions]\nvariable_reactions_ids = [reaction.id for reaction in variable_reactions]\n- phase_plane = pandas.DataFrame(envelope,\n- columns=(variable_reactions_ids +\n- ['objective_lower_bound',\n+ phase_plane = pandas.DataFrame(\n+ envelope, columns=(variable_reactions_ids + [\n+ 'objective_lower_bound',\n'objective_upper_bound',\n'c_yield_lower_bound',\n'c_yield_upper_bound',\n'mass_yield_lower_bound',\n- 'mass_yield_upper_bound']))\n+ 'mass_yield_upper_bound'\n+ ])\n+ )\nif objective is None:\nobjective = model.objective\n" }, { "change_type": "MODIFY", "old_path": "cameo/strain_design/deterministic/flux_variability_based.py", "new_path": "cameo/strain_design/deterministic/flux_variability_based.py", "diff": "@@ -77,7 +77,7 @@ logger = logging.getLogger(__name__)\nclass DifferentialFVA(StrainDesignMethod):\n- \"\"\"Differential flux variability analysis.\n+ r\"\"\"Differential flux variability analysis.\nCompares flux ranges of a 
reference model to a set of models that\nhave been parameterized to lie on a grid of evenly spaced points in the\n@@ -354,22 +354,37 @@ class DifferentialFVA(StrainDesignMethod):\ndf['suddenly_essential'] = False\ndf['free_flux'] = False\n- df.loc[(df.lower_bound == 0) & (df.upper_bound == 0) &\n- (ref_upper_bound != 0) & (ref_lower_bound != 0), 'KO'] = True\n-\n- df.loc[((ref_upper_bound < 0) & (df.lower_bound > 0) |\n- ((ref_lower_bound > 0) & (df.upper_bound < 0))), 'flux_reversal'] = True\n-\n- df.loc[((df.lower_bound <= 0) & (df.lower_bound > 0)) |\n- ((ref_lower_bound >= 0) & (df.upper_bound <= 0)), 'suddenly_essential'] = True\n+ df.loc[\n+ (df.lower_bound == 0) & (\n+ df.upper_bound == 0) & (\n+ ref_upper_bound != 0) & (\n+ ref_lower_bound != 0),\n+ 'KO'\n+ ] = True\n+\n+ df.loc[\n+ ((ref_upper_bound < 0) & (df.lower_bound > 0) | (\n+ (ref_lower_bound > 0) & (df.upper_bound < 0))),\n+ 'flux_reversal'\n+ ] = True\n+\n+ df.loc[\n+ ((df.lower_bound <= 0) & (df.lower_bound > 0)) | (\n+ (ref_lower_bound >= 0) & (df.upper_bound <= 0)),\n+ 'suddenly_essential'\n+ ] = True\nis_reversible = numpy.asarray([\n- self.design_space_model.reactions.get_by_id(i).reversibility for i in df.index], dtype=bool)\n+ self.design_space_model.reactions.get_by_id(i).reversibility\n+ for i in df.index], dtype=bool)\nnot_reversible = numpy.logical_not(is_reversible)\n- df.loc[((df.lower_bound == -1000) & (df.upper_bound == 1000) & is_reversible) |\n- ((df.lower_bound == 0) & (df.upper_bound == 1000) & not_reversible) |\n- ((df.lower_bound == -1000) & (df.upper_bound == 0) & not_reversible), 'free_flux'] = True\n+ df.loc[\n+ ((df.lower_bound == -1000) & (df.upper_bound == 1000) & is_reversible) | (\n+ (df.lower_bound == 0) & (df.upper_bound == 1000) & not_reversible) | (\n+ (df.lower_bound == -1000) & (df.upper_bound == 0) & not_reversible),\n+ 'free_flux'\n+ ] = True\ndf['reaction'] = df.index\ndf['excluded'] = df['reaction'].isin(self.exclude)\n@@ -481,9 +496,9 @@ class DifferentialFVAResult(StrainDesignMethodResult):\nfor _, solution in solutions.groupby(('biomass', 'production')):\ntargets = []\nrelevant_targets = solution.loc[\n- (numpy.abs(solution['normalized_gaps']) > non_zero_flux_threshold) &\n- numpy.logical_not(solution['excluded']) &\n- numpy.logical_not(solution['free_flux'])\n+ (numpy.abs(solution['normalized_gaps']) > non_zero_flux_threshold) & (\n+ numpy.logical_not(solution['excluded'])) & (\n+ numpy.logical_not(solution['free_flux']))\n]\nfor rid, relevant_row in relevant_targets.iterrows():\nif relevant_row.KO:\n@@ -648,8 +663,9 @@ class DifferentialFVAResult(StrainDesignMethodResult):\ndata = self.nth_panel(index)\n# Find values above decimal precision and not NaN\ndata = data.loc[\n- ~numpy.isnan(data['normalized_gaps']) &\n- (data['normalized_gaps'].abs() > non_zero_flux_threshold)]\n+ ~numpy.isnan(data['normalized_gaps']) & (\n+ data['normalized_gaps'].abs() > non_zero_flux_threshold)\n+ ]\ndata.index = data['reaction']\nreaction_data = data['normalized_gaps'].copy()\n" }, { "change_type": "MODIFY", "old_path": "cameo/strain_design/deterministic/linear_programming.py", "new_path": "cameo/strain_design/deterministic/linear_programming.py", "diff": "@@ -119,10 +119,12 @@ class OptKnock(StrainDesignMethod):\ndef _remove_blocked_reactions(self):\nfva_res = flux_variability_analysis(self._model, fraction_of_optimum=0)\n+ # FIXME: Iterate over the index only (reaction identifiers).\nblocked = [\nself._model.reactions.get_by_id(reaction) for reaction, row in fva_res.data_frame.iterrows()\n- if 
(round(row[\"lower_bound\"], config.ndecimals) ==\n- round(row[\"upper_bound\"], config.ndecimals) == 0)]\n+ if (round(row[\"lower_bound\"], config.ndecimals) == round(\n+ row[\"upper_bound\"], config.ndecimals) == 0)\n+ ]\nself._model.remove_reactions(blocked)\ndef _reduce_to_nullspace(self, reactions):\n" }, { "change_type": "MODIFY", "old_path": "cameo/strain_design/heuristic/evolutionary/optimization.py", "new_path": "cameo/strain_design/heuristic/evolutionary/optimization.py", "diff": "@@ -679,8 +679,12 @@ class ReactionKnockoutOptimization(KnockoutOptimization):\nns = nullspace(create_stoichiometric_array(self.model))\ndead_ends = set(find_blocked_reactions_nullspace(self.model, ns=ns))\nexchanges = set(self.model.exchanges)\n- reactions = [r for r in self.model.reactions if r not in exchanges and r not in dead_ends and\n- r.id not in self.essential_reactions]\n+ reactions = [\n+ r for r in self.model.reactions\n+ if (r not in exchanges) and (\n+ r not in dead_ends) and (\n+ r.id not in self.essential_reactions)\n+ ]\ngroups = find_coupled_reactions_nullspace(self.model, ns=ns)\ngroups_keys = [set(group) for group in groups if any(r.id in reactions for r in group)]\n" } ]
Python
Apache License 2.0
biosustain/cameo
style: conform with flake8 settings
89,739
11.12.2018 03:11:27
28,800
e994d07ef085405927d9635924b05edc9b82528b
Update simulation.py to use solution.objective_value vs. legacy solution.f * Update simulation.py to use solution.objective_value vs. legacy solution.f Update simulation.py to use solution.objective_value vs. legacy solution.f * LNT: Lint lines too long.
[ { "change_type": "MODIFY", "old_path": "cameo/flux_analysis/simulation.py", "new_path": "cameo/flux_analysis/simulation.py", "diff": "@@ -178,7 +178,8 @@ def moma(model, reference=None, cache=None, reactions=None, *args, **kwargs):\nsolution = model.optimize(raise_error=True)\nif reactions is not None:\n- result = FluxDistributionResult({r: solution.get_primal_by_id(r) for r in reactions}, solution.f)\n+ result = FluxDistributionResult(\n+ {r: solution.get_primal_by_id(r) for r in reactions}, solution.objective_value)\nelse:\nresult = FluxDistributionResult.from_solution(solution)\nreturn result\n@@ -275,7 +276,8 @@ def lmoma(model, reference=None, cache=None, reactions=None, *args, **kwargs):\nsolution = model.optimize(raise_error=True)\nif reactions is not None:\n- result = FluxDistributionResult({r: solution.get_primal_by_id(r) for r in reactions}, solution.f)\n+ result = FluxDistributionResult(\n+ {r: solution.get_primal_by_id(r) for r in reactions}, solution.objective_value)\nelse:\nresult = FluxDistributionResult.from_solution(solution)\nreturn result\n@@ -369,7 +371,8 @@ def room(model, reference=None, cache=None, delta=0.03, epsilon=0.001, reactions\nsolution = model.optimize(raise_error=True)\nif reactions is not None:\n- result = FluxDistributionResult({r: solution.get_primal_by_id(r) for r in reactions}, solution.f)\n+ result = FluxDistributionResult(\n+ {r: solution.get_primal_by_id(r) for r in reactions}, solution.objective_value)\nelse:\nresult = FluxDistributionResult.from_solution(solution)\nreturn result\n@@ -392,7 +395,7 @@ class FluxDistributionResult(Result):\n@classmethod\ndef from_solution(cls, solution, *args, **kwargs):\n- return cls(solution.fluxes, solution.f, *args, **kwargs)\n+ return cls(solution.fluxes, solution.objective_value, *args, **kwargs)\ndef __init__(self, fluxes, objective_value, *args, **kwargs):\nsuper(FluxDistributionResult, self).__init__(*args, **kwargs)\n" } ]
Python
Apache License 2.0
biosustain/cameo
Update simulation.py to use solution.objective_value vs. legacy solution.f (#218) * Update simulation.py to use solution.objective_value vs. legacy solution.f Update simulation.py to use solution.objective_value vs. legacy solution.f * LNT: Lint lines too long.
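The replacements above move from the legacy `solution.f` accessor to `solution.objective_value`. A minimal usage sketch with a placeholder model identifier:

```python
from cameo import load_model

model = load_model("e_coli_core")  # placeholder model identifier
solution = model.optimize()

print(solution.objective_value)  # preferred over the legacy `solution.f`
print(solution.fluxes.head())    # fluxes come back as a pandas Series
```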
89,741
12.12.2018 13:10:41
-3,600
20ffb09370c465111428b32c13e936073e65b33d
fix: broken cplex installation on travis-ci
[ { "change_type": "MODIFY", "old_path": ".travis/install_cplex.sh", "new_path": ".travis/install_cplex.sh", "diff": "set -eu\n# Build on master and tags.\n-# $CPLEX_URL and $GH_TOKEN is defined in the Travis repository settings.\nif [[ (\"${TRAVIS_BRANCH}\" == \"master\" || -n \"${TRAVIS_TAG}\") \\\n&& (\"${TRAVIS_PYTHON_VERSION}\" == \"2.7\" || \"${TRAVIS_PYTHON_VERSION}\" == \"3.5\") ]];then\n- python ./.travis/load_dependency.py \"${GH_TOKEN}\" \"cplex-python${TRAVIS_PYTHON_VERSION}.tar.gz\"\n- tar xzf cplex-python${TRAVIS_PYTHON_VERSION}.tar.gz\n+ curl -O $CPLEX_SECRET # check lastpass\n+ tar xzf cplex-python3.6.tar.gz\nPYTHON_VERSION=${TRAVIS_PYTHON_VERSION}\nif [ \"${PYTHON_VERSION}\" == \"3.5\" ]\nthen\n" }, { "change_type": "DELETE", "old_path": ".travis/load_dependency.py", "new_path": null, "diff": "-# -*- coding: utf-8 -*-\n-\n-from __future__ import absolute_import, print_function\n-\n-import os\n-import shutil\n-import sys\n-try:\n- from urllib.parse import urljoin\n-except ImportError:\n- from urlparse import urljoin\n-\n-import requests\n-\n-\n-def main(token, filepath):\n- assert 'CPLEX_URL' in os.environ\n- url = urljoin(os.environ['CPLEX_URL'], 'contents')\n- sha_url = urljoin(os.environ['CPLEX_URL'], 'git/blobs/{}')\n- headers = {\n- 'Authorization': 'token {}'.format(token),\n- 'Accept': 'application/vnd.github.v3.raw'\n- }\n- files_meta = requests.get(url, headers=headers).json()\n- sha = [i for i in files_meta if i['path'] == filepath][0]['sha']\n- response = requests.get(sha_url.format(sha), headers=headers, stream=True)\n- with open(filepath, 'wb') as out_file:\n- response.raw.decode_content = True\n- shutil.copyfileobj(response.raw, out_file)\n-\n-if __name__ == \"__main__\":\n- if len(sys.argv) != 3:\n- print(\"Usage:\\n{} <GitHub Token> <Archive Name>\")\n- sys.exit(2)\n- main(*sys.argv[1:])\n" } ]
Python
Apache License 2.0
biosustain/cameo
fix: broken cplex installation on travis-ci
89,741
06.02.2019 16:16:22
-3,600
aa45d592f1b57e0e2629d9eb67ce960fdb9a81af
hotfix: fix integercut ID clash in PathwayPredictor
[ { "change_type": "MODIFY", "old_path": "cameo/strain_design/pathway_prediction/pathway_predictor.py", "new_path": "cameo/strain_design/pathway_prediction/pathway_predictor.py", "diff": "@@ -294,9 +294,10 @@ class PathwayPredictor(StrainDesignMethod):\nproduct_reaction = self.model.add_boundary(product, type='demand')\nproduct_reaction.lower_bound = min_production\n- counter = 1\n- while counter <= max_predictions:\n- logger.debug('Predicting pathway No. %d' % counter)\n+ pathway_counter = 1\n+ integer_cut_counter = 1\n+ while pathway_counter <= max_predictions:\n+ logger.debug('Predicting pathway No. %d' % pathway_counter)\ntry:\nself.model.slim_optimize(error_value=None)\nexcept OptimizationError as err:\n@@ -342,10 +343,10 @@ class PathwayPredictor(StrainDesignMethod):\npathway = PathwayResult(pathway, exchanges, adapters, product_reaction)\nif not silent:\n- util.display_pathway(pathway, counter)\n+ util.display_pathway(pathway, pathway_counter)\ninteger_cut = self.model.solver.interface.Constraint(Add(*vars_to_cut),\n- name=\"integer_cut_\" + str(counter),\n+ name=\"integer_cut_\" + str(integer_cut_counter),\nub=len(vars_to_cut) - 1)\nlogger.debug('Adding integer cut.')\ntm(\n@@ -368,7 +369,8 @@ class PathwayPredictor(StrainDesignMethod):\nif value > non_zero_flux_threshold:\npathways.append(pathway)\nlogger.info(\"Max flux: %.5G\", value)\n- counter += 1\n+ pathway_counter += 1\n+ integer_cut_counter += 1\nif callback is not None:\ncallback(pathway)\nelse:\n@@ -377,6 +379,7 @@ class PathwayPredictor(StrainDesignMethod):\n\"flux %.5G is below the requirement %.5G. \"\n\"Skipping.\", pathway, value,\nnon_zero_flux_threshold)\n+ integer_cut_counter += 1\nreturn PathwayPredictions(pathways)\n" } ]
Python
Apache License 2.0
biosustain/cameo
hotfix: fix integercut ID clash in PathwayPredictor
89,734
11.02.2019 12:54:38
-3,600
8489055323c2a22b35625461d9e263d22b8f50fd
style: improve logging statements
[ { "change_type": "MODIFY", "old_path": "cameo/strain_design/pathway_prediction/pathway_predictor.py", "new_path": "cameo/strain_design/pathway_prediction/pathway_predictor.py", "diff": "@@ -315,18 +315,19 @@ class PathwayPredictor(StrainDesignMethod):\nif len(vars_to_cut) == 0:\n# no pathway found:\nlogger.info(\"It seems %s is a native product in model %s. \"\n- \"Let's see if we can find better heterologous pathways.\" % (product, self.model))\n+ \"Let's see if we can find better heterologous pathways.\", product, self.model)\n# knockout adapter with native product\nfor adapter in self.adpater_reactions:\nif product in adapter.metabolites:\n- logger.info('Knocking out adapter reaction %s containing native product.' % adapter)\n+ logger.info('Knocking out adapter reaction %s '\n+ 'containing native product.', adapter)\nadapter.knock_out()\ncontinue\npathway = [self.model.reactions.get_by_id(y_var.name[2:]) for y_var in vars_to_cut]\npathway_metabolites = set([m for pathway_reaction in pathway for m in pathway_reaction.metabolites])\n- logger.info('Pathway predicted: %s' % '\\t'.join(\n+ logger.info('Pathway predicted: %s', '\\t'.join(\n[r.build_reaction_string(use_metabolite_names=True) for r in pathway]))\npathway_metabolites.add(product)\n@@ -362,8 +363,8 @@ class PathwayPredictor(StrainDesignMethod):\nexcept OptimizationError as err:\nlogger.error(err)\nlogger.error(\n- \"Addition of pathway {} made the model unsolvable. \"\n- \"Skipping pathway.\".format(pathway))\n+ \"Addition of pathway %r made the model unsolvable. \"\n+ \"Skipping pathway.\", pathway)\ncontinue\nelse:\nif value > non_zero_flux_threshold:\n" } ]
Python
Apache License 2.0
biosustain/cameo
style: improve logging statements
89,734
11.02.2019 13:01:06
-3,600
f2f5e7581d79f4cd536934363f21f7cea0f84c40
refactor: increment integer cut in finally
[ { "change_type": "MODIFY", "old_path": "cameo/strain_design/pathway_prediction/pathway_predictor.py", "new_path": "cameo/strain_design/pathway_prediction/pathway_predictor.py", "diff": "@@ -371,7 +371,6 @@ class PathwayPredictor(StrainDesignMethod):\npathways.append(pathway)\nlogger.info(\"Max flux: %.5G\", production_flux)\npathway_counter += 1\n- integer_cut_counter += 1\nif callback is not None:\ncallback(pathway)\nelse:\n@@ -380,6 +379,7 @@ class PathwayPredictor(StrainDesignMethod):\n\"flux %.5G is below the requirement %.5G. \"\n\"Skipping.\", pathway, production_flux,\nnon_zero_flux_threshold)\n+ finally:\ninteger_cut_counter += 1\nreturn PathwayPredictions(pathways)\n" } ]
Python
Apache License 2.0
biosustain/cameo
refactor: increment integer cut in finally
89,734
19.06.2019 19:51:42
-7,200
2f4e9107890f031e828701a6c563c0f1626a877f
chore: add missing XML header
[ { "change_type": "MODIFY", "old_path": "tests/data/EcoliCore.xml", "new_path": "tests/data/EcoliCore.xml", "diff": "+<?xml version='1.0' encoding='utf-8' standalone='no'?>\n<sbml xmlns:fbc=\"http://www.sbml.org/sbml/level3/version1/fbc/version2\" level=\"3\" sboTerm=\"SBO:0000624\" version=\"1\" xmlns=\"http://www.sbml.org/sbml/level3/version1/core\" fbc:required=\"false\">\n<model fbc:strict=\"true\" id=\"MODELID_3473243\">\n<listOfUnitDefinitions>\n" }, { "change_type": "MODIFY", "old_path": "tests/data/ecoli_core_model.xml", "new_path": "tests/data/ecoli_core_model.xml", "diff": "+<?xml version='1.0' encoding='utf-8' standalone='no'?>\n<sbml xmlns:fbc=\"http://www.sbml.org/sbml/level3/version1/fbc/version2\" level=\"3\" sboTerm=\"SBO:0000624\" version=\"1\" xmlns=\"http://www.sbml.org/sbml/level3/version1/core\" fbc:required=\"false\">\n<model fbc:strict=\"true\" id=\"Ecoli_core_model\">\n<listOfUnitDefinitions>\n" }, { "change_type": "MODIFY", "old_path": "tests/data/iAF1260.xml", "new_path": "tests/data/iAF1260.xml", "diff": "+<?xml version='1.0' encoding='utf-8' standalone='no'?>\n<sbml xmlns:fbc=\"http://www.sbml.org/sbml/level3/version1/fbc/version2\" level=\"3\" sboTerm=\"SBO:0000624\" version=\"1\" xmlns=\"http://www.sbml.org/sbml/level3/version1/core\" fbc:required=\"false\">\n<model fbc:strict=\"true\" id=\"MODELID_3307911\">\n<listOfUnitDefinitions>\n" }, { "change_type": "MODIFY", "old_path": "tests/data/iJO1366.xml", "new_path": "tests/data/iJO1366.xml", "diff": "+<?xml version='1.0' encoding='utf-8' standalone='no'?>\n<sbml xmlns:fbc=\"http://www.sbml.org/sbml/level3/version1/fbc/version2\" level=\"3\" sboTerm=\"SBO:0000624\" version=\"1\" xmlns=\"http://www.sbml.org/sbml/level3/version1/core\" fbc:required=\"false\">\n<model fbc:strict=\"true\" id=\"iJO1366\">\n<listOfUnitDefinitions>\n" }, { "change_type": "MODIFY", "old_path": "tests/data/iMM904.xml", "new_path": "tests/data/iMM904.xml", "diff": "+<?xml version='1.0' encoding='utf-8' standalone='no'?>\n<sbml xmlns:fbc=\"http://www.sbml.org/sbml/level3/version1/fbc/version2\" level=\"3\" sboTerm=\"SBO:0000624\" version=\"1\" xmlns=\"http://www.sbml.org/sbml/level3/version1/core\" fbc:required=\"false\">\n<model fbc:strict=\"true\" id=\"iMM904\">\n<listOfUnitDefinitions>\n" }, { "change_type": "MODIFY", "old_path": "tests/data/iTO977_v1.01_cobra.xml", "new_path": "tests/data/iTO977_v1.01_cobra.xml", "diff": "+<?xml version='1.0' encoding='utf-8' standalone='no'?>\n<sbml xmlns:fbc=\"http://www.sbml.org/sbml/level3/version1/fbc/version2\" level=\"3\" sboTerm=\"SBO:0000624\" version=\"1\" xmlns=\"http://www.sbml.org/sbml/level3/version1/core\" fbc:required=\"false\">\n<model fbc:strict=\"true\" id=\"draft6\">\n<listOfUnitDefinitions>\n" }, { "change_type": "MODIFY", "old_path": "tests/data/toy_model_Papin_2003.xml", "new_path": "tests/data/toy_model_Papin_2003.xml", "diff": "+<?xml version='1.0' encoding='utf-8' standalone='no'?>\n<sbml xmlns:fbc=\"http://www.sbml.org/sbml/level3/version1/fbc/version2\" level=\"3\" sboTerm=\"SBO:0000624\" version=\"1\" xmlns=\"http://www.sbml.org/sbml/level3/version1/core\" fbc:required=\"false\">\n<model fbc:strict=\"true\" id=\"\">\n<listOfUnitDefinitions>\n" } ]
Python
Apache License 2.0
biosustain/cameo
chore: add missing XML header
89,734
19.06.2019 19:52:06
-7,200
9b3f7db636255fc3583078ca9c0d51128ae669ef
chore: update model pickles
[ { "change_type": "MODIFY", "old_path": "tests/data/iJO1366.pickle", "new_path": "tests/data/iJO1366.pickle", "diff": "Binary files a/tests/data/iJO1366.pickle and b/tests/data/iJO1366.pickle differ\n" }, { "change_type": "MODIFY", "old_path": "tests/data/salmonella.pickle", "new_path": "tests/data/salmonella.pickle", "diff": "Binary files a/tests/data/salmonella.pickle and b/tests/data/salmonella.pickle differ\n" }, { "change_type": "MODIFY", "old_path": "tests/data/update-pickles.py", "new_path": "tests/data/update-pickles.py", "diff": "import pickle\n-import cobra.test\n-import optlang\n+import cobra\n+from cobra.test import create_test_model\n+from optlang import glpk_interface\nfrom cameo import load_model\n-ijo = load_model('iJO1366.xml', solver_interface=optlang.glpk_interface)\n+\n+config = cobra.Configuration()\n+config.solver = \"glpk\"\n+\n+\n+ijo = load_model('iJO1366.xml', glpk_interface)\nwith open('iJO1366.pickle', 'wb') as out:\npickle.dump(ijo, out, protocol=2)\n-salmonella = cobra.test.create_test_model('salmonella')\n-salmonella.solver = 'glpk'\n+salmonella = create_test_model('salmonella')\nwith open('salmonella.pickle', 'wb') as out:\npickle.dump(salmonella, out, protocol=2)\n" } ]
Python
Apache License 2.0
biosustain/cameo
chore: update model pickles
89,734
19.06.2019 20:33:10
-7,200
95802bb8f491fbab4ddedf74ca76591481d90cfc
chore: upload codecov report
[ { "change_type": "MODIFY", "old_path": ".travis.yml", "new_path": ".travis.yml", "diff": "@@ -44,6 +44,9 @@ before_install:\nscript:\n- travis_wait tox\n+after_success:\n+ - bash <(curl -s https://codecov.io/bash)\n+\nbefore_deploy:\n- if [[ $TRAVIS_PYTHON_VERSION == \"3.6\" ]]; then\npip install .[docs,jupyter];\n" } ]
Python
Apache License 2.0
biosustain/cameo
chore: upload codecov report
89,734
20.06.2019 09:41:23
-7,200
2bf6bf80af9ccca81278523a2c5f163859a29869
chore: update coverage
[ { "change_type": "DELETE", "old_path": ".coveragerc", "new_path": null, "diff": "-# .coveragerc to control coverage.py\n-[run]\n-branch = True\n-source = cameo\n-omit =\n- cameo/strain_design/heuristic/plotters.py\n- cameo/strain_design/heuristic/multiprocess/plotters.py\n- cameo/visualization/*\n- cameo/_version.py\n- cameo/stuff/*\n- cameo/ui/*\n-\n-[report]\n-# Regexes for lines to exclude from consideration\n-exclude_lines =\n- # Have to re-enable the standard pragma\n- pragma: no cover\n-\n- # Don't complain about missing debug-only code:\n- def __repr__\n- if self\\.debug\n-\n- # Don't complain if tests don't hit defensive assertion code:\n- raise AssertionError\n- raise NotImplementedError\n-\n- # Don't complain if non-runnable code isn't run:\n- if 0:\n- if __name__ == .__main__.:\n-\n- # Don't test visualization stuff\n- def display_on_map.*\n- def plot.*\n-\n-\n-ignore_errors = True\n-\n-[html]\n-directory = tests/coverage\n\\ No newline at end of file\n" }, { "change_type": "MODIFY", "old_path": "tox.ini", "new_path": "tox.ini", "diff": "@@ -46,3 +46,54 @@ known_third_party =\ncobra\nfuture\nsix\n+\n+[coverage:paths]\n+source =\n+ cameo\n+ */site-packages/cameo\n+\n+[coverage:run]\n+parallel = True\n+branch = True\n+omit =\n+ cameo/strain_design/heuristic/plotters.py\n+ cameo/strain_design/heuristic/multiprocess/plotters.py\n+ cameo/visualization/*\n+ cameo/_version.py\n+ cameo/stuff/*\n+ cameo/ui/*\n+\n+[coverage:report]\n+# Regexes for lines to exclude from consideration\n+exclude_lines =\n+# Have to re-enable the standard pragma\n+ pragma: no cover\n+\n+# Don't complain about missing debug-only code:\n+ def __repr__\n+ if self\\.debug\n+\n+# Don't complain if tests don't hit defensive assertion code:\n+ raise AssertionError\n+ raise NotImplementedError\n+\n+# Don't complain if non-runnable code isn't run:\n+ if 0:\n+ if __name__ == .__main__.:\n+\n+# Don't test visualization stuff\n+ def display_on_map.*\n+ def plot.*\n+\n+\n+ignore_errors = True\n+omit =\n+ cameo/strain_design/heuristic/plotters.py\n+ cameo/strain_design/heuristic/multiprocess/plotters.py\n+ cameo/visualization/*\n+ cameo/_version.py\n+ cameo/stuff/*\n+ cameo/ui/*\n+\n+[coverage:html]\n+directory = tests/coverage\n" } ]
Python
Apache License 2.0
biosustain/cameo
chore: update coverage
89,734
21.06.2019 22:20:33
-7,200
c20c841960623d9c407c2d9ef484c7c028976b9c
docs: fix some failing docstrings
[ { "change_type": "MODIFY", "old_path": "cameo/api/designer.py", "new_path": "cameo/api/designer.py", "diff": "@@ -191,8 +191,8 @@ class Designer(object):\n\"\"\"\nOptimize targets for the identified pathways. The optimization will only run if the pathway can be optimized.\n- Arguments\n- ---------\n+ Parameters\n+ ----------\npathways : dict\nA dictionary with information of pathways to optimize ([Host, Model] -> PredictedPathways).\nview : object\n" }, { "change_type": "MODIFY", "old_path": "cameo/core/pathway.py", "new_path": "cameo/core/pathway.py", "diff": "@@ -49,15 +49,16 @@ class Pathway(object):\n@classmethod\ndef from_file(cls, file_path, sep=\"\\t\"):\n\"\"\"\n- Read a pathway from a file. The file format is:\n+ Read a pathway from a file.\n+\n+ The file format is:\nreaction_id<sep>equation<sep>lower_limit<sep>upper_limit<sep>name<sep>comments\\n\nThe equation is defined by:\ncoefficient * substrate_name#substrate_id + ... <=> coefficient * product_name#product_id\n- Arguments\n- ---------\n-\n+ Parameters\n+ ----------\nfile_path: str\nThe path to the file containing the pathway\nsep: str\n@@ -66,6 +67,7 @@ class Pathway(object):\nReturns\n-------\nPathway\n+\n\"\"\"\nreactions = []\nmetabolites = {}\n@@ -86,18 +88,17 @@ class Pathway(object):\n\"\"\"\nWrites the pathway to a file.\n- Arguments\n- ---------\n-\n+ Parameters\n+ ----------\nfile_path: str\nThe path to the file where the pathway will be written\nsep: str\nThe separator between elements in the file (default: \"\\t\")\n-\nSee Also\n--------\nPathway.from_file\n+\n\"\"\"\nwith open(file_path, \"w\") as output_file:\nfor reaction in self.reactions:\n@@ -113,17 +114,17 @@ class Pathway(object):\ndef plug_model(self, model):\n\"\"\"\n- Plugs the pathway to a model.\n+ Plug the pathway to a model.\nMetabolites are matched in the model by id. For metabolites with no ID in the model, an exchange reaction\nis added to the model\n- Arguments\n- ---------\n+ Parameters\n+ ----------\nmodel: cobra.Model\nThe model to plug in the pathway\n- \"\"\"\n+ \"\"\"\nmodel.add_reactions(self.reactions)\nmetabolites = set(reduce(lambda x, y: x + y, [list(r.metabolites.keys()) for r in self.reactions], []))\nexchanges = [model.add_boundary(m) for m in metabolites if len(m.reactions) == 1]\n" }, { "change_type": "MODIFY", "old_path": "cameo/core/target.py", "new_path": "cameo/core/target.py", "diff": "@@ -47,14 +47,7 @@ class Target(object):\nself.accession_db = accession_db\ndef apply(self, model):\n- \"\"\"\n- Applies the modification on the target, depending on the target type.\n-\n- See Also\n- --------\n- Subclass implementations\n-\n- \"\"\"\n+ \"\"\"Apply the modification to the target, depending on the type.\"\"\"\nraise NotImplementedError\ndef __eq__(self, other):\n@@ -95,7 +88,10 @@ class FluxModulationTarget(Target):\nSee Also\n--------\n- ReactionModulationTarget, ReactionKnockoutTarget, GeneModulationTarget and GeneKnockoutTarget for implementation.\n+ ReactionModulationTarget\n+ ReactionKnockoutTarget\n+ GeneModulationTarget\n+ GeneKnockoutTarget\n\"\"\"\n__gnomic_feature_type__ = 'flux'\n" }, { "change_type": "MODIFY", "old_path": "cameo/flux_analysis/simulation.py", "new_path": "cameo/flux_analysis/simulation.py", "diff": "@@ -461,8 +461,8 @@ class FluxDistributionResult(Result):\np[2] p[2] .. p[1] .. p[0] .. p[1] .. 
p[2] p[2]\n- Arguments\n- ---------\n+ Parameters\n+ ----------\npalette: Palette, list, str\nA Palette from palettable of equivalent, a list of colors (size 3) or a palette name\n@@ -470,6 +470,7 @@ class FluxDistributionResult(Result):\n-------\ntuple\n((-2*std, color), (-std, color) (0 color) (std, color) (2*std, color))\n+\n\"\"\"\nif isinstance(palette, six.string_types):\npalette = mapper.map_palette(palette, 3)\n" }, { "change_type": "MODIFY", "old_path": "cameo/flux_analysis/util.py", "new_path": "cameo/flux_analysis/util.py", "diff": "@@ -30,7 +30,7 @@ logger = logging.getLogger(__name__)\ndef remove_infeasible_cycles(model, fluxes, fix=()):\n\"\"\"Remove thermodynamically infeasible cycles from a flux distribution.\n- Arguments\n+ Parameters\n---------\nmodel : cobra.Model\nThe model that generated the flux distribution.\n" }, { "change_type": "MODIFY", "old_path": "cameo/network_analysis/util.py", "new_path": "cameo/network_analysis/util.py", "diff": "@@ -20,8 +20,8 @@ __all__ = ['distance_based_on_molecular_formula']\ndef distance_based_on_molecular_formula(metabolite1, metabolite2, normalize=True):\n\"\"\"Calculate the distance of two metabolites bases on the molecular formula\n- Arguments\n- ---------\n+ Parameters\n+ ----------\nmetabolite1 : Metabolite\nThe first metabolite.\nmetabolite2 : Metabolite\n@@ -33,6 +33,7 @@ def distance_based_on_molecular_formula(metabolite1, metabolite2, normalize=True\n-------\nfloat\nThe distance between metabolite1 and metabolite2.\n+\n\"\"\"\nif len(metabolite1.elements) == 0 or len(metabolite2.elements) == 0:\nraise ValueError('Cannot calculate distance between metabolites %s and %s' % (metabolite1, metabolite2))\n" }, { "change_type": "MODIFY", "old_path": "cameo/strain_design/deterministic/flux_variability_based.py", "new_path": "cameo/strain_design/deterministic/flux_variability_based.py", "diff": "@@ -622,8 +622,8 @@ class DifferentialFVAResult(StrainDesignMethodResult):\np[0] p[0] .. p[1] .. p[2] .. p[3] .. 
p[-1] p[-1]\n- Arguments\n- ---------\n+ Parameters\n+ ----------\npalette: Palette, list, str\nA Palette from palettable of equivalent, a list of colors (size 5) or a palette name\n" }, { "change_type": "MODIFY", "old_path": "cameo/strain_design/heuristic/evolutionary/evaluators.py", "new_path": "cameo/strain_design/heuristic/evolutionary/evaluators.py", "diff": "@@ -99,8 +99,8 @@ class KnockoutEvaluator(TargetEvaluator):\n\"\"\"\nEvaluates a single individual.\n- Arguments\n- ---------\n+ Parameters\n+ ----------\nindividual: set\nThe encoded representation of a single individual.\n" }, { "change_type": "MODIFY", "old_path": "cameo/strain_design/heuristic/evolutionary/multiprocess/migrators.py", "new_path": "cameo/strain_design/heuristic/evolutionary/multiprocess/migrators.py", "diff": "@@ -60,8 +60,8 @@ class MultiprocessingMigrator(object):\n- *evaluate_migrant* -- should new migrants be evaluated before\nadding them to the population (default False)\n- Arguments\n- ---------\n+ Parameters\n+ ----------\nmax_migrants: int\nNumber of migrants in the queue at the same time.\nconnection_kwargs: keyword arguments:\n" }, { "change_type": "MODIFY", "old_path": "cameo/strain_design/heuristic/evolutionary/optimization.py", "new_path": "cameo/strain_design/heuristic/evolutionary/optimization.py", "diff": "@@ -868,12 +868,13 @@ class CofactorSwapOptimization(TargetOptimization):\nFind reactions that have one set of the cofactors targeted for swapping and are mass balanced and updates the\n`candidate_reactions` attribute\n- Arguments\n- ---------\n+ Parameters\n+ ----------\nmodel: cobra.Model\nA model with reactions to search on.\nswaps: tuple\nPair of cofactors to swap.\n+\n\"\"\"\ndef swap_search(mets):\n" }, { "change_type": "MODIFY", "old_path": "cameo/strain_design/heuristic/evolutionary/processing.py", "new_path": "cameo/strain_design/heuristic/evolutionary/processing.py", "diff": "@@ -21,8 +21,8 @@ def process_reaction_knockout_solution(model, solution, simulation_method, simul\nbiomass, target, substrate, objective_function):\n\"\"\"\n- Arguments\n- ---------\n+ Parameters\n+ ----------\nmodel: cobra.Model\nA constraint-based model\n@@ -40,11 +40,12 @@ def process_reaction_knockout_solution(model, solution, simulation_method, simul\nThe main carbon source uptake rate\nobjective_function: cameo.strain_design.heuristic.evolutionary.objective_functions.ObjectiveFunction\nThe objective function used for evaluation.\n+\nReturns\n-------\n-\nlist\nA list with: reactions, size, fva_min, fva_max, target flux, biomass flux, yield, fitness\n+\n\"\"\"\nwith model:\n@@ -66,9 +67,8 @@ def process_gene_knockout_solution(model, solution, simulation_method, simulatio\nbiomass, target, substrate, objective_function):\n\"\"\"\n- Arguments\n- ---------\n-\n+ Parameters\n+ ----------\nmodel: cobra.Model\nA constraint-based model\nsolution: tuple\n@@ -88,9 +88,9 @@ def process_gene_knockout_solution(model, solution, simulation_method, simulatio\nReturns\n-------\n-\nlist\nA list with: reactions, genes, size, fva_min, fva_max, target flux, biomass flux, yield, fitness\n+\n\"\"\"\nwith model:\n@@ -115,9 +115,8 @@ def process_reaction_swap_solution(model, solution, simulation_method, simulatio\ntarget, substrate, objective_function, swap_pairs):\n\"\"\"\n- Arguments\n- ---------\n-\n+ Parameters\n+ ----------\nmodel: cobra.Model\nA constraint-based model\nsolution: tuple - (reactions, knockouts)\n@@ -139,10 +138,10 @@ def process_reaction_swap_solution(model, solution, simulation_method, 
simulatio\nReturns\n-------\n-\nlist\nA list with: reactions, size, fva_min, fva_max, target flux, biomass flux, yield, fitness,\n[fitness, [fitness]]\n+\n\"\"\"\nwith model:\n" }, { "change_type": "MODIFY", "old_path": "cameo/strain_design/pathway_prediction/util.py", "new_path": "cameo/strain_design/pathway_prediction/util.py", "diff": "@@ -30,8 +30,8 @@ __all__ = ['create_adapter_reactions', 'display_pathway']\ndef create_adapter_reactions(original_metabolites, universal_model, mapping, compartment_regexp):\n\"\"\"Create adapter reactions that connect host and universal model.\n- Arguments\n- ---------\n+ Parameters\n+ ----------\noriginal_metabolites : list\nList of host metabolites.\nuniversal_model : cobra.Model\n@@ -43,8 +43,9 @@ def create_adapter_reactions(original_metabolites, universal_model, mapping, com\nReturns\n-------\n- reactions : list\n+ list\nThe list of adapter reactions.\n+\n\"\"\"\nadapter_reactions = []\nmetabolites_in_main_compartment = [m for m in original_metabolites if compartment_regexp.match(m.compartment)]\n" }, { "change_type": "MODIFY", "old_path": "cameo/util.py", "new_path": "cameo/util.py", "diff": "@@ -179,14 +179,15 @@ class ProblemCache(object):\n\"args\" in the first example must match args on the second example.\n- Arguments\n- ---------\n+ Parameters\n+ ----------\nconstraint_id : str\nThe identifier of the constraint\ncreate : function\nA function that creates an optlang.interface.Constraint\nupdate : function\na function that updates an optlang.interface.Constraint\n+\n\"\"\"\ncontext = self._contexts[-1]\n@@ -208,14 +209,15 @@ class ProblemCache(object):\n\"args\" in the first example must match args on the second example.\n- Arguments\n- ---------\n+ Parameters\n+ ----------\nvariable_id : str\nThe identifier of the constraint\ncreate : function\nA function that creates an optlang.interface.Variable\nupdate : function\na function that updates an optlang.interface.Variable\n+\n\"\"\"\ncontext = self._contexts[-1]\nif variable_id not in self.variables:\n" }, { "change_type": "MODIFY", "old_path": "cameo/visualization/plotting/abstract.py", "new_path": "cameo/visualization/plotting/abstract.py", "diff": "@@ -83,8 +83,8 @@ class AbstractPlotter(object):\n10 0 MT 0.4\n2 0 MT 0.5\n- Arguments\n- ---------\n+ Parameters\n+ ----------\ndataframe: pandas.DataFrame\nThe data to plot.\n@@ -110,7 +110,11 @@ class AbstractPlotter(object):\nReturns\n-------\na plottable object\n- see AbstractPlotter.display\n+\n+ See Also\n+ --------\n+ AbstractPlotter.display\n+\n\"\"\"\nraise NotImplementedError\n@@ -127,8 +131,8 @@ class AbstractPlotter(object):\n10 0 WT 0.4 0.5\n2 0 WT 0.5 0.3\n- Arguments\n- ---------\n+ Parameters\n+ ----------\ndataframe: pandas.DataFrame\nThe data to plot.\n@@ -156,7 +160,11 @@ class AbstractPlotter(object):\nReturns\n-------\na plottable object\n- see AbstractPlotter.display\n+\n+ See Also\n+ --------\n+ AbstractPlotter.display\n+\n\"\"\"\nraise NotImplementedError\n@@ -172,8 +180,8 @@ class AbstractPlotter(object):\n10 -10 MT PFK1\n2 0 MT ADT\n- Arguments\n- ---------\n+ Parameters\n+ ----------\ndataframe: pandas.DataFrame\nThe data to plot.\ngrid: AbstractGrid\n@@ -199,6 +207,7 @@ class AbstractPlotter(object):\nSee Also\n--------\nAbstractPlotter.display\n+\n\"\"\"\nraise NotImplementedError\n@@ -214,8 +223,8 @@ class AbstractPlotter(object):\n3 -10 0 1\n4 0 1 -10\n- Arguments\n- ---------\n+ Parameters\n+ ----------\ndataframe: pandas.DataFrame\nThe data to plot.\ngrid: AbstractGrid\n@@ -241,6 +250,7 @@ class 
AbstractPlotter(object):\nSee Also\n--------\nAbstractPlotter.display\n+\n\"\"\"\nraise NotImplementedError\n@@ -257,8 +267,8 @@ class AbstractPlotter(object):\nC 1 0.032\nD 0 0.0\n- Arguments\n- ---------\n+ Parameters\n+ ----------\ndataframe: pandas.DataFrame\nThe data to plot.\ngrid: AbstractGrid\n@@ -284,6 +294,7 @@ class AbstractPlotter(object):\nSee Also\n--------\nAbstractPlotter.display\n+\n\"\"\"\nraise NotImplementedError\n" }, { "change_type": "MODIFY", "old_path": "docs/conf.py", "new_path": "docs/conf.py", "diff": "@@ -174,7 +174,7 @@ html_theme_path = [alabaster.get_path()]\n# html_title = None\n# A shorter title for the navigation bar. Default is the same as html_title.\n-html_short_title = None\n+html_short_title = ''\n# The name of an image file (relative to this directory) to place at the top\n# of the sidebar.\n" }, { "change_type": "MODIFY", "old_path": "scripts/parse_metanetx.py", "new_path": "scripts/parse_metanetx.py", "diff": "@@ -72,8 +72,8 @@ def parse_reaction(formula, irrev_arrow='-->', rev_arrow='<=>'):\ndef construct_universal_model(list_of_db_prefixes, reac_xref, reac_prop, chem_prop):\n\"\"\"\"Construct a universal model based on metanetx.\n- Arguments\n- ---------\n+ Parameters\n+ ----------\nlist_of_db_prefixes : list\nA list of database prefixes, e.g., ['bigg', 'rhea']\nreac_xref : pandas.DataFrame\n@@ -82,6 +82,7 @@ def construct_universal_model(list_of_db_prefixes, reac_xref, reac_prop, chem_pr\nA dataframe of http://www.metanetx.org/cgi-bin/mnxget/mnxref/reac_prop.tsv\nchem_prop : pandas.DataFrame\nA dataframe of http://www.metanetx.org/cgi-bin/mnxget/mnxref/chem_prop.tsv\n+\n\"\"\"\n# Select which reactions to include in universal reaction database\n" } ]
Python
Apache License 2.0
biosustain/cameo
docs: fix some failing docstrings