| repo | pull_number | instance_id | issue_numbers | base_commit | patch | test_patch | problem_statement | hints_text | created_at |
|---|---|---|---|---|---|---|---|---|---|
networkx/networkx | 2,534 | networkx__networkx-2534 | [
"1540"
] | 5aff55f0491d4e90e7df8a77a79cf1c020e21e25 | diff --git a/networkx/algorithms/minors.py b/networkx/algorithms/minors.py
--- a/networkx/algorithms/minors.py
+++ b/networkx/algorithms/minors.py
@@ -309,13 +309,17 @@ def contracted_nodes(G, u, v, self_loops=True):
Contracting two nonadjacent nodes of the cycle graph on four nodes `C_4`
yields the path graph (ignoring parallel edges)::
- >>> import networkx as nx
>>> G = nx.cycle_graph(4)
>>> M = nx.contracted_nodes(G, 1, 3)
>>> P3 = nx.path_graph(3)
>>> nx.is_isomorphic(M, P3)
True
+ >>> G = nx.MultiGraph(P3)
+ >>> M = nx.contracted_nodes(G, 0, 2)
+ >>> M.edges
+ MultiEdgeView([(0, 1, 0), (0, 1, 1)])
+
See also
--------
contracted_edge
| diff --git a/networkx/algorithms/tests/test_minors.py b/networkx/algorithms/tests/test_minors.py
--- a/networkx/algorithms/tests/test_minors.py
+++ b/networkx/algorithms/tests/test_minors.py
@@ -222,6 +222,15 @@ def test_directed_node_contraction(self):
expected.add_edge(0, 0)
assert_true(nx.is_isomorphic(actual, expected))
+ def test_create_multigraph(self):
+ """Tests that using a MultiGraph creates multiple edges."""
+ G = nx.path_graph(3, create_using=nx.MultiGraph())
+ actual = nx.contracted_nodes(G, 0, 2)
+ expected = nx.MultiDiGraph()
+ expected.add_edge(0, 1)
+ expected.add_edge(0, 1)
+ assert_edges_equal(actual.edges, expected.edges)
+
def test_node_attributes(self):
"""Tests that node contraction preserves node attributes."""
G = nx.cycle_graph(4)
@@ -234,7 +243,8 @@ def test_node_attributes(self):
expected = nx.complete_graph(3)
expected = nx.relabel_nodes(expected, {1: 2, 2: 3})
expected.add_edge(0, 0)
- expected.node[0].update(dict(foo='bar', contraction={1: dict(baz='xyzzy')}))
+ cdict = {1: {'baz': 'xyzzy'}}
+ expected.node[0].update(dict(foo='bar', contraction=cdict))
assert_true(nx.is_isomorphic(actual, expected))
assert_equal(actual.node, expected.node)
| Node contraction documentation should provide an example with multigraphs
Node and edge contraction are really multigraph operations. For example, identifying the two endpoints of the path graph on three vertices yields a graph with two nodes joined by two parallel edges. This should appear at least as an example in the documentation, and probably as a unit test too.
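A minimal sketch of the behavior described, assuming the v2.0 `contracted_nodes` API (variable names here are illustrative):

```
import networkx as nx

# Path graph on three vertices: 0 - 1 - 2, built as a multigraph.
G = nx.MultiGraph(nx.path_graph(3))

# Identifying the two endpoints 0 and 2 leaves two parallel edges to node 1.
M = nx.contracted_nodes(G, 0, 2)
print(list(M.edges(keys=True)))  # [(0, 1, 0), (0, 1, 1)]
```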
| 2017-07-22T00:30:01 |
|
networkx/networkx | 2,535 | networkx__networkx-2535 | [
"2531"
] | 0b7603cd047301a3691836607c421fa1a97c7cd1 | diff --git a/networkx/convert_matrix.py b/networkx/convert_matrix.py
--- a/networkx/convert_matrix.py
+++ b/networkx/convert_matrix.py
@@ -1126,3 +1126,7 @@ def setup_module(module):
import scipy
except:
raise SkipTest("SciPy not available")
+ try:
+ import pandas
+ except:
+ raise SkipTest("Pandas not available")
| diff --git a/networkx/drawing/tests/test_agraph.py b/networkx/drawing/tests/test_agraph.py
--- a/networkx/drawing/tests/test_agraph.py
+++ b/networkx/drawing/tests/test_agraph.py
@@ -3,6 +3,8 @@
import tempfile
from nose import SkipTest
from nose.tools import assert_true, assert_equal
+from networkx.testing import assert_edges_equal, assert_nodes_equal
+
import networkx as nx
@@ -23,8 +25,8 @@ def build_graph(self, G):
return G
def assert_equal(self, G1, G2):
- assert_equal(sorted(G1.nodes()), sorted(G2.nodes()))
- assert_equal(sorted(G1.edges()), sorted(G2.edges()))
+ assert_nodes_equal(G1.nodes(), G2.nodes())
+ assert_edges_equal(G1.edges(), G2.edges())
assert_equal(G1.graph['metal'], G2.graph['metal'])
def agraph_checks(self, G):
| missing commits
@hagberg, @dschult I just noticed that there is no ``doc/release/api_1.11.rst``, but there is one here:
https://github.com/networkx/networkx/tree/v1.11
in ``doc/source/reference/api_1.11.rst``. It appears this file was never committed on the master branch.
The v1.11 branch is "59 commits ahead, 1066 commits behind master." So it looks like there may be a number of missing commits on master. For example, this is also missing:
https://github.com/networkx/networkx/commit/5665c71f3a9aec0325078de2de43537aee03386d
As this shows:
```
$ git lg networkx/drawing/tests/test_agraph.py
* d8ada85 - Make graph attributes work both to/from with agraph (#2507) (11 days ago) [Dan Schult]
* 7bfb768 - Improve drawing test scripts (typos, newlines, methods) (1 year, 5 months ago) [Michael-E-Rose]
* f5031dd - Adjust imports in drawing layouts with graphviz (1 year, 6 months ago) [Dan Schult]
* 9922ec7 - doc, formatting, and whitespace cleanup (5 years ago) [Aric Hagberg]
* 47565b1 - Handle name in translation between pygraphviz (AGraph) and networkx. Fixes #734 (5 years ago) [Aric Hagberg]
* 3665bc1 - Update tests (6 years ago) [Aric Hagberg]
* d41d15f - More imports cleanup and exceptions fixed. (6 years ago) [Loïc Séguin-C.]
* baceff1 - Added tests for multigraph conversion to/from agraph. Changed from_agraph() so that the tests pass. (8 years ago) [dschult]
* ca6df32 - Convert drawing tests to functional tests and use SkipTest if optional packages are not available. (8 years ago) [aric]
```
I suspect that this was unintentional and that I should go through the missing commits and either cherry-pick the appropriate ones or make a new commit when cherry-picking doesn't work. I just wanted to check that I am correct before going through the effort. I will open a PR so you can review the commits I grab before merging to master.
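Something like this, assuming the branch names above (the hash is the missing commit linked earlier):

```
# List commits reachable from v1.11 but not from master -- the candidates.
git log --oneline master..v1.11

# Port an individual missing commit onto master.
git checkout master
git cherry-pick 5665c71f3a9aec0325078de2de43537aee03386d
```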
| 2017-07-22T02:18:58 |
|
networkx/networkx | 2,538 | networkx__networkx-2538 | [
"2537"
] | f5db0f9635b6e07701eb5a15a285aebdd0908b37 | diff --git a/networkx/drawing/layout.py b/networkx/drawing/layout.py
--- a/networkx/drawing/layout.py
+++ b/networkx/drawing/layout.py
@@ -108,9 +108,10 @@ def circular_layout(G, scale=1, center=None, dim=2):
Parameters
----------
G : NetworkX graph or list of nodes
+ A position will be assigned to every node in G.
- scale : float
- Scale factor for positions
+ scale : number (default: 1)
+ Scale factor for positions.
center : array-like or None
Coordinate pair around which to center the layout.
@@ -164,18 +165,19 @@ def shell_layout(G, nlist=None, scale=1, center=None, dim=2):
Parameters
----------
G : NetworkX graph or list of nodes
+ A position will be assigned to every node in G.
nlist : list of lists
List of node lists for each shell.
- scale : float
- Scale factor for positions
+ scale : number (default: 1)
+ Scale factor for positions.
center : array-like or None
Coordinate pair around which to center the layout.
dim : int
- Dimension of layout, currently only dim=2 is supported
+ Dimension of layout, currently only dim=2 is supported.
Returns
-------
@@ -232,7 +234,7 @@ def fruchterman_reingold_layout(G, k=None,
fixed=None,
iterations=50,
weight='weight',
- scale=1.0,
+ scale=1,
center=None,
dim=2):
"""Position nodes using Fruchterman-Reingold force-directed algorithm.
@@ -240,6 +242,7 @@ def fruchterman_reingold_layout(G, k=None,
Parameters
----------
G : NetworkX graph or list of nodes
+ A position will be assigned to every node in G.
k : float (default=None)
Optimal distance between nodes. If None the distance is set to
@@ -261,15 +264,15 @@ def fruchterman_reingold_layout(G, k=None,
The edge attribute that holds the numerical value used for
the edge weight. If None, then all edge weights are 1.
- scale : float (default=1.0)
- Scale factor for positions. The nodes are positioned
- in a box of size [0, scale] x [0, scale].
+ scale : number (default: 1)
+ Scale factor for positions. Not used unless `fixed is None`.
center : array-like or None
Coordinate pair around which to center the layout.
+ Not used unless `fixed is None`.
dim : int
- Dimension of layout
+ Dimension of layout.
Returns
-------
@@ -477,7 +480,7 @@ def _sparse_fruchterman_reingold(A, k=None, pos=None, fixed=None,
def kamada_kawai_layout(G, dist=None,
pos=None,
weight='weight',
- scale=1.0,
+ scale=1,
center=None,
dim=2):
"""Position nodes using Kamada-Kawai path-length cost-function.
@@ -485,6 +488,7 @@ def kamada_kawai_layout(G, dist=None,
Parameters
----------
G : NetworkX graph or list of nodes
+ A position will be assigned to every node in G.
dist : float (default=None)
A two-level dictionary of optimal distances between nodes,
@@ -500,11 +504,14 @@ def kamada_kawai_layout(G, dist=None,
The edge attribute that holds the numerical value used for
the edge weight. If None, then all edge weights are 1.
+ scale : number (default: 1)
+ Scale factor for positions.
+
center : array-like or None
Coordinate pair around which to center the layout.
dim : int
- Dimension of layout
+ Dimension of layout.
Returns
-------
@@ -601,19 +608,20 @@ def spectral_layout(G, weight='weight', scale=1, center=None, dim=2):
Parameters
----------
G : NetworkX graph or list of nodes
+ A position will be assigned to every node in G.
weight : string or None optional (default='weight')
The edge attribute that holds the numerical value used for
the edge weight. If None, then all edge weights are 1.
- scale : float
- Scale factor for positions
+ scale : number (default: 1)
+ Scale factor for positions.
center : array-like or None
Coordinate pair around which to center the layout.
dim : int
- Dimension of layout
+ Dimension of layout.
Returns
-------
@@ -760,7 +768,7 @@ def rescale_layout(pos, scale=1):
lim = 0 # max coordinate for all axes
for i in range(pos.shape[1]):
pos[:, i] -= pos[:, i].mean()
- lim = max(pos[:, i].max(), lim)
+ lim = max(abs(pos[:, i]).max(), lim)
# rescale to (-scale, scale) in all directions, preserves aspect
if lim > 0:
for i in range(pos.shape[1]):
| diff --git a/networkx/drawing/tests/test_layout.py b/networkx/drawing/tests/test_layout.py
--- a/networkx/drawing/tests/test_layout.py
+++ b/networkx/drawing/tests/test_layout.py
@@ -51,6 +51,43 @@ def test_smoke_string(self):
if self.scipy is not None:
vpos = nx.kamada_kawai_layout(G)
+ def check_scale_and_center(self, pos, scale, center):
+ center = numpy.array(center)
+ low = center - scale
+ hi = center + scale
+ vpos = numpy.array(list(pos.values()))
+ length = vpos.max(0) - vpos.min(0)
+ assert (length <= 2 * scale).all()
+ assert (vpos >= low).all()
+ assert (vpos <= hi).all()
+
+ def test_scale_and_center_arg(self):
+ sc = self.check_scale_and_center
+ c = (4, 5)
+ G = nx.complete_graph(9)
+ G.add_node(9)
+ sc(nx.random_layout(G, center=c), scale=0.5, center=(4.5, 5.5))
+ # rest can have 2*scale length: [-scale, scale]
+ sc(nx.spring_layout(G, scale=2, center=c), scale=2, center=c)
+ sc(nx.spectral_layout(G, scale=2, center=c), scale=2, center=c)
+ sc(nx.circular_layout(G, scale=2, center=c), scale=2, center=c)
+ sc(nx.shell_layout(G, scale=2, center=c), scale=2, center=c)
+ if self.scipy is not None:
+ sc(nx.kamada_kawai_layout(G, scale=2, center=c), scale=2, center=c)
+
+ def test_default_scale_and_center(self):
+ sc = self.check_scale_and_center
+ c = (0, 0)
+ G = nx.complete_graph(9)
+ G.add_node(9)
+ sc(nx.random_layout(G), scale=0.5, center=(0.5, 0.5))
+ sc(nx.spring_layout(G), scale=1, center=c)
+ sc(nx.spectral_layout(G), scale=1, center=c)
+ sc(nx.circular_layout(G), scale=1, center=c)
+ sc(nx.shell_layout(G), scale=1, center=c)
+ if self.scipy is not None:
+ sc(nx.kamada_kawai_layout(G), scale=1, center=c)
+
def test_adjacency_interface_numpy(self):
A = nx.to_numpy_matrix(self.Gs)
pos = nx.drawing.layout._fruchterman_reingold(A)
| Revisit center and scaling choices for layout.py in v2.0
We made some choices for v1.11 that were supposed to be temporary fixes, to be improved in v2.0. Let's make sure those got straightened out.
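A minimal sketch of the contract the new tests enforce: every position should land in the box `[center - scale, center + scale]` along each axis (graph and values here are illustrative):

```
import networkx as nx
import numpy as np

G = nx.cycle_graph(6)
pos = nx.circular_layout(G, scale=2, center=(4, 5))

coords = np.array(list(pos.values()))
center = np.array((4, 5))
# Every coordinate stays within [center - scale, center + scale].
assert (coords >= center - 2).all()
assert (coords <= center + 2).all()
```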
| 2017-07-24T03:13:12 |
|
networkx/networkx | 2,558 | networkx__networkx-2558 | [
"2464"
] | fbbfe49a5906ce9ecce4f0d002fc58e89769fd7a | diff --git a/networkx/convert.py b/networkx/convert.py
--- a/networkx/convert.py
+++ b/networkx/convert.py
@@ -55,12 +55,12 @@ def to_networkx_graph(data, create_using=None, multigraph_input=False):
The preferred way to call this is automatically
from the class constructor
- >>> d={0: {1: {'weight':1}}} # dict-of-dicts single edge (0,1)
- >>> G=nx.Graph(d)
+ >>> d = {0: {1: {'weight':1}}} # dict-of-dicts single edge (0,1)
+ >>> G = nx.Graph(d)
instead of the equivalent
- >>> G=nx.from_dict_of_dicts(d)
+ >>> G = nx.from_dict_of_dicts(d)
Parameters
----------
@@ -120,11 +120,9 @@ def to_networkx_graph(data, create_using=None, multigraph_input=False):
raise TypeError("Input is not known type.")
# list or generator of edges
- if (isinstance(data, list) or
- isinstance(data, tuple) or
- hasattr(data, '_adjdict') or
- hasattr(data, 'next') or
- hasattr(data, '__next__')):
+
+ if (isinstance(data, (list, tuple)) or
+ any(hasattr(data, attr) for attr in ['_adjdict', 'next', '__next__'])):
try:
return from_edgelist(data, create_using=create_using)
except:
@@ -134,11 +132,18 @@ def to_networkx_graph(data, create_using=None, multigraph_input=False):
try:
import pandas as pd
if isinstance(data, pd.DataFrame):
- try:
- return nx.from_pandas_dataframe(data, edge_attr=True, create_using=create_using)
- except:
- msg = "Input is not a correct Pandas DataFrame."
- raise nx.NetworkXError(msg)
+ if data.shape[0] == data.shape[1]:
+ try:
+ return nx.from_pandas_adjacency(data, create_using=create_using)
+ except:
+ msg = "Input is not a correct Pandas DataFrame adjacency matrix."
+ raise nx.NetworkXError(msg)
+ else:
+ try:
+ return nx.from_pandas_edgelist(data, edge_attr=True, create_using=create_using)
+ except:
+ msg = "Input is not a correct Pandas DataFrame edge-list."
+ raise nx.NetworkXError(msg)
except ImportError:
msg = 'pandas not found, skipping conversion test.'
warnings.warn(msg, ImportWarning)
@@ -146,7 +151,7 @@ def to_networkx_graph(data, create_using=None, multigraph_input=False):
# numpy matrix or ndarray
try:
import numpy
- if isinstance(data, numpy.matrix) or isinstance(data, numpy.ndarray):
+ if isinstance(data, (numpy.matrix, numpy.ndarray)):
try:
return nx.from_numpy_matrix(data, create_using=create_using)
except:
@@ -172,8 +177,6 @@ def to_networkx_graph(data, create_using=None, multigraph_input=False):
raise nx.NetworkXError(
"Input is not a known data type for conversion.")
- return
-
def to_dict_of_lists(G, nodelist=None):
"""Return adjacency representation of graph as a dictionary of lists.
@@ -213,11 +216,12 @@ def from_dict_of_lists(d, create_using=None):
Examples
--------
- >>> dol= {0:[1]} # single edge (0,1)
- >>> G=nx.from_dict_of_lists(dol)
+ >>> dol = {0: [1]} # single edge (0,1)
+ >>> G = nx.from_dict_of_lists(dol)
or
- >>> G=nx.Graph(dol) # use Graph constructor
+
+ >>> G = nx.Graph(dol) # use Graph constructor
"""
G = _prep_create_using(create_using)
@@ -296,11 +300,12 @@ def from_dict_of_dicts(d, create_using=None, multigraph_input=False):
Examples
--------
- >>> dod= {0: {1:{'weight':1}}} # single edge (0,1)
- >>> G=nx.from_dict_of_dicts(dod)
+ >>> dod = {0: {1: {'weight': 1}}} # single edge (0,1)
+ >>> G = nx.from_dict_of_dicts(dod)
or
- >>> G=nx.Graph(dod) # use Graph constructor
+
+ >>> G = nx.Graph(dod) # use Graph constructor
"""
G = _prep_create_using(create_using)
@@ -314,13 +319,13 @@ def from_dict_of_dicts(d, create_using=None, multigraph_input=False):
for u, nbrs in d.items()
for v, datadict in nbrs.items()
for key, data in datadict.items()
- )
+ )
else:
G.add_edges_from((u, v, data)
for u, nbrs in d.items()
for v, datadict in nbrs.items()
for key, data in datadict.items()
- )
+ )
else: # Undirected
if G.is_multigraph():
seen = set() # don't add both directions of undirected graph
@@ -329,7 +334,7 @@ def from_dict_of_dicts(d, create_using=None, multigraph_input=False):
if (u, v) not in seen:
G.add_edges_from((u, v, key, data)
for key, data in datadict.items()
- )
+ )
seen.add((v, u))
else:
seen = set() # don't add both directions of undirected graph
@@ -373,8 +378,7 @@ def to_edgelist(G, nodelist=None):
"""
if nodelist is None:
return G.edges(data=True)
- else:
- return G.edges(nodelist, data=True)
+ return G.edges(nodelist, data=True)
def from_edgelist(edgelist, create_using=None):
@@ -390,11 +394,12 @@ def from_edgelist(edgelist, create_using=None):
Examples
--------
- >>> edgelist= [(0,1)] # single edge (0,1)
- >>> G=nx.from_edgelist(edgelist)
+ >>> edgelist = [(0, 1)] # single edge (0,1)
+ >>> G = nx.from_edgelist(edgelist)
or
- >>> G=nx.Graph(edgelist) # use Graph constructor
+
+ >>> G = nx.Graph(edgelist) # use Graph constructor
"""
G = _prep_create_using(create_using)
diff --git a/networkx/convert_matrix.py b/networkx/convert_matrix.py
--- a/networkx/convert_matrix.py
+++ b/networkx/convert_matrix.py
@@ -1,3 +1,9 @@
+# Copyright (C) 2006-2017 by
+# Aric Hagberg <[email protected]>
+# Dan Schult <[email protected]>
+# Pieter Swart <[email protected]>
+# All rights reserved.
+# BSD license.
"""Functions to convert NetworkX graphs to and from numpy/scipy matrices.
The preferred way of converting data to a NetworkX graph is through the
@@ -8,34 +14,29 @@
--------
Create a 10 node random graph from a numpy matrix
->>> import numpy
->>> a = numpy.reshape(numpy.random.random_integers(0,1,size=100),(10,10))
+>>> import numpy as np
+>>> a = np.reshape(np.random.random_integers(0, 1, size=100), (10, 10))
>>> D = nx.DiGraph(a)
or equivalently
->>> D = nx.to_networkx_graph(a,create_using=nx.DiGraph())
+>>> D = nx.to_networkx_graph(a, create_using=nx.DiGraph())
See Also
--------
nx_agraph, nx_pydot
"""
-# Copyright (C) 2006-2014 by
-# Aric Hagberg <[email protected]>
-# Dan Schult <[email protected]>
-# Pieter Swart <[email protected]>
-# All rights reserved.
-# BSD license.
-import warnings
+
+import warnings as _warnings
import itertools
import networkx as nx
from networkx.convert import _prep_create_using
from networkx.utils import not_implemented_for
-__author__ = """\n""".join(['Aric Hagberg <[email protected]>',
- 'Pieter Swart ([email protected])',
- 'Dan Schult([email protected])'])
+
__all__ = ['from_numpy_matrix', 'to_numpy_matrix',
'from_pandas_dataframe', 'to_pandas_dataframe',
+ 'from_pandas_adjacency', 'to_pandas_adjacency',
+ 'from_pandas_edgelist', 'to_pandas_edgelist',
'to_numpy_recarray',
'from_scipy_sparse_matrix', 'to_scipy_sparse_matrix',
'from_numpy_array', 'to_numpy_array']
@@ -43,6 +44,15 @@
def to_pandas_dataframe(G, nodelist=None, dtype=None, order=None,
multigraph_weight=sum, weight='weight', nonedge=0.0):
+ """DEPRECATED: Replaced by ``to_pandas_adjacency``."""
+ msg = "to_pandas_dataframe is deprecated and will be removed" \
+ "in 2.1, use to_pandas_adjacency instead."
+ _warnings.warn(msg, DeprecationWarning)
+ return to_pandas_adjacency(G, nodelist, dtype, order, multigraph_weight, weight, nonedge)
+
+
+def to_pandas_adjacency(G, nodelist=None, dtype=None, order=None,
+ multigraph_weight=sum, weight='weight', nonedge=0.0):
"""Return the graph adjacency matrix as a Pandas DataFrame.
Parameters
@@ -93,8 +103,8 @@ def to_pandas_dataframe(G, nodelist=None, dtype=None, order=None,
>>> import pandas as pd
>>> import numpy as np
- >>> G = nx.Graph([(1,1)])
- >>> df = nx.to_pandas_dataframe(G, dtype=int)
+ >>> G = nx.Graph([(1, 1)])
+ >>> df = nx.to_pandas_adjacency(G, dtype=int)
>>> df
1
1 1
@@ -106,19 +116,20 @@ def to_pandas_dataframe(G, nodelist=None, dtype=None, order=None,
Examples
--------
>>> G = nx.MultiDiGraph()
- >>> G.add_edge(0,1,weight=2)
+ >>> G.add_edge(0, 1, weight=2)
0
- >>> G.add_edge(1,0)
+ >>> G.add_edge(1, 0)
0
- >>> G.add_edge(2,2,weight=3)
+ >>> G.add_edge(2, 2, weight=3)
0
- >>> G.add_edge(2,2)
+ >>> G.add_edge(2, 2)
1
- >>> nx.to_pandas_dataframe(G, nodelist=[0,1,2], dtype=int)
+ >>> nx.to_pandas_adjacency(G, nodelist=[0, 1, 2], dtype=int)
0 1 2
0 0 2 0
1 1 0 0
2 0 0 4
+
"""
import pandas as pd
M = to_numpy_matrix(G, nodelist=nodelist, dtype=dtype, order=order,
@@ -129,8 +140,125 @@ def to_pandas_dataframe(G, nodelist=None, dtype=None, order=None,
return pd.DataFrame(data=M, index=nodelist, columns=nodelist)
+def from_pandas_adjacency(df, create_using=None):
+ r"""Return a graph from Pandas DataFrame.
+
+ The Pandas DataFrame is interpreted as an adjacency matrix for the graph.
+
+ Parameters
+ ----------
+ df : Pandas DataFrame
+ An adjacency matrix representation of a graph
+
+ create_using : NetworkX graph
+ Use specified graph for result. The default is Graph()
+
+ Notes
+ -----
+ If the numpy matrix has a single data type for each matrix entry it
+ will be converted to an appropriate Python data type.
+
+ If the numpy matrix has a user-specified compound data type the names
+ of the data fields will be used as attribute keys in the resulting
+ NetworkX graph.
+
+ See Also
+ --------
+ to_pandas_adjacency
+
+ Examples
+ --------
+ Simple integer weights on edges:
+
+ >>> import pandas as pd
+ >>> df = pd.DataFrame([[1, 1], [2, 1]])
+ >>> df
+ 0 1
+ 0 1 1
+ 1 2 1
+ >>> G = nx.from_pandas_adjacency(df)
+ >>> print(nx.info(G))
+ Name:
+ Type: Graph
+ Number of nodes: 2
+ Number of edges: 3
+ Average degree: 3.0000
+
+ """
+
+ A = df.values
+ G = from_numpy_matrix(A, create_using)
+ try:
+ df = df[df.index]
+ except:
+ raise nx.NetworkXError("Columns must match Indices.", "%s not in columns" %
+ list(set(df.index).difference(set(df.columns))))
+
+ nx.relabel.relabel_nodes(G, dict(enumerate(df.columns)), copy=False)
+ return G
+
+
+def to_pandas_edgelist(G, source='source', target='target', nodelist=None,
+ dtype=None, order=None):
+ """Return the graph edge list as a Pandas DataFrame.
+
+ Parameters
+ ----------
+ G : graph
+ The NetworkX graph used to construct the Pandas DataFrame.
+
+ source : str or int, optional
+ A valid column name (string or integer) for the source nodes (for the
+ directed case).
+
+ target : str or int, optional
+ A valid column name (string or integer) for the target nodes (for the
+ directed case).
+
+ nodelist : list, optional
+ Use only nodes specified in nodelist
+
+ Returns
+ -------
+ df : Pandas DataFrame
+ Graph edge list
+
+ Examples
+ --------
+ >>> G = nx.Graph([('A', 'B', {'cost': 1, 'weight': 7}),
+ ... ('C', 'E', {'cost': 9, 'weight': 10})])
+ >>> df = nx.to_pandas_edgelist(G, nodelist=['A', 'C'])
+ >>> df
+ cost source target weight
+ 0 1 A B 7
+ 1 9 C E 10
+
+ """
+ import pandas as pd
+ if nodelist is None:
+ edgelist = G.edges(data=True)
+ else:
+ edgelist = G.edges(nodelist, data=True)
+ source_nodes = [s for s, t, d in edgelist]
+ target_nodes = [t for s, t, d in edgelist]
+ all_keys = set().union(*(d.keys() for s, t, d in edgelist))
+ edge_attr = {k: [d.get(k, float("nan")) for s, t, d in edgelist] for k in all_keys}
+ edgelistdict = {source: source_nodes, target: target_nodes}
+ edgelistdict.update(edge_attr)
+ return pd.DataFrame(edgelistdict)
+
+
def from_pandas_dataframe(df, source='source', target='target', edge_attr=None,
create_using=None):
+ """DEPRECATED: Replaced by ``from_pandas_edgelist``."""
+ msg = "from_pandas_dataframe is deprecated and will be removed" \
+ "in 2.1, use from_pandas_edgelist instead."
+ _warnings.warn(msg, DeprecationWarning)
+ return from_pandas_edgelist(df, source, target, edge_attr, create_using)
+
+
+def from_pandas_edgelist(df, source='source', target='target', edge_attr=None,
+ create_using=None):
"""Return a graph from Pandas DataFrame containing an edge list.
The Pandas DataFrame should contain at least two columns of node names and
@@ -166,7 +294,7 @@ def from_pandas_dataframe(df, source='source', target='target', edge_attr=None,
See Also
--------
- to_pandas_dataframe
+ to_pandas_edgelist
Examples
--------
@@ -186,7 +314,7 @@ def from_pandas_dataframe(df, source='source', target='target', edge_attr=None,
0 4 7 A D
1 7 1 B A
2 10 9 C E
- >>> G=nx.from_pandas_dataframe(df, 0, 'b', ['weight', 'cost'])
+ >>> G = nx.from_pandas_edgelist(df, 0, 'b', ['weight', 'cost'])
>>> G['E']['C']['weight']
10
>>> G['E']['C']['cost']
@@ -195,9 +323,10 @@ def from_pandas_dataframe(df, source='source', target='target', edge_attr=None,
... 'target': [2, 2, 3],
... 'weight': [3, 4, 5],
... 'color': ['red', 'blue', 'blue']})
- >>> G = nx.from_pandas_dataframe(edges, edge_attr=True)
+ >>> G = nx.from_pandas_edgelist(edges, edge_attr=True)
>>> G[0][2]['color']
'red'
+
"""
g = _prep_create_using(create_using)
@@ -317,18 +446,19 @@ def to_numpy_matrix(G, nodelist=None, dtype=None, order=None,
Examples
--------
>>> G = nx.MultiDiGraph()
- >>> G.add_edge(0,1,weight=2)
+ >>> G.add_edge(0, 1, weight=2)
0
- >>> G.add_edge(1,0)
+ >>> G.add_edge(1, 0)
0
- >>> G.add_edge(2,2,weight=3)
+ >>> G.add_edge(2, 2, weight=3)
0
- >>> G.add_edge(2,2)
+ >>> G.add_edge(2, 2)
1
- >>> nx.to_numpy_matrix(G, nodelist=[0,1,2])
+ >>> nx.to_numpy_matrix(G, nodelist=[0, 1, 2])
matrix([[ 0., 2., 0.],
[ 1., 0., 0.],
[ 0., 0., 4.]])
+
"""
import numpy as np
@@ -385,17 +515,16 @@ def from_numpy_matrix(A, parallel_edges=False, create_using=None):
--------
Simple integer weights on edges:
- >>> import numpy
- >>> A=numpy.matrix([[1, 1], [2, 1]])
- >>> G=nx.from_numpy_matrix(A)
+ >>> import numpy as np
+ >>> A = np.matrix([[1, 1], [2, 1]])
+ >>> G = nx.from_numpy_matrix(A)
If `create_using` is a multigraph and the matrix has only integer entries,
the entries will be interpreted as weighted edges joining the vertices
(without creating parallel edges):
- >>> import numpy
- >>> A = numpy.matrix([[1, 1], [1, 2]])
- >>> G = nx.from_numpy_matrix(A, create_using = nx.MultiGraph())
+ >>> A = np.matrix([[1, 1], [1, 2]])
+ >>> G = nx.from_numpy_matrix(A, create_using=nx.MultiGraph())
>>> G[1][1]
AtlasView({0: {'weight': 2}})
@@ -403,18 +532,16 @@ def from_numpy_matrix(A, parallel_edges=False, create_using=None):
but `parallel_edges` is True, then the entries will be interpreted as
the number of parallel edges joining those two vertices:
- >>> import numpy
- >>> A = numpy.matrix([[1, 1], [1, 2]])
+ >>> A = np.matrix([[1, 1], [1, 2]])
>>> temp = nx.MultiGraph()
- >>> G = nx.from_numpy_matrix(A, parallel_edges = True, create_using = temp)
+ >>> G = nx.from_numpy_matrix(A, parallel_edges=True, create_using=temp)
>>> G[1][1]
AtlasView({0: {'weight': 1}, 1: {'weight': 1}})
User defined compound data type on edges:
- >>> import numpy
>>> dt = [('weight', float), ('cost', int)]
- >>> A = numpy.matrix([[(1.0, 2)]], dtype = dt)
+ >>> A = np.matrix([[(1.0, 2)]], dtype=dt)
>>> G = nx.from_numpy_matrix(A)
>>> list(G.edges())
[(0, 0)]
@@ -436,7 +563,7 @@ def from_numpy_matrix(A, parallel_edges=False, create_using=None):
try: # Python 3.x
blurb = chr(1245) # just to trigger the exception
kind_to_python_type['U'] = str
- except ValueError: # Python 2.6+
+ except ValueError: # Python 2.7
kind_to_python_type['U'] = unicode
G = _prep_create_using(create_using)
n, m = A.shape
@@ -530,14 +657,15 @@ def to_numpy_recarray(G, nodelist=None, dtype=None, order=None):
Examples
--------
>>> G = nx.Graph()
- >>> G.add_edge(1,2,weight=7.0,cost=5)
- >>> A=nx.to_numpy_recarray(G,dtype=[('weight',float),('cost',int)])
+ >>> G.add_edge(1, 2, weight=7.0, cost=5)
+ >>> A = nx.to_numpy_recarray(G, dtype=[('weight', float), ('cost', int)])
>>> print(A.weight)
[[ 0. 7.]
[ 7. 0.]]
>>> print(A.cost)
[[0 5]
[5 0]]
+
"""
if dtype is None:
dtype = [('weight', float)]
@@ -617,26 +745,26 @@ def to_scipy_sparse_matrix(G, nodelist=None, dtype=None,
resulting Scipy sparse matrix can be modified as follows:
>>> import scipy as sp
- >>> G = nx.Graph([(1,1)])
+ >>> G = nx.Graph([(1, 1)])
>>> A = nx.to_scipy_sparse_matrix(G)
>>> print(A.todense())
[[1]]
- >>> A.setdiag(A.diagonal()*2)
+ >>> A.setdiag(A.diagonal() * 2)
>>> print(A.todense())
[[2]]
Examples
--------
>>> G = nx.MultiDiGraph()
- >>> G.add_edge(0,1,weight=2)
+ >>> G.add_edge(0, 1, weight=2)
0
- >>> G.add_edge(1,0)
+ >>> G.add_edge(1, 0)
0
- >>> G.add_edge(2,2,weight=3)
+ >>> G.add_edge(2, 2, weight=3)
0
- >>> G.add_edge(2,2)
+ >>> G.add_edge(2, 2)
1
- >>> S = nx.to_scipy_sparse_matrix(G, nodelist=[0,1,2])
+ >>> S = nx.to_scipy_sparse_matrix(G, nodelist=[0, 1, 2])
>>> print(S.todense())
[[0 2 0]
[1 0 0]
@@ -791,16 +919,15 @@ def from_scipy_sparse_matrix(A, parallel_edges=False, create_using=None,
Examples
--------
- >>> import scipy.sparse
- >>> A = scipy.sparse.eye(2,2,1)
+ >>> import scipy as sp
+ >>> A = sp.sparse.eye(2, 2, 1)
>>> G = nx.from_scipy_sparse_matrix(A)
If `create_using` is a multigraph and the matrix has only integer entries,
the entries will be interpreted as weighted edges joining the vertices
(without creating parallel edges):
- >>> import scipy
- >>> A = scipy.sparse.csr_matrix([[1, 1], [1, 2]])
+ >>> A = sp.sparse.csr_matrix([[1, 1], [1, 2]])
>>> G = nx.from_scipy_sparse_matrix(A, create_using=nx.MultiGraph())
>>> G[1][1]
AtlasView({0: {'weight': 2}})
@@ -809,8 +936,7 @@ def from_scipy_sparse_matrix(A, parallel_edges=False, create_using=None,
but `parallel_edges` is True, then the entries will be interpreted as
the number of parallel edges joining those two vertices:
- >>> import scipy
- >>> A = scipy.sparse.csr_matrix([[1, 1], [1, 2]])
+ >>> A = sp.sparse.csr_matrix([[1, 1], [1, 2]])
>>> G = nx.from_scipy_sparse_matrix(A, parallel_edges=True,
... create_using=nx.MultiGraph())
>>> G[1][1]
@@ -932,18 +1058,19 @@ def to_numpy_array(G, nodelist=None, dtype=None, order=None,
Examples
--------
>>> G = nx.MultiDiGraph()
- >>> G.add_edge(0,1,weight=2)
+ >>> G.add_edge(0, 1, weight=2)
0
- >>> G.add_edge(1,0)
+ >>> G.add_edge(1, 0)
0
- >>> G.add_edge(2,2,weight=3)
+ >>> G.add_edge(2, 2, weight=3)
0
- >>> G.add_edge(2,2)
+ >>> G.add_edge(2, 2)
1
- >>> nx.to_numpy_array(G, nodelist=[0,1,2])
+ >>> nx.to_numpy_array(G, nodelist=[0, 1, 2])
array([[ 0., 2., 0.],
[ 1., 0., 0.],
[ 0., 0., 4.]])
+
"""
import numpy as np
if nodelist is None:
@@ -1080,7 +1207,6 @@ def from_numpy_array(A, parallel_edges=False, create_using=None):
the entries will be interpreted as weighted edges joining the vertices
(without creating parallel edges):
- >>> import numpy as np
>>> A = np.array([[1, 1], [1, 2]])
>>> G = nx.from_numpy_array(A, create_using=nx.MultiGraph())
>>> G[1][1]
@@ -1090,7 +1216,6 @@ def from_numpy_array(A, parallel_edges=False, create_using=None):
but `parallel_edges` is True, then the entries will be interpreted as
the number of parallel edges joining those two vertices:
- >>> import numpy as np
>>> A = np.array([[1, 1], [1, 2]])
>>> temp = nx.MultiGraph()
>>> G = nx.from_numpy_array(A, parallel_edges=True, create_using=temp)
@@ -1099,7 +1224,6 @@ def from_numpy_array(A, parallel_edges=False, create_using=None):
User defined compound data type on edges:
- >>> import numpy
>>> dt = [('weight', float), ('cost', int)]
>>> A = np.array([[(1.0, 2)]], dtype=dt)
>>> G = nx.from_numpy_array(A)
| diff --git a/networkx/tests/test_convert.py b/networkx/tests/test_convert.py
--- a/networkx/tests/test_convert.py
+++ b/networkx/tests/test_convert.py
@@ -1,5 +1,7 @@
#!/usr/bin/env python
-from nose.tools import assert_equal, assert_not_equal, assert_true, assert_false
+from nose.tools import (assert_equal, assert_not_equal,
+ assert_true, assert_false,
+ assert_raises)
import networkx as nx
from networkx.testing import assert_nodes_equal, assert_edges_equal, assert_graphs_equal
@@ -39,6 +41,38 @@ def test_simple_graphs(self):
Gdod = nx.Graph(dod)
assert_graphs_equal(Gdod, P3)
+ def test_exceptions(self):
+ # _prep_create_using
+ G = {"a": "a"}
+ H = nx.to_networkx_graph(G)
+ assert_graphs_equal(H, nx.Graph([('a', 'a')]))
+ assert_raises(TypeError, to_networkx_graph, G, create_using=0.0)
+
+ # NX graph
+ class G(object):
+ adj = None
+
+ assert_raises(nx.NetworkXError, to_networkx_graph, G)
+
+ # pygraphviz agraph
+ class G(object):
+ is_strict = None
+
+ assert_raises(nx.NetworkXError, to_networkx_graph, G)
+
+ # Dict of [dicts, lists]
+ G = {"a": 0}
+ assert_raises(TypeError, to_networkx_graph, G)
+
+ # list or generator of edges
+ class G(object):
+ next = None
+
+ assert_raises(nx.NetworkXError, to_networkx_graph, G)
+
+ # no match
+ assert_raises(nx.NetworkXError, to_networkx_graph, "a")
+
def test_digraphs(self):
for dest, source in [(to_dict_of_dicts, from_dict_of_dicts),
(to_dict_of_lists, from_dict_of_lists)]:
@@ -225,3 +259,8 @@ def test_attribute_dict_integrity(self):
assert_equal(list(H.nodes), list(G.nodes))
H = nx.OrderedDiGraph(G)
assert_equal(list(H.nodes), list(G.nodes))
+
+ def test_to_edgelist(self):
+ G = nx.Graph([(1, 1)])
+ elist = nx.to_edgelist(G, nodelist=list(G))
+ assert_edges_equal(G.edges(data=True), elist)
diff --git a/networkx/tests/test_convert_numpy.py b/networkx/tests/test_convert_numpy.py
--- a/networkx/tests/test_convert_numpy.py
+++ b/networkx/tests/test_convert_numpy.py
@@ -2,19 +2,20 @@
from nose.tools import assert_raises, assert_true, assert_equal
import networkx as nx
-from networkx.generators.classic import barbell_graph,cycle_graph,path_graph
+from networkx.generators.classic import barbell_graph, cycle_graph, path_graph
from networkx.testing.utils import assert_graphs_equal
class TestConvertNumpy(object):
- numpy=1 # nosetests attribute, use nosetests -a 'not numpy' to skip test
+ numpy = 1 # nosetests attribute, use nosetests -a 'not numpy' to skip test
+
@classmethod
def setupClass(cls):
global np
global np_assert_equal
try:
import numpy as np
- np_assert_equal=np.testing.assert_equal
+ np_assert_equal = np.testing.assert_equal
except ImportError:
raise SkipTest('NumPy not available.')
@@ -25,15 +26,19 @@ def __init__(self):
self.G3 = self.create_weighted(nx.Graph())
self.G4 = self.create_weighted(nx.DiGraph())
+ def test_exceptions(self):
+ G = np.array("a")
+ assert_raises(nx.NetworkXError, nx.to_networkx_graph, G)
+
def create_weighted(self, G):
g = cycle_graph(4)
G.add_nodes_from(g)
- G.add_weighted_edges_from( (u,v,10+u) for u,v in g.edges())
+ G.add_weighted_edges_from((u, v, 10 + u) for u, v in g.edges())
return G
def assert_equal(self, G1, G2):
- assert_true( sorted(G1.nodes())==sorted(G2.nodes()) )
- assert_true( sorted(G1.edges())==sorted(G2.edges()) )
+ assert_true(sorted(G1.nodes()) == sorted(G2.nodes()))
+ assert_true(sorted(G1.edges()) == sorted(G2.edges()))
def identity_conversion(self, G, A, create_using):
assert(A.sum() > 0)
@@ -46,7 +51,7 @@ def identity_conversion(self, G, A, create_using):
def test_shape(self):
"Conversion from non-square array."
- A=np.array([[1,2,3],[4,5,6]])
+ A = np.array([[1, 2, 3], [4, 5, 6]])
assert_raises(nx.NetworkXError, nx.from_numpy_matrix, A)
def test_identity_graph_matrix(self):
@@ -108,66 +113,66 @@ def test_nodelist(self):
def test_weight_keyword(self):
WP4 = nx.Graph()
- WP4.add_edges_from( (n,n+1,dict(weight=0.5,other=0.3)) for n in range(3) )
+ WP4.add_edges_from((n, n + 1, dict(weight=0.5, other=0.3)) for n in range(3))
P4 = path_graph(4)
A = nx.to_numpy_matrix(P4)
- np_assert_equal(A, nx.to_numpy_matrix(WP4,weight=None))
- np_assert_equal(0.5*A, nx.to_numpy_matrix(WP4))
- np_assert_equal(0.3*A, nx.to_numpy_matrix(WP4,weight='other'))
+ np_assert_equal(A, nx.to_numpy_matrix(WP4, weight=None))
+ np_assert_equal(0.5 * A, nx.to_numpy_matrix(WP4))
+ np_assert_equal(0.3 * A, nx.to_numpy_matrix(WP4, weight='other'))
def test_from_numpy_matrix_type(self):
- A=np.matrix([[1]])
- G=nx.from_numpy_matrix(A)
- assert_equal(type(G[0][0]['weight']),int)
+ A = np.matrix([[1]])
+ G = nx.from_numpy_matrix(A)
+ assert_equal(type(G[0][0]['weight']), int)
- A=np.matrix([[1]]).astype(np.float)
- G=nx.from_numpy_matrix(A)
- assert_equal(type(G[0][0]['weight']),float)
+ A = np.matrix([[1]]).astype(np.float)
+ G = nx.from_numpy_matrix(A)
+ assert_equal(type(G[0][0]['weight']), float)
- A=np.matrix([[1]]).astype(np.str)
- G=nx.from_numpy_matrix(A)
- assert_equal(type(G[0][0]['weight']),str)
+ A = np.matrix([[1]]).astype(np.str)
+ G = nx.from_numpy_matrix(A)
+ assert_equal(type(G[0][0]['weight']), str)
- A=np.matrix([[1]]).astype(np.bool)
- G=nx.from_numpy_matrix(A)
- assert_equal(type(G[0][0]['weight']),bool)
+ A = np.matrix([[1]]).astype(np.bool)
+ G = nx.from_numpy_matrix(A)
+ assert_equal(type(G[0][0]['weight']), bool)
- A=np.matrix([[1]]).astype(np.complex)
- G=nx.from_numpy_matrix(A)
- assert_equal(type(G[0][0]['weight']),complex)
+ A = np.matrix([[1]]).astype(np.complex)
+ G = nx.from_numpy_matrix(A)
+ assert_equal(type(G[0][0]['weight']), complex)
- A=np.matrix([[1]]).astype(np.object)
- assert_raises(TypeError,nx.from_numpy_matrix,A)
+ A = np.matrix([[1]]).astype(np.object)
+ assert_raises(TypeError, nx.from_numpy_matrix, A)
def test_from_numpy_matrix_dtype(self):
- dt=[('weight',float),('cost',int)]
- A=np.matrix([[(1.0,2)]],dtype=dt)
- G=nx.from_numpy_matrix(A)
- assert_equal(type(G[0][0]['weight']),float)
- assert_equal(type(G[0][0]['cost']),int)
- assert_equal(G[0][0]['cost'],2)
- assert_equal(G[0][0]['weight'],1.0)
+ dt = [('weight', float), ('cost', int)]
+ A = np.matrix([[(1.0, 2)]], dtype=dt)
+ G = nx.from_numpy_matrix(A)
+ assert_equal(type(G[0][0]['weight']), float)
+ assert_equal(type(G[0][0]['cost']), int)
+ assert_equal(G[0][0]['cost'], 2)
+ assert_equal(G[0][0]['weight'], 1.0)
def test_to_numpy_recarray(self):
- G=nx.Graph()
- G.add_edge(1,2,weight=7.0,cost=5)
- A=nx.to_numpy_recarray(G,dtype=[('weight',float),('cost',int)])
- assert_equal(sorted(A.dtype.names),['cost','weight'])
- assert_equal(A.weight[0,1],7.0)
- assert_equal(A.weight[0,0],0.0)
- assert_equal(A.cost[0,1],5)
- assert_equal(A.cost[0,0],0)
+ G = nx.Graph()
+ G.add_edge(1, 2, weight=7.0, cost=5)
+ A = nx.to_numpy_recarray(G, dtype=[('weight', float), ('cost', int)])
+ assert_equal(sorted(A.dtype.names), ['cost', 'weight'])
+ assert_equal(A.weight[0, 1], 7.0)
+ assert_equal(A.weight[0, 0], 0.0)
+ assert_equal(A.cost[0, 1], 5)
+ assert_equal(A.cost[0, 0], 0)
def test_numpy_multigraph(self):
- G=nx.MultiGraph()
- G.add_edge(1,2,weight=7)
- G.add_edge(1,2,weight=70)
- A=nx.to_numpy_matrix(G)
- assert_equal(A[1,0],77)
- A=nx.to_numpy_matrix(G,multigraph_weight=min)
- assert_equal(A[1,0],7)
- A=nx.to_numpy_matrix(G,multigraph_weight=max)
- assert_equal(A[1,0],70)
+ G = nx.MultiGraph()
+ G.add_edge(1, 2, weight=7)
+ G.add_edge(1, 2, weight=70)
+ A = nx.to_numpy_matrix(G)
+ assert_equal(A[1, 0], 77)
+ A = nx.to_numpy_matrix(G, multigraph_weight=min)
+ assert_equal(A[1, 0], 7)
+ A = nx.to_numpy_matrix(G, multigraph_weight=max)
+ assert_equal(A[1, 0], 70)
def test_from_numpy_matrix_parallel_edges(self):
"""Tests that the :func:`networkx.from_numpy_matrix` function
@@ -238,14 +243,15 @@ def test_dtype_int_multigraph(self):
class TestConvertNumpyArray(object):
- numpy=1 # nosetests attribute, use nosetests -a 'not numpy' to skip test
+ numpy = 1 # nosetests attribute, use nosetests -a 'not numpy' to skip test
+
@classmethod
def setupClass(cls):
global np
global np_assert_equal
try:
import numpy as np
- np_assert_equal=np.testing.assert_equal
+ np_assert_equal = np.testing.assert_equal
except ImportError:
raise SkipTest('NumPy not available.')
@@ -259,12 +265,12 @@ def __init__(self):
def create_weighted(self, G):
g = cycle_graph(4)
G.add_nodes_from(g)
- G.add_weighted_edges_from( (u,v,10+u) for u,v in g.edges())
+ G.add_weighted_edges_from((u, v, 10 + u) for u, v in g.edges())
return G
def assert_equal(self, G1, G2):
- assert_true( sorted(G1.nodes())==sorted(G2.nodes()) )
- assert_true( sorted(G1.edges())==sorted(G2.edges()) )
+ assert_true(sorted(G1.nodes()) == sorted(G2.nodes()))
+ assert_true(sorted(G1.edges()) == sorted(G2.edges()))
def identity_conversion(self, G, A, create_using):
assert(A.sum() > 0)
@@ -277,7 +283,7 @@ def identity_conversion(self, G, A, create_using):
def test_shape(self):
"Conversion from non-square array."
- A=np.array([[1,2,3],[4,5,6]])
+ A = np.array([[1, 2, 3], [4, 5, 6]])
assert_raises(nx.NetworkXError, nx.from_numpy_array, A)
def test_identity_graph_array(self):
@@ -315,66 +321,66 @@ def test_nodelist(self):
def test_weight_keyword(self):
WP4 = nx.Graph()
- WP4.add_edges_from( (n,n+1,dict(weight=0.5,other=0.3)) for n in range(3) )
+ WP4.add_edges_from((n, n + 1, dict(weight=0.5, other=0.3)) for n in range(3))
P4 = path_graph(4)
A = nx.to_numpy_array(P4)
- np_assert_equal(A, nx.to_numpy_array(WP4,weight=None))
- np_assert_equal(0.5*A, nx.to_numpy_array(WP4))
- np_assert_equal(0.3*A, nx.to_numpy_array(WP4,weight='other'))
+ np_assert_equal(A, nx.to_numpy_array(WP4, weight=None))
+ np_assert_equal(0.5 * A, nx.to_numpy_array(WP4))
+ np_assert_equal(0.3 * A, nx.to_numpy_array(WP4, weight='other'))
def test_from_numpy_array_type(self):
- A=np.array([[1]])
- G=nx.from_numpy_array(A)
- assert_equal(type(G[0][0]['weight']),int)
+ A = np.array([[1]])
+ G = nx.from_numpy_array(A)
+ assert_equal(type(G[0][0]['weight']), int)
- A=np.array([[1]]).astype(np.float)
- G=nx.from_numpy_array(A)
- assert_equal(type(G[0][0]['weight']),float)
+ A = np.array([[1]]).astype(np.float)
+ G = nx.from_numpy_array(A)
+ assert_equal(type(G[0][0]['weight']), float)
- A=np.array([[1]]).astype(np.str)
- G=nx.from_numpy_array(A)
- assert_equal(type(G[0][0]['weight']),str)
+ A = np.array([[1]]).astype(np.str)
+ G = nx.from_numpy_array(A)
+ assert_equal(type(G[0][0]['weight']), str)
- A=np.array([[1]]).astype(np.bool)
- G=nx.from_numpy_array(A)
- assert_equal(type(G[0][0]['weight']),bool)
+ A = np.array([[1]]).astype(np.bool)
+ G = nx.from_numpy_array(A)
+ assert_equal(type(G[0][0]['weight']), bool)
- A=np.array([[1]]).astype(np.complex)
- G=nx.from_numpy_array(A)
- assert_equal(type(G[0][0]['weight']),complex)
+ A = np.array([[1]]).astype(np.complex)
+ G = nx.from_numpy_array(A)
+ assert_equal(type(G[0][0]['weight']), complex)
- A=np.array([[1]]).astype(np.object)
- assert_raises(TypeError,nx.from_numpy_array,A)
+ A = np.array([[1]]).astype(np.object)
+ assert_raises(TypeError, nx.from_numpy_array, A)
def test_from_numpy_array_dtype(self):
- dt=[('weight',float),('cost',int)]
- A=np.array([[(1.0,2)]],dtype=dt)
- G=nx.from_numpy_array(A)
- assert_equal(type(G[0][0]['weight']),float)
- assert_equal(type(G[0][0]['cost']),int)
- assert_equal(G[0][0]['cost'],2)
- assert_equal(G[0][0]['weight'],1.0)
+ dt = [('weight', float), ('cost', int)]
+ A = np.array([[(1.0, 2)]], dtype=dt)
+ G = nx.from_numpy_array(A)
+ assert_equal(type(G[0][0]['weight']), float)
+ assert_equal(type(G[0][0]['cost']), int)
+ assert_equal(G[0][0]['cost'], 2)
+ assert_equal(G[0][0]['weight'], 1.0)
def test_to_numpy_recarray(self):
- G=nx.Graph()
- G.add_edge(1,2,weight=7.0,cost=5)
- A=nx.to_numpy_recarray(G,dtype=[('weight',float),('cost',int)])
- assert_equal(sorted(A.dtype.names),['cost','weight'])
- assert_equal(A.weight[0,1],7.0)
- assert_equal(A.weight[0,0],0.0)
- assert_equal(A.cost[0,1],5)
- assert_equal(A.cost[0,0],0)
+ G = nx.Graph()
+ G.add_edge(1, 2, weight=7.0, cost=5)
+ A = nx.to_numpy_recarray(G, dtype=[('weight', float), ('cost', int)])
+ assert_equal(sorted(A.dtype.names), ['cost', 'weight'])
+ assert_equal(A.weight[0, 1], 7.0)
+ assert_equal(A.weight[0, 0], 0.0)
+ assert_equal(A.cost[0, 1], 5)
+ assert_equal(A.cost[0, 0], 0)
def test_numpy_multigraph(self):
- G=nx.MultiGraph()
- G.add_edge(1,2,weight=7)
- G.add_edge(1,2,weight=70)
- A=nx.to_numpy_array(G)
- assert_equal(A[1,0],77)
- A=nx.to_numpy_array(G,multigraph_weight=min)
- assert_equal(A[1,0],7)
- A=nx.to_numpy_array(G,multigraph_weight=max)
- assert_equal(A[1,0],70)
+ G = nx.MultiGraph()
+ G.add_edge(1, 2, weight=7)
+ G.add_edge(1, 2, weight=70)
+ A = nx.to_numpy_array(G)
+ assert_equal(A[1, 0], 77)
+ A = nx.to_numpy_array(G, multigraph_weight=min)
+ assert_equal(A[1, 0], 7)
+ A = nx.to_numpy_array(G, multigraph_weight=max)
+ assert_equal(A[1, 0], 70)
def test_from_numpy_array_parallel_edges(self):
"""Tests that the :func:`networkx.from_numpy_array` function
@@ -390,10 +396,10 @@ def test_from_numpy_array_parallel_edges(self):
expected.add_weighted_edges_from([(u, v, 1) for (u, v) in edges])
expected.add_edge(1, 1, weight=2)
actual = nx.from_numpy_array(A, parallel_edges=True,
- create_using=nx.DiGraph())
+ create_using=nx.DiGraph())
assert_graphs_equal(actual, expected)
actual = nx.from_numpy_array(A, parallel_edges=False,
- create_using=nx.DiGraph())
+ create_using=nx.DiGraph())
assert_graphs_equal(actual, expected)
# Now each integer entry in the adjacency matrix is interpreted as the
# number of parallel edges in the graph if the appropriate keyword
@@ -402,14 +408,14 @@ def test_from_numpy_array_parallel_edges(self):
expected = nx.MultiDiGraph()
expected.add_weighted_edges_from([(u, v, 1) for (u, v) in edges])
actual = nx.from_numpy_array(A, parallel_edges=True,
- create_using=nx.MultiDiGraph())
+ create_using=nx.MultiDiGraph())
assert_graphs_equal(actual, expected)
expected = nx.MultiDiGraph()
expected.add_edges_from(set(edges), weight=1)
# The sole self-loop (edge 0) on vertex 1 should have weight 2.
expected[1][1][0]['weight'] = 2
actual = nx.from_numpy_array(A, parallel_edges=False,
- create_using=nx.MultiDiGraph())
+ create_using=nx.MultiDiGraph())
assert_graphs_equal(actual, expected)
def test_symmetric(self):
diff --git a/networkx/tests/test_convert_pandas.py b/networkx/tests/test_convert_pandas.py
--- a/networkx/tests/test_convert_pandas.py
+++ b/networkx/tests/test_convert_pandas.py
@@ -1,8 +1,8 @@
from nose import SkipTest
-from nose.tools import assert_true
+from nose.tools import assert_true, assert_raises
import networkx as nx
-from networkx.testing import assert_nodes_equal, assert_edges_equal
+from networkx.testing import assert_nodes_equal, assert_edges_equal, assert_graphs_equal
class TestConvertPandas(object):
@@ -15,7 +15,7 @@ def setupClass(cls):
except ImportError:
raise SkipTest('Pandas not available.')
- def __init__(self, ):
+ def __init__(self):
global pd
import pandas as pd
@@ -31,43 +31,80 @@ def __init__(self, ):
columns=['weight', 'cost', 0, 'b'])
self.mdf = df.append(mdf)
- def assert_equal(self, G1, G2):
- assert_true(nx.is_isomorphic(G1, G2, edge_match=lambda x, y: x == y))
+ def test_exceptions(self):
+ G = pd.DataFrame(["a"]) # adj
+ assert_raises(nx.NetworkXError, nx.to_networkx_graph, G)
+ G = pd.DataFrame(["a", 0.0]) # elist
+ assert_raises(nx.NetworkXError, nx.to_networkx_graph, G)
+ df = pd.DataFrame([[1, 1], [1, 0]], dtype=int, index=[1, 2], columns=["a", "b"])
+ assert_raises(nx.NetworkXError, nx.from_pandas_adjacency, df)
- def test_from_dataframe_all_attr(self, ):
+ def test_from_edgelist_all_attr(self):
Gtrue = nx.Graph([('E', 'C', {'cost': 9, 'weight': 10}),
('B', 'A', {'cost': 1, 'weight': 7}),
('A', 'D', {'cost': 7, 'weight': 4})])
+ G = nx.from_pandas_edgelist(self.df, 0, 'b', True)
+ assert_graphs_equal(G, Gtrue)
+ # deprecated
G = nx.from_pandas_dataframe(self.df, 0, 'b', True)
- self.assert_equal(G, Gtrue)
+ assert_graphs_equal(G, Gtrue)
# MultiGraph
MGtrue = nx.MultiGraph(Gtrue)
MGtrue.add_edge('A', 'D', cost=16, weight=4)
- MG = nx.from_pandas_dataframe(self.mdf, 0, 'b', True, nx.MultiGraph())
- self.assert_equal(MG, MGtrue)
+ MG = nx.from_pandas_edgelist(self.mdf, 0, 'b', True, nx.MultiGraph())
+ assert_graphs_equal(MG, MGtrue)
- def test_from_dataframe_multi_attr(self, ):
+ def test_from_edgelist_multi_attr(self):
Gtrue = nx.Graph([('E', 'C', {'cost': 9, 'weight': 10}),
('B', 'A', {'cost': 1, 'weight': 7}),
('A', 'D', {'cost': 7, 'weight': 4})])
- G = nx.from_pandas_dataframe(self.df, 0, 'b', ['weight', 'cost'])
- self.assert_equal(G, Gtrue)
+ G = nx.from_pandas_edgelist(self.df, 0, 'b', ['weight', 'cost'])
+ assert_graphs_equal(G, Gtrue)
- def test_from_dataframe_one_attr(self, ):
+ def test_from_edgelist_multidigraph_and_edge_attr(self):
+ # example from issue #2374
+ Gtrue = nx.MultiDiGraph([('X1', 'X4', {'Co': 'zA', 'Mi': 0, 'St': 'X1'}),
+ ('X1', 'X4', {'Co': 'zB', 'Mi': 54, 'St': 'X2'}),
+ ('X1', 'X4', {'Co': 'zB', 'Mi': 49, 'St': 'X3'}),
+ ('X1', 'X4', {'Co': 'zB', 'Mi': 44, 'St': 'X4'}),
+ ('Y1', 'Y3', {'Co': 'zC', 'Mi': 0, 'St': 'Y1'}),
+ ('Y1', 'Y3', {'Co': 'zC', 'Mi': 34, 'St': 'Y2'}),
+ ('Y1', 'Y3', {'Co': 'zC', 'Mi': 29, 'St': 'X2'}),
+ ('Y1', 'Y3', {'Co': 'zC', 'Mi': 24, 'St': 'Y3'}),
+ ('Z1', 'Z3', {'Co': 'zD', 'Mi': 0, 'St': 'Z1'}),
+ ('Z1', 'Z3', {'Co': 'zD', 'Mi': 14, 'St': 'X3'}),
+ ('Z1', 'Z3', {'Co': 'zE', 'Mi': 9, 'St': 'Z2'}),
+ ('Z1', 'Z3', {'Co': 'zE', 'Mi': 4, 'St': 'Z3'})])
+ df = pd.DataFrame.from_items([
+ ('O', ['X1', 'X1', 'X1', 'X1', 'Y1', 'Y1', 'Y1', 'Y1', 'Z1', 'Z1', 'Z1', 'Z1']),
+ ('D', ['X4', 'X4', 'X4', 'X4', 'Y3', 'Y3', 'Y3', 'Y3', 'Z3', 'Z3', 'Z3', 'Z3']),
+ ('St', ['X1', 'X2', 'X3', 'X4', 'Y1', 'Y2', 'X2', 'Y3', 'Z1', 'X3', 'Z2', 'Z3']),
+ ('Co', ['zA', 'zB', 'zB', 'zB', 'zC', 'zC', 'zC', 'zC', 'zD', 'zD', 'zE', 'zE']),
+ ('Mi', [0, 54, 49, 44, 0, 34, 29, 24, 0, 14, 9, 4])])
+ G1 = nx.from_pandas_edgelist(df, source='O', target='D',
+ edge_attr=True,
+ create_using=nx.MultiDiGraph())
+ G2 = nx.from_pandas_edgelist(df, source='O', target='D',
+ edge_attr=['St', 'Co', 'Mi'],
+ create_using=nx.MultiDiGraph())
+ assert_graphs_equal(G1, Gtrue)
+ assert_graphs_equal(G2, Gtrue)
+
+ def test_from_edgelist_one_attr(self):
Gtrue = nx.Graph([('E', 'C', {'weight': 10}),
('B', 'A', {'weight': 7}),
('A', 'D', {'weight': 4})])
- G = nx.from_pandas_dataframe(self.df, 0, 'b', 'weight')
- self.assert_equal(G, Gtrue)
+ G = nx.from_pandas_edgelist(self.df, 0, 'b', 'weight')
+ assert_graphs_equal(G, Gtrue)
- def test_from_dataframe_no_attr(self, ):
+ def test_from_edgelist_no_attr(self):
Gtrue = nx.Graph([('E', 'C', {}),
('B', 'A', {}),
('A', 'D', {})])
- G = nx.from_pandas_dataframe(self.df, 0, 'b',)
- self.assert_equal(G, Gtrue)
+ G = nx.from_pandas_edgelist(self.df, 0, 'b',)
+ assert_graphs_equal(G, Gtrue)
- def test_from_datafram(self, ):
+ def test_from_edgelist(self):
# Pandas DataFrame
g = nx.cycle_graph(10)
G = nx.Graph()
@@ -77,13 +114,34 @@ def test_from_datafram(self, ):
source = [s for s, t, d in edgelist]
target = [t for s, t, d in edgelist]
weight = [d['weight'] for s, t, d in edgelist]
- import pandas as pd
edges = pd.DataFrame({'source': source,
'target': target,
'weight': weight})
- GG = nx.from_pandas_dataframe(edges, edge_attr='weight')
- assert_nodes_equal(sorted(G.nodes()), sorted(GG.nodes()))
- assert_edges_equal(sorted(G.edges()), sorted(GG.edges()))
+ GG = nx.from_pandas_edgelist(edges, edge_attr='weight')
+ assert_nodes_equal(G.nodes(), GG.nodes())
+ assert_edges_equal(G.edges(), GG.edges())
GW = nx.to_networkx_graph(edges, create_using=nx.Graph())
- assert_nodes_equal(sorted(G.nodes()), sorted(GW.nodes()))
- assert_edges_equal(sorted(G.edges()), sorted(GW.edges()))
+ assert_nodes_equal(G.nodes(), GW.nodes())
+ assert_edges_equal(G.edges(), GW.edges())
+
+ def test_from_adjacency(self):
+ nodelist = [1, 2]
+ dftrue = pd.DataFrame([[1, 1], [1, 0]], dtype=int, index=nodelist, columns=nodelist)
+ G = nx.Graph([(1, 1), (1, 2)])
+ df = nx.to_pandas_adjacency(G, dtype=int)
+ pd.testing.assert_frame_equal(df, dftrue)
+ # deprecated
+ df = nx.to_pandas_dataframe(G, dtype=int)
+ pd.testing.assert_frame_equal(df, dftrue)
+
+ def test_roundtrip(self):
+ # edgelist
+ Gtrue = nx.Graph([(1, 1), (1, 2)])
+ df = nx.to_pandas_edgelist(Gtrue)
+ G = nx.from_pandas_edgelist(df)
+ assert_graphs_equal(Gtrue, G)
+ # adjacency
+ Gtrue = nx.Graph(({1: {1: {'weight': 1}, 2: {'weight': 1}}, 2: {1: {'weight': 1}}}))
+ df = nx.to_pandas_adjacency(Gtrue, dtype=int)
+ G = nx.from_pandas_adjacency(df)
+ assert_graphs_equal(Gtrue, G)
diff --git a/networkx/tests/test_convert_scipy.py b/networkx/tests/test_convert_scipy.py
--- a/networkx/tests/test_convert_scipy.py
+++ b/networkx/tests/test_convert_scipy.py
@@ -3,7 +3,7 @@
import networkx as nx
from networkx.testing import assert_graphs_equal
-from networkx.generators.classic import barbell_graph,cycle_graph,path_graph
+from networkx.generators.classic import barbell_graph, cycle_graph, path_graph
from networkx.testing.utils import assert_graphs_equal
@@ -15,7 +15,7 @@ def setupClass(cls):
import numpy as np
import scipy as sp
import scipy.sparse as sparse
- np_assert_equal=np.testing.assert_equal
+ np_assert_equal = np.testing.assert_equal
except ImportError:
raise SkipTest('SciPy sparse library not available.')
@@ -26,18 +26,24 @@ def __init__(self):
self.G3 = self.create_weighted(nx.Graph())
self.G4 = self.create_weighted(nx.DiGraph())
+ def test_exceptions(self):
+ class G(object):
+ format = None
+
+ assert_raises(nx.NetworkXError, nx.to_networkx_graph, G)
+
def create_weighted(self, G):
g = cycle_graph(4)
e = list(g.edges())
- source = [u for u,v in e]
- dest = [v for u,v in e]
- weight = [s+10 for s in source]
+ source = [u for u, v in e]
+ dest = [v for u, v in e]
+ weight = [s + 10 for s in source]
ex = zip(source, dest, weight)
G.add_weighted_edges_from(ex)
return G
def assert_isomorphic(self, G1, G2):
- assert_true(nx.is_isomorphic(G1,G2))
+ assert_true(nx.is_isomorphic(G1, G2))
def identity_conversion(self, G, A, create_using):
GG = nx.from_scipy_sparse_matrix(A, create_using=create_using)
@@ -71,7 +77,7 @@ def identity_conversion(self, G, A, create_using):
def test_shape(self):
"Conversion from non-square sparse array."
- A = sp.sparse.lil_matrix([[1,2,3],[4,5,6]])
+ A = sp.sparse.lil_matrix([[1, 2, 3], [4, 5, 6]])
assert_raises(nx.NetworkXError, nx.from_scipy_sparse_matrix, A)
def test_identity_graph_matrix(self):
@@ -105,60 +111,60 @@ def test_nodelist(self):
# Make nodelist ambiguous by containing duplicates.
nodelist += [nodelist[0]]
- assert_raises(nx.NetworkXError, nx.to_numpy_matrix, P3,
+ assert_raises(nx.NetworkXError, nx.to_numpy_matrix, P3,
nodelist=nodelist)
def test_weight_keyword(self):
WP4 = nx.Graph()
- WP4.add_edges_from( (n,n+1,dict(weight=0.5,other=0.3))
- for n in range(3) )
+ WP4.add_edges_from((n, n + 1, dict(weight=0.5, other=0.3))
+ for n in range(3))
P4 = path_graph(4)
A = nx.to_scipy_sparse_matrix(P4)
np_assert_equal(A.todense(),
- nx.to_scipy_sparse_matrix(WP4,weight=None).todense())
- np_assert_equal(0.5*A.todense(),
+ nx.to_scipy_sparse_matrix(WP4, weight=None).todense())
+ np_assert_equal(0.5 * A.todense(),
nx.to_scipy_sparse_matrix(WP4).todense())
- np_assert_equal(0.3*A.todense(),
- nx.to_scipy_sparse_matrix(WP4,weight='other').todense())
+ np_assert_equal(0.3 * A.todense(),
+ nx.to_scipy_sparse_matrix(WP4, weight='other').todense())
def test_format_keyword(self):
WP4 = nx.Graph()
- WP4.add_edges_from( (n,n+1,dict(weight=0.5,other=0.3))
- for n in range(3) )
+ WP4.add_edges_from((n, n + 1, dict(weight=0.5, other=0.3))
+ for n in range(3))
P4 = path_graph(4)
A = nx.to_scipy_sparse_matrix(P4, format='csr')
np_assert_equal(A.todense(),
- nx.to_scipy_sparse_matrix(WP4,weight=None).todense())
+ nx.to_scipy_sparse_matrix(WP4, weight=None).todense())
A = nx.to_scipy_sparse_matrix(P4, format='csc')
np_assert_equal(A.todense(),
- nx.to_scipy_sparse_matrix(WP4,weight=None).todense())
+ nx.to_scipy_sparse_matrix(WP4, weight=None).todense())
A = nx.to_scipy_sparse_matrix(P4, format='coo')
np_assert_equal(A.todense(),
- nx.to_scipy_sparse_matrix(WP4,weight=None).todense())
+ nx.to_scipy_sparse_matrix(WP4, weight=None).todense())
A = nx.to_scipy_sparse_matrix(P4, format='bsr')
np_assert_equal(A.todense(),
- nx.to_scipy_sparse_matrix(WP4,weight=None).todense())
+ nx.to_scipy_sparse_matrix(WP4, weight=None).todense())
A = nx.to_scipy_sparse_matrix(P4, format='lil')
np_assert_equal(A.todense(),
- nx.to_scipy_sparse_matrix(WP4,weight=None).todense())
+ nx.to_scipy_sparse_matrix(WP4, weight=None).todense())
A = nx.to_scipy_sparse_matrix(P4, format='dia')
np_assert_equal(A.todense(),
- nx.to_scipy_sparse_matrix(WP4,weight=None).todense())
+ nx.to_scipy_sparse_matrix(WP4, weight=None).todense())
A = nx.to_scipy_sparse_matrix(P4, format='dok')
np_assert_equal(A.todense(),
- nx.to_scipy_sparse_matrix(WP4,weight=None).todense())
+ nx.to_scipy_sparse_matrix(WP4, weight=None).todense())
@raises(nx.NetworkXError)
def test_format_keyword_raise(self):
WP4 = nx.Graph()
- WP4.add_edges_from( (n,n+1,dict(weight=0.5,other=0.3))
- for n in range(3) )
+ WP4.add_edges_from((n, n + 1, dict(weight=0.5, other=0.3))
+ for n in range(3))
P4 = path_graph(4)
nx.to_scipy_sparse_matrix(P4, format='any_other')
@@ -174,19 +180,19 @@ def test_empty(self):
def test_ordering(self):
G = nx.DiGraph()
- G.add_edge(1,2)
- G.add_edge(2,3)
- G.add_edge(3,1)
- M = nx.to_scipy_sparse_matrix(G,nodelist=[3,2,1])
- np_assert_equal(M.todense(), np.matrix([[0,0,1],[1,0,0],[0,1,0]]))
+ G.add_edge(1, 2)
+ G.add_edge(2, 3)
+ G.add_edge(3, 1)
+ M = nx.to_scipy_sparse_matrix(G, nodelist=[3, 2, 1])
+ np_assert_equal(M.todense(), np.matrix([[0, 0, 1], [1, 0, 0], [0, 1, 0]]))
def test_selfloop_graph(self):
- G = nx.Graph([(1,1)])
+ G = nx.Graph([(1, 1)])
M = nx.to_scipy_sparse_matrix(G)
np_assert_equal(M.todense(), np.matrix([[1]]))
def test_selfloop_digraph(self):
- G = nx.DiGraph([(1,1)])
+ G = nx.DiGraph([(1, 1)])
M = nx.to_scipy_sparse_matrix(G)
np_assert_equal(M.todense(), np.matrix([[1]]))
| to_pandas_dataframe is not the inverse of from_pandas_dataframe
The function `from_pandas_dataframe` assumes that the data frame has an edge-list-like structure, but `to_pandas_dataframe` generates an adjacency matrix.
Personally, I think the edge-list-like structures are more useful because:
(1) Otherwise these two functions are just (wrapped) duplicates of the `numpy_matrix`-related functions
(2) The edge-list-like data frames are convenient for other interfaces such as `py2cytoscape`.
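A quick round trip makes the asymmetry concrete (a minimal sketch against the 1.x-era API; the toy data frame is made up):
```python
import networkx as nx
import pandas as pd

df = pd.DataFrame({'source': [0, 1], 'target': [1, 2], 'weight': [1.0, 2.0]})
G = nx.from_pandas_dataframe(df, 'source', 'target', edge_attr='weight')
out = nx.to_pandas_dataframe(G)  # a 3x3 adjacency matrix, not an edge list
# `out` cannot be fed back into from_pandas_dataframe as-is.
```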
Some related commits:
[52eb3d3](https://github.com/networkx/networkx/commit/52eb3d38ff78e4002a424ae5e8976998afdaf1a2): `from_pandas_dataframe` and `to_pandas_dataframe` were added. At that time both functions operated on adjacency matrices.
[f0c22dd](https://github.com/networkx/networkx/commit/f0c22dd3305ff236860c651d8a08fbabcb8b0a77): `from_pandas_dataframe` was changed to the current behavior, but it seems that `to_pandas_dataframe` was never updated accordingly.
| As far as I can tell from the discussions on github this change may not have been intentional.
It looks like we might need functions for a pandas adjacency matrix as well as a pandas edge list. Can anyone involved in the previous commits comment on the best way to fix this?
I don't use Pandas, but it also seems to me that there could be a use for both adjacency matrices and edge lists. So maybe we should either (1) provide four functions or (2) modify the to/from functions to have a flag indicating whether we are using an adjacency matrix or an edge list. I haven't looked at the code yet, but I am happy to provide an implementation of either solution. @dschult, @hagberg
If we go with 4 functions, maybe we should delete the two existing functions and provide something like:
```
def from_pandas_edgelist(df, source='source', target='target', edge_attr=None, create_using=None):
def to_pandas_edgelist(G, nodelist=None, dtype=None, order=None,
multigraph_weight=sum, weight='weight', nonedge=0.0):
def from_pandas_adjacency(df, create_using=None, mask_zero=True, mask_nan=True):
def to_pandas_adjacency(G, nodelist=None, dtype=None, order=None,
multigraph_weight=sum, weight='weight', nonedge=0.0):
```
Then in `to_networkx_graph`, we could do something like:
```
if isinstance(data, pd.DataFrame):
    if data.shape[0] == data.shape[1]:
        return from_pandas_adjacency(data, create_using=create_using)
    else:
        return from_pandas_edgelist(data, edge_attr=True, create_using=create_using)
```
Thoughts? Should I try implementing something for comment?
I like the idea of explicit separate functions.
Yes, have a go at it! :) At the moment we have one written for adjacency and one written for edgelist. You can find the other edgelist in the history, but it might be just as well to rewrite it. Have fun! | 2017-07-27T21:06:22 |
networkx/networkx | 2,564 | networkx__networkx-2564 | [
"2068"
] | e5f70a4fd3ac0b23669474639488a044d444773d | diff --git a/networkx/algorithms/tree/mst.py b/networkx/algorithms/tree/mst.py
--- a/networkx/algorithms/tree/mst.py
+++ b/networkx/algorithms/tree/mst.py
@@ -1,48 +1,52 @@
# -*- coding: utf-8 -*-
-"""
-Algorithms for calculating min/max spanning trees/forests.
-
-"""
-# Copyright (C) 2015 NetworkX Developers
+# Copyright (C) 2017 NetworkX Developers
# Aric Hagberg <[email protected]>
# Dan Schult <[email protected]>
# Pieter Swart <[email protected]>
# Loïc Séguin-C. <[email protected]>
# All rights reserved.
# BSD license.
+"""
+Algorithms for calculating min/max spanning trees/forests.
-__all__ = [
- 'minimum_spanning_edges', 'maximum_spanning_edges',
- 'minimum_spanning_tree', 'maximum_spanning_tree',
-]
-
+"""
from heapq import heappop, heappush
from itertools import count
import networkx as nx
from networkx.utils import UnionFind, not_implemented_for
+__all__ = [
+ 'minimum_spanning_edges', 'maximum_spanning_edges',
+ 'minimum_spanning_tree', 'maximum_spanning_tree',
+]
+
@not_implemented_for('multigraph')
def boruvka_mst_edges(G, minimum=True, weight='weight', keys=False, data=True):
- """Iterates over the edges of a minimum spanning tree as computed by
- Borůvka's algorithm.
+ """Iterate over edges of a Borůvka's algorithm min/max spanning tree.
+
+ Parameters
+ ----------
+ G : NetworkX Graph
+ The edges of `G` must have distinct weights,
+ otherwise the edges may not form a tree.
- `G` is a NetworkX graph. Also, the edges must have distinct weights,
- otherwise the edges may not form a tree.
+ minimum : bool (default: True)
+ Find the minimum (True) or maximum (False) spanning tree.
- `weight` is the edge attribute that stores the edge weights. Each
- edge in the graph must have such an attribute, otherwise a
- :exc:`KeyError` will be raised.
+ weight : string (default: 'weight')
+ The name of the edge attribute holding the edge weights.
- If `data` is True, this iterator yields edges of the form
- ``(u, v, d)``, where ``u`` and ``v`` are nodes and ``d`` is the edge
- attribute dictionary. Otherwise, it yields edges of the form
- ``(u, v)``.
+    keys : bool (default: False)
+ This argument is ignored since this function is not
+ implemented for multigraphs; it exists only for consistency
+ with the other minimum spanning tree functions.
- The `keys` argument is ignored, since this function is not
- implemented for multigraphs; it exists only for consistency with the
- other minimum spanning tree functions.
+ data : bool (default: True)
+ Flag for whether to yield edge attribute dicts.
+ If True, yield edges `(u, v, d)`, where `d` is the attribute dict.
+ If False, yield edges `(u, v)`.
"""
opt = min if minimum else max
@@ -107,6 +111,29 @@ def best_edge(component):
def kruskal_mst_edges(G, minimum, weight='weight', keys=True, data=True):
+ """Iterate over edges of a Kruskal's algorithm min/max spanning tree.
+
+ Parameters
+ ----------
+ G : NetworkX Graph
+ The graph holding the tree of interest.
+
+ minimum : bool (default: True)
+ Find the minimum (True) or maximum (False) spanning tree.
+
+ weight : string (default: 'weight')
+ The name of the edge attribute holding the edge weights.
+
+ keys : bool (default: True)
+        If `G` is a multigraph, `keys` controls whether edge keys are yielded.
+ Otherwise `keys` is ignored.
+
+ data : bool (default: True)
+ Flag for whether to yield edge attribute dicts.
+ If True, yield edges `(u, v, d)`, where `d` is the attribute dict.
+ If False, yield edges `(u, v)`.
+
+ """
subtrees = UnionFind()
if G.is_multigraph():
edges = G.edges(keys=True, data=True)
@@ -114,21 +141,20 @@ def kruskal_mst_edges(G, minimum, weight='weight', keys=True, data=True):
edges = G.edges(data=True)
getweight = lambda t: t[-1].get(weight, 1)
edges = sorted(edges, key=getweight, reverse=not minimum)
- is_multigraph = G.is_multigraph()
# Multigraphs need to handle edge keys in addition to edge data.
- if is_multigraph:
+ if G.is_multigraph():
for u, v, k, d in edges:
if subtrees[u] != subtrees[v]:
if keys:
if data:
- yield (u, v, k, d)
+ yield u, v, k, d
else:
- yield (u, v, k)
+ yield u, v, k
else:
if data:
- yield (u, v, d)
+ yield u, v, d
else:
- yield (u, v)
+ yield u, v
subtrees.union(u, v)
else:
for u, v, d in edges:
@@ -157,43 +183,49 @@ def prim_mst_edges(G, minimum, weight='weight', keys=True, data=True):
frontier = []
visited = [u]
if is_multigraph:
- for u, v, k, d in G.edges(u, keys=True, data=True):
- push(frontier, (d.get(weight, 1) * sign, next(c), u, v, k))
+ for v, keydict in G.adj[u].items():
+ for k, d in keydict.items():
+ wt = d.get(weight, 1) * sign
+ push(frontier, (wt, next(c), u, v, k, d))
else:
- for u, v, d in G.edges(u, data=True):
- push(frontier, (d.get(weight, 1) * sign, next(c), u, v))
+ for v, d in G.adj[u].items():
+ wt = d.get(weight, 1) * sign
+ push(frontier, (wt, next(c), u, v, d))
while frontier:
if is_multigraph:
- W, _, u, v, k = pop(frontier)
+ W, _, u, v, k, d = pop(frontier)
else:
- W, _, u, v = pop(frontier)
+ W, _, u, v, d = pop(frontier)
if v in visited:
continue
- visited.append(v)
- nodes.remove(v)
- if is_multigraph:
- for _, w, k2, d2 in G.edges(v, keys=True, data=True):
- if w in visited:
- continue
- new_weight = d2.get(weight, 1) * sign
- push(frontier, (new_weight, next(c), v, w, k2))
- else:
- for _, w, d2 in G.edges(v, data=True):
- if w in visited:
- continue
- new_weight = d2.get(weight, 1) * sign
- push(frontier, (new_weight, next(c), v, w))
# Multigraphs need to handle edge keys in addition to edge data.
if is_multigraph and keys:
if data:
- yield u, v, k, G[u][v]
+ yield u, v, k, d
else:
yield u, v, k
else:
if data:
- yield u, v, G[u][v]
+ yield u, v, d
else:
yield u, v
+ # update frontier
+ visited.append(v)
+ nodes.remove(v)
+ if is_multigraph:
+ for w, keydict in G.adj[v].items():
+ if w in visited:
+ continue
+ for k2, d2 in keydict.items():
+ new_weight = d2.get(weight, 1) * sign
+ push(frontier, (new_weight, next(c), v, w, k2, d2))
+ else:
+ for w, d2 in G.adj[v].items():
+ if w in visited:
+ continue
+ new_weight = d2.get(weight, 1) * sign
+ push(frontier, (new_weight, next(c), v, w, d2))
+
ALGORITHMS = {
'boruvka': boruvka_mst_edges,
@@ -204,19 +236,8 @@ def prim_mst_edges(G, minimum, weight='weight', keys=True, data=True):
@not_implemented_for('directed')
-def _spanning_edges(G, minimum, algorithm='kruskal', weight='weight',
- keys=True, data=True):
- try:
- algo = ALGORITHMS[algorithm]
- except KeyError:
- msg = '{} is not a valid choice for an algorithm.'.format(algorithm)
- raise ValueError(msg)
-
- return algo(G, minimum=minimum, weight=weight, keys=keys, data=data)
-
-
-def minimum_spanning_edges(G, algorithm='kruskal', weight='weight', keys=True,
- data=True):
+def minimum_spanning_edges(G, algorithm='kruskal', weight='weight',
+ keys=True, data=True):
"""Generate edges in a minimum spanning forest of an undirected
weighted graph.
@@ -232,15 +253,14 @@ def minimum_spanning_edges(G, algorithm='kruskal', weight='weight', keys=True,
algorithm : string
The algorithm to use when finding a minimum spanning tree. Valid
- choices are 'kruskal', 'prim', or 'boruvka'. The default is
- 'kruskal'.
+ choices are 'kruskal', 'prim', or 'boruvka'. The default is 'kruskal'.
weight : string
Edge data key to use for weight (default 'weight').
keys : bool
- Whether to yield edge key in multigraphs in addition to the
- edge. If `G` is not a multigraph, this is ignored.
+ Whether to yield edge key in multigraphs in addition to the edge.
+ If `G` is not a multigraph, this is ignored.
data : bool, optional
If True yield the edge data along with the edge.
@@ -248,20 +268,16 @@ def minimum_spanning_edges(G, algorithm='kruskal', weight='weight', keys=True,
Returns
-------
edges : iterator
- An iterator over tuples representing edges in a minimum spanning
- tree of `G`.
+        An iterator over edges in a minimum spanning tree of `G`.
+ Edges connecting nodes `u` and `v` are represented as tuples:
+ `(u, v, k, d)` or `(u, v, k)` or `(u, v, d)` or `(u, v)`
- If `G` is a multigraph and both `keys` and `data` are
- True, then the tuples are four-tuples of the form `(u, v, k,
- w)`, where `(u, v)` is an edge, `k` is the edge key
- identifying the particular edge joining `u` with `v`, and
- `w` is the weight of the edge. If `keys` is True but
- `data` is False, the tuples are three-tuples of the form
- `(u, v, k)`.
+ If `G` is a multigraph, `keys` indicates whether the edge key `k` will
+ be reported in the third position in the edge tuple. `data` indicates
+ whether the edge datadict `d` will appear at the end of the edge tuple.
- If `G` is not a multigraph, the tuples are of the form `(u, v,
- w)` if `data` is True or `(u, v)` if `data` is
- False.
+ If `G` is not a multigraph, the tuples are `(u, v, d)` if `data` is True
+ or `(u, v)` if `data` is False.
Examples
--------
@@ -297,11 +313,18 @@ def minimum_spanning_edges(G, algorithm='kruskal', weight='weight', keys=True,
http://www.ics.uci.edu/~eppstein/PADS/
"""
- return _spanning_edges(G, minimum=True, algorithm=algorithm,
- weight=weight, keys=keys, data=data)
+ try:
+ algo = ALGORITHMS[algorithm]
+ except KeyError:
+ msg = '{} is not a valid choice for an algorithm.'.format(algorithm)
+ raise ValueError(msg)
+ return algo(G, minimum=True, weight=weight, keys=keys, data=data)
-def maximum_spanning_edges(G, algorithm='kruskal', weight='weight', data=True):
+
+@not_implemented_for('directed')
+def maximum_spanning_edges(G, algorithm='kruskal', weight='weight',
+ keys=True, data=True):
"""Generate edges in a maximum spanning forest of an undirected
weighted graph.
@@ -317,15 +340,14 @@ def maximum_spanning_edges(G, algorithm='kruskal', weight='weight', data=True):
algorithm : string
The algorithm to use when finding a maximum spanning tree. Valid
- choices are 'kruskal', 'prim', or 'boruvka'. The default is
- 'kruskal'.
+ choices are 'kruskal', 'prim', or 'boruvka'. The default is 'kruskal'.
weight : string
Edge data key to use for weight (default 'weight').
keys : bool
- Whether to yield edge key in multigraphs in addition to the
- edge. If `G` is not a multigraph, this is ignored.
+ Whether to yield edge key in multigraphs in addition to the edge.
+ If `G` is not a multigraph, this is ignored.
data : bool, optional
If True yield the edge data along with the edge.
@@ -333,20 +355,16 @@ def maximum_spanning_edges(G, algorithm='kruskal', weight='weight', data=True):
Returns
-------
edges : iterator
- An iterator over tuples representing edges in a maximum spanning
- tree of `G`.
+ An iterator over edges in a maximum spanning tree of `G`.
+ Edges connecting nodes `u` and `v` are represented as tuples:
+ `(u, v, k, d)` or `(u, v, k)` or `(u, v, d)` or `(u, v)`
- If `G` is a multigraph and both `keys` and `data` are
- True, then the tuples are four-tuples of the form `(u, v, k,
- w)`, where `(u, v)` is an edge, `k` is the edge key
- identifying the particular edge joining `u` with `v`, and
- `w` is the weight of the edge. If `keys` is True but
- `data` is False, the tuples are three-tuples of the form
- `(u, v, k)`.
+ If `G` is a multigraph, `keys` indicates whether the edge key `k` will
+ be reported in the third position in the edge tuple. `data` indicates
+ whether the edge datadict `d` will appear at the end of the edge tuple.
- If `G` is not a multigraph, the tuples are of the form `(u, v,
- w)` if `data` is True or `(u, v)` if `data` is
- False.
+ If `G` is not a multigraph, the tuples are `(u, v, d)` if `data` is True
+ or `(u, v)` if `data` is False.
Examples
--------
@@ -381,29 +399,13 @@ def maximum_spanning_edges(G, algorithm='kruskal', weight='weight', data=True):
Modified code from David Eppstein, April 2006
http://www.ics.uci.edu/~eppstein/PADS/
"""
- return _spanning_edges(G, minimum=False, algorithm=algorithm,
- weight=weight, data=data)
-
-
-@not_implemented_for('directed')
-def _optimum_spanning_tree(G, algorithm, minimum, weight='weight'):
- # When creating the spanning tree, we can ignore the key used to
- # identify multigraph edges, since a tree is guaranteed to have no
- # multiedges. This is why we use `keys=False`.
- edges = _spanning_edges(G, minimum, algorithm=algorithm, weight=weight,
- keys=False, data=True)
- T = nx.Graph(edges)
-
- # Add isolated nodes
- if len(T) != len(G):
- T.add_nodes_from(nx.isolates(G))
-
- # Add node and graph attributes as shallow copy
- for n in T:
- T.node[n].update(G.node[n])
- T.graph = G.graph.copy()
+ try:
+ algo = ALGORITHMS[algorithm]
+ except KeyError:
+ msg = '{} is not a valid choice for an algorithm.'.format(algorithm)
+ raise ValueError(msg)
- return T
+ return algo(G, minimum=False, weight=weight, keys=keys, data=data)
def minimum_spanning_tree(G, weight='weight', algorithm='kruskal'):
@@ -449,8 +451,12 @@ def minimum_spanning_tree(G, weight='weight', algorithm='kruskal'):
See :mod:`networkx.tree.recognition` for more detailed definitions.
"""
- return _optimum_spanning_tree(G, algorithm=algorithm, minimum=True,
- weight=weight)
+ edges = minimum_spanning_edges(G, algorithm, weight, keys=True, data=True)
+ T = G.__class__() # Same graph class as G
+ T.graph.update(G.graph)
+ T.add_nodes_from(G.node.items())
+ T.add_edges_from(edges)
+ return T
def maximum_spanning_tree(G, weight='weight', algorithm='kruskal'):
@@ -498,5 +504,10 @@ def maximum_spanning_tree(G, weight='weight', algorithm='kruskal'):
See :mod:`networkx.tree.recognition` for more detailed definitions.
"""
- return _optimum_spanning_tree(G, algorithm=algorithm, minimum=False,
- weight=weight)
+ edges = maximum_spanning_edges(G, algorithm, weight, keys=True, data=True)
+ edges = list(edges)
+ T = G.__class__() # Same graph class as G
+ T.graph.update(G.graph)
+ T.add_nodes_from(G.node.items())
+ T.add_edges_from(edges)
+ return T
| Include default keyword argument to minimum_spanning_edges function
As pointed out in https://github.com/networkx/networkx/pull/1741#commitcomment-12726208, it would be nice to have a `default` keyword argument to `minimum_spanning_edges`, identical to the keyword argument with the same name in the `Graph.edges()` method. This makes function/method parameters more consistent.
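For comparison, `Graph.edges()` already supports this pattern; the commented line sketches the requested (hypothetical, not yet existing) equivalent:
```python
import networkx as nx

G = nx.Graph()
G.add_edge(0, 1, weight=2)
G.add_edge(1, 2)  # no 'weight' attribute

list(G.edges(data='weight', default=1))   # [(0, 1, 2), (1, 2, 1)]

# Hypothetical equivalent for spanning trees (signature does not exist yet):
# nx.minimum_spanning_edges(G, data='weight', default=1)
```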
| 2017-07-28T21:58:12 |
||
networkx/networkx | 2,566 | networkx__networkx-2566 | [
"2164"
] | ea4db3e5dc6b962c72f61060d18a2af3c9977e34 | diff --git a/networkx/algorithms/tree/mst.py b/networkx/algorithms/tree/mst.py
--- a/networkx/algorithms/tree/mst.py
+++ b/networkx/algorithms/tree/mst.py
@@ -11,7 +11,9 @@
"""
from heapq import heappop, heappush
+from operator import itemgetter
from itertools import count
+from math import isnan
import networkx as nx
from networkx.utils import UnionFind, not_implemented_for
@@ -23,7 +25,8 @@
@not_implemented_for('multigraph')
-def boruvka_mst_edges(G, minimum=True, weight='weight', keys=False, data=True):
+def boruvka_mst_edges(G, minimum=True, weight='weight',
+ keys=False, data=True, ignore_nan=False):
"""Iterate over edges of a Borůvka's algorithm min/max spanning tree.
Parameters
@@ -48,8 +51,11 @@ def boruvka_mst_edges(G, minimum=True, weight='weight', keys=False, data=True):
If True, yield edges `(u, v, d)`, where `d` is the attribute dict.
If False, yield edges `(u, v)`.
+ ignore_nan : bool (default: False)
+ If a NaN is found as an edge weight normally an exception is raised.
+ If `ignore_nan is True` then that edge is ignored instead.
+
"""
- opt = min if minimum else max
# Initialize a forest, assuming initially that it is the discrete
# partition of the nodes of the graph.
forest = UnionFind(G)
@@ -61,16 +67,20 @@ def best_edge(component):
A return value of ``None`` indicates an empty boundary.
"""
- # TODO In Python 3.4 and later, we can just do
- #
- # boundary = nx.edge_boundary(G, component, data=weight)
- # return opt(boundary, key=lambda e: e[-1][weight], default=None)
- #
- # which is better because it doesn't require creating a list.
- boundary = list(nx.edge_boundary(G, component, data=True))
- if not boundary:
- return None
- return opt(boundary, key=lambda e: e[-1][weight])
+ sign = 1 if minimum else -1
+ minwt = float('inf')
+ boundary = None
+ for e in nx.edge_boundary(G, component, data=True):
+ wt = e[-1].get(weight, 1) * sign
+ if isnan(wt):
+ if ignore_nan:
+ continue
+ msg = "NaN found as an edge weight. Edge %s"
+ raise ValueError(msg % (e,))
+ if wt < minwt:
+ minwt = wt
+ boundary = e
+ return boundary
# Determine the optimum edge in the edge boundary of each component
# in the forest.
@@ -110,7 +120,8 @@ def best_edge(component):
forest.union(u, v)
-def kruskal_mst_edges(G, minimum, weight='weight', keys=True, data=True):
+def kruskal_mst_edges(G, minimum, weight='weight',
+ keys=True, data=True, ignore_nan=False):
"""Iterate over edges of a Kruskal's algorithm min/max spanning tree.
Parameters
@@ -133,17 +144,42 @@ def kruskal_mst_edges(G, minimum, weight='weight', keys=True, data=True):
If True, yield edges `(u, v, d)`, where `d` is the attribute dict.
If False, yield edges `(u, v)`.
+ ignore_nan : bool (default: False)
+ If a NaN is found as an edge weight normally an exception is raised.
+ If `ignore_nan is True` then that edge is ignored instead.
+
"""
subtrees = UnionFind()
if G.is_multigraph():
edges = G.edges(keys=True, data=True)
+
+ def filter_nan_edges(edges=edges, weight=weight):
+ sign = 1 if minimum else -1
+ for u, v, k, d in edges:
+ wt = d.get(weight, 1) * sign
+ if isnan(wt):
+ if ignore_nan:
+ continue
+ msg = "NaN found as an edge weight. Edge %s"
+                    raise ValueError(msg % ((u, v, k, d),))
+ yield wt, u, v, k, d
else:
edges = G.edges(data=True)
- getweight = lambda t: t[-1].get(weight, 1)
- edges = sorted(edges, key=getweight, reverse=not minimum)
+
+ def filter_nan_edges(edges=edges, weight=weight):
+ sign = 1 if minimum else -1
+ for u, v, d in edges:
+ wt = d.get(weight, 1) * sign
+ if isnan(wt):
+ if ignore_nan:
+ continue
+ msg = "NaN found as an edge weight. Edge %s"
+ raise ValueError(msg % ((u, v, d),))
+ yield wt, u, v, d
+ edges = sorted(filter_nan_edges(), key=itemgetter(0))
# Multigraphs need to handle edge keys in addition to edge data.
if G.is_multigraph():
- for u, v, k, d in edges:
+ for wt, u, v, k, d in edges:
if subtrees[u] != subtrees[v]:
if keys:
if data:
@@ -157,7 +193,7 @@ def kruskal_mst_edges(G, minimum, weight='weight', keys=True, data=True):
yield u, v
subtrees.union(u, v)
else:
- for u, v, d in edges:
+ for wt, u, v, d in edges:
if subtrees[u] != subtrees[v]:
if data:
yield (u, v, d)
@@ -166,7 +202,35 @@ def kruskal_mst_edges(G, minimum, weight='weight', keys=True, data=True):
subtrees.union(u, v)
-def prim_mst_edges(G, minimum, weight='weight', keys=True, data=True):
+def prim_mst_edges(G, minimum, weight='weight',
+ keys=True, data=True, ignore_nan=False):
+ """Iterate over edges of Prim's algorithm min/max spanning tree.
+
+ Parameters
+ ----------
+ G : NetworkX Graph
+ The graph holding the tree of interest.
+
+ minimum : bool (default: True)
+ Find the minimum (True) or maximum (False) spanning tree.
+
+ weight : string (default: 'weight')
+ The name of the edge attribute holding the edge weights.
+
+ keys : bool (default: True)
+        If `G` is a multigraph, `keys` controls whether edge keys are yielded.
+ Otherwise `keys` is ignored.
+
+ data : bool (default: True)
+ Flag for whether to yield edge attribute dicts.
+ If True, yield edges `(u, v, d)`, where `d` is the attribute dict.
+ If False, yield edges `(u, v)`.
+
+ ignore_nan : bool (default: False)
+ If a NaN is found as an edge weight normally an exception is raised.
+ If `ignore_nan is True` then that edge is ignored instead.
+
+ """
is_multigraph = G.is_multigraph()
push = heappush
pop = heappop
@@ -174,9 +238,7 @@ def prim_mst_edges(G, minimum, weight='weight', keys=True, data=True):
nodes = list(G)
c = count()
- sign = 1
- if not minimum:
- sign = -1
+ sign = 1 if minimum else -1
while nodes:
u = nodes.pop(0)
@@ -186,10 +248,20 @@ def prim_mst_edges(G, minimum, weight='weight', keys=True, data=True):
for v, keydict in G.adj[u].items():
for k, d in keydict.items():
wt = d.get(weight, 1) * sign
+ if isnan(wt):
+ if ignore_nan:
+ continue
+ msg = "NaN found as an edge weight. Edge %s"
+ raise ValueError(msg % ((u, v, k, d),))
push(frontier, (wt, next(c), u, v, k, d))
else:
for v, d in G.adj[u].items():
wt = d.get(weight, 1) * sign
+ if isnan(wt):
+ if ignore_nan:
+ continue
+ msg = "NaN found as an edge weight. Edge %s"
+ raise ValueError(msg % ((u, v, d),))
push(frontier, (wt, next(c), u, v, d))
while frontier:
if is_multigraph:
@@ -237,7 +309,7 @@ def prim_mst_edges(G, minimum, weight='weight', keys=True, data=True):
@not_implemented_for('directed')
def minimum_spanning_edges(G, algorithm='kruskal', weight='weight',
- keys=True, data=True):
+ keys=True, data=True, ignore_nan=False):
"""Generate edges in a minimum spanning forest of an undirected
weighted graph.
@@ -265,6 +337,10 @@ def minimum_spanning_edges(G, algorithm='kruskal', weight='weight',
data : bool, optional
If True yield the edge data along with the edge.
+ ignore_nan : bool (default: False)
+ If a NaN is found as an edge weight normally an exception is raised.
+ If `ignore_nan is True` then that edge is ignored instead.
+
Returns
-------
edges : iterator
@@ -319,12 +395,13 @@ def minimum_spanning_edges(G, algorithm='kruskal', weight='weight',
msg = '{} is not a valid choice for an algorithm.'.format(algorithm)
raise ValueError(msg)
- return algo(G, minimum=True, weight=weight, keys=keys, data=data)
+ return algo(G, minimum=True, weight=weight, keys=keys, data=data,
+ ignore_nan=ignore_nan)
@not_implemented_for('directed')
def maximum_spanning_edges(G, algorithm='kruskal', weight='weight',
- keys=True, data=True):
+ keys=True, data=True, ignore_nan=False):
"""Generate edges in a maximum spanning forest of an undirected
weighted graph.
@@ -352,6 +429,10 @@ def maximum_spanning_edges(G, algorithm='kruskal', weight='weight',
data : bool, optional
If True yield the edge data along with the edge.
+ ignore_nan : bool (default: False)
+ If a NaN is found as an edge weight normally an exception is raised.
+ If `ignore_nan is True` then that edge is ignored instead.
+
Returns
-------
edges : iterator
@@ -382,7 +463,7 @@ def maximum_spanning_edges(G, algorithm='kruskal', weight='weight',
Find maximum spanning edges by Prim's algorithm
>>> G = nx.cycle_graph(4)
- >>> G.add_edge(0,3,weight=2) # assign weight 2 to edge 0-3
+ >>> G.add_edge(0, 3, weight=2) # assign weight 2 to edge 0-3
>>> mst = tree.maximum_spanning_edges(G, algorithm='prim', data=False)
>>> edgelist = list(mst)
>>> sorted(edgelist)
@@ -405,10 +486,12 @@ def maximum_spanning_edges(G, algorithm='kruskal', weight='weight',
msg = '{} is not a valid choice for an algorithm.'.format(algorithm)
raise ValueError(msg)
- return algo(G, minimum=False, weight=weight, keys=keys, data=data)
+ return algo(G, minimum=False, weight=weight, keys=keys, data=data,
+ ignore_nan=ignore_nan)
-def minimum_spanning_tree(G, weight='weight', algorithm='kruskal'):
+def minimum_spanning_tree(G, weight='weight', algorithm='kruskal',
+ ignore_nan=False):
"""Returns a minimum spanning tree or forest on an undirected graph `G`.
Parameters
@@ -425,6 +508,10 @@ def minimum_spanning_tree(G, weight='weight', algorithm='kruskal'):
choices are 'kruskal', 'prim', or 'boruvka'. The default is
'kruskal'.
+ ignore_nan : bool (default: False)
+ If a NaN is found as an edge weight normally an exception is raised.
+ If `ignore_nan is True` then that edge is ignored instead.
+
Returns
-------
G : NetworkX Graph
@@ -451,7 +538,8 @@ def minimum_spanning_tree(G, weight='weight', algorithm='kruskal'):
See :mod:`networkx.tree.recognition` for more detailed definitions.
"""
- edges = minimum_spanning_edges(G, algorithm, weight, keys=True, data=True)
+ edges = minimum_spanning_edges(G, algorithm, weight, keys=True,
+ data=True, ignore_nan=ignore_nan)
T = G.__class__() # Same graph class as G
T.graph.update(G.graph)
T.add_nodes_from(G.node.items())
@@ -459,7 +547,8 @@ def minimum_spanning_tree(G, weight='weight', algorithm='kruskal'):
return T
-def maximum_spanning_tree(G, weight='weight', algorithm='kruskal'):
+def maximum_spanning_tree(G, weight='weight', algorithm='kruskal',
+ ignore_nan=False):
"""Returns a maximum spanning tree or forest on an undirected graph `G`.
Parameters
@@ -476,6 +565,10 @@ def maximum_spanning_tree(G, weight='weight', algorithm='kruskal'):
choices are 'kruskal', 'prim', or 'boruvka'. The default is
'kruskal'.
+ ignore_nan : bool (default: False)
+ If a NaN is found as an edge weight normally an exception is raised.
+ If `ignore_nan is True` then that edge is ignored instead.
+
Returns
-------
@@ -504,7 +597,8 @@ def maximum_spanning_tree(G, weight='weight', algorithm='kruskal'):
See :mod:`networkx.tree.recognition` for more detailed definitions.
"""
- edges = maximum_spanning_edges(G, algorithm, weight, keys=True, data=True)
+ edges = maximum_spanning_edges(G, algorithm, weight, keys=True,
+ data=True, ignore_nan=ignore_nan)
edges = list(edges)
T = G.__class__() # Same graph class as G
T.graph.update(G.graph)
| diff --git a/networkx/algorithms/tree/tests/test_mst.py b/networkx/algorithms/tree/tests/test_mst.py
--- a/networkx/algorithms/tree/tests/test_mst.py
+++ b/networkx/algorithms/tree/tests/test_mst.py
@@ -11,12 +11,13 @@
from unittest import TestCase
from nose.tools import assert_equal
-from nose.tools import raises
+from nose.tools import raises, assert_raises
import networkx as nx
-from networkx.testing import (assert_graphs_equal,assert_nodes_equal,
+from networkx.testing import (assert_graphs_equal, assert_nodes_equal,
assert_edges_equal)
+
@raises(ValueError)
def test_unknown_algorithm():
nx.minimum_spanning_tree(nx.Graph(), algorithm='random')
@@ -84,6 +85,23 @@ def test_without_data(self):
expected = [(u, v) for u, v, d in self.minimum_spanning_edgelist]
assert_edges_equal(actual, expected)
+ def test_nan_weights(self):
+ # Edge weights NaN never appear in the spanning tree. see #2164
+ G = self.G
+ G.add_edge(0, 12, weight=float('nan'))
+ edges = nx.minimum_spanning_edges(G, algorithm=self.algo,
+ data=False, ignore_nan=True)
+ actual = sorted((min(u, v), max(u, v)) for u, v in edges)
+ expected = [(u, v) for u, v, d in self.minimum_spanning_edgelist]
+ assert_edges_equal(actual, expected)
+ # Now test for raising exception
+ edges = nx.minimum_spanning_edges(G, algorithm=self.algo,
+ data=False, ignore_nan=False)
+ assert_raises(ValueError, list, edges)
+ # test default for ignore_nan as False
+ edges = nx.minimum_spanning_edges(G, algorithm=self.algo, data=False)
+ assert_raises(ValueError, list, edges)
+
def test_minimum_tree(self):
T = nx.minimum_spanning_tree(self.G, algorithm=self.algo)
actual = sorted(T.edges(data=True))
| nan values in minimum_spanning_tree
When calculating the minimum_spanning_tree, the algorithm does not correct for (or at least warn about) edges with nan weight. This can give very strange results, such as a few nodes with very high degree, or introduce more subtle bugs.
Simple MWE:
``` python
import networkx, math, random
import matplotlib.pyplot as plt
G = networkx.complete_graph(100)
for n1, n2 in G.edges_iter():
G.edge[n1][n2]['weight'] = random.random()
# Pre-nan
f, [ax1, ax2] = plt.subplots(2, 1)
ax1.imshow(networkx.to_numpy_matrix(networkx.minimum_spanning_tree(G))!=0)
# Post-nan
G.add_edge(1, 50, {'weight' : float('nan')})
ax2.imshow(networkx.to_numpy_matrix(networkx.minimum_spanning_tree(G))!=0)
plt.show()
```
| I updated your MWE to show the plots side-by-side. The presence of NaNs definitely affects Kruskal's algorithm (which is what is used by `minimum_spanning_tree`) since it tries to sort all the weights. Of course, NaNs are not sortable and so the resultant "sorted" list is not sorted in general.
We could remove the nans from consideration, but that also might cause bugs in people's code who didn't expect nans in their graph in the first place. If we opt to filter them out, I think we should consider adding an option to raise an exception on the detection of a nan weight.
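A tiny NetworkX-free sketch of why the sort goes wrong:
```python
nan = float('nan')
# Every ordering comparison against NaN is False, so sort decisions go wrong.
print(nan < 1.0, nan > 1.0, nan == nan)   # False False False

weights = [3.0, nan, 1.0, 2.0]
print(sorted(weights))   # not totally ordered once a NaN is present
```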
We could special case the sort function (which will slow things down) or filter or raise an exception as @chebee7i suggests. In general I don't like a lot of input checking in the algorithms. What about filtering them out since those edges can't participate in any spanning tree?
Filtering them out seems like it would be a good solution if we don't want to check for valid input. I agree that input shouldn't be checked too stringently, since it could make the code messy pretty quickly and it goes against the spirit of duck typing. But I also see the other side, that many users of NetworkX do scientific work and passing the nan (which isn't in the domain and is thus by definition a bug in their code) through silently will be a very difficult bug to catch. However I think either of the proposed fixes would be a good solution.
| 2017-07-29T13:48:48 |
networkx/networkx | 2,571 | networkx__networkx-2571 | [
"2448"
] | bb18c7d11ad9bbd60a77cf7beaa52eaab21ea897 | diff --git a/networkx/drawing/layout.py b/networkx/drawing/layout.py
--- a/networkx/drawing/layout.py
+++ b/networkx/drawing/layout.py
@@ -298,6 +298,8 @@ def fruchterman_reingold_layout(G, k=None,
if pos is not None:
# Determine size of existing domain to adjust initial positions
dom_size = max(coord for pos_tup in pos.values() for coord in pos_tup)
+ if dom_size == 0:
+ dom_size = 1
shape = (len(G), dim)
pos_arr = np.random.random(shape) * dom_size + center
for i, n in enumerate(G):
| diff --git a/networkx/drawing/tests/test_layout.py b/networkx/drawing/tests/test_layout.py
--- a/networkx/drawing/tests/test_layout.py
+++ b/networkx/drawing/tests/test_layout.py
@@ -27,6 +27,18 @@ def setUp(self):
nx.add_path(self.Gs, 'abcdef')
self.bigG = nx.grid_2d_graph(25, 25) #bigger than 500 nodes for sparse
+ def test_spring_init_pos(self):
+ # Tests GH #2448
+ import math
+ G = nx.Graph()
+ G.add_edges_from([(0, 1), (1, 2), (2, 0), (2, 3)])
+
+ init_pos = {0: (0.0, 0.0)}
+ fixed_pos = [0]
+ pos = nx.fruchterman_reingold_layout(G, pos=init_pos, fixed=fixed_pos)
+ has_nan = any(math.isnan(c) for coords in pos.values() for c in coords)
+ assert_false(has_nan, 'values should not be nan')
+
def test_smoke_int(self):
G = self.Gi
vpos = nx.random_layout(G)
| fixing a subset of nodes in spring_layout
The following layout works reasonably well,
```
import networkx as nx
G = nx.Graph()
G.add_edges_from([(0,1),(1,2),(2,0),(2,3)])
pos = nx.spring_layout(G)
print(pos)
```
However, the results from run to run are not easily reproduced because the initialization is random.
To mitigate this we can fix node 0 to (0,0), as follows,
```
init_pos = {0:(0.0, 0.0)}
fixed_pos = [0]
pos = nx.spring_layout(G, pos=init_pos, fixed=fixed_pos)
print(pos)
```
This results in the following error,
```
networkx/drawing/layout.py:342: RuntimeWarning: divide by zero encountered
```
A work around seems to be to provide initial points to all positions as follows,
```
import random
init_pos = {n:(random.uniform(-1.0, 1.0), random.uniform(-1.0, 1.0)) for n in G}
init_pos[0] = (0.0, 0.0)
fixed_pos = [0]
pos = nx.spring_layout(G, pos=init_pos, fixed=fixed_pos)
print(pos)
```
| Interesting... Thanks for the report.
Setting the initial position to (1, 1) works. The strange behavior occurs when setting one node to the origin. That occurs because we find the domain_size by taking the maximum value of all initial positions. Unfortunately, if that maximum value is `0` then the domain size is set to zero and all the points are initially put at the same point. That leads to a division by zero exception.
I think a straightforward fix would be to check if there is only one initial point provided. If so, then set the domain_size to be 1.
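A minimal sketch of such a guard, mirroring the `dom_size` handling in the patch above:
```python
# Inside fruchterman_reingold_layout, after measuring the supplied positions:
dom_size = max(coord for pos_tup in pos.values() for coord in pos_tup)
if dom_size == 0:   # e.g. the only fixed position is the origin
    dom_size = 1    # fall back to a unit domain so random initial points spread out
```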
Would someone like to provide a pull request for this? Thanks!
| 2017-07-31T17:59:19 |
networkx/networkx | 2,583 | networkx__networkx-2583 | [
"2307"
] | e679e196c5439c2bb3c0e1a1d20da1928e91b7dd | diff --git a/networkx/algorithms/centrality/katz.py b/networkx/algorithms/centrality/katz.py
--- a/networkx/algorithms/centrality/katz.py
+++ b/networkx/algorithms/centrality/katz.py
@@ -32,13 +32,13 @@ def katz_centrality(G, alpha=0.1, beta=1.0, max_iter=1000, tol=1.0e-6,
x_i = \alpha \sum_{j} A_{ij} x_j + \beta,
- where `A` is the adjacency matrix of graph G with eigenvalues `\lambda`.
+ where `A` is the adjacency matrix of graph G with eigenvalues $\lambda$.
- The parameter `\beta` controls the initial centrality and
+ The parameter $\beta$ controls the initial centrality and
.. math::
- \alpha < \frac{1}{\lambda_{max}}.
+ \alpha < \frac{1}{\lambda_{\max}}.
Katz centrality computes the relative influence of a node within a
network by measuring the number of the immediate neighbors (first
@@ -46,11 +46,11 @@ def katz_centrality(G, alpha=0.1, beta=1.0, max_iter=1000, tol=1.0e-6,
to the node under consideration through these immediate neighbors.
Extra weight can be provided to immediate neighbors through the
- parameter :math:`\beta`. Connections made with distant neighbors
- are, however, penalized by an attenuation factor `\alpha` which
+ parameter $\beta$. Connections made with distant neighbors
+ are, however, penalized by an attenuation factor $\alpha$ which
should be strictly less than the inverse largest eigenvalue of the
adjacency matrix in order for the Katz centrality to be computed
- correctly. More information is provided in [1]_ .
+ correctly. More information is provided in [1]_.
Parameters
----------
@@ -100,10 +100,10 @@ def katz_centrality(G, alpha=0.1, beta=1.0, max_iter=1000, tol=1.0e-6,
--------
>>> import math
>>> G = nx.path_graph(4)
- >>> phi = (1+math.sqrt(5))/2.0 # largest eigenvalue of adj matrix
- >>> centrality = nx.katz_centrality(G,1/phi-0.01)
+ >>> phi = (1 + math.sqrt(5)) / 2.0 # largest eigenvalue of adj matrix
+ >>> centrality = nx.katz_centrality(G, 1/phi - 0.01)
>>> for n, c in sorted(centrality.items()):
- ... print("%d %0.2f"%(n, c))
+ ... print("%d %0.2f" % (n, c))
0 0.37
1 0.60
2 0.60
@@ -122,18 +122,20 @@ def katz_centrality(G, alpha=0.1, beta=1.0, max_iter=1000, tol=1.0e-6,
Katz centrality was introduced by [2]_.
    This algorithm uses the power method to find the eigenvector
- corresponding to the largest eigenvalue of the adjacency matrix of G.
- The constant alpha should be strictly less than the inverse of largest
+ corresponding to the largest eigenvalue of the adjacency matrix of ``G``.
+    The parameter ``alpha`` should be strictly less than the inverse of the largest
eigenvalue of the adjacency matrix for the algorithm to converge.
- The iteration will stop after max_iter iterations or an error tolerance of
- number_of_nodes(G)*tol has been reached.
+    You can use ``max(nx.adjacency_spectrum(G))`` to get $\lambda_{\max}$, the largest
+ eigenvalue of the adjacency matrix.
+ The iteration will stop after ``max_iter`` iterations or an error tolerance of
+ ``number_of_nodes(G) * tol`` has been reached.
- When `\alpha = 1/\lambda_{max}` and `\beta=0`, Katz centrality is the same
+ When $\alpha = 1/\lambda_{\max}$ and $\beta=0$, Katz centrality is the same
as eigenvector centrality.
For directed graphs this finds "left" eigenvectors which corresponds
to the in-edges in the graph. For out-edges Katz centrality
- first reverse the graph with G.reverse().
+ first reverse the graph with ``G.reverse()``.
References
----------
@@ -206,14 +208,13 @@ def katz_centrality_numpy(G, alpha=0.1, beta=1.0, normalized=True,
x_i = \alpha \sum_{j} A_{ij} x_j + \beta,
- where `A` is the adjacency matrix of graph G with eigenvalues `\lambda`.
+ where `A` is the adjacency matrix of graph G with eigenvalues $\lambda$.
- The parameter `\beta` controls the initial centrality and
+ The parameter $\beta$ controls the initial centrality and
.. math::
- \alpha < \frac{1}{\lambda_{max}}.
-
+ \alpha < \frac{1}{\lambda_{\max}}.
Katz centrality computes the relative influence of a node within a
network by measuring the number of the immediate neighbors (first
@@ -221,11 +222,11 @@ def katz_centrality_numpy(G, alpha=0.1, beta=1.0, normalized=True,
to the node under consideration through these immediate neighbors.
Extra weight can be provided to immediate neighbors through the
- parameter :math:`\beta`. Connections made with distant neighbors
- are, however, penalized by an attenuation factor `\alpha` which
+ parameter $\beta$. Connections made with distant neighbors
+ are, however, penalized by an attenuation factor $\alpha$ which
should be strictly less than the inverse largest eigenvalue of the
adjacency matrix in order for the Katz centrality to be computed
- correctly. More information is provided in [1]_ .
+ correctly. More information is provided in [1]_.
Parameters
----------
@@ -261,10 +262,10 @@ def katz_centrality_numpy(G, alpha=0.1, beta=1.0, normalized=True,
--------
>>> import math
>>> G = nx.path_graph(4)
- >>> phi = (1 + math.sqrt(5))/2.0 # largest eigenvalue of adj matrix
+ >>> phi = (1 + math.sqrt(5)) / 2.0 # largest eigenvalue of adj matrix
>>> centrality = nx.katz_centrality_numpy(G, 1/phi)
>>> for n, c in sorted(centrality.items()):
- ... print("%d %0.2f"%(n, c))
+ ... print("%d %0.2f" % (n, c))
0 0.37
1 0.60
2 0.60
@@ -283,14 +284,17 @@ def katz_centrality_numpy(G, alpha=0.1, beta=1.0, normalized=True,
Katz centrality was introduced by [2]_.
This algorithm uses a direct linear solver to solve the above equation.
- The constant alpha should be strictly less than the inverse of largest
- eigenvalue of the adjacency matrix for there to be a solution. When
- `\alpha = 1/\lambda_{max}` and `\beta=0`, Katz centrality is the same as
- eigenvector centrality.
+    The parameter ``alpha`` should be strictly less than the inverse of the largest
+ eigenvalue of the adjacency matrix for there to be a solution.
+    You can use ``max(nx.adjacency_spectrum(G))`` to get $\lambda_{\max}$, the largest
+ eigenvalue of the adjacency matrix.
+
+ When $\alpha = 1/\lambda_{\max}$ and $\beta=0$, Katz centrality is the same
+ as eigenvector centrality.
For directed graphs this finds "left" eigenvectors which corresponds
to the in-edges in the graph. For out-edges Katz centrality
- first reverse the graph with G.reverse().
+ first reverse the graph with ``G.reverse()``.
References
----------
| Mixed negative and positive values for Katz centrality
Recently, while using networkx, I computed Katz centrality on a network using the default values and no edge weighting. The results showed a mix of negative and positive values for this metric which I believe should always be positive (or in some circumstances always negative). Please see this link: https://lists.nongnu.org/archive/html/igraph-help/2009-10/msg00059.html
I believe the cause of this may be that for this network the default value for alpha was not less than the inverse of the largest eigenvalue for the Adjacency matrix (please see here: https://networkx.github.io/documentation/networkx-1.10/reference/generated/networkx.algorithms.centrality.katz_centrality.html). Does this sound correct?
If that is correct, is it possible to add a check to see whether this condition is satisfied rather than to return erroneous numbers?
| It's certainly possible to check `max(nx.adjacency_spectrum(G))` to estimate appropriate values for alpha. I think we must not be doing that since it adds extra computation - maybe roughly doubles the computation time. A middle ground would be to put some information in the docs on how to compute the largest eigenvalue.
Thank you @hagberg. I will implement that check in my own code.
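For anyone landing here, a minimal sketch of that check (assuming an undirected graph, so the adjacency eigenvalues are real):
```python
import numpy as np
import networkx as nx

G = nx.karate_club_graph()
# Largest adjacency eigenvalue; alpha must stay strictly below its inverse.
lambda_max = max(np.real(nx.adjacency_spectrum(G)))
alpha = 0.9 / lambda_max
centrality = nx.katz_centrality(G, alpha=alpha)  # converges; values all positive
```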
Needs a PR to add documentation on how to compute the required eigenvalue. | 2017-08-04T20:49:15 |
|
networkx/networkx | 2,595 | networkx__networkx-2595 | [
"2126"
] | 20507b7a2b5bdfdd8b9235862dacc03172b28ac4 | diff --git a/networkx/algorithms/centrality/closeness.py b/networkx/algorithms/centrality/closeness.py
--- a/networkx/algorithms/centrality/closeness.py
+++ b/networkx/algorithms/centrality/closeness.py
@@ -1,50 +1,68 @@
-"""
-Closeness centrality measures.
-"""
# Copyright (C) 2004-2017 by
# Aric Hagberg <[email protected]>
# Dan Schult <[email protected]>
# Pieter Swart <[email protected]>
# All rights reserved.
# BSD license.
+#
+# Authors: Aric Hagberg <[email protected]>
+# Pieter Swart <[email protected]>
+# Sasha Gutfraind <[email protected]>
+# Dan Schult <[email protected]>
+"""
+Closeness centrality measures.
+"""
import functools
import networkx as nx
-__author__ = "\n".join(['Aric Hagberg <[email protected]>',
- 'Pieter Swart ([email protected])',
- 'Sasha Gutfraind ([email protected])'])
+
__all__ = ['closeness_centrality']
-def closeness_centrality(G, u=None, distance=None, normalized=True, reverse=False):
+def closeness_centrality(G, u=None, distance=None,
+ wf_improved=True, reverse=False):
r"""Compute closeness centrality for nodes.
Closeness centrality [1]_ of a node `u` is the reciprocal of the
- sum of the shortest path distances to `u` from all `n-1` other nodes.
- Since the sum of distances depends on the number of nodes in the
- graph, closeness is normalized by the sum of minimum possible
- distances `n-1`.
+ average shortest path distance to `u` over all `n-1` reachable nodes.
.. math::
C(u) = \frac{n - 1}{\sum_{v=1}^{n-1} d(v, u)},
where `d(v, u)` is the shortest-path distance between `v` and `u`,
- and `n` is the number of nodes in the graph.
+ and `n` is the number of nodes that can reach `u`.
Notice that higher values of closeness indicate higher centrality.
+ Wasserman and Faust propose an improved formula for graphs with
+ more than one connected component. The result is "a ratio of the
+ fraction of actors in the group who are reachable, to the average
+ distance" from the reachable actors [2]_. You might think this
+ scale factor is inverted but it is not. As is, nodes from small
+ components receive a smaller closeness value. Letting `N` denote
+ the number of nodes in the graph,
+
+ .. math::
+
+ C_{WF}(u) = \frac{n-1}{N-1} \frac{n - 1}{\sum_{v=1}^{n-1} d(v, u)},
+
Parameters
----------
G : graph
A NetworkX graph
+
u : node, optional
Return only the value for node u
+
distance : edge attribute key, optional (default=None)
- Use the specified edge attribute as the edge distance in shortest
+ Use the specified edge attribute as the edge distance in shortest
path calculations
- normalized : bool, optional
- If True (default) normalize by the number of nodes in the connected
- part of the graph.
+
+ wf_improved : bool, optional (default=True)
+ If True, scale by the fraction of nodes reachable. This gives the
+ Wasserman and Faust improved formula. For single component graphs
+ it is the same as the original formula.
+
reverse : bool, optional (default=False)
If True and G is a digraph, reverse the edges of G, using successors
instead of predecessors.
@@ -65,7 +83,7 @@ def closeness_centrality(G, u=None, distance=None, normalized=True, reverse=Fals
`n` is the number of nodes in the connected part of graph
containing the node. If the graph is not completely connected,
this algorithm computes the closeness centrality for each
- connected part separately.
+    connected part separately scaled by that part's size.
If the 'distance' keyword is set to an edge attribute key then the
shortest-path length will be computed using Dijkstra's algorithm with
@@ -76,12 +94,15 @@ def closeness_centrality(G, u=None, distance=None, normalized=True, reverse=Fals
.. [1] Linton C. Freeman: Centrality in networks: I.
Conceptual clarification. Social Networks 1:215-239, 1979.
http://leonidzhukov.ru/hse/2013/socialnetworks/papers/freeman79-centrality.pdf
+ .. [2] pg. 201 of Wasserman, S. and Faust, K.,
+ Social Network Analysis: Methods and Applications, 1994,
+ Cambridge University Press.
"""
if distance is not None:
- # use Dijkstra's algorithm with specified attribute as edge weight
+ # use Dijkstra's algorithm with specified attribute as edge weight
path_length = functools.partial(nx.single_source_dijkstra_path_length,
weight=distance)
- else: # handle either directed or undirected
+ else: # handle either directed or undirected
if G.is_directed() and not reverse:
path_length = nx.single_target_shortest_path_length
else:
@@ -96,10 +117,10 @@ def closeness_centrality(G, u=None, distance=None, normalized=True, reverse=Fals
sp = dict(path_length(G, n))
totsp = sum(sp.values())
if totsp > 0.0 and len(G) > 1:
- closeness_centrality[n] = (len(sp)-1.0) / totsp
+ closeness_centrality[n] = (len(sp) - 1.0) / totsp
# normalize to number of nodes-1 in connected part
- if normalized:
- s = (len(sp)-1.0) / ( len(G) - 1 )
+ if wf_improved:
+ s = (len(sp) - 1.0) / (len(G) - 1)
closeness_centrality[n] *= s
else:
closeness_centrality[n] = 0.0
| diff --git a/networkx/algorithms/centrality/tests/test_closeness_centrality.py b/networkx/algorithms/centrality/tests/test_closeness_centrality.py
--- a/networkx/algorithms/centrality/tests/test_closeness_centrality.py
+++ b/networkx/algorithms/centrality/tests/test_closeness_centrality.py
@@ -4,90 +4,109 @@
from nose.tools import *
import networkx as nx
+
class TestClosenessCentrality:
def setUp(self):
-
self.K = nx.krackhardt_kite_graph()
self.P3 = nx.path_graph(3)
self.P4 = nx.path_graph(4)
self.K5 = nx.complete_graph(5)
- self.C4=nx.cycle_graph(4)
- self.T=nx.balanced_tree(r=2, h=2)
+ self.C4 = nx.cycle_graph(4)
+ self.T = nx.balanced_tree(r=2, h=2)
self.Gb = nx.Graph()
- self.Gb.add_edges_from([(0,1), (0,2), (1,3), (2,3),
- (2,4), (4,5), (3,5)])
-
+ self.Gb.add_edges_from([(0, 1), (0, 2), (1, 3), (2, 3),
+ (2, 4), (4, 5), (3, 5)])
F = nx.florentine_families_graph()
self.F = F
+ def test_wf_improved(self):
+ G = nx.union(self.P4, nx.path_graph([4, 5, 6]))
+ c = nx.closeness_centrality(G)
+ cwf = nx.closeness_centrality(G, wf_improved=False)
+ res = {0: 0.25, 1: 0.375, 2: 0.375, 3: 0.25,
+ 4: 0.222, 5: 0.333, 6: 0.222}
+ wf_res = {0: 0.5, 1: 0.75, 2: 0.75, 3: 0.5,
+ 4: 0.667, 5: 1.0, 6: 0.667}
+ for n in G:
+ assert_almost_equal(c[n], res[n], places=3)
+ assert_almost_equal(cwf[n], wf_res[n], places=3)
+
+ def test_digraph(self):
+ G = nx.path_graph(3, create_using=nx.DiGraph())
+ c = nx.closeness_centrality(G)
+ cr = nx.closeness_centrality(G, reverse=True)
+ d = {0: 0.0, 1: 0.500, 2: 0.667}
+ dr = {0: 0.667, 1: 0.500, 2: 0.0}
+ for n in sorted(self.P3):
+ assert_almost_equal(c[n], d[n], places=3)
+ assert_almost_equal(cr[n], dr[n], places=3)
def test_k5_closeness(self):
- c=nx.closeness_centrality(self.K5)
- d={0: 1.000,
- 1: 1.000,
- 2: 1.000,
- 3: 1.000,
- 4: 1.000}
+ c = nx.closeness_centrality(self.K5)
+ d = {0: 1.000,
+ 1: 1.000,
+ 2: 1.000,
+ 3: 1.000,
+ 4: 1.000}
for n in sorted(self.K5):
- assert_almost_equal(c[n],d[n],places=3)
+ assert_almost_equal(c[n], d[n], places=3)
def test_p3_closeness(self):
- c=nx.closeness_centrality(self.P3)
- d={0: 0.667,
- 1: 1.000,
- 2: 0.667}
+ c = nx.closeness_centrality(self.P3)
+ d = {0: 0.667,
+ 1: 1.000,
+ 2: 0.667}
for n in sorted(self.P3):
- assert_almost_equal(c[n],d[n],places=3)
+ assert_almost_equal(c[n], d[n], places=3)
def test_krackhardt_closeness(self):
- c=nx.closeness_centrality(self.K)
- d={0: 0.529,
- 1: 0.529,
- 2: 0.500,
- 3: 0.600,
- 4: 0.500,
- 5: 0.643,
- 6: 0.643,
- 7: 0.600,
- 8: 0.429,
- 9: 0.310}
+ c = nx.closeness_centrality(self.K)
+ d = {0: 0.529,
+ 1: 0.529,
+ 2: 0.500,
+ 3: 0.600,
+ 4: 0.500,
+ 5: 0.643,
+ 6: 0.643,
+ 7: 0.600,
+ 8: 0.429,
+ 9: 0.310}
for n in sorted(self.K):
- assert_almost_equal(c[n],d[n],places=3)
+ assert_almost_equal(c[n], d[n], places=3)
def test_florentine_families_closeness(self):
- c=nx.closeness_centrality(self.F)
- d={'Acciaiuoli': 0.368,
- 'Albizzi': 0.483,
- 'Barbadori': 0.4375,
- 'Bischeri': 0.400,
- 'Castellani': 0.389,
- 'Ginori': 0.333,
- 'Guadagni': 0.467,
- 'Lamberteschi': 0.326,
- 'Medici': 0.560,
- 'Pazzi': 0.286,
- 'Peruzzi': 0.368,
- 'Ridolfi': 0.500,
- 'Salviati': 0.389,
- 'Strozzi': 0.4375,
- 'Tornabuoni': 0.483}
+ c = nx.closeness_centrality(self.F)
+ d = {'Acciaiuoli': 0.368,
+ 'Albizzi': 0.483,
+ 'Barbadori': 0.4375,
+ 'Bischeri': 0.400,
+ 'Castellani': 0.389,
+ 'Ginori': 0.333,
+ 'Guadagni': 0.467,
+ 'Lamberteschi': 0.326,
+ 'Medici': 0.560,
+ 'Pazzi': 0.286,
+ 'Peruzzi': 0.368,
+ 'Ridolfi': 0.500,
+ 'Salviati': 0.389,
+ 'Strozzi': 0.4375,
+ 'Tornabuoni': 0.483}
for n in sorted(self.F):
- assert_almost_equal(c[n],d[n],places=3)
+ assert_almost_equal(c[n], d[n], places=3)
def test_weighted_closeness(self):
- XG=nx.Graph()
- XG.add_weighted_edges_from([('s','u',10), ('s','x',5), ('u','v',1),
- ('u','x',2), ('v','y',1), ('x','u',3),
- ('x','v',5), ('x','y',2), ('y','s',7),
- ('y','v',6)])
- c=nx.closeness_centrality(XG,distance='weight')
- d={'y': 0.200,
- 'x': 0.286,
- 's': 0.138,
- 'u': 0.235,
- 'v': 0.200}
+ edges = ([('s', 'u', 10), ('s', 'x', 5), ('u', 'v', 1),
+ ('u', 'x', 2), ('v', 'y', 1), ('x', 'u', 3),
+ ('x', 'v', 5), ('x', 'y', 2), ('y', 's', 7), ('y', 'v', 6)])
+ XG = nx.Graph()
+ XG.add_weighted_edges_from(edges)
+ c = nx.closeness_centrality(XG, distance='weight')
+ d = {'y': 0.200,
+ 'x': 0.286,
+ 's': 0.138,
+ 'u': 0.235,
+ 'v': 0.200}
for n in sorted(XG):
- assert_almost_equal(c[n],d[n],places=3)
-
+ assert_almost_equal(c[n], d[n], places=3)
| Closeness centrality - normalisation flag - clarity necessary for what normalisation means in this context.
When trying the `networkx.algorithms.centrality.closeness` method it seems to return the same values whether setting `normalisation` to true or false.
Looking at the code:
```
closeness_centrality = {}
for n in nodes:
sp = path_length(G,n)
totsp = sum(sp.values())
if totsp > 0.0 and len(G) > 1:
closeness_centrality[n] = (len(sp)-1.0) / totsp
# normalize to number of nodes-1 in connected part
if normalized:
s = (len(sp)-1.0) / ( len(G) - 1 )
closeness_centrality[n] *= s
else:
closeness_centrality[n] = 0.0
```
In most cases the length of the `sp` dictionary will be the same as the number of nodes in the graph, thus multiplying `closeness_centrality[n]` by 1 and having no effect?
It seems like the intent is to normalise values across disconnected graphs?
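On a connected graph the flag indeed changes nothing, which is why the values match above; on a disconnected graph it does matter. A minimal check (a sketch against the pre-rename API, where the flag is still called `normalized`):
```python
import networkx as nx

G = nx.union(nx.path_graph(4), nx.path_graph([4, 5, 6]))  # two components
c_scaled = nx.closeness_centrality(G)                  # normalized=True (default)
c_raw = nx.closeness_centrality(G, normalized=False)
# c_scaled multiplies each value by (n-1)/(N-1) for its component (n=4 or 3, N=7),
# so here the two values differ at every node.
```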
| The intent is indeed to normalize relative to the minimum distance possible within the component. I think the documentation is fairly clear about this. But there are other issues going on.
In the cited article, inverse closeness is the sum of the distances divided by (number of nodes -1). Closeness is then (number of nodes -1) divided by sum of distances. Let N be the number of nodes and n be the number of nodes in the component.
Questions:
- without normalization we have `C = (n-1)/sum_of_dict`. So normalization already occurs even when `normalized is False`. That should be made clear in docs.
- with normalization we have `C = (n-1)/(N-1) * (n-1)/sum_of_dict` which looks like the reciprocal of what we want to multiply by. Shouldn't the scale factor `s=(n-1)/(N-1)` be instead `s=(N-1)/(n-1)`?
The tests don't check normalization so it's hard to tell what was intended.
I'm pretty sure I'm guilty here https://github.com/networkx/networkx/commit/7df946596899468c0b114adf3fb581d830b7c051.
I'm not sure what the right answer is.
If I'm not mistaken, Closeness centrality by definition takes into account the number of nodes, i.e.
`(n-1)/sum(distances from node to all others)`
So in one sense normalisation is a misnomer? That said, in my case I would find it beneficial to have the option to return the raw sum of distances and the number of nodes so that I can 'normalise' using other methods.
Some earlier discussion
#824
https://groups.google.com/forum/?hl=en#!topic/networkx-discuss/ZfCQDrZcvYI
#241
Following up on the previous discussion, Wasserman and Faust do indeed suggest an "improved" closeness centrality measure that scales as we scale it. The result is "a ratio of the fraction of actors in the group who are reachable, to the average distance" to the reachable actors [W&F pg 201]. So we've got the right scaling and it matches our documentation.
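To make the two formulas concrete, a small illustrative helper (a sketch only; Python 3 division, and `u` is assumed to reach at least one other node):
```python
import networkx as nx

def closeness_both(G, u):
    """Plain closeness vs. the Wasserman-Faust scaled value for node u."""
    sp = dict(nx.single_source_shortest_path_length(G, u))
    totsp = sum(sp.values())
    n, N = len(sp), len(G)           # n: nodes reachable from u (incl. u), N: all nodes
    plain = (n - 1) / totsp          # what normalized=False computes
    wf = plain * (n - 1) / (N - 1)   # what normalized=True (the W&F formula) computes
    return plain, wf
```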
The only issue I see here is the word "normalized" and whether the documentation sufficiently stresses what that means. I think @shongololo was able to figure out what was going on... but was concerned that it might not be what was intended. I'll change the docs...
Should I change the name "normalized"?
BTW, to get the sum of the distances on can use the shortest_path routines directly:
```
dict((node, sum(d for _,d in nx.single_source_shortest_path_length(G, node))) for node in nodes)
```
Yes, "normalized" does seem a little bit of a stretch for what this does. Got a better name?
@dschult the 'improved' closeness centrality is the method I like to use.
Perhaps an `improved_formula` or `alt_formula` flag with an explanation that this 'normalises' based on Wasserman and Faust's method vs. the conventional method?
Or we could just write another function with a different name.
See #2391 for more confusion over normalization. | 2017-08-09T05:59:24 |
networkx/networkx | 2,610 | networkx__networkx-2610 | [
"2605"
] | d47fe57f99dd07f5bb45ab5890e71f5fb2db5c3a | diff --git a/networkx/classes/coreviews.py b/networkx/classes/coreviews.py
--- a/networkx/classes/coreviews.py
+++ b/networkx/classes/coreviews.py
@@ -10,8 +10,7 @@
# Dan Schult([email protected])
"""
"""
-from itertools import chain
-from collections import Mapping, Set, Iterable
+from collections import Mapping
import networkx as nx
__all__ = ['AtlasView', 'AdjacencyView', 'MultiAdjacencyView',
diff --git a/networkx/classes/reportviews.py b/networkx/classes/reportviews.py
--- a/networkx/classes/reportviews.py
+++ b/networkx/classes/reportviews.py
@@ -191,7 +191,7 @@ def __call__(self, data=False, default=None):
return self
return NodeDataView(self._nodes, data, default)
- def data(self, data=False, default=None):
+ def data(self, data=True, default=None):
if data is False:
return self
return NodeDataView(self._nodes, data, default)
@@ -235,8 +235,14 @@ def __init__(self, nodedict, data=False, default=None):
self._default = default
@classmethod
- def _from_iterable(self, it):
- return set(it)
+ def _from_iterable(cls, it):
+ try:
+ return set(it)
+ except TypeError as err:
+ if "unhashable" in str(err):
+ msg = " : Could be b/c data=True or your values are unhashable"
+ raise TypeError(str(err) + msg)
+ raise
def __len__(self):
return len(self._nodes)
@@ -271,11 +277,6 @@ def __getitem__(self, n):
return ddict
return ddict[data] if data in ddict else self._default
- def __call__(self, data=False, default=None):
- if data == self._data and default == self._default:
- return self
- return NodeDataView(self._nodes, data, default)
-
def __repr__(self):
if self._data is False:
return '%s(%r)' % (self.__class__.__name__, tuple(self))
@@ -879,7 +880,7 @@ def __setstate__(self, state):
def _from_iterable(self, it):
return set(it)
- view = OutEdgeDataView
+ dataview = OutEdgeDataView
def __init__(self, G):
succ = G._succ if hasattr(G, "succ") else G._adj
@@ -912,12 +913,12 @@ def __getitem__(self, e):
def __call__(self, nbunch=None, data=False, default=None):
if nbunch is None and data is False:
return self
- return self.view(self, nbunch, data, default)
+ return self.dataview(self, nbunch, data, default)
- def data(self, nbunch=None, data=False, default=None):
+ def data(self, data=True, default=None, nbunch=None):
if nbunch is None and data is False:
return self
- return self.view(self, nbunch, data, default)
+ return self.dataview(self, nbunch, data, default)
# String Methods
def __str__(self):
@@ -995,7 +996,7 @@ class EdgeView(OutEdgeView):
"""
__slots__ = ()
- view = EdgeDataView
+ dataview = EdgeDataView
def __len__(self):
return sum(len(nbrs) for n, nbrs in self._nodes_nbrs()) // 2
@@ -1021,7 +1022,7 @@ class InEdgeView(OutEdgeView):
"""A EdgeView class for inward edges of a DiGraph"""
__slots__ = ()
- view = InEdgeDataView
+ dataview = InEdgeDataView
def __init__(self, G):
pred = G._pred if hasattr(G, "pred") else G._adj
@@ -1050,7 +1051,7 @@ class OutMultiEdgeView(OutEdgeView):
"""A EdgeView class for outward edges of a MultiDiGraph"""
__slots__ = ()
- view = OutMultiEdgeDataView
+ dataview = OutMultiEdgeDataView
def __len__(self):
return sum(len(kdict) for n, nbrs in self._nodes_nbrs()
@@ -1083,19 +1084,19 @@ def __getitem__(self, e):
def __call__(self, nbunch=None, data=False, keys=False, default=None):
if nbunch is None and data is False and keys is True:
return self
- return self.view(self, nbunch, data, keys, default)
+ return self.dataview(self, nbunch, data, keys, default)
- def data(self, nbunch=None, data=False, keys=False, default=None):
+ def data(self, data=True, keys=False, default=None, nbunch=None):
if nbunch is None and data is False and keys is True:
return self
- return self.view(self, nbunch, data, keys, default)
+ return self.dataview(self, nbunch, data, keys, default)
class MultiEdgeView(OutMultiEdgeView):
"""A EdgeView class for edges of a MultiGraph"""
__slots__ = ()
- view = MultiEdgeDataView
+ dataview = MultiEdgeDataView
def __len__(self):
return sum(len(kdict) for n, nbrs in self._nodes_nbrs()
@@ -1116,7 +1117,7 @@ class InMultiEdgeView(OutMultiEdgeView):
"""A EdgeView class for inward edges of a MultiDiGraph"""
__slots__ = ()
- view = InMultiEdgeDataView
+ dataview = InMultiEdgeDataView
def __init__(self, G):
pred = G._pred if hasattr(G, "pred") else G._adj
| diff --git a/networkx/classes/tests/test_coreviews.py b/networkx/classes/tests/test_coreviews.py
--- a/networkx/classes/tests/test_coreviews.py
+++ b/networkx/classes/tests/test_coreviews.py
@@ -1,14 +1,23 @@
from nose.tools import assert_equal, assert_not_equal, assert_is,\
assert_is_not, assert_true, assert_false, assert_raises
+import tempfile
+import pickle
import networkx as nx
-class test_atlasview(object):
+class TestAtlasView(object):
+ # node->data
def setup(self):
self.d = {0: {'color': 'blue', 'weight': 1.2}, 1: {}, 2: {'color': 1}}
self.av = nx.classes.coreviews.AtlasView(self.d)
+ def test_pickle(self):
+ view = self.av
+ pview = pickle.loads(pickle.dumps(view, -1))
+ assert_equal(view, pview)
+ assert_equal(view.__slots__, pview.__slots__)
+
def test_len(self):
assert_equal(len(self.av), len(self.d))
@@ -23,6 +32,7 @@ def test_getitem(self):
def test_copy(self):
avcopy = self.av.copy()
assert_equal(avcopy[0], self.av[0])
+ assert_equal(avcopy, self.av)
assert_is_not(avcopy[0], self.av[0])
assert_is_not(avcopy, self.av)
avcopy[5] = {}
@@ -44,13 +54,20 @@ def test_repr(self):
assert_equal(str(self.av), out)
-class test_adjacencyview(object):
+class TestAdjacencyView(object):
+ # node->nbr->data
def setup(self):
dd = {'color': 'blue', 'weight': 1.2}
self.nd = {0: dd, 1: {}, 2: {'color': 1}}
self.adj = {3: self.nd, 0: {3: dd}, 1: {}, 2: {3: {'color': 1}}}
self.adjview = nx.classes.coreviews.AdjacencyView(self.adj)
+ def test_pickle(self):
+ view = self.adjview
+ pview = pickle.loads(pickle.dumps(view, -1))
+ assert_equal(view, pview)
+ assert_equal(view.__slots__, pview.__slots__)
+
def test_len(self):
assert_equal(len(self.adjview), len(self.adj))
@@ -85,7 +102,8 @@ def test_repr(self):
assert_equal(str(self.adjview), out)
-class test_multiadjacencyview(test_adjacencyview):
+class TestMultiAdjacencyView(TestAdjacencyView):
+ # node->nbr->key->data
def setup(self):
dd = {'color': 'blue', 'weight': 1.2}
self.kd = {0: dd, 1: {}, 2: {'color': 1}}
@@ -113,12 +131,19 @@ def test_copy(self):
assert_false(hasattr(self.adjview, '__setitem__'))
-class test_unionatlas(object):
+class TestUnionAtlas(object):
+ # node->data
def setup(self):
self.s = {0: {'color': 'blue', 'weight': 1.2}, 1: {}, 2: {'color': 1}}
self.p = {3: {'color': 'blue', 'weight': 1.2}, 4: {}, 2: {'watch': 2}}
self.av = nx.classes.coreviews.UnionAtlas(self.s, self.p)
+ def test_pickle(self):
+ view = self.av
+ pview = pickle.loads(pickle.dumps(view, -1))
+ assert_equal(view, pview)
+ assert_equal(view.__slots__, pview.__slots__)
+
def test_len(self):
assert_equal(len(self.av), len(self.s) + len(self.p))
@@ -151,7 +176,6 @@ def test_copy(self):
def test_items(self):
expected = dict(self.p.items())
expected.update(self.s)
- print(sorted(self.av.items()), sorted(expected.items()))
assert_equal(sorted(self.av.items()), sorted(expected.items()))
def test_repr(self):
@@ -159,7 +183,8 @@ def test_repr(self):
assert_equal(str(self.av), out)
-class test_unionadjacency(object):
+class TestUnionAdjacency(object):
+ # node->nbr->data
def setup(self):
dd = {'color': 'blue', 'weight': 1.2}
self.nd = {0: dd, 1: {}, 2: {'color': 1}}
@@ -167,11 +192,17 @@ def setup(self):
self.p = {3: {}, 0: {3: dd}, 1: {0: {}}, 2: {1: {'color': 1}}}
self.adjview = nx.classes.coreviews.UnionAdjacency(self.s, self.p)
+ def test_pickle(self):
+ view = self.adjview
+ pview = pickle.loads(pickle.dumps(view, -1))
+ assert_equal(view, pview)
+ assert_equal(view.__slots__, pview.__slots__)
+
def test_len(self):
assert_equal(len(self.adjview), len(self.s))
def test_iter(self):
- assert_equal(list(self.adjview), list(self.s))
+ assert_equal(sorted(self.adjview), sorted(self.s))
def test_getitem(self):
assert_is_not(self.adjview[1], self.s[1])
@@ -198,7 +229,43 @@ def test_repr(self):
assert_equal(str(self.adjview), out)
-class test_unionmultiadjacency(test_unionadjacency):
+class TestUnionMultiInner(TestUnionAdjacency):
+ # nbr->key->data
+ def setup(self):
+ dd = {'color': 'blue', 'weight': 1.2}
+ self.kd = {7: {}, 'ekey': {}, 9: {'color': 1}}
+ self.s = {3: self.kd, 0: {7: dd}, 1: {}, 2: {'key': {'color': 1}}}
+ self.p = {3: {}, 0: {3: dd}, 1: {}, 2: {1: {'span': 2}}}
+ self.adjview = nx.classes.coreviews.UnionMultiInner(self.s, self.p)
+
+ def test_len(self):
+ assert_equal(len(self.adjview), len(self.s) + len(self.p))
+
+ def test_getitem(self):
+ assert_is_not(self.adjview[1], self.s[1])
+ assert_is(self.adjview[0][7], self.adjview[0][3])
+ assert_equal(self.adjview[2]['key']['color'], 1)
+ assert_equal(self.adjview[2][1]['span'], 2)
+ assert_raises(KeyError, self.adjview.__getitem__, 4)
+ assert_raises(KeyError, self.adjview[1].__getitem__, 'key')
+
+ def test_copy(self):
+ avcopy = self.adjview.copy()
+ assert_equal(avcopy[0], self.adjview[0])
+ assert_is_not(avcopy[0], self.adjview[0])
+
+ avcopy[2][1]['width'] = 8
+ assert_not_equal(avcopy[2], self.adjview[2])
+ self.adjview[2][1]['width'] = 8
+ assert_equal(avcopy[2], self.adjview[2])
+ del self.adjview[2][1]['width']
+
+ assert_false(hasattr(self.adjview, '__setitem__'))
+ assert_true(hasattr(avcopy, '__setitem__'))
+
+
+class TestUnionMultiAdjacency(TestUnionAdjacency):
+ # node->nbr->key->data
def setup(self):
dd = {'color': 'blue', 'weight': 1.2}
self.kd = {7: {}, 8: {}, 9: {'color': 1}}
@@ -225,3 +292,4 @@ def test_copy(self):
del self.adjview[2][3][8]['ht']
assert_false(hasattr(self.adjview, '__setitem__'))
+ assert_true(hasattr(avcopy, '__setitem__'))
diff --git a/networkx/classes/tests/test_graphviews.py b/networkx/classes/tests/test_graphviews.py
--- a/networkx/classes/tests/test_graphviews.py
+++ b/networkx/classes/tests/test_graphviews.py
@@ -4,8 +4,9 @@
import networkx as nx
from networkx.testing import assert_edges_equal
+# Note: SubGraph views are not tested here. They have their own testing file
-class test_reverse_view(object):
+class TestReverseView(object):
def setup(self):
self.G = nx.path_graph(9, create_using=nx.DiGraph())
self.rv = nx.reverse_view(self.G)
@@ -33,7 +34,7 @@ def test_exceptions(self):
assert_raises(nx.NetworkXNotImplemented, nxg.ReverseView, nx.Graph())
-class test_multi_reverse_view(object):
+class TestMultiReverseView(object):
def setup(self):
self.G = nx.path_graph(9, create_using=nx.MultiDiGraph())
self.G.add_edge(4, 5)
@@ -65,7 +66,7 @@ def test_exceptions(self):
assert_raises(nx.NetworkXNotImplemented, nxg.MultiReverseView, MG)
-class test_to_directed(object):
+class TestToDirected(object):
def setup(self):
self.G = nx.path_graph(9)
self.dv = nx.to_directed(self.G)
@@ -108,7 +109,7 @@ def test_exceptions(self):
assert_raises(nx.NetworkXError, nxg.MultiDiGraphView, self.G)
-class test_to_undirected(object):
+class TestToUndirected(object):
def setup(self):
self.DG = nx.path_graph(9, create_using=nx.DiGraph())
self.uv = nx.to_undirected(self.DG)
@@ -150,7 +151,7 @@ def test_exceptions(self):
assert_raises(nx.NetworkXError, nxg.MultiGraphView, self.DG)
-class Test_combinations(object):
+class TestChainsOfViews(object):
def setUp(self):
self.G = nx.path_graph(9)
self.DG = nx.path_graph(9, create_using=nx.DiGraph())
diff --git a/networkx/classes/tests/test_views.py b/networkx/classes/tests/test_reportviews.py
similarity index 67%
rename from networkx/classes/tests/test_views.py
rename to networkx/classes/tests/test_reportviews.py
--- a/networkx/classes/tests/test_views.py
+++ b/networkx/classes/tests/test_reportviews.py
@@ -1,27 +1,28 @@
from nose.tools import assert_equal, assert_not_equal, \
- assert_true, assert_false, assert_raises
+ assert_true, assert_false, assert_raises, \
+ assert_is, assert_is_not
import networkx as nx
# Nodes
-class test_nodeview(object):
+class TestNodeView(object):
def setup(self):
self.G = nx.path_graph(9)
+ self.nv = self.G.nodes # NodeView(G)
def test_pickle(self):
import pickle
- nv = self.G.nodes() # NodeView(self.G)
+ nv = self.nv
pnv = pickle.loads(pickle.dumps(nv, -1))
assert_equal(nv, pnv)
assert_equal(nv.__slots__, pnv.__slots__)
def test_repr(self):
- nv = self.G.nodes()
- assert_equal(str(nv), "NodeView((0, 1, 2, 3, 4, 5, 6, 7, 8))")
+ assert_equal(str(self.nv), "NodeView((0, 1, 2, 3, 4, 5, 6, 7, 8))")
def test_contains(self):
- nv = self.G.nodes()
+ nv = self.nv
assert_true(7 in nv)
assert_false(9 in nv)
self.G.remove_node(7)
@@ -29,29 +30,14 @@ def test_contains(self):
assert_false(7 in nv)
assert_true(9 in nv)
- def test_contains_data(self):
- nvd = self.G.nodes(data=True)
- self.G.nodes[3]['foo'] = 'bar'
- assert_true((7, {}) in nvd)
- assert_true((3, {'foo': 'bar'}) in nvd)
- nvdf = self.G.nodes(data='foo', default='biz')
- assert_true((7, 'biz') in nvdf)
- assert_true((3, 'bar') in nvdf)
- assert_true((3, nvdf[3]) in nvdf)
-
def test_getitem(self):
- nv = self.G.nodes
- nvd = self.G.nodes(data=True)
+ nv = self.nv
self.G.nodes[3]['foo'] = 'bar'
assert_equal(nv[7], {})
assert_equal(nv[3], {'foo': 'bar'})
- assert_equal(nvd[3], {'foo': 'bar'})
- nvdf = self.G.nodes(data='foo', default='biz')
- assert_true(nvdf[7], 'biz')
- assert_equal(nvdf[3], 'bar')
def test_iter(self):
- nv = self.G.nodes()
+ nv = self.nv
for i, n in enumerate(nv):
assert_equal(i, n)
inv = iter(nv)
@@ -66,21 +52,123 @@ def test_iter(self):
for i, n in enumerate(nnv):
assert_equal(i, n)
- def test_iter_data(self):
- nv = self.G.nodes(data=True)
+ def test_call(self):
+ nodes = self.nv
+ assert_is(nodes, nodes())
+ assert_is_not(nodes, nodes(data=True))
+ assert_is_not(nodes, nodes(data='weight'))
+
+
+class TestNodeDataView(object):
+ def setup(self):
+ self.G = nx.path_graph(9)
+ self.nv = self.G.nodes.data() # NodeDataView(G)
+ self.ndv = self.G.nodes.data(True)
+ self.nwv = self.G.nodes.data('foo')
+
+ def test_viewtype(self):
+ nv = self.G.nodes
+ ndvfalse = nv.data(False)
+ assert_is(nv, ndvfalse)
+ assert_is_not(nv, self.ndv)
+
+ def test_pickle(self):
+ import pickle
+ nv = self.nv
+ pnv = pickle.loads(pickle.dumps(nv, -1))
+ assert_equal(nv, pnv)
+ assert_equal(nv.__slots__, pnv.__slots__)
+
+ def test_repr(self):
+ msg = "NodeDataView({0: {}, 1: {}, 2: {}, 3: {}, " + \
+ "4: {}, 5: {}, 6: {}, 7: {}, 8: {}})"
+ assert_equal(str(self.ndv), msg)
+
+ def test_contains(self):
+ self.G.nodes[3]['foo'] = 'bar'
+ assert_true((7, {}) in self.nv)
+ assert_true((3, {'foo': 'bar'}) in self.nv)
+ assert_true((3, 'bar') in self.nwv)
+ assert_true((7, None) in self.nwv)
+ # default
+ nwv_def = self.G.nodes(data='foo', default='biz')
+ assert_true((7, 'biz') in nwv_def)
+ assert_true((3, 'bar') in nwv_def)
+
+ def test_getitem(self):
+ self.G.nodes[3]['foo'] = 'bar'
+ assert_equal(self.nv[3], {'foo': 'bar'})
+ # default
+ nwv_def = self.G.nodes(data='foo', default='biz')
+ assert_true(nwv_def[7], 'biz')
+ assert_equal(nwv_def[3], 'bar')
+
+ def test_iter(self):
+ nv = self.nv
for i, (n, d) in enumerate(nv):
assert_equal(i, n)
assert_equal(d, {})
inv = iter(nv)
assert_equal(next(inv), (0, {}))
self.G.nodes[3]['foo'] = 'bar'
+ # default
for n, d in nv:
if n == 3:
assert_equal(d, {'foo': 'bar'})
- break
+ else:
+ assert_equal(d, {})
+ # data=True
+ for n, d in self.ndv:
+ if n == 3:
+ assert_equal(d, {'foo': 'bar'})
+ else:
+ assert_equal(d, {})
+ # data='foo'
+ for n, d in self.nwv:
+ if n == 3:
+ assert_equal(d, 'bar')
+ else:
+ assert_equal(d, None)
+ # data='foo', default=1
+ for n, d in self.G.nodes.data('foo', default=1):
+ if n == 3:
+ assert_equal(d, 'bar')
+ else:
+ assert_equal(d, 1)
+
+
+def test_nodedataview_unhashable():
+ G = nx.path_graph(9)
+ G.nodes[3]['foo'] = 'bar'
+ nvs = [G.nodes.data()]
+ nvs.append(G.nodes.data(True))
+ H = G.copy()
+ H.nodes[4]['foo'] = {1, 2, 3}
+ nvs.append(H.nodes.data(True))
+ # raise unhashable
+ for nv in nvs:
+ assert_raises(TypeError, set, nv)
+ assert_raises(TypeError, eval, 'nv | nv', locals())
+ # no raise... hashable
+ Gn = G.nodes.data(False)
+ set(Gn)
+ Gn | Gn
+ Gn = G.nodes.data('foo')
+ set(Gn)
+ Gn | Gn
+
+
+class TestNodeViewSetOps(object):
+ def setUp(self):
+ self.G = nx.path_graph(9)
+ self.G.nodes[3]['foo'] = 'bar'
+ self.nv = self.G.nodes
+
+ def n_its(self, nodes):
+ return {node for node in nodes}
def test_len(self):
- nv = self.G.nodes()
+ nv = self.nv
assert_equal(len(nv), 9)
self.G.remove_node(7)
assert_equal(len(nv), 8)
@@ -89,72 +177,102 @@ def test_len(self):
def test_and(self):
# print("G & H nodes:", gnv & hnv)
- nv = self.G.nodes()
- some_nodes = {n for n in range(5, 12)}
- assert_equal(nv & some_nodes, {n for n in range(5, 9)})
- assert_equal(some_nodes & nv, {n for n in range(5, 9)})
+ nv = self.nv
+ some_nodes = self.n_its(range(5, 12))
+ assert_equal(nv & some_nodes, self.n_its(range(5, 9)))
+ assert_equal(some_nodes & nv, self.n_its(range(5, 9)))
def test_or(self):
# print("G | H nodes:", gnv | hnv)
- nv = self.G.nodes()
- some_nodes = {n for n in range(5, 12)}
- assert_equal(nv | some_nodes, {n for n in range(12)})
- assert_equal(some_nodes | nv, {n for n in range(12)})
+ nv = self.nv
+ some_nodes = self.n_its(range(5, 12))
+ assert_equal(nv | some_nodes, self.n_its(range(12)))
+ assert_equal(some_nodes | nv, self.n_its(range(12)))
def test_xor(self):
# print("G ^ H nodes:", gnv ^ hnv)
- nv = self.G.nodes()
- some_nodes = {n for n in range(5, 12)}
- assert_equal(nv ^ some_nodes, {0, 1, 2, 3, 4, 9, 10, 11})
- assert_equal(some_nodes ^ nv, {0, 1, 2, 3, 4, 9, 10, 11})
+ nv = self.nv
+ some_nodes = self.n_its(range(5, 12))
+ nodes = {0, 1, 2, 3, 4, 9, 10, 11}
+ assert_equal(nv ^ some_nodes, self.n_its(nodes))
+ assert_equal(some_nodes ^ nv, self.n_its(nodes))
def test_sub(self):
# print("G - H nodes:", gnv - hnv)
- nv = self.G.nodes()
- some_nodes = {n for n in range(5, 12)}
- assert_equal(nv - some_nodes, {n for n in range(5)})
- assert_equal(some_nodes - nv, {n for n in range(9, 12)})
+ nv = self.nv
+ some_nodes = self.n_its(range(5, 12))
+ assert_equal(nv - some_nodes, self.n_its(range(5)))
+ assert_equal(some_nodes - nv, self.n_its(range(9, 12)))
+
+
+class TestNodeDataViewSetOps(TestNodeViewSetOps):
+ def setUp(self):
+ self.G = nx.path_graph(9)
+ self.G.nodes[3]['foo'] = 'bar'
+ self.nv = self.G.nodes.data('foo')
+
+ def n_its(self, nodes):
+ return {(node, 'bar' if node == 3 else None) for node in nodes}
+
+
+class TestNodeDataViewDefaultSetOps(TestNodeDataViewSetOps):
+ def setUp(self):
+ self.G = nx.path_graph(9)
+ self.G.nodes[3]['foo'] = 'bar'
+ self.nv = self.G.nodes.data('foo', default=1)
+
+ def n_its(self, nodes):
+ return {(node, 'bar' if node == 3 else 1) for node in nodes}
# Edges Data View
-class test_edgedataview(object):
- def setup(self):
+class TestEdgeDataView(object):
+ def setUp(self):
self.G = nx.path_graph(9)
- self.DG = nx.path_graph(9, create_using=nx.DiGraph())
self.eview = nx.reportviews.EdgeView
- def modify_edge(G, e, **kwds):
- G._adj[e[0]][e[1]].update(kwds)
- self.modify_edge = modify_edge
+ def modify_edge(self, G, e, **kwds):
+ self.G._adj[e[0]][e[1]].update(kwds)
+
+ def test_repr(self):
+ ev = self.eview(self.G)(data=True)
+ rep = "EdgeDataView([(0, 1, {}), (1, 2, {}), " + \
+ "(2, 3, {}), (3, 4, {}), " + \
+ "(4, 5, {}), (5, 6, {}), " + \
+ "(6, 7, {}), (7, 8, {})])"
+ assert_equal(repr(ev), rep)
def test_iterdata(self):
- G = self.G.copy()
+ G = self.G
evr = self.eview(G)
ev = evr(data=True)
+ ev_def = evr(data='foo', default=1)
+
for u, v, d in ev:
pass
assert_equal(d, {})
- ev = evr(data='foo', default=1)
- for u, v, wt in ev:
+
+ for u, v, wt in ev_def:
pass
assert_equal(wt, 1)
self.modify_edge(G, (2, 3), foo='bar')
- ev = evr(data=True)
for e in ev:
+ assert_equal(len(e), 3)
if set(e[:2]) == {2, 3}:
assert_equal(e[2], {'foo': 'bar'})
- assert_equal(len(e), 3)
checked = True
- break
+ else:
+ assert_equal(e[2], {})
assert_true(checked)
- ev = evr(data='foo', default=1)
- for e in ev:
+
+ for e in ev_def:
+ assert_equal(len(e), 3)
if set(e[:2]) == {2, 3}:
assert_equal(e[2], 'bar')
- assert_equal(len(e), 3)
checked_wt = True
- break
+ else:
+ assert_equal(e[2], 1)
assert_true(checked_wt)
def test_iter(self):
@@ -185,28 +303,106 @@ def test_len(self):
assert_equal(len(evr(1)), 2)
assert_equal(len(evr([1, 2, 3])), 4)
- evr = self.eview(self.DG)
+ assert_equal(len(self.G.edges(1)), 2)
+ assert_equal(len(self.G.edges()), 8)
+ assert_equal(len(self.G.edges), 8)
+
+
+class TestOutEdgeDataView(TestEdgeDataView):
+ def setUp(self):
+ self.G = nx.path_graph(9, create_using=nx.DiGraph())
+ self.eview = nx.reportviews.OutEdgeView
+
+ def test_repr(self):
+ ev = self.eview(self.G)(data=True)
+ rep = "OutEdgeDataView([(0, 1, {}), (1, 2, {}), " + \
+ "(2, 3, {}), (3, 4, {}), " + \
+ "(4, 5, {}), (5, 6, {}), " + \
+ "(6, 7, {}), (7, 8, {})])"
+ assert_equal(repr(ev), rep)
+
+ def test_len(self):
+ evr = self.eview(self.G)
+ ev = evr(data='foo')
+ assert_equal(len(ev), 8)
assert_equal(len(evr(1)), 1)
assert_equal(len(evr([1, 2, 3])), 3)
- assert_equal(len(self.G.edges(1)), 2)
+ assert_equal(len(self.G.edges(1)), 1)
assert_equal(len(self.G.edges()), 8)
assert_equal(len(self.G.edges), 8)
- assert_equal(len(self.DG.edges(1)), 1)
- assert_equal(len(self.DG.edges()), 8)
- assert_equal(len(self.DG.edges), 8)
+class TestInEdgeDataView(TestOutEdgeDataView):
+ def setUp(self):
+ self.G = nx.path_graph(9, create_using=nx.DiGraph())
+ self.eview = nx.reportviews.InEdgeView
+
+ def test_repr(self):
+ ev = self.eview(self.G)(data=True)
+ rep = "InEdgeDataView([(0, 1, {}), (1, 2, {}), " + \
+ "(2, 3, {}), (3, 4, {}), " + \
+ "(4, 5, {}), (5, 6, {}), " + \
+ "(6, 7, {}), (7, 8, {})])"
+ assert_equal(repr(ev), rep)
+
+
+class TestMultiEdgeDataView(TestEdgeDataView):
+ def setUp(self):
+ self.G = nx.path_graph(9, create_using=nx.MultiGraph())
+ self.eview = nx.reportviews.MultiEdgeView
+
+ def modify_edge(self, G, e, **kwds):
+ self.G._adj[e[0]][e[1]][0].update(kwds)
-# Edges
-class test_edgeview(object):
+ def test_repr(self):
+ ev = self.eview(self.G)(data=True)
+ rep = "MultiEdgeDataView([(0, 1, {}), (1, 2, {}), " + \
+ "(2, 3, {}), (3, 4, {}), " + \
+ "(4, 5, {}), (5, 6, {}), " + \
+ "(6, 7, {}), (7, 8, {})])"
+ assert_equal(repr(ev), rep)
+
+
+class TestOutMultiEdgeDataView(TestOutEdgeDataView):
+ def setUp(self):
+ self.G = nx.path_graph(9, create_using=nx.MultiDiGraph())
+ self.eview = nx.reportviews.OutMultiEdgeView
+
+ def modify_edge(self, G, e, **kwds):
+ self.G._adj[e[0]][e[1]][0].update(kwds)
+
+ def test_repr(self):
+ ev = self.eview(self.G)(data=True)
+ rep = "OutMultiEdgeDataView([(0, 1, {}), (1, 2, {}), " + \
+ "(2, 3, {}), (3, 4, {}), " + \
+ "(4, 5, {}), (5, 6, {}), " + \
+ "(6, 7, {}), (7, 8, {})])"
+ assert_equal(repr(ev), rep)
+
+
+class TestInMultiEdgeDataView(TestOutMultiEdgeDataView):
+ def setUp(self):
+ self.G = nx.path_graph(9, create_using=nx.MultiDiGraph())
+ self.eview = nx.reportviews.InMultiEdgeView
+
+ def test_repr(self):
+ ev = self.eview(self.G)(data=True)
+ rep = "InMultiEdgeDataView([(0, 1, {}), (1, 2, {}), " + \
+ "(2, 3, {}), (3, 4, {}), " + \
+ "(4, 5, {}), (5, 6, {}), " + \
+ "(6, 7, {}), (7, 8, {})])"
+ assert_equal(repr(ev), rep)
+
+
+# Edge Views
+class TestEdgeView(object):
def setup(self):
self.G = nx.path_graph(9)
self.eview = nx.reportviews.EdgeView
- def modify_edge(G, e, **kwds):
- G._adj[e[0]][e[1]].update(kwds)
- self.modify_edge = modify_edge
+ def modify_edge(self, G, e, **kwds):
+ self.G._adj[e[0]][e[1]].update(kwds)
def test_repr(self):
ev = self.eview(self.G)
@@ -217,12 +413,14 @@ def test_repr(self):
def test_call(self):
ev = self.eview(self.G)
assert_equal(id(ev), id(ev()))
+ assert_equal(id(ev), id(ev(data=False)))
assert_not_equal(id(ev), id(ev(data=True)))
assert_not_equal(id(ev), id(ev(nbunch=1)))
def test_data(self):
ev = self.eview(self.G)
- assert_equal(id(ev), id(ev.data()))
+ assert_not_equal(id(ev), id(ev.data()))
+ assert_equal(id(ev), id(ev.data(data=False)))
assert_not_equal(id(ev), id(ev.data(data=True)))
assert_not_equal(id(ev), id(ev.data(nbunch=1)))
@@ -303,15 +501,11 @@ def test_sub(self):
assert_true(ev - some_edges, result)
-class test_directed_edges(test_edgeview):
+class TestOutEdgeView(TestEdgeView):
def setup(self):
self.G = nx.path_graph(9, nx.DiGraph())
self.eview = nx.reportviews.OutEdgeView
- def modify_edge(G, e, **kwds):
- G._adj[e[0]][e[1]].update(kwds)
- self.modify_edge = modify_edge
-
def test_repr(self):
ev = self.eview(self.G)
rep = "OutEdgeView([(0, 1), (1, 2), (2, 3), (3, 4), " + \
@@ -319,15 +513,11 @@ def test_repr(self):
assert_equal(repr(ev), rep)
-class test_inedges(test_edgeview):
+class TestInEdgeView(TestEdgeView):
def setup(self):
self.G = nx.path_graph(9, nx.DiGraph())
self.eview = nx.reportviews.InEdgeView
- def modify_edge(G, e, **kwds):
- G._adj[e[0]][e[1]].update(kwds)
- self.modify_edge = modify_edge
-
def test_repr(self):
ev = self.eview(self.G)
rep = "InEdgeView([(0, 1), (1, 2), (2, 3), (3, 4), " + \
@@ -335,17 +525,16 @@ def test_repr(self):
assert_equal(repr(ev), rep)
-class test_multiedges(test_edgeview):
+class TestMultiEdgeView(TestEdgeView):
def setup(self):
self.G = nx.path_graph(9, nx.MultiGraph())
self.G.add_edge(1, 2, key=3, foo='bar')
self.eview = nx.reportviews.MultiEdgeView
- def modify_edge(G, e, **kwds):
- if len(e) == 2:
- e = e + (0,)
- G._adj[e[0]][e[1]][e[2]].update(kwds)
- self.modify_edge = modify_edge
+ def modify_edge(self, G, e, **kwds):
+ if len(e) == 2:
+ e = e + (0,)
+ self.G._adj[e[0]][e[1]][e[2]].update(kwds)
def test_repr(self):
ev = self.eview(self.G)
@@ -356,12 +545,16 @@ def test_repr(self):
def test_call(self):
ev = self.eview(self.G)
assert_equal(id(ev), id(ev(keys=True)))
+ assert_equal(id(ev), id(ev(data=False, keys=True)))
+ assert_not_equal(id(ev), id(ev(keys=False)))
assert_not_equal(id(ev), id(ev(data=True)))
assert_not_equal(id(ev), id(ev(nbunch=1)))
def test_data(self):
ev = self.eview(self.G)
- assert_equal(id(ev), id(ev.data(keys=True)))
+ assert_not_equal(id(ev), id(ev.data()))
+ assert_equal(id(ev), id(ev.data(data=False, keys=True)))
+ assert_not_equal(id(ev), id(ev.data(keys=False)))
assert_not_equal(id(ev), id(ev.data(data=True)))
assert_not_equal(id(ev), id(ev.data(nbunch=1)))
@@ -375,7 +568,7 @@ def test_iter(self):
assert_equal(iter(iev), iev)
def test_iterkeys(self):
- G = self.G.copy()
+ G = self.G
evr = self.eview(G)
ev = evr(keys=True)
for u, v, k in ev:
@@ -389,13 +582,22 @@ def test_iterkeys(self):
self.modify_edge(G, (2, 3, 0), foo='bar')
ev = evr(keys=True, data=True)
for e in ev:
+ assert_equal(len(e), 4)
+ print('edge:',e)
if set(e[:2]) == {2, 3}:
+ print(self.G._adj[2][3])
assert_equal(e[2], 0)
assert_equal(e[3], {'foo': 'bar'})
- assert_equal(len(e), 4)
checked = True
- break
+ elif set(e[:3]) == {1, 2, 3}:
+ assert_equal(e[2], 3)
+ assert_equal(e[3], {'foo': 'bar'})
+ checked_multi = True
+ else:
+ assert_equal(e[2], 0)
+ assert_equal(e[3], {})
assert_true(checked)
+ assert_true(checked_multi)
ev = evr(keys=True, data='foo', default=1)
for e in ev:
if set(e[:2]) == {1, 2} and e[2] == 3:
@@ -474,17 +676,16 @@ def test_and(self):
assert_equal(some_edges & ev, {(0, 1, 0), (1, 0, 0)})
-class test_directed_multiedges(test_multiedges):
+class TestOutMultiEdgeView(TestMultiEdgeView):
def setup(self):
self.G = nx.path_graph(9, nx.MultiDiGraph())
self.G.add_edge(1, 2, key=3, foo='bar')
self.eview = nx.reportviews.OutMultiEdgeView
- def modify_edge(G, e, **kwds):
- if len(e) == 2:
- e = e + (0,)
- G._adj[e[0]][e[1]][e[2]].update(kwds)
- self.modify_edge = modify_edge
+ def modify_edge(self, G, e, **kwds):
+ if len(e) == 2:
+ e = e + (0,)
+ self.G._adj[e[0]][e[1]][e[2]].update(kwds)
def test_repr(self):
ev = self.eview(self.G)
@@ -493,17 +694,16 @@ def test_repr(self):
assert_equal(repr(ev), rep)
-class test_in_multiedges(test_multiedges):
+class TestInMultiEdgeView(TestMultiEdgeView):
def setup(self):
self.G = nx.path_graph(9, nx.MultiDiGraph())
self.G.add_edge(1, 2, key=3, foo='bar')
self.eview = nx.reportviews.InMultiEdgeView
- def modify_edge(G, e, **kwds):
- if len(e) == 2:
- e = e + (0,)
- G._adj[e[0]][e[1]][e[2]].update(kwds)
- self.modify_edge = modify_edge
+ def modify_edge(self, G, e, **kwds):
+ if len(e) == 2:
+ e = e + (0,)
+ self.G._adj[e[0]][e[1]][e[2]].update(kwds)
def test_repr(self):
ev = self.eview(self.G)
@@ -513,7 +713,7 @@ def test_repr(self):
# Degrees
-class test_degreeview(object):
+class TestDegreeView(object):
GRAPH = nx.Graph
dview = nx.reportviews.DegreeView
@@ -522,12 +722,8 @@ def setup(self):
self.G.add_edge(1, 3, foo=2)
self.G.add_edge(1, 3, foo=3)
- def modify_edge(G, e, **kwds):
- G._adj[e[0]][e[1]].update(kwds)
- self.modify_edge = modify_edge
-
def test_repr(self):
- dv = self.G.degree()
+ dv = self.dview(self.G)
rep = "DegreeView({0: 1, 1: 3, 2: 2, 3: 3, 4: 2, 5: 1})"
assert_equal(repr(dv), rep)
@@ -588,7 +784,7 @@ def test_len(self):
assert_equal(len(dv), 6)
-class test_didegreeview(test_degreeview):
+class TestDiDegreeView(TestDegreeView):
GRAPH = nx.DiGraph
dview = nx.reportviews.DiDegreeView
@@ -598,7 +794,7 @@ def test_repr(self):
assert_equal(repr(dv), rep)
-class test_outdegreeview(test_degreeview):
+class TestOutDegreeView(TestDegreeView):
GRAPH = nx.DiGraph
dview = nx.reportviews.OutDegreeView
@@ -641,7 +837,7 @@ def test_weight(self):
assert_equal(dvd[3], 1)
-class test_indegreeview(test_degreeview):
+class TestInDegreeView(TestDegreeView):
GRAPH = nx.DiGraph
dview = nx.reportviews.InDegreeView
@@ -684,7 +880,7 @@ def test_weight(self):
assert_equal(dvd[3], 4)
-class test_multidegreeview(test_degreeview):
+class TestMultiDegreeView(TestDegreeView):
GRAPH = nx.MultiGraph
dview = nx.reportviews.MultiDegreeView
@@ -727,7 +923,7 @@ def test_weight(self):
assert_equal(dvd[3], 7)
-class test_dimultidegreeview(test_multidegreeview):
+class TestDiMultiDegreeView(TestMultiDegreeView):
GRAPH = nx.MultiDiGraph
dview = nx.reportviews.DiMultiDegreeView
@@ -737,7 +933,7 @@ def test_repr(self):
assert_equal(repr(dv), rep)
-class test_outmultidegreeview(test_degreeview):
+class TestOutMultiDegreeView(TestDegreeView):
GRAPH = nx.MultiDiGraph
dview = nx.reportviews.OutMultiDegreeView
@@ -780,7 +976,7 @@ def test_weight(self):
assert_equal(dvd[3], 1)
-class test_inmultidegreeview(test_degreeview):
+class TestInMultiDegreeView(TestDegreeView):
GRAPH = nx.MultiDiGraph
dview = nx.reportviews.InMultiDegreeView
diff --git a/networkx/classes/tests/test_subgraphviews.py b/networkx/classes/tests/test_subgraphviews.py
--- a/networkx/classes/tests/test_subgraphviews.py
+++ b/networkx/classes/tests/test_subgraphviews.py
@@ -4,7 +4,7 @@
import networkx as nx
-class test_graphview(object):
+class TestSubGraphView(object):
gview = nx.graphviews.SubGraph
graph = nx.Graph
hide_edges_filter = staticmethod(nx.filters.hide_edges)
@@ -89,7 +89,7 @@ def test_shown_edges(self):
assert_equal(G.degree(3), 1)
-class test_digraphview(test_graphview):
+class TestSubDiGraphView(TestSubGraphView):
gview = nx.graphviews.SubDiGraph
graph = nx.DiGraph
hide_edges_filter = staticmethod(nx.filters.hide_diedges)
@@ -128,7 +128,7 @@ def test_inout_degree(self):
# multigraph
-class test_multigraphview(test_graphview):
+class TestMultiGraphView(TestSubGraphView):
gview = nx.graphviews.SubMultiGraph
graph = nx.MultiGraph
hide_edges_filter = staticmethod(nx.filters.hide_multiedges)
@@ -183,7 +183,7 @@ def test_shown_edges(self):
# multidigraph
-class test_multidigraphview(test_multigraphview, test_digraphview):
+class TestMultiDiGraphView(TestMultiGraphView, TestSubDiGraphView):
gview = nx.graphviews.SubMultiDiGraph
graph = nx.MultiDiGraph
hide_edges_filter = staticmethod(nx.filters.hide_multidiedges)
@@ -204,7 +204,7 @@ def test_inout_degree(self):
# induced_subgraph
-class test_induced_subgraph(object):
+class TestInducedSubGraph(object):
def setUp(self):
self.K3 = G = nx.complete_graph(3)
G.graph['foo'] = []
@@ -264,7 +264,7 @@ def graphs_equal(self, H, G):
# edge_subgraph
-class test_edge_subgraph(object):
+class TestEdgeSubGraph(object):
def setup(self):
# Create a path graph on five nodes.
self.G = G = nx.path_graph(5)
diff --git a/networkx/readwrite/tests/test_gpickle.py b/networkx/readwrite/tests/test_gpickle.py
--- a/networkx/readwrite/tests/test_gpickle.py
+++ b/networkx/readwrite/tests/test_gpickle.py
@@ -54,12 +54,6 @@ def test_protocol(self):
for G in [self.G, self.DG, self.MG, self.MDG,
self.fG, self.fDG, self.fMG, self.fMDG]:
with tempfile.TemporaryFile() as f:
-# print('G is',G)
-# for objname in dir(G):
-# print('checking G.',objname)
-# obj = getattr(G, objname)
-# if hasattr(obj, '__slots__'):
-# print('slots in',obj)
nx.write_gpickle(G, f, 0)
f.seek(0)
Gin = nx.read_gpickle(f)
| Add more tests for views
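For context, a sketch of the `.data()` behavior these tests pin down after this patch (`foo` is an arbitrary attribute name):
```python
import networkx as nx

G = nx.path_graph(3)
G.nodes[1]['foo'] = 'bar'
list(G.nodes.data())   # data() now defaults to data=True:
                       # [(0, {}), (1, {'foo': 'bar'}), (2, {})]
list(G.edges.data('foo', default=0))   # [(0, 1, 0), (1, 2, 0)]
```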
| 2017-08-14T20:11:43 |
|
networkx/networkx | 2,618 | networkx__networkx-2618 | [
"2546"
] | 3f4fd85765bf2d88188cfd4c84d0707152e6cd1e | diff --git a/networkx/release.py b/networkx/release.py
--- a/networkx/release.py
+++ b/networkx/release.py
@@ -101,9 +101,9 @@ def writefile():
# This is *good*, and the most likely place users will be when
# running setup.py. We do not want to overwrite version.py.
# Grab the version so that setup can use it.
- sys.path.insert(0, basedir)
+ #sys.path.insert(0, basedir)
from version import version
- del sys.path[0]
+ #del sys.path[0]
else:
# This is *bad*. It means the user might have a tarball that
# does not include version.py. Let this error raise so we can
@@ -152,7 +152,7 @@ def get_info(dynamic=True):
# This is where most final releases of NetworkX will be.
# All info should come from version.py. If it does not exist, then
# no vcs information will be provided.
- sys.path.insert(0, basedir)
+ #sys.path.insert(0, basedir)
try:
from version import date, date_info, version, version_info, vcs_info
except ImportError:
@@ -160,7 +160,7 @@ def get_info(dynamic=True):
vcs_info = (None, (None, None))
else:
revision = vcs_info[1][0]
- del sys.path[0]
+ #del sys.path[0]
if import_failed or (dynamic and not dynamic_failed):
# We are here if:
| `networkx.version` shadows any other module named `version` if imported first
Steps to reproduce:
```
$ pip freeze | grep networkx
networkx==1.11
$ touch version.py
$ python -c 'import version; print(version)'
<module 'version' from '/Users/ben/scratch/version.py'>
$ python -c 'import networkx; import version; print(version)'
<module 'version' from '/Users/ben/.virtualenvs/personal/lib/python3.6/site-packages/networkx/version.py'>
```
Reading the code, it looks like the `release` module is adding the networkx package to `sys.path`, importing version and deleting it again?
| Perhaps this may be fixed by moving some of the logic in release.py to setup.py using some static parsing? In one of my modules, my setup.py looks in the __init__.py file and statically parses the value of `__version__`. Here is that code adapted to networkx:
```python
def parse_version():
    """ Statically parse the version number from version.py """
    from os.path import dirname, join
    import ast
    init_fpath = join(dirname(__file__), 'networkx', 'version.py')
    with open(init_fpath) as file_:
        sourcecode = file_.read()
    pt = ast.parse(sourcecode)

    class VersionVisitor(ast.NodeVisitor):
        def visit_Assign(self, node):
            for target in node.targets:
                # only simple `version = ...` targets have an .id attribute
                if isinstance(target, ast.Name) and target.id == 'version':
                    self.version = node.value.s

    visitor = VersionVisitor()
    visitor.visit(pt)
    return visitor.version
```
The above function assumes you are running from `setup.py`.
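For illustration, here is how a setup script might consume it (a hypothetical sketch, not the actual networkx setup.py):
```python
from setuptools import setup

setup(
    name='networkx',
    version=parse_version(),  # statically parsed; no sys.path manipulation needed
)
```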
If the release.py code isn't actively being used in the networkx module, this may be a good alternative. | 2017-08-16T06:31:13 |
|
networkx/networkx | 2,632 | networkx__networkx-2632 | [
"2631"
] | ca839ad24da9ec72abf920eaa33d3fce0a888f4a | diff --git a/networkx/algorithms/structuralholes.py b/networkx/algorithms/structuralholes.py
--- a/networkx/algorithms/structuralholes.py
+++ b/networkx/algorithms/structuralholes.py
@@ -97,7 +97,8 @@ def effective_size(G, nodes=None, weight=None):
undirected graphs when computing neighbors of ``v``.
nodes : container, optional
- Container of nodes in the graph ``G``.
+ Container of nodes in the graph ``G`` to compute the effective size.
+ If None, the effective size of every node is computed.
weight : None or string, optional
If None, all edge weights are considered equal.
@@ -145,7 +146,7 @@ def redundancy(G, u, v, weight=None):
nodes = G
# Use Borgatti's simplified formula for unweighted and undirected graphs
if not G.is_directed() and weight is None:
- for v in G:
+ for v in nodes:
# Effective size is not defined for isolated nodes
if len(G[v]) == 0:
effective_size[v] = float('nan')
@@ -153,7 +154,7 @@ def redundancy(G, u, v, weight=None):
E = nx.ego_graph(G, v, center=False, undirected=True)
effective_size[v] = len(E) - (2 * E.size()) / len(E)
else:
- for v in G:
+ for v in nodes:
# Effective size is not defined for isolated nodes
if len(G[v]) == 0:
effective_size[v] = float('nan')
@@ -186,7 +187,8 @@ def constraint(G, nodes=None, weight=None):
The graph containing ``v``. This can be either directed or undirected.
nodes : container, optional
- Container of nodes in the graph ``G``.
+ Container of nodes in the graph ``G`` to compute the constraint. If
+ None, the constraint of every node is computed.
weight : None or string, optional
If None, all edge weights are considered equal.
| Parameter 'nodes' is not effective in nx.effective_size()
https://github.com/networkx/networkx/blob/ca839ad24da9ec72abf920eaa33d3fce0a888f4a/networkx/algorithms/structuralholes.py#L144-L162
The parameter `nodes` is no longer used after it is assigned the default value `G` at line 145. As the function `nx.constraint()` in the same file does, the for-loops in this code snippet should iterate over `nodes`, not `G`.
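A minimal sketch of the reported behavior (the output shown is what the fix in this PR produces):
```python
import networkx as nx

G = nx.cycle_graph(5)
print(nx.effective_size(G, nodes=[0]))
# fixed: {0: 2.0} -- only the requested node;
# the buggy loop iterated over G and returned entries for all 5 nodes
```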
| 2017-08-22T05:17:04 |
||
networkx/networkx | 2,633 | networkx__networkx-2633 | [
"2630"
] | 4e44160983b80407b4ff2cced4fc5b0b6be1d9e8 | diff --git a/networkx/classes/graph.py b/networkx/classes/graph.py
--- a/networkx/classes/graph.py
+++ b/networkx/classes/graph.py
@@ -711,6 +711,9 @@ def nodes(self):
self.__dict__['nodes'] = nodes
return nodes
+ # for backwards compatibility with 1.x, will be removed for 3.x
+ node = nodes
+
def number_of_nodes(self):
"""Return the number of nodes in the graph.
| diff --git a/networkx/classes/tests/test_graph.py b/networkx/classes/tests/test_graph.py
--- a/networkx/classes/tests/test_graph.py
+++ b/networkx/classes/tests/test_graph.py
@@ -9,6 +9,12 @@
from networkx.testing.utils import *
+def test_deprecated():
+ # for backwards compatibility with 1.x, will be removed for 3.x
+ G = nx.complete_graph(3)
+ assert_equal(G.node, {0: {}, 1: {}, 2: {}})
+
+
class BaseGraphTester(object):
""" Tests for data-structure independent graph class features."""
| Deprecation warnings for G.node and G.edge?
@dschult , @hagberg While trying to get scikit-image working with 2.0, I've started wondering if we should provide ``G.node`` and ``G.edge`` for backward compatibility until we release 2.1 or so.
Here is the scikit-image PR:
https://github.com/scikit-image/scikit-image/pull/2766
| That seems like a reasonable idea.
I think if we leave it for v2.0 it will be hard to change it for v2.1 where people don't expect strong changes in API. But, I am fine with leaving ```G.node``` and ```G.edge``` until v?.? determined at a later date. I suppose we could keep ```nodes_iter/edges_iter``` also pointing to ```nodes/edges``` but I'd prefer to have them change to the non-```_iter``` version (which works with v1.x code too).
Either way we might want to advertise a way to make graphs from v2.0 look like v1.x graphs. That way people can use code that will work for either. I guess that should be put into the migration document.
The other major change is that copy/subgraph/reverse/edge_subgraph have changed the level of deep-ness of their copying. I'm not sure what the best way of handling that is. It depends whether their code needs a certain level of deep-ness or not. Most of the time the v2 deep-ness works: a simple ```.copy()``` should be enough. But if their graph has edge data that is a container and they change it on the subgraph, they will need a ```deepcopy```. Note too that using ```deepcopy``` on a graphview will copy both the view object and the underlying graph object.
Here is some code to make v2.x look like v1.x except for deep-ness of copies.
```
import types

if not hasattr(G, "nodes_iter"):
    G.nodes_iter = G.nodes  # or make them remove _iter from their code.
    G.edges_iter = G.edges  # or make them remove _iter from their code.
    # same for G.degree_iter, G.adjacency_iter, G.adjacency_list, G.neighbors_iter
    G.node = G.nodes
    G.edge = G.adj

def subgraph(self, nbunch):
    induced_nodes = nx.filters.show_nodes(self.nbunch_iter(nbunch))
    return nx.graphviews.SubGraph(self, induced_nodes).copy()  # Notice the .copy()

# bind as a method so `self` is filled in when calling G.subgraph(...)
G.subgraph = types.MethodType(subgraph, G)
```
If we re-introduce ```G.node``` and ```G.edge``` and tell people to remove the ```_iter``` from their code I think we don't need the above conversion. The only compatibility issue left is the level of deep-ness of copy/subgraph/reverse. We can say: if in doubt, deepcopy. But I suspect most of the time they shouldn't deepcopy.
I don't know a way to make deprecation warnings on use of an attribute.
Perhaps we should just leave them in the API long term...
I haven't checked whether this works, but I was thinking we could do something like:
```
@property
def node(self):
    return self._node

@property
def edge(self):
    return self._adj

def nodes_iter(self, data=False):
    # a list is still iterable, so 1.x-style loops keep working
    return list(self.nodes(data))

def edges_iter(self, *args, **kwargs):
    return list(self.edges(*args, **kwargs))
```
But with warnings like this for all four methods (I didn't want to clutter the above snippets):
```
@property
def node(self):
"""DEPRECATED: Replaced by ``nodes``."""
msg = "node is deprecated and will be removed" \
"in 2.1, use nodes instead."
_warnings.warn(msg, DeprecationWarning)
return self._node
```
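If adopted, triggering it would look roughly like this (a sketch; assumes the property above lives on the Graph class):
```python
import warnings
import networkx as nx

warnings.simplefilter('always', DeprecationWarning)
G = nx.path_graph(2)
G.node  # DeprecationWarning: node is deprecated and will be removed in 2.1, use nodes instead.
```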
Do you think something like that might work? I can make a PR this evening and test it with ``scikit-image``.
Nice use of @property. :)
There are other ```_iter``` functions too. If these work we could do the same for them.
Q: will the scikit-image code then produce deprecation warnings when they use code that works for both v1.x and v2? How do we best serve packages which want to support old and new versions of our code? | 2017-08-22T16:30:10 |
networkx/networkx | 2,647 | networkx__networkx-2647 | [
"2646"
] | 386b71a7af6c4898331f62987d8ced3f5621b680 | diff --git a/doc/conf.py b/doc/conf.py
--- a/doc/conf.py
+++ b/doc/conf.py
@@ -199,7 +199,7 @@
latex_appendices = ['tutorial']
# Intersphinx mapping
-intersphinx_mapping = {'https://docs.python.org/': None,
+intersphinx_mapping = {'https://docs.python.org/2/': None,
'https://docs.scipy.org/doc/numpy/': None,
}
| Readthedocs pain
Readthedocs (RTD) is a pain to work with and keeps having timeout errors. I started to look into whether we can build the docs on our own and push them to RTD instead of having it built on the site. It would also make more sense to have the doc build process as part of our CI process, rather than only checked after the fact.
Has there been any discussion about moving away from RTD before (at least the build process)? If so, was there a reason not to move? I assume it is too late to move back to hosting the docs on github, but I thought I'd check since it might be easier to do.
| When we moved to RTD (from hosting on github) the idea was that it would be simpler engineering than building and pushing docs to github. If there is a CI approach that is reasonable to maintain that seems like a good path forward.
I will look into our options then. | 2017-09-03T02:25:49 |
|
networkx/networkx | 2,666 | networkx__networkx-2666 | [
"2664"
] | 5b16b92241d0b0bba40f8974ccb50aada7104622 | diff --git a/networkx/readwrite/gexf.py b/networkx/readwrite/gexf.py
--- a/networkx/readwrite/gexf.py
+++ b/networkx/readwrite/gexf.py
@@ -345,7 +345,7 @@ def add_edges(self, G, graph_element):
def edge_key_data(G):
# helper function to unify multigraph and graph edge iterator
if G.is_multigraph():
- for u, v, data, key in G.edges(data=True, keys=True):
+ for u, v, key, data in G.edges(data=True, keys=True):
edge_data = data.copy()
edge_data.update(key=key)
edge_id = edge_data.pop('id', None)
| Potential bug in networkx.write_gexf()
I'm using `2.0rc2.dev20170911220036`
``` bash
=> pip freeze | grep networkx
networkx==2.0rc2.dev20170911220036
```
and when I try to write to gexf `networkx.write_gexf(G,filename)`, it gave me the following error:
```bash
Traceback (most recent call last):
File "graph_nx2.py", line 434, in <module>
verbose=True)
File "graph_nx2.py", line 55, in __init__
self.OutputGEXF()
File "graph_nx2.py", line 422, in OutputGEXF
nx.write_gexf(self.G, self.file_name)
File "<decorator-gen-388>", line 2, in write_gexf
File "/home/me/anaconda2/envs/nx2/lib/python2.7/site-packages/networkx-2.0rc2.dev20170911220036-py2.7.egg/networkx/utils/decorators.py", line 224, in _open_file
result = func(*new_args, **kwargs)
File "/home/me/anaconda2/envs/nx2/lib/python2.7/site-packages/networkx-2.0rc2.dev20170911220036-py2.7.egg/networkx/readwrite/gexf.py", line 88, in write_gexf
writer.add_graph(G)
File "/home/me/anaconda2/envs/nx2/lib/python2.7/site-packages/networkx-2.0rc2.dev20170911220036-py2.7.egg/networkx/readwrite/gexf.py", line 306, in add_graph
self.add_edges(G, graph_element)
File "/home/me/anaconda2/envs/nx2/lib/python2.7/site-packages/networkx-2.0rc2.dev20170911220036-py2.7.egg/networkx/readwrite/gexf.py", line 363, in add_edges
for u, v, key, edge_data in edge_key_data(G):
File "/home/me/anaconda2/envs/nx2/lib/python2.7/site-packages/networkx-2.0rc2.dev20170911220036-py2.7.egg/networkx/readwrite/gexf.py", line 349, in edge_key_data
edge_data = data.copy()
AttributeError: 'int' object has no attribute 'copy'
```
After looking into the source code, I found this code block in `networkx.readwrite.gexf` for the `MultiDiGraph` that I'm using:
```python
def add_edges(self, G, graph_element):
def edge_key_data(G):
# helper function to unify multigraph and graph edge iterator
if G.is_multigraph():
for u, v, data, key in G.edges(data=True, keys=True):
edge_data = data.copy()
edge_data.update(key=key)
edge_id = edge_data.pop('id', None)
if edge_id is None:
edge_id = next(self.edge_id)
yield u, v, edge_id, edge_data
```
Seems like the problem is `u, v, data, key in G.edges(data=True, keys=True)`, which I believe should instead be `u, v, key, data in G.edges(data=True, keys=True)`, with `data` and `key` swapped.
The reason is that I get the following:
```python
>>> G.edges(data=True, keys=True)
... ('node0', 'node1', 0, {'variable': 'g', 'delay': 0.0}) ...
```
Could this be a bug? Or am I doing something wrong?
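For reference, a minimal self-contained repro (edge attributes taken from the output above):
```python
import networkx as nx

G = nx.MultiDiGraph()
G.add_edge('node0', 'node1', variable='g', delay=0.0)
nx.write_gexf(G, 'out.gexf')
# before the swap: `data` is unpacked as the integer key 0,
# so data.copy() raises AttributeError as in the traceback
```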
| This looks like a bug -- Thanks for the detailed report!
I think it's just a matter of switching order, but I'll check to make sure. | 2017-09-12T13:33:49 |
|
networkx/networkx | 2,670 | networkx__networkx-2670 | [
"2667"
] | 108ff600c455582ca3fba54e91eccdda604a330c | diff --git a/networkx/algorithms/tree/mst.py b/networkx/algorithms/tree/mst.py
--- a/networkx/algorithms/tree/mst.py
+++ b/networkx/algorithms/tree/mst.py
@@ -537,6 +537,8 @@ def minimum_spanning_tree(G, weight='weight', algorithm='kruskal',
There may be more than one tree with the same minimum or maximum weight.
See :mod:`networkx.tree.recognition` for more detailed definitions.
+ Isolated nodes with self-loops are in the tree as edgeless isolated nodes.
+
"""
edges = minimum_spanning_edges(G, algorithm, weight, keys=True,
data=True, ignore_nan=ignore_nan)
@@ -596,6 +598,8 @@ def maximum_spanning_tree(G, weight='weight', algorithm='kruskal',
There may be more than one tree with the same minimum or maximum weight.
See :mod:`networkx.tree.recognition` for more detailed definitions.
+ Isolated nodes with self-loops are in the tree as edgeless isolated nodes.
+
"""
edges = maximum_spanning_edges(G, algorithm, weight, keys=True,
data=True, ignore_nan=ignore_nan)
| Minimum spanning tree doesn't return all of the nodes
Hello,
I am having an issue with an undirected connected graph. When I get the MST, the function returns fewer nodes than the original graph:
```
mst = nx.minimum_spanning_tree(q)
print mst.number_of_nodes(), q.number_of_nodes()
```
This returns:
`30831 30833`
Any idea why this is happening, and how I can fix it?
Thanks,
Jose
| The nodes of the minimum spanning tree should be the same as those from the original graph. We do add the edges we've found for the tree after adding the nodes from the original graph, so it is possible that a "bug" would create extra nodes in ```mst0``` that are not in ```q```. But you are getting fewer nodes in ```mst0``` and I don't see how that could happen with the current code.
Which version of NetworkX are you using?
Is it helpful to find out the extra nodes? ```set(mst0) ^ set(q)```
Hey!
Thank you for the fast response. I am using 1.11:
```
pip freeze | grep networkx
networkx==1.11
```
I tried checking the nodes to get an insight. The nodes have an attribute, but the graph is unweighted. (The attribute is not called weight)
I mean: what are the nodes that get added? You can see that by using the command:
```set(mst0) ^ set(q) # provides the set elements that are in either set but not both```
Hopefully it will just give you two nodes.
I'm sorry, I don't get what you are saying. I don't get nodes added, I get _fewer_ nodes.
I think I was able to isolate the issue. MST doesn't play well with graphs with selfloops:
```
import networkx as nx

G1 = nx.Graph()
G1.add_nodes_from([1, 2, 3])
G1.add_edge(2, 2)
print G1.number_of_nodes()  # 3
print nx.minimum_spanning_tree(G1).number_of_nodes()  # 2
```
The output should be 3 and 3, but it's 3 and 2. Notice that it doesn't matter if the graph is disconnected (like this one). The issue persists for directed graphs.
Thank you for that simple example!
This weakness is partially fixed in NetworkX 2.0 (that is, with the current Github development branch OR the pypi version 2.0rc1 using ```pip install networkx==2.0rc1```).
Only partially fixed because the selfloop edges still aren't part of the mst forest, but the nodes are.
It looks like the minimum spanning tree of a single node graph with a self-loop is just the node (That node is itself a spanning tree by all definitions I have found; and including the edge will only increase the weight of the tree).
So it looks like the v2.0 reporting of the MST is correct (it includes isolated nodes but not self-loops on isolated nodes). I will add a note to the documentation that this is the case.
V1.11 has a bug where it doesn't report any isolated nodes which have self-loops.
Thanks for catching and reporting this! | 2017-09-16T17:17:18 |
|
networkx/networkx | 2,713 | networkx__networkx-2713 | [
"2600"
] | 9f6c9cd6a561d41192bc29f14fd9bc16bcaad919 | diff --git a/networkx/algorithms/community/quality.py b/networkx/algorithms/community/quality.py
--- a/networkx/algorithms/community/quality.py
+++ b/networkx/algorithms/community/quality.py
@@ -114,7 +114,10 @@ def inter_community_edges(G, partition):
# for block in partition))
# return sum(1 for u, v in G.edges() if aff[u] != aff[v])
#
- return nx.quotient_graph(G, partition, create_using=nx.MultiGraph()).size()
+ if G.is_directed():
+ return nx.quotient_graph(G, partition, create_using=nx.MultiDiGraph()).size()
+ else:
+ return nx.quotient_graph(G, partition, create_using=nx.MultiGraph()).size()
def inter_community_non_edges(G, partition):
| diff --git a/networkx/algorithms/community/tests/test_quality.py b/networkx/algorithms/community/tests/test_quality.py
--- a/networkx/algorithms/community/tests/test_quality.py
+++ b/networkx/algorithms/community/tests/test_quality.py
@@ -12,6 +12,7 @@
"""
from __future__ import division
+from nose.tools import assert_equal
from nose.tools import assert_almost_equal
import networkx as nx
@@ -19,6 +20,7 @@
from networkx.algorithms.community import coverage
from networkx.algorithms.community import modularity
from networkx.algorithms.community import performance
+from networkx.algorithms.community.quality import inter_community_edges
class TestPerformance(object):
@@ -61,3 +63,17 @@ def test_modularity():
assert_almost_equal(-16 / (14 ** 2), modularity(G, C))
C = [{0, 1, 2}, {3, 4, 5}]
assert_almost_equal((35 * 2) / (14 ** 2), modularity(G, C))
+
+
+def test_inter_community_edges_with_digraphs():
+ G = nx.complete_graph(2, create_using = nx.DiGraph())
+ partition = [{0}, {1}]
+ assert_equal(inter_community_edges(G, partition), 2)
+
+ G = nx.complete_graph(10, create_using = nx.DiGraph())
+ partition = [{0}, {1, 2}, {3, 4, 5}, {6, 7, 8, 9}]
+ assert_equal(inter_community_edges(G, partition), 70)
+
+ G = nx.cycle_graph(4, create_using = nx.DiGraph())
+ partition = [{0, 1}, {2, 3}]
+ assert_equal(inter_community_edges(G, partition), 2)
| inter_community_non_edges ignores directionality
Hi,
I think the function:
nx.algorithms.community.quality.inter_community_non_edges()
does not work properly for directed graphs. It always returns the non-edges of an undirected graph, basically halving the number of edges. This means that the performance function (nx.algorithms.community.performance) will never be higher than 50% for a directed graph.
I'm using version '2.0.dev_20170801111157', python 3.5.1
Best,
Nicolas
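(For concreteness, a sketch of the directed counting the patch above introduces, mirroring the new tests for `inter_community_edges`:)
```python
import networkx as nx

G = nx.cycle_graph(4, create_using=nx.DiGraph())
partition = [{0, 1}, {2, 3}]
M = nx.quotient_graph(G, partition, create_using=nx.MultiDiGraph())
print(M.size())   # 2 directed inter-community edges: 1->2 and 3->0
```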
| Can you provide a short example to demonstrate this?
It will be helpful to know more precisely what the trouble is.
Thanks!
Hi,
In the example below, you will find that the number of intra_community_edges is multiplied by two when the graph is transformed into a directed one, but the number of inter-community non-edges does not change. I found this bug using real-life data imported from pandas dataframes.
Thanks!
Nicolas
```python
import networkx as nx

G = nx.random_partition_graph([10, 10, 10], .25, .01, seed=12345)
partition = G.graph['partition']
perf = nx.algorithms.community.performance(G, partition)
intra_edges = nx.algorithms.community.quality.intra_community_edges(G, partition)
inter_edges = nx.algorithms.community.quality.inter_community_non_edges(G, partition)
print('undirected')
print(perf)
print(intra_edges)
print(inter_edges)

G2 = G.to_directed()
partition = G2.graph['partition']
perf = nx.algorithms.community.performance(G2, partition)
intra_edges = nx.algorithms.community.quality.intra_community_edges(G2, partition)
inter_edges = nx.algorithms.community.quality.inter_community_non_edges(G2, partition)
print('directed')
print(perf)
print(intra_edges)
print(inter_edges)
```
| 2017-10-15T17:09:15 |
networkx/networkx | 2,721 | networkx__networkx-2721 | [
"2432"
] | 49fb7d68040d0709f7d12f6626b16007015d96e8 | diff --git a/networkx/readwrite/nx_shp.py b/networkx/readwrite/nx_shp.py
--- a/networkx/readwrite/nx_shp.py
+++ b/networkx/readwrite/nx_shp.py
@@ -23,7 +23,7 @@
__all__ = ['read_shp', 'write_shp']
-def read_shp(path, simplify=True, geom_attrs=True):
+def read_shp(path, simplify=True, geom_attrs=True, strict=True):
"""Generates a networkx.DiGraph from shapefiles. Point geometries are
translated into nodes, lines into edges. Coordinate tuples are used as
keys. Attributes are preserved, line geometries are simplified into start
@@ -53,11 +53,27 @@ def read_shp(path, simplify=True, geom_attrs=True):
the edge geometry as well (as they do when they are read via
this method) and they change, your geomety will be out of sync.
+ strict: bool
+ If True, raise NetworkXError when feature geometry is missing or
+ GeometryType is not supported.
+ If False, silently ignore missing or unsupported geometry in features.
Returns
-------
G : NetworkX graph
+ Raises
+ ------
+ ImportError
+ If ogr module is not available.
+
+ RuntimeError
+ If file cannot be open or read.
+
+ NetworkXError
+ If strict=True and feature is missing geometry or GeometryType is
+ not supported.
+
Examples
--------
>>> G=nx.read_shp('test.shp') # doctest: +SKIP
@@ -76,11 +92,18 @@ def read_shp(path, simplify=True, geom_attrs=True):
net = nx.DiGraph()
shp = ogr.Open(path)
+ if shp is None:
+ raise RuntimeError("Unable to open {}".format(path))
for lyr in shp:
fields = [x.GetName() for x in lyr.schema]
for f in lyr:
- flddata = [f.GetField(f.GetFieldIndex(x)) for x in fields]
g = f.geometry()
+ if g is None:
+ if strict:
+ raise nx.NetworkXError("Bad data: feature missing geometry")
+ else:
+ continue
+ flddata = [f.GetField(f.GetFieldIndex(x)) for x in fields]
attributes = dict(zip(fields, flddata))
attributes["ShpName"] = lyr.GetName()
# Note: Using layer level geometry type
@@ -94,8 +117,9 @@ def read_shp(path, simplify=True, geom_attrs=True):
net.add_edge(e1, e2)
net[e1][e2].update(attr)
else:
- raise ImportError("GeometryType {} not supported".
- format(g.GetGeometryType()))
+ if strict:
+ raise nx.NetworkXError("GeometryType {} not supported".
+ format(g.GetGeometryType()))
return net
@@ -256,13 +280,13 @@ def create_feature(geometry, lyr, attributes=None):
# New edge attribute write support merged into edge loop
fields = {} # storage for field names and their data types
- attributes = {} # storage for attribute data (indexed by field names)
# Conversion dict between python and ogr types
OGRTypes = {int: ogr.OFTInteger, str: ogr.OFTString, float: ogr.OFTReal}
# Edge loop
for e in G.edges(data=True):
+ attributes = {} # storage for attribute data (indexed by field names)
data = G.get_edge_data(*e)
g = netgeometry(e, data)
# Loop through attribute data in edges
| diff --git a/networkx/readwrite/tests/test_shp.py b/networkx/readwrite/tests/test_shp.py
--- a/networkx/readwrite/tests/test_shp.py
+++ b/networkx/readwrite/tests/test_shp.py
@@ -5,6 +5,7 @@
import tempfile
from nose import SkipTest
from nose.tools import assert_equal
+from nose.tools import raises
import networkx as nx
@@ -208,3 +209,93 @@ def test_wkt_export(self):
def tearDown(self):
self.deletetmp(self.drv, self.testdir, self.shppath)
+
+
+@raises(RuntimeError)
+def test_read_shp_nofile():
+ try:
+ from osgeo import ogr
+ except ImportError:
+ raise SkipTest('ogr not available.')
+ G = nx.read_shp("hopefully_this_file_will_not_be_available")
+
+
+class TestMissingGeometry(object):
+ @classmethod
+ def setup_class(cls):
+ global ogr
+ try:
+ from osgeo import ogr
+ except ImportError:
+ raise SkipTest('ogr not available.')
+
+ def setUp(self):
+ self.setup_path()
+ self.delete_shapedir()
+ self.create_shapedir()
+
+ def tearDown(self):
+ self.delete_shapedir()
+
+ def setup_path(self):
+ self.path = os.path.join(tempfile.gettempdir(), 'missing_geometry')
+
+ def create_shapedir(self):
+ drv = ogr.GetDriverByName("ESRI Shapefile")
+ shp = drv.CreateDataSource(self.path)
+ lyr = shp.CreateLayer("nodes", None, ogr.wkbPoint)
+ feature = ogr.Feature(lyr.GetLayerDefn())
+ feature.SetGeometry(None)
+ lyr.CreateFeature(feature)
+ feature.Destroy()
+
+ def delete_shapedir(self):
+ drv = ogr.GetDriverByName("ESRI Shapefile")
+ if os.path.exists(self.path):
+ drv.DeleteDataSource(self.path)
+
+ @raises(nx.NetworkXError)
+ def test_missing_geometry(self):
+ G = nx.read_shp(self.path)
+
+
+class TestMissingAttrWrite(object):
+ @classmethod
+ def setup_class(cls):
+ global ogr
+ try:
+ from osgeo import ogr
+ except ImportError:
+ raise SkipTest('ogr not available.')
+
+ def setUp(self):
+ self.setup_path()
+ self.delete_shapedir()
+
+ def tearDown(self):
+ self.delete_shapedir()
+
+ def setup_path(self):
+ self.path = os.path.join(tempfile.gettempdir(), 'missing_attributes')
+
+ def delete_shapedir(self):
+ drv = ogr.GetDriverByName("ESRI Shapefile")
+ if os.path.exists(self.path):
+ drv.DeleteDataSource(self.path)
+
+ def test_missing_attributes(self):
+ G = nx.DiGraph()
+ A = (0, 0)
+ B = (1, 1)
+ C = (2, 2)
+ G.add_edge(A, B, foo=100)
+ G.add_edge(A, C)
+
+ nx.write_shp(G, self.path)
+ H = nx.read_shp(self.path)
+
+ for u, v, d in H.edges(data=True):
+ if u == A and v == B:
+ assert_equal(d['foo'], 100)
+ if u == A and v == C:
+ assert_equal(d['foo'], None)
| Bug in write_shp when some edges miss attribute(s)
When writing shapefiles with v1.11, `write_shp` reuses the last valid values for attributes that an edge doesn't have. The reason is that the values of missing keys in the `attributes` dictionary are not updated in the edge loop. A work-around is to make sure all attributes are set before calling `write_shp`, or to remove edges with missing attributes, but this requires a separate edge loop. So, it would be nice to have a fix integrated in `write_shp`. Thanks!
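A sketch of that work-around (the attribute name `foo` is illustrative and the output directory is hypothetical):
```python
import networkx as nx

G = nx.DiGraph()
G.add_edge((0, 0), (1, 1), foo=100)
G.add_edge((0, 0), (2, 2))  # edge missing the 'foo' attribute

# Separate edge loop: give every edge the same attribute keys so a
# stale value from a previous edge can never be reused.
for u, v, data in G.edges(data=True):
    data.setdefault('foo', None)

nx.write_shp(G, '/tmp/shape_dir')
```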
| 2017-10-25T05:03:48 |
|
networkx/networkx | 2,759 | networkx__networkx-2759 | [
"2726",
"2726"
] | 94e908ab14103a304f20754861e06c6fb69e52ef | diff --git a/networkx/classes/function.py b/networkx/classes/function.py
--- a/networkx/classes/function.py
+++ b/networkx/classes/function.py
@@ -258,7 +258,13 @@ def add_path(G, nodes, **attr):
>>> nx.add_path(G, [0, 1, 2, 3])
>>> nx.add_path(G, [10, 11, 12], weight=7)
"""
- G.add_edges_from(pairwise(nodes), **attr)
+ nlist = iter(nodes)
+ try:
+ first_node = next(nlist)
+ except StopIteration:
+ return
+ G.add_node(first_node)
+ G.add_edges_from(pairwise(chain((first_node,), nlist)), **attr)
def add_cycle(G, nodes, **attr):
| diff --git a/networkx/classes/tests/test_function.py b/networkx/classes/tests/test_function.py
--- a/networkx/classes/tests/test_function.py
+++ b/networkx/classes/tests/test_function.py
@@ -81,6 +81,42 @@ def test_add_path(self):
(13, 14, {'weight': 2.}),
(14, 15, {'weight': 2.})])
+ G = self.G.copy()
+ nlist = [None]
+ nx.add_path(G, nlist)
+ assert_edges_equal(G.edges(nlist), [])
+ assert_nodes_equal(G, list(self.G) + [None])
+
+ G = self.G.copy()
+ nlist = iter([None])
+ nx.add_path(G, nlist)
+ assert_edges_equal(G.edges([None]), [])
+ assert_nodes_equal(G, list(self.G) + [None])
+
+ G = self.G.copy()
+ nlist = [12]
+ nx.add_path(G, nlist)
+ assert_edges_equal(G.edges(nlist), [])
+ assert_nodes_equal(G, list(self.G) + [12])
+
+ G = self.G.copy()
+ nlist = iter([12])
+ nx.add_path(G, nlist)
+ assert_edges_equal(G.edges([12]), [])
+ assert_nodes_equal(G, list(self.G) + [12])
+
+ G = self.G.copy()
+ nlist = []
+ nx.add_path(G, nlist)
+ assert_edges_equal(G.edges, self.G.edges)
+ assert_nodes_equal(G, list(self.G))
+
+ G = self.G.copy()
+ nlist = iter([])
+ nx.add_path(G, nlist)
+ assert_edges_equal(G.edges, self.G.edges)
+ assert_nodes_equal(G, list(self.G))
+
def test_add_cycle(self):
G = self.G.copy()
nlist = [12, 13, 14, 15]
| Add a node when add_path involves no edge.
`add_path` is convenient for adding edges without adding nodes explicitly.
Consider a case where there exists no edge to be added:
```
G = nx.Graph()
nx.add_path(G, [0])
```
If one wants to test the neighbors of a node in a graph, `G.neighbors(0)` will throw `networkx.exception.NetworkXError: The node 0 is not in the graph`.
Obviously, one can check whether the argument for `add_path` is a singleton and explicitly do
```
path = [0] # or something else
G = nx.Graph()
if len(path) == 1:
G.add_node(path[0])
nx.add_path(G, path)
G.neighbors(0)
```
Should adding a zero-edge path with a single node add a node in the graph if the node isn’t already in the graph?
This also applies to `add_star`.
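For illustration, a quick sketch of the expected behavior once the patch above is applied (it mirrors the tests added above):
```python
import networkx as nx

G = nx.Graph()
nx.add_path(G, [0])                # zero-edge path: node 0 is added
assert list(G.neighbors(0)) == []  # no longer raises NetworkXError
nx.add_path(G, [])                 # an empty iterable leaves G unchanged
```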
| I think I have to agree that a path of length zero (and a star with no leaves) should be included. Would you like to create a PR with this fix? | 2017-11-19T02:57:43 |
networkx/networkx | 2,760 | networkx__networkx-2760 | [
"2757"
] | 94e908ab14103a304f20754861e06c6fb69e52ef | diff --git a/networkx/drawing/nx_pylab.py b/networkx/drawing/nx_pylab.py
--- a/networkx/drawing/nx_pylab.py
+++ b/networkx/drawing/nx_pylab.py
@@ -151,6 +151,17 @@ def draw_networkx(G, pos=None, arrows=True, with_labels=True, **kwds):
arrows : bool, optional (default=True)
For directed graphs, if True draw arrowheads.
+ Note: Arrows will be the same color as edges.
+
+ arrowstyle : str, optional (default='-|>')
+        For directed graphs, choose the style of the arrowheads.
+ See :py:class: `matplotlib.patches.ArrowStyle` for more
+ options.
+
+ arrowsize : int, optional (default=10)
+        For directed graphs, choose the size of the arrow head's length and
+ width. See :py:class: `matplotlib.patches.FancyArrowPatch` for attribute
+ `mutation_scale` for more info.
with_labels : bool, optional (default=True)
Set to True to draw labels on the nodes.
@@ -229,10 +240,8 @@ def draw_networkx(G, pos=None, arrows=True, with_labels=True, **kwds):
Notes
-----
- For directed graphs, "arrows" (actually just thicker stubs) are drawn
- at the head end. Arrows can be turned off with keyword arrows=False.
- Yes, it is ugly but drawing proper arrows with Matplotlib this
- way is tricky.
+ For directed graphs, arrows are drawn at the head end. Arrows can be
+ turned off with keyword arrows=False.
Examples
--------
@@ -409,12 +418,17 @@ def draw_networkx_edges(G, pos,
edge_color='k',
style='solid',
alpha=1.0,
+ arrowstyle='-|>',
+ arrowsize=10,
edge_cmap=None,
edge_vmin=None,
edge_vmax=None,
ax=None,
arrows=True,
label=None,
+ node_size=300,
+ nodelist=None,
+ node_shape="o",
**kwds):
"""Draw the edges of the graph G.
@@ -458,6 +472,17 @@ def draw_networkx_edges(G, pos,
arrows : bool, optional (default=True)
For directed graphs, if True draw arrowheads.
+ Note: Arrows will be the same color as edges.
+
+ arrowstyle : str, optional (default='-|>')
+ For directed graphs, choose the style of the arrow heads.
+ See :py:class: `matplotlib.patches.ArrowStyle` for more
+ options.
+
+ arrowsize : int, optional (default=10)
+        For directed graphs, choose the size of the arrow head's length and
+ width. See :py:class: `matplotlib.patches.FancyArrowPatch` for attribute
+ `mutation_scale` for more info.
label : [None| string]
Label for legend
@@ -467,18 +492,29 @@ def draw_networkx_edges(G, pos,
matplotlib.collection.LineCollection
`LineCollection` of the edges
+ list of matplotlib.patches.FancyArrowPatch
+ `FancyArrowPatch` instances of the directed edges
+
+ Depending whether the drawing includes arrows or not.
+
Notes
-----
- For directed graphs, "arrows" (actually just thicker stubs) are drawn
- at the head end. Arrows can be turned off with keyword arrows=False.
- Yes, it is ugly but drawing proper arrows with Matplotlib this
- way is tricky.
+ For directed graphs, arrows are drawn at the head end. Arrows can be
+ turned off with keyword arrows=False. Be sure to include `node_size' as a
+ keyword argument; arrows are drawn considering the size of nodes.
Examples
--------
>>> G = nx.dodecahedral_graph()
>>> edges = nx.draw_networkx_edges(G, pos=nx.spring_layout(G))
+ >>> G = nx.DiGraph()
+ >>> G.add_edges_from([(1, 2), (1, 3), (2, 3)])
+ >>> arcs = nx.draw_networkx_edges(G, pos=nx.spring_layout(G))
+ >>> alphas = [0.3, 0.4, 0.5]
+ >>> for i, arc in enumerate(arcs): # change alpha values of arcs
+ ... arc.set_alpha(alphas[i])
+
Also see the NetworkX drawing examples at
https://networkx.github.io/documentation/latest/auto_examples/index.html
@@ -494,8 +530,9 @@ def draw_networkx_edges(G, pos,
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.cbook as cb
- from matplotlib.colors import colorConverter, Colormap
+ from matplotlib.colors import colorConverter, Colormap, Normalize
from matplotlib.collections import LineCollection
+ from matplotlib.patches import FancyArrowPatch
import numpy as np
except ImportError:
raise ImportError("Matplotlib required for draw()")
@@ -512,6 +549,9 @@ def draw_networkx_edges(G, pos,
if not edgelist or len(edgelist) == 0: # no edges!
return None
+ if nodelist is None:
+ nodelist = list(G.nodes())
+
# set edge positions
edge_pos = np.asarray([(pos[e[0]], pos[e[1]]) for e in edgelist])
@@ -537,84 +577,112 @@ def draw_networkx_edges(G, pos,
# numbers (which are going to be mapped with a colormap)
edge_colors = None
else:
- raise ValueError('edge_color must consist of either color names or numbers')
+ raise ValueError('edge_color must contain color names or numbers')
else:
if is_string_like(edge_color) or len(edge_color) == 1:
edge_colors = (colorConverter.to_rgba(edge_color, alpha), )
else:
- raise ValueError(
- 'edge_color must be a single color or list of exactly m colors where m is the number or edges')
-
- edge_collection = LineCollection(edge_pos,
- colors=edge_colors,
- linewidths=lw,
- antialiaseds=(1,),
- linestyle=style,
- transOffset=ax.transData,
- )
-
- edge_collection.set_zorder(1) # edges go behind nodes
- edge_collection.set_label(label)
- ax.add_collection(edge_collection)
-
- # Note: there was a bug in mpl regarding the handling of alpha values for
- # each line in a LineCollection. It was fixed in matplotlib in r7184 and
- # r7189 (June 6 2009). We should then not set the alpha value globally,
- # since the user can instead provide per-edge alphas now. Only set it
- # globally if provided as a scalar.
- if cb.is_numlike(alpha):
- edge_collection.set_alpha(alpha)
-
- if edge_colors is None:
- if edge_cmap is not None:
- assert(isinstance(edge_cmap, Colormap))
- edge_collection.set_array(np.asarray(edge_color))
- edge_collection.set_cmap(edge_cmap)
- if edge_vmin is not None or edge_vmax is not None:
- edge_collection.set_clim(edge_vmin, edge_vmax)
- else:
- edge_collection.autoscale()
+ msg = 'edge_color must be a color or list of one color per edge'
+ raise ValueError(msg)
+
+ if (not G.is_directed() or not arrows):
+ edge_collection = LineCollection(edge_pos,
+ colors=edge_colors,
+ linewidths=lw,
+ antialiaseds=(1,),
+ linestyle=style,
+ transOffset=ax.transData,
+ )
+
+ edge_collection.set_zorder(1) # edges go behind nodes
+ edge_collection.set_label(label)
+ ax.add_collection(edge_collection)
+
+ # Note: there was a bug in mpl regarding the handling of alpha values
+ # for each line in a LineCollection. It was fixed in matplotlib by
+ # r7184 and r7189 (June 6 2009). We should then not set the alpha
+ # value globally, since the user can instead provide per-edge alphas
+ # now. Only set it globally if provided as a scalar.
+ if cb.is_numlike(alpha):
+ edge_collection.set_alpha(alpha)
+
+ if edge_colors is None:
+ if edge_cmap is not None:
+ assert(isinstance(edge_cmap, Colormap))
+ edge_collection.set_array(np.asarray(edge_color))
+ edge_collection.set_cmap(edge_cmap)
+ if edge_vmin is not None or edge_vmax is not None:
+ edge_collection.set_clim(edge_vmin, edge_vmax)
+ else:
+ edge_collection.autoscale()
+ return edge_collection
arrow_collection = None
if G.is_directed() and arrows:
+ # Note: Waiting for someone to implement arrow to intersection with
+ # marker. Meanwhile, this works well for polygons with more than 4
+ # sides and circle.
+
+ def to_marker_edge(marker_size, marker):
+ if marker in "s^>v<d": # `large` markers need extra space
+ return np.sqrt(2 * marker_size) / 2
+ else:
+ return np.sqrt(marker_size) / 2
- # a directed graph hack
- # draw thick line segments at head end of edge
- # waiting for someone else to implement arrows that will work
+ # Draw arrows with `matplotlib.patches.FancyarrowPatch`
+ arrow_collection = []
+ mutation_scale = arrowsize # scale factor of arrow head
arrow_colors = edge_colors
- a_pos = []
- p = 1.0 - 0.25 # make head segment 25 percent of edge length
- for src, dst in edge_pos:
+ if arrow_colors is None:
+ if edge_cmap is not None:
+ assert(isinstance(edge_cmap, Colormap))
+ else:
+ edge_cmap = plt.get_cmap() # default matplotlib colormap
+ if edge_vmin is None:
+ edge_vmin = min(edge_color)
+ if edge_vmax is None:
+ edge_vmax = max(edge_color)
+ color_normal = Normalize(vmin=edge_vmin, vmax=edge_vmax)
+
+ for i, (src, dst) in enumerate(edge_pos):
x1, y1 = src
x2, y2 = dst
- dx = x2 - x1 # x offset
- dy = y2 - y1 # y offset
- d = np.sqrt(float(dx**2 + dy**2)) # length of edge
- if d == 0: # source and target at same position
- continue
- if dx == 0: # vertical edge
- xa = x2
- ya = dy * p + y1
- if dy == 0: # horizontal edge
- ya = y2
- xa = dx * p + x1
+ arrow_color = None
+ line_width = None
+ shrink_source = 0 # space from source to tail
+ shrink_target = 0 # space from head to target
+ if cb.iterable(node_size): # many node sizes
+ src_node, dst_node = edgelist[i]
+ index_node = nodelist.index(dst_node)
+ marker_size = node_size[index_node]
+ shrink_target = to_marker_edge(marker_size, node_shape)
else:
- theta = np.arctan2(dy, dx)
- xa = p * d * np.cos(theta) + x1
- ya = p * d * np.sin(theta) + y1
-
- a_pos.append(((xa, ya), (x2, y2)))
-
- arrow_collection = LineCollection(a_pos,
- colors=arrow_colors,
- linewidths=[4 * ww for ww in lw],
- antialiaseds=(1,),
- transOffset=ax.transData,
- )
-
- arrow_collection.set_zorder(1) # edges go behind nodes
- ax.add_collection(arrow_collection)
+ shrink_target = to_marker_edge(node_size, node_shape)
+ if arrow_colors is None:
+ arrow_color = edge_cmap(color_normal(edge_color[i]))
+ elif len(arrow_colors) > 1:
+ arrow_color = arrow_colors[i]
+ else:
+ arrow_color = arrow_colors[0]
+ if len(lw) > 1:
+ line_width = lw[i]
+ else:
+ line_width = lw[0]
+ arrow = FancyArrowPatch((x1, y1), (x2, y2),
+ arrowstyle=arrowstyle,
+ shrinkA=shrink_source,
+ shrinkB=shrink_target,
+ mutation_scale=mutation_scale,
+ color=arrow_color,
+ linewidth=line_width,
+ zorder=1) # arrows go behind nodes
+
+ # There seems to be a bug in matplotlib to make collections of
+ # FancyArrowPatch instances. Until fixed, the patches are added
+ # individually to the axes instance.
+ arrow_collection.append(arrow)
+ ax.add_patch(arrow)
# update view
minx = np.amin(np.ravel(edge_pos[:, :, 0]))
@@ -629,9 +697,7 @@ def draw_networkx_edges(G, pos,
ax.update_datalim(corners)
ax.autoscale_view()
-# if arrow_collection:
-
- return edge_collection
+ return arrow_collection
def draw_networkx_labels(G, pos,
@@ -720,7 +786,7 @@ def draw_networkx_labels(G, pos,
for n, label in labels.items():
(x, y) = pos[n]
if not is_string_like(label):
- label = str(label) # this will cause "1" and 1 to be labeled the same
+ label = str(label) # this makes "1" and 1 labeled the same
t = ax.text(x, y,
label,
size=font_size,
@@ -817,7 +883,6 @@ def draw_networkx_edge_labels(G, pos,
"""
try:
import matplotlib.pyplot as plt
- import matplotlib.cbook as cb
import numpy as np
except ImportError:
raise ImportError("Matplotlib required for draw()")
@@ -839,7 +904,8 @@ def draw_networkx_edge_labels(G, pos,
y1 * label_pos + y2 * (1.0 - label_pos))
if rotate:
- angle = np.arctan2(y2 - y1, x2 - x1) / (2.0 * np.pi) * 360 # degrees
+ # in degrees
+ angle = np.arctan2(y2 - y1, x2 - x1) / (2.0 * np.pi) * 360
# make label orientation "right-side-up"
if angle > 90:
angle -= 180
@@ -858,7 +924,7 @@ def draw_networkx_edge_labels(G, pos,
fc=(1.0, 1.0, 1.0),
)
if not is_string_like(label):
- label = str(label) # this will cause "1" and 1 to be labeled the same
+ label = str(label) # this makes "1" and 1 labeled the same
# set optional alignment
horizontalalignment = kwds.get('horizontalalignment', 'center')
@@ -1003,15 +1069,16 @@ def apply_alpha(colors, alpha, elem_list, cmap=None, vmin=None, vmax=None):
in order (cycling through alpha multiple times if necessary).
elem_list : array of networkx objects
- The list of elements which are being colored. These could be nodes, edges
- or labels.
+ The list of elements which are being colored. These could be nodes,
+ edges or labels.
cmap : matplotlib colormap
- Color map for use if colors is a list of floats corresponding to points on
- a color mapping.
+ Color map for use if colors is a list of floats corresponding to points
+ on a color mapping.
vmin, vmax : float
- Minimum and maximum values for normalizing colors if a color mapping is used.
+ Minimum and maximum values for normalizing colors if a color mapping is
+ used.
Returns
-------
@@ -1021,7 +1088,7 @@ def apply_alpha(colors, alpha, elem_list, cmap=None, vmin=None, vmax=None):
"""
import numbers
- import itertools
+ from itertools import islice, cycle
try:
import numpy as np
@@ -1030,29 +1097,33 @@ def apply_alpha(colors, alpha, elem_list, cmap=None, vmin=None, vmax=None):
except ImportError:
raise ImportError("Matplotlib required for draw()")
- # If we have been provided with a list of numbers as long as elem_list, apply the color mapping.
+ # If we have been provided with a list of numbers as long as elem_list,
+ # apply the color mapping.
if len(colors) == len(elem_list) and isinstance(colors[0], numbers.Number):
mapper = cm.ScalarMappable(cmap=cmap)
mapper.set_clim(vmin, vmax)
rgba_colors = mapper.to_rgba(colors)
- # Otherwise, convert colors to matplotlib's RGB using the colorConverter object.
- # These are converted to numpy ndarrays to be consistent with the to_rgba method of ScalarMappable.
+ # Otherwise, convert colors to matplotlib's RGB using the colorConverter
+ # object. These are converted to numpy ndarrays to be consistent with the
+ # to_rgba method of ScalarMappable.
else:
try:
rgba_colors = np.array([colorConverter.to_rgba(colors)])
except ValueError:
- rgba_colors = np.array([colorConverter.to_rgba(color) for color in colors])
- # Set the final column of the rgba_colors to have the relevant alpha values.
+ rgba_colors = np.array([colorConverter.to_rgba(color)
+ for color in colors])
+ # Set the final column of the rgba_colors to have the relevant alpha values
try:
- # If alpha is longer than the number of colors, resize to the number of elements.
- # Also, if rgba_colors.size (the number of elements of rgba_colors) is the same as the number of
- # elements, resize the array, to avoid it being interpreted as a colormap by scatter()
+ # If alpha is longer than the number of colors, resize to the number of
+ # elements. Also, if rgba_colors.size (the number of elements of
+ # rgba_colors) is the same as the number of elements, resize the array,
+ # to avoid it being interpreted as a colormap by scatter()
if len(alpha) > len(rgba_colors) or rgba_colors.size == len(elem_list):
rgba_colors.resize((len(elem_list), 4))
rgba_colors[1:, 0] = rgba_colors[0, 0]
rgba_colors[1:, 1] = rgba_colors[0, 1]
rgba_colors[1:, 2] = rgba_colors[0, 2]
- rgba_colors[:, 3] = list(itertools.islice(itertools.cycle(alpha), len(rgba_colors)))
+ rgba_colors[:, 3] = list(islice(cycle(alpha), len(rgba_colors)))
except TypeError:
rgba_colors[:, -1] = alpha
return rgba_colors
| Improvement to the arrowheads of DiGraph edges.
I was wondering if there was planning on changing the arrowheads of the edges of directed graphs to be more arrow-like (a triangular shape instead of a thicker line). For instance, see [this question](https://stackoverflow.com/questions/47213650/changing-arrowhead-type-in-networkx) on Stack Overflow.
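For reference, a usage sketch of the keyword arguments the diff above introduces (all values are illustrative):
```python
import matplotlib.pyplot as plt
import networkx as nx

G = nx.DiGraph([(1, 2), (1, 3), (2, 3)])
pos = nx.spring_layout(G)
nx.draw_networkx_nodes(G, pos, node_size=300)
# arrowstyle/arrowsize are the new parameters; passing node_size here
# lets the arrow heads stop at the node marker boundary.
nx.draw_networkx_edges(G, pos, node_size=300, arrowstyle='-|>', arrowsize=15)
plt.show()
```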
| That would be a great addition to the networkx/matplotlib drawing package. It is technically possible, but so far there have been no contributions that solve the general arrow drawing problem. | 2017-11-19T15:32:48 |
|
networkx/networkx | 2,769 | networkx__networkx-2769 | [
"2740"
] | d5be0b25a2e07b6dfb1c343b0357b7dcf35e9306 | diff --git a/networkx/readwrite/gml.py b/networkx/readwrite/gml.py
--- a/networkx/readwrite/gml.py
+++ b/networkx/readwrite/gml.py
@@ -677,7 +677,7 @@ def stringize(key, value, ignored_keys, indent, in_list=False):
if epos != -1 and text.find('.', 0, epos) == -1:
text = text[:epos] + '.' + text[epos:]
if key == 'label':
- yield indent + key + ' "' + test + '"'
+ yield indent + key + ' "' + text + '"'
else:
yield indent + key + ' ' + text
elif isinstance(value, dict):
| diff --git a/networkx/readwrite/tests/test_gml.py b/networkx/readwrite/tests/test_gml.py
--- a/networkx/readwrite/tests/test_gml.py
+++ b/networkx/readwrite/tests/test_gml.py
@@ -278,6 +278,23 @@ def test_unicode_node(self):
]"""
assert_equal(data, answer)
+ def test_float_label(self):
+ node = 1.0
+ G = nx.Graph()
+ G.add_node(node)
+ fobj = tempfile.NamedTemporaryFile()
+ nx.write_gml(G, fobj)
+ fobj.seek(0)
+ # Should be bytes in 2.x and 3.x
+ data = fobj.read().strip().decode('ascii')
+ answer = """graph [
+ node [
+ id 0
+ label "1.0"
+ ]
+]"""
+ assert_equal(data, answer)
+
def test_name(self):
G = nx.parse_gml('graph [ name "x" node [ id 0 label "x" ] ]')
assert_equal('x', G.graph['name'])
| Possible syntax error in gml.py (readwrite)
https://github.com/networkx/networkx/blob/e3566e53793cdccc19a8c2ff9c056e0d7b524634/networkx/readwrite/gml.py#L680
I've got an error when I want to write a .gml file from a network using the `write_gml()` function. At line 680 there is `test` instead of `text`, I think.
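A minimal trigger for the resulting `NameError` (sketch; the output path is hypothetical), matching the float-label test added in the patch:
```python
import networkx as nx

G = nx.Graph()
G.add_node(1.0)  # a float label takes the numeric-formatting branch at line 680
nx.write_gml(G, '/tmp/out.gml')  # pre-fix: NameError: name 'test' is not defined
```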
| Yep, that is a bug. Thanks for the report. This is a simple fix.
You're welcome.
By the way, these are very useful functions! | 2017-11-25T05:26:20 |
networkx/networkx | 2,773 | networkx__networkx-2773 | [
"2767"
] | 3d7ea0d690e59c2d5d223528ea9e21b21fb7f8a4 | diff --git a/networkx/generators/degree_seq.py b/networkx/generators/degree_seq.py
--- a/networkx/generators/degree_seq.py
+++ b/networkx/generators/degree_seq.py
@@ -426,7 +426,7 @@ def expected_degree_graph(w, seed=None, selfloops=True):
# weights dictates the order of the (integer) node labels, so we
# need to remember the permutation applied in the sorting.
order = sorted(enumerate(w), key=itemgetter(1), reverse=True)
- mapping = {c: v for c, (u, v) in enumerate(order)}
+ mapping = {c: u for c, (u, v) in enumerate(order)}
seq = [v for u, v in order]
last = n
if not selfloops:
| diff --git a/networkx/generators/tests/test_degree_seq.py b/networkx/generators/tests/test_degree_seq.py
--- a/networkx/generators/tests/test_degree_seq.py
+++ b/networkx/generators/tests/test_degree_seq.py
@@ -92,6 +92,8 @@ def test_expected_degree_graph():
# test that fixed seed delivers the same graph
deg_seq = [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3]
G1 = nx.expected_degree_graph(deg_seq, seed=1000)
+ assert_equal(len(G1), 12)
+
G2 = nx.expected_degree_graph(deg_seq, seed=1000)
assert_true(nx.is_isomorphic(G1, G2))
@@ -105,6 +107,7 @@ def test_expected_degree_graph_selfloops():
G1 = nx.expected_degree_graph(deg_seq, seed=1000, selfloops=False)
G2 = nx.expected_degree_graph(deg_seq, seed=1000, selfloops=False)
assert_true(nx.is_isomorphic(G1, G2))
+ assert_equal(len(G1), 12)
def test_expected_degree_graph_skew():
@@ -112,6 +115,7 @@ def test_expected_degree_graph_skew():
G1 = nx.expected_degree_graph(deg_seq, seed=1000)
G2 = nx.expected_degree_graph(deg_seq, seed=1000)
assert_true(nx.is_isomorphic(G1, G2))
+ assert_equal(len(G1), 5)
def test_havel_hakimi_construction():
| node-mapping bug in expected_degree_graph
Hi, I used the NX1 `expected_degree_graph` generator. It has the same interface as in NX2.
https://networkx.github.io/documentation/stable/reference/generated/networkx.generators.degree_seq.expected_degree_graph.html#networkx.generators.degree_seq.expected_degree_graph
But NX2 does not generate the graph correctly: the total number of edges is always 1, with no further error message. The same script works well for NX1 (I downgraded to the older version).
```python
import networkx as nx
from datetime import datetime

D = 10
N = 1000
degree_l = [D for i in range(N)]
G = nx.expected_degree_graph(degree_l, seed=datetime.now(), selfloops=False)
```
| That is certainly a bug -- Thanks for the report!
I think it was introduced two years ago in an attempt to make it more simple... but a typo snuck into the mapping on line 429: ```mapping = {c: v for c, (u, v) in enumerate(order)}``` should be ```mapping = {c: u for c, (u, v) in enumerate(order)}``` Looks like we will need more tests (I haven't looked at them yet).
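A quick sanity check for that one-character fix (sketch; an integer seed is used purely for reproducibility):
```python
import networkx as nx

G = nx.expected_degree_graph([10] * 1000, seed=42, selfloops=False)
assert len(G) == 1000  # with the buggy mapping the nodes collapse to one
```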
Yep, that looks like the right fix. | 2017-11-25T17:52:23 |
networkx/networkx | 2,774 | networkx__networkx-2774 | [
"1918"
] | 18c2fa79edbd578bea3e7a1935502f54c58385d7 | diff --git a/networkx/algorithms/covering.py b/networkx/algorithms/covering.py
--- a/networkx/algorithms/covering.py
+++ b/networkx/algorithms/covering.py
@@ -72,9 +72,12 @@ def min_edge_cover(G, matching_algorithm=None):
maxcardinality=True)
maximum_matching = matching_algorithm(G)
# ``min_cover`` is superset of ``maximum_matching``
- min_cover = set(maximum_matching.items())
+ try:
+ min_cover = set(maximum_matching.items()) # bipartite matching case returns dict
+ except AttributeError:
+ min_cover = maximum_matching
# iterate for uncovered nodes
- uncovered_nodes = set(G) - {v for u, v in min_cover}
+ uncovered_nodes = set(G) - {v for u, v in min_cover} - {u for u, v in min_cover}
for v in uncovered_nodes:
# Since `v` is uncovered, each edge incident to `v` will join it
# with a covered node (otherwise, if there were an edge joining
diff --git a/networkx/algorithms/matching.py b/networkx/algorithms/matching.py
--- a/networkx/algorithms/matching.py
+++ b/networkx/algorithms/matching.py
@@ -69,7 +69,8 @@ def matching_dict_to_set(matching):
# appear in `matching.items()`, so we use a set of sets. This way,
# only the (frozen)set `{u, v}` appears as an element in the
# returned set.
- return set(map(frozenset, matching.items()))
+
+ return set((u,v) for (u,v) in set(map(frozenset, matching.items())))
def is_matching(G, matching):
@@ -172,11 +173,8 @@ def max_weight_matching(G, maxcardinality=False, weight='weight'):
Returns
-------
- mate : dictionary
- The matching is returned as a dictionary, mate, such that
- mate[v] == w if node v is matched to node w. Unmatched nodes do not
- occur as a key in mate.
-
+ matching : set
+ A maximal matching of the graph.
Notes
-----
@@ -249,7 +247,7 @@ def leaves(self):
# Get a list of vertices.
gnodes = list(G)
if not gnodes:
- return {} # don't bother with empty graphs
+ return set( ) # don't bother with empty graphs
# Find the maximum edge weight.
maxweight = 0
@@ -931,4 +929,4 @@ def verifyOptimum():
if allinteger:
verifyOptimum()
- return mate
+ return matching_dict_to_set(mate)
| diff --git a/networkx/algorithms/tests/test_covering.py b/networkx/algorithms/tests/test_covering.py
--- a/networkx/algorithms/tests/test_covering.py
+++ b/networkx/algorithms/tests/test_covering.py
@@ -24,7 +24,7 @@ def test_graph_single_edge(self):
G = nx.Graph()
G.add_edge(0, 1)
assert_equal(nx.min_edge_cover(G),
- {(0, 1), (1, 0)})
+ {(0, 1)})
def test_bipartite_explicit(self):
G = nx.Graph()
@@ -34,6 +34,7 @@ def test_bipartite_explicit(self):
(2, 'c'), (3, 'c'), (4, 'a')])
min_cover = nx.min_edge_cover(G, nx.algorithms.bipartite.matching.
eppstein_matching)
+ min_cover2 = nx.min_edge_cover(G)
assert_true(nx.is_edge_cover(G, min_cover))
assert_equal(len(min_cover), 8)
@@ -41,7 +42,7 @@ def test_complete_graph(self):
G = nx.complete_graph(10)
min_cover = nx.min_edge_cover(G)
assert_true(nx.is_edge_cover(G, min_cover))
- assert_equal(len(min_cover), 10)
+ assert_equal(len(min_cover), 5)
class TestIsEdgeCover:
diff --git a/networkx/algorithms/tests/test_matching.py b/networkx/algorithms/tests/test_matching.py
--- a/networkx/algorithms/tests/test_matching.py
+++ b/networkx/algorithms/tests/test_matching.py
@@ -5,7 +5,8 @@
from nose.tools import assert_false
from nose.tools import assert_true
import networkx as nx
-
+from networkx.algorithms.matching import matching_dict_to_set
+from networkx.testing import assert_edges_equal
class TestMaxWeightMatching(object):
"""Unit tests for the
@@ -16,28 +17,28 @@ class TestMaxWeightMatching(object):
def test_trivial1(self):
"""Empty graph"""
G = nx.Graph()
- assert_equal(nx.max_weight_matching(G), {})
+ assert_equal(nx.max_weight_matching(G), set())
def test_trivial2(self):
"""Self loop"""
G = nx.Graph()
G.add_edge(0, 0, weight=100)
- assert_equal(nx.max_weight_matching(G), {})
+ assert_equal(nx.max_weight_matching(G), set())
def test_trivial3(self):
"""Single edge"""
G = nx.Graph()
G.add_edge(0, 1)
- assert_equal(nx.max_weight_matching(G),
- {0: 1, 1: 0})
+ assert_edges_equal(nx.max_weight_matching(G),
+ matching_dict_to_set({0: 1, 1: 0}))
def test_trivial4(self):
"""Small graph"""
G = nx.Graph()
G.add_edge('one', 'two', weight=10)
G.add_edge('two', 'three', weight=11)
- assert_equal(nx.max_weight_matching(G),
- {'three': 'two', 'two': 'three'})
+ assert_edges_equal(nx.max_weight_matching(G),
+ matching_dict_to_set({'three': 'two', 'two': 'three'}))
def test_trivial5(self):
"""Path"""
@@ -45,18 +46,18 @@ def test_trivial5(self):
G.add_edge(1, 2, weight=5)
G.add_edge(2, 3, weight=11)
G.add_edge(3, 4, weight=5)
- assert_equal(nx.max_weight_matching(G),
- {2: 3, 3: 2})
- assert_equal(nx.max_weight_matching(G, 1),
- {1: 2, 2: 1, 3: 4, 4: 3})
+ assert_edges_equal(nx.max_weight_matching(G),
+ matching_dict_to_set({2: 3, 3: 2}))
+ assert_edges_equal(nx.max_weight_matching(G, 1),
+ matching_dict_to_set({1: 2, 2: 1, 3: 4, 4: 3}))
def test_trivial6(self):
"""Small graph with arbitrary weight attribute"""
G = nx.Graph()
G.add_edge('one', 'two', weight=10, abcd=11)
G.add_edge('two', 'three', weight=11, abcd=10)
- assert_equal(nx.max_weight_matching(G, weight='abcd'),
- {'one': 'two', 'two': 'one'})
+ assert_edges_equal(nx.max_weight_matching(G, weight='abcd'),
+ matching_dict_to_set({'one': 'two', 'two': 'one'}))
def test_floating_point_weights(self):
"""Floating point weights"""
@@ -65,8 +66,8 @@ def test_floating_point_weights(self):
G.add_edge(2, 3, weight=math.exp(1))
G.add_edge(1, 3, weight=3.0)
G.add_edge(1, 4, weight=math.sqrt(2.0))
- assert_equal(nx.max_weight_matching(G),
- {1: 4, 2: 3, 3: 2, 4: 1})
+ assert_edges_equal(nx.max_weight_matching(G),
+ matching_dict_to_set({1: 4, 2: 3, 3: 2, 4: 1}))
def test_negative_weights(self):
"""Negative weights"""
@@ -76,38 +77,38 @@ def test_negative_weights(self):
G.add_edge(2, 3, weight=1)
G.add_edge(2, 4, weight=-1)
G.add_edge(3, 4, weight=-6)
- assert_equal(nx.max_weight_matching(G),
- {1: 2, 2: 1})
- assert_equal(nx.max_weight_matching(G, 1),
- {1: 3, 2: 4, 3: 1, 4: 2})
+ assert_edges_equal(nx.max_weight_matching(G),
+ matching_dict_to_set({1: 2, 2: 1}))
+ assert_edges_equal(nx.max_weight_matching(G, 1),
+ matching_dict_to_set({1: 3, 2: 4, 3: 1, 4: 2}))
def test_s_blossom(self):
"""Create S-blossom and use it for augmentation:"""
G = nx.Graph()
G.add_weighted_edges_from([(1, 2, 8), (1, 3, 9),
(2, 3, 10), (3, 4, 7)])
- assert_equal(nx.max_weight_matching(G),
- {1: 2, 2: 1, 3: 4, 4: 3})
+ assert_edges_equal(nx.max_weight_matching(G),
+ matching_dict_to_set({1: 2, 2: 1, 3: 4, 4: 3}))
G.add_weighted_edges_from([(1, 6, 5), (4, 5, 6)])
- assert_equal(nx.max_weight_matching(G),
- {1: 6, 2: 3, 3: 2, 4: 5, 5: 4, 6: 1})
+ assert_edges_equal(nx.max_weight_matching(G),
+ matching_dict_to_set({1: 6, 2: 3, 3: 2, 4: 5, 5: 4, 6: 1}))
def test_s_t_blossom(self):
"""Create S-blossom, relabel as T-blossom, use for augmentation:"""
G = nx.Graph()
G.add_weighted_edges_from([(1, 2, 9), (1, 3, 8), (2, 3, 10),
(1, 4, 5), (4, 5, 4), (1, 6, 3)])
- assert_equal(nx.max_weight_matching(G),
- {1: 6, 2: 3, 3: 2, 4: 5, 5: 4, 6: 1})
+ assert_edges_equal(nx.max_weight_matching(G),
+ matching_dict_to_set({1: 6, 2: 3, 3: 2, 4: 5, 5: 4, 6: 1}))
G.add_edge(4, 5, weight=3)
G.add_edge(1, 6, weight=4)
- assert_equal(nx.max_weight_matching(G),
- {1: 6, 2: 3, 3: 2, 4: 5, 5: 4, 6: 1})
+ assert_edges_equal(nx.max_weight_matching(G),
+ matching_dict_to_set({1: 6, 2: 3, 3: 2, 4: 5, 5: 4, 6: 1}))
G.remove_edge(1, 6)
G.add_edge(3, 6, weight=4)
- assert_equal(nx.max_weight_matching(G),
- {1: 2, 2: 1, 3: 6, 4: 5, 5: 4, 6: 3})
+ assert_edges_equal(nx.max_weight_matching(G),
+ matching_dict_to_set({1: 2, 2: 1, 3: 6, 4: 5, 5: 4, 6: 3}))
def test_nested_s_blossom(self):
"""Create nested S-blossom, use for augmentation:"""
@@ -117,7 +118,7 @@ def test_nested_s_blossom(self):
(2, 4, 8), (3, 5, 8), (4, 5, 10),
(5, 6, 6)])
assert_equal(nx.max_weight_matching(G),
- {1: 3, 2: 4, 3: 1, 4: 2, 5: 6, 6: 5})
+ matching_dict_to_set({1: 3, 2: 4, 3: 1, 4: 2, 5: 6, 6: 5}))
def test_nested_s_blossom_relabel(self):
"""Create S-blossom, relabel as S, include in nested S-blossom:"""
@@ -125,8 +126,8 @@ def test_nested_s_blossom_relabel(self):
G.add_weighted_edges_from([(1, 2, 10), (1, 7, 10), (2, 3, 12),
(3, 4, 20), (3, 5, 20), (4, 5, 25),
(5, 6, 10), (6, 7, 10), (7, 8, 8)])
- assert_equal(nx.max_weight_matching(G),
- {1: 2, 2: 1, 3: 4, 4: 3, 5: 6, 6: 5, 7: 8, 8: 7})
+ assert_edges_equal(nx.max_weight_matching(G),
+ matching_dict_to_set({1: 2, 2: 1, 3: 4, 4: 3, 5: 6, 6: 5, 7: 8, 8: 7}))
def test_nested_s_blossom_expand(self):
"""Create nested S-blossom, augment, expand recursively:"""
@@ -135,8 +136,8 @@ def test_nested_s_blossom_expand(self):
(2, 4, 12), (3, 5, 12), (4, 5, 14),
(4, 6, 12), (5, 7, 12), (6, 7, 14),
(7, 8, 12)])
- assert_equal(nx.max_weight_matching(G),
- {1: 2, 2: 1, 3: 5, 4: 6, 5: 3, 6: 4, 7: 8, 8: 7})
+ assert_edges_equal(nx.max_weight_matching(G),
+ matching_dict_to_set({1: 2, 2: 1, 3: 5, 4: 6, 5: 3, 6: 4, 7: 8, 8: 7}))
def test_s_blossom_relabel_expand(self):
"""Create S-blossom, relabel as T, expand:"""
@@ -144,8 +145,8 @@ def test_s_blossom_relabel_expand(self):
G.add_weighted_edges_from([(1, 2, 23), (1, 5, 22), (1, 6, 15),
(2, 3, 25), (3, 4, 22), (4, 5, 25),
(4, 8, 14), (5, 7, 13)])
- assert_equal(nx.max_weight_matching(G),
- {1: 6, 2: 3, 3: 2, 4: 8, 5: 7, 6: 1, 7: 5, 8: 4})
+ assert_edges_equal(nx.max_weight_matching(G),
+ matching_dict_to_set({1: 6, 2: 3, 3: 2, 4: 8, 5: 7, 6: 1, 7: 5, 8: 4}))
def test_nested_s_blossom_relabel_expand(self):
"""Create nested S-blossom, relabel as T, expand:"""
@@ -153,8 +154,8 @@ def test_nested_s_blossom_relabel_expand(self):
G.add_weighted_edges_from([(1, 2, 19), (1, 3, 20), (1, 8, 8),
(2, 3, 25), (2, 4, 18), (3, 5, 18),
(4, 5, 13), (4, 7, 7), (5, 6, 7)])
- assert_equal(nx.max_weight_matching(G),
- {1: 8, 2: 3, 3: 2, 4: 7, 5: 6, 6: 5, 7: 4, 8: 1})
+ assert_edges_equal(nx.max_weight_matching(G),
+ matching_dict_to_set({1: 8, 2: 3, 3: 2, 4: 7, 5: 6, 6: 5, 7: 4, 8: 1}))
def test_nasty_blossom1(self):
"""Create blossom, relabel as T in more than one way, expand,
@@ -165,9 +166,9 @@ def test_nasty_blossom1(self):
(3, 4, 45), (4, 5, 50), (1, 6, 30),
(3, 9, 35), (4, 8, 35), (5, 7, 26),
(9, 10, 5)])
- assert_equal(nx.max_weight_matching(G),
- {1: 6, 2: 3, 3: 2, 4: 8, 5: 7,
- 6: 1, 7: 5, 8: 4, 9: 10, 10: 9})
+ assert_edges_equal(nx.max_weight_matching(G),
+ matching_dict_to_set({1: 6, 2: 3, 3: 2, 4: 8, 5: 7,
+ 6: 1, 7: 5, 8: 4, 9: 10, 10: 9}))
def test_nasty_blossom2(self):
"""Again but slightly different:"""
@@ -176,9 +177,9 @@ def test_nasty_blossom2(self):
(3, 4, 45), (4, 5, 50), (1, 6, 30),
(3, 9, 35), (4, 8, 26), (5, 7, 40),
(9, 10, 5)])
- assert_equal(nx.max_weight_matching(G),
- {1: 6, 2: 3, 3: 2, 4: 8, 5: 7,
- 6: 1, 7: 5, 8: 4, 9: 10, 10: 9})
+ assert_edges_equal(nx.max_weight_matching(G),
+ matching_dict_to_set({1: 6, 2: 3, 3: 2, 4: 8, 5: 7,
+ 6: 1, 7: 5, 8: 4, 9: 10, 10: 9}))
def test_nasty_blossom_least_slack(self):
"""Create blossom, relabel as T, expand such that a new
@@ -189,9 +190,9 @@ def test_nasty_blossom_least_slack(self):
(3, 4, 45), (4, 5, 50), (1, 6, 30),
(3, 9, 35), (4, 8, 28), (5, 7, 26),
(9, 10, 5)])
- assert_equal(nx.max_weight_matching(G),
- {1: 6, 2: 3, 3: 2, 4: 8, 5: 7,
- 6: 1, 7: 5, 8: 4, 9: 10, 10: 9})
+ assert_edges_equal(nx.max_weight_matching(G),
+ matching_dict_to_set({1: 6, 2: 3, 3: 2, 4: 8, 5: 7,
+ 6: 1, 7: 5, 8: 4, 9: 10, 10: 9}))
def test_nasty_blossom_augmenting(self):
"""Create nested blossom, relabel as T in more than one way"""
@@ -203,9 +204,9 @@ def test_nasty_blossom_augmenting(self):
(5, 6, 94), (6, 7, 50), (1, 8, 30),
(3, 11, 35), (5, 9, 36), (7, 10, 26),
(11, 12, 5)])
- assert_equal(nx.max_weight_matching(G),
- {1: 8, 2: 3, 3: 2, 4: 6, 5: 9, 6: 4,
- 7: 10, 8: 1, 9: 5, 10: 7, 11: 12, 12: 11})
+ assert_edges_equal(nx.max_weight_matching(G),
+ matching_dict_to_set({1: 8, 2: 3, 3: 2, 4: 6, 5: 9, 6: 4,
+ 7: 10, 8: 1, 9: 5, 10: 7, 11: 12, 12: 11}))
def test_nasty_blossom_expand_recursively(self):
"""Create nested S-blossom, relabel as S, expand recursively:"""
@@ -214,9 +215,9 @@ def test_nasty_blossom_expand_recursively(self):
(2, 4, 55), (3, 5, 55), (4, 5, 50),
(1, 8, 15), (5, 7, 30), (7, 6, 10),
(8, 10, 10), (4, 9, 30)])
- assert_equal(nx.max_weight_matching(G),
- {1: 2, 2: 1, 3: 5, 4: 9, 5: 3,
- 6: 7, 7: 6, 8: 10, 9: 4, 10: 8})
+ assert_edges_equal(nx.max_weight_matching(G),
+ matching_dict_to_set({1: 2, 2: 1, 3: 5, 4: 9, 5: 3,
+ 6: 7, 7: 6, 8: 10, 9: 4, 10: 8}))
class TestIsMatching(object):
| maximal_matching and max_weight_matching have different return types
The former returns a set of edges, the latter a dictionary. Should these return the same type of object?
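For reference, a sketch of the dict-to-set conversion this PR adopts, mirroring `matching_dict_to_set` in the patch:
```python
# Each undirected pair appears once; the frozenset step collapses the
# (u, v) / (v, u) duplicates present in the dict form.
matching = {1: 2, 2: 1, 3: 4, 4: 3}
edges = set((u, v) for (u, v) in set(map(frozenset, matching.items())))
print(edges)  # {(1, 2), (3, 4)}, up to tuple ordering within each pair
```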
| Let's return a set of edges for `max_weight_matching` to make them the same. | 2017-11-25T23:18:37 |
networkx/networkx | 2,775 | networkx__networkx-2775 | [
"1582"
] | 18c2fa79edbd578bea3e7a1935502f54c58385d7 | diff --git a/networkx/drawing/nx_agraph.py b/networkx/drawing/nx_agraph.py
--- a/networkx/drawing/nx_agraph.py
+++ b/networkx/drawing/nx_agraph.py
@@ -150,18 +150,29 @@ def to_agraph(N):
# add nodes
for n, nodedata in N.nodes(data=True):
- A.add_node(n, **nodedata)
+ A.add_node(n)
+ if nodedata is not None:
+ a = A.get_node(n)
+ a.attr.update({k: str(v) for k, v in nodedata.items()})
# loop over edges
-
if N.is_multigraph():
for u, v, key, edgedata in N.edges(data=True, keys=True):
- str_edata = {k: str(v) for k, v in edgedata.items() if k != 'key'}
- A.add_edge(u, v, key=str(key), **str_edata)
+ str_edgedata = {k: str(v) for k, v in edgedata.items() if k != 'key'}
+ A.add_edge(u, v, key=str(key))
+ if edgedata is not None:
+ a = A.get_edge(u,v)
+ a.attr.update(str_edgedata)
+
+
else:
for u, v, edgedata in N.edges(data=True):
str_edgedata = {k: str(v) for k, v in edgedata.items()}
- A.add_edge(u, v, **str_edgedata)
+ A.add_edge(u, v)
+ if edgedata is not None:
+ a = A.get_edge(u,v)
+ a.attr.update(str_edgedata)
+
return A
| diff --git a/networkx/drawing/tests/test_agraph.py b/networkx/drawing/tests/test_agraph.py
--- a/networkx/drawing/tests/test_agraph.py
+++ b/networkx/drawing/tests/test_agraph.py
@@ -24,6 +24,7 @@ def build_graph(self, G):
G.graph['metal'] = 'bronze'
return G
+
def assert_equal(self, G1, G2):
assert_nodes_equal(G1.nodes(), G2.nodes())
assert_edges_equal(G1.edges(), G2.edges())
@@ -79,3 +80,21 @@ def test_view_pygraphviz_edgelable(self):
G.add_edge(1, 2, weight=7)
G.add_edge(2, 3, weight=8)
nx.nx_agraph.view_pygraphviz(G, edgelabel='weight')
+
+ def test_from_agraph_name(self):
+ G = nx.Graph(name='test')
+ A = nx.nx_agraph.to_agraph(G)
+ H = nx.nx_agraph.from_agraph(A)
+ assert_equal(G.name, 'test')
+
+
+ def test_graph_with_reserved_keywords(self):
+ # test attribute/keyword clash case for #1582
+ # node: n
+ # edges: u,v
+ G = nx.Graph()
+ G = self.build_graph(G)
+ G.node['E']['n']='keyword'
+ G.edges[('A','B')]['u']='keyword'
+ G.edges[('A','B')]['v']='keyword'
+ A = nx.nx_agraph.to_agraph(G)
| to_agraph failing on node attribute name `n`
I have some graph where I add the node attribute `n`
```
G = nx.DiGraph([(1,2)])
G.node[1]['n'] = 1
nx.to_agraph(G)
networkx/drawing/nx_agraph.py:146
TypeError: add_node() got multiple values for argument 'n'
```
Is this behavior wanted? For semantic reasons I would enjoy having 'n' as a node attribute. I am aware that `G.add_node(3, n=10)` has the same behavior. This is the part of `to_agraph` where the error occurs:
```
def to_agraph(N)
...
# add nodes
for n,nodedata in N.nodes(data=True):
A.add_node(n,**nodedata)
...
```
I would suggest the following change: Allow for a keyword argument to define the attributes one wants to export.
```
def to_agraph(N, node_attr=None):
...
# add nodes
for n,nodedata in N.nodes(data=True):
A.add_node(n,**{k: v for k,v in nodedata.items() if node_attr is None or k in node_attr})
...
```
The argument would have to be propagated upwards to `nx.draw_networkx` etc. but would allow for an easy attributes filter.
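For reference, a sketch of the pattern the merged diff above uses, which sidesteps the keyword clash entirely:
```python
import pygraphviz

A = pygraphviz.AGraph()
A.add_node(1)
# Setting attributes through the attr mapping avoids passing 'n' (or any
# other reserved name) as a keyword to add_node.
A.get_node(1).attr.update({'n': '1'})
```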
| This is definitely a bug. `n` should be an acceptable node attribute.
The issue, of course, is that we have to use some name for the parameter to `add_node`. We could change it to `def add_node(self, _n, attr_dict=None, **attr)` or you can pass it in as: `G.add_node(3, attr_dict={'n': 10})`. Not sure if we should change or you should.
@chebee7i This is why I'm asking here before I decide to change my personal networkx branch or my project.
Currently it is at least consistent that both `nx.Graph.add_node` and `pygraphviz.AGraph.add_node` do not allow 'n' to be passed in as a keyword argument. The latter does not allow for an `attr_dict` though. Should I propose the change in `pygraphviz` instead?
I'm not sure.
I just opened #1583 which would change the `n` in `add_node` to a `u` (but `u` as an attribute name would still be problematic). This is mostly a consistency issue, but it would address your problem (at least for networkx). As for pygraphviz, you should definitely open an issue there and link to it here. I think adding an `attr_dict` makes sense, independent of whether `n` is changed to `u` throughout the code.
I opened an issue in pygraphviz/pygraphviz#57
@chebee7i @jdrudolph @hagberg @dschult
Is this issue still open? I would like to work on this.
We could at least fix the (narrow) issue posed here with the suggested
```python
def to_agraph(N, node_attr=None):
...
# add nodes
for n,nodedata in N.nodes(data=True):
A.add_node(n,**{k: v for k,v in nodedata.items() if node_attr is None or k in node_attr})
``` | 2017-11-25T23:50:24 |
networkx/networkx | 2,776 | networkx__networkx-2776 | [
"2449"
] | 18c2fa79edbd578bea3e7a1935502f54c58385d7 | diff --git a/networkx/drawing/layout.py b/networkx/drawing/layout.py
--- a/networkx/drawing/layout.py
+++ b/networkx/drawing/layout.py
@@ -232,6 +232,7 @@ def fruchterman_reingold_layout(G, k=None,
pos=None,
fixed=None,
iterations=50,
+ threshold = 1e-4,
weight='weight',
scale=1,
center=None,
@@ -257,7 +258,11 @@ def fruchterman_reingold_layout(G, k=None,
Nodes to keep fixed at initial position.
iterations : int optional (default=50)
- Number of iterations of spring-force relaxation
+ Maximum number of iterations taken
+
+ threshold: float optional (default = 1e-4)
+ Threshold for relative error in node position changes.
+ The iteration stops if the error is below this threshold.
weight : string or None optional (default='weight')
The edge attribute that holds the numerical value used for
@@ -318,18 +323,18 @@ def fruchterman_reingold_layout(G, k=None,
raise ValueError
A = nx.to_scipy_sparse_matrix(G, weight=weight, dtype='f')
if k is None and fixed is not None:
- # We must adjust k by domain size for layouts not near 1x1
+ # We must adjust k by domain size for layouts not near 1x1
nnodes, _ = A.shape
k = dom_size / np.sqrt(nnodes)
pos = _sparse_fruchterman_reingold(A, k, pos_arr, fixed,
- iterations, dim)
+ iterations, threshold, dim)
except:
A = nx.to_numpy_matrix(G, weight=weight)
if k is None and fixed is not None:
# We must adjust k by domain size for layouts not near 1x1
nnodes, _ = A.shape
k = dom_size / np.sqrt(nnodes)
- pos = _fruchterman_reingold(A, k, pos_arr, fixed, iterations, dim)
+ pos = _fruchterman_reingold(A, k, pos_arr, fixed, iterations, threshold, dim)
if fixed is None:
pos = rescale_layout(pos, scale=scale) + center
pos = dict(zip(G, pos))
@@ -340,7 +345,7 @@ def fruchterman_reingold_layout(G, k=None,
def _fruchterman_reingold(A, k=None, pos=None, fixed=None,
- iterations=50, dim=2):
+ iterations=50, threshold = 1e-4, dim=2):
# Position nodes in adjacency matrix A using Fruchterman-Reingold
# Entry point for NetworkX graph is fruchterman_reingold_layout()
try:
@@ -401,11 +406,14 @@ def _fruchterman_reingold(A, k=None, pos=None, fixed=None,
pos += delta_pos
# cool temperature
t -= dt
+ err = np.linalg.norm(delta_pos)/nnodes
+ if err < threshold:
+ break
return pos
def _sparse_fruchterman_reingold(A, k=None, pos=None, fixed=None,
- iterations=50, dim=2):
+ iterations=50, threshold=1e-4, dim=2):
# Position nodes in adjacency matrix A using Fruchterman-Reingold
# Entry point for NetworkX graph is fruchterman_reingold_layout()
# Sparse version
@@ -472,9 +480,13 @@ def _sparse_fruchterman_reingold(A, k=None, pos=None, fixed=None,
# update positions
length = np.sqrt((displacement**2).sum(axis=0))
length = np.where(length < 0.01, 0.1, length)
- pos += (displacement * t / length).T
+ delta_pos = (displacement * t / length).T
+ pos += delta_pos
# cool temperature
t -= dt
+ err = np.linalg.norm(delta_pos)/nnodes
+ if err < threshold:
+ break
return pos
| Convergence of spring_layout
It would be helpful if spring_layout could be configured to terminate before the iteration limit, once the largest value of `delta_pos` is below a specified threshold.
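A usage sketch of the `threshold` parameter added by the diff above (values are illustrative):
```python
import networkx as nx

G = nx.random_geometric_graph(50, 0.3)
# Iteration stops early once the relative error in node position
# changes drops below the threshold.
pos = nx.spring_layout(G, iterations=200, threshold=1e-4)
```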
| 2017-11-26T02:14:05 |
||
networkx/networkx | 2,816 | networkx__networkx-2816 | [
"2734"
] | b271d45e1329ef65d888366c595c010070abe035 | diff --git a/networkx/classes/reportviews.py b/networkx/classes/reportviews.py
--- a/networkx/classes/reportviews.py
+++ b/networkx/classes/reportviews.py
@@ -791,8 +791,7 @@ def __init__(self, viewer, nbunch=None,
if data in dd else (n, nbr, default)
def __len__(self):
- return sum(len(kd) for n, nbrs in self._nodes_nbrs()
- for nbr, kd in nbrs.items())
+ return sum(1 for e in self)
def __iter__(self):
return (self._report(n, nbr, k, dd) for n, nbrs in self._nodes_nbrs()
@@ -821,10 +820,6 @@ class MultiEdgeDataView(OutMultiEdgeDataView):
"""An EdgeDataView class for edges of MultiGraph; See EdgeDataView"""
__slots__ = ()
- def __len__(self):
- # nbunch makes it hard to count edges between nodes in nbunch
- return sum(1 for e in self)
-
def __iter__(self):
seen = {}
for n, nbrs in self._nodes_nbrs():
@@ -1016,7 +1011,7 @@ class EdgeView(OutEdgeView):
dataview = EdgeDataView
def __len__(self):
- return sum(len(nbrs) for n, nbrs in self._nodes_nbrs()) // 2
+ return sum(len(nbrs) + (n in nbrs) for n, nbrs in self._nodes_nbrs()) // 2
def __iter__(self):
seen = {}
@@ -1120,8 +1115,7 @@ class MultiEdgeView(OutMultiEdgeView):
dataview = MultiEdgeDataView
def __len__(self):
- return sum(len(kdict) for n, nbrs in self._nodes_nbrs()
- for nbr, kdict in nbrs.items()) // 2
+ return sum(1 for e in self)
def __iter__(self):
seen = {}
| diff --git a/networkx/classes/tests/test_reportviews.py b/networkx/classes/tests/test_reportviews.py
--- a/networkx/classes/tests/test_reportviews.py
+++ b/networkx/classes/tests/test_reportviews.py
@@ -326,6 +326,12 @@ def test_len(self):
assert_equal(len(self.G.edges()), 8)
assert_equal(len(self.G.edges), 8)
+ H = self.G.copy()
+ H.add_edge(1, 1)
+ assert_equal(len(H.edges(1)), 3)
+ assert_equal(len(H.edges()), 9)
+ assert_equal(len(H.edges), 9)
+
class TestOutEdgeDataView(TestEdgeDataView):
def setUp(self):
@@ -351,6 +357,12 @@ def test_len(self):
assert_equal(len(self.G.edges()), 8)
assert_equal(len(self.G.edges), 8)
+ H = self.G.copy()
+ H.add_edge(1, 1)
+ assert_equal(len(H.edges(1)), 2)
+ assert_equal(len(H.edges()), 9)
+ assert_equal(len(H.edges), 9)
+
class TestInEdgeDataView(TestOutEdgeDataView):
def setUp(self):
@@ -486,6 +498,12 @@ def test_len(self):
num_ed = 9 if self.G.is_multigraph() else 8
assert_equal(len(ev), num_ed)
+ H = self.G.copy()
+ H.add_edge(1, 1)
+ assert_equal(len(H.edges(1)), 3 + H.is_multigraph() - H.is_directed())
+ assert_equal(len(H.edges()), num_ed + 1)
+ assert_equal(len(H.edges), num_ed + 1)
+
def test_and(self):
# print("G & H edges:", gnv & hnv)
ev = self.eview(self.G)
| len(G.edges) unexpected values
I'm not sure if this is a bug or expected behavior, but it's at least confusing. This is using a 2.0 `nx.Graph()`. I would provide the data to recreate it, but it's private, and I'm not sure why this is occurring; it might just be my lack of knowledge.
```
>>> len(G.edges())
300
>>> G.number_of_edges()
312
>>> count = 0
>>> s = set()
>>> for edge in G.edges():
... seen = edge in s or (edge[1], edge[0]) in s
... if not seen:
... count += 1
... s.add(edge)
>>> count
312
```
What would be likely reasons that len() would give a different answer than number_of_edges()? I thought it was because of reversed edges, but that doesn't seem to be the case either.
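A minimal self-contained reproduction sketch (the self-loop turns out to be the trigger, per the discussion below):
```python
import networkx as nx  # NetworkX 2.0, before this patch

G = nx.Graph([(1, 2)])
G.add_edge(1, 1)            # self-loop
print(G.number_of_edges())  # 2
print(len(G.edges))         # 1 before the fix, 2 after
```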
| This looks like a bug. Thanks for the report!
I believe the cause is selfloops. It looks like ```len(G.edges)``` does not count selfloops.
A short-term fix is to use ```G.size()``` or ```G.number_of_edges()```, or to tweak the following in more complex situations:
sum(len(nbrs)+(n in nbrs) for n, nbrs in G.adj.items()) // 2
or
sum(1 for e in G.edges)
I do have self loops in the data, so that is likely the case. Thanks! | 2018-01-07T03:23:01 |
networkx/networkx | 2,825 | networkx__networkx-2825 | [
"2062"
] | add6a734f7fe50c8d27f2fcf9814e485135c4fd4 | diff --git a/networkx/algorithms/operators/product.py b/networkx/algorithms/operators/product.py
--- a/networkx/algorithms/operators/product.py
+++ b/networkx/algorithms/operators/product.py
@@ -19,7 +19,8 @@
from networkx.utils import not_implemented_for
__all__ = ['tensor_product', 'cartesian_product',
- 'lexicographic_product', 'strong_product', 'power']
+ 'lexicographic_product', 'strong_product', 'power',
+ 'rooted_product']
def _dict_product(d1, d2):
@@ -188,9 +189,9 @@ def cartesian_product(G, H):
The Cartesian product $P$ of the graphs $G$ and $H$ has a node set that
is the Cartesian product of the node sets, $V(P)=V(G) \times V(H)$.
- $P$ has an edge $((u,v),(x,y))$ if and only if either $u$ is equal to $x$ and
- both $v$ and $y$ are adjacent in $H$ or if $v$ is equal to $y$ and both $u$
- and $x$ are adjacent in $G$.
+ $P$ has an edge $((u,v),(x,y))$ if and only if either $u$ is equal to $x$
+ and both $v$ and $y$ are adjacent in $H$ or if $v$ is equal to $y$ and
+ both $u$ and $x$ are adjacent in $G$.
Parameters
----------
@@ -434,3 +435,41 @@ def power(G, k):
level += 1
H.add_edges_from((n, nbr) for nbr in seen)
return H
+
+
+@not_implemented_for('multigraph')
+def rooted_product(G, H, root):
+ """ Return the rooted product of graphs G and H rooted at root in H.
+
+ A new graph is constructed representing the rooted product of
+ the inputted graphs, G and H, with a root in H.
+    A rooted product duplicates H for each node in G with the root
+ of H corresponding to the node in G. Nodes are renamed as the direct
+ product of G and H. The result is a subgraph of the cartesian product.
+
+ Parameters
+ ----------
+ G,H : graph
+ A NetworkX graph
+ root : node
+ A node in H
+
+ Returns
+ -------
+ R : The rooted product of G and H with a specified root in H
+
+ Notes
+ -----
+ The nodes of R are the Cartesian Product of the nodes of G and H.
+ The nodes of G and H are not relabeled.
+ """
+ if root not in H:
+ raise nx.NetworkXError('root must be a vertex in H')
+
+ R = nx.Graph()
+ R.add_nodes_from(product(G, H))
+
+ R.add_edges_from(((e[0], root), (e[1], root)) for e in G.edges())
+ R.add_edges_from(((g, e[0]), (g, e[1])) for g in G for e in H.edges())
+
+ return R
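For reference, a usage sketch of the new function (post-patch API; the counts follow from the construction described in the docstring):
```python
import networkx as nx

G = nx.cycle_graph(3)  # base graph
H = nx.path_graph(2)   # one copy of H is attached at each node of G
R = nx.rooted_product(G, H, 0)
print(R.number_of_nodes())  # |V(G)| * |V(H)| = 6
print(R.number_of_edges())  # |E(G)| + |V(G)| * |E(H)| = 3 + 3 = 6
```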
| diff --git a/networkx/algorithms/operators/tests/test_product.py b/networkx/algorithms/operators/tests/test_product.py
--- a/networkx/algorithms/operators/tests/test_product.py
+++ b/networkx/algorithms/operators/tests/test_product.py
@@ -1,11 +1,11 @@
import networkx as nx
-from networkx import tensor_product, cartesian_product, lexicographic_product, strong_product
from nose.tools import assert_true, assert_equal, raises
from networkx.testing import assert_edges_equal
+
@raises(nx.NetworkXError)
def test_tensor_product_raises():
- P = tensor_product(nx.DiGraph(), nx.Graph())
+ P = nx.tensor_product(nx.DiGraph(), nx.Graph())
def test_tensor_product_null():
@@ -16,28 +16,28 @@ def test_tensor_product_null():
P3 = nx.path_graph(3)
P10 = nx.path_graph(10)
# null graph
- G = tensor_product(null, null)
+ G = nx.tensor_product(null, null)
assert_true(nx.is_isomorphic(G, null))
# null_graph X anything = null_graph and v.v.
- G = tensor_product(null, empty10)
+ G = nx.tensor_product(null, empty10)
assert_true(nx.is_isomorphic(G, null))
- G = tensor_product(null, K3)
+ G = nx.tensor_product(null, K3)
assert_true(nx.is_isomorphic(G, null))
- G = tensor_product(null, K10)
+ G = nx.tensor_product(null, K10)
assert_true(nx.is_isomorphic(G, null))
- G = tensor_product(null, P3)
+ G = nx.tensor_product(null, P3)
assert_true(nx.is_isomorphic(G, null))
- G = tensor_product(null, P10)
+ G = nx.tensor_product(null, P10)
assert_true(nx.is_isomorphic(G, null))
- G = tensor_product(empty10, null)
+ G = nx.tensor_product(empty10, null)
assert_true(nx.is_isomorphic(G, null))
- G = tensor_product(K3, null)
+ G = nx.tensor_product(K3, null)
assert_true(nx.is_isomorphic(G, null))
- G = tensor_product(K10, null)
+ G = nx.tensor_product(K10, null)
assert_true(nx.is_isomorphic(G, null))
- G = tensor_product(P3, null)
+ G = nx.tensor_product(P3, null)
assert_true(nx.is_isomorphic(G, null))
- G = tensor_product(P10, null)
+ G = nx.tensor_product(P10, null)
assert_true(nx.is_isomorphic(G, null))
@@ -46,9 +46,9 @@ def test_tensor_product_size():
K3 = nx.complete_graph(3)
K5 = nx.complete_graph(5)
- G = tensor_product(P5, K3)
+ G = nx.tensor_product(P5, K3)
assert_equal(nx.number_of_nodes(G), 5 * 3)
- G = tensor_product(K3, K5)
+ G = nx.tensor_product(K3, K5)
assert_equal(nx.number_of_nodes(G), 3 * 5)
@@ -56,38 +56,38 @@ def test_tensor_product_combinations():
# basic smoke test, more realistic tests would be usefule
P5 = nx.path_graph(5)
K3 = nx.complete_graph(3)
- G = tensor_product(P5, K3)
+ G = nx.tensor_product(P5, K3)
assert_equal(nx.number_of_nodes(G), 5 * 3)
- G = tensor_product(P5, nx.MultiGraph(K3))
+ G = nx.tensor_product(P5, nx.MultiGraph(K3))
assert_equal(nx.number_of_nodes(G), 5 * 3)
- G = tensor_product(nx.MultiGraph(P5), K3)
+ G = nx.tensor_product(nx.MultiGraph(P5), K3)
assert_equal(nx.number_of_nodes(G), 5 * 3)
- G = tensor_product(nx.MultiGraph(P5), nx.MultiGraph(K3))
+ G = nx.tensor_product(nx.MultiGraph(P5), nx.MultiGraph(K3))
assert_equal(nx.number_of_nodes(G), 5 * 3)
- G = tensor_product(nx.DiGraph(P5), nx.DiGraph(K3))
+ G = nx.tensor_product(nx.DiGraph(P5), nx.DiGraph(K3))
assert_equal(nx.number_of_nodes(G), 5 * 3)
def test_tensor_product_classic_result():
K2 = nx.complete_graph(2)
G = nx.petersen_graph()
- G = tensor_product(G, K2)
+ G = nx.tensor_product(G, K2)
assert_true(nx.is_isomorphic(G, nx.desargues_graph()))
G = nx.cycle_graph(5)
- G = tensor_product(G, K2)
+ G = nx.tensor_product(G, K2)
assert_true(nx.is_isomorphic(G, nx.cycle_graph(10)))
G = nx.tetrahedral_graph()
- G = tensor_product(G, K2)
+ G = nx.tensor_product(G, K2)
assert_true(nx.is_isomorphic(G, nx.cubical_graph()))
def test_tensor_product_random():
G = nx.erdos_renyi_graph(10, 2 / 10.)
H = nx.erdos_renyi_graph(10, 2 / 10.)
- GH = tensor_product(G, H)
+ GH = nx.tensor_product(G, H)
for (u_G, u_H) in GH.nodes():
for (v_G, v_H) in GH.nodes():
@@ -104,7 +104,7 @@ def test_cartesian_product_multigraph():
H = nx.MultiGraph()
H.add_edge(3, 4, key=0)
H.add_edge(3, 4, key=1)
- GH = cartesian_product(G, H)
+ GH = nx.cartesian_product(G, H)
assert_equal(set(GH), {(1, 3), (2, 3), (2, 4), (1, 4)})
assert_equal({(frozenset([u, v]), k) for u, v, k in GH.edges(keys=True)},
{(frozenset([u, v]), k) for u, v, k in
@@ -116,7 +116,7 @@ def test_cartesian_product_multigraph():
@raises(nx.NetworkXError)
def test_cartesian_product_raises():
- P = cartesian_product(nx.DiGraph(), nx.Graph())
+ P = nx.cartesian_product(nx.DiGraph(), nx.Graph())
def test_cartesian_product_null():
@@ -127,28 +127,28 @@ def test_cartesian_product_null():
P3 = nx.path_graph(3)
P10 = nx.path_graph(10)
# null graph
- G = cartesian_product(null, null)
+ G = nx.cartesian_product(null, null)
assert_true(nx.is_isomorphic(G, null))
# null_graph X anything = null_graph and v.v.
- G = cartesian_product(null, empty10)
+ G = nx.cartesian_product(null, empty10)
assert_true(nx.is_isomorphic(G, null))
- G = cartesian_product(null, K3)
+ G = nx.cartesian_product(null, K3)
assert_true(nx.is_isomorphic(G, null))
- G = cartesian_product(null, K10)
+ G = nx.cartesian_product(null, K10)
assert_true(nx.is_isomorphic(G, null))
- G = cartesian_product(null, P3)
+ G = nx.cartesian_product(null, P3)
assert_true(nx.is_isomorphic(G, null))
- G = cartesian_product(null, P10)
+ G = nx.cartesian_product(null, P10)
assert_true(nx.is_isomorphic(G, null))
- G = cartesian_product(empty10, null)
+ G = nx.cartesian_product(empty10, null)
assert_true(nx.is_isomorphic(G, null))
- G = cartesian_product(K3, null)
+ G = nx.cartesian_product(K3, null)
assert_true(nx.is_isomorphic(G, null))
- G = cartesian_product(K10, null)
+ G = nx.cartesian_product(K10, null)
assert_true(nx.is_isomorphic(G, null))
- G = cartesian_product(P3, null)
+ G = nx.cartesian_product(P3, null)
assert_true(nx.is_isomorphic(G, null))
- G = cartesian_product(P10, null)
+ G = nx.cartesian_product(P10, null)
assert_true(nx.is_isomorphic(G, null))
@@ -157,12 +157,12 @@ def test_cartesian_product_size():
K5 = nx.complete_graph(5)
P5 = nx.path_graph(5)
K3 = nx.complete_graph(3)
- G = cartesian_product(P5, K3)
+ G = nx.cartesian_product(P5, K3)
assert_equal(nx.number_of_nodes(G), 5 * 3)
assert_equal(nx.number_of_edges(G),
nx.number_of_edges(P5) * nx.number_of_nodes(K3) +
nx.number_of_edges(K3) * nx.number_of_nodes(P5))
- G = cartesian_product(K3, K5)
+ G = nx.cartesian_product(K3, K5)
assert_equal(nx.number_of_nodes(G), 3 * 5)
assert_equal(nx.number_of_edges(G),
nx.number_of_edges(K5) * nx.number_of_nodes(K3) +
@@ -174,19 +174,19 @@ def test_cartesian_product_classic():
P2 = nx.path_graph(2)
P3 = nx.path_graph(3)
# cube = 2-path X 2-path
- G = cartesian_product(P2, P2)
- G = cartesian_product(P2, G)
+ G = nx.cartesian_product(P2, P2)
+ G = nx.cartesian_product(P2, G)
assert_true(nx.is_isomorphic(G, nx.cubical_graph()))
# 3x3 grid
- G = cartesian_product(P3, P3)
+ G = nx.cartesian_product(P3, P3)
assert_true(nx.is_isomorphic(G, nx.grid_2d_graph(3, 3)))
def test_cartesian_product_random():
G = nx.erdos_renyi_graph(10, 2 / 10.)
H = nx.erdos_renyi_graph(10, 2 / 10.)
- GH = cartesian_product(G, H)
+ GH = nx.cartesian_product(G, H)
for (u_G, u_H) in GH.nodes():
for (v_G, v_H) in GH.nodes():
@@ -199,7 +199,7 @@ def test_cartesian_product_random():
@raises(nx.NetworkXError)
def test_lexicographic_product_raises():
- P = lexicographic_product(nx.DiGraph(), nx.Graph())
+ P = nx.lexicographic_product(nx.DiGraph(), nx.Graph())
def test_lexicographic_product_null():
@@ -210,28 +210,28 @@ def test_lexicographic_product_null():
P3 = nx.path_graph(3)
P10 = nx.path_graph(10)
# null graph
- G = lexicographic_product(null, null)
+ G = nx.lexicographic_product(null, null)
assert_true(nx.is_isomorphic(G, null))
# null_graph X anything = null_graph and v.v.
- G = lexicographic_product(null, empty10)
+ G = nx.lexicographic_product(null, empty10)
assert_true(nx.is_isomorphic(G, null))
- G = lexicographic_product(null, K3)
+ G = nx.lexicographic_product(null, K3)
assert_true(nx.is_isomorphic(G, null))
- G = lexicographic_product(null, K10)
+ G = nx.lexicographic_product(null, K10)
assert_true(nx.is_isomorphic(G, null))
- G = lexicographic_product(null, P3)
+ G = nx.lexicographic_product(null, P3)
assert_true(nx.is_isomorphic(G, null))
- G = lexicographic_product(null, P10)
+ G = nx.lexicographic_product(null, P10)
assert_true(nx.is_isomorphic(G, null))
- G = lexicographic_product(empty10, null)
+ G = nx.lexicographic_product(empty10, null)
assert_true(nx.is_isomorphic(G, null))
- G = lexicographic_product(K3, null)
+ G = nx.lexicographic_product(K3, null)
assert_true(nx.is_isomorphic(G, null))
- G = lexicographic_product(K10, null)
+ G = nx.lexicographic_product(K10, null)
assert_true(nx.is_isomorphic(G, null))
- G = lexicographic_product(P3, null)
+ G = nx.lexicographic_product(P3, null)
assert_true(nx.is_isomorphic(G, null))
- G = lexicographic_product(P10, null)
+ G = nx.lexicographic_product(P10, null)
assert_true(nx.is_isomorphic(G, null))
@@ -239,22 +239,22 @@ def test_lexicographic_product_size():
K5 = nx.complete_graph(5)
P5 = nx.path_graph(5)
K3 = nx.complete_graph(3)
- G = lexicographic_product(P5, K3)
+ G = nx.lexicographic_product(P5, K3)
assert_equal(nx.number_of_nodes(G), 5 * 3)
- G = lexicographic_product(K3, K5)
+ G = nx.lexicographic_product(K3, K5)
assert_equal(nx.number_of_nodes(G), 3 * 5)
def test_lexicographic_product_combinations():
P5 = nx.path_graph(5)
K3 = nx.complete_graph(3)
- G = lexicographic_product(P5, K3)
+ G = nx.lexicographic_product(P5, K3)
assert_equal(nx.number_of_nodes(G), 5 * 3)
- G = lexicographic_product(nx.MultiGraph(P5), K3)
+ G = nx.lexicographic_product(nx.MultiGraph(P5), K3)
assert_equal(nx.number_of_nodes(G), 5 * 3)
- G = lexicographic_product(P5, nx.MultiGraph(K3))
+ G = nx.lexicographic_product(P5, nx.MultiGraph(K3))
assert_equal(nx.number_of_nodes(G), 5 * 3)
- G = lexicographic_product(nx.MultiGraph(P5), nx.MultiGraph(K3))
+ G = nx.lexicographic_product(nx.MultiGraph(P5), nx.MultiGraph(K3))
assert_equal(nx.number_of_nodes(G), 5 * 3)
# No classic easily found classic results for lexicographic product
@@ -263,7 +263,7 @@ def test_lexicographic_product_combinations():
def test_lexicographic_product_random():
G = nx.erdos_renyi_graph(10, 2 / 10.)
H = nx.erdos_renyi_graph(10, 2 / 10.)
- GH = lexicographic_product(G, H)
+ GH = nx.lexicographic_product(G, H)
for (u_G, u_H) in GH.nodes():
for (v_G, v_H) in GH.nodes():
@@ -275,7 +275,7 @@ def test_lexicographic_product_random():
@raises(nx.NetworkXError)
def test_strong_product_raises():
- P = strong_product(nx.DiGraph(), nx.Graph())
+ P = nx.strong_product(nx.DiGraph(), nx.Graph())
def test_strong_product_null():
@@ -286,28 +286,28 @@ def test_strong_product_null():
P3 = nx.path_graph(3)
P10 = nx.path_graph(10)
# null graph
- G = strong_product(null, null)
+ G = nx.strong_product(null, null)
assert_true(nx.is_isomorphic(G, null))
# null_graph X anything = null_graph and v.v.
- G = strong_product(null, empty10)
+ G = nx.strong_product(null, empty10)
assert_true(nx.is_isomorphic(G, null))
- G = strong_product(null, K3)
+ G = nx.strong_product(null, K3)
assert_true(nx.is_isomorphic(G, null))
- G = strong_product(null, K10)
+ G = nx.strong_product(null, K10)
assert_true(nx.is_isomorphic(G, null))
- G = strong_product(null, P3)
+ G = nx.strong_product(null, P3)
assert_true(nx.is_isomorphic(G, null))
- G = strong_product(null, P10)
+ G = nx.strong_product(null, P10)
assert_true(nx.is_isomorphic(G, null))
- G = strong_product(empty10, null)
+ G = nx.strong_product(empty10, null)
assert_true(nx.is_isomorphic(G, null))
- G = strong_product(K3, null)
+ G = nx.strong_product(K3, null)
assert_true(nx.is_isomorphic(G, null))
- G = strong_product(K10, null)
+ G = nx.strong_product(K10, null)
assert_true(nx.is_isomorphic(G, null))
- G = strong_product(P3, null)
+ G = nx.strong_product(P3, null)
assert_true(nx.is_isomorphic(G, null))
- G = strong_product(P10, null)
+ G = nx.strong_product(P10, null)
assert_true(nx.is_isomorphic(G, null))
@@ -315,22 +315,22 @@ def test_strong_product_size():
K5 = nx.complete_graph(5)
P5 = nx.path_graph(5)
K3 = nx.complete_graph(3)
- G = strong_product(P5, K3)
+ G = nx.strong_product(P5, K3)
assert_equal(nx.number_of_nodes(G), 5 * 3)
- G = strong_product(K3, K5)
+ G = nx.strong_product(K3, K5)
assert_equal(nx.number_of_nodes(G), 3 * 5)
def test_strong_product_combinations():
P5 = nx.path_graph(5)
K3 = nx.complete_graph(3)
- G = strong_product(P5, K3)
+ G = nx.strong_product(P5, K3)
assert_equal(nx.number_of_nodes(G), 5 * 3)
- G = strong_product(nx.MultiGraph(P5), K3)
+ G = nx.strong_product(nx.MultiGraph(P5), K3)
assert_equal(nx.number_of_nodes(G), 5 * 3)
- G = strong_product(P5, nx.MultiGraph(K3))
+ G = nx.strong_product(P5, nx.MultiGraph(K3))
assert_equal(nx.number_of_nodes(G), 5 * 3)
- G = strong_product(nx.MultiGraph(P5), nx.MultiGraph(K3))
+ G = nx.strong_product(nx.MultiGraph(P5), nx.MultiGraph(K3))
assert_equal(nx.number_of_nodes(G), 5 * 3)
# No classic easily found classic results for strong product
@@ -339,7 +339,7 @@ def test_strong_product_combinations():
def test_strong_product_random():
G = nx.erdos_renyi_graph(10, 2 / 10.)
H = nx.erdos_renyi_graph(10, 2 / 10.)
- GH = strong_product(G, H)
+ GH = nx.strong_product(G, H)
for (u_G, u_H) in GH.nodes():
for (v_G, v_H) in GH.nodes():
@@ -365,11 +365,28 @@ def test_graph_power():
G.add_edge(9, 2)
H = nx.power(G, 2)
- assert_edges_equal(list(H.edges()), [(0, 1), (0, 2), (0, 5), (0, 6), (0, 7), (1, 9),
- (1, 2), (1, 3), (1, 6), (2, 3), (2, 4), (2, 8),
- (2, 9), (3, 4), (3, 5), (3, 9), (4, 5), (4, 6),
- (5, 6), (5, 7), (6, 7), (6, 8), (7, 8), (7, 9),
- (8, 9)])
+ assert_edges_equal(list(H.edges()),
+ [(0, 1), (0, 2), (0, 5), (0, 6), (0, 7), (1, 9),
+ (1, 2), (1, 3), (1, 6), (2, 3), (2, 4), (2, 8),
+ (2, 9), (3, 4), (3, 5), (3, 9), (4, 5), (4, 6),
+ (5, 6), (5, 7), (6, 7), (6, 8), (7, 8), (7, 9),
+ (8, 9)])
+
+
@raises(ValueError)
def test_graph_power_negative():
- nx.power(nx.Graph(),-1)
+ nx.power(nx.Graph(), -1)
+
+
+@raises(nx.NetworkXError)
+def test_rooted_product_raises():
+ nx.rooted_product(nx.Graph(), nx.path_graph(2), 10)
+
+
+def test_rooted_product():
+ G = nx.cycle_graph(5)
+ H = nx.Graph()
+ H.add_edges_from([('a', 'b'), ('b', 'c'), ('b', 'd')])
+ R = nx.rooted_product(G, H, 'a')
+ assert_equal(len(R), len(G) * len(H))
+ assert_equal(R.size(), G.size() + len(G) * H.size())
| Rooted product of graphs
Addresses issue https://github.com/networkx/networkx/issues/2017
Implements code to construct the rooted product of two graphs. Relevant information can be found at https://en.wikipedia.org/wiki/Rooted_product_of_graphs
This is my first pull request to this project so please let me know if there is anything I should fix.
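A usage sketch distilled from the tests added in this PR (the call form `nx.rooted_product(G, H, root)` is taken from the test suite above, not invented):
```python
import networkx as nx

G = nx.cycle_graph(5)
H = nx.Graph([('a', 'b'), ('b', 'c'), ('b', 'd')])
R = nx.rooted_product(G, H, 'a')                 # 'a' is the root of H
assert len(R) == len(G) * len(H)                 # 5 * 4 = 20 nodes
assert R.size() == G.size() + len(G) * H.size()  # 5 + 5 * 3 = 20 edges
```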
| 2018-01-11T03:52:17 |
|
networkx/networkx | 2,830 | networkx__networkx-2830 | [
"2262"
] | 6e1d2b3ffa27c47087dbab1baf9225a599966b14 | diff --git a/networkx/algorithms/approximation/__init__.py b/networkx/algorithms/approximation/__init__.py
--- a/networkx/algorithms/approximation/__init__.py
+++ b/networkx/algorithms/approximation/__init__.py
@@ -1,8 +1,22 @@
-'''
- .. warning:: The approximation submodule is not imported automatically with networkx.
-
- Approximate algorithms can be imported with ``from networkx.algorithms import approximation``.
-'''
+# __init__.py - package containing heuristics for optimization problems
+#
+# Copyright 2016-2018 NetworkX developers.
+#
+# This file is part of NetworkX.
+#
+# NetworkX is distributed under a BSD license; see LICENSE.txt for more
+# information.
+"""Approximations of graph properties and Heuristic functions for optimization
+problems.
+
+ .. warning:: The approximation submodule is not imported in the top-level
+ ``networkx``.
+
+ These functions can be imported with
+ ``from networkx.algorithms import approximation``.
+
+"""
+
from networkx.algorithms.approximation.clustering_coefficient import *
from networkx.algorithms.approximation.clique import *
from networkx.algorithms.approximation.connectivity import *
diff --git a/networkx/algorithms/approximation/clique.py b/networkx/algorithms/approximation/clique.py
--- a/networkx/algorithms/approximation/clique.py
+++ b/networkx/algorithms/approximation/clique.py
@@ -1,15 +1,22 @@
# -*- coding: utf-8 -*-
-"""
-Cliques.
-"""
-# Copyright (C) 2011-2012 by
+# Copyright (C) 2011-2018 by
# Nicholas Mancuso <[email protected]>
# All rights reserved.
# BSD license.
+# Copyright 2016-2018 NetworkX developers.
+# NetworkX is distributed under a BSD license
+#
+# Authors: Nicholas Mancuso ([email protected])
+# Jeffery Finkelstein <[email protected]>
+# Dan Schult <[email protected]>
+"""Functions for computing large cliques."""
+from operator import itemgetter
+
import networkx as nx
+from networkx.utils import not_implemented_for
from networkx.algorithms.approximation import ramsey
-__author__ = """Nicholas Mancuso ([email protected])"""
-__all__ = ["clique_removal", "max_clique"]
+
+__all__ = ["clique_removal", "max_clique", "large_clique_size"]
def max_clique(G):
@@ -31,7 +38,7 @@ def max_clique(G):
Notes
------
A clique in an undirected graph G = (V, E) is a subset of the vertex set
- `C \subseteq V`, such that for every two vertices in C, there exists an edge
+ `C \subseteq V` such that for every two vertices in C there exists an edge
connecting the two. This is equivalent to saying that the subgraph
induced by C is complete (in some cases, the term clique may also refer
to the subgraph).
@@ -75,7 +82,7 @@ def clique_removal(G):
Returns
-------
max_ind_cliques : (set, list) tuple
- Maximal independent set and list of maximal cliques (sets) in the graph.
+ 2-tuple of Maximal Independent Set and list of maximal cliques (sets).
References
----------
@@ -97,3 +104,68 @@ def clique_removal(G):
# Determine the largest independent set as measured by cardinality.
maxiset = max(isets, key=len)
return maxiset, cliques
+
+
+@not_implemented_for('directed')
+@not_implemented_for('multigraph')
+def large_clique_size(G):
+ """Find the size of a large clique in a graph.
+
+ A *clique* is a subset of nodes in which each pair of nodes is
+ adjacent. This function is a heuristic for finding the size of a
+ large clique in the graph.
+
+ Parameters
+ ----------
+ G : NetworkX graph
+
+ Returns
+ -------
+ int
+ The size of a large clique in the graph.
+
+ Notes
+ -----
+ This implementation is from [1]_. Its worst case time complexity is
+ :math:`O(n d^2)`, where *n* is the number of nodes in the graph and
+ *d* is the maximum degree.
+
+ This function is a heuristic, which means it may work well in
+ practice, but there is no rigorous mathematical guarantee on the
+ ratio between the returned number and the actual largest clique size
+ in the graph.
+
+ References
+ ----------
+ .. [1] Pattabiraman, Bharath, et al.
+ "Fast Algorithms for the Maximum Clique Problem on Massive Graphs
+ with Applications to Overlapping Community Detection."
+ *Internet Mathematics* 11.4-5 (2015): 421--448.
+ <https://dx.doi.org/10.1080/15427951.2014.986778>
+
+ See also
+ --------
+
+ :func:`networkx.algorithms.approximation.clique.max_clique`
+ A function that returns an approximate maximum clique with a
+ guarantee on the approximation ratio.
+
+ :mod:`networkx.algorithms.clique`
+ Functions for finding the exact maximum clique in a graph.
+
+ """
+ degrees = G.degree
+ def _clique_heuristic(G, U, size, best_size):
+ if not U:
+ return max(best_size, size)
+ u = max(U, key=degrees)
+ U.remove(u)
+ N_prime = {v for v in G[u] if degrees[v] >= best_size}
+ return _clique_heuristic(G, U & N_prime, size + 1, best_size)
+
+ best_size = 0
+ nodes = (u for u in G if degrees[u] >= best_size)
+ for u in nodes:
+ neighbors = {v for v in G[u] if degrees[v] >= best_size}
+ best_size = _clique_heuristic(G, neighbors, 1, best_size)
+ return best_size
| diff --git a/networkx/algorithms/approximation/tests/test_clique.py b/networkx/algorithms/approximation/tests/test_clique.py
--- a/networkx/algorithms/approximation/tests/test_clique.py
+++ b/networkx/algorithms/approximation/tests/test_clique.py
@@ -14,11 +14,12 @@
from nose.tools import assert_greater
from nose.tools import assert_true
-from nose.tools import eq_
+from nose.tools import assert_equal
import networkx as nx
from networkx.algorithms.approximation import max_clique
from networkx.algorithms.approximation import clique_removal
+from networkx.algorithms.approximation import large_clique_size
def is_independent_set(G, nodes):
@@ -79,13 +80,13 @@ class TestMaxClique(object):
def test_null_graph(self):
G = nx.null_graph()
- eq_(len(max_clique(G)), 0)
+ assert_equal(len(max_clique(G)), 0)
def test_complete_graph(self):
graph = nx.complete_graph(30)
# this should return the entire graph
mc = max_clique(graph)
- eq_(30, len(mc))
+ assert_equal(30, len(mc))
def test_maximal_by_cardinality(self):
"""Tests that the maximal clique is computed according to maximum
@@ -102,3 +103,17 @@ def test_maximal_by_cardinality(self):
G = nx.lollipop_graph(30, 2)
clique = max_clique(G)
assert_greater(len(clique), 2)
+
+
+def test_large_clique_size():
+ G = nx.complete_graph(9)
+ nx.add_cycle(G, [9, 10, 11])
+ G.add_edge(8, 9)
+ G.add_edge(1, 12)
+ G.add_node(13)
+
+ assert_equal(large_clique_size(G), 9)
+ G.remove_node(5)
+ assert_equal(large_clique_size(G), 8)
+ G.remove_edge(2, 3)
+ assert_equal(large_clique_size(G), 7)
| Adds a large clique size heuristic function
This commit creates a new package, `networkx.algorithms.heuristic`, and
a new module within that package containing a function for finding the
size of a large clique in a graph.
This is the algorithm suggested in issue #773. I'm not certain if it is accurate, so please double check it.
There's no unit test because I'm not sure how to test it.
| 2018-01-14T06:28:20 |
|
networkx/networkx | 2,883 | networkx__networkx-2883 | [
"2880"
] | c195df14fcdd5d42e162e6f7ebc0f2c7c934ac52 | diff --git a/networkx/generators/line.py b/networkx/generators/line.py
--- a/networkx/generators/line.py
+++ b/networkx/generators/line.py
@@ -21,7 +21,6 @@
__all__ = ['line_graph', 'inverse_line_graph']
-@not_implemented_for('multigraph')
def line_graph(G, create_using=None):
"""Returns the line graph of the graph or digraph `G`.
| Allow line_graph to apply to multigraph
The code is written for multigraphs and graphs, but a recent change put an errant restriction on multigraphs.
Line 24 of line.py
See #2814
A short-term fix is to call ```nx.generators.line._lg_undirected``` directly, as sketched below.
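A hedged sketch of that workaround. `_lg_undirected` is a private helper, so the signature assumed here may change between networkx versions:
```python
import networkx as nx

G = nx.MultiGraph([(0, 1), (0, 1), (1, 2)])  # parallel edge between 0 and 1
# bypass the errant @not_implemented_for('multigraph') check on line_graph()
L = nx.generators.line._lg_undirected(G, selfloops=False,
                                      create_using=nx.MultiGraph())
print(L.edges(keys=True))
```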
| 2018-02-20T22:45:58 |
||
networkx/networkx | 2,920 | networkx__networkx-2920 | [
"2916"
] | 3cf16a90f247f080e9fb5169da4443a0c9f48293 | diff --git a/networkx/classes/function.py b/networkx/classes/function.py
--- a/networkx/classes/function.py
+++ b/networkx/classes/function.py
@@ -585,6 +585,9 @@ def info(G, n=None):
def set_node_attributes(G, values, name=None):
"""Sets node attributes from a given value or dictionary of values.
+ .. Warning:: The call order of arguments `values` and `name`
+ switched between v1.x & v2.x.
+
Parameters
----------
G : NetworkX Graph
@@ -694,6 +697,9 @@ def get_node_attributes(G, name):
def set_edge_attributes(G, values, name=None):
"""Sets edge attributes from a given value or dictionary of values.
+ .. Warning:: The call order of arguments `values` and `name`
+ switched between v1.x & v2.x.
+
Parameters
----------
G : NetworkX Graph
| Please update documentation to reflect changes from 1.X to 2.X - set_node_attributes
Hello,
After having spent a good few hours trying to figure out why the following wouldn't work (copied from the documentation page on set_node_attributes):
>>> G = nx.path_graph(3)
>>> bb = nx.betweenness_centrality(G) # this is a dictionary
>>> nx.set_node_attributes(G, 'betweenness', bb)
**TypeError: unhashable type: 'dict'**
I had almost managed to convince myself that NetworkX hadn't been updated to work with Python 3.X
Turns out in the migration from NetworkX 1.X to 2.0 the argument order was swapped around to:
nx.set_node_attributes(G, bb, 'betweenness')
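For reference, a self-contained form of the working v2.x call (a minimal sketch using the names from the example above):
```python
import networkx as nx

G = nx.path_graph(3)
bb = nx.betweenness_centrality(G)             # dict: node -> value
nx.set_node_attributes(G, bb, 'betweenness')  # v2.x order: values, then name
print(G.nodes[1]['betweenness'])
```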
Please make an update to the documentation page for set_node_attributes here:
http://pelegm-networkx.readthedocs.io/en/latest/reference/generated/networkx.classes.function.set_node_attributes.html
So that poor new users to NetworkX and Python don't have to suffer through this. It shouldn't be a requirement to have to go through https://networkx.github.io/documentation/stable/release/migration_guide_from_1.x_to_2.0.html
to get the right format to make it work.
Many thanks!
| Yes -- we should put a note/warning in the docstring for ```set_node_attributes``` that the order was changed between 1.x and 2.x
@achilleasatha Amen! Just went through the same ordeal myself. | 2018-04-02T12:49:35 |
|
networkx/networkx | 2,924 | networkx__networkx-2924 | [
"2922"
] | 5cb6e7ecb8faefa459846c6f8f00b884759c76bc | diff --git a/networkx/algorithms/community/community_generators.py b/networkx/algorithms/community/community_generators.py
--- a/networkx/algorithms/community/community_generators.py
+++ b/networkx/algorithms/community/community_generators.py
@@ -268,7 +268,7 @@ def LFR_benchmark_graph(n, tau1, tau2, mu, average_degree=None,
NetworkXError
If any of the parameters do not meet their upper and lower bounds:
- - ``tau1`` and ``tau2`` must be less than or equal to one.
+ - ``tau1`` and ``tau2`` must be strictly greater than 1.
- ``mu`` must be in [0, 1].
- ``max_degree`` must be in {1, ..., *n*}.
- ``min_community`` and ``max_community`` must be in {0, ...,
| For LFR Benchmark Graph Generation, the docs do not match the parameter requirements
A user reading the online or inline documentation will see the following potential raised errors listed for the LFR benchmark graph generation function:
```
Raises
------
NetworkXError
If any of the parameters do not meet their upper and lower bounds:
- ``tau1`` and ``tau2`` must be less than or equal to one.
- ``mu`` must be in [0, 1].
- ``max_degree`` must be in {1, ..., *n*}.
- ``min_community`` and ``max_community`` must be in {0, ...,
*n*}.
If not exactly one of ``average_degree`` and ``min_degree`` is
specified.
If ``min_degree`` is not specified and a suitable ``min_degree``
cannot be found.
```
However, this is not how the function behaves. Below, note the inequality checks on tau1 and tau2:
```
if not tau1 > 1:
raise nx.NetworkXError("tau1 must be greater than one")
if not tau2 > 1:
raise nx.NetworkXError("tau2 must be greater than one")
```
Either the docs need to be updated to reflect tau1, tau2 > 1 or the inline inequality checks need to be modified to allow tau1 and tau2 to equal 1.
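For reference, a call form that satisfies the actual checks (the parameter values are taken from the function's own docstring example, and the import path assumes the 2.x package layout):
```python
from networkx.algorithms.community import LFR_benchmark_graph

# tau1 and tau2 must be strictly greater than 1 in the current code
G = LFR_benchmark_graph(250, tau1=3, tau2=1.5, mu=0.1,
                        average_degree=5, min_community=20, seed=10)
```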
| I was looking at the documentation here: https://networkx.github.io/documentation/latest/_modules/networkx/algorithms/community/community_generators.html#LFR_benchmark_graph which upon closer inspection does not match `master`. Looks like no change is needed except to update the documentation on networkx.github.io, which will happen at release time. | 2018-04-03T16:16:04 |
|
networkx/networkx | 2,936 | networkx__networkx-2936 | [
"2933"
] | 1baa63affe4e8ca24091e6be940f745cfa0e8570 | diff --git a/networkx/readwrite/pajek.py b/networkx/readwrite/pajek.py
--- a/networkx/readwrite/pajek.py
+++ b/networkx/readwrite/pajek.py
@@ -22,6 +22,8 @@
"""
+import warnings
+
import networkx as nx
from networkx.utils import is_string_like, open_file, make_str
@@ -55,16 +57,24 @@ def generate_pajek(G):
# make dictionary mapping nodes to integers
nodenumber = dict(zip(nodes, range(1, len(nodes) + 1)))
for n in nodes:
- na = G.nodes.get(n, {})
- x = na.get('x', 0.0)
- y = na.get('y', 0.0)
- id = int(na.get('id', nodenumber[n]))
+ # copy node attributes and pop mandatory attributes
+ # to avoid duplication.
+ na = G.nodes.get(n, {}).copy()
+ x = na.pop('x', 0.0)
+ y = na.pop('y', 0.0)
+ id = int(na.pop('id', nodenumber[n]))
nodenumber[n] = id
- shape = na.get('shape', 'ellipse')
+ shape = na.pop('shape', 'ellipse')
s = ' '.join(map(make_qstr, (id, n, x, y, shape)))
+ # only optional attributes are left in na.
for k, v in na.items():
- if v.strip() != '':
+ if is_string_like(v) and v.strip() != '':
s += ' %s %s' % (make_qstr(k), make_qstr(v))
+ else:
+ warnings.warn('Node attribute %s is not processed. %s.' %
+ (k,
+ 'Empty attribute' if is_string_like(v) else
+ 'Non-string attribute'))
yield s
# write edges with attributes
@@ -77,8 +87,13 @@ def generate_pajek(G):
value = d.pop('weight', 1.0) # use 1 as default edge value
s = ' '.join(map(make_qstr, (nodenumber[u], nodenumber[v], value)))
for k, v in d.items():
- if v.strip() != '':
+ if is_string_like(v) and v.strip() != '':
s += ' %s %s' % (make_qstr(k), make_qstr(v))
+ else:
+ warnings.warn('Edge attribute %s is not processed. %s.' %
+ (k,
+ 'Empty attribute' if is_string_like(v) else
+ 'Non-string attribute'))
yield s
@@ -99,6 +114,12 @@ def write_pajek(G, path, encoding='UTF-8'):
>>> G=nx.path_graph(4)
>>> nx.write_pajek(G, "test.net")
+ Warnings
+ --------
+ Optional node attributes and edge attributes must be non-empty strings.
+ Otherwise it will not be written into the file. You will need to
+ convert those attributes to strings if you want to keep them.
+
References
----------
See http://vlado.fmf.uni-lj.si/pub/networks/pajek/doc/draweps.htm
| diff --git a/networkx/readwrite/tests/test_pajek.py b/networkx/readwrite/tests/test_pajek.py
--- a/networkx/readwrite/tests/test_pajek.py
+++ b/networkx/readwrite/tests/test_pajek.py
@@ -57,6 +57,32 @@ def test_read_pajek(self):
for n in G:
assert_equal(G.nodes[n], Gin.nodes[n])
+ def test_write_pajek(self):
+ import io
+ G = parse_pajek(self.data)
+ fh = io.BytesIO()
+ nx.write_pajek(G,fh)
+ fh.seek(0)
+ H = nx.read_pajek(fh)
+ assert_nodes_equal(list(G), list(H))
+ assert_edges_equal(list(G.edges()), list(H.edges()))
+ # Graph name is left out for now, therefore it is not tested.
+ # assert_equal(G.graph, H.graph)
+
+ def test_ignored_attribute(self):
+ import io
+ G = nx.Graph()
+ fh = io.BytesIO()
+ G.add_node(1, int_attr=1)
+ G.add_node(2, empty_attr=' ')
+ G.add_edge(1, 2, int_attr=2)
+ G.add_edge(2, 3, empty_attr=' ')
+
+ import warnings
+ with warnings.catch_warnings(record=True) as w:
+ nx.write_pajek(G,fh)
+ assert_equal(len(w), 4)
+
def test_noname(self):
# Make sure we can parse a line such as: *network
# Issue #952
| Write pajek error
When rewriting a network that has been read from Pajek there is an error:
'float' object has no attribute 'strip'
apparently the coordinates (x, y) don't get translated back to strings
(Networkx 2.1)
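A minimal reproduction consistent with the traceback (the float `x`/`y` node attributes are an assumption inferred from the patch above; `read_pajek` stores coordinates as floats):
```python
import io
import networkx as nx

G = nx.Graph()
G.add_node(1, x=0.5, y=0.25)  # float coordinates, as parse_pajek produces

buf = io.BytesIO()
nx.write_pajek(G, buf)  # pre-fix: 'float' object has no attribute 'strip'
```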
| 2018-04-11T19:21:14 |
|
networkx/networkx | 2,950 | networkx__networkx-2950 | [
"2949"
] | a33403783b908e06921204ca584a5dc27201033e | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -62,7 +62,8 @@
# add basic documentation
data = [(docdirbase, glob("*.txt"))]
# add examples
-for d in ['advanced',
+for d in ['.',
+ 'advanced',
'algorithms',
'basic',
'3d_drawing',
@@ -74,6 +75,7 @@
'subclass']:
dd = os.path.join(docdirbase, 'examples', d)
pp = os.path.join('examples', d)
+ data.append((dd, glob(os.path.join(pp, "*.txt"))))
data.append((dd, glob(os.path.join(pp, "*.py"))))
data.append((dd, glob(os.path.join(pp, "*.bz2"))))
data.append((dd, glob(os.path.join(pp, "*.gz"))))
| networkx 2.1 error building doc with sphinx 1.7.2
Hello,
when building the doc with sphinx 1.7.2 i got this error:
```
sphinx-build -b html -d build/doctrees . build/html
Running Sphinx v1.6.6
making output directory...
/usr/lib/python2.7/dist-packages/IPython/nbconvert.py:13: ShimWarning: The `IPython.nbconvert` package has been deprecated since IPython 4.0. You should import from nbconvert instead.
"You should import from nbconvert instead.", ShimWarning)
Change of translator for the pyfile builder.
Change of translator for the ipynb builder.
loading pickled environment... not yet created
[autosummary] generating autosummary for: bibliography.rst, citing.rst, credits.rst, developer/contribute.rst, developer/gitwash/configure_git.rst, developer/gitwash/development_workflow.rst, developer/gitwash/following_latest.rst, developer/gitwash/forking_hell.rst, developer/gitwash/git_development.rst, developer/gitwash/git_install.rst, ..., release/api_1.7.rst, release/api_1.8.rst, release/api_1.9.rst, release/index.rst, release/migration_guide_from_1.x_to_2.0.rst, release/release_2.0.rst, release/release_2.1.rst, release/release_dev.rst, release/release_template.rst, tutorial.rst
[autosummary] generating autosummary for: /home/morph/deb/build-area/python-networkx-2.1/doc/reference/algorithms/generated/networkx.algorithms.approximation.clique.clique_removal.rst, /home/morph/deb/build-area/python-networkx-2.1/doc/reference/algorithms/generated/networkx.algorithms.approximation.clique.large_clique_size.rst, /home/morph/deb/build-area/python-networkx-2.1/doc/reference/algorithms/generated/networkx.algorithms.approximation.clique.max_clique.rst, /home/morph/deb/build-area/python-networkx-2.1/doc/reference/algorithms/generated/networkx.algorithms.approximation.clustering_coefficient.average_clustering.rst, /home/morph/deb/build-area/python-networkx-2.1/doc/reference/algorithms/generated/networkx.algorithms.approximation.connectivity.all_pairs_node_connectivity.rst, /home/morph/deb/build-area/python-networkx-2.1/doc/reference/algorithms/generated/networkx.algorithms.approximation.connectivity.local_node_connectivity.rst, /home/morph/deb/build-area/python-networkx-2.1/doc/reference/algorithms/generated/networkx.algorithms.approximation.connectivity.node_connectivity.rst, /home/morph/deb/build-area/python-networkx-2.1/doc/reference/algorithms/generated/networkx.algorithms.approximation.dominating_set.min_edge_dominating_set.rst, /home/morph/deb/build-area/python-networkx-2.1/doc/reference/algorithms/generated/networkx.algorithms.approximation.dominating_set.min_weighted_dominating_set.rst, /home/morph/deb/build-area/python-networkx-2.1/doc/reference/algorithms/generated/networkx.algorithms.approximation.independent_set.maximum_independent_set.rst, ..., /home/morph/deb/build-area/python-networkx-2.1/doc/reference/readwrite/generated/networkx.readwrite.nx_shp.write_shp.rst, /home/morph/deb/build-area/python-networkx-2.1/doc/reference/readwrite/generated/networkx.readwrite.nx_yaml.read_yaml.rst, /home/morph/deb/build-area/python-networkx-2.1/doc/reference/readwrite/generated/networkx.readwrite.nx_yaml.write_yaml.rst, /home/morph/deb/build-area/python-networkx-2.1/doc/reference/readwrite/generated/networkx.readwrite.pajek.parse_pajek.rst, /home/morph/deb/build-area/python-networkx-2.1/doc/reference/readwrite/generated/networkx.readwrite.pajek.read_pajek.rst, /home/morph/deb/build-area/python-networkx-2.1/doc/reference/readwrite/generated/networkx.readwrite.pajek.write_pajek.rst, /home/morph/deb/build-area/python-networkx-2.1/doc/reference/readwrite/generated/networkx.readwrite.sparse6.from_sparse6_bytes.rst, /home/morph/deb/build-area/python-networkx-2.1/doc/reference/readwrite/generated/networkx.readwrite.sparse6.read_sparse6.rst, /home/morph/deb/build-area/python-networkx-2.1/doc/reference/readwrite/generated/networkx.readwrite.sparse6.to_sparse6_bytes.rst, /home/morph/deb/build-area/python-networkx-2.1/doc/reference/readwrite/generated/networkx.readwrite.sparse6.write_sparse6.rst
loading intersphinx inventory from ../../debian/python.org_objects.inv...
WARNING: intersphinx inventory '../../debian/python.org_objects.inv' not fetchable due to <type 'exceptions.IOError'>: [Errno 2] No such file or directory: u'/home/morph/deb/build-area/python-networkx-2.1/doc/../../debian/python.org_objects.inv'
loading intersphinx inventory from ../../debian/scipy.org_numpy_objects.inv...
WARNING: intersphinx inventory '../../debian/scipy.org_numpy_objects.inv' not fetchable due to <type 'exceptions.IOError'>: [Errno 2] No such file or directory: u'/home/morph/deb/build-area/python-networkx-2.1/doc/../../debian/scipy.org_numpy_objects.inv'
generating gallery...
Exception occurred:
File "/usr/lib/python2.7/dist-packages/sphinx_gallery/gen_gallery.py", line 222, in generate_gallery_rst
.format(examples_dir))
IOError: Main example directory /home/morph/deb/build-area/python-networkx-2.1/doc/../examples does not have a README.txt file. Please write one to introduce your gallery.
The full traceback has been saved in /tmp/sphinx-err-SnsvwK.log, if you want to report the issue to the developers.
```
content of `/tmp/sphinx-err-SnsvwK.log` is:
```
# Sphinx version: 1.6.6
# Python version: 2.7.14+ (CPython)
# Docutils version: 0.14
# Jinja2 version: 2.10
# Last messages:
# Loaded extensions:
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/sphinx/cmdline.py", line 305, in main
opts.warningiserror, opts.tags, opts.verbosity, opts.jobs)
File "/usr/lib/python2.7/dist-packages/sphinx/application.py", line 234, in __init__
self._init_builder()
File "/usr/lib/python2.7/dist-packages/sphinx/application.py", line 312, in _init_builder
self.emit('builder-inited')
File "/usr/lib/python2.7/dist-packages/sphinx/application.py", line 489, in emit
return self.events.emit(event, self, *args)
File "/usr/lib/python2.7/dist-packages/sphinx/events.py", line 79, in emit
results.append(callback(*args))
File "/usr/lib/python2.7/dist-packages/sphinx_gallery/gen_gallery.py", line 222, in generate_gallery_rst
.format(examples_dir))
IOError: Main example directory /home/morph/deb/build-area/python-networkx-2.1/doc/../examples does not have a README.txt file. Please write one to introduce your gallery.
```
can you have a look?
thanks!
| I can't reproduce this error. How did you get networkx? Does it come with the example dir, and is there a README.txt in it?
it's from the PyPI tarball; it comes with a top-level `examples` directory (i.e. not under `doc/`) and it doesn't contain a README.txt file | 2018-04-23T21:03:03 |
|
networkx/networkx | 2,955 | networkx__networkx-2955 | [
"2954"
] | 48326e1761c08d7a073aec53f7a644baf2249ef6 | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -81,6 +81,10 @@
data.append((dd, glob(os.path.join(pp, "*.gz"))))
data.append((dd, glob(os.path.join(pp, "*.mbox"))))
data.append((dd, glob(os.path.join(pp, "*.edgelist"))))
+# add js force examples
+dd = os.path.join(docdirbase, 'examples', 'javascript/force')
+pp = os.path.join('examples', 'javascript/force')
+data.append((dd, glob(os.path.join(pp, "*"))))
# add the tests
package_data = {
| doc/_static/copybutton.js is missing from PyPI tarball for networkx 2.1
Hello,
`doc/_static/copybutton.js` is currently missing, while it is referenced by almost all the HTML pages.
can you please add it to the PyPI released tarball?
thanks!
| I ran a diff on the PyPI release and the GitHub release and found more missing files, which may also be the cause of #2941. I'll send a PR later. | 2018-04-29T19:54:46 |
|
networkx/networkx | 2,984 | networkx__networkx-2984 | [
"2977"
] | 404679f7a8a9f015a693eaa01a5b319091123b31 | diff --git a/networkx/convert_matrix.py b/networkx/convert_matrix.py
--- a/networkx/convert_matrix.py
+++ b/networkx/convert_matrix.py
@@ -218,10 +218,10 @@ def to_pandas_edgelist(G, source='source', target='target', nodelist=None,
>>> G = nx.Graph([('A', 'B', {'cost': 1, 'weight': 7}),
... ('C', 'E', {'cost': 9, 'weight': 10})])
>>> df = nx.to_pandas_edgelist(G, nodelist=['A', 'C'])
- >>> df
- cost source target weight
- 0 1 A B 7
- 1 9 C E 10
+ >>> df[['source', 'target', 'cost', 'weight']]
+ source target cost weight
+ 0 A B 1 7
+ 1 C E 9 10
"""
import pandas as pd
@@ -290,7 +290,7 @@ def from_pandas_edgelist(df, source='source', target='target', edge_attr=None,
>>> df = pd.DataFrame(ints, columns=['weight', 'cost'])
>>> df[0] = a
>>> df['b'] = b
- >>> df
+ >>> df[['weight', 'cost', 0, 'b']]
weight cost 0 b
0 4 7 A D
1 7 1 B A
| change in pandas breaks doctest in python 3.6
It looks like the recent change in pandas: https://github.com/pandas-dev/pandas/pull/19884 is breaking one of the doctests in Python 3.6.
I'm not sure what would be the best solution here since py2 and py3 behave differently and the difference is not crucial at all. One way to fix this would be to use an `OrderedDict`, but it feels like overkill.
```
======================================================================
FAIL: to_pandas_edgelist (networkx.convert_matrix)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/opt/python/3.6.3/lib/python3.6/doctest.py", line 2199, in runTest
raise self.failureException(self.format_failure(new.getvalue()))
AssertionError: Failed doctest test for networkx.convert_matrix.to_pandas_edgelist
File "/home/travis/venv/lib/python3.6/site-packages/networkx/convert_matrix.py", line 192, in to_pandas_edgelist
----------------------------------------------------------------------
File "/home/travis/venv/lib/python3.6/site-packages/networkx/convert_matrix.py", line 221, in networkx.convert_matrix.to_pandas_edgelist
Failed example:
df
Expected:
cost source target weight
0 1 A B 7
1 9 C E 10
Got:
source target weight cost
0 A B 7 1
1 C E 10 9
```
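For reference, the column-pinning alternative (the approach the patch above takes) in a self-contained form, which makes the printed output independent of pandas' column ordering:
```python
import networkx as nx

G = nx.Graph([('A', 'B', {'cost': 1, 'weight': 7}),
              ('C', 'E', {'cost': 9, 'weight': 10})])
df = nx.to_pandas_edgelist(G)
print(df[['source', 'target', 'cost', 'weight']])  # column order pinned
```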
| 2018-05-22T21:52:23 |
||
networkx/networkx | 2,994 | networkx__networkx-2994 | [
"2831"
] | 2b04833ed070513c72c98592b22e277928226627 | diff --git a/networkx/algorithms/components/strongly_connected.py b/networkx/algorithms/components/strongly_connected.py
--- a/networkx/algorithms/components/strongly_connected.py
+++ b/networkx/algorithms/components/strongly_connected.py
@@ -80,6 +80,7 @@ def strongly_connected_components(G):
Information Processing Letters 49(1): 9-14, (1994)..
"""
+ nbrs = {}
preorder = {}
lowlink = {}
scc_found = {}
@@ -94,7 +95,9 @@ def strongly_connected_components(G):
i = i + 1
preorder[v] = i
done = 1
- v_nbrs = G[v]
+ if v not in nbrs:
+ nbrs[v] = iter(G[v])
+ v_nbrs = nbrs[v]
for w in v_nbrs:
if w not in preorder:
queue.append(w)
@@ -102,7 +105,7 @@ def strongly_connected_components(G):
break
if done == 1:
lowlink[v] = preorder[v]
- for w in v_nbrs:
+ for w in G[v]:
if w not in scc_found:
if preorder[w] > preorder[v]:
lowlink[v] = min([lowlink[v], lowlink[w]])
| diff --git a/networkx/algorithms/components/tests/test_strongly_connected.py b/networkx/algorithms/components/tests/test_strongly_connected.py
--- a/networkx/algorithms/components/tests/test_strongly_connected.py
+++ b/networkx/algorithms/components/tests/test_strongly_connected.py
@@ -1,4 +1,5 @@
#!/usr/bin/env python
+import time
from nose.tools import *
import networkx as nx
from networkx import NetworkXNotImplemented
@@ -154,3 +155,29 @@ def test_connected_raise(self):
assert_raises(NetworkXNotImplemented, nx.condensation, G)
# deprecated
assert_raises(NetworkXNotImplemented, nx.strongly_connected_component_subgraphs, G)
+
+# Commented out due to variability on Travis-CI hardware/operating systems
+# def test_linear_time(self):
+# # See Issue #2831
+# count = 100 # base case
+# dg = nx.DiGraph()
+# dg.add_nodes_from([0, 1])
+# for i in range(2, count):
+# dg.add_node(i)
+# dg.add_edge(i, 1)
+# dg.add_edge(0, i)
+# t = time.time()
+# ret = tuple(nx.strongly_connected_components(dg))
+# dt = time.time() - t
+#
+# count = 200
+# dg = nx.DiGraph()
+# dg.add_nodes_from([0, 1])
+# for i in range(2, count):
+# dg.add_node(i)
+# dg.add_edge(i, 1)
+# dg.add_edge(0, i)
+# t = time.time()
+# ret = tuple(nx.strongly_connected_components(dg))
+# dt2 = time.time() - t
+# assert_less(dt2, dt * 2.3) # should be 2 times longer for this graph
| strongly_connected_components too slow: quadratic run time for some graphs
nx.strongly_connected_components(dg) is prohibitively slow for certain graphs.
For example, the code below shows approximately quadratic run time for a very simple DAG with one start node linking to each of N intermediate nodes, each of which links to an end node.
SCC should run in O(num_nodes + num_edges) using Tarjan's algorithm.
I ran into this in a practical problem (with a much more complex graph) where SCC computation using networkx just seemed to hang for some graphs.
Tested with Python 3.5.2, networkx 2.0 (installed with pip3 install networkx on Jan 13, 2018).
Here's the test code:
```
import time
import networkx as nx
for count in range(10000, 100000, 10000):
# Create the graph
dg = nx.DiGraph()
dg.add_node(0) # Start node
dg.add_node(1) # End node
# Add intermediate nodes and their edges
for i in range(2, count):
dg.add_node(i) # Add intermediate node
dg.add_edge(0, i) # From start node to intermediate
dg.add_edge(i, 1) # From intermediate to end node
# Time finding strongly connected components
t = time.time()
ret = tuple(nx.strongly_connected_components(dg))
print("with {} nodes: {} secs".format(count, time.time() - t))
```
| Yes, there must be something wrong in the implementation of `strongly_connected_components`.
The recursive version of Tarjan's algorithm `strongly_connected_components_recursive` and `kosaraju_strongly_connected_components` both show (using your code) linear scaling in num_nodes + num_edges.
Where is the bug? Thanks for the simple example that shows the issue.
Hi, I happened to see this post, and did some profiling. It might be the case that the implementation is correct, just inefficient. Not sure if this will help anyone, but here's the profiling result.

Hello,
@blueTaxi , thank you for pointing to the exact lines.
@dschult , @hagberg please check my [PR](https://github.com/networkx/networkx/pull/2841)
The problem was in the loop that you may see in the message above.
We iterate over and over again through elements that have already been detected as SCCs,
so removing them from the list of nodes improves performance.
I've used the simple performance test provided by @tatuylonen.
It also looks like there are big fluctuations in my performance testing, because I ran it on my personal laptop.
Performance results `without patch` vs `with patch`
```
Old Performance results:
10000
with 10000 nodes: 2.124323844909668 secs
with 10000 nodes: 0.06797289848327637 secs
iterative / recursive: 31.25251228520619
20000
with 20000 nodes: 8.607456922531128 secs
with 20000 nodes: 0.08330178260803223 secs
iterative / recursive: 103.328604179248
30000
with 30000 nodes: 19.59886121749878 secs
with 30000 nodes: 0.12491989135742188 secs
iterative / recursive: 156.8914366201484
40000
with 40000 nodes: 35.78781795501709 secs
with 40000 nodes: 0.30605173110961914 secs
iterative / recursive: 116.93388508297278
50000
with 50000 nodes: 57.72680306434631 secs
with 50000 nodes: 0.24113011360168457 secs
iterative / recursive: 239.40105282620755
New performance results:
10000
with 10000 nodes: 0.2716481685638428 secs
with 10000 nodes: 0.042136192321777344 secs
iterative / recursive: 6.446908313152117
20000
with 20000 nodes: 0.5712282657623291 secs
with 20000 nodes: 0.08850312232971191 secs
iterative / recursive: 6.45432896292924
30000
with 30000 nodes: 1.0197288990020752 secs
with 30000 nodes: 0.14125633239746094 secs
iterative / recursive: 7.218996003186649
40000
with 40000 nodes: 1.5178308486938477 secs
with 40000 nodes: 0.26900696754455566 secs
iterative / recursive: 5.64234771518492
50000
with 50000 nodes: 2.1965038776397705 secs
with 50000 nodes: 0.34764623641967773 secs
iterative / recursive: 6.318215609813638
```
I think those good results may be special for the specific graphs being tested. Can you describe why this would move the algorithm from quadratic time to linear time? When I try the new version for random graphs, I get quadratic dependence.
I think the quadratic nature comes from rechecking all the ```w``` in ```v_nbrs``` for being in ```preorder``` each time we come back to ```v``` being on the top of the queue. So, for each neighbor we have to pass through all the previously explored neighbors.
I tried to create a version that stores the ```v_nbrs``` on the queue with ```v``` but wasn't able to get it working in the time I had. ```v_nbrs``` is used in more than one place and sometimes needs to use ```G[v]```
Thanks everybody for looking at this!
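A toy sketch of that iterator-caching idea (my phrasing, not the merged diff, though the patch at the top of this record uses the same mechanism):
```python
import networkx as nx

nbrs = {}

def resume_neighbors(G, v):
    # keep one persistent iterator per node: each neighbor of v is
    # consumed at most once over the whole traversal, instead of the
    # adjacency list being re-scanned every time v returns to the top
    # of the queue (the source of the quadratic behavior)
    if v not in nbrs:
        nbrs[v] = iter(G[v])
    return nbrs[v]

G = nx.DiGraph([(0, 1), (0, 2), (2, 1)])
print(next(resume_neighbors(G, 0)))  # first neighbor of 0
print(next(resume_neighbors(G, 0)))  # resumes where it left off
```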
Thank you for pointing to the `queue`. I realized that my patch actually does not fix the problem. I will try to implement your approach with the `queue`.
@ArseniyAntonov
Maybe the best performance would include a combination of your approach with removing nodes in addition to storing ```v_nbrs``` as a set and popping nodes off it as preorder is checked. G'Luck -- :)
@dschult
I've added the queue and checked it on a random graph.
Here are my results:
for the graph specified in this issue:
```
10000
with 10000 nodes: 0.17738914489746094 secs
with 10000 nodes: 0.04557490348815918 secs
iterative / recursive: 3.8922549763281107
20000
with 20000 nodes: 0.4013180732727051 secs
with 20000 nodes: 0.0905609130859375 secs
iterative / recursive: 4.431471145745577
30000
with 30000 nodes: 0.6294271945953369 secs
with 30000 nodes: 0.1387310028076172 secs
iterative / recursive: 4.537033408950299
40000
with 40000 nodes: 0.9015460014343262 secs
with 40000 nodes: 0.2698240280151367 secs
iterative / recursive: 3.3412369093524568
50000
with 50000 nodes: 1.1479790210723877 secs
with 50000 nodes: 0.35440802574157715 secs
iterative / recursive: 3.239145103078046
60000
with 60000 nodes: 1.2027249336242676 secs
with 60000 nodes: 0.42836713790893555 secs
iterative / recursive: 2.807696546227477
70000
with 70000 nodes: 1.4035720825195312 secs
with 70000 nodes: 0.5212817192077637 secs
iterative / recursive: 2.6925403880509364
80000
with 80000 nodes: 2.561509847640991 secs
with 80000 nodes: 0.5745389461517334 secs
iterative / recursive: 4.458374605930556
90000
with 90000 nodes: 2.9348678588867188 secs
with 90000 nodes: 0.7854900360107422 secs
iterative / recursive: 3.736352753488756
```
For the random graph with 30000 nodes:
`nx.gn_graph(30000)`
```
Attempt 0
with 30000 nodes: 1.2159347534179688e-05 secs
with 30000 nodes: 6.9141387939453125e-06 secs
iterative / recursive: 1.7586206896551724
Attempt 1
with 30000 nodes: 3.0994415283203125e-06 secs
with 30000 nodes: 2.86102294921875e-06 secs
iterative / recursive: 1.0833333333333333
Attempt 2
with 30000 nodes: 2.86102294921875e-06 secs
with 30000 nodes: 3.0994415283203125e-06 secs
iterative / recursive: 0.9230769230769231
Attempt 3
with 30000 nodes: 3.0994415283203125e-06 secs
with 30000 nodes: 1.9073486328125e-06 secs
iterative / recursive: 1.625
```
Please check it also on your side (I did not find any performance tests in networkx)
@dschult does anybody want to review?
Thanks for the gentle nudge! :)
I looked more at this last week and almost got to a resolution.
I am leaning toward removing the part that copies the graph and removes the scc nodes. The storing of ```nbrs``` seems to be the change that removes quadratic dependence.
@ArseniyAntonov What do you think about that approach? | 2018-06-02T20:13:52 |
networkx/networkx | 2,995 | networkx__networkx-2995 | [
"2893"
] | 1f7e8d962303b36834774f966fb81578d5b1e18a | diff --git a/examples/drawing/plot_directed.py b/examples/drawing/plot_directed.py
--- a/examples/drawing/plot_directed.py
+++ b/examples/drawing/plot_directed.py
@@ -11,6 +11,7 @@
# Author: Rodrigo Dorantes-Gilardi ([email protected])
from __future__ import division
+import matplotlib as mpl
import matplotlib.pyplot as plt
import networkx as nx
@@ -30,6 +31,10 @@
for i in range(M):
edges[i].set_alpha(edge_alphas[i])
+pc = mpl.collections.PatchCollection(edges, cmap=plt.cm.Blues)
+pc.set_array(edge_colors)
+plt.colorbar(pc)
+
ax = plt.gca()
ax.set_axis_off()
plt.show()
| unable to add colorbar to a MultiDiGraph for networkx >= 2.1
In networkx 2.0, `nx.draw_networkx_edges()` returned a `LineCollection`, which you could then pass to matplotlib's `colorbar()` to get a colorbar showing the scale of the colors for the edges.
In networkx 2.1, it now returns a list of `FancyArrowPatch`s, and `colorbar()` complains that these are not mappable. Is there a new way to add a colorbar to a directed graph for networkx >= 2.1? I couldn't find anything in the docs.
| Too bad this feature was lost (or is no longer as easy). The `FancyArrowPatch`es give us many nice features.
It looks like the colorbar features of Matplotlib include a ColorbarPatch.
Can you try the answers in [this stackoverflow answer](https://stackoverflow.com/questions/18658047/adding-a-matplotlib-colorbar-from-a-patchcollection) to see if something similar can work here?
thanks, I'll give that a try
that worked great. See the issue I linked above for example output -- the new arrows are much nicer. For the record, here's what I had to do:
`draw_networkx_edges` returns a list of `FancyArrowPatch`es now
```
pc = mpl.collections.PatchCollection(edges_lc, cmap=plt.cm.viridis)
pc.set_array(weights)
plt.colorbar(pc)
```
where `weights` is the array of weights I used for `edge_color` in the call to `draw_networkx_edges()`
I'll explore adding a simple example of this to your gallery via a PR later today.
here's a simple example using one of the gallery examples from the networkx documentation:
https://networkx.github.io/documentation/stable/auto_examples/drawing/plot_directed.html#sphx-glr-auto-examples-drawing-plot-directed-py
```
from __future__ import division
import matplotlib as mpl
import matplotlib.pyplot as plt
import networkx as nx
G = nx.generators.directed.random_k_out_graph(10, 3, 0.5)
pos = nx.layout.spring_layout(G)
node_sizes = [3 + 10 * i for i in range(len(G))]
M = G.number_of_edges()
edge_colors = range(2, M + 2)
edge_alphas = [(5 + i) / (M + 4) for i in range(M)]
nodes = nx.draw_networkx_nodes(G, pos, node_size=node_sizes, node_color='blue')
edges = nx.draw_networkx_edges(G, pos, node_size=node_sizes, arrowstyle='->',
arrowsize=10, edge_color=edge_colors,
edge_cmap=plt.cm.Blues, width=2)
# set alpha value for each edge
for i in range(M):
edges[i].set_alpha(edge_alphas[i])
pc = mpl.collections.PatchCollection(edges, cmap=plt.cm.Blues)
pc.set_array(edge_colors)
plt.colorbar(pc)
ax = plt.gca()
ax.set_axis_off()
plt.savefig("digraph_colorbar.png")
```
gives:

| 2018-06-03T00:55:45 |
|
networkx/networkx | 2,996 | networkx__networkx-2996 | [
"2979"
] | 4cc2957dbfa2f59bbac7a6888551f359e6347d4c | diff --git a/networkx/algorithms/approximation/steinertree.py b/networkx/algorithms/approximation/steinertree.py
--- a/networkx/algorithms/approximation/steinertree.py
+++ b/networkx/algorithms/approximation/steinertree.py
@@ -25,11 +25,22 @@ def metric_closure(G, weight='weight'):
"""
M = nx.Graph()
- seen = set()
Gnodes = set(G)
- for u, (distance, path) in nx.all_pairs_dijkstra(G, weight=weight):
- seen.add(u)
- for v in Gnodes - seen:
+
+ # check for connected graph while processing first node
+ all_paths_iter = nx.all_pairs_dijkstra(G, weight=weight)
+ u, (distance, path) = next(all_paths_iter)
+ if Gnodes - set(distance):
+ msg = "G is not a connected graph. metric_closure is not defined."
+ raise nx.NetworkXError(msg)
+ Gnodes.remove(u)
+ for v in Gnodes:
+ M.add_edge(u, v, distance=distance[v], path=path[v])
+
+ # first node done -- now process the rest
+ for u, (distance, path) in all_paths_iter:
+ Gnodes.remove(u)
+ for v in Gnodes:
M.add_edge(u, v, distance=distance[v], path=path[v])
return M
| diff --git a/networkx/algorithms/approximation/tests/test_steinertree.py b/networkx/algorithms/approximation/tests/test_steinertree.py
--- a/networkx/algorithms/approximation/tests/test_steinertree.py
+++ b/networkx/algorithms/approximation/tests/test_steinertree.py
@@ -1,3 +1,4 @@
+from nose.tools import assert_raises
import networkx as nx
from networkx.algorithms.approximation.steinertree import metric_closure
from networkx.algorithms.approximation.steinertree import steiner_tree
@@ -17,6 +18,11 @@ def setUp(self):
self.G = G
self.term_nodes = [1, 2, 3, 4, 5]
+ def test_connected_metric_closure(self):
+ G = self.G.copy()
+ G.add_node(100)
+ assert_raises(nx.NetworkXError, metric_closure, G)
+
def test_metric_closure(self):
M = metric_closure(self.G)
mc = [(1, 2, {'distance': 10, 'path': [1, 2]}),
| metric_closure will throw KeyError with unconnected graph
Suggest checking connectedness with `nx.is_connected()` on entry to `metric_closure()` and throwing a more informative error if not.
| Is there a reference for the definition of metric_closure or Steiner tree on a disconnected graph?
My naive view is that the definition of a Steiner tree works when applied to disconnected graphs.
Most definitions of the metric closure I have found add the extra edges with infinite (or very large) distances to make it a complete graph. But couldn't we also just exclude all edges between components? It would work just as well for Steiner Tree. Are there other uses for metric closure?
I'm not a graph theorist, but the definition of a Steiner tree certainly seems to require connectedness to me if the terminals occur across multiple components, since, like minimum spanning trees, Steiner trees make use of edges in the original graph - the metric closure is required only in the approximation algorithm. I suppose an alternative would be to have the metric closure defined but infinite for edges between components. At the moment this case needs to be handled more gracefully.
I agree... As far as my fairly cursory search of the literature has found, people don't use these terms for a disconnected graph. It may be possible to create a useful definition in such cases, but let's just refactor so that the current error message becomes a more useful one. I think the cost of running nx.is_connected(G) is pretty steep for this feature. Most calls will use connected graphs. I suggest a check of the first dijkstra result to test connectivity. I'll put a PR together soon if this sounds good.
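A sketch of that first-Dijkstra check (the helper name is hypothetical; the mechanism matches the patch at the top of this record):
```python
import networkx as nx

def ensure_connected(G, weight='weight'):
    # if the first Dijkstra source cannot reach every node, the
    # (undirected) graph is disconnected -- no separate is_connected()
    # pass over the whole graph is needed
    u, (dist, path) = next(nx.all_pairs_dijkstra(G, weight=weight))
    if set(G) - set(dist):
        raise nx.NetworkXError("G is not a connected graph. "
                               "metric_closure is not defined.")
```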
Sounds good to me - do you plan to simply check the resultant cost matrix is fully populated? If `is_connected()` is less efficient than this proposed approach, maybe it should also be re-implemented ;) | 2018-06-03T04:13:03 |
networkx/networkx | 3,016 | networkx__networkx-3016 | [
"2911"
] | ecf2ca0a6110a56bc0ae486add9b9f4cb6bfd0f9 | diff --git a/networkx/classes/ordered.py b/networkx/classes/ordered.py
--- a/networkx/classes/ordered.py
+++ b/networkx/classes/ordered.py
@@ -1,5 +1,10 @@
"""
Consistently ordered variants of the default base classes.
+Note that if you are using Python 3.6, you shouldn't need these classes
+because the dicts in Python 3.6 are ordered.
+Note also that there are many differing expectations for the word "ordered"
+and that these classes may not provide the order you expect.
+The intent here is to give a consistent order not a particular order.
The Ordered (Di/Multi/MultiDi) Graphs give a consistent order for reporting of
nodes and edges. The order of node reporting agrees with node adding, but for
@@ -8,6 +13,17 @@
In general, you should use the default (i.e., unordered) graph classes.
However, there are times (e.g., when testing) when you may need the
order preserved.
+
+Special care is required when using subgraphs of the Ordered classes.
+The order of nodes in the subclass is not necessarily the same order
+as the original class. In general it is probably better to avoid using
+subgraphs and replace with code similar to:
+
+ # instead of SG = G.subgraph(ordered_nodes)
+ SG=nx.OrderedGraph()
+ SG.add_nodes_from(ordered_nodes)
+ SG.add_edges_from((u, v) for (u, v) in G.edges() if u in SG if v in SG)
+
"""
from collections import OrderedDict
| `OrderedGraph.subgraph` does not maintain the order of the nodes
A subgraph built from an `OrderedGraph` should keep the order of the nodes, yet the nodes in the subgraph are neither in the order of the initial graph nor in the order of the selection. The issue can be seen from the following snippet:
```python
import random
import networkx as nx

graph = nx.OrderedGraph()
nodes = list(range(10))
random.shuffle(nodes)
graph.add_nodes_from(nodes) # key order is (7, 2, 1, 9, 0, 8, 6, 4, 3, 5)
# We create a selection in the same order as the initial graph keys
to_keep = [key for key in graph if key % 2 == 0] # [2, 0, 8, 6, 4]
subgraph = graph.subgraph(to_keep) # (0, 2, 4, 6, 8)
# We create a selection in a different order
subgraph = graph.subgraph([5, 3, 1]) # (1, 3, 5)
```
From what I see, the issue is due to `Graph.subgraph` passing the selection to `nx.filters.show_nodes`, which transforms it to a set. The nodes in the subgraph are then in the order of the set; as sets do not preserve order, the order of the initial graph is not preserved.
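One workaround in the meantime (a sketch, not networkx API; node and edge attributes are not copied here) that keeps the original graph's node order:
```python
import networkx as nx

def ordered_subgraph(G, nodes):
    # keep the node order of the original OrderedGraph instead of the
    # arbitrary order of the set built by nx.filters.show_nodes
    keep = set(nodes)
    SG = nx.OrderedGraph()
    SG.add_nodes_from(n for n in G if n in keep)
    SG.add_edges_from((u, v) for u, v in G.edges()
                      if u in keep and v in keep)
    return SG
```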
| It is not clear whether a subgraph should preserve the order of the nodes of the original graph or the order of nodes provided to the command.
Perhaps it is best to simplify the API by using more basic commands to build what you want. e.g.
SG=nx.OrderedGraph()
SG.add_nodes_from(ordered_nodes)
SG.add_edges_from((u, v) for (u, v) in G.edges() if u in SG if v in SG)
Is the better API a long list of methods each creating their specified subgraph orderings?
Or is it better to ask users to build their own?
Perhaps we should remove the subgraph method entirely...
It may only be me, but I was very surprised when my subgraph was shuffled. It caused a bug that took a long time to identify because I was assuming that the subgraph was in the same order as the initial graph, since my initial graph was an `OrderedGraph`. Not having the `subgraph` method would also have been a surprise, since `Graph` has it and `OrderedGraph` is only a subclass of `Graph`.
In my case, the selection was in the same order as the nodes in the graph, so I indeed did not think about the two possible ordering options. Keeping the ordering of the selection would be consistent with how numpy does indexing; however, unlike numpy indexing, subgraphs do not keep duplicates. Because of that, I think that keeping the order of the selection can be confusing when it involves duplicates, and that keeping the order of the graph should be preferred.
The choice of one option or the other is, I think, a question of documentation.
Let's fix the documentation. See #2985 | 2018-06-16T16:17:59 |
|
networkx/networkx | 3,020 | networkx__networkx-3020 | [
"1568"
] | 7805070f0a35cef3a1c03f400c49913accf740c9 | diff --git a/networkx/drawing/nx_agraph.py b/networkx/drawing/nx_agraph.py
--- a/networkx/drawing/nx_agraph.py
+++ b/networkx/drawing/nx_agraph.py
@@ -157,7 +157,8 @@ def to_agraph(N):
# loop over edges
if N.is_multigraph():
for u, v, key, edgedata in N.edges(data=True, keys=True):
- str_edgedata = {k: str(v) for k, v in edgedata.items() if k != 'key'}
+ str_edgedata = {k: str(v) for k, v in edgedata.items()
+ if k != 'key'}
A.add_edge(u, v, key=str(key))
if edgedata is not None:
a = A.get_edge(u, v)
@@ -266,6 +267,18 @@ def pygraphviz_layout(G, prog='neato', root=None, args=''):
>>> pos = nx.nx_agraph.graphviz_layout(G)
>>> pos = nx.nx_agraph.graphviz_layout(G, prog='dot')
+ Notes
+ -----
+ If you use complex node objects, they may have the same string
+ representation and GraphViz could treat them as the same node.
+ The layout may assign both nodes a single location. See Issue #1568
+ If this occurs in your case, consider relabeling the nodes just
+ for the layout computation using something similar to:
+
+ H = nx.convert_node_labels_to_integers(G, label_attribute='node_label')
+ H_layout = nx.nx_agraph.pygraphviz_layout(H, prog='dot')
+ G_layout = {H.nodes[n]['node_label']: p for n, p in H_layout.items()}
+
"""
try:
import pygraphviz
diff --git a/networkx/drawing/nx_pydot.py b/networkx/drawing/nx_pydot.py
--- a/networkx/drawing/nx_pydot.py
+++ b/networkx/drawing/nx_pydot.py
@@ -229,7 +229,8 @@ def to_pydot(N):
if N.is_multigraph():
for u, v, key, edgedata in N.edges(data=True, keys=True):
- str_edgedata = dict((k, make_str(v)) for k, v in edgedata.items() if k != 'key')
+ str_edgedata = dict((k, make_str(v)) for k, v in edgedata.items()
+ if k != 'key')
edge = pydot.Edge(make_str(u), make_str(v),
key=make_str(key), **str_edgedata)
P.add_edge(edge)
@@ -261,8 +262,8 @@ def graphviz_layout(G, prog='neato', root=None, **kwds):
# FIXME: Document the "root" parameter.
-# FIXME: Why does this function accept a variadic dictionary of keyword arguments
-# (i.e., "**kwds") but fail to do anything with those arguments? This is probably
+# FIXME: Why does this function accept a variadic dict of keyword arguments
+# (i.e., "**kwds") but fail to do anything with them? This is probably
# wrong, as unrecognized keyword arguments will be silently ignored.
def pydot_layout(G, prog='neato', root=None, **kwds):
"""Create node positions using :mod:`pydot` and Graphviz.
@@ -273,7 +274,7 @@ def pydot_layout(G, prog='neato', root=None, **kwds):
NetworkX graph to be laid out.
prog : optional[str]
Basename of the GraphViz command with which to layout this graph.
- Defaults to `neato`, the default GraphViz command for undirected graphs.
+ Defaults to `neato`: default GraphViz command for undirected graphs.
Returns
--------
@@ -285,6 +286,19 @@ def pydot_layout(G, prog='neato', root=None, **kwds):
>>> G = nx.complete_graph(4)
>>> pos = nx.nx_pydot.pydot_layout(G)
>>> pos = nx.nx_pydot.pydot_layout(G, prog='dot')
+
+ Notes
+ -----
+ If you use complex node objects, they may have the same string
+ representation and GraphViz could treat them as the same node.
+ The layout may assign both nodes a single location. See Issue #1568
+ If this occurs in your case, consider relabeling the nodes just
+ for the layout computation using something similar to:
+
+ H = nx.convert_node_labels_to_integers(G, label_attribute='node_label')
+ H_layout = nx.nx_pydot.pydot_layout(H, prog='dot')
+ G_layout = {H.nodes[n]['node_label']: p for n, p in H_layout.items()}
+
"""
pydot = _import_pydot()
P = to_pydot(G)
@@ -295,7 +309,7 @@ def pydot_layout(G, prog='neato', root=None, **kwds):
# from the passed graph with the passed external GraphViz command.
D_bytes = P.create_dot(prog=prog)
- # Unique string decoded from these bytes with the preferred locale encoding.
+ # Unique string decoded from these bytes with the preferred locale encoding
D = unicode(D_bytes, encoding=getpreferredencoding())
if D == "": # no data returned
| Graphviz layout breaks if two nodes have the same string representation
If two nodes in a graph have the same string representation (i.e., `str(u) == str(v)`), computing a graphviz layout is broken. For example:
```
import networkx as nx
class Node(object):
def __str__(self):
return 'Foo'
u = Node()
v = Node()
w = Node()
g = nx.DiGraph()
g.add_edges_from([(u,v), (v,w)])
print(nx.graphviz_layout(g))
```
This outputs:
```
{
<__main__.Node object at 0x1045b8dd0>: (0.0, 0.0),
<__main__.Node object at 0x1045b8e50>: (0.0, 0.0),
<__main__.Node object at 0x1004e2150>: (0.0, 0.0)
}
```
Notice that each of the three nodes is (undesirably) mapped to the same location. If the `__str__` method is removed from the `Node` class definition and the example is run again, we get a more sensible result:
```
{
<__main__.Node object at 0x1004b4d90>: (-37.382, 60.934),
<__main__.Node object at 0x1046c0d50>: (6.6387, 3.6105),
<__main__.Node object at 0x1046c0dd0>: (30.743, -64.544)
}
```
The problem can be attributed to the fact that `networkx.graphviz_layout` uses `pydot` to compute the layout. In converting the input graph to a `pydot` graph, the nodes must be converted to strings. This occurs on the [following line](https://github.com/networkx/networkx/blob/master/networkx/drawing/nx_pydot.py#L212) of the `to_pydot` function.
It appears that the issue also occurs when using `networkx.drawing.nx_agraph.pygraphviz_layout` to compute the layout for a similar reason.
The straightforward fix, I think, is to relabel the nodes of the graph given to `networkx.drawing.nx_pydot.pydot_layout` so that they have unique integer indices, then proceed as normal. We must then invert the keys of the resulting layout dictionary before returning it to the user so that they are the original node objects.
Because this must also be done for `networkx.drawing.nx_agraph.pygraphviz_layout`, I think the best fix might be to implement a decorator which performs these transformations, and apply it to both functions. I'd be happy to submit a pull request in the next few days if this sounds like a reasonable approach.
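(For illustration, a hedged sketch of that decorator idea written against the v2.x API; the wrapper name and the `'orig'` attribute are made up:)
```python
import functools
import networkx as nx

def relabel_for_layout(layout_func):
    @functools.wraps(layout_func)
    def wrapper(G, *args, **kwargs):
        # give graphviz unique integer node names, then map positions back
        H = nx.convert_node_labels_to_integers(G, label_attribute='orig')
        pos = layout_func(H, *args, **kwargs)
        return {H.nodes[n]['orig']: p for n, p in pos.items()}
    return wrapper
```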
| Same issue here.
I am using my custom python class objects instead of strings for nodes.
I believe I have correctly implemented `__hash__`, `__eq__`, `__repr__` and `__str__` methods.
Two objects share the same `__str__` representation (the name that I want to put on the node) but the hash contents are different (e.g., in a tree, the parent node is different).
The default set implementation recognizes that the objects are different (based on `__hash__`, `__eq__`) despite having same `__str__`.
So, I guess the node objects are explicitly converted to strings before adding to list/set/dict somewhere.
Trying to fix. Any pointers would be helpful.
Caught the bug.
https://github.com/networkx/networkx/blob/c01040c4f1f50bd036f76a0b26b30d0ad2130565/networkx/drawing/nx_pydot.py#L319
The `make_str()` breaks the functionality for complex objects.
Suggestion: how about calling `hash(node)` for ID and `str(node)` for name??
Update:
Now I understand that the pydot.Node doesn't support such functionality.
It just takes `name` and that's all it does.
Suggestion: extend `pydot.Node` to properly handle name and hashcode
If somebody is wondering about the issue:
**With pydot**
<img width="952" alt="screen shot 2017-11-04 at 3 42 53 pm" src="https://user-images.githubusercontent.com/1865964/32410145-f16fe25e-c176-11e7-91ae-0964a8e60048.png">
**with kamada_kawai_layout**
<img width="944" alt="screen shot 2017-11-04 at 3 43 21 pm" src="https://user-images.githubusercontent.com/1865964/32410152-1bc7c364-c177-11e7-944c-9df59f987619.png">
The trouble here is representing the nodes with objects that have the same ```str``` representation.
Perhaps the easy fix is to convert nodes to integers, get the layout from that and then convert those positions to the old node names:
H = nx.convert_node_labels_to_integers(G, label_attribute='node_label')
H_layout = nx.nx_pydot.pydot_layout(H, prog='dot')
G_layout = {H.nodes[n]['node_label']: p for n, p in H_layout.items()}
I'll put this into the doc_string as an example usage. | 2018-06-18T06:33:26 |
|
networkx/networkx | 3,029 | networkx__networkx-3029 | [
"2906"
] | bf1c7cc9b144767523e5abcf84f949d4223848a0 | diff --git a/networkx/algorithms/flow/networksimplex.py b/networkx/algorithms/flow/networksimplex.py
--- a/networkx/algorithms/flow/networksimplex.py
+++ b/networkx/algorithms/flow/networksimplex.py
@@ -385,7 +385,8 @@ def find_cycle(i, p, q):
Wn, We = trace_path(p, w)
Wn.reverse()
We.reverse()
- We.append(i)
+ if We != [i]:
+ We.append(i)
WnR, WeR = trace_path(q, w)
del WnR[-1]
Wn += WnR
| diff --git a/networkx/algorithms/flow/tests/test_mincost.py b/networkx/algorithms/flow/tests/test_mincost.py
--- a/networkx/algorithms/flow/tests/test_mincost.py
+++ b/networkx/algorithms/flow/tests/test_mincost.py
@@ -321,6 +321,18 @@ def test_digon(self):
assert_equal(H, soln)
assert_equal(nx.cost_of_flow(G, H), 2857140)
+ def test_deadend(self):
+ """Check if one-node cycles are handled properly. Taken from ticket
+ #2906 from @sshraven."""
+ G = nx.DiGraph()
+
+ G.add_nodes_from(range(5), demand=0)
+ G.node[4]['demand'] = -13
+ G.node[3]['demand'] = 13
+
+ G.add_edges_from([(0,2), (0, 3), (2, 1)], capacity=20, weight=0.1)
+ assert_raises(nx.NetworkXUnfeasible, nx.min_cost_flow, G)
+
def test_infinite_capacity_neg_digon(self):
"""An infinite capacity negative cost digon results in an unbounded
instance."""
| min_cost_flow does not throw an exception when the graph is disconnected
According to the docs ([here](https://networkx.github.io/documentation/stable/reference/algorithms/generated/networkx.algorithms.flow.min_cost_flow.html)), `min_cost_flow` is supposed to throw an exception if the source vertex is not connected to the destination vertex.
Instead, the following code gets stuck in an infinite loop:
```
import networkx as nx

G = nx.DiGraph()
G.add_nodes_from(range(5), demand=0)
G.node[4]['demand'] = -13
G.node[3]['demand'] = 13
edges = [
(0, 2, {'capacity': 20, 'weight': 0.1}),
(0, 3, {'capacity': 20, 'weight': 0.1}),
(2, 1, {'capacity': 20, 'weight': 0.1})]
G.add_edges_from(edges)
flowDict = nx.min_cost_flow(G)
```
Is this a bug?
I am using version 2.1
| Thanks for this bug report! I can verify that an infinite loop occurs with this example.
In the method that looks for cycles when 1 edge is found, the edge is duplicated (line #388) when it shouldn't be. I'll put a PR in shortly.
| 2018-06-25T17:32:02 |
networkx/networkx | 3,031 | networkx__networkx-3031 | [
"2931"
] | bf1c7cc9b144767523e5abcf84f949d4223848a0 | diff --git a/networkx/algorithms/approximation/dominating_set.py b/networkx/algorithms/approximation/dominating_set.py
--- a/networkx/algorithms/approximation/dominating_set.py
+++ b/networkx/algorithms/approximation/dominating_set.py
@@ -38,7 +38,7 @@ def min_weighted_dominating_set(G, weight=None):
Undirected graph.
weight : string
- The node attribute storing the weight of an edge. If provided,
+ The node attribute storing the weight of an node. If provided,
the node attribute with this key must be a number for each
node. If not provided, each node is assumed to have weight one.
diff --git a/networkx/generators/degree_seq.py b/networkx/generators/degree_seq.py
--- a/networkx/generators/degree_seq.py
+++ b/networkx/generators/degree_seq.py
@@ -116,7 +116,7 @@ def _configuration_model(deg_sequence, create_using, directed=False,
return G
# Build a list of available degree-repeated nodes. For example,
# for degree sequence [3, 2, 1, 1, 1], the "stub list" is
- # initially [1, 1, 1, 2, 2, 3, 4, 5], that is, node 1 has degree
+ # initially [0, 0, 0, 1, 1, 2, 3, 4], that is, node 0 has degree
# 3 and thus is repeated 3 times, etc.
#
# Also, shuffle the stub list in order to get a random sequence of
@@ -837,7 +837,6 @@ def phase1(self):
def phase2(self):
# choose remaining nodes uniformly at random and use rejection sampling
while len(self.remaining_degree) >= 2 * self.dmax:
- norm = float(max(self.remaining_degree.values()))**2
while True:
u, v = sorted(random.sample(self.remaining_degree.keys(), 2))
if self.graph.has_edge(u, v):
| Computed but unused line in DegreeSequenceRandomGraph
norm in `phase2()` of class `DegreeSequenceRandomGraph` is computed but unused.
```
norm = float(max(self.remaining_degree.values()))**2
```
in
```
def phase2(self):
# choose remaining nodes uniformly at random and use rejection sampling
while len(self.remaining_degree) >= 2 * self.dmax:
norm = float(max(self.remaining_degree.values()))**2
while True:
u, v = sorted(random.sample(self.remaining_degree.keys(), 2))
if self.graph.has_edge(u, v):
continue
if random.random() < self.q(u, v):
break
if random.random() < self.p(u, v): # accept edge
self.graph.add_edge(u, v)
```
[code](https://networkx.github.io/documentation/stable/_modules/networkx/generators/degree_seq.html#random_degree_sequence_graph)
| 2018-06-25T19:38:01 |
||
networkx/networkx | 3,034 | networkx__networkx-3034 | [
"3030"
] | 3b26c34665c2e2aa3616be54c15c3e20135150c1 | diff --git a/networkx/generators/random_graphs.py b/networkx/generators/random_graphs.py
--- a/networkx/generators/random_graphs.py
+++ b/networkx/generators/random_graphs.py
@@ -470,6 +470,11 @@ def connected_watts_strogatz_graph(n, k, p, tries=100, seed=None):
newman_watts_strogatz_graph()
watts_strogatz_graph()
+ References
+ ----------
+ .. [1] Duncan J. Watts and Steven H. Strogatz,
+ Collective dynamics of small-world networks,
+ Nature, 393, pp. 440--442, 1998.
"""
for i in range(tries):
G = watts_strogatz_graph(n, k, p, seed)
| Travis nbconvert giving error
We are building the docs on Travis and converting the examples to Jupyter notebooks for the gallery (this is using Python 2.7). It started showing [a new error this weekend](https://travis-ci.org/networkx/networkx/jobs/396510217). It looks like libpng is not behaving nicely. I'm not sure where to look for answers to this one.
| @jarrodmillman do you have time to look at the travis doc-build errors that started about 3 days ago? Even some quick tips on how to go about debugging it would be really helpful. Thanks! :)
I'll take a look later today (at this and a couple other outstanding items). | 2018-06-28T01:13:43 |
|
networkx/networkx | 3,041 | networkx__networkx-3041 | [
"1556"
] | d5e1ab0db10756d38024951337a7e06caf158308 | diff --git a/networkx/readwrite/graphml.py b/networkx/readwrite/graphml.py
--- a/networkx/readwrite/graphml.py
+++ b/networkx/readwrite/graphml.py
@@ -334,6 +334,23 @@ class GraphML(object):
(float, "float"), (float, "double"),
(bool, "boolean")]
+ # These additions to types allow writing numpy types
+ try:
+ import numpy as np
+ except:
+ pass
+ else:
+ # prepend so that python types are created upon read (last entry wins)
+ types = [(np.float64, "float"), (np.float32, "float"),
+ (np.float16, "float"), (np.float_, "float"),
+ (np.int, "int"), (np.int8, "int"),
+ (np.int16, "int"), (np.int32, "int"),
+ (np.int64, "int"), (np.uint8, "int"),
+ (np.uint16, "int"), (np.uint32, "int"),
+ (np.uint64, "int"), (np.int_, "int"),
+ (np.intc, "int"), (np.intp, "int"),
+ ] + types
+
xml_type = dict(types)
python_type = dict(reversed(a) for a in types)
| diff --git a/networkx/readwrite/tests/test_graphml.py b/networkx/readwrite/tests/test_graphml.py
--- a/networkx/readwrite/tests/test_graphml.py
+++ b/networkx/readwrite/tests/test_graphml.py
@@ -932,6 +932,58 @@ def test_multigraph_to_graph(self):
os.close(fd)
os.unlink(fname)
+ def test_numpy_float(self):
+ try:
+ import numpy as np
+ except:
+ return
+ wt = np.float(3.4)
+ G = nx.Graph([(1, 2, {'weight': wt})])
+ fd, fname = tempfile.mkstemp()
+ self.writer(G, fname)
+ H = nx.read_graphml(fname, node_type=int)
+ assert_equal(G._adj, H._adj)
+ os.close(fd)
+ os.unlink(fname)
+
+ def test_numpy_float64(self):
+ try:
+ import numpy as np
+ except:
+ return
+ wt = np.float64(3.4)
+ G = nx.Graph([(1, 2, {'weight': wt})])
+ fd, fname = tempfile.mkstemp()
+ self.writer(G, fname)
+ H = nx.read_graphml(fname, node_type=int)
+ assert_equal(G.edges, H.edges)
+ wtG = G[1][2]['weight']
+ wtH = H[1][2]['weight']
+ assert_almost_equal(wtG, wtH, places=6)
+ assert_equal(type(wtG), np.float64)
+ assert_equal(type(wtH), float)
+ os.close(fd)
+ os.unlink(fname)
+
+ def test_numpy_float32(self):
+ try:
+ import numpy as np
+ except:
+ return
+ wt = np.float32(3.4)
+ G = nx.Graph([(1, 2, {'weight': wt})])
+ fd, fname = tempfile.mkstemp()
+ self.writer(G, fname)
+ H = nx.read_graphml(fname, node_type=int)
+ assert_equal(G.edges, H.edges)
+ wtG = G[1][2]['weight']
+ wtH = H[1][2]['weight']
+ assert_almost_equal(wtG, wtH, places=6)
+ assert_equal(type(wtG), np.float32)
+ assert_equal(type(wtH), float)
+ os.close(fd)
+ os.unlink(fname)
+
def test_unicode_attributes(self):
G = nx.Graph()
try: # Python 3.x
| Expanded datatypes for graphml writer
The GraphML writer doesn't handle NumPy floats since it only knows to accept Python floats. I'm sure we can make this a bit easier for users.
| The writer should work with `numpy.float` variables, since `numpy.float` is just an alias to python's `float` type:
``` python
>>> import numpy as np
>>> x = np.float(32.45)
>>> type(x)
<type 'float'>
```
Are you talking about `float32` and `float64` in numpy? Worth noting that they are single and double precision datatypes respectively.
I don't think the precision matters, since we don't seem to care about the precision when writing regular Python floats to GraphML either.
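(For illustration, a hedged user-side workaround sketch until the writer understands numpy scalars: cast them to plain Python types before serializing. The example graph here is made up.)
```python
import numpy as np
import networkx as nx

G = nx.Graph([(1, 2, {'weight': np.float64(3.4)})])
for u, v, d in G.edges(data=True):
    for key, val in d.items():
        if isinstance(val, np.generic):   # any numpy scalar type
            d[key] = val.item()           # -> plain float/int/bool
nx.write_graphml(G, "graph.graphml")
```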
@chebee7i do you have some ideas for how to approach this?
Otherwise I'm going to bump it to networkx-2.1
This is still an issue in networkx 2.1.
Yes -- the github issue shows that the milestone was modified from networkx-2.1 to networkx-2.2.
Would you be willing to try to get that working? We need a pull request. | 2018-06-28T21:57:30 |
networkx/networkx | 3,066 | networkx__networkx-3066 | [
"3065"
] | 7d3a39f87f47460bd3f92ac4d81278c73d69418b | diff --git a/networkx/drawing/layout.py b/networkx/drawing/layout.py
--- a/networkx/drawing/layout.py
+++ b/networkx/drawing/layout.py
@@ -130,12 +130,18 @@ def circular_layout(G, scale=1, center=None, dim=2):
Dimension of layout.
If dim>2, the remaining dimensions are set to zero
in the returned positions.
+ If dim<2, a ValueError is raised.
Returns
-------
pos : dict
A dictionary of positions keyed by node
+ Raises
+ -------
+ ValueError
+ If dim < 2
+
Examples
--------
>>> G = nx.path_graph(4)
@@ -149,6 +155,9 @@ def circular_layout(G, scale=1, center=None, dim=2):
"""
import numpy as np
+ if dim < 2:
+ raise ValueError('cannot handle dimensions < 2')
+
G, center = _process_params(G, center, dim)
paddims = max(0, (dim - 2))
@@ -188,12 +197,18 @@ def shell_layout(G, nlist=None, scale=1, center=None, dim=2):
dim : int
Dimension of layout, currently only dim=2 is supported.
+ Other dimension values result in a ValueError.
Returns
-------
pos : dict
A dictionary of positions keyed by node
+ Raises
+ -------
+ ValueError
+ If dim != 2
+
Examples
--------
>>> G = nx.path_graph(4)
@@ -208,6 +223,9 @@ def shell_layout(G, nlist=None, scale=1, center=None, dim=2):
"""
import numpy as np
+ if dim != 2:
+ raise ValueError('can only handle 2 dimensions')
+
G, center = _process_params(G, center, dim)
if len(G) == 0:
@@ -621,7 +639,7 @@ def kamada_kawai_layout(G, dist=None,
pos : dict or None optional (default=None)
Initial positions for nodes as a dictionary with node as keys
and values as a coordinate list or tuple. If None, then use
- circular_layout().
+ circular_layout() for dim >= 2 and a linear layout for dim == 1.
weight : string or None optional (default='weight')
The edge attribute that holds the numerical value used for
@@ -664,7 +682,10 @@ def kamada_kawai_layout(G, dist=None,
dist_mtx[row][col] = rdist[nc]
if pos is None:
- pos = circular_layout(G, dim=dim)
+ if dim >= 2:
+ pos = circular_layout(G, dim=dim)
+ else:
+ pos = {n: pt for n, pt in zip(G, np.linspace(0, 1, len(G)))}
pos_arr = np.array([pos[n] for n in G])
pos = _kamada_kawai_solve(dist_mtx, pos_arr, dim)
| diff --git a/networkx/drawing/tests/test_layout.py b/networkx/drawing/tests/test_layout.py
--- a/networkx/drawing/tests/test_layout.py
+++ b/networkx/drawing/tests/test_layout.py
@@ -65,6 +65,7 @@ def test_smoke_int(self):
vpos = nx.shell_layout(G)
if self.scipy is not None:
vpos = nx.kamada_kawai_layout(G)
+ vpos = nx.kamada_kawai_layout(G, dim=1)
def test_smoke_string(self):
G = self.Gs
@@ -76,6 +77,7 @@ def test_smoke_string(self):
vpos = nx.shell_layout(G)
if self.scipy is not None:
vpos = nx.kamada_kawai_layout(G)
+ vpos = nx.kamada_kawai_layout(G, dim=1)
def check_scale_and_center(self, pos, scale, center):
center = numpy.array(center)
@@ -114,6 +116,12 @@ def test_default_scale_and_center(self):
if self.scipy is not None:
sc(nx.kamada_kawai_layout(G), scale=1, center=c)
+ def test_circular_and_shell_dim_error(self):
+ G = nx.path_graph(4)
+ assert_raises(ValueError, nx.circular_layout, G, dim=1)
+ assert_raises(ValueError, nx.shell_layout, G, dim=1)
+ assert_raises(ValueError, nx.shell_layout, G, dim=3)
+
def test_adjacency_interface_numpy(self):
A = nx.to_numpy_matrix(self.Gs)
pos = nx.drawing.layout._fruchterman_reingold(A)
| kamada_kawai_layout fails with dim=1
`nx.kamada_kawai_layout(G, dim=1)` fails when called without initial positions supplied through the `pos` argument.
This is cased by the default initialization which calls [circular_layout](https://github.com/networkx/networkx/blob/27dd93dda901171631fba92dc02a521cfc9e057e/networkx/drawing/layout.py#L667).
But `circular_layout()` ignores parameter `dim=1` and overwrites it with 2 without any notification.
`circular_layout` should throw an exception if it cannot handle `dim=1`.
```
import networkx as nx
def create_bipartite_graph(n, m):
assert n <= m
dist_matrix = np.ones((n+m, n+m), dtype=float) * 255
matches = n + np.random.permutation(m)[:n]
dist_matrix[np.arange(n), matches] = 75
dist_matrix[matches, np.arange(n)] = 75
G = nx.Graph()
edges = (dist_matrix <= 75).nonzero()
dists = dist_matrix[edges]
edges = (edges[0], edges[1],
[{'weight': x} for x in dists])
edges = zip(*edges)
G.add_edges_from(edges)
return G
G = create_bipartite_graph(10, 10)
pos = nx.kamada_kawai_layout(G, dim=1) # THIS FAILS
pos = nx.kamada_kawai_layout(G, dim=1, pos=np.random.rand(20)) # THIS WORKS
```
| Thanks for this. Seems like a good suggestion for fix: circular_layout should raise an exception for ```dim``` values it can't handle. | 2018-07-14T21:40:47 |
networkx/networkx | 3,068 | networkx__networkx-3068 | [
"3013"
] | 7d3a39f87f47460bd3f92ac4d81278c73d69418b | diff --git a/networkx/convert_matrix.py b/networkx/convert_matrix.py
--- a/networkx/convert_matrix.py
+++ b/networkx/convert_matrix.py
@@ -182,8 +182,9 @@ def from_pandas_adjacency(df, create_using=None):
try:
df = df[df.index]
except:
- raise nx.NetworkXError("Columns must match Indices.", "%s not in columns" %
- list(set(df.index).difference(set(df.columns))))
+ msg = "%s not in columns"
+ missing = list(set(df.index).difference(set(df.columns)))
+ raise nx.NetworkXError("Columns must match Indices.", msg % missing)
nx.relabel.relabel_nodes(G, dict(enumerate(df.columns)), copy=False)
return G
@@ -233,7 +234,8 @@ def to_pandas_edgelist(G, source='source', target='target', nodelist=None,
source_nodes = [s for s, t, d in edgelist]
target_nodes = [t for s, t, d in edgelist]
all_keys = set().union(*(d.keys() for s, t, d in edgelist))
- edge_attr = {k: [d.get(k, float("nan")) for s, t, d in edgelist] for k in all_keys}
+ edge_attr = {k: [d.get(k, float("nan")) for s, t, d in edgelist]
+ for k in all_keys}
edgelistdict = {source: source_nodes, target: target_nodes}
edgelistdict.update(edge_attr)
return pd.DataFrame(edgelistdict)
@@ -268,8 +270,8 @@ def from_pandas_edgelist(df, source='source', target='target', edge_attr=None,
edge_attr : str or int, iterable, True
A valid column name (str or integer) or list of column names that will
- be used to retrieve items from the row and add them to the graph as edge
- attributes. If `True`, all of the remaining columns will be added.
+ be used to retrieve items from the row and add them to the graph as
+ edge attributes. If `True`, all of the remaining columns will be added.
create_using : NetworkX graph constructor, optional (default=nx.Graph)
Graph type to create. If graph instance, then cleared before populated.
@@ -311,43 +313,34 @@ def from_pandas_edgelist(df, source='source', target='target', edge_attr=None,
'red'
"""
-
g = nx.empty_graph(0, create_using)
- # Index of source and target
- src_i = df.columns.get_loc(source)
- tar_i = df.columns.get_loc(target)
- if edge_attr:
- # If all additional columns requested, build up a list of tuples
- # [(name, index),...]
- if edge_attr is True:
- # Create a list of all columns indices, ignore nodes
- edge_i = []
- for i, col in enumerate(df.columns):
- if col is not source and col is not target:
- edge_i.append((col, i))
- # If a list or tuple of name is requested
- elif isinstance(edge_attr, (list, tuple)):
- edge_i = [(i, df.columns.get_loc(i)) for i in edge_attr]
- # If a string or int is passed
- else:
- edge_i = [(edge_attr, df.columns.get_loc(edge_attr)), ]
-
- # Iteration on values returns the rows as Numpy arrays
- for row in df.values:
- s, t = row[src_i], row[tar_i]
- if g.is_multigraph():
- g.add_edge(s, t)
- key = max(g[s][t]) # default keys just count, so max is most recent
- g[s][t][key].update((i, row[j]) for i, j in edge_i)
- else:
- g.add_edge(s, t)
- g[s][t].update((i, row[j]) for i, j in edge_i)
-
- # If no column names are given, then just return the edges.
+ if edge_attr is None:
+ g.add_edges_from(zip(df[source], df[target]))
+ return g
+
+ # Additional columns requested
+ if edge_attr is True:
+ cols = [c for c in df.columns if c is not source and c is not target]
+ elif isinstance(edge_attr, (list, tuple)):
+ cols = edge_attr
else:
- for row in df.values:
- g.add_edge(row[src_i], row[tar_i])
+ cols = [edge_attr]
+
+ try:
+ eattrs = zip(*[df[col] for col in cols])
+ except (KeyError, TypeError) as e:
+ msg = "Invalid edge_attr argument: %s" % edge_attr
+ raise nx.NetworkXError(msg)
+ for s, t, attrs in zip(df[source], df[target], eattrs):
+
+ g.add_edge(s, t)
+
+ if g.is_multigraph():
+ key = max(g[s][t]) # default keys just count so max is most recent
+ g[s][t][key].update((attr, val) for attr, val in zip(cols, attrs))
+ else:
+ g[s][t].update((attr, val) for attr, val in zip(cols, attrs))
return g
@@ -467,10 +460,10 @@ def from_numpy_matrix(A, parallel_edges=False, create_using=None):
An adjacency matrix representation of a graph
parallel_edges : Boolean
- If this is True, `create_using` is a multigraph, and `A` is an
+ If True, `create_using` is a multigraph, and `A` is an
integer matrix, then entry *(i, j)* in the matrix is interpreted as the
- number of parallel edges joining vertices *i* and *j* in the graph. If it
- is False, then the entries in the adjacency matrix are interpreted as
+ number of parallel edges joining vertices *i* and *j* in the graph.
+ If False, then the entries in the adjacency matrix are interpreted as
the weight of a single edge joining the vertices.
create_using : NetworkX graph constructor, optional (default=nx.Graph)
| diff --git a/networkx/tests/test_convert_pandas.py b/networkx/tests/test_convert_pandas.py
--- a/networkx/tests/test_convert_pandas.py
+++ b/networkx/tests/test_convert_pandas.py
@@ -58,6 +58,13 @@ def test_from_edgelist_multi_attr(self):
G = nx.from_pandas_edgelist(self.df, 0, 'b', ['weight', 'cost'])
assert_graphs_equal(G, Gtrue)
+ def test_from_edgelist_multi_attr_incl_target(self):
+ Gtrue = nx.Graph([('E', 'C', {0: 'C', 'b': 'E', 'weight': 10}),
+ ('B', 'A', {0: 'B', 'b': 'A', 'weight': 7}),
+ ('A', 'D', {0: 'A', 'b': 'D', 'weight': 4})])
+ G = nx.from_pandas_edgelist(self.df, 0, 'b', [0, 'b', 'weight'])
+ assert_graphs_equal(G, Gtrue)
+
def test_from_edgelist_multidigraph_and_edge_attr(self):
# example from issue #2374
Gtrue = nx.MultiDiGraph([('X1', 'X4', {'Co': 'zA', 'Mi': 0, 'St': 'X1'}),
@@ -94,6 +101,23 @@ def test_from_edgelist_one_attr(self):
G = nx.from_pandas_edgelist(self.df, 0, 'b', 'weight')
assert_graphs_equal(G, Gtrue)
+ def test_from_edgelist_int_attr_name(self):
+ # note: this also tests that edge_attr can be `source`
+ Gtrue = nx.Graph([('E', 'C', {0: 'C'}),
+ ('B', 'A', {0: 'B'}),
+ ('A', 'D', {0: 'A'})])
+ G = nx.from_pandas_edgelist(self.df, 0, 'b', 0)
+ assert_graphs_equal(G, Gtrue)
+
+ def test_from_edgelist_invalid_attr(self):
+ assert_raises(nx.NetworkXError, nx.from_pandas_edgelist,
+ self.df, 0, 'b', 'misspell')
+ assert_raises(nx.NetworkXError, nx.from_pandas_edgelist,
+ self.df, 0, 'b', 1)
+ # unhashable attribute name
+ assert_raises(nx.NetworkXError, nx.from_pandas_edgelist,
+ self.df, 0, 'b', {})
+
def test_from_edgelist_no_attr(self):
Gtrue = nx.Graph([('E', 'C', {}),
('B', 'A', {}),
| nx.from_pandas_edgelist() iterates over DataFrame.values which changes dtypes
I arrived here through:
```
>>> df = pd.DataFrame([[1, 2, '0.1']], columns=['source', 'target', 'other'])
>>> nx.from_pandas_edgelist(df).nodes()
NodeView((1, 2))
>>> df = pd.DataFrame([[1, 2, 0.1]], columns=['source', 'target', 'other'])
>>> nx.from_pandas_edgelist(df).nodes()
NodeView((1.0, 2.0))
```
I initially thought this would be an issue with [`from_pandas_edgelist`](https://github.com/networkx/networkx/blob/9ac4941c1cb25bf47c85d2bb19f7557f5c807694/networkx/convert_matrix.py#L241) but that code just hands off directly to add_edge:
https://github.com/networkx/networkx/blob/9ac4941c1cb25bf47c85d2bb19f7557f5c807694/networkx/convert_matrix.py#L346-L350
which means that [Graph().add_edge](https://github.com/networkx/networkx/blob/e97abea688a471a8422a676ea7c93d079bfa0f1f/networkx/classes/graph.py#L812-L874) must be responsible.
This is in networkx version 2.1
| Update: This is happening before `Graph.add_edge()`
I changed the lines in convert_matrix to:
```
else:
for row in df.values:
print(row)
print(row[src_i])
print(row[tar_i])
print(type(row[src_i]))
print(type(row[tar_i]))
g.add_edge(row[src_i], row[tar_i])
```
and I'm seeing the types change before `g.add_edge`. That means this is an issue with `from_pandas_edgelist` or maybe... (gasp) an issue with pandas?
This "feature" is documented in both networkx and pandas. It involves ```DataFrame.values``` using a flexible type for numeric values. From [the docstring for](https://networkx.github.io/documentation/stable/reference/generated/networkx.convert_matrix.from_pandas_edgelist.html) ```nx.from_pandas_edgelist```:
Note: This function iterates over DataFrame.values, which is not
guaranteed to retain the data type across columns in the row. This is only
a problem if your row is entirely numeric and a mix of ints and floats. In
that case, all values will be returned as floats. See the
DataFrame.iterrows documentation for an example.
Thanks @dschult! I should have read the docs better. Could you point me towards the pandas documentation of this feature?
And could you explain why networkx iterates over `.values`? the docstring only lists the disadvantage. It seems like my issue could be fixed by iterating over the dataframe another way than `.values`
Ahh, I see the pandas docs: It's documented right with [df.values()](http://pandas.pydata.org/pandas-docs/version/0.22/generated/pandas.DataFrame.values.html)
I'd still love to hear what motivated this choice for the purposes of `from_pandas_edgelist`
@dschult why does networkx iterate over `.values`?
That's the way to iterate over the matrix of values in the dataframe. Other suggestions?
@dschult how about [`itertuples`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.itertuples.html)?
```
n = 100000 # 100k edges
df = pd.concat([pd.DataFrame(np.random.randint(0,1000,(n,2)), columns=['source', 'target']), pd.DataFrame(np.random.random(n))], axis=1)
```
```
%%time
nx.from_pandas_edgelist(df)
CPU times: user 243 ms, sys: 15.2 ms, total: 258 ms
Wall time: 256 ms
```
```
%%time
g = nx.Graph()
for row in df.itertuples():
g.add_edge(row[0], row[1])
CPU times: user 696 ms, sys: 49.2 ms, total: 746 ms
Wall time: 744 ms
```
Slower, but solves this dtype issue.
1M edges:
```
CPU times: user 2.76 s, sys: 83.7 ms, total: 2.85 s
Wall time: 2.85 s
```
vs
```
CPU times: user 6.9 s, sys: 330 ms, total: 7.23 s
Wall time: 7.23 s
```
And 10M edges:
```
CPU times: user 28.1 s, sys: 211 ms, total: 28.3 s
Wall time: 28.3 s
```
vs
```
CPU times: user 1min 13s, sys: 7.86 s, total: 1min 21s
Wall time: 1min 22s
```
Looks like a consistent factor of 3.
@dschult here's a slightly messy, but faster way to do it:
```
n = 1000000 # 1M edges
df = pd.DataFrame(np.random.randint(0,1000,(n,2)),columns=['source','target'])
```
```
%%time
nx.from_pandas_edgelist(df)
```
```
CPU times: user 2.58 s, sys: 59.5 ms, total: 2.64 s
Wall time: 2.64 s
```
```
%%time
g = nx.Graph()
[g.add_edge(s,t) for s,t in zip(df['source'], df['target'])]
```
```
CPU times: user 2.03 s, sys: 82.8 ms, total: 2.12 s
Wall time: 2.12 s
```
You shifted from testing that it worked to testing its speed. But assuming all the above works, could you put together a PR with the improved version? Thanks!
| 2018-07-16T14:00:50 |
networkx/networkx | 3,069 | networkx__networkx-3069 | [
"3060"
] | bbf624a2e8a621bb5be3bc0937b8d286fe389e0c | diff --git a/networkx/drawing/nx_agraph.py b/networkx/drawing/nx_agraph.py
--- a/networkx/drawing/nx_agraph.py
+++ b/networkx/drawing/nx_agraph.py
@@ -301,7 +301,7 @@ def pygraphviz_layout(G, prog='neato', root=None, args=''):
return node_pos
[email protected]_file(5, 'w')
[email protected]_file(5, 'w+b')
def view_pygraphviz(G, edgelabel=None, prog='dot', args='',
suffix='', path=None):
"""Views the graph G using the specified layout algorithm.
| `view_pygraphviz` fails for a `path` that is `str`
In the following, `view_pygraphviz` works fine:
import networkx
G = networkx.generators.trees.random_tree(100)
networkx.drawing.nx_agraph.view_pygraphviz(G, prog="dot", args="-Nshape=box");
But when you try to specify the `path` which is supposed to be a `str`...
import networkx
G = networkx.generators.trees.random_tree(100)
networkx.drawing.nx_agraph.view_pygraphviz(G, path="mytest.png", prog="dot", args="-Nshape=box");
...it raises `TypeError: write() argument must be str, not bytes` from within `pygraphviz/agraph::draw()`.
Now, if you:
import networkx
G = networkx.generators.trees.random_tree(100)
A = networkx.drawing.nx_agraph.to_agraph(G)
A.draw("mytest.png", prog="dot")
This will work fine, minus of course the specific formatting (graph, edge, node appearance) that `networkx` adds in `view_pygraphviz`.
Therefore, the "problem" must be in `view_pygraphviz`.
After a bit of experimentation, it turns out that this works:
import networkx
G = networkx.generators.trees.random_tree(100)
networkx.drawing.nx_agraph.view_pygraphviz(G, path=open("mytest.png", "w+b"), prog="dot", args="-Nshape=box");
The hint to pass a file descriptor rather than a `str` was obtained from [the documentation](https://github.com/networkx/networkx/blob/master/networkx/drawing/nx_agraph.py#L425), specifically *"# Assume the decorator worked and it is a file-object."*.
Is this a bug? If not, which decorator does this function hint at?
**EDIT:**
NetworkX version 2.1, Python version 3.5.2
| The decorator is ```open_file``` (it appears in the line before the function signature ```def ...``` with an ```@``` in front). The decorator is defined in ```networkx/utils/decorators.py``` and opens the file if needed, or passes the file object through otherwise.
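For reference, a hedged sketch of what the decorated definition looks like (argument index 5 is the `path` parameter), matching the patch at the top of this record:

    @nx.utils.open_file(5, 'w+b')
    def view_pygraphviz(G, edgelabel=None, prog='dot', args='',
                        suffix='', path=None):
        ...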
You have clearly found a work-around for the error. But it'd be nice to fix this...
In looking at both this code and the code for pygraphviz, it looks like we should have the decorator specify mode ```'w+b'```. But I'm not close to machines that can easily test whether this works for the different cases and versions of OS/python/etc. Can you easily verify that changing the 'w' to 'w+b' in the ```@nx.utils.open_file``` decorator fixes this bug? | 2018-07-16T19:41:22 |
|
networkx/networkx | 3,071 | networkx__networkx-3071 | [
"2942"
] | ea16c9b04b7a3f9a9be16b80895725069b24153e | diff --git a/networkx/utils/decorators.py b/networkx/utils/decorators.py
--- a/networkx/utils/decorators.py
+++ b/networkx/utils/decorators.py
@@ -3,6 +3,11 @@
from collections import defaultdict
from os.path import splitext
from contextlib import contextmanager
+try:
+ from pathlib import Path
+except ImportError:
+ # Use Path to indicate if pathlib exists (like numpy does)
+ Path = None
import networkx as nx
from decorator import decorator
@@ -208,6 +213,10 @@ def _open_file(func_to_be_decorated, *args, **kwargs):
# path is already a file-like object
fobj = path
close_fobj = False
+ elif Path is not None and isinstance(path, Path):
+ # path is a pathlib reference to a filename
+ fobj = _dispatch_dict[path.suffix](str(path), mode=mode)
+ close_fobj = True
else:
# could be None, in which case the algorithm will deal with it
fobj = path
| diff --git a/networkx/utils/tests/test_decorators.py b/networkx/utils/tests/test_decorators.py
--- a/networkx/utils/tests/test_decorators.py
+++ b/networkx/utils/tests/test_decorators.py
@@ -87,6 +87,13 @@ def test_writer_arg0_str(self):
def test_writer_arg0_fobj(self):
self.writer_arg0(self.fobj)
+ def test_writer_arg0_pathlib(self):
+ try:
+ import pathlib
+ self.writer_arg0(pathlib.Path(self.name))
+ except ImportError:
+ return
+
def test_writer_arg1_str(self):
self.writer_arg1(self.name)
assert_equal(self.read(self.name), ''.join(self.text))
| nx.write_gpickle and nx.read_gpickle not implementing os.PathLike
If you use pathlib in Python 3, networkx.write_gpickle and networkx.read_gpickle will not accept `pathlib.Path` objects as paths.
| This is easily fixable (I think) just by using str(x) whenever you expect a string path.
Each of these functions uses the ```@open_file``` decorator to determine if the argument is a string-like name of the file or a file object. Your suggestion would lead to a change in that decorator so that if it is a pathlib object, a reasonable file object is created.
Would you be able to take a look at open_file and see if there is a good way to identify the argument as a pathlib object? Then pathlib would work for all networkx functions.
Actually, I think it might be better/faster/cheaper just to have the user put ```str(p)``` when they are calling the function, instead of adding a bunch of code to NetworkX which would eventually need to be changed to keep up with pathlib. The pathlib library helps you manipulate the path, but when you want to use it you turn it into a string by using ```str```.
So I think I will close this issue in favor of leaving specific code out of networkx and instead requiring the user to turn the argument into a string before calling these functions.
I think keeping up with pathlib is pretty important, as it's a base package and is trying to create a standard for how to handle paths in python. Part of python 3 compatibility should be compatibility with all base packages and compatibility with PEP paradigms.
Not to mention that compatibility is just basically easy to implement as you've said.
I can look at it someday.
I agree with keeping up with all base packages.
But isn't the proper use of pathlib to turn it into a string when you want to use it?
f = open(str(p))
or in a more sophisticated way:
with p.open('wb') as f:
    nx.write_gpickle(G, path=f)
How are you envisioning people use the path objects?
I'm not sure I can cite the book, but my impression was that anything that implements `os.PathLike` should be considered a path, so any modules that take paths should handle things that are path-like.
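(For illustration, a hedged sketch of generic support using `os.fspath`, available on Python 3.6+; the helper name is made up:)
```python
import os

def _normalize_path(path):
    # str, bytes, and os.PathLike all pass through os.fspath;
    # file-like objects raise TypeError and are returned unchanged
    try:
        return os.fspath(path)
    except TypeError:
        return path
```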
I understand that it's easy on either the user or the developer end just by calling str, but sometimes it's hard to know as a user which methods can take path-likes and which require strings. So slowly identifying and converting anything that can't take path-likes will improve user experience in the distant future when 3.6 is the new 2.7.
It's also been awhile but I think I recall opening this issue bc when I was debugging some code where I passed a pathlike to networkx it gave some horrid traceback that took a bit to realize what was wrong. So at least maybe we should add a typecheck assert. But at that point it's just as easy to cast to string. | 2018-07-17T03:14:08 |
networkx/networkx | 3,072 | networkx__networkx-3072 | [
"2973",
"2973"
] | 5b491d1f9c0325b10d794fe65c0747a96cd51519 | diff --git a/networkx/algorithms/operators/all.py b/networkx/algorithms/operators/all.py
--- a/networkx/algorithms/operators/all.py
+++ b/networkx/algorithms/operators/all.py
@@ -38,6 +38,11 @@ def union_all(graphs, rename=(None,)):
-------
U : a graph with the same type as the first graph in list
+ Raises
+ ------
+ ValueError
+ If `graphs` is an empty list.
+
Notes
-----
To force a disjoint union with node relabeling, use
@@ -52,6 +57,8 @@ def union_all(graphs, rename=(None,)):
union
disjoint_union_all
"""
+ if not graphs:
+ raise ValueError('cannot apply union_all to an empty list')
graphs_names = zip_longest(graphs, rename)
U, gname = next(graphs_names)
for H, hname in graphs_names:
@@ -75,6 +82,11 @@ def disjoint_union_all(graphs):
-------
U : A graph with the same type as the first graph in list
+ Raises
+ ------
+ ValueError
+ If `graphs` is an empty list.
+
Notes
-----
It is recommended that the graphs be either all directed or all undirected.
@@ -83,6 +95,8 @@ def disjoint_union_all(graphs):
If a graph attribute is present in multiple graphs, then the value
from the last graph in the list with that attribute is used.
"""
+ if not graphs:
+ raise ValueError('cannot apply disjoint_union_all to an empty list')
graphs = iter(graphs)
U = next(graphs)
for H in graphs:
@@ -105,6 +119,11 @@ def compose_all(graphs):
-------
C : A graph with the same type as the first graph in list
+ Raises
+ ------
+ ValueError
+ If `graphs` is an empty list.
+
Notes
-----
It is recommended that the supplied graphs be either all directed or all
@@ -114,6 +133,8 @@ def compose_all(graphs):
If a graph attribute is present in multiple graphs, then the value
from the last graph in the list with that attribute is used.
"""
+ if not graphs:
+ raise ValueError('cannot apply compose_all to an empty list')
graphs = iter(graphs)
C = next(graphs)
for H in graphs:
@@ -136,11 +157,18 @@ def intersection_all(graphs):
-------
R : A new graph with the same type as the first graph in list
+ Raises
+ ------
+ ValueError
+ If `graphs` is an empty list.
+
Notes
-----
Attributes from the graph, nodes, and edges are not copied to the new
graph.
"""
+ if not graphs:
+ raise ValueError('cannot apply intersection_all to an empty list')
graphs = iter(graphs)
R = next(graphs)
for H in graphs:
| diff --git a/networkx/algorithms/operators/tests/test_all.py b/networkx/algorithms/operators/tests/test_all.py
--- a/networkx/algorithms/operators/tests/test_all.py
+++ b/networkx/algorithms/operators/tests/test_all.py
@@ -199,3 +199,23 @@ def test_mixed_type_compose():
H = nx.MultiGraph()
I = nx.Graph()
U = nx.compose_all([G, H, I])
+
+
+@raises(ValueError)
+def test_empty_union():
+ nx.union_all([])
+
+
+@raises(ValueError)
+def test_empty_disjoint_union():
+ nx.disjoint_union_all([])
+
+
+@raises(ValueError)
+def test_empty_compose_all():
+ nx.compose_all([])
+
+
+@raises(ValueError)
+def test_empty_intersection_all():
+ nx.intersection_all([])
| Behavior of `nx.union_all` when `graphs=[]`
I'm running into a case where I'm passing an empty list to `nx.union_all`, and it returns `None`.
While this is not necessarily the wrong thing to do, it is not documented.
Intuitively, I would expect the result of union with no inputs to be an empty graph, but the issue here is that you don't know what type the graph should be. Therefore I think the best behavior would be to raise a ValueError indicating that the input cannot be empty. This would make it more clear where the code is failing.
Current behavior:
```python
>>> nx.union_all([nx.path_graph([1, 2])])
<networkx.classes.graph.Graph at 0x7f6fb15d1ac8>
>>> nx.union_all([nx.path_graph([1, 2]), nx.path_graph([3, 4])])
<networkx.classes.graph.Graph at 0x7f6fb1477ac8>
>>> print(nx.union_all([]))
None
```
Proposed Behavior:
```python
>>> print(nx.union_all([]))
ValueError: Cannot union_all an empty list
```
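For illustration, a hedged sketch of such a guard as a wrapper (materializing the input first so an empty generator is caught too; the real fix may differ):
```python
import networkx as nx

def union_all_checked(graphs, rename=(None,)):
    graphs = list(graphs)
    if not graphs:
        raise ValueError('cannot apply union_all to an empty list')
    return nx.union_all(graphs, rename=rename)
```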
| This seems like a good way to handle an empty incoming list. Thanks! | 2018-07-17T20:44:04 |
networkx/networkx | 3,082 | networkx__networkx-3082 | [
"3079"
] | a0daf7591e5e7b8648e9aa2a7fc2507cd746c082 | diff --git a/networkx/drawing/nx_agraph.py b/networkx/drawing/nx_agraph.py
--- a/networkx/drawing/nx_agraph.py
+++ b/networkx/drawing/nx_agraph.py
@@ -145,7 +145,8 @@ def to_agraph(N):
A.node_attr.update(N.graph.get('node', {}))
A.edge_attr.update(N.graph.get('edge', {}))
- A.graph_attr.update(N.graph)
+ A.graph_attr.update((k, v) for k, v in N.graph.items()
+ if k not in ('graph', 'node', 'edge'))
# add nodes
for n, nodedata in N.nodes(data=True):
| diff --git a/networkx/drawing/tests/test_agraph.py b/networkx/drawing/tests/test_agraph.py
--- a/networkx/drawing/tests/test_agraph.py
+++ b/networkx/drawing/tests/test_agraph.py
@@ -3,7 +3,8 @@
import tempfile
from nose import SkipTest
from nose.tools import assert_true, assert_equal, assert_raises
-from networkx.testing import assert_edges_equal, assert_nodes_equal
+from networkx.testing import assert_edges_equal, assert_nodes_equal, \
+ assert_graphs_equal
import networkx as nx
@@ -90,3 +91,16 @@ def test_graph_with_reserved_keywords(self):
G.edges[('A', 'B')]['u'] = 'keyword'
G.edges[('A', 'B')]['v'] = 'keyword'
A = nx.nx_agraph.to_agraph(G)
+
+ def test_round_trip(self):
+ G = nx.Graph()
+ A = nx.nx_agraph.to_agraph(G)
+ H = nx.nx_agraph.from_agraph(A)
+ #assert_graphs_equal(G, H)
+ AA = nx.nx_agraph.to_agraph(H)
+ HH = nx.nx_agraph.from_agraph(AA)
+ assert_graphs_equal(H, HH)
+ G.graph['graph'] = {}
+ G.graph['node'] = {}
+ G.graph['edge'] = {}
+ assert_graphs_equal(G, HH)
| repeatedly reading and writing dot files does not converge on the same dot file
Steps to reproduce
---
Suppose the following Python command:
python3 -c 'import networkx as nx, sys; nx.nx_agraph.write_dot(nx.nx_agraph.read_dot(sys.stdin), sys.stdout)'
Essentially this is just writing the same dot graph to stdout that it read from stdin.
And then also this trivial dot input:
```dot
digraph G {
graph ["edge"="{}",
"graph"="{}",
name=G,
"node"="{}"
];
}
```
Now I'm executing:
command < graph1.dot > graph2.dot
command < graph2.dot > graph3.dot
command < graph3.dot > graph4.dot
...
Expected behaviour
---
Since the command is writing out the same graph it is reading without modifying it, it should not matter how often I read in and write out the same dot file. At least after the first invocation, what networkx writes out should remain the same over the remaining invocations. That is, the file should converge to a stable version that doesn't change, independently of how often the script is executed on it.
Reality
---
The `graph` attribute gets more and more populated. The more often this is executed the bigger the file becomes.
`graph2.dot`:
```dot
digraph G {
graph ["edge"="{}",
"graph"="{'edge': '{}', 'graph': '{}', 'name': 'G', 'node': '{}'}",
name=G,
"node"="{}"
];
}
```
`graph3.dot`:
```dot
digraph G {
graph ["edge"="{}",
"graph"="{'edge': '{}', 'graph': \"{'edge': '{}', 'graph': '{}', 'name': 'G', 'node': '{}'}\", 'name': 'G', 'node': '{}'}",
name=G,
"node"="{}"
];
}
```
`graph4.dot`:
```dot
digraph G {
graph ["edge"="{}",
"graph"="{'edge': '{}', 'graph': '{\'edge\': \'{}\', \'graph\': \"{\'edge\': \'{}\', \'graph\': \'{}\', \'name\': \'G\', \'node\': \'{}\'}\", \'\
name\': \'G\', \'node\': \'{}\'}', 'name': 'G', 'node': '{}'}",
name=G,
"node"="{}"
];
}
```
| This looks like it is putting the entire graph into the ```graph``` attribute.
Your initial graph shouldn't use the special attribute ```graph```. Am I right?
Does that create a subgraph within the bigger graph? I'm not sure why you have it there.
It certainly messes up reading and writing, as you demonstrate. What if it isn't there? Does the same thing happen one iteration later?
Do you have suggestions for how to handle this?
If my initial graph doesn't have the `graph` attribute, then it will be put in by networkx and the issue is the same. Try this initial graph instead:
```dot
digraph G {
}
```
After passing this through the command, it becomes:
```dot
digraph G {
graph ["edge"="{}",
"graph"="{}",
name=G,
"node"="{}"
];
}
```
It doesn't matter why *I* have it there. As you can see, it is put in by networkx itself and not by me. I noticed this issue specifically because I didn't put the attribute there but networkx did.
So yes, if it isn't there, networkx will put it and we have the same problem again.
Did you not try to reproduce the issue yourself? | 2018-07-23T23:31:56 |
networkx/networkx | 3,095 | networkx__networkx-3095 | [
"3094"
] | 1a52196699668ec5531711590621b0d4821e57ae | diff --git a/networkx/algorithms/shortest_paths/weighted.py b/networkx/algorithms/shortest_paths/weighted.py
--- a/networkx/algorithms/shortest_paths/weighted.py
+++ b/networkx/algorithms/shortest_paths/weighted.py
@@ -1115,13 +1115,13 @@ def bellman_ford_predecessor_and_distance(G, source, target=None,
>>> G = nx.path_graph(5, create_using = nx.DiGraph())
>>> pred, dist = nx.bellman_ford_predecessor_and_distance(G, 0)
>>> sorted(pred.items())
- [(0, [None]), (1, [0]), (2, [1]), (3, [2]), (4, [3])]
+ [(0, []), (1, [0]), (2, [1]), (3, [2]), (4, [3])]
>>> sorted(dist.items())
[(0, 0), (1, 1), (2, 2), (3, 3), (4, 4)]
>>> pred, dist = nx.bellman_ford_predecessor_and_distance(G, 0, 1)
>>> sorted(pred.items())
- [(0, [None]), (1, [0])]
+ [(0, []), (1, [0])]
>>> sorted(dist.items())
[(0, 0), (1, 1)]
@@ -1143,6 +1143,8 @@ def bellman_ford_predecessor_and_distance(G, source, target=None,
not containing the source contains a negative cost (di)cycle, it
will not be detected.
+ In NetworkX v2.1 and prior, the source node had predecessor `[None]`.
+ In NetworkX v2.2 this changed to the source node having predecessor `[]`
"""
if source not in G:
raise nx.NodeNotFound("Node %s is not found in the graph" % source)
@@ -1151,7 +1153,7 @@ def bellman_ford_predecessor_and_distance(G, source, target=None,
raise nx.NetworkXUnbounded("Negative cost cycle detected.")
dist = {source: 0}
- pred = {source: [None]}
+ pred = {source: []}
if len(G) == 1:
return pred, dist
@@ -1215,7 +1217,7 @@ def _bellman_ford(G, source, weight, pred=None, paths=None, dist=None,
"""
if pred is None:
- pred = {v: [None] for v in source}
+ pred = {v: [] for v in source}
if dist is None:
dist = {v: 0 for v in source}
@@ -1267,7 +1269,7 @@ def _bellman_ford(G, source, weight, pred=None, paths=None, dist=None,
path = [dst]
cur = dst
- while pred[cur][0] is not None:
+ while pred[cur]:
cur = pred[cur][0]
path.append(cur)
@@ -2096,7 +2098,7 @@ def johnson(G, weight='weight'):
raise nx.NetworkXError('Graph is not weighted.')
dist = {v: 0 for v in G}
- pred = {v: [None] for v in G}
+ pred = {v: [] for v in G}
weight = _weight_function(G, weight)
# Calculate distance of shortest paths
| diff --git a/networkx/algorithms/shortest_paths/tests/test_weighted.py b/networkx/algorithms/shortest_paths/tests/test_weighted.py
--- a/networkx/algorithms/shortest_paths/tests/test_weighted.py
+++ b/networkx/algorithms/shortest_paths/tests/test_weighted.py
@@ -346,7 +346,7 @@ def test_single_node_graph(self):
assert_equal(nx.single_source_bellman_ford_path(G, 0), {0: [0]})
assert_equal(nx.single_source_bellman_ford_path_length(G, 0), {0: 0})
assert_equal(nx.single_source_bellman_ford(G, 0), ({0: 0}, {0: [0]}))
- assert_equal(nx.bellman_ford_predecessor_and_distance(G, 0), ({0: [None]}, {0: 0}))
+ assert_equal(nx.bellman_ford_predecessor_and_distance(G, 0), ({0: []}, {0: 0}))
assert_equal(nx.goldberg_radzik(G, 0), ({0: None}, {0: 0}))
assert_raises(nx.NodeNotFound, nx.bellman_ford_predecessor_and_distance, G, 1)
assert_raises(nx.NodeNotFound, nx.goldberg_radzik, G, 1)
@@ -385,7 +385,7 @@ def test_negative_weight_cycle(self):
({0: 0, 1: 1, 2: -2, 3: -1, 4: 0},
{0: [0], 1: [0, 1], 2: [0, 1, 2], 3: [0, 1, 2, 3], 4: [0, 1, 2, 3, 4]}))
assert_equal(nx.bellman_ford_predecessor_and_distance(G, 0),
- ({0: [None], 1: [0], 2: [1], 3: [2], 4: [3]},
+ ({0: [], 1: [0], 2: [1], 3: [2], 4: [3]},
{0: 0, 1: 1, 2: -2, 3: -1, 4: 0}))
assert_equal(nx.goldberg_radzik(G, 0),
({0: None, 1: 0, 2: 1, 3: 2, 4: 3},
@@ -403,7 +403,7 @@ def test_not_connected(self):
({0: 0, 1: 1, 2: 1, 3: 1, 4: 1, 5: 1},
{0: [0], 1: [0, 1], 2: [0, 2], 3: [0, 3], 4: [0, 4], 5: [0, 5]}))
assert_equal(nx.bellman_ford_predecessor_and_distance(G, 0),
- ({0: [None], 1: [0], 2: [0], 3: [0], 4: [0], 5: [0]},
+ ({0: [], 1: [0], 2: [0], 3: [0], 4: [0], 5: [0]},
{0: 0, 1: 1, 2: 1, 3: 1, 4: 1, 5: 1}))
assert_equal(nx.goldberg_radzik(G, 0),
({0: None, 1: 0, 2: 0, 3: 0, 4: 0, 5: 0},
@@ -423,7 +423,7 @@ def test_not_connected(self):
({0: 0, 1: 1, 2: 1, 3: 1, 4: 1, 5: 1},
{0: [0], 1: [0, 1], 2: [0, 2], 3: [0, 3], 4: [0, 4], 5: [0, 5]}))
assert_equal(nx.bellman_ford_predecessor_and_distance(G, 0, weight='load'),
- ({0: [None], 1: [0], 2: [0], 3: [0], 4: [0], 5: [0]},
+ ({0: [], 1: [0], 2: [0], 3: [0], 4: [0], 5: [0]},
{0: 0, 1: 1, 2: 1, 3: 1, 4: 1, 5: 1}))
assert_equal(nx.goldberg_radzik(G, 0, weight='load'),
({0: None, 1: 0, 2: 0, 3: 0, 4: 0, 5: 0},
@@ -481,7 +481,7 @@ def test_path_graph(self):
assert_equal(nx.single_source_bellman_ford(G, 0),
({0: 0, 1: 1, 2: 2, 3: 3}, {0: [0], 1: [0, 1], 2: [0, 1, 2], 3: [0, 1, 2, 3]}))
assert_equal(nx.bellman_ford_predecessor_and_distance(G, 0),
- ({0: [None], 1: [0], 2: [1], 3: [2]}, {0: 0, 1: 1, 2: 2, 3: 3}))
+ ({0: [], 1: [0], 2: [1], 3: [2]}, {0: 0, 1: 1, 2: 2, 3: 3}))
assert_equal(nx.goldberg_radzik(G, 0),
({0: None, 1: 0, 2: 1, 3: 2}, {0: 0, 1: 1, 2: 2, 3: 3}))
assert_equal(nx.single_source_bellman_ford_path(G, 3),
@@ -491,7 +491,7 @@ def test_path_graph(self):
assert_equal(nx.single_source_bellman_ford(G, 3),
({0: 3, 1: 2, 2: 1, 3: 0}, {0: [3, 2, 1, 0], 1: [3, 2, 1], 2: [3, 2], 3: [3]}))
assert_equal(nx.bellman_ford_predecessor_and_distance(G, 3),
- ({0: [1], 1: [2], 2: [3], 3: [None]}, {0: 3, 1: 2, 2: 1, 3: 0}))
+ ({0: [1], 1: [2], 2: [3], 3: []}, {0: 3, 1: 2, 2: 1, 3: 0}))
assert_equal(nx.goldberg_radzik(G, 3),
({0: 1, 1: 2, 2: 3, 3: None}, {0: 3, 1: 2, 2: 1, 3: 0}))
@@ -506,7 +506,7 @@ def test_4_cycle(self):
assert_equal(path[3], [0, 3])
pred, dist = nx.bellman_ford_predecessor_and_distance(G, 0)
- assert_equal(pred[0], [None])
+ assert_equal(pred[0], [])
assert_equal(pred[1], [0])
assert_true(pred[2] in [[1, 3], [3, 1]])
assert_equal(pred[3], [0])
| inconsistent behavior of predecessor-and-distance functions
While working on #2346 , I ran into the following slightly surprising behavior:
* `dijkstra_predecessor_and_distance()` sets `pred[source]` to be `[]`
* `bellman_ford_predecessor_and_distance()` sets `pred[source]` to be `[None]`
This behavior should be replicated by the following (as of commit `1a521966`):
```
>>> import networkx as nx
>>> G = nx.path_graph(2)
>>> nx.dijkstra_predecessor_and_distance(G, 0)[0]
{0: [], 1: [0]}
>>> nx.bellman_ford_predecessor_and_distance(G, 0)[0]
{0: [None], 1: [0]}
```
(From the [code for `_bellman_ford()`](https://github.com/networkx/networkx/blob/1a52196699668ec5531711590621b0d4821e57ae/networkx/algorithms/shortest_paths/weighted.py#L1166), `None` is being used as a sentinel value when constructing paths.)
This produces an error when trying to use the two algorithms' output interchangeably.
The inconsistency is small, but does seem like a bug. These two functions' return values should probably have similar conventions; the behavior of `dijkstra_predecessor_and_distance()` makes the most sense to me.
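(For illustration, a hedged compatibility shim with a made-up name, for code that has to consume both conventions:)
```
def normalize_pred(pred):
    # map the [None]-sentinel convention onto the empty-list convention
    return {v: [p for p in ps if p is not None] for v, ps in pred.items()}
```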
| 2018-07-30T03:05:20 |
|
networkx/networkx | 3,097 | networkx__networkx-3097 | [
"2997"
] | 1a52196699668ec5531711590621b0d4821e57ae | diff --git a/networkx/algorithms/similarity.py b/networkx/algorithms/similarity.py
--- a/networkx/algorithms/similarity.py
+++ b/networkx/algorithms/similarity.py
@@ -1,4 +1,26 @@
# -*- coding: utf-8 -*-
+# Copyright (C) 2010 by
+# Aric Hagberg <[email protected]>
+# Dan Schult <[email protected]>
+# Pieter Swart <[email protected]>
+# All rights reserved.
+# BSD license.
+#
+# Author: Andrey Paramonov <[email protected]>
+""" Functions measuring similarity using graph edit distance.
+
+The graph edit distance is the number of edge/node changes needed
+to make two graphs isomorphic.
+
+The default algorithm/implementation is sub-optimal for some graphs.
+The problem of finding the exact Graph Edit Distance (GED) is NP-hard
+so it is often slow. If the simple interface `graph_edit_distance`
+takes too long for your graph, try `optimize_graph_edit_distance`
+and/or `optimize_edit_paths`.
+
+At the same time, I encourage capable people to investigate
+alternative GED algorithms, in order to improve the choices available.
+"""
from __future__ import print_function
import math
import networkx as nx
@@ -20,8 +42,10 @@ def debug_print(*args, **kwargs):
def graph_edit_distance(G1, G2, node_match=None, edge_match=None,
- node_subst_cost=None, node_del_cost=None, node_ins_cost=None,
- edge_subst_cost=None, edge_del_cost=None, edge_ins_cost=None,
+ node_subst_cost=None, node_del_cost=None,
+ node_ins_cost=None,
+ edge_subst_cost=None, edge_del_cost=None,
+ edge_ins_cost=None,
upper_bound=None):
"""Returns GED (graph edit distance) between graphs G1 and G2.
@@ -152,8 +176,10 @@ def graph_edit_distance(G1, G2, node_match=None, edge_match=None,
def optimal_edit_paths(G1, G2, node_match=None, edge_match=None,
- node_subst_cost=None, node_del_cost=None, node_ins_cost=None,
- edge_subst_cost=None, edge_del_cost=None, edge_ins_cost=None,
+ node_subst_cost=None, node_del_cost=None,
+ node_ins_cost=None,
+ edge_subst_cost=None, edge_del_cost=None,
+ edge_ins_cost=None,
upper_bound=None):
"""Returns all minimum-cost edit paths transforming G1 to G2.
@@ -296,8 +322,10 @@ def optimal_edit_paths(G1, G2, node_match=None, edge_match=None,
def optimize_graph_edit_distance(G1, G2, node_match=None, edge_match=None,
- node_subst_cost=None, node_del_cost=None, node_ins_cost=None,
- edge_subst_cost=None, edge_del_cost=None, edge_ins_cost=None,
+ node_subst_cost=None, node_del_cost=None,
+ node_ins_cost=None,
+ edge_subst_cost=None, edge_del_cost=None,
+ edge_ins_cost=None,
upper_bound=None):
"""Returns consecutive approximations of GED (graph edit distance)
between graphs G1 and G2.
@@ -428,8 +456,10 @@ def optimize_graph_edit_distance(G1, G2, node_match=None, edge_match=None,
def optimize_edit_paths(G1, G2, node_match=None, edge_match=None,
- node_subst_cost=None, node_del_cost=None, node_ins_cost=None,
- edge_subst_cost=None, edge_del_cost=None, edge_ins_cost=None,
+ node_subst_cost=None, node_del_cost=None,
+ node_ins_cost=None,
+ edge_subst_cost=None, edge_del_cost=None,
+ edge_ins_cost=None,
upper_bound=None, strictly_decreasing=True):
"""GED (graph edit distance) calculation: advanced interface.
@@ -574,18 +604,19 @@ def make_CostMatrix(C, m, n):
lsa_row_ind, lsa_col_ind = linear_sum_assignment(C)
# Fixup dummy assignments:
- # each substitution i<->j should have corresponding dummy assignment m+j<->n+i
+ # each substitution i<->j should have dummy assignment m+j<->n+i
# NOTE: fast reduce of Cv relies on it
#assert len(lsa_row_ind) == len(lsa_col_ind)
- subst_ind = list(k for k, i, j in zip(range(len(lsa_row_ind)), lsa_row_ind, lsa_col_ind)
- if i < m and j < n)
- dummy_ind = list(k for k, i, j in zip(range(len(lsa_row_ind)), lsa_row_ind, lsa_col_ind)
- if i >= m and j >= n)
+ indexes = zip(range(len(lsa_row_ind)), lsa_row_ind, lsa_col_ind)
+ subst_ind = list(k for k, i, j in indexes if i < m and j < n)
+ indexes = zip(range(len(lsa_row_ind)), lsa_row_ind, lsa_col_ind)
+ dummy_ind = list(k for k, i, j in indexes if i >= m and j >= n)
#assert len(subst_ind) == len(dummy_ind)
lsa_row_ind[dummy_ind] = lsa_col_ind[subst_ind] + m
lsa_col_ind[dummy_ind] = lsa_row_ind[subst_ind] + n
- return CostMatrix(C, lsa_row_ind, lsa_col_ind, C[lsa_row_ind, lsa_col_ind].sum())
+ return CostMatrix(C, lsa_row_ind, lsa_col_ind,
+ C[lsa_row_ind, lsa_col_ind].sum())
def extract_C(C, i, j, m, n):
#assert(C.shape == (m + n, m + n))
@@ -627,14 +658,12 @@ def match_edges(u, v, pending_g, pending_h, Ce, matched_uv=[]):
N = len(pending_h)
#assert Ce.C.shape == (M + N, M + N)
- g_ind = list(i for i in range(M)
- if any(pending_g[i][:2] in ((p, u), (u, p))
- for p, q in matched_uv)
- or pending_g[i][:2] == (u, u))
- h_ind = list(j for j in range(N)
- if any(pending_h[j][:2] in ((q, v), (v, q))
- for p, q in matched_uv)
- or pending_h[j][:2] == (v, v))
+ g_ind = [i for i in range(M) if pending_g[i][:2] == (u, u) or
+ any(pending_g[i][:2] in ((p, u), (u, p))
+ for p, q in matched_uv)]
+ h_ind = [j for j in range(N) if pending_h[j][:2] == (v, v) or
+ any(pending_h[j][:2] in ((q, v), (v, q))
+ for p, q in matched_uv)]
m = len(g_ind)
n = len(h_ind)
@@ -649,7 +678,8 @@ def match_edges(u, v, pending_g, pending_h, Ce, matched_uv=[]):
for l, j in zip(range(n), h_ind):
h = pending_h[j][:2]
if nx.is_directed(G1) or nx.is_directed(G2):
- if any(g == (p, u) and h == (q, v) or g == (u, p) and h == (v, q)
+ if any(g == (p, u) and h == (q, v) or
+ g == (u, p) and h == (v, q)
for p, q in matched_uv):
continue
else:
@@ -714,7 +744,8 @@ def get_edit_ops(matched_uv, pending_u, pending_v, Cv,
# 1) a vertex mapping from optimal linear sum assignment
i, j = min((k, l) for k, l in zip(Cv.lsa_row_ind, Cv.lsa_col_ind)
if k < m or l < n)
- xy, localCe = match_edges(pending_u[i] if i < m else None, pending_v[j] if j < n else None,
+ xy, localCe = match_edges(pending_u[i] if i < m else None,
+ pending_v[j] if j < n else None,
pending_g, pending_h, Ce, matched_uv)
Ce_xy = reduce_Ce(Ce, xy, len(pending_g), len(pending_h))
#assert Ce.ls <= localCe.ls + Ce_xy.ls
@@ -746,13 +777,15 @@ def get_edit_ops(matched_uv, pending_u, pending_v, Cv,
#assert Cv.ls <= Cv.C[i, j] + Cv_ij.ls
if prune(matched_cost + Cv.C[i, j] + Cv_ij.ls + Ce.ls):
continue
- xy, localCe = match_edges(pending_u[i] if i < m else None, pending_v[j] if j < n else None,
+ xy, localCe = match_edges(pending_u[i] if i < m else None,
+ pending_v[j] if j < n else None,
pending_g, pending_h, Ce, matched_uv)
if prune(matched_cost + Cv.C[i, j] + Cv_ij.ls + localCe.ls):
continue
Ce_xy = reduce_Ce(Ce, xy, len(pending_g), len(pending_h))
#assert Ce.ls <= localCe.ls + Ce_xy.ls
- if prune(matched_cost + Cv.C[i, j] + Cv_ij.ls + localCe.ls + Ce_xy.ls):
+ if prune(matched_cost + Cv.C[i, j] + Cv_ij.ls + localCe.ls +
+ Ce_xy.ls):
continue
other.append(((i, j), Cv_ij, xy, Ce_xy, Cv.C[i, j] + localCe.ls))
@@ -827,8 +860,10 @@ def get_edit_paths(matched_uv, pending_u, pending_v, Cv,
v = pending_v.pop(j) if j < len(pending_v) else None
matched_uv.append((u, v))
for x, y in xy:
- matched_gh.append((pending_g[x] if x < len(pending_g) else None,
- pending_h[y] if y < len(pending_h) else None))
+ len_g = len(pending_g)
+ len_h = len(pending_h)
+ matched_gh.append((pending_g[x] if x < len_g else None,
+ pending_h[y] if y < len_h else None))
sortedx = list(sorted(x for x, y in xy))
sortedy = list(sorted(y for x, y in xy))
G = list((pending_g.pop(x) if x < len(pending_g) else None)
@@ -837,15 +872,17 @@ def get_edit_paths(matched_uv, pending_u, pending_v, Cv,
for y in reversed(sortedy))
# yield from
- for t in get_edit_paths(matched_uv, pending_u, pending_v, Cv_ij,
- matched_gh, pending_g, pending_h, Ce_xy,
+ for t in get_edit_paths(matched_uv, pending_u, pending_v,
+ Cv_ij,
+ matched_gh, pending_g, pending_h,
+ Ce_xy,
matched_cost + edit_cost):
yield t
# backtrack
- if not u is None:
+ if u is not None:
pending_u.insert(i, u)
- if not v is None:
+ if v is not None:
pending_v.insert(j, v)
matched_uv.pop()
for x, g in zip(sortedx, reversed(G)):
@@ -868,10 +905,12 @@ def get_edit_paths(matched_uv, pending_u, pending_v, Cv,
C = np.zeros((m + n, m + n))
if node_subst_cost:
C[0:m, 0:n] = np.array([node_subst_cost(G1.nodes[u], G2.nodes[v])
- for u in pending_u for v in pending_v]).reshape(m, n)
+ for u in pending_u for v in pending_v]
+ ).reshape(m, n)
elif node_match:
C[0:m, 0:n] = np.array([1 - int(node_match(G1.nodes[u], G2.nodes[v]))
- for u in pending_u for v in pending_v]).reshape(m, n)
+ for u in pending_u for v in pending_v]
+ ).reshape(m, n)
else:
# all zeroes
pass
@@ -888,9 +927,11 @@ def get_edit_paths(matched_uv, pending_u, pending_v, Cv,
#assert not n or min(ins_costs) >= 0
inf = C[0:m, 0:n].sum() + sum(del_costs) + sum(ins_costs) + 1
C[0:m, n:n + m] = np.array([del_costs[i] if i == j else inf
- for i in range(m) for j in range(m)]).reshape(m, m)
+ for i in range(m) for j in range(m)]
+ ).reshape(m, m)
C[m:m + n, 0:n] = np.array([ins_costs[i] if i == j else inf
- for i in range(n) for j in range(n)]).reshape(n, n)
+ for i in range(n) for j in range(n)]
+ ).reshape(n, n)
Cv = make_CostMatrix(C, m, n)
#debug_print('Cv: {} x {}'.format(m, n))
#debug_print(Cv.C)
@@ -904,10 +945,12 @@ def get_edit_paths(matched_uv, pending_u, pending_v, Cv,
C = np.zeros((m + n, m + n))
if edge_subst_cost:
C[0:m, 0:n] = np.array([edge_subst_cost(G1.edges[g], G2.edges[h])
- for g in pending_g for h in pending_h]).reshape(m, n)
+ for g in pending_g for h in pending_h]
+ ).reshape(m, n)
elif edge_match:
C[0:m, 0:n] = np.array([1 - int(edge_match(G1.edges[g], G2.edges[h]))
- for g in pending_g for h in pending_h]).reshape(m, n)
+ for g in pending_g for h in pending_h]
+ ).reshape(m, n)
else:
# all zeroes
pass
@@ -924,9 +967,11 @@ def get_edit_paths(matched_uv, pending_u, pending_v, Cv,
#assert not n or min(ins_costs) >= 0
inf = C[0:m, 0:n].sum() + sum(del_costs) + sum(ins_costs) + 1
C[0:m, n:n + m] = np.array([del_costs[i] if i == j else inf
- for i in range(m) for j in range(m)]).reshape(m, m)
+ for i in range(m) for j in range(m)]
+ ).reshape(m, m)
C[m:m + n, 0:n] = np.array([ins_costs[i] if i == j else inf
- for i in range(n) for j in range(n)]).reshape(n, n)
+ for i in range(n) for j in range(n)]
+ ).reshape(n, n)
Ce = make_CostMatrix(C, m, n)
#debug_print('Ce: {} x {}'.format(m, n))
#debug_print(Ce.C)
@@ -953,10 +998,10 @@ def prune(cost):
for vertex_path, edge_path, cost in \
get_edit_paths([], pending_u, pending_v, Cv,
[], pending_g, pending_h, Ce, 0):
- #assert list(sorted(G1.nodes)) == list(sorted(list(u for u, v in vertex_path if u is not None)))
- #assert list(sorted(G2.nodes)) == list(sorted(list(v for u, v in vertex_path if v is not None)))
- #assert list(sorted(G1.edges)) == list(sorted(list(g for g, h in edge_path if g is not None)))
- #assert list(sorted(G2.edges)) == list(sorted(list(h for g, h in edge_path if h is not None)))
+ #assert sorted(G1.nodes) == sorted(u for u, v in vertex_path if u is not None)
+ #assert sorted(G2.nodes) == sorted(v for u, v in vertex_path if v is not None)
+ #assert sorted(G1.edges) == sorted(g for g, h in edge_path if g is not None)
+ #assert sorted(G2.edges) == sorted(h for g, h in edge_path if h is not None)
#print(vertex_path, edge_path, cost, file = sys.stderr)
#assert cost == maxcost.value
yield list(vertex_path), list(edge_path), cost
| graph edit distance doesn't finish for a small, special (but not circular) example
I stumbled upon this: the program doesn't come to an end.
Here is a minimal example:
```
#!/usr/bin/env python
import networkx as nx
import time

g1 = nx.Graph()
g1.add_edges_from([(5, 4), (5, 6), (5, 7), (21, 8), (2, 1), (2, 0), (5, 3)])
g2 = nx.Graph()
g2.add_edges_from([(15, 13), (23, 16), (23, 24), (100, 7), (18, 19), (100, 23), (19, 20), (22, 21)])

start_time = time.time()
print("starting")
dst = nx.graph_edit_distance(g1, g2)
print(' GED={0} took {1:.2f} seconds'.format(dst, time.time() - start_time))
```
| Hi @kleinias,
probably you'll have more luck with `optimize_graph_edit_distance`.
Any resolution here? Should we change the docs somewhere to suggest trying the optimize version of the function? Should we just close this? What do you suggest?
@dschult I believe the procedure should **ultimately work** for the particular example, quickly.
However, **current** algorithm/implementation is sub-optimal, for this example. Moreover, I'm pretty sure it will always be possible to find an example for which exact GED calculation is slow, as it's NP-hard. So, `optimize_graph_edit_distance` will always have its uses. The algorithm interface was designed to be both simple (`graph_edit_distance`) and flexible (`optimize_graph_edit_distance`/`optimize_edit_paths`), and if the simple interface doesn't yield good results, one has to dig a bit deeper.
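For illustration, here is a sketch of how the anytime interface can be used on the example above (the 10-second budget is an arbitrary choice, not a recommendation):
```
import time
import networkx as nx

g1 = nx.Graph([(5, 4), (5, 6), (5, 7), (21, 8), (2, 1), (2, 0), (5, 3)])
g2 = nx.Graph([(15, 13), (23, 16), (23, 24), (100, 7), (18, 19),
               (100, 23), (19, 20), (22, 21)])

best = None
deadline = time.time() + 10
for approx in nx.optimize_graph_edit_distance(g1, g2):
    best = approx  # each yielded value is no worse than the previous one
    if time.time() > deadline:
        break
print('best GED approximation so far: {}'.format(best))
```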
At the same time, I encourage capable people to investigate alternative GED algorithms, in order to fix this particular example.
That's a great summary. Maybe I'll try to put a paragraph "docstring" for the module (at the top) that simply says what you just said. That way people know a little more which functions to try. I'll take a stab at the difference between optimize_edit_paths and optimal_edit_paths too. | 2018-07-30T14:19:28 |
|
networkx/networkx | 3,101 | networkx__networkx-3101 | [
"3099"
] | 6ee8574f64c938721137adcb8ba6938367442039 | diff --git a/networkx/classes/digraph.py b/networkx/classes/digraph.py
--- a/networkx/classes/digraph.py
+++ b/networkx/classes/digraph.py
@@ -290,18 +290,18 @@ def __init__(self, incoming_graph_data=None, **attr):
{'day': 'Friday'}
"""
- self.node_dict_factory = ndf = self.node_dict_factory
+ self.node_dict_factory = self.node_dict_factory
self.adjlist_outer_dict_factory = self.adjlist_outer_dict_factory
self.adjlist_inner_dict_factory = self.adjlist_inner_dict_factory
self.edge_attr_dict_factory = self.edge_attr_dict_factory
self.graph = {} # dictionary for graph attributes
- self._node = ndf() # dictionary for node attributes
+ self._node = self.node_dict_factory() # dictionary for node attr
# We store two adjacency lists:
# the predecessors of node n are stored in the dict self._pred
# the successors of node n are stored in the dict self._succ=self._adj
- self._adj = ndf() # empty adjacency dictionary
- self._pred = ndf() # predecessor
+ self._adj = self.adjlist_outer_dict_factory() # empty adjacency dict
+ self._pred = self.adjlist_outer_dict_factory() # predecessor
self._succ = self._adj # successor
# attempt to load graph with data
| diff --git a/networkx/classes/tests/test_special.py b/networkx/classes/tests/test_special.py
--- a/networkx/classes/tests/test_special.py
+++ b/networkx/classes/tests/test_special.py
@@ -8,6 +8,47 @@
from test_multidigraph import TestMultiDiGraph
+def test_factories():
+ class mydict1(dict):
+ pass
+
+ class mydict2(dict):
+ pass
+
+ class mydict3(dict):
+ pass
+
+ class mydict4(dict):
+ pass
+
+ class mydict5(dict):
+ pass
+
+ for Graph in (nx.Graph, nx.DiGraph, nx.MultiGraph, nx.MultiDiGraph):
+ # print("testing class: ", Graph.__name__)
+ class MyGraph(Graph):
+ node_dict_factory = mydict1
+ adjlist_outer_dict_factory = mydict2
+ adjlist_inner_dict_factory = mydict3
+ edge_key_dict_factory = mydict4
+ edge_attr_dict_factory = mydict5
+ G = MyGraph()
+ assert_is_instance(G._node, mydict1)
+ assert_is_instance(G._adj, mydict2)
+ G.add_node(1)
+ assert_is_instance(G._adj[1], mydict3)
+ if G.is_directed():
+ assert_is_instance(G._pred, mydict2)
+ assert_is_instance(G._succ, mydict2)
+ assert_is_instance(G._pred[1], mydict3)
+ G.add_edge(1, 2)
+ if G.is_multigraph():
+ assert_is_instance(G._adj[1][2], mydict4)
+ assert_is_instance(G._adj[1][2][0], mydict5)
+ else:
+ assert_is_instance(G._adj[1][2], mydict5)
+
+
class SpecialGraphTester(TestGraph):
def setUp(self):
TestGraph.setUp(self)
| Use AdjListOuterDict instead of NodeDict for DiGraph's succ and pred?
Hi networkx folks,
I found that the init function of DiGraph [here](https://github.com/networkx/networkx/blob/master/networkx/classes/digraph.py#L303) is interesting. Instead of using `adjlist_outer_dict_factory`, the `node_dict_factory` is used. Is this a bug? My use case is to hook into networkx's DiGraph to notify my other components whenever a node or edge is added to the graph. My current solution (sketched below) is to subclass DiGraph and replace the four dict structures with my own dict. However, because of this logic, it is harder to distinguish between node and edge addition/removal. Would you please have a look? Thank you.
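A minimal sketch of that workaround (the class names and the print hook are illustrative):
```
import networkx as nx

class NotifyingDict(dict):
    def __setitem__(self, key, value):
        print("key added:", key)  # stand-in for notifying my other components
        dict.__setitem__(self, key, value)

class MyDiGraph(nx.DiGraph):
    node_dict_factory = NotifyingDict           # backs the node store
    adjlist_outer_dict_factory = NotifyingDict  # should back _adj/_pred/_succ
    adjlist_inner_dict_factory = NotifyingDict  # per-node neighbor dicts
    edge_attr_dict_factory = NotifyingDict      # per-edge attribute dicts
```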
-Minjie
| Yup --- That is a bug. That should be ```adjlist_outer_dict_factory```.
Thank you!
Thanks for the response. Look forward to the fix! | 2018-07-31T16:18:56 |
networkx/networkx | 3,104 | networkx__networkx-3104 | [
"3100"
] | d65a4de9e4f732cd9bb1cf1d174d1b6bd1ce56fb | diff --git a/networkx/readwrite/gml.py b/networkx/readwrite/gml.py
--- a/networkx/readwrite/gml.py
+++ b/networkx/readwrite/gml.py
@@ -195,7 +195,7 @@ def read_gml(path, label='label', destringizer=None):
For additional documentation on the GML file format, please see the
`GML website <http://www.infosun.fim.uni-passau.de/Graphlet/GML/gml-tr.html>`_.
- See the module docstring :mod:`networkx.readwrite.gml` for additional details.
+ See the module docstring :mod:`networkx.readwrite.gml` for more details.
Examples
--------
@@ -265,7 +265,7 @@ def parse_gml(lines, label='label', destringizer=None):
For additional documentation on the GML file format, please see the
`GML website <http://www.infosun.fim.uni-passau.de/Graphlet/GML/gml-tr.html>`_.
- See the module docstring :mod:`networkx.readwrite.gml` for additional details.
+ See the module docstring :mod:`networkx.readwrite.gml` for more details.
"""
def decode_line(line):
if isinstance(line, bytes):
@@ -611,7 +611,7 @@ def generate_gml(G, stringizer=None):
For additional documentation on the GML file format, please see the
`GML website <http://www.infosun.fim.uni-passau.de/Graphlet/GML/gml-tr.html>`_.
- See the module docstring :mod:`networkx.readwrite.gml` for additional details.
+ See the module docstring :mod:`networkx.readwrite.gml` for more details.
Examples
--------
@@ -793,10 +793,13 @@ def write_gml(G, path, stringizer=None):
specification. For other data types, you need to explicitly supply a
`stringizer`/`destringizer`.
+ Note that while we allow non-standard GML to be read from a file, we make
+ sure to write GML format. In particular, underscores are not allowed in
+ attribute names.
For additional documentation on the GML file format, please see the
`GML website <http://www.infosun.fim.uni-passau.de/Graphlet/GML/gml-tr.html>`_.
- See the module docstring :mod:`networkx.readwrite.gml` for additional details.
+ See the module docstring :mod:`networkx.readwrite.gml` for more details.
Examples
--------
| Error when attempting to write a GML file after merging two graphs from imported GML files.
Hi,
(I posted this on StackOverflow earlier today, but thought it advisable to raise it here too in case it is a known issue: https://stackoverflow.com/questions/51612124/networkx-key-error-when-writing-gml-file )
I am getting the following error message when attempting to write GML file after merging two graphs using compose():
`NetworkXError: 'user_id' is not a valid key`
The background is that I import two GML files using:
```
g = nx.read_gml(file_path + "test_graph_1.gml")
h = nx.read_gml(file_path + "test_graph_2.gml")
```
Each node (in both GML files) is structured thus:
```
node [
id 9
user_id "1663413990"
file "wingsscotland.dat"
label "brian_bilston"
image "/Users/ian/development/gtf/gtf/img/1663413990.jpg"
type "friends"
statuses 21085
friends 737
followers 53425
listed 550
ffr 72.4898
lfr 0.1029
shape "triangle-up"
]
```
After importing each file, I can check all the node attributes and see that nodes are unique within each graph.
I also see that NetworkX by default discards the 'id' field, and uses the 'label' as the identifier of the node. It retains the user_id attribute (which happens to be a Twitter user_id and suits my purposes well).
Running
`list(f.nodes(data=True))`
I can see that the data for the node above is:
```
('brian_bilston',
{'ffr': 72.4898,
'file': 'wingsscotland.dat',
'followers': 53425,
'friends': 737,
'image': '/Users/ian/development/gtf/gtf/img/1663413990.jpg',
'lfr': 0.1029,
'listed': 550,
'shape': 'triangle-up',
'statuses': 21085,
'type': 'friends',
'user_id': '1663413990'})
```
There is (in this test case) one common node shared by Graph g and Graph h: the one shown above. All others are unique by user_id and label.
I then merge the two graphs using:
`f = nx.compose(g,h)`
This works ok.
I then go to write out a new GML from the graph, f, using:
`nx.write_gml(f, file_path + "one_plus_two.gml")`
At this point I get the error shown above:
`NetworkXError: 'user_id' is not a valid key`
I have checked the uniqueness of all user_id's (in case I had duplicated one):
```
uid = nx.get_node_attributes(f,'user_id')
print(uid)
```
Which outputs:
```
{'brian_bilston': '1663413990',
'ICMResearch': '100',
'justcswilliams': '200',
'MissBabington': '300',
'ProBirdRights': '400',
'FredSmith': '247775851',
'JasWatt': '160952087',
'Angela_Lewis': '2316946782',
'Fuzzpig54': '130136162',
'SonnyRussel': '828881340',
'JohnBird': '448476934',
'AngusMcAngus': '19785044'}
```
(formatted for readability).
So, all user_id's are unique, as far as I can tell.
So, if it is not a question of uniqueness of keys, what is the error telling me?
I've exhausted my thinking on this!
Any pointers, please, would be very much appreciated!
| Yes -- this is a known issue: see #2131
The GML spec doesn't allow underscores in attribute names.
We allow reading .gml files that don't correspond to the official GML spec. But we write only items that follow the spec. You should convert your attribute names to not include the underscore.
    for n in G:
        G.node[n]['userid'] = G.node[n]['user_id']
        del G.node[n]['user_id']
We should also add to the documentation a note about this.
Wonderful @dschult I really appreciate your quick answer.
I will post a response to the StackOverflow question with a link back here so that anyone else can find it from there.
Much appreciated
Ian | 2018-07-31T21:43:53 |
|
networkx/networkx | 3,110 | networkx__networkx-3110 | [
"2966"
] | 794323b9ca4fcd5bb270137ba2fe51bcb78b9a6f | diff --git a/networkx/algorithms/centrality/eigenvector.py b/networkx/algorithms/centrality/eigenvector.py
--- a/networkx/algorithms/centrality/eigenvector.py
+++ b/networkx/algorithms/centrality/eigenvector.py
@@ -28,16 +28,16 @@ def eigenvector_centrality(G, max_iter=100, tol=1.0e-6, nstart=None,
Eigenvector centrality computes the centrality for a node based on the
centrality of its neighbors. The eigenvector centrality for node $i$ is
+ the $i$-th element of the vector $x$ defined by the equation
.. math::
Ax = \lambda x
where $A$ is the adjacency matrix of the graph `G` with eigenvalue
- $\lambda$. By virtue of the Perron–Frobenius theorem, there is
- a unique and positive solution if $\lambda$ is the largest
- eigenvalue associated with the eigenvector of the adjacency matrix
- $A$ ([2]_).
+ $\lambda$. By virtue of the Perron–Frobenius theorem, there is a unique
+ solution $x$, all of whose entries are positive, if $\lambda$ is the
+ largest eigenvalue of the adjacency matrix $A$ ([2]_).
Parameters
----------
| eigenvector_centrality() docstring needs rewording
The docstring for eigenvector_centrality() refers to the eigenvector centrality of a node i and then proceeds to give an equation for the vector of centralities of all nodes. Suggest rewording similar to the katz_centrality() method.
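For reference, the element-wise form (one entry of the vector written in terms of the others), in the docstring's own notation, would be:
```
.. math::

   x_i = \frac{1}{\lambda} \sum_j A_{ij} x_j
```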
| ```katz_centrality``` also gives an equation for the vector (actually an equation giving one element of the vector in terms of all the others). But some improvement to the docs seems simple and easy to do. Would you like to take a shot at making the equation for eigenvector_centrality easier to read? | 2018-08-03T19:44:07 |
|
networkx/networkx | 3,111 | networkx__networkx-3111 | [
"3105"
] | ca654b6831bb5c4bebff94b4efe0be17c229fa28 | diff --git a/networkx/convert_matrix.py b/networkx/convert_matrix.py
--- a/networkx/convert_matrix.py
+++ b/networkx/convert_matrix.py
@@ -177,8 +177,6 @@ def from_pandas_adjacency(df, create_using=None):
"""
- A = df.values
- G = from_numpy_matrix(A, create_using=create_using)
try:
df = df[df.index]
except:
@@ -186,6 +184,9 @@ def from_pandas_adjacency(df, create_using=None):
missing = list(set(df.index).difference(set(df.columns)))
raise nx.NetworkXError("Columns must match Indices.", msg % missing)
+ A = df.values
+ G = from_numpy_matrix(A, create_using=create_using)
+
nx.relabel.relabel_nodes(G, dict(enumerate(df.columns)), copy=False)
return G
| diff --git a/networkx/tests/test_convert_pandas.py b/networkx/tests/test_convert_pandas.py
--- a/networkx/tests/test_convert_pandas.py
+++ b/networkx/tests/test_convert_pandas.py
@@ -163,3 +163,14 @@ def test_roundtrip(self):
df = nx.to_pandas_adjacency(Gtrue, dtype=int)
G = nx.from_pandas_adjacency(df)
assert_graphs_equal(Gtrue, G)
+
+ def test_from_adjacency_named(self):
+ # example from issue #3105
+ data = {"A": {"A": 0, "B": 0, "C": 0},
+ "B": {"A": 1, "B": 0, "C": 0},
+ "C": {"A": 0, "B": 1, "C": 0}}
+ dftrue = pd.DataFrame(data)
+ df = dftrue[["A", "C", "B"]]
+ G = nx.from_pandas_adjacency(df, create_using=nx.DiGraph())
+ df = nx.to_pandas_adjacency(G, dtype=int)
+ pd.testing.assert_frame_equal(df, dftrue)
| When using 'from_pandas_adjacency', edges are added focusing on column names rather than the order
When using the function `from_pandas_adjacency` (networkx 2.1) to create a graph, edges are added focusing on the ***order*** of columns and rows in the pandas DataFrame.
Personally, I think it is more useful to add edges based on the ***names*** of columns and rows (indices) rather than their ***order***, because
1. Personally, when column names and row names are given in the pandas DataFrame, I think it is natural to interpret them as node names.
2. In the current behavior, it is necessary for the code calling `from_pandas_adjacency` to ensure that the columns and rows of the pandas DataFrame are in the same order. For example, by writing the following code every time
`df = df[df.index] # Sort columns by row (index) order`
Without this code, there is a possibility that an unintended graph structure may be created.
The examples are as follows.
### Current behavior:
In the following case 1 and case 2, the pandas DataFrames have the same structure except for column order. (The column order of case 1 is ["A", "B", "C"], and the column order of case 2 is ["A", "C", "B"].)

However, the graph structure created using `from_pandas_adjacency` (networkx 2.1) differs between case 1 and case 2.

**Case 1:** This case is no problem.
```
# A->B, B->C
data = {
"A": {"A": 0, "B" : 0, "C": 0},
"B": {"A": 1, "B" : 0, "C": 0},
"C": {"A": 0, "B" : 1, "C": 0}
}
case1_df = pd.DataFrame(data)
G = nx.from_pandas_adjacency(case1_df, create_using=nx.DiGraph())
pos = nx.spring_layout(G)
nx.draw_networkx_nodes(G, pos=pos, node_color="#ffa1a1")
nx.draw_networkx_labels(G, pos=pos)
nx.draw_networkx_edges(G, pos=pos)
plt.title('case 1')
plt.show()
```

**Case 2:**
I think the behavior in this case is a little confusing.
Focusing on column and row names, one would expect the same graph structure as in case 1 to be created. However, a different graph structure is actually created, because `from_pandas_adjacency` is currently based on the ***order***, not the ***names***.
```
# A->B, B->C (Same as case 1)
data = {
"A": {"A": 0, "B" : 0, "C": 0},
"B": {"A": 1, "B" : 0, "C": 0},
"C": {"A": 0, "B" : 1, "C": 0}
}
case2_df = pd.DataFrame(data)
# !!! Change the column order from ["A", "B", "C"] to ["A", "C", "B"] !!!
case2_df = case2_df[["A", "C", "B"]]
G = nx.from_pandas_adjacency(case2_df, create_using=nx.DiGraph())
pos = nx.spring_layout(G)
nx.draw_networkx_nodes(G, pos=pos, node_color="#ffa1a1")
nx.draw_networkx_labels(G, pos=pos)
nx.draw_networkx_edges(G, pos=pos)
plt.title('case 2')
plt.show()
```

### Expected result:
I want the result of case 2 to be the same as case 1. The reasons are mentioned at the beginning of this issue.
If the code is changed, I'm thinking of sending a PR like the following.
**Before:**
https://github.com/networkx/networkx/blob/ab3a6cdece25e359f28526fdc5f9f8ae443503f3/networkx/convert_matrix.py#L180-L190
**After:**
My desire is to avoid the situation that always needs to be careful of columns order when using `from_pandas_adjacency`. I think that there are two approaches.
**[Approach 1]**
In the original code, the columns are sorted after creating the graph. Change this to sort before creating the graph.
```
def from_pandas_adjacency(df, create_using=None):
# …
try:
df = df[df.index]
except:
msg = "%s not in columns"
missing = list(set(df.index).difference(set(df.columns)))
raise nx.NetworkXError("Columns must match Indices.", msg % missing)
A = df.values
G = from_numpy_matrix(A, create_using=create_using)
nx.relabel.relabel_nodes(G, dict(enumerate(df.columns)), copy=False)
return G
```
**[Approach 2]**
If the columns and rows are in a different order, throw an exception. The exception would remind users to put the columns and rows in the same order before calling `from_pandas_adjacency`, avoiding the trouble of an unintended graph structure being created.
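A sketch of what this check might look like (the error message wording is illustrative):
```
if list(df.columns) != list(df.index):
    raise nx.NetworkXError("Columns must be in the same order as indices.")
```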
| Thanks for pointing this out! I agree that we need a fix here -- either raise an exception when order doesn't match or reorder the columns to match the rows. I would lean toward your **Approach 1**. Do you see advantages of the second approach? I guess my question is also getting at: are there backward compatibility issues we need to worry about here?
Thanks!
Thank you for the comments.
> Do you see advantages of the second approach?
I think there is no big advantage for the second approach. I myself want to adopt **Approach 1** rather than Approach 2.
I presented Approach 2 as an alternative in case Approach 1 cannot be adopted for some reason. (I thought that `from_pandas_adjacency` might have been implemented to focus on order after some discussion.)
> I guess my question is also getting at: are there backward compatibility issues we need to worry about here?
I think there is no backward compatibility problem.
If there are cases in which it is necessary to ignore the column names when creating a graph structure, this would be a problem, but I don't think that is a common case.
If there is no problem, I would like to send a pull request with the contents of **Approach 1**. | 2018-08-04T22:40:47 |
networkx/networkx | 3,114 | networkx__networkx-3114 | [
"3113"
] | ca654b6831bb5c4bebff94b4efe0be17c229fa28 | diff --git a/networkx/classes/digraph.py b/networkx/classes/digraph.py
--- a/networkx/classes/digraph.py
+++ b/networkx/classes/digraph.py
@@ -769,6 +769,25 @@ def has_predecessor(self, u, v):
def successors(self, n):
"""Return an iterator over successor nodes of n.
+ A successor of n is a node m such that there exists a directed
+ edge from n to m.
+
+ Parameters
+ ----------
+ n : node
+ A node in the graph
+
+ Raises
+ -------
+ NetworkXError
+ If n is not in the graph.
+
+ See Also
+ --------
+ predecessors
+
+ Notes
+ -----
neighbors() and successors() are the same.
"""
try:
@@ -780,7 +799,25 @@ def successors(self, n):
neighbors = successors
def predecessors(self, n):
- """Return an iterator over predecessor nodes of n."""
+ """Return an iterator over predecessor nodes of n.
+
+ A predecessor of n is a node m such that there exists a directed
+ edge from m to n.
+
+ Parameters
+ ----------
+ n : node
+ A node in the graph
+
+ Raises
+ -------
+ NetworkXError
+ If n is not in the graph.
+
+ See Also
+ --------
+ successors
+ """
try:
return iter(self._pred[n])
except KeyError:
| G.predecessors(node) documentation could benefit from additional explanation
I'm on a _tree_ network similar to this toy,
[('total_fruits', 'apples'),
('total_fruits_and_vegetables', 'total_fruits'),
('total_fruits_and_vegetables', 'total_vegetables'),
('total_vegetables', 'carrot')]
To find the root node I'm using this solution on StackOverflow: [NetworkX find root_node for a particular node in a directed graph](https://stackoverflow.com/a/47191030/343215). This solution works, but I think the documentation could be clearer.
The documentation for predecessor [networkx.DiGraph.predecessors](https://networkx.github.io/documentation/stable/reference/classes/generated/networkx.DiGraph.predecessors.html) says:
> DiGraph.predecessors(n)[source]
> Return an iterator over predecessor nodes of n.
For my tree I expected the G.predecessors(node) output to return _all_ of the predecessors up to the root:
[n for n in G.predecessors('apples')]
>>> ['apples', 'total_fruits', 'total_fruits_and_vegetables']
But what I get:
[n for n in G.predecessors('apples')]
>>>['total_fruits']
Finally, I realized that the returned _nodes_ are the predecessors one level (one edge) away from the given node, because the network may not be a tree.
Given this edge case, I'm asking whether the documentation could benefit from further description.
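For anyone else landing here, a minimal sketch of the root-finding idiom from the linked answer (it assumes the graph really is a tree, i.e. each node has at most one predecessor):
```
def find_root(G, node):
    preds = list(G.predecessors(node))
    while preds:
        node = preds[0]  # in a tree there is at most one predecessor
        preds = list(G.predecessors(node))
    return node

# find_root(G, 'apples') -> 'total_fruits_and_vegetables'
```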
| Yes -- and that docstring is really VERY minimal... It doesn't even say what the input or output arguments are. There may be others in that category like ```G.successors``` but I haven't checked.
Pull requests would be helpful... | 2018-08-06T19:51:24 |
|
networkx/networkx | 3,123 | networkx__networkx-3123 | [
"3120"
] | 1a3c7bc134114da1e97b2f5228491c6569737afe | diff --git a/networkx/algorithms/__init__.py b/networkx/algorithms/__init__.py
--- a/networkx/algorithms/__init__.py
+++ b/networkx/algorithms/__init__.py
@@ -40,7 +40,6 @@
from networkx.algorithms.smallworld import *
from networkx.algorithms.smetric import *
from networkx.algorithms.structuralholes import *
-from networkx.algorithms.triads import *
from networkx.algorithms.sparsifiers import *
from networkx.algorithms.swap import *
from networkx.algorithms.traversal import *
| Double import
I noticed that in `networkx/algorithms/__init__.py` the statement `from networkx.algorithms.triads import *` occurs twice. Is there any reason for this or is this just a blunder?
| That's a blunder... probably happened when we alphabetized the list a little while ago. | 2018-08-17T15:40:50 |
|
networkx/networkx | 3,127 | networkx__networkx-3127 | [
"3042"
] | 8818e1dddfc43acf34395fd8c3222d1ea1831db5 | diff --git a/networkx/algorithms/coloring/__init__.py b/networkx/algorithms/coloring/__init__.py
--- a/networkx/algorithms/coloring/__init__.py
+++ b/networkx/algorithms/coloring/__init__.py
@@ -1,2 +1,3 @@
from networkx.algorithms.coloring.greedy_coloring import *
-__all__ = ['greedy_color']
+from networkx.algorithms.coloring.equitable_coloring import equitable_color
+__all__ = ['greedy_color', 'equitable_color']
diff --git a/networkx/algorithms/coloring/equitable_coloring.py b/networkx/algorithms/coloring/equitable_coloring.py
new file mode 100644
--- /dev/null
+++ b/networkx/algorithms/coloring/equitable_coloring.py
@@ -0,0 +1,474 @@
+# -*- coding: utf-8 -*-
+# Copyright (C) 2018 by
+# Utkarsh Upadhyay <[email protected]>
+# All rights reserved.
+# BSD license.
+"""
+Equitable coloring of graphs with bounded degree.
+"""
+
+import networkx as nx
+from collections import defaultdict
+
+__all__ = ['equitable_color']
+
+
+def is_coloring(G, coloring):
+ """Determine if the coloring is a valid coloring for the graph G."""
+ # Verify that the coloring is valid.
+ for (s, d) in G.edges:
+ if coloring[s] == coloring[d]:
+ return False
+ return True
+
+
+def is_equitable(G, coloring, num_colors=None):
+ """Determines if the coloring is valid and equitable for the graph G."""
+
+ if not is_coloring(G, coloring):
+ return False
+
+ # Verify whether it is equitable.
+ color_set_size = defaultdict(int)
+ for color in coloring.values():
+ color_set_size[color] += 1
+
+ if num_colors is not None:
+ for color in range(num_colors):
+ if color not in color_set_size:
+ # These colors do not have any vertices attached to them.
+ color_set_size[color] = 0
+
+ # If there are more than 2 distinct values, the coloring cannot be equitable
+ all_set_sizes = set(color_set_size.values())
+ if len(all_set_sizes) == 0 and num_colors is None: # Was an empty graph
+ return True
+ elif len(all_set_sizes) == 1:
+ return True
+ elif len(all_set_sizes) == 2:
+ a, b = list(all_set_sizes)
+ return abs(a - b) <= 1
+ else: # len(all_set_sizes) > 2:
+ return False
+
+
+def make_C_from_F(F):
+ C = defaultdict(lambda: [])
+ for node, color in F.items():
+ C[color].append(node)
+
+ return C
+
+
+def make_N_from_L_C(L, C):
+ nodes = L.keys()
+ colors = C.keys()
+ return {(node, color): sum(1 for v in L[node] if v in C[color])
+ for node in nodes for color in colors}
+
+
+def make_H_from_C_N(C, N):
+ return {(c1, c2): sum(1 for node in C[c1] if N[(node, c2)] == 0)
+ for c1 in C.keys() for c2 in C.keys()}
+
+
+def change_color(u, X, Y, N, H, F, C, L):
+ """Change the color of 'u' from X to Y and update N, H, F, C."""
+ assert F[u] == X and X != Y
+
+ # Change the class of 'u' from X to Y
+ F[u] = Y
+
+ for k in C.keys():
+ # 'u' witnesses an edge from k -> Y instead of from k -> X now.
+ if N[u, k] == 0:
+ H[(X, k)] -= 1
+ H[(Y, k)] += 1
+
+ for v in L[u]:
+ # 'v' has lost a neighbor in X and gained one in Y
+ N[(v, X)] -= 1
+ N[(v, Y)] += 1
+
+ if N[(v, X)] == 0:
+ # 'v' witnesses F[v] -> X
+ H[(F[v], X)] += 1
+
+ if N[(v, Y)] == 1:
+ # 'v' no longer witnesses F[v] -> Y
+ H[(F[v], Y)] -= 1
+
+ C[X].remove(u)
+ C[Y].append(u)
+
+
+def move_witnesses(src_color, dst_color, N, H, F, C, T_cal, L):
+ """Move witness along a path from src_color to dst_color."""
+ X = src_color
+ while X != dst_color:
+ Y = T_cal[X]
+ # Move _any_ witness from X to Y = T_cal[X]
+ w = [x for x in C[X] if N[(x, Y)] == 0][0]
+ change_color(w, X, Y, N=N, H=H, F=F, C=C, L=L)
+ X = Y
+
+
+def pad_graph(G, num_colors):
+ """Add a disconnected complete clique K_p such that the number of nodes in
+ the graph becomes a multiple of `num_colors`.
+
+ Assumes that the graph's nodes are labelled using integers.
+
+ Returns the number of nodes with each color.
+ """
+
+ n_ = len(G)
+ r = num_colors - 1
+
+ # Ensure that the number of nodes in G is a multiple of (r + 1)
+ s = n_ // (r + 1)
+ if n_ != s * (r + 1):
+ p = (r + 1) - n_ % (r + 1)
+ s += 1
+
+ # Complete graph K_p between (imaginary) nodes [n_, ... , n_ + p]
+ K = nx.relabel_nodes(nx.complete_graph(p),
+ {idx: idx + n_ for idx in range(p)})
+ G.add_edges_from(K.edges)
+
+ return s
+
+
+def procedure_P(V_minus, V_plus, N, H, F, C, L, excluded_colors=None):
+ """Procedure P as described in the paper."""
+
+ if excluded_colors is None:
+ excluded_colors = set()
+
+ A_cal = set()
+ T_cal = {}
+ R_cal = []
+
+ # BFS to determine A_cal, i.e. colors reachable from V-
+ reachable = [V_minus]
+ marked = set(reachable)
+ idx = 0
+
+ while idx < len(reachable):
+ pop = reachable[idx]
+ idx += 1
+
+ A_cal.add(pop)
+ R_cal.append(pop)
+
+ # TODO: Checking whether a color has been visited can be made faster by
+ # using a look-up table instead of testing for membership in a set by a
+ # logarithmic factor.
+ next_layer = []
+ for k in C.keys():
+ if H[(k, pop)] > 0 and \
+ k not in A_cal and \
+ k not in excluded_colors and \
+ k not in marked:
+ next_layer.append(k)
+
+ for dst in next_layer:
+ # Record that `dst` can reach `pop`
+ T_cal[dst] = pop
+
+ marked.update(next_layer)
+ reachable.extend(next_layer)
+
+ # Variables for the algorithm
+ b = (len(C) - len(A_cal))
+
+ if V_plus in A_cal:
+ # Easy case: V+ is in A_cal
+ # Move one node from V+ to V- using T_cal to find the parents.
+ move_witnesses(V_plus, V_minus, N=N, H=H, F=F, C=C, T_cal=T_cal, L=L)
+ else:
+ # If there is a solo edge, we can resolve the situation by
+ # moving witnesses from B to A, making G[A] equitable and then
+ # recursively balancing G[B - w] with a different V_minus and
+ # but the same V_plus.
+
+ A_0 = set()
+ A_cal_0 = set()
+ num_terminal_sets_found = 0
+ made_equitable = False
+
+ for W_1 in R_cal[::-1]:
+
+ for v in C[W_1]:
+ X = None
+
+ for U in C.keys():
+ if N[(v, U)] == 0 and U in A_cal and U != W_1:
+ X = U
+
+ # v does not witness an edge in H[A_cal]
+ if X is None:
+ continue
+
+ for U in C.keys():
+ # Note: Departing from the paper here.
+ if N[(v, U)] >= 1 and U not in A_cal:
+ X_prime = U
+ w = v
+
+ # Finding the solo neighbor of w in X_prime
+ y_candidates = [node for node in L[w]
+ if F[node] == X_prime and N[(node, W_1)] == 1]
+
+ if len(y_candidates) > 0:
+ y = y_candidates[0]
+ W = W_1
+
+ # Move w from W to X, now X has one extra node.
+ change_color(w, W, X, N=N, H=H, F=F, C=C, L=L)
+
+ # Move witness from X to V_minus, making the coloring
+ # equitable.
+ move_witnesses(src_color=X, dst_color=V_minus,
+ N=N, H=H, F=F, C=C, T_cal=T_cal, L=L)
+
+ # Move y from X_prime to W, making W the correct size.
+ change_color(y, X_prime, W, N=N, H=H, F=F, C=C, L=L)
+
+ # Then call the procedure on G[B - y]
+ procedure_P(V_minus=X_prime, V_plus=V_plus,
+ N=N, H=H, C=C, F=F, L=L,
+ excluded_colors=excluded_colors.union(A_cal))
+ made_equitable = True
+ break
+
+ if made_equitable:
+ break
+ else:
+ # No node in W_1 was found such that
+ # it had a solo-neighbor.
+ A_cal_0.add(W_1)
+ A_0.update(C[W_1])
+ num_terminal_sets_found += 1
+
+ if num_terminal_sets_found == b:
+ # Otherwise, construct the maximal independent set and find
+ # a pair of z_1, z_2 as in Case II.
+
+ # BFS to determine B_cal': the set of colors reachable from V+
+ B_cal_prime = set()
+ T_cal_prime = {}
+
+ reachable = [V_plus]
+ marked = set(reachable)
+ idx = 0
+ while idx < len(reachable):
+ pop = reachable[idx]
+ idx += 1
+
+ B_cal_prime.add(pop)
+
+ # No need to check for excluded_colors here because
+ # they only exclude colors from A_cal
+ next_layer = [k for k in C.keys()
+ if H[(pop, k)] > 0 and
+ k not in B_cal_prime and
+ k not in marked]
+
+ for dst in next_layer:
+ T_cal_prime[pop] = dst
+
+ marked.update(next_layer)
+ reachable.extend(next_layer)
+
+ # Construct the independent set of G[B']
+ I_set = set()
+ I_covered = set()
+ W_covering = {}
+
+ B_prime = [node for k in B_cal_prime for node in C[k]]
+
+ # Add the nodes in V_plus to I first.
+ for z in C[V_plus] + B_prime:
+ if z in I_covered or F[z] not in B_cal_prime:
+ continue
+
+ I_set.add(z)
+ I_covered.add(z)
+ I_covered.update([nbr for nbr in L[z]])
+
+ for w in L[z]:
+ if F[w] in A_cal_0 and N[(z, F[w])] == 1:
+ if w not in W_covering:
+ W_covering[w] = z
+ else:
+ # Found z1, z2 which have the same solo
+ # neighbor in some W
+ z_1 = W_covering[w]
+ # z_2 = z
+
+ Z = F[z_1]
+ W = F[w]
+
+ # shift nodes along W, V-
+ move_witnesses(W, V_minus,
+ N=N, H=H, F=F, C=C,
+ T_cal=T_cal, L=L)
+
+ # shift nodes along V+ to Z
+ move_witnesses(V_plus, Z,
+ N=N, H=H, F=F, C=C,
+ T_cal=T_cal_prime, L=L)
+
+ # change color of z_1 to W
+ change_color(z_1, Z, W,
+ N=N, H=H, F=F, C=C, L=L)
+
+ # change color of w to some color in B_cal
+ W_plus = [k for k in C.keys()
+ if N[(w, k)] == 0 and
+ k not in A_cal][0]
+ change_color(w, W, W_plus,
+ N=N, H=H, F=F, C=C, L=L)
+
+ # recurse with G[B \cup W*]
+ excluded_colors.update([
+ k for k in C.keys()
+ if k != W and k not in B_cal_prime
+ ])
+ procedure_P(V_minus=W, V_plus=W_plus,
+ N=N, H=H, C=C, F=F, L=L,
+ excluded_colors=excluded_colors)
+
+ made_equitable = True
+ break
+
+ if made_equitable:
+ break
+ else:
+ assert False, "Must find a w which is the solo neighbor " \
+ "of two vertices in B_cal_prime."
+
+ if made_equitable:
+ break
+
+
+def equitable_color(G, num_colors):
+ """Provides equitable (r + 1)-coloring for nodes of G in O(r * n^2) time
+ if deg(G) <= r. The algorithm is described in [1]_.
+
+ Attempts to color a graph using r colors, where no neighbors of a node
+ can have same color as the node itself and the number of nodes with each
+ color differ by at most 1.
+
+ Parameters
+ ----------
+ G : networkX graph
+ The nodes of this graph will be colored.
+
+ num_colors : number of colors to use
+ This number must be at least one more than the maximum degree of nodes
+ in the graph.
+
+ Returns
+ -------
+ A dictionary with keys representing nodes and values representing
+ corresponding coloring.
+
+ Examples
+ --------
+ >>> G = nx.cycle_graph(4)
+ >>> d = nx.coloring.equitable_color(G, num_colors=3)
+ >>> nx.algorithms.coloring.equitable_coloring.is_equitable(G, d)
+ True
+
+ Raises
+ ------
+ NetworkXAlgorithmError
+ If the maximum degree of the graph ``G`` is greater than
+ ``num_colors``.
+
+ References
+ ----------
+ .. [1] Kierstead, H. A., Kostochka, A. V., Mydlarz, M., & Szemerédi, E.
+ (2010). A fast algorithm for equitable coloring. Combinatorica, 30(2),
+ 217-224.
+ """
+
+ # Map nodes to integers for simplicity later.
+ nodes_to_int = {}
+ int_to_nodes = {}
+
+ for idx, node in enumerate(G.nodes):
+ nodes_to_int[node] = idx
+ int_to_nodes[idx] = node
+
+ G = nx.relabel_nodes(G, nodes_to_int, copy=True)
+
+ # Basic graph statistics and sanity check.
+ if len(G.nodes) > 0:
+ r_ = max([G.degree(node) for node in G.nodes])
+ else:
+ r_ = 0
+
+ if r_ >= num_colors:
+ raise nx.NetworkXAlgorithmError(
+ 'Graph has maximum degree {}, needs {} (> {}) colors for guaranteed coloring.'
+ .format(r_, r_ + 1, num_colors)
+ )
+
+ # Ensure that the number of nodes in G is a multiple of (r + 1)
+ pad_graph(G, num_colors)
+
+ # Starting the algorithm.
+ # L = {node: list(G.neighbors(node)) for node in G.nodes}
+ L_ = {node: [] for node in G.nodes}
+
+ # Arbitrary equitable allocation of colors to nodes.
+ F = {node: idx % num_colors for idx, node in enumerate(G.nodes)}
+
+ C = make_C_from_F(F)
+
+ # The neighborhood is empty initially.
+ N = make_N_from_L_C(L_, C)
+
+ # Currently all nodes witness all edges.
+ H = make_H_from_C_N(C, N)
+
+ # Start of algorithm.
+ edges_seen = set()
+
+ for u in sorted(G.nodes):
+ for v in sorted(G.neighbors(u)):
+
+ # Do not double count edges if (v, u) has already been seen.
+ if (v, u) in edges_seen:
+ continue
+
+ edges_seen.add((u, v))
+
+ L_[u].append(v)
+ L_[v].append(u)
+
+ N[(u, F[v])] += 1
+ N[(v, F[u])] += 1
+
+ if F[u] != F[v]:
+ # Were 'u' and 'v' witnesses for F[u] -> F[v] or F[v] -> F[u]?
+ if N[(u, F[v])] == 1:
+ H[F[u], F[v]] -= 1 # u cannot witness an edge between F[u], F[v]
+
+ if N[(v, F[u])] == 1:
+ H[F[v], F[u]] -= 1 # v cannot witness an edge between F[v], F[u]
+
+ if N[(u, F[u])] != 0:
+ # Find the first color where 'u' does not have any neighbors.
+ Y = [k for k in C.keys() if N[(u, k)] == 0][0]
+ X = F[u]
+ change_color(u, X, Y, N=N, H=H, F=F, C=C, L=L_)
+
+ # Procedure P
+ procedure_P(V_minus=X, V_plus=Y,
+ N=N, H=H, F=F, C=C, L=L_)
+
+ return {int_to_nodes[x]: F[x] for x in int_to_nodes}
| diff --git a/networkx/algorithms/coloring/tests/test_coloring.py b/networkx/algorithms/coloring/tests/test_coloring.py
--- a/networkx/algorithms/coloring/tests/test_coloring.py
+++ b/networkx/algorithms/coloring/tests/test_coloring.py
@@ -12,6 +12,11 @@
import networkx as nx
from nose.tools import *
+
+is_coloring = nx.algorithms.coloring.equitable_coloring.is_coloring
+is_equitable = nx.algorithms.coloring.equitable_coloring.is_equitable
+
+
ALL_STRATEGIES = [
'largest_first',
'random_sequential',
@@ -97,6 +102,202 @@ def test_seed_argument(self):
for u, v in graph.edges:
assert_not_equal(c1[u], c1[v])
+ def test_is_coloring(self):
+ G = nx.Graph()
+ G.add_edges_from([(0, 1), (1, 2)])
+ coloring = {0: 0, 1: 1, 2: 0}
+ assert is_coloring(G, coloring)
+
+ coloring[0] = 1
+ assert not is_coloring(G, coloring)
+ assert not is_equitable(G, coloring)
+
+ def test_is_equitable(self):
+ G = nx.Graph()
+ G.add_edges_from([(0, 1), (1, 2)])
+ coloring = {0: 0, 1: 1, 2: 0}
+ assert is_equitable(G, coloring)
+
+ G.add_edges_from([(2, 3), (2, 4), (2, 5)])
+ coloring[3] = 1
+ coloring[4] = 1
+ coloring[5] = 1
+ assert is_coloring(G, coloring)
+ assert not is_equitable(G, coloring)
+
+ def test_num_colors(self):
+ G = nx.Graph()
+ G.add_edges_from([(0, 1), (0, 2), (0, 3)])
+ assert_raises(nx.NetworkXAlgorithmError,
+ nx.coloring.equitable_color, G, 2)
+
+ def test_equitable_color(self):
+ G = nx.fast_gnp_random_graph(n=10, p=0.2, seed=42)
+ coloring = nx.coloring.equitable_color(G, max_degree(G) + 1)
+ assert is_equitable(G, coloring)
+
+ def test_equitable_color_empty(self):
+ G = nx.empty_graph()
+ coloring = nx.coloring.equitable_color(G, max_degree(G) + 1)
+ assert is_equitable(G, coloring)
+
+ def test_equitable_color_large(self):
+ G = nx.fast_gnp_random_graph(100, 0.1, seed=42)
+ coloring = nx.coloring.equitable_color(G, max_degree(G) + 1)
+ assert is_equitable(G, coloring, num_colors=max_degree(G) + 1)
+
+ def test_case_V_plus_not_in_A_cal(self):
+ # Hand crafted case to avoid the easy case.
+ L = {
+ 0: [2, 5],
+ 1: [3, 4],
+ 2: [0, 8],
+ 3: [1, 7],
+ 4: [1, 6],
+ 5: [0, 6],
+ 6: [4, 5],
+ 7: [3],
+ 8: [2],
+ }
+
+ F = {
+ # Color 0
+ 0: 0,
+ 1: 0,
+
+ # Color 1
+ 2: 1,
+ 3: 1,
+ 4: 1,
+ 5: 1,
+
+ # Color 2
+ 6: 2,
+ 7: 2,
+ 8: 2,
+ }
+
+ C = nx.algorithms.coloring.equitable_coloring.make_C_from_F(F)
+ N = nx.algorithms.coloring.equitable_coloring.make_N_from_L_C(L, C)
+ H = nx.algorithms.coloring.equitable_coloring.make_H_from_C_N(C, N)
+
+ nx.algorithms.coloring.equitable_coloring.procedure_P(
+ V_minus=0, V_plus=1, N=N, H=H, F=F, C=C, L=L
+ )
+ check_state(L=L, N=N, H=H, F=F, C=C)
+
+ def test_cast_no_solo(self):
+ L = {
+ 0: [8, 9],
+ 1: [10, 11],
+
+ 2: [8],
+ 3: [9],
+ 4: [10, 11],
+
+ 5: [8],
+ 6: [9],
+ 7: [10, 11],
+
+ 8: [0, 2, 5],
+ 9: [0, 3, 6],
+ 10: [1, 4, 7],
+ 11: [1, 4, 7],
+ }
+
+ F = {
+ 0: 0,
+ 1: 0,
+
+ 2: 2,
+ 3: 2,
+ 4: 2,
+
+ 5: 3,
+ 6: 3,
+ 7: 3,
+
+ 8: 1,
+ 9: 1,
+ 10: 1,
+ 11: 1,
+ }
+
+ C = nx.algorithms.coloring.equitable_coloring.make_C_from_F(F)
+ N = nx.algorithms.coloring.equitable_coloring.make_N_from_L_C(L, C)
+ H = nx.algorithms.coloring.equitable_coloring.make_H_from_C_N(C, N)
+
+ nx.algorithms.coloring.equitable_coloring.procedure_P(
+ V_minus=0, V_plus=1, N=N, H=H, F=F, C=C, L=L
+ )
+ check_state(L=L, N=N, H=H, F=F, C=C)
+
+ def test_hard_prob(self):
+ # Tests for two levels of recursion.
+ num_colors, s = 5, 5
+
+ G = nx.Graph()
+ G.add_edges_from(
+ [(0, 10), (0, 11), (0, 12), (0, 23), (10, 4), (10, 9),
+ (10, 20), (11, 4), (11, 8), (11, 16), (12, 9), (12, 22),
+ (12, 23), (23, 7), (1, 17), (1, 18), (1, 19), (1, 24),
+ (17, 5), (17, 13), (17, 22), (18, 5), (19, 5), (19, 6),
+ (19, 8), (24, 7), (24, 16), (2, 4), (2, 13), (2, 14),
+ (2, 15), (4, 6), (13, 5), (13, 21), (14, 6), (14, 15),
+ (15, 6), (15, 21), (3, 16), (3, 20), (3, 21), (3, 22),
+ (16, 8), (20, 8), (21, 9), (22, 7)]
+ )
+ F = {node: node // s for node in range(num_colors * s)}
+ F[s - 1] = num_colors - 1
+
+ params = make_params_from_graph(G=G, F=F)
+
+ nx.algorithms.coloring.equitable_coloring.procedure_P(
+ V_minus=0, V_plus=num_colors - 1, **params
+ )
+ check_state(**params)
+
+ def test_hardest_prob(self):
+ # Tests for two levels of recursion.
+ num_colors, s = 10, 4
+
+ G = nx.Graph()
+ G.add_edges_from(
+ [(0, 19), (0, 24), (0, 29), (0, 30), (0, 35), (19, 3), (19, 7),
+ (19, 9), (19, 15), (19, 21), (19, 24), (19, 30), (19, 38),
+ (24, 5), (24, 11), (24, 13), (24, 20), (24, 30), (24, 37),
+ (24, 38), (29, 6), (29, 10), (29, 13), (29, 15), (29, 16),
+ (29, 17), (29, 20), (29, 26), (30, 6), (30, 10), (30, 15),
+ (30, 22), (30, 23), (30, 39), (35, 6), (35, 9), (35, 14),
+ (35, 18), (35, 22), (35, 23), (35, 25), (35, 27), (1, 20),
+ (1, 26), (1, 31), (1, 34), (1, 38), (20, 4), (20, 8), (20, 14),
+ (20, 18), (20, 28), (20, 33), (26, 7), (26, 10), (26, 14),
+ (26, 18), (26, 21), (26, 32), (26, 39), (31, 5), (31, 8),
+ (31, 13), (31, 16), (31, 17), (31, 21), (31, 25), (31, 27),
+ (34, 7), (34, 8), (34, 13), (34, 18), (34, 22), (34, 23),
+ (34, 25), (34, 27), (38, 4), (38, 9), (38, 12), (38, 14),
+ (38, 21), (38, 27), (2, 3), (2, 18), (2, 21), (2, 28), (2, 32),
+ (2, 33), (2, 36), (2, 37), (2, 39), (3, 5), (3, 9), (3, 13),
+ (3, 22), (3, 23), (3, 25), (3, 27), (18, 6), (18, 11), (18, 15),
+ (18, 39), (21, 4), (21, 10), (21, 14), (21, 36), (28, 6),
+ (28, 10), (28, 14), (28, 16), (28, 17), (28, 25), (28, 27),
+ (32, 5), (32, 10), (32, 12), (32, 16), (32, 17), (32, 22),
+ (32, 23), (33, 7), (33, 10), (33, 12), (33, 16), (33, 17),
+ (33, 25), (33, 27), (36, 5), (36, 8), (36, 15), (36, 16),
+ (36, 17), (36, 25), (36, 27), (37, 5), (37, 11), (37, 15),
+ (37, 16), (37, 17), (37, 22), (37, 23), (39, 7), (39, 8),
+ (39, 15), (39, 22), (39, 23)]
+ )
+ F = {node: node // s for node in range(num_colors * s)}
+ F[s - 1] = num_colors - 1 # V- = 0, V+ = num_colors - 1
+
+ params = make_params_from_graph(G=G, F=F)
+
+ nx.algorithms.coloring.equitable_coloring.procedure_P(
+ V_minus=0, V_plus=num_colors - 1, **params
+ )
+ check_state(**params)
+
############################## Utility functions ##############################
def verify_coloring(graph, coloring):
@@ -477,3 +678,47 @@ def sli_hc():
'connected_sequential_dfs': [
(cs_shc, False, (3, 4))],
}
+
+
+#---------------------------------------------------------------------------
+# Helper functions to test
+# (graph function, interchange, valid # of colors)
+
+
+def check_state(L, N, H, F, C):
+ s = len(C[0])
+ num_colors = len(C.keys())
+
+ assert all(u in L[v] for u in L.keys() for v in L[u])
+ assert all(F[u] != F[v] for u in L.keys() for v in L[u])
+ assert all(len(L[u]) < num_colors for u in L.keys())
+ assert all(len(C[x]) == s for x in C)
+ assert all(H[(c1, c2)] >= 0 for c1 in C.keys() for c2 in C.keys())
+ assert all(N[(u, F[u])] == 0 for u in F.keys())
+
+
+def max_degree(G):
+ """Get the maximum degree of any node in G."""
+ return max([G.degree(node) for node in G.nodes]) if len(G.nodes) > 0 else 0
+
+
+def make_params_from_graph(G, F):
+ """Returns {N, L, H, C} from the given graph."""
+ num_nodes = len(G)
+ L = {u: [] for u in range(num_nodes)}
+ for (u, v) in G.edges:
+ L[u].append(v)
+ L[v].append(u)
+
+ C = nx.algorithms.coloring.equitable_coloring.make_C_from_F(F)
+ N = nx.algorithms.coloring.equitable_coloring.make_N_from_L_C(L, C)
+ H = nx.algorithms.coloring.equitable_coloring.make_H_from_C_N(C, N)
+
+ return {
+ 'N': N,
+ 'F': F,
+ 'C': C,
+ 'H': H,
+ 'L': L,
+ }
+
| ENH: Addition of the Equitable coloring algorithm.
I recently implemented an [equitable coloring](https://en.wikipedia.org/wiki/Equitable_coloring) algorithm for degree-bounded graphs from a paper by [Kierstead et al.](https://link.springer.com/article/10.1007%2Fs00493-010-2483-5) here: [musically-ut/equitable-coloring](https://github.com/musically-ut/equitable-coloring).
From Wikipedia:
> In graph theory [..] an equitable coloring is an assignment of colors to the vertices of an undirected graph, in such a way that
>
> * No two adjacent vertices have the same color, and
> * The numbers of vertices in any two color classes differ by at most one.
----
I wanted to get an idea of whether there is any interest in including `equitable_color` from my package as a potential coloring strategy under `networkx.algorithms.coloring`, next to the `greedy_color` strategies already there.
If so, I could work on a PR which will port the code and the test cases to `networkx`.
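For reference, a minimal usage sketch of how the ported function could look (mirroring the docstring example in the patch above; the exact coloring shown is illustrative):
```
import networkx as nx

G = nx.cycle_graph(4)
coloring = nx.coloring.equitable_color(G, num_colors=3)
# Adjacent nodes get different colors and color-class sizes differ by at most 1,
# e.g. {0: 0, 1: 1, 2: 0, 3: 2}
```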
| Yes, I think that would be a good place to include your equitable-coloring functionality.
Thanks very much! | 2018-08-19T15:21:51 |
networkx/networkx | 3,131 | networkx__networkx-3131 | [
"3130",
"3130"
] | 8818e1dddfc43acf34395fd8c3222d1ea1831db5 | diff --git a/networkx/algorithms/shortest_paths/weighted.py b/networkx/algorithms/shortest_paths/weighted.py
--- a/networkx/algorithms/shortest_paths/weighted.py
+++ b/networkx/algorithms/shortest_paths/weighted.py
@@ -1101,7 +1101,7 @@ def all_pairs_dijkstra_path(G, cutoff=None, weight='weight'):
def bellman_ford_predecessor_and_distance(G, source, target=None,
- cutoff=None, weight='weight'):
+ weight='weight'):
"""Compute shortest path lengths and predecessors on shortest paths
in weighted graphs.
@@ -1136,8 +1136,6 @@ def bellman_ford_predecessor_and_distance(G, source, target=None,
pred, dist : dictionaries
Returns two dictionaries keyed by node to predecessor in the
path and to the distance from the source respectively.
- Warning: If target is specified, the dicts are incomplete as they
- only contain information for the nodes along a path to target.
Raises
------
@@ -1162,9 +1160,9 @@ def bellman_ford_predecessor_and_distance(G, source, target=None,
>>> pred, dist = nx.bellman_ford_predecessor_and_distance(G, 0, 1)
>>> sorted(pred.items())
- [(0, []), (1, [0])]
+ [(0, []), (1, [0]), (2, [1]), (3, [2]), (4, [3])]
>>> sorted(dist.items())
- [(0, 0), (1, 1)]
+ [(0, 0), (1, 1), (2, 2), (3, 3), (4, 4)]
>>> from nose.tools import assert_raises
>>> G = nx.cycle_graph(5, create_using = nx.DiGraph())
@@ -1202,26 +1200,30 @@ def bellman_ford_predecessor_and_distance(G, source, target=None,
weight = _weight_function(G, weight)
dist = _bellman_ford(G, [source], weight, pred=pred, dist=dist,
- cutoff=cutoff, target=target)
+ target=target)
return (pred, dist)
def _bellman_ford(G, source, weight, pred=None, paths=None, dist=None,
- cutoff=None, target=None):
- """Relaxation loop for Bellman–Ford algorithm
+ target=None):
+ """Relaxation loop for Bellman–Ford algorithm.
+
+ This is an implementation of the SPFA variant.
+ See https://en.wikipedia.org/wiki/Shortest_Path_Faster_Algorithm
Parameters
----------
G : NetworkX graph
source: list
- List of source nodes
+ List of source nodes. The shortest path from any of the source
+ nodes will be found if multiple sources are provided.
weight : function
- The weight of an edge is the value returned by the function. The
- function must accept exactly three positional arguments: the two
- endpoints of an edge and the dictionary of edge attributes for
- that edge. The function must return a number.
+ The weight of an edge is the value returned by the function. The
+ function must accept exactly three positional arguments: the two
+ endpoints of an edge and the dictionary of edge attributes for
+ that edge. The function must return a number.
pred: dict of lists, optional (default=None)
dict to store a list of predecessors keyed by that node
@@ -1236,9 +1238,6 @@ def _bellman_ford(G, source, weight, pred=None, paths=None, dist=None,
If None, returned dist dict contents default to 0 for every node in the
source list
- cutoff: integer or float, optional
- Depth to stop the search. Only paths of length <= cutoff are returned
-
target: node label, optional
Ending node for path. Path lengths to other destinations may (and
probably will) be incorrect.
@@ -1286,14 +1285,6 @@ def _bellman_ford(G, source, weight, pred=None, paths=None, dist=None,
for v, e in G_succ[u].items():
dist_v = dist_u + weight(v, u, e)
- if cutoff is not None:
- if dist_v > cutoff:
- continue
-
- if target is not None:
- if dist_v > dist.get(target, inf):
- continue
-
if dist_v < dist.get(v, inf):
if v not in in_q:
q.append(v)
@@ -1434,7 +1425,7 @@ def bellman_ford_path_length(G, source, target, weight='weight'):
"node %s not reachable from %s" % (source, target))
-def single_source_bellman_ford_path(G, source, cutoff=None, weight='weight'):
+def single_source_bellman_ford_path(G, source, weight='weight'):
"""Compute shortest path between source and all other reachable
nodes for a weighted graph.
@@ -1448,9 +1439,6 @@ def single_source_bellman_ford_path(G, source, cutoff=None, weight='weight'):
weight: string, optional (default='weight')
Edge data key corresponding to the edge weight
- cutoff : integer or float, optional
- Depth to stop the search. Only paths of length <= cutoff are returned.
-
Returns
-------
paths : dictionary
@@ -1479,12 +1467,11 @@ def single_source_bellman_ford_path(G, source, cutoff=None, weight='weight'):
"""
(length, path) = single_source_bellman_ford(
- G, source, cutoff=cutoff, weight=weight)
+ G, source, weight=weight)
return path
-def single_source_bellman_ford_path_length(G, source,
- cutoff=None, weight='weight'):
+def single_source_bellman_ford_path_length(G, source, weight='weight'):
"""Compute the shortest path length between source and all other
reachable nodes for a weighted graph.
@@ -1498,9 +1485,6 @@ def single_source_bellman_ford_path_length(G, source,
weight: string, optional (default='weight')
Edge data key corresponding to the edge weight.
- cutoff : integer or float, optional
- Depth to stop the search. Only paths of length <= cutoff are returned.
-
Returns
-------
length : iterator
@@ -1536,11 +1520,10 @@ def single_source_bellman_ford_path_length(G, source,
"""
weight = _weight_function(G, weight)
- return _bellman_ford(G, [source], weight, cutoff=cutoff)
+ return _bellman_ford(G, [source], weight)
-def single_source_bellman_ford(G, source,
- target=None, cutoff=None, weight='weight'):
+def single_source_bellman_ford(G, source, target=None, weight='weight'):
"""Compute shortest paths and lengths in a weighted graph G.
Uses Bellman-Ford algorithm for shortest paths.
@@ -1555,9 +1538,6 @@ def single_source_bellman_ford(G, source,
target : node label, optional
Ending node for path
- cutoff : integer or float, optional
- Depth to stop the search. Only paths of length <= cutoff are returned.
-
Returns
-------
distance, path : pair of dictionaries, or numeric and list
@@ -1611,8 +1591,7 @@ def single_source_bellman_ford(G, source,
weight = _weight_function(G, weight)
paths = {source: [source]} # dictionary of paths
- dist = _bellman_ford(G, [source], weight, paths=paths, cutoff=cutoff,
- target=target)
+ dist = _bellman_ford(G, [source], weight, paths=paths, target=target)
if target is None:
return (dist, paths)
try:
@@ -1622,7 +1601,7 @@ def single_source_bellman_ford(G, source,
raise nx.NetworkXNoPath(msg)
-def all_pairs_bellman_ford_path_length(G, cutoff=None, weight='weight'):
+def all_pairs_bellman_ford_path_length(G, weight='weight'):
""" Compute shortest path lengths between all nodes in a weighted graph.
Parameters
@@ -1632,9 +1611,6 @@ def all_pairs_bellman_ford_path_length(G, cutoff=None, weight='weight'):
weight: string, optional (default='weight')
Edge data key corresponding to the edge weight
- cutoff : integer or float, optional
- Depth to stop the search. Only paths of length <= cutoff are returned.
-
Returns
-------
distance : iterator
@@ -1666,10 +1642,10 @@ def all_pairs_bellman_ford_path_length(G, cutoff=None, weight='weight'):
"""
length = single_source_bellman_ford_path_length
for n in G:
- yield (n, dict(length(G, n, cutoff=cutoff, weight=weight)))
+ yield (n, dict(length(G, n, weight=weight)))
-def all_pairs_bellman_ford_path(G, cutoff=None, weight='weight'):
+def all_pairs_bellman_ford_path(G, weight='weight'):
""" Compute shortest paths between all nodes in a weighted graph.
Parameters
@@ -1679,9 +1655,6 @@ def all_pairs_bellman_ford_path(G, cutoff=None, weight='weight'):
weight: string, optional (default='weight')
Edge data key corresponding to the edge weight
- cutoff : integer or float, optional
- Depth to stop the search. Only paths of length <= cutoff are returned.
-
Returns
-------
distance : dictionary
@@ -1707,7 +1680,7 @@ def all_pairs_bellman_ford_path(G, cutoff=None, weight='weight'):
path = single_source_bellman_ford_path
# TODO This can be trivially parallelized.
for n in G:
- yield (n, path(G, n, cutoff=cutoff, weight=weight))
+ yield (n, path(G, n, weight=weight))
def goldberg_radzik(G, source, weight='weight'):
| diff --git a/networkx/algorithms/shortest_paths/tests/test_weighted.py b/networkx/algorithms/shortest_paths/tests/test_weighted.py
--- a/networkx/algorithms/shortest_paths/tests/test_weighted.py
+++ b/networkx/algorithms/shortest_paths/tests/test_weighted.py
@@ -558,6 +558,17 @@ def test_4_cycle(self):
assert_equal(pred[3], 0)
assert_equal(dist, {0: 0, 1: 1, 2: 2, 3: 1})
+ def test_negative_weight(self):
+ G = nx.DiGraph()
+ G.add_nodes_from('abcd')
+ G.add_edge('a','d', weight = 0)
+ G.add_edge('a','b', weight = 1)
+ G.add_edge('b','c', weight = -3)
+ G.add_edge('c','d', weight = 1)
+
+ assert_equal(nx.bellman_ford_path(G, 'a', 'd'), ['a', 'b', 'c', 'd'])
+ assert_equal(nx.bellman_ford_path_length(G, 'a', 'd'), -1)
+
class TestJohnsonAlgorithm(WeightedTestBase):
| bellman_ford_path seems to work wrong
On this example
```python
sat=0.5
sf = 1
mon = 2
G=nx.MultiDiGraph()
G.add_nodes_from(['start','bread','money:3','eat bread0','finish','buy bread','bread:2','eat bread1'])
G.add_edge('start','bread', weight = 0)
G.add_edge('start','money:3', weight = 0)
G.add_edge('bread','eat bread0', weight = 1/sf-1/sat)
G.add_edge('eat bread0', 'finish', weight = 0)
G.add_edge('money:3','buy bread', weight = 2/mon-1/sf)
G.add_edge('buy bread', 'bread:2', weight = 0)
G.add_edge('bread:2','eat bread1', weight = 1/2-1/sat)
G.add_edge('eat bread1','finish', weight = 0)
G.add_edge('eat bread1', 'bread', weight = 0)
path = nx.bellman_ford_path(G,'start','finish', weight='weight')
w = nx.bellman_ford_path_length(G,'start','finish', weight='weight')
print(path, w)
```
The output is
`['start', 'bread', 'eat bread0', 'finish'] -1.0`
But the output of `print(nx.johnson(G, weight="weight")["start"]["finish"])` is
`['start', 'money:3', 'buy bread', 'bread:2', 'eat bread1', 'bread', 'eat bread0', 'finish']`
And if one draws the graph, it's easy to see that the shortest path weight is -2.5.
I am using the last stable version: 2.1.
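Distilled to a minimal case (the same shape as the regression test in the patch above), the root cause is that pruning paths whose running distance already exceeds the best distance known for the target is unsound once negative edges exist: the optimal path must first go "uphill" before the negative edge pays off. With the fix applied, both asserts below hold:
```python
import networkx as nx

G = nx.DiGraph()
G.add_weighted_edges_from([('a', 'd', 0), ('a', 'b', 1),
                           ('b', 'c', -3), ('c', 'd', 1)])
# The direct edge a->d costs 0, but a->b->c->d costs 1 - 3 + 1 = -1.
assert nx.bellman_ford_path(G, 'a', 'd') == ['a', 'b', 'c', 'd']
assert nx.bellman_ford_path_length(G, 'a', 'd') == -1
```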
| 2018-08-22T18:12:03 |
|
networkx/networkx | 3,179 | networkx__networkx-3179 | [
"3174"
] | d5c12bd269495e09a4faf4792e1d1ffb357bb7f0 | diff --git a/networkx/drawing/nx_pylab.py b/networkx/drawing/nx_pylab.py
--- a/networkx/drawing/nx_pylab.py
+++ b/networkx/drawing/nx_pylab.py
@@ -26,6 +26,7 @@
from networkx.drawing.layout import shell_layout, \
circular_layout, kamada_kawai_layout, spectral_layout, \
spring_layout, random_layout
+from numbers import Number
__all__ = ['draw',
'draw_networkx',
@@ -608,7 +609,7 @@ def draw_networkx_edges(G, pos,
# r7184 and r7189 (June 6 2009). We should then not set the alpha
# value globally, since the user can instead provide per-edge alphas
# now. Only set it globally if provided as a scalar.
- if cb.is_numlike(alpha):
+ if isinstance(alpha, Number):
edge_collection.set_alpha(alpha)
if edge_colors is None:
@@ -1094,7 +1095,6 @@ def apply_alpha(colors, alpha, elem_list, cmap=None, vmin=None, vmax=None):
Array containing RGBA format values for each of the node colours.
"""
- import numbers
from itertools import islice, cycle
try:
@@ -1106,7 +1106,7 @@ def apply_alpha(colors, alpha, elem_list, cmap=None, vmin=None, vmax=None):
# If we have been provided with a list of numbers as long as elem_list,
# apply the color mapping.
- if len(colors) == len(elem_list) and isinstance(colors[0], numbers.Number):
+ if len(colors) == len(elem_list) and isinstance(colors[0], Number):
mapper = cm.ScalarMappable(cmap=cmap)
mapper.set_clim(vmin, vmax)
rgba_colors = mapper.to_rgba(colors)
| compatibility with matplotlib-3.0.0 ?
running this code:
````
# Networks graph Example : https://github.com/ipython/ipywidgets/blob/master/examples/Exploring%20Graphs.ipynb
%matplotlib inline
from ipywidgets import interact
import matplotlib.pyplot as plt
import networkx as nx
# wrap a few graph generation functions so they have the same signature
def random_lobster(n, m, k, p):
return nx.random_lobster(n, p, p / m)
def powerlaw_cluster(n, m, k, p):
return nx.powerlaw_cluster_graph(n, m, p)
def erdos_renyi(n, m, k, p):
return nx.erdos_renyi_graph(n, p)
def newman_watts_strogatz(n, m, k, p):
return nx.newman_watts_strogatz_graph(n, k, p)
@interact(n=(2,30), m=(1,10), k=(1,10), p=(0.0, 1.0, 0.001),
generator={'lobster': random_lobster,
'power law': powerlaw_cluster,
'Newman-Watts-Strogatz': newman_watts_strogatz,
u'Erdős-Rényi': erdos_renyi,
})
def plot_random_graph(n, m, k, p, generator):
g = generator(n, m, k, p)
nx.draw(g)
plt.title(generator.__name__)
plt.show()
````
I get this warning:
````
...python-3.7.1rc1.amd64\lib\site-packages\networkx\drawing\nx_pylab.py:611: MatplotlibDeprecationWarning: isinstance(..., numbers.Number)
if cb.is_numlike(alpha):
````
coming from this https://github.com/matplotlib/matplotlib/commit/1f923d40d67d95a0d0aa3cdaea329ce9c4296d90
so you may have to replace:
````
if cb.is_numlike(alpha):
````
with something like:
````
if cb.isinstance(alpha, Number)
````
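For reference, the form ultimately applied in the patch above imports `Number` once and drops the `cb.` prefix:
````
from numbers import Number

if isinstance(alpha, Number):
    edge_collection.set_alpha(alpha)
````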
| It looks like their suggested fix is to use ```isinstance(alpha, numbers.Number)```.
We don't need ```cb.``` in front of ```isinstance```. We do need the ```numbers``` package.
Thanks for this... I'll put the label for "Needs PR" | 2018-10-03T08:25:45 |
|
networkx/networkx | 3,253 | networkx__networkx-3253 | [
"3233"
] | 8f4845e94709dd62a4ebf3775fe02ca777ec49f2 | diff --git a/networkx/drawing/nx_pylab.py b/networkx/drawing/nx_pylab.py
--- a/networkx/drawing/nx_pylab.py
+++ b/networkx/drawing/nx_pylab.py
@@ -179,7 +179,7 @@ def draw_networkx(G, pos=None, arrows=True, with_labels=True, **kwds):
Size of nodes. If an array is specified it must be the
same length as nodelist.
- node_color : color string, or array of floats, (default='r')
+ node_color : color string, or array of floats, (default='#1f78b4')
Node color. Can be a single color format string,
or a sequence of colors with the same length as nodelist.
If numeric values are specified they will be mapped to
@@ -284,7 +284,7 @@ def draw_networkx(G, pos=None, arrows=True, with_labels=True, **kwds):
def draw_networkx_nodes(G, pos,
nodelist=None,
node_size=300,
- node_color='r',
+ node_color='#1f78b4',
node_shape='o',
alpha=1.0,
cmap=None,
@@ -319,7 +319,7 @@ def draw_networkx_nodes(G, pos,
same length as nodelist.
node_color : color string, or array of floats
- Node color. Can be a single color format string (default='r'),
+ Node color. Can be a single color format string (default='#1f78b4'),
or a sequence of colors with the same length as nodelist.
If numeric values are specified they will be mapped to
colors using the cmap and vmin,vmax parameters. See
| Default color for draw_networkx() is not colorblind-safe
The current default draws black labels on a red background, which can't be read by some red-green colorblind individuals (including myself). The safest option would be to go with a blue color. Any objections?
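The patch above switches the default to `'#1f78b4'`, a colorblind-safe blue. The old look remains available by passing a color explicitly (a quick sketch):
```python
import matplotlib.pyplot as plt
import networkx as nx

G = nx.petersen_graph()
nx.draw(G)                  # new default node color: '#1f78b4'
plt.figure()
nx.draw(G, node_color='r')  # the previous red, now opt-in
plt.show()
```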
| That seems good to me.
Here's a plug for a project related to this stuff that could use some energy: there is a package called GraVE which integrates ideas and code from Matplotlib and NetworkX to ease and enable better drawing. https://github.com/networkx/grave
| 2018-11-25T15:33:56 |
|
networkx/networkx | 3,255 | networkx__networkx-3255 | [
"3254"
] | 8f4845e94709dd62a4ebf3775fe02ca777ec49f2 | diff --git a/networkx/drawing/nx_pylab.py b/networkx/drawing/nx_pylab.py
--- a/networkx/drawing/nx_pylab.py
+++ b/networkx/drawing/nx_pylab.py
@@ -412,6 +412,13 @@ def draw_networkx_nodes(G, pos,
linewidths=linewidths,
edgecolors=edgecolors,
label=label)
+ plt.tick_params(
+ axis='both',
+ which='both',
+ bottom=False,
+ left=False,
+ labelbottom=False,
+ labelleft=False)
node_collection.set_zorder(2)
return node_collection
@@ -702,6 +709,14 @@ def to_marker_edge(marker_size, marker):
ax.update_datalim(corners)
ax.autoscale_view()
+ plt.tick_params(
+ axis='both',
+ which='both',
+ bottom=False,
+ left=False,
+ labelbottom=False,
+ labelleft=False)
+
return arrow_collection
@@ -808,6 +823,14 @@ def draw_networkx_labels(G, pos,
clip_on=True,
)
text_items[n] = t
+
+ plt.tick_params(
+ axis='both',
+ which='both',
+ bottom=False,
+ left=False,
+ labelbottom=False,
+ labelleft=False)
return text_items
@@ -954,6 +977,14 @@ def draw_networkx_edge_labels(G, pos,
)
text_items[(n1, n2)] = t
+ plt.tick_params(
+ axis='both',
+ which='both',
+ bottom=False,
+ left=False,
+ labelbottom=False,
+ labelleft=False)
+
return text_items
| Remove axis ticks and tick labels from default network visualization
The axis ticks are distracting and in most cases unnecessary.
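The patch above hides the ticks with `plt.tick_params` inside each drawing function; the same call doubles as a manual workaround on older releases (a sketch):
```python
import matplotlib.pyplot as plt
import networkx as nx

nx.draw_networkx(nx.path_graph(4))
plt.tick_params(axis='both', which='both', bottom=False, left=False,
                labelbottom=False, labelleft=False)
plt.show()
```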
| 2018-11-25T15:48:58 |
||
networkx/networkx | 3,256 | networkx__networkx-3256 | [
"3252"
] | 8f4845e94709dd62a4ebf3775fe02ca777ec49f2 | diff --git a/networkx/classes/digraph.py b/networkx/classes/digraph.py
--- a/networkx/classes/digraph.py
+++ b/networkx/classes/digraph.py
@@ -1198,4 +1198,4 @@ def reverse(self, copy=True):
H.add_edges_from((v, u, deepcopy(d)) for u, v, d
in self.edges(data=True))
return H
- return nx.graphviews.ReverseView(self)
+ return nx.graphviews.reverse_view(self)
| Deprecationwarning in digraph.py 'ReverseView is deprecated. Use reverse_view instead'
```
/usr/local/lib/python3.7/site-packages/networkx/classes/digraph.py:1186: DeprecationWarning: ReverseView is deprecated. Use reverse_view instead
return nx.graphviews.ReverseView(self)
```
This is from networkx 2.2. I checked the current source and the issue is still in master, now on line [1201](https://github.com/networkx/networkx/blob/master/networkx/classes/digraph.py#L1201).
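A minimal way to trigger the warning before the one-line fix (a sketch):
```python
import networkx as nx

D = nx.DiGraph([(1, 2)])
R = D.reverse(copy=False)  # emitted the DeprecationWarning via ReverseView
assert list(R.edges) == [(2, 1)]
```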
| 2018-11-26T09:01:39 |
||
networkx/networkx | 3,312 | networkx__networkx-3312 | [
"3216"
] | 4443b017d796a3fb7c3f7d37868e1fe8b1d530e2 | diff --git a/networkx/algorithms/isomorphism/__init__.py b/networkx/algorithms/isomorphism/__init__.py
--- a/networkx/algorithms/isomorphism/__init__.py
+++ b/networkx/algorithms/isomorphism/__init__.py
@@ -2,3 +2,4 @@
from networkx.algorithms.isomorphism.vf2userfunc import *
from networkx.algorithms.isomorphism.matchhelpers import *
from networkx.algorithms.isomorphism.temporalisomorphvf2 import *
+from networkx.algorithms.isomorphism.ismags import *
diff --git a/networkx/algorithms/isomorphism/ismags.py b/networkx/algorithms/isomorphism/ismags.py
new file mode 100644
--- /dev/null
+++ b/networkx/algorithms/isomorphism/ismags.py
@@ -0,0 +1,1096 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+
+"""
+****************
+ISMAGS Algorithm
+****************
+
+Provides a Python implementation of the ISMAGS algorithm. [1]_
+
+It is capable of finding (subgraph) isomorphisms between two graphs, taking the
+symmetry of the subgraph into account. In most cases the VF2 algorithm is
+faster (at least on small graphs) than this implementation, but in some cases
+there is an exponential number of isomorphisms that are symmetrically
+equivalent. In that case, the ISMAGS algorithm will provide only one solution
+per symmetry group.
+
+>>> import networkx as nx
+>>> petersen = nx.petersen_graph()
+>>> ismags = nx.isomorphism.ISMAGS(petersen, petersen)
+>>> isomorphisms = list(ismags.isomorphisms_iter(symmetry=False))
+>>> len(isomorphisms)
+120
+>>> isomorphisms = list(ismags.isomorphisms_iter(symmetry=True))
+>>> answer = [{0: 0, 1: 1, 2: 2, 3: 3, 4: 4, 5: 5, 6: 6, 7:7, 8: 8, 9: 9}]
+>>> answer == isomorphisms
+True
+
+In addition, this implementation also provides an interface to find the
+largest common induced subgraph [2]_ between any two graphs, again taking
+symmetry into account. Given `graph` and `subgraph` the algorithm will remove
+nodes from the `subgraph` until `subgraph` is isomorphic to a subgraph of
+`graph`. Since only the symmetry of `subgraph` is taken into account it is
+worth thinking about how you provide your graphs:
+
+>>> graph1 = nx.path_graph(4)
+>>> graph2 = nx.star_graph(3)
+>>> ismags = nx.isomorphism.ISMAGS(graph1, graph2)
+>>> ismags.is_isomorphic()
+False
+>>> largest_common_subgraph = list(ismags.largest_common_subgraph())
+>>> answer = [
+... {1: 0, 0: 1, 2: 2},
+... {2: 0, 1: 1, 3: 2}
+... ]
+>>> answer == largest_common_subgraph
+True
+>>> ismags2 = nx.isomorphism.ISMAGS(graph2, graph1)
+>>> largest_common_subgraph = list(ismags2.largest_common_subgraph())
+>>> answer = [
+... {1: 0, 0: 1, 2: 2},
+... {1: 0, 0: 1, 3: 2},
+... {2: 0, 0: 1, 1: 2},
+... {2: 0, 0: 1, 3: 2},
+... {3: 0, 0: 1, 1: 2},
+... {3: 0, 0: 1, 2: 2}
+... ]
+>>> answer == largest_common_subgraph
+True
+
+However, when not taking symmetry into account, it doesn't matter:
+
+>>> largest_common_subgraph = list(ismags.largest_common_subgraph(symmetry=False))
+>>> answer = [
+... {1: 0, 0: 1, 2: 2},
+... {1: 0, 2: 1, 0: 2},
+... {2: 0, 1: 1, 3: 2},
+... {2: 0, 3: 1, 1: 2},
+... {1: 0, 0: 1, 2: 3},
+... {1: 0, 2: 1, 0: 3},
+... {2: 0, 1: 1, 3: 3},
+... {2: 0, 3: 1, 1: 3},
+... {1: 0, 0: 2, 2: 3},
+... {1: 0, 2: 2, 0: 3},
+... {2: 0, 1: 2, 3: 3},
+... {2: 0, 3: 2, 1: 3}
+... ]
+>>> answer == largest_common_subgraph
+True
+>>> largest_common_subgraph = list(ismags2.largest_common_subgraph(symmetry=False))
+>>> answer = [
+... {1: 0, 0: 1, 2: 2},
+... {1: 0, 0: 1, 3: 2},
+... {2: 0, 0: 1, 1: 2},
+... {2: 0, 0: 1, 3: 2},
+... {3: 0, 0: 1, 1: 2},
+... {3: 0, 0: 1, 2: 2},
+... {1: 1, 0: 2, 2: 3},
+... {1: 1, 0: 2, 3: 3},
+... {2: 1, 0: 2, 1: 3},
+... {2: 1, 0: 2, 3: 3},
+... {3: 1, 0: 2, 1: 3},
+... {3: 1, 0: 2, 2: 3}
+... ]
+>>> answer == largest_common_subgraph
+True
+
+Notes
+-----
+ - The current implementation works for undirected graphs only. The algorithm
+ in general should work for directed graphs as well though.
+ - Node keys for both provided graphs need to be fully orderable as well as
+ hashable.
+ - Node and edge equality is assumed to be transitive: if A is equal to B, and
+ B is equal to C, then A is equal to C.
+
+References
+----------
+ .. [1] M. Houbraken, S. Demeyer, T. Michoel, P. Audenaert, D. Colle,
+ M. Pickavet, "The Index-Based Subgraph Matching Algorithm with General
+ Symmetries (ISMAGS): Exploiting Symmetry for Faster Subgraph
+ Enumeration", PLoS One 9(5): e97896, 2014.
+ https://doi.org/10.1371/journal.pone.0097896
+ .. [2] https://en.wikipedia.org/wiki/Maximum_common_induced_subgraph
+"""
+
+__author__ = 'P C Kroon ([email protected])'
+__all__ = ['ISMAGS']
+
+from collections import defaultdict, Counter
+from functools import reduce, wraps
+import itertools
+
+
+def are_all_equal(iterable):
+ """
+ Returns ``True`` if and only if all elements in `iterable` are equal; and
+ ``False`` otherwise.
+
+ Parameters
+ ----------
+ iterable: collections.abc.Iterable
+ The container whose elements will be checked.
+
+ Returns
+ -------
+ bool
+ ``True`` iff all elements in `iterable` compare equal, ``False``
+ otherwise.
+ """
+ try:
+ shape = iterable.shape
+ except AttributeError:
+ pass
+ else:
+ if len(shape) > 1:
+ message = 'The function does not work on multidimensional arrays.'
+ raise NotImplementedError(message) from None
+
+ iterator = iter(iterable)
+ first = next(iterator, None)
+ return all(item == first for item in iterator)
+
+
+def make_partitions(items, test):
+ """
+ Partitions items into sets based on the outcome of ``test(item1, item2)``.
+ Pairs of items for which `test` returns `True` end up in the same set.
+
+ Parameters
+ ----------
+ items : collections.abc.Iterable[collections.abc.Hashable]
+ Items to partition
+ test : collections.abc.Callable[collections.abc.Hashable, collections.abc.Hashable]
+ A function that will be called with 2 arguments, taken from items.
+ Should return `True` if those 2 items need to end up in the same
+ partition, and `False` otherwise.
+
+ Returns
+ -------
+ list[set]
+ A list of sets, with each set containing part of the items in `items`,
+ such that ``all(test(*pair) for pair in itertools.combinations(set, 2))
+ == True``
+
+ Notes
+ -----
+ The function `test` is assumed to be transitive: if ``test(a, b)`` and
+ ``test(b, c)`` return ``True``, then ``test(a, c)`` must also be ``True``.
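+
+ Examples
+ --------
+ Partitioning integers by parity:
+
+ >>> parts = make_partitions(range(4), lambda x, y: x % 2 == y % 2)
+ >>> sorted(map(sorted, parts))
+ [[0, 2], [1, 3]]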
+ """
+ partitions = []
+ for item in items:
+ for partition in partitions:
+ p_item = next(iter(partition))
+ if test(item, p_item):
+ partition.add(item)
+ break
+ else: # No break
+ partitions.append(set((item,)))
+ return partitions
+
+
+def partition_to_color(partitions):
+ """
+ Creates a dictionary with for every item in partition for every partition
+ in partitions the index of partition in partitions.
+
+ Parameters
+ ----------
+ partitions: collections.abc.Sequence[collections.abc.Iterable]
+ As returned by :func:`make_partitions`.
+
+ Returns
+ -------
+ dict
+ """
+ colors = dict()
+ for color, keys in enumerate(partitions):
+ for key in keys:
+ colors[key] = color
+ return colors
+
+
+def intersect(collection_of_sets):
+ """
+ Given an collection of sets, returns the intersection of those sets.
+
+ Parameters
+ ----------
+ collection_of_sets: collections.abc.Collection[set]
+ A collection of sets.
+
+ Returns
+ -------
+ set
+ An intersection of all sets in `collection_of_sets`. Will have the same
+ type as the item initially taken from `collection_of_sets`.
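+
+ Examples
+ --------
+ >>> sorted(intersect([{1, 2, 3}, {2, 3, 4}]))
+ [2, 3]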
+ """
+ collection_of_sets = list(collection_of_sets)
+ first = collection_of_sets.pop()
+ out = reduce(set.intersection, collection_of_sets, set(first))
+ return type(first)(out)
+
+
+class ISMAGS:
+ """
+ Implements the ISMAGS subgraph matching algorithm. [1]_ ISMAGS stands for
+ "Index-based Subgraph Matching Algorithm with General Symmetries". As the
+ name implies, it is symmetry aware and will only generate non-symmetric
+ isomorphisms.
+
+ Notes
+ -----
+ The implementation imposes additional conditions compared to the VF2
+ algorithm on the graphs provided and the comparison functions
+ (:attr:`node_equality` and :attr:`edge_equality`):
+
+ - Node keys in both graphs must be orderable as well as hashable.
+ - Equality must be transitive: if A is equal to B, and B is equal to C,
+ then A must be equal to C.
+
+ Attributes
+ ----------
+ graph: networkx.Graph
+ subgraph: networkx.Graph
+ node_equality: collections.abc.Callable
+ The function called to see if two nodes should be considered equal.
+ Its signature looks like this:
+ ``f(graph1: networkx.Graph, node1, graph2: networkx.Graph, node2) -> bool``.
+ `node1` is a node in `graph1`, and `node2` a node in `graph2`.
+ Constructed from the argument `node_match`.
+ edge_equality: collections.abc.Callable
+ The function called to see if two edges should be considered equal.
+ Its signature looks like this:
+ ``f(graph1: networkx.Graph, edge1, graph2: networkx.Graph, edge2) -> bool``.
+ `edge1` is an edge in `graph1`, and `edge2` an edge in `graph2`.
+ Constructed from the argument `edge_match`.
+
+ References
+ ----------
+ .. [1] M. Houbraken, S. Demeyer, T. Michoel, P. Audenaert, D. Colle,
+ M. Pickavet, "The Index-Based Subgraph Matching Algorithm with General
+ Symmetries (ISMAGS): Exploiting Symmetry for Faster Subgraph
+ Enumeration", PLoS One 9(5): e97896, 2014.
+ https://doi.org/10.1371/journal.pone.0097896
+ """
+ def __init__(self, graph, subgraph, node_match=None, edge_match=None,
+ cache=None):
+ """
+ Parameters
+ ----------
+ graph: networkx.Graph
+ subgraph: networkx.Graph
+ node_match: collections.abc.Callable or None
+ Function used to determine whether two nodes are equivalent. Its
+ signature should look like ``f(n1: dict, n2: dict) -> bool``, with
+ `n1` and `n2` node property dicts. See also
+ :func:`~networkx.algorithms.isomorphism.categorical_node_match` and
+ friends.
+ If `None`, all nodes are considered equal.
+ edge_match: collections.abc.Callable or None
+ Function used to determine whether two edges are equivalent. Its
+ signature should look like ``f(e1: dict, e2: dict) -> bool``, with
+ `e1` and `e2` edge property dicts. See also
+ :func:`~networkx.algorithms.isomorphism.categorical_edge_match` and
+ friends.
+ If `None`, all edges are considered equal.
+ cache: collections.abc.Mapping
+ A cache used for caching graph symmetries.
+ """
+ # TODO: graph and subgraph setter methods that invalidate the caches.
+ # TODO: allow for precomputed partitions and colors
+ self.graph = graph
+ self.subgraph = subgraph
+ self._symmetry_cache = cache
+ # Naming conventions are taken from the original paper. For your
+ # sanity:
+ # sg: subgraph
+ # g: graph
+ # e: edge(s)
+ # n: node(s)
+ # So: sgn means "subgraph nodes".
+ self._sgn_partitions_ = None
+ self._sge_partitions_ = None
+
+ self._sgn_colors_ = None
+ self._sge_colors_ = None
+
+ self._gn_partitions_ = None
+ self._ge_partitions_ = None
+
+ self._gn_colors_ = None
+ self._ge_colors_ = None
+
+ self._node_compat_ = None
+ self._edge_compat_ = None
+
+ if node_match is None:
+ self.node_equality = self._node_match_maker(lambda n1, n2: True)
+ self._sgn_partitions_ = [set(self.subgraph.nodes)]
+ self._gn_partitions_ = [set(self.graph.nodes)]
+ self._node_compat_ = {0: 0}
+ else:
+ self.node_equality = self._node_match_maker(node_match)
+ if edge_match is None:
+ self.edge_equality = self._edge_match_maker(lambda e1, e2: True)
+ self._sge_partitions_ = [set(self.subgraph.edges)]
+ self._ge_partitions_ = [set(self.graph.edges)]
+ self._edge_compat_ = {0: 0}
+ else:
+ self.edge_equality = self._edge_match_maker(edge_match)
+
+ @property
+ def _sgn_partitions(self):
+ if self._sgn_partitions_ is None:
+ def nodematch(node1, node2):
+ return self.node_equality(self.subgraph, node1, self.subgraph, node2)
+ self._sgn_partitions_ = make_partitions(self.subgraph.nodes, nodematch)
+ return self._sgn_partitions_
+
+ @property
+ def _sge_partitions(self):
+ if self._sge_partitions_ is None:
+ def edgematch(edge1, edge2):
+ return self.edge_equality(self.subgraph, edge1, self.subgraph, edge2)
+ self._sge_partitions_ = make_partitions(self.subgraph.edges, edgematch)
+ return self._sge_partitions_
+
+ @property
+ def _gn_partitions(self):
+ if self._gn_partitions_ is None:
+ def nodematch(node1, node2):
+ return self.node_equality(self.graph, node1, self.graph, node2)
+ self._gn_partitions_ = make_partitions(self.graph.nodes, nodematch)
+ return self._gn_partitions_
+
+ @property
+ def _ge_partitions(self):
+ if self._ge_partitions_ is None:
+ def edgematch(edge1, edge2):
+ return self.edge_equality(self.graph, edge1, self.graph, edge2)
+ self._ge_partitions_ = make_partitions(self.graph.edges, edgematch)
+ return self._ge_partitions_
+
+ @property
+ def _sgn_colors(self):
+ if self._sgn_colors_ is None:
+ self._sgn_colors_ = partition_to_color(self._sgn_partitions)
+ return self._sgn_colors_
+
+ @property
+ def _sge_colors(self):
+ if self._sge_colors_ is None:
+ self._sge_colors_ = partition_to_color(self._sge_partitions)
+ return self._sge_colors_
+
+ @property
+ def _gn_colors(self):
+ if self._gn_colors_ is None:
+ self._gn_colors_ = partition_to_color(self._gn_partitions)
+ return self._gn_colors_
+
+ @property
+ def _ge_colors(self):
+ if self._ge_colors_ is None:
+ self._ge_colors_ = partition_to_color(self._ge_partitions)
+ return self._ge_colors_
+
+ @property
+ def _node_compatibility(self):
+ if self._node_compat_ is not None:
+ return self._node_compat_
+ self._node_compat_ = {}
+ for sgn_part_color, gn_part_color in itertools.product(range(len(self._sgn_partitions)),
+ range(len(self._gn_partitions))):
+ sgn = next(iter(self._sgn_partitions[sgn_part_color]))
+ gn = next(iter(self._gn_partitions[gn_part_color]))
+ if self.node_equality(self.subgraph, sgn, self.graph, gn):
+ self._node_compat_[sgn_part_color] = gn_part_color
+ return self._node_compat_
+
+ @property
+ def _edge_compatibility(self):
+ if self._edge_compat_ is not None:
+ return self._edge_compat_
+ self._edge_compat_ = {}
+ for sge_part_color, ge_part_color in itertools.product(range(len(self._sge_partitions)),
+ range(len(self._ge_partitions))):
+ sge = next(iter(self._sge_partitions[sge_part_color]))
+ ge = next(iter(self._ge_partitions[ge_part_color]))
+ if self.edge_equality(self.subgraph, sge, self.graph, ge):
+ self._edge_compat_[sge_part_color] = ge_part_color
+ return self._edge_compat_
+
+ @staticmethod
+ def _node_match_maker(cmp):
+ @wraps(cmp)
+ def comparer(graph1, node1, graph2, node2):
+ return cmp(graph1.nodes[node1], graph2.nodes[node2])
+ return comparer
+
+ @staticmethod
+ def _edge_match_maker(cmp):
+ @wraps(cmp)
+ def comparer(graph1, edge1, graph2, edge2):
+ return cmp(graph1.edges[edge1], graph2.edges[edge2])
+ return comparer
+
+ def find_isomorphisms(self, symmetry=True):
+ """
+ Find all subgraph isomorphisms between :attr:`subgraph` <=
+ :attr:`graph`.
+
+ Parameters
+ ----------
+ symmetry: bool
+ Whether symmetry should be taken into account. If False, found
+ isomorphisms may be symmetrically equivalent.
+
+ Yields
+ ------
+ dict
+ The found isomorphism mappings of {graph_node: subgraph_node}.
+ """
+ # The networkx VF2 algorithm is slightly funny about when it yields an
+ # empty dict and when it does not.
+ if not self.subgraph:
+ yield {}
+ return
+ elif not self.graph:
+ return
+ elif len(self.graph) < len(self.subgraph):
+ return
+
+ if symmetry:
+ _, cosets = self.analyze_symmetry(self.subgraph,
+ self._sgn_partitions,
+ self._sge_colors)
+ constraints = self._make_constraints(cosets)
+ else:
+ constraints = []
+
+ candidates = self._find_nodecolor_candidates()
+ la_candidates = self._get_lookahead_candidates()
+ for sgn in self.subgraph:
+ extra_candidates = la_candidates[sgn]
+ if extra_candidates:
+ candidates[sgn] = candidates[sgn] | {frozenset(extra_candidates)}
+
+ if any(candidates.values()):
+ start_sgn = min(candidates, key=lambda n: min(candidates[n], key=len))
+ candidates[start_sgn] = (intersect(candidates[start_sgn]),)
+ yield from self._map_nodes(start_sgn, candidates, constraints)
+ else:
+ return
+
+ @staticmethod
+ def _find_neighbor_color_count(graph, node, node_color, edge_color):
+ """
+ For `node` in `graph`, count the number of edges of a specific color
+ it has to nodes of a specific color.
+ """
+ counts = Counter()
+ neighbors = graph[node]
+ for neighbor in neighbors:
+ n_color = node_color[neighbor]
+ if (node, neighbor) in edge_color:
+ e_color = edge_color[node, neighbor]
+ else:
+ e_color = edge_color[neighbor, node]
+ counts[e_color, n_color] += 1
+ return counts
+
+ def _get_lookahead_candidates(self):
+ """
+ Returns a mapping of {subgraph node: collection of graph nodes} for
+ which the graph nodes are feasible candidates for the subgraph node, as
+ determined by looking ahead one edge.
+ """
+ g_counts = {}
+ for gn in self.graph:
+ g_counts[gn] = self._find_neighbor_color_count(self.graph, gn,
+ self._gn_colors,
+ self._ge_colors)
+ candidates = defaultdict(set)
+ for sgn in self.subgraph:
+ sg_count = self._find_neighbor_color_count(self.subgraph, sgn,
+ self._sgn_colors,
+ self._sge_colors)
+ new_sg_count = Counter()
+ for (sge_color, sgn_color), count in sg_count.items():
+ try:
+ ge_color = self._edge_compatibility[sge_color]
+ gn_color = self._node_compatibility[sgn_color]
+ except KeyError:
+ pass
+ else:
+ new_sg_count[ge_color, gn_color] = count
+
+ for gn, g_count in g_counts.items():
+ if all(new_sg_count[x] <= g_count[x] for x in new_sg_count):
+ # Valid candidate
+ candidates[sgn].add(gn)
+ return candidates
+
+ def largest_common_subgraph(self, symmetry=True):
+ """
+ Find the largest common induced subgraphs between :attr:`subgraph` and
+ :attr:`graph`.
+
+ Parameters
+ ----------
+ symmetry: bool
+ Whether symmetry should be taken into account. If False, found
+ largest common subgraphs may be symmetrically equivalent.
+
+ Yields
+ ------
+ dict
+ The found isomorphism mappings of {graph_node: subgraph_node}.
+ """
+ # The networkx VF2 algorithm is slightly funny about when it yields an
+ # empty dict and when it does not.
+ if not self.subgraph:
+ yield {}
+ return
+ elif not self.graph:
+ return
+
+ if symmetry:
+ _, cosets = self.analyze_symmetry(self.subgraph,
+ self._sgn_partitions,
+ self._sge_colors)
+ constraints = self._make_constraints(cosets)
+ else:
+ constraints = []
+
+ candidates = self._find_nodecolor_candidates()
+
+ if any(candidates.values()):
+ yield from self._largest_common_subgraph(candidates, constraints)
+ else:
+ return
+
+ def analyze_symmetry(self, graph, node_partitions, edge_colors):
+ """
+ Find a minimal set of permutations and corresponding co-sets that
+ describe the symmetry of :attr:`subgraph`.
+
+ Returns
+ -------
+ set[frozenset]
+ The found permutations. This is a set of frozenset of pairs of node
+ keys which can be exchanged without changing :attr:`subgraph`.
+ dict[collections.abc.Hashable, set[collections.abc.Hashable]]
+ The found co-sets. The co-sets is a dictionary of {node key:
+ set of node keys}. Every key-value pair describes which `values`
+ can be interchanged without changing nodes less than `key`.
+ """
+ if self._symmetry_cache is not None:
+ key = hash((tuple(graph.nodes), tuple(graph.edges),
+ tuple(map(tuple, node_partitions)), tuple(edge_colors.items())))
+ if key in self._symmetry_cache:
+ return self._symmetry_cache[key]
+ node_partitions = list(self._refine_node_partitions(graph,
+ node_partitions,
+ edge_colors))
+ assert len(node_partitions) == 1
+ node_partitions = node_partitions[0]
+ permutations, cosets = self._process_ordered_pair_partitions(graph,
+ node_partitions,
+ node_partitions,
+ edge_colors)
+ if self._symmetry_cache is not None:
+ self._symmetry_cache[key] = permutations, cosets
+ return permutations, cosets
+
+ def is_isomorphic(self, symmetry=False):
+ """
+ Returns True if :attr:`graph` is isomorphic to :attr:`subgraph` and
+ False otherwise.
+
+ Returns
+ -------
+ bool
+ """
+ return len(self.subgraph) == len(self.graph) and self.subgraph_is_isomorphic(symmetry)
+
+ def subgraph_is_isomorphic(self, symmetry=False):
+ """
+ Returns True if a subgraph of :attr:`graph` is isomorphic to
+ :attr:`subgraph` and False otherwise.
+
+ Returns
+ -------
+ bool
+ """
+ # symmetry=False, since we only need to know whether there is any
+ # example; figuring out all symmetry elements probably costs more time
+ # than it gains.
+ isom = next(self.subgraph_isomorphisms_iter(symmetry=symmetry), None)
+ return isom is not None
+
+ def isomorphisms_iter(self, symmetry=True):
+ """
+ Does the same as :meth:`find_isomorphisms` if :attr:`graph` and
+ :attr:`subgraph` have the same number of nodes.
+
+ .. automethod:: find_isomorphisms
+ """
+ if len(self.graph) == len(self.subgraph):
+ yield from self.subgraph_isomorphisms_iter(symmetry=symmetry)
+
+ def subgraph_isomorphisms_iter(self, symmetry=True):
+ """
+ Alternative name for :meth:`find_isomorphisms`.
+
+ .. automethod:: find_isomorphisms
+ """
+ return self.find_isomorphisms(symmetry)
+
+ def _find_nodecolor_candidates(self):
+ """
+ Per node in subgraph find all nodes in graph that have the same color.
+ """
+ candidates = defaultdict(set)
+ for sgn in self.subgraph.nodes:
+ sgn_color = self._sgn_colors[sgn]
+ if sgn_color in self._node_compatibility:
+ gn_color = self._node_compatibility[sgn_color]
+ candidates[sgn].add(frozenset(self._gn_partitions[gn_color]))
+ else:
+ candidates[sgn].add(frozenset())
+ candidates = dict(candidates)
+ for sgn, options in candidates.items():
+ candidates[sgn] = frozenset(options)
+ return candidates
+
+ @staticmethod
+ def _make_constraints(cosets):
+ """
+ Turn cosets into constraints.
+ """
+ constraints = []
+ for node_i, node_ts in cosets.items():
+ for node_t in node_ts:
+ if node_i != node_t:
+ # Node i must be smaller than node t.
+ constraints.append((node_i, node_t))
+ return constraints
+
+ @staticmethod
+ def _find_node_edge_color(graph, node_colors, edge_colors):
+ """
+ For every node in graph, come up with a color that combines 1) the
+ color of the node, and 2) for every edge color, the number of edges
+ of that color to nodes of each color.
+ """
+ counts = defaultdict(lambda: defaultdict(int))
+ for node1, node2 in graph.edges:
+ if (node1, node2) in edge_colors:
+ # FIXME directed graphs
+ ecolor = edge_colors[node1, node2]
+ else:
+ ecolor = edge_colors[node2, node1]
+ # Count per node how many edges it has of what color to nodes of
+ # what color
+ counts[node1][ecolor, node_colors[node2]] += 1
+ counts[node2][ecolor, node_colors[node1]] += 1
+
+ node_edge_colors = dict()
+ for node in graph.nodes:
+ node_edge_colors[node] = node_colors[node], set(counts[node].items())
+
+ return node_edge_colors
+
+ @staticmethod
+ def _get_permutations_by_length(items):
+ """
+ Get all permutations of items, but only permute items with the same
+ length.
+
+ >>> found = list(ISMAGS._get_permutations_by_length([[1], [2], [3, 4], [4, 5]]))
+ >>> answer = [
+ ... (([1], [2]), ([3, 4], [4, 5])),
+ ... (([1], [2]), ([4, 5], [3, 4])),
+ ... (([2], [1]), ([3, 4], [4, 5])),
+ ... (([2], [1]), ([4, 5], [3, 4])),
+ ... ]
+ >>> found == answer
+ True
+ """
+ by_len = defaultdict(list)
+ for item in items:
+ by_len[len(item)].append(item)
+
+ yield from itertools.product(*(itertools.permutations(by_len[l]) for l in sorted(by_len)))
+
+ @classmethod
+ def _refine_node_partitions(cls, graph, node_partitions, edge_colors, branch=False):
+ """
+ Given a partition of nodes in graph, make the partitions smaller such
+ that all nodes in a partition have 1) the same color, and 2) the same
+ number of edges to specific other partitions.
+ """
+ def equal_color(node1, node2):
+ return node_edge_colors[node1] == node_edge_colors[node2]
+
+ node_partitions = list(node_partitions)
+ node_colors = partition_to_color(node_partitions)
+ node_edge_colors = cls._find_node_edge_color(graph, node_colors, edge_colors)
+ if all(are_all_equal(node_edge_colors[node] for node in partition)
+ for partition in node_partitions):
+ yield node_partitions
+ return
+
+ new_partitions = []
+ output = [new_partitions]
+ for partition in node_partitions:
+ if not are_all_equal(node_edge_colors[node] for node in partition):
+ refined = make_partitions(partition, equal_color)
+ if (branch and len(refined) != 1 and
+ len({len(r) for r in refined}) != len([len(r) for r in refined])):
+ # This is where it breaks. There are multiple new cells
+ # in refined with the same length, and their order
+ # matters.
+ # So option 1) Hit it with a big hammer and simply make all
+ # orderings.
+ permutations = cls._get_permutations_by_length(refined)
+ new_output = []
+ for n_p in output:
+ for permutation in permutations:
+ new_output.append(n_p + list(permutation[0]))
+ output = new_output
+ else:
+ for n_p in output:
+ n_p.extend(sorted(refined, key=len))
+ else:
+ for n_p in output:
+ n_p.append(partition)
+ for n_p in output:
+ yield from cls._refine_node_partitions(graph, n_p, edge_colors, branch)
+
+ def _edges_of_same_color(self, sgn1, sgn2):
+ """
+ Returns all edges in :attr:`graph` that have the same colour as the
+ edge between sgn1 and sgn2 in :attr:`subgraph`.
+ """
+ if (sgn1, sgn2) in self._sge_colors:
+ # FIXME directed graphs
+ sge_color = self._sge_colors[sgn1, sgn2]
+ else:
+ sge_color = self._sge_colors[sgn2, sgn1]
+ if sge_color in self._edge_compatibility:
+ ge_color = self._edge_compatibility[sge_color]
+ g_edges = self._ge_partitions[ge_color]
+ else:
+ g_edges = []
+ return g_edges
+
+ def _map_nodes(self, sgn, candidates, constraints, mapping=None, to_be_mapped=None):
+ """
+ Find all subgraph isomorphisms honoring constraints.
+ """
+ if mapping is None:
+ mapping = {}
+ else:
+ mapping = mapping.copy()
+ if to_be_mapped is None:
+ to_be_mapped = set(self.subgraph.nodes)
+
+ # Note, we modify candidates here. Doesn't seem to affect results, but
+ # remember this.
+ #candidates = candidates.copy()
+ sgn_candidates = intersect(candidates[sgn])
+ candidates[sgn] = frozenset([sgn_candidates])
+ for gn in sgn_candidates:
+ # We're going to try to map sgn to gn.
+ if gn in mapping.values() or sgn not in to_be_mapped:
+ # gn is already mapped to something
+ continue # pragma: no cover
+
+ # REDUCTION and COMBINATION
+ mapping[sgn] = gn
+ # BASECASE
+ if to_be_mapped == set(mapping.keys()):
+ yield {v: k for k, v in mapping.items()}
+ continue
+ left_to_map = to_be_mapped - set(mapping.keys())
+
+ new_candidates = candidates.copy()
+ sgn_neighbours = set(self.subgraph[sgn])
+ not_gn_neighbours = set(self.graph.nodes) - set(self.graph[gn])
+ for sgn2 in left_to_map:
+ if sgn2 not in sgn_neighbours:
+ gn2_options = not_gn_neighbours
+ else:
+ # Get all edges to gn of the right color:
+ g_edges = self._edges_of_same_color(sgn, sgn2)
+ # FIXME directed graphs
+ # And all nodes involved in those which are connected to gn
+ gn2_options = {n for e in g_edges for n in e if gn in e}
+ # Node color compatibility should be taken care of by the
+ # initial candidate lists made by find_subgraphs
+
+ # Add gn2_options to the right collection. Since new_candidates
+ # is a dict of frozensets of frozensets of node indices it's
+ # a bit clunky. We can't do .add, and + also doesn't work. We
+ # could do |, but I deem union to be clearer.
+ new_candidates[sgn2] = new_candidates[sgn2].union([frozenset(gn2_options)])
+
+ if (sgn, sgn2) in constraints:
+ gn2_options = {gn2 for gn2 in self.graph if gn2 > gn}
+ elif (sgn2, sgn) in constraints:
+ gn2_options = {gn2 for gn2 in self.graph if gn2 < gn}
+ else:
+ continue # pragma: no cover
+ new_candidates[sgn2] = new_candidates[sgn2].union([frozenset(gn2_options)])
+
+ # The next node is the one that is unmapped and has fewest
+ # candidates
+ # Pylint disables because it's a one-shot function.
+ next_sgn = min(left_to_map,
+ key=lambda n: min(new_candidates[n], key=len)) # pylint: disable=cell-var-from-loop
+ yield from self._map_nodes(next_sgn,
+ new_candidates,
+ constraints,
+ mapping=mapping,
+ to_be_mapped=to_be_mapped)
+ # Unmap sgn-gn. Strictly not necessary since it'd get overwritten
+ # when making a new mapping for sgn.
+ #del mapping[sgn]
+
+ def _largest_common_subgraph(self, candidates, constraints,
+ to_be_mapped=None):
+ """
+ Find all largest common subgraphs honoring constraints.
+ """
+ if to_be_mapped is None:
+ to_be_mapped = {frozenset(self.subgraph.nodes)}
+
+ # The LCS problem is basically a repeated subgraph isomorphism problem
+ # with smaller and smaller subgraphs. We store the nodes that are
+ # "part of" the subgraph in to_be_mapped, and we make it a little
+ # smaller every iteration.
+
+ # pylint disable because it's guarded against by the default value
+ current_size = len(next(iter(to_be_mapped), [])) # pylint: disable=stop-iteration-return
+
+ found_iso = False
+ if current_size <= len(self.graph):
+ # There's no point in trying to find isomorphisms of
+ # graph >= subgraph if subgraph has more nodes than graph.
+
+ # Try the isomorphism first with the nodes with lowest ID. So sort
+ # them. Those are more likely to be part of the final
+ # correspondence. This makes finding the first answer(s) faster. In
+ # theory.
+ for nodes in sorted(to_be_mapped, key=sorted):
+ # Find the isomorphism between subgraph[to_be_mapped] <= graph
+ next_sgn = min(nodes, key=lambda n: min(candidates[n], key=len))
+ isomorphs = self._map_nodes(next_sgn, candidates, constraints,
+ to_be_mapped=nodes)
+
+ # This is effectively `yield from isomorphs`, except that we look
+ # whether an item was yielded.
+ try:
+ item = next(isomorphs)
+ except StopIteration:
+ pass
+ else:
+ yield item
+ yield from isomorphs
+ found_iso = True
+
+ # BASECASE
+ if found_iso or current_size == 1:
+ # Shrinking has no point because either 1) we end up with a smaller
+ # common subgraph (and we want the largest), or 2) there'll be no
+ # more subgraph.
+ return
+
+ left_to_be_mapped = set()
+ for nodes in to_be_mapped:
+ for sgn in nodes:
+ # We're going to remove sgn from to_be_mapped, but subject to
+ # symmetry constraints. We know that for every constraint the two
+ # subgraph nodes involved are equal. So whenever we would remove
+ # the lower part of a constraint, we remove the higher instead.
+ # This is all dealt with by _remove_node. And because
+ # left_to_be_mapped is a set, we don't do double work.
+
+ # And finally, make the subgraph one node smaller.
+ # REDUCTION
+ new_nodes = self._remove_node(sgn, nodes, constraints)
+ left_to_be_mapped.add(new_nodes)
+ # COMBINATION
+ yield from self._largest_common_subgraph(candidates, constraints,
+ to_be_mapped=left_to_be_mapped)
+
+ @staticmethod
+ def _remove_node(node, nodes, constraints):
+ """
+ Returns a new set where node has been removed from nodes, subject to
+ symmetry constraints. We know that for every constraint the two
+ subgraph nodes involved are equal. So whenever we would remove the
+ lower part of a constraint, remove the higher instead.
+ """
+ while True:
+ for low, high in constraints:
+ if low == node and high in nodes:
+ node = high
+ break
+ else: # no break, couldn't find node in constraints
+ break
+ return frozenset(nodes - {node})
+
+ @staticmethod
+ def _find_permutations(top_partitions, bottom_partitions):
+ """
+ Return the pairs of top/bottom partitions where the partitions are
+ different. Ensures that all partitions in both top and bottom
+ partitions have size 1.
+ """
+ # Find permutations
+ permutations = set()
+ for top, bot in zip(top_partitions, bottom_partitions):
+ # top and bot have only one element
+ if len(top) != 1 or len(bot) != 1:
+ raise IndexError("Not all nodes are coupled. This is"
+ " impossible: {}, {}".format(top_partitions,
+ bottom_partitions))
+ if top != bot:
+ permutations.add(frozenset((next(iter(top)), next(iter(bot)))))
+ return permutations
+
+ @staticmethod
+ def _update_orbits(orbits, permutations):
+ """
+ Update orbits based on permutations. Orbits is modified in place.
+ For every pair of items in permutations their respective orbits are
+ merged.
+ """
+ for permutation in permutations:
+ node, node2 = permutation
+ # Find the orbits that contain node and node2, and replace the
+ # orbit containing node with the union
+ first = second = None
+ for idx, orbit in enumerate(orbits):
+ if first is not None and second is not None:
+ break
+ if node in orbit:
+ first = idx
+ if node2 in orbit:
+ second = idx
+ if first != second:
+ orbits[first].update(orbits[second])
+ del orbits[second]
+
+ def _couple_nodes(self, top_partitions, bottom_partitions, pair_idx,
+ t_node, b_node, graph, edge_colors):
+ """
+ Generate new partitions from top and bottom_partitions where t_node is
+ coupled to b_node. pair_idx is the index of the partitions where t_ and
+ b_node can be found.
+ """
+ t_partition = top_partitions[pair_idx]
+ b_partition = bottom_partitions[pair_idx]
+ assert t_node in t_partition and b_node in b_partition
+ # Couple node to node2. This means they get their own partition
+ new_top_partitions = [top.copy() for top in top_partitions]
+ new_bottom_partitions = [bot.copy() for bot in bottom_partitions]
+ new_t_groups = {t_node}, t_partition - {t_node}
+ new_b_groups = {b_node}, b_partition - {b_node}
+ # Replace the old partitions with the coupled ones
+ del new_top_partitions[pair_idx]
+ del new_bottom_partitions[pair_idx]
+ new_top_partitions[pair_idx:pair_idx] = new_t_groups
+ new_bottom_partitions[pair_idx:pair_idx] = new_b_groups
+
+ new_top_partitions = self._refine_node_partitions(graph,
+ new_top_partitions,
+ edge_colors)
+ new_bottom_partitions = self._refine_node_partitions(graph,
+ new_bottom_partitions,
+ edge_colors, branch=True)
+ new_top_partitions = list(new_top_partitions)
+ assert len(new_top_partitions) == 1
+ new_top_partitions = new_top_partitions[0]
+ for bot in new_bottom_partitions:
+ yield list(new_top_partitions), bot
+
+ def _process_ordered_pair_partitions(self, graph, top_partitions,
+ bottom_partitions, edge_colors,
+ orbits=None, cosets=None):
+ """
+ Processes ordered pair partitions as per the reference paper. Finds and
+ returns all permutations and cosets that leave the graph unchanged.
+ """
+ if orbits is None:
+ orbits = [{node} for node in graph.nodes]
+ else:
+ # Note that we don't copy orbits when we are given one. This means
+ # we leak information between the recursive branches. This is
+ # intentional!
+ orbits = orbits
+ if cosets is None:
+ cosets = {}
+ else:
+ cosets = cosets.copy()
+
+ assert all(len(t_p) == len(b_p) for t_p, b_p in zip(top_partitions, bottom_partitions))
+
+ # BASECASE
+ if all(len(top) == 1 for top in top_partitions):
+ # All nodes are mapped
+ permutations = self._find_permutations(top_partitions, bottom_partitions)
+ self._update_orbits(orbits, permutations)
+ if permutations:
+ return [permutations], cosets
+ else:
+ return [], cosets
+
+ permutations = []
+ unmapped_nodes = {(node, idx)
+ for idx, t_partition in enumerate(top_partitions)
+ for node in t_partition if len(t_partition) > 1}
+ node, pair_idx = min(unmapped_nodes)
+ b_partition = bottom_partitions[pair_idx]
+
+ for node2 in sorted(b_partition):
+ if len(b_partition) == 1:
+ # Can never result in symmetry
+ continue
+ if node != node2 and any(node in orbit and node2 in orbit for orbit in orbits):
+ # Orbit prune branch
+ continue
+ # REDUCTION
+ # Couple node to node2
+ partitions = self._couple_nodes(top_partitions, bottom_partitions,
+ pair_idx, node, node2, graph,
+ edge_colors)
+ for opp in partitions:
+ new_top_partitions, new_bottom_partitions = opp
+
+ new_perms, new_cosets = self._process_ordered_pair_partitions(graph,
+ new_top_partitions,
+ new_bottom_partitions,
+ edge_colors,
+ orbits,
+ cosets)
+ # COMBINATION
+ permutations += new_perms
+ cosets.update(new_cosets)
+
+ mapped = {k for top, bottom in zip(top_partitions, bottom_partitions)
+ for k in top if len(top) == 1 and top == bottom}
+ ks = {k for k in graph.nodes if k < node}
+ # Have all nodes with ID < node been mapped?
+ find_coset = ks <= mapped and node not in cosets
+ if find_coset:
+ # Find the orbit that contains node
+ for orbit in orbits:
+ if node in orbit:
+ cosets[node] = orbit.copy()
+ return permutations, cosets
| diff --git a/networkx/algorithms/isomorphism/tests/test_ismags.py b/networkx/algorithms/isomorphism/tests/test_ismags.py
new file mode 100644
--- /dev/null
+++ b/networkx/algorithms/isomorphism/tests/test_ismags.py
@@ -0,0 +1,272 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+"""
+ Tests for ISMAGS isomorphism algorithm.
+"""
+
+from nose.tools import assert_true, assert_equal
+from nose import SkipTest
+import networkx as nx
+from networkx.algorithms import isomorphism as iso
+
+
+def _matches_to_sets(matches):
+ """
+ Helper function to facilitate comparing collections of dictionaries in
+ which order does not matter.
+ """
+ return set(map(lambda m: frozenset(m.items()), matches))
+
+
+class TestSelfIsomorphism(object):
+ data = [
+ (
+ [(0, dict(name='a')),
+ (1, dict(name='a')),
+ (2, dict(name='b')),
+ (3, dict(name='b')),
+ (4, dict(name='a')),
+ (5, dict(name='a'))],
+ [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]
+ ),
+ (
+ range(1, 5),
+ [(1, 2), (2, 4), (4, 3), (3, 1)]
+ ),
+ (
+ [],
+ [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0), (0, 6), (6, 7),
+ (2, 8), (8, 9), (4, 10), (10, 11)]
+ ),
+ (
+ [],
+ [(0, 1), (1, 2), (1, 4), (2, 3), (3, 5), (3, 6)]
+ ),
+ ]
+
+ def test_self_isomorphism(self):
+ """
+ For some small, symmetric graphs, make sure that 1) they are isomorphic
+ to themselves, and 2) that only the identity mapping is found.
+ """
+ for node_data, edge_data in self.data:
+ graph = nx.Graph()
+ graph.add_nodes_from(node_data)
+ graph.add_edges_from(edge_data)
+
+ ismags = iso.ISMAGS(graph, graph, node_match=iso.categorical_node_match('name', None))
+ assert_true(ismags.is_isomorphic())
+ assert_true(ismags.subgraph_is_isomorphic())
+ assert_equal(list(ismags.subgraph_isomorphisms_iter(symmetry=True)),
+ [{n: n for n in graph.nodes}])
+
+ def test_edgecase_self_isomorphism(self):
+ """
+ This edgecase is one of the cases in which it is hard to find all
+ symmetry elements.
+ """
+ graph = nx.Graph()
+ graph.add_path(range(5))
+ graph.add_edges_from([(2, 5), (5, 6)])
+
+ ismags = iso.ISMAGS(graph, graph)
+ ismags_answer = list(ismags.find_isomorphisms(True))
+ assert ismags_answer == [{n: n for n in graph.nodes}]
+
+ graph = nx.relabel_nodes(graph, {0: 0, 1: 1, 2: 2, 3: 3, 4: 6, 5: 4, 6: 5})
+ ismags = iso.ISMAGS(graph, graph)
+ ismags_answer = list(ismags.find_isomorphisms(True))
+ assert ismags_answer == [{n: n for n in graph.nodes}]
+
+ @SkipTest
+ def test_directed_self_isomorphism(self):
+ """
+ For some small, directed, symmetric graphs, make sure that 1) they are
+ isomorphic to themselves, and 2) that only the identity mapping is
+ found.
+ """
+ for node_data, edge_data in self.data:
+ graph = nx.Graph()
+ graph.add_nodes_from(node_data)
+ graph.add_edges_from(edge_data)
+
+ ismags = iso.ISMAGS(graph, graph, node_match=iso.categorical_node_match('name', None))
+ assert_true(ismags.is_isomorphic())
+ assert_true(ismags.subgraph_is_isomorphic())
+ assert_equal(list(ismags.subgraph_isomorphisms_iter(symmetry=True)),
+ [{n: n for n in graph.nodes}])
+
+
+class TestSubgraphIsomorphism(object):
+ def test_isomorphism(self):
+ g1 = nx.Graph()
+ g1.add_cycle(range(4))
+
+ g2 = nx.Graph()
+ g2.add_cycle(range(4))
+ g2.add_edges_from([(n, m) for n, m in zip(g2, range(4, 8))])
+ ismags = iso.ISMAGS(g2, g1)
+ assert_equal(list(ismags.subgraph_isomorphisms_iter(symmetry=True)),
+ [{n: n for n in g1.nodes}])
+
+ def test_isomorphism2(self):
+ g1 = nx.Graph()
+ g1.add_path(range(3))
+
+ g2 = g1.copy()
+ g2.add_edge(1, 3)
+
+ ismags = iso.ISMAGS(g2, g1)
+ matches = ismags.subgraph_isomorphisms_iter(symmetry=True)
+ expected_symmetric = [{0: 0, 1: 1, 2: 2},
+ {0: 0, 1: 1, 3: 2},
+ {2: 0, 1: 1, 3: 2}]
+ assert_equal(_matches_to_sets(matches),
+ _matches_to_sets(expected_symmetric))
+
+ matches = ismags.subgraph_isomorphisms_iter(symmetry=False)
+ expected_asymmetric = [{0: 2, 1: 1, 2: 0},
+ {0: 2, 1: 1, 3: 0},
+ {2: 2, 1: 1, 3: 0}]
+ assert_equal(_matches_to_sets(matches),
+ _matches_to_sets(expected_symmetric + expected_asymmetric))
+
+ def test_labeled_nodes(self):
+ g1 = nx.Graph()
+ g1.add_cycle(range(3))
+ g1.nodes[1]['attr'] = True
+
+ g2 = g1.copy()
+ g2.add_edge(1, 3)
+ ismags = iso.ISMAGS(g2, g1, node_match=lambda x, y: x == y)
+ matches = ismags.subgraph_isomorphisms_iter(symmetry=True)
+ expected_symmetric = [{0: 0, 1: 1, 2: 2}]
+ assert_equal(_matches_to_sets(matches),
+ _matches_to_sets(expected_symmetric))
+
+ matches = ismags.subgraph_isomorphisms_iter(symmetry=False)
+ expected_asymmetric = [{0: 2, 1: 1, 2: 0}]
+ assert_equal(_matches_to_sets(matches),
+ _matches_to_sets(expected_symmetric + expected_asymmetric))
+
+ def test_labeled_edges(self):
+ g1 = nx.Graph()
+ g1.add_cycle(range(3))
+ g1.edges[1, 2]['attr'] = True
+
+ g2 = g1.copy()
+ g2.add_edge(1, 3)
+ ismags = iso.ISMAGS(g2, g1, edge_match=lambda x, y: x == y)
+ matches = ismags.subgraph_isomorphisms_iter(symmetry=True)
+ expected_symmetric = [{0: 0, 1: 1, 2: 2}]
+ assert_equal(_matches_to_sets(matches),
+ _matches_to_sets(expected_symmetric))
+
+ matches = ismags.subgraph_isomorphisms_iter(symmetry=False)
+ expected_asymmetric = [{1: 2, 0: 0, 2: 1}]
+ assert_equal(_matches_to_sets(matches),
+ _matches_to_sets(expected_symmetric + expected_asymmetric))
+
+
+class TestWikipediaExample(object):
+ # Nodes 'a', 'b', 'c' and 'd' form a column.
+ # Nodes 'g', 'h', 'i' and 'j' form a column.
+ g1edges = [['a', 'g'], ['a', 'h'], ['a', 'i'],
+ ['b', 'g'], ['b', 'h'], ['b', 'j'],
+ ['c', 'g'], ['c', 'i'], ['c', 'j'],
+ ['d', 'h'], ['d', 'i'], ['d', 'j']]
+
+ # Nodes 1,2,3,4 form the clockwise corners of a large square.
+ # Nodes 5,6,7,8 form the clockwise corners of a small square
+ g2edges = [[1, 2], [2, 3], [3, 4], [4, 1],
+ [5, 6], [6, 7], [7, 8], [8, 5],
+ [1, 5], [2, 6], [3, 7], [4, 8]]
+
+ def test_graph(self):
+ g1 = nx.Graph()
+ g2 = nx.Graph()
+ g1.add_edges_from(self.g1edges)
+ g2.add_edges_from(self.g2edges)
+ gm = iso.ISMAGS(g1, g2)
+ assert_true(gm.is_isomorphic())
+
+
+class TestLargestCommonSubgraph(object):
+ def test_mcis(self):
+ # Example graphs from DOI: 10.1002/spe.588
+ graph1 = nx.Graph()
+ graph1.add_edges_from([(1, 2), (2, 3), (2, 4), (3, 4), (4, 5)])
+ graph1.nodes[1]['color'] = 0
+
+ graph2 = nx.Graph()
+ graph2.add_edges_from([(1, 2), (2, 3), (2, 4), (3, 4), (3, 5),
+ (5, 6), (5, 7), (6, 7)])
+ graph2.nodes[1]['color'] = 1
+ graph2.nodes[6]['color'] = 2
+ graph2.nodes[7]['color'] = 2
+
+ ismags = iso.ISMAGS(graph1, graph2, node_match=iso.categorical_node_match('color', None))
+ assert_equal(list(ismags.subgraph_isomorphisms_iter(True)), [])
+ assert_equal(list(ismags.subgraph_isomorphisms_iter(False)), [])
+ found_mcis = _matches_to_sets(ismags.largest_common_subgraph())
+ expected = _matches_to_sets([{2: 2, 3: 4, 4: 3, 5: 5},
+ {2: 4, 3: 2, 4: 3, 5: 5}])
+ assert_equal(expected, found_mcis)
+
+ ismags = iso.ISMAGS(graph2, graph1, node_match=iso.categorical_node_match('color', None))
+ assert_equal(list(ismags.subgraph_isomorphisms_iter(True)), [])
+ assert_equal(list(ismags.subgraph_isomorphisms_iter(False)), [])
+ found_mcis = _matches_to_sets(ismags.largest_common_subgraph())
+ # Same answer, but reversed.
+ expected = _matches_to_sets([{2: 2, 3: 4, 4: 3, 5: 5},
+ {4: 2, 2: 3, 3: 4, 5: 5}])
+ assert_equal(expected, found_mcis)
+
+ def test_symmetry_mcis(self):
+ graph1 = nx.Graph()
+ graph1.add_path(range(4))
+
+ graph2 = nx.Graph()
+ graph2.add_path(range(3))
+ graph2.add_edge(1, 3)
+
+ # Only the symmetry of graph2 is taken into account here.
+ ismags1 = iso.ISMAGS(graph1, graph2, node_match=iso.categorical_node_match('color', None))
+ assert_equal(list(ismags1.subgraph_isomorphisms_iter(True)), [])
+ found_mcis = _matches_to_sets(ismags1.largest_common_subgraph())
+ expected = _matches_to_sets([{0: 0, 1: 1, 2: 2},
+ {1: 0, 3: 2, 2: 1}])
+ assert_equal(expected, found_mcis)
+
+ # Only the symmetry of graph1 is taken into account here.
+ ismags2 = iso.ISMAGS(graph2, graph1, node_match=iso.categorical_node_match('color', None))
+ assert_equal(list(ismags2.subgraph_isomorphisms_iter(True)), [])
+ found_mcis = _matches_to_sets(ismags2.largest_common_subgraph())
+ expected = _matches_to_sets([{3: 2, 0: 0, 1: 1},
+ {2: 0, 0: 2, 1: 1},
+ {3: 0, 0: 2, 1: 1},
+ {3: 0, 1: 1, 2: 2},
+ {0: 0, 1: 1, 2: 2},
+ {2: 0, 3: 2, 1: 1}])
+
+ assert_equal(expected, found_mcis)
+
+ found_mcis1 = _matches_to_sets(ismags1.largest_common_subgraph(False))
+ found_mcis2 = ismags2.largest_common_subgraph(False)
+ found_mcis2 = [{v: k for k, v in d.items()} for d in found_mcis2]
+ found_mcis2 = _matches_to_sets(found_mcis2)
+
+ expected = _matches_to_sets([{3: 2, 1: 3, 2: 1},
+ {2: 0, 0: 2, 1: 1},
+ {1: 2, 3: 3, 2: 1},
+ {3: 0, 1: 3, 2: 1},
+ {0: 2, 2: 3, 1: 1},
+ {3: 0, 1: 2, 2: 1},
+ {2: 0, 0: 3, 1: 1},
+ {0: 0, 2: 3, 1: 1},
+ {1: 0, 3: 3, 2: 1},
+ {1: 0, 3: 2, 2: 1},
+ {0: 3, 1: 1, 2: 2},
+ {0: 0, 1: 1, 2: 2}])
+ assert_equal(expected, found_mcis1)
+ assert_equal(expected, found_mcis2)
| ISMAGS subgraph isomorphism algorithm
Hi all,
First off, thanks for the library; I make heavy use of networkx, and it always seems to have the tools I need already implemented.
Recently however, this was not the case ( :'( ). I work with molecules, which to a layman can be described as highly symmetric graphs, and I needed to find subgraph isomorphisms. When using the available VF2 algorithm, this caused a combinatorial explosion of equivalent isomorphisms, wrecking performance.
As a result I had to implement ISMAGS (Index-based Subgraph Matching Algorithm with General Symmetries) [1]. This algorithm finds all subgraph isomorphisms that are not symmetrically equivalent. I'm still in the process of polishing and cleaning, but do you think this would make a nice addition to networkx? If so, I'd be more than happy to open a PR.
[1] https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0097896
| Yes, I'm interested in this addition to the library!
I'm familiar with the problem of finding general symmetries of a graph -- that can take prohibitively long. If I understand correctly, your PR would use the symmetries to examine subgraphs, reporting one from each symmetry equivalence class.
Thanks very much!
Effectively, yes. The subgraph isomorphism problem (and by extension automorphism) is still NP-hard, and that doesn't go away, unfortunately.
The ISMAGS algorithm finds a (small? minimal?) set of permutations in the given subgraph, and uses those to generate constraints on the isomorphisms found, so there is only ever one isomorphism found between a graph and itself; any others are de facto symmetry equivalent.
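For illustration, a minimal sketch of that behaviour using the API exercised by the test patch above (hedged; just a self-isomorphism check on a small symmetric graph):

```python
import networkx as nx
from networkx.algorithms import isomorphism as iso

# A 4-node path has two automorphisms (identity and reversal), but with
# symmetry handling enabled ISMAGS reports only one representative.
graph = nx.path_graph(4)
ismags = iso.ISMAGS(graph, graph)
print(list(ismags.find_isomorphisms(True)))
# [{0: 0, 1: 1, 2: 2, 3: 3}]  -- the reversed mapping is pruned
```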
As I said before, the code as I have it now still needs polishing (and there is still the occasional bug), but it's progressing nicely. At the moment my implementation only works for undirected graphs, though. I think I know where I need to change it to make it work for directed graphs as well, but that's WIP (and not actually my use-case, but that's ok).
Shall I open an "early stage" PR with the proviso that it's still WIP, so that 1) I get an extra pair of eyes on it, and 2) I can unify the code style a bit? In addition, how are tests and coverage currently handled in networkx? At the moment I rely heavily on https://github.com/HypothesisWorks/hypothesis and https://github.com/pckroon/hypothesis-networkx (shameless plug) for testing.
I also have PR open in my own project to get this in, if you want a sneak peak (https://github.com/marrink-lab/vermouth-martinize/pull/155).
It would be great to see an early stage PR. Tests are restricted to unit-test-style tests using ```nose```. That also tests any examples included in the docstrings. Each directory has a ```tests/``` directory with test modules. The tests are not exhaustive but rather make sure that basic functionality works and that we don't break something inadvertently. The tests are run after each commit to any PR, so the idea is to make short routines that test one idea. It looks like Hypothesis is much more involved than what we use -- but I didn't look at it closely.
Great, I'll get on it ASAP. I have another code review incoming on the project for which I actually implemented it. I'll wait until I have that processed before I open a PR here; that should reduce some duplicate effort. I can work on some of the tests/docs in the meantime.
As you describe them, the networkx tests are mostly there to prevent (obvious) regressions. Hypothesis is designed more to make sure that code is correct, and that 100% coverage is reached. It runs your code with not-quite-random examples (it's really good at hunting down edge cases). So it works best if you have some invariants that you can check based on the input, or if you have a reference implementation.
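As a hedged illustration of that style (not part of this PR's test suite; the strategy bounds are arbitrary), a Hypothesis property test for the self-isomorphism invariant could look like this:

```python
from hypothesis import given, strategies as st
import networkx as nx
from networkx.algorithms import isomorphism as iso

@given(st.lists(st.tuples(st.integers(0, 5), st.integers(0, 5)), max_size=10))
def test_identity_mapping_is_found(edges):
    # Invariant: every graph is isomorphic to itself, so the identity
    # mapping must appear among all (non-symmetry-pruned) isomorphisms.
    graph = nx.Graph()
    graph.add_edges_from((u, v) for u, v in edges if u != v)  # skip self-loops
    if len(graph) == 0:
        return
    ismags = iso.ISMAGS(graph, graph)
    identity = {n: n for n in graph.nodes}
    assert identity in ismags.find_isomorphisms(False)
```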
networkx/networkx | 3,339 | networkx__networkx-3339 | [
"3338"
] | 6bdcdcf07b9d84911697012a2c5833c62fa25992 | diff --git a/networkx/algorithms/shortest_paths/unweighted.py b/networkx/algorithms/shortest_paths/unweighted.py
--- a/networkx/algorithms/shortest_paths/unweighted.py
+++ b/networkx/algorithms/shortest_paths/unweighted.py
@@ -413,7 +413,7 @@ def single_target_shortest_path(G, target, cutoff=None):
shortest_path, single_source_shortest_path
"""
if target not in G:
- raise nx.NodeNotFound("Target {} not in G".format(source))
+ raise nx.NodeNotFound("Target {} not in G".format(target))
def join(p1, p2):
return p2 + p1
| Bug in networkx/algorithms/shortest_paths/unweighted.py
Line 415/416 of **networkx/algorithms/shortest_paths/unweighted.py** reads
```python
    if target not in G:
        raise nx.NodeNotFound("Target {} not in G".format(source))
```
but should read
```python
    if target not in G:
        raise nx.NodeNotFound("Target {} not in G".format(target))
```
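A minimal reproduction (node `99` is arbitrary): because `source` does not exist in that function's scope, the broken line raises `NameError` instead of the intended `NodeNotFound`:
```python
import networkx as nx

G = nx.path_graph(3)
nx.single_target_shortest_path(G, 99)
# before the fix: NameError: name 'source' is not defined
# after the fix:  NodeNotFound: Target 99 not in G
```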
| 2019-03-08T02:10:07 |
||
networkx/networkx | 3,347 | networkx__networkx-3347 | [
"3346"
] | 19cd86c254ebf4a889fddc171ba166187fb72890 | diff --git a/networkx/readwrite/gexf.py b/networkx/readwrite/gexf.py
--- a/networkx/readwrite/gexf.py
+++ b/networkx/readwrite/gexf.py
@@ -400,6 +400,11 @@ def edge_key_data(G):
edges_element = Element("edges")
for u, v, key, edge_data in edge_key_data(G):
kw = {"id": str(key)}
+ try:
+ edge_label = edge_data.pop("label")
+ kw["label"] = str(edge_label)
+ except KeyError:
+ pass
try:
edge_weight = edge_data.pop("weight")
kw["weight"] = str(edge_weight)
| diff --git a/networkx/readwrite/tests/test_gexf.py b/networkx/readwrite/tests/test_gexf.py
--- a/networkx/readwrite/tests/test_gexf.py
+++ b/networkx/readwrite/tests/test_gexf.py
@@ -78,7 +78,7 @@ def setup_class(cls):
</node>
</nodes>
<edges>
- <edge id="0" source="0" target="1"/>
+ <edge id="0" source="0" target="1" label="foo"/>
<edge id="1" source="0" target="2"/>
<edge id="2" source="1" target="0"/>
<edge id="3" source="2" target="1"/>
@@ -105,7 +105,7 @@ def setup_class(cls):
indegree=1,
frog=True,
)
- cls.attribute_graph.add_edge("0", "1", id="0")
+ cls.attribute_graph.add_edge("0", "1", id="0", label="foo")
cls.attribute_graph.add_edge("0", "2", id="1")
cls.attribute_graph.add_edge("1", "0", id="2")
cls.attribute_graph.add_edge("2", "1", id="3")
@@ -573,7 +573,8 @@ def test_multigraph_with_missing_attributes(self):
G = nx.MultiGraph()
G.add_node(0, label="1", color="green")
G.add_node(1, label="2", color="green")
- G.add_edge(0, 1, id="0", weight=3, type="undirected", start=0, end=1)
+ G.add_edge(0, 1, id="0", wight=3, type="undirected", start=0, end=1)
+ G.add_edge(0, 1, id="1", label="foo", start=0, end=1)
G.add_edge(0, 1)
fh = io.BytesIO()
nx.write_gexf(G, fh)
| Missing edge label attribute in GEXFWriter
According to the [GEXF specification](https://gephi.org/gexf/1.2draft/gexf-12draft-primer.pdf):
> Each edge can have a optional XML-attribute label, which is a string.
The [specification 1.2draft](https://github.com/gephi/gexf/blob/81ba4e7ccdc25631f836fc5caa4ed64ba5300379/specs/1.2draft/data.xsd#L73) defines this attribute as:
`<xs:attribute name="label" type="xs:token"/>`
The GEXF writer doesn't recognize this attribute correctly.
Instead it creates a new data attribute like:
```
<attributes class="edge" mode="static">
<attribute id="0" title="label" type="string"/>
</attributes>
```
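A small example of the intended behaviour after the fix (the file name here is arbitrary), matching the test patch:
```python
import networkx as nx

G = nx.Graph()
G.add_edge(0, 1, label="foo")
nx.write_gexf(G, "labeled.gexf")
# the edge element now carries the label directly:
#   <edge id="0" source="0" target="1" label="foo" />
# instead of a generated <attribute ... title="label"> data attribute
```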
| 2019-03-15T11:09:25 |
|
networkx/networkx | 3,356 | networkx__networkx-3356 | [
"3334",
"3334"
] | c1421cae0ecf87b94c8ba9f715bd2097df72c094 | diff --git a/networkx/generators/directed.py b/networkx/generators/directed.py
--- a/networkx/generators/directed.py
+++ b/networkx/generators/directed.py
@@ -255,11 +255,11 @@ def _choose_node(G, distribution, delta, psum):
raise nx.NetworkXError("MultiDiGraph required in create_using")
if alpha <= 0:
- raise ValueError('alpha must be >= 0.')
+ raise ValueError('alpha must be > 0.')
if beta <= 0:
- raise ValueError('beta must be >= 0.')
+ raise ValueError('beta must be > 0.')
if gamma <= 0:
- raise ValueError('beta must be >= 0.')
+ raise ValueError('gamma must be > 0.')
if abs(alpha + beta + gamma - 1.0) >= 1e-9:
raise ValueError('alpha+beta+gamma must equal 1.')
| Error message
In the file networkx/generators/directed.py, line 262, the error message is wrong: the check on `gamma` reports "beta must be >= 0.", and all three messages say ">= 0." even though the checks require strictly positive values.
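For reference, assuming the function in question is `nx.scale_free_graph` (the generator in that file that takes alpha/beta/gamma), a quick way to trigger the corrected message:
```python
import networkx as nx

# gamma fails the positivity check; before the fix the message blamed
# beta and claimed ">= 0." although the check requires strictly > 0
nx.scale_free_graph(10, alpha=0.5, beta=0.5, gamma=0.0)
# ValueError: gamma must be > 0.
```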
| 2019-03-26T04:16:37 |
||
networkx/networkx | 3,361 | networkx__networkx-3361 | [
"3348"
] | c8bbdd64c0623e515141e708a76ad05d463837b3 | diff --git a/networkx/drawing/nx_agraph.py b/networkx/drawing/nx_agraph.py
--- a/networkx/drawing/nx_agraph.py
+++ b/networkx/drawing/nx_agraph.py
@@ -226,7 +226,8 @@ def graphviz_layout(G, prog='neato', root=None, args=''):
args : string, optional
Extra arguments to Graphviz layout program
- Returns : dictionary
+ Returns
+ -------
Dictionary of x, y, positions keyed by node.
Examples
@@ -238,7 +239,6 @@ def graphviz_layout(G, prog='neato', root=None, args=''):
Notes
-----
This is a wrapper for pygraphviz_layout.
-
"""
return pygraphviz_layout(G, prog=prog, root=root, args=args)
diff --git a/networkx/drawing/nx_pydot.py b/networkx/drawing/nx_pydot.py
--- a/networkx/drawing/nx_pydot.py
+++ b/networkx/drawing/nx_pydot.py
@@ -235,11 +235,26 @@ def to_pydot(N):
return P
-def graphviz_layout(G, prog='neato', root=None, **kwds):
+def graphviz_layout(G, prog='neato', root=None):
"""Create node positions using Pydot and Graphviz.
Returns a dictionary of positions keyed by node.
+ Parameters
+ ----------
+ G : NetworkX Graph
+ The graph for which the layout is computed.
+ prog : string (default: 'neato')
+ The name of the GraphViz program to use for layout.
+ Options depend on GraphViz version but may include:
+ 'dot', 'twopi', 'fdp', 'sfdp', 'circo'
+ root : Node from G or None (default: None)
+ The node of G from which to start some layout algorithms.
+
+ Returns
+ -------
+ Dictionary of (x, y) positions keyed by node.
+
Examples
--------
>>> G = nx.complete_graph(4)
@@ -250,23 +265,22 @@ def graphviz_layout(G, prog='neato', root=None, **kwds):
-----
This is a wrapper for pydot_layout.
"""
- return pydot_layout(G=G, prog=prog, root=root, **kwds)
+ return pydot_layout(G=G, prog=prog, root=root)
-# FIXME: Document the "root" parameter.
-# FIXME: Why does this function accept a variadic dict of keyword arguments
-# (i.e., "**kwds") but fail to do anything with them? This is probably
-# wrong, as unrecognized keyword arguments will be silently ignored.
-def pydot_layout(G, prog='neato', root=None, **kwds):
+def pydot_layout(G, prog='neato', root=None):
"""Create node positions using :mod:`pydot` and Graphviz.
Parameters
--------
G : Graph
NetworkX graph to be laid out.
- prog : optional[str]
- Basename of the GraphViz command with which to layout this graph.
- Defaults to `neato`: default GraphViz command for undirected graphs.
+ prog : string (default: 'neato')
+ Name of the GraphViz command to use for layout.
+ Options depend on GraphViz version but may include:
+ 'dot', 'twopi', 'fdp', 'sfdp', 'circo'
+ root : Node from G or None (default: None)
+ The node of G from which to start some layout algorithms.
Returns
--------
| Passing arguments to graphviz_layout
It seems there is no way to pass arguments to the Graphviz programs, e.g.:
`pos = graphviz_layout(G, prog='dot', args='-Grankdir=LR')`
where G is a networkx graph.
I tried in different ways, like "args='-Grankdir=LR'" but "dot" is never affected by this.
Any idea?
| Unfortunately, there are two versions of ```graphviz_layout``` in the networkx package. Which are you referring to? It might be better to call them ```pydot_layout``` and/or ```pygraphviz_layout```. Looking at the code, it seems that ```pydot_layout``` ignores any ```args``` while ```pygraphviz_layout``` at least forwards the ```args``` keyword on to the pygraphviz package. It also seems to me that the docstrings in ```pydot_layout``` and its version of the wrapper ```graphviz_layout``` should be updated to at least list the possible arguments correctly.
|
networkx/networkx | 3,362 | networkx__networkx-3362 | [
"3341"
] | 654690492ee0a5d59baa16dbf2bf51440f837ba9 | diff --git a/networkx/algorithms/centrality/closeness.py b/networkx/algorithms/centrality/closeness.py
--- a/networkx/algorithms/centrality/closeness.py
+++ b/networkx/algorithms/centrality/closeness.py
@@ -18,8 +18,7 @@
__all__ = ['closeness_centrality']
-def closeness_centrality(G, u=None, distance=None,
- wf_improved=True, reverse=False):
+def closeness_centrality(G, u=None, distance=None, wf_improved=True):
r"""Compute closeness centrality for nodes.
Closeness centrality [1]_ of a node `u` is the reciprocal of the
@@ -30,7 +29,9 @@ def closeness_centrality(G, u=None, distance=None,
C(u) = \frac{n - 1}{\sum_{v=1}^{n-1} d(v, u)},
where `d(v, u)` is the shortest-path distance between `v` and `u`,
- and `n` is the number of nodes that can reach `u`.
+ and `n` is the number of nodes that can reach `u`. Notice that the
+ closeness distance function computes the incoming distance to `u`
+ for directed graphs. To use outward distance, act on `G.reverse()`.
Notice that higher values of closeness indicate higher centrality.
@@ -63,10 +64,6 @@ def closeness_centrality(G, u=None, distance=None,
Wasserman and Faust improved formula. For single component graphs
it is the same as the original formula.
- reverse : bool, optional (default=False)
- If True and G is a digraph, reverse the edges of G, using successors
- instead of predecessors.
-
Returns
-------
nodes : dictionary
@@ -89,6 +86,10 @@ def closeness_centrality(G, u=None, distance=None,
shortest-path length will be computed using Dijkstra's algorithm with
that edge attribute as the edge weight.
+ In NetworkX 2.2 and earlier a bug caused Dijkstra's algorithm to use the
+ outward distance rather than the inward distance. If you use a 'distance'
+ keyword and a DiGraph, your results will change between v2.2 and v2.3.
+
References
----------
.. [1] Linton C. Freeman: Centrality in networks: I.
@@ -98,18 +99,18 @@ def closeness_centrality(G, u=None, distance=None,
Social Network Analysis: Methods and Applications, 1994,
Cambridge University Press.
"""
+ if G.is_directed():
+ G = G.reverse() # create a reversed graph view
+
if distance is not None:
# use Dijkstra's algorithm with specified attribute as edge weight
path_length = functools.partial(nx.single_source_dijkstra_path_length,
weight=distance)
- else: # handle either directed or undirected
- if G.is_directed() and not reverse:
- path_length = nx.single_target_shortest_path_length
- else:
- path_length = nx.single_source_shortest_path_length
+ else:
+ path_length = nx.single_source_shortest_path_length
if u is None:
- nodes = G.nodes()
+ nodes = G.nodes
else:
nodes = [u]
closeness_centrality = {}
| diff --git a/networkx/algorithms/centrality/tests/test_closeness_centrality.py b/networkx/algorithms/centrality/tests/test_closeness_centrality.py
--- a/networkx/algorithms/centrality/tests/test_closeness_centrality.py
+++ b/networkx/algorithms/centrality/tests/test_closeness_centrality.py
@@ -38,7 +38,7 @@ def test_wf_improved(self):
def test_digraph(self):
G = nx.path_graph(3, create_using=nx.DiGraph())
c = nx.closeness_centrality(G)
- cr = nx.closeness_centrality(G, reverse=True)
+ cr = nx.closeness_centrality(G.reverse())
d = {0: 0.0, 1: 0.500, 2: 0.667}
dr = {0: 0.667, 1: 0.500, 2: 0.0}
for n in sorted(self.P3):
| Closeness centrality assumes symmetric distance when "distance" parameter is used
At the beginning of `nx.closeness_centrality` code there is this logic:
```python
if distance is not None:
# use Dijkstra's algorithm with specified attribute as edge weight
path_length = functools.partial(nx.single_source_dijkstra_path_length,
weight=distance)
else: # handle either directed or undirected
if G.is_directed() and not reverse:
path_length = nx.single_target_shortest_path_length
else:
path_length = nx.single_source_shortest_path_length
```
This means that if `distance` parameter is passed then the direction of the edges is ignored, and the `reverse` parameter has no effect.
Is this a design decision? I have directed networks with non-symmetric distances and am interested in both inbound and outbound centralities. It looks to me this would be a simple change (checking `reverse` and using `nx.single_target_dijkstra_path_length` if needed). Am willing to make a PR if this is deemed an appropriate change.
| This actually looks like a bug to me -- at least on first look.
The Dijkstra method works for an undirected graph.
Does it even work for a directed graph? Say: a single edge (0, 1).
Also, there is no function ```single_target_dijkstra_path_length```, so the quick fix is different. Now that we have an efficient way to reverse graphs (nx 2.x), it might be best to simply check the reverse flag at the beginning and just use ```G=G.reverse()``` or similar to handle that case.
But the first thing to check is that a directed graph is treated correctly for weighted edges.
Thanks!
> Does it even work for a directed graph? Say: a single edge (0, 1).
It looks like so:
```python
g = nx.DiGraph()
g.add_nodes_from([0,1])
g.add_edge(0, 1, distance=1)
g.add_edge(1, 0, distance=2)
nx.closeness_centrality(g, distance="distance")
```
outputs `{0: 1.0, 1: 0.5}`, and using `nx.reverse(g)` instead of `g` transposes the numbers as expected.
If the second edge is removed, then output is `{0: 1.0, 1: 0.0}`, as expected (and also works after manually reversing).
If `nx.reverse` is efficient and not too memory taxing, I agree that adding a conditional
```
if reverse:
G = nx.reverse(G)
```
at the beginning of the computation makes more sense. My networks are small, so for me that will not be an issue, but larger networks might take a hit. Perhaps `nx.reverse_view` instead, or using `copy=False` if that is more resource friendly?
**EDIT**: Upon closer look, it seems the values are wrongly reversed when the graph is directed? The docstring claims to compute the *inbound* closeness, but in the one-edge example it seems to be computing the outbound one instead.
Your edited comment is key: the Dijkstra shortest path computes outward closeness, not inward as we advertise. I think that means that currently, if our graph has all edge weights 1, we get reversed results when using the weighted method as compared to the unweighted one. Hmmm...
Fixing this could create a backward compatibility headache. We should document it with strong warnings: for NX 2.3+, directed weighted edges give correct (inward) values, whereas for previous versions directed weighted edges give (outward) values and ```reverse``` has no effect.
Assuming ```reverse=False```, in order to compute the inward closeness, we will need to reverse the edges in the weighted case. For the unweighted case, we essentially do that inside ```single_target_shortest_path_length```. Maybe it would be cleaner to reverse the edges in both cases and use ```single_source_shortest_path_length``` for the unweighted case.
When ```reverse=True```, then we would NOT reverse the edges... A little turned around perhaps, but it stems from closeness centrality being defined as looking at inward connections.
Thoughts?
I don't see a way of fixing this without breaking backwards compatibility. On the other hand, the old results are not what was documented, so I believe it is necessary collateral damage; documentation and strong warnings can certainly alleviate the pain.
Regarding the API, I think it is perverse that `reverse=True` means that the graph is not reversed, but I don't see an easy way out of it. Options are to redefine closeness to be outward by default (breaking all the old code!), or to change the `reverse` flag name to something more sensible.
In either case, we could keep raising `DeprecationWarning` for a while until the new API/defaults settle. Since I am not a core developer, I will defer this decision to the people that are more involved with the code; I am willing to help implement the PR with whatever is agreed on.
I'm leaning toward getting rid of the ```reverse``` argument. We can reverse graphs easily now, so users can simply call it on ```G.reverse()``` instead of ```G```. Fewer arguments also make the interface that much cleaner. In the code we can reverse any directed graph input in order to provide the inward closeness as advertised.
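With the ```reverse``` argument removed, the two directions would be obtained as follows (mirroring the updated test above):
```python
import networkx as nx

G = nx.path_graph(3, create_using=nx.DiGraph())
nx.closeness_centrality(G)            # inward closeness (documented definition)
nx.closeness_centrality(G.reverse())  # outward closeness, replacing reverse=True
```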
| 2019-03-28T12:17:01 |
networkx/networkx | 3,363 | networkx__networkx-3363 | [
"2969"
] | d35b211b64def881b55b812c6b10bbeb5ce0d440 | diff --git a/networkx/readwrite/gexf.py b/networkx/readwrite/gexf.py
--- a/networkx/readwrite/gexf.py
+++ b/networkx/readwrite/gexf.py
@@ -202,24 +202,28 @@ class GEXF(object):
(float, "double"),
(bool, "boolean"),
(list, "string"),
- (dict, "string")]
-
- try: # Python 3.x
- blurb = chr(1245) # just to trigger the exception
- types.extend([
- (int, "long"),
- (str, "liststring"),
- (str, "anyURI"),
- (str, "string")])
- except ValueError: # Python 2.6+
- types.extend([
- (long, "long"),
- (str, "liststring"),
- (str, "anyURI"),
- (str, "string"),
- (unicode, "liststring"),
- (unicode, "anyURI"),
- (unicode, "string")])
+ (dict, "string"),
+ (int, "long"),
+ (str, "liststring"),
+ (str, "anyURI"),
+ (str, "string")]
+
+ # These additions to types allow writing numpy types
+ try:
+ import numpy as np
+ except ImportError:
+ pass
+ else:
+ # prepend so that python types are created upon read (last entry wins)
+ types = [(np.float64, "float"), (np.float32, "float"),
+ (np.float16, "float"), (np.float_, "float"),
+ (np.int, "int"), (np.int8, "int"),
+ (np.int16, "int"), (np.int32, "int"),
+ (np.int64, "int"), (np.uint8, "int"),
+ (np.uint16, "int"), (np.uint32, "int"),
+ (np.uint64, "int"), (np.int_, "int"),
+ (np.intc, "int"), (np.intp, "int"),
+ ] + types
xml_type = dict(types)
python_type = dict(reversed(a) for a in types)
@@ -268,6 +272,7 @@ def __init__(self, graph=None, encoding='utf-8', prettyprint=True,
# counters for edge and attribute identifiers
self.edge_id = itertools.count()
self.attr_id = itertools.count()
+ self.all_edge_ids = set()
# default attributes are stored in dictionaries
self.attr = {}
self.attr['node'] = {}
@@ -287,6 +292,11 @@ def __str__(self):
return s
def add_graph(self, G):
+ # first pass through G collecting edge ids
+ for u, v, dd in G.edges(data=True):
+ eid = dd.get('id')
+ if eid is not None:
+ self.all_edge_ids.add(make_str(eid))
# set graph attributes
if G.graph.get('mode') == 'dynamic':
mode = 'dynamic'
@@ -363,6 +373,9 @@ def edge_key_data(G):
edge_id = edge_data.pop('id', None)
if edge_id is None:
edge_id = next(self.edge_id)
+ while make_str(edge_id) in self.all_edge_ids:
+ edge_id = next(self.edge_id)
+ self.all_edge_ids.add(make_str(edge_id))
yield u, v, edge_id, edge_data
else:
for u, v, data in G.edges(data=True):
@@ -370,6 +383,9 @@ def edge_key_data(G):
edge_id = edge_data.pop('id', None)
if edge_id is None:
edge_id = next(self.edge_id)
+ while make_str(edge_id) in self.all_edge_ids:
+ edge_id = next(self.edge_id)
+ self.all_edge_ids.add(make_str(edge_id))
yield u, v, edge_id, edge_data
edges_element = Element('edges')
for u, v, key, edge_data in edge_key_data(G):
@@ -422,6 +438,8 @@ def add_attributes(self, node_or_edge, xml_obj, data, default):
if k == 'key':
k = 'networkx_key'
val_type = type(v)
+ if val_type not in self.xml_type:
+ raise TypeError('attribute value type is not allowed: %s' % val_type)
if isinstance(v, list):
# dynamic data
for val, start, end in v:
| diff --git a/networkx/readwrite/tests/test_gexf.py b/networkx/readwrite/tests/test_gexf.py
--- a/networkx/readwrite/tests/test_gexf.py
+++ b/networkx/readwrite/tests/test_gexf.py
@@ -313,6 +313,88 @@ def test_write_with_node_attributes(self):
obtained = '\n'.join(nx.generate_gexf(G))
assert_equal(expected, obtained)
+ def test_edge_id_construct(self):
+ G = nx.Graph()
+ G.add_edges_from([(0, 1, {'id': 0}), (1, 2, {'id': 2}), (2, 3)])
+ expected = """<gexf version="1.2" xmlns="http://www.gexf.net/1.2draft" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.w3.org/2001/XMLSchema-instance">
+ <graph defaultedgetype="undirected" mode="static" name="">
+ <meta>
+ <creator>NetworkX {}</creator>
+ <lastmodified>{}</lastmodified>
+ </meta>
+ <nodes>
+ <node id="0" label="0" />
+ <node id="1" label="1" />
+ <node id="2" label="2" />
+ <node id="3" label="3" />
+ </nodes>
+ <edges>
+ <edge id="0" source="0" target="1" />
+ <edge id="2" source="1" target="2" />
+ <edge id="1" source="2" target="3" />
+ </edges>
+ </graph>
+</gexf>""".format(nx.__version__, time.strftime('%d/%m/%Y'))
+ obtained = '\n'.join(nx.generate_gexf(G))
+ assert_equal(expected, obtained)
+
+ def test_numpy_type(self):
+ G = nx.path_graph(4)
+ try:
+ import numpy
+ except ImportError:
+ return
+ nx.set_node_attributes(G, {n:n for n in numpy.arange(4)}, 'number')
+ G[0][1]['edge-number'] = numpy.float64(1.1)
+
+ expected = """<gexf version="1.2" xmlns="http://www.gexf.net/1.2draft" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.w3.org/2001/XMLSchema-instance">
+ <graph defaultedgetype="undirected" mode="static" name="">
+ <attributes class="edge" mode="static">
+ <attribute id="1" title="edge-number" type="float" />
+ </attributes>
+ <attributes class="node" mode="static">
+ <attribute id="0" title="number" type="int" />
+ </attributes>
+ <meta>
+ <creator>NetworkX {}</creator>
+ <lastmodified>{}</lastmodified>
+ </meta>
+ <nodes>
+ <node id="0" label="0">
+ <attvalues>
+ <attvalue for="0" value="0" />
+ </attvalues>
+ </node>
+ <node id="1" label="1">
+ <attvalues>
+ <attvalue for="0" value="1" />
+ </attvalues>
+ </node>
+ <node id="2" label="2">
+ <attvalues>
+ <attvalue for="0" value="2" />
+ </attvalues>
+ </node>
+ <node id="3" label="3">
+ <attvalues>
+ <attvalue for="0" value="3" />
+ </attvalues>
+ </node>
+ </nodes>
+ <edges>
+ <edge id="0" source="0" target="1">
+ <attvalues>
+ <attvalue for="1" value="1.1" />
+ </attvalues>
+ </edge>
+ <edge id="1" source="1" target="2" />
+ <edge id="2" source="2" target="3" />
+ </edges>
+ </graph>
+</gexf>""".format(nx.__version__, time.strftime('%d/%m/%Y'))
+ obtained = '\n'.join(nx.generate_gexf(G))
+ assert_equal(expected, obtained)
+
def test_bool(self):
G = nx.Graph()
G.add_node(1, testattr=True)
| write_gexf not compatible with node attributes?
The following minimal example using `write_gexf` fails for me (version 1.11):
```python
import networkx as nx
g = nx.random_graphs.complete_graph(n=20)
nx.set_node_attributes(g, 'parameter', [0 for i in range(len(g))])
nx.write_gexf(g, 'tmp.gexf')
```
Fails with:
```
~/miniconda2/envs/std3/lib/python3.6/site-packages/networkx/readwrite/gexf.py in add_attributes(self, node_or_edge, xml_obj, data, default)
407 if type(v)==list:
408 # dynamic data
--> 409 for val,start,end in v:
410 val_type = type(val)
411 if start is not None or end is not None:
TypeError: 'int' object is not iterable
```
It's not a matter of the type, either:
```python
nx.set_node_attributes(g, 'parameter', ['0' for i in range(len(g))])
```
also fails. The code works fine with `write_gml` instead.
Possible duplicate of #2204, #1490?
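For context, a likely explanation (hedged): in 1.11, `set_node_attributes(G, name, values)` with a non-dict `values` assigns the whole list to every node as a single attribute value, and the GEXF writer interprets list-valued attributes as dynamic data, hence the `for val, start, end in v` failure. Under the networkx 2.x signature, a per-node dict avoids this:
```python
import networkx as nx

g = nx.complete_graph(5)
# 2.x signature: values (here a dict keyed by node) come before the name
nx.set_node_attributes(g, {n: 0 for n in g}, 'parameter')
nx.write_gexf(g, 'tmp.gexf')
```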
| This is starting to look specific enough (with 3 issues lined up as well) that I'm going to give it a Milestone 2.3 designation. We should get this done. | 2019-03-28T12:23:12 |
networkx/networkx | 3,364 | networkx__networkx-3364 | [
"3342"
] | d35b211b64def881b55b812c6b10bbeb5ce0d440 | diff --git a/networkx/algorithms/bipartite/generators.py b/networkx/algorithms/bipartite/generators.py
--- a/networkx/algorithms/bipartite/generators.py
+++ b/networkx/algorithms/bipartite/generators.py
@@ -380,7 +380,11 @@ def preferential_attachment_graph(aseq, p, create_using=None, seed=None):
References
----------
- .. [1] Jean-Loup Guillaume and Matthieu Latapy,
+ .. [1] Guillaume, J.L. and Latapy, M.,
+ Bipartite graphs as models of complex networks.
+ Physica A: Statistical Mechanics and its Applications,
+ 2006, 371(2), pp.795-813.
+ .. [2] Jean-Loup Guillaume and Matthieu Latapy,
Bipartite structure of all complex networks,
Inf. Process. Lett. 90, 2004, pg. 215-221
https://doi.org/10.1016/j.ipl.2004.03.007
| issues about the function 'preferential_attachment_graph' for bipartite networks
Hi, when using the function 'preferential_attachment_graph' to generate a random bipartite network, how should the probability 'p' be set?
I have checked the journal paper, 'Jean-Loup Guillaume and Matthieu Latapy, Bipartite structure of all complex networks, Inf. Process. Lett. 90, 2004, pg. 215-221', which gives a formula to calculate 'p'. But I wonder whether 'p' is the ratio of the number of nodes to the sum of the sizes of the cliques, not 1 minus that ratio.
Looking forward to your reply.
| Hmmm.. Perhaps the best way to figure this out is to take an extreme case and see what ```p``` is.
For example, if ```p=1``` do you get the ratio or 1 minus the ratio?
The function requests ```p```, so you can choose to set it however you like. But I think you are seeking info about what impact that choice has on the structure.
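As a hedged sketch of that experiment (assuming the first `len(aseq)` nodes form the fixed side of the bipartition):
```python
from networkx.algorithms import bipartite

aseq = [2] * 20                   # degree sequence of the fixed side
for p in (0.999, 0.001):          # probe both extremes of p
    G = bipartite.preferential_attachment_graph(aseq, p)
    created = len(G) - len(aseq)  # nodes generated on the other side
    print(p, created, "vs sum(aseq) =", sum(aseq))
```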
Thank you for your reply.
The authors made a mistake in that paper. I have found another paper of theirs, 'Guillaume, J.L. and Latapy, M., 2006. Bipartite graphs as models of complex networks. Physica A: Statistical Mechanics and its Applications, 371(2), pp.795-813.', in which p equals the ratio.
I recommend that the reference be changed to the correct paper, otherwise the current reference for this function may confuse people.
Thank you
Maybe both references would be helpful. | 2019-03-28T12:23:57 |
|
networkx/networkx | 3,365 | networkx__networkx-3365 | [
"3295"
] | d35b211b64def881b55b812c6b10bbeb5ce0d440 | diff --git a/networkx/drawing/nx_pylab.py b/networkx/drawing/nx_pylab.py
--- a/networkx/drawing/nx_pylab.py
+++ b/networkx/drawing/nx_pylab.py
@@ -674,7 +674,7 @@ def to_marker_edge(marker_size, marker):
shrink_source = 0 # space from source to tail
shrink_target = 0 # space from head to target
if cb.iterable(node_size): # many node sizes
- src_node, dst_node = edgelist[i]
+ src_node, dst_node = edgelist[i][:2]
index_node = nodelist.index(dst_node)
marker_size = node_size[index_node]
shrink_target = to_marker_edge(marker_size, node_shape)
| diff --git a/networkx/drawing/tests/test_pylab.py b/networkx/drawing/tests/test_pylab.py
--- a/networkx/drawing/tests/test_pylab.py
+++ b/networkx/drawing/tests/test_pylab.py
@@ -102,6 +102,12 @@ def test_empty_graph(self):
G = nx.Graph()
nx.draw(G)
+ def test_multigraph_edgelist_tuples(self):
+ # See Issue #3295
+ G = nx.path_graph(3, create_using=nx.MultiDiGraph)
+ nx.draw_networkx(G, edgelist=[(0, 1, 0)])
+ nx.draw_networkx(G, edgelist=[(0, 1, 0)], node_size=[10, 20])
+
def test_alpha_iter(self):
pos = nx.random_layout(self.G)
# with fewer alpha elements than nodes
| `draw_networkx` behaves differently when `node_size` is provided
If I call `draw_networkx` with something like
```
nx.draw_networkx(g, edgelist=[(0, 1, 0)])
```
that is, with the edgelist a list of tuples of length 3, all works well. However, if I in addition pass a `node_size` argument that is an iterable, the behaviour changes and it expects the edgelist to consist of tuples of length 2:
```
nx.draw_networkx(g, edgelist=[(0, 1, 0)], node_size=[10, 20])
```
```
/home/ruben/anaconda3/lib/python3.6/site-packages/networkx/drawing/nx_pylab.py in draw_networkx_edges(G, pos, edgelist, width, edge_color, style, alpha, arrowstyle, arrowsize, edge_cmap, edge_vmin, edge_vmax, ax, arrows, label, node_size, nodelist, node_shape, **kwds)
659 shrink_target = 0 # space from head to target
660 if cb.iterable(node_size): # many node sizes
--> 661 src_node, dst_node = edgelist[i]
662 index_node = nodelist.index(dst_node)
663 marker_size = node_size[index_node]
ValueError: too many values to unpack (expected 2)
```
It may have something to do with an assumption about the graph type; my graph was of type `MultiDiGraph`.
This is not a big issue and easily worked around, but I thought I'd point out the inconsistent behaviour, if indeed it is.
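A minimal reproduction, mirroring the test that was later added:
```python
import networkx as nx

G = nx.path_graph(3, create_using=nx.MultiDiGraph)
nx.draw_networkx(G, edgelist=[(0, 1, 0)])                          # fine
nx.draw_networkx(G, edgelist=[(0, 1, 0)], node_size=[10, 20, 30])  # ValueError
```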
| Thank you for reporting this!
It is a bug. The offending line should allow edge 3-tuples from multigraphs by the code:
src_node, dst_node = edgelist[i][:2]
We should check that section of code for other similar bugs and add a test that reveals this issue. | 2019-03-28T12:25:05 |
networkx/networkx | 3,366 | networkx__networkx-3366 | [
"3322"
] | 8095ad5a2f244d7bfe465996157cbe7d1b6bb402 | diff --git a/networkx/algorithms/graphical.py b/networkx/algorithms/graphical.py
--- a/networkx/algorithms/graphical.py
+++ b/networkx/algorithms/graphical.py
@@ -35,7 +35,7 @@ def is_graphical(sequence, method='eg'):
sequence : list or iterable container
A sequence of integer node degrees
- method : "eg" | "hh"
+ method : "eg" | "hh" (default: 'eg')
The method used to validate the degree sequence.
"eg" corresponds to the Erdős-Gallai algorithm, and
"hh" to the Havel-Hakimi algorithm.
@@ -73,7 +73,17 @@ def is_graphical(sequence, method='eg'):
def _basic_graphical_tests(deg_sequence):
# Sort and perform some simple tests on the sequence
if not nx.utils.is_list_of_ints(deg_sequence):
- raise nx.NetworkXUnfeasible
+ # check for a type that can be converted to int. Like numpy.int64
+ ds = []
+ for d in deg_sequence:
+ try:
+ intd = int(d)
+ except ValueError:
+ raise nx.NetworkXError("Invalid type in deg_sequence: not an integer")
+ if intd != d:
+ raise nx.NetworkXError("Invalid type in deg_sequence: not an integer")
+ ds.append(intd)
+ deg_sequence = ds
p = len(deg_sequence)
num_degs = [0] * p
dmax, dmin, dsum, n = 0, p, 0, 0
| diff --git a/networkx/algorithms/tests/test_graphical.py b/networkx/algorithms/tests/test_graphical.py
--- a/networkx/algorithms/tests/test_graphical.py
+++ b/networkx/algorithms/tests/test_graphical.py
@@ -1,5 +1,5 @@
#!/usr/bin/env python
-from nose.tools import *
+from nose.tools import assert_true, assert_false, raises
from nose import SkipTest
import networkx as nx
@@ -31,7 +31,14 @@ def test_string_input():
def test_negative_input():
assert_false(nx.is_graphical([-1], 'hh'))
assert_false(nx.is_graphical([-1], 'eg'))
- assert_false(nx.is_graphical([72.5], 'eg'))
+
+@raises(nx.NetworkXException)
+def test_non_integer_input():
+ a = nx.is_graphical([72.5], 'eg')
+
+@raises(nx.NetworkXException)
+def test_non_integer_input():
+ a = nx.is_graphical([72.5], 'hh')
class TestAtlas(object):
@@ -133,3 +140,24 @@ def test_pseudo_sequence():
# Test for negative integer in sequence
seq = [1000, 3, 3, 3, 3, 2, 2, -2, 1, 1]
assert_false(nx.is_pseudographical(seq))
+
+def test_numpy_degree_sequence():
+ try:
+ import numpy
+ except ImportError:
+ return
+ ds = numpy.array([1, 2, 2, 2, 1], dtype=numpy.int64)
+ assert_true(nx.is_graphical(ds, 'eg'))
+ assert_true(nx.is_graphical(ds, 'hh'))
+ ds = numpy.array([1, 2, 2, 2, 1], dtype=numpy.float64)
+ assert_true(nx.is_graphical(ds, 'eg'))
+ assert_true(nx.is_graphical(ds, 'hh'))
+
+@raises(nx.NetworkXError, AssertionError)
+def test_numpy_noninteger_degree_sequence():
+ try:
+ import numpy
+ except ImportError:
+ raise nx.NetworkXError('make test pass by raising exception')
+ ds = numpy.array([1.1, 2, 2, 2, 1], dtype=numpy.float64)
+ a = nx.is_graphical(ds, 'eg')
| is_graphical treats numpy int as not integers
There is a minor issue with using the `is_graphical` function with a numpy array.
When passing a numpy array of integers as the argument, `is_graphical` automatically returns `False` because the numpy `int` type is not an instance of the python `int` type.
Also, there is no warning regarding the data type of the input, so I was very confused for a while when a clearly graphical degree sequence passed to `is_graphical` returned `False`.
I would suggest considering numpy int as a valid integer type, otherwise I have to recreate a list of python integers from the array of numpy integers.
Also I think it is reasonable to raise an exception or print a warning when the data type of the input degree sequence is not a valid integer, because this case is fundamentally different from not being able to pass HH or EG tests.
```
>>> import networkx as nx
>>> import numpy as np
>>> a = np.array([2,2,2])
>>> b = [2,2,2]
>>> nx.is_graphical(a)
False
>>> nx.is_graphical(b)
True
```
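Until this is fixed, a simple workaround (continuing the session above) is to cast back to python ints:
```
>>> nx.is_graphical([int(d) for d in a])
True
```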
| Thanks for this. | 2019-03-28T12:25:46 |
networkx/networkx | 3,378 | networkx__networkx-3378 | [
"3374"
] | fca1b1307aafd0444d2da91de451f1b995ca99fb | diff --git a/networkx/release.py b/networkx/release.py
--- a/networkx/release.py
+++ b/networkx/release.py
@@ -214,6 +214,7 @@ def get_info(dynamic=True):
'Programming Language :: Python :: 3.5',
'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: 3.7',
+ 'Programming Language :: Python :: 3 :: Only',
'Topic :: Software Development :: Libraries :: Python Modules',
'Topic :: Scientific/Engineering :: Bio-Informatics',
'Topic :: Scientific/Engineering :: Information Analysis',
| 2.3rc1 drops python 2.7 support in a minor version
People using `NetworkX~=2.2` as their pip version specifier, but still on Python 2.7, will get this message now that 2.3rc1 is out:
```
NetworkX requires Python 3.5 or later (2.7 detected).
```
This happens with no changes to their code.
Would dropping Python 2.7 support be enough of a change to necessitate a new major version of NetworkX?
| Hmmm.. I suspect this is happening with many other packages. For example, pandas and numpy both planned to stop supporting python2. Neither plans to change the major release number -- numpy-1.17 will no longer support python2 and Pandas will have 0.24 be the last to support python2.
Maybe it will work for them because of people using ~=0.24.2 which is equivalent to >=0.24.2, == 0.24.*
Maybe we can do the same with people expected to use ~=2.2.0. But we haven't been using 3 numbers for our releases.
Thoughts?
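For reference, the PEP 440 compatible-release semantics being discussed:
```
networkx ~= 2.2     # equivalent to: >= 2.2, == 2.*      (so a 2.3 release matches)
networkx ~= 2.2.0   # equivalent to: >= 2.2.0, == 2.2.*  (patch releases only)
```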
Going to `major.minor.patch` usually implies that that [Semantic Versioning](https://semver.org) is being used. Is that what you're proposing?
Probably the simplest way to deal with this would be to make a note in the readme or docs about the versioning scheme and forwards/backwards compatibility expectations that a user should have.
Would something like this work for you:
```
We don't use semantic versioning. The first number indicates that we have
made a major API break (e.g., 1.x to 2.x), which has happened once and probably
won't happen again for some time. The point releases are new versions and may
contain minor API breakage. Usually, this happens after a one cycle deprecation
period. API changes are documented here:
https://networkx.github.io/documentation/stable/release/index.html
``` | 2019-04-03T17:49:33 |
|
networkx/networkx | 3,395 | networkx__networkx-3395 | [
"3394"
] | fedc37f8cb342b2f630f7bbd080ebe076a3217e9 | diff --git a/networkx/drawing/nx_pylab.py b/networkx/drawing/nx_pylab.py
--- a/networkx/drawing/nx_pylab.py
+++ b/networkx/drawing/nx_pylab.py
@@ -181,18 +181,18 @@ def draw_networkx(G, pos=None, arrows=True, with_labels=True, **kwds):
Size of nodes. If an array is specified it must be the
same length as nodelist.
- node_color : color string, or array of floats, (default='#1f78b4')
- Node color. Can be a single color format string,
- or a sequence of colors with the same length as nodelist.
- If numeric values are specified they will be mapped to
- colors using the cmap and vmin,vmax parameters. See
+ node_color : color or array of colors (default='#1f78b4')
+ Node color. Can be a single color or a sequence of colors with the same
+ length as nodelist. Color can be string, or rgb (or rgba) tuple of
+ floats from 0-1. If numeric values are specified they will be
+ mapped to colors using the cmap and vmin,vmax parameters. See
matplotlib.scatter for more details.
node_shape : string, optional (default='o')
The shape of the node. Specification is as matplotlib.scatter
marker, one of 'so^>v<dph8'.
- alpha : float, optional (default=1.0)
+ alpha : float, optional (default=None)
The node and edge transparency
cmap : Matplotlib colormap, optional (default=None)
@@ -207,11 +207,11 @@ def draw_networkx(G, pos=None, arrows=True, with_labels=True, **kwds):
width : float, optional (default=1.0)
Line width of edges
- edge_color : color string, or array of floats (default='r')
- Edge color. Can be a single color format string,
- or a sequence of colors with the same length as edgelist.
- If numeric values are specified they will be mapped to
- colors using the edge_cmap and edge_vmin,edge_vmax parameters.
+ edge_color : color or array of colors (default='k')
+ Edge color. Can be a single color or a sequence of colors with the same
+ length as edgelist. Color can be string, or rgb (or rgba) tuple of
+ floats from 0-1. If numeric values are specified they will be
+ mapped to colors using the edge_cmap and edge_vmin,edge_vmax parameters.
edge_cmap : Matplotlib colormap, optional (default=None)
Colormap for mapping intensities of edges
@@ -288,7 +288,7 @@ def draw_networkx_nodes(G, pos,
node_size=300,
node_color='#1f78b4',
node_shape='o',
- alpha=1.0,
+ alpha=None,
cmap=None,
vmin=None,
vmax=None,
@@ -320,11 +320,11 @@ def draw_networkx_nodes(G, pos,
Size of nodes (default=300). If an array is specified it must be the
same length as nodelist.
- node_color : color string, or array of floats
- Node color. Can be a single color format string (default='#1f78b4'),
- or a sequence of colors with the same length as nodelist.
- If numeric values are specified they will be mapped to
- colors using the cmap and vmin,vmax parameters. See
+ node_color : color or array of colors (default='#1f78b4')
+ Node color. Can be a single color or a sequence of colors with the same
+ length as nodelist. Color can be string, or rgb (or rgba) tuple of
+ floats from 0-1. If numeric values are specified they will be
+ mapped to colors using the cmap and vmin,vmax parameters. See
matplotlib.scatter for more details.
node_shape : string
@@ -332,7 +332,7 @@ def draw_networkx_nodes(G, pos,
marker, one of 'so^>v<dph8' (default='o').
alpha : float or array of floats
- The node transparency. This can be a single alpha value (default=1.0),
+ The node transparency. This can be a single alpha value (default=None),
in which case it will be applied to all the nodes of color. Otherwise,
if it is an array, the elements of alpha will be applied to the colors
in order (cycling through alpha multiple times if necessary).
@@ -431,7 +431,7 @@ def draw_networkx_edges(G, pos,
width=1.0,
edge_color='k',
style='solid',
- alpha=1.0,
+ alpha=None,
arrowstyle='-|>',
arrowsize=10,
edge_cmap=None,
@@ -464,17 +464,17 @@ def draw_networkx_edges(G, pos,
width : float, or array of floats
Line width of edges (default=1.0)
- edge_color : color string, or array of floats
- Edge color. Can be a single color format string (default='r'),
- or a sequence of colors with the same length as edgelist.
- If numeric values are specified they will be mapped to
- colors using the edge_cmap and edge_vmin,edge_vmax parameters.
+ edge_color : color or array of colors (default='k')
+ Edge color. Can be a single color or a sequence of colors with the same
+ length as edgelist. Color can be string, or rgb (or rgba) tuple of
+ floats from 0-1. If numeric values are specified they will be
+ mapped to colors using the edge_cmap and edge_vmin,edge_vmax parameters.
style : string
Edge line style (default='solid') (solid|dashed|dotted,dashdot)
alpha : float
- The edge transparency (default=1.0)
+ The edge transparency (default=None)
edge_ cmap : Matplotlib colormap
Colormap for mapping intensities of edges (default=None)
@@ -573,69 +573,42 @@ def draw_networkx_edges(G, pos,
if nodelist is None:
nodelist = list(G.nodes())
+ # FancyArrowPatch handles color=None different from LineCollection
+ if edge_color is None:
+ edge_color = 'k'
+
# set edge positions
edge_pos = np.asarray([(pos[e[0]], pos[e[1]]) for e in edgelist])
- if not cb.iterable(width):
- lw = (width,)
- else:
- lw = width
-
- if not is_string_like(edge_color) \
- and cb.iterable(edge_color) \
- and len(edge_color) == len(edge_pos):
- if np.alltrue([is_string_like(c) for c in edge_color]):
- # (should check ALL elements)
- # list of color letters such as ['k','r','k',...]
- edge_colors = tuple([colorConverter.to_rgba(c, alpha)
- for c in edge_color])
- elif np.alltrue([not is_string_like(c) for c in edge_color]):
- # If color specs are given as (rgb) or (rgba) tuples, we're OK
- if np.alltrue([cb.iterable(c) and len(c) in (3, 4)
- for c in edge_color]):
- edge_colors = tuple(edge_color)
+ # Check if edge_color is an array of floats and map to edge_cmap.
+ # This is the only case handled differently from matplotlib
+ if cb.iterable(edge_color) and (len(edge_color) == len(edge_pos)) \
+ and np.alltrue([isinstance(c,Number) for c in edge_color]):
+ if edge_cmap is not None:
+ assert(isinstance(edge_cmap, Colormap))
else:
- # numbers (which are going to be mapped with a colormap)
- edge_colors = None
- else:
- raise ValueError('edge_color must contain color names or numbers')
- else:
- if is_string_like(edge_color) or len(edge_color) == 1:
- edge_colors = (colorConverter.to_rgba(edge_color, alpha), )
- else:
- msg = 'edge_color must be a color or list of one color per edge'
- raise ValueError(msg)
+ edge_cmap = plt.get_cmap()
+ if edge_vmin is None:
+ edge_vmin = min(edge_color)
+ if edge_vmax is None:
+ edge_vmax = max(edge_color)
+ color_normal = Normalize(vmin=edge_vmin, vmax=edge_vmax)
+ edge_color = [edge_cmap(color_normal(e)) for e in edge_color]
if (not G.is_directed() or not arrows):
edge_collection = LineCollection(edge_pos,
- colors=edge_colors,
- linewidths=lw,
+ colors=edge_color,
+ linewidths=width,
antialiaseds=(1,),
linestyle=style,
transOffset=ax.transData,
+ alpha=alpha
)
edge_collection.set_zorder(1) # edges go behind nodes
edge_collection.set_label(label)
ax.add_collection(edge_collection)
- # Note: there was a bug in mpl regarding the handling of alpha values
- # for each line in a LineCollection. It was fixed in matplotlib by
- # r7184 and r7189 (June 6 2009). We should then not set the alpha
- # value globally, since the user can instead provide per-edge alphas
- # now. Only set it globally if provided as a scalar.
- if isinstance(alpha, Number):
- edge_collection.set_alpha(alpha)
-
- if edge_colors is None:
- if edge_cmap is not None:
- assert(isinstance(edge_cmap, Colormap))
- edge_collection.set_array(np.asarray(edge_color))
- edge_collection.set_cmap(edge_cmap)
- if edge_vmin is not None or edge_vmax is not None:
- edge_collection.set_clim(edge_vmin, edge_vmax)
- else:
- edge_collection.autoscale()
return edge_collection
arrow_collection = None
@@ -654,23 +627,12 @@ def to_marker_edge(marker_size, marker):
# Draw arrows with `matplotlib.patches.FancyarrowPatch`
arrow_collection = []
mutation_scale = arrowsize # scale factor of arrow head
- arrow_colors = edge_colors
- if arrow_colors is None:
- if edge_cmap is not None:
- assert(isinstance(edge_cmap, Colormap))
- else:
- edge_cmap = plt.get_cmap() # default matplotlib colormap
- if edge_vmin is None:
- edge_vmin = min(edge_color)
- if edge_vmax is None:
- edge_vmax = max(edge_color)
- color_normal = Normalize(vmin=edge_vmin, vmax=edge_vmax)
+ # FancyArrowPatch doesn't handle color strings
+ arrow_colors = colorConverter.to_rgba_array(edge_color,alpha)
for i, (src, dst) in enumerate(edge_pos):
x1, y1 = src
x2, y2 = dst
- arrow_color = None
- line_width = None
shrink_source = 0 # space from source to tail
shrink_target = 0 # space from head to target
if cb.iterable(node_size): # many node sizes
@@ -680,16 +642,25 @@ def to_marker_edge(marker_size, marker):
shrink_target = to_marker_edge(marker_size, node_shape)
else:
shrink_target = to_marker_edge(node_size, node_shape)
- if arrow_colors is None:
- arrow_color = edge_cmap(color_normal(edge_color[i]))
- elif len(arrow_colors) > 1:
- arrow_color = arrow_colors[i]
+
+ if cb.iterable(arrow_colors):
+ if len(arrow_colors) == len(edge_pos):
+ arrow_color = arrow_colors[i]
+ elif len(arrow_colors)==1:
+ arrow_color = arrow_colors[0]
+ else: # Cycle through colors
+ arrow_color = arrow_colors[i%len(arrow_colors)]
else:
- arrow_color = arrow_colors[0]
- if len(lw) > 1:
- line_width = lw[i]
+ arrow_color = edge_color
+
+ if cb.iterable(width):
+ if len(width) == len(edge_pos):
+ line_width = width[i]
+ else:
+ line_width = width[i%len(width)]
else:
- line_width = lw[0]
+ line_width = width
+
arrow = FancyArrowPatch((x1, y1), (x2, y2),
arrowstyle=arrowstyle,
shrinkA=shrink_source,
@@ -736,7 +707,7 @@ def draw_networkx_labels(G, pos,
font_color='k',
font_family='sans-serif',
font_weight='normal',
- alpha=1.0,
+ alpha=None,
bbox=None,
ax=None,
**kwds):
@@ -768,8 +739,8 @@ def draw_networkx_labels(G, pos,
font_weight : string
Font weight (default='normal')
- alpha : float
- The text transparency (default=1.0)
+ alpha : float or None
+ The text transparency (default=None)
ax : Matplotlib Axes object, optional
Draw the graph in the specified Matplotlib axes.
@@ -833,7 +804,7 @@ def draw_networkx_labels(G, pos,
clip_on=True,
)
text_items[n] = t
-
+
plt.tick_params(
axis='both',
which='both',
@@ -852,7 +823,7 @@ def draw_networkx_edge_labels(G, pos,
font_color='k',
font_family='sans-serif',
font_weight='normal',
- alpha=1.0,
+ alpha=None,
bbox=None,
ax=None,
rotate=True,
@@ -871,8 +842,8 @@ def draw_networkx_edge_labels(G, pos,
ax : Matplotlib Axes object, optional
Draw the graph in the specified Matplotlib axes.
- alpha : float
- The text transparency (default=1.0)
+ alpha : float or None
+ The text transparency (default=None)
edge_labels : dictionary
Edge labels in a dictionary keyed by edge two-tuple of text
| diff --git a/networkx/drawing/tests/test_pylab.py b/networkx/drawing/tests/test_pylab.py
--- a/networkx/drawing/tests/test_pylab.py
+++ b/networkx/drawing/tests/test_pylab.py
@@ -57,7 +57,56 @@ def test_arrows(self):
plt.show()
def test_edge_colors_and_widths(self):
- nx.draw_random(self.G, edgelist=[(0, 1), (0, 2)], width=[1, 2], edge_colors=['r', 'b'])
+ pos = nx.circular_layout(self.G)
+ for G in (self.G, self.G.to_directed()):
+ nx.draw_networkx_nodes(G, pos, node_color=[(1.0, 1.0, 0.2, 0.5)])
+ nx.draw_networkx_labels(G, pos)
+ # edge with default color and width
+ nx.draw_networkx_edges(G, pos, edgelist=[(0, 1)],
+ width=None,
+ edge_color=None)
+ # edges with global color strings and widths in lists
+ nx.draw_networkx_edges(G, pos, edgelist=[(0, 2), (0, 3)],
+ width=[3],
+ edge_color=['r'])
+ # edges with color strings and widths for each edge
+ nx.draw_networkx_edges(G, pos, edgelist=[(0, 2), (0, 3)],
+ width=[1, 3],
+ edge_color=['r', 'b'])
+ # edges with fewer color strings and widths than edges
+ nx.draw_networkx_edges(G, pos,
+ edgelist=[(1, 2), (1, 3), (2, 3), (3, 4)],
+ width=[1, 3],
+ edge_color=['g', 'm', 'c'])
+ # edges with more color strings and widths than edges
+ nx.draw_networkx_edges(G, pos, edgelist=[(3, 4)],
+ width=[1, 2, 3, 4],
+ edge_color=['r', 'b', 'g', 'k'])
+ # with rgb tuple and 3 edges - is interpreted with cmap
+ nx.draw_networkx_edges(G, pos, edgelist=[(4, 5), (5, 6), (6, 7)],
+ edge_color=(1.0, 0.4, 0.3))
+ # with rgb tuple in list
+ nx.draw_networkx_edges(G, pos, edgelist=[(7, 8), (8, 9)],
+ edge_color=[(0.4, 1.0, 0.0)])
+ # with rgba tuple and 4 edges - is interpretted with cmap
+ nx.draw_networkx_edges(G, pos, edgelist=[(9, 10), (10, 11),
+ (10, 12), (10, 13)],
+ edge_color=(0.0, 1.0, 1.0, 0.5))
+ # with rgba tuple in list
+ nx.draw_networkx_edges(G, pos, edgelist=[(9, 10), (10, 11),
+ (10, 12), (10, 13)],
+ edge_color=[(0.0, 1.0, 1.0, 0.5)])
+ # with color string and global alpha
+ nx.draw_networkx_edges(G, pos, edgelist=[(11, 12), (11, 13)],
+ edge_color='purple', alpha=0.2)
+ # with color string in a list
+ nx.draw_networkx_edges(G, pos, edgelist=[(11, 12), (11, 13)],
+ edge_color=['purple'])
+ # with single edge and hex color string
+ nx.draw_networkx_edges(G, pos, edgelist=[(12, 13)],
+ edge_color='#1f78b4f0')
+
+ plt.show()
def test_labels_and_colors(self):
G = nx.cubical_graph()
@@ -67,12 +116,12 @@ def test_labels_and_colors(self):
nodelist=[0, 1, 2, 3],
node_color='r',
node_size=500,
- alpha=0.8)
+ alpha=0.75)
nx.draw_networkx_nodes(G, pos,
nodelist=[4, 5, 6, 7],
node_color='b',
node_size=500,
- alpha=0.8)
+ alpha=[0.25, 0.5, 0.75, 1.0])
# edges
nx.draw_networkx_edges(G, pos, width=1.0, alpha=0.5)
nx.draw_networkx_edges(G, pos,
| Parameter edge_color is inconsistent with node_color and description
When using draw_networkx_nodes and draw_networkx_edges, the formats supported for the colors (node_color and edge_color) by the two methods are inconsistent.
The description of the node_color and edge_color parameters is also inconsistent with what is supported.
This is OK, although RGBA is not mentioned as a supported format:
```
import networkx as nx
G = nx.complete_graph(5)
nx.draw_networkx_nodes(G, nx.spring_layout(G), node_color=(0.0,0.0,1.0,1.0))
nx.draw_networkx_edges(G, nx.spring_layout(G), edge_color='r')
```
This throws an error:
```
import networkx as nx
G = nx.complete_graph(5)
nx.draw_networkx_nodes(G, nx.spring_layout(G), node_color=(0.0,0.0,1.0,1.0))
nx.draw_networkx_edges(G, nx.spring_layout(G), edge_color=(0.0,0.0,1.0,1.0))
```
[matplotlib.pyplot.scatter](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.scatter.html) says:
> Note that c should not be a single numeric RGB or RGBA sequence because that is indistinguishable from an array of values to be colormapped. If you want to specify the same RGB or RGBA value for all points, use a 2-D array with a single row. Otherwise, value- matching will have precedence in case of a size matching with x and y.
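Following that note, the "rgba tuple in list" cases exercised in the test patch above suggest a workaround: with the patched drawing code, wrapping the tuple in a list makes it unambiguously a single color rather than an array of values to be colormapped.
```
import networkx as nx
G = nx.complete_graph(5)
pos = nx.spring_layout(G)
nx.draw_networkx_nodes(G, pos, node_color=[(0.0, 0.0, 1.0, 1.0)])
nx.draw_networkx_edges(G, pos, edge_color=[(0.0, 0.0, 1.0, 1.0)])
```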
| 2019-04-17T00:24:50 |
|
networkx/networkx | 3,461 | networkx__networkx-3461 | [
"3466",
"3475"
] | 98328a04c8ee7b03e31b27c00b03fc2f5975160e | diff --git a/networkx/algorithms/core.py b/networkx/algorithms/core.py
--- a/networkx/algorithms/core.py
+++ b/networkx/algorithms/core.py
@@ -2,12 +2,14 @@
# Aric Hagberg <[email protected]>
# Dan Schult <[email protected]>
# Pieter Swart <[email protected]>
+# Antoine Allard <[email protected]>
# All rights reserved.
# BSD license.
#
# Authors: Dan Schult ([email protected])
# Jason Grout ([email protected])
# Aric Hagberg ([email protected])
+# Antoine Allard ([email protected])
"""
Find the k-cores of a graph.
@@ -30,13 +32,20 @@
D-cores: Measuring Collaboration of Directed Graphs Based on Degeneracy
Christos Giatsidis, Dimitrios M. Thilikos, Michalis Vazirgiannis, ICDM 2011.
http://www.graphdegeneracy.org/dcores_ICDM_2011.pdf
+
+Multi-scale structure and topological anomaly detection via a new network \
+statistic: The onion decomposition
+L. Hébert-Dufresne, J. A. Grochow, and A. Allard
+Scientific Reports 6, 31708 (2016)
+http://doi.org/10.1038/srep31708
+
"""
import networkx as nx
from networkx.exception import NetworkXError
from networkx.utils import not_implemented_for
-__all__ = ['core_number', 'find_cores', 'k_core',
- 'k_shell', 'k_crust', 'k_corona']
+__all__ = ['core_number', 'find_cores', 'k_core', 'k_shell',
+ 'k_crust', 'k_corona', 'k_truss', 'onion_layers']
@not_implemented_for('multigraph')
@@ -353,3 +362,161 @@ def k_corona(G, k, core_number=None):
def func(v, k, c):
return c[v] == k and k == sum(1 for w in G[v] if c[w] >= k)
return _core_subgraph(G, func, k, core_number)
+
+
+@not_implemented_for('directed')
+@not_implemented_for('multigraph')
+def k_truss(G, k):
+ """Returns the k-truss of `G`.
+
+ The k-truss is the maximal subgraph of `G` which contains at least three
+ vertices where every edge is incident to at least `k` triangles.
+
+ Parameters
+ ----------
+ G : NetworkX graph
+ An undirected graph
+ k : int
+ The order of the truss
+
+ Returns
+ -------
+ H : NetworkX graph
+ The k-truss subgraph
+
+ Raises
+ ------
+ NetworkXError
+
+ The k-truss is not defined for graphs with self loops or parallel edges
+ or directed graphs.
+
+ Notes
+ -----
+ A k-clique is a (k-2)-truss and a k-truss is a (k+1)-core.
+
+ Not implemented for digraphs or graphs with parallel edges or self loops.
+
+ Graph, node, and edge attributes are copied to the subgraph.
+
+ References
+ ----------
+ .. [1] Bounds and Algorithms for k-truss. Paul Burkhardt, Vance Faber,
+ David G. Harris, 2018. https://arxiv.org/abs/1806.05523v2
+ .. [2] Trusses: Cohesive Subgraphs for Social Network Analysis. Jonathan
+ Cohen, 2005.
+ """
+ H = G.copy()
+
+ n_dropped = 1
+ while n_dropped > 0:
+ n_dropped = 0
+ to_drop = []
+ seen = set()
+ for u in H:
+ nbrs_u = set(H[u])
+ seen.add(u)
+ new_nbrs = [v for v in nbrs_u if v not in seen]
+ for v in new_nbrs:
+ if len(nbrs_u & set(H[v])) < k:
+ to_drop.append((u, v))
+ H.remove_edges_from(to_drop)
+ n_dropped = len(to_drop)
+ H.remove_nodes_from(list(nx.isolates(H)))
+
+ return H
+
+
+@not_implemented_for('multigraph')
+@not_implemented_for('directed')
+def onion_layers(G):
+ """Returns the layer of each vertex in the onion decomposition of the graph.
+
+ The onion decomposition refines the k-core decomposition by providing
+ information on the internal organization of each k-shell. It is usually
+ used alongside the `core numbers`.
+
+ Parameters
+ ----------
+ G : NetworkX graph
+ A simple graph without self loops or parallel edges
+
+ Returns
+ -------
+ od_layers : dictionary
+ A dictionary keyed by vertex to the onion layer. The layers are
+ contiguous integers starting at 1.
+
+ Raises
+ ------
+ NetworkXError
+ The onion decomposition is not implemented for graphs with self loops
+ or parallel edges or for directed graphs.
+
+ Notes
+ -----
+ Not implemented for graphs with parallel edges or self loops.
+
+ Not implemented for directed graphs.
+
+ See Also
+ --------
+ core_number
+
+ References
+ ----------
+ .. [1] Multi-scale structure and topological anomaly detection via a new
+ network statistic: The onion decomposition
+ L. Hébert-Dufresne, J. A. Grochow, and A. Allard
+ Scientific Reports 6, 31708 (2016)
+ http://doi.org/10.1038/srep31708
+ .. [2] Percolation and the effective structure of complex networks
+ A. Allard and L. Hébert-Dufresne
+ Physical Review X 9, 011023 (2019)
+ http://doi.org/10.1103/PhysRevX.9.011023
+ """
+ if nx.number_of_selfloops(G) > 0:
+ msg = ('Input graph contains self loops which is not permitted; '
+ 'Consider using G.remove_edges_from(nx.selfloop_edges(G)).')
+ raise NetworkXError(msg)
+ # Dictionaries to register the k-core/onion decompositions.
+ od_layers = {}
+ # Adjacency list
+ neighbors = {v: list(nx.all_neighbors(G, v)) for v in G}
+ # Effective degree of nodes.
+ degrees = dict(G.degree())
+ # Performs the onion decomposition.
+ current_core = 1
+ current_layer = 1
+ # Sets vertices of degree 0 to layer 1, if any.
+ isolated_nodes = [v for v in nx.isolates(G)]
+ if len(isolated_nodes) > 0:
+ for v in isolated_nodes:
+ od_layers[v] = current_layer
+ degrees.pop(v)
+ current_layer = 2
+ # Finds the layer for the remaining nodes.
+ while len(degrees) > 0:
+ # Sets the order for looking at nodes.
+ nodes = sorted(degrees, key=degrees.get)
+ # Sets properly the current core.
+ min_degree = degrees[nodes[0]]
+ if min_degree > current_core:
+ current_core = min_degree
+ # Identifies vertices in the current layer.
+ this_layer = []
+ for n in nodes:
+ if degrees[n] > current_core:
+ break
+ this_layer.append(n)
+ # Identifies the core/layer of the vertices in the current layer.
+ for v in this_layer:
+ od_layers[v] = current_layer
+ for n in neighbors[v]:
+ neighbors[n].remove(v)
+ degrees[n] = degrees[n] - 1
+ degrees.pop(v)
+ # Updates the layer count.
+ current_layer = current_layer + 1
+ # Returns the dictionaries containing the onion layer of each vertices.
+ return od_layers
diff --git a/networkx/drawing/nx_pylab.py b/networkx/drawing/nx_pylab.py
--- a/networkx/drawing/nx_pylab.py
+++ b/networkx/drawing/nx_pylab.py
@@ -414,7 +414,7 @@ def draw_networkx_nodes(G, pos,
linewidths=linewidths,
edgecolors=edgecolors,
label=label)
- plt.tick_params(
+ ax.tick_params(
axis='both',
which='both',
bottom=False,
@@ -550,7 +550,6 @@ def draw_networkx_edges(G, pos,
try:
import matplotlib
import matplotlib.pyplot as plt
- import matplotlib.cbook as cb
from matplotlib.colors import colorConverter, Colormap, Normalize
from matplotlib.collections import LineCollection
from matplotlib.patches import FancyArrowPatch
@@ -582,7 +581,7 @@ def draw_networkx_edges(G, pos,
# Check if edge_color is an array of floats and map to edge_cmap.
# This is the only case handled differently from matplotlib
- if cb.iterable(edge_color) and (len(edge_color) == len(edge_pos)) \
+ if np.iterable(edge_color) and (len(edge_color) == len(edge_pos)) \
and np.alltrue([isinstance(c,Number) for c in edge_color]):
if edge_cmap is not None:
assert(isinstance(edge_cmap, Colormap))
@@ -635,7 +634,7 @@ def to_marker_edge(marker_size, marker):
x2, y2 = dst
shrink_source = 0 # space from source to tail
shrink_target = 0 # space from head to target
- if cb.iterable(node_size): # many node sizes
+ if np.iterable(node_size): # many node sizes
src_node, dst_node = edgelist[i][:2]
index_node = nodelist.index(dst_node)
marker_size = node_size[index_node]
@@ -643,7 +642,7 @@ def to_marker_edge(marker_size, marker):
else:
shrink_target = to_marker_edge(node_size, node_shape)
- if cb.iterable(arrow_colors):
+ if np.iterable(arrow_colors):
if len(arrow_colors) == len(edge_pos):
arrow_color = arrow_colors[i]
elif len(arrow_colors)==1:
@@ -653,7 +652,7 @@ def to_marker_edge(marker_size, marker):
else:
arrow_color = edge_color
- if cb.iterable(width):
+ if np.iterable(width):
if len(width) == len(edge_pos):
line_width = width[i]
else:
@@ -690,7 +689,7 @@ def to_marker_edge(marker_size, marker):
ax.update_datalim(corners)
ax.autoscale_view()
- plt.tick_params(
+ ax.tick_params(
axis='both',
which='both',
bottom=False,
@@ -768,7 +767,6 @@ def draw_networkx_labels(G, pos,
"""
try:
import matplotlib.pyplot as plt
- import matplotlib.cbook as cb
except ImportError:
raise ImportError("Matplotlib required for draw()")
except RuntimeError:
@@ -805,7 +803,7 @@ def draw_networkx_labels(G, pos,
)
text_items[n] = t
- plt.tick_params(
+ ax.tick_params(
axis='both',
which='both',
bottom=False,
@@ -958,7 +956,7 @@ def draw_networkx_edge_labels(G, pos,
)
text_items[(n1, n2)] = t
- plt.tick_params(
+ ax.tick_params(
axis='both',
which='both',
bottom=False,
diff --git a/networkx/readwrite/gml.py b/networkx/readwrite/gml.py
--- a/networkx/readwrite/gml.py
+++ b/networkx/readwrite/gml.py
@@ -10,7 +10,7 @@
"""
Read graphs in GML format.
-"GML, the G>raph Modelling Language, is our proposal for a portable
+"GML, the Graph Modelling Language, is our proposal for a portable
file format for graphs. GML's key features are portability, simple
syntax, extensibility and flexibility. A GML file consists of a
hierarchical key-value lists. Graphs can be annotated with arbitrary
| diff --git a/networkx/algorithms/tests/test_core.py b/networkx/algorithms/tests/test_core.py
--- a/networkx/algorithms/tests/test_core.py
+++ b/networkx/algorithms/tests/test_core.py
@@ -124,3 +124,31 @@ def test_k_corona(self):
# k=2
k_corona_subgraph = nx.k_corona(self.H, k=0)
assert_equal(sorted(k_corona_subgraph.nodes()), [0])
+
+ def test_k_truss(self):
+ # k=-1
+ k_truss_subgraph = nx.k_truss(self.G, -1)
+ assert_equal(sorted(k_truss_subgraph.nodes()), list(range(1,21)))
+ # k=0
+ k_truss_subgraph = nx.k_truss(self.G, 0)
+ assert_equal(sorted(k_truss_subgraph.nodes()), list(range(1,21)))
+ # k=1
+ k_truss_subgraph = nx.k_truss(self.G, 1)
+ assert_equal(sorted(k_truss_subgraph.nodes()), list(range(1,13)))
+ # k=2
+ k_truss_subgraph = nx.k_truss(self.G, 2)
+ assert_equal(sorted(k_truss_subgraph.nodes()), list(range(1,9)))
+ # k=3
+ k_truss_subgraph = nx.k_truss(self.G, 3)
+ assert_equal(sorted(k_truss_subgraph.nodes()), [])
+
+ def test_onion_layers(self):
+ layers = nx.onion_layers(self.G)
+ nodes_by_layer = [sorted([n for n in layers if layers[n] == val])
+ for val in range(1, 7)]
+ assert_nodes_equal(nodes_by_layer[0], [21])
+ assert_nodes_equal(nodes_by_layer[1], [17, 18, 19, 20])
+ assert_nodes_equal(nodes_by_layer[2], [10, 12, 13, 14, 15, 16])
+ assert_nodes_equal(nodes_by_layer[3], [9, 11])
+ assert_nodes_equal(nodes_by_layer[4], [1, 2, 4, 5, 6, 8])
+ assert_nodes_equal(nodes_by_layer[5], [3, 7])
| Matplotlib 3.1 deprecation warning (iterable function)
Matplotlib 3.1 generates the following warning:
```
. . . /lib/python3.7/site-packages/networkx/drawing/nx_pylab.py:579: MatplotlibDeprecationWarning:
The iterable function was deprecated in Matplotlib 3.1 and will be removed in 3.3. Use np.iterable instead.
if not cb.iterable(width):
. . . /lib/python3.7/site-packages/networkx/drawing/nx_pylab.py:676: MatplotlibDeprecationWarning:
The iterable function was deprecated in Matplotlib 3.1 and will be removed in 3.3. Use np.iterable instead.
if cb.iterable(node_size): # many node sizes
```
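The diff above swaps in NumPy's equivalent predicate, which behaves the same for this purpose:
```
import numpy as np
np.iterable([1, 2, 3])  # True, like the deprecated matplotlib.cbook.iterable
np.iterable(2.5)        # False
```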
networkx.draw() hides axes on different subplot
I am trying to have a graph visualization on one subplot and a line plot of some values on another subplot. However, after I draw the graph on the first subplot, the axes on the other subplot disappear (although I did not enable any shared axes), and I cannot bring them back.
```
import networkx as nx
import matplotlib.pyplot as plt
fig = plt.figure()
ax1 = fig.add_subplot(211)
ax2 = fig.add_subplot(212)
G = nx.graph_atlas(1)
nx.draw(G, ax=ax1)
plt.show()
```
This results in `ax2` having the ticks for both axes hidden, whereas if I run the same code without the `nx.draw(...)` line, the ticks are visible on both subplots.
How is it possible that `nx.draw(G, ax=ax1)` affects `ax2`?
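The patch above answers this: the drawing functions called `plt.tick_params`, which acts on the *current* axes (here `ax2`) instead of the axes that were passed in, and it now calls `ax.tick_params`. Until that lands, a possible workaround is to restore the ticks on the unrelated subplot by hand:
```
ax2.tick_params(axis='both', which='both',
                bottom=True, left=True,
                labelbottom=True, labelleft=True)
```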
| 2019-05-31T02:42:38 |
|
networkx/networkx | 3,484 | networkx__networkx-3484 | [
"3483"
] | 09c3da707d12aec42f3c0b2f91b60007bb436d37 | diff --git a/networkx/algorithms/traversal/depth_first_search.py b/networkx/algorithms/traversal/depth_first_search.py
--- a/networkx/algorithms/traversal/depth_first_search.py
+++ b/networkx/algorithms/traversal/depth_first_search.py
@@ -281,7 +281,7 @@ def dfs_preorder_nodes(G, source=None, depth_limit=None):
G : NetworkX graph
source : node, optional
- Specify starting node for depth-first search and return edges in
+ Specify starting node for depth-first search and return nodes in
the component reachable from source.
depth_limit : int, optional (default=len(G))
| dfs_preorder_nodes docstring says "edges" instead of "nodes"
https://github.com/networkx/networkx/blob/09c3da707d12aec42f3c0b2f91b60007bb436d37/networkx/algorithms/traversal/depth_first_search.py#L284
| 2019-06-20T15:43:13 |
||
networkx/networkx | 3,508 | networkx__networkx-3508 | [
"3464"
] | 69744d003815620c0b6a61023190b1c1170dbdfe | diff --git a/networkx/algorithms/shortest_paths/astar.py b/networkx/algorithms/shortest_paths/astar.py
--- a/networkx/algorithms/shortest_paths/astar.py
+++ b/networkx/algorithms/shortest_paths/astar.py
@@ -110,13 +110,18 @@ def heuristic(u, v):
return path
if curnode in explored:
- continue
+ # Do not override the parent of starting node
+ if explored[curnode] is None:
+ continue
+
+ # Skip bad paths that were enqueued before finding a better one
+ qcost, h = enqueued[curnode]
+ if qcost < dist:
+ continue
explored[curnode] = parent
for neighbor, w in G[curnode].items():
- if neighbor in explored:
- continue
ncost = dist + w.get(weight, 1)
if neighbor in enqueued:
qcost, h = enqueued[neighbor]
| diff --git a/networkx/algorithms/shortest_paths/tests/test_astar.py b/networkx/algorithms/shortest_paths/tests/test_astar.py
--- a/networkx/algorithms/shortest_paths/tests/test_astar.py
+++ b/networkx/algorithms/shortest_paths/tests/test_astar.py
@@ -1,9 +1,8 @@
-from nose.tools import assert_equal
+from nose.tools import assert_equal, assert_in
from nose.tools import assert_raises
from nose.tools import raises
from math import sqrt
-from random import random, choice
import networkx as nx
from networkx.utils import pairwise
@@ -23,27 +22,24 @@ def setUp(self):
self.XG = nx.DiGraph()
self.XG.add_weighted_edges_from(edges)
- def test_random_graph(self):
- """Tests that the A* shortest path agrees with Dijkstra's
- shortest path for a random graph.
+ def test_multiple_optimal_paths(self):
+ """Tests that A* algorithm finds any of multiple optimal paths"""
+ heuristic_values = {"a": 1.35, "b": 1.18, "c": 0.67, "d": 0}
- """
-
- G = nx.Graph()
+ def h(u, v):
+ return heuristic_values[u]
- points = [(random(), random()) for _ in range(100)]
+ graph = nx.Graph()
+ points = ["a", "b", "c", "d"]
+ edges = [("a", "b", 0.18), ("a", "c", 0.68),
+ ("b", "c", 0.50), ("c", "d", 0.67)]
- # Build a path from points[0] to points[-1] to be sure it exists
- for p1, p2 in pairwise(points):
- G.add_edge(p1, p2, weight=dist(p1, p2))
+ graph.add_nodes_from(points)
+ graph.add_weighted_edges_from(edges)
- # Add other random edges
- for _ in range(100):
- p1, p2 = choice(points), choice(points)
- G.add_edge(p1, p2, weight=dist(p1, p2))
-
- path = nx.astar_path(G, points[0], points[-1], dist)
- assert_equal(path, nx.dijkstra_path(G, points[0], points[-1]))
+ path1 = ["a", "c", "d"]
+ path2 = ["a", "b", "c", "d"]
+ assert_in(nx.astar_path(graph, "a", "d", h), (path1, path2))
def test_astar_directed(self):
assert_equal(nx.astar_path(self.XG, 's', 'v'), ['s', 'x', 'u', 'v'])
@@ -87,6 +83,32 @@ def test_astar_undirected3(self):
assert_equal(nx.astar_path(XG4, 0, 2), [0, 1, 2])
assert_equal(nx.astar_path_length(XG4, 0, 2), 4)
+ """ Tests that A* finds correct path when multiple paths exist
+ and the best one is not expanded first (GH issue #3464)
+ """
+ def test_astar_directed3(self):
+ heuristic_values = {"n5": 36, "n2": 4, "n1": 0, "n0": 0}
+
+ def h(u, v):
+ return heuristic_values[u]
+
+ edges = [("n5", "n1", 11), ("n5", "n2", 9),
+ ("n2", "n1", 1), ("n1", "n0", 32)]
+ graph = nx.DiGraph()
+ graph.add_weighted_edges_from(edges)
+ answer = ["n5", "n2", "n1", "n0"]
+ assert_equal(nx.astar_path(graph, "n5", "n0", h), answer)
+
+ """ Tests that that parent is not wrongly overridden when a
+ node is re-explored multiple times.
+ """
+ def test_astar_directed4(self):
+ edges = [("a", "b", 1), ("a", "c", 1), ("b", "d", 2),
+ ("c", "d", 1), ("d", "e", 1)]
+ graph = nx.DiGraph()
+ graph.add_weighted_edges_from(edges)
+ assert_equal(nx.astar_path(graph, "a", "e"), ["a", "c", "d", "e"])
+
# >>> MXG4=NX.MultiGraph(XG4)
# >>> MXG4.add_edge(0,1,3)
# >>> NX.dijkstra_path(MXG4,0,2)
| A* (A star) function astar_path returns a wrong answer
Suppose we have the graph below:

Find the shortest path from n5 to n0.
The numbers in brackets alongside each node are heuristic values estimating the distance to n0. (This is a deliberately bad heuristic function I constructed to use in my classroom.)
The best path from n5 to n0 is:
n5→n4→n3→n2→n1→n0
What networkx.algorithms.shortest_paths.astar.astar_path returns:
[n5, n1, n0]
which is apparently not correct.
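The regression test added in the patch above distills the same failure to a four-node graph; a runnable sketch of that reproduction:
```
import networkx as nx

heuristic_values = {"n5": 36, "n2": 4, "n1": 0, "n0": 0}

def h(u, v):
    return heuristic_values[u]

G = nx.DiGraph()
G.add_weighted_edges_from([("n5", "n1", 11), ("n5", "n2", 9),
                           ("n2", "n1", 1), ("n1", "n0", 32)])
# buggy versions return ['n5', 'n1', 'n0'] (cost 43) instead of the
# optimal ['n5', 'n2', 'n1', 'n0'] (cost 42)
print(nx.astar_path(G, "n5", "n0", h))
```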
| Here's a similar very simple example that shows the same bug:
    import networkx as nx

    Gedges = [(1, 0, 11), (1, 2, 1), (2, 0, 6)]
    G = nx.DiGraph()
    G.add_weighted_edges_from(Gedges)

    def heur(u, v):
        distance = {0: 0, 1: 36, 2: 16}
        return distance[u]

    nx.astar_path(G, 1, 0, heur)  # returns [1, 0] instead of shorter [1, 2, 0]
Definitely a bug here. I think when the code finds the target it never checks whether the heuristic estimate of distance is correct... It just stops. Why don't any tests find this bug? Am I missing something? We need a PR with a test and some fixed code.
Hey,
in your example @dschult, I think the result is actually correct, as you do not have an admissible heuristic (i.e. one that never overestimates the cost to the goal), and in that case A* is not guaranteed to find an optimal solution.
In the case of the main example, there actually seems to be a bug. As far as I can tell, the problem is the checking of already explored nodes. The algorithm stores explored nodes in a dictionary, so when the same node is encountered twice, it does not explore the node again. In the example here, a "bad path" is found first (`n5`->`n1`->`n0`), which correctly gets put at the end of the priority queue (and is not yet fully explored!). The problem, though, is that all other paths through `n1` do not get explored, because `n1` is stored among the already explored nodes. Therefore, the "bad path" gets returned as the solution.
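The patch above resolves this by re-examining a popped node instead of skipping it outright; the node is skipped only if it is the start node or if the queued cost is no better than the best known one:
```python
if curnode in explored:
    # do not override the parent of the starting node
    if explored[curnode] is None:
        continue
    # skip bad paths that were enqueued before finding a better one
    qcost, h = enqueued[curnode]
    if qcost < dist:
        continue
```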
I'll test it a bit more and see if I can come up with a PR. | 2019-07-14T15:27:09 |
networkx/networkx | 3,527 | networkx__networkx-3527 | [
"3520"
] | 7d96d96b146f7daf788d459ec9ef3d19de72f5ad | diff --git a/networkx/algorithms/bipartite/matching.py b/networkx/algorithms/bipartite/matching.py
--- a/networkx/algorithms/bipartite/matching.py
+++ b/networkx/algorithms/bipartite/matching.py
@@ -1,6 +1,7 @@
# matching.py - bipartite graph maximum matching algorithms
#
-# Copyright 2015 Jeffrey Finkelstein <[email protected]>.
+# Copyright 2015 Jeffrey Finkelstein <[email protected]>,
+# Copyright 2019 Søren Fuglede Jørgensen
#
# This file is part of NetworkX.
#
@@ -17,8 +18,8 @@
# Portions of this module use code from David Eppstein's Python Algorithms and
# Data Structures (PADS) library, which is dedicated to the public domain (for
# proof, see <http://www.ics.uci.edu/~eppstein/PADS/ABOUT-PADS.txt>).
-"""Provides functions for computing a maximum cardinality matching in a
-bipartite graph.
+"""Provides functions for computing maximum cardinality matchings and minimum
+weight full matchings in a bipartite graph.
If you don't care about the particular implementation of the maximum matching
algorithm, simply use the :func:`maximum_matching`. If you do care, you can
@@ -40,15 +41,21 @@
The dictionary returned by :func:`maximum_matching` includes a mapping for
vertices in both the left and right vertex sets.
+Similarly, :func:`minimum_weight_full_matching` produces, for a complete
+weighted bipartite graph, a matching whose cardinality is the cardinality of
+the smaller of the two partitions, and for which the sum of the weights of the
+edges included in the matching is minimal.
+
"""
import collections
import itertools
+from networkx.algorithms.bipartite.matrix import biadjacency_matrix
from networkx.algorithms.bipartite import sets as bipartite_sets
import networkx as nx
__all__ = ['maximum_matching', 'hopcroft_karp_matching', 'eppstein_matching',
- 'to_vertex_cover']
+ 'to_vertex_cover', 'minimum_weight_full_matching']
INFINITY = float('inf')
@@ -87,7 +94,6 @@ def hopcroft_karp_matching(G, top_nodes=None):
Notes
-----
-
This function is implemented with the `Hopcroft--Karp matching algorithm
<https://en.wikipedia.org/wiki/Hopcroft%E2%80%93Karp_algorithm>`_ for
bipartite graphs.
@@ -205,7 +211,6 @@ def eppstein_matching(G, top_nodes=None):
Notes
-----
-
This function is implemented with David Eppstein's version of the algorithm
Hopcroft--Karp algorithm (see :func:`hopcroft_karp_matching`), which
originally appeared in the `Python Algorithms and Data Structures library
@@ -406,7 +411,6 @@ def to_vertex_cover(G, matching, top_nodes=None):
Parameters
----------
-
G : NetworkX graph
Undirected bipartite graph
@@ -426,7 +430,6 @@ def to_vertex_cover(G, matching, top_nodes=None):
Returns
-------
-
vertex_cover : :class:`set`
The minimum vertex cover in `G`.
@@ -442,7 +445,6 @@ def to_vertex_cover(G, matching, top_nodes=None):
Notes
-----
-
This function is implemented using the procedure guaranteed by `Konig's
theorem
<https://en.wikipedia.org/wiki/K%C3%B6nig%27s_theorem_%28graph_theory%29>`_,
@@ -483,3 +485,96 @@ def to_vertex_cover(G, matching, top_nodes=None):
#:
#: This function is simply an alias for :func:`hopcroft_karp_matching`.
maximum_matching = hopcroft_karp_matching
+
+
+def minimum_weight_full_matching(G, top_nodes=None, weight='weight'):
+ """Returns the minimum weight full matching of the bipartite graph `G`.
+
+ Let :math:`G = ((U, V), E)` be a complete weighted bipartite graph with
+ real weights :math:`w : E \to \mathbb{R}`. This function then produces
+ a maximum matching :math:`M \subseteq E` which, since the graph is
+ assumed to be complete, has cardinality
+
+ .. math::
+ \lvert M \rvert = \min(\lvert U \rvert, \lvert V \rvert),
+
+ and which minimizes the sum of the weights of the edges included in the
+ matching, :math:`\sum_{e \in M} w(e)`.
+
+ When :math:`\lvert U \rvert = \lvert V \rvert`, this is commonly
+ referred to as a perfect matching; here, since we allow
+ :math:`\lvert U \rvert` and :math:`\lvert V \rvert` to differ, we
+ follow Karp[1]_ and refer to the matching as *full*.
+
+ Parameters
+ ----------
+ G : NetworkX graph
+
+ Undirected bipartite graph
+
+ top_nodes : container
+
+ Container with all nodes in one bipartite node set. If not supplied
+ it will be computed.
+
+ weight : string, optional (default='weight')
+
+ The edge data key used to provide each value in the matrix.
+
+ Returns
+ -------
+ matches : dictionary
+
+ The matching is returned as a dictionary, `matches`, such that
+ ``matches[v] == w`` if node `v` is matched to node `w`. Unmatched
+ nodes do not occur as a key in matches.
+
+ Raises
+ ------
+ ValueError : Exception
+
+ Raised if the input bipartite graph is not complete.
+
+ ImportError : Exception
+
+ Raised if SciPy is not available.
+
+ Notes
+ -----
+ The problem of determining a minimum weight full matching is also known as
+ the rectangular linear assignment problem. This implementation defers the
+ calculation of the assignment to SciPy.
+
+ References
+ ----------
+ .. [1] Richard Manning Karp:
+ An algorithm to Solve the m x n Assignment Problem in Expected Time
+ O(mn log n).
+ Networks, 10(2):143–152, 1980.
+
+ """
+ try:
+ import scipy.optimize
+ except ImportError:
+ raise ImportError('minimum_weight_full_matching requires SciPy: ' +
+ 'https://scipy.org/')
+ left, right = nx.bipartite.sets(G, top_nodes)
+ # Ensure that the graph is complete. This is currently a requirement in
+ # the underlying optimization algorithm from SciPy, but the constraint
+ # will be removed in SciPy 1.4.0, at which point it can also be removed
+ # here.
+ for (u, v) in itertools.product(left, right):
+ # As the graph is undirected, make sure to check for edges in
+ # both directions
+ if (u, v) not in G.edges() and (v, u) not in G.edges():
+ raise ValueError('The bipartite graph must be complete.')
+ U = list(left)
+ V = list(right)
+ weights = biadjacency_matrix(G, row_order=U,
+ column_order=V, weight=weight).toarray()
+ left_matches = scipy.optimize.linear_sum_assignment(weights)
+ d = {U[u]: V[v] for u, v in zip(*left_matches)}
+ # d will contain the matching from edges in left to right; we need to
+ # add the ones from right to left as well.
+ d.update({v: u for u, v in d.items()})
+ return d
| diff --git a/networkx/algorithms/bipartite/tests/test_matching.py b/networkx/algorithms/bipartite/tests/test_matching.py
--- a/networkx/algorithms/bipartite/tests/test_matching.py
+++ b/networkx/algorithms/bipartite/tests/test_matching.py
@@ -1,6 +1,7 @@
# test_matching.py - unit tests for bipartite matching algorithms
#
-# Copyright 2015 Jeffrey Finkelstein <[email protected]>.
+# Copyright 2015 Jeffrey Finkelstein <[email protected]>,
+# Copyright 2019 Søren Fuglede Jørgensen
#
# This file is part of NetworkX.
#
@@ -11,11 +12,13 @@
import networkx as nx
+from nose import SkipTest
from nose.tools import assert_true, assert_equal, raises
from networkx.algorithms.bipartite.matching import eppstein_matching
from networkx.algorithms.bipartite.matching import hopcroft_karp_matching
from networkx.algorithms.bipartite.matching import maximum_matching
+from networkx.algorithms.bipartite.matching import minimum_weight_full_matching
from networkx.algorithms.bipartite.matching import to_vertex_cover
@@ -196,3 +199,106 @@ def test_eppstein_matching():
matching = eppstein_matching(G)
assert_true(len(matching) == len(maximum_matching(G)))
assert all(x in set(matching.keys()) for x in set(matching.values()))
+
+
+class TestMinimumWeightFullMatching(object):
+
+ @classmethod
+ def setupClass(cls):
+ global scipy
+ try:
+ import scipy.optimize
+ except ImportError:
+ raise SkipTest('SciPy not available.')
+
+ def test_minimum_weight_full_matching_square(self):
+ G = nx.complete_bipartite_graph(3, 3)
+ G.add_edge(0, 3, weight=400)
+ G.add_edge(0, 4, weight=150)
+ G.add_edge(0, 5, weight=400)
+ G.add_edge(1, 3, weight=400)
+ G.add_edge(1, 4, weight=450)
+ G.add_edge(1, 5, weight=600)
+ G.add_edge(2, 3, weight=300)
+ G.add_edge(2, 4, weight=225)
+ G.add_edge(2, 5, weight=300)
+ matching = minimum_weight_full_matching(G)
+ assert_equal(matching, {0: 4, 1: 3, 2: 5, 4: 0, 3: 1, 5: 2})
+
+ def test_minimum_weight_full_matching_smaller_left(self):
+ G = nx.complete_bipartite_graph(3, 4)
+ G.add_edge(0, 3, weight=400)
+ G.add_edge(0, 4, weight=150)
+ G.add_edge(0, 5, weight=400)
+ G.add_edge(0, 6, weight=1)
+ G.add_edge(1, 3, weight=400)
+ G.add_edge(1, 4, weight=450)
+ G.add_edge(1, 5, weight=600)
+ G.add_edge(1, 6, weight=2)
+ G.add_edge(2, 3, weight=300)
+ G.add_edge(2, 4, weight=225)
+ G.add_edge(2, 5, weight=290)
+ G.add_edge(2, 6, weight=3)
+ matching = minimum_weight_full_matching(G)
+ assert_equal(matching, {0: 4, 1: 6, 2: 5, 4: 0, 5: 2, 6: 1})
+
+ def test_minimum_weight_full_matching_smaller_top_nodes_right(self):
+ G = nx.complete_bipartite_graph(3, 4)
+ G.add_edge(0, 3, weight=400)
+ G.add_edge(0, 4, weight=150)
+ G.add_edge(0, 5, weight=400)
+ G.add_edge(0, 6, weight=1)
+ G.add_edge(1, 3, weight=400)
+ G.add_edge(1, 4, weight=450)
+ G.add_edge(1, 5, weight=600)
+ G.add_edge(1, 6, weight=2)
+ G.add_edge(2, 3, weight=300)
+ G.add_edge(2, 4, weight=225)
+ G.add_edge(2, 5, weight=290)
+ G.add_edge(2, 6, weight=3)
+ matching = minimum_weight_full_matching(G, top_nodes=[3, 4, 5, 6])
+ assert_equal(matching, {0: 4, 1: 6, 2: 5, 4: 0, 5: 2, 6: 1})
+
+ def test_minimum_weight_full_matching_smaller_right(self):
+ G = nx.complete_bipartite_graph(4, 3)
+ G.add_edge(0, 4, weight=400)
+ G.add_edge(0, 5, weight=400)
+ G.add_edge(0, 6, weight=300)
+ G.add_edge(1, 4, weight=150)
+ G.add_edge(1, 5, weight=450)
+ G.add_edge(1, 6, weight=225)
+ G.add_edge(2, 4, weight=400)
+ G.add_edge(2, 5, weight=600)
+ G.add_edge(2, 6, weight=290)
+ G.add_edge(3, 4, weight=1)
+ G.add_edge(3, 5, weight=2)
+ G.add_edge(3, 6, weight=3)
+ matching = minimum_weight_full_matching(G)
+ assert_equal(matching, {1: 4, 2: 6, 3: 5, 4: 1, 5: 3, 6: 2})
+
+ def test_minimum_weight_full_matching_negative_weights(self):
+ G = nx.complete_bipartite_graph(2, 2)
+ G.add_edge(0, 2, weight=-2)
+ G.add_edge(0, 3, weight=0.2)
+ G.add_edge(1, 2, weight=-2)
+ G.add_edge(1, 3, weight=0.3)
+ matching = minimum_weight_full_matching(G)
+ assert_equal(matching, {0: 3, 1: 2, 2: 1, 3: 0})
+
+ def test_minimum_weight_full_matching_different_weight_key(self):
+ G = nx.complete_bipartite_graph(2, 2)
+ G.add_edge(0, 2, mass=2)
+ G.add_edge(0, 3, mass=0.2)
+ G.add_edge(1, 2, mass=1)
+ G.add_edge(1, 3, mass=2)
+ matching = minimum_weight_full_matching(G, weight='mass')
+ assert_equal(matching, {0: 3, 1: 2, 2: 1, 3: 0})
+
+ @raises(ValueError)
+ def test_minimum_weight_full_matching_requires_complete_input(self):
+ G = nx.Graph()
+ G.add_nodes_from([1, 2, 3, 4], bipartite=0)
+ G.add_nodes_from(['a', 'b', 'c'], bipartite=1)
+ G.add_edges_from([(1, 'a'), (1, 'b'), (2, 'b'),
+ (2, 'c'), (3, 'c'), (4, 'a')])
+ minimum_weight_full_matching(G)
| Minimum weight perfect bipartite matching
I'm wondering if there would be appetite for an algorithm to solve the problem of finding minimum weight perfect matchings in bipartite graphs: Let *G* = ((*U*, *V*), *E*) be a weighted bipartite graph. We then want to find a perfect matching, i.e. one of cardinality min(|*U*|, |*V*|), such that the sum over all weights in the matching is minimal. This is also known as the [linear assignment problem](https://en.wikipedia.org/wiki/Assignment_problem).
There are algorithms for solving several related problems in NetworkX already, but I didn't seem to be able to find one specialized to this problem. Some related algorithms are the following:
* The linear assignment problem is a special case of a minimum cost flow problem, obtained by adding a source incident to all vertices in *U* and a sink incident to all vertices in *V*, and letting the demand for these two vertices be min(|*U*|, |*V*|) with appropriate signs. This could then be solved using `networkx.algorithms.flow.min_cost_flow` (see the sketch after this list). A specialized algorithm would take advantage of the bipartite structure to improve performance. Moreover, as far as I can tell, `min_cost_flow` assumes that all weights are integral, which we wouldn't have to here.
* On the other hand, `networkx.algorithms.bipartite.maximum_matching` can detect if a feasible solution exists, i.e. a matching with the desired cardinality, but it doesn't take into account the weights.
* `networkx.algorithms.matching.max_weight_matching` takes weights into account but operates on arbitrary graphs and does not require that the matching be perfect in the bipartite case.
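For concreteness, a minimal sketch of that reduction; the function name and the `s`/`t` labels are hypothetical, and integral edge weights are assumed (the `min_cost_flow` restriction mentioned above):
```python
import networkx as nx

def full_matching_via_flow(G, U, V, weight="weight"):
    # A unit of flow along s -> u -> v -> t selects the matching edge (u, v).
    H = nx.DiGraph()
    k = min(len(U), len(V))
    H.add_node("s", demand=-k)  # supplies k units of flow
    H.add_node("t", demand=k)   # absorbs k units of flow
    for u in U:
        H.add_edge("s", u, capacity=1, weight=0)
    for v in V:
        H.add_edge(v, "t", capacity=1, weight=0)
    for u in U:
        for v in G[u]:  # neighbors of u lie in V in a bipartite graph
            H.add_edge(u, v, capacity=1, weight=G[u][v].get(weight, 1))
    flow = nx.min_cost_flow(H)
    return {u: v for u in U for v, f in flow[u].items() if f > 0}
```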
A large number of different algorithms exist to solve the problem. Assuming that the inclusion of an algorithm is relevant, the first thing I'm wondering is the following: SciPy already has an efficient implementation of an algorithm to solve the problem in [`scipy.optimize.linear_sum_assignment`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.linear_sum_assignment.html), and since NetworkX already depends on SciPy, it would be only a few lines of code to simply make use of the SciPy implementation. However, as NetworkX is generally focused on bundling Python implementations of its algorithms, simply making use of the SciPy implementation might be going against the spirit of the library, and it would be preferable to include a Python version of one of the algorithms instead?
| In some cases, NetworkX provides its own Python implementation **and** a NumPy-based implementation. For instance, this occurs for the Floyd-Warshall algorithm, which has a very smart and efficient implementation (see timings below).
By the way, another very interesting option is Numba + NumPy, which I'm experimenting with at the moment: it sometimes provides even faster execution than C++. Some Floyd-Warshall algorithm timings (in seconds) for a graph with parameters n=800, m=100000:
```
Networkx : 82.66
Python (no dict) : 34.58
Python
(no dict and multiprocessing) : 21
Numpy from Networkx : 1.67 (no predecessors calculation)
Numba-Numpy-nx : 1.33
Graph-tool : 0.91
Numba-Numpy-nx-paral : 0.5
Scipy (Cython) : 0.50
C++ : 0.30
Numba : 0.24
C++ threads : 0.14
Numba parallel : 0.11
C++ openMP : 0.09
```
I tried Numba under GPU but didn't get nice results (I'm new to GPU programming).
In an ideal world we would have a scipy and base python implementation of each algorithm, but it is certainly fine to just implement a scipy version. The vast majority of our users have scipy available. Those that don't would not be able to use the function. We could leave it at that, or implement a python version. But since you suggest the Scipy version would be very straightforward I think it is reasonable to start with that and add a pure python version if you think it would be helpful.
Thanks. The algorithms generally aren't particularly complicated, but if I were to do it, I would probably just port what's already in SciPy. I think it's hard to tell how helpful that would be. I definitely understand the value in pure-Python implementations. You know the user base best.
And yes, it would be straightforward to adopt the SciPy version; really it amounts to the following:
```python
from scipy.optimize import linear_sum_assignment
def minimum_weight_matching(G, top_nodes=None):
left, right = nx.bipartite.sets(G, top_nodes)
# Ensure that the graph is complete This is currently a requirement in the underlying
# optimization algorithm from SciPy, but the constraint will be removed in SciPy 1.4.0,
# at which point it may also be removed here.
if len(G.edges()) != len(left)*len(right):
raise ValueError('The bipartite graph must be complete.')
U = list(left)
V = list(right)
weights = nx.bipartite.biadjacency_matrix(G, row_order=U, column_order=V).toarray()
assignment = linear_sum_assignment(weights)
return {U[u]: V[v] for u, v in zip(*assignment)}
```
One downside is, as mentioned in the comment, that the requirement of completeness will only be lifted in the upcoming version of SciPy.
Anyway, I'm happy to provide a PR if you think this would be useful.
Yes, I think this would be useful. You should add the function, some simple unit-tests and/or smoke tests and a link in the doc/reference files. You might want to look at another module that uses scipy to see how we arrange the testing so the tests get skipped if scipy is not present.
Thanks!
Thanks for the kick start; I'll look into it. Do you have a favorite example of a module that does things just right? Otherwise I'll just go hunting.
```networkx/algorithms/bipartitie/spectral.py``` has the code that handles imports well for testing.
| 2019-07-25T09:48:04 |
networkx/networkx | 3,535 | networkx__networkx-3535 | [
"3521"
] | ee8c4476c0ceb802b1b15a22f6b79b5ef8704f64 | diff --git a/networkx/algorithms/tree/coding.py b/networkx/algorithms/tree/coding.py
--- a/networkx/algorithms/tree/coding.py
+++ b/networkx/algorithms/tree/coding.py
@@ -258,7 +258,7 @@ def to_prufer_sequence(T):
relabel the nodes of your tree to the appropriate format.
This implementation is from [1]_ and has a running time of
- $O(n \log n)$.
+ $O(n)$.
See also
--------
@@ -303,7 +303,7 @@ def to_prufer_sequence(T):
def parents(u):
return next(v for v in T[u] if degree[v] > 1)
- index = u = min(k for k in range(n) if degree[k] == 1)
+ index = u = next(k for k in range(n) if degree[k] == 1)
result = []
for i in range(n - 2):
v = parents(u)
@@ -312,7 +312,7 @@ def parents(u):
if v < index and degree[v] == 1:
u = v
else:
- index = u = min(k for k in range(index + 1, n) if degree[k] == 1)
+ index = u = next(k for k in range(index + 1, n) if degree[k] == 1)
return result
@@ -347,7 +347,7 @@ def from_prufer_sequence(sequence):
relabel the nodes of your tree to the appropriate format.
This implementation is from [1]_ and has a running time of
- $O(n \log n)$.
+ $O(n)$.
References
----------
@@ -387,7 +387,7 @@ def from_prufer_sequence(sequence):
# tree. After the loop, there should be exactly two nodes that are
# not in this set.
not_orphaned = set()
- index = u = min(k for k in range(n) if degree[k] == 1)
+ index = u = next(k for k in range(n) if degree[k] == 1)
for v in sequence:
T.add_edge(u, v)
not_orphaned.add(u)
@@ -395,7 +395,7 @@ def from_prufer_sequence(sequence):
if v < index and degree[v] == 1:
u = v
else:
- index = u = min(k for k in range(index + 1, n) if degree[k] == 1)
+ index = u = next(k for k in range(index + 1, n) if degree[k] == 1)
# At this point, there must be exactly two orphaned nodes; join them.
orphans = set(T) - not_orphaned
u, v = orphans
| Prufer schemes: quadratic instead of linear complexity
A tree with vertex set `range(n)` can be encoded by a sequence of `n - 2` integers in `range(n)`, known as the tree's Prufer sequence; conversely, any such sequence can be decoded into a unique tree.
The NetworkX module `/networkx/algorithms/tree/coding.py` implements the encoding and decoding schemes by means of the `to_prufer_sequence` and `from_prufer_sequence` functions, respectively. The docstring of each function references a 2009 paper describing linear time algorithms (linear, not n*log(n) as the docstring claims; cf. §2 and §3 of the paper).
The problem is that the performance of the implemented algorithms is unsatisfactory: the time complexity seems to be quadratic rather than linear. As a result, decoding a Prufer sequence of length about 20000 requires 32s, for instance.
On the other hand, based on another paper, I implemented Prufer encoding and decoding functions, and the same Prufer sequence needs only 0.01s to be decoded: the NetworkX implementation has an execution time problem.
To be more precise, I have tested the NetworkX decoding function successively on
- an input of size 2000
- an input of size 10 x 2000 = 20000
and the execution time rises from 0.32s to about 31.83s: this is a strong indication that the algorithm has quadratic complexity. Here is the test code:
```python
from time import clock
from random import randrange
import networkx as nx
def test_speed(n):
    sequence = [randrange(3) for i in range(n - 2)]
    d = clock()
    G = nx.from_prufer_sequence(sequence)
    print("%d: %.2fs" % (n, clock() - d))
test_speed(2000)
test_speed(20000)
```
```
2000: 0.32s
20000: 31.83s
```
I suspect that the following part of the source code does not have linear complexity; cf. the last line:
```python
# from NetworkX sources
def from_prufer_sequence(sequence):
    # ...
    # code snipped
    # ...
    for v in sequence:
        # ...
        # code snipped
        # ...
        # the min function executes a loop,
        # so we have 2 nested loops of size n (sometimes)
        index = u = min(k for k in range(index + 1, n) if degree[k] == 1)
```
The `to_prufer_sequence` function has more or less the same problem.
| Makes sense to me; you could probably use `heapq` instead of calling `min` repeatedly (as suggested in the referenced paper, I believe).
A heap is likely to solve the slowdown. You will get an algorithm with n*log(n) complexity. Since you have worked on the paper upon which the algorithm is based, it's more appropriate you set the patch. Or, if you prefer, I can create a pull request implementing a linear algorithm and based on a [different paper](https://pdfs.semanticscholar.org/f6a7/dffda435d1f732d220312fb667e1ba4d41a5.pdf).
It is fairly easy to implement a linear algorithm and end up with a quadratic algorithm if you aren't careful with data structures. Let's fix this so it is linear (or n*log(n) if that is a preferred algorithm)
Hello,
I ran into the accused function performing quadratically as well.
I read the referenced paper and identified the issue, which is in the implementation and not the referenced paper!
The problem with the referenced line
`index = u = min(k for k in range(index + 1, n) if degree[k] == 1)`
is quite straight forward. It is always the smallest index that satisfies the predicate which should be the result. But `min`, does not stop at the first satisfying index, it unnecessarily scans the remaining indices as well. If this were avoided, every number from `0` to `n-1` would only be considered once in the entire execution of the algorithm, since the index which defines the start of the search range is moved onwards in the same operation.
A simple solution is to replace `min` with `next`. The identical line before the loop should be adjusted as well. | 2019-08-04T13:44:37 |
|
networkx/networkx | 3,554 | networkx__networkx-3554 | [
"3552"
] | df22b6a4713be1fc7ef10231138f7c8be34f326b | diff --git a/networkx/drawing/nx_pylab.py b/networkx/drawing/nx_pylab.py
--- a/networkx/drawing/nx_pylab.py
+++ b/networkx/drawing/nx_pylab.py
@@ -389,8 +389,8 @@ def draw_networkx_nodes(G, pos,
if nodelist is None:
nodelist = list(G)
- if not nodelist or len(nodelist) == 0: # empty nodelist, no drawing
- return None
+ if len(nodelist) == 0: # empty nodelist, no drawing
+ return
try:
xy = np.asarray([pos[v] for v in nodelist])
| draw_networkx_nodes: Error doesn't show my mistakes exactly.
Hi,
I passed a nodelist to the draw_networkx_nodes method as it requires.
But if I pass the nodelist as a numpy ndarray, it raises the following error.
```
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
```
The nodelist I passed is as follows.
```
a = np.array([1, 2, 3, 4, 5])
nx.draw_networkx_nodes(G, pos=pos, nodelist=a)
```
I'm sure G has each node in a, and pos is exactly what it needs.
I managed to solve this problem by transforming ndarray to list like this.
```
a = np.array([1, 2, 3, 4, 5])
nx.draw_networkx_nodes(G, pos=pos, nodelist=a.tolist())
```
I believe that error is confusing, so we should either allow passing an ndarray or show a more accurate error.
Thank you for your consideration.
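For reference, the ambiguity comes from calling `bool()` on a multi-element array, which the `not nodelist` part of the original check triggers:
```
import numpy as np

a = np.array([1, 2, 3, 4, 5])
len(a) == 0  # False; length works for lists and arrays alike
bool(a)      # ValueError: the truth value of an array ... is ambiguous
```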
| This looks like a bug. Thanks for the report!
Line 392 of ```nx_pylab.py``` should be changed from
if not nodelist or len(nodelist) == 0:
# to something similar to this (untested)
if len(nodelist) == 0 or not nodelist:
Your workaround works in the meantime. | 2019-08-21T13:04:20 |
|
networkx/networkx | 3,564 | networkx__networkx-3564 | [
"3550"
] | e5ecf69029b7fb75084edec42758d8e8c928a7ef | diff --git a/networkx/classes/graph.py b/networkx/classes/graph.py
--- a/networkx/classes/graph.py
+++ b/networkx/classes/graph.py
@@ -1328,7 +1328,7 @@ def get_edge_data(self, u, v, default=None):
"""Returns the attribute dictionary associated with edge (u, v).
This is identical to `G[u][v]` except the default is returned
- instead of an exception is the edge doesn't exist.
+ instead of an exception if the edge doesn't exist.
Parameters
----------
| [Typo][Docstring][graph.py][get_edge_data()]
The docstring of the `get_edge_data` function in `graph.py` seems to have a minor typo.
```
def get_edge_data(self, u, v, default=None):
"""Returns the attribute dictionary associated with edge (u, v).
This is identical to `G[u][v]` except the default is returned
instead of an exception is the edge doesn't exist.
```
**is** `the edge doesn't exist.` should be **if** `the edge doesn't exist.`
Line: https://github.com/networkx/networkx/blob/master/networkx/classes/graph.py#L1331
| Thanks! | 2019-08-31T20:40:39 |
|
networkx/networkx | 3,604 | networkx__networkx-3604 | [
"3510"
] | d81d0362af0a4a70fc0a2a9617e3956bf022cb28 | diff --git a/networkx/algorithms/__init__.py b/networkx/algorithms/__init__.py
--- a/networkx/algorithms/__init__.py
+++ b/networkx/algorithms/__init__.py
@@ -19,7 +19,7 @@
from networkx.algorithms.distance_regular import *
from networkx.algorithms.dominance import *
from networkx.algorithms.dominating import *
-from networkx.algorithms.efficiency import *
+from networkx.algorithms.efficiency_measures import *
from networkx.algorithms.euler import *
from networkx.algorithms.graphical import *
from networkx.algorithms.hierarchy import *
diff --git a/networkx/algorithms/efficiency.py b/networkx/algorithms/efficiency_measures.py
similarity index 91%
rename from networkx/algorithms/efficiency.py
rename to networkx/algorithms/efficiency_measures.py
--- a/networkx/algorithms/efficiency.py
+++ b/networkx/algorithms/efficiency_measures.py
@@ -99,16 +99,21 @@ def global_efficiency(G):
n = len(G)
denom = n * (n - 1)
if denom != 0:
- g_eff = sum(efficiency(G, u, v) for u, v in permutations(G, 2)) / denom
+ lengths = nx.all_pairs_shortest_path_length(G)
+ g_eff = 0
+ for source, targets in lengths:
+ for target, distance in targets.items():
+ if distance > 0:
+ g_eff += 1 / distance
+ g_eff /= denom
+ # g_eff = sum(1 / d for s, tgts in lengths
+ # for t, d in tgts.items() if d > 0) / denom
else:
g_eff = 0
# TODO This can be made more efficient by computing all pairs shortest
# path lengths in parallel.
- #
- # TODO This summation can be trivially parallelized.
return g_eff
-
@not_implemented_for('directed')
def local_efficiency(G):
"""Returns the average local efficiency of the graph.
| diff --git a/networkx/generators/tests/test_degree_seq.py b/networkx/generators/tests/test_degree_seq.py
--- a/networkx/generators/tests/test_degree_seq.py
+++ b/networkx/generators/tests/test_degree_seq.py
@@ -200,8 +200,6 @@ def test_random_degree_sequence_graph():
d = [1, 2, 2, 3]
G = nx.random_degree_sequence_graph(d, seed=42)
assert_equal(d, sorted(d for n, d in G.degree()))
- G = nx.random_degree_sequence_graph(d)
- assert_equal(d, sorted(d for n, d in G.degree()))
def test_random_degree_sequence_graph_raise():
@@ -210,7 +208,7 @@ def test_random_degree_sequence_graph_raise():
def test_random_degree_sequence_large():
- G1 = nx.fast_gnp_random_graph(100, 0.1)
+ G1 = nx.fast_gnp_random_graph(100, 0.1, seed=42)
d1 = (d for n, d in G1.degree())
G2 = nx.random_degree_sequence_graph(d1, seed=42)
d2 = (d for n, d in G2.degree())
| `global_efficiency` is inefficient
See https://stackoverflow.com/a/57032282/2966723
Currently the code takes every single pair of nodes (in both directions) and finds the length of the shortest path between the two.
An obvious factor-of-2 speedup is to halve the denominator and use `combinations` rather than `permutations` to do the calculation, so that each pair is only considered once. (This would presumably need a check that the graph is undirected first.)
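A minimal sketch of that variant, assuming an undirected graph and the pairwise `efficiency` helper the module already exports (the function name `global_efficiency_pairs` is hypothetical):

    from itertools import combinations
    import networkx as nx

    def global_efficiency_pairs(G):
        n = len(G)
        denom = n * (n - 1) / 2  # number of unordered pairs
        if denom == 0:
            return 0
        # each unordered pair contributes once instead of twice
        return sum(nx.efficiency(G, u, v) for u, v in combinations(G, 2)) / denom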
Alternatively, another option is to use this code that I provided in my answer.
    import networkx as nx

    def my_global_efficiency(G):
        '''author Joel C Miller
        https://stackoverflow.com/a/57032282/2966723
        '''
        n = len(G)
        denom = n * (n - 1)
        if denom > 0:
            efficiency = 0
            for path_collection in nx.all_pairs_shortest_path_length(G):
                source = path_collection[0]
                for target in path_collection[1]:
                    if target != source:
                        efficiency += 1. / path_collection[1][target]
            return efficiency / denom
        else:
            return 0
| This looks good to me. | 2019-09-26T11:48:51 |
networkx/networkx | 3,606 | networkx__networkx-3606 | [
"3573"
] | d81d0362af0a4a70fc0a2a9617e3956bf022cb28 | diff --git a/networkx/readwrite/gexf.py b/networkx/readwrite/gexf.py
--- a/networkx/readwrite/gexf.py
+++ b/networkx/readwrite/gexf.py
@@ -25,10 +25,12 @@
import networkx as nx
from networkx.utils import open_file, make_str
try:
- from xml.etree.cElementTree import Element, ElementTree, SubElement, tostring
+ from xml.etree.cElementTree import (Element, ElementTree, SubElement,
+ tostring)
except ImportError:
try:
- from xml.etree.ElementTree import Element, ElementTree, SubElement, tostring
+ from xml.etree.ElementTree import (Element, ElementTree, SubElement,
+ tostring)
except ImportError:
pass
@@ -36,7 +38,8 @@
@open_file(1, mode='wb')
-def write_gexf(G, path, encoding='utf-8', prettyprint=True, version='1.2draft'):
+def write_gexf(G, path, encoding='utf-8', prettyprint=True,
+ version='1.2draft'):
"""Write G in GEXF format to path.
"GEXF (Graph Exchange XML Format) is a language for describing
@@ -98,14 +101,14 @@ def generate_gexf(G, encoding='utf-8', prettyprint=True, version='1.2draft'):
Parameters
----------
G : graph
- A NetworkX graph
+ A NetworkX graph
encoding : string (optional, default: 'utf-8')
- Encoding for text data.
+ Encoding for text data.
prettyprint : bool (optional, default: True)
- If True use line breaks and indenting in output XML.
+ If True use line breaks and indenting in output XML.
version : string (default: 1.2draft)
- Version of GEFX File Format (see https://gephi.org/gexf/format/schema.html).
- Supported values: "1.1draft", "1.2draft"
+ Version of GEFX File Format (see https://gephi.org/gexf/format/schema.html)
+ Supported values: "1.1draft", "1.2draft"
Examples
@@ -154,7 +157,7 @@ def read_gexf(path, node_type=None, relabel=False, version='1.2draft'):
If True relabel the nodes to use the GEXF node "label" attribute
instead of the node "id" attribute as the NetworkX node label.
version : string (default: 1.2draft)
- Version of GEFX File Format (see https://gephi.org/gexf/format/schema.html).
+ Version of GEFX File Format (see https://gephi.org/gexf/format/schema.html)
Supported values: "1.1draft", "1.2draft"
Returns
@@ -223,7 +226,7 @@ class GEXF(object):
(np.uint16, "int"), (np.uint32, "int"),
(np.uint64, "int"), (np.int_, "int"),
(np.intc, "int"), (np.intp, "int"),
- ] + types
+ ] + types
xml_type = dict(types)
python_type = dict(reversed(a) for a in types)
@@ -243,7 +246,7 @@ def set_version(self, version):
self.NS_GEXF = d['NS_GEXF']
self.NS_VIZ = d['NS_VIZ']
self.NS_XSI = d['NS_XSI']
- self.SCHEMALOCATION = d['NS_XSI']
+ self.SCHEMALOCATION = d['SCHEMALOCATION']
self.VERSION = d['VERSION']
self.version = version
@@ -267,6 +270,14 @@ def __init__(self, graph=None, encoding='utf-8', prettyprint=True,
'xsi:schemaLocation': self.SCHEMALOCATION,
'version': self.VERSION})
+ # Make meta element a non-graph element
+ # Also add lastmodifieddate as attribute, not tag
+ meta_element = Element('meta')
+ subelement_text = 'NetworkX {}'.format(nx.__version__)
+ SubElement(meta_element, 'creator').text = subelement_text
+ meta_element.set('lastmodifieddate', time.strftime('%Y-%m-%d'))
+ self.xml.append(meta_element)
+
ET.register_namespace('viz', self.NS_VIZ)
# counters for edge and attribute identifiers
@@ -311,18 +322,10 @@ def add_graph(self, G):
graph_element = Element('graph', defaultedgetype=default, mode=mode,
name=name)
self.graph_element = graph_element
- self.add_meta(G, graph_element)
self.add_nodes(G, graph_element)
self.add_edges(G, graph_element)
self.xml.append(graph_element)
- def add_meta(self, G, graph_element):
- # add meta element with creator and date
- meta_element = Element('meta')
- SubElement(meta_element, 'creator').text = 'NetworkX {}'.format(nx.__version__)
- SubElement(meta_element, 'lastmodified').text = time.strftime('%d/%m/%Y')
- graph_element.append(meta_element)
-
def add_nodes(self, G, graph_element):
nodes_element = Element('nodes')
for node, data in G.nodes(data=True):
@@ -439,7 +442,8 @@ def add_attributes(self, node_or_edge, xml_obj, data, default):
k = 'networkx_key'
val_type = type(v)
if val_type not in self.xml_type:
- raise TypeError('attribute value type is not allowed: %s' % val_type)
+ raise TypeError('attribute value type is not allowed: %s'
+ % val_type)
if isinstance(v, list):
# dynamic data
for val, start, end in v:
@@ -449,12 +453,20 @@ def add_attributes(self, node_or_edge, xml_obj, data, default):
self.alter_graph_mode_timeformat(start)
self.alter_graph_mode_timeformat(end)
break
- attr_id = self.get_attr_id(make_str(k), self.xml_type[val_type],
+ attr_id = self.get_attr_id(make_str(k),
+ self.xml_type[val_type],
node_or_edge, default, mode)
for val, start, end in v:
e = Element('attvalue')
e.attrib['for'] = attr_id
e.attrib['value'] = make_str(val)
+ # Handle nan, inf, -inf differently
+ if e.attrib['value'] == 'inf':
+ e.attrib['value'] = 'INF'
+ elif e.attrib['value'] == 'nan':
+ e.attrib['value'] = 'NaN'
+ elif e.attrib['value'] == '-inf':
+ e.attrib['value'] = '-INF'
if start is not None:
e.attrib['start'] = make_str(start)
if end is not None:
@@ -463,7 +475,8 @@ def add_attributes(self, node_or_edge, xml_obj, data, default):
else:
# static data
mode = 'static'
- attr_id = self.get_attr_id(make_str(k), self.xml_type[val_type],
+ attr_id = self.get_attr_id(make_str(k),
+ self.xml_type[val_type],
node_or_edge, default, mode)
e = Element('attvalue')
e.attrib['for'] = attr_id
@@ -471,6 +484,13 @@ def add_attributes(self, node_or_edge, xml_obj, data, default):
e.attrib['value'] = make_str(v).lower()
else:
e.attrib['value'] = make_str(v)
+ # Handle nan, inf, -inf differently
+ if e.attrib['value'] == 'inf':
+ e.attrib['value'] = 'INF'
+ elif e.attrib['value'] == 'nan':
+ e.attrib['value'] = 'NaN'
+ elif e.attrib['value'] == '-inf':
+ e.attrib['value'] = '-INF'
attvalues.append(e)
xml_obj.append(attvalues)
return data
@@ -532,7 +552,8 @@ def add_viz(self, element, node_data):
thickness = viz.get('thickness')
if thickness is not None:
- e = Element('{%s}thickness' % self.NS_VIZ, value=str(thickness))
+ e = Element('{%s}thickness' % self.NS_VIZ,
+ value=str(thickness))
element.append(e)
shape = viz.get('shape')
@@ -591,7 +612,8 @@ def add_spells(self, node_or_edge_element, node_or_edge_data):
return node_or_edge_data
def alter_graph_mode_timeformat(self, start_or_end):
- # if 'start' or 'end' appears, alter Graph mode to dynamic and set timeformat
+ # If 'start' or 'end' appears, alter Graph mode to dynamic and
+ # set timeformat
if self.graph_element.get('mode') == 'static':
if start_or_end is not None:
if isinstance(start_or_end, str):
@@ -687,7 +709,8 @@ def make_graph(self, graph_xml):
self.timeformat = 'string'
# node and edge attributes
- attributes_elements = graph_xml.findall('{%s}attributes' % self.NS_GEXF)
+ attributes_elements = graph_xml.findall('{%s}attributes' %
+ self.NS_GEXF)
# dictionaries to hold attributes and attribute defaults
node_attr = {}
node_default = {}
@@ -710,7 +733,8 @@ def make_graph(self, graph_xml):
# Hack to handle Gephi0.7beta bug
# add weight attribute
- ea = {'weight': {'type': 'double', 'mode': 'static', 'title': 'weight'}}
+ ea = {'weight': {'type': 'double', 'mode': 'static',
+ 'title': 'weight'}}
ed = {}
edge_attr.update(ea)
edge_default.update(ed)
@@ -769,6 +793,14 @@ def add_node(self, G, node_xml, node_attr, node_pid=None):
for node_xml in subnodes.findall('{%s}node' % self.NS_GEXF):
self.add_node(G, node_xml, node_attr, node_pid=node_id)
+ # Handle nan, inf, -inf differently
+ for k, v in data.items():
+ if make_str(v) == 'inf':
+ data[k] = 'INF'
+ elif make_str(v) == 'nan':
+ data[k] = 'NaN'
+ elif make_str(v) == '-inf':
+ data[k] = '-INF'
G.add_node(node_id, **data)
def add_start_end(self, data, xml):
@@ -917,7 +949,8 @@ def decode_attr_elements(self, gexf_keys, obj_xml):
try: # should be in our gexf_keys dictionary
title = gexf_keys[key]['title']
except KeyError:
- raise nx.NetworkXError('No attribute defined for=%s.' % key)
+ raise nx.NetworkXError('No attribute defined for=%s.'
+ % key)
atype = gexf_keys[key]['type']
value = a.get('value')
if atype == 'boolean':
@@ -1015,7 +1048,7 @@ def setup_module(module):
from nose import SkipTest
try:
import xml.etree.cElementTree
- except:
+ except Exception as e:
raise SkipTest('xml.etree.cElementTree not available.')
@@ -1024,5 +1057,5 @@ def teardown_module(module):
import os
try:
os.unlink('test.gexf')
- except:
+ except Exception as e:
pass
| diff --git a/networkx/readwrite/tests/test_gexf.py b/networkx/readwrite/tests/test_gexf.py
--- a/networkx/readwrite/tests/test_gexf.py
+++ b/networkx/readwrite/tests/test_gexf.py
@@ -38,8 +38,10 @@ def setUp(self):
self.simple_directed_fh = \
io.BytesIO(self.simple_directed_data.encode('UTF-8'))
- self.attribute_data = """<?xml version="1.0" encoding="UTF-8"?>
-<gexf xmlns="http://www.gexf.net/1.2draft" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.gexf.net/1.2draft http://www.gexf.net/1.2draft/gexf.xsd" version="1.2">
+ self.attribute_data = """<?xml version="1.0" encoding="UTF-8"?>\
+<gexf xmlns="http://www.gexf.net/1.2draft" xmlns:xsi="http://www.w3.\
+org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.gexf.net/\
+1.2draft http://www.gexf.net/1.2draft/gexf.xsd" version="1.2">
<meta lastmodifieddate="2009-03-20">
<creator>Gephi.org</creator>
<description>A Web network</description>
@@ -135,7 +137,8 @@ def setUp(self):
self.simple_undirected_graph.add_node('1', label='World')
self.simple_undirected_graph.add_edge('0', '1', id='0')
- self.simple_undirected_fh = io.BytesIO(self.simple_undirected_data.encode('UTF-8'))
+ self.simple_undirected_fh = io.BytesIO(self.simple_undirected_data
+ .encode('UTF-8'))
def test_read_simple_directed_graphml(self):
G = self.simple_directed_graph
@@ -304,13 +307,15 @@ def test_write_with_node_attributes(self):
G.nodes[i]['start'] = i
G.nodes[i]['end'] = i + 1
- if sys.version_info < (3,8):
- expected = """<gexf version="1.2" xmlns="http://www.gexf.net/1.2draft" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.w3.org/2001/XMLSchema-instance">
+ if sys.version_info < (3, 8):
+ expected = """<gexf version="1.2" xmlns="http://www.gexf.net/1.2\
+draft" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:\
+schemaLocation="http://www.gexf.net/1.2draft http://www.gexf.net/1.2draft/\
+gexf.xsd">
+ <meta lastmodifieddate="{}">
+ <creator>NetworkX {}</creator>
+ </meta>
<graph defaultedgetype="undirected" mode="dynamic" name="" timeformat="long">
- <meta>
- <creator>NetworkX {}</creator>
- <lastmodified>{}</lastmodified>
- </meta>
<nodes>
<node end="1" id="0" label="0" pid="0" start="0" />
<node end="2" id="1" label="1" pid="1" start="1" />
@@ -323,14 +328,16 @@ def test_write_with_node_attributes(self):
<edge id="2" source="2" target="3" />
</edges>
</graph>
-</gexf>""".format(nx.__version__, time.strftime('%d/%m/%Y'))
+</gexf>""".format(time.strftime('%Y-%m-%d'), nx.__version__)
else:
- expected = """<gexf xmlns="http://www.gexf.net/1.2draft" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.w3.org/2001/XMLSchema-instance" version="1.2">
+ expected = """<gexf xmlns="http://www.gexf.net/1.2draft" xmlns:xsi\
+="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation=\
+"http://www.gexf.net/1.2draft http://www.gexf.net/1.2draft/\
+gexf.xsd" version="1.2">
+ <meta lastmodifieddate="{}">
+ <creator>NetworkX {}</creator>
+ </meta>
<graph defaultedgetype="undirected" mode="dynamic" name="" timeformat="long">
- <meta>
- <creator>NetworkX {}</creator>
- <lastmodified>{}</lastmodified>
- </meta>
<nodes>
<node id="0" label="0" pid="0" start="0" end="1" />
<node id="1" label="1" pid="1" start="1" end="2" />
@@ -343,21 +350,23 @@ def test_write_with_node_attributes(self):
<edge source="2" target="3" id="2" />
</edges>
</graph>
-</gexf>""".format(nx.__version__, time.strftime('%d/%m/%Y'))
-
+</gexf>""".format(time.strftime('%Y-%m-%d'), nx.__version__)
obtained = '\n'.join(nx.generate_gexf(G))
assert_equal(expected, obtained)
def test_edge_id_construct(self):
G = nx.Graph()
G.add_edges_from([(0, 1, {'id': 0}), (1, 2, {'id': 2}), (2, 3)])
- if sys.version_info < (3,8):
- expected = """<gexf version="1.2" xmlns="http://www.gexf.net/1.2draft" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.w3.org/2001/XMLSchema-instance">
+
+ if sys.version_info < (3, 8):
+ expected = """<gexf version="1.2" xmlns="http://www.gexf.net/\
+1.2draft" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:\
+schemaLocation="http://www.gexf.net/1.2draft http://www.gexf.net/1.2draft/\
+gexf.xsd">
+ <meta lastmodifieddate="{}">
+ <creator>NetworkX {}</creator>
+ </meta>
<graph defaultedgetype="undirected" mode="static" name="">
- <meta>
- <creator>NetworkX {}</creator>
- <lastmodified>{}</lastmodified>
- </meta>
<nodes>
<node id="0" label="0" />
<node id="1" label="1" />
@@ -370,14 +379,15 @@ def test_edge_id_construct(self):
<edge id="1" source="2" target="3" />
</edges>
</graph>
-</gexf>""".format(nx.__version__, time.strftime('%d/%m/%Y'))
+</gexf>""".format(time.strftime('%Y-%m-%d'), nx.__version__)
else:
- expected = """<gexf xmlns="http://www.gexf.net/1.2draft" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.w3.org/2001/XMLSchema-instance" version="1.2">
+ expected = """<gexf xmlns="http://www.gexf.net/1.2draft" xmlns:xsi\
+="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.\
+gexf.net/1.2draft http://www.gexf.net/1.2draft/gexf.xsd" version="1.2">
+ <meta lastmodifieddate="{}">
+ <creator>NetworkX {}</creator>
+ </meta>
<graph defaultedgetype="undirected" mode="static" name="">
- <meta>
- <creator>NetworkX {}</creator>
- <lastmodified>{}</lastmodified>
- </meta>
<nodes>
<node id="0" label="0" />
<node id="1" label="1" />
@@ -390,7 +400,7 @@ def test_edge_id_construct(self):
<edge source="2" target="3" id="1" />
</edges>
</graph>
-</gexf>""".format(nx.__version__, time.strftime('%d/%m/%Y'))
+</gexf>""".format(time.strftime('%Y-%m-%d'), nx.__version__)
obtained = '\n'.join(nx.generate_gexf(G))
assert_equal(expected, obtained)
@@ -401,10 +411,15 @@ def test_numpy_type(self):
import numpy
except ImportError:
return
- nx.set_node_attributes(G, {n:n for n in numpy.arange(4)}, 'number')
+ nx.set_node_attributes(G, {n: n for n in numpy.arange(4)}, 'number')
G[0][1]['edge-number'] = numpy.float64(1.1)
- expected = """<gexf version="1.2" xmlns="http://www.gexf.net/1.2draft" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.w3.org/2001/XMLSchema-instance">
+ expected = """<gexf version="1.2" xmlns="http://www.gexf.net/1.2draft"\
+ xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation\
+="http://www.gexf.net/1.2draft http://www.gexf.net/1.2draft/gexf.xsd">
+ <meta lastmodifieddate="{}">
+ <creator>NetworkX {}</creator>
+ </meta>
<graph defaultedgetype="undirected" mode="static" name="">
<attributes class="edge" mode="static">
<attribute id="1" title="edge-number" type="float" />
@@ -412,10 +427,6 @@ def test_numpy_type(self):
<attributes class="node" mode="static">
<attribute id="0" title="number" type="int" />
</attributes>
- <meta>
- <creator>NetworkX {}</creator>
- <lastmodified>{}</lastmodified>
- </meta>
<nodes>
<node id="0" label="0">
<attvalues>
@@ -448,7 +459,7 @@ def test_numpy_type(self):
<edge id="2" source="2" target="3" />
</edges>
</graph>
-</gexf>""".format(nx.__version__, time.strftime('%d/%m/%Y'))
+</gexf>""".format(time.strftime('%Y-%m-%d'), nx.__version__)
obtained = '\n'.join(nx.generate_gexf(G))
assert_equal(expected, obtained)
@@ -460,3 +471,21 @@ def test_bool(self):
fh.seek(0)
H = nx.read_gexf(fh, node_type=int)
assert_equal(H.nodes[1]['testattr'], True)
+
+ # Test for NaN, INF and -INF
+ def test_specials(self):
+ G = nx.Graph()
+ G.add_node(1, testattr=float('inf'))
+ G.add_node(2, testattr=float('nan'))
+ G.add_node(3, testattr=float('-inf'))
+ list_value = [(1, 2, 3), (2, 1, 2)]
+ G.add_node(4, key=list_value)
+ fh = io.BytesIO()
+ nx.write_gexf(G, fh)
+ fh.seek(0)
+ # float as inf,-inf and nan are all floats internally
+ H = nx.read_gexf(fh, node_type=float)
+ assert_equal(H.nodes[1]['testattr'], "INF")
+ assert_equal(H.nodes[2]['testattr'], "NaN")
+ assert_equal(H.nodes[3]['testattr'], "-INF")
+    assert_equal(H.nodes[4]['networkx_key'], list_value)
| GEXF output does not conform to GEXF and XML Schema standards
On master (currently 0d8d93a0e43829d6020e8babf59d411af6259a6c) and Python 3.7.3, if I run the following script:
```python
import io
import networkx
g = networkx.Graph()
g.add_node(1, weight=float('inf'))
g.add_node(2, weight=float('nan'))
with io.BytesIO() as f:
networkx.write_gexf(g, f)
print(f.getvalue().decode())
```
I get the following output:
```xml
<?xml version='1.0' encoding='utf-8'?>
<gexf version="1.2" xmlns="http://www.gexf.net/1.2draft" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.w3.org/2001/XMLSchema-instance">
<graph defaultedgetype="undirected" mode="static" name="">
<attributes class="node" mode="static">
<attribute id="0" title="weight" type="double" />
</attributes>
<meta>
<creator>NetworkX 2.4rc1.dev_20190906193120</creator>
<lastmodified>06/09/2019</lastmodified>
</meta>
<nodes>
<node id="1" label="1">
<attvalues>
<attvalue for="0" value="inf" />
</attvalues>
</node>
<node id="2" label="2">
<attvalues>
<attvalue for="0" value="nan" />
</attvalues>
</node>
</nodes>
<edges />
</graph>
</gexf>
```
This contains three errors in conformance with the GEXF specification. Below is a diff showing what the output should have been.
```diff
--- is.gexf 2019-09-06 14:34:24.000000000 -0500
+++ should_be.gexf 2019-09-06 14:50:28.000000000 -0500
@@ -1,22 +1,21 @@
<?xml version='1.0' encoding='utf-8'?>
-<gexf version="1.2" xmlns="http://www.gexf.net/1.2draft" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.w3.org/2001/XMLSchema-instance">
+<gexf version="1.2" xmlns="http://www.gexf.net/1.2draft" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.gexf.net/1.2draft http://www.gexf.net/1.2draft/gexf.xsd">
+ <meta lastmodifieddate="2019-09-06">
+ <creator>NetworkX 2.4rc1.dev_20190906193120</creator>
+ </meta>
<graph defaultedgetype="undirected" mode="static" name="">
<attributes class="node" mode="static">
<attribute id="0" title="weight" type="double" />
</attributes>
- <meta>
- <creator>NetworkX 2.4rc1.dev_20190906193120</creator>
- <lastmodified>06/09/2019</lastmodified>
- </meta>
<nodes>
<node id="1" label="1">
<attvalues>
- <attvalue for="0" value="inf" />
+ <attvalue for="0" value="INF" />
</attvalues>
</node>
<node id="2" label="2">
<attvalues>
- <attvalue for="0" value="nan" />
+ <attvalue for="0" value="NaN" />
</attvalues>
</node>
</nodes>
```
For the error in the schema location (see section 2.2 of the [GEXF primer](https://gephi.org/gexf/1.2draft/gexf-12draft-primer.pdf)), the line to blame is
https://github.com/networkx/networkx/blob/0d8d93a0e43829d6020e8babf59d411af6259a6c/networkx/readwrite/gexf.py#L246
I'm not sure where floats are converted to text, but I'm guessing that the current code relies on Python to do it. However, the [XML Schema spec](https://www.w3.org/TR/xmlschema-2/#double-lexical-representation), which the GEXF spec relies on for data type definitions, says that not-a-numbers should be spelled as `NaN`, infinity as `INF`, and negative infinity as `-INF`.
For what it's worth, fixing the schema location issue made Gephi 0.9.2 start displaying my graphs correctly.
The GEXF primer says the `meta` tag must be declared before the `graph` tag, and that `lastmodifieddate` is an attribute, not a tag, and should contain the date in ISO (yyyy-mm-dd) format.
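For the special values, a minimal sketch of the XSD lexical mapping (hypothetical helper, not networkx code):
```python
import math

def xsd_double(x):
    # XML Schema lexical forms for the special doubles
    if math.isnan(x):
        return 'NaN'
    if math.isinf(x):
        return 'INF' if x > 0 else '-INF'
    return repr(x)
```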
| +1 this issue/PR. Adding lastmodified as a tag does not allow me to import the GEXF file into Cytoscape (with gexf-app). Changing it to lastmodifieddate as an attribute fixed it for me.
For the time being, I just patched the function as below in my local file (~/.local/lib/python3.6/site-packages/networkx/readwrite/gexf.py) [Ubuntu 18.04 / Python 3.6.8 x64 / installed via pip3]
```python
def add_meta(self, G, graph_element):
# add meta element with creator and date
meta_element = Element('meta', lastmodifieddate = time.strftime('%Y-%m-%d'))
SubElement(meta_element, 'creator').text = 'NetworkX {}'.format(nx.__version__)
        graph_element.append(meta_element)
```
Would appreciate a merge so others can profit from the fix. But yeah, for the time being I just patched it this way :man_shrugging: | 2019-09-26T14:58:23 |
networkx/networkx | 3,613 | networkx__networkx-3613 | [
"3187"
] | 9ff7a8e9cc0b69932b4ec9bec7b2fdaff8c91666 | diff --git a/networkx/algorithms/dag.py b/networkx/algorithms/dag.py
--- a/networkx/algorithms/dag.py
+++ b/networkx/algorithms/dag.py
@@ -481,18 +481,30 @@ def is_aperiodic(G):
@not_implemented_for('undirected')
-def transitive_closure(G):
+def transitive_closure(G, reflexive=False):
""" Returns transitive closure of a directed graph
The transitive closure of G = (V,E) is a graph G+ = (V,E+) such that
- for all v,w in V there is an edge (v,w) in E+ if and only if there
- is a non-null path from v to w in G.
+ for all v, w in V there is an edge (v, w) in E+ if and only if there
+ is a path from v to w in G.
+
+ Handling of paths from v to v has some flexibility within this definition.
+ A reflexive transitive closure creates a self-loop for the path
+ from v to v of length 0. The usual transitive closure creates a
+ self-loop only if a cycle exists (a path from v to v with length > 0).
+ We also allow an option for no self-loops.
Parameters
----------
G : NetworkX DiGraph
A directed graph
-
+ reflexive : Bool or None, optional (default: False)
+ Determines when cycles create self-loops in the Transitive Closure.
+ If True, trivial cycles (length 0) create self-loops. The result
+        is a reflexive transitive closure of G.
+ If False (the default) non-trivial cycles create self-loops.
+ If None, self-loops are not created.
+
Returns
-------
NetworkX DiGraph
@@ -510,10 +522,23 @@ def transitive_closure(G):
TODO this function applies to all directed graphs and is probably misplaced
here in dag.py
"""
+ if reflexive is None:
+ TC = G.copy()
+ for v in G:
+ edges = ((v, u) for u in nx.dfs_preorder_nodes(G, v) if v != u)
+ TC.add_edges_from(edges)
+ return TC
+ if reflexive is True:
+ TC = G.copy()
+ for v in G:
+ edges = ((v, u) for u in nx.dfs_preorder_nodes(G, v))
+ TC.add_edges_from(edges)
+ return TC
+ # reflexive is False
TC = G.copy()
for v in G:
- TC.add_edges_from((v, u) for u in nx.dfs_preorder_nodes(G, source=v)
- if v != u)
+ edges = ((v, w) for u, w in nx.edge_dfs(G, v))
+ TC.add_edges_from(edges)
return TC
@@ -525,7 +550,7 @@ def transitive_closure_dag(G, topo_order=None):
if the graph has a cycle.
The transitive closure of G = (V,E) is a graph G+ = (V,E+) such that
- for all v,w in V there is an edge (v,w) in E+ if and only if there
+ for all v, w in V there is an edge (v, w) in E+ if and only if there
is a non-null path from v to w in G.
Parameters
| diff --git a/networkx/algorithms/tests/test_dag.py b/networkx/algorithms/tests/test_dag.py
--- a/networkx/algorithms/tests/test_dag.py
+++ b/networkx/algorithms/tests/test_dag.py
@@ -290,27 +290,55 @@ def test_descendants(self):
def test_transitive_closure(self):
G = nx.DiGraph([(1, 2), (2, 3), (3, 4)])
- transitive_closure = nx.algorithms.dag.transitive_closure
solution = [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
- assert_edges_equal(transitive_closure(G).edges(), solution)
+ assert_edges_equal(nx.transitive_closure(G).edges(), solution)
G = nx.DiGraph([(1, 2), (2, 3), (2, 4)])
solution = [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4)]
- assert_edges_equal(transitive_closure(G).edges(), solution)
+ assert_edges_equal(nx.transitive_closure(G).edges(), solution)
+ G = nx.DiGraph([(1, 2), (2, 3), (3, 1)])
+ solution = [(1, 2), (2, 1), (2, 3), (3, 2), (1, 3), (3, 1)]
+ soln = sorted(solution + [(n, n) for n in G])
+ assert_edges_equal(sorted(nx.transitive_closure(G).edges()), soln)
G = nx.Graph([(1, 2), (2, 3), (3, 4)])
- assert_raises(nx.NetworkXNotImplemented, transitive_closure, G)
+ assert_raises(nx.NetworkXNotImplemented, nx.transitive_closure, G)
# test if edge data is copied
G = nx.DiGraph([(1, 2, {"a": 3}), (2, 3, {"b": 0}), (3, 4)])
- H = transitive_closure(G)
+ H = nx.transitive_closure(G)
for u, v in G.edges():
assert_equal(G.get_edge_data(u, v), H.get_edge_data(u, v))
k = 10
G = nx.DiGraph((i, i + 1, {"f": "b", "weight": i}) for i in range(k))
- H = transitive_closure(G)
+ H = nx.transitive_closure(G)
for u, v in G.edges():
assert_equal(G.get_edge_data(u, v), H.get_edge_data(u, v))
+ def test_reflexive_transitive_closure(self):
+ G = nx.DiGraph([(1, 2), (2, 3), (3, 4)])
+ solution = [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
+ soln = sorted(solution + [(n, n) for n in G])
+ assert_edges_equal(nx.transitive_closure(G).edges(), solution)
+ assert_edges_equal(nx.transitive_closure(G, False).edges(), solution)
+ assert_edges_equal(nx.transitive_closure(G, True).edges(), soln)
+ assert_edges_equal(nx.transitive_closure(G, None).edges(), solution)
+
+ G = nx.DiGraph([(1, 2), (2, 3), (2, 4)])
+ solution = [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4)]
+ soln = sorted(solution + [(n, n) for n in G])
+ assert_edges_equal(nx.transitive_closure(G).edges(), solution)
+ assert_edges_equal(nx.transitive_closure(G, False).edges(), solution)
+ assert_edges_equal(nx.transitive_closure(G, True).edges(), soln)
+ assert_edges_equal(nx.transitive_closure(G, None).edges(), solution)
+
+ G = nx.DiGraph([(1, 2), (2, 3), (3, 1)])
+ solution = sorted([(1, 2), (2, 1), (2, 3), (3, 2), (1, 3), (3, 1)])
+ soln = sorted(solution + [(n, n) for n in G])
+ assert_edges_equal(sorted(nx.transitive_closure(G).edges()), soln)
+ assert_edges_equal(sorted(nx.transitive_closure(G, False).edges()), soln)
+ assert_edges_equal(sorted(nx.transitive_closure(G, None).edges()), solution)
+ assert_edges_equal(sorted(nx.transitive_closure(G, True).edges()), soln)
+
def test_transitive_closure_dag(self):
G = nx.DiGraph([(1, 2), (2, 3), (3, 4)])
transitive_closure = nx.algorithms.dag.transitive_closure_dag
| transitive_closure loses self loops
Consider the following example:
import numpy as np
import networkx as nx
a = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
G = nx.from_numpy_matrix(a, create_using=nx.MultiDiGraph())
T = nx.transitive_closure(G)
print(nx.to_numpy_matrix(T))
The transitive closure lacks the expected self loops. Why? (The documentation link does not work, and the docstring implies that self loops should be kept.) By "expected", I mean "according to standard definitions", such as the Wikipedia definition. I anticipate that a different definition is being used, but what is it?
The current [answer at StackExchange](https://stackoverflow.com/questions/52710488/transitive-closure-in-networkx/52711544#52711544) claims this is an implementation error.
| I would claim it is a definitional difference rather than an implementation error.
The question is whether self-loops should be added to the closure.
The documentation link you refer to is the link to Eppstein's webpage, right?
Looking at the Wikipedia page and its reference to Nuutila (1995):
http://www.cs.hut.fi/~enu/thesis.pdf
I tend to agree that these loops should be included.
Thanks!
Yes, the current documentation link is http://www.ics.uci.edu/~eppstein/PADS/PartialOrder.py, which fails. [Note: This link does work as of September 2019.]
I have not been able to find a definition of the transitive closure of a directed graph that does not imply that self loops should be included. Absence of evidence may not be evidence of absence, but my belief is that a *closure* must embed the original relation and all its reachable points.
We should make this clear at the very least -- and probably change the current default behavior. But as the link you reference above suggests, some people do not expect self-loops in the closure. Just because the mathematical meaning of a word like closure seems clear doesn't mean the word will be clear. Besides, your definition of "reachable points" may be different from others.
It seems to me there are two types of self-loops that arise and we need to be clear about which is treated which way. Is a node reachable from itself in zero steps? If so, we need self-loops on every node. I believe this is called "reflexive transitive closure". If we enforce that paths must have positive length, then we only get self-loops when there exists a cycle that includes that node.
I gather that you are saying that we should include self-loops due to cycles in the graph, but exclude self-loops if no such cycle exists. If this is what you are suggesting, then I agree.
Yes, you understand me. But I would understand an option (say, `forceReflexive`) for those who want the reflexive transitive closure to get it.
I agree.. An option ```reflexive=True``` could include all self-loops, ```reflexive=False``` could be the default with self-loops only for nodes on cycles. And we could add an option: ```reflexive=None``` which does not include any self-loops -- if that doesn't make the code too convoluted.
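In usage terms, a 3-cycle would then behave like this (assuming the ```reflexive``` keyword lands as sketched here):
```python
import networkx as nx
G = nx.DiGraph([(1, 2), (2, 3), (3, 1)])           # every node sits on a cycle
nx.transitive_closure(G, reflexive=False).edges()  # 9 edges: cycle => self-loops
nx.transitive_closure(G, reflexive=True).edges()   # 9 edges; self-loops always added
nx.transitive_closure(G, reflexive=None).edges()   # 6 edges: no self-loops at all
```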
In any case, it means we'll have to separate this from using ```dfs_preorder_nodes``` because we need to see when cycles are formed. | 2019-10-01T02:47:15 |
networkx/networkx | 3,614 | networkx__networkx-3614 | [
"3293"
] | d0f608727c5e06966c2b1d3a29d02a66b7b64152 | diff --git a/networkx/algorithms/flow/gomory_hu.py b/networkx/algorithms/flow/gomory_hu.py
--- a/networkx/algorithms/flow/gomory_hu.py
+++ b/networkx/algorithms/flow/gomory_hu.py
@@ -164,7 +164,8 @@ def gomory_hu_tree(G, capacity='capacity', flow_func=None):
target = tree[source]
# compute minimum cut
cut_value, partition = nx.minimum_cut(G, source, target,
- capacity=capacity, flow_func=flow_func,
+ capacity=capacity,
+ flow_func=flow_func,
residual=R)
labels[(source, target)] = cut_value
# Update the tree
@@ -172,9 +173,16 @@ def gomory_hu_tree(G, capacity='capacity', flow_func=None):
for node in partition[0]:
if node != source and node in tree and tree[node] == target:
tree[node] = source
- labels[(node, source)] = labels.get((node, target), cut_value)
+ labels[node, source] = labels.get((node, target), cut_value)
+ #
+ if target != root and tree[target] in partition[0]:
+ labels[source, tree[target]] = labels[target, tree[target]]
+ labels[target, source] = cut_value
+ tree[source] = tree[target]
+ tree[target] = source
+
# Build the tree
T = nx.Graph()
T.add_nodes_from(G)
- T.add_weighted_edges_from(((u, v, labels[(u, v)]) for u, v in tree.items()))
+ T.add_weighted_edges_from(((u, v, labels[u, v]) for u, v in tree.items()))
return T
| faulty gomory_hu method
Hi,
it seems the implementation of gomory_hu is faulty: the computed tree is not necessarily a Gomory-Hu (cut) tree. Indeed, in the tree (bottom left) in the figure, the edge (2,3) indicates that {0,2} should give a smallest cut separating 2 from 3. The cut, however, contains four edges (unit weights) but the smallest cut, the edges incident with 2, has size 3. The tree at the bottom right should be a proper G-H tree.

The issue apparently stems from the fact that a couple of lines of the Gusfield algorithm have been omitted from the implementation in networkx. The lines that are missing are the last seven or so lines on page 152 in Gusfield's paper. I've attached a modified version of the gomory_hu method that should correct this (zipped). I've also attached a jupyter notebook that compares the current implementation with the fix (zipped). Note the weights computed by the nx.gomory_hu method seem okay -- it's only the tree that may be incorrect.
Caveat: I was too lazy to check that Gusfield's algorithm actually works.
Henning
[gomory-hu.zip](https://github.com/networkx/networkx/files/2738256/gomory-hu.zip)
| 2019-10-01T04:27:47 |
||
networkx/networkx | 3,626 | networkx__networkx-3626 | [
"3258"
] | bee39fee79170babed15394c3c98bc2a9758a50c | diff --git a/networkx/generators/duplication.py b/networkx/generators/duplication.py
--- a/networkx/generators/duplication.py
+++ b/networkx/generators/duplication.py
@@ -80,21 +80,21 @@ def partial_duplication_graph(N, n, p, q, seed=None):
G = nx.complete_graph(n)
for new_node in range(n, N):
- # Add a new vertex, v, to the graph.
- G.add_node(new_node)
-
# Pick a random vertex, u, already in the graph.
- src_node = seed.randint(0, new_node)
+ src_node = seed.randint(0, new_node - 1)
- # Join v and u with probability q.
- if seed.random() < q:
- G.add_edge(new_node, src_node)
+ # Add a new vertex, v, to the graph.
+ G.add_node(new_node)
# For each neighbor of u...
for neighbor_node in list(nx.all_neighbors(G, src_node)):
# Add the neighbor to v with probability p.
if seed.random() < p:
G.add_edge(new_node, neighbor_node)
+
+ # Join v and u with probability q.
+ if seed.random() < q:
+ G.add_edge(new_node, src_node)
return G
| Problem with nx.partial_duplication_graph
Hi everyone,
the operations on the partial_duplication_graph algorithm are out of order. One should first select a node to replicate and then include a new node.
Also, the edge connecting the original node and the replica should be added after the other edge. If you do it before, the replica is considered a neighbor of the original one and you won't have the desired result.
| You are correct that the new node should not get self-loop edges and by adding an edge to ```u``` before duplicating the neighbor edges, we make it possible to create a self-loop edge ```v-v```. This doesn't align with the doc_string order either. That should be fixed... Probably good to check the article to check whether the code aligns with the paper, or the doc_string aligns with the paper (are self-loops created for the new node or not).
I pretty sure the order of selecting ```u``` and then adding ```v``` doesn't impact the results. But we might as well make it agree with the doc_string description.
In fact, the deal about selecting "u" and adding "v" is that we choose a node "u" randomly to be duplicated and then add a node "v" to be "u"'s replica. If you add "v" before selecting "u", there is a slight chance that you select the recently added node "v" to be replicated, and that would make no sense for the model and does not agree with the doc_string description. But it's certainly good to check the article.
thank you for the attention.
Yes -- I agree with you that you can't choose u to be v. But the code doesn't allow that because it doesn't choose from all nodes, only from ```range(0,newN)``` which excludes the new node.
Here is the part of the code that selects the node to be replicated:
- src_node = random.randint(0, new_node)
In fact, the node to be duplicated have to be chosen from range(0, newN). But the function random.randint(), that is used to select the node to be replicated, takes into account "newN" to the selectable nodes. Maybe the alternative should be:
- src_node = random.choice(range(0, new_node))
or
- src_node = random.randint(0, new_node-1)
Ahh -- thank you for persisting -- now I understand where my confusion is coming...
The code actually doesn't use the ```random.randint``` function. It uses ```seed.randint``` where ```seed``` is created by the decorator ```@py_random_state```. The function ```numpy.Random.randint``` does not include the upper endpoint, while the ```random.randint``` does. I got those two confused...
So... better to switch the order to what the docs say, and also to make that code clearer... Otherwise we could (as the current code allows) choose u to be ```new node``` which is not good.
Thanks again...
| 2019-10-03T04:14:56 |
|
networkx/networkx | 3,628 | networkx__networkx-3628 | [
"3197"
] | bee39fee79170babed15394c3c98bc2a9758a50c | diff --git a/networkx/readwrite/json_graph/jit.py b/networkx/readwrite/json_graph/jit.py
--- a/networkx/readwrite/json_graph/jit.py
+++ b/networkx/readwrite/json_graph/jit.py
@@ -60,6 +60,9 @@ def jit_graph(data, create_using=None):
G = create_using
G.clear()
+ if nx.utils.is_string_like(data):
+ data = json.loads(data)
+
for node in data:
G.add_node(node['id'], **node['data'])
if node.get('adjacencies') is not None:
@@ -77,10 +80,10 @@ def jit_data(G, indent=None):
G : NetworkX Graph
indent: optional, default=None
- If indent is a non-negative integer, then JSON array elements and object
- members will be pretty-printed with that indent level. An indent level
- of 0, or negative, will only insert newlines. None (the default) selects
- the most compact representation.
+ If indent is a non-negative integer, then JSON array elements and
+ object members will be pretty-printed with that indent level.
+ An indent level of 0, or negative, will only insert newlines.
+ None (the default) selects the most compact representation.
Returns
-------
| diff --git a/networkx/readwrite/json_graph/tests/test_jit.py b/networkx/readwrite/json_graph/tests/test_jit.py
--- a/networkx/readwrite/json_graph/tests/test_jit.py
+++ b/networkx/readwrite/json_graph/tests/test_jit.py
@@ -55,3 +55,10 @@ def test_jit_multi_directed(self):
K.add_edge(1, 2)
assert_false(nx.is_isomorphic(H, K))
assert_true(nx.is_isomorphic(G, K))
+
+ def test_jit_round_trip(self):
+ G = nx.Graph()
+ d = nx.jit_data(G)
+ H = jit_graph(json.loads(d))
+ K = jit_graph(d)
+ assert_true(nx.is_isomorphic(H, K))
| jit json import/export
I would consider the functions `jit_data` and `jit_graph` to be inverses of each other, so that
```
import networkx as nx
nx.jit_graph(nx.jit_data(nx.Graph()))
```
works.
Instead, it produces a TypeError (nx 2.2), because jit_data is a function `nx graph -> json string`, while jit_graph is a function `json object -> nx graph`, so that the correct program would be
```
import networkx as nx
import json
nx.jit_graph(json.loads(nx.jit_data(nx.Graph())))
```
This is documented, but in my view unexpected and incoherent behavior. I'm pretty new to networkx and am not familiar with your design philosophy, but I see the options
* add a clarifying note in the documentation, OR
* return the json object from `jit_data`, OR
* make use of the json.loads function in `jit_graph` (see the sketch below).
What are your opinions on this?
I am willing to submit a PR (but probably it is just easier for you to make that one-line commit, so that's also fine :))
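For illustration, the third option approximated from the caller's side (hypothetical wrapper, not networkx API):
```python
import json
import networkx as nx

def jit_round_trip(G):
    data = nx.jit_data(G)        # a JSON string today
    if isinstance(data, str):    # tolerate either a string or a decoded object
        data = json.loads(data)
    return nx.jit_graph(data)
```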
| 2019-10-03T04:42:02 |
|
networkx/networkx | 3,629 | networkx__networkx-3629 | [
"3188"
] | bee39fee79170babed15394c3c98bc2a9758a50c | diff --git a/networkx/drawing/layout.py b/networkx/drawing/layout.py
--- a/networkx/drawing/layout.py
+++ b/networkx/drawing/layout.py
@@ -252,7 +252,10 @@ def shell_layout(G, nlist=None, scale=1, center=None, dim=2):
theta = np.linspace(0, 1, len(nodes) + 1)[:-1] * 2 * np.pi
theta = theta.astype(np.float32)
pos = np.column_stack([np.cos(theta), np.sin(theta)])
- pos = rescale_layout(pos, scale=scale * radius / len(nlist)) + center
+ if len(pos) > 1:
+ pos = rescale_layout(pos, scale=scale * radius / len(nlist)) + center
+ else:
+ pos = np.array([(scale * radius + center[0], center[1])])
npos.update(zip(nodes, pos))
radius += 1.0
@@ -833,7 +836,7 @@ def spectral_layout(G, weight='weight', scale=1, center=None, dim=2):
A += A.T
pos = _spectral(A, dim)
- pos = rescale_layout(pos, scale) + center
+ pos = rescale_layout(pos, scale=scale) + center
pos = dict(zip(G, pos))
return pos
| diff --git a/networkx/drawing/tests/test_layout.py b/networkx/drawing/tests/test_layout.py
--- a/networkx/drawing/tests/test_layout.py
+++ b/networkx/drawing/tests/test_layout.py
@@ -1,7 +1,7 @@
"""Unit tests for layout functions."""
from nose import SkipTest
from nose.tools import assert_almost_equal, assert_equal, \
- assert_false, assert_raises
+ assert_true, assert_false, assert_raises
import networkx as nx
@@ -25,7 +25,7 @@ def setUp(self):
self.Gi = nx.grid_2d_graph(5, 5)
self.Gs = nx.Graph()
nx.add_path(self.Gs, 'abcdef')
- self.bigG = nx.grid_2d_graph(25, 25) # bigger than 500 nodes for sparse
+ self.bigG = nx.grid_2d_graph(25, 25) # > 500 nodes for sparse
@staticmethod
def collect_node_distances(positions):
@@ -183,9 +183,10 @@ def test_single_nodes(self):
G = nx.path_graph(1)
vpos = nx.shell_layout(G)
assert_false(vpos[0].any())
- G = nx.path_graph(3)
- vpos = nx.shell_layout(G, [[0], [1, 2]])
+ G = nx.path_graph(4)
+ vpos = nx.shell_layout(G, [[0], [1, 2], [3]])
assert_false(vpos[0].any())
+ assert_true(vpos[3].any()) # ensure node 3 not at origin (#3188)
def test_smoke_initial_pos_fruchterman_reingold(self):
pos = nx.circular_layout(self.Gi)
@@ -194,11 +195,11 @@ def test_smoke_initial_pos_fruchterman_reingold(self):
def test_fixed_node_fruchterman_reingold(self):
# Dense version (numpy based)
pos = nx.circular_layout(self.Gi)
- npos = nx.fruchterman_reingold_layout(self.Gi, pos=pos, fixed=[(0, 0)])
+ npos = nx.spring_layout(self.Gi, pos=pos, fixed=[(0, 0)])
assert_equal(tuple(pos[(0, 0)]), tuple(npos[(0, 0)]))
# Sparse version (scipy based)
pos = nx.circular_layout(self.bigG)
- npos = nx.fruchterman_reingold_layout(self.bigG, pos=pos, fixed=[(0, 0)])
+ npos = nx.spring_layout(self.bigG, pos=pos, fixed=[(0, 0)])
for axis in range(2):
assert_almost_equal(pos[(0, 0)][axis], npos[(0, 0)][axis])
@@ -222,12 +223,12 @@ def test_center_parameter(self):
def test_center_wrong_dimensions(self):
G = nx.path_graph(1)
+ assert_equal(id(nx.spring_layout), id(nx.fruchterman_reingold_layout))
assert_raises(ValueError, nx.random_layout, G, center=(1, 1, 1))
assert_raises(ValueError, nx.circular_layout, G, center=(1, 1, 1))
assert_raises(ValueError, nx.planar_layout, G, center=(1, 1, 1))
assert_raises(ValueError, nx.spring_layout, G, center=(1, 1, 1))
- assert_raises(ValueError, nx.fruchterman_reingold_layout, G, center=(1, 1, 1))
- assert_raises(ValueError, nx.fruchterman_reingold_layout, G, dim=3, center=(1, 1))
+ assert_raises(ValueError, nx.spring_layout, G, dim=3, center=(1, 1))
assert_raises(ValueError, nx.spectral_layout, G, center=(1, 1, 1))
assert_raises(ValueError, nx.spectral_layout, G, dim=3, center=(1, 1))
assert_raises(ValueError, nx.shell_layout, G, center=(1, 1, 1))
@@ -255,7 +256,7 @@ def test_empty_graph(self):
assert_equal(vpos, {})
def test_bipartite_layout(self):
- G = nx.complete_bipartite_graph(3,5)
+ G = nx.complete_bipartite_graph(3, 5)
top, bottom = nx.bipartite.sets(G)
vpos = nx.bipartite_layout(G, top)
@@ -313,7 +314,8 @@ def test_kamada_kawai_costfn_2d(self):
expected_cost = 0.5 * meanwt * numpy.sum(numpy.sum(pos, axis=0) ** 2)
for i in range(pos.shape[0]):
for j in range(i + 1, pos.shape[0]):
- expected_cost += (numpy.linalg.norm(pos[i] - pos[j]) * invdist[i][j] - 1.0) ** 2
+ diff = numpy.linalg.norm(pos[i] - pos[j])
+ expected_cost += (diff * invdist[i][j] - 1.0) ** 2
assert_almost_equal(cost, expected_cost)
| Shell Layout incorrectly rescales for single element shells.
Hi there!
I had a shell layout that used to work with the 1.1 release, but incorrectly overlaps nodes when there is only one element in the shell in the latest build.
Upon inspection, I believe the `rescale_layout` function to be the culprit.
https://github.com/networkx/networkx/blob/794323b9ca4fcd5bb270137ba2fe51bcb78b9a6f/networkx/drawing/layout.py#L252
For a two-shell graph with one element in the outer shell: `nlist [[1], [2]]`, `shell_layout` outputs:
`{1: array([0., 0.]), 2: array([0.0, 0. ])}`
where it used to output:
`{1: array([0., 0.]), 2: array([1.0, 0. ])}`
For a two-shell graph with two elements in the outer shell: `nlist [[1], [2, 3]]`, `shell_layout` correctly outputs:
```
{1: array([0., 0.]),
2: array([5.00000000e-01, 2.18556941e-08]),
3: array([-5.00000000e-01, -2.18556941e-08])}
```
We see in `rescale_layout` that the positions are being normalized. Unfortunately this means that single-node shells shift to zero, since the node's position value is also the mean.
https://github.com/networkx/networkx/blob/794323b9ca4fcd5bb270137ba2fe51bcb78b9a6f/networkx/drawing/layout.py#L889
I believe a simple solution is to first check that there is more than one element:
```python
if len(pos[:, i]) > 1:
pos[:, i] -= pos[:, i].mean()
```
This results in an acceptable set of positions, while not changing any instances for shells with more nodes:
`{1: array([0., 0.]), 2: array([0.5, 0. ])}`
I'd be happy to submit a PR, once I check to ensure there are no other issues with the other layouts.
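For convenience, the report above as a runnable snippet:
```python
import networkx as nx
G = nx.Graph([(1, 2)])
print(nx.shell_layout(G, nlist=[[1], [2]]))
# {1: array([0., 0.]), 2: array([0., 0.])}  <- node 2 collapses onto the origin
```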
| I agree with you that there is a bug in how the code rescales the shell layouts. If there is a single node in the shell, it moves that node to the middle of the diagram.
I would like to fix the behavior by changing the ```shell_layout``` function rather than touching ```rescale_layout``` though. There are a lot of functions that use rescale_layout and they might not work as expected if we don't shift one-node layouts to the origin. Instead I am hoping that a similar ```if``` statement in the ```shell_layout``` could determine whether rescale_layout gets used at all. If there is one node, then set that coordinate appropriately and otherwise call the rescale function. Do you see any downside to this approach?
### Problem
I believe if we tried to delegate that responsibility of a binary rescale vs no-rescale to `shell_layout` we'd run into issues with partial rescaling causing distorted graphs.
For example, let's say in shell_layout we tried to rescale only if there were more than one node in a shell:
```python
if len(node) > 1:
pos = rescale_layout(pos, scale=scale * radius / len(nlist)) + center
```
Then a call to `shell_layout(nlist=[[1],[2,3],[4]])` would result in:
```python
{1: array([1., 0.], dtype=float32),
2: array([3.33333343e-01, 1.45704631e-08]),
3: array([-3.33333343e-01, -1.45704631e-08]),
4: array([1., 0.], dtype=float32)}
```
### Alternative:
What if we modified `rescale_layout` so that normalization was optional via a default parameter `norm=True`? This would allow the caller to decide when rescaling should include normalization, and a default argument that preserves the current behavior would not break any other functions.
So `rescale_layout` would become:
```python
def rescale_layout(pos, scale=1, norm=True):
"""
Parameters
----------
...
norm : boolean (default: True)
normalize positions as part of rescaling.
...
"""
for i in range(pos.shape[1]):
if norm:
pos[:, i] -= pos[:, i].mean()
lim = max(abs(pos[:, i]).max(), lim)
```
Then we could leave it up to the caller to decide when to rescale.
Therefore `shell_layout` would elect to only normalize when there was more than one node in the shell:
```python
norm = len(nodes) > 1
pos = rescale_layout(pos, scale=scale * radius / len(nlist), norm=norm) + center
```
```python
shell_layout2(nlist=[[1],[2,3],[4],[5,6],[7,8,9,10],[11]])
{1: array([0., 0.]),
2: array([1.66666672e-01, 7.28523153e-09]),
3: array([-1.66666672e-01, -7.28523153e-09]),
4: array([0.33333334, 0. ]),
5: array([5.00000000e-01, 2.18556941e-08]),
6: array([-5.00000000e-01, -2.18556941e-08]),
7: array([6.66666687e-01, 9.93410776e-09]),
8: array([-2.11942996e-08, 6.66666687e-01]),
9: array([-6.66666687e-01, -4.83477436e-08]),
10: array([ 1.58965481e-08, -6.66666687e-01]),
11: array([0.83333331, 0. ])}
```
I'm suggesting that we *do* scale when there is only one node in a shell. Just not using the ```rescale_layout``` code. Of course, the outer shells have a different radius than inner shells, so we have to scale each shell. It should be something like:
if len(node) > 1:
pos = rescale_layout(pos, scale=scale * radius / len(nlist)) + center
else:
pos = [(scale * radius + center[0], center[1])] # or something similar
While my own coding ideology prefers to modify `rescale_layout` to accommodate optionally scaling without shifting the mean, and thereby consolidate all scaling operations, I completely understand the rationale for leaving it untouched (particularly when an `if-else` will suffice).
`pos = [(scale * radius + center[0], center[1])]` works great.
I'd be happy to open a PR if you'd like.
A PR would be really helpful. Thank you for being flexible.
On a stylistic note, is it also OK in the same PR to modify (I know some repos prefer those types of changes separate):
https://github.com/networkx/networkx/blob/794323b9ca4fcd5bb270137ba2fe51bcb78b9a6f/networkx/drawing/layout.py#L805
to `pos = rescale_layout(pos, scale=scale) + center`
to match the other invocations:
https://github.com/networkx/networkx/blob/794323b9ca4fcd5bb270137ba2fe51bcb78b9a6f/networkx/drawing/layout.py#L174
https://github.com/networkx/networkx/blob/794323b9ca4fcd5bb270137ba2fe51bcb78b9a6f/networkx/drawing/layout.py#L252
https://github.com/networkx/networkx/blob/794323b9ca4fcd5bb270137ba2fe51bcb78b9a6f/networkx/drawing/layout.py#L326
https://github.com/networkx/networkx/blob/794323b9ca4fcd5bb270137ba2fe51bcb78b9a6f/networkx/drawing/layout.py#L340
https://github.com/networkx/networkx/blob/794323b9ca4fcd5bb270137ba2fe51bcb78b9a6f/networkx/drawing/layout.py#L469
https://github.com/networkx/networkx/blob/794323b9ca4fcd5bb270137ba2fe51bcb78b9a6f/networkx/drawing/layout.py#L688
I remember we had some discussions about the `rescale_layout` function when I introduced a layout for bipartite networks. You might find #2976 and #2986 helpful.
Yes -- ```scale=scale``` is better for line 805. | 2019-10-03T05:08:40 |
networkx/networkx | 3,634 | networkx__networkx-3634 | [
"3562"
] | 8dee433bccc4935a554e8f29178865d097c26bf4 | diff --git a/networkx/convert_matrix.py b/networkx/convert_matrix.py
--- a/networkx/convert_matrix.py
+++ b/networkx/convert_matrix.py
@@ -274,10 +274,11 @@ def from_pandas_edgelist(df, source='source', target='target', edge_attr=None,
A valid column name (string or integer) for the target nodes (for the
directed case).
- edge_attr : str or int, iterable, True
- A valid column name (str or integer) or list of column names that will
- be used to retrieve items from the row and add them to the graph as
- edge attributes. If `True`, all of the remaining columns will be added.
+ edge_attr : str or int, iterable, True, or None
+ A valid column name (str or int) or iterable of column names that are
+ used to retrieve items and add them to the graph as edge attributes.
+ If `True`, all of the remaining columns will be added.
+ If `None`, no edge attributes are added to the graph.
create_using : NetworkX graph constructor, optional (default=nx.Graph)
Graph type to create. If graph instance, then cleared before populated.
@@ -332,6 +333,9 @@ def from_pandas_edgelist(df, source='source', target='target', edge_attr=None,
cols = edge_attr
else:
cols = [edge_attr]
+ if len(cols) == 0:
+ msg = "Invalid edge_attr argument. No columns found with name: %s"
+ raise nx.NetworkXError(msg % cols)
try:
eattrs = zip(*[df[col] for col in cols])
@@ -339,14 +343,12 @@ def from_pandas_edgelist(df, source='source', target='target', edge_attr=None,
msg = "Invalid edge_attr argument: %s" % edge_attr
raise nx.NetworkXError(msg)
for s, t, attrs in zip(df[source], df[target], eattrs):
-
- g.add_edge(s, t)
-
if g.is_multigraph():
- key = max(g[s][t]) # default keys just count so max is most recent
- g[s][t][key].update((attr, val) for attr, val in zip(cols, attrs))
+ key = g.add_edge(s, t)
+ g[s][t][key].update(zip(cols, attrs))
else:
- g[s][t].update((attr, val) for attr, val in zip(cols, attrs))
+ g.add_edge(s, t)
+ g[s][t].update(zip(cols, attrs))
return g
| diff --git a/networkx/tests/test_convert_pandas.py b/networkx/tests/test_convert_pandas.py
--- a/networkx/tests/test_convert_pandas.py
+++ b/networkx/tests/test_convert_pandas.py
@@ -2,7 +2,8 @@
from nose.tools import assert_raises
import networkx as nx
-from networkx.testing import assert_nodes_equal, assert_edges_equal, assert_graphs_equal
+from networkx.testing import assert_nodes_equal, assert_edges_equal, \
+ assert_graphs_equal
class TestConvertPandas(object):
@@ -36,7 +37,8 @@ def test_exceptions(self):
assert_raises(nx.NetworkXError, nx.to_networkx_graph, G)
G = pd.DataFrame(["a", 0.0]) # elist
assert_raises(nx.NetworkXError, nx.to_networkx_graph, G)
- df = pd.DataFrame([[1, 1], [1, 0]], dtype=int, index=[1, 2], columns=["a", "b"])
+ df = pd.DataFrame([[1, 1], [1, 0]], dtype=int,
+ index=[1, 2], columns=["a", "b"])
assert_raises(nx.NetworkXError, nx.from_pandas_adjacency, df)
def test_from_edgelist_all_attr(self):
@@ -67,24 +69,23 @@ def test_from_edgelist_multi_attr_incl_target(self):
def test_from_edgelist_multidigraph_and_edge_attr(self):
# example from issue #2374
- Gtrue = nx.MultiDiGraph([('X1', 'X4', {'Co': 'zA', 'Mi': 0, 'St': 'X1'}),
- ('X1', 'X4', {'Co': 'zB', 'Mi': 54, 'St': 'X2'}),
- ('X1', 'X4', {'Co': 'zB', 'Mi': 49, 'St': 'X3'}),
- ('X1', 'X4', {'Co': 'zB', 'Mi': 44, 'St': 'X4'}),
- ('Y1', 'Y3', {'Co': 'zC', 'Mi': 0, 'St': 'Y1'}),
- ('Y1', 'Y3', {'Co': 'zC', 'Mi': 34, 'St': 'Y2'}),
- ('Y1', 'Y3', {'Co': 'zC', 'Mi': 29, 'St': 'X2'}),
- ('Y1', 'Y3', {'Co': 'zC', 'Mi': 24, 'St': 'Y3'}),
- ('Z1', 'Z3', {'Co': 'zD', 'Mi': 0, 'St': 'Z1'}),
- ('Z1', 'Z3', {'Co': 'zD', 'Mi': 14, 'St': 'X3'}),
- ('Z1', 'Z3', {'Co': 'zE', 'Mi': 9, 'St': 'Z2'}),
- ('Z1', 'Z3', {'Co': 'zE', 'Mi': 4, 'St': 'Z3'})])
+ edges = [('X1', 'X4', {'Co': 'zA', 'Mi': 0, 'St': 'X1'}),
+ ('X1', 'X4', {'Co': 'zB', 'Mi': 54, 'St': 'X2'}),
+ ('X1', 'X4', {'Co': 'zB', 'Mi': 49, 'St': 'X3'}),
+ ('X1', 'X4', {'Co': 'zB', 'Mi': 44, 'St': 'X4'}),
+ ('Y1', 'Y3', {'Co': 'zC', 'Mi': 0, 'St': 'Y1'}),
+ ('Y1', 'Y3', {'Co': 'zC', 'Mi': 34, 'St': 'Y2'}),
+ ('Y1', 'Y3', {'Co': 'zC', 'Mi': 29, 'St': 'X2'}),
+ ('Y1', 'Y3', {'Co': 'zC', 'Mi': 24, 'St': 'Y3'}),
+ ('Z1', 'Z3', {'Co': 'zD', 'Mi': 0, 'St': 'Z1'}),
+ ('Z1', 'Z3', {'Co': 'zD', 'Mi': 14, 'St': 'X3'})]
+ Gtrue = nx.MultiDiGraph(edges)
df = pd.DataFrame.from_dict({
- 'O': ['X1', 'X1', 'X1', 'X1', 'Y1', 'Y1', 'Y1', 'Y1', 'Z1', 'Z1', 'Z1', 'Z1'],
- 'D': ['X4', 'X4', 'X4', 'X4', 'Y3', 'Y3', 'Y3', 'Y3', 'Z3', 'Z3', 'Z3', 'Z3'],
- 'St': ['X1', 'X2', 'X3', 'X4', 'Y1', 'Y2', 'X2', 'Y3', 'Z1', 'X3', 'Z2', 'Z3'],
- 'Co': ['zA', 'zB', 'zB', 'zB', 'zC', 'zC', 'zC', 'zC', 'zD', 'zD', 'zE', 'zE'],
- 'Mi': [0, 54, 49, 44, 0, 34, 29, 24, 0, 14, 9, 4]})
+ 'O': ['X1', 'X1', 'X1', 'X1', 'Y1', 'Y1', 'Y1', 'Y1', 'Z1', 'Z1'],
+ 'D': ['X4', 'X4', 'X4', 'X4', 'Y3', 'Y3', 'Y3', 'Y3', 'Z3', 'Z3'],
+ 'St': ['X1', 'X2', 'X3', 'X4', 'Y1', 'Y2', 'X2', 'Y3', 'Z1', 'X3'],
+ 'Co': ['zA', 'zB', 'zB', 'zB', 'zC', 'zC', 'zC', 'zC', 'zD', 'zD'],
+ 'Mi': [0, 54, 49, 44, 0, 34, 29, 24, 0, 14]})
G1 = nx.from_pandas_edgelist(df, source='O', target='D',
edge_attr=True,
create_using=nx.MultiDiGraph)
@@ -114,6 +115,14 @@ def test_from_edgelist_invalid_attr(self):
self.df, 0, 'b', 'misspell')
assert_raises(nx.NetworkXError, nx.from_pandas_edgelist,
self.df, 0, 'b', 1)
+ # see Issue #3562
+ edgeframe = pd.DataFrame([[0, 1], [1, 2], [2, 0]], columns=['s', 't'])
+ assert_raises(nx.NetworkXError, nx.from_pandas_edgelist,
+ edgeframe, 's', 't', True)
+ assert_raises(nx.NetworkXError, nx.from_pandas_edgelist,
+ edgeframe, 's', 't', 'weight')
+ assert_raises(nx.NetworkXError, nx.from_pandas_edgelist,
+ edgeframe, 's', 't', ['weight', 'size'])
def test_from_edgelist_no_attr(self):
Gtrue = nx.Graph([('E', 'C', {}),
@@ -144,7 +153,8 @@ def test_from_edgelist(self):
def test_from_adjacency(self):
nodelist = [1, 2]
- dftrue = pd.DataFrame([[1, 1], [1, 0]], dtype=int, index=nodelist, columns=nodelist)
+ dftrue = pd.DataFrame([[1, 1], [1, 0]], dtype=int,
+ index=nodelist, columns=nodelist)
G = nx.Graph([(1, 1), (1, 2)])
df = nx.to_pandas_adjacency(G, dtype=int)
pd.testing.assert_frame_equal(df, dftrue)
@@ -156,7 +166,8 @@ def test_roundtrip(self):
G = nx.from_pandas_edgelist(df)
assert_graphs_equal(Gtrue, G)
# adjacency
- Gtrue = nx.Graph(({1: {1: {'weight': 1}, 2: {'weight': 1}}, 2: {1: {'weight': 1}}}))
+ adj = {1: {1: {'weight': 1}, 2: {'weight': 1}}, 2: {1: {'weight': 1}}}
+ Gtrue = nx.Graph(adj)
df = nx.to_pandas_adjacency(Gtrue, dtype=int)
G = nx.from_pandas_adjacency(df)
assert_graphs_equal(Gtrue, G)
| `from_pandas_edgelist` creates empty graph if there are no attributes
Using `from_pandas_edgelist` with `edge_attr=True` results in an empty graph if there are no attribute columns. If a user passes in a dataframe which _may_ contain attribute columns, it will yield an empty graph if there are no non-source/target columns.
| That doesn't sound good.... Do you have a short piece of code that shows the problem?
Thanks!
Thanks for your response! Here's a snippet that illustrates the issue.
```python
import networkx as nx
import pandas as pd
edgeframe = pd.DataFrame([
{"src": 0, "tgt": 1},
{"src": 1, "tgt": 2},
{"src": 2, "tgt": 0}])
graph = nx.from_pandas_edgelist(
edgeframe,
source="src",
target="tgt",
edge_attr=True)
print(graph.order()) # 0
print(graph.size()) # 0
edgeframe["weight"] = 0 # add a trivial attribute
graph = nx.from_pandas_edgelist(
edgeframe,
source="src",
target="tgt",
edge_attr=True)
print(graph.order()) # 3
print(graph.size()) # 3
```
Good -- thanks for that code. It is clear that near line 325 the code presumes that the dataframe has a column that is not indicating source or target. And if that column doesn't exist it will cause the ```zip()``` function later on to terminate before any edges are added.
This is a bug and should be easy to fix.
Let's just raise an exception when no ```edge_attr``` columns exist?
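In sketch form, mirroring the column handling already inside `from_pandas_edgelist` (fragment; `df`, `source`, `target`, `edge_attr` as in that function, and essentially what the patch above does):
```python
if edge_attr is True:
    cols = [c for c in df.columns if c is not source and c is not target]
elif isinstance(edge_attr, (list, tuple)):
    cols = list(edge_attr)
else:
    cols = [edge_attr]
if len(cols) == 0:
    # nothing usable as an edge attribute, so fail loudly instead of
    # silently producing an empty graph
    msg = "Invalid edge_attr argument. No columns found with name: %s"
    raise nx.NetworkXError(msg % cols)
```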
That might be cleanest... | 2019-10-04T15:13:19 |
networkx/networkx | 3,636 | networkx__networkx-3636 | [
"3612"
] | d5237efa8a9f53b61a74745b4741f569bd352c38 | diff --git a/networkx/readwrite/gexf.py b/networkx/readwrite/gexf.py
--- a/networkx/readwrite/gexf.py
+++ b/networkx/readwrite/gexf.py
@@ -461,12 +461,13 @@ def add_attributes(self, node_or_edge, xml_obj, data, default):
e.attrib['for'] = attr_id
e.attrib['value'] = make_str(val)
# Handle nan, inf, -inf differently
- if e.attrib['value'] == 'inf':
- e.attrib['value'] = 'INF'
- elif e.attrib['value'] == 'nan':
- e.attrib['value'] = 'NaN'
- elif e.attrib['value'] == '-inf':
- e.attrib['value'] = '-INF'
+ if val_type == float:
+ if e.attrib['value'] == 'inf':
+ e.attrib['value'] = 'INF'
+ elif e.attrib['value'] == 'nan':
+ e.attrib['value'] = 'NaN'
+ elif e.attrib['value'] == '-inf':
+ e.attrib['value'] = '-INF'
if start is not None:
e.attrib['start'] = make_str(start)
if end is not None:
@@ -484,13 +485,14 @@ def add_attributes(self, node_or_edge, xml_obj, data, default):
e.attrib['value'] = make_str(v).lower()
else:
e.attrib['value'] = make_str(v)
- # Handle nan, inf, -inf differently
- if e.attrib['value'] == 'inf':
- e.attrib['value'] = 'INF'
- elif e.attrib['value'] == 'nan':
- e.attrib['value'] = 'NaN'
- elif e.attrib['value'] == '-inf':
- e.attrib['value'] = '-INF'
+ # Handle float nan, inf, -inf differently
+ if val_type == float:
+ if e.attrib['value'] == 'inf':
+ e.attrib['value'] = 'INF'
+ elif e.attrib['value'] == 'nan':
+ e.attrib['value'] = 'NaN'
+ elif e.attrib['value'] == '-inf':
+ e.attrib['value'] = '-INF'
attvalues.append(e)
xml_obj.append(attvalues)
return data
@@ -793,14 +795,6 @@ def add_node(self, G, node_xml, node_attr, node_pid=None):
for node_xml in subnodes.findall('{%s}node' % self.NS_GEXF):
self.add_node(G, node_xml, node_attr, node_pid=node_id)
- # Handle nan, inf, -inf differently
- for k, v in data.items():
- if make_str(v) == 'inf':
- data[k] = 'INF'
- elif make_str(v) == 'nan':
- data[k] = 'NaN'
- elif make_str(v) == '-inf':
- data[k] = '-INF'
G.add_node(node_id, **data)
def add_start_end(self, data, xml):
| diff --git a/networkx/readwrite/tests/test_gexf.py b/networkx/readwrite/tests/test_gexf.py
--- a/networkx/readwrite/tests/test_gexf.py
+++ b/networkx/readwrite/tests/test_gexf.py
@@ -474,18 +474,32 @@ def test_bool(self):
# Test for NaN, INF and -INF
def test_specials(self):
+ from math import isnan
+ inf, nan = float('inf'), float('nan')
G = nx.Graph()
- G.add_node(1, testattr=float('inf'))
- G.add_node(2, testattr=float('nan'))
- G.add_node(3, testattr=float('-inf'))
- list_value = [(1, 2, 3), (2, 1, 2)]
- G.add_node(4, key=list_value)
+ G.add_node(1, testattr=inf, strdata='inf', key='a')
+ G.add_node(2, testattr=nan, strdata='nan', key='b')
+ G.add_node(3, testattr=-inf, strdata='-inf', key='c')
+
fh = io.BytesIO()
nx.write_gexf(G, fh)
fh.seek(0)
- # float as inf,-inf and nan are all floats internally
- H = nx.read_gexf(fh, node_type=float)
- assert_equal(H.nodes[1]['testattr'], "INF")
- assert_equal(H.nodes[2]['testattr'], "NaN")
- assert_equal(H.nodes[3]['testattr'], "-INF")
- assert_equals(H.nodes[4]['networkx_key'], list_value)
+ filetext = fh.read()
+ fh.seek(0)
+ H = nx.read_gexf(fh, node_type=int)
+
+ assert_in(b'INF', filetext)
+ assert_in(b'NaN', filetext)
+ assert_in(b'-INF', filetext)
+
+ assert_equal(H.nodes[1]['testattr'], inf)
+ assert_true(isnan(H.nodes[2]['testattr']))
+ assert_equal(H.nodes[3]['testattr'], -inf)
+
+ assert_equal(H.nodes[1]['strdata'], 'inf')
+ assert_equal(H.nodes[2]['strdata'], 'nan')
+ assert_equal(H.nodes[3]['strdata'], '-inf')
+
+ assert_equal(H.nodes[1]['networkx_key'], 'a')
+ assert_equal(H.nodes[2]['networkx_key'], 'b')
+ assert_equal(H.nodes[3]['networkx_key'], 'c')
| GEXF output does not round-trip
On master (currently 269c5721100797514aa0469d58e637ec3a29e984) and Python 3.7.3, if I run the following script:
```python
import io, math
import networkx
g = networkx.Graph()
g.add_node(1, weight=float('inf'), data='inf')
g.add_node(2, weight=float('nan'), data='nan')
with io.BytesIO() as f:
networkx.write_gexf(g, f)
f.seek(0)
h = networkx.read_gexf(f, node_type=int)
if h.nodes[1]['data'] != 'inf':
print(f"h.nodes[1]['data'] == {h.nodes[1]['data']!r} != 'inf'")
if h.nodes[2]['data'] != 'nan':
print(f"h.nodes[2]['data'] == {h.nodes[2]['data']!r} != 'nan'")
if not isinstance(h.nodes[1]['weight'], float) or not math.isinf(h.nodes[1]['weight']):
print(f"h.nodes[1]['weight'] == {h.nodes[1]['weight']!r} != float('inf')")
if not isinstance(h.nodes[2]['weight'], float) or not math.isnan(h.nodes[2]['weight']):
print(f"h.nodes[2]['weight'] == {h.nodes[2]['weight']!r} != float('nan')")
```
I get the following output:
```
h.nodes[1]['data'] == 'INF' != 'inf'
h.nodes[2]['data'] == 'NaN' != 'nan'
h.nodes[1]['weight'] == 'INF' != float('inf')
h.nodes[2]['weight'] == 'NaN' != float('nan')
```
It appears that dc21ca4, which fixed #3573, caused `write_gexf` to write `float('inf')` to the XML file as `INF` by transforming text without checking data types first, and that `read_gexf` does not cast `float` or `double` XML types to the `float` Python type.
| Yes -- you are right... I understand the issue.
It'd be nice to read 'INF', 'NaN' and '-INF' as floats (and maybe 'inf', etc too)...
I think to conform to the standard, you just rely on the data type definitions at the top of the GEXF file. This line should tell the reader how to convert the data:
```xml
<attribute id="0" title="weight" type="double" />
```
If the type is double or float, convert it to a Python `float`.
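In sketch form (hypothetical helper; `attr_type` would come from that declaration):
```python
def decode_value(attr_type, text):
    if attr_type in ('float', 'double'):
        return float(text)  # float('INF'), float('NaN'), float('-INF') all parse
    return text
```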
Thanks very much for this report!
It looks like the lines added to the "reader" at line 800 of the file should not be there. The ```decode_attr_elements``` handles them so we don't have to.
Also, the lines in the writer should check the type to ensure it is float before converting.
And we'll add your example as a test. Thanks again!
| 2019-10-06T03:57:22 |
networkx/networkx | 3,672 | networkx__networkx-3672 | [
"3610"
] | 62d87e2f2650a9a6914308cee7c3ae3217b26aed | diff --git a/networkx/readwrite/gexf.py b/networkx/readwrite/gexf.py
--- a/networkx/readwrite/gexf.py
+++ b/networkx/readwrite/gexf.py
@@ -356,7 +356,7 @@ def add_nodes(self, G, graph_element):
# add node element and attr subelements
default = G.graph.get('node_default', {})
node_data = self.add_parents(node_element, node_data)
- if self.version == '1.1':
+ if self.VERSION == '1.1':
node_data = self.add_slices(node_element, node_data)
else:
node_data = self.add_spells(node_element, node_data)
@@ -420,7 +420,7 @@ def edge_key_data(G):
edge_element = Element('edge',
source=source_id, target=target_id, **kw)
default = G.graph.get('edge_default', {})
- if self.version == '1.1':
+ if self.VERSION == '1.1':
edge_data = self.add_slices(edge_element, edge_data)
else:
edge_data = self.add_spells(edge_element, edge_data)
@@ -768,7 +768,7 @@ def add_node(self, G, node_xml, node_attr, node_pid=None):
# get attributes and subattributues for node
data = self.decode_attr_elements(node_attr, node_xml)
data = self.add_parents(data, node_xml) # add any parents
- if self.version == '1.1':
+ if self.VERSION == '1.1':
data = self.add_slices(data, node_xml) # add slices
else:
data = self.add_spells(data, node_xml) # add spells
@@ -899,7 +899,7 @@ def add_edge(self, G, edge_element, edge_attr):
data = self.decode_attr_elements(edge_attr, edge_element)
data = self.add_start_end(data, edge_element)
- if self.version == '1.1':
+ if self.VERSION == '1.1':
data = self.add_slices(data, edge_element) # add slices
else:
data = self.add_spells(data, edge_element) # add spells
| diff --git a/networkx/readwrite/tests/test_gexf.py b/networkx/readwrite/tests/test_gexf.py
--- a/networkx/readwrite/tests/test_gexf.py
+++ b/networkx/readwrite/tests/test_gexf.py
@@ -140,8 +140,7 @@ def test_read_simple_directed_graphml(self):
H = nx.read_gexf(self.simple_directed_fh)
assert sorted(G.nodes()) == sorted(H.nodes())
assert sorted(G.edges()) == sorted(H.edges())
- assert (sorted(G.edges(data=True)) ==
- sorted(H.edges(data=True)))
+ assert sorted(G.edges(data=True)) == sorted(H.edges(data=True))
self.simple_directed_fh.seek(0)
def test_write_read_simple_directed_graphml(self):
@@ -152,17 +151,15 @@ def test_write_read_simple_directed_graphml(self):
H = nx.read_gexf(fh)
assert sorted(G.nodes()) == sorted(H.nodes())
assert sorted(G.edges()) == sorted(H.edges())
- assert (sorted(G.edges(data=True)) ==
- sorted(H.edges(data=True)))
+ assert sorted(G.edges(data=True)) == sorted(H.edges(data=True))
self.simple_directed_fh.seek(0)
def test_read_simple_undirected_graphml(self):
G = self.simple_undirected_graph
H = nx.read_gexf(self.simple_undirected_fh)
assert sorted(G.nodes()) == sorted(H.nodes())
- assert (
- sorted(sorted(e) for e in G.edges()) ==
- sorted(sorted(e) for e in H.edges()))
+ assert (sorted(sorted(e) for e in G.edges())
+ == sorted(sorted(e) for e in H.edges()))
self.simple_undirected_fh.seek(0)
def test_read_attribute_graphml(self):
@@ -260,9 +257,8 @@ def test_default_attribute(self):
fh.seek(0)
H = nx.read_gexf(fh, node_type=int)
assert sorted(G.nodes()) == sorted(H.nodes())
- assert (
- sorted(sorted(e) for e in G.edges()) ==
- sorted(sorted(e) for e in H.edges()))
+ assert (sorted(sorted(e) for e in G.edges())
+ == sorted(sorted(e) for e in H.edges()))
# Reading a gexf graph always sets mode attribute to either
# 'static' or 'dynamic'. Remove the mode attribute from the
# read graph for the sake of comparing remaining attributes.
@@ -498,3 +494,106 @@ def test_specials(self):
assert H.nodes[1]['networkx_key'] == 'a'
assert H.nodes[2]['networkx_key'] == 'b'
assert H.nodes[3]['networkx_key'] == 'c'
+
+ def test_simple_list(self):
+ G = nx.Graph()
+ list_value = [(1, 2, 3), (9, 1, 2)]
+ G.add_node(1, key=list_value)
+ fh = io.BytesIO()
+ nx.write_gexf(G, fh)
+ fh.seek(0)
+ H = nx.read_gexf(fh, node_type=int)
+ assert H.nodes[1]['networkx_key'] == list_value
+
+ def test_dynamic_mode(self):
+ G = nx.Graph()
+ G.add_node(1, label='1', color='green')
+ G.graph['mode'] = 'dynamic'
+ fh = io.BytesIO()
+ nx.write_gexf(G, fh)
+ fh.seek(0)
+ H = nx.read_gexf(fh, node_type=int)
+ assert sorted(G.nodes()) == sorted(H.nodes())
+ assert (sorted(sorted(e) for e in G.edges())
+ == sorted(sorted(e) for e in H.edges()))
+
+ def test_multigraph_with_missing_attributes(self):
+ G = nx.MultiGraph()
+ G.add_node(0, label='1', color='green')
+ G.add_node(1, label='2', color='green')
+ G.add_edge(0, 1, id='0', weight=3, type='undirected', start=0, end=1)
+ G.add_edge(0, 1)
+ fh = io.BytesIO()
+ nx.write_gexf(G, fh)
+ fh.seek(0)
+ H = nx.read_gexf(fh, node_type=int)
+ assert sorted(G.nodes()) == sorted(H.nodes())
+ assert (sorted(sorted(e) for e in G.edges())
+ == sorted(sorted(e) for e in H.edges()))
+
+ def test_missing_viz_attributes(self):
+ G = nx.Graph()
+ G.add_node(0, label='1', color='green')
+ G.nodes[0]['viz'] = {'size': 54}
+ G.nodes[0]['viz']['position'] = {'x': 0, 'y': 1, 'z': 0}
+ G.nodes[0]['viz']['color'] = {'r': 0, 'g': 0, 'b': 256}
+ G.nodes[0]['viz']['shape'] = 'http://random.url'
+ G.nodes[0]['viz']['thickness'] = 2
+ fh = io.BytesIO()
+ nx.write_gexf(G, fh, version='1.1draft')
+ fh.seek(0)
+ H = nx.read_gexf(fh, node_type=int)
+ assert sorted(G.nodes()) == sorted(H.nodes())
+ assert (sorted(sorted(e) for e in G.edges())
+ == sorted(sorted(e) for e in H.edges()))
+
+ # Second graph for the other branch
+ G = nx.Graph()
+ G.add_node(0, label='1', color='green')
+ G.nodes[0]['viz'] = {'size': 54}
+ G.nodes[0]['viz']['position'] = {'x': 0, 'y': 1, 'z': 0}
+ G.nodes[0]['viz']['color'] = {'r': 0, 'g': 0, 'b': 256, 'a': 0.5}
+ G.nodes[0]['viz']['shape'] = 'ftp://random.url'
+ G.nodes[0]['viz']['thickness'] = 2
+ fh = io.BytesIO()
+ nx.write_gexf(G, fh)
+ fh.seek(0)
+ H = nx.read_gexf(fh, node_type=int)
+ assert sorted(G.nodes()) == sorted(H.nodes())
+ assert (sorted(sorted(e) for e in G.edges())
+ == sorted(sorted(e) for e in H.edges()))
+
+ def test_slice_and_spell(self):
+ # Test spell first, so version = 1.2
+ G = nx.Graph()
+ G.add_node(0, label='1', color='green')
+ G.nodes[0]['spells'] = [(1, 2)]
+ fh = io.BytesIO()
+ nx.write_gexf(G, fh)
+ fh.seek(0)
+ H = nx.read_gexf(fh, node_type=int)
+ assert sorted(G.nodes()) == sorted(H.nodes())
+ assert (sorted(sorted(e) for e in G.edges())
+ == sorted(sorted(e) for e in H.edges()))
+
+ G = nx.Graph()
+ G.add_node(0, label='1', color='green')
+ G.nodes[0]['slices'] = [(1, 2)]
+ fh = io.BytesIO()
+ nx.write_gexf(G, fh, version='1.1draft')
+ fh.seek(0)
+ H = nx.read_gexf(fh, node_type=int)
+ assert sorted(G.nodes()) == sorted(H.nodes())
+ assert (sorted(sorted(e) for e in G.edges())
+ == sorted(sorted(e) for e in H.edges()))
+
+ def test_add_parent(self):
+ G = nx.Graph()
+ G.add_node(0, label='1', color='green', parents=[1, 2])
+ fh = io.BytesIO()
+ nx.write_gexf(G, fh)
+ fh.seek(0)
+ H = nx.read_gexf(fh, node_type=int)
+ assert sorted(G.nodes()) == sorted(H.nodes())
+ assert (sorted(sorted(e) for e in G.edges())
+ == sorted(sorted(e) for e in H.edges()))
| Possible bug in version attribute in GEXF.py
In ```gexf.py```, from lines 769 to 773, we currently have https://github.com/networkx/networkx/blob/4a0f42b477e5b20377ba7502de555b67b429adc3/networkx/readwrite/gexf.py#L769-L773
while in lines 822-830, it is:
https://github.com/networkx/networkx/blob/4a0f42b477e5b20377ba7502de555b67b429adc3/networkx/readwrite/gexf.py#L822-L830
which seemed somewhat inconsistent to me (**version** vs **VERSION**). So I looked above, and found:
https://github.com/networkx/networkx/blob/4a0f42b477e5b20377ba7502de555b67b429adc3/networkx/readwrite/gexf.py#L159-L161
which indicates that the ```version``` parameter would only accept ```1.1draft/1.2draft```, while the ```VERSION``` parameter will have ```1.1/1.2``` in it.
https://github.com/networkx/networkx/blob/4a0f42b477e5b20377ba7502de555b67b429adc3/networkx/readwrite/gexf.py#L250-L251
Hence, the checks in both add_edges and add_nodes with ```self.version``` always fail.
**How to test?**
```python
import io
import networkx as nx
G = nx.Graph()
G.add_node(0, label='1', color='green')
G.nodes[0]['viz'] = {'color': {'r': 0, 'g': 0, 'b': 256}}
fh = io.BytesIO()
nx.write_gexf(G, fh, version='1.1draft')
fh.seek(0)
H = nx.read_gexf(fh, node_type=int)
```
The above code will execute fine, but as soon as we omit the version parameter (which defaults to 1.2draft), it raises the exception as it expects a ```rgba``` tuple. Clearly, the VERSION parameter is working.
However:
```python
import networkx as nx
G = nx.Graph()
G.add_node(0, label='1', color='green')
G.nodes[0]['spells'] = [(1, 2)]
print("\n".join(nx.generate_gexf(G, version='1.1draft')))
```
will still write the spell attribute, and neither version will write slices. The same holds for version='1.2draft' (this can be checked via the printed output).
Thus, I believe that all the ```self.version == '1.1'``` checks should be made into ```self.VERSION == '1.1'``` checks, unless I am misunderstanding it. In any case, the code should be consistent about the check.
**Additional context:** I am currently working on a PR to improve the code coverage of test_gexf.py as per the discussion in #3606 and noticed this weird issue. Clarification regarding this bug(?) would be welcome so that I can incorporate it in my PR.
| Yes! Anywhere it checks ```self.version == '1.1'```, that should be ```self.VERSION```.
I guess that means we need a test that slices work.
Thanks! | 2019-10-19T16:08:17 |
networkx/networkx | 3,676 | networkx__networkx-3676 | [
"3431"
] | 89c13d92dd4ae104d8b8f2c88f98cad499937f25 | diff --git a/networkx/generators/small.py b/networkx/generators/small.py
--- a/networkx/generators/small.py
+++ b/networkx/generators/small.py
@@ -83,6 +83,10 @@ def make_small_graph(graph_description, create_using=None):
Use the create_using argument to choose the graph class/type.
"""
+
+ if graph_description[0] not in ("adjacencylist", "edgelist"):
+ raise NetworkXError("ltype must be either adjacencylist or edgelist")
+
ltype = graph_description[0]
name = graph_description[1]
n = graph_description[2]
| diff --git a/networkx/generators/tests/test_small.py b/networkx/generators/tests/test_small.py
--- a/networkx/generators/tests/test_small.py
+++ b/networkx/generators/tests/test_small.py
@@ -16,10 +16,16 @@
class TestGeneratorsSmall():
def test_make_small_graph(self):
- d = ["adjacencylist", "Bull Graph", 5, [[2, 3], [1, 3, 4], [1, 2, 5], [2], [3]]]
+ d = ["adjacencylist", "Bull Graph", 5, [[2, 3], [1, 3, 4], [1, 2, 5],
+ [2], [3]]]
G = nx.make_small_graph(d)
assert is_isomorphic(G, nx.bull_graph())
+ # Test small graph creation error with wrong ltype
+ d[0] = "erroneouslist"
+ pytest.raises(nx.NetworkXError, nx.make_small_graph,
+ graph_description=d)
+
def test__LCF_graph(self):
# If n<=0, then return the null_graph
G = nx.LCF_graph(-10, [1, 2], 100)
`make_small_graph` may fail to raise an error when it gets bad input
The following code was used in an answer on [stackoverflow](https://stackoverflow.com/a/56032840/2966723) and produces a very different result from the one desired:
gr = {
0: [4],
1: [3],
2: [2, 3, 4, 5],
3: [1, 3],
4: [3],
5: [4]
}
G = nx.make_small_graph(gr)
But the result isn't what they expected:
G.nodes()
> NodeView((2, 3, 4, 5))
G.edges()
> EdgeView([])
Because `gr[0]`, `gr[1]`, and `gr[2]` are defined, no error message is returned. The code then creates an empty graph with the nodes from `gr[2]` (that is with `[2,3,4,5]`). It then fails to add any edges because `gr[0]` isn't of the right form.
I think the easiest fix is to add an extra condition if `gr[0]` is not either `'edgelist'` or `'adjacencylist'` to return an error.
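For reference, the expected input format looks like this (the adjacency list below is the bull graph description taken from the test suite above):
```python
import networkx as nx

# graph_description = [ltype, name, n, xlist] with nodes numbered 1..n
d = ["adjacencylist", "Bull Graph", 5,
     [[2, 3], [1, 3, 4], [1, 2, 5], [2], [3]]]
G = nx.make_small_graph(d)
```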
| 2019-10-20T17:20:10 |
|
networkx/networkx | 3,698 | networkx__networkx-3698 | [
"3696"
] | e58f79d7408cab2218b7fe4514376a8450402ebd | diff --git a/networkx/drawing/nx_pylab.py b/networkx/drawing/nx_pylab.py
--- a/networkx/drawing/nx_pylab.py
+++ b/networkx/drawing/nx_pylab.py
@@ -612,6 +612,9 @@ def draw_networkx_edges(G, pos,
alpha=alpha
)
+ edge_collection.set_cmap(edge_cmap)
+ edge_collection.set_clim(edge_vmin, edge_vmax)
+
edge_collection.set_zorder(1) # edges go behind nodes
edge_collection.set_label(label)
ax.add_collection(edge_collection)
| Edge plots no longer work the same 2.3 vs 2.4
We have used the nx.draw_networkx_nodes and nx.draw_networkx_edges functions together to plot node and edge color scales separately, for example, pressure at nodes and flow through pipes (edges). This is no longer working in NetworkX 2.4. The graphics below show identical code, run on 2.3 and 2.4. In NetworkX 2.4, the edges are being colored according to the node colormap, but no values are being assigned to the edge colorbar.
The code below does not contain full details, but is meant to show which options we are passing in.
```python
nodes = nx.draw_networkx_nodes(G, pos, with_labels=False, nodelist=nodelist, node_color=nodecolor, node_size=node_size, alpha=node_alpha, cmap=node_cmap, vmin=node_range[0], vmax=node_range[1], linewidths=0, ax=ax)
edges = nx.draw_networkx_edges(G, pos, edgelist=linklist, edge_color=linkcolor, width=link_width, alpha=link_alpha, edge_cmap=link_cmap, edge_vmin=link_range[0], edge_vmax=link_range[1], ax=ax)
plt.colorbar(nodes, shrink=0.5, pad=0, ax=ax)
plt.colorbar(edges, shrink=0.5, pad=0.05, ax=ax)
```


Thanks for this -- wow... it's weird... Do I understand correctly that the edges and edge colors are drawn the same in both versions, but the colorbar is different? And is the colorbar the only difference you have noticed?
Does the same thing happen with a small graph, say 2-4 edges?
So, yes on both counts. The only difference between the generation of the two graphs was running “conda update networkx” between running the script the first and second time. Conda only updated networkx, all other packages (matplotlib in particular to 3.1.1) had been updated prior to the first run.
We’ve run this on a network with 10 nodes and edges, and it did the same thing.
Within draw_networkx_edges, the LineCollection needs cmap and clim defined before it is returned. Something like this should work.
```
edge_collection.set_cmap(edge_cmap)
edge_collection.set_clim(edge_vmin, edge_vmax)
```
Thanks @kaklise !!
Can you (@dbhart) verify that adding these two lines before the ```plt.colorbar``` lines does the trick?
```
edge_collection.set_cmap(link_cmap)
edge_collection.set_clim(link_range[0], link_range[1])
```
I'm running the same example as @dbhart, and this works as long as the min and max in link_range are not None. This is handled inside draw_networkx_edges, where vmin and vmax can be defined based on the edge color. The resulting figure now matches 2.3.
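For reference, a minimal sketch of that fallback (names are assumed for illustration; the real function handles more cases):
```python
import numpy as np

def clim_from_edge_colors(edge_color, edge_vmin=None, edge_vmax=None):
    """Fall back to data-driven color limits when none are given."""
    values = np.asarray(edge_color, dtype=float)
    if edge_vmin is None:
        edge_vmin = float(values.min())
    if edge_vmax is None:
        edge_vmax = float(values.max())
    return edge_vmin, edge_vmax
```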
Excellent! Thanks!
Are you up for creating a PR or should I do that?
Not sure how to add a test of the colorbar being correct... :{
I can submit a PR (I'm not sure how to test colorbars either) | 2019-10-29T23:00:44 |
|
networkx/networkx | 3,699 | networkx__networkx-3699 | [
"3687"
] | 3987370c957de15249809672933dc06c2fed6fc1 | diff --git a/networkx/classes/graph.py b/networkx/classes/graph.py
--- a/networkx/classes/graph.py
+++ b/networkx/classes/graph.py
@@ -1223,8 +1223,7 @@ def neighbors(self, n):
Notes
-----
- It is usually more convenient (and faster) to access the
- adjacency dictionary as ``G[n]``:
+ Alternate ways to access the neighbors are ``G.adj[n]`` or ``G[n]``:
>>> G = nx.Graph() # or DiGraph, MultiGraph, MultiDiGraph, etc
>>> G.add_edge('a', 'b', weight=7)
| Speed-up of Graph.__getitem__
The documentation (at https://networkx.github.io/documentation/stable/reference/classes/generated/networkx.Graph.neighbors.html) says:
"It is usually more convenient (and faster) to access the adjacency dictionary as G[n]".
However, my testing using %timeit showed that this is not true.
```python
%timeit G[2]
%timeit list(G.neighbors(2))
```
```
1.08 µs ± 42.6 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
685 ns ± 37.2 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
```
| We can easily speed up G[u] to be as fast as it used to be.
(in G.__getitem__ use G._adj, not G.adj)
This Issue should lead to a PR which does that. That is a much better fix than changing the docs.
Thank you for bringing it up!
```python
%timeit G[2]
%timeit G.neighbors(2)
%timeit G._adj[2] # the fastest: direct access to the data
%timeit [n for n in G[2]]
%timeit [n for n in G.neighbors(2)]
%timeit [n for n in G._adj[2]]
```
With the resulting times
630 ns ± 6.83 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
238 ns ± 3.86 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
62.7 ns ± 0.626 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)
13.1 µs ± 104 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
12.6 µs ± 123 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
11.9 µs ± 65.1 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
What was the reason for switching to G.adj if G._adj is faster?
G.adj is read-only, as are G.adj[u] and G.adj[u][v], but G.adj[u][v]['attr'] is a read-write dict.
They all use G._adj to look up the info, but they protect against corrupting the data structure.
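A quick way to see the protection in action (a sketch; the exact error message may vary by version):
```python
import networkx as nx

G = nx.Graph()
G.add_edge('a', 'b', weight=7)

G.adj['a']['b']['weight'] = 8  # the inner attribute dict is writable
try:
    G.adj['a']['c'] = {}  # the view itself rejects item assignment
except TypeError as err:
    print(err)
```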
Whoops... We can't rewrite ```G.__getitem__``` to return ```G._adj[n]``` because while that is faster, it no longer protects the data structure.
We'll have to change the docs instead... :{ | 2019-10-30T17:15:41 |
|
networkx/networkx | 3,752 | networkx__networkx-3752 | [
"3751"
] | d883a19b5278f4fb1013dd16bb178b9be5b7cc64 | diff --git a/examples/javascript/force.py b/examples/javascript/force.py
--- a/examples/javascript/force.py
+++ b/examples/javascript/force.py
@@ -5,6 +5,10 @@
Example of writing JSON format graph data and using the D3 Javascript library
to produce an HTML/Javascript drawing.
+
+You will need to download the following directory:
+
+- https://github.com/networkx/networkx/tree/master/examples/javascript/force
"""
import json
| examples/javascript missing files
Hi!
[examples/javascript](https://networkx.github.io/documentation/networkx-2.2/auto_examples/javascript/force.html#sphx-glr-download-auto-examples-javascript-force-py) shows how to render the graph in HTML.
The webpage allows downloading the Python file, but not the other related files and folders. One has to go to GitHub and download the example from the repo.
Thanks!
| 2019-12-17T16:58:42 |
||
networkx/networkx | 3,764 | networkx__networkx-3764 | [
"3753"
] | a8c09757f52c2d690d0c8cd983e55a2af9b8d260 | diff --git a/networkx/drawing/layout.py b/networkx/drawing/layout.py
--- a/networkx/drawing/layout.py
+++ b/networkx/drawing/layout.py
@@ -168,7 +168,7 @@ def circular_layout(G, scale=1, center=None, dim=2):
return pos
-def shell_layout(G, nlist=None, scale=1, center=None, dim=2):
+def shell_layout(G, nlist=None, rotate=None, scale=1, center=None, dim=2):
"""Position nodes in concentric circles.
Parameters
@@ -179,6 +179,11 @@ def shell_layout(G, nlist=None, scale=1, center=None, dim=2):
nlist : list of lists
List of node lists for each shell.
+ rotate : angle in radians (default=pi/len(nlist))
+ Angle by which to rotate the starting position of each shell
+ relative to the starting position of the previous shell.
+ To recreate behavior before v2.5 use rotate=0.
+
scale : number (default: 1)
Scale factor for positions.
@@ -227,25 +232,27 @@ def shell_layout(G, nlist=None, scale=1, center=None, dim=2):
# draw the whole graph in one shell
nlist = [list(G)]
+ radius_bump = scale / len(nlist)
+
if len(nlist[0]) == 1:
# single node at center
radius = 0.0
else:
# else start at r=1
- radius = 1.0
+ radius = radius_bump
+ if rotate is None:
+ rotate = np.pi / len(nlist)
+ first_theta = rotate
npos = {}
for nodes in nlist:
- # Discard the extra angle since it matches 0 radians.
- theta = np.linspace(0, 1, len(nodes) + 1)[:-1] * 2 * np.pi
- theta = theta.astype(np.float32)
- pos = np.column_stack([np.cos(theta), np.sin(theta)])
- if len(pos) > 1:
- pos = rescale_layout(pos, scale=scale * radius / len(nlist)) + center
- else:
- pos = np.array([(scale * radius + center[0], center[1])])
+ # Discard the last angle (endpoint=False) since 2*pi matches 0 radians
+ theta = np.linspace(0, 2 * np.pi, len(nodes),
+ endpoint=False, dtype=np.float32) + first_theta
+ pos = radius * np.column_stack([np.cos(theta), np.sin(theta)]) + center
npos.update(zip(nodes, pos))
- radius += 1.0
+ radius += radius_bump
+ first_theta += rotate
return npos
@@ -474,7 +481,7 @@ def fruchterman_reingold_layout(G,
pos = _sparse_fruchterman_reingold(A, k, pos_arr, fixed,
iterations, threshold,
dim, seed)
- except:
+ except ValueError:
A = nx.to_numpy_array(G, weight=weight)
if k is None and fixed is not None:
# We must adjust k by domain size for layouts not near 1x1
@@ -575,7 +582,7 @@ def _sparse_fruchterman_reingold(A, k=None, pos=None, fixed=None,
# make sure we have a LIst of Lists representation
try:
A = A.tolil()
- except:
+ except AttributeError:
A = (coo_matrix(A)).tolil()
if pos is None:
@@ -935,7 +942,7 @@ def planar_layout(G, scale=1, center=None, dim=2):
raise nx.NetworkXException("G is not planar.")
pos = nx.combinatorial_embedding_to_pos(embedding)
node_list = list(embedding)
- pos = np.row_stack((pos[x] for x in node_list))
+ pos = np.row_stack([pos[x] for x in node_list])
pos = pos.astype(np.float64)
pos = rescale_layout(pos, scale=scale) + center
return dict(zip(node_list, pos))
| diff --git a/networkx/drawing/tests/test_layout.py b/networkx/drawing/tests/test_layout.py
--- a/networkx/drawing/tests/test_layout.py
+++ b/networkx/drawing/tests/test_layout.py
@@ -1,14 +1,12 @@
"""Unit tests for layout functions."""
+import networkx as nx
+from networkx.testing import almost_equal
+
import pytest
numpy = pytest.importorskip('numpy')
scipy = pytest.importorskip('scipy')
-import pytest
-import networkx as nx
-from networkx.testing import almost_equal
-
-
class TestLayout(object):
@classmethod
@@ -151,6 +149,8 @@ def test_adjacency_interface_numpy(self):
assert pos.shape == (6, 2)
pos = nx.drawing.layout._fruchterman_reingold(A, dim=3)
assert pos.shape == (6, 3)
+ pos = nx.drawing.layout._sparse_fruchterman_reingold(A)
+ assert pos.shape == (6, 2)
def test_adjacency_interface_scipy(self):
A = nx.to_scipy_sparse_matrix(self.Gs, dtype='d')
@@ -169,6 +169,9 @@ def test_single_nodes(self):
vpos = nx.shell_layout(G, [[0], [1, 2], [3]])
assert not vpos[0].any()
assert vpos[3].any() # ensure node 3 not at origin (#3188)
+ assert numpy.linalg.norm(vpos[3]) <= 1 # ensure node 3 fits (#3753)
+ vpos = nx.shell_layout(G, [[0], [1, 2], [3]], rotate=0)
+ assert numpy.linalg.norm(vpos[3]) <= 1 # ensure node 3 fits (#3753)
def test_smoke_initial_pos_fruchterman_reingold(self):
pos = nx.circular_layout(self.Gi)
| Shell radius too big for shells of 1 element.
This bug appears to be the opposite of issue #3188 where shells of one element collapsed to zero radius.
```
nx.__version__
'2.4'
```
Shells with one element now get assigned an X position equal to their shell's index + 1 in `nlist`:
```python
>>> shell_layout(nlist=[[1, 2], [3, 4], [5, 6], [7], [8]])
{1: array([2.00000003e-01, 8.74227801e-09]),
 2: array([-2.00000003e-01, -8.74227801e-09]),
 3: array([4.00000006e-01, 1.74845560e-08]),
 4: array([-4.00000006e-01, -1.74845560e-08]),
 5: array([6.00000024e-01, 2.62268340e-08]),
 6: array([-6.00000024e-01, -2.62268340e-08]),
 7: array([4., 0.]),
 8: array([5., 0.])}
```
Resulting in plots that look like:

| Suggested solution:
```
def shell_layout_suggestion(nlist=None, scale=1, center=[0,0] ):
import numpy as np
npos = {}
# Create array of radii with outer radius = 1 * scale
radii = np.linspace(1, len(nlist), len(nlist) ) * scale / len(nlist)
for nodes, radius in zip(nlist, radii):
for node in nodes:
i = nodes.index(node)
# Discard the extra angle since it matches 0 radians.
theta = np.linspace(0, 2 * np.pi, len(nodes) +1)[:-1]
pos = np.column_stack([radius * np.cos(theta[i]) + center[0], radius * np.sin(theta[i]) + center[1]])
npos.update({node: pos[0]})
return npos
```
Resultant plot:

Updated to stagger the first point of each shell around the circle, so that not every shell has its first point at Y=0.
```
def shell_layout_suggestion(nlist=None, scale=1, center=[0,0] ):
import numpy as np
npos = {}
# Create array of radii with outer radius = 1 * scale
radii = np.linspace(1, len(nlist), len(nlist) ) * scale / len(nlist)
for nodes, radius in zip(nlist, radii):
for node in nodes:
i = nodes.index(node)
# Discard the extra angle since it matches 0 radians.
theta = np.linspace(0, 2 * np.pi, len(nodes) +1)[:-1] + (np.pi * nlist.index(nodes)/len(nlist))
pos = np.column_stack([radius * np.cos(theta[i]) + center[0], radius * np.sin(theta[i]) + center[1]])
npos.update({node: pos[0]})
return npos
```

This is a bug.
The code checks whether there is one node in the shell, but in that case it doesn't scale the radius by the number of shells (and it does scale the radius when the shell has more than one node).
Thanks for this!
You're welcome.
I couldn't figure out why the original code treated shells differently based on the number of nodes in the shell. Nor could I figure out why it should.
So my approach treats all shells the same, regardless of the number of nodes.
Also, I didn't see why it was necessary to rescale when it seems straight forward to scale first, then fit all the nodes into that scale.
The old code treated shells with single nodes specially because it couldn't use ```rescale_layout``` with a single node (of course, then it didn't rescale at all -- which isn't right either). You are correct that we don't need to rescale using the generic rescale function in shell_layout because we can easily set the scale directly.
Your code can be improved by using ```enumerate()``` to avoid the fairly slow ```list.index``` method in two places.
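For example, the inner loops could become something like this (a sketch of the same logic):
```python
for shell_idx, nodes in enumerate(nlist):
    radius = radii[shell_idx]
    theta0 = np.pi * shell_idx / len(nlist)  # stagger each shell's start
    for i, node in enumerate(nodes):
        theta = 2 * np.pi * i / len(nodes) + theta0
        npos[node] = np.array([radius * np.cos(theta) + center[0],
                               radius * np.sin(theta) + center[1]])
```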
If you want to make a pull request that would be great. Otherwise I will make the fix. | 2019-12-28T01:40:56 |
networkx/networkx | 3,784 | networkx__networkx-3784 | [
"3703"
] | b6b0370dfb3282613d5827ec46cd514097154579 | diff --git a/networkx/generators/lattice.py b/networkx/generators/lattice.py
--- a/networkx/generators/lattice.py
+++ b/networkx/generators/lattice.py
@@ -126,7 +126,6 @@ def grid_graph(dim, periodic=False):
>>> len(G)
6
"""
- dlabel = "%s" % dim
if not dim:
return empty_graph(0)
| grid_graph says it takes a tuple, but will only take a list
Using the latest master, networkx.grid_graph works when passing in a list, but crashes when passing in a tuple:
```
$ ~/anaconda3/envs/python37env/bin/python --version
Python 3.7.4
$ ~/anaconda3/envs/python37env/bin/python -c "import networkx; networkx.grid_graph([10,5])"
$ ~/anaconda3/envs/python37env/bin/python -c "import networkx; networkx.grid_graph((10,5))"
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/networkx/networkx/generators/lattice.py", line 129, in grid_graph
dlabel = "%s" % dim
TypeError: not all arguments converted during string formatting
```
https://networkx.github.io/documentation/stable/reference/generated/networkx.generators.lattice.grid_graph.html
> dim (list or tuple of numbers...)
| That's a bug in grid_graph. The first line is ```dlabel = "%s" % dim``` and should be ```dlabel = "%s" % (dim,)```. But I'm not sure why that line is even there... ```dlabel``` is never used.
It looks like that line is a mistake that appeared 2 years ago when taking out the G.name attribute in the grid_graph and moving it to lattice.py.
We should simply remove that line.
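The underlying gotcha is Python's `%` formatting, which treats a bare tuple as an argument pack:
```python
"%s" % [10, 5]     # "[10, 5]" -- a list is a single argument
"%s" % (10, 5)     # TypeError: not all arguments converted ...
"%s" % ((10, 5),)  # "(10, 5)" -- wrapping restores a single argument
```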
As a workaround, you can use: `nx.grid_graph(((2,3),))`
|
networkx/networkx | 3,804 | networkx__networkx-3804 | [
"3736"
] | 7ebeb86907bddb80ddec3fd453c00a9b973140c3 | diff --git a/networkx/algorithms/centrality/__init__.py b/networkx/algorithms/centrality/__init__.py
--- a/networkx/algorithms/centrality/__init__.py
+++ b/networkx/algorithms/centrality/__init__.py
@@ -15,4 +15,5 @@
from .reaching import *
from .percolation import *
from .second_order import *
+from .trophic import *
from .voterank_alg import *
diff --git a/networkx/algorithms/centrality/trophic.py b/networkx/algorithms/centrality/trophic.py
new file mode 100644
--- /dev/null
+++ b/networkx/algorithms/centrality/trophic.py
@@ -0,0 +1,169 @@
+"""Trophic levels"""
+import networkx as nx
+
+from networkx.utils import not_implemented_for
+
+__all__ = ['trophic_levels', 'trophic_differences',
+ 'trophic_incoherence_parameter']
+
+
+@not_implemented_for('undirected')
+def trophic_levels(G, weight='weight'):
+ r"""Compute the trophic levels of nodes.
+
+ The trophic level of a node $i$ is
+
+ .. math::
+
+ s_i = 1 + \frac{1}{k^{in}_i} \sum_{j} a_{ij} s_j
+
+ where $k^{in}_i$ is the in-degree of node $i$
+
+ .. math::
+
+ k^{in}_i = \sum_{j} a_{ij}
+
+ and nodes with $k^{in}_i = 0$ have $s_i = 1$ by convention.
+
+ These are calculated using the method outlined in Levine [1]_.
+
+ Parameters
+ ----------
+ G : DiGraph
+ A directed networkx graph
+
+ Returns
+ -------
+ nodes : dict
+ Dictionary of nodes with trophic level as the value.
+
+ References
+ ----------
+ .. [1] Stephen Levine (1980) J. theor. Biol. 83, 195-207
+ """
+ try:
+ import numpy as np
+ except ImportError:
+ raise ImportError(
+ "trophic_levels() requires NumPy: http://scipy.org/")
+
+ # find adjacency matrix
+ a = nx.adjacency_matrix(G, weight=weight).T.toarray()
+
+ # drop rows/columns where in-degree is zero
+ rowsum = np.sum(a, axis=1)
+ p = a[rowsum != 0][:, rowsum != 0]
+ # normalise so sum of in-degree weights is 1 along each row
+ p = p / rowsum[rowsum != 0][:, np.newaxis]
+
+ # calculate trophic levels
+ nn = p.shape[0]
+ i = np.eye(nn)
+ try:
+ n = np.linalg.inv(i - p)
+ except np.linalg.LinAlgError as err:
+ # LinAlgError is raised when there is a non-basal node
+ msg = "Trophic levels are only defined for graphs where every " + \
+ "node has a path from a basal node (basal nodes are nodes " + \
+ "with no incoming edges)."
+ raise nx.NetworkXError(msg) from err
+ y = n.sum(axis=1) + 1
+
+ levels = {}
+
+ # all nodes with in-degree zero have trophic level == 1
+ zero_node_ids = (node_id for node_id, degree in G.in_degree if degree == 0)
+ for node_id in zero_node_ids:
+ levels[node_id] = 1
+
+ # all other nodes have levels as calculated
+ nonzero_node_ids = (node_id for node_id, degree in G.in_degree
+ if degree != 0)
+ for i, node_id in enumerate(nonzero_node_ids):
+ levels[node_id] = y[i]
+
+ return levels
+
+
+@not_implemented_for('undirected')
+def trophic_differences(G, weight='weight'):
+ r"""Compute the trophic differences of the edges of a directed graph.
+
+ The trophic difference $x_{ij}$ for each edge is defined in Johnson et al.
+ [1]_ as:
+
+ .. math::
+ x_{ij} = s_j - s_i
+
+ Where $s_i$ is the trophic level of node $i$.
+
+ Parameters
+ ----------
+ G : DiGraph
+ A directed networkx graph
+
+ Returns
+ -------
+ diffs : dict
+ Dictionary of edges with trophic differences as the value.
+
+ References
+ ----------
+ .. [1] Samuel Johnson, Virginia Dominguez-Garcia, Luca Donetti, Miguel A.
+ Munoz (2014) PNAS "Trophic coherence determines food-web stability"
+ """
+ levels = trophic_levels(G, weight=weight)
+ diffs = {}
+ for u, v in G.edges:
+ diffs[(u, v)] = levels[v] - levels[u]
+ return diffs
+
+
+@not_implemented_for('undirected')
+def trophic_incoherence_parameter(G, weight='weight', cannibalism=False):
+ r"""Compute the trophic incoherence parameter of a graph.
+
+ Trophic coherence is defined as the homogeneity of the distribution of
+ trophic distances: the more similar, the more coherent. This is measured by
+ the standard deviation of the trophic differences and referred to as the
+ trophic incoherence parameter $q$ by [1].
+
+ Parameters
+ ----------
+ G : DiGraph
+ A directed networkx graph
+
+ cannibalism: Boolean
+ If set to False, self edges are not considered in the calculation
+
+ Returns
+ -------
+ trophic_incoherence_parameter : float
+ The trophic coherence of a graph
+
+ References
+ ----------
+ .. [1] Samuel Johnson, Virginia Dominguez-Garcia, Luca Donetti, Miguel A.
+ Munoz (2014) PNAS "Trophic coherence determines food-web stability"
+ """
+ try:
+ import numpy as np
+ except ImportError:
+ raise ImportError(
+ "trophic_incoherence_parameter() requires NumPy: " +
+ "http://scipy.org/")
+
+ if cannibalism:
+ diffs = trophic_differences(G, weight=weight)
+ else:
+ # If no cannibalism, remove self-edges
+ self_loops = list(nx.selfloop_edges(G))
+ if self_loops:
+ # Make a copy so we do not change G's edges in memory
+ G_2 = G.copy()
+ G_2.remove_edges_from(self_loops)
+ else:
+ # Avoid copy otherwise
+ G_2 = G
+ diffs = trophic_differences(G_2, weight=weight)
+ return np.std(list(diffs.values()))
| diff --git a/networkx/algorithms/centrality/tests/test_trophic.py b/networkx/algorithms/centrality/tests/test_trophic.py
new file mode 100644
--- /dev/null
+++ b/networkx/algorithms/centrality/tests/test_trophic.py
@@ -0,0 +1,285 @@
+"""Test trophic levels, trophic differences and trophic coherence
+"""
+import pytest
+np = pytest.importorskip('numpy')
+
+import networkx as nx
+from networkx.testing import almost_equal
+
+
+def test_trophic_levels():
+ """Trivial example
+ """
+ G = nx.DiGraph()
+ G.add_edge("a", "b")
+ G.add_edge("b", "c")
+
+ d = nx.trophic_levels(G)
+ assert d == {"a": 1, "b": 2, "c": 3}
+
+
+def test_trophic_levels_levine():
+ """Example from Figure 5 in Stephen Levine (1980) J. theor. Biol. 83,
+ 195-207
+ """
+ S = nx.DiGraph()
+ S.add_edge(1, 2, weight=1.0)
+ S.add_edge(1, 3, weight=0.2)
+ S.add_edge(1, 4, weight=0.8)
+ S.add_edge(2, 3, weight=0.2)
+ S.add_edge(2, 5, weight=0.3)
+ S.add_edge(4, 3, weight=0.6)
+ S.add_edge(4, 5, weight=0.7)
+ S.add_edge(5, 4, weight=0.2)
+
+ # save copy for later, test intermediate implementation details first
+ S2 = S.copy()
+
+ # drop nodes of in-degree zero
+ z = [nid for nid, d in S.in_degree if d == 0]
+ for nid in z:
+ S.remove_node(nid)
+
+ # find adjacency matrix
+ q = nx.linalg.graphmatrix.adjacency_matrix(S).T
+
+ expected_q = np.array([
+ [0, 0, 0., 0],
+ [0.2, 0, 0.6, 0],
+ [0, 0, 0, 0.2],
+ [0.3, 0, 0.7, 0]
+ ])
+ assert np.array_equal(q.todense(), expected_q)
+
+ # must be square, size of number of nodes
+ assert len(q.shape) == 2
+ assert q.shape[0] == q.shape[1]
+ assert q.shape[0] == len(S)
+
+ nn = q.shape[0]
+
+ i = np.eye(nn)
+ n = np.linalg.inv(i - q)
+ y = np.dot(np.asarray(n), np.ones(nn))
+
+ expected_y = np.array([1, 2.07906977, 1.46511628, 2.3255814])
+ assert np.allclose(y, expected_y)
+
+ expected_d = {
+ 1: 1,
+ 2: 2,
+ 3: 3.07906977,
+ 4: 2.46511628,
+ 5: 3.3255814
+ }
+
+ d = nx.trophic_levels(S2)
+
+ for nid, level in d.items():
+ expected_level = expected_d[nid]
+ assert almost_equal(expected_level, level)
+
+
+def test_trophic_levels_simple():
+ matrix_a = np.array([[0, 0], [1, 0]])
+ G = nx.from_numpy_matrix(matrix_a, create_using=nx.DiGraph)
+ d = nx.trophic_levels(G)
+ assert almost_equal(d[0], 2)
+ assert almost_equal(d[1], 1)
+
+
+def test_trophic_levels_more_complex():
+ matrix = np.array([
+ [0, 1, 0, 0],
+ [0, 0, 1, 0],
+ [0, 0, 0, 1],
+ [0, 0, 0, 0]
+ ])
+ G = nx.from_numpy_matrix(matrix, create_using=nx.DiGraph)
+ d = nx.trophic_levels(G)
+ expected_result = [1, 2, 3, 4]
+ for ind in range(4):
+ assert almost_equal(d[ind], expected_result[ind])
+
+ matrix = np.array([
+ [0, 1, 1, 0],
+ [0, 0, 1, 1],
+ [0, 0, 0, 1],
+ [0, 0, 0, 0]
+ ])
+ G = nx.from_numpy_matrix(matrix, create_using=nx.DiGraph)
+ d = nx.trophic_levels(G)
+
+ expected_result = [1, 2, 2.5, 3.25]
+ print("Calculated result: ", d)
+ print("Expected Result: ", expected_result)
+
+ for ind in range(4):
+ assert almost_equal(d[ind], expected_result[ind])
+
+
+def test_trophic_levels_even_more_complex():
+ # Another, bigger matrix
+ matrix = np.array([
+ [0, 0, 0, 0, 0],
+ [0, 1, 0, 1, 0],
+ [1, 0, 0, 0, 0],
+ [0, 1, 0, 0, 0],
+ [0, 0, 0, 1, 0]
+ ])
+
+ # Generated this linear system using pen and paper:
+ K = np.array([
+ [1, 0, -1, 0, 0],
+ [0, 0.5, 0, -0.5, 0],
+ [0, 0, 1, 0, 0],
+ [0, -0.5, 0, 1, -0.5],
+ [0, 0, 0, 0, 1],
+ ])
+ result_1 = np.ravel(np.matmul(np.linalg.inv(K), np.ones(5)))
+ G = nx.from_numpy_matrix(matrix, create_using=nx.DiGraph)
+ result_2 = nx.trophic_levels(G)
+
+ for ind in range(5):
+ assert almost_equal(result_1[ind], result_2[ind])
+
+
+def test_trophic_levels_singular_matrix():
+ """Should raise an error with graphs with only non-basal nodes
+ """
+ matrix = np.identity(4)
+ G = nx.from_numpy_matrix(matrix, create_using=nx.DiGraph)
+ with pytest.raises(nx.NetworkXError) as e:
+ nx.trophic_levels(G)
+ msg = "Trophic levels are only defined for graphs where every node " + \
+ "has a path from a basal node (basal nodes are nodes with no " + \
+ "incoming edges)."
+ assert msg in str(e.value)
+
+
+def test_trophic_levels_singular_with_basal():
+ """Should fail to compute if there are any parts of the graph which are not
+ reachable from any basal node (with in-degree zero).
+ """
+ G = nx.DiGraph()
+ # a has in-degree zero
+ G.add_edge('a', 'b')
+
+ # b is one level above a, c and d
+ G.add_edge('c', 'b')
+ G.add_edge('d', 'b')
+
+ # c and d form a loop, neither are reachable from a
+ G.add_edge('c', 'd')
+ G.add_edge('d', 'c')
+
+ with pytest.raises(nx.NetworkXError) as e:
+ nx.trophic_levels(G)
+ msg = "Trophic levels are only defined for graphs where every node " + \
+ "has a path from a basal node (basal nodes are nodes with no " + \
+ "incoming edges)."
+ assert msg in str(e.value)
+
+ # if self-loops are allowed, smaller example:
+ G = nx.DiGraph()
+ G.add_edge('a', 'b') # a has in-degree zero
+ G.add_edge('c', 'b') # b is one level above a and c
+ G.add_edge('c', 'c') # c has a self-loop
+ with pytest.raises(nx.NetworkXError) as e:
+ nx.trophic_levels(G)
+ msg = "Trophic levels are only defined for graphs where every node " + \
+ "has a path from a basal node (basal nodes are nodes with no " + \
+ "incoming edges)."
+ assert msg in str(e.value)
+
+
+def test_trophic_differences():
+ matrix_a = np.array([[0, 1], [0, 0]])
+ G = nx.from_numpy_matrix(matrix_a, create_using=nx.DiGraph)
+ diffs = nx.trophic_differences(G)
+ assert almost_equal(diffs[(0, 1)], 1)
+
+ matrix_b = np.array([
+ [0, 1, 1, 0],
+ [0, 0, 1, 1],
+ [0, 0, 0, 1],
+ [0, 0, 0, 0]
+ ])
+ G = nx.from_numpy_matrix(matrix_b, create_using=nx.DiGraph)
+ diffs = nx.trophic_differences(G)
+
+ assert almost_equal(diffs[(0, 1)], 1)
+ assert almost_equal(diffs[(0, 2)], 1.5)
+ assert almost_equal(diffs[(1, 2)], 0.5)
+ assert almost_equal(diffs[(1, 3)], 1.25)
+ assert almost_equal(diffs[(2, 3)], 0.75)
+
+
+def test_trophic_incoherence_parameter_no_cannibalism():
+ matrix_a = np.array([[0, 1], [0, 0]])
+ G = nx.from_numpy_matrix(matrix_a, create_using=nx.DiGraph)
+ q = nx.trophic_incoherence_parameter(G, cannibalism=False)
+ assert almost_equal(q, 0)
+
+ matrix_b = np.array([
+ [0, 1, 1, 0],
+ [0, 0, 1, 1],
+ [0, 0, 0, 1],
+ [0, 0, 0, 0]
+ ])
+ G = nx.from_numpy_matrix(matrix_b, create_using=nx.DiGraph)
+ q = nx.trophic_incoherence_parameter(G, cannibalism=False)
+ assert almost_equal(q, np.std([1, 1.5, 0.5, 0.75, 1.25]))
+
+ matrix_c = np.array([
+ [0, 1, 1, 0],
+ [0, 1, 1, 1],
+ [0, 0, 0, 1],
+ [0, 0, 0, 1]
+ ])
+ G = nx.from_numpy_matrix(matrix_c, create_using=nx.DiGraph)
+ q = nx.trophic_incoherence_parameter(G, cannibalism=False)
+ # Ignore the self-link
+ assert almost_equal(q, np.std([1, 1.5, 0.5, 0.75, 1.25]))
+
+ # no self-loops case
+ matrix_d = np.array([
+ [0, 1, 1, 0],
+ [0, 0, 1, 1],
+ [0, 0, 0, 1],
+ [0, 0, 0, 0]
+ ])
+ G = nx.from_numpy_matrix(matrix_d, create_using=nx.DiGraph)
+ q = nx.trophic_incoherence_parameter(G, cannibalism=False)
+ # Ignore the self-link
+ assert almost_equal(q, np.std([1, 1.5, 0.5, 0.75, 1.25]))
+
+
+def test_trophic_incoherence_parameter_cannibalism():
+ matrix_a = np.array([[0, 1], [0, 0]])
+ G = nx.from_numpy_matrix(matrix_a, create_using=nx.DiGraph)
+ q = nx.trophic_incoherence_parameter(G, cannibalism=True)
+ assert almost_equal(q, 0)
+
+ matrix_b = np.array([
+ [0, 0, 0, 0, 0],
+ [0, 1, 0, 1, 0],
+ [1, 0, 0, 0, 0],
+ [0, 1, 0, 0, 0],
+ [0, 0, 0, 1, 0]
+ ])
+ G = nx.from_numpy_matrix(matrix_b, create_using=nx.DiGraph)
+ q = nx.trophic_incoherence_parameter(G, cannibalism=True)
+ assert almost_equal(q, 2)
+
+ matrix_c = np.array([
+ [0, 1, 1, 0],
+ [0, 0, 1, 1],
+ [0, 0, 0, 1],
+ [0, 0, 0, 0]
+ ])
+ G = nx.from_numpy_matrix(matrix_c, create_using=nx.DiGraph)
+ q = nx.trophic_incoherence_parameter(G, cannibalism=True)
+ # Ignore the self-link
+ assert almost_equal(q, np.std([1, 1.5, 0.5, 0.75, 1.25]))
| Trophic levels (feature suggestion)
Would it be interesting to include an algorithm to calculate trophic levels?
Related notions which could be included are trophic difference and trophic coherence - see Wikipedia [1] and Levine [2].
This may be a duplicate of #577 from 2012, which was closed due to inactivity.
I've made an initial attempt over at [3], adding a module at `algorithms.centrality.trophic`.
1. https://en.wikipedia.org/wiki/Trophic_coherence
2. Stephen Levine (1980) Several measures of trophic structure applicable to complex food webs. J. theor. Biol. 83, 2, 195-207. https://doi.org/10.1016/0022-5193(80)90288-X
3. https://github.com/tomalrussell/networkx/commit/e715def4abe0a184c0934aa7d6a886b6d5ffe705
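For a quick sense of the proposed API, a sketch using the functions from the attempt in [3] (assuming that branch is installed):
```python
import networkx as nx

G = nx.DiGraph([("a", "b"), ("b", "c")])  # a simple food chain
print(nx.trophic_levels(G))                 # {'a': 1, 'b': 2, 'c': 3}
print(nx.trophic_differences(G))            # {('a', 'b'): 1, ('b', 'c'): 1}
print(nx.trophic_incoherence_parameter(G))  # 0.0 -- perfectly coherent
```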
| 2020-01-30T10:04:18 |