repo (string, 856 classes) | pull_number (int64, 3–127k) | instance_id (string, length 12–58) | issue_numbers (sequence, length 1–5) | base_commit (string, length 40) | patch (string, length 67–1.54M) | test_patch (string, length 0–107M) | problem_statement (string, length 3–307k) | hints_text (string, length 0–908k) | created_at (timestamp[s])
---|---|---|---|---|---|---|---|---|---
networkx/networkx | 5,576 | networkx__networkx-5576 | [
"5566"
] | e5c42d70afdf9f4df1878796bc4369b90bb8d68c | diff --git a/networkx/classes/function.py b/networkx/classes/function.py
--- a/networkx/classes/function.py
+++ b/networkx/classes/function.py
@@ -385,9 +385,11 @@ def induced_subgraph(G, nbunch):
Examples
--------
>>> G = nx.path_graph(4) # or DiGraph, MultiGraph, MultiDiGraph, etc
- >>> H = G.subgraph([0, 1, 2])
+ >>> H = nx.induced_subgraph(G, [0, 1, 3])
>>> list(H.edges)
- [(0, 1), (1, 2)]
+ [(0, 1)]
+ >>> list(H.nodes)
+ [0, 1, 3]
"""
induced_nodes = nx.filters.show_nodes(G.nbunch_iter(nbunch))
return nx.graphviews.subgraph_view(G, induced_nodes)
| induced_subgraph docs show unrelated example
The example on the documentation for induced_subgraph [here](https://networkx.org/documentation/stable/reference/generated/networkx.classes.function.induced_subgraph.html) shows an example of a generic subgraph function, not the induced_subgraph function.
### Current Behavior
Currently, the doc page in question does not show an example of the function it documents. It shows an example of the `subgraph()` method instead of the `induced_subgraph()` function. Perhaps this is intentional, but it seems wrong.
### Expected Behavior
I expect the documentation example to use the function being documented (in this case, a call to `induced_subgraph` somewhere on the page).
### Environment
Python version: 3.7
NetworkX version: 2.8
| If this is something that should be fixed, I'm happy to take this on! Reading a few other issues, I'm not sure if this may be better placed in this repo (https://github.com/networkx/nx-guides), so just lmk and I can close/reopen there/whatever the proper channel is.
This is a bit tricky... if you look at the implementation of the `subgraph` method (i.e. `G.subgraph`) it looks like it returns the induced subgraph. I agree that the docstring for `induced_subgraph` should probably show the function itself being used though, and perhaps mention that the subgraph method of graphs also returns the induced subgraph.
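A quick sketch of that point (the two calls return the same induced subgraph):
```python
>>> G = nx.path_graph(4)
>>> list(G.subgraph([0, 1, 3]).edges) == list(nx.induced_subgraph(G, [0, 1, 3]).edges)
True
```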
Agree. I can give this one a shot and put a PR up in the next few days. | 2022-04-24T22:45:34 |
|
networkx/networkx | 5,583 | networkx__networkx-5583 | [
"4485"
] | 12c1a00cd116701a763f7c57c230b8739d2ed085 | diff --git a/networkx/algorithms/operators/binary.py b/networkx/algorithms/operators/binary.py
--- a/networkx/algorithms/operators/binary.py
+++ b/networkx/algorithms/operators/binary.py
@@ -309,6 +309,35 @@ def compose(G, H):
NodeView((0, 1, 2))
>>> R.edges
EdgeView([(0, 1), (0, 2), (1, 2)])
+
+ By default, the attributes from `H` take precedence over attributes from `G`.
+ If you prefer another way of combining attributes, you can update them after the compose operation:
+
+ >>> G = nx.Graph([(0, 1, {'weight': 2.0}), (3, 0, {'weight': 100.0})])
+ >>> H = nx.Graph([(0, 1, {'weight': 10.0}), (1, 2, {'weight': -1.0})])
+ >>> nx.set_node_attributes(G, {0: 'dark', 1: 'light', 3: 'black'}, name='color')
+ >>> nx.set_node_attributes(H, {0: 'green', 1: 'orange', 2: 'yellow'}, name='color')
+ >>> GcomposeH = nx.compose(G, H)
+
+ Normally, color attribute values of nodes of GcomposeH come from H. We can work around this as follows:
+
+ >>> node_data = {n: G.nodes[n]['color'] + " " + H.nodes[n]['color'] for n in G.nodes & H.nodes}
+ >>> nx.set_node_attributes(GcomposeH, node_data, 'color')
+ >>> print(GcomposeH.nodes[0]['color'])
+ dark green
+
+ >>> print(GcomposeH.nodes[3]['color'])
+ black
+
+ Similarly, we can update edge attributes after the compose operation in a way we prefer:
+
+ >>> edge_data = {e: G.edges[e]['weight'] * H.edges[e]['weight'] for e in G.edges & H.edges}
+ >>> nx.set_edge_attributes(GcomposeH, edge_data, 'weight')
+ >>> print(GcomposeH.edges[(0, 1)]['weight'])
+ 20.0
+
+ >>> print(GcomposeH.edges[(3, 0)]['weight'])
+ 100.0
"""
return nx.compose_all([G, H])
| Have `compose` offer an option to sum attributes.
This will need a bit of thought about how best to handle this...
Right now in `compose` where there are nodes with the same name or edges between the same nodes in both graphs and the attributes are in conflict, it takes the attributes of one of them over the other.
I can imagine many scenarios where we might want to sum the attributes if they are numeric. I think it would be good to include an optional flag to allow summing of node or edge attributes.
| Wouldn’t that complicate the composition operation? It is defined as “the simple union of the node sets and edge sets” in [binary operations](https://github.com/networkx/networkx/blob/master/networkx/algorithms/operators/binary.py).
There's some choice about how to interpret the union of the two graphs both with edges 1-2 and in one case the weight is 1 and the other the weight is 2.
How does one do a simple union of this? What does the "simple union" mean when two edges have different weights?
Right now the algorithm chooses one of them.
Is that the right result? I think it will depend on context. But I can definitely think of contexts where we'll want to do the sum.
While I would agree the ability to perform any binary operation on the weights of two like edges would be useful, I also think edge weight operations don’t belong in the same algorithm that takes their union.
A union of two sets not disjoint would behave like they do in this function (“losing” elements from one set or the other), so I would argue this operation behaves as expected. `Compose` produces a sort of non-commutative union but then that is something that should be handled outside of the operation in my opinion (e.g. making an isomorphic graph for preserving vertices/edges before calling).
I guess the current workaround is something like the following.
```python
GcomposeH = nx.compose(G, H)
node_data = {n: G.nodes[n].get('fun') + H.nodes[n].get('fun') for n in G.nodes & H.nodes}
nx.set_node_attributes(GcomposeH, node_data, 'fun')
edge_data = {e: G.edges[e].get('weight') + H.edges[e].get('weight') for e in G.edges & H.edges}
nx.set_edge_attributes(GcomposeH, edge_data, 'weight')
```
It seems like it would be hard to generalize the combining of attribute dictionaries.
For example, do we sum attribute 'fun' but multiply attribute 'goodfun' and ignore another?
Perhaps an example like this would help people construct their own though.
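For instance, a hypothetical mix of rules built on the same pattern (the attribute names `'fun'` and `'goodfun'` are just placeholders):
```python
GcomposeH = nx.compose(G, H)
common = G.nodes & H.nodes
# Sum 'fun' but multiply 'goodfun' for nodes present in both graphs
nx.set_node_attributes(
    GcomposeH, {n: G.nodes[n]['fun'] + H.nodes[n]['fun'] for n in common}, 'fun'
)
nx.set_node_attributes(
    GcomposeH, {n: G.nodes[n]['goodfun'] * H.nodes[n]['goodfun'] for n in common}, 'goodfun'
)
```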
Hello @dschult , I want to work on this issue if it is still needed.
Hello @dtekinoglu , I think what is needed here is an example in the doc_string similar to the "workaround" above that shows users how to update their edge attributes after the `compose` operation.
Thanks!! | 2022-04-26T15:45:00 |
|
networkx/networkx | 5,616 | networkx__networkx-5616 | [
"5594"
] | bc7ace58c872d527475c09345f89579ff82e4c5d | diff --git a/networkx/algorithms/euler.py b/networkx/algorithms/euler.py
--- a/networkx/algorithms/euler.py
+++ b/networkx/algorithms/euler.py
@@ -23,6 +23,11 @@ def is_eulerian(G):
circuit* is a closed walk that includes each edge of a graph exactly
once.
+ Graphs with isolated vertices (i.e. vertices with zero degree) are not
+ considered to have Eulerian circuits. Therefore, if the graph is not
+ connected (or not strongly connected, for directed graphs), this function
+ returns False.
+
Parameters
----------
G : NetworkX graph
@@ -37,10 +42,18 @@ def is_eulerian(G):
>>> nx.is_eulerian(nx.petersen_graph())
False
- Notes
- -----
- If the graph is not connected (or not strongly connected, for
- directed graphs), this function returns False.
+ If you prefer to allow graphs with isolated vertices to have Eulerian circuits,
+ you can first remove such vertices and then call `is_eulerian`, as the example below shows.
+
+ >>> G = nx.Graph([(0, 1), (1, 2), (0, 2)])
+ >>> G.add_node(3)
+ >>> nx.is_eulerian(G)
+ False
+
+ >>> G.remove_nodes_from(list(nx.isolates(G)))
+ >>> nx.is_eulerian(G)
+ True
+
"""
if G.is_directed():
@@ -58,6 +71,11 @@ def is_semieulerian(G):
"""Return True iff `G` is semi-Eulerian.
G is semi-Eulerian if it has an Eulerian path but no Eulerian circuit.
+
+ See Also
+ --------
+ has_eulerian_path
+ is_eulerian
"""
return has_eulerian_path(G) and not is_eulerian(G)
@@ -224,8 +242,8 @@ def has_eulerian_path(G, source=None):
- at most one vertex has out_degree - in_degree = 1,
- at most one vertex has in_degree - out_degree = 1,
- every other vertex has equal in_degree and out_degree,
- - and all of its vertices with nonzero degree belong to a
- single connected component of the underlying undirected graph.
+ - and all of its vertices belong to a single connected
+ component of the underlying undirected graph.
If `source` is not None, an Eulerian path starting at `source` exists if no
other node has out_degree - in_degree = 1. This is equivalent to either
@@ -234,13 +252,16 @@ def has_eulerian_path(G, source=None):
An undirected graph has an Eulerian path iff:
- exactly zero or two vertices have odd degree,
- - and all of its vertices with nonzero degree belong to a
- - single connected component.
+ - and all of its vertices belong to a single connected component.
If `source` is not None, an Eulerian path starting at `source` exists if
either there exists an Eulerian circuit or `source` has an odd degree and the
conditions above hold.
+ Graphs with isolated vertices (i.e. vertices with zero degree) are not considered
+ to have an Eulerian path. Therefore, if the graph is not connected (or not strongly
+ connected, for directed graphs), this function returns False.
+
Parameters
----------
G : NetworkX Graph
@@ -253,6 +274,20 @@ def has_eulerian_path(G, source=None):
-------
Bool : True if G has an Eulerian path.
+ Example
+ -------
+ If you prefer to allow graphs with isolated vertices to have Eulerian path,
+ you can first remove such vertices and then call `has_eulerian_path`, as the example below shows.
+
+ >>> G = nx.Graph([(0, 1), (1, 2), (0, 2)])
+ >>> G.add_node(3)
+ >>> nx.has_eulerian_path(G)
+ False
+
+ >>> G.remove_nodes_from(list(nx.isolates(G)))
+ >>> nx.has_eulerian_path(G)
+ True
+
See Also
--------
is_eulerian
@@ -262,11 +297,6 @@ def has_eulerian_path(G, source=None):
return True
if G.is_directed():
- # Remove isolated nodes (if any) without altering the input graph
- nodes_remove = [v for v in G if G.in_degree[v] == 0 and G.out_degree[v] == 0]
- if nodes_remove:
- G = G.copy()
- G.remove_nodes_from(nodes_remove)
ins = G.in_degree
outs = G.out_degree
# Since we know it is not eulerian, outs - ins must be 1 for source
| diff --git a/networkx/algorithms/tests/test_euler.py b/networkx/algorithms/tests/test_euler.py
--- a/networkx/algorithms/tests/test_euler.py
+++ b/networkx/algorithms/tests/test_euler.py
@@ -140,15 +140,14 @@ def test_has_eulerian_path_directed_graph(self):
G.add_edges_from([(0, 1), (1, 2), (0, 2)])
assert not nx.has_eulerian_path(G)
- def test_has_eulerian_path_isolated_node(self):
# Test directed graphs without isolated node returns True
G = nx.DiGraph()
G.add_edges_from([(0, 1), (1, 2), (2, 0)])
assert nx.has_eulerian_path(G)
- # Test directed graphs with isolated node returns True
+ # Test directed graphs with isolated node returns False
G.add_node(3)
- assert nx.has_eulerian_path(G)
+ assert not nx.has_eulerian_path(G)
@pytest.mark.parametrize("G", (nx.Graph(), nx.DiGraph()))
def test_has_eulerian_path_not_weakly_connected(self, G):
| Inconsistent implementation of Euler algorithms
According to my research, having an Euler Circuit and/or Euler Path does not require a graph to be connected. Instead, it is enough that all of its vertices **with nonzero degree** belong to a single connected component. Nevertheless, there are some resources which state that a graph can have an Euler Path and/or Euler Circuit iff it is connected.
In the doc_string, it is stated that the `has_eulerian_path` method follows the first convention.
```
A directed graph has an Eulerian path iff:
- at most one vertex has out_degree - in_degree = 1,
- at most one vertex has in_degree - out_degree = 1,
- every other vertex has equal in_degree and out_degree,
- and all of its vertices with nonzero degree belong to a single connected component of the underlying undirected graph.
An undirected graph has an Eulerian path iff:
- exactly zero or two vertices have odd degree,
- and all of its vertices with nonzero degree belong to a single connected component.
```
However, when we look at the source code, we see that it weirdly requires connectedness for undirected graphs but not for directed ones. Here is the related part:
```
if G.is_directed():
# Remove isolated nodes (if any) without altering the input graph
nodes_remove = [v for v in G if G.in_degree[v] == 0 and G.out_degree[v] == 0]
if nodes_remove:
G = G.copy()
G.remove_nodes_from(nodes_remove)
```
If the graph is directed, it removes the isolated nodes. But in the final check, it tests connectedness both for directed graphs (with isolated nodes already removed) and for undirected graphs.
```
return (
unbalanced_ins <= 1 and unbalanced_outs <= 1 and nx.is_weakly_connected(G)
)
else:
# We know it is not eulerian, so degree of source must be odd.
if source is not None and G.degree[source] % 2 != 1:
return False
# Sum is 2 since we know it is not eulerian (which implies sum is 0)
return sum(d % 2 == 1 for v, d in G.degree()) == 2 and nx.is_connected(G)
```
As a result, the following examples return different results:
```
>>> G = nx.DiGraph([(0, 1), (1, 2), (2, 0)])
>>> G.add_node(3)
>>> nx.draw(G)
>>> nx.has_eulerian_path(G)
True
```

```
>>> G = nx.Graph([(0, 1), (1, 2), (0, 2)])
>>> G.add_node(3)
>>> nx.draw(G)
>>> nx.has_eulerian_path(G)
False
```

On the other hand, the `is_eulerian` method requires connectedness for both graph types. IMO, there is an inconsistency in the source code. If the first approach is to be accepted, it should be implemented both for Euler Paths and Euler Circuits, since an Euler Circuit is nothing but an Euler Path that starts and ends at the same vertex. If you think that connectedness must be a condition, it should be implemented for both directed and undirected graph types.
@rossbar @dschult @MridulS
| I looked at the "blame" history of this file (available from a button at the top right of the "view file" screen which is available at the upper right of each file on the PRs diff page).
From the beginning, we have defined eulerian and the eulerian_circuit code to rule out isolated nodes. That is the eulerian circuit must visit all nodes and cross all edges.
In PR #3399 we added functions for `eulerian_path` where the doc_string says the non-isolated nodes must form a connected component, but the code itself only checked for a connected graph -- meaning that the isolated nodes were ruled out.
Then in PR #4246 we made some changes to allow a keyword argument `source` and the code with that change added a line to remove the isolated nodes from G in the directed case only -- probably to make that part of the code agree with the doc_strings.
I agree with your assessment. The code in these functions is inconsistent between directed and undirected and also does not agree with the doc_strings. It appears that the eulerian_circuit code is consistent, but does not agree with how the eulerian_path code (sometimes) handles isolated nodes.
We need to decide whether we want to rule out isolated nodes from our definitions of eulerian...
Most definitions seem to allow isolated nodes. Our original docs and code ruled out isolated nodes. And most of our functions continue to work this way. The doc_string for eulerian_path does not, but the code does for undirected graphs. The only code that doesn't is for directed graphs.
Why would you want to rule out isolated nodes? It means you don't have to check for and remove isolated nodes for every call to one of these eulerian functions. And since isolated nodes are not really relevant for the concept of eulerian, users can safely remove them before investigating the eulerian nature of the graph. So, if we include a note in the doc_strings alerting the user, it should work fine. And in our case, backward compatibility for most of our routines calls for handling isolated nodes as *not* eulerian. But this is probably not a highly used scenario -- so the backward compatibility is likely to not be an issue.
Why would you not want to rule out isolated nodes? The concept of eulerian is about edges, not nodes. So it is cleaner from a theoretical perspective to allow isolated nodes so you can ignore whether they are there or not. This argument fails when you realize that you want to rule out graphs that have two components each of which has an eulerian circuit. Then you need the graph to have a single connected component (for all nodes of nonzero degree). So, in fact you can't ignore nodes completely.
Thoughts?
After re-reading this with a couple days for perspective, I think we should stick with excluding isolated nodes from our definition of eulerian circuit and eulerian path. This should be well documented in each function, with an example in some main function(s) of how to remove isolated nodes for people who want the other definition.
The desire to allow isolated nodes is driven by a mistaken notion that we can then avoid considering nodes — only edges. That view is mistaken because we need to ensure that the non-isolated nodes form a single component. So you do have to consider the nodes when you exclude them from the single component part of the definition. I feel it is cleaner to use the definition that disallows isolated nodes, and provide info on how to pre-process a graph if users want to allow isolated nodes. This choice also provides better backward compatibility than other options to make our code consistent.
So, what needs to be done:
- change the path docstring(s) to make it clear that we don’t consider graphs with isolated nodes to have an eulerian path.
- Provide an example in key functions for how to remove isolated nodes if desired: `G.remove_nodes_from(list(nx.isolates(G)))` (written out below)
- Change the directed code within eulerian_path to no longer remove isolated nodes.
- Add a release note that this behavior has changed (in `doc/release/release_dev.rst`)
Will this make the module consistent in its handling of isolated nodes? Am I missing anything?
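For reference, the preprocessing from the second bullet, written out (this mirrors the doctest the patch above adds to `is_eulerian`):
```python
>>> G = nx.Graph([(0, 1), (1, 2), (0, 2)])
>>> G.add_node(3)
>>> nx.is_eulerian(G)
False
>>> G.remove_nodes_from(list(nx.isolates(G)))
>>> nx.is_eulerian(G)
True
```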
Thank you, @dschult I am starting to work on the sub-tasks you stated. Meanwhile, I will also continue to check this thread to see if any addition/change is made with respect to what needs to be done. | 2022-05-08T20:57:34 |
networkx/networkx | 5,696 | networkx__networkx-5696 | [
"4774"
] | f8b33d8b08666277b5a2e44fb1cd50157fa404f3 | diff --git a/networkx/algorithms/isomorphism/ismags.py b/networkx/algorithms/isomorphism/ismags.py
--- a/networkx/algorithms/isomorphism/ismags.py
+++ b/networkx/algorithms/isomorphism/ismags.py
@@ -579,17 +579,34 @@ def largest_common_subgraph(self, symmetry=True):
def analyze_symmetry(self, graph, node_partitions, edge_colors):
"""
Find a minimal set of permutations and corresponding co-sets that
- describe the symmetry of :attr:`subgraph`.
+ describe the symmetry of `graph`, given the node and edge equalities
+ given by `node_partitions` and `edge_colors`, respectively.
+
+ Parameters
+ ----------
+ graph : networkx.Graph
+ The graph whose symmetry should be analyzed.
+ node_partitions : list of sets
+ A list of sets containing node keys. Node keys in the same set
+ are considered equivalent. Every node key in `graph` should be in
+ exactly one of the sets. If all nodes are equivalent, this should
+ be ``[set(graph.nodes)]``.
+ edge_colors : dict mapping edges to their colors
+ A dict mapping every edge in `graph` to its corresponding color.
+ Edges with the same color are considered equivalent. If all edges
+ are equivalent, this should be ``{e: 0 for e in graph.edges}``.
+
Returns
-------
set[frozenset]
- The found permutations. This is a set of frozenset of pairs of node
+ The found permutations. This is a set of frozensets of pairs of node
keys which can be exchanged without changing :attr:`subgraph`.
dict[collections.abc.Hashable, set[collections.abc.Hashable]]
- The found co-sets. The co-sets is a dictionary of {node key:
- set of node keys}. Every key-value pair describes which `values`
- can be interchanged without changing nodes less than `key`.
+ The found co-sets. The co-sets is a dictionary of
+ ``{node key: set of node keys}``.
+ Every key-value pair describes which ``values`` can be interchanged
+ without changing nodes less than ``key``.
"""
if self._symmetry_cache is not None:
key = hash(
| Review ISMAGS methods
Related to discussion in #4761
Some of the methods of the [ISMAGS class](https://github.com/networkx/networkx/blob/d70b314b37168f0ea7c5b0d7f9ff61d73232747b/networkx/algorithms/isomorphism/ismags.py#L227) are poorly documented; for example, methods like [`analyze_symmetry`](https://github.com/networkx/networkx/blob/d70b314b37168f0ea7c5b0d7f9ff61d73232747b/networkx/algorithms/isomorphism/ismags.py#L579-L593) are missing a detailed parameter description in the docstring. There is also some question (see #4761) as to whether this particular method was ever intended to be public in the first place.
At the very least, the documentation for `analyze_symmetry` in particular (and some of the other methods) should be improved to explain the parameters in the signature more clearly.
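For context, a minimal sketch of calling the method under discussion, using the "all nodes equivalent, all edges one color" arguments that the patch above documents:
```python
import networkx as nx

graph = nx.path_graph(4)
ismags = nx.isomorphism.ISMAGS(graph, graph)
# One partition containing every node, and a single color for every edge,
# per the parameter descriptions in the patch above
permutations, cosets = ismags.analyze_symmetry(
    graph, [set(graph.nodes)], {e: 0 for e in graph.edges}
)
```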
| Oops :) Sorry about that.
I'll leave it up to you whether it's valuable to keep `analyze_symmetry` public.
Either way, the docstring as-is is indeed wrong/insufficient.
Maybe:
```
def analyze_symmetry(self, graph, node_partitions, edge_colors):
"""
Find a minimal set of permutations and corresponding co-sets that
describe the symmetry of `graph`, given the node and edge equalities
given by `node_partitions` and `edge_colors`, respectively.
Parameters
----------
graph: networkx.Graph
The graph whose symmetry should be analyzed
node_partitions: list[set]
A list of sets containing node keys. Node keys in the same set are
considered equivalent. Every node key in `graph` should be in exactly
one of the sets. If all nodes are equivalent, this should be `[set(graph.nodes)]`
See also: :func:`make_partitions`.
edge_colors: dict[collections.abc.Hashable, collections.abc.Hashable]
A dict mapping every edge in `graph` as tuple of node keys (e.g. `(n1, n2)`)
to a "color" (usually an integer). Edges with the same color are considered
equivalent. If all edges are equivalent, this should be `{e: 0 for e in graph.edges}`.
See also: :func:`partition_to_color`.
Returns
-------
set[frozenset]
The found permutations. This is a set of frozensets of pairs of node
keys which can be exchanged without changing :attr:`subgraph`.
dict[collections.abc.Hashable, set[collections.abc.Hashable]]
The found co-sets. The co-sets is a dictionary of {node key:
set of node keys}. Every key-value pair describes which `values`
can be interchanged without changing nodes less than `key`.
"""
```
?
It may also be an idea to simplify this a little bit by making it accept edge_partitions instead of edge_colors, but again, I'll leave that up to you.
Unfortunately I don't have the time to open a PR for this :(
I also found a bug in my ISMAGS implementation (with fix), but I'll open a separate issue (#4915) for that. | 2022-06-07T12:01:38 |
|
networkx/networkx | 5,697 | networkx__networkx-5697 | [
"4729"
] | f8b33d8b08666277b5a2e44fb1cd50157fa404f3 | diff --git a/networkx/conftest.py b/networkx/conftest.py
--- a/networkx/conftest.py
+++ b/networkx/conftest.py
@@ -236,6 +236,10 @@ def set_warnings():
)
warnings.filterwarnings("ignore", category=DeprecationWarning, message="info")
warnings.filterwarnings("ignore", category=DeprecationWarning, message="to_tuple")
+ # create_using for scale_free_graph
+ warnings.filterwarnings(
+ "ignore", category=DeprecationWarning, message="The create_using argument"
+ )
@pytest.fixture(autouse=True)
diff --git a/networkx/generators/directed.py b/networkx/generators/directed.py
--- a/networkx/generators/directed.py
+++ b/networkx/generators/directed.py
@@ -189,6 +189,7 @@ def scale_free_graph(
delta_out=0,
create_using=None,
seed=None,
+ initial_graph=None,
):
"""Returns a scale-free directed graph.
@@ -215,9 +216,22 @@ def scale_free_graph(
The default is a MultiDiGraph 3-cycle.
If a graph instance, use it without clearing first.
If a graph constructor, call it to construct an empty graph.
+
+ .. deprecated:: 3.0
+
+ create_using is deprecated, use `initial_graph` instead.
+
seed : integer, random_state, or None (default)
Indicator of random number generation state.
See :ref:`Randomness<randomness>`.
+ initial_graph : MultiDiGraph instance, optional
+ Build the scale-free graph starting from this initial MultiDiGraph,
+ if provided.
+
+
+ Returns
+ -------
+ MultiDiGraph
Examples
--------
@@ -245,14 +259,37 @@ def _choose_node(candidates, node_list, delta):
return seed.choice(node_list)
return seed.choice(candidates)
- if create_using is None or not hasattr(create_using, "_adj"):
- # start with 3-cycle
- G = nx.empty_graph(3, create_using, default=nx.MultiDiGraph)
- G.add_edges_from([(0, 1), (1, 2), (2, 0)])
- else:
+ if create_using is not None:
+ import warnings
+
+ warnings.warn(
+ "The create_using argument is deprecated and will be removed in the future.\n\n"
+ "To create a scale free graph from an existing MultiDiGraph, use\n"
+ "initial_graph instead.",
+ DeprecationWarning,
+ stacklevel=2,
+ )
+
+ # TODO: Rm all this complicated logic when deprecation expires and replace
+ # with commented code:
+ # if initial_graph is not None and hasattr(initial_graph, "_adj"):
+ # G = initial_graph
+ # else:
+ # # Start with 3-cycle
+ # G = nx.MultiDiGraph([(0, 1), (1, 2), (2, 0)])
+ if create_using is not None and hasattr(create_using, "_adj"):
+ if initial_graph is not None:
+ raise ValueError(
+ "Cannot set both create_using and initial_graph. Set create_using=None."
+ )
G = create_using
+ else:
+ if initial_graph is not None and hasattr(initial_graph, "_adj"):
+ G = initial_graph
+ else:
+ G = nx.MultiDiGraph([(0, 1), (1, 2), (2, 0)])
if not (G.is_directed() and G.is_multigraph()):
- raise nx.NetworkXError("MultiDiGraph required in create_using")
+ raise nx.NetworkXError("MultiDiGraph required in initial_graph")
if alpha <= 0:
raise ValueError("alpha must be > 0.")
| diff --git a/networkx/generators/tests/test_directed.py b/networkx/generators/tests/test_directed.py
--- a/networkx/generators/tests/test_directed.py
+++ b/networkx/generators/tests/test_directed.py
@@ -58,6 +58,12 @@ def test_create_using_keyword_arguments(self):
pytest.raises(ValueError, scale_free_graph, 100, gamma=-0.3)
[email protected]("ig", (nx.Graph(), nx.DiGraph([(0, 1)])))
+def test_scale_free_graph_initial_graph_kwarg(ig):
+ with pytest.raises(nx.NetworkXError):
+ scale_free_graph(100, initial_graph=ig)
+
+
class TestRandomKOutGraph:
"""Unit tests for the
:func:`~networkx.generators.directed.random_k_out_graph` function.
| Graph instance modified in place by `scale_free_graph`
`nx.scale_free_graph` has a `create_using` argument which accepts different types of inputs. When a graph instance is provided, it is used as a starting point for the graph to which more nodes will be added, as is clearly described in the docstring.
However, the instance passed in via `create_using` is also modified in place:
```python
>>> G = nx.scale_free_graph(10)
>>> G.nodes()
NodeView((0, 1, 2, 3, 4, 5, 6, 7, 8, 9))
>>> H = nx.scale_free_graph(20, create_using=G)
>>> G.nodes()
NodeView((0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19))
```
This behavior should either be documented, or the `create_using` argument should be copied (when a graph instance) to prevent the in-place modification. I think the second option is more consistent with the other functions in NetworkX, which generally try to avoid in-place modification of inputs.
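In the meantime, a defensive copy sidesteps the mutation:
```python
>>> H = nx.scale_free_graph(20, create_using=G.copy())
>>> G.number_of_nodes()  # G is unchanged
10
```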
| `create_using` is a strange kind of beast (see NXEP3). I thought we never used it as a starting graph from which to add nodes and edges. Though apparently `scale_free_graph` does that -- (see below). Learn something every day. :}
The standard treatment is that `create_using`, if it is a NetworkX Graph, is cleared before using it to create the graph. If it is a graph constructor (like `nx.Graph`) then it is called to create the starting graph.
So... That argument -- `create_using` is always modified when it is an instance. Maybe NXEP3 can end up fixing that behavior too.
> The standard treatment is that create_using, if it is a NetworkX Graph, is cleared before using it to create the graph
Yes, this is the first instance that I've come across where this *wasn't* the case. In this particular case, I think this may just be the result of imprecise argument naming. `create_using` doesn't really make sense in this context because the graph [must be a `MultiDiGraph`](https://github.com/networkx/networkx/blob/ead0e65bda59862e329f2e6f1da47919c6b07ca9/networkx/generators/directed.py#L251-L258), so the parameter is really only for passing in an initial graph instance to build upon, not selecting a graph type.
I think the intended behavior would be better reflected by a better argument name, e.g. `initial_graph` with behavior like:
```python
if initial_graph is not None:
G = initial_graph.copy()
else:
G = nx.MultiDiGraph([(0, 1), (1, 2), (2, 0)]) # This is the default initialization for the current implementation
```
The feature of having a graph to start building a new graph also appears in #4659 and we should probably be consistent in our choice of names. Should the argument be named `initial_graph` or `seed_graph`?
> Should the argument be named initial_graph or seed_graph?
I think both names are good and equally clear. I have a slight preference for `initial_graph` only because the term "seed" is so commonly associated with random number generation, but this is very much a nit. | 2022-06-07T12:08:50 |
networkx/networkx | 5,698 | networkx__networkx-5698 | [
"4483"
] | f8b33d8b08666277b5a2e44fb1cd50157fa404f3 | diff --git a/networkx/generators/line.py b/networkx/generators/line.py
--- a/networkx/generators/line.py
+++ b/networkx/generators/line.py
@@ -43,6 +43,18 @@ def line_graph(G, create_using=None):
>>> print(sorted(map(sorted, L.edges()))) # makes a 3-clique, K3
[[(0, 1), (0, 2)], [(0, 1), (0, 3)], [(0, 2), (0, 3)]]
+ Edge attributes from `G` are not copied over as node attributes in `L`, but
+ attributes can be copied manually:
+
+ >>> G = nx.path_graph(4)
+ >>> G.add_edges_from((u, v, {"tot": u+v}) for u, v in G.edges)
+ >>> G.edges(data=True)
+ EdgeDataView([(0, 1, {'tot': 1}), (1, 2, {'tot': 3}), (2, 3, {'tot': 5})])
+ >>> H = nx.line_graph(G)
+ >>> H.add_nodes_from((node, G.edges[node]) for node in H)
+ >>> H.nodes(data=True)
+ NodeDataView({(0, 1): {'tot': 1}, (2, 3): {'tot': 5}, (1, 2): {'tot': 3}})
+
Notes
-----
Graph, node, and edge data are not propagated to the new graph. For
| Line graph with attributes
If we have some graph with edge attributes, is it possible to generate the line graph somehow with the original edge attributes embedded into the line graph nodes? Would save a lot of hassle.
| It looks like one additional line of code will transfer all the edge attributes of the original graph to the line graph as node attributes.
```python
G = nx.path_graph(4)
G.add_edges_from((u, v, {'krill': u+v}) for u, v in G.edges)
H = nx.line_graph(G)
H.add_nodes_from((node, G.edges[node]) for node in H)
```
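Checking the result (values follow from `'krill': u + v` above; node order in the view may differ):
```python
>>> H.nodes(data=True)
NodeDataView({(0, 1): {'krill': 1}, (1, 2): {'krill': 3}, (2, 3): {'krill': 5}})
```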
I think @dschult 's comment very nicely answers the question and IMO would be a nice addition to the docstring examples. | 2022-06-07T12:11:07 |
|
networkx/networkx | 5,700 | networkx__networkx-5700 | [
"5047"
] | f8b33d8b08666277b5a2e44fb1cd50157fa404f3 | diff --git a/networkx/generators/geometric.py b/networkx/generators/geometric.py
--- a/networkx/generators/geometric.py
+++ b/networkx/generators/geometric.py
@@ -37,7 +37,7 @@ def euclidean(x, y):
return math.dist(x, y)
-def geometric_edges(G, radius, p):
+def geometric_edges(G, radius, p=2):
"""Returns edge list of node pairs within `radius` of each other.
Parameters
@@ -49,10 +49,10 @@ def geometric_edges(G, radius, p):
radius : scalar
The distance threshold. Edges are included in the edge list if the
distance between the two nodes is less than `radius`.
- p : scalar
+ p : scalar, default=2
The `Minkowski distance metric
- <https://en.wikipedia.org/wiki/Minkowski_distance>`_ use to compute
- distances.
+ <https://en.wikipedia.org/wiki/Minkowski_distance>`_ used to compute
+ distances. The default value is 2, i.e. Euclidean distance.
Returns
-------
@@ -75,14 +75,13 @@ def geometric_edges(G, radius, p):
... (1, {"pos": (3, 0)}),
... (2, {"pos": (8, 0)}),
... ])
- >>> p = 2 # Euclidean distance
- >>> nx.geometric_edges(G, radius=1, p=p)
+ >>> nx.geometric_edges(G, radius=1)
[]
- >>> nx.geometric_edges(G, radius=4, p=p)
+ >>> nx.geometric_edges(G, radius=4)
[(0, 1)]
- >>> nx.geometric_edges(G, radius=6, p=p)
+ >>> nx.geometric_edges(G, radius=6)
[(0, 1), (1, 2)]
- >>> nx.geometric_edges(G, radius=9, p=p)
+ >>> nx.geometric_edges(G, radius=9)
[(0, 1), (0, 2), (1, 2)]
"""
nodes_pos = G.nodes(data="pos")
| Add default distance metric to `geometric_edges`
The `nx.generators.geometric.geometric_edges` function currently takes 3 positional parameters. The last of these parameters, `p`, is the "Minkowski distance metric", which allows users to select the power used to compute the distances.
Having looked at the function a bit, I was wondering if it wasn't worth perhaps turning `p` into an optional kwarg and giving it a default value. IIUC, `p=2` corresponds to Euclidean distance, which is the default distance measure used for many of the other functions in the module. Thoughts about making this the default value for this function?
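With such a default, the doctest from the patch above (three nodes with `pos` attributes) simplifies to calls like:
```python
>>> nx.geometric_edges(G, radius=4)  # p=2 (Euclidean) assumed by default
[(0, 1)]
```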
| It seems to me that most geometric networks would use euclidean distance and this change shouldn't mess up any backward compatibility. Sounds good to me. :} | 2022-06-07T12:24:33 |
|
networkx/networkx | 5,705 | networkx__networkx-5705 | [
"5691"
] | fd9a6521a87005ada7b373b3bed659f0bf5cab3b | diff --git a/networkx/drawing/layout.py b/networkx/drawing/layout.py
--- a/networkx/drawing/layout.py
+++ b/networkx/drawing/layout.py
@@ -1083,11 +1083,16 @@ def multipartite_layout(G, subset_key="subset", align="vertical", scale=1, cente
raise ValueError(msg)
layers[layer] = [v] + layers.get(layer, [])
+ # Sort by layer, if possible
+ try:
+ layers = sorted(layers.items())
+ except TypeError:
+ layers = list(layers.items())
+
pos = None
nodes = []
-
width = len(layers)
- for i, layer in enumerate(layers.values()):
+ for i, (_, layer) in enumerate(layers):
height = len(layer)
xs = np.repeat(i, height)
ys = np.arange(0, height, dtype=float)
| diff --git a/networkx/drawing/tests/test_layout.py b/networkx/drawing/tests/test_layout.py
--- a/networkx/drawing/tests/test_layout.py
+++ b/networkx/drawing/tests/test_layout.py
@@ -422,3 +422,24 @@ def test_multipartite_layout_nonnumeric_partition_labels():
G.add_edges_from([(0, 2), (0, 3), (1, 2)])
pos = nx.multipartite_layout(G)
assert len(pos) == len(G)
+
+
+def test_multipartite_layout_layer_order():
+ """Return the layers in sorted order if the layers of the multipartite
+ graph are sortable. See gh-5691"""
+ G = nx.Graph()
+ for node, layer in zip(("a", "b", "c", "d", "e"), (2, 3, 1, 2, 4)):
+ G.add_node(node, subset=layer)
+
+ # Horizontal alignment, therefore y-coord determines layers
+ pos = nx.multipartite_layout(G, align="horizontal")
+
+ # Nodes "a" and "d" are in the same layer
+ assert pos["a"][-1] == pos["d"][-1]
+ # positions should be sorted according to layer
+ assert pos["c"][-1] < pos["a"][-1] < pos["b"][-1] < pos["e"][-1]
+
+ # Make sure that multipartite_layout still works when layers are not sortable
+ G.nodes["a"]["subset"] = "layer_0" # Can't sort mixed strs/ints
+ pos_nosort = nx.multipartite_layout(G) # smoke test: this should not raise
+ assert pos_nosort.keys() == pos.keys()
| Ordering of layers in multipartite_layout changed in v2.7. Add a way to control the order?
I am trying to run a script to plot graphs, but they're coming out wrong. I've tracked down the issue to incorrect positions obtained from the multipartite_layout command, which is either ignoring or misordering the node layer information.
### Current Behavior
The positions output by multipartite_layout don't order the layers correctly.
### Expected Behavior
The positions output by multipartite_layout should order the layers by value.
### Steps to Reproduce
Here is a snippet of what I currently get for the positions of nodes in a small graph:
```
import networkx as nx
G = nx.DiGraph()
G.add_node('a',layer=2)
G.add_node('b',layer=3)
G.add_node('c',layer=1)
G.add_node('d',layer=2)
G.add_node('e',layer=4)
nx.multipartite_layout(G,align='horizontal',subset_key='layer')
{'d': array([-0.27777778, -0.66666667]),
'a': array([ 0.27777778, -0.66666667]),
'b': array([ 0. , -0.11111111]),
'c': array([0. , 0.44444444]),
'e': array([0., 1.])}
```
What I expect to get out (and can still see if I use an online compiler) is
```
{'d': array([-0.3125, -0.25 ]),
'a': array([ 0.3125, -0.25 ]),
'b': array([0. , 0.375]),
'c': array([ 0. , -0.875]),
'e': array([0., 1.])}
```
### Environment
Python version: 3.9.12
NetworkX version: 2.7.1
### Additional context
On an older computer when I had this issue I was able to resolve it by uninstalling and reinstalling Anaconda. This time I'm working from a new computer with a fresh install and uninstalling/reinstalling doesn't work.
| In PR #5153 we allowed non-numeric layer values. But that change also impacts the order of the layers in the multipartite layout. The order of the layers is no longer determined by the layer's numeric value. Notice that we never intended the layer's value to be required to be a number nor to reflect the position/order of the layers. But I can see why it would be surprising to suddenly have that change.
At the moment, the order of the layers is determined by the order that the nodes are reported by `G.nodes`. That is, the layers are ordered based on the first time in `G.nodes` that a node has that layer value. So, with your example, the nodes are ordered alphabetically, so the layer for 'a' is the first layer even though its layer is `2`. Layer `1` is not positioned until node `c` is processed.
Technically this is not a bug because we never guaranteed the order of the layers, but our implementation did provide an order and we changed that order. So we broke backward compatibility and should provide a way to indicate the order of the layers.
I'm going to change the title of this issue to reflect a request for the ability to choose the order of the layers in a multipartite layout. A solution will need a way to allow the user to indicate the order of the layers while still allowing non-numeric layer values. Suggestions?
Good to know how the layers are assigned. Now that I know how it works I'm working around it, for now, by creating a duplicate graph where nodes are added in the desired order. Part of me wants to think that when the layers are indicated with integers it should still default to that order (rather than input order), though maybe a choice can be made between "input" or "sorted."
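A sketch of that workaround, assuming the `layer` attribute from the example above (rebuild the graph so nodes are inserted in sorted layer order):
```python
H = nx.DiGraph()
# Re-insert nodes sorted by their layer value, then copy the edges over
H.add_nodes_from(sorted(G.nodes(data=True), key=lambda n: n[1]["layer"]))
H.add_edges_from(G.edges(data=True))
pos = nx.multipartite_layout(H, align="horizontal", subset_key="layer")
```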
Perhaps we could do something like this:
```python
try:
layers = sorted((lyr, i, nlist) for i, (lyr, nlist) in enumerate(layers.items()))
except TypeError:
layers = [(lyr, i, nlist) for i, (lyr, nlist) in enumerate(layers.items())]
for lyr, i, nlist in layers:
```
This would handle the case when the layer identifiers are unsortable, but it will also work for ordering the layers according to a sort of the layer identifier when all layer identifiers are sortable. It does **not** allow the layer identifier to provide the numerical value for the position of that layer. Is that important to allow? That is how it used to work, so maybe good to have it available...
> This would handle the case when the layer identifiers are unsortable, but it will also work for ordering the layers according to a sort of the layer identifier when all layer identifiers are sortable. It does not allow the layer identifier to provide the numerical value for the position of that layer. Is that important to allow? That is how it used to work, so maybe good to have it available...
I think this is the way to go. Even though the ordering was never guaranteed, it's clear that users are relying on it (see also #5515 ) and it's perfectly reasonable to expect numeric layers to be sorted. I prefer the solution proposed above to e.g. adding a new keyword argument. | 2022-06-08T09:23:36 |
networkx/networkx | 5,707 | networkx__networkx-5707 | [
"5046"
] | fd9a6521a87005ada7b373b3bed659f0bf5cab3b | diff --git a/networkx/generators/geometric.py b/networkx/generators/geometric.py
--- a/networkx/generators/geometric.py
+++ b/networkx/generators/geometric.py
@@ -84,6 +84,25 @@ def geometric_edges(G, radius, p=2):
>>> nx.geometric_edges(G, radius=9)
[(0, 1), (0, 2), (1, 2)]
"""
+ # Input validation - every node must have a "pos" attribute
+ for n, pos in G.nodes(data="pos"):
+ if pos is None:
+ raise nx.NetworkXError(
+ f"All nodes in `G` must have a 'pos' attribute. Check node {n}"
+ )
+
+ # NOTE: See _geometric_edges for the actual implementation. The reason this
+ # is split into two functions is to avoid the overhead of input validation
+ # every time the function is called internally in one of the other
+ # geometric generators
+ return _geometric_edges(G, radius, p)
+
+
+def _geometric_edges(G, radius, p=2):
+ """
+ Implements `geometric_edges` without input validation. See `geometric_edges`
+ for complete docstring.
+ """
nodes_pos = G.nodes(data="pos")
try:
import scipy as sp
@@ -189,7 +208,7 @@ def random_geometric_graph(n, radius, dim=2, pos=None, p=2, seed=None):
pos = {v: [seed.random() for i in range(dim)] for v in G}
nx.set_node_attributes(G, pos, "pos")
- G.add_edges_from(geometric_edges(G, radius, p))
+ G.add_edges_from(_geometric_edges(G, radius, p))
return G
@@ -315,7 +334,7 @@ def should_join(edge):
dist = (sum(abs(a - b) ** p for a, b in zip(pos[u], pos[v]))) ** (1 / p)
return seed.random() < p_dist(dist)
- G.add_edges_from(filter(should_join, geometric_edges(G, radius, p)))
+ G.add_edges_from(filter(should_join, _geometric_edges(G, radius, p)))
return G
@@ -778,7 +797,7 @@ def thresholded_random_geometric_graph(
edges = (
(u, v)
- for u, v in geometric_edges(G, radius, p)
+ for u, v in _geometric_edges(G, radius, p)
if weight[u] + weight[v] >= theta
)
G.add_edges_from(edges)
| diff --git a/networkx/generators/tests/test_geometric.py b/networkx/generators/tests/test_geometric.py
--- a/networkx/generators/tests/test_geometric.py
+++ b/networkx/generators/tests/test_geometric.py
@@ -2,6 +2,8 @@
import random
from itertools import combinations
+import pytest
+
import networkx as nx
@@ -318,3 +320,10 @@ def test_theta(self):
# Adjacent vertices must be within the given distance.
if v in G[u]:
assert (G.nodes[u]["weight"] + G.nodes[v]["weight"]) >= 0.1
+
+
+def test_geometric_edges_raises_no_pos():
+ G = nx.path_graph(3)
+ msg = "All nodes in `G` must have a 'pos' attribute"
+ with pytest.raises(nx.NetworkXError, match=msg):
+ nx.geometric_edges(G, radius=1)
| Should `geometric_edges` raise a specific exception if nodes don't have "pos" attribute?
The `networkx.generators.geometric.geometric_edges` function assumes that the input graph contains nodes that have a `"pos"` attribute representing the 2D position of the node. If this is not the case, exceptions are raised (depending on whether or not scipy is installed) that don't really indicate what the problem is:
```python
>>> G = nx.path_graph(3)
```
### with scipy
```python
>>> nx.generators.geometric.geometric_edges(G, radius=1, p=2)
Traceback (most recent call last):
...
ckdtree.pyx in scipy.spatial.ckdtree.cKDTree.__init__()
ValueError: data must be 2 dimensions
```
### without scipy
```python
>>> nx.generators.geometric.geometric_edges(G, radius=1, p=2)
Traceback (most recent call last):
...
TypeError: 'NoneType' object is not iterable
```
The thing that makes the most sense to me would be to raise an exception with a specific message to indicate to users that the nodes need a "pos" attribute. Another potential solution would be to have a default value for position when the attribute isn't found, but I don't know what a sensible default would be (`(0, 0)`)?
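A sketch of what such a check could look like (this mirrors the validation the eventual patch above adds):
```python
for n, pos in G.nodes(data="pos"):
    if pos is None:
        raise nx.NetworkXError(
            f"All nodes in `G` must have a 'pos' attribute. Check node {n}"
        )
```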
| I would say it should raise the exception. Putting a bunch of nodes at (0,0) would be potentially hard to notice.
Does raising the exception mean we should go through every node to check if it has a 2-tuple value on a `pos` attribute?
Or is it better to catch the current error and provide a better message for it?
> Does raising the exception mean we should go through every node to check if it has a 2-tuple value on a pos attribute?
This is what I was thinking just because it's fewer things to modify (adding one check at the beginning versus catching exceptions in multiple branches). The latter may actually be preferable from a performance perspective however since `geometric_edges` is used internally in a couple of the geometric generators where an extra check for node attributes would be unnecessary, and would actually slow things down. | 2022-06-08T11:47:55 |
networkx/networkx | 5,708 | networkx__networkx-5708 | [
"4962"
] | fd9a6521a87005ada7b373b3bed659f0bf5cab3b | diff --git a/networkx/classes/multidigraph.py b/networkx/classes/multidigraph.py
--- a/networkx/classes/multidigraph.py
+++ b/networkx/classes/multidigraph.py
@@ -569,14 +569,14 @@ def edges(self):
as well as edge attribute lookup. When called, it also provides
an EdgeDataView object which allows control of access to edge
attributes (but does not provide set-like operations).
- Hence, `G.edges[u, v, k]['color']` provides the value of the color
- attribute for edge from `u` to `v` with key `k` while
- `for (u, v, k, c) in G.edges(data='color', default='red',
- keys=True):` iterates through all the edges yielding the color
- attribute with default `'red'` if no color attribute exists.
+ Hence, ``G.edges[u, v, k]['color']`` provides the value of the color
+ attribute for the edge from ``u`` to ``v`` with key ``k`` while
+ ``for (u, v, k, c) in G.edges(data='color', default='red', keys=True):``
+ iterates through all the edges yielding the color attribute with
+ default `'red'` if no color attribute exists.
Edges are returned as tuples with optional data and keys
- in the order (node, neighbor, key, data). If `keys=True` is not
+ in the order (node, neighbor, key, data). If ``keys=True`` is not
provided, the tuples will just be (node, neighbor, data), but
multiple tuples with the same node and neighbor will be
generated when multiple edges between two nodes exist.
@@ -599,10 +599,10 @@ def edges(self):
Returns
-------
- edges : EdgeView
+ edges : OutMultiEdgeView
A view of edge attributes, usually it iterates over (u, v)
(u, v, k) or (u, v, k, d) tuples of edges, but can also be
- used for attribute lookup as `edges[u, v, k]['foo']`.
+ used for attribute lookup as ``edges[u, v, k]['foo']``.
Notes
-----
diff --git a/networkx/classes/multigraph.py b/networkx/classes/multigraph.py
--- a/networkx/classes/multigraph.py
+++ b/networkx/classes/multigraph.py
@@ -763,17 +763,21 @@ def edges(self):
edges(self, nbunch=None, data=False, keys=False, default=None)
- The EdgeView provides set-like operations on the edge-tuples
+ The MultiEdgeView provides set-like operations on the edge-tuples
as well as edge attribute lookup. When called, it also provides
an EdgeDataView object which allows control of access to edge
attributes (but does not provide set-like operations).
- Hence, `G.edges[u, v, k]['color']` provides the value of the color
- attribute for edge `(u, v, k)` while
- `for (u, v, c) in G.edges(data='color', default='red'):`
- iterates through all the edges yielding the color attribute.
+ Hence, ``G.edges[u, v, k]['color']`` provides the value of the color
+ attribute for the edge from ``u`` to ``v`` with key ``k`` while
+ ``for (u, v, k, c) in G.edges(data='color', keys=True, default="red"):``
+ iterates through all the edges yielding the color attribute with
+ default `'red'` if no color attribute exists.
Edges are returned as tuples with optional data and keys
- in the order (node, neighbor, key, data).
+ in the order (node, neighbor, key, data). If ``keys=True`` is not
+ provided, the tuples will just be (node, neighbor, data), but
+ multiple tuples with the same node and neighbor will be generated
+ when multiple edges exist between two nodes.
Parameters
----------
@@ -795,7 +799,7 @@ def edges(self):
edges : MultiEdgeView
A view of edge attributes, usually it iterates over (u, v)
(u, v, k) or (u, v, k, d) tuples of edges, but can also be
- used for attribute lookup as `edges[u, v, k]['foo']`.
+ used for attribute lookup as ``edges[u, v, k]['foo']``.
Notes
-----
@@ -804,7 +808,7 @@ def edges(self):
Examples
--------
- >>> G = nx.MultiGraph() # or MultiDiGraph
+ >>> G = nx.MultiGraph()
>>> nx.add_path(G, [0, 1, 2])
>>> key = G.add_edge(2, 3, weight=5)
>>> key2 = G.add_edge(2, 1, weight=2) # multi-edge
| Can't retrieve all edges for pair of nodes in Multi(Di)Graph
According to the docs, I should be able to subscript into Multi(Di)Graph by pair of nodes and retrieve all the edges:
> Hence, **G.edges[u, v]['color']** provides the value of the color attribute for edge (u, v) while for (u, v, c) in G.edges(data='color', default='red'): iterates through all the edges yielding the color attribute. [Docs](https://networkx.org/documentation/stable/reference/classes/generated/networkx.MultiGraph.edges.html)
Either the docs are incorrect (I suspect a copy-paste from non-Multi Graph/DiGraph), or this does not work as expected.
### Current Behavior
```python
G = nx.MultiGraph() # or MultiDiGraph
nx.add_path(G, [0, 1, 2])
key = G.add_edge(1, 2, weight=5)
G.edges[1, 2]
# G.edges[1, 2, 0] or G.edges[1, 2, 1] works
```
> ValueError: not enough values to unpack (expected 3, got 2)
### Expected Behavior
I would expect to be able to retrieve all the (multi-)edges for a pair of nodes by subscripting into the edges of the MultiGraph:
```python
G.edges[1, 2]
```
> MultiEdgeView([(1, 2, 0), (1, 2, 1)])
### Environment
Python version: 3.8
NetworkX version: 2.6.1
| > I should be able to subscript into Multi(Di)Graph by pair of nodes and retrieve all the edges
The docs aren't particularly clear here, though I don't think they support the interpretation that indexing edges by node numbers will return all edges between the two nodes --- rather, indexing EdgeView (and subclasses) is intended to access specific edges (or their attributes). In the case of MultiGraphs, each edge includes a key, hence the above error. I agree that the documentation could be improved here to make this more clear.
Ye, I wanted a quick way to get a list of a specific attribute value for each of the edges. I ended up doing this, is there a better way?
```python
import networkx as nx
from operator import itemgetter

G = nx.MultiGraph()
G.add_edge(1, 2, weight=3)
G.add_edge(1, 2, weight=5)
list(map(itemgetter('weight'), G[1][2].values()))
```
> Out[10]: [3, 5]
I was missing that `G[1][2]` provides an `AtlasView` of all the edges with attributes.
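For illustration, with the graph above (the exact repr may vary between versions):
```python
>>> G[1][2]
AtlasView({0: {'weight': 3}, 1: {'weight': 5}})
```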
> a quick way to get a list of a specific attribute value for each of the edges. I ended up doing this, is there a better way?
It depends on what you want specifically. For example, what you've done above will grab the `"weight"` attribute for every edge between two nodes (only relevant for multigraphs). Your solution is perfectly fine, though I would probably use list comprehension instead of the `list(map(` pattern:
```python
>>> e12_wts = [e['weight'] for e in G[1][2].values()]
```
If instead you wanted the attribute for *all* edges (instead of all edges between two specific nodes) then you'd iterate over the edge view (with data), something like:
```python
>>> G.add_edge(0,1, weight=7) # Add a new edge for illustration
>>> edge_wts = [d["weight"] for _, _, d in G.edges(data=True)]
```
Still dusting off my python hat, list comprehension is much more legible xD thanks for the help!
The DataView can also specify which data attribute (instead of `data=True`):
```python
>>> G.add_edge(0,1, weight=7) # Add a new edge for illustration
>>> edge_wts = [wt for _, _, wt in G.edges(data="weight")]
```
You can specify a `default` value for edges that don't have that attribute (the default is `None`). For `MultiGraph` you can also request that the edge keys be returned -- then you get a 4-tuple.
```python
>>> edge_wts = [wt for _, _, wt in G.edges(data="weight", default=1)]
```
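And with `keys=True` you get the 4-tuples (edge order may vary):
```python
>>> list(G.edges(keys=True, data="weight"))
[(1, 2, 0, 3), (1, 2, 1, 5), (1, 0, 0, 7)]
```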
But the bottom line is that the docs for Multi(Di)Graph need to be updated... :} | 2022-06-08T12:09:15 |
|
networkx/networkx | 5,713 | networkx__networkx-5713 | [
"5175"
] | 08ea2775e8fb7c9e311c6bc5abe8237a3529cd18 | diff --git a/networkx/algorithms/community/louvain.py b/networkx/algorithms/community/louvain.py
--- a/networkx/algorithms/community/louvain.py
+++ b/networkx/algorithms/community/louvain.py
@@ -219,10 +219,18 @@ def _one_level(G, m, partition, resolution=1, is_directed=False, seed=None):
out_degrees = dict(G.out_degree(weight="weight"))
Stot_in = [deg for deg in in_degrees.values()]
Stot_out = [deg for deg in out_degrees.values()]
+ # Calculate weights for both in and out neighbours
+ nbrs = {}
+ for u in G:
+ nbrs[u] = defaultdict(float)
+ for _, n, wt in G.out_edges(u, data="weight"):
+ nbrs[u][n] += wt
+ for n, _, wt in G.in_edges(u, data="weight"):
+ nbrs[u][n] += wt
else:
degrees = dict(G.degree(weight="weight"))
Stot = [deg for deg in degrees.values()]
- nbrs = {u: {v: data["weight"] for v, data in G[u].items() if v != u} for u in G}
+ nbrs = {u: {v: data["weight"] for v, data in G[u].items() if v != u} for u in G}
rand_nodes = list(G.nodes)
seed.shuffle(rand_nodes)
nb_moves = 1
@@ -238,22 +246,36 @@ def _one_level(G, m, partition, resolution=1, is_directed=False, seed=None):
out_degree = out_degrees[u]
Stot_in[best_com] -= in_degree
Stot_out[best_com] -= out_degree
+ remove_cost = (
+ -weights2com[best_com] / m
+ + resolution
+ * (out_degree * Stot_in[best_com] + in_degree * Stot_out[best_com])
+ / m**2
+ )
else:
degree = degrees[u]
Stot[best_com] -= degree
+ remove_cost = -weights2com[best_com] / m + resolution * (
+ Stot[best_com] * degree
+ ) / (2 * m**2)
for nbr_com, wt in weights2com.items():
if is_directed:
gain = (
- wt
+ remove_cost
+ + wt / m
- resolution
* (
out_degree * Stot_in[nbr_com]
+ in_degree * Stot_out[nbr_com]
)
- / m
+ / m**2
)
else:
- gain = 2 * wt - resolution * (Stot[nbr_com] * degree) / m
+ gain = (
+ remove_cost
+ + wt / m
+ - resolution * (Stot[nbr_com] * degree) / (2 * m**2)
+ )
if gain > best_mod:
best_mod = gain
best_com = nbr_com
| diff --git a/networkx/algorithms/community/tests/test_louvain.py b/networkx/algorithms/community/tests/test_louvain.py
--- a/networkx/algorithms/community/tests/test_louvain.py
+++ b/networkx/algorithms/community/tests/test_louvain.py
@@ -43,6 +43,59 @@ def test_partition():
assert part == partition
+def test_directed_partition():
+ """
+ Test 2 cases that were looping infinitely
+ from issues #5175 and #5704
+ """
+ G = nx.DiGraph()
+ H = nx.DiGraph()
+ G.add_nodes_from(range(10))
+ H.add_nodes_from([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11])
+ G_edges = [
+ (0, 2),
+ (0, 1),
+ (1, 0),
+ (2, 1),
+ (2, 0),
+ (3, 4),
+ (4, 3),
+ (7, 8),
+ (8, 7),
+ (9, 10),
+ (10, 9),
+ ]
+ H_edges = [
+ (1, 2),
+ (1, 6),
+ (1, 9),
+ (2, 3),
+ (2, 4),
+ (2, 5),
+ (3, 4),
+ (4, 3),
+ (4, 5),
+ (5, 4),
+ (6, 7),
+ (6, 8),
+ (9, 10),
+ (9, 11),
+ (10, 11),
+ (11, 10),
+ ]
+ G.add_edges_from(G_edges)
+ H.add_edges_from(H_edges)
+
+ G_expected_partition = [{0, 1, 2}, {3, 4}, {5}, {6}, {8, 7}, {9, 10}]
+ G_partition = louvain_communities(G, seed=123, weight=None)
+
+ H_expected_partition = [{2, 3, 4, 5}, {8, 1, 6, 7}, {9, 10, 11}]
+ H_partition = louvain_communities(H, seed=123, weight=None)
+
+ assert G_partition == G_expected_partition
+ assert H_partition == H_expected_partition
+
+
def test_none_weight_param():
G = nx.karate_club_graph()
nx.set_edge_attributes(
@@ -84,9 +137,9 @@ def test_quality():
quality4 = partition_quality(J, partition4)[0]
assert quality >= 0.65
- assert quality2 >= 0.85
+ assert quality2 >= 0.65
assert quality3 >= 0.65
- assert quality4 >= 0.85
+ assert quality4 >= 0.65
def test_multigraph():
@@ -120,7 +173,7 @@ def test_threshold():
G = nx.LFR_benchmark_graph(
250, 3, 1.5, 0.009, average_degree=5, min_community=20, seed=10
)
- partition1 = louvain_communities(G, threshold=0.2, seed=2)
+ partition1 = louvain_communities(G, threshold=0.3, seed=2)
partition2 = louvain_communities(G, seed=2)
mod1 = modularity(G, partition1)
mod2 = modularity(G, partition2)
| louvain for large directed graph cannot stop
<!-- If you have a general question about NetworkX, please use the discussions tab to create a new discussion -->
<!--- Provide a general summary of the issue in the Title above -->
the nb_moves cannot decrease
### Current Behavior
<!--- Tell us what happens instead of the expected behavior -->
### Expected Behavior
<!--- Tell us what should happen -->
### Steps to Reproduce
<!--- Provide a minimal example that reproduces the bug -->
### Environment
<!--- Please provide details about your local environment -->
Python version:
NetworkX version:
### Additional context
<!--- Add any other context about the problem here, screenshots, etc. -->
| Try starting with a small graph and increasing the size to see how the time varies.
It might be that your graph is so big it will just take a long time.
My graph has 30000 nodes, and nb_moves stays around 27000 and never drops.
I also tried a graph with only 100 edges, but the same thing happens.
@dschult
My graph seemingly has a very bad structure. I also tested a graph with 10 edges; for some bad structures the program seems to get stuck in an infinite loop. I tried shuffling the neighbor_com list, which seems to work for me.
> I also tested a graph with 10 edges; for some bad structures the program seems to get stuck in an infinite loop.
It would be very useful if you could share a minimal reproducing example that results in the bad behavior: for example, the graph with only 10 edges that results in the infinite loop. This can help us determine whether there is something in the code that can be improved.
```python
import networkx as nx

G = nx.DiGraph()
for i in range(10):
    G.add_node(i)
G.add_edges_from([(0, 2), (0, 1), (1, 0), (2, 1), (2, 0), (3, 4), (4, 3), (7, 8), (8, 7), (9, 10), (10, 9)])
pa = nx.algorithms.community.louvain_communities(G, seed=123)
```
@rossbar
The above infinite-loop situation could probably be solved by changing the code in louvain.py
`for nbr_com, wt in weights2com.items():`
to
```python
if nbrs[u] == {}:
    continue
r_list = list(weights2com.items())
random.shuffle(r_list)  # note: requires `import random` at the top of the module
for nbr_com, wt in r_list:
    if is_directed:
```
However, this function still takes a lot of time when the input is a large directed graph, and the results do not seem good.
So it seems that we also need to check the neighbor communities of a node in a random order to fix the issue.
When I do that on the example graph provided by @ginandsherry I get a result.
Interesting - many thanks @ginandsherry for the reproducer!
It does indeed seem that node order has an effect here. A slight modification of the initial example that results in a different insertion order has no problem:
```python
>>> import networkx as nx
>>> G = nx.DiGraph()
>>> G.add_edges_from([
... (0, 2), (0, 1), (1, 0), (2, 1), (2, 0), (3, 4), (4, 3), (7, 8), (8, 7), (9, 10), (10, 9)
... ])
>>> G.add_node(5) # Add node that was not included when adding edges
>>> G.nodes()
NodeView((0, 2, 1, 3, 4, 7, 8, 9, 10, 5))
>>> pa = nx.algorithms.community.louvain_communities(G, seed=123) # returns immediately
```
However, when running the reproducer as presented (i.e. adding nodes prior to edges) I get the stated behavior:
```python
>>> import networkx as nx
>>> G = nx.DiGraph()
>>> G.add_nodes_from(n for n in range(10))
>>> G.add_edges_from([
... (0, 2), (0, 1), (1, 0), (2, 1), (2, 0), (3, 4), (4, 3), (7, 8), (8, 7), (9, 10), (10, 9)
... ])
>>> G.nodes()
NodeView((0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10))
>>> pa = nx.algorithms.community.louvain_communities(G, seed=123) # hangs
```
The order in which the nodes are considered definitely affects the outcome as stated by the authors of the algorithm [here](https://arxiv.org/pdf/0803.0476.pdf)
> One should also note that the output of the algorithm depends on the order in which
> the nodes are considered. Preliminary results on several test cases seem to indicate that
> the ordering of the nodes does not have a significant influence on the modularity that
> is obtained. However the ordering can influence the computation time. The problem of
> choosing an order is thus worth studying since it could give good heuristics to enhance
> the computation time.
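For reference, a minimal runnable sketch of the seeded node-shuffling step (mirroring the `rand_nodes` lines visible in the patch above; the stand-in graph and the elided loop body are illustrative):
```python
import random

import networkx as nx

G = nx.karate_club_graph()  # stand-in graph for illustration
rng = random.Random(123)    # a fixed seed makes the visit order reproducible
rand_nodes = list(G.nodes)
rng.shuffle(rand_nodes)     # the visit order can change which local optimum is found
for u in rand_nodes:
    pass  # Louvain would try moving u to the neighboring community with the best gain
```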
Here is the original implementation in C++ for the Directed Louvain https://github.com/nicolasdugue/DirectedLouvain
This is the function that computes the modularity gain on each step
https://github.com/nicolasdugue/DirectedLouvain/blob/00398d5659ee49973171f67db4f6dacb4230bfbf/include/community.h#L121-L134
It looks similar to the one we have in NetworkX, and they don't shuffle the nodes after every pass. Maybe I am missing something in the formula, but this would be a good place to investigate before implementing a shuffle after every pass, which could reduce the algorithm's performance. | 2022-06-08T17:51:40 |
networkx/networkx | 5,715 | networkx__networkx-5715 | [
"5714"
] | a3a383f7a90e478df40bc9d746c925f2c94a5a2b | diff --git a/doc/conf.py b/doc/conf.py
--- a/doc/conf.py
+++ b/doc/conf.py
@@ -135,7 +135,7 @@
},
],
"external_links": [{"name": "Guides", "url": "https://networkx.org/nx-guides/"}],
- "navbar_end": ["navbar-icon-links", "version"],
+ "navbar_end": ["theme-switcher", "navbar-icon-links", "version"],
"page_sidebar_items": ["search-field", "page-toc", "edit-this-page"],
}
html_sidebars = {
@@ -188,6 +188,7 @@
"latest": "devel (latest)",
"stable": "current (stable)",
},
+ "default_mode": "light",
}
# Options for LaTeX output
diff --git a/networkx/algorithms/distance_measures.py b/networkx/algorithms/distance_measures.py
--- a/networkx/algorithms/distance_measures.py
+++ b/networkx/algorithms/distance_measures.py
@@ -52,7 +52,9 @@ def extrema_bounding(G, compute="diameter"):
NetworkXError
If the graph consists of multiple components
ValueError
- If `compute` is not one of "diameter", "radius", "periphery", "center", or "eccentricities".
+ If `compute` is not one of "diameter", "radius", "periphery", "center",
+ or "eccentricities".
+
Notes
-----
This algorithm was proposed in the following papers:
| Heads up: Latest documentation now uses dark mode by default depending on browser defaults
With `pydata-sphinx-theme==0.9`, the documentation now defaults to light/dark mode depending on your browser preferences. Mine happen to be set to dark, so I see the following at https://networkx.org/documentation/latest

IMO the dark mode is still a little rough around the edges - the logo and banner don't look great and the dark background is a deeper black than the default dark mode on other sites.
I think we should do a little experimentation with dark mode for the site before releasing it to users. I'd propose to configure the theme to force light mode (i.e. the behavior up until v0.9) until we've had a chance to play around with dark mode a bit more.
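For reference, a hedged `conf.py` sketch of forcing light mode with `pydata-sphinx-theme` (this mirrors the `default_mode` line in the patch above; the exact placement within NetworkX's `conf.py` may differ):
```python
# conf.py -- pydata-sphinx-theme >= 0.9
html_context = {
    # start every page in light mode instead of following the browser preference
    "default_mode": "light",
}
```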
| 2022-06-09T05:56:45 |
||
networkx/networkx | 5,800 | networkx__networkx-5800 | [
"5620",
"5620"
] | 2c904d18dc79df3acd64495ef64c6ff4674992a0 | diff --git a/networkx/algorithms/bipartite/centrality.py b/networkx/algorithms/bipartite/centrality.py
--- a/networkx/algorithms/bipartite/centrality.py
+++ b/networkx/algorithms/bipartite/centrality.py
@@ -252,7 +252,7 @@ def closeness_centrality(G, nodes, normalized=True):
s = (len(sp) - 1) / (len(G) - 1)
closeness[node] *= s
else:
- closeness[n] = 0.0
+ closeness[node] = 0.0
for node in bottom:
sp = dict(path_length(G, node))
totsp = sum(sp.values())
@@ -262,5 +262,5 @@ def closeness_centrality(G, nodes, normalized=True):
s = (len(sp) - 1) / (len(G) - 1)
closeness[node] *= s
else:
- closeness[n] = 0.0
+ closeness[node] = 0.0
return closeness
| diff --git a/networkx/algorithms/bipartite/tests/test_centrality.py b/networkx/algorithms/bipartite/tests/test_centrality.py
--- a/networkx/algorithms/bipartite/tests/test_centrality.py
+++ b/networkx/algorithms/bipartite/tests/test_centrality.py
@@ -51,9 +51,9 @@ def test_closeness_centrality(self):
G.add_node(0)
G.add_node(1)
c = bipartite.closeness_centrality(G, [0])
- assert c == {1: 0.0}
+ assert c == {0: 0.0, 1: 0.0}
c = bipartite.closeness_centrality(G, [1])
- assert c == {1: 0.0}
+ assert c == {0: 0.0, 1: 0.0}
def test_davis_degree_centrality(self):
G = self.davis
| Bipartite Closeness centrality algorithm does not report nodes with zero value.
### Current Behavior
The function always saves closeness values of '0.0' at closeness[len(nodes)], which is closeness[n]. This results in a returned dictionary with fewer elements than the number of nodes in the graph.
### Expected Behavior
The values of '0.0' should instead be saved at closeness[node] within the for-loops iterating over all nodes.
### Steps to Reproduce
/
### Environment
Python version: 3.9.12
NetworkX version: 2.8
### Additional context
In both cases within the source code below
```python
else:
    closeness[n] = 0.
```
needs to be changed to:
```python
else:
    closeness[node] = 0.
```
See sourcecode:
https://networkx.org/documentation/stable/_modules/networkx/algorithms/bipartite/centrality.html#closeness_centrality
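A minimal reproducer (not part of the original report; it mirrors the updated test above):
```python
import networkx as nx
from networkx.algorithms import bipartite

G = nx.Graph()
G.add_nodes_from([0, 1])  # two isolated nodes, one per bipartite set

c = bipartite.closeness_centrality(G, [0])
# Buggy behavior: {1: 0.0} -- node 0 is missing because the 0.0 was written
# to closeness[n] (a leftover loop variable) instead of closeness[node].
# Fixed behavior: {0: 0.0, 1: 0.0}.
print(c)
```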
| The docstring states that
> The closeness of a node is the distance to all other nodes in the
> graph or in the case that the graph is not connected to all other nodes
> in the connected component containing that node.
So, I think we don't intend to report nodes that are not connected to the other nodes. Isn't that the intent of this part of the code? If I am missing something (and even if not), can you provide a small example where the resulting dict is not what you would expect? That is the first step toward getting a test to show this difficulty (and thus the first step toward a fix). :}
In the docstring it also states that:
> Notes
> -----
> The nodes input parameter must contain all nodes in one bipartite node set,
> but the dictionary returned contains all nodes from both node sets.
>
I might try to provide an example at some point when I have time, but it should be fine without one..
The function saves multiple calculated closeness values at closeness[n], which can't be what was intended.
Ahh... yes, I understand --- it looks like a typo, that should be `node` instead of `n` in two places.
And you said that in the original post. I was too focused on it being the bipartite and not the regular closeness centrality that I missed that.
Yes, that code appears to simply avoid dividing by zero. And instead it assigns a value to a nonexisting node.
Thanks!
| 2022-06-20T22:05:41 |
networkx/networkx | 5,819 | networkx__networkx-5819 | [
"3880"
] | dd2d7d9b4455b08f23c663cbec0c6901957640b7 | diff --git a/networkx/algorithms/centrality/closeness.py b/networkx/algorithms/centrality/closeness.py
--- a/networkx/algorithms/centrality/closeness.py
+++ b/networkx/algorithms/centrality/closeness.py
@@ -49,7 +49,9 @@ def closeness_centrality(G, u=None, distance=None, wf_improved=True):
distance : edge attribute key, optional (default=None)
Use the specified edge attribute as the edge distance in shortest
- path calculations
+ path calculations. If `None` (the default) all edges have a distance of 1.
+ Absent edge attributes are assigned a distance of 1. Note that no check
+ is performed to ensure that edges have the provided attribute.
wf_improved : bool, optional (default=True)
If True, scale by the fraction of nodes reachable. This gives the
| In `nx.closeness_centrality(G, distance='distance')` even though `distance` doesn't exist in the edge attribute dictionary, it doesn't raise any error or warning.
When I calculate the centrality of nodes in a network, I use both weighted and unweighted centrality. "Unweighted centrality" means that all edges have the same weight and distance, and "weighted centrality" means their weights differ.
Calculating them is easy, as shown below.
```python
# for unweighted centrality
nx.closeness_centrality(G)
# for weighted centrality
nx.closeness_centrality(G, distance='distance')
```
However, there is a small problem (or a missing warning).
In the simple code below, 'distance' is not in the edge attribute dictionary, but I passed it to `nx.closeness_centrality(G, distance='distance')` as an argument.
Even though `distance` doesn't exist in the edge attribute dictionary, the call doesn't raise any error or warning, so I had no way of knowing that the weighted centrality wasn't calculated correctly.
```python
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    (0, 1, {'weight': 0.5}), (1, 2, {'weight': 0.2}), (2, 0, {'weight': 0.3}),
])

print("== CLOSENESS CENTRALITY")
for k, v in nx.closeness_centrality(G).items():
    print(k, v)

print("== WEIGHTED CLOSENESS CENTRALITY")
# actually, the key 'distance' doesn't exist in the edge attribute dictionary
for k, v in nx.closeness_centrality(G, distance='distance').items():
    print(k, v)

print("==" * 20)
```
The output is below. When the attribute named by `distance` doesn't exist in the edge attribute dictionary, the call behaves like unweighted centrality, but it doesn't tell me so. For a long time I wondered why the two centralities were exactly the same and how that could be. This silent behavior wasted my effort.
```
== CLOSENESS CENTRALITY
0 1.0
1 1.0
2 1.0
== WEIGHTED CLOSENESS CENTRALITY
0 1.0
1 1.0
2 1.0
```
I think this is not an error, but it could confuse people.
Therefore, when the name passed as `distance` doesn't match any edge attribute, it would be better to raise a warning for users.
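A hedged sketch of one possible user-side guard (illustrative only, not current NetworkX behavior; the helper name is made up):
```python
import warnings

import networkx as nx

def closeness_with_attr_check(G, distance=None):
    # Illustrative helper: warn when no edge carries the requested attribute,
    # since every lookup would then silently fall back to a distance of 1.
    if distance is not None and not any(
        distance in d for _, _, d in G.edges(data=True)
    ):
        warnings.warn(f"no edge in G has the attribute {distance!r}")
    return nx.closeness_centrality(G, distance=distance)
```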
If I seem rude, it is because of my limited English. Sorry, I don't mean any offense.
As always, thank you for your help. I have used `networkx` a lot in my research, and with your help I've learned a lot.
| It looks like the problem stems from here:
https://github.com/networkx/networkx/blob/123fc2be585c6ebbf46b7fe876ba85dd34f92b2e/networkx/algorithms/shortest_paths/weighted.py#L76-L78
The use of the `get` method ignores missing keys and returns the default value of `1` instead. This is obviously not desirable in the case described above, where an error is appropriate. However, it looks like this behavior is relied upon in other contexts to set default values, so the fix isn't a no-brainer. | 2022-06-25T18:07:51 |
|
networkx/networkx | 5,822 | networkx__networkx-5822 | [
"5817"
] | dd2d7d9b4455b08f23c663cbec0c6901957640b7 | diff --git a/networkx/algorithms/approximation/traveling_salesman.py b/networkx/algorithms/approximation/traveling_salesman.py
--- a/networkx/algorithms/approximation/traveling_salesman.py
+++ b/networkx/algorithms/approximation/traveling_salesman.py
@@ -668,21 +668,15 @@ def direction_of_ascent():
a_eq = np.empty((len(G) + 1, len(minimum_1_arborescences)), dtype=int)
b_eq = np.zeros(len(G) + 1, dtype=int)
b_eq[len(G)] = 1
- arb_count = 0
- for arborescence in minimum_1_arborescences:
+ for arb_count, arborescence in enumerate(minimum_1_arborescences):
n_count = len(G) - 1
for n, deg in arborescence.degree:
a_eq[n_count][arb_count] = deg - 2
n_count -= 1
a_eq[len(G)][arb_count] = 1
- arb_count += 1
- program_result = optimize.linprog(
- c, A_eq=a_eq, b_eq=b_eq, method="interior-point"
- )
- bool_result = program_result.x >= 0
- if program_result.status == 0 and np.sum(bool_result) == len(
- minimum_1_arborescences
- ):
+ program_result = optimize.linprog(c, A_eq=a_eq, b_eq=b_eq)
+ # If the constants exist, then the direction of ascent doesn't
+ if program_result.success:
# There is no direction of ascent
return None, minimum_1_arborescences
| Held-Karp ascent failures due to optimize.linprog infeasibility
The recent scipy 1.9rc release caught some failures related to our use of `optimize.linprog` in the Held-Karp implementations. For example, the following tests are failing:
```
FAILED networkx/algorithms/approximation/tests/test_traveling_salesman.py::test_held_karp_ascent
FAILED networkx/algorithms/approximation/tests/test_traveling_salesman.py::test_ascent_method_asymmetric
FAILED networkx/algorithms/approximation/tests/test_traveling_salesman.py::test_ascent_method_asymmetric_2
FAILED networkx/algorithms/approximation/tests/test_traveling_salesman.py::test_held_karp_ascent_asymmetric_3
FAILED networkx/algorithms/approximation/tests/test_traveling_salesman.py::test_asadpour_real_world
FAILED networkx/algorithms/approximation/tests/test_traveling_salesman.py::test_asadpour_real_world_path
```
The specifics related to the release are addressed elsewhere (see scipy/scipy#16466 and #5816), but taking a closer look at these failures, it turns out that an underlying problem is that the results from `optimize.linprog` are actually not successful (this was true for scipy 1.8 as well, but we only got warnings instead of exceptions). In each of these cases, the returned `OptimizeResult.success == False`. I didn't look closely at each case, but the `.message` attribute for at least one case indicates redundancies in `A_eq` and `b_eq` make the problem infeasible. I'm not sure exactly what's going on and whether this has something to do with the problem formulations or maybe some problem in how we're using `optimize.linprog`.
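For context, a small self-contained example of how infeasibility surfaces on a `scipy.optimize.linprog` result (generic usage, not the exact Held-Karp call):
```python
import numpy as np
from scipy import optimize

# Deliberately contradictory equality constraints: x == 0 and x == 1
res = optimize.linprog(c=[1.0], A_eq=np.array([[1.0], [1.0]]), b_eq=[0.0, 1.0])

print(res.success)  # False -- must be checked before trusting res.x
print(res.status)   # 2 -> the problem is infeasible
print(res.message)  # human-readable explanation of the failure
```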
| I had noticed the warnings coming from `optimize.linprog` and always wanted to go back and try to fix them. I guess scipy has forced my hand to at least try :smiley:. Unfortunately, the linear programming part of this has always been my weakest point with the algorithm. Also, I don't have access to my annotated copy of Held and Karp's paper at the moment but I should still have a PDF copy somewhere on my laptop.
My inclination is that the issue is not with the original problem formulation from the paper or with scipy. I most likely have a subtle error in populating the constraint matrices before they are passed to scipy. I will have time to start the investigative process tomorrow, but I don't really know how long it will take to track down the problem. Only that it will take a day or two for me to get back up to speed with the Held-Karp ascent method. | 2022-06-26T19:32:09 |
|
networkx/networkx | 5,846 | networkx__networkx-5846 | [
"5636"
] | b25c68151ca7fd5ba2d1432f017463500dafe11d | diff --git a/networkx/algorithms/bridges.py b/networkx/algorithms/bridges.py
--- a/networkx/algorithms/bridges.py
+++ b/networkx/algorithms/bridges.py
@@ -69,6 +69,9 @@ def bridges(G, root=None):
H = nx.Graph(G) if multigraph else G
chains = nx.chain_decomposition(H, root=root)
chain_edges = set(chain.from_iterable(chains))
+ H_copy = H.copy()
+ if root is not None:
+ H = H.subgraph(nx.node_connected_component(H, root)).copy()
for u, v in H.edges():
if (u, v) not in chain_edges and (v, u) not in chain_edges:
if multigraph and len(G[u][v]) > 1:
@@ -128,7 +131,7 @@ def has_bridges(G, root=None):
"""
try:
- next(bridges(G))
+ next(bridges(G, root=root))
except StopIteration:
return False
else:
diff --git a/networkx/algorithms/chains.py b/networkx/algorithms/chains.py
--- a/networkx/algorithms/chains.py
+++ b/networkx/algorithms/chains.py
@@ -141,6 +141,10 @@ def _build_chain(G, u, v, visited):
u, v = v, G.nodes[v]["parent"]
yield u, v
+ # Check if the root is in the graph G. If not, raise NodeNotFound
+ if root is not None and root not in G:
+ raise nx.NodeNotFound(f"Root node {root} is not in graph")
+
# Create a directed version of H that has the DFS edges directed
# toward the root and the nontree edges directed away from the root
# (in each connected component).
| diff --git a/networkx/algorithms/tests/test_bridges.py b/networkx/algorithms/tests/test_bridges.py
--- a/networkx/algorithms/tests/test_bridges.py
+++ b/networkx/algorithms/tests/test_bridges.py
@@ -1,5 +1,7 @@
"""Unit tests for bridge-finding algorithms."""
+import pytest
+
import networkx as nx
@@ -51,6 +53,61 @@ def test_multiedge_bridge(self):
assert list(nx.bridges(G)) == [(2, 3)]
+class TestHasBridges:
+ """Unit tests for the has bridges function."""
+
+ def test_single_bridge(self):
+ edges = [
+ # DFS tree edges.
+ (1, 2),
+ (2, 3),
+ (3, 4),
+ (3, 5),
+ (5, 6), # The only bridge edge
+ (6, 7),
+ (7, 8),
+ (5, 9),
+ (9, 10),
+ # Nontree edges.
+ (1, 3),
+ (1, 4),
+ (2, 5),
+ (5, 10),
+ (6, 8),
+ ]
+ G = nx.Graph(edges)
+ assert nx.has_bridges(G) # Default root
+ assert nx.has_bridges(G, root=1) # arbitrary root in G
+
+ def test_has_bridges_raises_root_not_in_G(self):
+ G = nx.Graph()
+ G.add_nodes_from([1, 2, 3])
+ with pytest.raises(nx.NodeNotFound):
+ nx.has_bridges(G, root=6)
+
+ def test_multiedge_bridge(self):
+ edges = [
+ (0, 1),
+ (0, 2),
+ (1, 2),
+ (1, 2),
+ (2, 3),
+ (3, 4),
+ (3, 4),
+ ]
+ G = nx.MultiGraph(edges)
+ assert nx.has_bridges(G)
+ # Make every edge a multiedge
+ G.add_edges_from([(0, 1), (0, 2), (2, 3)])
+ assert not nx.has_bridges(G)
+
+ def test_bridges_multiple_components(self):
+ G = nx.Graph()
+ nx.add_path(G, [0, 1, 2]) # One connected component
+ nx.add_path(G, [4, 5, 6]) # Another connected component
+ assert list(nx.bridges(G, root=4)) == [(4, 5), (5, 6)]
+
+
class TestLocalBridges:
"""Unit tests for the local_bridge function."""
diff --git a/networkx/algorithms/tests/test_chains.py b/networkx/algorithms/tests/test_chains.py
--- a/networkx/algorithms/tests/test_chains.py
+++ b/networkx/algorithms/tests/test_chains.py
@@ -1,6 +1,8 @@
"""Unit tests for the chain decomposition functions."""
from itertools import cycle, islice
+import pytest
+
import networkx as nx
@@ -129,3 +131,10 @@ def test_disconnected_graph_root_node(self):
assert len(chains) == len(expected)
for chain in chains:
self.assertContainsChain(chain, expected)
+
+ def test_chain_decomposition_root_not_in_G(self):
+ """Test chain decomposition when root is not in graph"""
+ G = nx.Graph()
+ G.add_nodes_from([1, 2, 3])
+ with pytest.raises(nx.NodeNotFound):
+ nx.has_bridges(G, root=6)
| Unused `root` argument in `has_bridges`
This is the last remaining issue from #5486. I've split it out into its own issue as all of the other problems identified in #5486 have been resolved.
As originally discovered by @dtekinoglu `has_bridges` defines a `root` parameter which is not used. It should be passed on to the underlying `bridges` call. There should also be some additional tests for `has_bridges` with the `root` argument - both that it works as expected when `root` is provided, and that it fails as expected when `root` is specified incorrectly (e.g. `NodeNotFound`)
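To make the requested tests concrete, a hedged sketch of the expected behavior (node labels are illustrative; it mirrors the tests added above):
```python
import networkx as nx

G = nx.Graph()
nx.add_path(G, [0, 1, 2])  # one connected component
nx.add_path(G, [4, 5, 6])  # another connected component

# `root` should restrict the search to root's component...
assert nx.has_bridges(G, root=4)
assert list(nx.bridges(G, root=4)) == [(4, 5), (5, 6)]

# ...and a root that is not in G should raise NodeNotFound
try:
    nx.has_bridges(G, root=99)
except nx.NodeNotFound as err:
    print(err)
```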
| 2022-07-06T02:33:10 |
|
networkx/networkx | 5,876 | networkx__networkx-5876 | [
"5862"
] | 2fb00bb8b9ed1e2917e5bc1aac04c558bd23c6d8 | diff --git a/networkx/algorithms/lowest_common_ancestors.py b/networkx/algorithms/lowest_common_ancestors.py
--- a/networkx/algorithms/lowest_common_ancestors.py
+++ b/networkx/algorithms/lowest_common_ancestors.py
@@ -28,12 +28,34 @@ def naive_all_pairs_lowest_common_ancestor(G, pairs=None):
The pairs of nodes of interest.
If None, will find the LCA of all pairs of nodes.
- Returns
- -------
- An iterator over ((node1, node2), lca) where (node1, node2) are
- the pairs specified and lca is a lowest common ancestor of the pair.
- Note that for the default of all pairs in G, we consider
- unordered pairs, e.g. you will not get both (b, a) and (a, b).
+ Yields
+ ------
+ ((node1, node2), lca) : 2-tuple
+ Where lca is least common ancestor of node1 and node2.
+ Note that for the default case, the order of the node pair is not considered,
+ e.g. you will not get both ``(a, b)`` and ``(b, a)``
+
+ Raises
+ ------
+ NetworkXPointlessConcept
+ If `G` is null.
+ NetworkXError
+ If `G` is not a DAG.
+
+ Examples
+ --------
+ The default behavior is to yield the lowest common ancestor for all
+ possible combinations of nodes in `G`, including self-pairings:
+
+ >>> G = nx.DiGraph([(0, 1), (0, 3), (1, 2)])
+ >>> dict(nx.naive_all_pairs_lowest_common_ancestor(G))
+ {(0, 0): 0, (0, 1): 0, (0, 3): 0, (0, 2): 0, (1, 1): 1, (1, 3): 0, (1, 2): 1, (3, 3): 3, (3, 2): 0, (2, 2): 2}
+
+ The pairs argument can be used to limit the output to only the
+ specified node pairings:
+
+ >>> dict(nx.naive_all_pairs_lowest_common_ancestor(G, pairs=[(1, 2), (2, 3)]))
+ {(1, 2): 1, (2, 3): 0}
Notes
-----
@@ -302,16 +324,32 @@ def all_pairs_lowest_common_ancestor(G, pairs=None):
The pairs of nodes of interest.
If None, will find the LCA of all pairs of nodes.
- Returns
- -------
- An iterator over ((node1, node2), lca) where (node1, node2) are
- the pairs specified and lca is a lowest common ancestor of the pair.
- Note that for the default of all pairs in G, we consider
- unordered pairs, e.g. you will not get both (b, a) and (a, b).
+ Yields
+ ------
+ ((node1, node2), lca) : 2-tuple
+ Where lca is least common ancestor of node1 and node2.
+ Note that for the default case, the order of the node pair is not considered,
+ e.g. you will not get both ``(a, b)`` and ``(b, a)``
+
+ Raises
+ ------
+ NetworkXPointlessConcept
+ If `G` is null.
+ NetworkXError
+ If `G` is not a DAG.
Examples
--------
+ The default behavior is to yield the lowest common ancestor for all
+ possible combinations of nodes in `G`, including self-pairings:
+
>>> G = nx.DiGraph([(0, 1), (0, 3), (1, 2)])
+ >>> dict(nx.all_pairs_lowest_common_ancestor(G))
+ {(2, 2): 2, (1, 1): 1, (2, 1): 1, (1, 3): 0, (2, 3): 0, (3, 3): 3, (0, 0): 0, (1, 0): 0, (2, 0): 0, (3, 0): 0}
+
+ The `pairs` argument can be used to limit the output to only the
+ specified node pairings:
+
>>> dict(nx.all_pairs_lowest_common_ancestor(G, pairs=[(1, 2), (2, 3)]))
{(2, 3): 0, (1, 2): 1}
| Update `all_pairs_lca` docstrings
Just a few potential improvements I noticed while reviewing #5736:
- [ ] The `Returns` section should actually be a `Yields` as this is a generator. This should also simplify the return description
- [ ] Add a `Raises` section to indicate when exceptions are raised (graph is undirected, graph isn't DAG, graph is empty, etc.)
- [ ] Update example to show usage both with and without the `pairs` argument.
Note that these changes should be made to the `naive` fn docstring as well.
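A hedged sketch of the behavior the updated docstrings should describe (outputs taken from the doctests in the patch above; dict ordering may vary):
```python
import networkx as nx

G = nx.DiGraph([(0, 1), (0, 3), (1, 2)])

# The function returns a generator, hence `Yields` rather than `Returns`
pairs_lca = nx.all_pairs_lowest_common_ancestor(G, pairs=[(1, 2), (2, 3)])
print(dict(pairs_lca))  # {(2, 3): 0, (1, 2): 1}

# A `Raises` case: the input must be a DAG
try:
    dict(nx.all_pairs_lowest_common_ancestor(nx.DiGraph([(0, 1), (1, 0)])))
except nx.NetworkXError as err:
    print(err)
```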
| These are all in the module: `networkx/algorithms/lowest_common_ancestors.py` | 2022-07-17T08:05:02 |
|
networkx/networkx | 5,892 | networkx__networkx-5892 | [
"5828"
] | 98060487ad192918cfc2415fc0b5c309ff2d3565 | diff --git a/networkx/algorithms/operators/binary.py b/networkx/algorithms/operators/binary.py
--- a/networkx/algorithms/operators/binary.py
+++ b/networkx/algorithms/operators/binary.py
@@ -15,14 +15,16 @@
def union(G, H, rename=(None, None)):
- """Return the union of graphs G and H.
+ """Combine graphs G and H. The names of nodes must be unique.
+
+ A name collision between the graphs will raise an exception.
+
+ A renaming facility is provided to avoid name collisions.
- Graphs G and H must be disjoint after the renaming takes place,
- otherwise an exception is raised.
Parameters
----------
- G,H : graph
+ G, H : graph
A NetworkX graph
rename : tuple , default=(None, None)
@@ -34,14 +36,23 @@ def union(G, H, rename=(None, None)):
-------
U : A union graph with the same type as G.
+ See Also
+ --------
+ compose
+ :func:`~networkx.Graph.update`
+ disjoint_union
+
Notes
-----
- To force a disjoint union with node relabeling, use
- disjoint_union(G,H) or convert_node_labels_to integers().
+ To combine graphs that have common nodes, consider compose(G, H)
+ or the method, Graph.update().
- Graph, edge, and node attributes are propagated from G and H
- to the union graph. If a graph attribute is present in both
- G and H the value from H is used.
+ disjoint_union() is similar to union() except that it avoids name clashes
+ by relabeling the nodes with sequential integers.
+
+ Edge and node attributes are propagated from G and H to the union graph.
+ Graph attributes are also propagated, but if they are present in both G and H,
+ then the value from H is used.
Examples
--------
@@ -53,17 +64,15 @@ def union(G, H, rename=(None, None)):
>>> U.edges
EdgeView([('G0', 'G1'), ('G0', 'G2'), ('G1', 'G2'), ('H0', 'H1'), ('H0', 'H3'), ('H1', 'H3'), ('H1', 'H2')])
- See Also
- --------
- disjoint_union
+
"""
return nx.union_all([G, H], rename)
def disjoint_union(G, H):
- """Return the disjoint union of graphs G and H.
+ """Combine graphs G and H. The nodes are assumed to be unique (disjoint).
- This algorithm forces distinct integer node labels.
+ This algorithm automatically relabels nodes to avoid name collisions.
Parameters
----------
@@ -74,6 +83,12 @@ def disjoint_union(G, H):
-------
U : A union graph with the same type as G.
+ See Also
+ --------
+ union
+ compose
+ :func:`~networkx.Graph.update`
+
Notes
-----
A new graph is created, of the same class as G. It is recommended
@@ -82,9 +97,15 @@ def disjoint_union(G, H):
The nodes of G are relabeled 0 to len(G)-1, and the nodes of H are
relabeled len(G) to len(G)+len(H)-1.
- Graph, edge, and node attributes are propagated from G and H
- to the union graph. If a graph attribute is present in both
- G and H the value from H is used.
+ Renumbering forces G and H to be disjoint, so no exception is ever raised for a name collision.
+ To preserve the check for common nodes, use union().
+
+ Edge and node attributes are propagated from G and H to the union graph.
+ Graph attributes are also propagated, but if they are present in both G and H,
+ then the value from H is used.
+
+ To combine graphs that have common nodes, consider compose(G, H)
+ or the method, Graph.update().
Examples
--------
@@ -262,10 +283,12 @@ def symmetric_difference(G, H):
def compose(G, H):
- """Returns a new graph of G composed with H.
+ """Compose graph G with H by combining nodes and edges into a single graph.
+
+ The node sets and edges sets do not need to be disjoint.
- Composition is the simple union of the node sets and edge sets.
- The node sets of G and H do not need to be disjoint.
+ Composing preserves the attributes of nodes and edges.
+ Attribute values from H take precedent over attribute values from G.
Parameters
----------
@@ -274,17 +297,25 @@ def compose(G, H):
Returns
-------
- C: A new graph with the same type as G
+ C: A new graph with the same type as G
+
+ See Also
+ --------
+ :func:`~networkx.Graph.update`
+ union
+ disjoint_union
Notes
-----
It is recommended that G and H be either both directed or both undirected.
- Attributes from H take precedent over attributes from G.
For MultiGraphs, the edges are identified by incident nodes AND edge-key.
This can cause surprises (i.e., edge `(1, 2)` may or may not be the same
in two graphs) if you use MultiGraph without keeping track of edge keys.
+ If combining the attributes of common nodes is not desired, consider union(),
+ which raises an exception for name collisions.
+
Examples
--------
>>> G = nx.Graph([(0, 1), (0, 2)])
| Update documentation surrounding `union`, `disjoint_union`, `compose` and `Graph.update`
This is an action item extracted from the discussion in #4208.
It's not always immediately clear to users how the various operators `union`, `disjoint_union`, and `compose` differ and for which use-cases each is appropriate. There is also the `update` method of the graph classes, which simply updates the nodes/edges of a graph in-place with those of another, which is related to the above operations. It would be an improvement to ensure that the docstrings for these functions all include a link to the `update` method.
It may also be worthwhile to update the `operators` module docstring with examples highlighting the use-cases and differences between all of these operations.
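As a starting point, a hedged sketch contrasting the operations on overlapping node sets (behavior as documented; `G.update(H)` mutates `G` in place):
```python
import networkx as nx

G = nx.Graph([(0, 1)])
H = nx.Graph([(1, 2)])                # node 1 appears in both graphs

print(nx.compose(G, H).edges)         # common nodes merged: (0, 1) and (1, 2)
print(nx.disjoint_union(G, H).nodes)  # relabeled to 0..3, nothing merged

try:
    nx.union(G, H)                    # overlapping node sets are an error here
except nx.NetworkXError as err:
    print(err)

G.update(H)                           # in-place analogue of compose
print(G.edges)                        # (0, 1) and (1, 2)
```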
| I'd like to be assigned to this issue. I've read through the discussion in https://github.com/networkx/networkx/issues/4208. I think I can clarify the difference between `union`, `disjoint_union`, `compose` and `Graph.update`.
The first time I wanted to combine two graphs, I started at `union`, which was not what I really wanted. It took me a while to find `compose`. I can leave some descriptive breadcrumbs in the "See Also" section.
Also, I can update the `operators` module so the descriptions there are consistent with the updates elsewhere.
There were some comments about improving the error message of `union`, which I could do, but perhaps that was meant to be raised in a separate issue.
Sounds good @brocla ! Just as a heads up - NetworkX typically doesn't assign issues. Generally we just ask that you check whether there is already an open PR linked to the issue (you can usually see this in the `Development` panel on the RHS of the page in GitHub). Since there isn't anything open in this case, please feel free to make a PR!
If you're interested in more tid-bits about the typical project workflow, you can check out [this FAQ](https://networkx.org/documentation/latest/developer/new_contributor_faq.html#q-i-ve-found-an-issue-i-m-interested-in-can-i-have-it-assigned-to-me) (feedback welcome of course)! | 2022-07-23T18:22:43 |
|
networkx/networkx | 5,894 | networkx__networkx-5894 | [
"5893"
] | 98060487ad192918cfc2415fc0b5c309ff2d3565 | diff --git a/networkx/classes/graph.py b/networkx/classes/graph.py
--- a/networkx/classes/graph.py
+++ b/networkx/classes/graph.py
@@ -41,6 +41,28 @@ def __set__(self, obj, value):
del od["adj"]
+class _CachedPropertyResetterNode:
+ """Data Descriptor class for _node that resets ``nodes`` cached_property when needed
+
+ This assumes that the ``cached_property`` ``G.nodes`` should be reset whenever
+ ``G._node`` is set to a new value.
+
+ This object sits on a class and ensures that any instance of that
+ class clears its cached property "nodes" whenever the underlying
+ instance attribute "_node" is set to a new object. It only affects
+ the set process of the obj._node attribute. All get/del operations
+ act as they normally would.
+
+ For info on Data Descriptors see: https://docs.python.org/3/howto/descriptor.html
+ """
+
+ def __set__(self, obj, value):
+ od = obj.__dict__
+ od["_node"] = value
+ if "nodes" in od:
+ del od["nodes"]
+
+
class Graph:
"""
Base class for undirected graphs.
@@ -282,6 +304,7 @@ class Graph:
"""
_adj = _CachedPropertyResetterAdj()
+ _node = _CachedPropertyResetterNode()
node_dict_factory = dict
node_attr_dict_factory = dict
| diff --git a/networkx/classes/tests/test_graph.py b/networkx/classes/tests/test_graph.py
--- a/networkx/classes/tests/test_graph.py
+++ b/networkx/classes/tests/test_graph.py
@@ -178,6 +178,11 @@ def test_cache_reset(self):
G._adj = {}
assert id(G.adj) != id(old_adj)
+ old_nodes = G.nodes
+ assert id(G.nodes) == id(old_nodes)
+ G._node = {}
+ assert id(G.nodes) != id(old_nodes)
+
def test_attributes_cached(self):
G = self.K3.copy()
assert id(G.nodes) == id(G.nodes)
| Critical NetworkX 2.8.X bug with mutable cached_properties
### Current Behavior
The `nodes()` method of a Graph is decorated with `@cached_property`.
This leads to the assumption that a Graph's `nodes()` method should return a static value.
This assumption is incorrect.
Notably, the `@cached_property` decorator completely breaks `Graph.subgraph()`.
Trace:
Graph.subgraph -> graphviews.subgraph_view -> [this line](https://github.com/networkx/networkx/blob/ead0e65bda59862e329f2e6f1da47919c6b07ca9/networkx/classes/graphviews.py#L149) prevents [this line](https://github.com/networkx/networkx/blob/ead0e65bda59862e329f2e6f1da47919c6b07ca9/networkx/classes/graphviews.py#L152) from functioning correctly.
Subgraphs are shown as containing the wrong nodes as a result, until `del G.nodes` is manually run.
This breaks things.
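For readers unfamiliar with the mechanics, a small generic demonstration of why a stale `functools.cached_property` causes this (standard library behavior, not NetworkX code):
```python
from functools import cached_property

class Box:
    def __init__(self, data):
        self._data = data

    @cached_property
    def view(self):
        # computed once, then served from the instance __dict__
        return list(self._data)

b = Box([1, 2])
print(b.view)   # [1, 2] -- the value is cached now
b._data = [9]   # the underlying data changes...
print(b.view)   # [1, 2] -- ...but the stale cache still wins
del b.view      # clearing the cache, as `del G.nodes` does above
print(b.view)   # [9]
```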
### Expected Behavior
`@cached_property` decorators should not break functionality.
### Steps to Reproduce
- Initialize a Graph, `G`
- Populate `G` with node objects that are complex enough that the following inequality assertion will pass: `newG = nx.freeze(G.__class__()); newG._graph = G; newG.graph = G.graph; assert(newG.nodes()!=G.nodes)`.
- Pick a subset of `G.nodes`, call this `subG_nodes`
- `subgraph = G.subgraph(subG_nodes)`
- In NetworkX 2.7, the nodes of `subgraph` will be a subset of the nodes of `G`. In NetworkX 2.8, the nodes of `subgraph` and the nodes of `G` will be fully disjoint sets.
### Environment
<!--- Please provide details about your local environment -->
Python version: 3.9.10
NetworkX version: 2.8.5 vs 2.7.0
| Thank you! This is indeed a bug. We will need to make a similar fix to the ones we made for G.adj/G.succ/G.pred
That is, add a data descriptor for G._node which resets the cache when G._node is set.
The workaround until a fix is ready is:
```
subgraph = G.subgraph(subG_nodes)
del subgraph.nodes
``` | 2022-07-25T14:18:48 |
networkx/networkx | 5,899 | networkx__networkx-5899 | [
"5787"
] | f99f1a69fea12c0d14ca6edd5efecf73a20003a7 | diff --git a/networkx/conftest.py b/networkx/conftest.py
--- a/networkx/conftest.py
+++ b/networkx/conftest.py
@@ -142,6 +142,11 @@ def set_warnings():
warnings.filterwarnings(
"ignore", category=DeprecationWarning, message="nx.nx_pydot"
)
+ warnings.filterwarnings(
+ "ignore",
+ category=DeprecationWarning,
+ message="signature change for node_link functions",
+ )
@pytest.fixture(autouse=True)
diff --git a/networkx/readwrite/json_graph/node_link.py b/networkx/readwrite/json_graph/node_link.py
--- a/networkx/readwrite/json_graph/node_link.py
+++ b/networkx/readwrite/json_graph/node_link.py
@@ -25,7 +25,9 @@ def _to_tuple(x):
return tuple(map(_to_tuple, x))
-def node_link_data(G, attrs=None):
+def node_link_data(
+ G, attrs=None, source="source", target="target", name="id", key="key", link="links"
+):
"""Returns data in node-link format that is suitable for JSON serialization
and use in Javascript documents.
@@ -45,6 +47,27 @@ def node_link_data(G, attrs=None):
If some user-defined graph data use these attribute names as data keys,
they may be silently dropped.
+ .. deprecated:: 2.8.6
+
+ The `attrs` keyword argument will be replaced with `source`, `target`, `name`,
+ `key` and `link`. in networkx 3.1
+
+ If the `attrs` keyword and the new keywords are both used in a single function call (not recommended)
+ the `attrs` keyword argument will take precedence.
+
+ The values of the keywords must be unique.
+
+ source : string
+ A string that provides the 'source' attribute name for storing NetworkX-internal graph data.
+ target : string
+ A string that provides the 'target' attribute name for storing NetworkX-internal graph data.
+ name : string
+ A string that provides the 'name' attribute name for storing NetworkX-internal graph data.
+ key : string
+ A string that provides the 'key' attribute name for storing NetworkX-internal graph data.
+ link : string
+ A string that provides the 'link' attribute name for storing NetworkX-internal graph data.
+
Returns
-------
data : dict
@@ -53,25 +76,35 @@ def node_link_data(G, attrs=None):
Raises
------
NetworkXError
- If values in attrs are not unique.
+ If the values of 'source', 'target' and 'key' are not unique.
Examples
--------
- >>> from networkx.readwrite import json_graph
>>> G = nx.Graph([("A", "B")])
- >>> data1 = json_graph.node_link_data(G)
- >>> H = nx.gn_graph(2)
- >>> data2 = json_graph.node_link_data(
- ... H, {"link": "edges", "source": "from", "target": "to"}
- ... )
+ >>> data1 = nx.node_link_data(G)
+ >>> data1
+ {'directed': False, 'multigraph': False, 'graph': {}, 'nodes': [{'id': 'A'}, {'id': 'B'}], 'links': [{'source': 'A', 'target': 'B'}]}
- To serialize with json
+ To serialize with JSON
>>> import json
>>> s1 = json.dumps(data1)
- >>> s2 = json.dumps(
- ... data2, default={"link": "edges", "source": "from", "target": "to"}
- ... )
+ >>> s1
+ '{"directed": false, "multigraph": false, "graph": {}, "nodes": [{"id": "A"}, {"id": "B"}], "links": [{"source": "A", "target": "B"}]}'
+
+ A graph can also be serialized by passing `node_link_data` as an encoder function. The two methods are equivalent.
+
+ >>> s1 = json.dumps(G, default=nx.node_link_data)
+ >>> s1
+ '{"directed": false, "multigraph": false, "graph": {}, "nodes": [{"id": "A"}, {"id": "B"}], "links": [{"source": "A", "target": "B"}]}'
+
+ The attribute names for storing NetworkX-internal graph data can
+ be specified as keyword options.
+
+ >>> H = nx.gn_graph(2)
+ >>> data2 = nx.node_link_data(H, link="edges", source="from", target="to")
+ >>> data2
+ {'directed': True, 'multigraph': False, 'graph': {}, 'nodes': [{'id': 0}, {'id': 1}], 'edges': [{'from': 1, 'to': 0}]}
Notes
-----
@@ -80,22 +113,43 @@ def node_link_data(G, attrs=None):
Attribute 'key' is only used for multigraphs.
+ To use `node_link_data` in conjunction with `node_link_graph`,
+ the keyword names for the attributes must match.
+
+
See Also
--------
node_link_graph, adjacency_data, tree_data
"""
+ # ------ TODO: Remove between the lines after signature change is complete ----- #
+ if attrs is not None:
+ import warnings
+
+ msg = (
+ "\n\nThe `attrs` keyword argument of node_link_data is deprecated\n"
+ "and will be removed in networkx 3.1. It is replaced with explicit\n"
+ "keyword arguments: `source`, `target`, `name`, `key` and `link`.\n"
+ "To make this warning go away, and ensure usage is forward\n"
+ "compatible, replace `attrs` with the keywords. "
+ "For example:\n\n"
+ " >>> node_link_data(G, attrs={'target': 'foo', 'name': 'bar'})\n\n"
+ "should instead be written as\n\n"
+ " >>> node_link_data(G, target='foo', name='bar')\n\n"
+ "in networkx 3.1.\n"
+ "The default values of the keywords will not change.\n"
+ )
+ warnings.warn(msg, DeprecationWarning, stacklevel=2)
+
+ source = attrs.get("source", "source")
+ target = attrs.get("target", "target")
+ name = attrs.get("name", "name")
+ key = attrs.get("key", "key")
+ link = attrs.get("link", "links")
+ # -------------------------------------------------- #
multigraph = G.is_multigraph()
- # Allow 'attrs' to keep default values.
- if attrs is None:
- attrs = _attrs
- else:
- attrs.update({k: v for (k, v) in _attrs.items() if k not in attrs})
- name = attrs["name"]
- source = attrs["source"]
- target = attrs["target"]
- links = attrs["link"]
+
# Allow 'key' to be omitted from attrs if the graph is not a multigraph.
- key = None if not multigraph else attrs["key"]
+ key = None if not multigraph else key
if len({source, target, key}) < 3:
raise nx.NetworkXError("Attribute names are not unique.")
data = {
@@ -105,20 +159,31 @@ def node_link_data(G, attrs=None):
"nodes": [dict(chain(G.nodes[n].items(), [(name, n)])) for n in G],
}
if multigraph:
- data[links] = [
+ data[link] = [
dict(chain(d.items(), [(source, u), (target, v), (key, k)]))
for u, v, k, d in G.edges(keys=True, data=True)
]
else:
- data[links] = [
+ data[link] = [
dict(chain(d.items(), [(source, u), (target, v)]))
for u, v, d in G.edges(data=True)
]
return data
-def node_link_graph(data, directed=False, multigraph=True, attrs=None):
+def node_link_graph(
+ data,
+ directed=False,
+ multigraph=True,
+ attrs=None,
+ source="source",
+ target="target",
+ name="id",
+ key="key",
+ link="links",
+):
"""Returns graph from node-link data format.
+ Useful for de-serialization from JSON.
Parameters
----------
@@ -139,6 +204,27 @@ def node_link_graph(data, directed=False, multigraph=True, attrs=None):
dict(source='source', target='target', name='id',
key='key', link='links')
+ .. deprecated:: 2.8.6
+
+ The `attrs` keyword argument will be replaced with the individual keywords: `source`, `target`, `name`,
+ `key` and `link`. in networkx 3.1.
+
+ If the `attrs` keyword and the new keywords are both used in a single function call (not recommended)
+ the `attrs` keyword argument will take precedence.
+
+ The values of the keywords must be unique.
+
+ source : string
+ A string that provides the 'source' attribute name for storing NetworkX-internal graph data.
+ target : string
+ A string that provides the 'target' attribute name for storing NetworkX-internal graph data.
+ name : string
+ A string that provides the 'name' attribute name for storing NetworkX-internal graph data.
+ key : string
+ A string that provides the 'key' attribute name for storing NetworkX-internal graph data.
+ link : string
+ A string that provides the 'link' attribute name for storing NetworkX-internal graph data.
+
Returns
-------
G : NetworkX graph
@@ -146,24 +232,65 @@ def node_link_graph(data, directed=False, multigraph=True, attrs=None):
Examples
--------
- >>> from networkx.readwrite import json_graph
- >>> G = nx.Graph([("A", "B")])
- >>> data = json_graph.node_link_data(G)
- >>> H = json_graph.node_link_graph(data)
+
+ Create data in node-link format by converting a graph.
+
+ >>> G = nx.Graph([('A', 'B')])
+ >>> data = nx.node_link_data(G)
+ >>> data
+ {'directed': False, 'multigraph': False, 'graph': {}, 'nodes': [{'id': 'A'}, {'id': 'B'}], 'links': [{'source': 'A', 'target': 'B'}]}
+
+ Revert data in node-link format to a graph.
+
+ >>> H = nx.node_link_graph(data)
+ >>> print(H.edges)
+ [('A', 'B')]
+
+ To serialize and deserialize a graph with JSON,
+
+ >>> import json
+ >>> d = json.dumps(node_link_data(G))
+ >>> H = node_link_graph(json.loads(d))
+ >>> print(G.edges, H.edges)
+ [('A', 'B')] [('A', 'B')]
+
Notes
-----
Attribute 'key' is only used for multigraphs.
+ To use `node_link_data` in conjunction with `node_link_graph`,
+ the keyword names for the attributes must match.
+
See Also
--------
node_link_data, adjacency_data, tree_data
"""
- # Allow 'attrs' to keep default values.
- if attrs is None:
- attrs = _attrs
- else:
- attrs.update({k: v for k, v in _attrs.items() if k not in attrs})
+ # ------ TODO: Remove between the lines after signature change is complete ----- #
+ if attrs is not None:
+ import warnings
+
+ msg = (
+ "\n\nThe `attrs` keyword argument of node_link_graph is deprecated\n"
+ "and will be removed in networkx 3.1. It is replaced with explicit\n"
+ "keyword arguments: `source`, `target`, `name`, `key` and `link`.\n"
+ "To make this warning go away, and ensure usage is forward\n"
+ "compatible, replace `attrs` with the keywords. "
+ "For example:\n\n"
+ " >>> node_link_graph(data, attrs={'target': 'foo', 'name': 'bar'})\n\n"
+ "should instead be written as\n\n"
+ " >>> node_link_graph(data, target='foo', name='bar')\n\n"
+ "in networkx 3.1.\n"
+ "The default values of the keywords will not change.\n"
+ )
+ warnings.warn(msg, DeprecationWarning, stacklevel=2)
+
+ source = attrs.get("source", "source")
+ target = attrs.get("target", "target")
+ name = attrs.get("name", "name")
+ key = attrs.get("key", "key")
+ link = attrs.get("link", "links")
+ # -------------------------------------------------- #
multigraph = data.get("multigraph", multigraph)
directed = data.get("directed", directed)
if multigraph:
@@ -172,19 +299,16 @@ def node_link_graph(data, directed=False, multigraph=True, attrs=None):
graph = nx.Graph()
if directed:
graph = graph.to_directed()
- name = attrs["name"]
- source = attrs["source"]
- target = attrs["target"]
- links = attrs["link"]
+
# Allow 'key' to be omitted from attrs if the graph is not a multigraph.
- key = None if not multigraph else attrs["key"]
+ key = None if not multigraph else key
graph.graph = data.get("graph", {})
c = count()
for d in data["nodes"]:
node = _to_tuple(d.get(name, next(c)))
nodedata = {str(k): v for k, v in d.items() if k != name}
graph.add_node(node, **nodedata)
- for d in data[links]:
+ for d in data[link]:
src = tuple(d[source]) if isinstance(d[source], list) else d[source]
tgt = tuple(d[target]) if isinstance(d[target], list) else d[target]
if not multigraph:
| diff --git a/networkx/readwrite/json_graph/tests/test_node_link.py b/networkx/readwrite/json_graph/tests/test_node_link.py
--- a/networkx/readwrite/json_graph/tests/test_node_link.py
+++ b/networkx/readwrite/json_graph/tests/test_node_link.py
@@ -6,7 +6,73 @@
from networkx.readwrite.json_graph import node_link_data, node_link_graph
+# TODO: To be removed when signature change complete
+def test_attrs_deprecation(recwarn):
+ G = nx.path_graph(3)
+
+ # No warnings when `attrs` kwarg not used
+ data = node_link_data(G)
+ H = node_link_graph(data)
+ assert len(recwarn) == 0
+
+ # Future warning raised with `attrs` kwarg
+ attrs = dict(source="source", target="target", name="id", key="key", link="links")
+ data = node_link_data(G, attrs=attrs)
+ assert len(recwarn) == 1
+
+ recwarn.clear()
+ H = node_link_graph(data, attrs=attrs)
+ assert len(recwarn) == 1
+
+
class TestNodeLink:
+
+ # TODO: To be removed when signature change complete
+ def test_custom_attrs_dep(self):
+ G = nx.path_graph(4)
+ G.add_node(1, color="red")
+ G.add_edge(1, 2, width=7)
+ G.graph[1] = "one"
+ G.graph["foo"] = "bar"
+
+ attrs = dict(
+ source="c_source",
+ target="c_target",
+ name="c_id",
+ key="c_key",
+ link="c_links",
+ )
+
+ H = node_link_graph(
+ node_link_data(G, attrs=attrs), multigraph=False, attrs=attrs
+ )
+ assert nx.is_isomorphic(G, H)
+ assert H.graph["foo"] == "bar"
+ assert H.nodes[1]["color"] == "red"
+ assert H[1][2]["width"] == 7
+
+ # provide only a partial dictionary of keywords.
+ # This is similar to an example in the doc string
+ attrs = dict(
+ link="c_links",
+ source="c_source",
+ target="c_target",
+ )
+ H = node_link_graph(
+ node_link_data(G, attrs=attrs), multigraph=False, attrs=attrs
+ )
+ assert nx.is_isomorphic(G, H)
+ assert H.graph["foo"] == "bar"
+ assert H.nodes[1]["color"] == "red"
+ assert H[1][2]["width"] == 7
+
+ # TODO: To be removed when signature change complete
+ def test_exception_dep(self):
+ with pytest.raises(nx.NetworkXError):
+ G = nx.MultiDiGraph()
+ attrs = dict(name="node", source="node", target="node", key="node")
+ node_link_data(G, attrs)
+
def test_graph(self):
G = nx.path_graph(4)
H = node_link_graph(node_link_data(G))
@@ -68,7 +134,7 @@ def test_exception(self):
with pytest.raises(nx.NetworkXError):
G = nx.MultiDiGraph()
attrs = dict(name="node", source="node", target="node", key="node")
- node_link_data(G, attrs)
+ node_link_data(G, **attrs)
def test_string_ids(self):
q = "qualité"
@@ -97,9 +163,7 @@ def test_custom_attrs(self):
link="c_links",
)
- H = node_link_graph(
- node_link_data(G, attrs=attrs), multigraph=False, attrs=attrs
- )
+ H = node_link_graph(node_link_data(G, **attrs), multigraph=False, **attrs)
assert nx.is_isomorphic(G, H)
assert H.graph["foo"] == "bar"
assert H.nodes[1]["color"] == "red"
| `node_link` functions `attrs` kwarg
`node_link_graph` and `node_link_data` have a similar API to the `cytoscape` functions, where there is an `attrs` kwarg which expects a dictionary containing specific keys that is then unpacked internally.
This API is a bit un-Pythonic and was removed in favor of keyword arguments - see the related discussion in #4199. It's worth checking if the `node_link` functions wouldn't benefit from a similar treatment.
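A hedged before/after sketch of the proposed signature change (the explicit-keyword form only exists once a change like the patch above is in place):
```python
import networkx as nx

G = nx.Graph([("A", "B")])

# old style: a dict of attribute names, unpacked internally (to be deprecated)
data_old = nx.node_link_data(G, attrs={"source": "from", "target": "to"})

# proposed style: explicit keyword arguments
data_new = nx.node_link_data(G, source="from", target="to")
```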
| Yes, IMO these functions should be changed to have 5 keyword arguments instead of a single dict containing the five values.
Thinking out loud...
Will need to:
- add keywords to the signature, while retaining `attrs`
- add a deprecation warning in:
  - the docstring
  - the functions; the code will trigger the warning when `attrs is not None`
- fix the functions to use the keyword values, instead of `attrs`, when `attrs` is None
- add tests that use keywords instead of `attrs`. These tests will keep working after `attrs` is dropped. | 2022-07-27T04:30:10 |
| 2022-07-27T04:30:10 |
networkx/networkx | 5,902 | networkx__networkx-5902 | [
"5901"
] | 28f78cfa9a386620ee1179582fda1db5ffc59f84 | diff --git a/networkx/algorithms/community/louvain.py b/networkx/algorithms/community/louvain.py
--- a/networkx/algorithms/community/louvain.py
+++ b/networkx/algorithms/community/louvain.py
@@ -179,7 +179,8 @@ def louvain_partitions(
)
improvement = True
while improvement:
- yield partition
+ # gh-5901 protect the sets in the yielded list from further manipulation here
+ yield [s.copy() for s in partition]
new_mod = modularity(
graph, inner_partition, resolution=resolution, weight="weight"
)
| diff --git a/networkx/algorithms/community/tests/test_louvain.py b/networkx/algorithms/community/tests/test_louvain.py
--- a/networkx/algorithms/community/tests/test_louvain.py
+++ b/networkx/algorithms/community/tests/test_louvain.py
@@ -2,6 +2,7 @@
from networkx.algorithms.community import (
is_partition,
louvain_communities,
+ louvain_partitions,
modularity,
partition_quality,
)
@@ -30,7 +31,7 @@ def test_valid_partition():
assert is_partition(H, partition2)
-def test_partition():
+def test_karate_club_partition():
G = nx.karate_club_graph()
part = [
{0, 1, 2, 3, 7, 9, 11, 12, 13, 17, 19, 21},
@@ -43,6 +44,18 @@ def test_partition():
assert part == partition
+def test_partition_iterator():
+ G = nx.path_graph(15)
+ parts_iter = louvain_partitions(G, seed=42)
+ first_part = next(parts_iter)
+ first_copy = [s.copy() for s in first_part]
+
+ # gh-5901 reports sets changing after next partition is yielded
+ assert first_copy[0] == first_part[0]
+ second_part = next(parts_iter)
+ assert first_copy[0] == first_part[0]
+
+
def test_directed_partition():
"""
Test 2 cases that were looping infinitely
| louvain_partitions iteration difference
louvain_partitions leads to different clusterings depending on how one iterates over the returned generator
### Current Behavior
If I iterate over the generator returned from louvain_partitions with a for loop, each iteration gives the expected clustering levels.
However, if I first create a list (or tuple, deque, ...) and then iterate over the list, I get different results. In fact, some of the clusters in the levels are empty when I iterate over the list.
### Expected Behavior
I would expect that no matter how I iterate over the resulting generator I would get the same results.
### Steps to Reproduce
```
import networkx as nx
import networkx.algorithms.community as nx_comm
G = nx.path_graph(200)
def evaluate(part):
    print(len(part))
    count = 0
    for nodes in part:
        if len(nodes) == 0:
            count += 1
    print(f"\tEmpty Clusters: {count}")

louvain_clustering = list(nx_comm.louvain_partitions(G, resolution=1, seed=20))
for part in louvain_clustering:
    evaluate(part)

print("----")

for part in nx_comm.louvain_partitions(G, resolution=1, seed=20):
    evaluate(part)
```
Output
```
90
Empty Clusters: 76
41
Empty Clusters: 27
17
Empty Clusters: 3
14
Empty Clusters: 0
----
90
Empty Clusters: 0
41
Empty Clusters: 0
17
Empty Clusters: 0
14
Empty Clusters: 0
```
### Environment
<!--- Please provide details about your local environment -->
Python version: 3.10.5
NetworkX version: 2.8.5
| Thank you very much for reporting this!!
This is a very strange and ugly bug. It stems from the function yielding a list-of-sets without copying those sets. When the next partition is requested, the sets are updated while computing the next partition. So the originally yielded partition gets changed while computing the next partition. This doesn't show up if you process each partition before iterating to the next partition. Here's a test that fails due to this bug.
```python
def test_partition_iterator():
G = nx.path_graph(15)
parts_iter = nx.community.louvain_partitions(G, seed=42)
first_part = next(parts_iter)
first_copy = [s.copy() for s in first_part]
# gh-5901 reports sets changing after next partition is yielded
assert first_copy[0] == first_part[0]
second_part = next(parts_iter)
assert first_copy[0] == first_part[0]
``` | 2022-07-29T00:16:14 |
networkx/networkx | 5,903 | networkx__networkx-5903 | [
"5896"
] | 28f78cfa9a386620ee1179582fda1db5ffc59f84 | diff --git a/networkx/relabel.py b/networkx/relabel.py
--- a/networkx/relabel.py
+++ b/networkx/relabel.py
@@ -114,9 +114,13 @@ def relabel_nodes(G, mapping, copy=True):
--------
convert_node_labels_to_integers
"""
- # you can pass a function f(old_label)->new_label
+ # you can pass a function f(old_label) -> new_label
+ # or a class e.g. str(old_label) -> new_label
# but we'll just make a dictionary here regardless
- if not hasattr(mapping, "__getitem__"):
+ # To allow classes, we check if __getitem__ is a bound method using __self__
+ if not (
+ hasattr(mapping, "__getitem__") and hasattr(mapping.__getitem__, "__self__")
+ ):
m = {n: mapping(n) for n in G}
else:
m = mapping
| diff --git a/networkx/tests/test_relabel.py b/networkx/tests/test_relabel.py
--- a/networkx/tests/test_relabel.py
+++ b/networkx/tests/test_relabel.py
@@ -106,6 +106,12 @@ def mapping(n):
H = nx.relabel_nodes(G, mapping)
assert nodes_equal(H.nodes(), [65, 66, 67, 68])
+ def test_relabel_nodes_classes(self):
+ G = nx.empty_graph()
+ G.add_edges_from([(0, 1), (0, 2), (1, 2), (2, 3)])
+ H = nx.relabel_nodes(G, str)
+ assert nodes_equal(H.nodes, ["0", "1", "2", "3"])
+
def test_relabel_nodes_graph(self):
G = nx.Graph([("A", "B"), ("A", "C"), ("B", "C"), ("C", "D")])
mapping = {"A": "aardvark", "B": "bear", "C": "cat", "D": "dog"}
| `relabel_nodes` does not work for callables with `__getitem__`
<!-- If you have a general question about NetworkX, please use the discussions tab to create a new discussion -->
<!--- Provide a general summary of the issue in the Title above -->
`relabel_nodes` accepts mappings in the form of functions. It differentiates a function from a mapping by testing `not hasattr(mapping, "__getitem__")`. This test is ambiguous for callables that do possess `__getitem__`, e.g. `str`.
Would it be better to test `isinstance(mapping, typing.Mapping)` and document accordingly?
### Current Behavior
<!--- Tell us what happens instead of the expected behavior -->
```python
import networkx as nx
G = nx.DiGraph()
G.add_nodes_from([1, 2])
# works
# nx.relabel_nodes(G, lambda x: str(x))
# error
nx.relabel_nodes(G, str)
```
```
Traceback (most recent call last):
File "...\networkx_2_8_4_relabel_callable.py", line 10, in <module>
nx.relabel_nodes(G, str)
File "...\lib\site-packages\networkx\relabel.py", line 121, in relabel_nodes
return _relabel_copy(G, m)
File "...\lib\site-packages\networkx\relabel.py", line 193, in _relabel_copy
H.add_nodes_from(mapping.get(n, n) for n in G)
File "...\lib\site-packages\networkx\classes\digraph.py", line 519, in add_nodes_from
for n in nodes_for_adding:
File "...\lib\site-packages\networkx\relabel.py", line 193, in <genexpr>
H.add_nodes_from(mapping.get(n, n) for n in G)
AttributeError: type object 'str' has no attribute 'get'
```
### Expected Behavior
<!--- Tell us what should happen -->
`relabel_nodes` works for callables.
### Steps to Reproduce
<!--- Provide a minimal example that reproduces the bug -->
See current behavior.
### Environment
<!--- Please provide details about your local environment -->
Python version: 3.9
NetworkX version: 2.8.5
### Additional context
<!--- Add any other context about the problem here, screenshots, etc. -->
| This is a good point. We want to identify if something has attribute `__getitem__` (which could be a dict/list/etc) or if it has attribute `__call__` (any function, class, many other objects). But what do we choose to do when it can do both? In terms of abc classes they are a Callable, Sequence, or Mapping. Our code prioritizes mappings specified via lookup over callables. So if something can do both, we use it for lookup rather than calling. I'm not sure I can argue for which is better; lookup-first preserves backward compatibility. Your suggestion would prioritize calling over lookup.
But your example shows that it is even more complicated than the story above because classes have method-attributes even though they are not bound methods. In your example `str.__getitem__` exists, but it is a method defined on the class, requiring an instance before lookup is possible. It is not bound to an instance. This suggests to me that our check for `__getitem__` could be improved. The idea is to check for an attribute `__getitem__` first, but to also make sure it is a bound method. Bound methods are indicated by the `__self__` attribute.
So we could change the check for `__getitem__` to a double-hasattr check:
```python
# check for bound method __getitem__
if not (hasattr(mapping, "__getitem__") and hasattr(mapping.__getitem__, "__self__"))
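# Added demonstration (not in the original comment): `str` passes the plain
# hasattr test but fails the bound-method test, while a dict instance passes both.
# assert hasattr(str, "__getitem__")                   -> True
# assert not hasattr(str.__getitem__, "__self__")      -> True (defined on the class, unbound)
# assert hasattr({}.__getitem__, "__self__")           -> True (bound to a dict instance)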
``` | 2022-07-29T02:02:29 |
networkx/networkx | 5,921 | networkx__networkx-5921 | [
"5681"
] | 4a019f04d0e304ecd2f28b15d854e1282e03461d | diff --git a/networkx/algorithms/approximation/treewidth.py b/networkx/algorithms/approximation/treewidth.py
--- a/networkx/algorithms/approximation/treewidth.py
+++ b/networkx/algorithms/approximation/treewidth.py
@@ -98,22 +98,23 @@ def __init__(self, graph):
# nodes that have to be updated in the heap before each iteration
self._update_nodes = []
- self._degreeq = [] # a heapq with 2-tuples (degree,node)
+ self._degreeq = [] # a heapq with 3-tuples (degree,unique_id,node)
+ self.count = itertools.count()
# build heap with initial degrees
for n in graph:
- self._degreeq.append((len(graph[n]), n))
+ self._degreeq.append((len(graph[n]), next(self.count), n))
heapify(self._degreeq)
def best_node(self, graph):
# update nodes in self._update_nodes
for n in self._update_nodes:
# insert changed degrees into degreeq
- heappush(self._degreeq, (len(graph[n]), n))
+ heappush(self._degreeq, (len(graph[n]), next(self.count), n))
# get the next valid (minimum degree) node
while self._degreeq:
- (min_degree, elim_node) = heappop(self._degreeq)
+ (min_degree, _, elim_node) = heappop(self._degreeq)
if elim_node not in graph or len(graph[elim_node]) != min_degree:
# outdated entry in degreeq
continue
@@ -145,16 +146,15 @@ def min_fill_in_heuristic(graph):
min_fill_in = sys.maxsize
- # create sorted list of (degree, node)
- degree_list = [(len(graph[node]), node) for node in graph]
- degree_list.sort()
+ # sort nodes by degree
+ nodes_by_degree = sorted(graph, key=lambda x: len(graph[x]))
+ min_degree = len(graph[nodes_by_degree[0]])
- # abort condition
- min_degree = degree_list[0][0]
+ # abort condition (handle complete graph)
if min_degree == len(graph) - 1:
return None
- for (_, node) in degree_list:
+ for node in nodes_by_degree:
num_fill_in = 0
nbrs = graph[node]
for nbr in nbrs:
| diff --git a/networkx/algorithms/approximation/tests/test_treewidth.py b/networkx/algorithms/approximation/tests/test_treewidth.py
--- a/networkx/algorithms/approximation/tests/test_treewidth.py
+++ b/networkx/algorithms/approximation/tests/test_treewidth.py
@@ -132,13 +132,16 @@ def test_empty_graph(self):
_, _ = treewidth_min_degree(G)
def test_two_component_graph(self):
- """Test empty graph"""
G = nx.Graph()
G.add_node(1)
G.add_node(2)
treewidth, _ = treewidth_min_degree(G)
assert treewidth == 0
+ def test_not_sortable_nodes(self):
+ G = nx.Graph([(0, "a")])
+ treewidth_min_degree(G)
+
def test_heuristic_first_steps(self):
"""Test first steps of min_degree heuristic"""
graph = {
@@ -237,13 +240,16 @@ def test_empty_graph(self):
_, _ = treewidth_min_fill_in(G)
def test_two_component_graph(self):
- """Test empty graph"""
G = nx.Graph()
G.add_node(1)
G.add_node(2)
treewidth, _ = treewidth_min_fill_in(G)
assert treewidth == 0
+ def test_not_sortable_nodes(self):
+ G = nx.Graph([(0, "a")])
+ treewidth_min_fill_in(G)
+
def test_heuristic_first_steps(self):
"""Test first steps of min_fill_in heuristic"""
graph = {
| treewidth algos depend on node types
The treewidth functions appear to depend on the types of nodes in the input graph.
### Current Behavior
For example, the algos fail if there is a mix of nodes with integer and string types.
### Expected Behavior
The algos should not depend on node types, only the graph structure.
### Steps to Reproduce
```python
import networkx as nx
from networkx.algorithms.approximation import treewidth
G = nx.Graph()
G.add_nodes_from([0, 'a'])
tw, td = treewidth.treewidth_min_degree(G)
# or
tw, td = treewidth.treewidth_min_fill_in(G)
```
### Environment
Python version: 3.8
NetworkX version: 2.8
### Additional context
Error:
File "<somedir>/lib/python3.8/site-packages/networkx/algorithms/approximation/treewidth.py", line 106, in __init__
heapify(self._degreeq)
TypeError: '<' not supported between instances of 'int' and 'str'
| Thanks for the report @harristeague , I can reproduce this as well.
The problem occurs while using `heapify`. So we are providing the object to `heapify` in a way that allows comparing the nodes (perhaps only when ties in the heap priority/weight values occur). We could use some sort of unique identifier ahead of the node in the tuple put onto the heap. | 2022-08-08T11:36:04 |
networkx/networkx | 5,930 | networkx__networkx-5930 | [
"5681"
] | 19c1454d3dfa70a893ea67f2d78515658e8c08e5 | diff --git a/networkx/algorithms/dag.py b/networkx/algorithms/dag.py
--- a/networkx/algorithms/dag.py
+++ b/networkx/algorithms/dag.py
@@ -304,12 +304,27 @@ def topological_sort(G):
def lexicographical_topological_sort(G, key=None):
- """Returns a generator of nodes in lexicographically topologically sorted
- order.
+ """Generate the nodes in the unique lexicographical topological sort order.
+
+ Generates a unique ordering of nodes by first sorting topologically (for which there are often
+ multiple valid orderings) and then additionally by sorting lexicographically.
+
+ A topological sort arranges the nodes of a directed graph so that the
+ upstream node of each directed edge precedes the downstream node.
+ It is always possible to find a solution for directed graphs that have no cycles.
+ There may be more than one valid solution.
+
+ Lexicographical sorting is just sorting alphabetically. It is used here to break ties in the
+ topological sort and to determine a single, unique ordering. This can be useful in comparing
+ sort results.
+
+ The lexicographical order can be customized by providing a function to the `key=` parameter.
+ The definition of the key function is the same as used in python's built-in `sort()`.
+ The function takes a single argument and returns a key to use for sorting purposes.
+
+ Lexicographical sorting can fail if the node names are un-sortable. See the example below.
+ The solution is to provide a function to the `key=` argument that returns sortable keys.
- A topological sort is a nonunique permutation of the nodes such that an
- edge from u to v implies that u appears before v in the topological sort
- order.
Parameters
----------
@@ -317,13 +332,13 @@ def lexicographical_topological_sort(G, key=None):
A directed acyclic graph (DAG)
key : function, optional
- This function maps nodes to keys with which to resolve ambiguities in
- the sort order. Defaults to the identity function.
+ A function of one argument that converts a node name to a comparison key.
+ It defines and resolves ambiguities in the sort order. Defaults to the identity function.
Yields
------
nodes
- Yields the nodes in lexicographical topological sort order.
+ Yields the nodes of G in lexicographical topological sort order.
Raises
------
@@ -339,6 +354,10 @@ def lexicographical_topological_sort(G, key=None):
RuntimeError
If `G` is changed while the returned iterator is being processed.
+ TypeError
+ Results from un-sortable node names.
+ Consider using `key=` parameter to resolve ambiguities in the sort order.
+
Examples
--------
>>> DG = nx.DiGraph([(2, 1), (2, 5), (1, 3), (1, 4), (5, 4)])
@@ -347,6 +366,25 @@ def lexicographical_topological_sort(G, key=None):
>>> list(nx.lexicographical_topological_sort(DG, key=lambda x: -x))
[2, 5, 1, 4, 3]
+    The sort will fail for any graph with integer and string nodes. Comparison of integers to strings
+ is not defined in python. Is 3 greater or less than 'red'?
+
+ >>> DG = nx.DiGraph([(1, 'red'), (3, 'red'), (1, 'green'), (2, 'blue')])
+ >>> list(nx.lexicographical_topological_sort(DG))
+ Traceback (most recent call last):
+ ...
+ TypeError: '<' not supported between instances of 'str' and 'int'
+ ...
+
+ Incomparable nodes can be resolved using a `key` function. This example function
+ allows comparison of integers and strings by returning a tuple where the first
+ element is True for `str`, False otherwise. The second element is the node name.
+ This groups the strings and integers separately so they can be compared only among themselves.
+
+ >>> key = lambda node: (isinstance(node, str), node)
+ >>> list(nx.lexicographical_topological_sort(DG, key=key))
+ [1, 2, 3, 'blue', 'green', 'red']
+
Notes
-----
This algorithm is based on a description and proof in
@@ -391,7 +429,12 @@ def create_tuple(node):
except KeyError as err:
raise RuntimeError("Graph changed during iteration") from err
if indegree_map[child] == 0:
- heapq.heappush(zero_indegree, create_tuple(child))
+ try:
+ heapq.heappush(zero_indegree, create_tuple(child))
+ except TypeError as err:
+ raise TypeError(
+ f"{err}\nConsider using `key=` parameter to resolve ambiguities in the sort order."
+ )
del indegree_map[child]
yield node
| treewidth algos depend on node types
The treewidth functions appear to depend on the types of nodes in the input graph.
### Current Behavior
For example, the algos fail if there is a mix of nodes with integer and string types.
### Expected Behavior
The algos should not depend on node types, only the graph structure.
### Steps to Reproduce
```python
import networkx as nx
from networkx.algorithms.approximation import treewidth
G = nx.Graph()
G.add_nodes_from([0, 'a'])
tw, td = treewidth.treewidth_min_degree(G)
# or
tw, td = treewidth.treewidth_min_fill_in(G)
```
### Environment
Python version: 3.8
NetworkX version: 2.8
### Additional context
Error:
File "<somedir>/lib/python3.8/site-packages/networkx/algorithms/approximation/treewidth.py", line 106, in __init__
heapify(self._degreeq)
TypeError: '<' not supported between instances of 'int' and 'str'
| Thanks for the report @harristeague , I can reproduce this as well.
The problem occurs while using `heapify`. So we are providing the object to `heapify` in a way that allows comparing the nodes (perhaps only when ties in the heap priority/weight values occur). We could use some sort of unique identifier ahead of the node in the tuple put onto the heap.
The same error occurs in:
- `lexicographical_topological_sort` in `algorithms/graphical.py`
- `MappedQueue` in `utils/mapped_queue.py`
which also use `heapify`:
> TypeError: '<' not supported between instances of 'int' and 'str'
Thanks for those comments @brocla !!
Hmmm... I thought the `create_tuple` function (line 375) in `dag.py` avoids the issue for `lexicographical_topological_sort`. But that is in `algorithms/dag.py` not `algorithms/graphical.py`. Perhaps I am missing something?
`MappedQueue` is a helper class that lets you put whatever you need to on the queue -- with the responsibility falling on the developer using it to make sure it is sortable. But again, maybe I am missing the place you are referring to.
Do you have examples that don't work in those functions? Or is this comment based on looking at code?
Hello @dschult . I should have included the examples when I had them the first time. Here is how I triggered the `TypeError`.
```
>>> G = nx.path_graph(5, create_using=nx.DiGraph)
>>> G.add_edge(2, 'a')
>>> G.edges
OutEdgeView([(0, 1), (1, 2), (2, 3), (2, 'a'), (3, 4)])
>>> list(nx.lexicographical_topological_sort(G))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/brocla/python/networkx/networkx/algorithms/dag.py", line 394, in lexicographical_topological_sort
heapq.heappush(zero_indegree, create_tuple(child))
TypeError: '<' not supported between instances of 'str' and 'int'
```
```
>>> nx.utils.mapped_queue.MappedQueue([916, 50, 'a', 493, 237])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/brocla/python/networkx/networkx/utils/mapped_queue.py", line 132, in __init__
self._heapify()
File "/home/brocla/python/networkx/networkx/utils/mapped_queue.py", line 136, in _heapify
heapq.heapify(self.heap)
TypeError: '<' not supported between instances of 'int' and 'str'
```
BTW, `heapify` is also used in these files, but I was not able to coax them into a `TypeError`. They seem to be sorting on something besides nodes.
- `algorithms/dag.py`
- `generators/degree_seq.py`
Thanks for this! It took me a little while to recall how those two functions handle sorting. We've got it fixed so most of the time it works fine. But your examples show how our unprotected inputs can lead to problems. It would be very helpful to have better documentation, and examples to show how to handle it.
Issues:
- `lexicographical_topological_sort` has an optional second argument `key` which (just like min/max) takes a function that returns the sort value given the node. This is only needed to break ties in the lexicographical ordering. So, if a TypeError is raised due to mixed node types, the user should provide a `key` function that enables comparison of the nodes. For example:
```python
G = nx.path_graph([0, 1, 'a' ,2])
node_order = {node: i for i, node in enumerate(G)}
nx.lexicographical_topological_sort(G, key=lambda x: node_order[x])
```
We should include an example like this one. There are many other improvements needed in that doc_string. For example, the leading paragraph description (after the one-line description) explains what a topological sort is, but not anything about a lexicographical topological sort. We should also add the TypeError as one of the exceptions that are raised, with a hint to provide a `key` input that returns values that are sortable.
- `MappedQueue` is a priority queue that sorts the items by priority. The priority values are usually provided via a dict keyed by element to the priority of that element. But you can also provide an iterator of elements as input, in which case the elements are also the priorities. To fix the example you show, the user should input a dict holding priorities rather than a list of elements. But that is not clear from the documentation. An example should be included saying that unsortable elements will cause a TypeError and showing an example of how to handle that similar to
```python
inputs = [243, 'a', 213]
node_order = {node: i for i, node in enumerate(inputs)}
M = nx.MappedQueue(node_order)
```
This doc_string does not say much at all. It should at least include a sentence that unsortable elements should have a priority assigned to them upon input via using a dict. Otherwise a TypeError will occur when the comparisons are made. And then include an example of how to make a dict to avoid this problem.
Handling `heapify` library wide:
- Each place we use heapify, we make the elements into tuples which protect the nodes from ever being compared when the tuple-elements that contain the nodes are compared. This is done by putting a unique value in the tuple before the node, e.g. `element = (priority, node_id, node)`. So long as the node_id is unique, comparing tuples will never compare the raw nodes themselves. That is why heapify doesn't lead to a TypeError in those other functions in dag.py and generators. (A short sketch of this trick follows this list.)
- In the two cases you exposed, we have provided the flexibility to use the nodes themselves. The freedom comes with responsibility to keep yourself safe. But it is hard to do that without the information about how to do that. So, we need to update the doc_strings.
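A minimal sketch of the tuple trick from the first bullet (illustrative only, not the NetworkX source):
```python
import heapq
import itertools

unique_id = itertools.count()
heap = []
for priority, node in [(1, "a"), (1, 0), (2, 3)]:
    # The unique counter breaks priority ties, so the unsortable nodes
    # themselves are never compared and mixed types cannot raise TypeError.
    heapq.heappush(heap, (priority, next(unique_id), node))
while heap:
    priority, _, node = heapq.heappop(heap)
    print(priority, node)
```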
Hi @dschult . Thanks for the clear explanations. It helps me understand the code and the intentions when it was written. Great.
I'd like to draft an update to the docstrings. I'll do them one at a time, `lexicographical_topological_sort` first.
#### Here is what I think I heard...
- Better explanations for the short and long descriptions. Describe lexicographical and topological without using those words.
- Add an explanation of how `key=` solves the un-sortable problem.
- Put a hint in the exception statement that points the user to the `key=` argument.
- Add an example that shows how to solve the un-sortable problem
#### A thought about the example...
Using `enumerate` to generate sortable keys also means that there won't be any lexicographic aspect to the order.
The node positions that are not determined by the topological sort will just be in the order they are stored in the Graph.
What if the example used this key?
```python
>>> sorted([2, 'c', 1, 'b', 3, 'a'], key=lambda node: (isinstance(node, str), node))
[1, 2, 3, 'a', 'b', 'c']
```
It is not general enough to solve everyone's sorting problems, but good enough that the example will demonstrate lexi behavior.
That list of suggested changes looks good to me.
And I agree that the example you make by putting all strings behind every non-string works for the case of a graph with nodes that are integers or strings. I think that is good enough -- anyone with a more difficult sorting case will have to figure it out. After all, lexicographical implies that the nodes themselves are sortable somehow. :} | 2022-08-15T13:00:31 |
|
networkx/networkx | 5,988 | networkx__networkx-5988 | [
"5987"
] | 1ce75f0f3604abd0551fa9baf20c65c3747fb328 | diff --git a/networkx/algorithms/dag.py b/networkx/algorithms/dag.py
--- a/networkx/algorithms/dag.py
+++ b/networkx/algorithms/dag.py
@@ -1006,7 +1006,15 @@ def dag_longest_path(G, weight="weight", default_weight=1, topo_order=None):
dist = {} # stores {v : (length, u)}
for v in topo_order:
us = [
- (dist[u][0] + data.get(weight, default_weight), u)
+ (
+ dist[u][0]
+ + (
+ max(data.values(), key=lambda x: x.get(weight, default_weight))
+ if G.is_multigraph()
+ else data
+ ).get(weight, default_weight),
+ u,
+ )
for u, data in G.pred[v].items()
]
@@ -1068,8 +1076,13 @@ def dag_longest_path_length(G, weight="weight", default_weight=1):
"""
path = nx.dag_longest_path(G, weight, default_weight)
path_length = 0
- for (u, v) in pairwise(path):
- path_length += G[u][v].get(weight, default_weight)
+ if G.is_multigraph():
+ for u, v in pairwise(path):
+ i = max(G[u][v], key=lambda x: G[u][v][x].get(weight, default_weight))
+ path_length += G[u][v][i].get(weight, default_weight)
+ else:
+ for (u, v) in pairwise(path):
+ path_length += G[u][v].get(weight, default_weight)
return path_length
| diff --git a/networkx/algorithms/tests/test_dag.py b/networkx/algorithms/tests/test_dag.py
--- a/networkx/algorithms/tests/test_dag.py
+++ b/networkx/algorithms/tests/test_dag.py
@@ -60,6 +60,31 @@ def test_unorderable_nodes(self):
# this will raise NotImplementedError when nodes need to be ordered
nx.dag_longest_path(G)
+ def test_multigraph_unweighted(self):
+ edges = [(1, 2), (2, 3), (2, 3), (3, 4), (4, 5), (1, 3), (1, 5), (3, 5)]
+ G = nx.MultiDiGraph(edges)
+ assert nx.dag_longest_path(G) == [1, 2, 3, 4, 5]
+
+ def test_multigraph_weighted(self):
+ G = nx.MultiDiGraph()
+ edges = [
+ (1, 2, 2),
+ (2, 3, 2),
+ (1, 3, 1),
+ (1, 3, 5),
+ (1, 3, 2),
+ ]
+ G.add_weighted_edges_from(edges)
+ assert nx.dag_longest_path(G) == [1, 3]
+
+ def test_multigraph_weighted_default_weight(self):
+ G = nx.MultiDiGraph([(1, 2), (2, 3)]) # Unweighted edges
+ G.add_weighted_edges_from([(1, 3, 1), (1, 3, 5), (1, 3, 2)])
+
+ # Default value for default weight is 1
+ assert nx.dag_longest_path(G) == [1, 3]
+ assert nx.dag_longest_path(G, default_weight=3) == [1, 2, 3]
+
class TestDagLongestPathLength:
"""Unit tests for computing the length of a longest path in a
@@ -91,6 +116,23 @@ def test_weighted(self):
G.add_weighted_edges_from(edges)
assert nx.dag_longest_path_length(G) == 5
+ def test_multigraph_unweighted(self):
+ edges = [(1, 2), (2, 3), (2, 3), (3, 4), (4, 5), (1, 3), (1, 5), (3, 5)]
+ G = nx.MultiDiGraph(edges)
+ assert nx.dag_longest_path_length(G) == 4
+
+ def test_multigraph_weighted(self):
+ G = nx.MultiDiGraph()
+ edges = [
+ (1, 2, 2),
+ (2, 3, 2),
+ (1, 3, 1),
+ (1, 3, 5),
+ (1, 3, 2),
+ ]
+ G.add_weighted_edges_from(edges)
+ assert nx.dag_longest_path_length(G) == 5
+
class TestDAG:
@classmethod
| Weighted MultiDiGraphs never use weights in dag_longest_path and dag_longest_path_length
### Current Behavior
Given any MultiDiGraph, using dag_longest_path will always evaluate using the default_weight keyword argument.
This is because dag_longest_path uses `G.pred[v].items()` to grab the data dictionary, but for a MultiDiGraph the data dictionary is embedded inside another dictionary keyed by the edge key. When dag_longest_path calls `data.get(weight, default_weight)` on that outer dictionary, it never finds the weight key and silently falls back to default_weight instead.
dag_longest_path_length also calls dict.get() on the wrong dictionary, making it return bad results for weighted MultiDiGraphs even if dag_longest_path returns the correct path.
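A small illustration of the extra nesting level (hypothetical snippet added for clarity; not part of the original report):
```
import networkx as nx

DG = nx.DiGraph([("A", "B", {"cost": 5})])
MDG = nx.MultiDiGraph([("A", "B", {"cost": 5})])
print(DG["A"]["B"])   # {'cost': 5}       -> .get("cost") finds the weight
print(MDG["A"]["B"])  # {0: {'cost': 5}}  -> keyed by edge key first, so .get("cost") misses
```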
### Expected Behavior
A MultiDiGraph should either evaluate correctly using weights or raise a NotImplementedError.
### Steps to Reproduce
```
MDG = nx.MultiDiGraph([("A", "B", {"cost": 5}), ("B", "C", {"cost": 10}), ("A", "C", {"cost": 40})])
print(nx.dag_longest_path(MDG, weight="cost"), nx.dag_longest_path_length(MDG, weight="cost"))
# prints ['A', 'B', 'C'] 2
# should be ['A', 'C'] 40
```
Incorrect paths are especially noticeable when a weight key is given, but default_weight is set to 0, as it will always return a list containing only the starting node.
### Environment
Python version: 3.10
NetworkX version: 2.8.5
### Additional context
I have a fix for this issue ready if you would like the functions to support MultiDiGraphs.
| 2022-09-17T21:14:45 |
|
networkx/networkx | 6,041 | networkx__networkx-6041 | [
"5980"
] | ce692bd3f05900608b829b983838d099b378ca8f | diff --git a/networkx/readwrite/json_graph/adjacency.py b/networkx/readwrite/json_graph/adjacency.py
--- a/networkx/readwrite/json_graph/adjacency.py
+++ b/networkx/readwrite/json_graph/adjacency.py
@@ -149,9 +149,9 @@ def adjacency_graph(data, directed=False, multigraph=True, attrs=_attrs):
target = target_data.pop(id_)
if not multigraph:
graph.add_edge(source, target)
- graph[source][target].update(tdata)
+ graph[source][target].update(target_data)
else:
ky = target_data.pop(key, None)
graph.add_edge(source, target, key=ky)
- graph[source][target][ky].update(tdata)
+ graph[source][target][ky].update(target_data)
return graph
| diff --git a/networkx/readwrite/json_graph/tests/test_adjacency.py b/networkx/readwrite/json_graph/tests/test_adjacency.py
--- a/networkx/readwrite/json_graph/tests/test_adjacency.py
+++ b/networkx/readwrite/json_graph/tests/test_adjacency.py
@@ -1,16 +1,18 @@
+import copy
import json
import pytest
import networkx as nx
from networkx.readwrite.json_graph import adjacency_data, adjacency_graph
+from networkx.utils import graphs_equal
class TestAdjacency:
def test_graph(self):
G = nx.path_graph(4)
H = adjacency_graph(adjacency_data(G))
- assert nx.is_isomorphic(G, H)
+ assert graphs_equal(G, H)
def test_graph_attributes(self):
G = nx.path_graph(4)
@@ -20,12 +22,14 @@ def test_graph_attributes(self):
G.graph[1] = "one"
H = adjacency_graph(adjacency_data(G))
+ assert graphs_equal(G, H)
assert H.graph["foo"] == "bar"
assert H.nodes[1]["color"] == "red"
assert H[1][2]["width"] == 7
d = json.dumps(adjacency_data(G))
H = adjacency_graph(json.loads(d))
+ assert graphs_equal(G, H)
assert H.graph["foo"] == "bar"
assert H.graph[1] == "one"
assert H.nodes[1]["color"] == "red"
@@ -36,7 +40,7 @@ def test_digraph(self):
nx.add_path(G, [1, 2, 3])
H = adjacency_graph(adjacency_data(G))
assert H.is_directed()
- assert nx.is_isomorphic(G, H)
+ assert graphs_equal(G, H)
def test_multidigraph(self):
G = nx.MultiDiGraph()
@@ -44,15 +48,29 @@ def test_multidigraph(self):
H = adjacency_graph(adjacency_data(G))
assert H.is_directed()
assert H.is_multigraph()
+ assert graphs_equal(G, H)
def test_multigraph(self):
G = nx.MultiGraph()
G.add_edge(1, 2, key="first")
G.add_edge(1, 2, key="second", color="blue")
H = adjacency_graph(adjacency_data(G))
- assert nx.is_isomorphic(G, H)
+ assert graphs_equal(G, H)
assert H[1][2]["second"]["color"] == "blue"
+ def test_input_data_is_not_modified_when_building_graph(self):
+ G = nx.path_graph(4)
+ input_data = adjacency_data(G)
+ orig_data = copy.deepcopy(input_data)
+ # Ensure input is unmodified by deserialisation
+ assert graphs_equal(G, adjacency_graph(input_data))
+ assert input_data == orig_data
+
+ def test_adjacency_form_json_serialisable(self):
+ G = nx.path_graph(4)
+ H = adjacency_graph(json.loads(json.dumps(adjacency_data(G))))
+ assert graphs_equal(G, H)
+
def test_exception(self):
with pytest.raises(nx.NetworkXError):
G = nx.MultiDiGraph()
| Deserialisation artifacts in adjacency_graph
### Current Behavior
Serialising and deserialising a Graph using the matched pair json_graph.adjacency_data and json_graph.adjacency_graph produces a graph which is not equal to the incoming graph using the graphs_equal method.
This is because adjacency.py:152 and adjacency.py:156 set the edge attributes to a dictionary that still contains the successor node of the edge, rather than to the copy from which the id has been popped:
```
for i, d in enumerate(data["adjacency"]):
    source = mapping[i]
    for tdata in d:
        target_data = tdata.copy()
        target = target_data.pop(id_)
        if not multigraph:
            graph.add_edge(source, target)
            graph[source][target].update(tdata)  # Should be target_data, which has v removed
        else:
            ky = target_data.pop(key, None)
            graph.add_edge(source, target, key=ky)
            graph[source][target][ky].update(tdata)  # Should be target_data, which has v removed
```
### Expected Behavior
A Graph when serialised and deserialised with paired methods should be equal to itself, if its nodes are defined in a way to enable the equality.
### Steps to Reproduce
```
def test_deserialized_graph_equal(self):
    G = nx.MultiGraph()
    G.add_edge(1, 2, key="first")
    G.add_edge(1, 2, key="second", color="blue")
    H = adjacency_graph(adjacency_data(G))
    assert graphs_equal(G, H)  # == False
```
### Environment
Python version: 3.10
NetworkX version: 2.8.6
### Additional context
I have a patchset ready to go with a fix, opening this bug report to attach to.
| 2022-10-12T09:43:20 |
|
networkx/networkx | 6,085 | networkx__networkx-6085 | [
"6081"
] | ce692bd3f05900608b829b983838d099b378ca8f | diff --git a/networkx/algorithms/flow/maxflow.py b/networkx/algorithms/flow/maxflow.py
--- a/networkx/algorithms/flow/maxflow.py
+++ b/networkx/algorithms/flow/maxflow.py
@@ -12,14 +12,6 @@
# Define the default flow function for computing maximum flow.
default_flow_func = preflow_push
-# Functions that don't support cutoff for minimum cut computations.
-flow_funcs = [
- boykov_kolmogorov,
- dinitz,
- edmonds_karp,
- preflow_push,
- shortest_augmenting_path,
-]
__all__ = ["maximum_flow", "maximum_flow_value", "minimum_cut", "minimum_cut_value"]
@@ -452,7 +444,7 @@ def minimum_cut(flowG, _s, _t, capacity="capacity", flow_func=None, **kwargs):
if not callable(flow_func):
raise nx.NetworkXError("flow_func has to be callable.")
- if kwargs.get("cutoff") is not None and flow_func in flow_funcs:
+ if kwargs.get("cutoff") is not None and flow_func is preflow_push:
raise nx.NetworkXError("cutoff should not be specified.")
R = flow_func(flowG, _s, _t, capacity=capacity, value_only=True, **kwargs)
@@ -603,7 +595,7 @@ def minimum_cut_value(flowG, _s, _t, capacity="capacity", flow_func=None, **kwar
if not callable(flow_func):
raise nx.NetworkXError("flow_func has to be callable.")
- if kwargs.get("cutoff") is not None and flow_func in flow_funcs:
+ if kwargs.get("cutoff") is not None and flow_func is preflow_push:
raise nx.NetworkXError("cutoff should not be specified.")
R = flow_func(flowG, _s, _t, capacity=capacity, value_only=True, **kwargs)
| diff --git a/networkx/algorithms/flow/tests/test_maxflow.py b/networkx/algorithms/flow/tests/test_maxflow.py
--- a/networkx/algorithms/flow/tests/test_maxflow.py
+++ b/networkx/algorithms/flow/tests/test_maxflow.py
@@ -20,6 +20,7 @@
preflow_push,
shortest_augmenting_path,
}
+
max_min_funcs = {nx.maximum_flow, nx.minimum_cut}
flow_value_funcs = {nx.maximum_flow_value, nx.minimum_cut_value}
interface_funcs = max_min_funcs & flow_value_funcs
@@ -427,25 +428,24 @@ def test_flow_func_parameters(self):
def test_minimum_cut_no_cutoff(self):
G = self.G
- for flow_func in flow_funcs:
- pytest.raises(
- nx.NetworkXError,
- nx.minimum_cut,
- G,
- "x",
- "y",
- flow_func=flow_func,
- cutoff=1.0,
- )
- pytest.raises(
- nx.NetworkXError,
- nx.minimum_cut_value,
- G,
- "x",
- "y",
- flow_func=flow_func,
- cutoff=1.0,
- )
+ pytest.raises(
+ nx.NetworkXError,
+ nx.minimum_cut,
+ G,
+ "x",
+ "y",
+ flow_func=preflow_push,
+ cutoff=1.0,
+ )
+ pytest.raises(
+ nx.NetworkXError,
+ nx.minimum_cut_value,
+ G,
+ "x",
+ "y",
+ flow_func=preflow_push,
+ cutoff=1.0,
+ )
def test_kwargs(self):
G = self.H
| cutoff argument improperly disallowed when calling `minimum_cut` with `shortest_augmenting_path`
It should be possible to call `minimum_cut(G, s, t, flow_func=shortest_augmenting_path, cutoff=3)`; however, this raises an exception.
From `maxflow.py`, the `minimum_cut()` definition:
```
if kwargs.get("cutoff") is not None and flow_func in flow_funcs:
raise nx.NetworkXError("cutoff should not be specified.")
```
where
```
# Functions that don't support cutoff for minimum cut computations.
flow_funcs = [
boykov_kolmogorov,
dinitz,
edmonds_karp,
preflow_push,
shortest_augmenting_path,
]
```
However, the docstring and source for `shortestaugmentingpath.py` do allow the `cutoff` argument.
| I think the problem is that cutoff for shortest_augmenting_path is not working properly; that's why it is disabled even though the function does allow the argument.
Can you point me to any github issue or failing unit test for `shortest_augmenting_path` not working properly?
In the details of #1102 it seems that the initial flagging of functions that don't have the cutoff functionality correctly included the `shortest_augmenting_path` (as well as some others) but the `cutoff` functionality was added in a commit a few days later. It looks like an oversight that these functions did not get removed from the list of functions that do not support the cutoff input parameter.
I believe there may be others in `flow_funcs` that also do now support (and long have supported) the cutoff feature.
Can someone check those functions and make a PR to correct the list of functions that don't support the cutoff feature?
~~I'm going to transfer this discussion to an Issue so we make sure to fix this.~~{already is an Issue :}
I'll do it
Awesome! I might suggest the list variable be called `flow_funcs_without_cutoff` for readability.
I agree that the name of that variable needs to be changed. That suggestion looks good to me.
| 2022-10-17T15:02:48 |
networkx/networkx | 6,098 | networkx__networkx-6098 | [
"5857"
] | bcf607cf7ce4009ca37786b2fcd84e548f1833f5 | diff --git a/networkx/drawing/nx_pylab.py b/networkx/drawing/nx_pylab.py
--- a/networkx/drawing/nx_pylab.py
+++ b/networkx/drawing/nx_pylab.py
@@ -650,6 +650,42 @@ def draw_networkx_edges(
# undirected graphs (for performance reasons) and use FancyArrowPatches
# for directed graphs.
# The `arrows` keyword can be used to override the default behavior
+ use_linecollection = not G.is_directed()
+ if arrows in (True, False):
+ use_linecollection = not arrows
+
+ # Some kwargs only apply to FancyArrowPatches. Warn users when they use
+ # non-default values for these kwargs when LineCollection is being used
+ # instead of silently ignoring the specified option
+ if use_linecollection and any(
+ [
+ arrowstyle is not None,
+ arrowsize != 10,
+ connectionstyle != "arc3",
+ min_source_margin != 0,
+ min_target_margin != 0,
+ ]
+ ):
+ import warnings
+
+ msg = (
+ "\n\nThe {0} keyword argument is not applicable when drawing edges\n"
+ "with LineCollection.\n\n"
+ "To make this warning go away, either specify `arrows=True` to\n"
+ "force FancyArrowPatches or use the default value for {0}.\n"
+ "Note that using FancyArrowPatches may be slow for large graphs.\n"
+ )
+ if arrowstyle is not None:
+ msg = msg.format("arrowstyle")
+ if arrowsize != 10:
+ msg = msg.format("arrowsize")
+ if connectionstyle != "arc3":
+ msg = msg.format("connectionstyle")
+ if min_source_margin != 0:
+ msg = msg.format("min_source_margin")
+ if min_target_margin != 0:
+ msg = msg.format("min_target_margin")
+ warnings.warn(msg, category=UserWarning, stacklevel=2)
if arrowstyle == None:
if G.is_directed():
@@ -657,10 +693,6 @@ def draw_networkx_edges(
else:
arrowstyle = "-"
- use_linecollection = not G.is_directed()
- if arrows in (True, False):
- use_linecollection = not arrows
-
if ax is None:
ax = plt.gca()
| diff --git a/networkx/drawing/tests/test_pylab.py b/networkx/drawing/tests/test_pylab.py
--- a/networkx/drawing/tests/test_pylab.py
+++ b/networkx/drawing/tests/test_pylab.py
@@ -1,6 +1,7 @@
"""Unit tests for matplotlib drawing functions."""
import itertools
import os
+import warnings
import pytest
@@ -396,6 +397,7 @@ def test_labels_and_colors():
G,
pos,
edgelist=[(4, 5), (5, 6), (6, 7), (7, 4)],
+ arrows=True,
min_source_margin=0.5,
min_target_margin=0.75,
width=8,
@@ -752,3 +754,38 @@ def test_draw_networkx_edges_undirected_selfloop_colors():
for fap, clr, slp in zip(ax.patches, edge_colors[-3:], sl_points):
assert fap.get_path().contains_point(slp)
assert mpl.colors.same_color(fap.get_edgecolor(), clr)
+ plt.delaxes(ax)
+
+
[email protected](
+ "fap_only_kwarg", # Non-default values for kwargs that only apply to FAPs
+ (
+ {"arrowstyle": "-"},
+ {"arrowsize": 20},
+ {"connectionstyle": "arc3,rad=0.2"},
+ {"min_source_margin": 10},
+ {"min_target_margin": 10},
+ ),
+)
+def test_user_warnings_for_unused_edge_drawing_kwargs(fap_only_kwarg):
+ """Users should get a warning when they specify a non-default value for
+ one of the kwargs that applies only to edges drawn with FancyArrowPatches,
+ but FancyArrowPatches aren't being used under the hood."""
+ G = nx.path_graph(3)
+ pos = {n: (n, n) for n in G}
+ fig, ax = plt.subplots()
+ # By default, an undirected graph will use LineCollection to represent
+ # the edges
+ kwarg_name = list(fap_only_kwarg.keys())[0]
+ with pytest.warns(
+ UserWarning, match=f"\n\nThe {kwarg_name} keyword argument is not applicable"
+ ):
+ nx.draw_networkx_edges(G, pos, ax=ax, **fap_only_kwarg)
+ # FancyArrowPatches are always used when `arrows=True` is specified.
+ # Check that warnings are *not* raised in this case
+ with warnings.catch_warnings():
+ # Escalate warnings -> errors so tests fail if warnings are raised
+ warnings.simplefilter("error")
+ nx.draw_networkx_edges(G, pos, ax=ax, arrows=True, **fap_only_kwarg)
+
+ plt.delaxes(ax)
| `connectionstyle` argument of `nx.draw_networkx_edges()` does not work properly for multigraphs and undirected graphs
`connectionstyle` argument of `nx.draw_networkx_edges()` does not work properly for multigraphs and undirected graphs. Consider the following example:
```
G=nx.DiGraph([(1,2),(3,1),(3,2)])
positions = {1:(0,0),2:(1,-2), 3:(2,0)}
nx.draw_networkx_nodes(G, pos=positions, node_size = 500)
nx.draw_networkx_edges(G, pos=positions, arrowstyle="-", connectionstyle="arc3,rad=0.3");
```
Output is the following:

The outputs of the same code snippet when G is created as a multigraph and an undirected graph (`G=nx.MultiGraph([(1,2),(3,1),(3,2)])` and `G=nx.Graph([(1,2),(3,1),(3,2)])`) are as follows:

| Thanks for reporting @dtekinoglu , this looks like another instance of #5694. It's been discussed at several meetings, but this issue really seems to be biting a lot of people - we should definitely get something in place for the next release! | 2022-10-18T22:48:20 |
networkx/networkx | 6,132 | networkx__networkx-6132 | [
"6036"
] | 6096b0993b664028b5bcbc211f1149c8ea9a3957 | diff --git a/networkx/readwrite/adjlist.py b/networkx/readwrite/adjlist.py
--- a/networkx/readwrite/adjlist.py
+++ b/networkx/readwrite/adjlist.py
@@ -60,6 +60,14 @@ def generate_adjlist(G, delimiter=" "):
--------
write_adjlist, read_adjlist
+ Notes
+ -----
+ The default `delimiter=" "` will result in unexpected results if node names contain
+ whitespace characters. To avoid this problem, specify an alternate delimiter when spaces are
+ valid in node names.
+
+ NB: This option is not available for data that isn't user-generated.
+
"""
directed = G.is_directed()
seen = set()
@@ -113,6 +121,11 @@ def write_adjlist(G, path, comments="#", delimiter=" ", encoding="utf-8"):
Notes
-----
+ The default `delimiter=" "` will result in unexpected results if node names contain
+ whitespace characters. To avoid this problem, specify an alternate delimiter when spaces are
+ valid in node names.
+ NB: This option is not available for data that isn't user-generated.
+
This format does not store graph, node, or edge data.
See Also
| Improve test coverage for algorithms in load centrality
<!-- If you have a general question about NetworkX, please use the discussions tab to create a new discussion -->
<!--- Provide a general summary of the issue in the Title above -->
Currently we don't have full coverage for the algorithms in load centrality. Code blocks which are highlighted with red at codcov https://app.codecov.io/gh/networkx/networkx/blob/main/networkx/algorithms/centrality/load.py don't have corresponding tests. The tests should be added in https://github.com/networkx/networkx/blob/main/networkx/algorithms/centrality/tests/test_load_centrality.py
### Current Behavior
<!--- Tell us what happens instead of the expected behavior -->
We don't test all the paths the code can take us.
### Expected Behavior
<!--- Tell us what should happen -->
We should be testing everything so there aren't any surprises.
### Steps to Reproduce
<!--- Provide a minimal example that reproduces the bug -->
Visit https://app.codecov.io/gh/networkx/networkx/blob/main/networkx/algorithms/centrality/load.py
| 2022-10-25T14:17:08 |
||
networkx/networkx | 6,149 | networkx__networkx-6149 | [
"6144"
] | 9374d1ab1cc732a1a86ef1ed2438bc51e834f20d | diff --git a/networkx/algorithms/swap.py b/networkx/algorithms/swap.py
--- a/networkx/algorithms/swap.py
+++ b/networkx/algorithms/swap.py
@@ -46,7 +46,7 @@ def directed_edge_swap(G, *, nswap=1, max_tries=100, seed=None):
NetworkXError
If `G` is not directed, or
If nswap > max_tries, or
- If there are fewer than 4 nodes in `G`
+ If there are fewer than 4 nodes or 3 edges in `G`.
NetworkXAlgorithmError
If the number of swap attempts exceeds `max_tries` before `nswap` swaps are made
@@ -70,7 +70,9 @@ def directed_edge_swap(G, *, nswap=1, max_tries=100, seed=None):
if nswap > max_tries:
raise nx.NetworkXError("Number of swaps > number of tries allowed.")
if len(G) < 4:
- raise nx.NetworkXError("Graph has less than four nodes.")
+ raise nx.NetworkXError("DiGraph has fewer than four nodes.")
+ if len(G.edges) < 3:
+ raise nx.NetworkXError("DiGraph has fewer than 3 edges")
# Instead of choosing uniformly at random from a generated edge list,
# this algorithm chooses nonuniformly from the set of nodes with
@@ -161,6 +163,15 @@ def double_edge_swap(G, nswap=1, max_tries=100, seed=None):
G : graph
The graph after double edge swaps.
+ Raises
+ ------
+ NetworkXError
+ If `G` is directed, or
+ If `nswap` > `max_tries`, or
+ If there are fewer than 4 nodes or 2 edges in `G`.
+ NetworkXAlgorithmError
+ If the number of swap attempts exceeds `max_tries` before `nswap` swaps are made
+
Notes
-----
Does not enforce any connectivity constraints.
@@ -174,7 +185,9 @@ def double_edge_swap(G, nswap=1, max_tries=100, seed=None):
if nswap > max_tries:
raise nx.NetworkXError("Number of swaps > number of tries allowed.")
if len(G) < 4:
- raise nx.NetworkXError("Graph has less than four nodes.")
+ raise nx.NetworkXError("Graph has fewer than four nodes.")
+ if len(G.edges) < 2:
+ raise nx.NetworkXError("Graph has fewer than 2 edges")
# Instead of choosing uniformly at random from a generated edge list,
# this algorithm chooses nonuniformly from the set of nodes with
# probability weighted by degree.
@@ -285,7 +298,7 @@ def connected_double_edge_swap(G, nswap=1, _window_threshold=3, seed=None):
if not nx.is_connected(G):
raise nx.NetworkXError("Graph not connected")
if len(G) < 4:
- raise nx.NetworkXError("Graph has less than four nodes.")
+ raise nx.NetworkXError("Graph has fewer than four nodes.")
n = 0
swapcount = 0
deg = G.degree()
| diff --git a/networkx/algorithms/tests/test_swap.py b/networkx/algorithms/tests/test_swap.py
--- a/networkx/algorithms/tests/test_swap.py
+++ b/networkx/algorithms/tests/test_swap.py
@@ -19,9 +19,6 @@ def test_edge_cases_directed_edge_swap():
"Maximum number of swap attempts \\(11\\) exceeded "
"before desired swaps achieved \\(\\d\\)."
)
- graph = nx.DiGraph([(0, 1), (2, 3)])
- with pytest.raises(nx.NetworkXAlgorithmError, match=e):
- nx.directed_edge_swap(graph, nswap=4, max_tries=10, seed=1)
graph = nx.DiGraph([(0, 0), (0, 1), (1, 0), (2, 3), (3, 2)])
with pytest.raises(nx.NetworkXAlgorithmError, match=e):
nx.directed_edge_swap(graph, nswap=1, max_tries=10, seed=1)
@@ -138,3 +135,22 @@ def test_degree_seq_c4():
degrees = sorted(d for n, d in G.degree())
G = nx.double_edge_swap(G, 1, 100)
assert degrees == sorted(d for n, d in G.degree())
+
+
+def test_fewer_than_4_nodes():
+ G = nx.DiGraph()
+ G.add_nodes_from([0, 1, 2])
+ with pytest.raises(nx.NetworkXError, match=".*fewer than four nodes."):
+ nx.directed_edge_swap(G)
+
+
+def test_less_than_3_edges():
+ G = nx.DiGraph([(0, 1), (1, 2)])
+ G.add_nodes_from([3, 4])
+ with pytest.raises(nx.NetworkXError, match=".*fewer than 3 edges"):
+ nx.directed_edge_swap(G)
+
+ G = nx.Graph()
+ G.add_nodes_from([0, 1, 2, 3])
+ with pytest.raises(nx.NetworkXError, match=".*fewer than 2 edges"):
+ nx.double_edge_swap(G)
| directed_edge_swap: ZeroDivisionError when passing a digraph without edges
I was working on improving test coverage for `swap.py` and tested `directed_edge_swap` with a digraph without edges. That resulted in a ZeroDivisionError from `nx.utils.cumulative_distribution`.
### Steps to Reproduce
```python
import networkx as nx
G = nx.DiGraph()
G.add_nodes_from([0, 2, 4, 3])
G = nx.directed_edge_swap(G, nswap=1, max_tries=500, seed=1)
```
### Current Behavior
ZeroDivisionError is raised because in `nx.utils.cumulative_distribution` there's a division by the sum of the input distribution, which is zero. The distribution that `nx.utils.cumulative_distribution` is called with in this example is the degrees of a digraph without edges (all degrees are zero). This is the error log.
```
Traceback (most recent call last):
File "test.py", line 12, in <module>
G = nx.directed_edge_swap(G, nswap=1, max_tries=500, seed=1)
File "/home/paula/Outreachy/networkX/networkx/networkx/utils/decorators.py", line 766, in func
return argmap._lazy_compile(__wrapper)(*args, **kwargs)
File "<class 'networkx.utils.decorators.argmap'> compilation 21", line 5, in argmap_directed_edge_swap_17
File "/home/paula/Outreachy/networkX/networkx/networkx/algorithms/swap.py", line 81, in directed_edge_swap
cdf = nx.utils.cumulative_distribution(degrees) # cdf of degree
File "/home/paula/Outreachy/networkX/networkx/networkx/utils/random_sequence.py", line 103, in cumulative_distribution
cdf.append(cdf[i] + distribution[i] / psum)
ZeroDivisionError: division by zero
```
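A minimal sketch of the failing arithmetic (illustrative; it mirrors what `cumulative_distribution` does with an all-zero degree sequence, and is not the NetworkX source):
```python
degrees = [0, 0, 0, 0]      # every node of an edgeless DiGraph has degree 0
psum = float(sum(degrees))  # 0.0
cdf = [0.0]
for d in degrees:
    cdf.append(cdf[-1] + d / psum)  # ZeroDivisionError on the first iteration
```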
### Expected Behavior
I think the best approach is to raise an exception in `directed_edge_swap` with a message that explains this situation. Also, this should be added to the [doc entry](https://output.circle-artifacts.com/output/job/ab098af6-f76c-4bf9-a732-5c243aca96da/artifacts/0/doc/build/html/reference/algorithms/generated/networkx.algorithms.swap.directed_edge_swap.html#directed-edge-swap).
### Environment
Python version: 3.8
NetworkX version: 2.8.7
### Further Discoveries
Something else that should be added is that if the digraph has less than 3 edges then is impossible to swap edges. In that case, the current behavior is that the max number of tries is reached. In my opinion, this case should raise an exception with an appropriate message. Also, this could be mentioned in the [doc entry](https://output.circle-artifacts.com/output/job/ab098af6-f76c-4bf9-a732-5c243aca96da/artifacts/0/doc/build/html/reference/algorithms/generated/networkx.algorithms.swap.directed_edge_swap.html#directed-edge-swap).
I will work on these changes and will look closely at the other functions in `swap.py`.
Also, I think it will be good to review the functions that use `nx.utils.cumulative_distribution` to check for this error. I can do that too.
| 2022-10-29T17:06:53 |
|
networkx/networkx | 6,151 | networkx__networkx-6151 | [
"6150"
] | 9374d1ab1cc732a1a86ef1ed2438bc51e834f20d | diff --git a/networkx/algorithms/smallworld.py b/networkx/algorithms/smallworld.py
--- a/networkx/algorithms/smallworld.py
+++ b/networkx/algorithms/smallworld.py
@@ -46,6 +46,11 @@ def random_reference(G, niter=1, connectivity=True, seed=None):
G : graph
The randomized graph.
+ Raises
+ ------
+ NetworkXError
+ If there are fewer than 4 nodes or 2 edges in `G`
+
Notes
-----
The implementation is adapted from the algorithm by Maslov and Sneppen
@@ -58,7 +63,9 @@ def random_reference(G, niter=1, connectivity=True, seed=None):
Science 296.5569 (2002): 910-913.
"""
if len(G) < 4:
- raise nx.NetworkXError("Graph has less than four nodes.")
+ raise nx.NetworkXError("Graph has fewer than four nodes.")
+ if len(G.edges) < 2:
+ raise nx.NetworkXError("Graph has fewer that 2 edges")
from networkx.utils import cumulative_distribution, discrete_sequence
@@ -119,7 +126,7 @@ def lattice_reference(G, niter=5, D=None, connectivity=True, seed=None):
Parameters
----------
G : graph
- An undirected graph with 4 or more nodes.
+ An undirected graph.
niter : integer (optional, default=1)
An edge is rewired approximatively niter times.
@@ -139,6 +146,11 @@ def lattice_reference(G, niter=5, D=None, connectivity=True, seed=None):
G : graph
The latticized graph.
+ Raises
+ ------
+ NetworkXError
+ If there are fewer than 4 nodes or 2 edges in `G`
+
Notes
-----
The implementation is adapted from the algorithm by Sporns et al. [1]_.
@@ -160,7 +172,9 @@ def lattice_reference(G, niter=5, D=None, connectivity=True, seed=None):
local_conn = nx.connectivity.local_edge_connectivity
if len(G) < 4:
- raise nx.NetworkXError("Graph has less than four nodes.")
+ raise nx.NetworkXError("Graph has fewer than four nodes.")
+ if len(G.edges) < 2:
+ raise nx.NetworkXError("Graph has fewer that 2 edges")
# Instead of choosing uniformly at random from a generated edge list,
# this algorithm chooses nonuniformly from the set of nodes with
# probability weighted by degree.
| diff --git a/networkx/algorithms/tests/test_smallworld.py b/networkx/algorithms/tests/test_smallworld.py
--- a/networkx/algorithms/tests/test_smallworld.py
+++ b/networkx/algorithms/tests/test_smallworld.py
@@ -68,3 +68,11 @@ def test_omega():
for o in omegas:
assert -1 <= o <= 1
+
+
[email protected]("f", (nx.random_reference, nx.lattice_reference))
+def test_graph_no_edges(f):
+ G = nx.Graph()
+ G.add_nodes_from([0, 1, 2, 3])
+ with pytest.raises(nx.NetworkXError, match="Graph has fewer that 2 edges"):
+ f(G)
| In smallworld.py: ZeroDivisionError when passing a graph without edges
This is a similar error to #6144 that comes from the use of `nx.utils.cumulative_distribution` with graphs without edges. In #6144 there is a deeper explanation of the bug.
| 2022-10-29T20:12:59 |
|
networkx/networkx | 6,183 | networkx__networkx-6183 | [
"6036"
] | 6ef8b9986ad9a8bc79a4a6640a8f9ee285b67a7b | diff --git a/networkx/algorithms/centrality/dispersion.py b/networkx/algorithms/centrality/dispersion.py
--- a/networkx/algorithms/centrality/dispersion.py
+++ b/networkx/algorithms/centrality/dispersion.py
@@ -19,6 +19,13 @@ def dispersion(G, u=None, v=None, normalized=True, alpha=1.0, b=0.0, c=0.0):
The target of the dispersion score if specified.
normalized : bool
If True (default) normalize by the embededness of the nodes (u and v).
+ alpha, b, c : float
+ Parameters for the normalization procedure. When `normalized` is True,
+ the dispersion value is normalized by::
+
+ result = ((dispersion + b) ** alpha) / (embeddedness + c)
+
+ as long as the denominator is nonzero.
Returns
-------
| Improve test coverage for algorithms in load centrality
<!-- If you have a general question about NetworkX, please use the discussions tab to create a new discussion -->
<!--- Provide a general summary of the issue in the Title above -->
Currently we don't have full coverage for the algorithms in load centrality. Code blocks which are highlighted in red at codecov https://app.codecov.io/gh/networkx/networkx/blob/main/networkx/algorithms/centrality/load.py don't have corresponding tests. The tests should be added in https://github.com/networkx/networkx/blob/main/networkx/algorithms/centrality/tests/test_load_centrality.py
### Current Behavior
<!--- Tell us what happens instead of the expected behavior -->
We don't test all the paths the code can take.
### Expected Behavior
<!--- Tell us what should happen -->
We should be testing everything so there aren't any surprises.
### Steps to Reproduce
<!--- Provide a minimal example that reproduces the bug -->
Visit https://app.codecov.io/gh/networkx/networkx/blob/main/networkx/algorithms/centrality/load.py
| 2022-11-07T11:38:26 |
||
networkx/networkx | 6,186 | networkx__networkx-6186 | [
"6180"
] | 6ef8b9986ad9a8bc79a4a6640a8f9ee285b67a7b | diff --git a/networkx/algorithms/clique.py b/networkx/algorithms/clique.py
--- a/networkx/algorithms/clique.py
+++ b/networkx/algorithms/clique.py
@@ -137,6 +137,67 @@ def find_cliques(G, nodes=None):
ValueError
If `nodes` is not a clique.
+ Examples
+ --------
+ >>> from pprint import pprint # For nice dict formatting
+ >>> G = nx.karate_club_graph()
+ >>> sum(1 for c in nx.find_cliques(G)) # The number of maximal cliques in G
+ 36
+ >>> max(nx.find_cliques(G), key=len) # The largest maximal clique in G
+ [0, 1, 2, 3, 13]
+
+ The size of the largest maximal clique is known as the *clique number* of
+ the graph, which can be found directly with:
+
+ >>> max(len(c) for c in nx.find_cliques(G))
+ 5
+
+ One can also compute the number of maximal cliques in `G` that contain a given
+ node. The following produces a dictionary keyed by node whose
+ values are the number of maximal cliques in `G` that contain the node:
+
+ >>> pprint({n: sum(1 for c in nx.find_cliques(G) if n in c) for n in G})
+ {0: 13,
+ 1: 6,
+ 2: 7,
+ 3: 3,
+ 4: 2,
+ 5: 3,
+ 6: 3,
+ 7: 1,
+ 8: 3,
+ 9: 2,
+ 10: 2,
+ 11: 1,
+ 12: 1,
+ 13: 2,
+ 14: 1,
+ 15: 1,
+ 16: 1,
+ 17: 1,
+ 18: 1,
+ 19: 2,
+ 20: 1,
+ 21: 1,
+ 22: 1,
+ 23: 3,
+ 24: 2,
+ 25: 2,
+ 26: 1,
+ 27: 3,
+ 28: 2,
+ 29: 2,
+ 30: 2,
+ 31: 4,
+ 32: 9,
+ 33: 14}
+
+ Or, similarly, the maximal cliques in `G` that contain a given node.
+ For example, the 4 maximal cliques that contain node 31:
+
+ >>> [c for c in nx.find_cliques(G) if 31 in c]
+ [[0, 31], [33, 32, 31], [33, 28, 31], [24, 25, 31]]
+
See Also
--------
find_cliques_recursive
@@ -274,7 +335,7 @@ def find_cliques_recursive(G, nodes=None):
See Also
--------
find_cliques
- An iterative version of the same algorithm.
+ An iterative version of the same algorithm. See docstring for examples.
Notes
-----
@@ -451,6 +512,14 @@ def graph_clique_number(G, cliques=None):
The *clique number* of a graph is the size of the largest clique in
the graph.
+ .. deprecated:: 3.0
+
+ graph_clique_number is deprecated in NetworkX 3.0 and will be removed
+ in v3.2. The graph clique number can be computed directly with::
+
+ max(len(c) for c in nx.find_cliques(G))
+
+
Parameters
----------
G : NetworkX graph
@@ -473,6 +542,16 @@ def graph_clique_number(G, cliques=None):
maximal cliques.
"""
+ import warnings
+
+ warnings.warn(
+ (
+ "\n\ngraph_clique_number is deprecated and will be removed.\n"
+ "Use: ``max(len(c) for c in nx.find_cliques(G))`` instead."
+ ),
+ DeprecationWarning,
+ stacklevel=2,
+ )
if len(G.nodes) < 1:
return 0
if cliques is None:
@@ -483,6 +562,13 @@ def graph_clique_number(G, cliques=None):
def graph_number_of_cliques(G, cliques=None):
"""Returns the number of maximal cliques in the graph.
+ .. deprecated:: 3.0
+
+ graph_number_of_cliques is deprecated and will be removed in v3.2.
+ The number of maximal cliques can be computed directly with::
+
+ sum(1 for _ in nx.find_cliques(G))
+
Parameters
----------
G : NetworkX graph
@@ -505,6 +591,16 @@ def graph_number_of_cliques(G, cliques=None):
maximal cliques.
"""
+ import warnings
+
+ warnings.warn(
+ (
+ "\n\ngraph_number_of_cliques is deprecated and will be removed.\n"
+ "Use: ``sum(1 for _ in nx.find_cliques(G))`` instead."
+ ),
+ DeprecationWarning,
+ stacklevel=2,
+ )
if cliques is None:
cliques = list(find_cliques(G))
return len(cliques)
@@ -576,9 +672,29 @@ def node_clique_number(G, nodes=None, cliques=None, separate_nodes=False):
def number_of_cliques(G, nodes=None, cliques=None):
"""Returns the number of maximal cliques for each node.
+ .. deprecated:: 3.0
+
+ number_of_cliques is deprecated and will be removed in v3.2.
+ Use the result of `find_cliques` directly to compute the number of
+ cliques containing each node::
+
+ {n: sum(1 for c in nx.find_cliques(G) if n in c) for n in G}
+
Returns a single or list depending on input nodes.
Optional list of cliques can be input if already computed.
"""
+ import warnings
+
+ warnings.warn(
+ (
+ "\n\nnumber_of_cliques is deprecated and will be removed.\n"
+ "Use the result of find_cliques directly to compute the number\n"
+ "of cliques containing each node:\n\n"
+ " {n: sum(1 for c in nx.find_cliques(G) if n in c) for n in G}\n\n"
+ ),
+ DeprecationWarning,
+ stacklevel=2,
+ )
if cliques is None:
cliques = list(find_cliques(G))
@@ -599,9 +715,29 @@ def number_of_cliques(G, nodes=None, cliques=None):
def cliques_containing_node(G, nodes=None, cliques=None):
"""Returns a list of cliques containing the given node.
+ .. deprecated:: 3.0
+
+ cliques_containing_node is deprecated and will be removed in 3.2.
+ Use the result of `find_cliques` directly to compute the cliques that
+ contain each node::
+
+ {n: [c for c in nx.find_cliques(G) if n in c] for n in G}
+
Returns a single list or list of lists depending on input nodes.
Optional list of cliques can be input if already computed.
"""
+ import warnings
+
+ warnings.warn(
+ (
+ "\n\ncliques_containing_node is deprecated and will be removed.\n"
+ "Use the result of find_cliques directly to compute maximal cliques\n"
+ "containing each node:\n\n"
+ " {n: [c for c in nx.find_cliques(G) if n in c] for n in G}\n\n"
+ ),
+ DeprecationWarning,
+ stacklevel=2,
+ )
if cliques is None:
cliques = list(find_cliques(G))
diff --git a/networkx/algorithms/isomorphism/isomorph.py b/networkx/algorithms/isomorphism/isomorph.py
--- a/networkx/algorithms/isomorphism/isomorph.py
+++ b/networkx/algorithms/isomorphism/isomorph.py
@@ -33,13 +33,15 @@ def could_be_isomorphic(G1, G2):
# Check local properties
d1 = G1.degree()
t1 = nx.triangles(G1)
- c1 = nx.number_of_cliques(G1)
+ clqs_1 = list(nx.find_cliques(G1))
+ c1 = {n: sum(1 for c in clqs_1 if n in c) for n in G1} # number of cliques
props1 = [[d, t1[v], c1[v]] for v, d in d1]
props1.sort()
d2 = G2.degree()
t2 = nx.triangles(G2)
- c2 = nx.number_of_cliques(G2)
+ clqs_2 = list(nx.find_cliques(G2))
+ c2 = {n: sum(1 for c in clqs_2 if n in c) for n in G2} # number of cliques
props2 = [[d, t2[v], c2[v]] for v, d in d2]
props2.sort()
| diff --git a/networkx/algorithms/tests/test_clique.py b/networkx/algorithms/tests/test_clique.py
--- a/networkx/algorithms/tests/test_clique.py
+++ b/networkx/algorithms/tests/test_clique.py
@@ -69,80 +69,96 @@ def test_find_cliques3(self):
def test_clique_number(self):
G = self.G
- assert nx.graph_clique_number(G) == 4
- assert nx.graph_clique_number(G, cliques=self.cl) == 4
+ with pytest.deprecated_call():
+ assert nx.graph_clique_number(G) == 4
+ with pytest.deprecated_call():
+ assert nx.graph_clique_number(G, cliques=self.cl) == 4
def test_clique_number2(self):
G = nx.Graph()
G.add_nodes_from([1, 2, 3])
- assert nx.graph_clique_number(G) == 1
+ with pytest.deprecated_call():
+ assert nx.graph_clique_number(G) == 1
def test_clique_number3(self):
G = nx.Graph()
- assert nx.graph_clique_number(G) == 0
+ with pytest.deprecated_call():
+ assert nx.graph_clique_number(G) == 0
def test_number_of_cliques(self):
G = self.G
- assert nx.graph_number_of_cliques(G) == 5
- assert nx.graph_number_of_cliques(G, cliques=self.cl) == 5
- assert nx.number_of_cliques(G, 1) == 1
- assert list(nx.number_of_cliques(G, [1]).values()) == [1]
- assert list(nx.number_of_cliques(G, [1, 2]).values()) == [1, 2]
- assert nx.number_of_cliques(G, [1, 2]) == {1: 1, 2: 2}
- assert nx.number_of_cliques(G, 2) == 2
- assert nx.number_of_cliques(G) == {
- 1: 1,
- 2: 2,
- 3: 1,
- 4: 2,
- 5: 1,
- 6: 2,
- 7: 1,
- 8: 1,
- 9: 1,
- 10: 1,
- 11: 1,
- }
- assert nx.number_of_cliques(G, nodes=list(G)) == {
- 1: 1,
- 2: 2,
- 3: 1,
- 4: 2,
- 5: 1,
- 6: 2,
- 7: 1,
- 8: 1,
- 9: 1,
- 10: 1,
- 11: 1,
- }
- assert nx.number_of_cliques(G, nodes=[2, 3, 4]) == {2: 2, 3: 1, 4: 2}
- assert nx.number_of_cliques(G, cliques=self.cl) == {
- 1: 1,
- 2: 2,
- 3: 1,
- 4: 2,
- 5: 1,
- 6: 2,
- 7: 1,
- 8: 1,
- 9: 1,
- 10: 1,
- 11: 1,
- }
- assert nx.number_of_cliques(G, list(G), cliques=self.cl) == {
- 1: 1,
- 2: 2,
- 3: 1,
- 4: 2,
- 5: 1,
- 6: 2,
- 7: 1,
- 8: 1,
- 9: 1,
- 10: 1,
- 11: 1,
- }
+ with pytest.deprecated_call():
+ assert nx.graph_number_of_cliques(G) == 5
+ with pytest.deprecated_call():
+ assert nx.graph_number_of_cliques(G, cliques=self.cl) == 5
+ with pytest.deprecated_call():
+ assert nx.number_of_cliques(G, 1) == 1
+ with pytest.deprecated_call():
+ assert list(nx.number_of_cliques(G, [1]).values()) == [1]
+ with pytest.deprecated_call():
+ assert list(nx.number_of_cliques(G, [1, 2]).values()) == [1, 2]
+ with pytest.deprecated_call():
+ assert nx.number_of_cliques(G, [1, 2]) == {1: 1, 2: 2}
+ with pytest.deprecated_call():
+ assert nx.number_of_cliques(G, 2) == 2
+ with pytest.deprecated_call():
+ assert nx.number_of_cliques(G) == {
+ 1: 1,
+ 2: 2,
+ 3: 1,
+ 4: 2,
+ 5: 1,
+ 6: 2,
+ 7: 1,
+ 8: 1,
+ 9: 1,
+ 10: 1,
+ 11: 1,
+ }
+ with pytest.deprecated_call():
+ assert nx.number_of_cliques(G, nodes=list(G)) == {
+ 1: 1,
+ 2: 2,
+ 3: 1,
+ 4: 2,
+ 5: 1,
+ 6: 2,
+ 7: 1,
+ 8: 1,
+ 9: 1,
+ 10: 1,
+ 11: 1,
+ }
+ with pytest.deprecated_call():
+ assert nx.number_of_cliques(G, nodes=[2, 3, 4]) == {2: 2, 3: 1, 4: 2}
+ with pytest.deprecated_call():
+ assert nx.number_of_cliques(G, cliques=self.cl) == {
+ 1: 1,
+ 2: 2,
+ 3: 1,
+ 4: 2,
+ 5: 1,
+ 6: 2,
+ 7: 1,
+ 8: 1,
+ 9: 1,
+ 10: 1,
+ 11: 1,
+ }
+ with pytest.deprecated_call():
+ assert nx.number_of_cliques(G, list(G), cliques=self.cl) == {
+ 1: 1,
+ 2: 2,
+ 3: 1,
+ 4: 2,
+ 5: 1,
+ 6: 2,
+ 7: 1,
+ 8: 1,
+ 9: 1,
+ 10: 1,
+ 11: 1,
+ }
def test_node_clique_number(self):
G = self.G
@@ -182,23 +198,31 @@ def test_node_clique_number(self):
def test_cliques_containing_node(self):
G = self.G
- assert nx.cliques_containing_node(G, 1) == [[2, 6, 1, 3]]
- assert list(nx.cliques_containing_node(G, [1]).values()) == [[[2, 6, 1, 3]]]
- assert [
- sorted(c) for c in list(nx.cliques_containing_node(G, [1, 2]).values())
- ] == [[[2, 6, 1, 3]], [[2, 6, 1, 3], [2, 6, 4]]]
- result = nx.cliques_containing_node(G, [1, 2])
+ with pytest.deprecated_call():
+ assert nx.cliques_containing_node(G, 1) == [[2, 6, 1, 3]]
+ with pytest.deprecated_call():
+ assert list(nx.cliques_containing_node(G, [1]).values()) == [[[2, 6, 1, 3]]]
+ with pytest.deprecated_call():
+ assert [
+ sorted(c) for c in list(nx.cliques_containing_node(G, [1, 2]).values())
+ ] == [[[2, 6, 1, 3]], [[2, 6, 1, 3], [2, 6, 4]]]
+ with pytest.deprecated_call():
+ result = nx.cliques_containing_node(G, [1, 2])
for k, v in result.items():
result[k] = sorted(v)
assert result == {1: [[2, 6, 1, 3]], 2: [[2, 6, 1, 3], [2, 6, 4]]}
- assert nx.cliques_containing_node(G, 1) == [[2, 6, 1, 3]]
+ with pytest.deprecated_call():
+ assert nx.cliques_containing_node(G, 1) == [[2, 6, 1, 3]]
expected = [{2, 6, 1, 3}, {2, 6, 4}]
- answer = [set(c) for c in nx.cliques_containing_node(G, 2)]
+ with pytest.deprecated_call():
+ answer = [set(c) for c in nx.cliques_containing_node(G, 2)]
assert answer in (expected, list(reversed(expected)))
- answer = [set(c) for c in nx.cliques_containing_node(G, 2, cliques=self.cl)]
+ with pytest.deprecated_call():
+ answer = [set(c) for c in nx.cliques_containing_node(G, 2, cliques=self.cl)]
assert answer in (expected, list(reversed(expected)))
- assert len(nx.cliques_containing_node(G)) == 11
+ with pytest.deprecated_call():
+ assert len(nx.cliques_containing_node(G)) == 11
def test_make_clique_bipartite(self):
G = self.G
| Deprecate `nx.number_of_cliques`?
Just a thought I had while [reviewing 6171](https://github.com/networkx/networkx/pull/6171#pullrequestreview-1169036037).
The `number_of_cliques` helper function creates (by default) a dictionary keyed by node where the values are the number of cliques containing the node. The default behavior essentially boils down to a one-liner:
```python
{n: sum(1 for c in nx.find_cliques(G) if n in c) for n in G}
```
The majority of the implementation of `nx.number_of_cliques` is dedicated to munging the various input arguments. IMO it'd be an improvement to instead recommend that users do this computation the Pythonic way rather than provide networkx-specific API for this task. IMO the potential upside for future users is worth the noise of a deprecation, but this is definitely subjective! Thoughts?
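For a concrete sanity check of that one-liner (every node of a complete graph lies in exactly one maximal clique):
```python
import networkx as nx

G = nx.complete_graph(4)
one_liner = {n: sum(1 for c in nx.find_cliques(G) if n in c) for n in G}
print(one_liner)                # {0: 1, 1: 1, 2: 1, 3: 1}
print(nx.number_of_cliques(G))  # identical output from the helper being discussed
```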
| Another function like that is `graph_number_of_cliques`. It is literally `len(cliques)` if the input `cliques` is already computed, and it is `sum(1 for c in nx.find_cliques(G))` if the `cliques` input is `None`. Removing these functions (I am sure there are others in this module) and replacing them with examples in the `find_cliques` docstring seems like a good idea.
Looking over all the functions in the `clique.py` module, the ones that could be replaced by one-liners on the output of `find_cliques` are:
- `graph_clique_number`: returns the size of the largest clique in the graph
- `graph_number_of_cliques`: returns the number of maximal cliques in the graph
- `number_of_cliques`: The number of maximal cliques for each node
- `cliques_containing_node`: List of maximal cliques for each node | 2022-11-08T04:58:09 |
networkx/networkx | 6,212 | networkx__networkx-6212 | [
"6036"
] | 36e29e89e4247807f053bd6f951bc9db4e3ca4b6 | diff --git a/networkx/algorithms/link_analysis/pagerank_alg.py b/networkx/algorithms/link_analysis/pagerank_alg.py
--- a/networkx/algorithms/link_analysis/pagerank_alg.py
+++ b/networkx/algorithms/link_analysis/pagerank_alg.py
@@ -44,6 +44,7 @@ def pagerank(
tol : float, optional
Error tolerance used to check convergence in power method solver.
+ The iteration will stop after a tolerance of ``len(G) * tol`` is reached.
nstart : dictionary, optional
Starting value of PageRank iteration for each node.
@@ -391,6 +392,7 @@ def _pagerank_scipy(
tol : float, optional
Error tolerance used to check convergence in power method solver.
+ The iteration will stop after a tolerance of ``len(G) * tol`` is reached.
nstart : dictionary, optional
Starting value of PageRank iteration for each node.
| Improve test coverage for algorithms in load centrality
<!-- If you have a general question about NetworkX, please use the discussions tab to create a new discussion -->
<!--- Provide a general summary of the issue in the Title above -->
Currently we don't have full coverage for the algorithms in load centrality. Code blocks which are highlighted in red at codecov https://app.codecov.io/gh/networkx/networkx/blob/main/networkx/algorithms/centrality/load.py don't have corresponding tests. The tests should be added in https://github.com/networkx/networkx/blob/main/networkx/algorithms/centrality/tests/test_load_centrality.py
### Current Behavior
<!--- Tell us what happens instead of the expected behavior -->
We don't test all the paths the code can take.
### Expected Behavior
<!--- Tell us what should happen -->
We should be testing everything so there aren't any surprises.
### Steps to Reproduce
<!--- Provide a minimal example that reproduces the bug -->
Visit https://app.codecov.io/gh/networkx/networkx/blob/main/networkx/algorithms/centrality/load.py
| 2022-11-13T12:53:10 |
||
networkx/networkx | 6,234 | networkx__networkx-6234 | [
"5209"
] | b11fcb3a8950dbde0f8447cb7eded4283fd19aaa | diff --git a/networkx/algorithms/lowest_common_ancestors.py b/networkx/algorithms/lowest_common_ancestors.py
--- a/networkx/algorithms/lowest_common_ancestors.py
+++ b/networkx/algorithms/lowest_common_ancestors.py
@@ -14,7 +14,6 @@
@not_implemented_for("undirected")
-@not_implemented_for("multigraph")
def all_pairs_lowest_common_ancestor(G, pairs=None):
"""Return the lowest common ancestor of all pairs or the provided pairs
@@ -112,7 +111,6 @@ def generate_lca_from_pairs(G, pairs):
@not_implemented_for("undirected")
-@not_implemented_for("multigraph")
def lowest_common_ancestor(G, node1, node2, default=None):
"""Compute the lowest common ancestor of the given pair of nodes.
@@ -150,7 +148,6 @@ def lowest_common_ancestor(G, node1, node2, default=None):
@not_implemented_for("undirected")
-@not_implemented_for("multigraph")
def tree_all_pairs_lowest_common_ancestor(G, root=None, pairs=None):
r"""Yield the lowest common ancestor for sets of pairs in a tree.
@@ -237,7 +234,8 @@ def tree_all_pairs_lowest_common_ancestor(G, root=None, pairs=None):
msg = "No root specified and tree has multiple sources."
raise nx.NetworkXError(msg)
root = n
- elif deg > 1:
+ # checking deg>1 is not sufficient for MultiDiGraphs
+ elif deg > 1 and len(G.pred[n]) > 1:
msg = "Tree LCA only defined on trees; use DAG routine."
raise nx.NetworkXError(msg)
if root is None:
| diff --git a/networkx/algorithms/tests/test_lowest_common_ancestors.py b/networkx/algorithms/tests/test_lowest_common_ancestors.py
--- a/networkx/algorithms/tests/test_lowest_common_ancestors.py
+++ b/networkx/algorithms/tests/test_lowest_common_ancestors.py
@@ -136,12 +136,6 @@ def test_tree_all_pairs_lca_not_implemented(self):
with pytest.raises(NNI):
next(all_pairs_lca(G))
pytest.raises(NNI, nx.lowest_common_ancestor, G, 0, 1)
- G = nx.MultiDiGraph([(0, 1)])
- with pytest.raises(NNI):
- next(tree_all_pairs_lca(G))
- with pytest.raises(NNI):
- next(all_pairs_lca(G))
- pytest.raises(NNI, nx.lowest_common_ancestor, G, 0, 1)
def test_tree_all_pairs_lca_trees_without_LCAs(self):
G = nx.DiGraph()
@@ -150,6 +144,41 @@ def test_tree_all_pairs_lca_trees_without_LCAs(self):
assert ans == [((3, 3), 3)]
+class TestMultiTreeLCA(TestTreeLCA):
+ @classmethod
+ def setup_class(cls):
+ cls.DG = nx.MultiDiGraph()
+ edges = [(0, 1), (0, 2), (1, 3), (1, 4), (2, 5), (2, 6)]
+ cls.DG.add_edges_from(edges)
+ cls.ans = dict(tree_all_pairs_lca(cls.DG, 0))
+ # add multiedges
+ cls.DG.add_edges_from(edges)
+
+ gold = {(n, n): n for n in cls.DG}
+ gold.update({(0, i): 0 for i in range(1, 7)})
+ gold.update(
+ {
+ (1, 2): 0,
+ (1, 3): 1,
+ (1, 4): 1,
+ (1, 5): 0,
+ (1, 6): 0,
+ (2, 3): 0,
+ (2, 4): 0,
+ (2, 5): 2,
+ (2, 6): 2,
+ (3, 4): 1,
+ (3, 5): 0,
+ (3, 6): 0,
+ (4, 5): 0,
+ (4, 6): 0,
+ (5, 6): 2,
+ }
+ )
+
+ cls.gold = gold
+
+
class TestDAGLCA:
@classmethod
def setup_class(cls):
@@ -326,6 +355,62 @@ def test_all_pairs_lca_one_pair_gh4942(self):
assert nx.lowest_common_ancestor(G, 1, 3) == 2
+class TestMultiDiGraph_DAGLCA(TestDAGLCA):
+ @classmethod
+ def setup_class(cls):
+ cls.DG = nx.MultiDiGraph()
+ nx.add_path(cls.DG, (0, 1, 2, 3))
+ # add multiedges
+ nx.add_path(cls.DG, (0, 1, 2, 3))
+ nx.add_path(cls.DG, (0, 4, 3))
+ nx.add_path(cls.DG, (0, 5, 6, 8, 3))
+ nx.add_path(cls.DG, (5, 7, 8))
+ cls.DG.add_edge(6, 2)
+ cls.DG.add_edge(7, 2)
+
+ cls.root_distance = nx.shortest_path_length(cls.DG, source=0)
+
+ cls.gold = {
+ (1, 1): 1,
+ (1, 2): 1,
+ (1, 3): 1,
+ (1, 4): 0,
+ (1, 5): 0,
+ (1, 6): 0,
+ (1, 7): 0,
+ (1, 8): 0,
+ (2, 2): 2,
+ (2, 3): 2,
+ (2, 4): 0,
+ (2, 5): 5,
+ (2, 6): 6,
+ (2, 7): 7,
+ (2, 8): 7,
+ (3, 3): 3,
+ (3, 4): 4,
+ (3, 5): 5,
+ (3, 6): 6,
+ (3, 7): 7,
+ (3, 8): 8,
+ (4, 4): 4,
+ (4, 5): 0,
+ (4, 6): 0,
+ (4, 7): 0,
+ (4, 8): 0,
+ (5, 5): 5,
+ (5, 6): 5,
+ (5, 7): 5,
+ (5, 8): 5,
+ (6, 6): 6,
+ (6, 7): 5,
+ (6, 8): 6,
+ (7, 7): 7,
+ (7, 8): 7,
+ (8, 8): 8,
+ }
+ cls.gold.update(((0, n), 0) for n in cls.DG)
+
+
def test_all_pairs_lca_self_ancestors():
"""Self-ancestors should always be the node itself, i.e. lca of (0, 0) is 0.
See gh-4458."""
@@ -334,6 +419,9 @@ def test_all_pairs_lca_self_ancestors():
G.add_nodes_from(range(5))
G.add_edges_from([(1, 0), (2, 0), (3, 2), (4, 1), (4, 3)])
- assert all(
- u == v == a for (u, v), a in nx.all_pairs_lowest_common_ancestor(G) if u == v
- )
+ ap_lca = nx.all_pairs_lowest_common_ancestor
+ assert all(u == v == a for (u, v), a in ap_lca(G) if u == v)
+ MG = nx.MultiDiGraph(G)
+ assert all(u == v == a for (u, v), a in ap_lca(MG) if u == v)
+ MG.add_edges_from([(1, 0), (2, 0)])
+ assert all(u == v == a for (u, v), a in ap_lca(MG) if u == v)
| lowest_common_ancestor for multigraph
Hello,
I am working with a multigraph and would like to find the lowest common ancestor for nodes. It looks like there's no implementation for multigraph. Can you please help me with this?
| Hmmm... How would that differ for multigraph and graph?
Can't you just use the same function?
This is what I get

Also, the source code says that it is not implemented for multigraph
https://networkx.org/documentation/stable/_modules/networkx/algorithms/lowest_common_ancestors.html#lowest_common_ancestor
Hmmm... You are right -- the code explicitly checks for multigraphs and rules them out.
We should probably look at that more closely. I can't think of a reason why that should be done.
A workaround is to convert the graph to a `nx.DiGraph` for the `lowest_common_ancestor` calculation:
```python
nx.lowest_common_ancestor(nx.DiGraph(final), "HP:0003388", "HP:0011968")
```
That works for now :)
Thanks for the help!
I really appreciate it!
@dschult can we apply the same solution internally as well for finding multigraph's `lowest_common_ancestor`, if so then I can create a PR for that.
I think of this method (converting the MultiDiGraph to a DiGraph so it produces the lowest common ancestor) as a workaround. I'd rather figure out why the algorithm doesn't work for a `MultiDiGraph` and fix it. It seems like it should.
Is the `@not_implemented_for("multigraph")` needed for any of the LCA algorithms? According to the docs, these functions should work on any non-null DAG, which can include a MultiDiGraph.
A previous change refactored the decorator from `@not_implemented_for("multigraph", "undirected")` to the current setting [here](https://github.com/networkx/networkx/pull/2603). However, I think this should only need the `@not_implemented_for("undirected")` decorator. This would also throw an exception for a Multigraph, but allow MultiDiGraph to work as expected.
If that's the case, I'm happy to file a PR for this one. If not then apologies for the noise here!
Can you figure out whether this code for LCA algorithms works for multigraphs? Maybe take out the "not_implemented_for" line for multigraph and try it on a few carefully constructed MultiDiGraphs. If there are inconsistent results we could start with a test that shows the problem -- that might make it possible to adapt the code to work for MultiDiGraph. If there are no inconsistent results then it might be possible to remove the "multigraph" restriction. But it is very simple to convert the MultiDiGraph to a DiGraph. So it might be sufficient to add that to the docstring and maybe include an example of doing that.
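A sketch of the kind of carefully constructed MultiDiGraph check suggested above (this mirrors what the test patch in this PR ends up doing; the graph and asserts here are illustrative only):
```python
import networkx as nx

# a small tree, then duplicated edges to make it a true MultiDiGraph
G = nx.MultiDiGraph([(0, 1), (0, 2), (1, 3), (1, 4)])
G.add_edges_from([(0, 1), (0, 2)])  # parallel edges should not change any LCA

# the DiGraph answer, which the MultiDiGraph call should match
assert nx.lowest_common_ancestor(nx.DiGraph(G), 3, 4) == 1
# nx.lowest_common_ancestor(G, 3, 4) should also be 1 once the decorator is removed
```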
There are definitely places in the code where `degree` is used and other places with `dfs_preorder` used. The degree could easily be a problem for MultiDiGraphs. For example, the in-degree > 1 check would probably change if we had a MultiDiGraph instead of a DiGraph. | 2022-11-22T16:24:27 |
networkx/networkx | 6,240 | networkx__networkx-6240 | [
"6239"
] | d82815dba6c8ddce19cd49f700298dc82a58f066 | diff --git a/networkx/algorithms/traversal/depth_first_search.py b/networkx/algorithms/traversal/depth_first_search.py
--- a/networkx/algorithms/traversal/depth_first_search.py
+++ b/networkx/algorithms/traversal/depth_first_search.py
@@ -364,12 +364,15 @@ def dfs_labeled_edges(G, source=None, depth_limit=None):
edges: generator
A generator of triples of the form (*u*, *v*, *d*), where (*u*,
*v*) is the edge being explored in the depth-first search and *d*
- is one of the strings 'forward', 'nontree', or 'reverse'. A
- 'forward' edge is one in which *u* has been visited but *v* has
+ is one of the strings 'forward', 'nontree', 'reverse', or 'reverse-depth_limit'.
+ A 'forward' edge is one in which *u* has been visited but *v* has
not. A 'nontree' edge is one in which both *u* and *v* have been
visited but the edge is not in the DFS tree. A 'reverse' edge is
- on in which both *u* and *v* have been visited and the edge is in
- the DFS tree.
+ one in which both *u* and *v* have been visited and the edge is in
+ the DFS tree. When the `depth_limit` is reached via a 'forward' edge,
+ a 'reverse' edge is immediately generated rather than the subtree
+ being explored. To indicate this flavor of 'reverse' edge, the string
+ yielded is 'reverse-depth_limit'.
Examples
--------
@@ -436,6 +439,8 @@ def dfs_labeled_edges(G, source=None, depth_limit=None):
visited.add(child)
if depth_now > 1:
stack.append((child, depth_now - 1, iter(G[child])))
+ else:
+ yield parent, child, "reverse-depth_limit"
except StopIteration:
stack.pop()
if stack:
| diff --git a/networkx/algorithms/traversal/tests/test_dfs.py b/networkx/algorithms/traversal/tests/test_dfs.py
--- a/networkx/algorithms/traversal/tests/test_dfs.py
+++ b/networkx/algorithms/traversal/tests/test_dfs.py
@@ -59,11 +59,43 @@ def test_dfs_labeled_edges(self):
edges = list(nx.dfs_labeled_edges(self.G, source=0))
forward = [(u, v) for (u, v, d) in edges if d == "forward"]
assert forward == [(0, 0), (0, 1), (1, 2), (2, 4), (1, 3)]
+ assert edges == [
+ (0, 0, "forward"),
+ (0, 1, "forward"),
+ (1, 0, "nontree"),
+ (1, 2, "forward"),
+ (2, 1, "nontree"),
+ (2, 4, "forward"),
+ (4, 2, "nontree"),
+ (4, 0, "nontree"),
+ (2, 4, "reverse"),
+ (1, 2, "reverse"),
+ (1, 3, "forward"),
+ (3, 1, "nontree"),
+ (3, 0, "nontree"),
+ (1, 3, "reverse"),
+ (0, 1, "reverse"),
+ (0, 3, "nontree"),
+ (0, 4, "nontree"),
+ (0, 0, "reverse"),
+ ]
def test_dfs_labeled_disconnected_edges(self):
edges = list(nx.dfs_labeled_edges(self.D))
forward = [(u, v) for (u, v, d) in edges if d == "forward"]
assert forward == [(0, 0), (0, 1), (2, 2), (2, 3)]
+ assert edges == [
+ (0, 0, "forward"),
+ (0, 1, "forward"),
+ (1, 0, "nontree"),
+ (0, 1, "reverse"),
+ (0, 0, "reverse"),
+ (2, 2, "forward"),
+ (2, 3, "forward"),
+ (3, 2, "nontree"),
+ (2, 3, "reverse"),
+ (2, 2, "reverse"),
+ ]
def test_dfs_tree_isolates(self):
G = nx.Graph()
@@ -141,12 +173,79 @@ def test_dls_edges(self):
edges = nx.dfs_edges(self.G, source=9, depth_limit=4)
assert list(edges) == [(9, 8), (8, 7), (7, 2), (2, 1), (2, 3), (9, 10)]
- def test_dls_labeled_edges(self):
+ def test_dls_labeled_edges_depth_1(self):
edges = list(nx.dfs_labeled_edges(self.G, source=5, depth_limit=1))
forward = [(u, v) for (u, v, d) in edges if d == "forward"]
assert forward == [(5, 5), (5, 4), (5, 6)]
+ # Note: reverse-depth_limit edge types were not reported before gh-6240
+ assert edges == [
+ (5, 5, "forward"),
+ (5, 4, "forward"),
+ (5, 4, "reverse-depth_limit"),
+ (5, 6, "forward"),
+ (5, 6, "reverse-depth_limit"),
+ (5, 5, "reverse"),
+ ]
- def test_dls_labeled_disconnected_edges(self):
+ def test_dls_labeled_edges_depth_2(self):
edges = list(nx.dfs_labeled_edges(self.G, source=6, depth_limit=2))
forward = [(u, v) for (u, v, d) in edges if d == "forward"]
assert forward == [(6, 6), (6, 5), (5, 4)]
+ assert edges == [
+ (6, 6, "forward"),
+ (6, 5, "forward"),
+ (5, 4, "forward"),
+ (5, 4, "reverse-depth_limit"),
+ (5, 6, "nontree"),
+ (6, 5, "reverse"),
+ (6, 6, "reverse"),
+ ]
+
+ def test_dls_labeled_disconnected_edges(self):
+ edges = list(nx.dfs_labeled_edges(self.D, depth_limit=1))
+ assert edges == [
+ (0, 0, "forward"),
+ (0, 1, "forward"),
+ (0, 1, "reverse-depth_limit"),
+ (0, 0, "reverse"),
+ (2, 2, "forward"),
+ (2, 3, "forward"),
+ (2, 3, "reverse-depth_limit"),
+ (2, 7, "forward"),
+ (2, 7, "reverse-depth_limit"),
+ (2, 2, "reverse"),
+ (8, 8, "forward"),
+ (8, 7, "nontree"),
+ (8, 9, "forward"),
+ (8, 9, "reverse-depth_limit"),
+ (8, 8, "reverse"),
+ (10, 10, "forward"),
+ (10, 9, "nontree"),
+ (10, 10, "reverse"),
+ ]
+ # large depth_limit has no impact
+ edges = list(nx.dfs_labeled_edges(self.D, depth_limit=19))
+ assert edges == [
+ (0, 0, "forward"),
+ (0, 1, "forward"),
+ (1, 0, "nontree"),
+ (0, 1, "reverse"),
+ (0, 0, "reverse"),
+ (2, 2, "forward"),
+ (2, 3, "forward"),
+ (3, 2, "nontree"),
+ (2, 3, "reverse"),
+ (2, 7, "forward"),
+ (7, 2, "nontree"),
+ (7, 8, "forward"),
+ (8, 7, "nontree"),
+ (8, 9, "forward"),
+ (9, 8, "nontree"),
+ (9, 10, "forward"),
+ (10, 9, "nontree"),
+ (9, 10, "reverse"),
+ (8, 9, "reverse"),
+ (7, 8, "reverse"),
+ (2, 7, "reverse"),
+ (2, 2, "reverse"),
+ ]
| nx.dfs_labeled_edges does not return last visited edge if depth_limit is specified
`nx.dfs_labeled_edges` does not return the last (deepest) visited edge when traversing backwards if **depth_limit** is specified.
### Current Behavior
```
graph = nx.path_graph(5, nx.DiGraph)
list(nx.dfs_labeled_edges(graph, source=0))
[(0, 0, 'forward'),
(0, 1, 'forward'),
(1, 2, 'forward'),
(2, 3, 'forward'),
(3, 4, 'forward'),
(3, 4, 'reverse'),
(2, 3, 'reverse'),
(1, 2, 'reverse'),
(0, 1, 'reverse'),
(0, 0, 'reverse')]
list(nx.dfs_labeled_edges(graph, source=0, depth_limit=4))
[(0, 0, 'forward'),
(0, 1, 'forward'),
(1, 2, 'forward'),
(2, 3, 'forward'),
(3, 4, 'forward'),
(2, 3, 'reverse'),
(1, 2, 'reverse'),
(0, 1, 'reverse'),
(0, 0, 'reverse')]
```
Note that edge `(3, 4, 'reverse')` is missing.
### Expected Behavior
```
graph = nx.path_graph(5, nx.DiGraph)
list(nx.dfs_labeled_edges(graph, source=0, depth_limit=4))
[(0, 0, 'forward'),
(0, 1, 'forward'),
(1, 2, 'forward'),
(2, 3, 'forward'),
(3, 4, 'forward'),
(3, 4, 'reverse'),
(2, 3, 'reverse'),
(1, 2, 'reverse'),
(0, 1, 'reverse'),
(0, 0, 'reverse')]
```
### Environment
Python version: 3.10
NetworkX version: 2.8.8 and 3.0rc1
| I can verify that there is a bug in `dfs_labeled_edges` for the reporting of reverse edges at the requested `depth_limit`. The fix is to add an else clause for the check on `depth_now > 1`. And there are almost no tests currently for "reverse" edge reporting in the test suite. So we should add those.
This may not be a bug after all... more like a question about the definition of a "reverse" edge.
A "reverse" edge is one in which both u and v have been visited and the edge is in the DFS tree. But with the depth_limit, we don't actually visit node v when we move forward along (u, v)... we reach the depth limit and stop that branch. Because we haven't visited that node, some people could argue that we shouldn't report the "reverse" edge.
One result is that for a fixed depth_limit, the preorder of nodes reports one depth more of nodes than the post-order.
But we can define the labeled DFS edges in either manner. For depth_limit 2, we could report the forward and reverse edges that we would-have-seen without a depth_limit for the edges that we touch. Or we could report the reverse edges only if we actually visit the node at the far end of the edge. I can't find any good definition of what constitutes a reverse edge (also called a back-edge) when there is a depth_limit to the traversal.
@ikarsokolov can you describe the application you are working with and why you expect there to be a reverse edge for each forward edge when the depth_limit is set?
I can see an advantage to ensuring that "reverse" edge is reported for each "forward" edge. You gain information about when you stopped looking at a node -- but you don't have information about whether the DFS had finished visiting that node or you reached the depth_limit. We can easily limit the impact of this change to just this function. What is the right thing to report?
@dschult thanks for quick and detailed responses!
> can you describe the application you are working with and why you expect there to be a reverse edge for each forward edge when the depth_limit is set?
Simple example:
I have a tree and want to do DFS from the root. I also have a blacklist of nodes that should be excluded from the DFS result along with their complete subtrees. Something like "I want a filtered iterator over the tree without these particular nodes and all their children".
I listen to traversal events:
- "forward" edge pointed to blacklisted node => blacklisted subtree traversal is started, subsequent nodes should be dropped from the result.
- Matching "reverse" edge => exit from blacklisted node, subtree traversal is finished, subsequent nodes should be kept in the result.
If depth_limit is set and a blacklisted node happens to be one level above the limit, I will never receive the signal that the traversal of that node and its subtree is complete.
> A "reverse" edge is one in which both u and v have been visited and the edge is in the DFS tree. But with the depth_limit, we don't actually visit node v when we move forward along (u, v)... we reach the depth limit and stop that branch. Because we haven't visited that node, some people could argue that we shouldn't report the "reverse" edge.
I see your point. I think in this case we should have another label besides "forward". Something like "examine".
See how this is done in [boost graph](https://www.boost.org/doc/libs/1_80_0/libs/graph/doc/depth_first_search.html) library.
They handled traversal with visitor pattern but the principle is the same.
```
vis.examine_edge(e, g) is invoked on every out-edge of each vertex after it is discovered.
vis.tree_edge(e, g) is invoked on each edge as it becomes a member of the edges that form the search tree. If you wish to record predecessors, do so at this event point.
vis.back_edge(e, g) is invoked on the back edges in the graph.
```
`examine_edge` is the missing label in the NetworkX implementation.
The only drawback I see here is wasted computational power. "Examined" nodes on the last depth level will not be included in the final result, at least in my naive implementation. My graph has about 1M nodes, so a lot of nodes can be enumerated here.
With our code, you can get the "examine" trait of an edge by looking for `etype in ("forward", "nontree")`. So, I'm not sure it helps all that much. The difference I see is that the boost code has a "finish_vertex" moment. That's really what you are looking for: a point when you can be sure the tree from that node has been explored completely. That's what we use "reverse" to indicate.
But we have to figure out whether we consider the subtree-from-that-node to have been fully explored when the depth_limit stopped the exploration. Currently, we don't indicate that we stopped exploring the tree-from-that-node. I think it would be good to report the edge as "reverse-depth_limit" to make a note that a reverse edge should be indicated (we are done exploring that node), but that it isn't the same as a normal "reverse" edge. I'll put together a PR for this.
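For concreteness, a sketch of what the issue's depth-limited example would yield with that change (label name per the proposal above):
```python
import networkx as nx

graph = nx.path_graph(5, nx.DiGraph)
list(nx.dfs_labeled_edges(graph, source=0, depth_limit=4))
# [(0, 0, 'forward'),
#  (0, 1, 'forward'),
#  (1, 2, 'forward'),
#  (2, 3, 'forward'),
#  (3, 4, 'forward'),
#  (3, 4, 'reverse-depth_limit'),  # the previously missing reverse edge
#  (2, 3, 'reverse'),
#  (1, 2, 'reverse'),
#  (0, 1, 'reverse'),
#  (0, 0, 'reverse')]
```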
In the meantime, a workaround is for you to track the depth yourself, and when a "forward" edge reaches the depth_limit, the corresponding "reverse" edge is guaranteed to immediately follow that "forward" edge. So, you don't have to enter blacklisted mode at all. You know that no subtree nodes will be reported.
Something like:
```python
if etype == "forward":
    if v in excluded_nodes:
        if depth < depth_limit:  # no need to blacklist the subtree if depth == depth_limit
            blacklisted_subtree = True
``` | 2022-11-27T09:07:26 |
networkx/networkx | 6,259 | networkx__networkx-6259 | [
"6248"
] | 53be757de9a87a3413943737fbf8478bb23b17c7 | diff --git a/networkx/convert_matrix.py b/networkx/convert_matrix.py
--- a/networkx/convert_matrix.py
+++ b/networkx/convert_matrix.py
@@ -1000,7 +1000,7 @@ def to_numpy_array(
return A
-def from_numpy_array(A, parallel_edges=False, create_using=None):
+def from_numpy_array(A, parallel_edges=False, create_using=None, edge_attr="weight"):
"""Returns a graph from a 2D NumPy array.
The 2D NumPy array is interpreted as an adjacency matrix for the graph.
@@ -1020,6 +1020,10 @@ def from_numpy_array(A, parallel_edges=False, create_using=None):
create_using : NetworkX graph constructor, optional (default=nx.Graph)
Graph type to create. If graph instance, then cleared before populated.
+ edge_attr : String, optional (default="weight")
+ The attribute to which the array values are assigned on each edge. If
+ it is None, edge attributes will not be assigned.
+
Notes
-----
For directed graphs, explicitly mention create_using=nx.DiGraph,
@@ -1034,6 +1038,11 @@ def from_numpy_array(A, parallel_edges=False, create_using=None):
indicated by the upper triangle of the array `A` will be added to the
graph.
+ If `edge_attr` is Falsy (False or None), edge attributes will not be
+ assigned, and the array data will be treated like a binary mask of
+ edge presence or absence. Otherwise, the attributes will be assigned
+ as follows:
+
If the NumPy array has a single data type for each array entry it
will be converted to an appropriate Python data type.
@@ -1127,7 +1136,9 @@ def from_numpy_array(A, parallel_edges=False, create_using=None):
{
name: kind_to_python_type[dtype.kind](val)
for (_, dtype, name), val in zip(fields, A[u, v])
- },
+ }
+ if edge_attr
+ else {},
)
for u, v in edges
)
@@ -1144,11 +1155,17 @@ def from_numpy_array(A, parallel_edges=False, create_using=None):
# for d in range(A[u, v]):
# G.add_edge(u, v, weight=1)
#
- triples = chain(
- ((u, v, {"weight": 1}) for d in range(A[u, v])) for (u, v) in edges
- )
+ if edge_attr:
+ triples = chain(
+ ((u, v, {edge_attr: 1}) for d in range(A[u, v])) for (u, v) in edges
+ )
+ else:
+ triples = chain(((u, v, {}) for d in range(A[u, v])) for (u, v) in edges)
else: # basic data type
- triples = ((u, v, {"weight": python_type(A[u, v])}) for u, v in edges)
+ if edge_attr:
+ triples = ((u, v, {edge_attr: python_type(A[u, v])}) for u, v in edges)
+ else:
+ triples = ((u, v, {}) for u, v in edges)
# If we are creating an undirected multigraph, only add the edges from the
# upper triangle of the matrix. Otherwise, add all the edges. This relies
# on the fact that the vertices created in the
| diff --git a/networkx/tests/test_convert_numpy.py b/networkx/tests/test_convert_numpy.py
--- a/networkx/tests/test_convert_numpy.py
+++ b/networkx/tests/test_convert_numpy.py
@@ -164,6 +164,34 @@ def test_from_numpy_array_parallel_edges(self):
)
assert graphs_equal(actual, expected)
+ @pytest.mark.parametrize(
+ "dt",
+ (
+ None, # default
+ int, # integer dtype
+ np.dtype(
+ [("weight", "f8"), ("color", "i1")]
+ ), # Structured dtype with named fields
+ ),
+ )
+ def test_from_numpy_array_no_edge_attr(self, dt):
+ A = np.array([[0, 1], [1, 0]], dtype=dt)
+ G = nx.from_numpy_array(A, edge_attr=None)
+ assert "weight" not in G.edges[0, 1]
+ assert len(G.edges[0, 1]) == 0
+
+ def test_from_numpy_array_multiedge_no_edge_attr(self):
+ A = np.array([[0, 2], [2, 0]])
+ G = nx.from_numpy_array(A, create_using=nx.MultiDiGraph, edge_attr=None)
+ assert all("weight" not in e for _, e in G[0][1].items())
+ assert len(G[0][1][0]) == 0
+
+ def test_from_numpy_array_custom_edge_attr(self):
+ A = np.array([[0, 2], [3, 0]])
+ G = nx.from_numpy_array(A, edge_attr="cost")
+ assert "weight" not in G.edges[0, 1]
+ assert G.edges[0, 1]["cost"] == 3
+
def test_symmetric(self):
"""Tests that a symmetric array has edges added only once to an
undirected multigraph when using :func:`networkx.from_numpy_array`.
| Allow caller to opt out of adding `weight` attributes in `from_numpy_array`
### Current Behavior
<!--- Tell us what happens instead of the expected behavior -->
Right now, calls to `from_numpy_array` add a `weight` attribute to all edges in the newly created (non-multi)graph.
### Expected Behavior
<!--- Tell us what should happen -->
It would be nice to allow the user to opt out of this behavior, or to specify an attribute that should receive this weight.
### Steps to Reproduce
<!--- Provide a minimal example that reproduces the bug -->
```python
import networkx as nx
import numpy as np
mat = np.array([
[0, 1],
[1, 0]
])
g = nx.from_numpy_array(mat, create_using=nx.DiGraph)
print(g.edges(data=True))
```
```
[(0, 1, {'weight': 1}), (1, 0, {'weight': 1})]
```
### Environment
<!--- Please provide details about your local environment -->
* Python version: 3.10
* NetworkX version: All (2.6.3)
### Additional context
<!--- Add any other context about the problem here, screenshots, etc. -->
This is confusing behavior because the caller does not explicitly request attribute creation. I would propose the following PREFERRED behavior:
#### Preferred
Creating a graph with an array of `dtype` ∈ {int, float} creates an attribute of name `weight_attribute`, which defaults to `weight` for backwards compat.
```python
g = nx.from_numpy_array(mat, create_using=nx.DiGraph, weight_attribute="w")
print(g.edges(data=True))
```
```
[(0, 1, {'w': 1}), (1, 0, {'w': 1})]
```
Creating a graph with an array of `dtype` = bool treats each value as an existence flag and does not create an attribute:
```python
g = nx.from_numpy_array(mat==1, create_using=nx.DiGraph, weight_attribute="w")
print(g.edges(data=True))
```
```
[(0, 1, {}), (1, 0, {})]
```
#### Also seems fine but changes fn signature:
Perhaps better for backwards compat is to allow the user to opt-out of the current behavior with a new argument `weight_attribute` which defaults to `weight` but can be set to `None` to remove this behavior. (Perhaps a bit more predictable than the above option?)
#### Seems kinda jank but is totally back-compat:
Setting `parallel_edges=True` and `create_using=nx.DiGraph` currently doesn't do anything, and could be repurposed to perform this behavior. Eh. This is kinda gross.
| This seems like a reasonable request to me. I can certainly see a use-case for simply using the adjacency matrix to specify edges *without* creating edge attributes (e.g. to save memory). It doesn't look like there's an obvious way to do this, so I think this would be a nice enhancement.
Of all the proposals above I think the one I like best is to add a new keyword argument (though with a more generic name... maybe `edge_attribute`?) to indicate the edge attribute name. By default this would be `edge_attribute="weight"` for backwards compatibility, but users could also pass in `None` which would then treat the adjacency matrix as binary only indicating edge existence and not store attributes.
There are some potentially messy corners (e.g. the interpretation of `0` as non-edge and making sure everything works well with structured dtypes) but overall I think this would be an improvement!
I agree with @rossbar that this sounds like a good feature. I'd argue for the keyword to be `weight="weight"` as the default way to refer to edge attributes is `weight` in many other places (like `to_numpy_array`). But I don't feel strongly about it.
And I agree that there might be some messy corner cases. Distinguishing between 0 and non-edge might be a different PR. But could be done with a keyword argument like `nonedge_sentinel=0` so people could use `nonedge_sentinel=float("nan")` for example.
Lots of thanks for the thoughtful discussion here!
To roll up some of the points so far, the new signature would look something like this:
```python
def from_numpy_array(
    mat: np.ndarray,
    edge_attribute: str = "weight",
    nonedge_sentinel: Any = float("nan"),  # of the same dtype as `mat`
)
```
@dschult — I used @rossbar's `edge_attribute` instead of `weight=weight` (nothing personal <3) purely to clarify that this is the name of the new attribute. For example,
```python
# Create edges of 5 different colors:
mat = np.random.randint(0, 10, (5, 5))
from_numpy_array(mat, weight="color")
```
...feels a bit clunky to me, whereas `edge_attribute="color"` feels a bit more intuitive.
Happy to discuss that some more of course!
If you like, I'm happy to draft up a PR with impl and tests for your review :)
Interesting that all the `to_<datastructure>` functions use `weight="weight"` keywords, while the `from_<data_structure>` functions either don't have it, use `edge_attribute` or use `edge_attr`. We should probably make this more uniform. The `edge_attr` case is interesting because it uses value `None` for no attributes, just like we are thinking, and `True` to mean load all the available information for each edge. We could use that option for compound dtypes perhaps. I'm not sure what's involved there.
And just to be clear, the default value for non-edges should be 0. That is standard. We just want to provide a way for people to change that if desired.
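To make the resulting API concrete, a short sketch of the behaviour the patch above implements:
```python
import networkx as nx
import numpy as np

A = np.array([[0, 1], [1, 0]])

g = nx.from_numpy_array(A)                  # default: values stored under "weight"
assert g.edges[0, 1]["weight"] == 1

g = nx.from_numpy_array(A, edge_attr=None)  # binary mask: no edge attributes
assert g.edges[0, 1] == {}
```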
Got it — would you prefer one PR that refactors the `from_X` fns and adds this functionality, or two (1. refactor; 2. close this issue)?
Let's keep the scope narrow so things are easier to review. I'd be +1 for a PR to address this feature in `from_numpy_array` specifically! | 2022-12-07T03:53:20 |
networkx/networkx | 6,270 | networkx__networkx-6270 | [
"6257"
] | 9abaf6e5a04adec812d967b53cfa4c560a459e6b | diff --git a/networkx/algorithms/isomorphism/vf2pp_helpers/feasibility.py b/networkx/algorithms/isomorphism/vf2pp_helpers/feasibility.py
--- a/networkx/algorithms/isomorphism/vf2pp_helpers/feasibility.py
+++ b/networkx/algorithms/isomorphism/vf2pp_helpers/feasibility.py
@@ -232,16 +232,16 @@ def _consistent_PT(u, v, graph_params, state_params):
for predecessor in G1.pred[u]:
if predecessor in mapping:
- if G1.number_of_edges(u, predecessor) != G2.number_of_edges(
- v, mapping[predecessor]
+ if G1.number_of_edges(predecessor, u) != G2.number_of_edges(
+ mapping[predecessor], v
):
return False
for predecessor in G2.pred[v]:
if predecessor in reverse_mapping:
if G1.number_of_edges(
- u, reverse_mapping[predecessor]
- ) != G2.number_of_edges(v, predecessor):
+ reverse_mapping[predecessor], u
+ ) != G2.number_of_edges(predecessor, v):
return False
return True
| diff --git a/networkx/algorithms/isomorphism/tests/test_isomorphvf2.py b/networkx/algorithms/isomorphism/tests/test_isomorphvf2.py
--- a/networkx/algorithms/isomorphism/tests/test_isomorphvf2.py
+++ b/networkx/algorithms/isomorphism/tests/test_isomorphvf2.py
@@ -401,3 +401,9 @@ def test_monomorphism_edge_match():
gm = iso.DiGraphMatcher(G, SG, edge_match=iso.categorical_edge_match("label", None))
assert gm.subgraph_is_monomorphic()
+
+
+def test_isomorphvf2pp_multidigraphs():
+ g = nx.MultiDiGraph({0: [1, 1, 2, 2, 3], 1: [2, 3, 3], 2: [3]})
+ h = nx.MultiDiGraph({0: [1, 1, 2, 2, 3], 1: [2, 3, 3], 3: [2]})
+ assert not (nx.vf2pp_is_isomorphic(g, h))
| vf2pp_is_isomorphic returns wrong results
<!-- If you have a general question about NetworkX, please use the discussions tab to create a new discussion -->
<!--- Provide a general summary of the issue in the Title above -->
### Current Behavior
```
import networkx as nx
g = nx.MultiDiGraph({0: [1, 1, 2, 2, 3], 1: [2, 3, 3], 2: [3]})
h = nx.MultiDiGraph({0: [1, 1, 2, 2, 3], 1: [2, 3, 3], 3: [2]})
print(nx.is_isomorphic(g, h))
print(nx.vf2pp_is_isomorphic(g, h))
```
prints:
```
False
True
```
<!--- Tell us what happens instead of the expected behavior -->
### Expected Behavior
Prints:
```
False
False
```
<!--- Tell us what should happen -->
### Steps to Reproduce
See above.
<!--- Provide a minimal example that reproduces the bug -->
### Environment
<!--- Please provide details about your local environment -->
Python version: 3.11.0
NetworkX version: 3.0rc1
### Additional context
<!--- Add any other context about the problem here, screenshots, etc. -->
| I can verify that these two graphs should not return an isomorphism and that the vf2pp code reports one anyway. I can also verify (using `nx.vf2pp_all_isomorphisms`) that the isomorphism yielded is the isomorphism between the non-multiedge version of those two graphs. The VF2 version of the code verifies that the non-multiedge versions of those two graphs are isomorphic with that same isomorphism.
Side note: while looking at adding VF2++ subgraph isomorphism, I have noticed that some parts of the existing isomorphism code indicate they are checking MultiGraphs, but only check for self-loops. So that might be a place to start debugging this.
Here is some code to reproduce my findings:
```python
import networkx as nx
g = nx.MultiDiGraph({0: [1, 1, 2, 2, 3], 1: [2, 3, 3], 2: [3]})
h = nx.MultiDiGraph({0: [1, 1, 2, 2, 3], 1: [2, 3, 3], 3: [2]})
print(nx.is_isomorphic(g, h)) # False
print(nx.vf2pp_is_isomorphic(g, h)) # True
list(nx.isomorphism.vf2pp_all_isomorphisms(g, h))  # output: [{0: 0, 1: 1, 2: 3, 3: 2}]
G = nx.DiGraph(g)
H = nx.DiGraph(h)
list(nx.isomorphism.vf2pp_all_isomorphisms(G, H))  # output: [{0: 0, 1: 1, 2: 3, 3: 2}]
list(nx.isomorphism.DiGraphMatcher(G, H).isomorphisms_iter()) # output: [{0: 0, 1: 1, 2: 3, 3: 2}]
list(nx.isomorphism.DiGraphMatcher(g, h).isomorphisms_iter()) # output: []
```
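For what it's worth, the root cause fixed in the patch above is argument order: in a directed graph `number_of_edges(u, v)` counts only u→v edges, so predecessor multiplicities have to be counted as (predecessor, node). A tiny illustration:
```python
import networkx as nx

D = nx.MultiDiGraph([(0, 1), (0, 1)])
print(D.number_of_edges(0, 1))  # 2 -- two parallel edges from 0 to 1
print(D.number_of_edges(1, 0))  # 0 -- direction matters
```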
Thanks very much for this report! | 2022-12-10T00:28:21 |
networkx/networkx | 6,333 | networkx__networkx-6333 | [
"6274"
] | 851aab2dad6dda73995b06d1ef01b95c3aa79891 | diff --git a/networkx/algorithms/cluster.py b/networkx/algorithms/cluster.py
--- a/networkx/algorithms/cluster.py
+++ b/networkx/algorithms/cluster.py
@@ -329,8 +329,10 @@ def clustering(G, nodes=None, weight=None):
----------
G : graph
- nodes : container of nodes, optional (default=all nodes in G)
- Compute clustering for nodes in this container.
+ nodes : node, iterable of nodes, or None (default=None)
+ If a singleton node, return the number of triangles for that node.
+ If an iterable, compute the number of triangles for each of those nodes.
+ If `None` (the default) compute the number of triangles for all nodes in `G`.
weight : string or None, optional (default=None)
The edge attribute that holds the numerical value used as a weight.
| doc: improve doc of possible values of `nodes` and expected behaviour
This is a note to update the documentation of functions that accept `nodes` kwarg, so that it's more explicit what the behaviour of the function is depending on the passed object.
Potentially, the docstrings would be adjusted with:
```
nodes : node, iterable of nodes, or None (default=None)
If a singleton node, return the number of triangles for that node.
If an iterable, compute the number of triangles for each of those nodes.
If `None` (the default) compute the number of triangles for all nodes in `G`.
```
_Originally posted by @dschult in https://github.com/networkx/networkx/pull/6258#discussion_r1045180012_
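A quick sketch of the behaviour being documented, using `nx.triangles` (the `nodes` handling follows the same convention described above):
```python
import networkx as nx

G = nx.complete_graph(4)
print(nx.triangles(G, 0))    # 3 -- a singleton node returns a number
print(nx.triangles(G, [0]))  # {0: 3} -- an iterable returns a dict
print(nx.triangles(G))       # {0: 3, 1: 3, 2: 3, 3: 3} -- default: all nodes
```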
| 2023-01-05T01:27:02 |
||
networkx/networkx | 6,369 | networkx__networkx-6369 | [
"6365"
] | 150daea06aeb7acd88d09fdf946a6210e97476a8 | diff --git a/networkx/algorithms/cluster.py b/networkx/algorithms/cluster.py
--- a/networkx/algorithms/cluster.py
+++ b/networkx/algorithms/cluster.py
@@ -316,8 +316,7 @@ def clustering(G, nodes=None, weight=None):
.. math::
- c_u = \frac{2}{deg^{tot}(u)(deg^{tot}(u)-1) - 2deg^{\leftrightarrow}(u)}
- T(u),
+ c_u = \frac{T(u)}{2(deg^{tot}(u)(deg^{tot}(u)-1) - 2deg^{\leftrightarrow}(u))},
where :math:`T(u)` is the number of directed triangles through node
:math:`u`, :math:`deg^{tot}(u)` is the sum of in degree and out degree of
| Small documentation error in clustering coefficient for weighted networks
### Current Behavior
The networkx documentation gives the following formula for the clustering coefficient for unweighted directed networks:
https://networkx.org/documentation/stable/reference/algorithms/generated/networkx.algorithms.cluster.clustering.html#networkx.algorithms.cluster.clustering
_Formula from the networkx documentation (screenshot omitted; reproduced from the docstring):_

$$c_u = \frac{2}{deg^{tot}(u)(deg^{tot}(u)-1) - 2deg^{\leftrightarrow}(u)} T(u)$$
Please note the factor 2 in the numerator.
In the original publication the equation is the following:
_Formula from the original publication (screenshot omitted):_

$$c_u = \frac{T(u)}{2\left(deg^{tot}(u)(deg^{tot}(u)-1) - 2deg^{\leftrightarrow}(u)\right)}$$
Please note the factor 2 in the denominator.
I checked the implementation and there it seems to be correct [here](https://github.com/networkx/networkx/blob/59f5b01c80064b3b88e1d044bab1c4cf61fdbc85/networkx/algorithms/cluster.py#L376) (great work to whoever made it so easy to compare it to the equation from the original publication 🎉 )
```python
t / ((dt * (dt - 1) - 2 * db) * 2)
```
Please note the factor two in the denominator.
### Expected Behavior
I think the equation in the documentation should be updated so that the 2 is in the denominator.
| 2023-01-16T13:02:56 |
||
networkx/networkx | 6,426 | networkx__networkx-6426 | [
"5814"
] | adced78d6b0d8c940fd134fbb65ef563767231db | diff --git a/networkx/algorithms/swap.py b/networkx/algorithms/swap.py
--- a/networkx/algorithms/swap.py
+++ b/networkx/algorithms/swap.py
@@ -56,6 +56,8 @@ def directed_edge_swap(G, *, nswap=1, max_tries=100, seed=None):
The graph G is modified in place.
+ A later swap is allowed to undo a previous swap.
+
References
----------
.. [1] Erdős, Péter L., et al. “A Simple Havel-Hakimi Type Algorithm to Realize
| diff --git a/networkx/algorithms/tests/test_swap.py b/networkx/algorithms/tests/test_swap.py
--- a/networkx/algorithms/tests/test_swap.py
+++ b/networkx/algorithms/tests/test_swap.py
@@ -2,14 +2,36 @@
import networkx as nx
-
-def test_directed_edge_swap():
- graph = nx.path_graph(200, create_using=nx.DiGraph)
- in_degrees = sorted((n, d) for n, d in graph.in_degree())
- out_degrees = sorted((n, d) for n, d in graph.out_degree())
- G = nx.directed_edge_swap(graph, nswap=40, max_tries=500, seed=1)
- assert in_degrees == sorted((n, d) for n, d in G.in_degree())
- assert out_degrees == sorted((n, d) for n, d in G.out_degree())
+cycle = nx.cycle_graph(5, create_using=nx.DiGraph)
+tree = nx.random_tree(10, create_using=nx.DiGraph)
+path = nx.path_graph(5, create_using=nx.DiGraph)
+binomial = nx.binomial_tree(3, create_using=nx.DiGraph)
+HH = nx.directed_havel_hakimi_graph([1, 2, 1, 2, 2, 2], [3, 1, 0, 1, 2, 3])
+balanced_tree = nx.balanced_tree(2, 3, create_using=nx.DiGraph)
+
+
[email protected]("G", [path, binomial, HH, cycle, tree, balanced_tree])
+def test_directed_edge_swap(G):
+ in_degree = set(G.in_degree)
+ out_degree = set(G.out_degree)
+ edges = set(G.edges)
+ nx.directed_edge_swap(G, nswap=1, max_tries=100, seed=1)
+ assert in_degree == set(G.in_degree)
+ assert out_degree == set(G.out_degree)
+ assert edges != set(G.edges)
+ assert 3 == sum(e not in edges for e in G.edges)
+
+
+def test_directed_edge_swap_undo_previous_swap():
+ G = nx.DiGraph(nx.path_graph(4).edges) # only 1 swap possible
+ edges = set(G.edges)
+ nx.directed_edge_swap(G, nswap=2, max_tries=100)
+ assert edges == set(G.edges)
+
+ nx.directed_edge_swap(G, nswap=1, max_tries=100, seed=1)
+ assert {(0, 2), (1, 3), (2, 1)} == set(G.edges)
+ nx.directed_edge_swap(G, nswap=1, max_tries=100, seed=1)
+ assert edges == set(G.edges)
def test_edge_cases_directed_edge_swap():
| Improve testing for directed_edge_swap
There is an opportunity to improve the testing of the `directed_edge_swap` function.
There is a concrete suggestion [here](https://github.com/networkx/networkx/pull/5663#discussion_r906225024) for improving `test_directed_edge_swap` by validating that the input and output edges are indeed different. Upon reflection, I think this check could even be improved further by verifying that the number of swaps is correct... I think the difference in the edge sets should be 2*`nswap` - that might be one way to check.
It'd also be good to add more test cases of non-path graphs where there are only a limited number of edge swaps possible, and verify that the algorithm can recover all of them. See [the discussion](https://github.com/networkx/networkx/pull/5663#pullrequestreview-1018715617) for further details.
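A sketch of the kind of check being suggested; note that, per the test patch in this PR, a single *directed* swap actually rewires three edges, not two:
```python
import networkx as nx

G = nx.path_graph(5, create_using=nx.DiGraph)
before = set(G.edges)
nx.directed_edge_swap(G, nswap=1, max_tries=100, seed=1)

assert before != set(G.edges)                      # edges actually changed
assert sum(e not in before for e in G.edges) == 3  # one directed swap touches 3 edges
```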
| Hello
I would like to take up this issue. I will open a pull request soon.
I have made the function `test_directed_edge_swap()` assert that the edges of the original DiGraph and the one after swapping edges are not all the same, in addition to checking the in and out degree of the graph.
Also, in order to remove the limitation of testing on a path graph only, I have made changes which would generate a random directed graph (which would not be a path graph) and run the tests on it.
Testing a random graph isn't all that helpful. It would be better to test a graph with a specific configuration that you are concerned will be handled correctly by the function. Random graphs give a different result every time you run them, so it is very hard to track down what the cause of the error is. You can specify a "seed" to make it the same result every time, but that is no longer random. It is just some arbitrary graph that you usually don't know much about.
So, if you want to expand beyond a `path_graph`, I would suggest choosing a specific graph that has a different topology and working with that. One option would be a complete graph with only one directed edge for each edge pair.
```python
G = nx.DiGraph(nx.complete_graph(N).edges)
```
You should think about better graphs to test edge swaps. I just made this one up as something that looks very different from a path -- not because of some feature about edge swaps.
Thanks for taking this on! :} | 2023-02-12T19:02:09 |
networkx/networkx | 6,434 | networkx__networkx-6434 | [
"6428"
] | 79586c3f1a0f47e6643919887ace7e7e9335db8a | diff --git a/networkx/generators/line.py b/networkx/generators/line.py
--- a/networkx/generators/line.py
+++ b/networkx/generators/line.py
@@ -243,7 +243,7 @@ def inverse_line_graph(G):
Notes
-----
- This is an implementation of the Roussopoulos algorithm.
+ This is an implementation of the Roussopoulos algorithm[1]_.
If G consists of multiple components, then the algorithm doesn't work.
You should invert every component separately:
@@ -259,8 +259,9 @@ def inverse_line_graph(G):
References
----------
- * Roussopolous, N, "A max {m, n} algorithm for determining the graph H from
- its line graph G", Information Processing Letters 2, (1973), 108--112.
+ .. [1] Roussopoulos, N.D. , "A max {m, n} algorithm for determining the graph H from
+ its line graph G", Information Processing Letters 2, (1973), 108--112, ISSN 0020-0190,
+ `DOI link <https://doi.org/10.1016/0020-0190(73)90029-X>`_
"""
if G.number_of_nodes() == 0:
| Reference for inverse_line_graph
There is a spelling mistake in the reference for the documentation of inverse_line_graph (latest version).
https://networkx.org/documentation/stable/reference/generated/networkx.generators.line.inverse_line_graph.html
The author is "N. D. Roussopoulos" instead of N. Roussopolous (note the location of "ou"). Also, it would be nice to add a link or DOI to the article. The complete bibliographic info is below.
@article{ROUSSOPOULOS1973108,
title = {A max {m,n} algorithm for determining the graph H from its line graph G},
journal = {Information Processing Letters},
volume = {2},
number = {4},
pages = {108-112},
year = {1973},
issn = {0020-0190},
doi = {https://doi.org/10.1016/0020-0190(73)90029-X},
url = {https://www.sciencedirect.com/science/article/pii/002001907390029X},
author = {Nicholas D. Roussopoulos},
keywords = {graph, maximal subgroup, complete graph, complete bipartite, automorphism}
}
| Could you assign this issue to me?
I think I can't do that. Can somebody else do this? Possibly @rossbar?
We don't usually use the "assign" function in Github to assign Issues.
Please open a PR with the fix for this issue and we will review it.
Thanks!
great thanks!
I'm sorry @barbireau, I have a question about your DOI link. It redirects me to a "DOI not found" page when I click on it. Am I doing something wrong, or is the link wrong?
The DOI link should be without the "}" at the end. Github automatically makes the doi a clickable link, but adds the "}" by mistake.
So the doi is
https://doi.org/10.1016/0020-0190(73)90029-X
> I'm sorry @barbireau, I have a question about your DOI link. It redirects me to a "DOI not found" page when I click on it. Am I doing something wrong, or is the link wrong?
ooh I got it, it took the closing brace as part of the link | 2023-02-15T16:55:52 |
|
networkx/networkx | 6,457 | networkx__networkx-6457 | [
"6420"
] | e6b0062430ccc0bf264795f5466d344d183d81a4 | diff --git a/networkx/algorithms/tree/coding.py b/networkx/algorithms/tree/coding.py
--- a/networkx/algorithms/tree/coding.py
+++ b/networkx/algorithms/tree/coding.py
@@ -331,6 +331,11 @@ def from_prufer_sequence(sequence):
NetworkX graph
The tree corresponding to the given Prüfer sequence.
+ Raises
+ ------
+ NetworkXError
+ If the Prüfer sequence is not valid.
+
Notes
-----
There is a bijection from labeled trees to Prüfer sequences. This
@@ -384,6 +389,11 @@ def from_prufer_sequence(sequence):
not_orphaned = set()
index = u = next(k for k in range(n) if degree[k] == 1)
for v in sequence:
+ # check the validity of the prufer sequence
+ if v < 0 or v > n - 1:
+ raise nx.NetworkXError(
+ f"Invalid Prufer sequence: Values must be between 0 and {n-1}, got {v}"
+ )
T.add_edge(u, v)
not_orphaned.add(u)
degree[v] -= 1
| from_prufer_sequence reports "ValueError: too many values to unpack"
### Current Behavior
If you try the following code
```
import networkx as nx
nx.from_prufer_sequence([7, 5, 5, 3, 1])
```
you will get
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/xing/miniconda3/lib/python3.9/site-packages/networkx/algorithms/tree/coding.py", line 396, in from_prufer_sequence
u, v = orphans
ValueError: too many values to unpack (expected 2)
```
### Expected Behavior
The code should return a tree without any problem. See Applied Combinatorics by Mitchel T. Keller, William T. Trotter, [Chapter 5.6](https://www.rellek.net/book/s_graphs_counting-trees.html)
| I think the sequence must have `n-2` values with numbers between 0 and n-1. This has 5 values, so n=7. But the first number in the sequence is 7. So this is an invalid sequence... Right?
I guess the error message could be more helpful. But first -- let's address the expected behavior question...
Should this return a tree?
I see. But node labels starting from 0 is not the convention in most math books, including [Wikipedia](https://en.wikipedia.org/wiki/Pr%C3%BCfer_sequence).
Anyway, a better error would be helpful.
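With the range check added in the patch above, that better error is what you now get. A minimal sketch (the values follow from the patch: a length-5 sequence gives n = 7, so labels must lie in 0..6):

```python
import networkx as nx

try:
    nx.from_prufer_sequence([7, 5, 5, 3, 1])
except nx.NetworkXError as err:
    # Invalid Prufer sequence: Values must be between 0 and 6, got 7
    print(err)
```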
> I see. But node labels starting from 0 is not the convention in most math books, including [Wikipedia](https://en.wikipedia.org/wiki/Pr%C3%BCfer_sequence).
fwiw, the paper cited in the references of the implementation (https://www.scirp.org/journal/paperinformation.aspx?paperid=532) does use 0 to n - 1. Should we print a warning/error about this? Maybe catch the error when looking for orphans. | 2023-03-01T00:11:47 |
|
networkx/networkx | 6,471 | networkx__networkx-6471 | [
"6458"
] | 4a6f2f43508d26d0eb9884a24cba28721d5fb875 | diff --git a/networkx/classes/backends.py b/networkx/classes/backends.py
--- a/networkx/classes/backends.py
+++ b/networkx/classes/backends.py
@@ -128,7 +128,13 @@ def _dispatch(func=None, *, name=None):
@functools.wraps(func)
def wrapper(*args, **kwds):
- graph = args[0]
+ if args:
+ graph = args[0]
+ else:
+ try:
+ graph = kwds["G"]
+ except KeyError:
+ raise TypeError(f"{name}() missing positional argument: 'G'") from None
if hasattr(graph, "__networkx_plugin__") and plugins:
plugin_name = graph.__networkx_plugin__
if plugin_name in plugins:
| diff --git a/networkx/classes/tests/test_backends.py b/networkx/classes/tests/test_backends.py
new file mode 100644
--- /dev/null
+++ b/networkx/classes/tests/test_backends.py
@@ -0,0 +1,14 @@
+import pytest
+
+import networkx as nx
+
+pytest.importorskip("scipy")
+pytest.importorskip("numpy")
+
+
+def test_dispatch_kwds_vs_args():
+ G = nx.path_graph(4)
+ nx.pagerank(G)
+ nx.pagerank(G=G)
+ with pytest.raises(TypeError):
+ nx.pagerank()
| upgrade 2.8.6 -> 3.0 breaks shortest_path()
I just re-installed networkx without specifying a version, so I was upgraded from 2.8.6 to 3.0.
Running unaltered code calling nx.shortest_path(*args, **kwargs) under 2.8.6 runs fine and produces the correct result, but 3.0 fails with an error about missing arguments.
Looking at the source, the 3.0 version got a decorator which doesn't seem to work properly.
The error is reproduced using a stripped-down [lollipop example](https://networkx.org/documentation/stable/auto_examples/basic/plot_properties.html#sphx-glr-auto-examples-basic-plot-properties-py).
### Current Behavior
under 3.0:
```
import networkx as nx
G = nx.lollipop_graph(4, 6)
print(nx.shortest_path(G=G, source=list(G.nodes())[1], target=list(G.nodes())[9]))
```
produces
```
Traceback (most recent call last):
File "C:\Users\XYZ\AppData\Roaming\JetBrains\PyCharm2022.3\scratches\scratch_1.py", line 7, in <module>
print(nx.shortest_path(G=G, source=list(G.nodes())[1], target=list(G.nodes())[9]))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\modules\venv\networkx_3\Lib\site-packages\networkx\classes\backends.py", line 134, in wrapper
graph = args[0]
~~~~^^^
IndexError: tuple index out of range
```
### Expected Behavior
under 2.8.6:
```
import networkx as nx
G = nx.lollipop_graph(4, 6)
print(nx.shortest_path(G=G, source=list(G.nodes())[1], target=list(G.nodes())[9]))
```
produces
`[1, 3, 4, 5, 6, 7, 8, 9]`
### Environment
Python version: anaconda 3.10, networkx installed via pip
NetworkX version: 2.8.6 and 3.0
| The error only occurs if you give a keyword name to the input G.
We need to fix that... but try it with input `G` instead of `G=G` and it should work just fine.
Perhaps we should make that input a position-only input. But maybe it would be possible to make
the backend code find keyword arguments as well as position arguments.
(if len(args)<1 look for a keyword `G`? But then we can't apply it to functions that name the graph something else -- if there are any such functions :})
> The error only occurs if you give a keyword name to the input G.
> We need to fix that... but try it with input `G` instead of `G=G` and it should work just fine.
I confirm this.
> Perhaps we should make that input a position-only input.
That would reduce compatibility to 3.8+ if I'm not mistaken. IDK if it is a good solution but what about raising an exception if len(args) != 0?
Urghh, yes this is an issue. Thanks for opening the issue! We should have caught this before.
> That would reduce compatibility to 3.8+ if I'm not mistaken. IDK if it is a good solution but what about raising an exception if len(args) != 0?
We already are py3.8+ for new networkx releases.
> That would reduce compatibility to 3.8+ if I'm not mistaken. IDK if it is a good solution but what about raising an exception if len(args) != 0?
I meant !=1 of course.
Anyway, thanks for all the effort put into this superb package, it greatly helped me to solve some of the problems I had faced in my apps.
We'd have to make any function that uses the `_dispatch` decorator make the first input position-only.
Would it be more flexible to allow the `_dispatch` decorator to include as an argument the name of the possibly keyword input variable? It could default to `G` and not change much code anywhere -- and we'd have to document that anyone decorating a function with first argument called something other than `G` will need to use this argument for the decorator.
Then inside the decorator code we check if `len(args)<1` and if so, we look up the keyword arg.
(of course, maybe we want to allow dispatch to position arguments other than the first position... but one step at a time here... :} Not sure what is best...
> We'd have to make any function that uses the _dispatch decorator make the first input position-only.
Would it be more flexible to allow the _dispatch decorator to include as an argument the name of the possibly keyword input variable? It could default to G and not change much code anywhere -- and we'd have to document that anyone decorating a function with first argument called something other than G will need to use this argument for the decorator.
We have to revisit this specific bit of the decorator anyway, as right now the assumption is that there is only 1 argument with a graph object. The current `_dispatch` implementation wouldn't work for functions where you expect 2 graph inputs (like isomorphism). IMO these changes should happen inside the `_dispatch` decorator and we should do a better job of standardizing function arguments (like it should either be just `G` or `graph` or something) elsewhere in the codebase.
Sounds good. I think it is pretty universal within NetworkX that the graph goes first and it is named `G`.
I guess that might be the first fix then... Change `_dispatch` to check if `len(args)` is zero and if so, look at `kwargs` for an entry called `G`. That seems easier than going through the codebase to add position-only syntax for each function that takes a graph as the first argument. :)
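A minimal check of the fixed behavior, assuming `shortest_path` goes through the dispatch wrapper patched above:

```python
import networkx as nx

G = nx.lollipop_graph(4, 6)

# Positional and keyword forms now agree instead of raising IndexError.
assert nx.shortest_path(G, source=1, target=9) == nx.shortest_path(G=G, source=1, target=9)
```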
CC @jim22k | 2023-03-07T19:26:06 |
networkx/networkx | 6,478 | networkx__networkx-6478 | [
"6336"
] | 4a6f2f43508d26d0eb9884a24cba28721d5fb875 | diff --git a/networkx/classes/coreviews.py b/networkx/classes/coreviews.py
--- a/networkx/classes/coreviews.py
+++ b/networkx/classes/coreviews.py
@@ -134,7 +134,7 @@ def __init__(self, succ, pred):
self._pred = pred
def __len__(self):
- return len(self._succ) + len(self._pred)
+ return len(self._succ.keys() | self._pred.keys())
def __iter__(self):
return iter(set(self._succ.keys()) | set(self._pred.keys()))
| diff --git a/networkx/classes/tests/test_coreviews.py b/networkx/classes/tests/test_coreviews.py
--- a/networkx/classes/tests/test_coreviews.py
+++ b/networkx/classes/tests/test_coreviews.py
@@ -155,7 +155,7 @@ def test_pickle(self):
assert view.__slots__ == pview.__slots__
def test_len(self):
- assert len(self.av) == len(self.s) + len(self.p)
+ assert len(self.av) == len(self.s.keys() | self.p.keys()) == 5
def test_iter(self):
assert set(self.av) == set(self.s) | set(self.p)
@@ -257,7 +257,7 @@ def setup_method(self):
self.adjview = nx.classes.coreviews.UnionMultiInner(self.s, self.p)
def test_len(self):
- assert len(self.adjview) == len(self.s) + len(self.p)
+ assert len(self.adjview) == len(self.s.keys() | self.p.keys()) == 4
def test_getitem(self):
assert self.adjview[1] is not self.s[1]
| nx.DiGraph.to_undirected() not working as expected for bidirectional edges when using as_view = True
Problem: When using `to_undirected()` on a DiGraph the properties are inconsistent, i.e., they differ depending on whether as_view was set to True or False. More precisely, the reported degree is not as expected when using `as_view = True`. I guess this might also have an effect on other properties that depend on degree.
### Current Behavior
The node degree of the undirected graph returned by `to_undirected()` differs depending on whether as_view was set to True or False. If the directed graph had bidirectional edges, the degree is off by approx. a factor of 2 in the graph view. Everything works as expected if as_view is set to False.
### Expected Behavior
`G.to_undirected(as_view = False).degree()` and `G.to_undirected(as_view = True).degree()` should behave the same.
### Steps to Reproduce
```
import networkx as nx
import sys
print(sys.version) # 3.10.8 | packaged by conda-forge | (main, Nov 24 2022, 14:07:00) [MSC v.1916 64 bit (AMD64)]
print(nx.__version__) # 2.8.8
G = nx.DiGraph()
G.add_nodes_from(["v0","v1","v2"])
G.add_edges_from([("v0","v1"),("v1","v0"),("v1","v2")])
print(G.degree()) # Correct [('v0', 2), ('v1', 3), ('v2', 1)]
G_undir_1 = G.to_undirected(as_view = False)
print(G_undir_1.degree()) # Correct [('v0', 1), ('v1', 2), ('v2', 1)]
G_undir_2 = G.to_undirected(as_view = True)
print(G_undir_2.degree()) # Incorrect [('v0', 2), ('v1', 3), ('v2', 1)]
```
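The root cause, per the patch above: the undirected view's per-node adjacency is the union of successors and predecessors, but its `__len__` summed the two dict sizes, counting a bidirectional neighbor twice. A minimal sketch (`succ`/`pred` stand in for the view's internal dicts for node v0):

```python
succ = {"v1": {}}  # v0 -> v1
pred = {"v1": {}}  # v1 -> v0, seen from v0

print(len(succ) + len(pred))           # 2: old behavior double-counts v1
print(len(succ.keys() | pred.keys()))  # 1: fixed behavior, union of neighbors
```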
### Environment
<!--- Please provide details about your local environment -->
Python version: 3.10.8
NetworkX version: 2.8.8
### Additional context
n/a
| @MridulS @paulitapb @rossbar @jarrodmillman you guys should look at this issue | 2023-03-10T01:51:33 |
networkx/networkx | 6,486 | networkx__networkx-6486 | [
"6477"
] | d76f3bfa9a26c3956323764d14868bf3ff8f7e24 | diff --git a/networkx/algorithms/tree/mst.py b/networkx/algorithms/tree/mst.py
--- a/networkx/algorithms/tree/mst.py
+++ b/networkx/algorithms/tree/mst.py
@@ -334,12 +334,22 @@ def prim_mst_edges(G, minimum, weight="weight", keys=True, data=True, ignore_nan
continue
for k2, d2 in keydict.items():
new_weight = d2.get(weight, 1) * sign
+ if isnan(new_weight):
+ if ignore_nan:
+ continue
+ msg = f"NaN found as an edge weight. Edge {(v, w, k2, d2)}"
+ raise ValueError(msg)
push(frontier, (new_weight, next(c), v, w, k2, d2))
else:
for w, d2 in G.adj[v].items():
if w in visited:
continue
new_weight = d2.get(weight, 1) * sign
+ if isnan(new_weight):
+ if ignore_nan:
+ continue
+ msg = f"NaN found as an edge weight. Edge {(v, w, d2)}"
+ raise ValueError(msg)
push(frontier, (new_weight, next(c), v, w, d2))
| diff --git a/networkx/algorithms/tree/tests/test_mst.py b/networkx/algorithms/tree/tests/test_mst.py
--- a/networkx/algorithms/tree/tests/test_mst.py
+++ b/networkx/algorithms/tree/tests/test_mst.py
@@ -253,6 +253,36 @@ class TestKruskal(MultigraphMSTTestBase):
algorithm = "kruskal"
+ def test_key_data_bool(self):
+ """Tests that the keys and data values are included in
+ MST edges based on whether keys and data parameters are
+ true or false"""
+ G = nx.MultiGraph()
+ G.add_edge(1, 2, key=1, weight=2)
+ G.add_edge(1, 2, key=2, weight=3)
+ G.add_edge(3, 2, key=1, weight=2)
+ G.add_edge(3, 1, key=1, weight=4)
+
+ # keys are included and data is not included
+ mst_edges = nx.minimum_spanning_edges(
+ G, algorithm=self.algo, keys=True, data=False
+ )
+ assert edges_equal([(1, 2, 1), (2, 3, 1)], list(mst_edges))
+
+ # keys are not included and data is included
+ mst_edges = nx.minimum_spanning_edges(
+ G, algorithm=self.algo, keys=False, data=True
+ )
+ assert edges_equal(
+ [(1, 2, {"weight": 2}), (2, 3, {"weight": 2})], list(mst_edges)
+ )
+
+ # both keys and data are not included
+ mst_edges = nx.minimum_spanning_edges(
+ G, algorithm=self.algo, keys=False, data=False
+ )
+ assert edges_equal([(1, 2), (2, 3)], list(mst_edges))
+
class TestPrim(MultigraphMSTTestBase):
"""Unit tests for computing a minimum (or maximum) spanning tree
@@ -261,6 +291,25 @@ class TestPrim(MultigraphMSTTestBase):
algorithm = "prim"
+ def test_ignore_nan(self):
+ """Tests that the edges with NaN weights are ignored or
+ raise an Error based on ignore_nan is true or false"""
+ H = nx.MultiGraph()
+ H.add_edge(1, 2, key=1, weight=float("nan"))
+ H.add_edge(1, 2, key=2, weight=3)
+ H.add_edge(3, 2, key=1, weight=2)
+ H.add_edge(3, 1, key=1, weight=4)
+
+ # NaN weight edges are ignored when ignore_nan=True
+ mst_edges = nx.minimum_spanning_edges(H, algorithm=self.algo, ignore_nan=True)
+ assert edges_equal(
+ [(1, 2, 2, {"weight": 3}), (2, 3, 1, {"weight": 2})], list(mst_edges)
+ )
+
+ # NaN weight edges raise Error when ignore_nan=False
+ with pytest.raises(ValueError):
+ list(nx.minimum_spanning_edges(H, algorithm=self.algo, ignore_nan=False))
+
def test_multigraph_keys_tree(self):
G = nx.MultiGraph()
G.add_edge(0, 1, key="a", weight=2)
| Improve test coverage for MST algorithms
I found that the test coverage for the mst.py file in algorithms/tree is 92.49%
https://app.codecov.io/gh/networkx/networkx/blob/main/networkx/algorithms/tree/mst.py
I am working on improving this.
Current Behavior
We don't test all the paths the code can take.
Expected Behavior
We should be testing everything so there aren't any surprises.
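For reference, the NaN handling exercised by the new Prim test -- a sketch adapted from the test patch above:

```python
import networkx as nx

H = nx.MultiGraph()
H.add_edge(1, 2, key=1, weight=float("nan"))
H.add_edge(1, 2, key=2, weight=3)
H.add_edge(3, 2, key=1, weight=2)
H.add_edge(3, 1, key=1, weight=4)

# NaN-weight edges are skipped when ignore_nan=True ...
print(list(nx.minimum_spanning_edges(H, algorithm="prim", ignore_nan=True)))
# ... and raise ValueError when ignore_nan=False.
```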
| 2023-03-10T21:23:49 |
|
networkx/networkx | 6,503 | networkx__networkx-6503 | [
"6488",
"6490"
] | d76f3bfa9a26c3956323764d14868bf3ff8f7e24 | diff --git a/networkx/algorithms/tree/operations.py b/networkx/algorithms/tree/operations.py
--- a/networkx/algorithms/tree/operations.py
+++ b/networkx/algorithms/tree/operations.py
@@ -78,25 +78,13 @@ def join(rooted_trees, label_attribute=None):
# Get the relabeled roots.
roots = [
- next(v for v, d in tree.nodes(data=True) if d.get("_old") == root)
+ next(v for v, d in tree.nodes(data=True) if d.get(label_attribute) == root)
for tree, root in zip(trees, roots)
]
- # Remove the old node labels.
+ # Add all sets of nodes and edges, attributes
for tree in trees:
- for v in tree:
- tree.nodes[v].pop("_old")
-
- # Add all sets of nodes and edges, with data.
- nodes = (tree.nodes(data=True) for tree in trees)
- edges = (tree.edges(data=True) for tree in trees)
- R.add_nodes_from(chain.from_iterable(nodes))
- R.add_edges_from(chain.from_iterable(edges))
-
- # Add graph attributes; later attributes take precedent over earlier
- # attributes.
- for tree in trees:
- R.graph.update(tree.graph)
+ R.update(tree)
# Finally, join the subtrees at the root. We know 0 is unused by the
# way we relabeled the subtrees.
| diff --git a/networkx/algorithms/tree/tests/test_operations.py b/networkx/algorithms/tree/tests/test_operations.py
--- a/networkx/algorithms/tree/tests/test_operations.py
+++ b/networkx/algorithms/tree/tests/test_operations.py
@@ -2,10 +2,20 @@
"""
+from itertools import chain
+
import networkx as nx
from networkx.utils import edges_equal, nodes_equal
+def _check_label_attribute(input_trees, res_tree, label_attribute="_old"):
+ res_attr_dict = nx.get_node_attributes(res_tree, label_attribute)
+ res_attr_set = set(res_attr_dict.values())
+ input_label = (list(tree[0].nodes()) for tree in input_trees)
+ input_label_set = set(chain.from_iterable(input_label))
+ return res_attr_set == input_label_set
+
+
class TestJoin:
"""Unit tests for the :func:`networkx.tree.join` function."""
@@ -24,14 +34,18 @@ def test_single(self):
"""
T = nx.empty_graph(1)
- actual = nx.join([(T, 0)])
+ trees = [(T, 0)]
+ actual = nx.join(trees)
expected = nx.path_graph(2)
assert nodes_equal(list(expected), list(actual))
assert edges_equal(list(expected.edges()), list(actual.edges()))
+ assert _check_label_attribute(trees, actual)
def test_basic(self):
"""Tests for joining multiple subtrees at a root node."""
trees = [(nx.full_rary_tree(2, 2**2 - 1), 0) for i in range(2)]
- actual = nx.join(trees)
+ label_attribute = "old_values"
+ actual = nx.join(trees, label_attribute)
expected = nx.full_rary_tree(2, 2**3 - 1)
assert nx.is_isomorphic(actual, expected)
+ assert _check_label_attribute(trees, actual, label_attribute)
| Join operation in trees---not handling label_attribute
[https://github.com/networkx/networkx/blob/main/networkx/algorithms/tree/operations.py](https://github.com/networkx/networkx/blob/main/networkx/algorithms/tree/operations.py)
1. The graph resulting from the join operation on trees doesn't include the old labels of the inputs.
2. The case where label_attribute is passed as an argument is not handled.
### Current Behavior

### Expected Behavior

### Steps to Reproduce
As shown above
### Environment
Python version: 3.10.6
NetworkX version: 3.0
### Additional context
[https://networkx.org/documentation/stable/reference/algorithms/generated/networkx.algorithms.tree.operations.join.html](https://networkx.org/documentation/stable/reference/algorithms/generated/networkx.algorithms.tree.operations.join.html)
Improve test coverage for operations.py (join)
### Current Behavior
https://app.codecov.io/gh/networkx/networkx/blob/main/networkx/algorithms/tree/operations.py the current test coverage is 92.8%. There are still some cases that need to be handled.
### Expected Behavior
https://networkx.org/documentation/stable/reference/algorithms/generated/networkx.algorithms.tree.operations.join.html
1. A test case to check label_attribute should be added (a sketch follows this list).
2. In the documentation it's written that the inputs must be trees, but this function works for graphs too. Could you tell me whether it's only for trees, or for graphs as well?
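A sketch of the label-attribute behavior such a test should pin down, assuming the fixed behavior from the patch above:

```python
import networkx as nx

trees = [(nx.path_graph(2), 0), (nx.path_graph(2), 0)]
R = nx.join(trees, label_attribute="old")

# Each relabeled node records its original label under "old"; the set of
# recorded labels matches the input trees' node labels ({0, 1} here).
print(nx.get_node_attributes(R, "old"))
```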
### Steps to Reproduce
### Environment
Python version:3.10.6
NetworkX version:3.0
### Additional context
| 2023-03-13T12:34:28 |
|
networkx/networkx | 6,519 | networkx__networkx-6519 | [
"6510"
] | 1034b689cadfc8a854745043783ba53a73cbdf36 | diff --git a/networkx/readwrite/graph6.py b/networkx/readwrite/graph6.py
--- a/networkx/readwrite/graph6.py
+++ b/networkx/readwrite/graph6.py
@@ -121,7 +121,7 @@ def bits():
G = nx.Graph()
G.add_nodes_from(range(n))
- for (i, j), b in zip([(i, j) for j in range(1, n) for i in range(j)], bits()):
+ for (i, j), b in zip(((i, j) for j in range(1, n) for i in range(j)), bits()):
if b:
G.add_edge(i, j)
| Memory issue with read_graph6
I'm having memory issues when loading some moderately large graphs stored in the graph6 format (they were produced by `networkx` by the way, and neither generating them nor writing them to file using `write_graph6` raised any issue).
### Current Behavior
When loading the file using `read_graph6`, all memory is quickly used up and the system will become unresponsive if I fail to kill `python`.
### Expected Behavior
The file should be read and loaded into an `nx.Graph` object without any issue.
### Steps to Reproduce
Decompress [the attached file](https://github.com/networkx/networkx/files/10989656/problematic-instance-for-read_graph6.g6.zip) (warning: decompressed file size is about 130M), open `top` in another terminal to "see" the problem, and then simply try to initialise the graph:
>>> import networkx as nx
>>> graph = nx.read_graph6("problematic-instance-for-read_graph6.g6")
# be ready to kill the interpreter
### Environment
Python version: 3.9.2
NetworkX version: 2.5
OS: Debian 11.6
### Additional context
The graph contains "only" 40,320 nodes and 564,480 edges, and `nauty-showg` has no problem displaying its contents in under a second. Smaller graphs of the same family yield no issue, but they're an order of magnitude smaller (the largest similar graph that poses no problem has 5,040 nodes and 52,920 edges).
If you have `nauty-showg`, you can load the graph without any issue as follows:
>>> import subprocess
>>> import networkx as nx
>>> G = nx.Graph()
>>> output = subprocess.check_output(("nauty-showg", "-l0", "problematic-instance-for-read_graph6.g6"))
>>> for row in output.decode().split('\n')[2:]:
... line = row.strip()[:-1].split()
... G.add_edges_from((int(line[0]), int(x)) for x in line[2:])
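The one-line fix in the patch above addresses exactly this: the old code materialized every candidate node pair before decoding a single bit, and for n = 40,320 that list holds n*(n-1)/2, roughly 8.1e8 tuples. A sketch of the difference:

```python
n = 40_320

# Old: builds the full list of pairs up front, before zip() reads anything.
# pairs = [(i, j) for j in range(1, n) for i in range(j)]

# Fixed: a generator expression yields one pair per bit, lazily.
pairs = ((i, j) for j in range(1, n) for i in range(j))
print(next(pairs))  # (0, 1)
```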
| 2023-03-18T18:23:34 |
||
networkx/networkx | 6,558 | networkx__networkx-6558 | [
"6533"
] | c7218d53caa7c8490de89a0950cf539825e6109a | diff --git a/networkx/algorithms/connectivity/kcutsets.py b/networkx/algorithms/connectivity/kcutsets.py
--- a/networkx/algorithms/connectivity/kcutsets.py
+++ b/networkx/algorithms/connectivity/kcutsets.py
@@ -92,10 +92,11 @@ def all_node_cuts(G, k=None, flow_func=None):
# Address some corner cases first.
# For complete Graphs
+
if nx.density(G) == 1:
- for cut_set in combinations(G, len(G) - 1):
- yield set(cut_set)
+ yield from ()
return
+
# Initialize data structures.
# Keep track of the cuts already computed so we do not repeat them.
seen = []
@@ -129,7 +130,7 @@ def all_node_cuts(G, k=None, flow_func=None):
for x in X:
# step 3: Compute local connectivity flow of x with all other
# non adjacent nodes in G
- non_adjacent = set(G) - X - set(G[x])
+ non_adjacent = set(G) - {x} - set(G[x])
for v in non_adjacent:
# step 4: compute maximum flow in an Even-Tarjan reduction H of G
# and step 5: build the associated residual network R
| diff --git a/networkx/algorithms/connectivity/tests/test_kcutsets.py b/networkx/algorithms/connectivity/tests/test_kcutsets.py
--- a/networkx/algorithms/connectivity/tests/test_kcutsets.py
+++ b/networkx/algorithms/connectivity/tests/test_kcutsets.py
@@ -259,8 +259,15 @@ def test_cycle_graph():
def test_complete_graph():
G = nx.complete_graph(5)
- solution = [{0, 1, 2, 3}, {0, 1, 2, 4}, {0, 1, 3, 4}, {0, 2, 3, 4}, {1, 2, 3, 4}]
- cuts = list(nx.all_node_cuts(G))
- assert len(solution) == len(cuts)
- for cut in cuts:
- assert cut in solution
+ assert nx.node_connectivity(G) == 4
+ assert list(nx.all_node_cuts(G)) == []
+
+
+def test_all_node_cuts_simple_case():
+ G = nx.complete_graph(5)
+ G.remove_edges_from([(0, 1), (3, 4)])
+ expected = [{0, 1, 2}, {2, 3, 4}]
+ actual = list(nx.all_node_cuts(G))
+ assert len(actual) == len(expected)
+ for cut in actual:
+ assert cut in expected
| all_node_cuts returns incorrect cuts
The function `all_node_cuts` returns node cut-sets for K2 (the complete graph with two nodes), but none exist.
### Current Behavior
Consider the following graph, consisting of two nodes connected by an edge:
```python
graph = nx.Graph()
graph.add_edge(1,2)
list(all_node_cuts(graph))
>>> [{1}, {2}]
```
This graph has no node cut-sets because removing any vertex does not increase the number of connected components
### Expected Behavior
Return no node cut-sets:
```python
graph = nx.Graph()
graph.add_edge(1,2)
list(all_node_cuts(graph))
>>> []
```
### Steps to Reproduce
The above example
### Environment
Python version: 3.10.10
NetworkX version: 2.6.3
| I think this issue has to do with complete graphs in general, because for a complete graph with `n` nodes, the connectivity is `n-1` which is the size of every node cut-set.
_A node cut-set of an undirected graph G is a set of nodes that, if removed, would break G into two or more connected components._
For a K_n graph, we would have to remove n-1 nodes, which would break the graph into two components: one with a single node and another with no nodes.
If we assume an empty graph to be a component by itself, then perhaps the output of `all_node_cuts` is valid, else maybe an exception can be raised whenever a complete graph is passed in.
TL;DR: Issue #3025 points out a few bugs which were fixed in #3039. They left the complete graph case in, but it seems that it is not handled correctly.
Looking through the blame history of this function, it seems that bug reports such as #1875 have been fixed by adding extra code to handle those cases rather than fixing the underlying algorithm. Special code for e.g. cycles was also added but has since been removed without removing the code for complete graphs. I'm not sure why this approach was taken. Was the underlying algorithm investigated and found to be unable to handle those cases? How do we know the underlying algorithm is correct? I've been playing with it and it seems to work fine.
The tests use `node_connectivity` to give the size of the minimal cutset. The docstring for `node_connectivity` mentions that the connectivity is the number of nodes that need to be removed so that the graph is disconnected **or the trivial graph** (trivial graph is one node, no edges). So, I believe the author considered a node cut-set to be the nodes which when removed leave a disconnected or trivial graph.
By the definitions of node cut or vertex cut that I have found, it is only a cut if the resulting graph is disconnected. That is different from what is implemented here **I think**.
But are there graphs that are not complete graphs that have no node cut-set? That is, there is no set of nodes which when removed leaves the graph disconnected? I don't think so... because if you consider an edge that isn't present in the graph, then removing all but the two nodes on either side of the missing edge will leave the two nodes -- a disconnected graph. I conclude that the only case where we might end up with the trivial graph is if we start with a complete graph.
The [wikipedia article on connectivity](https://en.wikipedia.org/wiki/Connectivity_(graph_theory)) states that "In particular, a complete graph with n vertices, denoted Kn, has no vertex cuts at all, but the vertex-connectivity is n − 1."
So I am left thinking that the original implementation of this function was correct, and the specialized code to handle complete graphs was added (incorrectly) to produce "cuts" which lead to trivial graphs rather than disconnected graphs. That is, the added code made the node_connectivity equal to the size of the node cut-set. That trait is true for all graphs except complete graphs.
OK... enough of this train of thought writing. :}
I think we should remove the code that handles complete graphs specially and fix the tests to check for equality of node_connectivity with the length of the cut sets unless the graph is a complete graph.
Thoughts?
I too read a couple of articles yesterday that discussed cut sets with regard to complete graphs. (refer [CMU lecture notes](https://www.math.cmu.edu/~af1p/Teaching/GT/CH3.pdf), [Wolfram MathWorld](https://mathworld.wolfram.com/VertexCut.html)).
All sources said that complete graphs have no vertex cut, even though the vertex-connectivity is n − 1.
I think we should remove the code that handles the case of complete graphs. The test cases should check that `list(all_node_cuts(KN_graph))` is empty for complete graphs, and of size node_connectivity in other cases.
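A sketch of what those tests look like under the fix, taken from the test patch above:

```python
import networkx as nx

G = nx.complete_graph(5)
assert nx.node_connectivity(G) == 4
assert list(nx.all_node_cuts(G)) == []  # complete graphs have no vertex cuts

H = nx.complete_graph(5)
H.remove_edges_from([(0, 1), (3, 4)])   # no longer complete
print(list(nx.all_node_cuts(H)))        # [{0, 1, 2}, {2, 3, 4}]
```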
I agree. Make the code return an empty set when the input is a complete graph. I think this just means deleting the special case handling code, but we better check that. :)
And let's change the current tests for complete graphs and change them to check the empty cut return value while also checking that node_connectivity is n-1 in that case. | 2023-03-26T20:52:36 |
networkx/networkx | 6,563 | networkx__networkx-6563 | [
"6562"
] | c7218d53caa7c8490de89a0950cf539825e6109a | diff --git a/networkx/algorithms/chordal.py b/networkx/algorithms/chordal.py
--- a/networkx/algorithms/chordal.py
+++ b/networkx/algorithms/chordal.py
@@ -73,6 +73,8 @@ def is_chordal(G):
search. It returns False when it finds that the separator for any node
is not a clique. Based on the algorithms in [1]_.
+ Self loops are ignored.
+
References
----------
.. [1] R. E. Tarjan and M. Yannakakis, Simple linear-time algorithms
@@ -80,6 +82,8 @@ def is_chordal(G):
selectively reduce acyclic hypergraphs, SIAM J. Comput., 13 (1984),
pp. 566–579.
"""
+ if len(G.nodes) <= 3:
+ return True
return len(_find_chordality_breaker(G)) == 0
@@ -130,6 +134,8 @@ def find_induced_nodes(G, s, t, treewidth_bound=sys.maxsize):
The algorithm is inspired by Algorithm 4 in [1]_.
A formal definition of induced node can also be found on that reference.
+ Self Loops are ignored
+
References
----------
.. [1] Learning Bounded Treewidth Bayesian Networks.
@@ -326,9 +332,9 @@ def _find_chordality_breaker(G, s=None, treewidth_bound=sys.maxsize):
If it does find one, it returns (u,v,w) where u,v,w are the three
nodes that together with s are involved in the cycle.
+
+ It ignores any self loops.
"""
- if nx.number_of_selfloops(G) > 0:
- raise nx.NetworkXError("Input graph is not chordal.")
unnumbered = set(G)
if s is None:
s = arbitrary_element(G)
| diff --git a/networkx/algorithms/tests/test_chordal.py b/networkx/algorithms/tests/test_chordal.py
--- a/networkx/algorithms/tests/test_chordal.py
+++ b/networkx/algorithms/tests/test_chordal.py
@@ -60,11 +60,11 @@ def test_is_chordal(self):
assert not nx.is_chordal(self.non_chordal_G)
assert nx.is_chordal(self.chordal_G)
assert nx.is_chordal(self.connected_chordal_G)
+ assert nx.is_chordal(nx.Graph())
assert nx.is_chordal(nx.complete_graph(3))
assert nx.is_chordal(nx.cycle_graph(3))
assert not nx.is_chordal(nx.cycle_graph(5))
- with pytest.raises(nx.NetworkXError, match="Input graph is not chordal"):
- nx.is_chordal(self.self_loop_G)
+ assert nx.is_chordal(self.self_loop_G)
def test_induced_nodes(self):
G = nx.generators.classic.path_graph(10)
| is_chordal crashes on empty graphs
### Current Behavior
Running `nx.is_chordal` on an empty graph raises `StopIteration`.
### Expected Behavior
The function should simply return `True`, since the definition simply requires that all cycles of length >= 4 contain a chord. Paths or trees are chordal since they have no cycles, and the function does not crash on these, nor on edgeless graphs.
### Steps to Reproduce
Just create an empty graph and run the function:
```python
>>> import networkx as nx
>>> graph = nx.Graph()
>>> nx.is_chordal(graph)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3/dist-packages/networkx/algorithms/chordal.py", line 89, in is_chordal
if len(_find_chordality_breaker(G)) == 0:
File "/usr/lib/python3/dist-packages/networkx/algorithms/chordal.py", line 320, in _find_chordality_breaker
s = arbitrary_element(G)
File "/usr/lib/python3/dist-packages/networkx/utils/misc.py", line 233, in arbitrary_element
return next(iter(iterable))
StopIteration
```
### Environment
Python version: 3.9.2
NetworkX version: 2.5
### Additional context
| When `is_chordal` is called, this statement is executed:
```python
return len(_find_chordality_breaker(G)) == 0
```
Since no source `s` has been specified in `_find_chordality_breaker`, the following statements execute, which raises `StopIteration` because there are no nodes in G.
```python
if s is None:
s = arbitrary_element(G)
```
I believe a specific condition needs to be added to `is_chordal` that returns True when `len(G.nodes) == 0`, in fact this condition can be extended to `len(G.nodes) <= 3` because such a graph will always be chordal.
Yes I think `len(G.nodes) <= 3` will be the correct condition to implement here as a graph with only one or two vertices is trivially chordal because it contains no cycle. A graph with three vertices can have at most three edges, and every cycle of length four or more would require at least four edges, so no such cycle can exist in a graph with three vertices.
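A sketch of that condition in action, following the updated tests in the patch above:

```python
import networkx as nx

# With the early return for len(G.nodes) <= 3, these all hold without crashing:
assert nx.is_chordal(nx.Graph())             # empty graph
assert nx.is_chordal(nx.complete_graph(3))   # triangle
assert not nx.is_chordal(nx.cycle_graph(5))  # chordless 5-cycle
```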
@alabarre @navyagarwal Are you guys working on this issue, or can I take it up?
@PurviChaurasia your contributions indicate that you're more familiar with the code base than I am, so I'll gladly let you make the change. The suggested addition seems to be the only required change indeed.
Thanks!
Thanks a lot :)
I'll add a bit more context for the change in the docstring as well and add relevant tests.
> I'll add a bit more context for the change in the docstring as well and add relevant tests.
@PurviChaurasia Hey, I'm already working on it, was just adding test cases when I saw the comments 😅
There is actually a wrong test case in `test_chordal.py` on line 63. It says:
`assert nx.is_chordal(nx.complete_graph(3))`
I think we'll have to comment that out as well.
@navyagarwal if it's fine by you could we be co-authors on this PR? I was actually just about to submit mine when I saw your message :)
> There is actually a wrong test case in `test_chordal.py` in line 63. It is mentioned: `assert nx.is_chordal(nx.complete_graph(3))` I think we'll have to comment that out as well.
No, but why? The test case looks alright to me
Yep meant that only!
> > There is actually a wrong test case in `test_chordal.py` in line 63. It is mentioned: `assert nx.is_chordal(nx.complete_graph(3))` I think we'll have to comment that out as well.
>
> No, but why? The test case looks alright to me
did you try running the test case after making the changes?
Wait, you're right, there was an issue on my end. | 2023-03-28T14:31:26 |
networkx/networkx | 6,570 | networkx__networkx-6570 | [
"6553"
] | d59f31228cdc7cc289221d6a32d74f593fb7dddc | diff --git a/networkx/algorithms/shortest_paths/unweighted.py b/networkx/algorithms/shortest_paths/unweighted.py
--- a/networkx/algorithms/shortest_paths/unweighted.py
+++ b/networkx/algorithms/shortest_paths/unweighted.py
@@ -216,6 +216,13 @@ def bidirectional_shortest_path(G, source, target):
NetworkXNoPath
If no path exists between source and target.
+ Examples
+ --------
+ >>> G = nx.Graph()
+ >>> nx.add_path(G, [0, 1, 2, 3, 0, 4, 5, 6, 7, 4])
+ >>> nx.bidirectional_shortest_path(G, 2, 6)
+ [2, 1, 0, 4, 5, 6]
+
See Also
--------
shortest_path
| Add examples to functions in the shortest path folder.
I wish to add examples to the following functions for better understanding (a runnable version of the doctest added in the patch is sketched after this list):
1. def _dijkstra_multisource, def _bellman_ford, def _inner_bellman_ford [here](https://github.com/networkx/networkx/blob/main/networkx/algorithms/shortest_paths/weighted.py)
2. def bidirectional_shortest_path(G, source, target), def _single_shortest_path(adj, firstlevel, paths, cutoff, join) [here](https://github.com/networkx/networkx/blob/main/networkx/algorithms/shortest_paths/unweighted.py)
3. def floyd_warshall_numpy(G, nodelist=None, weight="weight"), def floyd_warshall(G, weight="weight") [here](https://github.com/networkx/networkx/blob/main/networkx/algorithms/shortest_paths/dense.py)
4. def astar_path_length(G, source, target, heuristic=None, weight="weight") [here](https://github.com/networkx/networkx/blob/main/networkx/algorithms/shortest_paths/astar.py)
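For reference, the doctest added for `bidirectional_shortest_path` in the patch above, as a runnable sketch:

```python
import networkx as nx

G = nx.Graph()
nx.add_path(G, [0, 1, 2, 3, 0, 4, 5, 6, 7, 4])
print(nx.bidirectional_shortest_path(G, 2, 6))  # [2, 1, 0, 4, 5, 6]
```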
| 2023-03-28T19:44:36 |
||
networkx/networkx | 6,600 | networkx__networkx-6600 | [
"6594"
] | 7b8dea857a015f06c0305edbc0b0ca8314f81f2e | diff --git a/networkx/algorithms/isomorphism/ismags.py b/networkx/algorithms/isomorphism/ismags.py
--- a/networkx/algorithms/isomorphism/ismags.py
+++ b/networkx/algorithms/isomorphism/ismags.py
@@ -184,8 +184,8 @@ def make_partitions(items, test):
def partition_to_color(partitions):
"""
- Creates a dictionary with for every item in partition for every partition
- in partitions the index of partition in partitions.
+ Creates a dictionary that maps each item in each partition to the index of
+ the partition to which it belongs.
Parameters
----------
| Error in method description in ismags.py
The docstring of the `partition_to_color` method in ismags.py seems off to me. The description is not clear, and it's hard to understand what the method is supposed to do.
```python
def partition_to_color(partitions):
"""
Creates a dictionary with for every item in partition for every partition
in partitions the index of partition in partitions.
Parameters
----------
partitions: collections.abc.Sequence[collections.abc.Iterable]
As returned by :func:`make_partitions`.
Returns
-------
dict
"""
colors = {}
for color, keys in enumerate(partitions):
for key in keys:
colors[key] = color
return colors
```
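A concrete call may make the behavior clearer; using the body quoted above:

```python
def partition_to_color(partitions):
    colors = {}
    for color, keys in enumerate(partitions):
        for key in keys:
            colors[key] = color
    return colors

# Every item is mapped to the index of the partition containing it.
print(partition_to_color([["a", "b"], ["c"]]))  # {'a': 0, 'b': 0, 'c': 1}
```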
I think the following description explains the method better.
```python
def partition_to_color(partitions):
"""
Creates a dictionary that maps each item in each partition to the index of
the partition it belongs to
"""
```
If the new description looks alright, I'll go ahead and make the changes.
| 2023-04-01T10:40:46 |
||
networkx/networkx | 6,611 | networkx__networkx-6611 | [
"6547"
] | f26a1705f57c83838a966cc847fc52f78ff7314c | diff --git a/networkx/algorithms/isomorphism/ismags.py b/networkx/algorithms/isomorphism/ismags.py
--- a/networkx/algorithms/isomorphism/ismags.py
+++ b/networkx/algorithms/isomorphism/ismags.py
@@ -1,7 +1,6 @@
"""
-****************
ISMAGS Algorithm
-****************
+================
Provides a Python implementation of the ISMAGS algorithm. [1]_
@@ -90,21 +89,21 @@
Notes
-----
- - The current implementation works for undirected graphs only. The algorithm
- in general should work for directed graphs as well though.
- - Node keys for both provided graphs need to be fully orderable as well as
- hashable.
- - Node and edge equality is assumed to be transitive: if A is equal to B, and
- B is equal to C, then A is equal to C.
+- The current implementation works for undirected graphs only. The algorithm
+ in general should work for directed graphs as well though.
+- Node keys for both provided graphs need to be fully orderable as well as
+ hashable.
+- Node and edge equality is assumed to be transitive: if A is equal to B, and
+ B is equal to C, then A is equal to C.
References
----------
- .. [1] M. Houbraken, S. Demeyer, T. Michoel, P. Audenaert, D. Colle,
- M. Pickavet, "The Index-Based Subgraph Matching Algorithm with General
- Symmetries (ISMAGS): Exploiting Symmetry for Faster Subgraph
- Enumeration", PLoS One 9(5): e97896, 2014.
- https://doi.org/10.1371/journal.pone.0097896
- .. [2] https://en.wikipedia.org/wiki/Maximum_common_induced_subgraph
+.. [1] M. Houbraken, S. Demeyer, T. Michoel, P. Audenaert, D. Colle,
+ M. Pickavet, "The Index-Based Subgraph Matching Algorithm with General
+ Symmetries (ISMAGS): Exploiting Symmetry for Faster Subgraph
+ Enumeration", PLoS One 9(5): e97896, 2014.
+ https://doi.org/10.1371/journal.pone.0097896
+.. [2] https://en.wikipedia.org/wiki/Maximum_common_induced_subgraph
"""
__all__ = ["ISMAGS"]
| Layout of website page for ISMAGS needs fixing
The website documentation page for [ISMAGS algorithm](https://networkx.org/documentation/stable/reference/algorithms/isomorphism.ismags.html) is not rendering properly. The layout of components is messed up.
<img width="959" alt="image" src="https://user-images.githubusercontent.com/82928853/227500999-8a7f51a5-f04e-41b1-9c0c-b17e9be9b3dc.png">
<img width="960" alt="image" src="https://user-images.githubusercontent.com/82928853/227501167-c14302bd-9b80-4139-9a6a-95d1bd4e8b0b.png">
| @dschult @MridulS I looked into the ismags.py file but I'm not able to figure out why this is happening, could you help out a bit?
This looks like a bad interaction between sphinx extensions and the pydata-sphinx-theme. My guess is that one or more of `numpydoc` or `sphinx-autodoc` is mis-identifying some sections from the module docstring which is screwing up the heading levels/object tags for the rest of the docstring.
This should likely be fixed upstream, though we'll need to figure out which sphinx extension is behaving badly. In the mean time we can probably use a simple workaround to fix the docs. | 2023-04-03T03:56:36 |
|
networkx/networkx | 6,612 | networkx__networkx-6612 | [
"6443"
] | f26a1705f57c83838a966cc847fc52f78ff7314c | diff --git a/networkx/readwrite/gexf.py b/networkx/readwrite/gexf.py
--- a/networkx/readwrite/gexf.py
+++ b/networkx/readwrite/gexf.py
@@ -569,7 +569,7 @@ def add_viz(self, element, node_data):
r=str(color.get("r")),
g=str(color.get("g")),
b=str(color.get("b")),
- a=str(color.get("a")),
+ a=str(color.get("a", 1.0)),
)
element.append(e)
| diff --git a/networkx/readwrite/tests/test_gexf.py b/networkx/readwrite/tests/test_gexf.py
--- a/networkx/readwrite/tests/test_gexf.py
+++ b/networkx/readwrite/tests/test_gexf.py
@@ -491,6 +491,16 @@ def test_missing_viz_attributes(self):
sorted(e) for e in H.edges()
)
+ # Test missing alpha value for version >draft1.1 - set default alpha value
+ # to 1.0 instead of `None` when writing for better general compatibility
+ fh = io.BytesIO()
+ # G.nodes[0]["viz"]["color"] does not have an alpha value explicitly defined
+ # so the default is used instead
+ nx.write_gexf(G, fh, version="1.2draft")
+ fh.seek(0)
+ H = nx.read_gexf(fh, node_type=int)
+ assert H.nodes[0]["viz"]["color"]["a"] == 1.0
+
# Second graph for the other branch
G = nx.Graph()
G.add_node(0, label="1", color="green")
| Gexf export sets the rgba's alpha-value to "None" if missing (gexf schema version 1.1 import)
### Current Behavior
When I import a gexf file that has schema version 1.1, only rgb values are used for the color definitions, for example:
`
<node id="1789" label="1789">
<ns0:color b="41" g="255" r="0" />...
`
When I then export the imported graph to a gexf file again using networkx (using nx.write_gexf with the default gexf version '1.2draft'), it adds the alpha value "None":
`
<node id="1789" label="1789">
<viz:color r="0" g="255" b="41" a="None" />...
`
### Expected Behavior
In my opinion it would make more sense to use a default value of _a="1.0"_ to retain compatibility. For example, Gephi 0.10.1 cannot handle _alpha="None"_ values.
### Steps to Reproduce
You can download the dataset [Schoolday](https://www.michihenninger.ch/youtube/schoolday.gexf) and execute the following code:
`schoolday_graph = nx.read_gexf("data/schoolday.gexf")`
`nx.write_gexf(schoolday_graph, path="data/schoolday_new.gexf")`
Open the saved _schoolday_new.gexf_ file with a text editor. You can see that it uses schema version 1.2 and has every alpha value set to None.
### Current workaround:
The current workaround I use is to read and write the gexf file in networkx as described in the reproduce section. Then I open the file with a text editor and replace all _a="None"_ with _a="1.0"_.
### Environment
Python version: 3.9.16
NetworkX version: 3.0
Gephi: 0.10.1
| Is this a behavior change you noticed between NetworkX versions? In other words, did this work the way you expected in a prior version but changed in 3.0? I don't think the support for different GEXF versions is really that great on the NetworkX side of things, so I wouldn't be surprised if this was something that was just "broken" for a long time.
[Line 572 of networkx/readwrite/gexf.py](https://github.com/networkx/networkx/blob/34d77cf4ed2aa6b69a34f617607d46b572faf45f/networkx/readwrite/gexf.py#L572) is where the difference between version 1.1draft and 1.2draft is processed in NetworkX.
Indeed, if a color is specified and there is no alpha value associated with that color, the value `"None"` will appear in the resulting file. One could say this is simply enforcing the requirement in v1.2draft that all colors have an alpha value. But I agree that this is somewhat limiting, especially when shifting from older color systems that don't include the alpha value.
One fix would be to change line 572 to provide a default value of 1.0 for the alpha value of the color. Do we see any downside to making that shift? I would be fine with a default for alpha and not for red/green/blue, because I doubt many people leave out only one of red/green/blue. They might leave out all 3 (4 including alpha), but then the "None" value seems like an OK result in that case.
This might be an easy fix.
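For reference, the round trip the final test pins down -- a sketch adapted from the test patch above (the minimal node setup here is my own assumption):

```python
import io
import networkx as nx

G = nx.Graph()
G.add_node(0, viz={"color": {"r": 0, "g": 255, "b": 41}})  # no alpha given

fh = io.BytesIO()
nx.write_gexf(G, fh, version="1.2draft")
fh.seek(0)
H = nx.read_gexf(fh, node_type=int)
assert H.nodes[0]["viz"]["color"]["a"] == 1.0  # default alpha applied
```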
@rossbar: I never had problems before networkx version 3.0. @dschult did explain it really well. gexf version 1.1 did not support alpha values. Here is the schema of gexf v1.1 (https://gexf.net/1.1/viz.rng):
```
<!-- new point -->
<define name="color-content">
<interleave>
<attribute name="r">
<ref name="color-channel"/>
</attribute>
<attribute name="g">
<ref name="color-channel"/>
</attribute>
<attribute name="b">
<ref name="color-channel"/>
</attribute>
</interleave>
</define>
```
and 1.2 supports alpha values (https://gexf.net/1.2/viz.rng):
```
<define name="color-content">
<interleave>
<attribute name="r">
<ref name="color-channel"/>
</attribute>
<attribute name="g">
<ref name="color-channel"/>
</attribute>
<attribute name="b">
<ref name="color-channel"/>
</attribute>
<optional>
<attribute name="a">
<ref name="alpha-channel"/>
</attribute>
</optional>
```
I think there is no downside to using "1.0" as the default when no alpha value is given, because when no alpha information is available, I would assume that the color should not be transparent. rgba(245, 40, 145, 1) is the same as rgb(245, 40, 145). Another (maybe even more correct) solution is to simply not set an alpha value when not available, because it's marked as optional in the gexf 1.2 schema. I just checked that (I opened a gexf file of version 1.2draft in a text editor, removed all alpha values, and could still open it in Gephi).
I agree that the solution (either use "1.0" as the default or simply don't write it if not available) should not be a big deal to implement.
@rossbar @dschult : I'm going to make a Pull Request and solve it by simply not setting the alpha value if not available.
@michihenninger Are you still doing this, or should I get on it?
Hi @dschult @michihenninger @MridulS If no one's working on this issue, I'd like to take it up!
> Hi @dschult @michihenninger @MridulS If no one's working on this issue, I'd like to take it up!
Working on it. Will soon link a pr ;)
> > Hi @dschult @michihenninger @MridulS If no one's working on this issue, I'd like to take it up!
>
> Working on it. Will soon link a pr ;)
@Qudirah Thanks a lot. I'm a bit busy these days. It's still on my TODO list ;-)
> > > Hi @dschult @michihenninger @MridulS If no one's working on this issue, I'd like to take it up!
> >
> >
> > Working on it. Will soon link a pr ;)
>
> @Qudirah Thanks a lot. I'm a bit busy these days. It's still on my TODO list ;-)
Anytime! | 2023-04-03T04:51:27 |
networkx/networkx | 6,673 | networkx__networkx-6673 | [
"6603"
] | 51347f71a41c07f61ee6cf039ddeab20a1d25ad0 | diff --git a/networkx/algorithms/coloring/equitable_coloring.py b/networkx/algorithms/coloring/equitable_coloring.py
--- a/networkx/algorithms/coloring/equitable_coloring.py
+++ b/networkx/algorithms/coloring/equitable_coloring.py
@@ -384,12 +384,13 @@ def procedure_P(V_minus, V_plus, N, H, F, C, L, excluded_colors=None):
def equitable_color(G, num_colors):
- """Provides equitable (r + 1)-coloring for nodes of G in O(r * n^2) time
- if deg(G) <= r. The algorithm is described in [1]_.
+ """Provides an equitable coloring for nodes of `G`.
- Attempts to color a graph using r colors, where no neighbors of a node
- can have same color as the node itself and the number of nodes with each
- color differ by at most 1.
+ Attempts to color a graph using `num_colors` colors, where no neighbors of
+ a node can have same color as the node itself and the number of nodes with
+ each color differ by at most 1. `num_colors` must be greater than the
+ maximum degree of `G`. The algorithm is described in [1]_ and has
+ complexity O(num_colors * n**2).
Parameters
----------
@@ -408,15 +409,13 @@ def equitable_color(G, num_colors):
Examples
--------
>>> G = nx.cycle_graph(4)
- >>> d = nx.coloring.equitable_color(G, num_colors=3)
- >>> nx.algorithms.coloring.equitable_coloring.is_equitable(G, d)
- True
+ >>> nx.coloring.equitable_color(G, num_colors=3)
+ {0: 2, 1: 1, 2: 2, 3: 0}
Raises
------
NetworkXAlgorithmError
- If the maximum degree of the graph ``G`` is greater than
- ``num_colors``.
+ If `num_colors` is not at least the maximum degree of the graph `G`
References
----------
| equitable_coloring file is not properly documented
The functions in networkx/algorithms/coloring/equitable_coloring.py are not properly documented. It needs docstring improvement. @MridulS @rossbar is this something I can go ahead with?
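For context, the user-facing function's behavior as shown in the revised docstring above:

```python
import networkx as nx

G = nx.cycle_graph(4)
print(nx.coloring.equitable_color(G, num_colors=3))
# e.g. {0: 2, 1: 1, 2: 2, 3: 0}: neighbors get distinct colors and
# color-class sizes differ by at most one
```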
| The only function exported from this module is `equitable_color`, which does appear to be properly documented. Some of the other functions seem like they are intended for direct use (e.g. `is_coloring` and `is_equitable`), while others definitely seem like "internal" functions that should likely not be used directly in user code (`make_*_from_*`, `procedure_P`, etc.).
Docstring improvements are always welcome, though I'd start by focusing on `equitable_color` since that's the only function that's clearly user-facing. The docstring for `equitable_color` generally LGTM but I'm sure there are opportunities to improve it. | 2023-05-01T05:32:00 |
|
networkx/networkx | 6,674 | networkx__networkx-6674 | [
"6574"
] | 049794fea380bea1c0a8d35e9a4602ed94a02b30 | diff --git a/networkx/algorithms/approximation/clique.py b/networkx/algorithms/approximation/clique.py
--- a/networkx/algorithms/approximation/clique.py
+++ b/networkx/algorithms/approximation/clique.py
@@ -118,9 +118,6 @@ def max_clique(G):
BIT Numerical Mathematics, 32(2), 180–196. Springer.
doi:10.1007/BF01994876
"""
- if G is None:
- raise ValueError("Expected NetworkX graph!")
-
# finding the maximum clique in a graph is equivalent to finding
# the independent set in the complementary graph
cgraph = nx.complement(G)
| Improve test coverage for clique.py
There is a line left uncovered in https://app.codecov.io/gh/networkx/networkx/blob/main/networkx/algorithms/approximation/clique.py and I am looking into it. There is a bit of an issue though when I try G = None. An attribute error is raised not a value error.
Steps to Reproduce
from networkx.algorithms.approximation.clique import maximum_independent_set,max_clique
G=nx.Graph()
G.add_nodes_from([(2,3),(5,6),(3,6)])
max_clique(G=None)
output:
AttributeError Traceback (most recent call last)
Cell In[84], line 1
----> 1 max_clique(G)
File <class 'networkx.utils.decorators.argmap'> compilation 32:3, in argmap_max_clique_28(G)
1 import bz2
2 import collections
----> 3 import gzip
4 import inspect
5 import itertools
File ~\anaconda3\lib\site-packages\networkx\utils\decorators.py:83, in not_implemented_for.<locals>._not_implemented_for(g)
82 def _not_implemented_for(g):
---> 83 if (mval is None or mval == g.is_multigraph()) and (
84 dval is None or dval == g.is_directed()
85 ):
86 raise nx.NetworkXNotImplemented(errmsg)
88 return g
AttributeError: 'NoneType' object has no attribute 'is_multigraph'
| @MridulS @rossbar @dschult
In this case I think we can safely delete the check in L109-110. I don't think it's possible to hit that due to the `not_implemented_for` decorator. Even if it were hit-able, it's a pretty trivial input validation that I think can be safely removed. | 2023-05-01T05:36:30 |
|
networkx/networkx | 6,675 | networkx__networkx-6675 | [
"6572"
] | 049794fea380bea1c0a8d35e9a4602ed94a02b30 | diff --git a/networkx/algorithms/approximation/connectivity.py b/networkx/algorithms/approximation/connectivity.py
--- a/networkx/algorithms/approximation/connectivity.py
+++ b/networkx/algorithms/approximation/connectivity.py
@@ -357,12 +357,6 @@ def _bidirectional_shortest_path(G, source, target, exclude):
def _bidirectional_pred_succ(G, source, target, exclude):
# does BFS from both source and target and meets in the middle
# excludes nodes in the container "exclude" from the search
- if source is None or target is None:
- raise nx.NetworkXException(
- "Bidirectional shortest path called without source or target"
- )
- if target == source:
- return ({target: None}, {source: None}, source)
# handle either directed or undirected
if G.is_directed():
| Improve test coverage for connectivity.py
There are a few red lines I want to work on in https://app.codecov.io/gh/networkx/networkx/blob/main/networkx/algorithms/approximation/connectivity.py.
My issue is I can't import the modified bidirectional_shortest_path function in the connectivity.py file. The one in shortest_paths/unweighted.py does not have the exclude argument. To write the test cases, I need to access the function somehow. What am I missing? @MridulS @dschult @rossbar Also, I don't know if that is why the doc examples of the function are missing in the docs, but I would like to work on that too.
| The functions that are prefixed with an underscore (`_`) are "private" by convention; i.e. they are not intended for direct use by users. Thus they are intentionally left out of the API documentation.
Internal functions like this are often intended only for use in the module in which they're defined, so they are typically only accessible by that module. For example, if you wanted to access `_bidirectional_pred_succ` you'd do something like `from networkx.algorithms.approximation.connectivity import _bidirectional_pred_succ`. | 2023-05-01T05:50:27 |
|
networkx/networkx | 6,694 | networkx__networkx-6694 | [
"6690",
"6732"
] | fae8af6011ec56aa25ae637433654a945f8426fd | diff --git a/networkx/algorithms/simple_paths.py b/networkx/algorithms/simple_paths.py
--- a/networkx/algorithms/simple_paths.py
+++ b/networkx/algorithms/simple_paths.py
@@ -169,6 +169,17 @@ def all_simple_paths(G, source, target, cutoff=None):
[0, 3, 1, 2]
[0, 3, 2]
+ The singleton path from ``source`` to itself is considered a simple path and is
+ included in the results:
+
+ >>> G = nx.empty_graph(5)
+ >>> list(nx.all_simple_paths(G, source=0, target=0))
+ [[0]]
+
+ >>> G = nx.path_graph(3)
+ >>> list(nx.all_simple_paths(G, source=0, target={0, 1, 2}))
+ [[0], [0, 1], [0, 1, 2]]
+
Iterate over each path from the root nodes to the leaf nodes in a
directed acyclic graph using a functional programming approach::
@@ -242,83 +253,8 @@ def all_simple_paths(G, source, target, cutoff=None):
all_shortest_paths, shortest_path, has_path
"""
- if source not in G:
- raise nx.NodeNotFound(f"source node {source} not in graph")
- if target in G:
- targets = {target}
- else:
- try:
- targets = set(target)
- except TypeError as err:
- raise nx.NodeNotFound(f"target node {target} not in graph") from err
- if source in targets:
- return _empty_generator()
- if cutoff is None:
- cutoff = len(G) - 1
- if cutoff < 1:
- return _empty_generator()
- if G.is_multigraph():
- return _all_simple_paths_multigraph(G, source, targets, cutoff)
- else:
- return _all_simple_paths_graph(G, source, targets, cutoff)
-
-
-def _empty_generator():
- yield from ()
-
-
-def _all_simple_paths_graph(G, source, targets, cutoff):
- visited = {source: True}
- stack = [iter(G[source])]
- while stack:
- children = stack[-1]
- child = next(children, None)
- if child is None:
- stack.pop()
- visited.popitem()
- elif len(visited) < cutoff:
- if child in visited:
- continue
- if child in targets:
- yield list(visited) + [child]
- visited[child] = True
- if targets - set(visited.keys()): # expand stack until find all targets
- stack.append(iter(G[child]))
- else:
- visited.popitem() # maybe other ways to child
- else: # len(visited) == cutoff:
- for target in (targets & (set(children) | {child})) - set(visited.keys()):
- yield list(visited) + [target]
- stack.pop()
- visited.popitem()
-
-
-def _all_simple_paths_multigraph(G, source, targets, cutoff):
- visited = {source: True}
- stack = [(v for u, v in G.edges(source))]
- while stack:
- children = stack[-1]
- child = next(children, None)
- if child is None:
- stack.pop()
- visited.popitem()
- elif len(visited) < cutoff:
- if child in visited:
- continue
- if child in targets:
- yield list(visited) + [child]
- visited[child] = True
- if targets - set(visited.keys()):
- stack.append((v for u, v in G.edges(child)))
- else:
- visited.popitem()
- else: # len(visited) == cutoff:
- for target in targets - set(visited.keys()):
- count = ([child] + list(children)).count(target)
- for i in range(count):
- yield list(visited) + [target]
- stack.pop()
- visited.popitem()
+ for edge_path in all_simple_edge_paths(G, source, target, cutoff):
+ yield [source] + [edge[1] for edge in edge_path]
@nx._dispatch
@@ -375,6 +311,19 @@ def all_simple_edge_paths(G, source, target, cutoff=None):
[(1, 2, 'k0'), (2, 3, 'k0')]
[(1, 2, 'k1'), (2, 3, 'k0')]
+ When ``source`` is one of the targets, the empty path starting and ending at
+ ``source`` without traversing any edge is considered a valid simple edge path
+ and is included in the results:
+
+ >>> G = nx.Graph()
+ >>> G.add_node(0)
+ >>> paths = list(nx.all_simple_edge_paths(G, 0, 0))
+ >>> for path in paths:
+ ... print (path)
+ []
+ >>> len(paths)
+ 1
+
Notes
-----
@@ -394,52 +343,62 @@ def all_simple_edge_paths(G, source, target, cutoff=None):
"""
if source not in G:
- raise nx.NodeNotFound("source node %s not in graph" % source)
+ raise nx.NodeNotFound(f"source node {source} not in graph")
+
if target in G:
targets = {target}
else:
try:
targets = set(target)
- except TypeError:
- raise nx.NodeNotFound("target node %s not in graph" % target)
- if source in targets:
- return []
- if cutoff is None:
- cutoff = len(G) - 1
- if cutoff < 1:
- return []
- if G.is_multigraph():
- for simp_path in _all_simple_edge_paths_multigraph(G, source, targets, cutoff):
- yield simp_path
- else:
- for simp_path in _all_simple_paths_graph(G, source, targets, cutoff):
- yield list(zip(simp_path[:-1], simp_path[1:]))
+ except TypeError as err:
+ raise nx.NodeNotFound(f"target node {target} not in graph") from err
+
+ cutoff = cutoff if cutoff is not None else len(G) - 1
+
+ if cutoff >= 0 and targets:
+ yield from _all_simple_edge_paths(G, source, targets, cutoff)
-def _all_simple_edge_paths_multigraph(G, source, targets, cutoff):
- if not cutoff or cutoff < 1:
- return []
- visited = [source]
- stack = [iter(G.edges(source, keys=True))]
+def _all_simple_edge_paths(G, source, targets, cutoff):
+ # We simulate recursion with a stack, keeping the current path being explored
+ # and the outgoing edge iterators at each point in the stack.
+ # To avoid unnecessary checks, the loop is structured in a way such that a path
+ # is considered for yielding only after a new node/edge is added.
+ # We bootstrap the search by adding a dummy iterator to the stack that only yields
+ # a dummy edge to source (so that the trivial path has a chance of being included).
+
+ get_edges = (
+ (lambda node: G.edges(node, keys=True))
+ if G.is_multigraph()
+ else (lambda node: G.edges(node))
+ )
+
+ # The current_path is a dictionary that maps nodes in the path to the edge that was
+ # used to enter that node (instead of a list of edges) because we want both a fast
+ # membership test for nodes in the path and the preservation of insertion order.
+ current_path = {None: None}
+ stack = [iter([(None, source)])]
while stack:
- children = stack[-1]
- child = next(children, None)
- if child is None:
+ # 1. Try to extend the current path.
+ next_edge = next((e for e in stack[-1] if e[1] not in current_path), None)
+ if next_edge is None:
+ # All edges of the last node in the current path have been explored.
stack.pop()
- visited.pop()
- elif len(visited) < cutoff:
- if child[1] in targets:
- yield visited[1:] + [child]
- elif child[1] not in [v[0] for v in visited[1:]]:
- visited.append(child)
- stack.append(iter(G.edges(child[1], keys=True)))
- else: # len(visited) == cutoff:
- for u, v, k in [child] + list(children):
- if v in targets:
- yield visited[1:] + [(u, v, k)]
- stack.pop()
- visited.pop()
+ current_path.popitem()
+ continue
+ previous_node, next_node, *_ = next_edge
+
+ # 2. Check if we've reached a target.
+ if next_node in targets:
+ yield (list(current_path.values()) + [next_edge])[2:] # remove dummy edge
+
+ # 3. Only expand the search through the next node if it makes sense.
+ if len(current_path) - 1 < cutoff and (
+ targets - current_path.keys() - {next_node}
+ ):
+ current_path[next_node] = next_edge
+ stack.append(iter(get_edges(next_node)))
@not_implemented_for("multigraph")
| diff --git a/networkx/algorithms/shortest_paths/tests/test_generic.py b/networkx/algorithms/shortest_paths/tests/test_generic.py
--- a/networkx/algorithms/shortest_paths/tests/test_generic.py
+++ b/networkx/algorithms/shortest_paths/tests/test_generic.py
@@ -278,6 +278,10 @@ def test_has_path(self):
assert nx.has_path(G, 0, 2)
assert not nx.has_path(G, 0, 4)
+ def test_has_path_singleton(self):
+ G = nx.empty_graph(1)
+ assert nx.has_path(G, 0, 0)
+
def test_all_shortest_paths(self):
G = nx.Graph()
nx.add_path(G, [0, 1, 2, 3])
diff --git a/networkx/algorithms/tests/test_cycles.py b/networkx/algorithms/tests/test_cycles.py
--- a/networkx/algorithms/tests/test_cycles.py
+++ b/networkx/algorithms/tests/test_cycles.py
@@ -97,6 +97,10 @@ def test_simple_cycles(self):
for c in cc:
assert any(self.is_cyclic_permutation(c, rc) for rc in ca)
+ def test_simple_cycles_singleton(self):
+ G = nx.Graph([(0, 0)]) # self-loop
+ assert list(nx.simple_cycles(G)) == [[0]]
+
def test_unsortable(self):
# this test ensures that graphs whose nodes without an intrinsic
# ordering do not cause issues
diff --git a/networkx/algorithms/tests/test_simple_paths.py b/networkx/algorithms/tests/test_simple_paths.py
--- a/networkx/algorithms/tests/test_simple_paths.py
+++ b/networkx/algorithms/tests/test_simple_paths.py
@@ -140,8 +140,7 @@ def test_all_simple_paths_with_two_targets_inside_cycle_emits_two_paths():
def test_all_simple_paths_source_target():
G = nx.path_graph(4)
- paths = nx.all_simple_paths(G, 1, 1)
- assert list(paths) == []
+ assert list(nx.all_simple_paths(G, 1, 1)) == [[1]]
def test_all_simple_paths_cutoff():
@@ -181,8 +180,7 @@ def test_all_simple_paths_on_non_trivial_graph():
def test_all_simple_paths_multigraph():
G = nx.MultiGraph([(1, 2), (1, 2)])
- paths = nx.all_simple_paths(G, 1, 1)
- assert list(paths) == []
+ assert list(nx.all_simple_paths(G, 1, 1)) == [[1]]
nx.add_path(G, [3, 1, 10, 2])
paths = list(nx.all_simple_paths(G, 1, 2))
assert len(paths) == 3
@@ -195,6 +193,10 @@ def test_all_simple_paths_multigraph_with_cutoff():
assert len(paths) == 2
assert {tuple(p) for p in paths} == {(1, 2), (1, 2)}
+ # See GitHub issue #6732.
+ G = nx.MultiGraph([(0, 1), (0, 2)])
+ assert list(nx.all_simple_paths(G, 0, {1, 2}, cutoff=1)) == [[0, 1], [0, 2]]
+
def test_all_simple_paths_directed():
G = nx.DiGraph()
@@ -211,11 +213,17 @@ def test_all_simple_paths_empty():
def test_all_simple_paths_corner_cases():
- assert list(nx.all_simple_paths(nx.empty_graph(2), 0, 0)) == []
+ assert list(nx.all_simple_paths(nx.empty_graph(2), 0, 0)) == [[0]]
assert list(nx.all_simple_paths(nx.empty_graph(2), 0, 1)) == []
assert list(nx.all_simple_paths(nx.path_graph(9), 0, 8, 0)) == []
+def test_all_simple_paths_source_in_targets():
+ # See GitHub issue #6690.
+ G = nx.path_graph(3)
+ assert list(nx.all_simple_paths(G, 0, {0, 1, 2})) == [[0], [0, 1], [0, 1, 2]]
+
+
def hamiltonian_path(G, source):
source = arbitrary_element(G)
neighbors = set(G[source]) - {source}
@@ -264,6 +272,11 @@ def test_all_simple_edge_paths():
assert {tuple(p) for p in paths} == {((0, 1), (1, 2), (2, 3))}
+def test_all_simple_edge_paths_empty_path():
+ G = nx.empty_graph(1)
+ assert list(nx.all_simple_edge_paths(G, 0, 0)) == [[]]
+
+
def test_all_simple_edge_paths_with_two_targets_emits_two_paths():
G = nx.path_graph(4)
G.add_edge(2, 4)
@@ -327,7 +340,7 @@ def test_all_simple_edge_paths_with_two_targets_inside_cycle_emits_two_paths():
def test_all_simple_edge_paths_source_target():
G = nx.path_graph(4)
paths = nx.all_simple_edge_paths(G, 1, 1)
- assert list(paths) == []
+ assert list(paths) == [[]]
def test_all_simple_edge_paths_cutoff():
@@ -368,7 +381,7 @@ def test_all_simple_edge_paths_on_non_trivial_graph():
def test_all_simple_edge_paths_multigraph():
G = nx.MultiGraph([(1, 2), (1, 2)])
paths = nx.all_simple_edge_paths(G, 1, 1)
- assert list(paths) == []
+ assert list(paths) == [[]]
nx.add_path(G, [3, 1, 10, 2])
paths = list(nx.all_simple_edge_paths(G, 1, 2))
assert len(paths) == 3
@@ -401,11 +414,16 @@ def test_all_simple_edge_paths_empty():
def test_all_simple_edge_paths_corner_cases():
- assert list(nx.all_simple_edge_paths(nx.empty_graph(2), 0, 0)) == []
+ assert list(nx.all_simple_edge_paths(nx.empty_graph(2), 0, 0)) == [[]]
assert list(nx.all_simple_edge_paths(nx.empty_graph(2), 0, 1)) == []
assert list(nx.all_simple_edge_paths(nx.path_graph(9), 0, 8, 0)) == []
+def test_all_simple_edge_paths_ignores_self_loop():
+ G = nx.Graph([(0, 0), (0, 1), (1, 1), (1, 2)])
+ assert list(nx.all_simple_edge_paths(G, 0, 2)) == [[(0, 1), (1, 2)]]
+
+
def hamiltonian_edge_path(G, source):
source = arbitrary_element(G)
neighbors = set(G[source]) - {source}
@@ -458,6 +476,11 @@ def test_shortest_simple_paths():
)
+def test_shortest_simple_paths_singleton_path():
+ G = nx.empty_graph(3)
+ assert list(nx.shortest_simple_paths(G, 0, 0)) == [[0]]
+
+
def test_shortest_simple_paths_directed():
G = nx.cycle_graph(7, create_using=nx.DiGraph())
paths = nx.shortest_simple_paths(G, 0, 3)
| all_simple_paths returns empty generator when the source is one of the possible targets
### Current Behavior
Due to this conditional:
https://github.com/networkx/networkx/blob/5f2f88f6afc385892d91c20676d7dc6aa80d52bb/networkx/algorithms/simple_paths.py#L253-L254
whenever the `source` node appears as one of the possible targets, `all_simple_paths` always generates an empty list of paths.
For example:
```python
import networkx as nx
graph = nx.Graph([(1, 2), (1, 3)])
```
```python
>>> list(nx.all_simple_paths(graph, source=1, target={1, 2, 3}))
[]
```
### Expected Behavior
Apologies if I have misunderstood what the function is meant to do. Here's what I understand.
The function generates a list of simple paths from the given source to (any of the) given target(s). The docstring says:
> Pass an iterable of nodes as target to generate all paths ending in **any** of several nodes
So the fact that the source node is in the set of target nodes does not necessarily exclude _all_ paths; there could be some other target that works. For instance, in the example above the function missed out on two perfectly reasonable paths: `[1, 2]` and `[1, 3]`. As far as I understand they both satisfy the description above: they're simple paths from the source (1) to any of the targets (1, 2, 3).
So why does it return an empty generator? It should return the same as `list(nx.all_simple_paths(graph, source=1, target={2, 3}))`, that is, `[[1, 2], [1, 3]]`.
I understand that any path from source to itself wouldn't be simple, but again this doesn't automatically exclude paths to any of the other specified targets.
### Steps to Reproduce
```python
import networkx as nx
graph = nx.Graph([(1, 2), (1, 3)])
list(nx.all_simple_paths(graph, source=1, target={1, 2, 3}))
```
### Environment
Python version: 3.11
NetworkX version: 5f2f88f6afc385892d91c20676d7dc6aa80d52bb (3.1)
all_simple_paths on MultiGraph fails when multiple targets are reached exactly at the cutoff
### Steps to Reproduce
```python
import networkx as nx
G = nx.MultiGraph([(0, 1), (0, 2)])
list(nx.all_simple_paths(G, 0, {1, 2}, cutoff=1))
```
### Current Behavior
```python
>>> list(nx.all_simple_paths(G, 0, {1, 2}, cutoff=1))
[[0, 1]]
```
### Expected Behavior
```python
>>> list(nx.all_simple_paths(G, 0, {1, 2}, cutoff=1))
[[0, 1], [0, 2]]
```
### Environment
Python version: 3.11
NetworkX version: 3.2rc0.dev0
### Additional context
This is a bug I encountered while working on #6694 . The cause of the bug is here:
https://github.com/networkx/networkx/blob/5fcf01b9a43a097c4f579486023d1279b2b88619/networkx/algorithms/simple_paths.py#L315-L318
Since `children` is an iterator, the first time `list(children)` is computed, the iterator is consumed, and it returns an empty list afterwards.
I will fix it in #6694 because I need to refactor this function in any case.
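A generic illustration of that pitfall, independent of networkx:
```python
children = iter([1, 2])
print(list(children))  # [1, 2] -- this pass consumes the iterator
print(list(children))  # []     -- a second pass sees nothing
```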
| It looks like handling multiple targets was introduced in #3138 and at that time the if-structure was changed from
```python
if source == target:
return []
```
to
```python
if source in target:
return []
```
The current post's corner case was not discussed at the time. The previous behavior is different because with only one target, when that target equals the source, you don't get a simple path. It is a cycle. But with multiple targets, we now have the case in the OP -- what about paths to the other targets?
A workaround is to call the helper function directly:
```python
list(nx.algorithms.simple_paths._all_simple_paths_graph(G, source=1, targets={1, 2, 3}, cutoff=10))
```
or to call with `target=set(target) - {source}`.
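For instance, with the graph from the issue (illustrative, pre-fix workaround):
```python
import networkx as nx

graph = nx.Graph([(1, 2), (1, 3)])
targets = {1, 2, 3}
# Drop the source from the target set before calling the public function.
list(nx.all_simple_paths(graph, source=1, target=targets - {1}))
# -> [[1, 2], [1, 3]]
```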
But I think the fix is pretty easy. We can let the code do its thing with the full set of targets. But we could retain the spirit of the original code (avoiding extra computation in a corner case) by making the check:
```python
targets.discard(source)
if not targets:
return _empty_generator()
```
Does this make sense?
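(Hedged illustration of what the issue's example would then return under the proposed check:)
```python
import networkx as nx

graph = nx.Graph([(1, 2), (1, 3)])
list(nx.all_simple_paths(graph, source=1, target={1, 2, 3}))
# -> [[1, 2], [1, 3]] with the discard-based check above
# -> [[1], [1, 2], [1, 3]] if singleton paths are also yielded (discussed below)
```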
> We can let the code do its thing with the full set of targets.
That was my first thought too.
> But we could retain the spirit of the original code (avoiding extra computation in a corner case) by making the check:
>
> ```python
> targets.discard(source)
> if not targets:
> return _empty_generator()
> ```
>
> Does this make sense?
Seems like the best fix. I can't do it right now, but if in a while nobody has submitted a PR I'll do it.
Actually, isn't the trivial path `[source]` always a simple path from `source` to `source`? It seems kind of arbitrary to exclude this case too.
Indeed -- at least based on a definition that a path is a sequence of nodes. But I guess that also means that a list of no nodes `[]` is also a path. Isn't the empty sequence a sequence? These corner cases are rarely described in a satisfying manner in textbooks/references.
But `[source]` is not a path if we go by [the definition e.g. wikipedia](https://en.wikipedia.org/wiki/Path_(graph_theory)), which says: "In graph theory, a path in a graph is a finite or infinite sequence of edges which joins a sequence of vertices."
So, I think we should decide based on whether users will be more troubled by having to remove the single node paths (and zero length paths) when they don't want them vs by having to add the single node paths when they do want them.
Backward compatibility suggests we should not report these paths with less than 2 nodes in this PR. But I am open to discussion about whether we should have another PR that deprecates the current behavior and adds single node (or less than 2 node) paths in a future version. I'm -0.5 on adding them because I believe more users will want to remove them than users who will need to add them.
In the meantime, we should add a sentence or note to the doc_string that this function does not yield paths with less than 2 nodes, so those should be manually added by the user if desired.
I agree with your reasoning overall.
I would say that in all settings where you need to "do X for every Y in an arbitrary object Z", in this case "do X for every simple path in an arbitrary graph", it helps to be as general as possible with your definition of Y (simple path). Like including the empty set when computing the subsets of a set, or in this case returning the singleton path.
For example, a simple routine to find Hamiltonian paths could just use `all_simple_paths` with every possible source and specifying the entire set of vertices as possible targets, and then filter to only return paths that include all vertices. If the singleton path is excluded, this would not work for graphs of size one, and the routine would have to handle it manually. (Furthermore, in fact, if the empty path is excluded, this would also break this routine for empty graphs.)
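A rough sketch of that routine — hypothetical helper name, and it assumes the fixed behavior where the singleton path is yielded:
```python
import networkx as nx

def hamiltonian_paths(G):  # hypothetical helper, for illustration only
    n = G.number_of_nodes()
    for source in G:
        # With the fix, target=set(G) includes `source` itself, so the
        # singleton path [source] is yielded for a one-node graph.
        for path in nx.all_simple_paths(G, source, set(G)):
            if len(path) == n:
                yield path  # in undirected graphs each path shows up from both ends
```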
In general I think that the more general the method is, the less corner cases the user will have to deal with manually themselves. So I am in favour of adding this change in the future.
But I think we can reasonably draw the line at non-empty paths: the method gives all simple paths that start from the given `source`, which the empty path does not.
By the way, `shortest_simple_paths` does currently include the singleton path, so consistency is another reason to make the change to `all_simple_paths`:
```python
>>> G = nx.cycle_graph(5)
>>> list(nx.shortest_simple_paths(G, 0, 0))
[[0]]
```
You are sooo right... And the `is_simple_path` also recognizes a single node path. So in our simple_path functions we have:
- `list(nx.shortest_simple_paths(G, 0, 0))` returns [[0]]
- `nx.is_simple_path(G, [0])` returns True
- `list(nx.all_simple_paths(G, 0, 0))` returns []
- `list(nx.all_simple_edge_paths(G, 0, 0))` returns []
- `nx.simple_cycles(G)` yields single node paths when a self-loop exists.
So, the evidence for yielding single node paths is much stronger than I realized.
@MartinPJorge do you have an opinion about returning single node paths?
I think I am now +0.5 on reporting single node simple paths.
If we do change the `all_simple_paths` and `all_simple_edge_paths` to yield the single node paths, I think we would need to deprecate the current skipping of those paths. Does anyone think we should fix this without deprecation (e.g. because the current behavior is wrong)?
Also, looking briefly at the current code, this might require that we change how the `cutoff < 1` check is done. And maybe we could get rid of `empty_generator` now and make the functions be generators which use `yield from ...` instead of `return ...`.
> Does anyone think we should fix this without deprecation (e.g. because the current behavior is wrong)?
Given the above discussion and the fact that other simple path functions include singletons, I would say this is a bug that needs to be fixed. But, on a pragmatic note, who knows if someone somewhere is relying on this behaviour...
> If we do change the all_simple_paths and all_simple_edge_paths to yield the single node paths, I think we would need to deprecate the current skipping of those edges. Does anyone think we should fix this without deprecation (e.g. because the current behavior is wrong)?
Without having any real sense of how disruptive this would be in user code, my general feeling is that deprecating/warning would be overkill. IIUC, there is already an inconsistency in behavior between various functions, so user code would be getting different answers depending on which functions were used. Within this context, modifying behavior to improve consistency between functions feels more like a fix than an unexpected change!
Two more cents from me: I think the best way to discuss/build consensus on this would be to have a PR that adds tests for the desired behavior across all relevant functions.
The debate over path length is as old as time itself -- is it the number of nodes or the number of edges? Is a path a sequence of edges, or a sequence of nodes with edges between them? Throughout our library, we have used the latter. I think that it's important to remain consistent on this, and that strongly implies that we should treat `[0]` as a sequence of nodes starting at `0` and ending at `0` where all consecutive pairs of nodes are joined by an edge -- a path.
More supporting evidence: `len(nx.path_graph(1))` is `1`.
Great -- I think we have consensus on changing #6694
1. report the single node paths (and show this in doc_string examples)
2. add a test that these functions report single node paths.
3. If you want (otherwise we will do it in another PR) add tests to the other functions mentioned above to ensure that they treat single node paths as paths. Those tests may already exist. But if they don't it'd be good to add them.
@plammens Can you do 1) and 2) and let me know if you want us to do 3)?
Thanks!!
More evidence:
```python
G = nx.empty_graph(1)  # or any graph with 0 as a node
nx.has_path(G, 0, 0)
```
outputs `True`
> @plammens Can you do 1) and 2) and let me know if you want us to do 3)? Thanks!!
I'm now working on 1) and 2); I think I can also do 3), but I'll let you know if not. | 2023-05-16T21:36:21 |
| 2023-05-16T21:36:21 |
networkx/networkx | 6,760 | networkx__networkx-6760 | [
"6749"
] | a63c8bd3873fc7885726215248c7fe17e9cefd4c | diff --git a/networkx/algorithms/shortest_paths/weighted.py b/networkx/algorithms/shortest_paths/weighted.py
--- a/networkx/algorithms/shortest_paths/weighted.py
+++ b/networkx/algorithms/shortest_paths/weighted.py
@@ -2426,11 +2426,6 @@ def johnson(G, weight="weight"):
distance : dictionary
Dictionary, keyed by source and target, of shortest paths.
- Raises
- ------
- NetworkXError
- If given graph is not weighted.
-
Examples
--------
>>> graph = nx.DiGraph()
@@ -2465,9 +2460,6 @@ def johnson(G, weight="weight"):
all_pairs_bellman_ford_path_length
"""
- if not nx.is_weighted(G, weight=weight):
- raise nx.NetworkXError("Graph is not weighted.")
-
dist = {v: 0 for v in G}
pred = {v: [] for v in G}
weight = _weight_function(G, weight)
| diff --git a/networkx/algorithms/shortest_paths/tests/test_weighted.py b/networkx/algorithms/shortest_paths/tests/test_weighted.py
--- a/networkx/algorithms/shortest_paths/tests/test_weighted.py
+++ b/networkx/algorithms/shortest_paths/tests/test_weighted.py
@@ -865,10 +865,9 @@ def test_zero_cycle_smoke(self):
class TestJohnsonAlgorithm(WeightedTestBase):
def test_single_node_graph(self):
- with pytest.raises(nx.NetworkXError):
- G = nx.DiGraph()
- G.add_node(0)
- nx.johnson(G)
+ G = nx.DiGraph()
+ G.add_node(0)
+ assert nx.johnson(G) == {0: {0: [0]}}
def test_negative_cycle(self):
G = nx.DiGraph()
@@ -915,9 +914,27 @@ def test_negative_weights(self):
}
def test_unweighted_graph(self):
- with pytest.raises(nx.NetworkXError):
- G = nx.path_graph(5)
- nx.johnson(G)
+ G = nx.Graph()
+ G.add_edges_from([(1, 0), (2, 1)])
+ H = G.copy()
+ nx.set_edge_attributes(H, values=1, name="weight")
+ assert nx.johnson(G) == nx.johnson(H)
+
+ def test_partially_weighted_graph_with_negative_edges(self):
+ G = nx.DiGraph()
+ G.add_edges_from([(0, 1), (1, 2), (2, 0), (1, 0)])
+ G[1][0]["weight"] = -2
+ G[0][1]["weight"] = 3
+ G[1][2]["weight"] = -4
+
+ H = G.copy()
+ H[2][0]["weight"] = 1
+
+ I = G.copy()
+ I[2][0]["weight"] = 8
+
+ assert nx.johnson(G) == nx.johnson(H)
+ assert nx.johnson(G) != nx.johnson(I)
def test_graphs(self):
validate_path(self.XG, "s", "v", 9, nx.johnson(self.XG)["s"]["v"])
| Docstring of `johnson` algorithm handling of weight incorrect
### Current Behavior
The documentation says:
```
If no such edge attribute exists, the weight of the edge is assumed to be one.
```
### Expected Behavior
The behavior to match the docstring or the docstring to match the behavior.
### Steps to Reproduce
```python
In [1]: import networkx as nx
In [2]: G = nx.Graph()
In [3]: G.add_edge(0, 1, weight=2)
In [4]: G.add_edge(0, 2)
In [5]: nx.johnson(G)
---------------------------------------------------------------------------
NetworkXError Traceback (most recent call last)
Cell In[5], line 1
----> 1 nx.johnson(G)
File ~/miniconda3/envs/gb8/lib/python3.11/site-packages/networkx/algorithms/shortest_paths/weighted.py:2469, in johnson(G, weight)
2402 r"""Uses Johnson's Algorithm to compute shortest paths.
2403
2404 Johnson's Algorithm finds a shortest path between each pair of
(...)
2466
2467 """
2468 if not nx.is_weighted(G, weight=weight):
-> 2469 raise nx.NetworkXError("Graph is not weighted.")
2471 dist = {v: 0 for v in G}
2472 pred = {v: [] for v in G}
NetworkXError: Graph is not weighted.
```
### Environment
Python version: 3.11
NetworkX version: 3.1
### Additional context
It uses `nx.is_weighted`, which requires that _all_ edges have the attribute to be considered weighted.
| The parameters part of the docstring may have been copied from the rest of the functions (Dijkstra and Bellman-Ford) in the file. These methods assume the weight of the edge to be 1 when not specified.
It makes sense to have similar behavior for the Johnson method as well. We'll only need to remove the `nx.is_weighted` condition and modify/remove the test cases for unweighted and single-node graphs.
No other changes will be required since the Johnson method uses `_bellman_ford` and `_dijkstra` methods that already handle the unweighted edges.
If this approach seems alright, I'll create a PR.
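A quick illustration of the intended behavior under that change (this assumes the patched `johnson`, not the released one):
```python
import networkx as nx

G = nx.Graph()
G.add_edge(0, 1, weight=2)
G.add_edge(0, 2)  # no "weight" attribute -> treated as 1 after the fix
paths = nx.johnson(G)
# paths[1][2] == [1, 0, 2]  (the only route, total weight 2 + 1 = 3)
```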
Yes -- that looks like a correct description of what has happened with the documentation of this function. And I agree that it seems straightforward to make it handle edges without an edge attribute.
It'd be nice to include tests of input graphs without edge attributes for some edges and show the function working and maybe even check that results match a case with weight specified as 1... ??? | 2023-06-24T06:32:50 |
networkx/networkx | 6,788 | networkx__networkx-6788 | [
"6783"
] | a52706d4f367916cba97dcc57efdd46d7c174f54 | diff --git a/networkx/algorithms/cycles.py b/networkx/algorithms/cycles.py
--- a/networkx/algorithms/cycles.py
+++ b/networkx/algorithms/cycles.py
@@ -4,7 +4,7 @@
========================
"""
-from collections import defaultdict
+from collections import Counter, defaultdict
from itertools import combinations, product
from math import inf
@@ -50,7 +50,7 @@ def cycle_basis(G, root=None):
>>> G = nx.Graph()
>>> nx.add_cycle(G, [0, 1, 2, 3])
>>> nx.add_cycle(G, [0, 3, 4, 5])
- >>> print(nx.cycle_basis(G, 0))
+ >>> nx.cycle_basis(G, 0)
[[3, 4, 5, 0], [1, 2, 3, 0]]
Notes
@@ -1053,8 +1053,8 @@ def minimum_cycle_basis(G, weight=None):
>>> G = nx.Graph()
>>> nx.add_cycle(G, [0, 1, 2, 3])
>>> nx.add_cycle(G, [0, 3, 4, 5])
- >>> print([sorted(c) for c in nx.minimum_cycle_basis(G)])
- [[0, 1, 2, 3], [0, 3, 4, 5]]
+ >>> nx.minimum_cycle_basis(G)
+ [[5, 4, 3, 0], [3, 2, 1, 0]]
References:
[1] Kavitha, Telikepalli, et al. "An O(m^2n) Algorithm for
@@ -1074,85 +1074,87 @@ def minimum_cycle_basis(G, weight=None):
)
-def _min_cycle_basis(comp, weight):
+def _min_cycle_basis(G, weight):
cb = []
# We extract the edges not in a spanning tree. We do not really need a
# *minimum* spanning tree. That is why we call the next function with
# weight=None. Depending on implementation, it may be faster as well
- spanning_tree_edges = list(nx.minimum_spanning_edges(comp, weight=None, data=False))
- edges_excl = [frozenset(e) for e in comp.edges() if e not in spanning_tree_edges]
- N = len(edges_excl)
+ tree_edges = list(nx.minimum_spanning_edges(G, weight=None, data=False))
+ chords = G.edges - tree_edges - {(v, u) for u, v in tree_edges}
# We maintain a set of vectors orthogonal to sofar found cycles
- set_orth = [{edge} for edge in edges_excl]
- for k in range(N):
+ set_orth = [{edge} for edge in chords]
+ while set_orth:
+ base = set_orth.pop()
# kth cycle is "parallel" to kth vector in set_orth
- new_cycle = _min_cycle(comp, set_orth[k], weight=weight)
- cb.append(list(set().union(*new_cycle)))
+ cycle_edges = _min_cycle(G, base, weight)
+ cb.append([v for u, v in cycle_edges])
+
# now update set_orth so that k+1,k+2... th elements are
# orthogonal to the newly found cycle, as per [p. 336, 1]
- base = set_orth[k]
- set_orth[k + 1 :] = [
- orth ^ base if len(orth & new_cycle) % 2 else orth
- for orth in set_orth[k + 1 :]
+ set_orth = [
+ (
+ {e for e in orth if e not in base if e[::-1] not in base}
+ | {e for e in base if e not in orth if e[::-1] not in orth}
+ )
+ if any((e in orth or e[::-1] in orth) for e in cycle_edges)
+ else orth
+ for orth in set_orth
]
return cb
-def _min_cycle(G, orth, weight=None):
+def _min_cycle(G, orth, weight):
"""
Computes the minimum weight cycle in G,
orthogonal to the vector orth as per [p. 338, 1]
+ Use (u, 1) to indicate the lifted copy of u (denoted u' in paper).
"""
- T = nx.Graph()
-
- nodes_idx = {node: idx for idx, node in enumerate(G.nodes())}
- idx_nodes = {idx: node for node, idx in nodes_idx.items()}
-
- nnodes = len(nodes_idx)
+ Gi = nx.Graph()
- # Add 2 copies of each edge in G to T. If edge is in orth, add cross edge;
- # otherwise in-plane edge
- for u, v, data in G.edges(data=True):
- uidx, vidx = nodes_idx[u], nodes_idx[v]
- edge_w = data.get(weight, 1)
- if frozenset((u, v)) in orth:
- T.add_edges_from(
- [(uidx, nnodes + vidx), (nnodes + uidx, vidx)], weight=edge_w
- )
+ # Add 2 copies of each edge in G to Gi.
+ # If edge is in orth, add cross edge; otherwise in-plane edge
+ for u, v, wt in G.edges(data=weight, default=1):
+ if (u, v) in orth or (v, u) in orth:
+ Gi.add_edges_from([(u, (v, 1)), ((u, 1), v)], Gi_weight=wt)
else:
- T.add_edges_from(
- [(uidx, vidx), (nnodes + uidx, nnodes + vidx)], weight=edge_w
- )
+ Gi.add_edges_from([(u, v), ((u, 1), (v, 1))], Gi_weight=wt)
- all_shortest_pathlens = dict(nx.shortest_path_length(T, weight=weight))
- cross_paths_w_lens = {
- n: all_shortest_pathlens[n][nnodes + n] for n in range(nnodes)
- }
+ # find the shortest length in Gi between n and (n, 1) for each n
+ # Note: Use "Gi_weight" for name of weight attribute
+ spl = nx.shortest_path_length
+ lift = {n: spl(Gi, source=n, target=(n, 1), weight="Gi_weight") for n in G}
- # Now compute shortest paths in T, which translates to cyles in G
- start = min(cross_paths_w_lens, key=cross_paths_w_lens.get)
- end = nnodes + start
- min_path = nx.shortest_path(T, source=start, target=end, weight="weight")
+ # Now compute that short path in Gi, which translates to a cycle in G
+ start = min(lift, key=lift.get)
+ end = (start, 1)
+ min_path_i = nx.shortest_path(Gi, source=start, target=end, weight="Gi_weight")
- # Now we obtain the actual path, re-map nodes in T to those in G
- min_path_nodes = [node if node < nnodes else node - nnodes for node in min_path]
- # Now remove the edges that occur two times
- mcycle_pruned = _path_to_cycle(min_path_nodes)
+ # Now we obtain the actual path, re-map nodes in Gi to those in G
+ min_path = [n if n in G else n[0] for n in min_path_i]
- return {frozenset((idx_nodes[u], idx_nodes[v])) for u, v in mcycle_pruned}
-
-
-def _path_to_cycle(path):
- """
- Removes the edges from path that occur even number of times.
- Returns a set of edges
- """
- edges = set()
- for edge in pairwise(path):
- # Toggle whether to keep the current edge.
- edges ^= {edge}
- return edges
+ # Now remove the edges that occur two times
+ # two passes: flag which edges get kept, then build it
+ edgelist = list(pairwise(min_path))
+ edgeset = set()
+ for e in edgelist:
+ if e in edgeset:
+ edgeset.remove(e)
+ elif e[::-1] in edgeset:
+ edgeset.remove(e[::-1])
+ else:
+ edgeset.add(e)
+
+ min_edgelist = []
+ for e in edgelist:
+ if e in edgeset:
+ min_edgelist.append(e)
+ edgeset.remove(e)
+ elif e[::-1] in edgeset:
+ min_edgelist.append(e[::-1])
+ edgeset.remove(e[::-1])
+
+ return min_edgelist
@not_implemented_for("directed")
| diff --git a/networkx/algorithms/tests/test_cycles.py b/networkx/algorithms/tests/test_cycles.py
--- a/networkx/algorithms/tests/test_cycles.py
+++ b/networkx/algorithms/tests/test_cycles.py
@@ -4,16 +4,14 @@
import pytest
-import networkx
import networkx as nx
-from networkx.algorithms import find_cycle, minimum_cycle_basis
from networkx.algorithms.traversal.edgedfs import FORWARD, REVERSE
class TestCycles:
@classmethod
def setup_class(cls):
- G = networkx.Graph()
+ G = nx.Graph()
nx.add_cycle(G, [0, 1, 2, 3])
nx.add_cycle(G, [0, 3, 4, 5])
nx.add_cycle(G, [0, 1, 6, 7, 8])
@@ -29,30 +27,30 @@ def is_cyclic_permutation(self, a, b):
def test_cycle_basis(self):
G = self.G
- cy = networkx.cycle_basis(G, 0)
+ cy = nx.cycle_basis(G, 0)
sort_cy = sorted(sorted(c) for c in cy)
assert sort_cy == [[0, 1, 2, 3], [0, 1, 6, 7, 8], [0, 3, 4, 5]]
- cy = networkx.cycle_basis(G, 1)
+ cy = nx.cycle_basis(G, 1)
sort_cy = sorted(sorted(c) for c in cy)
assert sort_cy == [[0, 1, 2, 3], [0, 1, 6, 7, 8], [0, 3, 4, 5]]
- cy = networkx.cycle_basis(G, 9)
+ cy = nx.cycle_basis(G, 9)
sort_cy = sorted(sorted(c) for c in cy)
assert sort_cy == [[0, 1, 2, 3], [0, 1, 6, 7, 8], [0, 3, 4, 5]]
# test disconnected graphs
nx.add_cycle(G, "ABC")
- cy = networkx.cycle_basis(G, 9)
+ cy = nx.cycle_basis(G, 9)
sort_cy = sorted(sorted(c) for c in cy[:-1]) + [sorted(cy[-1])]
assert sort_cy == [[0, 1, 2, 3], [0, 1, 6, 7, 8], [0, 3, 4, 5], ["A", "B", "C"]]
def test_cycle_basis2(self):
with pytest.raises(nx.NetworkXNotImplemented):
G = nx.DiGraph()
- cy = networkx.cycle_basis(G, 0)
+ cy = nx.cycle_basis(G, 0)
def test_cycle_basis3(self):
with pytest.raises(nx.NetworkXNotImplemented):
G = nx.MultiGraph()
- cy = networkx.cycle_basis(G, 0)
+ cy = nx.cycle_basis(G, 0)
def test_cycle_basis_ordered(self):
# see gh-6654 replace sets with (ordered) dicts
@@ -703,50 +701,50 @@ def setup_class(cls):
def test_graph_nocycle(self):
G = nx.Graph(self.edges)
- pytest.raises(nx.exception.NetworkXNoCycle, find_cycle, G, self.nodes)
+ pytest.raises(nx.exception.NetworkXNoCycle, nx.find_cycle, G, self.nodes)
def test_graph_cycle(self):
G = nx.Graph(self.edges)
G.add_edge(2, 0)
- x = list(find_cycle(G, self.nodes))
+ x = list(nx.find_cycle(G, self.nodes))
x_ = [(0, 1), (1, 2), (2, 0)]
assert x == x_
def test_graph_orientation_none(self):
G = nx.Graph(self.edges)
G.add_edge(2, 0)
- x = list(find_cycle(G, self.nodes, orientation=None))
+ x = list(nx.find_cycle(G, self.nodes, orientation=None))
x_ = [(0, 1), (1, 2), (2, 0)]
assert x == x_
def test_graph_orientation_original(self):
G = nx.Graph(self.edges)
G.add_edge(2, 0)
- x = list(find_cycle(G, self.nodes, orientation="original"))
+ x = list(nx.find_cycle(G, self.nodes, orientation="original"))
x_ = [(0, 1, FORWARD), (1, 2, FORWARD), (2, 0, FORWARD)]
assert x == x_
def test_digraph(self):
G = nx.DiGraph(self.edges)
- x = list(find_cycle(G, self.nodes))
+ x = list(nx.find_cycle(G, self.nodes))
x_ = [(0, 1), (1, 0)]
assert x == x_
def test_digraph_orientation_none(self):
G = nx.DiGraph(self.edges)
- x = list(find_cycle(G, self.nodes, orientation=None))
+ x = list(nx.find_cycle(G, self.nodes, orientation=None))
x_ = [(0, 1), (1, 0)]
assert x == x_
def test_digraph_orientation_original(self):
G = nx.DiGraph(self.edges)
- x = list(find_cycle(G, self.nodes, orientation="original"))
+ x = list(nx.find_cycle(G, self.nodes, orientation="original"))
x_ = [(0, 1, FORWARD), (1, 0, FORWARD)]
assert x == x_
def test_multigraph(self):
G = nx.MultiGraph(self.edges)
- x = list(find_cycle(G, self.nodes))
+ x = list(nx.find_cycle(G, self.nodes))
x_ = [(0, 1, 0), (1, 0, 1)] # or (1, 0, 2)
# Hash randomization...could be any edge.
assert x[0] == x_[0]
@@ -754,26 +752,26 @@ def test_multigraph(self):
def test_multidigraph(self):
G = nx.MultiDiGraph(self.edges)
- x = list(find_cycle(G, self.nodes))
+ x = list(nx.find_cycle(G, self.nodes))
x_ = [(0, 1, 0), (1, 0, 0)] # (1, 0, 1)
assert x[0] == x_[0]
assert x[1][:2] == x_[1][:2]
def test_digraph_ignore(self):
G = nx.DiGraph(self.edges)
- x = list(find_cycle(G, self.nodes, orientation="ignore"))
+ x = list(nx.find_cycle(G, self.nodes, orientation="ignore"))
x_ = [(0, 1, FORWARD), (1, 0, FORWARD)]
assert x == x_
def test_digraph_reverse(self):
G = nx.DiGraph(self.edges)
- x = list(find_cycle(G, self.nodes, orientation="reverse"))
+ x = list(nx.find_cycle(G, self.nodes, orientation="reverse"))
x_ = [(1, 0, REVERSE), (0, 1, REVERSE)]
assert x == x_
def test_multidigraph_ignore(self):
G = nx.MultiDiGraph(self.edges)
- x = list(find_cycle(G, self.nodes, orientation="ignore"))
+ x = list(nx.find_cycle(G, self.nodes, orientation="ignore"))
x_ = [(0, 1, 0, FORWARD), (1, 0, 0, FORWARD)] # or (1, 0, 1, 1)
assert x[0] == x_[0]
assert x[1][:2] == x_[1][:2]
@@ -782,7 +780,7 @@ def test_multidigraph_ignore(self):
def test_multidigraph_ignore2(self):
# Loop traversed an edge while ignoring its orientation.
G = nx.MultiDiGraph([(0, 1), (1, 2), (1, 2)])
- x = list(find_cycle(G, [0, 1, 2], orientation="ignore"))
+ x = list(nx.find_cycle(G, [0, 1, 2], orientation="ignore"))
x_ = [(1, 2, 0, FORWARD), (1, 2, 1, REVERSE)]
assert x == x_
@@ -794,7 +792,7 @@ def test_multidigraph_original(self):
G = nx.MultiDiGraph([(0, 1), (1, 2), (2, 3), (4, 2)])
pytest.raises(
nx.exception.NetworkXNoCycle,
- find_cycle,
+ nx.find_cycle,
G,
[0, 1, 2, 3, 4],
orientation="original",
@@ -803,9 +801,9 @@ def test_multidigraph_original(self):
def test_dag(self):
G = nx.DiGraph([(0, 1), (0, 2), (1, 2)])
pytest.raises(
- nx.exception.NetworkXNoCycle, find_cycle, G, orientation="original"
+ nx.exception.NetworkXNoCycle, nx.find_cycle, G, orientation="original"
)
- x = list(find_cycle(G, orientation="ignore"))
+ x = list(nx.find_cycle(G, orientation="ignore"))
assert x == [(0, 1, FORWARD), (1, 2, FORWARD), (0, 2, REVERSE)]
def test_prev_explored(self):
@@ -813,7 +811,7 @@ def test_prev_explored(self):
G = nx.DiGraph()
G.add_edges_from([(1, 0), (2, 0), (1, 2), (2, 1)])
- pytest.raises(nx.NetworkXNoCycle, find_cycle, G, source=0)
+ pytest.raises(nx.NetworkXNoCycle, nx.find_cycle, G, source=0)
x = list(nx.find_cycle(G, 1))
x_ = [(1, 2), (2, 1)]
assert x == x_
@@ -831,8 +829,8 @@ def test_no_cycle(self):
G = nx.DiGraph()
G.add_edges_from([(1, 2), (2, 0), (3, 1), (3, 2)])
- pytest.raises(nx.NetworkXNoCycle, find_cycle, G, source=0)
- pytest.raises(nx.NetworkXNoCycle, find_cycle, G)
+ pytest.raises(nx.NetworkXNoCycle, nx.find_cycle, G, source=0)
+ pytest.raises(nx.NetworkXNoCycle, nx.find_cycle, G)
def assert_basis_equal(a, b):
@@ -848,12 +846,12 @@ def setup_class(cls):
cls.diamond_graph = T
def test_unweighted_diamond(self):
- mcb = minimum_cycle_basis(self.diamond_graph)
- assert_basis_equal([sorted(c) for c in mcb], [[1, 2, 4], [2, 3, 4]])
+ mcb = nx.minimum_cycle_basis(self.diamond_graph)
+ assert_basis_equal(mcb, [[2, 4, 1], [3, 4, 2]])
def test_weighted_diamond(self):
- mcb = minimum_cycle_basis(self.diamond_graph, weight="weight")
- assert_basis_equal([sorted(c) for c in mcb], [[1, 2, 4], [1, 2, 3, 4]])
+ mcb = nx.minimum_cycle_basis(self.diamond_graph, weight="weight")
+ assert_basis_equal(mcb, [[2, 4, 1], [4, 3, 2, 1]])
def test_dimensionality(self):
# checks |MCB|=|E|-|V|+|NC|
@@ -864,17 +862,49 @@ def test_dimensionality(self):
nedges = rg.number_of_edges()
ncomp = nx.number_connected_components(rg)
- dim_mcb = len(minimum_cycle_basis(rg))
+ dim_mcb = len(nx.minimum_cycle_basis(rg))
assert dim_mcb == nedges - nnodes + ncomp
def test_complete_graph(self):
cg = nx.complete_graph(5)
- mcb = minimum_cycle_basis(cg)
+ mcb = nx.minimum_cycle_basis(cg)
assert all(len(cycle) == 3 for cycle in mcb)
def test_tree_graph(self):
tg = nx.balanced_tree(3, 3)
- assert not minimum_cycle_basis(tg)
+ assert not nx.minimum_cycle_basis(tg)
+
+ def test_petersen_graph(self):
+ G = nx.petersen_graph()
+ mcb = list(nx.minimum_cycle_basis(G))
+ expected = [
+ [4, 9, 7, 5, 0],
+ [1, 2, 3, 4, 0],
+ [1, 6, 8, 5, 0],
+ [4, 3, 8, 5, 0],
+ [1, 6, 9, 4, 0],
+ [1, 2, 7, 5, 0],
+ ]
+ assert len(mcb) == len(expected)
+ assert all(c in expected for c in mcb)
+
+ # check that order of the nodes is a path
+ for c in mcb:
+ assert all(G.has_edge(u, v) for u, v in nx.utils.pairwise(c, cyclic=True))
+
+ def test_gh6787_and_edge_attribute_names(self):
+ G = nx.cycle_graph(4)
+ G.add_weighted_edges_from([(0, 2, 10), (1, 3, 10)], weight="dist")
+ expected = [[1, 3, 0], [3, 2, 1, 0], [1, 2, 0]]
+ mcb = list(nx.minimum_cycle_basis(G, weight="dist"))
+ assert len(mcb) == len(expected)
+ assert all(c in expected for c in mcb)
+
+ # test not using a weight with weight attributes
+ expected = [[1, 3, 0], [1, 2, 0], [3, 2, 0]]
+ mcb = list(nx.minimum_cycle_basis(G))
+ assert len(mcb) == len(expected)
+ assert all(c in expected for c in mcb)
class TestGirth:
| Minimum cycle basis returns impossible cycles
### Current Behavior
The minimum cycle basis of the generalized Petersen graph G = P(7, 2), appearing in example 11 of Liebchen, Christian, and Romeo Rizzi, "Classes of cycle bases," Discrete Applied Mathematics 155.3 (2007): 337-355, as generated by nx.minimum_cycle_basis, contains cycles that include edges which are not edges of the graph.
### Expected Behavior
According to the cited paper, the following is a minimum cycle basis for the graph G:
[7, 9, 1, 0, 6]
[7, 12, 4, 5, 6]
[10, 8, 0, 1, 2]
[13, 8, 0, 6, 5]
[11, 9, 1, 2, 3]
[12, 10, 2, 3, 4]
[13, 11, 3, 4, 5]
[7, 9, 1, 2, 10, 12]
This is an expected result. (The basis is not unique.)
### Steps to Reproduce
```python
>>> import scipy.sparse as sp
>>> import networkx as nx
>>> row = [0, 0, 0, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6,
... 7, 7, 8, 8, 9, 10, 11]
>>> col = [1, 6, 8, 9, 2, 3, 10, 4, 11, 5, 12, 6, 13, 7,
... 9, 12, 10, 13, 11, 12, 13]
>>> data = [3.] * 14 + [2.] * 7
>>> G = sp.coo_array((data, (row, col)), shape=(14, 14))
>>> G = G + G.T
>>> GG = nx.Graph(G)
>>> nx.minimum_cycle_basis(GG)
[[0, 5, 6, 8, 13], [0, 1, 6, 7, 9], [4, 5, 6, 7, 12], [0, 1, 2, 8, 10], [0, 6, 7, 8, 10, 12], [1, 2, 3, 9, 11], [2, 3, 4, 10, 12], [3, 4, 5, 11, 13]]
```
Here, for example (0, 5) is not an edge of the graph G.
### Environment
Python version: 3.10.12
NetworkX version: 3.1
### Additional context
| This is because the nodes in the node list of a cycle are not necessarily returned in the order by which they appear in the cycle.
This behavior has been documented [here](https://networkx.org/documentation/stable/reference/algorithms/generated/networkx.algorithms.cycles.minimum_cycle_basis.html#networkx.algorithms.cycles.minimum_cycle_basis).
Thank you for the prompt reply. I overlooked this in the documentation. Is there a preferred way of finding the order in which the nodes appear in the cycles?
> Is there a preferred way of finding the order in which the nodes appear in the cycles?
I can't say that this way is "preferred", nor can I guarantee that it will even always work, but the naive thing to do would be an exhaustive search of potential edges from the cycle basis in `G`:
```python
>>> import itertools
>>> cycle_bases = nx.minimum_cycle_basis(GG)
>>> cb = cycle_bases[0]
>>> cb
[0, 5, 6, 8, 13]
>>> potential_edges = {
... (u, v) for u, v in itertools.product(cb, cb)
... if u != v # ignoring self edges in this example
... }
>>> cycle = potential_edges & set(GG.edges())
>>> cycle
{(0, 6), (0, 8), (5, 6), (5, 13), (8, 13)}
```
Using the `GG.subgraph` method should cut down the work in finding edges.
The `nx.cycle_basis(GG)` function returns lists of nodes that are the path of the cycle. It is worth looking into...
The current return values of `nx.minimum_cycle_basis` are not very useful. I'm not sure why it returns the cycles with unordered nodes. We will look into it more.
Here are some partial fixes:
`nx.find_cycle` might work with something like this:
```python
cb = nx.minimum_cycle_basis(GG)
cbasis = [bl for b in cb if len(bl:=nx.find_cycle(GG.subgraph(b)))==len(b)]
```
which can be written more clearly:
```python
cb = nx.minimum_cycle_basis(GG)
cbasis=[]
for b in cb:
cycle = nx.find_cycle(GG.subgraph(b))
if len(cycle) == len(b):
cbasis.append(cycle)
```
Does `nx.cycle_basis(GG)` work for you? Or do you need a minimum?
Maybe the following works? (max of a collection of lists uses `len` to get the longest one.)
```python
[max(nx.cycle_basis(GG.subgraph(b))) for b in nx.minimum_cycle_basis(GG)]
```
@dschult I am confused about the find_cycle function. Does it only find one cycle in the graph? Not necessarily all cycles?
@rossbar I think what your approach achieves is what `GG.subgraph(b)` does in dschult's code. Correct?
I think this code will find the correct order of all of the minimum cycles and store them in the list `min_cycles`.
```python
import scipy.sparse as sp
import networkx as nx
import numpy as np
row = [0, 0, 0, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6,
7, 7, 8, 8, 9, 10, 11]
col = [1, 6, 8, 9, 2, 3, 10, 4, 11, 5, 12, 6, 13, 7,
9, 12, 10, 13, 11, 12, 13]
data = [3.] * 14 + [2.] * 7
G = sp.coo_array((data, (row, col)), shape=(14, 14))
G = G + G.T
GG = nx.Graph(G)
basis = nx.minimum_cycle_basis(GG)
min_cycles = []
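# For each basis node-set b: enumerate the simple cycles of the induced
# subgraph, keep those visiting every node of b, close each cycle by
# repeating its first node, and pick the one with the smallest total weight.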
for b in basis:
s = nx.simple_cycles(GG.subgraph(b))
s = [np.append(x, x[0]) for x in s if len(x) == len(b)]
c = np.array([np.sum(G[x[:-1], x[1:]]) for x in s])
idx = np.argmin(c)
min_cycles.append(list(s[idx][:-1]))
``` | 2023-07-13T07:57:43 |
networkx/networkx | 6,798 | networkx__networkx-6798 | [
"6796"
] | 0204a246a8bfba38153f03b3967a669c05a7181f | diff --git a/networkx/algorithms/planarity.py b/networkx/algorithms/planarity.py
--- a/networkx/algorithms/planarity.py
+++ b/networkx/algorithms/planarity.py
@@ -855,6 +855,22 @@ class PlanarEmbedding(nx.DiGraph):
"""
+ def __init__(self, incoming_graph_data=None, **attr):
+ super().__init__(incoming_graph_data=incoming_graph_data, **attr)
+ self.add_edge = self.__forbidden
+ self.add_edges_from = self.__forbidden
+ self.add_weighted_edges_from = self.__forbidden
+
+ def __forbidden(self, *args, **kwargs):
+ """Forbidden operation
+
+ Any edge additions to a PlanarEmbedding should be done using
+ method `add_half_edge`.
+ """
+ raise NotImplementedError(
+ "Use `add_half_edge` method to add edges to a PlanarEmbedding."
+ )
+
def get_data(self):
"""Converts the adjacency structure into a better readable structure.
@@ -896,6 +912,75 @@ def set_data(self, data):
self.add_half_edge(v, w, cw=ref)
ref = w
+ def remove_node(self, n):
+ """Remove node n.
+
+ Removes the node n and all adjacent edges, updating the
+ PlanarEmbedding to account for any resulting edge removal.
+ Attempting to remove a non-existent node will raise an exception.
+
+ Parameters
+ ----------
+ n : node
+ A node in the graph
+
+ Raises
+ ------
+ NetworkXError
+ If n is not in the graph.
+
+ See Also
+ --------
+ remove_nodes_from
+
+ """
+ try:
+ for u in self._pred[n]:
+ succs_u = self._succ[u]
+ un_cw = succs_u[n]["cw"]
+ un_ccw = succs_u[n]["ccw"]
+ del succs_u[n]
+ del self._pred[u][n]
+ if n != un_cw:
+ succs_u[un_cw]["ccw"] = un_ccw
+ succs_u[un_ccw]["cw"] = un_cw
+ del self._node[n]
+ del self._succ[n]
+ del self._pred[n]
+ except KeyError as err: # NetworkXError if n not in self
+ raise nx.NetworkXError(
+ f"The node {n} is not in the planar embedding."
+ ) from err
+
+ def remove_nodes_from(self, nodes):
+ """Remove multiple nodes.
+
+ Parameters
+ ----------
+ nodes : iterable container
+ A container of nodes (list, dict, set, etc.). If a node
+ in the container is not in the graph it is silently ignored.
+
+ See Also
+ --------
+ remove_node
+
+ Notes
+ -----
+ When removing nodes from an iterator over the graph you are changing,
+ a `RuntimeError` will be raised with message:
+ `RuntimeError: dictionary changed size during iteration`. This
+ happens when the graph's underlying dictionary is modified during
+ iteration. To avoid this error, evaluate the iterator into a separate
+ object, e.g. by using `list(iterator_of_nodes)`, and pass this
+ object to `G.remove_nodes_from`.
+
+ """
+ for n in nodes:
+ if n in self._node:
+ self.remove_node(n)
+ # silently skip non-existing nodes
+
def neighbors_cw_order(self, v):
"""Generator for the neighbors of v in clockwise order.
@@ -940,6 +1025,7 @@ def add_half_edge(self, start_node, end_node, *, cw=None, ccw=None):
End node of reference edge.
Omit or pass `None` if adding the first out-half-edge of `start_node`.
+
Raises
------
NetworkXException
@@ -962,7 +1048,7 @@ def add_half_edge(self, start_node, end_node, *, cw=None, ccw=None):
if ccw is not None:
raise nx.NetworkXError("Only one of cw/ccw can be specified.")
ref_ccw = succs[cw]["ccw"]
- self.add_edge(start_node, end_node, cw=cw, ccw=ref_ccw)
+ super().add_edge(start_node, end_node, cw=cw, ccw=ref_ccw)
succs[ref_ccw]["cw"] = end_node
succs[cw]["ccw"] = end_node
# when (cw == leftmost_nbr), the newly added neighbor is
@@ -973,7 +1059,7 @@ def add_half_edge(self, start_node, end_node, *, cw=None, ccw=None):
if ccw not in succs:
raise nx.NetworkXError("Invalid counterclockwise reference node.")
ref_cw = succs[ccw]["cw"]
- self.add_edge(start_node, end_node, cw=ref_cw, ccw=ccw)
+ super().add_edge(start_node, end_node, cw=ref_cw, ccw=ccw)
succs[ref_cw]["ccw"] = end_node
succs[ccw]["cw"] = end_node
move_leftmost_nbr_to_end = True
@@ -986,11 +1072,12 @@ def add_half_edge(self, start_node, end_node, *, cw=None, ccw=None):
# we keep track of the leftmost neighbor, which we accomplish
# by keeping it as the last key in dict self._succ[start_node]
succs[leftmost_nbr] = succs.pop(leftmost_nbr)
+
else:
if cw is not None or ccw is not None:
raise nx.NetworkXError("Invalid reference node.")
# adding the first edge out of start_node
- self.add_edge(start_node, end_node, ccw=end_node, cw=end_node)
+ super().add_edge(start_node, end_node, ccw=end_node, cw=end_node)
def check_structure(self):
"""Runs without exceptions if this object is valid.
@@ -1107,6 +1194,79 @@ def add_half_edge_cw(self, start_node, end_node, reference_neighbor):
"""
self.add_half_edge(start_node, end_node, ccw=reference_neighbor)
+ def remove_edge(self, u, v):
+ """Remove the edge between u and v.
+
+ Parameters
+ ----------
+ u, v : nodes
+ Remove the half-edges (u, v) and (v, u) and update the
+ edge ordering around the removed edge.
+
+ Raises
+ ------
+ NetworkXError
+ If there is not an edge between u and v.
+
+ See Also
+ --------
+ remove_edges_from : remove a collection of edges
+ """
+ try:
+ succs_u = self._succ[u]
+ succs_v = self._succ[v]
+ uv_cw = succs_u[v]["cw"]
+ uv_ccw = succs_u[v]["ccw"]
+ vu_cw = succs_v[u]["cw"]
+ vu_ccw = succs_v[u]["ccw"]
+ del succs_u[v]
+ del self._pred[v][u]
+ del succs_v[u]
+ del self._pred[u][v]
+ if v != uv_cw:
+ succs_u[uv_cw]["ccw"] = uv_ccw
+ succs_u[uv_ccw]["cw"] = uv_cw
+ if u != vu_cw:
+ succs_v[vu_cw]["ccw"] = vu_ccw
+ succs_v[vu_ccw]["cw"] = vu_cw
+ except KeyError as err:
+ raise nx.NetworkXError(
+ f"The edge {u}-{v} is not in the planar embedding."
+ ) from err
+
+ def remove_edges_from(self, ebunch):
+ """Remove all edges specified in ebunch.
+
+ Parameters
+ ----------
+ ebunch: list or container of edge tuples
+ Each pair of half-edges between the nodes given in the tuples
+ will be removed from the graph. The nodes can be passed as:
+
+ - 2-tuples (u, v) half-edges (u, v) and (v, u).
+ - 3-tuples (u, v, k) where k is ignored.
+
+ See Also
+ --------
+ remove_edge : remove a single edge
+
+ Notes
+ -----
+ Will fail silently if an edge in ebunch is not in the graph.
+
+ Examples
+ --------
+ >>> G = nx.path_graph(4) # or DiGraph, MultiGraph, MultiDiGraph, etc
+ >>> ebunch = [(1, 2), (2, 3)]
+ >>> G.remove_edges_from(ebunch)
+ """
+ for e in ebunch:
+ u, v = e[:2] # ignore edge data
+ # assuming that the PlanarEmbedding is valid, if the half_edge
+ # (u, v) is in the graph, then so is half_edge (v, u)
+ if u in self._succ and v in self._succ[u]:
+ self.remove_edge(u, v)
+
def connect_components(self, v, w):
"""Adds half-edges for (v, w) and (w, v) at some position.
| diff --git a/networkx/algorithms/tests/test_planarity.py b/networkx/algorithms/tests/test_planarity.py
--- a/networkx/algorithms/tests/test_planarity.py
+++ b/networkx/algorithms/tests/test_planarity.py
@@ -277,6 +277,20 @@ def test_counterexample_planar_recursive(self):
G.add_node(1)
get_counterexample_recursive(G)
+ def test_edge_removal_from_planar_embedding(self):
+ # PlanarEmbedding.check_structure() must succeed after edge removal
+ edges = ((0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 2), (0, 3))
+ G = nx.Graph(edges)
+ cert, P = nx.check_planarity(G)
+ assert cert is True
+ P.remove_edge(0, 2)
+ self.check_graph(P, is_planar=True)
+ P.add_half_edge_ccw(1, 3, 2)
+ P.add_half_edge_cw(3, 1, 2)
+ self.check_graph(P, is_planar=True)
+ P.remove_edges_from(((0, 3), (1, 3)))
+ self.check_graph(P, is_planar=True)
+
def check_embedding(G, embedding):
"""Raises an exception if the combinatorial embedding is not correct
@@ -410,19 +424,51 @@ def test_get_data(self):
data_cmp = {0: [3, 2, 1], 1: [0], 2: [0], 3: [0]}
assert data == data_cmp
- def test_missing_edge_orientation(self):
+ def test_edge_removal(self):
embedding = nx.PlanarEmbedding()
- embedding.add_edge(1, 2)
- embedding.add_edge(2, 1)
+ embedding.set_data(
+ {
+ 1: [2, 5, 7],
+ 2: [1, 3, 4, 5],
+ 3: [2, 4],
+ 4: [3, 6, 5, 2],
+ 5: [7, 1, 2, 4],
+ 6: [4, 7],
+ 7: [6, 1, 5],
+ }
+ )
+ # remove_edges_from() calls remove_edge(), so both are tested here
+ embedding.remove_edges_from(((5, 4), (1, 5)))
+ embedding.check_structure()
+ embedding_expected = nx.PlanarEmbedding()
+ embedding_expected.set_data(
+ {
+ 1: [2, 7],
+ 2: [1, 3, 4, 5],
+ 3: [2, 4],
+ 4: [3, 6, 2],
+ 5: [7, 2],
+ 6: [4, 7],
+ 7: [6, 1, 5],
+ }
+ )
+ assert nx.utils.graphs_equal(embedding, embedding_expected)
+
+ def test_missing_edge_orientation(self):
+ embedding = nx.PlanarEmbedding({1: {2: {}}, 2: {1: {}}})
with pytest.raises(nx.NetworkXException):
# Invalid structure because the orientation of the edge was not set
embedding.check_structure()
def test_invalid_edge_orientation(self):
- embedding = nx.PlanarEmbedding()
- embedding.add_half_edge(1, 2)
- embedding.add_half_edge(2, 1)
- embedding.add_edge(1, 3)
+ embedding = nx.PlanarEmbedding(
+ {
+ 1: {2: {"cw": 2, "ccw": 2}},
+ 2: {1: {"cw": 1, "ccw": 1}},
+ 1: {3: {}},
+ 3: {1: {}},
+ }
+ )
with pytest.raises(nx.NetworkXException):
embedding.check_structure()
@@ -461,12 +507,23 @@ def test_successful_face_traversal(self):
assert face == [1, 2]
def test_unsuccessful_face_traversal(self):
- embedding = nx.PlanarEmbedding()
- embedding.add_edge(1, 2, ccw=2, cw=3)
- embedding.add_edge(2, 1, ccw=1, cw=3)
+ embedding = nx.PlanarEmbedding(
+ {1: {2: {"cw": 3, "ccw": 2}}, 2: {1: {"cw": 3, "ccw": 1}}}
+ )
with pytest.raises(nx.NetworkXException):
embedding.traverse_face(1, 2)
+ def test_forbidden_methods(self):
+ embedding = nx.PlanarEmbedding()
+ embedding.add_node(42) # no exception
+ embedding.add_nodes_from([(23, 24)]) # no exception
+ with pytest.raises(NotImplementedError):
+ embedding.add_edge(1, 3)
+ with pytest.raises(NotImplementedError):
+ embedding.add_edges_from([(0, 2), (1, 4)])
+ with pytest.raises(NotImplementedError):
+ embedding.add_weighted_edges_from([(0, 2, 350), (1, 4, 125)])
+
@staticmethod
def get_star_embedding(n):
embedding = nx.PlanarEmbedding()
| Modifying the edges of a PlanarEmbedding graph invalidates its structure.
If an edge is removed and another one added to a `PlanarEmbedding`, it becomes invalid even if the modifications result in a planar embedding (see notebook below).
### Current Behavior
An error is produced upon calling `PlanarEmbedding.check_structure()`.
### Expected Behavior
No error (provided the graph is still a planar embedding).
### Steps to Reproduce
```python
import networkx as nx
edges = ((0, 1), (1, 2), (2, 3), (3, 0), (0, 2))
pos = ((0, 0), (1, 0), (1, 1), (0, 1))
Gmwe = nx.Graph(edges)
cert, Pmwe = nx.check_planarity(Gmwe)
cert
```
Output:
True
```python
type(Pmwe)
```
Output:
networkx.algorithms.planarity.PlanarEmbedding
```python
nx.draw(Pmwe, with_labels=True, pos=pos)
```

```python
list(Pmwe.neighbors_cw_order(0))
```
Output:
[1, 3, 2]
```python
list(Pmwe.neighbors_cw_order(2))
```
Output:
[1, 0, 3]
Now we remove the edge `(0, 2)` and add `(3, 1)`, inserting each half-edge in the proper position.
```python
Pmwe.remove_edges_from(((0, 2), (2, 0)))
Pmwe.add_half_edge_ccw(1, 3, 2)
Pmwe.add_half_edge_cw(3, 1, 2)
```
```python
nx.draw(Pmwe, with_labels=True, pos=pos)
```

Why is the modified PlanarEmbedding not structurally sound anymore?
```python
Pmwe.check_structure()
```
Output:
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
File ~\programs\miniconda\envs\interarray\lib\site-packages\networkx\algorithms\planarity.py:938, in PlanarEmbedding.check_structure(self)
937 try:
--> 938 sorted_nbrs = set(self.neighbors_cw_order(v))
939 except KeyError as err:
File ~\programs\miniconda\envs\interarray\lib\site-packages\networkx\algorithms\planarity.py:915, in PlanarEmbedding.neighbors_cw_order(self, v)
914 yield current_node
--> 915 current_node = self[v][current_node]["cw"]
File ~\programs\miniconda\envs\interarray\lib\site-packages\networkx\classes\coreviews.py:53, in AtlasView.__getitem__(self, key)
52 def __getitem__(self, key):
---> 53 return self._atlas[key]
KeyError: 2
The above exception was the direct cause of the following exception:
NetworkXException Traceback (most recent call last)
Cell In[12], line 1
----> 1 Pmwe.check_structure()
File ~\programs\miniconda\envs\interarray\lib\site-packages\networkx\algorithms\planarity.py:941, in PlanarEmbedding.check_structure(self)
939 except KeyError as err:
940 msg = f"Bad embedding. Missing orientation for a neighbor of {v}"
--> 941 raise nx.NetworkXException(msg) from err
943 unsorted_nbrs = set(self[v])
944 if sorted_nbrs != unsorted_nbrs:
NetworkXException: Bad embedding. Missing orientation for a neighbor of 0
```
The `check_planarity()` function has no problems with it:
```python
cert, Nmwe = nx.check_planarity(Pmwe)
cert
```
Output:
True
```python
nx.draw(Nmwe, with_labels=True, pos=pos)
```

```python
nx.difference(Pmwe, Nmwe).edges
```
Output:
OutEdgeView([])
### Environment
Python version: 3.10.12
NetworkX version: 3.1
| After experimenting a bit, I figured that `PlanarEmbedding.remove_edge()` is not changing the values of keys `'cw'` and `'ccw'` in the data of the edges surrounding the removed edge (i.e. they keep referencing the removed edge). I think this is a bug.
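A minimal sketch of the stale references (hypothetical interactive session; the exact `'cw'`/`'ccw'` values depend on the embedding order NetworkX chooses):
```python
import networkx as nx

_, P = nx.check_planarity(nx.Graph([(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]))
print(P[0][1])  # half-edge data, e.g. {'cw': 3, 'ccw': 2}
P.remove_edges_from(((0, 2), (2, 0)))
print(P[0][1])  # the 'cw'/'ccw' entries may still name the removed neighbor 2
```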
Yes -- it sounds like that is a bug.
Thanks for tracking down the `remove_edge` location for the bug. Are you able to open a PR to fix it? If not we will get to it, but we'd love a PR -- or even a description/patch of what changes are needed.
Thanks!! | 2023-07-20T08:27:09 |
networkx/networkx | 6,837 | networkx__networkx-6837 | [
"6836",
"6836"
] | ff9c27b40f1b6159020d1da007f4e44e16e082a4 | diff --git a/networkx/readwrite/gml.py b/networkx/readwrite/gml.py
--- a/networkx/readwrite/gml.py
+++ b/networkx/readwrite/gml.py
@@ -311,9 +311,34 @@ def tokenize():
]
tokens = re.compile("|".join(f"({pattern})" for pattern in patterns))
lineno = 0
+ multilines = [] # entries spread across multiple lines
for line in lines:
- length = len(line)
pos = 0
+
+ # deal with entries spread across multiple lines
+ #
+ # should we actually have to deal with escaped "s then do it here
+ if multilines:
+ multilines.append(line.strip())
+ if line[-1] == '"': # closing multiline entry
+ # multiline entries will be joined by space. cannot
+ # reintroduce newlines as this will break the tokenizer
+ line = " ".join(multilines)
+ multilines = []
+ else: # continued multiline entry
+ lineno += 1
+ continue
+ else:
+ if line.count('"') == 1: # opening multiline entry
+ if line.strip()[0] != '"' and line.strip()[-1] != '"':
+ # since we expect something like key "value", the " should not be found at ends
+ # otherwise tokenizer will pick up the formatting mistake.
+ multilines = [line.rstrip()]
+ lineno += 1
+ continue
+
+ length = len(line)
+
while pos < length:
match = tokens.match(line, pos)
if match is None:
| diff --git a/networkx/readwrite/tests/test_gml.py b/networkx/readwrite/tests/test_gml.py
--- a/networkx/readwrite/tests/test_gml.py
+++ b/networkx/readwrite/tests/test_gml.py
@@ -614,6 +614,28 @@ def test_outofrange_integers(self):
os.close(fd)
os.unlink(fname)
+ def test_multiline(self):
+ # example from issue #6836
+ multiline_example = """
+graph
+[
+ node
+ [
+ id 0
+ label "multiline node"
+ label2 "multiline1
+ multiline2
+ multiline3"
+ alt_name "id 0"
+ ]
+]
+"""
+ G = nx.parse_gml(multiline_example)
+ assert G.nodes["multiline node"] == {
+ "label2": "multiline1 multiline2 multiline3",
+ "alt_name": "id 0",
+ }
+
@contextmanager
def byte_file():
| Multi-line entries in GML files not supported
I am dealing with GML files that contain string entries spread across multiple lines. Networkx cannot parse such entries (Cytoscape for example can).
### Current Behavior
Parsing GML files with multi-line entries fails with `NetworkXError: cannot tokenize...`.
### Expected Behavior
NetworkX should be able to parse those entries.
### Steps to Reproduce
Example GML file:
```
graph
[
  node
  [
    id 0
    label "multiline node"
    label2 "multiline1
    multiline2
    multiline3"
    alt_name "id 0"
  ]
]
```
Example code:
```python
nx.read_gml("multiline.gml")
# fails with NetworkXError: cannot tokenize...
```
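For reference, the fixed behavior implemented by the patch above can be checked in memory with `parse_gml`; as the new regression test shows, multiline values come back joined by single spaces:
```python
import networkx as nx

gml = """
graph
[
  node
  [
    id 0
    label "multiline node"
    label2 "multiline1
    multiline2
    multiline3"
  ]
]
"""
G = nx.parse_gml(gml)
assert G.nodes["multiline node"]["label2"] == "multiline1 multiline2 multiline3"
```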
### Environment
Python version: 3.9.7 and 3.11.4
NetworkX version: 2.6.3 and latest (git commit ff9c27b40)
### Additional context
Already working on a fix
| 2023-08-03T02:20:04 |
|
networkx/networkx | 6,854 | networkx__networkx-6854 | [
"6846"
] | baec26fbabc3321beb52ed4afc9d2ad7b631df6f | diff --git a/networkx/conftest.py b/networkx/conftest.py
--- a/networkx/conftest.py
+++ b/networkx/conftest.py
@@ -94,6 +94,7 @@ def set_warnings():
@pytest.fixture(autouse=True)
def add_nx(doctest_namespace):
doctest_namespace["nx"] = networkx
+ # TODO: remove the try-except block when we require numpy >= 2
try:
import numpy as np
| NetworkX fails with NumPy 2.0dev - NEP 51
The `main` branch currently [fails](https://github.com/networkx/networkx/actions/runs/5845735581/job/15850137542) with the numpy 2.0 nightly release, as numpy has changed the (printing) behavior of scalars per [NEP 51](https://numpy.org/neps/nep-0051-scalar-representation.html). Our doctests will fail, as they expect an exact string match (?).
| This looks like another one of those changes that mess up our doc_strings. When numpy changed their output format we had to handle those differently. I don't recall if we only tested doc_strings with specific numpy versions, but I think it was something like that. And we changed all the doc_strings format in one big commit I think.
So, we should talk about how best to handle it. My initial thought is that we drop the `--doctest-modules` on much of the testing matrix -- At first that can be for whenever numpy >= 2.0. But then we'll need to switch it at some point from dropping it for newest numpy to dropping it for the older numpy -- and we make that CI change when we make the doc_string change.
I guess the idea is that the results of doc_string tests/examples are rarely dependent on version of numpy. So as long as we're testing it on one version of numpy, it is likely working on the others and we test the others using the test modules.
> So, we should talk about how best to handle it
My initial thought was to investigate how many of the tests we could reformat so that the output didn't contain scalar repr's. From a quick glance it seems there may be a few that can be modified, but I don't think that will work in every case. Therefore investigating alternatives is worthwhile.
I think @dschult 's proposal is the most workable, even if it requires a bit of manual config/pinning in CI. Another option might be to use `np.printoptions` to set printing to "legacy" mode for the scalars, at least for the purposes of testing. I *think* this is doable, but we'll have to look into it more (if it is possible, it should probably be added to the NEP!)
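A rough sketch of the symptom and of the `printoptions` route (the `legacy="1.25"` spelling is an assumption about the NumPy 2.0 API and would need checking against the NEP):
```python
import numpy as np

# Under NEP 51 (NumPy >= 2.0) scalar reprs change, breaking exact-match doctests:
x = np.float64(0.5)
repr(x)  # NumPy 1.x: '0.5'    NumPy 2.x: 'np.float64(0.5)'

# Possible doctest workaround, if legacy mode also covers scalar printing:
with np.printoptions(legacy="1.25"):
    print(repr(np.float64(0.5)))  # '0.5' again, if supported
```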
The silver lining of these failures is that the new scalar repr identifies some places in the codebase where numpy scalars are returned implicitly. Kinda neat IMO :) | 2023-08-17T17:37:00 |
|
networkx/networkx | 6,866 | networkx__networkx-6866 | [
"6865"
] | 1fe8bbe7f0892d26b658d16f31fb9707dfbc6876 | diff --git a/networkx/linalg/laplacianmatrix.py b/networkx/linalg/laplacianmatrix.py
--- a/networkx/linalg/laplacianmatrix.py
+++ b/networkx/linalg/laplacianmatrix.py
@@ -257,7 +257,8 @@ def directed_laplacian_matrix(
evals, evecs = sp.sparse.linalg.eigs(P.T, k=1)
v = evecs.flatten().real
p = v / v.sum()
- sqrtp = np.sqrt(p)
+ # p>=0 by Perron-Frobenius Thm. Use abs() to fix roundoff across zero gh-6865
+ sqrtp = np.sqrt(np.abs(p))
Q = (
# TODO: rm csr_array wrapper when spdiags creates arrays
sp.sparse.csr_array(sp.sparse.spdiags(sqrtp, 0, n, n))
| directed_laplacian_matrix sometimes returns nans
<!-- If you have a general question about NetworkX, please use the discussions tab to create a new discussion -->
<!--- Provide a general summary of the issue in the Title above -->
The function `directed_laplacian_matrix` returns a matrix with `nan` entries on specific input graphs. The entries of the directed Laplacian matrix are always well-defined, so `nan`s should not appear. This behavior does not produce an error, but the code of the function produces a `RuntimeWarning`, and seems to come from the attempt to take the square root (with `numpy`) of an array with negative entries (produced by `scipy`).
### Current Behavior
<!--- Tell us what happens instead of the expected behavior -->
The function `directed_laplacian_matrix` in line `257` calls `scipy.sparse.linalg.eigs` to get the unique eigenvector whose eigenvalue is 1. This eigenvector should have all positive or all negative values, but it seems that `scipy` sometimes returns some positive and some negative entries. The existence of a unique left eigenvector with all positive entries (corresponding to the eigenvalue 1) is guaranteed by the Perron--Frobenius theorem. The correct eigenvector is selected by `sparse.linalg.eigs` from `scipy`, and even if all the entries are negative (still a legitimate eigenvector), normalizing on line `259` should ensure all entries are positive. However, if the signs of entries are both positve and negative, then that will stay the case after this line. When taking the square root later in line `260` with `numpy.sqrt`, a `RuntimeWarning` occurs, because the input is not necessarily positive. The warning is not produced when executing the function `directed_laplacian_matrix` directly, only when `np.sqrt(p)` is called on line `260` by itself.
### Expected Behavior
<!--- Tell us what should happen -->
It should be that `directed_laplacian_matrix` returns a matrix that does not have any `nan`s. Or, perhaps it should give an error if the vector `evecs` returned from `scipy` on line `257` does not have all positive or all negative entries.
### Steps to Reproduce
<!--- Provide a minimal example that reproduces the bug -->
I was trying to compute the directed Laplacian matrix for a directed graph with 408 nodes, accessible in `adjlist` format in [theattached file](https://github.com/networkx/networkx/files/12395264/G.txt) file (and also at a [pastebin link](https://pastebin.com/ET1k7vBq)) . This is the largest strongly connected component of a larger directed graph with 440 nodes (so the node indices go up to 440). The following code demonstrates that sometimes `nan` values are returned. The linked file is saved as `G.txt`.
````
import networkx as nx
G = nx.read_adjlist('G.txt', create_using=nx.DiGraph)
tries = 100
no_nan,has_nan = (0,0)
for _ in range(tries):
mat = nx.directed_laplacian_matrix(G)
nans = np.where(np.isnan(np.array(mat)))
if len(nans[0] == 0):
no_nan += 1
else:
has_nan += 1
print('In '+str(tries)+' attempts, '+str(no_nan)+' attempts with no nan values, '+str(has_nan)+' attempts with nan values', flush=True)
>>> In 100 attempts, 69 attempts with no nan values, 31 attempts with nan values
````
The following code of lines `251` to `260` shows that the `RuntimeWarning` happens on the `np.sqrt` line.
````
import numpy as np
import scipy as sp
import warnings

warnings.filterwarnings("error")
# continues the session above: nx and G are already defined
tries = 100
success, fail = (0, 0)
for _ in range(tries):
    P = nx.linalg.laplacianmatrix._transition_matrix(G)
    evals, evecs = sp.sparse.linalg.eigs(P.T, k=1)
    v = evecs.flatten().real
    p = v / v.sum()
    try:
        sqrtp = np.sqrt(p)
        success += 1
    except RuntimeWarning:
        fail += 1
print('In ' + str(tries) + ' attempts, ' + str(success) + ' successes and ' + str(fail) + ' failures', flush=True)
>>> In 100 attempts, 27 successes and 73 failures
````
The precise warning is as follows:
````
RuntimeWarning: invalid value encountered in sqrt
sqrtp = np.sqrt(p)
````
### Environment
Python version: 3.8.3
NetworkX version: 3.1
NumPy version: 1.23.1
SciPy version: 1.8.0
### Other comments
My guess is that the issue is with rounding in `scipy`, so perhaps it is better to have this discussion in that project. However, it would be nice for `networkx` to provide some indication (error, warning) if this happens; currently I only get an error if I try to do something with the returned matrix. There is no check made in `directed_laplacian_matrix` that the entries of `p` (the vector passed to the square root function) are positive; perhaps checking there and raising an error would better inform the user when there is a problem.
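For reference, the patch merged above handles this by clamping the roundoff with `np.abs` rather than raising. A hypothetical guard along the lines suggested here (the helper name and tolerance are illustrative, not part of NetworkX) could look like:
```python
import numpy as np

def perron_sqrt(p):
    # p >= 0 is guaranteed by the Perron-Frobenius theorem, so any negative
    # entries should be roundoff noise across zero rather than real signs.
    if (p < -1e-12).any():  # tolerance chosen arbitrarily for illustration
        raise ValueError("stationary vector has genuinely mixed signs")
    return np.sqrt(np.abs(p))
```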
| 2023-08-21T15:31:52 |
||
networkx/networkx | 6,867 | networkx__networkx-6867 | [
"6795"
] | 1fe8bbe7f0892d26b658d16f31fb9707dfbc6876 | diff --git a/networkx/algorithms/traversal/depth_first_search.py b/networkx/algorithms/traversal/depth_first_search.py
--- a/networkx/algorithms/traversal/depth_first_search.py
+++ b/networkx/algorithms/traversal/depth_first_search.py
@@ -156,6 +156,9 @@ def dfs_predecessors(G, source=None, depth_limit=None):
source : node, optional
Specify starting node for depth-first search.
+ Note that you will get predecessors for all nodes in the
+ component containing `source`. This input only specifies
+ where the DFS starts.
depth_limit : int, optional (default=len(G))
Specify the maximum search depth.
@@ -207,6 +210,9 @@ def dfs_successors(G, source=None, depth_limit=None):
source : node, optional
Specify starting node for depth-first search.
+ Note that you will get successors for all nodes in the
+ component containing `source`. This input only specifies
+ where the DFS starts.
depth_limit : int, optional (default=len(G))
Specify the maximum search depth.
| confusing results generated by dfs_predecessors()
The dfs_predecessors(G, source, depth) and the dfs_successors(G, source, depth) generate the same results.
They both return the successors of the source node.
| If the graph `G` is undirected, then this is expected. You can think of undirected edges as bi-directional, in which case there is no distinction between predecessors and successors:
```python
>>> G = nx.path_graph(4)
>>> nx.dfs_successors(G, source=1, depth_limit=1)
{1: [0, 2]}
>>> nx.dfs_predecessors(G, source=1, depth_limit=1)
{0: 1, 2: 1}
```
Please try the following code
```python
>>> G = nx.DiGraph()
>>> G.add_edges_from([(1, 2), (1, 3), (3, 4), (3, 5), (0, 1), (9, 0)])
>>> nx.draw(G, with_labels=True)
>>> nx.dfs_predecessors(G, source=1)
{2: 1, 3: 1, 4: 3, 5: 3}
>>> nx.dfs_successors(G, source=1)
{1: [2, 3], 3: [4, 5]}
```
Thanks for the example, I see what you mean now: I assume you are expecting `nx.dfs_predecessors(G, source=1)` to give the predecessors of node `1`, so something like `{1: [0, 9]}`. The current result does seem incorrect to me.
But the docs state that the source is the starting node for the DFS, not the node for which you want the predecessors. So, while I think the returned value is correct, I can see that there is a misunderstanding in the wording of the doc_string.
We could change the name of the optional keyword argument from `source` to `starting_node_of_DFS`. But I think that may be more than what is called for. How about the description becomes:
```
source : node, optional
    Specify starting node for depth-first search.
    Note that you will get predecessors for all nodes in the
    component containing `source`. This input only specifies
    where the DFS starts.
```
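Illustrating the clarified wording on the reporter's graph (the second call is the standard way to get the actual in-neighbors the reporter expected):
```python
import networkx as nx

G = nx.DiGraph([(1, 2), (1, 3), (3, 4), (3, 5), (0, 1), (9, 0)])
nx.dfs_predecessors(G, source=1)  # DFS-tree parents of nodes reached from 1
# {2: 1, 3: 1, 4: 3, 5: 3}
list(G.predecessors(1))  # the direct predecessors of node 1: [0]
```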
Thoughts? Is there a better way to convey this? | 2023-08-21T18:08:40 |
|
networkx/networkx | 6,869 | networkx__networkx-6869 | [
"6848"
] | f93f0e2a066fc456aa447853af9d00eec1058542 | diff --git a/networkx/utils/decorators.py b/networkx/utils/decorators.py
--- a/networkx/utils/decorators.py
+++ b/networkx/utils/decorators.py
@@ -261,14 +261,15 @@ def _nodes_or_number(n):
def np_random_state(random_state_argument):
- """Decorator to generate a `numpy.random.RandomState` instance.
+ """Decorator to generate a numpy RandomState or Generator instance.
The decorator processes the argument indicated by `random_state_argument`
using :func:`nx.utils.create_random_state`.
The argument value can be a seed (integer), or a `numpy.random.RandomState`
- instance or (`None` or `numpy.random`). The latter options use the glocal
- random number generator used by `numpy.random`.
- The result is a `numpy.random.RandomState` instance.
+ or `numpy.random.RandomState` instance or (`None` or `numpy.random`).
+ The latter two options use the global random number generator for `numpy.random`.
+
+ The returned instance is a `numpy.random.RandomState` or `numpy.random.Generator`.
Parameters
----------
@@ -307,19 +308,24 @@ def random_array(dims, random_state=1):
def py_random_state(random_state_argument):
"""Decorator to generate a random.Random instance (or equiv).
- The decorator processes the argument indicated by `random_state_argument`
- using :func:`nx.utils.create_py_random_state`.
- The argument value can be a seed (integer), or a random number generator::
+ This decorator processes `random_state_argument` using
+ :func:`nx.utils.create_py_random_state`.
+ The input value can be a seed (integer), or a random number generator::
If int, return a random.Random instance set with seed=int.
If random.Random instance, return it.
If None or the `random` package, return the global random number
generator used by `random`.
- If np.random package, return the global numpy random number
- generator wrapped in a PythonRandomInterface class.
- If np.random.RandomState instance, return it wrapped in
- PythonRandomInterface
- If a PythonRandomInterface instance, return it
+ If np.random package, or the default numpy RandomState instance,
+ return the default numpy random number generator wrapped in a
+ `PythonRandomViaNumpyBits` class.
+ If np.random.Generator instance, return it wrapped in a
+ `PythonRandomViaNumpyBits` class.
+
+ # Legacy options
+ If np.random.RandomState instance, return it wrapped in a
+ `PythonRandomInterface` class.
+ If a `PythonRandomInterface` instance, return it
Parameters
----------
diff --git a/networkx/utils/misc.py b/networkx/utils/misc.py
--- a/networkx/utils/misc.py
+++ b/networkx/utils/misc.py
@@ -11,6 +11,7 @@
1
"""
+import random
import sys
import uuid
import warnings
@@ -30,6 +31,7 @@
"create_random_state",
"create_py_random_state",
"PythonRandomInterface",
+ "PythonRandomViaNumpyBits",
"nodes_equal",
"edges_equal",
"graphs_equal",
@@ -271,7 +273,68 @@ def create_random_state(random_state=None):
raise ValueError(msg)
+class PythonRandomViaNumpyBits(random.Random):
+ """Provide the random.random algorithms using a Numpy.random bit generator
+
+ The intent is to allow people to contribute code that uses Python's random
+ library, but still allow users to provide a single easily controlled random
+ bit-stream for all work with NetworkX. This implementation is based on helpful
+ comments and code from Robert Kern on NumPy's GitHub Issue #24458.
+
+ This implementation supercedes that of `PythonRandomInterface` which rewrote
+ methods to account for subtle differences in API between `random` and
+ `numpy.random`. Instead this subclasses `random.Random` and overwrites
+ the methods `random`, `getrandbits`, `getstate`, `setstate` and `seed`.
+ It makes them use the rng values from an input numpy `RandomState` or `Generator`.
+ Those few methods allow the rest of the `random.Random` methods to provide
+ the API interface of `random.random` whlie using randomness generated by
+ a numpy generator.
+ """
+
+ def __init__(self, rng=None):
+ try:
+ import numpy as np
+ except ImportError:
+ msg = "numpy not found, only random.random available."
+ warnings.warn(msg, ImportWarning)
+
+ if rng is None:
+ self._rng = np.random.mtrand._rand
+ else:
+ self._rng = rng
+
+ # Not necessary, given our overriding of gauss() below, but it's
+ # in the superclass and nominally public, so initialize it here.
+ self.gauss_next = None
+
+ def random(self):
+ """Get the next random number in the range 0.0 <= X < 1.0."""
+ return self._rng.random()
+
+ def getrandbits(self, k):
+ """getrandbits(k) -> x. Generates an int with k random bits."""
+ if k < 0:
+ raise ValueError("number of bits must be non-negative")
+ numbytes = (k + 7) // 8 # bits / 8 and rounded up
+ x = int.from_bytes(self._rng.bytes(numbytes), "big")
+ return x >> (numbytes * 8 - k) # trim excess bits
+
+ def getstate(self):
+ return self._rng.__getstate__()
+
+ def setstate(self, state):
+ self._rng.__setstate__(state)
+
+ def seed(self, *args, **kwds):
+ "Do nothing override method."
+
+
+##################################################################
class PythonRandomInterface:
+ """PythonRandomInterface is included for backward compatibility
+ New code should use PythonRandomViaNumpyBits instead.
+ """
+
def __init__(self, rng=None):
try:
import numpy as np
@@ -293,6 +356,12 @@ def uniform(self, a, b):
def randrange(self, a, b=None):
import numpy as np
+ if b is None:
+ a, b = 0, a
+ if b > 9223372036854775807: # from np.iinfo(np.int64).max
+ tmp_rng = PythonRandomViaNumpyBits(self._rng)
+ return tmp_rng.randrange(a, b)
+
if isinstance(self._rng, np.random.Generator):
return self._rng.integers(a, b)
return self._rng.randint(a, b)
@@ -323,6 +392,10 @@ def sample(self, seq, k):
def randint(self, a, b):
import numpy as np
+ if b > 9223372036854775807: # from np.iinfo(np.int64).max
+ tmp_rng = PythonRandomViaNumpyBits(self._rng)
+ return tmp_rng.randint(a, b)
+
if isinstance(self._rng, np.random.Generator):
return self._rng.integers(a, b + 1)
return self._rng.randint(a, b + 1)
@@ -357,32 +430,50 @@ def create_py_random_state(random_state=None):
if random.Random instance, return it.
if None or the `random` package, return the global random number
generator used by `random`.
- if np.random package, return the global numpy random number
- generator wrapped in a PythonRandomInterface class.
- if np.random.RandomState or np.random.Generator instance, return it
- wrapped in PythonRandomInterface
+ if an np.random.Generator instance, or the np.random package, or
+ the global numpy random number generator, then return it
+ wrapped in a PythonRandomViaNumpyBits class.
+ if a PythonRandomViaNumpyBits instance, return it
+
+ # Provided for backward bit-stream matching with legacy code
+ if a np.randomRandomState instance and not the global numpy default,
+ return it wrapped in PythonRandomInterface
if a PythonRandomInterface instance, return it
- """
- import random
-
- try:
- import numpy as np
-
- if random_state is np.random:
- return PythonRandomInterface(np.random.mtrand._rand)
- if isinstance(random_state, np.random.RandomState | np.random.Generator):
- return PythonRandomInterface(random_state)
- if isinstance(random_state, PythonRandomInterface):
- return random_state
- except ImportError:
- pass
+ Note: Conversion from older PythonRandomInterface to PythonRandomViaNumpyBits
+ is handled here to allow users of Legacy `numpy.random.RandomState` to exactly
+ match the legacy values produced. We assume that if a user cares about legacy
+ values, they are using a np.RandomState instance that is not the numpy default.
+ The default instance has state reset for each Python session. The Generator
+ class does not guarantee to maintain bit stream across versions. We wrap any
+ RandomState instance other than the default with `PythonRandomInterface`.
+ All other numpy random inputs are wrapped with `PythonRandomViaNumpyBits`.
+ """
if random_state is None or random_state is random:
return random._inst
if isinstance(random_state, random.Random):
return random_state
if isinstance(random_state, int):
return random.Random(random_state)
+
+ try:
+ import numpy as np
+ except ImportError:
+ pass
+ else:
+ if isinstance(random_state, PythonRandomInterface | PythonRandomViaNumpyBits):
+ return random_state
+ if isinstance(random_state, np.random.Generator):
+ return PythonRandomViaNumpyBits(random_state)
+ if random_state is np.random:
+ return PythonRandomViaNumpyBits(np.random.mtrand._rand)
+
+ if isinstance(random_state, np.random.RandomState):
+ if random_state is np.random.mtrand._rand:
+ return PythonRandomViaNumpyBits(random_state)
+ # Only need older interface if specially constructed RandomState used
+ return PythonRandomInterface(random_state)
+
msg = f"{random_state} cannot be used to generate a random.Random instance"
raise ValueError(msg)
| diff --git a/networkx/utils/tests/test_decorators.py b/networkx/utils/tests/test_decorators.py
--- a/networkx/utils/tests/test_decorators.py
+++ b/networkx/utils/tests/test_decorators.py
@@ -13,7 +13,7 @@
open_file,
py_random_state,
)
-from networkx.utils.misc import PythonRandomInterface
+from networkx.utils.misc import PythonRandomInterface, PythonRandomViaNumpyBits
def test_not_implemented_decorator():
@@ -212,17 +212,19 @@ def setup_class(cls):
@np_random_state(1)
def instantiate_np_random_state(self, random_state):
- assert isinstance(random_state, np.random.RandomState)
- return random_state.random_sample()
+ allowed = (np.random.RandomState, np.random.Generator)
+ assert isinstance(random_state, allowed)
+ return random_state.random()
@py_random_state(1)
def instantiate_py_random_state(self, random_state):
- assert isinstance(random_state, random.Random | PythonRandomInterface)
+ allowed = (random.Random, PythonRandomInterface, PythonRandomViaNumpyBits)
+ assert isinstance(random_state, allowed)
return random_state.random()
def test_random_state_None(self):
np.random.seed(42)
- rv = np.random.random_sample()
+ rv = np.random.random()
np.random.seed(42)
assert rv == self.instantiate_np_random_state(None)
@@ -233,7 +235,7 @@ def test_random_state_None(self):
def test_random_state_np_random(self):
np.random.seed(42)
- rv = np.random.random_sample()
+ rv = np.random.random()
np.random.seed(42)
assert rv == self.instantiate_np_random_state(np.random)
np.random.seed(42)
@@ -241,7 +243,7 @@ def test_random_state_np_random(self):
def test_random_state_int(self):
np.random.seed(42)
- np_rv = np.random.random_sample()
+ np_rv = np.random.random()
random.seed(42)
py_rv = random.random()
@@ -249,39 +251,56 @@ def test_random_state_int(self):
seed = 1
rval = self.instantiate_np_random_state(seed)
rval_expected = np.random.RandomState(seed).rand()
- assert rval, rval_expected
+ assert rval == rval_expected
# test that global seed wasn't changed in function
- assert np_rv == np.random.random_sample()
+ assert np_rv == np.random.random()
random.seed(42)
rval = self.instantiate_py_random_state(seed)
rval_expected = random.Random(seed).random()
- assert rval, rval_expected
+ assert rval == rval_expected
# test that global seed wasn't changed in function
assert py_rv == random.random()
- def test_random_state_np_random_RandomState(self):
+ def test_random_state_np_random_Generator(self):
np.random.seed(42)
- np_rv = np.random.random_sample()
+ np_rv = np.random.random()
+ np.random.seed(42)
+ seed = 1
+
+ rng = np.random.default_rng(seed)
+ rval = self.instantiate_np_random_state(rng)
+ rval_expected = np.random.default_rng(seed).random()
+ assert rval == rval_expected
+
+ rval = self.instantiate_py_random_state(rng)
+ rval_expected = np.random.default_rng(seed).random(size=2)[1]
+ assert rval == rval_expected
+ # test that global seed wasn't changed in function
+ assert np_rv == np.random.random()
+ def test_random_state_np_random_RandomState(self):
+ np.random.seed(42)
+ np_rv = np.random.random()
np.random.seed(42)
seed = 1
+
rng = np.random.RandomState(seed)
- rval = self.instantiate_np_random_state(seed)
- rval_expected = np.random.RandomState(seed).rand()
- assert rval, rval_expected
+ rval = self.instantiate_np_random_state(rng)
+ rval_expected = np.random.RandomState(seed).random()
+ assert rval == rval_expected
- rval = self.instantiate_py_random_state(seed)
- rval_expected = np.random.RandomState(seed).rand()
- assert rval, rval_expected
+ rval = self.instantiate_py_random_state(rng)
+ rval_expected = np.random.RandomState(seed).random(size=2)[1]
+ assert rval == rval_expected
# test that global seed wasn't changed in function
- assert np_rv == np.random.random_sample()
+ assert np_rv == np.random.random()
def test_random_state_py_random(self):
seed = 1
rng = random.Random(seed)
rv = self.instantiate_py_random_state(rng)
- assert rv, random.Random(seed).random()
+ assert rv == random.Random(seed).random()
pytest.raises(ValueError, self.instantiate_np_random_state, rng)
diff --git a/networkx/utils/tests/test_misc.py b/networkx/utils/tests/test_misc.py
--- a/networkx/utils/tests/test_misc.py
+++ b/networkx/utils/tests/test_misc.py
@@ -6,6 +6,7 @@
import networkx as nx
from networkx.utils import (
PythonRandomInterface,
+ PythonRandomViaNumpyBits,
arbitrary_element,
create_py_random_state,
create_random_state,
@@ -184,21 +185,31 @@ def test_create_py_random_state():
rs = np.random.RandomState
rng = np.random.default_rng(1000)
rng_explicit = np.random.Generator(np.random.SFC64())
- nprs = PythonRandomInterface
+ old_nprs = PythonRandomInterface
+ nprs = PythonRandomViaNumpyBits
assert isinstance(create_py_random_state(np.random), nprs)
- assert isinstance(create_py_random_state(rs(1)), nprs)
+ assert isinstance(create_py_random_state(rs(1)), old_nprs)
assert isinstance(create_py_random_state(rng), nprs)
assert isinstance(create_py_random_state(rng_explicit), nprs)
# test default rng input
- assert isinstance(PythonRandomInterface(), nprs)
+ assert isinstance(PythonRandomInterface(), old_nprs)
+ assert isinstance(PythonRandomViaNumpyBits(), nprs)
+
+ # VeryLargeIntegers Smoke test (they raise error for np.random)
+ int64max = 9223372036854775807 # from np.iinfo(np.int64).max
+ for r in (rng, rs(1)):
+ prs = create_py_random_state(r)
+ prs.randrange(3, int64max + 5)
+ prs.randint(3, int64max + 5)
def test_PythonRandomInterface_RandomState():
np = pytest.importorskip("numpy")
+ seed = 42
rs = np.random.RandomState
- rng = PythonRandomInterface(rs(42))
- rs42 = rs(42)
+ rng = PythonRandomInterface(rs(seed))
+ rs42 = rs(seed)
# make sure these functions are same as expected outcome
assert rng.randrange(3, 5) == rs42.randint(3, 5)
@@ -219,8 +230,9 @@ def test_PythonRandomInterface_RandomState():
def test_PythonRandomInterface_Generator():
np = pytest.importorskip("numpy")
- rng = np.random.default_rng(42)
- pri = PythonRandomInterface(np.random.default_rng(42))
+ seed = 42
+ rng = np.random.default_rng(seed)
+ pri = PythonRandomInterface(np.random.default_rng(seed))
# make sure these functions are same as expected outcome
assert pri.randrange(3, 5) == rng.integers(3, 5)
| Heads up.... numpy random integer generators can't handle large integers
It looks like numpy random number generators can't handle large integers (large being > `2**64`, i.e. not held in `int64`). They raise an error saying that the values can't be held in `int64` (which of course is correct). And if you try to specify that they use python integers (which don't have that limit) by using `dtype=np.object_`, it raises stating that np.object_ is not of integer type.
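A quick illustration of the failures described (hypothetical session; exact exception types and messages vary by NumPy version):
```python
import numpy as np

rng = np.random.default_rng()
rng.integers(0, 20**20)                # raises: high is out of bounds for int64
rng.integers(0, 20**20, dtype=object)  # raises: object is not an integer dtype
```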
We probably use large integers more than most scientific libraries (numbers like 20**20 arise quickly, e.g., Cayley's formula gives n**(n-2) labeled trees on n nodes). So, we may run into this more often than some.
Workaround: instead of `seed = np.random.default_rng(); seed.integers(0, 20**20)`, we could convert it to a floating point computation and discretize later:
```python
import numpy as np

seed = np.random.default_rng()

def randint(a, b, seed):
    return a + int(seed.random() * (b - a))

randint(0, 20**20, seed)
```
This could/should be added to the `utils.misc.py` class `PythonRandomInterface` method `randint`.
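For reference, the approach eventually adopted in the patch above (`PythonRandomViaNumpyBits.getrandbits`) builds arbitrarily large Python ints from raw bytes, which also avoids the precision limit of a single 53-bit float:
```python
import numpy as np

def getrandbits(rng, k):
    numbytes = (k + 7) // 8                         # bits -> bytes, rounded up
    x = int.from_bytes(rng.bytes(numbytes), "big")  # big int from raw bytes
    return x >> (numbytes * 8 - k)                  # trim excess bits

rng = np.random.default_rng()
getrandbits(rng, 70)  # a uniform 70-bit integer, well beyond int64
```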
### Numpy fix
Since numpy is creating the value, it seems like they don't need to raise an error for `dtype=np.object_` in `np.random.Generator.integers`. It could be made an integer Python object and be perfectly good output.
We could/should look into making a PR to numpy to allow `integers` to return Python integers when `dtype=object_`.
I think this is the first time we've noticed a feature of `random` that isn't provided by `numpy.random`, but there may be others.
| I created an issue on numpy (numpy/numpy#24458) and a thread on the numpy discussion list.
I think it looks straight-forward to add support for dtype('O') in the `integers` function. But straight-forward does not always mean easy. :}
The example where this arose is the `_select_k` and `_select_jd_trees` parts of #6758 | 2023-08-22T01:18:03 |
networkx/networkx | 6,892 | networkx__networkx-6892 | [
"6874"
] | 9075583cdcd74a98de3a7b715e1bf220f749bfbc | diff --git a/networkx/algorithms/shortest_paths/weighted.py b/networkx/algorithms/shortest_paths/weighted.py
--- a/networkx/algorithms/shortest_paths/weighted.py
+++ b/networkx/algorithms/shortest_paths/weighted.py
@@ -1973,6 +1973,10 @@ def goldberg_radzik(G, source, weight="weight"):
negative (di)cycle. Note: any negative weight edge in an
undirected graph is a negative cycle.
+ As of NetworkX v3.2, a zero weight cycle is no longer
+ incorrectly reported as a negative weight cycle.
+
+
Examples
--------
>>> G = nx.path_graph(5, create_using=nx.DiGraph())
@@ -2057,7 +2061,7 @@ def topo_sort(relabeled):
continue
t = d[u] + weight(u, v, e)
d_v = d[v]
- if t <= d_v:
+ if t < d_v:
is_neg = t < d_v
d[v] = t
pred[v] = u
| diff --git a/networkx/algorithms/shortest_paths/tests/test_weighted.py b/networkx/algorithms/shortest_paths/tests/test_weighted.py
--- a/networkx/algorithms/shortest_paths/tests/test_weighted.py
+++ b/networkx/algorithms/shortest_paths/tests/test_weighted.py
@@ -596,6 +596,20 @@ def test_negative_cycle(self):
)
pytest.raises(nx.NetworkXUnbounded, nx.goldberg_radzik, G, 1)
+ def test_zero_cycle(self):
+ G = nx.cycle_graph(5, create_using=nx.DiGraph())
+ G.add_edge(2, 3, weight=-4)
+ # check that zero cycle doesnt raise
+ nx.goldberg_radzik(G, 1)
+ nx.bellman_ford_predecessor_and_distance(G, 1)
+
+ G.add_edge(2, 3, weight=-4.0001)
+ # check that negative cycle does raise
+ pytest.raises(
+ nx.NetworkXUnbounded, nx.bellman_ford_predecessor_and_distance, G, 1
+ )
+ pytest.raises(nx.NetworkXUnbounded, nx.goldberg_radzik, G, 1)
+
def test_find_negative_cycle_longer_cycle(self):
G = nx.cycle_graph(5, create_using=nx.DiGraph())
nx.add_cycle(G, [3, 5, 6, 7, 8, 9])
| Inconsistency in Negative Cycle Detection Between Bellman-Ford and Goldberg-Radzik Algorithms
### Description:
When attempting to compute the shortest path length between two nodes in a graph using the Bellman-Ford and Goldberg-Radzik algorithms, inconsistent results regarding the presence of a negative cycle are obtained. Specifically, the Goldberg-Radzik algorithm suggests the presence of a negative cycle and fails to compute the path length, while the Bellman-Ford algorithm successfully computes the shortest path without detecting any negative cycles.
### Observed Behavior:
Shortest path length between 6 and 0 using Bellman-Ford: 11 \
Goldberg_Radzik: A negative cycle exists in the graph. \
Shortest path length between 6 and 0 using Goldberg_Radzik: None
### Expected Behavior:
The results from both algorithms should be consistent. Either both algorithms should detect a negative cycle and fail to compute the path, or both should successfully compute the shortest path without detecting any negative cycles.
### Steps to Reproduce:
1. Create a directed graph with the following nodes and edges:
- Nodes: [0, 1, 2, 3, 4, 5, 6, 7]
- Edges: (Provided in the code snippet below)
2. Attempt to compute the shortest path length between nodes `6` and `0` using both Bellman-Ford and Goldberg-Radzik algorithms.
### Environment:
Python version: 3.8.10 \
NetworkX version: 3.1
```python
# Python code to reproduce the issue
import networkx as nx
import matplotlib.pyplot as plt
class ManualGraphLoader:
    def __init__(self):
        self.G = self.create_graph_from_data()

    def create_graph_from_data(self):
        G = nx.DiGraph()
        # Add nodes
        for i in range(8):
            G.add_node(i)
        # Add edges with weights
        edges_data = [
            (0, 5, 12), (0, 7, 1), (0, 4, 6), (1, 0, -1), (2, 3, 5), (2, 7, -1), (4, 0, 12),
            (5, 4, 15), (5, 3, -16), (6, 3, 4), (6, 5, -4), (6, 4, -1), (7, 5, -8), (7, 0, 9),
            (7, 4, 7), (7, 2, 1)
        ]
        for src, tgt, weight in edges_data:
            G.add_edge(src, tgt, weight=weight)
        return G

    def compute_shortest_path_length_bellman_ford(self, source, target):
        """Computes shortest path length between source and target using Bellman-Ford algorithm."""
        try:
            return nx.bellman_ford_path_length(self.G, source=source, target=target, weight='weight')
        except nx.NetworkXNoPath:
            print(f"Bellman-Ford: No path exists between {source} and {target}.")
            return None
        except nx.NetworkXUnbounded:
            print(f"Bellman-Ford: A negative cycle exists in the graph.")
            return None

    def compute_shortest_path_length_goldberg_radzik(self, source, target):
        """Computes the shortest path length between source and target using the Goldberg-Radzik algorithm."""
        try:
            _, distance_map = nx.goldberg_radzik(self.G, source=source, weight='weight')
            return distance_map.get(target, None)
        except nx.NetworkXNoPath:
            print(f"Goldberg_Radzik: No path exists between {source} and {target}.")
            return None
        except nx.NetworkXUnbounded:
            print(f"Goldberg_Radzik: A negative cycle exists in the graph.")
            return None

    def get_graph(self):
        """Returns the manually created graph."""
        return self.G


if __name__ == "__main__":
    graph_loader = ManualGraphLoader()
    G = graph_loader.get_graph()
    source, target = 6, 0

    shortest_path_length = graph_loader.compute_shortest_path_length_bellman_ford(source, target)
    print(f"Shortest path length between {source} and {target} using Bellman-Ford: {shortest_path_length}")

    shortest_path_length = graph_loader.compute_shortest_path_length_goldberg_radzik(source, target)
    print(f"Shortest path length between {source} and {target} using Goldberg_Radzik: {shortest_path_length}")

    try:
        cycle = nx.negative_edge_cycle(G, weight='weight')
        if cycle:
            print("The graph has a negative weight cycle.")
        else:
            print("The graph does not have a negative weight cycle.")
    except nx.NetworkXUnbounded:
        print("The graph has a negative cycle.")
| I'd appreciate any feedback on this issue. If more information or clarifications are needed, please let me know.
This is puzzling -- (so perhaps a bug). There is a cycle with weight zero.
`list(nx.simple_cycles(G)` reports `[[0, 4], [0, 7, 4], [0, 7], [0, 7, 5, 4], [0, 5, 4], [2, 7]]`
Manually checking these shows no negative cycles, but the `[2,7]` cycle has weight 0.
I suspect that is the trouble.
Perhaps goldberg radzik is checking for non-positive cycles rather than negative cycles while bellman ford is looking for negative cycles....
But I haven't found the difference yet -- reporting early in case this helps you. | 2023-08-31T17:28:24 |
networkx/networkx | 6,894 | networkx__networkx-6894 | [
"6886"
] | a4748e58ced50ee3db200cf5f77a2dcc9ef859c7 | diff --git a/networkx/algorithms/centrality/percolation.py b/networkx/algorithms/centrality/percolation.py
--- a/networkx/algorithms/centrality/percolation.py
+++ b/networkx/algorithms/centrality/percolation.py
@@ -38,7 +38,10 @@ def percolation_centrality(G, attribute="percolation", states=None, weight=None)
attribute : None or string, optional (default='percolation')
Name of the node attribute to use for percolation state, used
- if `states` is None.
+ if `states` is None. If a node does not set the attribute the
+ state of that node will be set to the default value of 1.
+ If all nodes do not have the attribute all nodes will be set to
+ 1 and the centrality measure will be equivalent to betweenness centrality.
states : None or dict, optional (default=None)
Specify percolation states for the nodes, nodes as keys states
@@ -85,7 +88,7 @@ def percolation_centrality(G, attribute="percolation", states=None, weight=None)
nodes = G
if states is None:
- states = nx.get_node_attributes(nodes, attribute)
+ states = nx.get_node_attributes(nodes, attribute, default=1)
# sum of all percolation states
p_sigma_x_t = 0.0
| diff --git a/networkx/algorithms/centrality/tests/test_percolation_centrality.py b/networkx/algorithms/centrality/tests/test_percolation_centrality.py
--- a/networkx/algorithms/centrality/tests/test_percolation_centrality.py
+++ b/networkx/algorithms/centrality/tests/test_percolation_centrality.py
@@ -31,52 +31,57 @@ def example1b_G():
return G
-class TestPercolationCentrality:
- def test_percolation_example1a(self):
- """percolation centrality: example 1a"""
- G = example1a_G()
- p = nx.percolation_centrality(G)
- p_answer = {4: 0.625, 6: 0.667}
- for n, k in p_answer.items():
- assert p[n] == pytest.approx(k, abs=1e-3)
+def test_percolation_example1a():
+ """percolation centrality: example 1a"""
+ G = example1a_G()
+ p = nx.percolation_centrality(G)
+ p_answer = {4: 0.625, 6: 0.667}
+ for n, k in p_answer.items():
+ assert p[n] == pytest.approx(k, abs=1e-3)
- def test_percolation_example1b(self):
- """percolation centrality: example 1a"""
- G = example1b_G()
- p = nx.percolation_centrality(G)
- p_answer = {4: 0.825, 6: 0.4}
- for n, k in p_answer.items():
- assert p[n] == pytest.approx(k, abs=1e-3)
- def test_converge_to_betweenness(self):
- """percolation centrality: should converge to betweenness
- centrality when all nodes are percolated the same"""
- # taken from betweenness test test_florentine_families_graph
- G = nx.florentine_families_graph()
- b_answer = {
- "Acciaiuoli": 0.000,
- "Albizzi": 0.212,
- "Barbadori": 0.093,
- "Bischeri": 0.104,
- "Castellani": 0.055,
- "Ginori": 0.000,
- "Guadagni": 0.255,
- "Lamberteschi": 0.000,
- "Medici": 0.522,
- "Pazzi": 0.000,
- "Peruzzi": 0.022,
- "Ridolfi": 0.114,
- "Salviati": 0.143,
- "Strozzi": 0.103,
- "Tornabuoni": 0.092,
- }
+def test_percolation_example1b():
+ """percolation centrality: example 1a"""
+ G = example1b_G()
+ p = nx.percolation_centrality(G)
+ p_answer = {4: 0.825, 6: 0.4}
+ for n, k in p_answer.items():
+ assert p[n] == pytest.approx(k, abs=1e-3)
- p_states = {k: 1.0 for k, v in b_answer.items()}
- p_answer = nx.percolation_centrality(G, states=p_states)
- for n in sorted(G):
- assert p_answer[n] == pytest.approx(b_answer[n], abs=1e-3)
- p_states = {k: 0.3 for k, v in b_answer.items()}
- p_answer = nx.percolation_centrality(G, states=p_states)
- for n in sorted(G):
- assert p_answer[n] == pytest.approx(b_answer[n], abs=1e-3)
+def test_converge_to_betweenness():
+ """percolation centrality: should converge to betweenness
+ centrality when all nodes are percolated the same"""
+ # taken from betweenness test test_florentine_families_graph
+ G = nx.florentine_families_graph()
+ b_answer = {
+ "Acciaiuoli": 0.000,
+ "Albizzi": 0.212,
+ "Barbadori": 0.093,
+ "Bischeri": 0.104,
+ "Castellani": 0.055,
+ "Ginori": 0.000,
+ "Guadagni": 0.255,
+ "Lamberteschi": 0.000,
+ "Medici": 0.522,
+ "Pazzi": 0.000,
+ "Peruzzi": 0.022,
+ "Ridolfi": 0.114,
+ "Salviati": 0.143,
+ "Strozzi": 0.103,
+ "Tornabuoni": 0.092,
+ }
+
+ # If no initial state is provided, state for
+ # every node defaults to 1
+ p_answer = nx.percolation_centrality(G)
+ assert p_answer == pytest.approx(b_answer, abs=1e-3)
+
+ p_states = {k: 0.3 for k, v in b_answer.items()}
+ p_answer = nx.percolation_centrality(G, states=p_states)
+ assert p_answer == pytest.approx(b_answer, abs=1e-3)
+
+
+def test_default_percolation():
+ G = nx.erdos_renyi_graph(42, 0.42, seed=42)
+ assert nx.percolation_centrality(G) == pytest.approx(nx.betweenness_centrality(G))
| Unexpected Crash in "nx.percolation_centrality" caused by KeyError: 0
Hello! Sorry for bothering you again.
I noticed that `nx.percolation_centrality` will sometimes crash with a `KeyError: 0` exception.
Since this exception is not one of the documented NetworkX exceptions, I believe it may be related to an unexpected-exception issue.
Would it be possible for you to confirm and investigate this?
Best regards,
Joye
### Steps to Reproduce
Please run the following Python code:
```Python
import networkx as nx
L = [(0, 1101, 0), (0, 597, 0), (2, 986, 0), (4, 829, 0), (7, 885, 0), (9, 587, 0), (10, 218, 0), (11, 5, 0), (19, 498, 0), (24, 591, 0), (26, 131, 0), (45, 344, 0), (53, 152, 0), (55, 736, 0), (57, 9, 0), (59, 79, 0), (64, 1095, 0), (78, 167, 0), (78, 578, 0), (85, 365, 0), (93, 451, 0), (96, 1088, 0), (97, 107, 0), (105, 568, 0), (107, 801, 0), (109, 1109, 0), (110, 411, 0), (112, 532, 0), (113, 815, 0), (114, 329, 0), (121, 114, 0), (123, 1057, 0), (124, 333, 0), (124, 471, 0), (124, 690, 0), (125, 932, 0), (141, 916, 0), (149, 589, 0), (152, 1011, 0), (152, 753, 0), (153, 850, 0), (156, 65, 0), (159, 469, 0), (173, 272, 0), (177, 557, 0), (182, 629, 0), (186, 594, 0), (188, 675, 0), (193, 664, 0), (194, 215, 0), (200, 966, 0), (206, 366, 0), (220, 176, 0), (223, 414, 0), (224, 657, 0), (224, 378, 0), (227, 1192, 0), (232, 809, 0), (233, 322, 0), (239, 964, 0), (244, 515, 0), (246, 457, 0), (252, 1085, 0), (260, 674, 0), (263, 231, 0), (265, 86, 0), (275, 661, 0), (276, 161, 0), (279, 1012, 0), (282, 995, 0), (295, 366, 0), (306, 582, 0), (307, 341, 0), (308, 501, 0), (308, 1189, 0), (314, 1085, 0), (315, 236, 0), (317, 1165, 0), (332, 436, 0), (333, 668, 0), (337, 143, 0), (339, 409, 0), (342, 1206, 0), (345, 1101, 0), (348, 1192, 0), (353, 533, 0), (358, 1015, 0), (375, 38, 0), (378, 712, 0), (378, 387, 0), (378, 279, 0), (381, 422, 0), (390, 760, 0), (399, 73, 0), (399, 859, 0), (399, 724, 0), (401, 311, 0), (402, 906, 0), (408, 568, 0), (409, 909, 0), (413, 39, 0), (419, 61, 0), (420, 454, 0), (424, 1158, 0), (426, 167, 0), (430, 839, 0), (430, 120, 0), (432, 336, 0), (432, 67, 0), (435, 88, 0), (436, 303, 0), (437, 317, 0), (445, 707, 0), (447, 915, 0), (450, 313, 0), (450, 407, 0), (456, 586, 0), (457, 833, 0), (461, 1130, 0), (462, 128, 0), (464, 283, 0), (464, 1183, 0), (469, 477, 0), (469, 597, 0), (472, 356, 0), (477, 88, 0), (480, 437, 0), (481, 953, 0), (483, 1024, 0), (483, 1065, 0), (485, 449, 0), (488, 791, 0), (492, 79, 0), (494, 646, 0), (498, 1002, 0), (499, 197, 0), (504, 1120, 0), (505, 13, 0), (512, 361, 0), (513, 1189, 0), (513, 534, 0), (517, 59, 0), (522, 706, 0), (529, 392, 0), (531, 247, 0), (531, 749, 0), (535, 833, 0), (537, 3, 0), (537, 943, 0), (540, 308, 0), (546, 369, 0), (553, 191, 0), (554, 426, 0), (561, 662, 0), (564, 602, 0), (568, 1003, 0), (570, 605, 0), (576, 1125, 0), (580, 81, 0), (591, 1228, 0), (607, 270, 0), (608, 120, 0), (609, 854, 0), (615, 1125, 0), (623, 891, 0), (625, 1217, 0), (628, 278, 0), (628, 1160, 0), (633, 981, 0), (638, 912, 0), (640, 315, 0), (645, 422, 0), (647, 467, 0), (650, 786, 0), (653, 323, 0), (658, 333, 0), (659, 1024, 0), (661, 1163, 0), (662, 810, 0), (666, 761, 0), (666, 41, 0), (669, 1167, 0), (670, 1206, 0), (673, 722, 0), (676, 766, 0), (692, 968, 0), (694, 1157, 0), (695, 561, 0), (704, 428, 0), (704, 815, 0), (706, 360, 0), (719, 1030, 0), (720, 1086, 0), (723, 368, 0), (724, 704, 0), (737, 928, 0), (751, 99, 0), (751, 651, 0), (757, 938, 0), (765, 617, 0), (767, 1236, 0), (768, 264, 0), (774, 24, 0), (777, 34, 0), (777, 99, 0), (779, 1018, 0), (790, 550, 0), (790, 244, 0), (793, 1035, 0), (801, 253, 0), (807, 843, 0), (809, 393, 0), (810, 563, 0), (811, 902, 0), (813, 486, 0), (816, 954, 0), (817, 29, 0), (822, 56, 0), (824, 94, 0), (834, 1040, 0), (835, 834, 0), (845, 50, 0), (863, 1136, 0), (871, 993, 0), (877, 212, 0), (880, 864, 0), (885, 1019, 0), (890, 228, 0), (899, 963, 0), (907, 949, 0), (907, 212, 0), (910, 144, 0), (916, 992, 0), (920, 398, 0), (922, 671, 0), (922, 131, 0), (924, 178, 0), (925, 
562, 0), (925, 302, 0), (935, 609, 0), (935, 150, 0), (936, 60, 0), (939, 672, 0), (949, 140, 0), (950, 957, 0), (960, 761, 0), (963, 184, 0), (967, 1092, 0), (982, 1176, 0), (986, 891, 0), (989, 431, 0), (992, 858, 0), (996, 475, 0), (996, 32, 0), (1002, 234, 0), (1006, 278, 0), (1009, 968, 0), (1021, 1117, 0), (1021, 1006, 0), (1026, 996, 0), (1034, 699, 0), (1035, 387, 0), (1036, 1120, 0), (1042, 1235, 0), (1049, 413, 0), (1053, 881, 0), (1056, 1018, 0), (1078, 1131, 0), (1080, 596, 0), (1087, 711, 0), (1090, 294, 0), (1096, 939, 0), (1097, 1087, 0), (1099, 1127, 0), (1114, 858, 0), (1119, 59, 0), (1126, 702, 0), (1128, 494, 0), (1134, 318, 0), (1135, 666, 0), (1135, 301, 0), (1142, 1007, 0), (1145, 877, 0), (1148, 37, 0), (1152, 606, 0), (1153, 182, 0), (1153, 546, 0), (1157, 488, 0), (1160, 1120, 0), (1163, 1170, 0), (1166, 1077, 0), (1168, 905, 0), (1169, 1014, 0), (1173, 957, 0), (1174, 849, 0), (1179, 194, 0), (1180, 180, 0), (1183, 206, 0), (1185, 1072, 0), (1186, 639, 0), (1194, 230, 0), (1195, 654, 0), (1195, 714, 0), (1202, 517, 0), (1204, 1072, 0), (1206, 529, 0), (1212, 946, 0), (1214, 621, 0), (1225, 221, 0), (1227, 338, 0), (1229, 1092, 0), (1232, 547, 0), (1233, 896, 0), (1236, 793, 0)]
edge_list = [(x[0], x[1]) for x in L]
# build the graph from the edge data
G = nx.from_edgelist(edge_list)
nx.percolation_centrality(G)
```
#### Results:
```
File "/home/qiuyang/.local/lib/python3.10/site-packages/networkx/algorithms/centrality/percolation.py", line 122, in _accumulate_percolation
pw_s_w = states[s] / (p_sigma_x_t - states[w])
KeyError: 0
```
### Environments:
NetworkX: 3.1
Python: 3.10
| Thanks for this @joyemang33!
`percolation_centrality` indeed only works if all the nodes have the right node attribute (default is `percolation`). We should catch this and have a better default behavior. Maybe we can just set the same percolation state for all the nodes by default (in which case I think this just becomes betweenness centrality).
In your example if you set the node attribute `percolation` to `1.0` before calling `percolation_centrality`
```python
nx.set_node_attributes(G, 1.0, 'percolation')
```
it should work (but this is just betweenness centrality)
Or we should raise an error if the user doesn't provide either a starting `state` or an explicit node attribute?
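For reference, the default adopted in the patch above (missing states fall back to `1`, so the result coincides with betweenness centrality) is exactly what the new regression test checks:
```python
import networkx as nx
import pytest

G = nx.erdos_renyi_graph(42, 0.42, seed=42)
assert nx.percolation_centrality(G) == pytest.approx(nx.betweenness_centrality(G))
```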
Hi @MridulS! Thanks for your quick response. It looks like setting a default value will be good 😉 | 2023-08-31T19:25:36 |
networkx/networkx | 6,908 | networkx__networkx-6908 | [
"6906"
] | 88097f7d7f798ec49eb868691dde77cf791a67ec | diff --git a/networkx/algorithms/tree/operations.py b/networkx/algorithms/tree/operations.py
--- a/networkx/algorithms/tree/operations.py
+++ b/networkx/algorithms/tree/operations.py
@@ -4,13 +4,42 @@
import networkx as nx
-__all__ = ["join"]
+__all__ = ["join", "join_trees"]
def join(rooted_trees, label_attribute=None):
- """Returns a new rooted tree with a root node joined with the roots
+ """A deprecated name for `join_trees`
+
+ Returns a new rooted tree with a root node joined with the roots
of each of the given rooted trees.
+ .. deprecated:: 3.2
+
+ `join` is deprecated in NetworkX v3.2 and will be removed in v3.4.
+ It has been renamed join_trees with the same syntax/interface.
+
+ """
+ import warnings
+
+ warnings.warn(
+ "The function `join` is deprecated and is renamed `join_trees`.\n"
+ "The ``join`` function itself will be removed in v3.4",
+ DeprecationWarning,
+ stacklevel=2,
+ )
+
+ return join_trees(rooted_trees, label_attribute=label_attribute)
+
+
+def join_trees(rooted_trees, label_attribute=None):
+ """Returns a new rooted tree made by joining `rooted_trees`
+
+ Constructs a new tree by joining each tree in `rooted_trees`.
+ A new root node is added and connected to each of the roots
+ of the input trees. While copying the nodes from the trees,
+ relabeling to integers occurs and the old name stored as an
+ attribute of the new node in the returned graph.
+
Parameters
----------
rooted_trees : list
@@ -35,6 +64,10 @@ def join(rooted_trees, label_attribute=None):
Notes
-----
+ Trees are stored in NetworkX as NetworkX Graphs. There is no specific
+ enforcement of the fact that these are trees. Testing for each tree
+ can be done using :func:`networkx.is_tree`.
+
Graph, edge, and node attributes are propagated from the given
rooted trees to the created tree. If there are any overlapping graph
attributes, those from later trees will overwrite those from earlier
| diff --git a/networkx/algorithms/tree/tests/test_operations.py b/networkx/algorithms/tree/tests/test_operations.py
--- a/networkx/algorithms/tree/tests/test_operations.py
+++ b/networkx/algorithms/tree/tests/test_operations.py
@@ -1,7 +1,3 @@
-"""Unit tests for the :mod:`networkx.algorithms.tree.operations` module.
-
-"""
-
from itertools import chain
import networkx as nx
@@ -16,36 +12,29 @@ def _check_label_attribute(input_trees, res_tree, label_attribute="_old"):
return res_attr_set == input_label_set
-class TestJoin:
- """Unit tests for the :func:`networkx.tree.join` function."""
-
- def test_empty_sequence(self):
- """Tests that joining the empty sequence results in the tree
- with one node.
-
- """
- T = nx.join([])
- assert len(T) == 1
- assert T.number_of_edges() == 0
-
- def test_single(self):
- """Tests that joining just one tree yields a tree with one more
- node.
-
- """
- T = nx.empty_graph(1)
- trees = [(T, 0)]
- actual = nx.join(trees)
- expected = nx.path_graph(2)
- assert nodes_equal(list(expected), list(actual))
- assert edges_equal(list(expected.edges()), list(actual.edges()))
- assert _check_label_attribute(trees, actual)
-
- def test_basic(self):
- """Tests for joining multiple subtrees at a root node."""
- trees = [(nx.full_rary_tree(2, 2**2 - 1), 0) for i in range(2)]
- label_attribute = "old_values"
- actual = nx.join(trees, label_attribute)
- expected = nx.full_rary_tree(2, 2**3 - 1)
- assert nx.is_isomorphic(actual, expected)
- assert _check_label_attribute(trees, actual, label_attribute)
+def test_empty_sequence():
+ """Joining the empty sequence results in the tree with one node."""
+ T = nx.join_trees([])
+ assert len(T) == 1
+ assert T.number_of_edges() == 0
+
+
+def test_single():
+ """Joining just one tree yields a tree with one more node."""
+ T = nx.empty_graph(1)
+ trees = [(T, 0)]
+ actual = nx.join_trees(trees)
+ expected = nx.path_graph(2)
+ assert nodes_equal(list(expected), list(actual))
+ assert edges_equal(list(expected.edges()), list(actual.edges()))
+ assert _check_label_attribute(trees, actual)
+
+
+def test_basic():
+ """Joining multiple subtrees at a root node."""
+ trees = [(nx.full_rary_tree(2, 2**2 - 1), 0) for i in range(2)]
+ label_attribute = "old_values"
+ actual = nx.join_trees(trees, label_attribute)
+ expected = nx.full_rary_tree(2, 2**3 - 1)
+ assert nx.is_isomorphic(actual, expected)
+ assert _check_label_attribute(trees, actual, label_attribute)
| Rename nx.join to something which gives more context about operations on trees
While reviewing #6503 [@dschult noted](https://github.com/networkx/networkx/pull/6503#issuecomment-1707200999) that the current function name `nx.join` doesn't give enough information to the user. The function is used to "join" trees.
This will require a deprecation cycle.
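For reference, a minimal sketch of what the renamed call looks like, mirroring the updated tests (the tree contents here are just an illustration):

```python
import networkx as nx

# two small trees, each rooted at node 0, joined under a brand-new root
trees = [(nx.path_graph(2), 0), (nx.path_graph(3), 0)]
joined = nx.join_trees(trees, label_attribute="old")
print(nx.is_tree(joined))  # True; original labels end up in the "old" node attribute
```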
| 2023-09-06T20:15:05 |
|
networkx/networkx | 6,957 | networkx__networkx-6957 | [
"6897"
] | ba11717e40ac4466f322f1c448eb70a5914c8b6e | diff --git a/networkx/algorithms/components/strongly_connected.py b/networkx/algorithms/components/strongly_connected.py
--- a/networkx/algorithms/components/strongly_connected.py
+++ b/networkx/algorithms/components/strongly_connected.py
@@ -178,6 +178,11 @@ def kosaraju_strongly_connected_components(G, source=None):
def strongly_connected_components_recursive(G):
"""Generate nodes in strongly connected components of graph.
+ .. deprecated:: 3.2
+
+ This function is deprecated and will be removed in a future version of
+ NetworkX. Use `strongly_connected_components` instead.
+
Recursive version of algorithm.
Parameters
@@ -236,35 +241,18 @@ def strongly_connected_components_recursive(G):
Information Processing Letters 49(1): 9-14, (1994)..
"""
+ import warnings
+
+ warnings.warn(
+ (
+ "\n\nstrongly_connected_components_recursive is deprecated and will be\n"
+ "removed in the future. Use strongly_connected_components instead."
+ ),
+ category=DeprecationWarning,
+ stacklevel=2,
+ )
- def visit(v, cnt):
- root[v] = cnt
- visited[v] = cnt
- cnt += 1
- stack.append(v)
- for w in G[v]:
- if w not in visited:
- yield from visit(w, cnt)
- if w not in component:
- root[v] = min(root[v], root[w])
- if root[v] == visited[v]:
- component[v] = root[v]
- tmpc = {v} # hold nodes in this component
- while stack[-1] != v:
- w = stack.pop()
- component[w] = root[v]
- tmpc.add(w)
- stack.remove(v)
- yield tmpc
-
- visited = {}
- component = {}
- root = {}
- cnt = 0
- stack = []
- for source in G:
- if source not in visited:
- yield from visit(source, cnt)
+ yield from strongly_connected_components(G)
@not_implemented_for("undirected")
diff --git a/networkx/conftest.py b/networkx/conftest.py
--- a/networkx/conftest.py
+++ b/networkx/conftest.py
@@ -117,6 +117,11 @@ def set_warnings():
warnings.filterwarnings(
"ignore", category=DeprecationWarning, message="function `join` is deprecated"
)
+ warnings.filterwarnings(
+ "ignore",
+ category=DeprecationWarning,
+ message="\n\nstrongly_connected_components_recursive",
+ )
@pytest.fixture(autouse=True)
| diff --git a/networkx/algorithms/components/tests/test_strongly_connected.py b/networkx/algorithms/components/tests/test_strongly_connected.py
--- a/networkx/algorithms/components/tests/test_strongly_connected.py
+++ b/networkx/algorithms/components/tests/test_strongly_connected.py
@@ -63,7 +63,8 @@ def test_tarjan(self):
def test_tarjan_recursive(self):
scc = nx.strongly_connected_components_recursive
for G, C in self.gc:
- assert {frozenset(g) for g in scc(G)} == C
+ with pytest.deprecated_call():
+ assert {frozenset(g) for g in scc(G)} == C
def test_kosaraju(self):
scc = nx.kosaraju_strongly_connected_components
@@ -168,7 +169,8 @@ def test_null_graph(self):
G = nx.DiGraph()
assert list(nx.strongly_connected_components(G)) == []
assert list(nx.kosaraju_strongly_connected_components(G)) == []
- assert list(nx.strongly_connected_components_recursive(G)) == []
+ with pytest.deprecated_call():
+ assert list(nx.strongly_connected_components_recursive(G)) == []
assert len(nx.condensation(G)) == 0
pytest.raises(
nx.NetworkXPointlessConcept, nx.is_strongly_connected, nx.DiGraph()
@@ -181,7 +183,8 @@ def test_connected_raise(self):
with pytest.raises(NetworkXNotImplemented):
next(nx.kosaraju_strongly_connected_components(G))
with pytest.raises(NetworkXNotImplemented):
- next(nx.strongly_connected_components_recursive(G))
+ with pytest.deprecated_call():
+ next(nx.strongly_connected_components_recursive(G))
pytest.raises(NetworkXNotImplemented, nx.is_strongly_connected, G)
pytest.raises(
nx.NetworkXPointlessConcept, nx.is_strongly_connected, nx.DiGraph()
@@ -191,7 +194,6 @@ def test_connected_raise(self):
strong_cc_methods = (
nx.strongly_connected_components,
nx.kosaraju_strongly_connected_components,
- nx.strongly_connected_components_recursive,
)
@pytest.mark.parametrize("get_components", strong_cc_methods)
| Discrepancy in 'strongly_connected_components_recursive' for Detecting SCCs
Hey NetworkX team! 👋
### Description:
While attempting to identify strongly connected components (SCCs) of a graph using the `strongly_connected_components`, `strongly_connected_components_recursive`, and `kosaraju_strongly_connected_components` methods provided by `networkx`, inconsistent SCCs were observed. Notably, the `strongly_connected_components_recursive` method identified node `8` as a separate SCC, while the other two methods included node `8` in a larger SCC.\
For context, I've minimized my original, much larger graph to a smaller version to isolate and highlight the issue.
### Observed Behavior:
- Using `strongly_connected_components`: [{0, 2, 3, 5, 6, 7, 8, 10}, {1}, {4}, {9}]
- Using `strongly_connected_components_recursive`: [{8}, {0, 2, 3, 5, 6, 7, 10}, {1}, {4}, {9}]
- Using `kosaraju_strongly_connected_components`: [{1}, {0, 2, 3, 5, 6, 7, 8, 10}, {9}, {4}]
### Expected Behavior:
All three methods should produce consistent SCCs. Node `8` should be part of the larger SCC `{0, 2, 3, 5, 6, 7, 8, 10}` based on the provided edges.
### Steps to Reproduce:
1. Create a directed graph with the provided nodes and edges.
2. Compute SCCs using the `strongly_connected_components`, `strongly_connected_components_recursive`, and `kosaraju_strongly_connected_components` methods.
### Environment:
Python version: 3.8.10
NetworkX version: 3.1
```python
import networkx as nx
from matplotlib import pyplot as plt
class Graph:
def __init__(self):
self.graph = nx.DiGraph()
def add_nodes(self, nodes):
for node in nodes:
self.graph.add_node(node)
def add_edges(self, edges):
for edge in edges:
self.graph.add_edge(edge[0], edge[1])
def compute_scc(self):
"""Compute the result using three functions and print them"""
result1 = list(nx.strongly_connected_components(self.graph))
result2 = list(nx.strongly_connected_components_recursive(self.graph))
result3 = list(nx.kosaraju_strongly_connected_components(self.graph))
print("Using strongly_connected_components:", result1)
print("Using strongly_connected_components_recursive:", result2)
print("Using kosaraju_strongly_connected_components:", result3)
if __name__ == "__main__":
nodes = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
edges = [
(0, 2), (0, 8), (0, 10), (2, 3), (3, 10),
(3, 0), (4, 6), (4, 1), (5, 6), (5, 7),
(6, 2), (6, 5), (7, 10), (8, 7), (9, 8),
(10, 6)
]
graph_instance = Graph()
graph_instance.add_nodes(nodes)
graph_instance.add_edges(edges)
graph_instance.compute_scc()
| Hi @iany0 ! I am not on the team of NetworkX, but I am also interested in the robustness of NetworkX.
I investigated your graph data and found that node `8` should not be in an isolated strongly connected component.
I also found that the issue lies in lines 246 to 249 of [strongly_connected.py](https://github.com/networkx/networkx/blob/main/networkx/algorithms/components/strongly_connected.py), and another issue is that `cnt` should not be independent across different visits.
The code should be:
```Python
def visit(v, cnt):
root[v] = cnt[0]
visited[v] = cnt[0]
cnt[0] += 1
stack.append(v)
for w in G[v]:
if w not in visited:
yield from visit(w, cnt)
root[v] = min(root[v], root[w])
elif w not in component:
root[v] = min(root[v], visited[w])
if root[v] == visited[v]:
component[v] = root[v]
tmpc = {v} # hold nodes in this component
while stack[-1] != v:
w = stack.pop()
component[w] = root[v]
tmpc.add(w)
stack.remove(v)
yield tmpc
visited = {}
component = {}
root = {}
cnt = [0]
stack = []
for source in G:
if source not in visited:
yield from visit(source, cnt)
```
This change can fix the inconsistency and return the correct result for `strongly_connected_components_recursive`
I will also try to commit my fix for the issue.
Cheers for your findings!
Best regards,
Joye
Thanks for the bug report and thanks for the fix. :)
This problem has been around for a long long time, and shows how little use the recursive version of the function has received. I can verify that the bug is present -- users should not trust results from the recursive version of the code until it is fixed.
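A quick way to see the disagreement on the reported graph (a sketch; the recursive variant is the buggy one):

```python
import networkx as nx

G = nx.DiGraph([(0, 2), (0, 8), (0, 10), (2, 3), (3, 10), (3, 0), (4, 6), (4, 1),
                (5, 6), (5, 7), (6, 2), (6, 5), (7, 10), (8, 7), (9, 8), (10, 6)])
a = {frozenset(c) for c in nx.strongly_connected_components(G)}
b = {frozenset(c) for c in nx.strongly_connected_components_recursive(G)}
print(a == b)  # False on affected versions: node 8 ends up isolated in b
```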
See #6898 | 2023-09-26T06:55:22 |
networkx/networkx | 6,964 | networkx__networkx-6964 | [
"6924"
] | adbaaa1a0e9d7e94d3623d0fe52157627af1228e | diff --git a/networkx/algorithms/tournament.py b/networkx/algorithms/tournament.py
--- a/networkx/algorithms/tournament.py
+++ b/networkx/algorithms/tournament.py
@@ -5,14 +5,16 @@
each pair of distinct nodes. For each function in this module that
accepts a graph as input, you must provide a tournament graph. The
responsibility is on the caller to ensure that the graph is a tournament
-graph.
+graph:
+
+ >>> G = nx.DiGraph([(0, 1), (1, 2), (2, 0)])
+ >>> nx.is_tournament(G)
+ True
To access the functions in this module, you must access them through the
-:mod:`networkx.algorithms.tournament` module::
+:mod:`networkx.tournament` module::
- >>> from networkx.algorithms import tournament
- >>> G = nx.DiGraph([(0, 1), (1, 2), (2, 0)])
- >>> tournament.is_tournament(G)
+ >>> nx.tournament.is_reachable(G, 0, 1)
True
.. _tournament graph: https://en.wikipedia.org/wiki/Tournament_%28graph_theory%29
@@ -83,9 +85,8 @@ def is_tournament(G):
Examples
--------
- >>> from networkx.algorithms import tournament
>>> G = nx.DiGraph([(0, 1), (1, 2), (2, 0)])
- >>> tournament.is_tournament(G)
+ >>> nx.is_tournament(G)
True
Notes
@@ -123,9 +124,10 @@ def hamiltonian_path(G):
Examples
--------
- >>> from networkx.algorithms import tournament
>>> G = nx.DiGraph([(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)])
- >>> tournament.hamiltonian_path(G)
+ >>> nx.is_tournament(G)
+ True
+ >>> nx.tournament.hamiltonian_path(G)
[0, 1, 2, 3]
Notes
@@ -202,9 +204,10 @@ def score_sequence(G):
Examples
--------
- >>> from networkx.algorithms import tournament
>>> G = nx.DiGraph([(1, 0), (1, 3), (0, 2), (0, 3), (2, 1), (3, 2)])
- >>> tournament.score_sequence(G)
+ >>> nx.is_tournament(G)
+ True
+ >>> nx.tournament.score_sequence(G)
[1, 1, 2, 2]
"""
@@ -286,11 +289,12 @@ def is_reachable(G, s, t):
Examples
--------
- >>> from networkx.algorithms import tournament
>>> G = nx.DiGraph([(1, 0), (1, 3), (1, 2), (2, 3), (2, 0), (3, 0)])
- >>> tournament.is_reachable(G, 1, 3)
+ >>> nx.is_tournament(G)
+ True
+ >>> nx.tournament.is_reachable(G, 1, 3)
True
- >>> tournament.is_reachable(G, 3, 2)
+ >>> nx.tournament.is_reachable(G, 3, 2)
False
Notes
@@ -367,12 +371,16 @@ def is_strongly_connected(G):
Examples
--------
- >>> from networkx.algorithms import tournament
- >>> G = nx.DiGraph([(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3), (3, 0)])
- >>> tournament.is_strongly_connected(G)
+ >>> G = nx.DiGraph([(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (3, 0)])
+ >>> nx.is_tournament(G)
+ True
+ >>> nx.tournament.is_strongly_connected(G)
+ True
+ >>> G.remove_edge(3, 0)
+ >>> G.add_edge(0, 3)
+ >>> nx.is_tournament(G)
True
- >>> G.remove_edge(1, 3)
- >>> tournament.is_strongly_connected(G)
+ >>> nx.tournament.is_strongly_connected(G)
False
Notes
| Some examples in tournament.py don't use tournament graphs
Most `tournament` functions assume the input graph is a tournament, and the docs say the user must check whether it is a tournament. But some of the examples don't use tournament graphs. The docs say that the function's output is undefined when the graph is not a tournament graph. So, while the examples do produce the output they claim to produce, the result is meaningless.
For example: `tournament.is_strongly_connected(G)` returns False even though the graph in the example is strongly connected.
```python
>>> G = nx.DiGraph([(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3), (3, 0)])
>>> nx.tournament.is_strongly_connected(G)
True
>>> G.remove_edge(1, 3)
>>> nx.tournament.is_strongly_connected(G) # this graph is strongly connected.
False
```
The function `is_strongly_connected` returns `False` even though the graph G is strongly connected. This is because both example graphs are not tournaments (a tournament has exactly one directed edge between each pair of nodes).
I think every example in this module should add a line checking that the graph is a tournament before running the function. There may be other examples that are not valid uses of the function. Such checks would also reinforce the good practice of re-checking each time a graph is changed.
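For instance, the examples could follow a pattern like this (a sketch using a graph that actually is a tournament):
```python
>>> G = nx.DiGraph([(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (3, 0)])
>>> nx.is_tournament(G)  # verify the precondition first
True
>>> nx.tournament.is_strongly_connected(G)
True
```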
| 2023-09-27T20:55:47 |
||
networkx/networkx | 6,973 | networkx__networkx-6973 | [
"5562"
] | 9b45a4394c95558d547cee32f069d153df84dfb1 | diff --git a/networkx/algorithms/community/modularity_max.py b/networkx/algorithms/community/modularity_max.py
--- a/networkx/algorithms/community/modularity_max.py
+++ b/networkx/algorithms/community/modularity_max.py
@@ -309,6 +309,9 @@ def greedy_modularity_communities(
.. [4] Newman, M. E. J."Analysis of weighted networks"
Physical Review E 70(5 Pt 2):056131, 2004.
"""
+ if not G.size():
+ return [{n} for n in G]
+
if (cutoff < 1) or (cutoff > G.number_of_nodes()):
raise ValueError(f"cutoff must be between 1 and {len(G)}. Got {cutoff}.")
if best_n is not None:
| diff --git a/networkx/algorithms/community/tests/test_modularity_max.py b/networkx/algorithms/community/tests/test_modularity_max.py
--- a/networkx/algorithms/community/tests/test_modularity_max.py
+++ b/networkx/algorithms/community/tests/test_modularity_max.py
@@ -331,3 +331,10 @@ def test_best_n():
best_n = 1
expected = [frozenset(range(13))]
assert greedy_modularity_communities(G, best_n=best_n) == expected
+
+
+def test_greedy_modularity_communities_corner_cases():
+ G = nx.empty_graph()
+ assert nx.community.greedy_modularity_communities(G) == []
+ G.add_nodes_from(range(3))
+ assert nx.community.greedy_modularity_communities(G) == [{0}, {1}, {2}]
| Properly handle edge cases for greedy_modularity_communities
Currently we don't do a good job of catching edge cases like empty graph, no edges, 3 nodes 1 edge:
```python
>>> greedy_modularity_communities(G) # empty graph
....
....
ValueError: cutoff must be between 1 and 0. Got 1.
>>> greedy_modularity_communities(G) # 3 nodes 1 edge
.....
.....
StopIteration:
>>> greedy_modularity_communities(G) # no edges
....
....
ZeroDivisionError: division by zero
```
We should catch them and error out more gracefully.
| Hi Mridul,
I would like to work on the issue, plus its visualization. Is this the desired behavior?
Instead of ValueError, printing 'This is an empty graph.'
Instead of StopIteration, printing 'Only one edge.'
Instead of ZeroDivisionError, print 'No edges.'
Thanks.
I think the middle error was fixed in #5550 recently. So that’s helpful. ;}
The other two cases could maybe raise a NetworkXPointlessConcept exception explaining that there are no edges upon which to build communities.
But I’d prefer that we return an object that makes sense for the request rather than raise an exception. That way when people apply this function to a large collection of graphs, and one happens to e.g. have no edges, it doesn’t break the process with an exception. I know they can just use try/except. But it’s probably better for them to just get an empty list back… Thoughts?
The first and 3rd case can be handled as one case by testing for `G.size() == 0` in which case my suggestion is to return `[{n} for n in G]`. That would be an empty list for the null graph, and each node in its own community for the no edges case. I think these make sense…??? We’ll need tests too, of course.
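In code, a minimal sketch of that guard (the wrapper name is just for illustration):
```python
import networkx as nx

def communities_with_edge_guard(G):
    # null graph -> [], edgeless graph -> one singleton community per node
    if G.size() == 0:
        return [{n} for n in G]
    return nx.community.greedy_modularity_communities(G)

print(communities_with_edge_guard(nx.empty_graph(0)))  # []
print(communities_with_edge_guard(nx.empty_graph(3)))  # [{0}, {1}, {2}]
```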
> But I’d prefer that we return an object that makes sense for the request rather than raise an exception. That way when people apply this function to a large collection of graphs, and one happens to e.g. have no edges, it doesn’t break the process with an exception. I know they can just use try/except. But it’s probably better for them to just get an empty list back… Thoughts?
I agree that this behavior is preferable from a usability standpoint :+1:
> The first and 3rd case can be handled as one case by testing for G.size() == 0 in which case my suggestion is to return [{n} for n in G]. That would be an empty list for the null graph, and each node in its own community for the no edges case. I think these make sense…???
I like this idea, assuming it makes sense in the context of what constitutes a "community" - I defer to the experts! @z3y50n does this sound reasonable to you?
The `ZeroDivisionError` also happens (of course) when inputting a graph without edges into `networkx.algorithms.community.quality.modularity`.
| 2023-10-02T07:07:11 |
networkx/networkx | 6,995 | networkx__networkx-6995 | [
"6748"
] | 6b3f7bb7ff8ca28c2c070c6470736a3882ba73d7 | diff --git a/networkx/algorithms/approximation/traveling_salesman.py b/networkx/algorithms/approximation/traveling_salesman.py
--- a/networkx/algorithms/approximation/traveling_salesman.py
+++ b/networkx/algorithms/approximation/traveling_salesman.py
@@ -224,6 +224,10 @@ def traveling_salesman_problem(G, weight="weight", nodes=None, cycle=True, metho
the biggest weight edge is removed to make a Hamiltonian path.
Then each edge on the new complete graph used for that analysis is
replaced by the shortest_path between those nodes on the original graph.
+ If the input graph `G` includes edges with weights that do not adhere to
+    the triangle inequality, such as when `G` is not a complete graph (i.e. the
+ length of non-existent edges is infinity), then the returned path may
+ contain some repeating nodes (other than the starting node).
Parameters
----------
@@ -265,7 +269,6 @@ def traveling_salesman_problem(G, weight="weight", nodes=None, cycle=True, metho
List of nodes in `G` along a path with an approximation of the minimal
path through `nodes`.
-
Raises
------
NetworkXError
| Travelling Salesman Problem gives wrong results if triangle inequality is not fulfilled
### Current Behavior
Calculating the TSP with a graph that does not fulfill the triangle inequality returns a tour that contains nodes multiple times. As a result, the tour is not a valid Hamiltonian cycle.
### Expected Behavior
If the triangle inequality is not fulfilled, the method should throw an exception. Further, the fact should be mentioned in the documentation.
### Steps to Reproduce
```
graph = nx.Graph()
graph.add_nodes_from(range(3))
graph.add_weighted_edges_from([(0, 1, 1), (0, 2, 3), (1, 2, 1)])
print(nx.approximation.traveling_salesman_problem(graph)) # output: [0, 1, 2, 1, 0]
```
### Environment
Python version: 3.10
NetworkX version: 3.1
### Additional context
https://en.wikipedia.org/wiki/Christofides_algorithm
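For reference, a brute-force sketch of the kind of validation that could be performed up front (the helper name is mine; O(n^3) and only meaningful on complete graphs):
```python
import itertools
import networkx as nx

def satisfies_triangle_inequality(graph, weight="weight"):
    # check w(u, v) <= w(u, k) + w(k, v) for every ordered triple of nodes
    for u, v, k in itertools.permutations(graph, 3):
        if graph[u][v][weight] > graph[u][k][weight] + graph[k][v][weight]:
            return False
    return True

graph = nx.Graph()
graph.add_weighted_edges_from([(0, 1, 1), (0, 2, 3), (1, 2, 1)])
print(satisfies_triangle_inequality(graph))  # False: w(0, 2) = 3 > w(0, 1) + w(1, 2)
```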
| 2023-10-07T21:46:00 |
||
networkx/networkx | 7,013 | networkx__networkx-7013 | [
"6745"
] | 399b2b95294a26d792b393a94549c33319517ccb | diff --git a/networkx/algorithms/approximation/traveling_salesman.py b/networkx/algorithms/approximation/traveling_salesman.py
--- a/networkx/algorithms/approximation/traveling_salesman.py
+++ b/networkx/algorithms/approximation/traveling_salesman.py
@@ -1033,7 +1033,7 @@ def simulated_annealing_tsp(
Parameters
----------
G : Graph
- `G` should be a complete weighted undirected graph.
+ `G` should be a complete weighted graph.
The distance between all pairs of nodes should be included.
init_cycle : list of all nodes or "greedy"
@@ -1249,7 +1249,7 @@ def threshold_accepting_tsp(
Parameters
----------
G : Graph
- `G` should be a complete weighted undirected graph.
+ `G` should be a complete weighted graph.
The distance between all pairs of nodes should be included.
init_cycle : list or "greedy"
| Travelling Salesman Problem takes too long for directed graphs
### Current Behavior
Calculating a TSP approximation via `nx.approximation.traveling_salesman_problem` takes too long for directed graphs. Even for directed graphs with 6 nodes, the procedure takes multiple minutes.
### Expected Behavior
The solution should be calculated within a few seconds. For undirected graphs, the TSP approximation is really fast.
### Steps to Reproduce
```
graph = nx.DiGraph()
graph.add_nodes_from(range(6))
graph.add_weighted_edges_from((s, t, s + t) for s in graph.nodes() for t in graph.nodes() if s != t)
tour = nx.approximation.traveling_salesman_problem(graph)
```
### Environment
Python version: 3.10
NetworkX version: 3.1
| There are multiple different approximation algorithms which you can use for the `traveling_salesman_problem` function. For directed graphs, the default is the Asadpour[^1] method because it actually has an approximation bound, unlike the threshold accepting and simulated annealing methods, and doesn't require extra inputs like an initial solution.
However, it is slower than the other two and in my experience very inconsistent. Your graph takes around 8 minutes on my computer (nx 3.1, python 3.11.4) but the six vertex graph in the [test suite](https://github.com/networkx/networkx/blob/e4463a4f38bcdce1b6a95b1bdfaf762cbf80ddf7/networkx/algorithms/approximation/tests/test_traveling_salesman.py#L766) only takes about 0.43 seconds.
According to this script, the other methods are actually many times faster than the default method.
```python
import networkx as nx
import time
from datetime import timedelta
graph = nx.DiGraph()
graph.add_nodes_from(range(6))
graph.add_weighted_edges_from((s, t, s + t) for s in graph.nodes() for t in graph.nodes() if s != t)
for n, m in zip(["Treshold Accepting,", "Simulated Annealing,", "Asadpour,"],
[lambda G, w: nx.approximation.threshold_accepting_tsp(G, "greedy", w),
lambda G, w : nx.approximation.simulated_annealing_tsp(G, "greedy", w),
nx.approximation.asadpour_atsp]):
start = time.time()
tour = nx.approximation.traveling_salesman_problem(graph, method=m)
end = time.time()
print(f"Name: {n:20} Duration: {timedelta(seconds=end - start)}, Route: {tour}, Cost: {sum([graph[tour[i]][tour[i+1]]['weight'] for i in range(len(tour) - 1)])}")
```
```
Name: Threshold Accepting, Duration: 0:00:00.045257, Route: [0, 1, 2, 3, 4, 5, 0], Cost: 30
Name: Simulated Annealing, Duration: 0:00:00.005363, Route: [0, 1, 2, 3, 4, 5, 0], Cost: 30
Name: Asadpour, Duration: 0:08:34.207166, Route: [1, 2, 3, 5, 0, 4, 1], Cost: 30
```
It is worth noting that for `threshold_accepting_tsp` and `simulated_annealing_tsp` the documentation for `G` states it should be undirected, but the methods work well with directed graphs and even the examples in the documentation use directed graphs. We should at the very least update the documentation to state more explicitly that these methods can be used for directed graphs.
[^1]: A. Asadpour, M. X. Goemans, A. Madry, S. O. Gharan, and A. Saberi, An O (log n / log log n)-approximation algorithm for the asymmetric traveling salesman problem, SODA ’10, Society for Industrial and Applied Mathematics, 2010, p. 379 - 389 https://dl.acm.org/doi/abs/10.5555/1873601.1873633. | 2023-10-12T13:00:13 |
|
networkx/networkx | 7,019 | networkx__networkx-7019 | [
"6430"
] | d68caf64b057f3c5f0adf4f09ce3843e4f2395c4 | diff --git a/networkx/algorithms/d_separation.py b/networkx/algorithms/d_separation.py
--- a/networkx/algorithms/d_separation.py
+++ b/networkx/algorithms/d_separation.py
@@ -11,88 +11,183 @@
algorithm presented in [2]_. Refer to [3]_, [4]_ for a couple of
alternative algorithms.
-Here, we provide a brief overview of d-separation and related concepts that
-are relevant for understanding it:
-
-Blocking paths
---------------
+The functional interface in NetworkX consists of three functions:
-Before we overview, we introduce the following terminology to describe paths:
+- `find_minimal_d_separator` returns a minimal d-separator set ``z``.
+ That is, removing any node or nodes from it makes it no longer a d-separator.
+- `is_d_separator` checks if a given set is a d-separator.
+- `is_minimal_d_separator` checks if a given set is a minimal d-separator.
-- "open" path: A path between two nodes that can be traversed
-- "blocked" path: A path between two nodes that cannot be traversed
+D-separators
+------------
-A **collider** is a triplet of nodes along a path that is like the following:
-``... u -> c <- v ...``), where 'c' is a common successor of ``u`` and ``v``. A path
-through a collider is considered "blocked". When
-a node that is a collider, or a descendant of a collider is included in
-the d-separating set, then the path through that collider node is "open". If the
-path through the collider node is open, then we will call this node an open collider.
+Here, we provide a brief overview of d-separation and related concepts that
+are relevant for understanding it:
-The d-separation set blocks the paths between ``u`` and ``v``. If you include colliders,
-or their descendant nodes in the d-separation set, then those colliders will open up,
-enabling a path to be traversed if it is not blocked some other way.
+The ideas of d-separation and d-connection relate to paths being open or blocked.
+
+- A "path" is a sequence of nodes connected in order by edges. Unlike for most
+ graph theory analysis, the direction of the edges is ignored. Thus the path
+ can be thought of as a traditional path on the undirected version of the graph.
+- A "candidate d-separator" ``z`` is a set of nodes being considered as
+ possibly blocking all paths between two prescribed sets ``x`` and ``y`` of nodes.
+ We refer to each node in the candidate d-separator as "known".
+- A "collider" node on a path is a node that is a successor of its two neighbor
+ nodes on the path. That is, ``c`` is a collider if the edge directions
+ along the path look like ``... u -> c <- v ...``.
+- If a collider node or any of its descendants are "known", the collider
+ is called an "open collider". Otherwise it is a "blocking collider".
+- Any path can be "blocked" in two ways. If the path contains a "known" node
+  that is not a collider, the path is blocked. Also, if the path contains a
+  blocking collider, the path is blocked.
+- A path is "open" if it is not blocked. That is, it is open if every node is
+ either an open collider or not a "known". Said another way, every
+ "known" in the path is a collider and every collider is open (has a
+ "known" as a inclusive descendant). The concept of "open path" is meant to
+ demonstrate a probabilistic conditional dependence between two nodes given
+ prescribed knowledge ("known" nodes).
+- Two sets ``x`` and ``y`` of nodes are "d-separated" by a set of nodes ``z``
+ if all paths between nodes in ``x`` and nodes in ``y`` are blocked. That is,
+ if there are no open paths from any node in ``x`` to any node in ``y``.
+ Such a set ``z`` is a "d-separator" of ``x`` and ``y``.
+- A "minimal d-separator" is a d-separator ``z`` for which no node or subset
+ of nodes can be removed with it still being a d-separator.
+
+The d-separator blocks some paths between ``x`` and ``y`` but opens others.
+Nodes in the d-separator block paths if the nodes are not colliders.
+But if a collider or its descendant nodes are in the d-separation set, the
+colliders are open, allowing a path through that collider.
Illustration of D-separation with examples
------------------------------------------
-For a pair of two nodes, ``u`` and ``v``, all paths are considered open if
-there is a path between ``u`` and ``v`` that is not blocked. That means, there is an open
-path between ``u`` and ``v`` that does not encounter a collider, or a variable in the
-d-separating set.
+Two nodes, ``u`` and ``v``, are d-connected if there is a path
+from ``u`` to ``v`` that is not blocked. That is, there is an open
+path from ``u`` to ``v``.
For example, if the d-separating set is the empty set, then the following paths are
-unblocked between ``u`` and ``v``:
+open between ``u`` and ``v``:
-- u <- z -> v
-- u -> w -> ... -> z -> v
+- u <- n -> v
+- u -> w -> ... -> n -> v
-If for example, 'z' is in the d-separating set, then 'z' blocks those paths
-between ``u`` and ``v``.
+If on the other hand, ``n`` is in the d-separating set, then ``n`` blocks
+those paths between ``u`` and ``v``.
-Colliders block a path by default if they and their descendants are not included
-in the d-separating set. An example of a path that is blocked when the d-separating
-set is empty is:
+Colliders block a path if they and their descendants are not included
+in the d-separating set. An example of a path that is blocked when the
+d-separating set is empty is:
-- u -> w -> ... -> z <- v
+- u -> w -> ... -> n <- v
-because 'z' is a collider in this path and 'z' is not in the d-separating set. However,
-if 'z' or a descendant of 'z' is included in the d-separating set, then the path through
-the collider at 'z' (... -> z <- ...) is now "open".
+The node ``n`` is a collider in this path and is not in the d-separating set.
+So ``n`` blocks this path. However, if ``n`` or a descendant of ``n`` is
+included in the d-separating set, then the path through the collider
+at ``n`` (... -> n <- ...) is "open".
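+
+For instance, a quick sketch with a collider ``n`` and a descendant ``d`` of
+``n``, using the `is_d_separator` function described below:
+
+>>> G = nx.DiGraph([("u", "n"), ("v", "n"), ("n", "d")])
+>>> nx.is_d_separator(G, "u", "v", set())
+True
+>>> nx.is_d_separator(G, "u", "v", {"n"})
+False
+>>> nx.is_d_separator(G, "u", "v", {"d"})
+False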
-D-separation is concerned with blocking all paths between u and v. Therefore, a
-d-separating set between ``u`` and ``v`` is one where all paths are blocked.
+D-separation is concerned with blocking all paths between nodes from ``x`` to ``y``.
+A d-separating set between ``x`` and ``y`` is one where all paths are blocked.
D-separation and its applications in probability
------------------------------------------------
-D-separation is commonly used in probabilistic graphical models. D-separation
+D-separation is commonly used in probabilistic causal-graph models. D-separation
connects the idea of probabilistic "dependence" with separation in a graph. If
-one assumes the causal Markov condition [5]_, then d-separation implies conditional
-independence in probability distributions.
+one assumes the causal Markov condition [5]_, (every node is conditionally
+independent of its non-descendants, given its parents) then d-separation implies
+conditional independence in probability distributions.
+Symmetrically, d-connection implies dependence.
+
+The intuition is as follows. The edges on a causal graph indicate which nodes
+influence the outcome of other nodes directly. An edge from u to v
+implies that the outcome of event ``u`` influences the probabilities for
+the outcome of event ``v``. Certainly knowing ``u`` changes predictions for ``v``.
+But also knowing ``v`` changes predictions for ``u``. The outcomes are dependent.
+Furthermore, an edge from ``v`` to ``w`` would mean that ``w`` and ``v`` are dependent
+and thus that ``u`` could indirectly influence ``w``.
+
+Without any knowledge about the system (candidate d-separating set is empty)
+a causal graph ``u -> v -> w`` allows all three nodes to be dependent. But
+if we know the outcome of ``v``, the conditional probabilities of outcomes for
+``u`` and ``w`` are independent of each other. That is, once we know the outcome
+for ``v``, the probabilities for ``w`` do not depend on the outcome for ``u``.
+This is the idea behind ``v`` blocking the path if it is "known" (in the candidate
+d-separating set).
+
+The same argument works when both edges point the other way (``u <- v <- w``)
+and when both arrows head out from the middle (``u <- v -> w``). Having a "known"
+node on a path blocks the collider-free path because those relationships
+make the conditional probabilities independent.
+
+The direction of the causal edges does impact dependence precisely in the
+case of a collider, e.g. ``u -> v <- w``. In that situation, both ``u`` and ``w``
+influence ``v``. But they do not directly influence each other. So without any
+knowledge of any outcomes, ``u`` and ``w`` are independent. That is the idea behind
+colliders blocking the path. But, if ``v`` is known, the conditional probabilities
+of ``u`` and ``w`` can be dependent. This is the heart of Berkson's Paradox [6]_.
+For example, suppose ``u`` and ``w`` are boolean events (they either happen or do not)
+and ``v`` represents the outcome "at least one of ``u`` and ``w`` occur". Then knowing
+``v`` is true makes the conditional probabilities of ``u`` and ``w`` dependent.
+Essentially, knowing that at least one of them is true raises the probability of
+each. But further knowledge that ``w`` is true (or false) change the conditional
+probability of ``u`` to either the original value or 1. So the conditional
+probability of ``u`` depends on the outcome of ``w`` even though there is no
+causal relationship between them. When a collider is known, dependence can
+occur across paths through that collider. This is the reason open colliders
+do not block paths.
+
+Furthermore, even if ``v`` is not "known", if one of its descendants is "known"
+we can use that information to know more about ``v`` which again makes
+``u`` and ``w`` potentially dependent. Suppose ``n`` is a descendant of
+``v`` and the chance of ``n`` occurring is much higher when ``v`` occurs
+("at least one of ``u`` and ``w`` occur").
+Then if we know ``n`` occurred, it is more likely that ``v`` occurred and that
+makes the chance of ``u`` and ``w`` dependent. This is the idea behind why
+a collider does not block a path if any descendant of the collider is "known".
+
+When two sets of nodes ``x`` and ``y`` are d-separated by a set ``z``,
+it means that given the outcomes of the nodes in ``z``, the probabilities
+of outcomes of the nodes in ``x`` are independent of the outcomes of the
+nodes in ``y`` and vice versa.
Examples
--------
-
->>>
->>> # HMM graph with five states and observation nodes
-... g = nx.DiGraph()
->>> g.add_edges_from(
+A Hidden Markov Model with 5 observed states and 5 hidden states,
+where the hidden states form a causal path, results in the
+following causal network. We check that early states along the
+path are d-separated from late states in the path by the
+d-separator of the middle hidden state.
+Thus if we condition on the middle hidden state, the early
+state probabilities are independent of the late state outcomes.
+
+>>> G = nx.DiGraph()
+>>> G.add_edges_from(
... [
-... ("S1", "S2"),
-... ("S2", "S3"),
-... ("S3", "S4"),
-... ("S4", "S5"),
-... ("S1", "O1"),
-... ("S2", "O2"),
-... ("S3", "O3"),
-... ("S4", "O4"),
-... ("S5", "O5"),
+... ("H1", "H2"),
+... ("H2", "H3"),
+... ("H3", "H4"),
+... ("H4", "H5"),
+... ("H1", "O1"),
+... ("H2", "O2"),
+... ("H3", "O3"),
+... ("H4", "O4"),
+... ("H5", "O5"),
... ]
... )
->>>
->>> # states/obs before 'S3' are d-separated from states/obs after 'S3'
-... nx.d_separated(g, {"S1", "S2", "O1", "O2"}, {"S4", "S5", "O4", "O5"}, {"S3"})
+>>> x, y, z = ({"H1", "O1"}, {"H5", "O5"}, {"H3"})
+>>> nx.is_d_separator(G, x, y, z)
+True
+>>> nx.is_minimal_d_separator(G, x, y, z)
+True
+>>> nx.is_minimal_d_separator(G, x, y, z | {"O3"})
+False
+>>> z = nx.find_minimal_d_separator(G, x | y, {"O2", "O3", "O4"})
+>>> z == {"H2", "H4"}
+True
+
+If no minimal d-separator exists, `None` is returned
+
+>>> other_z = nx.find_minimal_d_separator(G, x | y, {"H2", "H3"})
+>>> other_z is None
True
@@ -101,142 +196,192 @@
.. [1] Pearl, J. (2009). Causality. Cambridge: Cambridge University Press.
-.. [2] Darwiche, A. (2009). Modeling and reasoning with Bayesian networks.
+.. [2] Darwiche, A. (2009). Modeling and reasoning with Bayesian networks.
Cambridge: Cambridge University Press.
-.. [3] Shachter, R. D. (1998).
- Bayes-ball: rational pastime (for determining irrelevance and requisite
- information in belief networks and influence diagrams).
- In , Proceedings of the Fourteenth Conference on Uncertainty in Artificial
- Intelligence (pp. 480–487).
- San Francisco, CA, USA: Morgan Kaufmann Publishers Inc.
+.. [3] Shachter, Ross D. "Bayes-ball: The rational pastime (for
+ determining irrelevance and requisite information in belief networks
+ and influence diagrams)." In Proceedings of the Fourteenth Conference
+ on Uncertainty in Artificial Intelligence (UAI), (pp. 480–487). 1998.
.. [4] Koller, D., & Friedman, N. (2009).
Probabilistic graphical models: principles and techniques. The MIT Press.
.. [5] https://en.wikipedia.org/wiki/Causal_Markov_condition
+.. [6] https://en.wikipedia.org/wiki/Berkson%27s_paradox
+
"""
from collections import deque
+from itertools import chain
import networkx as nx
from networkx.utils import UnionFind, not_implemented_for
-__all__ = ["d_separated", "minimal_d_separator", "is_minimal_d_separator"]
+__all__ = [
+ "is_d_separator",
+ "is_minimal_d_separator",
+ "find_minimal_d_separator",
+ "d_separated",
+ "minimal_d_separator",
+]
@not_implemented_for("undirected")
@nx._dispatch
-def d_separated(G, x, y, z):
- """
- Return whether node sets ``x`` and ``y`` are d-separated by ``z``.
+def is_d_separator(G, x, y, z):
+ """Return whether node sets `x` and `y` are d-separated by `z`.
Parameters
----------
- G : graph
+ G : nx.DiGraph
A NetworkX DAG.
- x : set
- First set of nodes in ``G``.
+ x : node or set of nodes
+ First node or set of nodes in `G`.
- y : set
- Second set of nodes in ``G``.
+ y : node or set of nodes
+ Second node or set of nodes in `G`.
- z : set
- Set of conditioning nodes in ``G``. Can be empty set.
+ z : node or set of nodes
+ Potential separator (set of conditioning nodes in `G`). Can be empty set.
Returns
-------
b : bool
- A boolean that is true if ``x`` is d-separated from ``y`` given ``z`` in ``G``.
+ A boolean that is true if `x` is d-separated from `y` given `z` in `G`.
Raises
------
NetworkXError
- The *d-separation* test is commonly used with directed
- graphical models which are acyclic. Accordingly, the algorithm
- raises a :exc:`NetworkXError` if the input graph is not a DAG.
+ The *d-separation* test is commonly used on disjoint sets of
+ nodes in acyclic directed graphs. Accordingly, the algorithm
+ raises a :exc:`NetworkXError` if the node sets are not
+ disjoint or if the input graph is not a DAG.
NodeNotFound
If any of the input nodes are not found in the graph,
- a :exc:`NodeNotFound` exception is raised.
+ a :exc:`NodeNotFound` exception is raised
Notes
-----
A d-separating set in a DAG is a set of nodes that
blocks all paths between the two sets. Nodes in `z`
block a path if they are part of the path and are not a collider,
- or a descendant of a collider. A collider structure along a path
+ or a descendant of a collider. Also colliders that are not in `z`
+ block a path. A collider structure along a path
is ``... -> c <- ...`` where ``c`` is the collider node.
https://en.wikipedia.org/wiki/Bayesian_network#d-separation
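+
+    Examples
+    --------
+    A small sketch on a directed path:
+
+    >>> G = nx.path_graph(4, create_using=nx.DiGraph)
+    >>> nx.is_d_separator(G, 0, 3, {1})
+    True
+    >>> nx.is_d_separator(G, 0, 3, set())
+    False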
"""
+ try:
+ x = {x} if x in G else x
+ y = {y} if y in G else y
+ z = {z} if z in G else z
+
+ intersection = x & y or x & z or y & z
+ if intersection:
+ raise nx.NetworkXError(
+ f"The sets are not disjoint, with intersection {intersection}"
+ )
+
+ set_v = x | y | z
+ if set_v - G.nodes:
+ raise nx.NodeNotFound(f"The node(s) {set_v - G.nodes} are not found in G")
+ except TypeError:
+ raise nx.NodeNotFound("One of x, y, or z is not a node or a set of nodes in G")
if not nx.is_directed_acyclic_graph(G):
raise nx.NetworkXError("graph should be directed acyclic")
- union_xyz = x.union(y).union(z)
-
- if any(n not in G.nodes for n in union_xyz):
- raise nx.NodeNotFound("one or more specified nodes not found in the graph")
-
- G_copy = G.copy()
-
- # transform the graph by removing leaves that are not in x | y | z
- # until no more leaves can be removed.
- leaves = deque([n for n in G_copy.nodes if G_copy.out_degree[n] == 0])
- while len(leaves) > 0:
- leaf = leaves.popleft()
- if leaf not in union_xyz:
- for p in G_copy.predecessors(leaf):
- if G_copy.out_degree[p] == 1:
- leaves.append(p)
- G_copy.remove_node(leaf)
-
- # transform the graph by removing outgoing edges from the
- # conditioning set.
- edges_to_remove = list(G_copy.out_edges(z))
- G_copy.remove_edges_from(edges_to_remove)
-
- # use disjoint-set data structure to check if any node in `x`
- # occurs in the same weakly connected component as a node in `y`.
- disjoint_set = UnionFind(G_copy.nodes())
- for component in nx.weakly_connected_components(G_copy):
- disjoint_set.union(*component)
- disjoint_set.union(*x)
- disjoint_set.union(*y)
-
- if x and y and disjoint_set[next(iter(x))] == disjoint_set[next(iter(y))]:
- return False
- else:
- return True
+    # nodes entered via an incoming edge (ball moving in the -> direction)
+ forward_deque = deque([])
+ forward_visited = set()
+
+    # nodes entered via an outgoing edge (ball moving in the <- direction); seeded with x
+ backward_deque = deque(x)
+ backward_visited = set()
+
+ ancestors_or_z = set().union(*[nx.ancestors(G, node) for node in x]) | z | x
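+
+    # Bayes-ball style traversal: the ball starts at the nodes of x; "forward"
+    # nodes were entered via an incoming edge, "backward" nodes via an outgoing
+    # one. Reaching any node of y means x and y are d-connected given z.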
+
+ while forward_deque or backward_deque:
+ if backward_deque:
+ node = backward_deque.popleft()
+ backward_visited.add(node)
+ if node in y:
+ return False
+ if node in z:
+ continue
+
+ # add <- edges to backward deque
+ backward_deque.extend(G.pred[node].keys() - backward_visited)
+ # add -> edges to forward deque
+ forward_deque.extend(G.succ[node].keys() - forward_visited)
+
+ if forward_deque:
+ node = forward_deque.popleft()
+ forward_visited.add(node)
+ if node in y:
+ return False
+
+            # Consider if the collider -> node <- is opened:
+            # node is in z, in x, or is an ancestor of x
+ if node in ancestors_or_z:
+ # add <- edges to backward deque
+ backward_deque.extend(G.pred[node].keys() - backward_visited)
+ if node not in z:
+ # add -> edges to forward deque
+ forward_deque.extend(G.succ[node].keys() - forward_visited)
+
+ return True
@not_implemented_for("undirected")
@nx._dispatch
-def minimal_d_separator(G, u, v):
- """Compute a minimal d-separating set between 'u' and 'v'.
+def find_minimal_d_separator(G, x, y, *, included=None, restricted=None):
+ """Returns a minimal d-separating set between `x` and `y` if possible
- A d-separating set in a DAG is a set of nodes that blocks all paths
- between the two nodes, 'u' and 'v'. This function
- constructs a d-separating set that is "minimal", meaning it is the smallest
- d-separating set for 'u' and 'v'. This is not necessarily
- unique. For more details, see Notes.
+ A d-separating set in a DAG is a set of nodes that blocks all
+ paths between the two sets of nodes, `x` and `y`. This function
+ constructs a d-separating set that is "minimal", meaning no nodes can
+ be removed without it losing the d-separating property for `x` and `y`.
+ If no d-separating sets exist for `x` and `y`, this returns `None`.
+
+ In a DAG there may be more than one minimal d-separator between two
+ sets of nodes. Minimal d-separators are not always unique. This function
+ returns one minimal d-separator, or `None` if no d-separator exists.
+
+ Uses the algorithm presented in [1]_. The complexity of the algorithm
+ is :math:`O(m)`, where :math:`m` stands for the number of edges in
+ the subgraph of G consisting of only the ancestors of `x` and `y`.
+ For full details, see [1]_.
Parameters
----------
G : graph
A networkx DAG.
- u : node
- A node in the graph, G.
- v : node
- A node in the graph, G.
+ x : set | node
+ A node or set of nodes in the graph.
+ y : set | node
+ A node or set of nodes in the graph.
+ included : set | node | None
+ A node or set of nodes which must be included in the found separating set,
+ default is None, which means the empty set.
+ restricted : set | node | None
+ Restricted node or set of nodes to consider. Only these nodes can be in
+ the found separating set, default is None meaning all nodes in ``G``.
+
+ Returns
+ -------
+ z : set | None
+ The minimal d-separating set, if at least one d-separating set exists,
+ otherwise None.
Raises
------
NetworkXError
- Raises a :exc:`NetworkXError` if the input graph is not a DAG.
+ Raises a :exc:`NetworkXError` if the input graph is not a DAG
+ or if node sets `x`, `y`, and `included` are not disjoint.
NodeNotFound
If any of the input nodes are not found in the graph,
@@ -244,89 +389,98 @@ def minimal_d_separator(G, u, v):
References
----------
- .. [1] Tian, J., & Paz, A. (1998). Finding Minimal D-separators.
-
- Notes
- -----
- This function only finds ``a`` minimal d-separator. It does not guarantee
- uniqueness, since in a DAG there may be more than one minimal d-separator
- between two nodes. Moreover, this only checks for minimal separators
- between two nodes, not two sets. Finding minimal d-separators between
- two sets of nodes is not supported.
-
- Uses the algorithm presented in [1]_. The complexity of the algorithm
- is :math:`O(|E_{An}^m|)`, where :math:`|E_{An}^m|` stands for the
- number of edges in the moralized graph of the sub-graph consisting
- of only the ancestors of 'u' and 'v'. For full details, see [1]_.
-
- The algorithm works by constructing the moral graph consisting of just
- the ancestors of `u` and `v`. Then it constructs a candidate for
- a separating set ``Z'`` from the predecessors of `u` and `v`.
- Then BFS is run starting from `u` and marking nodes
- found from ``Z'`` and calling those nodes ``Z''``.
- Then BFS is run again starting from `v` and marking nodes if they are
- present in ``Z''``. Those marked nodes are the returned minimal
- d-separating set.
-
- https://en.wikipedia.org/wiki/Bayesian_network#d-separation
+ .. [1] van der Zander, Benito, and Maciej Liśkiewicz. "Finding
+ minimal d-separators in linear time and applications." In
+ Uncertainty in Artificial Intelligence, pp. 637-647. PMLR, 2020.
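+
+    Examples
+    --------
+    A small sketch on a directed path, checking that the returned set is
+    indeed a minimal d-separator:
+
+    >>> G = nx.path_graph(5, create_using=nx.DiGraph)
+    >>> z = nx.find_minimal_d_separator(G, 0, 4)
+    >>> nx.is_minimal_d_separator(G, 0, 4, z)
+    True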
"""
if not nx.is_directed_acyclic_graph(G):
raise nx.NetworkXError("graph should be directed acyclic")
- union_uv = {u, v}
+ try:
+ x = {x} if x in G else x
+ y = {y} if y in G else y
- if any(n not in G.nodes for n in union_uv):
- raise nx.NodeNotFound("one or more specified nodes not found in the graph")
+ if included is None:
+ included = set()
+ elif included in G:
+ included = {included}
- # first construct the set of ancestors of X and Y
- x_anc = nx.ancestors(G, u)
- y_anc = nx.ancestors(G, v)
- D_anc_xy = x_anc.union(y_anc)
- D_anc_xy.update((u, v))
+ if restricted is None:
+ restricted = set(G)
+ elif restricted in G:
+ restricted = {restricted}
- # second, construct the moralization of the subgraph of Anc(X,Y)
- moral_G = nx.moral_graph(G.subgraph(D_anc_xy))
+ set_y = x | y | included | restricted
+ if set_y - G.nodes:
+ raise nx.NodeNotFound(f"The node(s) {set_y - G.nodes} are not found in G")
+ except TypeError:
+ raise nx.NodeNotFound(
+ "One of x, y, included or restricted is not a node or set of nodes in G"
+ )
- # find a separating set Z' in moral_G
- Z_prime = set(G.predecessors(u)).union(set(G.predecessors(v)))
+ if not included <= restricted:
+ raise nx.NetworkXError(
+ f"Included nodes {included} must be in restricted nodes {restricted}"
+ )
- # perform BFS on the graph from 'x' to mark
- Z_dprime = _bfs_with_marks(moral_G, u, Z_prime)
- Z = _bfs_with_marks(moral_G, v, Z_dprime)
- return Z
+ intersection = x & y or x & included or y & included
+ if intersection:
+ raise nx.NetworkXError(
+ f"The sets x, y, included are not disjoint. Overlap: {intersection}"
+ )
+
+ nodeset = x | y | included
+ ancestors_x_y_included = nodeset.union(*[nx.ancestors(G, node) for node in nodeset])
+
+ z_init = restricted & (ancestors_x_y_included - (x | y))
+
+ x_closure = _reachable(G, x, ancestors_x_y_included, z_init)
+ if x_closure & y:
+ return None
+
+ z_updated = z_init & (x_closure | included)
+ y_closure = _reachable(G, y, ancestors_x_y_included, z_updated)
+ return z_updated & (y_closure | included)
@not_implemented_for("undirected")
@nx._dispatch
-def is_minimal_d_separator(G, u, v, z):
- """Determine if a d-separating set is minimal.
+def is_minimal_d_separator(G, x, y, z, *, included=None, restricted=None):
+ """Determine if `z` is a minimal d-separator for `x` and `y`.
- A d-separating set, `z`, in a DAG is a set of nodes that blocks
- all paths between the two nodes, `u` and `v`. This function
- verifies that a set is "minimal", meaning there is no smaller
- d-separating set between the two nodes.
+ A d-separator, `z`, in a DAG is a set of nodes that blocks
+ all paths from nodes in set `x` to nodes in set `y`.
+ A minimal d-separator is a d-separator `z` such that removing
+ any subset of nodes makes it no longer a d-separator.
- Note: This function checks whether `z` is a d-separator AND is minimal.
- One can use the function `d_separated` to only check if `z` is a d-separator.
- See examples below.
+ Note: This function checks whether `z` is a d-separator AND is
+ minimal. One can use the function `is_d_separator` to only check if
+ `z` is a d-separator. See examples below.
Parameters
----------
G : nx.DiGraph
- The graph.
- u : node
- A node in the graph.
- v : node
- A node in the graph.
- z : Set of nodes
- The set of nodes to check if it is a minimal d-separating set.
- The function :func:`d_separated` is called inside this function
+ A NetworkX DAG.
+ x : node | set
+ A node or set of nodes in the graph.
+ y : node | set
+ A node or set of nodes in the graph.
+ z : node | set
+ The node or set of nodes to check if it is a minimal d-separating set.
+ The function :func:`is_d_separator` is called inside this function
to verify that `z` is in fact a d-separator.
+ included : set | node | None
+ A node or set of nodes which must be included in the found separating set,
+ default is ``None``, which means the empty set.
+ restricted : set | node | None
+ Restricted node or set of nodes to consider. Only these nodes can be in
+ the found separating set, default is ``None`` meaning all nodes in ``G``.
Returns
-------
bool
- Whether or not the set `z` is a d-separator and is also minimal.
+ Whether or not the set `z` is a minimal d-separator subject to
+ `restricted` nodes and `included` node constraints.
Examples
--------
@@ -338,7 +492,7 @@ def is_minimal_d_separator(G, u, v, z):
>>> nx.is_minimal_d_separator(G, 0, 2, {1, 3, 4})
False
>>> # alternatively, if we only want to check that {1, 3, 4} is a d-separator
- >>> nx.d_separated(G, {0}, {4}, {1, 3, 4})
+ >>> nx.is_d_separator(G, 0, 2, {1, 3, 4})
True
Raises
@@ -352,106 +506,217 @@ def is_minimal_d_separator(G, u, v, z):
References
----------
- .. [1] Tian, J., & Paz, A. (1998). Finding Minimal D-separators.
+ .. [1] van der Zander, Benito, and Maciej Liśkiewicz. "Finding
+ minimal d-separators in linear time and applications." In
+ Uncertainty in Artificial Intelligence, pp. 637-647. PMLR, 2020.
Notes
-----
- This function only works on verifying a d-separating set is minimal
- between two nodes. To verify that a d-separating set is minimal between
- two sets of nodes is not supported.
-
- Uses algorithm 2 presented in [1]_. The complexity of the algorithm
- is :math:`O(|E_{An}^m|)`, where :math:`|E_{An}^m|` stands for the
- number of edges in the moralized graph of the sub-graph consisting
- of only the ancestors of ``u`` and ``v``.
-
- The algorithm works by constructing the moral graph consisting of just
- the ancestors of `u` and `v`. First, it performs BFS on the moral graph
- starting from `u` and marking any nodes it encounters that are part of
- the separating set, `z`. If a node is marked, then it does not continue
- along that path. In the second stage, BFS with markings is repeated on the
- moral graph starting from `v`. If at any stage, any node in `z` is
- not marked, then `z` is considered not minimal. If the end of the algorithm
- is reached, then `z` is minimal.
+ This function works on verifying that a set is minimal and
+ d-separating between two nodes. Uses criterion (a), (b), (c) on
+ page 4 of [1]_. a) closure(`x`) and `y` are disjoint. b) `z` contains
+ all nodes from `included` and is contained in the `restricted`
+ nodes and in the union of ancestors of `x`, `y`, and `included`.
+ c) the nodes in `z` not in `included` are contained in both
+ closure(x) and closure(y). The closure of a set is the set of nodes
+ connected to the set by a directed path in G.
+
+ The complexity is :math:`O(m)`, where :math:`m` stands for the
+ number of edges in the subgraph of G consisting of only the
+ ancestors of `x` and `y`.
For full details, see [1]_.
-
- https://en.wikipedia.org/wiki/Bayesian_network#d-separation
"""
- if not nx.d_separated(G, {u}, {v}, z):
- return False
-
- x_anc = nx.ancestors(G, u)
- y_anc = nx.ancestors(G, v)
- xy_anc = x_anc.union(y_anc)
+ if not nx.is_directed_acyclic_graph(G):
+ raise nx.NetworkXError("graph should be directed acyclic")
- # if Z contains any node which is not in ancestors of X or Y
- # then it is definitely not minimal
- if any(node not in xy_anc for node in z):
+ try:
+ x = {x} if x in G else x
+ y = {y} if y in G else y
+ z = {z} if z in G else z
+
+ if included is None:
+ included = set()
+ elif included in G:
+ included = {included}
+
+ if restricted is None:
+ restricted = set(G)
+ elif restricted in G:
+ restricted = {restricted}
+
+ set_y = x | y | included | restricted
+ if set_y - G.nodes:
+ raise nx.NodeNotFound(f"The node(s) {set_y - G.nodes} are not found in G")
+ except TypeError:
+ raise nx.NodeNotFound(
+ "One of x, y, z, included or restricted is not a node or set of nodes in G"
+ )
+
+ if not included <= z:
+ raise nx.NetworkXError(
+ f"Included nodes {included} must be in proposed separating set z {x}"
+ )
+ if not z <= restricted:
+ raise nx.NetworkXError(
+ f"Separating set {z} must be contained in restricted set {restricted}"
+ )
+
+ intersection = x.intersection(y) or x.intersection(z) or y.intersection(z)
+ if intersection:
+ raise nx.NetworkXError(
+ f"The sets are not disjoint, with intersection {intersection}"
+ )
+
+ nodeset = x | y | included
+ ancestors_x_y_included = nodeset.union(*[nx.ancestors(G, n) for n in nodeset])
+
+ # criterion (a) -- check that z is actually a separator
+ x_closure = _reachable(G, x, ancestors_x_y_included, z)
+ if x_closure & y:
return False
- D_anc_xy = x_anc.union(y_anc)
- D_anc_xy.update((u, v))
-
- # second, construct the moralization of the subgraph
- moral_G = nx.moral_graph(G.subgraph(D_anc_xy))
-
- # start BFS from X
- marks = _bfs_with_marks(moral_G, u, z)
-
- # if not all the Z is marked, then the set is not minimal
- if any(node not in marks for node in z):
+ # criterion (b) -- basic constraint; included and restricted already checked above
+ if not (z <= ancestors_x_y_included):
return False
- # similarly, start BFS from Y and check the marks
- marks = _bfs_with_marks(moral_G, v, z)
- # if not all the Z is marked, then the set is not minimal
- if any(node not in marks for node in z):
+ # criterion (c) -- check that z is minimal
+ y_closure = _reachable(G, y, ancestors_x_y_included, z)
+ if not ((z - included) <= (x_closure & y_closure)):
return False
-
return True
-@not_implemented_for("directed")
-def _bfs_with_marks(G, start_node, check_set):
- """Breadth-first-search with markings.
+@not_implemented_for("undirected")
+def _reachable(G, x, a, z):
+ """Modified Bayes-Ball algorithm for finding d-connected nodes.
- Performs BFS starting from ``start_node`` and whenever a node
- inside ``check_set`` is met, it is "marked". Once a node is marked,
- BFS does not continue along that path. The resulting marked nodes
- are returned.
+ Find all nodes in `a` that are d-connected to those in `x` by
+ those in `z`. This is an implementation of the function
+ `REACHABLE` in [1]_ (which is itself a modification of the
+ Bayes-Ball algorithm [2]_) when restricted to DAGs.
Parameters
----------
- G : nx.Graph
- An undirected graph.
- start_node : node
- The start of the BFS.
- check_set : set
- The set of nodes to check against.
+ G : nx.DiGraph
+ A NetworkX DAG.
+ x : node | set
+ A node in the DAG, or a set of nodes.
+ a : node | set
+ A (set of) node(s) in the DAG containing the ancestors of `x`.
+ z : node | set
+ The node or set of nodes conditioned on when checking d-connectedness.
Returns
-------
- marked : set
- A set of nodes that were marked.
+ w : set
+ The closure of `x` in `a` with respect to d-connectedness
+ given `z`.
+
+ References
+ ----------
+ .. [1] van der Zander, Benito, and Maciej Liśkiewicz. "Finding
+ minimal d-separators in linear time and applications." In
+ Uncertainty in Artificial Intelligence, pp. 637-647. PMLR, 2020.
+
+ .. [2] Shachter, Ross D. "Bayes-ball: The rational pastime
+ (for determining irrelevance and requisite information in
+ belief networks and influence diagrams)." In Proceedings of the
+ Fourteenth Conference on Uncertainty in Artificial Intelligence
+ (UAI), (pp. 480–487). 1998.
+ """
+
+ def _pass(e, v, f, n):
+ """Whether a ball entering node `v` along edge `e` passes to `n` along `f`.
+
+ Boolean function defined on page 6 of [1]_.
+
+ Parameters
+ ----------
+ e : bool
+ Directed edge by which the ball got to node `v`; `True` iff directed into `v`.
+ v : node
+ Node where the ball is.
+ f : bool
+ Directed edge connecting nodes `v` and `n`; `True` iff directed into `n`.
+ n : node
+ Checking whether the ball passes to this node.
+
+ Returns
+ -------
+ b : bool
+ Whether the ball passes or not.
+
+ References
+ ----------
+ .. [1] van der Zander, Benito, and Maciej Liśkiewicz. "Finding
+ minimal d-separators in linear time and applications." In
+ Uncertainty in Artificial Intelligence, pp. 637-647. PMLR, 2020.
+ """
+ is_element_of_A = n in a
+ # almost_definite_status = True # always true for DAGs; not so for RCGs
+ collider_if_in_Z = v not in z or (e and not f)
+ return is_element_of_A and collider_if_in_Z # and almost_definite_status
+
+ queue = deque([])
+ for node in x:
+ if bool(G.pred[node]):
+ queue.append((True, node))
+ if bool(G.succ[node]):
+ queue.append((False, node))
+ processed = queue.copy()
+
+ while any(queue):
+ e, v = queue.popleft()
+ preds = ((False, n) for n in G.pred[v])
+ succs = ((True, n) for n in G.succ[v])
+ f_n_pairs = chain(preds, succs)
+ for f, n in f_n_pairs:
+ if (f, n) not in processed and _pass(e, v, f, n):
+ queue.append((f, n))
+ processed.append((f, n))
+
+ return {w for (_, w) in processed}
+
+
+# Deprecated functions:
+def d_separated(G, x, y, z):
+ """Return whether nodes sets ``x`` and ``y`` are d-separated by ``z``.
+
+ .. deprecated:: 3.3
+
+ This function is deprecated and will be removed in NetworkX v3.5.
+ Please use `is_d_separator(G, x, y, z)`.
+
+ """
+ import warnings
+
+ warnings.warn(
+ "d_separated is deprecated and will be removed in NetworkX v3.5."
+ "Please use `is_d_separator(G, x, y, z)`.",
+ category=DeprecationWarning,
+ stacklevel=2,
+ )
+ return nx.is_d_separator(G, x, y, z)
+
+
+def minimal_d_separator(G, u, v):
+ """Returns a minimal_d-separating set between `x` and `y` if possible
+
+ .. deprecated:: 3.3
+
+ minimal_d_separator is deprecated and will be removed in NetworkX v3.5.
+ Please use `find_minimal_d_separator(G, x, y)`.
+
"""
- visited = {}
- marked = set()
- queue = []
-
- visited[start_node] = None
- queue.append(start_node)
- while queue:
- m = queue.pop(0)
-
- for nbr in G.neighbors(m):
- if nbr not in visited:
- # memoize where we visited so far
- visited[nbr] = None
-
- # mark the node in Z' and do not continue along that path
- if nbr in check_set:
- marked.add(nbr)
- else:
- queue.append(nbr)
- return marked
+ import warnings
+
+ warnings.warn(
+ (
+ "This function is deprecated and will be removed in NetworkX v3.5."
+ "Please use `is_d_separator(G, x, y)`."
+ ),
+ category=DeprecationWarning,
+ stacklevel=2,
+ )
+ return nx.find_minimal_d_separator(G, u, v)
diff --git a/networkx/conftest.py b/networkx/conftest.py
--- a/networkx/conftest.py
+++ b/networkx/conftest.py
@@ -133,6 +133,12 @@ def set_warnings():
warnings.filterwarnings(
"ignore", category=DeprecationWarning, message="\n\nrandom_triad"
)
+ warnings.filterwarnings(
+ "ignore", category=DeprecationWarning, message="minimal_d_separator"
+ )
+ warnings.filterwarnings(
+ "ignore", category=DeprecationWarning, message="d_separated"
+ )
warnings.filterwarnings("ignore", category=DeprecationWarning, message="\n\nk_core")
warnings.filterwarnings(
"ignore", category=DeprecationWarning, message="\n\nk_shell"
| diff --git a/networkx/algorithms/tests/test_d_separation.py b/networkx/algorithms/tests/test_d_separation.py
--- a/networkx/algorithms/tests/test_d_separation.py
+++ b/networkx/algorithms/tests/test_d_separation.py
@@ -81,6 +81,41 @@ def asia_graph_fixture():
return asia_graph()
[email protected]()
+def large_collider_graph():
+ edge_list = [("A", "B"), ("C", "B"), ("B", "D"), ("D", "E"), ("B", "F"), ("G", "E")]
+ G = nx.DiGraph(edge_list)
+ return G
+
+
[email protected]()
+def chain_and_fork_graph():
+ edge_list = [("A", "B"), ("B", "C"), ("B", "D"), ("D", "C")]
+ G = nx.DiGraph(edge_list)
+ return G
+
+
[email protected]()
+def no_separating_set_graph():
+ edge_list = [("A", "B")]
+ G = nx.DiGraph(edge_list)
+ return G
+
+
[email protected]()
+def large_no_separating_set_graph():
+ edge_list = [("A", "B"), ("C", "A"), ("C", "B")]
+ G = nx.DiGraph(edge_list)
+ return G
+
+
[email protected]()
+def collider_trek_graph():
+ edge_list = [("A", "B"), ("C", "B"), ("C", "D")]
+ G = nx.DiGraph(edge_list)
+ return G
+
+
@pytest.mark.parametrize(
"graph",
[path_graph(), fork_graph(), collider_graph(), naive_bayes_graph(), asia_graph()],
@@ -90,40 +125,40 @@ def test_markov_condition(graph):
for node in graph.nodes:
parents = set(graph.predecessors(node))
non_descendants = graph.nodes - nx.descendants(graph, node) - {node} - parents
- assert nx.d_separated(graph, {node}, non_descendants, parents)
+ assert nx.is_d_separator(graph, {node}, non_descendants, parents)
def test_path_graph_dsep(path_graph):
"""Example-based test of d-separation for path_graph."""
- assert nx.d_separated(path_graph, {0}, {2}, {1})
- assert not nx.d_separated(path_graph, {0}, {2}, {})
+ assert nx.is_d_separator(path_graph, {0}, {2}, {1})
+ assert not nx.is_d_separator(path_graph, {0}, {2}, set())
def test_fork_graph_dsep(fork_graph):
"""Example-based test of d-separation for fork_graph."""
- assert nx.d_separated(fork_graph, {1}, {2}, {0})
- assert not nx.d_separated(fork_graph, {1}, {2}, {})
+ assert nx.is_d_separator(fork_graph, {1}, {2}, {0})
+ assert not nx.is_d_separator(fork_graph, {1}, {2}, set())
def test_collider_graph_dsep(collider_graph):
"""Example-based test of d-separation for collider_graph."""
- assert nx.d_separated(collider_graph, {0}, {1}, {})
- assert not nx.d_separated(collider_graph, {0}, {1}, {2})
+ assert nx.is_d_separator(collider_graph, {0}, {1}, set())
+ assert not nx.is_d_separator(collider_graph, {0}, {1}, {2})
def test_naive_bayes_dsep(naive_bayes_graph):
"""Example-based test of d-separation for naive_bayes_graph."""
for u, v in combinations(range(1, 5), 2):
- assert nx.d_separated(naive_bayes_graph, {u}, {v}, {0})
- assert not nx.d_separated(naive_bayes_graph, {u}, {v}, {})
+ assert nx.is_d_separator(naive_bayes_graph, {u}, {v}, {0})
+ assert not nx.is_d_separator(naive_bayes_graph, {u}, {v}, set())
def test_asia_graph_dsep(asia_graph):
"""Example-based test of d-separation for asia_graph."""
- assert nx.d_separated(
+ assert nx.is_d_separator(
asia_graph, {"asia", "smoking"}, {"dyspnea", "xray"}, {"bronchitis", "either"}
)
- assert nx.d_separated(
+ assert nx.is_d_separator(
asia_graph, {"tuberculosis", "cancer"}, {"bronchitis"}, {"smoking", "xray"}
)
@@ -137,11 +172,11 @@ def test_undirected_graphs_are_not_supported():
"""
g = nx.path_graph(3, nx.Graph)
with pytest.raises(nx.NetworkXNotImplemented):
- nx.d_separated(g, {0}, {1}, {2})
+ nx.is_d_separator(g, {0}, {1}, {2})
with pytest.raises(nx.NetworkXNotImplemented):
nx.is_minimal_d_separator(g, {0}, {1}, {2})
with pytest.raises(nx.NetworkXNotImplemented):
- nx.minimal_d_separator(g, {0}, {1})
+ nx.find_minimal_d_separator(g, {0}, {1})
def test_cyclic_graphs_raise_error():
@@ -152,60 +187,128 @@ def test_cyclic_graphs_raise_error():
"""
g = nx.cycle_graph(3, nx.DiGraph)
with pytest.raises(nx.NetworkXError):
- nx.d_separated(g, {0}, {1}, {2})
+ nx.is_d_separator(g, {0}, {1}, {2})
with pytest.raises(nx.NetworkXError):
- nx.minimal_d_separator(g, 0, 1)
+ nx.find_minimal_d_separator(g, {0}, {1})
with pytest.raises(nx.NetworkXError):
- nx.is_minimal_d_separator(g, 0, 1, {2})
+ nx.is_minimal_d_separator(g, {0}, {1}, {2})
def test_invalid_nodes_raise_error(asia_graph):
"""
Test that graphs that have invalid nodes passed in raise errors.
"""
+ # Check both set and node arguments
+ with pytest.raises(nx.NodeNotFound):
+ nx.is_d_separator(asia_graph, {0}, {1}, {2})
+ with pytest.raises(nx.NodeNotFound):
+ nx.is_d_separator(asia_graph, 0, 1, 2)
+ with pytest.raises(nx.NodeNotFound):
+ nx.is_minimal_d_separator(asia_graph, {0}, {1}, {2})
with pytest.raises(nx.NodeNotFound):
- nx.d_separated(asia_graph, {0}, {1}, {2})
+ nx.is_minimal_d_separator(asia_graph, 0, 1, 2)
with pytest.raises(nx.NodeNotFound):
- nx.is_minimal_d_separator(asia_graph, 0, 1, {2})
+ nx.find_minimal_d_separator(asia_graph, {0}, {1})
with pytest.raises(nx.NodeNotFound):
- nx.minimal_d_separator(asia_graph, 0, 1)
+ nx.find_minimal_d_separator(asia_graph, 0, 1)
-def test_minimal_d_separator():
+def test_nondisjoint_node_sets_raise_error(collider_graph):
+ """
+ Test that error is raised when node sets aren't disjoint.
+ """
+ with pytest.raises(nx.NetworkXError):
+ nx.is_d_separator(collider_graph, 0, 1, 0)
+ with pytest.raises(nx.NetworkXError):
+ nx.is_d_separator(collider_graph, 0, 2, 0)
+ with pytest.raises(nx.NetworkXError):
+ nx.is_d_separator(collider_graph, 0, 0, 1)
+ with pytest.raises(nx.NetworkXError):
+ nx.is_d_separator(collider_graph, 1, 0, 0)
+ with pytest.raises(nx.NetworkXError):
+ nx.find_minimal_d_separator(collider_graph, 0, 0)
+ with pytest.raises(nx.NetworkXError):
+ nx.find_minimal_d_separator(collider_graph, 0, 1, included=0)
+ with pytest.raises(nx.NetworkXError):
+ nx.find_minimal_d_separator(collider_graph, 1, 0, included=0)
+ with pytest.raises(nx.NetworkXError):
+ nx.is_minimal_d_separator(collider_graph, 0, 0, set())
+ with pytest.raises(nx.NetworkXError):
+ nx.is_minimal_d_separator(collider_graph, 0, 1, set(), included=0)
+ with pytest.raises(nx.NetworkXError):
+ nx.is_minimal_d_separator(collider_graph, 1, 0, set(), included=0)
+
+
+def test_is_minimal_d_separator(
+ large_collider_graph,
+ chain_and_fork_graph,
+ no_separating_set_graph,
+ large_no_separating_set_graph,
+ collider_trek_graph,
+):
# Case 1:
# create a graph A -> B <- C
# B -> D -> E;
# B -> F;
# G -> E;
- edge_list = [("A", "B"), ("C", "B"), ("B", "D"), ("D", "E"), ("B", "F"), ("G", "E")]
- G = nx.DiGraph(edge_list)
- assert not nx.d_separated(G, {"B"}, {"E"}, set())
+ assert not nx.is_d_separator(large_collider_graph, {"B"}, {"E"}, set())
# minimal set of the corresponding graph
# for B and E should be (D,)
- Zmin = nx.minimal_d_separator(G, "B", "E")
-
- # the minimal separating set should pass the test for minimality
- assert nx.is_minimal_d_separator(G, "B", "E", Zmin)
+ Zmin = nx.find_minimal_d_separator(large_collider_graph, "B", "E")
+ # check that the minimal d-separator is a d-separating set
+ assert nx.is_d_separator(large_collider_graph, "B", "E", Zmin)
+ # the minimal separating set should also pass the test for minimality
+ assert nx.is_minimal_d_separator(large_collider_graph, "B", "E", Zmin)
+ # function should also work with set arguments
+ assert nx.is_minimal_d_separator(large_collider_graph, {"A", "B"}, {"G", "E"}, Zmin)
assert Zmin == {"D"}
# Case 2:
# create a graph A -> B -> C
# B -> D -> C;
- edge_list = [("A", "B"), ("B", "C"), ("B", "D"), ("D", "C")]
- G = nx.DiGraph(edge_list)
- assert not nx.d_separated(G, {"A"}, {"C"}, set())
- Zmin = nx.minimal_d_separator(G, "A", "C")
+ assert not nx.is_d_separator(chain_and_fork_graph, {"A"}, {"C"}, set())
+ Zmin = nx.find_minimal_d_separator(chain_and_fork_graph, "A", "C")
# the minimal separating set should pass the test for minimality
- assert nx.is_minimal_d_separator(G, "A", "C", Zmin)
+ assert nx.is_minimal_d_separator(chain_and_fork_graph, "A", "C", Zmin)
assert Zmin == {"B"}
-
Znotmin = Zmin.union({"D"})
- assert not nx.is_minimal_d_separator(G, "A", "C", Znotmin)
+ assert not nx.is_minimal_d_separator(chain_and_fork_graph, "A", "C", Znotmin)
+
+ # Case 3:
+ # create a graph A -> B
+
+ # there is no m-separating set between A and B at all, so
+ # no minimal m-separating set can exist
+ assert not nx.is_d_separator(no_separating_set_graph, {"A"}, {"B"}, set())
+ assert nx.find_minimal_d_separator(no_separating_set_graph, "A", "B") is None
+
+ # Case 4:
+ # create a graph A -> B with A <- C -> B
+
+ # there is no m-separating set between A and B at all, so
+ # no minimal m-separating set can exist
+ # however, the algorithm will initially propose C as a
+ # minimal (but invalid) separating set
+ assert not nx.is_d_separator(large_no_separating_set_graph, {"A"}, {"B"}, {"C"})
+ assert nx.find_minimal_d_separator(large_no_separating_set_graph, "A", "B") is None
+
+ # Test `included` and `excluded` args
+ # create graph A -> B <- C -> D
+ assert nx.find_minimal_d_separator(collider_trek_graph, "A", "D", included="B") == {
+ "B",
+ "C",
+ }
+ assert (
+ nx.find_minimal_d_separator(
+ collider_trek_graph, "A", "D", included="B", restricted="B"
+ )
+ is None
+ )
-def test_minimal_d_separator_checks_dsep():
+def test_is_minimal_d_separator_checks_dsep():
"""Test that is_minimal_d_separator checks for d-separation as well."""
g = nx.DiGraph()
g.add_edges_from(
@@ -221,8 +324,25 @@ def test_minimal_d_separator_checks_dsep():
]
)
- assert not nx.d_separated(g, {"C"}, {"F"}, {"D"})
+ assert not nx.is_d_separator(g, {"C"}, {"F"}, {"D"})
# since {'D'} and {} are not d-separators, we return false
assert not nx.is_minimal_d_separator(g, "C", "F", {"D"})
- assert not nx.is_minimal_d_separator(g, "C", "F", {})
+ assert not nx.is_minimal_d_separator(g, "C", "F", set())
+
+
+def test__reachable(large_collider_graph):
+ reachable = nx.algorithms.d_separation._reachable
+ g = large_collider_graph
+ x = {"F", "D"}
+ ancestors = {"A", "B", "C", "D", "F"}
+ assert reachable(g, x, ancestors, {"B"}) == {"B", "F", "D"}
+ assert reachable(g, x, ancestors, set()) == ancestors
+
+
+def test_deprecations():
+ G = nx.DiGraph([(0, 1), (1, 2)])
+ with pytest.deprecated_call():
+ nx.d_separated(G, 0, 2, {1})
+ with pytest.deprecated_call():
+ z = nx.minimal_d_separator(G, 0, 2)
| Minimal d-separator function does not handle cases where no d-separating set exists
The minimal d-separator function does not return a special value when no valid d-separating set exists.
### Current Behavior
Currently the code returns:
```
>>> import networkx as nx
>>> G = nx.DiGraph()
>>> G.add_edge("A", "B")
>>> nx.minimal_d_separator(G, "A", "B")
set()
```
but this implies that A is marginally independent of B in G, which is false.
Compare with
```
>>> import networkx as nx
>>> G = nx.DiGraph()
>>> G.add_edge("A", "B")
>>> G.add_edge("C", "B")
>>> nx.minimal_d_separator(G, "A", "C")
set()
```
and this empty set correctly implies that A is marginally independent of C.
### Expected Behavior
The function should return None, raise an error, or otherwise indicate that there is no valid d-separating set for A and B in the above graph, in order to distinguish from cases where the empty set is indeed a valid d-separating set.
### Steps to Reproduce
See above.
### Environment
Python version: 3.10.9
NetworkX version: 3.0
### Additional context
Related to discussions in #6247
| @dschult I can also handle this by adding a return `None` in this edge-case, unless @jaron-lee wants to take the PR?
That seems relatively straightforward; I'm happy to take a stab at it over the weekend.
I agree that returning `None` should indicate that there is no minimal separator. And returning an empty set means that the empty set is a d_separator.
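A minimal sketch of that contract, using the renamed `find_minimal_d_separator` from the patch above (the behavior shown is the proposed one, not the released one):
```python
import networkx as nx

G = nx.DiGraph([("A", "B")])                     # no d-separating set exists
print(nx.find_minimal_d_separator(G, "A", "B"))  # proposed: None

H = nx.DiGraph([("A", "B"), ("C", "B")])         # empty set d-separates A and C
print(nx.find_minimal_d_separator(H, "A", "C"))  # proposed: set()
```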
This changes (corrects) the current behavior, so a note should be added to the API changes section of `doc/release/release_dev.rst`. I don't think it should need a deprecation, since it is correcting a faulty previous behavior. But I'd like @rossbar to verify that I'm not missing something. | 2023-10-14T03:34:08 |
networkx/networkx | 7,024 | networkx__networkx-7024 | [
"7023"
] | 3b95b3b39e4bb57831e9611b820054218af15a71 | diff --git a/networkx/algorithms/connectivity/edge_augmentation.py b/networkx/algorithms/connectivity/edge_augmentation.py
--- a/networkx/algorithms/connectivity/edge_augmentation.py
+++ b/networkx/algorithms/connectivity/edge_augmentation.py
@@ -68,7 +68,7 @@ def is_k_edge_connected(G, k):
if k == 1:
return nx.is_connected(G)
elif k == 2:
- return not nx.has_bridges(G)
+ return nx.is_connected(G) and not nx.has_bridges(G)
else:
return nx.edge_connectivity(G, cutoff=k) >= k
| diff --git a/networkx/algorithms/connectivity/tests/test_edge_augmentation.py b/networkx/algorithms/connectivity/tests/test_edge_augmentation.py
--- a/networkx/algorithms/connectivity/tests/test_edge_augmentation.py
+++ b/networkx/algorithms/connectivity/tests/test_edge_augmentation.py
@@ -75,6 +75,11 @@ def test_is_k_edge_connected():
assert is_k_edge_connected(G, k=3)
assert is_k_edge_connected(G, k=4)
+ G = nx.compose(nx.complete_graph([0, 1, 2]), nx.complete_graph([3, 4, 5]))
+ assert not is_k_edge_connected(G, k=1)
+ assert not is_k_edge_connected(G, k=2)
+ assert not is_k_edge_connected(G, k=3)
+
def test_is_k_edge_connected_exceptions():
pytest.raises(
| `is_k_edge_connected` incorrectly returns True for k=2 with multi-component graphs without bridges
### Current Behavior
The implementation of `is_k_edge_connected` currently defers to `is_connected` for k=1, to `has_bridges` for k=2, and runs the full check for k>2.
As a result, this returns True for a multi-component graph without bridges.
### Expected Behavior
Since the graph has multiple components, it is not connected in the first place and `is_k_edge_connected` should return False for all (positive) values of k.
### Steps to Reproduce
```
import networkx as nx
G = nx.compose(nx.complete_graph([0, 1, 2]), nx.complete_graph([3, 4, 5]))
nx.connectivity.is_k_edge_connected(G, 2)
```
This returns True but should return False.
### Environment
Python version: 3.11.2
NetworkX version: 3.1
### Additional context
As defined in the NetworkX documentation, a bridge is an edge whose removal increases the number of connected components. It appears to me, then, that a graph not having bridges is a necessary-but-not-sufficient condition for a graph being 2-edge-connected.
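A minimal sketch of the corrected `k=2` check (mirroring the one-line change in the patch above):
```python
import networkx as nx

def is_2_edge_connected(G):
    # having no bridges is necessary but not sufficient:
    # the graph must also be connected in the first place
    return nx.is_connected(G) and not nx.has_bridges(G)

G = nx.compose(nx.complete_graph([0, 1, 2]), nx.complete_graph([3, 4, 5]))
print(is_2_edge_connected(G))  # False, as expected
```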
| 2023-10-17T16:13:05 |
|
networkx/networkx | 7,041 | networkx__networkx-7041 | [
"7038"
] | ce237b7d63920ddcf8eb749f6be4db42cf3a5f85 | diff --git a/networkx/algorithms/cluster.py b/networkx/algorithms/cluster.py
--- a/networkx/algorithms/cluster.py
+++ b/networkx/algorithms/cluster.py
@@ -72,7 +72,7 @@ def triangles(G, nodes=None):
# iterate over the nodes in a graph
for node, neighbors in G.adjacency():
later_neighbors[node] = {
- n for n in neighbors if n not in later_neighbors and n is not node
+ n for n in neighbors if n not in later_neighbors and n != node
}
# instantiate Counter for each node to include isolated nodes
diff --git a/networkx/algorithms/similarity.py b/networkx/algorithms/similarity.py
--- a/networkx/algorithms/similarity.py
+++ b/networkx/algorithms/similarity.py
@@ -1363,7 +1363,7 @@ def sim(u, v):
for its in range(max_iterations):
oldsim = newsim
- newsim = {u: {v: sim(u, v) if u is not v else 1 for v in G} for u in G}
+ newsim = {u: {v: sim(u, v) if u != v else 1 for v in G} for u in G}
is_close = all(
all(
abs(newsim[u][v] - old) <= tolerance * (1 + abs(old))
| could_be_isomorphic returns False for an isomorphic graph on 3.2
### Current Behavior
We have noticed a new failure in downstream testing using isomorphism checks when updating to networkx 3.2. We know that the graphs we check are isomorphic, yet `nx.could_be_isomorphic` started returning `False`, which I believe is a regression in the latest release.
### Expected Behavior
`nx.could_be_isomorphic` should return `True` for an isomorphic graph
### Steps to Reproduce
```py
import networkx as nx
left = nx.read_gexf("left.gexf")
right = nx.read_gexf("right.gexf")
assert nx.is_isomorphic(left, right)
nx.could_be_isomorphic(left, right) # should be True but is False
```
### Environment
Python version: 3.9 - 3.12
NetworkX version: 3.2
### Additional context
The gexf files:
[Archive.zip](https://github.com/networkx/networkx/files/13061063/Archive.zip)
| Thanks for reporting this!!
Is the `nxleft` and `nxright` in your example code (in the assert statement) supposed to be `left` and `right`?
Also, can you describe how you know these are isomorphic graphs?
Do you have the isomorphism? Or is that result based on previous networkx results?
> Is the nxleft and nxright in your example code (in the assert statement) supposed to be left and right?
Yes, sorry!
Both left and right come from the same edge list, only ids are remapped. It is the same graph and it is isomorphic as proved by the assertion. Something has changed in the `could_be_isomorphic` function that it started returning False (which is wrong).
I can verify that `could_be_isomorphic` and `fast_could_be_isomorphic` return apparently false results for this case. And I've traced it to the computation of `triangles`. One node is only connected to itself by a self-loop (node "46" in right which is isomorphic to node "8" in left). Those nodes are getting a different number of triangles in the two graphs when using `nx.triangles`. The self-loop is the only edge involving those nodes in the graphs.
`nx.triangles` was changed 3 months ago in #6258, so I think an error was introduced there -- something about self-loops and, even worse, something that counts the triangles differently depending on how the nodes are labeled.
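A small illustration of why identity (`is`) comparisons on node labels are fragile (this relies on CPython object-identity behavior and is only an assumed failure mode, not taken from the report):
```python
y = "46"                 # e.g. a node label read from a gexf file
x = "".join(["4", "6"])  # an equal label built at runtime
print(x == y)            # True  -- equality is what we mean
print(x is y)            # typically False -- identity can differ
# so a check like `n is not node` may wrongly treat a self-loop
# neighbor as a distinct node, while `n != node` behaves correctly
```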
| 2023-10-22T16:37:55 |
|
networkx/networkx | 7,062 | networkx__networkx-7062 | [
"7047"
] | 1c5272054f71f9484347f7e4246ae3d5da367f7b | diff --git a/networkx/conftest.py b/networkx/conftest.py
--- a/networkx/conftest.py
+++ b/networkx/conftest.py
@@ -53,7 +53,17 @@ def pytest_configure(config):
networkx.utils.backends._dispatch._fallback_to_nx = bool(fallback_to_nx)
# nx-loopback backend is only available when testing
backends = entry_points(name="nx-loopback", group="networkx.backends")
- networkx.utils.backends.backends["nx-loopback"] = next(iter(backends))
+ if backends:
+ networkx.utils.backends.backends["nx-loopback"] = next(iter(backends))
+ else:
+ warnings.warn(
+ "\n\n WARNING: Mixed NetworkX configuration! \n\n"
+ " This environment has mixed configuration for networkx.\n"
+ " The test object nx-loopback is not configured correctly.\n"
+ " You should not be seeing this message.\n"
+ " Try `pip install -e .`, or change your PYTHONPATH\n"
+ " Make sure python finds the networkx repo you are testing\n\n"
+ )
def pytest_collection_modifyitems(config, items):
diff --git a/networkx/lazy_imports.py b/networkx/lazy_imports.py
--- a/networkx/lazy_imports.py
+++ b/networkx/lazy_imports.py
@@ -95,7 +95,7 @@ def __getattr__(self, x):
f"No module named '{fd['spec']}'\n\n"
"This error is lazily reported, having originally occurred in\n"
f' File {fd["filename"]}, line {fd["lineno"]}, in {fd["function"]}\n\n'
- f'----> {"".join(fd["code_context"]).strip()}'
+ f'----> {"".join(fd["code_context"] or "").strip()}'
)
| backend nx-loopback not found when trying to run pytest
In some configurations (that I haven't been able to track down) I can't use pytest with networkx.
I've traced the problem to [line 55 in conftest.py](https://github.com/networkx/networkx/blob/46d67fecfec615215cfa8b26e5024ae743e5973a/networkx/conftest.py#L55) where the `nx-loopback` entry_point is obtained and stored in the backends dict. The problem arises when the `entry_points()` call returns an empty list. The next line then tries to iterate to find the first element of the list, but gets a StopIteration instead.
I'm thinking that we need to add a check there for finding no entry point and handle it in some way. But I don't know enough about entry points to figure out what to do. And I'm guessing a new developer trying to test their added function to networkx wouldn't know what to do either.
Do I have to do something with my environment to turn on the networkx entry_points? Do I have to turn on entry_points to run pytest on my local repo?
<details>
```
INTERNALERROR> Traceback (most recent call last):
INTERNALERROR> File "/Users/dschult/mambaforge/lib/python3.10/site-packages/_pytest/main.py", line 265, in wrap_session
INTERNALERROR> config._do_configure()
INTERNALERROR> File "/Users/dschult/mambaforge/lib/python3.10/site-packages/_pytest/config/__init__.py", line 1046, in _do_configure
INTERNALERROR> self.hook.pytest_configure.call_historic(kwargs=dict(config=self))
INTERNALERROR> File "/Users/dschult/mambaforge/lib/python3.10/site-packages/pluggy/_hooks.py", line 514, in call_historic
INTERNALERROR> res = self._hookexec(self.name, self._hookimpls, kwargs, False)
INTERNALERROR> File "/Users/dschult/mambaforge/lib/python3.10/site-packages/pluggy/_manager.py", line 115, in _hookexec
INTERNALERROR> return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
INTERNALERROR> File "/Users/dschult/mambaforge/lib/python3.10/site-packages/pluggy/_callers.py", line 113, in _multicall
INTERNALERROR> raise exception.with_traceback(exception.__traceback__)
INTERNALERROR> File "/Users/dschult/mambaforge/lib/python3.10/site-packages/pluggy/_callers.py", line 77, in _multicall
INTERNALERROR> res = hook_impl.function(*args)
INTERNALERROR> File "/Users/dschult/NX/NXmyshort/networkx/conftest.py", line 56, in pytest_configure
INTERNALERROR> networkx.utils.backends.backends["nx-loopback"] = next(iter(backends))
INTERNALERROR> StopIteration
```
</details>
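For what it's worth, the failing lookup can be reproduced directly (a sketch; selecting entry points by keyword this way needs Python 3.10+):
```python
from importlib.metadata import entry_points

# in a broken environment this selection comes back empty,
# so next(iter(...)) in conftest.py raises StopIteration
eps = entry_points(name="nx-loopback", group="networkx.backends")
print(list(eps))
```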
| > In some configurations (that I haven't been able to track down) I can't use pytest with networkx.
I've noticed this as well - one thing that may or may not be relevant is that re-running `pip install -e .` usually fixes the problem. Perhaps there's something related to the entry points not playing nice with editable installs?
After recreating this scenario a bunch, I have determined that my workflow runs into this when I have `PYTHONPATH` pointing to a pre-nx-loopback version of networkx while I am testing in a post-nx-loopback repo. This matches what @rossbar describes as testing one repo when another version is installed.
I think all is well with the following change:
```python
if backends:
nx.utils.backends.backends["nx-loopback"] = next(iter(backends))
```
No exception is raised at the load stage -- only once any test that involves the backend runs. (I get 5 tests failing in these conditions.)
If you think it is better, we can either give a warning (which prints before the tests are run),
or we could raise an exception, making us fix things before being able to test.
I'm slightly in favor of giving a warning. Something like:
```python
if backends:
nx.utils.backends.backends["nx-loopback"] = next(iter(backends))
else:
warnings.warn(
"\n\n WARNING: Mixed NetworkX configuration! \n\n"
" This environment has mixed configuration for networkx.\n"
" The test object nx-loopback is not configured correctly.\n"
" You should not be seeing this message.\n"
" Try `pip install -e .`, or change your PYTHONPATH\n"
" Make sure you are testing a repo that mirrors entry_points\n"
" of the networkx library found by python.\n\n"
)
```
But perhaps this is too long. :} | 2023-10-25T23:38:57 |
|
networkx/networkx | 7,130 | networkx__networkx-7130 | [
"3818"
] | f93f0e2a066fc456aa447853af9d00eec1058542 | diff --git a/networkx/algorithms/similarity.py b/networkx/algorithms/similarity.py
--- a/networkx/algorithms/similarity.py
+++ b/networkx/algorithms/similarity.py
@@ -322,7 +322,8 @@ def optimal_edit_paths(
edge_edit_path : list of tuples ((u1, v1), (u2, v2))
cost : numeric
- Optimal edit path cost (graph edit distance).
+ Optimal edit path cost (graph edit distance). When the cost
+ is zero, it indicates that `G1` and `G2` are isomorphic.
Examples
--------
@@ -334,6 +335,14 @@ def optimal_edit_paths(
>>> cost
5.0
+ Notes
+ -----
+ To transform `G1` into a graph isomorphic to `G2`, apply the node
+ and edge edits in the returned ``edit_paths``.
+ In the case of isomorphic graphs, the cost is zero, and the paths
+ represent different isomorphic mappings (isomorphisms). That is, the
+ edits involve renaming nodes and edges to match the structure of `G2`.
+
See Also
--------
graph_edit_distance, optimize_edit_paths
| I have a question about `nx.optimal_edit_paths(G1, G2)`.
I have been conducting an experiment on computing graph similarity.
I have a question about `nx.optimal_edit_paths(G1, G2)`.
In the [documentation of it](https://networkx.github.io/documentation/stable/reference/algorithms/generated/networkx.algorithms.similarity.optimal_edit_paths.html), it is written that this function **Returns all minimum-cost edit paths transforming G1 to G2**.
I use the function in my code and run it. My code is below.
Even though `G1` and `G2` are exactly the same graph and their graph_edit_cost is 0,
there are a lot of optimal_edit_paths.
```python
N = 3
G1 = nx.complete_graph(N)
G2 = nx.complete_graph(N)
edit_path, graph_edit_cost = nx.optimal_edit_paths(G1, G2)
print(f"graph edit cost: {graph_edit_cost}")
for p in edit_path:
node_edit, edge_edit = p
print(f"node_edit: {node_edit}")
print(f"edge_edit: {edge_edit}")
print("--"*30)
```
The output is below.
As I said before, graph_edit_cost is zero because `G1` and `G2` are exactly the same graph.
But there are lots of optimal paths when the paths are derived by `nx.optimal_edit_paths(G1, G2)`.
I don't know why this happened. It doesn't look 'optimal', because they don't have the same edit cost.
```
graph edit cost: 0.0
node_edit: [(0, 0), (1, 1), (2, 2)]
edge_edit: [((0, 1), (0, 1)), ((0, 2), (0, 2)), ((1, 2), (1, 2))]
------------------------------------------------------------
node_edit: [(0, 0), (2, 1), (1, 2)]
edge_edit: [((0, 2), (0, 1)), ((0, 1), (0, 2)), ((1, 2), (1, 2))]
------------------------------------------------------------
node_edit: [(1, 0), (0, 1), (2, 2)]
edge_edit: [((0, 1), (0, 1)), ((0, 2), (1, 2)), ((1, 2), (0, 2))]
------------------------------------------------------------
node_edit: [(1, 0), (2, 1), (0, 2)]
edge_edit: [((1, 2), (0, 1)), ((0, 1), (0, 2)), ((0, 2), (1, 2))]
------------------------------------------------------------
node_edit: [(2, 0), (0, 1), (1, 2)]
edge_edit: [((0, 2), (0, 1)), ((0, 1), (1, 2)), ((1, 2), (0, 2))]
------------------------------------------------------------
node_edit: [(2, 0), (1, 1), (0, 2)]
edge_edit: [((1, 2), (0, 1)), ((0, 1), (1, 2)), ((0, 2), (0, 2))]
------------------------------------------------------------
```
And also, if the function is correct, how can I transform `G1` to `G2` based on those operations (node_edit, edge_edit)? I don't know how those operations have to be applied to `G1` to make `G2`.
I always appreciate your help with this library.
Because of it, I've learned a lot about network science.
If you feel my text is rude or not polite, it is because of my lack of English.
Sorry about that.
| > If you feel my text is rude or not polite, it is because of my lack of english. sorry about that.
Nothing at all to be sorry about @frhyme , thanks for the question.
I'm not familiar at all with `optimal_edit_paths`, but I just wanted to call your attention to #4102 - maybe the discussion will prove useful.
This question is quite old, and I think a complete answer requires looking at the reference paper to see how they count the number of edits and in particular how they check for isomorphism with ```G2``` after the edits. But here's a partial answer:
The edit paths you report all have 3 node edits and 3 edge edits. That means the length of those two lists is 6 total for all of the edit paths. They are equivalent despite the first being the "identity path"(? a term I just made up).
But, as you point out, the two graphs are isomorphic, so the edit distance should be 0 if distance is the number of changes needed to make G1 isomorphic to G2.
I suspect the algorithm actually computes the number of changes needed to convert ```G1``` to ```G2```. Said another way, it doesn't count the 6 edits as part of the edit cost. In fact if you look closely, these 6 paths consist precisely of the isomorphism mappings from ```G1``` to ```G2```. They are simply renaming the nodes and edges and thus have no cost.
This question shows that the documentation needs more information about what is returned here (i.e. what is an edit path?). That should be sufficient to close this Question.
It sounds like the documentation should be improved along the lines discussed above in order to close this one. | 2023-11-24T16:37:37 |
|
networkx/networkx | 7,141 | networkx__networkx-7141 | [
"4386"
] | 862b0e95d2008c7e032fbdb907168927c24b4255 | diff --git a/networkx/algorithms/triads.py b/networkx/algorithms/triads.py
--- a/networkx/algorithms/triads.py
+++ b/networkx/algorithms/triads.py
@@ -178,6 +178,11 @@ def triadic_census(G, nodelist=None):
This algorithm has complexity $O(m)$ where $m$ is the number of edges in
the graph.
+ For undirected graphs, the triadic census can be computed by first converting
+ the graph into a directed graph using the ``G.to_directed()`` method.
+ After this conversion, only the triad types 003, 102, 201 and 300 will be
+ present in the undirected scenario.
+
Raises
------
ValueError
| Triadic Census for Undirected Graph
Is there any specific reason (e.g., computational expense) why the [triads algorithms](https://networkx.org/documentation/stable/reference/algorithms/triads.html) are not implemented for undirected graphs? From what I understand, there would be only 4 types of triads possible in an undirected graph, and the calculation would be similar to how it's done for directed graphs. Please let me know if I am missing something here. Also, can I please go ahead with the implementation for the triad algorithms? :)
| There is no specific reason not to do it for undirected graphs. Go for it. :}
@dschult I would like to know whether we can modify the documentation of triads, as the given documentation is written for directed graphs but the function can also be used for undirected ones.
We need to add more than documentation changes to make this work for undirected graphs. The types of triads are not the same for directed and undirected. As the OP in this Issue mentions, there are 4 types of undirected triads possible. So the code would be similar to what we've got, but different. Another question is whether anyone is interested in undirected triads. I haven't seen any literature about it, but I haven't looked either. :)
There seems to be a way to develop a triadic census for undirected graphs (referred to very briefly in [these ucinet help pages](http://www.analytictech.com/ucinet/help/hs4335.htm)). It essentially says: convert the graph to directed using `G.to_directed()` and then run the existing algorithm. Only 4 triad types will show up in the census -- the ones that consist only of double-direction edges.
Given that the literature on this idea/concept is essentially non-existent, and that what is there amounts to a very simple way of using the current function, I think the best approach to handling this issue is to put a one-line comment in the docstring saying that creating a triadic census for an undirected graph is best done by converting the graph to a directed graph and then running the provided function. Maybe also mention that only the following triads will arise: 003, 102, 201 and 300.
This change should be easy to implement.
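A sketch of the suggested recipe:
```python
import networkx as nx

G = nx.karate_club_graph()                    # any undirected graph
census = nx.triadic_census(G.to_directed())   # the conversion trick
# only the mutual-edge triad types can occur for undirected input
print({k: v for k, v in census.items() if v})  # keys among 003/102/201/300
```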
@dschult I think this would work
` Note: For undirected networks, create a triadic census by converting the graph to a directed graph using G.to_directed().
Only the triad types 003, 102, 201, and 300 will arise in the undirected case.`
Hello everyone,
I hope this message finds you well. I am interested in contributing to this issue and wanted to see if there is anyone working on it. If not, I would be happy to take care of it and send a pull request. Please let me know if someone is already working on it.
Thank you!
go for it. :) | 2023-12-05T10:37:36 |
|
networkx/networkx | 7,171 | networkx__networkx-7171 | [
"7147"
] | f34dda2c924924edb6a350c62420fb0187ab150f | diff --git a/networkx/algorithms/centrality/laplacian.py b/networkx/algorithms/centrality/laplacian.py
--- a/networkx/algorithms/centrality/laplacian.py
+++ b/networkx/algorithms/centrality/laplacian.py
@@ -50,8 +50,11 @@ def laplacian_centrality(
walk_type : string or None, optional (default=None)
Optional parameter `walk_type` used when calling
:func:`directed_laplacian_matrix <networkx.directed_laplacian_matrix>`.
- If None, the transition matrix is selected depending on the properties
- of the graph. Otherwise can be `random`, `lazy`, or `pagerank`.
+ One of ``"random"``, ``"lazy"``, or ``"pagerank"``. If ``walk_type=None``
+ (the default), then a value is selected according to the properties of `G`:
+ - ``walk_type="random"`` if `G` is strongly connected and aperiodic
+ - ``walk_type="lazy"`` if `G` is strongly connected but not aperiodic
+ - ``walk_type="pagerank"`` for all other cases.
alpha : real (default = 0.95)
Optional parameter `alpha` used when calling
diff --git a/networkx/linalg/laplacianmatrix.py b/networkx/linalg/laplacianmatrix.py
--- a/networkx/linalg/laplacianmatrix.py
+++ b/networkx/linalg/laplacianmatrix.py
@@ -219,8 +219,11 @@ def directed_laplacian_matrix(
If None, then each edge has weight 1.
walk_type : string or None, optional (default=None)
- If None, `P` is selected depending on the properties of the
- graph. Otherwise is one of 'random', 'lazy', or 'pagerank'
+ One of ``"random"``, ``"lazy"``, or ``"pagerank"``. If ``walk_type=None``
+ (the default), then a value is selected according to the properties of `G`:
+ - ``walk_type="random"`` if `G` is strongly connected and aperiodic
+ - ``walk_type="lazy"`` if `G` is strongly connected but not aperiodic
+ - ``walk_type="pagerank"`` for all other cases.
alpha : real
(1 - alpha) is the teleportation probability used with pagerank
@@ -307,8 +310,11 @@ def directed_combinatorial_laplacian_matrix(
If None, then each edge has weight 1.
walk_type : string or None, optional (default=None)
- If None, `P` is selected depending on the properties of the
- graph. Otherwise is one of 'random', 'lazy', or 'pagerank'
+ One of ``"random"``, ``"lazy"``, or ``"pagerank"``. If ``walk_type=None``
+ (the default), then a value is selected according to the properties of `G`:
+ - ``walk_type="random"`` if `G` is strongly connected and aperiodic
+ - ``walk_type="lazy"`` if `G` is strongly connected but not aperiodic
+ - ``walk_type="pagerank"`` for all other cases.
alpha : real
(1 - alpha) is the teleportation probability used with pagerank
@@ -372,8 +378,11 @@ def _transition_matrix(G, nodelist=None, weight="weight", walk_type=None, alpha=
If None, then each edge has weight 1.
walk_type : string or None, optional (default=None)
- If None, `P` is selected depending on the properties of the
- graph. Otherwise is one of 'random', 'lazy', or 'pagerank'
+ One of ``"random"``, ``"lazy"``, or ``"pagerank"``. If ``walk_type=None``
+ (the default), then a value is selected according to the properties of `G`:
+ - ``walk_type="random"`` if `G` is strongly connected and aperiodic
+ - ``walk_type="lazy"`` if `G` is strongly connected but not aperiodic
+ - ``walk_type="pagerank"`` for all other cases.
alpha : real
(1 - alpha) is the teleportation probability used with pagerank
| Document walk_type in directed_laplacian and friends
The current documentation of the `walk_type` argument to the various directed Laplacian matrix functions is opaque, and the logic is hidden within the actual code because it occurs in a helper function, presumably so that the logic is the same for all these functions. The logic should be fairly straightforward to explain. For the directed graph case it is given in `linalg.laplacianmatrix._transition_matrix` as
```python
if walk_type is None:
if nx.is_strongly_connected(G):
if nx.is_aperiodic(G):
walk_type = "random"
else:
walk_type = "lazy"
else:
walk_type = "pagerank"
```
We should put a prose description of this into the `walk_type` parameter documentation.
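For reference, a sketch of what the resolved default means in a call (the graph is chosen so it is not strongly connected, so per the logic above the default should resolve to pagerank):
```python
import networkx as nx

G = nx.DiGraph([(0, 1), (1, 2), (2, 1)])   # node 0 is unreachable from 1 and 2
# walk_type=None resolves to "pagerank" here, i.e. equivalent to:
L = nx.directed_laplacian_matrix(G, walk_type="pagerank", alpha=0.95)
```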
| I want to work on this issue. Kindly assign it to me.
You may submit a PR or a comment here if you have further questions.
We are small enough that we don't assign PRs. :)
Thanks!! | 2023-12-17T15:45:15 |
|
networkx/networkx | 7,204 | networkx__networkx-7204 | [
"5723"
] | 73c655270ee22ae06b60c5a6c4a03d1fdf202fc2 | diff --git a/networkx/conftest.py b/networkx/conftest.py
--- a/networkx/conftest.py
+++ b/networkx/conftest.py
@@ -89,9 +89,6 @@ def pytest_collection_modifyitems(config, items):
# TODO: The warnings below need to be dealt with, but for now we silence them.
@pytest.fixture(autouse=True)
def set_warnings():
- warnings.filterwarnings(
- "ignore", category=DeprecationWarning, message="nx.nx_pydot"
- )
warnings.filterwarnings(
"ignore",
category=FutureWarning,
diff --git a/networkx/drawing/nx_pydot.py b/networkx/drawing/nx_pydot.py
--- a/networkx/drawing/nx_pydot.py
+++ b/networkx/drawing/nx_pydot.py
@@ -19,7 +19,6 @@
- Graphviz: https://www.graphviz.org
- DOT Language: http://www.graphviz.org/doc/info/lang.html
"""
-import warnings
from locale import getpreferredencoding
import networkx as nx
@@ -41,13 +40,6 @@ def write_dot(G, path):
Path can be a string or a file handle.
"""
- msg = (
- "nx.nx_pydot.write_dot depends on the pydot package, which has "
- "known issues and is not actively maintained. Consider using "
- "nx.nx_agraph.write_dot instead.\n\n"
- "See https://github.com/networkx/networkx/issues/5723"
- )
- warnings.warn(msg, DeprecationWarning, stacklevel=2)
P = to_pydot(G)
path.write(P.to_string())
return
@@ -79,14 +71,6 @@ def read_dot(path):
"""
import pydot
- msg = (
- "nx.nx_pydot.read_dot depends on the pydot package, which has "
- "known issues and is not actively maintained. Consider using "
- "nx.nx_agraph.read_dot instead.\n\n"
- "See https://github.com/networkx/networkx/issues/5723"
- )
- warnings.warn(msg, DeprecationWarning, stacklevel=2)
-
data = path.read()
# List of one or more "pydot.Dot" instances deserialized from this file.
@@ -120,12 +104,6 @@ def from_pydot(P):
>>> G = nx.Graph(nx.nx_pydot.from_pydot(A))
"""
- msg = (
- "nx.nx_pydot.from_pydot depends on the pydot package, which has "
- "known issues and is not actively maintained.\n\n"
- "See https://github.com/networkx/networkx/issues/5723"
- )
- warnings.warn(msg, DeprecationWarning, stacklevel=2)
if P.get_strict(None): # pydot bug: get_strict() shouldn't take argument
multiedges = False
@@ -220,13 +198,6 @@ def to_pydot(N):
"""
import pydot
- msg = (
- "nx.nx_pydot.to_pydot depends on the pydot package, which has "
- "known issues and is not actively maintained.\n\n"
- "See https://github.com/networkx/networkx/issues/5723"
- )
- warnings.warn(msg, DeprecationWarning, stacklevel=2)
-
# set Graphviz graph type
if N.is_directed():
graph_type = "digraph"
@@ -348,14 +319,6 @@ def graphviz_layout(G, prog="neato", root=None):
-----
This is a wrapper for pydot_layout.
"""
- msg = (
- "nx.nx_pydot.graphviz_layout depends on the pydot package, which has "
- "known issues and is not actively maintained. Consider using "
- "nx.nx_agraph.graphviz_layout instead.\n\n"
- "See https://github.com/networkx/networkx/issues/5723"
- )
- warnings.warn(msg, DeprecationWarning, stacklevel=2)
-
return pydot_layout(G=G, prog=prog, root=root)
@@ -399,12 +362,6 @@ def pydot_layout(G, prog="neato", root=None):
"""
import pydot
- msg = (
- "nx.nx_pydot.pydot_layout depends on the pydot package, which has "
- "known issues and is not actively maintained.\n\n"
- "See https://github.com/networkx/networkx/issues/5723"
- )
- warnings.warn(msg, DeprecationWarning, stacklevel=2)
P = to_pydot(G)
if root is not None:
P.set("root", str(root))
| diff --git a/networkx/drawing/tests/test_pydot.py b/networkx/drawing/tests/test_pydot.py
--- a/networkx/drawing/tests/test_pydot.py
+++ b/networkx/drawing/tests/test_pydot.py
@@ -11,7 +11,6 @@
pydot = pytest.importorskip("pydot")
[email protected]
class TestPydot:
def pydot_checks(self, G, prog):
"""
| Deprecate pydot?
In https://github.com/networkx/networkx/pull/5721, we added a `PendingDeprecationWarning` for `nx.nx_pydot.*`. It has a number of issues and isn't actively maintained. We are also planning to improve installation issues for pygraphviz. This is a tracking issue to gather feedback from users.
Please leave a comment if you have feelings either way about this change.
| > We are also planning to improve installation issues for pygraphviz.
Do you mean providing wheel package for pygraphviz? I know I had colleagues having trouble building/installing the package on MacOS. That would be awesome!
I looked at the PR mentioned, and it looks like the deprecation warning for `nx.nx_pydot.pydot_layout` doesn't suggest a replacement. Is there a way to use pygraphviz via networkx for generating graph layouts? If not, please don't deprecate yet - those layout generators are seriously useful (e.g. the spring layouts are much better than the native networkx ones, as far as I can tell).
Ah, I see `nx.nx_agraph.pygraphviz_layout` exists - perhaps those could be added to the deprecation message?
On Windows, it's really hard to get pygraphviz to work. I very much prefer a lightweight tool that only outputs a Graphviz dotfile and then lets me run the Graphviz binary from a script to render diagrams from it myself. I'm not sure what the issues with pydot are, but from my perspective as a user, pygraphviz and pydot may overlap somewhat but still fulfill two different needs. If pydot support actually gets removed I'd be SOL wrt network visualization.
Thanks for this comment!
As you may have gathered, our concern with pydot is that it doesn't seem to be maintained anymore. We've started to have bug reports that might just never be fixed. We've been working pretty hard on getting pygraphviz to be easier to install on Windows. But there are understandable limitations when using a program with such a storied history like Graphviz.
Our support for pydot depends on comments like this one. So thank you.
Umh, I just want to render my graph/network from a Python script to the terminal as an inline image. How do I do that 😅?
<!-- anyone has any experience with using mermaid from within python? -->
My trail for reaching here:
* DeprecationWarning: nx.nx_pydot.to_pydot depends on the pydot package, which hasknown issues and is not actively maintained. See https://github.com/networkx/networkx/issues/5723
* https://github.com/pydot/pydot/blob/90936e75462c7b0e4bb16d97c1ae7efdf04e895c/README.md?plain=true#L116
<-- https://www.graphviz.org/resources/#python
<-- https://networkx.org/documentation/stable/reference/drawing.html
<-- https://www.geeksforgeeks.org/visualize-graphs-in-python/
<-- [nwks](https://duckduckgo.com/?q=visualise+networks+in+python&t=vivaldi&ia=web) <-- gfg article <-- [grphs](https://duckduckgo.com/?q=how+to+draw+graphs+in+python&t=vivaldi&ia=web) <-- ddg 🔍
* https://pypi.org/project/imgcat/
<-- ddg 🔍 : [python iterm2 image protocol](https://duckduckgo.com/?q=python+iterm2+image+protocol&t=vivaldi&ia=web)
----
> [NX's] main goal is to enable graph analysis **rather than perform graph visualization**. In the future, graph visualization functionality **may be removed** from NetworkX or only available as an add-on package.
Proper graph visualization is hard, and we highly recommend that people visualize their graphs **with tools dedicated** to that task.
> \- [NX/Reference/Drawing](https://networkx.org/documentation/stable/reference/drawing.html)
_hiding myself as offtopic_
----
> I'd be SOL wrt network visualization.
SOL = Simply Out of Luck?
https://www.acronymfinder.com/Slang/SOL.html
Are you saying you want to render the graph in the terminal window? (we have ascii representations of graphs #5602 )
If you mean you want to render the graph from a terminal-based script, we will continue to support `pygraphviz` which creates dot files. We are definitely not cutting off support for interaction with GraphViz.
And if you don't need the graphviz features, the matplotlib-based tools we offer will continue in the long term. So, basic image creation is supported. But we aren't able to support all the features of a full-fledged graph visualization tool.
Perhaps we should update the comments in the NX/Reference/Drawing section to make it more clear that basic drawing will continue to be available via networkx. :}
> we have ascii representations of graphs #5602
the text based thing is nice, i wanted to experiment with that too. thanks for sharing it.
> Are you saying you want to render the graph in the terminal window?
nope, i was saying that
- to render an image file from graph (i.e. static image, not in a separate window like `matplotlib.plot.show()` which opens an interactive widget
- the rest of the part is covered thereafter by <tt>[imgcat]</tt> module which you don't have to worry about 😃
[imgcat]: https://pypi.org/project/imgcat/
Oh, I
- missed sharing one part of why I dabbled into the weeds of this `networkx.drawing.nx_pydot.to_pydot(my_networkx_graph)` method
----
_Have shared the following at: https://github.com/networkx/networkx/discussions/6638 . Please add your input related to this over there. Thanks 😃_
<details>
- 'ts that the `ms-python.vscode-pylance` (python LS for VSCode) is showing errors with networkx.draw related functions
~~(wait, i am getting the details of it)~~ _see below_
> "draw" is not a known member of module "networkx" Pylance [reportGeneralTypeIssues](https://github.com/microsoft/pyright/blob/main/docs/configuration.md#reportGeneralTypeIssues)
(function) draw: Unknown
> "draw_networkx" is not a known member of module "networkx" Pylance [reportGeneralTypeIssues](https://github.com/microsoft/pyright/blob/main/docs/configuration.md#reportGeneralTypeIssues)
(function) draw_networkx: Unknown
</details>
I believe matplotlib can save the figure without drawing it interactively.
Maybe use `matplotlib.pyplot.savefig` instead of `matplotlib.pyplot.show`. There may be other ways too. They definitely allow image file creation without any interactive UI.
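A minimal non-interactive sketch:
```python
import matplotlib
matplotlib.use("Agg")              # offscreen backend: no window ever opens
import matplotlib.pyplot as plt
import networkx as nx

G = nx.petersen_graph()
nx.draw(G)                         # draws into the current (offscreen) figure
plt.savefig("graph.png")           # write a static image instead of plt.show()
```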
Just to put a good word in for leaving pydot around for a while, it works on in-browser systems like Jupyter Lite, whereas pygraphviz would not be possible to use there without [additional effort](https://github.com/pygraphviz/pygraphviz/issues/453) compiling a [WASM version](https://github.com/pygraphviz/pygraphviz/issues/319).
> it works on in-browser systems like Jupyter Lite, whereas pygraphviz would not be possible
This is true since `pydot` is pure Python, though it's worth noting that:
1) You'd only be able to produce dot files - converting to actual image formats (e.g. `.png`) would require graphviz to be installed (not sure how difficult this is for in-browser systems), and
2) It "works" in the sense that it is installable, but there are many defects related to updates to `pyparsing` that pydot never incorporated as it has been unmaintained for nearly 2 years.
@jarrodmillman @MridulS @rossbar A new pydot version has been released which should fix the breaking issues!
I'm not sure what's the current state of pydot in networkx, but I'm just letting you know that if you'd want to bring it back, you could try 😄 | 2024-01-04T02:44:27 |
networkx/networkx | 7,224 | networkx__networkx-7224 | [
"7223"
] | fd3cadc48c7277d12f2a27a03b2acb7b9783f783 | diff --git a/networkx/algorithms/planarity.py b/networkx/algorithms/planarity.py
--- a/networkx/algorithms/planarity.py
+++ b/networkx/algorithms/planarity.py
@@ -1385,3 +1385,16 @@ def is_directed(self):
contained.
"""
return False
+
+ def copy(self, as_view=False):
+ if as_view is True:
+ return nx.graphviews.generic_graph_view(self)
+ G = self.__class__()
+ G.graph.update(self.graph)
+ G.add_nodes_from((n, d.copy()) for n, d in self._node.items())
+ super(self.__class__, G).add_edges_from(
+ (u, v, datadict.copy())
+ for u, nbrs in self._adj.items()
+ for v, datadict in nbrs.items()
+ )
+ return G
| `PlanarEmbedding.copy()` raising `NotImplementedError` after PR #6798
`PlanarEmbedding` inherits the method `copy()` from `nx.Graph`. However, the inherited implementation uses the method `add_edges_from()`, which was forbidden by PR #6798.
### Current Behavior
```python
embedding = nx.PlanarEmbedding({1: {2: {"cw": 2, "ccw": 2}},
2: {1: {"cw": 1, "ccw": 1}}})
emb2 = embedding.copy()
```
```
NotImplementedError: Use `add_half_edge` method to add edges to a PlanarEmbedding.
```
### Environment
Python version: 3.11
NetworkX version: commit fd3cadc (main)
### Additional context
PR #6798 was written by me to solve a bug, but the blocking of inherited edge-adding methods was requested by @dschult.
I think this block is quite likely to break code in unforeseen places. The safer approach would be to extend these methods in PlanarEmbedding and display a deprecation warning for a while (I can't work on that at the moment).
The specific problem with `copy()` can be resolved by the PR below.
| 2024-01-13T17:11:27 |
||
networkx/networkx | 7,229 | networkx__networkx-7229 | [
"7228"
] | b2f98a4cc9d32fbf8f598b5335519e33e0bc3db2 | diff --git a/networkx/algorithms/operators/product.py b/networkx/algorithms/operators/product.py
--- a/networkx/algorithms/operators/product.py
+++ b/networkx/algorithms/operators/product.py
@@ -128,7 +128,7 @@ def tensor_product(G, H):
r"""Returns the tensor product of G and H.
The tensor product $P$ of the graphs $G$ and $H$ has a node set that
- is the tensor product of the node sets, $V(P)=V(G) \times V(H)$.
+ is the Cartesian product of the node sets, $V(P)=V(G) \times V(H)$.
$P$ has an edge $((u,v), (x,y))$ if and only if $(u,x)$ is an edge in $G$
and $(v,y)$ is an edge in $H$.
| Typo in Tensor Product Documentation
### Current Behavior
Currently, in the documentation, the definition of the node set of the product says it is the **tensor** product of the node sets. This is incorrect, as the tensor product is not applicable to sets and does not correspond to the standard mathematical definition.
### Expected Behavior
The **Cartesian** product of the node set provides the correct (and implemented) mathematical operation.
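A quick check on the smallest non-trivial case:
```python
import networkx as nx

G, H = nx.path_graph(2), nx.path_graph(2)   # each a single edge
P = nx.tensor_product(G, H)
print(sorted(P.nodes()))  # [(0, 0), (0, 1), (1, 0), (1, 1)] -- Cartesian product of node sets
print(sorted(P.edges()))  # two edges: ((0, 0), (1, 1)) and ((0, 1), (1, 0))
```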
| 2024-01-15T20:30:33 |
||
networkx/networkx | 7,254 | networkx__networkx-7254 | [
"7253"
] | d6569adf7c224a9c85c0e4e4c92435f772f73582 | diff --git a/networkx/algorithms/connectivity/edge_kcomponents.py b/networkx/algorithms/connectivity/edge_kcomponents.py
--- a/networkx/algorithms/connectivity/edge_kcomponents.py
+++ b/networkx/algorithms/connectivity/edge_kcomponents.py
@@ -505,17 +505,24 @@ def _high_degree_components(G, k):
@nx._dispatchable
def general_k_edge_subgraphs(G, k):
- """General algorithm to find all maximal k-edge-connected subgraphs in G.
+ """General algorithm to find all maximal k-edge-connected subgraphs in `G`.
- Returns
- -------
- k_edge_subgraphs : a generator of nx.Graphs that are k-edge-subgraphs
- Each k-edge-subgraph is a maximal set of nodes that defines a subgraph
- of G that is k-edge-connected.
+ Parameters
+ ----------
+ G : nx.Graph
+ Graph in which all maximal k-edge-connected subgraphs will be found.
+
+ k : int
+
+ Yields
+ ------
+ k_edge_subgraphs : Graph instances that are k-edge-subgraphs
+ Each k-edge-subgraph contains a maximal set of nodes that defines a
+ subgraph of `G` that is k-edge-connected.
Notes
-----
- Implementation of the basic algorithm from _[1]. The basic idea is to find
+ Implementation of the basic algorithm from [1]_. The basic idea is to find
a global minimum cut of the graph. If the cut value is at least k, then the
graph is a k-edge-connected subgraph and can be added to the results.
Otherwise, the cut is used to split the graph in two and the procedure is
@@ -524,7 +531,7 @@ def general_k_edge_subgraphs(G, k):
a single node or a subgraph of G that is k-edge-connected.
This implementation contains optimizations for reducing the number of calls
- to max-flow, but there are other optimizations in _[1] that could be
+ to max-flow, but there are other optimizations in [1]_ that could be
implemented.
References
@@ -547,7 +554,7 @@ def general_k_edge_subgraphs(G, k):
... (14, 101, 24),
... ]
>>> G = nx.Graph(it.chain(*[pairwise(path) for path in paths]))
- >>> sorted(map(len, k_edge_subgraphs(G, k=3)))
+ >>> sorted(len(k_sg) for k_sg in k_edge_subgraphs(G, k=3))
[1, 1, 1, 4, 4]
"""
if k < 1:
| Return type of `general_k_edge_subgraphs` is incorrect in docstring
In networkx 3.2.1, the `general_k_edge_subgraphs` return type in the docstring says this:
```
k_edge_subgraphs : a generator of nx.Graphs that are k-edge-subgraphs
Each k-edge-subgraph is a maximal set of nodes that defines a subgraph
of G that is k-edge-connected.
```
However, it actually _yields_ (not returns) _graphs_, not sets of nodes.
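A quick demonstration (importing from the defining module to be safe about the public namespace):
```python
import networkx as nx
from networkx.algorithms.connectivity.edge_kcomponents import general_k_edge_subgraphs

G = nx.barbell_graph(4, 0)   # two K4's joined by a single bridge
for sg in general_k_edge_subgraphs(G, k=3):
    print(type(sg).__name__, sorted(sg.nodes()))   # Graph objects, not node sets
```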
| 2024-01-29T17:44:20 |
||
networkx/networkx | 7,255 | networkx__networkx-7255 | [
"7252"
] | 2da36864c5899cbc55fc98c96635390dd96d5f88 | diff --git a/networkx/generators/nonisomorphic_trees.py b/networkx/generators/nonisomorphic_trees.py
--- a/networkx/generators/nonisomorphic_trees.py
+++ b/networkx/generators/nonisomorphic_trees.py
@@ -19,22 +19,20 @@ def nonisomorphic_trees(order, create="graph"):
Parameters
----------
order : int
- order of the desired tree(s)
-
- create : graph or matrix (default="Graph)
- If graph is selected a list of trees will be returned,
- if matrix is selected a list of adjacency matrix will
- be returned
-
- Returns
- -------
- G : List of NetworkX Graphs
-
- M : List of Adjacency matrices
-
- References
- ----------
-
+ order of the desired tree(s)
+
+ create : one of {"graph", "matrix"} (default="graph")
+ If ``"graph"`` is selected a list of ``Graph`` instances will be returned,
+ if matrix is selected a list of adjacency matrices will be returned.
+
+ Yields
+ ------
+ list
+ A list of nonisomorphic trees, in one of two formats depending on the
+ value of the `create` parameter:
+ - ``create="graph"``: yields a list of `networkx.Graph` instances
+ - ``create="matrix"``: yields a list of list-of-lists representing
+ adjacency matrices
"""
if order < 2:
| Docstring of `nonisomorphic_trees` should use `Yields`, not `Returns`
In NetworkX 3.2.1, `nonisomorphic_trees` yields graphs or adjacency matrices, so the docstring should use a `Yields` section, but it uses `Returns`.
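A short sketch of the behavior in question:
```python
import networkx as nx

gen = nx.nonisomorphic_trees(4)
print(gen)              # <generator object ...>, so this yields rather than returns
print(type(next(gen)))  # a networkx Graph instance with the default create="graph"
```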
| 2024-01-29T19:14:05 |
||
networkx/networkx | 7,293 | networkx__networkx-7293 | [
"7284"
] | 74ce63f222df0189c1c9ae425b5b7748111d17f9 | diff --git a/networkx/drawing/nx_pylab.py b/networkx/drawing/nx_pylab.py
--- a/networkx/drawing/nx_pylab.py
+++ b/networkx/drawing/nx_pylab.py
@@ -858,9 +858,6 @@ def draw_networkx_edges(
raise TypeError("Argument `arrows` must be of type bool or None")
use_linecollection = not arrows
- if arrowstyle is None:
- arrowstyle = "-|>" if G.is_directed() else "-"
-
if isinstance(connectionstyle, str):
connectionstyle = [connectionstyle]
elif np.iterable(connectionstyle):
@@ -897,6 +894,10 @@ def draw_networkx_edges(
msg.format("connectionstyle"), category=UserWarning, stacklevel=2
)
+ # NOTE: Arrowstyle modification must occur after the warnings section
+ if arrowstyle is None:
+ arrowstyle = "-|>" if G.is_directed() else "-"
+
if ax is None:
ax = plt.gca()
| diff --git a/networkx/drawing/tests/test_pylab.py b/networkx/drawing/tests/test_pylab.py
--- a/networkx/drawing/tests/test_pylab.py
+++ b/networkx/drawing/tests/test_pylab.py
@@ -833,3 +833,15 @@ def test_user_warnings_for_unused_edge_drawing_kwargs(fap_only_kwarg):
nx.draw_networkx_edges(G, pos, ax=ax, arrows=True, **fap_only_kwarg)
plt.delaxes(ax)
+
+
[email protected]("draw_fn", (nx.draw, nx.draw_circular))
+def test_no_warning_on_default_draw_arrowstyle(draw_fn):
+ # See gh-7284
+ fig, ax = plt.subplots()
+ G = nx.cycle_graph(5)
+ with warnings.catch_warnings(record=True) as w:
+ draw_fn(G, ax=ax)
+ assert len(w) == 0
+
+ plt.delaxes(ax)
| Visualization: arrowstyle warning is very noisy
Follow-up to #7010
The logic determining when the `arrowstyle` UserWarning is raised changed in #7010, with the unintended side-effect of the warning being raised far more often. For example, it is now raised during the doc build for all of the simple drawings of the classic generators.
The warning logic should be reviewed and modified so that the warning is only raised when appropriate.
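A minimal repro of the noisy case (undirected graph, all-default arguments; this is essentially what the gallery drawings do):
```python
import warnings
import matplotlib
matplotlib.use("Agg")  # headless backend, just for the repro
import matplotlib.pyplot as plt
import networkx as nx

G = nx.cycle_graph(5)
with warnings.catch_warnings(record=True) as w:
    warnings.simplefilter("always")
    nx.draw(G)
print([str(x.message) for x in w])  # the arrowstyle UserWarning shows up here
plt.close("all")
```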
| This might be "just" a matter of setting a better stacklevel for the warnings. They didn't have a stacklevel before.
@dschult I think you're right - there was a stacklevel prior to #7010, but due to the nested function definitions, it was not correct :upside_down_face: . In other words, this warning *should've* been raised based on the previous logic, but wasn't because of stacklevel - though the raising itself was a bug!
The real cause of the problem is that [the check for raising the warning](https://github.com/networkx/networkx/blob/74ce63f222df0189c1c9ae425b5b7748111d17f9/networkx/drawing/nx_pylab.py#L883) checks that `arrowstyle is not None`, but the top-level drawing functions `draw_networkx` and `draw` (which call `draw_networkx_edges` under-the-hood) have `arrowstyle="-"` as the default.
I think the best way to fix this while maintaining the default behavior for all the drawing functions is to modify the `arrowstyle` warning check to `is not None or != "-"`. I'm going to give this a stab and see if it actually makes sense! | 2024-02-13T23:11:25 |
networkx/networkx | 7,316 | networkx__networkx-7316 | [
"7256"
] | de85e3fe52879f819e7a7924474fc6be3994e8e4 | diff --git a/networkx/conftest.py b/networkx/conftest.py
--- a/networkx/conftest.py
+++ b/networkx/conftest.py
@@ -148,6 +148,9 @@ def set_warnings():
warnings.filterwarnings(
"ignore", category=DeprecationWarning, message="\n\nk_corona"
)
+ warnings.filterwarnings(
+ "ignore", category=DeprecationWarning, message=r"\n\nThe 'create=matrix'"
+ )
@pytest.fixture(autouse=True)
diff --git a/networkx/generators/nonisomorphic_trees.py b/networkx/generators/nonisomorphic_trees.py
--- a/networkx/generators/nonisomorphic_trees.py
+++ b/networkx/generators/nonisomorphic_trees.py
@@ -25,6 +25,15 @@ def nonisomorphic_trees(order, create="graph"):
If ``"graph"`` is selected a list of ``Graph`` instances will be returned,
if matrix is selected a list of adjacency matrices will be returned.
+ .. deprecated:: 3.3
+
+ The `create` argument is deprecated and will be removed in NetworkX
+ version 3.5. In the future, `nonisomorphic_trees` will yield graph
+ instances by default. To generate adjacency matrices, call
+ ``nx.to_numpy_array`` on the output, e.g.::
+
+ [nx.to_numpy_array(G) for G in nx.nonisomorphic_trees(N)]
+
Yields
------
list
@@ -45,6 +54,20 @@ def nonisomorphic_trees(order, create="graph"):
if create == "graph":
yield _layout_to_graph(layout)
elif create == "matrix":
+ import warnings
+
+ warnings.warn(
+ (
+ "\n\nThe 'create=matrix' argument of nonisomorphic_trees\n"
+ "is deprecated and will be removed in version 3.5.\n"
+ "Use ``nx.to_numpy_array`` to convert graphs to adjacency "
+ "matrices, e.g.::\n\n"
+ " [nx.to_numpy_array(G) for G in nx.nonisomorphic_trees(N)]"
+ ),
+ category=DeprecationWarning,
+ stacklevel=2,
+ )
+
yield _layout_to_matrix(layout)
layout = _next_rooted_tree(layout)
| diff --git a/networkx/generators/tests/test_nonisomorphic_trees.py b/networkx/generators/tests/test_nonisomorphic_trees.py
--- a/networkx/generators/tests/test_nonisomorphic_trees.py
+++ b/networkx/generators/tests/test_nonisomorphic_trees.py
@@ -1,10 +1,8 @@
"""
-====================
-Generators - Non Isomorphic Trees
-====================
-
Unit tests for WROM algorithm generator in generators/nonisomorphic_trees.py
"""
+import pytest
+
import networkx as nx
from networkx.utils import edges_equal
@@ -54,11 +52,16 @@ def f(x):
def test_nonisomorphic_trees_matrix(self):
trees_2 = [[[0, 1], [1, 0]]]
- assert list(nx.nonisomorphic_trees(2, create="matrix")) == trees_2
+ with pytest.deprecated_call():
+ assert list(nx.nonisomorphic_trees(2, create="matrix")) == trees_2
+
trees_3 = [[[0, 1, 1], [1, 0, 0], [1, 0, 0]]]
- assert list(nx.nonisomorphic_trees(3, create="matrix")) == trees_3
+ with pytest.deprecated_call():
+ assert list(nx.nonisomorphic_trees(3, create="matrix")) == trees_3
+
trees_4 = [
[[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 0], [1, 0, 0, 0]],
[[0, 1, 1, 1], [1, 0, 0, 0], [1, 0, 0, 0], [1, 0, 0, 0]],
]
- assert list(nx.nonisomorphic_trees(4, create="matrix")) == trees_4
+ with pytest.deprecated_call():
+ assert list(nx.nonisomorphic_trees(4, create="matrix")) == trees_4
| `nonisomorphic_trees` create argument
The `nonisomorphic_trees` graph generator has [an argument](https://github.com/networkx/networkx/blob/2da36864c5899cbc55fc98c96635390dd96d5f88/networkx/generators/nonisomorphic_trees.py#L16) named `create` which toggles whether graphs are returned as either nx.Graph instances, or adjacency matrices in list-of-list format.
From a quick review of this function, I'm wondering if it might not be worth deprecating this `create` argument. For one thing, it is very similar to `create_using=`, which is ubiquitous among the graph generation functions and has a very different meaning! Also I'm not sure that returning adjacency matrices as `list` should even be an option - if anything I'd expect users to want adjacency matrices in a form more suitable for efficient computation, i.e. numpy arrays. The latter would be possible with something like `nx.to_numpy_array(G) for G in nx.nonisomorphic_trees(order)`, so there is a straightforward way to get the (arguably more desirable) behavior after a deprecation.
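A minimal sketch of that replacement pattern:
```python
import networkx as nx

order = 4  # there are exactly two nonisomorphic trees on 4 nodes
mats = [nx.to_numpy_array(G) for G in nx.nonisomorphic_trees(order)]
print(len(mats), mats[0].shape)  # 2 (4, 4)
```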
| 2024-02-27T22:35:34 |
|
networkx/networkx | 7,319 | networkx__networkx-7319 | [
"7291"
] | 051ffc1f75d63ff685141e8afc131a0e03325091 | diff --git a/networkx/readwrite/graphml.py b/networkx/readwrite/graphml.py
--- a/networkx/readwrite/graphml.py
+++ b/networkx/readwrite/graphml.py
@@ -1010,9 +1010,10 @@ def decode_data_elements(self, graphml_keys, obj_xml):
edge_label = data_element.find(f"{pref}EdgeLabel")
if edge_label is not None:
break
-
if edge_label is not None:
data["label"] = edge_label.text
+ elif text is None:
+ data[data_name] = ""
return data
def find_graphml_keys(self, graph_element):
| diff --git a/networkx/readwrite/tests/test_graphml.py b/networkx/readwrite/tests/test_graphml.py
--- a/networkx/readwrite/tests/test_graphml.py
+++ b/networkx/readwrite/tests/test_graphml.py
@@ -1505,3 +1505,27 @@ def test_exception_for_unsupported_datatype_graph_attr():
fh = io.BytesIO()
with pytest.raises(TypeError, match="GraphML does not support"):
nx.write_graphml(G, fh)
+
+
+def test_empty_attribute():
+ """Tests that a GraphML string with an empty attribute can be parsed
+ correctly."""
+ s = """<?xml version='1.0' encoding='utf-8'?>
+ <graphml>
+ <key id="d1" for="node" attr.name="foo" attr.type="string"/>
+ <key id="d2" for="node" attr.name="bar" attr.type="string"/>
+ <graph>
+ <node id="0">
+ <data key="d1">aaa</data>
+ <data key="d2">bbb</data>
+ </node>
+ <node id="1">
+ <data key="d1">ccc</data>
+ <data key="d2"></data>
+ </node>
+ </graph>
+ </graphml>"""
+ fh = io.BytesIO(s.encode("UTF-8"))
+ G = nx.read_graphml(fh)
+ assert G.nodes["0"] == {"foo": "aaa", "bar": "bbb"}
+ assert G.nodes["1"] == {"foo": "ccc", "bar": ""}
| Empty GraphML attribute is not shown with networkx
### Current Behavior
Currently, networkx does not show (GraphML) attributes that are empty.
Here is a little example.
```xml
<?xml version='1.0' encoding='utf-8'?>
<graphml>
<key id="d1" for="node" attr.name="foo" attr.type="string"/>
<key id="d2" for="node" attr.name="bar" attr.type="string"/>
<graph>
<node id="0">
<data key="d1">aaa</data>
<data key="d2">bbb</data>
</node>
<node id="1">
<data key="d1">ccc</data>
<data key="d2"></data>
</node>
</graph>
</graphml>
```
If I load the file, node 0 shows both attributes while node 1 only shows the attribute `foo`.
```python
G = nx.read_graphml("example.graphml")
G.nodes().get('0')
# {'foo': 'aaa', 'bar': 'bbb'}
G.nodes().get('1')
# {'foo': 'ccc'}
```
### Expected Behavior
I would expect that node 1 shows attribute `bar` containing an empty string.
### Environment
Python version: Python 3.11.6
NetworkX version: networkx==3.2.1
### Additional context
This bug appeared during an analysis of the Brave pagegraph, [see here](https://github.com/brave/brave-browser/issues/35970).
| 2024-02-29T04:07:19 |
|
networkx/networkx | 7,324 | networkx__networkx-7324 | [
"7301"
] | c339da9b0e3c54990624b2b2af7b97af7491eef8 | diff --git a/examples/geospatial/delaunay.py b/examples/geospatial/plot_delaunay.py
similarity index 96%
rename from examples/geospatial/delaunay.py
rename to examples/geospatial/plot_delaunay.py
--- a/examples/geospatial/delaunay.py
+++ b/examples/geospatial/plot_delaunay.py
@@ -58,7 +58,11 @@
# Now, we can plot with a nice basemap.
ax = cells.plot(facecolor="lightblue", alpha=0.50, edgecolor="cornsilk", linewidth=2)
-add_basemap(ax)
+try: # Try-except for issues with timeout/parsing failures in CI
+ add_basemap(ax)
+except:
+ pass
+
ax.axis("off")
nx.draw(
delaunay_graph,
diff --git a/examples/geospatial/lines.py b/examples/geospatial/plot_lines.py
similarity index 94%
rename from examples/geospatial/lines.py
rename to examples/geospatial/plot_lines.py
--- a/examples/geospatial/lines.py
+++ b/examples/geospatial/plot_lines.py
@@ -76,7 +76,10 @@
for i, facet in enumerate(ax):
facet.set_title(("Streets", "Graph")[i])
facet.axis("off")
- add_basemap(facet)
+ try: # For issues with downloading/parsing in CI
+ add_basemap(facet)
+ except:
+ pass
nx.draw(
G_primal, {n: [n[0], n[1]] for n in list(G_primal.nodes)}, ax=ax[1], node_size=50
)
@@ -92,7 +95,10 @@
for i, facet in enumerate(ax):
facet.set_title(("Streets", "Graph")[i])
facet.axis("off")
- add_basemap(facet)
+ try: # For issues with downloading/parsing in CI
+ add_basemap(facet)
+ except:
+ pass
nx.draw(G_dual, {n: [n[0], n[1]] for n in list(G_dual.nodes)}, ax=ax[1], node_size=50)
plt.show()
diff --git a/examples/geospatial/plot_points.py b/examples/geospatial/plot_points.py
--- a/examples/geospatial/plot_points.py
+++ b/examples/geospatial/plot_points.py
@@ -51,7 +51,10 @@
f, ax = plt.subplots(1, 2, figsize=(8, 4))
for i, facet in enumerate(ax):
cases.plot(marker=".", color="orangered", ax=facet)
- add_basemap(facet)
+ try: # For issues with downloading/parsing basemaps in CI
+ add_basemap(facet)
+ except:
+ pass
facet.set_title(("KNN-3", "50-meter Distance Band")[i])
facet.axis("off")
nx.draw(knn_graph, positions, ax=ax[0], node_size=5, node_color="b")
| contextily.add_basemap failing in some geospatial gallery examples
Follow-up issue to #7299 - the `contextily.add_basemap` call has been intermittently failing in some of the geospatial gallery examples, e.g. `plot_lines.py` and `plot_delaunay.py`. These have been temporarily renamed to stop them from running during the doc build to avoid CI blockers (#7299). I haven't looked into it in detail, but my initial guess would be that it has to do with a network timeout or some other issue accessing resources over the web. A full traceback can be found at the [end of this log](https://github.com/networkx/networkx/actions/runs/7916641025/job/21610909161#step:7:3889).
@ljwolf I know this isn't specifically a geopandas issue, but is this something you've encountered before?
#7299 should be reverted once a fix is found!
| I haven't seen this error before, but it looks like something easy to replicate/fix. I'll take a look. Also, tagging @martinfleis in case he can get to it sooner than me!
That is the same issue that showed up in our CI today https://github.com/geopandas/contextily/actions/runs/7909639251/job/21591004622 It seems to be caused by some network issue and, in our case, simply re-triggering CI fixed it.
Have you encountered the issue in any other case than that single time linked above?
> It seems to be caused by some network issue and, in our case, simply re-triggering CI fixed it.
Yeah that's what I suspected.
> Have you encountered the issue in any other case than that single time linked above?
Unfortunately yes. It happens less frequently in the GitHub workflows but has been pretty consistent on CircleCI. Based on your experience it seems like we might need to tweak a timeout somewhere to make this more reliable... I'll look into it when I get a chance. Thanks for the connections & info @ljwolf @martinfleis !
I've seen this one for the first time today, so I'm not entirely sure what's causing it. The easiest solution may be changing the source of the basemap that is used. If it is caused by the server that is serving the tiles, a different one might be a solution.
Are you aware of any difference between CircleCI and GHA that could cause this?
> Are you aware of any difference between CircleCI and GHA that could cause this?
No, not concretely though different runtime limits might have an effect. In further testing I was able to also intermittently reproduce it locally so CI configuration is not the root cause.
I'll have a look at whether we can build some additional robustness into contextily. We do have retries on requests, but if the request returns invalid data like in this case, we do not try to fetch the tile again. I suppose we should.
|
networkx/networkx | 7,327 | networkx__networkx-7327 | [
"7315"
] | 00e3c4c4ecad08bff17b894f6d39bcc96a3c583f | diff --git a/networkx/algorithms/shortest_paths/unweighted.py b/networkx/algorithms/shortest_paths/unweighted.py
--- a/networkx/algorithms/shortest_paths/unweighted.py
+++ b/networkx/algorithms/shortest_paths/unweighted.py
@@ -117,7 +117,7 @@ def single_target_shortest_path_length(G, target, cutoff=None):
Examples
--------
>>> G = nx.path_graph(5, create_using=nx.DiGraph())
- >>> length = nx.single_target_shortest_path_length(G, 4)
+ >>> length = dict(nx.single_target_shortest_path_length(G, 4))
>>> length[0]
4
>>> for node in range(5):
@@ -151,7 +151,7 @@ def single_target_shortest_path_length(G, target, cutoff=None):
nextlevel = [target]
# for version 3.3 we will return a dict like this:
# return dict(_single_shortest_path_length(adj, nextlevel, cutoff))
- return dict(_single_shortest_path_length(adj, nextlevel, cutoff))
+ return _single_shortest_path_length(adj, nextlevel, cutoff)
@nx._dispatchable
| diff --git a/networkx/algorithms/shortest_paths/tests/test_unweighted.py b/networkx/algorithms/shortest_paths/tests/test_unweighted.py
--- a/networkx/algorithms/shortest_paths/tests/test_unweighted.py
+++ b/networkx/algorithms/shortest_paths/tests/test_unweighted.py
@@ -92,9 +92,9 @@ def test_single_target_shortest_path(self):
def test_single_target_shortest_path_length(self):
pl = nx.single_target_shortest_path_length
lengths = {0: 0, 1: 1, 2: 2, 3: 3, 4: 3, 5: 2, 6: 1}
- assert pl(self.cycle, 0) == lengths
+ assert dict(pl(self.cycle, 0)) == lengths
lengths = {0: 0, 1: 6, 2: 5, 3: 4, 4: 3, 5: 2, 6: 1}
- assert pl(self.directed_cycle, 0) == lengths
+ assert dict(pl(self.directed_cycle, 0)) == lengths
# test missing targets
target = 8
with pytest.raises(nx.NodeNotFound, match=f"Target {target} is not in G"):
| `single_target_shortest_path_length` on main returns dict, not iterator
There seem to be inconsistencies introduced in #6584 and #7161. For example, for `single_target_shortest_path_length`, the warning says that it will return a dict (not an iterator) starting in version 3.5, but it returns a dict _today_.
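A minimal check on the main branch:
```python
import networkx as nx

G = nx.path_graph(5)
result = nx.single_target_shortest_path_length(G, 4)
print(type(result))  # dict on main (despite the warning), generator in 3.2.1
```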
| The warning message is:
```
single_target_shortest_path_length will return a dict instead of
an iterator in version 3.5
```
but the return statement is:
```python
return dict(_single_shortest_path_length(adj, nextlevel, cutoff))
```
For context: in the latest released version (3.2.1) it still returns a generator:
```python
In [1]: import networkx as nx
In [2]: nx.__version__
Out[2]: '3.2.1'
In [3]: G = nx.path_graph(10)
In [4]: nx.single_target_shortest_path_length(G, 0, 9)
Out[4]: <generator object _single_shortest_path_length at 0x771b69589d80>
```
so this inconsistency is only on the development branch and should be rectified before the next release! | 2024-03-03T19:28:08 |
networkx/networkx | 7,328 | networkx__networkx-7328 | [
"7235"
] | 19e6bc740319bcd428334d555525890a4c88ec73 | diff --git a/doc/conf.py b/doc/conf.py
--- a/doc/conf.py
+++ b/doc/conf.py
@@ -54,6 +54,7 @@
"image_scrapers": ("matplotlib",),
"matplotlib_animations": True,
"plot_gallery": "True",
+ "reference_url": {"sphinx_gallery": None},
}
# Add pygraphviz png scraper, if available
try:
diff --git a/examples/algorithms/plot_image_segmentation_spectral_graph_partiion.py b/examples/algorithms/plot_image_segmentation_spectral_graph_partiion.py
--- a/examples/algorithms/plot_image_segmentation_spectral_graph_partiion.py
+++ b/examples/algorithms/plot_image_segmentation_spectral_graph_partiion.py
@@ -1,12 +1,16 @@
"""
-=====================================================
+==================================================
Image Segmentation via Spectral Graph Partitioning
-=====================================================
-Example of partitioning a undirected graph obtained by `k-neighbors`
+==================================================
+
+Example of partitioning a undirected graph obtained by ``k-neighbors``
from an RGB image into two subgraphs using spectral clustering
illustrated by 3D plots of the original labeled data points in RGB 3D space
vs the bi-partition marking performed by graph partitioning via spectral clustering.
-All 3D plots and animations use the 3D spectral layout.
+All 3D plots use the 3D spectral layout.
+
+See :ref:`sphx_glr_auto_examples_3d_drawing` for recipes to create 3D animations
+from these visualizations.
"""
import numpy as np
import networkx as nx
@@ -15,7 +19,7 @@
from matplotlib.lines import Line2D
from sklearn.cluster import SpectralClustering
-# sphinx_gallery_thumbnail_number = 4
+# sphinx_gallery_thumbnail_number = 3
###############################################################################
# Create an example 3D dataset "The Rings".
@@ -129,40 +133,6 @@ def _scatter_plot(ax, X, array_of_markers, axis_plot=True):
plt.show()
-###############################################################################
-# Generate the rotating animation of the clustered data.
-# ------------------------------------------------------
-# The data points are marked according to clustering and rotated
-# in the 3D animation.
-
-
-def _init():
- ax.clear()
- _scatter_plot(ax, X, array_of_markers)
- ax.grid(False)
- ax.set_axis_off()
- ax.view_init(elev=6.0, azim=-22.0)
-
-
-def _frame_update(index):
- ax.view_init(6.0 + index * 0.2, -22.0 + index * 0.5)
-
-
-fig = plt.figure(layout="tight")
-ax = fig.add_subplot(111, projection="3d")
-ax.grid(False)
-ax.set_axis_off()
-ani = animation.FuncAnimation(
- fig,
- _frame_update,
- init_func=_init,
- interval=50,
- cache_frame_data=False,
- frames=100,
-)
-
-plt.show()
-
###############################################################################
# Generate the plots of the graph.
@@ -217,29 +187,3 @@ def _3d_graph_plot(ax):
_3d_graph_plot(ax1)
plt.tight_layout()
plt.show()
-
-###############################################################################
-# Generate the rotating 3D animation of the graph.
-# ------------------------------------------------
-# The nodes of the graph are marked according to clustering.
-# The graph is rotated in the 3D animation.
-
-
-def _frame_update(index):
- ax.view_init(100.0 + index * 0.7, -100.0 + index * 0.5)
-
-
-fig = plt.figure(layout="tight")
-ax = fig.add_subplot(111, projection="3d")
-ax.grid(False)
-ax.set_axis_off()
-_3d_graph_plot(ax)
-ani = animation.FuncAnimation(
- fig,
- _frame_update,
- interval=50,
- cache_frame_data=False,
- frames=100,
-)
-
-plt.show()
| Gallery: spectral graph partition very slow
Just opening this issue to track a performance issue I noticed with the docs.
The image segmentation and spectral graph example added in #7040 is extremely slow - it takes about 100s to run on a fast machine (ryzen 5950X) and is even worse in CI, taking nearly 4 minutes per doc build to run the example.
This should be investigated ASAP to reduce CI burden. There are multiple 3D animations in the example - I suspect switching them to static images would resolve the issue, though there will have to be a bit of profiling to verify that the animation(s) are the bottleneck and not the analysis itself.
| 2024-03-03T20:06:10 |
||
networkx/networkx | 7,329 | networkx__networkx-7329 | [
"7153"
] | 00e3c4c4ecad08bff17b894f6d39bcc96a3c583f | diff --git a/networkx/algorithms/coloring/greedy_coloring.py b/networkx/algorithms/coloring/greedy_coloring.py
--- a/networkx/algorithms/coloring/greedy_coloring.py
+++ b/networkx/algorithms/coloring/greedy_coloring.py
@@ -20,7 +20,6 @@
]
-@nx._dispatchable
def strategy_largest_first(G, colors):
"""Returns a list of the nodes of ``G`` in decreasing order by
degree.
@@ -32,7 +31,6 @@ def strategy_largest_first(G, colors):
@py_random_state(2)
-@nx._dispatchable
def strategy_random_sequential(G, colors, seed=None):
"""Returns a random permutation of the nodes of ``G`` as a list.
@@ -47,7 +45,6 @@ def strategy_random_sequential(G, colors, seed=None):
return nodes
-@nx._dispatchable
def strategy_smallest_last(G, colors):
"""Returns a deque of the nodes of ``G``, "smallest" last.
@@ -121,7 +118,6 @@ def _maximal_independent_set(G):
return result
-@nx._dispatchable
def strategy_independent_set(G, colors):
"""Uses a greedy independent set removal strategy to determine the
colors.
@@ -146,7 +142,6 @@ def strategy_independent_set(G, colors):
yield from nodes
-@nx._dispatchable
def strategy_connected_sequential_bfs(G, colors):
"""Returns an iterable over nodes in ``G`` in the order given by a
breadth-first traversal.
@@ -160,7 +155,6 @@ def strategy_connected_sequential_bfs(G, colors):
return strategy_connected_sequential(G, colors, "bfs")
-@nx._dispatchable
def strategy_connected_sequential_dfs(G, colors):
"""Returns an iterable over nodes in ``G`` in the order given by a
depth-first traversal.
@@ -174,7 +168,6 @@ def strategy_connected_sequential_dfs(G, colors):
return strategy_connected_sequential(G, colors, "dfs")
-@nx._dispatchable
def strategy_connected_sequential(G, colors, traversal="bfs"):
"""Returns an iterable over nodes in ``G`` in the order given by a
breadth-first or depth-first traversal.
@@ -207,7 +200,6 @@ def strategy_connected_sequential(G, colors, traversal="bfs"):
yield end
-@nx._dispatchable
def strategy_saturation_largest_first(G, colors):
"""Iterates over all the nodes of ``G`` in "saturation order" (also
known as "DSATUR").
| `strategy_saturation_largest_first` endless generator
### Current Behavior
`strategy_saturation_largest_first` generator does not terminate. See the stopping condition here:
https://github.com/networkx/networkx/blob/9cc8b422a512e7e7819238d597fd6815ec9c1c8e/networkx/algorithms/coloring/greedy_coloring.py#L243
but neither `G` nor `colors` is updated within the loop! The result is that the same `node` is yielded repeatedly.
### Expected Behavior
I would expect the generator to terminate. I don't grok this function, but an endless generator doesn't seem right.
### Steps to Reproduce
```python
import networkx as nx
G = nx.empty_graph()
G.add_node(1)
list(nx.algorithms.coloring.greedy_coloring.strategy_saturation_largest_first(G, {}))
```
### Environment
Python version: 3.10
NetworkX version: dev
### Additional context
I (of course) encountered this while tinkering with dispatching machinery!
| The design seems to be that changes to the `colors` dict [occur between the yield](https://github.com/networkx/networkx/blob/9cc8b422a512e7e7819238d597fd6815ec9c1c8e/networkx/algorithms/coloring/greedy_coloring.py#L381) and the `next` call to the generator. The generator does not have control over that logic. But the calling function does.
Note also that it is "ok" to have a generator that never terminates (e.g. `all_prime_numbers()`). Is there a better way to handle this coding situation? Or maybe I should also ask -- what problem does this cause for dispatching?
> Or maybe I should also ask -- what problem does this cause for dispatching?
I need to explicitly exclude the function when creating the "dependencies between dispatched networkx functions" graph here
https://gist.github.com/eriknw/6017eec12e3dc61fbfba191144b0ae6b#file-get_func_deps-diff
otherwise tests never finish.
So, yeah, I'm doing something pretty hacky, so if this behavior is expected, then feel free to close. It would seem that consuming the iterator like I do in the patch above has unintended (and surprising to me) side-effects. These side-effects don't result in "stalled code" in normal use.
For example, this script stalls when the above patch is applied, but runs fine without the patch:
```python
import networkx as nx
G = nx.Graph()
G.add_node(1)
nx.greedy_color(G, strategy='saturation_largest_first', interchange=False)
```
Thanks for taking a look and educating me!
I think this points out how interrelated this function is to the calling function. It might make sense to make these strategy functions private. But they have been public for quite a long time.
Could we create a new category of functions -- those that are public but for which the dispatching is not appropriate?
Also, why don't the other strategy functions give this same kind of trouble? It looks like none of them are "infinite generators". But we should support infinite generators (IMO)... I think.....???
> Could we create a new category of functions -- those that are public but for which the dispatching is not appropriate?
Sure... should we simply remove `@nx._dispatch` from these functions?
A quick ping on this one - IMO the best course of action is to *not* dispatch the `strategy_` functions. AFAICT, those are callables that are intended only for explicit use with `greedy_coloring`. | 2024-03-03T20:15:05 |
|
networkx/networkx | 7,332 | networkx__networkx-7332 | [
"7286"
] | 7e97d4c040a39cc4cbe22b2ab78c10f028610bb9 | diff --git a/networkx/algorithms/similarity.py b/networkx/algorithms/similarity.py
--- a/networkx/algorithms/similarity.py
+++ b/networkx/algorithms/similarity.py
@@ -20,6 +20,7 @@
from itertools import product
import networkx as nx
+from networkx.utils import np_random_state
__all__ = [
"graph_edit_distance",
@@ -1662,9 +1663,10 @@ def panther_similarity(
return top_k_with_val
+@np_random_state(5)
@nx._dispatchable(edge_attrs="weight")
def generate_random_paths(
- G, sample_size, path_length=5, index_map=None, weight="weight"
+ G, sample_size, path_length=5, index_map=None, weight="weight", seed=None
):
"""Randomly generate `sample_size` paths of length `path_length`.
@@ -1685,6 +1687,9 @@ def generate_random_paths(
weight : string or None, optional (default="weight")
The name of an edge attribute that holds the numerical value
used as a weight. If None then each edge has weight 1.
+ seed : integer, random_state, or None (default)
+ Indicator of random number generation state.
+ See :ref:`Randomness<randomness>`.
Returns
-------
@@ -1718,6 +1723,10 @@ def generate_random_paths(
"""
import numpy as np
+ randint_fn = (
+ seed.integers if isinstance(seed, np.random.Generator) else seed.randint
+ )
+
# Calculate transition probabilities between
# every pair of vertices according to Eq. (3)
adj_mat = nx.to_numpy_array(G, weight=weight)
@@ -1729,7 +1738,7 @@ def generate_random_paths(
for path_index in range(sample_size):
# Sample current vertex v = v_i uniformly at random
- node_index = np.random.randint(0, high=num_nodes)
+ node_index = randint_fn(num_nodes)
node = node_map[node_index]
# Add v into p_r and add p_r into the path set
@@ -1747,7 +1756,7 @@ def generate_random_paths(
for _ in range(path_length):
# Randomly sample a neighbor (v_j) according
# to transition probabilities from ``node`` (v) to its neighbors
- nbr_index = np.random.choice(
+ nbr_index = seed.choice(
num_nodes, p=transition_probabilities[starting_index]
)
| diff --git a/networkx/algorithms/tests/test_similarity.py b/networkx/algorithms/tests/test_similarity.py
--- a/networkx/algorithms/tests/test_similarity.py
+++ b/networkx/algorithms/tests/test_similarity.py
@@ -845,8 +845,6 @@ def test_panther_similarity_isolated(self):
nx.panther_similarity(G, source=1)
def test_generate_random_paths_unweighted(self):
- np.random.seed(42)
-
index_map = {}
num_paths = 10
path_length = 2
@@ -857,7 +855,7 @@ def test_generate_random_paths_unweighted(self):
G.add_edge(1, 2)
G.add_edge(2, 4)
paths = nx.generate_random_paths(
- G, num_paths, path_length=path_length, index_map=index_map
+ G, num_paths, path_length=path_length, index_map=index_map, seed=42
)
expected_paths = [
[3, 0, 3],
| `generate_random_paths` does not have a `seed` argument
The `nx.generate_random_paths` iterator creates paths of specified length from `G`. This function is not seeded, so there is currently no way to reproduce results between runs.
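A quick sketch of the non-reproducibility:
```python
import networkx as nx

G = nx.complete_graph(5)
# Two identical calls; without a seed argument the outputs usually differ:
print(list(nx.generate_random_paths(G, sample_size=2, path_length=3)))
print(list(nx.generate_random_paths(G, sample_size=2, path_length=3)))
```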
| @rossbar I had a quick go at implementing this - changing the signature to:
```python
@py_random_state(5)
@nx._dispatchable(edge_attrs="weight")
def generate_random_paths(
    G, sample_size, path_length=5, index_map=None, weight="weight", seed=None
):
```
The convention in the codebase seems to be to use the numpy-backed `random.Random` subclass `PythonRandomViaNumpyBits` for new code. This was simple enough to achieve, but I had to substitute `np.random.choice` with `seed.choices` to get the same weighted choice behaviour. However, this generates different choices than `np.random.choice`, I guess through a different (probably less efficient) implementation that calls `seed.getrandbits`.
This would mean that user code that uses a set random seed would break when upgrading. Would we be okay with that? Or should we try and guarantee the state is the same as the current implementation if the global numpy RNG is seeded (this might be very difficult)? | 2024-03-05T00:30:47 |
networkx/networkx | 7,335 | networkx__networkx-7335 | [
"5913"
] | 91337d52979b7673651267cd8e6c6d44c489b5df | diff --git a/networkx/algorithms/approximation/traveling_salesman.py b/networkx/algorithms/approximation/traveling_salesman.py
--- a/networkx/algorithms/approximation/traveling_salesman.py
+++ b/networkx/algorithms/approximation/traveling_salesman.py
@@ -683,7 +683,9 @@ def direction_of_ascent():
a_eq[n_count][arb_count] = deg - 2
n_count -= 1
a_eq[len(G)][arb_count] = 1
- program_result = optimize.linprog(c, A_eq=a_eq, b_eq=b_eq)
+ program_result = optimize.linprog(
+ c, A_eq=a_eq, b_eq=b_eq, method="highs-ipm"
+ )
# If the constants exist, then the direction of ascent doesn't
if program_result.success:
# There is no direction of ascent
| diff --git a/networkx/algorithms/approximation/tests/test_traveling_salesman.py b/networkx/algorithms/approximation/tests/test_traveling_salesman.py
--- a/networkx/algorithms/approximation/tests/test_traveling_salesman.py
+++ b/networkx/algorithms/approximation/tests/test_traveling_salesman.py
@@ -756,9 +756,15 @@ def fixed_asadpour(G, weight):
# the shortest path between those vertices, allowing vertices to appear more
# than once.
#
- # However, we are using a fixed random number generator so we know what the
- # expected tour is.
- expected_tours = [[1, 4, 5, 0, 2, 3, 2, 1], [3, 2, 0, 1, 4, 5, 3]]
+ # Even though we are using a fixed seed, multiple tours have been known to
+ # be returned. The first two are from the original delevopment of this test,
+ # and the third one from issue #5913 on GitHub. If other tours are returned,
+ # add it on the list of expected tours.
+ expected_tours = [
+ [1, 4, 5, 0, 2, 3, 2, 1],
+ [3, 2, 0, 1, 4, 5, 3],
+ [3, 2, 1, 0, 5, 4, 3],
+ ]
assert tour in expected_tours
| Failing traveling salesman test
### Current Behavior
I help maintain a NetworkX package for the Fedora Linux distribution. We like to run the tests when we build, to catch problems that might affect users. One traveling salesman test, test_asadpour_tsp at line 707 of networkx/algorithms/approximation/tests/test_traveling_salesman.py in NetworkX 2.8.5, sometimes succeeds and sometimes fails. When it fails, it finds the tour `[3, 2, 1, 0, 5, 4, 3]`, which has the same weight (402) as the expected tour `[3, 2, 0, 1, 4, 5, 3]`. I'm curious about this comment at the bottom of the test:
```
# However, we are using a fixed random number generator so we know what the
# expected tour is.
```
Does that mean that something is wrong with the random number generator? If so, can you give me a hint as to what I should look for?
### Expected Behavior
The tests should pass.
### Steps to Reproduce
The test failure is intermittent. We do the build, then run pytest.
### Environment
Python version: 3.11.0~b5
NetworkX version: 2.8.5
| This is odd indeed. While I did author the code in question, I will admit that the random number generation pattern used in networkx isn't something that I've investigated. I haven't been able to replicate the error in my environment running Ubuntu 22.04 (python 3.10.4) against the main branch of the repo and haven't heard any reports of that test failing in our pipeline, but since you mention that it's an intermittent problem, that doesn't exactly say anything. (Ugh, those types of problems are the worst to track down!)
I did check that the `seed` parameter passed to `asadpour_atsp` is used in all of the places where random numbers are required. It is worth noting that since there are already two tours in the `expected_tours`, the comment above it seems... not exactly correct, but I'm not sure where the existing non-determinism is coming from.
Since the returned answer has the same weight as the correct answer, the quick and dirty option here is to add that tour to the list of expected tours. Perhaps either @dschult or @rossbar would have a better intuition on whether this is a problem that we need to track down or if the fast solution is considered good enough.
I think the random number generation in asadpour is fine. But calling `traveling_salesman_problem` doesn't present a way to provide the seed to the asadpour function directly. One of the tests shows that it can be called with a seed by overriding the default method with a function that calls asadpour with a seed. But… the test that is causing troubles doesn't do that. So no seed is set for that test despite the comment saying it is. Perhaps a previous version did set the seed.
I think the easy fix is to change the test to use asadpour with a seed. Longer term maybe we should provide a seed argument to traveling_salesman_problem. But I’m not sure. If we don’t do that we should provide an example in the doc_string of how to call it with a seed. Thoughts?
Changing the test to provide a seed sounds like the right way to go. Thank you both for the quick responses.
Changing it to use a seed would be the best solution, but I believe that it already is? [Line 742](https://github.com/networkx/networkx/blob/28f3a4e22d32ca43ff768b1fa365b26f75a0273d/networkx/algorithms/approximation/tests/test_traveling_salesman.py#L742) in the test file of the 2.8.5 release does wrap the `asadpour_atsp` function and provides a seed value of 19 for the failing test unless I'm seriously mistaken.
Whoops! Sorry @mjschwenne I stand corrected. The seed is set in the test function. Not sure what I was looking at. :}
I have extracted the test function and run it many times with the same results. I have also added a print statement to show the next random number generated after it finishes and I get the same number from (0,1) many many times within a for loop. I'm not sure how best to debug this. @jamesjer can you repeatedly run this code in the environment the test is run in?
```python
import networkx as nx
import networkx.algorithms.approximation as nx_app  # needed for the calls below
import numpy as np
edge_list = [
(0, 1, 100),
(0, 2, 100),
(0, 5, 1),
(1, 2, 100),
(1, 4, 1),
(2, 3, 1),
(3, 4, 100),
(3, 5, 100),
(4, 5, 100),
(1, 0, 100),
(2, 0, 100),
(5, 0, 1),
(2, 1, 100),
(4, 1, 1),
(3, 2, 1),
(4, 3, 100),
(5, 3, 100),
(5, 4, 100),
]
G = nx.DiGraph()
G.add_weighted_edges_from(edge_list)
def fixed_asadpour(G, weight):
    return nx_app.asadpour_atsp(G, weight, seed=19)

for i in range(15):
    print(nx_app.traveling_salesman_problem(G, weight="weight", method=fixed_asadpour))
```
Are you still seeing this issue @jamesjer ? Have you tried NX>2.8.5? I can't reproduce it locally either.
I think we might be hitting this in Gentoo on x86 (https://bugs.gentoo.org/921958).
If I run that script above, I get:
```
$ python /tmp/foo.py
[3, 2, 1, 0, 5, 4, 3]
[3, 2, 1, 0, 5, 4, 3]
[3, 2, 1, 0, 5, 4, 3]
[3, 2, 1, 0, 5, 4, 3]
[3, 2, 1, 0, 5, 4, 3]
[3, 2, 1, 0, 5, 4, 3]
[3, 2, 1, 0, 5, 4, 3]
[3, 2, 1, 0, 5, 4, 3]
[3, 2, 1, 0, 5, 4, 3]
[3, 2, 1, 0, 5, 4, 3]
[3, 2, 1, 0, 5, 4, 3]
[3, 2, 1, 0, 5, 4, 3]
[3, 2, 1, 0, 5, 4, 3]
[3, 2, 1, 0, 5, 4, 3]
[3, 2, 1, 0, 5, 4, 3]
```
But for me, I seem to always get the failure (and the same output from the script).
What version of networkx and python are you using?
I haven't seen this happen for quite a while, but it did on a recent build of networkx 3.2.1 with python 3.12.1. It does not happen on every build. Running the script above in the build environment produces nothing but `[3, 2, 0, 1, 4, 5, 3]`, every time.
Is there any parallelism in the test suite?
I took another look at the implementation and the first thing that jumped out as a potential point of variability is the use of `scipy.optimize.linprog` in the `held_karp_ascent` function. By default, `linprog` uses the `"highs"` method, which chooses between one of two sub-methods, either `highs-ipm` or `highs-ds` - see the [linprog("highs") method docstring](https://docs.scipy.org/doc/scipy/reference/optimize.linprog-highs.html#optimize-linprog-highs) for details.
The docstring mentions that when the default `"highs"` is specified, the submethod is chosen by scipy. I didn't chase down the selection logic, but perhaps it's possible different methods are being selected on different platforms. Another possibility is that the methods give different results on different platforms - they are binary extensions after all, so their building/packaging may have something to do with it.
If someone were going to dive into this further, that's where I'd start looking. In the end though, there may not be a whole lot we can do about it. If we're lucky, perhaps explicitly specifying one of `highs-ds` or `highs-ipm` removes the variability. The worst case scenario, such as it is, is that we cannot guarantee exact reproducibility for cases where there are multiple valid solutions. IMV this is not the end of the world. I'd be inclined to just document it and update the test to include all valid solutions to the test case!
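A tiny sketch of what pinning the submethod looks like (the actual fix would pass `method=` to the `linprog` call inside `held_karp_ascent`):
```python
from scipy import optimize

# Trivial LP: minimize x subject to x == 1, with the default bounds x >= 0.
res = optimize.linprog(c=[1.0], A_eq=[[1.0]], b_eq=[1.0], method="highs-ipm")
print(res.success, res.x)  # True [1.]
```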
> What version of networkx and python are you using?
networkx-3.2.1 w/ python-3.11.8 | 2024-03-07T02:13:53 |
networkx/networkx | 7,341 | networkx__networkx-7341 | [
"7339"
] | ef5f9acb5b711346d6eb92b4bf01d32a75d4f57c | diff --git a/networkx/algorithms/centrality/reaching.py b/networkx/algorithms/centrality/reaching.py
--- a/networkx/algorithms/centrality/reaching.py
+++ b/networkx/algorithms/centrality/reaching.py
@@ -112,7 +112,7 @@ def as_distance(u, v, d):
# TODO This can be trivially parallelized.
lrc = [
centrality(G, node, paths=paths, weight=weight, normalized=normalized)
- for node, paths in shortest_paths
+ for node, paths in shortest_paths.items()
]
max_lrc = max(lrc)
diff --git a/networkx/algorithms/shortest_paths/generic.py b/networkx/algorithms/shortest_paths/generic.py
--- a/networkx/algorithms/shortest_paths/generic.py
+++ b/networkx/algorithms/shortest_paths/generic.py
@@ -149,11 +149,11 @@ def shortest_path(G, source=None, target=None, weight=None, method="dijkstra"):
# Find paths between all pairs.
if method == "unweighted":
- paths = nx.all_pairs_shortest_path(G)
+ paths = dict(nx.all_pairs_shortest_path(G))
elif method == "dijkstra":
- paths = nx.all_pairs_dijkstra_path(G, weight=weight)
+ paths = dict(nx.all_pairs_dijkstra_path(G, weight=weight))
else: # method == 'bellman-ford':
- paths = nx.all_pairs_bellman_ford_path(G, weight=weight)
+ paths = dict(nx.all_pairs_bellman_ford_path(G, weight=weight))
else:
# Find paths from all nodes co-accessible to the target.
if G.is_directed():
| diff --git a/networkx/algorithms/shortest_paths/tests/test_generic.py b/networkx/algorithms/shortest_paths/tests/test_generic.py
--- a/networkx/algorithms/shortest_paths/tests/test_generic.py
+++ b/networkx/algorithms/shortest_paths/tests/test_generic.py
@@ -212,7 +212,9 @@ def test_single_source_all_shortest_paths(self):
assert sorted(ans[4]) == [[4]]
def test_all_pairs_shortest_path(self):
- p = dict(nx.shortest_path(self.cycle))
+ # shortest_path w/o source and target will return a generator instead of
+ # a dict beginning in version 3.5. Only the first call needs changed here.
+ p = nx.shortest_path(self.cycle)
assert p[0][3] == [0, 1, 2, 3]
assert p == dict(nx.all_pairs_shortest_path(self.cycle))
p = dict(nx.shortest_path(self.grid))
| Inconsistent `shortest_path` warning and return type on main
### Current Behavior
#7161 updated the warning for `shortest_path` to indicate the return type change (from dict to iterator) will occur in NetworkX 3.5.
#6584 changed the return type from dict to iterator, but #7161 didn't change the return type back to dict.
### Expected Behavior
I expect `shortest_path` with no source or target to return an iterator over (node, path) pairs on the main branch, to be consistent with the warning.
### Steps to Reproduce
```
>>> import networkx as nx
>>> assert isinstance(nx.shortest_path(nx.complete_graph(4)), dict) # raises AssertionError
```
### Environment
Python version: 3.10
NetworkX version: main branch
### Additional context
This is similar to issue #7315. I haven't fully compared #6584, #7161, and main branch to ensure consistency.
| 2024-03-11T03:47:07 |
|
networkx/networkx | 7,375 | networkx__networkx-7375 | [
"7353"
] | 83d2cf4090cd79ff3788b4d06f2ef4f53bfd17bb | diff --git a/networkx/algorithms/similarity.py b/networkx/algorithms/similarity.py
--- a/networkx/algorithms/similarity.py
+++ b/networkx/algorithms/similarity.py
@@ -319,8 +319,12 @@ def optimal_edit_paths(
Returns
-------
edit_paths : list of tuples (node_edit_path, edge_edit_path)
- node_edit_path : list of tuples (u, v)
- edge_edit_path : list of tuples ((u1, v1), (u2, v2))
+ - node_edit_path : list of tuples ``(u, v)`` indicating node transformations
+ between `G1` and `G2`. ``u`` is `None` for insertion, ``v`` is `None`
+ for deletion.
+ - edge_edit_path : list of tuples ``((u1, v1), (u2, v2))`` indicating edge
+ transformations between `G1` and `G2`. ``(None, (u2,v2))`` for insertion
+ and ``((u1,v1), None)`` for deletion.
cost : numeric
Optimal edit path cost (graph edit distance). When the cost
| Missing documentation for output of ```optimal_edit_paths``` function
### Current Behavior
The current description in the documentation for the ```optimal_edit_paths``` function is:
```
edit_paths : list of tuples (node_edit_path, edge_edit_path)
    node_edit_path : list of tuples (u, v)
    edge_edit_path : list of tuples ((u1, v1), (u2, v2))
cost : numeric
    Optimal edit path cost (graph edit distance).
```
While it states the type of the output, there is no explanation of how to interpret the result.
On stackoverflow there is a [question](https://stackoverflow.com/questions/73696000/how-to-interpret-the-output-of-networkx-optimal-edit-paths) where users don't know how to interpret the output of this function.
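For reference, a small worked example of the output shape (the exact ordering of the pairs can vary):
```python
import networkx as nx

G1 = nx.path_graph(2)  # edge (0, 1)
G2 = nx.path_graph(3)  # edges (0, 1) and (1, 2)
paths, cost = nx.optimal_edit_paths(G1, G2)
node_path, edge_path = paths[0]
print(node_path)  # e.g. [(0, 0), (1, 1), (None, 2)] -- None on the G1 side means insertion
print(edge_path)  # e.g. [((0, 1), (0, 1)), (None, (1, 2))]
print(cost)       # 2.0: one node insertion plus one edge insertion
```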
| I agree that this doc_string should be improved.
The best description of graph edit paths I have found is buried in the doc_string of a function defined within a function -- and thus not available outside of the source code. Perhaps it can kick-start a better description for the main doc_string.
<this is from the `get_edit_paths` function defined in `optimize_edit_paths`>
```
Returns:
    sequence of (vertex_path, edge_path, cost)
    vertex_path: complete vertex edit path
        list of tuples (u, v) of vertex mappings u<->v,
        u=None or v=None for deletion/insertion
    edge_path: complete edge edit path
        list of tuples (g, h) of edge mappings g<->h,
        g=None or h=None for deletion/insertion
    cost: total cost of edit path
        NOTE: path costs are non-increasing
``` | 2024-03-26T17:28:41 |
|
networkx/networkx | 7,378 | networkx__networkx-7378 | [
"7377"
] | f44fdc4d12cd36907959637961865a0325383a91 | diff --git a/networkx/classes/coreviews.py b/networkx/classes/coreviews.py
--- a/networkx/classes/coreviews.py
+++ b/networkx/classes/coreviews.py
@@ -281,7 +281,13 @@ def __init__(self, d, NODE_OK):
self.NODE_OK = NODE_OK
def __len__(self):
- return sum(1 for n in self)
+ # check whether NODE_OK stores the number of nodes as `length`
+ # or the nodes themselves as a set `nodes`. If not, count the nodes.
+ if hasattr(self.NODE_OK, "length"):
+ return self.NODE_OK.length
+ if hasattr(self.NODE_OK, "nodes"):
+ return len(self.NODE_OK.nodes & self._atlas.keys())
+ return sum(1 for n in self._atlas if self.NODE_OK(n))
def __iter__(self):
try: # check that NODE_OK has attr 'nodes'
@@ -324,7 +330,13 @@ def __init__(self, d, NODE_OK, EDGE_OK):
self.EDGE_OK = EDGE_OK
def __len__(self):
- return sum(1 for n in self)
+ # check whether NODE_OK stores the number of nodes as `length`
+ # or the nodes themselves as a set `nodes`. If not, count the nodes.
+ if hasattr(self.NODE_OK, "length"):
+ return self.NODE_OK.length
+ if hasattr(self.NODE_OK, "nodes"):
+ return len(self.NODE_OK.nodes & self._atlas.keys())
+ return sum(1 for n in self._atlas if self.NODE_OK(n))
def __iter__(self):
try: # check that NODE_OK has attr 'nodes'
diff --git a/networkx/classes/filters.py b/networkx/classes/filters.py
--- a/networkx/classes/filters.py
+++ b/networkx/classes/filters.py
@@ -54,7 +54,14 @@ def hide_multiedges(edges):
# write show_nodes as a class to make SubGraph pickleable
class show_nodes:
- """Filter class to show specific nodes."""
+ """Filter class to show specific nodes.
+
+ Attach the set of nodes as an attribute to speed up this commonly used filter
+
+ Note that another allowed attribute for filters is to store the number of nodes
+ on the filter as attribute `length` (used in `__len__`). It is a user
+ responsibility to ensure this attribute is accurate if present.
+ """
def __init__(self, nodes):
self.nodes = set(nodes)
| FilterAdjacency: __len__ is recalculated unnecessarily
The class FilterAdjacency is only used in views and therefore only works on frozen sets of edges and nodes. Despite that, its `__len__` function is recalculated each time it is called instead of being calculated and stored once upon the first call.
### Current Behavior
`__len__` of FilterAdjacency is recalculated unnecessarily upon each call, using up 80% of the runtime in my application.
### Expected Behavior
`__len__` of FilterAdjacency should be calculated and stored once upon its first call. For all other calls the stored value is returned.
### Steps to Reproduce
Any use of `nx.subgraph_view` shows this behaviour. In my case, performance becomes very poor for graphs with more than 100,000 nodes.
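A minimal sketch of the hot path (sizes illustrative):
```python
import networkx as nx

G = nx.path_graph(100_000)
H = nx.subgraph_view(G, filter_node=lambda n: n % 2 == 0)
for _ in range(3):
    print(len(H.adj))  # every len() call re-scans the filtered adjacency
```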
### Environment
Should be completely independent of env.
Python version: 3.11.8
NetworkX version: 3.2.1
| 2024-03-27T13:01:29 |
||
networkx/networkx | 7,380 | networkx__networkx-7380 | [
"7354"
] | f0b5a6d884ac7e303f2e2092d3a0a48723815239 | diff --git a/networkx/utils/backends.py b/networkx/utils/backends.py
--- a/networkx/utils/backends.py
+++ b/networkx/utils/backends.py
@@ -1239,6 +1239,7 @@ def check_iterator(it):
"read_pajek",
"from_pydot",
"pydot_read_dot",
+ "agraph_read_dot",
# graph comparison fails b/c of nan values
"read_gexf",
}:
| Dispatch test suite incompatible with pygraphviz
Just something I noticed today while attempting to run the test suite with `NETWORKX_TEST_BACKEND=nx-loopback` under the `pytest-xdist` extension. Doing so gives segmentation faults, indicating that something about the nx-loopback mechanism isn't threadsafe (perhaps this is known?).
## To reproduce
Install xdist (`pip install pytest-xdist`) in the development environment and run the test suite with the nx-loopback backend with more than 1 worker, e.g.:
```bash
NETWORKX_TEST_BACKEND=nx-loopback pytest -n auto --doctest-modules --durations=10 --pyargs networkx
```
A selection from the pytest log below (can't include the whole thing due to gh issue character limits):
<details>
<summary>Pytest log</summary>
<pre>
=========================================================== test session starts ===========================================================
platform linux -- Python 3.11.6, pytest-8.1.1, pluggy-1.4.0
Matplotlib: 3.9.0.dev0
Freetype: 2.6.1
rootdir: /home/ross/repos/networkx
configfile: pyproject.toml
plugins: anyio-4.2.0, mpl-0.16.1, cov-4.1.0, xdist-3.5.0
12 workers [6269 items]
<clipped>
............................................x....x................................................................................. [ 64%]
........................................................................................................Fatal Python error: Segmentation fault
Thread 0x0000747ba7c006c0 (most recent call first):
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/execnet/gateway_base.py", line 474 in read
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/execnet/gateway_base.py", line 507 in from_io
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/execnet/gateway_base.py", line 1049 in _thread_receiver
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/execnet/gateway_base.py", line 296 in run
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/execnet/gateway_base.py", line 361 in _perform_spawn
Current thread 0x0000747ba9422740 (most recent call first):
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/pygraphviz/graphviz.py", line 226 in agnameof
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/pygraphviz/agraph.py", line 228 in __repr__
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/_pytest/_io/saferepr.py", line 73 in repr_instance
File "/usr/lib/python3.11/reprlib.py", line 63 in repr1
File "/usr/lib/python3.11/reprlib.py", line 53 in repr
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/_pytest/_io/saferepr.py", line 61 in repr
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/_pytest/_io/saferepr.py", line 111 in saferepr
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/_pytest/_code/code.py", line 843 in repr_args
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/_pytest/_code/code.py", line 939 in repr_traceback_entry
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/_pytest/_code/code.py", line 994 in <listcomp>
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/_pytest/_code/code.py", line 993 in repr_traceback
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/_pytest/_code/code.py", line 1064 in repr_excinfo
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/_pytest/_code/code.py", line 699 in getrepr
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/_pytest/nodes.py", line 464 in _repr_failure_py
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/_pytest/python.py", line 1814 in repr_failure
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/_pytest/reports.py", line 364 in from_item_and_call
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/_pytest/runner.py", line 367 in pytest_runtest_makereport
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/pluggy/_callers.py", line 102 in _multicall
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/pluggy/_manager.py", line 119 in _hookexec
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/pluggy/_hooks.py", line 501 in __call__
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/_pytest/runner.py", line 242 in call_and_report
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/_pytest/runner.py", line 134 in runtestprotocol
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/_pytest/runner.py", line 115 in pytest_runtest_protocol
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/pluggy/_callers.py", line 102 in _multicall
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/pluggy/_manager.py", line 119 in _hookexec
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/pluggy/_hooks.py", line 501 in __call__
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/xdist/remote.py", line 174 in run_one_test
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/xdist/remote.py", line 157 in pytest_runtestloop
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/pluggy/_callers.py", line 102 in _multicall
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/pluggy/_manager.py", line 119 in _hookexec
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/pluggy/_hooks.py", line 501 in __call__
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/_pytest/main.py", line 339 in _main
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/_pytest/main.py", line 285 in wrap_session
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/_pytest/main.py", line 332 in pytest_cmdline_main
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/pluggy/_callers.py", line 102 in _multicall
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/pluggy/_manager.py", line 119 in _hookexec
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/pluggy/_hooks.py", line 501 in __call__
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/xdist/remote.py", line 355 in <module>
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/execnet/gateway_base.py", line 1157 in executetask
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/execnet/gateway_base.py", line 296 in run
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/execnet/gateway_base.py", line 361 in _perform_spawn
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/execnet/gateway_base.py", line 343 in integrate_as_primary_thread
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/execnet/gateway_base.py", line 1142 in serve
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/execnet/gateway_base.py", line 1640 in serve
File "<string>", line 8 in <module>
File "<string>", line 1 in <module>
Extension modules: markupsafe._speedups, numpy.core._multiarray_umath, numpy.core._multiarray_tests, numpy.linalg._umath_linalg, numpy.fft._pocketfft_internal, numpy.random._common, numpy.random.bit_generator, numpy.random._bounded_integers, numpy.random._mt19937, numpy.random.mtrand, numpy.random._philox, numpy.random._pcg64, numpy.random._sfc64, numpy.random._generator, scipy._lib._ccallback_c, PIL._imaging, matplotlib._path, kiwisolver._cext, pandas._libs.tslibs.ccalendar, pandas._libs.tslibs.np_datetime, pandas._libs.tslibs.dtypes, pandas._libs.tslibs.base, pandas._libs.tslibs.nattype, pandas._libs.tslibs.timezones, pandas._libs.tslibs.fields, pandas._libs.tslibs.timedeltas, pandas._libs.tslibs.tzconversion, pandas._libs.tslibs.timestamps, pandas._libs.properties, pandas._libs.tslibs.offsets, pandas._libs.tslibs.strptime, pandas._libs.tslibs.parsing, pandas._libs.tslibs.conversion, pandas._libs.tslibs.period, pandas._libs.tslibs.vectorized, pandas._libs.ops_dispatch, pandas._libs.missing, pandas._libs.hashtable, pandas._libs.algos, pandas._libs.interval, pandas._libs.lib, pandas._libs.ops, pandas._libs.hashing, pandas._libs.arrays, pandas._libs.tslib, pandas._libs.sparse, pandas._libs.internals, pandas._libs.indexing, pandas._libs.index, pandas._libs.writers, pandas._libs.join, pandas._libs.window.aggregations, pandas._libs.window.indexers, pandas._libs.reshape, pandas._libs.groupby, pandas._libs.json, pandas._libs.parsers, pandas._libs.testing, pygraphviz._graphviz, scipy.sparse._sparsetools, _csparsetools, scipy.sparse._csparsetools, scipy.linalg._fblas, scipy.linalg._flapack, scipy.linalg.cython_lapack, scipy.linalg._cythonized_array_utils, scipy.linalg._solve_toeplitz, scipy.linalg._decomp_lu_cython, scipy.linalg._matfuncs_sqrtm_triu, scipy.linalg.cython_blas, scipy.linalg._matfuncs_expm, scipy.linalg._decomp_update, scipy.sparse.linalg._dsolve._superlu, scipy.sparse.linalg._eigen.arpack._arpack, scipy.sparse.csgraph._tools, scipy.sparse.csgraph._shortest_path, scipy.sparse.csgraph._traversal, scipy.sparse.csgraph._min_spanning_tree, scipy.sparse.csgraph._flow, scipy.sparse.csgraph._matching, scipy.sparse.csgraph._reordering, scipy.optimize._minpack2, scipy.optimize._group_columns, scipy._lib.messagestream, scipy.optimize._trlib._trlib, scipy.optimize._lbfgsb, _moduleTNC, scipy.optimize._moduleTNC, scipy.optimize._cobyla, scipy.optimize._slsqp, scipy.optimize._minpack, scipy.optimize._lsq.givens_elimination, scipy.optimize._zeros, scipy.optimize._highs.cython.src._highs_wrapper, scipy.optimize._highs._highs_wrapper, scipy.optimize._highs.cython.src._highs_constants, scipy.optimize._highs._highs_constants, scipy.linalg._interpolative, scipy.optimize._bglu_dense, scipy.optimize._lsap, scipy.spatial._ckdtree, scipy.spatial._qhull, scipy.spatial._voronoi, scipy.spatial._distance_wrap, scipy.spatial._hausdorff, scipy.special._ufuncs_cxx, scipy.special._ufuncs, scipy.special._specfun, scipy.special._comb, scipy.special._ellip_harm_2, scipy.spatial.transform._rotation, scipy.optimize._direct, fontTools.misc.bezierTools, lxml._elementpath, lxml.etree, fontTools.varLib.iup (total: 116)
........................... [ 66%]
............................................................................x...............................FFF.................... [ 68%]
.............................F...F................................................................................................. [ 71%]
............[gw3] node down: Not properly terminated
F
replacing crashed worker gw3
collecting: 12/13
<clipped>
Fatal Python error: Segmentation fault
Thread 0x00007eda27a006c0 (most recent call first):
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/execnet/gateway_base.py", line 474 in read
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/execnet/gateway_base.py", line 507 in from_io
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/execnet/gateway_base.py", line 1049 in _thread_receiver
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/execnet/gateway_base.py", line 296 in run
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/execnet/gateway_base.py", line 361 in _perform_spawn
Current thread 0x00007eda293e3740 (most recent call first):
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/pygraphviz/graphviz.py", line 226 in agnameof
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/pygraphviz/agraph.py", line 228 in __repr__
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/_pytest/_io/saferepr.py", line 73 in repr_instance
File "/usr/lib/python3.11/reprlib.py", line 63 in repr1
File "/usr/lib/python3.11/reprlib.py", line 53 in repr
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/_pytest/_io/saferepr.py", line 61 in repr
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/_pytest/_io/saferepr.py", line 111 in saferepr
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/_pytest/_code/code.py", line 843 in repr_args
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/_pytest/_code/code.py", line 939 in repr_traceback_entry
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/_pytest/_code/code.py", line 994 in <listcomp>
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/_pytest/_code/code.py", line 993 in repr_traceback
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/_pytest/_code/code.py", line 1064 in repr_excinfo
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/_pytest/_code/code.py", line 699 in getrepr
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/_pytest/nodes.py", line 464 in _repr_failure_py
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/_pytest/python.py", line 1814 in repr_failure
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/_pytest/reports.py", line 364 in from_item_and_call
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/_pytest/runner.py", line 367 in pytest_runtest_makereport
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/pluggy/_callers.py", line 102 in _multicall
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/pluggy/_manager.py", line 119 in _hookexec
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/pluggy/_hooks.py", line 501 in __call__
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/_pytest/runner.py", line 242 in call_and_report
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/_pytest/runner.py", line 134 in runtestprotocol
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/_pytest/runner.py", line 115 in pytest_runtest_protocol
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/pluggy/_callers.py", line 102 in _multicall
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/pluggy/_manager.py", line 119 in _hookexec
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/pluggy/_hooks.py", line 501 in __call__
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/xdist/remote.py", line 174 in run_one_test
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/xdist/remote.py", line 157 in pytest_runtestloop
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/pluggy/_callers.py", line 102 in _multicall
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/pluggy/_manager.py", line 119 in _hookexec
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/pluggy/_hooks.py", line 501 in __call__
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/_pytest/main.py", line 339 in _main
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/_pytest/main.py", line 285 in wrap_session
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/_pytest/main.py", line 332 in pytest_cmdline_main
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/pluggy/_callers.py", line 102 in _multicall
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/pluggy/_manager.py", line 119 in _hookexec
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/pluggy/_hooks.py", line 501 in __call__
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/xdist/remote.py", line 355 in <module>
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/execnet/gateway_base.py", line 1157 in executetask
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/execnet/gateway_base.py", line 296 in run
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/execnet/gateway_base.py", line 361 in _perform_spawn
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/execnet/gateway_base.py", line 343 in integrate_as_primary_thread
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/execnet/gateway_base.py", line 1142 in serve
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/execnet/gateway_base.py", line 1640 in serve
File "<string>", line 8 in <module>
File "<string>", line 1 in <module>
Extension modules: markupsafe._speedups, numpy.core._multiarray_umath, numpy.core._multiarray_tests, numpy.linalg._umath_linalg, numpy.fft._pocketfft_internal, numpy.random._common, numpy.random.bit_generator, numpy.random._bounded_integers, numpy.random._mt19937, numpy.random.mtrand, numpy.random._philox, numpy.random._pcg64, numpy.random._sfc64, numpy.random._generator, scipy._lib._ccallback_c, PIL._imaging, matplotlib._path, kiwisolver._cext, pandas._libs.tslibs.ccalendar, pandas._libs.tslibs.np_datetime, pandas._libs.tslibs.dtypes, pandas._libs.tslibs.base, pandas._libs.tslibs.nattype, pandas._libs.tslibs.timezones, pandas._libs.tslibs.fields, pandas._libs.tslibs.timedeltas, pandas._libs.tslibs.tzconversion, pandas._libs.tslibs.timestamps, pandas._libs.properties, pandas._libs.tslibs.offsets, pandas._libs.tslibs.strptime, pandas._libs.tslibs.parsing, pandas._libs.tslibs.conversion, pandas._libs.tslibs.period, pandas._libs.tslibs.vectorized, pandas._libs.ops_dispatch, pandas._libs.missing, pandas._libs.hashtable, pandas._libs.algos, pandas._libs.interval, pandas._libs.lib, pandas._libs.ops, pandas._libs.hashing, pandas._libs.arrays, pandas._libs.tslib, pandas._libs.sparse, pandas._libs.internals, pandas._libs.indexing, pandas._libs.index, pandas._libs.writers, pandas._libs.join, pandas._libs.window.aggregations, pandas._libs.window.indexers, pandas._libs.reshape, pandas._libs.groupby, pandas._libs.json, pandas._libs.parsers, pandas._libs.testing, pygraphviz._graphviz, scipy.sparse._sparsetools, _csparsetools, scipy.sparse._csparsetools, scipy.linalg._fblas, scipy.linalg._flapack, scipy.linalg.cython_lapack, scipy.linalg._cythonized_array_utils, scipy.linalg._solve_toeplitz, scipy.linalg._decomp_lu_cython, scipy.linalg._matfuncs_sqrtm_triu, scipy.linalg.cython_blas, scipy.linalg._matfuncs_expm, scipy.linalg._decomp_update, scipy.sparse.linalg._dsolve._superlu, scipy.sparse.linalg._eigen.arpack._arpack, scipy.sparse.csgraph._tools, scipy.sparse.csgraph._shortest_path, scipy.sparse.csgraph._traversal, scipy.sparse.csgraph._min_spanning_tree, scipy.sparse.csgraph._flow, scipy.sparse.csgraph._matching, scipy.sparse.csgraph._reordering, scipy.special._ufuncs_cxx, scipy.special._ufuncs, scipy.special._specfun, scipy.special._comb, scipy.special._ellip_harm_2 (total: 86)
.............................[gw12] node down: Not properly terminated
F
replacing crashed worker gw12
collecting: 13/14 workers.........................[gw4] node down: Not properly terminated
F
replacing crashed worker gw4
collecting: 13/15 workers...Fatal Python error: Segmentation fault
Thread 0x00007a522ba006c0 (most recent call first):
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/execnet/gateway_base.py", line 474 in read
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/execnet/gateway_base.py", line 507 in from_io
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/execnet/gateway_base.py", line 1049 in _thread_receiver
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/execnet/gateway_base.py", line 296 in run
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/execnet/gateway_base.py", line 361 in _perform_spawn
Current thread 0x00007a522d3ff740 (most recent call first):
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/pygraphviz/graphviz.py", line 226 in agnameof
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/pygraphviz/agraph.py", line 228 in __repr__
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/_pytest/_io/saferepr.py", line 73 in repr_instance
File "/usr/lib/python3.11/reprlib.py", line 63 in repr1
File "/usr/lib/python3.11/reprlib.py", line 53 in repr
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/_pytest/_io/saferepr.py", line 61 in repr
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/_pytest/_io/saferepr.py", line 111 in saferepr
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/_pytest/_code/code.py", line 843 in repr_args
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/_pytest/_code/code.py", line 939 in repr_traceback_entry
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/_pytest/_code/code.py", line 994 in <listcomp>
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/_pytest/_code/code.py", line 993 in repr_traceback
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/_pytest/_code/code.py", line 1064 in repr_excinfo
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/_pytest/_code/code.py", line 699 in getrepr
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/_pytest/nodes.py", line 464 in _repr_failure_py
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/_pytest/python.py", line 1814 in repr_failure
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/_pytest/reports.py", line 364 in from_item_and_call
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/_pytest/runner.py", line 367 in pytest_runtest_makereport
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/pluggy/_callers.py", line 102 in _multicall
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/pluggy/_manager.py", line 119 in _hookexec
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/pluggy/_hooks.py", line 501 in __call__
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/_pytest/runner.py", line 242 in call_and_report
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/_pytest/runner.py", line 134 in runtestprotocol
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/_pytest/runner.py", line 115 in pytest_runtest_protocol
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/pluggy/_callers.py", line 102 in _multicall
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/pluggy/_manager.py", line 119 in _hookexec
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/pluggy/_hooks.py", line 501 in __call__
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/xdist/remote.py", line 174 in run_one_test
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/xdist/remote.py", line 157 in pytest_runtestloop
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/pluggy/_callers.py", line 102 in _multicall
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/pluggy/_manager.py", line 119 in _hookexec
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/pluggy/_hooks.py", line 501 in __call__
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/_pytest/main.py", line 339 in _main
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/_pytest/main.py", line 285 in wrap_session
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/_pytest/main.py", line 332 in pytest_cmdline_main
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/pluggy/_callers.py", line 102 in _multicall
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/pluggy/_manager.py", line 119 in _hookexec
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/pluggy/_hooks.py", line 501 in __call__
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/xdist/remote.py", line 355 in <module>
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/execnet/gateway_base.py", line 1157 in executetask
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/execnet/gateway_base.py", line 296 in run
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/execnet/gateway_base.py", line 361 in _perform_spawn
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/execnet/gateway_base.py", line 343 in integrate_as_primary_thread
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/execnet/gateway_base.py", line 1142 in serve
File "/home/ross/.virtualenvs/nx-dev/lib/python3.11/site-packages/execnet/gateway_base.py", line 1640 in serve
File "<string>", line 8 in <module>
File "<string>", line 1 in <module>
Extension modules: markupsafe._speedups, numpy.core._multiarray_umath, numpy.core._multiarray_tests, numpy.linalg._umath_linalg, numpy.fft._pocketfft_internal, numpy.random._common, numpy.random.bit_generator, numpy.random._bounded_integers, numpy.random._mt19937, numpy.random.mtrand, numpy.random._philox, numpy.random._pcg64, numpy.random._sfc64, numpy.random._generator, scipy._lib._ccallback_c, PIL._imaging, matplotlib._path, kiwisolver._cext, pandas._libs.tslibs.ccalendar, pandas._libs.tslibs.np_datetime, pandas._libs.tslibs.dtypes, pandas._libs.tslibs.base, pandas._libs.tslibs.nattype, pandas._libs.tslibs.timezones, pandas._libs.tslibs.fields, pandas._libs.tslibs.timedeltas, pandas._libs.tslibs.tzconversion, pandas._libs.tslibs.timestamps, pandas._libs.properties, pandas._libs.tslibs.offsets, pandas._libs.tslibs.strptime, pandas._libs.tslibs.parsing, pandas._libs.tslibs.conversion, pandas._libs.tslibs.period, pandas._libs.tslibs.vectorized, pandas._libs.ops_dispatch, pandas._libs.missing, pandas._libs.hashtable, pandas._libs.algos, pandas._libs.interval, pandas._libs.lib, pandas._libs.ops, pandas._libs.hashing, pandas._libs.arrays, pandas._libs.tslib, pandas._libs.sparse, pandas._libs.internals, pandas._libs.indexing, pandas._libs.index, pandas._libs.writers, pandas._libs.join, pandas._libs.window.aggregations, pandas._libs.window.indexers, pandas._libs.reshape, pandas._libs.groupby, pandas._libs.json, pandas._libs.parsers, pandas._libs.testing, pygraphviz._graphviz, scipy.sparse._sparsetools, _csparsetools, scipy.sparse._csparsetools, scipy.linalg._fblas, scipy.linalg._flapack, scipy.linalg.cython_lapack, scipy.linalg._cythonized_array_utils, scipy.linalg._solve_toeplitz, scipy.linalg._decomp_lu_cython, scipy.linalg._matfuncs_sqrtm_triu, scipy.linalg.cython_blas, scipy.linalg._matfuncs_expm, scipy.linalg._decomp_update, scipy.sparse.linalg._dsolve._superlu, scipy.sparse.linalg._eigen.arpack._arpack, scipy.sparse.csgraph._tools, scipy.sparse.csgraph._shortest_path, scipy.sparse.csgraph._traversal, scipy.sparse.csgraph._min_spanning_tree, scipy.sparse.csgraph._flow, scipy.sparse.csgraph._matching, scipy.sparse.csgraph._reordering, fontTools.misc.bezierTools, lxml._elementpath, lxml.etree, fontTools.varLib.iup, scipy.optimize._minpack2, scipy.optimize._group_columns, scipy._lib.messagestream, scipy.optimize._trlib._trlib, scipy.optimize._lbfgsb, _moduleTNC, scipy.optimize._moduleTNC, scipy.optimize._cobyla, scipy.optimize._slsqp, scipy.optimize._minpack, scipy.optimize._lsq.givens_elimination, scipy.optimize._zeros, scipy.optimize._highs.cython.src._highs_wrapper, scipy.optimize._highs._highs_wrapper, scipy.optimize._highs.cython.src._highs_constants, scipy.optimize._highs._highs_constants, scipy.linalg._interpolative, scipy.optimize._bglu_dense, scipy.optimize._lsap, scipy.spatial._ckdtree, scipy.spatial._qhull, scipy.spatial._voronoi, scipy.spatial._distance_wrap, scipy.spatial._hausdorff, scipy.special._ufuncs_cxx, scipy.special._ufuncs, scipy.special._specfun, scipy.special._comb, scipy.special._ellip_harm_2, scipy.spatial.transform._rotation, scipy.optimize._direct, scipy.ndimage._nd_image, _ni_label, scipy.ndimage._ni_label, scipy.integrate._odepack, scipy.integrate._quadpack, scipy.integrate._vode, scipy.integrate._dop, scipy.integrate._lsoda, scipy.special.cython_special, scipy.stats._stats, scipy.stats.beta_ufunc, scipy.stats._boost.beta_ufunc, scipy.stats.binom_ufunc, scipy.stats._boost.binom_ufunc, scipy.stats.nbinom_ufunc, scipy.stats._boost.nbinom_ufunc, 
scipy.stats.hypergeom_ufunc, scipy.stats._boost.hypergeom_ufunc, scipy.stats.ncf_ufunc, scipy.stats._boost.ncf_ufunc, scipy.stats.ncx2_ufunc, scipy.stats._boost.ncx2_ufunc, scipy.stats.nct_ufunc, scipy.stats._boost.nct_ufunc, scipy.stats.skewnorm_ufunc, scipy.stats._boost.skewnorm_ufunc, scipy.stats.invgauss_ufunc, scipy.stats._boost.invgauss_ufunc, scipy.interpolate._fitpack, scipy.interpolate.dfitpack, scipy.interpolate._bspl, scipy.interpolate._ppoly, scipy.interpolate.interpnd, scipy.interpolate._rbfinterp_pythran, scipy.interpolate._rgi_cython, scipy.stats._biasedurn, scipy.stats._levy_stable.levyst, scipy.stats._stats_pythran, scipy._lib._uarray._uarray, scipy.stats._ansari_swilk_statistics, scipy.stats._sobol, scipy.stats._qmc_cy, scipy.stats._mvn, scipy.stats._rcont.rcont, scipy.stats._unuran.unuran_wrapper (total: 161)
15 workers [6269 items] ....s...........[gw6] node down: Not properly terminated
F
replacing crashed worker gw6
16 workers [6269 items] ..................s..............................................................................................................................................x...............x......x..............................................................................................................................................................................................................s...................................................................................................................................................s....................................................................................................
================================================================ FAILURES =================================================================
_____________________________________________________ TestPydot.test_pydot[neato-G0] ______________________________________________________
[gw2] linux -- Python 3.11.6 /home/ross/.virtualenvs/nx-dev/bin/python
self = <networkx.drawing.tests.test_pydot.TestPydot object at 0x75a13f3dd050>, G = <networkx.classes.graph.Graph object at 0x75a13f3cd550>
prog = 'neato', tmp_path = PosixPath('/tmp/pytest-of-ross/pytest-4/popen-gw2/test_pydot_neato_G0_0')
@pytest.mark.parametrize("G", (nx.Graph(), nx.DiGraph()))
@pytest.mark.parametrize("prog", ("neato", "dot"))
def test_pydot(self, G, prog, tmp_path):
"""
Validate :mod:`pydot`-based usage of the passed NetworkX graph with the
passed basename of an external GraphViz command (e.g., `dot`, `neato`).
"""
# Set the name of this graph to... "G". Failing to do so will
# subsequently trip an assertion expecting this name.
G.graph["name"] = "G"
# Add arbitrary nodes and edges to the passed empty graph.
G.add_edges_from([("A", "B"), ("A", "C"), ("B", "C"), ("A", "D")])
G.add_node("E")
# Validate layout of this graph with the passed GraphViz command.
graph_layout = nx.nx_pydot.pydot_layout(G, prog=prog)
assert isinstance(graph_layout, dict)
# Convert this graph into a "pydot.Dot" instance.
P = nx.nx_pydot.to_pydot(G)
# Convert this "pydot.Dot" instance back into a graph of the same type.
G2 = G.__class__(nx.nx_pydot.from_pydot(P))
# Validate the original and resulting graphs to be the same.
assert graphs_equal(G, G2)
fname = tmp_path / "out.dot"
# Serialize this "pydot.Dot" instance to a temporary file in dot format
P.write_raw(fname)
# Deserialize a list of new "pydot.Dot" instances back from this file.
Pin_list = pydot.graph_from_dot_file(path=fname, encoding="utf-8")
# Validate this file to contain only one graph.
assert len(Pin_list) == 1
# The single "pydot.Dot" instance deserialized from this file.
Pin = Pin_list[0]
# Sorted list of all nodes in the original "pydot.Dot" instance.
n1 = sorted(p.get_name() for p in P.get_node_list())
# Sorted list of all nodes in the deserialized "pydot.Dot" instance.
n2 = sorted(p.get_name() for p in Pin.get_node_list())
# Validate these instances to contain the same nodes.
assert n1 == n2
# Sorted list of all edges in the original "pydot.Dot" instance.
e1 = sorted((e.get_source(), e.get_destination()) for e in P.get_edge_list())
# Sorted list of all edges in the original "pydot.Dot" instance.
e2 = sorted((e.get_source(), e.get_destination()) for e in Pin.get_edge_list())
# Validate these instances to contain the same edges.
assert e1 == e2
# Deserialize a new graph of the same type back from this file.
> Hin = nx.nx_pydot.read_dot(fname)
networkx/drawing/tests/test_pydot.py:75:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
networkx/utils/decorators.py:789: in func
return argmap._lazy_compile(__wrapper)(*args, **kwargs)
<class 'networkx.utils.decorators.argmap'> compilation 1108:5: in argmap_read_dot_1103
???
networkx/utils/backends.py:538: in __call__
return self._convert_and_call_for_tests(
networkx/utils/backends.py:949: in _convert_and_call_for_tests
result = getattr(backend, self.name)(*converted_args, **converted_kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
path = <itertools._tee object at 0x75a130792800>
@open_file(0, mode="r")
@nx._dispatchable(name="pydot_read_dot", graphs=None, returns_graph=True)
def read_dot(path):
"""Returns a NetworkX :class:`MultiGraph` or :class:`MultiDiGraph` from the
dot file with the passed path.
If this file contains multiple graphs, only the first such graph is
returned. All graphs _except_ the first are silently ignored.
Parameters
----------
path : str or file
Filename or file handle.
Returns
-------
G : MultiGraph or MultiDiGraph
A :class:`MultiGraph` or :class:`MultiDiGraph`.
Notes
-----
Use `G = nx.Graph(nx.nx_pydot.read_dot(path))` to return a :class:`Graph` instead of a
:class:`MultiGraph`.
"""
import pydot
> data = path.read()
E AttributeError: 'itertools._tee' object has no attribute 'read'
networkx/drawing/nx_pydot.py:74: AttributeError
_____________________________________________________ TestPydot.test_pydot[neato-G1] ______________________________________________________
[gw2] linux -- Python 3.11.6 /home/ross/.virtualenvs/nx-dev/bin/python
self = <networkx.drawing.tests.test_pydot.TestPydot object at 0x75a13f3dd790>
G = <networkx.classes.digraph.DiGraph object at 0x75a13f3cd690>, prog = 'neato'
tmp_path = PosixPath('/tmp/pytest-of-ross/pytest-4/popen-gw2/test_pydot_neato_G1_0')
@pytest.mark.parametrize("G", (nx.Graph(), nx.DiGraph()))
@pytest.mark.parametrize("prog", ("neato", "dot"))
def test_pydot(self, G, prog, tmp_path):
"""
Validate :mod:`pydot`-based usage of the passed NetworkX graph with the
passed basename of an external GraphViz command (e.g., `dot`, `neato`).
"""
# Set the name of this graph to... "G". Failing to do so will
# subsequently trip an assertion expecting this name.
G.graph["name"] = "G"
# Add arbitrary nodes and edges to the passed empty graph.
G.add_edges_from([("A", "B"), ("A", "C"), ("B", "C"), ("A", "D")])
G.add_node("E")
# Validate layout of this graph with the passed GraphViz command.
graph_layout = nx.nx_pydot.pydot_layout(G, prog=prog)
assert isinstance(graph_layout, dict)
# Convert this graph into a "pydot.Dot" instance.
P = nx.nx_pydot.to_pydot(G)
# Convert this "pydot.Dot" instance back into a graph of the same type.
G2 = G.__class__(nx.nx_pydot.from_pydot(P))
# Validate the original and resulting graphs to be the same.
assert graphs_equal(G, G2)
fname = tmp_path / "out.dot"
# Serialize this "pydot.Dot" instance to a temporary file in dot format
P.write_raw(fname)
# Deserialize a list of new "pydot.Dot" instances back from this file.
Pin_list = pydot.graph_from_dot_file(path=fname, encoding="utf-8")
# Validate this file to contain only one graph.
assert len(Pin_list) == 1
# The single "pydot.Dot" instance deserialized from this file.
Pin = Pin_list[0]
# Sorted list of all nodes in the original "pydot.Dot" instance.
n1 = sorted(p.get_name() for p in P.get_node_list())
# Sorted list of all nodes in the deserialized "pydot.Dot" instance.
n2 = sorted(p.get_name() for p in Pin.get_node_list())
# Validate these instances to contain the same nodes.
assert n1 == n2
# Sorted list of all edges in the original "pydot.Dot" instance.
e1 = sorted((e.get_source(), e.get_destination()) for e in P.get_edge_list())
# Sorted list of all edges in the original "pydot.Dot" instance.
e2 = sorted((e.get_source(), e.get_destination()) for e in Pin.get_edge_list())
# Validate these instances to contain the same edges.
assert e1 == e2
# Deserialize a new graph of the same type back from this file.
> Hin = nx.nx_pydot.read_dot(fname)
networkx/drawing/tests/test_pydot.py:75:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
<class 'networkx.utils.decorators.argmap'> compilation 1108:5: in argmap_read_dot_1103
???
networkx/utils/backends.py:538: in __call__
return self._convert_and_call_for_tests(
networkx/utils/backends.py:949: in _convert_and_call_for_tests
result = getattr(backend, self.name)(*converted_args, **converted_kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
path = <itertools._tee object at 0x75a11922a800>
@open_file(0, mode="r")
@nx._dispatchable(name="pydot_read_dot", graphs=None, returns_graph=True)
def read_dot(path):
"""Returns a NetworkX :class:`MultiGraph` or :class:`MultiDiGraph` from the
dot file with the passed path.
If this file contains multiple graphs, only the first such graph is
returned. All graphs _except_ the first are silently ignored.
Parameters
----------
path : str or file
Filename or file handle.
Returns
-------
G : MultiGraph or MultiDiGraph
A :class:`MultiGraph` or :class:`MultiDiGraph`.
Notes
-----
Use `G = nx.Graph(nx.nx_pydot.read_dot(path))` to return a :class:`Graph` instead of a
:class:`MultiGraph`.
"""
import pydot
> data = path.read()
E AttributeError: 'itertools._tee' object has no attribute 'read'
networkx/drawing/nx_pydot.py:74: AttributeError
______________________________________________________ TestPydot.test_pydot[dot-G0] _______________________________________________________
[gw2] linux -- Python 3.11.6 /home/ross/.virtualenvs/nx-dev/bin/python
self = <networkx.drawing.tests.test_pydot.TestPydot object at 0x75a13f3dda50>, G = <networkx.classes.graph.Graph object at 0x75a13f3cd550>
prog = 'dot', tmp_path = PosixPath('/tmp/pytest-of-ross/pytest-4/popen-gw2/test_pydot_dot_G0_0')
@pytest.mark.parametrize("G", (nx.Graph(), nx.DiGraph()))
@pytest.mark.parametrize("prog", ("neato", "dot"))
def test_pydot(self, G, prog, tmp_path):
"""
Validate :mod:`pydot`-based usage of the passed NetworkX graph with the
passed basename of an external GraphViz command (e.g., `dot`, `neato`).
"""
# Set the name of this graph to... "G". Failing to do so will
# subsequently trip an assertion expecting this name.
G.graph["name"] = "G"
# Add arbitrary nodes and edges to the passed empty graph.
G.add_edges_from([("A", "B"), ("A", "C"), ("B", "C"), ("A", "D")])
G.add_node("E")
# Validate layout of this graph with the passed GraphViz command.
graph_layout = nx.nx_pydot.pydot_layout(G, prog=prog)
assert isinstance(graph_layout, dict)
# Convert this graph into a "pydot.Dot" instance.
P = nx.nx_pydot.to_pydot(G)
# Convert this "pydot.Dot" instance back into a graph of the same type.
G2 = G.__class__(nx.nx_pydot.from_pydot(P))
# Validate the original and resulting graphs to be the same.
assert graphs_equal(G, G2)
fname = tmp_path / "out.dot"
# Serialize this "pydot.Dot" instance to a temporary file in dot format
P.write_raw(fname)
# Deserialize a list of new "pydot.Dot" instances back from this file.
Pin_list = pydot.graph_from_dot_file(path=fname, encoding="utf-8")
# Validate this file to contain only one graph.
assert len(Pin_list) == 1
# The single "pydot.Dot" instance deserialized from this file.
Pin = Pin_list[0]
# Sorted list of all nodes in the original "pydot.Dot" instance.
n1 = sorted(p.get_name() for p in P.get_node_list())
# Sorted list of all nodes in the deserialized "pydot.Dot" instance.
n2 = sorted(p.get_name() for p in Pin.get_node_list())
# Validate these instances to contain the same nodes.
assert n1 == n2
# Sorted list of all edges in the original "pydot.Dot" instance.
e1 = sorted((e.get_source(), e.get_destination()) for e in P.get_edge_list())
# Sorted list of all edges in the original "pydot.Dot" instance.
e2 = sorted((e.get_source(), e.get_destination()) for e in Pin.get_edge_list())
# Validate these instances to contain the same edges.
assert e1 == e2
# Deserialize a new graph of the same type back from this file.
> Hin = nx.nx_pydot.read_dot(fname)
networkx/drawing/tests/test_pydot.py:75:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
<class 'networkx.utils.decorators.argmap'> compilation 1108:5: in argmap_read_dot_1103
???
networkx/utils/backends.py:538: in __call__
return self._convert_and_call_for_tests(
networkx/utils/backends.py:949: in _convert_and_call_for_tests
result = getattr(backend, self.name)(*converted_args, **converted_kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
path = <itertools._tee object at 0x75a114fe2880>
@open_file(0, mode="r")
@nx._dispatchable(name="pydot_read_dot", graphs=None, returns_graph=True)
def read_dot(path):
"""Returns a NetworkX :class:`MultiGraph` or :class:`MultiDiGraph` from the
dot file with the passed path.
If this file contains multiple graphs, only the first such graph is
returned. All graphs _except_ the first are silently ignored.
Parameters
----------
path : str or file
Filename or file handle.
Returns
-------
G : MultiGraph or MultiDiGraph
A :class:`MultiGraph` or :class:`MultiDiGraph`.
Notes
-----
Use `G = nx.Graph(nx.nx_pydot.read_dot(path))` to return a :class:`Graph` instead of a
:class:`MultiGraph`.
"""
import pydot
> data = path.read()
E AttributeError: 'itertools._tee' object has no attribute 'read'
networkx/drawing/nx_pydot.py:74: AttributeError
______________________________________________________ TestPydot.test_pydot[dot-G1] _______________________________________________________
[gw2] linux -- Python 3.11.6 /home/ross/.virtualenvs/nx-dev/bin/python
self = <networkx.drawing.tests.test_pydot.TestPydot object at 0x75a13f3cf1d0>
G = <networkx.classes.digraph.DiGraph object at 0x75a13f3cd690>, prog = 'dot'
tmp_path = PosixPath('/tmp/pytest-of-ross/pytest-4/popen-gw2/test_pydot_dot_G1_0')
@pytest.mark.parametrize("G", (nx.Graph(), nx.DiGraph()))
@pytest.mark.parametrize("prog", ("neato", "dot"))
def test_pydot(self, G, prog, tmp_path):
"""
Validate :mod:`pydot`-based usage of the passed NetworkX graph with the
passed basename of an external GraphViz command (e.g., `dot`, `neato`).
"""
# Set the name of this graph to... "G". Failing to do so will
# subsequently trip an assertion expecting this name.
G.graph["name"] = "G"
# Add arbitrary nodes and edges to the passed empty graph.
G.add_edges_from([("A", "B"), ("A", "C"), ("B", "C"), ("A", "D")])
G.add_node("E")
# Validate layout of this graph with the passed GraphViz command.
graph_layout = nx.nx_pydot.pydot_layout(G, prog=prog)
assert isinstance(graph_layout, dict)
# Convert this graph into a "pydot.Dot" instance.
P = nx.nx_pydot.to_pydot(G)
# Convert this "pydot.Dot" instance back into a graph of the same type.
G2 = G.__class__(nx.nx_pydot.from_pydot(P))
# Validate the original and resulting graphs to be the same.
assert graphs_equal(G, G2)
fname = tmp_path / "out.dot"
# Serialize this "pydot.Dot" instance to a temporary file in dot format
P.write_raw(fname)
# Deserialize a list of new "pydot.Dot" instances back from this file.
Pin_list = pydot.graph_from_dot_file(path=fname, encoding="utf-8")
# Validate this file to contain only one graph.
assert len(Pin_list) == 1
# The single "pydot.Dot" instance deserialized from this file.
Pin = Pin_list[0]
# Sorted list of all nodes in the original "pydot.Dot" instance.
n1 = sorted(p.get_name() for p in P.get_node_list())
# Sorted list of all nodes in the deserialized "pydot.Dot" instance.
n2 = sorted(p.get_name() for p in Pin.get_node_list())
# Validate these instances to contain the same nodes.
assert n1 == n2
# Sorted list of all edges in the original "pydot.Dot" instance.
e1 = sorted((e.get_source(), e.get_destination()) for e in P.get_edge_list())
# Sorted list of all edges in the original "pydot.Dot" instance.
e2 = sorted((e.get_source(), e.get_destination()) for e in Pin.get_edge_list())
# Validate these instances to contain the same edges.
assert e1 == e2
# Deserialize a new graph of the same type back from this file.
> Hin = nx.nx_pydot.read_dot(fname)
networkx/drawing/tests/test_pydot.py:75:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
<class 'networkx.utils.decorators.argmap'> compilation 1108:5: in argmap_read_dot_1103
???
networkx/utils/backends.py:538: in __call__
return self._convert_and_call_for_tests(
networkx/utils/backends.py:949: in _convert_and_call_for_tests
result = getattr(backend, self.name)(*converted_args, **converted_kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
path = <itertools._tee object at 0x75a114e1d680>
@open_file(0, mode="r")
@nx._dispatchable(name="pydot_read_dot", graphs=None, returns_graph=True)
def read_dot(path):
"""Returns a NetworkX :class:`MultiGraph` or :class:`MultiDiGraph` from the
dot file with the passed path.
If this file contains multiple graphs, only the first such graph is
returned. All graphs _except_ the first are silently ignored.
Parameters
----------
path : str or file
Filename or file handle.
Returns
-------
G : MultiGraph or MultiDiGraph
A :class:`MultiGraph` or :class:`MultiDiGraph`.
Notes
-----
Use `G = nx.Graph(nx.nx_pydot.read_dot(path))` to return a :class:`Graph` instead of a
:class:`MultiGraph`.
"""
import pydot
> data = path.read()
E AttributeError: 'itertools._tee' object has no attribute 'read'
networkx/drawing/nx_pydot.py:74: AttributeError
________________________________________________________ TestPydot.test_read_write ________________________________________________________
[gw2] linux -- Python 3.11.6 /home/ross/.virtualenvs/nx-dev/bin/python
self = <networkx.drawing.tests.test_pydot.TestPydot object at 0x75a13f3c62d0>
def test_read_write(self):
G = nx.MultiGraph()
G.graph["name"] = "G"
G.add_edge("1", "2", key="0") # read assumes strings
fh = StringIO()
nx.nx_pydot.write_dot(G, fh)
fh.seek(0)
> H = nx.nx_pydot.read_dot(fh)
networkx/drawing/tests/test_pydot.py:88:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
<class 'networkx.utils.decorators.argmap'> compilation 1108:5: in argmap_read_dot_1103
???
networkx/utils/backends.py:538: in __call__
return self._convert_and_call_for_tests(
networkx/utils/backends.py:949: in _convert_and_call_for_tests
result = getattr(backend, self.name)(*converted_args, **converted_kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
path = <itertools._tee object at 0x75a12e9a1f40>
@open_file(0, mode="r")
@nx._dispatchable(name="pydot_read_dot", graphs=None, returns_graph=True)
def read_dot(path):
"""Returns a NetworkX :class:`MultiGraph` or :class:`MultiDiGraph` from the
dot file with the passed path.
If this file contains multiple graphs, only the first such graph is
returned. All graphs _except_ the first are silently ignored.
Parameters
----------
path : str or file
Filename or file handle.
Returns
-------
G : MultiGraph or MultiDiGraph
A :class:`MultiGraph` or :class:`MultiDiGraph`.
Notes
-----
Use `G = nx.Graph(nx.nx_pydot.read_dot(path))` to return a :class:`Graph` instead of a
:class:`MultiGraph`.
"""
import pydot
> data = path.read()
E AttributeError: 'itertools._tee' object has no attribute 'read'
networkx/drawing/nx_pydot.py:74: AttributeError
__________________________________________________ networkx/drawing/tests/test_agraph.py __________________________________________________
[gw3] linux -- Python 3.11.6 /home/ross/.virtualenvs/nx-dev/bin/python
worker 'gw3' crashed while running 'networkx/drawing/tests/test_agraph.py::TestAGraph::test_agraph_roundtripping[G0]'
__________________________________________________ networkx/drawing/tests/test_agraph.py __________________________________________________
[gw12] linux -- Python 3.11.6 /home/ross/.virtualenvs/nx-dev/bin/python
worker 'gw12' crashed while running 'networkx/drawing/tests/test_agraph.py::TestAGraph::test_agraph_roundtripping[G2]'
__________________________________________________ networkx/drawing/tests/test_agraph.py __________________________________________________
[gw4] linux -- Python 3.11.6 /home/ross/.virtualenvs/nx-dev/bin/python
worker 'gw4' crashed while running 'networkx/drawing/tests/test_agraph.py::TestAGraph::test_agraph_roundtripping[G1]'
__________________________________________________ networkx/drawing/tests/test_agraph.py __________________________________________________
[gw6] linux -- Python 3.11.6 /home/ross/.virtualenvs/nx-dev/bin/python
worker 'gw6' crashed while running 'networkx/drawing/tests/test_agraph.py::TestAGraph::test_agraph_roundtripping[G3]'
========================================================= short test summary info =========================================================
FAILED networkx/drawing/tests/test_pydot.py::TestPydot::test_pydot[neato-G0] - AttributeError: 'itertools._tee' object has no attribute 'read'
FAILED networkx/drawing/tests/test_pydot.py::TestPydot::test_pydot[neato-G1] - AttributeError: 'itertools._tee' object has no attribute 'read'
FAILED networkx/drawing/tests/test_pydot.py::TestPydot::test_pydot[dot-G0] - AttributeError: 'itertools._tee' object has no attribute 'read'
FAILED networkx/drawing/tests/test_pydot.py::TestPydot::test_pydot[dot-G1] - AttributeError: 'itertools._tee' object has no attribute 'read'
FAILED networkx/drawing/tests/test_pydot.py::TestPydot::test_read_write - AttributeError: 'itertools._tee' object has no attribute 'read'
FAILED networkx/drawing/tests/test_agraph.py::TestAGraph::test_agraph_roundtripping[G0]
FAILED networkx/drawing/tests/test_agraph.py::TestAGraph::test_agraph_roundtripping[G2]
FAILED networkx/drawing/tests/test_agraph.py::TestAGraph::test_agraph_roundtripping[G1]
FAILED networkx/drawing/tests/test_agraph.py::TestAGraph::test_agraph_roundtripping[G3]
============================= 9 failed, 6237 passed, 12 skipped, 11 xfailed, 36 warnings in 104.94s (0:01:44) =============================
</pre>
</details>
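Side note on the pydot failures above: they all die on the same statement, `data = path.read()`, after `path` has become an `itertools._tee`. A minimal sketch of that failure mode, assuming the test-backend conversion layer tees positional arguments so it can replay them (the `tee` call itself is my assumption; only the resulting `_tee` type is visible in the traceback):

```python
import io
import itertools

fh = io.StringIO("graph G { A -- B }")

# tee() only needs an iterable, so it wraps the file object as a plain
# line iterator and returns itertools._tee objects...
copy1, copy2 = itertools.tee(fh)

# ...which support iteration but expose none of the file API:
print(next(copy2))  # graph G { A -- B }
copy1.read()        # AttributeError: 'itertools._tee' object has no attribute 'read'
```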
Can confirm I could reproduce the seg fault errors. Graphviz makes no promises about being thread-safe, but I'm unsure why only the backend causes the faults...
As discussed at the last community meeting, I went ahead and ran the tests both with and without xdist + with and without backends to get a clearer picture of when the failures occur. I can confirm that the agraph tests fail as above both with and without xdist:
1. No xdist, no backend: `pytest --doctest-modules --pyargs networkx` | PASS
2. xdist, no backend: `pytest --doctest-modules -n auto --pyargs networkx` | PASS
3. No xdist, with backend: `NETWORKX_TEST_BACKEND=nx-loopback pytest --doctest-modules --pyargs networkx` | FAIL
4. xdist, with backend: `NETWORKX_TEST_BACKEND=nx-loopback pytest --doctest-modules -n auto --pyargs networkx` | FAIL
So @eriknw's suspicions are confirmed: bullet 3 indicates that the problem is not necessarily related to xdist, but rather to the backend machinery's interaction with pygraphviz.
I think I have seen something similar before; I'm not sure about the segfault, but maybe we need to use `threadpoolctl` here?
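(For context, this is the sort of guard `threadpoolctl` provides; whether it applies to graphviz, which is not a BLAS/OpenMP-style pool, is an open question, and `run_suite` below is just a placeholder:)

```python
from threadpoolctl import threadpool_limits

# Cap native thread pools (OpenMP, BLAS, ...) to a single thread for the
# duration of the block.
with threadpool_limits(limits=1):
    run_suite()  # placeholder for whatever invokes the failing tests
```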
Maybe tangential but scipy ran into something that could be helpful (?) https://github.com/scipy/scipy/pull/14441
> but maybe we need to use threadpoolctl here?
From the previous set of tests, this doesn't seem to stem from parallelism but rather from nx-loopback itself. I haven't dug down into how the loopback interface works though!
Narrowing it down to a single test (quicker to reproduce):
```
NETWORKX_TEST_BACKEND=nx-loopback pytest networkx -k "test_agraph_roundtripping" # FAILS
pytest networkx -k "test_agraph_roundtripping" # PASSES
```
and the error is:
<details close>
<summary>pytest log</summary>
<br>
```
platform linux -- Python 3.10.12, pytest-8.0.2, pluggy-1.4.0
rootdir: /mnt/c/dev/networkx
plugins: cov-4.1.0, xdist-3.5.0
collected 5556 items / 5552 deselected / 4 selected
networkx/drawing/tests/test_agraph.py Fatal Python error: Segmentation fault
Current thread 0x00007f83f4bd1000 (most recent call first):
File "/mnt/c/dev/networkx/.venv/lib/python3.10/site-packages/pygraphviz/graphviz.py", line 226 in agnameof
File "/mnt/c/dev/networkx/.venv/lib/python3.10/site-packages/pygraphviz/agraph.py", line 228 in __repr__
File "/mnt/c/dev/networkx/.venv/lib/python3.10/site-packages/_pytest/_io/saferepr.py", line 73 in repr_instance
File "/usr/lib/python3.10/reprlib.py", line 62 in repr1
File "/usr/lib/python3.10/reprlib.py", line 52 in repr
File "/mnt/c/dev/networkx/.venv/lib/python3.10/site-packages/_pytest/_io/saferepr.py", line 61 in repr
File "/mnt/c/dev/networkx/.venv/lib/python3.10/site-packages/_pytest/_io/saferepr.py", line 111 in saferepr
File "/mnt/c/dev/networkx/.venv/lib/python3.10/site-packages/_pytest/_code/code.py", line 842 in repr_args
File "/mnt/c/dev/networkx/.venv/lib/python3.10/site-packages/_pytest/_code/code.py", line 938 in repr_traceback_entry
File "/mnt/c/dev/networkx/.venv/lib/python3.10/site-packages/_pytest/_code/code.py", line 993 in <listcomp>
File "/mnt/c/dev/networkx/.venv/lib/python3.10/site-packages/_pytest/_code/code.py", line 992 in repr_traceback
File "/mnt/c/dev/networkx/.venv/lib/python3.10/site-packages/_pytest/_code/code.py", line 1063 in repr_excinfo
File "/mnt/c/dev/networkx/.venv/lib/python3.10/site-packages/_pytest/_code/code.py", line 698 in getrepr
File "/mnt/c/dev/networkx/.venv/lib/python3.10/site-packages/_pytest/nodes.py", line 496 in _repr_failure_py
File "/mnt/c/dev/networkx/.venv/lib/python3.10/site-packages/_pytest/python.py", line 1874 in repr_failure
File "/mnt/c/dev/networkx/.venv/lib/python3.10/site-packages/_pytest/reports.py", line 363 in from_item_and_call
File "/mnt/c/dev/networkx/.venv/lib/python3.10/site-packages/_pytest/runner.py", line 369 in pytest_runtest_makereport
File "/mnt/c/dev/networkx/.venv/lib/python3.10/site-packages/pluggy/_callers.py", line 102 in _multicall
File "/mnt/c/dev/networkx/.venv/lib/python3.10/site-packages/pluggy/_manager.py", line 119 in _hookexec
File "/mnt/c/dev/networkx/.venv/lib/python3.10/site-packages/pluggy/_hooks.py", line 501 in __call__
File "/mnt/c/dev/networkx/.venv/lib/python3.10/site-packages/_pytest/runner.py", line 225 in call_and_report
File "/mnt/c/dev/networkx/.venv/lib/python3.10/site-packages/_pytest/runner.py", line 134 in runtestprotocol
File "/mnt/c/dev/networkx/.venv/lib/python3.10/site-packages/_pytest/runner.py", line 115 in pytest_runtest_protocol
File "/mnt/c/dev/networkx/.venv/lib/python3.10/site-packages/pluggy/_callers.py", line 102 in _multicall
File "/mnt/c/dev/networkx/.venv/lib/python3.10/site-packages/pluggy/_manager.py", line 119 in _hookexec
File "/mnt/c/dev/networkx/.venv/lib/python3.10/site-packages/pluggy/_hooks.py", line 501 in __call__
File "/mnt/c/dev/networkx/.venv/lib/python3.10/site-packages/_pytest/main.py", line 352 in pytest_runtestloop
File "/mnt/c/dev/networkx/.venv/lib/python3.10/site-packages/pluggy/_callers.py", line 102 in _multicall
File "/mnt/c/dev/networkx/.venv/lib/python3.10/site-packages/pluggy/_manager.py", line 119 in _hookexec
File "/mnt/c/dev/networkx/.venv/lib/python3.10/site-packages/pluggy/_hooks.py", line 501 in __call__
File "/mnt/c/dev/networkx/.venv/lib/python3.10/site-packages/_pytest/main.py", line 327 in _main
File "/mnt/c/dev/networkx/.venv/lib/python3.10/site-packages/_pytest/main.py", line 273 in wrap_session
File "/mnt/c/dev/networkx/.venv/lib/python3.10/site-packages/_pytest/main.py", line 320 in pytest_cmdline_main
File "/mnt/c/dev/networkx/.venv/lib/python3.10/site-packages/pluggy/_callers.py", line 102 in _multicall
File "/mnt/c/dev/networkx/.venv/lib/python3.10/site-packages/pluggy/_manager.py", line 119 in _hookexec
File "/mnt/c/dev/networkx/.venv/lib/python3.10/site-packages/pluggy/_hooks.py", line 501 in __call__
File "/mnt/c/dev/networkx/.venv/lib/python3.10/site-packages/_pytest/config/__init__.py", line 175 in main
File "/mnt/c/dev/networkx/.venv/lib/python3.10/site-packages/_pytest/config/__init__.py", line 198 in console_main
File "/mnt/c/dev/networkx/.venv/bin/pytest", line 8 in <module>
Extension modules: numpy.core._multiarray_umath, numpy.core._multiarray_tests, numpy.linalg._umath_linalg, numpy.fft._pocketfft_internal, numpy.random._common, numpy.random.bit_generator, numpy.random._bounded_integers, numpy.random._mt19937, numpy.random.mtrand, numpy.random._philox, numpy.random._pcg64, numpy.random._sfc64, numpy.random._generator, scipy._lib._ccallback_c, matplotlib._c_internal_utils, PIL._imaging, matplotlib._path, kiwisolver._cext, pandas._libs.tslibs.ccalendar, pandas._libs.tslibs.np_datetime, pandas._libs.tslibs.dtypes, pandas._libs.tslibs.base, pandas._libs.tslibs.nattype, pandas._libs.tslibs.timezones, pandas._libs.tslibs.fields, pandas._libs.tslibs.timedeltas, pandas._libs.tslibs.tzconversion, pandas._libs.tslibs.timestamps, pandas._libs.properties, pandas._libs.tslibs.offsets, pandas._libs.tslibs.strptime, pandas._libs.tslibs.parsing, pandas._libs.tslibs.conversion, pandas._libs.tslibs.period, pandas._libs.tslibs.vectorized, pandas._libs.ops_dispatch, pandas._libs.missing, pandas._libs.hashtable, pandas._libs.algos, pandas._libs.interval, pandas._libs.lib, pandas._libs.ops, pandas._libs.hashing, pandas._libs.arrays, pandas._libs.tslib, pandas._libs.sparse, pandas._libs.internals, pandas._libs.indexing, pandas._libs.index, pandas._libs.writers, pandas._libs.join, pandas._libs.window.aggregations, pandas._libs.window.indexers, pandas._libs.reshape, pandas._libs.groupby, pandas._libs.json, pandas._libs.parsers, pandas._libs.testing, pygraphviz._graphviz, scipy.sparse._sparsetools, _csparsetools, scipy.sparse._csparsetools, scipy.linalg._fblas, scipy.linalg._flapack, scipy.linalg.cython_lapack, scipy.linalg._cythonized_array_utils, scipy.linalg._solve_toeplitz, scipy.linalg._flinalg, scipy.linalg._decomp_lu_cython, scipy.linalg._matfuncs_sqrtm_triu, scipy.linalg.cython_blas, scipy.linalg._matfuncs_expm, scipy.linalg._decomp_update, scipy.sparse.linalg._dsolve._superlu, scipy.sparse.linalg._eigen.arpack._arpack, scipy.sparse.csgraph._tools, scipy.sparse.csgraph._shortest_path, scipy.sparse.csgraph._traversal, scipy.sparse.csgraph._min_spanning_tree, scipy.sparse.csgraph._flow, scipy.sparse.csgraph._matching, scipy.sparse.csgraph._reordering, matplotlib._image (total: 83)
Segmentation fault (core dumped)
```
<br>
</details>
---
Edit: It seems the issue arises when a file handle is passed to `networkx.nx_agraph.read_dot`. Exchanging
```python
with open(fname) as fh:
Hin = nx.nx_agraph.read_dot(fh)
```
for
```python
Hin = nx.nx_agraph.read_dot(fname)
```
[here](https://github.com/networkx/networkx/blob/f0b5a6d884ac7e303f2e2092d3a0a48723815239/networkx/drawing/tests/test_agraph.py#L44-L45) causes the test to no longer error for me. Interestingly the write method is fine either way.
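For reference, the two call styles side by side, runnable outside pytest (the file name and tiny graph are just illustrative):

```python
import networkx as nx

fname = "fh_test.dot"
nx.nx_agraph.write_dot(nx.path_graph(2), fname)  # create something to read back

# Passing the filename lets pygraphviz open the file itself.
H1 = nx.nx_agraph.read_dot(fname)

# Passing an open handle is the variant that crashes under the loopback backend.
with open(fname) as fh:
    H2 = nx.nx_agraph.read_dot(fh)
```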
Also, running the test code in a plain Python file works fine too. It seems like the seg fault happens due to something pytest is doing with the handle/AGraph.
---
Edit 2: I've managed to circumvent the seg fault: it's happening when pytest tries to call `repr` on the `pygraphviz.AGraph` during an error trace. I patched `lambda _: "<redacted>"` onto `AGraph.__repr__`.
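A minimal sketch of that patch (dropping it into a local `conftest.py` so it runs before the tests is my assumption about placement):

```python
import pygraphviz

# Debugging aid only: pytest's saferepr calls repr() on local variables
# while rendering a traceback, and AGraph.__repr__ -> graphviz.agnameof()
# is the frame that segfaults in the traces above.
pygraphviz.AGraph.__repr__ = lambda self: "<redacted>"
```

With that in place, saferepr no longer touches graphviz, and now I believe the real error is: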
<details close>
<summary>pytest log 2</summary>
<br>
```python
<class 'networkx.utils.decorators.argmap'> compilation 117:3: in argmap_read_dot_114
???
networkx/utils/backends.py:662: in __call__
return self._convert_and_call_for_tests(
networkx/utils/backends.py:1248: in _convert_and_call_for_tests
G = self.orig_func(*args2, **kwargs2)
networkx/drawing/nx_agraph.py:222: in read_dot
A = pygraphviz.AGraph(file=path)
.venv/lib/python3.10/site-packages/pygraphviz/agraph.py:157: in __init__
self.read(filename)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <redacted>, path = <_io.TextIOWrapper name='/tmp/pytest-of-aaronzo/pytest-109/test_agraph_roundtripping_G0_0/fh_test.dot' mode='r' encoding='UTF-8'>
def read(self, path):
"""Read graph from dot format file on path.
path can be a file name or file handle
use::
G.read('file.dot')
"""
fh = self._get_fh(path)
try:
self._close_handle()
try:
self.handle = gv.agread(fh, None)
except ValueError:
> raise DotError("Invalid Input")
E pygraphviz.agraph.DotError: Invalid Input
.venv/lib/python3.10/site-packages/pygraphviz/agraph.py:1252: DotError
```
<br>
</details> | 2024-03-30T00:17:06 |
|
networkx/networkx | 7,388 | networkx__networkx-7388 | [
"7383"
] | fbb2d82dc7b69f104eedd4f8630c4ab6e1b6713e | diff --git a/networkx/conftest.py b/networkx/conftest.py
--- a/networkx/conftest.py
+++ b/networkx/conftest.py
@@ -241,25 +241,31 @@ def add_nx(doctest_namespace):
"algorithms/centrality/current_flow_betweenness_subset.py",
"algorithms/centrality/eigenvector.py",
"algorithms/centrality/katz.py",
+ "algorithms/centrality/laplacian.py",
"algorithms/centrality/second_order.py",
"algorithms/centrality/subgraph_alg.py",
"algorithms/communicability_alg.py",
+ "algorithms/community/divisive.py",
+ "algorithms/distance_measures.py",
"algorithms/link_analysis/hits_alg.py",
"algorithms/link_analysis/pagerank_alg.py",
"algorithms/node_classification.py",
"algorithms/similarity.py",
+ "algorithms/tree/mst.py",
+ "algorithms/walks.py",
"convert_matrix.py",
"drawing/layout.py",
+ "drawing/nx_pylab.py",
"generators/spectral_graph_forge.py",
"generators/expanders.py",
"linalg/algebraicconnectivity.py",
"linalg/attrmatrix.py",
"linalg/bethehessianmatrix.py",
"linalg/graphmatrix.py",
+ "linalg/laplacianmatrix.py",
"linalg/modularitymatrix.py",
"linalg/spectrum.py",
"utils/rcm.py",
- "algorithms/centrality/laplacian.py",
]
needs_matplotlib = ["drawing/nx_pylab.py"]
needs_pandas = ["convert_matrix.py"]
| diff --git a/.github/workflows/test.yml b/.github/workflows/test.yml
--- a/.github/workflows/test.yml
+++ b/.github/workflows/test.yml
@@ -58,6 +58,30 @@ jobs:
run: |
pytest --doctest-modules --durations=10 --pyargs networkx
+ default-without-scipy:
+ runs-on: ubuntu-latest
+ steps:
+ - uses: actions/checkout@v4
+ - name: Set up Python
+ uses: actions/setup-python@v5
+ with:
+ python-version: "3.12"
+
+ - name: Install packages
+ run: |
+ python -m pip install --upgrade pip
+ python -m pip install -r requirements/default.txt -r requirements/test.txt
+ python -m pip uninstall -y scipy # All default dependencies except scipy
+ python -m pip install .
+ python -m pip list
+
+ - name: Test for warnings at import time
+ run: python -Werror -c "import networkx"
+
+ - name: Test NetworkX
+ run: |
+ pytest --doctest-modules --durations=10 --pyargs networkx
+
dispatch:
runs-on: ubuntu-latest
strategy:
diff --git a/networkx/algorithms/community/tests/test_divisive.py b/networkx/algorithms/community/tests/test_divisive.py
--- a/networkx/algorithms/community/tests/test_divisive.py
+++ b/networkx/algorithms/community/tests/test_divisive.py
@@ -41,7 +41,7 @@ def test_edge_betweenness_partition():
def test_edge_current_flow_betweenness_partition():
- pytest.importorskip("numpy")
+ pytest.importorskip("scipy")
G = nx.barbell_graph(3, 0)
C = nx.community.edge_current_flow_betweenness_partition(G, 2)
diff --git a/networkx/algorithms/tests/test_cycles.py b/networkx/algorithms/tests/test_cycles.py
--- a/networkx/algorithms/tests/test_cycles.py
+++ b/networkx/algorithms/tests/test_cycles.py
@@ -11,10 +11,9 @@
def check_independent(basis):
if len(basis) == 0:
return
- try:
- import numpy as np
- except ImportError:
- return
+
+ np = pytest.importorskip("numpy")
+ sp = pytest.importorskip("scipy") # Required by incidence_matrix
H = nx.Graph()
for b in basis:
diff --git a/networkx/algorithms/tests/test_distance_measures.py b/networkx/algorithms/tests/test_distance_measures.py
--- a/networkx/algorithms/tests/test_distance_measures.py
+++ b/networkx/algorithms/tests/test_distance_measures.py
@@ -324,6 +324,7 @@ class TestResistanceDistance:
def setup_class(cls):
global np
np = pytest.importorskip("numpy")
+ sp = pytest.importorskip("scipy")
def setup_method(self):
G = nx.Graph()
@@ -428,6 +429,7 @@ class TestEffectiveGraphResistance:
def setup_class(cls):
global np
np = pytest.importorskip("numpy")
+ sp = pytest.importorskip("scipy")
def setup_method(self):
G = nx.Graph()
@@ -600,6 +602,7 @@ class TestKemenyConstant:
def setup_class(cls):
global np
np = pytest.importorskip("numpy")
+ sp = pytest.importorskip("scipy")
def setup_method(self):
G = nx.Graph()
diff --git a/networkx/algorithms/tree/tests/test_mst.py b/networkx/algorithms/tree/tests/test_mst.py
--- a/networkx/algorithms/tree/tests/test_mst.py
+++ b/networkx/algorithms/tree/tests/test_mst.py
@@ -737,6 +737,7 @@ class TestNumberSpanningTrees:
def setup_class(cls):
global np
np = pytest.importorskip("numpy")
+ sp = pytest.importorskip("scipy")
def test_nst_disconnected(self):
G = nx.empty_graph(2)
diff --git a/networkx/drawing/tests/test_pylab.py b/networkx/drawing/tests/test_pylab.py
--- a/networkx/drawing/tests/test_pylab.py
+++ b/networkx/drawing/tests/test_pylab.py
@@ -32,7 +32,8 @@ def test_draw():
for function, option in itertools.product(functions, options):
function(barbell, **option)
plt.savefig("test.ps")
-
+ except ModuleNotFoundError: # draw_kamada_kawai requires scipy
+ pass
finally:
try:
os.unlink("test.ps")
| 3.2.1: pytest fails in networkx/drawing/tests/test_pylab.py with `module 'matplotlib' has no attribute 'use'`
With matplotlib 3.8.3 installed, pytest fails while collecting tests with
```console
+ PYTHONPATH=/home/tkloczko/rpmbuild/BUILDROOT/python-networkx-3.2.1-5.fc36.x86_64/usr/lib64/python3.9/site-packages:/home/tkloczko/rpmbuild/BUILDROOT/python-networkx-3.2.1-5.fc36.x86_64/usr/lib/python3.9/site-packages
+ /usr/bin/pytest -ra -m 'not network'
============================= test session starts ==============================
platform linux -- Python 3.9.18, pytest-8.1.1, pluggy-1.4.0
rootdir: /home/tkloczko/rpmbuild/BUILD/networkx-networkx-3.2.1
configfile: pyproject.toml
collected 4831 items / 1 error / 30 skipped
==================================== ERRORS ====================================
____________ ERROR collecting networkx/drawing/tests/test_pylab.py _____________
networkx/drawing/tests/test_pylab.py:10: in <module>
mpl.use("PS")
E AttributeError: module 'matplotlib' has no attribute 'use'
=============================== warnings summary ===============================
networkx/utils/backends.py:135
/home/tkloczko/rpmbuild/BUILD/networkx-networkx-3.2.1/networkx/utils/backends.py:135: RuntimeWarning: networkx backend defined more than once: nx-loopback
backends.update(_get_backends("networkx.backends"))
networkx/utils/backends.py:576
/home/tkloczko/rpmbuild/BUILD/networkx-networkx-3.2.1/networkx/utils/backends.py:576: DeprecationWarning:
random_tree is deprecated and will be removed in NX v3.4
Use random_labeled_tree instead.
return self.orig_func(*args, **kwargs)
-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
```
| Also, I've tested pytest with networkx/drawing/tests/test_pylab.py added to the --ignore list.
Looks like a few tests are missing `scipy` marks.
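The usual guard for this is `pytest.importorskip`, the same pattern the test changes above apply; a minimal sketch:
```python
import pytest

# Skip the test (or, at class/module setup level, the whole group)
# cleanly when SciPy is not installed, instead of failing with
# ModuleNotFoundError deep inside networkx.
sp = pytest.importorskip("scipy")
```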
<details>
<summary>Here is pytest output:</summary>
```console
+ PYTHONPATH=/home/tkloczko/rpmbuild/BUILDROOT/python-networkx-3.2.1-5.fc36.x86_64/usr/lib64/python3.9/site-packages:/home/tkloczko/rpmbuild/BUILDROOT/python-networkx-3.2.1-5.fc36.x86_64/usr/lib/python3.9/site-packages
+ /usr/bin/pytest -ra -m 'not network' --ignore networkx/drawing/tests/test_pylab.py
==================================================================================== test session starts ====================================================================================
platform linux -- Python 3.9.18, pytest-8.1.1, pluggy-1.4.0
rootdir: /home/tkloczko/rpmbuild/BUILD/networkx-networkx-3.2.1
configfile: pyproject.toml
collected 4877 items / 28 skipped
networkx/algorithms/approximation/tests/test_approx_clust_coeff.py ...... [ 0%]
networkx/algorithms/approximation/tests/test_clique.py ........ [ 0%]
networkx/algorithms/approximation/tests/test_connectivity.py .................. [ 0%]
networkx/algorithms/approximation/tests/test_distance_measures.py ........ [ 0%]
networkx/algorithms/approximation/tests/test_dominating_set.py .... [ 0%]
networkx/algorithms/approximation/tests/test_kcomponents.py ................ [ 1%]
networkx/algorithms/approximation/tests/test_matching.py . [ 1%]
networkx/algorithms/approximation/tests/test_maxcut.py ..... [ 1%]
networkx/algorithms/approximation/tests/test_ramsey.py . [ 1%]
networkx/algorithms/approximation/tests/test_steinertree.py .... [ 1%]
networkx/algorithms/approximation/tests/test_traveling_salesman.py ............................ssssssssss...s. [ 2%]
networkx/algorithms/approximation/tests/test_treewidth.py .............. [ 2%]
networkx/algorithms/approximation/tests/test_vertex_cover.py .... [ 2%]
networkx/algorithms/assortativity/tests/test_connectivity.py .......... [ 2%]
networkx/algorithms/assortativity/tests/test_mixing.py ................... [ 3%]
networkx/algorithms/assortativity/tests/test_neighbor_degree.py ...... [ 3%]
networkx/algorithms/assortativity/tests/test_pairs.py ........... [ 3%]
networkx/algorithms/bipartite/tests/test_basic.py ............sss [ 3%]
networkx/algorithms/bipartite/tests/test_centrality.py ....... [ 4%]
networkx/algorithms/bipartite/tests/test_cluster.py ......... [ 4%]
networkx/algorithms/bipartite/tests/test_covering.py .... [ 4%]
networkx/algorithms/bipartite/tests/test_edgelist.py ............... [ 4%]
networkx/algorithms/bipartite/tests/test_extendability.py ........... [ 4%]
networkx/algorithms/bipartite/tests/test_generators.py .......... [ 5%]
networkx/algorithms/bipartite/tests/test_matching.py ............ssssssss [ 5%]
networkx/algorithms/bipartite/tests/test_project.py .................. [ 5%]
networkx/algorithms/bipartite/tests/test_redundancy.py ... [ 5%]
networkx/algorithms/centrality/tests/test_betweenness_centrality.py ......................................... [ 6%]
networkx/algorithms/centrality/tests/test_betweenness_centrality_subset.py ...................... [ 7%]
networkx/algorithms/centrality/tests/test_closeness_centrality.py ............. [ 7%]
networkx/algorithms/centrality/tests/test_degree_centrality.py ....... [ 7%]
networkx/algorithms/centrality/tests/test_dispersion.py .... [ 7%]
networkx/algorithms/centrality/tests/test_group.py ........................ [ 8%]
networkx/algorithms/centrality/tests/test_harmonic_centrality.py ............. [ 8%]
networkx/algorithms/centrality/tests/test_katz_centrality.py ..........sssssssssss..sss [ 9%]
networkx/algorithms/centrality/tests/test_load_centrality.py .................. [ 9%]
networkx/algorithms/centrality/tests/test_percolation_centrality.py .... [ 9%]
networkx/algorithms/centrality/tests/test_reaching.py ............... [ 9%]
networkx/algorithms/centrality/tests/test_voterank.py ...... [ 9%]
networkx/algorithms/coloring/tests/test_coloring.py ................. [ 10%]
networkx/algorithms/community/tests/test_asyn_fluid.py ..... [ 10%]
networkx/algorithms/community/tests/test_centrality.py ..... [ 10%]
networkx/algorithms/community/tests/test_kclique.py ........ [ 10%]
networkx/algorithms/community/tests/test_kernighan_lin.py ........ [ 10%]
networkx/algorithms/community/tests/test_label_propagation.py ....................... [ 11%]
networkx/algorithms/community/tests/test_louvain.py ............. [ 11%]
networkx/algorithms/community/tests/test_lukes.py .... [ 11%]
networkx/algorithms/community/tests/test_modularity_max.py .................. [ 11%]
networkx/algorithms/community/tests/test_quality.py ....... [ 12%]
networkx/algorithms/community/tests/test_utils.py .... [ 12%]
networkx/algorithms/components/tests/test_attracting.py .... [ 12%]
networkx/algorithms/components/tests/test_biconnected.py ............. [ 12%]
networkx/algorithms/components/tests/test_connected.py ......... [ 12%]
networkx/algorithms/components/tests/test_semiconnected.py ........ [ 12%]
networkx/algorithms/components/tests/test_strongly_connected.py ..........F.. [ 13%]
networkx/algorithms/components/tests/test_weakly_connected.py ...... [ 13%]
networkx/algorithms/connectivity/tests/test_connectivity.py .................................. [ 13%]
networkx/algorithms/connectivity/tests/test_cuts.py ..................... [ 14%]
networkx/algorithms/connectivity/tests/test_disjoint_paths.py .................. [ 14%]
networkx/algorithms/connectivity/tests/test_edge_augmentation.py .................... [ 15%]
networkx/algorithms/connectivity/tests/test_edge_kcomponents.py ..................... [ 15%]
networkx/algorithms/connectivity/tests/test_kcomponents.py .sss...... [ 15%]
networkx/algorithms/connectivity/tests/test_kcutsets.py s........s..... [ 16%]
networkx/algorithms/connectivity/tests/test_stoer_wagner.py ..... [ 16%]
networkx/algorithms/flow/tests/test_gomory_hu.py ....s.... [ 16%]
networkx/algorithms/flow/tests/test_maxflow.py ........................... [ 16%]
networkx/algorithms/flow/tests/test_maxflow_large_graph.py ...s.. [ 17%]
networkx/algorithms/flow/tests/test_mincost.py ................... [ 17%]
networkx/algorithms/flow/tests/test_networksimplex.py ...................... [ 17%]
networkx/algorithms/isomorphism/tests/test_ismags.py .......... [ 18%]
networkx/algorithms/isomorphism/tests/test_isomorphism.py .... [ 18%]
networkx/algorithms/isomorphism/tests/test_isomorphvf2.py ................ [ 18%]
networkx/algorithms/isomorphism/tests/test_match_helpers.py .. [ 18%]
networkx/algorithms/isomorphism/tests/test_temporalisomorphvf2.py ............ [ 18%]
networkx/algorithms/isomorphism/tests/test_tree_isomorphism.py ..... [ 18%]
networkx/algorithms/isomorphism/tests/test_vf2pp.py ............................................ [ 19%]
networkx/algorithms/isomorphism/tests/test_vf2pp_helpers.py ............................................. [ 20%]
networkx/algorithms/isomorphism/tests/test_vf2userfunc.py ............................ [ 21%]
networkx/algorithms/minors/tests/test_contraction.py ............................... [ 21%]
networkx/algorithms/operators/tests/test_all.py ................... [ 22%]
networkx/algorithms/operators/tests/test_binary.py .................... [ 22%]
networkx/algorithms/operators/tests/test_product.py ............................ [ 23%]
networkx/algorithms/operators/tests/test_unary.py ... [ 23%]
networkx/algorithms/shortest_paths/tests/test_astar.py ................ [ 23%]
networkx/algorithms/shortest_paths/tests/test_dense.py ........ [ 23%]
networkx/algorithms/shortest_paths/tests/test_dense_numpy.py ....... [ 24%]
networkx/algorithms/shortest_paths/tests/test_generic.py .......................... [ 24%]
networkx/algorithms/shortest_paths/tests/test_unweighted.py ................. [ 24%]
networkx/algorithms/shortest_paths/tests/test_weighted.py ........................................................ [ 26%]
networkx/algorithms/tests/test_asteroidal.py . [ 26%]
networkx/algorithms/tests/test_boundary.py ............. [ 26%]
networkx/algorithms/tests/test_bridges.py .......... [ 26%]
networkx/algorithms/tests/test_chains.py ..... [ 26%]
networkx/algorithms/tests/test_chordal.py .......... [ 26%]
networkx/algorithms/tests/test_clique.py ............ [ 27%]
networkx/algorithms/tests/test_cluster.py ........................................... [ 28%]
networkx/algorithms/tests/test_core.py ............... [ 28%]
networkx/algorithms/tests/test_covering.py ........... [ 28%]
networkx/algorithms/tests/test_cuts.py ................. [ 28%]
networkx/algorithms/tests/test_cycles.py ..............................................FF.FF.......... [ 30%]
networkx/algorithms/tests/test_d_separation.py ............... [ 30%]
networkx/algorithms/tests/test_dag.py ............................................................ [ 31%]
networkx/algorithms/tests/test_distance_measures.py ..............................................FFFFF.FFFF.....FFFFFFFFFF [ 33%]
networkx/algorithms/tests/test_distance_regular.py ....... [ 33%]
networkx/algorithms/tests/test_dominance.py ...................... [ 33%]
networkx/algorithms/tests/test_dominating.py ..... [ 33%]
networkx/algorithms/tests/test_efficiency.py ....... [ 33%]
networkx/algorithms/tests/test_euler.py ................................ [ 34%]
networkx/algorithms/tests/test_graph_hashing.py ........................ [ 35%]
networkx/algorithms/tests/test_graphical.py ............. [ 35%]
networkx/algorithms/tests/test_hierarchy.py ..... [ 35%]
networkx/algorithms/tests/test_hybrid.py .. [ 35%]
networkx/algorithms/tests/test_isolate.py ... [ 35%]
networkx/algorithms/tests/test_link_prediction.py ......................................................................... [ 37%]
networkx/algorithms/tests/test_lowest_common_ancestors.py ....................................................... [ 38%]
networkx/algorithms/tests/test_matching.py ................................................ [ 39%]
networkx/algorithms/tests/test_max_weight_clique.py ..... [ 39%]
networkx/algorithms/tests/test_mis.py ....... [ 39%]
networkx/algorithms/tests/test_moral.py . [ 39%]
networkx/algorithms/tests/test_non_randomness.py ...... [ 39%]
networkx/algorithms/tests/test_planar_drawing.py ............ [ 39%]
networkx/algorithms/tests/test_planarity.py .............................. [ 40%]
networkx/algorithms/tests/test_reciprocity.py ..... [ 40%]
networkx/algorithms/tests/test_regular.py ............. [ 40%]
networkx/algorithms/tests/test_richclub.py ......... [ 41%]
networkx/algorithms/tests/test_similarity.py sssssssssssssssssssssssssssssssssssssssssssss [ 41%]
networkx/algorithms/tests/test_simple_paths.py .......................................................................... [ 43%]
networkx/algorithms/tests/test_smallworld.py ...... [ 43%]
networkx/algorithms/tests/test_smetric.py .. [ 43%]
networkx/algorithms/tests/test_sparsifiers.py ....... [ 43%]
networkx/algorithms/tests/test_structuralholes.py ............. [ 44%]
networkx/algorithms/tests/test_summarization.py ................. [ 44%]
networkx/algorithms/tests/test_swap.py ..................... [ 44%]
networkx/algorithms/tests/test_threshold.py ................s. [ 45%]
networkx/algorithms/tests/test_time_dependent.py ............ [ 45%]
networkx/algorithms/tests/test_tournament.py ...............s..... [ 45%]
networkx/algorithms/tests/test_triads.py ................ [ 46%]
networkx/algorithms/tests/test_vitality.py ...... [ 46%]
networkx/algorithms/tests/test_voronoi.py .......... [ 46%]
networkx/algorithms/tests/test_wiener.py .... [ 46%]
networkx/algorithms/traversal/tests/test_beamsearch.py ... [ 46%]
networkx/algorithms/traversal/tests/test_bfs.py ................... [ 47%]
networkx/algorithms/traversal/tests/test_dfs.py .................. [ 47%]
networkx/algorithms/traversal/tests/test_edgebfs.py ................ [ 47%]
networkx/algorithms/traversal/tests/test_edgedfs.py ............... [ 48%]
networkx/algorithms/tree/tests/test_branchings.py ................................ [ 48%]
networkx/algorithms/tree/tests/test_coding.py .............. [ 48%]
networkx/algorithms/tree/tests/test_decomposition.py ..... [ 49%]
networkx/algorithms/tree/tests/test_mst.py ....................................................ssss [ 50%]
networkx/algorithms/tree/tests/test_operations.py .... [ 50%]
networkx/algorithms/tree/tests/test_recognition.py ......................... [ 50%]
networkx/classes/tests/test_coreviews.py ........................................................ [ 51%]
networkx/classes/tests/test_digraph.py .................................................................................... [ 53%]
networkx/classes/tests/test_digraph_historical.py .......................................... [ 54%]
networkx/classes/tests/test_filters.py ........... [ 54%]
networkx/classes/tests/test_function.py ....................................................................... [ 56%]
networkx/classes/tests/test_graph.py ................................................................ [ 57%]
networkx/classes/tests/test_graph_historical.py .................................. [ 58%]
networkx/classes/tests/test_graphviews.py ................................... [ 58%]
networkx/classes/tests/test_multidigraph.py ......................................................................................................................................... [ 61%]
.................................................. [ 62%]
networkx/classes/tests/test_multigraph.py ........................................................................................................................................... [ 65%]
.............. [ 65%]
networkx/classes/tests/test_reportviews.py .......................................................................................................................................... [ 68%]
....................................................................................................... [ 70%]
networkx/classes/tests/test_special.py .............................................................................................................................................. [ 73%]
..................................................................................................................................................................................... [ 77%]
......................... [ 78%]
networkx/classes/tests/test_subgraphviews.py ................................ [ 78%]
networkx/drawing/tests/test_latex.py ...... [ 78%]
networkx/generators/tests/test_atlas.py ........ [ 78%]
networkx/generators/tests/test_classic.py ............................................ [ 79%]
networkx/generators/tests/test_cographs.py . [ 79%]
networkx/generators/tests/test_community.py ...................... [ 80%]
networkx/generators/tests/test_degree_seq.py ................... [ 80%]
networkx/generators/tests/test_directed.py .............. [ 81%]
networkx/generators/tests/test_duplication.py ....... [ 81%]
networkx/generators/tests/test_ego.py .. [ 81%]
networkx/generators/tests/test_expanders.py .....sssss................ [ 81%]
networkx/generators/tests/test_geometric.py .............................. [ 82%]
networkx/generators/tests/test_harary_graph.py .. [ 82%]
networkx/generators/tests/test_internet_as_graphs.py ..... [ 82%]
networkx/generators/tests/test_intersection.py .... [ 82%]
networkx/generators/tests/test_interval_graph.py ........ [ 82%]
networkx/generators/tests/test_joint_degree_seq.py .... [ 82%]
networkx/generators/tests/test_lattice.py ....................... [ 83%]
networkx/generators/tests/test_line.py ................................... [ 84%]
networkx/generators/tests/test_mycielski.py ... [ 84%]
networkx/generators/tests/test_nonisomorphic_trees.py ..... [ 84%]
networkx/generators/tests/test_random_clustered.py .... [ 84%]
networkx/generators/tests/test_random_graphs.py ..................................................................... [ 85%]
networkx/generators/tests/test_small.py ...................................... [ 86%]
networkx/generators/tests/test_stochastic.py ....... [ 86%]
networkx/generators/tests/test_sudoku.py ...... [ 86%]
networkx/generators/tests/test_time_series.py ....... [ 86%]
networkx/generators/tests/test_trees.py .................. [ 87%]
networkx/generators/tests/test_triads.py .. [ 87%]
networkx/linalg/tests/test_algebraic_connectivity.py sssss............ssssssssssssssssssssssssssssssssss.........ssssssssssssssssssssssssssss [ 89%]
networkx/linalg/tests/test_attrmatrix.py ...ss [ 89%]
networkx/readwrite/json_graph/tests/test_adjacency.py ........ [ 89%]
networkx/readwrite/json_graph/tests/test_cytoscape.py ....... [ 89%]
networkx/readwrite/json_graph/tests/test_node_link.py ........... [ 89%]
networkx/readwrite/json_graph/tests/test_tree.py ... [ 89%]
networkx/readwrite/tests/test_adjlist.py .................. [ 90%]
networkx/readwrite/tests/test_edgelist.py .......................... [ 90%]
networkx/readwrite/tests/test_gexf.py ..................... [ 91%]
networkx/readwrite/tests/test_gml.py ......................... [ 91%]
networkx/readwrite/tests/test_graph6.py ............................... [ 92%]
networkx/readwrite/tests/test_graphml.py ........................................................... [ 93%]
networkx/readwrite/tests/test_leda.py .. [ 93%]
networkx/readwrite/tests/test_p2g.py ... [ 93%]
networkx/readwrite/tests/test_pajek.py ........ [ 93%]
networkx/readwrite/tests/test_sparse6.py ................ [ 94%]
networkx/readwrite/tests/test_text.py ................................. [ 94%]
networkx/tests/test_all_random_functions.py s [ 94%]
networkx/tests/test_convert.py ............... [ 95%]
networkx/tests/test_convert_numpy.py .................................................. [ 96%]
networkx/tests/test_convert_pandas.py ...................... [ 96%]
networkx/tests/test_exceptions.py ....... [ 96%]
networkx/tests/test_import.py .. [ 96%]
networkx/tests/test_lazy_imports.py .... [ 96%]
networkx/tests/test_relabel.py .............................. [ 97%]
networkx/utils/tests/test__init.py . [ 97%]
networkx/utils/tests/test_decorators.py ................................... [ 98%]
networkx/utils/tests/test_heaps.py .. [ 98%]
networkx/utils/tests/test_mapped_queue.py .............................................. [ 99%]
networkx/utils/tests/test_misc.py ............................... [ 99%]
networkx/utils/tests/test_random_sequence.py .... [ 99%]
networkx/utils/tests/test_rcm.py .. [ 99%]
networkx/utils/tests/test_unionfind.py ..... [100%]
========================================================================================= FAILURES ==========================================================================================
________________________________________________________________________ TestStronglyConnected.test_connected_raise _________________________________________________________________________
self = <networkx.algorithms.components.tests.test_strongly_connected.TestStronglyConnected object at 0x7f09dc544430>
def test_connected_raise(self):
G = nx.Graph()
with pytest.raises(NetworkXNotImplemented):
next(nx.strongly_connected_components(G))
with pytest.raises(NetworkXNotImplemented):
next(nx.kosaraju_strongly_connected_components(G))
with pytest.raises(NetworkXNotImplemented):
with pytest.deprecated_call():
> next(nx.strongly_connected_components_recursive(G))
networkx/algorithms/components/tests/test_strongly_connected.py:187:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
<class 'networkx.utils.decorators.argmap'> compilation 498:3: in argmap_strongly_connected_components_recursive_495
import gzip
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
g = <networkx.classes.graph.Graph object at 0x7f09dab59e20>
def _not_implemented_for(g):
if (mval is None or mval == g.is_multigraph()) and (
dval is None or dval == g.is_directed()
):
> raise nx.NetworkXNotImplemented(errmsg)
E networkx.exception.NetworkXNotImplemented: not implemented for undirected type
networkx/utils/decorators.py:90: NetworkXNotImplemented
During handling of the above exception, another exception occurred:
self = <networkx.algorithms.components.tests.test_strongly_connected.TestStronglyConnected object at 0x7f09dc544430>
def test_connected_raise(self):
G = nx.Graph()
with pytest.raises(NetworkXNotImplemented):
next(nx.strongly_connected_components(G))
with pytest.raises(NetworkXNotImplemented):
next(nx.kosaraju_strongly_connected_components(G))
with pytest.raises(NetworkXNotImplemented):
with pytest.deprecated_call():
> next(nx.strongly_connected_components_recursive(G))
E Failed: DID NOT WARN. No warnings of type (<class 'DeprecationWarning'>, <class 'PendingDeprecationWarning'>, <class 'FutureWarning'>) were emitted.
E Emitted warnings: [].
networkx/algorithms/components/tests/test_strongly_connected.py:187: Failed
_________________________________________________________________________ TestMinimumCycleBasis.test_dimensionality _________________________________________________________________________
self = <networkx.algorithms.tests.test_cycles.TestMinimumCycleBasis object at 0x7f09dbb9aca0>
def test_dimensionality(self):
# checks |MCB|=|E|-|V|+|NC|
ntrial = 10
for seed in range(1234, 1234 + ntrial):
rg = nx.erdos_renyi_graph(10, 0.3, seed=seed)
nnodes = rg.number_of_nodes()
nedges = rg.number_of_edges()
ncomp = nx.number_connected_components(rg)
mcb = nx.minimum_cycle_basis(rg)
assert len(mcb) == nedges - nnodes + ncomp
> check_independent(mcb)
networkx/algorithms/tests/test_cycles.py:883:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
networkx/algorithms/tests/test_cycles.py:22: in check_independent
inc = nx.incidence_matrix(H, oriented=True)
networkx/utils/backends.py:576: in __call__
return self.orig_func(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
G = <networkx.classes.graph.Graph object at 0x7f09d913e670>, nodelist = None, edgelist = None, oriented = True, weight = None
@nx._dispatch(edge_attrs="weight")
def incidence_matrix(
G, nodelist=None, edgelist=None, oriented=False, weight=None, *, dtype=None
):
"""Returns incidence matrix of G.
The incidence matrix assigns each row to a node and each column to an edge.
For a standard incidence matrix a 1 appears wherever a row's node is
incident on the column's edge. For an oriented incidence matrix each
edge is assigned an orientation (arbitrarily for undirected and aligning to
direction for directed). A -1 appears for the source (tail) of an edge and
1 for the destination (head) of the edge. The elements are zero otherwise.
Parameters
----------
G : graph
A NetworkX graph
nodelist : list, optional (default= all nodes in G)
The rows are ordered according to the nodes in nodelist.
If nodelist is None, then the ordering is produced by G.nodes().
edgelist : list, optional (default= all edges in G)
The columns are ordered according to the edges in edgelist.
If edgelist is None, then the ordering is produced by G.edges().
oriented: bool, optional (default=False)
If True, matrix elements are +1 or -1 for the head or tail node
respectively of each edge. If False, +1 occurs at both nodes.
weight : string or None, optional (default=None)
The edge data key used to provide each value in the matrix.
If None, then each edge has weight 1. Edge weights, if used,
should be positive so that the orientation can provide the sign.
dtype : a NumPy dtype or None (default=None)
The dtype of the output sparse array. This type should be a compatible
type of the weight argument, eg. if weight would return a float this
argument should also be a float.
If None, then the default for SciPy is used.
Returns
-------
A : SciPy sparse array
The incidence matrix of G.
Notes
-----
For MultiGraph/MultiDiGraph, the edges in edgelist should be
(u,v,key) 3-tuples.
"Networks are the best discrete model for so many problems in
applied mathematics" [1]_.
References
----------
.. [1] Gil Strang, Network applications: A = incidence matrix,
http://videolectures.net/mit18085f07_strang_lec03/
"""
> import scipy as sp
E ModuleNotFoundError: No module named 'scipy'
networkx/linalg/graphmatrix.py:68: ModuleNotFoundError
_________________________________________________________________________ TestMinimumCycleBasis.test_complete_graph _________________________________________________________________________
self = <networkx.algorithms.tests.test_cycles.TestMinimumCycleBasis object at 0x7f09dbe80a00>
def test_complete_graph(self):
cg = nx.complete_graph(5)
mcb = nx.minimum_cycle_basis(cg)
assert all(len(cycle) == 3 for cycle in mcb)
> check_independent(mcb)
networkx/algorithms/tests/test_cycles.py:889:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
networkx/algorithms/tests/test_cycles.py:22: in check_independent
inc = nx.incidence_matrix(H, oriented=True)
networkx/utils/backends.py:576: in __call__
return self.orig_func(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
G = <networkx.classes.graph.Graph object at 0x7f09d948e850>, nodelist = None, edgelist = None, oriented = True, weight = None
    [source of incidence_matrix elided; identical to the dump in test_dimensionality above]
> import scipy as sp
E ModuleNotFoundError: No module named 'scipy'
networkx/linalg/graphmatrix.py:68: ModuleNotFoundError
_________________________________________________________________________ TestMinimumCycleBasis.test_petersen_graph _________________________________________________________________________
self = <networkx.algorithms.tests.test_cycles.TestMinimumCycleBasis object at 0x7f09dbe80190>
def test_petersen_graph(self):
G = nx.petersen_graph()
mcb = list(nx.minimum_cycle_basis(G))
expected = [
[4, 9, 7, 5, 0],
[1, 2, 3, 4, 0],
[1, 6, 8, 5, 0],
[4, 3, 8, 5, 0],
[1, 6, 9, 4, 0],
[1, 2, 7, 5, 0],
]
assert len(mcb) == len(expected)
assert all(c in expected for c in mcb)
# check that order of the nodes is a path
for c in mcb:
assert all(G.has_edge(u, v) for u, v in nx.utils.pairwise(c, cyclic=True))
# check independence of the basis
> check_independent(mcb)
networkx/algorithms/tests/test_cycles.py:913:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
networkx/algorithms/tests/test_cycles.py:22: in check_independent
inc = nx.incidence_matrix(H, oriented=True)
networkx/utils/backends.py:576: in __call__
return self.orig_func(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
G = <networkx.classes.graph.Graph object at 0x7f09d903cb80>, nodelist = None, edgelist = None, oriented = True, weight = None
    [source of incidence_matrix elided; identical to the dump in test_dimensionality above]
> import scipy as sp
E ModuleNotFoundError: No module named 'scipy'
networkx/linalg/graphmatrix.py:68: ModuleNotFoundError
____________________________________________________________ TestMinimumCycleBasis.test_gh6787_variable_weighted_complete_graph _____________________________________________________________
self = <networkx.algorithms.tests.test_cycles.TestMinimumCycleBasis object at 0x7f09dbe80e50>
def test_gh6787_variable_weighted_complete_graph(self):
N = 8
cg = nx.complete_graph(N)
cg.add_weighted_edges_from([(u, v, 9) for u, v in cg.edges])
cg.add_weighted_edges_from([(u, v, 1) for u, v in nx.cycle_graph(N).edges])
mcb = nx.minimum_cycle_basis(cg, weight="weight")
> check_independent(mcb)
networkx/algorithms/tests/test_cycles.py:921:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
networkx/algorithms/tests/test_cycles.py:22: in check_independent
inc = nx.incidence_matrix(H, oriented=True)
networkx/utils/backends.py:576: in __call__
return self.orig_func(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
G = <networkx.classes.graph.Graph object at 0x7f09d94ec1c0>, nodelist = None, edgelist = None, oriented = True, weight = None
    [source of incidence_matrix elided; identical to the dump in test_dimensionality above]
> import scipy as sp
E ModuleNotFoundError: No module named 'scipy'
networkx/linalg/graphmatrix.py:68: ModuleNotFoundError
______________________________________________________________________ TestResistanceDistance.test_resistance_distance ______________________________________________________________________
self = <networkx.algorithms.tests.test_distance_measures.TestResistanceDistance object at 0x7f09dbc71b80>
def test_resistance_distance(self):
> rd = nx.resistance_distance(self.G, 1, 3, "weight", True)
networkx/algorithms/tests/test_distance_measures.py:360:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
<class 'networkx.utils.decorators.argmap'> compilation 909:4: in argmap_resistance_distance_906
???
networkx/utils/backends.py:576: in __call__
return self.orig_func(*args, **kwargs)
networkx/algorithms/distance_measures.py:744: in resistance_distance
L = nx.laplacian_matrix(G, weight=weight).todense()
networkx/utils/decorators.py:770: in func
return argmap._lazy_compile(__wrapper)(*args, **kwargs)
<class 'networkx.utils.decorators.argmap'> compilation 913:4: in argmap_laplacian_matrix_910
???
networkx/utils/backends.py:576: in __call__
return self.orig_func(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
G = <networkx.classes.graph.Graph object at 0x7f09d948f970>, nodelist = None, weight = 'weight'
@not_implemented_for("directed")
@nx._dispatch(edge_attrs="weight")
def laplacian_matrix(G, nodelist=None, weight="weight"):
"""Returns the Laplacian matrix of G.
The graph Laplacian is the matrix L = D - A, where
A is the adjacency matrix and D is the diagonal matrix of node degrees.
Parameters
----------
G : graph
A NetworkX graph
nodelist : list, optional
The rows and columns are ordered according to the nodes in nodelist.
If nodelist is None, then the ordering is produced by G.nodes().
weight : string or None, optional (default='weight')
The edge data key used to compute each value in the matrix.
If None, then each edge has weight 1.
Returns
-------
L : SciPy sparse array
The Laplacian matrix of G.
Notes
-----
For MultiGraph, the edges weights are summed.
See Also
--------
:func:`~networkx.convert_matrix.to_numpy_array`
normalized_laplacian_matrix
:func:`~networkx.linalg.spectrum.laplacian_spectrum`
Examples
--------
For graphs with multiple connected components, L is permutation-similar
to a block diagonal matrix where each block is the respective Laplacian
matrix for each component.
>>> G = nx.Graph([(1, 2), (2, 3), (4, 5)])
>>> print(nx.laplacian_matrix(G).toarray())
[[ 1 -1 0 0 0]
[-1 2 -1 0 0]
[ 0 -1 1 0 0]
[ 0 0 0 1 -1]
[ 0 0 0 -1 1]]
"""
> import scipy as sp
E ModuleNotFoundError: No module named 'scipy'
networkx/linalg/laplacianmatrix.py:66: ModuleNotFoundError
___________________________________________________________________ TestResistanceDistance.test_resistance_distance_noinv ___________________________________________________________________
self = <networkx.algorithms.tests.test_distance_measures.TestResistanceDistance object at 0x7f09dbc71670>
def test_resistance_distance_noinv(self):
> rd = nx.resistance_distance(self.G, 1, 3, "weight", False)
networkx/algorithms/tests/test_distance_measures.py:365:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
<class 'networkx.utils.decorators.argmap'> compilation 909:4: in argmap_resistance_distance_906
???
networkx/utils/backends.py:576: in __call__
return self.orig_func(*args, **kwargs)
networkx/algorithms/distance_measures.py:744: in resistance_distance
L = nx.laplacian_matrix(G, weight=weight).todense()
<class 'networkx.utils.decorators.argmap'> compilation 913:4: in argmap_laplacian_matrix_910
???
networkx/utils/backends.py:576: in __call__
return self.orig_func(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
G = <networkx.classes.graph.Graph object at 0x7f09d94289d0>, nodelist = None, weight = 'weight'
    [source of laplacian_matrix elided; identical to the dump in test_resistance_distance above]
> import scipy as sp
E ModuleNotFoundError: No module named 'scipy'
networkx/linalg/laplacianmatrix.py:66: ModuleNotFoundError
_________________________________________________________________ TestResistanceDistance.test_resistance_distance_no_weight _________________________________________________________________
self = <networkx.algorithms.tests.test_distance_measures.TestResistanceDistance object at 0x7f09dbc71160>
def test_resistance_distance_no_weight(self):
> rd = nx.resistance_distance(self.G, 1, 3)
networkx/algorithms/tests/test_distance_measures.py:370:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
<class 'networkx.utils.decorators.argmap'> compilation 909:4: in argmap_resistance_distance_906
???
networkx/utils/backends.py:576: in __call__
return self.orig_func(*args, **kwargs)
networkx/algorithms/distance_measures.py:744: in resistance_distance
L = nx.laplacian_matrix(G, weight=weight).todense()
<class 'networkx.utils.decorators.argmap'> compilation 913:4: in argmap_laplacian_matrix_910
???
networkx/utils/backends.py:576: in __call__
return self.orig_func(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
G = <networkx.classes.graph.Graph object at 0x7f09daebe910>, nodelist = None, weight = None
    [source of laplacian_matrix elided; identical to the dump in test_resistance_distance above]
> import scipy as sp
E ModuleNotFoundError: No module named 'scipy'
networkx/linalg/laplacianmatrix.py:66: ModuleNotFoundError
________________________________________________________________ TestResistanceDistance.test_resistance_distance_neg_weight _________________________________________________________________
self = <networkx.algorithms.tests.test_distance_measures.TestResistanceDistance object at 0x7f09dbc71580>
def test_resistance_distance_neg_weight(self):
self.G[2][3]["weight"] = -4
> rd = nx.resistance_distance(self.G, 1, 3, "weight", True)
networkx/algorithms/tests/test_distance_measures.py:375:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
<class 'networkx.utils.decorators.argmap'> compilation 909:4: in argmap_resistance_distance_906
???
networkx/utils/backends.py:576: in __call__
return self.orig_func(*args, **kwargs)
networkx/algorithms/distance_measures.py:744: in resistance_distance
L = nx.laplacian_matrix(G, weight=weight).todense()
<class 'networkx.utils.decorators.argmap'> compilation 913:4: in argmap_laplacian_matrix_910
???
networkx/utils/backends.py:576: in __call__
return self.orig_func(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
G = <networkx.classes.graph.Graph object at 0x7f09d9bfe280>, nodelist = None, weight = 'weight'
    [source of laplacian_matrix elided; identical to the dump in test_resistance_distance above]
> import scipy as sp
E ModuleNotFoundError: No module named 'scipy'
networkx/linalg/laplacianmatrix.py:66: ModuleNotFoundError
__________________________________________________________________________ TestResistanceDistance.test_multigraph ___________________________________________________________________________
self = <networkx.algorithms.tests.test_distance_measures.TestResistanceDistance object at 0x7f09dbc71df0>
def test_multigraph(self):
G = nx.MultiGraph()
G.add_edge(1, 2, weight=2)
G.add_edge(2, 3, weight=4)
G.add_edge(3, 4, weight=1)
G.add_edge(1, 4, weight=3)
> rd = nx.resistance_distance(G, 1, 3, "weight", True)
networkx/algorithms/tests/test_distance_measures.py:385:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
<class 'networkx.utils.decorators.argmap'> compilation 909:4: in argmap_resistance_distance_906
???
networkx/utils/backends.py:576: in __call__
return self.orig_func(*args, **kwargs)
networkx/algorithms/distance_measures.py:744: in resistance_distance
L = nx.laplacian_matrix(G, weight=weight).todense()
<class 'networkx.utils.decorators.argmap'> compilation 913:4: in argmap_laplacian_matrix_910
???
networkx/utils/backends.py:576: in __call__
return self.orig_func(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
G = <networkx.classes.multigraph.MultiGraph object at 0x7f09daebef70>, nodelist = None, weight = 'weight'
    [source of laplacian_matrix elided; identical to the dump in test_resistance_distance above]
> import scipy as sp
E ModuleNotFoundError: No module named 'scipy'
networkx/linalg/laplacianmatrix.py:66: ModuleNotFoundError
_________________________________________________________________ TestResistanceDistance.test_resistance_distance_same_node _________________________________________________________________
self = <networkx.algorithms.tests.test_distance_measures.TestResistanceDistance object at 0x7f09dba6cb20>
def test_resistance_distance_same_node(self):
> assert nx.resistance_distance(self.G, 1, 1) == 0
networkx/algorithms/tests/test_distance_measures.py:394:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
<class 'networkx.utils.decorators.argmap'> compilation 909:4: in argmap_resistance_distance_906
???
networkx/utils/backends.py:576: in __call__
return self.orig_func(*args, **kwargs)
networkx/algorithms/distance_measures.py:744: in resistance_distance
L = nx.laplacian_matrix(G, weight=weight).todense()
<class 'networkx.utils.decorators.argmap'> compilation 913:4: in argmap_laplacian_matrix_910
???
networkx/utils/backends.py:576: in __call__
return self.orig_func(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
G = <networkx.classes.graph.Graph object at 0x7f09d9a586a0>, nodelist = None, weight = None
    [source of laplacian_matrix elided; identical to the dump in test_resistance_distance above]
> import scipy as sp
E ModuleNotFoundError: No module named 'scipy'
networkx/linalg/laplacianmatrix.py:66: ModuleNotFoundError
________________________________________________________________ TestResistanceDistance.test_resistance_distance_only_nodeA _________________________________________________________________
self = <networkx.algorithms.tests.test_distance_measures.TestResistanceDistance object at 0x7f09dba6cc10>
def test_resistance_distance_only_nodeA(self):
> rd = nx.resistance_distance(self.G, nodeA=1)
networkx/algorithms/tests/test_distance_measures.py:397:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
<class 'networkx.utils.decorators.argmap'> compilation 909:4: in argmap_resistance_distance_906
???
networkx/utils/backends.py:576: in __call__
return self.orig_func(*args, **kwargs)
networkx/algorithms/distance_measures.py:744: in resistance_distance
L = nx.laplacian_matrix(G, weight=weight).todense()
<class 'networkx.utils.decorators.argmap'> compilation 913:4: in argmap_laplacian_matrix_910
???
networkx/utils/backends.py:576: in __call__
return self.orig_func(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
G = <networkx.classes.graph.Graph object at 0x7f09d8c7d5b0>, nodelist = None, weight = None
> import scipy as sp
E ModuleNotFoundError: No module named 'scipy'
networkx/linalg/laplacianmatrix.py:66: ModuleNotFoundError
________________________________________________________________ TestResistanceDistance.test_resistance_distance_only_nodeB _________________________________________________________________
self = <networkx.algorithms.tests.test_distance_measures.TestResistanceDistance object at 0x7f09dbc71070>
def test_resistance_distance_only_nodeB(self):
> rd = nx.resistance_distance(self.G, nodeB=1)
networkx/algorithms/tests/test_distance_measures.py:409:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
<class 'networkx.utils.decorators.argmap'> compilation 909:4: in argmap_resistance_distance_906
???
networkx/utils/backends.py:576: in __call__
return self.orig_func(*args, **kwargs)
networkx/algorithms/distance_measures.py:744: in resistance_distance
L = nx.laplacian_matrix(G, weight=weight).todense()
<class 'networkx.utils.decorators.argmap'> compilation 913:4: in argmap_laplacian_matrix_910
???
networkx/utils/backends.py:576: in __call__
return self.orig_func(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
G = <networkx.classes.graph.Graph object at 0x7f09d9423820>, nodelist = None, weight = None
> import scipy as sp
E ModuleNotFoundError: No module named 'scipy'
networkx/linalg/laplacianmatrix.py:66: ModuleNotFoundError
____________________________________________________________________ TestResistanceDistance.test_resistance_distance_all ____________________________________________________________________
self = <networkx.algorithms.tests.test_distance_measures.TestResistanceDistance object at 0x7f09dc290040>
def test_resistance_distance_all(self):
> rd = nx.resistance_distance(self.G)
networkx/algorithms/tests/test_distance_measures.py:421:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
<class 'networkx.utils.decorators.argmap'> compilation 909:4: in argmap_resistance_distance_906
???
networkx/utils/backends.py:576: in __call__
return self.orig_func(*args, **kwargs)
networkx/algorithms/distance_measures.py:744: in resistance_distance
L = nx.laplacian_matrix(G, weight=weight).todense()
<class 'networkx.utils.decorators.argmap'> compilation 913:4: in argmap_laplacian_matrix_910
???
networkx/utils/backends.py:576: in __call__
return self.orig_func(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
G = <networkx.classes.graph.Graph object at 0x7f09d8d719d0>, nodelist = None, weight = None
> import scipy as sp
E ModuleNotFoundError: No module named 'scipy'
networkx/linalg/laplacianmatrix.py:66: ModuleNotFoundError
___________________________________________________________________ TestKemenyConstant.test_kemeny_constant_not_connected ___________________________________________________________________
self = <networkx.algorithms.tests.test_distance_measures.TestKemenyConstant object at 0x7f09dbbd8550>
def test_kemeny_constant_not_connected(self):
self.G.add_node(5)
with pytest.raises(nx.NetworkXError):
> nx.kemeny_constant(self.G)
networkx/algorithms/tests/test_distance_measures.py:540:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
<class 'networkx.utils.decorators.argmap'> compilation 921:4: in argmap_kemeny_constant_918
???
networkx/utils/backends.py:576: in __call__
return self.orig_func(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
G = <networkx.classes.graph.Graph object at 0x7f09daba5190>
@nx.utils.not_implemented_for("directed")
@nx._dispatch(edge_attrs="weight")
def kemeny_constant(G, *, weight=None):
"""Returns the Kemeny constant of the given graph.
The *Kemeny constant* (or Kemeny's constant) of a graph `G`
can be computed by regarding the graph as a Markov chain.
The Kemeny constant is then the expected number of time steps
to transition from a starting state i to a random destination state
sampled from the Markov chain's stationary distribution.
The Kemeny constant is independent of the chosen initial state [1]_.
The Kemeny constant measures the time needed for spreading
across a graph. Low values indicate a closely connected graph
whereas high values indicate a spread-out graph.
If weight is not provided, then a weight of 1 is used for all edges.
Since `G` represents a Markov chain, the weights must be positive.
Parameters
----------
G : NetworkX graph
weight : string or None, optional (default=None)
The edge data key used to compute the Kemeny constant.
If None, then each edge has weight 1.
Returns
-------
K : float
The Kemeny constant of the graph `G`.
Raises
------
NetworkXNotImplemented
If the graph `G` is directed.
NetworkXError
If the graph `G` is not connected, or contains no nodes,
or has edges with negative weights.
Examples
--------
>>> G = nx.complete_graph(5)
>>> round(nx.kemeny_constant(G), 10)
3.2
Notes
-----
The implementation is based on equation (3.3) in [2]_.
Self-loops are allowed and indicate a Markov chain where
the state can remain the same. Multi-edges are contracted
in one edge with weight equal to the sum of the weights.
References
----------
.. [1] Wikipedia
"Kemeny's constant."
https://en.wikipedia.org/wiki/Kemeny%27s_constant
.. [2] Lovász L.
Random walks on graphs: A survey.
Paul Erdös is Eighty, vol. 2, Bolyai Society,
Mathematical Studies, Keszthely, Hungary (1993), pp. 1-46
"""
import numpy as np
> import scipy as sp
E ModuleNotFoundError: No module named 'scipy'
networkx/algorithms/distance_measures.py:846: ModuleNotFoundError
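
The `3.2` in the docstring example can likewise be reproduced with NumPy alone via the eigenvalue identity for ergodic chains, K = sum_{i>=2} 1/(1 - lambda_i), where the lambda_i are the eigenvalues of the random-walk transition matrix. This is a sketch of the identity from the Lovász survey cited in the docstring, not necessarily the library's exact code path:

    import numpy as np
    import networkx as nx

    G = nx.complete_graph(5)
    A = nx.to_numpy_array(G)
    P = A / A.sum(axis=1, keepdims=True)  # random-walk transition matrix

    # Eigenvalues of P are real here (random walk on an undirected graph);
    # drop the unit eigenvalue and sum 1 / (1 - lambda_i) over the rest.
    lam = np.sort(np.linalg.eigvals(P).real)[::-1]
    K = np.sum(1.0 / (1.0 - lam[1:]))

    print(round(float(K), 10))  # 3.2, as in the docstring example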
_____________________________________________________________________ TestKemenyConstant.test_kemeny_constant_no_nodes ______________________________________________________________________
self = <networkx.algorithms.tests.test_distance_measures.TestKemenyConstant object at 0x7f09dbbd8b20>
def test_kemeny_constant_no_nodes(self):
G = nx.Graph()
with pytest.raises(nx.NetworkXError):
> nx.kemeny_constant(G)
networkx/algorithms/tests/test_distance_measures.py:545:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
<class 'networkx.utils.decorators.argmap'> compilation 921:4: in argmap_kemeny_constant_918
???
networkx/utils/backends.py:576: in __call__
return self.orig_func(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
G = <networkx.classes.graph.Graph object at 0x7f09d9ca7250>
import numpy as np
> import scipy as sp
E ModuleNotFoundError: No module named 'scipy'
networkx/algorithms/distance_measures.py:846: ModuleNotFoundError
__________________________________________________________________ TestKemenyConstant.test_kemeny_constant_negative_weight __________________________________________________________________
self = <networkx.algorithms.tests.test_distance_measures.TestKemenyConstant object at 0x7f09dbbd8d90>
def test_kemeny_constant_negative_weight(self):
G = nx.Graph()
w12 = 2
w13 = 3
w23 = -10
G.add_edge(1, 2, weight=w12)
G.add_edge(1, 3, weight=w13)
G.add_edge(2, 3, weight=w23)
with pytest.raises(nx.NetworkXError):
> nx.kemeny_constant(G, weight="weight")
networkx/algorithms/tests/test_distance_measures.py:556:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
<class 'networkx.utils.decorators.argmap'> compilation 921:4: in argmap_kemeny_constant_918
???
networkx/utils/backends.py:576: in __call__
return self.orig_func(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
G = <networkx.classes.graph.Graph object at 0x7f09d94857c0>
import numpy as np
> import scipy as sp
E ModuleNotFoundError: No module named 'scipy'
networkx/algorithms/distance_measures.py:846: ModuleNotFoundError
__________________________________________________________________________ TestKemenyConstant.test_kemeny_constant __________________________________________________________________________
self = <networkx.algorithms.tests.test_distance_measures.TestKemenyConstant object at 0x7f09dbbd8ac0>
def test_kemeny_constant(self):
> K = nx.kemeny_constant(self.G, weight="weight")
networkx/algorithms/tests/test_distance_measures.py:559:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
<class 'networkx.utils.decorators.argmap'> compilation 921:4: in argmap_kemeny_constant_918
???
networkx/utils/backends.py:576: in __call__
return self.orig_func(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
G = <networkx.classes.graph.Graph object at 0x7f09d8caf7f0>
import numpy as np
> import scipy as sp
E ModuleNotFoundError: No module named 'scipy'
networkx/algorithms/distance_measures.py:846: ModuleNotFoundError
_____________________________________________________________________ TestKemenyConstant.test_kemeny_constant_no_weight _____________________________________________________________________
self = <networkx.algorithms.tests.test_distance_measures.TestKemenyConstant object at 0x7f09dbbd86d0>
def test_kemeny_constant_no_weight(self):
> K = nx.kemeny_constant(self.G)
networkx/algorithms/tests/test_distance_measures.py:579:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
<class 'networkx.utils.decorators.argmap'> compilation 921:4: in argmap_kemeny_constant_918
???
networkx/utils/backends.py:576: in __call__
return self.orig_func(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
G = <networkx.classes.graph.Graph object at 0x7f09d9bc8190>
import numpy as np
> import scipy as sp
E ModuleNotFoundError: No module named 'scipy'
networkx/algorithms/distance_measures.py:846: ModuleNotFoundError
____________________________________________________________________ TestKemenyConstant.test_kemeny_constant_multigraph _____________________________________________________________________
self = <networkx.algorithms.tests.test_distance_measures.TestKemenyConstant object at 0x7f09dbbd83d0>
def test_kemeny_constant_multigraph(self):
G = nx.MultiGraph()
w12_1 = 2
w12_2 = 1
w13 = 3
w23 = 4
G.add_edge(1, 2, weight=w12_1)
G.add_edge(1, 2, weight=w12_2)
G.add_edge(1, 3, weight=w13)
G.add_edge(2, 3, weight=w23)
> K = nx.kemeny_constant(G, weight="weight")
networkx/algorithms/tests/test_distance_measures.py:592:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
<class 'networkx.utils.decorators.argmap'> compilation 921:4: in argmap_kemeny_constant_918
???
networkx/utils/backends.py:576: in __call__
return self.orig_func(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
G = <networkx.classes.multigraph.MultiGraph object at 0x7f09daf9c8b0>
import numpy as np
> import scipy as sp
E ModuleNotFoundError: No module named 'scipy'
networkx/algorithms/distance_measures.py:846: ModuleNotFoundError
______________________________________________________________________ TestKemenyConstant.test_kemeny_constant_weight0 ______________________________________________________________________
self = <networkx.algorithms.tests.test_distance_measures.TestKemenyConstant object at 0x7f09dbe64f40>
def test_kemeny_constant_weight0(self):
G = nx.Graph()
w12 = 0
w13 = 3
w23 = 4
G.add_edge(1, 2, weight=w12)
G.add_edge(1, 3, weight=w13)
G.add_edge(2, 3, weight=w23)
> K = nx.kemeny_constant(G, weight="weight")
networkx/algorithms/tests/test_distance_measures.py:617:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
<class 'networkx.utils.decorators.argmap'> compilation 921:4: in argmap_kemeny_constant_918
???
networkx/utils/backends.py:576: in __call__
return self.orig_func(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
G = <networkx.classes.graph.Graph object at 0x7f09d948d400>
import numpy as np
> import scipy as sp
E ModuleNotFoundError: No module named 'scipy'
networkx/algorithms/distance_measures.py:846: ModuleNotFoundError
_____________________________________________________________________ TestKemenyConstant.test_kemeny_constant_selfloop ______________________________________________________________________
self = <networkx.algorithms.tests.test_distance_measures.TestKemenyConstant object at 0x7f09dbe64730>
def test_kemeny_constant_selfloop(self):
G = nx.Graph()
w11 = 1
w12 = 2
w13 = 3
w23 = 4
G.add_edge(1, 1, weight=w11)
G.add_edge(1, 2, weight=w12)
G.add_edge(1, 3, weight=w13)
G.add_edge(2, 3, weight=w23)
> K = nx.kemeny_constant(G, weight="weight")
networkx/algorithms/tests/test_distance_measures.py:643:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
<class 'networkx.utils.decorators.argmap'> compilation 921:4: in argmap_kemeny_constant_918
???
networkx/utils/backends.py:576: in __call__
return self.orig_func(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
G = <networkx.classes.graph.Graph object at 0x7f09d8cac310>
import numpy as np
> import scipy as sp
E ModuleNotFoundError: No module named 'scipy'
networkx/algorithms/distance_measures.py:846: ModuleNotFoundError
_____________________________________________________________ TestKemenyConstant.test_kemeny_constant_complete_bipartite_graph ______________________________________________________________
self = <networkx.algorithms.tests.test_distance_measures.TestKemenyConstant object at 0x7f09dba58b20>
def test_kemeny_constant_complete_bipartite_graph(self):
# Theorem 1 in https://www.sciencedirect.com/science/article/pii/S0166218X20302912
n1 = 5
n2 = 4
G = nx.complete_bipartite_graph(n1, n2)
> K = nx.kemeny_constant(G)
networkx/algorithms/tests/test_distance_measures.py:660:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
<class 'networkx.utils.decorators.argmap'> compilation 921:4: in argmap_kemeny_constant_918
???
networkx/utils/backends.py:576: in __call__
return self.orig_func(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
G = <networkx.classes.graph.Graph object at 0x7f09d948eac0>
import numpy as np
> import scipy as sp
E ModuleNotFoundError: No module named 'scipy'
networkx/algorithms/distance_measures.py:846: ModuleNotFoundError
____________________________________________________________________ TestKemenyConstant.test_kemeny_constant_path_graph _____________________________________________________________________
self = <networkx.algorithms.tests.test_distance_measures.TestKemenyConstant object at 0x7f09dba58f40>
def test_kemeny_constant_path_graph(self):
# Theorem 2 in https://www.sciencedirect.com/science/article/pii/S0166218X20302912
n = 10
G = nx.path_graph(n)
> K = nx.kemeny_constant(G)
networkx/algorithms/tests/test_distance_measures.py:667:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
<class 'networkx.utils.decorators.argmap'> compilation 921:4: in argmap_kemeny_constant_918
???
networkx/utils/backends.py:576: in __call__
return self.orig_func(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
G = <networkx.classes.graph.Graph object at 0x7f09d9b082e0>
import numpy as np
> import scipy as sp
E ModuleNotFoundError: No module named 'scipy'
networkx/algorithms/distance_measures.py:846: ModuleNotFoundError
===================================================================================== warnings summary ======================================================================================
networkx/utils/backends.py:135
/home/tkloczko/rpmbuild/BUILD/networkx-networkx-3.2.1/networkx/utils/backends.py:135: RuntimeWarning: networkx backend defined more than once: nx-loopback
backends.update(_get_backends("networkx.backends"))
networkx/utils/backends.py:576
/home/tkloczko/rpmbuild/BUILD/networkx-networkx-3.2.1/networkx/utils/backends.py:576: DeprecationWarning:
random_tree is deprecated and will be removed in NX v3.4
Use random_labeled_tree instead.
return self.orig_func(*args, **kwargs)
networkx/algorithms/centrality/tests/test_group.py::TestProminentGroup::test_prominent_group_single_node
networkx/algorithms/centrality/tests/test_group.py::TestProminentGroup::test_prominent_group_with_c
networkx/algorithms/centrality/tests/test_group.py::TestProminentGroup::test_prominent_group_normalized_endpoints
networkx/algorithms/centrality/tests/test_group.py::TestProminentGroup::test_prominent_group_disconnected_graph
networkx/algorithms/centrality/tests/test_group.py::TestProminentGroup::test_group_betweenness_directed_weighted
networkx/algorithms/centrality/tests/test_group.py::TestProminentGroup::test_prominent_group_greedy_algorithm
/home/tkloczko/rpmbuild/BUILD/networkx-networkx-3.2.1/networkx/algorithms/centrality/group.py:501: FutureWarning: ChainedAssignmentError: behaviour will change in pandas 3.0!
You are setting values through chained assignment. Currently this works in certain cases, but when using Copy-on-Write (which will become the default behaviour in pandas 3.0) this will never work to update the original DataFrame or Series, because the intermediate object on which we are setting values will behave as a copy.
A typical example is when you are setting values in a column of a DataFrame, like:
df["col"][row_indexer] = value
Use `df.loc[row_indexer, "col"] = values` instead, to perform the assignment in a single step and ensure this keeps updating the original `df`.
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
DF_tree.nodes[node_p]["betweenness"][x][y] = (
networkx/algorithms/centrality/tests/test_group.py::TestProminentGroup::test_prominent_group_single_node
networkx/algorithms/centrality/tests/test_group.py::TestProminentGroup::test_prominent_group_with_c
networkx/algorithms/centrality/tests/test_group.py::TestProminentGroup::test_prominent_group_normalized_endpoints
networkx/algorithms/centrality/tests/test_group.py::TestProminentGroup::test_prominent_group_disconnected_graph
networkx/algorithms/centrality/tests/test_group.py::TestProminentGroup::test_group_betweenness_directed_weighted
networkx/algorithms/centrality/tests/test_group.py::TestProminentGroup::test_prominent_group_greedy_algorithm
/home/tkloczko/rpmbuild/BUILD/networkx-networkx-3.2.1/networkx/algorithms/centrality/group.py:505: FutureWarning: ChainedAssignmentError: behaviour will change in pandas 3.0!
DF_tree.nodes[node_p]["betweenness"][x][y] -= (
networkx/algorithms/centrality/tests/test_group.py::TestProminentGroup::test_prominent_group_single_node
networkx/algorithms/centrality/tests/test_group.py::TestProminentGroup::test_prominent_group_with_c
networkx/algorithms/centrality/tests/test_group.py::TestProminentGroup::test_prominent_group_normalized_endpoints
networkx/algorithms/centrality/tests/test_group.py::TestProminentGroup::test_prominent_group_disconnected_graph
networkx/algorithms/centrality/tests/test_group.py::TestProminentGroup::test_group_betweenness_directed_weighted
networkx/algorithms/centrality/tests/test_group.py::TestProminentGroup::test_prominent_group_greedy_algorithm
/home/tkloczko/rpmbuild/BUILD/networkx-networkx-3.2.1/networkx/algorithms/centrality/group.py:509: FutureWarning: ChainedAssignmentError: behaviour will change in pandas 3.0!
DF_tree.nodes[node_p]["betweenness"][x][y] -= (
-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
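
Separately from the SciPy failures, the pandas FutureWarning above concerns chained assignment; a minimal, hypothetical illustration of the single-step `.loc` form the warning recommends:

    import pandas as pd

    df = pd.DataFrame({"betweenness": [0.0, 0.0, 0.0]})
    rows = df.index[:2]

    # df["betweenness"][rows] = 1.0    # chained assignment: triggers the warning
    df.loc[rows, "betweenness"] = 1.0  # single-step assignment, as recommended
    print(df)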
================================================================================== short test summary info ==================================================================================
SKIPPED [1] networkx/algorithms/assortativity/tests/test_correlation.py:4: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/bipartite/tests/test_matrix.py:4: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/bipartite/tests/test_spectral_bipartivity.py:3: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/centrality/tests/test_current_flow_betweenness_centrality.py:8: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/centrality/tests/test_current_flow_betweenness_centrality_subset.py:4: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/centrality/tests/test_current_flow_closeness.py:4: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/centrality/tests/test_eigenvector_centrality.py:6: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/centrality/tests/test_laplacian_centrality.py:6: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/centrality/tests/test_second_order_centrality.py:8: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/centrality/tests/test_subgraph.py:4: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/centrality/tests/test_trophic.py:6: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/link_analysis/tests/test_hits.py:6: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/link_analysis/tests/test_pagerank.py:9: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_communicability.py:6: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_node_classification.py:4: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_polynomials.py:7: could not import 'sympy': No module named 'sympy'
SKIPPED [1] networkx/algorithms/tests/test_walks.py:8: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/classes/tests/test_backends.py:7: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/drawing/tests/test_agraph.py:7: could not import 'pygraphviz': No module named 'pygraphviz'
SKIPPED [1] networkx/drawing/tests/test_layout.py:7: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/drawing/tests/test_pydot.py:11: could not import 'pydot': No module named 'pydot'
SKIPPED [1] networkx/generators/tests/test_spectral_graph_forge.py:4: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/linalg/tests/test_bethehessian.py:4: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/linalg/tests/test_graphmatrix.py:4: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/linalg/tests/test_laplacian.py:4: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/linalg/tests/test_modularity.py:4: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/linalg/tests/test_spectrum.py:4: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/tests/test_convert_scipy.py:4: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/approximation/tests/test_traveling_salesman.py:394: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/approximation/tests/test_traveling_salesman.py:431: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/approximation/tests/test_traveling_salesman.py:487: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/approximation/tests/test_traveling_salesman.py:522: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/approximation/tests/test_traveling_salesman.py:562: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/approximation/tests/test_traveling_salesman.py:601: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/approximation/tests/test_traveling_salesman.py:660: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/approximation/tests/test_traveling_salesman.py:715: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/approximation/tests/test_traveling_salesman.py:784: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/approximation/tests/test_traveling_salesman.py:832: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/approximation/tests/test_traveling_salesman.py:905: need --runslow option to run
SKIPPED [1] networkx/algorithms/bipartite/tests/test_basic.py:97: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/bipartite/tests/test_basic.py:108: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/bipartite/tests/test_basic.py:119: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/bipartite/tests/test_matching.py:223: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/bipartite/tests/test_matching.py:233: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/bipartite/tests/test_matching.py:245: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/bipartite/tests/test_matching.py:259: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/bipartite/tests/test_matching.py:276: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/bipartite/tests/test_matching.py:293: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/bipartite/tests/test_matching.py:310: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/bipartite/tests/test_matching.py:319: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/centrality/tests/test_katz_centrality.py:121: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/centrality/tests/test_katz_centrality.py:134: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/centrality/tests/test_katz_centrality.py:143: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/centrality/tests/test_katz_centrality.py:152: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/centrality/tests/test_katz_centrality.py:161: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/centrality/tests/test_katz_centrality.py:201: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/centrality/tests/test_katz_centrality.py:205: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/centrality/tests/test_katz_centrality.py:209: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/centrality/tests/test_katz_centrality.py:215: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/centrality/tests/test_katz_centrality.py:220: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/centrality/tests/test_katz_centrality.py:233: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/centrality/tests/test_katz_centrality.py:317: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/centrality/tests/test_katz_centrality.py:324: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/centrality/tests/test_katz_centrality.py:339: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/connectivity/tests/test_kcomponents.py:94: need --runslow option to run
SKIPPED [1] networkx/algorithms/connectivity/tests/test_kcomponents.py:110: need --runslow option to run
SKIPPED [1] networkx/algorithms/connectivity/tests/test_kcomponents.py:117: need --runslow option to run
SKIPPED [1] networkx/algorithms/connectivity/tests/test_kcutsets.py:136: need --runslow option to run
SKIPPED [1] networkx/algorithms/connectivity/tests/test_kcutsets.py:210: need --runslow option to run
SKIPPED [1] networkx/algorithms/flow/tests/test_gomory_hu.py:76: need --runslow option to run
SKIPPED [1] networkx/algorithms/flow/tests/test_maxflow_large_graph.py:128: need --runslow option to run
SKIPPED [1] networkx/algorithms/tests/test_similarity.py:44: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_similarity.py:60: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_similarity.py:86: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_similarity.py:101: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_similarity.py:116: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_similarity.py:153: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_similarity.py:190: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_similarity.py:197: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_similarity.py:238: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_similarity.py:252: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_similarity.py:281: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_similarity.py:310: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_similarity.py:339: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_similarity.py:363: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_similarity.py:372: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_similarity.py:383: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_similarity.py:394: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_similarity.py:405: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_similarity.py:417: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_similarity.py:432: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_similarity.py:445: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_similarity.py:457: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_similarity.py:471: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_similarity.py:489: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_similarity.py:501: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_similarity.py:513: could not import 'scipy': No module named 'scipy'
SKIPPED [2] networkx/algorithms/tests/test_similarity.py:531: could not import 'scipy': No module named 'scipy'
SKIPPED [2] networkx/algorithms/tests/test_similarity.py:609: could not import 'scipy': No module named 'scipy'
SKIPPED [2] networkx/algorithms/tests/test_similarity.py:637: could not import 'scipy': No module named 'scipy'
SKIPPED [2] networkx/algorithms/tests/test_similarity.py:674: could not import 'scipy': No module named 'scipy'
SKIPPED [2] networkx/algorithms/tests/test_similarity.py:697: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_similarity.py:702: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_similarity.py:732: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_similarity.py:776: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_similarity.py:790: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_similarity.py:796: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_similarity.py:809: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_similarity.py:822: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_similarity.py:860: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_similarity.py:901: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_threshold.py:251: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_tournament.py:127: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tree/tests/test_mst.py:464: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tree/tests/test_mst.py:492: need --runslow option to run
SKIPPED [1] networkx/algorithms/tree/tests/test_mst.py:588: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tree/tests/test_mst.py:618: need --runslow option to run
SKIPPED [5] networkx/generators/tests/test_expanders.py:25: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/linalg/tests/test_algebraic_connectivity.py:15: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/linalg/tests/test_algebraic_connectivity.py:23: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/linalg/tests/test_algebraic_connectivity.py:31: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/linalg/tests/test_algebraic_connectivity.py:39: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/linalg/tests/test_algebraic_connectivity.py:50: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/linalg/tests/test_algebraic_connectivity.py:105: could not import 'scipy': No module named 'scipy'
SKIPPED [4] networkx/linalg/tests/test_algebraic_connectivity.py:112: could not import 'scipy': No module named 'scipy'
SKIPPED [4] networkx/linalg/tests/test_algebraic_connectivity.py:124: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/linalg/tests/test_algebraic_connectivity.py:137: could not import 'scipy': No module named 'scipy'
SKIPPED [4] networkx/linalg/tests/test_algebraic_connectivity.py:148: could not import 'scipy': No module named 'scipy'
SKIPPED [4] networkx/linalg/tests/test_algebraic_connectivity.py:159: could not import 'scipy': No module named 'scipy'
SKIPPED [4] networkx/linalg/tests/test_algebraic_connectivity.py:171: could not import 'scipy': No module named 'scipy'
SKIPPED [4] networkx/linalg/tests/test_algebraic_connectivity.py:182: could not import 'scipy': No module named 'scipy'
SKIPPED [8] networkx/linalg/tests/test_algebraic_connectivity.py:200: could not import 'scipy': No module named 'scipy'
SKIPPED [4] networkx/linalg/tests/test_algebraic_connectivity.py:333: could not import 'scipy': No module named 'scipy'
SKIPPED [4] networkx/linalg/tests/test_algebraic_connectivity.py:342: could not import 'scipy': No module named 'scipy'
SKIPPED [4] networkx/linalg/tests/test_algebraic_connectivity.py:351: could not import 'scipy': No module named 'scipy'
SKIPPED [4] networkx/linalg/tests/test_algebraic_connectivity.py:361: could not import 'scipy': No module named 'scipy'
SKIPPED [4] networkx/linalg/tests/test_algebraic_connectivity.py:371: could not import 'scipy': No module named 'scipy'
SKIPPED [8] networkx/linalg/tests/test_algebraic_connectivity.py:395: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/linalg/tests/test_attrmatrix.py:80: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/linalg/tests/test_attrmatrix.py:94: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/tests/test_all_random_functions.py:222: need --runslow option to run
FAILED networkx/algorithms/components/tests/test_strongly_connected.py::TestStronglyConnected::test_connected_raise - Failed: DID NOT WARN. No warnings of type (<class 'DeprecationWarning'>, <class 'PendingDeprecationWarning'>, <class 'FutureWarning'>) were emitted.
FAILED networkx/algorithms/tests/test_cycles.py::TestMinimumCycleBasis::test_dimensionality - ModuleNotFoundError: No module named 'scipy'
FAILED networkx/algorithms/tests/test_cycles.py::TestMinimumCycleBasis::test_complete_graph - ModuleNotFoundError: No module named 'scipy'
FAILED networkx/algorithms/tests/test_cycles.py::TestMinimumCycleBasis::test_petersen_graph - ModuleNotFoundError: No module named 'scipy'
FAILED networkx/algorithms/tests/test_cycles.py::TestMinimumCycleBasis::test_gh6787_variable_weighted_complete_graph - ModuleNotFoundError: No module named 'scipy'
FAILED networkx/algorithms/tests/test_distance_measures.py::TestResistanceDistance::test_resistance_distance - ModuleNotFoundError: No module named 'scipy'
FAILED networkx/algorithms/tests/test_distance_measures.py::TestResistanceDistance::test_resistance_distance_noinv - ModuleNotFoundError: No module named 'scipy'
FAILED networkx/algorithms/tests/test_distance_measures.py::TestResistanceDistance::test_resistance_distance_no_weight - ModuleNotFoundError: No module named 'scipy'
FAILED networkx/algorithms/tests/test_distance_measures.py::TestResistanceDistance::test_resistance_distance_neg_weight - ModuleNotFoundError: No module named 'scipy'
FAILED networkx/algorithms/tests/test_distance_measures.py::TestResistanceDistance::test_multigraph - ModuleNotFoundError: No module named 'scipy'
FAILED networkx/algorithms/tests/test_distance_measures.py::TestResistanceDistance::test_resistance_distance_same_node - ModuleNotFoundError: No module named 'scipy'
FAILED networkx/algorithms/tests/test_distance_measures.py::TestResistanceDistance::test_resistance_distance_only_nodeA - ModuleNotFoundError: No module named 'scipy'
FAILED networkx/algorithms/tests/test_distance_measures.py::TestResistanceDistance::test_resistance_distance_only_nodeB - ModuleNotFoundError: No module named 'scipy'
FAILED networkx/algorithms/tests/test_distance_measures.py::TestResistanceDistance::test_resistance_distance_all - ModuleNotFoundError: No module named 'scipy'
FAILED networkx/algorithms/tests/test_distance_measures.py::TestKemenyConstant::test_kemeny_constant_not_connected - ModuleNotFoundError: No module named 'scipy'
FAILED networkx/algorithms/tests/test_distance_measures.py::TestKemenyConstant::test_kemeny_constant_no_nodes - ModuleNotFoundError: No module named 'scipy'
FAILED networkx/algorithms/tests/test_distance_measures.py::TestKemenyConstant::test_kemeny_constant_negative_weight - ModuleNotFoundError: No module named 'scipy'
FAILED networkx/algorithms/tests/test_distance_measures.py::TestKemenyConstant::test_kemeny_constant - ModuleNotFoundError: No module named 'scipy'
FAILED networkx/algorithms/tests/test_distance_measures.py::TestKemenyConstant::test_kemeny_constant_no_weight - ModuleNotFoundError: No module named 'scipy'
FAILED networkx/algorithms/tests/test_distance_measures.py::TestKemenyConstant::test_kemeny_constant_multigraph - ModuleNotFoundError: No module named 'scipy'
FAILED networkx/algorithms/tests/test_distance_measures.py::TestKemenyConstant::test_kemeny_constant_weight0 - ModuleNotFoundError: No module named 'scipy'
FAILED networkx/algorithms/tests/test_distance_measures.py::TestKemenyConstant::test_kemeny_constant_selfloop - ModuleNotFoundError: No module named 'scipy'
FAILED networkx/algorithms/tests/test_distance_measures.py::TestKemenyConstant::test_kemeny_constant_complete_bipartite_graph - ModuleNotFoundError: No module named 'scipy'
FAILED networkx/algorithms/tests/test_distance_measures.py::TestKemenyConstant::test_kemeny_constant_path_graph - ModuleNotFoundError: No module named 'scipy'
=========================================================== 24 failed, 4684 passed, 197 skipped, 20 warnings in 123.65s (0:02:03) ===========================================================
```
</details>
With the patch below, most of the missing-scipy failures turn into skips:
<details>
<summary>Patch:</summary>

```patch
--- a/networkx/algorithms/tests/test_cycles.py
+++ b/networkx/algorithms/tests/test_cycles.py
@@ -870,6 +870,7 @@
        assert_basis_equal(mcb, [[2, 4, 1], [4, 3, 2, 1]])

    def test_dimensionality(self):
+        pytest.importorskip("scipy")
        # checks |MCB|=|E|-|V|+|NC|
        ntrial = 10
        for seed in range(1234, 1234 + ntrial):
@@ -883,6 +884,7 @@
            check_independent(mcb)

    def test_complete_graph(self):
+        pytest.importorskip("scipy")
        cg = nx.complete_graph(5)
        mcb = nx.minimum_cycle_basis(cg)
        assert all(len(cycle) == 3 for cycle in mcb)
@@ -893,6 +895,7 @@
        assert not nx.minimum_cycle_basis(tg)

    def test_petersen_graph(self):
+        pytest.importorskip("scipy")
        G = nx.petersen_graph()
        mcb = list(nx.minimum_cycle_basis(G))
        expected = [
@@ -913,6 +916,7 @@
        check_independent(mcb)

    def test_gh6787_variable_weighted_complete_graph(self):
+        pytest.importorskip("scipy")
        N = 8
        cg = nx.complete_graph(N)
        cg.add_weighted_edges_from([(u, v, 9) for u, v in cg.edges])
--- a/networkx/algorithms/tests/test_distance_measures.py
+++ b/networkx/algorithms/tests/test_distance_measures.py
@@ -334,6 +334,7 @@
        self.G = G

    def test_resistance_distance_directed_graph(self):
+        pytest.importorskip("scipy")
        G = nx.DiGraph()
        with pytest.raises(nx.NetworkXNotImplemented):
            nx.resistance_distance(G)
@@ -362,21 +363,25 @@
        assert round(rd, 5) == round(test_data, 5)

    def test_resistance_distance_noinv(self):
+        pytest.importorskip("scipy")
        rd = nx.resistance_distance(self.G, 1, 3, "weight", False)
        test_data = 1 / (1 / (1 / 2 + 1 / 4) + 1 / (1 / 1 + 1 / 3))
        assert round(rd, 5) == round(test_data, 5)

    def test_resistance_distance_no_weight(self):
+        pytest.importorskip("scipy")
        rd = nx.resistance_distance(self.G, 1, 3)
        assert round(rd, 5) == 1

    def test_resistance_distance_neg_weight(self):
+        pytest.importorskip("scipy")
        self.G[2][3]["weight"] = -4
        rd = nx.resistance_distance(self.G, 1, 3, "weight", True)
        test_data = 1 / (1 / (2 + -4) + 1 / (1 + 3))
        assert round(rd, 5) == round(test_data, 5)

    def test_multigraph(self):
+        pytest.importorskip("scipy")
        G = nx.MultiGraph()
        G.add_edge(1, 2, weight=2)
        G.add_edge(2, 3, weight=4)
@@ -391,9 +396,11 @@
            nx.resistance_distance(self.G, 1, 3, "weight")

    def test_resistance_distance_same_node(self):
+        pytest.importorskip("scipy")
        assert nx.resistance_distance(self.G, 1, 1) == 0

    def test_resistance_distance_only_nodeA(self):
+        pytest.importorskip("scipy")
        rd = nx.resistance_distance(self.G, nodeA=1)
        test_data = {}
        test_data[1] = 0
@@ -406,6 +413,7 @@
            assert np.isclose(rd[key], test_data[key])

    def test_resistance_distance_only_nodeB(self):
+        pytest.importorskip("scipy")
        rd = nx.resistance_distance(self.G, nodeB=1)
        test_data = {}
        test_data[1] = 0
@@ -418,6 +426,7 @@
            assert np.isclose(rd[key], test_data[key])

    def test_resistance_distance_all(self):
+        pytest.importorskip("scipy")
        rd = nx.resistance_distance(self.G)
        assert type(rd) == dict
        assert round(rd[1][3], 5) == 1
@@ -535,16 +544,19 @@
            nx.kemeny_constant(G)

    def test_kemeny_constant_not_connected(self):
+        pytest.importorskip("scipy")
        self.G.add_node(5)
        with pytest.raises(nx.NetworkXError):
            nx.kemeny_constant(self.G)

    def test_kemeny_constant_no_nodes(self):
+        pytest.importorskip("scipy")
        G = nx.Graph()
        with pytest.raises(nx.NetworkXError):
            nx.kemeny_constant(G)

    def test_kemeny_constant_negative_weight(self):
+        pytest.importorskip("scipy")
        G = nx.Graph()
        w12 = 2
        w13 = 3
@@ -556,6 +568,7 @@
            nx.kemeny_constant(G, weight="weight")

    def test_kemeny_constant(self):
+        pytest.importorskip("scipy")
        K = nx.kemeny_constant(self.G, weight="weight")
        w12 = 2
        w13 = 3
@@ -576,10 +589,12 @@
        assert np.isclose(K, test_data)

    def test_kemeny_constant_no_weight(self):
+        pytest.importorskip("scipy")
        K = nx.kemeny_constant(self.G)
        assert np.isclose(K, 4 / 3)

    def test_kemeny_constant_multigraph(self):
+        pytest.importorskip("scipy")
        G = nx.MultiGraph()
        w12_1 = 2
        w12_2 = 1
@@ -607,6 +622,7 @@
        assert np.isclose(K, test_data)

    def test_kemeny_constant_weight0(self):
+        pytest.importorskip("scipy")
        G = nx.Graph()
        w12 = 0
        w13 = 3
@@ -631,6 +647,7 @@
        assert np.isclose(K, test_data)

    def test_kemeny_constant_selfloop(self):
+        pytest.importorskip("scipy")
        G = nx.Graph()
        w11 = 1
        w12 = 2
@@ -653,6 +670,7 @@
        assert np.isclose(K, test_data)

    def test_kemeny_constant_complete_bipartite_graph(self):
+        pytest.importorskip("scipy")
        # Theorem 1 in https://www.sciencedirect.com/science/article/pii/S0166218X20302912
        n1 = 5
        n2 = 4
@@ -661,6 +679,7 @@
        assert np.isclose(K, n1 + n2 - 3 / 2)

    def test_kemeny_constant_path_graph(self):
+        pytest.importorskip("scipy")
        # Theorem 2 in https://www.sciencedirect.com/science/article/pii/S0166218X20302912
        n = 10
        G = nx.path_graph(n)
```
</details>
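As an aside, the repetition could be avoided by guarding each scipy-dependent test class once instead of every test. A minimal sketch (not part of the patch above; the class and fixture names are only illustrative) using pytest's standard `importorskip` inside an autouse fixture:

```python
import pytest

import networkx as nx


class TestKemenyConstant:
    @pytest.fixture(autouse=True)
    def _require_scipy(self):
        # Runs before every test in this class; pytest.importorskip
        # raises pytest's Skipped exception when scipy is absent, so
        # each test is skipped without repeating the guard inline.
        pytest.importorskip("scipy")

    def test_kemeny_constant_no_nodes(self):
        G = nx.Graph()
        with pytest.raises(nx.NetworkXError):
            nx.kemeny_constant(G)
```

An autouse fixture keeps the skip condition in one place, which matters when a whole class of tests shares the same optional dependency.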
Still I have two tests failing:
<details>
<summary>Here is pytest output:</summary>

```console
+ PYTHONPATH=/home/tkloczko/rpmbuild/BUILDROOT/python-networkx-3.2.1-5.fc36.x86_64/usr/lib64/python3.9/site-packages:/home/tkloczko/rpmbuild/BUILDROOT/python-networkx-3.2.1-5.fc36.x86_64/usr/lib/python3.9/site-packages
+ /usr/bin/pytest -ra -m 'not network' --ignore networkx/drawing/tests/test_pylab.py
============================= test session starts ==============================
platform linux -- Python 3.9.18, pytest-8.1.1, pluggy-1.4.0
rootdir: /home/tkloczko/rpmbuild/BUILD/networkx-networkx-3.2.1
configfile: pyproject.toml
collected 4877 items / 28 skipped
networkx/algorithms/approximation/tests/test_approx_clust_coeff.py ..... [ 0%]
. [ 0%]
networkx/algorithms/approximation/tests/test_clique.py ........ [ 0%]
networkx/algorithms/approximation/tests/test_connectivity.py ........... [ 0%]
....... [ 0%]
networkx/algorithms/approximation/tests/test_distance_measures.py ...... [ 0%]
.. [ 0%]
networkx/algorithms/approximation/tests/test_dominating_set.py .... [ 0%]
networkx/algorithms/approximation/tests/test_kcomponents.py ............ [ 1%]
.... [ 1%]
networkx/algorithms/approximation/tests/test_matching.py . [ 1%]
networkx/algorithms/approximation/tests/test_maxcut.py ..... [ 1%]
networkx/algorithms/approximation/tests/test_ramsey.py . [ 1%]
networkx/algorithms/approximation/tests/test_steinertree.py .... [ 1%]
networkx/algorithms/approximation/tests/test_traveling_salesman.py ..... [ 1%]
.......................ssssssssss...s. [ 2%]
networkx/algorithms/approximation/tests/test_treewidth.py .............. [ 2%]
[ 2%]
networkx/algorithms/approximation/tests/test_vertex_cover.py .... [ 2%]
networkx/algorithms/assortativity/tests/test_connectivity.py .......... [ 2%]
networkx/algorithms/assortativity/tests/test_mixing.py ................. [ 3%]
.. [ 3%]
networkx/algorithms/assortativity/tests/test_neighbor_degree.py ...... [ 3%]
networkx/algorithms/assortativity/tests/test_pairs.py ........... [ 3%]
networkx/algorithms/bipartite/tests/test_basic.py ............sss [ 3%]
networkx/algorithms/bipartite/tests/test_centrality.py ....... [ 4%]
networkx/algorithms/bipartite/tests/test_cluster.py ......... [ 4%]
networkx/algorithms/bipartite/tests/test_covering.py .... [ 4%]
networkx/algorithms/bipartite/tests/test_edgelist.py ............... [ 4%]
networkx/algorithms/bipartite/tests/test_extendability.py ........... [ 4%]
networkx/algorithms/bipartite/tests/test_generators.py .......... [ 5%]
networkx/algorithms/bipartite/tests/test_matching.py ............sssssss [ 5%]
s [ 5%]
networkx/algorithms/bipartite/tests/test_project.py .................. [ 5%]
networkx/algorithms/bipartite/tests/test_redundancy.py ... [ 5%]
networkx/algorithms/centrality/tests/test_betweenness_centrality.py .... [ 6%]
..................................... [ 6%]
networkx/algorithms/centrality/tests/test_betweenness_centrality_subset.py . [ 6%]
..................... [ 7%]
networkx/algorithms/centrality/tests/test_closeness_centrality.py ...... [ 7%]
....... [ 7%]
networkx/algorithms/centrality/tests/test_degree_centrality.py ....... [ 7%]
networkx/algorithms/centrality/tests/test_dispersion.py .... [ 7%]
networkx/algorithms/centrality/tests/test_group.py ..................... [ 8%]
... [ 8%]
networkx/algorithms/centrality/tests/test_harmonic_centrality.py ....... [ 8%]
...... [ 8%]
networkx/algorithms/centrality/tests/test_katz_centrality.py ..........s [ 8%]
ssssssssss..sss [ 9%]
networkx/algorithms/centrality/tests/test_load_centrality.py ........... [ 9%]
....... [ 9%]
networkx/algorithms/centrality/tests/test_percolation_centrality.py .... [ 9%]
[ 9%]
networkx/algorithms/centrality/tests/test_reaching.py ............... [ 9%]
networkx/algorithms/centrality/tests/test_voterank.py ...... [ 9%]
networkx/algorithms/coloring/tests/test_coloring.py ................. [ 10%]
networkx/algorithms/community/tests/test_asyn_fluid.py ..... [ 10%]
networkx/algorithms/community/tests/test_centrality.py ..... [ 10%]
networkx/algorithms/community/tests/test_kclique.py ........ [ 10%]
networkx/algorithms/community/tests/test_kernighan_lin.py ........ [ 10%]
networkx/algorithms/community/tests/test_label_propagation.py .......... [ 10%]
............. [ 11%]
networkx/algorithms/community/tests/test_louvain.py ............. [ 11%]
networkx/algorithms/community/tests/test_lukes.py .... [ 11%]
networkx/algorithms/community/tests/test_modularity_max.py ............. [ 11%]
..... [ 11%]
networkx/algorithms/community/tests/test_quality.py ....... [ 12%]
networkx/algorithms/community/tests/test_utils.py .... [ 12%]
networkx/algorithms/components/tests/test_attracting.py .... [ 12%]
networkx/algorithms/components/tests/test_biconnected.py ............. [ 12%]
networkx/algorithms/components/tests/test_connected.py ......... [ 12%]
networkx/algorithms/components/tests/test_semiconnected.py ........ [ 12%]
networkx/algorithms/components/tests/test_strongly_connected.py ........ [ 13%]
..F.. [ 13%]
networkx/algorithms/components/tests/test_weakly_connected.py ...... [ 13%]
networkx/algorithms/connectivity/tests/test_connectivity.py ............ [ 13%]
...................... [ 13%]
networkx/algorithms/connectivity/tests/test_cuts.py .................... [ 14%]
. [ 14%]
networkx/algorithms/connectivity/tests/test_disjoint_paths.py .......... [ 14%]
........ [ 14%]
networkx/algorithms/connectivity/tests/test_edge_augmentation.py ....... [ 14%]
............. [ 15%]
networkx/algorithms/connectivity/tests/test_edge_kcomponents.py ........ [ 15%]
............. [ 15%]
networkx/algorithms/connectivity/tests/test_kcomponents.py .sss...... [ 15%]
networkx/algorithms/connectivity/tests/test_kcutsets.py s........s..... [ 16%]
networkx/algorithms/connectivity/tests/test_stoer_wagner.py ..... [ 16%]
networkx/algorithms/flow/tests/test_gomory_hu.py ....s.... [ 16%]
networkx/algorithms/flow/tests/test_maxflow.py ......................... [ 16%]
.. [ 16%]
networkx/algorithms/flow/tests/test_maxflow_large_graph.py ...s.. [ 17%]
networkx/algorithms/flow/tests/test_mincost.py ................... [ 17%]
networkx/algorithms/flow/tests/test_networksimplex.py .................. [ 17%]
.... [ 17%]
networkx/algorithms/isomorphism/tests/test_ismags.py .......... [ 18%]
networkx/algorithms/isomorphism/tests/test_isomorphism.py .... [ 18%]
networkx/algorithms/isomorphism/tests/test_isomorphvf2.py .............. [ 18%]
.. [ 18%]
networkx/algorithms/isomorphism/tests/test_match_helpers.py .. [ 18%]
networkx/algorithms/isomorphism/tests/test_temporalisomorphvf2.py ...... [ 18%]
...... [ 18%]
networkx/algorithms/isomorphism/tests/test_tree_isomorphism.py ..... [ 18%]
networkx/algorithms/isomorphism/tests/test_vf2pp.py .................... [ 19%]
........................ [ 19%]
networkx/algorithms/isomorphism/tests/test_vf2pp_helpers.py ............ [ 20%]
................................. [ 20%]
networkx/algorithms/isomorphism/tests/test_vf2userfunc.py .............. [ 21%]
.............. [ 21%]
networkx/algorithms/minors/tests/test_contraction.py ................... [ 21%]
............ [ 21%]
networkx/algorithms/operators/tests/test_all.py ................... [ 22%]
networkx/algorithms/operators/tests/test_binary.py .................... [ 22%]
networkx/algorithms/operators/tests/test_product.py .................... [ 23%]
........ [ 23%]
networkx/algorithms/operators/tests/test_unary.py ... [ 23%]
networkx/algorithms/shortest_paths/tests/test_astar.py ................ [ 23%]
networkx/algorithms/shortest_paths/tests/test_dense.py ........ [ 23%]
networkx/algorithms/shortest_paths/tests/test_dense_numpy.py ....... [ 24%]
networkx/algorithms/shortest_paths/tests/test_generic.py ............... [ 24%]
........... [ 24%]
networkx/algorithms/shortest_paths/tests/test_unweighted.py ............ [ 24%]
..... [ 24%]
networkx/algorithms/shortest_paths/tests/test_weighted.py .............. [ 25%]
.......................................... [ 26%]
networkx/algorithms/tests/test_asteroidal.py . [ 26%]
networkx/algorithms/tests/test_boundary.py ............. [ 26%]
networkx/algorithms/tests/test_bridges.py .......... [ 26%]
networkx/algorithms/tests/test_chains.py ..... [ 26%]
networkx/algorithms/tests/test_chordal.py .......... [ 26%]
networkx/algorithms/tests/test_clique.py ............ [ 27%]
networkx/algorithms/tests/test_cluster.py .............................. [ 27%]
............. [ 28%]
networkx/algorithms/tests/test_core.py ............... [ 28%]
networkx/algorithms/tests/test_covering.py ........... [ 28%]
networkx/algorithms/tests/test_cuts.py ................. [ 28%]
networkx/algorithms/tests/test_cycles.py ............................... [ 29%]
...............ss.ss.......... [ 30%]
networkx/algorithms/tests/test_d_separation.py ............... [ 30%]
networkx/algorithms/tests/test_dag.py .................................. [ 31%]
.......................... [ 31%]
networkx/algorithms/tests/test_distance_measures.py .................... [ 32%]
.....................s....Fssss.ssss.....ssssssssss [ 33%]
networkx/algorithms/tests/test_distance_regular.py ....... [ 33%]
networkx/algorithms/tests/test_dominance.py ...................... [ 33%]
networkx/algorithms/tests/test_dominating.py ..... [ 33%]
networkx/algorithms/tests/test_efficiency.py ....... [ 33%]
networkx/algorithms/tests/test_euler.py ................................ [ 34%]
[ 34%]
networkx/algorithms/tests/test_graph_hashing.py ........................ [ 35%]
[ 35%]
networkx/algorithms/tests/test_graphical.py ............. [ 35%]
networkx/algorithms/tests/test_hierarchy.py ..... [ 35%]
networkx/algorithms/tests/test_hybrid.py .. [ 35%]
networkx/algorithms/tests/test_isolate.py ... [ 35%]
networkx/algorithms/tests/test_link_prediction.py ...................... [ 36%]
................................................... [ 37%]
networkx/algorithms/tests/test_lowest_common_ancestors.py .............. [ 37%]
......................................... [ 38%]
networkx/algorithms/tests/test_matching.py ............................. [ 38%]
................... [ 39%]
networkx/algorithms/tests/test_max_weight_clique.py ..... [ 39%]
networkx/algorithms/tests/test_mis.py ....... [ 39%]
networkx/algorithms/tests/test_moral.py . [ 39%]
networkx/algorithms/tests/test_non_randomness.py ...... [ 39%]
networkx/algorithms/tests/test_planar_drawing.py ............ [ 39%]
networkx/algorithms/tests/test_planarity.py ............................ [ 40%]
.. [ 40%]
networkx/algorithms/tests/test_reciprocity.py ..... [ 40%]
networkx/algorithms/tests/test_regular.py ............. [ 40%]
networkx/algorithms/tests/test_richclub.py ......... [ 41%]
networkx/algorithms/tests/test_similarity.py sssssssssssssssssssssssssss [ 41%]
ssssssssssssssssss [ 41%]
networkx/algorithms/tests/test_simple_paths.py ......................... [ 42%]
................................................. [ 43%]
networkx/algorithms/tests/test_smallworld.py ...... [ 43%]
networkx/algorithms/tests/test_smetric.py .. [ 43%]
networkx/algorithms/tests/test_sparsifiers.py ....... [ 43%]
networkx/algorithms/tests/test_structuralholes.py ............. [ 44%]
networkx/algorithms/tests/test_summarization.py ................. [ 44%]
networkx/algorithms/tests/test_swap.py ..................... [ 44%]
networkx/algorithms/tests/test_threshold.py ................s. [ 45%]
networkx/algorithms/tests/test_time_dependent.py ............ [ 45%]
networkx/algorithms/tests/test_tournament.py ...............s..... [ 45%]
networkx/algorithms/tests/test_triads.py ................ [ 46%]
networkx/algorithms/tests/test_vitality.py ...... [ 46%]
networkx/algorithms/tests/test_voronoi.py .......... [ 46%]
networkx/algorithms/tests/test_wiener.py .... [ 46%]
networkx/algorithms/traversal/tests/test_beamsearch.py ... [ 46%]
networkx/algorithms/traversal/tests/test_bfs.py ................... [ 47%]
networkx/algorithms/traversal/tests/test_dfs.py .................. [ 47%]
networkx/algorithms/traversal/tests/test_edgebfs.py ................ [ 47%]
networkx/algorithms/traversal/tests/test_edgedfs.py ............... [ 48%]
networkx/algorithms/tree/tests/test_branchings.py ...................... [ 48%]
.......... [ 48%]
networkx/algorithms/tree/tests/test_coding.py .............. [ 48%]
networkx/algorithms/tree/tests/test_decomposition.py ..... [ 49%]
networkx/algorithms/tree/tests/test_mst.py ............................. [ 49%]
.......................ssss [ 50%]
networkx/algorithms/tree/tests/test_operations.py .... [ 50%]
networkx/algorithms/tree/tests/test_recognition.py ..................... [ 50%]
.... [ 50%]
networkx/classes/tests/test_coreviews.py ............................... [ 51%]
......................... [ 51%]
networkx/classes/tests/test_digraph.py ................................. [ 52%]
................................................... [ 53%]
networkx/classes/tests/test_digraph_historical.py ...................... [ 54%]
.................... [ 54%]
networkx/classes/tests/test_filters.py ........... [ 54%]
networkx/classes/tests/test_function.py ................................ [ 55%]
....................................... [ 56%]
networkx/classes/tests/test_graph.py ................................... [ 56%]
............................. [ 57%]
networkx/classes/tests/test_graph_historical.py ........................ [ 58%]
.......... [ 58%]
networkx/classes/tests/test_graphviews.py .............................. [ 58%]
..... [ 58%]
networkx/classes/tests/test_multidigraph.py ............................ [ 59%]
........................................................................ [ 61%]
........................................................................ [ 62%]
............... [ 62%]
networkx/classes/tests/test_multigraph.py .............................. [ 63%]
........................................................................ [ 64%]
................................................... [ 65%]
networkx/classes/tests/test_reportviews.py ............................. [ 66%]
........................................................................ [ 68%]
........................................................................ [ 69%]
.................................................................... [ 70%]
networkx/classes/tests/test_special.py ................................. [ 71%]
........................................................................ [ 73%]
........................................................................ [ 74%]
........................................................................ [ 75%]
........................................................................ [ 77%]
........................... [ 78%]
networkx/classes/tests/test_subgraphviews.py ........................... [ 78%]
..... [ 78%]
networkx/drawing/tests/test_latex.py ...... [ 78%]
networkx/generators/tests/test_atlas.py ........ [ 78%]
networkx/generators/tests/test_classic.py .............................. [ 79%]
.............. [ 79%]
networkx/generators/tests/test_cographs.py . [ 79%]
networkx/generators/tests/test_community.py ...................... [ 80%]
networkx/generators/tests/test_degree_seq.py ................... [ 80%]
networkx/generators/tests/test_directed.py .............. [ 81%]
networkx/generators/tests/test_duplication.py ....... [ 81%]
networkx/generators/tests/test_ego.py .. [ 81%]
networkx/generators/tests/test_expanders.py .....sssss................ [ 81%]
networkx/generators/tests/test_geometric.py ............................ [ 82%]
.. [ 82%]
networkx/generators/tests/test_harary_graph.py .. [ 82%]
networkx/generators/tests/test_internet_as_graphs.py ..... [ 82%]
networkx/generators/tests/test_intersection.py .... [ 82%]
networkx/generators/tests/test_interval_graph.py ........ [ 82%]
networkx/generators/tests/test_joint_degree_seq.py .... [ 82%]
networkx/generators/tests/test_lattice.py ....................... [ 83%]
networkx/generators/tests/test_line.py ................................. [ 83%]
.. [ 84%]
networkx/generators/tests/test_mycielski.py ... [ 84%]
networkx/generators/tests/test_nonisomorphic_trees.py ..... [ 84%]
networkx/generators/tests/test_random_clustered.py .... [ 84%]
networkx/generators/tests/test_random_graphs.py ........................ [ 84%]
............................................. [ 85%]
networkx/generators/tests/test_small.py ................................ [ 86%]
...... [ 86%]
networkx/generators/tests/test_stochastic.py ....... [ 86%]
networkx/generators/tests/test_sudoku.py ...... [ 86%]
networkx/generators/tests/test_time_series.py ....... [ 86%]
networkx/generators/tests/test_trees.py .................. [ 87%]
networkx/generators/tests/test_triads.py .. [ 87%]
networkx/linalg/tests/test_algebraic_connectivity.py sssss............ss [ 87%]
ssssssssssssssssssssssssssssssss.........ssssssssssssssssssssssssssss [ 89%]
networkx/linalg/tests/test_attrmatrix.py ...ss [ 89%]
networkx/readwrite/json_graph/tests/test_adjacency.py ........ [ 89%]
networkx/readwrite/json_graph/tests/test_cytoscape.py ....... [ 89%]
networkx/readwrite/json_graph/tests/test_node_link.py ........... [ 89%]
networkx/readwrite/json_graph/tests/test_tree.py ... [ 89%]
networkx/readwrite/tests/test_adjlist.py .................. [ 90%]
networkx/readwrite/tests/test_edgelist.py .......................... [ 90%]
networkx/readwrite/tests/test_gexf.py ..................... [ 91%]
networkx/readwrite/tests/test_gml.py ......................... [ 91%]
networkx/readwrite/tests/test_graph6.py ............................... [ 92%]
networkx/readwrite/tests/test_graphml.py ............................... [ 92%]
............................ [ 93%]
networkx/readwrite/tests/test_leda.py .. [ 93%]
networkx/readwrite/tests/test_p2g.py ... [ 93%]
networkx/readwrite/tests/test_pajek.py ........ [ 93%]
networkx/readwrite/tests/test_sparse6.py ................ [ 94%]
networkx/readwrite/tests/test_text.py ................................. [ 94%]
networkx/tests/test_all_random_functions.py s [ 94%]
networkx/tests/test_convert.py ............... [ 95%]
networkx/tests/test_convert_numpy.py ................................... [ 95%]
............... [ 96%]
networkx/tests/test_convert_pandas.py ...................... [ 96%]
networkx/tests/test_exceptions.py ....... [ 96%]
networkx/tests/test_import.py .. [ 96%]
networkx/tests/test_lazy_imports.py .... [ 96%]
networkx/tests/test_relabel.py .............................. [ 97%]
networkx/utils/tests/test__init.py . [ 97%]
networkx/utils/tests/test_decorators.py ................................ [ 98%]
... [ 98%]
networkx/utils/tests/test_heaps.py .. [ 98%]
networkx/utils/tests/test_mapped_queue.py .............................. [ 98%]
................ [ 99%]
networkx/utils/tests/test_misc.py ............................... [ 99%]
networkx/utils/tests/test_random_sequence.py .... [ 99%]
networkx/utils/tests/test_rcm.py .. [ 99%]
networkx/utils/tests/test_unionfind.py ..... [100%]
=================================== FAILURES ===================================
__________________ TestStronglyConnected.test_connected_raise __________________
self = <networkx.algorithms.components.tests.test_strongly_connected.TestStronglyConnected object at 0x7fe5866e5d30>
def test_connected_raise(self):
G = nx.Graph()
with pytest.raises(NetworkXNotImplemented):
next(nx.strongly_connected_components(G))
with pytest.raises(NetworkXNotImplemented):
next(nx.kosaraju_strongly_connected_components(G))
with pytest.raises(NetworkXNotImplemented):
with pytest.deprecated_call():
> next(nx.strongly_connected_components_recursive(G))
networkx/algorithms/components/tests/test_strongly_connected.py:187:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
<class 'networkx.utils.decorators.argmap'> compilation 498:3: in argmap_strongly_connected_components_recursive_495
import gzip
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
g = <networkx.classes.graph.Graph object at 0x7fe585274df0>
def _not_implemented_for(g):
if (mval is None or mval == g.is_multigraph()) and (
dval is None or dval == g.is_directed()
):
> raise nx.NetworkXNotImplemented(errmsg)
E networkx.exception.NetworkXNotImplemented: not implemented for undirected type
networkx/utils/decorators.py:90: NetworkXNotImplemented
During handling of the above exception, another exception occurred:
self = <networkx.algorithms.components.tests.test_strongly_connected.TestStronglyConnected object at 0x7fe5866e5d30>
def test_connected_raise(self):
G = nx.Graph()
with pytest.raises(NetworkXNotImplemented):
next(nx.strongly_connected_components(G))
with pytest.raises(NetworkXNotImplemented):
next(nx.kosaraju_strongly_connected_components(G))
with pytest.raises(NetworkXNotImplemented):
with pytest.deprecated_call():
> next(nx.strongly_connected_components_recursive(G))
E Failed: DID NOT WARN. No warnings of type (<class 'DeprecationWarning'>, <class 'PendingDeprecationWarning'>, <class 'FutureWarning'>) were emitted.
E Emitted warnings: [].
networkx/algorithms/components/tests/test_strongly_connected.py:187: Failed
_______________ TestResistanceDistance.test_resistance_distance ________________
self = <networkx.algorithms.tests.test_distance_measures.TestResistanceDistance object at 0x7fe586613490>
def test_resistance_distance(self):
> rd = nx.resistance_distance(self.G, 1, 3, "weight", True)
networkx/algorithms/tests/test_distance_measures.py:361:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
<class 'networkx.utils.decorators.argmap'> compilation 909:4: in argmap_resistance_distance_906
???
networkx/utils/backends.py:576: in __call__
return self.orig_func(*args, **kwargs)
networkx/algorithms/distance_measures.py:744: in resistance_distance
L = nx.laplacian_matrix(G, weight=weight).todense()
networkx/utils/decorators.py:770: in func
return argmap._lazy_compile(__wrapper)(*args, **kwargs)
<class 'networkx.utils.decorators.argmap'> compilation 913:4: in argmap_laplacian_matrix_910
???
networkx/utils/backends.py:576: in __call__
return self.orig_func(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
G = <networkx.classes.graph.Graph object at 0x7fe5824015b0>, nodelist = None
weight = 'weight'
@not_implemented_for("directed")
@nx._dispatch(edge_attrs="weight")
def laplacian_matrix(G, nodelist=None, weight="weight"):
"""Returns the Laplacian matrix of G.
The graph Laplacian is the matrix L = D - A, where
A is the adjacency matrix and D is the diagonal matrix of node degrees.
Parameters
----------
G : graph
A NetworkX graph
nodelist : list, optional
The rows and columns are ordered according to the nodes in nodelist.
If nodelist is None, then the ordering is produced by G.nodes().
weight : string or None, optional (default='weight')
The edge data key used to compute each value in the matrix.
If None, then each edge has weight 1.
Returns
-------
L : SciPy sparse array
The Laplacian matrix of G.
Notes
-----
For MultiGraph, the edges weights are summed.
See Also
--------
:func:`~networkx.convert_matrix.to_numpy_array`
normalized_laplacian_matrix
:func:`~networkx.linalg.spectrum.laplacian_spectrum`
Examples
--------
For graphs with multiple connected components, L is permutation-similar
to a block diagonal matrix where each block is the respective Laplacian
matrix for each component.
>>> G = nx.Graph([(1, 2), (2, 3), (4, 5)])
>>> print(nx.laplacian_matrix(G).toarray())
[[ 1 -1 0 0 0]
[-1 2 -1 0 0]
[ 0 -1 1 0 0]
[ 0 0 0 1 -1]
[ 0 0 0 -1 1]]
"""
> import scipy as sp
E ModuleNotFoundError: No module named 'scipy'
networkx/linalg/laplacianmatrix.py:66: ModuleNotFoundError
=============================== warnings summary ===============================
networkx/utils/backends.py:135
/home/tkloczko/rpmbuild/BUILD/networkx-networkx-3.2.1/networkx/utils/backends.py:135: RuntimeWarning: networkx backend defined more than once: nx-loopback
backends.update(_get_backends("networkx.backends"))
networkx/utils/backends.py:576
/home/tkloczko/rpmbuild/BUILD/networkx-networkx-3.2.1/networkx/utils/backends.py:576: DeprecationWarning:
random_tree is deprecated and will be removed in NX v3.4
Use random_labeled_tree instead.
return self.orig_func(*args, **kwargs)
networkx/algorithms/centrality/tests/test_group.py::TestProminentGroup::test_prominent_group_single_node
networkx/algorithms/centrality/tests/test_group.py::TestProminentGroup::test_prominent_group_with_c
networkx/algorithms/centrality/tests/test_group.py::TestProminentGroup::test_prominent_group_normalized_endpoints
networkx/algorithms/centrality/tests/test_group.py::TestProminentGroup::test_prominent_group_disconnected_graph
networkx/algorithms/centrality/tests/test_group.py::TestProminentGroup::test_group_betweenness_directed_weighted
networkx/algorithms/centrality/tests/test_group.py::TestProminentGroup::test_prominent_group_greedy_algorithm
/home/tkloczko/rpmbuild/BUILD/networkx-networkx-3.2.1/networkx/algorithms/centrality/group.py:501: FutureWarning: ChainedAssignmentError: behaviour will change in pandas 3.0!
You are setting values through chained assignment. Currently this works in certain cases, but when using Copy-on-Write (which will become the default behaviour in pandas 3.0) this will never work to update the original DataFrame or Series, because the intermediate object on which we are setting values will behave as a copy.
A typical example is when you are setting values in a column of a DataFrame, like:
df["col"][row_indexer] = value
Use `df.loc[row_indexer, "col"] = values` instead, to perform the assignment in a single step and ensure this keeps updating the original `df`.
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
DF_tree.nodes[node_p]["betweenness"][x][y] = (
networkx/algorithms/centrality/tests/test_group.py::TestProminentGroup::test_prominent_group_single_node
networkx/algorithms/centrality/tests/test_group.py::TestProminentGroup::test_prominent_group_with_c
networkx/algorithms/centrality/tests/test_group.py::TestProminentGroup::test_prominent_group_normalized_endpoints
networkx/algorithms/centrality/tests/test_group.py::TestProminentGroup::test_prominent_group_disconnected_graph
networkx/algorithms/centrality/tests/test_group.py::TestProminentGroup::test_group_betweenness_directed_weighted
networkx/algorithms/centrality/tests/test_group.py::TestProminentGroup::test_prominent_group_greedy_algorithm
/home/tkloczko/rpmbuild/BUILD/networkx-networkx-3.2.1/networkx/algorithms/centrality/group.py:505: FutureWarning: ChainedAssignmentError: behaviour will change in pandas 3.0!
You are setting values through chained assignment. Currently this works in certain cases, but when using Copy-on-Write (which will become the default behaviour in pandas 3.0) this will never work to update the original DataFrame or Series, because the intermediate object on which we are setting values will behave as a copy.
A typical example is when you are setting values in a column of a DataFrame, like:
df["col"][row_indexer] = value
Use `df.loc[row_indexer, "col"] = values` instead, to perform the assignment in a single step and ensure this keeps updating the original `df`.
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
DF_tree.nodes[node_p]["betweenness"][x][y] -= (
networkx/algorithms/centrality/tests/test_group.py::TestProminentGroup::test_prominent_group_single_node
networkx/algorithms/centrality/tests/test_group.py::TestProminentGroup::test_prominent_group_with_c
networkx/algorithms/centrality/tests/test_group.py::TestProminentGroup::test_prominent_group_normalized_endpoints
networkx/algorithms/centrality/tests/test_group.py::TestProminentGroup::test_prominent_group_disconnected_graph
networkx/algorithms/centrality/tests/test_group.py::TestProminentGroup::test_group_betweenness_directed_weighted
networkx/algorithms/centrality/tests/test_group.py::TestProminentGroup::test_prominent_group_greedy_algorithm
/home/tkloczko/rpmbuild/BUILD/networkx-networkx-3.2.1/networkx/algorithms/centrality/group.py:509: FutureWarning: ChainedAssignmentError: behaviour will change in pandas 3.0!
You are setting values through chained assignment. Currently this works in certain cases, but when using Copy-on-Write (which will become the default behaviour in pandas 3.0) this will never work to update the original DataFrame or Series, because the intermediate object on which we are setting values will behave as a copy.
A typical example is when you are setting values in a column of a DataFrame, like:
df["col"][row_indexer] = value
Use `df.loc[row_indexer, "col"] = values` instead, to perform the assignment in a single step and ensure this keeps updating the original `df`.
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
DF_tree.nodes[node_p]["betweenness"][x][y] -= (
-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=========================== short test summary info ============================
SKIPPED [1] networkx/algorithms/assortativity/tests/test_correlation.py:4: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/bipartite/tests/test_matrix.py:4: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/bipartite/tests/test_spectral_bipartivity.py:3: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/centrality/tests/test_current_flow_betweenness_centrality.py:8: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/centrality/tests/test_current_flow_betweenness_centrality_subset.py:4: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/centrality/tests/test_current_flow_closeness.py:4: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/centrality/tests/test_eigenvector_centrality.py:6: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/centrality/tests/test_laplacian_centrality.py:6: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/centrality/tests/test_second_order_centrality.py:8: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/centrality/tests/test_subgraph.py:4: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/centrality/tests/test_trophic.py:6: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/link_analysis/tests/test_hits.py:6: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/link_analysis/tests/test_pagerank.py:9: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_communicability.py:6: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_node_classification.py:4: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_polynomials.py:7: could not import 'sympy': No module named 'sympy'
SKIPPED [1] networkx/algorithms/tests/test_walks.py:8: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/classes/tests/test_backends.py:7: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/drawing/tests/test_agraph.py:7: could not import 'pygraphviz': No module named 'pygraphviz'
SKIPPED [1] networkx/drawing/tests/test_layout.py:7: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/drawing/tests/test_pydot.py:11: could not import 'pydot': No module named 'pydot'
SKIPPED [1] networkx/generators/tests/test_spectral_graph_forge.py:4: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/linalg/tests/test_bethehessian.py:4: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/linalg/tests/test_graphmatrix.py:4: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/linalg/tests/test_laplacian.py:4: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/linalg/tests/test_modularity.py:4: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/linalg/tests/test_spectrum.py:4: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/tests/test_convert_scipy.py:4: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/approximation/tests/test_traveling_salesman.py:394: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/approximation/tests/test_traveling_salesman.py:431: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/approximation/tests/test_traveling_salesman.py:487: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/approximation/tests/test_traveling_salesman.py:522: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/approximation/tests/test_traveling_salesman.py:562: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/approximation/tests/test_traveling_salesman.py:601: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/approximation/tests/test_traveling_salesman.py:660: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/approximation/tests/test_traveling_salesman.py:715: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/approximation/tests/test_traveling_salesman.py:784: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/approximation/tests/test_traveling_salesman.py:832: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/approximation/tests/test_traveling_salesman.py:905: need --runslow option to run
SKIPPED [1] networkx/algorithms/bipartite/tests/test_basic.py:97: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/bipartite/tests/test_basic.py:108: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/bipartite/tests/test_basic.py:119: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/bipartite/tests/test_matching.py:223: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/bipartite/tests/test_matching.py:233: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/bipartite/tests/test_matching.py:245: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/bipartite/tests/test_matching.py:259: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/bipartite/tests/test_matching.py:276: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/bipartite/tests/test_matching.py:293: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/bipartite/tests/test_matching.py:310: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/bipartite/tests/test_matching.py:319: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/centrality/tests/test_katz_centrality.py:121: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/centrality/tests/test_katz_centrality.py:134: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/centrality/tests/test_katz_centrality.py:143: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/centrality/tests/test_katz_centrality.py:152: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/centrality/tests/test_katz_centrality.py:161: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/centrality/tests/test_katz_centrality.py:201: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/centrality/tests/test_katz_centrality.py:205: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/centrality/tests/test_katz_centrality.py:209: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/centrality/tests/test_katz_centrality.py:215: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/centrality/tests/test_katz_centrality.py:220: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/centrality/tests/test_katz_centrality.py:233: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/centrality/tests/test_katz_centrality.py:317: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/centrality/tests/test_katz_centrality.py:324: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/centrality/tests/test_katz_centrality.py:339: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/connectivity/tests/test_kcomponents.py:94: need --runslow option to run
SKIPPED [1] networkx/algorithms/connectivity/tests/test_kcomponents.py:110: need --runslow option to run
SKIPPED [1] networkx/algorithms/connectivity/tests/test_kcomponents.py:117: need --runslow option to run
SKIPPED [1] networkx/algorithms/connectivity/tests/test_kcutsets.py:136: need --runslow option to run
SKIPPED [1] networkx/algorithms/connectivity/tests/test_kcutsets.py:210: need --runslow option to run
SKIPPED [1] networkx/algorithms/flow/tests/test_gomory_hu.py:76: need --runslow option to run
SKIPPED [1] networkx/algorithms/flow/tests/test_maxflow_large_graph.py:128: need --runslow option to run
SKIPPED [1] networkx/algorithms/tests/test_cycles.py:873: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_cycles.py:887: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_cycles.py:898: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_cycles.py:919: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_distance_measures.py:337: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_distance_measures.py:366: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_distance_measures.py:372: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_distance_measures.py:377: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_distance_measures.py:384: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_distance_measures.py:399: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_distance_measures.py:403: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_distance_measures.py:416: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_distance_measures.py:429: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_distance_measures.py:547: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_distance_measures.py:553: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_distance_measures.py:559: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_distance_measures.py:571: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_distance_measures.py:592: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_distance_measures.py:597: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_distance_measures.py:625: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_distance_measures.py:650: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_distance_measures.py:673: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_distance_measures.py:682: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_similarity.py:44: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_similarity.py:60: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_similarity.py:86: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_similarity.py:101: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_similarity.py:116: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_similarity.py:153: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_similarity.py:190: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_similarity.py:197: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_similarity.py:238: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_similarity.py:252: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_similarity.py:281: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_similarity.py:310: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_similarity.py:339: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_similarity.py:363: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_similarity.py:372: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_similarity.py:383: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_similarity.py:394: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_similarity.py:405: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_similarity.py:417: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_similarity.py:432: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_similarity.py:445: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_similarity.py:457: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_similarity.py:471: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_similarity.py:489: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_similarity.py:501: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_similarity.py:513: could not import 'scipy': No module named 'scipy'
SKIPPED [2] networkx/algorithms/tests/test_similarity.py:531: could not import 'scipy': No module named 'scipy'
SKIPPED [2] networkx/algorithms/tests/test_similarity.py:609: could not import 'scipy': No module named 'scipy'
SKIPPED [2] networkx/algorithms/tests/test_similarity.py:637: could not import 'scipy': No module named 'scipy'
SKIPPED [2] networkx/algorithms/tests/test_similarity.py:674: could not import 'scipy': No module named 'scipy'
SKIPPED [2] networkx/algorithms/tests/test_similarity.py:697: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_similarity.py:702: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_similarity.py:732: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_similarity.py:776: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_similarity.py:790: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_similarity.py:796: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_similarity.py:809: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_similarity.py:822: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_similarity.py:860: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_similarity.py:901: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_threshold.py:251: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tests/test_tournament.py:127: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tree/tests/test_mst.py:464: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tree/tests/test_mst.py:492: need --runslow option to run
SKIPPED [1] networkx/algorithms/tree/tests/test_mst.py:588: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/algorithms/tree/tests/test_mst.py:618: need --runslow option to run
SKIPPED [5] networkx/generators/tests/test_expanders.py:25: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/linalg/tests/test_algebraic_connectivity.py:15: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/linalg/tests/test_algebraic_connectivity.py:23: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/linalg/tests/test_algebraic_connectivity.py:31: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/linalg/tests/test_algebraic_connectivity.py:39: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/linalg/tests/test_algebraic_connectivity.py:50: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/linalg/tests/test_algebraic_connectivity.py:105: could not import 'scipy': No module named 'scipy'
SKIPPED [4] networkx/linalg/tests/test_algebraic_connectivity.py:112: could not import 'scipy': No module named 'scipy'
SKIPPED [4] networkx/linalg/tests/test_algebraic_connectivity.py:124: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/linalg/tests/test_algebraic_connectivity.py:137: could not import 'scipy': No module named 'scipy'
SKIPPED [4] networkx/linalg/tests/test_algebraic_connectivity.py:148: could not import 'scipy': No module named 'scipy'
SKIPPED [4] networkx/linalg/tests/test_algebraic_connectivity.py:159: could not import 'scipy': No module named 'scipy'
SKIPPED [4] networkx/linalg/tests/test_algebraic_connectivity.py:171: could not import 'scipy': No module named 'scipy'
SKIPPED [4] networkx/linalg/tests/test_algebraic_connectivity.py:182: could not import 'scipy': No module named 'scipy'
SKIPPED [8] networkx/linalg/tests/test_algebraic_connectivity.py:200: could not import 'scipy': No module named 'scipy'
SKIPPED [4] networkx/linalg/tests/test_algebraic_connectivity.py:333: could not import 'scipy': No module named 'scipy'
SKIPPED [4] networkx/linalg/tests/test_algebraic_connectivity.py:342: could not import 'scipy': No module named 'scipy'
SKIPPED [4] networkx/linalg/tests/test_algebraic_connectivity.py:351: could not import 'scipy': No module named 'scipy'
SKIPPED [4] networkx/linalg/tests/test_algebraic_connectivity.py:361: could not import 'scipy': No module named 'scipy'
SKIPPED [4] networkx/linalg/tests/test_algebraic_connectivity.py:371: could not import 'scipy': No module named 'scipy'
SKIPPED [8] networkx/linalg/tests/test_algebraic_connectivity.py:395: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/linalg/tests/test_attrmatrix.py:80: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/linalg/tests/test_attrmatrix.py:94: could not import 'scipy': No module named 'scipy'
SKIPPED [1] networkx/tests/test_all_random_functions.py:222: need --runslow option to run
FAILED networkx/algorithms/components/tests/test_strongly_connected.py::TestStronglyConnected::test_connected_raise
FAILED networkx/algorithms/tests/test_distance_measures.py::TestResistanceDistance::test_resistance_distance
===== 2 failed, 4683 passed, 220 skipped, 20 warnings in 120.36s (0:02:00) =====
```
</details>
As you can see, there are also some pytest warnings.

Please let me know if you want the above patch as a PR.
I wasn't able to reproduce this error locally. I even tried changing matplotlib's version in `requirements/default.txt` to 3.8.3, but still no tests were failing. Looking at the above logs (`test session starts` section), you are using `Python 3.9.18`, but networkx requires `Python >=3.10`.
> I wasn't able to reproduce this error locally. I even tried changing matplotlib's version in `requirements/default.txt` to 3.8.3, but still no tests were failing. Looking at the above logs (`test session starts` section), you are using `Python 3.9.18`, but networkx requires `Python >=3.10`.
Are you sure? 🤔
In the `pyproject.toml` of the tagged 3.2.1 version I see:
https://github.com/networkx/networkx/blob/9c1ee6392311d056760714d4126cd6382f75a96f/pyproject.toml#L9
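(For reference, and assuming the tagged file is unchanged, that line reads `requires-python = ">=3.9"`.)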
There are a few different things reported here; I'll try to address them separately:
Re: the originally reported test failure, i.e. the fact that `matplotlib` has no `use` attribute - I can't reproduce this one. Matplotlib 3.8.3 certainly does have this method:
```bash
python -m venv mpl-test
source mpl-test/bin/activate
python -m pip install matplotlib==3.8.3
python -c "import matplotlib as mpl; mpl.use('PS')" # Works fine
```
However - in the test incantation, `pytest.importorskip` [is used](https://github.com/networkx/networkx/blob/76c3e9bc00fb7d159b00e4c71c7274d185fcb254/networkx/drawing/tests/test_pylab.py#L8) to load matplotlib. It's possible there's some bad interaction between the `importorskip` and the immediate method call on the bound `mpl` name - though I would expect that to raise some form of `ModuleNotFoundError` rather than an `AttributeError`. Perhaps there's some sort of partial/malformed matplotlib package in the test environment?
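For context, the import pattern at the top of that test file looks roughly like this (a minimal sketch, not the verbatim module):

```python
import pytest

# Skip the whole test module when matplotlib cannot be imported at all.
mpl = pytest.importorskip("matplotlib")
mpl.use("PS")  # an AttributeError here means `matplotlib` imported, but without `use`
plt = pytest.importorskip("matplotlib.pyplot")
```

So an `AttributeError` on the `use` line implies the `matplotlib` import itself succeeded but bound something incomplete.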
---
The missing scipy imports are real - these snuck in because we only test two environments w.r.t default dependencies: when *none* of them are installed, or when *all* of them are installed. There is no explicit testing (IIRC) of a subset of default dependencies. Patches to fix the missing scipy importorskips welcome!
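The fix is mostly mechanical: guard the affected tests with `importorskip` so they skip cleanly instead of erroring when scipy is absent. A minimal sketch of the guard:

```python
import pytest

# Either at module level, or inside the specific tests that need scipy:
sp = pytest.importorskip("scipy")  # skipped, not failed, when scipy is absent
```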
---
The remaining test failures that aren't covered above have been fixed already on `main` and will be incorporated in the next release (within the next couple weeks).
I'm not 100% sure, but my understanding is that all of these issues relate only to the test suite and not to any actual tested code.
Am I right? 🤔
> I'm not 100% sure, but my understanding is that all of these issues relate only to the test suite and not to any actual tested code.

Correct - these are all warts on the test suite and should not affect user-facing code. It's possible that missing scipy dependencies leave you with an environment where some NX functionality is not available, but that's expected, and the resulting `ImportError`/`ModuleNotFoundError` should make it very clear to users which "soft" dependencies are missing.
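For illustration, a hypothetical session in an environment without scipy, using one of the functions from the failing tests above:

```python
import networkx as nx

G = nx.path_graph(4)
# Without scipy installed this raises ModuleNotFoundError at call time,
# naming the missing "soft" dependency explicitly.
nx.resistance_distance(G, 0, 3)
```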