Dataset schema (column: type, observed range):
- repo: string, categorical (856 distinct values)
- pull_number: int64 (3 to 127k)
- instance_id: string (length 12 to 58)
- issue_numbers: sequence (length 1 to 5)
- base_commit: string (length 40)
- patch: string (length 67 to 1.54M)
- test_patch: string (length 0 to 107M)
- problem_statement: string (length 3 to 307k)
- hints_text: string (length 0 to 908k)
- created_at: timestamp[s]
networkx/networkx
3,822
networkx__networkx-3822
[ "3701" ]
a4d024c54f06d17d2f9ab26595a0b20ed6858f5c
diff --git a/networkx/generators/random_graphs.py b/networkx/generators/random_graphs.py
--- a/networkx/generators/random_graphs.py
+++ b/networkx/generators/random_graphs.py
@@ -1000,6 +1000,11 @@ def random_lobster(n, p1, p2, seed=None):
     leaf nodes. A caterpillar is a tree that reduces to a path graph
     when pruning all leaf nodes; setting `p2` to zero produces a caterpillar.
 
+    This implementation iterates on the probabilities `p1` and `p2` to add
+    edges at levels 1 and 2, respectively. Graphs are therefore constructed
+    iteratively with uniform randomness at each level rather than being selected
+    uniformly at random from the set of all possible lobsters.
+
     Parameters
     ----------
     n : int
@@ -1011,19 +1016,29 @@ def random_lobster(n, p1, p2, seed=None):
     seed : integer, random_state, or None (default)
         Indicator of random number generation state.
         See :ref:`Randomness<randomness>`.
+
+    Raises
+    ------
+    NetworkXError
+        If `p1` or `p2` parameters are >= 1 because the while loops would never finish.
     """
+    p1, p2 = abs(p1), abs(p2)
+    if any([p >= 1 for p in [p1, p2]]):
+        raise nx.NetworkXError("Probability values for `p1` and `p2` must both be < 1.")
+
     # a necessary ingredient in any self-respecting graph library
     llen = int(2 * seed.random() * n + 0.5)
     L = path_graph(llen)
    # build caterpillar: add edges to path graph with probability p1
     current_node = llen - 1
     for n in range(llen):
-        if seed.random() < p1:  # add fuzzy caterpillar parts
+        while seed.random() < p1:  # add fuzzy caterpillar parts
             current_node += 1
             L.add_edge(n, current_node)
-            if seed.random() < p2:  # add crunchy lobster bits
+            cat_node = current_node
+            while seed.random() < p2:  # add crunchy lobster bits
                 current_node += 1
-                L.add_edge(current_node - 1, current_node)
+                L.add_edge(cat_node, current_node)
     return L  # voila, un lobster!
diff --git a/networkx/generators/tests/test_random_graphs.py b/networkx/generators/tests/test_random_graphs.py
--- a/networkx/generators/tests/test_random_graphs.py
+++ b/networkx/generators/tests/test_random_graphs.py
@@ -91,7 +91,38 @@ def test_random_graph(self):
         constructor = [(10, 20, 0.8), (20, 40, 0.8)]
         G = random_shell_graph(constructor, seed)
 
+        def is_caterpillar(g):
+            """
+            A tree is a caterpillar iff all nodes of degree >=3 are surrounded
+            by at most two nodes of degree two or greater.
+            ref: http://mathworld.wolfram.com/CaterpillarGraph.html
+            """
+            deg_over_3 = [n for n in g if g.degree(n) >= 3]
+            for n in deg_over_3:
+                nbh_deg_over_2 = [nbh for nbh in g.neighbors(n) if g.degree(nbh) >= 2]
+                if not len(nbh_deg_over_2) <= 2:
+                    return False
+            return True
+
+        def is_lobster(g):
+            """
+            A tree is a lobster if it has the property that the removal of leaf
+            nodes leaves a caterpillar graph (Gallian 2007)
+            ref: http://mathworld.wolfram.com/LobsterGraph.html
+            """
+            non_leafs = [n for n in g if g.degree(n) > 1]
+            return is_caterpillar(g.subgraph(non_leafs))
+
         G = random_lobster(10, 0.1, 0.5, seed)
+        assert max([G.degree(n) for n in G.nodes()]) > 3
+        assert is_lobster(G)
+        pytest.raises(NetworkXError, random_lobster, 10, 0.1, 1, seed)
+        pytest.raises(NetworkXError, random_lobster, 10, 1, 1, seed)
+        pytest.raises(NetworkXError, random_lobster, 10, 1, 0.5, seed)
+
+        # docstring says this should be a caterpillar
+        G = random_lobster(10, 0.1, 0.0, seed)
+        assert is_caterpillar(G)
 
         # difficult to find seed that requires few tries
         seq = random_powerlaw_tree_sequence(10, 3, seed=14, tries=1)
Wrong random_lobster implementation?

Hi, it seems that [networkx.random_lobster's implementation logic](https://github.com/networkx/networkx/blob/4e9771f04192e94a5cbdd71249a983d124a56593/networkx/generators/random_graphs.py#L1009) is not aligned with the common definition given in [Wolfram MathWorld](http://mathworld.wolfram.com/LobsterGraph.html) or [Wikipedia](https://en.wikipedia.org/wiki/List_of_graphs#Lobster). For example, it cannot produce the simplest lobster graph examples, such as ![lobster_example](https://www.researchgate.net/profile/Dr_Nasreen_Khan/publication/256292297/figure/fig5/AS:669566174756876@1536648424468/A-diameter-2-lobster-graph.png).
I believe it can produce such monster lobsters :) What makes you think that it can't? It creates random lobsters -- you can't request a specific lobster.

Hi, in my understanding as well as in my experiments, the implementation can generate at most one first-level leaf and one second-level leaf; it doesn't support multiple leaves. Here is my simple beta-version implementation if you are interested. The definitions of p1 and p2 are different from yours.

```py
import numpy as np

def random_lobster(n, p1, p2):
    path_edges = [list(range(n - 1)), list(range(1, n))]
    first_leaf_num = np.random.binomial(n, p1)
    first_leaf_edges = [np.random.randint(low=0, high=n, size=first_leaf_num).tolist(),
                        np.arange(n, n + first_leaf_num).tolist()]
    second_leaf_num = np.random.binomial(first_leaf_num, p2)
    second_leaf_edges = [np.random.randint(low=n, high=n + first_leaf_num, size=second_leaf_num).tolist(),
                         np.arange(n + first_leaf_num, n + first_leaf_num + second_leaf_num).tolist()]
    rol = path_edges[0] + first_leaf_edges[0] + second_leaf_edges[0]
    col = path_edges[1] + first_leaf_edges[1] + second_leaf_edges[1]
    edges = [rol, col]
    return edges
```

So, you agree that it creates lobsters -- just not the lobsters you would like to see? None of the references you point to specify anything about creating lobsters beyond the definition: a lobster is a tree where, when you remove leaves twice, you end up with a path. It looks like you are proposing a new random model of lobster generation. Would you mind describing your model, where you got the algorithm from, why you think it is good, how it might be included in NetworkX, etc.?

It seems that you agree that the current implementation is not able to generate multi-leaf lobster graphs... As a random generator over a specific set of graphs, it should be able to generate any element of that set. Can you imagine `np.random.choice(10)` never generating 5? If you can't do that, maybe you should not include this function in the package, or use another, clearer name. My code resulted from one minute of thinking. It's super simple but can generate a much larger subset of lobster graphs than yours. It's not ideal, of course. Given your function name, a correct implementation should be able to generate any element of the set of lobster graphs with the specified properties, with equal probability. I appreciate your effort in sharing the code, and networkx is probably one of the most popular Python libraries for graphs. But please remember that influence implies responsibility: the more users use your code, the more influential a mistake will be. The stable version should be stable and reliable. BTW, I don't care whether my code is included, and I take no responsibility for it either. I just want to flag this issue, and if the team cares about its users, they should do something to fix it.

The current implementation can and does produce multi-leaf lobsters, but it generates at most one second-level leaf for each first-level leaf. As is true for many of our random graph algorithms, we don't claim uniformly likely outcomes from the set. That is often too difficult, though it is the ideal we'd all appreciate. Perhaps we should be more clear in the documentation. I would like to replace/improve the lobster generator if there is interest, and it would be nice to use a method established in the literature. As you say, influence implies responsibility. Since you are suggesting the change, would you be willing to search for established methods of creating random lobsters? Thanks either way -- it's good to have this on our radar.
Hi, feel free to do whatever you want; this is just a note about a common misunderstanding. I was using your function to build datasets for training and evaluating methods, and I have to say that the distribution of generated samples does matter in this case. As for established methods, I don't consider myself an expert in this area, and I don't have enough time to learn everything from scratch. And sorry for being harsh these days; I was freaking out about wasting several days on a dataset with too little variation. I took networkx's reliability for granted and spent lots of time debugging other pieces of my code. You could at least mention something in the documentation to avoid such situations. Thanks.
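For concreteness, a small illustrative sketch of the behavioral change discussed above. The seed and probabilities here are arbitrary choices, not from the PR; the point is that with the while-loop fix a single backbone node can acquire several leaves, so degrees above 3 become possible:

```python
import networkx as nx

# p1 and p2 must be < 1 after the patch, which validates them up front.
G = nx.random_lobster(10, 0.6, 0.6, seed=1)
# Pre-patch, each backbone node got at most one level-1 leaf (max degree 3);
# post-patch, repeated coin flips can attach several leaves per node.
print(max(d for _, d in G.degree()))
```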
2020-02-16T23:49:40
networkx/networkx
3,848
networkx__networkx-3848
[ "3846" ]
3f4f9c3379a5d70fc58852154aab7b1051ff96d6
diff --git a/networkx/algorithms/connectivity/cuts.py b/networkx/algorithms/connectivity/cuts.py
--- a/networkx/algorithms/connectivity/cuts.py
+++ b/networkx/algorithms/connectivity/cuts.py
@@ -281,7 +281,7 @@ def minimum_st_node_cut(G, s, t, flow_func=None, auxiliary=None, residual=None):
     if mapping is None:
         raise nx.NetworkXError('Invalid auxiliary digraph.')
     if G.has_edge(s, t) or G.has_edge(t, s):
-        return []
+        return {}
     kwargs = dict(flow_func=flow_func, residual=residual, auxiliary=H)
 
     # The edge cut in the auxiliary digraph corresponds to the node cut in the
diff --git a/networkx/algorithms/connectivity/tests/test_cuts.py b/networkx/algorithms/connectivity/tests/test_cuts.py
--- a/networkx/algorithms/connectivity/tests/test_cuts.py
+++ b/networkx/algorithms/connectivity/tests/test_cuts.py
@@ -268,7 +268,7 @@ def tests_minimum_st_node_cut():
     G.add_nodes_from([0, 1, 2, 3, 7, 8, 11, 12])
     G.add_edges_from([(7, 11), (1, 11), (1, 12), (12, 8), (0, 1)])
     nodelist = minimum_st_node_cut(G, 7, 11)
-    assert(nodelist == [])
+    assert(nodelist == {})
 
 
 def test_invalid_auxiliary():
`minimum_st_node_cut` returns an empty list instead of a set for adjacent nodes

https://github.com/networkx/networkx/blob/3f4f9c3379a5d70fc58852154aab7b1051ff96d6/networkx/algorithms/connectivity/cuts.py#L284

Should read `return {}`. Was questioning my sanity for a bit there πŸ˜‰
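A quick sketch of the inconsistency (the pre-fix behavior is as described in the issue above):

```python
import networkx as nx
from networkx.algorithms.connectivity import minimum_st_node_cut

G = nx.path_graph(4)
print(minimum_st_node_cut(G, 0, 3))  # a set, e.g. {1}
G.add_edge(0, 3)
# s and t are now adjacent: before the fix this branch returned the list [],
# inconsistent with the set-like return type used everywhere else.
print(minimum_st_node_cut(G, 0, 3))
```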
2020-03-04T08:11:36
networkx/networkx
3,907
networkx__networkx-3907
[ "3884" ]
b0ca6ce081cc6e66d509c5f7fed60728da821c25
diff --git a/networkx/convert.py b/networkx/convert.py
--- a/networkx/convert.py
+++ b/networkx/convert.py
@@ -50,7 +50,7 @@ def to_networkx_graph(data, create_using=None, multigraph_input=False):
       any NetworkX graph
       dict-of-dicts
       dict-of-lists
-      list of edges
+      container (ie set, list, tuple, iterator) of edges
       Pandas DataFrame (row per edge)
       numpy matrix
       numpy ndarray
@@ -108,7 +108,7 @@ def to_networkx_graph(data, create_using=None, multigraph_input=False):
 
     # list or generator of edges
 
-    if isinstance(data, (list, tuple)) or any(
+    if isinstance(data, (list, tuple, set)) or any(
         hasattr(data, attr) for attr in ["_adjdict", "next", "__next__"]
     ):
         try:
Add support for sets of graph edges

Currently it looks like only tuples and lists of graph edges are supported; however, with a small change I think it would be easy to add support for any Iterable, providing greater flexibility to users of the library. Are there any considerations I'd need to take into account before submitting a PR?
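A minimal sketch of the desired usage, assuming the extended isinstance check from the patch above is applied:

```python
import networkx as nx

edges = {(1, 2), (3, 4)}  # a set of edges
G = nx.Graph(edges)       # accepted once `set` is recognized as an edge container
print(sorted(G.edges()))  # [(1, 2), (3, 4)]
```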
2020-04-11T01:04:04
networkx/networkx
3,916
networkx__networkx-3916
[ "3910" ]
bd1c7bbb9b9e7c3500a852d1f691134310c0ecd2
diff --git a/networkx/readwrite/pajek.py b/networkx/readwrite/pajek.py
--- a/networkx/readwrite/pajek.py
+++ b/networkx/readwrite/pajek.py
@@ -54,7 +54,12 @@ def generate_pajek(G):
         na = G.nodes.get(n, {}).copy()
         x = na.pop('x', 0.0)
         y = na.pop('y', 0.0)
-        id = int(na.pop('id', nodenumber[n]))
+        try:
+            id = int(na.pop('id', nodenumber[n]))
+        except ValueError as e:
+            e.args += (("Pajek format requires 'id' to be an int()."
+                        " Refer to the 'Relabeling nodes' section."),)
+            raise
         nodenumber[n] = id
         shape = na.pop('shape', 'ellipse')
         s = ' '.join(map(make_qstr, (id, n, x, y, shape)))
Pajek export assumes a graph node's id attribute to be int-castable

In the Pajek export it reads:

https://github.com/networkx/networkx/blob/b0ca6ce081cc6e66d509c5f7fed60728da821c25/networkx/readwrite/pajek.py#L57

I was wondering: since NetworkX supports arbitrary Python objects as graph nodes, where does the assumption come from that a node's `id` attribute is `int`-castable? An object's `id` attribute has nothing to do with Python's builtin `id()`. But I'm not familiar with the Pajek file format, so I don't know about the relevance and special treatment of `id` in it. I was working with a graph whose nodes' `id` attribute happened to be a string. For it, Pajek export failed with:

```
ValueError: invalid literal for int() with base 10: 'arn:aws:iam::579763146468:role/aws-service-role/trustedadvisor.amazonaws.com/AWSServiceRoleForTrustedAdvisor'
```
It's not a networkx limitation -- it's part of the Pajek standard. You can [relabel the nodes](https://networkx.github.io/documentation/stable/reference/relabel.html) to be integers and store the string as an attribute in the Pajek file.

I see, thank you for clarifying! Since this is a limitation imposed by the Pajek standard, would it make sense to add a try/except for ValueError around the int() cast and attach that information, together with your suggestion to relabel? Or is there a situation where the raw `ValueError: invalid literal for int()` is a more useful output? I'd send a PR if that makes sense.

That's an excellent idea! I think the cost of a try/except around the cast should be small, and the error message would probably help some people a lot. :}
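A sketch of the workaround suggested above. The node value, ARN string, and attribute name `orig_id` are illustrative, not from the issue:

```python
import networkx as nx

G = nx.Graph()
G.add_node(0, id="arn:aws:iam::123456789012:role/example")  # string 'id' attribute
# Pajek requires an integer 'id'; stash the string under another attribute first.
for _, data in G.nodes(data=True):
    data["orig_id"] = data.pop("id")
nx.write_pajek(G, "graph.net")
```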
2020-04-14T18:33:16
networkx/networkx
3,952
networkx__networkx-3952
[ "3926" ]
c91c0cd93759d0c6ad21e23fbe3d4290d2ed0d21
diff --git a/networkx/generators/community.py b/networkx/generators/community.py
--- a/networkx/generators/community.py
+++ b/networkx/generators/community.py
@@ -300,7 +300,7 @@ def planted_partition_graph(l, k, p_in, p_out, seed=None, directed=False):
     .. [2] Santo Fortunato 'Community Detection in Graphs' Physical Reports
        Volume 486, Issue 3-5 p. 75-174. https://arxiv.org/abs/0906.0612
     """
-    return random_partition_graph([k] * l, p_in, p_out, seed, directed)
+    return random_partition_graph([k] * l, p_in, p_out, seed=seed, directed=directed)
 
 
 @py_random_state(6)
@@ -377,7 +377,7 @@ def gaussian_random_partition_graph(n, s, v, p_in, p_out, directed=False,
             break
         assigned += size
         sizes.append(size)
-    return random_partition_graph(sizes, p_in, p_out, directed, seed)
+    return random_partition_graph(sizes, p_in, p_out, seed=seed, directed=directed)
 
 
 def ring_of_cliques(num_cliques, clique_size):
gaussian_random_partition_graph optional parameter doesn't work

The gaussian_random_partition_graph constructor always returns a directed graph, no matter how I set the "directed" parameter.
Yes -- indeed the function `gaussian_random_partition_graph` calls `random_partition_graph` with positional arguments in the wrong order: seed and directed are switched. It would be good to change that call to use keywords:

```python
random_partition_graph(sizes, p_in, p_out, directed=directed, seed=seed)
```
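A quick check of the symptom (the parameter values here are arbitrary):

```python
import networkx as nx

G = nx.gaussian_random_partition_graph(100, 10, 10, 0.25, 0.1, directed=False, seed=7)
# Pre-fix, a non-None seed landed in the `directed` slot of the positional call,
# making the result directed; with the fix this prints False as requested.
print(G.is_directed())
```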
2020-05-12T17:38:15
networkx/networkx
3,958
networkx__networkx-3958
[ "3937" ]
a589aac756a82d44b9f5947cf2ddaf81be348880
diff --git a/networkx/algorithms/structuralholes.py b/networkx/algorithms/structuralholes.py
--- a/networkx/algorithms/structuralholes.py
+++ b/networkx/algorithms/structuralholes.py
@@ -98,7 +98,7 @@ def effective_size(G, nodes=None, weight=None):
     Returns
     -------
     dict
-        Dictionary with nodes as keys and the constraint on the node as values.
+        Dictionary with nodes as keys and the effective size of the node as values.
 
     Notes
     -----
Misleading description in the docs

On this page: https://networkx.github.io/documentation/stable/reference/algorithms/generated/networkx.algorithms.structuralholes.effective_size.html

the description of *Returns* is "Dictionary with nodes as keys and the constraint on the node as values." But this function computes effective size, so I think it should be "Dictionary with nodes as keys and the **effective size of** the node as values."
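For reference, a tiny sketch of what the corrected description refers to (the printed values indicate the shape of the result; exact numbers depend on the graph):

```python
import networkx as nx

G = nx.complete_graph(3)
# A dict mapping node -> effective size, e.g. {0: 1.0, 1: 1.0, 2: 1.0} for K3.
print(nx.effective_size(G))
```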
2020-05-15T15:13:12
networkx/networkx
3,959
networkx__networkx-3959
[ "3885" ]
a589aac756a82d44b9f5947cf2ddaf81be348880
diff --git a/networkx/algorithms/structuralholes.py b/networkx/algorithms/structuralholes.py
--- a/networkx/algorithms/structuralholes.py
+++ b/networkx/algorithms/structuralholes.py
@@ -98,7 +98,7 @@ def effective_size(G, nodes=None, weight=None):
     Returns
     -------
     dict
-        Dictionary with nodes as keys and the constraint on the node as values.
+        Dictionary with nodes as keys and the effective size of the node as values.
 
     Notes
     -----
diff --git a/networkx/linalg/graphmatrix.py b/networkx/linalg/graphmatrix.py
--- a/networkx/linalg/graphmatrix.py
+++ b/networkx/linalg/graphmatrix.py
@@ -13,8 +13,8 @@ def incidence_matrix(G, nodelist=None, edgelist=None, oriented=False, weight=Non
     For a standard incidence matrix a 1 appears wherever a row's node is
     incident on the column's edge.  For an oriented incidence matrix each
     edge is assigned an orientation (arbitrarily for undirected and aligning to
-    direction for directed).  A -1 appears for the tail of an edge and 1
-    for the head of the edge.  The elements are zero otherwise.
+    direction for directed).  A -1 appears for the source (tail) of an edge and
+    1 for the destination (head) of the edge.  The elements are zero otherwise.
 
     Parameters
     ----------
Incidence Matrix documentation is confusing

First, my networkx version is 2.4.

The incidence matrix documentation states that, when using the oriented option, "A -1 appears for the tail of an edge and 1 for the head of the edge." This appears to be untrue: the table below shows each node against each edge, with edges in (source_node, destination_node) format. In row 0, column 0, the edge specified is from node_0 to node_1. As someone not familiar with the terminology, I would interpret head to mean source and tail to mean destination, so the value I would expect is 1, not -1. I did not think of the head and tail as the head and tail of an arrow, which is why I was confused. Can the documentation be modified to "A -1 appears for the source (tail) of an edge and 1 for the destination (head) of the edge."?

The table below can be reproduced by:

```python
import pandas as pd
import networkx as nx

node_list = [f'node_{i}' for i in range(3)]
G = nx.complete_graph(node_list, nx.DiGraph())
for i, edge in enumerate(G.edges, 1):
    G.edges[edge]['weight'] = i
ind_mat = pd.DataFrame(nx.incidence_matrix(G, oriented=True, weight='weight').toarray(),
                       columns=list(G.edges), index=G.nodes)
ind_mat
```

|        | ('node_0', 'node_1') | ('node_0', 'node_2') | ('node_1', 'node_0') | ('node_1', 'node_2') | ('node_2', 'node_0') | ('node_2', 'node_1') |
|:-------|---------------------:|---------------------:|---------------------:|---------------------:|---------------------:|---------------------:|
| node_0 |                   -1 |                   -2 |                    3 |                    0 |                    5 |                    0 |
| node_1 |                    1 |                    0 |                   -3 |                   -4 |                    0 |                    6 |
| node_2 |                    0 |                    2 |                    0 |                    4 |                   -5 |                   -6 |
Indeed, the documentation does assume that the head/tail names follow the standard arrow analogy, with head being the destination. Is source/destination clearer than head/tail? Maybe... Probably best to include both, as you have suggested. :}
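A minimal sketch of the convention being documented, with a single directed edge so the sign placement is unambiguous:

```python
import networkx as nx

G = nx.DiGraph([(0, 1)])  # one edge from source 0 to destination 1
print(nx.incidence_matrix(G, oriented=True).toarray())
# [[-1.]   row for node 0: the source (tail) gets -1
#  [ 1.]]  row for node 1: the destination (head) gets 1
```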
2020-05-15T15:25:58
networkx/networkx
3,967
networkx__networkx-3967
[ "3911" ]
04e1ef2598fa494fc4b0889ef69a71b83ff32493
diff --git a/networkx/algorithms/simple_paths.py b/networkx/algorithms/simple_paths.py
--- a/networkx/algorithms/simple_paths.py
+++ b/networkx/algorithms/simple_paths.py
@@ -5,6 +5,7 @@
 import networkx as nx
 from networkx.utils import not_implemented_for
 from networkx.utils import pairwise
+from networkx.utils import empty_generator
 from networkx.algorithms.shortest_paths.weighted import _weight_function
 
 __all__ = [
@@ -228,11 +229,11 @@ def all_simple_paths(G, source, target, cutoff=None):
     except TypeError:
         raise nx.NodeNotFound(f"target node {target} not in graph")
     if source in targets:
-        return []
+        return empty_generator()
     if cutoff is None:
         cutoff = len(G) - 1
     if cutoff < 1:
-        return []
+        return empty_generator()
     if G.is_multigraph():
         return _all_simple_paths_multigraph(G, source, targets, cutoff)
     else:
diff --git a/networkx/utils/misc.py b/networkx/utils/misc.py
--- a/networkx/utils/misc.py
+++ b/networkx/utils/misc.py
@@ -44,6 +44,11 @@ def iterable(obj):
     return True
 
 
+def empty_generator():
+    """ Return a generator with no members """
+    yield from ()
+
+
 def flatten(obj, result=None):
     """ Return flattened version of (possibly nested) iterable object. """
     if not iterable(obj) or is_string_like(obj):
diff --git a/networkx/algorithms/tests/test_simple_paths.py b/networkx/algorithms/tests/test_simple_paths.py
--- a/networkx/algorithms/tests/test_simple_paths.py
+++ b/networkx/algorithms/tests/test_simple_paths.py
@@ -135,7 +135,7 @@ def test_all_simple_paths_with_two_targets_inside_cycle_emits_two_paths():
 def test_all_simple_paths_source_target():
     G = nx.path_graph(4)
     paths = nx.all_simple_paths(G, 1, 1)
-    assert paths == []
+    assert list(paths) == []
 
 
 def test_all_simple_paths_cutoff():
@@ -164,7 +164,7 @@ def test_all_simple_paths_on_non_trivial_graph():
 def test_all_simple_paths_multigraph():
     G = nx.MultiGraph([(1, 2), (1, 2)])
     paths = nx.all_simple_paths(G, 1, 1)
-    assert paths == []
+    assert list(paths) == []
 
     nx.add_path(G, [3, 1, 10, 2])
     paths = list(nx.all_simple_paths(G, 1, 2))
     assert len(paths) == 3
Incorrectly returning [] when a generator is expected

This return value (`[]`) is not consistent with the intention of the `all_simple_paths` function:

```py
if source in targets:
    return []
```

From: https://github.com/networkx/networkx/blob/b0ca6ce081cc6e66d509c5f7fed60728da821c25/networkx/algorithms/simple_paths.py#L230

Context:

```py
def all_simple_paths(G, source, target, cutoff=None):
    """Generate all simple paths in the graph G from source to target.

    A simple path is a path with no repeated nodes.

    [...]

    Returns
    -------
    path_generator: generator
       A generator that produces lists of simple paths.  If there are no paths
       between the source and target within the given cutoff the generator
       produces no output.
```

I recommend, instead, that NetworkX provide a reusable empty generator function, such as:

```py
def empty_generator():
    yield from ()
```

Then, in `all_simple_paths` (and almost certainly, other places in the library), call `return empty_generator()`
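A small sketch of why the mixed return types matter (the behavior labels follow the issue's description):

```python
import networkx as nx

G = nx.path_graph(4)
paths = nx.all_simple_paths(G, 1, 1)  # source == target: no simple paths
# Wrapping in list() gives [] both before and after the fix:
print(list(paths))  # []
# But only with the fix is `paths` an actual generator here, so code that
# calls next(paths) or iterates lazily sees consistent behavior in all cases.
```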
2020-05-20T01:49:36
networkx/networkx
3,981
networkx__networkx-3981
[ "3980" ]
8bad036f7a38686b389fb9c140e0ca20ed93be35
diff --git a/networkx/algorithms/centrality/current_flow_betweenness.py b/networkx/algorithms/centrality/current_flow_betweenness.py
--- a/networkx/algorithms/centrality/current_flow_betweenness.py
+++ b/networkx/algorithms/centrality/current_flow_betweenness.py
@@ -46,7 +46,7 @@ def approximate_current_flow_betweenness_centrality(G, normalized=True,
         Default data type for internal matrices.
         Set to np.float32 for lower memory consumption.
 
-    solver : string (default='lu')
+    solver : string (default='full')
        Type of linear solver to use for computing the flow matrix.
        Options are "full" (uses most memory), "lu" (recommended), and
        "cg" (uses least memory).
@@ -168,7 +168,7 @@ def current_flow_betweenness_centrality(G, normalized=True, weight=None,
         Default data type for internal matrices.
         Set to np.float32 for lower memory consumption.
 
-    solver : string (default='lu')
+    solver : string (default='full')
        Type of linear solver to use for computing the flow matrix.
        Options are "full" (uses most memory), "lu" (recommended), and
        "cg" (uses least memory).
@@ -273,7 +273,7 @@ def edge_current_flow_betweenness_centrality(G, normalized=True,
         Default data type for internal matrices.
         Set to np.float32 for lower memory consumption.
 
-    solver : string (default='lu')
+    solver : string (default='full')
        Type of linear solver to use for computing the flow matrix.
        Options are "full" (uses most memory), "lu" (recommended), and
        "cg" (uses least memory).
Documentation for approximate_current_flow_betweenness_centrality, edge_current_flow_betweenness_centrality and current_flow_betweenness_centrality does not match

In the documentation, the default `solver` of nx.approximate_current_flow_betweenness_centrality, nx.edge_current_flow_betweenness_centrality and nx.current_flow_betweenness_centrality is `'lu'`. In the definition of the functions, however, the default is `'full'`. As `'lu'` is the recommended method and is already the default in

- `nx.edge_current_flow_betweenness_centrality_subset`,
- `nx.current_flow_betweenness_centrality_subset`,
- `nx.current_flow_closeness_centrality`,

the defaults should be changed here as well.
Can you please change the documentation instead of changing the default? The history of commits shows that #3981 changed the default to "full". But it neglected to change the documentation. Let's bring it up-to-date by changing the documentation. Thanks for bringing this issue to light!
2020-05-31T11:32:51
networkx/networkx
4,034
networkx__networkx-4034
[ "4032" ]
8daff1a11d0433c2fc43a5e00c5cdefb431bb400
diff --git a/networkx/readwrite/graph6.py b/networkx/readwrite/graph6.py
--- a/networkx/readwrite/graph6.py
+++ b/networkx/readwrite/graph6.py
@@ -60,12 +60,12 @@ def _generate_graph6_bytes(G, nodes, header):
     yield b'\n'
 
 
-def from_graph6_bytes(string):
-    """Read a simple undirected graph in graph6 format from string.
+def from_graph6_bytes(bytes_in):
+    """Read a simple undirected graph in graph6 format from bytes.
 
     Parameters
     ----------
-    string : string
+    bytes_in : bytes
        Data in graph6 format, without a trailing newline.
 
     Returns
@@ -75,10 +75,10 @@ def from_graph6_bytes(string):
     Raises
     ------
     NetworkXError
-        If the string is unable to be parsed in graph6 format
+        If bytes_in is unable to be parsed in graph6 format
 
     ValueError
-        If any character ``c`` in the input string does not satisfy
+        If any character ``c`` in bytes_in does not satisfy
        ``63 <= ord(c) < 127``.
 
     Examples
@@ -104,10 +104,10 @@ def bits():
         for i in [5, 4, 3, 2, 1, 0]:
             yield (d >> i) & 1
 
-    if string.startswith(b'>>graph6<<'):
-        string = string[10:]
+    if bytes_in.startswith(b'>>graph6<<'):
+        bytes_in = bytes_in[10:]
 
-    data = [c - 63 for c in string]
+    data = [c - 63 for c in bytes_in]
     if any(c > 63 for c in data):
         raise ValueError('each input character must be in range(63, 127)')
Cannot read graph6

I have a basic script that I used a year ago to convert graph6 strings into PACE-format graph files; it doesn't work now with networkx version 2.4: `in from_graph6_bytes if string.startswith(b'>>graph6<<'): TypeError: startswith first arg must be str or a tuple of str, not bytes`
Perhaps you've switched from Python 2 to Python 3 over the last year? It seems that `from_graph6_bytes` expects a bytes-like object, not a string. An update to the `Parameters` section of the docstring could perhaps make this more clear.

If this is indeed a Python 2/3 issue, you could try encoding your Python 3 string to produce a sequence of bytes, e.g. something like `from_graph6_bytes(<your_string>.encode('ascii'))` in your script.

> Perhaps you've switched from Python 2 to Python 3 over the last year? It seems that `from_graph6_bytes` expects a bytes-like object, not a string. An update to the `Parameters` section of the docstring could perhaps make this more clear.
>
> If this is indeed a Python 2/3 issue, you could try encoding your Python 3 string to produce a sequence of bytes, e.g. something like `from_graph6_bytes(<your_string>.encode('ascii'))` in your script.

I believe that the error is coming from within NetworkX's code - the call to `string.startswith()` is made within the `from_graph6_bytes` function. Or maybe... I'm passing a String object to it, which it is internally naming 'string' but expecting that it is actually a sequence of bytes? And String's `startswith` function expects a String argument, whereas Byte's expects a Byte argument? Does that make sense?

> I believe that the error is coming from within NetworkX's code - the call to string.startswith() is made within the from_graph6_bytes function.

This is what I was referring to before - it's confusing because the variable `string` in the function `from_graph6_bytes` is not actually a string: there is a mismatch between the function name (which implies that the input should be bytes) and the naming convention *inside* the function (the argument name is string). This is likely an artifact from when the function was originally written (pre-Python 3), as the naming convention would have made sense for Python 2. Here's a quick example to illustrate the root of the problem:

```python
>>> s = '>>graph6<<A_'
>>> type(s)
str
>>> b = s.encode('ascii')
>>> type(b)
bytes
>>> type(b'>>graph6<<')
bytes

# This produces the error because the str.startswith method expects strings,
# not bytes
>>> s.startswith(b'>>graph6<<')
Traceback (most recent call last)
...
TypeError: startswith first arg must be str or a tuple of str, not bytes

# But a comparison of bytes to bytes is fine
>>> b.startswith(b'>>graph6<<')
True
```

And with `nx.from_graph6_bytes`:

```python
>>> import networkx as nx
>>> nx.from_graph6_bytes(s)
Traceback (most recent call last)
...
TypeError: startswith first arg must be str or a tuple of str, not bytes

>>> G = nx.from_graph6_bytes(b)
>>> G.edges
EdgeView([(0, 1)])
```

Long story short - `from_graph6_bytes` needs a bytes-like input, *not* a string. I think updating the argument name, docstring, and maybe touching up the exceptions would help resolve some of this confusion.
2020-06-30T17:35:07
networkx/networkx
4,044
networkx__networkx-4044
[ "3998" ]
8f160146cbf5dd217d31fa90d7aae89bc4c49967
diff --git a/doc/conf.py b/doc/conf.py
--- a/doc/conf.py
+++ b/doc/conf.py
@@ -55,7 +55,7 @@
 autosummary_generate = True
 
 # Add any paths that contain templates here, relative to this directory.
-# templates_path = ['']
+templates_path = ['_templates']
 
 suppress_warnings = ["ref.citation", "ref.footnote"]
Feature: big-picture URLs in documentation

We should include more links to the documentation landing page and the GitHub repository in our documentation. Links to the landing page would allow people to switch between docs for different versions and between docs and code. Links to the GitHub repository would allow people to find the code/issues/PRs from the docs while staying in a browser (without installing the source code locally via git).
2020-07-04T20:45:19
networkx/networkx
4,066
networkx__networkx-4066
[ "4058" ]
5638e1ff3d01e21c7d950615a699eb1f99987b8d
diff --git a/networkx/relabel.py b/networkx/relabel.py
--- a/networkx/relabel.py
+++ b/networkx/relabel.py
@@ -13,7 +13,7 @@ def relabel_nodes(G, mapping, copy=True):
     mapping : dictionary
        A dictionary with the old labels as keys and new labels as values.
-       A partial mapping is allowed.
+       A partial mapping is allowed. Mapping 2 nodes to a single node is allowed.
 
     copy : bool (optional, default=True)
        If True return a copy, or if False relabel the nodes in place.
@@ -64,6 +64,27 @@ def relabel_nodes(G, mapping, copy=True):
     >>> list(H)
     [0, 1, 4]
 
+    In a multigraph, relabeling two or more nodes to the same new node
+    will retain all edges, but may change the edge keys in the process:
+
+    >>> G = nx.MultiGraph()
+    >>> G.add_edge(0, 1, value="a")  # returns the key for this edge
+    0
+    >>> G.add_edge(0, 2, value="b")
+    0
+    >>> G.add_edge(0, 3, value="c")
+    0
+    >>> mapping = {1: 4, 2: 4, 3: 4}
+    >>> H = nx.relabel_nodes(G, mapping, copy=True)
+    >>> print(H[0])
+    {4: {0: {'value': 'a'}, 1: {'value': 'b'}, 2: {'value': 'c'}}}
+
+    This works for in-place relabeling too:
+
+    >>> G = nx.relabel_nodes(G, mapping, copy=False)
+    >>> print(G[0])
+    {4: {0: {'value': 'a'}, 1: {'value': 'b'}, 2: {'value': 'c'}}}
+
     Notes
     -----
     Only the nodes specified in the mapping will be relabeled.
@@ -77,6 +98,13 @@ def relabel_nodes(G, mapping, copy=True):
     graph is not possible in-place and an exception is raised.  In that case,
     use copy=True.
 
+    If a relabel operation on a multigraph would cause two or more
+    edges to have the same source, target and key, the second edge must
+    be assigned a new key to retain all edges.  The new key is set
+    to the lowest non-negative integer not already used as a key
+    for edges between these two nodes.  Note that this means non-numeric
+    keys may be replaced by numeric keys.
+
     See Also
     --------
     convert_node_labels_to_integers
@@ -136,6 +164,15 @@ def _relabel_inplace(G, mapping):
                 (new if old == source else source, new, key, data)
                 for (source, _, key, data) in G.in_edges(old, data=True, keys=True)
             ]
+            # Ensure new edges won't overwrite existing ones
+            seen = set()
+            for i, (source, target, key, data) in enumerate(new_edges):
+                if (target in G[source] and key in G[source][target]):
+                    new_key = 0 if not isinstance(key, (int, float)) else key
+                    while (new_key in G[source][target] or (target, new_key) in seen):
+                        new_key += 1
+                    new_edges[i] = (source, target, new_key, data)
+                    seen.add((target, new_key))
         else:
             new_edges = [
                 (new, new if old == target else target, data)
@@ -156,10 +193,25 @@ def _relabel_copy(G, mapping):
     H.add_nodes_from(mapping.get(n, n) for n in G)
     H._node.update((mapping.get(n, n), d.copy()) for n, d in G.nodes.items())
     if G.is_multigraph():
-        H.add_edges_from(
+        new_edges = [
             (mapping.get(n1, n1), mapping.get(n2, n2), k, d.copy())
             for (n1, n2, k, d) in G.edges(keys=True, data=True)
-        )
+        ]
+
+        # check for conflicting edge-keys
+        undirected = not G.is_directed()
+        seen_edges = set()
+        for i, (source, target, key, data) in enumerate(new_edges):
+            while (source, target, key) in seen_edges:
+                if not isinstance(key, (int, float)):
+                    key = 0
+                key += 1
+            seen_edges.add((source, target, key))
+            if undirected:
+                seen_edges.add((target, source, key))
+            new_edges[i] = (source, target, key, data)
+
+        H.add_edges_from(new_edges)
     else:
         H.add_edges_from(
             (mapping.get(n1, n1), mapping.get(n2, n2), d.copy())
diff --git a/networkx/tests/test_relabel.py b/networkx/tests/test_relabel.py
--- a/networkx/tests/test_relabel.py
+++ b/networkx/tests/test_relabel.py
@@ -183,3 +183,109 @@ def test_relabel_selfloop(self):
         G = nx.MultiDiGraph([(1, 1)])
         G = nx.relabel_nodes(G, {1: 0}, copy=False)
         assert_nodes_equal(G.nodes(), [0])
+
+    # def test_relabel_multidigraph_inout_inplace(self):
+    #     pass
+    def test_relabel_multidigraph_inout_merge_nodes(self):
+        for MG in (nx.MultiGraph, nx.MultiDiGraph):
+            for cc in (True, False):
+                G = MG([(0, 4), (1, 4), (4, 2), (4, 3)])
+                G[0][4][0]["value"] = "a"
+                G[1][4][0]["value"] = "b"
+                G[4][2][0]["value"] = "c"
+                G[4][3][0]["value"] = "d"
+                G.add_edge(0, 4, key="x", value="e")
+                G.add_edge(4, 3, key="x", value="f")
+                mapping = {0: 9, 1: 9, 2: 9, 3: 9}
+                H = nx.relabel_nodes(G, mapping, copy=cc)
+                # No ordering on keys enforced
+                assert {"value": "a"} in H[9][4].values()
+                assert {"value": "b"} in H[9][4].values()
+                assert {"value": "c"} in H[4][9].values()
+                assert len(H[4][9]) == 3 if G.is_directed() else 6
+                assert {"value": "d"} in H[4][9].values()
+                assert {"value": "e"} in H[9][4].values()
+                assert {"value": "f"} in H[4][9].values()
+                assert len(H[9][4]) == 3 if G.is_directed() else 6
+
+    def test_relabel_multigraph_merge_inplace(self):
+        G = nx.MultiGraph([(0, 1), (0, 2), (0, 3), (0, 1), (0, 2), (0, 3)])
+        G[0][1][0]["value"] = "a"
+        G[0][2][0]["value"] = "b"
+        G[0][3][0]["value"] = "c"
+        mapping = {1: 4, 2: 4, 3: 4}
+        nx.relabel_nodes(G, mapping, copy=False)
+        # No ordering on keys enforced
+        assert {"value": "a"} in G[0][4].values()
+        assert {"value": "b"} in G[0][4].values()
+        assert {"value": "c"} in G[0][4].values()
+
+    def test_relabel_multidigraph_merge_inplace(self):
+        G = nx.MultiDiGraph([(0, 1), (0, 2), (0, 3)])
+        G[0][1][0]["value"] = "a"
+        G[0][2][0]["value"] = "b"
+        G[0][3][0]["value"] = "c"
+        mapping = {1: 4, 2: 4, 3: 4}
+        nx.relabel_nodes(G, mapping, copy=False)
+        # No ordering on keys enforced
+        assert {"value": "a"} in G[0][4].values()
+        assert {"value": "b"} in G[0][4].values()
+        assert {"value": "c"} in G[0][4].values()
+
+    def test_relabel_multidigraph_inout_copy(self):
+        G = nx.MultiDiGraph([(0, 4), (1, 4), (4, 2), (4, 3)])
+        G[0][4][0]["value"] = "a"
+        G[1][4][0]["value"] = "b"
+        G[4][2][0]["value"] = "c"
+        G[4][3][0]["value"] = "d"
+        G.add_edge(0, 4, key="x", value="e")
+        G.add_edge(4, 3, key="x", value="f")
+        mapping = {0: 9, 1: 9, 2: 9, 3: 9}
+        H = nx.relabel_nodes(G, mapping, copy=True)
+        # No ordering on keys enforced
+        assert {"value": "a"} in H[9][4].values()
+        assert {"value": "b"} in H[9][4].values()
+        assert {"value": "c"} in H[4][9].values()
+        assert len(H[4][9]) == 3
+        assert {"value": "d"} in H[4][9].values()
+        assert {"value": "e"} in H[9][4].values()
+        assert {"value": "f"} in H[4][9].values()
+        assert len(H[9][4]) == 3
+
+    def test_relabel_multigraph_merge_copy(self):
+        G = nx.MultiGraph([(0, 1), (0, 2), (0, 3)])
+        G[0][1][0]["value"] = "a"
+        G[0][2][0]["value"] = "b"
+        G[0][3][0]["value"] = "c"
+        mapping = {1: 4, 2: 4, 3: 4}
+        H = nx.relabel_nodes(G, mapping, copy=True)
+        assert {"value": "a"} in H[0][4].values()
+        assert {"value": "b"} in H[0][4].values()
+        assert {"value": "c"} in H[0][4].values()
+
+    def test_relabel_multidigraph_merge_copy(self):
+        G = nx.MultiDiGraph([(0, 1), (0, 2), (0, 3)])
+        G[0][1][0]["value"] = "a"
+        G[0][2][0]["value"] = "b"
+        G[0][3][0]["value"] = "c"
+        mapping = {1: 4, 2: 4, 3: 4}
+        H = nx.relabel_nodes(G, mapping, copy=True)
+        assert {"value": "a"} in H[0][4].values()
+        assert {"value": "b"} in H[0][4].values()
+        assert {"value": "c"} in H[0][4].values()
+
+    def test_relabel_multigraph_nonnumeric_key(self):
+        for MG in (nx.MultiGraph, nx.MultiDiGraph):
+            for cc in (True, False):
+                G = nx.MultiGraph()
+                G.add_edge(0, 1, key="I", value="a")
+                G.add_edge(0, 2, key="II", value="b")
+                G.add_edge(0, 3, key="II", value="c")
+                mapping = {1: 4, 2: 4, 3: 4}
+                nx.relabel_nodes(G, mapping, copy=False)
+                assert {"value": "a"} in G[0][4].values()
+                assert {"value": "b"} in G[0][4].values()
+                assert {"value": "c"} in G[0][4].values()
+                assert 0 in G[0][4]
+                assert "I" in G[0][4]
+                assert "II" in G[0][4]
relabel_nodes on MultiGraphs does not preserve both edges when two nodes are replaced by one

When the graph contains edges (0,1) and (0,2), and I relabel both 1 and 2 to 3, I expected two edges from (0,3), but only one edge is preserved. Multi*Graph supports parallel edges between nodes, and I expected it to preserve both edges when "merging" the two old nodes together. Tested on both networkx 2.4 and the latest version on master.

```python
import networkx as nx

G = nx.MultiDiGraph([(0,1),(0,2)])
G[0][1][0]["value"] = "a"
G[0][2][0]["value"] = "b"
print(G[0])
# Output:
# {1: {0: {'value': 'a'}}, 2: {0: {'value': 'b'}}}

mapping = {1:3, 2:3}
nx.relabel_nodes(G, mapping, copy=False)
print(G[0])
# Output:
# {3: {0: {'value': 'b'}}}
# Expected:
# {3: {0: {'value': 'a'}, 1: {'value': 'b'}}}
```
This is fixed if you insert this at line 139 in networkx/relabel.py:

```python
for i, (tail, head, key, data) in enumerate(new_edges):
    if head in G[tail] and key in G[tail][head]:
        next_key = max(k for k in G[tail][head]) + 1
        new_edges[i] = (tail, head, next_key, data)
```

Of course, I am not sure if this is actually the desired behaviour.

Good catch! The relabel should have preserved both values 'a' and 'b'. Feel free to submit a pull request with a fix for this. This fix assumes that the key can be maxed and incremented by 1, but keys do not have to be numeric. Perhaps something like:

```python
newkey = 0
current_keys = G[tail][head]
while newkey in current_keys:
    newkey += 1
```

There may be other, better ways to do this. Actually, upon further reflection -- it might be better to explicitly combine nodes... That is, we might want to impose node contraction and not let the relabel_nodes function change the graph connections. What do you think?

@dschult I understand what you mean; it makes the intention of the programmer clearer. But I think a casual user of networkx (like myself!) would expect the relabel_nodes operation on a MultiGraph not to silently drop all the other edges except one. Also, node contraction only contracts two nodes at once, but a fixed relabel_nodes would be able to handle as many nodes and edges as the programmer wants to merge together. It's a minor point and could probably be handled by writing some glue code, but it is more convenient.

Yes, it seems that the convenience and flexibility of `relabel_nodes` may offset the disadvantages. We will need a fairly strong statement in the docstring about how this relabeling affects the nodes when more than one is relabeled to the same object. This sounds good to me.
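For concreteness, a sketch of the merge behavior the patch aims for. The printed keys are illustrative; the patched code assigns the lowest unused non-negative integer:

```python
import networkx as nx

G = nx.MultiGraph([(0, 1), (0, 2)])
H = nx.relabel_nodes(G, {1: 3, 2: 3})  # merge nodes 1 and 2 into node 3
# With the fix, both parallel edges survive under distinct keys:
print(list(H.edges(keys=True)))  # e.g. [(0, 3, 0), (0, 3, 1)]
```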
2020-07-11T12:56:42
networkx/networkx
4,080
networkx__networkx-4080
[ "4068" ]
fccab33625fd69ff3414cd0921a4287bc736bb93
diff --git a/networkx/classes/function.py b/networkx/classes/function.py
--- a/networkx/classes/function.py
+++ b/networkx/classes/function.py
@@ -1205,7 +1205,7 @@ def selfloop_edges(G, data=False, keys=False, default=None):
                 (n, n)
                 for n, nbrs in G.adj.items()
                 if n in nbrs
-                for d in nbrs[n].values()
+                for i in range(len(nbrs[n]))  # for easy edge removal (#4068)
             )
         else:
             return ((n, n) for n, nbrs in G.adj.items() if n in nbrs)
diff --git a/networkx/classes/tests/test_function.py b/networkx/classes/tests/test_function.py
--- a/networkx/classes/tests/test_function.py
+++ b/networkx/classes/tests/test_function.py
@@ -664,3 +664,28 @@ def test_selfloops():
     assert_edges_equal(
         nx.selfloop_edges(G, data="weight"), [(0, 0, None), (1, 1, 2)]
     )
+    # test removing selfloops behavior vis-a-vis altering a dict while iterating
+    G.add_edge(0, 0)
+    G.remove_edges_from(nx.selfloop_edges(G))
+    if G.is_multigraph():
+        G.add_edge(0, 0)
+        pytest.raises(
+            RuntimeError, G.remove_edges_from, nx.selfloop_edges(G, keys=True)
+        )
+        G.add_edge(0, 0)
+        pytest.raises(
+            TypeError, G.remove_edges_from, nx.selfloop_edges(G, data=True)
+        )
+        G.add_edge(0, 0)
+        pytest.raises(
+            RuntimeError,
+            G.remove_edges_from,
+            nx.selfloop_edges(G, data=True, keys=True),
+        )
+    else:
+        G.add_edge(0, 0)
+        G.remove_edges_from(nx.selfloop_edges(G, keys=True))
+        G.add_edge(0, 0)
+        G.remove_edges_from(nx.selfloop_edges(G, data=True))
+        G.add_edge(0, 0)
+        G.remove_edges_from(nx.selfloop_edges(G, keys=True, data=True))
Instructions for removing self-edges in `configuration_model` are out of date

In the configuration model documentation, it says to use `G.remove_edges_from(nx.selfloop_edges(G))` to remove self-edges. This no longer works (I think it broke with 2.0); something to do with the dictionary changing size while iterating. The documentation should be updated to `G.remove_edges_from(list(nx.selfloop_edges(G)))`. It's curious that this breaks, because it does seem to work on directed graphs. That suggests there's something I don't understand about the internal details of how the different graph types are implemented.
I am able to use it on a Graph without any issues. We'll need to drill down into how our configurations differ.

```python
import networkx as nx
G = nx.path_graph(9)
G.add_edge(3,3)
list(nx.selfloop_edges(G))
# [(3, 3)]
G.remove_edges_from(nx.selfloop_edges(G))
list(nx.selfloop_edges(G))
# []
```

Python 3.7.6, NetworkX 2.5rc1.dev_20200712223336

In quick testing it appears to consistently happen with MultiGraphs (which is what `configuration_model` returns).

Thanks for this! With MultiGraphs, the (new with v2.0) iterator nature of `nx.selfloop_edges` caused this RuntimeError. That's because the selfloop_edges iteration goes through the elements of the neighbor dict (e.g. G.adj[n]) and the neighbor dict gets changed. This should cause problems with MultiGraph and MultiDiGraph (I'm not sure why you aren't getting the trouble with MultiDiGraph). Strangely, this doesn't happen with Graph and DiGraph because, while the iteration now goes through G.adj, it does not go through G.adj[n]. So removing a neighbor changes the value of G.adj but doesn't change the G.adj dict itself.

I would like to keep `G.remove_edges_from(nx.selfloop_edges(G))` working though... Luckily, that case can be finessed: instead of iterating over the G.adj[n] dict, we can iterate over `range(len(G.adj[n]))` and still get the same number of selfloop edges to remove. Note that `G.remove_edges_from(nx.selfloop_edges(G, keys=True))` would not allow that, since we actually do need the keys in that case.

Here's some code for testing. @joelmiller can you check that I haven't missed something with the MultiDiGraph case?

```python
# test removing selfloops behavior vis-a-vis altering a dict while iterating
G.add_edge(0, 0)
G.remove_edges_from(nx.selfloop_edges(G))
if G.is_multigraph():
    G.add_edge(0, 0)
    pytest.raises(RuntimeError, G.remove_edges_from, nx.selfloop_edges(G, keys=True))
    G.add_edge(0, 0)
    pytest.raises(TypeError, G.remove_edges_from, nx.selfloop_edges(G, data=True))
    G.add_edge(0, 0)
    pytest.raises(RuntimeError, G.remove_edges_from, nx.selfloop_edges(G, data=True, keys=True))
else:
    G.add_edge(0, 0)
    G.remove_edges_from(nx.selfloop_edges(G, keys=True))
    G.add_edge(0, 0)
    G.remove_edges_from(nx.selfloop_edges(G, data=True))
    G.add_edge(0, 0)
    G.remove_edges_from(nx.selfloop_edges(G, keys=True, data=True))
```

That looks right to me. On testing, I do see the same error with MultiDiGraph. No time at the moment to do an in-depth check (kids at home for Melbourne lockdown + modelling of infectious diseases = busy busy).
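A small sketch of the safe workaround mentioned in the issue, which sidesteps the mutate-while-iterating problem on any graph type and version:

```python
import networkx as nx

G = nx.MultiGraph([(0, 0), (0, 0), (1, 2)])
# Materializing the generator first means the graph is no longer being
# iterated while edges are removed:
G.remove_edges_from(list(nx.selfloop_edges(G)))
print(list(nx.selfloop_edges(G)))  # []
```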
2020-07-17T00:06:41
networkx/networkx
4,108
networkx__networkx-4108
[ "4106" ]
f5fd2e6e2acb60dd531dd39503bc44d637a97e85
diff --git a/networkx/convert_matrix.py b/networkx/convert_matrix.py
--- a/networkx/convert_matrix.py
+++ b/networkx/convert_matrix.py
@@ -312,7 +312,12 @@ def from_pandas_edgelist(
         If `None`, no edge attributes are added to the graph.
 
     create_using : NetworkX graph constructor, optional (default=nx.Graph)
-        Graph type to create. If graph instance, then cleared before populated.
+        Graph type to create. If graph instance, then cleared before populated.
+
+    edge_key : str or None, optional (default=None)
+        A valid column name for the edge keys (for a MultiGraph). The values in
+        this column are used for the edge keys when adding edges if create_using
+        is a multigraph.
 
     See Also
     --------
@@ -352,17 +357,17 @@ def from_pandas_edgelist(
     Build multigraph with custom keys:
 
     >>> edges = pd.DataFrame({'source': [0, 1, 2, 0],
    ...                       'target': [2, 2, 3, 2],
    ...                       'my_edge_key': ['A', 'B', 'C', 'D'],
    ...                       'weight': [3, 4, 5, 6],
    ...                       'color': ['red', 'blue', 'blue', 'blue']})
     >>> G = nx.from_pandas_edgelist(edges, \
                                     edge_key='my_edge_key', \
                                     edge_attr=['weight', 'color'], \
                                     create_using=nx.MultiGraph())
     >>> G[0][2]
-    AtlasView({'A': {'weight': 3, 'color': 'red'}})
+    AtlasView({'A': {'weight': 3, 'color': 'red'}, 'D': {'weight': 6, 'color': 'blue'}})
Documentation for the "edge_key" param in nx.from_pandas_edgelist() is missing

I just noticed this while working on implementing support for geopandas (I want it to behave as similarly to the pandas functions as possible). I guess it's for MultiGraphs, which I can't really wrap my head around.
Thanks for reporting this! That's the result of #4076 which was recently merged after apparently not enough review. :) It needs to be fixed.
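A sketch of what the now-documented parameter does. This assumes a networkx version that already has `edge_key` (introduced by #4076), and the column name `my_key` is an arbitrary choice:

```python
import pandas as pd
import networkx as nx

edges = pd.DataFrame({"source": [0, 0], "target": [1, 1], "my_key": ["A", "B"]})
G = nx.from_pandas_edgelist(edges, edge_key="my_key", create_using=nx.MultiGraph)
# The column values become the keys of the parallel edges,
# e.g. something like AtlasView({'A': {}, 'B': {}}):
print(G[0][1])
```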
2020-07-26T21:36:48
networkx/networkx
4,125
networkx__networkx-4125
[ "4116" ]
ab7429c62806f1242c36fb81c1b1d801b4cca7a3
diff --git a/networkx/readwrite/edgelist.py b/networkx/readwrite/edgelist.py
--- a/networkx/readwrite/edgelist.py
+++ b/networkx/readwrite/edgelist.py
@@ -270,7 +270,11 @@ def parse_edgelist(
         elif data is True:
             # no edge types specified
             try:  # try to evaluate as dictionary
-                edgedata = dict(literal_eval(" ".join(d)))
+                if delimiter == ",":
+                    edgedata_str = ",".join(d)
+                else:
+                    edgedata_str = " ".join(d)
+                edgedata = dict(literal_eval(edgedata_str.strip()))
             except BaseException as e:
                 raise TypeError(
                     f"Failed to convert edge data ({d}) " f"to dictionary."
diff --git a/networkx/readwrite/tests/test_edgelist.py b/networkx/readwrite/tests/test_edgelist.py
--- a/networkx/readwrite/tests/test_edgelist.py
+++ b/networkx/readwrite/tests/test_edgelist.py
@@ -100,6 +100,48 @@ def test_read_edgelist_4(self):
             G.edges(data=True), [(1, 2, {"weight": 2.0}), (2, 3, {"weight": 3.0})]
         )
 
+    def test_read_edgelist_5(self):
+        s = b"""\
+# comment line
+1 2 {'weight':2.0, 'color':'green'}
+# comment line
+2 3 {'weight':3.0, 'color':'red'}
+"""
+        bytesIO = io.BytesIO(s)
+        G = nx.read_edgelist(bytesIO, nodetype=int, data=False)
+        assert_edges_equal(G.edges(), [(1, 2), (2, 3)])
+
+        bytesIO = io.BytesIO(s)
+        G = nx.read_edgelist(bytesIO, nodetype=int, data=True)
+        assert_edges_equal(
+            G.edges(data=True),
+            [
+                (1, 2, {"weight": 2.0, "color": "green"}),
+                (2, 3, {"weight": 3.0, "color": "red"}),
+            ],
+        )
+
+    def test_read_edgelist_6(self):
+        s = b"""\
+# comment line
+1, 2, {'weight':2.0, 'color':'green'}
+# comment line
+2, 3, {'weight':3.0, 'color':'red'}
+"""
+        bytesIO = io.BytesIO(s)
+        G = nx.read_edgelist(bytesIO, nodetype=int, data=False, delimiter=",")
+        assert_edges_equal(G.edges(), [(1, 2), (2, 3)])
+
+        bytesIO = io.BytesIO(s)
+        G = nx.read_edgelist(bytesIO, nodetype=int, data=True, delimiter=",")
+        assert_edges_equal(
+            G.edges(data=True),
+            [
+                (1, 2, {"weight": 2.0, "color": "green"}),
+                (2, 3, {"weight": 3.0, "color": "red"}),
+            ],
+        )
+
     def test_write_edgelist_1(self):
         fh = io.BytesIO()
         G = nx.OrderedGraph()
Parse edgelist bug

When calling `parse_edgelist()` with a comma delimiter, it fails to parse edges with multiple attributes correctly. Example of a bad edge: `1,2,{'test':1, 'test_other':2}`. `parse_edgelist()` recognizes the commas inside the braces as delimiters and tries to split on them.
Here's a minimal reproducing example:

```python
>>> import networkx as nx
>>> edge_list = ['1, 2, {"weight": 1, "color": "green"}']
>>> G = nx.parse_edgelist(edge_list, delimiter=',')
Traceback (most recent call last)
...
IndentationError
...
TypeError: Failed to convert edge data ([' {"weight": 1', ' "color": "green"}']) to dictionary.
```

Side note: this seems like a good candidate for `from None` instead of `from e` in the exception raising, as the `IndentationError` is confusing.
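With the delimiter-aware join from the patch applied, the same kind of input should parse cleanly (a sketch):

```python
import networkx as nx

lines = ["1, 2, {'weight': 1, 'color': 'green'}"]
G = nx.parse_edgelist(lines, nodetype=int, delimiter=",")
# The attribute dict is rejoined with commas before literal_eval:
print(G[1][2])  # {'weight': 1, 'color': 'green'}
```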
2020-08-03T00:13:47
networkx/networkx
4,132
networkx__networkx-4132
[ "4129" ]
938063c9011a4ecd4f3632a7a59ed63334d821ee
diff --git a/networkx/drawing/nx_pylab.py b/networkx/drawing/nx_pylab.py
--- a/networkx/drawing/nx_pylab.py
+++ b/networkx/drawing/nx_pylab.py
@@ -640,7 +640,7 @@ def draw_networkx_edges(
     if edgelist is None:
         edgelist = list(G.edges())
 
-    if not edgelist or len(edgelist) == 0:  # no edges!
+    if len(edgelist) == 0:  # no edges!
         if not G.is_directed() or not arrows:
             return LineCollection(None)
         else:
diff --git a/networkx/drawing/tests/test_pylab.py b/networkx/drawing/tests/test_pylab.py
--- a/networkx/drawing/tests/test_pylab.py
+++ b/networkx/drawing/tests/test_pylab.py
@@ -250,3 +250,8 @@ def test_alpha_iter(self):
     def test_error_invalid_kwds(self):
         with pytest.raises(ValueError, match="Received invalid argument"):
             nx.draw(self.G, foo="bar")
+
+    def test_np_edgelist(self):
+        # see issue #4129
+        np = pytest.importorskip("numpy")
+        nx.draw_networkx(self.G, edgelist=np.array([(0, 2), (0, 3)]))
edgelist in draw_networkx does not support arrays

Using numpy 1.18.2 and networkx 2.4, it is not possible to use a numpy array for the `edgelist` argument of [`draw_networkx`](https://networkx.github.io/documentation/stable/reference/generated/networkx.drawing.nx_pylab.draw_networkx.html). It raises:

```
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
```

I normally would not have opened an issue for this, but I do not see the point of the current implementation, so in case it can be changed harmlessly to support numpy arrays, that might be a win-win. The issue comes from lines 575-579 on 2.4 ([lines 640-647](https://github.com/networkx/networkx/blob/master/networkx/drawing/nx_pylab.py#L640) on master) that read:

```python
if edgelist is None:
    edgelist = list(G.edges())

if not edgelist or len(edgelist) == 0:  # no edges!
    return None
```

I am not sure what the combination of `not edgelist` and `len(edgelist) == 0` is for here, so I would propose changing it to:

```python
if edgelist is None:
    edgelist = list(G.edges())
elif len(edgelist) == 0:  # no edges!
    return None
```

which should still do the job for all valid inputs I can think of and would also work with numpy arrays.

**EDIT**: if iterables should be supported, one could replace `not edgelist` with `not isinstance(edgelist, Iterable)`.
The `if not edgelist` part of that line is very old, from when that syntax was subtly different. The code is intended to work for lists, tuples, sets, frozensets and other containers (like numpy arrays); we assume they have a `__len__` method. I'm actually surprised that numpy changed the behavior of `if a:` -- it used to check whether `len(a) != 0`. In any case, this should be changed as per your suggestion! Thanks!

OK, nice, I can make a PR if that helps. Iterators do not need to be supported, if I understand correctly, right? I can just use the `elif len(edgelist) == 0:`?

Yes, that is (I believe) the only change needed. It might be good to have a test to make sure that a numpy array works.
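A sketch of the usage the fix enables (requires matplotlib; the array contents are arbitrary edges of the graph):

```python
import numpy as np
import networkx as nx
import matplotlib.pyplot as plt

G = nx.path_graph(4)
pos = nx.spring_layout(G, seed=0)
# A numpy array has __len__, so len(edgelist) == 0 works where the old
# truthiness check `not edgelist` raised the ambiguity ValueError.
nx.draw_networkx(G, pos=pos, edgelist=np.array([(0, 1), (1, 2)]))
plt.show()
```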
2020-08-05T06:57:10
networkx/networkx
4,136
networkx__networkx-4136
[ "3945" ]
123fc2be585c6ebbf46b7fe876ba85dd34f92b2e
diff --git a/networkx/convert.py b/networkx/convert.py
--- a/networkx/convert.py
+++ b/networkx/convert.py
@@ -17,6 +17,7 @@
 """
 import warnings
 import networkx as nx
+from collections.abc import Collection, Generator, Iterator
 
 __all__ = [
     "to_networkx_graph",
@@ -50,7 +51,9 @@ def to_networkx_graph(data, create_using=None, multigraph_input=False):
       any NetworkX graph
       dict-of-dicts
      dict-of-lists
-      container (ie set, list, tuple, iterator) of edges
+      container (e.g. set, list, tuple) of edges
+      iterator (e.g. itertools.chain) that produces edges
+      generator of edges
       Pandas DataFrame (row per edge)
       numpy matrix
       numpy ndarray
@@ -106,16 +109,6 @@ def to_networkx_graph(data, create_using=None, multigraph_input=False):
     except Exception as e:
         raise TypeError("Input is not known type.") from e
 
-    # list or generator of edges
-
-    if isinstance(data, (list, tuple, set)) or any(
-        hasattr(data, attr) for attr in ["_adjdict", "next", "__next__"]
-    ):
-        try:
-            return from_edgelist(data, create_using=create_using)
-        except Exception as e:
-            raise nx.NetworkXError("Input is not a valid edge list") from e
-
     # Pandas DataFrame
     try:
         import pandas as pd
@@ -167,6 +160,16 @@ def to_networkx_graph(data, create_using=None, multigraph_input=False):
     except ImportError:
         warnings.warn("scipy not found, skipping conversion test.", ImportWarning)
 
+    # Note: most general check - should remain last in order of execution
+    # Includes containers (e.g. list, set, dict, etc.), generators, and
+    # iterators (e.g. itertools.chain) of edges
+
+    if isinstance(data, (Collection, Generator, Iterator)):
+        try:
+            return from_edgelist(data, create_using=create_using)
+        except Exception as e:
+            raise nx.NetworkXError("Input is not a valid edge list") from e
+
     raise nx.NetworkXError("Input is not a known data type for conversion.")
DiGraph.__init__ does not allow sets

I find it very convenient to be able to initialise graphs using:

```python
>>> import networkx as nx  # version 2.2
>>> G = nx.DiGraph([(1, 2), (3, 4)])
>>> G = nx.DiGraph(((1, 2), (3, 4)))
```

However sets do not work and I don't understand why they have been disallowed:

```python
>>> import networkx as nx
>>> G = nx.DiGraph({(1, 2), (3, 4)})  # fails with frozenset too
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python3/dist-packages/networkx/classes/digraph.py", line 309, in __init__
    convert.to_networkx_graph(incoming_graph_data, create_using=self)
  File "/usr/lib/python3/dist-packages/networkx/convert.py", line 161, in to_networkx_graph
    "Input is not a known data type for conversion.")
networkx.exception.NetworkXError: Input is not a known data type for conversion.
```

Shouldn't DiGraph (and Graph, etc.) accept *any* iterable of tuples?
Graphs with nodes that are 2-tuples (like geometric graphs) are iterables of tuples, and dict-of-dict constructs are also acceptable input but should not be treated as iterables of tuples. I haven't thought carefully about whether sets of tuples should be handled. Probably -- the set object became part of standard Python after our initial creation, and I don't know if we revisited sets of edges after it became more popular... maybe now is the time. :)

Looks like #3907 solves this issue for sets (but not for frozensets). I suspect the fix is as simple as adding 'frozenset' to the isinstance check at the same spot.
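A sketch of what the generalized Collection/Generator/Iterator check in the patch above accepts:

```python
import itertools
import networkx as nx

G1 = nx.DiGraph({(1, 2), (3, 4)})                     # set of edges
G2 = nx.DiGraph(frozenset({(1, 2), (3, 4)}))          # frozenset of edges
G3 = nx.DiGraph(itertools.chain([(1, 2)], [(3, 4)]))  # iterator of edges
print(sorted(G1.edges()))  # [(1, 2), (3, 4)]
```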
2020-08-06T20:19:07
networkx/networkx
4,145
networkx__networkx-4145
[ "4143" ]
522c6f8c6ad04da883ca9140c7788a02f911e218
diff --git a/networkx/algorithms/simple_paths.py b/networkx/algorithms/simple_paths.py
--- a/networkx/algorithms/simple_paths.py
+++ b/networkx/algorithms/simple_paths.py
@@ -1,4 +1,3 @@
-import collections
 from heapq import heappush, heappop
 from itertools import count
 
@@ -239,7 +238,7 @@ def all_simple_paths(G, source, target, cutoff=None):
 
 
 def _all_simple_paths_graph(G, source, targets, cutoff):
-    visited = collections.OrderedDict.fromkeys([source])
+    visited = dict.fromkeys([source])
     stack = [iter(G[source])]
     while stack:
         children = stack[-1]
@@ -265,7 +264,7 @@ def _all_simple_paths_graph(G, source, targets, cutoff):
 
 
 def _all_simple_paths_multigraph(G, source, targets, cutoff):
-    visited = collections.OrderedDict.fromkeys([source])
+    visited = dict.fromkeys([source])
     stack = [(v for u, v in G.edges(source))]
     while stack:
         children = stack[-1]
Replace OrderedDict with dict in simple_paths module

Since Python 3.6, dicts preserve insertion order by default, and 3.6 is the minimum version supported by the project. The `all_simple_paths` function uses `OrderedDict`, which can be replaced with a plain dict. `popitem` and `__setitem__` are heavily used here, and `OrderedDict` performs some [additional work](https://github.com/python/cpython/blob/e28b8c93878072dc02b116108ef5443084290d47/Lib/collections/__init__.py#L174) in those methods that is not needed.

Some perf benchmarks for a large graph. OrderedDict usage:
https://github.com/networkx/networkx/blob/522c6f8c6ad04da883ca9140c7788a02f911e218/networkx/algorithms/simple_paths.py#L242

```python
import networkx as nx
import networkx.algorithms.simple_paths as sp

G = nx.complete_graph(10)
list(sp.all_simple_paths(G, source=0, target=9))
```

On the master branch:

```
python -m pyperf timeit -s 'import networkx as nx; import networkx.algorithms.simple_paths as sp; G = nx.complete_graph(10)' 'list(sp.all_simple_paths(G, source=0, target=9))' -o master.json
.....................
Mean +- std dev: 921 ms +- 51 ms
```

On my branch, replacing OrderedDict with dict:

```
python -m pyperf timeit -s 'import networkx as nx; import networkx.algorithms.simple_paths as sp; G = nx.complete_graph(10)' 'list(sp.all_simple_paths(G, source=0, target=9))' -o improve_perf.json
.....................
Mean +- std dev: 783 ms +- 20 ms
```

The comparison shows a 15% speedup:

```
python -m pyperf compare_to master.json improve_perf.json
Mean +- std dev: [master] 921 ms +- 51 ms -> [improve_perf] 783 ms +- 20 ms: 1.18x faster (-15%)
```

```patch
diff --git a/networkx/algorithms/simple_paths.py b/networkx/algorithms/simple_paths.py
index f8cbe7e..b7d7d6d 100644
--- a/networkx/algorithms/simple_paths.py
+++ b/networkx/algorithms/simple_paths.py
@@ -239,7 +239,7 @@ def all_simple_paths(G, source, target, cutoff=None):
 
 
 def _all_simple_paths_graph(G, source, targets, cutoff):
-    visited = collections.OrderedDict.fromkeys([source])
+    visited = dict.fromkeys([source])
     stack = [iter(G[source])]
     while stack:
         children = stack[-1]
@@ -265,7 +265,7 @@ def _all_simple_paths_graph(G, source, targets, cutoff):
 
 
 def _all_simple_paths_multigraph(G, source, targets, cutoff):
-    visited = collections.OrderedDict.fromkeys([source])
+    visited = dict.fromkeys([source])
    stack = [(v for u, v in G.edges(source))]
    while stack:
        children = stack[-1]
```
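For reference, a quick sketch (not part of the PR) showing that plain dicts already provide the two behaviors the algorithm relies on, namely insertion order (guaranteed since CPython 3.6, part of the language spec in 3.7) and LIFO `popitem`:

```python
visited = dict.fromkeys(["a"])   # same idiom the patch uses
visited["b"] = None
visited["c"] = None
assert list(visited) == ["a", "b", "c"]   # insertion order preserved
assert visited.popitem() == ("c", None)   # popitem() is LIFO, like OrderedDict
```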
2020-08-10T04:17:51
networkx/networkx
4,152
networkx__networkx-4152
[ "3786" ]
b87b33f6ffdda605d8809680c1ebe22d8fba33f1
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -134,7 +134,6 @@
         "pygraphviz",
         "pydot",
         "pyyaml",
-        "gdal",
         "lxml",
         "pytest",
     ],
diff --git a/requirements/test.txt b/requirements/test.txt
--- a/requirements/test.txt
+++ b/requirements/test.txt
@@ -1,3 +1,3 @@
-pytest>=5.4
+pytest>=6.0
 pytest-cov>=2.10
 codecov>=2.1
why gdal==1.10.0 in requirements?

Hi, when running `pip install networkx[all]`, I ran into a GDAL setup error. I was surprised by the error, because GDAL (version 2.2.2) was already installed on my desktop:

```
pip show gdal
Name: GDAL
Version: 2.2.2
Summary: GDAL: Geospatial Data Abstraction Library
Home-page: http://www.gdal.org
...
```

Looking at the [networkx distutils requirements](https://github.com/networkx/networkx/blob/master/requirements/extras.txt), I realized that networkx strictly requires GDAL version 1.10.0:

```
pygraphviz>=1.5
pydot>=1.2.4
gdal==1.10.0
```

**Question** Is there a particular reason for `gdal==1.10.0` instead of `gdal>=1.10.0`? Pinning an exact version usually makes installing the library error-prone. Thanks in advance.
GDAL is normally a pain to install. Every OS I have installed it on has required different steps. I recently moved it into `requirements/extras.txt` to make it clearer that it is more difficult to install. Could you make a new virtual environment and check whether

```
$ pip install 'gdal>=1.10.0'
```

works on your system? If so, what system are you on? Maybe we should take `gdal` out of `all`.

Hi Jarrod, good question. I wasn't able to install gdal via ```pip3 install 'gdal>=1.10.0'```. Here is the only way that succeeded for me to install ```gdal```:

```
# Install gdal-bin and libgdal-dev
sudo add-apt-repository -y ppa:ubuntugis/ubuntugis-unstable
sudo apt update
sudo apt install gdal-bin libgdal-dev

# Install gdal via pip
pip3 install gdal==$(gdal-config --version) --global-option=build_ext --global-option='-I/usr/include/gdal/'

# gdal is successfully installed with version 2.2.2.
pip show gdal
Name: GDAL
Version: 2.2.2
Summary: GDAL: Geospatial Data Abstraction Library
Home-page: http://www.gdal.org
...
```

With ```gdal``` already installed on my desktop, changing the [networkx requirements file](https://github.com/networkx/networkx/blob/master/requirements/extras.txt) locally now makes a difference:

1. With ```'gdal==1.10.0'```, the ```networkx[all]``` installation failed because pip tried to uninstall ```gdal 2.2.2``` and install ```gdal 1.10.0```.
2. With ```'gdal>=1.10.0'```, the ```networkx[all]``` installation succeeded because ```2.2.2 > 1.10.0```.

Here are my thoughts:

1. If ```gdal``` is not a critical/important dependency for ```networkx```, taking ```gdal``` out of ```all``` sounds like a reasonable approach to streamline installs for clients.
2. If ```gdal``` is a critical/important dependency, then having ```gdal>=1.10.0``` in the requirements file, plus adding a few lines in the user guide on how to install ```gdal``` before installing ```networkx[all]```, will help.
2020-08-14T06:30:32
networkx/networkx
4,160
networkx__networkx-4160
[ "3155" ]
5638e1ff3d01e21c7d950615a699eb1f99987b8d
diff --git a/networkx/algorithms/approximation/steinertree.py b/networkx/algorithms/approximation/steinertree.py
--- a/networkx/algorithms/approximation/steinertree.py
+++ b/networkx/algorithms/approximation/steinertree.py
@@ -46,11 +46,23 @@ def metric_closure(G, weight="weight"):
     return M
 
 
-@not_implemented_for("multigraph")
 @not_implemented_for("directed")
 def steiner_tree(G, terminal_nodes, weight="weight"):
     """ Return an approximation to the minimum Steiner tree of a graph.
 
+    The minimum Steiner tree of `G` w.r.t a set of `terminal_nodes`
+    is a tree within `G` that spans those nodes and has minimum size
+    (sum of edge weights) among all such trees.
+
+    The minimum Steiner tree can be approximated by computing the minimum
+    spanning tree of the subgraph of the metric closure of *G* induced by the
+    terminal nodes, where the metric closure of *G* is the complete graph in
+    which each edge is weighted by the shortest path distance between the
+    nodes in *G* .
+    This algorithm produces a tree whose weight is within a (2 - (2 / t))
+    factor of the weight of the optimal Steiner tree where *t* is number of
+    terminal nodes.
+
     Parameters
     ----------
     G : NetworkX graph
@@ -67,24 +79,26 @@ def steiner_tree(G, terminal_nodes, weight="weight"):
 
     Notes
     -----
-    Steiner tree can be approximated by computing the minimum spanning
-    tree of the subgraph of the metric closure of the graph induced by the
-    terminal nodes, where the metric closure of *G* is the complete graph in
-    which each edge is weighted by the shortest path distance between the
-    nodes in *G* .
-    This algorithm produces a tree whose weight is within a (2 - (2 / t))
-    factor of the weight of the optimal Steiner tree where *t* is number of
-    terminal nodes.
+    For multigraphs, the edge between two nodes with minimum weight is the
+    edge put into the Steiner tree.
+
+    References
+    ----------
+    .. [1] Steiner_tree_problem on Wikipedia.
+       https://en.wikipedia.org/wiki/Steiner_tree_problem
     """
-    # M is the subgraph of the metric closure induced by the terminal nodes of
-    # G.
+    # H is the subgraph induced by terminal_nodes in the metric closure M of G.
     M = metric_closure(G, weight=weight)
-    # Use the 'distance' attribute of each edge provided by the metric closure
-    # graph.
     H = M.subgraph(terminal_nodes)
+    # Use the 'distance' attribute of each edge provided by M.
     mst_edges = nx.minimum_spanning_edges(H, weight="distance", data=True)
     # Create an iterator over each edge in each shortest path; repeats are okay
     edges = chain.from_iterable(pairwise(d["path"]) for u, v, d in mst_edges)
+    # For multigraph we should add the minimal weight edge keys
+    if G.is_multigraph():
+        edges = (
+            (u, v, min(G[u][v], key=lambda k: G[u][v][k][weight])) for u, v in edges
+        )
     T = G.edge_subgraph(edges)
     return T
diff --git a/networkx/algorithms/approximation/tests/test_steinertree.py b/networkx/algorithms/approximation/tests/test_steinertree.py --- a/networkx/algorithms/approximation/tests/test_steinertree.py +++ b/networkx/algorithms/approximation/tests/test_steinertree.py @@ -63,22 +63,21 @@ def test_steiner_tree(self): assert_edges_equal(list(S.edges(data=True)), expected_steiner_tree) def test_multigraph_steiner_tree(self): - with pytest.raises(nx.NetworkXNotImplemented): - G = nx.MultiGraph() - G.add_edges_from( - [ - (1, 2, 0, {"weight": 1}), - (2, 3, 0, {"weight": 999}), - (2, 3, 1, {"weight": 1}), - (3, 4, 0, {"weight": 1}), - (3, 5, 0, {"weight": 1}), - ] - ) - terminal_nodes = [2, 4, 5] - expected_edges = [ - (2, 3, 1, {"weight": 1}), # edge with key 1 has lower weight + G = nx.MultiGraph() + G.add_edges_from( + [ + (1, 2, 0, {"weight": 1}), + (2, 3, 0, {"weight": 999}), + (2, 3, 1, {"weight": 1}), (3, 4, 0, {"weight": 1}), (3, 5, 0, {"weight": 1}), ] - # not implemented - T = steiner_tree(G, terminal_nodes) + ) + terminal_nodes = [2, 4, 5] + expected_edges = [ + (2, 3, 1, {"weight": 1}), # edge with key 1 has lower weight + (3, 4, 0, {"weight": 1}), + (3, 5, 0, {"weight": 1}), + ] + T = steiner_tree(G, terminal_nodes) + assert_edges_equal(T.edges(data=True, keys=True), expected_edges)
steiner_tree should accept MultiGraph

I'm using `steiner_tree` on a road network which may have multiple edges between nodes. It looks like `steiner_tree` will fail if passed a `MultiGraph`:

- as a next-to-last step, edges are generated as `(u, v)` tuples pairwise [here](https://github.com/networkx/networkx/blob/master/networkx/algorithms/approximation/steinertree.py#L87)
- before being passed to `G.edge_subgraph`, which raises a `ValueError` from `nx.filter.show_multiedges`

This should reproduce the issue:

```python
import networkx as nx
import networkx.algorithms.approximation as nxa

def test_simple_steiner_tree():
    G = nx.Graph()
    G.add_weighted_edges_from([
        (1, 2, 1),
        (2, 3, 1),
        (3, 4, 1),
        (3, 5, 1)
    ])
    terminal_nodes = [2, 4, 5]
    expected_edges = [
        (2, 3),
        (3, 4),
        (3, 5)
    ]
    T = nxa.steiner_tree(G, terminal_nodes)
    assert list(T.edges) == expected_edges

def test_multi_steiner_tree():
    G = nx.MultiGraph()
    G.add_weighted_edges_from([
        (1, 2, 1),
        (2, 3, 1),
        (2, 3, 999),
        (3, 4, 1),
        (3, 5, 1)
    ])
    terminal_nodes = [2, 4, 5]
    expected_edges = [
        (2, 3, 0),
        (3, 4, 0),  # first edge has weight one
        (3, 5, 0)
    ]
    T = nxa.steiner_tree(G, terminal_nodes)

test_simple_steiner_tree()  # passes
test_multi_steiner_tree()   # throws ValueError
```

The quick fix might be to add `@not_implemented_for('multigraph')`. For my current purposes, the following does the trick to handle the `MultiGraph` case:

```python
# get unique links pairwise (u, v)
links = set(chain.from_iterable(
    pairwise(d['path']) for u, v, d in mst_edges
))

# for each link in the chain
multi_edges = []
for u, v in links:
    # consider each edge between the pair of nodes,
    # keeping track of the one with the minimum weight
    # (there may be a better way - convenience functions/accessors?)
    num_edges = G.number_of_edges(u, v)
    min_k = 0
    min_weight = None
    for k in range(num_edges):
        curr_weight = G.edges[u, v, k][weight]
        if min_weight is None:
            min_weight = curr_weight
        elif curr_weight < min_weight:
            min_weight = curr_weight
            min_k = k
    multi_edges.append((u, v, min_k))

# create subgraph from multi edges - list of (u, v, k)
T = G.edge_subgraph(multi_edges)
```
I see how that would fail. It looks like the implications of MultiGraph were not looked at closely. Your coded solution would not work in general because it assumes a pattern for the edge keys (they are assumed to be sequential integers starting from 0). Your suggestion of ```not_implemented_for``` will certainly fix the surprise nature of the thrown ```ValueError```. Another small change I'd like to see in `steiner_tree` is a paragraph describing what a Steiner tree is in the docstring.

Thanks for the response @dschult. I'd be happy to work on this a little. It could be three pieces:

- surprise-reduction fix, adding `not_implemented_for`
- Steiner tree description in the docstring
- (perhaps with a bit more discussion) an implementation for MultiGraph

That's great! Thanks... I'll try to merge the quick fix before the new release coming up. We can aim for the MultiGraph implementation in the next release.

The currently available ```steiner_tree``` actually computes the correct edges. It just doesn't return the 3-tuple with the edge key; it returns a 2-tuple and makes you go find which key has the minimum weight for those two nodes. This is a similar problem to #3159, where the treatment of MultiGraph was not clear because the edge keys were not returned as part of the edge tuple. I'm going to re-allow MultiGraphs for this function. The code works just fine; it just doesn't return as much information as might be useful. To find the 3-tuple edges from the 2-tuple edges you'll need something like:

```python
def argmin(keydict, weight):
    return min(keydict, key=lambda edgekey: keydict[edgekey][weight])

multiedges = [(u, v, argmin(G[u][v], weight)) for u, v in steiner_tree.edges()]
```

I'll add to the docstring to make that more clear.
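A quick, self-contained demo of the min-weight-key lookup described above on a concrete multigraph (the helper name is illustrative):

```python
import networkx as nx

def argmin_key(keydict, weight="weight"):
    # pick the edge key whose data dict has the smallest weight
    return min(keydict, key=lambda k: keydict[k][weight])

G = nx.MultiGraph()
G.add_edge(2, 3, key=0, weight=999)
G.add_edge(2, 3, key=1, weight=1)

assert argmin_key(G[2][3]) == 1   # key 1 has the lower weight
```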
2020-08-15T04:19:11
networkx/networkx
4,176
networkx__networkx-4176
[ "4173" ]
bf9da3d098a015d01e23d6135e9905393ef5e814
diff --git a/networkx/readwrite/json_graph/cytoscape.py b/networkx/readwrite/json_graph/cytoscape.py
--- a/networkx/readwrite/json_graph/cytoscape.py
+++ b/networkx/readwrite/json_graph/cytoscape.py
@@ -98,8 +98,8 @@ def cytoscape_graph(data, attrs=None):
 
     for d in data["elements"]["edges"]:
         edge_data = d["data"].copy()
-        sour = d["data"].pop("source")
-        targ = d["data"].pop("target")
+        sour = d["data"]["source"]
+        targ = d["data"]["target"]
         if multigraph:
             key = d["data"].get("key", 0)
             graph.add_edge(sour, targ, key=key)
diff --git a/networkx/readwrite/json_graph/tests/test_cytoscape.py b/networkx/readwrite/json_graph/tests/test_cytoscape.py
--- a/networkx/readwrite/json_graph/tests/test_cytoscape.py
+++ b/networkx/readwrite/json_graph/tests/test_cytoscape.py
@@ -1,6 +1,7 @@
 import json
 import pytest
 import networkx as nx
+import copy
 
 from networkx.readwrite.json_graph import cytoscape_data, cytoscape_graph
 
@@ -10,6 +11,14 @@ class TestCytoscape:
     def test_graph(self):
         H = cytoscape_graph(cytoscape_data(G))
         nx.is_isomorphic(G, H)
 
+    def test_input_data_is_not_modified_when_building_graph(self):
+        G = nx.path_graph(4)
+        input_data = cytoscape_data(G)
+        orig_data = copy.deepcopy(input_data)
+        # Ensure input is unmodified by cytoscape_graph (gh-4173)
+        cytoscape_graph(input_data)
+        assert input_data == orig_data
+
     def test_graph_attributes(self):
         G = nx.path_graph(4)
         G.add_node(1, color="red")
readwrite.cytoscape_graph(data) removes "source" and "target" fields from edges

Hi, I realized that the function that builds a graph from cytoscape data modifies the data it receives as an argument. Is there any reason for that? I couldn't find this in the documentation, and I noticed it while browsing the code. Thank you in advance!
> I realized the function to get a graph from cytoscape data modifies the data that is received as argument. Is there any reason for that?

This looks like a defect to me; I don't think the input data dict should be modified. An MRE:

```python
>>> G = nx.path_graph(4)
>>> data_dict = nx.cytoscape_data(G)
>>> data_dict
{'data': [], 'directed': False, 'multigraph': False, 'elements': {'nodes': [{'data': {'id': '0', 'value': 0, 'name': '0'}}, {'data': {'id': '1', 'value': 1, 'name': '1'}}, {'data': {'id': '2', 'value': 2, 'name': '2'}}, {'data': {'id': '3', 'value': 3, 'name': '3'}}], 'edges': [{'data': {'source': 0, 'target': 1}}, {'data': {'source': 1, 'target': 2}}, {'data': {'source': 2, 'target': 3}}]}}
>>> H = nx.cytoscape_graph(data_dict)
>>> data_dict  # 'source' and 'target' keys no longer present for edges
{'data': [], 'directed': False, 'multigraph': False, 'elements': {'nodes': [{'data': {'id': '0', 'value': 0, 'name': '0'}}, {'data': {'id': '1', 'value': 1, 'name': '1'}}, {'data': {'id': '2', 'value': 2, 'name': '2'}}, {'data': {'id': '3', 'value': 3, 'name': '3'}}], 'edges': [{'data': {}}, {'data': {}}, {'data': {}}]}}
```

> I couldn't find that in the documentation, and I found it when browsing the code.

Unfortunately `cytoscape_graph` is also missing a docstring! Thanks for catching this as well.

FWIW this should be a very straightforward fix that is suitable for new contributors. Doing so would require the following:

1. Add/modify a test in `networkx/readwrite/json_graph/tests/test_cytoscape.py` that will fail with the current behavior but pass with the desired behavior (i.e. the input data is not modified when `cytoscape_graph` is called).
2. Modify the `cytoscape_graph` function defined in `networkx/readwrite/json_graph/cytoscape.py` so that the input data is not modified by the function.
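A minimal sketch contrasting the two behaviors; the non-mutating form is the one the merged patch above ends up using (plain key lookups instead of `pop`):

```python
data = {"data": {"source": 0, "target": 1}}

# Mutating version: pop() removes the keys from the caller's dict
d = {"data": dict(data["data"])}
d["data"].pop("source"), d["data"].pop("target")
assert d["data"] == {}

# Non-mutating version, as in the fix: plain key lookups
sour, targ = data["data"]["source"], data["data"]["target"]
assert data["data"] == {"source": 0, "target": 1}  # input left intact
```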
2020-08-21T07:15:22
networkx/networkx
4,241
networkx__networkx-4241
[ "4139" ]
b36e2991c2d4387192dc1c1f285bf888646db0fa
diff --git a/networkx/classes/function.py b/networkx/classes/function.py
--- a/networkx/classes/function.py
+++ b/networkx/classes/function.py
@@ -576,23 +576,14 @@ def info(G, n=None):
     """
     if n is None:
-        n_nodes = G.number_of_nodes()
-        n_edges = G.number_of_edges()
-        return "".join(
-            [
-                type(G).__name__,
-                f" named '{G.name}'" if G.name else "",
-                f" with {n_nodes} nodes and {n_edges} edges",
-            ]
-        )
-    else:
-        if n not in G:
-            raise nx.NetworkXError(f"node {n} not in graph")
-        info = ""  # append this all to a string
-        info += f"Node {n} has the following properties:\n"
-        info += f"Degree: {G.degree(n)}\n"
-        info += "Neighbors: "
-        info += " ".join(str(nbr) for nbr in G.neighbors(n))
+        return str(G)
+    if n not in G:
+        raise nx.NetworkXError(f"node {n} not in graph")
+    info = ""  # append this all to a string
+    info += f"Node {n} has the following properties:\n"
+    info += f"Degree: {G.degree(n)}\n"
+    info += "Neighbors: "
+    info += " ".join(str(nbr) for nbr in G.neighbors(n))
     return info
diff --git a/networkx/classes/graph.py b/networkx/classes/graph.py
--- a/networkx/classes/graph.py
+++ b/networkx/classes/graph.py
@@ -370,20 +370,31 @@ def name(self, s):
         self.graph["name"] = s
 
     def __str__(self):
-        """Returns the graph name.
+        """Returns a short summary of the graph.
 
         Returns
         -------
-        name : string
-            The name of the graph.
+        info : string
+            Graph information as provided by `nx.info`
 
         Examples
        --------
        >>> G = nx.Graph(name="foo")
        >>> str(G)
-        'foo'
+        "Graph named 'foo' with 0 nodes and 0 edges"
+
+        >>> G = nx.path_graph(3)
+        >>> str(G)
+        'Graph with 3 nodes and 2 edges'
+
         """
-        return self.name
+        return "".join(
+            [
+                type(self).__name__,
+                f" named '{self.name}'" if self.name else "",
+                f" with {self.number_of_nodes()} nodes and {self.number_of_edges()} edges",
+            ]
+        )
 
     def __iter__(self):
         """Iterate over the nodes. Use: 'for n in G'.
diff --git a/networkx/classes/tests/historical_tests.py b/networkx/classes/tests/historical_tests.py
--- a/networkx/classes/tests/historical_tests.py
+++ b/networkx/classes/tests/historical_tests.py
@@ -21,7 +21,6 @@ def setup_class(cls):
 
     def test_name(self):
         G = self.G(name="test")
-        assert str(G) == "test"
        assert G.name == "test"
        H = self.G()
        assert H.name == ""
diff --git a/networkx/classes/tests/test_graph.py b/networkx/classes/tests/test_graph.py
--- a/networkx/classes/tests/test_graph.py
+++ b/networkx/classes/tests/test_graph.py
@@ -182,9 +182,18 @@ def test_name(self):
         G = self.Graph(name="")
         assert G.name == ""
         G = self.Graph(name="test")
-        assert G.__str__() == "test"
         assert G.name == "test"
 
+    def test_str_unnamed(self):
+        G = self.Graph()
+        G.add_edges_from([(1, 2), (2, 3)])
+        assert str(G) == f"{type(G).__name__} with 3 nodes and 2 edges"
+
+    def test_str_named(self):
+        G = self.Graph(name="foo")
+        G.add_edges_from([(1, 2), (2, 3)])
+        assert str(G) == f"{type(G).__name__} named 'foo' with 3 nodes and 2 edges"
+
     def test_graph_chain(self):
         G = self.Graph([(0, 1), (1, 2)])
         DG = G.to_directed(as_view=True)
Add more useful ```__str__``` function for Graph objects

Currently the ```__str__``` function prints the ```G.name``` property, which is ```G.graph["name"]```. That is pretty rarely useful in my experience (often a blank string). There is useful information in ```nx.info(G)```, and much less of it than there once was. Perhaps it would be reasonable to make something like ```nx.info(G)``` the output of ```__str__```. Are there reasons to avoid doing this? What implications does it have for ```__repr__```? Issue #3956 requested a useful ```__repr__``` and so is related to this.
It would be wise to include the name of the graph in case it exists. Some additional information, such as the number of nodes and edges as well as the type of graph, could also be included in a format such as `DiGraph(name: "asdf", nodes: 3, edges: 2)`.

> Are there reasons to avoid doing this?

Not a reason to avoid it, but something to consider: `nx.info` includes the number_of_edges and the average degree calculation, which *might* take a long time for some graphs, e.g.

```python
>>> G = nx.path_graph(int(1e7))
>>> %timeit nx.info(G)
4.4 s ± 47.3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
```

N.B. @dschult I think you raised this concern before; I'm simply adding it here for visibility!

I'd like to get into this, but I have a question: is there any particular reason that `G.number_of_edges` (or really, `G.size`) is calculated on demand and not stored as an attribute on `Graph` and maintained whenever edges are added/removed, to facilitate O(1) lookups? I think @parth-verma's suggestion above is perfect, but `__str__` should run in constant time.

> is there any particular reason that G.number_of_edges (or really, G.size) is calculated on demand and not stored as an attribute on Graph and maintained whenever edges are added/removed to facilitate O(1) lookups?

I think the advantage of the property is that the class/user does not have to explicitly manage state. If e.g. `number_of_edges` were an attribute instead of a property of the graph, then all methods for adding/removing edges would have to update the value explicitly on modification, which in principle 1) makes the modification itself slightly more expensive (a minor concern) and 2) makes it very easy to introduce subtle bugs (compare this situation to ref counting, for example).
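With the patched `__str__` from the diff above applied, the new representation can be exercised like this (expected outputs shown as comments):

```python
import networkx as nx

G = nx.path_graph(3)
print(str(G))          # Graph with 3 nodes and 2 edges

H = nx.DiGraph(name="foo")
print(str(H))          # DiGraph named 'foo' with 0 nodes and 0 edges
```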
2020-10-07T03:04:25
networkx/networkx
4,246
networkx__networkx-4246
[ "4244" ]
2f0c56ffdc923c5fce04935447772ba9f68b69f9
diff --git a/networkx/algorithms/euler.py b/networkx/algorithms/euler.py
--- a/networkx/algorithms/euler.py
+++ b/networkx/algorithms/euler.py
@@ -224,7 +224,7 @@ def has_eulerian_path(G):
     - at most one vertex has in_degree - out_degree = 1,
     - every other vertex has equal in_degree and out_degree,
     - and all of its vertices with nonzero degree belong to a
-    - single connected component of the underlying undirected graph.
+      single connected component of the underlying undirected graph.
 
     An undirected graph has an Eulerian path iff:
     - exactly zero or two vertices have odd degree,
@@ -246,16 +246,28 @@ def has_eulerian_path(G):
     eulerian_path
     """
     if G.is_directed():
+        # Remove isolated nodes (if any) without altering the input graph
+        nodes_remove = [v for v in G if G.in_degree[v] == 0 and G.out_degree[v] == 0]
+        if nodes_remove:
+            G = G.copy()
+            G.remove_nodes_from(nodes_remove)
+
         ins = G.in_degree
         outs = G.out_degree
-        semibalanced_ins = sum(ins(v) - outs(v) == 1 for v in G)
-        semibalanced_outs = sum(outs(v) - ins(v) == 1 for v in G)
+        unbalanced_ins = 0
+        unbalanced_outs = 0
+        for v in G:
+            if ins[v] - outs[v] == 1:
+                unbalanced_ins += 1
+            elif outs[v] - ins[v] == 1:
+                unbalanced_outs += 1
+            elif ins[v] != outs[v]:
+                return False
+
         return (
-            semibalanced_ins <= 1
-            and semibalanced_outs <= 1
-            and sum(G.in_degree(v) != G.out_degree(v) for v in G) <= 2
-            and nx.is_weakly_connected(G)
+            unbalanced_ins <= 1 and unbalanced_outs <= 1 and nx.is_weakly_connected(G)
         )
+
     else:
         return sum(d % 2 == 1 for v, d in G.degree()) in (0, 2) and nx.is_connected(G)
diff --git a/networkx/algorithms/tests/test_euler.py b/networkx/algorithms/tests/test_euler.py
--- a/networkx/algorithms/tests/test_euler.py
+++ b/networkx/algorithms/tests/test_euler.py
@@ -134,6 +134,22 @@ def test_has_eulerian_path_non_cyclic(self):
         G = nx.path_graph(6, create_using=nx.DiGraph)
         assert nx.has_eulerian_path(G)
 
+    def test_has_eulerian_path_directed_graph(self):
+        # Test directed graphs and returns False
+        G = nx.DiGraph()
+        G.add_edges_from([(0, 1), (1, 2), (0, 2)])
+        assert not nx.has_eulerian_path(G)
+
+    def test_has_eulerian_path_isolated_node(self):
+        # Test directed graphs without isolated node returns True
+        G = nx.DiGraph()
+        G.add_edges_from([(0, 1), (1, 2), (2, 0)])
+        assert nx.has_eulerian_path(G)
+
+        # Test directed graphs with isolated node returns True
+        G.add_node(3)
+        assert nx.has_eulerian_path(G)
+
 
 class TestFindPathStart:
     def testfind_path_start(self):
Issue with networkx.has_eulerian_path()

This issue adds to #3976. There appears to be a problem with the `has_eulerian_path` function. It returns the wrong answer on this example:

```python
test_graph = nx.DiGraph()
test_graph.add_edges_from([(0, 1), (1, 2), (0, 2)])
print(nx.has_eulerian_path(test_graph))
print(list(nx.eulerian_path(test_graph)))
```

Output:

```
True
[(0, 0), (0, 1), (1, 2)]
```

This directed graph does not have an Eulerian path, and, adding to #3976, the path returned by `eulerian_path` is not an actual Eulerian path, since it (1) does not include the edge `(0, 2)` and (2) includes a self-loop `(0, 0)` that is not in the graph.

My hunch is that the problem is in this line:
https://github.com/networkx/networkx/blob/3351206a3ce5b3a39bb2fc451e93ef545b96c95b/networkx/algorithms/euler.py#L256

I think changing that `2` to be `semibalanced_ins + semibalanced_outs` may do the trick, since the definition is

> "...every _other_ vertex has equal in_degree and out_degree..."

Going to see about fixing this and adding this test. Appreciate any clues, ideas or tips.
This does seem to be giving an incorrect result for the directed graph you mention. The thing that jumps out to me is that the description for directed graphs in the docstring does not seem to match the implementation for directed graphs.
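To make the counterexample concrete, here is a small check (independent of networkx internals) of the directed Eulerian-path degree condition for the graph above:

```python
import networkx as nx

G = nx.DiGraph([(0, 1), (1, 2), (0, 2)])

# For a directed Eulerian path: at most one node may have
# out_degree - in_degree == 1, at most one may have
# in_degree - out_degree == 1, and all others must be balanced.
diffs = {v: G.out_degree(v) - G.in_degree(v) for v in G}
assert diffs == {0: 2, 1: 0, 2: -2}   # node 0 is off by 2 -> no Eulerian path
```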
2020-10-08T20:32:04
networkx/networkx
4,284
networkx__networkx-4284
[ "4199" ]
c383d8f5f6f37ce8ca2020aaea1bd9d12a3e151f
diff --git a/networkx/readwrite/json_graph/cytoscape.py b/networkx/readwrite/json_graph/cytoscape.py --- a/networkx/readwrite/json_graph/cytoscape.py +++ b/networkx/readwrite/json_graph/cytoscape.py @@ -2,6 +2,7 @@ __all__ = ["cytoscape_data", "cytoscape_graph"] +# TODO: Remove in NX 3.0 _attrs = dict(name="name", ident="id") @@ -18,6 +19,11 @@ def cytoscape_data(G, attrs=None): ignored. Default is `None` which results in the default mapping ``dict(name="name", ident="id")``. + .. deprecated:: 2.6 + + The `attrs` keyword argument will be replaced with `name` and + `ident` in networkx 3.0 + Returns ------- data: dict @@ -26,7 +32,7 @@ def cytoscape_data(G, attrs=None): Raises ------ NetworkXError - If values in `attrs` are not unique. + If the `name` and `ident` attributes are identical. See Also -------- @@ -48,16 +54,33 @@ def cytoscape_data(G, attrs=None): {'data': {'id': '1', 'value': 1, 'name': '1'}}], 'edges': [{'data': {'source': 0, 'target': 1}}]}} """ - if not attrs: + # ------ TODO: Remove between the lines in 3.0 ----- # + if attrs is None: attrs = _attrs else: + import warnings + + msg = ( + "\nThe function signature for cytoscape_data will change in " + "networkx 3.0.\n" + "The `attrs` keyword argument will be replaced with \n" + "explicit `name` and `ident` keyword arguments, e.g.\n\n" + " >>> cytoscape_data(G, attrs={'name': 'foo', 'ident': 'bar'})\n\n" + "should instead be written as\n\n" + " >>> cytoscape_data(G, name='foo', ident='bar')\n\n" + "in networkx 3.0.\n" + "The default values for 'name' and 'ident' will not change." + ) + warnings.warn(msg, FutureWarning, stacklevel=2) + attrs.update({k: v for (k, v) in _attrs.items() if k not in attrs}) name = attrs["name"] ident = attrs["ident"] + # -------------------------------------------------- # - if len({name, ident}) < 2: - raise nx.NetworkXError("Attribute names are not unique.") + if name == ident: + raise nx.NetworkXError("name and ident must be different.") jsondata = {"data": list(G.graph.items())} jsondata["directed"] = G.is_directed() @@ -103,6 +126,11 @@ def cytoscape_graph(data, attrs=None): ignored. Default is `None` which results in the default mapping ``dict(name="name", ident="id")``. + .. deprecated:: 2.6 + + The `attrs` keyword argument will be replaced with `name` and + `ident` in networkx 3.0 + Returns ------- graph : a NetworkX graph instance @@ -112,7 +140,7 @@ def cytoscape_graph(data, attrs=None): Raises ------ NetworkXError - If values in `attrs` are not unique. + If the `name` and `ident` attributes are identical. See Also -------- @@ -143,16 +171,33 @@ def cytoscape_graph(data, attrs=None): >>> G.edges(data=True) EdgeDataView([(0, 1, {'source': 0, 'target': 1})]) """ - if not attrs: + # ------ TODO: Remove between the lines in 3.0 ----- # + if attrs is None: attrs = _attrs else: + import warnings + + msg = ( + "\nThe function signature for cytoscape_data will change in " + "networkx 3.0.\n" + "The `attrs` keyword argument will be replaced with \n" + "explicit `name` and `ident` keyword arguments, e.g.\n\n" + " >>> cytoscape_data(G, attrs={'name': 'foo', 'ident': 'bar'})\n\n" + "should instead be written as\n\n" + " >>> cytoscape_data(G, name='foo', ident='bar')\n\n" + "in networkx 3.0.\n" + "The default values for 'name' and 'ident' will not change." 
+ ) + warnings.warn(msg, FutureWarning, stacklevel=2) + attrs.update({k: v for (k, v) in _attrs.items() if k not in attrs}) name = attrs["name"] ident = attrs["ident"] + # -------------------------------------------------- # - if len({ident, name}) < 2: - raise nx.NetworkXError("Attribute names are not unique.") + if name == ident: + raise nx.NetworkXError("name and ident must be different.") multigraph = data.get("multigraph") directed = data.get("directed")
diff --git a/networkx/readwrite/json_graph/tests/test_cytoscape.py b/networkx/readwrite/json_graph/tests/test_cytoscape.py
--- a/networkx/readwrite/json_graph/tests/test_cytoscape.py
+++ b/networkx/readwrite/json_graph/tests/test_cytoscape.py
@@ -5,6 +5,22 @@
 from networkx.readwrite.json_graph import cytoscape_data, cytoscape_graph
 
 
+# TODO: To be removed when signature change complete in 3.0
+def test_futurewarning():
+    G = nx.path_graph(3)
+    # No warnings when `attrs` kwarg not used
+    with pytest.warns(None) as record:
+        data = cytoscape_data(G)
+        H = cytoscape_graph(data)
+    assert len(record) == 0
+    # Future warning raised with `attrs` kwarg
+    attrs = {"name": "foo", "ident": "bar"}
+    with pytest.warns(FutureWarning):
+        data = cytoscape_data(G, attrs)
+    with pytest.warns(FutureWarning):
+        H = cytoscape_graph(data, attrs)
+
+
 class TestCytoscape:
     def test_graph(self):
         G = nx.path_graph(4)
Simplify cytoscape functions

The functions for reading/writing graphs in cytoscape format have a complicated signature/implementation based on passing in an attribute dict with required keys while the rest are ignored. It would be much more straightforward to simply use keyword arguments. Since this involves changing a function signature, the change should follow the [deprecation plan](https://networkx.github.io/documentation/stable/developer/deprecations.html). See #4180 for detailed discussion.

- [ ] Update the function signature with `name` and `ident` kwargs, which is clearer than the current `attrs` dict. This change in API would require a Deprecation/FutureWarning.
- [ ] Update parameter checks (e.g. uniqueness of keys in `attrs` -> "name" != "ident"; `if not attrs` -> check for None). The approach here depends on the bullet point directly above.
2020-10-24T21:15:36
networkx/networkx
4,289
networkx__networkx-4289
[ "4288" ]
e83c2eed698d9b4ab3056fd8ebd88e97e41645d3
diff --git a/networkx/release.py b/networkx/release.py
--- a/networkx/release.py
+++ b/networkx/release.py
@@ -148,7 +148,13 @@ def get_info(dynamic=True):
     # no vcs information will be provided.
     # sys.path.insert(0, basedir)
     try:
-        from version import date, date_info, version, version_info, vcs_info
+        from networkx.version import (
+            date,
+            date_info,
+            version,
+            version_info,
+            vcs_info,
+        )
     except ImportError:
         import_failed = True
         vcs_info = (None, (None, None))
Executing other people's version.py when importing

When I import networkx inside my project, it imports my local `version.py` (from `release.py`). For example, if I put a `version.py` in my project root that looks like

```python
raise RuntimeError('ahhhh!!!')
```

I will get the following stack trace:

```
Traceback (most recent call last):
  File "<censured>/venv/lib/python3.8/site-packages/IPython/core/interactiveshell.py", line 3417, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "<ipython-input-2-719b05231e1b>", line 1, in <module>
    import networkx
  File "<censured>/pycharm-2020.1/plugins/python/helpers/pydev/_pydev_bundle/pydev_import_hook.py", line 21, in do_import
    module = self._system_import(name, *args, **kwargs)
  File "<censured>/venv/lib/python3.8/site-packages/networkx/__init__.py", line 19, in <module>
    from networkx import release
  File "/<censured>/pycharm-2020.1/plugins/python/helpers/pydev/_pydev_bundle/pydev_import_hook.py", line 21, in do_import
    module = self._system_import(name, *args, **kwargs)
  File "<censured>/venv/lib/python3.8/site-packages/networkx/release.py", line 224, in <module>
    date, date_info, version, version_info, vcs_info = get_info()
  File "<censured>/venv/lib/python3.8/site-packages/networkx/release.py", line 151, in get_info
    from version import date, date_info, version, version_info, vcs_info
  File "<censured>/pycharm-2020.1/plugins/python/helpers/pydev/_pydev_bundle/pydev_import_hook.py", line 21, in do_import
    module = self._system_import(name, *args, **kwargs)
  File "<censured>/git/storepick-dataset-creation/version.py", line 1, in <module>
    raise RuntimeError('ahhhh!!!')
RuntimeError: ahhhh!!!
```
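A sketch of why this happens and how the patch above avoids it: the bare import resolves against `sys.path` (and `release.py` explicitly inserts a directory there), so any `version.py` found first, such as one in the user's project, shadows the intended module; the fully qualified form is unambiguous. This assumes a release-style install where networkx ships a generated `version.py`:

```python
# Fragile: resolves to whichever version.py appears first on sys.path,
# e.g. a version.py sitting in the user's project root.
# from version import version

# Robust, as in the fix: fully qualified, always networkx's own module.
from networkx.version import version
print(version)
```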
2020-10-26T14:35:23
networkx/networkx
4,317
networkx__networkx-4317
[ "3976" ]
4c38d8868c9cfd93ac7eebacf336226976b95130
diff --git a/networkx/algorithms/euler.py b/networkx/algorithms/euler.py --- a/networkx/algorithms/euler.py +++ b/networkx/algorithms/euler.py @@ -213,7 +213,7 @@ def eulerian_circuit(G, source=None, keys=False): yield from _simplegraph_eulerian_circuit(G, source) -def has_eulerian_path(G): +def has_eulerian_path(G, source=None): """Return True iff `G` has an Eulerian path. An Eulerian path is a path in a graph which uses each edge of a graph @@ -236,6 +236,9 @@ def has_eulerian_path(G): G : NetworkX Graph The graph to find an euler path in. + source : node, optional + Starting node for circuit. + Returns ------- Bool : True if G has an eulerian path. @@ -245,15 +248,21 @@ def has_eulerian_path(G): is_eulerian eulerian_path """ + if nx.is_eulerian(G): + return True + if G.is_directed(): # Remove isolated nodes (if any) without altering the input graph nodes_remove = [v for v in G if G.in_degree[v] == 0 and G.out_degree[v] == 0] if nodes_remove: G = G.copy() G.remove_nodes_from(nodes_remove) - ins = G.in_degree outs = G.out_degree + # Since we know it is not eulerian, outs - ins must be 1 for source + if source is not None and outs[source] - ins[source] != 1: + return False + unbalanced_ins = 0 unbalanced_outs = 0 for v in G: @@ -267,9 +276,13 @@ def has_eulerian_path(G): return ( unbalanced_ins <= 1 and unbalanced_outs <= 1 and nx.is_weakly_connected(G) ) - else: - return sum(d % 2 == 1 for v, d in G.degree()) in (0, 2) and nx.is_connected(G) + # We know it is not eulerian, so degree of source must be odd. + if source is not None and G.degree[source] % 2 != 1: + return False + + # Sum is 2 since we know it is not eulerian (which implies sum is 0) + return sum(d % 2 == 1 for v, d in G.degree()) == 2 and nx.is_connected(G) def eulerian_path(G, source=None, keys=False): @@ -293,22 +306,37 @@ def eulerian_path(G, source=None, keys=False): Warning: If `source` provided is not the start node of an Euler path will raise error even if an Euler Path exists. """ - if not has_eulerian_path(G): + if not has_eulerian_path(G, source): raise nx.NetworkXError("Graph has no Eulerian paths.") if G.is_directed(): G = G.reverse() + if source is None or nx.is_eulerian(G) is False: + source = _find_path_start(G) + if G.is_multigraph(): + for u, v, k in _multigraph_eulerian_circuit(G, source): + if keys: + yield u, v, k + else: + yield u, v + else: + yield from _simplegraph_eulerian_circuit(G, source) else: G = G.copy() - if source is None: - source = _find_path_start(G) - if G.is_multigraph(): - for u, v, k in _multigraph_eulerian_circuit(G, source): + if source is None: + source = _find_path_start(G) + if G.is_multigraph(): if keys: - yield u, v, k + yield from reversed( + [(v, u, k) for u, v, k in _multigraph_eulerian_circuit(G, source)] + ) else: - yield u, v - else: - yield from _simplegraph_eulerian_circuit(G, source) + yield from reversed( + [(v, u) for u, v, k in _multigraph_eulerian_circuit(G, source)] + ) + else: + yield from reversed( + [(v, u) for u, v in _simplegraph_eulerian_circuit(G, source)] + ) @not_implemented_for("directed")
diff --git a/networkx/algorithms/tests/test_euler.py b/networkx/algorithms/tests/test_euler.py --- a/networkx/algorithms/tests/test_euler.py +++ b/networkx/algorithms/tests/test_euler.py @@ -150,6 +150,22 @@ def test_has_eulerian_path_isolated_node(self): G.add_node(3) assert nx.has_eulerian_path(G) + def test_has_eulerian_path_not_weakly_connected(self): + G = nx.DiGraph() + H = nx.Graph() + G.add_edges_from([(0, 1), (2, 3), (3, 2)]) + H.add_edges_from([(0, 1), (2, 3), (3, 2)]) + assert not nx.has_eulerian_path(G) + assert not nx.has_eulerian_path(H) + + def test_has_eulerian_path_unbalancedins_more_than_one(self): + G = nx.DiGraph() + H = nx.Graph() + G.add_edges_from([(0, 1), (2, 3)]) + H.add_edges_from([(0, 1), (2, 3)]) + assert not nx.has_eulerian_path(G) + assert not nx.has_eulerian_path(H) + class TestFindPathStart: def testfind_path_start(self): @@ -171,6 +187,65 @@ def test_eulerian_path(self): for e1, e2 in zip(x, nx.eulerian_path(nx.DiGraph(x))): assert e1 == e2 + def test_eulerian_path_straight_link(self): + G = nx.DiGraph() + result = [(1, 2), (2, 3), (3, 4), (4, 5)] + G.add_edges_from(result) + assert result == list(nx.eulerian_path(G)) + assert result == list(nx.eulerian_path(G, source=1)) + with pytest.raises(nx.NetworkXError): + list(nx.eulerian_path(G, source=3)) + with pytest.raises(nx.NetworkXError): + list(nx.eulerian_path(G, source=4)) + with pytest.raises(nx.NetworkXError): + list(nx.eulerian_path(G, source=5)) + + def test_eulerian_path_multigraph(self): + G = nx.MultiDiGraph() + result = [(2, 1), (1, 2), (2, 1), (1, 2), (2, 3), (3, 4), (4, 3)] + G.add_edges_from(result) + assert result == list(nx.eulerian_path(G)) + assert result == list(nx.eulerian_path(G, source=2)) + with pytest.raises(nx.NetworkXError): + list(nx.eulerian_path(G, source=3)) + with pytest.raises(nx.NetworkXError): + list(nx.eulerian_path(G, source=4)) + + def test_eulerian_path_eulerian_circuit(self): + G = nx.DiGraph() + result = [(1, 2), (2, 3), (3, 4), (4, 1)] + result2 = [(2, 3), (3, 4), (4, 1), (1, 2)] + result3 = [(3, 4), (4, 1), (1, 2), (2, 3)] + G.add_edges_from(result) + assert result == list(nx.eulerian_path(G)) + assert result == list(nx.eulerian_path(G, source=1)) + assert result2 == list(nx.eulerian_path(G, source=2)) + assert result3 == list(nx.eulerian_path(G, source=3)) + + def test_eulerian_path_undirected(self): + G = nx.Graph() + result = [(1, 2), (2, 3), (3, 4), (4, 5)] + result2 = [(5, 4), (4, 3), (3, 2), (2, 1)] + G.add_edges_from(result) + assert list(nx.eulerian_path(G)) in (result, result2) + assert result == list(nx.eulerian_path(G, source=1)) + assert result2 == list(nx.eulerian_path(G, source=5)) + with pytest.raises(nx.NetworkXError): + list(nx.eulerian_path(G, source=3)) + with pytest.raises(nx.NetworkXError): + list(nx.eulerian_path(G, source=2)) + + def test_eulerian_path_multigraph_undirected(self): + G = nx.MultiGraph() + result = [(2, 1), (1, 2), (2, 1), (1, 2), (2, 3), (3, 4)] + G.add_edges_from(result) + assert result == list(nx.eulerian_path(G)) + assert result == list(nx.eulerian_path(G, source=2)) + with pytest.raises(nx.NetworkXError): + list(nx.eulerian_path(G, source=3)) + with pytest.raises(nx.NetworkXError): + list(nx.eulerian_path(G, source=1)) + class TestEulerize: def test_disconnected(self):
Oddity with networkx.eulerian_path()

This function does not seem to be operating well on a particular example. It has no restriction on directed or multigraphs, and as far as I know, the graph does not have to be strongly connected because an Eulerian path does not have to have equal start and end points (as opposed to circuits). Here is an ASCII representation of my network:

```
+-<-+   +-<-+
1   2->-3   4
+->-+   +->-+
```

Inspecting manually, there is an Eulerian path 2->1->2->3->4->3. The output I get from `eulerian_path()` varies oddly depending on the `source` argument.

```python
import networkx

print("nx version", networkx.__version__)

# +-<-+   +-<-+
# 1   2->-3   4
# +->-+   +->-+
G = networkx.MultiDiGraph()
G.add_edges_from([
    (1, 2),
    (2, 1),
    (2, 3),
    (3, 4),
    (4, 3)
])
print("Eulerian path, no source argument", list(networkx.eulerian_path(G)))
print("Eulerian path, source 2", list(networkx.eulerian_path(G, source=2)))
print("Eulerian path, source 4", list(networkx.eulerian_path(G, source=4)))
```

Output:

```
nx version 2.4
Eulerian path, no source argument [(2, 1), (1, 2), (2, 3), (3, 4), (4, 3)]
Eulerian path, source 2 [(2, 1), (1, 2)]
Eulerian path, source 4 [(2, 1), (1, 2), (2, 4), (4, 3), (3, 4)]
```

With no source argument, it outputs the expected solution. With the source argument equal to the source node that was previously found, it does not find a path that covers all arcs. With a source argument that is an invalid start point for an Eulerian path, it returns a path starting from a different node, contrary to the documentation saying a warning/error will be issued.

Edit: moreover, this returned path contains an arc which doesn't exist in the input graph.
I can see several issues in the code and unit tests:

1) The previous author finds an Euler path/circuit in a *reversed* directed graph in order to avoid reversing the final Euler path output below:
https://github.com/networkx/networkx/blob/04e1ef2598fa494fc4b0889ef69a71b83ff32493/networkx/algorithms/euler.py#L279-L282
This works fine if the source is not provided, as seen in the first example. In case a source is provided, the source is actually the *end* of the Euler path in the reversed graph, and therefore the code fails on the second example.

2) There are no checks in `_simplegraph_eulerian_circuit` and `_multigraph_eulerian_circuit` as to whether an Euler path exists at the provided source, and therefore the third example fails.

3) The Euler path is not properly tested in the unit tests:
https://github.com/networkx/networkx/blob/04e1ef2598fa494fc4b0889ef69a71b83ff32493/networkx/algorithms/tests/test_euler.py#L150-L155

@dschult May I take this as a bug fix?

I haven't looked into this much yet, but I can verify that the example shown does produce the stated (incorrect) output. I've put the "Defect" label on this issue. @manasjoshi14 if you can find a/the bug and can fix it, that would be great. Does the "source" keyword even make sense with this algorithm? Maybe that needs to be a separate function?
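A small sketch of the symmetry being exploited (and broken) here: an Eulerian path of the reversed graph, reversed edge-by-edge, is an Eulerian path of the original, but its start node is the *end* node of the reversed-graph path. This mirrors the `reversed([(v, u) for u, v in ...])` pattern the eventual fix uses:

```python
# An Eulerian path found in G.reverse() for the example network above:
rev_path = [(3, 4), (4, 3), (3, 2), (2, 1), (1, 2)]

# Recover a path in the original graph: reverse the order AND each edge.
path = [(v, u) for u, v in reversed(rev_path)]
assert path == [(2, 1), (1, 2), (2, 3), (3, 4), (4, 3)]
```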
2020-11-04T05:23:10
networkx/networkx
4,321
networkx__networkx-4321
[ "4305" ]
4c38d8868c9cfd93ac7eebacf336226976b95130
diff --git a/networkx/convert.py b/networkx/convert.py --- a/networkx/convert.py +++ b/networkx/convert.py @@ -248,12 +248,93 @@ def to_dict_of_dicts(G, nodelist=None, edge_data=None): nodelist : list Use only nodes specified in nodelist - edge_data : list, optional - If provided, the value of the dictionary will be - set to edge_data for all edges. This is useful to make - an adjacency matrix type representation with 1 as the edge data. - If edgedata is None, the edgedata in G is used to fill the values. - If G is a multigraph, the edgedata is a dict for each pair (u,v). + edge_data : scalar, optional + If provided, the value of the dictionary will be set to `edge_data` for + all edges. Usual values could be `1` or `True`. If `edge_data` is + `None` (the default), the edgedata in `G` is used, resulting in a + dict-of-dict-of-dicts. If `G` is a MultiGraph, the result will be a + dict-of-dict-of-dict-of-dicts. See Notes for an approach to customize + handling edge data. `edge_data` should *not* be a container. + + Returns + ------- + dod : dict + A nested dictionary representation of `G`. Note that the level of + nesting depends on the type of `G` and the value of `edge_data` + (see Examples). + + See Also + -------- + from_dict_of_dicts, to_dict_of_lists + + Notes + ----- + For a more custom approach to handling edge data, try:: + + dod = { + n: { + nbr: custom(n, nbr, dd) for nbr, dd in nbrdict.items() + } + for n, nbrdict in G.adj.items() + } + + where `custom` returns the desired edge data for each edge between `n` and + `nbr`, given existing edge data `dd`. + + Examples + -------- + >>> G = nx.path_graph(3) + >>> nx.to_dict_of_dicts(G) + {0: {1: {}}, 1: {0: {}, 2: {}}, 2: {1: {}}} + + Edge data is preserved by default (``edge_data=None``), resulting + in dict-of-dict-of-dicts where the innermost dictionary contains the + edge data: + + >>> G = nx.Graph() + >>> G.add_edges_from( + ... [ + ... (0, 1, {'weight': 1.0}), + ... (1, 2, {'weight': 2.0}), + ... (2, 0, {'weight': 1.0}), + ... ] + ... ) + >>> d = nx.to_dict_of_dicts(G) + >>> d # doctest: +SKIP + {0: {1: {'weight': 1.0}, 2: {'weight': 1.0}}, + 1: {0: {'weight': 1.0}, 2: {'weight': 2.0}}, + 2: {1: {'weight': 2.0}, 0: {'weight': 1.0}}} + >>> d[1][2]['weight'] + 2.0 + + If `edge_data` is not `None`, edge data in the original graph (if any) is + replaced: + + >>> d = nx.to_dict_of_dicts(G, edge_data=1) + >>> d + {0: {1: 1, 2: 1}, 1: {0: 1, 2: 1}, 2: {1: 1, 0: 1}} + >>> d[1][2] + 1 + + This also applies to MultiGraphs: edge data is preserved by default: + + >>> G = nx.MultiGraph() + >>> G.add_edge(0, 1, key='a', weight=1.0) + 'a' + >>> G.add_edge(0, 1, key='b', weight=5.0) + 'b' + >>> d = nx.to_dict_of_dicts(G) + >>> d # doctest: +SKIP + {0: {1: {'a': {'weight': 1.0}, 'b': {'weight': 5.0}}}, + 1: {0: {'a': {'weight': 1.0}, 'b': {'weight': 5.0}}}} + >>> d[0][1]['b']['weight'] + 5.0 + + But multi edge data is lost if `edge_data` is not `None`: + + >>> d = nx.to_dict_of_dicts(G, edge_data=10) + >>> d + {0: {1: 10}, 1: {0: 10}} """ dod = {} if nodelist is None:
diff --git a/networkx/tests/test_convert.py b/networkx/tests/test_convert.py
--- a/networkx/tests/test_convert.py
+++ b/networkx/tests/test_convert.py
@@ -279,3 +279,38 @@ class Custom(nx.Graph):
     # this raise exception
     # h._node.update((n, dd.copy()) for n, dd in g.nodes.items())
     # assert isinstance(h._node[1], custom_dict)
+
+
[email protected](
+    "edgelist",
+    (
+        # Graph with no edge data
+        [(0, 1), (1, 2)],
+        # Graph with edge data
+        [(0, 1, {"weight": 1.0}), (1, 2, {"weight": 2.0})],
+    ),
+)
+def test_to_dict_of_dicts_with_edgedata_param(edgelist):
+    G = nx.Graph()
+    G.add_edges_from(edgelist)
+    # Innermost dict value == edge_data when edge_data != None.
+    # In the case when G has edge data, it is overwritten
+    expected = {0: {1: 10}, 1: {0: 10, 2: 10}, 2: {1: 10}}
+    assert nx.to_dict_of_dicts(G, edge_data=10) == expected
+
+
+def test_to_dict_of_dicts_with_edgedata_and_nodelist():
+    G = nx.path_graph(5)
+    nodelist = [2, 3, 4]
+    expected = {2: {3: 10}, 3: {2: 10, 4: 10}, 4: {3: 10}}
+    assert nx.to_dict_of_dicts(G, nodelist=nodelist, edge_data=10) == expected
+
+
+def test_to_dict_of_dicts_with_edgedata_multigraph():
+    """Multi edge data overwritten when edge_data != None"""
+    G = nx.MultiGraph()
+    G.add_edge(0, 1, key="a")
+    G.add_edge(0, 1, key="b")
+    # Multi edge data lost when edge_data is not None
+    expected = {0: {1: 10}, 1: {0: 10}}
+    assert nx.to_dict_of_dicts(G, edge_data=10) == expected
Question: MultiGraph with to_dict_of_dicts with edge data

I was reviewing the tests for `nx.convert.to_dict_of_dicts`, which led to a question about what the expected behavior of `nx.to_dict_of_dicts(G)` is when G is a MultiGraph and `edge_data` is not None. Things seem to make sense when `edge_data` is None:

```python
>>> G = nx.MultiGraph()
>>> G.add_edge(0, 1, key='a')
'a'
>>> G.add_edge(0, 1, key='b')
'b'
>>> nx.to_dict_of_dicts(G, edge_data=None)
{0: {1: {'a': {}, 'b': {}}}, 1: {0: {'a': {}, 'b': {}}}}
```

In the above example, if I call `nx.to_dict_of_dicts(G, edge_data={'weight': 1})`, my naive expectation was that I would get

```python
{0: {1: {'a': {'weight': 1}, 'b': {'weight': 1}}}, 1: {0: {'a': {'weight': 1}, 'b': {'weight': 1}}}}
```

Instead, you get:

```python
>>> nx.to_dict_of_dicts(G, edge_data={'weight': 1})
{0: {1: {'weight': 1}}, 1: {0: {'weight': 1}}}
```

which seems to clobber the multiple edges. [The code](https://github.com/networkx/networkx/blob/9aee8ef9739913ebd1e34b1be97cd91a7d6cc520/networkx/convert.py#L240-L277) doesn't have any special cases for MultiGraphs, and I couldn't tell from there and/or the docstring whether this behavior is expected. I expect I'm missing something, so I thought I'd ask! Adding examples to the docstring would also be a nice improvement to show the expected behavior.
Hi @rossbar. The [original code](https://github.com/networkx/networkx/blob/9aee8ef9739913ebd1e34b1be97cd91a7d6cc520/networkx/convert.py#L263) in convert.py directly replaces **edge_data** with the original edge attributes. When the input is a MultiGraph, it is natural that we lose the multi-edge information.

```python
# original version
# edge_data is not None
for u, nbrdict in G.adjacency():
    dod[u] = dod.fromkeys(nbrdict, edge_data)
```

I think the naive method to solve this issue is to consider single graphs and MultiGraphs separately. We should keep different edge keys in MultiGraphs, as in the modifications below. Hope to get any feedback or good ideas. Thank you.

```python
# revised version draft
# edge_data is not None
for u, nbrdict in G.adjacency():
    if 'Multi' in nx.info(G).split("\n", 2)[1]:  # multi-graphs
        udict = {}
        for nbr, edgedict in nbrdict.items():
            udict[nbr] = dict.fromkeys(edgedict, edge_data)
        dod[u] = udict
    else:  # single graph
        dod[u] = dod.fromkeys(nbrdict, edge_data)
```

First of all, the docs are incorrect when they say that ```edge_data``` should be ```list, optional```. It can actually be anything; it is the value in the dict-of-dict. The text talks about making it the value ```1```, so it clearly is not intended to be a list.

When ```edge_data is None``` we put the edge data dictionary into the dict-of-dict structure. That makes this a dict-of-dict-of-dict structure. But the ```edge_data``` input allows any **single** value in the dict-of-dict output. I am NOT saying this is what it should be, only what it is. Trying to include a dict or list there is misleading because it will be the SAME DICT FOR EVERY EDGE! Changing ```dod[u][v]['weight']``` will change the weight for ALL EDGES simultaneously because there is only one dict.

I suspect historically this was created for symmetry with ```from_dict_of_dicts```. And since it wasn't clear what people might want, except to make a value of ```1```, the interface left a way to do that (and maybe more). But I suspect that ```to_dict_of_dicts``` has not been used very often. @berlincho have you used this function?

This is an artifact that shows our initial neglect of multigraphs and edge attributes. I think v3.0 and a reconsideration of the matrix interface might give us a chance to enrich the treatment of attributes. For example, requiring ```edge_data``` to be a function of u, v here could be helpful, but that would make the code more tricky... Perhaps the interface from #4217 could be used for this ```to_dict_of_dicts``` too.

We could also remove the function, or at least the ```edge_data``` optional argument. We've never received any complaints about it despite it being incorrectly documented and not useful the way it is documented (you'd have the same list for every edge in the dod).

After more thought: `to_dict_of_dicts` should return a simple dict-of-dict representation of the graph/multigraph. `edge_data` should be a singleton (usually `True` or `1`) to indicate an edge. The default can remain that we create a dict-of-dict-of-dict with edge data in the inner dict. But the docs should change. Let's change the docstring of `to_dict_of_dicts` from:

```
edge_data : list, optional
    If provided, the value of the dictionary will be
    set to edge_data for all edges. This is useful to make
    an adjacency matrix type representation with 1 as the edge data.
    If edgedata is None, the edgedata in G is used to fill the values.
    If G is a multigraph, the edgedata is a dict for each pair (u,v).
```

to a more accurate and helpful version, including a one-liner to build your own:

```
edge_data : singleton, optional
    If provided, the value of the dictionary will be set to `edge_data`
    for all edges. Usual values could be `1` or `True`. If edgedata is
    None, the edgedata dictionary in G is used, creating a
    dict-of-dict-of-dicts. Note that if G is a multigraph, this will be
    a dict-of-dict-of-dict-of-dicts. If you want something more custom
    made try:

    dod = {n: {nbr: custom(n, nbr, dd) for nbr, dd in nbrdict.items()}
           for n, nbrdict in G.adj.items()}
```

Maybe the one-liner should be used in an example in the doc_string.
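Taking up that suggestion, a concrete usage of the one-liner with a hypothetical `custom` that doubles each edge weight:

```python
import networkx as nx

G = nx.Graph()
G.add_edge(0, 1, weight=1.5)
G.add_edge(1, 2, weight=2.0)

def custom(n, nbr, dd):
    # illustrative: derive the stored value from the existing edge data
    return 2 * dd["weight"]

dod = {n: {nbr: custom(n, nbr, dd) for nbr, dd in nbrdict.items()}
       for n, nbrdict in G.adj.items()}
assert dod == {0: {1: 3.0}, 1: {0: 3.0, 2: 4.0}, 2: {1: 4.0}}
```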
2020-11-05T21:52:02
networkx/networkx
4,326
networkx__networkx-4326
[ "4325" ]
457ab82bda75fad7f096a170b7d80dd7f295dee5
diff --git a/doc/conf.py b/doc/conf.py
--- a/doc/conf.py
+++ b/doc/conf.py
@@ -170,6 +170,8 @@
 # Options for LaTeX output
 # ------------------------
 
+# Use a latex engine that allows for unicode characters in docstrings
+latex_engine = "xelatex"
 
 # The paper size ('letter' or 'a4').
 latex_paper_size = "letter"
Use a utf8 friendly latex backend

The current sphinx configuration in doc/conf.py defaults to pdflatex. This is causing problems on #4169, which introduces API-level doctests with unicode characters in them. I tried several iterations of lualatex and xelatex to try to get it to work, but latex errors are never the most helpful. I will open a PR to resolve this shortly.
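The resolution (matching the patch above) is a one-line addition to `doc/conf.py`:

```python
# doc/conf.py
# Use a latex engine that allows for unicode characters in docstrings
latex_engine = "xelatex"
```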
2020-11-08T21:31:44
networkx/networkx
4,333
networkx__networkx-4333
[ "4331" ]
0e7abbad9f317ca92fddeff891a972bf5059d6aa
diff --git a/networkx/algorithms/components/strongly_connected.py b/networkx/algorithms/components/strongly_connected.py
--- a/networkx/algorithms/components/strongly_connected.py
+++ b/networkx/algorithms/components/strongly_connected.py
@@ -166,8 +166,8 @@ def kosaraju_strongly_connected_components(G, source=None):
             continue
         c = nx.dfs_preorder_nodes(G, r)
         new = {v for v in c if v not in seen}
-        yield new
         seen.update(new)
+        yield new
 
 
 @not_implemented_for("undirected")
diff --git a/networkx/algorithms/components/weakly_connected.py b/networkx/algorithms/components/weakly_connected.py
--- a/networkx/algorithms/components/weakly_connected.py
+++ b/networkx/algorithms/components/weakly_connected.py
@@ -60,8 +60,8 @@ def weakly_connected_components(G):
     for v in G:
         if v not in seen:
             c = set(_plain_bfs(G, v))
-            yield c
             seen.update(c)
+            yield c
 
 
 @not_implemented_for("undirected")
@@ -162,7 +162,7 @@ def _plain_bfs(G, source):
         nextlevel = set()
         for v in thislevel:
             if v not in seen:
-                yield v
                 seen.add(v)
                 nextlevel.update(Gsucc[v])
                 nextlevel.update(Gpred[v])
+                yield v
diff --git a/networkx/algorithms/components/tests/test_strongly_connected.py b/networkx/algorithms/components/tests/test_strongly_connected.py --- a/networkx/algorithms/components/tests/test_strongly_connected.py +++ b/networkx/algorithms/components/tests/test_strongly_connected.py @@ -188,6 +188,22 @@ def test_connected_raise(self): ) pytest.raises(NetworkXNotImplemented, nx.condensation, G) + strong_cc_methods = ( + nx.strongly_connected_components, + nx.kosaraju_strongly_connected_components, + nx.strongly_connected_components_recursive, + ) + + @pytest.mark.parametrize("get_components", strong_cc_methods) + def test_connected_mutability(self, get_components): + DG = nx.path_graph(5, create_using=nx.DiGraph) + G = nx.disjoint_union(DG, DG) + seen = set() + for component in get_components(G): + assert len(seen & component) == 0 + seen.update(component) + component.clear() + # Commented out due to variability on Travis-CI hardware/operating systems # def test_linear_time(self): diff --git a/networkx/algorithms/components/tests/test_weakly_connected.py b/networkx/algorithms/components/tests/test_weakly_connected.py --- a/networkx/algorithms/components/tests/test_weakly_connected.py +++ b/networkx/algorithms/components/tests/test_weakly_connected.py @@ -76,3 +76,12 @@ def test_connected_raise(self): pytest.raises(NetworkXNotImplemented, nx.weakly_connected_components, G) pytest.raises(NetworkXNotImplemented, nx.number_weakly_connected_components, G) pytest.raises(NetworkXNotImplemented, nx.is_weakly_connected, G) + + def test_connected_mutability(self): + DG = nx.path_graph(5, create_using=nx.DiGraph) + G = nx.disjoint_union(DG, DG) + seen = set() + for component in nx.weakly_connected_components(G): + assert len(seen & component) == 0 + seen.update(component) + component.clear()
weakly_connected_components: Editing yielded sets during iteration has buggy side effects When modifying the yielded component sets during iteration from `weakly_connected_components`, strange things happen:
```python
import networkx as nx

g = nx.MultiDiGraph()
g.add_edge(0, 1)
g.add_edge(1, 0)
g.add_edge(0, 3)
g.add_edge(0, 3)

print('Okay:')
print(list(nx.weakly_connected_components(g)))
# Everything alright, gives: [{0, 1, 3}]

print('Still okay:')
for n in nx.weakly_connected_components(g):
    print(n)
    n.remove(0)
# Will print once: {0, 1, 3}

print('NOT okay:')
# Removing a different node during the iteration:
for n in nx.weakly_connected_components(g):
    print(n)
    n.remove(1)
# Will print the set twice:
# {0, 1, 3}
# {0, 1, 3}
```
In case this is actually desired (which I doubt), it should at least be documented that the yielded component sets are not to be edited.
Confirmed as of today using the latest release, v2.5.
```python
In [2]: nx.__version__
Out[2]: '2.5'
```
This looks like a bug I believe we ran into elsewhere in the code before. The set that is yielded is added to the `seen` set AFTER yielding. Therefore any changes to it affect whether the algorithm thinks it saw this set previously. This is definitely a bug. And it should be easily fixed by updating the `seen` set before yielding the set `c`. While we do this, we need to check the other `weakly_connected` code, and I see the same pattern in one place in the `strongly_connected` code too... so we had better look through that as well.
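A minimal sketch of the fix pattern described above (not the exact library code): update `seen` before yielding, so caller-side mutation of the component set cannot corrupt the bookkeeping. The helper name `weak_components` is illustrative only.
```python
import networkx as nx

def weak_components(G):
    seen = set()
    for v in G:
        if v not in seen:
            c = set(nx.bfs_tree(G.to_undirected(as_view=True), v))
            seen.update(c)  # record membership first ...
            yield c         # ... then the caller may mutate `c` freely
```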
2020-11-10T02:43:51
networkx/networkx
4,339
networkx__networkx-4339
[ "4338" ]
0e7abbad9f317ca92fddeff891a972bf5059d6aa
diff --git a/examples/subclass/plot_antigraph.py b/examples/subclass/plot_antigraph.py --- a/examples/subclass/plot_antigraph.py +++ b/examples/subclass/plot_antigraph.py @@ -137,7 +137,7 @@ def degree(self, nbunch=None, weight=None): for n, nbrs in nodes_nbrs ) - def adjacency_iter(self): + def adjacency(self): """Return an iterator of (node, adjacency set) tuples for all nodes in the dense graph. @@ -149,10 +149,10 @@ def adjacency_iter(self): adj_iter : iterator An iterator of (node, adjacency set) for all nodes in the graph. - """ - for n in self.adj: - yield (n, set(self.adj) - set(self.adj[n]) - {n}) + nodes = set(self.adj) + for n, nbrs in self.adj.items(): + yield (n, nodes - set(nbrs) - {n}) # Build several pairs of graphs, a regular graph
Update plot_antigraph.py example to remove `_iter` in method name. `def adjacency_iter(self)` should be `def adjacency(self)`. There may be other places (especially in the examples) where we've missed an ```_iter``` update.
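For anyone updating similar subclasses, a quick sketch of the current spelling on the base `Graph` API (no `_iter` suffix):
```python
import networkx as nx

G = nx.path_graph(3)
for n, nbrs in G.adjacency():  # formerly G.adjacency_iter()
    print(n, dict(nbrs))
```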
2020-11-11T22:03:29
networkx/networkx
4,346
networkx__networkx-4346
[ "4341" ]
c2004ff9647653baa68182b8cb3287d12c5f9343
diff --git a/networkx/classes/function.py b/networkx/classes/function.py --- a/networkx/classes/function.py +++ b/networkx/classes/function.py @@ -656,6 +656,17 @@ def set_node_attributes(G, values, name=None): >>> G.nodes[2] {} + Note that if the dictionary contains nodes that are not in `G`, the + values are silently ignored:: + + >>> G = nx.Graph() + >>> G.add_node(0) + >>> nx.set_node_attributes(G, {0: "red", 1: "blue"}, name="color") + >>> G.nodes[0]["color"] + 'red' + >>> 1 in G.nodes + False + """ # Set node attributes based on type of `values` if name is not None: # `values` must not be a dict of dict @@ -765,6 +776,14 @@ def set_edge_attributes(G, values, name=None): >>> G[1][2]["attr2"] 3 + Note that if the dict contains edges that are not in `G`, they are + silently ignored:: + + >>> G = nx.Graph([(0, 1)]) + >>> nx.set_edge_attributes(G, {(1, 2): {"weight": 2.0}}) + >>> (1, 2) in G.edges() + False + """ if name is not None: # `values` does not contain attribute names
diff --git a/networkx/classes/tests/test_function.py b/networkx/classes/tests/test_function.py --- a/networkx/classes/tests/test_function.py +++ b/networkx/classes/tests/test_function.py @@ -487,98 +487,151 @@ def test_custom2(self): self.test(G, 0, 0, [1, 2, 3]) -def test_set_node_attributes(): - graphs = [nx.Graph(), nx.DiGraph(), nx.MultiGraph(), nx.MultiDiGraph()] - for G in graphs: - # Test single value - G = nx.path_graph(3, create_using=G) - vals = 100 - attr = "hello" - nx.set_node_attributes(G, vals, attr) - assert G.nodes[0][attr] == vals - assert G.nodes[1][attr] == vals - assert G.nodes[2][attr] == vals - - # Test dictionary - G = nx.path_graph(3, create_using=G) - vals = dict(zip(sorted(G.nodes()), range(len(G)))) - attr = "hi" - nx.set_node_attributes(G, vals, attr) - assert G.nodes[0][attr] == 0 - assert G.nodes[1][attr] == 1 - assert G.nodes[2][attr] == 2 - - # Test dictionary of dictionaries - G = nx.path_graph(3, create_using=G) - d = {"hi": 0, "hello": 200} - vals = dict.fromkeys(G.nodes(), d) - vals.pop(0) - nx.set_node_attributes(G, vals) - assert G.nodes[0] == {} - assert G.nodes[1]["hi"] == 0 - assert G.nodes[2]["hello"] == 200 - - -def test_set_edge_attributes(): - graphs = [nx.Graph(), nx.DiGraph()] - for G in graphs: - # Test single value - G = nx.path_graph(3, create_using=G) - attr = "hello" - vals = 3 - nx.set_edge_attributes(G, vals, attr) - assert G[0][1][attr] == vals - assert G[1][2][attr] == vals - - # Test multiple values - G = nx.path_graph(3, create_using=G) - attr = "hi" - edges = [(0, 1), (1, 2)] - vals = dict(zip(edges, range(len(edges)))) - nx.set_edge_attributes(G, vals, attr) - assert G[0][1][attr] == 0 - assert G[1][2][attr] == 1 - - # Test dictionary of dictionaries - G = nx.path_graph(3, create_using=G) - d = {"hi": 0, "hello": 200} - edges = [(0, 1)] - vals = dict.fromkeys(edges, d) - nx.set_edge_attributes(G, vals) - assert G[0][1]["hi"] == 0 - assert G[0][1]["hello"] == 200 - assert G[1][2] == {} - - -def test_set_edge_attributes_multi(): - graphs = [nx.MultiGraph(), nx.MultiDiGraph()] - for G in graphs: - # Test single value - G = nx.path_graph(3, create_using=G) - attr = "hello" - vals = 3 - nx.set_edge_attributes(G, vals, attr) - assert G[0][1][0][attr] == vals - assert G[1][2][0][attr] == vals - - # Test multiple values - G = nx.path_graph(3, create_using=G) - attr = "hi" - edges = [(0, 1, 0), (1, 2, 0)] - vals = dict(zip(edges, range(len(edges)))) - nx.set_edge_attributes(G, vals, attr) - assert G[0][1][0][attr] == 0 - assert G[1][2][0][attr] == 1 - - # Test dictionary of dictionaries - G = nx.path_graph(3, create_using=G) - d = {"hi": 0, "hello": 200} - edges = [(0, 1, 0)] - vals = dict.fromkeys(edges, d) - nx.set_edge_attributes(G, vals) - assert G[0][1][0]["hi"] == 0 - assert G[0][1][0]["hello"] == 200 - assert G[1][2][0] == {} [email protected]( + "graph_type", (nx.Graph, nx.DiGraph, nx.MultiGraph, nx.MultiDiGraph) +) +def test_set_node_attributes(graph_type): + # Test single value + G = nx.path_graph(3, create_using=graph_type) + vals = 100 + attr = "hello" + nx.set_node_attributes(G, vals, attr) + assert G.nodes[0][attr] == vals + assert G.nodes[1][attr] == vals + assert G.nodes[2][attr] == vals + + # Test dictionary + G = nx.path_graph(3, create_using=graph_type) + vals = dict(zip(sorted(G.nodes()), range(len(G)))) + attr = "hi" + nx.set_node_attributes(G, vals, attr) + assert G.nodes[0][attr] == 0 + assert G.nodes[1][attr] == 1 + assert G.nodes[2][attr] == 2 + + # Test dictionary of dictionaries + G = nx.path_graph(3, 
create_using=graph_type) + d = {"hi": 0, "hello": 200} + vals = dict.fromkeys(G.nodes(), d) + vals.pop(0) + nx.set_node_attributes(G, vals) + assert G.nodes[0] == {} + assert G.nodes[1]["hi"] == 0 + assert G.nodes[2]["hello"] == 200 + + [email protected]( + ("values", "name"), + ( + ({0: "red", 1: "blue"}, "color"), # values dictionary + ({0: {"color": "red"}, 1: {"color": "blue"}}, None), # dict-of-dict + ), +) +def test_set_node_attributes_ignores_extra_nodes(values, name): + """ + When `values` is a dict or dict-of-dict keyed by nodes, ensure that keys + that correspond to nodes not in G are ignored. + """ + G = nx.Graph() + G.add_node(0) + nx.set_node_attributes(G, values, name) + assert G.nodes[0]["color"] == "red" + assert 1 not in G.nodes + + [email protected]("graph_type", (nx.Graph, nx.DiGraph)) +def test_set_edge_attributes(graph_type): + # Test single value + G = nx.path_graph(3, create_using=graph_type) + attr = "hello" + vals = 3 + nx.set_edge_attributes(G, vals, attr) + assert G[0][1][attr] == vals + assert G[1][2][attr] == vals + + # Test multiple values + G = nx.path_graph(3, create_using=graph_type) + attr = "hi" + edges = [(0, 1), (1, 2)] + vals = dict(zip(edges, range(len(edges)))) + nx.set_edge_attributes(G, vals, attr) + assert G[0][1][attr] == 0 + assert G[1][2][attr] == 1 + + # Test dictionary of dictionaries + G = nx.path_graph(3, create_using=graph_type) + d = {"hi": 0, "hello": 200} + edges = [(0, 1)] + vals = dict.fromkeys(edges, d) + nx.set_edge_attributes(G, vals) + assert G[0][1]["hi"] == 0 + assert G[0][1]["hello"] == 200 + assert G[1][2] == {} + + [email protected]( + ("values", "name"), + ( + ({(0, 1): 1.0, (0, 2): 2.0}, "weight"), # values dict + ({(0, 1): {"weight": 1.0}, (0, 2): {"weight": 2.0}}, None), # values dod + ), +) +def test_set_edge_attributes_ignores_extra_edges(values, name): + """If `values` is a dict or dict-of-dicts containing edges that are not in + G, data associate with these edges should be ignored. + """ + G = nx.Graph([(0, 1)]) + nx.set_edge_attributes(G, values, name) + assert G[0][1]["weight"] == 1.0 + assert (0, 2) not in G.edges + + [email protected]("graph_type", (nx.MultiGraph, nx.MultiDiGraph)) +def test_set_edge_attributes_multi(graph_type): + # Test single value + G = nx.path_graph(3, create_using=graph_type) + attr = "hello" + vals = 3 + nx.set_edge_attributes(G, vals, attr) + assert G[0][1][0][attr] == vals + assert G[1][2][0][attr] == vals + + # Test multiple values + G = nx.path_graph(3, create_using=graph_type) + attr = "hi" + edges = [(0, 1, 0), (1, 2, 0)] + vals = dict(zip(edges, range(len(edges)))) + nx.set_edge_attributes(G, vals, attr) + assert G[0][1][0][attr] == 0 + assert G[1][2][0][attr] == 1 + + # Test dictionary of dictionaries + G = nx.path_graph(3, create_using=graph_type) + d = {"hi": 0, "hello": 200} + edges = [(0, 1, 0)] + vals = dict.fromkeys(edges, d) + nx.set_edge_attributes(G, vals) + assert G[0][1][0]["hi"] == 0 + assert G[0][1][0]["hello"] == 200 + assert G[1][2][0] == {} + + [email protected]( + ("values", "name"), + ( + ({(0, 1, 0): 1.0, (0, 2, 0): 2.0}, "weight"), # values dict + ({(0, 1, 0): {"weight": 1.0}, (0, 2, 0): {"weight": 2.0}}, None), # values dod + ), +) +def test_set_edge_attributes_multi_ignores_extra_edges(values, name): + """If `values` is a dict or dict-of-dicts containing edges that are not in + G, data associate with these edges should be ignored. 
+ """ + G = nx.MultiGraph([(0, 1, 0), (0, 1, 1)]) + nx.set_edge_attributes(G, values, name) + assert G[0][1][0]["weight"] == 1.0 + assert G[0][1][1] == {} + assert (0, 2) not in G.edges() def test_get_node_attributes():
Question: set_node_attributes ignoring missing nodes I wanted to double-check on the desired behavior of `nx.set_node_attributes` when `values` is a dictionary that contains nodes that are not in `G`. Currently, `set_node_attributes` ignores any nodes that are in the `values` dict but not in `G`, e.g.
```python
>>> G = nx.Graph()
>>> G.add_node(0, data='foo')
>>> nx.set_node_attributes(G, values={0: 'bar', 1: 'baz'}, name='data')
>>> G.nodes.data()
NodeDataView({0: {'data': 'bar'}})
```
Whereas if you tried to do the equivalent operation "by hand", you would get an exception:
```python
>>> G = nx.Graph()
>>> G.add_node(0, data='foo')
>>> G.nodes[0]['data'] = 'bar'
>>> G.nodes[1]['data'] = 'baz'
Traceback (most recent call last):
...
KeyError: 1
```
Ignoring the `KeyError` in `set_node_attributes` is currently not covered by the test suite, so I just wanted to double-check that it is desired behavior before adding tests for it.
Good question... The use-case envisioned was someone who has a large dict with attributes for nodes from many graphs (for example, the graphs are the component subgraphs of an original graph). Our idea is that they could use that same dict in a `set_node_attributes` call on each graph, and if a node can't be found it is simply ignored. We certainly shouldn't add the node to the graph -- for that we have `add_nodes_from`. But should we silently ignore, or loudly raise an error? I can see arguments both ways. The current code silently ignores.
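To make that use-case concrete, here is a small hypothetical sketch: one attribute dict reused across the component subgraphs, with keys for nodes outside each subgraph silently skipped (the current behavior):
```python
import networkx as nx

G = nx.Graph([(0, 1), (2, 3)])             # two components
colors = {0: "r", 1: "g", 2: "b", 3: "k"}  # one dict covering all nodes

for comp in nx.connected_components(G):
    H = G.subgraph(comp).copy()
    nx.set_node_attributes(H, colors, name="color")  # extra keys ignored
```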
2020-11-12T22:13:36
networkx/networkx
4,360
networkx__networkx-4360
[ "3337" ]
d22ecbe94eba2d58b1eea34cbf2773c89c6e2e2b
diff --git a/networkx/drawing/nx_pylab.py b/networkx/drawing/nx_pylab.py --- a/networkx/drawing/nx_pylab.py +++ b/networkx/drawing/nx_pylab.py @@ -25,6 +25,7 @@ random_layout, planar_layout, ) +import warnings __all__ = [ "draw", @@ -480,13 +481,13 @@ def draw_networkx_edges( edge_color="k", style="solid", alpha=None, - arrowstyle="-|>", + arrowstyle=None, arrowsize=10, edge_cmap=None, edge_vmin=None, edge_vmax=None, ax=None, - arrows=True, + arrows=None, label=None, node_size=300, nodelist=None, @@ -536,11 +537,15 @@ def draw_networkx_edges( Draw the graph in the specified Matplotlib axes. arrows : bool, optional (default=True) - For directed graphs, if True draw arrowheads. + For directed graphs, if True draw arrowheads by default. Ignored + if *arrowstyle* is passed. + Note: Arrows will be the same color as edges. - arrowstyle : str, optional (default='-|>') - For directed graphs, choose the style of the arrow heads. + arrowstyle : str, optional (default=None) + For directed graphs and *arrows==True* defaults to ``'-|>'`` otherwise + defaults to ``'-'``. + See :py:class: `matplotlib.patches.ArrowStyle` for more options. @@ -566,19 +571,17 @@ def draw_networkx_edges( Returns ------- - matplotlib.collection.LineCollection - `LineCollection` of the edges - list of matplotlib.patches.FancyArrowPatch `FancyArrowPatch` instances of the directed edges - Depending whether the drawing includes arrows or not. - Notes ----- For directed graphs, arrows are drawn at the head end. Arrows can be - turned off with keyword arrows=False. Be sure to include `node_size` as a - keyword argument; arrows are drawn considering the size of nodes. + turned off with keyword arrows=False or by passing an arrowstyle without + an arrow on the end. + + Be sure to include `node_size` as a keyword argument; arrows are + drawn considering the size of nodes. Examples -------- @@ -602,13 +605,27 @@ def draw_networkx_edges( draw_networkx_nodes() draw_networkx_labels() draw_networkx_edge_labels() + """ import matplotlib.pyplot as plt from matplotlib.colors import colorConverter, Colormap, Normalize - from matplotlib.collections import LineCollection - from matplotlib.patches import FancyArrowPatch + from matplotlib.patches import FancyArrowPatch, ConnectionStyle + from matplotlib.path import Path import numpy as np + if arrowstyle is not None and arrows is not None: + warnings.warn( + f"You passed both arrowstyle={arrowstyle} and " + f"arrows={arrows}. Because you set a non-default " + "*arrowstyle*, arrows will be ignored." + ) + + if arrowstyle is None: + if G.is_directed() and arrows: + arrowstyle = "-|>" + else: + arrowstyle = "-" + if ax is None: ax = plt.gca() @@ -616,10 +633,7 @@ def draw_networkx_edges( edgelist = list(G.edges()) if len(edgelist) == 0: # no edges! - if not G.is_directed() or not arrows: - return LineCollection(None) - else: - return [] + return [] if nodelist is None: nodelist = list(G.nodes()) @@ -649,108 +663,118 @@ def draw_networkx_edges( color_normal = Normalize(vmin=edge_vmin, vmax=edge_vmax) edge_color = [edge_cmap(color_normal(e)) for e in edge_color] - if not G.is_directed() or not arrows: - edge_collection = LineCollection( - edge_pos, - colors=edge_color, - linewidths=width, - antialiaseds=(1,), - linestyle=style, - transOffset=ax.transData, - alpha=alpha, - ) - - edge_collection.set_cmap(edge_cmap) - edge_collection.set_clim(edge_vmin, edge_vmax) + # Note: Waiting for someone to implement arrow to intersection with + # marker. 
Meanwhile, this works well for polygons with more than 4 + # sides and circle. - edge_collection.set_zorder(1) # edges go behind nodes - edge_collection.set_label(label) - ax.add_collection(edge_collection) - - return edge_collection - - arrow_collection = None + def to_marker_edge(marker_size, marker): + if marker in "s^>v<d": # `large` markers need extra space + return np.sqrt(2 * marker_size) / 2 + else: + return np.sqrt(marker_size) / 2 - if G.is_directed() and arrows: - # Note: Waiting for someone to implement arrow to intersection with - # marker. Meanwhile, this works well for polygons with more than 4 - # sides and circle. + # Draw arrows with `matplotlib.patches.FancyarrowPatch` + arrow_collection = [] + mutation_scale = arrowsize # scale factor of arrow head - def to_marker_edge(marker_size, marker): - if marker in "s^>v<d": # `large` markers need extra space - return np.sqrt(2 * marker_size) / 2 - else: - return np.sqrt(marker_size) / 2 - - # Draw arrows with `matplotlib.patches.FancyarrowPatch` - arrow_collection = [] - mutation_scale = arrowsize # scale factor of arrow head - - # FancyArrowPatch doesn't handle color strings - arrow_colors = colorConverter.to_rgba_array(edge_color, alpha) - for i, (src, dst) in enumerate(edge_pos): - x1, y1 = src - x2, y2 = dst - shrink_source = 0 # space from source to tail - shrink_target = 0 # space from head to target - if np.iterable(node_size): # many node sizes - source, target = edgelist[i][:2] - source_node_size = node_size[nodelist.index(source)] - target_node_size = node_size[nodelist.index(target)] - shrink_source = to_marker_edge(source_node_size, node_shape) - shrink_target = to_marker_edge(target_node_size, node_shape) - else: - shrink_source = shrink_target = to_marker_edge(node_size, node_shape) - - if shrink_source < min_source_margin: - shrink_source = min_source_margin - - if shrink_target < min_target_margin: - shrink_target = min_target_margin - - if len(arrow_colors) == len(edge_pos): - arrow_color = arrow_colors[i] - elif len(arrow_colors) == 1: - arrow_color = arrow_colors[0] - else: # Cycle through colors - arrow_color = arrow_colors[i % len(arrow_colors)] - - if np.iterable(width): - if len(width) == len(edge_pos): - line_width = width[i] - else: - line_width = width[i % len(width)] - else: - line_width = width - - arrow = FancyArrowPatch( - (x1, y1), - (x2, y2), - arrowstyle=arrowstyle, - shrinkA=shrink_source, - shrinkB=shrink_target, - mutation_scale=mutation_scale, - color=arrow_color, - linewidth=line_width, - connectionstyle=connectionstyle, - linestyle=style, - zorder=1, - ) # arrows go behind nodes - - # There seems to be a bug in matplotlib to make collections of - # FancyArrowPatch instances. Until fixed, the patches are added - # individually to the axes instance. 
- arrow_collection.append(arrow) - ax.add_patch(arrow) - - # update view + # compute view minx = np.amin(np.ravel(edge_pos[:, :, 0])) maxx = np.amax(np.ravel(edge_pos[:, :, 0])) miny = np.amin(np.ravel(edge_pos[:, :, 1])) maxy = np.amax(np.ravel(edge_pos[:, :, 1])) - w = maxx - minx h = maxy - miny + + if connectionstyle is not None: + base_connection_style = ConnectionStyle(connectionstyle) + + def _connectionstyle(posA, posB, *args, **kwargs): + # check if we need to do a self-loop + if np.all(posA == posB): + # this is called with _screen space_ values so covert back + # to data space + data_loc = ax.transData.inverted().transform(posA) + v_shift = 0.1 * h + h_shift = v_shift * 0.5 + # put the top of the loop first so arrow is not hidden by node + path = [ + # 1 + data_loc + np.asarray([0, v_shift]), + # 4 4 4 + data_loc + np.asarray([h_shift, v_shift]), + data_loc + np.asarray([h_shift, 0]), + data_loc, + # 4 4 4 + data_loc + np.asarray([-h_shift, 0]), + data_loc + np.asarray([-h_shift, v_shift]), + data_loc + np.asarray([0, v_shift]), + ] + + ret = Path(ax.transData.transform(path), [1, 4, 4, 4, 4, 4, 4]) + # if not, fall back to the user specified behavior + else: + ret = base_connection_style(posA, posB, *args, **kwargs) + + return ret + + else: + _connectionstyle = connectionstyle + + # FancyArrowPatch doesn't handle color strings + arrow_colors = colorConverter.to_rgba_array(edge_color, alpha) + for i, (src, dst) in enumerate(edge_pos): + x1, y1 = src + x2, y2 = dst + shrink_source = 0 # space from source to tail + shrink_target = 0 # space from head to target + if np.iterable(node_size): # many node sizes + source, target = edgelist[i][:2] + source_node_size = node_size[nodelist.index(source)] + target_node_size = node_size[nodelist.index(target)] + shrink_source = to_marker_edge(source_node_size, node_shape) + shrink_target = to_marker_edge(target_node_size, node_shape) + else: + shrink_source = shrink_target = to_marker_edge(node_size, node_shape) + + if shrink_source < min_source_margin: + shrink_source = min_source_margin + + if shrink_target < min_target_margin: + shrink_target = min_target_margin + + if len(arrow_colors) == len(edge_pos): + arrow_color = arrow_colors[i] + elif len(arrow_colors) == 1: + arrow_color = arrow_colors[0] + else: # Cycle through colors + arrow_color = arrow_colors[i % len(arrow_colors)] + + if np.iterable(width): + if len(width) == len(edge_pos): + line_width = width[i] + else: + line_width = width[i % len(width)] + else: + line_width = width + + arrow = FancyArrowPatch( + (x1, y1), + (x2, y2), + arrowstyle=arrowstyle, + shrinkA=shrink_source, + shrinkB=shrink_target, + mutation_scale=mutation_scale, + color=arrow_color, + linewidth=line_width, + connectionstyle=_connectionstyle, + linestyle=style, + zorder=1, + ) # arrows go behind nodes + + arrow_collection.append(arrow) + ax.add_patch(arrow) + + # update view padx, pady = 0.05 * w, 0.05 * h corners = (minx - padx, miny - pady), (maxx + padx, maxy + pady) ax.update_datalim(corners)
diff --git a/networkx/drawing/tests/test_pylab.py b/networkx/drawing/tests/test_pylab.py --- a/networkx/drawing/tests/test_pylab.py +++ b/networkx/drawing/tests/test_pylab.py @@ -205,7 +205,7 @@ def test_empty_graph(self): def test_draw_empty_nodes_return_values(self): # See Issue #3833 - from matplotlib.collections import PathCollection, LineCollection + from matplotlib.collections import PathCollection G = nx.Graph([(1, 2), (2, 3)]) DG = nx.DiGraph([(1, 2), (2, 3)]) @@ -213,16 +213,11 @@ def test_draw_empty_nodes_return_values(self): assert isinstance(nx.draw_networkx_nodes(G, pos, nodelist=[]), PathCollection) assert isinstance(nx.draw_networkx_nodes(DG, pos, nodelist=[]), PathCollection) - # drawing empty edges either return an empty LineCollection or empty list. - assert isinstance( - nx.draw_networkx_edges(G, pos, edgelist=[], arrows=True), LineCollection - ) - assert isinstance( - nx.draw_networkx_edges(G, pos, edgelist=[], arrows=False), LineCollection - ) - assert isinstance( - nx.draw_networkx_edges(DG, pos, edgelist=[], arrows=False), LineCollection - ) + # drawing empty edges used to return an empty LineCollection or empty list. + # Now it is always an empty list (because edges are now lists of FancyArrows) + assert nx.draw_networkx_edges(G, pos, edgelist=[], arrows=True) == [] + assert nx.draw_networkx_edges(G, pos, edgelist=[], arrows=False) == [] + assert nx.draw_networkx_edges(DG, pos, edgelist=[], arrows=False) == [] assert nx.draw_networkx_edges(DG, pos, edgelist=[], arrows=True) == [] def test_multigraph_edgelist_tuples(self):
Error Attempting to Iterate Over LineCollection This is less a bug with the code base and more a bug in the examples, but as a result a number of pieces of example code no longer work. I have cobbled together code that uses the **Directed Graph** example code ([https://networkx.github.io/documentation/stable/auto_examples/drawing/plot_directed.html](https://networkx.github.io/documentation/stable/auto_examples/drawing/plot_directed.html)) to implement color maps, line alphas, and a colorbar. Apparently matplotlib.collections.LineCollection objects are no longer iterable, and thus the code
```
for i, arc in enumerate(edges):
    arc.set_alpha(edge_alphas[i])
```
does not work (nor does using something like `for e in edges:` or `for i in range(len(edges)):`). The problem also arises in trying to create a colorbar legend by passing the `edges` object to `mpl.collections.PatchCollection()`, as this too raises the error that LineCollection objects are not iterable.

Versions: Python - 3.6, NetworkX - 2.2, Matplotlib - 3.0.2

Here is the full code (assuming you have a graph `G` already):
```
import matplotlib as mpl
import matplotlib.pyplot as plt
import networkx as nx

pos = nx.circular_layout(G)
M = G.number_of_edges()
node_sizes = [3 + 10 * i for i in range(len(G))]
edge_colors = range(2, M + 2)
edge_alphas = [(5 + i) / (M + 4) for i in range(M)]

nodes = nx.draw_networkx_nodes(G, pos, node_color='red')
edges = nx.draw_networkx_edges(G, pos, edge_color=edge_colors,
                               edge_cmap=plt.cm.coolwarm, width=2)
nx.draw_networkx_labels(G, pos, node_labels, font_size=16)  # node_labels assumed defined

# set alpha value for each edge
for i, arc in enumerate(edges):
    arc.set_alpha(edge_alphas[i])

pc = mpl.collections.PatchCollection(edges, cmap=plt.cm.Blues)
pc.set_array(edge_colors)
plt.colorbar(pc)

plt.axis('off')
plt.show()
```
The example code you point to doesn't iterate over the edges like that. Both the stable and latest versions of the documentation use the following idiom to set the alpha values:

    # set alpha value for each edge
    for i in range(M):
        edges[i].set_alpha(edge_alphas[i])

Try the code at [this documentation link](https://networkx.org/documentation/stable/auto_examples/drawing/plot_directed.html)

Thanks for the reply. I have tried this way just now and it does not work either.

This might be an issue with a loss of functionality in a new version of **matplotlib**. What version of **matplotlib** are you using?

I am on 3.0.2, which apparently does not support indexing on LineCollection objects.

Hmmm... I think 3.0.2 does allow indexing on LineCollection. I have used it with 3.0.3 so I know it works there, and it worked with 2.x. This is an example session. The figure shows up fine -- no errors.

    Python 3.6.8 |Anaconda, Inc.| (default, Dec 29 2018, 19:04:46)
    Type 'copyright', 'credits' or 'license' for more information
    IPython 6.4.0 -- An enhanced Interactive Python. Type '?' for help.

    In [1]: import matplotlib as mpl

    In [2]: mpl.__version__
    Out[2]: '3.0.3'

    In [3]: from __future__ import division
       ...: import matplotlib as mpl
       ...: import matplotlib.pyplot as plt
       ...: import networkx as nx
       ...:
       ...: G = nx.generators.directed.random_k_out_graph(10, 3, 0.5)
       ...: pos = nx.layout.spring_layout(G)
       ...:
       ...: node_sizes = [3 + 10 * i for i in range(len(G))]
       ...: M = G.number_of_edges()
       ...: edge_colors = range(2, M + 2)
       ...: edge_alphas = [(5 + i) / (M + 4) for i in range(M)]
       ...:
       ...: nodes = nx.draw_networkx_nodes(G, pos, node_size=node_sizes, node_color='blue')
       ...: edges = nx.draw_networkx_edges(G, pos, node_size=node_sizes, arrowstyle='->',
       ...:                                arrowsize=10, edge_color=edge_colors,
       ...:                                edge_cmap=plt.cm.Blues, width=2)
       ...: # set alpha value for each edge
       ...: for i in range(M):
       ...:     edges[i].set_alpha(edge_alphas[i])
       ...:
       ...: pc = mpl.collections.PatchCollection(edges, cmap=plt.cm.Blues)
       ...: pc.set_array(edge_colors)
       ...: plt.colorbar(pc)
       ...:
       ...: ax = plt.gca()
       ...: ax.set_axis_off()
       ...: plt.show()
       ...:

Hello everyone :) In the [matplotlib documentation about Collections API](https://matplotlib.org/3.1.1/api/collections_api.html) they state (somewhat unclearly) that the Collections act as a whole. I've talked with the developers of matplotlib and they explained to me that it's a design choice not to let users iterate this monolithic structure. So it's not a bug. As to the alpha channels of the edges, there's a trick. Not very elegant, but at least it relies fully on the public API, without any bad workarounds. It works by setting the alpha values in the RGBA notation.
```python
edges = nx.draw_networkx_edges(G, ... )
edges.set_alpha(None) # needed up to Networkx 3.6
edges.set_color( RGBAList )
```
Where `RGBAList` is the list of colors (with alpha channel) we want to assign to the edges. Unfortunately, every other approach seems to make use of the internals of matplotlib's `LineCollection` and is therefore not reliable. I hope it can help others in the future :)

I don't think that Matplotlib's Collection objects have ever been iterable. Internally we do a fair amount of implicit broadcasting / zipping / colormapping that would be very difficult to keep in sync if we also let users reach in and touch individual elements (the number of notional elements is not ever really fixed if you set the offsets).
It looks like in nx 2.1 (via https://github.com/networkx/networkx/pull/2760) nx changed to sometimes returning a `LineCollection` and sometimes returning a list of `FancyArrowPatch` objects, so in some cases you can iterate over the object returned by `draw_networkx_edges` and in other cases you cannot. Not sure what the best path out of this is though.
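Pulling the threads together, here is a small self-contained sketch of both workarounds on a 2.x release (before any unification of the return types): the RGBA trick for the undirected `LineCollection` case, and plain iteration for the directed `FancyArrowPatch`-list case.
```python
import matplotlib.pyplot as plt
import networkx as nx

G = nx.path_graph(4)
pos = nx.spring_layout(G, seed=0)
alphas = [0.3, 0.6, 1.0]  # one alpha per edge

# Undirected graphs: a LineCollection, styled as a whole via RGBA colors.
lc = nx.draw_networkx_edges(G, pos)
lc.set_alpha(None)  # clear any scalar alpha so the per-color alpha applies
lc.set_color([(0.0, 0.0, 1.0, a) for a in alphas])

# Directed graphs: a list of FancyArrowPatch objects, iterable as usual.
D = nx.path_graph(4, create_using=nx.DiGraph)
for patch, a in zip(nx.draw_networkx_edges(D, pos, arrowstyle="->"), alphas):
    patch.set_alpha(a)

plt.show()
```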
2020-11-15T23:04:48
networkx/networkx
4,365
networkx__networkx-4365
[ "4336" ]
8563c3313223a53c548530f39c8cfb6e433539d3
diff --git a/networkx/utils/misc.py b/networkx/utils/misc.py --- a/networkx/utils/misc.py +++ b/networkx/utils/misc.py @@ -333,19 +333,18 @@ def create_random_state(random_state=None): class PythonRandomInterface: - try: - - def __init__(self, rng=None): + def __init__(self, rng=None): + try: import numpy + except ImportError: + msg = "numpy not found, only random.random available." + warnings.warn(msg, ImportWarning) - if rng is None: - self._rng = numpy.random.mtrand._rand + if rng is None: + self._rng = numpy.random.mtrand._rand + else: self._rng = rng - except ImportError: - msg = "numpy not found, only random.random available." - warnings.warn(msg, ImportWarning) - def random(self): return self._rng.random_sample()
PythonRandomInterface seed initialized to None https://github.com/networkx/networkx/blob/087ed3b9ff9d92cf97a1d5a5cb3afd1c8b78298d/networkx/utils/misc.py#L341 When `None` is passed as seed, the code flow initializes to a `RandomState()` then sets it back to `None`. Maybe an else is missing?
That's clearly a bug. I agree that it is likely a missing else statement. But there is more of a problem here. The try/except surrounds the entire `__init__` function definition, so it is "try"ing to define the function -- not run it. So the `import numpy` inside `__init__` is effectively unguarded: an ImportError raised when the method actually runs is never caught. The try should be inside `__init__`, around only the `import numpy` statement. AND there needs to be an else clause... Thanks very much!!
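Here is a tiny self-contained demonstration of that pitfall; `not_a_real_module` is a deliberately nonexistent, hypothetical module name. The try/except guards only the *definition* of `__init__`, so the ImportError raised at call time escapes uncaught:
```python
class Demo:
    try:
        def __init__(self):
            import not_a_real_module  # fails at call time, not at def time
    except ImportError:
        pass  # never triggered: defining __init__ raises nothing

Demo()  # -> ModuleNotFoundError, despite the try/except above
```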
2020-11-17T21:01:21
networkx/networkx
4,378
networkx__networkx-4378
[ "4374" ]
5b89d27fff628b7c24755c456229bb8100aec36d
diff --git a/networkx/drawing/nx_pylab.py b/networkx/drawing/nx_pylab.py --- a/networkx/drawing/nx_pylab.py +++ b/networkx/drawing/nx_pylab.py @@ -556,6 +556,19 @@ def draw_networkx_edges( See `matplotlib.patches.ConnectionStyle` and `matplotlib.patches.FancyArrowPatch` for more info. + node_size : scalar or array, optional (default=300) + Size of nodes. Though the nodes are not drawn with this function, the + node size is used in determining edge positioning. + + nodelist : list, optional (default=G.nodes()) + Only draw edges that are in `edgelist` and that lie between nodes in + `nodelist`. Any edges in `edgelist` incident on nodes that are *not* in + `nodelist` will not be drawn. + + node_shape : string, optional (default='o') + The marker used for nodes, used in determining edge positioning. + Specification is as a `matplotlib.markers` marker, e.g. one of 'so^>v<dph8'. + label : [None| string] Label for legend @@ -628,11 +641,15 @@ def draw_networkx_edges( if edgelist is None: edgelist = list(G.edges()) - if len(edgelist) == 0: # no edges! - return [] - if nodelist is None: nodelist = list(G.nodes()) + else: + # Remove any edges where both endpoints are not in node list + nodeset = set(nodelist) + edgelist = [(u, v) for u, v in edgelist if (u in nodeset) and (v in nodeset)] + + if len(edgelist) == 0: # no edges! + return [] # FancyArrowPatch handles color=None different from LineCollection if edge_color is None:
diff --git a/networkx/drawing/tests/test_pylab.py b/networkx/drawing/tests/test_pylab.py --- a/networkx/drawing/tests/test_pylab.py +++ b/networkx/drawing/tests/test_pylab.py @@ -312,6 +312,8 @@ def test_draw_edges_min_source_target_margins(node_shape): assert padded_extent[0] > default_extent[0] # And the rightmost extent of the edge, further to the left assert padded_extent[1] < default_extent[1] + # NOTE: Prevent axes objects from impacting other tests via plt.gca + plt.delaxes(ax) def test_apply_alpha(): @@ -323,3 +325,21 @@ def test_apply_alpha(): alpha = 0.5 rgba_colors = nx.drawing.nx_pylab.apply_alpha(colorlist, alpha, nodelist) assert all(rgba_colors[:, -1] == alpha) + + [email protected]( + ("nodelist", "expected_num_edges"), + ( + ([], 0), + ([1], 0), + ([1, 2], 1), + ([0, 1, 2, 3], 6), + ), +) +def test_draw_edges_with_nodelist(nodelist, expected_num_edges): + """Test that edges that contain a node in `nodelist` are not drawn by + draw_networkx_edges. See gh-4374. + """ + G = nx.complete_graph(5) + edge_patches = nx.draw_networkx_edges(G, nx.circular_layout(G), nodelist=nodelist) + assert len(edge_patches) == expected_num_edges
Question: behavior of `node_list` kwarg in draw_networkx_edges Like the other pylab drawing functions, `nx_pylab.draw_networkx_edges` currently has a `nodelist` keyword argument. It is not included in the `Parameters` listing of the docstring and so its behavior is not well-defined. Naively, I would expect that any edges incident on a node *not* in the node list would not be drawn. For example, I would expect the following: ```python >>> G = nx.path_graph(3) >>> pos = {n: (n, n) for n in range(len(G))} >>> nx.draw_networkx_nodes(G, pos, nodelist=[0, 1]) >>> nx.draw_networkx_edges(G, pos, nodelist=[0, 1]) ``` to produce the following, without the edge (1, 2) since 2 was not included in the nodelist: ![expected](https://user-images.githubusercontent.com/1268991/99858963-8f2fa680-2b43-11eb-8292-bad080df0456.png) Instead, the above code results in the following image: ![reality](https://user-images.githubusercontent.com/1268991/99859092-e3d32180-2b43-11eb-9d6c-c701c4766dd7.png) Is this expected? Right now, the `nodelist` is only used internally in `draw_networkx_edges` to determine the size of the nodes. Either way, the parameter needs to be added to the docstring and the behavior tested - I just wanted to raise the question about what the desired behavior was in order to do so!
@rossbar I agree with you. Edges should only be drawn if both endpoints are in the nodelist.
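For reference, a sketch of the agreed behavior expressed as the filtering a caller can do today (and essentially what the internal fix does):
```python
import matplotlib.pyplot as plt
import networkx as nx

G = nx.path_graph(3)
pos = {n: (n, n) for n in G}
nodelist = [0, 1]

# Keep only edges whose endpoints are both in the nodelist.
nodeset = set(nodelist)
edgelist = [(u, v) for u, v in G.edges() if u in nodeset and v in nodeset]

nx.draw_networkx_nodes(G, pos, nodelist=nodelist)
nx.draw_networkx_edges(G, pos, edgelist=edgelist)  # edge (1, 2) dropped
plt.show()
```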
2020-11-21T22:07:33
networkx/networkx
4,384
networkx__networkx-4384
[ "4106" ]
28d3b930c6d6e5203afdb64e0bf4356b0026463c
diff --git a/networkx/convert_matrix.py b/networkx/convert_matrix.py --- a/networkx/convert_matrix.py +++ b/networkx/convert_matrix.py @@ -209,7 +209,13 @@ def from_pandas_adjacency(df, create_using=None): def to_pandas_edgelist( - G, source="source", target="target", nodelist=None, dtype=None, order=None + G, + source="source", + target="target", + nodelist=None, + dtype=None, + order=None, + edge_key=None, ): """Returns the graph edge list as a Pandas DataFrame. @@ -229,6 +235,20 @@ def to_pandas_edgelist( nodelist : list, optional Use only nodes specified in nodelist + dtype : dtype, default None + Use to create the DataFrame. Data type to force. + Only a single dtype is allowed. If None, infer. + + order : None + An unused parameter mistakenly included in the function. + + .. deprecated:: 2.6 + This is deprecated and will be removed in NetworkX v3.0. + + edge_key : str or int or None, optional (default=None) + A valid column name (string or integer) for the edge keys (for the + multigraph case). If None, edge keys are not stored in the DataFrame. + Returns ------- df : Pandas DataFrame @@ -248,6 +268,13 @@ def to_pandas_edgelist( 0 A B 1 7 1 C E 9 10 + >>> G = nx.MultiGraph([('A', 'B', {'cost': 1}), ('A', 'B', {'cost': 9})]) + >>> df = nx.to_pandas_edgelist(G, nodelist=['A', 'C'], edge_key='ekey') + >>> df[['source', 'target', 'cost', 'ekey']] + source target cost ekey + 0 A B 1 0 + 1 A B 9 1 + """ import pandas as pd @@ -255,21 +282,28 @@ def to_pandas_edgelist( edgelist = G.edges(data=True) else: edgelist = G.edges(nodelist, data=True) - source_nodes = [s for s, t, d in edgelist] - target_nodes = [t for s, t, d in edgelist] + source_nodes = [s for s, _, _ in edgelist] + target_nodes = [t for _, t, _ in edgelist] - all_keys = set().union(*(d.keys() for s, t, d in edgelist)) - if source in all_keys: + all_attrs = set().union(*(d.keys() for _, _, d in edgelist)) + if source in all_attrs: raise nx.NetworkXError(f"Source name '{source}' is an edge attr name") - if target in all_keys: + if target in all_attrs: raise nx.NetworkXError(f"Target name '{target}' is an edge attr name") nan = float("nan") - edge_attr = {k: [d.get(k, nan) for s, t, d in edgelist] for k in all_keys} + edge_attr = {k: [d.get(k, nan) for _, _, d in edgelist] for k in all_attrs} + + if G.is_multigraph() and edge_key is not None: + if edge_key in all_attrs: + raise nx.NetworkXError(f"Edge key name '{edge_key}' is an edge attr name") + edge_keys = [k for _, _, k in G.edges(keys=True)] + edgelistdict = {source: source_nodes, target: target_nodes, edge_key: edge_keys} + else: + edgelistdict = {source: source_nodes, target: target_nodes} - edgelistdict = {source: source_nodes, target: target_nodes} edgelistdict.update(edge_attr) - return pd.DataFrame(edgelistdict) + return pd.DataFrame(edgelistdict, dtype=dtype) def from_pandas_edgelist(
diff --git a/networkx/tests/test_convert_pandas.py b/networkx/tests/test_convert_pandas.py --- a/networkx/tests/test_convert_pandas.py +++ b/networkx/tests/test_convert_pandas.py @@ -200,6 +200,14 @@ def test_to_edgelist_custom_source_or_target_col_exists(self): nx.NetworkXError, nx.to_pandas_edgelist, G, target="target_col_name" ) + def test_to_edgelist_edge_key_col_exists(self): + G = nx.path_graph(10, create_using=nx.MultiGraph) + G.add_weighted_edges_from((u, v, u) for u, v in list(G.edges())) + nx.set_edge_attributes(G, 0, name="edge_key_name") + pytest.raises( + nx.NetworkXError, nx.to_pandas_edgelist, G, edge_key="edge_key_name" + ) + def test_from_adjacency(self): nodelist = [1, 2] dftrue = pd.DataFrame( @@ -209,17 +217,18 @@ def test_from_adjacency(self): df = nx.to_pandas_adjacency(G, dtype=int) pd.testing.assert_frame_equal(df, dftrue) - def test_roundtrip(self): + @pytest.mark.parametrize("graph", [nx.Graph, nx.MultiGraph]) + def test_roundtrip(self, graph): # edgelist - Gtrue = nx.Graph([(1, 1), (1, 2)]) + Gtrue = graph([(1, 1), (1, 2)]) df = nx.to_pandas_edgelist(Gtrue) - G = nx.from_pandas_edgelist(df) + G = nx.from_pandas_edgelist(df, create_using=graph) assert_graphs_equal(Gtrue, G) # adjacency adj = {1: {1: {"weight": 1}, 2: {"weight": 1}}, 2: {1: {"weight": 1}}} - Gtrue = nx.Graph(adj) + Gtrue = graph(adj) df = nx.to_pandas_adjacency(Gtrue, dtype=int) - G = nx.from_pandas_adjacency(df) + G = nx.from_pandas_adjacency(df, create_using=graph) assert_graphs_equal(Gtrue, G) def test_from_adjacency_named(self): @@ -238,18 +247,19 @@ def test_from_adjacency_named(self): def test_edgekey_with_multigraph(self): df = pd.DataFrame( { - "attr1": {"A": "F1", "B": "F2", "C": "F3"}, - "attr2": {"A": 1, "B": 0, "C": 0}, - "attr3": {"A": 0, "B": 1, "C": 0}, - "source": {"A": "N1", "B": "N2", "C": "N1"}, - "target": {"A": "N2", "B": "N3", "C": "N1"}, + "source": {"A": "N1", "B": "N2", "C": "N1", "D": "N1"}, + "target": {"A": "N2", "B": "N3", "C": "N1", "D": "N2"}, + "attr1": {"A": "F1", "B": "F2", "C": "F3", "D": "F4"}, + "attr2": {"A": 1, "B": 0, "C": 0, "D": 0}, + "attr3": {"A": 0, "B": 1, "C": 0, "D": 1}, } ) - Gtrue = nx.Graph( + Gtrue = nx.MultiGraph( [ - ("N1", "N2", {"F1": {"attr2": 1, "attr3": 0}}), - ("N2", "N3", {"F2": {"attr2": 0, "attr3": 1}}), - ("N1", "N1", {"F3": {"attr2": 0, "attr3": 0}}), + ("N1", "N2", "F1", {"attr2": 1, "attr3": 0}), + ("N2", "N3", "F2", {"attr2": 0, "attr3": 1}), + ("N1", "N1", "F3", {"attr2": 0, "attr3": 0}), + ("N1", "N2", "F4", {"attr2": 0, "attr3": 1}), ] ) # example from issue #4065 @@ -263,6 +273,13 @@ def test_edgekey_with_multigraph(self): ) assert_graphs_equal(G, Gtrue) + df_roundtrip = nx.to_pandas_edgelist(G, edge_key="attr1") + df_roundtrip = df_roundtrip.sort_values("attr1") + df_roundtrip.index = ["A", "B", "C", "D"] + pd.testing.assert_frame_equal( + df, df_roundtrip[["source", "target", "attr1", "attr2", "attr3"]] + ) + def test_edgekey_with_normal_graph_no_action(self): Gtrue = nx.Graph( [ diff --git a/networkx/tests/test_convert_scipy.py b/networkx/tests/test_convert_scipy.py --- a/networkx/tests/test_convert_scipy.py +++ b/networkx/tests/test_convert_scipy.py @@ -2,7 +2,7 @@ np = pytest.importorskip("numpy") sp = pytest.importorskip("scipy") -sparse = sp.sparse +sp_sparse = sp.sparse npt = np.testing import networkx as nx @@ -213,7 +213,7 @@ def test_from_scipy_sparse_matrix_parallel_edges(self): creating a multigraph. 
""" - A = sparse.csr_matrix([[1, 1], [1, 2]]) + A = sp_sparse.csr_matrix([[1, 1], [1, 2]]) # First, with a simple graph, each integer entry in the adjacency # matrix is interpreted as the weight of a single edge in the graph. expected = nx.DiGraph() @@ -253,7 +253,7 @@ def test_symmetric(self): :func:`networkx.from_scipy_sparse_matrix`. """ - A = sparse.csr_matrix([[0, 1], [1, 0]]) + A = sp_sparse.csr_matrix([[0, 1], [1, 0]]) G = nx.from_scipy_sparse_matrix(A, create_using=nx.MultiGraph) expected = nx.MultiGraph() expected.add_edge(0, 1, weight=1) @@ -275,5 +275,5 @@ def test_from_scipy_sparse_matrix_formats(sparse_format): (2, 1, {"weight": 1}), ] ) - A = sparse.coo_matrix([[0, 3, 2], [3, 0, 1], [2, 1, 0]]).asformat(sparse_format) + A = sp_sparse.coo_matrix([[0, 3, 2], [3, 0, 1], [2, 1, 0]]).asformat(sparse_format) assert_graphs_equal(expected, nx.from_scipy_sparse_matrix(A))
documentation for "edge_key" param in nx.from_pandas_edgelist() missing I just noticed this while working on implementing support for geopandas (I want it to behave as similarly to the pandas functions as possible). I guess it's for MultiGraphs, which I can't really wrap my head around.
Thanks for reporting this! That's the result of #4076, which was recently merged after apparently not enough review. :) It needs to be fixed.
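As a concrete illustration of what the missing documentation should cover, here is a rough sketch of the multigraph roundtrip that `edge_key` enables (the `ekey` column name is just an example; note that `from_pandas_edgelist` only grew its `edge_key` parameter in #4076, and `to_pandas_edgelist` in the patch above):
```python
import networkx as nx

G = nx.MultiGraph([("A", "B", {"cost": 1}), ("A", "B", {"cost": 9})])
df = nx.to_pandas_edgelist(G, edge_key="ekey")  # multi-edge keys kept in 'ekey'
H = nx.from_pandas_edgelist(
    df, edge_attr="cost", edge_key="ekey", create_using=nx.MultiGraph
)
```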
2020-11-24T05:08:13
networkx/networkx
4,430
networkx__networkx-4430
[ "4406" ]
06d49256914be9d4a02ee5d82848a609d96aa1b5
diff --git a/examples/drawing/plot_selfloops.py b/examples/drawing/plot_selfloops.py new file mode 100644 --- /dev/null +++ b/examples/drawing/plot_selfloops.py @@ -0,0 +1,29 @@ +""" +========== +Self-loops +========== + +A self-loop is an edge that originates from and terminates the same node. +This example shows how to draw self-loops with `nx_pylab`. + +""" +import networkx as nx +import matplotlib.pyplot as plt + +# Create a graph and add a self-loop to node 0 +G = nx.complete_graph(3, create_using=nx.DiGraph) +G.add_edge(0, 0) +pos = nx.circular_layout(G) + +# As of version 2.6, self-loops are drawn by default with the same styling as +# other edges +nx.draw(G, pos, with_labels=True) + +# Add self-loops to the remaining nodes +edgelist = [(1, 1), (2, 2)] +G.add_edges_from(edgelist) + +# Draw the newly added self-loops with different formatting +nx.draw_networkx_edges(G, pos, edgelist=edgelist, arrowstyle="<|-", style="dashed") + +plt.show()
How to draw an edge from node A to node A (a self-loop) with networkx?
An edge from node A to node A is called a self-loop. We recently had a nice addition, #4370, to NetworkX that draws these loops, but it has not yet been released. You can install the development version of NetworkX from GitHub, or wait a month and the next release will have that feature.
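Once you are on 2.6 (or the development version), the sketch below is all it takes; the node name `"A"` is just an example:
```python
import matplotlib.pyplot as plt
import networkx as nx

G = nx.DiGraph([("A", "A")])  # a single self-loop on node A
nx.draw(G, nx.circular_layout(G), with_labels=True)
plt.show()
```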
2020-12-07T05:20:15
networkx/networkx
4,431
networkx__networkx-4431
[ "4397" ]
db9790038bbf5247ce3f629fa4dd42f356d1ad17
diff --git a/networkx/classes/coreviews.py b/networkx/classes/coreviews.py --- a/networkx/classes/coreviews.py +++ b/networkx/classes/coreviews.py @@ -1,4 +1,6 @@ -""" +"""Views of core data structures such as nested Mappings (e.g. dict-of-dicts). +These ``Views`` often restrict element access, with either the entire view or +layers of nested mappings being read-only. """ import warnings from collections.abc import Mapping
Documentation: Make classes AtlasView et al. from networkx/classes/coreviews.py accessible from documentation Lest I seem ungrateful, I like networkx a lot, and rely on it for two of my main personal projects [fake-data-for-learning](https://github.com/munichpavel/fake-data-for-learning) and the WIP [clovek-ne-jezi-se](https://github.com/munichpavel/clovek-ne-jezi-se). As I was trying to understand `AtlasView`s, I could find only examples in the documentation (see [this search](https://networkx.org/documentation/stable//search.html?q=AtlasView&check_keywords=yes&area=default#)), none of which pointed to the (well-documented) source code [networkx/classes/coreviews.py](https://github.com/networkx/networkx/blob/master/networkx/classes/coreviews.py). I think the fix should just be a matter of tweaking how you have configured Sphinx to run.
Yes! Thank you for this nudge. They should have been included in the Sphinx docs but never were...
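For anyone who lands here from a similar search, a minimal sketch of where these views surface in everyday use (expected output shown in comments):
```python
import networkx as nx

G = nx.Graph([(0, 1), (0, 2)])
print(G.adj[0])        # AtlasView({1: {}, 2: {}}) -- read-only neighbor view
print(list(G.adj[0]))  # [1, 2]
print(G.adj[0][1])     # {} -- the (mutable) edge-data dict for edge (0, 1)
```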
2020-12-07T07:15:57
networkx/networkx
4,455
networkx__networkx-4455
[ "4432" ]
1462350ebb1467a46af4f7b774bb093ce15eb6c6
diff --git a/networkx/algorithms/flow/networksimplex.py b/networkx/algorithms/flow/networksimplex.py --- a/networkx/algorithms/flow/networksimplex.py +++ b/networkx/algorithms/flow/networksimplex.py @@ -62,8 +62,8 @@ def network_simplex(G, demand="demand", capacity="capacity", weight="weight"): Raises ------ NetworkXError - This exception is raised if the input graph is not directed, - not connected or is a multigraph. + This exception is raised if the input graph is not directed or + not connected. NetworkXUnfeasible This exception is raised in the following situations:
Support for disjunctive graph? I am about to solve the FJSSP (flexible job-shop scheduling problem) using an oriented disjunctive graph with conjunctive/disjunctive arcs. The way I see it, the MultiDiGraph structure would be suitable: is the simplex method (Dantzig) - I presume the _network_simplex_ method - applicable in this case? Thanks.
MultiDiGraph structures are supported as of #1280 (about 6 years ago). But it looks like the doc_string should be updated to reflect this!! I'm not sure what Dantzig actually refers to in terms of an algorithm, but there are [citations in the docs](https://networkx.org/documentation/stable/reference/algorithms/generated/networkx.algorithms.flow.network_simplex.html) to papers describing the algorithm.

Dantzig's simplex algorithm = the simplex method. I just wanted to know if MultiDiGraph instances were supported, which they are. Thank you.

Good! I'm going to leave this open as a reminder to change the docs for `network_simplex` (and maybe others in that module) to remove the text stating an error is raised for multigraphs when it actually isn't.
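To make the answer concrete, here is a minimal sketch of `network_simplex` on a `MultiDiGraph` with parallel arcs; the node and edge data are toy values, not an FJSSP model:
```python
import networkx as nx

G = nx.MultiDiGraph()
G.add_node("a", demand=-1)  # supplies one unit
G.add_node("b", demand=1)   # consumes one unit
G.add_edge("a", "b", weight=3, capacity=2)
G.add_edge("a", "b", weight=1, capacity=1)  # cheaper parallel arc

flow_cost, flow_dict = nx.network_simplex(G)
print(flow_cost)  # 1 -- the unit takes the cheaper parallel arc
```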
2020-12-13T03:42:22
networkx/networkx
4,461
networkx__networkx-4461
[ "4460" ]
f542e5fb92d12ec42570d14df867d144a9e8ba4f
diff --git a/networkx/generators/classic.py b/networkx/generators/classic.py --- a/networkx/generators/classic.py +++ b/networkx/generators/classic.py @@ -188,7 +188,7 @@ def barbell_graph(m1, m2, create_using=None): return G -def binomial_tree(n): +def binomial_tree(n, create_using=None): """Returns the Binomial Tree of order n. The binomial tree of order 0 consists of a single node. A binomial tree of order k @@ -200,16 +200,21 @@ def binomial_tree(n): n : int Order of the binomial tree. + create_using : NetworkX graph constructor, optional (default=nx.Graph) + Graph type to create. If graph instance, then cleared before populated. + Returns ------- G : NetworkX graph A binomial tree of $2^n$ nodes and $2^n - 1$ edges. """ - G = nx.empty_graph(1) + G = nx.empty_graph(1, create_using) + N = 1 for i in range(n): - edges = [(u + N, v + N) for (u, v) in G.edges] + # Use G.edges() to ensure 2-tuples. G.edges is 3-tuple for MultiGraph + edges = [(u + N, v + N) for (u, v) in G.edges()] G.add_edges_from(edges) G.add_edge(0, N) N *= 2
diff --git a/networkx/generators/tests/test_classic.py b/networkx/generators/tests/test_classic.py --- a/networkx/generators/tests/test_classic.py +++ b/networkx/generators/tests/test_classic.py @@ -138,10 +138,12 @@ def test_barbell_graph(self): assert_edges_equal(mb.edges(), b.edges()) def test_binomial_tree(self): - for n in range(0, 4): - b = nx.binomial_tree(n) - assert nx.number_of_nodes(b) == 2 ** n - assert nx.number_of_edges(b) == (2 ** n - 1) + graphs = (None, nx.Graph, nx.DiGraph, nx.MultiGraph, nx.MultiDiGraph) + for create_using in graphs: + for n in range(0, 4): + b = nx.binomial_tree(n, create_using) + assert nx.number_of_nodes(b) == 2 ** n + assert nx.number_of_edges(b) == (2 ** n - 1) def test_complete_graph(self): # complete_graph(m) is a connected graph with
Allow binomial_tree() to create directed graphs (via create_using parameter) `binomial_tree()` doesn't seem to allow the `create_using` parameter to be passed, and as a result one can't create a directed binomial tree. This seems like an inadvertent omission and is easy to fix. I will put in a PR to address this soon.
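For illustration, with `create_using` wired through (as in the patch above), a directed binomial tree becomes a one-liner:
```python
import networkx as nx

G = nx.binomial_tree(3, create_using=nx.DiGraph)
print(G.number_of_nodes(), G.number_of_edges())  # 8 7
```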
2020-12-14T02:51:50
networkx/networkx
4,476
networkx__networkx-4476
[ "4475" ]
8585169fe5c34dbab9c57036912d5e5446548ab6
diff --git a/networkx/algorithms/approximation/__init__.py b/networkx/algorithms/approximation/__init__.py
--- a/networkx/algorithms/approximation/__init__.py
+++ b/networkx/algorithms/approximation/__init__.py
@@ -12,6 +12,7 @@
 from networkx.algorithms.approximation.clustering_coefficient import *
 from networkx.algorithms.approximation.clique import *
 from networkx.algorithms.approximation.connectivity import *
+from networkx.algorithms.approximation.distance_measures import *
 from networkx.algorithms.approximation.dominating_set import *
 from networkx.algorithms.approximation.kcomponents import *
 from networkx.algorithms.approximation.independent_set import *
diff --git a/networkx/algorithms/approximation/distance_measures.py b/networkx/algorithms/approximation/distance_measures.py
new file mode 100644
--- /dev/null
+++ b/networkx/algorithms/approximation/distance_measures.py
@@ -0,0 +1,140 @@
+"""Distance measures approximated metrics."""
+
+import networkx as nx
+from networkx.utils.decorators import py_random_state
+
+__all__ = ["diameter"]
+
+
+@py_random_state(1)
+def diameter(G, seed=None):
+    """Returns a lower bound on the diameter of the graph G.
+
+    The function computes a lower bound on the diameter (i.e., the maximum eccentricity)
+    of a directed or undirected graph G. The procedure used varies depending on the graph
+    being directed or not.
+
+    If G is an `undirected` graph, then the function uses the `2-sweep` algorithm [1]_.
+    The main idea is to pick the farthest node from a random node and return its eccentricity.
+
+    Otherwise, if G is a `directed` graph, the function uses the `2-dSweep` algorithm [2]_,
+    The procedure starts by selecting a random source node $s$ from which it performs a
+    forward and a backward BFS. Let $a_1$ and $a_2$ be the farthest nodes in the forward and
+    backward cases, respectively. Then, it computes the backward eccentricity of $a_1$ using
+    a backward BFS and the forward eccentricity of $a_2$ using a forward BFS.
+    Finally, it returns the best lower bound between the two.
+
+    In both cases, the time complexity is linear with respect to the size of G.
+
+    Parameters
+    ----------
+    G : NetworkX graph
+
+    seed : integer, random_state, or None (default)
+        Indicator of random number generation state.
+        See :ref:`Randomness<randomness>`.
+
+    Returns
+    -------
+    d : integer
+       Lower Bound on the Diameter of G
+
+    Raises
+    ------
+    NetworkXError
+        If the graph is empty or
+        If the graph is undirected and not connected or
+        If the graph is directed and not strongly connected.
+
+    See Also
+    --------
+    networkx.algorithms.distance_measures.diameter
+
+    References
+    ----------
+    .. [1] Magnien, Clémence, Matthieu Latapy, and Michel Habib.
+       *Fast computation of empirically tight bounds for the diameter of massive graphs.*
+       Journal of Experimental Algorithmics (JEA), 2009.
+       https://arxiv.org/pdf/0904.2728.pdf
+    .. [2] Crescenzi, Pierluigi, Roberto Grossi, Leonardo Lanzi, and Andrea Marino.
+       *On computing the diameter of real-world directed (weighted) graphs.*
+       International Symposium on Experimental Algorithms. Springer, Berlin, Heidelberg, 2012. 
+ https://courses.cs.ut.ee/MTAT.03.238/2014_fall/uploads/Main/diameter.pdf + """ + # if G is empty + if not G: + raise nx.NetworkXError("Expected non-empty NetworkX graph!") + # if there's only a node + if G.number_of_nodes() == 1: + return 0 + # if G is directed + if G.is_directed(): + return _two_sweep_directed(G, seed) + # else if G is undirected + return _two_sweep_undirected(G, seed) + + +def _two_sweep_undirected(G, seed): + """Helper function for finding a lower bound on the diameter + for undirected Graphs. + + The idea is to pick the farthest node from a random node + and return its eccentricity. + + ``G`` is a NetworkX undirected graph. + + .. note:: + + ``seed`` is a random.Random or numpy.random.RandomState instance + """ + # select a random source node + source = seed.choice(list(G)) + # get the distances to the other nodes + distances = nx.shortest_path_length(G, source) + # if some nodes have not been visited, then the graph is not connected + if len(distances) != len(G): + raise nx.NetworkXError("Graph not connected.") + # take a node that is (one of) the farthest nodes from the source + *_, node = distances + # return the eccentricity of the node + return nx.eccentricity(G, node) + + +def _two_sweep_directed(G, seed): + """Helper function for finding a lower bound on the diameter + for directed Graphs. + + It implements 2-dSweep, the directed version of the 2-sweep algorithm. + The algorithm follows the following steps. + 1. Select a source node $s$ at random. + 2. Perform a forward BFS from $s$ to select a node $a_1$ at the maximum + distance from the source, and compute $LB_1$, the backward eccentricity of $a_1$. + 3. Perform a backward BFS from $s$ to select a node $a_2$ at the maximum + distance from the source, and compute $LB_2$, the forward eccentricity of $a_2$. + 4. Return the maximum between $LB_1$ and $LB_2$. + + ``G`` is a NetworkX directed graph. + + .. note:: + + ``seed`` is a random.Random or numpy.random.RandomState instance + """ + # get a new digraph G' with the edges reversed in the opposite direction + G_reversed = G.reverse() + # select a random source node + source = seed.choice(list(G)) + # compute forward distances from source + forward_distances = nx.shortest_path_length(G, source) + # compute backward distances from source + backward_distances = nx.shortest_path_length(G_reversed, source) + # if either the source can't reach every node or not every node + # can reach the source, then the graph is not strongly connected + n = len(G) + if len(forward_distances) != n or len(backward_distances) != n: + raise nx.NetworkXError("DiGraph not strongly connected.") + # take a node a_1 at the maximum distance from the source in G + *_, a_1 = forward_distances + # take a node a_2 at the maximum distance from the source in G_reversed + *_, a_2 = backward_distances + # return the max between the backward eccentricity of a_1 and the forward eccentricity of a_2 + return max(nx.eccentricity(G_reversed, a_1), nx.eccentricity(G, a_2))
diff --git a/networkx/algorithms/approximation/tests/test_distance_measures.py b/networkx/algorithms/approximation/tests/test_distance_measures.py new file mode 100644 --- /dev/null +++ b/networkx/algorithms/approximation/tests/test_distance_measures.py @@ -0,0 +1,59 @@ +"""Unit tests for the :mod:`networkx.algorithms.approximation.distance_measures` module. +""" + +import pytest +import networkx as nx +from networkx.algorithms.approximation import diameter + + +class TestDiameter: + """Unit tests for the approximate diameter function + :func:`~networkx.algorithms.approximation.distance_measures.diameter`. + """ + + def test_null_graph(self): + """Test empty graph.""" + G = nx.null_graph() + with pytest.raises( + nx.NetworkXError, match="Expected non-empty NetworkX graph!" + ): + diameter(G) + + def test_undirected_non_connected(self): + """Test an undirected disconnected graph.""" + graph = nx.path_graph(10) + graph.remove_edge(3, 4) + with pytest.raises(nx.NetworkXError, match="Graph not connected."): + diameter(graph) + + def test_directed_non_strongly_connected(self): + """Test a directed non strongly connected graph.""" + graph = nx.path_graph(10, create_using=nx.DiGraph()) + with pytest.raises(nx.NetworkXError, match="DiGraph not strongly connected."): + diameter(graph) + + def test_complete_undirected_graph(self): + """Test a complete undirected graph.""" + graph = nx.complete_graph(10) + assert diameter(graph) == 1 + + def test_complete_directed_graph(self): + """Test a complete directed graph.""" + graph = nx.complete_graph(10, create_using=nx.DiGraph()) + assert diameter(graph) == 1 + + def test_undirected_path_graph(self): + """Test an undirected path graph with 10 nodes.""" + graph = nx.path_graph(10) + assert diameter(graph) == 9 + + def test_directed_path_graph(self): + """Test a directed path graph with 10 nodes.""" + graph = nx.path_graph(10).to_directed() + assert diameter(graph) == 9 + + def test_single_node(self): + """Test a graph which contains just a node.""" + graph = nx.Graph() + graph.add_node(1) + assert diameter(graph) == 0
Approximated Metrics - Diameter Hi all, In some cases, especially for large graphs, an exact result for a metric might not be necessary and a bound could be good enough. This might be the case for the *diameter*, which is currently present in networkx only as an [exact metric](https://networkx.org/documentation/stable//reference/algorithms/generated/networkx.algorithms.distance_measures.diameter.html). I think it might be worth adding functionality to compute an approximated diameter (i.e., a lower bound). Good candidates might be the *2-sweep* [[1]](#1) and *2d-sweep* [[2]](#2) for the undirected and directed case, respectively. In addition to the results in the papers, here are some comparisons made on my laptop (2.9 GHz Dual-Core Intel Core i5) with a draft implementation of the *2-sweep* on undirected graphs, considering the [Twitch Snap Dataset](http://snap.stanford.edu/data/twitch-social-networks.html). Time is expressed in seconds. The code used can be found [here](https://gist.github.com/atomassi/3ee4d827afc6f7f20e3a70bba97ed697).
| network | \|V\| | \|E\| | Time (exact) | Value (exact) | Time (LB) | Value (LB) |
|------------ |:-----: |:-------: |:------------: |:-------------: |:---------: |:----------: |
| musae_ENGB | 7,126 | 35,324 | 166.7 | 10 | 0.06 | 10 |
| musae_RU | 4,385 | 37,304 | 63.2 | 9 | 0.03 | 9 |
| musae_FR | 6,549 | 112,666 | 255.5 | 7 | 0.11 | 7 |
| musae_DE | 9,498 | 153,138 | 602.7 | 7 | 0.18 | 7 |
| musae_ES | 4,648 | 59,382 | 101.9 | 9 | 0.07 | 8 |
| musae_PTBR | 1,912 | 31,299 | 17.5 | 7 | 0.02 | 7 |
If you think it might make sense to add this metric for the two cases, I can add a module (e.g., `networkx/algorithms/approximation/distance_measures.py`) with a proposed implementation and the tests for it. ## References <a id="1">[1]</a> Magnien, C., Latapy, M. and Habib, M., 2009. *Fast computation of empirically tight bounds for the diameter of massive graphs*. Journal of Experimental Algorithmics (JEA), 13, pp.1-10. <a id="2">[2]</a> Crescenzi, P., Grossi, R., Lanzi, L. and Marino, A., 2012, June. *On computing the diameter of real-world directed (weighted) graphs*. In International Symposium on Experimental Algorithms (pp. 99-110). Springer, Berlin, Heidelberg.
This looks interesting, and you have chosen the correct module place and name. It would be great if you could put together a PR. Hook into the documentation with an entry at doc/reference/algorithms/approximation.rst. Hook into the namespace with an entry in `__init__.py`. If you have questions, let us know.
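As a minimal usage sketch of the function this patch adds (assuming the `networkx.algorithms.approximation` import path shown in the diff), the approximation can be sanity-checked against the exact routine on any connected graph:

```python
import networkx as nx
from networkx.algorithms.approximation import diameter as approx_diameter

G = nx.connected_watts_strogatz_graph(500, 6, 0.1, seed=42)
lb = approx_diameter(G, seed=1)  # linear-time 2-sweep lower bound
assert lb <= nx.diameter(G)      # the exact diameter is far more expensive
```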
2020-12-20T15:19:12
networkx/networkx
4,477
networkx__networkx-4477
[ "4348" ]
8585169fe5c34dbab9c57036912d5e5446548ab6
diff --git a/networkx/algorithms/cluster.py b/networkx/algorithms/cluster.py --- a/networkx/algorithms/cluster.py +++ b/networkx/algorithms/cluster.py @@ -85,6 +85,9 @@ def _weighted_triangles_and_degree_iter(G, nodes=None, weight="weight"): """Return an iterator of (node, degree, weighted_triangles). Used for weighted clustering. + Note: this returns the geometric average weight of edges in the triangle. + Also, each triangle is counted twice (each direction). + So you may want to divide by 2. """ if weight is None or G.number_of_edges() == 0: @@ -105,7 +108,7 @@ def wt(u, v): seen = set() for j in inbrs: seen.add(j) - # This prevents double counting. + # This avoids counting twice -- we double at the end. jnbrs = set(G[j]) - seen # Only compute the edge weight once, before the inner inner # loop. @@ -122,6 +125,8 @@ def _directed_triangles_and_degree_iter(G, nodes=None): (node, total_degree, reciprocal_degree, directed_triangles). Used for directed clustering. + Note that unlike `_triangles_and_degree_iter()`, this function counts + directed triangles so does not count triangles twice. """ nodes_nbrs = ((n, G._pred[n], G._succ[n]) for n in G.nbunch_iter(nodes)) @@ -154,6 +159,8 @@ def _directed_weighted_triangles_and_degree_iter(G, nodes=None, weight="weight") (node, total_degree, reciprocal_degree, directed_weighted_triangles). Used for directed weighted clustering. + Note that unlike `_weighted_triangles_and_degree_iter()`, this function counts + directed triangles so does not count triangles twice. """ if weight is None or G.number_of_edges() == 0: @@ -300,7 +307,7 @@ def clustering(G, nodes=None, weight=None): .. math:: - c_u = \frac{1}{deg^{tot}(u)(deg^{tot}(u)-1) - 2deg^{\leftrightarrow}(u)} + c_u = \frac{2}{deg^{tot}(u)(deg^{tot}(u)-1) - 2deg^{\leftrightarrow}(u)} T(u), where :math:`T(u)` is the number of directed triangles through node @@ -362,6 +369,7 @@ def clustering(G, nodes=None, weight=None): for v, dt, db, t in td_iter } else: + # The formula 2*T/(d*(d-1)) from docs is t/(d*(d-1)) here b/c t==2*T if weight is not None: td_iter = _weighted_triangles_and_degree_iter(G, nodes, weight) clusterc = {v: 0 if t == 0 else t / (d * (d - 1)) for v, d, t in td_iter}
Clustering coefficient of weighted graph Hi! I'm trying to use networkx.algorithms.cluster.clustering to calculate the clustering coefficient of a weighted network. I noticed that the formula listed in the networkx reference is slightly different from the formula in the paper being cited [1]. Could I ask why? In the [networkx reference](https://networkx.org/documentation/stable/reference/algorithms/generated/networkx.algorithms.cluster.clustering.html#networkx.algorithms.cluster.clustering), the constant in the numerator is 1. In the paper [1], that constant is 2. I believe it should be 2, since it is adapted from the count of all possible triangles, namely deg(u)(deg(u)-1)/2. Thanks! [1] Intensity and coherence of motifs in weighted complex networks by J. P. Onnela, J. Saramäki, J. Kertész, and K. Kaski, Physical Review E, 71(6), 065103 (2005).
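For reference, the corrected constant in the directed clustering formula, exactly as the patch above amends it (typeset here for readability; $T(u)$ is the number of directed triangles through $u$ and $\deg^{\leftrightarrow}(u)$ is the reciprocal degree):

```latex
c_u = \frac{2}{\deg^{tot}(u)\,\bigl(\deg^{tot}(u) - 1\bigr) - 2\deg^{\leftrightarrow}(u)}\, T(u)
```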
That is a confusing part of the docs. I think the confusion is caused by the code, where the term `triangles` is used to represent what is actually two times the number of triangles. The code correctly includes the 2, but someone looking at the code might think the 2 is not there. So I suspect the docs incorrectly interpreted the code as not having the factor of 2. While we're changing the docs for `clustering`, they don't even show the formula used for the weighted directed clustering case. Thanks for the clarification!
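A small check of the factor of 2 for the undirected weighted case (a sketch; for a single triangle the clustering of each node reduces to the geometric mean of the normalized edge weights):

```python
import networkx as nx

# One triangle with weights 1, 2, 4; the maximum weight in the graph is 4,
# so the normalized weights are 1/4, 2/4 and 4/4.
G = nx.Graph()
G.add_weighted_edges_from([("a", "b", 1), ("b", "c", 2), ("a", "c", 4)])

# Geometric mean of normalized weights: (0.25 * 0.5 * 1.0) ** (1 / 3) == 0.5
print(nx.clustering(G, weight="weight"))  # {'a': 0.5, 'b': 0.5, 'c': 0.5}
```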
2020-12-20T15:49:12
networkx/networkx
4,550
networkx__networkx-4550
[ "4191" ]
f63e90ba4676fcb4ef74c5bd7ddda56be50d4c90
diff --git a/networkx/algorithms/dag.py b/networkx/algorithms/dag.py --- a/networkx/algorithms/dag.py +++ b/networkx/algorithms/dag.py @@ -46,7 +46,7 @@ def descendants(G, source): Parameters ---------- G : NetworkX DiGraph - A directed acyclic graph (DAG) + A directed graph source : node in `G` Returns @@ -66,7 +66,7 @@ def ancestors(G, source): Parameters ---------- G : NetworkX DiGraph - A directed acyclic graph (DAG) + A directed graph source : node in `G` Returns
ancestors & descendant documentation appears overly restrictive I'm trying to find the out-component of a node, and it looks like I could use `nx.descendants` and just add the node to the descendants. But the documentation of `descendants` suggests the graph needs to be a DAG, rather than just directed. I don't see anything in the code that requires that restriction.
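For the out-component use case described above, a minimal sketch (this already works on cyclic digraphs with the current implementation; only the docs were restrictive):

```python
import networkx as nx

G = nx.DiGraph([(1, 2), (2, 3), (3, 1), (3, 4)])  # directed, contains a cycle
out_component = nx.descendants(G, 2) | {2}        # descendants plus the node itself
print(out_component)  # {1, 2, 3, 4}
```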
I agree. For both descendants and ancestors the input graph does not need to be acyclic (or even directed). Also, this code is not taking advantage of python 3 dict.keys capabilities. It should be closer to something like:
```python
if source not in G:
    raise....
return nx.shortest_path_length(G, source).keys() - {source}
```
`dfs_preorder_nodes` already serves this purpose (remove source to get descendants). I guess the reason that `descendants` and `ancestors` are implemented in `networkx.algorithms.dag` is because of terminology? Because those terms may only make sense for a DAG (and trees), although the same algorithms can be applied on general graphs to get "nodes reachable from u" and "nodes from which u is reachable" (they may not be disjoint for non-DAGs). If we decide to call these also "descendants" and "ancestors", then they should be moved out of `networkx.algorithms.dag`. Another thing that I noticed is that both `dag.descendants` and `dag.ancestors` are sub-optimal complexity-wise because they use `shortest_path_length` (which will default to something like dijkstra or bellman-ford). They should be using a basic traversal algorithm (like `dfs_preorder_nodes` does) to achieve a time complexity of O(V+E). Would the devs here be interested in a PR re-writing `descendants` and `ancestors` (and maybe moving them out of `networkx.algorithms.dag`)? The complexity issue should be fine. If a weight is not specified `shortest_path_length` defaults to a breadth first search. I think simply removing the restriction from the docs of ancestors and descendants is the main issue. Moving `descendants` and `ancestors` to, e.g., the traversal subpackage would also be OK with me. You can choose a module name, but `relatives.py` could work I suppose... :} There will be a few changes needed in docs and `__init__.py` files. If you want to just do the doc_string that would be fine too. Hey, I was looking into this and was trying to understand the kind of change that would be needed. I would like to take on this task. I think the right approach here is to remove the restriction written in the doc_string that these two functions work only for DAGs. I don't think we need to move the functions elsewhere. Pretty straightforward I hope. Agreed. Right now it is: G : NetworkX DiGraph A directed acyclic graph (DAG) I feel it should say: G : NetworkX DiGraph A directed graph Would that be correct? @dschult Yes, that seems good! Thanks!
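A runnable sketch of the traversal-based alternative discussed in this thread (the error message is hypothetical; `dfs_preorder_nodes` visits exactly the reachable nodes in O(V + E)):

```python
import networkx as nx

def descendants_via_dfs(G, source):
    # Traversal-based variant of nx.descendants; O(V + E).
    if source not in G:
        raise nx.NetworkXError(f"node {source} not in G")  # hypothetical message
    return set(nx.dfs_preorder_nodes(G, source)) - {source}

G = nx.DiGraph([(1, 2), (2, 3), (3, 1)])
assert descendants_via_dfs(G, 1) == nx.descendants(G, 1)  # {2, 3}
```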
2021-01-21T04:31:05
networkx/networkx
4,579
networkx__networkx-4579
[ "4389" ]
8964bcec0ac826d1a746f7e3656e1023ef8da924
diff --git a/setup.py b/setup.py --- a/setup.py +++ b/setup.py @@ -146,7 +146,7 @@ "networkx.algorithms.tree": ["tests/*.py"], "networkx.classes": ["tests/*.py"], "networkx.generators": ["tests/*.py", "atlas.dat.gz"], - "networkx.drawing": ["tests/*.py"], + "networkx.drawing": ["tests/*.py", "tests/baseline/*png"], "networkx.linalg": ["tests/*.py"], "networkx.readwrite": ["tests/*.py"], "networkx.readwrite.json_graph": ["tests/*.py"],
diff --git a/networkx/drawing/tests/baseline/test_house_with_colors.png b/networkx/drawing/tests/baseline/test_house_with_colors.png new file mode 100644 Binary files /dev/null and b/networkx/drawing/tests/baseline/test_house_with_colors.png differ diff --git a/networkx/drawing/tests/test_pylab.py b/networkx/drawing/tests/test_pylab.py --- a/networkx/drawing/tests/test_pylab.py +++ b/networkx/drawing/tests/test_pylab.py @@ -416,6 +416,32 @@ def test_labels_and_colors(): # plt.show() [email protected]_image_compare +def test_house_with_colors(): + G = nx.house_graph() + # explicitly set positions + fig, ax = plt.subplots() + pos = {0: (0, 0), 1: (1, 0), 2: (0, 1), 3: (1, 1), 4: (0.5, 2.0)} + + # Plot nodes with different properties for the "wall" and "roof" nodes + nx.draw_networkx_nodes( + G, + pos, + node_size=3000, + nodelist=[0, 1, 2, 3], + node_color="tab:blue", + ) + nx.draw_networkx_nodes( + G, pos, node_size=2000, nodelist=[4], node_color="tab:orange" + ) + nx.draw_networkx_edges(G, pos, alpha=0.5, width=6) + # Customize axes + ax.margins(0.11) + plt.tight_layout() + plt.axis("off") + return fig + + def test_axes(): fig, ax = plt.subplots() nx.draw(barbell, ax=ax) diff --git a/requirements/test.txt b/requirements/test.txt --- a/requirements/test.txt +++ b/requirements/test.txt @@ -1,3 +1,4 @@ pytest>=7.0 pytest-cov>=3.0 +pytest-mpl>=0.14; platform_python_implementation!='PyPy' codecov>=2.1
Improve nx_pylab testing Currently our visualization test suite consists of smoke tests. We should investigate using ``pytest-mpl`` - https://github.com/matplotlib/pytest-mpl See #4375.
I ran into an issue while debugging a hang in `test_pylab` due to the recent addition of non-smoke tests (i.e. tests that assert things about axis geometries) in f559558 (#4374): The default behavior for the `nx_pylab` functions when the `ax` kwarg is `None` (the default value) is to call `plt.gca()`. Note that axis objects created in unit tests persist across function boundaries!
```python
def test_1():
    fig, ax = plt.subplots()

def test_2():
    ax = plt.gca()  # This is the axis object created in test_1!
```
This crosstalk between tests can lead to unstable/unexpected test behavior, so it's important to be aware of it. The solution used in #4374 is to call `plt.delaxes` as a "cleanup" step in the unit tests that create axes objects. There's probably a better solution to this; it's something to keep in mind as the `nx_pylab` tests are improved.
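One common cleanup pattern that avoids this crosstalk (a sketch, not necessarily what the test suite adopted) is an autouse fixture that closes every figure after each test, so a later `plt.gca()` call cannot pick up a stale axes:

```python
import matplotlib.pyplot as plt
import pytest

@pytest.fixture(autouse=True)
def close_figures():
    # Run the test, then close all figures it may have created.
    yield
    plt.close("all")
```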
2021-01-28T23:50:47
networkx/networkx
4,589
networkx__networkx-4589
[ "4587" ]
777e57a5f08736a9e2b5aa6c87ff38cb8729c926
diff --git a/networkx/algorithms/shortest_paths/dense.py b/networkx/algorithms/shortest_paths/dense.py --- a/networkx/algorithms/shortest_paths/dense.py +++ b/networkx/algorithms/shortest_paths/dense.py @@ -13,37 +13,57 @@ def floyd_warshall_numpy(G, nodelist=None, weight="weight"): """Find all-pairs shortest path lengths using Floyd's algorithm. + This algorithm for finding shortest paths takes advantage of + matrix representations of a graph and works well for dense + graphs where all-pairs shortest path lengths are desired. + The results are returned as a NumPy array, distance[i, j], + where i and j are the indexes of two nodes in nodelist. + The entry distance[i, j] is the distance along a shortest + path from i to j. If no path exists the distance is Inf. + Parameters ---------- G : NetworkX graph - nodelist : list, optional + nodelist : list, optional (default=G.nodes) The rows and columns are ordered by the nodes in nodelist. - If nodelist is None then the ordering is produced by G.nodes(). + If nodelist is None then the ordering is produced by G.nodes. + Nodelist should include all nodes in G. - weight: string, optional (default= 'weight') + weight: string, optional (default='weight') Edge data key corresponding to the edge weight. Returns ------- distance : NumPy matrix A matrix of shortest path distances between nodes. - If there is no path between to nodes the corresponding matrix entry - will be Inf. + If there is no path between two nodes the value is Inf. Notes ----- Floyd's algorithm is appropriate for finding shortest paths in dense graphs or graphs with negative weights when Dijkstra's algorithm fails. This algorithm can still fail if there are negative - cycles. It has running time $O(n^3)$ with running space of $O(n^2)$. + cycles. It has running time $O(n^3)$ with running space of $O(n^2)$. + + Raises + ------ + NetworkXError + If nodelist is not a list of the nodes in G. """ import numpy as np + if nodelist is not None: + if not (len(nodelist) == len(G) == len(set(nodelist))): + raise nx.NetworkXError( + "nodelist must contain every node in G with no repeats." + "If you wanted a subgraph of G use G.subgraph(nodelist)" + ) + # To handle cases when an edge has weight=0, we must make sure that # nonedges are not given the value 0 as well. A = nx.to_numpy_array( - G, nodelist=nodelist, multigraph_weight=min, weight=weight, nonedge=np.inf + G, nodelist, multigraph_weight=min, weight=weight, nonedge=np.inf ) n, m = A.shape np.fill_diagonal(A, 0) # diagonal elements should be zero
diff --git a/networkx/algorithms/shortest_paths/tests/test_dense_numpy.py b/networkx/algorithms/shortest_paths/tests/test_dense_numpy.py --- a/networkx/algorithms/shortest_paths/tests/test_dense_numpy.py +++ b/networkx/algorithms/shortest_paths/tests/test_dense_numpy.py @@ -6,70 +6,84 @@ import networkx as nx -class TestFloydNumpy: - def test_cycle_numpy(self): - dist = nx.floyd_warshall_numpy(nx.cycle_graph(7)) - assert dist[0, 3] == 3 - assert dist[0, 4] == 3 - - def test_weighted_numpy_three_edges(self): - XG3 = nx.Graph() - XG3.add_weighted_edges_from( - [[0, 1, 2], [1, 2, 12], [2, 3, 1], [3, 4, 5], [4, 5, 1], [5, 0, 10]] - ) - dist = nx.floyd_warshall_numpy(XG3) - assert dist[0, 3] == 15 - - def test_weighted_numpy_two_edges(self): - XG4 = nx.Graph() - XG4.add_weighted_edges_from( - [ - [0, 1, 2], - [1, 2, 2], - [2, 3, 1], - [3, 4, 1], - [4, 5, 1], - [5, 6, 1], - [6, 7, 1], - [7, 0, 1], - ] - ) - dist = nx.floyd_warshall_numpy(XG4) - assert dist[0, 2] == 4 - - def test_weight_parameter_numpy(self): - XG4 = nx.Graph() - XG4.add_edges_from( - [ - (0, 1, {"heavy": 2}), - (1, 2, {"heavy": 2}), - (2, 3, {"heavy": 1}), - (3, 4, {"heavy": 1}), - (4, 5, {"heavy": 1}), - (5, 6, {"heavy": 1}), - (6, 7, {"heavy": 1}), - (7, 0, {"heavy": 1}), - ] - ) - dist = nx.floyd_warshall_numpy(XG4, weight="heavy") - assert dist[0, 2] == 4 - - def test_directed_cycle_numpy(self): - G = nx.DiGraph() - nx.add_cycle(G, [0, 1, 2, 3]) - pred, dist = nx.floyd_warshall_predecessor_and_distance(G) - D = nx.utils.dict_to_numpy_array(dist) - np.testing.assert_equal(nx.floyd_warshall_numpy(G), D) - - def test_zero_weight(self): - G = nx.DiGraph() - edges = [(1, 2, -2), (2, 3, -4), (1, 5, 1), (5, 4, 0), (4, 3, -5), (2, 5, -7)] - G.add_weighted_edges_from(edges) - dist = nx.floyd_warshall_numpy(G) - assert int(np.min(dist)) == -14 - - G = nx.MultiDiGraph() - edges.append((2, 5, -7)) - G.add_weighted_edges_from(edges) - dist = nx.floyd_warshall_numpy(G) - assert int(np.min(dist)) == -14 +def test_cycle_numpy(): + dist = nx.floyd_warshall_numpy(nx.cycle_graph(7)) + assert dist[0, 3] == 3 + assert dist[0, 4] == 3 + + +def test_weighted_numpy_three_edges(): + XG3 = nx.Graph() + XG3.add_weighted_edges_from( + [[0, 1, 2], [1, 2, 12], [2, 3, 1], [3, 4, 5], [4, 5, 1], [5, 0, 10]] + ) + dist = nx.floyd_warshall_numpy(XG3) + assert dist[0, 3] == 15 + + +def test_weighted_numpy_two_edges(): + XG4 = nx.Graph() + XG4.add_weighted_edges_from( + [ + [0, 1, 2], + [1, 2, 2], + [2, 3, 1], + [3, 4, 1], + [4, 5, 1], + [5, 6, 1], + [6, 7, 1], + [7, 0, 1], + ] + ) + dist = nx.floyd_warshall_numpy(XG4) + assert dist[0, 2] == 4 + + +def test_weight_parameter_numpy(): + XG4 = nx.Graph() + XG4.add_edges_from( + [ + (0, 1, {"heavy": 2}), + (1, 2, {"heavy": 2}), + (2, 3, {"heavy": 1}), + (3, 4, {"heavy": 1}), + (4, 5, {"heavy": 1}), + (5, 6, {"heavy": 1}), + (6, 7, {"heavy": 1}), + (7, 0, {"heavy": 1}), + ] + ) + dist = nx.floyd_warshall_numpy(XG4, weight="heavy") + assert dist[0, 2] == 4 + + +def test_directed_cycle_numpy(): + G = nx.DiGraph() + nx.add_cycle(G, [0, 1, 2, 3]) + pred, dist = nx.floyd_warshall_predecessor_and_distance(G) + D = nx.utils.dict_to_numpy_array(dist) + np.testing.assert_equal(nx.floyd_warshall_numpy(G), D) + + +def test_zero_weight(): + G = nx.DiGraph() + edges = [(1, 2, -2), (2, 3, -4), (1, 5, 1), (5, 4, 0), (4, 3, -5), (2, 5, -7)] + G.add_weighted_edges_from(edges) + dist = nx.floyd_warshall_numpy(G) + assert int(np.min(dist)) == -14 + + G = nx.MultiDiGraph() + edges.append((2, 5, -7)) + G.add_weighted_edges_from(edges) 
+ dist = nx.floyd_warshall_numpy(G) + assert int(np.min(dist)) == -14 + + +def test_nodelist(): + G = nx.path_graph(7) + dist = nx.floyd_warshall_numpy(G, nodelist=[3, 5, 4, 6, 2, 1, 0]) + assert dist[0, 3] == 3 + assert dist[0, 1] == 2 + assert dist[6, 2] == 4 + pytest.raises(nx.NetworkXError, nx.floyd_warshall_numpy, G, [1, 3]) + pytest.raises(nx.NetworkXError, nx.floyd_warshall_numpy, G, list(range(9)))
Documentation and code in `floyd_warshall_numpy` are inconsistent ### Current Behavior Using `floyd_warshall_numpy` with a specified set of nodes will only find paths that are confined to that subset of nodes. I'm not sure I agree with that choice, and certainly the documentation does not make it clear. ### Expected Behavior Based on the documentation, I would expect it to find a path that starts at one node and ends at another, even if that path must go through additional nodes not in the provided list. ### Steps to Reproduce https://stackoverflow.com/q/65771537/2966723 ### Environment Python version: 3.9 NetworkX version: 2.5
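With the validation added by this patch, the ambiguous partial-`nodelist` call fails loudly instead of silently operating on an induced subgraph (a minimal sketch):

```python
import networkx as nx

G = nx.path_graph(7)  # 0-1-2-3-4-5-6
try:
    nx.floyd_warshall_numpy(G, nodelist=[0, 6])  # partial nodelist
except nx.NetworkXError as e:
    print(e)  # nodelist must contain every node in G with no repeats...
```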
Can you explain why you thought the function would allow paths outside of the listed nodes? The current documentation says `The rows and columns are ordered by the nodes in nodelist.` Looking further, the `nodelist` is passed to `to_numpy_array` which says in the doc_string:
```
When `nodelist` does not contain every node in `G`, the adjacency matrix
is built from the subgraph of `G` that is induced by the nodes in
`nodelist`.
```
Perhaps we should copy that documentation to `floyd_warshall_numpy`. But it's supposed to return: >**distance** – A matrix of shortest path distances between nodes. If there is no path between to nodes the corresponding matrix entry will be Inf. **[note typo to->two]** The fact that the documentation says the results showing distances between two nodes are ordered based on the nodelist only suggests to me that the order depends on the nodelist. Not that the numerical value of the distances found will vary based on the nodelist. So in principle if I just want the distance between two nodes in the Graph, I could call it with the nodelist `[node1, node2]` and I should get the shortest path distance between the nodes. There's nothing that suggests to me that we're only interested in the subgraph induced by the nodelist. So I would expect `Inf` to only happen if there is no path in the Graph. But in this case, I'll get `Inf` if they are not adjacent. Unfortunately (or fortunately) this interpretation would preclude the dense matrix representation that this function takes advantage of. Or, perhaps it could use the full matrix and then manipulate it after-the-fact to remove all computed path lengths except those between nodes in nodelist. I don't think that is a good design for this function. But the current implementation makes it hard to track errors where users provide a subset of the nodes. The original SO post looks like an example of such a difficulty. Our current implementation hides the fact that an induced subgraph has been created. That induced subgraph could easily be constructed by the user before passing into this function. I see little to be gained by providing an automated subgraph feature and it makes providing a nodelist error prone. I think the best solution is to raise an error if `nodelist` does not specify all nodes in G and no others. One possible check could be:
```python
if nodelist is None:
    nodelist = list(G)
elif len(nodelist) == len(G) == len(set(nodelist)):
    pass
else:
    raise nx.NetworkXError("nodelist should provide an ordering of all nodes in G")
```
Does this make sense? Is it a good idea? This would work. An alternative would be to compute the entire network, and then subsample the matrix based on the input nodelist. However, this would change the output of the algorithm for the same inputs. I think it would be better for the algorithm to break than to change what it returns. So I think I agree with your suggestion. The users can subsample the rows and columns themselves... agreed
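As the thread suggests, users can compute the full all-pairs matrix and subsample the rows and columns themselves (a sketch):

```python
import networkx as nx

G = nx.path_graph(7)
D = nx.floyd_warshall_numpy(G)   # full matrix, ordered by list(G)
nodes = list(G)
i, j = nodes.index(0), nodes.index(6)
print(D[i, j])                   # 6.0 -- the true shortest-path length in G
```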
2021-02-02T04:37:33
networkx/networkx
4,629
networkx__networkx-4629
[ "4614" ]
5d10139b38c93adaff5872e4e1e4e9892eb2118b
diff --git a/networkx/classes/ordered.py b/networkx/classes/ordered.py --- a/networkx/classes/ordered.py +++ b/networkx/classes/ordered.py @@ -1,4 +1,10 @@ """ + +.. deprecated:: 2.6 + + The ordered variants of graph classes in this module are deprecated and + will be removed in version 3.0. + Consistently ordered variants of the default base classes. Note that if you are using Python 3.6+, you shouldn't need these classes because the dicts in Python 3.6+ are ordered. @@ -28,6 +34,7 @@ """ from collections import OrderedDict +import warnings from .graph import Graph from .multigraph import MultiGraph @@ -42,25 +49,70 @@ class OrderedGraph(Graph): - """Consistently ordered variant of :class:`~networkx.Graph`.""" + """Consistently ordered variant of :class:`~networkx.Graph`. + + .. deprecated:: 2.6 + + OrderedGraph is deprecated and will be removed in version 3.0. + Use `Graph` instead, which guarantees order is preserved for + Python >= 3.7 + """ node_dict_factory = OrderedDict adjlist_outer_dict_factory = OrderedDict adjlist_inner_dict_factory = OrderedDict edge_attr_dict_factory = OrderedDict + def __init__(self, incoming_graph_data=None, **attr): + warnings.warn( + ( + "OrderedGraph is deprecated and will be removed in version 3.0.\n" + "Use `Graph` instead, which guarantees order is preserved for\n" + "Python >= 3.7\n" + ), + DeprecationWarning, + stacklevel=2, + ) + super(OrderedGraph, self).__init__(incoming_graph_data, **attr) + class OrderedDiGraph(DiGraph): - """Consistently ordered variant of :class:`~networkx.DiGraph`.""" + """Consistently ordered variant of :class:`~networkx.DiGraph`. + + .. deprecated:: 2.6 + + OrderedDiGraph is deprecated and will be removed in version 3.0. + Use `DiGraph` instead, which guarantees order is preserved for + Python >= 3.7 + """ node_dict_factory = OrderedDict adjlist_outer_dict_factory = OrderedDict adjlist_inner_dict_factory = OrderedDict edge_attr_dict_factory = OrderedDict + def __init__(self, incoming_graph_data=None, **attr): + warnings.warn( + ( + "OrderedDiGraph is deprecated and will be removed in version 3.0.\n" + "Use `DiGraph` instead, which guarantees order is preserved for\n" + "Python >= 3.7\n" + ), + DeprecationWarning, + stacklevel=2, + ) + super(OrderedDiGraph, self).__init__(incoming_graph_data, **attr) + class OrderedMultiGraph(MultiGraph): - """Consistently ordered variant of :class:`~networkx.MultiGraph`.""" + """Consistently ordered variant of :class:`~networkx.MultiGraph`. + + .. deprecated:: 2.6 + + OrderedMultiGraph is deprecated and will be removed in version 3.0. + Use `MultiGraph` instead, which guarantees order is preserved for + Python >= 3.7 + """ node_dict_factory = OrderedDict adjlist_outer_dict_factory = OrderedDict @@ -68,12 +120,43 @@ class OrderedMultiGraph(MultiGraph): edge_key_dict_factory = OrderedDict edge_attr_dict_factory = OrderedDict + def __init__(self, incoming_graph_data=None, **attr): + warnings.warn( + ( + "OrderedMultiGraph is deprecated and will be removed in version 3.0.\n" + "Use `MultiGraph` instead, which guarantees order is preserved for\n" + "Python >= 3.7\n" + ), + DeprecationWarning, + stacklevel=2, + ) + super(OrderedMultiGraph, self).__init__(incoming_graph_data, **attr) + class OrderedMultiDiGraph(MultiDiGraph): - """Consistently ordered variant of :class:`~networkx.MultiDiGraph`.""" + """Consistently ordered variant of :class:`~networkx.MultiDiGraph`. + + .. deprecated:: 2.6 + + OrderedMultiDiGraph is deprecated and will be removed in version 3.0. 
+ Use `MultiDiGraph` instead, which guarantees order is preserved for + Python >= 3.7 + """ node_dict_factory = OrderedDict adjlist_outer_dict_factory = OrderedDict adjlist_inner_dict_factory = OrderedDict edge_key_dict_factory = OrderedDict edge_attr_dict_factory = OrderedDict + + def __init__(self, incoming_graph_data=None, **attr): + warnings.warn( + ( + "OrderedMultiDiGraph is deprecated and will be removed in version 3.0.\n" + "Use `MultiDiGraph` instead, which guarantees order is preserved for\n" + "Python >= 3.7\n" + ), + DeprecationWarning, + stacklevel=2, + ) + super(OrderedMultiDiGraph, self).__init__(incoming_graph_data, **attr) diff --git a/networkx/conftest.py b/networkx/conftest.py --- a/networkx/conftest.py +++ b/networkx/conftest.py @@ -27,6 +27,9 @@ def pytest_collection_modifyitems(config, items): # TODO: The warnings below need to be dealt with, but for now we silence them. @pytest.fixture(autouse=True) def set_warnings(): + warnings.filterwarnings( + "ignore", category=DeprecationWarning, message=r"Ordered.* is deprecated" + ) warnings.filterwarnings( "ignore", category=DeprecationWarning,
Deprecate `Ordered` graph classes Since the next NetworkX release will drop support for Python 3.6, dictionaries should be ordered by default in all supported Python implementations moving forward (though it's worth double-checking that this is true for PyPy). Thus the `Ordered` graph classes will be redundant and can be deprecated in 2.6 in preparation for removal in 3.0. This [SO post](https://stackoverflow.com/questions/39980323/are-dictionaries-ordered-in-python-3-6) compiles a lot of useful links re: the status of guaranteed insertion order for various Python versions. The conclusion is that insertion-order preservation is guaranteed for dictionaries from Python 3.7 onward.
The [What's New in Python 3.6](https://docs.python.org/3.6/whatsnew/3.6.html#new-dict-implementation) documentation has a paragraph about ordered dicts. It states that this feature was pioneered by Pypy. A [post on Pypy documentation](https://morepypy.blogspot.com/2015/01/faster-more-memory-efficient-and-more.html) announced this way back in 2015. So pypy started the whole ordered dict thing and will continue it. I read the linked post more carefully as well, and realized that though Python dictionaries now preserve insertion order, there are still some differences between default dictionaries and `OrderedDict`. The [OrderedDict docs](https://docs.python.org/3/library/collections.html#ordereddict-objects) provide a convenient list. AFAICT, none of those differences directly impacts networkx, but I wanted to mention it just in case.
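A quick demonstration of why the `Ordered*` classes are redundant on Python >= 3.7 (a sketch; plain `Graph` already preserves insertion order):

```python
import networkx as nx

G = nx.Graph()  # no OrderedGraph needed
G.add_nodes_from(["c", "a", "b"])
print(list(G))  # ['c', 'a', 'b'] -- insertion order preserved
```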
2021-02-19T06:59:25
networkx/networkx
4,653
networkx__networkx-4653
[ "4061" ]
1de57a8ca852cf27820512720968c6cffb51a74b
diff --git a/networkx/drawing/layout.py b/networkx/drawing/layout.py --- a/networkx/drawing/layout.py +++ b/networkx/drawing/layout.py @@ -310,6 +310,10 @@ def bipartite_layout( import numpy as np + if align not in ("vertical", "horizontal"): + msg = "align must be either vertical or horizontal." + raise ValueError(msg) + G, center = _process_params(G, center=center, dim=2) if len(G) == 0: return {} @@ -322,36 +326,20 @@ def bipartite_layout( bottom = set(G) - top nodes = list(top) + list(bottom) - if align == "vertical": - left_xs = np.repeat(0, len(top)) - right_xs = np.repeat(width, len(bottom)) - left_ys = np.linspace(0, height, len(top)) - right_ys = np.linspace(0, height, len(bottom)) + left_xs = np.repeat(0, len(top)) + right_xs = np.repeat(width, len(bottom)) + left_ys = np.linspace(0, height, len(top)) + right_ys = np.linspace(0, height, len(bottom)) - top_pos = np.column_stack([left_xs, left_ys]) - offset - bottom_pos = np.column_stack([right_xs, right_ys]) - offset - - pos = np.concatenate([top_pos, bottom_pos]) - pos = rescale_layout(pos, scale=scale) + center - pos = dict(zip(nodes, pos)) - return pos + top_pos = np.column_stack([left_xs, left_ys]) - offset + bottom_pos = np.column_stack([right_xs, right_ys]) - offset + pos = np.concatenate([top_pos, bottom_pos]) + pos = rescale_layout(pos, scale=scale) + center if align == "horizontal": - top_ys = np.repeat(height, len(top)) - bottom_ys = np.repeat(0, len(bottom)) - top_xs = np.linspace(0, width, len(top)) - bottom_xs = np.linspace(0, width, len(bottom)) - - top_pos = np.column_stack([top_xs, top_ys]) - offset - bottom_pos = np.column_stack([bottom_xs, bottom_ys]) - offset - - pos = np.concatenate([top_pos, bottom_pos]) - pos = rescale_layout(pos, scale=scale) + center - pos = dict(zip(nodes, pos)) - return pos - - msg = "align must be either vertical or horizontal." - raise ValueError(msg) + pos = np.flip(pos, 1) + pos = dict(zip(nodes, pos)) + return pos @random_state(10) @@ -1081,6 +1069,10 @@ def multipartite_layout(G, subset_key="subset", align="vertical", scale=1, cente """ import numpy as np + if align not in ("vertical", "horizontal"): + msg = "align must be either vertical or horizontal." 
+ raise ValueError(msg) + G, center = _process_params(G, center=center, dim=2) if len(G) == 0: return {} @@ -1096,42 +1088,24 @@ def multipartite_layout(G, subset_key="subset", align="vertical", scale=1, cente pos = None nodes = [] - if align == "vertical": - width = len(layers) - for i, layer in layers.items(): - height = len(layer) - xs = np.repeat(i, height) - ys = np.arange(0, height, dtype=float) - offset = ((width - 1) / 2, (height - 1) / 2) - layer_pos = np.column_stack([xs, ys]) - offset - if pos is None: - pos = layer_pos - else: - pos = np.concatenate([pos, layer_pos]) - nodes.extend(layer) - pos = rescale_layout(pos, scale=scale) + center - pos = dict(zip(nodes, pos)) - return pos + width = len(layers) + for i, layer in layers.items(): + height = len(layer) + xs = np.repeat(i, height) + ys = np.arange(0, height, dtype=float) + offset = ((width - 1) / 2, (height - 1) / 2) + layer_pos = np.column_stack([xs, ys]) - offset + if pos is None: + pos = layer_pos + else: + pos = np.concatenate([pos, layer_pos]) + nodes.extend(layer) + pos = rescale_layout(pos, scale=scale) + center if align == "horizontal": - height = len(layers) - for i, layer in layers.items(): - width = len(layer) - xs = np.arange(0, width, dtype=float) - ys = np.repeat(i, width) - offset = ((width - 1) / 2, (height - 1) / 2) - layer_pos = np.column_stack([xs, ys]) - offset - if pos is None: - pos = layer_pos - else: - pos = np.concatenate([pos, layer_pos]) - nodes.extend(layer) - pos = rescale_layout(pos, scale=scale) + center - pos = dict(zip(nodes, pos)) - return pos - - msg = "align must be either vertical or horizontal." - raise ValueError(msg) + pos = np.flip(pos, 1) + pos = dict(zip(nodes, pos)) + return pos def rescale_layout(pos, scale=1):
Tighten partite_layout if/else code. Tighten up the bipartite_layout and multipartite_layout if/else as described in #3815
Hello I am a newcomer and would like to contribute. Can you explain a bit more about the issue? Basically you want to make one function to handle both bipartite_layout and multipartite_layout or optimize their respective if statements? The [original conversation consisted of 3 comments](https://github.com/networkx/networkx/pull/3815/files#r452534386) in a previous PR #3815. The intend was not combining those two functions, but rather to simplify the existing functions. The `multipartite_layout` code involves duplication of very similar lines, with e.g. only "x" and "y" switched. The idea is to refactor the `multipartite_layout` code to (hopefully) improve readability, performance and ease of maintenance. Some of the same techniques might also improve the code in `bipartite_layout`. But I think that has not been explored yet.
2021-03-05T12:11:45
networkx/networkx
4,655
networkx__networkx-4655
[ "4648" ]
f9570289c7abd568714e4114e0b9ce1d0ddc5dcf
diff --git a/networkx/algorithms/centrality/katz.py b/networkx/algorithms/centrality/katz.py --- a/networkx/algorithms/centrality/katz.py +++ b/networkx/algorithms/centrality/katz.py @@ -142,7 +142,7 @@ def katz_centrality( .. [2] Leo Katz: A New Status Index Derived from Sociometric Index. Psychometrika 18(1):39–43, 1953 - http://phya.snu.ac.kr/~dkim/PRL87278701.pdf + https://link.springer.com/content/pdf/10.1007/BF02289026.pdf @@ -302,7 +302,7 @@ def katz_centrality_numpy(G, alpha=0.1, beta=1.0, normalized=True, weight=None): .. [2] Leo Katz: A New Status Index Derived from Sociometric Index. Psychometrika 18(1):39–43, 1953 - http://phya.snu.ac.kr/~dkim/PRL87278701.pdf + https://link.springer.com/content/pdf/10.1007/BF02289026.pdf
Incorrect link to the paper in katz_centrality documentation ### Current Behavior Link [2] in katz_centrality documentation directs to the incorrect paper: 'Universal Behavior of Load Distribution in Scale-Free Networks' ### Expected Behavior Link [2] in katz_centrality documentation directs to the correct paper: 'A New Status Index Derived from Sociometric Index' ### Steps to Reproduce Open the page https://networkx.org/documentation/stable/reference/algorithms/generated/networkx.algorithms.centrality.katz_centrality.html ### Additional context I guess the correct link is https://link.springer.com/content/pdf/10.1007/BF02289026.pdf
Looks like that spurious link appears twice in that module/file. Thanks!
2021-03-05T15:39:18
networkx/networkx
4,659
networkx__networkx-4659
[ "3459", "3281" ]
45bd170c5493347f38616fc3ecbab55fb615a91b
diff --git a/networkx/generators/random_graphs.py b/networkx/generators/random_graphs.py --- a/networkx/generators/random_graphs.py +++ b/networkx/generators/random_graphs.py @@ -5,12 +5,13 @@ import itertools import math +from collections import defaultdict import networkx as nx from networkx.utils import py_random_state -from .classic import empty_graph, path_graph, complete_graph + +from .classic import complete_graph, empty_graph, path_graph, star_graph from .degree_seq import degree_sequence_tree -from collections import defaultdict __all__ = [ "fast_gnp_random_graph", @@ -615,9 +616,8 @@ def _random_subset(seq, m, rng): @py_random_state(2) -def barabasi_albert_graph(n, m, seed=None): - """Returns a random graph according to the BarabΓ‘si–Albert preferential - attachment model. +def barabasi_albert_graph(n, m, seed=None, initial_graph=None): + """Returns a random graph using BarabΓ‘si–Albert preferential attachment A graph of $n$ nodes is grown by attaching new nodes each with $m$ edges that are preferentially attached to existing nodes with high degree. @@ -631,6 +631,11 @@ def barabasi_albert_graph(n, m, seed=None): seed : integer, random_state, or None (default) Indicator of random number generation state. See :ref:`Randomness<randomness>`. + initial_graph : Graph or None (default) + Initial network for BarabΓ‘si–Albert algorithm. + It should be a connected graph for most use cases. + A copy of `initial_graph` is used. + If None, starts from a star graph on (m+1) nodes. Returns ------- @@ -639,7 +644,8 @@ def barabasi_albert_graph(n, m, seed=None): Raises ------ NetworkXError - If `m` does not satisfy ``1 <= m < n``. + If `m` does not satisfy ``1 <= m < n``, or + the initial graph number of nodes m0 does not satisfy ``m <= m0 <= n``. References ---------- @@ -652,32 +658,38 @@ def barabasi_albert_graph(n, m, seed=None): f"BarabΓ‘si–Albert network must have m >= 1 and m < n, m = {m}, n = {n}" ) - # Add m initial nodes (m0 in barabasi-speak) - G = empty_graph(m) - # Target nodes for new edges - targets = list(range(m)) + if initial_graph is None: + # Default initial graph : star graph on (m + 1) nodes + G = star_graph(m) + else: + if len(initial_graph) < m or len(initial_graph) > n: + raise nx.NetworkXError( + f"BarabΓ‘si–Albert initial graph needs between m={m} and n={n} nodes" + ) + G = initial_graph.copy() + # List of existing nodes, with nodes repeated once for each adjacent edge - repeated_nodes = [] - # Start adding the other n-m nodes. The first node is m. - source = m + repeated_nodes = [n for n, d in G.degree() for _ in range(d)] + # Start adding the other n - m0 nodes. + source = len(G) while source < n: + # Now choose m unique nodes from the existing nodes + # Pick uniformly from repeated_nodes (preferential attachment) + targets = _random_subset(repeated_nodes, m, seed) # Add edges to m nodes from the source. G.add_edges_from(zip([source] * m, targets)) # Add one node to the list for each new edge just created. repeated_nodes.extend(targets) # And the new node "source" has m edges to add to the list. repeated_nodes.extend([source] * m) - # Now choose m unique nodes from the existing nodes - # Pick uniformly from repeated_nodes (preferential attachment) - targets = _random_subset(repeated_nodes, m, seed) + source += 1 return G @py_random_state(4) -def dual_barabasi_albert_graph(n, m1, m2, p, seed=None): - """Returns a random graph according to the dual BarabΓ‘si–Albert preferential - attachment model. 
+def dual_barabasi_albert_graph(n, m1, m2, p, seed=None, initial_graph=None): + """Returns a random graph using dual BarabΓ‘si–Albert preferential attachment A graph of $n$ nodes is grown by attaching new nodes each with either $m_1$ edges (with probability $p$) or $m_2$ edges (with probability $1-p$) that @@ -688,14 +700,19 @@ def dual_barabasi_albert_graph(n, m1, m2, p, seed=None): n : int Number of nodes m1 : int - Number of edges to attach from a new node to existing nodes with probability $p$ + Number of edges to link each new node to existing nodes with probability $p$ m2 : int - Number of edges to attach from a new node to existing nodes with probability $1-p$ + Number of edges to link each new node to existing nodes with probability $1-p$ p : float The probability of attaching $m_1$ edges (as opposed to $m_2$ edges) seed : integer, random_state, or None (default) Indicator of random number generation state. See :ref:`Randomness<randomness>`. + initial_graph : Graph or None (default) + Initial network for BarabΓ‘si–Albert algorithm. + A copy of `initial_graph` is used. + It should be connected for most use cases. + If None, starts from an star graph on max(m1, m2) + 1 nodes. Returns ------- @@ -704,7 +721,9 @@ def dual_barabasi_albert_graph(n, m1, m2, p, seed=None): Raises ------ NetworkXError - If `m1` and `m2` do not satisfy ``1 <= m1,m2 < n`` or `p` does not satisfy ``0 <= p <= 1``. + If `m1` and `m2` do not satisfy ``1 <= m1,m2 < n``, or + `p` does not satisfy ``0 <= p <= 1``, or + the initial graph number of nodes m0 does not satisfy m1, m2 <= m0 <= n. References ---------- @@ -713,11 +732,11 @@ def dual_barabasi_albert_graph(n, m1, m2, p, seed=None): if m1 < 1 or m1 >= n: raise nx.NetworkXError( - f"Dual BarabΓ‘si–Albert network must have m1 >= 1 and m1 < n, m1 = {m1}, n = {n}" + f"Dual BarabΓ‘si–Albert must have m1 >= 1 and m1 < n, m1 = {m1}, n = {n}" ) if m2 < 1 or m2 >= n: raise nx.NetworkXError( - f"Dual BarabΓ‘si–Albert network must have m2 >= 1 and m2 < n, m2 = {m2}, n = {n}" + f"Dual BarabΓ‘si–Albert must have m2 >= 1 and m2 < n, m2 = {m2}, n = {n}" ) if p < 0 or p > 1: raise nx.NetworkXError( @@ -730,27 +749,25 @@ def dual_barabasi_albert_graph(n, m1, m2, p, seed=None): elif p == 0: return barabasi_albert_graph(n, m2, seed) - # Add max(m1,m2) initial nodes (m0 in barabasi-speak) - G = empty_graph(max(m1, m2)) + if initial_graph is None: + # Default initial graph : empty graph on max(m1, m2) nodes + G = star_graph(max(m1, m2)) + else: + if len(initial_graph) < max(m1, m2) or len(initial_graph) > n: + raise nx.NetworkXError( + f"BarabΓ‘si–Albert initial graph must have between " + f"max(m1, m2) = {max(m1, m2)} and n = {n} nodes" + ) + G = initial_graph.copy() + # Target nodes for new edges - targets = list(range(max(m1, m2))) + targets = list(G) # List of existing nodes, with nodes repeated once for each adjacent edge - repeated_nodes = [] + repeated_nodes = [n for n, d in G.degree() for _ in range(d)] # Start adding the remaining nodes. - source = max(m1, m2) - # Pick which m to use first time (m1 or m2) - if seed.random() < p: - m = m1 - else: - m = m2 + source = len(G) while source < n: - # Add edges to m nodes from the source. - G.add_edges_from(zip([source] * m, targets)) - # Add one node to the list for each new edge just created. - repeated_nodes.extend(targets) - # And the new node "source" has m edges to add to the list. 
- repeated_nodes.extend([source] * m) - # Pick which m to use next time (m1 or m2) + # Pick which m to use (m1 or m2) if seed.random() < p: m = m1 else: @@ -758,6 +775,13 @@ def dual_barabasi_albert_graph(n, m1, m2, p, seed=None): # Now choose m unique nodes from the existing nodes # Pick uniformly from repeated_nodes (preferential attachment) targets = _random_subset(repeated_nodes, m, seed) + # Add edges to m nodes from the source. + G.add_edges_from(zip([source] * m, targets)) + # Add one node to the list for each new edge just created. + repeated_nodes.extend(targets) + # And the new node "source" has m edges to add to the list. + repeated_nodes.extend([source] * m) + source += 1 return G
diff --git a/networkx/generators/tests/test_random_graphs.py b/networkx/generators/tests/test_random_graphs.py --- a/networkx/generators/tests/test_random_graphs.py +++ b/networkx/generators/tests/test_random_graphs.py @@ -1,95 +1,78 @@ """Unit tests for the :mod:`networkx.generators.random_graphs` module. """ +import networkx as nx import pytest -from networkx.exception import NetworkXError -from networkx.generators.random_graphs import barabasi_albert_graph -from networkx.generators.random_graphs import dual_barabasi_albert_graph -from networkx.generators.random_graphs import extended_barabasi_albert_graph -from networkx.generators.random_graphs import binomial_graph -from networkx.generators.random_graphs import connected_watts_strogatz_graph -from networkx.generators.random_graphs import dense_gnm_random_graph -from networkx.generators.random_graphs import erdos_renyi_graph -from networkx.generators.random_graphs import fast_gnp_random_graph -from networkx.generators.random_graphs import gnm_random_graph -from networkx.generators.random_graphs import gnp_random_graph -from networkx.generators.random_graphs import newman_watts_strogatz_graph -from networkx.generators.random_graphs import powerlaw_cluster_graph -from networkx.generators.random_graphs import random_kernel_graph -from networkx.generators.random_graphs import random_lobster -from networkx.generators.random_graphs import random_powerlaw_tree -from networkx.generators.random_graphs import random_powerlaw_tree_sequence -from networkx.generators.random_graphs import random_regular_graph -from networkx.generators.random_graphs import random_shell_graph -from networkx.generators.random_graphs import watts_strogatz_graph - class TestGeneratorsRandom: def test_random_graph(self): seed = 42 - G = gnp_random_graph(100, 0.25, seed) - G = gnp_random_graph(100, 0.25, seed, directed=True) - G = binomial_graph(100, 0.25, seed) - G = erdos_renyi_graph(100, 0.25, seed) - G = fast_gnp_random_graph(100, 0.25, seed) - G = fast_gnp_random_graph(100, 0.25, seed, directed=True) - G = gnm_random_graph(100, 20, seed) - G = gnm_random_graph(100, 20, seed, directed=True) - G = dense_gnm_random_graph(100, 20, seed) - - G = watts_strogatz_graph(10, 2, 0.25, seed) + G = nx.gnp_random_graph(100, 0.25, seed) + G = nx.gnp_random_graph(100, 0.25, seed, directed=True) + G = nx.binomial_graph(100, 0.25, seed) + G = nx.erdos_renyi_graph(100, 0.25, seed) + G = nx.fast_gnp_random_graph(100, 0.25, seed) + G = nx.fast_gnp_random_graph(100, 0.25, seed, directed=True) + G = nx.gnm_random_graph(100, 20, seed) + G = nx.gnm_random_graph(100, 20, seed, directed=True) + G = nx.dense_gnm_random_graph(100, 20, seed) + + G = nx.watts_strogatz_graph(10, 2, 0.25, seed) assert len(G) == 10 assert G.number_of_edges() == 10 - G = connected_watts_strogatz_graph(10, 2, 0.1, tries=10, seed=seed) + G = nx.connected_watts_strogatz_graph(10, 2, 0.1, tries=10, seed=seed) assert len(G) == 10 assert G.number_of_edges() == 10 pytest.raises( - NetworkXError, connected_watts_strogatz_graph, 10, 2, 0.1, tries=0 + nx.NetworkXError, nx.connected_watts_strogatz_graph, 10, 2, 0.1, tries=0 ) - G = watts_strogatz_graph(10, 4, 0.25, seed) + G = nx.watts_strogatz_graph(10, 4, 0.25, seed) assert len(G) == 10 assert G.number_of_edges() == 20 - G = newman_watts_strogatz_graph(10, 2, 0.0, seed) + G = nx.newman_watts_strogatz_graph(10, 2, 0.0, seed) assert len(G) == 10 assert G.number_of_edges() == 10 - G = newman_watts_strogatz_graph(10, 4, 0.25, seed) + G = nx.newman_watts_strogatz_graph(10, 4, 0.25, 
seed) assert len(G) == 10 assert G.number_of_edges() >= 20 - G = barabasi_albert_graph(100, 1, seed) - G = barabasi_albert_graph(100, 3, seed) + G = nx.barabasi_albert_graph(100, 1, seed) + G = nx.barabasi_albert_graph(100, 3, seed) assert G.number_of_edges() == (97 * 3) - G = extended_barabasi_albert_graph(100, 1, 0, 0, seed) + G = nx.barabasi_albert_graph(100, 3, seed, nx.complete_graph(5)) + assert G.number_of_edges() == (10 + 95 * 3) + + G = nx.extended_barabasi_albert_graph(100, 1, 0, 0, seed) assert G.number_of_edges() == 99 - G = extended_barabasi_albert_graph(100, 3, 0, 0, seed) + G = nx.extended_barabasi_albert_graph(100, 3, 0, 0, seed) assert G.number_of_edges() == 97 * 3 - G = extended_barabasi_albert_graph(100, 1, 0, 0.5, seed) + G = nx.extended_barabasi_albert_graph(100, 1, 0, 0.5, seed) assert G.number_of_edges() == 99 - G = extended_barabasi_albert_graph(100, 2, 0.5, 0, seed) + G = nx.extended_barabasi_albert_graph(100, 2, 0.5, 0, seed) assert G.number_of_edges() > 100 * 3 assert G.number_of_edges() < 100 * 4 - G = extended_barabasi_albert_graph(100, 2, 0.3, 0.3, seed) + G = nx.extended_barabasi_albert_graph(100, 2, 0.3, 0.3, seed) assert G.number_of_edges() > 100 * 2 assert G.number_of_edges() < 100 * 4 - G = powerlaw_cluster_graph(100, 1, 1.0, seed) - G = powerlaw_cluster_graph(100, 3, 0.0, seed) + G = nx.powerlaw_cluster_graph(100, 1, 1.0, seed) + G = nx.powerlaw_cluster_graph(100, 3, 0.0, seed) assert G.number_of_edges() == (97 * 3) - G = random_regular_graph(10, 20, seed) + G = nx.random_regular_graph(10, 20, seed) - pytest.raises(NetworkXError, random_regular_graph, 3, 21) - pytest.raises(NetworkXError, random_regular_graph, 33, 21) + pytest.raises(nx.NetworkXError, nx.random_regular_graph, 3, 21) + pytest.raises(nx.NetworkXError, nx.random_regular_graph, 33, 21) constructor = [(10, 20, 0.8), (20, 40, 0.8)] - G = random_shell_graph(constructor, seed) + G = nx.random_shell_graph(constructor, seed) def is_caterpillar(g): """ @@ -113,20 +96,20 @@ def is_lobster(g): non_leafs = [n for n in g if g.degree(n) > 1] return is_caterpillar(g.subgraph(non_leafs)) - G = random_lobster(10, 0.1, 0.5, seed) + G = nx.random_lobster(10, 0.1, 0.5, seed) assert max([G.degree(n) for n in G.nodes()]) > 3 assert is_lobster(G) - pytest.raises(NetworkXError, random_lobster, 10, 0.1, 1, seed) - pytest.raises(NetworkXError, random_lobster, 10, 1, 1, seed) - pytest.raises(NetworkXError, random_lobster, 10, 1, 0.5, seed) + pytest.raises(nx.NetworkXError, nx.random_lobster, 10, 0.1, 1, seed) + pytest.raises(nx.NetworkXError, nx.random_lobster, 10, 1, 1, seed) + pytest.raises(nx.NetworkXError, nx.random_lobster, 10, 1, 0.5, seed) # docstring says this should be a caterpillar - G = random_lobster(10, 0.1, 0.0, seed) + G = nx.random_lobster(10, 0.1, 0.0, seed) assert is_caterpillar(G) # difficult to find seed that requires few tries - seq = random_powerlaw_tree_sequence(10, 3, seed=14, tries=1) - G = random_powerlaw_tree(10, 3, seed=14, tries=1) + seq = nx.random_powerlaw_tree_sequence(10, 3, seed=14, tries=1) + G = nx.random_powerlaw_tree(10, 3, seed=14, tries=1) def test_dual_barabasi_albert(self, m1=1, m2=4, p=0.5): """ @@ -137,28 +120,42 @@ def test_dual_barabasi_albert(self, m1=1, m2=4, p=0.5): The graphs generation are repeated several times to prevent lucky shots """ - seed = 42 - repeats = 2 + seeds = [42, 314, 2718] + initial_graph = nx.complete_graph(10) - while repeats: - repeats -= 1 + for seed in seeds: # This should be BA with m = m1 - BA1 = barabasi_albert_graph(100, m1, seed) - DBA1 = 
dual_barabasi_albert_graph(100, m1, m2, 1, seed) - assert BA1.size() == DBA1.size() + BA1 = nx.barabasi_albert_graph(100, m1, seed) + DBA1 = nx.dual_barabasi_albert_graph(100, m1, m2, 1, seed) + assert BA1.edges() == DBA1.edges() # This should be BA with m = m2 - BA2 = barabasi_albert_graph(100, m2, seed) - DBA2 = dual_barabasi_albert_graph(100, m1, m2, 0, seed) - assert BA2.size() == DBA2.size() + BA2 = nx.barabasi_albert_graph(100, m2, seed) + DBA2 = nx.dual_barabasi_albert_graph(100, m1, m2, 0, seed) + assert BA2.edges() == DBA2.edges() + + BA3 = nx.barabasi_albert_graph(100, m1, seed) + DBA3 = nx.dual_barabasi_albert_graph(100, m1, m1, p, seed) + # We can't compare edges here since randomness is "consumed" when drawing + # between m1 and m2 + assert BA3.size() == DBA3.size() + + DBA = nx.dual_barabasi_albert_graph(100, m1, m2, p, seed, initial_graph) + BA1 = nx.barabasi_albert_graph(100, m1, seed, initial_graph) + BA2 = nx.barabasi_albert_graph(100, m2, seed, initial_graph) + assert ( + min(BA1.size(), BA2.size()) <= DBA.size() <= max(BA1.size(), BA2.size()) + ) # Testing exceptions - dbag = dual_barabasi_albert_graph - pytest.raises(NetworkXError, dbag, m1, m1, m2, 0) - pytest.raises(NetworkXError, dbag, m2, m1, m2, 0) - pytest.raises(NetworkXError, dbag, 100, m1, m2, -0.5) - pytest.raises(NetworkXError, dbag, 100, m1, m2, 1.5) + dbag = nx.dual_barabasi_albert_graph + pytest.raises(nx.NetworkXError, dbag, m1, m1, m2, 0) + pytest.raises(nx.NetworkXError, dbag, m2, m1, m2, 0) + pytest.raises(nx.NetworkXError, dbag, 100, m1, m2, -0.5) + pytest.raises(nx.NetworkXError, dbag, 100, m1, m2, 1.5) + initial = nx.complete_graph(max(m1, m2) - 1) + pytest.raises(nx.NetworkXError, dbag, 100, m1, m2, p, initial_graph=initial) def test_extended_barabasi_albert(self, m=2): """ @@ -169,36 +166,34 @@ def test_extended_barabasi_albert(self, m=2): The graphs generation are repeated several times to prevent lucky-shots """ - seed = 42 - repeats = 2 - BA_model = barabasi_albert_graph(100, m, seed) - BA_model_edges = BA_model.number_of_edges() + seeds = [42, 314, 2718] - while repeats: - repeats -= 1 + for seed in seeds: + BA_model = nx.barabasi_albert_graph(100, m, seed) + BA_model_edges = BA_model.number_of_edges() # This behaves just like BA, the number of edges must be the same - G1 = extended_barabasi_albert_graph(100, m, 0, 0, seed) + G1 = nx.extended_barabasi_albert_graph(100, m, 0, 0, seed) assert G1.size() == BA_model_edges # More than twice more edges should have been added - G1 = extended_barabasi_albert_graph(100, m, 0.8, 0, seed) + G1 = nx.extended_barabasi_albert_graph(100, m, 0.8, 0, seed) assert G1.size() > BA_model_edges * 2 # Only edge rewiring, so the number of edges less than original - G2 = extended_barabasi_albert_graph(100, m, 0, 0.8, seed) + G2 = nx.extended_barabasi_albert_graph(100, m, 0, 0.8, seed) assert G2.size() == BA_model_edges # Mixed scenario: less edges than G1 and more edges than G2 - G3 = extended_barabasi_albert_graph(100, m, 0.3, 0.3, seed) + G3 = nx.extended_barabasi_albert_graph(100, m, 0.3, 0.3, seed) assert G3.size() > G2.size() assert G3.size() < G1.size() # Testing exceptions - ebag = extended_barabasi_albert_graph - pytest.raises(NetworkXError, ebag, m, m, 0, 0) - pytest.raises(NetworkXError, ebag, 1, 0.5, 0, 0) - pytest.raises(NetworkXError, ebag, 100, 2, 0.5, 0.5) + ebag = nx.extended_barabasi_albert_graph + pytest.raises(nx.NetworkXError, ebag, m, m, 0, 0) + pytest.raises(nx.NetworkXError, ebag, 1, 0.5, 0, 0) + pytest.raises(nx.NetworkXError, ebag, 100, 2, 
0.5, 0.5) def test_random_zero_regular_graph(self): """Tests that a 0-regular graph has the correct number of nodes and @@ -206,16 +201,16 @@ def test_random_zero_regular_graph(self): """ seed = 42 - G = random_regular_graph(0, 10, seed) + G = nx.random_regular_graph(0, 10, seed) assert len(G) == 10 assert sum(1 for _ in G.edges()) == 0 def test_gnp(self): for generator in [ - gnp_random_graph, - binomial_graph, - erdos_renyi_graph, - fast_gnp_random_graph, + nx.gnp_random_graph, + nx.binomial_graph, + nx.erdos_renyi_graph, + nx.fast_gnp_random_graph, ]: G = generator(10, -1.1) assert len(G) == 10 @@ -253,39 +248,39 @@ def test_gnp(self): assert abs(edges / float(runs) - 90) <= runs * 2.0 / 100 def test_gnm(self): - G = gnm_random_graph(10, 3) + G = nx.gnm_random_graph(10, 3) assert len(G) == 10 assert sum(1 for _ in G.edges()) == 3 - G = gnm_random_graph(10, 3, seed=42) + G = nx.gnm_random_graph(10, 3, seed=42) assert len(G) == 10 assert sum(1 for _ in G.edges()) == 3 - G = gnm_random_graph(10, 100) + G = nx.gnm_random_graph(10, 100) assert len(G) == 10 assert sum(1 for _ in G.edges()) == 45 - G = gnm_random_graph(10, 100, directed=True) + G = nx.gnm_random_graph(10, 100, directed=True) assert len(G) == 10 assert sum(1 for _ in G.edges()) == 90 - G = gnm_random_graph(10, -1.1) + G = nx.gnm_random_graph(10, -1.1) assert len(G) == 10 assert sum(1 for _ in G.edges()) == 0 def test_watts_strogatz_big_k(self): # Test to make sure than n <= k - pytest.raises(NetworkXError, watts_strogatz_graph, 10, 11, 0.25) - pytest.raises(NetworkXError, newman_watts_strogatz_graph, 10, 11, 0.25) + pytest.raises(nx.NetworkXError, nx.watts_strogatz_graph, 10, 11, 0.25) + pytest.raises(nx.NetworkXError, nx.newman_watts_strogatz_graph, 10, 11, 0.25) # could create an infinite loop, now doesn't # infinite loop used to occur when a node has degree n-1 and needs to rewire - watts_strogatz_graph(10, 9, 0.25, seed=0) - newman_watts_strogatz_graph(10, 9, 0.5, seed=0) + nx.watts_strogatz_graph(10, 9, 0.25, seed=0) + nx.newman_watts_strogatz_graph(10, 9, 0.5, seed=0) # Test k==n scenario - watts_strogatz_graph(10, 10, 0.25, seed=0) - newman_watts_strogatz_graph(10, 10, 0.25, seed=0) + nx.watts_strogatz_graph(10, 10, 0.25, seed=0) + nx.newman_watts_strogatz_graph(10, 10, 0.25, seed=0) def test_random_kernel_graph(self): def integral(u, w, z): @@ -295,6 +290,6 @@ def root(u, w, r): return r / c + w c = 1 - graph = random_kernel_graph(1000, integral, root) - graph = random_kernel_graph(1000, integral, root, seed=42) + graph = nx.random_kernel_graph(1000, integral, root) + graph = nx.random_kernel_graph(1000, integral, root, seed=42) assert len(graph) == 1000
Allow generators to start with a seed network

For example, instead of always starting Barabasi-Albert networks with the same base network, add a new parameter `initial` that takes a graph, and the preferential attachment proceeds from there. This would allow the study of the growth of the Barabasi-Albert network, capturing its behavior after every few steps, instead of always starting from scratch. Here is some sample code for B-A:
```
def barabasi_albert_graph(n, m, seed=None, initial=None):
    """Returns a random graph according to the Barabási–Albert preferential
    attachment model.

    A graph of $n$ nodes is grown by attaching new nodes each with $m$
    edges that are preferentially attached to existing nodes with high degree.

    Parameters
    ----------
    n : int
        Number of nodes
    m : int
        Number of edges to attach from a new node to existing nodes
    seed : integer, random_state, or None (default)
        Indicator of random number generation state.
        See :ref:`Randomness<randomness>`.
    initial : Graph or None
        If Graph, starts with that Graph as the initial graph
        If None, starts with m nodes and zero edges

    Returns
    -------
    G : Graph

    Raises
    ------
    NetworkXError
        If `m` does not satisfy ``1 <= m < n``.

    References
    ----------
    .. [1] A. L. Barabási and R. Albert "Emergence of scaling in
       random networks", Science 286, pp 509-512, 1999.
    """

    if m < 1 or m >= n:
        raise nx.NetworkXError("Barabási–Albert network must have m >= 1"
                               " and m < n, m = %d, n = %d" % (m, n))

    if initial is None:
        # Add m initial nodes (m0 in barabasi-speak)
        G = empty_graph(m)
        # Target nodes for new edges
        targets = list(range(m))
        # List of existing nodes, with nodes repeated once for each adjacent edge
        repeated_nodes = []
        # Start adding the other n-m nodes. The first node is m.
        source = m
    else:
        G = initial
        targets = list(G.nodes())
        # Nodes repeated once per incident edge in the seed graph
        repeated_nodes = [node for node, d in G.degree() for _ in range(d)]
        source = len(G.nodes())

    while source < n:
        # Add edges to m nodes from the source.
        G.add_edges_from(zip([source] * m, targets))
        # Add one node to the list for each new edge just created.
        repeated_nodes.extend(targets)
        # And the new node "source" has m edges to add to the list.
        repeated_nodes.extend([source] * m)
        # Now choose m unique nodes from the existing nodes
        # Pick uniformly from repeated_nodes (preferential attachment)
        targets = _random_subset(repeated_nodes, m, seed)
        source += 1
    return G
```
(I don't have the time to create a fork and pull request right this moment.)

barabasi_albert_graph does not start off with a connected network

`barabasi_albert_graph` starts with `G = empty_graph(m)` instead of `G = complete_graph(m)`.
As a result, the early new nodes are almost certain to become hubs, since the distribution is heavily skewed in their favour. The initial `m` nodes have almost zero chance to be a hub.

To reproduce:
```
hubs = []
for i in range(100):
    G = nx.barabasi_albert_graph(1000, 500)
    hubs.append(sorted(G, key=lambda n: len(G[n]), reverse=True)[0])
count_dict = {}
for hub in hubs:
    count_dict[hub] = hubs.count(hub)
print('\nfrequency:\t', count_dict)

# prints
# frequency: {500: 34, 501: 27, 502: 13, 503: 10, 504: 11, 505: 2, 506: 3}
```
The first new node (500) has the highest chance of becoming the hub. This is not random enough — the eventual hub is almost always one of the first few nodes added after the seed.

Quote from Wikipedia (https://en.wikipedia.org/wiki/Barab%C3%A1si%E2%80%93Albert_model):

> The network begins with an initial connected network of m_0 nodes.

And logically it has to start from a connected network for the initial `m` nodes to have a chance to be a hub. 
The effect would not be noticeable in "small world" kinds of networks (average node degree << number of nodes) but is profound in networks where n and m are close to each other.
See also #3281 which discusses one example choice of initial graph (the complete graph)

Hi Aleksey - Thanks for your code and comments. There are many different "preferential attachment" models one could use (and many have been explored in the literature). I think the goal for the barabasi_albert_graph code was to implement the model in this paper http://barabasi.com/f/67.pdf. Can you take a look and see if that is the case?

It looks like there are a couple of other barabasi-albert variants in the networkx code base - see https://github.com/networkx/networkx/blob/master/networkx/generators/random_graphs.py

If there are other useful variants perhaps suggest adding them?

Aric

Unfortunately, neither of the original papers by those authors indicate the edges between the original `m_0` nodes. This is the reason I had to use Wikipedia link above to support this issue.

Logically, the original `m` nodes have to be fully connected. Consider what happens if `m=1000` and they are not. The first new node added will have to form `1000` connections, so it will connect to every single pre-existing one. The second new node will arrive and see a thousand nodes with degree 1 and 1 node with degree 1000. It will be 1000 times more likely to connect to the first new node, than to any of the other existing ones.

If the original 1000 nodes form a connected graph, the second new node will see 1001 nodes with degree 1000 and will have equal chance of connecting to any of them. Thus, it will be much harder to predict which of the early nodes will grow to become a hub.

I looked at the other variants in the file you linked. They all have the same problem.

Yes, I agree with your assessment. And for the barabasi_albert_graph algorithm in networkx I'm advocating to keep it as written to match the original paper. I'd recommend either adding an option (not the default) to fully connect the initial nodes or a separate algorithm that implements that variant. In principle one could initialize the algorithm with any graph on m nodes. 
If it helps, here is a snippet of what I am using now.
```
import networkx as nx
from networkx.utils import random_weighted_sample

def generate_sf(n, m0, m):
    """ Create a Barabási–Albert graph """
    G = nx.complete_graph(m0)
    while len(G) < n:
        degrees = {node: val for (node, val) in G.degree()}
        node = len(G)
        G.add_node(node)
        targets = random_weighted_sample(degrees, m)
        G.add_edges_from(zip([node] * m, targets))
    return G
```
It also allows for `m < m_0`, as in the original paper. Performance is not great, though. I think this is due to the reinitialization of the weights (`degrees`) on every iteration. This snippet can be improved for inclusion into networkx.
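One way the per-iteration rebuild of `degrees` could be avoided is to maintain the degree-weighted list incrementally, which is the same trick the stock generator uses. The sketch below is illustrative only: the name `generate_sf_fast` and the `seed` handling are made up here, and it assumes `m0 >= 2` (so the seed graph has at least one edge) and `m <= m0`.

```python
import random

import networkx as nx


def generate_sf_fast(n, m0, m, seed=None):
    """Barabási–Albert variant seeded with a complete graph on m0 nodes."""
    rng = random.Random(seed)
    G = nx.complete_graph(m0)
    # One entry per incident edge, so a uniform draw from this list is a
    # degree-proportional (preferential attachment) draw over the nodes.
    repeated_nodes = [node for node, deg in G.degree() for _ in range(deg)]
    for source in range(m0, n):
        # Draw m distinct targets; duplicate draws are absorbed by the set.
        targets = set()
        while len(targets) < m:
            targets.add(rng.choice(repeated_nodes))
        G.add_edges_from((source, target) for target in targets)
        # Update the weights in O(m) instead of recomputing all degrees.
        repeated_nodes.extend(targets)
        repeated_nodes.extend([source] * m)
    return G


# e.g. G = generate_sf_fast(1000, 5, 3, seed=42)
```
This keeps each growth step at amortized O(m) work rather than O(|V|) for rebuilding the degree dict.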
2021-03-06T21:51:05
networkx/networkx
4,667
networkx__networkx-4667
[ "3306" ]
e03144e0780898fed4afa976fff86f2710aa5d40
diff --git a/networkx/algorithms/bipartite/matching.py b/networkx/algorithms/bipartite/matching.py --- a/networkx/algorithms/bipartite/matching.py +++ b/networkx/algorithms/bipartite/matching.py @@ -346,14 +346,14 @@ def _alternating_dfs(u, along_matched=True): will continue only through edges *not* in the given matching. """ - if along_matched: - edges = itertools.cycle([matched_edges, unmatched_edges]) - else: - edges = itertools.cycle([unmatched_edges, matched_edges]) visited = set() - stack = [(u, iter(G[u]), next(edges))] + # Follow matched edges when depth is even, + # and follow unmatched edges when depth is odd. + initial_depth = 0 if along_matched else 1 + stack = [(u, iter(G[u]), initial_depth)] while stack: - parent, children, valid_edges = stack[-1] + parent, children, depth = stack[-1] + valid_edges = matched_edges if depth % 2 else unmatched_edges try: child = next(children) if child not in visited: @@ -361,7 +361,7 @@ def _alternating_dfs(u, along_matched=True): if child in targets: return True visited.add(child) - stack.append((child, iter(G[child]), next(edges))) + stack.append((child, iter(G[child]), depth + 1)) except StopIteration: stack.pop() return False
diff --git a/networkx/algorithms/bipartite/tests/test_matching.py b/networkx/algorithms/bipartite/tests/test_matching.py --- a/networkx/algorithms/bipartite/tests/test_matching.py +++ b/networkx/algorithms/bipartite/tests/test_matching.py @@ -180,6 +180,16 @@ def test_vertex_cover_issue_2384(self): for u, v in G.edges(): assert u in vertex_cover or v in vertex_cover + def test_vertex_cover_issue_3306(self): + G = nx.Graph() + edges = [(0, 2), (1, 0), (1, 1), (1, 2), (2, 2)] + G.add_edges_from([((i, "L"), (j, "R")) for i, j in edges]) + + matching = maximum_matching(G) + vertex_cover = to_vertex_cover(G, matching) + for u, v in G.edges(): + assert u in vertex_cover or v in vertex_cover + def test_unorderable_nodes(self): a = object() b = object()
hopcroft_karp_matching / to_vertex_cover bug

`to_vertex_cover` must give the **minimum vertex cover** as documented. Wasn't it fixed in #2384? The following code gives random results:
```python
import networkx as nx

nodesU = ['r1','r2','r3','r4','r5','r6','r7','r8']
nodesV = ['c1','c2','c3','c4','c5','c6','c7','c8']
edges = [
    ('r1','c2'),('r1','c4'),('r1','c5'),('r1','c8'),
    ('r2','c1'),('r2','c2'),('r2','c3'),('r2','c6'),('r2','c7'),
    ('r3','c4'),('r3','c5'),
    ('r4','c2'),('r4','c8'),
    ('r5','c2'),('r5','c8'),
    ('r6','c1'),('r6','c2'),('r6','c3'),('r6','c5'),('r6','c6'),('r6','c7'),
    ('r7','c2'),('r7','c8'),
    ('r8','c2'),('r8','c4')]

G = nx.Graph()
G.add_nodes_from(nodesU, bipartite=0)
G.add_nodes_from(nodesV, bipartite=1)
G.add_edges_from(edges)

M = nx.bipartite.hopcroft_karp_matching(G, top_nodes=nodesU)
print('matchings M = ', M)
print('|M| = ', len(M))
K = nx.bipartite.to_vertex_cover(G, M, top_nodes=nodesU)
print('minimum cover K = ', K)
print('|K| = ', len(K))
```
Here are the results of multiple runs:
```
matchings M = {'c1': 'r6', 'r7': 'c2', 'r5': 'c8', 'r1': 'c5', 'r8': 'c4', 'r6': 'c1', 'c5': 'r1', 'c8': 'r5', 'r2': 'c7', 'c7': 'r2', 'c2': 'r7', 'c4': 'r8'}
|M| = 12
minimum cover K = {'c1', 'c5', 'c7', 'c2', 'c8', 'c4'}
|K| = 6

matchings M = {'c6': 'r6', 'r6': 'c6', 'r1': 'c4', 'r4': 'c2', 'c3': 'r2', 'r5': 'c8', 'c5': 'r3', 'r3': 'c5', 'c8': 'r5', 'r2': 'c3', 'c4': 'r1', 'c2': 'r4'}
|M| = 12
minimum cover K = {'c6', 'r6', 'c4', 'c5', 'c2', 'c3', 'c8', 'r2'}
|K| = 8

matchings M = {'c8': 'r7', 'r7': 'c8', 'c1': 'r2', 'r3': 'c4', 'r6': 'c7', 'c5': 'r1', 'c2': 'r4', 'c7': 'r6', 'r4': 'c2', 'r2': 'c1', 'c4': 'r3', 'r1': 'c5'}
|M| = 12
minimum cover K = {'r2', 'c8', 'c4', 'c2', 'c5'}
|K| = 5

matchings M = {'r3': 'c4', 'r6': 'c1', 'r1': 'c5', 'c2': 'r7', 'r7': 'c2', 'c8': 'r5', 'r2': 'c7', 'c5': 'r1', 'r5': 'c8', 'c1': 'r6', 'c7': 'r2', 'c4': 'r3'}
|M| = 12
minimum cover K = {'c8', 'c5', 'c7', 'r6', 'c2', 'c4'}
|K| = 6
```
I'm using networkx 2.2 on debian/python3 and windows miniconda3.

The graph is from: [http://tryalgo.org/en/matching/2016/08/05/konig/](http://tryalgo.org/en/matching/2016/08/05/konig/)

Also, why does `hopcroft_karp_matching` return both the matching and the opposite (reversed) edges? 
I'm not sure what is going on with your output. I am using python 3.6 and networkx 2.3 and the matchings do not switch c to r nodes like your output shows. Also, the output is consistent:

matchings M = {'r6': 'c1', 'r8': 'c2', 'r1': 'c4', 'r4': 'c8', 'r3': 'c5', 'r2': 'c3', 'c3': 'r2', 'c4': 'r1', 'c8': 'r4', 'c5': 'r3', 'c2': 'r8', 'c1': 'r6'}
|M| = 12
minimum cover K = {'c4', 'c8', 'r1', 'c5', 'r3', 'c2', 'r2', 'r6'}
|K| = 8

Using a slightly smaller example I notice the same problem as @agreppin with Python 3.6.7 and networkx 2.3. According to König's theorem (as referenced in to_vertex_cover, see [Koenigs Theorem Wikipedia](https://en.wikipedia.org/wiki/K%C3%B6nig%27s_theorem_%28graph_theory%29#Proof)) `|M| = |K|` should hold. This criterion is violated in some cases by the to_vertex_cover function. Here is a minimum working example:

```python
import networkx as nx
import sys
from networkx.algorithms.bipartite import to_vertex_cover
from networkx.algorithms.bipartite import maximum_matching

print('Python version: ', sys.version)
print('Using version ' + nx.__version__)

nodesU = ['r1','r3','r5','r7','r8']
nodesV = ['c2','c4','c5','c8']
edges = [('r1','c2'),('r1','c4'),('r1','c5'),('r1','c8'),
         ('r3','c4'),('r3','c5'),
         ('r5','c2'),('r5','c8'),
         ('r7','c2'),('r7','c8'),
         ('r8','c2'),('r8','c4')]

G = nx.Graph()
G.add_nodes_from(nodesU, bipartite=0)
G.add_nodes_from(nodesV, bipartite=1)
G.add_edges_from(edges)
print('Edges: ' + repr(list(G.edges())))

assert nx.is_bipartite(G)
assert nx.is_connected(G)

#---------------------------------------------------------------
# compute matching by maximum_matching
# this gives in some cases differing matchings
#---------------------------------------------------------------
matching = maximum_matching(G, top_nodes=nodesU)
print('Matching: ' + repr(matching))
for u, v in matching.items():
    assert matching[v] == u

vertex_cover = to_vertex_cover(G, matching, top_nodes=nodesU)
print('Vertex cover: ' + repr(vertex_cover))
for u, v in G.edges():
    assert u in vertex_cover or v in vertex_cover

# Matching contains each edge twice
assert len(matching) == 2*len(vertex_cover)

#---------------------------------------------------------------
# fixed matching -> to_vertex_cover WORKS
#---------------------------------------------------------------
matching1 = {'r1': 'c2', 'r7': 'c8', 'r8': 'c4', 'r3': 'c5', 'c8': 'r7', 'c5': 'r3', 'c4': 'r8', 'c2': 'r1'}
print('Matching: ' + repr(matching1))
for u, v in matching1.items():
    assert matching1[v] == u

vertex_cover = to_vertex_cover(G, matching1, top_nodes=nodesU)
print('Vertex cover: ' + repr(vertex_cover))
for u, v in G.edges():
    assert u in vertex_cover or v in vertex_cover

# Matching contains each edge twice
assert len(matching1) == 2*len(vertex_cover)

#--------------------------------------------------------------
# fixed matching -> to_vertex_cover DOES NOT WORK
#--------------------------------------------------------------
matching2 = {'r8': 'c2', 'r5': 'c8', 'r1': 'c4', 'r3': 'c5', 'c4': 'r1', 'c8': 'r5', 'c2': 'r8', 'c5': 'r3'}
print('Matching: ' + repr(matching2))
for u, v in matching2.items():
    assert matching2[v] == u

vertex_cover = to_vertex_cover(G, matching2, top_nodes=nodesU)
print('Vertex cover: ' + repr(vertex_cover))
for u, v in G.edges():
    assert u in vertex_cover or v in vertex_cover

# Matching contains each edge twice
assert len(matching2) == 2*len(vertex_cover)
```
This gives the output:
```
Python version: 3.6.7 (default, Oct 22 2018, 11:32:17) [GCC 8.2.0]
Using version 2.3
Edges: [('r1', 'c2'), ('r1', 
'c4'), ('r1', 'c5'), ('r1', 'c8'), ('r3', 'c4'), ('r3', 'c5'), ('r5', 'c2'), ('r5', 'c8'), ('r7', 'c2'), ('r7', 'c8'), ('r8', 'c2'), ('r8', 'c4')]
Matching: {'r7': 'c2', 'r5': 'c8', 'r1': 'c4', 'r3': 'c5', 'c4': 'r1', 'c8': 'r5', 'c5': 'r3', 'c2': 'r7'}
Vertex cover: {'c4', 'c8', 'c5', 'c2'}
Matching: {'r1': 'c2', 'r7': 'c8', 'r8': 'c4', 'r3': 'c5', 'c8': 'r7', 'c5': 'r3', 'c4': 'r8', 'c2': 'r1'}
Vertex cover: {'c4', 'c8', 'c5', 'c2'}
Matching: {'r8': 'c2', 'r5': 'c8', 'r1': 'c4', 'r3': 'c5', 'c4': 'r1', 'c8': 'r5', 'c2': 'r8', 'c5': 'r3'}
Vertex cover: {'c5', 'r3', 'c4', 'c8', 'c2', 'r1'}
assert len(matching2) == 2*len(vertex_cover)
AssertionError
```
The problem seems to be located in the _alternating_dfs inner function of _is_connected_by_alternating_path, as in #2384. The _alternating_dfs function is implemented with the help of a stack. This stack is filled with the next vertices to be visited, together with the set of valid edges on which the DFS may continue from each vertex. During the construction of the stack `next(edges)` is called. This iteratively gives back the set of valid edges for a vertex in the top nodes respectively in the bottom nodes. The implementation is done via a cyclic iterator, so for each vertex added to the stack the set of valid edges changes. This is not the desired behaviour, since for each vertex (e.g. a bottom vertex) the rotation of the cyclic iterator should remain in the same position until e.g. the next top vertex is reached and the set of valid edges changes. With `itertools.cycle` the rotation does not stop when staying at one vertex and filling up the stack with the upcoming vertices. It would, for example, be possible to use a zero/one variable instead of a cyclic iterator. This variable would be added to the stack and would indicate whether a vertex is in top_nodes or in bottom_nodes. The variable could then be used as an index into the array `edges = [matched_edges, unmatched_edges]` and would therefore give back the valid set of edges for bottom respectively top nodes.
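To make the failure mode concrete, here is a minimal illustration (not taken from the report itself) of why a shared `itertools.cycle` goes wrong: it advances once per stack *push*, while the alternation should depend only on the *depth* of the vertex:

```python
import itertools

edges = itertools.cycle(["matched", "unmatched"])

# Several children of the same vertex get pushed onto the stack.
# They sit at the same depth, yet every push consumes the cycle, so the
# siblings end up assigned alternating -- i.e. different -- edge sets.
siblings = ["child_a", "child_b", "child_c"]
print([(child, next(edges)) for child in siblings])
# [('child_a', 'matched'), ('child_b', 'unmatched'), ('child_c', 'matched')]
```
Tracking a per-entry depth flag, as suggested above (and as the patch at the top of this record does with depth parity), makes the valid edge set a function of depth alone.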
2021-03-11T07:17:35
networkx/networkx
4,685
networkx__networkx-4685
[ "4660" ]
87c127b9b88de237d89c771abb6d964b29fd1356
diff --git a/networkx/algorithms/flow/networksimplex.py b/networkx/algorithms/flow/networksimplex.py --- a/networkx/algorithms/flow/networksimplex.py +++ b/networkx/algorithms/flow/networksimplex.py @@ -10,6 +10,321 @@ from networkx.utils import not_implemented_for +class _DataEssentialsAndFunctions: + def __init__( + self, G, multigraph, demand="demand", capacity="capacity", weight="weight" + ): + + # Number all nodes and edges and hereafter reference them using ONLY their numbers + self.node_list = list(G) # nodes + self.node_indices = {u: i for i, u in enumerate(self.node_list)} # node indices + self.node_demands = [ + G.nodes[u].get(demand, 0) for u in self.node_list + ] # node demands + + self.edge_sources = [] # edge sources + self.edge_targets = [] # edge targets + if multigraph: + self.edge_keys = [] # edge keys + self.edge_indices = {} # edge indices + self.edge_capacities = [] # edge capacities + self.edge_weights = [] # edge weights + + if not multigraph: + edges = G.edges(data=True) + else: + edges = G.edges(data=True, keys=True) + + inf = float("inf") + edges = (e for e in edges if e[0] != e[1] and e[-1].get(capacity, inf) != 0) + for i, e in enumerate(edges): + self.edge_sources.append(self.node_indices[e[0]]) + self.edge_targets.append(self.node_indices[e[1]]) + if multigraph: + self.edge_keys.append(e[2]) + self.edge_indices[e[:-1]] = i + self.edge_capacities.append(e[-1].get(capacity, inf)) + self.edge_weights.append(e[-1].get(weight, 0)) + + # spanning tree specific data to be initialized + + self.edge_count = None # number of edges + self.edge_flow = None # edge flows + self.node_potentials = None # node potentials + self.parent = None # parent nodes + self.parent_edge = None # edges to parents + self.subtree_size = None # subtree sizes + self.next_node_dft = None # next nodes in depth-first thread + self.prev_node_dft = None # previous nodes in depth-first thread + self.last_descendent_dft = None # last descendants in depth-first thread + self._spanning_tree_initialized = ( + False # False until initialize_spanning_tree() is called + ) + + def initialize_spanning_tree(self, n, faux_inf): + self.edge_count = len(self.edge_indices) # number of edges + self.edge_flow = list( + chain(repeat(0, self.edge_count), (abs(d) for d in self.node_demands)) + ) # edge flows + self.node_potentials = [ + faux_inf if d <= 0 else -faux_inf for d in self.node_demands + ] # node potentials + self.parent = list(chain(repeat(-1, n), [None])) # parent nodes + self.parent_edge = list( + range(self.edge_count, self.edge_count + n) + ) # edges to parents + self.subtree_size = list(chain(repeat(1, n), [n + 1])) # subtree sizes + self.next_node_dft = list( + chain(range(1, n), [-1, 0]) + ) # next nodes in depth-first thread + self.prev_node_dft = list(range(-1, n)) # previous nodes in depth-first thread + self.last_descendent_dft = list( + chain(range(n), [n - 1]) + ) # last descendants in depth-first thread + self._spanning_tree_initialized = True # True only if all the assignments pass + + def find_apex(self, p, q): + """ + Find the lowest common ancestor of nodes p and q in the spanning tree. 
+ """ + size_p = self.subtree_size[p] + size_q = self.subtree_size[q] + while True: + while size_p < size_q: + p = self.parent[p] + size_p = self.subtree_size[p] + while size_p > size_q: + q = self.parent[q] + size_q = self.subtree_size[q] + if size_p == size_q: + if p != q: + p = self.parent[p] + size_p = self.subtree_size[p] + q = self.parent[q] + size_q = self.subtree_size[q] + else: + return p + + def trace_path(self, p, w): + """ + Returns the nodes and edges on the path from node p to its ancestor w. + """ + Wn = [p] + We = [] + while p != w: + We.append(self.parent_edge[p]) + p = self.parent[p] + Wn.append(p) + return Wn, We + + def find_cycle(self, i, p, q): + """ + Returns the nodes and edges on the cycle containing edge i == (p, q) + when the latter is added to the spanning tree. + + The cycle is oriented in the direction from p to q. + """ + w = self.find_apex(p, q) + Wn, We = self.trace_path(p, w) + Wn.reverse() + We.reverse() + if We != [i]: + We.append(i) + WnR, WeR = self.trace_path(q, w) + del WnR[-1] + Wn += WnR + We += WeR + return Wn, We + + def augment_flow(self, Wn, We, f): + """ + Augment f units of flow along a cycle represented by Wn and We. + """ + for i, p in zip(We, Wn): + if self.edge_sources[i] == p: + self.edge_flow[i] += f + else: + self.edge_flow[i] -= f + + def trace_subtree(self, p): + """ + Yield the nodes in the subtree rooted at a node p. + """ + yield p + l = self.last_descendent_dft[p] + while p != l: + p = self.next_node_dft[p] + yield p + + def remove_edge(self, s, t): + """ + Remove an edge (s, t) where parent[t] == s from the spanning tree. + """ + size_t = self.subtree_size[t] + prev_t = self.prev_node_dft[t] + last_t = self.last_descendent_dft[t] + next_last_t = self.next_node_dft[last_t] + # Remove (s, t). + self.parent[t] = None + self.parent_edge[t] = None + # Remove the subtree rooted at t from the depth-first thread. + self.next_node_dft[prev_t] = next_last_t + self.prev_node_dft[next_last_t] = prev_t + self.next_node_dft[last_t] = t + self.prev_node_dft[t] = last_t + # Update the subtree sizes and last descendants of the (old) acenstors + # of t. + while s is not None: + self.subtree_size[s] -= size_t + if self.last_descendent_dft[s] == last_t: + self.last_descendent_dft[s] = prev_t + s = self.parent[s] + + def make_root(self, q): + """ + Make a node q the root of its containing subtree. + """ + ancestors = [] + while q is not None: + ancestors.append(q) + q = self.parent[q] + ancestors.reverse() + for p, q in zip(ancestors, islice(ancestors, 1, None)): + size_p = self.subtree_size[p] + last_p = self.last_descendent_dft[p] + prev_q = self.prev_node_dft[q] + last_q = self.last_descendent_dft[q] + next_last_q = self.next_node_dft[last_q] + # Make p a child of q. + self.parent[p] = q + self.parent[q] = None + self.parent_edge[p] = self.parent_edge[q] + self.parent_edge[q] = None + self.subtree_size[p] = size_p - self.subtree_size[q] + self.subtree_size[q] = size_p + # Remove the subtree rooted at q from the depth-first thread. + self.next_node_dft[prev_q] = next_last_q + self.prev_node_dft[next_last_q] = prev_q + self.next_node_dft[last_q] = q + self.prev_node_dft[q] = last_q + if last_p == last_q: + self.last_descendent_dft[p] = prev_q + last_p = prev_q + # Add the remaining parts of the subtree rooted at p as a subtree + # of q in the depth-first thread. 
+ self.prev_node_dft[p] = last_q + self.next_node_dft[last_q] = p + self.next_node_dft[last_p] = q + self.prev_node_dft[q] = last_p + self.last_descendent_dft[q] = last_p + + def add_edge(self, i, p, q): + """ + Add an edge (p, q) to the spanning tree where q is the root of a subtree. + """ + last_p = self.last_descendent_dft[p] + next_last_p = self.next_node_dft[last_p] + size_q = self.subtree_size[q] + last_q = self.last_descendent_dft[q] + # Make q a child of p. + self.parent[q] = p + self.parent_edge[q] = i + # Insert the subtree rooted at q into the depth-first thread. + self.next_node_dft[last_p] = q + self.prev_node_dft[q] = last_p + self.prev_node_dft[next_last_p] = last_q + self.next_node_dft[last_q] = next_last_p + # Update the subtree sizes and last descendants of the (new) ancestors + # of q. + while p is not None: + self.subtree_size[p] += size_q + if self.last_descendent_dft[p] == last_p: + self.last_descendent_dft[p] = last_q + p = self.parent[p] + + def update_potentials(self, i, p, q): + """ + Update the potentials of the nodes in the subtree rooted at a node + q connected to its parent p by an edge i. + """ + if q == self.edge_targets[i]: + d = self.node_potentials[p] - self.edge_weights[i] - self.node_potentials[q] + else: + d = self.node_potentials[p] + self.edge_weights[i] - self.node_potentials[q] + for q in self.trace_subtree(q): + self.node_potentials[q] += d + + def reduced_cost(self, i): + """Returns the reduced cost of an edge i.""" + c = ( + self.edge_weights[i] + - self.node_potentials[self.edge_sources[i]] + + self.node_potentials[self.edge_targets[i]] + ) + return c if self.edge_flow[i] == 0 else -c + + def find_entering_edges(self): + """Yield entering edges until none can be found.""" + if self.edge_count == 0: + return + + # Entering edges are found by combining Dantzig's rule and Bland's + # rule. The edges are cyclically grouped into blocks of size B. Within + # each block, Dantzig's rule is applied to find an entering edge. The + # blocks to search is determined following Bland's rule. + B = int(ceil(sqrt(self.edge_count))) # pivot block size + M = (self.edge_count + B - 1) // B # number of blocks needed to cover all edges + m = 0 # number of consecutive blocks without eligible + # entering edges + f = 0 # first edge in block + while m < M: + # Determine the next block of edges. + l = f + B + if l <= self.edge_count: + edges = range(f, l) + else: + l -= self.edge_count + edges = chain(range(f, self.edge_count), range(l)) + f = l + # Find the first edge with the lowest reduced cost. + i = min(edges, key=self.reduced_cost) + c = self.reduced_cost(i) + if c >= 0: + # No entering edge found in the current block. + m += 1 + else: + # Entering edge found. + if self.edge_flow[i] == 0: + p = self.edge_sources[i] + q = self.edge_targets[i] + else: + p = self.edge_targets[i] + q = self.edge_sources[i] + yield i, p, q + m = 0 + # All edges have nonnegative reduced costs. The current flow is + # optimal. + + def residual_capacity(self, i, p): + """Returns the residual capacity of an edge i in the direction away + from its endpoint p. 
+ """ + return ( + self.edge_capacities[i] - self.edge_flow[i] + if self.edge_sources[i] == p + else self.edge_flow[i] + ) + + def find_leaving_edge(self, Wn, We): + """Returns the leaving edge in a cycle represented by Wn and We.""" + j, s = min( + zip(reversed(We), reversed(Wn)), + key=lambda i_p: self.residual_capacity(*i_p), + ) + t = self.edge_targets[j] if self.edge_sources[j] == s else self.edge_sources[j] + return j, s, t + + @not_implemented_for("undirected") def network_simplex(G, demand="demand", capacity="capacity", weight="weight"): r"""Find a minimum cost flow satisfying all demands in digraph G. @@ -180,43 +495,23 @@ def network_simplex(G, demand="demand", capacity="capacity", weight="weight"): if len(G) == 0: raise nx.NetworkXError("graph has no nodes") - # Number all nodes and edges and hereafter reference them using ONLY their - # numbers - - N = list(G) # nodes - I = {u: i for i, u in enumerate(N)} # node indices - D = [G.nodes[u].get(demand, 0) for u in N] # node demands - - inf = float("inf") - for p, b in zip(N, D): - if abs(b) == inf: - raise nx.NetworkXError(f"node {p!r} has infinite demand") - multigraph = G.is_multigraph() - S = [] # edge sources - T = [] # edge targets - if multigraph: - K = [] # edge keys - E = {} # edge indices - U = [] # edge capacities - C = [] # edge weights - if not multigraph: - edges = G.edges(data=True) - else: - edges = G.edges(data=True, keys=True) - edges = (e for e in edges if e[0] != e[1] and e[-1].get(capacity, inf) != 0) - for i, e in enumerate(edges): - S.append(I[e[0]]) - T.append(I[e[1]]) - if multigraph: - K.append(e[2]) - E[e[:-1]] = i - U.append(e[-1].get(capacity, inf)) - C.append(e[-1].get(weight, 0)) + # extracting data essential to problem + DEAF = _DataEssentialsAndFunctions( + G, multigraph, demand=demand, capacity=capacity, weight=weight + ) - for e, c in zip(E, C): - if abs(c) == inf: + ########################################################################### + # Quick Error Detection + ########################################################################### + + inf = float("inf") + for u, d in zip(DEAF.node_list, DEAF.node_demands): + if abs(d) == inf: + raise nx.NetworkXError(f"node {u!r} has infinite demand") + for e, w in zip(DEAF.edge_indices, DEAF.edge_weights): + if abs(w) == inf: raise nx.NetworkXError(f"edge {e!r} has infinite weight") if not multigraph: edges = nx.selfloop_edges(G, data=True) @@ -227,13 +522,13 @@ def network_simplex(G, demand="demand", capacity="capacity", weight="weight"): raise nx.NetworkXError(f"edge {e[:-1]!r} has infinite weight") ########################################################################### - # Quick infeasibility detection + # Quick Infeasibility Detection ########################################################################### - if sum(D) != 0: + if sum(DEAF.node_demands) != 0: raise nx.NetworkXUnfeasible("total node demand is not zero") - for e, u in zip(E, U): - if u < 0: + for e, c in zip(DEAF.edge_indices, DEAF.edge_capacities): + if c < 0: raise nx.NetworkXUnfeasible(f"edge {e!r} has negative capacity") if not multigraph: edges = nx.selfloop_edges(G, data=True) @@ -252,293 +547,65 @@ def network_simplex(G, demand="demand", capacity="capacity", weight="weight"): # spanning tree of the network simplex method. The new edges will used to # trivially satisfy the node demands and create an initial strongly # feasible spanning tree. - n = len(N) # number of nodes - for p, d in enumerate(D): + for i, d in enumerate(DEAF.node_demands): # Must be greater-than here. 
Zero-demand nodes must have - # edges pointing towards the root to ensure strong - # feasibility. + # edges pointing towards the root to ensure strong feasibility. if d > 0: - S.append(-1) - T.append(p) + DEAF.edge_sources.append(-1) + DEAF.edge_targets.append(i) else: - S.append(p) - T.append(-1) + DEAF.edge_sources.append(i) + DEAF.edge_targets.append(-1) faux_inf = ( 3 * max( chain( - [sum(u for u in U if u < inf), sum(abs(c) for c in C)], - (abs(d) for d in D), + [ + sum(c for c in DEAF.edge_capacities if c < inf), + sum(abs(w) for w in DEAF.edge_weights), + ], + (abs(d) for d in DEAF.node_demands), ) ) or 1 ) - C.extend(repeat(faux_inf, n)) - U.extend(repeat(faux_inf, n)) + + n = len(DEAF.node_list) # number of nodes + DEAF.edge_weights.extend(repeat(faux_inf, n)) + DEAF.edge_capacities.extend(repeat(faux_inf, n)) # Construct the initial spanning tree. - e = len(E) # number of edges - x = list(chain(repeat(0, e), (abs(d) for d in D))) # edge flows - pi = [faux_inf if d <= 0 else -faux_inf for d in D] # node potentials - parent = list(chain(repeat(-1, n), [None])) # parent nodes - edge = list(range(e, e + n)) # edges to parents - size = list(chain(repeat(1, n), [n + 1])) # subtree sizes - next = list(chain(range(1, n), [-1, 0])) # next nodes in depth-first thread - prev = list(range(-1, n)) # previous nodes in depth-first thread - last = list(chain(range(n), [n - 1])) # last descendants in depth-first thread + DEAF.initialize_spanning_tree(n, faux_inf) ########################################################################### # Pivot loop ########################################################################### - def reduced_cost(i): - """Returns the reduced cost of an edge i.""" - c = C[i] - pi[S[i]] + pi[T[i]] - return c if x[i] == 0 else -c - - def find_entering_edges(): - """Yield entering edges until none can be found.""" - if e == 0: - return - - # Entering edges are found by combining Dantzig's rule and Bland's - # rule. The edges are cyclically grouped into blocks of size B. Within - # each block, Dantzig's rule is applied to find an entering edge. The - # blocks to search is determined following Bland's rule. - B = int(ceil(sqrt(e))) # pivot block size - M = (e + B - 1) // B # number of blocks needed to cover all edges - m = 0 # number of consecutive blocks without eligible - # entering edges - f = 0 # first edge in block - while m < M: - # Determine the next block of edges. - l = f + B - if l <= e: - edges = range(f, l) - else: - l -= e - edges = chain(range(f, e), range(l)) - f = l - # Find the first edge with the lowest reduced cost. - i = min(edges, key=reduced_cost) - c = reduced_cost(i) - if c >= 0: - # No entering edge found in the current block. - m += 1 - else: - # Entering edge found. - if x[i] == 0: - p = S[i] - q = T[i] - else: - p = T[i] - q = S[i] - yield i, p, q - m = 0 - # All edges have nonnegative reduced costs. The current flow is - # optimal. - - def find_apex(p, q): - """Find the lowest common ancestor of nodes p and q in the spanning - tree. - """ - size_p = size[p] - size_q = size[q] - while True: - while size_p < size_q: - p = parent[p] - size_p = size[p] - while size_p > size_q: - q = parent[q] - size_q = size[q] - if size_p == size_q: - if p != q: - p = parent[p] - size_p = size[p] - q = parent[q] - size_q = size[q] - else: - return p - - def trace_path(p, w): - """Returns the nodes and edges on the path from node p to its ancestor - w. 
- """ - Wn = [p] - We = [] - while p != w: - We.append(edge[p]) - p = parent[p] - Wn.append(p) - return Wn, We - - def find_cycle(i, p, q): - """Returns the nodes and edges on the cycle containing edge i == (p, q) - when the latter is added to the spanning tree. - - The cycle is oriented in the direction from p to q. - """ - w = find_apex(p, q) - Wn, We = trace_path(p, w) - Wn.reverse() - We.reverse() - if We != [i]: - We.append(i) - WnR, WeR = trace_path(q, w) - del WnR[-1] - Wn += WnR - We += WeR - return Wn, We - - def residual_capacity(i, p): - """Returns the residual capacity of an edge i in the direction away - from its endpoint p. - """ - return U[i] - x[i] if S[i] == p else x[i] - - def find_leaving_edge(Wn, We): - """Returns the leaving edge in a cycle represented by Wn and We.""" - j, s = min( - zip(reversed(We), reversed(Wn)), key=lambda i_p: residual_capacity(*i_p) - ) - t = T[j] if S[j] == s else S[j] - return j, s, t - - def augment_flow(Wn, We, f): - """Augment f units of flow along a cycle represented by Wn and We.""" - for i, p in zip(We, Wn): - if S[i] == p: - x[i] += f - else: - x[i] -= f - - def trace_subtree(p): - """Yield the nodes in the subtree rooted at a node p.""" - yield p - l = last[p] - while p != l: - p = next[p] - yield p - - def remove_edge(s, t): - """Remove an edge (s, t) where parent[t] == s from the spanning tree.""" - size_t = size[t] - prev_t = prev[t] - last_t = last[t] - next_last_t = next[last_t] - # Remove (s, t). - parent[t] = None - edge[t] = None - # Remove the subtree rooted at t from the depth-first thread. - next[prev_t] = next_last_t - prev[next_last_t] = prev_t - next[last_t] = t - prev[t] = last_t - # Update the subtree sizes and last descendants of the (old) acenstors - # of t. - while s is not None: - size[s] -= size_t - if last[s] == last_t: - last[s] = prev_t - s = parent[s] - - def make_root(q): - """Make a node q the root of its containing subtree.""" - ancestors = [] - while q is not None: - ancestors.append(q) - q = parent[q] - ancestors.reverse() - for p, q in zip(ancestors, islice(ancestors, 1, None)): - size_p = size[p] - last_p = last[p] - prev_q = prev[q] - last_q = last[q] - next_last_q = next[last_q] - # Make p a child of q. - parent[p] = q - parent[q] = None - edge[p] = edge[q] - edge[q] = None - size[p] = size_p - size[q] - size[q] = size_p - # Remove the subtree rooted at q from the depth-first thread. - next[prev_q] = next_last_q - prev[next_last_q] = prev_q - next[last_q] = q - prev[q] = last_q - if last_p == last_q: - last[p] = prev_q - last_p = prev_q - # Add the remaining parts of the subtree rooted at p as a subtree - # of q in the depth-first thread. - prev[p] = last_q - next[last_q] = p - next[last_p] = q - prev[q] = last_p - last[q] = last_p - - def add_edge(i, p, q): - """Add an edge (p, q) to the spanning tree where q is the root of a - subtree. - """ - last_p = last[p] - next_last_p = next[last_p] - size_q = size[q] - last_q = last[q] - # Make q a child of p. - parent[q] = p - edge[q] = i - # Insert the subtree rooted at q into the depth-first thread. - next[last_p] = q - prev[q] = last_p - prev[next_last_p] = last_q - next[last_q] = next_last_p - # Update the subtree sizes and last descendants of the (new) ancestors - # of q. - while p is not None: - size[p] += size_q - if last[p] == last_p: - last[p] = last_q - p = parent[p] - - def update_potentials(i, p, q): - """Update the potentials of the nodes in the subtree rooted at a node - q connected to its parent p by an edge i. 
- """ - if q == T[i]: - d = pi[p] - C[i] - pi[q] - else: - d = pi[p] + C[i] - pi[q] - for q in trace_subtree(q): - pi[q] += d - - # Pivot loop - for i, p, q in find_entering_edges(): - Wn, We = find_cycle(i, p, q) - j, s, t = find_leaving_edge(Wn, We) - augment_flow(Wn, We, residual_capacity(j, s)) + for i, p, q in DEAF.find_entering_edges(): + Wn, We = DEAF.find_cycle(i, p, q) + j, s, t = DEAF.find_leaving_edge(Wn, We) + DEAF.augment_flow(Wn, We, DEAF.residual_capacity(j, s)) # Do nothing more if the entering edge is the same as the leaving edge. if i != j: - if parent[t] != s: + if DEAF.parent[t] != s: # Ensure that s is the parent of t. s, t = t, s if We.index(i) > We.index(j): # Ensure that q is in the subtree rooted at t. p, q = q, p - remove_edge(s, t) - make_root(q) - add_edge(i, p, q) - update_potentials(i, p, q) + DEAF.remove_edge(s, t) + DEAF.make_root(q) + DEAF.add_edge(i, p, q) + DEAF.update_potentials(i, p, q) ########################################################################### # Infeasibility and unboundedness detection ########################################################################### - if any(x[i] != 0 for i in range(-n, 0)): + if any(DEAF.edge_flow[i] != 0 for i in range(-n, 0)): raise nx.NetworkXUnfeasible("no flow satisfies all node demands") - if any(x[i] * 2 >= faux_inf for i in range(e)) or any( + if any(DEAF.edge_flow[i] * 2 >= faux_inf for i in range(DEAF.edge_count)) or any( e[-1].get(capacity, inf) == inf and e[-1].get(weight, 0) < 0 for e in nx.selfloop_edges(G, data=True) ): @@ -548,9 +615,9 @@ def update_potentials(i, p, q): # Flow cost calculation and flow dict construction ########################################################################### - del x[e:] - flow_cost = sum(c * x for c, x in zip(C, x)) - flow_dict = {n: {} for n in N} + del DEAF.edge_flow[DEAF.edge_count :] + flow_cost = sum(w * x for w, x in zip(DEAF.edge_weights, DEAF.edge_flow)) + flow_dict = {n: {} for n in DEAF.node_list} def add_entry(e): """Add a flow dict entry.""" @@ -564,14 +631,27 @@ def add_entry(e): d = t d[e[-2]] = e[-1] - S = (N[s] for s in S) # Use original nodes. - T = (N[t] for t in T) # Use original nodes. + DEAF.edge_sources = ( + DEAF.node_list[s] for s in DEAF.edge_sources + ) # Use original nodes. + DEAF.edge_targets = ( + DEAF.node_list[t] for t in DEAF.edge_targets + ) # Use original nodes. if not multigraph: - for e in zip(S, T, x): + for e in zip( + DEAF.edge_sources, + DEAF.edge_targets, + DEAF.edge_flow, + ): add_entry(e) edges = G.edges(data=True) else: - for e in zip(S, T, K, x): + for e in zip( + DEAF.edge_sources, + DEAF.edge_targets, + DEAF.edge_keys, + DEAF.edge_flow, + ): add_entry(e) edges = G.edges(data=True, keys=True) for e in edges: @@ -579,12 +659,12 @@ def add_entry(e): if e[-1].get(capacity, inf) == 0: add_entry(e[:-1] + (0,)) else: - c = e[-1].get(weight, 0) - if c >= 0: + w = e[-1].get(weight, 0) + if w >= 0: add_entry(e[:-1] + (0,)) else: - u = e[-1][capacity] - flow_cost += c * u - add_entry(e[:-1] + (u,)) + c = e[-1][capacity] + flow_cost += w * c + add_entry(e[:-1] + (c,)) return flow_cost, flow_dict
diff --git a/networkx/algorithms/flow/tests/test_networksimplex.py b/networkx/algorithms/flow/tests/test_networksimplex.py new file mode 100644 --- /dev/null +++ b/networkx/algorithms/flow/tests/test_networksimplex.py @@ -0,0 +1,383 @@ +import pytest +import networkx as nx +import os + + [email protected] +def simple_flow_graph(): + G = nx.DiGraph() + G.add_node("a", demand=0) + G.add_node("b", demand=-5) + G.add_node("c", demand=50000000) + G.add_node("d", demand=-49999995) + G.add_edge("a", "b", weight=3, capacity=4) + G.add_edge("a", "c", weight=6, capacity=10) + G.add_edge("b", "d", weight=1, capacity=9) + G.add_edge("c", "d", weight=2, capacity=5) + return G + + [email protected] +def simple_no_flow_graph(): + G = nx.DiGraph() + G.add_node("s", demand=-5) + G.add_node("t", demand=5) + G.add_edge("s", "a", weight=1, capacity=3) + G.add_edge("a", "b", weight=3) + G.add_edge("a", "c", weight=-6) + G.add_edge("b", "d", weight=1) + G.add_edge("c", "d", weight=-2) + G.add_edge("d", "t", weight=1, capacity=3) + return G + + +def get_flowcost_from_flowdict(G, flowDict): + """Returns flow cost calculated from flow dictionary""" + flowCost = 0 + for u in flowDict.keys(): + for v in flowDict[u].keys(): + flowCost += flowDict[u][v] * G[u][v]["weight"] + return flowCost + + +def test_infinite_demand_raise(simple_flow_graph): + G = simple_flow_graph + inf = float("inf") + nx.set_node_attributes(G, {"a": {"demand": inf}}) + pytest.raises(nx.NetworkXError, nx.network_simplex, G) + + +def test_neg_infinite_demand_raise(simple_flow_graph): + G = simple_flow_graph + inf = float("inf") + nx.set_node_attributes(G, {"a": {"demand": -inf}}) + pytest.raises(nx.NetworkXError, nx.network_simplex, G) + + +def test_infinite_weight_raise(simple_flow_graph): + G = simple_flow_graph + inf = float("inf") + nx.set_edge_attributes( + G, {("a", "b"): {"weight": inf}, ("b", "d"): {"weight": inf}} + ) + pytest.raises(nx.NetworkXError, nx.network_simplex, G) + + +def test_nonzero_net_demand_raise(simple_flow_graph): + G = simple_flow_graph + nx.set_node_attributes(G, {"b": {"demand": -4}}) + pytest.raises(nx.NetworkXUnfeasible, nx.network_simplex, G) + + +def test_negative_capacity_raise(simple_flow_graph): + G = simple_flow_graph + nx.set_edge_attributes(G, {("a", "b"): {"weight": 1}, ("b", "d"): {"capacity": -9}}) + pytest.raises(nx.NetworkXUnfeasible, nx.network_simplex, G) + + +def test_no_flow_satisfying_demands(simple_no_flow_graph): + G = simple_no_flow_graph + pytest.raises(nx.NetworkXUnfeasible, nx.network_simplex, G) + + +def test_sum_demands_not_zero(simple_no_flow_graph): + G = simple_no_flow_graph + nx.set_node_attributes(G, {"t": {"demand": 4}}) + pytest.raises(nx.NetworkXUnfeasible, nx.network_simplex, G) + + +def test_google_or_tools_example(): + """ + https://developers.google.com/optimization/flow/mincostflow + """ + G = nx.DiGraph() + start_nodes = [0, 0, 1, 1, 1, 2, 2, 3, 4] + end_nodes = [1, 2, 2, 3, 4, 3, 4, 4, 2] + capacities = [15, 8, 20, 4, 10, 15, 4, 20, 5] + unit_costs = [4, 4, 2, 2, 6, 1, 3, 2, 3] + supplies = [20, 0, 0, -5, -15] + answer = 150 + + for i in range(len(supplies)): + G.add_node(i, demand=(-1) * supplies[i]) # supplies are negative of demand + + for i in range(len(start_nodes)): + G.add_edge( + start_nodes[i], + end_nodes[i], + weight=unit_costs[i], + capacity=capacities[i], + ) + + flowCost, flowDict = nx.network_simplex(G) + assert flowCost == answer + assert flowCost == get_flowcost_from_flowdict(G, flowDict) + + +def test_google_or_tools_example2(): + """ + 
https://developers.google.com/optimization/flow/mincostflow + """ + G = nx.DiGraph() + start_nodes = [0, 0, 1, 1, 1, 2, 2, 3, 4, 3] + end_nodes = [1, 2, 2, 3, 4, 3, 4, 4, 2, 5] + capacities = [15, 8, 20, 4, 10, 15, 4, 20, 5, 10] + unit_costs = [4, 4, 2, 2, 6, 1, 3, 2, 3, 4] + supplies = [23, 0, 0, -5, -15, -3] + answer = 183 + + for i in range(len(supplies)): + G.add_node(i, demand=(-1) * supplies[i]) # supplies are negative of demand + + for i in range(len(start_nodes)): + G.add_edge( + start_nodes[i], + end_nodes[i], + weight=unit_costs[i], + capacity=capacities[i], + ) + + flowCost, flowDict = nx.network_simplex(G) + assert flowCost == answer + assert flowCost == get_flowcost_from_flowdict(G, flowDict) + + +def test_large(): + fname = os.path.join(os.path.dirname(__file__), "netgen-2.gpickle.bz2") + G = nx.read_gpickle(fname) + flowCost, flowDict = nx.network_simplex(G) + assert 6749969302 == flowCost + assert 6749969302 == nx.cost_of_flow(G, flowDict) + + +def test_simple_digraph(): + G = nx.DiGraph() + G.add_node("a", demand=-5) + G.add_node("d", demand=5) + G.add_edge("a", "b", weight=3, capacity=4) + G.add_edge("a", "c", weight=6, capacity=10) + G.add_edge("b", "d", weight=1, capacity=9) + G.add_edge("c", "d", weight=2, capacity=5) + flowCost, H = nx.network_simplex(G) + soln = {"a": {"b": 4, "c": 1}, "b": {"d": 4}, "c": {"d": 1}, "d": {}} + assert flowCost == 24 + assert nx.min_cost_flow_cost(G) == 24 + assert H == soln + + +def test_negcycle_infcap(): + G = nx.DiGraph() + G.add_node("s", demand=-5) + G.add_node("t", demand=5) + G.add_edge("s", "a", weight=1, capacity=3) + G.add_edge("a", "b", weight=3) + G.add_edge("c", "a", weight=-6) + G.add_edge("b", "d", weight=1) + G.add_edge("d", "c", weight=-2) + G.add_edge("d", "t", weight=1, capacity=3) + pytest.raises(nx.NetworkXUnfeasible, nx.network_simplex, G) + + +def test_transshipment(): + G = nx.DiGraph() + G.add_node("a", demand=1) + G.add_node("b", demand=-2) + G.add_node("c", demand=-2) + G.add_node("d", demand=3) + G.add_node("e", demand=-4) + G.add_node("f", demand=-4) + G.add_node("g", demand=3) + G.add_node("h", demand=2) + G.add_node("r", demand=3) + G.add_edge("a", "c", weight=3) + G.add_edge("r", "a", weight=2) + G.add_edge("b", "a", weight=9) + G.add_edge("r", "c", weight=0) + G.add_edge("b", "r", weight=-6) + G.add_edge("c", "d", weight=5) + G.add_edge("e", "r", weight=4) + G.add_edge("e", "f", weight=3) + G.add_edge("h", "b", weight=4) + G.add_edge("f", "d", weight=7) + G.add_edge("f", "h", weight=12) + G.add_edge("g", "d", weight=12) + G.add_edge("f", "g", weight=-1) + G.add_edge("h", "g", weight=-10) + flowCost, H = nx.network_simplex(G) + soln = { + "a": {"c": 0}, + "b": {"a": 0, "r": 2}, + "c": {"d": 3}, + "d": {}, + "e": {"r": 3, "f": 1}, + "f": {"d": 0, "g": 3, "h": 2}, + "g": {"d": 0}, + "h": {"b": 0, "g": 0}, + "r": {"a": 1, "c": 1}, + } + assert flowCost == 41 + assert H == soln + + +def test_digraph1(): + # From Bradley, S. P., Hax, A. C. and Magnanti, T. L. Applied + # Mathematical Programming. Addison-Wesley, 1977. 
+ G = nx.DiGraph() + G.add_node(1, demand=-20) + G.add_node(4, demand=5) + G.add_node(5, demand=15) + G.add_edges_from( + [ + (1, 2, {"capacity": 15, "weight": 4}), + (1, 3, {"capacity": 8, "weight": 4}), + (2, 3, {"weight": 2}), + (2, 4, {"capacity": 4, "weight": 2}), + (2, 5, {"capacity": 10, "weight": 6}), + (3, 4, {"capacity": 15, "weight": 1}), + (3, 5, {"capacity": 5, "weight": 3}), + (4, 5, {"weight": 2}), + (5, 3, {"capacity": 4, "weight": 1}), + ] + ) + flowCost, H = nx.network_simplex(G) + soln = { + 1: {2: 12, 3: 8}, + 2: {3: 8, 4: 4, 5: 0}, + 3: {4: 11, 5: 5}, + 4: {5: 10}, + 5: {3: 0}, + } + assert flowCost == 150 + assert nx.min_cost_flow_cost(G) == 150 + assert H == soln + + +def test_zero_capacity_edges(): + """Address issue raised in ticket #617 by arv.""" + G = nx.DiGraph() + G.add_edges_from( + [ + (1, 2, {"capacity": 1, "weight": 1}), + (1, 5, {"capacity": 1, "weight": 1}), + (2, 3, {"capacity": 0, "weight": 1}), + (2, 5, {"capacity": 1, "weight": 1}), + (5, 3, {"capacity": 2, "weight": 1}), + (5, 4, {"capacity": 0, "weight": 1}), + (3, 4, {"capacity": 2, "weight": 1}), + ] + ) + G.nodes[1]["demand"] = -1 + G.nodes[2]["demand"] = -1 + G.nodes[4]["demand"] = 2 + + flowCost, H = nx.network_simplex(G) + soln = {1: {2: 0, 5: 1}, 2: {3: 0, 5: 1}, 3: {4: 2}, 4: {}, 5: {3: 2, 4: 0}} + assert flowCost == 6 + assert nx.min_cost_flow_cost(G) == 6 + assert H == soln + + +def test_digon(): + """Check if digons are handled properly. Taken from ticket + #618 by arv.""" + nodes = [(1, {}), (2, {"demand": -4}), (3, {"demand": 4})] + edges = [ + (1, 2, {"capacity": 3, "weight": 600000}), + (2, 1, {"capacity": 2, "weight": 0}), + (2, 3, {"capacity": 5, "weight": 714285}), + (3, 2, {"capacity": 2, "weight": 0}), + ] + G = nx.DiGraph(edges) + G.add_nodes_from(nodes) + flowCost, H = nx.network_simplex(G) + soln = {1: {2: 0}, 2: {1: 0, 3: 4}, 3: {2: 0}} + assert flowCost == 2857140 + + +def test_deadend(): + """Check if one-node cycles are handled properly. Taken from ticket + #2906 from @sshraven.""" + G = nx.DiGraph() + + G.add_nodes_from(range(5), demand=0) + G.nodes[4]["demand"] = -13 + G.nodes[3]["demand"] = 13 + + G.add_edges_from([(0, 2), (0, 3), (2, 1)], capacity=20, weight=0.1) + pytest.raises(nx.NetworkXUnfeasible, nx.network_simplex, G) + + +def test_infinite_capacity_neg_digon(): + """An infinite capacity negative cost digon results in an unbounded + instance.""" + nodes = [(1, {}), (2, {"demand": -4}), (3, {"demand": 4})] + edges = [ + (1, 2, {"weight": -600}), + (2, 1, {"weight": 0}), + (2, 3, {"capacity": 5, "weight": 714285}), + (3, 2, {"capacity": 2, "weight": 0}), + ] + G = nx.DiGraph(edges) + G.add_nodes_from(nodes) + pytest.raises(nx.NetworkXUnbounded, nx.network_simplex, G) + + +def test_multidigraph(): + """Multidigraphs are acceptable.""" + G = nx.MultiDiGraph() + G.add_weighted_edges_from([(1, 2, 1), (2, 3, 2)], weight="capacity") + flowCost, H = nx.network_simplex(G) + assert flowCost == 0 + assert H == {1: {2: {0: 0}}, 2: {3: {0: 0}}, 3: {}} + + +def test_negative_selfloops(): + """Negative selfloops should cause an exception if uncapacitated and + always be saturated otherwise. 
+ """ + G = nx.DiGraph() + G.add_edge(1, 1, weight=-1) + pytest.raises(nx.NetworkXUnbounded, nx.network_simplex, G) + + G[1][1]["capacity"] = 2 + flowCost, H = nx.network_simplex(G) + assert flowCost == -2 + assert H == {1: {1: 2}} + + G = nx.MultiDiGraph() + G.add_edge(1, 1, "x", weight=-1) + G.add_edge(1, 1, "y", weight=1) + pytest.raises(nx.NetworkXUnbounded, nx.network_simplex, G) + + G[1][1]["x"]["capacity"] = 2 + flowCost, H = nx.network_simplex(G) + assert flowCost == -2 + assert H == {1: {1: {"x": 2, "y": 0}}} + + +def test_bone_shaped(): + # From #1283 + G = nx.DiGraph() + G.add_node(0, demand=-4) + G.add_node(1, demand=2) + G.add_node(2, demand=2) + G.add_node(3, demand=4) + G.add_node(4, demand=-2) + G.add_node(5, demand=-2) + G.add_edge(0, 1, capacity=4) + G.add_edge(0, 2, capacity=4) + G.add_edge(4, 3, capacity=4) + G.add_edge(5, 3, capacity=4) + G.add_edge(0, 3, capacity=0) + flowCost, H = nx.network_simplex(G) + assert flowCost == 0 + assert H == {0: {1: 2, 2: 2, 3: 0}, 1: {}, 2: {}, 3: {}, 4: {3: 2}, 5: {3: 2}} + + +def test_graphs_type_exceptions(): + G = nx.Graph() + pytest.raises(nx.NetworkXNotImplemented, nx.network_simplex, G) + G = nx.MultiGraph() + pytest.raises(nx.NetworkXNotImplemented, nx.network_simplex, G) + G = nx.DiGraph() + pytest.raises(nx.NetworkXError, nx.network_simplex, G)
network_simplex is untested

The `networksimplex` module defines a single monolithic function, `network_simplex`, that is currently not explicitly tested. The function itself is very large and includes internal definitions of quite a few other functions/generators, which will likely make it very difficult to write tests for. Since the `networksimplex` module defines only the one function, it is also a candidate for a refactor, breaking the internally-defined functions out into (probably) private functions in the module, each with its own unit tests. Doing so would likely greatly improve the robustness of the function. See #4641 for additional details.
I tried to take a look at the code, and it is also pretty unreadable: lots of one-letter variables, sometimes with unintuitive choices... It might benefit from a full rewrite, but even understanding the existing code looks quite arduous.
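For what it's worth, the class-based refactor in the patch above (`_DataEssentialsAndFunctions`) makes the former inner helpers testable in isolation. Below is a sketch of the kind of targeted unit test that enables; the tiny graph and the expected values are illustrative, not an existing test:

```python
import networkx as nx
from networkx.algorithms.flow.networksimplex import _DataEssentialsAndFunctions


def test_initialize_spanning_tree():
    # Two nodes, one unit of supply moved from "a" to "b".
    G = nx.DiGraph()
    G.add_node("a", demand=-1)
    G.add_node("b", demand=1)
    G.add_edge("a", "b", weight=1, capacity=2)

    de = _DataEssentialsAndFunctions(G, multigraph=False)
    de.initialize_spanning_tree(n=2, faux_inf=9)

    assert de._spanning_tree_initialized
    # The artificial root's subtree holds both real nodes plus itself.
    assert de.subtree_size[-1] == 3
    # Artificial root edges carry the absolute demands as initial flow.
    assert de.edge_flow == [0, 1, 1]
```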
2021-03-20T12:48:29
networkx/networkx
4,694
networkx__networkx-4694
[ "4693" ]
251fa09289ee1e105295101a6a58f674eeb0fd92
diff --git a/networkx/readwrite/graphml.py b/networkx/readwrite/graphml.py --- a/networkx/readwrite/graphml.py +++ b/networkx/readwrite/graphml.py @@ -247,8 +247,8 @@ def read_graphml(path, node_type=str, edge_key_type=int, force_multigraph=False) there is no "key" attribute a default NetworkX multigraph edge key will be provided. - Files with the yEd "yfiles" extension will can be read but the graphics - information is discarded. + Files with the yEd "yfiles" extension can be read. The type of the node's + shape is preserved in the `shape_type` node attribute. yEd compressed files ("file.graphmlz" extension) can be read by renaming the file to "file.graphml.gz". @@ -912,7 +912,11 @@ def decode_data_elements(self, graphml_keys, obj_xml): elif len(list(data_element)) > 0: # Assume yfiles as subelements, try to extract node_label node_label = None - for node_type in ["ShapeNode", "SVGNode", "ImageNode"]: + # set GenericNode's configuration as shape type + gn = data_element.find(f"{{{self.NS_Y}}}GenericNode") + if gn: + data["shape_type"] = gn.get("configuration") + for node_type in ["GenericNode", "ShapeNode", "SVGNode", "ImageNode"]: pref = f"{{{self.NS_Y}}}{node_type}/{{{self.NS_Y}}}" geometry = data_element.find(f"{pref}Geometry") if geometry is not None: @@ -920,6 +924,9 @@ def decode_data_elements(self, graphml_keys, obj_xml): data["y"] = geometry.get("y") if node_label is None: node_label = data_element.find(f"{pref}NodeLabel") + shape = data_element.find(f"{pref}Shape") + if shape is not None: + data["shape_type"] = shape.get("type") if node_label is not None: data["label"] = node_label.text
diff --git a/networkx/readwrite/tests/test_graphml.py b/networkx/readwrite/tests/test_graphml.py --- a/networkx/readwrite/tests/test_graphml.py +++ b/networkx/readwrite/tests/test_graphml.py @@ -604,6 +604,29 @@ def test_yfiles_extension(self): </y:ShapeNode> </data> </node> + <node id="n2"> + <data key="d6" xml:space="preserve"><![CDATA[description +line1 +line2]]></data> + <data key="d3"> + <y:GenericNode configuration="com.yworks.flowchart.terminator"> + <y:Geometry height="40.0" width="80.0" x="950.0" y="286.0"/> + <y:Fill color="#E8EEF7" color2="#B7C9E3" transparent="false"/> + <y:BorderStyle color="#000000" type="line" width="1.0"/> + <y:NodeLabel alignment="center" autoSizePolicy="content" + fontFamily="Dialog" fontSize="12" fontStyle="plain" + hasBackgroundColor="false" hasLineColor="false" height="17.96875" + horizontalTextPosition="center" iconTextGap="4" modelName="custom" + textColor="#000000" verticalTextPosition="bottom" visible="true" + width="67.984375" x="6.0078125" xml:space="preserve" + y="11.015625">3<y:LabelModel> + <y:SmartNodeLabelModel distance="4.0"/></y:LabelModel> + <y:ModelParameter><y:SmartNodeLabelModelParameter labelRatioX="0.0" + labelRatioY="0.0" nodeRatioX="0.0" nodeRatioY="0.0" offsetX="0.0" + offsetY="0.0" upX="0.0" upY="-1.0"/></y:ModelParameter></y:NodeLabel> + </y:GenericNode> + </data> + </node> <edge id="e0" source="n0" target="n1"> <data key="d7"> <y:PolyLineEdge> @@ -626,24 +649,36 @@ def test_yfiles_extension(self): assert G.has_edge("n0", "n1", key="e0") assert G.nodes["n0"]["label"] == "1" assert G.nodes["n1"]["label"] == "2" + assert G.nodes["n2"]["label"] == "3" + assert G.nodes["n0"]["shape_type"] == "rectangle" + assert G.nodes["n1"]["shape_type"] == "rectangle" + assert G.nodes["n2"]["shape_type"] == "com.yworks.flowchart.terminator" + assert G.nodes["n2"]["description"] == "description\nline1\nline2" fh.seek(0) G = nx.read_graphml(fh) assert list(G.edges()) == [("n0", "n1")] assert G["n0"]["n1"]["id"] == "e0" assert G.nodes["n0"]["label"] == "1" assert G.nodes["n1"]["label"] == "2" + assert G.nodes["n2"]["label"] == "3" + assert G.nodes["n0"]["shape_type"] == "rectangle" + assert G.nodes["n1"]["shape_type"] == "rectangle" + assert G.nodes["n2"]["shape_type"] == "com.yworks.flowchart.terminator" + assert G.nodes["n2"]["description"] == "description\nline1\nline2" H = nx.parse_graphml(data, force_multigraph=True) assert list(H.edges()) == [("n0", "n1")] assert H.has_edge("n0", "n1", key="e0") assert H.nodes["n0"]["label"] == "1" assert H.nodes["n1"]["label"] == "2" + assert H.nodes["n2"]["label"] == "3" H = nx.parse_graphml(data) assert list(H.edges()) == [("n0", "n1")] assert H["n0"]["n1"]["id"] == "e0" assert H.nodes["n0"]["label"] == "1" assert H.nodes["n1"]["label"] == "2" + assert H.nodes["n2"]["label"] == "3" def test_bool(self): s = """<?xml version="1.0" encoding="UTF-8"?>
Getting shape types from GraphML written by yEd

### Current Behavior

I have a GraphML file, written by [yEd](https://www.yworks.com/products/yed). NetworkX reads it just fine with `read_graphml()`; however, one very important piece of information is missing: the type of the nodes (e.g. `diamond`, `roundrectangle`, etc.).

### Expected Behavior

I would expect to be able to get that information, because it's necessary for processing and understanding the exported graph.

### Steps to Reproduce

Create a graph with [yEd](https://www.yworks.com/products/yed) with various shapes. Export it to GraphML and load it back with NetworkX's `read_graphml()`. The shape types are not there.

### Environment

Python version: 3.8.5
NetworkX version: 2.5

### Additional context

yEd exports the shape type in the XML like this:

```XML
<node id="n1">
  <data key="d5" xml:space="preserve"><![CDATA[lfx="red" bela="2"]]></data>
  <data key="d6">
    <y:ShapeNode>
      <y:Geometry height="93.0" width="93.0" x="133.0" y="509.90625"/>
      <y:Fill color="#FFCC00" transparent="false"/>
      <y:BorderStyle color="#000000" raised="false" type="line" width="1.0"/>
      <y:NodeLabel alignment="center" autoSizePolicy="content"
        fontFamily="Dialog" fontSize="12" fontStyle="plain"
        hasBackgroundColor="false" hasLineColor="false" height="17.96875"
        horizontalTextPosition="center" iconTextGap="4" modelName="custom"
        textColor="#000000" verticalTextPosition="bottom" visible="true"
        width="29.16015625" x="31.919921875" xml:space="preserve"
        y="37.515625">RED
        <y:LabelModel><y:SmartNodeLabelModel distance="4.0"/></y:LabelModel>
        <y:ModelParameter>
          <y:SmartNodeLabelModelParameter labelRatioX="0.0" labelRatioY="0.0"
            nodeRatioX="0.0" nodeRatioY="0.0" offsetX="0.0" offsetY="0.0"
            upX="0.0" upY="-1.0"/>
        </y:ModelParameter>
      </y:NodeLabel>
      <y:Shape type="roundrectangle"/>
    </y:ShapeNode>
  </data>
</node>
```

Reference: [yEd documentation](https://docs.yworks.com/graphml/schema-doc/http___www.yworks.com_xml_graphml/simpleType/shapeType.type.html)
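For illustration, a standalone sketch (standard library only, independent of the patch above) of how the `type` attribute of the `y:Shape` element can be located with `ElementTree`; the XML here is a trimmed version of the node element shown above:

```python
import xml.etree.ElementTree as ET

# yFiles elements live in this namespace in yEd-generated GraphML
NS_Y = "http://www.yworks.com/xml/graphml"

node_xml = """\
<node xmlns:y="http://www.yworks.com/xml/graphml" id="n1">
  <data key="d6">
    <y:ShapeNode>
      <y:Geometry height="93.0" width="93.0" x="133.0" y="509.90625"/>
      <y:Shape type="roundrectangle"/>
    </y:ShapeNode>
  </data>
</node>
"""

node = ET.fromstring(node_xml)
# Look for a Shape element nested under a ShapeNode, namespace-qualified
shape = node.find(f".//{{{NS_Y}}}ShapeNode/{{{NS_Y}}}Shape")
if shape is not None:
    print(shape.get("type"))  # roundrectangle
```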
2021-03-23T13:08:34
networkx/networkx
4,753
networkx__networkx-4753
[ "4752" ]
63f550fef6712d5ea3ae09bbc341d696d65e55e3
diff --git a/networkx/utils/misc.py b/networkx/utils/misc.py --- a/networkx/utils/misc.py +++ b/networkx/utils/misc.py @@ -19,6 +19,30 @@ from itertools import tee, chain import networkx as nx +__all__ = [ + "is_string_like", + "iterable", + "empty_generator", + "flatten", + "make_list_of_ints", + "is_list_of_ints", + "make_str", + "generate_unique_node", + "default_opener", + "dict_to_numpy_array", + "dict_to_numpy_array1", + "dict_to_numpy_array2", + "is_iterator", + "arbitrary_element", + "consume", + "pairwise", + "groups", + "to_tuple", + "create_random_state", + "create_py_random_state", + "PythonRandomInterface", +] + # some cookbook stuff # used in deciding whether something is a bunch of nodes, edges, etc. diff --git a/networkx/utils/random_sequence.py b/networkx/utils/random_sequence.py --- a/networkx/utils/random_sequence.py +++ b/networkx/utils/random_sequence.py @@ -7,6 +7,16 @@ from networkx.utils import py_random_state +__all__ = [ + "powerlaw_sequence", + "zipf_rv", + "cumulative_distribution", + "discrete_sequence", + "random_weighted_sample", + "weighted_choice", +] + + # The same helpers for choosing random sequences from distributions # uses Python's random module # https://docs.python.org/3/library/random.html
diff --git a/networkx/utils/tests/test__init.py b/networkx/utils/tests/test__init.py new file mode 100644 --- /dev/null +++ b/networkx/utils/tests/test__init.py @@ -0,0 +1,11 @@ +import pytest + + +def test_utils_namespace(): + """Ensure objects are not unintentionally exposed in utils namespace.""" + with pytest.raises(ImportError): + from networkx.utils import nx + with pytest.raises(ImportError): + from networkx.utils import sys + with pytest.raises(ImportError): + from networkx.utils import defaultdict, deque
`networkx.utils.misc` missing `__all__` The `misc` module in the `utils` package doesn't define an `__all__`. As a consequence, some non-networkx objects (e.g. from `collections` and builtin modules) are incorrectly exposed in the `networkx.utils` namespace, including networkx itself:

```python
from networkx.utils import nx  # yikes
```
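As a quick illustration of the mechanism behind the fix (the file and function names below are made up, not from networkx):

```python
# helpers.py -- a hypothetical module
import sys                    # an implementation detail of the module
from collections import deque

__all__ = ["flatten"]         # only names listed here survive a star-import

def flatten(nested):
    return [x for sub in nested for x in sub]

# pkg/__init__.py -- re-exports the module's public names:
#     from .helpers import *
# Once __all__ is defined, `from pkg import flatten` works, while
# `from pkg import sys` (or `deque`) raises ImportError, which is
# exactly what the new test in the patch above checks.
```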
2021-04-23T18:53:02
networkx/networkx
4,768
networkx__networkx-4768
[ "4767" ]
d70b314b37168f0ea7c5b0d7f9ff61d73232747b
diff --git a/networkx/generators/geometric.py b/networkx/generators/geometric.py --- a/networkx/generators/geometric.py +++ b/networkx/generators/geometric.py @@ -52,7 +52,7 @@ def geometric_edges(G, radius, p): nodes, coords = list(zip(*nodes_pos)) kdtree = sp.spatial.cKDTree(coords) # Cannot provide generator. edge_indexes = kdtree.query_pairs(radius, p) - edges = [(nodes[u], nodes[v]) for u, v in edge_indexes] + edges = [(nodes[u], nodes[v]) for u, v in sorted(edge_indexes)] return edges
Unable to get the same network from `soft_random_geometric_graph` on different machines I'm unable, with version 2.5 of networkx, to get the same graph from `soft_random_geometric_graph` (with the same parameters, of course, and the same random number generator seed). A cause of the problem is the fact that `scipy.spatial.cKDTree` doesn't return the edges in the same order; a simple pass of `sorted(...)` solves the problem.

### Current Behavior

The generated graph has different edges on different machines, even with the same seed.

### Expected Behavior

The generated graph should have the same edges, given the same seed.

### Steps to Reproduce

```
import numpy as np
import networkx as nx

rng = np.random.RandomState()
rng.seed(4)

N = 100
scale = 2
x_pos = np.sqrt(N) * rng.random(N)
y_pos = np.sqrt(N) * rng.random(N)

# for soft geometric graph generation
pos = {i: (x, y) for i, (x, y) in enumerate(zip(x_pos, y_pos))}

rng = np.random.RandomState()
rng.seed(2)
G = nx.generators.soft_random_geometric_graph(N, 4, pos=pos, seed=rng)
```

This snippet is sufficient to generate graphs with different edges on different machines, provided that scipy is installed.

### Environment

Python version: 3.8, 3.7
NetworkX version: 2.5
Thanks very much for this -- I would have had a hard time identifying that there even was a problem here... :} Two quick questions while it is fresh in your head and before I dive into it. 1) You say that `sorted()` can fix the problem -- did you sort the results of the `query_pairs()` method, or later on, and would that work if the nodes (and thus the node-pairs) are not sortable? 2) Do you have suggestions for how I can check that this is working or not? How different do your "two machines" have to be? [Edit: What version(s) of scipy are you using on the two machines?]

1. I have fixed the problem by sorting the edges that come from the cKDTree routine, as I found that this algorithm was what gave different results on different machines.
2. I have found the problem on two different x86_64 machines, with the same versions of the packages but different versions of Python (one 3.7, the other 3.8).
3. I have two different versions of scipy, one is 1.5.2 (Python 3.7) and the other is 1.6.2 (Python 3.8).

Edit: Regardless, sorting the edges makes the generated graphs identical. I can post the code that works, too.

Unfortunately sorting the edges isn't always possible because nodes (and thus node-pairs) are not generally sortable -- only hashable. If you provide a number of nodes to the function all nodes are integers, but if you provide a list of nodes, the nodes are not necessarily sortable. I also find that scipy 1.5.1 and 1.6.0 give different results. I'll try to track it down from there. It might be that to get the same results you have to use the same version of scipy.

Sorted can work here... The return value of the `query_pairs` call on `sp.spatial.cKDTree(coords)` is a **set** of 2-tuple edges. So order is arbitrary and potentially different in different versions of Python. But it is not a set of 2-tuples of **nodes**, it is a set of 2-tuples of **node_indexes**. The index is the index within the list `coords` used as input. So, these are integers and can be sorted before what is now the next line, where they get mapped to nodes. All we need is to add `sorted` inside the list comprehension where we loop over `edge_indexes` (line 55). Is this where you had it? Maybe I should just say: your suggestion looks like it will work fine! I'd love it if you could make a PR, or indicate if this fix matches what you were suggesting. Thanks!

Yeah, I wrapped the result of `sp.spatial.cKDTree(coords)` in `sorted`. If you can tell me on which branch I have to commit the change, I will gladly make a Pull Request. (Branch master doesn't seem to have the call to cKDTree.)

Hmmm... We changed the branch name to `main` so there shouldn't be a `master` branch anymore. If you make a fork into your set of repositories you can name the branch whatever you want and then create a pull request from that branch to networkx/main. I can do it pretty easily, so if your setup isn't working smoothly just say so and I'll make it and ask you to check it.
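The sort-based fix this thread converged on (and that the patch above applies inside `geometric_edges`) can be sketched in isolation like this:

```python
import numpy as np
import scipy.spatial as sp

# query_pairs returns a *set* of (i, j) index tuples, so its iteration
# order is not guaranteed to be stable across Python/scipy versions.
rng = np.random.RandomState(4)
coords = np.sqrt(100) * rng.random((100, 2))

kdtree = sp.cKDTree(coords)
edge_indexes = kdtree.query_pairs(r=4, p=2)

# Sorting the integer index pairs before mapping them to nodes makes the
# resulting edge order (and hence the generated graph) reproducible.
edges = [(u, v) for u, v in sorted(edge_indexes)]
```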
2021-04-29T19:55:14
networkx/networkx
4,786
networkx__networkx-4786
[ "4783" ]
01d9cfe028e82ce7f2f7fd378ffe6e43ba2fddbb
diff --git a/networkx/conftest.py b/networkx/conftest.py --- a/networkx/conftest.py +++ b/networkx/conftest.py @@ -125,6 +125,14 @@ def set_warnings(): warnings.filterwarnings( "ignore", category=DeprecationWarning, message="iterable is deprecated" ) + warnings.filterwarnings( + "ignore", + category=FutureWarning, + message="\nThe function signature for cytoscape", + ) + warnings.filterwarnings( + "ignore", category=DeprecationWarning, message="\nThe `attrs` keyword" + ) @pytest.fixture(autouse=True) diff --git a/networkx/readwrite/json_graph/cytoscape.py b/networkx/readwrite/json_graph/cytoscape.py --- a/networkx/readwrite/json_graph/cytoscape.py +++ b/networkx/readwrite/json_graph/cytoscape.py @@ -2,11 +2,8 @@ __all__ = ["cytoscape_data", "cytoscape_graph"] -# TODO: Remove in NX 3.0 -_attrs = dict(name="name", ident="id") - -def cytoscape_data(G, attrs=None): +def cytoscape_data(G, attrs=None, name="name", ident="id"): """Returns data in Cytoscape JSON format (cyjs). Parameters @@ -24,6 +21,13 @@ def cytoscape_data(G, attrs=None): The `attrs` keyword argument will be replaced with `name` and `ident` in networkx 3.0 + name : string + A string which is mapped to the 'name' node element in cyjs format. + Must not have the same value as `ident`. + ident : string + A string which is mapped to the 'id' node element in cyjs format. + Must not have the same value as `name`. + Returns ------- data: dict @@ -32,7 +36,7 @@ def cytoscape_data(G, attrs=None): Raises ------ NetworkXError - If the `name` and `ident` attributes are identical. + If the values for `name` and `ident` are identical. See Also -------- @@ -55,28 +59,27 @@ def cytoscape_data(G, attrs=None): 'edges': [{'data': {'source': 0, 'target': 1}}]}} """ # ------ TODO: Remove between the lines in 3.0 ----- # - if attrs is None: - attrs = _attrs - else: + if attrs is not None: import warnings msg = ( - "\nThe function signature for cytoscape_data will change in " - "networkx 3.0.\n" - "The `attrs` keyword argument will be replaced with \n" - "explicit `name` and `ident` keyword arguments, e.g.\n\n" + "\nThe `attrs` keyword argument of cytoscape_data is deprecated\n" + "and will be removed in networkx 3.0.\n" + "It is replaced with explicit `name` and `ident` keyword\n" + "arguments.\n" + "To make this warning go away and ensure usage is forward\n" + "compatible, replace `attrs` with `name` and `ident`,\n" + "for example:\n\n" " >>> cytoscape_data(G, attrs={'name': 'foo', 'ident': 'bar'})\n\n" "should instead be written as\n\n" " >>> cytoscape_data(G, name='foo', ident='bar')\n\n" "in networkx 3.0.\n" - "The default values for 'name' and 'ident' will not change." + "The default values of 'name' and 'id' will not change." ) - warnings.warn(msg, FutureWarning, stacklevel=2) - - attrs.update({k: v for (k, v) in _attrs.items() if k not in attrs}) + warnings.warn(msg, DeprecationWarning, stacklevel=2) - name = attrs["name"] - ident = attrs["ident"] + name = attrs["name"] + ident = attrs["ident"] # -------------------------------------------------- # if name == ident: @@ -112,7 +115,7 @@ def cytoscape_data(G, attrs=None): return jsondata -def cytoscape_graph(data, attrs=None): +def cytoscape_graph(data, attrs=None, name="name", ident="id"): """ Create a NetworkX graph from a dictionary in cytoscape JSON format. @@ -131,6 +134,13 @@ def cytoscape_graph(data, attrs=None): The `attrs` keyword argument will be replaced with `name` and `ident` in networkx 3.0 + name : string + A string which is mapped to the 'name' node element in cyjs format. 
+ Must not have the same value as `ident`. + ident : string + A string which is mapped to the 'id' node element in cyjs format. + Must not have the same value as `name`. + Returns ------- graph : a NetworkX graph instance @@ -172,28 +182,26 @@ def cytoscape_graph(data, attrs=None): EdgeDataView([(0, 1, {'source': 0, 'target': 1})]) """ # ------ TODO: Remove between the lines in 3.0 ----- # - if attrs is None: - attrs = _attrs - else: + if attrs is not None: import warnings msg = ( - "\nThe function signature for cytoscape_data will change in " - "networkx 3.0.\n" - "The `attrs` keyword argument will be replaced with \n" - "explicit `name` and `ident` keyword arguments, e.g.\n\n" + "\nThe `attrs` keyword argument of cytoscape_data is deprecated\n" + "and will be removed in networkx 3.0.\n" + "It is replaced with explicit `name` and `ident` keyword\n" + "arguments.\n" + "To make this warning go away and ensure usage is forward\n" + "compatible, replace `attrs` with `name` and `ident`,\n" + "for example:\n\n" " >>> cytoscape_data(G, attrs={'name': 'foo', 'ident': 'bar'})\n\n" "should instead be written as\n\n" " >>> cytoscape_data(G, name='foo', ident='bar')\n\n" - "in networkx 3.0.\n" - "The default values for 'name' and 'ident' will not change." + "The default values of 'name' and 'id' will not change." ) - warnings.warn(msg, FutureWarning, stacklevel=2) - - attrs.update({k: v for (k, v) in _attrs.items() if k not in attrs}) + warnings.warn(msg, DeprecationWarning, stacklevel=2) - name = attrs["name"] - ident = attrs["ident"] + name = attrs["name"] + ident = attrs["ident"] # -------------------------------------------------- # if name == ident: diff --git a/networkx/readwrite/json_graph/tree.py b/networkx/readwrite/json_graph/tree.py --- a/networkx/readwrite/json_graph/tree.py +++ b/networkx/readwrite/json_graph/tree.py @@ -3,10 +3,9 @@ __all__ = ["tree_data", "tree_graph"] -_attrs = dict(id="id", children="children") - -def tree_data(G, root, attrs=_attrs): +# NOTE: Remove attrs from signature in 3.0 +def tree_data(G, root, attrs=None, ident="id", children="children"): """Returns data in tree format that is suitable for JSON serialization and use in Javascript documents. @@ -27,6 +26,19 @@ def tree_data(G, root, attrs=_attrs): If some user-defined graph data use these attribute names as data keys, they may be silently dropped. + .. deprecated:: 2.6 + + The `attrs` keyword argument is replaced by `ident` and `children` + and will be removed in networkx 3.0 + + ident : string + Attribute name for storing NetworkX-internal graph data. `ident` must + have a different value than `children`. The default is 'id'. + + children : string + Attribute name for storing NetworkX-internal graph data. `children` + must have a different value than `ident`. The default is 'children'. + Returns ------- data : dict @@ -35,7 +47,7 @@ def tree_data(G, root, attrs=_attrs): Raises ------ NetworkXError - If values in attrs are not unique. + If `children` and `ident` attributes are identical. Examples -------- @@ -55,8 +67,6 @@ def tree_data(G, root, attrs=_attrs): Graph and edge attributes are not stored. - The default value of attrs will be changed in a future release of NetworkX. 
- See Also -------- tree_graph, node_link_data, adjacency_data @@ -66,10 +76,30 @@ def tree_data(G, root, attrs=_attrs): if not G.is_directed(): raise TypeError("G is not directed.") - id_ = attrs["id"] - children = attrs["children"] - if id_ == children: - raise nx.NetworkXError("Attribute names are not unique.") + # NOTE: to be removed in 3.0 + if attrs is not None: + import warnings + + msg = ( + "\nThe `attrs` keyword argument of tree_data is deprecated\n" + "and will be removed in networkx 3.0.\n" + "It is replaced with explicit `ident` and `children` " + "keyword arguments.\n" + "To make this warning go away and ensure usage is forward\n" + "compatible, replace `attrs` with `ident` and `children,\n" + "for example:\n\n" + " >>> tree_data(G, root, attrs={'id': 'foo', 'children': 'bar'})\n\n" + "should instead be written as\n\n" + " >>> tree_data(G, root, ident='foo', children='bar')\n\n" + "The default values of 'id' and 'children' will not change." + ) + warnings.warn(msg, DeprecationWarning, stacklevel=2) + + ident = attrs["id"] + children = attrs["children"] + + if ident == children: + raise nx.NetworkXError("The values for `id` and `children` must be different.") def add_children(n, G): nbrs = G[n] @@ -77,19 +107,19 @@ def add_children(n, G): return [] children_ = [] for child in nbrs: - d = dict(chain(G.nodes[child].items(), [(id_, child)])) + d = dict(chain(G.nodes[child].items(), [(ident, child)])) c = add_children(child, G) if c: d[children] = c children_.append(d) return children_ - data = dict(chain(G.nodes[root].items(), [(id_, root)])) + data = dict(chain(G.nodes[root].items(), [(ident, root)])) data[children] = add_children(root, G) return data -def tree_graph(data, attrs=_attrs): +def tree_graph(data, attrs=None, ident="id", children="children"): """Returns graph from tree data format. Parameters @@ -102,6 +132,19 @@ def tree_graph(data, attrs=_attrs): NetworkX-internal graph data. The values should be unique. Default value: :samp:`dict(id='id', children='children')`. + .. deprecated:: 2.6 + + The `attrs` keyword argument is replaced by `ident` and `children` + and will be removed in networkx 3.0 + + ident : string + Attribute name for storing NetworkX-internal graph data. `ident` must + have a different value than `children`. The default is 'id'. + + children : string + Attribute name for storing NetworkX-internal graph data. `children` + must have a different value than `ident`. The default is 'children'. + Returns ------- G : NetworkX DiGraph @@ -113,33 +156,47 @@ def tree_graph(data, attrs=_attrs): >>> data = json_graph.tree_data(G, root=1) >>> H = json_graph.tree_graph(data) - Notes - ----- - The default value of attrs will be changed in a future release of NetworkX. - See Also -------- tree_data, node_link_data, adjacency_data """ graph = nx.DiGraph() - id_ = attrs["id"] - children = attrs["children"] + if attrs is not None: + import warnings + + msg = ( + "\nThe `attrs` keyword argument of tree_graph is deprecated\n" + "and will be removed in networkx 3.0.\n" + "It is replaced with explicit `ident` and `children` " + "keyword arguments.\n" + "To make this warning go away and ensure usage is\n" + "forward compatible, replace `attrs` with `ident` and `children,\n" + "for example:\n\n" + " >>> tree_graph(data, attrs={'id': 'foo', 'children': 'bar'})\n\n" + "should instead be written as\n\n" + " >>> tree_graph(data, ident='foo', children='bar')\n\n" + "The default values of 'id' and 'children' will not change." 
+ ) + warnings.warn(msg, DeprecationWarning, stacklevel=2) + + ident = attrs["id"] + children = attrs["children"] def add_children(parent, children_): for data in children_: - child = data[id_] + child = data[ident] graph.add_edge(parent, child) grandchildren = data.get(children, []) if grandchildren: add_children(child, grandchildren) nodedata = { - str(k): v for k, v in data.items() if k != id_ and k != children + str(k): v for k, v in data.items() if k != ident and k != children } graph.add_node(child, **nodedata) - root = data[id_] + root = data[ident] children_ = data.get(children, []) - nodedata = {str(k): v for k, v in data.items() if k != id_ and k != children} + nodedata = {str(k): v for k, v in data.items() if k != ident and k != children} graph.add_node(root, **nodedata) add_children(root, children_) return graph
diff --git a/networkx/readwrite/json_graph/tests/test_cytoscape.py b/networkx/readwrite/json_graph/tests/test_cytoscape.py --- a/networkx/readwrite/json_graph/tests/test_cytoscape.py +++ b/networkx/readwrite/json_graph/tests/test_cytoscape.py @@ -6,7 +6,7 @@ # TODO: To be removed when signature change complete in 3.0 -def test_futurewarning(): +def test_attrs_deprecation(): G = nx.path_graph(3) # No warnings when `attrs` kwarg not used with pytest.warns(None) as record: @@ -15,74 +15,78 @@ def test_futurewarning(): assert len(record) == 0 # Future warning raised with `attrs` kwarg attrs = {"name": "foo", "ident": "bar"} - with pytest.warns(FutureWarning): + with pytest.warns(DeprecationWarning): data = cytoscape_data(G, attrs) - with pytest.warns(FutureWarning): + with pytest.warns(DeprecationWarning): H = cytoscape_graph(data, attrs) -class TestCytoscape: - def test_graph(self): - G = nx.path_graph(4) - H = cytoscape_graph(cytoscape_data(G)) - nx.is_isomorphic(G, H) - - def test_input_data_is_not_modified_when_building_graph(self): - G = nx.path_graph(4) - input_data = cytoscape_data(G) - orig_data = copy.deepcopy(input_data) - # Ensure input is unmodified by cytoscape_graph (gh-4173) - cytoscape_graph(input_data) - assert input_data == orig_data - - def test_graph_attributes(self): - G = nx.path_graph(4) - G.add_node(1, color="red") - G.add_edge(1, 2, width=7) - G.graph["foo"] = "bar" - G.graph[1] = "one" - G.add_node(3, name="node", id="123") - - H = cytoscape_graph(cytoscape_data(G)) - assert H.graph["foo"] == "bar" - assert H.nodes[1]["color"] == "red" - assert H[1][2]["width"] == 7 - assert H.nodes[3]["name"] == "node" - assert H.nodes[3]["id"] == "123" - - d = json.dumps(cytoscape_data(G)) - H = cytoscape_graph(json.loads(d)) - assert H.graph["foo"] == "bar" - assert H.graph[1] == "one" - assert H.nodes[1]["color"] == "red" - assert H[1][2]["width"] == 7 - assert H.nodes[3]["name"] == "node" - assert H.nodes[3]["id"] == "123" - - def test_digraph(self): - G = nx.DiGraph() - nx.add_path(G, [1, 2, 3]) - H = cytoscape_graph(cytoscape_data(G)) - assert H.is_directed() - nx.is_isomorphic(G, H) - - def test_multidigraph(self): +def test_graph(): + G = nx.path_graph(4) + H = cytoscape_graph(cytoscape_data(G)) + nx.is_isomorphic(G, H) + + +def test_input_data_is_not_modified_when_building_graph(): + G = nx.path_graph(4) + input_data = cytoscape_data(G) + orig_data = copy.deepcopy(input_data) + # Ensure input is unmodified by cytoscape_graph (gh-4173) + cytoscape_graph(input_data) + assert input_data == orig_data + + +def test_graph_attributes(): + G = nx.path_graph(4) + G.add_node(1, color="red") + G.add_edge(1, 2, width=7) + G.graph["foo"] = "bar" + G.graph[1] = "one" + G.add_node(3, name="node", id="123") + + H = cytoscape_graph(cytoscape_data(G)) + assert H.graph["foo"] == "bar" + assert H.nodes[1]["color"] == "red" + assert H[1][2]["width"] == 7 + assert H.nodes[3]["name"] == "node" + assert H.nodes[3]["id"] == "123" + + d = json.dumps(cytoscape_data(G)) + H = cytoscape_graph(json.loads(d)) + assert H.graph["foo"] == "bar" + assert H.graph[1] == "one" + assert H.nodes[1]["color"] == "red" + assert H[1][2]["width"] == 7 + assert H.nodes[3]["name"] == "node" + assert H.nodes[3]["id"] == "123" + + +def test_digraph(): + G = nx.DiGraph() + nx.add_path(G, [1, 2, 3]) + H = cytoscape_graph(cytoscape_data(G)) + assert H.is_directed() + nx.is_isomorphic(G, H) + + +def test_multidigraph(): + G = nx.MultiDiGraph() + nx.add_path(G, [1, 2, 3]) + H = cytoscape_graph(cytoscape_data(G)) + assert 
H.is_directed() + assert H.is_multigraph() + + +def test_multigraph(): + G = nx.MultiGraph() + G.add_edge(1, 2, key="first") + G.add_edge(1, 2, key="second", color="blue") + H = cytoscape_graph(cytoscape_data(G)) + assert nx.is_isomorphic(G, H) + assert H[1][2]["second"]["color"] == "blue" + + +def test_exception(): + with pytest.raises(nx.NetworkXError): G = nx.MultiDiGraph() - nx.add_path(G, [1, 2, 3]) - H = cytoscape_graph(cytoscape_data(G)) - assert H.is_directed() - assert H.is_multigraph() - - def test_multigraph(self): - G = nx.MultiGraph() - G.add_edge(1, 2, key="first") - G.add_edge(1, 2, key="second", color="blue") - H = cytoscape_graph(cytoscape_data(G)) - assert nx.is_isomorphic(G, H) - assert H[1][2]["second"]["color"] == "blue" - - def test_exception(self): - with pytest.raises(nx.NetworkXError): - G = nx.MultiDiGraph() - attrs = dict(name="node", ident="node") - cytoscape_data(G, attrs) + cytoscape_data(G, name="foo", ident="foo") diff --git a/networkx/readwrite/json_graph/tests/test_tree.py b/networkx/readwrite/json_graph/tests/test_tree.py --- a/networkx/readwrite/json_graph/tests/test_tree.py +++ b/networkx/readwrite/json_graph/tests/test_tree.py @@ -4,38 +4,54 @@ from networkx.readwrite.json_graph import tree_data, tree_graph -class TestTree: - def test_graph(self): - G = nx.DiGraph() - G.add_nodes_from([1, 2, 3], color="red") - G.add_edge(1, 2, foo=7) - G.add_edge(1, 3, foo=10) - G.add_edge(3, 4, foo=10) - H = tree_graph(tree_data(G, 1)) - nx.is_isomorphic(G, H) - - def test_graph_attributes(self): - G = nx.DiGraph() - G.add_nodes_from([1, 2, 3], color="red") - G.add_edge(1, 2, foo=7) - G.add_edge(1, 3, foo=10) - G.add_edge(3, 4, foo=10) - H = tree_graph(tree_data(G, 1)) - assert H.nodes[1]["color"] == "red" - - d = json.dumps(tree_data(G, 1)) - H = tree_graph(json.loads(d)) - assert H.nodes[1]["color"] == "red" - - def test_exception(self): - with pytest.raises(TypeError, match="is not a tree."): - G = nx.complete_graph(3) - tree_data(G, 0) - with pytest.raises(TypeError, match="is not directed."): - G = nx.path_graph(3) - tree_data(G, 0) - with pytest.raises(nx.NetworkXError, match="names are not unique."): - G = nx.MultiDiGraph() - G.add_node(0) - attrs = dict(id="node", children="node") - tree_data(G, 0, attrs) +def test_graph(): + G = nx.DiGraph() + G.add_nodes_from([1, 2, 3], color="red") + G.add_edge(1, 2, foo=7) + G.add_edge(1, 3, foo=10) + G.add_edge(3, 4, foo=10) + H = tree_graph(tree_data(G, 1)) + nx.is_isomorphic(G, H) + + +def test_graph_attributes(): + G = nx.DiGraph() + G.add_nodes_from([1, 2, 3], color="red") + G.add_edge(1, 2, foo=7) + G.add_edge(1, 3, foo=10) + G.add_edge(3, 4, foo=10) + H = tree_graph(tree_data(G, 1)) + assert H.nodes[1]["color"] == "red" + + d = json.dumps(tree_data(G, 1)) + H = tree_graph(json.loads(d)) + assert H.nodes[1]["color"] == "red" + + +def test_exceptions(): + with pytest.raises(TypeError, match="is not a tree."): + G = nx.complete_graph(3) + tree_data(G, 0) + with pytest.raises(TypeError, match="is not directed."): + G = nx.path_graph(3) + tree_data(G, 0) + with pytest.raises(nx.NetworkXError, match="must be different."): + G = nx.MultiDiGraph() + G.add_node(0) + tree_data(G, 0, ident="node", children="node") + + +# NOTE: To be removed when deprecation expires in 3.0 +def test_attrs_deprecation(): + G = nx.path_graph(3, create_using=nx.DiGraph) + # No warnings when `attrs` kwarg not used + with pytest.warns(None) as record: + data = tree_data(G, 0) + H = tree_graph(data) + assert len(record) == 0 + # DeprecationWarning 
issued when `attrs` is used + attrs = {"id": "foo", "children": "bar"} + with pytest.warns(DeprecationWarning): + data = tree_data(G, 0, attrs=attrs) + with pytest.warns(DeprecationWarning): + H = tree_graph(data, attrs=attrs)
Fix signatures in json_graph.tree module: use explicit kwargs instead of dicts This was identified in #4678 but I'm pulling it out into its own issue so we can put a milestone on it. See [this comment](https://github.com/networkx/networkx/pull/4678#issuecomment-826370554) for details and #4284 for a blueprint on how the FutureWarning could be put in place.
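A before/after sketch of the two call styles involved, mirroring the examples embedded in the deprecation messages in the patch above:

```python
import networkx as nx
from networkx.readwrite import json_graph

G = nx.DiGraph([(1, 2), (1, 3)])

# old style, deprecated by this change (emits a DeprecationWarning):
data = json_graph.tree_data(G, root=1, attrs={"id": "foo", "children": "bar"})

# new style with explicit keyword arguments:
data = json_graph.tree_data(G, root=1, ident="foo", children="bar")
```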
2021-05-09T05:18:35
networkx/networkx
4,826
networkx__networkx-4826
[ "4775" ]
debfe114314f0829dbf3d0d9b489172c8c60bed0
diff --git a/networkx/conftest.py b/networkx/conftest.py --- a/networkx/conftest.py +++ b/networkx/conftest.py @@ -133,6 +133,9 @@ def set_warnings(): warnings.filterwarnings( "ignore", category=DeprecationWarning, message="\nThe `attrs` keyword" ) + warnings.filterwarnings( + "ignore", category=DeprecationWarning, message="preserve_random_state" + ) @pytest.fixture(autouse=True) diff --git a/networkx/utils/decorators.py b/networkx/utils/decorators.py --- a/networkx/utils/decorators.py +++ b/networkx/utils/decorators.py @@ -2,6 +2,7 @@ from os.path import splitext from contextlib import contextmanager from pathlib import Path +import warnings import networkx as nx from decorator import decorator @@ -331,6 +332,9 @@ def do_random_stuff(x, y): ----- If numpy.random is not importable, the state is not saved or restored. """ + msg = "preserve_random_state is deprecated and will be removed in 3.0." + warnings.warn(msg, DeprecationWarning) + try: import numpy as np
Remove function utils.preserve_random_state Remove and appropriately deprecate utils.preserve_random_state, which, as mentioned in #4732, looks like it was moved to utils in the same PR that removed its use from networkx. Needs: (anything else?)
- [ ] Deprecation warning in the function itself.
- [ ] Note in the list of deprecations to handle for v3.0.
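For context, the decorator saves the global numpy random state before calling the wrapped function and restores it afterwards. The same effect can be had inline with a try/finally, roughly like this (a sketch, not networkx code):

```python
import numpy as np

state = np.random.get_state()
try:
    # anything that perturbs the global numpy random state
    np.random.seed(42)
    sample = np.random.random(3)
finally:
    # the caller's random state is left untouched
    np.random.set_state(state)
```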
2021-05-20T16:51:06
networkx/networkx
4,831
networkx__networkx-4831
[ "3944" ]
e7a38bdc2874ec697ec28fc048a4d3204190c002
diff --git a/networkx/algorithms/core.py b/networkx/algorithms/core.py --- a/networkx/algorithms/core.py +++ b/networkx/algorithms/core.py @@ -259,7 +259,8 @@ def k_filter(v, k, c): def k_crust(G, k=None, core_number=None): """Returns the k-crust of G. - The k-crust is the graph G with the k-core removed. + The k-crust is the graph G with the edges of the k-core removed + and isolated nodes found after the removal of edges are also removed. Parameters ----------
k_crust(G, k) behavior is misleading when combined with k_core(G, k) With reference to the documentation, k_crust(G, k) in fact returns the complement of k_core(G, k+1). But this behavior is very misleading. What is the reason for such behavior of k_crust when k is specified? Thanks
It looks like ```k_crust``` removes the k-core and so is the complement of k_core(G,k). I think the documentation says: ```The k-crust is the graph G with the k-core removed.``` Also ```Note: This definition of k-crust is different than the definition in [1]. The k-crust in [1] is equivalent to the k+1 crust of this algorithm.``` Are you saying the function doesn't do what its documentation says? Or that it agrees with the documentation but should be defined differently?

As far as I understand the documentation about k_crust(G, k), the code is working as explained. However, it is misleading to run `nx.k_core(G, k=i).nodes()` and `nx.k_crust(G, k=i).nodes()` and to see nodes belonging to both sets... or I do not understand the math concept...

Ahhh... I think I see the confusion now: The docs say the k-crust is G with the k-core removed. But it's not clear there whether we remove the nodes or the edges of the k-core. We remove the edges as well as any isolated nodes after those edges are gone. This seems reasonable, but I can believe there are reasons to return all nodes, and only the edges not in the k_core. But we should not return only the nodes not in the k-core. That removes too much. I do believe that the k_core and the k_crust must have some nodes in common. It's the edges that are the focus of this way of looking at graph structure.
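A small worked example of that overlap (assuming the current networkx semantics, where the k-crust keeps nodes of core number at most k): nodes whose core number is exactly `k` show up in both subgraphs.

```python
import networkx as nx

# K4 with a two-edge tail hanging off node 3
G = nx.complete_graph(4)            # nodes 0-3 form the 3-core
G.add_edges_from([(3, 4), (4, 5)])  # nodes 4 and 5 have core number 1

print(nx.core_number(G))            # {0: 3, 1: 3, 2: 3, 3: 3, 4: 1, 5: 1}
print(sorted(nx.k_core(G, 1)))      # [0, 1, 2, 3, 4, 5]
print(sorted(nx.k_crust(G, 1)))     # [4, 5] -- also present in the 1-core
```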
2021-05-23T10:29:24
networkx/networkx
4,843
networkx__networkx-4843
[ "4171" ]
505d4f42521cb37907336c73750cf85a5ad42edc
diff --git a/networkx/algorithms/centrality/subgraph_alg.py b/networkx/algorithms/centrality/subgraph_alg.py --- a/networkx/algorithms/centrality/subgraph_alg.py +++ b/networkx/algorithms/centrality/subgraph_alg.py @@ -188,7 +188,7 @@ def subgraph_centrality(G): @not_implemented_for("directed") @not_implemented_for("multigraph") -def communicability_betweenness_centrality(G, normalized=True): +def communicability_betweenness_centrality(G): r"""Returns subgraph communicability for all pairs of nodes in G. Communicability betweenness measure makes use of the number of walks @@ -281,20 +281,10 @@ def communicability_betweenness_centrality(G, normalized=True): # put row and col back A[i, :] = row A[:, i] = col - # rescaling - cbc = _rescale(cbc, normalized=normalized) - return cbc - - -def _rescale(cbc, normalized): - # helper to rescale betweenness centrality - if normalized is True: - order = len(cbc) - if order <= 2: - scale = None - else: - scale = 1.0 / ((order - 1.0) ** 2 - (order - 1.0)) - if scale is not None: + # rescale when more than two nodes + order = len(cbc) + if order > 2: + scale = 1.0 / ((order - 1.0) ** 2 - (order - 1.0)) for v in cbc: cbc[v] *= scale return cbc
diff --git a/networkx/algorithms/centrality/tests/test_subgraph.py b/networkx/algorithms/centrality/tests/test_subgraph.py --- a/networkx/algorithms/centrality/tests/test_subgraph.py +++ b/networkx/algorithms/centrality/tests/test_subgraph.py @@ -53,6 +53,25 @@ def test_subgraph_centrality_big_graph(self): comm200 = nx.subgraph_centrality(g200) comm200_exp = nx.subgraph_centrality_exp(g200) + def test_communicability_betweenness_centrality_small(self): + result = communicability_betweenness_centrality(nx.path_graph(2)) + assert result == {0: 0, 1: 0} + + result = communicability_betweenness_centrality(nx.path_graph(1)) + assert result == {0: 0} + + result = communicability_betweenness_centrality(nx.path_graph(0)) + assert result == {} + + answer = {0: 0.1411224421177313, 1: 1.0, 2: 0.1411224421177313} + result = communicability_betweenness_centrality(nx.path_graph(3)) + for k, v in result.items(): + assert answer[k] == pytest.approx(result[k], abs=1e-7) + + result = communicability_betweenness_centrality(nx.complete_graph(3)) + for k, v in result.items(): + assert 0.49786143366223296 == pytest.approx(result[k], abs=1e-7) + def test_communicability_betweenness_centrality(self): answer = { 0: 0.07017447951484615,
Communicability Betweenness Centrality Error For normalized=False If you run the method communicability_betweenness_centrality(G, normalized) with `normalized` set to False, you will get the following error:

```
File "../../test/driver_auto_communicability_betweenness_centrality_v01.py", line 186, in main
    libraryMethodReturnTypeVariable = nx.communicability_betweenness_centrality(Graph,normalized)
local variable 'scale' referenced before assignment
```

---------------------------------------------

```
def _rescale(cbc, normalized):
    # helper to rescale betweenness centrality
    if normalized is True:
        order = len(cbc)
        if order <= 2:
            scale = None
        else:
            scale = 1.0 / ((order - 1.0) ** 2 - (order - 1.0))
    if scale is not None:
        for v in cbc:
            cbc[v] *= scale
    return cbc
```

This is because communicability_betweenness_centrality calls the above helper, and as you can see, when normalization is not True, `scale` is never assigned before execution reaches the line `if scale is not None`. I'm not sure what `scale` should be if normalization is not True.
Good catch and nice diagnosis. I also agree that it's not clear what the behavior *should* be when `normalization=False`, perhaps the linked reference in the docstring of `communicability_betweenness_centrality` would have some info on this?

The docstring links to https://arxiv.org/ftp/arxiv/papers/0905/0905.4102.pdf. If you go to page 7, there should be a definition of communicability betweenness. I don't even see a version without normalization.

Wow --- this bug has been there for 9 years. :) Looks like the ```_rescale``` function should be moved inline with the function and it's really just a way to avoid dividing by 0 for graphs with 2 or fewer nodes. It sets the 1/((N-1)*(N-2)) term to 1 if <=2 nodes. That's not really a "normalization"... And we can tell that no one wanted the "unnormalized version" because it leads to an exception because of this bug. I suggest we deprecate the ```normalized``` parameter and replace the function with a scaling that occurs in the function only ```if G.order() > 2:```.

That seems like the most reasonable course of action. This Issue requires a deprecation to fix
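The inline rescaling suggested above (and what the patch adopts) can be sketched like this; the `cbc` values are made-up placeholders for the raw per-node quantities computed earlier in the function:

```python
# scale by 1/((N-1)^2 - (N-1)) = 1/((N-1)(N-2)), but only when N > 2,
# so no `scale = None` sentinel is needed
cbc = {0: 0.5, 1: 1.2, 2: 0.7}

order = len(cbc)
if order > 2:
    scale = 1.0 / ((order - 1.0) ** 2 - (order - 1.0))
    for v in cbc:
        cbc[v] *= scale
```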
2021-05-27T07:07:43
networkx/networkx
4,872
networkx__networkx-4872
[ "4778" ]
82eba2f1b10d75464c8c1a0971b719abb234b33f
diff --git a/networkx/readwrite/graphml.py b/networkx/readwrite/graphml.py --- a/networkx/readwrite/graphml.py +++ b/networkx/readwrite/graphml.py @@ -962,8 +962,15 @@ def find_graphml_keys(self, graph_element): "type": self.python_type[attr_type], "for": k.get("for"), } - # check for "default" subelement of key element + # check for "default" sub-element of key element default = k.find(f"{{{self.NS_GRAPHML}}}default") if default is not None: - graphml_key_defaults[attr_id] = default.text + # Handle default values identically to data element values + python_type = graphml_keys[attr_id]["type"] + if python_type == bool: + graphml_key_defaults[attr_id] = self.convert_bool[ + default.text.lower() + ] + else: + graphml_key_defaults[attr_id] = python_type(default.text) return graphml_keys, graphml_key_defaults
diff --git a/networkx/readwrite/tests/test_graphml.py b/networkx/readwrite/tests/test_graphml.py --- a/networkx/readwrite/tests/test_graphml.py +++ b/networkx/readwrite/tests/test_graphml.py @@ -119,6 +119,40 @@ def setup_class(cls): cls.attribute_graph.add_edge("n5", "n4", id="e6", weight=1.1) cls.attribute_fh = io.BytesIO(cls.attribute_data.encode("UTF-8")) + cls.node_attribute_default_data = """<?xml version="1.0" encoding="UTF-8"?> + <graphml xmlns="http://graphml.graphdrawing.org/xmlns" + xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" + xsi:schemaLocation="http://graphml.graphdrawing.org/xmlns + http://graphml.graphdrawing.org/xmlns/1.0/graphml.xsd"> + <key id="d0" for="node" attr.name="boolean_attribute" attr.type="boolean"><default>false</default></key> + <key id="d1" for="node" attr.name="int_attribute" attr.type="int"><default>0</default></key> + <key id="d2" for="node" attr.name="long_attribute" attr.type="long"><default>0</default></key> + <key id="d3" for="node" attr.name="float_attribute" attr.type="float"><default>0.0</default></key> + <key id="d4" for="node" attr.name="double_attribute" attr.type="double"><default>0.0</default></key> + <key id="d5" for="node" attr.name="string_attribute" attr.type="string"><default>Foo</default></key> + <graph id="G" edgedefault="directed"> + <node id="n0"/> + <node id="n1"/> + <edge id="e0" source="n0" target="n1"/> + </graph> + </graphml> + """ + cls.node_attribute_default_graph = nx.DiGraph(id="G") + cls.node_attribute_default_graph.graph["node_default"] = { + "boolean_attribute": False, + "int_attribute": 0, + "long_attribute": 0, + "float_attribute": 0.0, + "double_attribute": 0.0, + "string_attribute": "Foo", + } + cls.node_attribute_default_graph.add_node("n0") + cls.node_attribute_default_graph.add_node("n1") + cls.node_attribute_default_graph.add_edge("n0", "n1", id="e0") + cls.node_attribute_default_fh = io.BytesIO( + cls.node_attribute_default_data.encode("UTF-8") + ) + cls.attribute_named_key_ids_data = """<?xml version='1.0' encoding='utf-8'?> <graphml xmlns="http://graphml.graphdrawing.org/xmlns" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" @@ -351,6 +385,11 @@ def test_read_attribute_graphml(self): for a, b in zip(ge, he): assert a == b + def test_node_default_attribute_graphml(self): + G = self.node_attribute_default_graph + H = nx.read_graphml(self.node_attribute_default_fh) + assert G.graph["node_default"] == H.graph["node_default"] + def test_directed_edge_in_undirected(self): s = """<?xml version="1.0" encoding="UTF-8"?> <graphml xmlns="http://graphml.graphdrawing.org/xmlns"
Deserializing custom default properties from yEd graphML files When importing a yEd-created graphml file with custom properties on the nodes, I found that the default values for boolean properties were always True. After some browsing through the code of GraphMLReader I found that while the properties of the nodes themselves are passed through the `decode_data_elements` method (so that GraphML booleans are parsed correctly), the items in the node_default dictionary are not. For the defaults, the value of each key is just cast to the appropriate `python_type`. In the case of booleans with value='false' this results in `bool('false') == True` :(

### Expected Behavior

The default attribute values should also be passed through `decode_data_elements`.

### Steps to Reproduce

Make a yEd graphml file with some custom node property of type 'boolean' with default 'false', then import the file using GraphMLReader. The resulting default node property has the bool value True.

### Environment

Python version: Python 3.9.4 (default, Apr 6 2021, 11:23:37) [Clang 11.0.3 (clang-1103.0.32.62)] on darwin
NetworkX version: 2.5.1

### Additional context

As posted in the Google discussion group [https://groups.google.com/g/networkx-discuss/c/XbFTyx-PKqc/m/pD2_WAdYAQAJ](https://groups.google.com/g/networkx-discuss/c/XbFTyx-PKqc/m/pD2_WAdYAQAJ)
We should add code to send the default values through the `decode_data_elements` method -- similar to what we do with the non-default properties of the nodes.
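The pitfall, and the table-based conversion the patch reuses for `<default>` values, in a couple of lines (the table here is illustrative; the patch uses the reader's own `convert_bool` mapping):

```python
# Casting the GraphML string "false" with Python's bool() yields True,
# because every non-empty string is truthy:
assert bool("false") is True

# Mapping the literal strings instead gives the intended booleans:
convert_bool = {"true": True, "false": False, "1": True, "0": False}
assert convert_bool["False".lower()] is False
```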
2021-06-04T19:52:18
networkx/networkx
4,906
networkx__networkx-4906
[ "4905" ]
168bcb82e0872b3cd6400efc9f1f428498adffeb
diff --git a/examples/drawing/plot_custom_node_icons.py b/examples/drawing/plot_custom_node_icons.py --- a/examples/drawing/plot_custom_node_icons.py +++ b/examples/drawing/plot_custom_node_icons.py @@ -4,24 +4,23 @@ ================= Example of using custom icons to represent nodes with matplotlib. + +Images for node icons courtesy of www.materialui.co """ import matplotlib.pyplot as plt import networkx as nx import PIL -import urllib.request # Image URLs for graph nodes -icon_urls = { - "router": "https://www.materialui.co/materialIcons/hardware/router_black_144x144.png", - "switch": "https://www.materialui.co/materialIcons/action/dns_black_144x144.png", - "PC": "https://www.materialui.co/materialIcons/hardware/computer_black_144x144.png", +icons = { + "router": "icons/router_black_144x144.png", + "switch": "icons/switch_black_144x144.png", + "PC": "icons/computer_black_144x144.png", } -# Load images from web -images = { - k: PIL.Image.open(urllib.request.urlopen(url)) for k, url in icon_urls.items() -} +# Load images +images = {k: PIL.Image.open(fname) for k, fname in icons.items()} # Generate the computer network graph G = nx.Graph() @@ -39,10 +38,22 @@ for v in range(1, 4): G.add_edge("switch_" + str(u), "PC_" + str(u) + "_" + str(v)) -# get layout and draw edges -pos = nx.spring_layout(G) +# Get a reproducible layout and create figure +pos = nx.spring_layout(G, seed=1734289230) fig, ax = plt.subplots() -nx.draw_networkx_edges(G, pos=pos, ax=ax, min_source_margin=15, min_target_margin=15) + +# Note: the min_source/target_margin kwargs only work with FancyArrowPatch objects. +# Force the use of FancyArrowPatch for edge drawing by setting `arrows=True`, +# but suppress arrowheads with `arrowstyle="-"` +nx.draw_networkx_edges( + G, + pos=pos, + ax=ax, + arrows=True, + arrowstyle="-", + min_source_margin=15, + min_target_margin=15, +) # Transform from data coordinates (scaled between xlim and ylim) to display coordinates tr_figure = ax.transData.transform
Getting 403 Forbidden errors when running plot_custom_node_icons example The way `plot_custom_node_icons` is currently set up, we grab resources from `materialsui.com` every time anyone runs this example, which of course includes our own CI runs. We are now getting 403 forbidden errors (rightly so) since we are accessing these resources programmatically. I can think of 2 ways around this: 1) Store the icon PNGs locally so we're not constantly having to download them from an external source 2) Add header spoofing to the url request. Though it makes the example a bit larger, I prefer option 1). The second option would add boilerplate to the example, and isn't exactly setting an example of good web citizenship. I'm certainly open to other ideas though!
2021-06-16T10:10:27
networkx/networkx
4,925
networkx__networkx-4925
[ "4877" ]
654611019cb9ec7532edf4f1d3b0b92e630162f0
diff --git a/networkx/readwrite/gml.py b/networkx/readwrite/gml.py --- a/networkx/readwrite/gml.py +++ b/networkx/readwrite/gml.py @@ -163,8 +163,20 @@ def read_gml(path, label="label", destringizer=None): -------- >>> G = nx.path_graph(4) >>> nx.write_gml(G, "test.gml") + + GML values are interpreted as strings by default: + >>> H = nx.read_gml("test.gml") - >>> I = nx.read_gml("test.gml", destringizer=int) + >>> H.nodes + NodeView(('0', '1', '2', '3')) + + When a `destringizer` is provided, GML values are converted to the provided type. + For example, integer nodes can be recovered as shown below: + + >>> J = nx.read_gml("test.gml", destringizer=int) + >>> J.nodes + NodeView((0, 1, 2, 3)) + """ def filter_lines(lines):
Add destringizer example to read_gml docstring IMO it would be nice to add a simple example of the `destringizer` parameter to the `read_gml` docstring. See #4854 for inspiration.
Adding this with pull request #4916 Thanks @randallwvr90, the rewording is indeed clearer --- it would still be nice to add a detailed example demonstrating the results of loading w/ and w/out the destringizer, so I propose to leave this open.
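One shape such an example could take: a slightly more forgiving destringizer than plain `int`, falling back to the original string when a value is not an integer literal (the function name here is made up):

```python
import networkx as nx

def destringize(value):
    try:
        return int(value)
    except ValueError:
        return value  # keep non-integer values as strings

G = nx.path_graph(4)
nx.write_gml(G, "test.gml")
H = nx.read_gml("test.gml", destringizer=destringize)
print(H.nodes)  # NodeView((0, 1, 2, 3))
```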
2021-06-24T16:04:48
networkx/networkx
4,926
networkx__networkx-4926
[ "4275" ]
654611019cb9ec7532edf4f1d3b0b92e630162f0
diff --git a/examples/algorithms/plot_parallel_betweenness.py b/examples/algorithms/plot_parallel_betweenness.py --- a/examples/algorithms/plot_parallel_betweenness.py +++ b/examples/algorithms/plot_parallel_betweenness.py @@ -10,8 +10,12 @@ the contribution of those nodes to the betweenness centrality of the whole network. Here we divide the network in chunks of nodes and we compute their contribution to the betweenness centrality of the whole network. -""" +Note: The example output below shows that the non-parallel implementation is +faster. This is a limitation of our CI/CD pipeline running on a single core. + +Depending on your setup, you will likely observe a speedup. +""" from multiprocessing import Pool import time import itertools
Parallel betweenness example bad runtimes in docs There is a nice example in the gallery for [computing betweenness centrality with multiprocessing](https://networkx.org/documentation/stable/auto_examples/advanced/plot_parallel_betweenness.html). Unfortunately, the example in the docs shows poorer performance for the parallel version than the non-parallelized version, likely due to the CI service only using single-core (or otherwise not very performant) nodes for computation. At the very least it might be worth adding some text somewhere to the example to explain why the example shows performance results that differ from what is expected. It's also probably worth taking a look at the example in its entirety and considering alternative approaches to multiprocessing, e.g. `concurrent.futures`.
Hi, @rossbar. I am new to the networkx community and interested in tackling this issue. Should I focus on adding text to the example or focus more on finding other parallel alternatives? Thank you.

Hi @berlincho - if you are interested in investigating other parallelization approaches, that might be a fun project. It's unlikely that we can do anything to improve the parallel performance of the example in the rendered documentation, as I suspect that is just a limitation related to the CI hardware. A quick note in the explanation for that example might help clear up this point for readers.
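A sketch of what the `concurrent.futures` variant floated above might look like: chunk the nodes, compute each chunk's contribution with `betweenness_centrality_subset` in a separate process, and sum the partial results (the same reduction the gallery example performs with `multiprocessing.Pool`):

```python
import itertools
from concurrent.futures import ProcessPoolExecutor

import networkx as nx

def chunks(seq, n):
    """Yield successive n-sized tuples from seq."""
    it = iter(seq)
    while True:
        chunk = tuple(itertools.islice(it, n))
        if not chunk:
            return
        yield chunk

def betweenness_parallel(G, n_workers=4):
    node_chunks = list(chunks(G.nodes(), max(1, G.order() // (n_workers * 4))))
    all_nodes = list(G)
    with ProcessPoolExecutor(max_workers=n_workers) as executor:
        partials = executor.map(
            nx.betweenness_centrality_subset,
            [G] * len(node_chunks),          # graph is pickled to each worker
            node_chunks,                     # sources for this chunk
            [all_nodes] * len(node_chunks),  # targets are all nodes
        )
        total = dict.fromkeys(G, 0.0)
        for partial in partials:
            for node, value in partial.items():
                total[node] += value
    return total

if __name__ == "__main__":  # guard required for process-based executors
    G = nx.erdos_renyi_graph(300, 0.05, seed=42)
    print(betweenness_parallel(G)[0])
```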
2021-06-24T17:23:11
networkx/networkx
4,928
networkx__networkx-4928
[ "4776" ]
654611019cb9ec7532edf4f1d3b0b92e630162f0
diff --git a/networkx/algorithms/assortativity/correlation.py b/networkx/algorithms/assortativity/correlation.py --- a/networkx/algorithms/assortativity/correlation.py +++ b/networkx/algorithms/assortativity/correlation.py @@ -73,8 +73,12 @@ def degree_assortativity_coefficient(G, x="out", y="in", weight=None, nodes=None .. [2] Foster, J.G., Foster, D.V., Grassberger, P. & Paczuski, M. Edge direction and the structure of networks, PNAS 107, 10815-20 (2010). """ - M = degree_mixing_matrix(G, x=x, y=y, nodes=nodes, weight=weight) - return numeric_ac(M) + if nodes is None: + nodes = G.nodes + degrees = set([d for n, d in G.degree(nodes, weight=weight)]) + mapping = {d: i for i, d, in enumerate(degrees)} + M = degree_mixing_matrix(G, x=x, y=y, nodes=nodes, weight=weight, mapping=mapping) + return numeric_ac(M, mapping=mapping) def degree_pearson_correlation_coefficient(G, x="out", y="in", weight=None, nodes=None): @@ -223,8 +227,12 @@ def numeric_assortativity_coefficient(G, attribute, nodes=None): .. [1] M. E. J. Newman, Mixing patterns in networks Physical Review E, 67 026126, 2003 """ - a = numeric_mixing_matrix(G, attribute, nodes) - return numeric_ac(a) + if nodes is None: + nodes = G.nodes + vals = set(G.nodes[n][attribute] for n in nodes) + mapping = {d: i for i, d, in enumerate(vals)} + M = attribute_mixing_matrix(G, attribute, nodes, mapping) + return numeric_ac(M, mapping) def attribute_ac(M): @@ -254,7 +262,7 @@ def attribute_ac(M): return r -def numeric_ac(M): +def numeric_ac(M, mapping): # M is a numpy matrix or array # numeric assortativity coefficient, pearsonr import numpy as np @@ -262,12 +270,13 @@ def numeric_ac(M): if M.sum() != 1.0: M = M / float(M.sum()) nx, ny = M.shape # nx=ny - x = np.arange(nx) - y = np.arange(ny) + x = np.array(list(mapping.keys())) + y = x # x and y have the same support + idx = list(mapping.values()) a = M.sum(axis=0) b = M.sum(axis=1) - vara = (a * x ** 2).sum() - ((a * x).sum()) ** 2 - varb = (b * x ** 2).sum() - ((b * x).sum()) ** 2 + vara = (a[idx] * x ** 2).sum() - ((a[idx] * x).sum()) ** 2 + varb = (b[idx] * y ** 2).sum() - ((b[idx] * y).sum()) ** 2 xy = np.outer(x, y) - ab = np.outer(a, b) + ab = np.outer(a[idx], b[idx]) return (xy * (M - ab)).sum() / np.sqrt(vara * varb)
diff --git a/networkx/algorithms/assortativity/tests/base_test.py b/networkx/algorithms/assortativity/tests/base_test.py --- a/networkx/algorithms/assortativity/tests/base_test.py +++ b/networkx/algorithms/assortativity/tests/base_test.py @@ -52,6 +52,10 @@ def setup_class(cls): cls.W = nx.Graph() cls.W.add_edges_from([(0, 3), (1, 3), (2, 3)], weight=0.5) cls.W.add_edge(0, 2, weight=1) + S1 = nx.star_graph(4) + S2 = nx.star_graph(4) + cls.DS = nx.disjoint_union(S1, S2) + cls.DS.add_edge(4, 5) class BaseTestNumericMixing: @@ -70,3 +74,10 @@ def setup_class(cls): F.add_edge(0, 2, weight=1) nx.set_node_attributes(F, dict(F.degree(weight="weight")), "margin") cls.F = F + + M = nx.Graph() + M.add_nodes_from([1, 2], margin=-1) + M.add_nodes_from([3], margin=1) + M.add_nodes_from([4], margin=2) + M.add_edges_from([(3, 4), (1, 2), (1, 3)]) + cls.M = M diff --git a/networkx/algorithms/assortativity/tests/test_correlation.py b/networkx/algorithms/assortativity/tests/test_correlation.py --- a/networkx/algorithms/assortativity/tests/test_correlation.py +++ b/networkx/algorithms/assortativity/tests/test_correlation.py @@ -42,6 +42,10 @@ def test_degree_assortativity_weighted(self): r = nx.degree_assortativity_coefficient(self.W, weight="weight") np.testing.assert_almost_equal(r, -0.1429, decimal=4) + def test_degree_assortativity_double_star(self): + r = nx.degree_assortativity_coefficient(self.DS) + np.testing.assert_almost_equal(r, -0.9339, decimal=4) + class TestAttributeMixingCorrelation(BaseTestAttributeMixing): def test_attribute_assortativity_undirected(self): @@ -91,3 +95,7 @@ def test_numeric_assortativity_negative(self): def test_numeric_assortativity_float(self): r = nx.numeric_assortativity_coefficient(self.F, "margin") np.testing.assert_almost_equal(r, -0.1429, decimal=4) + + def test_numeric_assortativity_mixed(self): + r = nx.numeric_assortativity_coefficient(self.M, "margin") + np.testing.assert_almost_equal(r, 0.4340, decimal=4)
numeric assortativity coefficient only works for nonnegative numeric attributes

### Current Behavior

If the numeric attribute values range from negative to positive (and are not all negative), `nx.numeric_mixing_matrix` effectively disregards the negative values, since the matrix is generated using `mapping = {x: x for x in range(m + 1)}`.

### Expected Behavior

The negative integers should be included in generating the matrix. Alternatively:
1. just use the Pearson correlation coefficient as given in Peel et al, eq. S4-S5 (Leto Peel, Jean-Charles Delvenne, Renaud Lambiotte, Multiscale mixing patterns in networks, PNAS Apr 2018, 115 (16) 4057-4062; DOI: 10.1073/pnas.1713019115). This has the added benefit that the scalar quantities need not be integers.
2. require that the integers be nonnegative
3. increase the size of the matrix so it includes the negative integers as well.

### Steps to Reproduce

```
G = nx.Graph()
G.add_nodes_from([1,2], a=-1)
G.add_nodes_from([3], a=1)
G.add_nodes_from([4], a=2)
G.add_edges_from([(3,4),(1,2),(1,3)])

# numeric_assortativity_coefficient gives -1.0
print(nx.numeric_assortativity_coefficient(G, 'a'))
```

### Environment

Python version: 3.8
NetworkX version: 2.5
I don't think `mapping = {x: x for x in range(m + 1)}` affects or is affected by the attributes of nodes. This constructs a mapping of nodes to integer indexes in the matrix. In this case it is mapping from the indexes created earlier to the indexes used here -- so it is the identity mapping. The numeric attributes can range with both positive and negative values. Also, your example gives coefficient -1, which seems to indicate that it is not ignoring the negative values of attribute `a`. What value would you expect? Can you clarify your description?

I actually discovered this error using a larger dataset, comparing the Pearson correlation coefficient (directly from scipy) with the one obtained using `numeric_assortativity_coefficient`. I tried taking the absolute value of the attribute and the discrepancy disappeared. I think the mapping is the culprit. `numeric_mixing_matrix` takes the dictionary `attribute_mixing_dict`, which has _attribute values_ (not node labels) as the keys, and whose values are another dictionary, with the keys being the neighbor attribute values and the dictionary values being the counts of how many times these attribute values occur in an edge. Therefore, in `numeric_mixing_matrix`, the value of `m` is the maximum of the attribute values (not the node labels). I've annotated the snippet below to point out what I think is wrong with the code. I hope that helps.

```
def numeric_mixing_matrix(G, attribute, nodes=None, normalized=True):
    """Returns numeric mixing matrix for attribute.

    The attribute must be an integer.

    Parameters
    ----------
    G : graph
       NetworkX graph object.

    attribute : string
       Node attribute key. The corresponding attribute must be an integer.

    nodes: list or iterable (optional)
        Build the matrix only with nodes in container. The default is all nodes.

    normalized : bool (default=True)
       Return counts if False or probabilities if True.

    Returns
    -------
    m: numpy array
       Counts, or joint, probability of occurrence of node attribute pairs.
    """
    d = attribute_mixing_dict(G, attribute, nodes)  # {attr: {attr: count}}
    s = set(d.keys())  # effectively a set of attribute _values_
    for k, v in d.items():
        s.update(v.keys())
    m = max(s)  # the maximum of attribute _values_

    mapping = {x: x for x in range(m + 1)}  # begins from zero to m, so if there are negative values, they won't be included
    a = dict_to_numpy_array(d, mapping=mapping)
    if normalized:
        a = a / a.sum()
    return a
```

Sorry -- my mistake -- I didn't understand the `mixing_dict` data structure. So, would a fix be to replace `range(m + 1)` with `range(min(s), max(s)+1)`? Also, what value would you expect for your example instead of -1?
2021-06-25T00:17:46
networkx/networkx
4,938
networkx__networkx-4938
[ "4846" ]
3146c56f0fe942f71784d5ee03aa65aec845859e
diff --git a/networkx/algorithms/approximation/traveling_salesman.py b/networkx/algorithms/approximation/traveling_salesman.py --- a/networkx/algorithms/approximation/traveling_salesman.py +++ b/networkx/algorithms/approximation/traveling_salesman.py @@ -447,13 +447,11 @@ def simulated_annealing_tsp( The distance between all pairs of nodes should be included. init_cycle : list of all nodes or "greedy" - The initial solution (a cycle through all nodes). - Usually you should use `greedy_tsp(G, weight)`. - But you can start with `list(G)` or the final result - of `simulated_annealing_tsp` when doing `threshold_accepting_tsp`. - - This argument is required. A shortcut if you don't want to think - about it is to use the string "greedy" which calls `greedy_tsp`. + The initial solution (a cycle through all nodes returning to the start). + This argument has no default to make you think about it. + If "greedy", use `greedy_tsp(G, weight)`. + Other common starting cycles are `list(G) + [next(iter(G))]` or the final + result of `simulated_annealing_tsp` when doing `threshold_accepting_tsp`. weight : string, optional (default="weight") Edge data key corresponding to the edge weight. @@ -663,6 +661,13 @@ def threshold_accepting_tsp( `G` should be a complete weighted undirected graph. The distance between all pairs of nodes should be included. + init_cycle : list or "greedy" + The initial solution (a cycle through all nodes returning to the start). + This argument has no default to make you think about it. + If "greedy", use `greedy_tsp(G, weight)`. + Other common starting cycles are `list(G) + [next(iter(G))]` or the final + result of `simulated_annealing_tsp` when doing `threshold_accepting_tsp`. + weight : string, optional (default="weight") Edge data key corresponding to the edge weight. If any edge does not have this attribute the weight is set to 1. @@ -713,9 +718,6 @@ def threshold_accepting_tsp( least one acceptance of a neighbor solution. If no inner loop moves are accepted the threshold remains unchanged. - cycle : list, optional (default=compute using greedy algorithm) - The initial solution (a cycle all nodes). - seed : integer, random_state, or None (default) Indicator of random number generation state. See :ref:`Randomness<randomness>`.
diff --git a/networkx/algorithms/approximation/tests/test_traveling_salesman.py b/networkx/algorithms/approximation/tests/test_traveling_salesman.py --- a/networkx/algorithms/approximation/tests/test_traveling_salesman.py +++ b/networkx/algorithms/approximation/tests/test_traveling_salesman.py @@ -210,6 +210,11 @@ def test_simulated_annealing_undirected(self): cost = sum(self.UG2[n][nbr]["weight"] for n, nbr in pairwise(cycle)) validate_symmetric_solution(cycle, cost, self.UG2_cycle, self.UG2_cost) + def test_error_on_input_order_mistake(self): + # see issue #4846 https://github.com/networkx/networkx/issues/4846 + pytest.raises(TypeError, self.tsp, self.UG, weight="weight") + pytest.raises(nx.NetworkXError, self.tsp, self.UG, "weight") + def test_not_complete_graph(self): pytest.raises( nx.NetworkXError,
Approximation TSP - Bug when custom weight is not correctly handled in input
When the weight has a different label than 'weight', it is not properly considered by `networkx.algorithms.approximation.traveling_salesman.threshold_accepting_tsp` (it is working for the `greedy` approximation but I didn't check the other approximations)
### Current Behavior
```python
import networkx as nx
from networkx.algorithms import approximation as approx
from networkx.utils import pairwise

G = nx.DiGraph()
G.add_weighted_edges_from({
    ("A", "B", 3), ("A", "C", 17), ("A", "D", 14),
    ("B", "A", 3), ("B", "C", 12), ("B", "D", 16),
    ("C", "A", 13),("C", "B", 12), ("C", "D", 4),
    ("D", "A", 14), ("D", "B", 15), ("D", "C", 2)
})
cycle = approx.greedy_tsp(G, source="D")
cost = sum(G[n][nbr]["weight"] for n, nbr in pairwise(cycle))
print(cycle)
print(cost)

# Test new graph with weight attribute 'length'
H = nx.DiGraph()
H.add_edges_from([
    ("A", "B", {'length':3}), ("A", "C", {'length':17}), ("A", "D", {'length':14}),
    ("B", "A", {'length':3}), ("B", "C", {'length':12}), ("B", "D", {'length':16}),
    ("C", "A", {'length':13}),("C", "B", {'length':12}), ("C", "D", {'length':4}),
    ("D", "A", {'length':14}), ("D", "B", {'length':15}), ("D", "C", {'length':2})
])
cycle = approx.greedy_tsp(H, weight='length',source="D")
cost = sum(H[n][nbr]["length"] for n, nbr in pairwise(cycle))
print(cycle)
print(cost)
print(nx.approximation.threshold_accepting_tsp(H, "greedy", weight='length', source='D'))

tsp = nx.approximation.traveling_salesman_problem
TA_tsp = nx.approximation.threshold_accepting_tsp
method = lambda Graph, wt: TA_tsp(Graph, "greedy", weight=wt, source='D')
path = tsp(H, weight='length', cycle=True, method=method)
print(path)
path = tsp(G, cycle=True, method=method)
print(path)
```
```
['D', 'C', 'B', 'A', 'D']
31
['D', 'C', 'B', 'A', 'D']
31
['D', 'C', 'B', 'A', 'D']
['D', 'C', 'B', 'C', 'A', 'D']
['D', 'C', 'B', 'A', 'D']
```
### Expected Behavior
```python
tsp = nx.approximation.traveling_salesman_problem
TA_tsp = nx.approximation.threshold_accepting_tsp
method = lambda Graph, wt: TA_tsp(Graph, "greedy", weight=wt, source='D')
path = tsp(H, weight='length', cycle=True, method=method)
```
returns `['D', 'C', 'B', 'C', 'A', 'D']` but it should return `['D', 'C', 'B', 'A', 'D']`
### Environment
Python version: 3.9.4
NetworkX version: 2.6rc1.dev0
### Additional context
The problem may come from the way `nx.approximation.traveling_salesman_problem` is handling custom weight? Have you considered not assigning a default weight of 1 when the weight attribute is not present? I think it could make the software more "robust" maybe? For example, when running the command below:
```python
cycle = approx.greedy_tsp(H, source="D")
cost = sum(H[n][nbr]["length"] for n, nbr in nx.utils.pairwise(cycle))
print(cycle)
print(cost)
```
I got:
```
['D', 'B', 'C', 'A', 'D']
54
```
But it is not the behavior I would expect. I would rather have the approximation made on the only edge attribute available ("length") or get an error message telling me `"Weight attribute not present "`.
Thanks very much for this!! It looks like the calling order of `threshold_accepting_tsp` is not being respected in the code or in the docs. This should be fixed and tests added. Code is near line 285 and docs in that function as well as the init_cycle argument to `threshold_accepting_tsp`.
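A minimal sketch of a guard that would enforce the documented call order, so that a weight name passed positionally fails loudly instead of silently becoming the initial cycle (the function name and error text are illustrative, not the final API):
```python
import networkx as nx

def _resolve_init_cycle(G, init_cycle, weight="weight"):
    # init_cycle must be the literal string "greedy" or an explicit list of nodes
    if init_cycle == "greedy":
        return nx.approximation.greedy_tsp(G, weight=weight)
    if not isinstance(init_cycle, list):
        raise nx.NetworkXError(
            f"init_cycle must be a list or 'greedy', got {init_cycle!r}"
        )
    return init_cycle
```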
2021-06-28T19:13:38
networkx/networkx
4,999
networkx__networkx-4999
[ "4998" ]
2eb274e39f712047cebf5666ee9caf2ba2e51ee4
diff --git a/networkx/algorithms/assortativity/correlation.py b/networkx/algorithms/assortativity/correlation.py --- a/networkx/algorithms/assortativity/correlation.py +++ b/networkx/algorithms/assortativity/correlation.py @@ -75,9 +75,27 @@ def degree_assortativity_coefficient(G, x="out", y="in", weight=None, nodes=None """ if nodes is None: nodes = G.nodes - degrees = set([d for n, d in G.degree(nodes, weight=weight)]) + + degrees = None + + if G.is_directed(): + indeg = ( + set([d for _, d in G.in_degree(nodes, weight=weight)]) + if "in" in (x, y) + else set() + ) + outdeg = ( + set([d for _, d in G.out_degree(nodes, weight=weight)]) + if "out" in (x, y) + else set() + ) + degrees = set.union(indeg, outdeg) + else: + degrees = set([d for _, d in G.degree(nodes, weight=weight)]) + mapping = {d: i for i, d, in enumerate(degrees)} M = degree_mixing_matrix(G, x=x, y=y, nodes=nodes, weight=weight, mapping=mapping) + return numeric_ac(M, mapping=mapping)
diff --git a/networkx/algorithms/assortativity/tests/base_test.py b/networkx/algorithms/assortativity/tests/base_test.py --- a/networkx/algorithms/assortativity/tests/base_test.py +++ b/networkx/algorithms/assortativity/tests/base_test.py @@ -44,6 +44,8 @@ def setup_class(cls): cls.P4 = nx.path_graph(4) cls.D = nx.DiGraph() cls.D.add_edges_from([(0, 2), (0, 3), (1, 3), (2, 3)]) + cls.D2 = nx.DiGraph() + cls.D2.add_edges_from([(0, 3), (1, 0), (1, 2), (2, 4), (4, 1), (4, 3), (4, 2)]) cls.M = nx.MultiGraph() nx.add_path(cls.M, range(4)) cls.M.add_edge(0, 1) diff --git a/networkx/algorithms/assortativity/tests/test_correlation.py b/networkx/algorithms/assortativity/tests/test_correlation.py --- a/networkx/algorithms/assortativity/tests/test_correlation.py +++ b/networkx/algorithms/assortativity/tests/test_correlation.py @@ -22,6 +22,12 @@ def test_degree_assortativity_directed(self): r = nx.degree_assortativity_coefficient(self.D) np.testing.assert_almost_equal(r, -0.57735, decimal=4) + def test_degree_assortativity_directed2(self): + """Test degree assortativity for a directed graph where the set of + in/out degree does not equal the total degree.""" + r = nx.degree_assortativity_coefficient(self.D2) + np.testing.assert_almost_equal(r, 0.14852, decimal=4) + def test_degree_assortativity_multigraph(self): r = nx.degree_assortativity_coefficient(self.M) np.testing.assert_almost_equal(r, -1.0 / 7.0, decimal=4) @@ -34,6 +40,12 @@ def test_degree_pearson_assortativity_directed(self): r = nx.degree_pearson_correlation_coefficient(self.D) np.testing.assert_almost_equal(r, -0.57735, decimal=4) + def test_degree_pearson_assortativity_directed2(self): + """Test degree assortativity with Pearson for a directed graph where + the set of in/out degree does not equal the total degree.""" + r = nx.degree_pearson_correlation_coefficient(self.D2) + np.testing.assert_almost_equal(r, 0.14852, decimal=4) + def test_degree_pearson_assortativity_multigraph(self): r = nx.degree_pearson_correlation_coefficient(self.M) np.testing.assert_almost_equal(r, -1.0 / 7.0, decimal=4)
Wrong degree_assortativity_coefficient for directed graphs ### Current Behavior ``degree_assortativity_coefficient`` will fail for most directed graphs except if the set of in- or out-degrees is the same as the set of total-degrees. This issue was introduced in 2.6 by #4928 ([L78](https://github.com/networkx/networkx/pull/4928/files#diff-76675aa4f0d3a79d394219c8e15ec346b3f5af9f4a733d5ef9e7026421d43bd9R78)). ### Expected Behavior The mapping should include all relevant in- and out-degrees for directed graphs. ### Steps to Reproduce ```python G = nx.DiGraph() G.add_edges_from([(0, 3), (1, 0), (1, 2), (2, 4), (4, 1), (4, 3), (4, 2)]) nx.degree_assortativity_coefficient(G) # returns NaN nx.degree_pearson_correlation_coefficient(G) # returns the correct value 0.14852 ``` ### Environment Python version: 3.9 NetworkX version: 2.6+
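For cross-checking, the directed coefficient is the Pearson correlation of (out-degree of source, in-degree of target) over the edges, so the reported value can be reproduced directly with scipy:
```python
import networkx as nx
from scipy import stats

G = nx.DiGraph([(0, 3), (1, 0), (1, 2), (2, 4), (4, 1), (4, 3), (4, 2)])
x, y = zip(*((G.out_degree(u), G.in_degree(v)) for u, v in G.edges()))
r, _ = stats.pearsonr(x, y)
print(r)  # ~0.14852, matching degree_pearson_correlation_coefficient
```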
2021-07-31T11:28:54
networkx/networkx
5,007
networkx__networkx-5007
[ "3112", "3112" ]
278bb078ba18f820c547fdceb8680094b0acf20e
diff --git a/networkx/algorithms/community/modularity_max.py b/networkx/algorithms/community/modularity_max.py --- a/networkx/algorithms/community/modularity_max.py +++ b/networkx/algorithms/community/modularity_max.py @@ -1,8 +1,11 @@ """Functions for detecting communities based on modularity.""" -from networkx.algorithms.community.quality import modularity +from collections import Counter +import networkx as nx +from networkx.algorithms.community.quality import modularity from networkx.utils.mapped_queue import MappedQueue +from networkx.utils import not_implemented_for __all__ = [ "greedy_modularity_communities", @@ -11,11 +14,114 @@ +def _greedy_modularity_communities_init(G, weight=None, resolution=1): + r"""Initializes the data structures for greedy_modularity_communities(). + + Clauset-Newman-Moore Eq 8-9. Eq 8 was missing a factor of 2 (from A_ij + A_ji). + See [2]_ at :func:`greedy_modularity_communities`. + + Parameters + ---------- + G : NetworkX graph + + weight : string or None, optional (default=None) + The name of an edge attribute that holds the numerical value used + as a weight. If None, then each edge has weight 1. + The degree is the sum of the edge weights adjacent to the node. + + resolution : float (default=1) + If resolution is less than 1, modularity favors larger communities. + Greater than 1 favors smaller communities. + + Returns + ------- + dq_dict : dict of dict's + dq_dict[i][j]: dQ for merging community i, j + + dq_heap : dict of MappedQueue's + dq_heap[i][n] : (-dq, i, j) for communitiy i nth largest dQ + + H : MappedQueue + (-dq, i, j) for community with nth largest max_j(dQ_ij) + + a, b : dict + undirected: + a[i]: fraction of (total weight of) edges within community i + b : None + directed: + a[i]: fraction of (total weight of) edges with tails within community i + b[i]: fraction of (total weight of) edges with heads within community i + + See Also + -------- + :func:`greedy_modularity_communities` + :func:`~networkx.algorithms.community.quality.modularity` + """ + # Count nodes and edges (or the sum of edge-weights for weighted graphs) + N = G.number_of_nodes() + m = G.size(weight) + + # Calculate degrees + if G.is_directed(): + k_in = dict(G.in_degree(weight=weight)) + k_out = dict(G.out_degree(weight=weight)) + q0 = 1.0 / m + else: + k_in = k_out = dict(G.degree(weight=weight)) + q0 = 1.0 / (2.0 * m) + + a = {node: kout * q0 for node, kout in k_out.items()} + if G.is_directed(): + b = {node: kin * q0 for node, kin in k_in.items()} + else: + b = None + + dq_dict = { + i: { + j: q0 + * ( + G.get_edge_data(i, j, default={weight: 0}).get(weight, 1.0) + + G.get_edge_data(j, i, default={weight: 0}).get(weight, 1.0) + - resolution * q0 * (k_out[i] * k_in[j] + k_in[i] * k_out[j]) + ) + for j in nx.all_neighbors(G, i) + if j != i + } + for i in G.nodes() + } + + # dq correction for multi-edges + # In case of multi-edges, get_edge_data(i, j) returns the key: data dict of the i, j + # edges, which does not have a 'weight' key. Therefore, when calculating dq for i, j + # Aij is always 1.0 and a correction is required. + if G.is_multigraph(): + edges_count = dict(Counter(G.edges())) + multi_edges = [edge for edge, count in edges_count.items() if count > 1] + for edge in multi_edges: + total_wt = sum(d.get(weight, 1) for d in G.get_edge_data(*edge).values()) + if G.is_directed(): + # The correction applies only to the direction of the edge. The edge at + # the other direction is either not a multiedge (where the weight is + # added correctly), non-existent or it is also a multiedge, in which + # case it will be handled singly when its turn in the loop comes. + q00 = q0 + else: + q00 = 2 * q0 + dq_dict[edge[0]][edge[1]] += q00 * (total_wt - 1) + dq_dict[edge[1]][edge[0]] += q00 * (total_wt - 1) + + dq_heap = { + i: MappedQueue([(-dq, i, j) for j, dq in dq_dict[i].items()]) for i in G.nodes() + } + H = MappedQueue([dq_heap[i].h[0] for i in G.nodes() if len(dq_heap[i]) > 0]) + + return dq_dict, dq_heap, H, a, b + + def greedy_modularity_communities(G, weight=None, resolution=1, n_communities=1): r"""Find communities in G using greedy modularity maximization. This function uses Clauset-Newman-Moore greedy modularity maximization [2]_. - This method currently supports the Graph class. Greedy modularity maximization begins with each node in its own community and joins the pair of communities that most increases modularity until no @@ -48,8 +154,8 @@ def greedy_modularity_communities(G, weight=None, resolution=1, n_communities=1) Returns ------- - list - A list of sets of nodes, one for each community. + partition: list + A list of frozensets of nodes, one for each community. Sorted by length with largest communities first. Examples @@ -66,63 +172,30 @@ References ---------- - .. [1] M. E. J Newman "Networks: An Introduction", page 224 + .. [1] Newman, M. E. J. "Networks: An Introduction", page 224 Oxford University Press 2011. .. [2] Clauset, A., Newman, M. E., & Moore, C. "Finding community structure in very large networks." Physical Review E 70(6), 2004. .. [3] Reichardt and Bornholdt "Statistical Mechanics of Community Detection" Phys. Rev. E74, 2006. + .. [4] Newman, M. E. J."Analysis of weighted networks" + Physical Review E 70(5 Pt 2):056131, 2004. """ - - # Count nodes and edges - N = len(G.nodes()) - m = sum([d.get(weight, 1) for u, v, d in G.edges(data=True)]) - q0 = 1.0 / (2.0 * m) - + N = G.number_of_nodes() if (n_communities < 1) or (n_communities > N): raise ValueError( - f"n_communities must be between 1 and {len(G.nodes())}. Got {n_communities}" + f"n_communities must be between 1 and {N}. Got {n_communities}" ) - # Map node labels to contiguous integers - label_for_node = {i: v for i, v in enumerate(G.nodes())} - node_for_label = {label_for_node[i]: i for i in range(N)} - - # Calculate degrees - k_for_label = G.degree(G.nodes(), weight=weight) - k = [k_for_label[label_for_node[i]] for i in range(N)] - - # Initialize community and merge lists - communities = {i: frozenset([i]) for i in range(N)} - merges = [] - - # Initial modularity - partition = [[label_for_node[x] for x in c] for c in communities.values()] - q_cnm = modularity(G, partition, resolution=resolution) - # Initialize data structures - # CNM Eq 8-9 (Eq 8 was missing a factor of 2 (from A_ij + A_ji) - # a[i]: fraction of edges within community i - # dq_dict[i][j]: dQ for merging community i, j - # dq_heap[i][n] : (-dq, i, j) for communitiy i nth largest dQ - # H[n]: (-dq, i, j) for community with nth largest max_j(dQ_ij) - a = [k[i] * q0 for i in range(N)] - dq_dict = { - i: { - j: 2 - * q0 - * G.get_edge_data(label_for_node[i], label_for_node[j]).get(weight, 1.0) - - 2 * resolution * k[i] * k[j] * q0 * q0 - for j in [node_for_label[u] for u in G.neighbors(label_for_node[i])] - if j != i - } - for i in range(N) - } - dq_heap = [ - MappedQueue([(-dq, i, j) for j, dq in dq_dict[i].items()]) for i in range(N) - ] - H = MappedQueue([dq_heap[i].h[0] for i in range(N) if len(dq_heap[i]) > 0]) + dq_dict, dq_heap, H, a, b = _greedy_modularity_communities_init( + G, weight, resolution + ) + # Initialize single-node communities + communities = {i: frozenset([i]) for i in G.nodes()} + # Initial modularity + q_cnm = modularity(G, communities.values(), resolution=resolution) # Merge communities until we can't improve modularity or until desired number of # communities (n_communities) is reached. @@ -159,7 +232,6 @@ # Perform merge communities[j] = frozenset(communities[i] | communities[j]) del communities[i] - merges.append((i, j, dq)) # New modularity q_cnm += dq # Get list of communities connected to merged communities @@ -173,10 +245,16 @@ if k in both_set: dq_jk = dq_dict[j][k] + dq_dict[i][k] elif k in j_set: - dq_jk = dq_dict[j][k] - 2.0 * resolution * a[i] * a[k] + if G.is_directed(): + dq_jk = dq_dict[j][k] - resolution * (a[i] * b[k] + a[k] * b[i]) + else: + dq_jk = dq_dict[j][k] - 2.0 * resolution * a[i] * a[k] else: # k in i_set - dq_jk = dq_dict[i][k] - 2.0 * resolution * a[j] * a[k] + if G.is_directed(): + dq_jk = dq_dict[i][k] - resolution * (a[j] * b[k] + a[k] * b[j]) + else: + dq_jk = dq_dict[i][k] - 2.0 * resolution * a[j] * a[k] # Update rows j and k for row, col in [(j, k), (k, j)]: # Save old value for finding heap index @@ -237,13 +315,16 @@ # Merge i into j and update a a[j] += a[i] a[i] = 0 + if G.is_directed(): + b[j] += b[i] + b[i] = 0 - communities = [ - frozenset([label_for_node[i] for i in c]) for c in communities.values() - ] - return sorted(communities, key=len, reverse=True) + partition = sorted(communities.values(), key=len, reverse=True) + return partition +@not_implemented_for("directed") +@not_implemented_for("multigraph") def naive_greedy_modularity_communities(G, resolution=1): r"""Find communities in G using greedy modularity maximization.
diff --git a/networkx/algorithms/community/quality.py b/networkx/algorithms/community/quality.py --- a/networkx/algorithms/community/quality.py +++ b/networkx/algorithms/community/quality.py @@ -283,9 +283,9 @@ def modularity(G, communities, weight="weight", resolution=1): These node sets must represent a partition of G's nodes. weight : string or None, optional (default="weight") - The edge attribute that holds the numerical value used - as a weight. If None or an edge does not have that attribute, - then that edge has weight 1. + The edge attribute that holds the numerical value used + as a weight. If None or an edge does not have that attribute, + then that edge has weight 1. resolution : float (default=1) If resolution is less than 1, modularity favors larger communities.
diff --git a/networkx/algorithms/community/tests/test_modularity_max.py b/networkx/algorithms/community/tests/test_modularity_max.py --- a/networkx/algorithms/community/tests/test_modularity_max.py +++ b/networkx/algorithms/community/tests/test_modularity_max.py @@ -53,6 +53,33 @@ def test_greedy_modularity_communities_relabeled(): assert greedy_modularity_communities(G) == expected +def test_greedy_modularity_communities_directed(): + G = nx.DiGraph( + [ + ("a", "b"), + ("a", "c"), + ("b", "c"), + ("b", "d"), # inter-community edge + ("d", "e"), + ("d", "f"), + ("d", "g"), + ("f", "g"), + ("d", "e"), + ("f", "e"), + ] + ) + expected = [frozenset({"f", "g", "e", "d"}), frozenset({"a", "b", "c"})] + assert greedy_modularity_communities(G) == expected + + # with loops + G = nx.DiGraph() + G.add_edges_from( + [(1, 1), (1, 2), (1, 3), (2, 3), (1, 4), (4, 4), (5, 5), (4, 5), (4, 6), (5, 6)] + ) + expected = [frozenset({1, 2, 3}), frozenset({4, 5, 6})] + assert greedy_modularity_communities(G) == expected + + def test_modularity_communities_weighted(): G = nx.balanced_tree(2, 3) for (a, b) in G.edges: @@ -66,6 +93,161 @@ def test_modularity_communities_weighted(): assert greedy_modularity_communities(G, weight="weight") == expected +def test_modularity_communities_directed_weighted(): + G = nx.DiGraph() + G.add_weighted_edges_from( + [ + (1, 2, 5), + (1, 3, 3), + (2, 3, 6), + (2, 6, 1), + (1, 4, 1), + (4, 5, 3), + (4, 6, 7), + (5, 6, 2), + (5, 7, 5), + (5, 8, 4), + (6, 8, 3), + ] + ) + expected = [frozenset({4, 5, 6, 7, 8}), frozenset({1, 2, 3})] + assert greedy_modularity_communities(G, weight="weight") == expected + + # A large weight of the edge (2, 6) causes 6 to change group, even if it shares + # only one connection with the new group and 3 with the old one. + G[2][6]["weight"] = 20 + expected = [frozenset({1, 2, 3, 6}), frozenset({4, 5, 7, 8})] + assert greedy_modularity_communities(G, weight="weight") == expected + + +def test_greedy_modularity_communities_multigraph(): + G = nx.MultiGraph() + G.add_edges_from( + [ + (1, 2), + (1, 2), + (1, 3), + (2, 3), + (1, 4), + (2, 4), + (4, 5), + (5, 6), + (5, 7), + (5, 7), + (6, 7), + (7, 8), + (5, 8), + ] + ) + expected = [frozenset({1, 2, 3, 4}), frozenset({5, 6, 7, 8})] + assert greedy_modularity_communities(G) == expected + + # Converting (4, 5) into a multi-edge causes node 4 to change group. + G.add_edge(4, 5) + expected = [frozenset({4, 5, 6, 7, 8}), frozenset({1, 2, 3})] + assert greedy_modularity_communities(G) == expected + + +def test_greedy_modularity_communities_multigraph_weighted(): + G = nx.MultiGraph() + G.add_weighted_edges_from( + [ + (1, 2, 5), + (1, 2, 3), + (1, 3, 6), + (1, 3, 6), + (2, 3, 4), + (1, 4, 1), + (1, 4, 1), + (2, 4, 3), + (2, 4, 3), + (4, 5, 1), + (5, 6, 3), + (5, 6, 7), + (5, 6, 4), + (5, 7, 9), + (5, 7, 9), + (6, 7, 8), + (7, 8, 2), + (7, 8, 2), + (5, 8, 6), + (5, 8, 6), + ] + ) + expected = [frozenset({1, 2, 3, 4}), frozenset({5, 6, 7, 8})] + assert greedy_modularity_communities(G, weight="weight") == expected + + # Adding multi-edge (4, 5, 16) causes node 4 to change group. + G.add_edge(4, 5, weight=16) + expected = [frozenset({4, 5, 6, 7, 8}), frozenset({1, 2, 3})] + assert greedy_modularity_communities(G, weight="weight") == expected + + # Increasing the weight of edge (1, 4) causes node 4 to return to the former group.
+ G[1][4][1]["weight"] = 3 + expected = [frozenset({1, 2, 3, 4}), frozenset({5, 6, 7, 8})] + assert greedy_modularity_communities(G, weight="weight") == expected + + +def test_greed_modularity_communities_multidigraph(): + G = nx.MultiDiGraph() + G.add_edges_from( + [ + (1, 2), + (1, 2), + (3, 1), + (2, 3), + (2, 3), + (3, 2), + (1, 4), + (2, 4), + (4, 2), + (4, 5), + (5, 6), + (5, 6), + (6, 5), + (5, 7), + (6, 7), + (7, 8), + (5, 8), + (8, 4), + ] + ) + expected = [frozenset({1, 2, 3, 4}), frozenset({5, 6, 7, 8})] + assert greedy_modularity_communities(G, weight="weight") == expected + + +def test_greed_modularity_communities_multidigraph_weighted(): + G = nx.MultiDiGraph() + G.add_weighted_edges_from( + [ + (1, 2, 5), + (1, 2, 3), + (3, 1, 6), + (1, 3, 6), + (3, 2, 4), + (1, 4, 2), + (1, 4, 5), + (2, 4, 3), + (3, 2, 8), + (4, 2, 3), + (4, 3, 5), + (4, 5, 2), + (5, 6, 3), + (5, 6, 7), + (6, 5, 4), + (5, 7, 9), + (5, 7, 9), + (7, 6, 8), + (7, 8, 2), + (8, 7, 2), + (5, 8, 6), + (5, 8, 6), + ] + ) + expected = [frozenset({1, 2, 3, 4}), frozenset({5, 6, 7, 8})] + assert greedy_modularity_communities(G, weight="weight") == expected + + def test_resolution_parameter_impact(): G = nx.barbell_graph(5, 3)
greedy modularity communities fails I am trying to find communities of a directed graph (Number of Nodes: 53663 and Number of Edges: 953380) with `greedy_modularity_communities`. This is the error I receive:
```
line 127, in greedy_modularity_communities
    if dq_heap[j].h[0] == (-dq, j, i):
IndexError: list index out of range
```
However, running greedy_modularity_communities on the same graph with undirected edges is successful.
@elplatt is this designed to work with directed graphs? If not, we should put the ```@not_implemented_for('directed')``` decorator on the function. Also helpful to know if it is supposed to work for multigraphs. As written, I believe it only works with undirected graphs. It only needs a little extra code to work with weighted and directed graphs though, maybe I'll see if I can work on that over the next couple weeks. As for multigraphs, is there a well-defined concept of "weight" between two nodes? Or a standard networkx way of defining a weight function? If so, the above changes would probably allow the code to work for multigraphs as well. Take a look at the weighted graph code for dijkstra (in ```algorithms/shortest_paths/weighted.py```) for a standard treatment of inputting a weight function or weight attribute name. You can probably use the same function to process your input. While you are in there, take a look at the doc_string formatting. For sphinx to make it look nice it is supposed to have a one-line short description followed by a blank line and then a short paragraph (I think this function has a short paragraph). Then sections for parameters, returns, Examples, References and/or Notes and See Also... none of these are mandatory, but they might make the documentation look better after sphinx processes it. Thanks very much for looking into expanding it for directed and multigraph! :) Thank you for working on this for directed and weighted graphs. I was really curious is there an update available for it? Eagerly waiting for that. Thanks again. Wanted to give an update that I'm working on this. Have derived the equations for weighted undirected with self-loops and working on directed. If anyone is interested in helping write test cases, that would really help! Here are the equations for both directed and undirected networks, allowing self-loops: https://github.com/elplatt/Paper-CNM-Modularity/blob/master/paper.pdf I will be implementing these soon, and it would be really helpful to have some test data for possible configurations (directed w/ self-loops, directed wo/ self-loops, etc.). Ideally this would come from hand-calculating the algorithm for a small network (probably not too hard in a spreadsheet but a bit tedious). Anyone able to help? @elplatt Sir, I met the same problem as @lhassan1. Does greedy_modularity_communities support directed graphs now? If not, does networkx support any methods for reliable community detection and modularity calculation on directed graphs? I believe it still supports only undirected graphs. It's still something I plan to do, unless someone else gets to it first, but I'm focused on PhD work right now and won't likely have a chance to look at it for several months. The necessary changes are fairly simple and detailed in the pdf I link above. Changing the modularity calculation always comes with the potential to introduce calculation errors, so the bulk of the work is in creating the appropriate test cases. @elplatt Thanks. Good luck to your PhD work.
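Since directed modularity is already available via `networkx.algorithms.community.modularity`, one way to hand-build such test cases is to score candidate partitions of a small directed graph directly (the graph and partition below are illustrative, not from any eventual test suite):
```python
import networkx as nx
from networkx.algorithms.community import modularity

G = nx.DiGraph(
    [("a", "b"), ("a", "c"), ("b", "c"), ("b", "d"),
     ("d", "e"), ("d", "f"), ("d", "g"), ("f", "g"), ("f", "e")]
)
partition = [{"a", "b", "c"}, {"d", "e", "f", "g"}]
print(modularity(G, partition))  # can be verified by hand with the directed formula
```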
2021-08-04T21:08:53
networkx/networkx
5,011
networkx__networkx-5011
[ "4712" ]
906bf82ab7edf0ad4cea067b3be5a4e1cba356a3
diff --git a/networkx/generators/geometric.py b/networkx/generators/geometric.py --- a/networkx/generators/geometric.py +++ b/networkx/generators/geometric.py @@ -290,11 +290,13 @@ def geographical_threshold_graph( .. math:: - (w_u + w_v)h(r) \ge \theta + (w_u + w_v)p_{dist}(r) \ge \theta - where `r` is the distance between `u` and `v`, h(r) is a probability of - connection as a function of `r`, and :math:`\theta` as the threshold - parameter. h(r) corresponds to the p_dist parameter. + where `r` is the distance between `u` and `v`, `p_dist` is any function of + `r`, and :math:`\theta` as the threshold parameter. `p_dist` is used to + give weight to the distance between nodes when deciding whether or not + they should be connected. The larger `p_dist` is, the more prone nodes + separated by `r` are to be connected, and vice versa. Parameters ---------- @@ -326,17 +328,17 @@ def geographical_threshold_graph( .. _metric: https://en.wikipedia.org/wiki/Metric_%28mathematics%29 p_dist : function, optional - A probability density function computing the probability of - connecting two nodes that are of distance, r, computed by metric. - The probability density function, `p_dist`, must - be any function that takes the metric value as input - and outputs a single probability value between 0-1. - The scipy.stats package has many probability distribution functions - implemented and tools for custom probability distribution - definitions [2], and passing the .pdf method of scipy.stats - distributions can be used here. If the probability - function, `p_dist`, is not supplied, the default exponential function - :math: `r^{-2}` is used. + Any function used to give weight to the distance between nodes when + deciding whether or not they should be connected. `p_dist` was + originally conceived as a probability density function giving the + probability of connecting two nodes that are of metric distance `r` + apart. The implementation here allows for more arbitrary definitions + of `p_dist` that do not need to correspond to valid probability + density functions. The :mod:`scipy.stats` package has many + probability density functions implemented and tools for custom + probability density definitions, and passing the ``.pdf`` method of + scipy.stats distributions can be used here. If ``p_dist=None`` + (the default), the exponential function :math:`r^{-2}` is used. seed : integer, random_state, or None (default) Indicator of random number generation state. See :ref:`Randomness<randomness>`.
Update documentation for geographical_threshold_graph The documentation states that the return value of `p_dist` must be in the range `[0, 1]`, but the implementation differs from this behavior. The documentation should be updated to remove this discrepancy and better reflect the current implementation. This was originally raised on the mailing list (thanks jimbo): see the [original post](https://groups.google.com/g/networkx-discuss/c/Z_UXaYZcsxw/m/2c377udtAgAJ) for further details.
Related to the documentation issue there is some counterintuitive behavior with the generator. For a given theta and given node weights, if instead of using 1/r^2 as p_dist, we use 1/r^3, we end up with a lot more edges. [screenshot comparing the generated graphs and edge counts for p_dist = 1/r^2 vs. 1/r^3] Counterintuitive doesn't mean wrong but I'm not sure this behavior was intended in the implementation. Do you think it may be worth it to try and make the behavior more in line with expectations? Why do you say unexpected? When you change the p_dist form, you are going to have to adjust the threshold (unless you adjust it implicitly in your p_dist function). I wouldn't know what to expect unless I thought it through. In this case, 1/r**3 is going to be much bigger than 1/r**2 because r is the metric on nodes placed within the unit square. So I would expect many more edges unless theta is similarly divided by some sort of average value for r. Similarly, if you rescale the metric to use cm instead of m, you would have to rescale theta to use the same units if you want to have the same number of edges. I'm guessing an average value of 1/r is about 2, so either use 0.5/r**3 or set theta to 200. That's a guess though -- more complete examination of average 1/r would tell you more. Yes, of course you are correct. I say unexpected only because normally when I think of going from 1/r^2 to 1/r^3 I think of an effect getting weaker, but here we have the opposite and end up with a lot more connections. Also some of the points within the unit square are separated by more than 1 (consider two points connected by a diagonal across the square), so 1/r^3 is not always going to be bigger than 1/r^2, which adds slightly to the confusion. At any rate, I think it's a great idea to update the docs as described in the original comment on the issue. Hi guys. I'd like to take a crack at updating these docs over the weekend and hopefully getting this issue closed. Noob question: Just updating the docstrings (in my own forked version and then creating a PR from there) is sufficient, right? > Just updating the docstrings (in my own forked version and then creating a PR from there) is sufficient, right? Yes, this is correct. I'd recommend making a new branch on your fork for the work as well, e.g. `git checkout -b doc/update-p_dist-description` or whatever you'd like to name the branch. Then you can push that branch up to your fork and create a PR from there.
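A quick way to see the scaling point made above; the parameter values here are arbitrary examples, and only the relative edge counts matter:
```python
import networkx as nx

for p_dist in (lambda r: r ** -2, lambda r: r ** -3, lambda r: 0.5 * r ** -3):
    G = nx.geographical_threshold_graph(50, theta=100, p_dist=p_dist, seed=42)
    print(G.number_of_edges())
```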
2021-08-07T19:12:56
networkx/networkx
5,029
networkx__networkx-5029
[ "4957" ]
5641ea5d0b05800630b90c7bea558144c7ddc1de
diff --git a/networkx/algorithms/traversal/breadth_first_search.py b/networkx/algorithms/traversal/breadth_first_search.py --- a/networkx/algorithms/traversal/breadth_first_search.py +++ b/networkx/algorithms/traversal/breadth_first_search.py @@ -383,28 +383,37 @@ def descendants_at_distance(G, source, distance): ------- set() The descendants of `source` in `G` at the given `distance` from `source` + + Examples + -------- + >>> G = nx.path_graph(5) + >>> nx.descendants_at_distance(G, 2, 2) + {0, 4} + >>> H = nx.DiGraph() + >>> H.add_edges_from([(0, 1), (0, 2), (1, 3), (1, 4), (2, 5), (2, 6)]) + >>> nx.descendants_at_distance(H, 0, 2) + {3, 4, 5, 6} + >>> nx.descendants_at_distance(H, 5, 0) + {5} + >>> nx.descendants_at_distance(H, 5, 1) + set() """ if not G.has_node(source): raise nx.NetworkXError(f"The node {source} is not in the graph.") current_distance = 0 - queue = {source} + current_layer = {source} visited = {source} - # this is basically BFS, except that the queue only stores the nodes at + # this is basically BFS, except that the current layer only stores the nodes at # current_distance from source at each iteration - while queue: - if current_distance == distance: - return queue - - current_distance += 1 - - next_vertices = set() - for vertex in queue: - for child in G[vertex]: + while current_distance < distance: + next_layer = set() + for node in current_layer: + for child in G[node]: if child not in visited: visited.add(child) - next_vertices.add(child) - - queue = next_vertices + next_layer.add(child) + current_layer = next_layer + current_distance += 1 - return set() + return current_layer
diff --git a/networkx/algorithms/traversal/tests/test_bfs.py b/networkx/algorithms/traversal/tests/test_bfs.py --- a/networkx/algorithms/traversal/tests/test_bfs.py +++ b/networkx/algorithms/traversal/tests/test_bfs.py @@ -1,6 +1,8 @@ from functools import partial import networkx as nx +import pytest + class TestBFS: @classmethod @@ -48,6 +50,14 @@ def test_bfs_tree_isolates(self): assert sorted(T.nodes()) == [1] assert sorted(T.edges()) == [] + def test_descendants_at_distance(self): + for distance, descendants in enumerate([{0}, {1}, {2, 3}, {4}]): + assert nx.descendants_at_distance(self.G, 0, distance) == descendants + + def test_descendants_at_distance_missing_source(self): + with pytest.raises(nx.NetworkXError): + nx.descendants_at_distance(self.G, "abc", 0) + class TestBreadthLimitedSearch: @classmethod @@ -98,3 +108,11 @@ def test_limited_bfs_tree(self): def test_limited_bfs_edges(self): edges = nx.bfs_edges(self.G, source=9, depth_limit=4) assert list(edges) == [(9, 8), (9, 10), (8, 7), (7, 2), (2, 1), (2, 3)] + + def test_limited_descendants_at_distance(self): + for distance, descendants in enumerate( + [{0}, {1}, {2}, {3, 7}, {4, 8}, {5, 9}, {6, 10}] + ): + assert nx.descendants_at_distance(self.G, 0, distance) == descendants + for distance, descendants in enumerate([{2}, {3, 7}, {8}, {9}, {10}]): + assert nx.descendants_at_distance(self.D, 2, distance) == descendants
descendants_at_distance appears to be untested AFAICT `descendants_at_distance` is not explicitly tested anywhere. I was confused by this because the coverage report for the `breadth_first_search` module where the function is defined is at 98% (notably, the untested line is in `descendants_at_distance`), but I don't see any explicit calls to `descendants_at_distance` when grepping the test suite. Adding some tests should help codify behavior and avoid issues like #4952.
It looks like `descendants_at_distance` is [called by ](https://github.com/networkx/networkx/blob/e8914bb5681b6fad8a6764406d3c7a78ebc582ae/networkx/algorithms/dag.py#L624)`transitive_closure_dag` in the `dag.py` module. But that just means even more -- we should include tests for it. :}
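A sketch of the kind of direct test that could be added; the assertions follow from the structure of a path graph:
```python
import networkx as nx

def test_descendants_at_distance_path():
    G = nx.path_graph(5)
    assert nx.descendants_at_distance(G, 2, 0) == {2}
    assert nx.descendants_at_distance(G, 2, 1) == {1, 3}
    assert nx.descendants_at_distance(G, 2, 2) == {0, 4}
```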
2021-08-22T07:27:32
networkx/networkx
5,045
networkx__networkx-5045
[ "5025" ]
2e61dacc1ffcdcf44edb5fd68dca5f51e09db219
diff --git a/networkx/generators/geometric.py b/networkx/generators/geometric.py --- a/networkx/generators/geometric.py +++ b/networkx/generators/geometric.py @@ -10,12 +10,13 @@ from networkx.utils import nodes_or_number, py_random_state __all__ = [ + "geometric_edges", "geographical_threshold_graph", - "waxman_graph", "navigable_small_world_graph", "random_geometric_graph", "soft_random_geometric_graph", "thresholded_random_geometric_graph", + "waxman_graph", ] @@ -30,10 +31,52 @@ def euclidean(x, y): def geometric_edges(G, radius, p): - """Returns edge list of node pairs within `radius` of each other + """Returns edge list of node pairs within `radius` of each other. + Parameters + ---------- + G : networkx graph + The graph from which to generate the edge list. The nodes in `G` should + have an attribute ``pos`` corresponding to the node position, which is + used to compute the distance to other nodes. + radius : scalar + The distance threshold. Edges are included in the edge list if the + distance between the two nodes is less than `radius`. + p : scalar + The `Minkowski distance metric + <https://en.wikipedia.org/wiki/Minkowski_distance>`_ use to compute + distances. + + Returns + ------- + edges : list + List of edges whose distances are less than `radius` + + Notes + ----- Radius uses Minkowski distance metric `p`. - If scipy available, use scipy cKDTree to speed computation. + If scipy is available, `scipy.spatial.cKDTree` is used to speed computation. + + Examples + -------- + Create a graph with nodes that have a "pos" attribute representing 2D + coordinates. + + >>> G = nx.Graph() + >>> G.add_nodes_from([ + ... (0, {"pos": (0, 0)}), + ... (1, {"pos": (3, 0)}), + ... (2, {"pos": (8, 0)}), + ... ]) + >>> p = 2 # Euclidean distance + >>> nx.geometric_edges(G, radius=1, p=p) + [] + >>> nx.geometric_edges(G, radius=4, p=p) + [(0, 1)] + >>> nx.geometric_edges(G, radius=6, p=p) + [(0, 1), (1, 2)] + >>> nx.geometric_edges(G, radius=9, p=p) + [(0, 1), (0, 2), (1, 2)] """ nodes_pos = G.nodes(data="pos") try:
`geometric_edges` should be accessible The function `geometric_edges` in `generators.geometric` is not callable. However, I have found a case where someone wants to use it (more specifically is asking for a function that does it). https://stackoverflow.com/q/68815295/2966723 I thought I'd just build an example for the person, but I can't figure out how to call it, presumably because it isn't in `__all__`, but I'm not an expert on this. ### Current Behavior I get an error that the function doesn't exist. ### Expected Behavior I'd like the function to run. ### Steps to Reproduce
    import networkx as nx
    import random

    pos = {n: (random.random(), random.random()) for n in range(100)}
    G= nx.Graph()
    G.add_nodes_from(pos)

    radius = 0.1  #connect nodes of distance <0.1
    G.add_edges_from(nx.generators.geometric.geometric_edges(G, radius,2))

> AttributeError: module 'networkx.generators.geometric' has no attribute 'geometric_edges'

### Environment Python version: 3.7(?) NetworkX version: 2.4
The function is available in NetworkX 2.6.2. I haven't tracked down in which version it first became available. The calling name is just as you expected: `nx.generators.geometric.geometric_edges`. But I think it makes sense to publish this function to the public interface so that `nx.geometric_edges` would work.
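For reference, a working version of the snippet from the issue; note that `geometric_edges` reads node positions from a `pos` node attribute, which the original snippet never set:
```python
import random
import networkx as nx

pos = {n: (random.random(), random.random()) for n in range(100)}
G = nx.Graph()
G.add_nodes_from(pos)
nx.set_node_attributes(G, pos, "pos")
G.add_edges_from(nx.generators.geometric.geometric_edges(G, radius=0.1, p=2))
```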
2021-08-29T14:34:22
networkx/networkx
5,048
networkx__networkx-5048
[ "4994" ]
2e61dacc1ffcdcf44edb5fd68dca5f51e09db219
diff --git a/networkx/drawing/nx_pylab.py b/networkx/drawing/nx_pylab.py --- a/networkx/drawing/nx_pylab.py +++ b/networkx/drawing/nx_pylab.py @@ -852,10 +852,10 @@ def _connectionstyle(posA, posB, *args, **kwargs): # Draw the edges if use_linecollection: edge_viz_obj = _draw_networkx_edges_line_collection() - # Make sure selfloop edges are also drawn. - edgelist = list(nx.selfloop_edges(G)) - if edgelist: - edge_pos = np.asarray([(pos[e[0]], pos[e[1]]) for e in edgelist]) + # Make sure selfloop edges are also drawn + selfloops_to_draw = [loop for loop in nx.selfloop_edges(G) if loop in edgelist] + if selfloops_to_draw: + edge_pos = np.asarray([(pos[e[0]], pos[e[1]]) for e in selfloops_to_draw]) arrowstyle = "-" _draw_networkx_edges_fancy_arrow_patch() else:
diff --git a/networkx/drawing/tests/test_pylab.py b/networkx/drawing/tests/test_pylab.py --- a/networkx/drawing/tests/test_pylab.py +++ b/networkx/drawing/tests/test_pylab.py @@ -423,3 +423,13 @@ def test_draw_networkx_arrows_default_directed(drawing_func): ) assert ax.patches plt.delaxes(ax) + + +def test_edgelist_kwarg_not_ignored(): + # See gh-4994 + G = nx.path_graph(3) + G.add_edge(0, 0) + fig, ax = plt.subplots() + nx.draw(G, edgelist=[(0, 1), (1, 2)], ax=ax) # Exclude self-loop from edgelist + assert not ax.patches + plt.delaxes(ax)
`edgelist` is ignored when drawing selfloops An issue with self-loop drawing was identified in the discussion in #4991. The `edgelist` parameter for `nx.draw_networkx_edges` is not properly handled for self-loop drawing. ### Steps to Reproduce ```python >>> import matplotlib.pyplot as plt >>> G = nx.path_graph(3) >>> G.add_edge(0, 0) >>> G.edges() EdgeView([(0, 1), (0, 0), (1, 2)]) >>> nx.draw(G) # Draws self-loop, as expected >>> plt.figure(); >>> nx.draw(G, edgelist=[(0, 1), (1, 2)]) # Still draws self-loop (bug) ``` ### Environment Python version: 3.9.6 NetworkX version: 2.6.2 ### Additional context At first glance, it seems the problem lies here: https://github.com/networkx/networkx/blob/2eb274e39f712047cebf5666ee9caf2ba2e51ee4/networkx/drawing/nx_pylab.py#L853-L856 This code was added to ensure that self-loop edges are drawn even when edges are represented by a LineCollection object, but there should be some additional logic to check whether `edgelist` is `None` before setting it.
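Until such a check is added, a user-side workaround sketch is to draw the subgraph induced by the desired edges instead of passing `edgelist`, which drops the self-loop before drawing:
```python
import matplotlib.pyplot as plt
import networkx as nx

G = nx.path_graph(3)
G.add_edge(0, 0)
H = G.edge_subgraph([(0, 1), (1, 2)])  # excludes the (0, 0) self-loop
nx.draw(H)
plt.show()
```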
2021-08-29T23:47:51
networkx/networkx
5,051
networkx__networkx-5051
[ "4989" ]
2e61dacc1ffcdcf44edb5fd68dca5f51e09db219
diff --git a/networkx/readwrite/edgelist.py b/networkx/readwrite/edgelist.py --- a/networkx/readwrite/edgelist.py +++ b/networkx/readwrite/edgelist.py @@ -183,7 +183,8 @@ def parse_edgelist( lines : list or iterator of strings Input data in edgelist format comments : string, optional - Marker for comment lines. Default is `'#'` + Marker for comment lines. Default is `'#'`. To specify that no character + should be treated as a comment, use ``comments=None``. delimiter : string, optional Separator for node labels. Default is `None`, meaning any whitespace. create_using : NetworkX graph constructor, optional (default=nx.Graph) @@ -238,11 +239,12 @@ def parse_edgelist( G = nx.empty_graph(0, create_using) for line in lines: - p = line.find(comments) - if p >= 0: - line = line[:p] - if not line: - continue + if comments is not None: + p = line.find(comments) + if p >= 0: + line = line[:p] + if not line: + continue # split line, should have 2 or more s = line.strip().split(delimiter) if len(s) < 2: @@ -314,7 +316,8 @@ def read_edgelist( opened in 'rb' mode. Filenames ending in .gz or .bz2 will be uncompressed. comments : string, optional - The character used to indicate the start of a comment. + The character used to indicate the start of a comment. To specify that + no character should be treated as a comment, use ``comments=None``. delimiter : string, optional The string used to separate values. The default is whitespace. create_using : NetworkX graph constructor, optional (default=nx.Graph)
diff --git a/networkx/readwrite/tests/test_edgelist.py b/networkx/readwrite/tests/test_edgelist.py --- a/networkx/readwrite/tests/test_edgelist.py +++ b/networkx/readwrite/tests/test_edgelist.py @@ -161,6 +161,14 @@ def test_parse_edgelist(): nx.parse_edgelist(lines, nodetype=int, data=(("weight", float),)) +def test_comments_None(): + edgelist = ["node#1 node#2", "node#2 node#3"] + # comments=None supported to ignore all comment characters + G = nx.parse_edgelist(edgelist, comments=None) + H = nx.Graph([e.split(" ") for e in edgelist]) + assert edges_equal(G.edges, H.edges) + + class TestEdgelist: @classmethod def setup_class(cls):
read_edgelist not reading all lines of write_edgelist file For a graph analysis I am doing, I am using nx.write_edgelist(G,path=path) to write to the following file. [A2_early_0.04.15.txt](https://github.com/networkx/networkx/files/6893893/A2_early_0.04.15.txt) However, reading this data through nx.readwrite.edgelist.read_edgelist(filename, create_using=nx.DiGraph()) causes a problem, as not all data is read. ### Current Behavior Reads in 617 edges of the above edge list ### Expected Behavior Expect to read all 780 edges ### Steps to Reproduce G = nx.readwrite.edgelist.read_edgelist(filename, create_using=nx.DiGraph()) print(G.number_of_edges()) ### Environment Python version: 3.8.5 NetworkX version: 2.5.1
Hi @arav-agarwal2, it looks like your file contains quite a few lines with `#` characters. By default, those are treated as comments in `read_edgelist`. This can be fixed by setting the `comments` parameter to a string that doesn't occur in the file, or to be a little sneaky, `'\n'`: ``` G = nx.readwrite.edgelist.read_edgelist(filename, create_using=nx.DiGraph(), comments='\n') ``` Excellent catch @boothby - this is indeed what is happening with this particular data file. On a related note, maybe `comments` should be made an optional kwarg with the ability to pass in `comments=None` to indicate that there is no comment character, rather than making the user get creative with their comment character selection. Thoughts?
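A sketch of how the proposed `comments=None` option could look from the user side (this mirrors the proposal above, not a released API):
```python
import networkx as nx

lines = ["node#1 node#2", "node#2 node#3"]
G = nx.parse_edgelist(lines, comments=None)  # no character is treated as a comment
print(sorted(G.edges()))  # [('node#1', 'node#2'), ('node#2', 'node#3')]
```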
2021-08-30T02:34:53
networkx/networkx
5,052
networkx__networkx-5052
[ "5043" ]
2e61dacc1ffcdcf44edb5fd68dca5f51e09db219
diff --git a/networkx/algorithms/dag.py b/networkx/algorithms/dag.py --- a/networkx/algorithms/dag.py +++ b/networkx/algorithms/dag.py @@ -601,9 +601,8 @@ def is_aperiodic(G): return g == 1 and nx.is_aperiodic(G.subgraph(set(G) - set(levels))) -@not_implemented_for("undirected") def transitive_closure(G, reflexive=False): - """Returns transitive closure of a directed graph + """Returns transitive closure of a graph The transitive closure of G = (V,E) is a graph G+ = (V,E+) such that for all v, w in V there is an edge (v, w) in E+ if and only if there @@ -617,8 +616,8 @@ def transitive_closure(G, reflexive=False): Parameters ---------- - G : NetworkX DiGraph - A directed graph + G : NetworkX Graph + A directed/undirected graph/multigraph. reflexive : Bool or None, optional (default: False) Determines when cycles create self-loops in the Transitive Closure. If True, trivial cycles (length 0) create self-loops. The result @@ -628,13 +627,13 @@ def transitive_closure(G, reflexive=False): Returns ------- - NetworkX DiGraph + NetworkX graph The transitive closure of `G` Raises ------ - NetworkXNotImplemented - If `G` is not directed + NetworkXError + If `reflexive` not in `{None, True, False}` Examples -------- @@ -673,28 +672,25 @@ def transitive_closure(G, reflexive=False): References ---------- - .. [1] http://www.ics.uci.edu/~eppstein/PADS/PartialOrder.py - - TODO this function applies to all directed graphs and is probably misplaced - here in dag.py + .. [1] https://www.ics.uci.edu/~eppstein/PADS/PartialOrder.py """ - if reflexive is None: - TC = G.copy() - for v in G: - edges = ((v, u) for u in nx.dfs_preorder_nodes(G, v) if v != u) - TC.add_edges_from(edges) - return TC - if reflexive is True: - TC = G.copy() - for v in G: - edges = ((v, u) for u in nx.dfs_preorder_nodes(G, v)) - TC.add_edges_from(edges) - return TC - # reflexive is False TC = G.copy() + + if reflexive not in {None, True, False}: + raise nx.NetworkXError("Incorrect value for the parameter `reflexive`") + for v in G: - edges = ((v, w) for u, w in nx.edge_dfs(G, v)) - TC.add_edges_from(edges) + if reflexive is None: + TC.add_edges_from(((v, u) for u in nx.descendants(G, v) if u not in TC[v])) + elif reflexive is True: + TC.add_edges_from( + ((v, u) for u in nx.descendants(G, v) | {v} if u not in TC[v]) + ) + elif reflexive is False: + TC.add_edges_from( + ((v, e[1]) for e in nx.edge_bfs(G, v) if e[1] not in TC[v]) + ) + return TC
diff --git a/networkx/algorithms/tests/test_dag.py b/networkx/algorithms/tests/test_dag.py --- a/networkx/algorithms/tests/test_dag.py +++ b/networkx/algorithms/tests/test_dag.py @@ -290,8 +290,18 @@ def test_transitive_closure(self): solution = [(1, 2), (2, 1), (2, 3), (3, 2), (1, 3), (3, 1)] soln = sorted(solution + [(n, n) for n in G]) assert edges_equal(sorted(nx.transitive_closure(G).edges()), soln) + G = nx.Graph([(1, 2), (2, 3), (3, 4)]) - pytest.raises(nx.NetworkXNotImplemented, nx.transitive_closure, G) + solution = [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)] + assert edges_equal(sorted(nx.transitive_closure(G).edges()), solution) + + G = nx.MultiGraph([(1, 2), (2, 3), (3, 4)]) + solution = [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)] + assert edges_equal(sorted(nx.transitive_closure(G).edges()), solution) + + G = nx.MultiDiGraph([(1, 2), (2, 3), (3, 4)]) + solution = [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)] + assert edges_equal(sorted(nx.transitive_closure(G).edges()), solution) # test if edge data is copied G = nx.DiGraph([(1, 2, {"a": 3}), (2, 3, {"b": 0}), (3, 4)]) @@ -305,6 +315,10 @@ def test_transitive_closure(self): for u, v in G.edges(): assert G.get_edge_data(u, v) == H.get_edge_data(u, v) + G = nx.Graph() + with pytest.raises(nx.NetworkXError): + nx.transitive_closure(G, reflexive="wrong input") + def test_reflexive_transitive_closure(self): G = nx.DiGraph([(1, 2), (2, 3), (3, 4)]) solution = [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)] @@ -330,6 +344,30 @@ def test_reflexive_transitive_closure(self): assert edges_equal(sorted(nx.transitive_closure(G, None).edges()), solution) assert edges_equal(sorted(nx.transitive_closure(G, True).edges()), soln) + G = nx.Graph([(1, 2), (2, 3), (3, 4)]) + solution = [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)] + soln = sorted(solution + [(n, n) for n in G]) + assert edges_equal(nx.transitive_closure(G).edges(), solution) + assert edges_equal(nx.transitive_closure(G, False).edges(), solution) + assert edges_equal(nx.transitive_closure(G, True).edges(), soln) + assert edges_equal(nx.transitive_closure(G, None).edges(), solution) + + G = nx.MultiGraph([(1, 2), (2, 3), (3, 4)]) + solution = [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)] + soln = sorted(solution + [(n, n) for n in G]) + assert edges_equal(nx.transitive_closure(G).edges(), solution) + assert edges_equal(nx.transitive_closure(G, False).edges(), solution) + assert edges_equal(nx.transitive_closure(G, True).edges(), soln) + assert edges_equal(nx.transitive_closure(G, None).edges(), solution) + + G = nx.MultiDiGraph([(1, 2), (2, 3), (3, 4)]) + solution = [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)] + soln = sorted(solution + [(n, n) for n in G]) + assert edges_equal(nx.transitive_closure(G).edges(), solution) + assert edges_equal(nx.transitive_closure(G, False).edges(), solution) + assert edges_equal(nx.transitive_closure(G, True).edges(), soln) + assert edges_equal(nx.transitive_closure(G, None).edges(), solution) + def test_transitive_closure_dag(self): G = nx.DiGraph([(1, 2), (2, 3), (3, 4)]) transitive_closure = nx.algorithms.dag.transitive_closure_dag
Refactor `transitive_closure` to support multigraphs

By definition (see below and #5016) `transitive_closure` should also work with multigraphs:

* "**transitive closure of G** is the graph G* = (V, E*), where E* = {(i, j) : there is a path from vertex i to vertex j in G}"

But currently this function does not work as expected.

### Current Behavior
```python
>>> import networkx as nx
>>> g = nx.MultiDiGraph([(0, 1), (1, 2)])
>>> print(nx.transitive_closure(g).edges())
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<class 'networkx.utils.decorators.argmap'> compilation 8", line 4, in argmap_transitive_closure_5
  File "/home/vdshk/.local/lib/python3.8/site-packages/networkx/algorithms/dag.py", line 576, in transitive_closure
    TC.add_edges_from(edges)
  File "/home/vdshk/.local/lib/python3.8/site-packages/networkx/classes/multigraph.py", line 550, in add_edges_from
    for e in ebunch_to_add:
  File "/home/vdshk/.local/lib/python3.8/site-packages/networkx/algorithms/dag.py", line 575, in <genexpr>
    edges = ((v, w) for u, w in nx.edge_dfs(G, v))
ValueError: too many values to unpack (expected 2)
```

### Expected Behavior
```python
>>> import networkx as nx
>>> g = nx.MultiDiGraph([(0, 1), (1, 2)])
>>> print(nx.transitive_closure(g).edges())
[(0, 1), (0, 2), (1, 2)]
```

### Environment
Python version: `3.8.10`
NetworkX version: `2.6.2`

### Proposal
I think this function needs to be refactored. Maybe it should be done in PR #5042. What do you think?
+1 for adding support for multigraphs. I'd recommend putting it in a separate PR though rather than adding it to #5042 - IMO it would make it easier to review if there were separate PRs for each proposal. If you want to work on it before #5042 is in, you can always open a new PR that depends on #5042 (i.e. originates from a commit on that branch) and just mark it as draft for now.
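Until multigraph support lands, one workaround is to collapse parallel edges first. Transitive closure only depends on reachability, so converting to a simple `DiGraph` is safe (edge keys and duplicate-edge data are discarded):
```python
import networkx as nx

g = nx.MultiDiGraph([(0, 1), (1, 2)])
closure = nx.transitive_closure(nx.DiGraph(g))  # collapse parallel edges first
print(sorted(closure.edges()))  # [(0, 1), (0, 2), (1, 2)]
```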
2021-08-30T10:40:12
networkx/networkx
5,053
networkx__networkx-5053
[ "4953" ]
a8b907df38d9d36f77d75c65d9e6a5518b849c6b
diff --git a/networkx/readwrite/gml.py b/networkx/readwrite/gml.py --- a/networkx/readwrite/gml.py +++ b/networkx/readwrite/gml.py @@ -144,6 +144,7 @@ def read_gml(path, label="label", destringizer=None): See Also -------- write_gml, parse_gml + literal_destringizer Notes ----- @@ -622,6 +623,10 @@ def generate_gml(G, stringizer=None): If `stringizer` cannot convert a value into a string, or the value to convert is not a string while `stringizer` is None. + See Also + -------- + literal_stringizer + Notes ----- Graph attributes named 'directed', 'multigraph', 'node' or @@ -813,6 +818,7 @@ def write_gml(G, path, stringizer=None): See Also -------- read_gml, generate_gml + literal_stringizer Notes -----
write_gml does not allow for empty lists as attributes

The write_gml function does not handle empty lists as attribute values. I'm not sure if this is expected behaviour.

### Current Behavior
Fails to convert an empty list to a string and raises the error `networkx.exception.NetworkXError: [] is not a string`

### Expected Behavior
I would expect an empty list to be written to the GML file as '[]'

### Steps to Reproduce
```
import networkx as nx
G = nx.complete_graph(4)
G.nodes[0]['test'] = []
nx.write_gml(G, 'test.gml')
```

### Environment
Python version: 3.9.5
NetworkX version: 2.5.1
I'm not sure what the "correct" behavior is either - what does the GML standard say about this case?

From the traceback, the problem lives in the default `stringizer`, specifically around: https://github.com/networkx/networkx/blob/dfa1150a9b8cf15183d7c910ff9b67ef3d220513/networkx/readwrite/gml.py#L729
This is what prevents empty lists from being used.

Looking around the module a bit more, there is a function called `literal_stringizer` that seems to behave more the way you were expecting, try:
```python
nx.write_gml(G, 'test.gml', stringizer=nx.readwrite.gml.literal_stringizer)
```

Thanks very much for pointing out the alternative stringizer function. From what I can tell the GML standard does not specify what should be done in this case. Incidentally, the link to the GML standard on the NetworkX website appears to be out of date and now redirects to an alumnus page: http://www.infosun.fim.uni-passau.de/Graphlet/GML/gml-tr.html

> Incidentally, the link to the GML standard on the NetworkX website appears to be out of date and now redirects to an alumnus page:

This should be fixed (insofar as is possible, as there doesn't seem to be an html version of the spec) in the [latest version of the documentation](https://networkx.org/documentation/latest/reference/readwrite/gml.html)

So, can we make this issue into a request to
- fix the link to the GML standard
- add `literal_stringizer` to the "See Also" section of the docstring
- maybe include `literal_stringizer` in the main namespace

Anything else?

> So, can we make this issue into a request to
>
> * fix the link to the GML standard

This has been fixed in #4864, though if anyone knows of an active html link (the current link is an archived page from the Wayback Machine that only provides a pdf version of the GML spec) this could certainly be improved.

> * add `literal_stringizer` to the "See Also" section of the docstring

+1 for this.

> * maybe include `literal_stringizer` in the main namespace

I'm not sure about this one, since it is pretty specialized functionality that only really makes sense in the context of the GML module.

> Anything else?

Makes sense to me - I'd say adding `literal_stringizer` to the `See Also` section of the `read_gml` docstring would be sufficient to close this.

I agree! Thanks @rossbar. Adding `literal_stringizer` to the See Also section should be sufficient. No need to add it to the main namespace.
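For anyone landing here later, the full round trip with the paired helpers looks something like this (a sketch; `literal_destringizer` is the reader-side counterpart in the same module, as the patch's See Also entries note):
```python
import networkx as nx
from networkx.readwrite.gml import literal_stringizer, literal_destringizer

G = nx.complete_graph(4)
G.nodes[0]["test"] = []

# literal_stringizer writes Python literals (including []) as strings
nx.write_gml(G, "test.gml", stringizer=literal_stringizer)

# literal_destringizer evaluates them back into Python objects on read
H = nx.read_gml("test.gml", destringizer=literal_destringizer)
print(list(H.nodes(data=True)))
```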
2021-08-31T01:51:20
networkx/networkx
5,058
networkx__networkx-5058
[ "5024" ]
96831f99a01fcfb39ac5d85f7e814afa5f8c4c9a
diff --git a/networkx/readwrite/graphml.py b/networkx/readwrite/graphml.py --- a/networkx/readwrite/graphml.py +++ b/networkx/readwrite/graphml.py @@ -447,6 +447,17 @@ def construct_types(self): 1: True, } + def get_xml_type(self, key): + """Wrapper around the xml_type dict that raises a more informative + exception message when a user attempts to use data of a type not + supported by GraphML.""" + try: + return self.xml_type[key] + except KeyError as e: + raise TypeError( + f"GraphML does not support type {type(key)} as data values." + ) from e + class GraphMLWriter(GraphML): def __init__( @@ -504,7 +515,7 @@ def attr_type(self, name, scope, value): types = self.attribute_types[(name, scope)] if len(types) > 1: - types = {self.xml_type[t] for t in types} + types = {self.get_xml_type(t) for t in types} if "string" in types: return str elif "float" in types or "double" in types: @@ -551,7 +562,7 @@ def add_data(self, name, element_type, value, scope="all", default=None): raise nx.NetworkXError( f"GraphML writer does not support {element_type} as data values." ) - keyid = self.get_key(name, self.xml_type[element_type], scope, default) + keyid = self.get_key(name, self.get_xml_type(element_type), scope, default) data_element = self.myElement("data", key=keyid) data_element.text = str(value) return data_element @@ -765,7 +776,7 @@ def add_graph_element(self, G): for k, v in graphdata.items(): self.attribute_types[(str(k), "graph")].add(type(v)) for k, v in graphdata.items(): - element_type = self.xml_type[self.attr_type(k, "graph", v)] + element_type = self.get_xml_type(self.attr_type(k, "graph", v)) self.get_key(str(k), element_type, "graph", None) # Nodes and data for node, d in G.nodes(data=True): @@ -773,7 +784,7 @@ def add_graph_element(self, G): self.attribute_types[(str(k), "node")].add(type(v)) for node, d in G.nodes(data=True): for k, v in d.items(): - T = self.xml_type[self.attr_type(k, "node", v)] + T = self.get_xml_type(self.attr_type(k, "node", v)) self.get_key(str(k), T, "node", node_default.get(k)) # Edges and data if G.is_multigraph(): @@ -782,7 +793,7 @@ def add_graph_element(self, G): self.attribute_types[(str(k), "edge")].add(type(v)) for u, v, ekey, d in G.edges(keys=True, data=True): for k, v in d.items(): - T = self.xml_type[self.attr_type(k, "edge", v)] + T = self.get_xml_type(self.attr_type(k, "edge", v)) self.get_key(str(k), T, "edge", edge_default.get(k)) else: for u, v, d in G.edges(data=True): @@ -790,7 +801,7 @@ def add_graph_element(self, G): self.attribute_types[(str(k), "edge")].add(type(v)) for u, v, d in G.edges(data=True): for k, v in d.items(): - T = self.xml_type[self.attr_type(k, "edge", v)] + T = self.get_xml_type(self.attr_type(k, "edge", v)) self.get_key(str(k), T, "edge", edge_default.get(k)) # Now add attribute keys to the xml file
diff --git a/networkx/readwrite/tests/test_graphml.py b/networkx/readwrite/tests/test_graphml.py --- a/networkx/readwrite/tests/test_graphml.py +++ b/networkx/readwrite/tests/test_graphml.py @@ -1500,3 +1500,39 @@ class TestXMLGraphML(TestWriteGraphML): @classmethod def setup_class(cls): TestWriteGraphML.setup_class() + + +def test_exception_for_unsupported_datatype_node_attr(): + """Test that a detailed exception is raised when an attribute is of a type + not supported by GraphML, e.g. a list""" + pytest.importorskip("lxml.etree") + # node attribute + G = nx.Graph() + G.add_node(0, my_list_attribute=[0, 1, 2]) + fh = io.BytesIO() + with pytest.raises(TypeError, match="GraphML does not support"): + nx.write_graphml(G, fh) + + +def test_exception_for_unsupported_datatype_edge_attr(): + """Test that a detailed exception is raised when an attribute is of a type + not supported by GraphML, e.g. a list""" + pytest.importorskip("lxml.etree") + # edge attribute + G = nx.Graph() + G.add_edge(0, 1, my_list_attribute=[0, 1, 2]) + fh = io.BytesIO() + with pytest.raises(TypeError, match="GraphML does not support"): + nx.write_graphml(G, fh) + + +def test_exception_for_unsupported_datatype_graph_attr(): + """Test that a detailed exception is raised when an attribute is of a type + not supported by GraphML, e.g. a list""" + pytest.importorskip("lxml.etree") + # graph attribute + G = nx.Graph() + G.graph["my_list_attribute"] = [0, 1, 2] + fh = io.BytesIO() + with pytest.raises(TypeError, match="GraphML does not support"): + nx.write_graphml(G, fh)
Failed to save graph generated using stochastic_block_model

Saving a graph that has been generated with a stochastic_block_model is not possible using write_graphml.

### Current Behavior
When generating a random network using the stochastic block model, the returned networkx object cannot be written to a graphml file using the write_graphml function.

### Expected Behavior
When generating random graphs using the networkx package, it should be possible to export them using write_graphml.

### Steps to Reproduce
The following piece of code reproduces the error message:
```
import networkx as nx
wg = 0.8  # can be any number representing within group probability
bg = 0.2  # can be any number representing between group probability
community_probs = [[wg, bg, bg], [bg, wg, bg], [bg, bg, wg]]
community_sizes = [10, 10, 10]
G = nx.stochastic_block_model(community_sizes, community_probs)
nx.write_graphml(G, 'file_name.graphml')
```

### Environment
Python version: 3.9.6
NetworkX version: 2.6.2

### Additional context
Error message:
<img width="795" alt="Screenshot 2021-08-16 at 10 09 05" src="https://user-images.githubusercontent.com/44324476/129532047-658f7207-448b-4e33-8ad2-8092d498c87e.png">
Data attributes that have list values are not part of the GraphML spec. That is, you can't store graphs with list or dict attribute values using the GraphML standard. See #485 and #3663 for similar discussions. I would suggest that you save the graph in a different format. OR you could clear out the offending data attributes before converting to GraphML.

Thanks @dschult - the only thing that comes to my mind as potentially "resolving" this issue would be to improve the exception message. Otherwise I'd say this can be closed as duplicate.

Thank you very much. The solution is indeed to use a different export method. Pickling the graph for example worked perfectly for me. What could be improved is the exception message (like mentioned by @rossbar). I also suggest that the graph generator's documentation gets updated to include a warning that it generates graphs that cannot be exported to graphml. It will help future users to identify exactly what is happening. Thanks again for helping me out.
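For this particular generator, I believe the offending attribute is the graph-level `partition` (a list of node sets that `stochastic_block_model` records), so clearing or stringifying it before writing is enough. A sketch:
```python
import networkx as nx

G = nx.stochastic_block_model([10, 10, 10],
                              [[0.8, 0.2, 0.2], [0.2, 0.8, 0.2], [0.2, 0.2, 0.8]])

# GraphML cannot store list/set values; drop or stringify the partition first.
del G.graph["partition"]  # or: G.graph["partition"] = str(G.graph["partition"])
nx.write_graphml(G, "file_name.graphml")
```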
2021-09-03T02:57:02
networkx/networkx
5,059
networkx__networkx-5059
[ "4518" ]
96831f99a01fcfb39ac5d85f7e814afa5f8c4c9a
diff --git a/networkx/algorithms/simple_paths.py b/networkx/algorithms/simple_paths.py --- a/networkx/algorithms/simple_paths.py +++ b/networkx/algorithms/simple_paths.py @@ -207,6 +207,11 @@ def all_simple_paths(G, source, target, cutoff=None): number of simple paths in a graph can be very large, e.g. $O(n!)$ in the complete graph of order $n$. + This function does not check that a path exists between `source` and + `target`. For large graphs, this may result in very long runtimes. + Consider using `has_path` to check that a path exists between `source` and + `target` before calling this function on large graphs. + References ---------- .. [1] R. Sedgewick, "Algorithms in C, Part 5: Graph Algorithms", @@ -214,7 +219,7 @@ def all_simple_paths(G, source, target, cutoff=None): See Also -------- - all_shortest_paths, shortest_path + all_shortest_paths, shortest_path, has_path """ if source not in G:
Why doesn't all_simple_paths check that a path exists?

Experimenting with a certain undirected graph, I stumbled upon an issue: `all_simple_paths` was hanging and not yielding any paths. I checked whether a path exists between the nodes I was interested in (`has_path`), and surprisingly it didn't. I would assume that the first thing `all_simple_paths` would do internally is check whether any path exists, and if not, return an empty generator right away. Why is this not the case?
I would assume that `all_simple_paths` would *not* check for a path first, but rather be designed so that if no paths were found it would end the computation. Strange that it "hangs". According to its docs, it uses a version of DFS. I would hope that is just as fast for the first path found as the code behind `has_path`. It works very quickly for the non-connected graphs I just tried. Can you give a small example?

Attaching a minimal example including a JSON describing the graph in question. What I forgot to mention is that I first exclude some edges using `subgraph_view` and then want to find paths in the filtered graph. The interesting part of the code is:
```
print("Is there a path from source to target?", nx.has_path(g_filtered, source, target))  # prints False
paths = nx.all_simple_paths(g_filtered, source=source, target=target, cutoff=10)
print("Path generator created")
for path in paths:
    print(path)  # There are no paths, but we're stuck in this loop
```
[networkx-issue-4518.zip](https://github.com/networkx/networkx/files/5770177/networkx-issue-4518.zip)

I can verify that this example takes longer than my patience allowed to compute simple paths even when given valid nodes. And that was after I had eliminated the filter and only looked at the single component with 1114 nodes. So, it is not surprising that the case where the target node is not in the connected component would take a long time (I added such a node and tried that too).

So, here is the issue as I see it (please correct or embellish what I miss): The `all_simple_paths` function is slow. If it is going to result in an error because the target is not connected to the source, we could avoid the slow calculation by checking for any path before finding all paths. That comes at the cost of checking for a path even for valid source/target pairs. Often users know whether the source and target have a path. Should we automatically test for a path to save time for those who don't know? Or should we make the user check it before calling this function?

My (very weak) preference is to add a paragraph to the docs explaining that this function is slow and it might be very helpful to make sure there is a path between source and target before calling the function. Thoughts?

I would leave it up to the user. Since if they are unsure, they can just put a check around the call to `all_simple_paths` themselves.
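To make that concrete, the guard the docs now recommend is a one-liner, sketched here with the variables from the example above:
```python
import networkx as nx

if nx.has_path(g_filtered, source, target):
    for path in nx.all_simple_paths(g_filtered, source=source, target=target, cutoff=10):
        print(path)
else:
    print("No path from source to target; skipping the expensive search.")
```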
2021-09-03T03:50:01
networkx/networkx
5,065
networkx__networkx-5065
[ "5000" ]
3a44d3ec7593358778a2336a5ec2faf5afe49da9
diff --git a/networkx/algorithms/community/modularity_max.py b/networkx/algorithms/community/modularity_max.py --- a/networkx/algorithms/community/modularity_max.py +++ b/networkx/algorithms/community/modularity_max.py @@ -1,6 +1,6 @@ """Functions for detecting communities based on modularity.""" -from collections import Counter +from collections import defaultdict import networkx as nx from networkx.algorithms.community.quality import modularity @@ -14,110 +14,6 @@ ] -def _greedy_modularity_communities_init(G, weight=None, resolution=1): - r"""Initializes the data structures for greedy_modularity_communities(). - - Clauset-Newman-Moore Eq 8-9. Eq 8 was missing a factor of 2 (from A_ij + A_ji). - See [2]_ at :func:`greedy_modularity_communities`. - - Parameters - ---------- - G : NetworkX graph - - weight : string or None, optional (default=None) - The name of an edge attribute that holds the numerical value used - as a weight. If None, then each edge has weight 1. - The degree is the sum of the edge weights adjacent to the node. - - resolution : float (default=1) - If resolution is less than 1, modularity favors larger communities. - Greater than 1 favors smaller communities. - - Returns - ------- - dq_dict : dict of dict's - dq_dict[i][j]: dQ for merging community i, j - - dq_heap : dict of MappedQueue's - dq_heap[i][n] : (-dq, i, j) for communitiy i nth largest dQ - - H : MappedQueue - (-dq, i, j) for community with nth largest max_j(dQ_ij) - - a, b : dict - undirected: - a[i]: fraction of (total weight of) edges within community i - b : None - directed: - a[i]: fraction of (total weight of) edges with tails within community i - b[i]: fraction of (total weight of) edges with heads within community i - - See Also - -------- - :func:`greedy_modularity_communities` - :func:`~networkx.algorithms.community.quality.modularity` - """ - # Count nodes and edges (or the sum of edge-weights for weighted graphs) - N = G.number_of_nodes() - m = G.size(weight) - - # Calculate degrees - if G.is_directed(): - k_in = dict(G.in_degree(weight=weight)) - k_out = dict(G.out_degree(weight=weight)) - q0 = 1.0 / m - else: - k_in = k_out = dict(G.degree(weight=weight)) - q0 = 1.0 / (2.0 * m) - - a = {node: kout * q0 for node, kout in k_out.items()} - if G.is_directed(): - b = {node: kin * q0 for node, kin in k_in.items()} - else: - b = None - - dq_dict = { - i: { - j: q0 - * ( - G.get_edge_data(i, j, default={weight: 0}).get(weight, 1.0) - + G.get_edge_data(j, i, default={weight: 0}).get(weight, 1.0) - - resolution * q0 * (k_out[i] * k_in[j] + k_in[i] * k_out[j]) - ) - for j in nx.all_neighbors(G, i) - if j != i - } - for i in G.nodes() - } - - # dq correction for multi-edges - # In case of multi-edges, get_edge_data(i, j) returns the key: data dict of the i, j - # edges, which does not have a 'weight' key. Therefore, when calculating dq for i, j - # Aij is always 1.0 and a correction is required. - if G.is_multigraph(): - edges_count = dict(Counter(G.edges())) - multi_edges = [edge for edge, count in edges_count.items() if count > 1] - for edge in multi_edges: - total_wt = sum(d.get(weight, 1) for d in G.get_edge_data(*edge).values()) - if G.is_directed(): - # The correction applies only to the direction of the edge. The edge at - # the other direction is either not a multiedge (where the weight is - # added correctly), non-existent or it is also a multiedge, in which - # case it will be handled singly when its turn in the loop comes. 
- q00 = q0 - else: - q00 = 2 * q0 - dq_dict[edge[0]][edge[1]] += q00 * (total_wt - 1) - dq_dict[edge[1]][edge[0]] += q00 * (total_wt - 1) - - dq_heap = { - i: MappedQueue([(-dq, i, j) for j, dq in dq_dict[i].items()]) for i in G.nodes() - } - H = MappedQueue([dq_heap[i].h[0] for i in G.nodes() if len(dq_heap[i]) > 0]) - - return dq_dict, dq_heap, H, a, b - - def greedy_modularity_communities(G, weight=None, resolution=1, n_communities=1): r"""Find communities in G using greedy modularity maximization. @@ -182,20 +78,48 @@ def greedy_modularity_communities(G, weight=None, resolution=1, n_communities=1) .. [4] Newman, M. E. J."Analysis of weighted networks" Physical Review E 70(5 Pt 2):056131, 2004. """ + directed = G.is_directed() N = G.number_of_nodes() if (n_communities < 1) or (n_communities > N): raise ValueError( f"n_communities must be between 1 and {N}. Got {n_communities}" ) - # Initialize data structures - dq_dict, dq_heap, H, a, b = _greedy_modularity_communities_init( - G, weight, resolution - ) + # Count edges (or the sum of edge-weights for weighted graphs) + m = G.size(weight) + q0 = 1 / m + + # Calculate degrees (notation from the papers) + # a : the fraction of (weighted) out-degree for each node + # b : the fraction of (weighted) in-degree for each node + if directed: + a = {node: deg_out * q0 for node, deg_out in G.out_degree(weight=weight)} + b = {node: deg_in * q0 for node, deg_in in G.in_degree(weight=weight)} + else: + a = b = {node: deg * q0 * 0.5 for node, deg in G.degree(weight=weight)} + + # this preliminary step collects the edge weights for each node pair + # It handles multigraph and digraph and works fine for graph. + dq_dict = defaultdict(lambda: defaultdict(float)) + for u, v, wt in G.edges(data=weight, default=1): + if u == v: + continue + dq_dict[u][v] += wt + dq_dict[v][u] += wt + + # now scale and subtract the expected edge-weights term + for u, nbrdict in dq_dict.items(): + for v, wt in nbrdict.items(): + dq_dict[u][v] = q0 * wt - resolution * (a[u] * b[v] + b[u] * a[v]) + + # Use -dq to get a max_heap instead of a min_heap + # dq_heap holds a heap for each node's neighbors + dq_heap = {u: MappedQueue({(u, v): -dq for v, dq in dq_dict[u].items()}) for u in G} + # H -> all_dq_heap holds a heap with the best items for each node + H = MappedQueue([dq_heap[n].heap[0] for n in G if len(dq_heap[n]) > 0]) + # Initialize single-node communities - communities = {i: frozenset([i]) for i in G.nodes()} - # Initial modularity - q_cnm = modularity(G, communities.values(), resolution=resolution) + communities = {n: frozenset([n]) for n in G} # Merge communities until we can't improve modularity or until desired number of # communities (n_communities) is reached. 
@@ -204,123 +128,113 @@ def greedy_modularity_communities(G, weight=None, resolution=1, n_communities=1) # Remove from heap of row maxes # Ties will be broken by choosing the pair with lowest min community id try: - dq, i, j = H.pop() + negdq, u, v = H.pop() except IndexError: break - dq = -dq - # Remove best merge from row i heap - dq_heap[i].pop() + dq = -negdq + # Remove best merge from row u heap + dq_heap[u].pop() # Push new row max onto H - if len(dq_heap[i]) > 0: - H.push(dq_heap[i].h[0]) - # If this element was also at the root of row j, we need to remove the + if len(dq_heap[u]) > 0: + H.push(dq_heap[u].heap[0]) + # If this element was also at the root of row v, we need to remove the # duplicate entry from H - if dq_heap[j].h[0] == (-dq, j, i): - H.remove((-dq, j, i)) - # Remove best merge from row j heap - dq_heap[j].remove((-dq, j, i)) + if dq_heap[v].heap[0] == (v, u): + H.remove((v, u)) + # Remove best merge from row v heap + dq_heap[v].remove((v, u)) # Push new row max onto H - if len(dq_heap[j]) > 0: - H.push(dq_heap[j].h[0]) + if len(dq_heap[v]) > 0: + H.push(dq_heap[v].heap[0]) else: - # Duplicate wasn't in H, just remove from row j heap - dq_heap[j].remove((-dq, j, i)) - # Stop when change is non-positive + # Duplicate wasn't in H, just remove from row v heap + dq_heap[v].remove((v, u)) + # Stop when change is non-positive (no improvement possible) if dq <= 0: break # Perform merge - communities[j] = frozenset(communities[i] | communities[j]) - del communities[i] - # New modularity - q_cnm += dq - # Get list of communities connected to merged communities - i_set = set(dq_dict[i].keys()) - j_set = set(dq_dict[j].keys()) - all_set = (i_set | j_set) - {i, j} - both_set = i_set & j_set - # Merge i into j and update dQ - for k in all_set: + communities[v] = frozenset(communities[u] | communities[v]) + del communities[u] + + # Get neighbor communities connected to the merged communities + u_nbrs = set(dq_dict[u]) + v_nbrs = set(dq_dict[v]) + all_nbrs = (u_nbrs | v_nbrs) - {u, v} + both_nbrs = u_nbrs & v_nbrs + # Update dq for merge of u into v + for w in all_nbrs: # Calculate new dq value - if k in both_set: - dq_jk = dq_dict[j][k] + dq_dict[i][k] - elif k in j_set: - if G.is_directed(): - dq_jk = dq_dict[j][k] - resolution * (a[i] * b[k] + a[k] * b[i]) - else: - dq_jk = dq_dict[j][k] - 2.0 * resolution * a[i] * a[k] - else: - # k in i_set - if G.is_directed(): - dq_jk = dq_dict[i][k] - resolution * (a[j] * b[k] + a[k] * b[j]) - else: - dq_jk = dq_dict[i][k] - 2.0 * resolution * a[j] * a[k] - # Update rows j and k - for row, col in [(j, k), (k, j)]: - # Save old value for finding heap index - if k in j_set: - d_old = (-dq_dict[row][col], row, col) - else: - d_old = None - # Update dict for j,k only (i is removed below) - dq_dict[row][col] = dq_jk + if w in both_nbrs: + dq_vw = dq_dict[v][w] + dq_dict[u][w] + elif w in v_nbrs: + dq_vw = dq_dict[v][w] - resolution * (a[u] * b[w] + a[w] * b[u]) + else: # w in u_nbrs + dq_vw = dq_dict[u][w] - resolution * (a[v] * b[w] + a[w] * b[v]) + # Update rows v and w + for row, col in [(v, w), (w, v)]: + dq_heap_row = dq_heap[row] + # Update dict for v,w only (u is removed below) + dq_dict[row][col] = dq_vw # Save old max of per-row heap - if len(dq_heap[row]) > 0: - d_oldmax = dq_heap[row].h[0] + if len(dq_heap_row) > 0: + d_oldmax = dq_heap_row.heap[0] else: d_oldmax = None # Add/update heaps - d = (-dq_jk, row, col) - if d_old is None: - # We're creating a new nonzero element, add to heap - dq_heap[row].push(d) - else: + d = (row, col) + 
d_negdq = -dq_vw + # Save old value for finding heap index + if w in v_nbrs: # Update existing element in per-row heap - dq_heap[row].update(d_old, d) + dq_heap_row.update(d, d, priority=d_negdq) + else: + # We're creating a new nonzero element, add to heap + dq_heap_row.push(d, priority=d_negdq) # Update heap of row maxes if necessary if d_oldmax is None: # No entries previously in this row, push new max - H.push(d) + H.push(d, priority=d_negdq) else: # We've updated an entry in this row, has the max changed? - if dq_heap[row].h[0] != d_oldmax: - H.update(d_oldmax, dq_heap[row].h[0]) + row_max = dq_heap_row.heap[0] + if d_oldmax != row_max or d_oldmax.priority != row_max.priority: + H.update(d_oldmax, row_max) - # Remove row/col i from matrix - i_neighbors = dq_dict[i].keys() - for k in i_neighbors: + # Remove row/col u from dq_dict matrix + for w in dq_dict[u]: # Remove from dict - dq_old = dq_dict[k][i] - del dq_dict[k][i] + dq_old = dq_dict[w][u] + del dq_dict[w][u] # Remove from heaps if we haven't already - if k != j: + if w != v: # Remove both row and column - for row, col in [(k, i), (i, k)]: + for row, col in [(w, u), (u, w)]: + dq_heap_row = dq_heap[row] # Check if replaced dq is row max - d_old = (-dq_old, row, col) - if dq_heap[row].h[0] == d_old: + d_old = (row, col) + if dq_heap_row.heap[0] == d_old: # Update per-row heap and heap of row maxes - dq_heap[row].remove(d_old) + dq_heap_row.remove(d_old) H.remove(d_old) # Update row max - if len(dq_heap[row]) > 0: - H.push(dq_heap[row].h[0]) + if len(dq_heap_row) > 0: + H.push(dq_heap_row.heap[0]) else: # Only update per-row heap - dq_heap[row].remove(d_old) - - del dq_dict[i] - # Mark row i as deleted, but keep placeholder - dq_heap[i] = MappedQueue() - # Merge i into j and update a - a[j] += a[i] - a[i] = 0 - if G.is_directed(): - b[j] += b[i] - b[i] = 0 - - partition = sorted(communities.values(), key=len, reverse=True) - return partition + dq_heap_row.remove(d_old) + + del dq_dict[u] + # Mark row u as deleted, but keep placeholder + dq_heap[u] = MappedQueue() + # Merge u into v and update a + a[v] += a[u] + a[u] = 0 + if directed: + b[v] += b[u] + b[u] = 0 + + return sorted(communities.values(), key=len, reverse=True) @not_implemented_for("directed") diff --git a/networkx/utils/mapped_queue.py b/networkx/utils/mapped_queue.py --- a/networkx/utils/mapped_queue.py +++ b/networkx/utils/mapped_queue.py @@ -6,16 +6,92 @@ __all__ = ["MappedQueue"] +class _HeapElement: + """This proxy class separates the heap element from its priority. + + The idea is that using a 2-tuple (priority, element) works + for sorting, but not for dict lookup because priorities are + often floating point values so round-off can mess up equality. + + So, we need inequalities to look at the priority (for sorting) + and equality (and hash) to look at the element to enable + updates to the priority. + + Unfortunately, this class can be tricky to work with if you forget that + `__lt__` compares the priority while `__eq__` compares the element. + In `greedy_modularity_communities()` the following code is + used to check that two _HeapElements differ in either element or priority: + + if d_oldmax != row_max or d_oldmax.priority != row_max.priority: + + If the priorities are the same, this implementation uses the element + as a tiebreaker. This provides compatibility with older systems that + use tuples to combine priority and elements. 
+ """ + + __slots__ = ["priority", "element", "_hash"] + + def __init__(self, priority, element): + self.priority = priority + self.element = element + self._hash = hash(element) + + def __lt__(self, other): + try: + other_priority = other.priority + except AttributeError: + return self.priority < other + # assume comparing to another _HeapElement + if self.priority == other_priority: + return self.element < other.element + return self.priority < other_priority + + def __gt__(self, other): + try: + other_priority = other.priority + except AttributeError: + return self.priority > other + # assume comparing to another _HeapElement + if self.priority == other_priority: + return self.element < other.element + return self.priority > other_priority + + def __eq__(self, other): + try: + return self.element == other.element + except AttributeError: + return self.element == other + + def __hash__(self): + return self._hash + + def __getitem__(self, indx): + return self.priority if indx == 0 else self.element[indx - 1] + + def __iter__(self): + yield self.priority + try: + yield from self.element + except TypeError: + yield self.element + + def __repr__(self): + return f"_HeapElement({self.priority}, {self.element})" + + class MappedQueue: - """The MappedQueue class implements an efficient minimum heap. The - smallest element can be popped in O(1) time, new elements can be pushed - in O(log n) time, and any element can be removed or updated in O(log n) - time. The queue cannot contain duplicate elements and an attempt to push an - element already in the queue will have no effect. + """The MappedQueue class implements a min-heap with removal and update-priority. + + The min heap uses heapq as well as custom written _siftup and _siftdown + methods to allow the heap positions to be tracked by an additional dict + keyed by element to position. The smallest element can be popped in O(1) time, + new elements can be pushed in O(log n) time, and any element can be removed + or updated in O(log n) time. The queue cannot contain duplicate elements + and an attempt to push an element already in the queue will have no effect. MappedQueue complements the heapq package from the python standard library. While MappedQueue is designed for maximum compatibility with - heapq, it has slightly different functionality. + heapq, it adds element removal, lookup, and priority update. Examples -------- @@ -27,8 +103,7 @@ class MappedQueue: >>> q = MappedQueue([916, 50, 4609, 493, 237]) >>> q.push(1310) True - >>> x = [q.pop() for i in range(len(q.h))] - >>> x + >>> [q.pop() for i in range(len(q.heap))] [50, 237, 493, 916, 1310, 4609] Elements can also be updated or removed from anywhere in the queue. 
@@ -36,8 +111,7 @@ class MappedQueue: >>> q = MappedQueue([916, 50, 4609, 493, 237]) >>> q.remove(493) >>> q.update(237, 1117) - >>> x = [q.pop() for i in range(len(q.h))] - >>> x + >>> [q.pop() for i in range(len(q.heap))] [50, 916, 1117, 4609] References @@ -50,132 +124,144 @@ class MappedQueue: def __init__(self, data=[]): """Priority queue class with updatable priorities.""" - self.h = list(data) - self.d = dict() + if isinstance(data, dict): + self.heap = [_HeapElement(v, k) for k, v in data.items()] + else: + self.heap = list(data) + self.position = dict() self._heapify() - def __len__(self): - return len(self.h) - def _heapify(self): """Restore heap invariant and recalculate map.""" - heapq.heapify(self.h) - self.d = {elt: pos for pos, elt in enumerate(self.h)} - if len(self.h) != len(self.d): + heapq.heapify(self.heap) + self.position = {elt: pos for pos, elt in enumerate(self.heap)} + if len(self.heap) != len(self.position): raise AssertionError("Heap contains duplicate elements") - def push(self, elt): + def __len__(self): + return len(self.heap) + + def push(self, elt, priority=None): """Add an element to the queue.""" + if priority is not None: + elt = _HeapElement(priority, elt) # If element is already in queue, do nothing - if elt in self.d: + if elt in self.position: return False # Add element to heap and dict - pos = len(self.h) - self.h.append(elt) - self.d[elt] = pos + pos = len(self.heap) + self.heap.append(elt) + self.position[elt] = pos # Restore invariant by sifting down - self._siftdown(pos) + self._siftdown(0, pos) return True def pop(self): """Remove and return the smallest element in the queue.""" # Remove smallest element - elt = self.h[0] - del self.d[elt] + elt = self.heap[0] + del self.position[elt] # If elt is last item, remove and return - if len(self.h) == 1: - self.h.pop() + if len(self.heap) == 1: + self.heap.pop() return elt # Replace root with last element - last = self.h.pop() - self.h[0] = last - self.d[last] = 0 - # Restore invariant by sifting up, then down - pos = self._siftup(0) - self._siftdown(pos) + last = self.heap.pop() + self.heap[0] = last + self.position[last] = 0 + # Restore invariant by sifting up + self._siftup(0) # Return smallest element return elt - def update(self, elt, new): + def update(self, elt, new, priority=None): """Replace an element in the queue with a new one.""" + if priority is not None: + new = _HeapElement(priority, new) # Replace - pos = self.d[elt] - self.h[pos] = new - del self.d[elt] - self.d[new] = pos - # Restore invariant by sifting up, then down - pos = self._siftup(pos) - self._siftdown(pos) + pos = self.position[elt] + self.heap[pos] = new + del self.position[elt] + self.position[new] = pos + # Restore invariant by sifting up + self._siftup(pos) def remove(self, elt): """Remove an element from the queue.""" # Find and remove element try: - pos = self.d[elt] - del self.d[elt] + pos = self.position[elt] + del self.position[elt] except KeyError: # Not in queue raise # If elt is last item, remove and return - if pos == len(self.h) - 1: - self.h.pop() + if pos == len(self.heap) - 1: + self.heap.pop() return # Replace elt with last element - last = self.h.pop() - self.h[pos] = last - self.d[last] = pos - # Restore invariant by sifting up, then down - pos = self._siftup(pos) - self._siftdown(pos) + last = self.heap.pop() + self.heap[pos] = last + self.position[last] = pos + # Restore invariant by sifting up + self._siftup(pos) def _siftup(self, pos): - """Move element at pos down to a leaf by repeatedly moving the 
smaller - child up.""" - h, d = self.h, self.d - elt = h[pos] - # Continue until element is in a leaf - end_pos = len(h) - left_pos = (pos << 1) + 1 - while left_pos < end_pos: - # Left child is guaranteed to exist by loop predicate - left = h[left_pos] - try: - right_pos = left_pos + 1 - right = h[right_pos] - # Out-of-place, swap with left unless right is smaller - if right < left: - h[pos], h[right_pos] = right, elt - pos, right_pos = right_pos, pos - d[elt], d[right] = pos, right_pos - else: - h[pos], h[left_pos] = left, elt - pos, left_pos = left_pos, pos - d[elt], d[left] = pos, left_pos - except IndexError: - # Left leaf is the end of the heap, swap - h[pos], h[left_pos] = left, elt - pos, left_pos = left_pos, pos - d[elt], d[left] = pos, left_pos - # Update left_pos - left_pos = (pos << 1) + 1 - return pos - - def _siftdown(self, pos): - """Restore invariant by repeatedly replacing out-of-place element with - its parent.""" - h, d = self.h, self.d - elt = h[pos] - # Continue until element is at root + """Move smaller child up until hitting a leaf. + + Built to mimic code for heapq._siftup + only updating position dict too. + """ + heap, position = self.heap, self.position + end_pos = len(heap) + startpos = pos + newitem = heap[pos] + # Shift up the smaller child until hitting a leaf + child_pos = (pos << 1) + 1 # start with leftmost child position + while child_pos < end_pos: + # Set child_pos to index of smaller child. + child = heap[child_pos] + right_pos = child_pos + 1 + if right_pos < end_pos: + right = heap[right_pos] + if not child < right: + child = right + child_pos = right_pos + # Move the smaller child up. + heap[pos] = child + position[child] = pos + pos = child_pos + child_pos = (pos << 1) + 1 + # pos is a leaf position. Put newitem there, and bubble it up + # to its final resting place (by sifting its parents down). while pos > 0: parent_pos = (pos - 1) >> 1 - parent = h[parent_pos] - if parent > elt: - # Swap out-of-place element with parent - h[parent_pos], h[pos] = elt, parent - parent_pos, pos = pos, parent_pos - d[elt] = pos - d[parent] = parent_pos - else: - # Invariant is satisfied + parent = heap[parent_pos] + if not newitem < parent: + break + heap[pos] = parent + position[parent] = pos + pos = parent_pos + heap[pos] = newitem + position[newitem] = pos + + def _siftdown(self, start_pos, pos): + """Restore invariant. keep swapping with parent until smaller. + + Built to mimic code for heapq._siftdown + only updating position dict too. + """ + heap, position = self.heap, self.position + newitem = heap[pos] + # Follow the path to the root, moving parents down until finding a place + # newitem fits. + while pos > start_pos: + parent_pos = (pos - 1) >> 1 + parent = heap[parent_pos] + if not newitem < parent: break - return pos + heap[pos] = parent + position[parent] = pos + pos = parent_pos + heap[pos] = newitem + position[newitem] = pos
diff --git a/networkx/algorithms/community/tests/test_modularity_max.py b/networkx/algorithms/community/tests/test_modularity_max.py --- a/networkx/algorithms/community/tests/test_modularity_max.py +++ b/networkx/algorithms/community/tests/test_modularity_max.py @@ -91,6 +91,23 @@ def test_modularity_communities_weighted(): expected = [{0, 1, 3, 4, 7, 8, 9, 10}, {2, 5, 6, 11, 12, 13, 14}] assert greedy_modularity_communities(G, weight="weight") == expected + assert greedy_modularity_communities(G, weight="weight", resolution=0.9) == expected + assert greedy_modularity_communities(G, weight="weight", resolution=0.3) == expected + assert greedy_modularity_communities(G, weight="weight", resolution=1.1) != expected + + +def test_modularity_communities_floating_point(): + # check for floating point error when used as key in the mapped_queue dict. + # Test for gh-4992 and gh-5000 + G = nx.Graph() + G.add_weighted_edges_from( + [(0, 1, 12), (1, 4, 71), (2, 3, 15), (2, 4, 10), (3, 6, 13)] + ) + expected = [{0, 1, 4}, {2, 3, 6}] + assert greedy_modularity_communities(G, weight="weight") == expected + assert ( + greedy_modularity_communities(G, weight="weight", resolution=0.99) == expected + ) def test_modularity_communities_directed_weighted(): diff --git a/networkx/utils/tests/test_mapped_queue.py b/networkx/utils/tests/test_mapped_queue.py --- a/networkx/utils/tests/test_mapped_queue.py +++ b/networkx/utils/tests/test_mapped_queue.py @@ -1,4 +1,41 @@ -from networkx.utils.mapped_queue import MappedQueue +import pytest +from networkx.utils.mapped_queue import _HeapElement, MappedQueue + + +def test_HeapElement_gtlt(): + bar = _HeapElement(1.1, "a") + foo = _HeapElement(1, "b") + assert foo < bar + assert bar > foo + assert foo < 1.1 + assert 1 < bar + + +def test_HeapElement_eq(): + bar = _HeapElement(1.1, "a") + foo = _HeapElement(1, "a") + assert foo == bar + assert bar == foo + assert foo == "a" + + +def test_HeapElement_iter(): + foo = _HeapElement(1, "a") + bar = _HeapElement(1.1, (3, 2, 1)) + assert list(foo) == [1, "a"] + assert list(bar) == [1.1, 3, 2, 1] + + +def test_HeapElement_getitem(): + foo = _HeapElement(1, "a") + bar = _HeapElement(1.1, (3, 2, 1)) + assert foo[1] == "a" + assert foo[0] == 1 + assert bar[0] == 1.1 + assert bar[2] == 2 + assert bar[3] == 1 + pytest.raises(IndexError, bar.__getitem__, 4) + pytest.raises(IndexError, foo.__getitem__, 2) class TestMappedQueue: @@ -6,13 +43,12 @@ def setup(self): pass def _check_map(self, q): - d = {elt: pos for pos, elt in enumerate(q.h)} - assert d == q.d + assert q.position == {elt: pos for pos, elt in enumerate(q.heap)} def _make_mapped_queue(self, h): q = MappedQueue() - q.h = h - q.d = {elt: pos for pos, elt in enumerate(h)} + q.heap = h + q.position = {elt: pos for pos, elt in enumerate(h)} return q def test_heapify(self): @@ -37,7 +73,7 @@ def test_siftup_leaf(self): h_sifted = [2] q = self._make_mapped_queue(h) q._siftup(0) - assert q.h == h_sifted + assert q.heap == h_sifted self._check_map(q) def test_siftup_one_child(self): @@ -45,7 +81,7 @@ def test_siftup_one_child(self): h_sifted = [0, 2] q = self._make_mapped_queue(h) q._siftup(0) - assert q.h == h_sifted + assert q.heap == h_sifted self._check_map(q) def test_siftup_left_child(self): @@ -53,7 +89,7 @@ def test_siftup_left_child(self): h_sifted = [0, 2, 1] q = self._make_mapped_queue(h) q._siftup(0) - assert q.h == h_sifted + assert q.heap == h_sifted self._check_map(q) def test_siftup_right_child(self): @@ -61,39 +97,39 @@ def test_siftup_right_child(self): h_sifted = 
[0, 1, 2] q = self._make_mapped_queue(h) q._siftup(0) - assert q.h == h_sifted + assert q.heap == h_sifted self._check_map(q) def test_siftup_multiple(self): h = [0, 1, 2, 4, 3, 5, 6] - h_sifted = [1, 3, 2, 4, 0, 5, 6] + h_sifted = [0, 1, 2, 4, 3, 5, 6] q = self._make_mapped_queue(h) q._siftup(0) - assert q.h == h_sifted + assert q.heap == h_sifted self._check_map(q) def test_siftdown_leaf(self): h = [2] h_sifted = [2] q = self._make_mapped_queue(h) - q._siftdown(0) - assert q.h == h_sifted + q._siftdown(0, 0) + assert q.heap == h_sifted self._check_map(q) def test_siftdown_single(self): h = [1, 0] h_sifted = [0, 1] q = self._make_mapped_queue(h) - q._siftdown(len(h) - 1) - assert q.h == h_sifted + q._siftdown(0, len(h) - 1) + assert q.heap == h_sifted self._check_map(q) def test_siftdown_multiple(self): h = [1, 2, 3, 4, 5, 6, 7, 0] h_sifted = [0, 1, 3, 2, 5, 6, 7, 4] q = self._make_mapped_queue(h) - q._siftdown(len(h) - 1) - assert q.h == h_sifted + q._siftdown(0, len(h) - 1) + assert q.heap == h_sifted self._check_map(q) def test_push(self): @@ -102,7 +138,7 @@ def test_push(self): q = MappedQueue() for elt in to_push: q.push(elt) - assert q.h == h_sifted + assert q.heap == h_sifted self._check_map(q) def test_push_duplicate(self): @@ -112,7 +148,7 @@ def test_push_duplicate(self): for elt in to_push: inserted = q.push(elt) assert inserted - assert q.h == h_sifted + assert q.heap == h_sifted self._check_map(q) inserted = q.push(1) assert not inserted @@ -122,9 +158,7 @@ def test_pop(self): h_sorted = sorted(h) q = self._make_mapped_queue(h) q._heapify() - popped = [] - for elt in sorted(h): - popped.append(q.pop()) + popped = [q.pop() for _ in range(len(h))] assert popped == h_sorted self._check_map(q) @@ -133,25 +167,66 @@ def test_remove_leaf(self): h_removed = [0, 2, 1, 6, 4, 5] q = self._make_mapped_queue(h) removed = q.remove(3) - assert q.h == h_removed + assert q.heap == h_removed def test_remove_root(self): h = [0, 2, 1, 6, 3, 5, 4] h_removed = [1, 2, 4, 6, 3, 5] q = self._make_mapped_queue(h) removed = q.remove(0) - assert q.h == h_removed + assert q.heap == h_removed def test_update_leaf(self): h = [0, 20, 10, 60, 30, 50, 40] h_updated = [0, 15, 10, 60, 20, 50, 40] q = self._make_mapped_queue(h) removed = q.update(30, 15) - assert q.h == h_updated + assert q.heap == h_updated def test_update_root(self): h = [0, 20, 10, 60, 30, 50, 40] h_updated = [10, 20, 35, 60, 30, 50, 40] q = self._make_mapped_queue(h) removed = q.update(0, 35) - assert q.h == h_updated + assert q.heap == h_updated + + +class TestMappedDict(TestMappedQueue): + def _make_mapped_queue(self, h): + priority_dict = {elt: elt for elt in h} + return MappedQueue(priority_dict) + + def test_push(self): + to_push = [6, 1, 4, 3, 2, 5, 0] + h_sifted = [0, 2, 1, 6, 3, 5, 4] + q = MappedQueue() + for elt in to_push: + q.push(elt, priority=elt) + assert q.heap == h_sifted + self._check_map(q) + + def test_push_duplicate(self): + to_push = [2, 1, 0] + h_sifted = [0, 2, 1] + q = MappedQueue() + for elt in to_push: + inserted = q.push(elt, priority=elt) + assert inserted + assert q.heap == h_sifted + self._check_map(q) + inserted = q.push(1, priority=1) + assert not inserted + + def test_update_leaf(self): + h = [0, 20, 10, 60, 30, 50, 40] + h_updated = [0, 15, 10, 60, 20, 50, 40] + q = self._make_mapped_queue(h) + removed = q.update(30, 15, priority=15) + assert q.heap == h_updated + + def test_update_root(self): + h = [0, 20, 10, 60, 30, 50, 40] + h_updated = [10, 20, 35, 60, 30, 50, 40] + q = self._make_mapped_queue(h) + 
removed = q.update(0, 35, priority=35) + assert q.heap == h_updated
Possible fix for floating point KeyError in greedy_modularity_communities

As per #4992, the existing implementation of `greedy_modularity_communities()` causes intermittent `KeyError`s due to its using a floating point number `dq` as one element of a dict key. One simple fix is to round `dq` to a reasonable number of significant figures.
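Concretely, the rounding approach amounts to passing `dq` through a helper like this before it is used in a queue key (a sketch; the choice of six significant figures is arbitrary, which is exactly the objection raised in the discussion below):
```python
from math import floor, log10

def round_sig(x, sig=6):
    """Round x to `sig` significant figures so that float noise cannot
    change the dict key between insertion and removal."""
    if x == 0:
        return 0.0
    return round(x, sig - int(floor(log10(abs(x)))) - 1)
```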
Could we avoid the floating point trouble by using a term other than `q0` in our computation of `dq`? Said another way, can we sort based not on dq but on a scaled version of dq such that q0 does not appear in dq? Multiplying the modularity expression by `m**2` removes `q0` from that expression and gives the line:
```python
dq_dict = {
    i: {
        j: 4 * m * G.get_edge_data(i, j).get(weight, 1.0) - 2 * resolution * k[i] * k[j]
        for j in [node_for_label[u] for u in G.neighbors(label_for_node[i])]
        if j != i
    }
    for i in range(N)
}
```
I'd want to check for other impacts of scaling the leading value in the 3-tuple used in the MappedQueue. But this should get rid of the problem of a KeyError...

I agree with @dschult - I think we should investigate that approach. The problem with the approach in this PR or the `fraction.Fraction` approach is that it bakes in assumptions about what precision users need. Six sigfigs may be fine for most cases, but in the cases where the user is relying on more precision, this could potentially lead to the algorithm silently giving incorrect results. If @dschult's suggestion of using the scaled value in the 3-tuple works, then we're back to the situation of: users can select whatever precision they desire by scaling the weight attribute appropriately.

That does sound like a much more elegant solution.

The approach suggested by @dschult still leads to similar problems, I'm afraid. It _tends_ to work OK (after also modifying https://github.com/networkx/networkx/blob/2d1bf7072c1b71429442822eda69bd2c3c0065e1/networkx/algorithms/community/modularity_max.py#L97) to read:
```
a = [k[i] for i in range(N)]
```
... but non-integer resolution values still lead to `KeyError`s. Adding a little debugging code starting at https://github.com/networkx/networkx/blob/2d1bf7072c1b71429442822eda69bd2c3c0065e1/networkx/algorithms/community/modularity_max.py#L212:
```
try:
    H.remove(d_old)
except KeyError:
    min_diff = 1e9
    min_key = None
    for key in H.d.keys():
        diff = abs(key[0] - d_old[0])
        if diff < min_diff:
            min_key = key
            min_diff = diff
    raise KeyError(f'Failed on {d_old}. Nearest {min_key}; diff {min_diff}')

# Update row max
if len(dq_heap[row]) > 0:
    H.push(dq_heap[row].h[0])
else:
    # Only update per-row heap
    try:
        dq_heap[row].remove(d_old)
    except KeyError:
        min_diff = 1e9
        min_key = None
        h = dq_heap[row]
        for key in h.d.keys():
            diff = abs(key[0] - d_old[0])
            if diff < min_diff:
                min_key = key
                min_diff = diff
        raise KeyError(f'Failed on {d_old}. Nearest {min_key}; diff {min_diff}')
```
... leads to things like this (when the original test case in #4992 is run with resolution=0.99):
```
~/.local/share/ChimeraX/1.3/site-packages/networkx/algorithms/community/modularity_max.py in greedy_modularity_communities(G, weight, resolution)
    240                     min_diff = diff
    241
--> 242         raise KeyError(f'Failed on {d_old}. Nearest {min_key}; diff {min_diff}')
    243
    244
KeyError: 'Failed on (-2.690155769449104e+16, 1091, 1084). Nearest (-2.6901557694491044e+16, 1091, 1084); diff 4.0'
```
It would be a shame to limit the method to only integer values of resolution...

I'm still leaning towards rounding being the most stable approach. The calculations are all done in double precision... what about simply rounding the key values to single precision, and stating clearly that the algorithm only works in single precision?

Wow -- those dq values are **really** huge (~1e+16). Floating point is not good enough to hold integer values that large without roundoff causing trouble, even for integers.
Perhaps we need to convert to integers to hold these really large integer numbers.

Thanks very much for this. Can you give me a small example that breaks? Or just tell me what kind of size your edge weights are and how many edges for the network? Thanks!!

Here you go. It's essentially an adaptation of the example in #4992 with the weights converted to integers after multiplying by 1e6 (which to be fair is a few orders of magnitude more than needed). Fails with resolution=0.99 both for the original `greedy_modularity_communities` and for the modified version discussed above. Works great when it *does* work, though!
[networkx_test_case.tar.gz](https://github.com/networkx/networkx/files/7025665/networkx_test_case.tar.gz)
Another possibility would be to make the desired number of significant figures an optional argument with a reasonable default value, so people can adjust it if they really need to.

Here's a smaller example:
```python
G = nx.Graph()
G.add_weighted_edges_from([(0, 1, 12), (1, 4, 71), (2, 3, 15), (2, 4, 10), (3, 6, 13)], weight="weight")
nx.community.greedy_modularity_communities(G, "weight", resolution=0.99)
```
Most "simple" values of resolution work here without an error, but if the resolution causes roundoff error -- as 0.99 does -- then we still get that KeyError. So we can easily construct examples that break it.

OK... I've looked at this some this evening and I think I have a way forward. The trouble is coming from using floating point values as keys in the MappedQueue class. This MappedQueue class uses a python heapq but adds to it the ability to remove objects from the queue. And it does this by storing a lookup dict. The dict uses the value stored on the heapq as a key to look up the index in the heapq. But -- our values are 3-tuples with a floating point value as the leading entry and the 2 nodes which form the edge filling out the 3-tuple. We need the floating point value to provide the lowest valued edge. But (in our usage!) the edge that fills out the 3-tuple is unique to that 3-tuple in the queue. Note that if we expand this to MultiGraphs, we will likely need a 4-tuple to store the edge key as well as the two nodes and the floating point value.

We are currently getting a KeyError due to floating point problems with the key in the lookup dict. But we can rewrite MappedQueue to use the 3-tuple for priority, and all but the first value in the tuple as the key in the lookup dict. This makes MappedQueue much less general. It should probably be removed from `nx.utils` if we do this. But this is the only function in NX that uses MappedQueue. So that's probably OK.

My strategy... thoughts appreciated...
1) review and merge #5007 to make all this work with DiGraphs and MultiGraphs and MultiDiGraphs.
2) In the same PR:
   - rewrite MappedQueue to use a full tuple element for priority and all but the first entry for element removal.
   - move MappedQueue out of nx.utils and probably rename it. This will likely need a deprecation (removed in v3.0).
   - rewrite `greedy_modularity_communities` to use the new data structure.

Thoughts anyone?

Just updated the PR with an approach loosely based around @dschult's suggestion (actually tried something like this previously but abandoned it because it didn't seem to be working - must have been doing something wrong). Rather than touching `MappedQueue` I created a new wrapper class `_HeapProxy` to contain the data.
Its `__gt__` and `__lt__` methods use the full data tuple so sorting within the queue will be identical to before, but `__eq__` and `__hash__` use only the (i, j) indices to avoid the floating point issues. Since each edge should only ever be in the queue once (if I understand correctly) this should work fine - and appears to give an identical result in my test case. Also works without error for resolution=0.99.

Can also save about 10% in the running time if the `_HeapProxy` hash is precalculated in its `__init__()`:
```
class _HeapProxy:
    def __init__(self, dq, i, j):
        self.data = (dq, i, j)
        self._hash = hash(self.data[1:])

    def __hash__(self):
        return self._hash
```
Niiice... :} Other speedups include `def __iter__(self):` instead of (or in addition to) `__getitem__`, and in `__init__`, define `self.data` as well as `self.priority, *self.edge = self.data` and then order and hash the priority and edge. There may be speedups in `MappedQueue` that help too, but these are pretty straightforward ones.

Thanks for this! I'm going to rebase this on main because `n_communities` was added recently.
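Filling in the comparison methods described above, the whole wrapper is only a few lines. A sketch (this is the discussion's `_HeapProxy`; the merged code ultimately landed as `_HeapElement` with a separate `priority`, see the patch above):
```python
class _HeapProxy:
    """Sort by the full (dq, i, j) tuple; compare equal and hash by (i, j) only."""

    def __init__(self, dq, i, j):
        self.data = (dq, i, j)
        self._hash = hash(self.data[1:])  # edge identity only, no floats

    def __lt__(self, other):
        return self.data < other.data  # heap ordering still uses dq

    def __gt__(self, other):
        return self.data > other.data

    def __eq__(self, other):
        return self.data[1:] == other.data[1:]  # dict lookup ignores dq

    def __hash__(self):
        return self._hash
```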
I would welcome a PR that fixes MappedQueue and makes it more readable. I will get to it "soon" but don't know when exactly. :}
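For reference, a minimal self-contained sketch of the wrapper idea discussed above (constructor signature and names are illustrative, not necessarily what the PR ships): ordering uses the full `(priority, i, j)` tuple, so heap order is unchanged, while equality and hashing key on the edge `(i, j)` alone, so floating-point drift in the priority can no longer break the lookup dict:

```python
class _HeapProxy:
    """Heap entry whose ordering and identity are decoupled.

    heapq compares entries with ``<``/``>``, which here use the full
    (priority, i, j) tuple; the position-lookup dict uses ``__eq__``
    and ``__hash__``, which ignore the float priority and key on the
    edge (i, j) alone.
    """

    def __init__(self, priority, i, j):
        self.data = (priority, i, j)
        self._hash = hash((i, j))  # precomputed; the edge never changes

    def __lt__(self, other):
        return self.data < other.data

    def __gt__(self, other):
        return self.data > other.data

    def __eq__(self, other):
        return self.data[1:] == other.data[1:]

    def __hash__(self):
        return self._hash
```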
2021-09-05T04:39:53
networkx/networkx
5,077
networkx__networkx-5077
[ "3389" ]
d2b8d970b841a11c777019f15d9deac4d4ce81a1
diff --git a/networkx/generators/random_graphs.py b/networkx/generators/random_graphs.py --- a/networkx/generators/random_graphs.py +++ b/networkx/generators/random_graphs.py @@ -78,28 +78,12 @@ def fast_gnp_random_graph(n, p, seed=None, directed=False): if p <= 0 or p >= 1: return nx.gnp_random_graph(n, p, seed=seed, directed=directed) - w = -1 lp = math.log(1.0 - p) if directed: G = nx.DiGraph(G) - # Nodes in graph are from 0,n-1 (start with v as the first node index). - v = 0 - while v < n: - lr = math.log(1.0 - seed.random()) - w = w + 1 + int(lr / lp) - if v == w: # avoid self loops - w = w + 1 - while v < n <= w: - w = w - n - v = v + 1 - if v == w: # avoid self loops - w = w + 1 - if v < n: - G.add_edge(v, w) - else: - # Nodes in graph are from 0,n-1 (start with v as the second node index). v = 1 + w = -1 while v < n: lr = math.log(1.0 - seed.random()) w = w + 1 + int(lr / lp) @@ -107,7 +91,19 @@ def fast_gnp_random_graph(n, p, seed=None, directed=False): w = w - v v = v + 1 if v < n: - G.add_edge(v, w) + G.add_edge(w, v) + + # Nodes in graph are from 0,n-1 (start with v as the second node index). + v = 1 + w = -1 + while v < n: + lr = math.log(1.0 - seed.random()) + w = w + 1 + int(lr / lp) + while w >= v and v < n: + w = w - v + v = v + 1 + if v < n: + G.add_edge(v, w) return G
diff --git a/networkx/generators/tests/test_random_graphs.py b/networkx/generators/tests/test_random_graphs.py --- a/networkx/generators/tests/test_random_graphs.py +++ b/networkx/generators/tests/test_random_graphs.py @@ -247,6 +247,27 @@ def test_gnp(self): edges += sum(1 for _ in generator(10, 0.99999, directed=True).edges()) assert abs(edges / float(runs) - 90) <= runs * 2.0 / 100 + # assert that edges are generated with correct probability + runs = 5000 + n = 5 + for p in [0.2, 0.8]: + for directed in [False, True]: + edge_counts = [[0] * 5 for row in range(5)] + for i in range(runs): + G = generator(n, p, directed=directed) + for (v, w) in G.edges: + edge_counts[v][w] += 1 + if not directed: + edge_counts[w][v] += 1 + for v in range(n): + for w in range(n): + if v == w: + # There should be no loops + assert edge_counts[v][w] == 0 + else: + # Each edge should have been generated with probability close to p + assert abs(edge_counts[v][w] / float(runs) - p) <= 0.03 + def test_gnm(self): G = nx.gnm_random_graph(10, 3) assert len(G) == 10
fast_gnp_random_graph doesn't sample directed edges uniformly, wrong way of avoiding loop edges
I've noticed the following small mistake in the generation of the Erdős-Rényi graph. The current way to avoid loop edges is to just take the next node if a loop edge would be generated:

https://github.com/networkx/networkx/blob/master/networkx/generators/random_graphs.py#L101

However, this way the edge `(v, v+1)` has twice the probability of any other edge. This non-uniformity can easily be demonstrated on small graphs:

```python
import networkx
import numpy as np

neighbors_of_0 = []
for _ in range(1000):
    G = networkx.generators.random_graphs.fast_gnp_random_graph(4, 0.2, directed=True)
    for (i, j) in G.edges:
        if i == 0:
            neighbors_of_0.append(j)
print(np.bincount(neighbors_of_0) / 1000)
# output: array([0. , 0.351, 0.203, 0.197])
```

Note: I don't quite understand why, but it appears empirically that the probability is not actually two times the others, but hovers around ~0.35. I think I'm missing something.

Edit: Yes, in fact it's not exactly twice. In the loop-edges-included model, p((v, v+1)) is the probability of immediately arriving at (v, v+1) (skipping (v, v)) plus the probability of arriving at (v, v) and then at (v, v+1), while here the probability is the sum of the probability of arriving at (v, v) and the probability of arriving at (v, v+1) given that (v, v) is not included.

I believe this must have been introduced as a response to the following issue (when the generation of loop edges was excluded): https://github.com/networkx/networkx/issues/575

The cited paper [1] doesn't cover the proper way of excluding loop edges as far as I can tell. I believe one way of doing it correctly would be to generate the node `w` from `[0, n-2]` instead of `[0, n-1]` and use edge `(v, w) if w < v else (v, w+1)`.

[1] Vladimir Batagelj and Ulrik Brandes, "Efficient generation of large random networks"
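A sketch of the suggested correction (the function name is invented for illustration): draw `w` uniformly from the `n - 1` non-loop candidates and shift it past `v`, so every non-loop target stays equally likely:

```python
import random

def random_target_avoiding_loop(v, n, rng=random):
    # Draw w from [0, n-2]; shifting w past v skips the self-loop
    # while keeping all n-1 remaining targets equally likely.
    w = rng.randrange(n - 1)
    return w if w < v else w + 1
```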
2021-09-11T21:04:48
networkx/networkx
5,153
networkx__networkx-5153
[ "5123" ]
cfb4b271166485fda8ebf82f00178f28602383bb
diff --git a/networkx/drawing/layout.py b/networkx/drawing/layout.py --- a/networkx/drawing/layout.py +++ b/networkx/drawing/layout.py @@ -1092,7 +1092,7 @@ def multipartite_layout(G, subset_key="subset", align="vertical", scale=1, cente nodes = [] width = len(layers) - for i, layer in layers.items(): + for i, layer in enumerate(layers.values()): height = len(layer) xs = np.repeat(i, height) ys = np.arange(0, height, dtype=float)
diff --git a/networkx/drawing/tests/test_layout.py b/networkx/drawing/tests/test_layout.py --- a/networkx/drawing/tests/test_layout.py +++ b/networkx/drawing/tests/test_layout.py @@ -412,3 +412,15 @@ def test_rescale_layout_dict(self): } for k, v in expectation.items(): assert (s_vpos[k] == v).all() + + +def test_multipartite_layout_nonnumeric_partition_labels(): + """See gh-5123.""" + G = nx.Graph() + G.add_node(0, subset="s0") + G.add_node(1, subset="s0") + G.add_node(2, subset="s1") + G.add_node(3, subset="s1") + G.add_edges_from([(0, 2), (0, 3), (1, 2)]) + pos = nx.multipartite_layout(G) + assert len(pos) == len(G)
multipartite_layout() fails when some node labels are not float
Dear contributors, I am currently facing the following issue, which I believe is a bug. Calling `multipartite_layout()` on a graph in which some nodes have labels that are not floats (or integers) will result in an error.

### Current Behavior
Calling `multipartite_layout()` on a graph in which some nodes have hashable labels that are not floats (or integers) will result in the following exception:
```
numpy.core._exceptions.UFuncTypeError: ufunc 'subtract' did not contain a loop with signature matching types (dtype('<U32'), dtype('<U32')) -> dtype('<U32')
```
The traceback points at line 1101 of drawing/layout.py, in multipartite_layout:
`layer_pos = np.column_stack([xs, ys]) - offset`

### Expected Behavior
Calling `multipartite_layout()` on a graph in which some nodes have hashable labels that are not floats (or integers) should return a `pos` allowing the graph to be drawn as a multipartite graph.

### Steps to Reproduce
Here is a minimal non-working example:
```python
G = nx.Graph()
G.add_node(0, subset='s0')
G.add_node("b", subset='s0')
G.add_node(2, subset='s1')
G.add_node("d", subset='s1')
G.add_edges_from([(0, 2), (0, "d"), ("b", 2)])
pos = nx.multipartite_layout(G)  # <-- raises the exception mentioned above
```

### Environment
Python version: 3.8.11
NetworkX version: 2.6.3
NumPy version: 1.20.3

### Additional context
I fixed this problem with the following workaround (from `multipartite_layout` code, in drawing/layout.py):
```python
import numpy as np

if align not in ("vertical", "horizontal"):
    msg = "align must be either vertical or horizontal."
    raise ValueError(msg)
G, center = _process_params(G, center=center, dim=2)
if len(G) == 0:
    return {}
layers = {}
for v, data in G.nodes(data=True):
    try:
        layer = data[subset_key]
    except KeyError:
        msg = "all nodes must have subset_key (default='subset') as data"
        raise ValueError(msg)
    layers[layer] = [v] + layers.get(layer, [])

pos = None
nodes = []
width = len(layers)
# for i, layer in layers.items():  # <-- original code
for i, layer in enumerate(layers.values()):  # <-- modification
    height = len(layer)
    xs = np.repeat(i, height)
    ys = np.arange(0, height, dtype=float)
    offset = ((width - 1) / 2, (height - 1) / 2)
    layer_pos = np.column_stack([xs, ys]) - offset
    if pos is None:
        pos = layer_pos
    else:
        pos = np.concatenate([pos, layer_pos])
    nodes.extend(layer)
pos = rescale_layout(pos, scale=scale) + center
if align == "horizontal":
    pos = np.flip(pos, 1)
pos = dict(zip(nodes, pos))
return pos
```
However, being new to NetworkX, I cannot guarantee that it is a reliable fix.
Thanks for the report - I think you're right that the problem here stems from the fact that `multipartite_layout` assumes that the partition identifiers (denoted by default by the `"subset"` node attribute) are numeric. I agree that it'd be good to support non-numeric values for the partition label. For anyone interested, this should be a pretty straightforward fix - I'd start by looking at this line: https://github.com/networkx/networkx/blob/92225435bbb1461391289bd5b168717e2318e8bb/networkx/drawing/layout.py#L1095 It'd also be important to add a test for `multipartite_layout` with non-numeric (e.g. string) partition labels.

> I think you're right that the problem here stems from the fact that multipartite_layout assumes that the partition identifiers (denoted by default by the "subset" node attribute) are numeric.

Yes, this is what I figured out. Just to clarify what I think the problem is: the node names, and not the partition values (with default key "subset"), are assumed to be numeric. Hence, the following fails (some nodes have names which are strings):

```python
G = nx.Graph()
G.add_node(0, subset='s0')
G.add_node("b", subset='s0')
G.add_node(2, subset='s1')
G.add_node("d", subset='s1')
G.add_edges_from([(0, 2), (0, "d"), ("b", 2)])
pos = nx.multipartite_layout(G)  # <-- raises the exception mentioned above
```

while the following will work (all the nodes' names are numeric):

```python
G = nx.Graph()
G.add_node(0, subset='s0')
G.add_node(1, subset='s0')
G.add_node(2, subset='s1')
G.add_node(3, subset='s1')
G.add_edges_from([(0, 2), (0, 3), (1, 2)])
pos = nx.multipartite_layout(G)
```

> while the following will work (all the nodes' names are numeric)

I recommend running this second example :)
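A small sketch of the failure mechanism, consistent with the merged patch above (which switches to `enumerate(layers.values())`); the values here are made up:

```python
import numpy as np

layers = {"s0": [0, "b"], "s1": [2, "d"]}

# Buggy path: the partition label itself becomes the x coordinate
xs = np.repeat("s0", 2)  # dtype('<U2') -- a string array, not numeric
# np.column_stack([xs, [0.0, 1.0]]) - (0.5, 0.5)  # -> UFuncTypeError

# Fixed path: use the layer's position in the iteration order instead
for i, layer in enumerate(layers.values()):
    xs = np.repeat(i, len(layer))  # integer array; the subtraction works
```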
2021-10-26T07:55:57
networkx/networkx
5,154
networkx__networkx-5154
[ "3272" ]
cfb4b271166485fda8ebf82f00178f28602383bb
diff --git a/networkx/drawing/nx_pylab.py b/networkx/drawing/nx_pylab.py --- a/networkx/drawing/nx_pylab.py +++ b/networkx/drawing/nx_pylab.py @@ -155,10 +155,11 @@ def draw_networkx(G, pos=None, arrows=None, with_labels=True, **kwds): For directed graphs, choose the style of the arrowsheads. See `matplotlib.patches.ArrowStyle` for more options. - arrowsize : int (default=10) + arrowsize : int or list (default=10) For directed graphs, choose the size of the arrow head's length and - width. See `matplotlib.patches.FancyArrowPatch` for attribute - `mutation_scale` for more info. + width. A list of values can be passed in to assign a different size for arrow head's length and width. + See `matplotlib.patches.FancyArrowPatch` for attribute `mutation_scale` + for more info. with_labels : bool (default=True) Set to True to draw labels on the nodes. @@ -750,7 +751,12 @@ def to_marker_edge(marker_size, marker): # Draw arrows with `matplotlib.patches.FancyarrowPatch` arrow_collection = [] - mutation_scale = arrowsize # scale factor of arrow head + + if isinstance(arrowsize, list): + if len(arrowsize) != len(edge_pos): + raise ValueError("arrowsize should have the same length as edgelist") + else: + mutation_scale = arrowsize # scale factor of arrow head base_connection_style = mpl.patches.ConnectionStyle(connectionstyle) @@ -798,6 +804,11 @@ def _connectionstyle(posA, posB, *args, **kwargs): x2, y2 = dst shrink_source = 0 # space from source to tail shrink_target = 0 # space from head to target + + if isinstance(arrowsize, list): + # Scale each factor of each arrow based on arrowsize list + mutation_scale = arrowsize[i] + if np.iterable(node_size): # many node sizes source, target = edgelist[i][:2] source_node_size = node_size[nodelist.index(source)]
diff --git a/networkx/drawing/tests/test_pylab.py b/networkx/drawing/tests/test_pylab.py --- a/networkx/drawing/tests/test_pylab.py +++ b/networkx/drawing/tests/test_pylab.py @@ -478,6 +478,28 @@ def test_error_invalid_kwds(): nx.draw(barbell, foo="bar") +def test_draw_networkx_arrowsize_incorrect_size(): + G = nx.DiGraph([(0, 1), (0, 2), (0, 3), (1, 3)]) + arrowsize = [1, 2, 3] + with pytest.raises( + ValueError, match="arrowsize should have the same length as edgelist" + ): + nx.draw(G, arrowsize=arrowsize) + + [email protected]("arrowsize", (30, [10, 20, 30])) +def test_draw_edges_arrowsize(arrowsize): + G = nx.DiGraph([(0, 1), (0, 2), (1, 2)]) + pos = {0: (0, 0), 1: (0, 1), 2: (1, 0)} + edges = nx.draw_networkx_edges(G, pos=pos, arrowsize=arrowsize) + + arrowsize = itertools.repeat(arrowsize) if isinstance(arrowsize, int) else arrowsize + + for fap, expected in zip(edges, arrowsize): + assert isinstance(fap, mpl.patches.FancyArrowPatch) + assert fap.get_mutation_scale() == expected + + def test_np_edgelist(): # see issue #4129 np = pytest.importorskip("numpy")
Allow for different arrow sizes
In directed graph drawing, it's much clearer to convey edge weights with different arrow sizes. In nx_pylab.py, line 676, arrowsize can easily accept a list if just a few lines are added:

```python
if len(arrowsize) > 1:
    mutation_scale = arrowsize[i]
else:
    mutation_scale = arrowsize[0]
```
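Based on the patch and tests above, usage after the change would look like this (a sketch; sizes arbitrary, and the list length must match the number of edges):

```python
import networkx as nx

G = nx.DiGraph([(0, 1), (0, 2), (1, 2)])
# One arrow-head scale per edge, in edge order
nx.draw(G, arrowsize=[10, 20, 30])
```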
2021-10-26T10:28:56
networkx/networkx
5,208
networkx__networkx-5208
[ "5202" ]
aab28bf12e0b6569bfa44f1822c102eb85c1f179
diff --git a/networkx/drawing/layout.py b/networkx/drawing/layout.py --- a/networkx/drawing/layout.py +++ b/networkx/drawing/layout.py @@ -1009,17 +1009,9 @@ def spiral_layout(G, scale=1, center=None, dim=2, resolution=0.35, equidistant=F pos.append([np.cos(theta) * r, np.sin(theta) * r]) else: - # set the starting angle and step - step = 1 - angle = 0.0 - dist = 0.0 - # set the radius for the spiral to the number of nodes in the graph - radius = len(G) - - while dist * np.hypot(np.cos(angle), np.sin(angle)) < radius: - pos.append([dist * np.cos(angle), dist * np.sin(angle)]) - dist += step - angle += resolution + dist = np.arange(len(G), dtype=float) + angle = resolution * dist + pos = np.transpose(dist * np.array([np.cos(angle), np.sin(angle)])) pos = rescale_layout(np.array(pos), scale=scale) + center
diff --git a/networkx/drawing/tests/test_layout.py b/networkx/drawing/tests/test_layout.py --- a/networkx/drawing/tests/test_layout.py +++ b/networkx/drawing/tests/test_layout.py @@ -15,17 +15,6 @@ def setup_class(cls): nx.add_path(cls.Gs, "abcdef") cls.bigG = nx.grid_2d_graph(25, 25) # > 500 nodes for sparse - @staticmethod - def collect_node_distances(positions): - distances = [] - prev_val = None - for k in positions: - if prev_val is not None: - diff = positions[k] - prev_val - distances.append((diff @ diff) ** 0.5) - prev_val = positions[k] - return distances - def test_spring_fixed_without_pos(self): G = nx.path_graph(4) pytest.raises(ValueError, nx.spring_layout, G, fixed=[0]) @@ -367,20 +356,20 @@ def test_spiral_layout(self): # intuitively, the total distance from the start and end nodes # via each node in between (transiting through each) will be less, # assuming rescaling does not occur on the computed node positions - pos_standard = nx.spiral_layout(G, resolution=0.35) - pos_tighter = nx.spiral_layout(G, resolution=0.34) - distances = self.collect_node_distances(pos_standard) - distances_tighter = self.collect_node_distances(pos_tighter) + pos_standard = np.array(list(nx.spiral_layout(G, resolution=0.35).values())) + pos_tighter = np.array(list(nx.spiral_layout(G, resolution=0.34).values())) + distances = np.linalg.norm(pos_standard[:-1] - pos_standard[1:], axis=1) + distances_tighter = np.linalg.norm(pos_tighter[:-1] - pos_tighter[1:], axis=1) assert sum(distances) > sum(distances_tighter) # return near-equidistant points after the first value if set to true - pos_equidistant = nx.spiral_layout(G, equidistant=True) - distances_equidistant = self.collect_node_distances(pos_equidistant) - for d in range(1, len(distances_equidistant) - 1): - # test similarity to two decimal places - assert distances_equidistant[d] == pytest.approx( - distances_equidistant[d + 1], abs=1e-2 - ) + pos_equidistant = np.array(list(nx.spiral_layout(G, equidistant=True).values())) + distances_equidistant = np.linalg.norm( + pos_equidistant[:-1] - pos_equidistant[1:], axis=1 + ) + assert np.allclose( + distances_equidistant[1:], distances_equidistant[-1], atol=0.01 + ) def test_rescale_layout_dict(self): G = nx.empty_graph()
failure when testing prereleases of dependencies @rossbar We are getting the following failure when testing prereleases. I suspect the issue is related to NumPy 1.22.0rc1. ``` =================================== FAILURES =================================== ________________________ TestLayout.test_spiral_layout _________________________ self = <networkx.drawing.tests.test_layout.TestLayout object at 0x7f4dd807dc70> def test_spiral_layout(self): G = self.Gs # a lower value of resolution should result in a more compact layout # intuitively, the total distance from the start and end nodes # via each node in between (transiting through each) will be less, # assuming rescaling does not occur on the computed node positions pos_standard = nx.spiral_layout(G, resolution=0.35) pos_tighter = nx.spiral_layout(G, resolution=0.34) distances = self.collect_node_distances(pos_standard) distances_tighter = self.collect_node_distances(pos_tighter) > assert sum(distances) > sum(distances_tighter) E assert 2.1859271805707814 > 2.4246843231297848 E + where 2.1859271805707814 = sum([0.31927215741496506, 0.3558859513351378, 0.4196367596771466, 0.5002552444945128, 0.5908770676490193]) E + and 2.4246843231297848 = sum([0.3585779234394243, 0.39751744460128174, 0.4657296446767588, 0.5524763132328394, 0.6503829971794806]) networkx/drawing/tests/test_layout.py:374: AssertionError ```
```
Package            Version
------------------ -----------
attrs              21.2.0
certifi            2021.10.8
charset-normalizer 2.0.8
codecov            2.1.12
coverage           6.1.2
cycler             0.11.0
fonttools          4.28.2
idna               3.3
iniconfig          1.1.1
kiwisolver         1.3.2
matplotlib         3.5.0
networkx           2.7rc1.dev0
numpy              1.22.0rc1
packaging          21.3
pandas             1.3.4
Pillow             8.4.0
pip                21.3.1
pluggy             1.0.0
py                 1.11.0
pyparsing          3.0.6
pytest             6.2.5
pytest-cov         3.0.0
python-dateutil    2.8.2
pytz               2021.3
requests           2.26.0
scipy              1.7.3
setuptools         59.2.0
setuptools-scm     6.3.2
six                1.16.0
toml               0.10.2
tomli              1.2.2
urllib3            1.26.7
wheel              0.37.0
```

I was trying to look into this (because I thought that https://github.com/numpy/numpy/issues/20455 might be the reason), but I can't reproduce the failure with NumPy main locally? Do you happen to have an idea what else could be the reason? SciPy 1.7.3 is pretty new?

Hmm. I can't reproduce the error either. Maybe it is an Ubuntu issue (it is failing on Ubuntu and I run Fedora). Here is the failure: https://github.com/networkx/networkx/runs/4357336292?check_suite_focus=true

I wasn't able to reproduce locally either. I'm curious about what's going on, but likely won't have a chance to look at it before Wednesday.
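One plausible reading of the fix eventually applied (see the patch above): the old loop stopped when `dist * np.hypot(np.cos(angle), np.sin(angle))` reached `len(G)`; since `hypot(cos, sin)` is mathematically 1, tiny rounding differences across NumPy versions could change how many points were produced. Generating exactly one point per node with vectorized arithmetic avoids that. A sketch (function name invented for illustration):

```python
import numpy as np

def spiral_points(n, resolution=0.35):
    # Exactly n points: radius grows linearly with the node index,
    # angle advances by `resolution` per node (matches the patch)
    dist = np.arange(n, dtype=float)
    angle = resolution * dist
    return np.transpose(dist * np.array([np.cos(angle), np.sin(angle)]))

pts = spiral_points(6)
assert pts.shape == (6, 2)
```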
2021-12-03T19:01:12
networkx/networkx
5,216
networkx__networkx-5216
[ "5064" ]
766becc132eda01c9d0d458de2d75ab763f83493
diff --git a/networkx/algorithms/smallworld.py b/networkx/algorithms/smallworld.py --- a/networkx/algorithms/smallworld.py +++ b/networkx/algorithms/smallworld.py @@ -114,7 +114,7 @@ def random_reference(G, niter=1, connectivity=True, seed=None): @py_random_state(4) @not_implemented_for("directed") @not_implemented_for("multigraph") -def lattice_reference(G, niter=1, D=None, connectivity=True, seed=None): +def lattice_reference(G, niter=5, D=None, connectivity=True, seed=None): """Latticize the given graph by swapping edges. Parameters @@ -181,12 +181,12 @@ def lattice_reference(G, niter=1, D=None, connectivity=True, seed=None): D[v, :] = D[nnodes - v - 1, :][::-1] niter = niter * nedges - ntries = int(nnodes * nedges / (nnodes * (nnodes - 1) / 2)) - swapcount = 0 + # maximal number of rewiring attempts per 'niter' + max_attempts = int(nnodes * nedges / (nnodes * (nnodes - 1) / 2)) - for i in range(niter): + for _ in range(niter): n = 0 - while n < ntries: + while n < max_attempts: # pick two random edges without creating edge list # choose source node indices from discrete distribution (ai, ci) = discrete_sequence(2, cdistribution=cdf, seed=seed) @@ -220,7 +220,6 @@ def lattice_reference(G, niter=1, D=None, connectivity=True, seed=None): G.add_edge(a, b) G.add_edge(c, d) else: - swapcount += 1 break n += 1 @@ -298,7 +297,7 @@ def sigma(G, niter=100, nrand=10, seed=None): @py_random_state(3) @not_implemented_for("directed") @not_implemented_for("multigraph") -def omega(G, niter=100, nrand=10, seed=None): +def omega(G, niter=5, nrand=10, seed=None): """Returns the small-world coefficient (omega) of a graph The small-world coefficient of a graph G is: @@ -310,22 +309,22 @@ def omega(G, niter=100, nrand=10, seed=None): of an equivalent random graph and Cl is the average clustering coefficient of an equivalent lattice graph. - The small-world coefficient (omega) ranges between -1 and 1. Values close - to 0 means the G features small-world characteristics. Values close to -1 - means G has a lattice shape whereas values close to 1 means G is a random - graph. + The small-world coefficient (omega) measures how much G is like a lattice + or a random graph. Negative values mean G is similar to a lattice whereas + positive values mean G is a random graph. + Values close to 0 mean that G has small-world characteristics. Parameters ---------- G : NetworkX graph An undirected graph. - niter: integer (optional, default=100) + niter: integer (optional, default=5) Approximate number of rewiring per edge to compute the equivalent random graph. nrand: integer (optional, default=10) - Number of random graphs generated to compute the average clustering + Number of random graphs generated to compute the maximal clustering coefficient (Cr) and average shortest path length (Lr). 
seed : integer, random_state, or None (default) @@ -354,15 +353,31 @@ def omega(G, niter=100, nrand=10, seed=None): # Compute the mean clustering coefficient and average shortest path length # for an equivalent random graph randMetrics = {"C": [], "L": []} - for i in range(nrand): - Gr = random_reference(G, niter=niter, seed=seed) - Gl = lattice_reference(G, niter=niter, seed=seed) - randMetrics["C"].append(nx.transitivity(Gl)) + + # Calculate initial average clustering coefficient which potentially will + # get replaced by higher clustering coefficients from generated lattice + # reference graphs + Cl = nx.average_clustering(G) + + niter_lattice_reference = niter + niter_random_reference = niter * 2 + + for _ in range(nrand): + # Generate random graph + Gr = random_reference(G, niter=niter_random_reference, seed=seed) randMetrics["L"].append(nx.average_shortest_path_length(Gr)) - C = nx.transitivity(G) + # Generate lattice graph + Gl = lattice_reference(G, niter=niter_lattice_reference, seed=seed) + + # Replace old clustering coefficient, if clustering is higher in + # generated lattice reference + Cl_temp = nx.average_clustering(Gl) + if Cl_temp > Cl: + Cl = Cl_temp + + C = nx.average_clustering(G) L = nx.average_shortest_path_length(G) - Cl = np.mean(randMetrics["C"]) Lr = np.mean(randMetrics["L"]) omega = (Lr / L) - (C / Cl)
diff --git a/networkx/algorithms/tests/test_smallworld.py b/networkx/algorithms/tests/test_smallworld.py --- a/networkx/algorithms/tests/test_smallworld.py +++ b/networkx/algorithms/tests/test_smallworld.py @@ -55,6 +55,16 @@ def test_omega(): omegal = omega(Gl, niter=1, nrand=1, seed=rng) omegar = omega(Gr, niter=1, nrand=1, seed=rng) omegas = omega(Gs, niter=1, nrand=1, seed=rng) - print("omegas, omegal, omegar") - print(omegas, omegal, omegar) assert omegal < omegas and omegas < omegar + + # Test that omega lies within the [-1, 1] bounds + G_barbell = nx.barbell_graph(5, 1) + G_karate = nx.karate_club_graph() + + omega_barbell = nx.omega(G_barbell) + omega_karate = nx.omega(G_karate, nrand=2) + + omegas = (omegal, omegar, omegas, omega_barbell, omega_karate) + + for o in omegas: + assert -1 <= o <= 1
`omega` gives values that are not within [-1,1] bounds
According to the paper this metric is based upon, [found here](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3604768/), omega needs to be between [-1, 1]. However, `nx.omega` returns many values that are not within the [-1, 1] bounds.

### Current Behavior
Values are not within the expected [-1, 1] bounds.

### Expected Behavior
Values should be within the [-1, 1] bounds.

### Steps to Reproduce
```python
import networkx as nx
G = nx.barbell_graph(5, 1)
print(nx.omega(G, seed=1))  # result is -1.505815972222222
```

### Environment
Python version: 3.7.4
NetworkX version: 2.5.1
Thanks for the report - I can reproduce this on the development branch (96831f99). I also tried the barbell graph example you've provided with different seeds and noticed that the result was always < -1.

I've only had a very quick look at the paper you linked to @mamonu, but one question that springs to mind is whether the [-1,1] interval is just something the authors observed for the graphs they looked at rather than something that is true for all graphs. Certainly the wording "Moreover, values of [omega] are restricted to the interval -1 to 1 regardless of network size." makes it sound like a general thing, but the definition of omega makes me wonder. Is there a proof in the paper that I've missed? You could also see if you get close to some of the paper's values for the karate graph (`nx.karate_club_graph()`) from the first row of Table 1. ... I got 0.35 for the karate graph, whereas the paper says 0.08. But it might just be down to random variation.

I've had a little look at `algorithms/smallworld.py` and noticed the following things that may be worth checking. (Unfortunately I don't have time to check them properly, and will have to stop looking any further at this issue.)

- C is calculated using transitivity in NetworkX but using the clustering coefficient in the paper. These can be very different (http://pages.stat.wisc.edu/~karlrohe/netsci/MeasuringTrianglesInGraphs.pdf)
- The `niter` and `nrand` parameters perhaps don't match the paper? I haven't checked this properly.
- I also noticed that the NetworkX implementation of *sigma* uses the `random_reference` function. It might be worth checking whether this unusual random graph generator is the standard way to calculate sigma.

Thanks for looking, both. btw @jtrim-ons I used to work at ONS some years ago :) It's a `smallworld` indeed. (ba-ddum-tch)

I would like to look into this issue, including the comments from the discussion. From the contributor guide I could not tell whether issues get assigned, so shall I just start? If I am missing something, please let me know.

Yes, please go ahead and start on this issue. It is much appreciated! :}
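A quick sketch of the transitivity-vs-clustering point raised in the first bullet above (the barbell graph is an arbitrary choice); the two quantities can differ substantially:

```python
import networkx as nx

G = nx.barbell_graph(5, 1)
print(nx.transitivity(G))        # global measure: 3 * triangles / connected triples
print(nx.average_clustering(G))  # mean of the per-node local clustering coefficients
```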
2021-12-05T18:19:30
networkx/networkx
5,240
networkx__networkx-5240
[ "5213" ]
00cf8eeae45497e7c1aedacee151f85f88512224
diff --git a/networkx/generators/small.py b/networkx/generators/small.py --- a/networkx/generators/small.py +++ b/networkx/generators/small.py @@ -181,7 +181,29 @@ def LCF_graph(n, shift_list, repeats, create_using=None): def bull_graph(create_using=None): - """Returns the Bull graph.""" + """ + Returns the Bull Graph + + The Bull Graph has 5 nodes and 5 edges. It is a planar undirected + graph in the form of a triangle with two disjoint pendant edges [1]_ + The name comes from the triangle and pendant edges representing + respectively the body and legs of a bull. + + Parameters + ---------- + create_using : NetworkX graph constructor, optional (default=nx.Graph) + Graph type to create. If graph instance, then cleared before populated. + + Returns + ------- + G : networkx Graph + A bull graph with 5 nodes + + References + ---------- + .. [1] https://en.wikipedia.org/wiki/Bull_graph. + + """ description = [ "adjacencylist", "Bull Graph", @@ -193,7 +215,29 @@ def bull_graph(create_using=None): def chvatal_graph(create_using=None): - """Returns the ChvΓ‘tal graph.""" + """ + Returns the ChvΓ‘tal Graph + + The ChvΓ‘tal Graph is an undirected graph with 12 nodes and 24 edges [1]_. + It has 370 distinct (directed) Hamiltonian cycles, giving a unique generalized + LCF notation of order 4, two of order 6 , and 43 of order 1 [2]_. + + Parameters + ---------- + create_using : NetworkX graph constructor, optional (default=nx.Graph) + Graph type to create. If graph instance, then cleared before populated. + + Returns + ------- + G : networkx Graph + The ChvΓ‘tal graph with 12 nodes and 24 edges + + References + ---------- + .. [1] https://en.wikipedia.org/wiki/Chv%C3%A1tal_graph + .. [2] https://mathworld.wolfram.com/ChvatalGraph.html + + """ description = [ "adjacencylist", "Chvatal Graph", @@ -218,7 +262,30 @@ def chvatal_graph(create_using=None): def cubical_graph(create_using=None): - """Returns the 3-regular Platonic Cubical graph.""" + """ + Returns the 3-regular Platonic Cubical Graph + + The skeleton of the cube (the nodes and edges) form a graph, with 8 + nodes, and 12 edges. It is a special case of the hypercube graph. + It is one of 5 Platonic graphs, each a skeleton of its + Platonic solid [1]_. + Such graphs arise in parallel processing in computers. + + Parameters + ---------- + create_using : NetworkX graph constructor, optional (default=nx.Graph) + Graph type to create. If graph instance, then cleared before populated. + + Returns + ------- + G : networkx Graph + A cubical graph with 8 nodes and 12 edges + + References + ---------- + .. [1] https://en.wikipedia.org/wiki/Cube#Cubical_graph + + """ description = [ "adjacencylist", "Platonic Cubical Graph", @@ -239,14 +306,55 @@ def cubical_graph(create_using=None): def desargues_graph(create_using=None): - """Return the Desargues graph.""" + """ + Returns the Desargues Graph + + The Desargues Graph is a non-planar, distance-transitive cubic graph + with 20 nodes and 30 edges [1]_. + It is a symmetric graph. It can be represented in LCF notation + as [5,-5,9,-9]^5 [2]_. + + Parameters + ---------- + create_using : NetworkX graph constructor, optional (default=nx.Graph) + Graph type to create. If graph instance, then cleared before populated. + + Returns + ------- + G : networkx Graph + Desargues Graph with 20 nodes and 30 edges + + References + ---------- + .. [1] https://en.wikipedia.org/wiki/Desargues_graph + .. 
[2] https://mathworld.wolfram.com/DesarguesGraph.html + """ G = LCF_graph(20, [5, -5, 9, -9], 5, create_using) G.name = "Desargues Graph" return G def diamond_graph(create_using=None): - """Returns the Diamond graph.""" + """ + Returns the Diamond graph + + The Diamond Graph is planar undirected graph with 4 nodes and 5 edges. + It is also sometimes known as the double triangle graph or kite graph [1]_. + + Parameters + ---------- + create_using : NetworkX graph constructor, optional (default=nx.Graph) + Graph type to create. If graph instance, then cleared before populated. + + Returns + ------- + G : networkx Graph + Diamond Graph with 4 nodes and 5 edges + + References + ---------- + .. [1] https://mathworld.wolfram.com/DiamondGraph.html + """ description = [ "adjacencylist", "Diamond Graph", @@ -258,17 +366,58 @@ def diamond_graph(create_using=None): def dodecahedral_graph(create_using=None): - """Return the Platonic Dodecahedral graph.""" + """ + Returns the Platonic Dodecahedral graph. + + The dodecahedral graph has 20 nodes and 30 edges. The skeleton of the + dodecahedron forms a graph. It is one of 5 Platonic graphs [1]_. + It can be described in LCF notation as: + ``[10, 7, 4, -4, -7, 10, -4, 7, -7, 4]^2`` [2]_. + + Parameters + ---------- + create_using : NetworkX graph constructor, optional (default=nx.Graph) + Graph type to create. If graph instance, then cleared before populated. + + Returns + ------- + G : networkx Graph + Dodecahedral Graph with 20 nodes and 30 edges + + References + ---------- + .. [1] https://en.wikipedia.org/wiki/Regular_dodecahedron#Dodecahedral_graph + .. [2] https://mathworld.wolfram.com/DodecahedralGraph.html + + """ G = LCF_graph(20, [10, 7, 4, -4, -7, 10, -4, 7, -7, 4], 2, create_using) G.name = "Dodecahedral Graph" return G def frucht_graph(create_using=None): - """Returns the Frucht Graph. + """ + Returns the Frucht Graph. The Frucht Graph is the smallest cubical graph whose - automorphism group consists only of the identity element. + automorphism group consists only of the identity element [1]_. + It has 12 nodes and 18 edges and no nontrivial symmetries. + It is planar and Hamiltonian [2]_. + + Parameters + ---------- + create_using : NetworkX graph constructor, optional (default=nx.Graph) + Graph type to create. If graph instance, then cleared before populated. + + Returns + ------- + G : networkx Graph + Frucht Graph with 12 nodes and 18 edges + + References + ---------- + .. [1] https://en.wikipedia.org/wiki/Frucht_graph + .. [2] https://mathworld.wolfram.com/FruchtGraph.html """ G = cycle_graph(7, create_using) @@ -293,14 +442,66 @@ def frucht_graph(create_using=None): def heawood_graph(create_using=None): - """Return the Heawood graph, a (3,6) cage.""" + """ + Returns the Heawood Graph, a (3,6) cage. + + The Heawood Graph is an undirected graph with 14 nodes and 21 edges, + named after Percy John Heawood [1]_. + It is cubic symmetric, nonplanar, Hamiltonian, and can be represented + in LCF notation as ``[5,-5]^7`` [2]_. + It is the unique (3,6)-cage: the regular cubic graph of girth 6 with + minimal number of vertices [3]_. + + Parameters + ---------- + create_using : NetworkX graph constructor, optional (default=nx.Graph) + Graph type to create. If graph instance, then cleared before populated. + + Returns + ------- + G : networkx Graph + Heawood Graph with 14 nodes and 21 edges + + References + ---------- + .. [1] https://en.wikipedia.org/wiki/Heawood_graph + .. [2] https://mathworld.wolfram.com/HeawoodGraph.html + .. 
[3] https://www.win.tue.nl/~aeb/graphs/Heawood.html + + """ G = LCF_graph(14, [5, -5], 7, create_using) G.name = "Heawood Graph" return G def hoffman_singleton_graph(): - """Return the Hoffman-Singleton Graph.""" + """ + Returns the Hoffman-Singleton Graph. + + The Hoffman–Singleton graph is a symmetrical undirected graph + with 50 nodes and 175 edges. + All indices lie in ``Z % 5``: that is, the integers mod 5 [1]_. + It is the only regular graph of vertex degree 7, diameter 2, and girth 5. + It is the unique (7,5)-cage graph and Moore graph, and contains many + copies of the Petersen graph [2]_. + + Returns + ------- + G : networkx Graph + Hoffman–Singleton Graph with 50 nodes and 175 edges + + Notes + ----- + Constructed from pentagon and pentagram as follows: Take five pentagons $P_h$ + and five pentagrams $Q_i$ . Join vertex $j$ of $P_h$ to vertex $hΒ·i+j$ of $Q_i$ [3]_. + + References + ---------- + .. [1] https://blogs.ams.org/visualinsight/2016/02/01/hoffman-singleton-graph/ + .. [2] https://mathworld.wolfram.com/Hoffman-SingletonGraph.html + .. [3] https://en.wikipedia.org/wiki/Hoffman%E2%80%93Singleton_graph + + """ G = nx.Graph() for i in range(5): for j in range(5): @@ -316,7 +517,26 @@ def hoffman_singleton_graph(): def house_graph(create_using=None): - """Returns the House graph (square with triangle on top).""" + """ + Returns the House graph (square with triangle on top) + + The house graph is a simple undirected graph with + 5 nodes and 6 edges [1]_. + + Parameters + ---------- + create_using : NetworkX graph constructor, optional (default=nx.Graph) + Graph type to create. If graph instance, then cleared before populated. + + Returns + ------- + G : networkx Graph + House graph in the form of a square with a triangle on top + + References + ---------- + .. [1] https://mathworld.wolfram.com/HouseGraph.html + """ description = [ "adjacencylist", "House Graph", @@ -328,7 +548,27 @@ def house_graph(create_using=None): def house_x_graph(create_using=None): - """Returns the House graph with a cross inside the house square.""" + """ + Returns the House graph with a cross inside the house square. + + The House X-graph is the House graph plus the two edges connecting diagonally + opposite vertices of the square base. It is also one of the two graphs + obtained by removing two edges from the pentatope graph [1]_. + + Parameters + ---------- + create_using : NetworkX graph constructor, optional (default=nx.Graph) + Graph type to create. If graph instance, then cleared before populated. + + Returns + ------- + G : networkx Graph + House graph with diagonal vertices connected + + References + ---------- + .. [1] https://mathworld.wolfram.com/HouseGraph.html + """ description = [ "adjacencylist", "House-with-X-inside Graph", @@ -340,7 +580,27 @@ def house_x_graph(create_using=None): def icosahedral_graph(create_using=None): - """Returns the Platonic Icosahedral graph.""" + """ + Returns the Platonic Icosahedral graph. + + The icosahedral graph has 12 nodes and 30 edges. It is a Platonic graph + whose nodes have the connectivity of the icosahedron. It is undirected, + regular and Hamiltonian [1]_. + + Parameters + ---------- + create_using : NetworkX graph constructor, optional (default=nx.Graph) + Graph type to create. If graph instance, then cleared before populated. + + Returns + ------- + G : networkx Graph + Icosahedral graph with 12 nodes and 30 edges. + + References + ---------- + .. 
[1] https://mathworld.wolfram.com/IcosahedralGraph.html + """ description = [ "adjacencylist", "Platonic Icosahedral Graph", @@ -366,14 +626,33 @@ def icosahedral_graph(create_using=None): def krackhardt_kite_graph(create_using=None): """ - Return the Krackhardt Kite Social Network. + Returns the Krackhardt Kite Social Network. A 10 actor social network introduced by David Krackhardt - to illustrate: degree, betweenness, centrality, closeness, etc. + to illustrate different centrality measures [1]_. + + Parameters + ---------- + create_using : NetworkX graph constructor, optional (default=nx.Graph) + Graph type to create. If graph instance, then cleared before populated. + + Returns + ------- + G : networkx Graph + Krackhardt Kite graph with 10 nodes and 18 edges + + Notes + ----- The traditional labeling is: Andre=1, Beverley=2, Carol=3, Diane=4, Ed=5, Fernando=6, Garth=7, Heather=8, Ike=9, Jane=10. + References + ---------- + .. [1] Krackhardt, David. "Assessing the Political Landscape: Structure, + Cognition, and Power in Organizations". Administrative Science Quarterly. + 35 (2): 342–369. doi:10.2307/2393394. JSTOR 2393394. June 1990. + """ description = [ "adjacencylist", @@ -397,14 +676,59 @@ def krackhardt_kite_graph(create_using=None): def moebius_kantor_graph(create_using=None): - """Returns the Moebius-Kantor graph.""" + """ + Returns the Moebius-Kantor graph. + + The MΓΆbius-Kantor graph is the cubic symmetric graph on 16 nodes. + Its LCF notation is [5,-5]^8, and it is isomorphic to the generalized + Petersen graph [1]_. + + Parameters + ---------- + create_using : NetworkX graph constructor, optional (default=nx.Graph) + Graph type to create. If graph instance, then cleared before populated. + + Returns + ------- + G : networkx Graph + Moebius-Kantor graph + + References + ---------- + .. [1] https://en.wikipedia.org/wiki/M%C3%B6bius%E2%80%93Kantor_graph + + """ G = LCF_graph(16, [5, -5], 8, create_using) G.name = "Moebius-Kantor Graph" return G def octahedral_graph(create_using=None): - """Returns the Platonic Octahedral graph.""" + """ + Returns the Platonic Octahedral graph. + + The octahedral graph is the 6-node 12-edge Platonic graph having the + connectivity of the octahedron [1]_. If 6 couples go to a party, + and each person shakes hands with every person except his or her partner, + then this graph describes the set of handshakes that take place; + for this reason it is also called the cocktail party graph [2]_. + + Parameters + ---------- + create_using : NetworkX graph constructor, optional (default=nx.Graph) + Graph type to create. If graph instance, then cleared before populated. + + Returns + ------- + G : networkx Graph + Octahedral graph + + References + ---------- + .. [1] https://mathworld.wolfram.com/OctahedralGraph.html + .. [2] https://en.wikipedia.org/wiki/Tur%C3%A1n_graph#Special_cases + + """ description = [ "adjacencylist", "Platonic Octahedral Graph", @@ -416,14 +740,51 @@ def octahedral_graph(create_using=None): def pappus_graph(): - """Return the Pappus graph.""" + """ + Returns the Pappus graph. + + The Pappus graph is a cubic symmetric distance-regular graph with 18 nodes + and 27 edges. It is Hamiltonian and can be represented in LCF notation as + [5,7,-7,7,-7,-5]^3 [1]_. + + Returns + ------- + G : networkx Graph + Pappus graph + + References + ---------- + .. 
[1] https://en.wikipedia.org/wiki/Pappus_graph + """ G = LCF_graph(18, [5, 7, -7, 7, -7, -5], 3) G.name = "Pappus Graph" return G def petersen_graph(create_using=None): - """Returns the Petersen graph.""" + """ + Returns the Petersen graph. + + The Peterson graph is a cubic, undirected graph with 10 nodes and 15 edges [1]_. + Julius Petersen constructed the graph as the smallest counterexample + against the claim that a connected bridgeless cubic graph + has an edge colouring with three colours [2]_. + + Parameters + ---------- + create_using : NetworkX graph constructor, optional (default=nx.Graph) + Graph type to create. If graph instance, then cleared before populated. + + Returns + ------- + G : networkx Graph + Petersen graph + + References + ---------- + .. [1] https://en.wikipedia.org/wiki/Petersen_graph + .. [2] https://www.win.tue.nl/~aeb/drg/graphs/Petersen.html + """ description = [ "adjacencylist", "Petersen Graph", @@ -449,9 +810,23 @@ def sedgewick_maze_graph(create_using=None): """ Return a small maze with a cycle. - This is the maze used in Sedgewick,3rd Edition, Part 5, Graph - Algorithms, Chapter 18, e.g. Figure 18.2 and following. + This is the maze used in Sedgewick, 3rd Edition, Part 5, Graph + Algorithms, Chapter 18, e.g. Figure 18.2 and following [1]_. Nodes are numbered 0,..,7 + + Parameters + ---------- + create_using : NetworkX graph constructor, optional (default=nx.Graph) + Graph type to create. If graph instance, then cleared before populated. + + Returns + ------- + G : networkx Graph + Small maze with a cycle + + References + ---------- + .. [1] Figure 18.2, Chapter 18, Graph Algorithms (3rd Ed), Sedgewick """ G = empty_graph(0, create_using) G.add_nodes_from(range(8)) @@ -464,14 +839,58 @@ def sedgewick_maze_graph(create_using=None): def tetrahedral_graph(create_using=None): - """Return the 3-regular Platonic Tetrahedral graph.""" + """ + Returns the 3-regular Platonic Tetrahedral graph. + + Tetrahedral graph has 4 nodes and 6 edges. It is a + special case of the complete graph, K4, and wheel graph, W4. + It is one of the 5 platonic graphs [1]_. + + Parameters + ---------- + create_using : NetworkX graph constructor, optional (default=nx.Graph) + Graph type to create. If graph instance, then cleared before populated. + + Returns + ------- + G : networkx Graph + Tetrahedral Grpah + + References + ---------- + .. [1] https://en.wikipedia.org/wiki/Tetrahedron#Tetrahedral_graph + + """ G = complete_graph(4, create_using) G.name = "Platonic Tetrahedral graph" return G def truncated_cube_graph(create_using=None): - """Returns the skeleton of the truncated cube.""" + """ + Returns the skeleton of the truncated cube. + + The truncated cube is an Archimedean solid with 14 regular + faces (6 octagonal and 8 triangular), 36 edges and 24 nodes [1]_. + The truncated cube is created by truncating (cutting off) the tips + of the cube one third of the way into each edge [2]_. + + Parameters + ---------- + create_using : NetworkX graph constructor, optional (default=nx.Graph) + Graph type to create. If graph instance, then cleared before populated. + + Returns + ------- + G : networkx Graph + Skeleton of the truncated cube + + References + ---------- + .. [1] https://en.wikipedia.org/wiki/Truncated_cube + .. 
[2] https://www.coolmath.com/reference/polyhedra-truncated-cube + + """ description = [ "adjacencylist", "Truncated Cube Graph", @@ -508,7 +927,28 @@ def truncated_cube_graph(create_using=None): def truncated_tetrahedron_graph(create_using=None): - """Returns the skeleton of the truncated Platonic tetrahedron.""" + """ + Returns the skeleton of the truncated Platonic tetrahedron. + + The truncated tetrahedron is an Archimedean solid with 4 regular hexagonal faces, + 4 equilateral triangle faces, 12 nodes and 18 edges. It can be constructed by truncating + all 4 vertices of a regular tetrahedron at one third of the original edge length [1]_. + + Parameters + ---------- + create_using : NetworkX graph constructor, optional (default=nx.Graph) + Graph type to create. If graph instance, then cleared before populated. + + Returns + ------- + G : networkx Graph + Skeleton of the truncated tetrahedron + + References + ---------- + .. [1] https://en.wikipedia.org/wiki/Truncated_tetrahedron + + """ G = path_graph(12, create_using) # G.add_edges_from([(1,3),(1,10),(2,7),(4,12),(5,12),(6,8),(9,11)]) G.add_edges_from([(0, 2), (0, 9), (1, 6), (3, 11), (4, 11), (5, 7), (8, 10)]) @@ -517,7 +957,30 @@ def truncated_tetrahedron_graph(create_using=None): def tutte_graph(create_using=None): - """Returns the Tutte graph.""" + """ + Returns the Tutte graph. + + The Tutte graph is a cubic polyhedral, non-Hamiltonian graph. It has + 46 nodes and 69 edges. + It is a counterexample to Tait's conjecture that every 3-regular polyhedron + has a Hamiltonian cycle. + It can be realized geometrically from a tetrahedron by multiply truncating + three of its vertices [1]_. + + Parameters + ---------- + create_using : NetworkX graph constructor, optional (default=nx.Graph) + Graph type to create. If graph instance, then cleared before populated. + + Returns + ------- + G : networkx Graph + Tutte graph + + References + ---------- + .. [1] https://en.wikipedia.org/wiki/Tutte_graph + """ description = [ "adjacencylist", "Tutte's Graph",
Graph generators in the `small` module need docstrings There are a number of named graphs defined in `networkx.generators.small` that are not well-documented: https://github.com/networkx/networkx/blob/766becc132eda01c9d0d458de2d75ab763f83493/networkx/generators/small.py#L183-L575 Each of these functions should have a more extensive docstring with at least a `Parameters`, `Returns`, and `Examples` section, and preferably also a reference to the corresponding paper or wiki page that describes the graph.
Hi, I have started working on this issue. I noticed that the `LCF_graph` function's docstring has all the details that you have specified in the issue. Can I use the same format for the rest of the functions?

That doesn't provide the nice headers that sphinx can make use of. I would suggest something more like the function [full_rary_tree](https://github.com/networkx/networkx/blob/766becc132eda01c9d0d458de2d75ab763f83493/networkx/generators/classic.py#L67) in `generators/classic.py`. Thanks!
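For concreteness, a minimal template in the suggested style, modelled on the docstrings that the merged patch above adds (the wording here is illustrative):

```python
def bull_graph(create_using=None):
    """Returns the Bull Graph.

    Parameters
    ----------
    create_using : NetworkX graph constructor, optional (default=nx.Graph)
        Graph type to create. If graph instance, then cleared before populated.

    Returns
    -------
    G : networkx Graph
        A bull graph with 5 nodes and 5 edges.

    References
    ----------
    .. [1] https://en.wikipedia.org/wiki/Bull_graph
    """
```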
2021-12-24T05:38:24
networkx/networkx
5,264
networkx__networkx-5264
[ "5215" ]
1620568e36702b1cfeaf1c0277b167b6cb93e48d
diff --git a/networkx/drawing/nx_pylab.py b/networkx/drawing/nx_pylab.py --- a/networkx/drawing/nx_pylab.py +++ b/networkx/drawing/nx_pylab.py @@ -27,7 +27,6 @@ random_layout, planar_layout, ) -import warnings __all__ = [ "draw", @@ -1192,7 +1191,11 @@ def draw_networkx_edge_labels( def draw_circular(G, **kwargs): - """Draw the graph G with a circular layout. + """Draw the graph `G` with a circular layout. + + This is a convenience function equivalent to:: + + nx.draw(G, pos=nx.circular_layout(G), **kwargs) Parameters ---------- @@ -1200,15 +1203,33 @@ def draw_circular(G, **kwargs): A networkx graph kwargs : optional keywords - See networkx.draw_networkx() for a description of optional keywords, - with the exception of the pos parameter which is not used by this - function. + See `draw_networkx` for a description of optional keywords. + + Notes + ----- + The layout is computed each time this function is called. For + repeated drawing it is much more efficient to call + `~networkx.drawing.layout.circular_layout` directly and reuse the result:: + + >>> G = nx.complete_graph(5) + >>> pos = nx.circular_layout(G) + >>> nx.draw(G, pos=pos) # Draw the original graph + >>> # Draw a subgraph, reusing the same node positions + >>> nx.draw(G.subgraph([0, 1, 2]), pos=pos, node_color="red") + + See Also + -------- + :func:`~networkx.drawing.layout.circular_layout` """ draw(G, circular_layout(G), **kwargs) def draw_kamada_kawai(G, **kwargs): - """Draw the graph G with a Kamada-Kawai force-directed layout. + """Draw the graph `G` with a Kamada-Kawai force-directed layout. + + This is a convenience function equivalent to:: + + nx.draw(G, pos=nx.kamada_kawai_layout(G), **kwargs) Parameters ---------- @@ -1216,15 +1237,34 @@ def draw_kamada_kawai(G, **kwargs): A networkx graph kwargs : optional keywords - See networkx.draw_networkx() for a description of optional keywords, - with the exception of the pos parameter which is not used by this - function. + See `draw_networkx` for a description of optional keywords. + + Notes + ----- + The layout is computed each time this function is called. + For repeated drawing it is much more efficient to call + `~networkx.drawing.layout.kamada_kawai_layout` directly and reuse the + result:: + + >>> G = nx.complete_graph(5) + >>> pos = nx.kamada_kawai_layout(G) + >>> nx.draw(G, pos=pos) # Draw the original graph + >>> # Draw a subgraph, reusing the same node positions + >>> nx.draw(G.subgraph([0, 1, 2]), pos=pos, node_color="red") + + See Also + -------- + :func:`~networkx.drawing.layout.kamada_kawai_layout` """ draw(G, kamada_kawai_layout(G), **kwargs) def draw_random(G, **kwargs): - """Draw the graph G with a random layout. + """Draw the graph `G` with a random layout. + + This is a convenience function equivalent to:: + + nx.draw(G, pos=nx.random_layout(G), **kwargs) Parameters ---------- @@ -1232,20 +1272,36 @@ def draw_random(G, **kwargs): A networkx graph kwargs : optional keywords - See networkx.draw_networkx() for a description of optional keywords, - with the exception of the pos parameter which is not used by this - function. + See `draw_networkx` for a description of optional keywords. + + Notes + ----- + The layout is computed each time this function is called. 
+ For repeated drawing it is much more efficient to call + `~networkx.drawing.layout.random_layout` directly and reuse the result:: + + >>> G = nx.complete_graph(5) + >>> pos = nx.random_layout(G) + >>> nx.draw(G, pos=pos) # Draw the original graph + >>> # Draw a subgraph, reusing the same node positions + >>> nx.draw(G.subgraph([0, 1, 2]), pos=pos, node_color="red") + + See Also + -------- + :func:`~networkx.drawing.layout.random_layout` """ draw(G, random_layout(G), **kwargs) def draw_spectral(G, **kwargs): - """Draw the graph G with a spectral 2D layout. + """Draw the graph `G` with a spectral 2D layout. + + This is a convenience function equivalent to:: - Using the unnormalized Laplacian, the layout shows possible clusters of - nodes which are an approximation of the ratio cut. The positions are the - entries of the second and third eigenvectors corresponding to the - ascending eigenvalues starting from the second one. + nx.draw(G, pos=nx.spectral_layout(G), **kwargs) + + For more information about how node positions are determined, see + `~networkx.drawing.layout.spectral_layout`. Parameters ---------- @@ -1253,15 +1309,33 @@ def draw_spectral(G, **kwargs): A networkx graph kwargs : optional keywords - See networkx.draw_networkx() for a description of optional keywords, - with the exception of the pos parameter which is not used by this - function. + See `draw_networkx` for a description of optional keywords. + + Notes + ----- + The layout is computed each time this function is called. + For repeated drawing it is much more efficient to call + `~networkx.drawing.layout.spectral_layout` directly and reuse the result:: + + >>> G = nx.complete_graph(5) + >>> pos = nx.spectral_layout(G) + >>> nx.draw(G, pos=pos) # Draw the original graph + >>> # Draw a subgraph, reusing the same node positions + >>> nx.draw(G.subgraph([0, 1, 2]), pos=pos, node_color="red") + + See Also + -------- + :func:`~networkx.drawing.layout.spectral_layout` """ draw(G, spectral_layout(G), **kwargs) def draw_spring(G, **kwargs): - """Draw the graph G with a spring layout. + """Draw the graph `G` with a spring layout. + + This is a convenience function equivalent to:: + + nx.draw(G, pos=nx.spring_layout(G), **kwargs) Parameters ---------- @@ -1269,34 +1343,76 @@ def draw_spring(G, **kwargs): A networkx graph kwargs : optional keywords - See networkx.draw_networkx() for a description of optional keywords, - with the exception of the pos parameter which is not used by this - function. + See `draw_networkx` for a description of optional keywords. + + Notes + ----- + `~networkx.drawing.layout.spring_layout` is also the default layout for + `draw`, so this function is equivalent to `draw`. + + The layout is computed each time this function is called. + For repeated drawing it is much more efficient to call + `~networkx.drawing.layout.spring_layout` directly and reuse the result:: + + >>> G = nx.complete_graph(5) + >>> pos = nx.spring_layout(G) + >>> nx.draw(G, pos=pos) # Draw the original graph + >>> # Draw a subgraph, reusing the same node positions + >>> nx.draw(G.subgraph([0, 1, 2]), pos=pos, node_color="red") + + See Also + -------- + draw + :func:`~networkx.drawing.layout.spring_layout` """ draw(G, spring_layout(G), **kwargs) -def draw_shell(G, **kwargs): - """Draw networkx graph with shell layout. +def draw_shell(G, nlist=None, **kwargs): + """Draw networkx graph `G` with shell layout. 
+ + This is a convenience function equivalent to:: + + nx.draw(G, pos=nx.shell_layout(G, nlist=nlist), **kwargs) Parameters ---------- G : graph A networkx graph + nlist : list of list of nodes, optional + A list containing lists of nodes representing the shells. + Default is `None`, meaning all nodes are in a single shell. + See `~networkx.drawing.layout.shell_layout` for details. + kwargs : optional keywords - See networkx.draw_networkx() for a description of optional keywords, - with the exception of the pos parameter which is not used by this - function. + See `draw_networkx` for a description of optional keywords. + + Notes + ----- + The layout is computed each time this function is called. + For repeated drawing it is much more efficient to call + `~networkx.drawing.layout.shell_layout` directly and reuse the result:: + + >>> G = nx.complete_graph(5) + >>> pos = nx.shell_layout(G) + >>> nx.draw(G, pos=pos) # Draw the original graph + >>> # Draw a subgraph, reusing the same node positions + >>> nx.draw(G.subgraph([0, 1, 2]), pos=pos, node_color="red") + + See Also + -------- + :func:`~networkx.drawing.layout.shell_layout` """ - nlist = kwargs.get("nlist", None) - if nlist is not None: - del kwargs["nlist"] draw(G, shell_layout(G, nlist=nlist), **kwargs) def draw_planar(G, **kwargs): - """Draw a planar networkx graph with planar layout. + """Draw a planar networkx graph `G` with planar layout. + + This is a convenience function equivalent to:: + + nx.draw(G, pos=nx.planar_layout(G), **kwargs) Parameters ---------- @@ -1304,9 +1420,28 @@ def draw_planar(G, **kwargs): A planar networkx graph kwargs : optional keywords - See networkx.draw_networkx() for a description of optional keywords, - with the exception of the pos parameter which is not used by this - function. + See `draw_networkx` for a description of optional keywords. + + Raises + ------ + NetworkXException + When `G` is not planar + + Notes + ----- + The layout is computed each time this function is called. + For repeated drawing it is much more efficient to call + `~networkx.drawing.layout.planar_layout` directly and reuse the result:: + + >>> G = nx.path_graph(5) + >>> pos = nx.planar_layout(G) + >>> nx.draw(G, pos=pos) # Draw the original graph + >>> # Draw a subgraph, reusing the same node positions + >>> nx.draw(G.subgraph([0, 1, 2]), pos=pos, node_color="red") + + See Also + -------- + :func:`~networkx.drawing.layout.planar_layout` """ draw(G, planar_layout(G), **kwargs)
Documentation for `draw_shell` does not explain all arguments

### Current Behavior
I was trying to debug code which assigns `nlist` in `draw_shell`, and I didn't understand it until I read the source code. The documentation for `draw_shell` says that the keyword arguments match `nx.draw_networkx`, but `nlist` is not described there. Instead it is described in `nx.shell_layout`.

### Expected Behavior
Documentation that explains `nlist`.

### Environment
NetworkX version: 2.6.2
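For context, a minimal sketch of how the `nlist` keyword is used in practice; the graph and shell grouping below are made up purely for illustration:

```python
import matplotlib.pyplot as plt
import networkx as nx

G = nx.complete_graph(8)
# Each inner list is one concentric shell of the layout; nlist is
# forwarded to shell_layout, where it is actually documented.
shells = [[0], [1, 2, 3], [4, 5, 6, 7]]
nx.draw_shell(G, nlist=shells, with_labels=True)
plt.show()
```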
2022-01-15T21:47:46
networkx/networkx
5283
networkx__networkx-5283
[ "5266" ]
4734efe5db568aa1ac231a9ef66ec302eed6e059
diff --git a/networkx/conftest.py b/networkx/conftest.py --- a/networkx/conftest.py +++ b/networkx/conftest.py @@ -226,6 +226,9 @@ def set_warnings(): "ignore", category=DeprecationWarning, message="\nfind_cores" ) warnings.filterwarnings("ignore", category=FutureWarning, message="attr_matrix") + warnings.filterwarnings( + "ignore", category=DeprecationWarning, message=r"\n\nmake_small_.*" + ) @pytest.fixture(autouse=True) diff --git a/networkx/generators/small.py b/networkx/generators/small.py --- a/networkx/generators/small.py +++ b/networkx/generators/small.py @@ -63,8 +63,26 @@ def make_small_undirected_graph(graph_description, create_using=None): """ Return a small undirected graph described by graph_description. + .. deprecated:: 2.7 + + make_small_undirected_graph is deprecated and will be removed in + version 3.0. If "ltype" == "adjacencylist", convert the list to a dict + and use `from_dict_of_lists`. If "ltype" == "edgelist", use + `from_edgelist`. + See make_small_graph. """ + import warnings + + msg = ( + "\n\nmake_small_undirected_graph is deprecated and will be removed in " + "version 3.0.\n" + "If `ltype` == 'adjacencylist', convert `xlist` to a dict and use\n" + "`from_dict_of_lists` instead.\n" + "If `ltype` == 'edgelist', use `from_edgelist` instead." + ) + warnings.warn(msg, category=DeprecationWarning, stacklevel=2) + G = empty_graph(0, create_using) if G.is_directed(): raise NetworkXError("Directed Graph not supported") @@ -75,6 +93,13 @@ def make_small_graph(graph_description, create_using=None): """ Return the small graph described by graph_description. + .. deprecated:: 2.7 + + make_small_graph is deprecated and will be removed in + version 3.0. If "ltype" == "adjacencylist", convert the list to a dict + and use `from_dict_of_lists`. If "ltype" == "edgelist", use + `from_edgelist`. + graph_description is a list of the form [ltype,name,n,xlist] Here ltype is one of "adjacencylist" or "edgelist", @@ -105,6 +130,15 @@ def make_small_graph(graph_description, create_using=None): Use the create_using argument to choose the graph class/type. """ + import warnings + + msg = ( + "\n\nmake_small_graph is deprecated and will be removed in version 3.0.\n" + "If `ltype` == 'adjacencylist', convert `xlist` to a dict and use\n" + "`from_dict_of_lists` instead.\n" + "If `ltype` == 'edgelist', use `from_edgelist` instead." + ) + warnings.warn(msg, category=DeprecationWarning, stacklevel=2) if graph_description[0] not in ("adjacencylist", "edgelist"): raise NetworkXError("ltype must be either adjacencylist or edgelist")
Deprecate `make_small_graph` and `make_small_undirected_graph` from `small` submodule?

These two functions are defined in the `generators.small` submodule, and are used internally within the module to generate many of the graphs therein. `make_small_graph` is publicly exposed in the namespace, while `make_small_undirected_graph` is not.

Looking at the implementation however, it seems that these functions insert an unnecessary layer of indirection (i.e. a `graph_description` argument that is actually a custom structured list that must be unpacked/parsed) and complexity (see the node remapping) on top of functionality that is already provided by e.g. `nx.from_edgelist` and `nx.from_dict_of_lists`.

AFAICT, `make_small_graph` doesn't add anything that can't be achieved in a more straightforward manner with `from_edgelist` and `from_dict_of_lists` - would there be appetite for deprecating it (and `make_small_undirected_graph`) and refactoring the `small` module to use more "canonical" NetworkX graph creation functions?
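A minimal sketch of the suggested replacements, using a hypothetical three-node graph; both constructions produce the same undirected graph:

```python
import networkx as nx

# "adjacencylist" descriptions become a dict-of-lists...
adjacency = {0: [1, 2], 1: [2], 2: []}
G1 = nx.from_dict_of_lists(adjacency)

# ...while "edgelist" descriptions map directly onto from_edgelist.
edges = [(0, 1), (0, 2), (1, 2)]
G2 = nx.from_edgelist(edges)

assert set(G1.edges()) == set(G2.edges())
```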
+1 on deprecation. I'm not sure if we still need `make_small_graph`; `from_dict_of_lists` is definitely more canonical NetworkX IMO.

PS: Most of the `make_small_graph` code hasn't been touched in 17 years!

Awesome, thanks for taking a look @MridulS. Any deprecation would depend on #5267 going in, so I'll wait for that. I'll add a milestone to this one though so it doesn't get lost!
2022-01-27T05:58:10
networkx/networkx
5287
networkx__networkx-5287
[ "5286" ]
0cc70051fa0a979b1f1eab4af5b6587a6ebf8334
diff --git a/networkx/readwrite/json_graph/tree.py b/networkx/readwrite/json_graph/tree.py --- a/networkx/readwrite/json_graph/tree.py +++ b/networkx/readwrite/json_graph/tree.py @@ -75,6 +75,8 @@ def tree_data(G, root, attrs=None, ident="id", children="children"): raise TypeError("G is not a tree.") if not G.is_directed(): raise TypeError("G is not directed.") + if not nx.is_weakly_connected(G): + raise TypeError("G is not weakly connected.") # NOTE: to be removed in 3.0 if attrs is not None:
diff --git a/networkx/readwrite/json_graph/tests/test_tree.py b/networkx/readwrite/json_graph/tests/test_tree.py --- a/networkx/readwrite/json_graph/tests/test_tree.py +++ b/networkx/readwrite/json_graph/tests/test_tree.py @@ -35,6 +35,11 @@ def test_exceptions(): with pytest.raises(TypeError, match="is not directed."): G = nx.path_graph(3) tree_data(G, 0) + with pytest.raises(TypeError, match="is not weakly connected."): + G = nx.path_graph(3, create_using=nx.DiGraph) + G.add_edge(2, 0) + G.add_node(3) + tree_data(G, 0) with pytest.raises(nx.NetworkXError, match="must be different."): G = nx.MultiDiGraph() G.add_node(0)
`json_graph.tree_data` can cause maximum recursion depth error.

### Current Behavior
Currently the algorithm compares `n_nodes` with `n_edges` to check if `G` is a tree.
https://github.com/networkx/networkx/blob/0cc70051fa0a979b1f1eab4af5b6587a6ebf8334/networkx/readwrite/json_graph/tree.py#L74-L75

This check can be bypassed with specific inputs and cause a recursion error.

### Expected Behavior
The code should check whether there are cycles with `root` as the source and raise an exception. Another possible fix would be to check if the graph is not weakly connected.

### Steps to Reproduce
```python
>>> import networkx as nx
>>> G = nx.DiGraph([(1, 2), (2, 3), (3, 1)])
>>> G.add_node(4)
>>> data = nx.json_graph.tree_data(G, 1)
RecursionError: maximum recursion depth exceeded
```

### Environment
Python version: 3.8.10
NetworkX version: 2.7rc1.dev0
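One possible caller-side guard against the reproduction above (the patch implements the second suggested fix by checking weak connectivity inside `tree_data` itself):

```python
import networkx as nx
from networkx.readwrite import json_graph

G = nx.DiGraph([(1, 2), (2, 3), (3, 1)])
G.add_node(4)

# n_nodes == n_edges + 1 holds here (4 nodes, 3 edges), yet G is not a
# tree: it contains a cycle plus an isolated node.
if nx.is_weakly_connected(G) and nx.is_arborescence(G):
    data = json_graph.tree_data(G, 1)
else:
    print("G is not a tree rooted at node 1; tree_data would recurse forever.")
```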
2022-01-28T11:20:36
networkx/networkx
5,305
networkx__networkx-5305
[ "5257", "5817" ]
98060487ad192918cfc2415fc0b5c309ff2d3565
diff --git a/networkx/algorithms/distance_measures.py b/networkx/algorithms/distance_measures.py --- a/networkx/algorithms/distance_measures.py +++ b/networkx/algorithms/distance_measures.py @@ -14,7 +14,7 @@ ] -def _extrema_bounding(G, compute="diameter"): +def _extrema_bounding(G, compute="diameter", weight=None): """Compute requested extreme distance metric of undirected graph G Computation is based on smart lower and upper bounds, and in practice @@ -33,6 +33,26 @@ def _extrema_bounding(G, compute="diameter"): "center" for the set of nodes with eccentricity equal to the radius, "eccentricities" for the maximum distance from each node to all other nodes in G + weight : string, function, or None + If this is a string, then edge weights will be accessed via the + edge attribute with this key (that is, the weight of the edge + joining `u` to `v` will be ``G.edges[u, v][weight]``). If no + such edge attribute exists, the weight of the edge is assumed to + be one. + + If this is a function, the weight of an edge is the value + returned by the function. The function must accept exactly three + positional arguments: the two endpoints of an edge and the + dictionary of edge attributes for that edge. The function must + return a number. + + If this is None, every edge has weight/distance/cost 1. + + Weights stored as floating point values can lead to small round-off + errors in distances. Use integer weights to avoid this. + + Weights should be positive, since they are distances. + Returns ------- value : value of the requested metric @@ -46,25 +66,26 @@ def _extrema_bounding(G, compute="diameter"): If the graph consists of multiple components ValueError If `compute` is not one of "diameter", "radius", "periphery", "center", or "eccentricities". + Notes ----- - This algorithm was proposed in the following papers: + This algorithm was proposed in [1]_ and discussed further in [2]_ and [3]_. - F.W. Takes and W.A. Kosters, Determining the Diameter of Small World - Networks, in Proceedings of the 20th ACM International Conference on - Information and Knowledge Management (CIKM 2011), pp. 1191-1196, 2011. - doi: https://doi.org/10.1145/2063576.2063748 - - F.W. Takes and W.A. Kosters, Computing the Eccentricity Distribution of - Large Graphs, Algorithms 6(1): 100-118, 2013. - doi: https://doi.org/10.3390/a6010100 - - M. Borassi, P. Crescenzi, M. Habib, W.A. Kosters, A. Marino and F.W. Takes, - Fast Graph Diameter and Radius BFS-Based Computation in (Weakly Connected) - Real-World Graphs, Theoretical Computer Science 586: 59-80, 2015. - doi: https://doi.org/10.1016/j.tcs.2015.02.033 + References + ---------- + .. [1] F. W. Takes, W. A. Kosters, + "Determining the diameter of small world networks." + Proceedings of the 20th ACM international conference on Information and knowledge management, 2011 + https://dl.acm.org/doi/abs/10.1145/2063576.2063748 + .. [2] F. W. Takes, W. A. Kosters, + "Computing the Eccentricity Distribution of Large Graphs." + Algorithms, 2013 + https://www.mdpi.com/1999-4893/6/1/100 + .. [3] M. Borassi, P. Crescenzi, M. Habib, W. A. Kosters, A. Marino, F. W. Takes, + "Fast diameter and radius BFS-based computation in (weakly connected) real-world graphs: With an application to the six degrees of separation games. 
" + Theoretical Computer Science, 2015 + https://www.sciencedirect.com/science/article/pii/S0304397515001644 """ - # init variables degrees = dict(G.degree()) # start with the highest degree node minlowernode = max(degrees, key=degrees.get) @@ -91,7 +112,8 @@ def _extrema_bounding(G, compute="diameter"): high = not high # get distances from/to current node and derive eccentricity - dist = dict(nx.single_source_shortest_path_length(G, current)) + dist = nx.shortest_path_length(G, source=current, weight=weight) + if len(dist) != N: msg = "Cannot compute metric because graph is not connected." raise nx.NetworkXError(msg) @@ -200,20 +222,20 @@ def _extrema_bounding(G, compute="diameter"): # return the correct value of the requested metric if compute == "diameter": return maxlower - elif compute == "radius": + if compute == "radius": return minupper - elif compute == "periphery": + if compute == "periphery": p = [v for v in G if ecc_lower[v] == maxlower] return p - elif compute == "center": + if compute == "center": c = [v for v in G if ecc_upper[v] == minupper] return c - elif compute == "eccentricities": + if compute == "eccentricities": return ecc_lower return None -def eccentricity(G, v=None, sp=None): +def eccentricity(G, v=None, sp=None, weight=None): """Returns the eccentricity of nodes in G. The eccentricity of a node v is the maximum distance from v to @@ -230,6 +252,26 @@ def eccentricity(G, v=None, sp=None): sp : dict of dicts, optional All pairs shortest path lengths as a dictionary of dictionaries + weight : string, function, or None (default=None) + If this is a string, then edge weights will be accessed via the + edge attribute with this key (that is, the weight of the edge + joining `u` to `v` will be ``G.edges[u, v][weight]``). If no + such edge attribute exists, the weight of the edge is assumed to + be one. + + If this is a function, the weight of an edge is the value + returned by the function. The function must accept exactly three + positional arguments: the two endpoints of an edge and the + dictionary of edge attributes for that edge. The function must + return a number. + + If this is None, every edge has weight/distance/cost 1. + + Weights stored as floating point values can lead to small round-off + errors in distances. Use integer weights to avoid this. + + Weights should be positive, since they are distances. + Returns ------- ecc : dictionary @@ -252,11 +294,11 @@ def eccentricity(G, v=None, sp=None): # else: # assume v is a container of nodes # nodes=v order = G.order() - e = {} for n in G.nbunch_iter(v): if sp is None: - length = nx.single_source_shortest_path_length(G, n) + length = nx.shortest_path_length(G, source=n, weight=weight) + L = len(length) else: try: @@ -278,11 +320,10 @@ def eccentricity(G, v=None, sp=None): if v in G: return e[v] # return single value - else: - return e + return e -def diameter(G, e=None, usebounds=False): +def diameter(G, e=None, usebounds=False, weight=None): """Returns the diameter of the graph G. The diameter is the maximum eccentricity. @@ -295,6 +336,26 @@ def diameter(G, e=None, usebounds=False): e : eccentricity dictionary, optional A precomputed dictionary of eccentricities. + weight : string, function, or None + If this is a string, then edge weights will be accessed via the + edge attribute with this key (that is, the weight of the edge + joining `u` to `v` will be ``G.edges[u, v][weight]``). If no + such edge attribute exists, the weight of the edge is assumed to + be one. 
+ + If this is a function, the weight of an edge is the value + returned by the function. The function must accept exactly three + positional arguments: the two endpoints of an edge and the + dictionary of edge attributes for that edge. The function must + return a number. + + If this is None, every edge has weight/distance/cost 1. + + Weights stored as floating point values can lead to small round-off + errors in distances. Use integer weights to avoid this. + + Weights should be positive, since they are distances. + Returns ------- d : integer @@ -311,13 +372,13 @@ def diameter(G, e=None, usebounds=False): eccentricity """ if usebounds is True and e is None and not G.is_directed(): - return _extrema_bounding(G, compute="diameter") + return _extrema_bounding(G, compute="diameter", weight=weight) if e is None: - e = eccentricity(G) + e = eccentricity(G, weight=weight) return max(e.values()) -def periphery(G, e=None, usebounds=False): +def periphery(G, e=None, usebounds=False, weight=None): """Returns the periphery of the graph G. The periphery is the set of nodes with eccentricity equal to the diameter. @@ -330,6 +391,26 @@ def periphery(G, e=None, usebounds=False): e : eccentricity dictionary, optional A precomputed dictionary of eccentricities. + weight : string, function, or None + If this is a string, then edge weights will be accessed via the + edge attribute with this key (that is, the weight of the edge + joining `u` to `v` will be ``G.edges[u, v][weight]``). If no + such edge attribute exists, the weight of the edge is assumed to + be one. + + If this is a function, the weight of an edge is the value + returned by the function. The function must accept exactly three + positional arguments: the two endpoints of an edge and the + dictionary of edge attributes for that edge. The function must + return a number. + + If this is None, every edge has weight/distance/cost 1. + + Weights stored as floating point values can lead to small round-off + errors in distances. Use integer weights to avoid this. + + Weights should be positive, since they are distances. + Returns ------- p : list @@ -347,15 +428,15 @@ def periphery(G, e=None, usebounds=False): center """ if usebounds is True and e is None and not G.is_directed(): - return _extrema_bounding(G, compute="periphery") + return _extrema_bounding(G, compute="periphery", weight=weight) if e is None: - e = eccentricity(G) + e = eccentricity(G, weight=weight) diameter = max(e.values()) p = [v for v in e if e[v] == diameter] return p -def radius(G, e=None, usebounds=False): +def radius(G, e=None, usebounds=False, weight=None): """Returns the radius of the graph G. The radius is the minimum eccentricity. @@ -368,6 +449,26 @@ def radius(G, e=None, usebounds=False): e : eccentricity dictionary, optional A precomputed dictionary of eccentricities. + weight : string, function, or None + If this is a string, then edge weights will be accessed via the + edge attribute with this key (that is, the weight of the edge + joining `u` to `v` will be ``G.edges[u, v][weight]``). If no + such edge attribute exists, the weight of the edge is assumed to + be one. + + If this is a function, the weight of an edge is the value + returned by the function. The function must accept exactly three + positional arguments: the two endpoints of an edge and the + dictionary of edge attributes for that edge. The function must + return a number. + + If this is None, every edge has weight/distance/cost 1. 
+ + Weights stored as floating point values can lead to small round-off + errors in distances. Use integer weights to avoid this. + + Weights should be positive, since they are distances. + Returns ------- r : integer @@ -381,13 +482,13 @@ def radius(G, e=None, usebounds=False): """ if usebounds is True and e is None and not G.is_directed(): - return _extrema_bounding(G, compute="radius") + return _extrema_bounding(G, compute="radius", weight=weight) if e is None: - e = eccentricity(G) + e = eccentricity(G, weight=weight) return min(e.values()) -def center(G, e=None, usebounds=False): +def center(G, e=None, usebounds=False, weight=None): """Returns the center of the graph G. The center is the set of nodes with eccentricity equal to radius. @@ -400,6 +501,26 @@ def center(G, e=None, usebounds=False): e : eccentricity dictionary, optional A precomputed dictionary of eccentricities. + weight : string, function, or None + If this is a string, then edge weights will be accessed via the + edge attribute with this key (that is, the weight of the edge + joining `u` to `v` will be ``G.edges[u, v][weight]``). If no + such edge attribute exists, the weight of the edge is assumed to + be one. + + If this is a function, the weight of an edge is the value + returned by the function. The function must accept exactly three + positional arguments: the two endpoints of an edge and the + dictionary of edge attributes for that edge. The function must + return a number. + + If this is None, every edge has weight/distance/cost 1. + + Weights stored as floating point values can lead to small round-off + errors in distances. Use integer weights to avoid this. + + Weights should be positive, since they are distances. + Returns ------- c : list @@ -417,9 +538,9 @@ def center(G, e=None, usebounds=False): periphery """ if usebounds is True and e is None and not G.is_directed(): - return _extrema_bounding(G, compute="center") + return _extrema_bounding(G, compute="center", weight=weight) if e is None: - e = eccentricity(G) + e = eccentricity(G, weight=weight) radius = min(e.values()) p = [v for v in e if e[v] == radius] return p @@ -560,14 +681,23 @@ def resistance_distance(G, nodeA, nodeB, weight=None, invert_weight=True): Notes ----- - Overview discussion: - * https://en.wikipedia.org/wiki/Resistance_distance - * http://mathworld.wolfram.com/ResistanceDistance.html - - Additional details: - Vaya Sapobi Samui Vos, β€œMethods for determining the effective resistance,” M.S., - Mathematisch Instituut, Universiteit Leiden, Leiden, Netherlands, 2016 - Available: `Link to thesis <https://www.universiteitleiden.nl/binaries/content/assets/science/mi/scripties/master/vos_vaya_master.pdf>`_ + Overviews are provided in [1]_ and [2]_. Additional details on computational + methods, proofs of properties, and corresponding MATLAB codes are provided + in [3]_. + + References + ---------- + .. [1] Wikipedia + "Resistance distance." + https://en.wikipedia.org/wiki/Resistance_distance + .. [2] E. W. Weisstein + "Resistance Distance." + MathWorld--A Wolfram Web Resource + https://mathworld.wolfram.com/ResistanceDistance.html + .. [3] V. S. S. Vos, + "Methods for determining the effective resistance." + Mestrado, Mathematisch Instituut Universiteit Leiden, 2016 + https://www.universiteitleiden.nl/binaries/content/assets/science/mi/scripties/master/vos_vaya_master.pdf """ import numpy as np import scipy as sp
diff --git a/networkx/algorithms/tests/test_distance_measures.py b/networkx/algorithms/tests/test_distance_measures.py --- a/networkx/algorithms/tests/test_distance_measures.py +++ b/networkx/algorithms/tests/test_distance_measures.py @@ -97,6 +97,228 @@ def test_eccentricity_directed_weakly_connected(self): nx.eccentricity(DG) +class TestWeightedDistance: + def setup_method(self): + G = nx.Graph() + G.add_edge(0, 1, weight=0.6, cost=0.6, high_cost=6) + G.add_edge(0, 2, weight=0.2, cost=0.2, high_cost=2) + G.add_edge(2, 3, weight=0.1, cost=0.1, high_cost=1) + G.add_edge(2, 4, weight=0.7, cost=0.7, high_cost=7) + G.add_edge(2, 5, weight=0.9, cost=0.9, high_cost=9) + G.add_edge(1, 5, weight=0.3, cost=0.3, high_cost=3) + self.G = G + self.weight_fn = lambda v, u, e: 2 + + def test_eccentricity_weight_None(self): + assert nx.eccentricity(self.G, 1, weight=None) == 3 + e = nx.eccentricity(self.G, weight=None) + assert e[1] == 3 + + e = nx.eccentricity(self.G, v=1, weight=None) + assert e == 3 + + # This behavior changed in version 1.8 (ticket #739) + e = nx.eccentricity(self.G, v=[1, 1], weight=None) + assert e[1] == 3 + e = nx.eccentricity(self.G, v=[1, 2], weight=None) + assert e[1] == 3 + + def test_eccentricity_weight_attr(self): + assert nx.eccentricity(self.G, 1, weight="weight") == 1.5 + e = nx.eccentricity(self.G, weight="weight") + assert ( + e + == nx.eccentricity(self.G, weight="cost") + != nx.eccentricity(self.G, weight="high_cost") + ) + assert e[1] == 1.5 + + e = nx.eccentricity(self.G, v=1, weight="weight") + assert e == 1.5 + + # This behavior changed in version 1.8 (ticket #739) + e = nx.eccentricity(self.G, v=[1, 1], weight="weight") + assert e[1] == 1.5 + e = nx.eccentricity(self.G, v=[1, 2], weight="weight") + assert e[1] == 1.5 + + def test_eccentricity_weight_fn(self): + assert nx.eccentricity(self.G, 1, weight=self.weight_fn) == 6 + e = nx.eccentricity(self.G, weight=self.weight_fn) + assert e[1] == 6 + + e = nx.eccentricity(self.G, v=1, weight=self.weight_fn) + assert e == 6 + + # This behavior changed in version 1.8 (ticket #739) + e = nx.eccentricity(self.G, v=[1, 1], weight=self.weight_fn) + assert e[1] == 6 + e = nx.eccentricity(self.G, v=[1, 2], weight=self.weight_fn) + assert e[1] == 6 + + def test_diameter_weight_None(self): + assert nx.diameter(self.G, weight=None) == 3 + + def test_diameter_weight_attr(self): + assert ( + nx.diameter(self.G, weight="weight") + == nx.diameter(self.G, weight="cost") + == 1.6 + != nx.diameter(self.G, weight="high_cost") + ) + + def test_diameter_weight_fn(self): + assert nx.diameter(self.G, weight=self.weight_fn) == 6 + + def test_radius_weight_None(self): + assert pytest.approx(nx.radius(self.G, weight=None)) == 2 + + def test_radius_weight_attr(self): + assert ( + pytest.approx(nx.radius(self.G, weight="weight")) + == pytest.approx(nx.radius(self.G, weight="cost")) + == 0.9 + != nx.radius(self.G, weight="high_cost") + ) + + def test_radius_weight_fn(self): + assert nx.radius(self.G, weight=self.weight_fn) == 4 + + def test_periphery_weight_None(self): + for v in set(nx.periphery(self.G, weight=None)): + assert nx.eccentricity(self.G, v, weight=None) == nx.diameter( + self.G, weight=None + ) + + def test_periphery_weight_attr(self): + periphery = set(nx.periphery(self.G, weight="weight")) + assert ( + periphery + == set(nx.periphery(self.G, weight="cost")) + == set(nx.periphery(self.G, weight="high_cost")) + ) + for v in periphery: + assert ( + nx.eccentricity(self.G, v, weight="high_cost") + != nx.eccentricity(self.G, v, 
weight="weight") + == nx.eccentricity(self.G, v, weight="cost") + == nx.diameter(self.G, weight="weight") + == nx.diameter(self.G, weight="cost") + != nx.diameter(self.G, weight="high_cost") + ) + assert nx.eccentricity(self.G, v, weight="high_cost") == nx.diameter( + self.G, weight="high_cost" + ) + + def test_periphery_weight_fn(self): + for v in set(nx.periphery(self.G, weight=self.weight_fn)): + assert nx.eccentricity(self.G, v, weight=self.weight_fn) == nx.diameter( + self.G, weight=self.weight_fn + ) + + def test_center_weight_None(self): + for v in set(nx.center(self.G, weight=None)): + assert pytest.approx(nx.eccentricity(self.G, v, weight=None)) == nx.radius( + self.G, weight=None + ) + + def test_center_weight_attr(self): + center = set(nx.center(self.G, weight="weight")) + assert ( + center + == set(nx.center(self.G, weight="cost")) + != set(nx.center(self.G, weight="high_cost")) + ) + for v in center: + assert ( + nx.eccentricity(self.G, v, weight="high_cost") + != pytest.approx(nx.eccentricity(self.G, v, weight="weight")) + == pytest.approx(nx.eccentricity(self.G, v, weight="cost")) + == nx.radius(self.G, weight="weight") + == nx.radius(self.G, weight="cost") + != nx.radius(self.G, weight="high_cost") + ) + assert nx.eccentricity(self.G, v, weight="high_cost") == nx.radius( + self.G, weight="high_cost" + ) + + def test_center_weight_fn(self): + for v in set(nx.center(self.G, weight=self.weight_fn)): + assert nx.eccentricity(self.G, v, weight=self.weight_fn) == nx.radius( + self.G, weight=self.weight_fn + ) + + def test_bound_diameter_weight_None(self): + assert nx.diameter(self.G, usebounds=True, weight=None) == 3 + + def test_bound_diameter_weight_attr(self): + assert ( + nx.diameter(self.G, usebounds=True, weight="high_cost") + != nx.diameter(self.G, usebounds=True, weight="weight") + == nx.diameter(self.G, usebounds=True, weight="cost") + == 1.6 + != nx.diameter(self.G, usebounds=True, weight="high_cost") + ) + assert nx.diameter(self.G, usebounds=True, weight="high_cost") == nx.diameter( + self.G, usebounds=True, weight="high_cost" + ) + + def test_bound_diameter_weight_fn(self): + assert nx.diameter(self.G, usebounds=True, weight=self.weight_fn) == 6 + + def test_bound_radius_weight_None(self): + assert pytest.approx(nx.radius(self.G, usebounds=True, weight=None)) == 2 + + def test_bound_radius_weight_attr(self): + assert ( + nx.radius(self.G, usebounds=True, weight="high_cost") + != pytest.approx(nx.radius(self.G, usebounds=True, weight="weight")) + == pytest.approx(nx.radius(self.G, usebounds=True, weight="cost")) + == 0.9 + != nx.radius(self.G, usebounds=True, weight="high_cost") + ) + assert nx.radius(self.G, usebounds=True, weight="high_cost") == nx.radius( + self.G, usebounds=True, weight="high_cost" + ) + + def test_bound_radius_weight_fn(self): + assert nx.radius(self.G, usebounds=True, weight=self.weight_fn) == 4 + + def test_bound_periphery_weight_None(self): + result = {1, 3, 4} + assert set(nx.periphery(self.G, usebounds=True, weight=None)) == result + + def test_bound_periphery_weight_attr(self): + result = {4, 5} + assert ( + set(nx.periphery(self.G, usebounds=True, weight="weight")) + == set(nx.periphery(self.G, usebounds=True, weight="cost")) + == result + ) + + def test_bound_periphery_weight_fn(self): + result = {1, 3, 4} + assert ( + set(nx.periphery(self.G, usebounds=True, weight=self.weight_fn)) == result + ) + + def test_bound_center_weight_None(self): + result = {0, 2, 5} + assert set(nx.center(self.G, usebounds=True, weight=None)) == result + + def 
test_bound_center_weight_attr(self): + result = {0} + assert ( + set(nx.center(self.G, usebounds=True, weight="weight")) + == set(nx.center(self.G, usebounds=True, weight="cost")) + == result + ) + + def test_bound_center_weight_fn(self): + result = {0, 2, 5} + assert set(nx.center(self.G, usebounds=True, weight=self.weight_fn)) == result + + class TestResistanceDistance: @classmethod def setup_class(cls):
Distance_measures.py functions should work for weighted graphs too.

Many of the functions in `distance_measures.py` work for unweighted graphs, but have straightforward definitions for weighted graphs too. We should go through this module and upgrade each function to work for weighted graphs. It should be a straightforward fix with a keyword "weight" input which defaults to the current unweighted treatment. The `shortest_path` functions provide a nice interface for that to use as an example.

Held-Karp ascent failures due to optimize.linprog infeasibility

The recent scipy 1.9rc release caught some failures related to our use of `optimize.linprog` in the Held-Karp implementations. For example, the following tests are failing:

```
FAILED networkx/algorithms/approximation/tests/test_traveling_salesman.py::test_held_karp_ascent
FAILED networkx/algorithms/approximation/tests/test_traveling_salesman.py::test_ascent_method_asymmetric
FAILED networkx/algorithms/approximation/tests/test_traveling_salesman.py::test_ascent_method_asymmetric_2
FAILED networkx/algorithms/approximation/tests/test_traveling_salesman.py::test_held_karp_ascent_asymmetric_3
FAILED networkx/algorithms/approximation/tests/test_traveling_salesman.py::test_asadpour_real_world
FAILED networkx/algorithms/approximation/tests/test_traveling_salesman.py::test_asadpour_real_world_path
```

The specifics related to the release are addressed elsewhere (see scipy/scipy#16466 and #5816), but taking a closer look at these failures it turns out that an underlying problem is that the results from `optimize.linprog` are actually not successful (this was true for scipy 1.8 as well, but we only got warnings instead of exceptions). In each of these cases, the returned `OptimizeResult` has `success == False`. I didn't look closely at each case, but the `.message` attribute for at least one case indicates that redundancies in `A_eq` and `b_eq` make the problem infeasible. I'm not sure exactly what's going on and whether this has something to do with the problem formulations or maybe some problem in how we're using `optimize.linprog`.
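For the weighted-graphs issue, a sketch of the kind of upgrade described, routing distance queries through `nx.shortest_path_length`, whose `weight` keyword already dispatches between BFS and Dijkstra (the helper name is hypothetical; this is essentially the approach the patch takes):

```python
import networkx as nx

def weighted_eccentricity(G, v, weight="weight"):
    # weight=None gives BFS hop counts; a string key gives Dijkstra lengths.
    lengths = nx.shortest_path_length(G, source=v, weight=weight)
    if len(lengths) != len(G):
        raise nx.NetworkXError("Graph is not connected.")
    return max(lengths.values())

G = nx.Graph()
G.add_weighted_edges_from([(0, 1, 0.6), (0, 2, 0.2), (2, 3, 0.1)])
print(weighted_eccentricity(G, 0))               # 0.6 (direct edge 0-1)
print(weighted_eccentricity(G, 0, weight=None))  # 2 (hops from 0 to 3)
```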
To be clear, we mean `distance_measures.py`, right? Or is `distance_measurements.py` somewhere and I'm just not seeing it? Thanks --

Yes -- I mean `distance_measures.py`. I'll correct it by editing my statement.

Can I work on this issue?

Hey @dschult I would like to work on it. Thanks.

Hi All, I would also be interested in contributing to this.

It looks like there are a handful of functions to address - perhaps we can split the work among ourselves?

> Hi All, I would also be interested in contributing to this.
>
> It looks like there are a handful of functions to address - perhaps we can split the work among ourselves?

Sure, let's do it.

Yeah @nishant42491 @lucasmccabe :)

Hi guys, Are you still working actively on this? Otherwise I can also take over some work.

Hello, if you guys are still working on this and need an extra helping hand, do add me too!

Hello all! It's great to see so much interest in working on the project :) You can read more about the development processes on our website:

- [New Contributor FAQ](https://networkx.org/documentation/latest/developer/new_contributor_faq.html)
- [Contributor Guide](https://networkx.org/documentation/latest/developer/contribute.html)

On the point about assigning issues: we don't usually assign issues to anyone, so feel free to open a PR when you are ready. If you are worried it may conflict with what other people are working on, please note the method you are working on in this issue and open a PR. Even if you are stuck please do open a PR (you can keep it as a draft) as we can help you review and improve the code in the PR :)

Alright, so most of the functions actually just use the `eccentricity` function, or the `extrema_bounding` method in the case of `usebounds=True` (BTW this isn't in the documentation or the docstring). So I'm going to start with the `diameter` function just to see how everything works if no one objects.

I opened a draft PR #5298
2022-02-05T00:16:48
networkx/networkx
5315
networkx__networkx-5315
[ "5309" ]
e2fd02ccb0f3ef4a584f99204f4d5e1cc067cbb4
diff --git a/networkx/convert_matrix.py b/networkx/convert_matrix.py --- a/networkx/convert_matrix.py +++ b/networkx/convert_matrix.py @@ -706,7 +706,7 @@ def to_numpy_recarray(G, nodelist=None, dtype=None, order=None): dtype : NumPy data-type, optional A valid NumPy named dtype used to initialize the NumPy recarray. The data type names are assumed to be keys in the graph edge attribute - dictionary. + dictionary. The default is ``dtype([("weight", float)])``. order : {'C', 'F'}, optional Whether to store multidimensional data in C- or Fortran-contiguous
`to_numpy_recarray` throws `KeyError: weight`

Hello! I am running into trouble with `networkx` version 2.6.3 when running an extremely simple script:

    import networkx as nx
    c_50 = nx.complete_graph(50)
    nx.to_numpy_recarray(c_50)

This throws a `KeyError: 'weight'` in `convert_matrix.py`.
`to_numpy_recarray` expects to be passed a structured dtype with named fields, which are used to determine which attributes to use in constructing the adjacency matrix. If you don't supply a dtype, then it's assumed that `"weight"` is the attribute of interest (though this fact is not adequately documented). This explains the exception you're seeing.

In your particular case it's not clear why you'd want a recarray instead of an `ndarray`, given that your graph does not have edge attributes. You should probably be using `nx.to_numpy_array(c_50)` instead. If (for whatever reason) you *really* wanted a `recarray` (even though there are no named fields to access in the case above) you could do `nx.to_numpy_array(c_50).view(np.recarray)`.

That is useful to know! I was writing a function which relied on the `recarray` output and the unweighted graph was a case I wanted to handle. In the general case I guess `nx.to_numpy_array` makes the most sense.
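A small sketch of the structured-dtype form described above, with made-up attribute names; every edge must carry each named attribute, or the same `KeyError` appears:

```python
import networkx as nx

G = nx.Graph()
G.add_edge(0, 1, weight=7.0, cost=5.0)
G.add_edge(1, 2, weight=2.0, cost=9.0)

# Field names must be keys in every edge's attribute dictionary.
A = nx.to_numpy_recarray(G, dtype=[("weight", float), ("cost", float)])
print(A.weight)  # adjacency matrix built from the "weight" attribute
print(A.cost)    # adjacency matrix built from the "cost" attribute
```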
2022-02-10T16:30:58
networkx/networkx
5316
networkx__networkx-5316
[ "5294" ]
e2fd02ccb0f3ef4a584f99204f4d5e1cc067cbb4
diff --git a/networkx/drawing/nx_pylab.py b/networkx/drawing/nx_pylab.py --- a/networkx/drawing/nx_pylab.py +++ b/networkx/drawing/nx_pylab.py @@ -1130,6 +1130,13 @@ def draw_networkx_edge_labels( labels = {(u, v): d for u, v, d in G.edges(data=True)} else: labels = edge_labels + # Informative exception for multiedges + try: + (u, v), d = next(iter(labels.items())) + except ValueError as err: + raise nx.NetworkXError( + "draw_networkx_edge_labels does not support multiedges." + ) from err text_items = {} for (n1, n2), label in labels.items(): (x1, y1) = pos[n1]
diff --git a/networkx/drawing/tests/test_pylab.py b/networkx/drawing/tests/test_pylab.py --- a/networkx/drawing/tests/test_pylab.py +++ b/networkx/drawing/tests/test_pylab.py @@ -675,3 +675,18 @@ def test_edgelist_kwarg_not_ignored(): nx.draw(G, edgelist=[(0, 1), (1, 2)], ax=ax) # Exclude self-loop from edgelist assert not ax.patches plt.delaxes(ax) + + +def test_draw_networkx_edge_label_multiedge_exception(): + """ + draw_networkx_edge_labels should raise an informative error message when + the edge label includes keys + """ + exception_msg = "draw_networkx_edge_labels does not support multiedges" + G = nx.MultiGraph() + G.add_edge(0, 1, weight=10) + G.add_edge(0, 1, weight=20) + edge_labels = nx.get_edge_attributes(G, "weight") # Includes edge keys + pos = {n: (n, n) for n in G} + with pytest.raises(nx.NetworkXError, match=exception_msg): + nx.draw_networkx_edge_labels(G, pos, edge_labels=edge_labels)
draw_networkx_edge_labels examples don't work with MultiDiGraphs

The canonical examples I seem to be able to find for drawing edge labels look something like this:

```
edge_labels = nx.get_edge_attributes(G, 'title')
nx.draw(G)
nx.draw_networkx_edge_labels(G, edge_labels=edge_labels)
nx.draw_networkx(G)
plt.show()
```

### Current Behavior
When applied to a MultiDiGraph, the way in which this data is structured isn't supported by `draw_networkx_edge_labels`. You get something like the following exception:

```
    import matplotlib.pyplot as plt
    import numpy as np

        if ax is None:
            ax = plt.gca()
        if edge_labels is None:
            labels = {(u, v): d for u, v, d in G.edges(data=True)}
        else:
            labels = edge_labels
        text_items = {}
>       for (n1, n2), label in labels.items():
E       ValueError: too many values to unpack (expected 2)
```

This is because the edge attributes on a MultiDiGraph are a 3-tuple which includes the key.

### Expected Behavior
The `draw_networkx_edge_labels` function should handle both the 2-tuple for single graphs and the 3-tuple case for multi graphs.

### Environment
Python version: 3.9
You can transform the edge labels returned by `nx.get_edge_attributes` into the correct form with the following snippet:

```
edge_labels = {(u, v): label for (u, v, k), label in nx.get_edge_attributes(G, 'title').items()}
```

Good point - there isn't really much in the way of automatic support for multiedge visualization. It would be a very nice enhancement, but would likely require quite a bit of effort to figure out how to handle the positioning of multiple edges in a generic way.

As you've pointed out, one solution is to "reduce" multiedges into a single edge. It's important to note that the suggestion above implicitly sets the edge attribute to whatever the value is for the last edge in terms of insertion order. A more generic solution to multiedge reduction might look something like:

```python
>>> from collections import defaultdict
>>> all_edge_attrs = defaultdict(list)
>>> for u, v, val in G.edges(data="title"):
...     all_edge_attrs[(u, v)].append(val)
>>> edge_labels = {k: op(v) for k, v in all_edge_attrs.items()}
```

Where `op` in this case can be whatever reduction function you want, e.g. `sum`, `min`, `max`, `np.median`, etc.

In the absence of generic support for multiedge positioning/labeling, I think a more informative error message for this scenario would be an improvement.
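Putting the two pieces together, a self-contained sketch; summing the parallel-edge weights is just one choice of reduction, and the graph data here is hypothetical:

```python
import matplotlib.pyplot as plt
import networkx as nx

G = nx.MultiGraph()
G.add_edge(0, 1, weight=10)
G.add_edge(0, 1, weight=20)
G.add_edge(1, 2, weight=5)

# Collapse the (u, v, key) multiedges into one label per node pair.
labels = {}
for u, v, w in G.edges(data="weight"):
    labels[(u, v)] = labels.get((u, v), 0) + w

pos = nx.spring_layout(G, seed=42)
nx.draw(G, pos, with_labels=True)
nx.draw_networkx_edge_labels(G, pos, edge_labels=labels)
plt.show()
```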
2022-02-10T17:14:54
networkx/networkx
5354
networkx__networkx-5354
[ "5211" ]
42985ba7d9f768c32c651e3e73d4d98b46776f54
diff --git a/networkx/drawing/layout.py b/networkx/drawing/layout.py --- a/networkx/drawing/layout.py +++ b/networkx/drawing/layout.py @@ -957,14 +957,17 @@ def spiral_layout(G, scale=1, center=None, dim=2, resolution=0.35, equidistant=F Scale factor for positions. center : array-like or None Coordinate pair around which to center the layout. - dim : int + dim : int, default=2 Dimension of layout, currently only dim=2 is supported. Other dimension values result in a ValueError. - resolution : float + resolution : float, default=0.35 The compactness of the spiral layout returned. Lower values result in more compressed spiral layouts. - equidistant : bool - If True, nodes will be plotted equidistant from each other. + equidistant : bool, default=False + If True, nodes will be positioned equidistant from each other + by decreasing angle further from center. + If False, nodes will be positioned at equal angles + from each other by increasing separation further from center. Returns ------- @@ -1003,6 +1006,7 @@ def spiral_layout(G, scale=1, center=None, dim=2, resolution=0.35, equidistant=F chord = 1 step = 0.5 theta = resolution + theta += chord / (step * theta) for _ in range(len(G)): r = step * theta theta += chord / r
diff --git a/networkx/drawing/tests/test_layout.py b/networkx/drawing/tests/test_layout.py --- a/networkx/drawing/tests/test_layout.py +++ b/networkx/drawing/tests/test_layout.py @@ -371,6 +371,15 @@ def test_spiral_layout(self): distances_equidistant[1:], distances_equidistant[-1], atol=0.01 ) + def test_spiral_layout_equidistant(self): + G = nx.path_graph(10) + pos = nx.spiral_layout(G, equidistant=True) + # Extract individual node positions as an array + p = np.array(list(pos.values())) + # Elementwise-distance between node positions + dist = np.linalg.norm(p[1:] - p[:-1], axis=1) + assert np.allclose(np.diff(dist), 0, atol=1e-3) + def test_rescale_layout_dict(self): G = nx.empty_graph() vpos = nx.random_layout(G, center=(1, 1))
Problem with spiral layout with `equidistant=True`

`nx.spiral_layout` has an `equidistant` kwarg that is False by default. According to the docstring parameter description, this is supposed to enforce that the laid out nodes are all equidistant from one another when True. However, the iterative implementation is such that the node in the first iteration is handled differently from all the rest, resulting in the following behavior:

### Current Behavior
```python
>>> G = nx.path_graph(5)
>>> nx.draw(G, pos=nx.spiral_layout(G, equidistant=True))
```
![fi](https://user-images.githubusercontent.com/1268991/144682663-1655099a-b524-4cfc-a101-68f2ef97865a.png)

### Expected Behavior
The "first" node should (presumably) also be equidistant from its neighbor.

### Environment
Python version: Python 3.9.7
NetworkX version: 2.7rc1.dev0 (766becc1)

### Additional context
`spiral_layout` was added in #3534
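A quick numeric check of the reported behavior (this mirrors the regression test added in the test patch):

```python
import numpy as np
import networkx as nx

G = nx.path_graph(10)
pos = nx.spiral_layout(G, equidistant=True)
p = np.array(list(pos.values()))
# Distances between consecutive nodes along the spiral; before the fix
# the first gap differs noticeably from all the others.
dist = np.linalg.norm(p[1:] - p[:-1], axis=1)
print(dist)
```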
As part of this issue we should also add a paragraph to the docstring to describe what a spiral layout is and how it is affected by the `equidistant` option.
2022-02-22T12:23:26
networkx/networkx
5371
networkx__networkx-5371
[ "5367" ]
2b01a30d6967cc94a0f8caca2252bce7817b2b1c
diff --git a/networkx/lazy_imports.py b/networkx/lazy_imports.py --- a/networkx/lazy_imports.py +++ b/networkx/lazy_imports.py @@ -1,5 +1,6 @@ import importlib import importlib.util +import inspect import types import os import sys @@ -81,6 +82,24 @@ def __dir__(): return __getattr__, __dir__, list(__all__) +class DelayedImportErrorModule(types.ModuleType): + def __init__(self, frame_data, *args, **kwargs): + self.__frame_data = frame_data + super().__init__(*args, **kwargs) + + def __getattr__(self, x): + if x in ("__class__", "__file__", "__frame_data"): + super().__getattr__(x) + else: + fd = self.__frame_data + raise ModuleNotFoundError( + f"No module named '{fd['spec']}'\n\n" + "This error is lazily reported, having originally occured in\n" + f' File {fd["filename"]}, line {fd["lineno"]}, in {fd["function"]}\n\n' + f'----> {"".join(fd["code_context"]).strip()}' + ) + + def lazy_import(fullname): """Return a lazily imported proxy for a module or library. @@ -132,14 +151,18 @@ def myfunc(): spec = importlib.util.find_spec(fullname) if spec is None: - # module not found - construct a DelayedImportErrorModule - spec = importlib.util.spec_from_loader(fullname, loader=None) - module = importlib.util.module_from_spec(spec) - tmp_loader = importlib.machinery.SourceFileLoader(module, path=None) - loader = DelayedImportErrorLoader(tmp_loader) - loader.exec_module(module) - # dont add to sys.modules. The module wasn't found. - return module + try: + parent = inspect.stack()[1] + frame_data = { + "spec": fullname, + "filename": parent.filename, + "lineno": parent.lineno, + "function": parent.function, + "code_context": parent.code_context, + } + return DelayedImportErrorModule(frame_data, "DelayedImportErrorModule") + finally: + del parent module = importlib.util.module_from_spec(spec) sys.modules[fullname] = module @@ -148,24 +171,3 @@ def myfunc(): loader.exec_module(module) return module - - -class DelayedImportErrorLoader(importlib.util.LazyLoader): - def exec_module(self, module): - super().exec_module(module) - module.__class__ = DelayedImportErrorModule - - -class DelayedImportErrorModule(types.ModuleType): - def __getattribute__(self, attr): - """Trigger a ModuleNotFoundError upon attribute access""" - spec = super().__getattribute__("__spec__") - # allows isinstance and type functions to work without raising error - if attr in ["__class__"]: - return super().__getattribute__("__class__") - - raise ModuleNotFoundError( - f"Delayed Report: module named '{spec.name}' not found.\n" - "Reporting was Lazy -- delayed until module attributes accessed.\n" - f"Most likely, {spec.name} is not installed" - )
Error when importing networkx: "module 'importlib' has no attribute 'machinery'" When importing networkx, the error `AttributeError: module 'importlib' has no attribute 'machinery'` occurs. It seems like networkx is not importing `importlib.machinery`. ### Steps to Reproduce ``` $ sudo docker run -it --rm fedora:35 # dnf install -y python3 python3-pip # pip3 install networkx # python3 --version Python 3.10.0rc2 # python3 -c "import importlib; print(dir(importlib))" ['_RELOADING', '__all__', '__builtins__', '__cached__', '__doc__', '__file__', '__import__', '__loader__', '__name__', '__package__', '__path__', '__spec__', '_bootstrap', '_bootstrap_external', '_imp', '_pack_uint32', '_unpack_uint32', 'find_loader', 'import_module', 'invalidate_caches', 'reload', 'sys', 'warnings'] # python3 -c "import networkx" Traceback (most recent call last): File "<string>", line 1, in <module> File "/usr/local/lib/python3.10/site-packages/networkx/__init__.py", line 59, in <module> from networkx import utils File "/usr/local/lib/python3.10/site-packages/networkx/utils/__init__.py", line 1, in <module> from networkx.utils.misc import * File "/usr/local/lib/python3.10/site-packages/networkx/utils/misc.py", line 23, in <module> np = nx.lazy_import("numpy") File "/usr/local/lib/python3.10/site-packages/networkx/lazy_imports.py", line 138, in lazy_import tmp_loader = importlib.machinery.SourceFileLoader(module, path=None) AttributeError: module 'importlib' has no attribute 'machinery' # python3 -c "import importlib; import importlib.machinery; import networkx" ``` ### Environment Python version: Python 3.10.0rc2 NetworkX version: networkx-2.7
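A minimal illustration of the explicit-import fix discussed in the comments below; nothing here is networkx-specific:

```python
# Relying on "import importlib" alone can leave importlib.machinery
# unbound, depending on what else has been imported; importing the
# submodule explicitly makes the attribute access reliable.
import importlib
import importlib.machinery
import importlib.util

loader_cls = importlib.machinery.SourceFileLoader  # now always resolvable
print(loader_cls)
```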
Very strange - maybe something is wrong with the `python3` package in the fedora image? It seems weird that you're getting 3.10rc2 (without specifying that explicitly). FWIW I can't reproduce locally and CI never caught anything w/ Debian-based linux, macos, or windows. I think this is due to which modules are loaded by default, which I guess is a platform-specific setting. https://bugzilla.redhat.com/show_bug.cgi?id=2007664 Fedora 35: ``` $ python3 -c "import sys; print(sorted(sys.modules))" ['__main__', '_abc', '_codecs', '_collections_abc', '_frozen_importlib', '_frozen_importlib_external', '_imp', '_io', '_signal', '_sitebuiltins', '_stat', '_thread', '_warnings', '_weakref', 'abc', 'builtins', 'codecs', 'encodings', 'encodings.aliases', 'encodings.utf_8', 'genericpath', 'io', 'marshal', 'os', 'os.path', 'posix', 'posixpath', 'site', 'stat', 'sys', 'time', 'zipimport'] ``` Ubuntu 20.04: ``` $ python3 -c "import sys; print(sorted(sys.modules))" ['__main__', '_abc', '_bootlocale', '_codecs', '_collections', '_collections_abc', '_frozen_importlib', '_frozen_importlib_external', '_functools', '_heapq', '_imp', '_io', '_locale', '_operator', '_signal', '_sitebuiltins', '_stat', '_thread', '_warnings', '_weakref', 'abc', 'apport_python_hook', 'builtins', 'codecs', 'collections', 'contextlib', 'encodings', 'encodings.aliases', 'encodings.latin_1', 'encodings.utf_8', 'functools', 'genericpath', 'heapq', 'importlib', 'importlib._bootstrap', 'importlib._bootstrap_external', 'importlib.abc', 'importlib.machinery', 'importlib.util', 'io', 'itertools', 'keyword', 'marshal', 'mpl_toolkits', 'operator', 'os', 'os.path', 'posix', 'posixpath', 'reprlib', 'site', 'sitecustomize', 'stat', 'sys', 'time', 'types', 'warnings', 'zipimport'] ``` AFAIK `importlib.machinery` is part of the standard library and should be expected to be available in a Python installation. It seems like its absence might be an explicit decision in how Python is packaged in Fedora? I'm not sure there's a whole lot NetworkX can do about this, since the `SourceFileLoader` is an important component in the implementation of lazy loading in networkx It is available, just not always imported by default. I think all that's needed is to add the line `import importlib.machinery` to `lazy_imports.py`. Usually it's safest to explicitly import all used modules rather than relying on them to be imported automatically or as a side-effect from another library. For example, Python 3.9 on the current dev version of Ubuntu 22.04 doesn't include it by default either. ``` $ python3 -c "import importlib; importlib.machinery" Traceback (most recent call last): File "<string>", line 1, in <module> AttributeError: module 'importlib' has no attribute 'machinery' $ python3 -c "import importlib; import importlib.machinery; importlib.machinery" $ echo $? 0 ``` I had assumed that, since this never failed in CI nor in any developer's dev env that this was a problem with the Python packages in other distros... it appears that it's the opposite - the availability of `importlib.machinery` by default doesn't seem to be intentional: https://github.com/python/cpython/blob/df9f7597559b6256924fcd3a1c3dc24cd5c5edaf/Lib/importlib/__init__.py#L2 I think you're right @stevenengler , we need to add an explicit import of the `machinery` module. Thanks for reporting and the follow-up research! +1 person here to report on this issue causing problems with my CI. Commenting so that I get auto-notified when the issue closes. 
(btw, thanks for your development and continued maintenance of this great library!)

We plan to release a fix for this in 2.7.1 on Monday, March 7th. Thanks for the bug report.

FWIW I set up docker and I can't reproduce this in any of the official images mentioned above, i.e. `fedora:35`, `ubuntu:focal`, nor `ubuntu:jammy`. I'm not 100% convinced that this isn't some sort of python packaging problem in the images that are being used - the only evidence I've found to the contrary is the `__all__` I linked above from the `importlib` module, but that doesn't necessarily tell the whole story. The "fix" is straightforward of course, but I'm just curious if anyone has any insight as to whether (and if so) why it's necessary?

I can provide the following data points:

[GitHub CI failing](https://github.com/tdene/synth_opt_adders/runs/5384343915?check_suite_focus=true) with nearly the same trace as OP, using [this workflow](https://github.com/tdene/synth_opt_adders/blob/main/.github/workflows/config.yml). The workflow `runs-on: ubuntu-latest` and uses CPython 3.10.2.

On my personal machine with Python 3.8.10 and Ubuntu 20.04.4, I do not experience this issue, but I can replicate a very similar case:

```
~$ python3
Python 3.8.10 (default, Nov 26 2021, 20:14:08) [GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import importlib
>>> importlib.machinery
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: module 'importlib' has no attribute 'machinery'
>>> importlib.machinery
<module 'importlib.machinery' from '/usr/lib/python3.8/importlib/machinery.py'>
```

The same behavior persists with other built-in packages, such as urllib.request. Posts from [2015](https://web.archive.org/web/20160819073115/http://www.gossamer-threads.com/lists/python/python/1215580) and [2011](https://stackoverflow.com/questions/4871369/python-error-attributeerror-module-object-has-no-attribute) can be found discussing this.

So it seems related to choices made in `__init__.py`. I just learned this for the first time while writing this reply, but it looks like this is how Python behaves. Now, why this causes errors in GitHub's CI, but not in local CI or manual use, I cannot guess.

> [GitHub CI failing](https://github.com/tdene/synth_opt_adders/runs/5384343915?check_suite_focus=true) with nearly the same trace as OP, using [this workflow](https://github.com/tdene/synth_opt_adders/blob/main/.github/workflows/config.yml).

Note that in this workflow the install step is done with `python` and the test-running step is done with `python3`, which may be the source of this particular issue. What happens if you modify these lines to:

```bash
python3 -m pip install --upgrade pip
python3 -m pip install .
```

NetworkX uses `ubuntu-latest` in its own CI workflows and this `importlib.machinery` issue has never cropped up.

Some other interesting behaviour:

Which modules are loaded depends on whether you're in an interactive or non-interactive interpreter. For example on fedora:35, `machinery` is automatically added to the `importlib` namespace in an interactive interpreter, but not in a non-interactive one.
``` $ python3 -c "import importlib; importlib.machinery" Traceback (most recent call last): File "<string>", line 1, in <module> AttributeError: module 'importlib' has no attribute 'machinery' $ python3 Python 3.10.0 (default, Oct 4 2021, 00:00:00) [GCC 11.2.1 20210728 (Red Hat 11.2.1-1)] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import importlib >>> importlib.machinery <module 'importlib.machinery' from '/usr/lib64/python3.10/importlib/machinery.py'> ``` In a non-interactive interpreter, importing `importlib.util` loads `machinery` into the `importlib` namespace on Ubuntu 20.04, but not Fedora 35. Ubuntu 20.04: ``` $ python3 -c "import importlib; importlib.machinery" Traceback (most recent call last): File "<string>", line 1, in <module> AttributeError: module 'importlib' has no attribute 'machinery' $ python3 -c "import importlib; import importlib.util; importlib.machinery" $ echo $? 0 ``` Fedora 35: ``` $ python3 -c "import importlib; importlib.machinery" Traceback (most recent call last): File "<string>", line 1, in <module> AttributeError: module 'importlib' has no attribute 'machinery' $ python3 -c "import importlib; import importlib.util; importlib.machinery" Traceback (most recent call last): File "<string>", line 1, in <module> AttributeError: module 'importlib' has no attribute 'machinery' ```
2022-03-02T06:19:07
networkx/networkx
5394
networkx__networkx-5394
[ "5388" ]
c39511522b272bf65b6a415169cd48d1bf3ff5c1
diff --git a/networkx/algorithms/matching.py b/networkx/algorithms/matching.py
--- a/networkx/algorithms/matching.py
+++ b/networkx/algorithms/matching.py
@@ -227,28 +227,49 @@ def is_perfect_matching(G, matching):
 
 @not_implemented_for("multigraph")
 @not_implemented_for("directed")
-def min_weight_matching(G, maxcardinality=False, weight="weight"):
+def min_weight_matching(G, maxcardinality=None, weight="weight"):
     """Computing a minimum-weight maximal matching of G.
 
-    Use reciprocal edge weights with the maximum-weight algorithm.
+    Use the maximum-weight algorithm with edge weights subtracted
+    from the maximum weight of all edges.
 
     A matching is a subset of edges in which no node occurs more than once.
     The weight of a matching is the sum of the weights of its edges.
     A maximal matching cannot add more edges and still be a matching.
     The cardinality of a matching is the number of matched edges.
 
-    This method replaces the weights with their reciprocal and
-    then runs :func:`max_weight_matching`.
-    Read the documentation of max_weight_matching for more information.
+    This method replaces the edge weights with 1 plus the maximum edge weight
+    minus the original edge weight.
+
+    new_weight = (max_weight + 1) - edge_weight
+
+    then runs :func:`max_weight_matching` with the new weights.
+    The max weight matching with these new weights corresponds
+    to the min weight matching using the original weights.
+    Adding 1 to the max edge weight keeps all edge weights positive
+    and as integers if they started as integers.
+
+    You might worry that adding 1 to each weight would make the algorithm
+    favor matchings with more edges. But we use the parameter
+    `maxcardinality=True` in `max_weight_matching` to ensure that the
+    number of edges in the competing matchings are the same and thus
+    the optimum does not change due to changes in the number of edges.
+
+    Read the documentation of `max_weight_matching` for more information.
 
     Parameters
     ----------
     G : NetworkX graph
       Undirected graph
 
-    maxcardinality: bool, optional (default=False)
-       If maxcardinality is True, compute the maximum-cardinality matching
-       with minimum weight among all maximum-cardinality matchings.
+    maxcardinality: bool
+        .. deprecated:: 2.8
+            The `maxcardinality` parameter will be removed in v3.0.
+            It doesn't make sense to set it to False when looking for
+            a min weight matching because then we just return no edges.
+
+        If maxcardinality is True, compute the maximum-cardinality matching
+        with minimum weight among all maximum-cardinality matchings.
 
     weight: string, optional (default='weight')
        Edge data key corresponding to the edge weight.
@@ -258,15 +279,25 @@ def min_weight_matching(G, maxcardinality=None, weight="weight"):
     -------
     matching : set
         A minimal weight matching of the graph.
+
+    See Also
+    --------
+    max_weight_matching
     """
+    if maxcardinality not in (True, None):
+        raise nx.NetworkXError(
+            "The argument maxcardinality does not make sense "
+            "in the context of minimum weight matchings."
+            "It is deprecated and will be removed in v3.0."
+        )
     if len(G.edges) == 0:
-        return max_weight_matching(G, maxcardinality, weight)
+        return max_weight_matching(G, maxcardinality=True, weight=weight)
     G_edges = G.edges(data=weight, default=1)
-    min_weight = min(w for _, _, w in G_edges)
+    max_weight = 1 + max(w for _, _, w in G_edges)
     InvG = nx.Graph()
-    edges = ((u, v, 1 / (1 + w - min_weight)) for u, v, w in G_edges)
+    edges = ((u, v, max_weight - w) for u, v, w in G_edges)
     InvG.add_weighted_edges_from(edges, weight=weight)
-    return max_weight_matching(InvG, maxcardinality, weight)
+    return max_weight_matching(InvG, maxcardinality=True, weight=weight)
 
 
 @not_implemented_for("multigraph")
diff --git a/networkx/algorithms/tests/test_matching.py b/networkx/algorithms/tests/test_matching.py
--- a/networkx/algorithms/tests/test_matching.py
+++ b/networkx/algorithms/tests/test_matching.py
@@ -19,15 +19,13 @@ def test_trivial1(self):
         assert nx.max_weight_matching(G) == set()
         assert nx.min_weight_matching(G) == set()
 
-    def test_trivial2(self):
-        """Self loop"""
+    def test_selfloop(self):
         G = nx.Graph()
         G.add_edge(0, 0, weight=100)
         assert nx.max_weight_matching(G) == set()
         assert nx.min_weight_matching(G) == set()
 
-    def test_trivial3(self):
-        """Single edge"""
+    def test_single_edge(self):
         G = nx.Graph()
         G.add_edge(0, 1)
         assert edges_equal(
@@ -37,8 +35,7 @@ def test_trivial3(self):
             nx.min_weight_matching(G), matching_dict_to_set({0: 1, 1: 0})
         )
 
-    def test_trivial4(self):
-        """Small graph"""
+    def test_two_path(self):
         G = nx.Graph()
         G.add_edge("one", "two", weight=10)
         G.add_edge("two", "three", weight=11)
@@ -51,8 +48,7 @@ def test_trivial4(self):
             matching_dict_to_set({"one": "two", "two": "one"}),
         )
 
-    def test_trivial5(self):
-        """Path"""
+    def test_path(self):
         G = nx.Graph()
         G.add_edge(1, 2, weight=5)
         G.add_edge(2, 3, weight=11)
@@ -70,8 +66,20 @@ def test_trivial5(self):
             nx.min_weight_matching(G, 1), matching_dict_to_set({1: 2, 3: 4})
         )
 
-    def test_trivial6(self):
-        """Small graph with arbitrary weight attribute"""
+    def test_square(self):
+        G = nx.Graph()
+        G.add_edge(1, 4, weight=2)
+        G.add_edge(2, 3, weight=2)
+        G.add_edge(1, 2, weight=1)
+        G.add_edge(3, 4, weight=4)
+        assert edges_equal(
+            nx.max_weight_matching(G), matching_dict_to_set({1: 2, 3: 4})
+        )
+        assert edges_equal(
+            nx.min_weight_matching(G), matching_dict_to_set({1: 4, 2: 3})
+        )
+
+    def test_edge_attribute_name(self):
         G = nx.Graph()
         G.add_edge("one", "two", weight=10, abcd=11)
         G.add_edge("two", "three", weight=11, abcd=10)
@@ -85,7 +93,6 @@ def test_trivial6(self):
         )
 
     def test_floating_point_weights(self):
-        """Floating point weights"""
        G = nx.Graph()
         G.add_edge(1, 2, weight=math.pi)
         G.add_edge(2, 3, weight=math.exp(1))
@@ -99,7 +106,6 @@ def test_floating_point_weights(self):
         )
 
     def test_negative_weights(self):
-        """Negative weights"""
         G = nx.Graph()
         G.add_edge(1, 2, weight=2)
         G.add_edge(1, 3, weight=-2)
min_weight_matching gives incorrect results

Consider the following graph:

```python
G = nx.Graph()
G.add_edge(1, 4, weight=2)
G.add_edge(2, 3, weight=2)
G.add_edge(1, 2, weight=1)
G.add_edge(3, 4, weight=4)
nx.min_weight_matching(G)
```

### Current Behavior

The program returns `{(2, 1), (4, 3)}`.

### Expected Behavior

There are two maximal matchings in the graph: (1,2)(3,4) with weight 1+4=5, and (1,4)(2,3) with weight 2+2=4. The minimum weight maximal matching should be (1,4)(2,3), which has weight 4.

### Steps to Reproduce

Run the code provided.

### Environment

Python version: 3.9
NetworkX version: 2.7.1

### Additional context

From the documentation and the code, `min_weight_matching` simply takes the reciprocal of the weights and calls `max_weight_matching`. This logic is faulty in this example: 1/1 + 1/4 > 1/2 + 1/2, so maximizing the reciprocal weights does not minimize the original weights.
Thanks! You are correct. That is a faulty way of thinking about turning a min problem into a max problem. Maximizing over sums of reciprocal weights does not minimize correctly. Other options include negating the weights (if the problem doesn't have trouble with negative weights -- which min_weight_matching does not... but shortest_path does). Or perhaps a more robust approach would be to subtract the weights from a value a little bigger than the maximum of all weights.

Looking into this bug, I'm pretty sure the keyword argument `maxcardinality` should be deprecated and set to True inside the function. What would `min_weight_matching` mean for `maxcardinality=False`? It would mean the empty matching, or maybe the matching that is just the lowest weight edge. I propose that we deprecate `maxcardinality`, and inside the function, where we call `max_weight_matching` on the newly weighted edges, we set `maxcardinality=True`.

Luckily this code was only merged Summer 2021, so not too many people will be inconvenienced by this deprecation.
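A minimal sketch of the proposed transform (the function name is illustrative; the merged patch above is the authoritative version):

```python
import networkx as nx

def min_weight_matching_sketch(G, weight="weight"):
    # Shift weights so that minimizing the original weights is the same
    # as maximizing the shifted weights over matchings of equal cardinality.
    max_w = 1 + max(w for _, _, w in G.edges(data=weight, default=1))
    H = nx.Graph()
    H.add_weighted_edges_from(
        (u, v, max_w - w) for u, v, w in G.edges(data=weight, default=1)
    )
    # maxcardinality=True forces competing matchings to have the same size,
    # so the constant shift cannot change which matching is optimal.
    return nx.max_weight_matching(H, maxcardinality=True)

G = nx.Graph()
G.add_weighted_edges_from([(1, 4, 2), (2, 3, 2), (1, 2, 1), (3, 4, 4)])
print(min_weight_matching_sketch(G))  # {(1, 4), (2, 3)}, up to edge orientation
```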
2022-03-15T22:32:49
networkx/networkx
5,422
networkx__networkx-5422
[ "5415" ]
f11068c0115ede0c7b631f771c10be7efd0b950b
diff --git a/networkx/algorithms/distance_measures.py b/networkx/algorithms/distance_measures.py
--- a/networkx/algorithms/distance_measures.py
+++ b/networkx/algorithms/distance_measures.py
@@ -18,6 +18,75 @@
 def extrema_bounding(G, compute="diameter"):
     """Compute requested extreme distance metric of undirected graph G
 
+    .. deprecated:: 2.8
+
+       extrema_bounding is deprecated and will be removed in NetworkX 3.0.
+       Use the corresponding distance measure with the `usebounds=True` option
+       instead.
+
+    Computation is based on smart lower and upper bounds, and in practice
+    linear in the number of nodes, rather than quadratic (except for some
+    border cases such as complete graphs or circle shaped graphs).
+
+    Parameters
+    ----------
+    G : NetworkX graph
+       An undirected graph
+
+    compute : string denoting the requesting metric
+       "diameter" for the maximal eccentricity value,
+       "radius" for the minimal eccentricity value,
+       "periphery" for the set of nodes with eccentricity equal to the diameter,
+       "center" for the set of nodes with eccentricity equal to the radius,
+       "eccentricities" for the maximum distance from each node to all other nodes in G
+
+    Returns
+    -------
+    value : value of the requested metric
+       int for "diameter" and "radius" or
+       list of nodes for "center" and "periphery" or
+       dictionary of eccentricity values keyed by node for "eccentricities"
+
+    Raises
+    ------
+    NetworkXError
+        If the graph consists of multiple components
+    ValueError
+        If `compute` is not one of "diameter", "radius", "periphery", "center", or "eccentricities".
+
+    Notes
+    -----
+    This algorithm was proposed in the following papers:
+
+    F.W. Takes and W.A. Kosters, Determining the Diameter of Small World
+    Networks, in Proceedings of the 20th ACM International Conference on
+    Information and Knowledge Management (CIKM 2011), pp. 1191-1196, 2011.
+    doi: https://doi.org/10.1145/2063576.2063748
+
+    F.W. Takes and W.A. Kosters, Computing the Eccentricity Distribution of
+    Large Graphs, Algorithms 6(1): 100-118, 2013.
+    doi: https://doi.org/10.3390/a6010100
+
+    M. Borassi, P. Crescenzi, M. Habib, W.A. Kosters, A. Marino and F.W. Takes,
+    Fast Graph Diameter and Radius BFS-Based Computation in (Weakly Connected)
+    Real-World Graphs, Theoretical Computer Science 586: 59-80, 2015.
+    doi: https://doi.org/10.1016/j.tcs.2015.02.033
+    """
+    import warnings
+
+    msg = "extrema_bounding is deprecated and will be removed in networkx 3.0\n"
+    # NOTE: _extrema_bounding does input checking, so it is skipped here
+    if compute in {"diameter", "radius", "periphery", "center"}:
+        msg += f"Use nx.{compute}(G, usebounds=True) instead."
+    if compute == "eccentricities":
+        msg += f"Use nx.eccentricity(G) instead."
+    warnings.warn(msg, DeprecationWarning, stacklevel=2)
+
+    return _extrema_bounding(G, compute=compute)
+
+
+def _extrema_bounding(G, compute="diameter"):
+    """Compute requested extreme distance metric of undirected graph G
+
     Computation is based on smart lower and upper bounds, and in practice
     linear in the number of nodes, rather than quadratic (except for some
     border cases such as complete graphs or circle shaped graphs).
@@ -296,7 +365,7 @@ def diameter(G, e=None, usebounds=False):
     eccentricity
     """
     if usebounds is True and e is None and not G.is_directed():
-        return extrema_bounding(G, compute="diameter")
+        return _extrema_bounding(G, compute="diameter")
     if e is None:
         e = eccentricity(G)
     return max(e.values())
@@ -326,7 +395,7 @@ def periphery(G, e=None, usebounds=False):
     center
     """
     if usebounds is True and e is None and not G.is_directed():
-        return extrema_bounding(G, compute="periphery")
+        return _extrema_bounding(G, compute="periphery")
     if e is None:
         e = eccentricity(G)
     diameter = max(e.values())
@@ -353,7 +422,7 @@ def radius(G, e=None, usebounds=False):
        Radius of graph
     """
     if usebounds is True and e is None and not G.is_directed():
-        return extrema_bounding(G, compute="radius")
+        return _extrema_bounding(G, compute="radius")
     if e is None:
         e = eccentricity(G)
     return min(e.values())
@@ -383,7 +452,7 @@ def center(G, e=None, usebounds=False):
     periphery
     """
     if usebounds is True and e is None and not G.is_directed():
-        return extrema_bounding(G, compute="center")
+        return _extrema_bounding(G, compute="center")
     if e is None:
         e = eccentricity(G)
     radius = min(e.values())
diff --git a/networkx/algorithms/tests/test_distance_measures.py b/networkx/algorithms/tests/test_distance_measures.py
--- a/networkx/algorithms/tests/test_distance_measures.py
+++ b/networkx/algorithms/tests/test_distance_measures.py
@@ -5,12 +5,22 @@
 
 import networkx as nx
 from networkx import convert_node_labels_to_integers as cnlti
+from networkx.algorithms.distance_measures import _extrema_bounding
 
 
-def test_extrema_bounding_invalid_compute_kwarg():
[email protected](
+    "compute", ("diameter", "radius", "periphery", "center", "eccentricities")
+)
+def test_extrema_bounding_deprecated(compute):
+    G = nx.complete_graph(3)
+    with pytest.deprecated_call():
+        nx.extrema_bounding(G, compute=compute)
+
+
+def test__extrema_bounding_invalid_compute_kwarg():
     G = nx.path_graph(3)
     with pytest.raises(ValueError, match="compute must be one of"):
-        nx.extrema_bounding(G, compute="spam")
+        _extrema_bounding(G, compute="spam")
 
 
 class TestDistance:
Is `extrema_bounding` intended to be a public function?

I mentioned this in #5402 but I think the point deserves its own issue:

> Looking at extrema_bounding more carefully, I wonder if it was ever intended to be called directly? It seems like it's mostly used internally for the supported measures mentioned in the docstring and extrema_bounding itself is untested!

There are currently two ways of computing the bounds for the various distance measures. Take the `diameter` metric for example:

1. Use `extrema_bounding` directly: `nx.extrema_bounding(G, compute="diameter")`
2. Call `diameter` with the `usebounds` kwarg: `nx.diameter(G, usebounds=True)`

The latter simply calls `extrema_bounding` under the hood:

https://github.com/networkx/networkx/blob/6a0b4faf09ec9d3d40ad93e2ec9b431d6bab5dc4/networkx/algorithms/distance_measures.py#L296-L297

To me, the API for option 2 is preferable to option 1. This, along with the fact that `extrema_bounding` is neither tested nor has an `Examples` section in the docstring, made me question whether it was ever intended to be publicly exposed (as `nx.extrema_bounding`) or only intended for internal use in `distance_measures`.

I was just curious whether there are explicit reasons for this and/or any other opinions. At first glance I would be tempted to make `extrema_bounding` private (deprecate it from the main nx namespace) and stick with the `usebounds` approach. Either way, `extrema_bounding` should be tested (and intended usage examples would be great if it stays public) and the `usebounds` kwarg needs to be documented (unless it's decided to remove that instead...)
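The two call styles under discussion, shown on a concrete graph (option 1 emits a `DeprecationWarning` once the patch above lands):

```python
import networkx as nx

G = nx.path_graph(10)

# Option 2: the bounds-based computation via the public metric function
print(nx.diameter(G, usebounds=True))               # 9

# Option 1: calling the helper directly (deprecated in 2.8)
print(nx.extrema_bounding(G, compute="diameter"))   # 9, plus a DeprecationWarning
```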
I'm pretty convinced that `extrema_bounding` should be a private function. Looking at the history, I don't think we/I looked very carefully at the naming convention for private/public functions at that time. If it wasn't in `__all__`, I thought of it as private.

It would be good to get the naming and deprecation changes done before v2.8, but we also need tests. Not sure if those are needed before v2.8, but if they are easy and quick, yes.

See also #5409
2022-03-25T03:27:41
networkx/networkx
5,430
networkx__networkx-5430
[ "5426" ]
72b1dca7d7d4d8bab519d55541d981d2f4f61365
diff --git a/networkx/conftest.py b/networkx/conftest.py
--- a/networkx/conftest.py
+++ b/networkx/conftest.py
@@ -233,6 +233,7 @@ def set_warnings():
         "ignore", category=DeprecationWarning, message="to_numpy_recarray"
     )
     warnings.filterwarnings("ignore", category=DeprecationWarning, message="info")
+    warnings.filterwarnings("ignore", category=DeprecationWarning, message="to_tuple")
 
 
 @pytest.fixture(autouse=True)
diff --git a/networkx/readwrite/json_graph/node_link.py b/networkx/readwrite/json_graph/node_link.py
--- a/networkx/readwrite/json_graph/node_link.py
+++ b/networkx/readwrite/json_graph/node_link.py
@@ -1,6 +1,5 @@
 from itertools import chain, count
 import networkx as nx
-from networkx.utils import to_tuple
 
 __all__ = ["node_link_data", "node_link_graph"]
 
@@ -8,6 +7,23 @@
 _attrs = dict(source="source", target="target", name="id", key="key", link="links")
 
 
+def _to_tuple(x):
+    """Converts lists to tuples, including nested lists.
+
+    All other non-list inputs are passed through unmodified. This function is
+    intended to be used to convert potentially nested lists from json files
+    into valid nodes.
+
+    Examples
+    --------
+    >>> _to_tuple([1, 2, [3, 4]])
+    (1, 2, (3, 4))
+    """
+    if not isinstance(x, (tuple, list)):
+        return x
+    return tuple(map(_to_tuple, x))
+
+
 def node_link_data(G, attrs=None):
     """Returns data in node-link format that is suitable for JSON serialization
     and use in Javascript documents.
@@ -164,7 +180,7 @@ def node_link_graph(data, directed=False, multigraph=True, attrs=None):
     graph.graph = data.get("graph", {})
     c = count()
     for d in data["nodes"]:
-        node = to_tuple(d.get(name, next(c)))
+        node = _to_tuple(d.get(name, next(c)))
         nodedata = {str(k): v for k, v in d.items() if k != name}
         graph.add_node(node, **nodedata)
     for d in data[links]:
diff --git a/networkx/utils/misc.py b/networkx/utils/misc.py
--- a/networkx/utils/misc.py
+++ b/networkx/utils/misc.py
@@ -437,6 +437,10 @@ def groups(many_to_one):
 def to_tuple(x):
     """Converts lists to tuples.
 
+    .. deprecated:: 2.8
+
+       to_tuple is deprecated and will be removed in NetworkX 3.0.
+
     Examples
     --------
     >>> from networkx.utils import to_tuple
@@ -444,6 +448,12 @@ def to_tuple(x):
     >>> to_tuple(a_list)
     (1, 2, (1, 4))
     """
+    warnings.warn(
+        "to_tuple is deprecated and will be removed in NetworkX 3.0.",
+        DeprecationWarning,
+        stacklevel=2,
+    )
+
     if not isinstance(x, (tuple, list)):
         return x
     return tuple(map(to_tuple, x))
Deprecate or improve `utils.misc.to_tuple`

`to_tuple` is a simple function that recursively converts nested lists to tuples. It is only used in one place within networkx, in the `json_graph` module:

https://github.com/networkx/networkx/blob/2e490a307b97518c98066c058136981dcae95606/networkx/readwrite/json_graph/node_link.py#L167

The usage here makes sense, as there's a potential use-case for a node that is stored in json as a nested list (though this case isn't tested). However, it should be noted that this specific usage depends on the fact that `to_tuple` is a no-op for every input *except* Python lists.

Given that it is only used once internally, has a relatively specific use-case, and doesn't seem to have any other obvious connection to common operations in NetworkX, I would propose the following:

- Move the definition over to the `json_graph.node_link` module as a private function, and
- Deprecate the public function `nx.utils.misc.to_tuple`

If deprecating/removing the function seems too strong, then another option would be to improve the documentation and testing. Documentation should have usage examples to illustrate where it might be useful and also highlight the fact that the function is a no-op for every input except lists.

At face value I'd vote for deprecation because I don't see a strong connection to the rest of the library, but please share your opinion and any other ideas for how to handle this!
Playing devil's advocate against myself... one argument against deprecation is that there isn't really a built-in one-liner or simple fix that we can point to instead (at least off the top of my head). I still lean towards deprecation since the behavior is so specific, but it is definitely another angle to consider!

I agree that this makes sense as a private function in the `json_graph` code. And we can deprecate the public version and move the code to the new place.
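For anyone relying on the public helper after removal, the whole function is small enough to vendor; this is the same recursive body that appears in the patch above:

```python
def to_tuple(x):
    """Recursively convert (nested) lists to tuples; other inputs pass through."""
    if not isinstance(x, (tuple, list)):
        return x
    return tuple(map(to_tuple, x))

print(to_tuple([1, 2, [3, [4, 5]]]))  # (1, 2, (3, (4, 5)))
print(to_tuple("abc"))                # 'abc' -- a no-op for non-lists
```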
2022-03-27T00:58:01
networkx/networkx
5,442
networkx__networkx-5442
[ "5403" ]
d4b93384c5c482ff4397d8c6f4b80f660b799a9e
diff --git a/networkx/algorithms/bipartite/basic.py b/networkx/algorithms/bipartite/basic.py
--- a/networkx/algorithms/bipartite/basic.py
+++ b/networkx/algorithms/bipartite/basic.py
@@ -5,6 +5,7 @@
 """
 import networkx as nx
 from networkx.algorithms.components import connected_components
+from networkx.exception import AmbiguousSolution
 
 __all__ = [
     "is_bipartite",
@@ -126,10 +127,21 @@ def is_bipartite_node_set(G, nodes):
 
     Notes
     -----
+    An exception is raised if the input nodes are not distinct, because in this
+    case some bipartite algorithms will yield incorrect results.
     For connected graphs the bipartite sets are unique. This function handles
     disconnected graphs.
     """
     S = set(nodes)
+
+    if len(S) < len(nodes):
+        # this should maybe just return False?
+        raise AmbiguousSolution(
+            "The input node set contains duplicates.\n"
+            "This may lead to incorrect results when using it in bipartite algorithms.\n"
+            "Consider using set(nodes) as the input"
+        )
+
     for CC in (G.subgraph(c).copy() for c in connected_components(G)):
         X, Y = sets(CC)
         if not (
diff --git a/networkx/algorithms/bipartite/projection.py b/networkx/algorithms/bipartite/projection.py
--- a/networkx/algorithms/bipartite/projection.py
+++ b/networkx/algorithms/bipartite/projection.py
@@ -1,5 +1,6 @@
 """One-mode (unipartite) projections of bipartite graphs."""
 import networkx as nx
+from networkx.exception import NetworkXAlgorithmError
 from networkx.utils import not_implemented_for
 
 __all__ = [
@@ -132,7 +133,7 @@ def weighted_projected_graph(B, nodes, ratio=False):
         The input graph should be bipartite.
 
     nodes : list or iterable
-        Nodes to project onto (the "bottom" nodes).
+        Distinct nodes to project onto (the "bottom" nodes).
 
     ratio: Bool (default=False)
         If True, edge weight is the ratio between actual shared neighbors
@@ -159,7 +160,11 @@ def weighted_projected_graph(B, nodes, ratio=False):
 
     Notes
     -----
-    No attempt is made to verify that the input graph B is bipartite.
+    No attempt is made to verify that the input graph B is bipartite, or that
+    the input nodes are distinct. However, if the length of the input nodes is
+    greater than or equal to the nodes in the graph B, an exception is raised.
+    If the nodes are not distinct but don't raise this error, the output weights
+    will be incorrect.
     The graph and node properties are (shallow) copied to the projected graph.
 
     See :mod:`bipartite documentation <networkx.algorithms.bipartite>`
@@ -190,6 +195,13 @@ def weighted_projected_graph(B, nodes, ratio=False):
     G.graph.update(B.graph)
     G.add_nodes_from((n, B.nodes[n]) for n in nodes)
     n_top = float(len(B) - len(nodes))
+
+    if n_top < 1:
+        raise NetworkXAlgorithmError(
+            f"the size of the nodes to project onto ({len(nodes)}) is >= the graph size ({len(B)}).\n"
+            "They are either not a valid bipartite partition or contain duplicates"
+        )
+
     for u in nodes:
         unbrs = set(B[u])
         nbrs2 = {n for nbr in unbrs for n in B[nbr]} - {u}
diff --git a/networkx/algorithms/bipartite/tests/test_basic.py b/networkx/algorithms/bipartite/tests/test_basic.py
--- a/networkx/algorithms/bipartite/tests/test_basic.py
+++ b/networkx/algorithms/bipartite/tests/test_basic.py
@@ -51,6 +51,10 @@ def test_bipartite_sets_disconnected(self):
 
     def test_is_bipartite_node_set(self):
         G = nx.path_graph(4)
+
+        with pytest.raises(nx.AmbiguousSolution):
+            bipartite.is_bipartite_node_set(G, [1, 1, 2, 3])
+
         assert bipartite.is_bipartite_node_set(G, [0, 2])
         assert bipartite.is_bipartite_node_set(G, [1, 3])
         assert not bipartite.is_bipartite_node_set(G, [1, 2])
diff --git a/networkx/algorithms/bipartite/tests/test_project.py b/networkx/algorithms/bipartite/tests/test_project.py
--- a/networkx/algorithms/bipartite/tests/test_project.py
+++ b/networkx/algorithms/bipartite/tests/test_project.py
@@ -1,4 +1,6 @@
 import networkx as nx
+import pytest
+
 from networkx.algorithms import bipartite
 from networkx.utils import nodes_equal, edges_equal
 
@@ -51,6 +53,10 @@ def test_directed_path_collaboration_projected_graph(self):
 
     def test_path_weighted_projected_graph(self):
         G = nx.path_graph(4)
+
+        with pytest.raises(nx.NetworkXAlgorithmError):
+            bipartite.weighted_projected_graph(G, [1, 2, 3, 3])
+
         P = bipartite.weighted_projected_graph(G, [1, 3])
         assert nodes_equal(list(P), [1, 3])
         assert edges_equal(list(P.edges()), [(1, 3)])
diff --git a/networkx/algorithms/flow/tests/test_maxflow.py b/networkx/algorithms/flow/tests/test_maxflow.py
--- a/networkx/algorithms/flow/tests/test_maxflow.py
+++ b/networkx/algorithms/flow/tests/test_maxflow.py
@@ -10,17 +10,17 @@
 from networkx.algorithms.flow import shortest_augmenting_path
 from networkx.algorithms.flow import dinitz
 
-flow_funcs = [
+flow_funcs = {
     boykov_kolmogorov,
     dinitz,
     edmonds_karp,
     preflow_push,
     shortest_augmenting_path,
-]
-max_min_funcs = [nx.maximum_flow, nx.minimum_cut]
-flow_value_funcs = [nx.maximum_flow_value, nx.minimum_cut_value]
-interface_funcs = max_min_funcs + flow_value_funcs
-all_funcs = sum([flow_funcs, interface_funcs], [])
+}
+max_min_funcs = {nx.maximum_flow, nx.minimum_cut}
+flow_value_funcs = {nx.maximum_flow_value, nx.minimum_cut_value}
+interface_funcs = max_min_funcs & flow_value_funcs
+all_funcs = flow_funcs & interface_funcs
 
 
 def compute_cutset(G, partition):
Bipartite projection on nodes with duplicates raises ZeroDivisionError

When calculating the 'top' nodes for `weighted_projected_graph` [here](https://github.com/networkx/networkx/blob/6a0b4faf09ec9d3d40ad93e2ec9b431d6bab5dc4/networkx/algorithms/bipartite/projection.py#L192), if the user submits a list with duplicates, the function either gives incorrect results or a `ZeroDivisionError` when `ratio=True`. If the user first checks their set with `is_bipartite_node_set`, it returns `True`:

```python
import networkx as nx
from networkx.algorithms import bipartite

B = nx.complete_bipartite_graph(2, 2)
a = [0, 1] * 2

bipartite.is_bipartite_node_set(B, a)  # returns True
bipartite.weighted_projected_graph(B, a, ratio=True)  # raises ZeroDivisionError
```

I can understand that in `weighted_projected_graph` we burden the user with supplying a valid node set to avoid the inefficiency of converting a `list` into a `set`, but I do think anything that passes `is_bipartite_node_set` should definitely give correct and error-free answers when passed to the projection function; otherwise this behaviour is surprising.

Of course, in the example above it's obvious I've made duplicates, but in real use-cases the node set could come through a chain of functions and the user is none the wiser.
Thanks for reporting this @aaronzo!

> I do think anything that passes is_bipartite_node_set should definitely give correct and error-free answers when passed to the projection function, else this behaviour is surprising.

I agree, this can be surprising. How about we raise an `Error` from `is_bipartite_node_set` if the user passes in a node list which has duplicates? At least then `is_bipartite_node_set` fails correctly for the input list of nodes. A check like `if len(nodes) > len(S): raise ...` around:

https://github.com/networkx/networkx/blob/6a0b4faf09ec9d3d40ad93e2ec9b431d6bab5dc4/networkx/algorithms/bipartite/basic.py#L132-L135

On that note, there are probably a bunch of places where we silently change the incoming nodes list to a set and get rid of the duplicates. I'm not sure if we should do this everywhere.

@MridulS I agree with the proposal, and it's probably overkill to check sets within the functions themselves; after all, the function doesn't check that the nodeset is a valid partitioning of the graph either. Perhaps alongside your suggestion of checking sets in `is_bipartite_node_set`, we could add a note in the docstring for `weighted_projected_graph` (and any other functions this might affect); something like:

```
nodes : set or iterable
    Distinct nodes to project onto (the "bottom" nodes).
```

over [here](https://github.com/networkx/networkx/blob/6a0b4faf09ec9d3d40ad93e2ec9b431d6bab5dc4/networkx/algorithms/bipartite/projection.py#L134), and maybe we could add an error like

```
n_top = float(len(B) - len(nodes))
if n_top < 1:
    raise ValueError("The nodeset to project onto has size >= the graph size. Is it a valid partition without duplicates?")
```

which is inexpensive to check. This error won't catch cases where some duplicates are used but not enough to make `len(nodes) >= len(B)`, in which case the weight calculation will still be out by a factor (though this factor will be consistent).
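Until the checks above land, deduplicating the node list before projecting side-steps the problem entirely (a small sketch based on the reproduction in the issue):

```python
import networkx as nx
from networkx.algorithms import bipartite

B = nx.complete_bipartite_graph(2, 2)
nodes = [0, 1] * 2                       # accidental duplicates

# Passing set(nodes) gives the correct n_top and correct ratio weights:
P = bipartite.weighted_projected_graph(B, set(nodes), ratio=True)
print(list(P.edges(data=True)))          # [(0, 1, {'weight': 1.0})]
```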
2022-03-30T15:58:24
networkx/networkx
5,444
networkx__networkx-5444
[ "5443" ]
1e48dad5c6038b549d7a4a01f190ffd3765eb913
diff --git a/networkx/readwrite/graph6.py b/networkx/readwrite/graph6.py
--- a/networkx/readwrite/graph6.py
+++ b/networkx/readwrite/graph6.py
@@ -128,6 +128,8 @@ def bits():
     return G
 
 
+@not_implemented_for("directed")
+@not_implemented_for("multigraph")
 def to_graph6_bytes(G, nodes=None, header=True):
     """Convert a simple undirected graph to bytes in graph6 format.
diff --git a/networkx/readwrite/tests/test_graph6.py b/networkx/readwrite/tests/test_graph6.py
--- a/networkx/readwrite/tests/test_graph6.py
+++ b/networkx/readwrite/tests/test_graph6.py
@@ -79,9 +79,10 @@ def test_complete_bipartite_graph(self):
         # The expected encoding here was verified by Sage.
         assert result.getvalue() == b"N??F~z{~Fw^_~?~?^_?\n"
 
-    def test_no_directed_graphs(self):
+    @pytest.mark.parametrize("G", (nx.MultiGraph(), nx.DiGraph()))
+    def test_no_directed_or_multi_graphs(self, G):
         with pytest.raises(nx.NetworkXNotImplemented):
-            nx.write_graph6(nx.DiGraph(), BytesIO())
+            nx.write_graph6(G, BytesIO())
 
     def test_length(self):
         for i in list(range(13)) + [31, 47, 62, 63, 64, 72]:
@@ -108,10 +109,60 @@ def test_write_path(self):
             f.seek(0)
             assert f.read() == b">>graph6<<?\n"
 
-    def test_relabeling(self):
-        G = nx.Graph([(0, 1)])
-        assert g6.to_graph6_bytes(G) == b">>graph6<<A_\n"
-        G = nx.Graph([(1, 2)])
-        assert g6.to_graph6_bytes(G) == b">>graph6<<A_\n"
-        G = nx.Graph([(1, 42)])
+    @pytest.mark.parametrize("edge", ((0, 1), (1, 2), (1, 42)))
+    def test_relabeling(self, edge):
+        G = nx.Graph([edge])
+        f = BytesIO()
+        nx.write_graph6(G, f)
+        f.seek(0)
+        assert f.read() == b">>graph6<<A_\n"
+
+
+class TestToGraph6Bytes:
+    def test_null_graph(self):
+        G = nx.null_graph()
+        assert g6.to_graph6_bytes(G) == b">>graph6<<?\n"
+
+    def test_trivial_graph(self):
+        G = nx.trivial_graph()
+        assert g6.to_graph6_bytes(G) == b">>graph6<<@\n"
+
+    def test_complete_graph(self):
+        assert g6.to_graph6_bytes(nx.complete_graph(4)) == b">>graph6<<C~\n"
+
+    def test_large_complete_graph(self):
+        G = nx.complete_graph(67)
+        assert g6.to_graph6_bytes(G, header=False) == b"~?@B" + b"~" * 368 + b"w\n"
+
+    def test_no_header(self):
+        G = nx.complete_graph(4)
+        assert g6.to_graph6_bytes(G, header=False) == b"C~\n"
+
+    def test_complete_bipartite_graph(self):
+        G = nx.complete_bipartite_graph(6, 9)
+        assert g6.to_graph6_bytes(G, header=False) == b"N??F~z{~Fw^_~?~?^_?\n"
+
+    @pytest.mark.parametrize("G", (nx.MultiGraph(), nx.DiGraph()))
+    def test_no_directed_or_multi_graphs(self, G):
+        with pytest.raises(nx.NetworkXNotImplemented):
+            g6.to_graph6_bytes(G)
+
+    def test_length(self):
+        for i in list(range(13)) + [31, 47, 62, 63, 64, 72]:
+            G = nx.random_graphs.gnm_random_graph(i, i * i // 4, seed=i)
+            # Strip the trailing newline.
+            gstr = g6.to_graph6_bytes(G, header=False).rstrip()
+            assert len(gstr) == ((i - 1) * i // 2 + 5) // 6 + (1 if i < 63 else 4)
+
+    def test_roundtrip(self):
+        for i in list(range(13)) + [31, 47, 62, 63, 64, 72]:
+            G = nx.random_graphs.gnm_random_graph(i, i * i // 4, seed=i)
+            data = g6.to_graph6_bytes(G)
+            H = nx.from_graph6_bytes(data.rstrip())
+            assert nodes_equal(G.nodes(), H.nodes())
+            assert edges_equal(G.edges(), H.edges())
+
+    @pytest.mark.parametrize("edge", ((0, 1), (1, 2), (1, 42)))
+    def test_relabeling(self, edge):
+        G = nx.Graph([edge])
         assert g6.to_graph6_bytes(G) == b">>graph6<<A_\n"
Exception not raised when `to_graph6_bytes` is used on directed graphs

When I use `networkx.readwrite.graph6.to_graph6_bytes` with a **directed** graph, I expect a `NetworkXNotImplemented` exception to be raised (because the graph6 format is for undirected graphs only). This is also stated in the method docstring:

```
Raises
------
NetworkXNotImplemented
    If the graph is directed or is a multigraph.
```

### Current Behavior

```
>>> import networkx as nx
>>> nx.to_graph6_bytes(nx.DiGraph())
b'>>graph6<<?\n'
```

### Expected Behavior

```
>>> import networkx as nx
>>> nx.to_graph6_bytes(nx.DiGraph())
...
networkx.exception.NetworkXNotImplemented: not implemented for directed type
```

### Environment

Python version: 3.10.4
NetworkX version: 2.7.2rc1.dev0

### Additional context

Here you can find the description of the graph6 format: http://users.cecs.anu.edu.au/~bdm/data/formats.txt
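The behavior the issue asks for, once the two decorators from the patch above are applied (a small sketch; the encoded bytes match the test expectations):

```python
import networkx as nx

print(nx.to_graph6_bytes(nx.Graph([(0, 1)])))  # b'>>graph6<<A_\n'

try:
    nx.to_graph6_bytes(nx.DiGraph([(0, 1)]))   # now raises instead of encoding
except nx.NetworkXNotImplemented as err:
    print(err)                                 # not implemented for directed type
```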
I'll work on this issue. I'll also add a few tests (there are no tests for the `to_graph6_bytes` method).
2022-03-30T17:14:49
networkx/networkx
5,474
networkx__networkx-5474
[ "5463" ]
7e17ff9007d2e71c2ea2a6d340889c331689343c
diff --git a/networkx/classes/function.py b/networkx/classes/function.py
--- a/networkx/classes/function.py
+++ b/networkx/classes/function.py
@@ -787,6 +787,11 @@ def set_edge_attributes(G, values, name=None):
         >>> G[1][2]["attr2"]
         3
 
+    The attributes of one Graph can be used to set those of another.
+
+    >>> H = nx.path_graph(3)
+    >>> nx.set_edge_attributes(H, G.edges)
+
     Note that if the dict contains edges that are not in `G`, they are
     silently ignored::
 
@@ -795,6 +800,29 @@ def set_edge_attributes(G, values, name=None):
         >>> (1, 2) in G.edges()
         False
 
+    For multigraphs, the `values` dict is expected to be keyed by 3-tuples
+    including the edge key::
+
+        >>> MG = nx.MultiGraph()
+        >>> edges = [(0, 1), (0, 1)]
+        >>> MG.add_edges_from(edges)  # Returns list of edge keys
+        [0, 1]
+        >>> attributes = {(0, 1, 0): {"cost": 21}, (0, 1, 1): {"cost": 7}}
+        >>> nx.set_edge_attributes(MG, attributes)
+        >>> MG[0][1][0]["cost"]
+        21
+        >>> MG[0][1][1]["cost"]
+        7
+
+    If MultiGraph attributes are desired for a Graph, you must convert the 3-tuple
+    multiedge to a 2-tuple edge and the last multiedge's attribute value will
+    overwrite the previous values. Continuing from the previous case we get::
+
+        >>> H = nx.path_graph([0, 1, 2])
+        >>> nx.set_edge_attributes(H, {(u, v): ed for u, v, ed in MG.edges.data()})
+        >>> nx.get_edge_attributes(H, "cost")
+        {(0, 1): 7}
+
     """
     if name is not None:
         # `values` does not contain attribute names
Improve Examples Section of `set_edge_attributes`

While looking at the `set_edge_attributes` function, I noticed it doesn't include an example using multigraphs. Since the `values` argument is passed differently for non-multigraphs and multigraphs, a small example would show the difference clearly.

NetworkX version: 2.6.2
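A standalone version of the difference the issue describes (the doctest-style examples in the patch above are the authoritative documentation):

```python
import networkx as nx

# Graph: values keyed by 2-tuples (u, v)
G = nx.path_graph(3)
nx.set_edge_attributes(G, {(0, 1): 3, (1, 2): 5}, name="cost")

# MultiGraph: values keyed by 3-tuples (u, v, key)
MG = nx.MultiGraph()
keys = MG.add_edges_from([(0, 1), (0, 1)])   # -> [0, 1]
nx.set_edge_attributes(MG, {(0, 1, 0): 21, (0, 1, 1): 7}, name="cost")
print(MG[0][1][0]["cost"], MG[0][1][1]["cost"])  # 21 7
```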
I agree that a multigraph example would be a nice improvement. The fact that `values` expects the dict keyed by 3-tuples is definitely worth highlighting.

Okay, thanks! I will try to include an example and open a PR. Can you guide me on which file in the repo to edit?

> Can you guide me on which file in the repo to edit?

Check out [our new contributor FAQ](https://networkx.org/documentation/latest/developer/new_contributor_faq.html#q-i-want-to-work-on-a-specific-function-how-do-i-find-it-in-the-source-code) which will hopefully give you some ideas for how to navigate the code base!
2022-04-05T16:22:30
networkx/networkx
5,522
networkx__networkx-5522
[ "5498" ]
b79768389070c5533a5ae21afce15dd06cd2cff0
diff --git a/networkx/algorithms/triads.py b/networkx/algorithms/triads.py
--- a/networkx/algorithms/triads.py
+++ b/networkx/algorithms/triads.py
@@ -149,6 +149,30 @@ def triadic_census(G, nodelist=None):
     census : dict
        Dictionary with triad type as keys and number of occurrences as values.
 
+    Examples
+    --------
+    >>> G = nx.DiGraph([(1, 2), (2, 3), (3, 1), (3, 4), (4, 1), (4, 2)])
+    >>> triadic_census = nx.triadic_census(G)
+    >>> for key, value in triadic_census.items():
+    ...     print(f"{key}: {value}")
+    ...
+    003: 0
+    012: 0
+    102: 0
+    021D: 0
+    021U: 0
+    021C: 0
+    111D: 0
+    111U: 0
+    030T: 2
+    030C: 2
+    201: 0
+    120D: 0
+    120U: 0
+    120C: 0
+    210: 0
+    300: 0
+
     Notes
     -----
     This algorithm has complexity $O(m)$ where $m$ is the number of edges in
@@ -224,6 +248,15 @@ def is_triad(G):
     -------
     istriad : boolean
        Whether G is a valid triad
+
+    Examples
+    --------
+    >>> G = nx.DiGraph([(1, 2), (2, 3), (3, 1)])
+    >>> nx.is_triad(G)
+    True
+    >>> G.add_edge(0, 1)
+    >>> nx.is_triad(G)
+    False
     """
     if isinstance(G, nx.Graph):
         if G.order() == 3 and nx.is_directed(G):
@@ -245,6 +278,13 @@ def all_triplets(G):
     -------
     triplets : generator of 3-tuples
        Generator of tuples of 3 nodes
+
+    Examples
+    --------
+    >>> G = nx.DiGraph([(1, 2), (2, 3), (3, 4)])
+    >>> list(nx.all_triplets(G))
+    [(1, 2, 3), (1, 2, 4), (1, 3, 4), (2, 3, 4)]
+
     """
     triplets = combinations(G.nodes(), 3)
     return triplets
@@ -263,6 +303,17 @@ def all_triads(G):
     -------
     all_triads : generator of DiGraphs
        Generator of triads (order-3 DiGraphs)
+
+    Examples
+    --------
+    >>> G = nx.DiGraph([(1, 2), (2, 3), (3, 1), (3, 4), (4, 1), (4, 2)])
+    >>> for triad in nx.all_triads(G):
+    ...     print(triad.edges)
+    [(1, 2), (2, 3), (3, 1)]
+    [(1, 2), (4, 1), (4, 2)]
+    [(3, 1), (3, 4), (4, 1)]
+    [(2, 3), (3, 4), (4, 2)]
+
     """
     triplets = combinations(G.nodes(), 3)
     for triplet in triplets:
@@ -272,6 +323,29 @@ def all_triads(G):
 @not_implemented_for("undirected")
 def triads_by_type(G):
     """Returns a list of all triads for each triad type in a directed graph.
+    There are exactly 16 different types of triads possible. Suppose 1, 2, 3 are three
+    nodes, they will be classified as a particular triad type if their connections
+    are as follows:
+
+    - 003: 1, 2, 3
+    - 012: 1 -> 2, 3
+    - 102: 1 <-> 2, 3
+    - 021D: 1 <- 2 -> 3
+    - 021U: 1 -> 2 <- 3
+    - 021C: 1 -> 2 -> 3
+    - 111D: 1 <-> 2 <- 3
+    - 111U: 1 <-> 2 -> 3
+    - 030T: 1 -> 2 -> 3, 1 -> 3
+    - 030C: 1 <- 2 <- 3, 1 -> 3
+    - 201: 1 <-> 2 <-> 3
+    - 120D: 1 <- 2 -> 3, 1 <-> 3
+    - 120U: 1 -> 2 <- 3, 1 <-> 3
+    - 120C: 1 -> 2 -> 3, 1 <-> 3
+    - 210: 1 -> 2 <-> 3, 1 <-> 3
+    - 300: 1 <-> 2 <-> 3, 1 <-> 3
+
+    Refer to the :doc:`example gallery <auto_examples/graph/plot_triad_types>`
+    for visual examples of the triad types.
 
     Parameters
     ----------
@@ -282,6 +356,21 @@ def triads_by_type(G):
     -------
     tri_by_type : dict
        Dictionary with triad types as keys and lists of triads as values.
+
+    Examples
+    --------
+    >>> G = nx.DiGraph([(1, 2), (1, 3), (2, 3), (3, 1), (5, 6), (5, 4), (6, 7)])
+    >>> dict = nx.triads_by_type(G)
+    >>> dict['120C'][0].edges()
+    OutEdgeView([(1, 2), (1, 3), (2, 3), (3, 1)])
+    >>> dict['012'][0].edges()
+    OutEdgeView([(1, 2)])
+
+    References
+    ----------
+    .. [1] Snijders, T. (2012). "Transitivity and triads." University of
+        Oxford.
+        https://web.archive.org/web/20170830032057/http://www.stats.ox.ac.uk/~snijders/Trans_Triads_ha.pdf
     """
     # num_triads = o * (o - 1) * (o - 2) // 6
     # if num_triads > TRIAD_LIMIT: print(WARNING)
@@ -307,6 +396,15 @@ def triad_type(G):
     triad_type : str
        A string identifying the triad type
 
+    Examples
+    --------
+    >>> G = nx.DiGraph([(1, 2), (2, 3), (3, 1)])
+    >>> nx.triad_type(G)
+    '030C'
+    >>> G.add_edge(1, 3)
+    >>> nx.triad_type(G)
+    '120C'
+
     Notes
     -----
     There can be 6 unique edges in a triad (order-3 DiGraph) (so 2^^6=64 unique
@@ -394,6 +492,14 @@ def random_triad(G):
     -------
     G2 : subgraph
        A randomly selected triad (order-3 NetworkX DiGraph)
+
+    Examples
+    --------
+    >>> G = nx.DiGraph([(1, 2), (1, 3), (2, 3), (3, 1), (5, 6), (5, 4), (6, 7)])
+    >>> triad = nx.random_triad(G)
+    >>> triad.edges
+    OutEdgeView([(1, 3), (3, 1)])
+
     """
     nodes = sample(list(G.nodes()), 3)
     G2 = G.subgraph(nodes)
Improving Triads Algorithms Documentation

There are 7 functions under Triads:

1. [triadic_census](https://networkx.org/documentation/stable/reference/algorithms/generated/networkx.algorithms.triads.triadic_census.html#networkx.algorithms.triads.triadic_census)(G[, nodelist])
2. [random_triad](https://networkx.org/documentation/stable/reference/algorithms/generated/networkx.algorithms.triads.random_triad.html#networkx.algorithms.triads.random_triad)(G)
3. [triads_by_type](https://networkx.org/documentation/stable/reference/algorithms/generated/networkx.algorithms.triads.triads_by_type.html#networkx.algorithms.triads.triads_by_type)(G)
4. [triad_type](https://networkx.org/documentation/stable/reference/algorithms/generated/networkx.algorithms.triads.triad_type.html#networkx.algorithms.triads.triad_type)(G)
5. [is_triad](https://networkx.org/documentation/stable/reference/algorithms/generated/networkx.algorithms.triads.is_triad.html#networkx.algorithms.triads.is_triad)(G)
6. [all_triads](https://networkx.org/documentation/stable/reference/algorithms/generated/networkx.algorithms.triads.all_triads.html#networkx.algorithms.triads.all_triads)(G)
7. [all_triplets](https://networkx.org/documentation/stable/reference/algorithms/generated/networkx.algorithms.triads.all_triplets.html#networkx.algorithms.triads.all_triplets)(G)

None of these have any examples included. I would like to include some and also improve the documentation on triads and their types.

NetworkX version: 2.6.2
Thanks for the report @0ddoes! Feel free to work on them.
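A quick illustration of the kind of example the issue asks for (here on a 3-cycle, whose only triad is of type 030C):

```python
import networkx as nx

G = nx.DiGraph([(1, 2), (2, 3), (3, 1)])
census = nx.triadic_census(G)
print({k: v for k, v in census.items() if v})  # {'030C': 1}
```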
2022-04-11T21:07:06
networkx/networkx
5,523
networkx__networkx-5523
[ "5090" ]
b79768389070c5533a5ae21afce15dd06cd2cff0
diff --git a/networkx/algorithms/planarity.py b/networkx/algorithms/planarity.py
--- a/networkx/algorithms/planarity.py
+++ b/networkx/algorithms/planarity.py
@@ -24,6 +24,18 @@ def check_planarity(G, counterexample=False):
         If the graph is planar `certificate` is a PlanarEmbedding
         otherwise it is a Kuratowski subgraph.
 
+    Examples
+    --------
+    >>> G = nx.Graph([(0, 1), (0, 2)])
+    >>> is_planar, P = nx.check_planarity(G)
+    >>> print(is_planar)
+    True
+
+    When `G` is planar, a `PlanarEmbedding` instance is returned:
+
+    >>> P.get_data()
+    {0: [1, 2], 1: [0], 2: [0]}
+
     Notes
     -----
     A (combinatorial) embedding consists of cyclic orderings of the incident
@@ -716,6 +728,8 @@ class PlanarEmbedding(nx.DiGraph):
     The planar embedding is given by a `combinatorial embedding
     <https://en.wikipedia.org/wiki/Graph_embedding#Combinatorial_embedding>`_.
 
+    .. note:: `check_planarity` is the preferred way to check if a graph is planar.
+
     **Neighbor ordering:**
 
     In comparison to a usual graph structure, the embedding also stores the
@@ -761,6 +775,13 @@ class PlanarEmbedding(nx.DiGraph):
     For a half-edge (u, v) that is orientated such that u is below v then the
     face that belongs to (u, v) is to the right of this half-edge.
 
+    See Also
+    --------
+    is_planar :
+        Preferred way to check if an existing graph is planar.
+    check_planarity :
+        A convenient way to create a `PlanarEmbedding`. If not planar, it returns a subgraph that shows this.
+
     Examples
     --------
Update documentation for planar embedding

Let's update the documentation to make it clear that the `check_planarity` function is the primary interface for the planar embedding tools. Also, the class `PlanarEmbedding` is tricky: it has to maintain the planar data structure invariants. People not familiar with those ideas should definitely start with the `check_planarity` function.

See discussion in #5079
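The recommended entry point in practice, sketched on small complete graphs (output comments are indicative):

```python
import networkx as nx

G = nx.complete_graph(4)                 # K4 is planar
is_planar, embedding = nx.check_planarity(G)
print(is_planar)                         # True
print(embedding.get_data())              # cyclic neighbor order around each node

# K5 is not planar; ask for the Kuratowski counterexample instead
ok, bad = nx.check_planarity(nx.complete_graph(5), counterexample=True)
print(ok, bad.number_of_nodes())         # False 5
```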
I would love to work on this! 🙋‍♂️ For more context, what should be added to the documentation of the `PlanarEmbedding` class?

I think the main trouble #5079 revealed is that it is not clear where to start using the features we have. It is tempting to go straight to the class, but the better place to start is with the function `check_planarity`. So I think a pointer in the class documentation would be good, a statement in the function docstring that it is the primary interface to the routines for planarity, and a paragraph in the module documentation about the various parts, noting that you want to start with `check_planarity`. Of course, as you read more and learn more, other items might come up. The goal is to make it clear for someone who has done a search for "networkx planarity".
2022-04-12T17:38:58
networkx/networkx
5,550
networkx__networkx-5550
[ "5530" ]
1e5f0bde4cf4cbe4b65bf6e7be775a49556dcf23
diff --git a/networkx/algorithms/community/modularity_max.py b/networkx/algorithms/community/modularity_max.py
--- a/networkx/algorithms/community/modularity_max.py
+++ b/networkx/algorithms/community/modularity_max.py
@@ -326,6 +326,8 @@ def greedy_modularity_communities(
             raise ValueError(f"best_n must be between 1 and {len(G)}. Got {best_n}.")
         if best_n < cutoff:
             raise ValueError(f"Must have best_n >= cutoff. Got {best_n} < {cutoff}")
+        if best_n == 1:
+            return [set(G)]
     else:
         best_n = G.number_of_nodes()
     if n_communities is not None:
@@ -351,7 +353,19 @@ def greedy_modularity_communities(
 
     # continue merging communities until one of the breaking criteria is satisfied
     while len(communities) > cutoff:
-        dq = next(community_gen)
+        try:
+            dq = next(community_gen)
+        # StopIteration occurs when communities are the connected components
+        except StopIteration:
+            communities = sorted(communities, key=len, reverse=True)
+            # if best_n requires more merging, merge big sets for highest modularity
+            while len(communities) > best_n:
+                comm1, comm2, *rest = communities
+                communities = [comm1 ^ comm2]
+                communities.extend(rest)
+            return communities
+
+        # keep going unless max_mod is reached or best_n says to merge more
         if dq < 0 and len(communities) <= best_n:
             break
         communities = next(community_gen)
diff --git a/networkx/algorithms/community/tests/test_modularity_max.py b/networkx/algorithms/community/tests/test_modularity_max.py
--- a/networkx/algorithms/community/tests/test_modularity_max.py
+++ b/networkx/algorithms/community/tests/test_modularity_max.py
@@ -44,6 +44,17 @@ def test_modularity_communities_categorical_labels(func):
     assert set(func(G)) == expected
 
 
+def test_greedy_modularity_communities_components():
+    # Test for gh-5530
+    G = nx.Graph([(0, 1), (2, 3), (4, 5), (5, 6)])
+    # usual case with 3 components
+    assert greedy_modularity_communities(G) == [{4, 5, 6}, {0, 1}, {2, 3}]
+    # best_n can make the algorithm continue even when modularity goes down
+    assert greedy_modularity_communities(G, best_n=3) == [{4, 5, 6}, {0, 1}, {2, 3}]
+    assert greedy_modularity_communities(G, best_n=2) == [{0, 1, 4, 5, 6}, {2, 3}]
+    assert greedy_modularity_communities(G, best_n=1) == [{0, 1, 2, 3, 4, 5, 6}]
+
+
 def test_greedy_modularity_communities_relabeled():
     # Test for gh-4966
     G = nx.balanced_tree(2, 2)
@@ -306,7 +317,7 @@ def test_cutoff_parameter():
 
 def test_best_n():
     G = nx.barbell_graph(5, 3)
-    # Same result as without enforcing n_communities:
+    # Same result as without enforcing cutoff:
     best_n = 3
     expected = [frozenset(range(5)), frozenset(range(8, 13)), frozenset(range(5, 8))]
     assert greedy_modularity_communities(G, best_n=best_n) == expected
greedy_modularity_communities raises StopIteration for unconnected (?) graph

After an update of `networkx` from 2.6.3 to 2.8, one of my tests suddenly failed with

```
Traceback (most recent call last):
  File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/home/gereon/Develop/lexedata/src/lexedata/report/nonconcatenative_morphemes.py", line 159, in <module>
    networkx.algorithms.community.greedy_modularity_communities(graph),
  File "/home/gereon/.local/etc/lexedata/lib/python3.10/site-packages/networkx/algorithms/community/modularity_max.py", line 354, in greedy_modularity_communities
    dq = next(community_gen)
StopIteration
```

It turns out that the internals of `greedy_modularity_communities` have changed, so the function no longer works with unconnected graphs.

### Current Behavior

`networkx.algorithms.community.greedy_modularity_communities` fails with an undocumented error for unconnected graphs, or for graphs without edges.

### Expected Behavior

- Either document the requirement to have a connected graph with at least one edge and throw `ValueError` instead of `StopIteration` when it is not fulfilled, or run the graph through `nx.connected_components()` first.
- If the graph has only one node and no edges, return that node.

### Steps to Reproduce

```
>>> G = nx.Graph()
>>> G.add_edges_from([(0,1),(2,3)])
>>> greedy_modularity_communities(G)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/gereon/.local/etc/lexedata/lib/python3.10/site-packages/networkx/algorithms/community/modularity_max.py", line 354, in greedy_modularity_communities
    dq = next(community_gen)
StopIteration
```

```
>>> G = nx.Graph()
>>> G.add_nodes_from([1])
>>> greedy_modularity_communities(G)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/gereon/.local/etc/lexedata/lib/python3.10/site-packages/networkx/algorithms/community/modularity_max.py", line 350, in greedy_modularity_communities
    communities = next(community_gen)
  File "/home/gereon/.local/etc/lexedata/lib/python3.10/site-packages/networkx/algorithms/community/modularity_max.py", line 79, in _greedy_modularity_communities_generator
    q0 = 1 / m
ZeroDivisionError: division by zero
```

### Environment

Python version: 3.10.2
NetworkX version: 2.8
Thanks for the report @Anaphory - you are correct, there has indeed been a behavior change for unconnected graphs. Your suggestions as to what to do about it seem sensible, though I'm not sure which approach we should take (i.e. document the new behavior or treat this as a regression). I'd like to know what others think, especially @martacki & @dschult.

I would like us to handle disconnected graphs for the user (and not make them split the graph into connected components first). We have to figure out what the new `cutoff` and `best_n` parameters mean in that case though... Maybe we will have to ensure that `cutoff` is at least the number of connected components. Or maybe we can simply use a try-except on a StopIteration for the `next` call.

We should try to mimic the previous behavior of this function as much as we can. The intent was to leave the results the same for people who don't use the `cutoff` or `best_n` parameters.

Handling graphs with zero edges is definitely something we should do. I'm pretty sure that used to fail with the older code too. It's easy to treat that corner case specially, and we should do that.
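The behavior after the patch above, on the reporter's minimal example (output based on the accompanying tests; ordering of equal-size communities may vary):

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.Graph([(0, 1), (2, 3)])            # two components, no merge improves modularity
print(greedy_modularity_communities(G))   # e.g. [{0, 1}, {2, 3}] instead of StopIteration
print(greedy_modularity_communities(G, best_n=1))  # [{0, 1, 2, 3}]
```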
2022-04-17T20:42:01
networkx/networkx
5,574
networkx__networkx-5574
[ "5573" ]
e48d5f27e7026c5d104d3042dbd9b4d603c062bf
diff --git a/networkx/readwrite/gml.py b/networkx/readwrite/gml.py
--- a/networkx/readwrite/gml.py
+++ b/networkx/readwrite/gml.py
@@ -591,7 +591,7 @@ def stringize(value):
                     stringize(item)
                 buf.write("}")
             else:
-                msg = "{value!r} cannot be converted into a Python literal"
+                msg = f"{value!r} cannot be converted into a Python literal"
                 raise ValueError(msg)
 
     buf = StringIO()
diff --git a/networkx/utils/decorators.py b/networkx/utils/decorators.py
--- a/networkx/utils/decorators.py
+++ b/networkx/utils/decorators.py
@@ -245,7 +245,7 @@ def _nodes_or_number(n):
             nodes = tuple(n)
         else:
             if n < 0:
-                msg = "Negative number of nodes not valid: {n}"
+                msg = f"Negative number of nodes not valid: {n}"
                 raise nx.NetworkXError(msg)
     return (n, nodes)
Missing `f` prefix on f-strings

Some strings look like they're meant to be f-strings but are missing the `f` prefix, meaning variable interpolation won't happen.

https://github.com/networkx/networkx/blob/e48d5f27e7026c5d104d3042dbd9b4d603c062bf/networkx/utils/decorators.py#L248

https://github.com/networkx/networkx/blob/e48d5f27e7026c5d104d3042dbd9b4d603c062bf/networkx/readwrite/gml.py#L594

I found this issue automatically. I'm a bot. Beep Boop 🦊. See other issues I found in your repo [here](https://codereview.doctor/networkx/networkx)
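A two-line illustration of why the missing prefix matters; without `f` the braces are emitted literally:

```python
n = -3
print("Negative number of nodes not valid: {n}")   # Negative number of nodes not valid: {n}
print(f"Negative number of nodes not valid: {n}")  # Negative number of nodes not valid: -3
```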
2022-04-23T23:41:45
networkx/networkx
5,575
networkx__networkx-5575
[ "5557" ]
12c1a00cd116701a763f7c57c230b8739d2ed085
diff --git a/networkx/algorithms/triads.py b/networkx/algorithms/triads.py
--- a/networkx/algorithms/triads.py
+++ b/networkx/algorithms/triads.py
@@ -154,6 +154,12 @@ def triadic_census(G, nodelist=None):
     This algorithm has complexity $O(m)$ where $m$ is the number of edges in
     the graph.
 
+    Raises
+    ------
+    ValueError
+        If `nodelist` contains duplicate nodes or nodes not in `G`.
+        If you want to ignore this you can preprocess with `set(nodelist) & G.nodes`
+
     See also
     --------
     triad_graph
@@ -166,49 +172,83 @@ def triadic_census(G, nodelist=None):
     http://vlado.fmf.uni-lj.si/pub/networks/doc/triads/triads.pdf
 
     """
+    nodeset = set(G.nbunch_iter(nodelist))
+    if nodelist is not None and len(nodelist) != len(nodeset):
+        raise ValueError("nodelist includes duplicate nodes or nodes not in G")
+
+    N = len(G)
+    Nnot = N - len(nodeset)  # can signal special counting for subset of nodes
+
+    # create an ordering of nodes with nodeset nodes first
+    m = {n: i for i, n in enumerate(nodeset)}
+    if Nnot:
+        # add non-nodeset nodes later in the ordering
+        not_nodeset = G.nodes - nodeset
+        m.update((n, i + N) for i, n in enumerate(not_nodeset))
+
+    # build all_neighbor dicts for easy counting
+    # After Python 3.8 can leave off these keys(). Speedup also using G._pred
+    # nbrs = {n: G._pred[n].keys() | G._succ[n].keys() for n in G}
+    nbrs = {n: G.pred[n].keys() | G.succ[n].keys() for n in G}
+    dbl_nbrs = {n: G.pred[n].keys() & G.succ[n].keys() for n in G}
+
+    if Nnot:
+        sgl_nbrs = {n: G.pred[n].keys() ^ G.succ[n].keys() for n in not_nodeset}
+        # find number of edges not incident to nodes in nodeset
+        sgl = sum(1 for n in not_nodeset for nbr in sgl_nbrs[n] if nbr not in nodeset)
+        sgl_edges_outside = sgl // 2
+        dbl = sum(1 for n in not_nodeset for nbr in dbl_nbrs[n] if nbr not in nodeset)
+        dbl_edges_outside = dbl // 2
+
     # Initialize the count for each triad to be zero.
     census = {name: 0 for name in TRIAD_NAMES}
-    n = len(G)
-    # m = dict(zip(G, range(n)))
-    m = {v: i for i, v in enumerate(G)}
-    if nodelist is None:
-        nodelist = list(G.nodes())
-    for v in nodelist:
-        vnbrs = set(G.pred[v]) | set(G.succ[v])
+    # Main loop over nodes
+    for v in nodeset:
+        vnbrs = nbrs[v]
+        dbl_vnbrs = dbl_nbrs[v]
+        if Nnot:
+            # set up counts of edges attached to v.
+            sgl_unbrs_bdy = sgl_unbrs_out = dbl_unbrs_bdy = dbl_unbrs_out = 0
         for u in vnbrs:
             if m[u] <= m[v]:
                 continue
-            neighbors = (vnbrs | set(G.succ[u]) | set(G.pred[u])) - {u, v}
-            # Calculate dyadic triads instead of counting them.
-            if v in G[u] and u in G[v]:
-                census["102"] += n - len(neighbors) - 2
-            else:
-                census["012"] += n - len(neighbors) - 2
+            unbrs = nbrs[u]
+            neighbors = (vnbrs | unbrs) - {u, v}
             # Count connected triads.
             for w in neighbors:
-                if m[u] < m[w] or (
-                    m[v] < m[w] < m[u] and v not in G.pred[w] and v not in G.succ[w]
-                ):
+                if m[u] < m[w] or (m[v] < m[w] < m[u] and v not in nbrs[w]):
                     code = _tricode(G, v, u, w)
                     census[TRICODE_TO_NAME[code]] += 1
-    if len(nodelist) != len(G):
-        census["003"] = 0
-        for v in nodelist:
-            vnbrs = set(G.pred[v]) | set(G.succ[v])
-            not_vnbrs = G.nodes() - vnbrs - set(v)
-            triad_003_count = 0
-            for u in not_vnbrs:
-                unbrs = set(set(G.succ[u]) | set(G.pred[u])) - vnbrs
-                triad_003_count += len(not_vnbrs - unbrs) - 1
-            triad_003_count //= 2
-            census["003"] += triad_003_count
-    else:
-        # null triads = total number of possible triads - all found triads
-        #
-        # Use integer division here, since we know this formula guarantees an
-        # integral value.
-        census["003"] = ((n * (n - 1) * (n - 2)) // 6) - sum(census.values())
+
+            # Use a formula for dyadic triads with edge incident to v
+            if u in dbl_vnbrs:
+                census["102"] += N - len(neighbors) - 2
+            else:
+                census["012"] += N - len(neighbors) - 2
+
+            # Count edges attached to v. Subtract later to get triads with v isolated
+            # _out are (u,unbr) for unbrs outside boundary of nodeset
+            # _bdy are (u,unbr) for unbrs on boundary of nodeset (get double counted)
+            if Nnot and u not in nodeset:
+                sgl_unbrs = sgl_nbrs[u]
+                sgl_unbrs_bdy += len(sgl_unbrs & vnbrs - nodeset)
+                sgl_unbrs_out += len(sgl_unbrs - vnbrs - nodeset)
+                dbl_unbrs = dbl_nbrs[u]
+                dbl_unbrs_bdy += len(dbl_unbrs & vnbrs - nodeset)
+                dbl_unbrs_out += len(dbl_unbrs - vnbrs - nodeset)
+        # if nodeset == G.nodes, skip this b/c we will find the edge later.
+        if Nnot:
+            # Count edges outside nodeset not connected with v (v isolated triads)
+            census["012"] += sgl_edges_outside - (sgl_unbrs_out + sgl_unbrs_bdy // 2)
+            census["102"] += dbl_edges_outside - (dbl_unbrs_out + dbl_unbrs_bdy // 2)
+
+    # calculate null triads: "003"
+    # null triads = total number of possible triads - all found triads
+    total_triangles = (N * (N - 1) * (N - 2)) // 6
+    triangles_without_nodeset = (Nnot * (Nnot - 1) * (Nnot - 2)) // 6
+    total_census = total_triangles - triangles_without_nodeset
+    census["003"] = total_census - sum(census.values())
+
     return census
diff --git a/networkx/algorithms/tests/test_triads.py b/networkx/algorithms/tests/test_triads.py --- a/networkx/algorithms/tests/test_triads.py +++ b/networkx/algorithms/tests/test_triads.py @@ -1,8 +1,10 @@ -"""Unit tests for the :mod:`networkx.algorithms.triads` module.""" +"""Tests for the :mod:`networkx.algorithms.triads` module.""" -import networkx as nx +import pytest from collections import defaultdict +import itertools from random import sample +import networkx as nx def test_triadic_census(): @@ -139,6 +141,72 @@ def test_random_triad(): assert nx.is_triad(nx.random_triad(G)) +def test_triadic_census_short_path_nodelist(): + G = nx.path_graph("abc", create_using=nx.DiGraph) + expected = {"021C": 1} + for nl in ["a", "b", "c", "ab", "ac", "bc", "abc"]: + triad_census = nx.triadic_census(G, nodelist=nl) + assert expected == {typ: cnt for typ, cnt in triad_census.items() if cnt > 0} + + +def test_triadic_census_correct_nodelist_values(): + G = nx.path_graph(5, create_using=nx.DiGraph) + msg = r"nodelist includes duplicate nodes or nodes not in G" + with pytest.raises(ValueError, match=msg): + nx.triadic_census(G, [1, 2, 2, 3]) + with pytest.raises(ValueError, match=msg): + nx.triadic_census(G, [1, 2, "a", 3]) + + +def test_triadic_census_tiny_graphs(): + tc = nx.triadic_census(nx.empty_graph(0, create_using=nx.DiGraph)) + assert {} == {typ: cnt for typ, cnt in tc.items() if cnt > 0} + tc = nx.triadic_census(nx.empty_graph(1, create_using=nx.DiGraph)) + assert {} == {typ: cnt for typ, cnt in tc.items() if cnt > 0} + tc = nx.triadic_census(nx.empty_graph(2, create_using=nx.DiGraph)) + assert {} == {typ: cnt for typ, cnt in tc.items() if cnt > 0} + tc = nx.triadic_census(nx.DiGraph([(1, 2)])) + assert {} == {typ: cnt for typ, cnt in tc.items() if cnt > 0} + + +def test_triadic_census_selfloops(): + GG = nx.path_graph("abc", create_using=nx.DiGraph) + expected = {"021C": 1} + for n in GG: + G = GG.copy() + G.add_edge(n, n) + tc = nx.triadic_census(G) + assert expected == {typ: cnt for typ, cnt in tc.items() if cnt > 0} + + GG = nx.path_graph("abcde", create_using=nx.DiGraph) + tbt = nx.triads_by_type(GG) + for n in GG: + GG.add_edge(n, n) + tc = nx.triadic_census(GG) + assert tc == {tt: len(tbt[tt]) for tt in tc} + + +def test_triadic_census_four_path(): + G = nx.path_graph("abcd", create_using=nx.DiGraph) + expected = {"012": 2, "021C": 2} + triad_census = nx.triadic_census(G) + assert expected == {typ: cnt for typ, cnt in triad_census.items() if cnt > 0} + + +def test_triadic_census_four_path_nodelist(): + G = nx.path_graph("abcd", create_using=nx.DiGraph) + expected_end = {"012": 2, "021C": 1} + expected_mid = {"012": 1, "021C": 2} + a_triad_census = nx.triadic_census(G, nodelist=["a"]) + assert expected_end == {typ: cnt for typ, cnt in a_triad_census.items() if cnt > 0} + b_triad_census = nx.triadic_census(G, nodelist=["b"]) + assert expected_mid == {typ: cnt for typ, cnt in b_triad_census.items() if cnt > 0} + c_triad_census = nx.triadic_census(G, nodelist=["c"]) + assert expected_mid == {typ: cnt for typ, cnt in c_triad_census.items() if cnt > 0} + d_triad_census = nx.triadic_census(G, nodelist=["d"]) + assert expected_end == {typ: cnt for typ, cnt in d_triad_census.items() if cnt > 0} + + def test_triadic_census_nodelist(): """Tests the triadic_census function.""" G = nx.DiGraph() @@ -166,6 +234,37 @@ def test_triadic_census_nodelist(): node_triad_census = nx.triadic_census(G, nodelist=[node]) for triad_key in expected: actual[triad_key] += node_triad_census[triad_key] - # Divide 
the total count of 003 triads by 3, since we are counting them thrice
-    actual["003"] //= 3
+    # Divide all counts by 3
+    for k, v in actual.items():
+        actual[k] //= 3
     assert expected == actual
+
+
[email protected]("N", [5, 10])
+def test_triadic_census_on_random_graph(N):
+    G = nx.binomial_graph(N, 0.3, directed=True, seed=42)
+    tc1 = nx.triadic_census(G)
+    tbt = nx.triads_by_type(G)
+    tc2 = {tt: len(tbt[tt]) for tt in tc1}
+    assert tc1 == tc2
+
+    for n in G:
+        tc1 = nx.triadic_census(G, nodelist={n})
+        tc2 = {tt: sum(1 for t in tbt.get(tt, []) if n in t) for tt in tc1}
+        assert tc1 == tc2
+
+    for ns in itertools.combinations(G, 2):
+        ns = set(ns)
+        tc1 = nx.triadic_census(G, nodelist=ns)
+        tc2 = {
+            tt: sum(1 for t in tbt.get(tt, []) if any(n in ns for n in t)) for tt in tc1
+        }
+        assert tc1 == tc2
+
+    for ns in itertools.combinations(G, 3):
+        ns = set(ns)
+        tc1 = nx.triadic_census(G, nodelist=ns)
+        tc2 = {
+            tt: sum(1 for t in tbt.get(tt, []) if any(n in ns for n in t)) for tt in tc1
+        }
+        assert tc1 == tc2
parameter nodelist for triadic_census
Here is a code example:
```python
import networkx as nx

G = nx.DiGraph([("b", "a"), ("b", "c")])
print(nx.triadic_census(G))
print(nx.triadic_census(G, nodelist=["c"]))
```
This is the result:
```
{'003': 0, '012': 0, '102': 0, '021D': 1, '021U': 0, '021C': 0, '111D': 0, '111U': 0, '030T': 0, '030C': 0, '201': 0, '120D': 0, '120U': 0, '120C': 0, '210': 0, '300': 0}
{'003': 0, '012': 0, '102': 0, '021D': 0, '021U': 0, '021C': 0, '111D': 0, '111U': 0, '030T': 0, '030C': 0, '201': 0, '120D': 0, '120U': 0, '120C': 0, '210': 0, '300': 0}
```
Is something wrong with how the `nodelist` parameter works here?
The example is correct. For each type of triad, it shows how many triads are of that type. With only 1 node in the nodelist there are no triads. So there are no triads of any of the types.
@dschult
```python
G = nx.DiGraph([("b", "a"), ("b", "c")])
print(nx.triadic_census(G))
print(nx.triadic_census(G, nodelist=["b"]))
```
I got the same result with and without the `nodelist` parameter:
```text
{'003': 0, '012': 0, '102': 0, '021D': 1, '021U': 0, '021C': 0, '111D': 0, '111U': 0, '030T': 0, '030C': 0, '201': 0, '120D': 0, '120U': 0, '120C': 0, '210': 0, '300': 0}
{'003': 0, '012': 0, '102': 0, '021D': 1, '021U': 0, '021C': 0, '111D': 0, '111U': 0, '030T': 0, '030C': 0, '201': 0, '120D': 0, '120U': 0, '120C': 0, '210': 0, '300': 0}
```
So how can I get the triad types and counts for a single node?
Just calling `nx.triadic_census(G)` will return the triad types and the number of occurrences of each. If you want the triad types together with all the triad subgraphs, please use `nx.triads_by_type(G)`.
@MridulS I just want the counts and the group of triads belonging to a single node. `nx.triads_by_type(G)` is not practical here because the graph is large and it takes a long time.
~A single node in the nodelist contains no triads -- there is only one node!! Perhaps you want to include only the node and its neighbors: `nx.triadic_census(G, nodelist=['b'] + list(G['b']))`~

Sorry -- I was wrong.... It seems that you already *have* the triad type and quantity for e.g. node "b". The only triad is a-b-c and it has type '021D': 1. When you specify node 'c' it has no triads, so all types have value 0.
> ~A single node in the nodelist contains no triads -- there is only one node!! Perhaps you want to include only the node and its neighbors: `nx.triadic_census(G, nodelist=['b'] + list(G['b']))`~
>
> Sorry -- I was wrong.... It seems that you already _have_ the triad type and quantity for e.g. node "b". The only triad is a-b-c and it has type '021D': 1. When you specify node 'c' it has no triads, so all types have value 0.

@dschult Is this behavior correct, though? I think node "c" should also have a triad of type "021D", because "c" belongs to a<-b->c.
I see what you mean, but clearly this attempt to implement `triadic_census` for a nodelist does not count the way you suggest. The PR that implemented it is #4361. It is possible that there is a bug in that change. I haven't looked carefully, but I think that the condition `m[u] < m[v]`, which is supposed to eliminate double counting of triads, does not work correctly when only a subset of the actual nodes are looped over.
So I am thinking this may have a bug. Can you tell me what you expect the census to be for each node in 'a<-b->c'? And can you tell me what you expect the census to be for each node of the more complicated 'a->b->c->d'? If we can work through those two cases I think we should be able to check for bugs and identify the source.
@dschult For each node in "a<-b->c", I expect each node to belong to the "021D" triad; for each node, the "021D" count should be 1.
For each node in "a->b->c->d":
- node "a" gets {"021C": 1}
- node "b" gets {"021C": 2}
- node "c" gets {"021C": 2}
- node "d" gets {"021C": 1}
For the `"a->b->c->d"` case we should have 3 triads for each node -- so if we sum the individual node counts, we should get 12 counts. I only get 4 counts total: 2 for "021D" and 2 for "021C". And there are no triads counted for node "d". I think we made a serious error when we added the `nodelist` parameter to this code.
It would work fine if we didn't have all that code to avoid considering the same triad multiple times. But it is a fundamentally different way of counting. I think it should probably be a different function than `triadic_census`. Perhaps `triad_census_by_node`. I'm going to mark this as a defect. Let's see if there is a good algorithm for getting the census by node...
@dschult If the "nodelist" parameter is not used, does `triadic_census` run correctly? :-)
The `triadic_census` code without a `nodelist` parameter has been around for a long time and seems to be solidly correct (though I thought the nodelist part of it was correct too, so maybe there is something subtle that we've missed). The `nodelist` feature is relatively new; I can see where the bug is coming from there, and that bug has no impact on the correctness of the calculation without `nodelist`. So you are safe to use the function without the `nodelist` parameter.
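In the meantime, a per-node census can be computed from `nx.triads_by_type` by counting, for each triad type, the triads whose three nodes include the node of interest -- the same reference computation the new tests use. The sketch below is only a workaround, not the fixed `triadic_census` itself; `census_for_node` is a hypothetical helper name, and the approach enumerates every triad, so it is slow on large graphs:

```python
import networkx as nx

G = nx.path_graph("abcd", create_using=nx.DiGraph)  # the a->b->c->d example

# Map each triad type (e.g. "021C") to the list of triad subgraphs of that type.
triads = nx.triads_by_type(G)

def census_for_node(n):
    # Count, per type, the triads whose three nodes include n.
    return {typ: sum(1 for t in tg if n in t) for typ, tg in triads.items()}

for n in G:
    nonzero = {typ: c for typ, c in census_for_node(n).items() if c > 0}
    print(n, nonzero)
```

Note that, counted this way, the end nodes also pick up disconnected "012" triads (e.g. node "a" sits in the triads {a, b, d} and {a, c, d}), which matches the expectations encoded in `test_triadic_census_four_path_nodelist` above.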
2022-04-24T20:13:51