index | package | name | docstring | code | signature
---|---|---|---|---|---|
30,804 | networkx.algorithms.planarity | is_planar | Returns True if and only if `G` is planar.
A graph is *planar* iff it can be drawn in a plane without
any edge intersections.
Parameters
----------
G : NetworkX graph
Returns
-------
bool
Whether the graph is planar.
Examples
--------
>>> G = nx.Graph([(0, 1), (0, 2)])
>>> nx.is_planar(G)
True
>>> nx.is_planar(nx.complete_graph(5))
False
See Also
--------
check_planarity :
Check if graph is planar *and* return a `PlanarEmbedding` instance if True.
| def sign_recursive(self, e):
"""Recursive version of :meth:`sign`."""
if self.ref[e] is not None:
self.side[e] = self.side[e] * self.sign_recursive(self.ref[e])
self.ref[e] = None
return self.side[e]
| (G, *, backend=None, **backend_kwargs) |
30,805 | networkx.algorithms.graphical | is_pseudographical | Returns True if some pseudograph can realize the sequence.
Every nonnegative integer sequence with an even sum is pseudographical
(see [1]_).
Parameters
----------
sequence : list or iterable container
A sequence of integer node degrees
Returns
-------
valid : bool
True if the sequence is a pseudographic degree sequence and False if not.
Examples
--------
>>> G = nx.Graph([(1, 2), (1, 3), (2, 3), (3, 4), (4, 2), (5, 1), (5, 4)])
>>> sequence = (d for _, d in G.degree())
>>> nx.is_pseudographical(sequence)
True
To test a non-pseudographical sequence:
>>> sequence_list = [d for _, d in G.degree()]
>>> sequence_list[-1] += 1
>>> nx.is_pseudographical(sequence_list)
False
Notes
-----
The worst-case run time is $O(n)$ where n is the length of the sequence.
References
----------
.. [1] F. Boesch and F. Harary. "Line removal algorithms for graphs
and their degree lists", IEEE Trans. Circuits and Systems, CAS-23(12),
pp. 778-782 (1976).
| null | (sequence, *, backend=None, **backend_kwargs) |
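The criterion documented above is simple enough to sketch without networkx. The following is a minimal stand-alone check (the helper name is ours, not part of the library): a sequence is pseudographical exactly when it consists of nonnegative integers and has an even sum.

```python
def is_pseudographical_sketch(sequence):
    """Return True if `sequence` contains only nonnegative integers
    and has an even sum -- the Boesch-Harary condition for a
    pseudograph (loops and multi-edges allowed) to realize it."""
    degrees = list(sequence)
    if not all(isinstance(d, int) and d >= 0 for d in degrees):
        return False
    return sum(degrees) % 2 == 0

# Degree sequence of the 7-edge example graph above: sum is 14 (even).
print(is_pseudographical_sketch([3, 3, 3, 3, 2]))  # True
# Incrementing one degree makes the sum odd, as in the docstring example.
print(is_pseudographical_sketch([3, 3, 3, 3, 3]))  # False
```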
30,806 | networkx.algorithms.regular | is_regular | Determines whether the graph ``G`` is a regular graph.
A regular graph is a graph where each vertex has the same degree. A
regular digraph is a graph where the indegree and outdegree of each
vertex are equal.
Parameters
----------
G : NetworkX graph
Returns
-------
bool
Whether the given graph or digraph is regular.
Examples
--------
>>> G = nx.DiGraph([(1, 2), (2, 3), (3, 4), (4, 1)])
>>> nx.is_regular(G)
True
| null | (G, *, backend=None, **backend_kwargs) |
30,807 | networkx.generators.expanders | is_regular_expander | Determines whether the graph G is a regular expander. [1]_
An expander graph is a sparse graph with strong connectivity properties.
More precisely, this helper checks whether the graph is a
regular $(n, d, \lambda)$-expander with $\lambda$ close to
the Alon-Boppana bound and given by
$\lambda = 2 \sqrt{d - 1} + \epsilon$. [2]_
When $\epsilon = 0$, a graph that passes the test is a Ramanujan graph. [3]_
A Ramanujan graph has a spectral gap almost as large as possible, which
makes it an excellent expander.
Parameters
----------
G : NetworkX graph
epsilon : int, float, default=0
Returns
-------
bool
Whether the given graph is a regular $(n, d, \lambda)$-expander
where $\lambda = 2 \sqrt{d - 1} + \epsilon$.
Examples
--------
>>> G = nx.random_regular_expander_graph(20, 4)
>>> nx.is_regular_expander(G)
True
See Also
--------
maybe_regular_expander
random_regular_expander_graph
References
----------
.. [1] Expander graph, https://en.wikipedia.org/wiki/Expander_graph
.. [2] Alon-Boppana bound, https://en.wikipedia.org/wiki/Alon%E2%80%93Boppana_bound
.. [3] Ramanujan graphs, https://en.wikipedia.org/wiki/Ramanujan_graph
| null | (G, *, epsilon=0, backend=None, **backend_kwargs) |
30,808 | networkx.algorithms.components.semiconnected | is_semiconnected | Returns True if the graph is semiconnected, False otherwise.
A graph is semiconnected if and only if for any pair of nodes, either one
is reachable from the other, or they are mutually reachable.
This function uses a theorem that states that a DAG is semiconnected
if, for any topological sort, each consecutive pair of nodes
$(v_i, v_{i+1})$ in that sort is joined by an edge. That allows us to
check whether a non-DAG `G` is semiconnected by condensing the graph:
i.e. constructing a new graph `H` whose nodes are the strongly connected
components of `G`, with an edge (scc_1, scc_2) whenever there is an edge
$(v_1, v_2)$ in `G` for some $v_1 \in scc_1$ and $v_2 \in scc_2$. The
result is a DAG, so we compute a topological sort of `H` and check that
for every $n$ there is an edge $(scc_n, scc_{n+1})$.
Parameters
----------
G : NetworkX graph
A directed graph.
Returns
-------
semiconnected : bool
True if the graph is semiconnected, False otherwise.
Raises
------
NetworkXNotImplemented
If the input graph is undirected.
NetworkXPointlessConcept
If the graph is empty.
Examples
--------
>>> G = nx.path_graph(4, create_using=nx.DiGraph())
>>> print(nx.is_semiconnected(G))
True
>>> G = nx.DiGraph([(1, 2), (3, 2)])
>>> print(nx.is_semiconnected(G))
False
See Also
--------
is_strongly_connected
is_weakly_connected
is_connected
is_biconnected
| null | (G, *, backend=None, **backend_kwargs) |
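The DAG half of the theorem above can be sketched with the standard library alone. This hypothetical helper (our name) covers only the DAG case on integer nodes `0..n-1`; the full routine would first condense strongly connected components as described above. It computes one topological order via Kahn's algorithm and checks that every consecutive pair is joined by an edge.

```python
from collections import defaultdict, deque

def dag_is_semiconnected_sketch(n, edges):
    """For a DAG on nodes 0..n-1: take a topological sort and check
    that every consecutive pair in it is joined by a direct edge."""
    succ, indeg = defaultdict(set), [0] * n
    for u, v in edges:
        if v not in succ[u]:
            succ[u].add(v)
            indeg[v] += 1
    # Kahn's algorithm for one topological order.
    queue = deque(v for v in range(n) if indeg[v] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for w in succ[u]:
            indeg[w] -= 1
            if indeg[w] == 0:
                queue.append(w)
    return all(b in succ[a] for a, b in zip(order, order[1:]))

print(dag_is_semiconnected_sketch(4, [(0, 1), (1, 2), (2, 3)]))  # True
print(dag_is_semiconnected_sketch(3, [(0, 1), (2, 1)]))          # False
```

Checking a single topological order suffices here: if two consecutive nodes were reachable only through an intermediate node, that intermediate would have to sit between them in the order, contradicting their adjacency in the sort.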
30,809 | networkx.algorithms.euler | is_semieulerian | Return True iff `G` is semi-Eulerian.
G is semi-Eulerian if it has an Eulerian path but no Eulerian circuit.
See Also
--------
has_eulerian_path
is_eulerian
| null | (G, *, backend=None, **backend_kwargs) |
30,810 | networkx.algorithms.simple_paths | is_simple_path | Returns True if and only if `nodes` form a simple path in `G`.
A *simple path* in a graph is a nonempty sequence of nodes in which
no node appears more than once in the sequence, and each adjacent
pair of nodes in the sequence is adjacent in the graph.
Parameters
----------
G : graph
A NetworkX graph.
nodes : list
A list of one or more nodes in the graph `G`.
Returns
-------
bool
Whether the given list of nodes represents a simple path in `G`.
Notes
-----
An empty list of nodes is not a path but a list of one node is a
path. Here's an explanation why.
This function operates on *node paths*. One could also consider
*edge paths*. There is a bijection between node paths and edge
paths.
The *length of a path* is the number of edges in the path, so a list
of nodes of length *n* corresponds to a path of length *n* - 1.
Thus the smallest edge path would be a list of zero edges, the empty
path. This corresponds to a list of one node.
To convert between a node path and an edge path, you can use code
like the following::
>>> from networkx.utils import pairwise
>>> nodes = [0, 1, 2, 3]
>>> edges = list(pairwise(nodes))
>>> edges
[(0, 1), (1, 2), (2, 3)]
>>> nodes = [edges[0][0]] + [v for u, v in edges]
>>> nodes
[0, 1, 2, 3]
Examples
--------
>>> G = nx.cycle_graph(4)
>>> nx.is_simple_path(G, [2, 3, 0])
True
>>> nx.is_simple_path(G, [0, 2])
False
| def _bidirectional_dijkstra(
G, source, target, weight="weight", ignore_nodes=None, ignore_edges=None
):
"""Dijkstra's algorithm for shortest paths using bidirectional search.
This function returns the shortest path between source and target
ignoring nodes and edges in the containers ignore_nodes and
ignore_edges.
This is a custom modification of the standard Dijkstra bidirectional
shortest path implementation at networkx.algorithms.weighted
Parameters
----------
G : NetworkX graph
source : node
Starting node.
target : node
Ending node.
weight: string, function, optional (default='weight')
Edge data key or weight function corresponding to the edge weight
ignore_nodes : container of nodes
nodes to ignore, optional
ignore_edges : container of edges
edges to ignore, optional
Returns
-------
length : number
Shortest path length.
path : list
List of nodes in a shortest path from the source to the target.
Raises
------
NetworkXNoPath
If no path exists between source and target.
Notes
-----
Edge weight attributes must be numerical.
Distances are calculated as sums of weighted edges traversed.
In practice bidirectional Dijkstra is much more than twice as fast as
ordinary Dijkstra.
Ordinary Dijkstra expands nodes in a sphere-like manner from the
source. The radius of this sphere will eventually be the length
of the shortest path. Bidirectional Dijkstra will expand nodes
from both the source and the target, making two spheres of half
this radius. The volume of the first sphere is pi*r*r while the two
smaller ones together have 2*pi*(r/2)*(r/2), half the volume.
This algorithm is not guaranteed to work if edge weights
are negative or are floating point numbers
(overflows and roundoff errors can cause problems).
See Also
--------
shortest_path
shortest_path_length
"""
if ignore_nodes and (source in ignore_nodes or target in ignore_nodes):
raise nx.NetworkXNoPath(f"No path between {source} and {target}.")
if source == target:
if source not in G:
raise nx.NodeNotFound(f"Node {source} not in graph")
return (0, [source])
# handle either directed or undirected
if G.is_directed():
Gpred = G.predecessors
Gsucc = G.successors
else:
Gpred = G.neighbors
Gsucc = G.neighbors
# support optional nodes filter
if ignore_nodes:
def filter_iter(nodes):
def iterate(v):
for w in nodes(v):
if w not in ignore_nodes:
yield w
return iterate
Gpred = filter_iter(Gpred)
Gsucc = filter_iter(Gsucc)
# support optional edges filter
if ignore_edges:
if G.is_directed():
def filter_pred_iter(pred_iter):
def iterate(v):
for w in pred_iter(v):
if (w, v) not in ignore_edges:
yield w
return iterate
def filter_succ_iter(succ_iter):
def iterate(v):
for w in succ_iter(v):
if (v, w) not in ignore_edges:
yield w
return iterate
Gpred = filter_pred_iter(Gpred)
Gsucc = filter_succ_iter(Gsucc)
else:
def filter_iter(nodes):
def iterate(v):
for w in nodes(v):
if (v, w) not in ignore_edges and (w, v) not in ignore_edges:
yield w
return iterate
Gpred = filter_iter(Gpred)
Gsucc = filter_iter(Gsucc)
push = heappush
pop = heappop
# Init: Forward Backward
dists = [{}, {}] # dictionary of final distances
paths = [{source: [source]}, {target: [target]}] # dictionary of paths
fringe = [[], []] # heap of (distance, node) tuples for
# extracting next node to expand
seen = [{source: 0}, {target: 0}] # dictionary of distances to
# nodes seen
c = count()
# initialize fringe heap
push(fringe[0], (0, next(c), source))
push(fringe[1], (0, next(c), target))
# neighs for extracting correct neighbor information
neighs = [Gsucc, Gpred]
# variables to hold shortest discovered path
# finaldist = 1e30000
finalpath = []
dir = 1
while fringe[0] and fringe[1]:
# choose direction
# dir == 0 is forward direction and dir == 1 is back
dir = 1 - dir
# extract closest to expand
(dist, _, v) = pop(fringe[dir])
if v in dists[dir]:
# Shortest path to v has already been found
continue
# update distance
dists[dir][v] = dist # equal to seen[dir][v]
if v in dists[1 - dir]:
# if we have scanned v in both directions we are done
# we have now discovered the shortest path
return (finaldist, finalpath)
wt = _weight_function(G, weight)
for w in neighs[dir](v):
if dir == 0: # forward
minweight = wt(v, w, G.get_edge_data(v, w))
vwLength = dists[dir][v] + minweight
else: # back, must remember to change v,w->w,v
minweight = wt(w, v, G.get_edge_data(w, v))
vwLength = dists[dir][v] + minweight
if w in dists[dir]:
if vwLength < dists[dir][w]:
raise ValueError("Contradictory paths found: negative weights?")
elif w not in seen[dir] or vwLength < seen[dir][w]:
# relaxing
seen[dir][w] = vwLength
push(fringe[dir], (vwLength, next(c), w))
paths[dir][w] = paths[dir][v] + [w]
if w in seen[0] and w in seen[1]:
# see if this path is better than the already
# discovered shortest path
totaldist = seen[0][w] + seen[1][w]
if finalpath == [] or finaldist > totaldist:
finaldist = totaldist
revpath = paths[1][w][:]
revpath.reverse()
finalpath = paths[0][w] + revpath[1:]
raise nx.NetworkXNoPath(f"No path between {source} and {target}.")
| (G, nodes, *, backend=None, **backend_kwargs) |
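The definition above (nonempty, no repeated nodes, consecutive pairs adjacent) translates directly into a few lines of standard-library Python. This sketch assumes the graph is given as an adjacency dict mapping each node to a set of neighbors; the helper name is ours.

```python
def is_simple_path_sketch(adj, nodes):
    """Return True iff `nodes` forms a simple path in the graph given
    as an adjacency dict `adj` (node -> set of neighbors)."""
    if len(nodes) == 0:                   # the empty list is not a path
        return False
    if len(set(nodes)) != len(nodes):     # no node may repeat
        return False
    if any(v not in adj for v in nodes):  # every node must be in the graph
        return False
    # each consecutive pair must be adjacent in the graph
    return all(v in adj[u] for u, v in zip(nodes, nodes[1:]))

# The 4-cycle 0-1-2-3-0, matching the docstring example.
C4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {2, 0}}
print(is_simple_path_sketch(C4, [2, 3, 0]))  # True
print(is_simple_path_sketch(C4, [0, 2]))     # False (0 and 2 not adjacent)
```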
30,811 | networkx.algorithms.components.strongly_connected | is_strongly_connected | Test directed graph for strong connectivity.
A directed graph is strongly connected if and only if every vertex in
the graph is reachable from every other vertex.
Parameters
----------
G : NetworkX Graph
A directed graph.
Returns
-------
connected : bool
True if the graph is strongly connected, False otherwise.
Examples
--------
>>> G = nx.DiGraph([(0, 1), (1, 2), (2, 3), (3, 0), (2, 4), (4, 2)])
>>> nx.is_strongly_connected(G)
True
>>> G.remove_edge(2, 3)
>>> nx.is_strongly_connected(G)
False
Raises
------
NetworkXNotImplemented
If G is undirected.
See Also
--------
is_weakly_connected
is_semiconnected
is_connected
is_biconnected
strongly_connected_components
Notes
-----
For directed graphs only.
| null | (G, *, backend=None, **backend_kwargs) |
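The definition above admits a simple two-search check, sketched here with the standard library (helper name is ours; isolated nodes are not representable in a bare edge list): a directed graph is strongly connected iff an arbitrary start node can reach every node following edges forward, and also following them backward.

```python
from collections import defaultdict

def is_strongly_connected_sketch(edges):
    """Check strong connectivity of a directed graph given as an edge
    list, via forward and backward reachability from one start node."""
    succ, pred, nodes = defaultdict(set), defaultdict(set), set()
    for u, v in edges:
        succ[u].add(v)
        pred[v].add(u)
        nodes.update((u, v))

    def reachable(start, nbrs):
        # iterative DFS collecting everything reachable via `nbrs`
        seen, stack = {start}, [start]
        while stack:
            for w in nbrs[stack.pop()]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return seen

    start = next(iter(nodes))
    return reachable(start, succ) == nodes and reachable(start, pred) == nodes

# Same graph as the docstring example.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (2, 4), (4, 2)]
print(is_strongly_connected_sketch(edges))                              # True
print(is_strongly_connected_sketch([e for e in edges if e != (2, 3)]))  # False
```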
30,812 | networkx.algorithms.distance_regular | is_strongly_regular | Returns True if and only if the given graph is strongly
regular.
An undirected graph is *strongly regular* if
* it is regular,
* each pair of adjacent vertices has the same number of neighbors in
common,
* each pair of nonadjacent vertices has the same number of neighbors
in common.
Each strongly regular graph is a distance-regular graph.
Conversely, if a distance-regular graph has diameter two, then it is
a strongly regular graph. For more information on distance-regular
graphs, see :func:`is_distance_regular`.
Parameters
----------
G : NetworkX graph
An undirected graph.
Returns
-------
bool
Whether `G` is strongly regular.
Examples
--------
The cycle graph on five vertices is strongly regular. It is
two-regular, each pair of adjacent vertices has no shared neighbors,
and each pair of nonadjacent vertices has one shared neighbor::
>>> G = nx.cycle_graph(5)
>>> nx.is_strongly_regular(G)
True
| null | (G, *, backend=None, **backend_kwargs) |
30,813 | networkx.algorithms.tournament | is_tournament | Returns True if and only if `G` is a tournament.
A tournament is a directed graph, with neither self-loops nor
multi-edges, in which there is exactly one directed edge joining
each pair of distinct nodes.
Parameters
----------
G : NetworkX graph
A directed graph representing a tournament.
Returns
-------
bool
Whether the given graph is a tournament graph.
Examples
--------
>>> G = nx.DiGraph([(0, 1), (1, 2), (2, 0)])
>>> nx.is_tournament(G)
True
Notes
-----
Some definitions require a self-loop on each node, but that is not
the convention used here.
| null | (G, *, backend=None, **backend_kwargs) |
30,814 | networkx.algorithms.tree.recognition | is_tree |
Returns True if `G` is a tree.
A tree is a connected graph with no undirected cycles.
For directed graphs, `G` is a tree if the underlying graph is a tree. The
underlying graph is obtained by treating each directed edge as a single
undirected edge in a multigraph.
Parameters
----------
G : graph
The graph to test.
Returns
-------
b : bool
A boolean that is True if `G` is a tree.
Raises
------
NetworkXPointlessConcept
If `G` is empty.
Examples
--------
>>> G = nx.Graph()
>>> G.add_edges_from([(1, 2), (1, 3), (2, 4), (2, 5)])
>>> nx.is_tree(G) # n-1 edges
True
>>> G.add_edge(3, 4)
>>> nx.is_tree(G) # n edges
False
Notes
-----
In another convention, a directed tree is known as a *polytree* and then
*tree* corresponds to an *arborescence*.
See Also
--------
is_arborescence
| null | (G, *, backend=None, **backend_kwargs) |
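The "n-1 edges" comments in the example above reflect a standard characterization: an undirected graph on $n$ nodes is a tree iff it is connected and has exactly $n - 1$ edges. A stand-alone sketch using union-find (our helper name, nodes assumed to be integers `0..n-1`):

```python
def is_tree_sketch(n_nodes, edges):
    """A graph is a tree iff it is connected and has exactly n - 1
    edges; with n - 1 edges, connectivity alone rules out cycles."""
    if n_nodes == 0:
        raise ValueError("the null graph is not considered a tree")
    if len(edges) != n_nodes - 1:
        return False
    # union-find to count connected components
    parent = list(range(n_nodes))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for u, v in edges:
        parent[find(u)] = find(v)
    return len({find(x) for x in range(n_nodes)}) == 1

# Mirrors the docstring example (with 0-based node labels).
print(is_tree_sketch(5, [(0, 1), (0, 2), (1, 3), (1, 4)]))          # True
print(is_tree_sketch(5, [(0, 1), (0, 2), (1, 3), (1, 4), (2, 3)]))  # False
```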
30,815 | networkx.algorithms.triads | is_triad | Returns True if the graph G is a triad, else False.
Parameters
----------
G : graph
A NetworkX Graph
Returns
-------
istriad : boolean
Whether G is a valid triad
Examples
--------
>>> G = nx.DiGraph([(1, 2), (2, 3), (3, 1)])
>>> nx.is_triad(G)
True
>>> G.add_edge(0, 1)
>>> nx.is_triad(G)
False
| null | (G, *, backend=None, **backend_kwargs) |
30,816 | networkx.algorithms.graphical | is_valid_degree_sequence_erdos_gallai | Returns True if deg_sequence can be realized by a simple graph.
The validation is done using the Erdős-Gallai theorem [EG1960]_.
Parameters
----------
deg_sequence : list
A list of integers
Returns
-------
valid : bool
True if deg_sequence is graphical and False if not.
Examples
--------
>>> G = nx.Graph([(1, 2), (1, 3), (2, 3), (3, 4), (4, 2), (5, 1), (5, 4)])
>>> sequence = (d for _, d in G.degree())
>>> nx.is_valid_degree_sequence_erdos_gallai(sequence)
True
To test a non-valid sequence:
>>> sequence_list = [d for _, d in G.degree()]
>>> sequence_list[-1] += 1
>>> nx.is_valid_degree_sequence_erdos_gallai(sequence_list)
False
Notes
-----
This implementation uses an equivalent form of the Erdős-Gallai criterion.
Worst-case run time is $O(n)$ where $n$ is the length of the sequence.
Specifically, a sequence $d$ is graphical if and only if the
sum of the sequence is even and, for all strong indices $k$ in the sequence,

.. math::

   \sum_{i=1}^{k} d_i \leq k(k-1) + \sum_{j=k+1}^{n} \min(d_j, k)
      = k(n-1) - \left( k \sum_{j=0}^{k-1} n_j - \sum_{j=0}^{k-1} j n_j \right)
A strong index $k$ is any index where $d_k \geq k$, and the value $n_j$ is the
number of occurrences of $j$ in $d$. The maximal strong index is called the
Durfee index.
This particular rearrangement comes from the proof of Theorem 3 in [2]_.
The ZZ condition says that for the sequence $d$, if

.. math::

   |d| \geq \frac{(\max(d) + \min(d) + 1)^2}{4 \min(d)}
then d is graphical. This was shown in Theorem 6 in [2]_.
References
----------
.. [1] A. Tripathi and S. Vijay. "A note on a theorem of Erdős & Gallai",
Discrete Mathematics, 265, pp. 417-420 (2003).
.. [2] I.E. Zverovich and V.E. Zverovich. "Contributions to the theory
of graphic sequences", Discrete Mathematics, 105, pp. 292-303 (1992).
.. [EG1960] Erdős and Gallai, Mat. Lapok 11 264, 1960.
| null | (deg_sequence, *, backend=None, **backend_kwargs) |
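The inequalities above can be checked directly, without the $O(n)$ rearrangement the library uses. The following stand-alone sketch (our helper name) sorts first, so it runs in $O(n \log n)$ plus a quadratic inequality scan, trading speed for transparency:

```python
def erdos_gallai_sketch(deg_sequence):
    """Direct check of the Erdos-Gallai inequalities on a sorted
    sequence: even sum, and for every prefix length k,
    sum of the k largest degrees <= k(k-1) + sum(min(d_j, k))."""
    d = sorted(deg_sequence, reverse=True)
    n = len(d)
    if any(x < 0 for x in d) or sum(d) % 2 != 0:
        return False
    for k in range(1, n + 1):
        lhs = sum(d[:k])
        rhs = k * (k - 1) + sum(min(x, k) for x in d[k:])
        if lhs > rhs:
            return False
    return True

# Degree sequence of the docstring's example graph, then the broken one.
print(erdos_gallai_sketch([3, 3, 3, 3, 2]))  # True
print(erdos_gallai_sketch([3, 3, 3, 3, 3]))  # False (odd sum)
```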
30,817 | networkx.algorithms.graphical | is_valid_degree_sequence_havel_hakimi | Returns True if deg_sequence can be realized by a simple graph.
The validation proceeds using the Havel-Hakimi theorem
[havel1955]_, [hakimi1962]_, [CL1996]_.
Worst-case run time is $O(s)$ where $s$ is the sum of the sequence.
Parameters
----------
deg_sequence : list
A list of integers where each element specifies the degree of a node
in a graph.
Returns
-------
valid : bool
True if deg_sequence is graphical and False if not.
Examples
--------
>>> G = nx.Graph([(1, 2), (1, 3), (2, 3), (3, 4), (4, 2), (5, 1), (5, 4)])
>>> sequence = (d for _, d in G.degree())
>>> nx.is_valid_degree_sequence_havel_hakimi(sequence)
True
To test a non-valid sequence:
>>> sequence_list = [d for _, d in G.degree()]
>>> sequence_list[-1] += 1
>>> nx.is_valid_degree_sequence_havel_hakimi(sequence_list)
False
Notes
-----
The ZZ condition says that for the sequence $d$, if

.. math::

   |d| \geq \frac{(\max(d) + \min(d) + 1)^2}{4 \min(d)}
then d is graphical. This was shown in Theorem 6 in [1]_.
References
----------
.. [1] I.E. Zverovich and V.E. Zverovich. "Contributions to the theory
of graphic sequences", Discrete Mathematics, 105, pp. 292-303 (1992).
.. [havel1955] Havel, V. "A Remark on the Existence of Finite Graphs"
Casopis Pest. Mat. 80, 477-480, 1955.
.. [hakimi1962] Hakimi, S. "On the Realizability of a Set of Integers as
Degrees of the Vertices of a Graph." SIAM J. Appl. Math. 10, 496-506, 1962.
.. [CL1996] G. Chartrand and L. Lesniak, "Graphs and Digraphs",
Chapman and Hall/CRC, 1996.
| null | (deg_sequence, *, backend=None, **backend_kwargs) |
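The Havel-Hakimi procedure itself is short enough to sketch with the standard library (helper name ours; this naive version re-sorts each round, so it is slower than the library's $O(s)$ implementation but follows the theorem literally): repeatedly remove the largest degree $d$ and decrement the next $d$ largest degrees; the sequence is graphical iff this terminates with all zeros.

```python
def havel_hakimi_sketch(deg_sequence):
    """Naive Havel-Hakimi reduction: graphical iff the repeated
    remove-largest-and-decrement step never goes negative."""
    d = sorted(deg_sequence, reverse=True)
    if any(x < 0 for x in d):
        return False
    while d and d[0] > 0:
        largest = d.pop(0)
        if largest > len(d):
            return False          # not enough remaining nodes to attach to
        for i in range(largest):
            d[i] -= 1
            if d[i] < 0:
                return False
        d.sort(reverse=True)      # restore order before the next round
    return True

print(havel_hakimi_sketch([3, 3, 3, 3, 2]))  # True
print(havel_hakimi_sketch([3, 3, 3, 3, 3]))  # False
```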
30,818 | networkx.generators.joint_degree_seq | is_valid_directed_joint_degree | Checks whether the given directed joint degree input is realizable
Parameters
----------
in_degrees : list of integers
in degree sequence contains the in degrees of nodes.
out_degrees : list of integers
out degree sequence contains the out degrees of nodes.
nkk : dictionary of dictionary of integers
directed joint degree dictionary. for nodes of out degree k (first
level of dict) and nodes of in degree l (second level of dict)
describes the number of edges.
Returns
-------
boolean
returns true if given input is realizable, else returns false.
Notes
-----
Here is the list of conditions that the inputs (in/out degree sequences,
nkk) need to satisfy for simple directed graph realizability:
- Condition 0: in_degrees and out_degrees have the same length
- Condition 1: nkk[k][l] is integer for all k,l
- Condition 2: sum(nkk[k])/k = number of nodes with partition id k, is an
integer and matching degree sequence
- Condition 3: number of edges and non-chords between k and l cannot exceed
maximum possible number of edges
References
----------
[1] B. Tillman, A. Markopoulou, C. T. Butts & M. Gjoka,
"Construction of Directed 2K Graphs". In Proc. of KDD 2017.
| null | (in_degrees, out_degrees, nkk, *, backend=None, **backend_kwargs) |
30,819 | networkx.generators.joint_degree_seq | is_valid_joint_degree | Checks whether the given joint degree dictionary is realizable.
A *joint degree dictionary* is a dictionary of dictionaries, in
which entry ``joint_degrees[k][l]`` is an integer representing the
number of edges joining nodes of degree *k* with nodes of degree
*l*. Such a dictionary is realizable as a simple graph if and only
if the following conditions are satisfied.
- each entry must be an integer,
- the total number of nodes of degree *k*, computed by
``sum(joint_degrees[k].values()) / k``, must be an integer,
- the total number of edges joining nodes of degree *k* with
nodes of degree *l* cannot exceed the total number of possible edges,
- each diagonal entry ``joint_degrees[k][k]`` must be even (this is
a convention assumed by the :func:`joint_degree_graph` function).
Parameters
----------
joint_degrees : dictionary of dictionary of integers
A joint degree dictionary in which entry ``joint_degrees[k][l]``
is the number of edges joining nodes of degree *k* with nodes of
degree *l*.
Returns
-------
bool
Whether the given joint degree dictionary is realizable as a
simple graph.
References
----------
.. [1] M. Gjoka, M. Kurant, A. Markopoulou, "2.5K Graphs: from Sampling
to Generation", IEEE Infocom, 2013.
.. [2] I. Stanton, A. Pinar, "Constructing and sampling graphs with a
prescribed joint degree distribution", Journal of Experimental
Algorithmics, 2012.
| null | (joint_degrees, *, backend=None, **backend_kwargs) |
30,820 | networkx.algorithms.components.weakly_connected | is_weakly_connected | Test directed graph for weak connectivity.
A directed graph is weakly connected if and only if the graph
is connected when the direction of the edge between nodes is ignored.
Note that if a graph is strongly connected (i.e. the graph is connected
even when we account for directionality), it is by definition weakly
connected as well.
Parameters
----------
G : NetworkX Graph
A directed graph.
Returns
-------
connected : bool
True if the graph is weakly connected, False otherwise.
Raises
------
NetworkXNotImplemented
If G is undirected.
Examples
--------
>>> G = nx.DiGraph([(0, 1), (2, 1)])
>>> G.add_node(3)
>>> nx.is_weakly_connected(G) # node 3 is not connected to the graph
False
>>> G.add_edge(2, 3)
>>> nx.is_weakly_connected(G)
True
See Also
--------
is_strongly_connected
is_semiconnected
is_connected
is_biconnected
weakly_connected_components
Notes
-----
For directed graphs only.
| null | (G, *, backend=None, **backend_kwargs) |
30,821 | networkx.classes.function | is_weighted | Returns True if `G` has weighted edges.
Parameters
----------
G : graph
A NetworkX graph.
edge : tuple, optional
A 2-tuple specifying the only edge in `G` that will be tested. If
None, then every edge in `G` is tested.
weight: string, optional
The attribute name used to query for edge weights.
Returns
-------
bool
A boolean signifying if `G`, or the specified edge, is weighted.
Raises
------
NetworkXError
If the specified edge does not exist.
Examples
--------
>>> G = nx.path_graph(4)
>>> nx.is_weighted(G)
False
>>> nx.is_weighted(G, (2, 3))
False
>>> G = nx.DiGraph()
>>> G.add_edge(1, 2, weight=1)
>>> nx.is_weighted(G)
True
| def is_weighted(G, edge=None, weight="weight"):
"""Returns True if `G` has weighted edges.
Parameters
----------
G : graph
A NetworkX graph.
edge : tuple, optional
A 2-tuple specifying the only edge in `G` that will be tested. If
None, then every edge in `G` is tested.
weight: string, optional
The attribute name used to query for edge weights.
Returns
-------
bool
A boolean signifying if `G`, or the specified edge, is weighted.
Raises
------
NetworkXError
If the specified edge does not exist.
Examples
--------
>>> G = nx.path_graph(4)
>>> nx.is_weighted(G)
False
>>> nx.is_weighted(G, (2, 3))
False
>>> G = nx.DiGraph()
>>> G.add_edge(1, 2, weight=1)
>>> nx.is_weighted(G)
True
"""
if edge is not None:
data = G.get_edge_data(*edge)
if data is None:
msg = f"Edge {edge!r} does not exist."
raise nx.NetworkXError(msg)
return weight in data
if is_empty(G):
# Special handling required since: all([]) == True
return False
return all(weight in data for u, v, data in G.edges(data=True))
| (G, edge=None, weight='weight') |
30,823 | networkx.algorithms.isolate | isolates | Iterator over isolates in the graph.
An *isolate* is a node with no neighbors (that is, with degree
zero). For directed graphs, this means no in-neighbors and no
out-neighbors.
Parameters
----------
G : NetworkX graph
Returns
-------
iterator
An iterator over the isolates of `G`.
Examples
--------
To get a list of all isolates of a graph, use the :class:`list`
constructor::
>>> G = nx.Graph()
>>> G.add_edge(1, 2)
>>> G.add_node(3)
>>> list(nx.isolates(G))
[3]
To remove all isolates in the graph, first create a list of the
isolates, then use :meth:`Graph.remove_nodes_from`::
>>> G.remove_nodes_from(list(nx.isolates(G)))
>>> list(G)
[1, 2]
For digraphs, isolates have zero in-degree and zero out-degree::
>>> G = nx.DiGraph([(0, 1), (1, 2)])
>>> G.add_node(3)
>>> list(nx.isolates(G))
[3]
| null | (G, *, backend=None, **backend_kwargs) |
30,825 | networkx.algorithms.link_prediction | jaccard_coefficient | Compute the Jaccard coefficient of all node pairs in ebunch.
Jaccard coefficient of nodes `u` and `v` is defined as
.. math::
\frac{|\Gamma(u) \cap \Gamma(v)|}{|\Gamma(u) \cup \Gamma(v)|}
where $\Gamma(u)$ denotes the set of neighbors of $u$.
Parameters
----------
G : graph
A NetworkX undirected graph.
ebunch : iterable of node pairs, optional (default = None)
Jaccard coefficient will be computed for each pair of nodes
given in the iterable. The pairs must be given as 2-tuples
(u, v) where u and v are nodes in the graph. If ebunch is None
then all nonexistent edges in the graph will be used.
Default value: None.
Returns
-------
piter : iterator
An iterator of 3-tuples in the form (u, v, p) where (u, v) is a
pair of nodes and p is their Jaccard coefficient.
Raises
------
NetworkXNotImplemented
If `G` is a `DiGraph`, a `Multigraph` or a `MultiDiGraph`.
NodeNotFound
If `ebunch` has a node that is not in `G`.
Examples
--------
>>> G = nx.complete_graph(5)
>>> preds = nx.jaccard_coefficient(G, [(0, 1), (2, 3)])
>>> for u, v, p in preds:
... print(f"({u}, {v}) -> {p:.8f}")
(0, 1) -> 0.60000000
(2, 3) -> 0.60000000
References
----------
.. [1] D. Liben-Nowell, J. Kleinberg.
The Link Prediction Problem for Social Networks (2004).
http://www.cs.cornell.edu/home/kleinber/link-pred.pdf
| null | (G, ebunch=None, *, backend=None, **backend_kwargs) |
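The formula above is a one-liner over neighbor sets. A stand-alone sketch (our helper name), taking the graph as an adjacency dict and, as one possible convention, assigning coefficient 0 to a pair whose neighborhoods are both empty:

```python
def jaccard_sketch(adj, pairs):
    """Yield (u, v, |N(u) & N(v)| / |N(u) | N(v)|) for each node pair,
    with coefficient 0.0 when both neighborhoods are empty."""
    for u, v in pairs:
        union = adj[u] | adj[v]
        coeff = len(adj[u] & adj[v]) / len(union) if union else 0.0
        yield u, v, coeff

# Complete graph on 5 nodes as an adjacency dict, as in the docstring.
K5 = {u: {v for v in range(5) if v != u} for u in range(5)}
for u, v, p in jaccard_sketch(K5, [(0, 1), (2, 3)]):
    print(f"({u}, {v}) -> {p:.8f}")  # both 0.60000000
```

In $K_5$ each pair shares the 3 remaining nodes as common neighbors out of 5 in the union, hence 0.6.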
30,826 | networkx.algorithms.shortest_paths.weighted | johnson | Uses Johnson's Algorithm to compute shortest paths.
Johnson's Algorithm finds a shortest path between each pair of
nodes in a weighted graph even if negative weights are present.
Parameters
----------
G : NetworkX graph
weight : string or function
If this is a string, then edge weights will be accessed via the
edge attribute with this key (that is, the weight of the edge
joining `u` to `v` will be ``G.edges[u, v][weight]``). If no
such edge attribute exists, the weight of the edge is assumed to
be one.
If this is a function, the weight of an edge is the value
returned by the function. The function must accept exactly three
positional arguments: the two endpoints of an edge and the
dictionary of edge attributes for that edge. The function must
return a number.
Returns
-------
distance : dictionary
Dictionary, keyed by source and target, of shortest paths.
Examples
--------
>>> graph = nx.DiGraph()
>>> graph.add_weighted_edges_from(
... [("0", "3", 3), ("0", "1", -5), ("0", "2", 2), ("1", "2", 4), ("2", "3", 1)]
... )
>>> paths = nx.johnson(graph, weight="weight")
>>> paths["0"]["2"]
['0', '1', '2']
Notes
-----
Johnson's algorithm is suitable even for graphs with negative weights. It
works by using the Bellman–Ford algorithm to compute a transformation of
the input graph that removes all negative weights, allowing Dijkstra's
algorithm to be used on the transformed graph.
The time complexity of this algorithm is $O(n^2 \log n + n m)$,
where $n$ is the number of nodes and $m$ the number of edges in the
graph. For dense graphs, this may be faster than the Floyd–Warshall
algorithm.
See Also
--------
floyd_warshall_predecessor_and_distance
floyd_warshall_numpy
all_pairs_shortest_path
all_pairs_shortest_path_length
all_pairs_dijkstra_path
bellman_ford_predecessor_and_distance
all_pairs_bellman_ford_path
all_pairs_bellman_ford_path_length
| def _dijkstra_multisource(
G, sources, weight, pred=None, paths=None, cutoff=None, target=None
):
"""Uses Dijkstra's algorithm to find shortest weighted paths
Parameters
----------
G : NetworkX graph
sources : non-empty iterable of nodes
Starting nodes for paths. If this is just an iterable containing
a single node, then all paths computed by this function will
start from that node. If there are two or more nodes in this
iterable, the computed paths may begin from any one of the start
nodes.
weight: function
Function with (u, v, data) input that returns that edge's weight
or None to indicate a hidden edge
pred: dict of lists, optional(default=None)
dict to store a list of predecessors keyed by that node
If None, predecessors are not stored.
paths: dict, optional (default=None)
dict to store the path list from source to each node, keyed by node.
If None, paths are not stored.
target : node label, optional
Ending node for path. Search is halted when target is found.
cutoff : integer or float, optional
Length (sum of edge weights) at which the search is stopped.
If cutoff is provided, only return paths with summed weight <= cutoff.
Returns
-------
distance : dictionary
A mapping from node to shortest distance to that node from one
of the source nodes.
Raises
------
NodeNotFound
If any of `sources` is not in `G`.
Notes
-----
The optional predecessor and path dictionaries can be accessed by
the caller through the original pred and paths objects passed
as arguments. No need to explicitly return pred or paths.
"""
G_succ = G._adj # For speed-up (and works for both directed and undirected graphs)
push = heappush
pop = heappop
dist = {} # dictionary of final distances
seen = {}
# fringe is heapq with 3-tuples (distance,c,node)
# use the count c to avoid comparing nodes (may not be able to)
c = count()
fringe = []
for source in sources:
seen[source] = 0
push(fringe, (0, next(c), source))
while fringe:
(d, _, v) = pop(fringe)
if v in dist:
continue # already searched this node.
dist[v] = d
if v == target:
break
for u, e in G_succ[v].items():
cost = weight(v, u, e)
if cost is None:
continue
vu_dist = dist[v] + cost
if cutoff is not None:
if vu_dist > cutoff:
continue
if u in dist:
u_dist = dist[u]
if vu_dist < u_dist:
raise ValueError("Contradictory paths found:", "negative weights?")
elif pred is not None and vu_dist == u_dist:
pred[u].append(v)
elif u not in seen or vu_dist < seen[u]:
seen[u] = vu_dist
push(fringe, (vu_dist, next(c), u))
if paths is not None:
paths[u] = paths[v] + [u]
if pred is not None:
pred[u] = [v]
elif vu_dist == seen[u]:
if pred is not None:
pred[u].append(v)
# The optional predecessor and path dictionaries can be accessed
# by the caller via the pred and paths objects passed as arguments.
return dist
| (G, weight='weight', *, backend=None, **backend_kwargs) |
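The loop above can be distilled into a small self-contained sketch (the `dijkstra_multisource` helper and the dict-of-dicts `adj` below are illustrative names, not NetworkX API; it keeps only the distance bookkeeping and drops the `pred`/`paths`/`cutoff` machinery):

```python
from heapq import heappush, heappop
from itertools import count

def dijkstra_multisource(adj, sources):
    """Multisource Dijkstra over adj: node -> {neighbor: weight}."""
    dist = {}      # finalized shortest distances
    seen = {}      # best tentative distance pushed so far
    c = count()    # tie-breaker so the heap never compares nodes
    fringe = []
    for s in sources:
        seen[s] = 0
        heappush(fringe, (0, next(c), s))
    while fringe:
        d, _, v = heappop(fringe)
        if v in dist:
            continue  # stale heap entry; v is already finalized
        dist[v] = d
        for u, w in adj[v].items():
            vu = d + w
            if u not in dist and (u not in seen or vu < seen[u]):
                seen[u] = vu
                heappush(fringe, (vu, next(c), u))
    return dist
```

With several sources, each node ends up with the distance from whichever source is nearest, exactly as the fringe initialization implies.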
30,827 | networkx.algorithms.tree.operations | join | A deprecated name for `join_trees`
Returns a new rooted tree with a root node joined with the roots
of each of the given rooted trees.
.. deprecated:: 3.2
`join` is deprecated in NetworkX v3.2 and will be removed in v3.4.
It has been renamed join_trees with the same syntax/interface.
| def join(rooted_trees, label_attribute=None):
"""A deprecated name for `join_trees`
Returns a new rooted tree with a root node joined with the roots
of each of the given rooted trees.
.. deprecated:: 3.2
`join` is deprecated in NetworkX v3.2 and will be removed in v3.4.
It has been renamed join_trees with the same syntax/interface.
"""
import warnings
warnings.warn(
"The function `join` is deprecated and is renamed `join_trees`.\n"
"The ``join`` function itself will be removed in v3.4",
DeprecationWarning,
stacklevel=2,
)
return join_trees(rooted_trees, label_attribute=label_attribute)
| (rooted_trees, label_attribute=None) |
30,828 | networkx.algorithms.tree.operations | join_trees | Returns a new rooted tree made by joining `rooted_trees`
Constructs a new tree by joining each tree in `rooted_trees`.
A new root node is added and connected to each of the roots
of the input trees. While copying the nodes from the trees,
relabeling to integers occurs. If the `label_attribute` is provided,
the old node labels will be stored in the new tree under this attribute.
Parameters
----------
rooted_trees : list
A list of pairs in which each left element is a NetworkX graph
object representing a tree and each right element is the root
node of that tree. The nodes of these trees will be relabeled to
integers.
label_attribute : str
If provided, the old node labels will be stored in the new tree
under this node attribute. If not provided, the original labels
of the nodes in the input trees are not stored.
first_label : int, optional (default=0)
Specifies the label for the new root node. If provided, the root node of the joined tree
will have this label. If not provided, the root node will default to a label of 0.
Returns
-------
NetworkX graph
The rooted tree resulting from joining the provided `rooted_trees`. The new tree has a root node
labeled as specified by `first_label` (defaulting to 0 if not provided). Subtrees from the input
`rooted_trees` are attached to this new root node. Each non-root node, if the `label_attribute`
is provided, has an attribute that indicates the original label of the node in the input tree.
Notes
-----
Trees are stored in NetworkX as NetworkX Graphs. There is no specific
enforcement of the fact that these are trees. Testing for each tree
can be done using :func:`networkx.is_tree`.
Graph, edge, and node attributes are propagated from the given
rooted trees to the created tree. If there are any overlapping graph
attributes, those from later trees will overwrite those from earlier
trees in the tuple of positional arguments.
Examples
--------
Join two full balanced binary trees of height *h* to get a full
balanced binary tree of depth *h* + 1::
>>> h = 4
>>> left = nx.balanced_tree(2, h)
>>> right = nx.balanced_tree(2, h)
>>> joined_tree = nx.join_trees([(left, 0), (right, 0)])
>>> nx.is_isomorphic(joined_tree, nx.balanced_tree(2, h + 1))
True
| null | (rooted_trees, *, label_attribute=None, first_label=0, backend=None, **backend_kwargs) |
30,829 | networkx.generators.joint_degree_seq | joint_degree_graph | Generates a random simple graph with the given joint degree dictionary.
Parameters
----------
joint_degrees : dictionary of dictionary of integers
A joint degree dictionary in which entry ``joint_degrees[k][l]`` is the
number of edges joining nodes of degree *k* with nodes of degree *l*.
seed : integer, random_state, or None (default)
Indicator of random number generation state.
See :ref:`Randomness<randomness>`.
Returns
-------
G : Graph
A graph with the specified joint degree dictionary.
Raises
------
NetworkXError
If *joint_degrees* dictionary is not realizable.
Notes
-----
In each iteration of the "while loop" the algorithm picks two disconnected
nodes *v* and *w*, of degree *k* and *l* correspondingly, for which
``joint_degrees[k][l]`` has not reached its target yet. It then adds
edge (*v*, *w*) and increases the number of edges in graph G by one.
The intelligence of the algorithm lies in the fact that it is always
possible to add an edge between such disconnected nodes *v* and *w*,
even if one or both nodes do not have free stubs. That is made possible by
executing a "neighbor switch", an edge rewiring move that releases
a free stub while keeping the joint degree of G the same.
The algorithm continues for E (number of edges) iterations of
the "while loop", at the which point all entries of the given
``joint_degrees[k][l]`` have reached their target values and the
construction is complete.
References
----------
.. [1] M. Gjoka, B. Tillman, A. Markopoulou, "Construction of Simple
Graphs with a Target Joint Degree Matrix and Beyond", IEEE Infocom, '15
Examples
--------
>>> joint_degrees = {
... 1: {4: 1},
... 2: {2: 2, 3: 2, 4: 2},
... 3: {2: 2, 4: 1},
... 4: {1: 1, 2: 2, 3: 1},
... }
>>> G = nx.joint_degree_graph(joint_degrees)
>>>
| null | (joint_degrees, seed=None, *, backend=None, **backend_kwargs) |
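The matrix can be checked for realizability with `is_valid_joint_degree` before generation; the node and edge counts of the result follow directly from the matrix (each row sum divided by its degree gives the number of nodes of that degree, and half the grand total of all entries gives the edge count):

```python
import networkx as nx

joint_degrees = {
    1: {4: 1},
    2: {2: 2, 3: 2, 4: 2},
    3: {2: 2, 4: 1},
    4: {1: 1, 2: 2, 3: 1},
}
assert nx.is_valid_joint_degree(joint_degrees)
G = nx.joint_degree_graph(joint_degrees, seed=42)
```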
30,832 | networkx.algorithms.tree.decomposition | junction_tree | Returns a junction tree of a given graph.
A junction tree (or clique tree) is constructed from a (un)directed graph G.
The tree is constructed based on a moralized and triangulated version of G.
The tree's nodes consist of maximal cliques and sepsets of the revised graph.
The sepset of two cliques is the intersection of the nodes of these cliques,
e.g. the sepset of (A,B,C) and (A,C,E,F) is (A,C). These nodes are often called
"variables" in this literature. The tree is bipartite with each sepset
connected to its two cliques.
Junction Trees are not unique as the order of clique consideration determines
which sepsets are included.
The junction tree algorithm consists of five steps [1]_:
1. Moralize the graph
2. Triangulate the graph
3. Find maximal cliques
4. Build the tree from cliques, connecting cliques with shared
nodes, set edge-weight to number of shared variables
5. Find maximum spanning tree
Parameters
----------
G : networkx.Graph
Directed or undirected graph.
Returns
-------
junction_tree : networkx.Graph
The corresponding junction tree of `G`.
Raises
------
NetworkXNotImplemented
Raised if `G` is an instance of `MultiGraph` or `MultiDiGraph`.
References
----------
.. [1] Junction tree algorithm:
https://en.wikipedia.org/wiki/Junction_tree_algorithm
.. [2] Finn V. Jensen and Frank Jensen. 1994. Optimal
junction trees. In Proceedings of the Tenth international
conference on Uncertainty in artificial intelligence (UAI’94).
Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 360–366.
| null | (G, *, backend=None, **backend_kwargs) |
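For a tree-shaped input the five steps are easy to trace by hand: a path on four nodes is already moral and chordal, its maximal cliques are its three edges, and the sepsets {1} and {2} sit between consecutive cliques, so the junction tree has five nodes in total.

```python
import networkx as nx

G = nx.path_graph(4)        # 0-1-2-3
jt = nx.junction_tree(G)
# bipartite tree: cliques (0, 1), (1, 2), (2, 3) alternating with sepsets (1,), (2,)
```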
30,833 | networkx.algorithms.connectivity.kcomponents | k_components | Returns the k-component structure of a graph G.
A `k`-component is a maximal subgraph of a graph G that has, at least,
node connectivity `k`: we need to remove at least `k` nodes to break it
into more components. `k`-components have an inherent hierarchical
structure because they are nested in terms of connectivity: a connected
graph can contain several 2-components, each of which can contain
one or more 3-components, and so forth.
Parameters
----------
G : NetworkX graph
flow_func : function
Function to perform the underlying flow computations. Default value
:meth:`edmonds_karp`. This function performs better in sparse graphs with
right tailed degree distributions. :meth:`shortest_augmenting_path` will
perform better in denser graphs.
Returns
-------
k_components : dict
Dictionary with all connectivity levels `k` in the input Graph as keys
and a list of sets of nodes that form a k-component of level `k` as
values.
Raises
------
NetworkXNotImplemented
If the input graph is directed.
Examples
--------
>>> # Petersen graph has 10 nodes and it is triconnected, thus all
>>> # nodes are in a single component on all three connectivity levels
>>> G = nx.petersen_graph()
>>> k_components = nx.k_components(G)
Notes
-----
Moody and White [1]_ (appendix A) provide an algorithm for identifying
k-components in a graph, which is based on Kanevsky's algorithm [2]_
for finding all minimum-size node cut-sets of a graph (implemented in
:meth:`all_node_cuts` function):
1. Compute node connectivity, k, of the input graph G.
2. Identify all k-cutsets at the current level of connectivity using
Kanevsky's algorithm.
3. Generate new graph components based on the removal of
these cutsets. Nodes in a cutset belong to both sides
of the induced cut.
4. If the graph is neither complete nor trivial, return to 1;
else end.
This implementation also uses some heuristics (see [3]_ for details)
to speed up the computation.
See also
--------
node_connectivity
all_node_cuts
biconnected_components : special case of this function when k=2
k_edge_components : similar to this function, but uses edge-connectivity
instead of node-connectivity
References
----------
.. [1] Moody, J. and D. White (2003). Social cohesion and embeddedness:
A hierarchical conception of social groups.
American Sociological Review 68(1), 103--28.
http://www2.asanet.org/journals/ASRFeb03MoodyWhite.pdf
.. [2] Kanevsky, A. (1993). Finding all minimum-size separating vertex
sets in a graph. Networks 23(6), 533--541.
http://onlinelibrary.wiley.com/doi/10.1002/net.3230230604/abstract
.. [3] Torrents, J. and F. Ferraro (2015). Structural Cohesion:
Visualization and Heuristics for Fast Computation.
https://arxiv.org/pdf/1503.04476v1
| null | (G, flow_func=None, *, backend=None, **backend_kwargs) |
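Extending the Petersen-graph example from the docstring: the graph is 3-connected, so every level from 1 to 3 consists of a single component containing all ten nodes.

```python
import networkx as nx

G = nx.petersen_graph()
result = nx.k_components(G)   # keys are connectivity levels 1, 2, 3
```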
30,834 | networkx.algorithms.core | k_core | Returns the k-core of G.
A k-core is a maximal subgraph that contains nodes of degree `k` or more.
.. deprecated:: 3.3
`k_core` will not accept `MultiGraph` objects in version 3.5.
Parameters
----------
G : NetworkX graph
A graph or directed graph
k : int, optional
The order of the core. If not specified return the main core.
core_number : dictionary, optional
Precomputed core numbers for the graph G.
Returns
-------
G : NetworkX graph
The k-core subgraph
Raises
------
NetworkXNotImplemented
The k-core is not defined for multigraphs or graphs with self loops.
Notes
-----
The main core is the core with `k` as the largest core_number.
For directed graphs the node degree is defined to be the
in-degree + out-degree.
Graph, node, and edge attributes are copied to the subgraph.
Examples
--------
>>> degrees = [0, 1, 2, 2, 2, 2, 3]
>>> H = nx.havel_hakimi_graph(degrees)
>>> H.degree
DegreeView({0: 1, 1: 2, 2: 2, 3: 2, 4: 2, 5: 3, 6: 0})
>>> nx.k_core(H).nodes
NodeView((1, 2, 3, 5))
See Also
--------
core_number
References
----------
.. [1] An O(m) Algorithm for Cores Decomposition of Networks
Vladimir Batagelj and Matjaz Zaversnik, 2003.
https://arxiv.org/abs/cs.DS/0310049
| null | (G, k=None, core_number=None, *, backend=None, **backend_kwargs) |
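`k_core` and its relatives (`k_shell`, `k_crust`, `k_corona`) all accept a precomputed `core_number` mapping, so on a large graph the core numbers need only be computed once (values below match the docstring's Havel-Hakimi example):

```python
import networkx as nx

H = nx.havel_hakimi_graph([0, 1, 2, 2, 2, 2, 3])
cn = nx.core_number(H)                      # computed once, reused below
core = nx.k_core(H, core_number=cn)         # main core (k = 2 for this graph)
shell = nx.k_shell(H, k=1, core_number=cn)  # nodes with core number exactly 1
```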
30,835 | networkx.algorithms.core | k_corona | Returns the k-corona of G.
The k-corona is the subgraph of nodes in the k-core which have
exactly k neighbors in the k-core.
.. deprecated:: 3.3
`k_corona` will not accept `MultiGraph` objects in version 3.5.
Parameters
----------
G : NetworkX graph
A graph or directed graph
k : int
The order of the corona.
core_number : dictionary, optional
Precomputed core numbers for the graph G.
Returns
-------
G : NetworkX graph
The k-corona subgraph
Raises
------
NetworkXNotImplemented
The k-corona is not defined for multigraphs or graphs with self loops.
Notes
-----
For directed graphs the node degree is defined to be the
in-degree + out-degree.
Graph, node, and edge attributes are copied to the subgraph.
Examples
--------
>>> degrees = [0, 1, 2, 2, 2, 2, 3]
>>> H = nx.havel_hakimi_graph(degrees)
>>> H.degree
DegreeView({0: 1, 1: 2, 2: 2, 3: 2, 4: 2, 5: 3, 6: 0})
>>> nx.k_corona(H, k=2).nodes
NodeView((1, 2, 3, 5))
See Also
--------
core_number
References
----------
.. [1] k -core (bootstrap) percolation on complex networks:
Critical phenomena and nonlocal effects,
A. V. Goltsev, S. N. Dorogovtsev, and J. F. F. Mendes,
Phys. Rev. E 73, 056101 (2006)
http://link.aps.org/doi/10.1103/PhysRevE.73.056101
| null | (G, k, core_number=None, *, backend=None, **backend_kwargs) |
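The "exactly k neighbors" condition is what separates the corona from the core. In K5 every node has core number 4 and exactly four neighbors in the 4-core, so the 4-corona is the whole graph, while the 3-corona is empty.

```python
import networkx as nx

G = nx.complete_graph(5)
corona4 = nx.k_corona(G, k=4)   # all five nodes qualify
corona3 = nx.k_corona(G, k=3)   # empty: no node has core number 3
```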
30,836 | networkx.algorithms.core | k_crust | Returns the k-crust of G.
The k-crust is the graph G with the edges of the k-core removed
and isolated nodes found after the removal of edges are also removed.
.. deprecated:: 3.3
`k_crust` will not accept `MultiGraph` objects in version 3.5.
Parameters
----------
G : NetworkX graph
A graph or directed graph.
k : int, optional
The order of the shell. If not specified return the main crust.
core_number : dictionary, optional
Precomputed core numbers for the graph G.
Returns
-------
G : NetworkX graph
The k-crust subgraph
Raises
------
NetworkXNotImplemented
The k-crust is not implemented for multigraphs or graphs with self loops.
Notes
-----
This definition of k-crust is different than the definition in [1]_.
The k-crust in [1]_ is equivalent to the k+1 crust of this algorithm.
For directed graphs the node degree is defined to be the
in-degree + out-degree.
Graph, node, and edge attributes are copied to the subgraph.
Examples
--------
>>> degrees = [0, 1, 2, 2, 2, 2, 3]
>>> H = nx.havel_hakimi_graph(degrees)
>>> H.degree
DegreeView({0: 1, 1: 2, 2: 2, 3: 2, 4: 2, 5: 3, 6: 0})
>>> nx.k_crust(H, k=1).nodes
NodeView((0, 4, 6))
See Also
--------
core_number
References
----------
.. [1] A model of Internet topology using k-shell decomposition
Shai Carmi, Shlomo Havlin, Scott Kirkpatrick, Yuval Shavitt,
and Eran Shir, PNAS July 3, 2007 vol. 104 no. 27 11150-11154
http://www.pnas.org/content/104/27/11150.full
| null | (G, k=None, core_number=None, *, backend=None, **backend_kwargs) |
30,837 | networkx.algorithms.connectivity.edge_augmentation | k_edge_augmentation | Finds set of edges to k-edge-connect G.
Adding edges from the augmentation to G make it impossible to disconnect G
unless k or more edges are removed. This function uses the most efficient
function available (depending on the value of k and if the problem is
weighted or unweighted) to search for a minimum weight subset of available
edges that k-edge-connects G. In general, finding a k-edge-augmentation is
NP-hard, so solutions are not guaranteed to be minimal. Furthermore, a
k-edge-augmentation may not exist.
Parameters
----------
G : NetworkX graph
An undirected graph.
k : integer
Desired edge connectivity
avail : dict or a set of 2 or 3 tuples
The available edges that can be used in the augmentation.
If unspecified, then all edges in the complement of G are available.
Otherwise, each item is an available edge (with an optional weight).
In the unweighted case, each item is an edge ``(u, v)``.
In the weighted case, each item is a 3-tuple ``(u, v, d)`` or a dict
with items ``(u, v): d``. The third item, ``d``, can be a dictionary
or a real number. If ``d`` is a dictionary ``d[weight]``
corresponds to the weight.
weight : string
key to use to find weights if ``avail`` is a set of 3-tuples where the
third item in each tuple is a dictionary.
partial : boolean
If partial is True and no feasible k-edge-augmentation exists, then a
partial k-edge-augmentation is generated. Adding the edges in a
partial augmentation to G, minimizes the number of k-edge-connected
components and maximizes the edge connectivity between those
components. For details, see :func:`partial_k_edge_augmentation`.
Yields
------
edge : tuple
Edges that, once added to G, would cause G to become k-edge-connected.
If partial is False, an error is raised if this is not possible.
Otherwise, generated edges form a partial augmentation, which
k-edge-connects any part of G where it is possible, and maximally
connects the remaining parts.
Raises
------
NetworkXUnfeasible
If partial is False and no k-edge-augmentation exists.
NetworkXNotImplemented
If the input graph is directed or a multigraph.
ValueError:
If k is less than 1
Notes
-----
When k=1 this returns an optimal solution.
When k=2 and ``avail`` is None, this returns an optimal solution.
Otherwise when k=2, this returns a 2-approximation of the optimal solution.
For k>3, this problem is NP-hard and this uses a randomized algorithm that
produces a feasible solution, but provides no guarantees on the
solution weight.
Examples
--------
>>> # Unweighted cases
>>> G = nx.path_graph((1, 2, 3, 4))
>>> G.add_node(5)
>>> sorted(nx.k_edge_augmentation(G, k=1))
[(1, 5)]
>>> sorted(nx.k_edge_augmentation(G, k=2))
[(1, 5), (5, 4)]
>>> sorted(nx.k_edge_augmentation(G, k=3))
[(1, 4), (1, 5), (2, 5), (3, 5), (4, 5)]
>>> complement = list(nx.k_edge_augmentation(G, k=5, partial=True))
>>> G.add_edges_from(complement)
>>> nx.edge_connectivity(G)
4
>>> # Weighted cases
>>> G = nx.path_graph((1, 2, 3, 4))
>>> G.add_node(5)
>>> # avail can be a tuple with a dict
>>> avail = [(1, 5, {"weight": 11}), (2, 5, {"weight": 10})]
>>> sorted(nx.k_edge_augmentation(G, k=1, avail=avail, weight="weight"))
[(2, 5)]
>>> # or avail can be a 3-tuple with a real number
>>> avail = [(1, 5, 11), (2, 5, 10), (4, 3, 1), (4, 5, 51)]
>>> sorted(nx.k_edge_augmentation(G, k=2, avail=avail))
[(1, 5), (2, 5), (4, 5)]
>>> # or avail can be a dict
>>> avail = {(1, 5): 11, (2, 5): 10, (4, 3): 1, (4, 5): 51}
>>> sorted(nx.k_edge_augmentation(G, k=2, avail=avail))
[(1, 5), (2, 5), (4, 5)]
>>> # If augmentation is infeasible, then a partial solution can be found
>>> avail = {(1, 5): 11}
>>> sorted(nx.k_edge_augmentation(G, k=2, avail=avail, partial=True))
[(1, 5)]
| def unconstrained_bridge_augmentation(G):
"""Finds an optimal 2-edge-augmentation of G using the fewest edges.
This is an implementation of the algorithm detailed in [1]_.
The basic idea is to construct a meta-graph of bridge-ccs, connect leaf
nodes of the trees to connect the entire graph, and finally connect the
leafs of the tree in dfs-preorder to bridge connect the entire graph.
Parameters
----------
G : NetworkX graph
An undirected graph.
Yields
------
edge : tuple
Edges in the bridge augmentation of G
Notes
-----
Input: a graph G.
First find the bridge components of G and collapse each bridge-cc into a
node of a metagraph graph C, which is guaranteed to be a forest of trees.
C contains p "leafs" --- nodes with exactly one incident edge.
C contains q "isolated nodes" --- nodes with no incident edges.
Theorem: If p + q > 1, then at least :math:`ceil(p / 2) + q` edges are
needed to bridge connect C. This algorithm achieves this min number.
The method first adds enough edges to make G into a tree and then pairs
leafs in a simple fashion.
Let n be the number of trees in C. Let v(i) be an isolated vertex in the
i-th tree if one exists, otherwise it is a pair of distinct leafs nodes
in the i-th tree. Alternating edges from these sets (i.e. adding edges
A1 = [(v(i)[0], v(i + 1)[1]), (v(i + 1)[0], v(i + 2)[1]), ...]) connects C
into a tree T. This tree has p' = p + 2q - 2(n -1) leafs and no isolated
vertices. A1 has n - 1 edges. The next step finds ceil(p' / 2) edges to
biconnect any tree with p' leafs.
Convert T into an arborescence T' by picking an arbitrary root node with
degree >= 2 and directing all edges away from the root. Note the
implementation implicitly constructs T'.
The leafs of T are the nodes with no existing edges in T'.
Order the leafs of T' by DFS preorder. Then break this list in half
and add the zipped pairs to A2.
The set A = A1 + A2 is the minimum augmentation in the metagraph.
To convert this to edges in the original graph
References
----------
.. [1] Eswaran, Kapali P., and R. Endre Tarjan. (1975) Augmentation problems.
http://epubs.siam.org/doi/abs/10.1137/0205044
See Also
--------
:func:`bridge_augmentation`
:func:`k_edge_augmentation`
Examples
--------
>>> G = nx.path_graph((1, 2, 3, 4, 5, 6, 7))
>>> sorted(unconstrained_bridge_augmentation(G))
[(1, 7)]
>>> G = nx.path_graph((1, 2, 3, 2, 4, 5, 6, 7))
>>> sorted(unconstrained_bridge_augmentation(G))
[(1, 3), (3, 7)]
>>> G = nx.Graph([(0, 1), (0, 2), (1, 2)])
>>> G.add_node(4)
>>> sorted(unconstrained_bridge_augmentation(G))
[(1, 4), (4, 0)]
"""
# -----
# Mapping of terms from (Eswaran and Tarjan):
# G = G_0 - the input graph
# C = G_0' - the bridge condensation of G. (This is a forest of trees)
# A1 = A_1 - the edges to connect the forest into a tree
# leaf = pendant - a node with degree of 1
# alpha(v) = maps the node v in G to its meta-node in C
# beta(x) = maps the meta-node x in C to any node in the bridge
# component of G corresponding to x.
# find the 2-edge-connected components of G
bridge_ccs = list(nx.connectivity.bridge_components(G))
# condense G into a forest C
C = collapse(G, bridge_ccs)
# Choose pairs of distinct leaf nodes in each tree. If this is not
# possible then make a pair using the single isolated node in the tree.
vset1 = [
tuple(cc) * 2 # case1: an isolated node
if len(cc) == 1
else sorted(cc, key=C.degree)[0:2] # case2: pair of leaf nodes
for cc in nx.connected_components(C)
]
if len(vset1) > 1:
# Use this set to construct edges that connect C into a tree.
nodes1 = [vs[0] for vs in vset1]
nodes2 = [vs[1] for vs in vset1]
A1 = list(zip(nodes1[1:], nodes2))
else:
A1 = []
# Connect each tree in the forest to construct an arborescence
T = C.copy()
T.add_edges_from(A1)
# If there are only two leaf nodes, we simply connect them.
leafs = [n for n, d in T.degree() if d == 1]
if len(leafs) == 1:
A2 = []
if len(leafs) == 2:
A2 = [tuple(leafs)]
else:
# Choose an arbitrary non-leaf root
try:
root = next(n for n, d in T.degree() if d > 1)
except StopIteration: # no nodes found with degree > 1
return
# order the leaves of C by (induced directed) preorder
v2 = [n for n in nx.dfs_preorder_nodes(T, root) if T.degree(n) == 1]
# connecting first half of the leafs in pre-order to the second
# half will bridge connect the tree with the fewest edges.
half = math.ceil(len(v2) / 2)
A2 = list(zip(v2[:half], v2[-half:]))
# collect the edges used to augment the original forest
aug_tree_edges = A1 + A2
# Construct the mapping (beta) from meta-nodes to regular nodes
inverse = defaultdict(list)
for k, v in C.graph["mapping"].items():
inverse[v].append(k)
# sort so we choose minimum degree nodes first
inverse = {
mu: sorted(mapped, key=lambda u: (G.degree(u), u))
for mu, mapped in inverse.items()
}
# For each meta-edge, map back to an arbitrary pair in the original graph
G2 = G.copy()
for mu, mv in aug_tree_edges:
# Find the first available edge that doesn't exist and return it
for u, v in it.product(inverse[mu], inverse[mv]):
if not G2.has_edge(u, v):
G2.add_edge(u, v)
yield u, v
break
| (G, k, avail=None, weight=None, partial=False, *, backend=None, **backend_kwargs) |
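A minimal end-to-end check of the augmentation: a path graph has edge connectivity 1, and adding the edges produced for k=2 bridge-connects it.

```python
import networkx as nx

G = nx.path_graph(5)
assert nx.edge_connectivity(G) == 1
aug = list(nx.k_edge_augmentation(G, k=2))
G.add_edges_from(aug)   # G is now 2-edge-connected
```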
30,838 | networkx.algorithms.connectivity.edge_kcomponents | k_edge_components | Generates nodes in each maximal k-edge-connected component in G.
Parameters
----------
G : NetworkX graph
k : Integer
Desired edge connectivity
Returns
-------
k_edge_components : a generator of k-edge-ccs. Each set of returned nodes
will have k-edge-connectivity in the graph G.
See Also
--------
:func:`local_edge_connectivity`
:func:`k_edge_subgraphs` : similar to this function, but the subgraph
defined by the nodes must also have k-edge-connectivity.
:func:`k_components` : similar to this function, but uses node-connectivity
instead of edge-connectivity
Raises
------
NetworkXNotImplemented
If the input graph is a multigraph.
ValueError:
If k is less than 1
Notes
-----
Attempts to use the most efficient implementation available based on k.
If k=1, this is simply strongly connected components for directed
graphs and connected components for undirected graphs.
If k=2, an efficient bridge connected component algorithm from [1]_ is
run, based on the chain decomposition.
Otherwise, the algorithm from [2]_ is used.
Examples
--------
>>> import itertools as it
>>> from networkx.utils import pairwise
>>> paths = [
... (1, 2, 4, 3, 1, 4),
... (5, 6, 7, 8, 5, 7, 8, 6),
... ]
>>> G = nx.Graph()
>>> G.add_nodes_from(it.chain(*paths))
>>> G.add_edges_from(it.chain(*[pairwise(path) for path in paths]))
>>> # note this returns {1, 4} unlike k_edge_subgraphs
>>> sorted(map(sorted, nx.k_edge_components(G, k=3)))
[[1, 4], [2], [3], [5, 6, 7, 8]]
References
----------
.. [1] https://en.wikipedia.org/wiki/Bridge_%28graph_theory%29
.. [2] Wang, Tianhao, et al. (2015) A simple algorithm for finding all
k-edge-connected components.
http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0136264
| null | (G, k, *, backend=None, **backend_kwargs) |
30,839 | networkx.algorithms.connectivity.edge_kcomponents | k_edge_subgraphs | Generates nodes in each maximal k-edge-connected subgraph in G.
Parameters
----------
G : NetworkX graph
k : Integer
Desired edge connectivity
Returns
-------
k_edge_subgraphs : a generator of k-edge-subgraphs
Each k-edge-subgraph is a maximal set of nodes that defines a subgraph
of G that is k-edge-connected.
See Also
--------
:func:`edge_connectivity`
:func:`k_edge_components` : similar to this function, but nodes only
need to have k-edge-connectivity within the graph G and the subgraphs
might not be k-edge-connected.
Raises
------
NetworkXNotImplemented
If the input graph is a multigraph.
ValueError:
If k is less than 1
Notes
-----
Attempts to use the most efficient implementation available based on k.
If k=1, or k=2 and the graph is undirected, then this simply calls
`k_edge_components`. Otherwise the algorithm from [1]_ is used.
Examples
--------
>>> import itertools as it
>>> from networkx.utils import pairwise
>>> paths = [
... (1, 2, 4, 3, 1, 4),
... (5, 6, 7, 8, 5, 7, 8, 6),
... ]
>>> G = nx.Graph()
>>> G.add_nodes_from(it.chain(*paths))
>>> G.add_edges_from(it.chain(*[pairwise(path) for path in paths]))
>>> # note this does not return {1, 4} unlike k_edge_components
>>> sorted(map(sorted, nx.k_edge_subgraphs(G, k=3)))
[[1], [2], [3], [4], [5, 6, 7, 8]]
References
----------
.. [1] Zhou, Liu, et al. (2012) Finding maximal k-edge-connected subgraphs
from a large graph. ACM International Conference on Extending Database
Technology 2012, 480-491.
https://openproceedings.org/2012/conf/edbt/ZhouLYLCL12.pdf
| null | (G, k, *, backend=None, **backend_kwargs) |
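Running the two docstring examples side by side makes the distinction concrete: nodes 1 and 4 are 3-edge-connected within G (three edge-disjoint paths exist between them), but the subgraph induced on {1, 4} alone is joined by a single edge, so it is not returned by `k_edge_subgraphs`.

```python
import networkx as nx

G = nx.Graph()
for p in [(1, 2, 4, 3, 1, 4), (5, 6, 7, 8, 5, 7, 8, 6)]:
    nx.add_path(G, p)
ccs = sorted(map(sorted, nx.k_edge_components(G, k=3)))
subs = sorted(map(sorted, nx.k_edge_subgraphs(G, k=3)))
```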
30,840 | networkx.algorithms.regular | k_factor | Compute a k-factor of G
A k-factor of a graph is a spanning k-regular subgraph.
A spanning k-regular subgraph of G is a subgraph that contains
each vertex of G and a subset of the edges of G such that each
vertex has degree k.
Parameters
----------
G : NetworkX graph
Undirected graph
matching_weight: string, optional (default='weight')
Edge data key corresponding to the edge weight.
Used for finding the max-weighted perfect matching.
If key not found, uses 1 as weight.
Returns
-------
G2 : NetworkX graph
A k-factor of G
Examples
--------
>>> G = nx.Graph([(1, 2), (2, 3), (3, 4), (4, 1)])
>>> G2 = nx.k_factor(G, k=1)
>>> G2.edges()
EdgeView([(1, 2), (3, 4)])
References
----------
.. [1] "An algorithm for computing simple k-factors.",
Meijer, Henk, Yurai Núñez-Rodríguez, and David Rappaport,
Information processing letters, 2009.
| null | (G, k, matching_weight='weight', *, backend=None, **backend_kwargs) |
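A sketch beyond the docstring's 1-factor: K5 is 4-regular, so a 2-factor of it is a spanning 2-regular subgraph, which on five nodes is necessarily a single 5-cycle.

```python
import networkx as nx

G = nx.complete_graph(5)
F = nx.k_factor(G, k=2)   # spanning subgraph in which every node has degree 2
```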
30,841 | networkx.generators.intersection | k_random_intersection_graph | Returns an intersection graph with randomly chosen attribute sets for
each node that are of equal size (k).
Parameters
----------
n : int
The number of nodes in the first bipartite set (nodes)
m : int
The number of nodes in the second bipartite set (attributes)
k : float
Size of attribute set to assign to each node.
seed : integer, random_state, or None (default)
Indicator of random number generation state.
See :ref:`Randomness<randomness>`.
See Also
--------
gnp_random_graph, uniform_random_intersection_graph
References
----------
.. [1] Godehardt, E., and Jaworski, J.
Two models of random intersection graphs and their applications.
Electronic Notes in Discrete Mathematics 10 (2001), 129--132.
| null | (n, m, k, seed=None, *, backend=None, **backend_kwargs) |
30,842 | networkx.algorithms.core | k_shell | Returns the k-shell of G.
The k-shell is the subgraph induced by nodes with core number k.
That is, nodes in the k-core that are not in the (k+1)-core.
.. deprecated:: 3.3
`k_shell` will not accept `MultiGraph` objects in version 3.5.
Parameters
----------
G : NetworkX graph
A graph or directed graph.
k : int, optional
The order of the shell. If not specified return the outer shell.
core_number : dictionary, optional
Precomputed core numbers for the graph G.
Returns
-------
G : NetworkX graph
The k-shell subgraph
Raises
------
NetworkXNotImplemented
The k-shell is not implemented for multigraphs or graphs with self loops.
Notes
-----
This is similar to k_corona but in that case only neighbors in the
k-core are considered.
For directed graphs the node degree is defined to be the
in-degree + out-degree.
Graph, node, and edge attributes are copied to the subgraph.
Examples
--------
>>> degrees = [0, 1, 2, 2, 2, 2, 3]
>>> H = nx.havel_hakimi_graph(degrees)
>>> H.degree
DegreeView({0: 1, 1: 2, 2: 2, 3: 2, 4: 2, 5: 3, 6: 0})
>>> nx.k_shell(H, k=1).nodes
NodeView((0, 4))
See Also
--------
core_number
k_corona
References
----------
.. [1] A model of Internet topology using k-shell decomposition
Shai Carmi, Shlomo Havlin, Scott Kirkpatrick, Yuval Shavitt,
and Eran Shir, PNAS July 3, 2007 vol. 104 no. 27 11150-11154
http://www.pnas.org/content/104/27/11150.full
| null | (G, k=None, core_number=None, *, backend=None, **backend_kwargs) |
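Since each node has exactly one core number, the shells partition the node set; for the docstring's graph the shells at k = 0, 1, 2 are {6}, {0, 4} and {1, 2, 3, 5}.

```python
import networkx as nx

H = nx.havel_hakimi_graph([0, 1, 2, 2, 2, 2, 3])
cn = nx.core_number(H)
# one shell per distinct core number; together they cover every node once
shells = {k: sorted(nx.k_shell(H, k, core_number=cn)) for k in set(cn.values())}
```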
30,843 | networkx.algorithms.core | k_truss | Returns the k-truss of `G`.
The k-truss is the maximal induced subgraph of `G` which contains at least
three vertices where every edge is incident to at least `k-2` triangles.
Parameters
----------
G : NetworkX graph
An undirected graph
k : int
The order of the truss
Returns
-------
H : NetworkX graph
The k-truss subgraph
Raises
------
NetworkXNotImplemented
If `G` is a multigraph or directed graph or if it contains self loops.
Notes
-----
A k-clique is a (k-2)-truss and a k-truss is a (k+1)-core.
Graph, node, and edge attributes are copied to the subgraph.
K-trusses were originally defined in [2] which states that the k-truss
is the maximal induced subgraph where each edge belongs to at least
`k-2` triangles. A more recent paper, [1], uses a slightly different
definition requiring that each edge belong to at least `k` triangles.
This implementation uses the original definition of `k-2` triangles.
Examples
--------
>>> degrees = [0, 1, 2, 2, 2, 2, 3]
>>> H = nx.havel_hakimi_graph(degrees)
>>> H.degree
DegreeView({0: 1, 1: 2, 2: 2, 3: 2, 4: 2, 5: 3, 6: 0})
>>> nx.k_truss(H, k=2).nodes
NodeView((0, 1, 2, 3, 4, 5))
References
----------
.. [1] Bounds and Algorithms for k-truss. Paul Burkhardt, Vance Faber,
David G. Harris, 2018. https://arxiv.org/abs/1806.05523v2
.. [2] Trusses: Cohesive Subgraphs for Social Network Analysis. Jonathan
Cohen, 2005.
| null | (G, k, *, backend=None, **backend_kwargs) |
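Under the `k-2` triangle definition used here, every edge of K5 lies in exactly three triangles, so K5 is its own k-truss for every k up to 5 and loses all edges at k = 6.

```python
import networkx as nx

G = nx.complete_graph(5)
H5 = nx.k_truss(G, k=5)   # every edge sits in 3 >= 5 - 2 triangles
H6 = nx.k_truss(G, k=6)   # no edge sits in 4 triangles, so everything is pruned
```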
30,844 | networkx.drawing.layout | kamada_kawai_layout | Position nodes using Kamada-Kawai path-length cost-function.
Parameters
----------
G : NetworkX graph or list of nodes
A position will be assigned to every node in G.
dist : dict (default=None)
A two-level dictionary of optimal distances between nodes,
indexed by source and destination node.
If None, the distance is computed using shortest_path_length().
pos : dict or None optional (default=None)
Initial positions for nodes as a dictionary with node as keys
and values as a coordinate list or tuple. If None, then use
circular_layout() for dim >= 2 and a linear layout for dim == 1.
weight : string or None optional (default='weight')
The edge attribute that holds the numerical value used for
the edge weight. If None, then all edge weights are 1.
scale : number (default: 1)
Scale factor for positions.
center : array-like or None
Coordinate pair around which to center the layout.
dim : int
Dimension of layout.
Returns
-------
pos : dict
A dictionary of positions keyed by node
Examples
--------
>>> G = nx.path_graph(4)
>>> pos = nx.kamada_kawai_layout(G)
| def kamada_kawai_layout(
G, dist=None, pos=None, weight="weight", scale=1, center=None, dim=2
):
"""Position nodes using Kamada-Kawai path-length cost-function.
Parameters
----------
G : NetworkX graph or list of nodes
A position will be assigned to every node in G.
dist : dict (default=None)
A two-level dictionary of optimal distances between nodes,
indexed by source and destination node.
If None, the distance is computed using shortest_path_length().
pos : dict or None optional (default=None)
Initial positions for nodes as a dictionary with node as keys
and values as a coordinate list or tuple. If None, then use
circular_layout() for dim >= 2 and a linear layout for dim == 1.
weight : string or None optional (default='weight')
The edge attribute that holds the numerical value used for
the edge weight. If None, then all edge weights are 1.
scale : number (default: 1)
Scale factor for positions.
center : array-like or None
Coordinate pair around which to center the layout.
dim : int
Dimension of layout.
Returns
-------
pos : dict
A dictionary of positions keyed by node
Examples
--------
>>> G = nx.path_graph(4)
>>> pos = nx.kamada_kawai_layout(G)
"""
import numpy as np
G, center = _process_params(G, center, dim)
nNodes = len(G)
if nNodes == 0:
return {}
if dist is None:
dist = dict(nx.shortest_path_length(G, weight=weight))
dist_mtx = 1e6 * np.ones((nNodes, nNodes))
for row, nr in enumerate(G):
if nr not in dist:
continue
rdist = dist[nr]
for col, nc in enumerate(G):
if nc not in rdist:
continue
dist_mtx[row][col] = rdist[nc]
if pos is None:
if dim >= 3:
pos = random_layout(G, dim=dim)
elif dim == 2:
pos = circular_layout(G, dim=dim)
else:
pos = dict(zip(G, np.linspace(0, 1, len(G))))
pos_arr = np.array([pos[n] for n in G])
pos = _kamada_kawai_solve(dist_mtx, pos_arr, dim)
pos = rescale_layout(pos, scale=scale) + center
return dict(zip(G, pos))
| (G, dist=None, pos=None, weight='weight', scale=1, center=None, dim=2) |
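As a quick illustration of the return value described above (a sketch; `kamada_kawai_layout` relies on SciPy for the optimization step):

```python
import networkx as nx

G = nx.path_graph(4)
pos = nx.kamada_kawai_layout(G)  # dict: node -> length-2 coordinate array
dims = {len(p) for p in pos.values()}
```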
30,845 | networkx.generators.social | karate_club_graph | Returns Zachary's Karate Club graph.
Each node in the returned graph has a node attribute 'club' that
indicates the name of the club to which the member represented by that node
belongs, either 'Mr. Hi' or 'Officer'. Each edge has a weight based on the
number of contexts in which that edge's incident node members interacted.
Examples
--------
To get the name of the club to which a node belongs::
>>> G = nx.karate_club_graph()
>>> G.nodes[5]["club"]
'Mr. Hi'
>>> G.nodes[9]["club"]
'Officer'
References
----------
.. [1] Zachary, Wayne W.
"An Information Flow Model for Conflict and Fission in Small Groups."
*Journal of Anthropological Research*, 33, 452--473, (1977).
| null | (*, backend=None, **backend_kwargs) |
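A sketch of reading the 'club' node attribute described above:

```python
import networkx as nx
from collections import Counter

G = nx.karate_club_graph()
# every one of the 34 members belongs to either 'Mr. Hi' or 'Officer'
clubs = Counter(club for _, club in G.nodes(data="club"))
```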
30,847 | networkx.algorithms.centrality.katz | katz_centrality | Compute the Katz centrality for the nodes of the graph G.
Katz centrality computes the centrality for a node based on the centrality
of its neighbors. It is a generalization of the eigenvector centrality. The
Katz centrality for node $i$ is
.. math::
x_i = \alpha \sum_{j} A_{ij} x_j + \beta,
where $A$ is the adjacency matrix of graph G with eigenvalues $\lambda$.
The parameter $\beta$ controls the initial centrality and
.. math::
\alpha < \frac{1}{\lambda_{\max}}.
Katz centrality computes the relative influence of a node within a
network by measuring the number of the immediate neighbors (first
degree nodes) and also all other nodes in the network that connect
to the node under consideration through these immediate neighbors.
Extra weight can be provided to immediate neighbors through the
parameter $\beta$. Connections made with distant neighbors
are, however, penalized by an attenuation factor $\alpha$ which
should be strictly less than the inverse largest eigenvalue of the
adjacency matrix in order for the Katz centrality to be computed
correctly. More information is provided in [1]_.
Parameters
----------
G : graph
A NetworkX graph.
alpha : float, optional (default=0.1)
Attenuation factor
beta : scalar or dictionary, optional (default=1.0)
Weight attributed to the immediate neighborhood. If not a scalar, the
dictionary must have a value for every node.
max_iter : integer, optional (default=1000)
Maximum number of iterations in power method.
tol : float, optional (default=1.0e-6)
Error tolerance used to check convergence in power method iteration.
nstart : dictionary, optional
Starting value of Katz iteration for each node.
normalized : bool, optional (default=True)
If True normalize the resulting values.
weight : None or string, optional (default=None)
If None, all edge weights are considered equal.
Otherwise holds the name of the edge attribute used as weight.
In this measure the weight is interpreted as the connection strength.
Returns
-------
nodes : dictionary
Dictionary of nodes with Katz centrality as the value.
Raises
------
NetworkXError
If the parameter `beta` is not a scalar but lacks a value for at least
one node
PowerIterationFailedConvergence
If the algorithm fails to converge to the specified tolerance
within the specified number of iterations of the power iteration
method.
Examples
--------
>>> import math
>>> G = nx.path_graph(4)
>>> phi = (1 + math.sqrt(5)) / 2.0 # largest eigenvalue of adj matrix
>>> centrality = nx.katz_centrality(G, 1 / phi - 0.01)
>>> for n, c in sorted(centrality.items()):
... print(f"{n} {c:.2f}")
0 0.37
1 0.60
2 0.60
3 0.37
See Also
--------
katz_centrality_numpy
eigenvector_centrality
eigenvector_centrality_numpy
:func:`~networkx.algorithms.link_analysis.pagerank_alg.pagerank`
:func:`~networkx.algorithms.link_analysis.hits_alg.hits`
Notes
-----
Katz centrality was introduced by [2]_.
This algorithm uses the power method to find the eigenvector
corresponding to the largest eigenvalue of the adjacency matrix of ``G``.
The parameter ``alpha`` should be strictly less than the inverse of largest
eigenvalue of the adjacency matrix for the algorithm to converge.
You can use ``max(nx.adjacency_spectrum(G))`` to get $\lambda_{\max}$ the largest
eigenvalue of the adjacency matrix.
The iteration will stop after ``max_iter`` iterations or an error tolerance of
``number_of_nodes(G) * tol`` has been reached.
For strongly connected graphs, as $\alpha \to 1/\lambda_{\max}$, and $\beta > 0$,
Katz centrality approaches the results for eigenvector centrality.
For directed graphs this finds "left" eigenvectors which corresponds
to the in-edges in the graph. For out-edges Katz centrality,
first reverse the graph with ``G.reverse()``.
References
----------
.. [1] Mark E. J. Newman:
Networks: An Introduction.
Oxford University Press, USA, 2010, p. 720.
.. [2] Leo Katz:
A New Status Index Derived from Sociometric Index.
Psychometrika 18(1):39–43, 1953
https://link.springer.com/content/pdf/10.1007/BF02289026.pdf
| null | (G, alpha=0.1, beta=1.0, max_iter=1000, tol=1e-06, nstart=None, normalized=True, weight=None, *, backend=None, **backend_kwargs) |
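The constraint $\alpha < 1/\lambda_{\max}$ and the unit-norm normalization can be checked directly (a sketch using the path-graph example from the docstring):

```python
import math

import networkx as nx

G = nx.path_graph(4)
# largest adjacency eigenvalue of P4 is the golden ratio (see the example)
lambda_max = (1 + math.sqrt(5)) / 2
c = nx.katz_centrality(G, alpha=1 / lambda_max - 0.01)
# with normalized=True the result has unit Euclidean norm
norm_sq = sum(v * v for v in c.values())
```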
30,848 | networkx.algorithms.centrality.katz | katz_centrality_numpy | Compute the Katz centrality for the graph G.
Katz centrality computes the centrality for a node based on the centrality
of its neighbors. It is a generalization of the eigenvector centrality. The
Katz centrality for node $i$ is
.. math::
x_i = \alpha \sum_{j} A_{ij} x_j + \beta,
where $A$ is the adjacency matrix of graph G with eigenvalues $\lambda$.
The parameter $\beta$ controls the initial centrality and
.. math::
\alpha < \frac{1}{\lambda_{\max}}.
Katz centrality computes the relative influence of a node within a
network by measuring the number of the immediate neighbors (first
degree nodes) and also all other nodes in the network that connect
to the node under consideration through these immediate neighbors.
Extra weight can be provided to immediate neighbors through the
parameter $\beta$. Connections made with distant neighbors
are, however, penalized by an attenuation factor $\alpha$ which
should be strictly less than the inverse largest eigenvalue of the
adjacency matrix in order for the Katz centrality to be computed
correctly. More information is provided in [1]_.
Parameters
----------
G : graph
A NetworkX graph
alpha : float
Attenuation factor
beta : scalar or dictionary, optional (default=1.0)
Weight attributed to the immediate neighborhood. If not a scalar, the
dictionary must have a value for every node.
normalized : bool
If True normalize the resulting values.
weight : None or string, optional
If None, all edge weights are considered equal.
Otherwise holds the name of the edge attribute used as weight.
In this measure the weight is interpreted as the connection strength.
Returns
-------
nodes : dictionary
Dictionary of nodes with Katz centrality as the value.
Raises
------
NetworkXError
If the parameter `beta` is not a scalar but lacks a value for at least
one node
Examples
--------
>>> import math
>>> G = nx.path_graph(4)
>>> phi = (1 + math.sqrt(5)) / 2.0 # largest eigenvalue of adj matrix
>>> centrality = nx.katz_centrality_numpy(G, 1 / phi)
>>> for n, c in sorted(centrality.items()):
... print(f"{n} {c:.2f}")
0 0.37
1 0.60
2 0.60
3 0.37
See Also
--------
katz_centrality
eigenvector_centrality_numpy
eigenvector_centrality
:func:`~networkx.algorithms.link_analysis.pagerank_alg.pagerank`
:func:`~networkx.algorithms.link_analysis.hits_alg.hits`
Notes
-----
Katz centrality was introduced by [2]_.
This algorithm uses a direct linear solver to solve the above equation.
The parameter ``alpha`` should be strictly less than the inverse of largest
eigenvalue of the adjacency matrix for there to be a solution.
You can use ``max(nx.adjacency_spectrum(G))`` to get $\lambda_{\max}$ the largest
eigenvalue of the adjacency matrix.
For strongly connected graphs, as $\alpha \to 1/\lambda_{\max}$, and $\beta > 0$,
Katz centrality approaches the results for eigenvector centrality.
For directed graphs this finds "left" eigenvectors which corresponds
to the in-edges in the graph. For out-edges Katz centrality,
first reverse the graph with ``G.reverse()``.
References
----------
.. [1] Mark E. J. Newman:
Networks: An Introduction.
Oxford University Press, USA, 2010, p. 173.
.. [2] Leo Katz:
A New Status Index Derived from Sociometric Index.
Psychometrika 18(1):39–43, 1953
https://link.springer.com/content/pdf/10.1007/BF02289026.pdf
| null | (G, alpha=0.1, beta=1.0, normalized=True, weight=None, *, backend=None, **backend_kwargs) |
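Since both routines solve the same equation, the direct solver and the power method should agree for a convergent `alpha` (a sketch; requires NumPy):

```python
import networkx as nx

G = nx.path_graph(4)
c_iter = nx.katz_centrality(G, alpha=0.3)
c_solve = nx.katz_centrality_numpy(G, alpha=0.3)
max_diff = max(abs(c_iter[n] - c_solve[n]) for n in G)
```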
30,849 | networkx.algorithms.distance_measures | kemeny_constant | Returns the Kemeny constant of the given graph.
The *Kemeny constant* (or Kemeny's constant) of a graph `G`
can be computed by regarding the graph as a Markov chain.
The Kemeny constant is then the expected number of time steps
to transition from a starting state i to a random destination state
sampled from the Markov chain's stationary distribution.
The Kemeny constant is independent of the chosen initial state [1]_.
The Kemeny constant measures the time needed for spreading
across a graph. Low values indicate a closely connected graph
whereas high values indicate a spread-out graph.
If weight is not provided, then a weight of 1 is used for all edges.
Since `G` represents a Markov chain, the weights must be positive.
Parameters
----------
G : NetworkX graph
weight : string or None, optional (default=None)
The edge data key used to compute the Kemeny constant.
If None, then each edge has weight 1.
Returns
-------
float
The Kemeny constant of the graph `G`.
Raises
------
NetworkXNotImplemented
If the graph `G` is directed.
NetworkXError
If the graph `G` is not connected, or contains no nodes,
or has edges with negative weights.
Examples
--------
>>> G = nx.complete_graph(5)
>>> round(nx.kemeny_constant(G), 10)
3.2
Notes
-----
The implementation is based on equation (3.3) in [2]_.
Self-loops are allowed and indicate a Markov chain where
the state can remain the same. Multi-edges are contracted
in one edge with weight equal to the sum of the weights.
References
----------
.. [1] Wikipedia
"Kemeny's constant."
https://en.wikipedia.org/wiki/Kemeny%27s_constant
.. [2] Lovász L.
Random walks on graphs: A survey.
Paul Erdös is Eighty, vol. 2, Bolyai Society,
Mathematical Studies, Keszthely, Hungary (1993), pp. 1-46
| def effective_graph_resistance(G, weight=None, invert_weight=True):
"""Returns the Effective graph resistance of G.
Also known as the Kirchhoff index.
The effective graph resistance is defined as the sum
of the resistance distance of every node pair in G [1]_.
If weight is not provided, then a weight of 1 is used for all edges.
The effective graph resistance of a disconnected graph is infinite.
Parameters
----------
G : NetworkX graph
A graph
weight : string or None, optional (default=None)
The edge data key used to compute the effective graph resistance.
If None, then each edge has weight 1.
invert_weight : boolean (default=True)
Proper calculation of resistance distance requires building the
Laplacian matrix with the reciprocal of the weight. Not required
if the weight is already inverted. Weight cannot be zero.
Returns
-------
RG : float
The effective graph resistance of `G`.
Raises
------
NetworkXNotImplemented
If `G` is a directed graph.
NetworkXError
If `G` does not contain any nodes.
Examples
--------
>>> G = nx.Graph([(1, 2), (1, 3), (1, 4), (3, 4), (3, 5), (4, 5)])
>>> round(nx.effective_graph_resistance(G), 10)
10.25
Notes
-----
The implementation is based on Theorem 2.2 in [2]_. Self-loops are ignored.
Multi-edges are contracted in one edge with weight equal to the harmonic sum of the weights.
References
----------
.. [1] Wolfram
"Kirchhoff Index."
https://mathworld.wolfram.com/KirchhoffIndex.html
.. [2] W. Ellens, F. M. Spieksma, P. Van Mieghem, A. Jamakovic, R. E. Kooij.
Effective graph resistance.
Lin. Alg. Appl. 435:2491-2506, 2011.
"""
import numpy as np
if len(G) == 0:
raise nx.NetworkXError("Graph G must contain at least one node.")
# Disconnected graphs have infinite Effective graph resistance
if not nx.is_connected(G):
return float("inf")
# Invert weights
G = G.copy()
if invert_weight and weight is not None:
if G.is_multigraph():
for u, v, k, d in G.edges(keys=True, data=True):
d[weight] = 1 / d[weight]
else:
for u, v, d in G.edges(data=True):
d[weight] = 1 / d[weight]
# Get Laplacian eigenvalues
mu = np.sort(nx.laplacian_spectrum(G, weight=weight))
# Compute Effective graph resistance based on spectrum of the Laplacian
# Self-loops are ignored
return float(np.sum(1 / mu[1:]) * G.number_of_nodes())
| (G, *, weight=None, backend=None, **backend_kwargs) |
30,850 | networkx.algorithms.hybrid | kl_connected_subgraph | Returns the maximum locally `(k, l)`-connected subgraph of `G`.
A graph is locally `(k, l)`-connected if for each edge `(u, v)` in the
graph there are at least `l` edge-disjoint paths of length at most `k`
joining `u` to `v`.
Parameters
----------
G : NetworkX graph
The graph in which to find a maximum locally `(k, l)`-connected
subgraph.
k : integer
The maximum length of paths to consider. A higher number means a looser
connectivity requirement.
l : integer
The number of edge-disjoint paths. A higher number means a stricter
connectivity requirement.
low_memory : bool
If this is True, this function uses an algorithm that uses slightly
more time but less memory.
same_as_graph : bool
If True then return a tuple of the form `(H, is_same)`,
where `H` is the maximum locally `(k, l)`-connected subgraph and
`is_same` is a Boolean representing whether `G` is locally `(k,
l)`-connected (and hence, whether `H` is simply a copy of the input
graph `G`).
Returns
-------
NetworkX graph or two-tuple
If `same_as_graph` is True, then this function returns a
two-tuple as described above. Otherwise, it returns only the maximum
locally `(k, l)`-connected subgraph.
See also
--------
is_kl_connected
References
----------
.. [1] Chung, Fan and Linyuan Lu. "The Small World Phenomenon in Hybrid
Power Law Graphs." *Complex Networks*. Springer Berlin Heidelberg,
2004. 89--104.
| null | (G, k, l, low_memory=False, same_as_graph=False, *, backend=None, **backend_kwargs) |
30,851 | networkx.generators.classic | kneser_graph | Returns the Kneser Graph with parameters `n` and `k`.
The Kneser Graph has nodes that are k-tuples (subsets) of the integers
between 0 and ``n-1``. Nodes are adjacent if their corresponding sets are disjoint.
Parameters
----------
n: int
Number of integers from which to make node subsets.
Subsets are drawn from ``set(range(n))``.
k: int
Size of the subsets.
Returns
-------
G : NetworkX Graph
Examples
--------
>>> G = nx.kneser_graph(5, 2)
>>> G.number_of_nodes()
10
>>> G.number_of_edges()
15
>>> nx.is_isomorphic(G, nx.petersen_graph())
True
@nodes_or_number(0)  # networkx.utils decorator: passes `n` in as an (n, nodes) pair
def star_graph(n, create_using=None):
"""Return the star graph
The star graph consists of one center node connected to n outer nodes.
.. plot::
>>> nx.draw(nx.star_graph(6))
Parameters
----------
n : int or iterable
If an integer, node labels are 0 to n with center 0.
If an iterable of nodes, the center is the first.
Warning: n is not checked for duplicates and if present the
resulting graph may not be as desired. Make sure you have no duplicates.
create_using : NetworkX graph constructor, optional (default=nx.Graph)
Graph type to create. If graph instance, then cleared before populated.
Notes
-----
The graph has n+1 nodes for integer n.
So star_graph(3) is the same as star_graph(range(4)).
"""
n, nodes = n
if isinstance(n, numbers.Integral):
nodes.append(int(n)) # there should be n+1 nodes
G = empty_graph(nodes, create_using)
if G.is_directed():
raise NetworkXError("Directed Graph not supported")
if len(nodes) > 1:
hub, *spokes = nodes
G.add_edges_from((hub, node) for node in spokes)
return G
| (n, k, *, backend=None, **backend_kwargs) |
30,852 | networkx.algorithms.components.strongly_connected | kosaraju_strongly_connected_components | Generate nodes in strongly connected components of graph.
Parameters
----------
G : NetworkX Graph
A directed graph.
Returns
-------
comp : generator of sets
A generator of sets of nodes, one for each strongly connected
component of G.
Raises
------
NetworkXNotImplemented
If G is undirected.
Examples
--------
Generate a sorted list of strongly connected components, largest first.
>>> G = nx.cycle_graph(4, create_using=nx.DiGraph())
>>> nx.add_cycle(G, [10, 11, 12])
>>> [
... len(c)
... for c in sorted(
... nx.kosaraju_strongly_connected_components(G), key=len, reverse=True
... )
... ]
[4, 3]
If you only want the largest component, it's more efficient to
use max instead of sort.
>>> largest = max(nx.kosaraju_strongly_connected_components(G), key=len)
See Also
--------
strongly_connected_components
Notes
-----
Uses Kosaraju's algorithm.
| null | (G, source=None, *, backend=None, **backend_kwargs) |
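Kosaraju's algorithm yields the same partition into components as the default `strongly_connected_components` (a quick sketch):

```python
import networkx as nx

# two cycles joined by a one-way edge: SCCs are {0, 1, 2} and {3, 4}
G = nx.DiGraph([(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 3)])
kosaraju = {frozenset(c) for c in nx.kosaraju_strongly_connected_components(G)}
tarjan = {frozenset(c) for c in nx.strongly_connected_components(G)}
```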
30,853 | networkx.generators.small | krackhardt_kite_graph |
Returns the Krackhardt Kite Social Network.
A 10 actor social network introduced by David Krackhardt
to illustrate different centrality measures [1]_.
Parameters
----------
create_using : NetworkX graph constructor, optional (default=nx.Graph)
Graph type to create. If graph instance, then cleared before populated.
Returns
-------
G : networkx Graph
Krackhardt Kite graph with 10 nodes and 18 edges
Notes
-----
The traditional labeling is:
Andre=1, Beverley=2, Carol=3, Diane=4,
Ed=5, Fernando=6, Garth=7, Heather=8, Ike=9, Jane=10.
References
----------
.. [1] Krackhardt, David. "Assessing the Political Landscape: Structure,
Cognition, and Power in Organizations". Administrative Science Quarterly.
35 (2): 342–369. doi:10.2307/2393394. JSTOR 2393394. June 1990.
| def _raise_on_directed(func):
"""
A decorator which inspects the `create_using` argument and raises a
NetworkX exception when `create_using` is a DiGraph (class or instance) for
graph generators that do not support directed outputs.
"""
@wraps(func)
def wrapper(*args, **kwargs):
if kwargs.get("create_using") is not None:
G = nx.empty_graph(create_using=kwargs["create_using"])
if G.is_directed():
raise NetworkXError("Directed Graph not supported")
return func(*args, **kwargs)
return wrapper
| (create_using=None, *, backend=None, **backend_kwargs) |
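A sketch confirming the size stated above; in Krackhardt's example Diane (node 3) is the degree-central actor:

```python
import networkx as nx

G = nx.krackhardt_kite_graph()
deg = dict(G.degree())  # Diane (node 3) has the highest degree, 6
```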
30,854 | networkx.generators.classic | ladder_graph | Returns the Ladder graph of length n.
This is two paths of n nodes, with
each pair connected by a single edge.
Node labels are the integers 0 to 2*n - 1.
.. plot::
>>> nx.draw(nx.ladder_graph(5))
@nodes_or_number(0)  # networkx.utils decorator: passes `n` in as an (n, nodes) pair
def star_graph(n, create_using=None):
"""Return the star graph
The star graph consists of one center node connected to n outer nodes.
.. plot::
>>> nx.draw(nx.star_graph(6))
Parameters
----------
n : int or iterable
If an integer, node labels are 0 to n with center 0.
If an iterable of nodes, the center is the first.
Warning: n is not checked for duplicates and if present the
resulting graph may not be as desired. Make sure you have no duplicates.
create_using : NetworkX graph constructor, optional (default=nx.Graph)
Graph type to create. If graph instance, then cleared before populated.
Notes
-----
The graph has n+1 nodes for integer n.
So star_graph(3) is the same as star_graph(range(4)).
"""
n, nodes = n
if isinstance(n, numbers.Integral):
nodes.append(int(n)) # there should be n+1 nodes
G = empty_graph(nodes, create_using)
if G.is_directed():
raise NetworkXError("Directed Graph not supported")
if len(nodes) > 1:
hub, *spokes = nodes
G.add_edges_from((hub, node) for node in spokes)
return G
| (n, create_using=None, *, backend=None, **backend_kwargs) |
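The node and edge counts follow directly from the construction (two n-node paths plus n rungs). A quick sketch:

```python
import networkx as nx

# two 5-node paths plus 5 rungs: 2n nodes and 3n - 2 edges
G = nx.ladder_graph(5)
```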
30,856 | networkx.algorithms.centrality.laplacian | laplacian_centrality | Compute the Laplacian centrality for nodes in the graph `G`.
The Laplacian Centrality of a node ``i`` is measured by the drop in the
Laplacian Energy after deleting node ``i`` from the graph. The Laplacian Energy
is the sum of the squared eigenvalues of a graph's Laplacian matrix.
.. math::
C_L(u_i,G) = \frac{(\Delta E)_i}{E_L (G)} = \frac{E_L (G)-E_L (G_i)}{E_L (G)}
E_L (G) = \sum_{i=0}^n \lambda_i^2
Where $E_L (G)$ is the Laplacian energy of graph `G`,
$E_L (G_i)$ is the Laplacian energy of graph `G` after deleting node ``i``,
and $\lambda_i$ are the eigenvalues of `G`'s Laplacian matrix.
This formula shows the normalized value. Without normalization,
the numerator on the right side is returned.
Parameters
----------
G : graph
A networkx graph
normalized : bool (default = True)
If True the centrality score of each node is its drop in Laplacian
energy divided by the Laplacian energy of the full graph, $E_L (G)$
(the normalized value from the formula above).
If False the centrality score for each node is the raw drop in Laplacian
energy when that node is removed.
nodelist : list, optional (default = None)
The rows and columns are ordered according to the nodes in nodelist.
If nodelist is None, then the ordering is produced by G.nodes().
weight: string or None, optional (default=`weight`)
Optional parameter `weight` to compute the Laplacian matrix.
The edge data key used to compute each value in the matrix.
If None, then each edge has weight 1.
walk_type : string or None, optional (default=None)
Optional parameter `walk_type` used when calling
:func:`directed_laplacian_matrix <networkx.directed_laplacian_matrix>`.
One of ``"random"``, ``"lazy"``, or ``"pagerank"``. If ``walk_type=None``
(the default), then a value is selected according to the properties of `G`:
- ``walk_type="random"`` if `G` is strongly connected and aperiodic
- ``walk_type="lazy"`` if `G` is strongly connected but not aperiodic
- ``walk_type="pagerank"`` for all other cases.
alpha : real (default = 0.95)
Optional parameter `alpha` used when calling
:func:`directed_laplacian_matrix <networkx.directed_laplacian_matrix>`.
(1 - alpha) is the teleportation probability used with pagerank.
Returns
-------
nodes : dictionary
Dictionary of nodes with Laplacian centrality as the value.
Examples
--------
>>> G = nx.Graph()
>>> edges = [(0, 1, 4), (0, 2, 2), (2, 1, 1), (1, 3, 2), (1, 4, 2), (4, 5, 1)]
>>> G.add_weighted_edges_from(edges)
>>> sorted((v, f"{c:0.2f}") for v, c in laplacian_centrality(G).items())
[(0, '0.70'), (1, '0.90'), (2, '0.28'), (3, '0.22'), (4, '0.26'), (5, '0.04')]
Notes
-----
The algorithm is implemented based on [1]_ with an extension to directed graphs
using the ``directed_laplacian_matrix`` function.
Raises
------
NetworkXPointlessConcept
If the graph `G` is the null graph.
ZeroDivisionError
If the graph `G` has no edges (is empty) and normalization is requested.
References
----------
.. [1] Qi, X., Fuller, E., Wu, Q., Wu, Y., and Zhang, C.-Q. (2012).
Laplacian centrality: A new centrality measure for weighted networks.
Information Sciences, 194:240-253.
https://math.wvu.edu/~cqzhang/Publication-files/my-paper/INS-2012-Laplacian-W.pdf
See Also
--------
:func:`~networkx.linalg.laplacianmatrix.directed_laplacian_matrix`
:func:`~networkx.linalg.laplacianmatrix.laplacian_matrix`
| null | (G, normalized=True, nodelist=None, weight='weight', walk_type=None, alpha=0.95, *, backend=None, **backend_kwargs) |
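The weighted example from the docstring can be reproduced directly (a sketch; requires NumPy/SciPy for the spectral computation):

```python
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from(
    [(0, 1, 4), (0, 2, 2), (2, 1, 1), (1, 3, 2), (1, 4, 2), (4, 5, 1)]
)
c = nx.laplacian_centrality(G)
```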
30,857 | networkx.linalg.laplacianmatrix | laplacian_matrix | Returns the Laplacian matrix of G.
The graph Laplacian is the matrix L = D - A, where
A is the adjacency matrix and D is the diagonal matrix of node degrees.
Parameters
----------
G : graph
A NetworkX graph
nodelist : list, optional
The rows and columns are ordered according to the nodes in nodelist.
If nodelist is None, then the ordering is produced by G.nodes().
weight : string or None, optional (default='weight')
The edge data key used to compute each value in the matrix.
If None, then each edge has weight 1.
Returns
-------
L : SciPy sparse array
The Laplacian matrix of G.
Notes
-----
For MultiGraph, the edge weights are summed.
This returns an unnormalized matrix. For a normalized output,
use `normalized_laplacian_matrix`, `directed_laplacian_matrix`,
or `directed_combinatorial_laplacian_matrix`.
This calculation uses the out-degree of the graph `G`. To use the
in-degree for calculations instead, use `G.reverse(copy=False)` and
take the transpose.
See Also
--------
:func:`~networkx.convert_matrix.to_numpy_array`
normalized_laplacian_matrix
directed_laplacian_matrix
directed_combinatorial_laplacian_matrix
:func:`~networkx.linalg.spectrum.laplacian_spectrum`
Examples
--------
For graphs with multiple connected components, L is permutation-similar
to a block diagonal matrix where each block is the respective Laplacian
matrix for each component.
>>> G = nx.Graph([(1, 2), (2, 3), (4, 5)])
>>> print(nx.laplacian_matrix(G).toarray())
[[ 1 -1 0 0 0]
[-1 2 -1 0 0]
[ 0 -1 1 0 0]
[ 0 0 0 1 -1]
[ 0 0 0 -1 1]]
>>> edges = [
... (1, 2),
... (2, 1),
... (2, 4),
... (4, 3),
... (3, 4),
... ]
>>> DiG = nx.DiGraph(edges)
>>> print(nx.laplacian_matrix(DiG).toarray())
[[ 1 -1 0 0]
[-1 2 -1 0]
[ 0 0 1 -1]
[ 0 0 -1 1]]
Notice that node 4 is represented by the third column and row. This is because
by default the row/column order is the order of `G.nodes` (i.e. the node added
order -- in the edgelist, 4 first appears in (2, 4), before node 3 in edge (4, 3).)
To control the node order of the matrix, use the `nodelist` argument.
>>> print(nx.laplacian_matrix(DiG, nodelist=[1, 2, 3, 4]).toarray())
[[ 1 -1 0 0]
[-1 2 0 -1]
[ 0 0 1 -1]
[ 0 0 -1 1]]
This calculation uses the out-degree of the graph `G`. To use the
in-degree for calculations instead, use `G.reverse(copy=False)` and
take the transpose.
>>> print(nx.laplacian_matrix(DiG.reverse(copy=False)).toarray().T)
[[ 1 -1 0 0]
[-1 1 -1 0]
[ 0 0 2 -1]
[ 0 0 -1 1]]
References
----------
.. [1] Langville, Amy N., and Carl D. Meyer. Google’s PageRank and Beyond:
The Science of Search Engine Rankings. Princeton University Press, 2006.
| null | (G, nodelist=None, weight='weight', *, backend=None, **backend_kwargs) |
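The defining identity L = D - A is easy to confirm numerically (a sketch; requires NumPy/SciPy):

```python
import networkx as nx
import numpy as np

G = nx.Graph([(1, 2), (2, 3), (4, 5)])
A = nx.to_numpy_array(G)
D = np.diag(A.sum(axis=1))  # diagonal matrix of (weighted) degrees
L = nx.laplacian_matrix(G).toarray()
same = np.array_equal(L, D - A)
```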
30,858 | networkx.linalg.spectrum | laplacian_spectrum | Returns eigenvalues of the Laplacian of G
Parameters
----------
G : graph
A NetworkX graph
weight : string or None, optional (default='weight')
The edge data key used to compute each value in the matrix.
If None, then each edge has weight 1.
Returns
-------
evals : NumPy array
Eigenvalues
Notes
-----
For MultiGraph/MultiDiGraph, the edge weights are summed.
See :func:`~networkx.convert_matrix.to_numpy_array` for other options.
See Also
--------
laplacian_matrix
Examples
--------
The multiplicity of 0 as an eigenvalue of the laplacian matrix is equal
to the number of connected components of G.
>>> G = nx.Graph() # Create a graph with 5 nodes and 3 connected components
>>> G.add_nodes_from(range(5))
>>> G.add_edges_from([(0, 2), (3, 4)])
>>> nx.laplacian_spectrum(G)
array([0., 0., 0., 2., 2.])
| null | (G, weight='weight', *, backend=None, **backend_kwargs) |
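The multiplicity-of-zero property mentioned in the example can be checked with a tolerance (a sketch; requires NumPy):

```python
import networkx as nx
import numpy as np

G = nx.Graph()
G.add_nodes_from(range(5))
G.add_edges_from([(0, 2), (3, 4)])  # three connected components
ev = nx.laplacian_spectrum(G)
n_zero = int(np.sum(np.abs(ev) < 1e-8))
```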
30,861 | networkx.algorithms.smallworld | lattice_reference | Latticize the given graph by swapping edges.
Parameters
----------
G : graph
An undirected graph.
niter : integer (optional, default=1)
An edge is rewired approximately niter times.
D : numpy.array (optional, default=None)
Distance to the diagonal matrix.
connectivity : boolean (optional, default=True)
Ensure connectivity for the latticized graph when set to True.
seed : integer, random_state, or None (default)
Indicator of random number generation state.
See :ref:`Randomness<randomness>`.
Returns
-------
G : graph
The latticized graph.
Raises
------
NetworkXError
If there are fewer than 4 nodes or 2 edges in `G`
Notes
-----
The implementation is adapted from the algorithm by Sporns et al. [1]_,
which is inspired by the original work of Maslov and Sneppen (2002) [2]_.
References
----------
.. [1] Sporns, Olaf, and Jonathan D. Zwi.
"The small world of the cerebral cortex."
Neuroinformatics 2.2 (2004): 145-162.
.. [2] Maslov, Sergei, and Kim Sneppen.
"Specificity and stability in topology of protein networks."
Science 296.5569 (2002): 910-913.
| null | (G, niter=5, D=None, connectivity=True, seed=None, *, backend=None, **backend_kwargs) |
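Because the rewiring swaps edges pairwise, the degree sequence and edge count are preserved (a sketch; requires NumPy):

```python
import networkx as nx

G = nx.circulant_graph(10, [1, 2])  # connected, 4-regular
H = nx.lattice_reference(G, niter=1, seed=42)
deg_G = sorted(d for _, d in G.degree())
deg_H = sorted(d for _, d in H.degree())
```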
30,865 | networkx.generators.social | les_miserables_graph | Returns coappearance network of characters in the novel Les Miserables.
References
----------
.. [1] D. E. Knuth, 1993.
The Stanford GraphBase: a platform for combinatorial computing,
pp. 74-87. New York: ACM Press.
| null | (*, backend=None, **backend_kwargs) |
30,866 | networkx.algorithms.operators.product | lexicographic_product | Returns the lexicographic product of G and H.
The lexicographical product $P$ of the graphs $G$ and $H$ has a node set
that is the Cartesian product of the node sets, $V(P)=V(G) \times V(H)$.
$P$ has an edge $((u,v), (x,y))$ if and only if $(u,v)$ is an edge in $G$
or $u==v$ and $(x,y)$ is an edge in $H$.
Parameters
----------
G, H: graphs
Networkx graphs.
Returns
-------
P: NetworkX graph
The lexicographic product of G and H. P will be a multi-graph if either G
or H is a multi-graph. P will be directed if G and H are directed,
and undirected if G and H are undirected.
Raises
------
NetworkXError
If G and H are not both directed or both undirected.
Notes
-----
Node attributes in P are two-tuple of the G and H node attributes.
Missing attributes are assigned None.
Examples
--------
>>> G = nx.Graph()
>>> H = nx.Graph()
>>> G.add_node(0, a1=True)
>>> H.add_node("a", a2="Spam")
>>> P = nx.lexicographic_product(G, H)
>>> list(P)
[(0, 'a')]
Edge attributes and edge keys (for multigraphs) are also copied to the
new product graph
| null | (G, H, *, backend=None, **backend_kwargs) |
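The edge rule above fixes the size of the product: |E(P)| = |E(G)|·|V(H)|² + |V(G)|·|E(H)| for simple undirected graphs. A quick sketch:

```python
import networkx as nx

G = nx.path_graph(2)  # 2 nodes, 1 edge
H = nx.path_graph(3)  # 3 nodes, 2 edges
P = nx.lexicographic_product(G, H)
# |V(P)| = 2 * 3 = 6;  |E(P)| = 1 * 3**2 + 2 * 2 = 13
```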
30,867 | networkx.algorithms.dag | lexicographical_topological_sort | Generate the nodes in the unique lexicographical topological sort order.
Generates a unique ordering of nodes by first sorting topologically (for which there are often
multiple valid orderings) and then additionally by sorting lexicographically.
A topological sort arranges the nodes of a directed graph so that the
upstream node of each directed edge precedes the downstream node.
It is always possible to find a solution for directed graphs that have no cycles.
There may be more than one valid solution.
Lexicographical sorting is just sorting alphabetically. It is used here to break ties in the
topological sort and to determine a single, unique ordering. This can be useful in comparing
sort results.
The lexicographical order can be customized by providing a function to the `key=` parameter.
The definition of the key function is the same as used in python's built-in `sort()`.
The function takes a single argument and returns a key to use for sorting purposes.
Lexicographical sorting can fail if the node names are un-sortable. See the example below.
The solution is to provide a function to the `key=` argument that returns sortable keys.
Parameters
----------
G : NetworkX digraph
A directed acyclic graph (DAG)
key : function, optional
A function of one argument that converts a node name to a comparison key.
It defines and resolves ambiguities in the sort order. Defaults to the identity function.
Yields
------
nodes
Yields the nodes of G in lexicographical topological sort order.
Raises
------
NetworkXError
Topological sort is defined for directed graphs only. If the graph `G`
is undirected, a :exc:`NetworkXError` is raised.
NetworkXUnfeasible
If `G` is not a directed acyclic graph (DAG) no topological sort exists
and a :exc:`NetworkXUnfeasible` exception is raised. This can also be
raised if `G` is changed while the returned iterator is being processed.
RuntimeError
If `G` is changed while the returned iterator is being processed.
TypeError
Results from un-sortable node names.
Consider using `key=` parameter to resolve ambiguities in the sort order.
Examples
--------
>>> DG = nx.DiGraph([(2, 1), (2, 5), (1, 3), (1, 4), (5, 4)])
>>> list(nx.lexicographical_topological_sort(DG))
[2, 1, 3, 5, 4]
>>> list(nx.lexicographical_topological_sort(DG, key=lambda x: -x))
[2, 5, 1, 4, 3]
The sort will fail for any graph with integer and string nodes. Comparison of integers to
strings is not defined in Python. Is 3 greater or less than 'red'?
>>> DG = nx.DiGraph([(1, "red"), (3, "red"), (1, "green"), (2, "blue")])
>>> list(nx.lexicographical_topological_sort(DG))
Traceback (most recent call last):
...
TypeError: '<' not supported between instances of 'str' and 'int'
...
Incomparable nodes can be resolved using a `key` function. This example function
allows comparison of integers and strings by returning a tuple where the first
element is True for `str`, False otherwise. The second element is the node name.
This groups the strings and integers separately so they can be compared only among themselves.
>>> key = lambda node: (isinstance(node, str), node)
>>> list(nx.lexicographical_topological_sort(DG, key=key))
[1, 2, 3, 'blue', 'green', 'red']
Notes
-----
This algorithm is based on a description and proof in
"Introduction to Algorithms: A Creative Approach" [1]_ .
See also
--------
topological_sort
References
----------
.. [1] Manber, U. (1989).
*Introduction to Algorithms - A Creative Approach.* Addison-Wesley.
| def transitive_closure_dag(G, topo_order=None):
"""Returns the transitive closure of a directed acyclic graph.
This function is faster than the function `transitive_closure`, but fails
if the graph has a cycle.
The transitive closure of G = (V,E) is a graph G+ = (V,E+) such that
for all v, w in V there is an edge (v, w) in E+ if and only if there
is a non-null path from v to w in G.
Parameters
----------
G : NetworkX DiGraph
A directed acyclic graph (DAG)
topo_order: list or tuple, optional
A topological order for G (if None, the function will compute one)
Returns
-------
NetworkX DiGraph
The transitive closure of `G`
Raises
------
NetworkXNotImplemented
If `G` is not directed
NetworkXUnfeasible
If `G` has a cycle
Examples
--------
>>> DG = nx.DiGraph([(1, 2), (2, 3)])
>>> TC = nx.transitive_closure_dag(DG)
>>> TC.edges()
OutEdgeView([(1, 2), (1, 3), (2, 3)])
Notes
-----
This algorithm is probably simple enough to be well-known but I didn't find
a mention in the literature.
"""
if topo_order is None:
topo_order = list(topological_sort(G))
TC = G.copy()
# idea: traverse vertices following a reverse topological order, connecting
# each vertex to its descendants at distance 2 as we go
for v in reversed(topo_order):
TC.add_edges_from((v, u) for u in nx.descendants_at_distance(TC, v, 2))
return TC
| (G, key=None, *, backend=None, **backend_kwargs) |
30,870 | networkx.generators.line | line_graph | Returns the line graph of the graph or digraph `G`.
The line graph of a graph `G` has a node for each edge in `G` and an
edge joining those nodes if the two edges in `G` share a common node. For
directed graphs, nodes are adjacent exactly when the edges they represent
form a directed path of length two.
The nodes of the line graph are 2-tuples of nodes in the original graph (or
3-tuples for multigraphs, with the key of the edge as the third element).
For information about self-loops and more discussion, see the **Notes**
section below.
Parameters
----------
G : graph
A NetworkX Graph, DiGraph, MultiGraph, or MultiDiGraph.
create_using : NetworkX graph constructor, optional (default=nx.Graph)
Graph type to create. If graph instance, then cleared before populated.
Returns
-------
L : graph
The line graph of G.
Examples
--------
>>> G = nx.star_graph(3)
>>> L = nx.line_graph(G)
>>> print(sorted(map(sorted, L.edges()))) # makes a 3-clique, K3
[[(0, 1), (0, 2)], [(0, 1), (0, 3)], [(0, 2), (0, 3)]]
Edge attributes from `G` are not copied over as node attributes in `L`, but
attributes can be copied manually:
>>> G = nx.path_graph(4)
>>> G.add_edges_from((u, v, {"tot": u + v}) for u, v in G.edges)
>>> G.edges(data=True)
EdgeDataView([(0, 1, {'tot': 1}), (1, 2, {'tot': 3}), (2, 3, {'tot': 5})])
>>> H = nx.line_graph(G)
>>> H.add_nodes_from((node, G.edges[node]) for node in H)
>>> H.nodes(data=True)
NodeDataView({(0, 1): {'tot': 1}, (2, 3): {'tot': 5}, (1, 2): {'tot': 3}})
Notes
-----
Graph, node, and edge data are not propagated to the new graph. For
undirected graphs, the nodes in G must be sortable, otherwise the
constructed line graph may not be correct.
*Self-loops in undirected graphs*
For an undirected graph `G` without multiple edges, each edge can be
written as a set `\{u, v\}`. Its line graph `L` has the edges of `G` as
its nodes. If `x` and `y` are two nodes in `L`, then `\{x, y\}` is an edge
in `L` if and only if the intersection of `x` and `y` is nonempty. Thus,
the set of all edges is determined by the set of all pairwise intersections
of edges in `G`.
Trivially, every edge in G would have a nonzero intersection with itself,
and so every node in `L` should have a self-loop. This is not so
interesting, and the original context of line graphs was with simple
graphs, which had no self-loops or multiple edges. The line graph was also
meant to be a simple graph and thus, self-loops in `L` are not part of the
standard definition of a line graph. In a pairwise intersection matrix,
this is analogous to excluding the diagonal entries from the line graph
definition.
Self-loops and multiple edges in `G` add nodes to `L` in a natural way, and
do not require any fundamental changes to the definition. It might be
argued that the self-loops we excluded before should now be included.
However, the self-loops are still "trivial" in some sense and thus, are
usually excluded.
*Self-loops in directed graphs*
For a directed graph `G` without multiple edges, each edge can be written
as a tuple `(u, v)`. Its line graph `L` has the edges of `G` as its
nodes. If `x` and `y` are two nodes in `L`, then `(x, y)` is an edge in `L`
if and only if the tail of `x` matches the head of `y`, for example, if `x
= (a, b)` and `y = (b, c)` for some vertices `a`, `b`, and `c` in `G`.
Due to the directed nature of the edges, it is no longer the case that
every edge in `G` should have a self-loop in `L`. Now, the only time
self-loops arise is if a node in `G` itself has a self-loop. So such
self-loops are no longer "trivial" but instead, represent essential
features of the topology of `G`. For this reason, the historical
development of line digraphs is such that self-loops are included. When the
graph `G` has multiple edges, once again only superficial changes are
required to the definition.
References
----------
* Harary, Frank, and Norman, Robert Z., "Some properties of line digraphs",
Rend. Circ. Mat. Palermo, II. Ser. 9 (1960), 161--168.
* Hemminger, R. L.; Beineke, L. W. (1978), "Line graphs and line digraphs",
in Beineke, L. W.; Wilson, R. J., Selected Topics in Graph Theory,
Academic Press Inc., pp. 271--305.
| null | (G, create_using=None, *, backend=None, **backend_kwargs) |
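The directed rule described above (line-graph nodes are adjacent exactly when their edges form a directed path of length two) can be checked on a small digraph: the edge (1, 2) feeds both (2, 3) and (2, 4), and no other pair chains.

```python
import networkx as nx

# Line graph of a digraph: (a, b) -> (b, c) iff a -> b -> c is a path.
D = nx.DiGraph([(1, 2), (2, 3), (2, 4)])
L = nx.line_graph(D)

print(sorted(L.edges()))  # only (1,2) chains into (2,3) and (2,4)
```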
30,874 | networkx.algorithms.centrality.load | newman_betweenness_centrality | Compute load centrality for nodes.
The load centrality of a node is the fraction of all shortest
paths that pass through that node.
Parameters
----------
G : graph
A networkx graph.
normalized : bool, optional (default=True)
If True the betweenness values are normalized by b=b/(n-1)(n-2) where
n is the number of nodes in G.
weight : None or string, optional (default=None)
If None, edge weights are ignored.
Otherwise holds the name of the edge attribute used as weight.
The weight of an edge is treated as the length or distance between its endpoints.
cutoff : bool, optional (default=None)
If specified, only consider paths of length <= cutoff.
Returns
-------
nodes : dictionary
Dictionary of nodes with centrality as the value.
See Also
--------
betweenness_centrality
Notes
-----
Load centrality is slightly different from betweenness. It was originally
introduced by [2]_. For this load algorithm see [1]_.
References
----------
.. [1] Mark E. J. Newman:
Scientific collaboration networks. II.
Shortest paths, weighted networks, and centrality.
Physical Review E 64, 016132, 2001.
http://journals.aps.org/pre/abstract/10.1103/PhysRevE.64.016132
.. [2] Kwang-Il Goh, Byungnam Kahng and Doochul Kim
Universal behavior of Load Distribution in Scale-Free Networks.
Physical Review Letters 87(27):1–4, 2001.
https://doi.org/10.1103/PhysRevLett.87.278701
| null | (G, v=None, cutoff=None, normalized=True, weight=None, *, backend=None, **backend_kwargs) |
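The docstring above lacks a worked example; a minimal one (using `nx.load_centrality`, the public alias for this function): on a three-node path, only the interior node lies on any shortest path between other nodes, so with the default normalization it gets load 1.0 and the endpoints get 0.0.

```python
import networkx as nx

# Load centrality on a path 0 - 1 - 2: only node 1 carries traffic.
G = nx.path_graph(3)
c = nx.load_centrality(G)
print(c)
```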
30,875 | networkx.algorithms.bridges | local_bridges | Iterate over local bridges of `G` optionally computing the span
A *local bridge* is an edge whose endpoints have no common neighbors.
That is, the edge is not part of a triangle in the graph.
The *span* of a *local bridge* is the shortest path length between
the endpoints if the local bridge is removed.
Parameters
----------
G : undirected graph
with_span : bool
If True, yield a 3-tuple `(u, v, span)`
weight : function, string or None (default: None)
If function, used to compute edge weights for the span.
If string, the edge data attribute used in calculating span.
If None, all edges have weight 1.
Yields
------
e : edge
The local bridges as an edge 2-tuple of nodes `(u, v)` or
as a 3-tuple `(u, v, span)` when `with_span is True`.
Raises
------
NetworkXNotImplemented
If `G` is a directed graph or multigraph.
Examples
--------
A cycle graph has every edge a local bridge with span N-1.
>>> G = nx.cycle_graph(9)
>>> (0, 8, 8) in set(nx.local_bridges(G))
True
| null | (G, with_span=True, weight=None, *, backend=None, **backend_kwargs) |
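To complement the cycle example above, a sketch of how triangles destroy local bridges: every edge of a 4-cycle is a local bridge, but adding one chord puts every edge into a triangle, so none remain.

```python
import networkx as nx

# All four edges of C4 are local bridges (no edge is in a triangle).
G = nx.cycle_graph(4)
bridges = sorted((u, v) for u, v, span in nx.local_bridges(G))
print(bridges)

# The chord (0, 2) creates triangles 0-1-2 and 0-2-3; now every edge
# is in a triangle, so there are no local bridges left.
G.add_edge(0, 2)
remaining = list(nx.local_bridges(G))
print(remaining)
```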
30,876 | networkx.algorithms.structuralholes | local_constraint | Returns the local constraint on the node ``u`` with respect to
the node ``v`` in the graph ``G``.
Formally, the *local constraint on u with respect to v*, denoted
$\ell(u, v)$, is defined by
.. math::
\ell(u, v) = \left(p_{uv} + \sum_{w \in N(v)} p_{uw} p_{wv}\right)^2,
where $N(v)$ is the set of neighbors of $v$ and $p_{uv}$ is the
normalized mutual weight of the (directed or undirected) edges
joining $u$ and $v$, for each vertex $u$ and $v$ [1]_. The *mutual
weight* of $u$ and $v$ is the sum of the weights of edges joining
them (edge weights are assumed to be one if the graph is
unweighted).
Parameters
----------
G : NetworkX graph
The graph containing ``u`` and ``v``. This can be either
directed or undirected.
u : node
A node in the graph ``G``.
v : node
A node in the graph ``G``.
weight : None or string, optional
If None, all edge weights are considered equal.
Otherwise holds the name of the edge attribute used as weight.
Returns
-------
float
The constraint of the node ``v`` in the graph ``G``.
See also
--------
constraint
References
----------
.. [1] Burt, Ronald S.
"Structural holes and good ideas".
American Journal of Sociology (110): 349–399.
| null | (G, u, v, weight=None, *, backend=None, **backend_kwargs) |
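The formula above can be traced by hand on a triangle: each node splits its mutual weight evenly over its two neighbors, so every $p_{uv} = 1/2$, the single indirect path contributes $p_{uw} p_{wv} = 1/4$, and $\ell(u, v) = (1/2 + 1/4)^2 = 0.5625$.

```python
import networkx as nx

# Local constraint in an unweighted triangle (complete graph K3).
G = nx.complete_graph(3)
lc = nx.local_constraint(G, 0, 1)
print(lc)  # (0.5 + 0.5 * 0.5) ** 2 = 0.5625
```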
30,877 | networkx.algorithms.efficiency_measures | local_efficiency | Returns the average local efficiency of the graph.
The *efficiency* of a pair of nodes in a graph is the multiplicative
inverse of the shortest path distance between the nodes. The *local
efficiency* of a node in the graph is the average global efficiency of the
subgraph induced by the neighbors of the node. The *average local
efficiency* is the average of the local efficiencies of each node [1]_.
Parameters
----------
G : :class:`networkx.Graph`
An undirected graph for which to compute the average local efficiency.
Returns
-------
float
The average local efficiency of the graph.
Examples
--------
>>> G = nx.Graph([(0, 1), (0, 2), (0, 3), (1, 2), (1, 3)])
>>> nx.local_efficiency(G)
0.9166666666666667
Notes
-----
Edge weights are ignored when computing the shortest path distances.
See also
--------
global_efficiency
References
----------
.. [1] Latora, Vito, and Massimo Marchiori.
"Efficient behavior of small-world networks."
*Physical Review Letters* 87.19 (2001): 198701.
<https://doi.org/10.1103/PhysRevLett.87.198701>
| null | (G, *, backend=None, **backend_kwargs) |
30,878 | networkx.algorithms.centrality.reaching | local_reaching_centrality | Returns the local reaching centrality of a node in a directed
graph.
The *local reaching centrality* of a node in a directed graph is the
proportion of other nodes reachable from that node [1]_.
Parameters
----------
G : DiGraph
A NetworkX DiGraph.
v : node
A node in the directed graph `G`.
paths : dictionary (default=None)
If this is not `None` it must be a dictionary representation
of single-source shortest paths, as computed by, for example,
:func:`networkx.shortest_path` with source node `v`. Use this
keyword argument if you intend to invoke this function many
times but don't want the paths to be recomputed each time.
weight : None or string, optional (default=None)
Attribute to use for edge weights. If `None`, each edge weight
is assumed to be one. A higher weight implies a stronger
connection between nodes and a *shorter* path length.
normalized : bool, optional (default=True)
Whether to normalize the edge weights by the total sum of edge
weights.
Returns
-------
h : float
The local reaching centrality of the node ``v`` in the graph
``G``.
Examples
--------
>>> G = nx.DiGraph()
>>> G.add_edges_from([(1, 2), (1, 3)])
>>> nx.local_reaching_centrality(G, 3)
0.0
>>> G.add_edge(3, 2)
>>> nx.local_reaching_centrality(G, 3)
0.5
See also
--------
global_reaching_centrality
References
----------
.. [1] Mones, Enys, Lilla Vicsek, and Tamás Vicsek.
"Hierarchy Measure for Complex Networks."
*PLoS ONE* 7.3 (2012): e33799.
https://doi.org/10.1371/journal.pone.0033799
| null | (G, v, paths=None, weight=None, normalized=True, *, backend=None, **backend_kwargs) |
30,879 | networkx.generators.classic | lollipop_graph | Returns the Lollipop Graph; ``K_m`` connected to ``P_n``.
This is the Barbell Graph without the right barbell.
.. plot::
>>> nx.draw(nx.lollipop_graph(3, 4))
Parameters
----------
m, n : int or iterable container of nodes
If an integer, nodes are from ``range(m)`` and ``range(m, m+n)``.
If a container of nodes, those nodes appear in the graph.
Warning: `m` and `n` are not checked for duplicates and if present the
resulting graph may not be as desired. Make sure you have no duplicates.
The nodes for `m` appear in the complete graph $K_m$ and the nodes
for `n` appear in the path $P_n$
create_using : NetworkX graph constructor, optional (default=nx.Graph)
Graph type to create. If graph instance, then cleared before populated.
Returns
-------
Networkx graph
A complete graph with `m` nodes connected to a path of length `n`.
Notes
-----
The 2 subgraphs are joined via an edge ``(m-1, m)``.
If ``n=0``, this is merely a complete graph.
(This graph is an extremal example in David Aldous and Jim
Fill's etext on Random Walks on Graphs.)
| def star_graph(n, create_using=None):
"""Return the star graph
The star graph consists of one center node connected to n outer nodes.
.. plot::
>>> nx.draw(nx.star_graph(6))
Parameters
----------
n : int or iterable
If an integer, node labels are 0 to n with center 0.
If an iterable of nodes, the center is the first.
Warning: n is not checked for duplicates and if present the
resulting graph may not be as desired. Make sure you have no duplicates.
create_using : NetworkX graph constructor, optional (default=nx.Graph)
Graph type to create. If graph instance, then cleared before populated.
Notes
-----
The graph has n+1 nodes for integer n.
So star_graph(3) is the same as star_graph(range(4)).
"""
n, nodes = n  # the @nodes_or_number(0) decorator passes n as a (number, node-list) pair
if isinstance(n, numbers.Integral):
nodes.append(int(n)) # there should be n+1 nodes
G = empty_graph(nodes, create_using)
if G.is_directed():
raise NetworkXError("Directed Graph not supported")
if len(nodes) > 1:
hub, *spokes = nodes
G.add_edges_from((hub, node) for node in spokes)
return G
| (m, n, create_using=None, *, backend=None, **backend_kwargs) |
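A quick check of the structure described above: ``lollipop_graph(3, 2)`` is $K_3$ on nodes 0..2, a 2-node path on nodes 3..4, and the joining edge ``(m-1, m) = (2, 3)``, for 5 nodes and 3 + 1 + 1 = 5 edges.

```python
import networkx as nx

# K3 joined to a path of 2 extra nodes via the edge (2, 3).
G = nx.lollipop_graph(3, 2)
print(sorted(G.nodes()))
print(G.number_of_edges())  # 3 clique edges + 1 path edge + 1 join edge
print(G.has_edge(2, 3))     # the (m-1, m) join
```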
30,880 | networkx.algorithms.lowest_common_ancestors | lowest_common_ancestor | Compute the lowest common ancestor of the given pair of nodes.
Parameters
----------
G : NetworkX directed graph
node1, node2 : nodes in the graph.
default : object
Returned if no common ancestor between `node1` and `node2`
Returns
-------
The lowest common ancestor of node1 and node2,
or default if they have no common ancestors.
Examples
--------
>>> G = nx.DiGraph()
>>> nx.add_path(G, (0, 1, 2, 3))
>>> nx.add_path(G, (0, 4, 3))
>>> nx.lowest_common_ancestor(G, 2, 4)
0
See Also
--------
all_pairs_lowest_common_ancestor | null | (G, node1, node2, default=None, *, backend=None, **backend_kwargs) |
30,882 | networkx.algorithms.clique | make_clique_bipartite | Returns the bipartite clique graph corresponding to `G`.
In the returned bipartite graph, the "bottom" nodes are the nodes of
`G` and the "top" nodes represent the maximal cliques of `G`.
There is an edge from node *v* to clique *C* in the returned graph
if and only if *v* is an element of *C*.
Parameters
----------
G : NetworkX graph
An undirected graph.
fpos : bool
If True or not None, the returned graph will have an
additional attribute, `pos`, a dictionary mapping node to
position in the Euclidean plane.
create_using : NetworkX graph constructor, optional (default=nx.Graph)
Graph type to create. If graph instance, then cleared before populated.
Returns
-------
NetworkX graph
A bipartite graph whose "bottom" set is the nodes of the graph
`G`, whose "top" set is the cliques of `G`, and whose edges
join nodes of `G` to the cliques that contain them.
The nodes of the graph `G` have the node attribute
'bipartite' set to 1 and the nodes representing cliques
have the node attribute 'bipartite' set to 0, as is the
convention for bipartite graphs in NetworkX.
| null | (G, fpos=None, create_using=None, name=None, *, backend=None, **backend_kwargs) |
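The docstring above has no example; a minimal one: a 3-node path has two maximal cliques, {0, 1} and {1, 2}, so the bipartite clique graph has the 3 original nodes (attribute ``bipartite == 1``) plus 2 clique nodes.

```python
import networkx as nx

# Bipartite clique graph of the path 0 - 1 - 2.
G = nx.path_graph(3)
B = nx.make_clique_bipartite(G)

bottom = [v for v, d in B.nodes(data=True) if d["bipartite"] == 1]
print(sorted(bottom))        # the original nodes of G
print(B.number_of_nodes())   # 3 original nodes + 2 maximal cliques
```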
30,883 | networkx.algorithms.clique | make_max_clique_graph | Returns the maximal clique graph of the given graph.
The nodes of the maximal clique graph of `G` are the cliques of
`G` and an edge joins two cliques if the cliques are not disjoint.
Parameters
----------
G : NetworkX graph
create_using : NetworkX graph constructor, optional (default=nx.Graph)
Graph type to create. If graph instance, then cleared before populated.
Returns
-------
NetworkX graph
A graph whose nodes are the cliques of `G` and whose edges
join two cliques if they are not disjoint.
Notes
-----
This function behaves like the following code::
import networkx as nx
G = nx.make_clique_bipartite(G)
cliques = [v for v in G.nodes() if G.nodes[v]["bipartite"] == 0]
G = nx.bipartite.projected_graph(G, cliques)
G = nx.relabel_nodes(G, {-v: v - 1 for v in G})
It should be faster, though, since it skips all the intermediate
steps.
| null | (G, create_using=None, *, backend=None, **backend_kwargs) |
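A sketch of the definition above: the 4-node path has maximal cliques {0,1}, {1,2}, {2,3}; consecutive cliques share a node while the end cliques are disjoint, so the max-clique graph is itself a 3-node path.

```python
import networkx as nx

# Maximal clique graph of the path 0 - 1 - 2 - 3.
G = nx.path_graph(4)
C = nx.make_max_clique_graph(G)

print(C.number_of_nodes())  # three maximal cliques
print(C.number_of_edges())  # only consecutive cliques intersect
```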
30,884 | networkx.generators.expanders | margulis_gabber_galil_graph | Returns the Margulis-Gabber-Galil undirected MultiGraph on `n^2` nodes.
The undirected MultiGraph is regular with degree `8`. Nodes are integer
pairs. The second-largest eigenvalue of the adjacency matrix of the graph
is at most `5 \sqrt{2}`, regardless of `n`.
Parameters
----------
n : int
Determines the number of nodes in the graph: `n^2`.
create_using : NetworkX graph constructor, optional (default MultiGraph)
Graph type to create. If graph instance, then cleared before populated.
Returns
-------
G : graph
The constructed undirected multigraph.
Raises
------
NetworkXError
If the graph is directed or not a multigraph.
| null | (n, create_using=None, *, backend=None, **backend_kwargs) |
30,886 | networkx.algorithms.flow.mincost | max_flow_min_cost | Returns a maximum (s, t)-flow of minimum cost.
G is a digraph with edge costs and capacities. There is a source
node s and a sink node t. This function finds a maximum flow from
s to t whose total cost is minimized.
Parameters
----------
G : NetworkX graph
DiGraph on which a minimum cost flow satisfying all demands is
to be found.
s: node label
Source of the flow.
t: node label
Destination of the flow.
capacity: string
Edges of the graph G are expected to have an attribute capacity
that indicates how much flow the edge can support. If this
attribute is not present, the edge is considered to have
infinite capacity. Default value: 'capacity'.
weight: string
Edges of the graph G are expected to have an attribute weight
that indicates the cost incurred by sending one unit of flow on
that edge. If not present, the weight is considered to be 0.
Default value: 'weight'.
Returns
-------
flowDict: dictionary
Dictionary of dictionaries keyed by nodes such that
flowDict[u][v] is the flow on edge (u, v).
Raises
------
NetworkXError
This exception is raised if the input graph is not directed or
not connected.
NetworkXUnbounded
This exception is raised if there is an infinite capacity path
from s to t in G. In this case there is no maximum flow. This
exception is also raised if the digraph G has a cycle of
negative cost and infinite capacity. Then, the cost of a flow
is unbounded below.
See also
--------
cost_of_flow, min_cost_flow, min_cost_flow_cost, network_simplex
Notes
-----
This algorithm is not guaranteed to work if edge weights or demands
are floating point numbers (overflows and roundoff errors can
cause problems). As a workaround you can use integer numbers by
multiplying the relevant edge attributes by a convenient
constant factor (eg 100).
Examples
--------
>>> G = nx.DiGraph()
>>> G.add_edges_from(
... [
... (1, 2, {"capacity": 12, "weight": 4}),
... (1, 3, {"capacity": 20, "weight": 6}),
... (2, 3, {"capacity": 6, "weight": -3}),
... (2, 6, {"capacity": 14, "weight": 1}),
... (3, 4, {"weight": 9}),
... (3, 5, {"capacity": 10, "weight": 5}),
... (4, 2, {"capacity": 19, "weight": 13}),
... (4, 5, {"capacity": 4, "weight": 0}),
... (5, 7, {"capacity": 28, "weight": 2}),
... (6, 5, {"capacity": 11, "weight": 1}),
... (6, 7, {"weight": 8}),
... (7, 4, {"capacity": 6, "weight": 6}),
... ]
... )
>>> mincostFlow = nx.max_flow_min_cost(G, 1, 7)
>>> mincost = nx.cost_of_flow(G, mincostFlow)
>>> mincost
373
>>> from networkx.algorithms.flow import maximum_flow
>>> maxFlow = maximum_flow(G, 1, 7)[1]
>>> nx.cost_of_flow(G, maxFlow) >= mincost
True
>>> mincostFlowValue = sum((mincostFlow[u][7] for u in G.predecessors(7))) - sum(
... (mincostFlow[7][v] for v in G.successors(7))
... )
>>> mincostFlowValue == nx.maximum_flow_value(G, 1, 7)
True
| null | (G, s, t, capacity='capacity', weight='weight', *, backend=None, **backend_kwargs) |
30,887 | networkx.algorithms.clique | max_weight_clique | Find a maximum weight clique in G.
A *clique* in a graph is a set of nodes such that every two distinct nodes
are adjacent. The *weight* of a clique is the sum of the weights of its
nodes. A *maximum weight clique* of graph G is a clique C in G such that
no clique in G has weight greater than the weight of C.
Parameters
----------
G : NetworkX graph
Undirected graph
weight : string or None, optional (default='weight')
The node attribute that holds the integer value used as a weight.
If None, then each node has weight 1.
Returns
-------
clique : list
the nodes of a maximum weight clique
weight : int
the weight of a maximum weight clique
Notes
-----
The implementation is recursive, and therefore it may run into recursion
depth issues if G contains a clique whose number of nodes is close to the
recursion depth limit.
At each search node, the algorithm greedily constructs a weighted
independent set cover of part of the graph in order to find a small set of
nodes on which to branch. The algorithm is very similar to the algorithm
of Tavares et al. [1]_, other than the fact that the NetworkX version does
not use bitsets. This style of algorithm for maximum weight clique (and
maximum weight independent set, which is the same problem but on the
complement graph) has a decades-long history. See Algorithm B of Warren
and Hicks [2]_ and the references in that paper.
References
----------
.. [1] Tavares, W.A., Neto, M.B.C., Rodrigues, C.D., Michelon, P.: Um
algoritmo de branch and bound para o problema da clique máxima
ponderada. Proceedings of XLVII SBPO 1 (2015).
.. [2] Warren, Jeffrey S, Hicks, Illya V.: Combinatorial Branch-and-Bound
for the Maximum Weight Independent Set Problem. Technical Report,
Texas A&M University (2016).
| null | (G, weight='weight', *, backend=None, **backend_kwargs) |
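The docstring above has no example; a minimal one showing that node weights, not clique size, drive the result: a triangle of unit-weight nodes (total 3) loses to a separate heavy 2-clique (total 10). Note the weights must be integers.

```python
import networkx as nx

# A unit-weight triangle {0,1,2} versus a heavy 2-clique {3,4}.
G = nx.Graph([(0, 1), (1, 2), (0, 2), (3, 4)])
for v in G:
    G.nodes[v]["weight"] = 1
G.nodes[3]["weight"] = 5
G.nodes[4]["weight"] = 5

clique, weight = nx.max_weight_clique(G, weight="weight")
print(sorted(clique), weight)  # the heavy pair wins, 5 + 5 = 10
```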
30,888 | networkx.algorithms.matching | max_weight_matching | Compute a maximum-weighted matching of G.
A matching is a subset of edges in which no node occurs more than once.
The weight of a matching is the sum of the weights of its edges.
A maximal matching cannot add more edges and still be a matching.
The cardinality of a matching is the number of matched edges.
Parameters
----------
G : NetworkX graph
Undirected graph
maxcardinality: bool, optional (default=False)
If maxcardinality is True, compute the maximum-cardinality matching
with maximum weight among all maximum-cardinality matchings.
weight: string, optional (default='weight')
Edge data key corresponding to the edge weight.
If key not found, uses 1 as weight.
Returns
-------
matching : set
A maximal matching of the graph.
Examples
--------
>>> G = nx.Graph()
>>> edges = [(1, 2, 6), (1, 3, 2), (2, 3, 1), (2, 4, 7), (3, 5, 9), (4, 5, 3)]
>>> G.add_weighted_edges_from(edges)
>>> sorted(nx.max_weight_matching(G))
[(2, 4), (5, 3)]
Notes
-----
If G has edges with weight attributes the edge data are used as
weight values else the weights are assumed to be 1.
This function takes time O(number_of_nodes ** 3).
If all edge weights are integers, the algorithm uses only integer
computations. If floating point weights are used, the algorithm
could return a slightly suboptimal matching due to numeric
precision errors.
This method is based on the "blossom" method for finding augmenting
paths and the "primal-dual" method for finding a matching of maximum
weight, both methods invented by Jack Edmonds [1]_.
Bipartite graphs can also be matched using the functions present in
:mod:`networkx.algorithms.bipartite.matching`.
References
----------
.. [1] "Efficient Algorithms for Finding Maximum Matching in Graphs",
Zvi Galil, ACM Computing Surveys, 1986.
| @not_implemented_for("multigraph")
@not_implemented_for("directed")
@nx._dispatchable(edge_attrs="weight")
def max_weight_matching(G, maxcardinality=False, weight="weight"):
"""Compute a maximum-weighted matching of G.
A matching is a subset of edges in which no node occurs more than once.
The weight of a matching is the sum of the weights of its edges.
A maximal matching cannot add more edges and still be a matching.
The cardinality of a matching is the number of matched edges.
Parameters
----------
G : NetworkX graph
Undirected graph
maxcardinality: bool, optional (default=False)
If maxcardinality is True, compute the maximum-cardinality matching
with maximum weight among all maximum-cardinality matchings.
weight: string, optional (default='weight')
Edge data key corresponding to the edge weight.
If key not found, uses 1 as weight.
Returns
-------
matching : set
A maximal matching of the graph.
Examples
--------
>>> G = nx.Graph()
>>> edges = [(1, 2, 6), (1, 3, 2), (2, 3, 1), (2, 4, 7), (3, 5, 9), (4, 5, 3)]
>>> G.add_weighted_edges_from(edges)
>>> sorted(nx.max_weight_matching(G))
[(2, 4), (5, 3)]
Notes
-----
If G has edges with weight attributes the edge data are used as
weight values else the weights are assumed to be 1.
This function takes time O(number_of_nodes ** 3).
If all edge weights are integers, the algorithm uses only integer
computations. If floating point weights are used, the algorithm
could return a slightly suboptimal matching due to numeric
precision errors.
This method is based on the "blossom" method for finding augmenting
paths and the "primal-dual" method for finding a matching of maximum
weight, both methods invented by Jack Edmonds [1]_.
Bipartite graphs can also be matched using the functions present in
:mod:`networkx.algorithms.bipartite.matching`.
References
----------
.. [1] "Efficient Algorithms for Finding Maximum Matching in Graphs",
Zvi Galil, ACM Computing Surveys, 1986.
"""
#
# The algorithm is taken from "Efficient Algorithms for Finding Maximum
# Matching in Graphs" by Zvi Galil, ACM Computing Surveys, 1986.
# It is based on the "blossom" method for finding augmenting paths and
# the "primal-dual" method for finding a matching of maximum weight, both
# methods invented by Jack Edmonds.
#
# A C program for maximum weight matching by Ed Rothberg was used
# extensively to validate this new code.
#
# Many terms used in the code comments are explained in the paper
# by Galil. You will probably need the paper to make sense of this code.
#
class NoNode:
"""Dummy value which is different from any node."""
class Blossom:
"""Representation of a non-trivial blossom or sub-blossom."""
__slots__ = ["childs", "edges", "mybestedges"]
# b.childs is an ordered list of b's sub-blossoms, starting with
# the base and going round the blossom.
# b.edges is the list of b's connecting edges, such that
# b.edges[i] = (v, w) where v is a vertex in b.childs[i]
# and w is a vertex in b.childs[wrap(i+1)].
# If b is a top-level S-blossom,
# b.mybestedges is a list of least-slack edges to neighboring
# S-blossoms, or None if no such list has been computed yet.
# This is used for efficient computation of delta3.
# Generate the blossom's leaf vertices.
def leaves(self):
stack = [*self.childs]
while stack:
t = stack.pop()
if isinstance(t, Blossom):
stack.extend(t.childs)
else:
yield t
# Get a list of vertices.
gnodes = list(G)
if not gnodes:
return set() # don't bother with empty graphs
# Find the maximum edge weight.
maxweight = 0
allinteger = True
for i, j, d in G.edges(data=True):
wt = d.get(weight, 1)
if i != j and wt > maxweight:
maxweight = wt
allinteger = allinteger and (str(type(wt)).split("'")[1] in ("int", "long"))
# If v is a matched vertex, mate[v] is its partner vertex.
# If v is a single vertex, v does not occur as a key in mate.
# Initially all vertices are single; updated during augmentation.
mate = {}
# If b is a top-level blossom,
# label.get(b) is None if b is unlabeled (free),
# 1 if b is an S-blossom,
# 2 if b is a T-blossom.
# The label of a vertex is found by looking at the label of its top-level
# containing blossom.
# If v is a vertex inside a T-blossom, label[v] is 2 iff v is reachable
# from an S-vertex outside the blossom.
# Labels are assigned during a stage and reset after each augmentation.
label = {}
# If b is a labeled top-level blossom,
# labeledge[b] = (v, w) is the edge through which b obtained its label
# such that w is a vertex in b, or None if b's base vertex is single.
# If w is a vertex inside a T-blossom and label[w] == 2,
# labeledge[w] = (v, w) is an edge through which w is reachable from
# outside the blossom.
labeledge = {}
# If v is a vertex, inblossom[v] is the top-level blossom to which v
# belongs.
# If v is a top-level vertex, inblossom[v] == v since v is itself
# a (trivial) top-level blossom.
# Initially all vertices are top-level trivial blossoms.
inblossom = dict(zip(gnodes, gnodes))
# If b is a sub-blossom,
# blossomparent[b] is its immediate parent (sub-)blossom.
# If b is a top-level blossom, blossomparent[b] is None.
blossomparent = dict(zip(gnodes, repeat(None)))
# If b is a (sub-)blossom,
# blossombase[b] is its base VERTEX (i.e. recursive sub-blossom).
blossombase = dict(zip(gnodes, gnodes))
# If w is a free vertex (or an unreached vertex inside a T-blossom),
# bestedge[w] = (v, w) is the least-slack edge from an S-vertex,
# or None if there is no such edge.
# If b is a (possibly trivial) top-level S-blossom,
# bestedge[b] = (v, w) is the least-slack edge to a different S-blossom
# (v inside b), or None if there is no such edge.
# This is used for efficient computation of delta2 and delta3.
bestedge = {}
# If v is a vertex,
# dualvar[v] = 2 * u(v) where u(v) is v's variable in the dual
# optimization problem (if all edge weights are integers, multiplication
# by two ensures that all values remain integers throughout the algorithm).
# Initially, u(v) = maxweight / 2.
dualvar = dict(zip(gnodes, repeat(maxweight)))
# If b is a non-trivial blossom,
# blossomdual[b] = z(b) where z(b) is b's variable in the dual
# optimization problem.
blossomdual = {}
# If (v, w) in allowedge or (w, v) in allowedge, then the edge
# (v, w) is known to have zero slack in the optimization problem;
# otherwise the edge may or may not have zero slack.
allowedge = {}
# Queue of newly discovered S-vertices.
queue = []
# Return 2 * slack of edge (v, w) (does not work inside blossoms).
def slack(v, w):
return dualvar[v] + dualvar[w] - 2 * G[v][w].get(weight, 1)
# Assign label t to the top-level blossom containing vertex w,
# coming through an edge from vertex v.
def assignLabel(w, t, v):
b = inblossom[w]
assert label.get(w) is None and label.get(b) is None
label[w] = label[b] = t
if v is not None:
labeledge[w] = labeledge[b] = (v, w)
else:
labeledge[w] = labeledge[b] = None
bestedge[w] = bestedge[b] = None
if t == 1:
# b became an S-vertex/blossom; add it (or its leaf vertices) to the queue.
if isinstance(b, Blossom):
queue.extend(b.leaves())
else:
queue.append(b)
elif t == 2:
# b became a T-vertex/blossom; assign label S to its mate.
# (If b is a non-trivial blossom, its base is the only vertex
# with an external mate.)
base = blossombase[b]
assignLabel(mate[base], 1, base)
# Trace back from vertices v and w to discover either a new blossom
# or an augmenting path. Return the base vertex of the new blossom,
# or NoNode if an augmenting path was found.
def scanBlossom(v, w):
# Trace back from v and w, placing breadcrumbs as we go.
path = []
base = NoNode
while v is not NoNode:
# Look for a breadcrumb in v's blossom or put a new breadcrumb.
b = inblossom[v]
if label[b] & 4:
base = blossombase[b]
break
assert label[b] == 1
path.append(b)
label[b] = 5
# Trace one step back.
if labeledge[b] is None:
# The base of blossom b is single; stop tracing this path.
assert blossombase[b] not in mate
v = NoNode
else:
assert labeledge[b][0] == mate[blossombase[b]]
v = labeledge[b][0]
b = inblossom[v]
assert label[b] == 2
# b is a T-blossom; trace one more step back.
v = labeledge[b][0]
# Swap v and w so that we alternate between both paths.
if w is not NoNode:
v, w = w, v
# Remove breadcrumbs.
for b in path:
label[b] = 1
# Return base vertex, if we found one.
return base
# Construct a new blossom with given base, through S-vertices v and w.
# Label the new blossom as S; set its dual variable to zero;
# relabel its T-vertices to S and add them to the queue.
def addBlossom(base, v, w):
bb = inblossom[base]
bv = inblossom[v]
bw = inblossom[w]
# Create blossom.
b = Blossom()
blossombase[b] = base
blossomparent[b] = None
blossomparent[bb] = b
# Make list of sub-blossoms and their interconnecting edge endpoints.
b.childs = path = []
b.edges = edgs = [(v, w)]
# Trace back from v to base.
while bv != bb:
# Add bv to the new blossom.
blossomparent[bv] = b
path.append(bv)
edgs.append(labeledge[bv])
assert label[bv] == 2 or (
label[bv] == 1 and labeledge[bv][0] == mate[blossombase[bv]]
)
# Trace one step back.
v = labeledge[bv][0]
bv = inblossom[v]
# Add base sub-blossom; reverse lists.
path.append(bb)
path.reverse()
edgs.reverse()
# Trace back from w to base.
while bw != bb:
# Add bw to the new blossom.
blossomparent[bw] = b
path.append(bw)
edgs.append((labeledge[bw][1], labeledge[bw][0]))
assert label[bw] == 2 or (
label[bw] == 1 and labeledge[bw][0] == mate[blossombase[bw]]
)
# Trace one step back.
w = labeledge[bw][0]
bw = inblossom[w]
# Set label to S.
assert label[bb] == 1
label[b] = 1
labeledge[b] = labeledge[bb]
# Set dual variable to zero.
blossomdual[b] = 0
# Relabel vertices.
for v in b.leaves():
if label[inblossom[v]] == 2:
# This T-vertex now turns into an S-vertex because it becomes
# part of an S-blossom; add it to the queue.
queue.append(v)
inblossom[v] = b
# Compute b.mybestedges.
bestedgeto = {}
for bv in path:
if isinstance(bv, Blossom):
if bv.mybestedges is not None:
# Walk this subblossom's least-slack edges.
nblist = bv.mybestedges
# The sub-blossom won't need this data again.
bv.mybestedges = None
else:
# This subblossom does not have a list of least-slack
# edges; get the information from the vertices.
nblist = [
(v, w) for v in bv.leaves() for w in G.neighbors(v) if v != w
]
else:
nblist = [(bv, w) for w in G.neighbors(bv) if bv != w]
for k in nblist:
(i, j) = k
if inblossom[j] == b:
i, j = j, i
bj = inblossom[j]
if (
bj != b
and label.get(bj) == 1
and ((bj not in bestedgeto) or slack(i, j) < slack(*bestedgeto[bj]))
):
bestedgeto[bj] = k
# Forget about least-slack edge of the subblossom.
bestedge[bv] = None
b.mybestedges = list(bestedgeto.values())
# Select bestedge[b].
mybestedge = None
bestedge[b] = None
for k in b.mybestedges:
kslack = slack(*k)
if mybestedge is None or kslack < mybestslack:
mybestedge = k
mybestslack = kslack
bestedge[b] = mybestedge
# Expand the given top-level blossom.
def expandBlossom(b, endstage):
# This is an obnoxiously complicated recursive function for the sake of
# a stack-transformation. So, we hack around the complexity by using
# a trampoline pattern. By yielding the arguments to each recursive
# call, we keep the actual callstack flat.
def _recurse(b, endstage):
# Convert sub-blossoms into top-level blossoms.
for s in b.childs:
blossomparent[s] = None
if isinstance(s, Blossom):
if endstage and blossomdual[s] == 0:
# Recursively expand this sub-blossom.
yield s
else:
for v in s.leaves():
inblossom[v] = s
else:
inblossom[s] = s
# If we expand a T-blossom during a stage, its sub-blossoms must be
# relabeled.
if (not endstage) and label.get(b) == 2:
# Start at the sub-blossom through which the expanding
# blossom obtained its label, and relabel sub-blossoms until
# we reach the base.
# Figure out through which sub-blossom the expanding blossom
# obtained its label initially.
entrychild = inblossom[labeledge[b][1]]
# Decide in which direction we will go round the blossom.
j = b.childs.index(entrychild)
if j & 1:
# Start index is odd; go forward and wrap.
j -= len(b.childs)
jstep = 1
else:
# Start index is even; go backward.
jstep = -1
# Move along the blossom until we get to the base.
v, w = labeledge[b]
while j != 0:
# Relabel the T-sub-blossom.
if jstep == 1:
p, q = b.edges[j]
else:
q, p = b.edges[j - 1]
label[w] = None
label[q] = None
assignLabel(w, 2, v)
# Step to the next S-sub-blossom and note its forward edge.
allowedge[(p, q)] = allowedge[(q, p)] = True
j += jstep
if jstep == 1:
v, w = b.edges[j]
else:
w, v = b.edges[j - 1]
# Step to the next T-sub-blossom.
allowedge[(v, w)] = allowedge[(w, v)] = True
j += jstep
# Relabel the base T-sub-blossom WITHOUT stepping through to
# its mate (so don't call assignLabel).
bw = b.childs[j]
label[w] = label[bw] = 2
labeledge[w] = labeledge[bw] = (v, w)
bestedge[bw] = None
# Continue along the blossom until we get back to entrychild.
j += jstep
while b.childs[j] != entrychild:
# Examine the vertices of the sub-blossom to see whether
# it is reachable from a neighboring S-vertex outside the
# expanding blossom.
bv = b.childs[j]
if label.get(bv) == 1:
# This sub-blossom just got label S through one of its
# neighbors; leave it be.
j += jstep
continue
if isinstance(bv, Blossom):
for v in bv.leaves():
if label.get(v):
break
else:
v = bv
# If the sub-blossom contains a reachable vertex, assign
# label T to the sub-blossom.
if label.get(v):
assert label[v] == 2
assert inblossom[v] == bv
label[v] = None
label[mate[blossombase[bv]]] = None
assignLabel(v, 2, labeledge[v][0])
j += jstep
# Remove the expanded blossom entirely.
label.pop(b, None)
labeledge.pop(b, None)
bestedge.pop(b, None)
del blossomparent[b]
del blossombase[b]
del blossomdual[b]
# Now, we apply the trampoline pattern. We simulate a recursive
# callstack by maintaining a stack of generators, each yielding a
# sequence of function arguments. We grow the stack by appending a call
# to _recurse on each argument tuple, and shrink the stack whenever a
# generator is exhausted.
stack = [_recurse(b, endstage)]
while stack:
top = stack[-1]
for s in top:
stack.append(_recurse(s, endstage))
break
else:
stack.pop()
# Swap matched/unmatched edges over an alternating path through blossom b
# between vertex v and the base vertex. Keep blossom bookkeeping
# consistent.
def augmentBlossom(b, v):
# This is an obnoxiously complicated recursive function for the sake of
# a stack-transformation. So, we hack around the complexity by using
# a trampoline pattern. By yielding the arguments to each recursive
# call, we keep the actual callstack flat.
def _recurse(b, v):
# Bubble up through the blossom tree from vertex v to an immediate
# sub-blossom of b.
t = v
while blossomparent[t] != b:
t = blossomparent[t]
# Recursively deal with the first sub-blossom.
if isinstance(t, Blossom):
yield (t, v)
# Decide in which direction we will go round the blossom.
i = j = b.childs.index(t)
if i & 1:
# Start index is odd; go forward and wrap.
j -= len(b.childs)
jstep = 1
else:
# Start index is even; go backward.
jstep = -1
# Move along the blossom until we get to the base.
while j != 0:
# Step to the next sub-blossom and augment it recursively.
j += jstep
t = b.childs[j]
if jstep == 1:
w, x = b.edges[j]
else:
x, w = b.edges[j - 1]
if isinstance(t, Blossom):
yield (t, w)
# Step to the next sub-blossom and augment it recursively.
j += jstep
t = b.childs[j]
if isinstance(t, Blossom):
yield (t, x)
# Match the edge connecting those sub-blossoms.
mate[w] = x
mate[x] = w
# Rotate the list of sub-blossoms to put the new base at the front.
b.childs = b.childs[i:] + b.childs[:i]
b.edges = b.edges[i:] + b.edges[:i]
blossombase[b] = blossombase[b.childs[0]]
assert blossombase[b] == v
# Now, we apply the trampoline pattern. We simulate a recursive
# callstack by maintaining a stack of generators, each yielding a
# sequence of function arguments. We grow the stack by appending a call
# to _recurse on each argument tuple, and shrink the stack whenever a
# generator is exhausted.
stack = [_recurse(b, v)]
while stack:
top = stack[-1]
for args in top:
stack.append(_recurse(*args))
break
else:
stack.pop()
# Swap matched/unmatched edges over an alternating path between two
# single vertices. The augmenting path runs through S-vertices v and w.
def augmentMatching(v, w):
for s, j in ((v, w), (w, v)):
# Match vertex s to vertex j. Then trace back from s
# until we find a single vertex, swapping matched and unmatched
# edges as we go.
while 1:
bs = inblossom[s]
assert label[bs] == 1
assert (labeledge[bs] is None and blossombase[bs] not in mate) or (
labeledge[bs][0] == mate[blossombase[bs]]
)
# Augment through the S-blossom from s to base.
if isinstance(bs, Blossom):
augmentBlossom(bs, s)
# Update mate[s]
mate[s] = j
# Trace one step back.
if labeledge[bs] is None:
# Reached single vertex; stop.
break
t = labeledge[bs][0]
bt = inblossom[t]
assert label[bt] == 2
# Trace one more step back.
s, j = labeledge[bt]
# Augment through the T-blossom from j to base.
assert blossombase[bt] == t
if isinstance(bt, Blossom):
augmentBlossom(bt, j)
# Update mate[j]
mate[j] = s
# Verify that the optimum solution has been reached.
def verifyOptimum():
if maxcardinality:
# Vertices may have negative dual;
# find a constant non-negative number to add to all vertex duals.
vdualoffset = max(0, -min(dualvar.values()))
else:
vdualoffset = 0
# 0. all dual variables are non-negative
assert min(dualvar.values()) + vdualoffset >= 0
assert len(blossomdual) == 0 or min(blossomdual.values()) >= 0
# 0. all edges have non-negative slack and
# 1. all matched edges have zero slack;
for i, j, d in G.edges(data=True):
wt = d.get(weight, 1)
if i == j:
continue # ignore self-loops
s = dualvar[i] + dualvar[j] - 2 * wt
iblossoms = [i]
jblossoms = [j]
while blossomparent[iblossoms[-1]] is not None:
iblossoms.append(blossomparent[iblossoms[-1]])
while blossomparent[jblossoms[-1]] is not None:
jblossoms.append(blossomparent[jblossoms[-1]])
iblossoms.reverse()
jblossoms.reverse()
for bi, bj in zip(iblossoms, jblossoms):
if bi != bj:
break
s += 2 * blossomdual[bi]
assert s >= 0
if mate.get(i) == j or mate.get(j) == i:
assert mate[i] == j and mate[j] == i
assert s == 0
# 2. all single vertices have zero dual value;
for v in gnodes:
assert (v in mate) or dualvar[v] + vdualoffset == 0
# 3. all blossoms with positive dual value are full.
for b in blossomdual:
if blossomdual[b] > 0:
assert len(b.edges) % 2 == 1
for i, j in b.edges[1::2]:
assert mate[i] == j and mate[j] == i
# Ok.
# Main loop: continue until no further improvement is possible.
while 1:
# Each iteration of this loop is a "stage".
# A stage finds an augmenting path and uses that to improve
# the matching.
# Remove labels from top-level blossoms/vertices.
label.clear()
labeledge.clear()
# Forget all about least-slack edges.
bestedge.clear()
for b in blossomdual:
b.mybestedges = None
# Loss of labeling means that we can not be sure that currently
# allowable edges remain allowable throughout this stage.
allowedge.clear()
# Make queue empty.
queue[:] = []
# Label single blossoms/vertices with S and put them in the queue.
for v in gnodes:
if (v not in mate) and label.get(inblossom[v]) is None:
assignLabel(v, 1, None)
# Loop until we succeed in augmenting the matching.
augmented = 0
while 1:
# Each iteration of this loop is a "substage".
# A substage tries to find an augmenting path;
# if found, the path is used to improve the matching and
# the stage ends. If there is no augmenting path, the
# primal-dual method is used to pump some slack out of
# the dual variables.
# Continue labeling until all vertices which are reachable
# through an alternating path have got a label.
while queue and not augmented:
# Take an S vertex from the queue.
v = queue.pop()
assert label[inblossom[v]] == 1
# Scan its neighbors:
for w in G.neighbors(v):
if w == v:
continue # ignore self-loops
# w is a neighbor to v
bv = inblossom[v]
bw = inblossom[w]
if bv == bw:
# this edge is internal to a blossom; ignore it
continue
if (v, w) not in allowedge:
kslack = slack(v, w)
if kslack <= 0:
# edge k has zero slack => it is allowable
allowedge[(v, w)] = allowedge[(w, v)] = True
if (v, w) in allowedge:
if label.get(bw) is None:
# (C1) w is a free vertex;
# label w with T and label its mate with S (R12).
assignLabel(w, 2, v)
elif label.get(bw) == 1:
# (C2) w is an S-vertex (not in the same blossom);
# follow back-links to discover either an
# augmenting path or a new blossom.
base = scanBlossom(v, w)
if base is not NoNode:
# Found a new blossom; add it to the blossom
# bookkeeping and turn it into an S-blossom.
addBlossom(base, v, w)
else:
# Found an augmenting path; augment the
# matching and end this stage.
augmentMatching(v, w)
augmented = 1
break
elif label.get(w) is None:
# w is inside a T-blossom, but w itself has not
# yet been reached from outside the blossom;
# mark it as reached (we need this to relabel
# during T-blossom expansion).
assert label[bw] == 2
label[w] = 2
labeledge[w] = (v, w)
elif label.get(bw) == 1:
# keep track of the least-slack non-allowable edge to
# a different S-blossom.
if bestedge.get(bv) is None or kslack < slack(*bestedge[bv]):
bestedge[bv] = (v, w)
elif label.get(w) is None:
# w is a free vertex (or an unreached vertex inside
# a T-blossom) but we can not reach it yet;
# keep track of the least-slack edge that reaches w.
if bestedge.get(w) is None or kslack < slack(*bestedge[w]):
bestedge[w] = (v, w)
if augmented:
break
# There is no augmenting path under these constraints;
# compute delta and reduce slack in the optimization problem.
# (Note that our vertex dual variables, edge slacks and delta's
# are pre-multiplied by two.)
deltatype = -1
delta = deltaedge = deltablossom = None
# Compute delta1: the minimum value of any vertex dual.
if not maxcardinality:
deltatype = 1
delta = min(dualvar.values())
# Compute delta2: the minimum slack on any edge between
# an S-vertex and a free vertex.
for v in G.nodes():
if label.get(inblossom[v]) is None and bestedge.get(v) is not None:
d = slack(*bestedge[v])
if deltatype == -1 or d < delta:
delta = d
deltatype = 2
deltaedge = bestedge[v]
# Compute delta3: half the minimum slack on any edge between
# a pair of S-blossoms.
for b in blossomparent:
if (
blossomparent[b] is None
and label.get(b) == 1
and bestedge.get(b) is not None
):
kslack = slack(*bestedge[b])
if allinteger:
assert (kslack % 2) == 0
d = kslack // 2
else:
d = kslack / 2.0
if deltatype == -1 or d < delta:
delta = d
deltatype = 3
deltaedge = bestedge[b]
# Compute delta4: minimum z variable of any T-blossom.
for b in blossomdual:
if (
blossomparent[b] is None
and label.get(b) == 2
and (deltatype == -1 or blossomdual[b] < delta)
):
delta = blossomdual[b]
deltatype = 4
deltablossom = b
if deltatype == -1:
# No further improvement possible; max-cardinality optimum
# reached. Do a final delta update to make the optimum
# verifiable.
assert maxcardinality
deltatype = 1
delta = max(0, min(dualvar.values()))
# Update dual variables according to delta.
for v in gnodes:
if label.get(inblossom[v]) == 1:
# S-vertex: 2*u = 2*u - 2*delta
dualvar[v] -= delta
elif label.get(inblossom[v]) == 2:
# T-vertex: 2*u = 2*u + 2*delta
dualvar[v] += delta
for b in blossomdual:
if blossomparent[b] is None:
if label.get(b) == 1:
# top-level S-blossom: z = z + 2*delta
blossomdual[b] += delta
elif label.get(b) == 2:
# top-level T-blossom: z = z - 2*delta
blossomdual[b] -= delta
# Take action at the point where minimum delta occurred.
if deltatype == 1:
# No further improvement possible; optimum reached.
break
elif deltatype == 2:
# Use the least-slack edge to continue the search.
(v, w) = deltaedge
assert label[inblossom[v]] == 1
allowedge[(v, w)] = allowedge[(w, v)] = True
queue.append(v)
elif deltatype == 3:
# Use the least-slack edge to continue the search.
(v, w) = deltaedge
allowedge[(v, w)] = allowedge[(w, v)] = True
assert label[inblossom[v]] == 1
queue.append(v)
elif deltatype == 4:
# Expand the least-z blossom.
expandBlossom(deltablossom, False)
# End of this substage.
# Paranoia check that the matching is symmetric.
for v in mate:
assert mate[mate[v]] == v
# Stop when no more augmenting path can be found.
if not augmented:
break
# End of a stage; expand all S-blossoms which have zero dual.
for b in list(blossomdual.keys()):
if b not in blossomdual:
continue # already expanded
if blossomparent[b] is None and label.get(b) == 1 and blossomdual[b] == 0:
expandBlossom(b, True)
# Verify that we reached the optimum solution (only for integer weights).
if allinteger:
verifyOptimum()
return matching_dict_to_set(mate)
| (G, maxcardinality=False, weight='weight', *, backend=None, **backend_kwargs) |
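As a usage sketch for the matching code above (a hedged example, assuming a standard NetworkX installation where this function is exposed as `nx.max_weight_matching`; the edge data mirror the function's docstring example):

```python
# Usage sketch for max_weight_matching; assumes NetworkX is installed.
import networkx as nx

G = nx.Graph()
# Weights are read from the edge attribute named by the `weight` parameter.
G.add_weighted_edges_from(
    [(1, 2, 6), (1, 3, 2), (2, 3, 1), (2, 4, 7), (3, 5, 9), (4, 5, 3)]
)

matching = nx.max_weight_matching(G)

# A matching covers each node at most once.
covered = [n for edge in matching for n in edge]
assert len(covered) == len(set(covered))

# The optimum here picks edges (2, 4) and (3, 5): total weight 7 + 9 = 16.
total = sum(G[u][v]["weight"] for u, v in matching)
assert total == 16
```

With `maxcardinality=True` the function instead returns the heaviest matching among all matchings of maximum size, which can differ from the plain maximum-weight matching.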
30,889 | networkx.algorithms.mis | maximal_independent_set | Returns a random maximal independent set guaranteed to contain
a given set of nodes.
An independent set is a set of nodes such that the subgraph
of G induced by these nodes contains no edges. A maximal
independent set is an independent set such that it is not possible
to add a new node and still get an independent set.
Parameters
----------
G : NetworkX graph
nodes : list or iterable
Nodes that must be part of the independent set. This set of nodes
must be independent.
seed : integer, random_state, or None (default)
Indicator of random number generation state.
See :ref:`Randomness<randomness>`.
Returns
-------
indep_nodes : list
List of nodes that are part of a maximal independent set.
Raises
------
NetworkXUnfeasible
If the nodes in the provided list are not part of the graph or
do not form an independent set, an exception is raised.
NetworkXNotImplemented
If `G` is directed.
Examples
--------
>>> G = nx.path_graph(5)
>>> nx.maximal_independent_set(G) # doctest: +SKIP
[4, 0, 2]
>>> nx.maximal_independent_set(G, [1]) # doctest: +SKIP
[1, 3]
Notes
-----
This algorithm does not solve the maximum independent set problem.
| null | (G, nodes=None, seed=None, *, backend=None, **backend_kwargs) |
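A short check of the guarantees described in the docstring above (a sketch, assuming NetworkX is installed; `seed` only makes the randomized choice reproducible):

```python
import networkx as nx

G = nx.path_graph(5)  # path 0-1-2-3-4
# Node 1 is forced into the set; seed fixes the random tie-breaking.
indep = set(nx.maximal_independent_set(G, [1], seed=42))

assert 1 in indep
# Independence: the induced subgraph contains no edges.
assert G.subgraph(indep).number_of_edges() == 0
# Maximality: every node outside the set has a neighbor inside it.
assert all(any(nbr in indep for nbr in G[v]) for v in G if v not in indep)
```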
30,890 | networkx.algorithms.matching | maximal_matching | Find a maximal matching in the graph.
A matching is a subset of edges in which no node occurs more than once.
A maximal matching cannot add more edges and still be a matching.
Parameters
----------
G : NetworkX graph
Undirected graph
Returns
-------
matching : set
A maximal matching of the graph.
Examples
--------
>>> G = nx.Graph([(1, 2), (1, 3), (2, 3), (2, 4), (3, 5), (4, 5)])
>>> sorted(nx.maximal_matching(G))
[(1, 2), (3, 5)]
Notes
-----
The algorithm greedily selects a maximal matching M of the graph G
(i.e. no superset of M exists). It runs in $O(|E|)$ time.
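The greedy procedure described in these notes can be exercised as follows (a sketch assuming NetworkX is installed; the graph is the one from the docstring example):

```python
import networkx as nx

G = nx.Graph([(1, 2), (1, 3), (2, 3), (2, 4), (3, 5), (4, 5)])
M = nx.maximal_matching(G)

# Matching property: no node is covered by two edges of M.
covered = [n for e in M for n in e]
assert len(covered) == len(set(covered))

# Maximality: every edge of G touches at least one matched node,
# so no further edge can be added to M.
matched = set(covered)
assert all(u in matched or v in matched for u, v in G.edges())
```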
| @not_implemented_for("multigraph")
@not_implemented_for("directed")
@nx._dispatchable(edge_attrs="weight")
def max_weight_matching(G, maxcardinality=False, weight="weight"):
"""Compute a maximum-weighted matching of G.
A matching is a subset of edges in which no node occurs more than once.
The weight of a matching is the sum of the weights of its edges.
A maximal matching cannot add more edges and still be a matching.
The cardinality of a matching is the number of matched edges.
Parameters
----------
G : NetworkX graph
Undirected graph
maxcardinality: bool, optional (default=False)
If maxcardinality is True, compute the maximum-cardinality matching
with maximum weight among all maximum-cardinality matchings.
weight: string, optional (default='weight')
Edge data key corresponding to the edge weight.
If key not found, uses 1 as weight.
Returns
-------
matching : set
A maximal matching of the graph.
Examples
--------
>>> G = nx.Graph()
>>> edges = [(1, 2, 6), (1, 3, 2), (2, 3, 1), (2, 4, 7), (3, 5, 9), (4, 5, 3)]
>>> G.add_weighted_edges_from(edges)
>>> sorted(nx.max_weight_matching(G))
[(2, 4), (5, 3)]
Notes
-----
If G has edges with weight attributes the edge data are used as
weight values else the weights are assumed to be 1.
This function takes time O(number_of_nodes ** 3).
If all edge weights are integers, the algorithm uses only integer
computations. If floating point weights are used, the algorithm
could return a slightly suboptimal matching due to numeric
precision errors.
This method is based on the "blossom" method for finding augmenting
paths and the "primal-dual" method for finding a matching of maximum
weight, both methods invented by Jack Edmonds [1]_.
Bipartite graphs can also be matched using the functions present in
:mod:`networkx.algorithms.bipartite.matching`.
References
----------
.. [1] "Efficient Algorithms for Finding Maximum Matching in Graphs",
Zvi Galil, ACM Computing Surveys, 1986.
"""
#
# The algorithm is taken from "Efficient Algorithms for Finding Maximum
# Matching in Graphs" by Zvi Galil, ACM Computing Surveys, 1986.
# It is based on the "blossom" method for finding augmenting paths and
# the "primal-dual" method for finding a matching of maximum weight, both
# methods invented by Jack Edmonds.
#
# A C program for maximum weight matching by Ed Rothberg was used
# extensively to validate this new code.
#
# Many terms used in the code comments are explained in the paper
# by Galil. You will probably need the paper to make sense of this code.
#
class NoNode:
"""Dummy value which is different from any node."""
class Blossom:
"""Representation of a non-trivial blossom or sub-blossom."""
__slots__ = ["childs", "edges", "mybestedges"]
# b.childs is an ordered list of b's sub-blossoms, starting with
# the base and going round the blossom.
# b.edges is the list of b's connecting edges, such that
# b.edges[i] = (v, w) where v is a vertex in b.childs[i]
# and w is a vertex in b.childs[wrap(i+1)].
# If b is a top-level S-blossom,
# b.mybestedges is a list of least-slack edges to neighboring
# S-blossoms, or None if no such list has been computed yet.
# This is used for efficient computation of delta3.
# Generate the blossom's leaf vertices.
def leaves(self):
stack = [*self.childs]
while stack:
t = stack.pop()
if isinstance(t, Blossom):
stack.extend(t.childs)
else:
yield t
# Get a list of vertices.
gnodes = list(G)
if not gnodes:
return set() # don't bother with empty graphs
# Find the maximum edge weight.
maxweight = 0
allinteger = True
for i, j, d in G.edges(data=True):
wt = d.get(weight, 1)
if i != j and wt > maxweight:
maxweight = wt
# Track whether every edge weight is an integer ("long" covers Python 2 ints).
allinteger = allinteger and (str(type(wt)).split("'")[1] in ("int", "long"))
# If v is a matched vertex, mate[v] is its partner vertex.
# If v is a single vertex, v does not occur as a key in mate.
# Initially all vertices are single; updated during augmentation.
mate = {}
# If b is a top-level blossom,
# label.get(b) is None if b is unlabeled (free),
# 1 if b is an S-blossom,
# 2 if b is a T-blossom.
# The label of a vertex is found by looking at the label of its top-level
# containing blossom.
# If v is a vertex inside a T-blossom, label[v] is 2 iff v is reachable
# from an S-vertex outside the blossom.
# Labels are assigned during a stage and reset after each augmentation.
label = {}
# If b is a labeled top-level blossom,
# labeledge[b] = (v, w) is the edge through which b obtained its label
# such that w is a vertex in b, or None if b's base vertex is single.
# If w is a vertex inside a T-blossom and label[w] == 2,
# labeledge[w] = (v, w) is an edge through which w is reachable from
# outside the blossom.
labeledge = {}
# If v is a vertex, inblossom[v] is the top-level blossom to which v
# belongs.
# If v is a top-level vertex, inblossom[v] == v since v is itself
# a (trivial) top-level blossom.
# Initially all vertices are top-level trivial blossoms.
inblossom = dict(zip(gnodes, gnodes))
# If b is a sub-blossom,
# blossomparent[b] is its immediate parent (sub-)blossom.
# If b is a top-level blossom, blossomparent[b] is None.
blossomparent = dict(zip(gnodes, repeat(None)))
# If b is a (sub-)blossom,
# blossombase[b] is its base VERTEX (i.e. recursive sub-blossom).
blossombase = dict(zip(gnodes, gnodes))
# If w is a free vertex (or an unreached vertex inside a T-blossom),
# bestedge[w] = (v, w) is the least-slack edge from an S-vertex,
# or None if there is no such edge.
# If b is a (possibly trivial) top-level S-blossom,
# bestedge[b] = (v, w) is the least-slack edge to a different S-blossom
# (v inside b), or None if there is no such edge.
# This is used for efficient computation of delta2 and delta3.
bestedge = {}
# If v is a vertex,
# dualvar[v] = 2 * u(v) where u(v) is v's variable in the dual
# optimization problem (if all edge weights are integers, multiplication
# by two ensures that all values remain integers throughout the algorithm).
# Initially, u(v) = maxweight / 2.
dualvar = dict(zip(gnodes, repeat(maxweight)))
# If b is a non-trivial blossom,
# blossomdual[b] = z(b) where z(b) is b's variable in the dual
# optimization problem.
blossomdual = {}
# If (v, w) in allowedge or (w, v) in allowedge, then the edge
# (v, w) is known to have zero slack in the optimization problem;
# otherwise the edge may or may not have zero slack.
allowedge = {}
# Queue of newly discovered S-vertices.
queue = []
# Return 2 * slack of edge (v, w) (does not work inside blossoms).
def slack(v, w):
return dualvar[v] + dualvar[w] - 2 * G[v][w].get(weight, 1)
# Assign label t to the top-level blossom containing vertex w,
# coming through an edge from vertex v.
def assignLabel(w, t, v):
b = inblossom[w]
assert label.get(w) is None and label.get(b) is None
label[w] = label[b] = t
if v is not None:
labeledge[w] = labeledge[b] = (v, w)
else:
labeledge[w] = labeledge[b] = None
bestedge[w] = bestedge[b] = None
if t == 1:
# b became an S-vertex/blossom; add it (or its leaf vertices) to the queue.
if isinstance(b, Blossom):
queue.extend(b.leaves())
else:
queue.append(b)
elif t == 2:
# b became a T-vertex/blossom; assign label S to its mate.
# (If b is a non-trivial blossom, its base is the only vertex
# with an external mate.)
base = blossombase[b]
assignLabel(mate[base], 1, base)
# Trace back from vertices v and w to discover either a new blossom
# or an augmenting path. Return the base vertex of the new blossom,
# or NoNode if an augmenting path was found.
def scanBlossom(v, w):
# Trace back from v and w, placing breadcrumbs as we go.
path = []
base = NoNode
while v is not NoNode:
# Look for a breadcrumb in v's blossom or put a new breadcrumb.
b = inblossom[v]
if label[b] & 4:
base = blossombase[b]
break
assert label[b] == 1
path.append(b)
label[b] = 5
# Trace one step back.
if labeledge[b] is None:
# The base of blossom b is single; stop tracing this path.
assert blossombase[b] not in mate
v = NoNode
else:
assert labeledge[b][0] == mate[blossombase[b]]
v = labeledge[b][0]
b = inblossom[v]
assert label[b] == 2
# b is a T-blossom; trace one more step back.
v = labeledge[b][0]
# Swap v and w so that we alternate between both paths.
if w is not NoNode:
v, w = w, v
# Remove breadcrumbs.
for b in path:
label[b] = 1
# Return base vertex, if we found one.
return base
# Construct a new blossom with given base, through S-vertices v and w.
# Label the new blossom as S; set its dual variable to zero;
# relabel its T-vertices to S and add them to the queue.
def addBlossom(base, v, w):
bb = inblossom[base]
bv = inblossom[v]
bw = inblossom[w]
# Create blossom.
b = Blossom()
blossombase[b] = base
blossomparent[b] = None
blossomparent[bb] = b
# Make list of sub-blossoms and their interconnecting edge endpoints.
b.childs = path = []
b.edges = edgs = [(v, w)]
# Trace back from v to base.
while bv != bb:
# Add bv to the new blossom.
blossomparent[bv] = b
path.append(bv)
edgs.append(labeledge[bv])
assert label[bv] == 2 or (
label[bv] == 1 and labeledge[bv][0] == mate[blossombase[bv]]
)
# Trace one step back.
v = labeledge[bv][0]
bv = inblossom[v]
# Add base sub-blossom; reverse lists.
path.append(bb)
path.reverse()
edgs.reverse()
# Trace back from w to base.
while bw != bb:
# Add bw to the new blossom.
blossomparent[bw] = b
path.append(bw)
edgs.append((labeledge[bw][1], labeledge[bw][0]))
assert label[bw] == 2 or (
label[bw] == 1 and labeledge[bw][0] == mate[blossombase[bw]]
)
# Trace one step back.
w = labeledge[bw][0]
bw = inblossom[w]
# Set label to S.
assert label[bb] == 1
label[b] = 1
labeledge[b] = labeledge[bb]
# Set dual variable to zero.
blossomdual[b] = 0
# Relabel vertices.
for v in b.leaves():
if label[inblossom[v]] == 2:
# This T-vertex now turns into an S-vertex because it becomes
# part of an S-blossom; add it to the queue.
queue.append(v)
inblossom[v] = b
# Compute b.mybestedges.
bestedgeto = {}
for bv in path:
if isinstance(bv, Blossom):
if bv.mybestedges is not None:
# Walk this subblossom's least-slack edges.
nblist = bv.mybestedges
# The sub-blossom won't need this data again.
bv.mybestedges = None
else:
# This subblossom does not have a list of least-slack
# edges; get the information from the vertices.
nblist = [
(v, w) for v in bv.leaves() for w in G.neighbors(v) if v != w
]
else:
nblist = [(bv, w) for w in G.neighbors(bv) if bv != w]
for k in nblist:
(i, j) = k
if inblossom[j] == b:
i, j = j, i
bj = inblossom[j]
if (
bj != b
and label.get(bj) == 1
and ((bj not in bestedgeto) or slack(i, j) < slack(*bestedgeto[bj]))
):
bestedgeto[bj] = k
# Forget about least-slack edge of the subblossom.
bestedge[bv] = None
b.mybestedges = list(bestedgeto.values())
# Select bestedge[b].
mybestedge = None
bestedge[b] = None
for k in b.mybestedges:
kslack = slack(*k)
if mybestedge is None or kslack < mybestslack:
mybestedge = k
mybestslack = kslack
bestedge[b] = mybestedge
# Expand the given top-level blossom.
def expandBlossom(b, endstage):
# This is an obnoxiously complicated recursive function for the sake of
# a stack-transformation. So, we hack around the complexity by using
# a trampoline pattern. By yielding the arguments to each recursive
# call, we keep the actual callstack flat.
def _recurse(b, endstage):
# Convert sub-blossoms into top-level blossoms.
for s in b.childs:
blossomparent[s] = None
if isinstance(s, Blossom):
if endstage and blossomdual[s] == 0:
# Recursively expand this sub-blossom.
yield s
else:
for v in s.leaves():
inblossom[v] = s
else:
inblossom[s] = s
# If we expand a T-blossom during a stage, its sub-blossoms must be
# relabeled.
if (not endstage) and label.get(b) == 2:
# Start at the sub-blossom through which the expanding
# blossom obtained its label, and relabel sub-blossoms until
# we reach the base.
# Figure out through which sub-blossom the expanding blossom
# obtained its label initially.
entrychild = inblossom[labeledge[b][1]]
# Decide in which direction we will go round the blossom.
j = b.childs.index(entrychild)
if j & 1:
# Start index is odd; go forward and wrap.
j -= len(b.childs)
jstep = 1
else:
# Start index is even; go backward.
jstep = -1
# Move along the blossom until we get to the base.
v, w = labeledge[b]
while j != 0:
# Relabel the T-sub-blossom.
if jstep == 1:
p, q = b.edges[j]
else:
q, p = b.edges[j - 1]
label[w] = None
label[q] = None
assignLabel(w, 2, v)
# Step to the next S-sub-blossom and note its forward edge.
allowedge[(p, q)] = allowedge[(q, p)] = True
j += jstep
if jstep == 1:
v, w = b.edges[j]
else:
w, v = b.edges[j - 1]
# Step to the next T-sub-blossom.
allowedge[(v, w)] = allowedge[(w, v)] = True
j += jstep
# Relabel the base T-sub-blossom WITHOUT stepping through to
# its mate (so don't call assignLabel).
bw = b.childs[j]
label[w] = label[bw] = 2
labeledge[w] = labeledge[bw] = (v, w)
bestedge[bw] = None
# Continue along the blossom until we get back to entrychild.
j += jstep
while b.childs[j] != entrychild:
# Examine the vertices of the sub-blossom to see whether
# it is reachable from a neighboring S-vertex outside the
# expanding blossom.
bv = b.childs[j]
if label.get(bv) == 1:
# This sub-blossom just got label S through one of its
# neighbors; leave it be.
j += jstep
continue
if isinstance(bv, Blossom):
for v in bv.leaves():
if label.get(v):
break
else:
v = bv
# If the sub-blossom contains a reachable vertex, assign
# label T to the sub-blossom.
if label.get(v):
assert label[v] == 2
assert inblossom[v] == bv
label[v] = None
label[mate[blossombase[bv]]] = None
assignLabel(v, 2, labeledge[v][0])
j += jstep
# Remove the expanded blossom entirely.
label.pop(b, None)
labeledge.pop(b, None)
bestedge.pop(b, None)
del blossomparent[b]
del blossombase[b]
del blossomdual[b]
# Now, we apply the trampoline pattern. We simulate a recursive
# callstack by maintaining a stack of generators, each yielding a
# sequence of function arguments. We grow the stack by appending a call
# to _recurse on each argument tuple, and shrink the stack whenever a
# generator is exhausted.
stack = [_recurse(b, endstage)]
while stack:
top = stack[-1]
for s in top:
stack.append(_recurse(s, endstage))
break
else:
stack.pop()
# Swap matched/unmatched edges over an alternating path through blossom b
# between vertex v and the base vertex. Keep blossom bookkeeping
# consistent.
def augmentBlossom(b, v):
# This is an obnoxiously complicated recursive function for the sake of
# a stack-transformation. So, we hack around the complexity by using
# a trampoline pattern. By yielding the arguments to each recursive
# call, we keep the actual callstack flat.
def _recurse(b, v):
# Bubble up through the blossom tree from vertex v to an immediate
# sub-blossom of b.
t = v
while blossomparent[t] != b:
t = blossomparent[t]
# Recursively deal with the first sub-blossom.
if isinstance(t, Blossom):
yield (t, v)
# Decide in which direction we will go round the blossom.
i = j = b.childs.index(t)
if i & 1:
# Start index is odd; go forward and wrap.
j -= len(b.childs)
jstep = 1
else:
# Start index is even; go backward.
jstep = -1
# Move along the blossom until we get to the base.
while j != 0:
# Step to the next sub-blossom and augment it recursively.
j += jstep
t = b.childs[j]
if jstep == 1:
w, x = b.edges[j]
else:
x, w = b.edges[j - 1]
if isinstance(t, Blossom):
yield (t, w)
# Step to the next sub-blossom and augment it recursively.
j += jstep
t = b.childs[j]
if isinstance(t, Blossom):
yield (t, x)
# Match the edge connecting those sub-blossoms.
mate[w] = x
mate[x] = w
# Rotate the list of sub-blossoms to put the new base at the front.
b.childs = b.childs[i:] + b.childs[:i]
b.edges = b.edges[i:] + b.edges[:i]
blossombase[b] = blossombase[b.childs[0]]
assert blossombase[b] == v
# Now, we apply the trampoline pattern. We simulate a recursive
# callstack by maintaining a stack of generators, each yielding a
# sequence of function arguments. We grow the stack by appending a call
# to _recurse on each argument tuple, and shrink the stack whenever a
# generator is exhausted.
stack = [_recurse(b, v)]
while stack:
top = stack[-1]
for args in top:
stack.append(_recurse(*args))
break
else:
stack.pop()
# Swap matched/unmatched edges over an alternating path between two
# single vertices. The augmenting path runs through S-vertices v and w.
def augmentMatching(v, w):
for s, j in ((v, w), (w, v)):
# Match vertex s to vertex j. Then trace back from s
# until we find a single vertex, swapping matched and unmatched
# edges as we go.
while 1:
bs = inblossom[s]
assert label[bs] == 1
assert (labeledge[bs] is None and blossombase[bs] not in mate) or (
labeledge[bs][0] == mate[blossombase[bs]]
)
# Augment through the S-blossom from s to base.
if isinstance(bs, Blossom):
augmentBlossom(bs, s)
# Update mate[s]
mate[s] = j
# Trace one step back.
if labeledge[bs] is None:
# Reached single vertex; stop.
break
t = labeledge[bs][0]
bt = inblossom[t]
assert label[bt] == 2
# Trace one more step back.
s, j = labeledge[bt]
# Augment through the T-blossom from j to base.
assert blossombase[bt] == t
if isinstance(bt, Blossom):
augmentBlossom(bt, j)
# Update mate[j]
mate[j] = s
# Verify that the optimum solution has been reached.
def verifyOptimum():
if maxcardinality:
# Vertices may have negative dual;
# find a constant non-negative number to add to all vertex duals.
vdualoffset = max(0, -min(dualvar.values()))
else:
vdualoffset = 0
# 0. all dual variables are non-negative
assert min(dualvar.values()) + vdualoffset >= 0
assert len(blossomdual) == 0 or min(blossomdual.values()) >= 0
# 0. all edges have non-negative slack and
# 1. all matched edges have zero slack;
for i, j, d in G.edges(data=True):
wt = d.get(weight, 1)
if i == j:
continue # ignore self-loops
s = dualvar[i] + dualvar[j] - 2 * wt
iblossoms = [i]
jblossoms = [j]
while blossomparent[iblossoms[-1]] is not None:
iblossoms.append(blossomparent[iblossoms[-1]])
while blossomparent[jblossoms[-1]] is not None:
jblossoms.append(blossomparent[jblossoms[-1]])
iblossoms.reverse()
jblossoms.reverse()
for bi, bj in zip(iblossoms, jblossoms):
if bi != bj:
break
s += 2 * blossomdual[bi]
assert s >= 0
if mate.get(i) == j or mate.get(j) == i:
assert mate[i] == j and mate[j] == i
assert s == 0
# 2. all single vertices have zero dual value;
for v in gnodes:
assert (v in mate) or dualvar[v] + vdualoffset == 0
# 3. all blossoms with positive dual value are full.
for b in blossomdual:
if blossomdual[b] > 0:
assert len(b.edges) % 2 == 1
for i, j in b.edges[1::2]:
assert mate[i] == j and mate[j] == i
# Ok.
# Main loop: continue until no further improvement is possible.
while 1:
# Each iteration of this loop is a "stage".
# A stage finds an augmenting path and uses that to improve
# the matching.
# Remove labels from top-level blossoms/vertices.
label.clear()
labeledge.clear()
# Forget all about least-slack edges.
bestedge.clear()
for b in blossomdual:
b.mybestedges = None
# Loss of labeling means that we can not be sure that currently
# allowable edges remain allowable throughout this stage.
allowedge.clear()
# Make queue empty.
queue[:] = []
# Label single blossoms/vertices with S and put them in the queue.
for v in gnodes:
if (v not in mate) and label.get(inblossom[v]) is None:
assignLabel(v, 1, None)
# Loop until we succeed in augmenting the matching.
augmented = 0
while 1:
# Each iteration of this loop is a "substage".
# A substage tries to find an augmenting path;
# if found, the path is used to improve the matching and
# the stage ends. If there is no augmenting path, the
# primal-dual method is used to pump some slack out of
# the dual variables.
# Continue labeling until all vertices which are reachable
# through an alternating path have got a label.
while queue and not augmented:
# Take an S vertex from the queue.
v = queue.pop()
assert label[inblossom[v]] == 1
# Scan its neighbors:
for w in G.neighbors(v):
if w == v:
continue # ignore self-loops
# w is a neighbor to v
bv = inblossom[v]
bw = inblossom[w]
if bv == bw:
# this edge is internal to a blossom; ignore it
continue
if (v, w) not in allowedge:
kslack = slack(v, w)
if kslack <= 0:
# edge k has zero slack => it is allowable
allowedge[(v, w)] = allowedge[(w, v)] = True
if (v, w) in allowedge:
if label.get(bw) is None:
# (C1) w is a free vertex;
# label w with T and label its mate with S (R12).
assignLabel(w, 2, v)
elif label.get(bw) == 1:
# (C2) w is an S-vertex (not in the same blossom);
# follow back-links to discover either an
# augmenting path or a new blossom.
base = scanBlossom(v, w)
if base is not NoNode:
# Found a new blossom; add it to the blossom
# bookkeeping and turn it into an S-blossom.
addBlossom(base, v, w)
else:
# Found an augmenting path; augment the
# matching and end this stage.
augmentMatching(v, w)
augmented = 1
break
elif label.get(w) is None:
# w is inside a T-blossom, but w itself has not
# yet been reached from outside the blossom;
# mark it as reached (we need this to relabel
# during T-blossom expansion).
assert label[bw] == 2
label[w] = 2
labeledge[w] = (v, w)
elif label.get(bw) == 1:
# keep track of the least-slack non-allowable edge to
# a different S-blossom.
if bestedge.get(bv) is None or kslack < slack(*bestedge[bv]):
bestedge[bv] = (v, w)
elif label.get(w) is None:
# w is a free vertex (or an unreached vertex inside
# a T-blossom) but we can not reach it yet;
# keep track of the least-slack edge that reaches w.
if bestedge.get(w) is None or kslack < slack(*bestedge[w]):
bestedge[w] = (v, w)
if augmented:
break
# There is no augmenting path under these constraints;
# compute delta and reduce slack in the optimization problem.
# (Note that our vertex dual variables, edge slacks and deltas
# are pre-multiplied by two.)
deltatype = -1
delta = deltaedge = deltablossom = None
# Compute delta1: the minimum value of any vertex dual.
if not maxcardinality:
deltatype = 1
delta = min(dualvar.values())
# Compute delta2: the minimum slack on any edge between
# an S-vertex and a free vertex.
for v in G.nodes():
if label.get(inblossom[v]) is None and bestedge.get(v) is not None:
d = slack(*bestedge[v])
if deltatype == -1 or d < delta:
delta = d
deltatype = 2
deltaedge = bestedge[v]
# Compute delta3: half the minimum slack on any edge between
# a pair of S-blossoms.
for b in blossomparent:
if (
blossomparent[b] is None
and label.get(b) == 1
and bestedge.get(b) is not None
):
kslack = slack(*bestedge[b])
if allinteger:
assert (kslack % 2) == 0
d = kslack // 2
else:
d = kslack / 2.0
if deltatype == -1 or d < delta:
delta = d
deltatype = 3
deltaedge = bestedge[b]
# Compute delta4: minimum z variable of any T-blossom.
for b in blossomdual:
if (
blossomparent[b] is None
and label.get(b) == 2
and (deltatype == -1 or blossomdual[b] < delta)
):
delta = blossomdual[b]
deltatype = 4
deltablossom = b
if deltatype == -1:
# No further improvement possible; max-cardinality optimum
# reached. Do a final delta update to make the optimum
# verifiable.
assert maxcardinality
deltatype = 1
delta = max(0, min(dualvar.values()))
# Update dual variables according to delta.
for v in gnodes:
if label.get(inblossom[v]) == 1:
# S-vertex: 2*u = 2*u - 2*delta
dualvar[v] -= delta
elif label.get(inblossom[v]) == 2:
# T-vertex: 2*u = 2*u + 2*delta
dualvar[v] += delta
for b in blossomdual:
if blossomparent[b] is None:
if label.get(b) == 1:
# top-level S-blossom: z = z + 2*delta
blossomdual[b] += delta
elif label.get(b) == 2:
# top-level T-blossom: z = z - 2*delta
blossomdual[b] -= delta
# Take action at the point where minimum delta occurred.
if deltatype == 1:
# No further improvement possible; optimum reached.
break
elif deltatype == 2:
# Use the least-slack edge to continue the search.
(v, w) = deltaedge
assert label[inblossom[v]] == 1
allowedge[(v, w)] = allowedge[(w, v)] = True
queue.append(v)
elif deltatype == 3:
# Use the least-slack edge to continue the search.
(v, w) = deltaedge
allowedge[(v, w)] = allowedge[(w, v)] = True
assert label[inblossom[v]] == 1
queue.append(v)
elif deltatype == 4:
# Expand the least-z blossom.
expandBlossom(deltablossom, False)
# End of this substage.
# Paranoia check that the matching is symmetric.
for v in mate:
assert mate[mate[v]] == v
# Stop when no more augmenting path can be found.
if not augmented:
break
# End of a stage; expand all S-blossoms which have zero dual.
for b in list(blossomdual.keys()):
if b not in blossomdual:
continue # already expanded
if blossomparent[b] is None and label.get(b) == 1 and blossomdual[b] == 0:
expandBlossom(b, True)
# Verify that we reached the optimum solution (only for integer weights).
if allinteger:
verifyOptimum()
return matching_dict_to_set(mate)
| (G, *, backend=None, **backend_kwargs) |
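The trampoline pattern used by `expandBlossom` and `augmentBlossom` above — simulating a recursive call stack with an explicit stack of generators, each yielding the argument tuples of its "recursive calls" — can be sketched in isolation. The tree-summing function below is purely illustrative (it is not part of the matching code); it uses the same `stack` / `for … break` / `else: pop` skeleton as the blossom routines:

```python
def sum_tree(node):
    # node is (value, [children]); a direct recursion would be:
    #   return node[0] + sum(sum_tree(c) for c in node[1])
    # Instead, each "call" is a generator yielding the argument tuples
    # of its recursive calls; a manual stack keeps the real Python
    # call stack flat, exactly as in expandBlossom/augmentBlossom.
    total = 0

    def _recurse(n):
        nonlocal total
        total += n[0]
        # Yield one argument tuple per recursive call we would make.
        yield from ((c,) for c in n[1])

    stack = [_recurse(node)]
    while stack:
        top = stack[-1]
        for args in top:
            # The top generator requested a recursive call: push it.
            stack.append(_recurse(*args))
            break
        else:
            # The top generator is exhausted: its "call" has returned.
            stack.pop()
    return total
```

For example, `sum_tree((1, [(2, []), (3, [(4, [])])]))` walks the whole tree without any bound on nesting depth from Python's recursion limit.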
30,891 | networkx.algorithms.tree.branchings | maximum_branching |
Returns a maximum branching from G.
Parameters
----------
G : (multi)digraph-like
The graph to be searched.
attr : str
The edge attribute used in determining optimality.
default : float
The value of the edge attribute used if an edge does not have
the attribute `attr`.
preserve_attrs : bool
If True, preserve the other attributes of the original graph (that are not
passed to `attr`)
partition : str
The key for the edge attribute containing the partition
data on the graph. Edges can be included, excluded or open using the
`EdgePartition` enum.
Returns
-------
B : (multi)digraph-like
A maximum branching.
| @nx._dispatchable(preserve_edge_attrs=True, returns_graph=True)
def maximum_branching(
G,
attr="weight",
default=1,
preserve_attrs=False,
partition=None,
):
#######################################
### Data Structure Helper Functions ###
#######################################
def edmonds_add_edge(G, edge_index, u, v, key, **d):
"""
Adds an edge to `G` while also updating the edge index.
This algorithm requires the use of an external dictionary to track
the edge keys since it is possible that the source or destination
node of an edge will be changed and the default key-handling
capabilities of the MultiDiGraph class do not account for this.
Parameters
----------
G : MultiDiGraph
The graph to insert an edge into.
edge_index : dict
A mapping from integers to the edges of the graph.
u : node
The source node of the new edge.
v : node
The destination node of the new edge.
key : int
The key to use from `edge_index`.
d : keyword arguments, optional
Other attributes to store on the new edge.
"""
if key in edge_index:
uu, vv, _ = edge_index[key]
if (u != uu) or (v != vv):
raise Exception(f"Key {key!r} is already in use.")
G.add_edge(u, v, key, **d)
edge_index[key] = (u, v, G.succ[u][v][key])
def edmonds_remove_node(G, edge_index, n):
"""
Remove a node from the graph, updating the edge index to match.
Parameters
----------
G : MultiDiGraph
The graph to remove an edge from.
edge_index : dict
A mapping from integers to the edges of the graph.
n : node
The node to remove from `G`.
"""
keys = set()
for keydict in G.pred[n].values():
keys.update(keydict)
for keydict in G.succ[n].values():
keys.update(keydict)
for key in keys:
del edge_index[key]
G.remove_node(n)
#######################
### Algorithm Setup ###
#######################
# Pick an attribute name that the original graph is unlikely to have
candidate_attr = "edmonds' secret candidate attribute"
new_node_base_name = "edmonds new node base name "
G_original = G
G = nx.MultiDiGraph()
G.__networkx_cache__ = None # Disable caching
# A dict to reliably track mutations to the edges using the key of the edge.
G_edge_index = {}
# Each edge is given an arbitrary numerical key
for key, (u, v, data) in enumerate(G_original.edges(data=True)):
d = {attr: data.get(attr, default)}
if data.get(partition) is not None:
d[partition] = data.get(partition)
if preserve_attrs:
for d_k, d_v in data.items():
if d_k != attr:
d[d_k] = d_v
edmonds_add_edge(G, G_edge_index, u, v, key, **d)
level = 0 # Stores the number of contracted nodes
# These are the buckets from the paper.
#
# In the paper, G^i are modified versions of the original graph.
# D^i and E^i are the nodes and edges of the maximal edges that are
# consistent with G^i. In this implementation, D^i and E^i are stored
# together as the graph B^i. We will have strictly more B^i than the
# paper will have.
#
# Note that the data in graphs and branchings are tuples with the graph as
# the first element and the edge index as the second.
B = nx.MultiDiGraph()
B_edge_index = {}
graphs = [] # G^i list
branchings = [] # B^i list
selected_nodes = set() # D^i bucket
uf = nx.utils.UnionFind()
# A list of lists of edge indices. Each list is a circuit for graph G^i.
# Note the edge list is not required to be a circuit in G^0.
circuits = []
# Stores the index of the minimum edge in the circuit found in G^i and B^i.
# The ordering of the edges seems to preserve the weight ordering from
# G^0. So even if the circuit does not form a circuit in G^0, it is
# still true that the minimum edge of the circuit is also minimal with
# respect to the edge weights in G^0 (despite the weights themselves
# being different).
minedge_circuit = []
###########################
### Algorithm Structure ###
###########################
# Each step listed in the algorithm is an inner function. Thus, the overall
# loop structure is:
#
# while True:
# step_I1()
# if cycle detected:
# step_I2()
# elif every node of G is in D and E is a branching:
# break
##################################
### Algorithm Helper Functions ###
##################################
def edmonds_find_desired_edge(v):
"""
Find the edge directed towards v with maximal weight.
If an edge partition exists in this graph, return the included
edge if it exists and never return any excluded edge.
Note: There can only be one included edge for each vertex otherwise
the edge partition is empty.
Parameters
----------
v : node
The node to search for the maximal weight incoming edge.
"""
edge = None
max_weight = -INF
for u, _, key, data in G.in_edges(v, data=True, keys=True):
# Skip excluded edges
if data.get(partition) == nx.EdgePartition.EXCLUDED:
continue
new_weight = data[attr]
# Return the included edge
if data.get(partition) == nx.EdgePartition.INCLUDED:
max_weight = new_weight
edge = (u, v, key, new_weight, data)
break
# Find the best open edge
if new_weight > max_weight:
max_weight = new_weight
edge = (u, v, key, new_weight, data)
return edge, max_weight
def edmonds_step_I2(v, desired_edge, level):
"""
Perform step I2 from Edmonds' paper
First, check if the last step I1 created a cycle. If it did not, do nothing.
If it did, store the cycle for later reference and contract it.
Parameters
----------
v : node
The current node to consider
desired_edge : edge
The minimum desired edge to remove from the cycle.
level : int
The current level, i.e. the number of cycles that have already been removed.
"""
u = desired_edge[0]
Q_nodes = nx.shortest_path(B, v, u)
Q_edges = [
list(B[Q_nodes[i]][vv].keys())[0] for i, vv in enumerate(Q_nodes[1:])
]
Q_edges.append(desired_edge[2]) # Add the new edge key to complete the circuit
# Get the edge in the circuit with the minimum weight.
# Also, save the incoming weights for each node.
minweight = INF
minedge = None
Q_incoming_weight = {}
for edge_key in Q_edges:
u, v, data = B_edge_index[edge_key]
w = data[attr]
# We cannot remove an included edge, even if it is the
# minimum edge in the circuit
Q_incoming_weight[v] = w
if data.get(partition) == nx.EdgePartition.INCLUDED:
continue
if w < minweight:
minweight = w
minedge = edge_key
circuits.append(Q_edges)
minedge_circuit.append(minedge)
graphs.append((G.copy(), G_edge_index.copy()))
branchings.append((B.copy(), B_edge_index.copy()))
# Mutate the graph to contract the circuit
new_node = new_node_base_name + str(level)
G.add_node(new_node)
new_edges = []
for u, v, key, data in G.edges(data=True, keys=True):
if u in Q_incoming_weight:
if v in Q_incoming_weight:
# Circuit edge. For the moment do nothing,
# eventually it will be removed.
continue
else:
# Outgoing edge from a node in the circuit.
# Make it come from the new node instead
dd = data.copy()
new_edges.append((new_node, v, key, dd))
else:
if v in Q_incoming_weight:
# Incoming edge to the circuit.
# Update its weight
w = data[attr]
w += minweight - Q_incoming_weight[v]
dd = data.copy()
dd[attr] = w
new_edges.append((u, new_node, key, dd))
else:
# Outside edge. No modification needed
continue
for node in Q_nodes:
edmonds_remove_node(G, G_edge_index, node)
edmonds_remove_node(B, B_edge_index, node)
selected_nodes.difference_update(set(Q_nodes))
for u, v, key, data in new_edges:
edmonds_add_edge(G, G_edge_index, u, v, key, **data)
if candidate_attr in data:
del data[candidate_attr]
edmonds_add_edge(B, B_edge_index, u, v, key, **data)
uf.union(u, v)
def is_root(G, u, edgekeys):
"""
Returns True if `u` is a root node in G.
Node `u` is a root node if its in-degree over the specified edges is zero.
Parameters
----------
G : Graph
The current graph.
u : node
The node in `G` to check if it is a root.
edgekeys : iterable of edges
The edges for which to check if `u` is a root of.
"""
if u not in G:
raise Exception(f"{u!r} not in G")
for v in G.pred[u]:
for edgekey in G.pred[u][v]:
if edgekey in edgekeys:
return False, edgekey
else:
return True, None
nodes = iter(list(G.nodes))
while True:
try:
v = next(nodes)
except StopIteration:
# If there are no more new nodes to consider, then we should
# meet stopping condition (b) from the paper:
# (b) every node of G^i is in D^i and E^i is a branching
assert len(G) == len(B)
if len(B):
assert is_branching(B)
graphs.append((G.copy(), G_edge_index.copy()))
branchings.append((B.copy(), B_edge_index.copy()))
circuits.append([])
minedge_circuit.append(None)
break
else:
#####################
### BEGIN STEP I1 ###
#####################
# This is a very simple step, so I don't think it needs a method of its own
if v in selected_nodes:
continue
selected_nodes.add(v)
B.add_node(v)
desired_edge, desired_edge_weight = edmonds_find_desired_edge(v)
# There might be no desired edge if all edges are excluded or
# v is the last node to be added to B, the ultimate root of the branching
if desired_edge is not None and desired_edge_weight > 0:
u = desired_edge[0]
# Flag adding the edge will create a circuit before merging the two
# connected components of u and v in B
circuit = uf[u] == uf[v]
dd = {attr: desired_edge_weight}
if desired_edge[4].get(partition) is not None:
dd[partition] = desired_edge[4].get(partition)
edmonds_add_edge(B, B_edge_index, u, v, desired_edge[2], **dd)
G[u][v][desired_edge[2]][candidate_attr] = True
uf.union(u, v)
###################
### END STEP I1 ###
###################
#####################
### BEGIN STEP I2 ###
#####################
if circuit:
edmonds_step_I2(v, desired_edge, level)
nodes = iter(list(G.nodes()))
level += 1
###################
### END STEP I2 ###
###################
#####################
### BEGIN STEP I3 ###
#####################
# Create a new graph of the same class as the input graph
H = G_original.__class__()
# Start with the branching edges in the last level.
edges = set(branchings[level][1])
while level > 0:
level -= 1
# The current level is i, and we start counting from 0.
#
# We need the node at level i+1 that results from merging a circuit
# at level i. basename_0 is the first merged node and this happens
# at level 1. That is basename_0 is a node at level 1 that results
# from merging a circuit at level 0.
merged_node = new_node_base_name + str(level)
circuit = circuits[level]
isroot, edgekey = is_root(graphs[level + 1][0], merged_node, edges)
edges.update(circuit)
if isroot:
minedge = minedge_circuit[level]
if minedge is None:
raise Exception
# Remove the edge in the cycle with minimum weight
edges.remove(minedge)
else:
# We have identified an edge at the next higher level that
# transitions into the merged node at this level. That edge
# transitions to some corresponding node at the current level.
#
# We want to remove an edge from the cycle that transitions
# into the corresponding node, otherwise the result would not
# be a branching.
G, G_edge_index = graphs[level]
target = G_edge_index[edgekey][1]
for edgekey in circuit:
u, v, data = G_edge_index[edgekey]
if v == target:
break
else:
raise Exception("Couldn't find edge incoming to merged node.")
edges.remove(edgekey)
H.add_nodes_from(G_original)
for edgekey in edges:
u, v, d = graphs[0][1][edgekey]
dd = {attr: d[attr]}
if preserve_attrs:
for key, value in d.items():
if key not in [attr, candidate_attr]:
dd[key] = value
H.add_edge(u, v, **dd)
###################
### END STEP I3 ###
###################
return H
| (G, attr='weight', default=1, preserve_attrs=False, partition=None, *, backend=None, **backend_kwargs) |
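`maximum_branching` relies on `nx.utils.UnionFind` to detect, via `uf[u] == uf[v]`, that adding the desired edge `(u, v)` to `B` would close a circuit. A minimal stand-alone sketch of that structure (path compression only, no union by rank — not the actual NetworkX class) shows the idea:

```python
class UnionFind:
    """Minimal disjoint-set structure mirroring how uf[u] == uf[v]
    is used above to flag that edge (u, v) would close a circuit."""

    def __init__(self):
        self.parent = {}

    def __getitem__(self, x):
        # Find the root of x, creating a singleton set on first sight
        # and compressing the path on the way back up.
        if x not in self.parent:
            self.parent[x] = x
            return x
        root = x
        while self.parent[root] != root:
            root = self.parent[root]
        while self.parent[x] != root:
            self.parent[x], x = root, self.parent[x]
        return root

    def union(self, u, v):
        # Merge the sets containing u and v.
        self.parent[self[u]] = self[v]
```

With `uf = UnionFind()`, after `uf.union("a", "b")` and `uf.union("b", "c")`, the check `uf["a"] == uf["c"]` is True — so an edge from "c" back to "a" would be flagged as circuit-forming, which is exactly the `circuit = uf[u] == uf[v]` test in step I1.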
30,892 | networkx.algorithms.flow.maxflow | maximum_flow | Find a maximum single-commodity flow.
Parameters
----------
flowG : NetworkX graph
Edges of the graph are expected to have an attribute called
'capacity'. If this attribute is not present, the edge is
considered to have infinite capacity.
_s : node
Source node for the flow.
_t : node
Sink node for the flow.
capacity : string
Edges of the graph G are expected to have an attribute capacity
that indicates how much flow the edge can support. If this
attribute is not present, the edge is considered to have
infinite capacity. Default value: 'capacity'.
flow_func : function
A function for computing the maximum flow among a pair of nodes
in a capacitated graph. The function has to accept at least three
parameters: a Graph or Digraph, a source node, and a target node.
And return a residual network that follows NetworkX conventions
(see Notes). If flow_func is None, the default maximum
flow function (:meth:`preflow_push`) is used. See below for
alternative algorithms. The choice of the default function may change
from version to version and should not be relied on. Default value:
None.
kwargs : Any other keyword parameter is passed to the function that
computes the maximum flow.
Returns
-------
flow_value : integer, float
Value of the maximum flow, i.e., net outflow from the source.
flow_dict : dict
A dictionary containing the value of the flow that went through
each edge.
Raises
------
NetworkXError
The algorithm does not support MultiGraph and MultiDiGraph. If
the input graph is an instance of one of these two classes, a
NetworkXError is raised.
NetworkXUnbounded
If the graph has a path of infinite capacity, the value of a
feasible flow on the graph is unbounded above and the function
raises a NetworkXUnbounded.
See also
--------
:meth:`maximum_flow_value`
:meth:`minimum_cut`
:meth:`minimum_cut_value`
:meth:`edmonds_karp`
:meth:`preflow_push`
:meth:`shortest_augmenting_path`
Notes
-----
The function used in the flow_func parameter has to return a residual
network that follows NetworkX conventions:
The residual network :samp:`R` from an input graph :samp:`G` has the
same nodes as :samp:`G`. :samp:`R` is a DiGraph that contains a pair
of edges :samp:`(u, v)` and :samp:`(v, u)` iff :samp:`(u, v)` is not a
self-loop, and at least one of :samp:`(u, v)` and :samp:`(v, u)` exists
in :samp:`G`.
For each edge :samp:`(u, v)` in :samp:`R`, :samp:`R[u][v]['capacity']`
is equal to the capacity of :samp:`(u, v)` in :samp:`G` if it exists
in :samp:`G` or zero otherwise. If the capacity is infinite,
:samp:`R[u][v]['capacity']` will have a high arbitrary finite value
that does not affect the solution of the problem. This value is stored in
:samp:`R.graph['inf']`. For each edge :samp:`(u, v)` in :samp:`R`,
:samp:`R[u][v]['flow']` represents the flow function of :samp:`(u, v)` and
satisfies :samp:`R[u][v]['flow'] == -R[v][u]['flow']`.
The flow value, defined as the total flow into :samp:`t`, the sink, is
stored in :samp:`R.graph['flow_value']`. Reachability to :samp:`t` using
only edges :samp:`(u, v)` such that
:samp:`R[u][v]['flow'] < R[u][v]['capacity']` induces a minimum
:samp:`s`-:samp:`t` cut.
Specific algorithms may store extra data in :samp:`R`.
The function should support an optional boolean parameter value_only. When
True, it can optionally terminate the algorithm as soon as the maximum flow
value and the minimum cut can be determined.
Examples
--------
>>> G = nx.DiGraph()
>>> G.add_edge("x", "a", capacity=3.0)
>>> G.add_edge("x", "b", capacity=1.0)
>>> G.add_edge("a", "c", capacity=3.0)
>>> G.add_edge("b", "c", capacity=5.0)
>>> G.add_edge("b", "d", capacity=4.0)
>>> G.add_edge("d", "e", capacity=2.0)
>>> G.add_edge("c", "y", capacity=2.0)
>>> G.add_edge("e", "y", capacity=3.0)
maximum_flow returns both the value of the maximum flow and a
dictionary with all flows.
>>> flow_value, flow_dict = nx.maximum_flow(G, "x", "y")
>>> flow_value
3.0
>>> print(flow_dict["x"]["b"])
1.0
You can also use alternative algorithms for computing the
maximum flow by using the flow_func parameter.
>>> from networkx.algorithms.flow import shortest_augmenting_path
>>> flow_value == nx.maximum_flow(G, "x", "y", flow_func=shortest_augmenting_path)[0]
True
| null | (flowG, _s, _t, capacity='capacity', flow_func=None, *, backend=None, **kwargs) |
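The residual-network conventions described in the Notes — every edge has its reverse, flows are antisymmetric, flow never exceeds capacity, and the flow value is the net flow into the sink — can be checked mechanically. In this sketch a plain dict mapping `(u, v)` to an attribute dict stands in for the residual DiGraph; `check_residual` is a hypothetical helper, not part of the NetworkX API:

```python
def check_residual(R_edges, s, t):
    """R_edges maps (u, v) -> {'capacity': c, 'flow': f}.
    Verify the residual conventions and return the flow value,
    i.e. the net flow into the sink t."""
    for (u, v), d in R_edges.items():
        rev = R_edges[(v, u)]  # every edge has its reverse in R
        assert d["flow"] == -rev["flow"]      # antisymmetry
        assert d["flow"] <= d["capacity"]     # capacity constraint
    # Net flow into t: sum of flows on edges pointing at t.
    return sum(d["flow"] for (u, v), d in R_edges.items() if v == t)
```

For a single saturated edge s -> t of capacity 2, the residual network holds `{'capacity': 2, 'flow': 2}` forward and `{'capacity': 0, 'flow': -2}` backward, and the helper returns the flow value 2.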
30,893 | networkx.algorithms.flow.maxflow | maximum_flow_value | Find the value of maximum single-commodity flow.
Parameters
----------
flowG : NetworkX graph
Edges of the graph are expected to have an attribute called
'capacity'. If this attribute is not present, the edge is
considered to have infinite capacity.
_s : node
Source node for the flow.
_t : node
Sink node for the flow.
capacity : string
Edges of the graph G are expected to have an attribute capacity
that indicates how much flow the edge can support. If this
attribute is not present, the edge is considered to have
infinite capacity. Default value: 'capacity'.
flow_func : function
A function for computing the maximum flow among a pair of nodes
in a capacitated graph. The function has to accept at least three
parameters: a Graph or Digraph, a source node, and a target node.
And return a residual network that follows NetworkX conventions
(see Notes). If flow_func is None, the default maximum
flow function (:meth:`preflow_push`) is used. See below for
alternative algorithms. The choice of the default function may change
from version to version and should not be relied on. Default value:
None.
kwargs : Any other keyword parameter is passed to the function that
computes the maximum flow.
Returns
-------
flow_value : integer, float
Value of the maximum flow, i.e., net outflow from the source.
Raises
------
NetworkXError
The algorithm does not support MultiGraph and MultiDiGraph. If
the input graph is an instance of one of these two classes, a
NetworkXError is raised.
NetworkXUnbounded
If the graph has a path of infinite capacity, the value of a
feasible flow on the graph is unbounded above and the function
raises a NetworkXUnbounded.
See also
--------
:meth:`maximum_flow`
:meth:`minimum_cut`
:meth:`minimum_cut_value`
:meth:`edmonds_karp`
:meth:`preflow_push`
:meth:`shortest_augmenting_path`
Notes
-----
The function used in the flow_func parameter has to return a residual
network that follows NetworkX conventions:
The residual network :samp:`R` from an input graph :samp:`G` has the
same nodes as :samp:`G`. :samp:`R` is a DiGraph that contains a pair
of edges :samp:`(u, v)` and :samp:`(v, u)` iff :samp:`(u, v)` is not a
self-loop, and at least one of :samp:`(u, v)` and :samp:`(v, u)` exists
in :samp:`G`.
For each edge :samp:`(u, v)` in :samp:`R`, :samp:`R[u][v]['capacity']`
is equal to the capacity of :samp:`(u, v)` in :samp:`G` if it exists
in :samp:`G` or zero otherwise. If the capacity is infinite,
:samp:`R[u][v]['capacity']` will have a high arbitrary finite value
that does not affect the solution of the problem. This value is stored in
:samp:`R.graph['inf']`. For each edge :samp:`(u, v)` in :samp:`R`,
:samp:`R[u][v]['flow']` represents the flow function of :samp:`(u, v)` and
satisfies :samp:`R[u][v]['flow'] == -R[v][u]['flow']`.
The flow value, defined as the total flow into :samp:`t`, the sink, is
stored in :samp:`R.graph['flow_value']`. Reachability to :samp:`t` using
only edges :samp:`(u, v)` such that
:samp:`R[u][v]['flow'] < R[u][v]['capacity']` induces a minimum
:samp:`s`-:samp:`t` cut.
Specific algorithms may store extra data in :samp:`R`.
The function should support an optional boolean parameter value_only. When
True, it can optionally terminate the algorithm as soon as the maximum flow
value and the minimum cut can be determined.
Examples
--------
>>> G = nx.DiGraph()
>>> G.add_edge("x", "a", capacity=3.0)
>>> G.add_edge("x", "b", capacity=1.0)
>>> G.add_edge("a", "c", capacity=3.0)
>>> G.add_edge("b", "c", capacity=5.0)
>>> G.add_edge("b", "d", capacity=4.0)
>>> G.add_edge("d", "e", capacity=2.0)
>>> G.add_edge("c", "y", capacity=2.0)
>>> G.add_edge("e", "y", capacity=3.0)
maximum_flow_value computes only the value of the
maximum flow:
>>> flow_value = nx.maximum_flow_value(G, "x", "y")
>>> flow_value
3.0
You can also use alternative algorithms for computing the
maximum flow by using the flow_func parameter.
>>> from networkx.algorithms.flow import shortest_augmenting_path
>>> flow_value == nx.maximum_flow_value(G, "x", "y", flow_func=shortest_augmenting_path)
True
| null | (flowG, _s, _t, capacity='capacity', flow_func=None, *, backend=None, **kwargs) |
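As a sanity check on the max-flow/min-cut relationship the Notes rely on, a brute-force cut enumeration reproduces the flow value on the docstring's example graph. `min_cut_value` is a hypothetical helper of this sketch (exponential in the node count, so for tiny graphs only), not a NetworkX function.

```python
from itertools import combinations


def min_cut_value(edge_list, s, t):
    """Brute-force minimum s-t cut: try every node bipartition with s on
    one side and t on the other, summing capacities of edges that cross
    from the s-side to the t-side. Illustration only: O(2^n)."""
    nodes = {u for u, v, c in edge_list} | {v for u, v, c in edge_list}
    others = sorted(nodes - {s, t})
    best = float("inf")
    for r in range(len(others) + 1):
        for subset in combinations(others, r):
            s_side = set(subset) | {s}
            cut = sum(c for u, v, c in edge_list if u in s_side and v not in s_side)
            best = min(best, cut)
    return best


# Same capacities as the docstring's example graph
edges = [
    ("x", "a", 3.0), ("x", "b", 1.0), ("a", "c", 3.0), ("b", "c", 5.0),
    ("b", "d", 4.0), ("d", "e", 2.0), ("c", "y", 2.0), ("e", "y", 3.0),
]
cut_value = min_cut_value(edges, "x", "y")  # equals the maximum flow value
```

By the max-flow min-cut theorem, `cut_value` matches the `3.0` that `maximum_flow_value` returns for this graph.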
30,894 | networkx.algorithms.tree.branchings | maximum_spanning_arborescence |
Returns a maximum spanning arborescence from G.
Parameters
----------
G : (multi)digraph-like
The graph to be searched.
attr : str
The edge attribute used to in determining optimality.
default : float
The value of the edge attribute used if an edge does not have
the attribute `attr`.
preserve_attrs : bool
If True, preserve the other attributes of the original graph (that are not
passed to `attr`)
partition : str
The key for the edge attribute containing the partition
data on the graph. Edges can be included, excluded or open using the
`EdgePartition` enum.
Returns
-------
B : (multi)digraph-like
A maximum spanning arborescence.
Raises
------
NetworkXException
If the graph does not contain a maximum spanning arborescence.
| @nx._dispatchable(preserve_edge_attrs=True, returns_graph=True)
def maximum_branching(
G,
attr="weight",
default=1,
preserve_attrs=False,
partition=None,
):
#######################################
### Data Structure Helper Functions ###
#######################################
def edmonds_add_edge(G, edge_index, u, v, key, **d):
"""
Adds an edge to `G` while also updating the edge index.
This algorithm requires the use of an external dictionary to track
the edge keys since it is possible that the source or destination
node of an edge will be changed and the default key-handling
capabilities of the MultiDiGraph class do not account for this.
Parameters
----------
G : MultiDiGraph
The graph to insert an edge into.
edge_index : dict
A mapping from integers to the edges of the graph.
u : node
The source node of the new edge.
v : node
The destination node of the new edge.
key : int
The key to use from `edge_index`.
d : keyword arguments, optional
Other attributes to store on the new edge.
"""
if key in edge_index:
uu, vv, _ = edge_index[key]
if (u != uu) or (v != vv):
raise Exception(f"Key {key!r} is already in use.")
G.add_edge(u, v, key, **d)
edge_index[key] = (u, v, G.succ[u][v][key])
def edmonds_remove_node(G, edge_index, n):
"""
Remove a node from the graph, updating the edge index to match.
Parameters
----------
G : MultiDiGraph
The graph to remove an edge from.
edge_index : dict
A mapping from integers to the edges of the graph.
n : node
The node to remove from `G`.
"""
keys = set()
for keydict in G.pred[n].values():
keys.update(keydict)
for keydict in G.succ[n].values():
keys.update(keydict)
for key in keys:
del edge_index[key]
G.remove_node(n)
#######################
### Algorithm Setup ###
#######################
# Pick an attribute name that the original graph is unlikely to have
candidate_attr = "edmonds' secret candidate attribute"
new_node_base_name = "edmonds new node base name "
G_original = G
G = nx.MultiDiGraph()
G.__networkx_cache__ = None # Disable caching
# A dict to reliably track mutations to the edges using the key of the edge.
G_edge_index = {}
# Each edge is given an arbitrary numerical key
for key, (u, v, data) in enumerate(G_original.edges(data=True)):
d = {attr: data.get(attr, default)}
if data.get(partition) is not None:
d[partition] = data.get(partition)
if preserve_attrs:
for d_k, d_v in data.items():
if d_k != attr:
d[d_k] = d_v
edmonds_add_edge(G, G_edge_index, u, v, key, **d)
level = 0 # Stores the number of contracted nodes
# These are the buckets from the paper.
#
# In the paper, G^i are modified versions of the original graph.
# D^i and E^i are the nodes and edges of the maximal edges that are
# consistent with G^i. In this implementation, D^i and E^i are stored
# together as the graph B^i. We will have strictly more B^i than the
# paper will have.
#
# Note that the data in graphs and branchings are tuples with the graph as
# the first element and the edge index as the second.
B = nx.MultiDiGraph()
B_edge_index = {}
graphs = [] # G^i list
branchings = [] # B^i list
selected_nodes = set() # D^i bucket
uf = nx.utils.UnionFind()
# A list of lists of edge indices. Each list is a circuit for graph G^i.
# Note the edge list is not required to be a circuit in G^0.
circuits = []
# Stores the index of the minimum edge in the circuit found in G^i and B^i.
# The ordering of the edges seems to preserve the weight ordering from
# G^0, so even if the circuit does not form a circuit in G^0, its minimum
# edge is still the minimum edge relative to G^0 (despite the weights
# being different).
minedge_circuit = []
###########################
### Algorithm Structure ###
###########################
# Each step listed in the algorithm is an inner function. Thus, the overall
# loop structure is:
#
# while True:
# step_I1()
# if cycle detected:
# step_I2()
# elif every node of G is in D and E is a branching:
# break
##################################
### Algorithm Helper Functions ###
##################################
def edmonds_find_desired_edge(v):
"""
Find the edge directed towards v with maximal weight.
If an edge partition exists in this graph, return the included
edge if it exists and never return any excluded edge.
Note: There can only be one included edge for each vertex otherwise
the edge partition is empty.
Parameters
----------
v : node
The node to search for the maximal weight incoming edge.
"""
edge = None
max_weight = -INF
for u, _, key, data in G.in_edges(v, data=True, keys=True):
# Skip excluded edges
if data.get(partition) == nx.EdgePartition.EXCLUDED:
continue
new_weight = data[attr]
# Return the included edge
if data.get(partition) == nx.EdgePartition.INCLUDED:
max_weight = new_weight
edge = (u, v, key, new_weight, data)
break
# Find the best open edge
if new_weight > max_weight:
max_weight = new_weight
edge = (u, v, key, new_weight, data)
return edge, max_weight
def edmonds_step_I2(v, desired_edge, level):
"""
Perform step I2 from Edmonds' paper
First, check if the last step I1 created a cycle. If it did not, do nothing.
If it did, store the cycle for later reference and contract it.
Parameters
----------
v : node
The current node to consider
desired_edge : edge
The minimum desired edge to remove from the cycle.
level : int
The current level, i.e. the number of cycles that have already been removed.
"""
u = desired_edge[0]
Q_nodes = nx.shortest_path(B, v, u)
Q_edges = [
list(B[Q_nodes[i]][vv].keys())[0] for i, vv in enumerate(Q_nodes[1:])
]
Q_edges.append(desired_edge[2]) # Add the new edge key to complete the circuit
# Get the edge in the circuit with the minimum weight.
# Also, save the incoming weights for each node.
minweight = INF
minedge = None
Q_incoming_weight = {}
for edge_key in Q_edges:
u, v, data = B_edge_index[edge_key]
w = data[attr]
# We cannot remove an included edge, even if it is the
# minimum edge in the circuit
Q_incoming_weight[v] = w
if data.get(partition) == nx.EdgePartition.INCLUDED:
continue
if w < minweight:
minweight = w
minedge = edge_key
circuits.append(Q_edges)
minedge_circuit.append(minedge)
graphs.append((G.copy(), G_edge_index.copy()))
branchings.append((B.copy(), B_edge_index.copy()))
# Mutate the graph to contract the circuit
new_node = new_node_base_name + str(level)
G.add_node(new_node)
new_edges = []
for u, v, key, data in G.edges(data=True, keys=True):
if u in Q_incoming_weight:
if v in Q_incoming_weight:
# Circuit edge. For the moment do nothing,
# eventually it will be removed.
continue
else:
# Outgoing edge from a node in the circuit.
# Make it come from the new node instead
dd = data.copy()
new_edges.append((new_node, v, key, dd))
else:
if v in Q_incoming_weight:
# Incoming edge to the circuit.
# Update its weight
w = data[attr]
w += minweight - Q_incoming_weight[v]
dd = data.copy()
dd[attr] = w
new_edges.append((u, new_node, key, dd))
else:
# Outside edge. No modification needed
continue
for node in Q_nodes:
edmonds_remove_node(G, G_edge_index, node)
edmonds_remove_node(B, B_edge_index, node)
selected_nodes.difference_update(set(Q_nodes))
for u, v, key, data in new_edges:
edmonds_add_edge(G, G_edge_index, u, v, key, **data)
if candidate_attr in data:
del data[candidate_attr]
edmonds_add_edge(B, B_edge_index, u, v, key, **data)
uf.union(u, v)
def is_root(G, u, edgekeys):
"""
Returns True if `u` is a root node in G.
Node `u` is a root node if its in-degree over the specified edges is zero.
Parameters
----------
G : Graph
The current graph.
u : node
The node in `G` to check if it is a root.
edgekeys : iterable of edges
The edges over which to check whether `u` is a root.
"""
if u not in G:
raise Exception(f"{u!r} not in G")
for v in G.pred[u]:
for edgekey in G.pred[u][v]:
if edgekey in edgekeys:
return False, edgekey
else:
return True, None
nodes = iter(list(G.nodes))
while True:
try:
v = next(nodes)
except StopIteration:
# If there are no more new nodes to consider, then we should
# meet stopping condition (b) from the paper:
# (b) every node of G^i is in D^i and E^i is a branching
assert len(G) == len(B)
if len(B):
assert is_branching(B)
graphs.append((G.copy(), G_edge_index.copy()))
branchings.append((B.copy(), B_edge_index.copy()))
circuits.append([])
minedge_circuit.append(None)
break
else:
#####################
### BEGIN STEP I1 ###
#####################
# This is a very simple step, so I don't think it needs a method of its own
if v in selected_nodes:
continue
selected_nodes.add(v)
B.add_node(v)
desired_edge, desired_edge_weight = edmonds_find_desired_edge(v)
# There might be no desired edge if all edges are excluded or
# v is the last node to be added to B, the ultimate root of the branching
if desired_edge is not None and desired_edge_weight > 0:
u = desired_edge[0]
# Flag adding the edge will create a circuit before merging the two
# connected components of u and v in B
circuit = uf[u] == uf[v]
dd = {attr: desired_edge_weight}
if desired_edge[4].get(partition) is not None:
dd[partition] = desired_edge[4].get(partition)
edmonds_add_edge(B, B_edge_index, u, v, desired_edge[2], **dd)
G[u][v][desired_edge[2]][candidate_attr] = True
uf.union(u, v)
###################
### END STEP I1 ###
###################
#####################
### BEGIN STEP I2 ###
#####################
if circuit:
edmonds_step_I2(v, desired_edge, level)
nodes = iter(list(G.nodes()))
level += 1
###################
### END STEP I2 ###
###################
#####################
### BEGIN STEP I3 ###
#####################
# Create a new graph of the same class as the input graph
H = G_original.__class__()
# Start with the branching edges in the last level.
edges = set(branchings[level][1])
while level > 0:
level -= 1
# The current level is i, and we start counting from 0.
#
# We need the node at level i+1 that results from merging a circuit
# at level i. basename_0 is the first merged node and this happens
# at level 1. That is, basename_0 is a node at level 1 that results
# from merging a circuit at level 0.
merged_node = new_node_base_name + str(level)
circuit = circuits[level]
isroot, edgekey = is_root(graphs[level + 1][0], merged_node, edges)
edges.update(circuit)
if isroot:
minedge = minedge_circuit[level]
if minedge is None:
raise Exception
# Remove the edge in the cycle with minimum weight
edges.remove(minedge)
else:
# We have identified an edge at the next higher level that
# transitions into the merged node at this level. That edge
# transitions to some corresponding node at the current level.
#
# We want to remove an edge from the cycle that transitions
# into the corresponding node, otherwise the result would not
# be a branching.
G, G_edge_index = graphs[level]
target = G_edge_index[edgekey][1]
for edgekey in circuit:
u, v, data = G_edge_index[edgekey]
if v == target:
break
else:
raise Exception("Couldn't find edge incoming to merged node.")
edges.remove(edgekey)
H.add_nodes_from(G_original)
for edgekey in edges:
u, v, d = graphs[0][1][edgekey]
dd = {attr: d[attr]}
if preserve_attrs:
for key, value in d.items():
if key not in [attr, candidate_attr]:
dd[key] = value
H.add_edge(u, v, **dd)
###################
### END STEP I3 ###
###################
return H
| (G, attr='weight', default=1, preserve_attrs=False, partition=None, *, backend=None, **backend_kwargs) |
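Step I1 above (pick each node's maximum-weight positive incoming edge) already yields a maximum branching whenever those picks happen to be acyclic; the contraction machinery of step I2 is only needed when they form a circuit. A minimal sketch of that acyclic special case follows. `greedy_branching_acyclic` is a hypothetical helper, not the library code, and it assumes no circuit is created.

```python
def greedy_branching_acyclic(edges):
    """Keep, for each node, its maximum-weight positive incoming edge.

    This mirrors Edmonds' step I1: the result is a maximum branching
    whenever the kept edges contain no circuit (so step I2 never fires).
    Edges are (u, v, weight) triples.
    """
    best_in = {}
    for u, v, w in edges:
        # Only positive-weight edges can improve a branching.
        if w > 0 and (v not in best_in or w > best_in[v][2]):
            best_in[v] = (u, v, w)
    return sorted(best_in.values())
```

For example, with edges `[("a", "b", 5), ("c", "b", 3), ("a", "c", 2), ("b", "d", 4)]` the kept edges `a->b`, `a->c`, `b->d` are acyclic, so they already form a maximum branching.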
30,895 | networkx.algorithms.tree.mst | maximum_spanning_edges | Generate edges in a maximum spanning forest of an undirected
weighted graph.
A maximum spanning tree is a subgraph of the graph (a tree)
with the maximum possible sum of edge weights. A spanning forest is a
union of the spanning trees for each connected component of the graph.
Parameters
----------
G : undirected Graph
An undirected graph. If `G` is connected, then the algorithm finds a
spanning tree. Otherwise, a spanning forest is found.
algorithm : string
The algorithm to use when finding a maximum spanning tree. Valid
choices are 'kruskal', 'prim', or 'boruvka'. The default is 'kruskal'.
weight : string
Edge data key to use for weight (default 'weight').
keys : bool
Whether to yield edge key in multigraphs in addition to the edge.
If `G` is not a multigraph, this is ignored.
data : bool, optional
If True yield the edge data along with the edge.
ignore_nan : bool (default: False)
If a NaN is found as an edge weight normally an exception is raised.
If `ignore_nan is True` then that edge is ignored instead.
Returns
-------
edges : iterator
An iterator over edges in a maximum spanning tree of `G`.
Edges connecting nodes `u` and `v` are represented as tuples:
`(u, v, k, d)` or `(u, v, k)` or `(u, v, d)` or `(u, v)`
If `G` is a multigraph, `keys` indicates whether the edge key `k` will
be reported in the third position in the edge tuple. `data` indicates
whether the edge datadict `d` will appear at the end of the edge tuple.
If `G` is not a multigraph, the tuples are `(u, v, d)` if `data` is True
or `(u, v)` if `data` is False.
Examples
--------
>>> from networkx.algorithms import tree
Find maximum spanning edges by Kruskal's algorithm
>>> G = nx.cycle_graph(4)
>>> G.add_edge(0, 3, weight=2)
>>> mst = tree.maximum_spanning_edges(G, algorithm="kruskal", data=False)
>>> edgelist = list(mst)
>>> sorted(sorted(e) for e in edgelist)
[[0, 1], [0, 3], [1, 2]]
Find maximum spanning edges by Prim's algorithm
>>> G = nx.cycle_graph(4)
>>> G.add_edge(0, 3, weight=2) # assign weight 2 to edge 0-3
>>> mst = tree.maximum_spanning_edges(G, algorithm="prim", data=False)
>>> edgelist = list(mst)
>>> sorted(sorted(e) for e in edgelist)
[[0, 1], [0, 3], [2, 3]]
Notes
-----
For Borůvka's algorithm, each edge must have a weight attribute, and
each edge weight must be distinct.
For the other algorithms, if the graph edges do not have a weight
attribute a default weight of 1 will be used.
Modified code from David Eppstein, April 2006
http://www.ics.uci.edu/~eppstein/PADS/
| def random_spanning_tree(G, weight=None, *, multiplicative=True, seed=None):
"""
Sample a random spanning tree using the edge weights of `G`.
This function supports two different methods for determining the
probability of the graph. If ``multiplicative=True``, the probability
is based on the product of edge weights, and if ``multiplicative=False``
it is based on the sum of the edge weights. However, since it is
easier to determine the total weight of all spanning trees for the
multiplicative version, that is significantly faster and should be used if
possible. Additionally, setting `weight` to `None` will cause a spanning tree
to be selected with uniform probability.
The function uses algorithm A8 in [1]_ .
Parameters
----------
G : nx.Graph
An undirected version of the original graph.
weight : string
The edge key for the edge attribute holding edge weight.
multiplicative : bool, default=True
If `True`, the probability of each tree is the product of its edge weight
over the sum of the product of all the spanning trees in the graph. If
`False`, the probability is the sum of its edge weights over the sum,
across all spanning trees in the graph, of their edge-weight sums.
seed : integer, random_state, or None (default)
Indicator of random number generation state.
See :ref:`Randomness<randomness>`.
Returns
-------
nx.Graph
A spanning tree using the distribution defined by the weight of the tree.
References
----------
.. [1] V. Kulkarni, Generating random combinatorial objects, Journal of
Algorithms, 11 (1990), pp. 185–207
"""
def find_node(merged_nodes, node):
"""
We can think of clusters of contracted nodes as having one
representative in the graph. Each node which is not in merged_nodes
is still its own representative. Since a representative can be later
contracted, we need to recursively search through the dict to find
the final representative, but once we know it we can use path
compression to speed up the access of the representative for next time.
This cannot be replaced by the standard NetworkX union_find since that
data structure merges the node representing fewer nodes into the one
representing more nodes, whereas this function requires that nodes be
merged in the order in which contract_edges performs its contractions.
Parameters
----------
merged_nodes : dict
The dict storing the mapping from node to representative
node
The node whose representative we seek
Returns
-------
The representative of the `node`
"""
if node not in merged_nodes:
return node
else:
rep = find_node(merged_nodes, merged_nodes[node])
merged_nodes[node] = rep
return rep
def prepare_graph():
"""
For the graph `G`, remove all edges not in the set `V` and then
contract all edges in the set `U`.
Returns
-------
A copy of `G` which has had all edges not in `V` removed and all edges
in `U` contracted.
"""
# The result is a MultiGraph version of G so that parallel edges are
# allowed during edge contraction
result = nx.MultiGraph(incoming_graph_data=G)
# Remove all edges not in V
edges_to_remove = set(result.edges()).difference(V)
result.remove_edges_from(edges_to_remove)
# Contract all edges in U
#
# Imagine that you have two edges to contract and they share an
# endpoint like this:
# [0] ----- [1] ----- [2]
# If we contract (0, 1) first, the contraction function will always
# delete the second node it is passed so the resulting graph would be
# [0] ----- [2]
# and edge (1, 2) no longer exists but (0, 2) would need to be contracted
# in its place now. That is why I use the below dict as a merge-find
# data structure with path compression to track how the nodes are merged.
merged_nodes = {}
for u, v in U:
u_rep = find_node(merged_nodes, u)
v_rep = find_node(merged_nodes, v)
# We cannot contract a node with itself
if u_rep == v_rep:
continue
nx.contracted_nodes(result, u_rep, v_rep, self_loops=False, copy=False)
merged_nodes[v_rep] = u_rep
return merged_nodes, result
def spanning_tree_total_weight(G, weight):
"""
Find the sum of weights of the spanning trees of `G` using the
appropriate `method`.
This is easy if the chosen method is 'multiplicative', since we can
use Kirchhoff's Matrix Tree Theorem directly. However, with the
'additive' method, this process is slightly more complex and less
computationally efficient as we have to find the number of spanning
trees which contain each possible edge in the graph.
Parameters
----------
G : NetworkX Graph
The graph to find the total weight of all spanning trees on.
weight : string
The key for the weight edge attribute of the graph.
Returns
-------
float
The sum of either the multiplicative or additive weight for all
spanning trees in the graph.
"""
if multiplicative:
return nx.total_spanning_tree_weight(G, weight)
else:
# There are two cases for the total spanning tree additive weight.
# 1. There is one edge in the graph. Then the only spanning tree is
# that edge itself, which will have a total weight of that edge
# itself.
if G.number_of_edges() == 1:
return next(iter(G.edges(data=weight)))[2]
# 2. There are no edges or two or more edges in the graph. Then, we find the
# total weight of the spanning trees using the formula in the
# reference paper: take the weight of each edge and multiply it by
# the number of spanning trees which include that edge. This
# can be accomplished by contracting the edge and finding the
# multiplicative total spanning tree weight if the weight of each edge
# is assumed to be 1, which is conveniently built into networkx already,
# by calling total_spanning_tree_weight with weight=None.
# Note that with no edges the returned value is just zero.
else:
total = 0
for u, v, w in G.edges(data=weight):
total += w * nx.total_spanning_tree_weight(
nx.contracted_edge(G, edge=(u, v), self_loops=False), None
)
return total
if G.number_of_nodes() < 2:
# no edges in the spanning tree
return nx.empty_graph(G.nodes)
U = set()
st_cached_value = 0
V = set(G.edges())
shuffled_edges = list(G.edges())
seed.shuffle(shuffled_edges)
for u, v in shuffled_edges:
e_weight = G[u][v][weight] if weight is not None else 1
node_map, prepared_G = prepare_graph()
G_total_tree_weight = spanning_tree_total_weight(prepared_G, weight)
# Add the edge to U so that we can compute the total tree weight
# assuming we include that edge
# Now, if (u, v) cannot exist in G because it is fully contracted out
# of existence, then it by definition cannot influence G_e's Kirchhoff
# value. But, we also cannot pick it.
rep_edge = (find_node(node_map, u), find_node(node_map, v))
# Check to see if the 'representative edge' for the current edge is
# in prepared_G. If so, then we can pick it.
if rep_edge in prepared_G.edges:
prepared_G_e = nx.contracted_edge(
prepared_G, edge=rep_edge, self_loops=False
)
G_e_total_tree_weight = spanning_tree_total_weight(prepared_G_e, weight)
if multiplicative:
threshold = e_weight * G_e_total_tree_weight / G_total_tree_weight
else:
numerator = (
st_cached_value + e_weight
) * nx.total_spanning_tree_weight(prepared_G_e) + G_e_total_tree_weight
denominator = (
st_cached_value * nx.total_spanning_tree_weight(prepared_G)
+ G_total_tree_weight
)
threshold = numerator / denominator
else:
threshold = 0.0
z = seed.uniform(0.0, 1.0)
if z > threshold:
# Remove the edge from V since we did not pick it.
V.remove((u, v))
else:
# Add the edge to U since we picked it.
st_cached_value += e_weight
U.add((u, v))
# If we decide to keep an edge, it may complete the spanning tree.
if len(U) == G.number_of_nodes() - 1:
spanning_tree = nx.Graph()
spanning_tree.add_edges_from(U)
return spanning_tree
raise Exception(f"Something went wrong! Only {len(U)} edges in the spanning tree!")
| (G, algorithm='kruskal', weight='weight', keys=True, data=True, ignore_nan=False, *, backend=None, **backend_kwargs) |
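The Kruskal variant described above amounts to scanning edges from heaviest to lightest and keeping each edge that joins two different components. A minimal union-find sketch, run on the docstring's `cycle_graph(4)` example, illustrates this; `max_spanning_edges_kruskal` is a hypothetical helper, not the NetworkX implementation, and it assumes integer node labels `0..n-1`.

```python
def max_spanning_edges_kruskal(n, edges):
    """Maximum spanning forest via Kruskal: visit edges heaviest-first
    and keep an edge whenever it connects two different components.
    Nodes are 0..n-1; edges are (u, v, weight) triples."""
    parent = list(range(n))

    def find(x):
        # Union-find lookup with path halving.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    chosen = []
    for u, v, w in sorted(edges, key=lambda e: e[2], reverse=True):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[rv] = ru  # merge the two components
            chosen.append((u, v, w))
    return chosen


# cycle_graph(4) with default weight 1, plus edge (0, 3) of weight 2
edges = [(0, 1, 1), (1, 2, 1), (2, 3, 1), (0, 3, 2)]
mst_edges = max_spanning_edges_kruskal(4, edges)
```

The selected edge set matches the docstring's Kruskal example: `[[0, 1], [0, 3], [1, 2]]` after sorting.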
30,896 | networkx.algorithms.tree.mst | maximum_spanning_tree | Returns a maximum spanning tree or forest on an undirected graph `G`.
Parameters
----------
G : undirected graph
An undirected graph. If `G` is connected, then the algorithm finds a
spanning tree. Otherwise, a spanning forest is found.
weight : str
Data key to use for edge weights.
algorithm : string
The algorithm to use when finding a maximum spanning tree. Valid
choices are 'kruskal', 'prim', or 'boruvka'. The default is
'kruskal'.
ignore_nan : bool (default: False)
If a NaN is found as an edge weight normally an exception is raised.
If `ignore_nan is True` then that edge is ignored instead.
Returns
-------
G : NetworkX Graph
A maximum spanning tree or forest.
Examples
--------
>>> G = nx.cycle_graph(4)
>>> G.add_edge(0, 3, weight=2)
>>> T = nx.maximum_spanning_tree(G)
>>> sorted(T.edges(data=True))
[(0, 1, {}), (0, 3, {'weight': 2}), (1, 2, {})]
Notes
-----
For Borůvka's algorithm, each edge must have a weight attribute, and
each edge weight must be distinct.
For the other algorithms, if the graph edges do not have a weight
attribute a default weight of 1 will be used.
There may be more than one tree with the same minimum or maximum weight.
See :mod:`networkx.tree.recognition` for more detailed definitions.
Isolated nodes with self-loops are in the tree as edgeless isolated nodes.
| def random_spanning_tree(G, weight=None, *, multiplicative=True, seed=None):
"""
Sample a random spanning tree using the edges weights of `G`.
This function supports two different methods for determining the
probability of the graph. If ``multiplicative=True``, the probability
is based on the product of edge weights, and if ``multiplicative=False``
it is based on the sum of the edge weight. However, since it is
easier to determine the total weight of all spanning trees for the
multiplicative version, that is significantly faster and should be used if
possible. Additionally, setting `weight` to `None` will cause a spanning tree
to be selected with uniform probability.
The function uses algorithm A8 in [1]_ .
Parameters
----------
G : nx.Graph
An undirected version of the original graph.
weight : string
The edge key for the edge attribute holding edge weight.
multiplicative : bool, default=True
If `True`, the probability of each tree is the product of its edge weight
over the sum of the product of all the spanning trees in the graph. If
`False`, the probability is the sum of its edge weights over the sum,
across all spanning trees in the graph, of their edge-weight sums.
seed : integer, random_state, or None (default)
Indicator of random number generation state.
See :ref:`Randomness<randomness>`.
Returns
-------
nx.Graph
A spanning tree using the distribution defined by the weight of the tree.
References
----------
.. [1] V. Kulkarni, Generating random combinatorial objects, Journal of
Algorithms, 11 (1990), pp. 185–207
"""
def find_node(merged_nodes, node):
"""
We can think of clusters of contracted nodes as having one
representative in the graph. Each node which is not in merged_nodes
is still its own representative. Since a representative can be later
contracted, we need to recursively search through the dict to find
the final representative, but once we know it we can use path
compression to speed up the access of the representative for next time.
This cannot be replaced by the standard NetworkX union_find since that
data structure merges the node representing fewer nodes into the one
representing more nodes, whereas this function requires that nodes be
merged in the order in which contract_edges performs its contractions.
Parameters
----------
merged_nodes : dict
The dict storing the mapping from node to representative
node
The node whose representative we seek
Returns
-------
The representative of the `node`
"""
if node not in merged_nodes:
return node
else:
rep = find_node(merged_nodes, merged_nodes[node])
merged_nodes[node] = rep
return rep
def prepare_graph():
"""
For the graph `G`, remove all edges not in the set `V` and then
contract all edges in the set `U`.
Returns
-------
A copy of `G` which has had all edges not in `V` removed and all edges
in `U` contracted.
"""
# The result is a MultiGraph version of G so that parallel edges are
# allowed during edge contraction
result = nx.MultiGraph(incoming_graph_data=G)
# Remove all edges not in V
edges_to_remove = set(result.edges()).difference(V)
result.remove_edges_from(edges_to_remove)
# Contract all edges in U
#
# Imagine that you have two edges to contract and they share an
# endpoint like this:
# [0] ----- [1] ----- [2]
# If we contract (0, 1) first, the contraction function will always
# delete the second node it is passed so the resulting graph would be
# [0] ----- [2]
# and edge (1, 2) no longer exists but (0, 2) would need to be contracted
# in its place now. That is why I use the below dict as a merge-find
# data structure with path compression to track how the nodes are merged.
merged_nodes = {}
for u, v in U:
u_rep = find_node(merged_nodes, u)
v_rep = find_node(merged_nodes, v)
# We cannot contract a node with itself
if u_rep == v_rep:
continue
nx.contracted_nodes(result, u_rep, v_rep, self_loops=False, copy=False)
merged_nodes[v_rep] = u_rep
return merged_nodes, result
def spanning_tree_total_weight(G, weight):
"""
Find the sum of weights of the spanning trees of `G` using the
appropriate `method`.
This is easy if the chosen method is 'multiplicative', since we can
use Kirchhoff's Matrix Tree Theorem directly. However, with the
'additive' method, this process is slightly more complex and less
computationally efficient as we have to find the number of spanning
trees which contain each possible edge in the graph.
Parameters
----------
G : NetworkX Graph
The graph to find the total weight of all spanning trees on.
weight : string
The key for the weight edge attribute of the graph.
Returns
-------
float
The sum of either the multiplicative or additive weight for all
spanning trees in the graph.
"""
if multiplicative:
return nx.total_spanning_tree_weight(G, weight)
else:
# There are two cases for the total spanning tree additive weight.
# 1. There is one edge in the graph. Then the only spanning tree is
# that edge itself, which will have a total weight of that edge
# itself.
if G.number_of_edges() == 1:
return next(iter(G.edges(data=weight)))[2]
# 2. There are no edges or two or more edges in the graph. Then, we find the
# total weight of the spanning trees using the formula in the
# reference paper: take the weight of each edge and multiply it by
# the number of spanning trees which include that edge. This
# can be accomplished by contracting the edge and finding the
# multiplicative total spanning tree weight if the weight of each edge
# is assumed to be 1, which is conveniently built into networkx already,
# by calling total_spanning_tree_weight with weight=None.
# Note that with no edges the returned value is just zero.
else:
total = 0
for u, v, w in G.edges(data=weight):
total += w * nx.total_spanning_tree_weight(
nx.contracted_edge(G, edge=(u, v), self_loops=False), None
)
return total
if G.number_of_nodes() < 2:
# no edges in the spanning tree
return nx.empty_graph(G.nodes)
U = set()
st_cached_value = 0
V = set(G.edges())
shuffled_edges = list(G.edges())
seed.shuffle(shuffled_edges)
for u, v in shuffled_edges:
e_weight = G[u][v][weight] if weight is not None else 1
node_map, prepared_G = prepare_graph()
G_total_tree_weight = spanning_tree_total_weight(prepared_G, weight)
# Add the edge to U so that we can compute the total tree weight
# assuming we include that edge
# Now, if (u, v) cannot exist in G because it is fully contracted out
# of existence, then it by definition cannot influence G_e's Kirchhoff
# value. But, we also cannot pick it.
rep_edge = (find_node(node_map, u), find_node(node_map, v))
# Check to see if the 'representative edge' for the current edge is
# in prepared_G. If so, then we can pick it.
if rep_edge in prepared_G.edges:
prepared_G_e = nx.contracted_edge(
prepared_G, edge=rep_edge, self_loops=False
)
G_e_total_tree_weight = spanning_tree_total_weight(prepared_G_e, weight)
if multiplicative:
threshold = e_weight * G_e_total_tree_weight / G_total_tree_weight
else:
numerator = (
st_cached_value + e_weight
) * nx.total_spanning_tree_weight(prepared_G_e) + G_e_total_tree_weight
denominator = (
st_cached_value * nx.total_spanning_tree_weight(prepared_G)
+ G_total_tree_weight
)
threshold = numerator / denominator
else:
threshold = 0.0
z = seed.uniform(0.0, 1.0)
if z > threshold:
# Remove the edge from V since we did not pick it.
V.remove((u, v))
else:
# Add the edge to U since we picked it.
st_cached_value += e_weight
U.add((u, v))
# If we decide to keep an edge, it may complete the spanning tree.
if len(U) == G.number_of_nodes() - 1:
spanning_tree = nx.Graph()
spanning_tree.add_edges_from(U)
return spanning_tree
raise Exception(f"Something went wrong! Only {len(U)} edges in the spanning tree!")
| (G, weight='weight', algorithm='kruskal', ignore_nan=False, *, backend=None, **backend_kwargs) |
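The additive formula in the comments above (each edge's weight times the number of spanning trees containing it equals the sum of tree weights over all spanning trees) can be brute-force checked on a tiny graph. This sketch enumerates edge subsets directly and assumes nothing beyond the standard library:

```python
from itertools import combinations


def additive_total_brute_force(edges):
    """Sum of tree weights over all spanning trees, by enumeration.

    `edges` is a list of (u, v, w) tuples on a small graph; a subset of
    n - 1 edges is a spanning tree iff it is acyclic, which we check
    with a tiny union-find.  Sketch only -- exponential time.
    """
    nodes = {u for u, v, _ in edges} | {v for _, v, _ in edges}
    n = len(nodes)
    total = 0
    for subset in combinations(edges, n - 1):
        parent = {x: x for x in nodes}

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        acyclic = True
        for u, v, _ in subset:
            ru, rv = find(u), find(v)
            if ru == rv:
                acyclic = False
                break
            parent[ru] = rv
        if acyclic:
            total += sum(w for _, _, w in subset)
    return total


# Triangle: each of the 3 edges appears in exactly 2 of the 3 spanning
# trees, so the total is 2 * (1 + 2 + 3) = 12.
assert additive_total_brute_force([(0, 1, 1), (1, 2, 2), (0, 2, 3)]) == 12
```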
30,897 | networkx.generators.expanders | maybe_regular_expander | Utility for creating a random regular expander.
Returns a random $d$-regular graph on $n$ nodes which is an expander
graph with very good probability.
Parameters
----------
n : int
The number of nodes.
d : int
The degree of each node.
create_using : Graph Instance or Constructor
Indicator of type of graph to return.
If a Graph-type instance, then clear and use it.
If a constructor, call it to create an empty graph.
Use the Graph constructor by default.
max_tries : int. (default: 100)
The number of allowed loops when generating each independent cycle
seed : (default: None)
Seed used to set random number generation state. See :ref:`Randomness <randomness>`.
Notes
-----
The nodes are numbered from $0$ to $n - 1$.
The graph is generated by taking $d / 2$ random independent cycles.
Joel Friedman proved that in this model the resulting
graph is an expander with probability
$1 - O(n^{-\tau})$ where $\tau = \lceil (\sqrt{d - 1}) / 2 \rceil - 1$. [1]_
Examples
--------
>>> G = nx.maybe_regular_expander(n=200, d=6, seed=8020)
Returns
-------
G : graph
The constructed undirected graph.
Raises
------
NetworkXError
If $d % 2 != 0$, as the degree of each node must be even.
If $n - 1$ is less than $2d$, as the graph can be at most complete.
If `max_tries` is reached while generating the independent cycles.
See Also
--------
is_regular_expander
random_regular_expander_graph
References
----------
.. [1] Joel Friedman,
A Proof of Alon’s Second Eigenvalue Conjecture and Related Problems, 2004
https://arxiv.org/abs/cs/0405020
| null | (n, d, *, create_using=None, max_tries=100, seed=None, backend=None, **backend_kwargs) |
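The construction in the Notes, taking the union of $d / 2$ random cycles so that each cycle contributes degree 2 to every node, can be sketched without networkx. This is a simplified version that skips the edge-collision retries controlled by `max_tries`, so repeated edges between cycles are possible:

```python
import random


def regular_union_of_cycles(n, d, seed=None):
    """Sketch: union of d/2 random permutation cycles on n nodes.

    Each random cyclic permutation of the nodes contributes degree 2
    to every node, so d/2 of them give a d-regular (multi)graph.  The
    real generator additionally rejects repeated edges, retrying up to
    `max_tries` times; this sketch omits that step.
    """
    assert d % 2 == 0
    rng = random.Random(seed)
    edges = []
    for _ in range(d // 2):
        perm = list(range(n))
        rng.shuffle(perm)
        # Close the shuffled node order into a single n-cycle.
        edges += [(perm[i], perm[(i + 1) % n]) for i in range(n)]
    return edges


edges = regular_union_of_cycles(10, 4, seed=1)
degree = [0] * 10
for u, v in edges:
    degree[u] += 1
    degree[v] += 1
assert all(deg == 4 for deg in degree)
```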
30,898 | networkx.algorithms.flow.mincost | min_cost_flow | Returns a minimum cost flow satisfying all demands in digraph G.
G is a digraph with edge costs and capacities and in which nodes
have demand, i.e., they want to send or receive some amount of
flow. A negative demand means that the node wants to send flow, a
positive demand means that the node wants to receive flow. A flow on
the digraph G satisfies all demands if the net flow into each node
is equal to the demand of that node.
Parameters
----------
G : NetworkX graph
DiGraph on which a minimum cost flow satisfying all demands is
to be found.
demand : string
Nodes of the graph G are expected to have an attribute demand
that indicates how much flow a node wants to send (negative
demand) or receive (positive demand). Note that the sum of the
demands should be 0; otherwise the problem is not feasible. If
this attribute is not present, a node is considered to have 0
demand. Default value: 'demand'.
capacity : string
Edges of the graph G are expected to have an attribute capacity
that indicates how much flow the edge can support. If this
attribute is not present, the edge is considered to have
infinite capacity. Default value: 'capacity'.
weight : string
Edges of the graph G are expected to have an attribute weight
that indicates the cost incurred by sending one unit of flow on
that edge. If not present, the weight is considered to be 0.
Default value: 'weight'.
Returns
-------
flowDict : dictionary
Dictionary of dictionaries keyed by nodes such that
flowDict[u][v] is the flow on edge (u, v).
Raises
------
NetworkXError
This exception is raised if the input graph is not directed or
not connected.
NetworkXUnfeasible
This exception is raised in the following situations:
* The sum of the demands is not zero. Then, there is no
flow satisfying all demands.
* There is no flow satisfying all demand.
NetworkXUnbounded
This exception is raised if the digraph G has a cycle of
negative cost and infinite capacity. Then, the cost of a flow
satisfying all demands is unbounded below.
See also
--------
cost_of_flow, max_flow_min_cost, min_cost_flow_cost, network_simplex
Notes
-----
This algorithm is not guaranteed to work if edge weights or demands
are floating point numbers (overflows and roundoff errors can
cause problems). As a workaround you can use integer numbers by
multiplying the relevant edge attributes by a convenient
constant factor (e.g. 100).
Examples
--------
A simple example of a min cost flow problem.
>>> G = nx.DiGraph()
>>> G.add_node("a", demand=-5)
>>> G.add_node("d", demand=5)
>>> G.add_edge("a", "b", weight=3, capacity=4)
>>> G.add_edge("a", "c", weight=6, capacity=10)
>>> G.add_edge("b", "d", weight=1, capacity=9)
>>> G.add_edge("c", "d", weight=2, capacity=5)
>>> flowDict = nx.min_cost_flow(G)
>>> flowDict
{'a': {'b': 4, 'c': 1}, 'd': {}, 'b': {'d': 4}, 'c': {'d': 1}}
| null | (G, demand='demand', capacity='capacity', weight='weight', *, backend=None, **backend_kwargs) |
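The integer-scaling workaround suggested in the Notes can be sketched as a small pre-processing helper (hypothetical, not part of networkx; 100 is just an illustrative factor). After solving on the scaled graph, the resulting flow cost is `factor` times the original-scale cost:

```python
def scale_to_int(attrs, factor=100):
    """Scale float attribute values to integers, as the Notes suggest.

    Multiplying weights/demands by a common factor and rounding keeps
    the min-cost-flow solver working purely with integers, avoiding
    float round-off.  `scale_to_int` is a hypothetical helper name.
    """
    return {k: round(v * factor) for k, v in attrs.items()}


assert scale_to_int({"weight": 3.14, "capacity": 2.5}) == {
    "weight": 314,
    "capacity": 250,
}
```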
30,899 | networkx.algorithms.flow.mincost | min_cost_flow_cost | Find the cost of a minimum cost flow satisfying all demands in digraph G.
G is a digraph with edge costs and capacities and in which nodes
have demand, i.e., they want to send or receive some amount of
flow. A negative demand means that the node wants to send flow, a
positive demand means that the node wants to receive flow. A flow on
the digraph G satisfies all demands if the net flow into each node
is equal to the demand of that node.
Parameters
----------
G : NetworkX graph
DiGraph on which a minimum cost flow satisfying all demands is
to be found.
demand : string
Nodes of the graph G are expected to have an attribute demand
that indicates how much flow a node wants to send (negative
demand) or receive (positive demand). Note that the sum of the
demands should be 0; otherwise the problem is not feasible. If
this attribute is not present, a node is considered to have 0
demand. Default value: 'demand'.
capacity : string
Edges of the graph G are expected to have an attribute capacity
that indicates how much flow the edge can support. If this
attribute is not present, the edge is considered to have
infinite capacity. Default value: 'capacity'.
weight : string
Edges of the graph G are expected to have an attribute weight
that indicates the cost incurred by sending one unit of flow on
that edge. If not present, the weight is considered to be 0.
Default value: 'weight'.
Returns
-------
flowCost : integer, float
Cost of a minimum cost flow satisfying all demands.
Raises
------
NetworkXError
This exception is raised if the input graph is not directed or
not connected.
NetworkXUnfeasible
This exception is raised in the following situations:
* The sum of the demands is not zero. Then, there is no
flow satisfying all demands.
* There is no flow satisfying all demand.
NetworkXUnbounded
This exception is raised if the digraph G has a cycle of
negative cost and infinite capacity. Then, the cost of a flow
satisfying all demands is unbounded below.
See also
--------
cost_of_flow, max_flow_min_cost, min_cost_flow, network_simplex
Notes
-----
This algorithm is not guaranteed to work if edge weights or demands
are floating point numbers (overflows and roundoff errors can
cause problems). As a workaround you can use integer numbers by
multiplying the relevant edge attributes by a convenient
constant factor (e.g. 100).
Examples
--------
A simple example of a min cost flow problem.
>>> G = nx.DiGraph()
>>> G.add_node("a", demand=-5)
>>> G.add_node("d", demand=5)
>>> G.add_edge("a", "b", weight=3, capacity=4)
>>> G.add_edge("a", "c", weight=6, capacity=10)
>>> G.add_edge("b", "d", weight=1, capacity=9)
>>> G.add_edge("c", "d", weight=2, capacity=5)
>>> flowCost = nx.min_cost_flow_cost(G)
>>> flowCost
24
| null | (G, demand='demand', capacity='capacity', weight='weight', *, backend=None, **backend_kwargs) |
30,900 | networkx.algorithms.covering | min_edge_cover | Returns the min cardinality edge cover of the graph as a set of edges.
A smallest edge cover can be found in polynomial time by finding
a maximum matching and extending it greedily so that all nodes
are covered. This function follows that process. A maximum matching
algorithm can be specified for the first step of the algorithm.
The resulting set may return a set with one 2-tuple for each edge,
(the usual case) or with both 2-tuples `(u, v)` and `(v, u)` for
each edge. The latter is only done when a bipartite matching algorithm
is specified as `matching_algorithm`.
Parameters
----------
G : NetworkX graph
An undirected graph.
matching_algorithm : function
A function that returns a maximum cardinality matching for `G`.
The function must take one input, the graph `G`, and return
either a set of edges (with only one direction for the pair of nodes)
or a dictionary mapping each node to its mate. If not specified,
:func:`~networkx.algorithms.matching.max_weight_matching` is used.
Common bipartite matching functions include
:func:`~networkx.algorithms.bipartite.matching.hopcroft_karp_matching`
or
:func:`~networkx.algorithms.bipartite.matching.eppstein_matching`.
Returns
-------
min_cover : set
A set of the edges in a minimum edge cover in the form of tuples.
It contains only one of the equivalent 2-tuples `(u, v)` and `(v, u)`
for each edge. If a bipartite method is used to compute the matching,
the returned set contains both the 2-tuples `(u, v)` and `(v, u)`
for each edge of a minimum edge cover.
Examples
--------
>>> G = nx.Graph([(0, 1), (0, 2), (0, 3), (1, 2), (1, 3)])
>>> sorted(nx.min_edge_cover(G))
[(2, 1), (3, 0)]
Notes
-----
An edge cover of a graph is a set of edges such that every node of
the graph is incident to at least one edge of the set.
The minimum edge cover is an edge covering of smallest cardinality.
Due to its implementation, the worst-case running time of this algorithm
is bounded by the worst-case running time of the function
``matching_algorithm``.
Minimum edge cover for `G` can also be found using the `min_edge_covering`
function in :mod:`networkx.algorithms.bipartite.covering` which is
simply this function with a default matching algorithm of
:func:`~networkx.algorithms.bipartite.matching.hopcroft_karp_matching`
| null | (G, matching_algorithm=None, *, backend=None, **backend_kwargs) |
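The two-step process described above — take a maximum matching, then add one incident edge for each still-uncovered node — can be sketched over a plain adjacency dict (a simplified stand-in, not the networkx implementation):

```python
def greedy_edge_cover(adj, matching):
    """Extend a matching to an edge cover of a graph given as an adj dict.

    `matching` is a set of 2-tuples covering some nodes; every node
    left uncovered (and not isolated) gets one incident edge added.
    """
    cover = set(matching)
    covered = {u for e in matching for u in e}
    for u, nbrs in adj.items():
        if u not in covered and nbrs:
            v = min(nbrs)  # deterministic pick, for the sketch only
            cover.add((u, v))
            covered.update((u, v))
    return cover


adj = {0: [1, 2], 1: [0], 2: [0, 3], 3: [2]}
cover = greedy_edge_cover(adj, {(0, 1)})
assert {u for e in cover for u in e} == {0, 1, 2, 3}
```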
30,901 | networkx.algorithms.matching | min_weight_matching | Compute a minimum-weight maximal matching of G.
Use the maximum-weight algorithm with edge weights subtracted
from the maximum weight of all edges.
A matching is a subset of edges in which no node occurs more than once.
The weight of a matching is the sum of the weights of its edges.
A maximal matching cannot add more edges and still be a matching.
The cardinality of a matching is the number of matched edges.
This method replaces the edge weights with 1 plus the maximum edge weight
minus the original edge weight.
new_weight = (max_weight + 1) - edge_weight
then runs :func:`max_weight_matching` with the new weights.
The max weight matching with these new weights corresponds
to the min weight matching using the original weights.
Adding 1 to the max edge weight keeps all edge weights positive
and as integers if they started as integers.
You might worry that adding 1 to each weight would make the algorithm
favor matchings with more edges. But we use the parameter
`maxcardinality=True` in `max_weight_matching` to ensure that the
number of edges in the competing matchings are the same and thus
the optimum does not change due to changes in the number of edges.
Read the documentation of `max_weight_matching` for more information.
Parameters
----------
G : NetworkX graph
Undirected graph
weight: string, optional (default='weight')
Edge data key corresponding to the edge weight.
If key not found, uses 1 as weight.
Returns
-------
matching : set
A minimal weight matching of the graph.
See Also
--------
max_weight_matching
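The weight transformation `new_weight = (max_weight + 1) - edge_weight` described above can be sketched as:

```python
def invert_weights(weighted_edges):
    """Map each weight w to (max_weight + 1) - w, as described above.

    Keeps all weights positive (and integral if they started integral),
    so a max-weight matching on the new weights corresponds to a
    min-weight matching on the old ones among matchings of the same
    cardinality.  `invert_weights` is an illustrative helper name.
    """
    max_weight = max(w for _, _, w in weighted_edges)
    return [(u, v, (max_weight + 1) - w) for u, v, w in weighted_edges]


edges = [(1, 2, 6), (2, 3, 1), (3, 4, 4)]
assert invert_weights(edges) == [(1, 2, 1), (2, 3, 6), (3, 4, 3)]
```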
| @not_implemented_for("multigraph")
@not_implemented_for("directed")
@nx._dispatchable(edge_attrs="weight")
def max_weight_matching(G, maxcardinality=False, weight="weight"):
"""Compute a maximum-weighted matching of G.
A matching is a subset of edges in which no node occurs more than once.
The weight of a matching is the sum of the weights of its edges.
A maximal matching cannot add more edges and still be a matching.
The cardinality of a matching is the number of matched edges.
Parameters
----------
G : NetworkX graph
Undirected graph
maxcardinality: bool, optional (default=False)
If maxcardinality is True, compute the maximum-cardinality matching
with maximum weight among all maximum-cardinality matchings.
weight: string, optional (default='weight')
Edge data key corresponding to the edge weight.
If key not found, uses 1 as weight.
Returns
-------
matching : set
A maximal matching of the graph.
Examples
--------
>>> G = nx.Graph()
>>> edges = [(1, 2, 6), (1, 3, 2), (2, 3, 1), (2, 4, 7), (3, 5, 9), (4, 5, 3)]
>>> G.add_weighted_edges_from(edges)
>>> sorted(nx.max_weight_matching(G))
[(2, 4), (5, 3)]
Notes
-----
If G has edges with weight attributes the edge data are used as
weight values else the weights are assumed to be 1.
This function takes time O(number_of_nodes ** 3).
If all edge weights are integers, the algorithm uses only integer
computations. If floating point weights are used, the algorithm
could return a slightly suboptimal matching due to numeric
precision errors.
This method is based on the "blossom" method for finding augmenting
paths and the "primal-dual" method for finding a matching of maximum
weight, both methods invented by Jack Edmonds [1]_.
Bipartite graphs can also be matched using the functions present in
:mod:`networkx.algorithms.bipartite.matching`.
References
----------
.. [1] "Efficient Algorithms for Finding Maximum Matching in Graphs",
Zvi Galil, ACM Computing Surveys, 1986.
"""
#
# The algorithm is taken from "Efficient Algorithms for Finding Maximum
# Matching in Graphs" by Zvi Galil, ACM Computing Surveys, 1986.
# It is based on the "blossom" method for finding augmenting paths and
# the "primal-dual" method for finding a matching of maximum weight, both
# methods invented by Jack Edmonds.
#
# A C program for maximum weight matching by Ed Rothberg was used
# extensively to validate this new code.
#
# Many terms used in the code comments are explained in the paper
# by Galil. You will probably need the paper to make sense of this code.
#
class NoNode:
"""Dummy value which is different from any node."""
class Blossom:
"""Representation of a non-trivial blossom or sub-blossom."""
__slots__ = ["childs", "edges", "mybestedges"]
# b.childs is an ordered list of b's sub-blossoms, starting with
# the base and going round the blossom.
# b.edges is the list of b's connecting edges, such that
# b.edges[i] = (v, w) where v is a vertex in b.childs[i]
# and w is a vertex in b.childs[wrap(i+1)].
# If b is a top-level S-blossom,
# b.mybestedges is a list of least-slack edges to neighboring
# S-blossoms, or None if no such list has been computed yet.
# This is used for efficient computation of delta3.
# Generate the blossom's leaf vertices.
def leaves(self):
stack = [*self.childs]
while stack:
t = stack.pop()
if isinstance(t, Blossom):
stack.extend(t.childs)
else:
yield t
# Get a list of vertices.
gnodes = list(G)
if not gnodes:
return set() # don't bother with empty graphs
# Find the maximum edge weight.
maxweight = 0
allinteger = True
for i, j, d in G.edges(data=True):
wt = d.get(weight, 1)
if i != j and wt > maxweight:
maxweight = wt
allinteger = allinteger and (str(type(wt)).split("'")[1] in ("int", "long"))
# If v is a matched vertex, mate[v] is its partner vertex.
# If v is a single vertex, v does not occur as a key in mate.
# Initially all vertices are single; updated during augmentation.
mate = {}
# If b is a top-level blossom,
# label.get(b) is None if b is unlabeled (free),
# 1 if b is an S-blossom,
# 2 if b is a T-blossom.
# The label of a vertex is found by looking at the label of its top-level
# containing blossom.
# If v is a vertex inside a T-blossom, label[v] is 2 iff v is reachable
# from an S-vertex outside the blossom.
# Labels are assigned during a stage and reset after each augmentation.
label = {}
# If b is a labeled top-level blossom,
# labeledge[b] = (v, w) is the edge through which b obtained its label
# such that w is a vertex in b, or None if b's base vertex is single.
# If w is a vertex inside a T-blossom and label[w] == 2,
# labeledge[w] = (v, w) is an edge through which w is reachable from
# outside the blossom.
labeledge = {}
# If v is a vertex, inblossom[v] is the top-level blossom to which v
# belongs.
# If v is a top-level vertex, inblossom[v] == v since v is itself
# a (trivial) top-level blossom.
# Initially all vertices are top-level trivial blossoms.
inblossom = dict(zip(gnodes, gnodes))
# If b is a sub-blossom,
# blossomparent[b] is its immediate parent (sub-)blossom.
# If b is a top-level blossom, blossomparent[b] is None.
blossomparent = dict(zip(gnodes, repeat(None)))
# If b is a (sub-)blossom,
# blossombase[b] is its base VERTEX (i.e. recursive sub-blossom).
blossombase = dict(zip(gnodes, gnodes))
# If w is a free vertex (or an unreached vertex inside a T-blossom),
# bestedge[w] = (v, w) is the least-slack edge from an S-vertex,
# or None if there is no such edge.
# If b is a (possibly trivial) top-level S-blossom,
# bestedge[b] = (v, w) is the least-slack edge to a different S-blossom
# (v inside b), or None if there is no such edge.
# This is used for efficient computation of delta2 and delta3.
bestedge = {}
# If v is a vertex,
# dualvar[v] = 2 * u(v) where u(v) is the v's variable in the dual
# optimization problem (if all edge weights are integers, multiplication
# by two ensures that all values remain integers throughout the algorithm).
# Initially, u(v) = maxweight / 2.
dualvar = dict(zip(gnodes, repeat(maxweight)))
# If b is a non-trivial blossom,
# blossomdual[b] = z(b) where z(b) is b's variable in the dual
# optimization problem.
blossomdual = {}
# If (v, w) in allowedge or (w, v) in allowedge, then the edge
# (v, w) is known to have zero slack in the optimization problem;
# otherwise the edge may or may not have zero slack.
allowedge = {}
# Queue of newly discovered S-vertices.
queue = []
# Return 2 * slack of edge (v, w) (does not work inside blossoms).
def slack(v, w):
return dualvar[v] + dualvar[w] - 2 * G[v][w].get(weight, 1)
# Assign label t to the top-level blossom containing vertex w,
# coming through an edge from vertex v.
def assignLabel(w, t, v):
b = inblossom[w]
assert label.get(w) is None and label.get(b) is None
label[w] = label[b] = t
if v is not None:
labeledge[w] = labeledge[b] = (v, w)
else:
labeledge[w] = labeledge[b] = None
bestedge[w] = bestedge[b] = None
if t == 1:
# b became an S-vertex/blossom; add it(s vertices) to the queue.
if isinstance(b, Blossom):
queue.extend(b.leaves())
else:
queue.append(b)
elif t == 2:
# b became a T-vertex/blossom; assign label S to its mate.
# (If b is a non-trivial blossom, its base is the only vertex
# with an external mate.)
base = blossombase[b]
assignLabel(mate[base], 1, base)
# Trace back from vertices v and w to discover either a new blossom
# or an augmenting path. Return the base vertex of the new blossom,
# or NoNode if an augmenting path was found.
def scanBlossom(v, w):
# Trace back from v and w, placing breadcrumbs as we go.
path = []
base = NoNode
while v is not NoNode:
# Look for a breadcrumb in v's blossom or put a new breadcrumb.
b = inblossom[v]
if label[b] & 4:
base = blossombase[b]
break
assert label[b] == 1
path.append(b)
label[b] = 5
# Trace one step back.
if labeledge[b] is None:
# The base of blossom b is single; stop tracing this path.
assert blossombase[b] not in mate
v = NoNode
else:
assert labeledge[b][0] == mate[blossombase[b]]
v = labeledge[b][0]
b = inblossom[v]
assert label[b] == 2
# b is a T-blossom; trace one more step back.
v = labeledge[b][0]
# Swap v and w so that we alternate between both paths.
if w is not NoNode:
v, w = w, v
# Remove breadcrumbs.
for b in path:
label[b] = 1
# Return base vertex, if we found one.
return base
# Construct a new blossom with given base, through S-vertices v and w.
# Label the new blossom as S; set its dual variable to zero;
# relabel its T-vertices to S and add them to the queue.
def addBlossom(base, v, w):
bb = inblossom[base]
bv = inblossom[v]
bw = inblossom[w]
# Create blossom.
b = Blossom()
blossombase[b] = base
blossomparent[b] = None
blossomparent[bb] = b
# Make list of sub-blossoms and their interconnecting edge endpoints.
b.childs = path = []
b.edges = edgs = [(v, w)]
# Trace back from v to base.
while bv != bb:
# Add bv to the new blossom.
blossomparent[bv] = b
path.append(bv)
edgs.append(labeledge[bv])
assert label[bv] == 2 or (
label[bv] == 1 and labeledge[bv][0] == mate[blossombase[bv]]
)
# Trace one step back.
v = labeledge[bv][0]
bv = inblossom[v]
# Add base sub-blossom; reverse lists.
path.append(bb)
path.reverse()
edgs.reverse()
# Trace back from w to base.
while bw != bb:
# Add bw to the new blossom.
blossomparent[bw] = b
path.append(bw)
edgs.append((labeledge[bw][1], labeledge[bw][0]))
assert label[bw] == 2 or (
label[bw] == 1 and labeledge[bw][0] == mate[blossombase[bw]]
)
# Trace one step back.
w = labeledge[bw][0]
bw = inblossom[w]
# Set label to S.
assert label[bb] == 1
label[b] = 1
labeledge[b] = labeledge[bb]
# Set dual variable to zero.
blossomdual[b] = 0
# Relabel vertices.
for v in b.leaves():
if label[inblossom[v]] == 2:
# This T-vertex now turns into an S-vertex because it becomes
# part of an S-blossom; add it to the queue.
queue.append(v)
inblossom[v] = b
# Compute b.mybestedges.
bestedgeto = {}
for bv in path:
if isinstance(bv, Blossom):
if bv.mybestedges is not None:
# Walk this subblossom's least-slack edges.
nblist = bv.mybestedges
# The sub-blossom won't need this data again.
bv.mybestedges = None
else:
# This subblossom does not have a list of least-slack
# edges; get the information from the vertices.
nblist = [
(v, w) for v in bv.leaves() for w in G.neighbors(v) if v != w
]
else:
nblist = [(bv, w) for w in G.neighbors(bv) if bv != w]
for k in nblist:
(i, j) = k
if inblossom[j] == b:
i, j = j, i
bj = inblossom[j]
if (
bj != b
and label.get(bj) == 1
and ((bj not in bestedgeto) or slack(i, j) < slack(*bestedgeto[bj]))
):
bestedgeto[bj] = k
# Forget about least-slack edge of the subblossom.
bestedge[bv] = None
b.mybestedges = list(bestedgeto.values())
# Select bestedge[b].
mybestedge = None
bestedge[b] = None
for k in b.mybestedges:
kslack = slack(*k)
if mybestedge is None or kslack < mybestslack:
mybestedge = k
mybestslack = kslack
bestedge[b] = mybestedge
# Expand the given top-level blossom.
def expandBlossom(b, endstage):
# This is an obnoxiously complicated recursive function for the sake of
# a stack-transformation. So, we hack around the complexity by using
# a trampoline pattern. By yielding the arguments to each recursive
# call, we keep the actual callstack flat.
def _recurse(b, endstage):
# Convert sub-blossoms into top-level blossoms.
for s in b.childs:
blossomparent[s] = None
if isinstance(s, Blossom):
if endstage and blossomdual[s] == 0:
# Recursively expand this sub-blossom.
yield s
else:
for v in s.leaves():
inblossom[v] = s
else:
inblossom[s] = s
# If we expand a T-blossom during a stage, its sub-blossoms must be
# relabeled.
if (not endstage) and label.get(b) == 2:
# Start at the sub-blossom through which the expanding
# blossom obtained its label, and relabel sub-blossoms until
# we reach the base.
# Figure out through which sub-blossom the expanding blossom
# obtained its label initially.
entrychild = inblossom[labeledge[b][1]]
# Decide in which direction we will go round the blossom.
j = b.childs.index(entrychild)
if j & 1:
# Start index is odd; go forward and wrap.
j -= len(b.childs)
jstep = 1
else:
# Start index is even; go backward.
jstep = -1
# Move along the blossom until we get to the base.
v, w = labeledge[b]
while j != 0:
# Relabel the T-sub-blossom.
if jstep == 1:
p, q = b.edges[j]
else:
q, p = b.edges[j - 1]
label[w] = None
label[q] = None
assignLabel(w, 2, v)
# Step to the next S-sub-blossom and note its forward edge.
allowedge[(p, q)] = allowedge[(q, p)] = True
j += jstep
if jstep == 1:
v, w = b.edges[j]
else:
w, v = b.edges[j - 1]
# Step to the next T-sub-blossom.
allowedge[(v, w)] = allowedge[(w, v)] = True
j += jstep
# Relabel the base T-sub-blossom WITHOUT stepping through to
# its mate (so don't call assignLabel).
bw = b.childs[j]
label[w] = label[bw] = 2
labeledge[w] = labeledge[bw] = (v, w)
bestedge[bw] = None
# Continue along the blossom until we get back to entrychild.
j += jstep
while b.childs[j] != entrychild:
# Examine the vertices of the sub-blossom to see whether
# it is reachable from a neighboring S-vertex outside the
# expanding blossom.
bv = b.childs[j]
if label.get(bv) == 1:
# This sub-blossom just got label S through one of its
# neighbors; leave it be.
j += jstep
continue
if isinstance(bv, Blossom):
for v in bv.leaves():
if label.get(v):
break
else:
v = bv
# If the sub-blossom contains a reachable vertex, assign
# label T to the sub-blossom.
if label.get(v):
assert label[v] == 2
assert inblossom[v] == bv
label[v] = None
label[mate[blossombase[bv]]] = None
assignLabel(v, 2, labeledge[v][0])
j += jstep
# Remove the expanded blossom entirely.
label.pop(b, None)
labeledge.pop(b, None)
bestedge.pop(b, None)
del blossomparent[b]
del blossombase[b]
del blossomdual[b]
# Now, we apply the trampoline pattern. We simulate a recursive
# callstack by maintaining a stack of generators, each yielding a
# sequence of function arguments. We grow the stack by appending a call
# to _recurse on each argument tuple, and shrink the stack whenever a
# generator is exhausted.
stack = [_recurse(b, endstage)]
while stack:
top = stack[-1]
for s in top:
stack.append(_recurse(s, endstage))
break
else:
stack.pop()
# Swap matched/unmatched edges over an alternating path through blossom b
# between vertex v and the base vertex. Keep blossom bookkeeping
# consistent.
def augmentBlossom(b, v):
# This is an obnoxiously complicated recursive function for the sake of
# a stack-transformation. So, we hack around the complexity by using
# a trampoline pattern. By yielding the arguments to each recursive
# call, we keep the actual callstack flat.
def _recurse(b, v):
# Bubble up through the blossom tree from vertex v to an immediate
# sub-blossom of b.
t = v
while blossomparent[t] != b:
t = blossomparent[t]
# Recursively deal with the first sub-blossom.
if isinstance(t, Blossom):
yield (t, v)
# Decide in which direction we will go round the blossom.
i = j = b.childs.index(t)
if i & 1:
# Start index is odd; go forward and wrap.
j -= len(b.childs)
jstep = 1
else:
# Start index is even; go backward.
jstep = -1
# Move along the blossom until we get to the base.
while j != 0:
# Step to the next sub-blossom and augment it recursively.
j += jstep
t = b.childs[j]
if jstep == 1:
w, x = b.edges[j]
else:
x, w = b.edges[j - 1]
if isinstance(t, Blossom):
yield (t, w)
# Step to the next sub-blossom and augment it recursively.
j += jstep
t = b.childs[j]
if isinstance(t, Blossom):
yield (t, x)
# Match the edge connecting those sub-blossoms.
mate[w] = x
mate[x] = w
# Rotate the list of sub-blossoms to put the new base at the front.
b.childs = b.childs[i:] + b.childs[:i]
b.edges = b.edges[i:] + b.edges[:i]
blossombase[b] = blossombase[b.childs[0]]
assert blossombase[b] == v
# Now, we apply the trampoline pattern. We simulate a recursive
# callstack by maintaining a stack of generators, each yielding a
# sequence of function arguments. We grow the stack by appending a call
# to _recurse on each argument tuple, and shrink the stack whenever a
# generator is exhausted.
stack = [_recurse(b, v)]
while stack:
top = stack[-1]
for args in top:
stack.append(_recurse(*args))
break
else:
stack.pop()
# Swap matched/unmatched edges over an alternating path between two
# single vertices. The augmenting path runs through S-vertices v and w.
def augmentMatching(v, w):
for s, j in ((v, w), (w, v)):
# Match vertex s to vertex j. Then trace back from s
# until we find a single vertex, swapping matched and unmatched
# edges as we go.
while 1:
bs = inblossom[s]
assert label[bs] == 1
assert (labeledge[bs] is None and blossombase[bs] not in mate) or (
labeledge[bs][0] == mate[blossombase[bs]]
)
# Augment through the S-blossom from s to base.
if isinstance(bs, Blossom):
augmentBlossom(bs, s)
# Update mate[s]
mate[s] = j
# Trace one step back.
if labeledge[bs] is None:
# Reached single vertex; stop.
break
t = labeledge[bs][0]
bt = inblossom[t]
assert label[bt] == 2
# Trace one more step back.
s, j = labeledge[bt]
# Augment through the T-blossom from j to base.
assert blossombase[bt] == t
if isinstance(bt, Blossom):
augmentBlossom(bt, j)
# Update mate[j]
mate[j] = s
# Verify that the optimum solution has been reached.
def verifyOptimum():
if maxcardinality:
# Vertices may have negative dual;
# find a constant non-negative number to add to all vertex duals.
vdualoffset = max(0, -min(dualvar.values()))
else:
vdualoffset = 0
# 0. all dual variables are non-negative
assert min(dualvar.values()) + vdualoffset >= 0
assert len(blossomdual) == 0 or min(blossomdual.values()) >= 0
# 0. all edges have non-negative slack and
# 1. all matched edges have zero slack;
for i, j, d in G.edges(data=True):
wt = d.get(weight, 1)
if i == j:
continue # ignore self-loops
s = dualvar[i] + dualvar[j] - 2 * wt
iblossoms = [i]
jblossoms = [j]
while blossomparent[iblossoms[-1]] is not None:
iblossoms.append(blossomparent[iblossoms[-1]])
while blossomparent[jblossoms[-1]] is not None:
jblossoms.append(blossomparent[jblossoms[-1]])
iblossoms.reverse()
jblossoms.reverse()
for bi, bj in zip(iblossoms, jblossoms):
if bi != bj:
break
s += 2 * blossomdual[bi]
assert s >= 0
if mate.get(i) == j or mate.get(j) == i:
assert mate[i] == j and mate[j] == i
assert s == 0
# 2. all single vertices have zero dual value;
for v in gnodes:
assert (v in mate) or dualvar[v] + vdualoffset == 0
# 3. all blossoms with positive dual value are full.
for b in blossomdual:
if blossomdual[b] > 0:
assert len(b.edges) % 2 == 1
for i, j in b.edges[1::2]:
assert mate[i] == j and mate[j] == i
# Ok.
# Main loop: continue until no further improvement is possible.
while True:
# Each iteration of this loop is a "stage".
# A stage finds an augmenting path and uses that to improve
# the matching.
# Remove labels from top-level blossoms/vertices.
label.clear()
labeledge.clear()
# Forget all about least-slack edges.
bestedge.clear()
for b in blossomdual:
b.mybestedges = None
# Loss of labeling means that we can not be sure that currently
# allowable edges remain allowable throughout this stage.
allowedge.clear()
# Make queue empty.
queue[:] = []
# Label single blossoms/vertices with S and put them in the queue.
for v in gnodes:
if (v not in mate) and label.get(inblossom[v]) is None:
assignLabel(v, 1, None)
# Loop until we succeed in augmenting the matching.
augmented = 0
while True:
# Each iteration of this loop is a "substage".
# A substage tries to find an augmenting path;
# if found, the path is used to improve the matching and
# the stage ends. If there is no augmenting path, the
# primal-dual method is used to pump some slack out of
# the dual variables.
# Continue labeling until all vertices which are reachable
# through an alternating path have got a label.
while queue and not augmented:
# Take an S vertex from the queue.
v = queue.pop()
assert label[inblossom[v]] == 1
# Scan its neighbors:
for w in G.neighbors(v):
if w == v:
continue # ignore self-loops
# w is a neighbor to v
bv = inblossom[v]
bw = inblossom[w]
if bv == bw:
# this edge is internal to a blossom; ignore it
continue
if (v, w) not in allowedge:
kslack = slack(v, w)
if kslack <= 0:
# edge k has zero slack => it is allowable
allowedge[(v, w)] = allowedge[(w, v)] = True
if (v, w) in allowedge:
if label.get(bw) is None:
# (C1) w is a free vertex;
# label w with T and label its mate with S (R12).
assignLabel(w, 2, v)
elif label.get(bw) == 1:
# (C2) w is an S-vertex (not in the same blossom);
# follow back-links to discover either an
# augmenting path or a new blossom.
base = scanBlossom(v, w)
if base is not NoNode:
# Found a new blossom; add it to the blossom
# bookkeeping and turn it into an S-blossom.
addBlossom(base, v, w)
else:
# Found an augmenting path; augment the
# matching and end this stage.
augmentMatching(v, w)
augmented = 1
break
elif label.get(w) is None:
# w is inside a T-blossom, but w itself has not
# yet been reached from outside the blossom;
# mark it as reached (we need this to relabel
# during T-blossom expansion).
assert label[bw] == 2
label[w] = 2
labeledge[w] = (v, w)
elif label.get(bw) == 1:
# keep track of the least-slack non-allowable edge to
# a different S-blossom.
if bestedge.get(bv) is None or kslack < slack(*bestedge[bv]):
bestedge[bv] = (v, w)
elif label.get(w) is None:
# w is a free vertex (or an unreached vertex inside
# a T-blossom) but we can not reach it yet;
# keep track of the least-slack edge that reaches w.
if bestedge.get(w) is None or kslack < slack(*bestedge[w]):
bestedge[w] = (v, w)
if augmented:
break
# There is no augmenting path under these constraints;
# compute delta and reduce slack in the optimization problem.
# (Note that our vertex dual variables, edge slacks and delta's
# are pre-multiplied by two.)
deltatype = -1
delta = deltaedge = deltablossom = None
# Compute delta1: the minimum value of any vertex dual.
if not maxcardinality:
deltatype = 1
delta = min(dualvar.values())
# Compute delta2: the minimum slack on any edge between
# an S-vertex and a free vertex.
for v in G.nodes():
if label.get(inblossom[v]) is None and bestedge.get(v) is not None:
d = slack(*bestedge[v])
if deltatype == -1 or d < delta:
delta = d
deltatype = 2
deltaedge = bestedge[v]
# Compute delta3: half the minimum slack on any edge between
# a pair of S-blossoms.
for b in blossomparent:
if (
blossomparent[b] is None
and label.get(b) == 1
and bestedge.get(b) is not None
):
kslack = slack(*bestedge[b])
if allinteger:
assert (kslack % 2) == 0
d = kslack // 2
else:
d = kslack / 2.0
if deltatype == -1 or d < delta:
delta = d
deltatype = 3
deltaedge = bestedge[b]
# Compute delta4: minimum z variable of any T-blossom.
for b in blossomdual:
if (
blossomparent[b] is None
and label.get(b) == 2
and (deltatype == -1 or blossomdual[b] < delta)
):
delta = blossomdual[b]
deltatype = 4
deltablossom = b
if deltatype == -1:
# No further improvement possible; max-cardinality optimum
# reached. Do a final delta update to make the optimum
# verifiable.
assert maxcardinality
deltatype = 1
delta = max(0, min(dualvar.values()))
# Update dual variables according to delta.
for v in gnodes:
if label.get(inblossom[v]) == 1:
# S-vertex: 2*u = 2*u - 2*delta
dualvar[v] -= delta
elif label.get(inblossom[v]) == 2:
# T-vertex: 2*u = 2*u + 2*delta
dualvar[v] += delta
for b in blossomdual:
if blossomparent[b] is None:
if label.get(b) == 1:
# top-level S-blossom: z = z + 2*delta
blossomdual[b] += delta
elif label.get(b) == 2:
# top-level T-blossom: z = z - 2*delta
blossomdual[b] -= delta
# Take action at the point where minimum delta occurred.
if deltatype == 1:
# No further improvement possible; optimum reached.
break
elif deltatype == 2:
# Use the least-slack edge to continue the search.
(v, w) = deltaedge
assert label[inblossom[v]] == 1
allowedge[(v, w)] = allowedge[(w, v)] = True
queue.append(v)
elif deltatype == 3:
# Use the least-slack edge to continue the search.
(v, w) = deltaedge
allowedge[(v, w)] = allowedge[(w, v)] = True
assert label[inblossom[v]] == 1
queue.append(v)
elif deltatype == 4:
# Expand the least-z blossom.
expandBlossom(deltablossom, False)
# End of this substage.
# Paranoia check that the matching is symmetric.
for v in mate:
assert mate[mate[v]] == v
# Stop when no more augmenting path can be found.
if not augmented:
break
# End of a stage; expand all S-blossoms which have zero dual.
for b in list(blossomdual.keys()):
if b not in blossomdual:
continue # already expanded
if blossomparent[b] is None and label.get(b) == 1 and blossomdual[b] == 0:
expandBlossom(b, True)
# Verify that we reached the optimum solution (only for integer weights).
if allinteger:
verifyOptimum()
return matching_dict_to_set(mate)
| (G, weight='weight', *, backend=None, **backend_kwargs) |
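As a quick sanity check of the blossom algorithm above, here is a hedged usage sketch (assuming a standard NetworkX install; the graph is an illustrative example, not from the source): on a weighted path graph, the maximum-weight matching prefers two lighter edges over one heavy edge.

```python
import networkx as nx

# A weighted path 1-2-3-4: taking the heavy middle edge (2, 3) yields
# weight 10, while matching (1, 2) and (3, 4) yields weight 12.
G = nx.Graph()
G.add_edge(1, 2, weight=6)
G.add_edge(2, 3, weight=10)
G.add_edge(3, 4, weight=6)

matching = nx.max_weight_matching(G)
# The result is a set of edge tuples; normalize orientation for comparison.
pairs = {tuple(sorted(e)) for e in matching}
print(pairs)  # {(1, 2), (3, 4)}
```

With `maxcardinality=True` the result is the same here, since the heavier matching is also the larger one.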
30,902 | networkx.algorithms.d_separation | minimal_d_separator | Returns a minimal d-separating set between `x` and `y` if possible
.. deprecated:: 3.3
minimal_d_separator is deprecated and will be removed in NetworkX v3.5.
Please use `find_minimal_d_separator(G, x, y)`.
| def minimal_d_separator(G, u, v):
"""Returns a minimal d-separating set between `x` and `y` if possible
.. deprecated:: 3.3
minimal_d_separator is deprecated and will be removed in NetworkX v3.5.
Please use `find_minimal_d_separator(G, x, y)`.
"""
import warnings
warnings.warn(
(
"This function is deprecated and will be removed in NetworkX v3.5."
"Please use `find_minimal_d_separator(G, x, y)`."
),
category=DeprecationWarning,
stacklevel=2,
)
return nx.find_minimal_d_separator(G, u, v)
| (G, u, v) |
30,903 | networkx.algorithms.tree.branchings | minimum_branching |
Returns a minimum branching from G.
Parameters
----------
G : (multi)digraph-like
The graph to be searched.
attr : str
The edge attribute used in determining optimality.
default : float
The value of the edge attribute used if an edge does not have
the attribute `attr`.
preserve_attrs : bool
If True, preserve the other attributes of the original graph (that are not
passed to `attr`)
partition : str
The key for the edge attribute containing the partition
data on the graph. Edges can be included, excluded or open using the
`EdgePartition` enum.
Returns
-------
B : (multi)digraph-like
A minimum branching.
See Also
--------
minimal_branching
| @nx._dispatchable(preserve_edge_attrs=True, returns_graph=True)
def maximum_branching(
G,
attr="weight",
default=1,
preserve_attrs=False,
partition=None,
):
#######################################
### Data Structure Helper Functions ###
#######################################
def edmonds_add_edge(G, edge_index, u, v, key, **d):
"""
Adds an edge to `G` while also updating the edge index.
This algorithm requires the use of an external dictionary to track
the edge keys since it is possible that the source or destination
node of an edge will be changed and the default key-handling
capabilities of the MultiDiGraph class do not account for this.
Parameters
----------
G : MultiDiGraph
The graph to insert an edge into.
edge_index : dict
A mapping from integers to the edges of the graph.
u : node
The source node of the new edge.
v : node
The destination node of the new edge.
key : int
The key to use from `edge_index`.
d : keyword arguments, optional
Other attributes to store on the new edge.
"""
if key in edge_index:
uu, vv, _ = edge_index[key]
if (u != uu) or (v != vv):
raise Exception(f"Key {key!r} is already in use.")
G.add_edge(u, v, key, **d)
edge_index[key] = (u, v, G.succ[u][v][key])
def edmonds_remove_node(G, edge_index, n):
"""
Remove a node from the graph, updating the edge index to match.
Parameters
----------
G : MultiDiGraph
The graph to remove an edge from.
edge_index : dict
A mapping from integers to the edges of the graph.
n : node
The node to remove from `G`.
"""
keys = set()
for keydict in G.pred[n].values():
keys.update(keydict)
for keydict in G.succ[n].values():
keys.update(keydict)
for key in keys:
del edge_index[key]
G.remove_node(n)
#######################
### Algorithm Setup ###
#######################
# Pick an attribute name that the original graph is unlikely to have
candidate_attr = "edmonds' secret candidate attribute"
new_node_base_name = "edmonds new node base name "
G_original = G
G = nx.MultiDiGraph()
G.__networkx_cache__ = None # Disable caching
# A dict to reliably track mutations to the edges using the key of the edge.
G_edge_index = {}
# Each edge is given an arbitrary numerical key
for key, (u, v, data) in enumerate(G_original.edges(data=True)):
d = {attr: data.get(attr, default)}
if data.get(partition) is not None:
d[partition] = data.get(partition)
if preserve_attrs:
for d_k, d_v in data.items():
if d_k != attr:
d[d_k] = d_v
edmonds_add_edge(G, G_edge_index, u, v, key, **d)
level = 0 # Stores the number of contracted nodes
# These are the buckets from the paper.
#
# In the paper, G^i are modified versions of the original graph.
# D^i and E^i are the nodes and edges of the maximal edges that are
# consistent with G^i. In this implementation, D^i and E^i are stored
# together as the graph B^i. We will have strictly more B^i then the
# paper will have.
#
# Note that the data in graphs and branchings are tuples with the graph as
# the first element and the edge index as the second.
B = nx.MultiDiGraph()
B_edge_index = {}
graphs = [] # G^i list
branchings = [] # B^i list
selected_nodes = set() # D^i bucket
uf = nx.utils.UnionFind()
# A list of lists of edge indices. Each list is a circuit for graph G^i.
# Note the edge list is not required to be a circuit in G^0.
circuits = []
# Stores the index of the minimum edge in the circuit found in G^i and B^i.
# The ordering of the edges seems to preserve the weight ordering from
# G^0, so even if the circuit does not form a circuit in G^0, its minimum
# edge is still the minimum edge of the corresponding edge set in G^0
# (despite the weights being different).
minedge_circuit = []
###########################
### Algorithm Structure ###
###########################
# Each step listed in the algorithm is an inner function. Thus, the overall
# loop structure is:
#
# while True:
# step_I1()
# if cycle detected:
# step_I2()
# elif every node of G is in D and E is a branching:
# break
##################################
### Algorithm Helper Functions ###
##################################
def edmonds_find_desired_edge(v):
"""
Find the edge directed towards v with maximal weight.
If an edge partition exists in this graph, return the included
edge if it exists and never return any excluded edge.
Note: There can only be one included edge for each vertex otherwise
the edge partition is empty.
Parameters
----------
v : node
The node to search for the maximal weight incoming edge.
"""
edge = None
max_weight = -INF
for u, _, key, data in G.in_edges(v, data=True, keys=True):
# Skip excluded edges
if data.get(partition) == nx.EdgePartition.EXCLUDED:
continue
new_weight = data[attr]
# Return the included edge
if data.get(partition) == nx.EdgePartition.INCLUDED:
max_weight = new_weight
edge = (u, v, key, new_weight, data)
break
# Find the best open edge
if new_weight > max_weight:
max_weight = new_weight
edge = (u, v, key, new_weight, data)
return edge, max_weight
def edmonds_step_I2(v, desired_edge, level):
"""
Perform step I2 from Edmonds' paper
First, check if the last step I1 created a cycle. If it did not, do nothing.
If it did, store the cycle for later reference and contract it.
Parameters
----------
v : node
The current node to consider
desired_edge : edge
The minimum desired edge to remove from the cycle.
level : int
The current level, i.e. the number of cycles that have already been removed.
"""
u = desired_edge[0]
Q_nodes = nx.shortest_path(B, v, u)
Q_edges = [
list(B[Q_nodes[i]][vv].keys())[0] for i, vv in enumerate(Q_nodes[1:])
]
Q_edges.append(desired_edge[2]) # Add the new edge key to complete the circuit
# Get the edge in the circuit with the minimum weight.
# Also, save the incoming weights for each node.
minweight = INF
minedge = None
Q_incoming_weight = {}
for edge_key in Q_edges:
u, v, data = B_edge_index[edge_key]
w = data[attr]
# We cannot remove an included edge, even if it is the
# minimum edge in the circuit
Q_incoming_weight[v] = w
if data.get(partition) == nx.EdgePartition.INCLUDED:
continue
if w < minweight:
minweight = w
minedge = edge_key
circuits.append(Q_edges)
minedge_circuit.append(minedge)
graphs.append((G.copy(), G_edge_index.copy()))
branchings.append((B.copy(), B_edge_index.copy()))
# Mutate the graph to contract the circuit
new_node = new_node_base_name + str(level)
G.add_node(new_node)
new_edges = []
for u, v, key, data in G.edges(data=True, keys=True):
if u in Q_incoming_weight:
if v in Q_incoming_weight:
# Circuit edge. For the moment do nothing,
# eventually it will be removed.
continue
else:
# Outgoing edge from a node in the circuit.
# Make it come from the new node instead
dd = data.copy()
new_edges.append((new_node, v, key, dd))
else:
if v in Q_incoming_weight:
# Incoming edge to the circuit.
# Update its weight
w = data[attr]
w += minweight - Q_incoming_weight[v]
dd = data.copy()
dd[attr] = w
new_edges.append((u, new_node, key, dd))
else:
# Outside edge. No modification needed
continue
for node in Q_nodes:
edmonds_remove_node(G, G_edge_index, node)
edmonds_remove_node(B, B_edge_index, node)
selected_nodes.difference_update(set(Q_nodes))
for u, v, key, data in new_edges:
edmonds_add_edge(G, G_edge_index, u, v, key, **data)
if candidate_attr in data:
del data[candidate_attr]
edmonds_add_edge(B, B_edge_index, u, v, key, **data)
uf.union(u, v)
def is_root(G, u, edgekeys):
"""
Returns True if `u` is a root node in G.
Node `u` is a root node if its in-degree over the specified edges is zero.
Parameters
----------
G : Graph
The current graph.
u : node
The node in `G` to check if it is a root.
edgekeys : iterable of edges
The edges for which to check if `u` is a root of.
"""
if u not in G:
raise Exception(f"{u!r} not in G")
for v in G.pred[u]:
for edgekey in G.pred[u][v]:
if edgekey in edgekeys:
return False, edgekey
else:
return True, None
nodes = iter(list(G.nodes))
while True:
try:
v = next(nodes)
except StopIteration:
# If there are no more new nodes to consider, then we should
# meet stopping condition (b) from the paper:
# (b) every node of G^i is in D^i and E^i is a branching
assert len(G) == len(B)
if len(B):
assert is_branching(B)
graphs.append((G.copy(), G_edge_index.copy()))
branchings.append((B.copy(), B_edge_index.copy()))
circuits.append([])
minedge_circuit.append(None)
break
else:
#####################
### BEGIN STEP I1 ###
#####################
# This is a very simple step, so I don't think it needs a method of its own
if v in selected_nodes:
continue
selected_nodes.add(v)
B.add_node(v)
desired_edge, desired_edge_weight = edmonds_find_desired_edge(v)
# There might be no desired edge if all edges are excluded or
# v is the last node to be added to B, the ultimate root of the branching
if desired_edge is not None and desired_edge_weight > 0:
u = desired_edge[0]
# Flag adding the edge will create a circuit before merging the two
# connected components of u and v in B
circuit = uf[u] == uf[v]
dd = {attr: desired_edge_weight}
if desired_edge[4].get(partition) is not None:
dd[partition] = desired_edge[4].get(partition)
edmonds_add_edge(B, B_edge_index, u, v, desired_edge[2], **dd)
G[u][v][desired_edge[2]][candidate_attr] = True
uf.union(u, v)
###################
### END STEP I1 ###
###################
#####################
### BEGIN STEP I2 ###
#####################
if circuit:
edmonds_step_I2(v, desired_edge, level)
nodes = iter(list(G.nodes()))
level += 1
###################
### END STEP I2 ###
###################
#####################
### BEGIN STEP I3 ###
#####################
# Create a new graph of the same class as the input graph
H = G_original.__class__()
# Start with the branching edges in the last level.
edges = set(branchings[level][1])
while level > 0:
level -= 1
# The current level is i, and we start counting from 0.
#
# We need the node at level i+1 that results from merging a circuit
# at level i. basename_0 is the first merged node and this happens
# at level 1. That is basename_0 is a node at level 1 that results
# from merging a circuit at level 0.
merged_node = new_node_base_name + str(level)
circuit = circuits[level]
isroot, edgekey = is_root(graphs[level + 1][0], merged_node, edges)
edges.update(circuit)
if isroot:
minedge = minedge_circuit[level]
if minedge is None:
raise Exception
# Remove the edge in the cycle with minimum weight
edges.remove(minedge)
else:
# We have identified an edge at the next higher level that
# transitions into the merged node at this level. That edge
# transitions to some corresponding node at the current level.
#
# We want to remove an edge from the cycle that transitions
# into the corresponding node, otherwise the result would not
# be a branching.
G, G_edge_index = graphs[level]
target = G_edge_index[edgekey][1]
for edgekey in circuit:
u, v, data = G_edge_index[edgekey]
if v == target:
break
else:
raise Exception("Couldn't find edge incoming to merged node.")
edges.remove(edgekey)
H.add_nodes_from(G_original)
for edgekey in edges:
u, v, d = graphs[0][1][edgekey]
dd = {attr: d[attr]}
if preserve_attrs:
for key, value in d.items():
if key not in [attr, candidate_attr]:
dd[key] = value
H.add_edge(u, v, **dd)
###################
### END STEP I3 ###
###################
return H
| (G, attr='weight', default=1, preserve_attrs=False, partition=None, *, backend=None, **backend_kwargs) |
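A brief usage sketch (an illustrative example, not part of the source): since a branching need not contain any edges, and the search above only takes an incoming edge when its (transformed) weight is positive, a minimum branching over all-positive weights is the empty edge set; negative weights make the example non-trivial.

```python
import networkx as nx

# Minimum branching: at most one incoming edge per node, no cycles,
# minimizing total weight. Negative weights make inclusion worthwhile:
# node 2 prefers the cheap edge (1, 2) over the expensive (0, 2).
G = nx.DiGraph()
G.add_edge(0, 1, weight=-2)
G.add_edge(0, 2, weight=5)
G.add_edge(1, 2, weight=-1)

B = nx.minimum_branching(G)
edges = sorted(B.edges())
total = sum(d["weight"] for _, _, d in B.edges(data=True))
print(edges, total)  # [(0, 1), (1, 2)] -3
```

For a branching that must reach every node regardless of sign, `minimal_branching` (mentioned in the See Also above) is the intended entry point.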
30,904 | networkx.algorithms.flow.maxflow | minimum_cut | Compute the value and the node partition of a minimum (s, t)-cut.
Use the max-flow min-cut theorem, i.e., the capacity of a minimum
capacity cut is equal to the flow value of a maximum flow.
Parameters
----------
flowG : NetworkX graph
Edges of the graph are expected to have an attribute called
'capacity'. If this attribute is not present, the edge is
considered to have infinite capacity.
_s : node
Source node for the flow.
_t : node
Sink node for the flow.
capacity : string
Edges of the graph G are expected to have an attribute capacity
that indicates how much flow the edge can support. If this
attribute is not present, the edge is considered to have
infinite capacity. Default value: 'capacity'.
flow_func : function
A function for computing the maximum flow among a pair of nodes
in a capacitated graph. The function has to accept at least three
parameters: a Graph or Digraph, a source node, and a target node.
And return a residual network that follows NetworkX conventions
(see Notes). If flow_func is None, the default maximum
flow function (:meth:`preflow_push`) is used. See below for
alternative algorithms. The choice of the default function may change
from version to version and should not be relied on. Default value:
None.
kwargs : Any other keyword parameter is passed to the function that
computes the maximum flow.
Returns
-------
cut_value : integer, float
Value of the minimum cut.
partition : pair of node sets
A partitioning of the nodes that defines a minimum cut.
Raises
------
NetworkXUnbounded
If the graph has a path of infinite capacity, all cuts have
infinite capacity and a NetworkXUnbounded exception is raised.
See also
--------
:meth:`maximum_flow`
:meth:`maximum_flow_value`
:meth:`minimum_cut_value`
:meth:`edmonds_karp`
:meth:`preflow_push`
:meth:`shortest_augmenting_path`
Notes
-----
The function used in the flow_func parameter has to return a residual
network that follows NetworkX conventions:
The residual network :samp:`R` from an input graph :samp:`G` has the
same nodes as :samp:`G`. :samp:`R` is a DiGraph that contains a pair
of edges :samp:`(u, v)` and :samp:`(v, u)` iff :samp:`(u, v)` is not a
self-loop, and at least one of :samp:`(u, v)` and :samp:`(v, u)` exists
in :samp:`G`.
For each edge :samp:`(u, v)` in :samp:`R`, :samp:`R[u][v]['capacity']`
is equal to the capacity of :samp:`(u, v)` in :samp:`G` if it exists
in :samp:`G` or zero otherwise. If the capacity is infinite,
:samp:`R[u][v]['capacity']` will have a high arbitrary finite value
that does not affect the solution of the problem. This value is stored in
:samp:`R.graph['inf']`. For each edge :samp:`(u, v)` in :samp:`R`,
:samp:`R[u][v]['flow']` represents the flow function of :samp:`(u, v)` and
satisfies :samp:`R[u][v]['flow'] == -R[v][u]['flow']`.
The flow value, defined as the total flow into :samp:`t`, the sink, is
stored in :samp:`R.graph['flow_value']`. Reachability to :samp:`t` using
only edges :samp:`(u, v)` such that
:samp:`R[u][v]['flow'] < R[u][v]['capacity']` induces a minimum
:samp:`s`-:samp:`t` cut.
Specific algorithms may store extra data in :samp:`R`.
The function should support an optional boolean parameter value_only. When
True, it can optionally terminate the algorithm as soon as the maximum flow
value and the minimum cut can be determined.
Examples
--------
>>> G = nx.DiGraph()
>>> G.add_edge("x", "a", capacity=3.0)
>>> G.add_edge("x", "b", capacity=1.0)
>>> G.add_edge("a", "c", capacity=3.0)
>>> G.add_edge("b", "c", capacity=5.0)
>>> G.add_edge("b", "d", capacity=4.0)
>>> G.add_edge("d", "e", capacity=2.0)
>>> G.add_edge("c", "y", capacity=2.0)
>>> G.add_edge("e", "y", capacity=3.0)
minimum_cut computes both the value of the
minimum cut and the node partition:
>>> cut_value, partition = nx.minimum_cut(G, "x", "y")
>>> reachable, non_reachable = partition
'partition' here is a tuple with the two sets of nodes that define
the minimum cut. You can compute the cut set of edges that induce
the minimum cut as follows:
>>> cutset = set()
>>> for u, nbrs in ((n, G[n]) for n in reachable):
... cutset.update((u, v) for v in nbrs if v in non_reachable)
>>> print(sorted(cutset))
[('c', 'y'), ('x', 'b')]
>>> cut_value == sum(G.edges[u, v]["capacity"] for (u, v) in cutset)
True
You can also use alternative algorithms for computing the
minimum cut by using the flow_func parameter.
>>> from networkx.algorithms.flow import shortest_augmenting_path
>>> cut_value == nx.minimum_cut(G, "x", "y", flow_func=shortest_augmenting_path)[0]
True
| null | (flowG, _s, _t, capacity='capacity', flow_func=None, *, backend=None, **kwargs) |
30,905 | networkx.algorithms.flow.maxflow | minimum_cut_value | Compute the value of a minimum (s, t)-cut.
Use the max-flow min-cut theorem, i.e., the capacity of a minimum
capacity cut is equal to the flow value of a maximum flow.
Parameters
----------
flowG : NetworkX graph
Edges of the graph are expected to have an attribute called
'capacity'. If this attribute is not present, the edge is
considered to have infinite capacity.
_s : node
Source node for the flow.
_t : node
Sink node for the flow.
capacity : string
Edges of the graph G are expected to have an attribute capacity
that indicates how much flow the edge can support. If this
attribute is not present, the edge is considered to have
infinite capacity. Default value: 'capacity'.
flow_func : function
A function for computing the maximum flow among a pair of nodes
in a capacitated graph. The function has to accept at least three
parameters: a Graph or Digraph, a source node, and a target node.
And return a residual network that follows NetworkX conventions
(see Notes). If flow_func is None, the default maximum
flow function (:meth:`preflow_push`) is used. See below for
alternative algorithms. The choice of the default function may change
from version to version and should not be relied on. Default value:
None.
kwargs : Any other keyword parameter is passed to the function that
computes the maximum flow.
Returns
-------
cut_value : integer, float
Value of the minimum cut.
Raises
------
NetworkXUnbounded
If the graph has a path of infinite capacity, all cuts have
infinite capacity and a NetworkXUnbounded exception is raised.
See also
--------
:meth:`maximum_flow`
:meth:`maximum_flow_value`
:meth:`minimum_cut`
:meth:`edmonds_karp`
:meth:`preflow_push`
:meth:`shortest_augmenting_path`
Notes
-----
The function used in the flow_func parameter has to return a residual
network that follows NetworkX conventions:
The residual network :samp:`R` from an input graph :samp:`G` has the
same nodes as :samp:`G`. :samp:`R` is a DiGraph that contains a pair
of edges :samp:`(u, v)` and :samp:`(v, u)` iff :samp:`(u, v)` is not a
self-loop, and at least one of :samp:`(u, v)` and :samp:`(v, u)` exists
in :samp:`G`.
For each edge :samp:`(u, v)` in :samp:`R`, :samp:`R[u][v]['capacity']`
is equal to the capacity of :samp:`(u, v)` in :samp:`G` if it exists
in :samp:`G` or zero otherwise. If the capacity is infinite,
:samp:`R[u][v]['capacity']` will have a high arbitrary finite value
that does not affect the solution of the problem. This value is stored in
:samp:`R.graph['inf']`. For each edge :samp:`(u, v)` in :samp:`R`,
:samp:`R[u][v]['flow']` represents the flow function of :samp:`(u, v)` and
satisfies :samp:`R[u][v]['flow'] == -R[v][u]['flow']`.
The flow value, defined as the total flow into :samp:`t`, the sink, is
stored in :samp:`R.graph['flow_value']`. Reachability to :samp:`t` using
only edges :samp:`(u, v)` such that
:samp:`R[u][v]['flow'] < R[u][v]['capacity']` induces a minimum
:samp:`s`-:samp:`t` cut.
Specific algorithms may store extra data in :samp:`R`.
The function should support an optional boolean parameter value_only. When
True, it can optionally terminate the algorithm as soon as the maximum flow
value and the minimum cut can be determined.
Examples
--------
>>> G = nx.DiGraph()
>>> G.add_edge("x", "a", capacity=3.0)
>>> G.add_edge("x", "b", capacity=1.0)
>>> G.add_edge("a", "c", capacity=3.0)
>>> G.add_edge("b", "c", capacity=5.0)
>>> G.add_edge("b", "d", capacity=4.0)
>>> G.add_edge("d", "e", capacity=2.0)
>>> G.add_edge("c", "y", capacity=2.0)
>>> G.add_edge("e", "y", capacity=3.0)
minimum_cut_value computes only the value of the
minimum cut:
>>> cut_value = nx.minimum_cut_value(G, "x", "y")
>>> cut_value
3.0
You can also use alternative algorithms for computing the
minimum cut by using the flow_func parameter.
>>> from networkx.algorithms.flow import shortest_augmenting_path
>>> cut_value == nx.minimum_cut_value(G, "x", "y", flow_func=shortest_augmenting_path)
True
| null | (flowG, _s, _t, capacity='capacity', flow_func=None, *, backend=None, **kwargs) |
30,906 | networkx.algorithms.cycles | minimum_cycle_basis | Returns a minimum weight cycle basis for G
Minimum weight means a cycle basis for which the total weight
(length for unweighted graphs) of all the cycles is minimum.
Parameters
----------
G : NetworkX Graph
weight: string
name of the edge attribute to use for edge weights
Returns
-------
A list of cycle lists. Each cycle list is a list of nodes
which forms a cycle (loop) in G. Note that the nodes are not
necessarily returned in the order in which they appear in the cycle.
Examples
--------
>>> G = nx.Graph()
>>> nx.add_cycle(G, [0, 1, 2, 3])
>>> nx.add_cycle(G, [0, 3, 4, 5])
>>> nx.minimum_cycle_basis(G)
[[5, 4, 3, 0], [3, 2, 1, 0]]
References:
[1] Kavitha, Telikepalli, et al. "An O(m^2n) Algorithm for
Minimum Cycle Basis of Graphs."
http://link.springer.com/article/10.1007/s00453-007-9064-z
[2] de Pina, J. 1995. Applications of shortest path methods.
Ph.D. thesis, University of Amsterdam, Netherlands
See Also
--------
simple_cycles, cycle_basis
| def recursive_simple_cycles(G):
"""Find simple cycles (elementary circuits) of a directed graph.
A `simple cycle`, or `elementary circuit`, is a closed path where
no node appears twice. Two elementary circuits are distinct if they
are not cyclic permutations of each other.
This version uses a recursive algorithm to build a list of cycles.
You should probably use the iterator version called simple_cycles().
Warning: This recursive version uses lots of RAM!
It appears in NetworkX for pedagogical value.
Parameters
----------
G : NetworkX DiGraph
A directed graph
Returns
-------
A list of cycles, where each cycle is represented by a list of nodes
along the cycle.
Example:
>>> edges = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 0), (2, 1), (2, 2)]
>>> G = nx.DiGraph(edges)
>>> nx.recursive_simple_cycles(G)
[[0], [2], [0, 1, 2], [0, 2], [1, 2]]
Notes
-----
The implementation follows pp. 79-80 in [1]_.
The time complexity is $O((n+e)(c+1))$ for $n$ nodes, $e$ edges and $c$
elementary circuits.
References
----------
.. [1] Finding all the elementary circuits of a directed graph.
D. B. Johnson, SIAM Journal on Computing 4, no. 1, 77-84, 1975.
https://doi.org/10.1137/0204007
See Also
--------
simple_cycles, cycle_basis
"""
# Jon Olav Vik, 2010-08-09
def _unblock(thisnode):
"""Recursively unblock and remove nodes from B[thisnode]."""
if blocked[thisnode]:
blocked[thisnode] = False
while B[thisnode]:
_unblock(B[thisnode].pop())
def circuit(thisnode, startnode, component):
closed = False # set to True if elementary path is closed
path.append(thisnode)
blocked[thisnode] = True
for nextnode in component[thisnode]: # direct successors of thisnode
if nextnode == startnode:
result.append(path[:])
closed = True
elif not blocked[nextnode]:
if circuit(nextnode, startnode, component):
closed = True
if closed:
_unblock(thisnode)
else:
for nextnode in component[thisnode]:
if thisnode not in B[nextnode]: # TODO: use set for speedup?
B[nextnode].append(thisnode)
path.pop() # remove thisnode from path
return closed
path = [] # stack of nodes in current path
blocked = defaultdict(bool) # vertex: blocked from search?
B = defaultdict(list) # graph portions that yield no elementary circuit
result = [] # list to accumulate the circuits found
# Johnson's algorithm excludes self-loop edges like (v, v).
# To be backward compatible, we record those cycles in advance
# and then remove them from G.
for v in G:
if G.has_edge(v, v):
result.append([v])
G.remove_edge(v, v)
# Johnson's algorithm requires some ordering of the nodes.
# They might not be sortable so we assign an arbitrary ordering.
ordering = dict(zip(G, range(len(G))))
for s in ordering:
# Build the subgraph induced by s and following nodes in the ordering
subgraph = G.subgraph(node for node in G if ordering[node] >= ordering[s])
# Find the strongly connected component in the subgraph
# that contains the least node according to the ordering
strongcomp = nx.strongly_connected_components(subgraph)
mincomp = min(strongcomp, key=lambda ns: min(ordering[n] for n in ns))
component = G.subgraph(mincomp)
if len(component) > 1:
# smallest node in the component according to the ordering
startnode = min(component, key=ordering.__getitem__)
for node in component:
blocked[node] = False
B[node][:] = []
dummy = circuit(startnode, startnode, component)
return result
| (G, weight=None, *, backend=None, **backend_kwargs) |
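As a quick sketch of the docstring above, Johnson's algorithm enumerates every elementary circuit, with self-loops reported as single-node cycles (the toy graph below is illustrative, not from the source):

```python
import networkx as nx

# Two elementary circuits (0->1->2->0 and 1->3->1) plus a self-loop at 4.
G = nx.DiGraph([(0, 1), (1, 2), (2, 0), (1, 3), (3, 1), (4, 4)])

# Normalize each cycle for comparison, since the rotation order is arbitrary.
cycles = sorted(sorted(c) for c in nx.simple_cycles(G))
print(cycles)  # [[0, 1, 2], [1, 3], [4]]
```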
30,907 | networkx.algorithms.connectivity.cuts | minimum_edge_cut | Returns a set of edges of minimum cardinality that disconnects G.
If source and target nodes are provided, this function returns the
set of edges of minimum cardinality that, if removed, would break
all paths among source and target in G. If not, it returns a set of
edges of minimum cardinality that disconnects G.
Parameters
----------
G : NetworkX graph
s : node
Source node. Optional. Default value: None.
t : node
Target node. Optional. Default value: None.
flow_func : function
A function for computing the maximum flow among a pair of nodes.
The function has to accept at least three parameters: a Digraph,
a source node, and a target node, and return a residual network
that follows NetworkX conventions (see :meth:`maximum_flow` for
details). If flow_func is None, the default maximum flow function
(:meth:`edmonds_karp`) is used. See below for details. The
choice of the default function may change from version
to version and should not be relied on. Default value: None.
Returns
-------
cutset : set
Set of edges that, if removed, would disconnect G. If source
and target nodes are provided, the set contains the edges that,
if removed, would destroy all paths between source and target.
Examples
--------
>>> # Platonic icosahedral graph has edge connectivity 5
>>> G = nx.icosahedral_graph()
>>> len(nx.minimum_edge_cut(G))
5
You can use alternative flow algorithms for the underlying
maximum flow computation. In dense networks the algorithm
:meth:`shortest_augmenting_path` will usually perform better
than the default :meth:`edmonds_karp`, which is faster for
sparse networks with highly skewed degree distributions.
Alternative flow functions have to be explicitly imported
from the flow package.
>>> from networkx.algorithms.flow import shortest_augmenting_path
>>> len(nx.minimum_edge_cut(G, flow_func=shortest_augmenting_path))
5
If you specify a pair of nodes (source and target) as parameters,
this function returns the value of local edge connectivity.
>>> nx.edge_connectivity(G, 3, 7)
5
If you need to perform several local computations among different
pairs of nodes on the same graph, it is recommended that you reuse
the data structures used in the maximum flow computations. See
:meth:`local_edge_connectivity` for details.
Notes
-----
This is a flow based implementation of minimum edge cut. For
undirected graphs the algorithm works by finding a 'small' dominating
set of nodes of G (see algorithm 7 in [1]_) and computing the maximum
flow between an arbitrary node in the dominating set and the rest of
nodes in it. This is an implementation of algorithm 6 in [1]_. For
directed graphs, the algorithm makes n calls to the max flow function.
The function raises an error if the directed graph is not weakly
connected and returns an empty set if it is weakly connected.
It is an implementation of algorithm 8 in [1]_.
See also
--------
:meth:`minimum_st_edge_cut`
:meth:`minimum_node_cut`
:meth:`stoer_wagner`
:meth:`node_connectivity`
:meth:`edge_connectivity`
:meth:`maximum_flow`
:meth:`edmonds_karp`
:meth:`preflow_push`
:meth:`shortest_augmenting_path`
References
----------
.. [1] Abdol-Hossein Esfahanian. Connectivity Algorithms.
http://www.cse.msu.edu/~cse835/Papers/Graph_connectivity_revised.pdf
| null | (G, s=None, t=None, flow_func=None, *, backend=None, **backend_kwargs) |
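A short sketch of the behavior described above, using the docstring's icosahedral example: the returned cut really does disconnect the graph when removed.

```python
import networkx as nx

# The icosahedral graph is 5-edge-connected, so the minimum cut has 5 edges.
G = nx.icosahedral_graph()
cutset = nx.minimum_edge_cut(G)

# Removing the cut edges must disconnect the graph.
H = G.copy()
H.remove_edges_from(cutset)
print(len(cutset), nx.is_connected(H))  # 5 False
```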
30,908 | networkx.algorithms.connectivity.cuts | minimum_node_cut | Returns a set of nodes of minimum cardinality that disconnects G.
If source and target nodes are provided, this function returns the
set of nodes of minimum cardinality that, if removed, would destroy
all paths among source and target in G. If not, it returns a set
of nodes of minimum cardinality that disconnects G.
Parameters
----------
G : NetworkX graph
s : node
Source node. Optional. Default value: None.
t : node
Target node. Optional. Default value: None.
flow_func : function
A function for computing the maximum flow among a pair of nodes.
The function has to accept at least three parameters: a Digraph,
a source node, and a target node, and return a residual network
that follows NetworkX conventions (see :meth:`maximum_flow` for
details). If flow_func is None, the default maximum flow function
(:meth:`edmonds_karp`) is used. See below for details. The
choice of the default function may change from version
to version and should not be relied on. Default value: None.
Returns
-------
cutset : set
Set of nodes that, if removed, would disconnect G. If source
and target nodes are provided, the set contains the nodes that,
if removed, would destroy all paths between source and target.
Examples
--------
>>> # Platonic icosahedral graph has node connectivity 5
>>> G = nx.icosahedral_graph()
>>> node_cut = nx.minimum_node_cut(G)
>>> len(node_cut)
5
You can use alternative flow algorithms for the underlying maximum
flow computation. In dense networks the algorithm
:meth:`shortest_augmenting_path` will usually perform better
than the default :meth:`edmonds_karp`, which is faster for
sparse networks with highly skewed degree distributions. Alternative
flow functions have to be explicitly imported from the flow package.
>>> from networkx.algorithms.flow import shortest_augmenting_path
>>> node_cut == nx.minimum_node_cut(G, flow_func=shortest_augmenting_path)
True
If you specify a pair of nodes (source and target) as parameters,
this function returns a local st node cut.
>>> len(nx.minimum_node_cut(G, 3, 7))
5
If you need to perform several local st cuts among different
pairs of nodes on the same graph, it is recommended that you reuse
the data structures used in the maximum flow computations. See
:meth:`minimum_st_node_cut` for details.
Notes
-----
This is a flow based implementation of minimum node cut. The algorithm
is based on solving a number of maximum flow computations to determine
the capacity of the minimum cut on an auxiliary directed network that
corresponds to the minimum node cut of G. It handles both directed
and undirected graphs. This implementation is based on algorithm 11
in [1]_.
See also
--------
:meth:`minimum_st_node_cut`
:meth:`minimum_cut`
:meth:`minimum_edge_cut`
:meth:`stoer_wagner`
:meth:`node_connectivity`
:meth:`edge_connectivity`
:meth:`maximum_flow`
:meth:`edmonds_karp`
:meth:`preflow_push`
:meth:`shortest_augmenting_path`
References
----------
.. [1] Abdol-Hossein Esfahanian. Connectivity Algorithms.
http://www.cse.msu.edu/~cse835/Papers/Graph_connectivity_revised.pdf
| null | (G, s=None, t=None, flow_func=None, *, backend=None, **backend_kwargs) |
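To complement the docstring, here is a sketch of a local s-t node cut on a small graph; the 3 x 3 grid and the corner labels 0 and 8 are illustrative assumptions, not from the source:

```python
import networkx as nx

# In a 3 x 3 grid, opposite corners have degree 2, so the local
# node connectivity between them is 2.
G = nx.convert_node_labels_to_integers(nx.grid_2d_graph(3, 3))
cut = nx.minimum_node_cut(G, 0, 8)

# Removing the cut nodes destroys every path from 0 to 8.
H = G.copy()
H.remove_nodes_from(cut)
print(len(cut), nx.has_path(H, 0, 8))  # 2 False
```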
30,909 | networkx.algorithms.tree.branchings | minimum_spanning_arborescence |
Returns a minimum spanning arborescence from G.
Parameters
----------
G : (multi)digraph-like
The graph to be searched.
attr : str
The edge attribute used in determining optimality.
default : float
The value of the edge attribute used if an edge does not have
the attribute `attr`.
preserve_attrs : bool
If True, preserve the other attributes of the original graph (that are not
passed to `attr`)
partition : str
The key for the edge attribute containing the partition
data on the graph. Edges can be included, excluded or open using the
`EdgePartition` enum.
Returns
-------
B : (multi)digraph-like
A minimum spanning arborescence.
Raises
------
NetworkXException
If the graph does not contain a minimum spanning arborescence.
| @nx._dispatchable(preserve_edge_attrs=True, returns_graph=True)
def maximum_branching(
G,
attr="weight",
default=1,
preserve_attrs=False,
partition=None,
):
#######################################
### Data Structure Helper Functions ###
#######################################
def edmonds_add_edge(G, edge_index, u, v, key, **d):
"""
Adds an edge to `G` while also updating the edge index.
This algorithm requires the use of an external dictionary to track
the edge keys since it is possible that the source or destination
node of an edge will be changed and the default key-handling
capabilities of the MultiDiGraph class do not account for this.
Parameters
----------
G : MultiDiGraph
The graph to insert an edge into.
edge_index : dict
A mapping from integers to the edges of the graph.
u : node
The source node of the new edge.
v : node
The destination node of the new edge.
key : int
The key to use from `edge_index`.
d : keyword arguments, optional
Other attributes to store on the new edge.
"""
if key in edge_index:
uu, vv, _ = edge_index[key]
if (u != uu) or (v != vv):
raise Exception(f"Key {key!r} is already in use.")
G.add_edge(u, v, key, **d)
edge_index[key] = (u, v, G.succ[u][v][key])
def edmonds_remove_node(G, edge_index, n):
"""
Remove a node from the graph, updating the edge index to match.
Parameters
----------
G : MultiDiGraph
The graph to remove an edge from.
edge_index : dict
A mapping from integers to the edges of the graph.
n : node
The node to remove from `G`.
"""
keys = set()
for keydict in G.pred[n].values():
keys.update(keydict)
for keydict in G.succ[n].values():
keys.update(keydict)
for key in keys:
del edge_index[key]
G.remove_node(n)
#######################
### Algorithm Setup ###
#######################
# Pick an attribute name that the original graph is unlikely to have
candidate_attr = "edmonds' secret candidate attribute"
new_node_base_name = "edmonds new node base name "
G_original = G
G = nx.MultiDiGraph()
G.__networkx_cache__ = None # Disable caching
# A dict to reliably track mutations to the edges using the key of the edge.
G_edge_index = {}
# Each edge is given an arbitrary numerical key
for key, (u, v, data) in enumerate(G_original.edges(data=True)):
d = {attr: data.get(attr, default)}
if data.get(partition) is not None:
d[partition] = data.get(partition)
if preserve_attrs:
for d_k, d_v in data.items():
if d_k != attr:
d[d_k] = d_v
edmonds_add_edge(G, G_edge_index, u, v, key, **d)
level = 0 # Stores the number of contracted nodes
# These are the buckets from the paper.
#
# In the paper, G^i are modified versions of the original graph.
# D^i and E^i are the nodes and edges of the maximal edges that are
# consistent with G^i. In this implementation, D^i and E^i are stored
# together as the graph B^i. We will have strictly more B^i than the
# paper will have.
#
# Note that the data in graphs and branchings are tuples with the graph as
# the first element and the edge index as the second.
B = nx.MultiDiGraph()
B_edge_index = {}
graphs = [] # G^i list
branchings = [] # B^i list
selected_nodes = set() # D^i bucket
uf = nx.utils.UnionFind()
# A list of lists of edge indices. Each list is a circuit for graph G^i.
# Note the edge list is not required to be a circuit in G^0.
circuits = []
# Stores the index of the minimum edge in the circuit found in G^i and B^i.
# The ordering of the edges seems to preserver the weight ordering from
# G^0. So even if the circuit does not form a circuit in G^0, it is still
# true that the minimum edges in circuit G^0 (despite their weights being
# different)
minedge_circuit = []
###########################
### Algorithm Structure ###
###########################
# Each step listed in the algorithm is an inner function. Thus, the overall
# loop structure is:
#
# while True:
# step_I1()
# if cycle detected:
# step_I2()
# elif every node of G is in D and E is a branching:
# break
##################################
### Algorithm Helper Functions ###
##################################
def edmonds_find_desired_edge(v):
"""
Find the edge directed towards v with maximal weight.
If an edge partition exists in this graph, return the included
edge if it exists and never return any excluded edge.
Note: There can only be one included edge for each vertex; otherwise
the edge partition is empty.
Parameters
----------
v : node
The node to search for the maximal weight incoming edge.
"""
edge = None
max_weight = -INF
for u, _, key, data in G.in_edges(v, data=True, keys=True):
# Skip excluded edges
if data.get(partition) == nx.EdgePartition.EXCLUDED:
continue
new_weight = data[attr]
# Return the included edge
if data.get(partition) == nx.EdgePartition.INCLUDED:
max_weight = new_weight
edge = (u, v, key, new_weight, data)
break
# Find the best open edge
if new_weight > max_weight:
max_weight = new_weight
edge = (u, v, key, new_weight, data)
return edge, max_weight
def edmonds_step_I2(v, desired_edge, level):
"""
Perform step I2 from Edmonds' paper
First, check if the last step I1 created a cycle. If it did not, do nothing.
If it did, store the cycle for later reference and contract it.
Parameters
----------
v : node
The current node to consider
desired_edge : edge
The minimum desired edge to remove from the cycle.
level : int
The current level, i.e. the number of cycles that have already been removed.
"""
u = desired_edge[0]
Q_nodes = nx.shortest_path(B, v, u)
Q_edges = [
list(B[Q_nodes[i]][vv].keys())[0] for i, vv in enumerate(Q_nodes[1:])
]
Q_edges.append(desired_edge[2]) # Add the new edge key to complete the circuit
# Get the edge in the circuit with the minimum weight.
# Also, save the incoming weights for each node.
minweight = INF
minedge = None
Q_incoming_weight = {}
for edge_key in Q_edges:
u, v, data = B_edge_index[edge_key]
w = data[attr]
# We cannot remove an included edge, even if it is the
# minimum edge in the circuit
Q_incoming_weight[v] = w
if data.get(partition) == nx.EdgePartition.INCLUDED:
continue
if w < minweight:
minweight = w
minedge = edge_key
circuits.append(Q_edges)
minedge_circuit.append(minedge)
graphs.append((G.copy(), G_edge_index.copy()))
branchings.append((B.copy(), B_edge_index.copy()))
# Mutate the graph to contract the circuit
new_node = new_node_base_name + str(level)
G.add_node(new_node)
new_edges = []
for u, v, key, data in G.edges(data=True, keys=True):
if u in Q_incoming_weight:
if v in Q_incoming_weight:
# Circuit edge. For the moment do nothing,
# eventually it will be removed.
continue
else:
# Outgoing edge from a node in the circuit.
# Make it come from the new node instead
dd = data.copy()
new_edges.append((new_node, v, key, dd))
else:
if v in Q_incoming_weight:
# Incoming edge to the circuit.
# Update its weight
w = data[attr]
w += minweight - Q_incoming_weight[v]
dd = data.copy()
dd[attr] = w
new_edges.append((u, new_node, key, dd))
else:
# Outside edge. No modification needed
continue
for node in Q_nodes:
edmonds_remove_node(G, G_edge_index, node)
edmonds_remove_node(B, B_edge_index, node)
selected_nodes.difference_update(set(Q_nodes))
for u, v, key, data in new_edges:
edmonds_add_edge(G, G_edge_index, u, v, key, **data)
if candidate_attr in data:
del data[candidate_attr]
edmonds_add_edge(B, B_edge_index, u, v, key, **data)
uf.union(u, v)
def is_root(G, u, edgekeys):
"""
Returns True if `u` is a root node in G.
Node `u` is a root node if its in-degree over the specified edges is zero.
Parameters
----------
G : Graph
The current graph.
u : node
The node in `G` to check if it is a root.
edgekeys : iterable of edges
The edges over which to check whether `u` is a root.
"""
if u not in G:
raise Exception(f"{u!r} not in G")
for v in G.pred[u]:
for edgekey in G.pred[u][v]:
if edgekey in edgekeys:
return False, edgekey
else:
return True, None
nodes = iter(list(G.nodes))
while True:
try:
v = next(nodes)
except StopIteration:
# If there are no more new nodes to consider, then we should
# meet stopping condition (b) from the paper:
# (b) every node of G^i is in D^i and E^i is a branching
assert len(G) == len(B)
if len(B):
assert is_branching(B)
graphs.append((G.copy(), G_edge_index.copy()))
branchings.append((B.copy(), B_edge_index.copy()))
circuits.append([])
minedge_circuit.append(None)
break
else:
#####################
### BEGIN STEP I1 ###
#####################
# This is a very simple step, so I don't think it needs a method of its own
if v in selected_nodes:
continue
selected_nodes.add(v)
B.add_node(v)
desired_edge, desired_edge_weight = edmonds_find_desired_edge(v)
# There might be no desired edge if all edges are excluded or
# v is the last node to be added to B, the ultimate root of the branching
if desired_edge is not None and desired_edge_weight > 0:
u = desired_edge[0]
# Flag whether adding the edge will create a circuit, before merging
# the connected components of u and v in B
circuit = uf[u] == uf[v]
dd = {attr: desired_edge_weight}
if desired_edge[4].get(partition) is not None:
dd[partition] = desired_edge[4].get(partition)
edmonds_add_edge(B, B_edge_index, u, v, desired_edge[2], **dd)
G[u][v][desired_edge[2]][candidate_attr] = True
uf.union(u, v)
###################
### END STEP I1 ###
###################
#####################
### BEGIN STEP I2 ###
#####################
if circuit:
edmonds_step_I2(v, desired_edge, level)
nodes = iter(list(G.nodes()))
level += 1
###################
### END STEP I2 ###
###################
#####################
### BEGIN STEP I3 ###
#####################
# Create a new graph of the same class as the input graph
H = G_original.__class__()
# Start with the branching edges in the last level.
edges = set(branchings[level][1])
while level > 0:
level -= 1
# The current level is i, and we start counting from 0.
#
# We need the node at level i+1 that results from merging a circuit
# at level i. basename_0 is the first merged node and this happens
# at level 1. That is basename_0 is a node at level 1 that results
# from merging a circuit at level 0.
merged_node = new_node_base_name + str(level)
circuit = circuits[level]
isroot, edgekey = is_root(graphs[level + 1][0], merged_node, edges)
edges.update(circuit)
if isroot:
minedge = minedge_circuit[level]
if minedge is None:
raise Exception
# Remove the edge in the cycle with minimum weight
edges.remove(minedge)
else:
# We have identified an edge at the next higher level that
# transitions into the merged node at this level. That edge
# transitions to some corresponding node at the current level.
#
# We want to remove an edge from the cycle that transitions
# into the corresponding node, otherwise the result would not
# be a branching.
G, G_edge_index = graphs[level]
target = G_edge_index[edgekey][1]
for edgekey in circuit:
u, v, data = G_edge_index[edgekey]
if v == target:
break
else:
raise Exception("Couldn't find edge incoming to merged node.")
edges.remove(edgekey)
H.add_nodes_from(G_original)
for edgekey in edges:
u, v, d = graphs[0][1][edgekey]
dd = {attr: d[attr]}
if preserve_attrs:
for key, value in d.items():
if key not in [attr, candidate_attr]:
dd[key] = value
H.add_edge(u, v, **dd)
###################
### END STEP I3 ###
###################
return H
| (G, attr='weight', default=1, preserve_attrs=False, partition=None, *, backend=None, **backend_kwargs) |
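A small sketch of the arborescence construction described above; the four-node digraph and its weights are illustrative only:

```python
import networkx as nx

# The cheapest in-edges here form the arborescence 0 -> 2 -> 1 -> 3
# with total weight 4; note the expensive edge (0, 1, 5) is skipped.
G = nx.DiGraph()
G.add_weighted_edges_from([(0, 1, 5), (0, 2, 1), (2, 1, 1), (1, 3, 2), (2, 3, 4)])

B = nx.minimum_spanning_arborescence(G)
total = sum(w for _, _, w in B.edges(data="weight"))
print(sorted(B.edges()), total)  # [(0, 2), (1, 3), (2, 1)] 4
```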
30,910 | networkx.algorithms.tree.mst | minimum_spanning_edges | Generate edges in a minimum spanning forest of an undirected
weighted graph.
A minimum spanning tree is a subgraph of the graph (a tree)
with the minimum sum of edge weights. A spanning forest is a
union of the spanning trees for each connected component of the graph.
Parameters
----------
G : undirected Graph
An undirected graph. If `G` is connected, then the algorithm finds a
spanning tree. Otherwise, a spanning forest is found.
algorithm : string
The algorithm to use when finding a minimum spanning tree. Valid
choices are 'kruskal', 'prim', or 'boruvka'. The default is 'kruskal'.
weight : string
Edge data key to use for weight (default 'weight').
keys : bool
Whether to yield edge key in multigraphs in addition to the edge.
If `G` is not a multigraph, this is ignored.
data : bool, optional
If True yield the edge data along with the edge.
ignore_nan : bool (default: False)
If a NaN is found as an edge weight normally an exception is raised.
If `ignore_nan is True` then that edge is ignored instead.
Returns
-------
edges : iterator
An iterator over edges in a minimum spanning tree of `G`.
Edges connecting nodes `u` and `v` are represented as tuples:
`(u, v, k, d)` or `(u, v, k)` or `(u, v, d)` or `(u, v)`
If `G` is a multigraph, `keys` indicates whether the edge key `k` will
be reported in the third position in the edge tuple. `data` indicates
whether the edge datadict `d` will appear at the end of the edge tuple.
If `G` is not a multigraph, the tuples are `(u, v, d)` if `data` is True
or `(u, v)` if `data` is False.
Examples
--------
>>> from networkx.algorithms import tree
Find minimum spanning edges by Kruskal's algorithm
>>> G = nx.cycle_graph(4)
>>> G.add_edge(0, 3, weight=2)
>>> mst = tree.minimum_spanning_edges(G, algorithm="kruskal", data=False)
>>> edgelist = list(mst)
>>> sorted(sorted(e) for e in edgelist)
[[0, 1], [1, 2], [2, 3]]
Find minimum spanning edges by Prim's algorithm
>>> G = nx.cycle_graph(4)
>>> G.add_edge(0, 3, weight=2)
>>> mst = tree.minimum_spanning_edges(G, algorithm="prim", data=False)
>>> edgelist = list(mst)
>>> sorted(sorted(e) for e in edgelist)
[[0, 1], [1, 2], [2, 3]]
Notes
-----
For Borůvka's algorithm, each edge must have a weight attribute, and
each edge weight must be distinct.
For the other algorithms, if the graph edges do not have a weight
attribute a default weight of 1 will be used.
Modified code from David Eppstein, April 2006
http://www.ics.uci.edu/~eppstein/PADS/
| def random_spanning_tree(G, weight=None, *, multiplicative=True, seed=None):
"""
Sample a random spanning tree using the edges weights of `G`.
This function supports two different methods for determining the
probability of the graph. If ``multiplicative=True``, the probability
is based on the product of edge weights, and if ``multiplicative=False``
it is based on the sum of the edge weights. However, since it is
easier to determine the total weight of all spanning trees for the
multiplicative version, that is significantly faster and should be used if
possible. Additionally, setting `weight` to `None` will cause a spanning tree
to be selected with uniform probability.
The function uses algorithm A8 in [1]_ .
Parameters
----------
G : nx.Graph
An undirected version of the original graph.
weight : string
The edge key for the edge attribute holding edge weight.
multiplicative : bool, default=True
If `True`, the probability of each tree is the product of its edge weight
over the sum of the product of all the spanning trees in the graph. If
`False`, the probability is the sum of its edge weight over the sum of
the sum of weights for all spanning trees in the graph.
seed : integer, random_state, or None (default)
Indicator of random number generation state.
See :ref:`Randomness<randomness>`.
Returns
-------
nx.Graph
A spanning tree using the distribution defined by the weight of the tree.
References
----------
.. [1] V. Kulkarni, Generating random combinatorial objects, Journal of
Algorithms, 11 (1990), pp. 185–207
"""
def find_node(merged_nodes, node):
"""
We can think of clusters of contracted nodes as having one
representative in the graph. Each node which is not in merged_nodes
is still its own representative. Since a representative can be later
contracted, we need to recursively search through the dict to find
the final representative, but once we know it we can use path
compression to speed up the access of the representative for next time.
This cannot be replaced by the standard NetworkX union_find since that
data structure merges the node representing fewer nodes into the one
representing more, but this function requires that we merge them in
the order in which contract_edges contracts them.
Parameters
----------
merged_nodes : dict
The dict storing the mapping from node to representative
node
The node whose representative we seek
Returns
-------
The representative of the `node`
"""
if node not in merged_nodes:
return node
else:
rep = find_node(merged_nodes, merged_nodes[node])
merged_nodes[node] = rep
return rep
def prepare_graph():
"""
For the graph `G`, remove all edges not in the set `V` and then
contract all edges in the set `U`.
Returns
-------
A copy of `G` which has had all edges not in `V` removed and all edges
in `U` contracted.
"""
# The result is a MultiGraph version of G so that parallel edges are
# allowed during edge contraction
result = nx.MultiGraph(incoming_graph_data=G)
# Remove all edges not in V
edges_to_remove = set(result.edges()).difference(V)
result.remove_edges_from(edges_to_remove)
# Contract all edges in U
#
# Imagine that you have two edges to contract and they share an
# endpoint like this:
# [0] ----- [1] ----- [2]
# If we contract (0, 1) first, the contraction function will always
# delete the second node it is passed so the resulting graph would be
# [0] ----- [2]
# and edge (1, 2) no longer exists but (0, 2) would need to be contracted
# in its place now. That is why I use the below dict as a merge-find
# data structure with path compression to track how the nodes are merged.
merged_nodes = {}
for u, v in U:
u_rep = find_node(merged_nodes, u)
v_rep = find_node(merged_nodes, v)
# We cannot contract a node with itself
if u_rep == v_rep:
continue
nx.contracted_nodes(result, u_rep, v_rep, self_loops=False, copy=False)
merged_nodes[v_rep] = u_rep
return merged_nodes, result
def spanning_tree_total_weight(G, weight):
"""
Find the sum of weights of the spanning trees of `G` using the
appropriate `method`.
This is easy if the chosen method is 'multiplicative', since we can
use Kirchhoff's Tree Matrix Theorem directly. However, with the
'additive' method, this process is slightly more complex and less
computationally efficient as we have to find the number of spanning
trees which contain each possible edge in the graph.
Parameters
----------
G : NetworkX Graph
The graph to find the total weight of all spanning trees on.
weight : string
The key for the weight edge attribute of the graph.
Returns
-------
float
The sum of either the multiplicative or additive weight for all
spanning trees in the graph.
"""
if multiplicative:
return nx.total_spanning_tree_weight(G, weight)
else:
# There are two cases for the total spanning tree additive weight.
# 1. There is one edge in the graph. Then the only spanning tree is
# that edge itself, which will have a total weight of that edge
# itself.
if G.number_of_edges() == 1:
return next(iter(G.edges(data=weight)))[2]
# 2. There are no edges or two or more edges in the graph. Then, we find the
# total weight of the spanning trees using the formula in the
# reference paper: take the weight of each edge and multiply it by
# the number of spanning trees which include that edge. This
# can be accomplished by contracting the edge and finding the
# multiplicative total spanning tree weight if the weight of each edge
# is assumed to be 1, which is conveniently built into networkx already,
# by calling total_spanning_tree_weight with weight=None.
# Note that with no edges the returned value is just zero.
else:
total = 0
for u, v, w in G.edges(data=weight):
total += w * nx.total_spanning_tree_weight(
nx.contracted_edge(G, edge=(u, v), self_loops=False), None
)
return total
if G.number_of_nodes() < 2:
# no edges in the spanning tree
return nx.empty_graph(G.nodes)
U = set()
st_cached_value = 0
V = set(G.edges())
shuffled_edges = list(G.edges())
seed.shuffle(shuffled_edges)
for u, v in shuffled_edges:
e_weight = G[u][v][weight] if weight is not None else 1
node_map, prepared_G = prepare_graph()
G_total_tree_weight = spanning_tree_total_weight(prepared_G, weight)
# Add the edge to U so that we can compute the total tree weight
# assuming we include that edge
# Now, if (u, v) cannot exist in G because it is fully contracted out
# of existence, then it by definition cannot influence G_e's Kirchhoff
# value. But, we also cannot pick it.
rep_edge = (find_node(node_map, u), find_node(node_map, v))
# Check to see if the 'representative edge' for the current edge is
# in prepared_G. If so, then we can pick it.
if rep_edge in prepared_G.edges:
prepared_G_e = nx.contracted_edge(
prepared_G, edge=rep_edge, self_loops=False
)
G_e_total_tree_weight = spanning_tree_total_weight(prepared_G_e, weight)
if multiplicative:
threshold = e_weight * G_e_total_tree_weight / G_total_tree_weight
else:
numerator = (
st_cached_value + e_weight
) * nx.total_spanning_tree_weight(prepared_G_e) + G_e_total_tree_weight
denominator = (
st_cached_value * nx.total_spanning_tree_weight(prepared_G)
+ G_total_tree_weight
)
threshold = numerator / denominator
else:
threshold = 0.0
z = seed.uniform(0.0, 1.0)
if z > threshold:
# Remove the edge from V since we did not pick it.
V.remove((u, v))
else:
# Add the edge to U since we picked it.
st_cached_value += e_weight
U.add((u, v))
# If we decide to keep an edge, it may complete the spanning tree.
if len(U) == G.number_of_nodes() - 1:
spanning_tree = nx.Graph()
spanning_tree.add_edges_from(U)
return spanning_tree
raise Exception(f"Something went wrong! Only {len(U)} edges in the spanning tree!")
| (G, algorithm='kruskal', weight='weight', keys=True, data=True, ignore_nan=False, *, backend=None, **backend_kwargs) |
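Sketching the docstring above: Kruskal and Prim agree on the (here unique) minimum spanning tree of a small weighted graph; the graph and weights are illustrative:

```python
import networkx as nx

# A triangle with a pendant edge; the unique MST drops the heaviest edge (0, 1).
G = nx.Graph()
G.add_weighted_edges_from([(0, 1, 4), (1, 2, 1), (2, 0, 3), (2, 3, 2)])

kruskal = sorted(sorted(e) for e in nx.minimum_spanning_edges(G, algorithm="kruskal", data=False))
prim = sorted(sorted(e) for e in nx.minimum_spanning_edges(G, algorithm="prim", data=False))
print(kruskal)  # [[0, 2], [1, 2], [2, 3]]
```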
30,911 | networkx.algorithms.tree.mst | minimum_spanning_tree | Returns a minimum spanning tree or forest on an undirected graph `G`.
Parameters
----------
G : undirected graph
An undirected graph. If `G` is connected, then the algorithm finds a
spanning tree. Otherwise, a spanning forest is found.
weight : str
Data key to use for edge weights.
algorithm : string
The algorithm to use when finding a minimum spanning tree. Valid
choices are 'kruskal', 'prim', or 'boruvka'. The default is
'kruskal'.
ignore_nan : bool (default: False)
If a NaN is found as an edge weight normally an exception is raised.
If `ignore_nan is True` then that edge is ignored instead.
Returns
-------
G : NetworkX Graph
A minimum spanning tree or forest.
Examples
--------
>>> G = nx.cycle_graph(4)
>>> G.add_edge(0, 3, weight=2)
>>> T = nx.minimum_spanning_tree(G)
>>> sorted(T.edges(data=True))
[(0, 1, {}), (1, 2, {}), (2, 3, {})]
Notes
-----
For Borůvka's algorithm, each edge must have a weight attribute, and
each edge weight must be distinct.
For the other algorithms, if the graph edges do not have a weight
attribute a default weight of 1 will be used.
There may be more than one tree with the same minimum or maximum weight.
See :mod:`networkx.tree.recognition` for more detailed definitions.
Isolated nodes with self-loops are in the tree as edgeless isolated nodes.
| def random_spanning_tree(G, weight=None, *, multiplicative=True, seed=None):
"""
Sample a random spanning tree using the edges weights of `G`.
This function supports two different methods for determining the
probability of the graph. If ``multiplicative=True``, the probability
is based on the product of edge weights, and if ``multiplicative=False``
it is based on the sum of the edge weights. However, since it is
easier to determine the total weight of all spanning trees for the
multiplicative version, that is significantly faster and should be used if
possible. Additionally, setting `weight` to `None` will cause a spanning tree
to be selected with uniform probability.
The function uses algorithm A8 in [1]_ .
Parameters
----------
G : nx.Graph
An undirected version of the original graph.
weight : string
The edge key for the edge attribute holding edge weight.
multiplicative : bool, default=True
If `True`, the probability of each tree is the product of its edge weight
over the sum of the product of all the spanning trees in the graph. If
`False`, the probability is the sum of its edge weight over the sum of
the sum of weights for all spanning trees in the graph.
seed : integer, random_state, or None (default)
Indicator of random number generation state.
See :ref:`Randomness<randomness>`.
Returns
-------
nx.Graph
A spanning tree using the distribution defined by the weight of the tree.
References
----------
.. [1] V. Kulkarni, Generating random combinatorial objects, Journal of
Algorithms, 11 (1990), pp. 185–207
"""
def find_node(merged_nodes, node):
"""
We can think of clusters of contracted nodes as having one
representative in the graph. Each node which is not in merged_nodes
is still its own representative. Since a representative can be later
contracted, we need to recursively search through the dict to find
the final representative, but once we know it we can use path
compression to speed up the access of the representative for next time.
This cannot be replaced by the standard NetworkX union_find since that
data structure merges the cluster with fewer represented nodes into the
one with more, but this function requires that we merge them in the
order in which contract_edges contracts them.
Parameters
----------
merged_nodes : dict
The dict storing the mapping from node to representative
node
The node whose representative we seek
Returns
-------
The representative of the `node`
"""
if node not in merged_nodes:
return node
else:
rep = find_node(merged_nodes, merged_nodes[node])
merged_nodes[node] = rep
return rep
def prepare_graph():
"""
For the graph `G`, remove all edges not in the set `V` and then
contract all edges in the set `U`.
Returns
-------
A copy of `G` which has had all edges not in `V` removed and all edges
in `U` contracted.
"""
# The result is a MultiGraph version of G so that parallel edges are
# allowed during edge contraction
result = nx.MultiGraph(incoming_graph_data=G)
# Remove all edges not in V
edges_to_remove = set(result.edges()).difference(V)
result.remove_edges_from(edges_to_remove)
# Contract all edges in U
#
# Imagine that you have two edges to contract and they share an
# endpoint like this:
# [0] ----- [1] ----- [2]
# If we contract (0, 1) first, the contraction function will always
# delete the second node it is passed so the resulting graph would be
# [0] ----- [2]
# and edge (1, 2) no longer exists but (0, 2) would need to be contracted
# in its place now. That is why I use the below dict as a merge-find
# data structure with path compression to track how the nodes are merged.
merged_nodes = {}
for u, v in U:
u_rep = find_node(merged_nodes, u)
v_rep = find_node(merged_nodes, v)
# We cannot contract a node with itself
if u_rep == v_rep:
continue
nx.contracted_nodes(result, u_rep, v_rep, self_loops=False, copy=False)
merged_nodes[v_rep] = u_rep
return merged_nodes, result
def spanning_tree_total_weight(G, weight):
"""
Find the sum of weights of the spanning trees of `G` using the
appropriate `method`.
This is easy if the chosen method is 'multiplicative', since we can
use Kirchhoff's Tree Matrix Theorem directly. However, with the
'additive' method, this process is slightly more complex and less
computationally efficient as we have to find the number of spanning
trees which contain each possible edge in the graph.
Parameters
----------
G : NetworkX Graph
The graph to find the total weight of all spanning trees on.
weight : string
The key for the weight edge attribute of the graph.
Returns
-------
float
The sum of either the multiplicative or additive weight for all
spanning trees in the graph.
"""
if multiplicative:
return nx.total_spanning_tree_weight(G, weight)
else:
# There are two cases for the total spanning tree additive weight.
# 1. There is one edge in the graph. Then the only spanning tree is
# that edge itself, which will have a total weight of that edge
# itself.
if G.number_of_edges() == 1:
return G.edges(data=weight).__iter__().__next__()[2]
# 2. There are no edges or two or more edges in the graph. Then, we find the
# total weight of the spanning trees using the formula in the
# reference paper: take the weight of each edge and multiply it by
# the number of spanning trees which include that edge. This
# can be accomplished by contracting the edge and finding the
# multiplicative total spanning tree weight if the weight of each edge
# is assumed to be 1, which is conveniently built into networkx already,
# by calling total_spanning_tree_weight with weight=None.
# Note that with no edges the returned value is just zero.
else:
total = 0
for u, v, w in G.edges(data=weight):
total += w * nx.total_spanning_tree_weight(
nx.contracted_edge(G, edge=(u, v), self_loops=False), None
)
return total
if G.number_of_nodes() < 2:
# no edges in the spanning tree
return nx.empty_graph(G.nodes)
U = set()
st_cached_value = 0
V = set(G.edges())
shuffled_edges = list(G.edges())
seed.shuffle(shuffled_edges)
for u, v in shuffled_edges:
e_weight = G[u][v][weight] if weight is not None else 1
node_map, prepared_G = prepare_graph()
G_total_tree_weight = spanning_tree_total_weight(prepared_G, weight)
# Add the edge to U so that we can compute the total tree weight
# assuming we include that edge
# Now, if (u, v) cannot exist in G because it is fully contracted out
# of existence, then it by definition cannot influence G_e's Kirchhoff
# value. But, we also cannot pick it.
rep_edge = (find_node(node_map, u), find_node(node_map, v))
# Check to see if the 'representative edge' for the current edge is
# in prepared_G. If so, then we can pick it.
if rep_edge in prepared_G.edges:
prepared_G_e = nx.contracted_edge(
prepared_G, edge=rep_edge, self_loops=False
)
G_e_total_tree_weight = spanning_tree_total_weight(prepared_G_e, weight)
if multiplicative:
threshold = e_weight * G_e_total_tree_weight / G_total_tree_weight
else:
numerator = (
st_cached_value + e_weight
) * nx.total_spanning_tree_weight(prepared_G_e) + G_e_total_tree_weight
denominator = (
st_cached_value * nx.total_spanning_tree_weight(prepared_G)
+ G_total_tree_weight
)
threshold = numerator / denominator
else:
threshold = 0.0
z = seed.uniform(0.0, 1.0)
if z > threshold:
# Remove the edge from V since we did not pick it.
V.remove((u, v))
else:
# Add the edge to U since we picked it.
st_cached_value += e_weight
U.add((u, v))
# If we decide to keep an edge, it may complete the spanning tree.
if len(U) == G.number_of_nodes() - 1:
spanning_tree = nx.Graph()
spanning_tree.add_edges_from(U)
return spanning_tree
raise Exception(f"Something went wrong! Only {len(U)} edges in the spanning tree!")
| (G, weight='weight', algorithm='kruskal', ignore_nan=False, *, backend=None, **backend_kwargs) |
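The merge-find helper with path compression described in `find_node` above can be exercised standalone; this sketch restates the recursive lookup from the listing:

```python
def find_node(merged_nodes, node):
    """Return the final representative of `node`, compressing the path."""
    if node not in merged_nodes:
        return node
    rep = find_node(merged_nodes, merged_nodes[node])
    merged_nodes[node] = rep  # path compression for faster later lookups
    return rep

# Contract (0, 1) and then (1, 2): both 1 and 2 end up represented by 0.
merged = {1: 0, 2: 1}
rep = find_node(merged, 2)
print(rep)     # 0
print(merged)  # {1: 0, 2: 0} -- the chain 2 -> 1 -> 0 was compressed
```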
30,915 | networkx.algorithms.assortativity.mixing | mixing_dict | Returns a dictionary representation of mixing matrix.
Parameters
----------
xy : list or container of two-tuples
Pairs of (x,y) items.
normalized : bool (default=False)
Return counts if False or probabilities if True.
Returns
-------
d: dictionary
Counts or Joint probability of occurrence of values in xy.
| def mixing_dict(xy, normalized=False):
"""Returns a dictionary representation of mixing matrix.
Parameters
----------
xy : list or container of two-tuples
Pairs of (x,y) items.
normalized : bool (default=False)
Return counts if False or probabilities if True.
Returns
-------
d: dictionary
Counts or Joint probability of occurrence of values in xy.
"""
d = {}
psum = 0.0
for x, y in xy:
if x not in d:
d[x] = {}
if y not in d:
d[y] = {}
v = d[x].get(y, 0)
d[x][y] = v + 1
psum += 1
if normalized:
for _, jdict in d.items():
for j in jdict:
jdict[j] /= psum
return d
| (xy, normalized=False) |
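The counting loop above can be restated as a self-contained function, using `dict.setdefault` in place of the explicit membership checks; the behavior matches the listing:

```python
def mixing_dict(xy, normalized=False):
    # Count occurrences of each (x, y) value pair; optionally normalize
    # the counts to joint probabilities.
    d = {}
    psum = 0.0
    for x, y in xy:
        d.setdefault(x, {})
        d.setdefault(y, {})
        d[x][y] = d[x].get(y, 0) + 1
        psum += 1
    if normalized:
        for jdict in d.values():
            for j in jdict:
                jdict[j] /= psum
    return d

pairs = [("a", "b"), ("a", "b"), ("b", "a")]
print(mixing_dict(pairs))  # {'a': {'b': 2}, 'b': {'a': 1}}
```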
30,916 | networkx.algorithms.cuts | mixing_expansion | Returns the mixing expansion between two node sets.
The *mixing expansion* is the quotient of the cut size and twice the
number of edges in the graph [1]_.
Parameters
----------
G : NetworkX graph
S : collection
A collection of nodes in `G`.
T : collection
A collection of nodes in `G`.
weight : object
Edge attribute key to use as weight. If not specified, edges
have weight one.
Returns
-------
number
The mixing expansion between the two sets `S` and `T`.
See also
--------
boundary_expansion
edge_expansion
node_expansion
References
----------
.. [1] Vadhan, Salil P.
"Pseudorandomness."
*Foundations and Trends
in Theoretical Computer Science* 7.1–3 (2011): 1–336.
<https://doi.org/10.1561/0400000010>
| null | (G, S, T=None, weight=None, *, backend=None, **backend_kwargs) |
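The definition above is just the cut size divided by twice the edge count; a minimal sketch of that formula on an explicit, unweighted edge list (not the networkx implementation):

```python
def mixing_expansion(edges, S, T):
    # cut(S, T): number of edges with one endpoint in S and the other in T,
    # divided by twice the total number of edges in the graph.
    S, T = set(S), set(T)
    cut = sum(
        1 for u, v in edges if (u in S and v in T) or (u in T and v in S)
    )
    return cut / (2 * len(edges))

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]  # a 4-cycle
print(mixing_expansion(edges, {0, 1}, {2, 3}))  # 2 / 8 = 0.25
```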
30,917 | networkx.algorithms.operators.product | modular_product | Returns the Modular product of G and H.
The modular product of `G` and `H` is the graph $M = G \nabla H$,
consisting of the node set $V(M) = V(G) \times V(H)$ that is the Cartesian
product of the node sets of `G` and `H`. Further, M contains an edge ((u, v), (x, y)):
- if u is adjacent to x in `G` and v is adjacent to y in `H`, or
- if u is not adjacent to x in `G` and v is not adjacent to y in `H`.
More formally::
E(M) = {((u, v), (x, y)) | ((u, x) in E(G) and (v, y) in E(H)) or
((u, x) not in E(G) and (v, y) not in E(H))}
Parameters
----------
G, H: NetworkX graphs
The graphs to take the modular product of.
Returns
-------
M: NetworkX graph
The Modular product of `G` and `H`.
Raises
------
NetworkXNotImplemented
If `G` is not a simple graph.
Examples
--------
>>> G = nx.cycle_graph(4)
>>> H = nx.path_graph(2)
>>> M = nx.modular_product(G, H)
>>> list(M)
[(0, 0), (0, 1), (1, 0), (1, 1), (2, 0), (2, 1), (3, 0), (3, 1)]
>>> print(M)
Graph with 8 nodes and 8 edges
Notes
-----
The *modular product* is defined in [1]_ and was first
introduced as the *weak modular product*.
The modular product reduces the problem of counting isomorphic subgraphs
in `G` and `H` to the problem of counting cliques in M. The subgraphs of
`G` and `H` that are induced by the nodes of a clique in M are
isomorphic [2]_ [3]_.
References
----------
.. [1] R. Hammack, W. Imrich, and S. Klavžar,
"Handbook of Product Graphs", CRC Press, 2011.
.. [2] H. G. Barrow and R. M. Burstall,
"Subgraph isomorphism, matching relational structures and maximal
cliques", Information Processing Letters, vol. 4, issue 4, pp. 83-84,
1976, https://doi.org/10.1016/0020-0190(76)90049-1.
.. [3] V. G. Vizing, "Reduction of the problem of isomorphism and isomorphic
entrance to the task of finding the nondensity of a graph." Proc. Third
All-Union Conference on Problems of Theoretical Cybernetics. 1974.
| null | (G, H, *, backend=None, **backend_kwargs) |
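The edge rule for E(M) above can be sketched directly in pure Python; on the docstring's C4/P2 example the result should have 8 nodes and 8 edges:

```python
from itertools import product

def modular_product(g_nodes, g_edges, h_nodes, h_edges):
    # Store adjacency as sets of frozensets for O(1) "is adjacent" checks.
    ge = {frozenset(e) for e in g_edges}
    he = {frozenset(e) for e in h_edges}
    nodes = list(product(g_nodes, h_nodes))
    edges = set()
    for (u, v), (x, y) in product(nodes, nodes):
        if u == x or v == y:
            continue
        g_adj = frozenset((u, x)) in ge
        h_adj = frozenset((v, y)) in he
        if g_adj == h_adj:  # both adjacent, or both non-adjacent
            edges.add(frozenset(((u, v), (x, y))))
    return nodes, edges

# C4 and P2, as in the docstring example.
nodes, edges = modular_product(
    range(4), [(0, 1), (1, 2), (2, 3), (3, 0)], range(2), [(0, 1)]
)
print(len(nodes), len(edges))  # 8 8
```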
30,918 | networkx.linalg.modularitymatrix | modularity_matrix | Returns the modularity matrix of G.
The modularity matrix is the matrix B = A - <A>, where A is the adjacency
matrix and <A> is the average adjacency matrix, assuming that the graph
is described by the configuration model.
More specifically, the element B_ij of B is defined as
.. math::
A_{ij} - {k_i k_j \over 2 m}
where k_i is the degree of node i, and where m is the number of edges
in the graph. When weight is set to the name of an edge attribute, A_ij, k_i,
k_j and m are computed using its value.
Parameters
----------
G : Graph
A NetworkX graph
nodelist : list, optional
The rows and columns are ordered according to the nodes in nodelist.
If nodelist is None, then the ordering is produced by G.nodes().
weight : string or None, optional (default=None)
The edge attribute that holds the numerical value used for
the edge weight. If None then all edge weights are 1.
Returns
-------
B : Numpy array
The modularity matrix of G.
Examples
--------
>>> k = [3, 2, 2, 1, 0]
>>> G = nx.havel_hakimi_graph(k)
>>> B = nx.modularity_matrix(G)
See Also
--------
to_numpy_array
modularity_spectrum
adjacency_matrix
directed_modularity_matrix
References
----------
.. [1] M. E. J. Newman, "Modularity and community structure in networks",
Proc. Natl. Acad. Sci. USA, vol. 103, pp. 8577-8582, 2006.
| null | (G, nodelist=None, weight=None, *, backend=None, **backend_kwargs) |
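The entry formula B_ij = A_ij - k_i k_j / (2m) can be checked without NumPy on a tiny graph; note that each row of B sums to zero, since the k_i k_j / (2m) terms in row i add up to k_i:

```python
def modularity_matrix(n, edges):
    # Build the adjacency matrix A, then B_ij = A_ij - k_i * k_j / (2m).
    A = [[0.0] * n for _ in range(n)]
    for u, v in edges:
        A[u][v] += 1.0
        A[v][u] += 1.0
    k = [sum(row) for row in A]  # degrees
    m = len(edges)
    return [
        [A[i][j] - k[i] * k[j] / (2 * m) for j in range(n)] for i in range(n)
    ]

B = modularity_matrix(3, [(0, 1), (1, 2)])  # path graph 0-1-2
print(B[0][1])  # 1 - 1*2/4 = 0.5
print(B[0][2])  # 0 - 1*1/4 = -0.25
```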
30,919 | networkx.linalg.spectrum | modularity_spectrum | Returns eigenvalues of the modularity matrix of G.
Parameters
----------
G : Graph
A NetworkX Graph or DiGraph
Returns
-------
evals : NumPy array
Eigenvalues
See Also
--------
modularity_matrix
References
----------
.. [1] M. E. J. Newman, "Modularity and community structure in networks",
Proc. Natl. Acad. Sci. USA, vol. 103, pp. 8577-8582, 2006.
| null | (G, *, backend=None, **backend_kwargs) |
30,921 | networkx.generators.small | moebius_kantor_graph |
Returns the Moebius-Kantor graph.
The Möbius-Kantor graph is the cubic symmetric graph on 16 nodes.
Its LCF notation is [5,-5]^8, and it is isomorphic to the generalized
Petersen graph [1]_.
Parameters
----------
create_using : NetworkX graph constructor, optional (default=nx.Graph)
Graph type to create. If graph instance, then cleared before populated.
Returns
-------
G : networkx Graph
Moebius-Kantor graph
References
----------
.. [1] https://en.wikipedia.org/wiki/M%C3%B6bius%E2%80%93Kantor_graph
| def sedgewick_maze_graph(create_using=None):
"""
Return a small maze with a cycle.
This is the maze used in Sedgewick, 3rd Edition, Part 5, Graph
Algorithms, Chapter 18, e.g. Figure 18.2 and following [1]_.
Nodes are numbered 0,..,7
Parameters
----------
create_using : NetworkX graph constructor, optional (default=nx.Graph)
Graph type to create. If graph instance, then cleared before populated.
Returns
-------
G : networkx Graph
Small maze with a cycle
References
----------
.. [1] Figure 18.2, Chapter 18, Graph Algorithms (3rd Ed), Sedgewick
"""
G = empty_graph(0, create_using)
G.add_nodes_from(range(8))
G.add_edges_from([[0, 2], [0, 7], [0, 5]])
G.add_edges_from([[1, 7], [2, 6]])
G.add_edges_from([[3, 4], [3, 5]])
G.add_edges_from([[4, 5], [4, 7], [4, 6]])
G.name = "Sedgewick Maze"
return G
| (create_using=None, *, backend=None, **backend_kwargs) |
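The docstring notes the graph is isomorphic to a generalized Petersen graph; assuming the standard identification with GP(8, 3), a small pure-Python construction can sanity-check the expected size (16 nodes, 24 edges, 3-regular):

```python
def generalized_petersen(n, k):
    # Outer n-cycle on u_i, inner star polygon v_i -- v_{i+k}, and spokes.
    outer = [(("u", i), ("u", (i + 1) % n)) for i in range(n)]
    inner = [(("v", i), ("v", (i + k) % n)) for i in range(n)]
    spokes = [(("u", i), ("v", i)) for i in range(n)]
    return outer + inner + spokes

edges = generalized_petersen(8, 3)  # GP(8, 3)
deg = {}
for a, b in edges:
    deg[a] = deg.get(a, 0) + 1
    deg[b] = deg.get(b, 0) + 1
print(len(deg), len(edges), set(deg.values()))  # 16 24 {3}
```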
30,923 | networkx.algorithms.moral | moral_graph | Return the Moral Graph
Returns the moralized graph of a given directed graph.
Parameters
----------
G : NetworkX graph
Directed graph
Returns
-------
H : NetworkX graph
The undirected moralized graph of G
Raises
------
NetworkXNotImplemented
If `G` is undirected.
Examples
--------
>>> G = nx.DiGraph([(1, 2), (2, 3), (2, 5), (3, 4), (4, 3)])
>>> G_moral = nx.moral_graph(G)
>>> G_moral.edges()
EdgeView([(1, 2), (2, 3), (2, 5), (2, 4), (3, 4)])
Notes
-----
A moral graph is an undirected graph H = (V, E) generated from a
directed Graph, where if a node has more than one parent node, edges
between these parent nodes are inserted and all directed edges become
undirected.
https://en.wikipedia.org/wiki/Moral_graph
References
----------
.. [1] Wray L. Buntine. 1995. Chain graphs for learning.
In Proceedings of the Eleventh conference on Uncertainty
in artificial intelligence (UAI'95)
| null | (G, *, backend=None, **backend_kwargs) |
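The moralization procedure described in the Notes (marry co-parents, then drop edge directions) can be sketched over a plain edge list; on the docstring's example DAG it reproduces the listed edge set:

```python
from itertools import combinations

def moral_graph(directed_edges):
    # Collect the parents of each node, and the undirected edge set.
    parents = {}
    und = set()
    for u, v in directed_edges:
        parents.setdefault(v, set()).add(u)
        und.add(frozenset((u, v)))
    # "Marry" the parents: connect every pair of co-parents.
    for ps in parents.values():
        for a, b in combinations(ps, 2):
            und.add(frozenset((a, b)))
    return und

# Same DAG as the docstring example.
E = moral_graph([(1, 2), (2, 3), (2, 5), (3, 4), (4, 3)])
print(sorted(tuple(sorted(e)) for e in E))
# [(1, 2), (2, 3), (2, 4), (2, 5), (3, 4)]
```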
30,924 | networkx.algorithms.shortest_paths.weighted | multi_source_dijkstra | Find shortest weighted paths and lengths from a given set of
source nodes.
Uses Dijkstra's algorithm to compute the shortest paths and lengths
between one of the source nodes and the given `target`, or all other
reachable nodes if not specified, for a weighted graph.
Parameters
----------
G : NetworkX graph
sources : non-empty set of nodes
Starting nodes for paths. If this is just a set containing a
single node, then all paths computed by this function will start
from that node. If there are two or more nodes in the set, the
computed paths may begin from any one of the start nodes.
target : node label, optional
Ending node for path
cutoff : integer or float, optional
Length (sum of edge weights) at which the search is stopped.
If cutoff is provided, only return paths with summed weight <= cutoff.
weight : string or function
If this is a string, then edge weights will be accessed via the
edge attribute with this key (that is, the weight of the edge
joining `u` to `v` will be ``G.edges[u, v][weight]``). If no
such edge attribute exists, the weight of the edge is assumed to
be one.
If this is a function, the weight of an edge is the value
returned by the function. The function must accept exactly three
positional arguments: the two endpoints of an edge and the
dictionary of edge attributes for that edge. The function must
return a number or None to indicate a hidden edge.
Returns
-------
distance, path : pair of dictionaries, or numeric and list
If target is None, returns a tuple of two dictionaries keyed by node.
The first dictionary stores distance from one of the source nodes.
The second stores the path from one of the sources to that node.
If target is not None, returns a tuple of (distance, path) where
distance is the distance from source to target and path is a list
representing the path from source to target.
Examples
--------
>>> G = nx.path_graph(5)
>>> length, path = nx.multi_source_dijkstra(G, {0, 4})
>>> for node in [0, 1, 2, 3, 4]:
... print(f"{node}: {length[node]}")
0: 0
1: 1
2: 2
3: 1
4: 0
>>> path[1]
[0, 1]
>>> path[3]
[4, 3]
>>> length, path = nx.multi_source_dijkstra(G, {0, 4}, 1)
>>> length
1
>>> path
[0, 1]
Notes
-----
Edge weight attributes must be numerical.
Distances are calculated as sums of weighted edges traversed.
The weight function can be used to hide edges by returning None.
So ``weight = lambda u, v, d: 1 if d['color']=="red" else None``
will find the shortest red path.
Based on the Python cookbook recipe (119466) at
https://code.activestate.com/recipes/119466/
This algorithm is not guaranteed to work if edge weights
are negative or are floating point numbers
(overflows and roundoff errors can cause problems).
Raises
------
ValueError
If `sources` is empty.
NodeNotFound
If any of `sources` is not in `G`.
See Also
--------
multi_source_dijkstra_path
multi_source_dijkstra_path_length
| def _dijkstra_multisource(
G, sources, weight, pred=None, paths=None, cutoff=None, target=None
):
"""Uses Dijkstra's algorithm to find shortest weighted paths
Parameters
----------
G : NetworkX graph
sources : non-empty iterable of nodes
Starting nodes for paths. If this is just an iterable containing
a single node, then all paths computed by this function will
start from that node. If there are two or more nodes in this
iterable, the computed paths may begin from any one of the start
nodes.
weight: function
Function with (u, v, data) input that returns that edge's weight
or None to indicate a hidden edge
pred: dict of lists, optional(default=None)
dict to store a list of predecessors keyed by that node
If None, predecessors are not stored.
paths: dict, optional (default=None)
dict to store the path list from source to each node, keyed by node.
If None, paths are not stored.
target : node label, optional
Ending node for path. Search is halted when target is found.
cutoff : integer or float, optional
Length (sum of edge weights) at which the search is stopped.
If cutoff is provided, only return paths with summed weight <= cutoff.
Returns
-------
distance : dictionary
A mapping from node to shortest distance to that node from one
of the source nodes.
Raises
------
NodeNotFound
If any of `sources` is not in `G`.
Notes
-----
The optional predecessor and path dictionaries can be accessed by
the caller through the original pred and paths objects passed
as arguments. No need to explicitly return pred or paths.
"""
G_succ = G._adj # For speed-up (and works for both directed and undirected graphs)
push = heappush
pop = heappop
dist = {} # dictionary of final distances
seen = {}
# fringe is heapq with 3-tuples (distance,c,node)
# use the count c to avoid comparing nodes (may not be able to)
c = count()
fringe = []
for source in sources:
seen[source] = 0
push(fringe, (0, next(c), source))
while fringe:
(d, _, v) = pop(fringe)
if v in dist:
continue # already searched this node.
dist[v] = d
if v == target:
break
for u, e in G_succ[v].items():
cost = weight(v, u, e)
if cost is None:
continue
vu_dist = dist[v] + cost
if cutoff is not None:
if vu_dist > cutoff:
continue
if u in dist:
u_dist = dist[u]
if vu_dist < u_dist:
raise ValueError("Contradictory paths found:", "negative weights?")
elif pred is not None and vu_dist == u_dist:
pred[u].append(v)
elif u not in seen or vu_dist < seen[u]:
seen[u] = vu_dist
push(fringe, (vu_dist, next(c), u))
if paths is not None:
paths[u] = paths[v] + [u]
if pred is not None:
pred[u] = [v]
elif vu_dist == seen[u]:
if pred is not None:
pred[u].append(v)
# The optional predecessor and path dictionaries can be accessed
# by the caller via the pred and paths objects passed as arguments.
return dist
| (G, sources, target=None, cutoff=None, weight='weight', *, backend=None, **backend_kwargs) |
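A stripped-down sketch of the `_dijkstra_multisource` listing above (no `cutoff`, `pred`, or `target` handling) shows the multi-source idea: seed the heap with every source at distance 0 and run ordinary Dijkstra, here on the docstring's path-graph example:

```python
from heapq import heappush, heappop
from itertools import count

def multi_source_dijkstra(adj, sources):
    # adj: {node: {neighbor: weight}}; every source starts at distance 0.
    dist = {}
    paths = {s: [s] for s in sources}
    seen = {}
    c = count()  # tie-breaker so the heap never compares nodes
    fringe = []
    for s in sources:
        seen[s] = 0
        heappush(fringe, (0, next(c), s))
    while fringe:
        d, _, v = heappop(fringe)
        if v in dist:
            continue  # already finalized
        dist[v] = d
        for u, w in adj[v].items():
            vu = d + w
            if u not in seen or vu < seen[u]:
                seen[u] = vu
                paths[u] = paths[v] + [u]
                heappush(fringe, (vu, next(c), u))
    return dist, paths

# Path graph 0-1-2-3-4 with unit weights, sources {0, 4}.
adj = {0: {1: 1}, 1: {0: 1, 2: 1}, 2: {1: 1, 3: 1}, 3: {2: 1, 4: 1}, 4: {3: 1}}
dist, paths = multi_source_dijkstra(adj, {0, 4})
print([dist[n] for n in range(5)])  # [0, 1, 2, 1, 0]
```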
30,925 | networkx.algorithms.shortest_paths.weighted | multi_source_dijkstra_path | Find shortest weighted paths in G from a given set of source
nodes.
Compute shortest path between any of the source nodes and all other
reachable nodes for a weighted graph.
Parameters
----------
G : NetworkX graph
sources : non-empty set of nodes
Starting nodes for paths. If this is just a set containing a
single node, then all paths computed by this function will start
from that node. If there are two or more nodes in the set, the
computed paths may begin from any one of the start nodes.
cutoff : integer or float, optional
Length (sum of edge weights) at which the search is stopped.
If cutoff is provided, only return paths with summed weight <= cutoff.
weight : string or function
If this is a string, then edge weights will be accessed via the
edge attribute with this key (that is, the weight of the edge
joining `u` to `v` will be ``G.edges[u, v][weight]``). If no
such edge attribute exists, the weight of the edge is assumed to
be one.
If this is a function, the weight of an edge is the value
returned by the function. The function must accept exactly three
positional arguments: the two endpoints of an edge and the
dictionary of edge attributes for that edge. The function must
return a number or None to indicate a hidden edge.
Returns
-------
paths : dictionary
Dictionary of shortest paths keyed by target.
Examples
--------
>>> G = nx.path_graph(5)
>>> path = nx.multi_source_dijkstra_path(G, {0, 4})
>>> path[1]
[0, 1]
>>> path[3]
[4, 3]
Notes
-----
Edge weight attributes must be numerical.
Distances are calculated as sums of weighted edges traversed.
The weight function can be used to hide edges by returning None.
So ``weight = lambda u, v, d: 1 if d['color']=="red" else None``
will find the shortest red path.
Raises
------
ValueError
If `sources` is empty.
NodeNotFound
If any of `sources` is not in `G`.
See Also
--------
multi_source_dijkstra, multi_source_bellman_ford
| def _dijkstra_multisource(
G, sources, weight, pred=None, paths=None, cutoff=None, target=None
):
"""Uses Dijkstra's algorithm to find shortest weighted paths
Parameters
----------
G : NetworkX graph
sources : non-empty iterable of nodes
Starting nodes for paths. If this is just an iterable containing
a single node, then all paths computed by this function will
start from that node. If there are two or more nodes in this
iterable, the computed paths may begin from any one of the start
nodes.
weight: function
Function with (u, v, data) input that returns that edge's weight
or None to indicate a hidden edge
pred: dict of lists, optional(default=None)
dict to store a list of predecessors keyed by that node
If None, predecessors are not stored.
paths: dict, optional (default=None)
dict to store the path list from source to each node, keyed by node.
If None, paths are not stored.
target : node label, optional
Ending node for path. Search is halted when target is found.
cutoff : integer or float, optional
Length (sum of edge weights) at which the search is stopped.
If cutoff is provided, only return paths with summed weight <= cutoff.
Returns
-------
distance : dictionary
A mapping from node to shortest distance to that node from one
of the source nodes.
Raises
------
NodeNotFound
If any of `sources` is not in `G`.
Notes
-----
The optional predecessor and path dictionaries can be accessed by
the caller through the original pred and paths objects passed
as arguments. No need to explicitly return pred or paths.
"""
G_succ = G._adj # For speed-up (and works for both directed and undirected graphs)
push = heappush
pop = heappop
dist = {} # dictionary of final distances
seen = {}
# fringe is heapq with 3-tuples (distance,c,node)
# use the count c to avoid comparing nodes (may not be able to)
c = count()
fringe = []
for source in sources:
seen[source] = 0
push(fringe, (0, next(c), source))
while fringe:
(d, _, v) = pop(fringe)
if v in dist:
continue # already searched this node.
dist[v] = d
if v == target:
break
for u, e in G_succ[v].items():
cost = weight(v, u, e)
if cost is None:
continue
vu_dist = dist[v] + cost
if cutoff is not None:
if vu_dist > cutoff:
continue
if u in dist:
u_dist = dist[u]
if vu_dist < u_dist:
raise ValueError("Contradictory paths found:", "negative weights?")
elif pred is not None and vu_dist == u_dist:
pred[u].append(v)
elif u not in seen or vu_dist < seen[u]:
seen[u] = vu_dist
push(fringe, (vu_dist, next(c), u))
if paths is not None:
paths[u] = paths[v] + [u]
if pred is not None:
pred[u] = [v]
elif vu_dist == seen[u]:
if pred is not None:
pred[u].append(v)
# The optional predecessor and path dictionaries can be accessed
# by the caller via the pred and paths objects passed as arguments.
return dist
| (G, sources, cutoff=None, weight='weight', *, backend=None, **backend_kwargs) |
30,926 | networkx.algorithms.shortest_paths.weighted | multi_source_dijkstra_path_length | Find shortest weighted path lengths in G from a given set of
source nodes.
Compute the shortest path length between any of the source nodes and
all other reachable nodes for a weighted graph.
Parameters
----------
G : NetworkX graph
sources : non-empty set of nodes
Starting nodes for paths. If this is just a set containing a
single node, then all paths computed by this function will start
from that node. If there are two or more nodes in the set, the
computed paths may begin from any one of the start nodes.
cutoff : integer or float, optional
Length (sum of edge weights) at which the search is stopped.
If cutoff is provided, only return paths with summed weight <= cutoff.
weight : string or function
If this is a string, then edge weights will be accessed via the
edge attribute with this key (that is, the weight of the edge
joining `u` to `v` will be ``G.edges[u, v][weight]``). If no
such edge attribute exists, the weight of the edge is assumed to
be one.
If this is a function, the weight of an edge is the value
returned by the function. The function must accept exactly three
positional arguments: the two endpoints of an edge and the
dictionary of edge attributes for that edge. The function must
return a number or None to indicate a hidden edge.
Returns
-------
length : dict
Dict keyed by node to shortest path length to nearest source.
Examples
--------
>>> G = nx.path_graph(5)
>>> length = nx.multi_source_dijkstra_path_length(G, {0, 4})
>>> for node in [0, 1, 2, 3, 4]:
... print(f"{node}: {length[node]}")
0: 0
1: 1
2: 2
3: 1
4: 0
Notes
-----
Edge weight attributes must be numerical.
Distances are calculated as sums of weighted edges traversed.
The weight function can be used to hide edges by returning None.
So ``weight = lambda u, v, d: 1 if d['color']=="red" else None``
will find the shortest red path.
Raises
------
ValueError
If `sources` is empty.
NodeNotFound
If any of `sources` is not in `G`.
See Also
--------
multi_source_dijkstra
| def _dijkstra_multisource(
G, sources, weight, pred=None, paths=None, cutoff=None, target=None
):
"""Uses Dijkstra's algorithm to find shortest weighted paths
Parameters
----------
G : NetworkX graph
sources : non-empty iterable of nodes
Starting nodes for paths. If this is just an iterable containing
a single node, then all paths computed by this function will
start from that node. If there are two or more nodes in this
iterable, the computed paths may begin from any one of the start
nodes.
weight: function
Function with (u, v, data) input that returns that edge's weight
or None to indicate a hidden edge
pred: dict of lists, optional(default=None)
dict to store a list of predecessors keyed by that node
If None, predecessors are not stored.
paths: dict, optional (default=None)
dict to store the path list from source to each node, keyed by node.
If None, paths are not stored.
target : node label, optional
Ending node for path. Search is halted when target is found.
cutoff : integer or float, optional
Length (sum of edge weights) at which the search is stopped.
If cutoff is provided, only return paths with summed weight <= cutoff.
Returns
-------
distance : dictionary
A mapping from node to shortest distance to that node from one
of the source nodes.
Raises
------
NodeNotFound
If any of `sources` is not in `G`.
Notes
-----
The optional predecessor and path dictionaries can be accessed by
the caller through the original pred and paths objects passed
as arguments. No need to explicitly return pred or paths.
"""
G_succ = G._adj # For speed-up (and works for both directed and undirected graphs)
push = heappush
pop = heappop
dist = {} # dictionary of final distances
seen = {}
# fringe is heapq with 3-tuples (distance,c,node)
# use the count c to avoid comparing nodes (may not be able to)
c = count()
fringe = []
for source in sources:
seen[source] = 0
push(fringe, (0, next(c), source))
while fringe:
(d, _, v) = pop(fringe)
if v in dist:
continue # already searched this node.
dist[v] = d
if v == target:
break
for u, e in G_succ[v].items():
cost = weight(v, u, e)
if cost is None:
continue
vu_dist = dist[v] + cost
if cutoff is not None:
if vu_dist > cutoff:
continue
if u in dist:
u_dist = dist[u]
if vu_dist < u_dist:
raise ValueError("Contradictory paths found:", "negative weights?")
elif pred is not None and vu_dist == u_dist:
pred[u].append(v)
elif u not in seen or vu_dist < seen[u]:
seen[u] = vu_dist
push(fringe, (vu_dist, next(c), u))
if paths is not None:
paths[u] = paths[v] + [u]
if pred is not None:
pred[u] = [v]
elif vu_dist == seen[u]:
if pred is not None:
pred[u].append(v)
# The optional predecessor and path dictionaries can be accessed
# by the caller via the pred and paths objects passed as arguments.
return dist
| (G, sources, cutoff=None, weight='weight', *, backend=None, **backend_kwargs) |