index | package | name | docstring | code | signature
---|---|---|---|---|---
30,270 | networkx.algorithms.tree.coding | NotATree | Raised when a function expects a tree (that is, a connected
undirected graph with no cycles) but gets a non-tree graph as input
instead.
| class NotATree(nx.NetworkXException):
"""Raised when a function expects a tree (that is, a connected
undirected graph with no cycles) but gets a non-tree graph as input
instead.
"""
| null |
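As a small runnable sketch of where this exception surfaces (assuming `nx.to_nested_tuple`, one encoder in this module that requires tree input):

```python
import networkx as nx

# A 3-cycle is connected but contains a cycle, so it is not a tree.
G = nx.cycle_graph(3)
try:
    nx.to_nested_tuple(G, 0)  # tree encoding rejects non-tree input
except nx.NetworkXException as err:
    print(type(err).__name__)  # NotATree subclasses NetworkXException
```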
30,271 | networkx.algorithms.planarity | PlanarEmbedding | Represents a planar graph with its planar embedding.
The planar embedding is given by a `combinatorial embedding
<https://en.wikipedia.org/wiki/Graph_embedding#Combinatorial_embedding>`_.
.. note:: `check_planarity` is the preferred way to check if a graph is planar.
**Neighbor ordering:**
In comparison to a usual graph structure, the embedding also stores the
order of all neighbors for every vertex.
The order of the neighbors can be given in clockwise (cw) direction or
counterclockwise (ccw) direction. This order is stored as edge attributes
in the underlying directed graph. For the edge (u, v) the edge attribute
'cw' is set to the neighbor of u that follows immediately after v in
clockwise direction.
In order for a PlanarEmbedding to be valid it must fulfill multiple
conditions. It is possible to check if these conditions are fulfilled with
the method :meth:`check_structure`.
The conditions are:
* Edges must go in both directions (because the edge attributes differ)
* Every edge must have 'cw' and 'ccw' attributes that correspond to a
correct planar embedding.
As long as a PlanarEmbedding is invalid only the following methods should
be called:
* :meth:`add_half_edge`
* :meth:`connect_components`
Even though the graph is a subclass of nx.DiGraph, it can still be used
for algorithms that require undirected graphs, because the method
:meth:`is_directed` is overridden. This is possible, because a valid
PlanarEmbedding must have edges in both directions.
**Half edges:**
In methods like `add_half_edge` the term "half-edge" is used, which is
a term that is used in `doubly connected edge lists
<https://en.wikipedia.org/wiki/Doubly_connected_edge_list>`_. It is used
to emphasize that the edge is only in one direction and there exists
another half-edge in the opposite direction.
While conventional edges always have two faces (including outer face) next
to them, it is possible to assign each half-edge *exactly one* face.
For a half-edge (u, v) oriented such that u is below v, the face that
belongs to (u, v) is to the right of this half-edge.
See Also
--------
is_planar :
Preferred way to check if an existing graph is planar.
check_planarity :
A convenient way to create a `PlanarEmbedding`. If not planar,
it returns a subgraph that shows this.
Examples
--------
Create an embedding of a star graph (compare `nx.star_graph(3)`):
>>> G = nx.PlanarEmbedding()
>>> G.add_half_edge(0, 1)
>>> G.add_half_edge(0, 2, ccw=1)
>>> G.add_half_edge(0, 3, ccw=2)
>>> G.add_half_edge(1, 0)
>>> G.add_half_edge(2, 0)
>>> G.add_half_edge(3, 0)
Alternatively, the same embedding can be defined using clockwise
references. The following results in exactly the same PlanarEmbedding:
>>> G = nx.PlanarEmbedding()
>>> G.add_half_edge(0, 1)
>>> G.add_half_edge(0, 3, cw=1)
>>> G.add_half_edge(0, 2, cw=3)
>>> G.add_half_edge(1, 0)
>>> G.add_half_edge(2, 0)
>>> G.add_half_edge(3, 0)
After creating a graph, it is possible to validate that the PlanarEmbedding
object is correct:
>>> G.check_structure()
| class PlanarEmbedding(nx.DiGraph):
"""Represents a planar graph with its planar embedding.
The planar embedding is given by a `combinatorial embedding
<https://en.wikipedia.org/wiki/Graph_embedding#Combinatorial_embedding>`_.
.. note:: `check_planarity` is the preferred way to check if a graph is planar.
**Neighbor ordering:**
In comparison to a usual graph structure, the embedding also stores the
order of all neighbors for every vertex.
The order of the neighbors can be given in clockwise (cw) direction or
counterclockwise (ccw) direction. This order is stored as edge attributes
in the underlying directed graph. For the edge (u, v) the edge attribute
'cw' is set to the neighbor of u that follows immediately after v in
clockwise direction.
In order for a PlanarEmbedding to be valid it must fulfill multiple
conditions. It is possible to check if these conditions are fulfilled with
the method :meth:`check_structure`.
The conditions are:
* Edges must go in both directions (because the edge attributes differ)
* Every edge must have 'cw' and 'ccw' attributes that correspond to a
correct planar embedding.
As long as a PlanarEmbedding is invalid only the following methods should
be called:
* :meth:`add_half_edge`
* :meth:`connect_components`
Even though the graph is a subclass of nx.DiGraph, it can still be used
for algorithms that require undirected graphs, because the method
:meth:`is_directed` is overridden. This is possible, because a valid
PlanarEmbedding must have edges in both directions.
**Half edges:**
In methods like `add_half_edge` the term "half-edge" is used, which is
a term that is used in `doubly connected edge lists
<https://en.wikipedia.org/wiki/Doubly_connected_edge_list>`_. It is used
to emphasize that the edge is only in one direction and there exists
another half-edge in the opposite direction.
While conventional edges always have two faces (including outer face) next
to them, it is possible to assign each half-edge *exactly one* face.
For a half-edge (u, v) oriented such that u is below v, the face that
belongs to (u, v) is to the right of this half-edge.
See Also
--------
is_planar :
Preferred way to check if an existing graph is planar.
check_planarity :
A convenient way to create a `PlanarEmbedding`. If not planar,
it returns a subgraph that shows this.
Examples
--------
Create an embedding of a star graph (compare `nx.star_graph(3)`):
>>> G = nx.PlanarEmbedding()
>>> G.add_half_edge(0, 1)
>>> G.add_half_edge(0, 2, ccw=1)
>>> G.add_half_edge(0, 3, ccw=2)
>>> G.add_half_edge(1, 0)
>>> G.add_half_edge(2, 0)
>>> G.add_half_edge(3, 0)
Alternatively, the same embedding can be defined using clockwise
references. The following results in exactly the same PlanarEmbedding:
>>> G = nx.PlanarEmbedding()
>>> G.add_half_edge(0, 1)
>>> G.add_half_edge(0, 3, cw=1)
>>> G.add_half_edge(0, 2, cw=3)
>>> G.add_half_edge(1, 0)
>>> G.add_half_edge(2, 0)
>>> G.add_half_edge(3, 0)
After creating a graph, it is possible to validate that the PlanarEmbedding
object is correct:
>>> G.check_structure()
"""
def __init__(self, incoming_graph_data=None, **attr):
super().__init__(incoming_graph_data=incoming_graph_data, **attr)
self.add_edge = self.__forbidden
self.add_edges_from = self.__forbidden
self.add_weighted_edges_from = self.__forbidden
def __forbidden(self, *args, **kwargs):
"""Forbidden operation
Any edge additions to a PlanarEmbedding should be done using
method `add_half_edge`.
"""
raise NotImplementedError(
"Use `add_half_edge` method to add edges to a PlanarEmbedding."
)
def get_data(self):
"""Converts the adjacency structure into a better readable structure.
Returns
-------
embedding : dict
A dict mapping all nodes to a list of neighbors sorted in
clockwise order.
See Also
--------
set_data
"""
embedding = {}
for v in self:
embedding[v] = list(self.neighbors_cw_order(v))
return embedding
def set_data(self, data):
"""Inserts edges according to given sorted neighbor list.
The input format is the same as the output format of get_data().
Parameters
----------
data : dict
A dict mapping all nodes to a list of neighbors sorted in
clockwise order.
See Also
--------
get_data
"""
for v in data:
ref = None
for w in reversed(data[v]):
self.add_half_edge(v, w, cw=ref)
ref = w
def remove_node(self, n):
"""Remove node n.
Removes the node n and all adjacent edges, updating the
PlanarEmbedding to account for any resulting edge removal.
Attempting to remove a non-existent node will raise an exception.
Parameters
----------
n : node
A node in the graph
Raises
------
NetworkXError
If n is not in the graph.
See Also
--------
remove_nodes_from
"""
try:
for u in self._pred[n]:
succs_u = self._succ[u]
un_cw = succs_u[n]["cw"]
un_ccw = succs_u[n]["ccw"]
del succs_u[n]
del self._pred[u][n]
if n != un_cw:
succs_u[un_cw]["ccw"] = un_ccw
succs_u[un_ccw]["cw"] = un_cw
del self._node[n]
del self._succ[n]
del self._pred[n]
except KeyError as err: # NetworkXError if n not in self
raise nx.NetworkXError(
f"The node {n} is not in the planar embedding."
) from err
nx._clear_cache(self)
def remove_nodes_from(self, nodes):
"""Remove multiple nodes.
Parameters
----------
nodes : iterable container
A container of nodes (list, dict, set, etc.). If a node
in the container is not in the graph it is silently ignored.
See Also
--------
remove_node
Notes
-----
When removing nodes from an iterator over the graph you are changing,
a `RuntimeError` will be raised with message:
`RuntimeError: dictionary changed size during iteration`. This
happens when the graph's underlying dictionary is modified during
iteration. To avoid this error, evaluate the iterator into a separate
object, e.g. by using `list(iterator_of_nodes)`, and pass this
object to `G.remove_nodes_from`.
"""
for n in nodes:
if n in self._node:
self.remove_node(n)
# silently skip non-existing nodes
def neighbors_cw_order(self, v):
"""Generator for the neighbors of v in clockwise order.
Parameters
----------
v : node
Yields
------
node
"""
succs = self._succ[v]
if not succs:
# v has no neighbors
return
start_node = next(reversed(succs))
yield start_node
current_node = succs[start_node]["cw"]
while start_node != current_node:
yield current_node
current_node = succs[current_node]["cw"]
def add_half_edge(self, start_node, end_node, *, cw=None, ccw=None):
"""Adds a half-edge from `start_node` to `end_node`.
If the half-edge is not the first one out of `start_node`, a reference
node must be provided either in the clockwise (parameter `cw`) or in
the counterclockwise (parameter `ccw`) direction. Only one of `cw`/`ccw`
can be specified (or neither in the case of the first edge).
Note that specifying a reference in the clockwise (`cw`) direction means
inserting the new edge in the first counterclockwise position with
respect to the reference (and vice-versa).
Parameters
----------
start_node : node
Start node of inserted edge.
end_node : node
End node of inserted edge.
cw, ccw: node
End node of reference edge.
Omit or pass `None` if adding the first out-half-edge of `start_node`.
Raises
------
NetworkXException
If the `cw` or `ccw` node is not a successor of `start_node`.
If `start_node` has successors, but neither `cw` nor `ccw` is provided.
If both `cw` and `ccw` are specified.
See Also
--------
connect_components
"""
succs = self._succ.get(start_node)
if succs:
# there is already some edge out of start_node
leftmost_nbr = next(reversed(self._succ[start_node]))
if cw is not None:
if cw not in succs:
raise nx.NetworkXError("Invalid clockwise reference node.")
if ccw is not None:
raise nx.NetworkXError("Only one of cw/ccw can be specified.")
ref_ccw = succs[cw]["ccw"]
super().add_edge(start_node, end_node, cw=cw, ccw=ref_ccw)
succs[ref_ccw]["cw"] = end_node
succs[cw]["ccw"] = end_node
# when (cw == leftmost_nbr), the newly added neighbor is
# already at the end of dict self._succ[start_node] and
# takes the place of the former leftmost_nbr
move_leftmost_nbr_to_end = cw != leftmost_nbr
elif ccw is not None:
if ccw not in succs:
raise nx.NetworkXError("Invalid counterclockwise reference node.")
ref_cw = succs[ccw]["cw"]
super().add_edge(start_node, end_node, cw=ref_cw, ccw=ccw)
succs[ref_cw]["ccw"] = end_node
succs[ccw]["cw"] = end_node
move_leftmost_nbr_to_end = True
else:
raise nx.NetworkXError(
"Node already has out-half-edge(s), either cw or ccw reference node required."
)
if move_leftmost_nbr_to_end:
# LRPlanarity (via self.add_half_edge_first()) requires that
# we keep track of the leftmost neighbor, which we accomplish
# by keeping it as the last key in dict self._succ[start_node]
succs[leftmost_nbr] = succs.pop(leftmost_nbr)
else:
if cw is not None or ccw is not None:
raise nx.NetworkXError("Invalid reference node.")
# adding the first edge out of start_node
super().add_edge(start_node, end_node, ccw=end_node, cw=end_node)
def check_structure(self):
"""Runs without exceptions if this object is valid.
Checks that the following properties are fulfilled:
* Edges go in both directions (because the edge attributes differ).
* Every edge has 'cw' and 'ccw' attributes that correspond to a
correct planar embedding.
Running this method successfully verifies that the underlying graph is planar.
Raises
------
NetworkXException
This exception is raised with a short explanation if the
PlanarEmbedding is invalid.
"""
# Check fundamental structure
for v in self:
try:
sorted_nbrs = set(self.neighbors_cw_order(v))
except KeyError as err:
msg = f"Bad embedding. Missing orientation for a neighbor of {v}"
raise nx.NetworkXException(msg) from err
unsorted_nbrs = set(self[v])
if sorted_nbrs != unsorted_nbrs:
msg = "Bad embedding. Edge orientations not set correctly."
raise nx.NetworkXException(msg)
for w in self[v]:
# Check if opposite half-edge exists
if not self.has_edge(w, v):
msg = "Bad embedding. Opposite half-edge is missing."
raise nx.NetworkXException(msg)
# Check planarity
counted_half_edges = set()
for component in nx.connected_components(self):
if len(component) == 1:
# Don't need to check single node component
continue
num_nodes = len(component)
num_half_edges = 0
num_faces = 0
for v in component:
for w in self.neighbors_cw_order(v):
num_half_edges += 1
if (v, w) not in counted_half_edges:
# We encountered a new face
num_faces += 1
# Mark all half-edges belonging to this face
self.traverse_face(v, w, counted_half_edges)
num_edges = num_half_edges // 2 # num_half_edges is even
if num_nodes - num_edges + num_faces != 2:
# The result does not match Euler's formula
msg = "Bad embedding. The graph does not match Euler's formula"
raise nx.NetworkXException(msg)
def add_half_edge_ccw(self, start_node, end_node, reference_neighbor):
"""Adds a half-edge from start_node to end_node.
The half-edge is added counterclockwise next to the existing half-edge
(start_node, reference_neighbor).
Parameters
----------
start_node : node
Start node of inserted edge.
end_node : node
End node of inserted edge.
reference_neighbor: node
End node of reference edge.
Raises
------
NetworkXException
If the reference_neighbor does not exist.
See Also
--------
add_half_edge
add_half_edge_cw
connect_components
"""
self.add_half_edge(start_node, end_node, cw=reference_neighbor)
def add_half_edge_cw(self, start_node, end_node, reference_neighbor):
"""Adds a half-edge from start_node to end_node.
The half-edge is added clockwise next to the existing half-edge
(start_node, reference_neighbor).
Parameters
----------
start_node : node
Start node of inserted edge.
end_node : node
End node of inserted edge.
reference_neighbor: node
End node of reference edge.
Raises
------
NetworkXException
If the reference_neighbor does not exist.
See Also
--------
add_half_edge
add_half_edge_ccw
connect_components
"""
self.add_half_edge(start_node, end_node, ccw=reference_neighbor)
def remove_edge(self, u, v):
"""Remove the edge between u and v.
Parameters
----------
u, v : nodes
Remove the half-edges (u, v) and (v, u) and update the
edge ordering around the removed edge.
Raises
------
NetworkXError
If there is not an edge between u and v.
See Also
--------
remove_edges_from : remove a collection of edges
"""
try:
succs_u = self._succ[u]
succs_v = self._succ[v]
uv_cw = succs_u[v]["cw"]
uv_ccw = succs_u[v]["ccw"]
vu_cw = succs_v[u]["cw"]
vu_ccw = succs_v[u]["ccw"]
del succs_u[v]
del self._pred[v][u]
del succs_v[u]
del self._pred[u][v]
if v != uv_cw:
succs_u[uv_cw]["ccw"] = uv_ccw
succs_u[uv_ccw]["cw"] = uv_cw
if u != vu_cw:
succs_v[vu_cw]["ccw"] = vu_ccw
succs_v[vu_ccw]["cw"] = vu_cw
except KeyError as err:
raise nx.NetworkXError(
f"The edge {u}-{v} is not in the planar embedding."
) from err
nx._clear_cache(self)
def remove_edges_from(self, ebunch):
"""Remove all edges specified in ebunch.
Parameters
----------
ebunch: list or container of edge tuples
Each pair of half-edges between the nodes given in the tuples
will be removed from the graph. The nodes can be passed as:
- 2-tuples (u, v) half-edges (u, v) and (v, u).
- 3-tuples (u, v, k) where k is ignored.
See Also
--------
remove_edge : remove a single edge
Notes
-----
Will fail silently if an edge in ebunch is not in the graph.
Examples
--------
>>> G = nx.path_graph(4) # or DiGraph, MultiGraph, MultiDiGraph, etc
>>> ebunch = [(1, 2), (2, 3)]
>>> G.remove_edges_from(ebunch)
"""
for e in ebunch:
u, v = e[:2] # ignore edge data
# assuming that the PlanarEmbedding is valid, if the half_edge
# (u, v) is in the graph, then so is half_edge (v, u)
if u in self._succ and v in self._succ[u]:
self.remove_edge(u, v)
def connect_components(self, v, w):
"""Adds half-edges for (v, w) and (w, v) at some position.
This method should only be called if v and w are in different
components, or it might break the embedding.
In particular, once `connect_components(v, w)` has been called,
`connect_components(w, v)` must not be called afterwards; the first
call already sets the neighbor orientations correctly in both
directions.
Parameters
----------
v : node
w : node
See Also
--------
add_half_edge
"""
if v in self._succ and self._succ[v]:
ref = next(reversed(self._succ[v]))
else:
ref = None
self.add_half_edge(v, w, cw=ref)
if w in self._succ and self._succ[w]:
ref = next(reversed(self._succ[w]))
else:
ref = None
self.add_half_edge(w, v, cw=ref)
def add_half_edge_first(self, start_node, end_node):
"""Add a half-edge and set end_node as start_node's leftmost neighbor.
The new edge is inserted counterclockwise with respect to the current
leftmost neighbor, if there is one.
Parameters
----------
start_node : node
end_node : node
See Also
--------
add_half_edge
connect_components
"""
succs = self._succ.get(start_node)
# the leftmost neighbor is the last entry in the
# self._succ[start_node] dict
leftmost_nbr = next(reversed(succs)) if succs else None
self.add_half_edge(start_node, end_node, cw=leftmost_nbr)
def next_face_half_edge(self, v, w):
"""Returns the following half-edge left of a face.
Parameters
----------
v : node
w : node
Returns
-------
half-edge : tuple
"""
new_node = self[w][v]["ccw"]
return w, new_node
def traverse_face(self, v, w, mark_half_edges=None):
"""Returns nodes on the face that belong to the half-edge (v, w).
The face that is traversed lies to the right of the half-edge (in an
orientation where v is below w).
Optionally it is possible to pass a set to which all encountered half
edges are added. Before calling this method, this set must not include
any half-edges that belong to the face.
Parameters
----------
v : node
Start node of half-edge.
w : node
End node of half-edge.
mark_half_edges: set, optional
Set to which all encountered half-edges are added.
Returns
-------
face : list
A list of nodes that lie on this face.
"""
if mark_half_edges is None:
mark_half_edges = set()
face_nodes = [v]
mark_half_edges.add((v, w))
prev_node = v
cur_node = w
# Last half-edge is (incoming_node, v)
incoming_node = self[v][w]["cw"]
while cur_node != v or prev_node != incoming_node:
face_nodes.append(cur_node)
prev_node, cur_node = self.next_face_half_edge(prev_node, cur_node)
if (prev_node, cur_node) in mark_half_edges:
raise nx.NetworkXException("Bad planar embedding. Impossible face.")
mark_half_edges.add((prev_node, cur_node))
return face_nodes
def is_directed(self):
"""A valid PlanarEmbedding is undirected.
All reverse edges are contained, i.e. for every existing
half-edge (v, w) the half-edge in the opposite direction (w, v) is also
contained.
"""
return False
def copy(self, as_view=False):
if as_view is True:
return nx.graphviews.generic_graph_view(self)
G = self.__class__()
G.graph.update(self.graph)
G.add_nodes_from((n, d.copy()) for n, d in self._node.items())
super(self.__class__, G).add_edges_from(
(u, v, datadict.copy())
for u, nbrs in self._adj.items()
for v, datadict in nbrs.items()
)
return G
| (incoming_graph_data=None, **attr) |
30,272 | networkx.algorithms.planarity | __forbidden | Forbidden operation
Any edge additions to a PlanarEmbedding should be done using
method `add_half_edge`.
| def __forbidden(self, *args, **kwargs):
"""Forbidden operation
Any edge additions to a PlanarEmbedding should be done using
method `add_half_edge`.
"""
raise NotImplementedError(
"Use `add_half_edge` method to add edges to a PlanarEmbedding."
)
| (self, *args, **kwargs) |
30,275 | networkx.algorithms.planarity | __init__ | null | def __init__(self, incoming_graph_data=None, **attr):
super().__init__(incoming_graph_data=incoming_graph_data, **attr)
self.add_edge = self.__forbidden
self.add_edges_from = self.__forbidden
self.add_weighted_edges_from = self.__forbidden
| (self, incoming_graph_data=None, **attr) |
30,281 | networkx.algorithms.planarity | add_half_edge | Adds a half-edge from `start_node` to `end_node`.
If the half-edge is not the first one out of `start_node`, a reference
node must be provided either in the clockwise (parameter `cw`) or in
the counterclockwise (parameter `ccw`) direction. Only one of `cw`/`ccw`
can be specified (or neither in the case of the first edge).
Note that specifying a reference in the clockwise (`cw`) direction means
inserting the new edge in the first counterclockwise position with
respect to the reference (and vice-versa).
Parameters
----------
start_node : node
Start node of inserted edge.
end_node : node
End node of inserted edge.
cw, ccw: node
End node of reference edge.
Omit or pass `None` if adding the first out-half-edge of `start_node`.
Raises
------
NetworkXException
If the `cw` or `ccw` node is not a successor of `start_node`.
If `start_node` has successors, but neither `cw` nor `ccw` is provided.
If both `cw` and `ccw` are specified.
See Also
--------
connect_components
| def add_half_edge(self, start_node, end_node, *, cw=None, ccw=None):
"""Adds a half-edge from `start_node` to `end_node`.
If the half-edge is not the first one out of `start_node`, a reference
node must be provided either in the clockwise (parameter `cw`) or in
the counterclockwise (parameter `ccw`) direction. Only one of `cw`/`ccw`
can be specified (or neither in the case of the first edge).
Note that specifying a reference in the clockwise (`cw`) direction means
inserting the new edge in the first counterclockwise position with
respect to the reference (and vice-versa).
Parameters
----------
start_node : node
Start node of inserted edge.
end_node : node
End node of inserted edge.
cw, ccw: node
End node of reference edge.
Omit or pass `None` if adding the first out-half-edge of `start_node`.
Raises
------
NetworkXException
If the `cw` or `ccw` node is not a successor of `start_node`.
If `start_node` has successors, but neither `cw` nor `ccw` is provided.
If both `cw` and `ccw` are specified.
See Also
--------
connect_components
"""
succs = self._succ.get(start_node)
if succs:
# there is already some edge out of start_node
leftmost_nbr = next(reversed(self._succ[start_node]))
if cw is not None:
if cw not in succs:
raise nx.NetworkXError("Invalid clockwise reference node.")
if ccw is not None:
raise nx.NetworkXError("Only one of cw/ccw can be specified.")
ref_ccw = succs[cw]["ccw"]
super().add_edge(start_node, end_node, cw=cw, ccw=ref_ccw)
succs[ref_ccw]["cw"] = end_node
succs[cw]["ccw"] = end_node
# when (cw == leftmost_nbr), the newly added neighbor is
# already at the end of dict self._succ[start_node] and
# takes the place of the former leftmost_nbr
move_leftmost_nbr_to_end = cw != leftmost_nbr
elif ccw is not None:
if ccw not in succs:
raise nx.NetworkXError("Invalid counterclockwise reference node.")
ref_cw = succs[ccw]["cw"]
super().add_edge(start_node, end_node, cw=ref_cw, ccw=ccw)
succs[ref_cw]["ccw"] = end_node
succs[ccw]["cw"] = end_node
move_leftmost_nbr_to_end = True
else:
raise nx.NetworkXError(
"Node already has out-half-edge(s), either cw or ccw reference node required."
)
if move_leftmost_nbr_to_end:
# LRPlanarity (via self.add_half_edge_first()) requires that
# we keep track of the leftmost neighbor, which we accomplish
# by keeping it as the last key in dict self._succ[start_node]
succs[leftmost_nbr] = succs.pop(leftmost_nbr)
else:
if cw is not None or ccw is not None:
raise nx.NetworkXError("Invalid reference node.")
# adding the first edge out of start_node
super().add_edge(start_node, end_node, ccw=end_node, cw=end_node)
| (self, start_node, end_node, *, cw=None, ccw=None) |
30,282 | networkx.algorithms.planarity | add_half_edge_ccw | Adds a half-edge from start_node to end_node.
The half-edge is added counterclockwise next to the existing half-edge
(start_node, reference_neighbor).
Parameters
----------
start_node : node
Start node of inserted edge.
end_node : node
End node of inserted edge.
reference_neighbor: node
End node of reference edge.
Raises
------
NetworkXException
If the reference_neighbor does not exist.
See Also
--------
add_half_edge
add_half_edge_cw
connect_components
| def add_half_edge_ccw(self, start_node, end_node, reference_neighbor):
"""Adds a half-edge from start_node to end_node.
The half-edge is added counterclockwise next to the existing half-edge
(start_node, reference_neighbor).
Parameters
----------
start_node : node
Start node of inserted edge.
end_node : node
End node of inserted edge.
reference_neighbor: node
End node of reference edge.
Raises
------
NetworkXException
If the reference_neighbor does not exist.
See Also
--------
add_half_edge
add_half_edge_cw
connect_components
"""
self.add_half_edge(start_node, end_node, cw=reference_neighbor)
| (self, start_node, end_node, reference_neighbor) |
30,283 | networkx.algorithms.planarity | add_half_edge_cw | Adds a half-edge from start_node to end_node.
The half-edge is added clockwise next to the existing half-edge
(start_node, reference_neighbor).
Parameters
----------
start_node : node
Start node of inserted edge.
end_node : node
End node of inserted edge.
reference_neighbor: node
End node of reference edge.
Raises
------
NetworkXException
If the reference_neighbor does not exist.
See Also
--------
add_half_edge
add_half_edge_ccw
connect_components
| def add_half_edge_cw(self, start_node, end_node, reference_neighbor):
"""Adds a half-edge from start_node to end_node.
The half-edge is added clockwise next to the existing half-edge
(start_node, reference_neighbor).
Parameters
----------
start_node : node
Start node of inserted edge.
end_node : node
End node of inserted edge.
reference_neighbor: node
End node of reference edge.
Raises
------
NetworkXException
If the reference_neighbor does not exist.
See Also
--------
add_half_edge
add_half_edge_ccw
connect_components
"""
self.add_half_edge(start_node, end_node, ccw=reference_neighbor)
| (self, start_node, end_node, reference_neighbor) |
30,284 | networkx.algorithms.planarity | add_half_edge_first | Add a half-edge and set end_node as start_node's leftmost neighbor.
The new edge is inserted counterclockwise with respect to the current
leftmost neighbor, if there is one.
Parameters
----------
start_node : node
end_node : node
See Also
--------
add_half_edge
connect_components
| def add_half_edge_first(self, start_node, end_node):
"""Add a half-edge and set end_node as start_node's leftmost neighbor.
The new edge is inserted counterclockwise with respect to the current
leftmost neighbor, if there is one.
Parameters
----------
start_node : node
end_node : node
See Also
--------
add_half_edge
connect_components
"""
succs = self._succ.get(start_node)
# the leftmost neighbor is the last entry in the
# self._succ[start_node] dict
leftmost_nbr = next(reversed(succs)) if succs else None
self.add_half_edge(start_node, end_node, cw=leftmost_nbr)
| (self, start_node, end_node) |
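A short sketch; the second call makes its end node the new leftmost neighbor, so it appears first in clockwise order:

```python
import networkx as nx

G = nx.PlanarEmbedding()
G.add_half_edge_first(0, 1)
G.add_half_edge_first(0, 2)  # 2 becomes the new leftmost neighbor of 0
print(list(G.neighbors_cw_order(0)))  # → [2, 1]
```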
30,289 | networkx.algorithms.planarity | check_structure | Runs without exceptions if this object is valid.
Checks that the following properties are fulfilled:
* Edges go in both directions (because the edge attributes differ).
* Every edge has 'cw' and 'ccw' attributes that correspond to a
correct planar embedding.
Running this method successfully verifies that the underlying graph is planar.
Raises
------
NetworkXException
This exception is raised with a short explanation if the
PlanarEmbedding is invalid.
| def check_structure(self):
"""Runs without exceptions if this object is valid.
Checks that the following properties are fulfilled:
* Edges go in both directions (because the edge attributes differ).
* Every edge has 'cw' and 'ccw' attributes that correspond to a
correct planar embedding.
Running this method successfully verifies that the underlying graph is planar.
Raises
------
NetworkXException
This exception is raised with a short explanation if the
PlanarEmbedding is invalid.
"""
# Check fundamental structure
for v in self:
try:
sorted_nbrs = set(self.neighbors_cw_order(v))
except KeyError as err:
msg = f"Bad embedding. Missing orientation for a neighbor of {v}"
raise nx.NetworkXException(msg) from err
unsorted_nbrs = set(self[v])
if sorted_nbrs != unsorted_nbrs:
msg = "Bad embedding. Edge orientations not set correctly."
raise nx.NetworkXException(msg)
for w in self[v]:
# Check if opposite half-edge exists
if not self.has_edge(w, v):
msg = "Bad embedding. Opposite half-edge is missing."
raise nx.NetworkXException(msg)
# Check planarity
counted_half_edges = set()
for component in nx.connected_components(self):
if len(component) == 1:
# Don't need to check single node component
continue
num_nodes = len(component)
num_half_edges = 0
num_faces = 0
for v in component:
for w in self.neighbors_cw_order(v):
num_half_edges += 1
if (v, w) not in counted_half_edges:
# We encountered a new face
num_faces += 1
# Mark all half-edges belonging to this face
self.traverse_face(v, w, counted_half_edges)
num_edges = num_half_edges // 2 # num_half_edges is even
if num_nodes - num_edges + num_faces != 2:
# The result does not match Euler's formula
msg = "Bad embedding. The graph does not match Euler's formula"
raise nx.NetworkXException(msg)
| (self) |
30,292 | networkx.algorithms.planarity | connect_components | Adds half-edges for (v, w) and (w, v) at some position.
This method should only be called if v and w are in different
components, or it might break the embedding.
In particular, once `connect_components(v, w)` has been called,
`connect_components(w, v)` must not be called afterwards; the first
call already sets the neighbor orientations correctly in both
directions.
Parameters
----------
v : node
w : node
See Also
--------
add_half_edge
| def connect_components(self, v, w):
"""Adds half-edges for (v, w) and (w, v) at some position.
This method should only be called if v and w are in different
components, or it might break the embedding.
In particular, once `connect_components(v, w)` has been called,
`connect_components(w, v)` must not be called afterwards; the first
call already sets the neighbor orientations correctly in both
directions.
Parameters
----------
v : node
w : node
See Also
--------
add_half_edge
"""
if v in self._succ and self._succ[v]:
ref = next(reversed(self._succ[v]))
else:
ref = None
self.add_half_edge(v, w, cw=ref)
if w in self._succ and self._succ[w]:
ref = next(reversed(self._succ[w]))
else:
ref = None
self.add_half_edge(w, v, cw=ref)
| (self, v, w) |
30,293 | networkx.algorithms.planarity | copy | null | def copy(self, as_view=False):
if as_view is True:
return nx.graphviews.generic_graph_view(self)
G = self.__class__()
G.graph.update(self.graph)
G.add_nodes_from((n, d.copy()) for n, d in self._node.items())
super(self.__class__, G).add_edges_from(
(u, v, datadict.copy())
for u, nbrs in self._adj.items()
for v, datadict in nbrs.items()
)
return G
| (self, as_view=False) |
30,295 | networkx.algorithms.planarity | get_data | Converts the adjacency structure into a more readable structure.
Returns
-------
embedding : dict
A dict mapping all nodes to a list of neighbors sorted in
clockwise order.
See Also
--------
set_data
| def get_data(self):
"""Converts the adjacency structure into a better readable structure.
Returns
-------
embedding : dict
A dict mapping all nodes to a list of neighbors sorted in
clockwise order.
See Also
--------
set_data
"""
embedding = {}
for v in self:
embedding[v] = list(self.neighbors_cw_order(v))
return embedding
| (self) |
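What `get_data()` returns can be illustrated with a small embedding; this is a sketch using K4 as an assumed example graph:

```python
import networkx as nx

# get_data() flattens the embedding into {node: cw-ordered neighbors}.
_, emb = nx.check_planarity(nx.complete_graph(4))
data = emb.get_data()

assert set(data) == {0, 1, 2, 3}
for v, nbrs in data.items():
    # every other node appears exactly once in v's rotation
    assert sorted(nbrs) == sorted(set(range(4)) - {v})
```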
30,301 | networkx.algorithms.planarity | is_directed | A valid PlanarEmbedding is undirected.
All reverse edges are contained, i.e. for every existing
half-edge (v, w) the half-edge in the opposite direction (w, v) is also
contained.
| def is_directed(self):
"""A valid PlanarEmbedding is undirected.
All reverse edges are contained, i.e. for every existing
half-edge (v, w) the half-edge in the opposite direction (w, v) is also
contained.
"""
return False
| (self) |
30,305 | networkx.algorithms.planarity | neighbors_cw_order | Generator for the neighbors of v in clockwise order.
Parameters
----------
v : node
Yields
------
node
| def neighbors_cw_order(self, v):
"""Generator for the neighbors of v in clockwise order.
Parameters
----------
v : node
Yields
------
node
"""
succs = self._succ[v]
if not succs:
# v has no neighbors
return
start_node = next(reversed(succs))
yield start_node
current_node = succs[start_node]["cw"]
while start_node != current_node:
yield current_node
current_node = succs[current_node]["cw"]
| (self, v) |
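A small sketch of the generator's behavior (the cycle graph is my own example): it follows the `cw` pointers from an arbitrary starting neighbor and yields each neighbor exactly once.

```python
import networkx as nx

_, emb = nx.check_planarity(nx.cycle_graph(4))
for v in emb:
    nbrs = list(emb.neighbors_cw_order(v))
    assert len(nbrs) == 2            # every node of a 4-cycle has degree 2
    assert set(nbrs) == set(emb[v])  # same neighbor set as the adjacency
```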
30,306 | networkx.algorithms.planarity | next_face_half_edge | Returns the following half-edge left of a face.
Parameters
----------
v : node
w : node
Returns
-------
half-edge : tuple
| def next_face_half_edge(self, v, w):
"""Returns the following half-edge left of a face.
Parameters
----------
v : node
w : node
Returns
-------
half-edge : tuple
"""
new_node = self[w][v]["ccw"]
return w, new_node
| (self, v, w) |
30,311 | networkx.algorithms.planarity | remove_edge | Remove the edge between u and v.
Parameters
----------
u, v : nodes
Remove the half-edges (u, v) and (v, u) and update the
edge ordering around the removed edge.
Raises
------
NetworkXError
If there is not an edge between u and v.
See Also
--------
remove_edges_from : remove a collection of edges
| def remove_edge(self, u, v):
"""Remove the edge between u and v.
Parameters
----------
u, v : nodes
Remove the half-edges (u, v) and (v, u) and update the
edge ordering around the removed edge.
Raises
------
NetworkXError
If there is not an edge between u and v.
See Also
--------
remove_edges_from : remove a collection of edges
"""
try:
succs_u = self._succ[u]
succs_v = self._succ[v]
uv_cw = succs_u[v]["cw"]
uv_ccw = succs_u[v]["ccw"]
vu_cw = succs_v[u]["cw"]
vu_ccw = succs_v[u]["ccw"]
del succs_u[v]
del self._pred[v][u]
del succs_v[u]
del self._pred[u][v]
if v != uv_cw:
succs_u[uv_cw]["ccw"] = uv_ccw
succs_u[uv_ccw]["cw"] = uv_cw
if u != vu_cw:
succs_v[vu_cw]["ccw"] = vu_ccw
succs_v[vu_ccw]["cw"] = vu_cw
except KeyError as err:
raise nx.NetworkXError(
f"The edge {u}-{v} is not in the planar embedding."
) from err
nx._clear_cache(self)
| (self, u, v) |
30,312 | networkx.algorithms.planarity | remove_edges_from | Remove all edges specified in ebunch.
Parameters
----------
ebunch: list or container of edge tuples
Each pair of half-edges between the nodes given in the tuples
will be removed from the graph. The nodes can be passed as:
- 2-tuples (u, v) half-edges (u, v) and (v, u).
- 3-tuples (u, v, k) where k is ignored.
See Also
--------
remove_edge : remove a single edge
Notes
-----
Will fail silently if an edge in ebunch is not in the graph.
Examples
--------
>>> G = nx.path_graph(4) # or DiGraph, MultiGraph, MultiDiGraph, etc
>>> ebunch = [(1, 2), (2, 3)]
>>> G.remove_edges_from(ebunch)
| def remove_edges_from(self, ebunch):
"""Remove all edges specified in ebunch.
Parameters
----------
ebunch: list or container of edge tuples
Each pair of half-edges between the nodes given in the tuples
will be removed from the graph. The nodes can be passed as:
- 2-tuples (u, v) half-edges (u, v) and (v, u).
- 3-tuples (u, v, k) where k is ignored.
See Also
--------
remove_edge : remove a single edge
Notes
-----
Will fail silently if an edge in ebunch is not in the graph.
Examples
--------
>>> G = nx.path_graph(4) # or DiGraph, MultiGraph, MultiDiGraph, etc
>>> ebunch = [(1, 2), (2, 3)]
>>> G.remove_edges_from(ebunch)
"""
for e in ebunch:
u, v = e[:2] # ignore edge data
# assuming that the PlanarEmbedding is valid, if the half_edge
# (u, v) is in the graph, then so is half_edge (v, u)
if u in self._succ and v in self._succ[u]:
self.remove_edge(u, v)
| (self, ebunch) |
30,313 | networkx.algorithms.planarity | remove_node | Remove node n.
Removes the node n and all adjacent edges, updating the
PlanarEmbedding to account for any resulting edge removal.
Attempting to remove a non-existent node will raise an exception.
Parameters
----------
n : node
A node in the graph
Raises
------
NetworkXError
If n is not in the graph.
See Also
--------
remove_nodes_from
| def remove_node(self, n):
"""Remove node n.
Removes the node n and all adjacent edges, updating the
PlanarEmbedding to account for any resulting edge removal.
Attempting to remove a non-existent node will raise an exception.
Parameters
----------
n : node
A node in the graph
Raises
------
NetworkXError
If n is not in the graph.
See Also
--------
remove_nodes_from
"""
try:
for u in self._pred[n]:
succs_u = self._succ[u]
un_cw = succs_u[n]["cw"]
un_ccw = succs_u[n]["ccw"]
del succs_u[n]
del self._pred[u][n]
if n != un_cw:
succs_u[un_cw]["ccw"] = un_ccw
succs_u[un_ccw]["cw"] = un_cw
del self._node[n]
del self._succ[n]
del self._pred[n]
except KeyError as err: # NetworkXError if n not in self
raise nx.NetworkXError(
f"The node {n} is not in the planar embedding."
) from err
nx._clear_cache(self)
| (self, n) |
30,314 | networkx.algorithms.planarity | remove_nodes_from | Remove multiple nodes.
Parameters
----------
nodes : iterable container
A container of nodes (list, dict, set, etc.). If a node
in the container is not in the graph it is silently ignored.
See Also
--------
remove_node
Notes
-----
When removing nodes from an iterator over the graph you are changing,
a `RuntimeError` will be raised with message:
`RuntimeError: dictionary changed size during iteration`. This
happens when the graph's underlying dictionary is modified during
iteration. To avoid this error, evaluate the iterator into a separate
object, e.g. by using `list(iterator_of_nodes)`, and pass this
object to `G.remove_nodes_from`.
| def remove_nodes_from(self, nodes):
"""Remove multiple nodes.
Parameters
----------
nodes : iterable container
A container of nodes (list, dict, set, etc.). If a node
in the container is not in the graph it is silently ignored.
See Also
--------
remove_node
Notes
-----
When removing nodes from an iterator over the graph you are changing,
a `RuntimeError` will be raised with message:
`RuntimeError: dictionary changed size during iteration`. This
happens when the graph's underlying dictionary is modified during
iteration. To avoid this error, evaluate the iterator into a separate
object, e.g. by using `list(iterator_of_nodes)`, and pass this
object to `G.remove_nodes_from`.
"""
for n in nodes:
if n in self._node:
self.remove_node(n)
# silently skip non-existing nodes
| (self, nodes) |
30,316 | networkx.algorithms.planarity | set_data | Inserts edges according to given sorted neighbor list.
The input format is the same as the output format of get_data().
Parameters
----------
data : dict
A dict mapping all nodes to a list of neighbors sorted in
clockwise order.
See Also
--------
get_data
| def set_data(self, data):
"""Inserts edges according to given sorted neighbor list.
The input format is the same as the output format of get_data().
Parameters
----------
data : dict
A dict mapping all nodes to a list of neighbors sorted in
clockwise order.
See Also
--------
get_data
"""
for v in data:
ref = None
for w in reversed(data[v]):
self.add_half_edge(v, w, cw=ref)
ref = w
| (self, data) |
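Since `set_data` consumes exactly what `get_data` produces, a round trip should reproduce the rotation system. A sketch under that assumption, with K4 as my own example:

```python
import networkx as nx

_, emb = nx.check_planarity(nx.complete_graph(4))
data = emb.get_data()

# Feed the exported clockwise neighbor lists into a fresh embedding.
emb2 = nx.PlanarEmbedding()
emb2.set_data(data)
emb2.check_structure()
assert emb2.get_data() == data  # round trip preserves the rotation
```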
30,324 | networkx.algorithms.planarity | traverse_face | Returns nodes on the face that belong to the half-edge (v, w).
The face that is traversed lies to the right of the half-edge (in an
orientation where v is below w).
Optionally it is possible to pass a set to which all encountered half
edges are added. Before calling this method, this set must not include
any half-edges that belong to the face.
Parameters
----------
v : node
Start node of half-edge.
w : node
End node of half-edge.
mark_half_edges: set, optional
Set to which all encountered half-edges are added.
Returns
-------
face : list
A list of nodes that lie on this face.
| def traverse_face(self, v, w, mark_half_edges=None):
"""Returns nodes on the face that belong to the half-edge (v, w).
The face that is traversed lies to the right of the half-edge (in an
orientation where v is below w).
Optionally it is possible to pass a set to which all encountered half
edges are added. Before calling this method, this set must not include
any half-edges that belong to the face.
Parameters
----------
v : node
Start node of half-edge.
w : node
End node of half-edge.
mark_half_edges: set, optional
Set to which all encountered half-edges are added.
Returns
-------
face : list
A list of nodes that lie on this face.
"""
if mark_half_edges is None:
mark_half_edges = set()
face_nodes = [v]
mark_half_edges.add((v, w))
prev_node = v
cur_node = w
# Last half-edge is (incoming_node, v)
incoming_node = self[v][w]["cw"]
while cur_node != v or prev_node != incoming_node:
face_nodes.append(cur_node)
prev_node, cur_node = self.next_face_half_edge(prev_node, cur_node)
if (prev_node, cur_node) in mark_half_edges:
raise nx.NetworkXException("Bad planar embedding. Impossible face.")
mark_half_edges.add((prev_node, cur_node))
return face_nodes
| (self, v, w, mark_half_edges=None) |
30,326 | networkx.exception | PowerIterationFailedConvergence | Raised when the power iteration method fails to converge within a
specified iteration limit.
`num_iterations` is the number of iterations that have been
completed when this exception was raised.
| class PowerIterationFailedConvergence(ExceededMaxIterations):
"""Raised when the power iteration method fails to converge within a
specified iteration limit.
`num_iterations` is the number of iterations that have been
completed when this exception was raised.
"""
def __init__(self, num_iterations, *args, **kw):
msg = f"power iteration failed to converge within {num_iterations} iterations"
super().__init__(msg, *args, **kw)
| (num_iterations, *args, **kw) |
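A sketch of when this exception surfaces: `eigenvector_centrality` uses power iteration, and a deliberately tiny iteration cap (my own choice of graph and cap) cannot reach the default tolerance.

```python
import networkx as nx

G = nx.path_graph(5)
try:
    nx.eigenvector_centrality(G, max_iter=1)
    converged = True
except nx.PowerIterationFailedConvergence:
    converged = False
assert converged is False  # one iteration is far from the tolerance
```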
30,327 | networkx.exception | __init__ | null | def __init__(self, num_iterations, *args, **kw):
msg = f"power iteration failed to converge within {num_iterations} iterations"
super().__init__(msg, *args, **kw)
| (self, num_iterations, *args, **kw) |
30,328 | networkx.algorithms.tree.mst | SpanningTreeIterator |
Iterate over all spanning trees of a graph in either increasing or
decreasing cost.
Notes
-----
This iterator uses the partition scheme from [1]_ (included edges,
excluded edges and open edges) as well as a modified Kruskal's Algorithm
to generate minimum spanning trees which respect the partition of edges.
For spanning trees with the same weight, ties are broken arbitrarily.
References
----------
.. [1] G.K. Janssens, K. Sörensen, An algorithm to generate all spanning
trees in order of increasing cost, Pesquisa Operacional, 2005-08,
Vol. 25 (2), p. 219-229,
https://www.scielo.br/j/pope/a/XHswBwRwJyrfL88dmMwYNWp/?lang=en
| class SpanningTreeIterator:
"""
Iterate over all spanning trees of a graph in either increasing or
decreasing cost.
Notes
-----
This iterator uses the partition scheme from [1]_ (included edges,
excluded edges and open edges) as well as a modified Kruskal's Algorithm
to generate minimum spanning trees which respect the partition of edges.
For spanning trees with the same weight, ties are broken arbitrarily.
References
----------
.. [1] G.K. Janssens, K. Sörensen, An algorithm to generate all spanning
trees in order of increasing cost, Pesquisa Operacional, 2005-08,
Vol. 25 (2), p. 219-229,
https://www.scielo.br/j/pope/a/XHswBwRwJyrfL88dmMwYNWp/?lang=en
"""
@dataclass(order=True)
class Partition:
"""
This dataclass represents a partition and stores a dict with the edge
data and the weight of the minimum spanning tree of the partition dict.
"""
mst_weight: float
partition_dict: dict = field(compare=False)
def __copy__(self):
return SpanningTreeIterator.Partition(
self.mst_weight, self.partition_dict.copy()
)
def __init__(self, G, weight="weight", minimum=True, ignore_nan=False):
"""
Initialize the iterator
Parameters
----------
G : nx.Graph
The graph over which to iterate spanning trees
weight : String, default = "weight"
The edge attribute used to store the weight of the edge
minimum : bool, default = True
Return the trees in increasing order while true and decreasing order
while false.
ignore_nan : bool, default = False
If a NaN is found as an edge weight normally an exception is raised.
If `ignore_nan is True` then that edge is ignored instead.
"""
self.G = G.copy()
self.G.__networkx_cache__ = None # Disable caching
self.weight = weight
self.minimum = minimum
self.ignore_nan = ignore_nan
# Randomly create a key for an edge attribute to hold the partition data
self.partition_key = (
"SpanningTreeIterators super secret partition attribute name"
)
def __iter__(self):
"""
Returns
-------
SpanningTreeIterator
The iterator object for this graph
"""
self.partition_queue = PriorityQueue()
self._clear_partition(self.G)
mst_weight = partition_spanning_tree(
self.G, self.minimum, self.weight, self.partition_key, self.ignore_nan
).size(weight=self.weight)
self.partition_queue.put(
self.Partition(mst_weight if self.minimum else -mst_weight, {})
)
return self
def __next__(self):
"""
Returns
-------
(multi)Graph
The spanning tree of next greatest weight, with ties broken
arbitrarily.
"""
if self.partition_queue.empty():
del self.G, self.partition_queue
raise StopIteration
partition = self.partition_queue.get()
self._write_partition(partition)
next_tree = partition_spanning_tree(
self.G, self.minimum, self.weight, self.partition_key, self.ignore_nan
)
self._partition(partition, next_tree)
self._clear_partition(next_tree)
return next_tree
def _partition(self, partition, partition_tree):
"""
Create new partitions based on the minimum spanning tree of the
current minimum partition.
Parameters
----------
partition : Partition
The Partition instance used to generate the current minimum spanning
tree.
partition_tree : nx.Graph
The minimum spanning tree of the input partition.
"""
# create two new partitions with the data from the input partition dict
p1 = self.Partition(0, partition.partition_dict.copy())
p2 = self.Partition(0, partition.partition_dict.copy())
for e in partition_tree.edges:
# determine if the edge was open or included
if e not in partition.partition_dict:
# This is an open edge
p1.partition_dict[e] = EdgePartition.EXCLUDED
p2.partition_dict[e] = EdgePartition.INCLUDED
self._write_partition(p1)
p1_mst = partition_spanning_tree(
self.G,
self.minimum,
self.weight,
self.partition_key,
self.ignore_nan,
)
p1_mst_weight = p1_mst.size(weight=self.weight)
if nx.is_connected(p1_mst):
p1.mst_weight = p1_mst_weight if self.minimum else -p1_mst_weight
self.partition_queue.put(p1.__copy__())
p1.partition_dict = p2.partition_dict.copy()
def _write_partition(self, partition):
"""
Writes the desired partition into the graph to calculate the minimum
spanning tree.
Parameters
----------
partition : Partition
A Partition dataclass describing a partition on the edges of the
graph.
"""
for u, v, d in self.G.edges(data=True):
if (u, v) in partition.partition_dict:
d[self.partition_key] = partition.partition_dict[(u, v)]
else:
d[self.partition_key] = EdgePartition.OPEN
def _clear_partition(self, G):
"""
Removes partition data from the graph
"""
for u, v, d in G.edges(data=True):
if self.partition_key in d:
del d[self.partition_key]
| (G, weight='weight', minimum=True, ignore_nan=False) |
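The iterator can be sketched on a weighted 4-cycle (my own example): every spanning tree drops exactly one of the four edges, so the trees come out in increasing total weight.

```python
import networkx as nx

G = nx.cycle_graph(4)
nx.set_edge_attributes(
    G, {(0, 1): 1, (1, 2): 2, (2, 3): 3, (3, 0): 4}, "weight"
)
weights = [t.size(weight="weight") for t in nx.SpanningTreeIterator(G)]
assert len(weights) == 4        # one spanning tree per dropped edge
assert weights == [6, 7, 8, 9]  # total weight 10 minus each edge weight
```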
30,329 | networkx.algorithms.tree.mst | __init__ |
Initialize the iterator
Parameters
----------
G : nx.Graph
The graph over which to iterate spanning trees
weight : String, default = "weight"
The edge attribute used to store the weight of the edge
minimum : bool, default = True
Return the trees in increasing order while true and decreasing order
while false.
ignore_nan : bool, default = False
If a NaN is found as an edge weight normally an exception is raised.
If `ignore_nan is True` then that edge is ignored instead.
| def __init__(self, G, weight="weight", minimum=True, ignore_nan=False):
"""
Initialize the iterator
Parameters
----------
G : nx.Graph
The graph over which to iterate spanning trees
weight : String, default = "weight"
The edge attribute used to store the weight of the edge
minimum : bool, default = True
Return the trees in increasing order while true and decreasing order
while false.
ignore_nan : bool, default = False
If a NaN is found as an edge weight normally an exception is raised.
If `ignore_nan is True` then that edge is ignored instead.
"""
self.G = G.copy()
self.G.__networkx_cache__ = None # Disable caching
self.weight = weight
self.minimum = minimum
self.ignore_nan = ignore_nan
# Randomly create a key for an edge attribute to hold the partition data
self.partition_key = (
"SpanningTreeIterators super secret partition attribute name"
)
| (self, G, weight='weight', minimum=True, ignore_nan=False) |
30,330 | networkx.algorithms.tree.mst | __iter__ |
Returns
-------
SpanningTreeIterator
The iterator object for this graph
| def __iter__(self):
"""
Returns
-------
SpanningTreeIterator
The iterator object for this graph
"""
self.partition_queue = PriorityQueue()
self._clear_partition(self.G)
mst_weight = partition_spanning_tree(
self.G, self.minimum, self.weight, self.partition_key, self.ignore_nan
).size(weight=self.weight)
self.partition_queue.put(
self.Partition(mst_weight if self.minimum else -mst_weight, {})
)
return self
| (self) |
30,331 | networkx.algorithms.tree.mst | __next__ |
Returns
-------
(multi)Graph
The spanning tree of next greatest weight, with ties broken
arbitrarily.
| def __next__(self):
"""
Returns
-------
(multi)Graph
The spanning tree of next greatest weight, with ties broken
arbitrarily.
"""
if self.partition_queue.empty():
del self.G, self.partition_queue
raise StopIteration
partition = self.partition_queue.get()
self._write_partition(partition)
next_tree = partition_spanning_tree(
self.G, self.minimum, self.weight, self.partition_key, self.ignore_nan
)
self._partition(partition, next_tree)
self._clear_partition(next_tree)
return next_tree
| (self) |
30,332 | networkx.algorithms.tree.mst | _clear_partition |
Removes partition data from the graph
| def _clear_partition(self, G):
"""
Removes partition data from the graph
"""
for u, v, d in G.edges(data=True):
if self.partition_key in d:
del d[self.partition_key]
| (self, G) |
30,333 | networkx.algorithms.tree.mst | _partition |
Create new partitions based on the minimum spanning tree of the
current minimum partition.
Parameters
----------
partition : Partition
The Partition instance used to generate the current minimum spanning
tree.
partition_tree : nx.Graph
The minimum spanning tree of the input partition.
| def _partition(self, partition, partition_tree):
"""
Create new partitions based on the minimum spanning tree of the
current minimum partition.
Parameters
----------
partition : Partition
The Partition instance used to generate the current minimum spanning
tree.
partition_tree : nx.Graph
The minimum spanning tree of the input partition.
"""
# create two new partitions with the data from the input partition dict
p1 = self.Partition(0, partition.partition_dict.copy())
p2 = self.Partition(0, partition.partition_dict.copy())
for e in partition_tree.edges:
# determine if the edge was open or included
if e not in partition.partition_dict:
# This is an open edge
p1.partition_dict[e] = EdgePartition.EXCLUDED
p2.partition_dict[e] = EdgePartition.INCLUDED
self._write_partition(p1)
p1_mst = partition_spanning_tree(
self.G,
self.minimum,
self.weight,
self.partition_key,
self.ignore_nan,
)
p1_mst_weight = p1_mst.size(weight=self.weight)
if nx.is_connected(p1_mst):
p1.mst_weight = p1_mst_weight if self.minimum else -p1_mst_weight
self.partition_queue.put(p1.__copy__())
p1.partition_dict = p2.partition_dict.copy()
| (self, partition, partition_tree) |
30,334 | networkx.algorithms.tree.mst | _write_partition |
Writes the desired partition into the graph to calculate the minimum
spanning tree.
Parameters
----------
partition : Partition
A Partition dataclass describing a partition on the edges of the
graph.
| def _write_partition(self, partition):
"""
Writes the desired partition into the graph to calculate the minimum
spanning tree.
Parameters
----------
partition : Partition
A Partition dataclass describing a partition on the edges of the
graph.
"""
for u, v, d in self.G.edges(data=True):
if (u, v) in partition.partition_dict:
d[self.partition_key] = partition.partition_dict[(u, v)]
else:
d[self.partition_key] = EdgePartition.OPEN
| (self, partition) |
30,335 | networkx.utils.misc | _clear_cache | Clear the cache of a graph (currently stores converted graphs).
Caching is controlled via ``nx.config.cache_converted_graphs`` configuration.
| def _clear_cache(G):
"""Clear the cache of a graph (currently stores converted graphs).
Caching is controlled via ``nx.config.cache_converted_graphs`` configuration.
"""
if cache := getattr(G, "__networkx_cache__", None):
cache.clear()
| (G) |
30,336 | networkx.utils.backends | _dispatchable | null | class _dispatchable:
"""Allow any of the following decorator forms:
- @_dispatchable
- @_dispatchable()
- @_dispatchable(name="override_name")
- @_dispatchable(graphs="graph")
- @_dispatchable(edge_attrs="weight")
- @_dispatchable(graphs={"G": 0, "H": 1}, edge_attrs={"weight": "default"})
These class attributes are currently used to allow backends to run networkx tests.
For example: `PYTHONPATH=. pytest --backend graphblas --fallback-to-nx`
Future work: add configuration to control these.
"""
_is_testing = False
_fallback_to_nx = (
os.environ.get("NETWORKX_FALLBACK_TO_NX", "true").strip().lower() == "true"
)
def __new__(
cls,
func=None,
*,
name=None,
graphs="G",
edge_attrs=None,
node_attrs=None,
preserve_edge_attrs=False,
preserve_node_attrs=False,
preserve_graph_attrs=False,
preserve_all_attrs=False,
mutates_input=False,
returns_graph=False,
):
"""A decorator that makes certain input graph types dispatch to ``func``'s
backend implementation.
Usage can be any of the following decorator forms:
- @_dispatchable
- @_dispatchable()
- @_dispatchable(name="override_name")
- @_dispatchable(graphs="graph_var_name")
- @_dispatchable(edge_attrs="weight")
- @_dispatchable(graphs={"G": 0, "H": 1}, edge_attrs={"weight": "default"})
with 0 and 1 giving the positions in the function signature for the graph objects.
When edge_attrs is a dict, keys are keyword names and values are defaults.
The class attributes are used to allow backends to run networkx tests.
For example: `PYTHONPATH=. pytest --backend graphblas --fallback-to-nx`
Future work: add configuration to control these.
Parameters
----------
func : callable, optional
The function to be decorated. If ``func`` is not provided, returns a
partial object that can be used to decorate a function later. If ``func``
is provided, returns a new callable object that dispatches to a backend
algorithm based on input graph types.
name : str, optional
The name of the algorithm to use for dispatching. If not provided,
the name of ``func`` will be used. ``name`` is useful to avoid name
conflicts, as all dispatched algorithms live in a single namespace.
For example, ``tournament.is_strongly_connected`` had a name conflict
with the standard ``nx.is_strongly_connected``, so we used
``@_dispatchable(name="tournament_is_strongly_connected")``.
graphs : str or dict or None, default "G"
If a string, the parameter name of the graph, which must be the first
argument of the wrapped function. If more than one graph is required
for the algorithm (or if the graph is not the first argument), provide
a dict of parameter name to argument position for each graph argument.
For example, ``@_dispatchable(graphs={"G": 0, "auxiliary?": 4})``
indicates the 0th parameter ``G`` of the function is a required graph,
and the 4th parameter ``auxiliary`` is an optional graph.
To indicate an argument is a list of graphs, do e.g. ``"[graphs]"``.
Use ``graphs=None`` if *no* arguments are NetworkX graphs such as for
graph generators, readers, and conversion functions.
edge_attrs : str or dict, optional
``edge_attrs`` holds information about edge attribute arguments
and default values for those edge attributes.
If a string, ``edge_attrs`` holds the function argument name that
indicates a single edge attribute to include in the converted graph.
The default value for this attribute is 1. To indicate that an argument
is a list of attributes (all with default value 1), use e.g. ``"[attrs]"``.
If a dict, ``edge_attrs`` holds a dict keyed by argument names, with
values that are either the default value or, if a string, the argument
name that indicates the default value.
node_attrs : str or dict, optional
Like ``edge_attrs``, but for node attributes.
preserve_edge_attrs : bool or str or dict, optional
For bool, whether to preserve all edge attributes.
For str, the parameter name that may indicate (with ``True`` or a
callable argument) whether all edge attributes should be preserved
when converting.
For dict of ``{graph_name: {attr: default}}``, indicate pre-determined
edge attributes (and defaults) to preserve for input graphs.
preserve_node_attrs : bool or str or dict, optional
Like ``preserve_edge_attrs``, but for node attributes.
preserve_graph_attrs : bool or set
For bool, whether to preserve all graph attributes.
For set, which input graph arguments to preserve graph attributes.
preserve_all_attrs : bool
Whether to preserve all edge, node and graph attributes.
This overrides all the other preserve_*_attrs.
mutates_input : bool or dict, default False
For bool, whether the functions mutates an input graph argument.
For dict of ``{arg_name: arg_pos}``, arguments that indicates whether an
input graph will be mutated, and ``arg_name`` may begin with ``"not "``
to negate the logic (for example, this is used by ``copy=`` arguments).
By default, dispatching doesn't convert input graphs to a different
backend for functions that mutate input graphs.
returns_graph : bool, default False
Whether the function can return or yield a graph object. By default,
dispatching doesn't convert input graphs to a different backend for
functions that return graphs.
"""
if func is None:
return partial(
_dispatchable,
name=name,
graphs=graphs,
edge_attrs=edge_attrs,
node_attrs=node_attrs,
preserve_edge_attrs=preserve_edge_attrs,
preserve_node_attrs=preserve_node_attrs,
preserve_graph_attrs=preserve_graph_attrs,
preserve_all_attrs=preserve_all_attrs,
mutates_input=mutates_input,
returns_graph=returns_graph,
)
if isinstance(func, str):
raise TypeError("'name' and 'graphs' must be passed by keyword") from None
# If name not provided, use the name of the function
if name is None:
name = func.__name__
self = object.__new__(cls)
# standard function-wrapping stuff
# __annotations__ not used
self.__name__ = func.__name__
# self.__doc__ = func.__doc__ # __doc__ handled as cached property
self.__defaults__ = func.__defaults__
# We "magically" add `backend=` keyword argument to allow backend to be specified
if func.__kwdefaults__:
self.__kwdefaults__ = {**func.__kwdefaults__, "backend": None}
else:
self.__kwdefaults__ = {"backend": None}
self.__module__ = func.__module__
self.__qualname__ = func.__qualname__
self.__dict__.update(func.__dict__)
self.__wrapped__ = func
# Supplement docstring with backend info; compute and cache when needed
self._orig_doc = func.__doc__
self._cached_doc = None
self.orig_func = func
self.name = name
self.edge_attrs = edge_attrs
self.node_attrs = node_attrs
self.preserve_edge_attrs = preserve_edge_attrs or preserve_all_attrs
self.preserve_node_attrs = preserve_node_attrs or preserve_all_attrs
self.preserve_graph_attrs = preserve_graph_attrs or preserve_all_attrs
self.mutates_input = mutates_input
# Keep `returns_graph` private for now, b/c we may extend info on return types
self._returns_graph = returns_graph
if edge_attrs is not None and not isinstance(edge_attrs, str | dict):
raise TypeError(
f"Bad type for edge_attrs: {type(edge_attrs)}. Expected str or dict."
) from None
if node_attrs is not None and not isinstance(node_attrs, str | dict):
raise TypeError(
f"Bad type for node_attrs: {type(node_attrs)}. Expected str or dict."
) from None
if not isinstance(self.preserve_edge_attrs, bool | str | dict):
raise TypeError(
f"Bad type for preserve_edge_attrs: {type(self.preserve_edge_attrs)}."
" Expected bool, str, or dict."
) from None
if not isinstance(self.preserve_node_attrs, bool | str | dict):
raise TypeError(
f"Bad type for preserve_node_attrs: {type(self.preserve_node_attrs)}."
" Expected bool, str, or dict."
) from None
if not isinstance(self.preserve_graph_attrs, bool | set):
raise TypeError(
f"Bad type for preserve_graph_attrs: {type(self.preserve_graph_attrs)}."
" Expected bool or set."
) from None
if not isinstance(self.mutates_input, bool | dict):
raise TypeError(
f"Bad type for mutates_input: {type(self.mutates_input)}."
" Expected bool or dict."
) from None
if not isinstance(self._returns_graph, bool):
raise TypeError(
f"Bad type for returns_graph: {type(self._returns_graph)}."
" Expected bool."
) from None
if isinstance(graphs, str):
graphs = {graphs: 0}
elif graphs is None:
pass
elif not isinstance(graphs, dict):
raise TypeError(
f"Bad type for graphs: {type(graphs)}. Expected str or dict."
) from None
elif len(graphs) == 0:
raise KeyError("'graphs' must contain at least one variable name") from None
# This dict comprehension is complicated for better performance; equivalent shown below.
self.optional_graphs = set()
self.list_graphs = set()
if graphs is None:
self.graphs = {}
else:
self.graphs = {
self.optional_graphs.add(val := k[:-1]) or val
if (last := k[-1]) == "?"
else self.list_graphs.add(val := k[1:-1]) or val
if last == "]"
else k: v
for k, v in graphs.items()
}
# The above is equivalent to:
# self.optional_graphs = {k[:-1] for k in graphs if k[-1] == "?"}
# self.list_graphs = {k[1:-1] for k in graphs if k[-1] == "]"}
# self.graphs = {k[:-1] if k[-1] == "?" else k: v for k, v in graphs.items()}
# Compute and cache the signature on-demand
self._sig = None
# Which backends implement this function?
self.backends = {
backend
for backend, info in backend_info.items()
if "functions" in info and name in info["functions"]
}
if name in _registered_algorithms:
raise KeyError(
f"Algorithm already exists in dispatch registry: {name}"
) from None
# Use the magic of `argmap` to turn `self` into a function. This does result
# in small additional overhead compared to calling `_dispatchable` directly,
# but `argmap` has the magical property that it can stack with other `argmap`
# decorators "for free". Being a function is better for REPRs and type-checkers.
self = argmap(_do_nothing)(self)
_registered_algorithms[name] = self
return self
@property
def __doc__(self):
"""If the cached documentation exists, it is returned.
Otherwise, the documentation is generated using _make_doc() method,
cached, and then returned."""
if (rv := self._cached_doc) is not None:
return rv
rv = self._cached_doc = self._make_doc()
return rv
@__doc__.setter
def __doc__(self, val):
"""Sets the original documentation to the given value and resets the
cached documentation."""
self._orig_doc = val
self._cached_doc = None
@property
def __signature__(self):
"""Return the signature of the original function, with the addition of
the `backend` and `backend_kwargs` parameters."""
if self._sig is None:
sig = inspect.signature(self.orig_func)
# `backend` is now a reserved argument used by dispatching.
# assert "backend" not in sig.parameters
if not any(
p.kind == inspect.Parameter.VAR_KEYWORD for p in sig.parameters.values()
):
sig = sig.replace(
parameters=[
*sig.parameters.values(),
inspect.Parameter(
"backend", inspect.Parameter.KEYWORD_ONLY, default=None
),
inspect.Parameter(
"backend_kwargs", inspect.Parameter.VAR_KEYWORD
),
]
)
else:
*parameters, var_keyword = sig.parameters.values()
sig = sig.replace(
parameters=[
*parameters,
inspect.Parameter(
"backend", inspect.Parameter.KEYWORD_ONLY, default=None
),
var_keyword,
]
)
self._sig = sig
return self._sig
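The two branches above can be illustrated in isolation. The sketch below (a standalone illustration, not the NetworkX API itself; `add_backend_param` and `shortest_path` are hypothetical names) appends a reserved keyword-only `backend` parameter to a function's signature, inserting it before an existing `**kwargs` when there is one and otherwise adding a catch-all `**backend_kwargs`:

```python
import inspect

def add_backend_param(func):
    """Append a keyword-only ``backend`` parameter to ``func``'s signature."""
    sig = inspect.signature(func)
    backend = inspect.Parameter(
        "backend", inspect.Parameter.KEYWORD_ONLY, default=None
    )
    params = list(sig.parameters.values())
    if params and params[-1].kind == inspect.Parameter.VAR_KEYWORD:
        # ``**kwargs`` already exists: slot ``backend`` in just before it.
        params = params[:-1] + [backend, params[-1]]
    else:
        # No ``**kwargs``: add ``backend`` plus a catch-all ``**backend_kwargs``.
        params = params + [
            backend,
            inspect.Parameter("backend_kwargs", inspect.Parameter.VAR_KEYWORD),
        ]
    return sig.replace(parameters=params)

def shortest_path(G, source, target, weight="weight"):
    pass

sig = add_backend_param(shortest_path)
```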
def __call__(self, /, *args, backend=None, **kwargs):
"""Returns the result of the original function, or the backend function if
the backend is specified and that backend implements `func`."""
if not backends:
# Fast path if no backends are installed
return self.orig_func(*args, **kwargs)
# Use `backend_name` in this function instead of `backend`
backend_name = backend
if backend_name is not None and backend_name not in backends:
raise ImportError(f"Unable to load backend: {backend_name}")
graphs_resolved = {}
for gname, pos in self.graphs.items():
if pos < len(args):
if gname in kwargs:
raise TypeError(f"{self.name}() got multiple values for {gname!r}")
val = args[pos]
elif gname in kwargs:
val = kwargs[gname]
elif gname not in self.optional_graphs:
raise TypeError(
f"{self.name}() missing required graph argument: {gname}"
)
else:
continue
if val is None:
if gname not in self.optional_graphs:
raise TypeError(
f"{self.name}() required graph argument {gname!r} is None; must be a graph"
)
else:
graphs_resolved[gname] = val
# Alternative to the above that does not check duplicated args or missing required graphs.
# graphs_resolved = {
# val
# for gname, pos in self.graphs.items()
# if (val := args[pos] if pos < len(args) else kwargs.get(gname)) is not None
# }
# Check if any graph comes from a backend
if self.list_graphs:
# Make sure we don't lose values by consuming an iterator
args = list(args)
for gname in self.list_graphs & graphs_resolved.keys():
val = list(graphs_resolved[gname])
graphs_resolved[gname] = val
if gname in kwargs:
kwargs[gname] = val
else:
args[self.graphs[gname]] = val
has_backends = any(
hasattr(g, "__networkx_backend__")
if gname not in self.list_graphs
else any(hasattr(g2, "__networkx_backend__") for g2 in g)
for gname, g in graphs_resolved.items()
)
if has_backends:
graph_backend_names = {
getattr(g, "__networkx_backend__", "networkx")
for gname, g in graphs_resolved.items()
if gname not in self.list_graphs
}
for gname in self.list_graphs & graphs_resolved.keys():
graph_backend_names.update(
getattr(g, "__networkx_backend__", "networkx")
for g in graphs_resolved[gname]
)
else:
has_backends = any(
hasattr(g, "__networkx_backend__") for g in graphs_resolved.values()
)
if has_backends:
graph_backend_names = {
getattr(g, "__networkx_backend__", "networkx")
for g in graphs_resolved.values()
}
backend_priority = config.backend_priority
if self._is_testing and backend_priority and backend_name is None:
# Special path if we are running networkx tests with a backend.
# This even runs for (and handles) functions that mutate input graphs.
return self._convert_and_call_for_tests(
backend_priority[0],
args,
kwargs,
fallback_to_nx=self._fallback_to_nx,
)
if has_backends:
# Dispatchable graphs found! Dispatch to backend function.
# We don't handle calls with different backend graphs yet,
# but we may be able to convert additional networkx graphs.
backend_names = graph_backend_names - {"networkx"}
if len(backend_names) != 1:
# Future work: convert between backends and run if multiple backends found
raise TypeError(
f"{self.name}() graphs must all be from the same backend, found {backend_names}"
)
[graph_backend_name] = backend_names
if backend_name is not None and backend_name != graph_backend_name:
# Future work: convert between backends to `backend_name` backend
raise TypeError(
f"{self.name}() is unable to convert graph from backend {graph_backend_name!r} "
f"to the specified backend {backend_name!r}."
)
if graph_backend_name not in backends:
raise ImportError(f"Unable to load backend: {graph_backend_name}")
if (
"networkx" in graph_backend_names
and graph_backend_name not in backend_priority
):
# Not configured to convert networkx graphs to this backend
raise TypeError(
f"Unable to convert inputs and run {self.name}. "
f"{self.name}() has networkx and {graph_backend_name} graphs, but NetworkX is not "
f"configured to automatically convert graphs from networkx to {graph_backend_name}."
)
backend = _load_backend(graph_backend_name)
if hasattr(backend, self.name):
if "networkx" in graph_backend_names:
# We need to convert networkx graphs to backend graphs.
# There is currently no need to check `self.mutates_input` here.
return self._convert_and_call(
graph_backend_name,
args,
kwargs,
fallback_to_nx=self._fallback_to_nx,
)
# All graphs are backend graphs--no need to convert!
return getattr(backend, self.name)(*args, **kwargs)
# Future work: try to convert and run with other backends in backend_priority
raise nx.NetworkXNotImplemented(
f"'{self.name}' not implemented by {graph_backend_name}"
)
        # If the backend was explicitly given by the user, we need to use it no matter what
if backend_name is not None:
return self._convert_and_call(
backend_name, args, kwargs, fallback_to_nx=False
)
# Only networkx graphs; try to convert and run with a backend with automatic
# conversion, but don't do this by default for graph generators or loaders,
# or if the functions mutates an input graph or returns a graph.
# Only convert and run if `backend.should_run(...)` returns True.
if (
not self._returns_graph
and (
not self.mutates_input
or isinstance(self.mutates_input, dict)
# If `mutates_input` begins with "not ", then assume the argument is boolean,
# otherwise treat it as a node or edge attribute if it's not None.
and any(
not (
args[arg_pos]
if len(args) > arg_pos
else kwargs.get(arg_name[4:], True)
)
if arg_name.startswith("not ")
else (
args[arg_pos] if len(args) > arg_pos else kwargs.get(arg_name)
)
is not None
for arg_name, arg_pos in self.mutates_input.items()
)
)
):
# Should we warn or log if we don't convert b/c the input will be mutated?
for backend_name in backend_priority:
if self._should_backend_run(backend_name, *args, **kwargs):
return self._convert_and_call(
backend_name,
args,
kwargs,
fallback_to_nx=self._fallback_to_nx,
)
# Default: run with networkx on networkx inputs
return self.orig_func(*args, **kwargs)
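The core routing rule in `__call__` above can be reduced to a few lines. In this stripped-down sketch (the `fake` backend and its function table are hypothetical, for illustration only), inputs advertise their backend via a `__networkx_backend__` attribute, and the call is dispatched to that backend's implementation when one exists:

```python
class PlainGraph:
    # A "networkx" graph: no __networkx_backend__ attribute.
    pass

class FakeBackendGraph:
    # A backend graph advertises its owning backend by name.
    __networkx_backend__ = "fake"

def nx_number_of_nodes(G):
    # Stand-in for the networkx reference implementation.
    return "networkx result"

# Hypothetical backend function table (real backends are entry points).
fake_backend = {"number_of_nodes": lambda G: "fake result"}

def dispatch(G):
    backend_name = getattr(G, "__networkx_backend__", "networkx")
    if backend_name == "networkx":
        return nx_number_of_nodes(G)
    return fake_backend["number_of_nodes"](G)
```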
def _can_backend_run(self, backend_name, /, *args, **kwargs):
"""Can the specified backend run this algorithm with these arguments?"""
backend = _load_backend(backend_name)
# `backend.can_run` and `backend.should_run` may return strings that describe
# why they can't or shouldn't be run. We plan to use the strings in the future.
return (
hasattr(backend, self.name)
and (can_run := backend.can_run(self.name, args, kwargs))
and not isinstance(can_run, str)
)
def _should_backend_run(self, backend_name, /, *args, **kwargs):
"""Can/should the specified backend run this algorithm with these arguments?"""
backend = _load_backend(backend_name)
# `backend.can_run` and `backend.should_run` may return strings that describe
# why they can't or shouldn't be run. We plan to use the strings in the future.
return (
hasattr(backend, self.name)
and (can_run := backend.can_run(self.name, args, kwargs))
and not isinstance(can_run, str)
and (should_run := backend.should_run(self.name, args, kwargs))
and not isinstance(should_run, str)
)
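The convention checked by `_can_backend_run` and `_should_backend_run` is that a backend's `can_run`/`should_run` hook may return `True`, `False`, or a string explaining why the answer is no. Because a non-empty string is truthy, strings must be rejected explicitly, exactly as in the conjunctions above. A minimal sketch (`demo_can_run` is a hypothetical stand-in, not a real backend hook):

```python
def demo_can_run(name):
    """Hypothetical hook: returns True, False, or a reason string."""
    if name == "pagerank":
        return True
    if name == "louvain":
        return "only undirected graphs are supported"
    return False

def can_backend_run(name):
    # A reason string is truthy, so filter it out with an isinstance check.
    return bool((can_run := demo_can_run(name)) and not isinstance(can_run, str))
```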
def _convert_arguments(self, backend_name, args, kwargs, *, use_cache):
"""Convert graph arguments to the specified backend.
Returns
-------
args tuple and kwargs dict
"""
bound = self.__signature__.bind(*args, **kwargs)
bound.apply_defaults()
if not self.graphs:
bound_kwargs = bound.kwargs
del bound_kwargs["backend"]
return bound.args, bound_kwargs
# Convert graphs into backend graph-like object
# Include the edge and/or node labels if provided to the algorithm
preserve_edge_attrs = self.preserve_edge_attrs
edge_attrs = self.edge_attrs
if preserve_edge_attrs is False:
# e.g. `preserve_edge_attrs=False`
pass
elif preserve_edge_attrs is True:
# e.g. `preserve_edge_attrs=True`
edge_attrs = None
elif isinstance(preserve_edge_attrs, str):
if bound.arguments[preserve_edge_attrs] is True or callable(
bound.arguments[preserve_edge_attrs]
):
# e.g. `preserve_edge_attrs="attr"` and `func(attr=True)`
# e.g. `preserve_edge_attrs="attr"` and `func(attr=myfunc)`
preserve_edge_attrs = True
edge_attrs = None
elif bound.arguments[preserve_edge_attrs] is False and (
isinstance(edge_attrs, str)
and edge_attrs == preserve_edge_attrs
or isinstance(edge_attrs, dict)
and preserve_edge_attrs in edge_attrs
):
# e.g. `preserve_edge_attrs="attr"` and `func(attr=False)`
# Treat `False` argument as meaning "preserve_edge_data=False"
# and not `False` as the edge attribute to use.
preserve_edge_attrs = False
edge_attrs = None
else:
# e.g. `preserve_edge_attrs="attr"` and `func(attr="weight")`
preserve_edge_attrs = False
# Else: e.g. `preserve_edge_attrs={"G": {"weight": 1}}`
if edge_attrs is None:
# May have been set to None above b/c all attributes are preserved
pass
elif isinstance(edge_attrs, str):
if edge_attrs[0] == "[":
# e.g. `edge_attrs="[edge_attributes]"` (argument of list of attributes)
# e.g. `func(edge_attributes=["foo", "bar"])`
edge_attrs = {
edge_attr: 1 for edge_attr in bound.arguments[edge_attrs[1:-1]]
}
elif callable(bound.arguments[edge_attrs]):
# e.g. `edge_attrs="weight"` and `func(weight=myfunc)`
preserve_edge_attrs = True
edge_attrs = None
elif bound.arguments[edge_attrs] is not None:
# e.g. `edge_attrs="weight"` and `func(weight="foo")` (default of 1)
edge_attrs = {bound.arguments[edge_attrs]: 1}
elif self.name == "to_numpy_array" and hasattr(
bound.arguments["dtype"], "names"
):
# Custom handling: attributes may be obtained from `dtype`
edge_attrs = {
edge_attr: 1 for edge_attr in bound.arguments["dtype"].names
}
else:
# e.g. `edge_attrs="weight"` and `func(weight=None)`
edge_attrs = None
else:
# e.g. `edge_attrs={"attr": "default"}` and `func(attr="foo", default=7)`
# e.g. `edge_attrs={"attr": 0}` and `func(attr="foo")`
edge_attrs = {
edge_attr: bound.arguments.get(val, 1) if isinstance(val, str) else val
for key, val in edge_attrs.items()
if (edge_attr := bound.arguments[key]) is not None
}
preserve_node_attrs = self.preserve_node_attrs
node_attrs = self.node_attrs
if preserve_node_attrs is False:
# e.g. `preserve_node_attrs=False`
pass
elif preserve_node_attrs is True:
# e.g. `preserve_node_attrs=True`
node_attrs = None
elif isinstance(preserve_node_attrs, str):
if bound.arguments[preserve_node_attrs] is True or callable(
bound.arguments[preserve_node_attrs]
):
# e.g. `preserve_node_attrs="attr"` and `func(attr=True)`
# e.g. `preserve_node_attrs="attr"` and `func(attr=myfunc)`
preserve_node_attrs = True
node_attrs = None
elif bound.arguments[preserve_node_attrs] is False and (
isinstance(node_attrs, str)
and node_attrs == preserve_node_attrs
or isinstance(node_attrs, dict)
and preserve_node_attrs in node_attrs
):
# e.g. `preserve_node_attrs="attr"` and `func(attr=False)`
# Treat `False` argument as meaning "preserve_node_data=False"
# and not `False` as the node attribute to use. Is this used?
preserve_node_attrs = False
node_attrs = None
else:
# e.g. `preserve_node_attrs="attr"` and `func(attr="weight")`
preserve_node_attrs = False
# Else: e.g. `preserve_node_attrs={"G": {"pos": None}}`
if node_attrs is None:
# May have been set to None above b/c all attributes are preserved
pass
elif isinstance(node_attrs, str):
if node_attrs[0] == "[":
# e.g. `node_attrs="[node_attributes]"` (argument of list of attributes)
# e.g. `func(node_attributes=["foo", "bar"])`
node_attrs = {
node_attr: None for node_attr in bound.arguments[node_attrs[1:-1]]
}
elif callable(bound.arguments[node_attrs]):
# e.g. `node_attrs="weight"` and `func(weight=myfunc)`
preserve_node_attrs = True
node_attrs = None
elif bound.arguments[node_attrs] is not None:
# e.g. `node_attrs="weight"` and `func(weight="foo")`
node_attrs = {bound.arguments[node_attrs]: None}
else:
# e.g. `node_attrs="weight"` and `func(weight=None)`
node_attrs = None
else:
# e.g. `node_attrs={"attr": "default"}` and `func(attr="foo", default=7)`
# e.g. `node_attrs={"attr": 0}` and `func(attr="foo")`
node_attrs = {
node_attr: bound.arguments.get(val) if isinstance(val, str) else val
for key, val in node_attrs.items()
if (node_attr := bound.arguments[key]) is not None
}
preserve_graph_attrs = self.preserve_graph_attrs
# It should be safe to assume that we either have networkx graphs or backend graphs.
# Future work: allow conversions between backends.
for gname in self.graphs:
if gname in self.list_graphs:
bound.arguments[gname] = [
self._convert_graph(
backend_name,
g,
edge_attrs=edge_attrs,
node_attrs=node_attrs,
preserve_edge_attrs=preserve_edge_attrs,
preserve_node_attrs=preserve_node_attrs,
preserve_graph_attrs=preserve_graph_attrs,
graph_name=gname,
use_cache=use_cache,
)
if getattr(g, "__networkx_backend__", "networkx") == "networkx"
else g
for g in bound.arguments[gname]
]
else:
graph = bound.arguments[gname]
if graph is None:
if gname in self.optional_graphs:
continue
raise TypeError(
f"Missing required graph argument `{gname}` in {self.name} function"
)
if isinstance(preserve_edge_attrs, dict):
preserve_edges = False
edges = preserve_edge_attrs.get(gname, edge_attrs)
else:
preserve_edges = preserve_edge_attrs
edges = edge_attrs
if isinstance(preserve_node_attrs, dict):
preserve_nodes = False
nodes = preserve_node_attrs.get(gname, node_attrs)
else:
preserve_nodes = preserve_node_attrs
nodes = node_attrs
if isinstance(preserve_graph_attrs, set):
preserve_graph = gname in preserve_graph_attrs
else:
preserve_graph = preserve_graph_attrs
if getattr(graph, "__networkx_backend__", "networkx") == "networkx":
bound.arguments[gname] = self._convert_graph(
backend_name,
graph,
edge_attrs=edges,
node_attrs=nodes,
preserve_edge_attrs=preserve_edges,
preserve_node_attrs=preserve_nodes,
preserve_graph_attrs=preserve_graph,
graph_name=gname,
use_cache=use_cache,
)
bound_kwargs = bound.kwargs
del bound_kwargs["backend"]
return bound.args, bound_kwargs
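The `Signature.bind` pattern used throughout `_convert_arguments` is worth seeing on its own: bind the call's positional and keyword arguments to parameter names, fill in defaults, then look arguments up by name and strip the reserved `backend` argument before forwarding. A small standalone sketch (the `func` here is hypothetical):

```python
import inspect

def func(G, weight="weight", *, backend=None):
    pass

sig = inspect.signature(func)
bound = sig.bind("G0", weight="capacity")  # like sig.bind(*args, **kwargs)
bound.apply_defaults()                     # fills in backend=None
bound_kwargs = dict(bound.kwargs)          # keyword-only args end up here
del bound_kwargs["backend"]                # strip the reserved argument
```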
def _convert_graph(
self,
backend_name,
graph,
*,
edge_attrs,
node_attrs,
preserve_edge_attrs,
preserve_node_attrs,
preserve_graph_attrs,
graph_name,
use_cache,
):
if (
use_cache
and (nx_cache := getattr(graph, "__networkx_cache__", None)) is not None
):
cache = nx_cache.setdefault("backends", {}).setdefault(backend_name, {})
# edge_attrs: dict | None
# node_attrs: dict | None
# preserve_edge_attrs: bool (False if edge_attrs is not None)
# preserve_node_attrs: bool (False if node_attrs is not None)
# preserve_graph_attrs: bool
key = edge_key, node_key, graph_key = (
frozenset(edge_attrs.items())
if edge_attrs is not None
else preserve_edge_attrs,
frozenset(node_attrs.items())
if node_attrs is not None
else preserve_node_attrs,
preserve_graph_attrs,
)
if cache:
warning_message = (
f"Using cached graph for {backend_name!r} backend in "
f"call to {self.name}.\n\nFor the cache to be consistent "
"(i.e., correct), the input graph must not have been "
"manually mutated since the cached graph was created. "
"Examples of manually mutating the graph data structures "
"resulting in an inconsistent cache include:\n\n"
" >>> G[u][v][key] = val\n\n"
"and\n\n"
" >>> for u, v, d in G.edges(data=True):\n"
" ... d[key] = val\n\n"
"Using methods such as `G.add_edge(u, v, weight=val)` "
"will correctly clear the cache to keep it consistent. "
"You may also use `G.__networkx_cache__.clear()` to "
"manually clear the cache, or set `G.__networkx_cache__` "
"to None to disable caching for G. Enable or disable "
"caching via `nx.config.cache_converted_graphs` config."
)
# Do a simple search for a cached graph with compatible data.
# For example, if we need a single attribute, then it's okay
# to use a cached graph that preserved all attributes.
# This looks for an exact match first.
for compat_key in itertools.product(
(edge_key, True) if edge_key is not True else (True,),
(node_key, True) if node_key is not True else (True,),
(graph_key, True) if graph_key is not True else (True,),
):
if (rv := cache.get(compat_key)) is not None:
warnings.warn(warning_message)
return rv
if edge_key is not True and node_key is not True:
# Iterate over the items in `cache` to see if any are compatible.
# For example, if no edge attributes are needed, then a graph
# with any edge attribute will suffice. We use the same logic
# below (but switched) to clear unnecessary items from the cache.
# Use `list(cache.items())` to be thread-safe.
for (ekey, nkey, gkey), val in list(cache.items()):
if edge_key is False or ekey is True:
pass
elif (
edge_key is True
or ekey is False
or not edge_key.issubset(ekey)
):
continue
if node_key is False or nkey is True:
pass
elif (
node_key is True
or nkey is False
or not node_key.issubset(nkey)
):
continue
if graph_key and not gkey:
continue
warnings.warn(warning_message)
return val
backend = _load_backend(backend_name)
rv = backend.convert_from_nx(
graph,
edge_attrs=edge_attrs,
node_attrs=node_attrs,
preserve_edge_attrs=preserve_edge_attrs,
preserve_node_attrs=preserve_node_attrs,
preserve_graph_attrs=preserve_graph_attrs,
name=self.name,
graph_name=graph_name,
)
if use_cache and nx_cache is not None:
# Remove old cached items that are no longer necessary since they
# are dominated/subsumed/outdated by what was just calculated.
# This uses the same logic as above, but with keys switched.
cache[key] = rv # Set at beginning to be thread-safe
for cur_key in list(cache):
if cur_key == key:
continue
ekey, nkey, gkey = cur_key
if ekey is False or edge_key is True:
pass
elif ekey is True or edge_key is False or not ekey.issubset(edge_key):
continue
if nkey is False or node_key is True:
pass
elif nkey is True or node_key is False or not nkey.issubset(node_key):
continue
if gkey and not graph_key:
continue
cache.pop(cur_key, None) # Use pop instead of del to be thread-safe
return rv
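The cache-compatibility rule applied in both directions above (reusing a cached graph, then evicting dominated entries) follows one principle: a cached conversion is reusable if it kept *at least* the attributes now requested. A standalone sketch of the edge-key test, with hypothetical keys (each key is `True` for "all attributes preserved", `False` for "none", or a frozenset of `(attr, default)` pairs):

```python
def edge_key_compatible(requested, cached):
    """Can a conversion cached under ``cached`` serve a ``requested`` key?"""
    if requested is False or cached is True:
        # Nothing needed, or everything was preserved: always compatible.
        return True
    if requested is True or cached is False:
        # Everything needed but not everything kept, or nothing was kept.
        return False
    # Both are frozensets: the cached graph must carry a superset.
    return requested.issubset(cached)

weight = frozenset({("weight", 1)})
both = frozenset({("weight", 1), ("capacity", 0)})
```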
def _convert_and_call(self, backend_name, args, kwargs, *, fallback_to_nx=False):
"""Call this dispatchable function with a backend, converting graphs if necessary."""
backend = _load_backend(backend_name)
if not self._can_backend_run(backend_name, *args, **kwargs):
if fallback_to_nx:
return self.orig_func(*args, **kwargs)
msg = f"'{self.name}' not implemented by {backend_name}"
if hasattr(backend, self.name):
msg += " with the given arguments"
raise RuntimeError(msg)
try:
converted_args, converted_kwargs = self._convert_arguments(
backend_name, args, kwargs, use_cache=config.cache_converted_graphs
)
result = getattr(backend, self.name)(*converted_args, **converted_kwargs)
except (NotImplementedError, nx.NetworkXNotImplemented) as exc:
if fallback_to_nx:
return self.orig_func(*args, **kwargs)
raise
return result
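The fallback behavior of `_convert_and_call` reduces to a simple try/except shape: attempt the backend call, and on `NotImplementedError` either fall back to the reference implementation or re-raise. A minimal sketch under that assumption (`call_with_fallback` and `backend_impl` are illustrative names, not part of the real API):

```python
def call_with_fallback(backend_func, nx_func, *args, fallback_to_nx=True):
    """Run ``backend_func``; on NotImplementedError, optionally fall back."""
    try:
        return backend_func(*args)
    except NotImplementedError:
        if fallback_to_nx:
            return nx_func(*args)
        raise

def backend_impl(x):
    raise NotImplementedError("not implemented by this backend")

result = call_with_fallback(backend_impl, lambda x: x + 1, 2)
```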
def _convert_and_call_for_tests(
self, backend_name, args, kwargs, *, fallback_to_nx=False
):
"""Call this dispatchable function with a backend; for use with testing."""
backend = _load_backend(backend_name)
if not self._can_backend_run(backend_name, *args, **kwargs):
if fallback_to_nx or not self.graphs:
return self.orig_func(*args, **kwargs)
import pytest
msg = f"'{self.name}' not implemented by {backend_name}"
if hasattr(backend, self.name):
msg += " with the given arguments"
pytest.xfail(msg)
from collections.abc import Iterable, Iterator, Mapping
from copy import copy
from io import BufferedReader, BytesIO, StringIO, TextIOWrapper
from itertools import tee
from random import Random
import numpy as np
from numpy.random import Generator, RandomState
from scipy.sparse import sparray
# We sometimes compare the backend result to the original result,
# so we need two sets of arguments. We tee iterators and copy
# random state so that they may be used twice.
if not args:
args1 = args2 = args
else:
args1, args2 = zip(
*(
(arg, copy(arg))
if isinstance(
arg, BytesIO | StringIO | Random | Generator | RandomState
)
else tee(arg)
if isinstance(arg, Iterator)
and not isinstance(arg, BufferedReader | TextIOWrapper)
else (arg, arg)
for arg in args
)
)
if not kwargs:
kwargs1 = kwargs2 = kwargs
else:
kwargs1, kwargs2 = zip(
*(
((k, v), (k, copy(v)))
if isinstance(
v, BytesIO | StringIO | Random | Generator | RandomState
)
else ((k, (teed := tee(v))[0]), (k, teed[1]))
if isinstance(v, Iterator)
and not isinstance(v, BufferedReader | TextIOWrapper)
else ((k, v), (k, v))
for k, v in kwargs.items()
)
)
kwargs1 = dict(kwargs1)
kwargs2 = dict(kwargs2)
try:
converted_args, converted_kwargs = self._convert_arguments(
backend_name, args1, kwargs1, use_cache=False
)
result = getattr(backend, self.name)(*converted_args, **converted_kwargs)
except (NotImplementedError, nx.NetworkXNotImplemented) as exc:
if fallback_to_nx:
return self.orig_func(*args2, **kwargs2)
import pytest
pytest.xfail(
exc.args[0] if exc.args else f"{self.name} raised {type(exc).__name__}"
)
# Verify that `self._returns_graph` is correct. This compares the return type
# to the type expected from `self._returns_graph`. This handles tuple and list
# return types, but *does not* catch functions that yield graphs.
if (
self._returns_graph
!= (
isinstance(result, nx.Graph)
or hasattr(result, "__networkx_backend__")
or isinstance(result, tuple | list)
and any(
isinstance(x, nx.Graph) or hasattr(x, "__networkx_backend__")
for x in result
)
)
and not (
# May return Graph or None
self.name in {"check_planarity", "check_planarity_recursive"}
and any(x is None for x in result)
)
and not (
# May return Graph or dict
self.name in {"held_karp_ascent"}
and any(isinstance(x, dict) for x in result)
)
and self.name
not in {
# yields graphs
"all_triads",
"general_k_edge_subgraphs",
# yields graphs or arrays
"nonisomorphic_trees",
}
):
raise RuntimeError(f"`returns_graph` is incorrect for {self.name}")
def check_result(val, depth=0):
if isinstance(val, np.number):
raise RuntimeError(
f"{self.name} returned a numpy scalar {val} ({type(val)}, depth={depth})"
)
if isinstance(val, np.ndarray | sparray):
return
if isinstance(val, nx.Graph):
check_result(val._node, depth=depth + 1)
check_result(val._adj, depth=depth + 1)
return
if isinstance(val, Iterator):
raise NotImplementedError
if isinstance(val, Iterable) and not isinstance(val, str):
for x in val:
check_result(x, depth=depth + 1)
if isinstance(val, Mapping):
for x in val.values():
check_result(x, depth=depth + 1)
def check_iterator(it):
for val in it:
try:
check_result(val)
except RuntimeError as exc:
raise RuntimeError(
f"{self.name} returned a numpy scalar {val} ({type(val)})"
) from exc
yield val
if self.name in {"from_edgelist"}:
# numpy scalars are explicitly given as values in some tests
pass
elif isinstance(result, Iterator):
result = check_iterator(result)
else:
try:
check_result(result)
except RuntimeError as exc:
raise RuntimeError(
f"{self.name} returned a numpy scalar {result} ({type(result)})"
) from exc
if self.name in {
"edmonds_karp",
"barycenter",
"contracted_edge",
"contracted_nodes",
"stochastic_graph",
"relabel_nodes",
"maximum_branching",
"incremental_closeness_centrality",
"minimal_branching",
"minimum_spanning_arborescence",
"recursive_simple_cycles",
"connected_double_edge_swap",
}:
# Special-case algorithms that mutate input graphs
bound = self.__signature__.bind(*converted_args, **converted_kwargs)
bound.apply_defaults()
bound2 = self.__signature__.bind(*args2, **kwargs2)
bound2.apply_defaults()
if self.name in {
"minimal_branching",
"minimum_spanning_arborescence",
"recursive_simple_cycles",
"connected_double_edge_swap",
}:
G1 = backend.convert_to_nx(bound.arguments["G"])
G2 = bound2.arguments["G"]
G2._adj = G1._adj
nx._clear_cache(G2)
elif self.name == "edmonds_karp":
R1 = backend.convert_to_nx(bound.arguments["residual"])
R2 = bound2.arguments["residual"]
if R1 is not None and R2 is not None:
for k, v in R1.edges.items():
R2.edges[k]["flow"] = v["flow"]
R2.graph.update(R1.graph)
nx._clear_cache(R2)
elif self.name == "barycenter" and bound.arguments["attr"] is not None:
G1 = backend.convert_to_nx(bound.arguments["G"])
G2 = bound2.arguments["G"]
attr = bound.arguments["attr"]
for k, v in G1.nodes.items():
G2.nodes[k][attr] = v[attr]
nx._clear_cache(G2)
elif (
self.name in {"contracted_nodes", "contracted_edge"}
and not bound.arguments["copy"]
):
# Edges and nodes changed; node "contraction" and edge "weight" attrs
G1 = backend.convert_to_nx(bound.arguments["G"])
G2 = bound2.arguments["G"]
G2.__dict__.update(G1.__dict__)
nx._clear_cache(G2)
elif self.name == "stochastic_graph" and not bound.arguments["copy"]:
G1 = backend.convert_to_nx(bound.arguments["G"])
G2 = bound2.arguments["G"]
for k, v in G1.edges.items():
G2.edges[k]["weight"] = v["weight"]
nx._clear_cache(G2)
elif (
self.name == "relabel_nodes"
and not bound.arguments["copy"]
or self.name in {"incremental_closeness_centrality"}
):
G1 = backend.convert_to_nx(bound.arguments["G"])
G2 = bound2.arguments["G"]
if G1 is G2:
return G2
G2._node.clear()
G2._node.update(G1._node)
G2._adj.clear()
G2._adj.update(G1._adj)
if hasattr(G1, "_pred") and hasattr(G2, "_pred"):
G2._pred.clear()
G2._pred.update(G1._pred)
if hasattr(G1, "_succ") and hasattr(G2, "_succ"):
G2._succ.clear()
G2._succ.update(G1._succ)
nx._clear_cache(G2)
if self.name == "relabel_nodes":
return G2
return backend.convert_to_nx(result)
converted_result = backend.convert_to_nx(result)
if isinstance(converted_result, nx.Graph) and self.name not in {
"boykov_kolmogorov",
"preflow_push",
"quotient_graph",
"shortest_augmenting_path",
"spectral_graph_forge",
# We don't handle tempfile.NamedTemporaryFile arguments
"read_gml",
"read_graph6",
"read_sparse6",
# We don't handle io.BufferedReader or io.TextIOWrapper arguments
"bipartite_read_edgelist",
"read_adjlist",
"read_edgelist",
"read_graphml",
"read_multiline_adjlist",
"read_pajek",
"from_pydot",
"pydot_read_dot",
"agraph_read_dot",
# graph comparison fails b/c of nan values
"read_gexf",
}:
# For graph return types (e.g. generators), we compare that results are
# the same between the backend and networkx, then return the original
# networkx result so the iteration order will be consistent in tests.
G = self.orig_func(*args2, **kwargs2)
if not nx.utils.graphs_equal(G, converted_result):
assert G.number_of_nodes() == converted_result.number_of_nodes()
assert G.number_of_edges() == converted_result.number_of_edges()
assert G.graph == converted_result.graph
assert G.nodes == converted_result.nodes
assert G.adj == converted_result.adj
assert type(G) is type(converted_result)
raise AssertionError("Graphs are not equal")
return G
return converted_result
    def _make_doc(self):
        """Generate the "Backends" section appended to the docstring of functions
        that have one or more alternate backend implementations registered via the
        `backend_info` entry point."""
if not self.backends:
return self._orig_doc
lines = [
"Backends",
"--------",
]
for backend in sorted(self.backends):
info = backend_info[backend]
if "short_summary" in info:
lines.append(f"{backend} : {info['short_summary']}")
else:
lines.append(backend)
if "functions" not in info or self.name not in info["functions"]:
lines.append("")
continue
func_info = info["functions"][self.name]
            # Support both the old key `extra_docstring` and its new name `additional_docs`
if func_docs := (
func_info.get("additional_docs") or func_info.get("extra_docstring")
):
lines.extend(
f" {line}" if line else line for line in func_docs.split("\n")
)
add_gap = True
else:
add_gap = False
            # Support both the old key `extra_parameters` and its new name `additional_parameters`
if extra_parameters := (
func_info.get("extra_parameters")
or func_info.get("additional_parameters")
):
if add_gap:
lines.append("")
lines.append(" Additional parameters:")
for param in sorted(extra_parameters):
lines.append(f" {param}")
if desc := extra_parameters[param]:
lines.append(f" {desc}")
lines.append("")
else:
lines.append("")
if func_url := func_info.get("url"):
lines.append(f"[`Source <{func_url}>`_]")
lines.append("")
lines.pop() # Remove last empty line
to_add = "\n ".join(lines)
return f"{self._orig_doc.rstrip()}\n\n {to_add}"
def __reduce__(self):
"""Allow this object to be serialized with pickle.
This uses the global registry `_registered_algorithms` to deserialize.
"""
return _restore_dispatchable, (self.name,)
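The `__reduce__` pattern above means pickle never serializes the dispatchable object itself: it stores only a restore function and the algorithm's name, and unpickling looks the object back up in the global registry, so the registered singleton is returned by identity. A self-contained sketch of the same idea (`MiniDispatchable`, `_restore`, and `_registry` are hypothetical stand-ins for `_dispatchable`, `_restore_dispatchable`, and `_registered_algorithms`):

```python
_registry = {}

def _restore(name):
    # Module-level restore function; pickle records a reference to it.
    return _registry[name]

class MiniDispatchable:
    def __init__(self, name):
        self.name = name
        _registry[name] = self

    def __reduce__(self):
        # Pickle stores only (_restore, (self.name,)); unpickling re-runs
        # the registry lookup instead of rebuilding the object.
        return _restore, (self.name,)

obj = MiniDispatchable("pagerank")
func, func_args = obj.__reduce__()
restored = func(*func_args)
```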
| null |
30,337 | networkx.utils.backends | __call__ | Returns the result of the original function, or the backend function if
the backend is specified and that backend implements `func`. | def __call__(self, /, *args, backend=None, **kwargs):
"""Returns the result of the original function, or the backend function if
the backend is specified and that backend implements `func`."""
if not backends:
# Fast path if no backends are installed
return self.orig_func(*args, **kwargs)
# Use `backend_name` in this function instead of `backend`
backend_name = backend
if backend_name is not None and backend_name not in backends:
raise ImportError(f"Unable to load backend: {backend_name}")
graphs_resolved = {}
for gname, pos in self.graphs.items():
if pos < len(args):
if gname in kwargs:
raise TypeError(f"{self.name}() got multiple values for {gname!r}")
val = args[pos]
elif gname in kwargs:
val = kwargs[gname]
elif gname not in self.optional_graphs:
raise TypeError(
f"{self.name}() missing required graph argument: {gname}"
)
else:
continue
if val is None:
if gname not in self.optional_graphs:
raise TypeError(
f"{self.name}() required graph argument {gname!r} is None; must be a graph"
)
else:
graphs_resolved[gname] = val
# Alternative to the above that does not check duplicated args or missing required graphs.
# graphs_resolved = {
# val
# for gname, pos in self.graphs.items()
# if (val := args[pos] if pos < len(args) else kwargs.get(gname)) is not None
# }
# Check if any graph comes from a backend
if self.list_graphs:
# Make sure we don't lose values by consuming an iterator
args = list(args)
for gname in self.list_graphs & graphs_resolved.keys():
val = list(graphs_resolved[gname])
graphs_resolved[gname] = val
if gname in kwargs:
kwargs[gname] = val
else:
args[self.graphs[gname]] = val
has_backends = any(
hasattr(g, "__networkx_backend__")
if gname not in self.list_graphs
else any(hasattr(g2, "__networkx_backend__") for g2 in g)
for gname, g in graphs_resolved.items()
)
if has_backends:
graph_backend_names = {
getattr(g, "__networkx_backend__", "networkx")
for gname, g in graphs_resolved.items()
if gname not in self.list_graphs
}
for gname in self.list_graphs & graphs_resolved.keys():
graph_backend_names.update(
getattr(g, "__networkx_backend__", "networkx")
for g in graphs_resolved[gname]
)
else:
has_backends = any(
hasattr(g, "__networkx_backend__") for g in graphs_resolved.values()
)
if has_backends:
graph_backend_names = {
getattr(g, "__networkx_backend__", "networkx")
for g in graphs_resolved.values()
}
backend_priority = config.backend_priority
if self._is_testing and backend_priority and backend_name is None:
# Special path if we are running networkx tests with a backend.
# This even runs for (and handles) functions that mutate input graphs.
return self._convert_and_call_for_tests(
backend_priority[0],
args,
kwargs,
fallback_to_nx=self._fallback_to_nx,
)
if has_backends:
# Dispatchable graphs found! Dispatch to backend function.
# We don't handle calls with different backend graphs yet,
# but we may be able to convert additional networkx graphs.
backend_names = graph_backend_names - {"networkx"}
if len(backend_names) != 1:
# Future work: convert between backends and run if multiple backends found
raise TypeError(
f"{self.name}() graphs must all be from the same backend, found {backend_names}"
)
[graph_backend_name] = backend_names
if backend_name is not None and backend_name != graph_backend_name:
# Future work: convert between backends to `backend_name` backend
raise TypeError(
f"{self.name}() is unable to convert graph from backend {graph_backend_name!r} "
f"to the specified backend {backend_name!r}."
)
if graph_backend_name not in backends:
raise ImportError(f"Unable to load backend: {graph_backend_name}")
if (
"networkx" in graph_backend_names
and graph_backend_name not in backend_priority
):
# Not configured to convert networkx graphs to this backend
raise TypeError(
f"Unable to convert inputs and run {self.name}. "
f"{self.name}() has networkx and {graph_backend_name} graphs, but NetworkX is not "
f"configured to automatically convert graphs from networkx to {graph_backend_name}."
)
backend = _load_backend(graph_backend_name)
if hasattr(backend, self.name):
if "networkx" in graph_backend_names:
# We need to convert networkx graphs to backend graphs.
# There is currently no need to check `self.mutates_input` here.
return self._convert_and_call(
graph_backend_name,
args,
kwargs,
fallback_to_nx=self._fallback_to_nx,
)
# All graphs are backend graphs--no need to convert!
return getattr(backend, self.name)(*args, **kwargs)
# Future work: try to convert and run with other backends in backend_priority
raise nx.NetworkXNotImplemented(
f"'{self.name}' not implemented by {graph_backend_name}"
)
# Backend was explicitly given by the user, so we need to use it no matter what
if backend_name is not None:
return self._convert_and_call(
backend_name, args, kwargs, fallback_to_nx=False
)
# Only networkx graphs; try to convert and run with a backend with automatic
# conversion, but don't do this by default for graph generators or loaders,
# or if the function mutates an input graph or returns a graph.
# Only convert and run if `backend.should_run(...)` returns True.
if (
not self._returns_graph
and (
not self.mutates_input
or isinstance(self.mutates_input, dict)
# If an `arg_name` key in `mutates_input` begins with "not ", then assume
# the argument is boolean; otherwise treat it as a node or edge attribute
# if it's not None.
and any(
not (
args[arg_pos]
if len(args) > arg_pos
else kwargs.get(arg_name[4:], True)
)
if arg_name.startswith("not ")
else (
args[arg_pos] if len(args) > arg_pos else kwargs.get(arg_name)
)
is not None
for arg_name, arg_pos in self.mutates_input.items()
)
)
):
# Should we warn or log if we don't convert b/c the input will be mutated?
for backend_name in backend_priority:
if self._should_backend_run(backend_name, *args, **kwargs):
return self._convert_and_call(
backend_name,
args,
kwargs,
fallback_to_nx=self._fallback_to_nx,
)
# Default: run with networkx on networkx inputs
return self.orig_func(*args, **kwargs)
| (self, /, *args, backend=None, **kwargs) |
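The backend-resolution logic above can be reduced to a small sketch: each backend graph advertises its backend through a `__networkx_backend__` attribute, plain networkx graphs have none, and all backend graphs must agree on a single backend before dispatching. The classes and function below are illustrative stand-ins, not the real networkx machinery.

```python
def resolve_backend(graphs):
    """Return the single backend name shared by all graphs, or 'networkx'."""
    names = {getattr(g, "__networkx_backend__", "networkx") for g in graphs}
    backend_names = names - {"networkx"}
    if not backend_names:
        return "networkx"  # only plain networkx graphs were passed
    if len(backend_names) > 1:
        # Mirrors the TypeError raised in the dispatch code above
        raise TypeError(
            f"graphs must all be from the same backend, found {backend_names}"
        )
    return backend_names.pop()  # exactly one backend


class FakeBackendGraph:
    # Hypothetical backend graph; "graphblas" is just an example name
    __networkx_backend__ = "graphblas"


class PlainGraph:
    pass  # no __networkx_backend__ attribute -> treated as "networkx"
```

Note that mixing backend graphs with plain networkx graphs resolves to the backend name; whether the networkx graphs are then converted depends on the `backend_priority` configuration checked in the dispatch code.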
30,338 | networkx.utils.backends | __new__ | A decorator that makes certain input graph types dispatch to ``func``'s
backend implementation.
Usage can be any of the following decorator forms:
- @_dispatchable
- @_dispatchable()
- @_dispatchable(name="override_name")
- @_dispatchable(graphs="graph_var_name")
- @_dispatchable(edge_attrs="weight")
- @_dispatchable(graphs={"G": 0, "H": 1}, edge_attrs={"weight": "default"})
with 0 and 1 giving the positions of the graph objects in the function signature.
When edge_attrs is a dict, keys are keyword names and values are defaults.
The class attributes are used to allow backends to run networkx tests.
For example: `PYTHONPATH=. pytest --backend graphblas --fallback-to-nx`
Future work: add configuration to control these.
Parameters
----------
func : callable, optional
The function to be decorated. If ``func`` is not provided, returns a
partial object that can be used to decorate a function later. If ``func``
is provided, returns a new callable object that dispatches to a backend
algorithm based on input graph types.
name : str, optional
The name of the algorithm to use for dispatching. If not provided,
the name of ``func`` will be used. ``name`` is useful to avoid name
conflicts, as all dispatched algorithms live in a single namespace.
For example, ``tournament.is_strongly_connected`` had a name conflict
with the standard ``nx.is_strongly_connected``, so we used
``@_dispatchable(name="tournament_is_strongly_connected")``.
graphs : str or dict or None, default "G"
If a string, the parameter name of the graph, which must be the first
argument of the wrapped function. If more than one graph is required
for the algorithm (or if the graph is not the first argument), provide
a dict of parameter name to argument position for each graph argument.
For example, ``@_dispatchable(graphs={"G": 0, "auxiliary?": 4})``
indicates the 0th parameter ``G`` of the function is a required graph,
and the 4th parameter ``auxiliary`` is an optional graph.
To indicate an argument is a list of graphs, do e.g. ``"[graphs]"``.
Use ``graphs=None`` if *no* arguments are NetworkX graphs such as for
graph generators, readers, and conversion functions.
edge_attrs : str or dict, optional
``edge_attrs`` holds information about edge attribute arguments
and default values for those edge attributes.
If a string, ``edge_attrs`` holds the function argument name that
indicates a single edge attribute to include in the converted graph.
The default value for this attribute is 1. To indicate that an argument
is a list of attributes (all with default value 1), use e.g. ``"[attrs]"``.
If a dict, ``edge_attrs`` holds a dict keyed by argument names, with
values that are either the default value or, if a string, the argument
name that indicates the default value.
node_attrs : str or dict, optional
Like ``edge_attrs``, but for node attributes.
preserve_edge_attrs : bool or str or dict, optional
For bool, whether to preserve all edge attributes.
For str, the parameter name that may indicate (with ``True`` or a
callable argument) whether all edge attributes should be preserved
when converting.
For dict of ``{graph_name: {attr: default}}``, indicate pre-determined
edge attributes (and defaults) to preserve for input graphs.
preserve_node_attrs : bool or str or dict, optional
Like ``preserve_edge_attrs``, but for node attributes.
preserve_graph_attrs : bool or set
For bool, whether to preserve all graph attributes.
For set, which input graph arguments to preserve graph attributes.
preserve_all_attrs : bool
Whether to preserve all edge, node and graph attributes.
This overrides all the other preserve_*_attrs.
mutates_input : bool or dict, default False
For bool, whether the function mutates an input graph argument.
For dict of ``{arg_name: arg_pos}``, arguments that indicate whether an
input graph will be mutated, and ``arg_name`` may begin with ``"not "``
to negate the logic (for example, this is used by ``copy=`` arguments).
By default, dispatching doesn't convert input graphs to a different
backend for functions that mutate input graphs.
returns_graph : bool, default False
Whether the function can return or yield a graph object. By default,
dispatching doesn't convert input graphs to a different backend for
functions that return graphs.
| def __new__(
cls,
func=None,
*,
name=None,
graphs="G",
edge_attrs=None,
node_attrs=None,
preserve_edge_attrs=False,
preserve_node_attrs=False,
preserve_graph_attrs=False,
preserve_all_attrs=False,
mutates_input=False,
returns_graph=False,
):
"""A decorator that makes certain input graph types dispatch to ``func``'s
backend implementation.
Usage can be any of the following decorator forms:
- @_dispatchable
- @_dispatchable()
- @_dispatchable(name="override_name")
- @_dispatchable(graphs="graph_var_name")
- @_dispatchable(edge_attrs="weight")
- @_dispatchable(graphs={"G": 0, "H": 1}, edge_attrs={"weight": "default"})
with 0 and 1 giving the positions of the graph objects in the function signature.
When edge_attrs is a dict, keys are keyword names and values are defaults.
The class attributes are used to allow backends to run networkx tests.
For example: `PYTHONPATH=. pytest --backend graphblas --fallback-to-nx`
Future work: add configuration to control these.
Parameters
----------
func : callable, optional
The function to be decorated. If ``func`` is not provided, returns a
partial object that can be used to decorate a function later. If ``func``
is provided, returns a new callable object that dispatches to a backend
algorithm based on input graph types.
name : str, optional
The name of the algorithm to use for dispatching. If not provided,
the name of ``func`` will be used. ``name`` is useful to avoid name
conflicts, as all dispatched algorithms live in a single namespace.
For example, ``tournament.is_strongly_connected`` had a name conflict
with the standard ``nx.is_strongly_connected``, so we used
``@_dispatchable(name="tournament_is_strongly_connected")``.
graphs : str or dict or None, default "G"
If a string, the parameter name of the graph, which must be the first
argument of the wrapped function. If more than one graph is required
for the algorithm (or if the graph is not the first argument), provide
a dict of parameter name to argument position for each graph argument.
For example, ``@_dispatchable(graphs={"G": 0, "auxiliary?": 4})``
indicates the 0th parameter ``G`` of the function is a required graph,
and the 4th parameter ``auxiliary`` is an optional graph.
To indicate an argument is a list of graphs, do e.g. ``"[graphs]"``.
Use ``graphs=None`` if *no* arguments are NetworkX graphs such as for
graph generators, readers, and conversion functions.
edge_attrs : str or dict, optional
``edge_attrs`` holds information about edge attribute arguments
and default values for those edge attributes.
If a string, ``edge_attrs`` holds the function argument name that
indicates a single edge attribute to include in the converted graph.
The default value for this attribute is 1. To indicate that an argument
is a list of attributes (all with default value 1), use e.g. ``"[attrs]"``.
If a dict, ``edge_attrs`` holds a dict keyed by argument names, with
values that are either the default value or, if a string, the argument
name that indicates the default value.
node_attrs : str or dict, optional
Like ``edge_attrs``, but for node attributes.
preserve_edge_attrs : bool or str or dict, optional
For bool, whether to preserve all edge attributes.
For str, the parameter name that may indicate (with ``True`` or a
callable argument) whether all edge attributes should be preserved
when converting.
For dict of ``{graph_name: {attr: default}}``, indicate pre-determined
edge attributes (and defaults) to preserve for input graphs.
preserve_node_attrs : bool or str or dict, optional
Like ``preserve_edge_attrs``, but for node attributes.
preserve_graph_attrs : bool or set
For bool, whether to preserve all graph attributes.
For set, which input graph arguments to preserve graph attributes.
preserve_all_attrs : bool
Whether to preserve all edge, node and graph attributes.
This overrides all the other preserve_*_attrs.
mutates_input : bool or dict, default False
For bool, whether the function mutates an input graph argument.
For dict of ``{arg_name: arg_pos}``, arguments that indicate whether an
input graph will be mutated, and ``arg_name`` may begin with ``"not "``
to negate the logic (for example, this is used by ``copy=`` arguments).
By default, dispatching doesn't convert input graphs to a different
backend for functions that mutate input graphs.
returns_graph : bool, default False
Whether the function can return or yield a graph object. By default,
dispatching doesn't convert input graphs to a different backend for
functions that return graphs.
"""
if func is None:
return partial(
_dispatchable,
name=name,
graphs=graphs,
edge_attrs=edge_attrs,
node_attrs=node_attrs,
preserve_edge_attrs=preserve_edge_attrs,
preserve_node_attrs=preserve_node_attrs,
preserve_graph_attrs=preserve_graph_attrs,
preserve_all_attrs=preserve_all_attrs,
mutates_input=mutates_input,
returns_graph=returns_graph,
)
if isinstance(func, str):
raise TypeError("'name' and 'graphs' must be passed by keyword") from None
# If name not provided, use the name of the function
if name is None:
name = func.__name__
self = object.__new__(cls)
# standard function-wrapping stuff
# __annotations__ not used
self.__name__ = func.__name__
# self.__doc__ = func.__doc__ # __doc__ handled as cached property
self.__defaults__ = func.__defaults__
# We "magically" add `backend=` keyword argument to allow backend to be specified
if func.__kwdefaults__:
self.__kwdefaults__ = {**func.__kwdefaults__, "backend": None}
else:
self.__kwdefaults__ = {"backend": None}
self.__module__ = func.__module__
self.__qualname__ = func.__qualname__
self.__dict__.update(func.__dict__)
self.__wrapped__ = func
# Supplement docstring with backend info; compute and cache when needed
self._orig_doc = func.__doc__
self._cached_doc = None
self.orig_func = func
self.name = name
self.edge_attrs = edge_attrs
self.node_attrs = node_attrs
self.preserve_edge_attrs = preserve_edge_attrs or preserve_all_attrs
self.preserve_node_attrs = preserve_node_attrs or preserve_all_attrs
self.preserve_graph_attrs = preserve_graph_attrs or preserve_all_attrs
self.mutates_input = mutates_input
# Keep `returns_graph` private for now, b/c we may extend info on return types
self._returns_graph = returns_graph
if edge_attrs is not None and not isinstance(edge_attrs, str | dict):
raise TypeError(
f"Bad type for edge_attrs: {type(edge_attrs)}. Expected str or dict."
) from None
if node_attrs is not None and not isinstance(node_attrs, str | dict):
raise TypeError(
f"Bad type for node_attrs: {type(node_attrs)}. Expected str or dict."
) from None
if not isinstance(self.preserve_edge_attrs, bool | str | dict):
raise TypeError(
f"Bad type for preserve_edge_attrs: {type(self.preserve_edge_attrs)}."
" Expected bool, str, or dict."
) from None
if not isinstance(self.preserve_node_attrs, bool | str | dict):
raise TypeError(
f"Bad type for preserve_node_attrs: {type(self.preserve_node_attrs)}."
" Expected bool, str, or dict."
) from None
if not isinstance(self.preserve_graph_attrs, bool | set):
raise TypeError(
f"Bad type for preserve_graph_attrs: {type(self.preserve_graph_attrs)}."
" Expected bool or set."
) from None
if not isinstance(self.mutates_input, bool | dict):
raise TypeError(
f"Bad type for mutates_input: {type(self.mutates_input)}."
" Expected bool or dict."
) from None
if not isinstance(self._returns_graph, bool):
raise TypeError(
f"Bad type for returns_graph: {type(self._returns_graph)}."
" Expected bool."
) from None
if isinstance(graphs, str):
graphs = {graphs: 0}
elif graphs is None:
pass
elif not isinstance(graphs, dict):
raise TypeError(
f"Bad type for graphs: {type(graphs)}. Expected str or dict."
) from None
elif len(graphs) == 0:
raise KeyError("'graphs' must contain at least one variable name") from None
# This dict comprehension is complicated for better performance; equivalent shown below.
self.optional_graphs = set()
self.list_graphs = set()
if graphs is None:
self.graphs = {}
else:
self.graphs = {
self.optional_graphs.add(val := k[:-1]) or val
if (last := k[-1]) == "?"
else self.list_graphs.add(val := k[1:-1]) or val
if last == "]"
else k: v
for k, v in graphs.items()
}
# The above is equivalent to:
# self.optional_graphs = {k[:-1] for k in graphs if k[-1] == "?"}
# self.list_graphs = {k[1:-1] for k in graphs if k[-1] == "]"}
# self.graphs = {k[:-1] if k[-1] == "?" else k: v for k, v in graphs.items()}
# Compute and cache the signature on-demand
self._sig = None
# Which backends implement this function?
self.backends = {
backend
for backend, info in backend_info.items()
if "functions" in info and name in info["functions"]
}
if name in _registered_algorithms:
raise KeyError(
f"Algorithm already exists in dispatch registry: {name}"
) from None
# Use the magic of `argmap` to turn `self` into a function. This does result
# in small additional overhead compared to calling `_dispatchable` directly,
# but `argmap` has the magical property that it can stack with other `argmap`
# decorators "for free". Being a function is better for REPRs and type-checkers.
self = argmap(_do_nothing)(self)
_registered_algorithms[name] = self
return self
| (cls, func=None, *, name=None, graphs='G', edge_attrs=None, node_attrs=None, preserve_edge_attrs=False, preserve_node_attrs=False, preserve_graph_attrs=False, preserve_all_attrs=False, mutates_input=False, returns_graph=False) |
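The condensed dict comprehension near the end of `__new__` parses trailing `"?"` (optional graph) and surrounding `"[...]"` (list of graphs) markers in one pass. Written out as a plain loop, under the same conventions the source comments describe, it looks roughly like this (names here are illustrative):

```python
def parse_graph_params(graphs):
    """Split a {marker_name: position} dict into cleaned names plus
    the sets of optional and list-of-graph parameter names."""
    optional_graphs = {k[:-1] for k in graphs if k[-1] == "?"}
    list_graphs = {k[1:-1] for k in graphs if k[-1] == "]"}
    cleaned = {}
    for k, v in graphs.items():
        if k[-1] == "?":
            cleaned[k[:-1]] = v   # "auxiliary?" -> "auxiliary"
        elif k[-1] == "]":
            cleaned[k[1:-1]] = v  # "[graphs]" -> "graphs"
        else:
            cleaned[k] = v
    return cleaned, optional_graphs, list_graphs
```

For example, `@_dispatchable(graphs={"G": 0, "auxiliary?": 4})` yields cleaned names `{"G": 0, "auxiliary": 4}` with `"auxiliary"` recorded as optional.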
30,339 | networkx.utils.backends | __reduce__ | Allow this object to be serialized with pickle.
This uses the global registry `_registered_algorithms` to deserialize.
| def __reduce__(self):
"""Allow this object to be serialized with pickle.
This uses the global registry `_registered_algorithms` to deserialize.
"""
return _restore_dispatchable, (self.name,)
| (self) |
30,340 | networkx.utils.backends | _can_backend_run | Can the specified backend run this algorithm with these arguments? | def _can_backend_run(self, backend_name, /, *args, **kwargs):
"""Can the specified backend run this algorithm with these arguments?"""
backend = _load_backend(backend_name)
# `backend.can_run` and `backend.should_run` may return strings that describe
# why they can't or shouldn't be run. We plan to use the strings in the future.
return (
hasattr(backend, self.name)
and (can_run := backend.can_run(self.name, args, kwargs))
and not isinstance(can_run, str)
)
| (self, backend_name, /, *args, **kwargs) |
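As the comment in `_can_backend_run` notes, `backend.can_run` may return a string describing *why* the function can't run; a string is truthy, so the `isinstance` check is what turns it into a refusal. A small sketch of that convention with a hypothetical backend:

```python
class ToyBackend:
    """Hypothetical backend; method names and messages are illustrative."""

    @staticmethod
    def pagerank(G):
        return {}

    @staticmethod
    def louvain(G):
        return {}

    @staticmethod
    def can_run(name, args, kwargs):
        if name == "pagerank":
            return True
        if name == "louvain":
            # Truthy string, but it still means "cannot run"
            return "only undirected graphs are supported"
        return False


def can_backend_run(backend, name, *args, **kwargs):
    # Same shape as _can_backend_run: the backend must implement the
    # function, can_run must be truthy, and a string reason is a refusal.
    return (
        hasattr(backend, name)
        and (can_run := backend.can_run(name, args, kwargs))
        and not isinstance(can_run, str)
    )
```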
30,341 | networkx.utils.backends | _convert_and_call | Call this dispatchable function with a backend, converting graphs if necessary. | def _convert_and_call(self, backend_name, args, kwargs, *, fallback_to_nx=False):
"""Call this dispatchable function with a backend, converting graphs if necessary."""
backend = _load_backend(backend_name)
if not self._can_backend_run(backend_name, *args, **kwargs):
if fallback_to_nx:
return self.orig_func(*args, **kwargs)
msg = f"'{self.name}' not implemented by {backend_name}"
if hasattr(backend, self.name):
msg += " with the given arguments"
raise RuntimeError(msg)
try:
converted_args, converted_kwargs = self._convert_arguments(
backend_name, args, kwargs, use_cache=config.cache_converted_graphs
)
result = getattr(backend, self.name)(*converted_args, **converted_kwargs)
except (NotImplementedError, nx.NetworkXNotImplemented) as exc:
if fallback_to_nx:
return self.orig_func(*args, **kwargs)
raise
return result
| (self, backend_name, args, kwargs, *, fallback_to_nx=False) |
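The try/except around the backend call in `_convert_and_call` implements a simple fallback policy: attempt the backend first, and if it raises `NotImplementedError` (and fallback is enabled), run the original pure-Python implementation. A stripped-down sketch, with illustrative function names:

```python
def call_with_fallback(backend_func, orig_func, *args, fallback=True, **kwargs):
    """Try the backend implementation; fall back to the original on
    NotImplementedError when fallback is enabled, otherwise re-raise."""
    try:
        return backend_func(*args, **kwargs)
    except NotImplementedError:
        if fallback:
            return orig_func(*args, **kwargs)
        raise


def fast_but_partial(x):
    # Stands in for a backend that only handles some inputs
    if x < 0:
        raise NotImplementedError("negative inputs unsupported")
    return x * 2


def reference(x):
    # Stands in for the original networkx implementation
    return x * 2
```

With `fallback=False` (the `backend=` given explicitly case above), the backend's `NotImplementedError` propagates to the caller instead.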
30,342 | networkx.utils.backends | _convert_and_call_for_tests | Call this dispatchable function with a backend; for use with testing. | def _convert_and_call_for_tests(
self, backend_name, args, kwargs, *, fallback_to_nx=False
):
"""Call this dispatchable function with a backend; for use with testing."""
backend = _load_backend(backend_name)
if not self._can_backend_run(backend_name, *args, **kwargs):
if fallback_to_nx or not self.graphs:
return self.orig_func(*args, **kwargs)
import pytest
msg = f"'{self.name}' not implemented by {backend_name}"
if hasattr(backend, self.name):
msg += " with the given arguments"
pytest.xfail(msg)
from collections.abc import Iterable, Iterator, Mapping
from copy import copy
from io import BufferedReader, BytesIO, StringIO, TextIOWrapper
from itertools import tee
from random import Random
import numpy as np
from numpy.random import Generator, RandomState
from scipy.sparse import sparray
# We sometimes compare the backend result to the original result,
# so we need two sets of arguments. We tee iterators and copy
# random state so that they may be used twice.
if not args:
args1 = args2 = args
else:
args1, args2 = zip(
*(
(arg, copy(arg))
if isinstance(
arg, BytesIO | StringIO | Random | Generator | RandomState
)
else tee(arg)
if isinstance(arg, Iterator)
and not isinstance(arg, BufferedReader | TextIOWrapper)
else (arg, arg)
for arg in args
)
)
if not kwargs:
kwargs1 = kwargs2 = kwargs
else:
kwargs1, kwargs2 = zip(
*(
((k, v), (k, copy(v)))
if isinstance(
v, BytesIO | StringIO | Random | Generator | RandomState
)
else ((k, (teed := tee(v))[0]), (k, teed[1]))
if isinstance(v, Iterator)
and not isinstance(v, BufferedReader | TextIOWrapper)
else ((k, v), (k, v))
for k, v in kwargs.items()
)
)
kwargs1 = dict(kwargs1)
kwargs2 = dict(kwargs2)
try:
converted_args, converted_kwargs = self._convert_arguments(
backend_name, args1, kwargs1, use_cache=False
)
result = getattr(backend, self.name)(*converted_args, **converted_kwargs)
except (NotImplementedError, nx.NetworkXNotImplemented) as exc:
if fallback_to_nx:
return self.orig_func(*args2, **kwargs2)
import pytest
pytest.xfail(
exc.args[0] if exc.args else f"{self.name} raised {type(exc).__name__}"
)
# Verify that `self._returns_graph` is correct. This compares the return type
# to the type expected from `self._returns_graph`. This handles tuple and list
# return types, but *does not* catch functions that yield graphs.
if (
self._returns_graph
!= (
isinstance(result, nx.Graph)
or hasattr(result, "__networkx_backend__")
or isinstance(result, tuple | list)
and any(
isinstance(x, nx.Graph) or hasattr(x, "__networkx_backend__")
for x in result
)
)
and not (
# May return Graph or None
self.name in {"check_planarity", "check_planarity_recursive"}
and any(x is None for x in result)
)
and not (
# May return Graph or dict
self.name in {"held_karp_ascent"}
and any(isinstance(x, dict) for x in result)
)
and self.name
not in {
# yields graphs
"all_triads",
"general_k_edge_subgraphs",
# yields graphs or arrays
"nonisomorphic_trees",
}
):
raise RuntimeError(f"`returns_graph` is incorrect for {self.name}")
def check_result(val, depth=0):
if isinstance(val, np.number):
raise RuntimeError(
f"{self.name} returned a numpy scalar {val} ({type(val)}, depth={depth})"
)
if isinstance(val, np.ndarray | sparray):
return
if isinstance(val, nx.Graph):
check_result(val._node, depth=depth + 1)
check_result(val._adj, depth=depth + 1)
return
if isinstance(val, Iterator):
raise NotImplementedError
if isinstance(val, Iterable) and not isinstance(val, str):
for x in val:
check_result(x, depth=depth + 1)
if isinstance(val, Mapping):
for x in val.values():
check_result(x, depth=depth + 1)
def check_iterator(it):
for val in it:
try:
check_result(val)
except RuntimeError as exc:
raise RuntimeError(
f"{self.name} returned a numpy scalar {val} ({type(val)})"
) from exc
yield val
if self.name in {"from_edgelist"}:
# numpy scalars are explicitly given as values in some tests
pass
elif isinstance(result, Iterator):
result = check_iterator(result)
else:
try:
check_result(result)
except RuntimeError as exc:
raise RuntimeError(
f"{self.name} returned a numpy scalar {result} ({type(result)})"
) from exc
check_result(result)
if self.name in {
"edmonds_karp",
"barycenter",
"contracted_edge",
"contracted_nodes",
"stochastic_graph",
"relabel_nodes",
"maximum_branching",
"incremental_closeness_centrality",
"minimal_branching",
"minimum_spanning_arborescence",
"recursive_simple_cycles",
"connected_double_edge_swap",
}:
# Special-case algorithms that mutate input graphs
bound = self.__signature__.bind(*converted_args, **converted_kwargs)
bound.apply_defaults()
bound2 = self.__signature__.bind(*args2, **kwargs2)
bound2.apply_defaults()
if self.name in {
"minimal_branching",
"minimum_spanning_arborescence",
"recursive_simple_cycles",
"connected_double_edge_swap",
}:
G1 = backend.convert_to_nx(bound.arguments["G"])
G2 = bound2.arguments["G"]
G2._adj = G1._adj
nx._clear_cache(G2)
elif self.name == "edmonds_karp":
R1 = backend.convert_to_nx(bound.arguments["residual"])
R2 = bound2.arguments["residual"]
if R1 is not None and R2 is not None:
for k, v in R1.edges.items():
R2.edges[k]["flow"] = v["flow"]
R2.graph.update(R1.graph)
nx._clear_cache(R2)
elif self.name == "barycenter" and bound.arguments["attr"] is not None:
G1 = backend.convert_to_nx(bound.arguments["G"])
G2 = bound2.arguments["G"]
attr = bound.arguments["attr"]
for k, v in G1.nodes.items():
G2.nodes[k][attr] = v[attr]
nx._clear_cache(G2)
elif (
self.name in {"contracted_nodes", "contracted_edge"}
and not bound.arguments["copy"]
):
# Edges and nodes changed; node "contraction" and edge "weight" attrs
G1 = backend.convert_to_nx(bound.arguments["G"])
G2 = bound2.arguments["G"]
G2.__dict__.update(G1.__dict__)
nx._clear_cache(G2)
elif self.name == "stochastic_graph" and not bound.arguments["copy"]:
G1 = backend.convert_to_nx(bound.arguments["G"])
G2 = bound2.arguments["G"]
for k, v in G1.edges.items():
G2.edges[k]["weight"] = v["weight"]
nx._clear_cache(G2)
elif (
self.name == "relabel_nodes"
and not bound.arguments["copy"]
or self.name in {"incremental_closeness_centrality"}
):
G1 = backend.convert_to_nx(bound.arguments["G"])
G2 = bound2.arguments["G"]
if G1 is G2:
return G2
G2._node.clear()
G2._node.update(G1._node)
G2._adj.clear()
G2._adj.update(G1._adj)
if hasattr(G1, "_pred") and hasattr(G2, "_pred"):
G2._pred.clear()
G2._pred.update(G1._pred)
if hasattr(G1, "_succ") and hasattr(G2, "_succ"):
G2._succ.clear()
G2._succ.update(G1._succ)
nx._clear_cache(G2)
if self.name == "relabel_nodes":
return G2
return backend.convert_to_nx(result)
converted_result = backend.convert_to_nx(result)
if isinstance(converted_result, nx.Graph) and self.name not in {
"boykov_kolmogorov",
"preflow_push",
"quotient_graph",
"shortest_augmenting_path",
"spectral_graph_forge",
# We don't handle tempfile.NamedTemporaryFile arguments
"read_gml",
"read_graph6",
"read_sparse6",
# We don't handle io.BufferedReader or io.TextIOWrapper arguments
"bipartite_read_edgelist",
"read_adjlist",
"read_edgelist",
"read_graphml",
"read_multiline_adjlist",
"read_pajek",
"from_pydot",
"pydot_read_dot",
"agraph_read_dot",
# graph comparison fails b/c of nan values
"read_gexf",
}:
# For graph return types (e.g. generators), we compare that results are
# the same between the backend and networkx, then return the original
# networkx result so the iteration order will be consistent in tests.
G = self.orig_func(*args2, **kwargs2)
if not nx.utils.graphs_equal(G, converted_result):
assert G.number_of_nodes() == converted_result.number_of_nodes()
assert G.number_of_edges() == converted_result.number_of_edges()
assert G.graph == converted_result.graph
assert G.nodes == converted_result.nodes
assert G.adj == converted_result.adj
assert type(G) is type(converted_result)
raise AssertionError("Graphs are not equal")
return G
return converted_result
| (self, backend_name, args, kwargs, *, fallback_to_nx=False) |
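The tee/copy dance at the top of `_convert_and_call_for_tests` exists so the same arguments can be consumed twice: once by the backend and once by networkx for comparison. The per-argument rule can be sketched as below (omitting the `BufferedReader`/`TextIOWrapper` and numpy-random special cases handled in the source):

```python
from collections.abc import Iterator
from copy import copy
from itertools import tee
from random import Random


def duplicate_arg(arg):
    """Return two usable versions of an argument: copy stateful RNGs,
    tee one-shot iterators, and share everything else."""
    if isinstance(arg, Random):
        return arg, copy(arg)  # identical state, independent afterwards
    if isinstance(arg, Iterator):
        return tee(arg)        # each tee'd iterator yields the full sequence
    return arg, arg            # plain values are safe to share
```

Sharing a plain value is fine because it is only read; iterators and RNGs would otherwise be exhausted or advanced by the first call.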
30,343 | networkx.utils.backends | _convert_arguments | Convert graph arguments to the specified backend.
Returns
-------
args tuple and kwargs dict
| def _convert_arguments(self, backend_name, args, kwargs, *, use_cache):
"""Convert graph arguments to the specified backend.
Returns
-------
args tuple and kwargs dict
"""
bound = self.__signature__.bind(*args, **kwargs)
bound.apply_defaults()
if not self.graphs:
bound_kwargs = bound.kwargs
del bound_kwargs["backend"]
return bound.args, bound_kwargs
# Convert graphs into backend graph-like object
# Include the edge and/or node labels if provided to the algorithm
preserve_edge_attrs = self.preserve_edge_attrs
edge_attrs = self.edge_attrs
if preserve_edge_attrs is False:
# e.g. `preserve_edge_attrs=False`
pass
elif preserve_edge_attrs is True:
# e.g. `preserve_edge_attrs=True`
edge_attrs = None
elif isinstance(preserve_edge_attrs, str):
if bound.arguments[preserve_edge_attrs] is True or callable(
bound.arguments[preserve_edge_attrs]
):
# e.g. `preserve_edge_attrs="attr"` and `func(attr=True)`
# e.g. `preserve_edge_attrs="attr"` and `func(attr=myfunc)`
preserve_edge_attrs = True
edge_attrs = None
elif bound.arguments[preserve_edge_attrs] is False and (
isinstance(edge_attrs, str)
and edge_attrs == preserve_edge_attrs
or isinstance(edge_attrs, dict)
and preserve_edge_attrs in edge_attrs
):
# e.g. `preserve_edge_attrs="attr"` and `func(attr=False)`
# Treat `False` argument as meaning "preserve_edge_data=False"
# and not `False` as the edge attribute to use.
preserve_edge_attrs = False
edge_attrs = None
else:
# e.g. `preserve_edge_attrs="attr"` and `func(attr="weight")`
preserve_edge_attrs = False
# Else: e.g. `preserve_edge_attrs={"G": {"weight": 1}}`
if edge_attrs is None:
# May have been set to None above b/c all attributes are preserved
pass
elif isinstance(edge_attrs, str):
if edge_attrs[0] == "[":
# e.g. `edge_attrs="[edge_attributes]"` (argument of list of attributes)
# e.g. `func(edge_attributes=["foo", "bar"])`
edge_attrs = {
edge_attr: 1 for edge_attr in bound.arguments[edge_attrs[1:-1]]
}
elif callable(bound.arguments[edge_attrs]):
# e.g. `edge_attrs="weight"` and `func(weight=myfunc)`
preserve_edge_attrs = True
edge_attrs = None
elif bound.arguments[edge_attrs] is not None:
# e.g. `edge_attrs="weight"` and `func(weight="foo")` (default of 1)
edge_attrs = {bound.arguments[edge_attrs]: 1}
elif self.name == "to_numpy_array" and hasattr(
bound.arguments["dtype"], "names"
):
# Custom handling: attributes may be obtained from `dtype`
edge_attrs = {
edge_attr: 1 for edge_attr in bound.arguments["dtype"].names
}
else:
# e.g. `edge_attrs="weight"` and `func(weight=None)`
edge_attrs = None
else:
# e.g. `edge_attrs={"attr": "default"}` and `func(attr="foo", default=7)`
# e.g. `edge_attrs={"attr": 0}` and `func(attr="foo")`
edge_attrs = {
edge_attr: bound.arguments.get(val, 1) if isinstance(val, str) else val
for key, val in edge_attrs.items()
if (edge_attr := bound.arguments[key]) is not None
}
preserve_node_attrs = self.preserve_node_attrs
node_attrs = self.node_attrs
if preserve_node_attrs is False:
# e.g. `preserve_node_attrs=False`
pass
elif preserve_node_attrs is True:
# e.g. `preserve_node_attrs=True`
node_attrs = None
elif isinstance(preserve_node_attrs, str):
if bound.arguments[preserve_node_attrs] is True or callable(
bound.arguments[preserve_node_attrs]
):
# e.g. `preserve_node_attrs="attr"` and `func(attr=True)`
# e.g. `preserve_node_attrs="attr"` and `func(attr=myfunc)`
preserve_node_attrs = True
node_attrs = None
elif bound.arguments[preserve_node_attrs] is False and (
isinstance(node_attrs, str)
and node_attrs == preserve_node_attrs
or isinstance(node_attrs, dict)
and preserve_node_attrs in node_attrs
):
# e.g. `preserve_node_attrs="attr"` and `func(attr=False)`
# Treat `False` argument as meaning "preserve_node_data=False"
# and not `False` as the node attribute to use. Is this used?
preserve_node_attrs = False
node_attrs = None
else:
# e.g. `preserve_node_attrs="attr"` and `func(attr="weight")`
preserve_node_attrs = False
# Else: e.g. `preserve_node_attrs={"G": {"pos": None}}`
if node_attrs is None:
# May have been set to None above b/c all attributes are preserved
pass
elif isinstance(node_attrs, str):
if node_attrs[0] == "[":
# e.g. `node_attrs="[node_attributes]"` (argument of list of attributes)
# e.g. `func(node_attributes=["foo", "bar"])`
node_attrs = {
node_attr: None for node_attr in bound.arguments[node_attrs[1:-1]]
}
elif callable(bound.arguments[node_attrs]):
# e.g. `node_attrs="weight"` and `func(weight=myfunc)`
preserve_node_attrs = True
node_attrs = None
elif bound.arguments[node_attrs] is not None:
# e.g. `node_attrs="weight"` and `func(weight="foo")`
node_attrs = {bound.arguments[node_attrs]: None}
else:
# e.g. `node_attrs="weight"` and `func(weight=None)`
node_attrs = None
else:
# e.g. `node_attrs={"attr": "default"}` and `func(attr="foo", default=7)`
# e.g. `node_attrs={"attr": 0}` and `func(attr="foo")`
node_attrs = {
node_attr: bound.arguments.get(val) if isinstance(val, str) else val
for key, val in node_attrs.items()
if (node_attr := bound.arguments[key]) is not None
}
preserve_graph_attrs = self.preserve_graph_attrs
# It should be safe to assume that we either have networkx graphs or backend graphs.
# Future work: allow conversions between backends.
for gname in self.graphs:
if gname in self.list_graphs:
bound.arguments[gname] = [
self._convert_graph(
backend_name,
g,
edge_attrs=edge_attrs,
node_attrs=node_attrs,
preserve_edge_attrs=preserve_edge_attrs,
preserve_node_attrs=preserve_node_attrs,
preserve_graph_attrs=preserve_graph_attrs,
graph_name=gname,
use_cache=use_cache,
)
if getattr(g, "__networkx_backend__", "networkx") == "networkx"
else g
for g in bound.arguments[gname]
]
else:
graph = bound.arguments[gname]
if graph is None:
if gname in self.optional_graphs:
continue
raise TypeError(
f"Missing required graph argument `{gname}` in {self.name} function"
)
if isinstance(preserve_edge_attrs, dict):
preserve_edges = False
edges = preserve_edge_attrs.get(gname, edge_attrs)
else:
preserve_edges = preserve_edge_attrs
edges = edge_attrs
if isinstance(preserve_node_attrs, dict):
preserve_nodes = False
nodes = preserve_node_attrs.get(gname, node_attrs)
else:
preserve_nodes = preserve_node_attrs
nodes = node_attrs
if isinstance(preserve_graph_attrs, set):
preserve_graph = gname in preserve_graph_attrs
else:
preserve_graph = preserve_graph_attrs
if getattr(graph, "__networkx_backend__", "networkx") == "networkx":
bound.arguments[gname] = self._convert_graph(
backend_name,
graph,
edge_attrs=edges,
node_attrs=nodes,
preserve_edge_attrs=preserve_edges,
preserve_node_attrs=preserve_nodes,
preserve_graph_attrs=preserve_graph,
graph_name=gname,
use_cache=use_cache,
)
bound_kwargs = bound.kwargs
del bound_kwargs["backend"]
return bound.args, bound_kwargs
| (self, backend_name, args, kwargs, *, use_cache) |
30,344 | networkx.utils.backends | _convert_graph | null | def _convert_graph(
self,
backend_name,
graph,
*,
edge_attrs,
node_attrs,
preserve_edge_attrs,
preserve_node_attrs,
preserve_graph_attrs,
graph_name,
use_cache,
):
if (
use_cache
and (nx_cache := getattr(graph, "__networkx_cache__", None)) is not None
):
cache = nx_cache.setdefault("backends", {}).setdefault(backend_name, {})
# edge_attrs: dict | None
# node_attrs: dict | None
# preserve_edge_attrs: bool (False if edge_attrs is not None)
# preserve_node_attrs: bool (False if node_attrs is not None)
# preserve_graph_attrs: bool
key = edge_key, node_key, graph_key = (
frozenset(edge_attrs.items())
if edge_attrs is not None
else preserve_edge_attrs,
frozenset(node_attrs.items())
if node_attrs is not None
else preserve_node_attrs,
preserve_graph_attrs,
)
if cache:
warning_message = (
f"Using cached graph for {backend_name!r} backend in "
f"call to {self.name}.\n\nFor the cache to be consistent "
"(i.e., correct), the input graph must not have been "
"manually mutated since the cached graph was created. "
"Examples of manually mutating the graph data structures "
"resulting in an inconsistent cache include:\n\n"
" >>> G[u][v][key] = val\n\n"
"and\n\n"
" >>> for u, v, d in G.edges(data=True):\n"
" ... d[key] = val\n\n"
"Using methods such as `G.add_edge(u, v, weight=val)` "
"will correctly clear the cache to keep it consistent. "
"You may also use `G.__networkx_cache__.clear()` to "
"manually clear the cache, or set `G.__networkx_cache__` "
"to None to disable caching for G. Enable or disable "
"caching via `nx.config.cache_converted_graphs` config."
)
# Do a simple search for a cached graph with compatible data.
# For example, if we need a single attribute, then it's okay
# to use a cached graph that preserved all attributes.
# This looks for an exact match first.
for compat_key in itertools.product(
(edge_key, True) if edge_key is not True else (True,),
(node_key, True) if node_key is not True else (True,),
(graph_key, True) if graph_key is not True else (True,),
):
if (rv := cache.get(compat_key)) is not None:
warnings.warn(warning_message)
return rv
if edge_key is not True and node_key is not True:
# Iterate over the items in `cache` to see if any are compatible.
# For example, if no edge attributes are needed, then a graph
# with any edge attribute will suffice. We use the same logic
# below (but switched) to clear unnecessary items from the cache.
# Use `list(cache.items())` to be thread-safe.
for (ekey, nkey, gkey), val in list(cache.items()):
if edge_key is False or ekey is True:
pass
elif (
edge_key is True
or ekey is False
or not edge_key.issubset(ekey)
):
continue
if node_key is False or nkey is True:
pass
elif (
node_key is True
or nkey is False
or not node_key.issubset(nkey)
):
continue
if graph_key and not gkey:
continue
warnings.warn(warning_message)
return val
backend = _load_backend(backend_name)
rv = backend.convert_from_nx(
graph,
edge_attrs=edge_attrs,
node_attrs=node_attrs,
preserve_edge_attrs=preserve_edge_attrs,
preserve_node_attrs=preserve_node_attrs,
preserve_graph_attrs=preserve_graph_attrs,
name=self.name,
graph_name=graph_name,
)
if use_cache and nx_cache is not None:
# Remove old cached items that are no longer necessary since they
# are dominated/subsumed/outdated by what was just calculated.
# This uses the same logic as above, but with keys switched.
cache[key] = rv # Set at beginning to be thread-safe
for cur_key in list(cache):
if cur_key == key:
continue
ekey, nkey, gkey = cur_key
if ekey is False or edge_key is True:
pass
elif ekey is True or edge_key is False or not ekey.issubset(edge_key):
continue
if nkey is False or node_key is True:
pass
elif nkey is True or node_key is False or not nkey.issubset(node_key):
continue
if gkey and not graph_key:
continue
cache.pop(cur_key, None) # Use pop instead of del to be thread-safe
return rv
| (self, backend_name, graph, *, edge_attrs, node_attrs, preserve_edge_attrs, preserve_node_attrs, preserve_graph_attrs, graph_name, use_cache) |
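The cache lookup above treats `True` as "all attributes preserved" and a frozenset of (attribute, default) pairs as "exactly these attributes"; a cached conversion is reusable when it carries at least what the current call needs. A minimal stand-alone sketch of that compatibility test (the key values here are illustrative, not the actual cache keys):

```python
def is_compatible(needed, cached):
    """Can a cached conversion with attribute key `cached` serve a
    request with attribute key `needed`?

    `True` means "all attributes", `False` means "no attributes", and a
    frozenset of (attr, default) pairs means "exactly these attributes".
    """
    if needed is False or cached is True:
        return True  # nothing is needed, or everything was preserved
    if needed is True or cached is False:
        return False  # everything is needed, or nothing was preserved
    return needed.issubset(cached)  # partial vs. partial: subset check

# a graph cached with weight+capacity can serve a weight-only request
assert is_compatible(frozenset({("weight", 1)}),
                     frozenset({("weight", 1), ("capacity", 0)}))
```

The same subset logic, with the roles of the keys switched, drives the cache-eviction loop at the end of `_convert_graph`.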
30,345 | networkx.utils.backends | _make_doc | Generate the backends section at the end for functions having an alternate
backend implementation(s) using the `backend_info` entry-point. | def _make_doc(self):
"""Generate the backends section at the end for functions having an alternate
backend implementation(s) using the `backend_info` entry-point."""
if not self.backends:
return self._orig_doc
lines = [
"Backends",
"--------",
]
for backend in sorted(self.backends):
info = backend_info[backend]
if "short_summary" in info:
lines.append(f"{backend} : {info['short_summary']}")
else:
lines.append(backend)
if "functions" not in info or self.name not in info["functions"]:
lines.append("")
continue
func_info = info["functions"][self.name]
# Renaming extra_docstring to additional_docs
if func_docs := (
func_info.get("additional_docs") or func_info.get("extra_docstring")
):
lines.extend(
f" {line}" if line else line for line in func_docs.split("\n")
)
add_gap = True
else:
add_gap = False
# Renaming extra_parameters to additional_parameters
if extra_parameters := (
func_info.get("extra_parameters")
or func_info.get("additional_parameters")
):
if add_gap:
lines.append("")
lines.append(" Additional parameters:")
for param in sorted(extra_parameters):
lines.append(f" {param}")
if desc := extra_parameters[param]:
lines.append(f" {desc}")
lines.append("")
else:
lines.append("")
if func_url := func_info.get("url"):
lines.append(f"[`Source <{func_url}>`_]")
lines.append("")
lines.pop() # Remove last empty line
to_add = "\n ".join(lines)
return f"{self._orig_doc.rstrip()}\n\n {to_add}"
| (self) |
30,346 | networkx.utils.backends | _should_backend_run | Can/should the specified backend run this algorithm with these arguments? | def _should_backend_run(self, backend_name, /, *args, **kwargs):
"""Can/should the specified backend run this algorithm with these arguments?"""
backend = _load_backend(backend_name)
# `backend.can_run` and `backend.should_run` may return strings that describe
# why they can't or shouldn't be run. We plan to use the strings in the future.
return (
hasattr(backend, self.name)
and (can_run := backend.can_run(self.name, args, kwargs))
and not isinstance(can_run, str)
and (should_run := backend.should_run(self.name, args, kwargs))
and not isinstance(should_run, str)
)
| (self, backend_name, /, *args, **kwargs) |
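As the inline comment notes, `can_run`/`should_run` may return a descriptive string explaining a refusal; since non-empty strings are truthy, the guard must reject strings explicitly. A tiny sketch of that check in isolation:

```python
def backend_says_yes(result):
    # Backends may answer True/False, or a string describing why they
    # can't or shouldn't run; only a truthy non-string counts as "yes".
    return bool(result) and not isinstance(result, str)

assert backend_says_yes(True)
assert not backend_says_yes("multigraphs are not supported")
```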
30,347 | networkx.lazy_imports | _lazy_import | Return a lazily imported proxy for a module or library.
Warning
-------
Importing using this function can currently cause trouble
when the user tries to import from a subpackage of a module before
the package is fully imported. In particular, this idiom may not work:
np = lazy_import("numpy")
from numpy.lib import recfunctions
This is due to a difference in the way Python's LazyLoader handles
subpackage imports compared to the normal import process. Hopefully
we will get Python's LazyLoader to fix this, or find a workaround.
In the meantime, this is a potential problem.
The workaround is to import numpy before importing from the subpackage.
Notes
-----
We often see the following pattern::
def myfunc():
import scipy as sp
sp.argmin(...)
....
This is to prevent a library, in this case `scipy`, from being
imported at function definition time, since that can be slow.
This function provides a proxy module that, upon access, imports
the actual module. So the idiom equivalent to the above example is::
sp = lazy.load("scipy")
def myfunc():
sp.argmin(...)
....
The initial import time is fast because the actual import is delayed
until the first attribute is requested. The overall import time may
decrease as well for users that don't make use of large portions
of the library.
Parameters
----------
fullname : str
The full name of the package or subpackage to import. For example::
sp = lazy.load("scipy") # import scipy as sp
spla = lazy.load("scipy.linalg") # import scipy.linalg as spla
Returns
-------
pm : importlib.util._LazyModule
Proxy module. Can be used like any regularly imported module.
Actual loading of the module occurs upon first attribute request.
| def _lazy_import(fullname):
"""Return a lazily imported proxy for a module or library.
Warning
-------
Importing using this function can currently cause trouble
when the user tries to import from a subpackage of a module before
the package is fully imported. In particular, this idiom may not work:
np = lazy_import("numpy")
from numpy.lib import recfunctions
This is due to a difference in the way Python's LazyLoader handles
subpackage imports compared to the normal import process. Hopefully
we will get Python's LazyLoader to fix this, or find a workaround.
In the meantime, this is a potential problem.
The workaround is to import numpy before importing from the subpackage.
Notes
-----
We often see the following pattern::
def myfunc():
import scipy as sp
sp.argmin(...)
....
This is to prevent a library, in this case `scipy`, from being
imported at function definition time, since that can be slow.
This function provides a proxy module that, upon access, imports
the actual module. So the idiom equivalent to the above example is::
sp = lazy.load("scipy")
def myfunc():
sp.argmin(...)
....
The initial import time is fast because the actual import is delayed
until the first attribute is requested. The overall import time may
decrease as well for users that don't make use of large portions
of the library.
Parameters
----------
fullname : str
The full name of the package or subpackage to import. For example::
sp = lazy.load("scipy") # import scipy as sp
spla = lazy.load("scipy.linalg") # import scipy.linalg as spla
Returns
-------
pm : importlib.util._LazyModule
Proxy module. Can be used like any regularly imported module.
Actual loading of the module occurs upon first attribute request.
"""
try:
return sys.modules[fullname]
except KeyError:
pass
# Not previously loaded -- look it up
spec = importlib.util.find_spec(fullname)
if spec is None:
try:
parent = inspect.stack()[1]
frame_data = {
"spec": fullname,
"filename": parent.filename,
"lineno": parent.lineno,
"function": parent.function,
"code_context": parent.code_context,
}
return DelayedImportErrorModule(frame_data, "DelayedImportErrorModule")
finally:
del parent
module = importlib.util.module_from_spec(spec)
sys.modules[fullname] = module
loader = importlib.util.LazyLoader(spec.loader)
loader.exec_module(module)
return module
| (fullname) |
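The function above follows the documented `importlib.util.LazyLoader` recipe: register a module object whose actual execution is deferred until the first attribute access. A self-contained sketch of that pattern (without the `DelayedImportErrorModule` fallback for missing modules):

```python
import importlib.util
import sys

def lazy_load(fullname):
    """Return `fullname` as a lazily executed module (stdlib recipe)."""
    if fullname in sys.modules:  # already imported: return it eagerly
        return sys.modules[fullname]
    spec = importlib.util.find_spec(fullname)
    spec.loader = importlib.util.LazyLoader(spec.loader)
    module = importlib.util.module_from_spec(spec)
    sys.modules[fullname] = module  # registered, but not yet executed
    spec.loader.exec_module(module)
    return module

fractions = lazy_load("fractions")  # module body runs on first attribute access
```

As the Warning above says, `from package.sub import name` bypasses this proxy, so subpackage imports should still be done eagerly.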
30,348 | networkx.algorithms.link_prediction | adamic_adar_index | Compute the Adamic-Adar index of all node pairs in ebunch.
Adamic-Adar index of `u` and `v` is defined as
.. math::
\sum_{w \in \Gamma(u) \cap \Gamma(v)} \frac{1}{\log |\Gamma(w)|}
where $\Gamma(u)$ denotes the set of neighbors of $u$.
This index leads to zero-division for nodes only connected via self-loops.
It is intended to be used when no self-loops are present.
Parameters
----------
G : graph
NetworkX undirected graph.
ebunch : iterable of node pairs, optional (default = None)
Adamic-Adar index will be computed for each pair of nodes given
in the iterable. The pairs must be given as 2-tuples (u, v)
where u and v are nodes in the graph. If ebunch is None then all
nonexistent edges in the graph will be used.
Default value: None.
Returns
-------
piter : iterator
An iterator of 3-tuples in the form (u, v, p) where (u, v) is a
pair of nodes and p is their Adamic-Adar index.
Raises
------
NetworkXNotImplemented
If `G` is a `DiGraph`, a `MultiGraph`, or a `MultiDiGraph`.
NodeNotFound
If `ebunch` has a node that is not in `G`.
Examples
--------
>>> G = nx.complete_graph(5)
>>> preds = nx.adamic_adar_index(G, [(0, 1), (2, 3)])
>>> for u, v, p in preds:
... print(f"({u}, {v}) -> {p:.8f}")
(0, 1) -> 2.16404256
(2, 3) -> 2.16404256
References
----------
.. [1] D. Liben-Nowell, J. Kleinberg.
The Link Prediction Problem for Social Networks (2004).
http://www.cs.cornell.edu/home/kleinber/link-pred.pdf
| null | (G, ebunch=None, *, backend=None, **backend_kwargs) |
30,349 | networkx.classes.function | add_cycle | Add a cycle to the Graph G_to_add_to.
Parameters
----------
G_to_add_to : graph
A NetworkX graph
nodes_for_cycle: iterable container
A container of nodes. A cycle will be constructed from
the nodes (in order) and added to the graph.
attr : keyword arguments, optional (default= no attributes)
Attributes to add to every edge in cycle.
See Also
--------
add_path, add_star
Examples
--------
>>> G = nx.Graph() # or DiGraph, MultiGraph, MultiDiGraph, etc
>>> nx.add_cycle(G, [0, 1, 2, 3])
>>> nx.add_cycle(G, [10, 11, 12], weight=7)
| def add_cycle(G_to_add_to, nodes_for_cycle, **attr):
"""Add a cycle to the Graph G_to_add_to.
Parameters
----------
G_to_add_to : graph
A NetworkX graph
nodes_for_cycle: iterable container
A container of nodes. A cycle will be constructed from
the nodes (in order) and added to the graph.
attr : keyword arguments, optional (default= no attributes)
Attributes to add to every edge in cycle.
See Also
--------
add_path, add_star
Examples
--------
>>> G = nx.Graph() # or DiGraph, MultiGraph, MultiDiGraph, etc
>>> nx.add_cycle(G, [0, 1, 2, 3])
>>> nx.add_cycle(G, [10, 11, 12], weight=7)
"""
nlist = iter(nodes_for_cycle)
try:
first_node = next(nlist)
except StopIteration:
return
G_to_add_to.add_node(first_node)
G_to_add_to.add_edges_from(
pairwise(chain((first_node,), nlist), cyclic=True), **attr
)
| (G_to_add_to, nodes_for_cycle, **attr) |
30,350 | networkx.classes.function | add_path | Add a path to the Graph G_to_add_to.
Parameters
----------
G_to_add_to : graph
A NetworkX graph
nodes_for_path : iterable container
A container of nodes. A path will be constructed from
the nodes (in order) and added to the graph.
attr : keyword arguments, optional (default= no attributes)
Attributes to add to every edge in path.
See Also
--------
add_star, add_cycle
Examples
--------
>>> G = nx.Graph()
>>> nx.add_path(G, [0, 1, 2, 3])
>>> nx.add_path(G, [10, 11, 12], weight=7)
| def add_path(G_to_add_to, nodes_for_path, **attr):
"""Add a path to the Graph G_to_add_to.
Parameters
----------
G_to_add_to : graph
A NetworkX graph
nodes_for_path : iterable container
A container of nodes. A path will be constructed from
the nodes (in order) and added to the graph.
attr : keyword arguments, optional (default= no attributes)
Attributes to add to every edge in path.
See Also
--------
add_star, add_cycle
Examples
--------
>>> G = nx.Graph()
>>> nx.add_path(G, [0, 1, 2, 3])
>>> nx.add_path(G, [10, 11, 12], weight=7)
"""
nlist = iter(nodes_for_path)
try:
first_node = next(nlist)
except StopIteration:
return
G_to_add_to.add_node(first_node)
G_to_add_to.add_edges_from(pairwise(chain((first_node,), nlist)), **attr)
| (G_to_add_to, nodes_for_path, **attr) |
30,351 | networkx.classes.function | add_star | Add a star to Graph G_to_add_to.
The first node in `nodes_for_star` is the middle of the star.
It is connected to all other nodes.
Parameters
----------
G_to_add_to : graph
A NetworkX graph
nodes_for_star : iterable container
A container of nodes.
attr : keyword arguments, optional (default= no attributes)
Attributes to add to every edge in star.
See Also
--------
add_path, add_cycle
Examples
--------
>>> G = nx.Graph()
>>> nx.add_star(G, [0, 1, 2, 3])
>>> nx.add_star(G, [10, 11, 12], weight=2)
| def add_star(G_to_add_to, nodes_for_star, **attr):
"""Add a star to Graph G_to_add_to.
The first node in `nodes_for_star` is the middle of the star.
It is connected to all other nodes.
Parameters
----------
G_to_add_to : graph
A NetworkX graph
nodes_for_star : iterable container
A container of nodes.
attr : keyword arguments, optional (default= no attributes)
Attributes to add to every edge in star.
See Also
--------
add_path, add_cycle
Examples
--------
>>> G = nx.Graph()
>>> nx.add_star(G, [0, 1, 2, 3])
>>> nx.add_star(G, [10, 11, 12], weight=2)
"""
nlist = iter(nodes_for_star)
try:
v = next(nlist)
except StopIteration:
return
G_to_add_to.add_node(v)
edges = ((v, n) for n in nlist)
G_to_add_to.add_edges_from(edges, **attr)
| (G_to_add_to, nodes_for_star, **attr) |
30,353 | networkx.readwrite.json_graph.adjacency | adjacency_data | Returns data in adjacency format that is suitable for JSON serialization
and use in JavaScript documents.
Parameters
----------
G : NetworkX graph
attrs : dict
A dictionary that contains two keys 'id' and 'key'. The corresponding
values provide the attribute names for storing NetworkX-internal graph
data. The values should be unique. Default value:
:samp:`dict(id='id', key='key')`.
If some user-defined graph data use these attribute names as data keys,
they may be silently dropped.
Returns
-------
data : dict
A dictionary with adjacency formatted data.
Raises
------
NetworkXError
If values in attrs are not unique.
Examples
--------
>>> from networkx.readwrite import json_graph
>>> G = nx.Graph([(1, 2)])
>>> data = json_graph.adjacency_data(G)
To serialize with json
>>> import json
>>> s = json.dumps(data)
Notes
-----
Graph, node, and link attributes will be written when using this format
but attribute keys must be strings if you want to serialize the resulting
data with JSON.
The default value of attrs will be changed in a future release of NetworkX.
See Also
--------
adjacency_graph, node_link_data, tree_data
| def adjacency_data(G, attrs=_attrs):
"""Returns data in adjacency format that is suitable for JSON serialization
and use in JavaScript documents.
Parameters
----------
G : NetworkX graph
attrs : dict
A dictionary that contains two keys 'id' and 'key'. The corresponding
values provide the attribute names for storing NetworkX-internal graph
data. The values should be unique. Default value:
:samp:`dict(id='id', key='key')`.
If some user-defined graph data use these attribute names as data keys,
they may be silently dropped.
Returns
-------
data : dict
A dictionary with adjacency formatted data.
Raises
------
NetworkXError
If values in attrs are not unique.
Examples
--------
>>> from networkx.readwrite import json_graph
>>> G = nx.Graph([(1, 2)])
>>> data = json_graph.adjacency_data(G)
To serialize with json
>>> import json
>>> s = json.dumps(data)
Notes
-----
Graph, node, and link attributes will be written when using this format
but attribute keys must be strings if you want to serialize the resulting
data with JSON.
The default value of attrs will be changed in a future release of NetworkX.
See Also
--------
adjacency_graph, node_link_data, tree_data
"""
multigraph = G.is_multigraph()
id_ = attrs["id"]
# Allow 'key' to be omitted from attrs if the graph is not a multigraph.
key = None if not multigraph else attrs["key"]
if id_ == key:
raise nx.NetworkXError("Attribute names are not unique.")
data = {}
data["directed"] = G.is_directed()
data["multigraph"] = multigraph
data["graph"] = list(G.graph.items())
data["nodes"] = []
data["adjacency"] = []
for n, nbrdict in G.adjacency():
data["nodes"].append({**G.nodes[n], id_: n})
adj = []
if multigraph:
for nbr, keys in nbrdict.items():
for k, d in keys.items():
adj.append({**d, id_: nbr, key: k})
else:
for nbr, d in nbrdict.items():
adj.append({**d, id_: nbr})
data["adjacency"].append(adj)
return data
| (G, attrs={'id': 'id', 'key': 'key'}) |
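For a concrete picture of the structure this builds, here is the adjacency-format dict for the docstring's graph `1 - 2`, written by hand (no networkx import) and round-tripped through JSON:

```python
import json

# adjacency format for an undirected graph with the single edge (1, 2)
data = {
    "directed": False,
    "multigraph": False,
    "graph": [],                      # list of (key, value) graph attributes
    "nodes": [{"id": 1}, {"id": 2}],  # node attrs plus the 'id' key
    "adjacency": [                    # one neighbor list per node, in order
        [{"id": 2}],                  # neighbors of node 1
        [{"id": 1}],                  # neighbors of node 2
    ],
}
roundtrip = json.loads(json.dumps(data))
assert roundtrip == data
```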
30,354 | networkx.readwrite.json_graph.adjacency | adjacency_graph | Returns graph from adjacency data format.
Parameters
----------
data : dict
Adjacency list formatted graph data
directed : bool
If True, and direction not specified in data, return a directed graph.
multigraph : bool
If True, and multigraph not specified in data, return a multigraph.
attrs : dict
A dictionary that contains two keys 'id' and 'key'. The corresponding
values provide the attribute names for storing NetworkX-internal graph
data. The values should be unique. Default value:
:samp:`dict(id='id', key='key')`.
Returns
-------
G : NetworkX graph
A NetworkX graph object
Examples
--------
>>> from networkx.readwrite import json_graph
>>> G = nx.Graph([(1, 2)])
>>> data = json_graph.adjacency_data(G)
>>> H = json_graph.adjacency_graph(data)
Notes
-----
The default value of attrs will be changed in a future release of NetworkX.
See Also
--------
adjacency_graph, node_link_data, tree_data
| null | (data, directed=False, multigraph=True, attrs={'id': 'id', 'key': 'key'}, *, backend=None, **backend_kwargs) |
30,355 | networkx.linalg.graphmatrix | adjacency_matrix | Returns adjacency matrix of G.
Parameters
----------
G : graph
A NetworkX graph
nodelist : list, optional
The rows and columns are ordered according to the nodes in nodelist.
If nodelist is None, then the ordering is produced by G.nodes().
dtype : NumPy data-type, optional
The desired data-type for the array.
If None, then the NumPy default is used.
weight : string or None, optional (default='weight')
The edge data key used to provide each value in the matrix.
If None, then each edge has weight 1.
Returns
-------
A : SciPy sparse array
Adjacency matrix representation of G.
Notes
-----
For directed graphs, entry i,j corresponds to an edge from i to j.
If you want a pure Python adjacency matrix representation try
networkx.convert.to_dict_of_dicts which will return a
dictionary-of-dictionaries format that can be addressed as a
sparse matrix.
For MultiGraph/MultiDiGraph with parallel edges the weights are summed.
See `to_numpy_array` for other options.
The convention used for self-loop edges in graphs is to assign the
diagonal matrix entry value to the edge weight attribute
(or the number 1 if the edge has no weight attribute). If the
alternate convention of doubling the edge weight is desired the
resulting SciPy sparse array can be modified as follows:
>>> G = nx.Graph([(1, 1)])
>>> A = nx.adjacency_matrix(G)
>>> print(A.todense())
[[1]]
>>> A.setdiag(A.diagonal() * 2)
>>> print(A.todense())
[[2]]
See Also
--------
to_numpy_array
to_scipy_sparse_array
to_dict_of_dicts
adjacency_spectrum
| null | (G, nodelist=None, dtype=None, weight='weight', *, backend=None, **backend_kwargs) |
30,356 | networkx.linalg.spectrum | adjacency_spectrum | Returns eigenvalues of the adjacency matrix of G.
Parameters
----------
G : graph
A NetworkX graph
weight : string or None, optional (default='weight')
The edge data key used to compute each value in the matrix.
If None, then each edge has weight 1.
Returns
-------
evals : NumPy array
Eigenvalues
Notes
-----
For MultiGraph/MultiDiGraph, the edges weights are summed.
See to_numpy_array for other options.
See Also
--------
adjacency_matrix
| null | (G, weight='weight', *, backend=None, **backend_kwargs) |
30,358 | networkx.linalg.algebraicconnectivity | algebraic_connectivity | Returns the algebraic connectivity of an undirected graph.
The algebraic connectivity of a connected undirected graph is the second
smallest eigenvalue of its Laplacian matrix.
Parameters
----------
G : NetworkX graph
An undirected graph.
weight : object, optional (default: None)
The data key used to determine the weight of each edge. If None, then
each edge has unit weight.
normalized : bool, optional (default: False)
Whether the normalized Laplacian matrix is used.
tol : float, optional (default: 1e-8)
Tolerance of relative residual in eigenvalue computation.
method : string, optional (default: 'tracemin_pcg')
Method of eigenvalue computation. It must be one of the tracemin
options shown below (TraceMIN), 'lanczos' (Lanczos iteration)
or 'lobpcg' (LOBPCG).
The TraceMIN algorithm uses a linear system solver. The following
values allow specifying the solver to be used.
=============== ========================================
Value Solver
=============== ========================================
'tracemin_pcg' Preconditioned conjugate gradient method
'tracemin_lu' LU factorization
=============== ========================================
seed : integer, random_state, or None (default)
Indicator of random number generation state.
See :ref:`Randomness<randomness>`.
Returns
-------
algebraic_connectivity : float
Algebraic connectivity.
Raises
------
NetworkXNotImplemented
If G is directed.
NetworkXError
If G has less than two nodes.
Notes
-----
Edge weights are interpreted by their absolute values. For MultiGraph's,
weights of parallel edges are summed. Zero-weighted edges are ignored.
See Also
--------
laplacian_matrix
Examples
--------
For undirected graphs, algebraic connectivity can tell us if a graph is connected or not:
`G` is connected iff ``algebraic_connectivity(G) > 0``:
>>> G = nx.complete_graph(5)
>>> nx.algebraic_connectivity(G) > 0
True
>>> G.add_node(10) # G is no longer connected
>>> nx.algebraic_connectivity(G) > 0
False
| null | (G, weight='weight', normalized=False, tol=1e-08, method='tracemin_pcg', seed=None, *, backend=None, **backend_kwargs) |
30,362 | networkx.classes.function | all_neighbors | Returns all of the neighbors of a node in the graph.
If the graph is directed returns predecessors as well as successors.
Parameters
----------
graph : NetworkX graph
Graph to find neighbors.
node : node
The node whose neighbors will be returned.
Returns
-------
neighbors : iterator
Iterator of neighbors
| def all_neighbors(graph, node):
"""Returns all of the neighbors of a node in the graph.
If the graph is directed returns predecessors as well as successors.
Parameters
----------
graph : NetworkX graph
Graph to find neighbors.
node : node
The node whose neighbors will be returned.
Returns
-------
neighbors : iterator
Iterator of neighbors
"""
if graph.is_directed():
values = chain(graph.predecessors(node), graph.successors(node))
else:
values = graph.neighbors(node)
return values
| (graph, node) |
30,363 | networkx.algorithms.connectivity.kcutsets | all_node_cuts | Returns all minimum k cutsets of an undirected graph G.
This implementation is based on Kanevsky's algorithm [1]_ for finding all
minimum-size node cut-sets of an undirected graph G; ie the set (or sets)
of nodes of cardinality equal to the node connectivity of G. Thus if
removed, would break G into two or more connected components.
Parameters
----------
G : NetworkX graph
Undirected graph
k : Integer
Node connectivity of the input graph. If k is None, then it is
computed. Default value: None.
flow_func : function
Function to perform the underlying flow computations. Default value is
:func:`~networkx.algorithms.flow.edmonds_karp`. This function performs
better in sparse graphs with right tailed degree distributions.
:func:`~networkx.algorithms.flow.shortest_augmenting_path` will
perform better in denser graphs.
Returns
-------
cuts : a generator of node cutsets
Each node cutset has cardinality equal to the node connectivity of
the input graph.
Examples
--------
>>> # A two-dimensional grid graph has 4 cutsets of cardinality 2
>>> G = nx.grid_2d_graph(5, 5)
>>> cutsets = list(nx.all_node_cuts(G))
>>> len(cutsets)
4
>>> all(2 == len(cutset) for cutset in cutsets)
True
>>> nx.node_connectivity(G)
2
Notes
-----
This implementation is based on the sequential algorithm for finding all
minimum-size separating vertex sets in a graph [1]_. The main idea is to
compute minimum cuts using local maximum flow computations among a set
of nodes of highest degree and all other non-adjacent nodes in the Graph.
Once we find a minimum cut, we add an edge between the high degree
node and the target node of the local maximum flow computation to make
sure that we will not find that minimum cut again.
See also
--------
node_connectivity
edmonds_karp
shortest_augmenting_path
References
----------
.. [1] Kanevsky, A. (1993). Finding all minimum-size separating vertex
sets in a graph. Networks 23(6), 533--541.
http://onlinelibrary.wiley.com/doi/10.1002/net.3230230604/abstract
| null | (G, k=None, flow_func=None, *, backend=None, **backend_kwargs) |
30,364 | networkx.algorithms.shortest_paths.generic | all_pairs_all_shortest_paths | Compute all shortest paths between all nodes.
Parameters
----------
G : NetworkX graph
weight : None, string or function, optional (default = None)
If None, every edge has weight/distance/cost 1.
If a string, use this edge attribute as the edge weight.
Any edge attribute not present defaults to 1.
If this is a function, the weight of an edge is the value
returned by the function. The function must accept exactly
three positional arguments: the two endpoints of an edge and
the dictionary of edge attributes for that edge.
The function must return a number.
method : string, optional (default = 'dijkstra')
The algorithm to use to compute the path lengths.
Supported options: 'dijkstra', 'bellman-ford'.
Other inputs produce a ValueError.
If `weight` is None, unweighted graph methods are used, and this
suggestion is ignored.
Returns
-------
paths : generator of dictionary
Dictionary of arrays, keyed by source and target, of all shortest paths.
Raises
------
ValueError
If `method` is not among the supported options.
Examples
--------
>>> G = nx.cycle_graph(4)
>>> dict(nx.all_pairs_all_shortest_paths(G))[0][2]
[[0, 1, 2], [0, 3, 2]]
>>> dict(nx.all_pairs_all_shortest_paths(G))[0][3]
[[0, 3]]
Notes
-----
There may be multiple shortest paths with equal lengths. Unlike
all_pairs_shortest_path, this method returns all shortest paths.
See Also
--------
all_pairs_shortest_path
single_source_all_shortest_paths
| null | (G, weight=None, method='dijkstra', *, backend=None, **backend_kwargs) |
30,365 | networkx.algorithms.shortest_paths.weighted | all_pairs_bellman_ford_path | Compute shortest paths between all nodes in a weighted graph.
Parameters
----------
G : NetworkX graph
weight : string or function (default="weight")
If this is a string, then edge weights will be accessed via the
edge attribute with this key (that is, the weight of the edge
joining `u` to `v` will be ``G.edges[u, v][weight]``). If no
such edge attribute exists, the weight of the edge is assumed to
be one.
If this is a function, the weight of an edge is the value
returned by the function. The function must accept exactly three
positional arguments: the two endpoints of an edge and the
dictionary of edge attributes for that edge. The function must
return a number.
Returns
-------
paths : iterator
(source, dictionary) iterator with dictionary keyed by target and
shortest path as the key value.
Examples
--------
>>> G = nx.path_graph(5)
>>> path = dict(nx.all_pairs_bellman_ford_path(G))
>>> path[0][4]
[0, 1, 2, 3, 4]
Notes
-----
Edge weight attributes must be numerical.
Distances are calculated as sums of weighted edges traversed.
See Also
--------
floyd_warshall, all_pairs_dijkstra_path
| def _dijkstra_multisource(
G, sources, weight, pred=None, paths=None, cutoff=None, target=None
):
"""Uses Dijkstra's algorithm to find shortest weighted paths
Parameters
----------
G : NetworkX graph
sources : non-empty iterable of nodes
Starting nodes for paths. If this is just an iterable containing
a single node, then all paths computed by this function will
start from that node. If there are two or more nodes in this
iterable, the computed paths may begin from any one of the start
nodes.
weight: function
Function with (u, v, data) input that returns that edge's weight
or None to indicate a hidden edge
pred: dict of lists, optional (default=None)
dict to store a list of predecessors keyed by that node
If None, predecessors are not stored.
paths: dict, optional (default=None)
dict to store the path list from source to each node, keyed by node.
If None, paths are not stored.
target : node label, optional
Ending node for path. Search is halted when target is found.
cutoff : integer or float, optional
Length (sum of edge weights) at which the search is stopped.
If cutoff is provided, only return paths with summed weight <= cutoff.
Returns
-------
distance : dictionary
A mapping from node to shortest distance to that node from one
of the source nodes.
Raises
------
NodeNotFound
If any of `sources` is not in `G`.
Notes
-----
The optional predecessor and path dictionaries can be accessed by
the caller through the original pred and paths objects passed
as arguments. No need to explicitly return pred or paths.
"""
G_succ = G._adj # For speed-up (and works for both directed and undirected graphs)
push = heappush
pop = heappop
dist = {} # dictionary of final distances
seen = {}
# fringe is heapq with 3-tuples (distance,c,node)
# use the count c to avoid comparing nodes (may not be able to)
c = count()
fringe = []
for source in sources:
seen[source] = 0
push(fringe, (0, next(c), source))
while fringe:
(d, _, v) = pop(fringe)
if v in dist:
continue # already searched this node.
dist[v] = d
if v == target:
break
for u, e in G_succ[v].items():
cost = weight(v, u, e)
if cost is None:
continue
vu_dist = dist[v] + cost
if cutoff is not None:
if vu_dist > cutoff:
continue
if u in dist:
u_dist = dist[u]
if vu_dist < u_dist:
raise ValueError("Contradictory paths found:", "negative weights?")
elif pred is not None and vu_dist == u_dist:
pred[u].append(v)
elif u not in seen or vu_dist < seen[u]:
seen[u] = vu_dist
push(fringe, (vu_dist, next(c), u))
if paths is not None:
paths[u] = paths[v] + [u]
if pred is not None:
pred[u] = [v]
elif vu_dist == seen[u]:
if pred is not None:
pred[u].append(v)
# The optional predecessor and path dictionaries can be accessed
# by the caller via the pred and paths objects passed as arguments.
return dist
| (G, weight='weight', *, backend=None, **backend_kwargs) |
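The Bellman-Ford relaxation behind `all_pairs_bellman_ford_path` (run once per source) can be sketched in a few lines. Unlike Dijkstra it tolerates negative edge weights, at the price of an extra pass to detect negative-weight cycles. A minimal single-pair sketch (function name and edge-list input are illustrative assumptions, not the library's API):

```python
def bellman_ford_path(edges, nodes, source, target):
    """Shortest path under possibly negative edge weights.
    `edges` is a list of directed (u, v, w) triples; an undirected
    edge is represented by listing both directions."""
    INF = float("inf")
    dist = {n: INF for n in nodes}
    pred = {n: None for n in nodes}
    dist[source] = 0
    for _ in range(len(nodes) - 1):     # |V|-1 relaxation rounds suffice
        changed = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v], pred[v] = dist[u] + w, u
                changed = True
        if not changed:
            break
    for u, v, w in edges:               # a further improvement => negative cycle
        if dist[u] + w < dist[v]:
            raise ValueError("negative-weight cycle detected")
    path, node = [], target
    while node is not None:             # walk predecessors back to the source
        path.append(node)
        node = pred[node]
    return path[::-1], dist[target]

edges = [(0, 1, 1), (1, 2, -2), (0, 2, 5)]
path, d = bellman_ford_path(edges, [0, 1, 2], 0, 2)
```

Here the direct 0 -> 2 edge of weight 5 loses to the detour through the negative edge, something Dijkstra's greedy finalisation would get wrong.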
30,366 | networkx.algorithms.shortest_paths.weighted | all_pairs_bellman_ford_path_length | Compute shortest path lengths between all nodes in a weighted graph.
Parameters
----------
G : NetworkX graph
weight : string or function (default="weight")
If this is a string, then edge weights will be accessed via the
edge attribute with this key (that is, the weight of the edge
joining `u` to `v` will be ``G.edges[u, v][weight]``). If no
such edge attribute exists, the weight of the edge is assumed to
be one.
If this is a function, the weight of an edge is the value
returned by the function. The function must accept exactly three
positional arguments: the two endpoints of an edge and the
dictionary of edge attributes for that edge. The function must
return a number.
Returns
-------
distance : iterator
(source, dictionary) iterator with dictionary keyed by target and
shortest path length as the value.
Examples
--------
>>> G = nx.path_graph(5)
>>> length = dict(nx.all_pairs_bellman_ford_path_length(G))
>>> for node in [0, 1, 2, 3, 4]:
... print(f"1 - {node}: {length[1][node]}")
1 - 0: 1
1 - 1: 0
1 - 2: 1
1 - 3: 2
1 - 4: 3
>>> length[3][2]
1
>>> length[2][2]
0
Notes
-----
Edge weight attributes must be numerical.
Distances are calculated as sums of weighted edges traversed.
The dictionary returned only has keys for reachable node pairs.
| def _dijkstra_multisource(
G, sources, weight, pred=None, paths=None, cutoff=None, target=None
):
"""Uses Dijkstra's algorithm to find shortest weighted paths
Parameters
----------
G : NetworkX graph
sources : non-empty iterable of nodes
Starting nodes for paths. If this is just an iterable containing
a single node, then all paths computed by this function will
start from that node. If there are two or more nodes in this
iterable, the computed paths may begin from any one of the start
nodes.
weight: function
Function with (u, v, data) input that returns that edge's weight
or None to indicate a hidden edge
pred: dict of lists, optional (default=None)
dict to store a list of predecessors keyed by that node
If None, predecessors are not stored.
paths: dict, optional (default=None)
dict to store the path list from source to each node, keyed by node.
If None, paths are not stored.
target : node label, optional
Ending node for path. Search is halted when target is found.
cutoff : integer or float, optional
Length (sum of edge weights) at which the search is stopped.
If cutoff is provided, only return paths with summed weight <= cutoff.
Returns
-------
distance : dictionary
A mapping from node to shortest distance to that node from one
of the source nodes.
Raises
------
NodeNotFound
If any of `sources` is not in `G`.
Notes
-----
The optional predecessor and path dictionaries can be accessed by
the caller through the original pred and paths objects passed
as arguments. No need to explicitly return pred or paths.
"""
G_succ = G._adj # For speed-up (and works for both directed and undirected graphs)
push = heappush
pop = heappop
dist = {} # dictionary of final distances
seen = {}
# fringe is heapq with 3-tuples (distance,c,node)
# use the count c to avoid comparing nodes (may not be able to)
c = count()
fringe = []
for source in sources:
seen[source] = 0
push(fringe, (0, next(c), source))
while fringe:
(d, _, v) = pop(fringe)
if v in dist:
continue # already searched this node.
dist[v] = d
if v == target:
break
for u, e in G_succ[v].items():
cost = weight(v, u, e)
if cost is None:
continue
vu_dist = dist[v] + cost
if cutoff is not None:
if vu_dist > cutoff:
continue
if u in dist:
u_dist = dist[u]
if vu_dist < u_dist:
raise ValueError("Contradictory paths found:", "negative weights?")
elif pred is not None and vu_dist == u_dist:
pred[u].append(v)
elif u not in seen or vu_dist < seen[u]:
seen[u] = vu_dist
push(fringe, (vu_dist, next(c), u))
if paths is not None:
paths[u] = paths[v] + [u]
if pred is not None:
pred[u] = [v]
elif vu_dist == seen[u]:
if pred is not None:
pred[u].append(v)
# The optional predecessor and path dictionaries can be accessed
# by the caller via the pred and paths objects passed as arguments.
return dist
| (G, weight='weight', *, backend=None, **backend_kwargs) |
30,367 | networkx.algorithms.shortest_paths.weighted | all_pairs_dijkstra | Find shortest weighted paths and lengths between all nodes.
Parameters
----------
G : NetworkX graph
cutoff : integer or float, optional
Length (sum of edge weights) at which the search is stopped.
If cutoff is provided, only return paths with summed weight <= cutoff.
weight : string or function
If this is a string, then edge weights will be accessed via the
edge attribute with this key (that is, the weight of the edge
joining `u` to `v` will be ``G.edges[u, v][weight]``). If no
such edge attribute exists, the weight of the edge is assumed to
be one.
If this is a function, the weight of an edge is the value
returned by the function. The function must accept exactly three
positional arguments: the two endpoints of an edge and the
dictionary of edge attributes for that edge. The function must
return a number or None to indicate a hidden edge.
Yields
------
(node, (distance, path)) : (node obj, (dict, dict))
Each source node has two associated dicts. The first holds distance
keyed by target and the second holds paths keyed by target.
(See single_source_dijkstra for the source/target node terminology.)
If desired you can apply `dict()` to this function to create a dict
keyed by source node to the two dicts.
Examples
--------
>>> G = nx.path_graph(5)
>>> len_path = dict(nx.all_pairs_dijkstra(G))
>>> len_path[3][0][1]
2
>>> for node in [0, 1, 2, 3, 4]:
... print(f"3 - {node}: {len_path[3][0][node]}")
3 - 0: 3
3 - 1: 2
3 - 2: 1
3 - 3: 0
3 - 4: 1
>>> len_path[3][1][1]
[3, 2, 1]
>>> for n, (dist, path) in nx.all_pairs_dijkstra(G):
... print(path[1])
[0, 1]
[1]
[2, 1]
[3, 2, 1]
[4, 3, 2, 1]
Notes
-----
Edge weight attributes must be numerical.
Distances are calculated as sums of weighted edges traversed.
The yielded dicts only have keys for reachable nodes.
| def _dijkstra_multisource(
G, sources, weight, pred=None, paths=None, cutoff=None, target=None
):
"""Uses Dijkstra's algorithm to find shortest weighted paths
Parameters
----------
G : NetworkX graph
sources : non-empty iterable of nodes
Starting nodes for paths. If this is just an iterable containing
a single node, then all paths computed by this function will
start from that node. If there are two or more nodes in this
iterable, the computed paths may begin from any one of the start
nodes.
weight: function
Function with (u, v, data) input that returns that edge's weight
or None to indicate a hidden edge
pred: dict of lists, optional (default=None)
dict to store a list of predecessors keyed by that node
If None, predecessors are not stored.
paths: dict, optional (default=None)
dict to store the path list from source to each node, keyed by node.
If None, paths are not stored.
target : node label, optional
Ending node for path. Search is halted when target is found.
cutoff : integer or float, optional
Length (sum of edge weights) at which the search is stopped.
If cutoff is provided, only return paths with summed weight <= cutoff.
Returns
-------
distance : dictionary
A mapping from node to shortest distance to that node from one
of the source nodes.
Raises
------
NodeNotFound
If any of `sources` is not in `G`.
Notes
-----
The optional predecessor and path dictionaries can be accessed by
the caller through the original pred and paths objects passed
as arguments. No need to explicitly return pred or paths.
"""
G_succ = G._adj # For speed-up (and works for both directed and undirected graphs)
push = heappush
pop = heappop
dist = {} # dictionary of final distances
seen = {}
# fringe is heapq with 3-tuples (distance,c,node)
# use the count c to avoid comparing nodes (may not be able to)
c = count()
fringe = []
for source in sources:
seen[source] = 0
push(fringe, (0, next(c), source))
while fringe:
(d, _, v) = pop(fringe)
if v in dist:
continue # already searched this node.
dist[v] = d
if v == target:
break
for u, e in G_succ[v].items():
cost = weight(v, u, e)
if cost is None:
continue
vu_dist = dist[v] + cost
if cutoff is not None:
if vu_dist > cutoff:
continue
if u in dist:
u_dist = dist[u]
if vu_dist < u_dist:
raise ValueError("Contradictory paths found:", "negative weights?")
elif pred is not None and vu_dist == u_dist:
pred[u].append(v)
elif u not in seen or vu_dist < seen[u]:
seen[u] = vu_dist
push(fringe, (vu_dist, next(c), u))
if paths is not None:
paths[u] = paths[v] + [u]
if pred is not None:
pred[u] = [v]
elif vu_dist == seen[u]:
if pred is not None:
pred[u].append(v)
# The optional predecessor and path dictionaries can be accessed
# by the caller via the pred and paths objects passed as arguments.
return dist
| (G, cutoff=None, weight='weight', *, backend=None, **backend_kwargs) |
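A stripped-down single-source version of the `_dijkstra_multisource` routine shown above, returning the same per-node `(distance, path)` information that `all_pairs_dijkstra` yields. Purely illustrative (the adjacency-dict input is an assumption), but it keeps the same `dist`/`seen`/fringe bookkeeping, including the counter that breaks heap ties without comparing possibly unorderable nodes:

```python
from heapq import heappush, heappop
from itertools import count

def dijkstra(adj, source):
    """Single-source Dijkstra returning (distances, paths).
    `adj` maps node -> {neighbour: weight}; weights must be non-negative."""
    dist, paths = {}, {source: [source]}
    seen = {source: 0}                  # best tentative distances
    c = count()
    fringe = [(0, next(c), source)]
    while fringe:
        d, _, v = heappop(fringe)
        if v in dist:
            continue                    # stale heap entry; already finalised
        dist[v] = d
        for u, w in adj[v].items():
            vu = d + w
            if u not in seen or vu < seen[u]:
                seen[u] = vu
                heappush(fringe, (vu, next(c), u))
                paths[u] = paths[v] + [u]
    return dist, paths

adj = {0: {1: 1}, 1: {0: 1, 2: 1}, 2: {1: 1, 3: 1}, 3: {2: 1, 4: 1}, 4: {3: 1}}
dist, paths = dijkstra(adj, 3)
```

Running this once per node gives exactly the dict-of-two-dicts shape the docstring example builds with ``dict(nx.all_pairs_dijkstra(G))``.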
30,368 | networkx.algorithms.shortest_paths.weighted | all_pairs_dijkstra_path | Compute shortest paths between all nodes in a weighted graph.
Parameters
----------
G : NetworkX graph
cutoff : integer or float, optional
Length (sum of edge weights) at which the search is stopped.
If cutoff is provided, only return paths with summed weight <= cutoff.
weight : string or function
If this is a string, then edge weights will be accessed via the
edge attribute with this key (that is, the weight of the edge
joining `u` to `v` will be ``G.edges[u, v][weight]``). If no
such edge attribute exists, the weight of the edge is assumed to
be one.
If this is a function, the weight of an edge is the value
returned by the function. The function must accept exactly three
positional arguments: the two endpoints of an edge and the
dictionary of edge attributes for that edge. The function must
return a number or None to indicate a hidden edge.
Returns
-------
paths : iterator
(source, dictionary) iterator with dictionary keyed by target and
shortest path as the value.
Examples
--------
>>> G = nx.path_graph(5)
>>> path = dict(nx.all_pairs_dijkstra_path(G))
>>> path[0][4]
[0, 1, 2, 3, 4]
Notes
-----
Edge weight attributes must be numerical.
Distances are calculated as sums of weighted edges traversed.
See Also
--------
floyd_warshall, all_pairs_bellman_ford_path
| def _dijkstra_multisource(
G, sources, weight, pred=None, paths=None, cutoff=None, target=None
):
"""Uses Dijkstra's algorithm to find shortest weighted paths
Parameters
----------
G : NetworkX graph
sources : non-empty iterable of nodes
Starting nodes for paths. If this is just an iterable containing
a single node, then all paths computed by this function will
start from that node. If there are two or more nodes in this
iterable, the computed paths may begin from any one of the start
nodes.
weight: function
Function with (u, v, data) input that returns that edge's weight
or None to indicate a hidden edge
pred: dict of lists, optional (default=None)
dict to store a list of predecessors keyed by that node
If None, predecessors are not stored.
paths: dict, optional (default=None)
dict to store the path list from source to each node, keyed by node.
If None, paths are not stored.
target : node label, optional
Ending node for path. Search is halted when target is found.
cutoff : integer or float, optional
Length (sum of edge weights) at which the search is stopped.
If cutoff is provided, only return paths with summed weight <= cutoff.
Returns
-------
distance : dictionary
A mapping from node to shortest distance to that node from one
of the source nodes.
Raises
------
NodeNotFound
If any of `sources` is not in `G`.
Notes
-----
The optional predecessor and path dictionaries can be accessed by
the caller through the original pred and paths objects passed
as arguments. No need to explicitly return pred or paths.
"""
G_succ = G._adj # For speed-up (and works for both directed and undirected graphs)
push = heappush
pop = heappop
dist = {} # dictionary of final distances
seen = {}
# fringe is heapq with 3-tuples (distance,c,node)
# use the count c to avoid comparing nodes (may not be able to)
c = count()
fringe = []
for source in sources:
seen[source] = 0
push(fringe, (0, next(c), source))
while fringe:
(d, _, v) = pop(fringe)
if v in dist:
continue # already searched this node.
dist[v] = d
if v == target:
break
for u, e in G_succ[v].items():
cost = weight(v, u, e)
if cost is None:
continue
vu_dist = dist[v] + cost
if cutoff is not None:
if vu_dist > cutoff:
continue
if u in dist:
u_dist = dist[u]
if vu_dist < u_dist:
raise ValueError("Contradictory paths found:", "negative weights?")
elif pred is not None and vu_dist == u_dist:
pred[u].append(v)
elif u not in seen or vu_dist < seen[u]:
seen[u] = vu_dist
push(fringe, (vu_dist, next(c), u))
if paths is not None:
paths[u] = paths[v] + [u]
if pred is not None:
pred[u] = [v]
elif vu_dist == seen[u]:
if pred is not None:
pred[u].append(v)
# The optional predecessor and path dictionaries can be accessed
# by the caller via the pred and paths objects passed as arguments.
return dist
| (G, cutoff=None, weight='weight', *, backend=None, **backend_kwargs) |
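The docstring above notes that a weight *function* may "return a number or None to indicate a hidden edge". A toy sketch of that convention (the adjacency layout and the "closed" attribute are invented for illustration; this is not the NetworkX implementation):

```python
from heapq import heappush, heappop

def dijkstra_paths(adj, source, weight):
    """Dijkstra where `weight(u, v, data)` may return None to hide an edge.
    `adj` maps node -> {neighbour: edge-data dict}; nodes must be orderable."""
    dist, paths, seen = {}, {source: [source]}, {source: 0}
    fringe = [(0, source)]
    while fringe:
        d, v = heappop(fringe)
        if v in dist:
            continue
        dist[v] = d
        for u, data in adj[v].items():
            w = weight(v, u, data)
            if w is None:               # hidden edge: skip it entirely
                continue
            vu = d + w
            if u not in seen or vu < seen[u]:
                seen[u] = vu
                heappush(fringe, (vu, u))
                paths[u] = paths[v] + [u]
    return paths

# hide edges marked "closed"; otherwise use the stored weight (default 1)
adj = {
    0: {1: {"w": 1}, 2: {"w": 1, "closed": True}},
    1: {0: {"w": 1}, 2: {"w": 1}},
    2: {0: {"w": 1, "closed": True}, 1: {"w": 1}},
}
weight = lambda u, v, d: None if d.get("closed") else d.get("w", 1)
paths = dijkstra_paths(adj, 0, weight)
```

With the direct 0-2 edge hidden, the search is forced through node 1; with a weight function that never returns None, the direct edge wins.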
30,369 | networkx.algorithms.shortest_paths.weighted | all_pairs_dijkstra_path_length | Compute shortest path lengths between all nodes in a weighted graph.
Parameters
----------
G : NetworkX graph
cutoff : integer or float, optional
Length (sum of edge weights) at which the search is stopped.
If cutoff is provided, only return paths with summed weight <= cutoff.
weight : string or function
If this is a string, then edge weights will be accessed via the
edge attribute with this key (that is, the weight of the edge
joining `u` to `v` will be ``G.edges[u, v][weight]``). If no
such edge attribute exists, the weight of the edge is assumed to
be one.
If this is a function, the weight of an edge is the value
returned by the function. The function must accept exactly three
positional arguments: the two endpoints of an edge and the
dictionary of edge attributes for that edge. The function must
return a number or None to indicate a hidden edge.
Returns
-------
distance : iterator
(source, dictionary) iterator with dictionary keyed by target and
shortest path length as the value.
Examples
--------
>>> G = nx.path_graph(5)
>>> length = dict(nx.all_pairs_dijkstra_path_length(G))
>>> for node in [0, 1, 2, 3, 4]:
... print(f"1 - {node}: {length[1][node]}")
1 - 0: 1
1 - 1: 0
1 - 2: 1
1 - 3: 2
1 - 4: 3
>>> length[3][2]
1
>>> length[2][2]
0
Notes
-----
Edge weight attributes must be numerical.
Distances are calculated as sums of weighted edges traversed.
The dictionary returned only has keys for reachable node pairs.
| def _dijkstra_multisource(
G, sources, weight, pred=None, paths=None, cutoff=None, target=None
):
"""Uses Dijkstra's algorithm to find shortest weighted paths
Parameters
----------
G : NetworkX graph
sources : non-empty iterable of nodes
Starting nodes for paths. If this is just an iterable containing
a single node, then all paths computed by this function will
start from that node. If there are two or more nodes in this
iterable, the computed paths may begin from any one of the start
nodes.
weight: function
Function with (u, v, data) input that returns that edge's weight
or None to indicate a hidden edge
pred: dict of lists, optional (default=None)
dict to store a list of predecessors keyed by that node
If None, predecessors are not stored.
paths: dict, optional (default=None)
dict to store the path list from source to each node, keyed by node.
If None, paths are not stored.
target : node label, optional
Ending node for path. Search is halted when target is found.
cutoff : integer or float, optional
Length (sum of edge weights) at which the search is stopped.
If cutoff is provided, only return paths with summed weight <= cutoff.
Returns
-------
distance : dictionary
A mapping from node to shortest distance to that node from one
of the source nodes.
Raises
------
NodeNotFound
If any of `sources` is not in `G`.
Notes
-----
The optional predecessor and path dictionaries can be accessed by
the caller through the original pred and paths objects passed
as arguments. No need to explicitly return pred or paths.
"""
G_succ = G._adj # For speed-up (and works for both directed and undirected graphs)
push = heappush
pop = heappop
dist = {} # dictionary of final distances
seen = {}
# fringe is heapq with 3-tuples (distance,c,node)
# use the count c to avoid comparing nodes (may not be able to)
c = count()
fringe = []
for source in sources:
seen[source] = 0
push(fringe, (0, next(c), source))
while fringe:
(d, _, v) = pop(fringe)
if v in dist:
continue # already searched this node.
dist[v] = d
if v == target:
break
for u, e in G_succ[v].items():
cost = weight(v, u, e)
if cost is None:
continue
vu_dist = dist[v] + cost
if cutoff is not None:
if vu_dist > cutoff:
continue
if u in dist:
u_dist = dist[u]
if vu_dist < u_dist:
raise ValueError("Contradictory paths found:", "negative weights?")
elif pred is not None and vu_dist == u_dist:
pred[u].append(v)
elif u not in seen or vu_dist < seen[u]:
seen[u] = vu_dist
push(fringe, (vu_dist, next(c), u))
if paths is not None:
paths[u] = paths[v] + [u]
if pred is not None:
pred[u] = [v]
elif vu_dist == seen[u]:
if pred is not None:
pred[u].append(v)
# The optional predecessor and path dictionaries can be accessed
# by the caller via the pred and paths objects passed as arguments.
return dist
| (G, cutoff=None, weight='weight', *, backend=None, **backend_kwargs) |
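The `cutoff` parameter documented above prunes the search rather than filtering afterwards: as in the `_dijkstra_multisource` code shown, a neighbour whose tentative distance exceeds the cutoff is never pushed onto the fringe. A minimal sketch of that pruning (illustrative names and adjacency-dict input; nodes assumed orderable so the heap needs no tie-breaking counter):

```python
from heapq import heappush, heappop

def dijkstra_lengths(adj, source, cutoff=None):
    """Shortest path lengths from `source`, pruned at `cutoff`.
    Targets not reachable within the budget are absent from the result."""
    dist, seen = {}, {source: 0}
    fringe = [(0, source)]
    while fringe:
        d, v = heappop(fringe)
        if v in dist:
            continue
        dist[v] = d
        for u, w in adj[v].items():
            vu = d + w
            if cutoff is not None and vu > cutoff:
                continue                 # pruned: never enters the fringe
            if u not in seen or vu < seen[u]:
                seen[u] = vu
                heappush(fringe, (vu, u))
    return dist

# path graph 0-1-2-3-4 with unit weights
adj = {n: {} for n in range(5)}
for a, b in [(0, 1), (1, 2), (2, 3), (3, 4)]:
    adj[a][b] = adj[b][a] = 1
lengths = dijkstra_lengths(adj, 0, cutoff=2)
```

This also illustrates why the returned dictionaries "only have keys for reachable node pairs": unreachable (or pruned) targets simply never appear.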
30,370 | networkx.algorithms.lowest_common_ancestors | all_pairs_lowest_common_ancestor | Return the lowest common ancestor of all pairs or the provided pairs
Parameters
----------
G : NetworkX directed graph
pairs : iterable of pairs of nodes, optional (default: all pairs)
The pairs of nodes of interest.
If None, will find the LCA of all pairs of nodes.
Yields
------
((node1, node2), lca) : 2-tuple
where `lca` is the lowest common ancestor of node1 and node2.
Note that for the default case, the order of the node pair is not considered,
e.g. you will not get both ``(a, b)`` and ``(b, a)``.
Raises
------
NetworkXPointlessConcept
If `G` is null.
NetworkXError
If `G` is not a DAG.
Examples
--------
The default behavior is to yield the lowest common ancestor for all
possible combinations of nodes in `G`, including self-pairings:
>>> G = nx.DiGraph([(0, 1), (0, 3), (1, 2)])
>>> dict(nx.all_pairs_lowest_common_ancestor(G))
{(0, 0): 0, (0, 1): 0, (0, 3): 0, (0, 2): 0, (1, 1): 1, (1, 3): 0, (1, 2): 1, (3, 3): 3, (3, 2): 0, (2, 2): 2}
The pairs argument can be used to limit the output to only the
specified node pairings:
>>> dict(nx.all_pairs_lowest_common_ancestor(G, pairs=[(1, 2), (2, 3)]))
{(1, 2): 1, (2, 3): 0}
Notes
-----
Only defined on non-null directed acyclic graphs.
See Also
--------
lowest_common_ancestor
| null | (G, pairs=None, *, backend=None, **backend_kwargs) |
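For the special case where the DAG is a rooted tree, as in the docstring example above, the lowest common ancestor has a very short sketch: walk each node's ancestor chain and return the first ancestor they share. This illustrative pure-Python version (child-to-parent mapping is an assumed input; general DAGs, where a node can have several parents, need more machinery) reproduces the example's answers:

```python
def ancestors(parent, n):
    """Chain of ancestors of `n` (including `n` itself) in a tree
    given as a child -> parent mapping; the root maps to None."""
    chain = []
    while n is not None:
        chain.append(n)
        n = parent.get(n)
    return chain

def lca(parent, a, b):
    """Lowest common ancestor in a tree: the first ancestor of `a`
    (walking upward) that is also an ancestor of `b`."""
    seen = set(ancestors(parent, b))
    for n in ancestors(parent, a):
        if n in seen:
            return n
    return None            # different trees in a forest: no common ancestor

# the tree from the docstring example: edges 0->1, 1->2, 0->3
parent = {0: None, 1: 0, 2: 1, 3: 0}
```

Note that, as in the docstring, a node is its own ancestor, so ``lca(parent, 0, 0)`` is ``0``.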
30,371 | networkx.algorithms.connectivity.connectivity | all_pairs_node_connectivity | Compute node connectivity between all pairs of nodes of G.
Parameters
----------
G : NetworkX graph
Undirected graph
nbunch: container
Container of nodes. If provided node connectivity will be computed
only over pairs of nodes in nbunch.
flow_func : function
A function for computing the maximum flow among a pair of nodes.
The function has to accept at least three parameters: a Digraph,
a source node, and a target node, and return a residual network
that follows NetworkX conventions (see :meth:`maximum_flow` for
details). If flow_func is None, the default maximum flow function
(:meth:`edmonds_karp`) is used. See below for details. The
choice of the default function may change from version
to version and should not be relied on. Default value: None.
Returns
-------
all_pairs : dict
A dictionary with node connectivity between all pairs of nodes
in G, or in nbunch if provided.
See also
--------
:meth:`local_node_connectivity`
:meth:`edge_connectivity`
:meth:`local_edge_connectivity`
:meth:`maximum_flow`
:meth:`edmonds_karp`
:meth:`preflow_push`
:meth:`shortest_augmenting_path`
| @nx._dispatchable
def edge_connectivity(G, s=None, t=None, flow_func=None, cutoff=None):
r"""Returns the edge connectivity of the graph or digraph G.
The edge connectivity is equal to the minimum number of edges that
must be removed to disconnect G or render it trivial. If source
and target nodes are provided, this function returns the local edge
connectivity: the minimum number of edges that must be removed to
break all paths from source to target in G.
Parameters
----------
G : NetworkX graph
Undirected or directed graph
s : node
Source node. Optional. Default value: None.
t : node
Target node. Optional. Default value: None.
flow_func : function
A function for computing the maximum flow among a pair of nodes.
The function has to accept at least three parameters: a Digraph,
a source node, and a target node, and return a residual network
that follows NetworkX conventions (see :meth:`maximum_flow` for
details). If flow_func is None, the default maximum flow function
(:meth:`edmonds_karp`) is used. See below for details. The
choice of the default function may change from version
to version and should not be relied on. Default value: None.
cutoff : integer, float, or None (default: None)
If specified, the maximum flow algorithm will terminate when the
flow value reaches or exceeds the cutoff. This only works for flows
that support the cutoff parameter (most do) and is ignored otherwise.
Returns
-------
K : integer
Edge connectivity for G, or local edge connectivity if source
and target were provided
Examples
--------
>>> # Platonic icosahedral graph is 5-edge-connected
>>> G = nx.icosahedral_graph()
>>> nx.edge_connectivity(G)
5
You can use alternative flow algorithms for the underlying
maximum flow computation. In dense networks the algorithm
:meth:`shortest_augmenting_path` will usually perform better
than the default :meth:`edmonds_karp`, which is faster for
sparse networks with highly skewed degree distributions.
Alternative flow functions have to be explicitly imported
from the flow package.
>>> from networkx.algorithms.flow import shortest_augmenting_path
>>> nx.edge_connectivity(G, flow_func=shortest_augmenting_path)
5
If you specify a pair of nodes (source and target) as parameters,
this function returns the value of local edge connectivity.
>>> nx.edge_connectivity(G, 3, 7)
5
If you need to perform several local computations among different
pairs of nodes on the same graph, it is recommended that you reuse
the data structures used in the maximum flow computations. See
:meth:`local_edge_connectivity` for details.
Notes
-----
This is a flow based implementation of global edge connectivity.
For undirected graphs the algorithm works by finding a 'small'
dominating set of nodes of G (see algorithm 7 in [1]_ ) and
computing local maximum flow (see :meth:`local_edge_connectivity`)
between an arbitrary node in the dominating set and the rest of
nodes in it. This is an implementation of algorithm 6 in [1]_ .
For directed graphs, the algorithm does n calls to the maximum
flow function. This is an implementation of algorithm 8 in [1]_ .
See also
--------
:meth:`local_edge_connectivity`
:meth:`local_node_connectivity`
:meth:`node_connectivity`
:meth:`maximum_flow`
:meth:`edmonds_karp`
:meth:`preflow_push`
:meth:`shortest_augmenting_path`
:meth:`k_edge_components`
:meth:`k_edge_subgraphs`
References
----------
.. [1] Abdol-Hossein Esfahanian. Connectivity Algorithms.
http://www.cse.msu.edu/~cse835/Papers/Graph_connectivity_revised.pdf
"""
if (s is not None and t is None) or (s is None and t is not None):
raise nx.NetworkXError("Both source and target must be specified.")
# Local edge connectivity
if s is not None and t is not None:
if s not in G:
raise nx.NetworkXError(f"node {s} not in graph")
if t not in G:
raise nx.NetworkXError(f"node {t} not in graph")
return local_edge_connectivity(G, s, t, flow_func=flow_func, cutoff=cutoff)
# Global edge connectivity
# reuse auxiliary digraph and residual network
H = build_auxiliary_edge_connectivity(G)
R = build_residual_network(H, "capacity")
kwargs = {"flow_func": flow_func, "auxiliary": H, "residual": R}
if G.is_directed():
# Algorithm 8 in [1]
if not nx.is_weakly_connected(G):
return 0
# initial value for \lambda is minimum degree
L = min(d for n, d in G.degree())
nodes = list(G)
n = len(nodes)
if cutoff is not None:
L = min(cutoff, L)
for i in range(n):
kwargs["cutoff"] = L
try:
L = min(L, local_edge_connectivity(G, nodes[i], nodes[i + 1], **kwargs))
except IndexError: # last node!
L = min(L, local_edge_connectivity(G, nodes[i], nodes[0], **kwargs))
return L
else: # undirected
# Algorithm 6 in [1]
if not nx.is_connected(G):
return 0
# initial value for \lambda is minimum degree
L = min(d for n, d in G.degree())
if cutoff is not None:
L = min(cutoff, L)
# A dominating set is \lambda-covering
# We need a dominating set with at least two nodes
for node in G:
D = nx.dominating_set(G, start_with=node)
v = D.pop()
if D:
break
else:
# in complete graphs the dominating sets will always be of one node
# thus we return min degree
return L
for w in D:
kwargs["cutoff"] = L
L = min(L, local_edge_connectivity(G, v, w, **kwargs))
return L
| (G, nbunch=None, flow_func=None, *, backend=None, **backend_kwargs) |
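NetworkX computes node connectivity with maximum-flow machinery, but the quantity itself is easy to state via Menger's theorem: the minimum number of intermediate nodes whose removal disconnects two non-adjacent nodes equals the maximum number of internally node-disjoint paths between them. A brute-force toy sketch (exponential in graph size, so only viable for tiny graphs; function name and adjacency-dict input are invented here):

```python
from collections import deque
from itertools import combinations

def local_node_connectivity(adj, s, t):
    """Smallest set of nodes (other than s and t) whose removal
    disconnects s from t; assumes s and t are non-adjacent."""
    def reachable(removed):
        seen, queue = {s}, deque([s])
        while queue:
            v = queue.popleft()
            if v == t:
                return True
            for u in adj[v]:
                if u not in seen and u not in removed:
                    seen.add(u)
                    queue.append(u)
        return False

    others = [n for n in adj if n not in (s, t)]
    for k in range(len(others) + 1):       # try cuts of increasing size
        for cut in combinations(others, k):
            if not reachable(set(cut)):
                return k
    raise ValueError("s and t are adjacent: no vertex cut separates them")

# 4-cycle: opposite nodes are connected by two node-disjoint paths
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
```

The flow-based implementation reaches the same numbers in polynomial time by running a maximum flow on an auxiliary digraph per node pair.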
30,372 | networkx.algorithms.shortest_paths.unweighted | all_pairs_shortest_path | Compute shortest paths between all nodes.
Parameters
----------
G : NetworkX graph
cutoff : integer, optional
Depth at which to stop the search. Only paths of length at most
`cutoff` are returned.
Returns
-------
paths : iterator
Dictionary, keyed by source and target, of shortest paths.
Examples
--------
>>> G = nx.path_graph(5)
>>> path = dict(nx.all_pairs_shortest_path(G))
>>> print(path[0][4])
[0, 1, 2, 3, 4]
Notes
-----
There may be multiple shortest paths with the same length between
two nodes. For each pair, this function returns only one of those paths.
See Also
--------
floyd_warshall
all_pairs_all_shortest_paths
| null | (G, cutoff=None, *, backend=None, **backend_kwargs) |
30,373 | networkx.algorithms.shortest_paths.unweighted | all_pairs_shortest_path_length | Computes the shortest path lengths between all nodes in `G`.
Parameters
----------
G : NetworkX graph
cutoff : integer, optional
Depth at which to stop the search. Only paths of length at most
`cutoff` are returned.
Returns
-------
lengths : iterator
(source, dictionary) iterator with dictionary keyed by target and
shortest path length as the value.
Notes
-----
The iterator returned only has reachable node pairs.
Examples
--------
>>> G = nx.path_graph(5)
>>> length = dict(nx.all_pairs_shortest_path_length(G))
>>> for node in [0, 1, 2, 3, 4]:
... print(f"1 - {node}: {length[1][node]}")
1 - 0: 1
1 - 1: 0
1 - 2: 1
1 - 3: 2
1 - 4: 3
>>> length[3][2]
1
>>> length[2][2]
0
| null | (G, cutoff=None, *, backend=None, **backend_kwargs) |
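The unweighted all-pairs lengths above amount to one breadth-first search per source, with `cutoff` bounding the search depth. A minimal sketch under those assumptions (illustrative names and adjacency-dict input, not the NetworkX internals):

```python
from collections import deque

def bfs_lengths(adj, source, cutoff=None):
    """Unweighted shortest path lengths from `source`; nodes deeper
    than `cutoff` are never enqueued."""
    lengths = {source: 0}
    queue = deque([source])
    while queue:
        v = queue.popleft()
        if cutoff is not None and lengths[v] >= cutoff:
            continue                     # depth budget spent: do not expand
        for u in adj[v]:
            if u not in lengths:
                lengths[u] = lengths[v] + 1
                queue.append(u)
    return lengths

def all_pairs_lengths(adj, cutoff=None):
    # one BFS per source node: O(V * (V + E)) overall
    return {n: bfs_lengths(adj, n, cutoff) for n in adj}

# path graph 0-1-2-3-4, matching the docstring example
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
lengths = all_pairs_lengths(adj)
```

As the note above says, only reachable pairs appear: with a cutoff, distant nodes simply have no entry rather than an infinite length.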
30,374 | networkx.algorithms.shortest_paths.generic | all_shortest_paths | Compute all shortest simple paths in the graph.
Parameters
----------
G : NetworkX graph
source : node
Starting node for path.
target : node
Ending node for path.
weight : None, string or function, optional (default = None)
If None, every edge has weight/distance/cost 1.
If a string, use this edge attribute as the edge weight.
Any edge attribute not present defaults to 1.
If this is a function, the weight of an edge is the value
returned by the function. The function must accept exactly
three positional arguments: the two endpoints of an edge and
the dictionary of edge attributes for that edge.
The function must return a number.
method : string, optional (default = 'dijkstra')
The algorithm to use to compute the path lengths.
Supported options: 'dijkstra', 'bellman-ford'.
Other inputs produce a ValueError.
If `weight` is None, unweighted graph methods are used, and the
`method` argument is ignored.
Returns
-------
paths : generator of lists
A generator of all paths between source and target.
Raises
------
ValueError
If `method` is not among the supported options.
NetworkXNoPath
If `target` cannot be reached from `source`.
Examples
--------
>>> G = nx.Graph()
>>> nx.add_path(G, [0, 1, 2])
>>> nx.add_path(G, [0, 10, 2])
>>> print([p for p in nx.all_shortest_paths(G, source=0, target=2)])
[[0, 1, 2], [0, 10, 2]]
Notes
-----
There may be many shortest paths between the source and target. If G
contains zero-weight cycles, this function will not produce all shortest
paths because doing so would produce infinitely many paths of unbounded
length -- instead, we only produce the shortest simple paths.
See Also
--------
shortest_path
single_source_shortest_path
all_pairs_shortest_path
| null | (G, source, target, weight=None, method='dijkstra', *, backend=None, **backend_kwargs) |
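For the weighted case, `all_shortest_paths` relies on the same predecessor-list trick visible in the `_dijkstra_multisource` code elsewhere in this section: whenever a relaxation *ties* the best known distance, the extra predecessor is recorded instead of discarded, and all paths are unwound from the predecessor lists afterwards. An illustrative single-pair sketch (adjacency-dict input and names are assumptions; assumes `target` is reachable and weights are non-negative):

```python
from heapq import heappush, heappop

def all_shortest_paths(adj, source, target):
    """All shortest weighted paths from source to target.
    `adj` maps node -> {neighbour: weight}; nodes must be orderable."""
    dist, seen, pred = {}, {source: 0}, {source: []}
    fringe = [(0, source)]
    while fringe:
        d, v = heappop(fringe)
        if v in dist:
            continue
        dist[v] = d
        for u, w in adj[v].items():
            vu = d + w
            if u not in seen or vu < seen[u]:
                seen[u] = vu
                pred[u] = [v]            # strictly better: restart the list
                heappush(fringe, (vu, u))
            elif vu == seen[u]:
                pred[u].append(v)        # tie: remember the extra predecessor

    def unwind(n):
        if n == source:
            return [[source]]
        return [p + [n] for q in pred[n] for p in unwind(q)]

    return unwind(target)

# weighted 4-cycle: two equally short 0 -> 2 routes
adj = {0: {1: 1, 3: 1}, 1: {0: 1, 2: 1}, 2: {1: 1, 3: 1}, 3: {0: 1, 2: 1}}
paths = all_shortest_paths(adj, 0, 2)
```

As the Notes above warn, zero-weight cycles would make the set of shortest paths infinite; restricting to simple paths, as the library does, keeps the enumeration finite.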
30,375 | networkx.algorithms.simple_paths | all_simple_edge_paths | Generate lists of edges for all simple paths in G from source to target.
A simple path is a path with no repeated nodes.
Parameters
----------
G : NetworkX graph
source : node
Starting node for path
target : nodes
Single node or iterable of nodes at which to end path
cutoff : integer, optional
Depth to stop the search. Only paths of length <= cutoff are returned.
Returns
-------
path_generator: generator
A generator that produces lists of simple paths. If there are no paths
between the source and target within the given cutoff the generator
produces no output.
For multigraphs, the list of edges have elements of the form `(u,v,k)`.
Where `k` corresponds to the edge key.
Examples
--------
Print the simple path edges of a Graph::
>>> g = nx.Graph([(1, 2), (2, 4), (1, 3), (3, 4)])
>>> for path in sorted(nx.all_simple_edge_paths(g, 1, 4)):
... print(path)
[(1, 2), (2, 4)]
[(1, 3), (3, 4)]
Print the simple path edges of a MultiGraph. Returned edges come with
their associated keys::
>>> mg = nx.MultiGraph()
>>> mg.add_edge(1, 2, key="k0")
'k0'
>>> mg.add_edge(1, 2, key="k1")
'k1'
>>> mg.add_edge(2, 3, key="k0")
'k0'
>>> for path in sorted(nx.all_simple_edge_paths(mg, 1, 3)):
... print(path)
[(1, 2, 'k0'), (2, 3, 'k0')]
[(1, 2, 'k1'), (2, 3, 'k0')]
When ``source`` is one of the targets, the empty path starting and ending at
``source`` without traversing any edge is considered a valid simple edge path
and is included in the results:
>>> G = nx.Graph()
>>> G.add_node(0)
>>> paths = list(nx.all_simple_edge_paths(G, 0, 0))
>>> for path in paths:
... print(path)
[]
>>> len(paths)
1
Notes
-----
This algorithm uses a modified depth-first search to generate the
paths [1]_. A single path can be found in $O(V+E)$ time but the
number of simple paths in a graph can be very large, e.g. $O(n!)$ in
the complete graph of order $n$.
References
----------
.. [1] R. Sedgewick, "Algorithms in C, Part 5: Graph Algorithms",
Addison Wesley Professional, 3rd ed., 2001.
See Also
--------
all_shortest_paths, shortest_path, all_simple_paths
| def _bidirectional_dijkstra(
G, source, target, weight="weight", ignore_nodes=None, ignore_edges=None
):
"""Dijkstra's algorithm for shortest paths using bidirectional search.
This function returns the shortest path between source and target
ignoring nodes and edges in the containers ignore_nodes and
ignore_edges.
This is a custom modification of the standard Dijkstra bidirectional
shortest path implementation at networkx.algorithms.weighted
Parameters
----------
G : NetworkX graph
source : node
Starting node.
target : node
Ending node.
weight: string, function, optional (default='weight')
Edge data key or weight function corresponding to the edge weight
ignore_nodes : container of nodes
nodes to ignore, optional
ignore_edges : container of edges
edges to ignore, optional
Returns
-------
length : number
Shortest path length.
Returns a tuple of two dictionaries keyed by node.
The first dictionary stores distance from the source.
The second stores the path from the source to that node.
Raises
------
NetworkXNoPath
If no path exists between source and target.
Notes
-----
Edge weight attributes must be numerical.
Distances are calculated as sums of weighted edges traversed.
In practice bidirectional Dijkstra is much more than twice as fast as
ordinary Dijkstra.
Ordinary Dijkstra expands nodes in a sphere-like manner from the
source. The radius of this sphere will eventually be the length
of the shortest path. Bidirectional Dijkstra will expand nodes
from both the source and the target, making two spheres of half
    this radius. The area of the first circle is pi*r*r, while the
    other two are 2*pi*(r/2)*(r/2) combined, making up half the area.
This algorithm is not guaranteed to work if edge weights
are negative or are floating point numbers
(overflows and roundoff errors can cause problems).
See Also
--------
shortest_path
shortest_path_length
"""
if ignore_nodes and (source in ignore_nodes or target in ignore_nodes):
raise nx.NetworkXNoPath(f"No path between {source} and {target}.")
if source == target:
if source not in G:
raise nx.NodeNotFound(f"Node {source} not in graph")
return (0, [source])
# handle either directed or undirected
if G.is_directed():
Gpred = G.predecessors
Gsucc = G.successors
else:
Gpred = G.neighbors
Gsucc = G.neighbors
# support optional nodes filter
if ignore_nodes:
def filter_iter(nodes):
def iterate(v):
for w in nodes(v):
if w not in ignore_nodes:
yield w
return iterate
Gpred = filter_iter(Gpred)
Gsucc = filter_iter(Gsucc)
# support optional edges filter
if ignore_edges:
if G.is_directed():
def filter_pred_iter(pred_iter):
def iterate(v):
for w in pred_iter(v):
if (w, v) not in ignore_edges:
yield w
return iterate
def filter_succ_iter(succ_iter):
def iterate(v):
for w in succ_iter(v):
if (v, w) not in ignore_edges:
yield w
return iterate
Gpred = filter_pred_iter(Gpred)
Gsucc = filter_succ_iter(Gsucc)
else:
def filter_iter(nodes):
def iterate(v):
for w in nodes(v):
if (v, w) not in ignore_edges and (w, v) not in ignore_edges:
yield w
return iterate
Gpred = filter_iter(Gpred)
Gsucc = filter_iter(Gsucc)
push = heappush
pop = heappop
# Init: Forward Backward
dists = [{}, {}] # dictionary of final distances
paths = [{source: [source]}, {target: [target]}] # dictionary of paths
fringe = [[], []] # heap of (distance, node) tuples for
# extracting next node to expand
seen = [{source: 0}, {target: 0}] # dictionary of distances to
# nodes seen
c = count()
# initialize fringe heap
push(fringe[0], (0, next(c), source))
push(fringe[1], (0, next(c), target))
# neighs for extracting correct neighbor information
neighs = [Gsucc, Gpred]
# variables to hold shortest discovered path
# finaldist = 1e30000
finalpath = []
dir = 1
while fringe[0] and fringe[1]:
# choose direction
# dir == 0 is forward direction and dir == 1 is back
dir = 1 - dir
# extract closest to expand
(dist, _, v) = pop(fringe[dir])
if v in dists[dir]:
# Shortest path to v has already been found
continue
# update distance
dists[dir][v] = dist # equal to seen[dir][v]
if v in dists[1 - dir]:
# if we have scanned v in both directions we are done
# we have now discovered the shortest path
return (finaldist, finalpath)
wt = _weight_function(G, weight)
for w in neighs[dir](v):
if dir == 0: # forward
minweight = wt(v, w, G.get_edge_data(v, w))
vwLength = dists[dir][v] + minweight
else: # back, must remember to change v,w->w,v
minweight = wt(w, v, G.get_edge_data(w, v))
vwLength = dists[dir][v] + minweight
if w in dists[dir]:
if vwLength < dists[dir][w]:
raise ValueError("Contradictory paths found: negative weights?")
elif w not in seen[dir] or vwLength < seen[dir][w]:
# relaxing
seen[dir][w] = vwLength
push(fringe[dir], (vwLength, next(c), w))
paths[dir][w] = paths[dir][v] + [w]
if w in seen[0] and w in seen[1]:
# see if this path is better than the already
# discovered shortest path
totaldist = seen[0][w] + seen[1][w]
if finalpath == [] or finaldist > totaldist:
finaldist = totaldist
revpath = paths[1][w][:]
revpath.reverse()
finalpath = paths[0][w] + revpath[1:]
raise nx.NetworkXNoPath(f"No path between {source} and {target}.")
| (G, source, target, cutoff=None, *, backend=None, **backend_kwargs) |
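The `_bidirectional_dijkstra` helper above applies its `ignore_nodes` filter by wrapping the neighbor-iterator functions in closures (`filter_iter`). The pattern in isolation, as a hedged sketch (`make_filtered_neighbors` is an illustrative name, not part of NetworkX):

```python
def make_filtered_neighbors(neighbors, ignore_nodes=frozenset()):
    """Wrap a neighbor-iterator function so ignored nodes are skipped,
    mirroring the closure pattern used in _bidirectional_dijkstra."""
    def iterate(v):
        for w in neighbors(v):
            if w not in ignore_nodes:
                yield w
    return iterate

adj = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
filtered = make_filtered_neighbors(adj.__getitem__, ignore_nodes={2})
print(list(filtered(0)))  # [1, 3]
```

Because the wrapper has the same call signature as `G.neighbors`, `Gpred` and `Gsucc` can be replaced transparently and the rest of the search loop needs no changes.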
30,376 | networkx.algorithms.simple_paths | all_simple_paths | Generate all simple paths in the graph G from source to target.
A simple path is a path with no repeated nodes.
Parameters
----------
G : NetworkX graph
source : node
Starting node for path
target : nodes
Single node or iterable of nodes at which to end path
cutoff : integer, optional
Depth to stop the search. Only paths of length <= cutoff are returned.
Returns
-------
path_generator: generator
A generator that produces lists of simple paths. If there are no paths
between the source and target within the given cutoff the generator
produces no output. If it is possible to traverse the same sequence of
nodes in multiple ways, namely through parallel edges, then it will be
returned multiple times (once for each viable edge combination).
Examples
--------
This iterator generates lists of nodes::
>>> G = nx.complete_graph(4)
>>> for path in nx.all_simple_paths(G, source=0, target=3):
... print(path)
...
[0, 1, 2, 3]
[0, 1, 3]
[0, 2, 1, 3]
[0, 2, 3]
[0, 3]
You can generate only those paths that are shorter than a certain
length by using the `cutoff` keyword argument::
>>> paths = nx.all_simple_paths(G, source=0, target=3, cutoff=2)
>>> print(list(paths))
[[0, 1, 3], [0, 2, 3], [0, 3]]
To get each path as the corresponding list of edges, you can use the
:func:`networkx.utils.pairwise` helper function::
>>> paths = nx.all_simple_paths(G, source=0, target=3)
>>> for path in map(nx.utils.pairwise, paths):
... print(list(path))
[(0, 1), (1, 2), (2, 3)]
[(0, 1), (1, 3)]
[(0, 2), (2, 1), (1, 3)]
[(0, 2), (2, 3)]
[(0, 3)]
Pass an iterable of nodes as target to generate all paths ending in any of several nodes::
>>> G = nx.complete_graph(4)
>>> for path in nx.all_simple_paths(G, source=0, target=[3, 2]):
... print(path)
...
[0, 1, 2]
[0, 1, 2, 3]
[0, 1, 3]
[0, 1, 3, 2]
[0, 2]
[0, 2, 1, 3]
[0, 2, 3]
[0, 3]
[0, 3, 1, 2]
[0, 3, 2]
The singleton path from ``source`` to itself is considered a simple path and is
included in the results:
>>> G = nx.empty_graph(5)
>>> list(nx.all_simple_paths(G, source=0, target=0))
[[0]]
>>> G = nx.path_graph(3)
>>> list(nx.all_simple_paths(G, source=0, target={0, 1, 2}))
[[0], [0, 1], [0, 1, 2]]
Iterate over each path from the root nodes to the leaf nodes in a
directed acyclic graph using a functional programming approach::
>>> from itertools import chain
>>> from itertools import product
>>> from itertools import starmap
>>> from functools import partial
>>>
>>> chaini = chain.from_iterable
>>>
>>> G = nx.DiGraph([(0, 1), (1, 2), (0, 3), (3, 2)])
>>> roots = (v for v, d in G.in_degree() if d == 0)
>>> leaves = (v for v, d in G.out_degree() if d == 0)
>>> all_paths = partial(nx.all_simple_paths, G)
>>> list(chaini(starmap(all_paths, product(roots, leaves))))
[[0, 1, 2], [0, 3, 2]]
The same list computed using an iterative approach::
>>> G = nx.DiGraph([(0, 1), (1, 2), (0, 3), (3, 2)])
>>> roots = (v for v, d in G.in_degree() if d == 0)
>>> leaves = (v for v, d in G.out_degree() if d == 0)
>>> all_paths = []
>>> for root in roots:
... for leaf in leaves:
... paths = nx.all_simple_paths(G, root, leaf)
... all_paths.extend(paths)
>>> all_paths
[[0, 1, 2], [0, 3, 2]]
Iterate over each path from the root nodes to the leaf nodes in a
directed acyclic graph passing all leaves together to avoid unnecessary
compute::
>>> G = nx.DiGraph([(0, 1), (2, 1), (1, 3), (1, 4)])
>>> roots = (v for v, d in G.in_degree() if d == 0)
>>> leaves = [v for v, d in G.out_degree() if d == 0]
>>> all_paths = []
>>> for root in roots:
... paths = nx.all_simple_paths(G, root, leaves)
... all_paths.extend(paths)
>>> all_paths
[[0, 1, 3], [0, 1, 4], [2, 1, 3], [2, 1, 4]]
If parallel edges offer multiple ways to traverse a given sequence of
nodes, this sequence of nodes will be returned multiple times:
>>> G = nx.MultiDiGraph([(0, 1), (0, 1), (1, 2)])
>>> list(nx.all_simple_paths(G, 0, 2))
[[0, 1, 2], [0, 1, 2]]
Notes
-----
This algorithm uses a modified depth-first search to generate the
paths [1]_. A single path can be found in $O(V+E)$ time but the
number of simple paths in a graph can be very large, e.g. $O(n!)$ in
the complete graph of order $n$.
This function does not check that a path exists between `source` and
`target`. For large graphs, this may result in very long runtimes.
Consider using `has_path` to check that a path exists between `source` and
`target` before calling this function on large graphs.
References
----------
.. [1] R. Sedgewick, "Algorithms in C, Part 5: Graph Algorithms",
Addison Wesley Professional, 3rd ed., 2001.
See Also
--------
all_shortest_paths, shortest_path, has_path
| def _bidirectional_dijkstra(
G, source, target, weight="weight", ignore_nodes=None, ignore_edges=None
):
"""Dijkstra's algorithm for shortest paths using bidirectional search.
This function returns the shortest path between source and target
ignoring nodes and edges in the containers ignore_nodes and
ignore_edges.
This is a custom modification of the standard Dijkstra bidirectional
shortest path implementation at networkx.algorithms.weighted
Parameters
----------
G : NetworkX graph
source : node
Starting node.
target : node
Ending node.
weight: string, function, optional (default='weight')
Edge data key or weight function corresponding to the edge weight
ignore_nodes : container of nodes
nodes to ignore, optional
ignore_edges : container of edges
edges to ignore, optional
Returns
-------
length : number
Shortest path length.
Returns a tuple of two dictionaries keyed by node.
The first dictionary stores distance from the source.
The second stores the path from the source to that node.
Raises
------
NetworkXNoPath
If no path exists between source and target.
Notes
-----
Edge weight attributes must be numerical.
Distances are calculated as sums of weighted edges traversed.
In practice bidirectional Dijkstra is much more than twice as fast as
ordinary Dijkstra.
Ordinary Dijkstra expands nodes in a sphere-like manner from the
source. The radius of this sphere will eventually be the length
of the shortest path. Bidirectional Dijkstra will expand nodes
from both the source and the target, making two spheres of half
    this radius. The area of the first circle is pi*r*r, while the
    other two are 2*pi*(r/2)*(r/2) combined, making up half the area.
This algorithm is not guaranteed to work if edge weights
are negative or are floating point numbers
(overflows and roundoff errors can cause problems).
See Also
--------
shortest_path
shortest_path_length
"""
if ignore_nodes and (source in ignore_nodes or target in ignore_nodes):
raise nx.NetworkXNoPath(f"No path between {source} and {target}.")
if source == target:
if source not in G:
raise nx.NodeNotFound(f"Node {source} not in graph")
return (0, [source])
# handle either directed or undirected
if G.is_directed():
Gpred = G.predecessors
Gsucc = G.successors
else:
Gpred = G.neighbors
Gsucc = G.neighbors
# support optional nodes filter
if ignore_nodes:
def filter_iter(nodes):
def iterate(v):
for w in nodes(v):
if w not in ignore_nodes:
yield w
return iterate
Gpred = filter_iter(Gpred)
Gsucc = filter_iter(Gsucc)
# support optional edges filter
if ignore_edges:
if G.is_directed():
def filter_pred_iter(pred_iter):
def iterate(v):
for w in pred_iter(v):
if (w, v) not in ignore_edges:
yield w
return iterate
def filter_succ_iter(succ_iter):
def iterate(v):
for w in succ_iter(v):
if (v, w) not in ignore_edges:
yield w
return iterate
Gpred = filter_pred_iter(Gpred)
Gsucc = filter_succ_iter(Gsucc)
else:
def filter_iter(nodes):
def iterate(v):
for w in nodes(v):
if (v, w) not in ignore_edges and (w, v) not in ignore_edges:
yield w
return iterate
Gpred = filter_iter(Gpred)
Gsucc = filter_iter(Gsucc)
push = heappush
pop = heappop
# Init: Forward Backward
dists = [{}, {}] # dictionary of final distances
paths = [{source: [source]}, {target: [target]}] # dictionary of paths
fringe = [[], []] # heap of (distance, node) tuples for
# extracting next node to expand
seen = [{source: 0}, {target: 0}] # dictionary of distances to
# nodes seen
c = count()
# initialize fringe heap
push(fringe[0], (0, next(c), source))
push(fringe[1], (0, next(c), target))
# neighs for extracting correct neighbor information
neighs = [Gsucc, Gpred]
# variables to hold shortest discovered path
# finaldist = 1e30000
finalpath = []
dir = 1
while fringe[0] and fringe[1]:
# choose direction
# dir == 0 is forward direction and dir == 1 is back
dir = 1 - dir
# extract closest to expand
(dist, _, v) = pop(fringe[dir])
if v in dists[dir]:
# Shortest path to v has already been found
continue
# update distance
dists[dir][v] = dist # equal to seen[dir][v]
if v in dists[1 - dir]:
# if we have scanned v in both directions we are done
# we have now discovered the shortest path
return (finaldist, finalpath)
wt = _weight_function(G, weight)
for w in neighs[dir](v):
if dir == 0: # forward
minweight = wt(v, w, G.get_edge_data(v, w))
vwLength = dists[dir][v] + minweight
else: # back, must remember to change v,w->w,v
minweight = wt(w, v, G.get_edge_data(w, v))
vwLength = dists[dir][v] + minweight
if w in dists[dir]:
if vwLength < dists[dir][w]:
raise ValueError("Contradictory paths found: negative weights?")
elif w not in seen[dir] or vwLength < seen[dir][w]:
# relaxing
seen[dir][w] = vwLength
push(fringe[dir], (vwLength, next(c), w))
paths[dir][w] = paths[dir][v] + [w]
if w in seen[0] and w in seen[1]:
# see if this path is better than the already
# discovered shortest path
totaldist = seen[0][w] + seen[1][w]
if finalpath == [] or finaldist > totaldist:
finaldist = totaldist
revpath = paths[1][w][:]
revpath.reverse()
finalpath = paths[0][w] + revpath[1:]
raise nx.NetworkXNoPath(f"No path between {source} and {target}.")
| (G, source, target, cutoff=None, *, backend=None, **backend_kwargs) |
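The "modified depth-first search" the `all_simple_paths` notes refer to can be sketched compactly: keep the current path on a stack, skip nodes already on it, and stop descending once the cutoff is reached. A simplified, stdlib-only sketch (`simple_paths_sketch` is an assumed name; the real implementation also handles multigraph edge keys):

```python
def simple_paths_sketch(adj, source, targets, cutoff=None):
    """Illustrative only: yield every simple path from `source` to any
    node in `targets` in a graph given as an adjacency dict."""
    targets = set(targets)
    if cutoff is None:
        cutoff = len(adj) - 1
    path = [source]
    on_path = {source}

    def dfs(u):
        if u in targets:
            yield list(path)
        if len(path) - 1 >= cutoff:      # path length counted in edges
            return
        for v in adj.get(u, ()):
            if v not in on_path:         # "simple": no repeated nodes
                path.append(v)
                on_path.add(v)
                yield from dfs(v)
                on_path.remove(v)        # backtrack
                path.pop()

    yield from dfs(source)

# complete graph on 4 nodes, matching the docstring's cutoff example
adj = {i: [j for j in range(4) if j != i] for i in range(4)}
print(sorted(simple_paths_sketch(adj, 0, {3}, cutoff=2)))
# [[0, 1, 3], [0, 2, 3], [0, 3]]
```

Note how the singleton path `[source]` falls out naturally when `source` is itself a target, matching the behavior documented above.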
30,377 | networkx.algorithms.dag | all_topological_sorts | Returns a generator of _all_ topological sorts of the directed graph G.
A topological sort is a nonunique permutation of the nodes such that an
edge from u to v implies that u appears before v in the topological sort
order.
Parameters
----------
G : NetworkX DiGraph
A directed graph
Yields
------
topological_sort_order : list
a list of nodes in `G`, representing one of the topological sort orders
Raises
------
NetworkXNotImplemented
If `G` is not directed
NetworkXUnfeasible
If `G` is not acyclic
Examples
--------
To enumerate all topological sorts of directed graph:
>>> DG = nx.DiGraph([(1, 2), (2, 3), (2, 4)])
>>> list(nx.all_topological_sorts(DG))
[[1, 2, 4, 3], [1, 2, 3, 4]]
Notes
-----
Implements an iterative version of the algorithm given in [1].
References
----------
.. [1] Knuth, Donald E., Szwarcfiter, Jayme L. (1974).
"A Structured Program to Generate All Topological Sorting Arrangements"
Information Processing Letters, Volume 2, Issue 6, 1974, Pages 153-157,
ISSN 0020-0190,
https://doi.org/10.1016/0020-0190(74)90001-5.
Elsevier (North-Holland), Amsterdam
| def transitive_closure_dag(G, topo_order=None):
"""Returns the transitive closure of a directed acyclic graph.
This function is faster than the function `transitive_closure`, but fails
if the graph has a cycle.
The transitive closure of G = (V,E) is a graph G+ = (V,E+) such that
for all v, w in V there is an edge (v, w) in E+ if and only if there
is a non-null path from v to w in G.
Parameters
----------
G : NetworkX DiGraph
A directed acyclic graph (DAG)
topo_order: list or tuple, optional
A topological order for G (if None, the function will compute one)
Returns
-------
NetworkX DiGraph
The transitive closure of `G`
Raises
------
NetworkXNotImplemented
If `G` is not directed
NetworkXUnfeasible
If `G` has a cycle
Examples
--------
>>> DG = nx.DiGraph([(1, 2), (2, 3)])
>>> TC = nx.transitive_closure_dag(DG)
>>> TC.edges()
OutEdgeView([(1, 2), (1, 3), (2, 3)])
Notes
-----
This algorithm is probably simple enough to be well-known but I didn't find
a mention in the literature.
"""
if topo_order is None:
topo_order = list(topological_sort(G))
TC = G.copy()
# idea: traverse vertices following a reverse topological order, connecting
# each vertex to its descendants at distance 2 as we go
for v in reversed(topo_order):
TC.add_edges_from((v, u) for u in nx.descendants_at_distance(TC, v, 2))
return TC
| (G, *, backend=None, **backend_kwargs) |
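For intuition about what `all_topological_sorts` enumerates, a simple backtracking sketch over Kahn-style source nodes works for small DAGs (the library implements the far more efficient iterative Knuth–Szwarcfiter algorithm instead; `all_topo_sorts_sketch` is an assumed name):

```python
def all_topo_sorts_sketch(nodes, edges):
    """Illustrative only: yield every topological order of a small DAG,
    by repeatedly choosing any remaining in-degree-0 node."""
    succ = {n: [] for n in nodes}
    indeg = {n: 0 for n in nodes}
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1
    order = []

    def backtrack():
        if len(order) == len(nodes):
            yield list(order)
            return
        for n in nodes:
            if indeg[n] == 0 and n not in order:
                order.append(n)
                for m in succ[n]:       # "remove" n from the DAG
                    indeg[m] -= 1
                yield from backtrack()
                for m in succ[n]:       # restore on backtrack
                    indeg[m] += 1
                order.pop()

    yield from backtrack()

# matches the docstring example DG = nx.DiGraph([(1, 2), (2, 3), (2, 4)])
print(sorted(all_topo_sorts_sketch([1, 2, 3, 4], [(1, 2), (2, 3), (2, 4)])))
# [[1, 2, 3, 4], [1, 2, 4, 3]]
```

The `n not in order` membership test makes this O(n) per choice; it is a clarity trade-off acceptable only in a sketch.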
30,378 | networkx.algorithms.triads | all_triads | A generator of all possible triads in G.
Parameters
----------
G : digraph
A NetworkX DiGraph
Returns
-------
all_triads : generator of DiGraphs
Generator of triads (order-3 DiGraphs)
Examples
--------
>>> G = nx.DiGraph([(1, 2), (2, 3), (3, 1), (3, 4), (4, 1), (4, 2)])
>>> for triad in nx.all_triads(G):
... print(triad.edges)
[(1, 2), (2, 3), (3, 1)]
[(1, 2), (4, 1), (4, 2)]
[(3, 1), (3, 4), (4, 1)]
[(2, 3), (3, 4), (4, 2)]
| null | (G, *, backend=None, **backend_kwargs) |
30,379 | networkx.algorithms.triads | all_triplets | Returns a generator of all possible sets of 3 nodes in a DiGraph.
.. deprecated:: 3.3
all_triplets is deprecated and will be removed in NetworkX version 3.5.
Use `itertools.combinations` instead::
all_triplets = itertools.combinations(G, 3)
Parameters
----------
G : digraph
A NetworkX DiGraph
Returns
-------
triplets : generator of 3-tuples
Generator of tuples of 3 nodes
Examples
--------
>>> G = nx.DiGraph([(1, 2), (2, 3), (3, 4)])
>>> list(nx.all_triplets(G))
[(1, 2, 3), (1, 2, 4), (1, 3, 4), (2, 3, 4)]
| null | (G, *, backend=None, **backend_kwargs) |
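As the deprecation note above suggests, `itertools.combinations` is the drop-in replacement for `all_triplets`:

```python
from itertools import combinations

# same node set as the docstring example G = nx.DiGraph([(1, 2), (2, 3), (3, 4)])
nodes = [1, 2, 3, 4]
triplets = list(combinations(nodes, 3))
print(triplets)  # [(1, 2, 3), (1, 2, 4), (1, 3, 4), (2, 3, 4)]
```

`combinations` yields tuples in lexicographic order of the input, which is why the output matches the docstring example exactly.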
30,380 | networkx.algorithms.dag | ancestors | Returns all nodes having a path to `source` in `G`.
Parameters
----------
G : NetworkX Graph
source : node in `G`
Returns
-------
set()
The ancestors of `source` in `G`
Raises
------
NetworkXError
If node `source` is not in `G`.
Examples
--------
>>> DG = nx.path_graph(5, create_using=nx.DiGraph)
>>> sorted(nx.ancestors(DG, 2))
[0, 1]
The `source` node is not an ancestor of itself, but can be included manually:
>>> sorted(nx.ancestors(DG, 2) | {2})
[0, 1, 2]
See also
--------
descendants
| def transitive_closure_dag(G, topo_order=None):
"""Returns the transitive closure of a directed acyclic graph.
This function is faster than the function `transitive_closure`, but fails
if the graph has a cycle.
The transitive closure of G = (V,E) is a graph G+ = (V,E+) such that
for all v, w in V there is an edge (v, w) in E+ if and only if there
is a non-null path from v to w in G.
Parameters
----------
G : NetworkX DiGraph
A directed acyclic graph (DAG)
topo_order: list or tuple, optional
A topological order for G (if None, the function will compute one)
Returns
-------
NetworkX DiGraph
The transitive closure of `G`
Raises
------
NetworkXNotImplemented
If `G` is not directed
NetworkXUnfeasible
If `G` has a cycle
Examples
--------
>>> DG = nx.DiGraph([(1, 2), (2, 3)])
>>> TC = nx.transitive_closure_dag(DG)
>>> TC.edges()
OutEdgeView([(1, 2), (1, 3), (2, 3)])
Notes
-----
This algorithm is probably simple enough to be well-known but I didn't find
a mention in the literature.
"""
if topo_order is None:
topo_order = list(topological_sort(G))
TC = G.copy()
# idea: traverse vertices following a reverse topological order, connecting
# each vertex to its descendants at distance 2 as we go
for v in reversed(topo_order):
TC.add_edges_from((v, u) for u in nx.descendants_at_distance(TC, v, 2))
return TC
| (G, source, *, backend=None, **backend_kwargs) |
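The `ancestors` computation amounts to reachability in the reversed graph: the ancestors of `source` are exactly the nodes that can reach it. A hedged stdlib sketch (`ancestors_sketch` is an assumed name; the library traverses `G` with a generic BFS helper instead):

```python
def ancestors_sketch(edges, source):
    """Illustrative only: nodes with a path to `source`, found by
    walking predecessor links backwards from `source`."""
    pred = {}
    for u, v in edges:
        pred.setdefault(v, set()).add(u)
    seen, stack = set(), [source]
    while stack:
        v = stack.pop()
        for u in pred.get(v, ()):
            if u not in seen:
                seen.add(u)
                stack.append(u)
    return seen

# path graph 0 -> 1 -> 2 -> 3 -> 4, as in the docstring example
print(sorted(ancestors_sketch([(0, 1), (1, 2), (2, 3), (3, 4)], 2)))  # [0, 1]
```

As documented above, `source` itself is not included; union in `{source}` if you need it.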
30,381 | networkx.algorithms.dag | antichains | Generates antichains from a directed acyclic graph (DAG).
An antichain is a subset of a partially ordered set such that any
two elements in the subset are incomparable.
Parameters
----------
G : NetworkX DiGraph
A directed acyclic graph (DAG)
topo_order: list or tuple, optional
A topological order for G (if None, the function will compute one)
Yields
------
antichain : list
a list of nodes in `G` representing an antichain
Raises
------
NetworkXNotImplemented
If `G` is not directed
NetworkXUnfeasible
If `G` contains a cycle
Examples
--------
>>> DG = nx.DiGraph([(1, 2), (1, 3)])
>>> list(nx.antichains(DG))
[[], [3], [2], [2, 3], [1]]
Notes
-----
This function was originally developed by Peter Jipsen and Franco Saliola
for the SAGE project. It's included in NetworkX with permission from the
authors. Original SAGE code at:
https://github.com/sagemath/sage/blob/master/src/sage/combinat/posets/hasse_diagram.py
References
----------
.. [1] Free Lattices, by R. Freese, J. Jezek and J. B. Nation,
AMS, Vol 42, 1995, p. 226.
| def transitive_closure_dag(G, topo_order=None):
"""Returns the transitive closure of a directed acyclic graph.
This function is faster than the function `transitive_closure`, but fails
if the graph has a cycle.
The transitive closure of G = (V,E) is a graph G+ = (V,E+) such that
for all v, w in V there is an edge (v, w) in E+ if and only if there
is a non-null path from v to w in G.
Parameters
----------
G : NetworkX DiGraph
A directed acyclic graph (DAG)
topo_order: list or tuple, optional
A topological order for G (if None, the function will compute one)
Returns
-------
NetworkX DiGraph
The transitive closure of `G`
Raises
------
NetworkXNotImplemented
If `G` is not directed
NetworkXUnfeasible
If `G` has a cycle
Examples
--------
>>> DG = nx.DiGraph([(1, 2), (2, 3)])
>>> TC = nx.transitive_closure_dag(DG)
>>> TC.edges()
OutEdgeView([(1, 2), (1, 3), (2, 3)])
Notes
-----
This algorithm is probably simple enough to be well-known but I didn't find
a mention in the literature.
"""
if topo_order is None:
topo_order = list(topological_sort(G))
TC = G.copy()
# idea: traverse vertices following a reverse topological order, connecting
# each vertex to its descendants at distance 2 as we go
for v in reversed(topo_order):
TC.add_edges_from((v, u) for u in nx.descendants_at_distance(TC, v, 2))
return TC
| (G, topo_order=None, *, backend=None, **backend_kwargs) |
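The antichain definition above can be checked directly on small DAGs: a subset is an antichain iff no member can reach another. A brute-force sketch (the library uses a much more efficient recursive construction; `antichains_sketch` is an assumed name, and the enumeration order differs from the docstring example):

```python
from itertools import combinations

def antichains_sketch(nodes, edges):
    """Illustrative only: enumerate antichains of a small DAG by testing
    every subset against pairwise reachability."""
    succ = {n: set() for n in nodes}
    for u, v in edges:
        succ[u].add(v)

    def reach(s):                       # all nodes reachable from s
        seen, stack = set(), [s]
        while stack:
            u = stack.pop()
            for v in succ[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        return seen

    reachable = {n: reach(n) for n in nodes}
    for r in range(len(nodes) + 1):
        for subset in combinations(nodes, r):
            if all(b not in reachable[a] and a not in reachable[b]
                   for a, b in combinations(subset, 2)):
                yield list(subset)

# DG = nx.DiGraph([(1, 2), (1, 3)]) from the docstring example
print(sorted(antichains_sketch([1, 2, 3], [(1, 2), (1, 3)])))
# [[], [1], [2], [2, 3], [3]]
```

This is exponential in the node count and only meant to make the definition concrete.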
30,382 | networkx.algorithms.centrality.current_flow_betweenness | approximate_current_flow_betweenness_centrality | Compute the approximate current-flow betweenness centrality for nodes.
Approximates the current-flow betweenness centrality within absolute
error of epsilon with high probability [1]_.
Parameters
----------
G : graph
A NetworkX graph
normalized : bool, optional (default=True)
If True the betweenness values are normalized by 2/[(n-1)(n-2)] where
n is the number of nodes in G.
weight : string or None, optional (default=None)
Key for edge data used as the edge weight.
If None, then use 1 as each edge weight.
The weight reflects the capacity or the strength of the
edge.
dtype : data type (float)
Default data type for internal matrices.
Set to np.float32 for lower memory consumption.
solver : string (default='full')
Type of linear solver to use for computing the flow matrix.
Options are "full" (uses most memory), "lu" (recommended), and
"cg" (uses least memory).
epsilon: float
Absolute error tolerance.
kmax: int
Maximum number of sample node pairs to use for approximation.
seed : integer, random_state, or None (default)
Indicator of random number generation state.
See :ref:`Randomness<randomness>`.
Returns
-------
nodes : dictionary
Dictionary of nodes with betweenness centrality as the value.
See Also
--------
current_flow_betweenness_centrality
Notes
-----
The running time is $O((1/\epsilon^2)m{\sqrt k} \log n)$
and the space required is $O(m)$ for $n$ nodes and $m$ edges.
If the edges have a 'weight' attribute they will be used as
weights in this algorithm. Unspecified weights are set to 1.
References
----------
.. [1] Ulrik Brandes and Daniel Fleischer:
Centrality Measures Based on Current Flow.
Proc. 22nd Symp. Theoretical Aspects of Computer Science (STACS '05).
LNCS 3404, pp. 533-544. Springer-Verlag, 2005.
https://doi.org/10.1007/978-3-540-31856-9_44
| null | (G, normalized=True, weight=None, dtype=<class 'float'>, solver='full', epsilon=0.5, kmax=10000, seed=None, *, backend=None, **backend_kwargs) |
30,384 | networkx.drawing.layout | arf_layout | Arf layout for networkx
The attractive and repulsive forces (arf) layout [1]
improves the spring layout in three ways. First, it
prevents congestion of highly connected nodes due to
strong forcing between nodes. Second, it utilizes the
layout space more effectively by preventing large gaps
that spring layout tends to create. Lastly, the arf
layout represents symmetries in the layout better than
the default spring layout.
Parameters
----------
G : nx.Graph or nx.DiGraph
Networkx graph.
pos : dict
Initial position of the nodes. If set to None a
random layout will be used.
scaling : float
Scales the radius of the circular layout space.
a : float
        Strength of springs between connected nodes. Should be larger than 1. The greater `a`, the clearer the separation of unconnected sub-clusters.
etol : float
        Convergence tolerance: iteration terminates successfully once the gradient sum of the spring forces drops below `etol`.
dt : float
Time step for force differential equation simulations.
max_iter : int
Max iterations before termination of the algorithm.
    References
    ----------
.. [1] "Self-Organization Applied to Dynamic Network Layout", M. Geipel,
International Journal of Modern Physics C, 2007, Vol 18, No 10, pp. 1537-1549.
https://doi.org/10.1142/S0129183107011558 https://arxiv.org/abs/0704.1748
Returns
-------
pos : dict
A dictionary of positions keyed by node.
Examples
--------
>>> G = nx.grid_graph((5, 5))
>>> pos = nx.arf_layout(G)
| def arf_layout(
G,
pos=None,
scaling=1,
a=1.1,
etol=1e-6,
dt=1e-3,
max_iter=1000,
):
"""Arf layout for networkx
The attractive and repulsive forces (arf) layout [1]
improves the spring layout in three ways. First, it
prevents congestion of highly connected nodes due to
strong forcing between nodes. Second, it utilizes the
layout space more effectively by preventing large gaps
that spring layout tends to create. Lastly, the arf
layout represents symmetries in the layout better than
the default spring layout.
Parameters
----------
G : nx.Graph or nx.DiGraph
Networkx graph.
pos : dict
Initial position of the nodes. If set to None a
random layout will be used.
scaling : float
Scales the radius of the circular layout space.
a : float
        Strength of springs between connected nodes. Should be larger than 1. The greater `a`, the clearer the separation of unconnected sub-clusters.
etol : float
        Convergence tolerance: iteration terminates successfully once the gradient sum of the spring forces drops below `etol`.
dt : float
Time step for force differential equation simulations.
max_iter : int
Max iterations before termination of the algorithm.
    References
    ----------
.. [1] "Self-Organization Applied to Dynamic Network Layout", M. Geipel,
International Journal of Modern Physics C, 2007, Vol 18, No 10, pp. 1537-1549.
https://doi.org/10.1142/S0129183107011558 https://arxiv.org/abs/0704.1748
Returns
-------
pos : dict
A dictionary of positions keyed by node.
Examples
--------
>>> G = nx.grid_graph((5, 5))
>>> pos = nx.arf_layout(G)
"""
import warnings
import numpy as np
if a <= 1:
msg = "The parameter a should be larger than 1"
raise ValueError(msg)
pos_tmp = nx.random_layout(G)
if pos is None:
pos = pos_tmp
else:
for node in G.nodes():
if node not in pos:
pos[node] = pos_tmp[node].copy()
# Initialize spring constant matrix
N = len(G)
# No nodes no computation
if N == 0:
return pos
# init force of springs
K = np.ones((N, N)) - np.eye(N)
node_order = {node: i for i, node in enumerate(G)}
for x, y in G.edges():
if x != y:
idx, jdx = (node_order[i] for i in (x, y))
K[idx, jdx] = a
# vectorize values
p = np.asarray(list(pos.values()))
# equation 10 in [1]
rho = scaling * np.sqrt(N)
# looping variables
error = etol + 1
n_iter = 0
while error > etol:
diff = p[:, np.newaxis] - p[np.newaxis]
A = np.linalg.norm(diff, axis=-1)[..., np.newaxis]
        # attraction force - repulsion force
# suppress nans due to division; caused by diagonal set to zero.
# Does not affect the computation due to nansum
with warnings.catch_warnings():
warnings.simplefilter("ignore")
change = K[..., np.newaxis] * diff - rho / A * diff
change = np.nansum(change, axis=0)
p += change * dt
error = np.linalg.norm(change, axis=-1).sum()
if n_iter > max_iter:
break
n_iter += 1
return dict(zip(G.nodes(), p))
| (G, pos=None, scaling=1, a=1.1, etol=1e-06, dt=0.001, max_iter=1000) |
30,385 | networkx.algorithms.components.biconnected | articulation_points | Yield the articulation points, or cut vertices, of a graph.
An articulation point or cut vertex is any node whose removal (along with
all its incident edges) increases the number of connected components of
a graph. An undirected connected graph without articulation points is
biconnected. Articulation points belong to more than one biconnected
component of a graph.
Notice that by convention a dyad is considered a biconnected component.
Parameters
----------
G : NetworkX Graph
An undirected graph.
Yields
------
node
An articulation point in the graph.
Raises
------
NetworkXNotImplemented
If the input graph is not undirected.
Examples
--------
>>> G = nx.barbell_graph(4, 2)
>>> print(nx.is_biconnected(G))
False
>>> len(list(nx.articulation_points(G)))
4
>>> G.add_edge(2, 8)
>>> print(nx.is_biconnected(G))
True
>>> len(list(nx.articulation_points(G)))
0
See Also
--------
is_biconnected
biconnected_components
biconnected_component_edges
Notes
-----
The algorithm to find articulation points and biconnected
components is implemented using a non-recursive depth-first-search
(DFS) that keeps track of the highest level that back edges reach
in the DFS tree. A node `n` is an articulation point if, and only
if, there exists a subtree rooted at `n` such that there is no
back edge from any successor of `n` that links to a predecessor of
`n` in the DFS tree. By keeping track of all the edges traversed
by the DFS we can obtain the biconnected components because all
edges of a bicomponent will be traversed consecutively between
articulation points.
References
----------
.. [1] Hopcroft, J.; Tarjan, R. (1973).
"Efficient algorithms for graph manipulation".
Communications of the ACM 16: 372–378. doi:10.1145/362248.362272
| null | (G, *, backend=None, **backend_kwargs) |
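A minimal runnable sketch of the docstring's barbell example (assuming a standard NetworkX install): the two clique nodes touching the bridge path and the two path nodes are the articulation points, and removing any one of them disconnects the graph.

```python
import networkx as nx

# Two K4 cliques joined by a 2-node path.
G = nx.barbell_graph(4, 2)

# The generator yields each cut vertex; collect them into a set.
cut_vertices = set(nx.articulation_points(G))

# Removing any articulation point disconnects the graph.
H = G.copy()
H.remove_node(next(iter(cut_vertices)))
disconnected = not nx.is_connected(H)
```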
30,388 | networkx.algorithms.shortest_paths.astar | astar_path | Returns a list of nodes in a shortest path between source and target
using the A* ("A-star") algorithm.
There may be more than one shortest path. This returns only one.
Parameters
----------
G : NetworkX graph
source : node
Starting node for path
target : node
Ending node for path
heuristic : function
A function to evaluate the estimate of the distance
from a node to the target. The function takes
two node arguments and must return a number.
If the heuristic is inadmissible (if it might
overestimate the cost of reaching the goal from a node),
the result may not be a shortest path.
The algorithm does not support updating heuristic
values for the same node due to caching the first
heuristic calculation per node.
weight : string or function
If this is a string, then edge weights will be accessed via the
edge attribute with this key (that is, the weight of the edge
joining `u` to `v` will be ``G.edges[u, v][weight]``). If no
such edge attribute exists, the weight of the edge is assumed to
be one.
If this is a function, the weight of an edge is the value
returned by the function. The function must accept exactly three
positional arguments: the two endpoints of an edge and the
dictionary of edge attributes for that edge. The function must
return a number or None to indicate a hidden edge.
cutoff : float, optional
If this is provided, the search will be bounded to this value. I.e. if
the evaluation function surpasses this value for a node n, the node will not
be expanded further and will be ignored. More formally, let h'(n) be the
heuristic function, and g(n) be the cost of reaching n from the source node. Then,
if g(n) + h'(n) > cutoff, the node will not be explored further.
Note that if the heuristic is inadmissible, it is possible that paths
are ignored even though they satisfy the cutoff.
Raises
------
NetworkXNoPath
If no path exists between source and target.
Examples
--------
>>> G = nx.path_graph(5)
>>> print(nx.astar_path(G, 0, 4))
[0, 1, 2, 3, 4]
>>> G = nx.grid_graph(dim=[3, 3]) # nodes are two-tuples (x,y)
>>> nx.set_edge_attributes(G, {e: e[1][0] * 2 for e in G.edges()}, "cost")
>>> def dist(a, b):
... (x1, y1) = a
... (x2, y2) = b
... return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
>>> print(nx.astar_path(G, (0, 0), (2, 2), heuristic=dist, weight="cost"))
[(0, 0), (0, 1), (0, 2), (1, 2), (2, 2)]
Notes
-----
Edge weight attributes must be numerical.
Distances are calculated as sums of weighted edges traversed.
The weight function can be used to hide edges by returning None.
So ``weight = lambda u, v, d: 1 if d['color']=="red" else None``
will find the shortest red path.
See Also
--------
shortest_path, dijkstra_path
| null | (G, source, target, heuristic=None, weight='weight', *, cutoff=None, backend=None, **backend_kwargs) |
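The grid example from the docstring, assembled into a runnable sketch. The Euclidean heuristic is admissible for unit-cost grids, which guarantees an optimal path; with the custom "cost" attribute it still steers the search, though admissibility then depends on the weights.

```python
import networkx as nx

# 3x3 grid; nodes are (x, y) tuples, edge cost grows with the x coordinate.
G = nx.grid_graph(dim=[3, 3])
nx.set_edge_attributes(G, {e: e[1][0] * 2 for e in G.edges()}, "cost")


def euclidean(a, b):
    # Straight-line distance estimate between two grid nodes.
    (x1, y1), (x2, y2) = a, b
    return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5


path = nx.astar_path(G, (0, 0), (2, 2), heuristic=euclidean, weight="cost")
```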
30,389 | networkx.algorithms.shortest_paths.astar | astar_path_length | Returns the length of the shortest path between source and target using
the A* ("A-star") algorithm.
Parameters
----------
G : NetworkX graph
source : node
Starting node for path
target : node
Ending node for path
heuristic : function
A function to evaluate the estimate of the distance
from a node to the target. The function takes
two node arguments and must return a number.
If the heuristic is inadmissible (if it might
overestimate the cost of reaching the goal from a node),
the result may not be a shortest path.
The algorithm does not support updating heuristic
values for the same node due to caching the first
heuristic calculation per node.
weight : string or function
If this is a string, then edge weights will be accessed via the
edge attribute with this key (that is, the weight of the edge
joining `u` to `v` will be ``G.edges[u, v][weight]``). If no
such edge attribute exists, the weight of the edge is assumed to
be one.
If this is a function, the weight of an edge is the value
returned by the function. The function must accept exactly three
positional arguments: the two endpoints of an edge and the
dictionary of edge attributes for that edge. The function must
return a number or None to indicate a hidden edge.
cutoff : float, optional
If this is provided, the search will be bounded to this value. I.e. if
the evaluation function surpasses this value for a node n, the node will not
be expanded further and will be ignored. More formally, let h'(n) be the
heuristic function, and g(n) be the cost of reaching n from the source node. Then,
if g(n) + h'(n) > cutoff, the node will not be explored further.
Note that if the heuristic is inadmissible, it is possible that paths
are ignored even though they satisfy the cutoff.
Raises
------
NetworkXNoPath
If no path exists between source and target.
See Also
--------
astar_path
| null | (G, source, target, heuristic=None, weight='weight', *, cutoff=None, backend=None, **backend_kwargs) |
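A short sketch of the length variant: with no heuristic supplied, A* reduces to Dijkstra's algorithm, so on an unweighted path graph the result is just the hop count.

```python
import networkx as nx

G = nx.path_graph(5)  # 0 - 1 - 2 - 3 - 4, unit edge weights

# Without a heuristic, every estimate is 0 and A* behaves like Dijkstra.
length = nx.astar_path_length(G, 0, 4)
```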
30,392 | networkx.linalg.attrmatrix | attr_matrix | Returns the attribute matrix using attributes from `G` as a numpy array.
If only `G` is passed in, then the adjacency matrix is constructed.
Let A be a discrete set of values for the node attribute `node_attr`. Then
the elements of A represent the rows and columns of the constructed matrix.
Now, iterate through every edge e=(u,v) in `G` and consider the value
of the edge attribute `edge_attr`. If ua and va are the values of the
node attribute `node_attr` for u and v, respectively, then the value of
the edge attribute is added to the matrix element at (ua, va).
Parameters
----------
G : graph
The NetworkX graph used to construct the attribute matrix.
edge_attr : str, optional
Each element of the matrix represents a running total of the
specified edge attribute for edges whose node attributes correspond
to the rows/cols of the matrix. The attribute must be present for
all edges in the graph. If no attribute is specified, then we
just count the number of edges whose node attributes correspond
to the matrix element.
node_attr : str, optional
Each row and column in the matrix represents a particular value
of the node attribute. The attribute must be present for all nodes
in the graph. Note, the values of this attribute should be reliably
hashable. So, float values are not recommended. If no attribute is
specified, then the rows and columns will be the nodes of the graph.
normalized : bool, optional
If True, then each row is normalized by the summation of its values.
rc_order : list, optional
A list of the node attribute values. This list specifies the ordering
of rows and columns of the array. If no ordering is provided, then
the ordering will be random (and also, a return value).
Other Parameters
----------------
dtype : NumPy data-type, optional
A valid NumPy dtype used to initialize the array. Keep in mind certain
dtypes can yield unexpected results if the array is to be normalized.
The parameter is passed to numpy.zeros(). If unspecified, the NumPy
default is used.
order : {'C', 'F'}, optional
Whether to store multidimensional data in C- or Fortran-contiguous
(row- or column-wise) order in memory. This parameter is passed to
numpy.zeros(). If unspecified, the NumPy default is used.
Returns
-------
M : 2D NumPy ndarray
The attribute matrix.
ordering : list
If `rc_order` was specified, then only the attribute matrix is returned.
However, if `rc_order` was None, then the ordering used to construct
the matrix is returned as well.
Examples
--------
Construct an adjacency matrix:
>>> G = nx.Graph()
>>> G.add_edge(0, 1, thickness=1, weight=3)
>>> G.add_edge(0, 2, thickness=2)
>>> G.add_edge(1, 2, thickness=3)
>>> nx.attr_matrix(G, rc_order=[0, 1, 2])
array([[0., 1., 1.],
[1., 0., 1.],
[1., 1., 0.]])
Alternatively, we can obtain the matrix describing edge thickness.
>>> nx.attr_matrix(G, edge_attr="thickness", rc_order=[0, 1, 2])
array([[0., 1., 2.],
[1., 0., 3.],
[2., 3., 0.]])
We can also color the nodes and ask for the probability distribution over
all edges (u,v) describing:
Pr(v has color Y | u has color X)
>>> G.nodes[0]["color"] = "red"
>>> G.nodes[1]["color"] = "red"
>>> G.nodes[2]["color"] = "blue"
>>> rc = ["red", "blue"]
>>> nx.attr_matrix(G, node_attr="color", normalized=True, rc_order=rc)
array([[0.33333333, 0.66666667],
[1. , 0. ]])
For example, the above tells us that for all edges (u,v):
Pr( v is red | u is red) = 1/3
Pr( v is blue | u is red) = 2/3
Pr( v is red | u is blue) = 1
Pr( v is blue | u is blue) = 0
Finally, we can obtain the total weights listed by the node colors.
>>> nx.attr_matrix(G, edge_attr="weight", node_attr="color", rc_order=rc)
array([[3., 2.],
[2., 0.]])
Thus, the total weight over all edges (u,v) with u and v having colors:
(red, red) is 3 # the sole contribution is from edge (0,1)
(red, blue) is 2 # contributions from edges (0,2) and (1,2)
(blue, red) is 2 # same as (red, blue) since graph is undirected
(blue, blue) is 0 # there are no edges with blue endpoints
| null | (G, edge_attr=None, node_attr=None, normalized=False, rc_order=None, dtype=None, order=None, *, backend=None, **backend_kwargs) |
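The thickness example above, condensed into a runnable sketch. Passing `rc_order` fixes the row/column layout and suppresses the second (ordering) return value, so the call returns only the matrix.

```python
import networkx as nx
import numpy as np

G = nx.Graph()
G.add_edge(0, 1, thickness=1, weight=3)
G.add_edge(0, 2, thickness=2)
G.add_edge(1, 2, thickness=3)

# Rows/columns follow rc_order; entries accumulate the edge attribute.
M = nx.attr_matrix(G, edge_attr="thickness", rc_order=[0, 1, 2])
```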
30,393 | networkx.linalg.attrmatrix | attr_sparse_matrix | Returns a SciPy sparse array using attributes from G.
If only `G` is passed in, then the adjacency matrix is constructed.
Let A be a discrete set of values for the node attribute `node_attr`. Then
the elements of A represent the rows and columns of the constructed matrix.
Now, iterate through every edge e=(u,v) in `G` and consider the value
of the edge attribute `edge_attr`. If ua and va are the values of the
node attribute `node_attr` for u and v, respectively, then the value of
the edge attribute is added to the matrix element at (ua, va).
Parameters
----------
G : graph
The NetworkX graph used to construct the NumPy matrix.
edge_attr : str, optional
Each element of the matrix represents a running total of the
specified edge attribute for edges whose node attributes correspond
to the rows/cols of the matrix. The attribute must be present for
all edges in the graph. If no attribute is specified, then we
just count the number of edges whose node attributes correspond
to the matrix element.
node_attr : str, optional
Each row and column in the matrix represents a particular value
of the node attribute. The attribute must be present for all nodes
in the graph. Note, the values of this attribute should be reliably
hashable. So, float values are not recommended. If no attribute is
specified, then the rows and columns will be the nodes of the graph.
normalized : bool, optional
If True, then each row is normalized by the summation of its values.
rc_order : list, optional
A list of the node attribute values. This list specifies the ordering
of rows and columns of the array. If no ordering is provided, then
the ordering will be random (and also, a return value).
Other Parameters
----------------
dtype : NumPy data-type, optional
A valid NumPy dtype used to initialize the array. Keep in mind certain
dtypes can yield unexpected results if the array is to be normalized.
The parameter is passed to numpy.zeros(). If unspecified, the NumPy
default is used.
Returns
-------
M : SciPy sparse array
The attribute matrix.
ordering : list
If `rc_order` was specified, then only the matrix is returned.
However, if `rc_order` was None, then the ordering used to construct
the matrix is returned as well.
Examples
--------
Construct an adjacency matrix:
>>> G = nx.Graph()
>>> G.add_edge(0, 1, thickness=1, weight=3)
>>> G.add_edge(0, 2, thickness=2)
>>> G.add_edge(1, 2, thickness=3)
>>> M = nx.attr_sparse_matrix(G, rc_order=[0, 1, 2])
>>> M.toarray()
array([[0., 1., 1.],
[1., 0., 1.],
[1., 1., 0.]])
Alternatively, we can obtain the matrix describing edge thickness.
>>> M = nx.attr_sparse_matrix(G, edge_attr="thickness", rc_order=[0, 1, 2])
>>> M.toarray()
array([[0., 1., 2.],
[1., 0., 3.],
[2., 3., 0.]])
We can also color the nodes and ask for the probability distribution over
all edges (u,v) describing:
Pr(v has color Y | u has color X)
>>> G.nodes[0]["color"] = "red"
>>> G.nodes[1]["color"] = "red"
>>> G.nodes[2]["color"] = "blue"
>>> rc = ["red", "blue"]
>>> M = nx.attr_sparse_matrix(G, node_attr="color", normalized=True, rc_order=rc)
>>> M.toarray()
array([[0.33333333, 0.66666667],
[1. , 0. ]])
For example, the above tells us that for all edges (u,v):
Pr( v is red | u is red) = 1/3
Pr( v is blue | u is red) = 2/3
Pr( v is red | u is blue) = 1
Pr( v is blue | u is blue) = 0
Finally, we can obtain the total weights listed by the node colors.
>>> M = nx.attr_sparse_matrix(G, edge_attr="weight", node_attr="color", rc_order=rc)
>>> M.toarray()
array([[3., 2.],
[2., 0.]])
Thus, the total weight over all edges (u,v) with u and v having colors:
(red, red) is 3 # the sole contribution is from edge (0,1)
(red, blue) is 2 # contributions from edges (0,2) and (1,2)
(blue, red) is 2 # same as (red, blue) since graph is undirected
(blue, blue) is 0 # there are no edges with blue endpoints
| null | (G, edge_attr=None, node_attr=None, normalized=False, rc_order=None, dtype=None, *, backend=None, **backend_kwargs) |
30,395 | networkx.algorithms.components.attracting | attracting_components | Generates the attracting components in `G`.
An attracting component in a directed graph `G` is a strongly connected
component with the property that a random walker on the graph will never
leave the component, once it enters the component.
The nodes in attracting components can also be thought of as recurrent
nodes. If a random walker enters the attractor containing the node, then
the node will be visited infinitely often.
To obtain induced subgraphs on each component use:
``(G.subgraph(c).copy() for c in attracting_components(G))``
Parameters
----------
G : DiGraph, MultiDiGraph
The graph to be analyzed.
Returns
-------
attractors : generator of sets
A generator of sets of nodes, one for each attracting component of G.
Raises
------
NetworkXNotImplemented
If the input graph is undirected.
See Also
--------
number_attracting_components
is_attracting_component
| null | (G, *, backend=None, **backend_kwargs) |
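A minimal sketch of the "trapping" intuition: a walker starting at node 0 eventually enters the 2-cycle {1, 2} and can never leave, so that strongly connected component is the sole attractor.

```python
import networkx as nx

# 0 -> 1 <-> 2 : the component {1, 2} has no outgoing edges.
G = nx.DiGraph([(0, 1), (1, 2), (2, 1)])
attractors = list(nx.attracting_components(G))
```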
30,396 | networkx.algorithms.assortativity.correlation | attribute_assortativity_coefficient | Compute assortativity for node attributes.
Assortativity measures the similarity of connections
in the graph with respect to the given attribute.
Parameters
----------
G : NetworkX graph
attribute : string
Node attribute key
nodes: list or iterable (optional)
Compute attribute assortativity for nodes in container.
The default is all nodes.
Returns
-------
r: float
Assortativity of graph for given attribute
Examples
--------
>>> G = nx.Graph()
>>> G.add_nodes_from([0, 1], color="red")
>>> G.add_nodes_from([2, 3], color="blue")
>>> G.add_edges_from([(0, 1), (2, 3)])
>>> print(nx.attribute_assortativity_coefficient(G, "color"))
1.0
Notes
-----
This computes Eq. (2) in Ref. [1]_ , (trace(M)-sum(M^2))/(1-sum(M^2)),
where M is the joint probability distribution (mixing matrix)
of the specified attribute.
References
----------
.. [1] M. E. J. Newman, Mixing patterns in networks,
Physical Review E, 67 026126, 2003
| null | (G, attribute, nodes=None, *, backend=None, **backend_kwargs) |
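The docstring example as a runnable sketch: every edge joins same-colored nodes, so the graph is perfectly assortative by color and the coefficient is 1.

```python
import networkx as nx

G = nx.Graph()
G.add_nodes_from([0, 1], color="red")
G.add_nodes_from([2, 3], color="blue")
G.add_edges_from([(0, 1), (2, 3)])  # red-red and blue-blue only

r = nx.attribute_assortativity_coefficient(G, "color")
```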
30,397 | networkx.algorithms.assortativity.mixing | attribute_mixing_dict | Returns dictionary representation of mixing matrix for attribute.
Parameters
----------
G : graph
NetworkX graph object.
attribute : string
Node attribute key.
nodes: list or iterable (optional)
Use nodes in container to build the dict. The default is all nodes.
normalized : bool (default=False)
Return counts if False or probabilities if True.
Examples
--------
>>> G = nx.Graph()
>>> G.add_nodes_from([0, 1], color="red")
>>> G.add_nodes_from([2, 3], color="blue")
>>> G.add_edge(1, 3)
>>> d = nx.attribute_mixing_dict(G, "color")
>>> print(d["red"]["blue"])
1
>>> print(d["blue"]["red"]) # d symmetric for undirected graphs
1
Returns
-------
d : dictionary
Counts or joint probability of occurrence of attribute pairs.
| null | (G, attribute, nodes=None, normalized=False, *, backend=None, **backend_kwargs) |
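A runnable version of the example above; note the returned dict is symmetric for undirected graphs because each edge is counted once in each direction.

```python
import networkx as nx

G = nx.Graph()
G.add_nodes_from([0, 1], color="red")
G.add_nodes_from([2, 3], color="blue")
G.add_edge(1, 3)  # one red-blue edge

d = nx.attribute_mixing_dict(G, "color")
```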
30,398 | networkx.algorithms.assortativity.mixing | attribute_mixing_matrix | Returns mixing matrix for attribute.
Parameters
----------
G : graph
NetworkX graph object.
attribute : string
Node attribute key.
nodes: list or iterable (optional)
Use only nodes in container to build the matrix. The default is
all nodes.
mapping : dictionary, optional
Mapping from node attribute to integer index in matrix.
If not specified, an arbitrary ordering will be used.
normalized : bool (default=True)
Return counts if False or probabilities if True.
Returns
-------
m: numpy array
Counts or joint probability of occurrence of attribute pairs.
Notes
-----
If each node has a unique attribute value, the unnormalized mixing matrix
will be equal to the adjacency matrix. To get a denser mixing matrix,
the rounding can be performed to form groups of nodes with equal values.
For example, the exact height of persons in cm (180.79155222, 163.9080892,
163.30095355, 167.99016217, 168.21590163, ...) can be rounded to (180, 163,
163, 168, 168, ...).
Definitions of attribute mixing matrix vary on whether the matrix
should include rows for attribute values that don't arise. Here we
do not include such empty-rows. But you can force them to appear
by inputting a `mapping` that includes those values.
Examples
--------
>>> G = nx.path_graph(3)
>>> gender = {0: "male", 1: "female", 2: "female"}
>>> nx.set_node_attributes(G, gender, "gender")
>>> mapping = {"male": 0, "female": 1}
>>> mix_mat = nx.attribute_mixing_matrix(G, "gender", mapping=mapping)
>>> mix_mat
array([[0. , 0.25],
[0.25, 0.5 ]])
| null | (G, attribute, nodes=None, mapping=None, normalized=True, *, backend=None, **backend_kwargs) |
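The gender example above, assembled into a sketch. Supplying `mapping` pins each attribute value to a matrix index, and with `normalized=True` (the default) the entries form a joint probability distribution that sums to 1.

```python
import networkx as nx
import numpy as np

G = nx.path_graph(3)
nx.set_node_attributes(G, {0: "male", 1: "female", 2: "female"}, "gender")

# Explicit mapping makes the row/column order deterministic.
mapping = {"male": 0, "female": 1}
mix = nx.attribute_mixing_matrix(G, "gender", mapping=mapping)
```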
30,400 | networkx.algorithms.cluster | average_clustering | Compute the average clustering coefficient for the graph G.
The clustering coefficient for the graph is the average,
.. math::
C = \frac{1}{n}\sum_{v \in G} c_v,
where :math:`n` is the number of nodes in `G`.
Parameters
----------
G : graph
nodes : container of nodes, optional (default=all nodes in G)
Compute average clustering for nodes in this container.
weight : string or None, optional (default=None)
The edge attribute that holds the numerical value used as a weight.
If None, then each edge has weight 1.
count_zeros : bool
If False include only the nodes with nonzero clustering in the average.
Returns
-------
avg : float
Average clustering
Examples
--------
>>> G = nx.complete_graph(5)
>>> print(nx.average_clustering(G))
1.0
Notes
-----
This is a space saving routine; it might be faster
to use the clustering function to get a list and then take the average.
Self loops are ignored.
References
----------
.. [1] Generalizations of the clustering coefficient to weighted
complex networks by J. Saramäki, M. Kivelä, J.-P. Onnela,
K. Kaski, and J. Kertész, Physical Review E, 75 027105 (2007).
http://jponnela.com/web_documents/a9.pdf
.. [2] Marcus Kaiser, Mean clustering coefficients: the role of isolated
nodes and leafs on clustering measures for small-world networks.
https://arxiv.org/abs/0802.2512
| null | (G, nodes=None, weight=None, count_zeros=True, *, backend=None, **backend_kwargs) |
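A sketch contrasting the two extremes: a complete graph, where every neighborhood is fully connected, and a star, which contains no triangles at all.

```python
import networkx as nx

# Every node in K5 has clustering coefficient 1.
avg_complete = nx.average_clustering(nx.complete_graph(5))

# A star has no triangles, so every coefficient (and the average) is 0.
avg_star = nx.average_clustering(nx.star_graph(4))
```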
30,401 | networkx.algorithms.assortativity.connectivity | average_degree_connectivity | Compute the average degree connectivity of graph.
The average degree connectivity is the average nearest neighbor degree of
nodes with degree k. For weighted graphs, an analogous measure can
be computed using the weighted average neighbors degree defined in
[1]_, for a node `i`, as
.. math::
k_{nn,i}^{w} = \frac{1}{s_i} \sum_{j \in N(i)} w_{ij} k_j
where `s_i` is the weighted degree of node `i`,
`w_{ij}` is the weight of the edge that links `i` and `j`,
and `N(i)` are the neighbors of node `i`.
Parameters
----------
G : NetworkX graph
source : "in"|"out"|"in+out" (default:"in+out")
Directed graphs only. Use "in"- or "out"-degree for source node.
target : "in"|"out"|"in+out" (default:"in+out")
Directed graphs only. Use "in"- or "out"-degree for target node.
nodes : list or iterable (optional)
Compute neighbor connectivity for these nodes. The default is all
nodes.
weight : string or None, optional (default=None)
The edge attribute that holds the numerical value used as a weight.
If None, then each edge has weight 1.
Returns
-------
d : dict
A dictionary keyed by degree k with the value of average connectivity.
Raises
------
NetworkXError
If either `source` or `target` are not one of 'in',
'out', or 'in+out'.
If either `source` or `target` is passed for an undirected graph.
Examples
--------
>>> G = nx.path_graph(4)
>>> G.edges[1, 2]["weight"] = 3
>>> nx.average_degree_connectivity(G)
{1: 2.0, 2: 1.5}
>>> nx.average_degree_connectivity(G, weight="weight")
{1: 2.0, 2: 1.75}
See Also
--------
average_neighbor_degree
References
----------
.. [1] A. Barrat, M. Barthélemy, R. Pastor-Satorras, and A. Vespignani,
"The architecture of complex weighted networks".
PNAS 101 (11): 3747–3752 (2004).
| null | (G, source='in+out', target='in+out', nodes=None, weight=None, *, backend=None, **backend_kwargs) |
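The docstring's path-graph example as a runnable sketch, showing how an edge weight shifts the weighted average for degree-2 nodes.

```python
import networkx as nx

G = nx.path_graph(4)  # degrees: 1, 2, 2, 1
G.edges[1, 2]["weight"] = 3

unweighted = nx.average_degree_connectivity(G)
weighted = nx.average_degree_connectivity(G, weight="weight")
```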
30,402 | networkx.algorithms.assortativity.neighbor_degree | average_neighbor_degree | Returns the average degree of the neighborhood of each node.
In an undirected graph, the neighborhood `N(i)` of node `i` contains the
nodes that are connected to `i` by an edge.
For directed graphs, `N(i)` is defined according to the parameter `source`:
- if source is 'in', then `N(i)` consists of predecessors of node `i`.
- if source is 'out', then `N(i)` consists of successors of node `i`.
- if source is 'in+out', then `N(i)` is both predecessors and successors.
The average neighborhood degree of a node `i` is
.. math::
k_{nn,i} = \frac{1}{|N(i)|} \sum_{j \in N(i)} k_j
where `N(i)` are the neighbors of node `i` and `k_j` is
the degree of node `j` which belongs to `N(i)`. For weighted
graphs, an analogous measure can be defined [1]_,
.. math::
k_{nn,i}^{w} = \frac{1}{s_i} \sum_{j \in N(i)} w_{ij} k_j
where `s_i` is the weighted degree of node `i`, `w_{ij}`
is the weight of the edge that links `i` and `j` and
`N(i)` are the neighbors of node `i`.
Parameters
----------
G : NetworkX graph
source : string ("in"|"out"|"in+out"), optional (default="out")
Directed graphs only.
Use "in"- or "out"-neighbors of source node.
target : string ("in"|"out"|"in+out"), optional (default="out")
Directed graphs only.
Use "in"- or "out"-degree for target node.
nodes : list or iterable, optional (default=G.nodes)
Compute neighbor degree only for specified nodes.
weight : string or None, optional (default=None)
The edge attribute that holds the numerical value used as a weight.
If None, then each edge has weight 1.
Returns
-------
d: dict
A dictionary keyed by node to the average degree of its neighbors.
Raises
------
NetworkXError
If either `source` or `target` are not one of 'in', 'out', or 'in+out'.
If either `source` or `target` is passed for an undirected graph.
Examples
--------
>>> G = nx.path_graph(4)
>>> G.edges[0, 1]["weight"] = 5
>>> G.edges[2, 3]["weight"] = 3
>>> nx.average_neighbor_degree(G)
{0: 2.0, 1: 1.5, 2: 1.5, 3: 2.0}
>>> nx.average_neighbor_degree(G, weight="weight")
{0: 2.0, 1: 1.1666666666666667, 2: 1.25, 3: 2.0}
>>> G = nx.DiGraph()
>>> nx.add_path(G, [0, 1, 2, 3])
>>> nx.average_neighbor_degree(G, source="in", target="in")
{0: 0.0, 1: 0.0, 2: 1.0, 3: 1.0}
>>> nx.average_neighbor_degree(G, source="out", target="out")
{0: 1.0, 1: 1.0, 2: 0.0, 3: 0.0}
See Also
--------
average_degree_connectivity
References
----------
.. [1] A. Barrat, M. Barthélemy, R. Pastor-Satorras, and A. Vespignani,
"The architecture of complex weighted networks".
PNAS 101 (11): 3747–3752 (2004).
| null | (G, source='out', target='out', nodes=None, weight=None, *, backend=None, **backend_kwargs) |
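A minimal runnable version of the undirected example: on a 4-node path the endpoints neighbor a degree-2 node, while the interior nodes average a degree-1 and a degree-2 neighbor.

```python
import networkx as nx

G = nx.path_graph(4)  # degrees: 1, 2, 2, 1
result = nx.average_neighbor_degree(G)
```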
30,403 | networkx.algorithms.connectivity.connectivity | average_node_connectivity | Returns the average connectivity of a graph G.
The average connectivity `\bar{\kappa}` of a graph G is the average
of local node connectivity over all pairs of nodes of G [1]_ .
.. math::
\bar{\kappa}(G) = \frac{\sum_{u,v} \kappa_{G}(u,v)}{{n \choose 2}}
Parameters
----------
G : NetworkX graph
Undirected graph
flow_func : function
A function for computing the maximum flow among a pair of nodes.
The function has to accept at least three parameters: a Digraph,
a source node, and a target node. And return a residual network
that follows NetworkX conventions (see :meth:`maximum_flow` for
details). If flow_func is None, the default maximum flow function
(:meth:`edmonds_karp`) is used. See :meth:`local_node_connectivity`
for details. The choice of the default function may change from
version to version and should not be relied on. Default value: None.
Returns
-------
K : float
Average node connectivity
See also
--------
:meth:`local_node_connectivity`
:meth:`node_connectivity`
:meth:`edge_connectivity`
:meth:`maximum_flow`
:meth:`edmonds_karp`
:meth:`preflow_push`
:meth:`shortest_augmenting_path`
References
----------
.. [1] Beineke, L., O. Oellermann, and R. Pippert (2002). The average
connectivity of a graph. Discrete mathematics 252(1-3), 31-45.
http://www.sciencedirect.com/science/article/pii/S0012365X01001807
| @nx._dispatchable
def edge_connectivity(G, s=None, t=None, flow_func=None, cutoff=None):
r"""Returns the edge connectivity of the graph or digraph G.
The edge connectivity is equal to the minimum number of edges that
must be removed to disconnect G or render it trivial. If source
and target nodes are provided, this function returns the local edge
connectivity: the minimum number of edges that must be removed to
break all paths from source to target in G.
Parameters
----------
G : NetworkX graph
Undirected or directed graph
s : node
Source node. Optional. Default value: None.
t : node
Target node. Optional. Default value: None.
flow_func : function
A function for computing the maximum flow among a pair of nodes.
The function has to accept at least three parameters: a Digraph,
a source node, and a target node. And return a residual network
that follows NetworkX conventions (see :meth:`maximum_flow` for
details). If flow_func is None, the default maximum flow function
(:meth:`edmonds_karp`) is used. See below for details. The
choice of the default function may change from version
to version and should not be relied on. Default value: None.
cutoff : integer, float, or None (default: None)
If specified, the maximum flow algorithm will terminate when the
flow value reaches or exceeds the cutoff. This only works for flows
that support the cutoff parameter (most do) and is ignored otherwise.
Returns
-------
K : integer
Edge connectivity for G, or local edge connectivity if source
and target were provided
Examples
--------
>>> # Platonic icosahedral graph is 5-edge-connected
>>> G = nx.icosahedral_graph()
>>> nx.edge_connectivity(G)
5
You can use alternative flow algorithms for the underlying
maximum flow computation. In dense networks the algorithm
:meth:`shortest_augmenting_path` will usually perform better
than the default :meth:`edmonds_karp`, which is faster for
sparse networks with highly skewed degree distributions.
Alternative flow functions have to be explicitly imported
from the flow package.
>>> from networkx.algorithms.flow import shortest_augmenting_path
>>> nx.edge_connectivity(G, flow_func=shortest_augmenting_path)
5
If you specify a pair of nodes (source and target) as parameters,
this function returns the value of local edge connectivity.
>>> nx.edge_connectivity(G, 3, 7)
5
If you need to perform several local computations among different
pairs of nodes on the same graph, it is recommended that you reuse
the data structures used in the maximum flow computations. See
:meth:`local_edge_connectivity` for details.
Notes
-----
This is a flow based implementation of global edge connectivity.
For undirected graphs the algorithm works by finding a 'small'
dominating set of nodes of G (see algorithm 7 in [1]_ ) and
computing local maximum flow (see :meth:`local_edge_connectivity`)
between an arbitrary node in the dominating set and the rest of
nodes in it. This is an implementation of algorithm 6 in [1]_ .
For directed graphs, the algorithm does n calls to the maximum
flow function. This is an implementation of algorithm 8 in [1]_ .
See also
--------
:meth:`local_edge_connectivity`
:meth:`local_node_connectivity`
:meth:`node_connectivity`
:meth:`maximum_flow`
:meth:`edmonds_karp`
:meth:`preflow_push`
:meth:`shortest_augmenting_path`
:meth:`k_edge_components`
:meth:`k_edge_subgraphs`
References
----------
.. [1] Abdol-Hossein Esfahanian. Connectivity Algorithms.
http://www.cse.msu.edu/~cse835/Papers/Graph_connectivity_revised.pdf
"""
if (s is not None and t is None) or (s is None and t is not None):
raise nx.NetworkXError("Both source and target must be specified.")
# Local edge connectivity
if s is not None and t is not None:
if s not in G:
raise nx.NetworkXError(f"node {s} not in graph")
if t not in G:
raise nx.NetworkXError(f"node {t} not in graph")
return local_edge_connectivity(G, s, t, flow_func=flow_func, cutoff=cutoff)
# Global edge connectivity
# reuse auxiliary digraph and residual network
H = build_auxiliary_edge_connectivity(G)
R = build_residual_network(H, "capacity")
kwargs = {"flow_func": flow_func, "auxiliary": H, "residual": R}
if G.is_directed():
# Algorithm 8 in [1]
if not nx.is_weakly_connected(G):
return 0
# initial value for \lambda is minimum degree
L = min(d for n, d in G.degree())
nodes = list(G)
n = len(nodes)
if cutoff is not None:
L = min(cutoff, L)
for i in range(n):
kwargs["cutoff"] = L
try:
L = min(L, local_edge_connectivity(G, nodes[i], nodes[i + 1], **kwargs))
except IndexError: # last node!
L = min(L, local_edge_connectivity(G, nodes[i], nodes[0], **kwargs))
return L
else: # undirected
# Algorithm 6 in [1]
if not nx.is_connected(G):
return 0
# initial value for \lambda is minimum degree
L = min(d for n, d in G.degree())
if cutoff is not None:
L = min(cutoff, L)
# A dominating set is \lambda-covering
# We need a dominating set with at least two nodes
for node in G:
D = nx.dominating_set(G, start_with=node)
v = D.pop()
if D:
break
else:
# in complete graphs the dominating sets will always be of one node
# thus we return min degree
return L
for w in D:
kwargs["cutoff"] = L
L = min(L, local_edge_connectivity(G, v, w, **kwargs))
return L
| (G, flow_func=None, *, backend=None, **backend_kwargs) |
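A short sketch exercising both connectivity measures discussed above: the icosahedral graph is 5-edge-connected, and in `K4` every pair of nodes is joined by exactly 3 internally disjoint paths, so the average node connectivity is exactly 3.

```python
import networkx as nx

# Global edge connectivity of the icosahedron (5-regular, 5-edge-connected).
k_edge = nx.edge_connectivity(nx.icosahedral_graph())

# Average of local node connectivity over all node pairs of K4.
k_avg = nx.average_node_connectivity(nx.complete_graph(4))
```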
30,404 | networkx.algorithms.shortest_paths.generic | average_shortest_path_length | Returns the average shortest path length.
The average shortest path length is
.. math::
a =\sum_{\substack{s,t \in V \\ s\neq t}} \frac{d(s, t)}{n(n-1)}
where `V` is the set of nodes in `G`,
`d(s, t)` is the shortest path from `s` to `t`,
and `n` is the number of nodes in `G`.
.. versionchanged:: 3.0
An exception is raised for directed graphs that are not strongly
connected.
Parameters
----------
G : NetworkX graph
weight : None, string or function, optional (default = None)
If None, every edge has weight/distance/cost 1.
If a string, use this edge attribute as the edge weight.
Any edge attribute not present defaults to 1.
If this is a function, the weight of an edge is the value
returned by the function. The function must accept exactly
three positional arguments: the two endpoints of an edge and
the dictionary of edge attributes for that edge.
The function must return a number.
method : string, optional (default = 'unweighted' or 'dijkstra')
The algorithm to use to compute the path lengths.
Supported options are 'unweighted', 'dijkstra', 'bellman-ford',
'floyd-warshall' and 'floyd-warshall-numpy'.
Other method values produce a ValueError.
The default method is 'unweighted' if `weight` is None,
otherwise the default method is 'dijkstra'.
Raises
------
NetworkXPointlessConcept
If `G` is the null graph (that is, the graph on zero nodes).
NetworkXError
If `G` is not connected (or not strongly connected, in the case
of a directed graph).
ValueError
If `method` is not among the supported options.
Examples
--------
>>> G = nx.path_graph(5)
>>> nx.average_shortest_path_length(G)
2.0
For disconnected graphs, you can compute the average shortest path
length for each component
>>> G = nx.Graph([(1, 2), (3, 4)])
>>> for C in (G.subgraph(c).copy() for c in nx.connected_components(G)):
... print(nx.average_shortest_path_length(C))
1.0
1.0
| null | (G, weight=None, method=None, *, backend=None, **backend_kwargs) |
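A short usage sketch of `average_shortest_path_length`, extending the docstring example with the weighted variant (assuming NetworkX is installed):

```python
import networkx as nx

# For the path graph P_5 the average over all 20 ordered pairs is 2.0,
# matching the docstring example above.
G = nx.path_graph(5)
avg = nx.average_shortest_path_length(G)

# With a weight attribute, path lengths are sums of edge weights
# rather than hop counts.
H = nx.path_graph(3)
nx.set_edge_attributes(H, 2, "weight")
avg_w = nx.average_shortest_path_length(H, weight="weight")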
30,405 | networkx.generators.classic | balanced_tree | Returns the perfectly balanced `r`-ary tree of height `h`.
.. plot::
>>> nx.draw(nx.balanced_tree(2, 3))
Parameters
----------
r : int
Branching factor of the tree; each node will have `r`
children.
h : int
Height of the tree.
create_using : NetworkX graph constructor, optional (default=nx.Graph)
Graph type to create. If graph instance, then cleared before populated.
Returns
-------
G : NetworkX graph
A balanced `r`-ary tree of height `h`.
Notes
-----
This is the rooted tree where all leaves are at distance `h` from
the root. The root has degree `r` and all other internal nodes
have degree `r + 1`.
Node labels are integers, starting from zero.
A balanced tree is also known as a *complete r-ary tree*.
| def star_graph(n, create_using=None):
"""Return the star graph
The star graph consists of one center node connected to n outer nodes.
.. plot::
>>> nx.draw(nx.star_graph(6))
Parameters
----------
n : int or iterable
If an integer, node labels are 0 to n with center 0.
If an iterable of nodes, the center is the first.
Warning: n is not checked for duplicates and if present the
resulting graph may not be as desired. Make sure you have no duplicates.
create_using : NetworkX graph constructor, optional (default=nx.Graph)
Graph type to create. If graph instance, then cleared before populated.
Notes
-----
The graph has n+1 nodes for integer n.
So star_graph(3) is the same as star_graph(range(4)).
"""
n, nodes = n
if isinstance(n, numbers.Integral):
nodes.append(int(n)) # there should be n+1 nodes
G = empty_graph(nodes, create_using)
if G.is_directed():
raise NetworkXError("Directed Graph not supported")
if len(nodes) > 1:
hub, *spokes = nodes
G.add_edges_from((hub, node) for node in spokes)
return G
| (r, h, create_using=None, *, backend=None, **backend_kwargs) |
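A quick check of the node and edge counts implied by the `balanced_tree` docstring (assuming NetworkX is installed):

```python
import networkx as nx

# A balanced r-ary tree of height h has (r**(h + 1) - 1) // (r - 1) nodes.
G = nx.balanced_tree(2, 3)
n_nodes = G.number_of_nodes()  # 2**4 - 1 = 15
n_edges = G.number_of_edges()  # a tree always has nodes - 1 edges
```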
30,406 | networkx.generators.random_graphs | barabasi_albert_graph | Returns a random graph using Barabási–Albert preferential attachment
A graph of $n$ nodes is grown by attaching new nodes each with $m$
edges that are preferentially attached to existing nodes with high degree.
Parameters
----------
n : int
Number of nodes
m : int
Number of edges to attach from a new node to existing nodes
seed : integer, random_state, or None (default)
Indicator of random number generation state.
See :ref:`Randomness<randomness>`.
initial_graph : Graph or None (default)
Initial network for Barabási–Albert algorithm.
It should be a connected graph for most use cases.
A copy of `initial_graph` is used.
If None, starts from a star graph on (m+1) nodes.
Returns
-------
G : Graph
Raises
------
NetworkXError
If `m` does not satisfy ``1 <= m < n``, or
the initial graph's number of nodes, m0, does not satisfy ``m <= m0 <= n``.
References
----------
.. [1] A. L. Barabási and R. Albert "Emergence of scaling in
random networks", Science 286, pp 509-512, 1999.
| def dual_barabasi_albert_graph(n, m1, m2, p, seed=None, initial_graph=None):
"""Returns a random graph using dual Barabási–Albert preferential attachment
A graph of $n$ nodes is grown by attaching new nodes each with either $m_1$
edges (with probability $p$) or $m_2$ edges (with probability $1-p$) that
are preferentially attached to existing nodes with high degree.
Parameters
----------
n : int
Number of nodes
m1 : int
Number of edges to link each new node to existing nodes with probability $p$
m2 : int
Number of edges to link each new node to existing nodes with probability $1-p$
p : float
The probability of attaching $m_1$ edges (as opposed to $m_2$ edges)
seed : integer, random_state, or None (default)
Indicator of random number generation state.
See :ref:`Randomness<randomness>`.
initial_graph : Graph or None (default)
Initial network for Barabási–Albert algorithm.
A copy of `initial_graph` is used.
It should be connected for most use cases.
If None, starts from a star graph on max(m1, m2) + 1 nodes.
Returns
-------
G : Graph
Raises
------
NetworkXError
If `m1` and `m2` do not satisfy ``1 <= m1,m2 < n``, or
`p` does not satisfy ``0 <= p <= 1``, or
the initial graph's number of nodes, m0, does not satisfy ``m1, m2 <= m0 <= n``.
References
----------
.. [1] N. Moshiri "The dual-Barabasi-Albert model", arXiv:1810.10538.
"""
if m1 < 1 or m1 >= n:
raise nx.NetworkXError(
f"Dual Barabási–Albert must have m1 >= 1 and m1 < n, m1 = {m1}, n = {n}"
)
if m2 < 1 or m2 >= n:
raise nx.NetworkXError(
f"Dual Barabási–Albert must have m2 >= 1 and m2 < n, m2 = {m2}, n = {n}"
)
if p < 0 or p > 1:
raise nx.NetworkXError(
f"Dual Barabási–Albert network must have 0 <= p <= 1, p = {p}"
)
# For simplicity, if p == 0 or 1, just return BA
if p == 1:
return barabasi_albert_graph(n, m1, seed)
elif p == 0:
return barabasi_albert_graph(n, m2, seed)
if initial_graph is None:
# Default initial graph: star graph on max(m1, m2) + 1 nodes
G = star_graph(max(m1, m2))
else:
if len(initial_graph) < max(m1, m2) or len(initial_graph) > n:
raise nx.NetworkXError(
f"Barabási–Albert initial graph must have between "
f"max(m1, m2) = {max(m1, m2)} and n = {n} nodes"
)
G = initial_graph.copy()
# Target nodes for new edges
targets = list(G)
# List of existing nodes, with nodes repeated once for each adjacent edge
repeated_nodes = [n for n, d in G.degree() for _ in range(d)]
# Start adding the remaining nodes.
source = len(G)
while source < n:
# Pick which m to use (m1 or m2)
if seed.random() < p:
m = m1
else:
m = m2
# Now choose m unique nodes from the existing nodes
# Pick uniformly from repeated_nodes (preferential attachment)
targets = _random_subset(repeated_nodes, m, seed)
# Add edges to m nodes from the source.
G.add_edges_from(zip([source] * m, targets))
# Add one node to the list for each new edge just created.
repeated_nodes.extend(targets)
# And the new node "source" has m edges to add to the list.
repeated_nodes.extend([source] * m)
source += 1
return G
| (n, m, seed=None, initial_graph=None, *, backend=None, **backend_kwargs) |
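Because growth starts from a star on m + 1 nodes and every later node attaches with exactly m edges to distinct targets, the edge count of a Barabási–Albert graph is deterministic even though the attachment targets are random. A small sketch (assuming NetworkX is installed):

```python
import networkx as nx

# n = 10, m = 2: star on 3 nodes contributes 2 edges, then 7 more nodes
# each add exactly 2 edges, for 16 edges in total.
G = nx.barabasi_albert_graph(n=10, m=2, seed=42)
```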
30,407 | networkx.generators.classic | barbell_graph | Returns the Barbell Graph: two complete graphs connected by a path.
.. plot::
>>> nx.draw(nx.barbell_graph(4, 2))
Parameters
----------
m1 : int
Size of the left and right barbells, must be greater than 2.
m2 : int
Length of the path connecting the barbells.
create_using : NetworkX graph constructor, optional (default=nx.Graph)
Graph type to create. If graph instance, then cleared before populated.
Only undirected Graphs are supported.
Returns
-------
G : NetworkX graph
A barbell graph.
Notes
-----
Two identical complete graphs $K_{m1}$ form the left and right bells,
and are connected by a path $P_{m2}$.
The `2*m1+m2` nodes are numbered
`0, ..., m1-1` for the left barbell,
`m1, ..., m1+m2-1` for the path,
and `m1+m2, ..., 2*m1+m2-1` for the right barbell.
The 3 subgraphs are joined via the edges `(m1-1, m1)` and
`(m1+m2-1, m1+m2)`. If `m2=0`, this is merely two complete
graphs joined together.
This graph is an extremal example in David Aldous
and Jim Fill's e-text on Random Walks on Graphs.
| def star_graph(n, create_using=None):
"""Return the star graph
The star graph consists of one center node connected to n outer nodes.
.. plot::
>>> nx.draw(nx.star_graph(6))
Parameters
----------
n : int or iterable
If an integer, node labels are 0 to n with center 0.
If an iterable of nodes, the center is the first.
Warning: n is not checked for duplicates and if present the
resulting graph may not be as desired. Make sure you have no duplicates.
create_using : NetworkX graph constructor, optional (default=nx.Graph)
Graph type to create. If graph instance, then cleared before populated.
Notes
-----
The graph has n+1 nodes for integer n.
So star_graph(3) is the same as star_graph(range(4)).
"""
n, nodes = n  # n arrives as an (int, node_list) pair via networkx's @nodes_or_number decorator
if isinstance(n, numbers.Integral):
nodes.append(int(n)) # there should be n+1 nodes
G = empty_graph(nodes, create_using)
if G.is_directed():
raise NetworkXError("Directed Graph not supported")
if len(nodes) > 1:
hub, *spokes = nodes
G.add_edges_from((hub, node) for node in spokes)
return G
| (m1, m2, create_using=None, *, backend=None, **backend_kwargs) |
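The node and edge bookkeeping in the `barbell_graph` docstring can be verified on a small instance (assuming NetworkX is installed):

```python
import networkx as nx

# barbell_graph(4, 2): two K_4 bells (6 edges each), a 2-node path (1 edge),
# and 2 bridge edges joining the path to the bells: 15 edges on 10 nodes.
G = nx.barbell_graph(4, 2)
```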
30,408 | networkx.algorithms.distance_measures | barycenter | Calculate barycenter of a connected graph, optionally with edge weights.
The :dfn:`barycenter` of a
:func:`connected <networkx.algorithms.components.is_connected>` graph
:math:`G` is the subgraph induced by the set of its nodes :math:`v`
minimizing the objective function
.. math::
\sum_{u \in V(G)} d_G(u, v),
where :math:`d_G` is the (possibly weighted) :func:`path length
<networkx.algorithms.shortest_paths.generic.shortest_path_length>`.
The barycenter is also called the :dfn:`median`. See [West01]_, p. 78.
Parameters
----------
G : :class:`networkx.Graph`
The connected graph :math:`G`.
weight : :class:`str`, optional
Passed through to
:func:`~networkx.algorithms.shortest_paths.generic.shortest_path_length`.
attr : :class:`str`, optional
If given, write the value of the objective function to each node's
`attr` attribute. Otherwise do not store the value.
sp : dict of dicts, optional
All pairs shortest path lengths as a dictionary of dictionaries
Returns
-------
list
Nodes of `G` that induce the barycenter of `G`.
Raises
------
NetworkXNoPath
If `G` is disconnected. `G` may appear disconnected to
:func:`barycenter` if `sp` is given but is missing shortest path
lengths for any pairs.
ValueError
If `sp` and `weight` are both given.
Examples
--------
>>> G = nx.Graph([(1, 2), (1, 3), (1, 4), (3, 4), (3, 5), (4, 5)])
>>> nx.barycenter(G)
[1, 3, 4]
See Also
--------
center
periphery
| def effective_graph_resistance(G, weight=None, invert_weight=True):
"""Returns the Effective graph resistance of G.
Also known as the Kirchhoff index.
The effective graph resistance is defined as the sum
of the resistance distance of every node pair in G [1]_.
If weight is not provided, then a weight of 1 is used for all edges.
The effective graph resistance of a disconnected graph is infinite.
Parameters
----------
G : NetworkX graph
A graph
weight : string or None, optional (default=None)
The edge data key used to compute the effective graph resistance.
If None, then each edge has weight 1.
invert_weight : boolean (default=True)
Proper calculation of resistance distance requires building the
Laplacian matrix with the reciprocal of the weight. Not required
if the weight is already inverted. Weight cannot be zero.
Returns
-------
RG : float
The effective graph resistance of `G`.
Raises
------
NetworkXNotImplemented
If `G` is a directed graph.
NetworkXError
If `G` does not contain any nodes.
Examples
--------
>>> G = nx.Graph([(1, 2), (1, 3), (1, 4), (3, 4), (3, 5), (4, 5)])
>>> round(nx.effective_graph_resistance(G), 10)
10.25
Notes
-----
The implementation is based on Theorem 2.2 in [2]_. Self-loops are ignored.
Multi-edges are contracted in one edge with weight equal to the harmonic sum of the weights.
References
----------
.. [1] Wolfram
"Kirchhoff Index."
https://mathworld.wolfram.com/KirchhoffIndex.html
.. [2] W. Ellens, F. M. Spieksma, P. Van Mieghem, A. Jamakovic, R. E. Kooij.
Effective graph resistance.
Lin. Alg. Appl. 435:2491-2506, 2011.
"""
import numpy as np
if len(G) == 0:
raise nx.NetworkXError("Graph G must contain at least one node.")
# Disconnected graphs have infinite Effective graph resistance
if not nx.is_connected(G):
return float("inf")
# Invert weights
G = G.copy()
if invert_weight and weight is not None:
if G.is_multigraph():
for u, v, k, d in G.edges(keys=True, data=True):
d[weight] = 1 / d[weight]
else:
for u, v, d in G.edges(data=True):
d[weight] = 1 / d[weight]
# Get Laplacian eigenvalues
mu = np.sort(nx.laplacian_spectrum(G, weight=weight))
# Compute Effective graph resistance based on spectrum of the Laplacian
# Self-loops are ignored
return float(np.sum(1 / mu[1:]) * G.number_of_nodes())
| (G, weight=None, attr=None, sp=None, *, backend=None, **backend_kwargs) |
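A minimal sketch of `barycenter` on a path graph, also exercising the `attr` option described above, which stores the objective value on each node (assuming NetworkX is installed):

```python
import networkx as nx

# On a path graph the barycenter is the middle node: it minimizes the sum
# of shortest-path distances to all other nodes.
G = nx.path_graph(5)
center_nodes = nx.barycenter(G)

# Store the objective value as a node attribute.
nx.barycenter(G, attr="dist_sum")
middle_cost = G.nodes[2]["dist_sum"]  # 2 + 1 + 0 + 1 + 2 = 6
```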
30,410 | networkx.algorithms.shortest_paths.weighted | bellman_ford_path | Returns the shortest path from source to target in a weighted graph G.
Parameters
----------
G : NetworkX graph
source : node
Starting node
target : node
Ending node
weight : string or function (default="weight")
If this is a string, then edge weights will be accessed via the
edge attribute with this key (that is, the weight of the edge
joining `u` to `v` will be ``G.edges[u, v][weight]``). If no
such edge attribute exists, the weight of the edge is assumed to
be one.
If this is a function, the weight of an edge is the value
returned by the function. The function must accept exactly three
positional arguments: the two endpoints of an edge and the
dictionary of edge attributes for that edge. The function must
return a number.
Returns
-------
path : list
List of nodes in a shortest path.
Raises
------
NodeNotFound
If `source` is not in `G`.
NetworkXNoPath
If no path exists between source and target.
Examples
--------
>>> G = nx.path_graph(5)
>>> nx.bellman_ford_path(G, 0, 4)
[0, 1, 2, 3, 4]
Notes
-----
Edge weight attributes must be numerical.
Distances are calculated as sums of weighted edges traversed.
See Also
--------
dijkstra_path, bellman_ford_path_length
| def _dijkstra_multisource(
G, sources, weight, pred=None, paths=None, cutoff=None, target=None
):
"""Uses Dijkstra's algorithm to find shortest weighted paths
Parameters
----------
G : NetworkX graph
sources : non-empty iterable of nodes
Starting nodes for paths. If this is just an iterable containing
a single node, then all paths computed by this function will
start from that node. If there are two or more nodes in this
iterable, the computed paths may begin from any one of the start
nodes.
weight: function
Function with (u, v, data) input that returns that edge's weight
or None to indicate a hidden edge
pred: dict of lists, optional(default=None)
dict to store a list of predecessors keyed by that node
If None, predecessors are not stored.
paths: dict, optional (default=None)
dict to store the path list from source to each node, keyed by node.
If None, paths are not stored.
target : node label, optional
Ending node for path. Search is halted when target is found.
cutoff : integer or float, optional
Length (sum of edge weights) at which the search is stopped.
If cutoff is provided, only return paths with summed weight <= cutoff.
Returns
-------
distance : dictionary
A mapping from node to shortest distance to that node from one
of the source nodes.
Raises
------
NodeNotFound
If any of `sources` is not in `G`.
Notes
-----
The optional predecessor and path dictionaries can be accessed by
the caller through the original pred and paths objects passed
as arguments. No need to explicitly return pred or paths.
"""
G_succ = G._adj # For speed-up (and works for both directed and undirected graphs)
push = heappush
pop = heappop
dist = {} # dictionary of final distances
seen = {}
# fringe is heapq with 3-tuples (distance,c,node)
# use the count c to avoid comparing nodes (may not be able to)
c = count()
fringe = []
for source in sources:
seen[source] = 0
push(fringe, (0, next(c), source))
while fringe:
(d, _, v) = pop(fringe)
if v in dist:
continue # already searched this node.
dist[v] = d
if v == target:
break
for u, e in G_succ[v].items():
cost = weight(v, u, e)
if cost is None:
continue
vu_dist = dist[v] + cost
if cutoff is not None:
if vu_dist > cutoff:
continue
if u in dist:
u_dist = dist[u]
if vu_dist < u_dist:
raise ValueError("Contradictory paths found:", "negative weights?")
elif pred is not None and vu_dist == u_dist:
pred[u].append(v)
elif u not in seen or vu_dist < seen[u]:
seen[u] = vu_dist
push(fringe, (vu_dist, next(c), u))
if paths is not None:
paths[u] = paths[v] + [u]
if pred is not None:
pred[u] = [v]
elif vu_dist == seen[u]:
if pred is not None:
pred[u].append(v)
# The optional predecessor and path dictionaries can be accessed
# by the caller via the pred and paths objects passed as arguments.
return dist
| (G, source, target, weight='weight', *, backend=None, **backend_kwargs) |
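Unlike Dijkstra, Bellman–Ford tolerates negative edge weights as long as no negative cycle is reachable. A small sketch on a directed triangle (assuming NetworkX is installed):

```python
import networkx as nx

G = nx.DiGraph()
G.add_weighted_edges_from([(0, 1, 2), (1, 2, -1), (0, 2, 4)])
# The detour 0 -> 1 -> 2 costs 2 + (-1) = 1, beating the direct edge of 4.
path = nx.bellman_ford_path(G, 0, 2)
```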
30,411 | networkx.algorithms.shortest_paths.weighted | bellman_ford_path_length | Returns the shortest path length from source to target
in a weighted graph.
Parameters
----------
G : NetworkX graph
source : node label
starting node for path
target : node label
ending node for path
weight : string or function (default="weight")
If this is a string, then edge weights will be accessed via the
edge attribute with this key (that is, the weight of the edge
joining `u` to `v` will be ``G.edges[u, v][weight]``). If no
such edge attribute exists, the weight of the edge is assumed to
be one.
If this is a function, the weight of an edge is the value
returned by the function. The function must accept exactly three
positional arguments: the two endpoints of an edge and the
dictionary of edge attributes for that edge. The function must
return a number.
Returns
-------
length : number
Shortest path length.
Raises
------
NodeNotFound
If `source` is not in `G`.
NetworkXNoPath
If no path exists between source and target.
Examples
--------
>>> G = nx.path_graph(5)
>>> nx.bellman_ford_path_length(G, 0, 4)
4
Notes
-----
Edge weight attributes must be numerical.
Distances are calculated as sums of weighted edges traversed.
See Also
--------
dijkstra_path_length, bellman_ford_path
| def _dijkstra_multisource(
G, sources, weight, pred=None, paths=None, cutoff=None, target=None
):
"""Uses Dijkstra's algorithm to find shortest weighted paths
Parameters
----------
G : NetworkX graph
sources : non-empty iterable of nodes
Starting nodes for paths. If this is just an iterable containing
a single node, then all paths computed by this function will
start from that node. If there are two or more nodes in this
iterable, the computed paths may begin from any one of the start
nodes.
weight: function
Function with (u, v, data) input that returns that edge's weight
or None to indicate a hidden edge
pred: dict of lists, optional(default=None)
dict to store a list of predecessors keyed by that node
If None, predecessors are not stored.
paths: dict, optional (default=None)
dict to store the path list from source to each node, keyed by node.
If None, paths are not stored.
target : node label, optional
Ending node for path. Search is halted when target is found.
cutoff : integer or float, optional
Length (sum of edge weights) at which the search is stopped.
If cutoff is provided, only return paths with summed weight <= cutoff.
Returns
-------
distance : dictionary
A mapping from node to shortest distance to that node from one
of the source nodes.
Raises
------
NodeNotFound
If any of `sources` is not in `G`.
Notes
-----
The optional predecessor and path dictionaries can be accessed by
the caller through the original pred and paths objects passed
as arguments. No need to explicitly return pred or paths.
"""
G_succ = G._adj # For speed-up (and works for both directed and undirected graphs)
push = heappush
pop = heappop
dist = {} # dictionary of final distances
seen = {}
# fringe is heapq with 3-tuples (distance,c,node)
# use the count c to avoid comparing nodes (may not be able to)
c = count()
fringe = []
for source in sources:
seen[source] = 0
push(fringe, (0, next(c), source))
while fringe:
(d, _, v) = pop(fringe)
if v in dist:
continue # already searched this node.
dist[v] = d
if v == target:
break
for u, e in G_succ[v].items():
cost = weight(v, u, e)
if cost is None:
continue
vu_dist = dist[v] + cost
if cutoff is not None:
if vu_dist > cutoff:
continue
if u in dist:
u_dist = dist[u]
if vu_dist < u_dist:
raise ValueError("Contradictory paths found:", "negative weights?")
elif pred is not None and vu_dist == u_dist:
pred[u].append(v)
elif u not in seen or vu_dist < seen[u]:
seen[u] = vu_dist
push(fringe, (vu_dist, next(c), u))
if paths is not None:
paths[u] = paths[v] + [u]
if pred is not None:
pred[u] = [v]
elif vu_dist == seen[u]:
if pred is not None:
pred[u].append(v)
# The optional predecessor and path dictionaries can be accessed
# by the caller via the pred and paths objects passed as arguments.
return dist
| (G, source, target, weight='weight', *, backend=None, **backend_kwargs) |
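As the docstring notes, `weight` may also be a callable taking the two endpoints and the edge-attribute dict; a quick sketch contrasting the default hop cost with a function override (assuming NetworkX is installed):

```python
import networkx as nx

G = nx.path_graph(5)
# Default: every hop costs 1.
hops = nx.bellman_ford_path_length(G, 0, 4)
# A weight function can override the cost per edge without touching the graph.
doubled = nx.bellman_ford_path_length(G, 0, 4, weight=lambda u, v, d: 2)
```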
30,412 | networkx.algorithms.shortest_paths.weighted | bellman_ford_predecessor_and_distance | Compute shortest path lengths and predecessors on shortest paths
in weighted graphs.
The algorithm has a running time of $O(mn)$ where $n$ is the number of
nodes and $m$ is the number of edges. It is slower than Dijkstra but
can handle negative edge weights.
If a negative cycle is detected, you can use :func:`find_negative_cycle`
to return the cycle and examine it. Shortest paths are not defined when
a negative cycle exists because once reached, the path can cycle forever
to build up arbitrarily low weights.
Parameters
----------
G : NetworkX graph
The algorithm works for all types of graphs, including directed
graphs and multigraphs.
source: node label
Starting node for path
target : node label, optional
Ending node for path
weight : string or function
If this is a string, then edge weights will be accessed via the
edge attribute with this key (that is, the weight of the edge
joining `u` to `v` will be ``G.edges[u, v][weight]``). If no
such edge attribute exists, the weight of the edge is assumed to
be one.
If this is a function, the weight of an edge is the value
returned by the function. The function must accept exactly three
positional arguments: the two endpoints of an edge and the
dictionary of edge attributes for that edge. The function must
return a number.
heuristic : bool
Determines whether to use a heuristic to early detect negative
cycles at a hopefully negligible cost.
Returns
-------
pred, dist : dictionaries
Returns two dictionaries keyed by node to predecessor in the
path and to the distance from the source respectively.
Raises
------
NodeNotFound
If `source` is not in `G`.
NetworkXUnbounded
If the (di)graph contains a negative (di)cycle, the
algorithm raises an exception to indicate the presence of the
negative (di)cycle. Note: any negative weight edge in an
undirected graph is a negative cycle.
Examples
--------
>>> G = nx.path_graph(5, create_using=nx.DiGraph())
>>> pred, dist = nx.bellman_ford_predecessor_and_distance(G, 0)
>>> sorted(pred.items())
[(0, []), (1, [0]), (2, [1]), (3, [2]), (4, [3])]
>>> sorted(dist.items())
[(0, 0), (1, 1), (2, 2), (3, 3), (4, 4)]
>>> pred, dist = nx.bellman_ford_predecessor_and_distance(G, 0, 1)
>>> sorted(pred.items())
[(0, []), (1, [0]), (2, [1]), (3, [2]), (4, [3])]
>>> sorted(dist.items())
[(0, 0), (1, 1), (2, 2), (3, 3), (4, 4)]
>>> G = nx.cycle_graph(5, create_using=nx.DiGraph())
>>> G[1][2]["weight"] = -7
>>> nx.bellman_ford_predecessor_and_distance(G, 0)
Traceback (most recent call last):
...
networkx.exception.NetworkXUnbounded: Negative cycle detected.
See Also
--------
find_negative_cycle
Notes
-----
Edge weight attributes must be numerical.
Distances are calculated as sums of weighted edges traversed.
The dictionaries returned only have keys for nodes reachable from
the source.
In the case where the (di)graph is not connected, if a component
not containing the source contains a negative (di)cycle, it
will not be detected.
In NetworkX v2.1 and prior, the source node had predecessor `[None]`.
In NetworkX v2.2 this changed to the source node having predecessor `[]`
| def _dijkstra_multisource(
G, sources, weight, pred=None, paths=None, cutoff=None, target=None
):
"""Uses Dijkstra's algorithm to find shortest weighted paths
Parameters
----------
G : NetworkX graph
sources : non-empty iterable of nodes
Starting nodes for paths. If this is just an iterable containing
a single node, then all paths computed by this function will
start from that node. If there are two or more nodes in this
iterable, the computed paths may begin from any one of the start
nodes.
weight: function
Function with (u, v, data) input that returns that edge's weight
or None to indicate a hidden edge
pred: dict of lists, optional(default=None)
dict to store a list of predecessors keyed by that node
If None, predecessors are not stored.
paths: dict, optional (default=None)
dict to store the path list from source to each node, keyed by node.
If None, paths are not stored.
target : node label, optional
Ending node for path. Search is halted when target is found.
cutoff : integer or float, optional
Length (sum of edge weights) at which the search is stopped.
If cutoff is provided, only return paths with summed weight <= cutoff.
Returns
-------
distance : dictionary
A mapping from node to shortest distance to that node from one
of the source nodes.
Raises
------
NodeNotFound
If any of `sources` is not in `G`.
Notes
-----
The optional predecessor and path dictionaries can be accessed by
the caller through the original pred and paths objects passed
as arguments. No need to explicitly return pred or paths.
"""
G_succ = G._adj # For speed-up (and works for both directed and undirected graphs)
push = heappush
pop = heappop
dist = {} # dictionary of final distances
seen = {}
# fringe is heapq with 3-tuples (distance,c,node)
# use the count c to avoid comparing nodes (may not be able to)
c = count()
fringe = []
for source in sources:
seen[source] = 0
push(fringe, (0, next(c), source))
while fringe:
(d, _, v) = pop(fringe)
if v in dist:
continue # already searched this node.
dist[v] = d
if v == target:
break
for u, e in G_succ[v].items():
cost = weight(v, u, e)
if cost is None:
continue
vu_dist = dist[v] + cost
if cutoff is not None:
if vu_dist > cutoff:
continue
if u in dist:
u_dist = dist[u]
if vu_dist < u_dist:
raise ValueError("Contradictory paths found:", "negative weights?")
elif pred is not None and vu_dist == u_dist:
pred[u].append(v)
elif u not in seen or vu_dist < seen[u]:
seen[u] = vu_dist
push(fringe, (vu_dist, next(c), u))
if paths is not None:
paths[u] = paths[v] + [u]
if pred is not None:
pred[u] = [v]
elif vu_dist == seen[u]:
if pred is not None:
pred[u].append(v)
# The optional predecessor and path dictionaries can be accessed
# by the caller via the pred and paths objects passed as arguments.
return dist
| (G, source, target=None, weight='weight', heuristic=False, *, backend=None, **backend_kwargs) |
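A condensed version of the docstring example, showing the predecessor-list convention (the source node keeps an empty list, per the NetworkX v2.2 change noted above); assumes NetworkX is installed:

```python
import networkx as nx

G = nx.path_graph(5, create_using=nx.DiGraph())
pred, dist = nx.bellman_ford_predecessor_and_distance(G, 0)
# Each node's predecessor list holds the single node preceding it on the path.
```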
30,413 | networkx.linalg.bethehessianmatrix | bethe_hessian_matrix | Returns the Bethe Hessian matrix of G.
The Bethe Hessian is a family of matrices parametrized by r, defined as
H(r) = (r^2 - 1) I - r A + D where A is the adjacency matrix, D is the
diagonal matrix of node degrees, and I is the identity matrix. It is equal
to the graph laplacian when the regularizer r = 1.
The default choice of regularizer should be the ratio [2]_
.. math::
r_m = \left(\sum k_i \right)^{-1}\left(\sum k_i^2 \right) - 1
Parameters
----------
G : Graph
A NetworkX graph
r : float
Regularizer parameter
nodelist : list, optional
The rows and columns are ordered according to the nodes in nodelist.
If nodelist is None, then the ordering is produced by ``G.nodes()``.
Returns
-------
H : scipy.sparse.csr_array
The Bethe Hessian matrix of `G`, with parameter `r`.
Examples
--------
>>> k = [3, 2, 2, 1, 0]
>>> G = nx.havel_hakimi_graph(k)
>>> H = nx.bethe_hessian_matrix(G)
>>> H.toarray()
array([[ 3.5625, -1.25 , -1.25 , -1.25 , 0. ],
[-1.25 , 2.5625, -1.25 , 0. , 0. ],
[-1.25 , -1.25 , 2.5625, 0. , 0. ],
[-1.25 , 0. , 0. , 1.5625, 0. ],
[ 0. , 0. , 0. , 0. , 0.5625]])
See Also
--------
bethe_hessian_spectrum
adjacency_matrix
laplacian_matrix
References
----------
.. [1] A. Saade, F. Krzakala and L. Zdeborová
"Spectral Clustering of Graphs with the Bethe Hessian",
Advances in Neural Information Processing Systems, 2014.
.. [2] C. M. Le, E. Levina
"Estimating the number of communities in networks by spectral methods"
arXiv:1507.00827, 2015.
| null | (G, r=None, nodelist=None, *, backend=None, **backend_kwargs) |
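The definition H(r) = (r^2 - 1) I - r A + D can be cross-checked against a direct dense computation on a small graph, using an explicit regularizer r = 2 (assuming NetworkX, NumPy, and SciPy are installed):

```python
import networkx as nx
import numpy as np

G = nx.path_graph(3)
r = 2
H = nx.bethe_hessian_matrix(G, r=r).toarray()

# Rebuild H(r) = (r^2 - 1) I - r A + D by hand from the adjacency matrix
# and the degree diagonal, in the default node order.
A = nx.to_numpy_array(G)
D = np.diag([d for _, d in G.degree()])
H_manual = (r**2 - 1) * np.eye(3) - r * A + D
```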
30,414 | networkx.linalg.spectrum | bethe_hessian_spectrum | Returns eigenvalues of the Bethe Hessian matrix of G.
Parameters
----------
G : Graph
A NetworkX Graph or DiGraph
r : float
Regularizer parameter
Returns
-------
evals : NumPy array
Eigenvalues
See Also
--------
bethe_hessian_matrix
References
----------
.. [1] A. Saade, F. Krzakala and L. Zdeborová
"Spectral clustering of graphs with the bethe hessian",
Advances in Neural Information Processing Systems. 2014.
| null | (G, r=None, *, backend=None, **backend_kwargs) |
30,417 | networkx.algorithms.centrality.betweenness | betweenness_centrality | Compute the shortest-path betweenness centrality for nodes.
Betweenness centrality of a node $v$ is the sum of the
fraction of all-pairs shortest paths that pass through $v$
.. math::
c_B(v) =\sum_{s,t \in V} \frac{\sigma(s, t|v)}{\sigma(s, t)}
where $V$ is the set of nodes, $\sigma(s, t)$ is the number of
shortest $(s, t)$-paths, and $\sigma(s, t|v)$ is the number of
those paths passing through some node $v$ other than $s, t$.
If $s = t$, $\sigma(s, t) = 1$, and if $v \in {s, t}$,
$\sigma(s, t|v) = 0$ [2]_.
Parameters
----------
G : graph
A NetworkX graph.
k : int, optional (default=None)
If k is not None use k node samples to estimate betweenness.
The value of k <= n where n is the number of nodes in the graph.
Higher values give better approximation.
normalized : bool, optional
If True the betweenness values are normalized by `2/((n-1)(n-2))`
for graphs, and `1/((n-1)(n-2))` for directed graphs where `n`
is the number of nodes in G.
weight : None or string, optional (default=None)
If None, all edge weights are considered equal.
Otherwise holds the name of the edge attribute used as weight.
Weights are used to calculate weighted shortest paths, so they are
interpreted as distances.
endpoints : bool, optional
If True include the endpoints in the shortest path counts.
seed : integer, random_state, or None (default)
Indicator of random number generation state.
See :ref:`Randomness<randomness>`.
Note that this is only used if k is not None.
Returns
-------
nodes : dictionary
Dictionary of nodes with betweenness centrality as the value.
See Also
--------
edge_betweenness_centrality
load_centrality
Notes
-----
The algorithm is from Ulrik Brandes [1]_.
See [4]_ for the original first published version and [2]_ for details on
algorithms for variations and related metrics.
For approximate betweenness calculations set k=#samples to use
k nodes ("pivots") to estimate the betweenness values. For an estimate
of the number of pivots needed see [3]_.
For weighted graphs the edge weights must be greater than zero.
Zero edge weights can produce an infinite number of equal length
paths between pairs of nodes.
The total number of paths between source and target is counted
differently for directed and undirected graphs. Directed paths
are easy to count. Undirected paths are tricky: should a path
from "u" to "v" count as 1 undirected path or as 2 directed paths?
For betweenness_centrality we report the number of undirected
paths when G is undirected.
For betweenness_centrality_subset the reporting is different.
If the source and target subsets are the same, then we want
to count undirected paths. But if the source and target subsets
differ -- for example, if sources is {0} and targets is {1},
then we are only counting the paths in one direction. They are
undirected paths but we are counting them in a directed way.
To count them as undirected paths, each should count as half a path.
This algorithm is not guaranteed to be correct if edge weights
are floating point numbers. As a workaround you can use integer
numbers by multiplying the relevant edge attributes by a convenient
constant factor (e.g. 100) and converting to integers.
References
----------
.. [1] Ulrik Brandes:
A Faster Algorithm for Betweenness Centrality.
Journal of Mathematical Sociology 25(2):163-177, 2001.
https://doi.org/10.1080/0022250X.2001.9990249
.. [2] Ulrik Brandes:
On Variants of Shortest-Path Betweenness
Centrality and their Generic Computation.
Social Networks 30(2):136-145, 2008.
https://doi.org/10.1016/j.socnet.2007.11.001
.. [3] Ulrik Brandes and Christian Pich:
Centrality Estimation in Large Networks.
International Journal of Bifurcation and Chaos 17(7):2303-2318, 2007.
https://dx.doi.org/10.1142/S0218127407018403
.. [4] Linton C. Freeman:
A set of measures of centrality based on betweenness.
Sociometry 40: 35–41, 1977
https://doi.org/10.2307/3033543
| null | (G, k=None, normalized=True, weight=None, endpoints=False, seed=None, *, backend=None, **backend_kwargs) |
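A minimal usage sketch for the function above (the graph, `k`, and `seed` values are illustrative choices, not taken from the docstring):

```python
# Hedged sketch of exact vs. sampled betweenness centrality.
import networkx as nx

G = nx.path_graph(5)  # 0 - 1 - 2 - 3 - 4

# Exact, normalized betweenness: the middle node of a path lies on the
# most shortest paths, so it scores highest (node 2 here gets 2/3).
bc = nx.betweenness_centrality(G)

# Approximate betweenness from k=3 pivot samples; seed makes the
# pivot choice reproducible.
bc_approx = nx.betweenness_centrality(G, k=3, seed=42)
```

On a 5-node path, node 2 lies on the 4 shortest paths (0,3), (0,4), (1,3), (1,4), and the `2/((n-1)(n-2)) = 1/6` normalization yields 4/6.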
30,418 | networkx.algorithms.centrality.betweenness_subset | betweenness_centrality_subset | Compute betweenness centrality for a subset of nodes.
.. math::
c_B(v) =\sum_{s\in S, t \in T} \frac{\sigma(s, t|v)}{\sigma(s, t)}
where $S$ is the set of sources, $T$ is the set of targets,
$\sigma(s, t)$ is the number of shortest $(s, t)$-paths,
and $\sigma(s, t|v)$ is the number of those paths
passing through some node $v$ other than $s, t$.
If $s = t$, $\sigma(s, t) = 1$,
and if $v \in {s, t}$, $\sigma(s, t|v) = 0$ [2]_.
Parameters
----------
G : graph
A NetworkX graph.
sources : list of nodes
Nodes to use as sources for shortest paths in betweenness.
targets : list of nodes
Nodes to use as targets for shortest paths in betweenness.
normalized : bool, optional
If True the betweenness values are normalized by $2/((n-1)(n-2))$
for graphs, and $1/((n-1)(n-2))$ for directed graphs where $n$
is the number of nodes in G.
weight : None or string, optional (default=None)
If None, all edge weights are considered equal.
Otherwise holds the name of the edge attribute used as weight.
Weights are used to calculate weighted shortest paths, so they are
interpreted as distances.
Returns
-------
nodes : dictionary
Dictionary of nodes with betweenness centrality as the value.
See Also
--------
edge_betweenness_centrality
load_centrality
Notes
-----
The basic algorithm is from [1]_.
For weighted graphs the edge weights must be greater than zero.
Zero edge weights can produce an infinite number of equal length
paths between pairs of nodes.
The normalization might seem a little strange but it is
designed to make betweenness_centrality(G) be the same as
betweenness_centrality_subset(G,sources=G.nodes(),targets=G.nodes()).
The total number of paths between source and target is counted
differently for directed and undirected graphs. Directed paths
are easy to count. Undirected paths are tricky: should a path
from "u" to "v" count as 1 undirected path or as 2 directed paths?
For betweenness_centrality we report the number of undirected
paths when G is undirected.
For betweenness_centrality_subset the reporting is different.
If the source and target subsets are the same, then we want
to count undirected paths. But if the source and target subsets
differ -- for example, if sources is {0} and targets is {1},
then we are only counting the paths in one direction. They are
undirected paths but we are counting them in a directed way.
To count them as undirected paths, each should count as half a path.
References
----------
.. [1] Ulrik Brandes, A Faster Algorithm for Betweenness Centrality.
Journal of Mathematical Sociology 25(2):163-177, 2001.
https://doi.org/10.1080/0022250X.2001.9990249
.. [2] Ulrik Brandes: On Variants of Shortest-Path Betweenness
Centrality and their Generic Computation.
Social Networks 30(2):136-145, 2008.
https://doi.org/10.1016/j.socnet.2007.11.001
| null | (G, sources, targets, normalized=False, weight=None, *, backend=None, **backend_kwargs) |
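A short sketch of the behavior described in the Notes above (the path graph is an illustrative choice):

```python
# Hedged sketch of subset betweenness and its half-path counting.
import networkx as nx

G = nx.path_graph(5)  # 0 - 1 - 2 - 3 - 4

# With sources == targets == all nodes, the (unnormalized) subset variant
# matches unnormalized betweenness_centrality, as the Notes describe.
full = nx.betweenness_centrality_subset(G, sources=list(G), targets=list(G))
exact = nx.betweenness_centrality(G, normalized=False)

# With differing subsets the counting is one-directional: the single
# undirected 0-to-4 path through node 2 counts as half a path (0.5).
sub = nx.betweenness_centrality_subset(G, sources=[0], targets=[4])
```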
30,420 | networkx.algorithms.traversal.beamsearch | bfs_beam_edges | Iterates over edges in a beam search.
The beam search is a generalized breadth-first search in which only
the "best" *w* neighbors of the current node are enqueued, where *w*
is the beam width and "best" is an application-specific
heuristic. In general, a beam search with a small beam width might
not visit each node in the graph.
.. note::
With the default value of ``width=None`` or `width` greater than the
maximum degree of the graph, this function equates to a slower
version of `~networkx.algorithms.traversal.breadth_first_search.bfs_edges`.
All nodes will be visited, though the order of the reported edges may
vary. In such cases, `value` has no effect - consider using `bfs_edges`
directly instead.
Parameters
----------
G : NetworkX graph
source : node
Starting node for the breadth-first search; this function
iterates over only those edges in the component reachable from
this node.
value : function
A function that takes a node of the graph as input and returns a
real number indicating how "good" it is. A higher value means it
is more likely to be visited sooner during the search. When
visiting a new node, only the `width` neighbors with the highest
`value` are enqueued (in decreasing order of `value`).
width : int (default = None)
The beam width for the search. This is the number of neighbors
(ordered by `value`) to enqueue when visiting each new node.
Yields
------
edge
Edges in the beam search starting from `source`, given as a pair
of nodes.
Examples
--------
To give nodes with, for example, a higher centrality precedence
during the search, set the `value` function to return the centrality
value of the node:
>>> G = nx.karate_club_graph()
>>> centrality = nx.eigenvector_centrality(G)
>>> list(nx.bfs_beam_edges(G, source=0, value=centrality.get, width=3))
[(0, 2), (0, 1), (0, 8), (2, 32), (1, 13), (8, 33)]
| null | (G, source, value, width=None, *, backend=None, **backend_kwargs) |
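A sketch contrasting a narrow beam with the default full-width search, building on the docstring's own karate-club example:

```python
# Hedged sketch: a narrow beam may leave parts of the graph unvisited,
# while width=None traverses the whole component, as the note above says.
import networkx as nx

G = nx.karate_club_graph()
centrality = nx.eigenvector_centrality(G)

# width=3: only the 3 highest-centrality neighbors of each visited node
# are enqueued, so much of the graph may never be reached.
narrow = list(nx.bfs_beam_edges(G, source=0, value=centrality.get, width=3))

# width=None (default): every neighbor is enqueued, so the whole
# connected component is traversed, as with bfs_edges.
wide = list(nx.bfs_beam_edges(G, source=0, value=centrality.get))
```

Since the karate-club graph is connected with 34 nodes, the full-width traversal yields a spanning tree of 33 edges, while the docstring's width=3 run yields only 6.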