index | package | name | docstring | code | signature
---|---|---|---|---|---|
30,681 | networkx.generators.classic | full_rary_tree | Creates a full r-ary tree of `n` nodes.
Sometimes called a k-ary, n-ary, or m-ary tree.
"... all non-leaf nodes have exactly r children and all levels
are full except for some rightmost position of the bottom level
(if a leaf at the bottom level is missing, then so are all of the
leaves to its right." [1]_
.. plot::
>>> nx.draw(nx.full_rary_tree(2, 10))
Parameters
----------
r : int
branching factor of the tree
n : int
Number of nodes in the tree
create_using : NetworkX graph constructor, optional (default=nx.Graph)
Graph type to create. If graph instance, then cleared before populated.
Returns
-------
G : networkx Graph
An r-ary tree with n nodes
References
----------
.. [1] An introduction to data structures and algorithms,
James Andrew Storer, Birkhauser Boston 2001, (page 225).
| def star_graph(n, create_using=None):
"""Return the star graph
The star graph consists of one center node connected to n outer nodes.
.. plot::
>>> nx.draw(nx.star_graph(6))
Parameters
----------
n : int or iterable
If an integer, node labels are 0 to n with center 0.
If an iterable of nodes, the center is the first.
Warning: n is not checked for duplicates and if present the
resulting graph may not be as desired. Make sure you have no duplicates.
create_using : NetworkX graph constructor, optional (default=nx.Graph)
Graph type to create. If graph instance, then cleared before populated.
Notes
-----
The graph has n+1 nodes for integer n.
So star_graph(3) is the same as star_graph(range(4)).
"""
n, nodes = n
if isinstance(n, numbers.Integral):
nodes.append(int(n)) # there should be n+1 nodes
G = empty_graph(nodes, create_using)
if G.is_directed():
raise NetworkXError("Directed Graph not supported")
if len(nodes) > 1:
hub, *spokes = nodes
G.add_edges_from((hub, node) for node in spokes)
return G
| (r, n, create_using=None, *, backend=None, **backend_kwargs) |
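The code column for this row shows `star_graph` rather than `full_rary_tree`. Its `n, nodes = n` unpacking works because, in the NetworkX source, the function is wrapped by the `nodes_or_number` decorator (stripped from this extract), which passes `n` in as a (number, node-list) pair. A minimal usage sketch of `star_graph`, assuming a standard NetworkX install:
>>> import networkx as nx
>>> G = nx.star_graph(["hub", "a", "b", "c"])  # iterable input: the first node becomes the center
>>> sorted(G.edges())
[('hub', 'a'), ('hub', 'b'), ('hub', 'c')]
>>> nx.star_graph(3).number_of_nodes()  # integer input: nodes 0..n, so n+1 nodes
4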
30,683 | networkx.generators.community | gaussian_random_partition_graph | Generate a Gaussian random partition graph.
A Gaussian random partition graph is created by forming k partitions,
each with a size drawn from a normal distribution with mean s and variance
s/v. Nodes are connected within clusters with probability p_in and
between clusters with probability p_out [1]_.
Parameters
----------
n : int
Number of nodes in the graph
s : float
Mean cluster size
v : float
Shape parameter. The variance of cluster size distribution is s/v.
p_in : float
Probability of intra cluster connection.
p_out : float
Probability of inter cluster connection.
directed : boolean, optional (default=False)
Whether to create a directed graph or not
seed : integer, random_state, or None (default)
Indicator of random number generation state.
See :ref:`Randomness<randomness>`.
Returns
-------
G : NetworkX Graph or DiGraph
gaussian random partition graph
Raises
------
NetworkXError
If s is > n
If p_in or p_out is not in [0,1]
Notes
-----
Note that the number of partitions depends on s, v and n, and that the
last partition may be considerably smaller, as it is sized to simply
fill out the remaining nodes [1]_.
See Also
--------
random_partition_graph
Examples
--------
>>> G = nx.gaussian_random_partition_graph(100, 10, 10, 0.25, 0.1)
>>> len(G)
100
References
----------
.. [1] Ulrik Brandes, Marco Gaertler, Dorothea Wagner,
Experiments on Graph Clustering Algorithms,
In the proceedings of the 11th Europ. Symp. Algorithms, 2003.
| def _generate_communities(degree_seq, community_sizes, mu, max_iters, seed):
"""Returns a list of sets, each of which represents a community.
``degree_seq`` is the degree sequence that must be met by the
graph.
``community_sizes`` is the community size distribution that must be
met by the generated list of sets.
``mu`` is a float in the interval [0, 1] indicating the fraction of
intra-community edges incident to each node.
``max_iters`` is the number of times to try to add a node to a
community. This must be greater than the length of
``degree_seq``, otherwise this function will always fail. If
the number of iterations exceeds this value,
:exc:`~networkx.exception.ExceededMaxIterations` is raised.
seed : integer, random_state, or None (default)
Indicator of random number generation state.
See :ref:`Randomness<randomness>`.
The communities returned by this are sets of integers in the set {0,
..., *n* - 1}, where *n* is the length of ``degree_seq``.
"""
# This assumes the nodes in the graph will be natural numbers.
result = [set() for _ in community_sizes]
n = len(degree_seq)
free = list(range(n))
for i in range(max_iters):
v = free.pop()
c = seed.choice(range(len(community_sizes)))
# s = int(degree_seq[v] * (1 - mu) + 0.5)
s = round(degree_seq[v] * (1 - mu))
# If the community is large enough, add the node to the chosen
# community. Otherwise, return it to the list of unaffiliated
# nodes.
if s < community_sizes[c]:
result[c].add(v)
else:
free.append(v)
# If the community is too big, remove a node from it.
if len(result[c]) > community_sizes[c]:
free.append(result[c].pop())
if not free:
return result
msg = "Could not assign communities; try increasing min_community"
raise nx.ExceededMaxIterations(msg)
| (n, s, v, p_in, p_out, directed=False, seed=None, *, backend=None, **backend_kwargs) |
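The code column here shows `_generate_communities`, an internal helper used by the LFR benchmark generator, not the Gaussian partition generator itself. A small sketch of the documented public call, extending the docstring example to the directed case (the seed value is arbitrary):
>>> import networkx as nx
>>> D = nx.gaussian_random_partition_graph(100, 10, 10, 0.25, 0.1, directed=True, seed=42)
>>> len(D), D.is_directed()
(100, True)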
30,684 | networkx.generators.intersection | general_random_intersection_graph | Returns a random intersection graph with independent probabilities
for connections between node and attribute sets.
Parameters
----------
n : int
The number of nodes in the first bipartite set (nodes)
m : int
The number of nodes in the second bipartite set (attributes)
p : list of floats of length m
Probabilities for connecting nodes to each attribute
seed : integer, random_state, or None (default)
Indicator of random number generation state.
See :ref:`Randomness<randomness>`.
See Also
--------
gnp_random_graph, uniform_random_intersection_graph
References
----------
.. [1] Nikoletseas, S. E., Raptopoulos, C., and Spirakis, P. G.
The existence and efficient construction of large independent sets
in general random intersection graphs. In ICALP (2004), J. Díaz,
J. Karhumäki, A. Lepistö, and D. Sannella, Eds., vol. 3142
of Lecture Notes in Computer Science, Springer, pp. 1029–1040.
| null | (n, m, p, seed=None, *, backend=None, **backend_kwargs) |
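No code is attached to this row. A brief usage sketch, assuming (as with the other intersection-graph generators) that the result is projected onto the n node-set, so the returned graph has n nodes:
>>> import networkx as nx
>>> p = [0.1, 0.5, 0.9]  # one probability per attribute, so len(p) must equal m
>>> G = nx.general_random_intersection_graph(5, 3, p, seed=7)
>>> G.number_of_nodes()
5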
30,685 | networkx.algorithms.cluster | generalized_degree | Compute the generalized degree for nodes.
For each node, the generalized degree shows how many edges of given
triangle multiplicity the node is connected to. The triangle multiplicity
of an edge is the number of triangles an edge participates in. The
generalized degree of node :math:`i` can be written as a vector
:math:`\mathbf{k}_i=(k_i^{(0)}, \dotsc, k_i^{(N-2)})` where
:math:`k_i^{(j)}` is the number of edges attached to node :math:`i` that
participate in :math:`j` triangles.
Parameters
----------
G : graph
nodes : container of nodes, optional (default=all nodes in G)
Compute the generalized degree for nodes in this container.
Returns
-------
out : Counter, or dictionary of Counters
Generalized degree of specified nodes. The Counter is keyed by edge
triangle multiplicity.
Examples
--------
>>> G = nx.complete_graph(5)
>>> print(nx.generalized_degree(G, 0))
Counter({3: 4})
>>> print(nx.generalized_degree(G))
{0: Counter({3: 4}), 1: Counter({3: 4}), 2: Counter({3: 4}), 3: Counter({3: 4}), 4: Counter({3: 4})}
To recover the number of triangles attached to a node:
>>> k1 = nx.generalized_degree(G, 0)
>>> sum([k * v for k, v in k1.items()]) / 2 == nx.triangles(G, 0)
True
Notes
-----
Self loops are ignored.
In a network of N nodes, the highest triangle multiplicity an edge can have
is N-2.
The return value does not include a `zero` entry if no edges of a
particular triangle multiplicity are present.
The number of triangles node :math:`i` is attached to can be recovered from
the generalized degree :math:`\mathbf{k}_i=(k_i^{(0)}, \dotsc,
k_i^{(N-2)})` by :math:`(k_i^{(1)}+2k_i^{(2)}+\dotsc +(N-2)k_i^{(N-2)})/2`.
References
----------
.. [1] Networks with arbitrary edge multiplicities by V. Zlatić,
D. Garlaschelli and G. Caldarelli, EPL (Europhysics Letters),
Volume 97, Number 2 (2012).
https://iopscience.iop.org/article/10.1209/0295-5075/97/28005
| null | (G, nodes=None, *, backend=None, **backend_kwargs) |
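A small complementary sketch on a non-complete graph, where edges of different triangle multiplicities occur: node 0 has two edges that each sit in one triangle and one pendant edge that sits in none.
>>> import networkx as nx
>>> G = nx.cycle_graph(3)  # triangle on nodes 0, 1, 2
>>> G.add_edge(0, 3)  # pendant edge with triangle multiplicity 0
>>> sorted(nx.generalized_degree(G, 0).items())
[(0, 1), (1, 2)]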
30,686 | networkx.readwrite.adjlist | generate_adjlist | Generate a single line of the graph G in adjacency list format.
Parameters
----------
G : NetworkX graph
delimiter : string, optional
Separator for node labels
Returns
-------
lines : string
Lines of data in adjlist format.
Examples
--------
>>> G = nx.lollipop_graph(4, 3)
>>> for line in nx.generate_adjlist(G):
... print(line)
0 1 2 3
1 2 3
2 3
3 4
4 5
5 6
6
See Also
--------
write_adjlist, read_adjlist
Notes
-----
The default `delimiter=" "` will result in unexpected results if node names contain
whitespace characters. To avoid this problem, specify an alternate delimiter when spaces are
valid in node names.
NB: This option is not available for data that isn't user-generated.
| def generate_adjlist(G, delimiter=" "):
"""Generate a single line of the graph G in adjacency list format.
Parameters
----------
G : NetworkX graph
delimiter : string, optional
Separator for node labels
Returns
-------
lines : string
Lines of data in adjlist format.
Examples
--------
>>> G = nx.lollipop_graph(4, 3)
>>> for line in nx.generate_adjlist(G):
... print(line)
0 1 2 3
1 2 3
2 3
3 4
4 5
5 6
6
See Also
--------
write_adjlist, read_adjlist
Notes
-----
The default `delimiter=" "` will result in unexpected results if node names contain
whitespace characters. To avoid this problem, specify an alternate delimiter when spaces are
valid in node names.
NB: This option is not available for data that isn't user-generated.
"""
directed = G.is_directed()
seen = set()
for s, nbrs in G.adjacency():
line = str(s) + delimiter
for t, data in nbrs.items():
if not directed and t in seen:
continue
if G.is_multigraph():
for d in data.values():
line += str(t) + delimiter
else:
line += str(t) + delimiter
if not directed:
seen.add(s)
yield line[: -len(delimiter)]
| (G, delimiter=' ') |
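As the note above warns, the default space delimiter is ambiguous when node names themselves contain spaces; a short sketch with an alternate delimiter:
>>> import networkx as nx
>>> G = nx.Graph([("a node", "b node")])
>>> for line in nx.generate_adjlist(G, delimiter="|"):
...     print(line)
a node|b node
b node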
30,687 | networkx.readwrite.edgelist | generate_edgelist | Generate a single line of the graph G in edge list format.
Parameters
----------
G : NetworkX graph
delimiter : string, optional
Separator for node labels
data : bool or list of keys
If False generate no edge data. If True use a dictionary
representation of edge data. If a list of keys use a list of data
values corresponding to the keys.
Returns
-------
lines : string
Lines of data in edgelist format.
Examples
--------
>>> G = nx.lollipop_graph(4, 3)
>>> G[1][2]["weight"] = 3
>>> G[3][4]["capacity"] = 12
>>> for line in nx.generate_edgelist(G, data=False):
... print(line)
0 1
0 2
0 3
1 2
1 3
2 3
3 4
4 5
5 6
>>> for line in nx.generate_edgelist(G):
... print(line)
0 1 {}
0 2 {}
0 3 {}
1 2 {'weight': 3}
1 3 {}
2 3 {}
3 4 {'capacity': 12}
4 5 {}
5 6 {}
>>> for line in nx.generate_edgelist(G, data=["weight"]):
... print(line)
0 1
0 2
0 3
1 2 3
1 3
2 3
3 4
4 5
5 6
See Also
--------
write_edgelist, read_edgelist
| def generate_edgelist(G, delimiter=" ", data=True):
"""Generate a single line of the graph G in edge list format.
Parameters
----------
G : NetworkX graph
delimiter : string, optional
Separator for node labels
data : bool or list of keys
If False generate no edge data. If True use a dictionary
representation of edge data. If a list of keys use a list of data
values corresponding to the keys.
Returns
-------
lines : string
Lines of data in edgelist format.
Examples
--------
>>> G = nx.lollipop_graph(4, 3)
>>> G[1][2]["weight"] = 3
>>> G[3][4]["capacity"] = 12
>>> for line in nx.generate_edgelist(G, data=False):
... print(line)
0 1
0 2
0 3
1 2
1 3
2 3
3 4
4 5
5 6
>>> for line in nx.generate_edgelist(G):
... print(line)
0 1 {}
0 2 {}
0 3 {}
1 2 {'weight': 3}
1 3 {}
2 3 {}
3 4 {'capacity': 12}
4 5 {}
5 6 {}
>>> for line in nx.generate_edgelist(G, data=["weight"]):
... print(line)
0 1
0 2
0 3
1 2 3
1 3
2 3
3 4
4 5
5 6
See Also
--------
write_edgelist, read_edgelist
"""
if data is True:
for u, v, d in G.edges(data=True):
e = u, v, dict(d)
yield delimiter.join(map(str, e))
elif data is False:
for u, v in G.edges(data=False):
e = u, v
yield delimiter.join(map(str, e))
else:
for u, v, d in G.edges(data=True):
e = [u, v]
try:
e.extend(d[k] for k in data)
except KeyError:
pass # missing data for this edge, should warn?
yield delimiter.join(map(str, e))
| (G, delimiter=' ', data=True) |
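One more sketch for the `data=[keys]` form: as the `except KeyError: pass` branch in the code shows, edges missing a requested key are written without that value rather than raising.
>>> import networkx as nx
>>> G = nx.lollipop_graph(4, 3)
>>> G[1][2]["weight"] = 3
>>> G[3][4]["capacity"] = 12
>>> for line in nx.generate_edgelist(G, data=["capacity"]):
...     print(line)
0 1
0 2
0 3
1 2
1 3
2 3
3 4 12
4 5
5 6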
30,688 | networkx.readwrite.gexf | generate_gexf | Generate lines of GEXF format representation of G.
"GEXF (Graph Exchange XML Format) is a language for describing
complex networks structures, their associated data and dynamics" [1]_.
Parameters
----------
G : graph
A NetworkX graph
encoding : string (optional, default: 'utf-8')
Encoding for text data.
prettyprint : bool (optional, default: True)
If True use line breaks and indenting in output XML.
version : string (default: 1.2draft)
Version of GEXF File Format (see http://gexf.net/schema.html)
Supported values: "1.1draft", "1.2draft"
Examples
--------
>>> G = nx.path_graph(4)
>>> linefeed = chr(10) # linefeed=\n
>>> s = linefeed.join(nx.generate_gexf(G))
>>> for line in nx.generate_gexf(G): # doctest: +SKIP
... print(line)
Notes
-----
This implementation does not support mixed graphs (directed and undirected
edges together).
The node id attribute is set to be the string of the node label.
If you want to specify an id, set it as node data, e.g.
node['a']['id']=1 to set the id of node 'a' to 1.
References
----------
.. [1] GEXF File Format, https://gephi.org/gexf/format/
| def generate_gexf(G, encoding="utf-8", prettyprint=True, version="1.2draft"):
"""Generate lines of GEXF format representation of G.
"GEXF (Graph Exchange XML Format) is a language for describing
complex networks structures, their associated data and dynamics" [1]_.
Parameters
----------
G : graph
A NetworkX graph
encoding : string (optional, default: 'utf-8')
Encoding for text data.
prettyprint : bool (optional, default: True)
If True use line breaks and indenting in output XML.
version : string (default: 1.2draft)
Version of GEXF File Format (see http://gexf.net/schema.html)
Supported values: "1.1draft", "1.2draft"
Examples
--------
>>> G = nx.path_graph(4)
>>> linefeed = chr(10) # linefeed=\n
>>> s = linefeed.join(nx.generate_gexf(G))
>>> for line in nx.generate_gexf(G): # doctest: +SKIP
... print(line)
Notes
-----
This implementation does not support mixed graphs (directed and undirected
edges together).
The node id attribute is set to be the string of the node label.
If you want to specify an id, set it as node data, e.g.
node['a']['id']=1 to set the id of node 'a' to 1.
References
----------
.. [1] GEXF File Format, https://gephi.org/gexf/format/
"""
writer = GEXFWriter(encoding=encoding, prettyprint=prettyprint, version=version)
writer.add_graph(G)
yield from str(writer).splitlines()
| (G, encoding='utf-8', prettyprint=True, version='1.2draft') |
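Because the function yields lines rather than writing a file, the output can be streamed into any file-like object; a hypothetical sketch (the output path is illustrative only):
>>> import networkx as nx
>>> G = nx.path_graph(4)
>>> with open("graph.gexf", "w", encoding="utf-8") as f:  # doctest: +SKIP
...     f.write("\n".join(nx.generate_gexf(G)))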
30,689 | networkx.readwrite.gml | generate_gml | Generate a single entry of the graph `G` in GML format.
Parameters
----------
G : NetworkX graph
The graph to be converted to GML.
stringizer : callable, optional
A `stringizer` which converts non-int/non-float/non-dict values into
strings. If it cannot convert a value into a string, it should raise a
`ValueError` to indicate that. Default value: None.
Returns
-------
lines: generator of strings
Lines of GML data. Newlines are not appended.
Raises
------
NetworkXError
If `stringizer` cannot convert a value into a string, or the value to
convert is not a string while `stringizer` is None.
See Also
--------
literal_stringizer
Notes
-----
Graph attributes named 'directed', 'multigraph', 'node' or
'edge', node attributes named 'id' or 'label', edge attributes
named 'source' or 'target' (or 'key' if `G` is a multigraph)
are ignored because these attribute names are used to encode the graph
structure.
GML files are stored using a 7-bit ASCII encoding with any extended
ASCII characters (iso8859-1) appearing as HTML character entities.
Without specifying a `stringizer`/`destringizer`, the code is capable of
writing `int`/`float`/`str`/`dict`/`list` data as required by the GML
specification. For writing other data types, and for reading data other
than `str` you need to explicitly supply a `stringizer`/`destringizer`.
For additional documentation on the GML file format, please see the
`GML url <https://web.archive.org/web/20190207140002/http://www.fim.uni-passau.de/index.php?id=17297&L=1>`_.
See the module docstring :mod:`networkx.readwrite.gml` for more details.
Examples
--------
>>> G = nx.Graph()
>>> G.add_node("1")
>>> print("\n".join(nx.generate_gml(G)))
graph [
node [
id 0
label "1"
]
]
>>> G = nx.MultiGraph([("a", "b"), ("a", "b")])
>>> print("\n".join(nx.generate_gml(G)))
graph [
multigraph 1
node [
id 0
label "a"
]
node [
id 1
label "b"
]
edge [
source 0
target 1
key 0
]
edge [
source 0
target 1
key 1
]
]
| def generate_gml(G, stringizer=None):
r"""Generate a single entry of the graph `G` in GML format.
Parameters
----------
G : NetworkX graph
The graph to be converted to GML.
stringizer : callable, optional
A `stringizer` which converts non-int/non-float/non-dict values into
strings. If it cannot convert a value into a string, it should raise a
`ValueError` to indicate that. Default value: None.
Returns
-------
lines: generator of strings
Lines of GML data. Newlines are not appended.
Raises
------
NetworkXError
If `stringizer` cannot convert a value into a string, or the value to
convert is not a string while `stringizer` is None.
See Also
--------
literal_stringizer
Notes
-----
Graph attributes named 'directed', 'multigraph', 'node' or
'edge', node attributes named 'id' or 'label', edge attributes
named 'source' or 'target' (or 'key' if `G` is a multigraph)
are ignored because these attribute names are used to encode the graph
structure.
GML files are stored using a 7-bit ASCII encoding with any extended
ASCII characters (iso8859-1) appearing as HTML character entities.
Without specifying a `stringizer`/`destringizer`, the code is capable of
writing `int`/`float`/`str`/`dict`/`list` data as required by the GML
specification. For writing other data types, and for reading data other
than `str` you need to explicitly supply a `stringizer`/`destringizer`.
For additional documentation on the GML file format, please see the
`GML url <https://web.archive.org/web/20190207140002/http://www.fim.uni-passau.de/index.php?id=17297&L=1>`_.
See the module docstring :mod:`networkx.readwrite.gml` for more details.
Examples
--------
>>> G = nx.Graph()
>>> G.add_node("1")
>>> print("\n".join(nx.generate_gml(G)))
graph [
node [
id 0
label "1"
]
]
>>> G = nx.MultiGraph([("a", "b"), ("a", "b")])
>>> print("\n".join(nx.generate_gml(G)))
graph [
multigraph 1
node [
id 0
label "a"
]
node [
id 1
label "b"
]
edge [
source 0
target 1
key 0
]
edge [
source 0
target 1
key 1
]
]
"""
valid_keys = re.compile("^[A-Za-z][0-9A-Za-z_]*$")
def stringize(key, value, ignored_keys, indent, in_list=False):
if not isinstance(key, str):
raise NetworkXError(f"{key!r} is not a string")
if not valid_keys.match(key):
raise NetworkXError(f"{key!r} is not a valid key")
if not isinstance(key, str):
key = str(key)
if key not in ignored_keys:
if isinstance(value, int | bool):
if key == "label":
yield indent + key + ' "' + str(value) + '"'
elif value is True:
# python bool is an instance of int
yield indent + key + " 1"
elif value is False:
yield indent + key + " 0"
# GML only supports signed 32-bit integers
elif value < -(2**31) or value >= 2**31:
yield indent + key + ' "' + str(value) + '"'
else:
yield indent + key + " " + str(value)
elif isinstance(value, float):
text = repr(value).upper()
# GML matches INF to keys, so prepend + to INF. Use repr(float(*))
# instead of string literal to future proof against changes to repr.
if text == repr(float("inf")).upper():
text = "+" + text
else:
# GML requires that a real literal contain a decimal point, but
# repr may not output a decimal point when the mantissa is
# integral and hence needs fixing.
epos = text.rfind("E")
if epos != -1 and text.find(".", 0, epos) == -1:
text = text[:epos] + "." + text[epos:]
if key == "label":
yield indent + key + ' "' + text + '"'
else:
yield indent + key + " " + text
elif isinstance(value, dict):
yield indent + key + " ["
next_indent = indent + " "
for key, value in value.items():
yield from stringize(key, value, (), next_indent)
yield indent + "]"
elif isinstance(value, tuple) and key == "label":
yield indent + key + f" \"({','.join(repr(v) for v in value)})\""
elif isinstance(value, list | tuple) and key != "label" and not in_list:
if len(value) == 0:
yield indent + key + " " + f'"{value!r}"'
if len(value) == 1:
yield indent + key + " " + f'"{LIST_START_VALUE}"'
for val in value:
yield from stringize(key, val, (), indent, True)
else:
if stringizer:
try:
value = stringizer(value)
except ValueError as err:
raise NetworkXError(
f"{value!r} cannot be converted into a string"
) from err
if not isinstance(value, str):
raise NetworkXError(f"{value!r} is not a string")
yield indent + key + ' "' + escape(value) + '"'
multigraph = G.is_multigraph()
yield "graph ["
# Output graph attributes
if G.is_directed():
yield " directed 1"
if multigraph:
yield " multigraph 1"
ignored_keys = {"directed", "multigraph", "node", "edge"}
for attr, value in G.graph.items():
yield from stringize(attr, value, ignored_keys, " ")
# Output node data
node_id = dict(zip(G, range(len(G))))
ignored_keys = {"id", "label"}
for node, attrs in G.nodes.items():
yield " node ["
yield " id " + str(node_id[node])
yield from stringize("label", node, (), " ")
for attr, value in attrs.items():
yield from stringize(attr, value, ignored_keys, " ")
yield " ]"
# Output edge data
ignored_keys = {"source", "target"}
kwargs = {"data": True}
if multigraph:
ignored_keys.add("key")
kwargs["keys"] = True
for e in G.edges(**kwargs):
yield " edge ["
yield " source " + str(node_id[e[0]])
yield " target " + str(node_id[e[1]])
if multigraph:
yield from stringize("key", e[2], (), " ")
for attr, value in e[-1].items():
yield from stringize(attr, value, ignored_keys, " ")
yield " ]"
yield "]"
| (G, stringizer=None) |
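A hypothetical sketch of the `stringizer` hook: GML natively handles int/float/str/dict/list values, so anything else (a set here, chosen purely for illustration) needs a callable that turns it into a string; plain `str` stands in for a real converter such as `literal_stringizer`.
>>> import networkx as nx
>>> G = nx.Graph()
>>> G.add_node(0, tags={"x", "y"})  # a set is not a GML-native type
>>> gml_text = "\n".join(nx.generate_gml(G, stringizer=str))  # doctest: +SKIP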
30,690 | networkx.readwrite.graphml | generate_graphml | Generate GraphML lines for G
Parameters
----------
G : graph
A networkx graph
encoding : string (optional)
Encoding for text data.
prettyprint : bool (optional)
If True use line breaks and indenting in output XML.
named_key_ids : bool (optional)
If True use attr.name as value for key elements' id attribute.
edge_id_from_attribute : dict key (optional)
If provided, the graphml edge id is set by looking up the corresponding
edge data attribute keyed by this parameter. If `None` or the key does not exist in edge data,
the edge id is set by the edge key if `G` is a MultiGraph, else the edge id is left unset.
Examples
--------
>>> G = nx.path_graph(4)
>>> linefeed = chr(10) # linefeed = \n
>>> s = linefeed.join(nx.generate_graphml(G))
>>> for line in nx.generate_graphml(G): # doctest: +SKIP
... print(line)
Notes
-----
This implementation does not support mixed graphs (directed and undirected
edges together), hyperedges, nested graphs, or ports.
| def generate_graphml(
G,
encoding="utf-8",
prettyprint=True,
named_key_ids=False,
edge_id_from_attribute=None,
):
"""Generate GraphML lines for G
Parameters
----------
G : graph
A networkx graph
encoding : string (optional)
Encoding for text data.
prettyprint : bool (optional)
If True use line breaks and indenting in output XML.
named_key_ids : bool (optional)
If True use attr.name as value for key elements' id attribute.
edge_id_from_attribute : dict key (optional)
If provided, the graphml edge id is set by looking up the corresponding
edge data attribute keyed by this parameter. If `None` or the key does not exist in edge data,
the edge id is set by the edge key if `G` is a MultiGraph, else the edge id is left unset.
Examples
--------
>>> G = nx.path_graph(4)
>>> linefeed = chr(10) # linefeed = \n
>>> s = linefeed.join(nx.generate_graphml(G))
>>> for line in nx.generate_graphml(G): # doctest: +SKIP
... print(line)
Notes
-----
This implementation does not support mixed graphs (directed and undirected
edges together), hyperedges, nested graphs, or ports.
"""
writer = GraphMLWriter(
encoding=encoding,
prettyprint=prettyprint,
named_key_ids=named_key_ids,
edge_id_from_attribute=edge_id_from_attribute,
)
writer.add_graph_element(G)
yield from str(writer).splitlines()
| (G, encoding='utf-8', prettyprint=True, named_key_ids=False, edge_id_from_attribute=None) |
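A short sketch of `edge_id_from_attribute`, which per the parameter description looks the GraphML edge id up from an edge-data key (the attribute name "eid" is hypothetical):
>>> import networkx as nx
>>> G = nx.Graph()
>>> G.add_edge("a", "b", eid="e1")
>>> s = "\n".join(nx.generate_graphml(G, edge_id_from_attribute="eid"))
>>> "<edge" in s
True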
30,691 | networkx.readwrite.multiline_adjlist | generate_multiline_adjlist | Generate a single line of the graph G in multiline adjacency list format.
Parameters
----------
G : NetworkX graph
delimiter : string, optional
Separator for node labels
Returns
-------
lines : string
Lines of data in multiline adjlist format.
Examples
--------
>>> G = nx.lollipop_graph(4, 3)
>>> for line in nx.generate_multiline_adjlist(G):
... print(line)
0 3
1 {}
2 {}
3 {}
1 2
2 {}
3 {}
2 1
3 {}
3 1
4 {}
4 1
5 {}
5 1
6 {}
6 0
See Also
--------
write_multiline_adjlist, read_multiline_adjlist
| def generate_multiline_adjlist(G, delimiter=" "):
"""Generate a single line of the graph G in multiline adjacency list format.
Parameters
----------
G : NetworkX graph
delimiter : string, optional
Separator for node labels
Returns
-------
lines : string
Lines of data in multiline adjlist format.
Examples
--------
>>> G = nx.lollipop_graph(4, 3)
>>> for line in nx.generate_multiline_adjlist(G):
... print(line)
0 3
1 {}
2 {}
3 {}
1 2
2 {}
3 {}
2 1
3 {}
3 1
4 {}
4 1
5 {}
5 1
6 {}
6 0
See Also
--------
write_multiline_adjlist, read_multiline_adjlist
"""
if G.is_directed():
if G.is_multigraph():
for s, nbrs in G.adjacency():
nbr_edges = [
(u, data)
for u, datadict in nbrs.items()
for key, data in datadict.items()
]
deg = len(nbr_edges)
yield str(s) + delimiter + str(deg)
for u, d in nbr_edges:
if d is None:
yield str(u)
else:
yield str(u) + delimiter + str(d)
else: # directed single edges
for s, nbrs in G.adjacency():
deg = len(nbrs)
yield str(s) + delimiter + str(deg)
for u, d in nbrs.items():
if d is None:
yield str(u)
else:
yield str(u) + delimiter + str(d)
else: # undirected
if G.is_multigraph():
seen = set() # helper dict used to avoid duplicate edges
for s, nbrs in G.adjacency():
nbr_edges = [
(u, data)
for u, datadict in nbrs.items()
if u not in seen
for key, data in datadict.items()
]
deg = len(nbr_edges)
yield str(s) + delimiter + str(deg)
for u, d in nbr_edges:
if d is None:
yield str(u)
else:
yield str(u) + delimiter + str(d)
seen.add(s)
else: # undirected single edges
seen = set() # helper dict used to avoid duplicate edges
for s, nbrs in G.adjacency():
nbr_edges = [(u, d) for u, d in nbrs.items() if u not in seen]
deg = len(nbr_edges)
yield str(s) + delimiter + str(deg)
for u, d in nbr_edges:
if d is None:
yield str(u)
else:
yield str(u) + delimiter + str(d)
seen.add(s)
| (G, delimiter=' ') |
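The docstring example covers the undirected case; in the directed branch of the code every successor is listed, so a small directed sketch looks like this:
>>> import networkx as nx
>>> D = nx.DiGraph([(1, 2), (2, 3)])
>>> for line in nx.generate_multiline_adjlist(D):
...     print(line)
1 1
2 {}
2 1
3 {}
3 0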
30,692 | networkx.readwrite.text | generate_network_text | Generate lines in the "network text" format
This works via a depth-first traversal of the graph and writing a line for
each unique node encountered. Non-tree edges are written to the right of
each node, and connection to a non-tree edge is indicated with an ellipsis.
This representation works best when the input graph is a forest, but any
graph can be represented.
This notation is original to networkx, although it is simple enough that it
may be known in existing literature. See #5602 for details. The procedure
is summarized as follows:
1. Given a set of source nodes (which can be specified, or automatically
discovered via finding the (strongly) connected components and choosing one
node with minimum degree from each), we traverse the graph in depth first
order.
2. Each reachable node will be printed exactly once on its own line.
3. Edges are indicated in one of four ways:
a. a parent "L-style" connection on the upper left. This corresponds to
a traversal in the directed DFS tree.
b. a backref "<-style" connection shown directly on the right. For
directed graphs, these are drawn for any incoming edges to a node that
is not a parent edge. For undirected graphs, these are drawn for only
the non-parent edges that have already been represented (The edges that
have not been represented will be handled in the recursive case).
c. a child "L-style" connection on the lower right. Drawing of the
children are handled recursively.
d. if ``vertical_chains`` is true, and a parent node only has one child
a "vertical-style" edge is drawn between them.
4. The children of each node (wrt the directed DFS tree) are drawn
underneath and to the right of it. In the case that a child node has already
been drawn, the connection is replaced with an ellipsis ("...") to indicate
that one or more connections are represented elsewhere.
5. If a maximum depth is specified, an edge to nodes past this maximum
depth will be represented by an ellipsis.
6. If a node has a truthy "collapse" value, then we do not traverse past
that node.
Parameters
----------
graph : nx.DiGraph | nx.Graph
Graph to represent
with_labels : bool | str
If True will use the "label" attribute of a node to display if it
exists otherwise it will use the node value itself. If given as a
string, then that attribute name will be used instead of "label".
Defaults to True.
sources : List
Specifies which nodes to start traversal from. Note: nodes that are not
reachable from one of these sources may not be shown. If unspecified,
the minimal set of nodes needed to reach all others will be used.
max_depth : int | None
The maximum depth to traverse before stopping. Defaults to None.
ascii_only : Boolean
If True only ASCII characters are used to construct the visualization
vertical_chains : Boolean
If True, chains of nodes will be drawn vertically when possible.
Yields
------
str : a line of generated text
Examples
--------
>>> graph = nx.path_graph(10)
>>> graph.add_node("A")
>>> graph.add_node("B")
>>> graph.add_node("C")
>>> graph.add_node("D")
>>> graph.add_edge(9, "A")
>>> graph.add_edge(9, "B")
>>> graph.add_edge(9, "C")
>>> graph.add_edge("C", "D")
>>> graph.add_edge("C", "E")
>>> graph.add_edge("C", "F")
>>> nx.write_network_text(graph)
╙── 0
    └── 1
        └── 2
            └── 3
                └── 4
                    └── 5
                        └── 6
                            └── 7
                                └── 8
                                    └── 9
                                        ├── A
                                        ├── B
                                        └── C
                                            ├── D
                                            ├── E
                                            └── F
>>> nx.write_network_text(graph, vertical_chains=True)
╙── 0
    │
    1
    │
    2
    │
    3
    │
    4
    │
    5
    │
    6
    │
    7
    │
    8
    │
    9
    ├── A
    ├── B
    └── C
        ├── D
        ├── E
        └── F
| def generate_network_text(
graph,
with_labels=True,
sources=None,
max_depth=None,
ascii_only=False,
vertical_chains=False,
):
"""Generate lines in the "network text" format
This works via a depth-first traversal of the graph and writing a line for
each unique node encountered. Non-tree edges are written to the right of
each node, and connection to a non-tree edge is indicated with an ellipsis.
This representation works best when the input graph is a forest, but any
graph can be represented.
This notation is original to networkx, although it is simple enough that it
may be known in existing literature. See #5602 for details. The procedure
is summarized as follows:
1. Given a set of source nodes (which can be specified, or automatically
discovered via finding the (strongly) connected components and choosing one
node with minimum degree from each), we traverse the graph in depth first
order.
2. Each reachable node will be printed exactly once on its own line.
3. Edges are indicated in one of four ways:
a. a parent "L-style" connection on the upper left. This corresponds to
a traversal in the directed DFS tree.
b. a backref "<-style" connection shown directly on the right. For
directed graphs, these are drawn for any incoming edges to a node that
is not a parent edge. For undirected graphs, these are drawn for only
the non-parent edges that have already been represented (The edges that
have not been represented will be handled in the recursive case).
c. a child "L-style" connection on the lower right. Drawing of the
children are handled recursively.
d. if ``vertical_chains`` is true, and a parent node only has one child
a "vertical-style" edge is drawn between them.
4. The children of each node (wrt the directed DFS tree) are drawn
underneath and to the right of it. In the case that a child node has already
been drawn, the connection is replaced with an ellipsis ("...") to indicate
that one or more connections are represented elsewhere.
5. If a maximum depth is specified, an edge to nodes past this maximum
depth will be represented by an ellipsis.
6. If a node has a truthy "collapse" value, then we do not traverse past
that node.
Parameters
----------
graph : nx.DiGraph | nx.Graph
Graph to represent
with_labels : bool | str
If True will use the "label" attribute of a node to display if it
exists otherwise it will use the node value itself. If given as a
string, then that attribute name will be used instead of "label".
Defaults to True.
sources : List
Specifies which nodes to start traversal from. Note: nodes that are not
reachable from one of these sources may not be shown. If unspecified,
the minimal set of nodes needed to reach all others will be used.
max_depth : int | None
The maximum depth to traverse before stopping. Defaults to None.
ascii_only : Boolean
If True only ASCII characters are used to construct the visualization
vertical_chains : Boolean
If True, chains of nodes will be drawn vertically when possible.
Yields
------
str : a line of generated text
Examples
--------
>>> graph = nx.path_graph(10)
>>> graph.add_node("A")
>>> graph.add_node("B")
>>> graph.add_node("C")
>>> graph.add_node("D")
>>> graph.add_edge(9, "A")
>>> graph.add_edge(9, "B")
>>> graph.add_edge(9, "C")
>>> graph.add_edge("C", "D")
>>> graph.add_edge("C", "E")
>>> graph.add_edge("C", "F")
>>> nx.write_network_text(graph)
╙── 0
    └── 1
        └── 2
            └── 3
                └── 4
                    └── 5
                        └── 6
                            └── 7
                                └── 8
                                    └── 9
                                        ├── A
                                        ├── B
                                        └── C
                                            ├── D
                                            ├── E
                                            └── F
>>> nx.write_network_text(graph, vertical_chains=True)
╙── 0
    │
    1
    │
    2
    │
    3
    │
    4
    │
    5
    │
    6
    │
    7
    │
    8
    │
    9
    ├── A
    ├── B
    └── C
        ├── D
        ├── E
        └── F
"""
from typing import Any, NamedTuple
class StackFrame(NamedTuple):
parent: Any
node: Any
indents: list
this_islast: bool
this_vertical: bool
collapse_attr = "collapse"
is_directed = graph.is_directed()
if is_directed:
glyphs = AsciiDirectedGlyphs if ascii_only else UtfDirectedGlyphs
succ = graph.succ
pred = graph.pred
else:
glyphs = AsciiUndirectedGlyphs if ascii_only else UtfUndirectedGlyphs
succ = graph.adj
pred = graph.adj
if isinstance(with_labels, str):
label_attr = with_labels
elif with_labels:
label_attr = "label"
else:
label_attr = None
if max_depth == 0:
yield glyphs.empty + " ..."
elif len(graph.nodes) == 0:
yield glyphs.empty
else:
# If the nodes to traverse are unspecified, find the minimal set of
# nodes that will reach the entire graph
if sources is None:
sources = _find_sources(graph)
# Populate the stack with each:
# 1. parent node in the DFS tree (or None for root nodes),
# 2. the current node in the DFS tree
# 2. a list of indentations indicating depth
# 3. a flag indicating if the node is the final one to be written.
# Reverse the stack so sources are popped in the correct order.
last_idx = len(sources) - 1
stack = [
StackFrame(None, node, [], (idx == last_idx), False)
for idx, node in enumerate(sources)
][::-1]
num_skipped_children = defaultdict(lambda: 0)
seen_nodes = set()
while stack:
parent, node, indents, this_islast, this_vertical = stack.pop()
if node is not Ellipsis:
skip = node in seen_nodes
if skip:
# Mark that we skipped a parent's child
num_skipped_children[parent] += 1
if this_islast:
# If we reached the last child of a parent, and we skipped
# any of that parents children, then we should emit an
# ellipsis at the end after this.
if num_skipped_children[parent] and parent is not None:
# Append the ellipsis to be emitted last
next_islast = True
try_frame = StackFrame(
node, Ellipsis, indents, next_islast, False
)
stack.append(try_frame)
# Redo this frame, but not as a last object
next_islast = False
try_frame = StackFrame(
parent, node, indents, next_islast, this_vertical
)
stack.append(try_frame)
continue
if skip:
continue
seen_nodes.add(node)
if not indents:
# Top level items (i.e. trees in the forest) get different
# glyphs to indicate they are not actually connected
if this_islast:
this_vertical = False
this_prefix = indents + [glyphs.newtree_last]
next_prefix = indents + [glyphs.endof_forest]
else:
this_prefix = indents + [glyphs.newtree_mid]
next_prefix = indents + [glyphs.within_forest]
else:
# Non-top-level items
if this_vertical:
this_prefix = indents
next_prefix = indents
else:
if this_islast:
this_prefix = indents + [glyphs.last]
next_prefix = indents + [glyphs.endof_forest]
else:
this_prefix = indents + [glyphs.mid]
next_prefix = indents + [glyphs.within_tree]
if node is Ellipsis:
label = " ..."
suffix = ""
children = []
else:
if label_attr is not None:
label = str(graph.nodes[node].get(label_attr, node))
else:
label = str(node)
# Determine if we want to show the children of this node.
if collapse_attr is not None:
collapse = graph.nodes[node].get(collapse_attr, False)
else:
collapse = False
# Determine:
# (1) children to traverse into after showing this node.
# (2) parents to immediately show to the right of this node.
if is_directed:
# In the directed case we must show every successor node
# note: it may be skipped later, but we don't have that
# information here.
children = list(succ[node])
# In the directed case we must show every predecessor
# except for parent we directly traversed from.
handled_parents = {parent}
else:
# Showing only the unseen children results in a more
# concise representation for the undirected case.
children = [
child for child in succ[node] if child not in seen_nodes
]
# In the undirected case, parents are also children, so we
# only need to immediately show the ones we can no longer
# traverse
handled_parents = {*children, parent}
if max_depth is not None and len(indents) == max_depth - 1:
# Use ellipsis to indicate we have reached maximum depth
if children:
children = [Ellipsis]
handled_parents = {parent}
if collapse:
# Collapsing a node is the same as reaching maximum depth
if children:
children = [Ellipsis]
handled_parents = {parent}
# The other parents are other predecessors of this node that
# are not handled elsewhere.
other_parents = [p for p in pred[node] if p not in handled_parents]
if other_parents:
if label_attr is not None:
other_parents_labels = ", ".join(
[
str(graph.nodes[p].get(label_attr, p))
for p in other_parents
]
)
else:
other_parents_labels = ", ".join(
[str(p) for p in other_parents]
)
suffix = " ".join(["", glyphs.backedge, other_parents_labels])
else:
suffix = ""
# Emit the line for this node, this will be called for each node
# exactly once.
if this_vertical:
yield "".join(this_prefix + [glyphs.vertical_edge])
yield "".join(this_prefix + [label, suffix])
if vertical_chains:
if is_directed:
num_children = len(set(children))
else:
num_children = len(set(children) - {parent})
# The next node can be drawn vertically if it is the only
# remaining child of this node.
next_is_vertical = num_children == 1
else:
next_is_vertical = False
# Push children on the stack in reverse order so they are popped in
# the original order.
for idx, child in enumerate(children[::-1]):
next_islast = idx == 0
try_frame = StackFrame(
node, child, next_prefix, next_islast, next_is_vertical
)
stack.append(try_frame)
| (graph, with_labels=True, sources=None, max_depth=None, ascii_only=False, vertical_chains=False) |
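The examples above go through `write_network_text`; a quick sketch of the generator itself: for a tree-shaped input each reachable node yields exactly one line, so a three-node path should produce three lines.
>>> import networkx as nx
>>> lines = list(nx.generate_network_text(nx.path_graph(3)))
>>> len(lines)
3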
30,693 | networkx.readwrite.pajek | generate_pajek | Generate lines in Pajek graph format.
Parameters
----------
G : graph
A Networkx graph
References
----------
See http://vlado.fmf.uni-lj.si/pub/networks/pajek/doc/draweps.htm
for format information.
| def generate_pajek(G):
"""Generate lines in Pajek graph format.
Parameters
----------
G : graph
A Networkx graph
References
----------
See http://vlado.fmf.uni-lj.si/pub/networks/pajek/doc/draweps.htm
for format information.
"""
if G.name == "":
name = "NetworkX"
else:
name = G.name
# Apparently many Pajek format readers can't process this line
# So we'll leave it out for now.
# yield '*network %s'%name
# write nodes with attributes
yield f"*vertices {G.order()}"
nodes = list(G)
# make dictionary mapping nodes to integers
nodenumber = dict(zip(nodes, range(1, len(nodes) + 1)))
for n in nodes:
# copy node attributes and pop mandatory attributes
# to avoid duplication.
na = G.nodes.get(n, {}).copy()
x = na.pop("x", 0.0)
y = na.pop("y", 0.0)
try:
id = int(na.pop("id", nodenumber[n]))
except ValueError as err:
err.args += (
(
"Pajek format requires 'id' to be an int()."
" Refer to the 'Relabeling nodes' section."
),
)
raise
nodenumber[n] = id
shape = na.pop("shape", "ellipse")
s = " ".join(map(make_qstr, (id, n, x, y, shape)))
# only optional attributes are left in na.
for k, v in na.items():
if isinstance(v, str) and v.strip() != "":
s += f" {make_qstr(k)} {make_qstr(v)}"
else:
warnings.warn(
f"Node attribute {k} is not processed. {('Empty attribute' if isinstance(v, str) else 'Non-string attribute')}."
)
yield s
# write edges with attributes
if G.is_directed():
yield "*arcs"
else:
yield "*edges"
for u, v, edgedata in G.edges(data=True):
d = edgedata.copy()
value = d.pop("weight", 1.0) # use 1 as default edge value
s = " ".join(map(make_qstr, (nodenumber[u], nodenumber[v], value)))
for k, v in d.items():
if isinstance(v, str) and v.strip() != "":
s += f" {make_qstr(k)} {make_qstr(v)}"
else:
warnings.warn(
f"Edge attribute {k} is not processed. {('Empty attribute' if isinstance(v, str) else 'Non-string attribute')}."
)
yield s
| (G) |
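A minimal sketch of the output shape: the first yielded line is the `*vertices` header, one line follows per node, and for an undirected graph the edge section is introduced by `*edges`.
>>> import networkx as nx
>>> G = nx.Graph()
>>> G.add_node("alpha", x=1.0, y=2.0)
>>> G.add_edge("alpha", "beta", weight=5)
>>> lines = list(nx.generate_pajek(G))
>>> lines[0], lines[3]
('*vertices 2', '*edges')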
30,694 | networkx.algorithms.similarity | generate_random_paths | Randomly generate `sample_size` paths of length `path_length`.
Parameters
----------
G : NetworkX graph
A NetworkX graph
sample_size : integer
The number of paths to generate. This is ``R`` in [1]_.
path_length : integer (default = 5)
The maximum size of the path to randomly generate.
This is ``T`` in [1]_. According to the paper, ``T >= 5`` is
recommended.
index_map : dictionary, optional
If provided, this will be populated with the inverted
index of nodes mapped to the set of generated random path
indices within ``paths``.
weight : string or None, optional (default="weight")
The name of an edge attribute that holds the numerical value
used as a weight. If None then each edge has weight 1.
seed : integer, random_state, or None (default)
Indicator of random number generation state.
See :ref:`Randomness<randomness>`.
Returns
-------
paths : generator of lists
Generator of `sample_size` paths each with length `path_length`.
Examples
--------
Note that the return value is the list of paths:
>>> G = nx.star_graph(3)
>>> random_path = nx.generate_random_paths(G, 2)
By passing a dictionary into `index_map`, it will build an
inverted index mapping of nodes to the paths in which that node is present:
>>> G = nx.star_graph(3)
>>> index_map = {}
>>> random_path = nx.generate_random_paths(G, 3, index_map=index_map)
>>> paths_containing_node_0 = [
... random_path[path_idx] for path_idx in index_map.get(0, [])
... ]
References
----------
.. [1] Zhang, J., Tang, J., Ma, C., Tong, H., Jing, Y., & Li, J.
Panther: Fast top-k similarity search on large networks.
In Proceedings of the ACM SIGKDD International Conference
on Knowledge Discovery and Data Mining (Vol. 2015-August, pp. 1445–1454).
Association for Computing Machinery. https://doi.org/10.1145/2783258.2783267.
| def optimize_edit_paths(
G1,
G2,
node_match=None,
edge_match=None,
node_subst_cost=None,
node_del_cost=None,
node_ins_cost=None,
edge_subst_cost=None,
edge_del_cost=None,
edge_ins_cost=None,
upper_bound=None,
strictly_decreasing=True,
roots=None,
timeout=None,
):
"""GED (graph edit distance) calculation: advanced interface.
Graph edit path is a sequence of node and edge edit operations
transforming graph G1 to graph isomorphic to G2. Edit operations
include substitutions, deletions, and insertions.
Graph edit distance is defined as minimum cost of edit path.
Parameters
----------
G1, G2: graphs
The two graphs G1 and G2 must be of the same type.
node_match : callable
A function that returns True if node n1 in G1 and n2 in G2
should be considered equal during matching.
The function will be called like
node_match(G1.nodes[n1], G2.nodes[n2]).
That is, the function will receive the node attribute
dictionaries for n1 and n2 as inputs.
Ignored if node_subst_cost is specified. If neither
node_match nor node_subst_cost are specified then node
attributes are not considered.
edge_match : callable
A function that returns True if the edge attribute dictionaries
for the pair of nodes (u1, v1) in G1 and (u2, v2) in G2 should
be considered equal during matching.
The function will be called like
edge_match(G1[u1][v1], G2[u2][v2]).
That is, the function will receive the edge attribute
dictionaries of the edges under consideration.
Ignored if edge_subst_cost is specified. If neither
edge_match nor edge_subst_cost are specified then edge
attributes are not considered.
node_subst_cost, node_del_cost, node_ins_cost : callable
Functions that return the costs of node substitution, node
deletion, and node insertion, respectively.
The functions will be called like
node_subst_cost(G1.nodes[n1], G2.nodes[n2]),
node_del_cost(G1.nodes[n1]),
node_ins_cost(G2.nodes[n2]).
That is, the functions will receive the node attribute
dictionaries as inputs. The functions are expected to return
positive numeric values.
Function node_subst_cost overrides node_match if specified.
If neither node_match nor node_subst_cost are specified then
default node substitution cost of 0 is used (node attributes
are not considered during matching).
If node_del_cost is not specified then default node deletion
cost of 1 is used. If node_ins_cost is not specified then
default node insertion cost of 1 is used.
edge_subst_cost, edge_del_cost, edge_ins_cost : callable
Functions that return the costs of edge substitution, edge
deletion, and edge insertion, respectively.
The functions will be called like
edge_subst_cost(G1[u1][v1], G2[u2][v2]),
edge_del_cost(G1[u1][v1]),
edge_ins_cost(G2[u2][v2]).
That is, the functions will receive the edge attribute
dictionaries as inputs. The functions are expected to return
positive numeric values.
Function edge_subst_cost overrides edge_match if specified.
If neither edge_match nor edge_subst_cost are specified then
default edge substitution cost of 0 is used (edge attributes
are not considered during matching).
If edge_del_cost is not specified then default edge deletion
cost of 1 is used. If edge_ins_cost is not specified then
default edge insertion cost of 1 is used.
upper_bound : numeric
Maximum edit distance to consider.
strictly_decreasing : bool
If True, return consecutive approximations of strictly
decreasing cost. Otherwise, return all edit paths of cost
less than or equal to the previous minimum cost.
roots : 2-tuple
Tuple where first element is a node in G1 and the second
is a node in G2.
These nodes are forced to be matched in the comparison to
allow comparison between rooted graphs.
timeout : numeric
Maximum number of seconds to execute.
After timeout is met, the current best GED is returned.
Returns
-------
Generator of tuples (node_edit_path, edge_edit_path, cost)
node_edit_path : list of tuples (u, v)
edge_edit_path : list of tuples ((u1, v1), (u2, v2))
cost : numeric
See Also
--------
graph_edit_distance, optimize_graph_edit_distance, optimal_edit_paths
References
----------
.. [1] Zeina Abu-Aisheh, Romain Raveaux, Jean-Yves Ramel, Patrick
Martineau. An Exact Graph Edit Distance Algorithm for Solving
Pattern Recognition Problems. 4th International Conference on
Pattern Recognition Applications and Methods 2015, Jan 2015,
Lisbon, Portugal. 2015,
<10.5220/0005209202710278>. <hal-01168816>
https://hal.archives-ouvertes.fr/hal-01168816
"""
# TODO: support DiGraph
import numpy as np
import scipy as sp
@dataclass
class CostMatrix:
C: ...
lsa_row_ind: ...
lsa_col_ind: ...
ls: ...
def make_CostMatrix(C, m, n):
# assert(C.shape == (m + n, m + n))
lsa_row_ind, lsa_col_ind = sp.optimize.linear_sum_assignment(C)
# Fixup dummy assignments:
# each substitution i<->j should have dummy assignment m+j<->n+i
# NOTE: fast reduce of Cv relies on it
# assert len(lsa_row_ind) == len(lsa_col_ind)
indexes = zip(range(len(lsa_row_ind)), lsa_row_ind, lsa_col_ind)
subst_ind = [k for k, i, j in indexes if i < m and j < n]
indexes = zip(range(len(lsa_row_ind)), lsa_row_ind, lsa_col_ind)
dummy_ind = [k for k, i, j in indexes if i >= m and j >= n]
# assert len(subst_ind) == len(dummy_ind)
lsa_row_ind[dummy_ind] = lsa_col_ind[subst_ind] + m
lsa_col_ind[dummy_ind] = lsa_row_ind[subst_ind] + n
return CostMatrix(
C, lsa_row_ind, lsa_col_ind, C[lsa_row_ind, lsa_col_ind].sum()
)
def extract_C(C, i, j, m, n):
# assert(C.shape == (m + n, m + n))
row_ind = [k in i or k - m in j for k in range(m + n)]
col_ind = [k in j or k - n in i for k in range(m + n)]
return C[row_ind, :][:, col_ind]
def reduce_C(C, i, j, m, n):
# assert(C.shape == (m + n, m + n))
row_ind = [k not in i and k - m not in j for k in range(m + n)]
col_ind = [k not in j and k - n not in i for k in range(m + n)]
return C[row_ind, :][:, col_ind]
def reduce_ind(ind, i):
# assert set(ind) == set(range(len(ind)))
rind = ind[[k not in i for k in ind]]
for k in set(i):
rind[rind >= k] -= 1
return rind
def match_edges(u, v, pending_g, pending_h, Ce, matched_uv=None):
"""
Parameters:
u, v: matched vertices, u=None or v=None for
deletion/insertion
pending_g, pending_h: lists of edges not yet mapped
Ce: CostMatrix of pending edge mappings
matched_uv: partial vertex edit path
list of tuples (u, v) of previously matched vertex
mappings u<->v, u=None or v=None for
deletion/insertion
Returns:
list of (i, j): indices of edge mappings g<->h
localCe: local CostMatrix of edge mappings
(basically submatrix of Ce at cross of rows i, cols j)
"""
M = len(pending_g)
N = len(pending_h)
# assert Ce.C.shape == (M + N, M + N)
# only attempt to match edges after one node match has been made
# this will stop self-edges on the first node being automatically deleted
# even when a substitution is the better option
if matched_uv is None or len(matched_uv) == 0:
g_ind = []
h_ind = []
else:
g_ind = [
i
for i in range(M)
if pending_g[i][:2] == (u, u)
or any(
pending_g[i][:2] in ((p, u), (u, p), (p, p)) for p, q in matched_uv
)
]
h_ind = [
j
for j in range(N)
if pending_h[j][:2] == (v, v)
or any(
pending_h[j][:2] in ((q, v), (v, q), (q, q)) for p, q in matched_uv
)
]
m = len(g_ind)
n = len(h_ind)
if m or n:
C = extract_C(Ce.C, g_ind, h_ind, M, N)
# assert C.shape == (m + n, m + n)
# Forbid structurally invalid matches
# NOTE: inf remembered from Ce construction
for k, i in enumerate(g_ind):
g = pending_g[i][:2]
for l, j in enumerate(h_ind):
h = pending_h[j][:2]
if nx.is_directed(G1) or nx.is_directed(G2):
if any(
g == (p, u) and h == (q, v) or g == (u, p) and h == (v, q)
for p, q in matched_uv
):
continue
else:
if any(
g in ((p, u), (u, p)) and h in ((q, v), (v, q))
for p, q in matched_uv
):
continue
if g == (u, u) or any(g == (p, p) for p, q in matched_uv):
continue
if h == (v, v) or any(h == (q, q) for p, q in matched_uv):
continue
C[k, l] = inf
localCe = make_CostMatrix(C, m, n)
ij = [
(
g_ind[k] if k < m else M + h_ind[l],
h_ind[l] if l < n else N + g_ind[k],
)
for k, l in zip(localCe.lsa_row_ind, localCe.lsa_col_ind)
if k < m or l < n
]
else:
ij = []
localCe = CostMatrix(np.empty((0, 0)), [], [], 0)
return ij, localCe
def reduce_Ce(Ce, ij, m, n):
if len(ij):
i, j = zip(*ij)
m_i = m - sum(1 for t in i if t < m)
n_j = n - sum(1 for t in j if t < n)
return make_CostMatrix(reduce_C(Ce.C, i, j, m, n), m_i, n_j)
return Ce
def get_edit_ops(
matched_uv, pending_u, pending_v, Cv, pending_g, pending_h, Ce, matched_cost
):
"""
Parameters:
matched_uv: partial vertex edit path
list of tuples (u, v) of vertex mappings u<->v,
u=None or v=None for deletion/insertion
pending_u, pending_v: lists of vertices not yet mapped
Cv: CostMatrix of pending vertex mappings
pending_g, pending_h: lists of edges not yet mapped
Ce: CostMatrix of pending edge mappings
matched_cost: cost of partial edit path
Returns:
sequence of
(i, j): indices of vertex mapping u<->v
Cv_ij: reduced CostMatrix of pending vertex mappings
(basically Cv with row i, col j removed)
list of (x, y): indices of edge mappings g<->h
Ce_xy: reduced CostMatrix of pending edge mappings
(basically Ce with rows x, cols y removed)
cost: total cost of edit operation
NOTE: most promising ops first
"""
m = len(pending_u)
n = len(pending_v)
# assert Cv.C.shape == (m + n, m + n)
# 1) a vertex mapping from optimal linear sum assignment
i, j = min(
(k, l) for k, l in zip(Cv.lsa_row_ind, Cv.lsa_col_ind) if k < m or l < n
)
xy, localCe = match_edges(
pending_u[i] if i < m else None,
pending_v[j] if j < n else None,
pending_g,
pending_h,
Ce,
matched_uv,
)
Ce_xy = reduce_Ce(Ce, xy, len(pending_g), len(pending_h))
# assert Ce.ls <= localCe.ls + Ce_xy.ls
if prune(matched_cost + Cv.ls + localCe.ls + Ce_xy.ls):
pass
else:
# get reduced Cv efficiently
Cv_ij = CostMatrix(
reduce_C(Cv.C, (i,), (j,), m, n),
reduce_ind(Cv.lsa_row_ind, (i, m + j)),
reduce_ind(Cv.lsa_col_ind, (j, n + i)),
Cv.ls - Cv.C[i, j],
)
yield (i, j), Cv_ij, xy, Ce_xy, Cv.C[i, j] + localCe.ls
# 2) other candidates, sorted by lower-bound cost estimate
other = []
fixed_i, fixed_j = i, j
if m <= n:
candidates = (
(t, fixed_j)
for t in range(m + n)
if t != fixed_i and (t < m or t == m + fixed_j)
)
else:
candidates = (
(fixed_i, t)
for t in range(m + n)
if t != fixed_j and (t < n or t == n + fixed_i)
)
for i, j in candidates:
if prune(matched_cost + Cv.C[i, j] + Ce.ls):
continue
Cv_ij = make_CostMatrix(
reduce_C(Cv.C, (i,), (j,), m, n),
m - 1 if i < m else m,
n - 1 if j < n else n,
)
# assert Cv.ls <= Cv.C[i, j] + Cv_ij.ls
if prune(matched_cost + Cv.C[i, j] + Cv_ij.ls + Ce.ls):
continue
xy, localCe = match_edges(
pending_u[i] if i < m else None,
pending_v[j] if j < n else None,
pending_g,
pending_h,
Ce,
matched_uv,
)
if prune(matched_cost + Cv.C[i, j] + Cv_ij.ls + localCe.ls):
continue
Ce_xy = reduce_Ce(Ce, xy, len(pending_g), len(pending_h))
# assert Ce.ls <= localCe.ls + Ce_xy.ls
if prune(matched_cost + Cv.C[i, j] + Cv_ij.ls + localCe.ls + Ce_xy.ls):
continue
other.append(((i, j), Cv_ij, xy, Ce_xy, Cv.C[i, j] + localCe.ls))
yield from sorted(other, key=lambda t: t[4] + t[1].ls + t[3].ls)
def get_edit_paths(
matched_uv,
pending_u,
pending_v,
Cv,
matched_gh,
pending_g,
pending_h,
Ce,
matched_cost,
):
"""
Parameters:
matched_uv: partial vertex edit path
list of tuples (u, v) of vertex mappings u<->v,
u=None or v=None for deletion/insertion
pending_u, pending_v: lists of vertices not yet mapped
Cv: CostMatrix of pending vertex mappings
matched_gh: partial edge edit path
list of tuples (g, h) of edge mappings g<->h,
g=None or h=None for deletion/insertion
pending_g, pending_h: lists of edges not yet mapped
Ce: CostMatrix of pending edge mappings
matched_cost: cost of partial edit path
Returns:
sequence of (vertex_path, edge_path, cost)
vertex_path: complete vertex edit path
list of tuples (u, v) of vertex mappings u<->v,
u=None or v=None for deletion/insertion
edge_path: complete edge edit path
list of tuples (g, h) of edge mappings g<->h,
g=None or h=None for deletion/insertion
cost: total cost of edit path
NOTE: path costs are non-increasing
"""
# debug_print('matched-uv:', matched_uv)
# debug_print('matched-gh:', matched_gh)
# debug_print('matched-cost:', matched_cost)
# debug_print('pending-u:', pending_u)
# debug_print('pending-v:', pending_v)
# debug_print(Cv.C)
# assert list(sorted(G1.nodes)) == list(sorted(list(u for u, v in matched_uv if u is not None) + pending_u))
# assert list(sorted(G2.nodes)) == list(sorted(list(v for u, v in matched_uv if v is not None) + pending_v))
# debug_print('pending-g:', pending_g)
# debug_print('pending-h:', pending_h)
# debug_print(Ce.C)
# assert list(sorted(G1.edges)) == list(sorted(list(g for g, h in matched_gh if g is not None) + pending_g))
# assert list(sorted(G2.edges)) == list(sorted(list(h for g, h in matched_gh if h is not None) + pending_h))
# debug_print()
if prune(matched_cost + Cv.ls + Ce.ls):
return
if not max(len(pending_u), len(pending_v)):
# assert not len(pending_g)
# assert not len(pending_h)
# path completed!
# assert matched_cost <= maxcost_value
nonlocal maxcost_value
maxcost_value = min(maxcost_value, matched_cost)
yield matched_uv, matched_gh, matched_cost
else:
edit_ops = get_edit_ops(
matched_uv,
pending_u,
pending_v,
Cv,
pending_g,
pending_h,
Ce,
matched_cost,
)
for ij, Cv_ij, xy, Ce_xy, edit_cost in edit_ops:
i, j = ij
# assert Cv.C[i, j] + sum(Ce.C[t] for t in xy) == edit_cost
if prune(matched_cost + edit_cost + Cv_ij.ls + Ce_xy.ls):
continue
# dive deeper
u = pending_u.pop(i) if i < len(pending_u) else None
v = pending_v.pop(j) if j < len(pending_v) else None
matched_uv.append((u, v))
for x, y in xy:
len_g = len(pending_g)
len_h = len(pending_h)
matched_gh.append(
(
pending_g[x] if x < len_g else None,
pending_h[y] if y < len_h else None,
)
)
sortedx = sorted(x for x, y in xy)
sortedy = sorted(y for x, y in xy)
G = [
(pending_g.pop(x) if x < len(pending_g) else None)
for x in reversed(sortedx)
]
H = [
(pending_h.pop(y) if y < len(pending_h) else None)
for y in reversed(sortedy)
]
yield from get_edit_paths(
matched_uv,
pending_u,
pending_v,
Cv_ij,
matched_gh,
pending_g,
pending_h,
Ce_xy,
matched_cost + edit_cost,
)
# backtrack
if u is not None:
pending_u.insert(i, u)
if v is not None:
pending_v.insert(j, v)
matched_uv.pop()
for x, g in zip(sortedx, reversed(G)):
if g is not None:
pending_g.insert(x, g)
for y, h in zip(sortedy, reversed(H)):
if h is not None:
pending_h.insert(y, h)
for _ in xy:
matched_gh.pop()
# Initialization
pending_u = list(G1.nodes)
pending_v = list(G2.nodes)
initial_cost = 0
if roots:
root_u, root_v = roots
if root_u not in pending_u or root_v not in pending_v:
raise nx.NodeNotFound("Root node not in graph.")
# remove roots from pending
pending_u.remove(root_u)
pending_v.remove(root_v)
# cost matrix of vertex mappings
m = len(pending_u)
n = len(pending_v)
C = np.zeros((m + n, m + n))
if node_subst_cost:
C[0:m, 0:n] = np.array(
[
node_subst_cost(G1.nodes[u], G2.nodes[v])
for u in pending_u
for v in pending_v
]
).reshape(m, n)
if roots:
initial_cost = node_subst_cost(G1.nodes[root_u], G2.nodes[root_v])
elif node_match:
C[0:m, 0:n] = np.array(
[
1 - int(node_match(G1.nodes[u], G2.nodes[v]))
for u in pending_u
for v in pending_v
]
).reshape(m, n)
if roots:
initial_cost = 1 - node_match(G1.nodes[root_u], G2.nodes[root_v])
else:
# all zeroes
pass
# assert not min(m, n) or C[0:m, 0:n].min() >= 0
if node_del_cost:
del_costs = [node_del_cost(G1.nodes[u]) for u in pending_u]
else:
del_costs = [1] * len(pending_u)
# assert not m or min(del_costs) >= 0
if node_ins_cost:
ins_costs = [node_ins_cost(G2.nodes[v]) for v in pending_v]
else:
ins_costs = [1] * len(pending_v)
# assert not n or min(ins_costs) >= 0
inf = C[0:m, 0:n].sum() + sum(del_costs) + sum(ins_costs) + 1
C[0:m, n : n + m] = np.array(
[del_costs[i] if i == j else inf for i in range(m) for j in range(m)]
).reshape(m, m)
C[m : m + n, 0:n] = np.array(
[ins_costs[i] if i == j else inf for i in range(n) for j in range(n)]
).reshape(n, n)
Cv = make_CostMatrix(C, m, n)
# debug_print(f"Cv: {m} x {n}")
# debug_print(Cv.C)
pending_g = list(G1.edges)
pending_h = list(G2.edges)
# cost matrix of edge mappings
m = len(pending_g)
n = len(pending_h)
C = np.zeros((m + n, m + n))
if edge_subst_cost:
C[0:m, 0:n] = np.array(
[
edge_subst_cost(G1.edges[g], G2.edges[h])
for g in pending_g
for h in pending_h
]
).reshape(m, n)
elif edge_match:
C[0:m, 0:n] = np.array(
[
1 - int(edge_match(G1.edges[g], G2.edges[h]))
for g in pending_g
for h in pending_h
]
).reshape(m, n)
else:
# all zeroes
pass
# assert not min(m, n) or C[0:m, 0:n].min() >= 0
if edge_del_cost:
del_costs = [edge_del_cost(G1.edges[g]) for g in pending_g]
else:
del_costs = [1] * len(pending_g)
# assert not m or min(del_costs) >= 0
if edge_ins_cost:
ins_costs = [edge_ins_cost(G2.edges[h]) for h in pending_h]
else:
ins_costs = [1] * len(pending_h)
# assert not n or min(ins_costs) >= 0
inf = C[0:m, 0:n].sum() + sum(del_costs) + sum(ins_costs) + 1
C[0:m, n : n + m] = np.array(
[del_costs[i] if i == j else inf for i in range(m) for j in range(m)]
).reshape(m, m)
C[m : m + n, 0:n] = np.array(
[ins_costs[i] if i == j else inf for i in range(n) for j in range(n)]
).reshape(n, n)
Ce = make_CostMatrix(C, m, n)
# debug_print(f'Ce: {m} x {n}')
# debug_print(Ce.C)
# debug_print()
maxcost_value = Cv.C.sum() + Ce.C.sum() + 1
if timeout is not None:
if timeout <= 0:
raise nx.NetworkXError("Timeout value must be greater than 0")
start = time.perf_counter()
def prune(cost):
if timeout is not None:
if time.perf_counter() - start > timeout:
return True
if upper_bound is not None:
if cost > upper_bound:
return True
if cost > maxcost_value:
return True
if strictly_decreasing and cost >= maxcost_value:
return True
return False
# Now go!
done_uv = [] if roots is None else [roots]
for vertex_path, edge_path, cost in get_edit_paths(
done_uv, pending_u, pending_v, Cv, [], pending_g, pending_h, Ce, initial_cost
):
# assert sorted(G1.nodes) == sorted(u for u, v in vertex_path if u is not None)
# assert sorted(G2.nodes) == sorted(v for u, v in vertex_path if v is not None)
# assert sorted(G1.edges) == sorted(g for g, h in edge_path if g is not None)
# assert sorted(G2.edges) == sorted(h for g, h in edge_path if h is not None)
# print(vertex_path, edge_path, cost, file = sys.stderr)
# assert cost == maxcost_value
yield list(vertex_path), list(edge_path), float(cost)
| (G, sample_size, path_length=5, index_map=None, weight='weight', seed=None, *, backend=None, **backend_kwargs) |
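The recursive edit-path search above is the kind of machinery NetworkX uses internally for graph edit distance; as a hedged sketch (assuming the public wrapper ``nx.graph_edit_distance`` is the intended entry point for this search), it can be exercised as:
>>> import networkx as nx
>>> G1 = nx.cycle_graph(4)
>>> G2 = nx.path_graph(4)
>>> nx.graph_edit_distance(G1, G2)  # a single edge deletion turns C4 into P4
1.0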
30,697 | networkx.algorithms.traversal.breadth_first_search | generic_bfs_edges | Iterate over edges in a breadth-first search.
The breadth-first search begins at `source` and enqueues the
neighbors of newly visited nodes specified by the `neighbors`
function.
Parameters
----------
G : NetworkX graph
source : node
Starting node for the breadth-first search; this function
iterates over only those edges in the component reachable from
this node.
neighbors : function
A function that takes a newly visited node of the graph as input
and returns an *iterator* (not just a list) of nodes that are
neighbors of that node with custom ordering. If not specified, this is
just the ``G.neighbors`` method, but in general it can be any function
that returns an iterator over some or all of the neighbors of a
given node, in any order.
depth_limit : int, optional(default=len(G))
Specify the maximum search depth.
sort_neighbors : Callable (default=None)
.. deprecated:: 3.2
The sort_neighbors parameter is deprecated and will be removed in
version 3.4. A custom (e.g. sorted) ordering of neighbors can be
specified with the `neighbors` parameter.
A function that takes an iterator over nodes as the input, and
returns an iterable of the same nodes with a custom ordering.
For example, `sorted` will sort the nodes in increasing order.
Yields
------
edge
Edges in the breadth-first search starting from `source`.
Examples
--------
>>> G = nx.path_graph(7)
>>> list(nx.generic_bfs_edges(G, source=0))
[(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6)]
>>> list(nx.generic_bfs_edges(G, source=2))
[(2, 1), (2, 3), (1, 0), (3, 4), (4, 5), (5, 6)]
>>> list(nx.generic_bfs_edges(G, source=2, depth_limit=2))
[(2, 1), (2, 3), (1, 0), (3, 4)]
The `neighbors` param can be used to specify the visitation order of each
node's neighbors generically. In the following example, we modify the default
neighbor ordering to return *odd* nodes first:
>>> def odd_first(n):
... return sorted(G.neighbors(n), key=lambda x: x % 2, reverse=True)
>>> G = nx.star_graph(5)
>>> list(nx.generic_bfs_edges(G, source=0)) # Default neighbor ordering
[(0, 1), (0, 2), (0, 3), (0, 4), (0, 5)]
>>> list(nx.generic_bfs_edges(G, source=0, neighbors=odd_first))
[(0, 1), (0, 3), (0, 5), (0, 2), (0, 4)]
Notes
-----
This implementation is from `PADS`_, which was in the public domain
when it was first accessed in July, 2004. The modifications
to allow depth limits are based on the Wikipedia article
"`Depth-limited-search`_".
.. _PADS: http://www.ics.uci.edu/~eppstein/PADS/BFS.py
.. _Depth-limited-search: https://en.wikipedia.org/wiki/Depth-limited_search
| null | (G, source, neighbors=None, depth_limit=None, sort_neighbors=None, *, backend=None, **backend_kwargs) |
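A further sketch, reusing the path graph from the examples above, showing how `depth_limit` cuts the search off after a single level around the source:
>>> G = nx.path_graph(7)
>>> list(nx.generic_bfs_edges(G, source=3, depth_limit=1))
[(3, 2), (3, 4)]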
30,698 | networkx.generators.geometric | geographical_threshold_graph | Returns a geographical threshold graph.
The geographical threshold graph model places $n$ nodes uniformly at
random in a rectangular domain. Each node $u$ is assigned a weight
$w_u$. Two nodes $u$ and $v$ are joined by an edge if
.. math::
(w_u + w_v)p_{dist}(r) \ge \theta
where `r` is the distance between `u` and `v`, `p_dist` is any function of
`r`, and :math:`\theta` is the threshold parameter. `p_dist` is used to
give weight to the distance between nodes when deciding whether or not
they should be connected. The larger `p_dist` is, the more prone nodes
separated by `r` are to be connected, and vice versa.
Parameters
----------
n : int or iterable
Number of nodes or iterable of nodes
theta: float
Threshold value
dim : int, optional
Dimension of graph
pos : dict
Node positions as a dictionary of tuples keyed by node.
weight : dict
Node weights as a dictionary of numbers keyed by node.
metric : function
A metric on vectors of numbers (represented as lists or
tuples). This must be a function that accepts two lists (or
tuples) as input and yields a number as output. The function
must also satisfy the four requirements of a `metric`_.
Specifically, if $d$ is the function and $x$, $y$,
and $z$ are vectors in the graph, then $d$ must satisfy
1. $d(x, y) \ge 0$,
2. $d(x, y) = 0$ if and only if $x = y$,
3. $d(x, y) = d(y, x)$,
4. $d(x, z) \le d(x, y) + d(y, z)$.
If this argument is not specified, the Euclidean distance metric is
used.
.. _metric: https://en.wikipedia.org/wiki/Metric_%28mathematics%29
p_dist : function, optional
Any function used to give weight to the distance between nodes when
deciding whether or not they should be connected. `p_dist` was
originally conceived as a probability density function giving the
probability of connecting two nodes that are of metric distance `r`
apart. The implementation here allows for more arbitrary definitions
of `p_dist` that do not need to correspond to valid probability
density functions. The :mod:`scipy.stats` package has many
probability density functions implemented and tools for custom
probability density definitions, and passing the ``.pdf`` method of
scipy.stats distributions can be used here. If ``p_dist=None``
(the default), the inverse-square function :math:`r^{-2}` is used.
seed : integer, random_state, or None (default)
Indicator of random number generation state.
See :ref:`Randomness<randomness>`.
pos_name : string, default="pos"
The name of the node attribute which represents the position
in 2D coordinates of the node in the returned graph.
weight_name : string, default="weight"
The name of the node attribute which represents the weight
of the node in the returned graph.
Returns
-------
Graph
A random geographic threshold graph, undirected and without
self-loops.
Each node has a node attribute ``pos`` that stores the
position of that node in Euclidean space as provided by the
``pos`` keyword argument or, if ``pos`` was not provided, as
generated by this function. Similarly, each node has a node
attribute ``weight`` that stores the weight of that node as
provided or as generated.
Examples
--------
Specify an alternate distance metric using the ``metric`` keyword
argument. For example, to use the `taxicab metric`_ instead of the
default `Euclidean metric`_::
>>> dist = lambda x, y: sum(abs(a - b) for a, b in zip(x, y))
>>> G = nx.geographical_threshold_graph(10, 0.1, metric=dist)
.. _taxicab metric: https://en.wikipedia.org/wiki/Taxicab_geometry
.. _Euclidean metric: https://en.wikipedia.org/wiki/Euclidean_distance
Notes
-----
If weights are not specified they are assigned to nodes by drawing randomly
from the exponential distribution with rate parameter $\lambda=1$.
To specify weights from a different distribution, use the `weight` keyword
argument::
>>> import random
>>> n = 20
>>> w = {i: random.expovariate(5.0) for i in range(n)}
>>> G = nx.geographical_threshold_graph(20, 50, weight=w)
If node positions are not specified they are randomly assigned from the
uniform distribution.
References
----------
.. [1] Masuda, N., Miwa, H., Konno, N.:
Geographical threshold graphs with small-world and scale-free
properties.
Physical Review E 71, 036108 (2005)
.. [2] Milan Bradonjić, Aric Hagberg and Allon G. Percus,
Giant component and connectivity in geographical threshold graphs,
in Algorithms and Models for the Web-Graph (WAW 2007),
Antony Bonato and Fan Chung (Eds), pp. 209--216, 2007
| def thresholded_random_geometric_graph(
n,
radius,
theta,
dim=2,
pos=None,
weight=None,
p=2,
seed=None,
*,
pos_name="pos",
weight_name="weight",
):
r"""Returns a thresholded random geometric graph in the unit cube.
The thresholded random geometric graph [1] model places `n` nodes
uniformly at random in the unit cube of dimensions `dim`. Each node
`u` is assigned a weight :math:`w_u`. Two nodes `u` and `v` are
joined by an edge if they are within the maximum connection distance,
`radius` computed by the `p`-Minkowski distance and the summation of
weights :math:`w_u` + :math:`w_v` is greater than or equal
to the threshold parameter `theta`.
Edges within `radius` of each other are determined using a KDTree when
SciPy is available. This reduces the time complexity from :math:`O(n^2)`
to :math:`O(n)`.
Parameters
----------
n : int or iterable
Number of nodes or iterable of nodes
radius: float
Distance threshold value
theta: float
Threshold value
dim : int, optional
Dimension of graph
pos : dict, optional
A dictionary keyed by node with node positions as values.
weight : dict, optional
Node weights as a dictionary of numbers keyed by node.
p : float, optional (default 2)
Which Minkowski distance metric to use. `p` has to meet the condition
``1 <= p <= infinity``.
If this argument is not specified, the :math:`L^2` metric
(the Euclidean distance metric), p = 2 is used.
This should not be confused with the `p` of an Erdős-Rényi random
graph, which represents probability.
seed : integer, random_state, or None (default)
Indicator of random number generation state.
See :ref:`Randomness<randomness>`.
pos_name : string, default="pos"
The name of the node attribute which represents the position
in 2D coordinates of the node in the returned graph.
weight_name : string, default="weight"
The name of the node attribute which represents the weight
of the node in the returned graph.
Returns
-------
Graph
A thresholded random geographic graph, undirected and without
self-loops.
Each node has a node attribute ``'pos'`` that stores the
position of that node in Euclidean space as provided by the
``pos`` keyword argument or, if ``pos`` was not provided, as
generated by this function. Similarly, each node has a node
attribute ``'weight'`` that stores the weight of that node as
provided or as generated.
Examples
--------
Default Graph:
G = nx.thresholded_random_geometric_graph(50, 0.2, 0.1)
Custom Graph:
Create a thresholded random geometric graph on 50 uniformly distributed
nodes where nodes are joined by an edge if their sum weights drawn from
an exponential distribution with rate = 5 are >= theta = 0.1 and their
Euclidean distance is at most 0.2.
Notes
-----
This uses a *k*-d tree to build the graph.
The `pos` keyword argument can be used to specify node positions so you
can create an arbitrary distribution and domain for positions.
For example, to use a 2D Gaussian distribution of node positions with mean
(0, 0) and standard deviation 2
If weights are not specified they are assigned to nodes by drawing randomly
from the exponential distribution with rate parameter :math:`\lambda=1`.
To specify weights from a different distribution, use the `weight` keyword
argument::
::
>>> import random
>>> import math
>>> n = 50
>>> pos = {i: (random.gauss(0, 2), random.gauss(0, 2)) for i in range(n)}
>>> w = {i: random.expovariate(5.0) for i in range(n)}
>>> G = nx.thresholded_random_geometric_graph(n, 0.2, 0.1, 2, pos, w)
References
----------
.. [1] http://cole-maclean.github.io/blog/files/thesis.pdf
"""
G = nx.empty_graph(n)
G.name = f"thresholded_random_geometric_graph({n}, {radius}, {theta}, {dim})"
# If no weights are provided, choose them from an exponential
# distribution.
if weight is None:
weight = {v: seed.expovariate(1) for v in G}
# If no positions are provided, choose uniformly random vectors in
# Euclidean space of the specified dimension.
if pos is None:
pos = {v: [seed.random() for i in range(dim)] for v in G}
# If no distance metric is provided, use Euclidean distance.
nx.set_node_attributes(G, weight, weight_name)
nx.set_node_attributes(G, pos, pos_name)
edges = (
(u, v)
for u, v in _geometric_edges(G, radius, p, pos_name)
if weight[u] + weight[v] >= theta
)
G.add_edges_from(edges)
return G
| (n, theta, dim=2, pos=None, weight=None, metric=None, p_dist=None, seed=None, *, pos_name='pos', weight_name='weight', backend=None, **backend_kwargs) |
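A minimal sketch passing a custom `p_dist` (here a hypothetical inverse-distance kernel chosen only for illustration) together with explicit node weights; the node count is exact by construction:
>>> import random
>>> n = 20
>>> w = {i: random.expovariate(1.0) for i in range(n)}
>>> G = nx.geographical_threshold_graph(n, 30, weight=w, p_dist=lambda r: r**-1)
>>> len(G)
20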
30,700 | networkx.generators.geometric | geometric_edges | Returns edge list of node pairs within `radius` of each other.
Parameters
----------
G : networkx graph
The graph from which to generate the edge list. The nodes in `G` should
have an attribute ``pos`` corresponding to the node position, which is
used to compute the distance to other nodes.
radius : scalar
The distance threshold. Edges are included in the edge list if the
distance between the two nodes is less than `radius`.
pos_name : string, default="pos"
The name of the node attribute which represents the position of each
node in 2D coordinates. Every node in the Graph must have this attribute.
p : scalar, default=2
The `Minkowski distance metric
<https://en.wikipedia.org/wiki/Minkowski_distance>`_ used to compute
distances. The default value is 2, i.e. Euclidean distance.
Returns
-------
edges : list
List of edges whose distances are less than `radius`
Notes
-----
Radius uses Minkowski distance metric `p`.
If scipy is available, `scipy.spatial.cKDTree` is used to speed computation.
Examples
--------
Create a graph with nodes that have a "pos" attribute representing 2D
coordinates.
>>> G = nx.Graph()
>>> G.add_nodes_from(
... [
... (0, {"pos": (0, 0)}),
... (1, {"pos": (3, 0)}),
... (2, {"pos": (8, 0)}),
... ]
... )
>>> nx.geometric_edges(G, radius=1)
[]
>>> nx.geometric_edges(G, radius=4)
[(0, 1)]
>>> nx.geometric_edges(G, radius=6)
[(0, 1), (1, 2)]
>>> nx.geometric_edges(G, radius=9)
[(0, 1), (0, 2), (1, 2)]
| def thresholded_random_geometric_graph(
n,
radius,
theta,
dim=2,
pos=None,
weight=None,
p=2,
seed=None,
*,
pos_name="pos",
weight_name="weight",
):
r"""Returns a thresholded random geometric graph in the unit cube.
The thresholded random geometric graph [1] model places `n` nodes
uniformly at random in the unit cube of dimensions `dim`. Each node
`u` is assigned a weight :math:`w_u`. Two nodes `u` and `v` are
joined by an edge if they are within the maximum connection distance,
`radius` computed by the `p`-Minkowski distance and the summation of
weights :math:`w_u` + :math:`w_v` is greater than or equal
to the threshold parameter `theta`.
Edges within `radius` of each other are determined using a KDTree when
SciPy is available. This reduces the time complexity from :math:`O(n^2)`
to :math:`O(n)`.
Parameters
----------
n : int or iterable
Number of nodes or iterable of nodes
radius: float
Distance threshold value
theta: float
Threshold value
dim : int, optional
Dimension of graph
pos : dict, optional
A dictionary keyed by node with node positions as values.
weight : dict, optional
Node weights as a dictionary of numbers keyed by node.
p : float, optional (default 2)
Which Minkowski distance metric to use. `p` has to meet the condition
``1 <= p <= infinity``.
If this argument is not specified, the :math:`L^2` metric
(the Euclidean distance metric), p = 2 is used.
This should not be confused with the `p` of an Erdős-Rényi random
graph, which represents probability.
seed : integer, random_state, or None (default)
Indicator of random number generation state.
See :ref:`Randomness<randomness>`.
pos_name : string, default="pos"
The name of the node attribute which represents the position
in 2D coordinates of the node in the returned graph.
weight_name : string, default="weight"
The name of the node attribute which represents the weight
of the node in the returned graph.
Returns
-------
Graph
A thresholded random geographic graph, undirected and without
self-loops.
Each node has a node attribute ``'pos'`` that stores the
position of that node in Euclidean space as provided by the
``pos`` keyword argument or, if ``pos`` was not provided, as
generated by this function. Similarly, each node has a node
attribute ``'weight'`` that stores the weight of that node as
provided or as generated.
Examples
--------
Default Graph:
G = nx.thresholded_random_geometric_graph(50, 0.2, 0.1)
Custom Graph:
Create a thresholded random geometric graph on 50 uniformly distributed
nodes where nodes are joined by an edge if their sum weights drawn from
an exponential distribution with rate = 5 are >= theta = 0.1 and their
Euclidean distance is at most 0.2.
Notes
-----
This uses a *k*-d tree to build the graph.
The `pos` keyword argument can be used to specify node positions so you
can create an arbitrary distribution and domain for positions.
For example, to use a 2D Gaussian distribution of node positions with mean
(0, 0) and standard deviation 2
If weights are not specified they are assigned to nodes by drawing randomly
from the exponential distribution with rate parameter :math:`\lambda=1`.
To specify weights from a different distribution, use the `weight` keyword
argument::
::
>>> import random
>>> import math
>>> n = 50
>>> pos = {i: (random.gauss(0, 2), random.gauss(0, 2)) for i in range(n)}
>>> w = {i: random.expovariate(5.0) for i in range(n)}
>>> G = nx.thresholded_random_geometric_graph(n, 0.2, 0.1, 2, pos, w)
References
----------
.. [1] http://cole-maclean.github.io/blog/files/thesis.pdf
"""
G = nx.empty_graph(n)
G.name = f"thresholded_random_geometric_graph({n}, {radius}, {theta}, {dim})"
# If no weights are provided, choose them from an exponential
# distribution.
if weight is None:
weight = {v: seed.expovariate(1) for v in G}
# If no positions are provided, choose uniformly random vectors in
# Euclidean space of the specified dimension.
if pos is None:
pos = {v: [seed.random() for i in range(dim)] for v in G}
# If no distance metric is provided, use Euclidean distance.
nx.set_node_attributes(G, weight, weight_name)
nx.set_node_attributes(G, pos, pos_name)
edges = (
(u, v)
for u, v in _geometric_edges(G, radius, p, pos_name)
if weight[u] + weight[v] >= theta
)
G.add_edges_from(edges)
return G
| (G, radius, p=2, *, pos_name='pos', backend=None, **backend_kwargs) |
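The Minkowski parameter `p` determines which pairs fall within `radius`; a small sketch with one off-axis pair makes the difference between the Euclidean and taxicab metrics visible:
>>> H = nx.Graph()
>>> H.add_nodes_from([(0, {"pos": (0, 0)}), (1, {"pos": (1, 1)})])
>>> nx.geometric_edges(H, radius=1.5)  # Euclidean (p=2): sqrt(2) < 1.5
[(0, 1)]
>>> nx.geometric_edges(H, radius=1.5, p=1)  # taxicab: 2 > 1.5
[]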
30,701 | networkx.generators.geometric | geometric_soft_configuration_graph | Returns a random graph from the geometric soft configuration model.
The $\mathbb{S}^1$ model [1]_ is the geometric soft configuration model
which is able to explain many fundamental features of real networks such as
small-world property, heteregenous degree distributions, high level of
clustering, and self-similarity.
In the geometric soft configuration model, a node $i$ is assigned two hidden
variables: a hidden degree $\kappa_i$, quantifying its popularity, influence,
or importance, and an angular position $\theta_i$ in a circle abstracting the
similarity space, where angular distances between nodes are a proxy for their
similarity. Focusing on the angular position, this model is often called
the $\mathbb{S}^1$ model (a one-dimensional sphere). The circle's radius is
adjusted to $R = N/2\pi$, where $N$ is the number of nodes, so that the density
is set to 1 without loss of generality.
The connection probability between any pair of nodes increases with
the product of their hidden degrees (i.e., their combined popularities),
and decreases with the angular distance between the two nodes.
Specifically, nodes $i$ and $j$ are connected with the probability
$p_{ij} = \frac{1}{1 + \frac{d_{ij}^\beta}{\left(\mu \kappa_i \kappa_j\right)^{\max(1, \beta)}}}$
where $d_{ij} = R\Delta\theta_{ij}$ is the arc length of the circle between
nodes $i$ and $j$ separated by an angular distance $\Delta\theta_{ij}$.
Parameters $\mu$ and $\beta$ (also called inverse temperature) control the
average degree and the clustering coefficient, respectively.
It can be shown [2]_ that the model undergoes a structural phase transition
at $\beta=1$ so that for $\beta<1$ networks are unclustered in the thermodynamic
limit (when $N\to \infty$) whereas for $\beta>1$ the ensemble generates
networks with finite clustering coefficient.
The $\mathbb{S}^1$ model can be expressed as a purely geometric model
$\mathbb{H}^2$ in the hyperbolic plane [3]_ by mapping the hidden degree of
each node into a radial coordinate as
$r_i = \hat{R} - \frac{2 \max(1, \beta)}{\beta \zeta} \ln \left(\frac{\kappa_i}{\kappa_0}\right)$
where $\hat{R}$ is the radius of the hyperbolic disk and $\zeta$ is the curvature,
$\hat{R} = \frac{2}{\zeta} \ln \left(\frac{N}{\pi}\right)
- \frac{2\max(1, \beta)}{\beta \zeta} \ln (\mu \kappa_0^2)$
The connection probability then reads
$p_{ij} = \frac{1}{1 + \exp\left({\frac{\beta\zeta}{2} (x_{ij} - \hat{R})}\right)}$
where
$x_{ij} = r_i + r_j + \frac{2}{\zeta} \ln \frac{\Delta\theta_{ij}}{2}$
is a good approximation of the hyperbolic distance between two nodes separated
by an angular distance $\Delta\theta_{ij}$ with radial coordinates $r_i$ and $r_j$.
For $\beta > 1$, the curvature $\zeta = 1$, for $\beta < 1$, $\zeta = \beta^{-1}$.
Parameters
----------
Either `n`, `gamma`, `mean_degree` are provided or `kappas`. The values of
`n`, `gamma`, `mean_degree` (if provided) are used to construct a random
kappa-dict keyed by node with values sampled from a power-law distribution.
beta : positive number
Inverse temperature, controlling the clustering coefficient.
n : int (default: None)
Size of the network (number of nodes).
If not provided, `kappas` must be provided and holds the nodes.
gamma : float (default: None)
Exponent of the power-law distribution for hidden degrees `kappas`.
If not provided, `kappas` must be provided directly.
mean_degree : float (default: None)
The mean degree in the network.
If not provided, `kappas` must be provided directly.
kappas : dict (default: None)
A dict keyed by node to its hidden degree value.
If not provided, random values are computed based on a power-law
distribution using `n`, `gamma` and `mean_degree`.
seed : int, random_state, or None (default)
Indicator of random number generation state.
See :ref:`Randomness<randomness>`.
Returns
-------
Graph
A random geometric soft configuration graph (undirected with no self-loops).
Each node has three node-attributes:
- ``kappa`` that represents the hidden degree.
- ``theta`` the position in the similarity space ($\mathbb{S}^1$) which is
also the angular position in the hyperbolic plane.
- ``radius`` the radial position in the hyperbolic plane
(based on the hidden degree).
Examples
--------
Generate a network with specified parameters:
>>> G = nx.geometric_soft_configuration_graph(beta=1.5, n=100, gamma=2.7, mean_degree=5)
Create a geometric soft configuration graph with 100 nodes. The $\beta$ parameter
is set to 1.5 and the exponent of the powerlaw distribution of the hidden
degrees is 2.7 with mean value of 5.
Generate a network with predefined hidden degrees:
>>> kappas = {i: 10 for i in range(100)}
>>> G = nx.geometric_soft_configuration_graph(beta=2.5, kappas=kappas)
Create a geometric soft configuration graph with 100 nodes. The $\beta$ parameter
is set to 2.5 and all nodes with hidden degree $\kappa=10$.
References
----------
.. [1] Serrano, M. Á., Krioukov, D., & Boguñá, M. (2008). Self-similarity
of complex networks and hidden metric spaces. Physical review letters, 100(7), 078701.
.. [2] van der Kolk, J., Serrano, M. Á., & Boguñá, M. (2022). An anomalous
topological phase transition in spatial random graphs. Communications Physics, 5(1), 245.
.. [3] Krioukov, D., Papadopoulos, F., Kitsak, M., Vahdat, A., & Boguná, M. (2010).
Hyperbolic geometry of complex networks. Physical Review E, 82(3), 036106.
| def thresholded_random_geometric_graph(
n,
radius,
theta,
dim=2,
pos=None,
weight=None,
p=2,
seed=None,
*,
pos_name="pos",
weight_name="weight",
):
r"""Returns a thresholded random geometric graph in the unit cube.
The thresholded random geometric graph [1] model places `n` nodes
uniformly at random in the unit cube of dimensions `dim`. Each node
`u` is assigned a weight :math:`w_u`. Two nodes `u` and `v` are
joined by an edge if they are within the maximum connection distance,
`radius` computed by the `p`-Minkowski distance and the summation of
weights :math:`w_u` + :math:`w_v` is greater than or equal
to the threshold parameter `theta`.
Edges within `radius` of each other are determined using a KDTree when
SciPy is available. This reduces the time complexity from :math:`O(n^2)`
to :math:`O(n)`.
Parameters
----------
n : int or iterable
Number of nodes or iterable of nodes
radius: float
Distance threshold value
theta: float
Threshold value
dim : int, optional
Dimension of graph
pos : dict, optional
A dictionary keyed by node with node positions as values.
weight : dict, optional
Node weights as a dictionary of numbers keyed by node.
p : float, optional (default 2)
Which Minkowski distance metric to use. `p` has to meet the condition
``1 <= p <= infinity``.
If this argument is not specified, the :math:`L^2` metric
(the Euclidean distance metric), p = 2 is used.
This should not be confused with the `p` of an Erdős-Rényi random
graph, which represents probability.
seed : integer, random_state, or None (default)
Indicator of random number generation state.
See :ref:`Randomness<randomness>`.
pos_name : string, default="pos"
The name of the node attribute which represents the position
in 2D coordinates of the node in the returned graph.
weight_name : string, default="weight"
The name of the node attribute which represents the weight
of the node in the returned graph.
Returns
-------
Graph
A thresholded random geographic graph, undirected and without
self-loops.
Each node has a node attribute ``'pos'`` that stores the
position of that node in Euclidean space as provided by the
``pos`` keyword argument or, if ``pos`` was not provided, as
generated by this function. Similarly, each node has a node
attribute ``'weight'`` that stores the weight of that node as
provided or as generated.
Examples
--------
Default Graph:
G = nx.thresholded_random_geometric_graph(50, 0.2, 0.1)
Custom Graph:
Create a thresholded random geometric graph on 50 uniformly distributed
nodes where nodes are joined by an edge if their sum weights drawn from
an exponential distribution with rate = 5 are >= theta = 0.1 and their
Euclidean distance is at most 0.2.
Notes
-----
This uses a *k*-d tree to build the graph.
The `pos` keyword argument can be used to specify node positions so you
can create an arbitrary distribution and domain for positions.
For example, to use a 2D Gaussian distribution of node positions with mean
(0, 0) and standard deviation 2
If weights are not specified they are assigned to nodes by drawing randomly
from the exponential distribution with rate parameter :math:`\lambda=1`.
To specify weights from a different distribution, use the `weight` keyword
argument::
::
>>> import random
>>> import math
>>> n = 50
>>> pos = {i: (random.gauss(0, 2), random.gauss(0, 2)) for i in range(n)}
>>> w = {i: random.expovariate(5.0) for i in range(n)}
>>> G = nx.thresholded_random_geometric_graph(n, 0.2, 0.1, 2, pos, w)
References
----------
.. [1] http://cole-maclean.github.io/blog/files/thesis.pdf
"""
G = nx.empty_graph(n)
G.name = f"thresholded_random_geometric_graph({n}, {radius}, {theta}, {dim})"
# If no weights are provided, choose them from an exponential
# distribution.
if weight is None:
weight = {v: seed.expovariate(1) for v in G}
# If no positions are provided, choose uniformly random vectors in
# Euclidean space of the specified dimension.
if pos is None:
pos = {v: [seed.random() for i in range(dim)] for v in G}
# If no distance metric is provided, use Euclidean distance.
nx.set_node_attributes(G, weight, weight_name)
nx.set_node_attributes(G, pos, pos_name)
edges = (
(u, v)
for u, v in _geometric_edges(G, radius, p, pos_name)
if weight[u] + weight[v] >= theta
)
G.add_edges_from(edges)
return G
| (*, beta, n=None, gamma=None, mean_degree=None, kappas=None, seed=None, backend=None, **backend_kwargs) |
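Each generated node stores its hidden variables as node attributes, so a short sketch of reading them back (the actual values are random and therefore not shown):
>>> G = nx.geometric_soft_configuration_graph(beta=1.5, n=50, gamma=2.7, mean_degree=5, seed=42)
>>> kappas = nx.get_node_attributes(G, "kappa")
>>> thetas = nx.get_node_attributes(G, "theta")
>>> len(kappas) == len(thetas) == 50
True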
30,702 | networkx.classes.function | get_edge_attributes | Get edge attributes from graph
Parameters
----------
G : NetworkX Graph
name : string
Attribute name
default: object (default=None)
Default value of the edge attribute if there is no value set for that
edge in graph. If `None` then edges without this attribute are not
included in the returned dict.
Returns
-------
Dictionary of attributes keyed by edge. For (di)graphs, the keys are
2-tuples of the form: (u, v). For multi(di)graphs, the keys are 3-tuples of
the form: (u, v, key).
Examples
--------
>>> G = nx.Graph()
>>> nx.add_path(G, [1, 2, 3], color="red")
>>> color = nx.get_edge_attributes(G, "color")
>>> color[(1, 2)]
'red'
>>> G.add_edge(3, 4)
>>> color = nx.get_edge_attributes(G, "color", default="yellow")
>>> color[(3, 4)]
'yellow'
| def get_edge_attributes(G, name, default=None):
"""Get edge attributes from graph
Parameters
----------
G : NetworkX Graph
name : string
Attribute name
default: object (default=None)
Default value of the edge attribute if there is no value set for that
edge in graph. If `None` then edges without this attribute are not
included in the returned dict.
Returns
-------
Dictionary of attributes keyed by edge. For (di)graphs, the keys are
2-tuples of the form: (u, v). For multi(di)graphs, the keys are 3-tuples of
the form: (u, v, key).
Examples
--------
>>> G = nx.Graph()
>>> nx.add_path(G, [1, 2, 3], color="red")
>>> color = nx.get_edge_attributes(G, "color")
>>> color[(1, 2)]
'red'
>>> G.add_edge(3, 4)
>>> color = nx.get_edge_attributes(G, "color", default="yellow")
>>> color[(3, 4)]
'yellow'
"""
if G.is_multigraph():
edges = G.edges(keys=True, data=True)
else:
edges = G.edges(data=True)
if default is not None:
return {x[:-1]: x[-1].get(name, default) for x in edges}
return {x[:-1]: x[-1][name] for x in edges if name in x[-1]}
| (G, name, default=None) |
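For multigraphs the returned dictionary is keyed by 3-tuples ``(u, v, key)``, as noted above; a brief sketch:
>>> M = nx.MultiGraph()
>>> key = M.add_edge(1, 2, weight=7)
>>> nx.get_edge_attributes(M, "weight")
{(1, 2, 0): 7}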
30,703 | networkx.classes.function | get_node_attributes | Get node attributes from graph
Parameters
----------
G : NetworkX Graph
name : string
Attribute name
default: object (default=None)
Default value of the node attribute if there is no value set for that
node in graph. If `None` then nodes without this attribute are not
included in the returned dict.
Returns
-------
Dictionary of attributes keyed by node.
Examples
--------
>>> G = nx.Graph()
>>> G.add_nodes_from([1, 2, 3], color="red")
>>> color = nx.get_node_attributes(G, "color")
>>> color[1]
'red'
>>> G.add_node(4)
>>> color = nx.get_node_attributes(G, "color", default="yellow")
>>> color[4]
'yellow'
| def get_node_attributes(G, name, default=None):
"""Get node attributes from graph
Parameters
----------
G : NetworkX Graph
name : string
Attribute name
default: object (default=None)
Default value of the node attribute if there is no value set for that
node in graph. If `None` then nodes without this attribute are not
included in the returned dict.
Returns
-------
Dictionary of attributes keyed by node.
Examples
--------
>>> G = nx.Graph()
>>> G.add_nodes_from([1, 2, 3], color="red")
>>> color = nx.get_node_attributes(G, "color")
>>> color[1]
'red'
>>> G.add_node(4)
>>> color = nx.get_node_attributes(G, "color", default="yellow")
>>> color[4]
'yellow'
"""
if default is not None:
return {n: d.get(name, default) for n, d in G.nodes.items()}
return {n: d[name] for n, d in G.nodes.items() if name in d}
| (G, name, default=None) |
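A companion sketch using `set_node_attributes` to write the attribute that is read back here; nodes without the attribute are simply omitted from the result:
>>> G = nx.path_graph(3)
>>> nx.set_node_attributes(G, {0: "red", 2: "blue"}, "color")
>>> nx.get_node_attributes(G, "color")
{0: 'red', 2: 'blue'}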
30,705 | networkx.algorithms.cycles | girth | Returns the girth of the graph.
The girth of a graph is the length of its shortest cycle, or infinity if
the graph is acyclic. The algorithm follows the description given on the
Wikipedia page [1]_, and runs in time O(mn) on a graph with m edges and n
nodes.
Parameters
----------
G : NetworkX Graph
Returns
-------
int or math.inf
Examples
--------
All examples below (except P_5) can easily be checked using Wikipedia,
which has a page for each of these famous graphs.
>>> nx.girth(nx.chvatal_graph())
4
>>> nx.girth(nx.tutte_graph())
4
>>> nx.girth(nx.petersen_graph())
5
>>> nx.girth(nx.heawood_graph())
6
>>> nx.girth(nx.pappus_graph())
6
>>> nx.girth(nx.path_graph(5))
inf
References
----------
.. [1] `Wikipedia: Girth <https://en.wikipedia.org/wiki/Girth_(graph_theory)>`_
| def recursive_simple_cycles(G):
"""Find simple cycles (elementary circuits) of a directed graph.
A `simple cycle`, or `elementary circuit`, is a closed path where
no node appears twice. Two elementary circuits are distinct if they
are not cyclic permutations of each other.
This version uses a recursive algorithm to build a list of cycles.
You should probably use the iterator version called simple_cycles().
Warning: This recursive version uses lots of RAM!
It appears in NetworkX for pedagogical value.
Parameters
----------
G : NetworkX DiGraph
A directed graph
Returns
-------
A list of cycles, where each cycle is represented by a list of nodes
along the cycle.
Example:
>>> edges = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 0), (2, 1), (2, 2)]
>>> G = nx.DiGraph(edges)
>>> nx.recursive_simple_cycles(G)
[[0], [2], [0, 1, 2], [0, 2], [1, 2]]
Notes
-----
The implementation follows pp. 79-80 in [1]_.
The time complexity is $O((n+e)(c+1))$ for $n$ nodes, $e$ edges and $c$
elementary circuits.
References
----------
.. [1] Finding all the elementary circuits of a directed graph.
D. B. Johnson, SIAM Journal on Computing 4, no. 1, 77-84, 1975.
https://doi.org/10.1137/0204007
See Also
--------
simple_cycles, cycle_basis
"""
# Jon Olav Vik, 2010-08-09
def _unblock(thisnode):
"""Recursively unblock and remove nodes from B[thisnode]."""
if blocked[thisnode]:
blocked[thisnode] = False
while B[thisnode]:
_unblock(B[thisnode].pop())
def circuit(thisnode, startnode, component):
closed = False # set to True if elementary path is closed
path.append(thisnode)
blocked[thisnode] = True
for nextnode in component[thisnode]: # direct successors of thisnode
if nextnode == startnode:
result.append(path[:])
closed = True
elif not blocked[nextnode]:
if circuit(nextnode, startnode, component):
closed = True
if closed:
_unblock(thisnode)
else:
for nextnode in component[thisnode]:
if thisnode not in B[nextnode]: # TODO: use set for speedup?
B[nextnode].append(thisnode)
path.pop() # remove thisnode from path
return closed
path = [] # stack of nodes in current path
blocked = defaultdict(bool) # vertex: blocked from search?
B = defaultdict(list) # graph portions that yield no elementary circuit
result = [] # list to accumulate the circuits found
# Johnson's algorithm exclude self cycle edges like (v, v)
# To be backward compatible, we record those cycles in advance
# and then remove from subG
for v in G:
if G.has_edge(v, v):
result.append([v])
G.remove_edge(v, v)
# Johnson's algorithm requires some ordering of the nodes.
# They might not be sortable so we assign an arbitrary ordering.
ordering = dict(zip(G, range(len(G))))
for s in ordering:
# Build the subgraph induced by s and following nodes in the ordering
subgraph = G.subgraph(node for node in G if ordering[node] >= ordering[s])
# Find the strongly connected component in the subgraph
# that contains the least node according to the ordering
strongcomp = nx.strongly_connected_components(subgraph)
mincomp = min(strongcomp, key=lambda ns: min(ordering[n] for n in ns))
component = G.subgraph(mincomp)
if len(component) > 1:
# smallest node in the component according to the ordering
startnode = min(component, key=ordering.__getitem__)
for node in component:
blocked[node] = False
B[node][:] = []
dummy = circuit(startnode, startnode, component)
return result
| (G, *, backend=None, **backend_kwargs) |
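Since the girth of a cycle graph equals its length and any graph containing a triangle has girth 3, two quick sanity checks:
>>> nx.girth(nx.cycle_graph(7))
7
>>> nx.girth(nx.complete_graph(4))
3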
30,706 | networkx.algorithms.efficiency_measures | global_efficiency | Returns the average global efficiency of the graph.
The *efficiency* of a pair of nodes in a graph is the multiplicative
inverse of the shortest path distance between the nodes. The *average
global efficiency* of a graph is the average efficiency of all pairs of
nodes [1]_.
Parameters
----------
G : :class:`networkx.Graph`
An undirected graph for which to compute the average global efficiency.
Returns
-------
float
The average global efficiency of the graph.
Examples
--------
>>> G = nx.Graph([(0, 1), (0, 2), (0, 3), (1, 2), (1, 3)])
>>> round(nx.global_efficiency(G), 12)
0.916666666667
Notes
-----
Edge weights are ignored when computing the shortest path distances.
See also
--------
local_efficiency
References
----------
.. [1] Latora, Vito, and Massimo Marchiori.
"Efficient behavior of small-world networks."
*Physical Review Letters* 87.19 (2001): 198701.
<https://doi.org/10.1103/PhysRevLett.87.198701>
| null | (G, *, backend=None, **backend_kwargs) |
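Because every pair of nodes in a complete graph is at distance one, its average global efficiency is exactly 1, which gives a convenient sanity check:
>>> nx.global_efficiency(nx.complete_graph(5))
1.0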
30,707 | networkx.algorithms.distance_regular | global_parameters | Returns global parameters for a given intersection array.
Given a distance-regular graph G with integers b_i, c_i,i = 0,....,d
such that for any 2 vertices x,y in G at a distance i=d(x,y), there
are exactly c_i neighbors of y at a distance of i-1 from x and b_i
neighbors of y at a distance of i+1 from x.
Thus, a distance regular graph has the global parameters,
[[c_0,a_0,b_0],[c_1,a_1,b_1],......,[c_d,a_d,b_d]] for the
intersection array [b_0,b_1,.....b_{d-1};c_1,c_2,.....c_d]
where a_i+b_i+c_i=k , k= degree of every vertex.
Parameters
----------
b : list
c : list
Returns
-------
iterable
An iterable over three tuples.
Examples
--------
>>> G = nx.dodecahedral_graph()
>>> b, c = nx.intersection_array(G)
>>> list(nx.global_parameters(b, c))
[(0, 0, 3), (1, 0, 2), (1, 1, 1), (1, 1, 1), (2, 0, 1), (3, 0, 0)]
References
----------
.. [1] Weisstein, Eric W. "Global Parameters."
From MathWorld--A Wolfram Web Resource.
http://mathworld.wolfram.com/GlobalParameters.html
See Also
--------
intersection_array
| def global_parameters(b, c):
"""Returns global parameters for a given intersection array.
Given a distance-regular graph G with integers b_i, c_i,i = 0,....,d
such that for any 2 vertices x,y in G at a distance i=d(x,y), there
are exactly c_i neighbors of y at a distance of i-1 from x and b_i
neighbors of y at a distance of i+1 from x.
Thus, a distance regular graph has the global parameters,
[[c_0,a_0,b_0],[c_1,a_1,b_1],......,[c_d,a_d,b_d]] for the
intersection array [b_0,b_1,.....b_{d-1};c_1,c_2,.....c_d]
where a_i+b_i+c_i=k , k= degree of every vertex.
Parameters
----------
b : list
c : list
Returns
-------
iterable
An iterable over three tuples.
Examples
--------
>>> G = nx.dodecahedral_graph()
>>> b, c = nx.intersection_array(G)
>>> list(nx.global_parameters(b, c))
[(0, 0, 3), (1, 0, 2), (1, 1, 1), (1, 1, 1), (2, 0, 1), (3, 0, 0)]
References
----------
.. [1] Weisstein, Eric W. "Global Parameters."
From MathWorld--A Wolfram Web Resource.
http://mathworld.wolfram.com/GlobalParameters.html
See Also
--------
intersection_array
"""
return ((y, b[0] - x - y, x) for x, y in zip(b + [0], [0] + c))
| (b, c) |
30,708 | networkx.algorithms.centrality.reaching | global_reaching_centrality | Returns the global reaching centrality of a directed graph.
The *global reaching centrality* of a weighted directed graph is the
average over all nodes of the difference between the local reaching
centrality of the node and the greatest local reaching centrality of
any node in the graph [1]_. For more information on the local
reaching centrality, see :func:`local_reaching_centrality`.
Informally, the local reaching centrality is the proportion of the
graph that is reachable from the neighbors of the node.
Parameters
----------
G : DiGraph
A networkx DiGraph.
weight : None or string, optional (default=None)
Attribute to use for edge weights. If ``None``, each edge weight
is assumed to be one. A higher weight implies a stronger
connection between nodes and a *shorter* path length.
normalized : bool, optional (default=True)
Whether to normalize the edge weights by the total sum of edge
weights.
Returns
-------
h : float
The global reaching centrality of the graph.
Examples
--------
>>> G = nx.DiGraph()
>>> G.add_edge(1, 2)
>>> G.add_edge(1, 3)
>>> nx.global_reaching_centrality(G)
1.0
>>> G.add_edge(3, 2)
>>> nx.global_reaching_centrality(G)
0.75
See also
--------
local_reaching_centrality
References
----------
.. [1] Mones, Enys, Lilla Vicsek, and Tamás Vicsek.
"Hierarchy Measure for Complex Networks."
*PLoS ONE* 7.3 (2012): e33799.
https://doi.org/10.1371/journal.pone.0033799
| null | (G, weight=None, normalized=True, *, backend=None, **backend_kwargs) |
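A further sketch on a directed path, where reachability shrinks along the path; the value follows from the definition above (local reaching centralities 1, 2/3, 1/3, 0 averaged against their maximum):
>>> G = nx.path_graph(4, create_using=nx.DiGraph)
>>> round(nx.global_reaching_centrality(G), 4)
0.6667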
30,710 | networkx.generators.directed | gn_graph | Returns the growing network (GN) digraph with `n` nodes.
The GN graph is built by adding nodes one at a time with a link to one
previously added node. The target node for the link is chosen with
probability based on degree. The default attachment kernel is a linear
function of the degree of a node.
The graph is always a (directed) tree.
Parameters
----------
n : int
The number of nodes for the generated graph.
kernel : function
The attachment kernel.
create_using : NetworkX graph constructor, optional (default DiGraph)
Graph type to create. If graph instance, then cleared before populated.
seed : integer, random_state, or None (default)
Indicator of random number generation state.
See :ref:`Randomness<randomness>`.
Examples
--------
To create the undirected GN graph, use the :meth:`~DiGraph.to_directed`
method::
>>> D = nx.gn_graph(10) # the GN graph
>>> G = D.to_undirected() # the undirected version
To specify an attachment kernel, use the `kernel` keyword argument::
>>> D = nx.gn_graph(10, kernel=lambda x: x**1.5) # A_k = k^1.5
References
----------
.. [1] P. L. Krapivsky and S. Redner,
Organization of Growing Random Networks,
Phys. Rev. E, 63, 066123, 2001.
| null | (n, kernel=None, create_using=None, seed=None, *, backend=None, **backend_kwargs) |
30,711 | networkx.generators.directed | gnc_graph | Returns the growing network with copying (GNC) digraph with `n` nodes.
The GNC graph is built by adding nodes one at a time with a link to one
previously added node (chosen uniformly at random) and to all of that
node's successors.
Parameters
----------
n : int
The number of nodes for the generated graph.
create_using : NetworkX graph constructor, optional (default DiGraph)
Graph type to create. If graph instance, then cleared before populated.
seed : integer, random_state, or None (default)
Indicator of random number generation state.
See :ref:`Randomness<randomness>`.
References
----------
.. [1] P. L. Krapivsky and S. Redner,
Network Growth by Copying,
Phys. Rev. E, 71, 036118, 2005.
| null | (n, create_using=None, seed=None, *, backend=None, **backend_kwargs) |
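A brief usage sketch; the generated digraph always has exactly `n` nodes, while the copying step typically produces more than `n - 1` edges:
>>> D = nx.gnc_graph(10, seed=1)
>>> D.number_of_nodes()
10
>>> D.is_directed()
True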
30,712 | networkx.generators.random_graphs | gnm_random_graph | Returns a $G_{n,m}$ random graph.
In the $G_{n,m}$ model, a graph is chosen uniformly at random from the set
of all graphs with $n$ nodes and $m$ edges.
This algorithm should be faster than :func:`dense_gnm_random_graph` for
sparse graphs.
Parameters
----------
n : int
The number of nodes.
m : int
The number of edges.
seed : integer, random_state, or None (default)
Indicator of random number generation state.
See :ref:`Randomness<randomness>`.
directed : bool, optional (default=False)
If True return a directed graph
See also
--------
dense_gnm_random_graph
| def dual_barabasi_albert_graph(n, m1, m2, p, seed=None, initial_graph=None):
"""Returns a random graph using dual BarabΓ‘siβAlbert preferential attachment
A graph of $n$ nodes is grown by attaching new nodes each with either $m_1$
edges (with probability $p$) or $m_2$ edges (with probability $1-p$) that
are preferentially attached to existing nodes with high degree.
Parameters
----------
n : int
Number of nodes
m1 : int
Number of edges to link each new node to existing nodes with probability $p$
m2 : int
Number of edges to link each new node to existing nodes with probability $1-p$
p : float
The probability of attaching $m_1$ edges (as opposed to $m_2$ edges)
seed : integer, random_state, or None (default)
Indicator of random number generation state.
See :ref:`Randomness<randomness>`.
initial_graph : Graph or None (default)
Initial network for Barabási–Albert algorithm.
A copy of `initial_graph` is used.
It should be connected for most use cases.
If None, starts from a star graph on max(m1, m2) + 1 nodes.
Returns
-------
G : Graph
Raises
------
NetworkXError
If `m1` and `m2` do not satisfy ``1 <= m1,m2 < n``, or
`p` does not satisfy ``0 <= p <= 1``, or
the initial graph number of nodes m0 does not satisfy m1, m2 <= m0 <= n.
References
----------
.. [1] N. Moshiri "The dual-Barabasi-Albert model", arXiv:1810.10538.
"""
if m1 < 1 or m1 >= n:
raise nx.NetworkXError(
f"Dual BarabΓ‘siβAlbert must have m1 >= 1 and m1 < n, m1 = {m1}, n = {n}"
)
if m2 < 1 or m2 >= n:
raise nx.NetworkXError(
f"Dual BarabΓ‘siβAlbert must have m2 >= 1 and m2 < n, m2 = {m2}, n = {n}"
)
if p < 0 or p > 1:
raise nx.NetworkXError(
f"Dual BarabΓ‘siβAlbert network must have 0 <= p <= 1, p = {p}"
)
# For simplicity, if p == 0 or 1, just return BA
if p == 1:
return barabasi_albert_graph(n, m1, seed)
elif p == 0:
return barabasi_albert_graph(n, m2, seed)
if initial_graph is None:
# Default initial graph: a star graph on max(m1, m2) + 1 nodes
G = star_graph(max(m1, m2))
else:
if len(initial_graph) < max(m1, m2) or len(initial_graph) > n:
raise nx.NetworkXError(
f"BarabΓ‘siβAlbert initial graph must have between "
f"max(m1, m2) = {max(m1, m2)} and n = {n} nodes"
)
G = initial_graph.copy()
# Target nodes for new edges
targets = list(G)
# List of existing nodes, with nodes repeated once for each adjacent edge
repeated_nodes = [n for n, d in G.degree() for _ in range(d)]
# Start adding the remaining nodes.
source = len(G)
while source < n:
# Pick which m to use (m1 or m2)
if seed.random() < p:
m = m1
else:
m = m2
# Now choose m unique nodes from the existing nodes
# Pick uniformly from repeated_nodes (preferential attachment)
targets = _random_subset(repeated_nodes, m, seed)
# Add edges to m nodes from the source.
G.add_edges_from(zip([source] * m, targets))
# Add one node to the list for each new edge just created.
repeated_nodes.extend(targets)
# And the new node "source" has m edges to add to the list.
repeated_nodes.extend([source] * m)
source += 1
return G
| (n, m, seed=None, directed=False, *, backend=None, **backend_kwargs) |
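A short sketch; by construction the graph has exactly `n` nodes and `m` edges whenever `m` does not exceed the maximum possible number of edges:
>>> G = nx.gnm_random_graph(10, 20, seed=42)
>>> G.number_of_nodes(), G.number_of_edges()
(10, 20)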
30,714 | networkx.generators.directed | gnr_graph | Returns the growing network with redirection (GNR) digraph with `n`
nodes and redirection probability `p`.
The GNR graph is built by adding nodes one at a time with a link to one
previously added node. The previous target node is chosen uniformly at
random. With probability `p` the link is instead "redirected" to the
successor node of the target.
The graph is always a (directed) tree.
Parameters
----------
n : int
The number of nodes for the generated graph.
p : float
The redirection probability.
create_using : NetworkX graph constructor, optional (default DiGraph)
Graph type to create. If graph instance, then cleared before populated.
seed : integer, random_state, or None (default)
Indicator of random number generation state.
See :ref:`Randomness<randomness>`.
Examples
--------
To create the undirected GNR graph, use the :meth:`~DiGraph.to_directed`
method::
>>> D = nx.gnr_graph(10, 0.5) # the GNR graph
>>> G = D.to_undirected() # the undirected version
References
----------
.. [1] P. L. Krapivsky and S. Redner,
Organization of Growing Random Networks,
Phys. Rev. E, 63, 066123, 2001.
| null | (n, p, create_using=None, seed=None, *, backend=None, **backend_kwargs) |
30,715 | networkx.algorithms.shortest_paths.weighted | goldberg_radzik | Compute shortest path lengths and predecessors on shortest paths
in weighted graphs.
The algorithm has a running time of $O(mn)$ where $n$ is the number of
nodes and $m$ is the number of edges. It is slower than Dijkstra but
can handle negative edge weights.
Parameters
----------
G : NetworkX graph
The algorithm works for all types of graphs, including directed
graphs and multigraphs.
source: node label
Starting node for path
weight : string or function
If this is a string, then edge weights will be accessed via the
edge attribute with this key (that is, the weight of the edge
joining `u` to `v` will be ``G.edges[u, v][weight]``). If no
such edge attribute exists, the weight of the edge is assumed to
be one.
If this is a function, the weight of an edge is the value
returned by the function. The function must accept exactly three
positional arguments: the two endpoints of an edge and the
dictionary of edge attributes for that edge. The function must
return a number.
Returns
-------
pred, dist : dictionaries
Returns two dictionaries keyed by node to predecessor in the
path and to the distance from the source respectively.
Raises
------
NodeNotFound
If `source` is not in `G`.
NetworkXUnbounded
If the (di)graph contains a negative (di)cycle, the
algorithm raises an exception to indicate the presence of the
negative (di)cycle. Note: any negative weight edge in an
undirected graph is a negative cycle.
As of NetworkX v3.2, a zero weight cycle is no longer
incorrectly reported as a negative weight cycle.
Examples
--------
>>> G = nx.path_graph(5, create_using=nx.DiGraph())
>>> pred, dist = nx.goldberg_radzik(G, 0)
>>> sorted(pred.items())
[(0, None), (1, 0), (2, 1), (3, 2), (4, 3)]
>>> sorted(dist.items())
[(0, 0), (1, 1), (2, 2), (3, 3), (4, 4)]
>>> G = nx.cycle_graph(5, create_using=nx.DiGraph())
>>> G[1][2]["weight"] = -7
>>> nx.goldberg_radzik(G, 0)
Traceback (most recent call last):
...
networkx.exception.NetworkXUnbounded: Negative cycle detected.
Notes
-----
Edge weight attributes must be numerical.
Distances are calculated as sums of weighted edges traversed.
The dictionaries returned only have keys for nodes reachable from
the source.
In the case where the (di)graph is not connected, if a component
not containing the source contains a negative (di)cycle, it
will not be detected.
| def _dijkstra_multisource(
G, sources, weight, pred=None, paths=None, cutoff=None, target=None
):
"""Uses Dijkstra's algorithm to find shortest weighted paths
Parameters
----------
G : NetworkX graph
sources : non-empty iterable of nodes
Starting nodes for paths. If this is just an iterable containing
a single node, then all paths computed by this function will
start from that node. If there are two or more nodes in this
iterable, the computed paths may begin from any one of the start
nodes.
weight: function
Function with (u, v, data) input that returns that edge's weight
or None to indicate a hidden edge
pred: dict of lists, optional(default=None)
dict to store a list of predecessors keyed by that node
If None, predecessors are not stored.
paths: dict, optional (default=None)
dict to store the path list from source to each node, keyed by node.
If None, paths are not stored.
target : node label, optional
Ending node for path. Search is halted when target is found.
cutoff : integer or float, optional
Length (sum of edge weights) at which the search is stopped.
If cutoff is provided, only return paths with summed weight <= cutoff.
Returns
-------
distance : dictionary
A mapping from node to shortest distance to that node from one
of the source nodes.
Raises
------
NodeNotFound
If any of `sources` is not in `G`.
Notes
-----
The optional predecessor and path dictionaries can be accessed by
the caller through the original pred and paths objects passed
as arguments. No need to explicitly return pred or paths.
"""
G_succ = G._adj # For speed-up (and works for both directed and undirected graphs)
push = heappush
pop = heappop
dist = {} # dictionary of final distances
seen = {}
# fringe is heapq with 3-tuples (distance,c,node)
# use the count c to avoid comparing nodes (may not be able to)
c = count()
fringe = []
for source in sources:
seen[source] = 0
push(fringe, (0, next(c), source))
while fringe:
(d, _, v) = pop(fringe)
if v in dist:
continue # already searched this node.
dist[v] = d
if v == target:
break
for u, e in G_succ[v].items():
cost = weight(v, u, e)
if cost is None:
continue
vu_dist = dist[v] + cost
if cutoff is not None:
if vu_dist > cutoff:
continue
if u in dist:
u_dist = dist[u]
if vu_dist < u_dist:
raise ValueError("Contradictory paths found:", "negative weights?")
elif pred is not None and vu_dist == u_dist:
pred[u].append(v)
elif u not in seen or vu_dist < seen[u]:
seen[u] = vu_dist
push(fringe, (vu_dist, next(c), u))
if paths is not None:
paths[u] = paths[v] + [u]
if pred is not None:
pred[u] = [v]
elif vu_dist == seen[u]:
if pred is not None:
pred[u].append(v)
# The optional predecessor and path dictionaries can be accessed
# by the caller via the pred and paths objects passed as arguments.
return dist
| (G, source, weight='weight', *, backend=None, **backend_kwargs) |
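The `_dijkstra_multisource` helper shown above is private; as a minimal illustrative sketch, the same multi-source behaviour can be reached through the public `nx.multi_source_dijkstra` wrapper (assumed here; it is not part of this record):
>>> import networkx as nx
>>> G = nx.path_graph(5)  # 0-1-2-3-4 with unit edge weights
>>> # distance from whichever of the two sources {0, 4} is closer
>>> dist, paths = nx.multi_source_dijkstra(G, sources={0, 4})  # assumed public wrapper
>>> dist[2]
2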
30,716 | networkx.algorithms.flow.gomory_hu | gomory_hu_tree | Returns the Gomory-Hu tree of an undirected graph G.
A Gomory-Hu tree of an undirected graph with capacities is a
weighted tree that represents the minimum s-t cuts for all s-t
pairs in the graph.
It only requires `n-1` minimum cut computations instead of the
obvious `n(n-1)/2`. The tree represents all s-t cuts as the
minimum cut value among any pair of nodes is the minimum edge
weight in the shortest path between the two nodes in the
Gomory-Hu tree.
The Gomory-Hu tree also has the property that removing the
edge with the minimum weight in the shortest path between
any two nodes leaves two connected components that form
a partition of the nodes in G that defines the minimum s-t
cut.
See Examples section below for details.
Parameters
----------
G : NetworkX graph
Undirected graph
capacity : string
Edges of the graph G are expected to have an attribute capacity
that indicates how much flow the edge can support. If this
attribute is not present, the edge is considered to have
infinite capacity. Default value: 'capacity'.
flow_func : function
Function to perform the underlying flow computations. Default value
:func:`edmonds_karp`. This function performs better in sparse graphs
with right tailed degree distributions.
:func:`shortest_augmenting_path` will perform better in denser
graphs.
Returns
-------
Tree : NetworkX graph
A NetworkX graph representing the Gomory-Hu tree of the input graph.
Raises
------
NetworkXNotImplemented
Raised if the input graph is directed.
NetworkXError
Raised if the input graph is an empty Graph.
Examples
--------
>>> G = nx.karate_club_graph()
>>> nx.set_edge_attributes(G, 1, "capacity")
>>> T = nx.gomory_hu_tree(G)
>>> # The value of the minimum cut between any pair
... # of nodes in G is the minimum edge weight in the
... # shortest path between the two nodes in the
... # Gomory-Hu tree.
... def minimum_edge_weight_in_shortest_path(T, u, v):
... path = nx.shortest_path(T, u, v, weight="weight")
... return min((T[u][v]["weight"], (u, v)) for (u, v) in zip(path, path[1:]))
>>> u, v = 0, 33
>>> cut_value, edge = minimum_edge_weight_in_shortest_path(T, u, v)
>>> cut_value
10
>>> nx.minimum_cut_value(G, u, v)
10
>>> # The Gomory-Hu tree also has the property that removing the
... # edge with the minimum weight in the shortest path between
... # any two nodes leaves two connected components that form
... # a partition of the nodes in G that defines the minimum s-t
... # cut.
... cut_value, edge = minimum_edge_weight_in_shortest_path(T, u, v)
>>> T.remove_edge(*edge)
>>> U, V = list(nx.connected_components(T))
>>> # Thus U and V form a partition that defines a minimum cut
... # between u and v in G. You can compute the edge cut set,
... # that is, the set of edges that if removed from G will
... # disconnect u from v in G, with this information:
... cutset = set()
>>> for x, nbrs in ((n, G[n]) for n in U):
... cutset.update((x, y) for y in nbrs if y in V)
>>> # Because we have set the capacities of all edges to 1
... # the cutset contains ten edges
... len(cutset)
10
>>> # You can use any maximum flow algorithm for the underlying
... # flow computations using the argument flow_func
... from networkx.algorithms import flow
>>> T = nx.gomory_hu_tree(G, flow_func=flow.boykov_kolmogorov)
>>> cut_value, edge = minimum_edge_weight_in_shortest_path(T, u, v)
>>> cut_value
10
>>> nx.minimum_cut_value(G, u, v, flow_func=flow.boykov_kolmogorov)
10
Notes
-----
This implementation is based on Gusfield's approach [1]_ to computing
Gomory-Hu trees, which does not require node contractions and has
the same computational complexity as the original method.
See also
--------
:func:`minimum_cut`
:func:`maximum_flow`
References
----------
.. [1] Gusfield D: Very simple methods for all pairs network flow analysis.
SIAM J Comput 19(1):143-155, 1990.
| null | (G, capacity='capacity', flow_func=None, *, backend=None, **backend_kwargs) |
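A compact, illustrative sketch of the core property on a path graph with unit capacities (every minimum cut here has value 1); this only condenses the longer karate-club example above:
>>> import networkx as nx
>>> G = nx.path_graph(4)
>>> nx.set_edge_attributes(G, 1, "capacity")
>>> T = nx.gomory_hu_tree(G)
>>> sorted(T) == sorted(G)  # the tree is on the same node set as G
True
>>> min(d["weight"] for _, _, d in T.edges(data=True))  # tree weights encode all-pairs min cuts
1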
30,717 | networkx.algorithms.link_analysis.pagerank_alg | google_matrix | Returns the Google matrix of the graph.
Parameters
----------
G : graph
A NetworkX graph. Undirected graphs will be converted to a directed
graph with two directed edges for each undirected edge.
alpha : float
The damping factor.
personalization: dict, optional
The "personalization vector", given as a dictionary keyed by a
subset of the graph nodes, with a personalization value for each of them.
At least one personalization value must be non-zero.
Nodes missing from the dictionary are given a personalization value of zero.
If `personalization` is None, a uniform distribution is used by default.
nodelist : list, optional
The rows and columns are ordered according to the nodes in nodelist.
If nodelist is None, then the ordering is produced by G.nodes().
weight : key, optional
Edge data key to use as weight. If None weights are set to 1.
dangling: dict, optional
The outedges to be assigned to any "dangling" nodes, i.e., nodes without
any outedges. The dict key is the node the outedge points to and the dict
value is the weight of that outedge. By default, dangling nodes are given
outedges according to the personalization vector (uniform if not
specified). This must be selected to result in an irreducible transition
matrix (see notes below). It may be common to have the dangling dict to
be the same as the personalization dict.
Returns
-------
A : 2D NumPy ndarray
Google matrix of the graph
Notes
-----
The array returned represents the transition matrix that describes the
Markov chain used in PageRank. For PageRank to converge to a unique
solution (i.e., a unique stationary distribution in a Markov chain), the
transition matrix must be irreducible. In other words, it must be that
there exists a path between every pair of nodes in the graph, or else there
is the potential of "rank sinks."
This implementation works with Multi(Di)Graphs. For multigraphs the
weight between two nodes is set to be the sum of all edge weights
between those nodes.
See Also
--------
pagerank
| null | (G, alpha=0.85, personalization=None, nodelist=None, weight='weight', dangling=None, *, backend=None, **backend_kwargs) |
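An illustrative check (not part of the original record) that the returned array is row-stochastic, i.e. each row of the Google matrix is a probability distribution:
>>> import networkx as nx
>>> G = nx.DiGraph([(0, 1), (1, 2), (2, 0), (2, 1)])
>>> M = nx.google_matrix(G, alpha=0.85)
>>> M.shape
(3, 3)
>>> bool((M.sum(axis=1).round(10) == 1.0).all())  # every row sums to 1
True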
30,720 | networkx.generators.atlas | graph_atlas | Returns graph number `i` from the Graph Atlas.
For more information, see :func:`.graph_atlas_g`.
Parameters
----------
i : int
The index of the graph from the atlas to get. The graph at index
0 is assumed to be the null graph.
Returns
-------
networkx.Graph
The graph at index *i* in the Graph Atlas.
See also
--------
graph_atlas_g
Notes
-----
The time required by this function increases linearly with the
argument `i`, since it reads a large file sequentially in order to
generate the graph [1]_.
References
----------
.. [1] Ronald C. Read and Robin J. Wilson, *An Atlas of Graphs*.
Oxford University Press, 1998.
| null | (i, *, backend=None, **backend_kwargs) |
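A small sketch that relies only on facts stated in the docstring (index 0 is the null graph, and every atlas graph has at most seven nodes):
>>> import networkx as nx
>>> G = nx.graph_atlas(0)  # the null graph
>>> G.number_of_nodes(), G.number_of_edges()
(0, 0)
>>> nx.graph_atlas(100).number_of_nodes() <= 7
True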
30,721 | networkx.generators.atlas | graph_atlas_g | Returns the list of all graphs with up to seven nodes named in the
Graph Atlas.
The graphs are listed in increasing order by
1. number of nodes,
2. number of edges,
3. degree sequence (for example 111223 < 112222),
4. number of automorphisms,
in that order, with three exceptions as described in the *Notes*
section below. This causes the list to correspond with the index of
the graphs in the Graph Atlas [atlas]_, with the first graph,
``G[0]``, being the null graph.
Returns
-------
list
A list of :class:`~networkx.Graph` objects, the one at index *i*
corresponding to the graph *i* in the Graph Atlas.
See also
--------
graph_atlas
Notes
-----
This function may be expensive in both time and space, since it
reads a large file sequentially in order to populate the list.
Although the NetworkX atlas functions match the order of graphs
given in the "Atlas of Graphs" book, there are (at least) three
errors in the ordering described in the book. The following three
pairs of graphs violate the lexicographically nondecreasing sorted
degree sequence rule:
- graphs 55 and 56 with degree sequences 001111 and 000112,
- graphs 1007 and 1008 with degree sequences 3333444 and 3333336,
- graphs 1012 and 1213 with degree sequences 1244555 and 1244456.
References
----------
.. [atlas] Ronald C. Read and Robin J. Wilson,
*An Atlas of Graphs*.
Oxford University Press, 1998.
| null | (*, backend=None, **backend_kwargs) |
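A minimal sketch; the count below assumes the usual tally of 1253 graphs on at most seven nodes, including the null graph:
>>> import networkx as nx
>>> atlas = nx.graph_atlas_g()
>>> len(atlas)  # assumed total number of atlas graphs
1253
>>> atlas[0].number_of_nodes()  # the list starts with the null graph
0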
30,722 | networkx.algorithms.similarity | graph_edit_distance | Returns GED (graph edit distance) between graphs G1 and G2.
Graph edit distance is a graph similarity measure analogous to
Levenshtein distance for strings. It is defined as the minimum cost
of an edit path (a sequence of node and edge edit operations)
transforming graph G1 into a graph isomorphic to G2.
Parameters
----------
G1, G2: graphs
The two graphs G1 and G2 must be of the same type.
node_match : callable
A function that returns True if node n1 in G1 and n2 in G2
should be considered equal during matching.
The function will be called like
node_match(G1.nodes[n1], G2.nodes[n2]).
That is, the function will receive the node attribute
dictionaries for n1 and n2 as inputs.
Ignored if node_subst_cost is specified. If neither
node_match nor node_subst_cost are specified then node
attributes are not considered.
edge_match : callable
A function that returns True if the edge attribute dictionaries
for the pair of nodes (u1, v1) in G1 and (u2, v2) in G2 should
be considered equal during matching.
The function will be called like
edge_match(G1[u1][v1], G2[u2][v2]).
That is, the function will receive the edge attribute
dictionaries of the edges under consideration.
Ignored if edge_subst_cost is specified. If neither
edge_match nor edge_subst_cost are specified then edge
attributes are not considered.
node_subst_cost, node_del_cost, node_ins_cost : callable
Functions that return the costs of node substitution, node
deletion, and node insertion, respectively.
The functions will be called like
node_subst_cost(G1.nodes[n1], G2.nodes[n2]),
node_del_cost(G1.nodes[n1]),
node_ins_cost(G2.nodes[n2]).
That is, the functions will receive the node attribute
dictionaries as inputs. The functions are expected to return
positive numeric values.
Function node_subst_cost overrides node_match if specified.
If neither node_match nor node_subst_cost are specified then
default node substitution cost of 0 is used (node attributes
are not considered during matching).
If node_del_cost is not specified then default node deletion
cost of 1 is used. If node_ins_cost is not specified then
default node insertion cost of 1 is used.
edge_subst_cost, edge_del_cost, edge_ins_cost : callable
Functions that return the costs of edge substitution, edge
deletion, and edge insertion, respectively.
The functions will be called like
edge_subst_cost(G1[u1][v1], G2[u2][v2]),
edge_del_cost(G1[u1][v1]),
edge_ins_cost(G2[u2][v2]).
That is, the functions will receive the edge attribute
dictionaries as inputs. The functions are expected to return
positive numeric values.
Function edge_subst_cost overrides edge_match if specified.
If neither edge_match nor edge_subst_cost are specified then
default edge substitution cost of 0 is used (edge attributes
are not considered during matching).
If edge_del_cost is not specified then default edge deletion
cost of 1 is used. If edge_ins_cost is not specified then
default edge insertion cost of 1 is used.
roots : 2-tuple
Tuple where first element is a node in G1 and the second
is a node in G2.
These nodes are forced to be matched in the comparison to
allow comparison between rooted graphs.
upper_bound : numeric
Maximum edit distance to consider. Return None if no edit
distance under or equal to upper_bound exists.
timeout : numeric
Maximum number of seconds to execute.
After timeout is met, the current best GED is returned.
Examples
--------
>>> G1 = nx.cycle_graph(6)
>>> G2 = nx.wheel_graph(7)
>>> nx.graph_edit_distance(G1, G2)
7.0
>>> G1 = nx.star_graph(5)
>>> G2 = nx.star_graph(5)
>>> nx.graph_edit_distance(G1, G2, roots=(0, 0))
0.0
>>> nx.graph_edit_distance(G1, G2, roots=(1, 0))
8.0
See Also
--------
optimal_edit_paths, optimize_graph_edit_distance,
is_isomorphic: test for graph edit distance of 0
References
----------
.. [1] Zeina Abu-Aisheh, Romain Raveaux, Jean-Yves Ramel, Patrick
Martineau. An Exact Graph Edit Distance Algorithm for Solving
Pattern Recognition Problems. 4th International Conference on
Pattern Recognition Applications and Methods 2015, Jan 2015,
Lisbon, Portugal. 2015,
<10.5220/0005209202710278>. <hal-01168816>
https://hal.archives-ouvertes.fr/hal-01168816
| def optimize_edit_paths(
G1,
G2,
node_match=None,
edge_match=None,
node_subst_cost=None,
node_del_cost=None,
node_ins_cost=None,
edge_subst_cost=None,
edge_del_cost=None,
edge_ins_cost=None,
upper_bound=None,
strictly_decreasing=True,
roots=None,
timeout=None,
):
"""GED (graph edit distance) calculation: advanced interface.
Graph edit path is a sequence of node and edge edit operations
transforming graph G1 to graph isomorphic to G2. Edit operations
include substitutions, deletions, and insertions.
Graph edit distance is defined as minimum cost of edit path.
Parameters
----------
G1, G2: graphs
The two graphs G1 and G2 must be of the same type.
node_match : callable
A function that returns True if node n1 in G1 and n2 in G2
should be considered equal during matching.
The function will be called like
node_match(G1.nodes[n1], G2.nodes[n2]).
That is, the function will receive the node attribute
dictionaries for n1 and n2 as inputs.
Ignored if node_subst_cost is specified. If neither
node_match nor node_subst_cost are specified then node
attributes are not considered.
edge_match : callable
A function that returns True if the edge attribute dictionaries
for the pair of nodes (u1, v1) in G1 and (u2, v2) in G2 should
be considered equal during matching.
The function will be called like
edge_match(G1[u1][v1], G2[u2][v2]).
That is, the function will receive the edge attribute
dictionaries of the edges under consideration.
Ignored if edge_subst_cost is specified. If neither
edge_match nor edge_subst_cost are specified then edge
attributes are not considered.
node_subst_cost, node_del_cost, node_ins_cost : callable
Functions that return the costs of node substitution, node
deletion, and node insertion, respectively.
The functions will be called like
node_subst_cost(G1.nodes[n1], G2.nodes[n2]),
node_del_cost(G1.nodes[n1]),
node_ins_cost(G2.nodes[n2]).
That is, the functions will receive the node attribute
dictionaries as inputs. The functions are expected to return
positive numeric values.
Function node_subst_cost overrides node_match if specified.
If neither node_match nor node_subst_cost are specified then
default node substitution cost of 0 is used (node attributes
are not considered during matching).
If node_del_cost is not specified then default node deletion
cost of 1 is used. If node_ins_cost is not specified then
default node insertion cost of 1 is used.
edge_subst_cost, edge_del_cost, edge_ins_cost : callable
Functions that return the costs of edge substitution, edge
deletion, and edge insertion, respectively.
The functions will be called like
edge_subst_cost(G1[u1][v1], G2[u2][v2]),
edge_del_cost(G1[u1][v1]),
edge_ins_cost(G2[u2][v2]).
That is, the functions will receive the edge attribute
dictionaries as inputs. The functions are expected to return
positive numeric values.
Function edge_subst_cost overrides edge_match if specified.
If neither edge_match nor edge_subst_cost are specified then
default edge substitution cost of 0 is used (edge attributes
are not considered during matching).
If edge_del_cost is not specified then default edge deletion
cost of 1 is used. If edge_ins_cost is not specified then
default edge insertion cost of 1 is used.
upper_bound : numeric
Maximum edit distance to consider.
strictly_decreasing : bool
If True, return consecutive approximations of strictly
decreasing cost. Otherwise, return all edit paths of cost
less than or equal to the previous minimum cost.
roots : 2-tuple
Tuple where first element is a node in G1 and the second
is a node in G2.
These nodes are forced to be matched in the comparison to
allow comparison between rooted graphs.
timeout : numeric
Maximum number of seconds to execute.
After timeout is met, the current best GED is returned.
Returns
-------
Generator of tuples (node_edit_path, edge_edit_path, cost)
node_edit_path : list of tuples (u, v)
edge_edit_path : list of tuples ((u1, v1), (u2, v2))
cost : numeric
See Also
--------
graph_edit_distance, optimize_graph_edit_distance, optimal_edit_paths
References
----------
.. [1] Zeina Abu-Aisheh, Romain Raveaux, Jean-Yves Ramel, Patrick
Martineau. An Exact Graph Edit Distance Algorithm for Solving
Pattern Recognition Problems. 4th International Conference on
Pattern Recognition Applications and Methods 2015, Jan 2015,
Lisbon, Portugal. 2015,
<10.5220/0005209202710278>. <hal-01168816>
https://hal.archives-ouvertes.fr/hal-01168816
"""
# TODO: support DiGraph
import numpy as np
import scipy as sp
@dataclass
class CostMatrix:
C: ...
lsa_row_ind: ...
lsa_col_ind: ...
ls: ...
def make_CostMatrix(C, m, n):
# assert(C.shape == (m + n, m + n))
lsa_row_ind, lsa_col_ind = sp.optimize.linear_sum_assignment(C)
# Fixup dummy assignments:
# each substitution i<->j should have dummy assignment m+j<->n+i
# NOTE: fast reduce of Cv relies on it
# assert len(lsa_row_ind) == len(lsa_col_ind)
indexes = zip(range(len(lsa_row_ind)), lsa_row_ind, lsa_col_ind)
subst_ind = [k for k, i, j in indexes if i < m and j < n]
indexes = zip(range(len(lsa_row_ind)), lsa_row_ind, lsa_col_ind)
dummy_ind = [k for k, i, j in indexes if i >= m and j >= n]
# assert len(subst_ind) == len(dummy_ind)
lsa_row_ind[dummy_ind] = lsa_col_ind[subst_ind] + m
lsa_col_ind[dummy_ind] = lsa_row_ind[subst_ind] + n
return CostMatrix(
C, lsa_row_ind, lsa_col_ind, C[lsa_row_ind, lsa_col_ind].sum()
)
def extract_C(C, i, j, m, n):
# assert(C.shape == (m + n, m + n))
row_ind = [k in i or k - m in j for k in range(m + n)]
col_ind = [k in j or k - n in i for k in range(m + n)]
return C[row_ind, :][:, col_ind]
def reduce_C(C, i, j, m, n):
# assert(C.shape == (m + n, m + n))
row_ind = [k not in i and k - m not in j for k in range(m + n)]
col_ind = [k not in j and k - n not in i for k in range(m + n)]
return C[row_ind, :][:, col_ind]
def reduce_ind(ind, i):
# assert set(ind) == set(range(len(ind)))
rind = ind[[k not in i for k in ind]]
for k in set(i):
rind[rind >= k] -= 1
return rind
def match_edges(u, v, pending_g, pending_h, Ce, matched_uv=None):
"""
Parameters:
u, v: matched vertices, u=None or v=None for
deletion/insertion
pending_g, pending_h: lists of edges not yet mapped
Ce: CostMatrix of pending edge mappings
matched_uv: partial vertex edit path
list of tuples (u, v) of previously matched vertex
mappings u<->v, u=None or v=None for
deletion/insertion
Returns:
list of (i, j): indices of edge mappings g<->h
localCe: local CostMatrix of edge mappings
(basically submatrix of Ce at cross of rows i, cols j)
"""
M = len(pending_g)
N = len(pending_h)
# assert Ce.C.shape == (M + N, M + N)
# only attempt to match edges after one node match has been made
# this will stop self-edges on the first node being automatically deleted
# even when a substitution is the better option
if matched_uv is None or len(matched_uv) == 0:
g_ind = []
h_ind = []
else:
g_ind = [
i
for i in range(M)
if pending_g[i][:2] == (u, u)
or any(
pending_g[i][:2] in ((p, u), (u, p), (p, p)) for p, q in matched_uv
)
]
h_ind = [
j
for j in range(N)
if pending_h[j][:2] == (v, v)
or any(
pending_h[j][:2] in ((q, v), (v, q), (q, q)) for p, q in matched_uv
)
]
m = len(g_ind)
n = len(h_ind)
if m or n:
C = extract_C(Ce.C, g_ind, h_ind, M, N)
# assert C.shape == (m + n, m + n)
# Forbid structurally invalid matches
# NOTE: inf remembered from Ce construction
for k, i in enumerate(g_ind):
g = pending_g[i][:2]
for l, j in enumerate(h_ind):
h = pending_h[j][:2]
if nx.is_directed(G1) or nx.is_directed(G2):
if any(
g == (p, u) and h == (q, v) or g == (u, p) and h == (v, q)
for p, q in matched_uv
):
continue
else:
if any(
g in ((p, u), (u, p)) and h in ((q, v), (v, q))
for p, q in matched_uv
):
continue
if g == (u, u) or any(g == (p, p) for p, q in matched_uv):
continue
if h == (v, v) or any(h == (q, q) for p, q in matched_uv):
continue
C[k, l] = inf
localCe = make_CostMatrix(C, m, n)
ij = [
(
g_ind[k] if k < m else M + h_ind[l],
h_ind[l] if l < n else N + g_ind[k],
)
for k, l in zip(localCe.lsa_row_ind, localCe.lsa_col_ind)
if k < m or l < n
]
else:
ij = []
localCe = CostMatrix(np.empty((0, 0)), [], [], 0)
return ij, localCe
def reduce_Ce(Ce, ij, m, n):
if len(ij):
i, j = zip(*ij)
m_i = m - sum(1 for t in i if t < m)
n_j = n - sum(1 for t in j if t < n)
return make_CostMatrix(reduce_C(Ce.C, i, j, m, n), m_i, n_j)
return Ce
def get_edit_ops(
matched_uv, pending_u, pending_v, Cv, pending_g, pending_h, Ce, matched_cost
):
"""
Parameters:
matched_uv: partial vertex edit path
list of tuples (u, v) of vertex mappings u<->v,
u=None or v=None for deletion/insertion
pending_u, pending_v: lists of vertices not yet mapped
Cv: CostMatrix of pending vertex mappings
pending_g, pending_h: lists of edges not yet mapped
Ce: CostMatrix of pending edge mappings
matched_cost: cost of partial edit path
Returns:
sequence of
(i, j): indices of vertex mapping u<->v
Cv_ij: reduced CostMatrix of pending vertex mappings
(basically Cv with row i, col j removed)
list of (x, y): indices of edge mappings g<->h
Ce_xy: reduced CostMatrix of pending edge mappings
(basically Ce with rows x, cols y removed)
cost: total cost of edit operation
NOTE: most promising ops first
"""
m = len(pending_u)
n = len(pending_v)
# assert Cv.C.shape == (m + n, m + n)
# 1) a vertex mapping from optimal linear sum assignment
i, j = min(
(k, l) for k, l in zip(Cv.lsa_row_ind, Cv.lsa_col_ind) if k < m or l < n
)
xy, localCe = match_edges(
pending_u[i] if i < m else None,
pending_v[j] if j < n else None,
pending_g,
pending_h,
Ce,
matched_uv,
)
Ce_xy = reduce_Ce(Ce, xy, len(pending_g), len(pending_h))
# assert Ce.ls <= localCe.ls + Ce_xy.ls
if prune(matched_cost + Cv.ls + localCe.ls + Ce_xy.ls):
pass
else:
# get reduced Cv efficiently
Cv_ij = CostMatrix(
reduce_C(Cv.C, (i,), (j,), m, n),
reduce_ind(Cv.lsa_row_ind, (i, m + j)),
reduce_ind(Cv.lsa_col_ind, (j, n + i)),
Cv.ls - Cv.C[i, j],
)
yield (i, j), Cv_ij, xy, Ce_xy, Cv.C[i, j] + localCe.ls
# 2) other candidates, sorted by lower-bound cost estimate
other = []
fixed_i, fixed_j = i, j
if m <= n:
candidates = (
(t, fixed_j)
for t in range(m + n)
if t != fixed_i and (t < m or t == m + fixed_j)
)
else:
candidates = (
(fixed_i, t)
for t in range(m + n)
if t != fixed_j and (t < n or t == n + fixed_i)
)
for i, j in candidates:
if prune(matched_cost + Cv.C[i, j] + Ce.ls):
continue
Cv_ij = make_CostMatrix(
reduce_C(Cv.C, (i,), (j,), m, n),
m - 1 if i < m else m,
n - 1 if j < n else n,
)
# assert Cv.ls <= Cv.C[i, j] + Cv_ij.ls
if prune(matched_cost + Cv.C[i, j] + Cv_ij.ls + Ce.ls):
continue
xy, localCe = match_edges(
pending_u[i] if i < m else None,
pending_v[j] if j < n else None,
pending_g,
pending_h,
Ce,
matched_uv,
)
if prune(matched_cost + Cv.C[i, j] + Cv_ij.ls + localCe.ls):
continue
Ce_xy = reduce_Ce(Ce, xy, len(pending_g), len(pending_h))
# assert Ce.ls <= localCe.ls + Ce_xy.ls
if prune(matched_cost + Cv.C[i, j] + Cv_ij.ls + localCe.ls + Ce_xy.ls):
continue
other.append(((i, j), Cv_ij, xy, Ce_xy, Cv.C[i, j] + localCe.ls))
yield from sorted(other, key=lambda t: t[4] + t[1].ls + t[3].ls)
def get_edit_paths(
matched_uv,
pending_u,
pending_v,
Cv,
matched_gh,
pending_g,
pending_h,
Ce,
matched_cost,
):
"""
Parameters:
matched_uv: partial vertex edit path
list of tuples (u, v) of vertex mappings u<->v,
u=None or v=None for deletion/insertion
pending_u, pending_v: lists of vertices not yet mapped
Cv: CostMatrix of pending vertex mappings
matched_gh: partial edge edit path
list of tuples (g, h) of edge mappings g<->h,
g=None or h=None for deletion/insertion
pending_g, pending_h: lists of edges not yet mapped
Ce: CostMatrix of pending edge mappings
matched_cost: cost of partial edit path
Returns:
sequence of (vertex_path, edge_path, cost)
vertex_path: complete vertex edit path
list of tuples (u, v) of vertex mappings u<->v,
u=None or v=None for deletion/insertion
edge_path: complete edge edit path
list of tuples (g, h) of edge mappings g<->h,
g=None or h=None for deletion/insertion
cost: total cost of edit path
NOTE: path costs are non-increasing
"""
# debug_print('matched-uv:', matched_uv)
# debug_print('matched-gh:', matched_gh)
# debug_print('matched-cost:', matched_cost)
# debug_print('pending-u:', pending_u)
# debug_print('pending-v:', pending_v)
# debug_print(Cv.C)
# assert list(sorted(G1.nodes)) == list(sorted(list(u for u, v in matched_uv if u is not None) + pending_u))
# assert list(sorted(G2.nodes)) == list(sorted(list(v for u, v in matched_uv if v is not None) + pending_v))
# debug_print('pending-g:', pending_g)
# debug_print('pending-h:', pending_h)
# debug_print(Ce.C)
# assert list(sorted(G1.edges)) == list(sorted(list(g for g, h in matched_gh if g is not None) + pending_g))
# assert list(sorted(G2.edges)) == list(sorted(list(h for g, h in matched_gh if h is not None) + pending_h))
# debug_print()
if prune(matched_cost + Cv.ls + Ce.ls):
return
if not max(len(pending_u), len(pending_v)):
# assert not len(pending_g)
# assert not len(pending_h)
# path completed!
# assert matched_cost <= maxcost_value
nonlocal maxcost_value
maxcost_value = min(maxcost_value, matched_cost)
yield matched_uv, matched_gh, matched_cost
else:
edit_ops = get_edit_ops(
matched_uv,
pending_u,
pending_v,
Cv,
pending_g,
pending_h,
Ce,
matched_cost,
)
for ij, Cv_ij, xy, Ce_xy, edit_cost in edit_ops:
i, j = ij
# assert Cv.C[i, j] + sum(Ce.C[t] for t in xy) == edit_cost
if prune(matched_cost + edit_cost + Cv_ij.ls + Ce_xy.ls):
continue
# dive deeper
u = pending_u.pop(i) if i < len(pending_u) else None
v = pending_v.pop(j) if j < len(pending_v) else None
matched_uv.append((u, v))
for x, y in xy:
len_g = len(pending_g)
len_h = len(pending_h)
matched_gh.append(
(
pending_g[x] if x < len_g else None,
pending_h[y] if y < len_h else None,
)
)
sortedx = sorted(x for x, y in xy)
sortedy = sorted(y for x, y in xy)
G = [
(pending_g.pop(x) if x < len(pending_g) else None)
for x in reversed(sortedx)
]
H = [
(pending_h.pop(y) if y < len(pending_h) else None)
for y in reversed(sortedy)
]
yield from get_edit_paths(
matched_uv,
pending_u,
pending_v,
Cv_ij,
matched_gh,
pending_g,
pending_h,
Ce_xy,
matched_cost + edit_cost,
)
# backtrack
if u is not None:
pending_u.insert(i, u)
if v is not None:
pending_v.insert(j, v)
matched_uv.pop()
for x, g in zip(sortedx, reversed(G)):
if g is not None:
pending_g.insert(x, g)
for y, h in zip(sortedy, reversed(H)):
if h is not None:
pending_h.insert(y, h)
for _ in xy:
matched_gh.pop()
# Initialization
pending_u = list(G1.nodes)
pending_v = list(G2.nodes)
initial_cost = 0
if roots:
root_u, root_v = roots
if root_u not in pending_u or root_v not in pending_v:
raise nx.NodeNotFound("Root node not in graph.")
# remove roots from pending
pending_u.remove(root_u)
pending_v.remove(root_v)
# cost matrix of vertex mappings
m = len(pending_u)
n = len(pending_v)
C = np.zeros((m + n, m + n))
if node_subst_cost:
C[0:m, 0:n] = np.array(
[
node_subst_cost(G1.nodes[u], G2.nodes[v])
for u in pending_u
for v in pending_v
]
).reshape(m, n)
if roots:
initial_cost = node_subst_cost(G1.nodes[root_u], G2.nodes[root_v])
elif node_match:
C[0:m, 0:n] = np.array(
[
1 - int(node_match(G1.nodes[u], G2.nodes[v]))
for u in pending_u
for v in pending_v
]
).reshape(m, n)
if roots:
initial_cost = 1 - node_match(G1.nodes[root_u], G2.nodes[root_v])
else:
# all zeroes
pass
# assert not min(m, n) or C[0:m, 0:n].min() >= 0
if node_del_cost:
del_costs = [node_del_cost(G1.nodes[u]) for u in pending_u]
else:
del_costs = [1] * len(pending_u)
# assert not m or min(del_costs) >= 0
if node_ins_cost:
ins_costs = [node_ins_cost(G2.nodes[v]) for v in pending_v]
else:
ins_costs = [1] * len(pending_v)
# assert not n or min(ins_costs) >= 0
inf = C[0:m, 0:n].sum() + sum(del_costs) + sum(ins_costs) + 1
C[0:m, n : n + m] = np.array(
[del_costs[i] if i == j else inf for i in range(m) for j in range(m)]
).reshape(m, m)
C[m : m + n, 0:n] = np.array(
[ins_costs[i] if i == j else inf for i in range(n) for j in range(n)]
).reshape(n, n)
Cv = make_CostMatrix(C, m, n)
# debug_print(f"Cv: {m} x {n}")
# debug_print(Cv.C)
pending_g = list(G1.edges)
pending_h = list(G2.edges)
# cost matrix of edge mappings
m = len(pending_g)
n = len(pending_h)
C = np.zeros((m + n, m + n))
if edge_subst_cost:
C[0:m, 0:n] = np.array(
[
edge_subst_cost(G1.edges[g], G2.edges[h])
for g in pending_g
for h in pending_h
]
).reshape(m, n)
elif edge_match:
C[0:m, 0:n] = np.array(
[
1 - int(edge_match(G1.edges[g], G2.edges[h]))
for g in pending_g
for h in pending_h
]
).reshape(m, n)
else:
# all zeroes
pass
# assert not min(m, n) or C[0:m, 0:n].min() >= 0
if edge_del_cost:
del_costs = [edge_del_cost(G1.edges[g]) for g in pending_g]
else:
del_costs = [1] * len(pending_g)
# assert not m or min(del_costs) >= 0
if edge_ins_cost:
ins_costs = [edge_ins_cost(G2.edges[h]) for h in pending_h]
else:
ins_costs = [1] * len(pending_h)
# assert not n or min(ins_costs) >= 0
inf = C[0:m, 0:n].sum() + sum(del_costs) + sum(ins_costs) + 1
C[0:m, n : n + m] = np.array(
[del_costs[i] if i == j else inf for i in range(m) for j in range(m)]
).reshape(m, m)
C[m : m + n, 0:n] = np.array(
[ins_costs[i] if i == j else inf for i in range(n) for j in range(n)]
).reshape(n, n)
Ce = make_CostMatrix(C, m, n)
# debug_print(f'Ce: {m} x {n}')
# debug_print(Ce.C)
# debug_print()
maxcost_value = Cv.C.sum() + Ce.C.sum() + 1
if timeout is not None:
if timeout <= 0:
raise nx.NetworkXError("Timeout value must be greater than 0")
start = time.perf_counter()
def prune(cost):
if timeout is not None:
if time.perf_counter() - start > timeout:
return True
if upper_bound is not None:
if cost > upper_bound:
return True
if cost > maxcost_value:
return True
if strictly_decreasing and cost >= maxcost_value:
return True
return False
# Now go!
done_uv = [] if roots is None else [roots]
for vertex_path, edge_path, cost in get_edit_paths(
done_uv, pending_u, pending_v, Cv, [], pending_g, pending_h, Ce, initial_cost
):
# assert sorted(G1.nodes) == sorted(u for u, v in vertex_path if u is not None)
# assert sorted(G2.nodes) == sorted(v for u, v in vertex_path if v is not None)
# assert sorted(G1.edges) == sorted(g for g, h in edge_path if g is not None)
# assert sorted(G2.edges) == sorted(h for g, h in edge_path if h is not None)
# print(vertex_path, edge_path, cost, file = sys.stderr)
# assert cost == maxcost_value
yield list(vertex_path), list(edge_path), float(cost)
| (G1, G2, node_match=None, edge_match=None, node_subst_cost=None, node_del_cost=None, node_ins_cost=None, edge_subst_cost=None, edge_del_cost=None, edge_ins_cost=None, roots=None, upper_bound=None, timeout=None, *, backend=None, **backend_kwargs) |
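A hedged sketch of the `optimize_edit_paths` generator interface defined above: the yielded costs are non-increasing, so the last value is the best edit distance found.
>>> import networkx as nx
>>> G1, G2 = nx.cycle_graph(4), nx.path_graph(4)
>>> best = None
>>> for node_path, edge_path, cost in nx.optimize_edit_paths(G1, G2):
...     best = cost  # keep the most recent (cheapest so far) approximation
>>> best  # deleting one edge turns the 4-cycle into the 4-path
1.0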
30,728 | networkx.algorithms.coloring.greedy_coloring | greedy_color | Color a graph using various strategies of greedy graph coloring.
Attempts to color a graph using as few colors as possible, where no
neighbors of a node can have same color as the node itself. The
given strategy determines the order in which nodes are colored.
The strategies are described in [1]_, and smallest-last is based on
[2]_.
Parameters
----------
G : NetworkX graph
strategy : string or function(G, colors)
A function (or a string representing a function) that provides
the coloring strategy, by returning nodes in the ordering they
should be colored. ``G`` is the graph, and ``colors`` is a
dictionary of the currently assigned colors, keyed by nodes. The
function must return an iterable over all the nodes in ``G``.
If the strategy function is an iterator generator (that is, a
function with ``yield`` statements), keep in mind that the
``colors`` dictionary will be updated after each ``yield``, since
this function chooses colors greedily.
If ``strategy`` is a string, it must be one of the following,
each of which represents one of the built-in strategy functions.
* ``'largest_first'``
* ``'random_sequential'``
* ``'smallest_last'``
* ``'independent_set'``
* ``'connected_sequential_bfs'``
* ``'connected_sequential_dfs'``
* ``'connected_sequential'`` (alias for the previous strategy)
* ``'saturation_largest_first'``
* ``'DSATUR'`` (alias for the previous strategy)
interchange: bool
Will use the color interchange algorithm described by [3]_ if set
to ``True``.
Note that ``saturation_largest_first`` and ``independent_set``
do not work with interchange. Furthermore, if you use
interchange with your own strategy function, you cannot rely
on the values in the ``colors`` argument.
Returns
-------
A dictionary with keys representing nodes and values representing
corresponding coloring.
Examples
--------
>>> G = nx.cycle_graph(4)
>>> d = nx.coloring.greedy_color(G, strategy="largest_first")
>>> d in [{0: 0, 1: 1, 2: 0, 3: 1}, {0: 1, 1: 0, 2: 1, 3: 0}]
True
Raises
------
NetworkXPointlessConcept
If ``strategy`` is ``saturation_largest_first`` or
``independent_set`` and ``interchange`` is ``True``.
References
----------
.. [1] Adrian Kosowski, and Krzysztof Manuszewski,
Classical Coloring of Graphs, Graph Colorings, 2-19, 2004.
ISBN 0-8218-3458-4.
.. [2] David W. Matula, and Leland L. Beck, "Smallest-last
ordering and clustering and graph coloring algorithms." *J. ACM* 30,
3 (July 1983), 417-427. <https://doi.org/10.1145/2402.322385>
.. [3] Maciej M. Sysło, Narsingh Deo, Janusz S. Kowalik,
Discrete Optimization Algorithms with Pascal Programs, 415-424, 1983.
ISBN 0-486-45353-7.
| null | (G, strategy='largest_first', interchange=False, *, backend=None, **backend_kwargs) |
30,729 | networkx.generators.lattice | grid_2d_graph | Returns the two-dimensional grid graph.
The grid graph has each node connected to its four nearest neighbors.
Parameters
----------
m, n : int or iterable container of nodes
If an integer, nodes are from `range(n)`.
If a container, elements become the coordinate of the nodes.
periodic : bool or iterable
If `periodic` is True, both dimensions are periodic. If False, none
are periodic. If `periodic` is iterable, it should yield 2 bool
values indicating whether the 1st and 2nd axes, respectively, are
periodic.
create_using : NetworkX graph constructor, optional (default=nx.Graph)
Graph type to create. If graph instance, then cleared before populated.
Returns
-------
NetworkX graph
The (possibly periodic) grid graph of the specified dimensions.
| null | (m, n, periodic=False, create_using=None, *, backend=None, **backend_kwargs) |
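A short illustrative example (nodes are ``(row, column)`` tuples):
>>> import networkx as nx
>>> G = nx.grid_2d_graph(3, 4)
>>> G.number_of_nodes(), G.number_of_edges()
(12, 17)
>>> sorted(G.neighbors((0, 0)))  # corner nodes have exactly two neighbours
[(0, 1), (1, 0)]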
30,730 | networkx.generators.lattice | grid_graph | Returns the *n*-dimensional grid graph.
The dimension *n* is the length of the list `dim` and the size in
each dimension is the value of the corresponding list element.
Parameters
----------
dim : list or tuple of numbers or iterables of nodes
'dim' is a tuple or list with, for each dimension, either a number
that is the size of that dimension or an iterable of nodes for
that dimension. The dimension of the grid_graph is the length
of `dim`.
periodic : bool or iterable
If `periodic` is True, all dimensions are periodic. If False all
dimensions are not periodic. If `periodic` is iterable, it should
yield `dim` bool values each of which indicates whether the
corresponding axis is periodic.
Returns
-------
NetworkX graph
The (possibly periodic) grid graph of the specified dimensions.
Examples
--------
To produce a 2 by 3 by 4 grid graph, a graph on 24 nodes:
>>> from networkx import grid_graph
>>> G = grid_graph(dim=(2, 3, 4))
>>> len(G)
24
>>> G = grid_graph(dim=(range(7, 9), range(3, 6)))
>>> len(G)
6
| null | (dim, periodic=False, *, backend=None, **backend_kwargs) |
30,732 | networkx.algorithms.centrality.group | group_betweenness_centrality | Compute the group betweenness centrality for a group of nodes.
Group betweenness centrality of a group of nodes $C$ is the sum of the
fraction of all-pairs shortest paths that pass through any vertex in $C$
.. math::
c_B(C) =\sum_{s,t \in V} \frac{\sigma(s, t|C)}{\sigma(s, t)}
where $V$ is the set of nodes, $\sigma(s, t)$ is the number of
shortest $(s, t)$-paths, and $\sigma(s, t|C)$ is the number of
those paths passing through some node in group $C$. Note that
$(s, t)$ are not members of the group ($V-C$ is the set of nodes
in $V$ that are not in $C$).
Parameters
----------
G : graph
A NetworkX graph.
C : list or set or list of lists or list of sets
A group or a list of groups containing nodes which belong to G, for which group betweenness
centrality is to be calculated.
normalized : bool, optional (default=True)
If True, group betweenness is normalized by `1/((|V|-|C|)(|V|-|C|-1))`
where `|V|` is the number of nodes in G and `|C|` is the number of nodes in C.
weight : None or string, optional (default=None)
If None, all edge weights are considered equal.
Otherwise holds the name of the edge attribute used as weight.
The weight of an edge is treated as the length or distance between the two sides.
endpoints : bool, optional (default=False)
If True include the endpoints in the shortest path counts.
Raises
------
NodeNotFound
If node(s) in C are not present in G.
Returns
-------
betweenness : list of floats or float
If C is a single group then return a float. If C is a list with
several groups then return a list of group betweenness centralities.
See Also
--------
betweenness_centrality
Notes
-----
Group betweenness centrality is described in [1]_ and its importance discussed in [3]_.
The initial implementation of the algorithm is mentioned in [2]_. This function uses
an improved algorithm presented in [4]_.
The number of nodes in the group must be a maximum of n - 2 where `n`
is the total number of nodes in the graph.
For weighted graphs the edge weights must be greater than zero.
Zero edge weights can produce an infinite number of equal length
paths between pairs of nodes.
The total number of paths between source and target is counted
differently for directed and undirected graphs. Directed paths
between "u" and "v" are counted as two possible paths (one each
direction) while undirected paths between "u" and "v" are counted
as one path. Said another way, the sum in the expression above is
over all ``s != t`` for directed graphs and for ``s < t`` for undirected graphs.
References
----------
.. [1] M G Everett and S P Borgatti:
The Centrality of Groups and Classes.
Journal of Mathematical Sociology. 23(3): 181-201. 1999.
http://www.analytictech.com/borgatti/group_centrality.htm
.. [2] Ulrik Brandes:
On Variants of Shortest-Path Betweenness
Centrality and their Generic Computation.
Social Networks 30(2):136-145, 2008.
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.72.9610&rep=rep1&type=pdf
.. [3] Sourav Medya et al.:
Group Centrality Maximization via Network Design.
SIAM International Conference on Data Mining, SDM 2018, 126-134.
https://sites.cs.ucsb.edu/~arlei/pubs/sdm18.pdf
.. [4] Rami Puzis, Yuval Elovici, and Shlomi Dolev.
"Fast algorithm for successive computation of group betweenness centrality."
https://journals.aps.org/pre/pdf/10.1103/PhysRevE.76.056709
| null | (G, C, normalized=True, weight=None, endpoints=False, *, backend=None, **backend_kwargs) |
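A qualitative sketch on a path graph; only the ordering of the two values is asserted, since the exact numbers depend on the normalization:
>>> import networkx as nx
>>> G = nx.path_graph(5)  # 0-1-2-3-4
>>> middle = nx.group_betweenness_centrality(G, [2])
>>> end = nx.group_betweenness_centrality(G, [0])
>>> middle > end  # the middle node lies on far more shortest paths than an endpoint
True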
30,733 | networkx.algorithms.centrality.group | group_closeness_centrality | Compute the group closeness centrality for a group of nodes.
Group closeness centrality of a group of nodes $S$ is a measure
of how close the group is to the other nodes in the graph.
.. math::
c_{close}(S) = \frac{|V-S|}{\sum_{v \in V-S} d_{S, v}}
d_{S, v} = \min_{u \in S} (d_{u, v})
where $V$ is the set of nodes, $d_{S, v}$ is the distance of
the group $S$ from $v$ defined as above. ($V-S$ is the set of nodes
in $V$ that are not in $S$).
Parameters
----------
G : graph
A NetworkX graph.
S : list or set
S is a group of nodes which belong to G, for which group closeness
centrality is to be calculated.
weight : None or string, optional (default=None)
If None, all edge weights are considered equal.
Otherwise holds the name of the edge attribute used as weight.
The weight of an edge is treated as the length or distance between the two sides.
Raises
------
NodeNotFound
If node(s) in S are not present in G.
Returns
-------
closeness : float
Group closeness centrality of the group S.
See Also
--------
closeness_centrality
Notes
-----
The measure was introduced in [1]_.
The formula implemented here is described in [2]_.
Higher values of closeness indicate greater centrality.
It is assumed that 1 / 0 is 0 (required in the case of directed graphs,
or when a shortest path length is 0).
The number of nodes in the group must be a maximum of n - 1 where `n`
is the total number of nodes in the graph.
For directed graphs, the incoming distance is utilized here. To use the
outward distance, act on `G.reverse()`.
For weighted graphs the edge weights must be greater than zero.
Zero edge weights can produce an infinite number of equal length
paths between pairs of nodes.
References
----------
.. [1] M G Everett and S P Borgatti:
The Centrality of Groups and Classes.
Journal of Mathematical Sociology. 23(3): 181-201. 1999.
http://www.analytictech.com/borgatti/group_centrality.htm
.. [2] J. Zhao et al.:
Measuring and Maximizing Group Closeness Centrality over
Disk Resident Graphs.
WWW Conference Proceedings, 2014, 689-694.
https://doi.org/10.1145/2567948.2579356
| null | (G, S, weight=None, *, backend=None, **backend_kwargs) |
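A small worked example: with `S = {1, 3}` on the path 0-1-2-3-4, every node outside the group is at distance 1 from the group, so the centrality is 3 / 3 = 1:
>>> import networkx as nx
>>> G = nx.path_graph(5)
>>> nx.group_closeness_centrality(G, [1, 3])
1.0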
30,734 | networkx.algorithms.centrality.group | group_degree_centrality | Compute the group degree centrality for a group of nodes.
Group degree centrality of a group of nodes $S$ is the fraction
of non-group members connected to group members.
Parameters
----------
G : graph
A NetworkX graph.
S : list or set
S is a group of nodes which belong to G, for which group degree
centrality is to be calculated.
Raises
------
NetworkXError
If node(s) in S are not in G.
Returns
-------
centrality : float
Group degree centrality of the group S.
See Also
--------
degree_centrality
group_in_degree_centrality
group_out_degree_centrality
Notes
-----
The measure was introduced in [1]_.
The number of nodes in the group must be a maximum of n - 1 where `n`
is the total number of nodes in the graph.
References
----------
.. [1] M G Everett and S P Borgatti:
The Centrality of Groups and Classes.
Journal of Mathematical Sociology. 23(3): 181-201. 1999.
http://www.analytictech.com/borgatti/group_centrality.htm
| null | (G, S, *, backend=None, **backend_kwargs) |
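A brief worked example on a star graph (hub 0 with leaves 1-4): the hub alone is adjacent to all four non-group nodes, while the pair {1, 2} is adjacent only to the hub, one of the remaining three nodes:
>>> import networkx as nx
>>> G = nx.star_graph(4)
>>> nx.group_degree_centrality(G, [0])
1.0
>>> nx.group_degree_centrality(G, [1, 2])  # only the hub neighbours the group: 1 of 3
0.3333333333333333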
30,735 | networkx.algorithms.centrality.group | group_in_degree_centrality | Compute the group in-degree centrality for a group of nodes.
Group in-degree centrality of a group of nodes $S$ is the fraction
of non-group members connected to group members by incoming edges.
Parameters
----------
G : graph
A NetworkX graph.
S : list or set
S is a group of nodes which belong to G, for which group in-degree
centrality is to be calculated.
Returns
-------
centrality : float
Group in-degree centrality of the group S.
Raises
------
NetworkXNotImplemented
If G is undirected.
NodeNotFound
If node(s) in S are not in G.
See Also
--------
degree_centrality
group_degree_centrality
group_out_degree_centrality
Notes
-----
The number of nodes in the group must be a maximum of n - 1 where `n`
is the total number of nodes in the graph.
`G.neighbors(i)` gives nodes with an outward edge from i, in a DiGraph,
so for group in-degree centrality, the reverse graph is used.
| null | (G, S, *, backend=None, **backend_kwargs) |
30,736 | networkx.algorithms.centrality.group | group_out_degree_centrality | Compute the group out-degree centrality for a group of nodes.
Group out-degree centrality of a group of nodes $S$ is the fraction
of non-group members connected to group members by outgoing edges.
Parameters
----------
G : graph
A NetworkX graph.
S : list or set
S is a group of nodes which belong to G, for which group out-degree
centrality is to be calculated.
Returns
-------
centrality : float
Group out-degree centrality of the group S.
Raises
------
NetworkXNotImplemented
If G is undirected.
NodeNotFound
If node(s) in S are not in G.
See Also
--------
degree_centrality
group_degree_centrality
group_in_degree_centrality
Notes
-----
The number of nodes in the group must be a maximum of n - 1 where `n`
is the total number of nodes in the graph.
`G.neighbors(i)` gives nodes with an outward edge from i, in a DiGraph,
so for group out-degree centrality, the graph itself is used.
| null | (G, S, *, backend=None, **backend_kwargs) |
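A small sketch contrasting the out-degree and in-degree variants on a tiny DiGraph:
>>> import networkx as nx
>>> G = nx.DiGraph([(0, 1), (0, 2), (3, 0)])
>>> nx.group_out_degree_centrality(G, [0])  # node 0 points at 2 of the 3 non-group nodes
0.6666666666666666
>>> nx.group_in_degree_centrality(G, [0])   # only node 3 points into the group
0.3333333333333333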
30,737 | networkx.algorithms.wiener | gutman_index | Returns the Gutman Index for the graph `G`.
The *Gutman Index* measures the topology of networks, especially for molecule
networks of atoms connected by bonds [1]_. It is also called the Schultz Index
of the second kind [2]_.
Consider an undirected graph `G` with node set ``V``.
The Gutman Index of a graph is the sum, over all (unordered) pairs of nodes
``(u, v)`` with distance ``dist(u, v)`` and degrees ``deg(u)`` and ``deg(v)``,
of ``dist(u, v) * deg(u) * deg(v)``.
Parameters
----------
G : NetworkX graph
weight : string or None, optional (default: None)
If None, every edge has weight 1.
If a string, use this edge attribute as the edge weight.
Any edge attribute not present defaults to 1.
The edge weights are used in computing shortest-path distances.
Returns
-------
number
The Gutman Index of the graph `G`.
Examples
--------
The Gutman Index of the (unweighted) complete graph on *n* nodes
equals the number of pairs of the *n* nodes times ``(n - 1) * (n - 1)``,
since each pair of nodes is at distance one and the product of the degrees of
any two vertices is ``(n - 1) * (n - 1)``.
>>> n = 10
>>> G = nx.complete_graph(n)
>>> nx.gutman_index(G) == (n * (n - 1) / 2) * ((n - 1) * (n - 1))
True
Graphs that are disconnected
>>> G = nx.empty_graph(2)
>>> nx.gutman_index(G)
inf
References
----------
.. [1] M.V. Diudea and I. Gutman, Wiener-Type Topological Indices,
Croatica Chemica Acta, 71 (1998), 21-51.
https://hrcak.srce.hr/132323
.. [2] I. Gutman, Selected properties of the Schultz molecular topological index,
J. Chem. Inf. Comput. Sci. 34 (1994), 1087-1089.
https://doi.org/10.1021/ci00021a009
| null | (G, weight=None, *, backend=None, **backend_kwargs) |
30,740 | networkx.algorithms.centrality.harmonic | harmonic_centrality | Compute harmonic centrality for nodes.
Harmonic centrality [1]_ of a node `u` is the sum of the reciprocals
of the shortest path distances from all other nodes to `u`
.. math::
C(u) = \sum_{v \neq u} \frac{1}{d(v, u)}
where `d(v, u)` is the shortest-path distance between `v` and `u`.
If `sources` is given as an argument, the returned harmonic centrality
values are calculated as the sum of the reciprocals of the shortest
path distances from the nodes specified in `sources` to `u` instead
of from all nodes to `u`.
Notice that higher values indicate higher centrality.
Parameters
----------
G : graph
A NetworkX graph
nbunch : container (default: all nodes in G)
Container of nodes for which harmonic centrality values are calculated.
sources : container (default: all nodes in G)
Container of nodes `v` over which reciprocal distances are computed.
Nodes not in `G` are silently ignored.
distance : edge attribute key, optional (default=None)
Use the specified edge attribute as the edge distance in shortest
path calculations. If `None`, then each edge will have distance equal to 1.
Returns
-------
nodes : dictionary
Dictionary of nodes with harmonic centrality as the value.
See Also
--------
betweenness_centrality, load_centrality, eigenvector_centrality,
degree_centrality, closeness_centrality
Notes
-----
If the 'distance' keyword is set to an edge attribute key then the
shortest-path length will be computed using Dijkstra's algorithm with
that edge attribute as the edge weight.
References
----------
.. [1] Boldi, Paolo, and Sebastiano Vigna. "Axioms for centrality."
Internet Mathematics 10.3-4 (2014): 222-262.
| null | (G, nbunch=None, distance=None, sources=None, *, backend=None, **backend_kwargs) |
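A worked example on the path 0-1-2: node 1 scores 1 + 1 = 2, while each endpoint scores 1 + 1/2 = 1.5:
>>> import networkx as nx
>>> G = nx.path_graph(3)
>>> c = nx.harmonic_centrality(G)
>>> c[1], c[0]
(2.0, 1.5)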
30,741 | networkx.algorithms.bridges | has_bridges | Decide whether a graph has any bridges.
A *bridge* in a graph is an edge whose removal causes the number of
connected components of the graph to increase.
Parameters
----------
G : undirected graph
root : node (optional)
A node in the graph `G`. If specified, only the bridges in the
connected component containing this node will be considered.
Returns
-------
bool
Whether the graph (or the connected component containing `root`)
has any bridges.
Raises
------
NodeNotFound
If `root` is not in the graph `G`.
NetworkXNotImplemented
If `G` is a directed graph.
Examples
--------
The barbell graph with parameter zero has a single bridge::
>>> G = nx.barbell_graph(10, 0)
>>> nx.has_bridges(G)
True
On the other hand, the cycle graph has no bridges::
>>> G = nx.cycle_graph(5)
>>> nx.has_bridges(G)
False
Notes
-----
This implementation uses the :func:`networkx.bridges` function, so
it shares its worst-case time complexity, $O(m + n)$, ignoring
polylogarithmic factors, where $n$ is the number of nodes in the
graph and $m$ is the number of edges.
| null | (G, root=None, *, backend=None, **backend_kwargs) |
30,742 | networkx.algorithms.euler | has_eulerian_path | Return True iff `G` has an Eulerian path.
An Eulerian path is a path in a graph which uses each edge of a graph
exactly once. If `source` is specified, then this function checks
whether an Eulerian path that starts at node `source` exists.
A directed graph has an Eulerian path iff:
- at most one vertex has out_degree - in_degree = 1,
- at most one vertex has in_degree - out_degree = 1,
- every other vertex has equal in_degree and out_degree,
- and all of its vertices belong to a single connected
component of the underlying undirected graph.
If `source` is not None, an Eulerian path starting at `source` exists if no
other node has out_degree - in_degree = 1. This is equivalent to either
there exists an Eulerian circuit or `source` has out_degree - in_degree = 1
and the conditions above hold.
An undirected graph has an Eulerian path iff:
- exactly zero or two vertices have odd degree,
- and all of its vertices belong to a single connected component.
If `source` is not None, an Eulerian path starting at `source` exists if
either there exists an Eulerian circuit or `source` has an odd degree and the
conditions above hold.
Graphs with isolated vertices (i.e. vertices with zero degree) are not considered
to have an Eulerian path. Therefore, if the graph is not connected (or not strongly
connected, for directed graphs), this function returns False.
Parameters
----------
G : NetworkX Graph
The graph to find an euler path in.
source : node, optional
Starting node for path.
Returns
-------
Bool : True if G has an Eulerian path.
Examples
--------
If you prefer to allow graphs with isolated vertices to have an Eulerian path,
you can first remove such vertices and then call `has_eulerian_path`, as the example below shows.
>>> G = nx.Graph([(0, 1), (1, 2), (0, 2)])
>>> G.add_node(3)
>>> nx.has_eulerian_path(G)
False
>>> G.remove_nodes_from(list(nx.isolates(G)))
>>> nx.has_eulerian_path(G)
True
See Also
--------
is_eulerian
eulerian_path
| null | (G, source=None, *, backend=None, **backend_kwargs) |
30,743 | networkx.algorithms.shortest_paths.generic | has_path | Returns *True* if *G* has a path from *source* to *target*.
Parameters
----------
G : NetworkX graph
source : node
Starting node for path
target : node
Ending node for path
| null | (G, source, target, *, backend=None, **backend_kwargs) |
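A minimal illustration on a graph with two connected components:
>>> import networkx as nx
>>> G = nx.Graph([(0, 1), (1, 2), (3, 4)])  # two components
>>> nx.has_path(G, 0, 2)
True
>>> nx.has_path(G, 0, 4)
False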
30,744 | networkx.generators.degree_seq | havel_hakimi_graph | Returns a simple graph with given degree sequence constructed
using the Havel-Hakimi algorithm.
Parameters
----------
deg_sequence: list of integers
Each integer corresponds to the degree of a node (need not be sorted).
create_using : NetworkX graph constructor, optional (default=nx.Graph)
Graph type to create. If graph instance, then cleared before populated.
Directed graphs are not allowed.
Raises
------
NetworkXException
For a non-graphical degree sequence (i.e. one
not realizable by some simple graph).
Notes
-----
The Havel-Hakimi algorithm constructs a simple graph by
successively connecting the node of highest degree to other nodes
of highest degree, resorting remaining nodes by degree, and
repeating the process. The resulting graph has a high
degree-assortativity. Nodes are labeled 1,.., len(deg_sequence),
corresponding to their position in deg_sequence.
The basic algorithm is from Hakimi [1]_ and was generalized by
Kleitman and Wang [2]_.
References
----------
.. [1] Hakimi S., On Realizability of a Set of Integers as
Degrees of the Vertices of a Linear Graph. I,
Journal of SIAM, 10(3), pp. 496-506 (1962)
.. [2] Kleitman D.J. and Wang D.L.
Algorithms for Constructing Graphs and Digraphs with Given Valences
and Factors. Discrete Mathematics, 6(1), pp. 79-88 (1973)
| def generate(self):
# remaining_degree is mapping from int->remaining degree
self.remaining_degree = dict(enumerate(self.degree))
# add all nodes to make sure we get isolated nodes
self.graph = nx.Graph()
self.graph.add_nodes_from(self.remaining_degree)
# remove zero degree nodes
for n, d in list(self.remaining_degree.items()):
if d == 0:
del self.remaining_degree[n]
if len(self.remaining_degree) > 0:
# build graph in three phases according to how many unmatched edges
self.phase1()
self.phase2()
self.phase3()
return self.graph
| (deg_sequence, create_using=None, *, backend=None, **backend_kwargs) |
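A short illustrative example: for a graphical sequence, the degree sequence of the constructed graph matches the requested one.
>>> import networkx as nx
>>> deg_sequence = [3, 3, 2, 2, 1, 1]  # graphical by the Havel-Hakimi test
>>> G = nx.havel_hakimi_graph(deg_sequence)
>>> sorted(d for _, d in G.degree())
[1, 1, 2, 2, 3, 3]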
30,745 | networkx.generators.small | heawood_graph |
Returns the Heawood Graph, a (3,6) cage.
The Heawood Graph is an undirected graph with 14 nodes and 21 edges,
named after Percy John Heawood [1]_.
It is cubic symmetric, nonplanar, Hamiltonian, and can be represented
in LCF notation as ``[5,-5]^7`` [2]_.
It is the unique (3,6)-cage: the regular cubic graph of girth 6 with
minimal number of vertices [3]_.
Parameters
----------
create_using : NetworkX graph constructor, optional (default=nx.Graph)
Graph type to create. If graph instance, then cleared before populated.
Returns
-------
G : networkx Graph
Heawood Graph with 14 nodes and 21 edges
References
----------
.. [1] https://en.wikipedia.org/wiki/Heawood_graph
.. [2] https://mathworld.wolfram.com/HeawoodGraph.html
.. [3] https://www.win.tue.nl/~aeb/graphs/Heawood.html
| def sedgewick_maze_graph(create_using=None):
"""
Return a small maze with a cycle.
This is the maze used in Sedgewick, 3rd Edition, Part 5, Graph
Algorithms, Chapter 18, e.g. Figure 18.2 and following [1]_.
Nodes are numbered 0,..,7
Parameters
----------
create_using : NetworkX graph constructor, optional (default=nx.Graph)
Graph type to create. If graph instance, then cleared before populated.
Returns
-------
G : networkx Graph
Small maze with a cycle
References
----------
.. [1] Figure 18.2, Chapter 18, Graph Algorithms (3rd Ed), Sedgewick
"""
G = empty_graph(0, create_using)
G.add_nodes_from(range(8))
G.add_edges_from([[0, 2], [0, 7], [0, 5]])
G.add_edges_from([[1, 7], [2, 6]])
G.add_edges_from([[3, 4], [3, 5]])
G.add_edges_from([[4, 5], [4, 7], [4, 6]])
G.name = "Sedgewick Maze"
return G
| (create_using=None, *, backend=None, **backend_kwargs) |
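A quick check of the properties stated in the docstring (14 nodes, 21 edges, cubic):
>>> import networkx as nx
>>> G = nx.heawood_graph()
>>> G.number_of_nodes(), G.number_of_edges()
(14, 21)
>>> set(d for _, d in G.degree())  # cubic: every node has degree 3
{3}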
30,746 | networkx.generators.lattice | hexagonal_lattice_graph | Returns an `m` by `n` hexagonal lattice graph.
The *hexagonal lattice graph* is a graph whose nodes and edges are
the `hexagonal tiling`_ of the plane.
The returned graph will have `m` rows and `n` columns of hexagons.
`Odd numbered columns`_ are shifted up relative to even numbered columns.
Positions of nodes are computed by default or `with_positions is True`.
Node positions creating the standard embedding in the plane
with sidelength 1 and are stored in the node attribute 'pos'.
`pos = nx.get_node_attributes(G, 'pos')` creates a dict ready for drawing.
.. _hexagonal tiling: https://en.wikipedia.org/wiki/Hexagonal_tiling
.. _Odd numbered columns: http://www-cs-students.stanford.edu/~amitp/game-programming/grids/
Parameters
----------
m : int
The number of rows of hexagons in the lattice.
n : int
The number of columns of hexagons in the lattice.
periodic : bool
Whether to make a periodic grid by joining the boundary vertices.
For this to work `n` must be even and both `n > 1` and `m > 1`.
The periodic connections create another row and column of hexagons
so these graphs have fewer nodes as boundary nodes are identified.
with_positions : bool (default: True)
Store the coordinates of each node in the graph node attribute 'pos'.
The coordinates provide a lattice with vertical columns of hexagons
offset to interleave and cover the plane.
Periodic positions shift the nodes vertically in a nonlinear way so that
the edges overlap less.
create_using : NetworkX graph constructor, optional (default=nx.Graph)
Graph type to create. If graph instance, then cleared before populated.
If graph is directed, edges will point up or right.
Returns
-------
NetworkX graph
The *m* by *n* hexagonal lattice graph.
| null | (m, n, periodic=False, with_positions=True, create_using=None, *, backend=None, **backend_kwargs) |
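A minimal usage sketch for the entry above, relying only on the documented `pos` node attribute; the lattice size chosen here is arbitrary.
import networkx as nx

# Build a small hexagonal lattice and read the stored positions for drawing.
G = nx.hexagonal_lattice_graph(2, 3)
pos = nx.get_node_attributes(G, "pos")    # dict suitable for nx.draw(G, pos=pos)
print(len(G), G.number_of_edges())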
30,748 | networkx.algorithms.link_analysis.hits_alg | hits | Returns HITS hubs and authorities values for nodes.
The HITS algorithm computes two numbers for a node.
Authorities estimates the node value based on the incoming links.
Hubs estimates the node value based on outgoing links.
Parameters
----------
G : graph
A NetworkX graph
max_iter : integer, optional
Maximum number of iterations in power method.
tol : float, optional
Error tolerance used to check convergence in power method iteration.
nstart : dictionary, optional
Starting value of each node for power method iteration.
normalized : bool (default=True)
Normalize results by the sum of all of the values.
Returns
-------
(hubs,authorities) : two-tuple of dictionaries
Two dictionaries keyed by node containing the hub and authority
values.
Raises
------
PowerIterationFailedConvergence
If the algorithm fails to converge to the specified tolerance
within the specified number of iterations of the power iteration
method.
Examples
--------
>>> G = nx.path_graph(4)
>>> h, a = nx.hits(G)
Notes
-----
The eigenvector calculation is done by the power iteration method
and has no guarantee of convergence. The iteration will stop
after max_iter iterations or an error tolerance of
number_of_nodes(G)*tol has been reached.
The HITS algorithm was designed for directed graphs but this
algorithm does not check if the input graph is directed and will
execute on undirected graphs.
References
----------
.. [1] A. Langville and C. Meyer,
"A survey of eigenvector methods of web information retrieval."
http://citeseer.ist.psu.edu/713792.html
.. [2] Jon Kleinberg,
Authoritative sources in a hyperlinked environment
Journal of the ACM 46 (5): 604-32, 1999.
doi:10.1145/324133.324140.
http://www.cs.cornell.edu/home/kleinber/auth.pdf.
| null | (G, max_iter=100, tol=1e-08, nstart=None, normalized=True, *, backend=None, **backend_kwargs) |
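A minimal pure-Python sketch of the hub/authority power iteration described in the Notes above; it assumes a directed graph, and the max-normalization and stopping rule here are simplifications, so `nx.hits` remains the supported implementation.
import networkx as nx

def hits_sketch(G, max_iter=100, tol=1e-8):
    # Hubs start uniform; authorities come from in-links, hubs from out-links.
    h = dict.fromkeys(G, 1.0 / len(G))
    for _ in range(max_iter):
        a = {n: sum(h[u] for u in G.predecessors(n)) for n in G}
        h_new = {n: sum(a[v] for v in G.successors(n)) for n in G}
        norm = max(h_new.values()) or 1.0
        h_new = {n: x / norm for n, x in h_new.items()}
        if sum(abs(h_new[n] - h[n]) for n in G) < len(G) * tol:
            h = h_new
            break
        h = h_new
    norm = max(a.values()) or 1.0
    return h, {n: x / norm for n, x in a.items()}

h, a = hits_sketch(nx.DiGraph([(1, 2), (2, 3), (3, 1), (1, 3)]))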
30,750 | networkx.generators.harary_graph | hkn_harary_graph | Returns the Harary graph with given node connectivity and node number.
The Harary graph $H_{k,n}$ is the graph that minimizes the number of
edges needed with given node connectivity $k$ and node number $n$.
This smallest number of edges is known to be ceil($kn/2$) [1]_.
Parameters
----------
k: integer
The node connectivity of the generated graph
n: integer
The number of nodes the generated graph is to contain
create_using : NetworkX graph constructor, optional Graph type
to create (default=nx.Graph). If graph instance, then cleared
before populated.
Returns
-------
NetworkX graph
The Harary graph $H_{k,n}$.
See Also
--------
hnm_harary_graph
Notes
-----
This algorithm runs in $O(kn)$ time.
It is implemented by following the Reference [2]_.
References
----------
.. [1] Weisstein, Eric W. "Harary Graph." From MathWorld--A Wolfram Web
Resource. http://mathworld.wolfram.com/HararyGraph.html.
.. [2] Harary, F. "The Maximum Connectivity of a Graph."
Proc. Nat. Acad. Sci. USA 48, 1142-1146, 1962.
| null | (k, n, create_using=None, *, backend=None, **backend_kwargs) |
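An illustrative check of the two properties stated in the entry above (minimal edge count ceil(kn/2) and node connectivity k), using an arbitrary choice of k and n.
import math
import networkx as nx

G = nx.hkn_harary_graph(4, 10)
print(G.number_of_edges() == math.ceil(4 * 10 / 2))  # minimal edge count
print(nx.node_connectivity(G))                       # expected to be 4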
30,751 | networkx.generators.harary_graph | hnm_harary_graph | Returns the Harary graph with given numbers of nodes and edges.
The Harary graph $H_{n,m}$ is the graph that maximizes node connectivity
with $n$ nodes and $m$ edges.
This maximum node connectivity is known to be floor($2m/n$). [1]_
Parameters
----------
n: integer
The number of nodes the generated graph is to contain
m: integer
The number of edges the generated graph is to contain
create_using : NetworkX graph constructor, optional Graph type
to create (default=nx.Graph). If graph instance, then cleared
before populated.
Returns
-------
NetworkX graph
The Harary graph $H_{n,m}$.
See Also
--------
hkn_harary_graph
Notes
-----
This algorithm runs in $O(m)$ time.
It is implemented by following the Reference [2]_.
References
----------
.. [1] F. T. Boesch, A. Satyanarayana, and C. L. Suffel,
"A Survey of Some Network Reliability Analysis and Synthesis Results,"
Networks, pp. 99-107, 2009.
.. [2] Harary, F. "The Maximum Connectivity of a Graph."
Proc. Nat. Acad. Sci. USA 48, 1142-1146, 1962.
| null | (n, m, create_using=None, *, backend=None, **backend_kwargs) |
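An illustrative check of the stated maximum connectivity floor(2m/n), using an arbitrary choice of n and m.
import networkx as nx

G = nx.hnm_harary_graph(10, 12)
print(G.number_of_edges())        # 12
print(nx.node_connectivity(G))    # expected to be floor(2 * 12 / 10) = 2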
30,752 | networkx.generators.small | hoffman_singleton_graph |
Returns the Hoffman-Singleton Graph.
The Hoffman–Singleton graph is a symmetrical undirected graph
with 50 nodes and 175 edges.
All indices lie in ``Z % 5``: that is, the integers mod 5 [1]_.
It is the only regular graph of vertex degree 7, diameter 2, and girth 5.
It is the unique (7,5)-cage graph and Moore graph, and contains many
copies of the Petersen graph [2]_.
Returns
-------
G : networkx Graph
Hoffman–Singleton Graph with 50 nodes and 175 edges
Notes
-----
Constructed from pentagon and pentagram as follows: Take five pentagons $P_h$
and five pentagrams $Q_i$. Join vertex $j$ of $P_h$ to vertex $h \cdot i + j$ of $Q_i$ [3]_.
References
----------
.. [1] https://blogs.ams.org/visualinsight/2016/02/01/hoffman-singleton-graph/
.. [2] https://mathworld.wolfram.com/Hoffman-SingletonGraph.html
.. [3] https://en.wikipedia.org/wiki/Hoffman%E2%80%93Singleton_graph
| def sedgewick_maze_graph(create_using=None):
"""
Return a small maze with a cycle.
This is the maze used in Sedgewick, 3rd Edition, Part 5, Graph
Algorithms, Chapter 18, e.g. Figure 18.2 and following [1]_.
Nodes are numbered 0,..,7
Parameters
----------
create_using : NetworkX graph constructor, optional (default=nx.Graph)
Graph type to create. If graph instance, then cleared before populated.
Returns
-------
G : networkx Graph
Small maze with a cycle
References
----------
.. [1] Figure 18.2, Chapter 18, Graph Algorithms (3rd Ed), Sedgewick
"""
G = empty_graph(0, create_using)
G.add_nodes_from(range(8))
G.add_edges_from([[0, 2], [0, 7], [0, 5]])
G.add_edges_from([[1, 7], [2, 6]])
G.add_edges_from([[3, 4], [3, 5]])
G.add_edges_from([[4, 5], [4, 7], [4, 6]])
G.name = "Sedgewick Maze"
return G
| (*, backend=None, **backend_kwargs) |
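An illustrative check of the properties listed in the entry above (order, size, 7-regularity, diameter 2).
import networkx as nx

G = nx.hoffman_singleton_graph()
print(len(G), G.number_of_edges())           # 50 175
print(all(d == 7 for _, d in G.degree()))    # 7-regular
print(nx.diameter(G))                        # 2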
30,753 | networkx.generators.small | house_graph |
Returns the House graph (square with triangle on top)
The house graph is a simple undirected graph with
5 nodes and 6 edges [1]_.
Parameters
----------
create_using : NetworkX graph constructor, optional (default=nx.Graph)
Graph type to create. If graph instance, then cleared before populated.
Returns
-------
G : networkx Graph
House graph in the form of a square with a triangle on top
References
----------
.. [1] https://mathworld.wolfram.com/HouseGraph.html
| def _raise_on_directed(func):
"""
A decorator which inspects the `create_using` argument and raises a
NetworkX exception when `create_using` is a DiGraph (class or instance) for
graph generators that do not support directed outputs.
"""
@wraps(func)
def wrapper(*args, **kwargs):
if kwargs.get("create_using") is not None:
G = nx.empty_graph(create_using=kwargs["create_using"])
if G.is_directed():
raise NetworkXError("Directed Graph not supported")
return func(*args, **kwargs)
return wrapper
| (create_using=None, *, backend=None, **backend_kwargs) |
30,754 | networkx.generators.small | house_x_graph |
Returns the House graph with a cross inside the house square.
The House X-graph is the House graph plus the two edges connecting diagonally
opposite vertices of the square base. It is also one of the two graphs
obtained by removing two edges from the pentatope graph [1]_.
Parameters
----------
create_using : NetworkX graph constructor, optional (default=nx.Graph)
Graph type to create. If graph instance, then cleared before populated.
Returns
-------
G : networkx Graph
House graph with diagonal vertices connected
References
----------
.. [1] https://mathworld.wolfram.com/HouseGraph.html
| def _raise_on_directed(func):
"""
A decorator which inspects the `create_using` argument and raises a
NetworkX exception when `create_using` is a DiGraph (class or instance) for
graph generators that do not support directed outputs.
"""
@wraps(func)
def wrapper(*args, **kwargs):
if kwargs.get("create_using") is not None:
G = nx.empty_graph(create_using=kwargs["create_using"])
if G.is_directed():
raise NetworkXError("Directed Graph not supported")
return func(*args, **kwargs)
return wrapper
| (create_using=None, *, backend=None, **backend_kwargs) |
30,756 | networkx.generators.lattice | hypercube_graph | Returns the *n*-dimensional hypercube graph.
The nodes are the integers between 0 and ``2 ** n - 1``, inclusive.
For more information on the hypercube graph, see the Wikipedia
article `Hypercube graph`_.
.. _Hypercube graph: https://en.wikipedia.org/wiki/Hypercube_graph
Parameters
----------
n : int
The dimension of the hypercube.
The number of nodes in the graph will be ``2 ** n``.
Returns
-------
NetworkX graph
The hypercube graph of dimension *n*.
| null | (n, *, backend=None, **backend_kwargs) |
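An illustrative check that the n-dimensional hypercube has 2**n nodes, each of degree n; the dimension chosen here is arbitrary.
import networkx as nx

G = nx.hypercube_graph(4)
print(len(G) == 2**4)                        # 16 nodes
print(all(d == 4 for _, d in G.degree()))    # every node has degree n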
30,757 | networkx.generators.small | icosahedral_graph |
Returns the Platonic Icosahedral graph.
The icosahedral graph has 12 nodes and 30 edges. It is a Platonic graph
whose nodes have the connectivity of the icosahedron. It is undirected,
regular and Hamiltonian [1]_.
Parameters
----------
create_using : NetworkX graph constructor, optional (default=nx.Graph)
Graph type to create. If graph instance, then cleared before populated.
Returns
-------
G : networkx Graph
Icosahedral graph with 12 nodes and 30 edges.
References
----------
.. [1] https://mathworld.wolfram.com/IcosahedralGraph.html
| def _raise_on_directed(func):
"""
A decorator which inspects the `create_using` argument and raises a
NetworkX exception when `create_using` is a DiGraph (class or instance) for
graph generators that do not support directed outputs.
"""
@wraps(func)
def wrapper(*args, **kwargs):
if kwargs.get("create_using") is not None:
G = nx.empty_graph(create_using=kwargs["create_using"])
if G.is_directed():
raise NetworkXError("Directed Graph not supported")
return func(*args, **kwargs)
return wrapper
| (create_using=None, *, backend=None, **backend_kwargs) |
30,759 | networkx.algorithms.dominance | immediate_dominators | Returns the immediate dominators of all nodes of a directed graph.
Parameters
----------
G : a DiGraph or MultiDiGraph
The graph where dominance is to be computed.
start : node
The start node of dominance computation.
Returns
-------
idom : dict keyed by nodes
A dict containing the immediate dominators of each node reachable from
`start`.
Raises
------
NetworkXNotImplemented
If `G` is undirected.
NetworkXError
If `start` is not in `G`.
Notes
-----
Except for `start`, the immediate dominators are the parents of their
corresponding nodes in the dominator tree.
Examples
--------
>>> G = nx.DiGraph([(1, 2), (1, 3), (2, 5), (3, 4), (4, 5)])
>>> sorted(nx.immediate_dominators(G, 1).items())
[(1, 1), (2, 1), (3, 1), (4, 3), (5, 1)]
References
----------
.. [1] K. D. Cooper, T. J. Harvey, and K. Kennedy.
A simple, fast dominance algorithm.
Software Practice & Experience, 4:110, 2001.
| null | (G, start, *, backend=None, **backend_kwargs) |
30,760 | networkx.algorithms.centrality.degree_alg | in_degree_centrality | Compute the in-degree centrality for nodes.
The in-degree centrality for a node v is the fraction of nodes its
incoming edges are connected to.
Parameters
----------
G : graph
A NetworkX graph
Returns
-------
nodes : dictionary
Dictionary of nodes with in-degree centrality as values.
Raises
------
NetworkXNotImplemented
If G is undirected.
Examples
--------
>>> G = nx.DiGraph([(0, 1), (0, 2), (0, 3), (1, 2), (1, 3)])
>>> nx.in_degree_centrality(G)
{0: 0.0, 1: 0.3333333333333333, 2: 0.6666666666666666, 3: 0.6666666666666666}
See Also
--------
degree_centrality, out_degree_centrality
Notes
-----
The degree centrality values are normalized by dividing by the maximum
possible degree in a simple graph n-1 where n is the number of nodes in G.
For multigraphs or graphs with self loops the maximum degree might
be higher than n-1 and values of degree centrality greater than 1
are possible.
| null | (G, *, backend=None, **backend_kwargs) |
30,761 | networkx.linalg.graphmatrix | incidence_matrix | Returns incidence matrix of G.
The incidence matrix assigns each row to a node and each column to an edge.
For a standard incidence matrix a 1 appears wherever a row's node is
incident on the column's edge. For an oriented incidence matrix each
edge is assigned an orientation (arbitrarily for undirected and aligning to
direction for directed). A -1 appears for the source (tail) of an edge and
1 for the destination (head) of the edge. The elements are zero otherwise.
Parameters
----------
G : graph
A NetworkX graph
nodelist : list, optional (default= all nodes in G)
The rows are ordered according to the nodes in nodelist.
If nodelist is None, then the ordering is produced by G.nodes().
edgelist : list, optional (default= all edges in G)
The columns are ordered according to the edges in edgelist.
If edgelist is None, then the ordering is produced by G.edges().
oriented: bool, optional (default=False)
If True, matrix elements are +1 or -1 for the head or tail node
respectively of each edge. If False, +1 occurs at both nodes.
weight : string or None, optional (default=None)
The edge data key used to provide each value in the matrix.
If None, then each edge has weight 1. Edge weights, if used,
should be positive so that the orientation can provide the sign.
dtype : a NumPy dtype or None (default=None)
The dtype of the output sparse array. This type should be a compatible
type of the weight argument, eg. if weight would return a float this
argument should also be a float.
If None, then the default for SciPy is used.
Returns
-------
A : SciPy sparse array
The incidence matrix of G.
Notes
-----
For MultiGraph/MultiDiGraph, the edges in edgelist should be
(u,v,key) 3-tuples.
"Networks are the best discrete model for so many problems in
applied mathematics" [1]_.
References
----------
.. [1] Gil Strang, Network applications: A = incidence matrix,
http://videolectures.net/mit18085f07_strang_lec03/
| null | (G, nodelist=None, edgelist=None, oriented=False, weight=None, *, dtype=None, backend=None, **backend_kwargs) |
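A hedged sketch of the standard identity B @ B.T == L between the oriented incidence matrix and the graph Laplacian of an unweighted graph; the example graph is arbitrary and the identity is background knowledge rather than part of the entry above.
import networkx as nx

G = nx.path_graph(4)
B = nx.incidence_matrix(G, oriented=True)    # 4 nodes x 3 edges, SciPy sparse
L = nx.laplacian_matrix(G)
print((B @ B.T - L).toarray())               # all zeros for an unweighted graph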
30,762 | networkx.algorithms.centrality.closeness | incremental_closeness_centrality | Incremental closeness centrality for nodes.
Compute closeness centrality for nodes using level-based work filtering
as described in Incremental Algorithms for Closeness Centrality by Sariyuce et al.
Level-based work filtering detects unnecessary updates to the closeness
centrality and filters them out.
---
From "Incremental Algorithms for Closeness Centrality":
Theorem 1: Let :math:`G = (V, E)` be a graph and u and v be two vertices in V
such that there is no edge (u, v) in E. Let :math:`G' = (V, E \cup uv)`
Then :math:`cc[s] = cc'[s]` if and only if :math:`\left|dG(s, u) - dG(s, v)\right| \leq 1`.
Where :math:`dG(u, v)` denotes the length of the shortest path between
two vertices u, v in a graph G, cc[s] is the closeness centrality for a
vertex s in V, and cc'[s] is the closeness centrality for a
vertex s in V, with the (u, v) edge added.
---
We use Theorem 1 to filter out updates when adding or removing an edge.
When adding an edge (u, v), we compute the shortest path lengths from all
other nodes to u and to v before the edge is added. When removing an edge,
we compute the shortest path lengths after the edge is removed. Then we
apply Theorem 1 to use previously computed closeness centrality for nodes
where :math:`\left|dG(s, u) - dG(s, v)\right| \leq 1`. This works only for
undirected, unweighted graphs; the distance argument is not supported.
Closeness centrality [1]_ of a node `u` is the reciprocal of the
sum of the shortest path distances from `u` to all `n-1` other nodes.
Since the sum of distances depends on the number of nodes in the
graph, closeness is normalized by the sum of minimum possible
distances `n-1`.
.. math::
C(u) = \frac{n - 1}{\sum_{v=1}^{n-1} d(v, u)},
where `d(v, u)` is the shortest-path distance between `v` and `u`,
and `n` is the number of nodes in the graph.
Notice that higher values of closeness indicate higher centrality.
Parameters
----------
G : graph
A NetworkX graph
edge : tuple
The modified edge (u, v) in the graph.
prev_cc : dictionary
The previous closeness centrality for all nodes in the graph.
insertion : bool, optional
If True (default) the edge was inserted, otherwise it was deleted from the graph.
wf_improved : bool, optional (default=True)
If True, scale by the fraction of nodes reachable. This gives the
Wasserman and Faust improved formula. For single component graphs
it is the same as the original formula.
Returns
-------
nodes : dictionary
Dictionary of nodes with closeness centrality as the value.
See Also
--------
betweenness_centrality, load_centrality, eigenvector_centrality,
degree_centrality, closeness_centrality
Notes
-----
The closeness centrality is normalized to `(n-1)/(|G|-1)` where
`n` is the number of nodes in the connected part of graph
containing the node. If the graph is not completely connected,
this algorithm computes the closeness centrality for each
connected part separately.
References
----------
.. [1] Freeman, L.C., 1979. Centrality in networks: I.
Conceptual clarification. Social Networks 1, 215--239.
https://doi.org/10.1016/0378-8733(78)90021-7
.. [2] Sariyuce, A.E.; Kaya, K.; Saule, E.; Catalyurek, U.V. Incremental
Algorithms for Closeness Centrality. 2013 IEEE International Conference on Big Data
http://sariyuce.com/papers/bigdata13.pdf
| null | (G, edge, prev_cc=None, insertion=True, wf_improved=True, *, backend=None, **backend_kwargs) |
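A small sketch of the Theorem 1 filter described above: when edge (u, v) is inserted, any node s with |d(s, u) - d(s, v)| <= 1 keeps its previous closeness; the graph and edge below are hypothetical.
import networkx as nx

G = nx.path_graph(8)
u, v = 0, 7                                            # hypothetical edge to insert
du = nx.single_source_shortest_path_length(G, u)
dv = nx.single_source_shortest_path_length(G, v)
unchanged = [s for s in G if abs(du[s] - dv[s]) <= 1]  # old centralities reused here
print(unchanged)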
30,763 | networkx.classes.function | induced_subgraph | Returns a SubGraph view of `G` showing only nodes in nbunch.
The induced subgraph of a graph on a set of nodes N is the
graph with nodes N and edges from G which have both ends in N.
Parameters
----------
G : NetworkX Graph
nbunch : node, container of nodes or None (for all nodes)
Returns
-------
subgraph : SubGraph View
A read-only view of the subgraph in `G` induced by the nodes.
Changes to the graph `G` will be reflected in the view.
Notes
-----
To create a mutable subgraph with its own copies of nodes
edges and attributes use `subgraph.copy()` or `Graph(subgraph)`
For an inplace reduction of a graph to a subgraph you can remove nodes:
`G.remove_nodes_from([n for n in G if n not in set(nbunch)])`
If you are going to compute subgraphs of your subgraphs you could
end up with a chain of views that can be very slow once the chain
has about 15 views in it. If they are all induced subgraphs, you
can short-cut the chain by making them all subgraphs of the original
graph. The graph class method `G.subgraph` does this when `G` is
a subgraph. In contrast, this function allows you to choose to build
chains or not, as you wish. The returned subgraph is a view on `G`.
Examples
--------
>>> G = nx.path_graph(4) # or DiGraph, MultiGraph, MultiDiGraph, etc
>>> H = nx.induced_subgraph(G, [0, 1, 3])
>>> list(H.edges)
[(0, 1)]
>>> list(H.nodes)
[0, 1, 3]
| def induced_subgraph(G, nbunch):
"""Returns a SubGraph view of `G` showing only nodes in nbunch.
The induced subgraph of a graph on a set of nodes N is the
graph with nodes N and edges from G which have both ends in N.
Parameters
----------
G : NetworkX Graph
nbunch : node, container of nodes or None (for all nodes)
Returns
-------
subgraph : SubGraph View
A read-only view of the subgraph in `G` induced by the nodes.
Changes to the graph `G` will be reflected in the view.
Notes
-----
To create a mutable subgraph with its own copies of nodes
edges and attributes use `subgraph.copy()` or `Graph(subgraph)`
For an inplace reduction of a graph to a subgraph you can remove nodes:
`G.remove_nodes_from([n for n in G if n not in set(nbunch)])`
If you are going to compute subgraphs of your subgraphs you could
end up with a chain of views that can be very slow once the chain
has about 15 views in it. If they are all induced subgraphs, you
can short-cut the chain by making them all subgraphs of the original
graph. The graph class method `G.subgraph` does this when `G` is
a subgraph. In contrast, this function allows you to choose to build
chains or not, as you wish. The returned subgraph is a view on `G`.
Examples
--------
>>> G = nx.path_graph(4) # or DiGraph, MultiGraph, MultiDiGraph, etc
>>> H = nx.induced_subgraph(G, [0, 1, 3])
>>> list(H.edges)
[(0, 1)]
>>> list(H.nodes)
[0, 1, 3]
"""
induced_nodes = nx.filters.show_nodes(G.nbunch_iter(nbunch))
return nx.subgraph_view(G, filter_node=induced_nodes)
| (G, nbunch) |
30,766 | networkx.algorithms.operators.binary | intersection | Returns a new graph that contains only the nodes and the edges that exist in
both G and H.
Parameters
----------
G,H : graph
A NetworkX graph. G and H can have different node sets but must be both graphs or both multigraphs.
Raises
------
NetworkXError
If one is a MultiGraph and the other one is a graph.
Returns
-------
GH : A new graph with the same type as G.
Notes
-----
Attributes from the graph, nodes, and edges are not copied to the new
graph. If you want a new graph of the intersection of G and H
with the attributes (including edge data) from G use remove_nodes_from()
as follows
>>> G = nx.path_graph(3)
>>> H = nx.path_graph(5)
>>> R = G.copy()
>>> R.remove_nodes_from(n for n in G if n not in H)
>>> R.remove_edges_from(e for e in G.edges if e not in H.edges)
Examples
--------
>>> G = nx.Graph([(0, 1), (0, 2), (1, 2)])
>>> H = nx.Graph([(0, 3), (1, 2), (2, 3)])
>>> R = nx.intersection(G, H)
>>> R.nodes
NodeView((0, 1, 2))
>>> R.edges
EdgeView([(1, 2)])
| null | (G, H, *, backend=None, **backend_kwargs) |
30,767 | networkx.algorithms.operators.all | intersection_all | Returns a new graph that contains only the nodes and the edges that exist in
all graphs.
Parameters
----------
graphs : iterable
Iterable of NetworkX graphs
Returns
-------
R : A new graph with the same type as the first graph in list
Raises
------
ValueError
If `graphs` is an empty list.
NetworkXError
In case of mixed type graphs, like MultiGraph and Graph, or directed and undirected graphs.
Notes
-----
For operating on mixed type graphs, they should be converted to the same type.
Attributes from the graph, nodes, and edges are not copied to the new
graph.
The resulting graph can be updated with attributes if desired. For example, the example below stores, for each node, the minimum 'capacity' value found across all graphs as a new attribute.
>>> g = nx.Graph()
>>> g.add_node(0, capacity=4)
>>> g.add_node(1, capacity=3)
>>> g.add_edge(0, 1)
>>> h = g.copy()
>>> h.nodes[0]["capacity"] = 2
>>> gh = nx.intersection_all([g, h])
>>> new_node_attr = {
... n: min(*(anyG.nodes[n].get("capacity", float("inf")) for anyG in [g, h]))
... for n in gh
... }
>>> nx.set_node_attributes(gh, new_node_attr, "new_capacity")
>>> gh.nodes(data=True)
NodeDataView({0: {'new_capacity': 2}, 1: {'new_capacity': 3}})
Examples
--------
>>> G1 = nx.Graph([(1, 2), (2, 3)])
>>> G2 = nx.Graph([(2, 3), (3, 4)])
>>> R = nx.intersection_all([G1, G2])
>>> list(R.nodes())
[2, 3]
>>> list(R.edges())
[(2, 3)]
| null | (graphs, *, backend=None, **backend_kwargs) |
30,768 | networkx.algorithms.distance_regular | intersection_array | Returns the intersection array of a distance-regular graph.
Given a distance-regular graph G with integers b_i, c_i,i = 0,....,d
such that for any 2 vertices x,y in G at a distance i=d(x,y), there
are exactly c_i neighbors of y at a distance of i-1 from x and b_i
neighbors of y at a distance of i+1 from x.
A distance regular graph's intersection array is given by,
[b_0, b_1, ..., b_{d-1}; c_1, c_2, ..., c_d]
Parameters
----------
G: Networkx graph (undirected)
Returns
-------
b,c: tuple of lists
Examples
--------
>>> G = nx.icosahedral_graph()
>>> nx.intersection_array(G)
([5, 2, 1], [1, 2, 5])
References
----------
.. [1] Weisstein, Eric W. "Intersection Array."
From MathWorld--A Wolfram Web Resource.
http://mathworld.wolfram.com/IntersectionArray.html
See Also
--------
global_parameters
| null | (G, *, backend=None, **backend_kwargs) |
30,769 | networkx.generators.interval_graph | interval_graph | Generates an interval graph for a list of intervals given.
In graph theory, an interval graph is an undirected graph formed from a set
of closed intervals on the real line, with a vertex for each interval
and an edge between vertices whose intervals intersect.
It is the intersection graph of the intervals.
More information can be found at:
https://en.wikipedia.org/wiki/Interval_graph
Parameters
----------
intervals : a sequence of intervals, say (l, r) where l is the left end,
and r is the right end of the closed interval.
Returns
-------
G : networkx graph
Examples
--------
>>> intervals = [(-2, 3), [1, 4], (2, 3), (4, 6)]
>>> G = nx.interval_graph(intervals)
>>> sorted(G.edges)
[((-2, 3), (1, 4)), ((-2, 3), (2, 3)), ((1, 4), (2, 3)), ((1, 4), (4, 6))]
Raises
------
:exc:`TypeError`
if `intervals` contains None or an element that is not a
collections.abc.Sequence or does not have length 2.
:exc:`ValueError`
if `intervals` contains an interval such that min1 > max1
where min1,max1 = interval
| null | (intervals, *, backend=None, **backend_kwargs) |
30,770 | networkx.generators.line | inverse_line_graph | Returns the inverse line graph of graph G.
If H is a graph, and G is the line graph of H, such that G = L(H).
Then H is the inverse line graph of G.
Not all graphs are line graphs and these do not have an inverse line graph.
In these cases this function raises a NetworkXError.
Parameters
----------
G : graph
A NetworkX Graph
Returns
-------
H : graph
The inverse line graph of G.
Raises
------
NetworkXNotImplemented
If G is directed or a multigraph
NetworkXError
If G is not a line graph
Notes
-----
This is an implementation of the Roussopoulos algorithm [1]_.
If G consists of multiple components, then the algorithm doesn't work.
You should invert every component separately:
>>> K5 = nx.complete_graph(5)
>>> P4 = nx.Graph([("a", "b"), ("b", "c"), ("c", "d")])
>>> G = nx.union(K5, P4)
>>> root_graphs = []
>>> for comp in nx.connected_components(G):
... root_graphs.append(nx.inverse_line_graph(G.subgraph(comp)))
>>> len(root_graphs)
2
References
----------
.. [1] Roussopoulos, N.D. , "A max {m, n} algorithm for determining the graph H from
its line graph G", Information Processing Letters 2, (1973), 108--112, ISSN 0020-0190,
`DOI link <https://doi.org/10.1016/0020-0190(73)90029-X>`_
| null | (G, *, backend=None, **backend_kwargs) |
30,771 | networkx.algorithms.dag | is_aperiodic | Returns True if `G` is aperiodic.
A directed graph is aperiodic if there is no integer k > 1 that
divides the length of every cycle in the graph.
Parameters
----------
G : NetworkX DiGraph
A directed graph
Returns
-------
bool
True if the graph is aperiodic False otherwise
Raises
------
NetworkXError
If `G` is not directed
Examples
--------
A graph consisting of one cycle, the length of which is 2. Therefore ``k = 2``
divides the length of every cycle in the graph and thus the graph
is *not aperiodic*::
>>> DG = nx.DiGraph([(1, 2), (2, 1)])
>>> nx.is_aperiodic(DG)
False
A graph consisting of two cycles: one of length 2 and the other of length 3.
The cycle lengths are coprime, so there is no single value of k where ``k > 1``
that divides each cycle length and therefore the graph is *aperiodic*::
>>> DG = nx.DiGraph([(1, 2), (2, 3), (3, 1), (1, 4), (4, 1)])
>>> nx.is_aperiodic(DG)
True
A graph consisting of two cycles: one of length 2 and the other of length 4.
The lengths of the cycles share a common factor ``k = 2``, and therefore
the graph is *not aperiodic*::
>>> DG = nx.DiGraph([(1, 2), (2, 1), (3, 4), (4, 5), (5, 6), (6, 3)])
>>> nx.is_aperiodic(DG)
False
An acyclic graph, therefore the graph is *not aperiodic*::
>>> DG = nx.DiGraph([(1, 2), (2, 3)])
>>> nx.is_aperiodic(DG)
False
Notes
-----
This uses the method outlined in [1]_, which runs in $O(m)$ time
given $m$ edges in `G`. Note that a graph is not aperiodic if it is
acyclic, as every integer trivially divides length-0 cycles.
References
----------
.. [1] Jarvis, J. P.; Shier, D. R. (1996),
"Graph-theoretic analysis of finite Markov chains,"
in Shier, D. R.; Wallenius, K. T., Applied Mathematical Modeling:
A Multidisciplinary Approach, CRC Press.
| def transitive_closure_dag(G, topo_order=None):
"""Returns the transitive closure of a directed acyclic graph.
This function is faster than the function `transitive_closure`, but fails
if the graph has a cycle.
The transitive closure of G = (V,E) is a graph G+ = (V,E+) such that
for all v, w in V there is an edge (v, w) in E+ if and only if there
is a non-null path from v to w in G.
Parameters
----------
G : NetworkX DiGraph
A directed acyclic graph (DAG)
topo_order: list or tuple, optional
A topological order for G (if None, the function will compute one)
Returns
-------
NetworkX DiGraph
The transitive closure of `G`
Raises
------
NetworkXNotImplemented
If `G` is not directed
NetworkXUnfeasible
If `G` has a cycle
Examples
--------
>>> DG = nx.DiGraph([(1, 2), (2, 3)])
>>> TC = nx.transitive_closure_dag(DG)
>>> TC.edges()
OutEdgeView([(1, 2), (1, 3), (2, 3)])
Notes
-----
This algorithm is probably simple enough to be well-known but I didn't find
a mention in the literature.
"""
if topo_order is None:
topo_order = list(topological_sort(G))
TC = G.copy()
# idea: traverse vertices following a reverse topological order, connecting
# each vertex to its descendants at distance 2 as we go
for v in reversed(topo_order):
TC.add_edges_from((v, u) for u in nx.descendants_at_distance(TC, v, 2))
return TC
| (G, *, backend=None, **backend_kwargs) |
30,772 | networkx.algorithms.tree.recognition | is_arborescence |
Returns True if `G` is an arborescence.
An arborescence is a directed tree with maximum in-degree equal to 1.
Parameters
----------
G : graph
The graph to test.
Returns
-------
b : bool
A boolean that is True if `G` is an arborescence.
Examples
--------
>>> G = nx.DiGraph([(0, 1), (0, 2), (2, 3), (3, 4)])
>>> nx.is_arborescence(G)
True
>>> G.remove_edge(0, 1)
>>> G.add_edge(1, 2) # maximum in-degree is 2
>>> nx.is_arborescence(G)
False
Notes
-----
In another convention, an arborescence is known as a *tree*.
See Also
--------
is_tree
| null | (G, *, backend=None, **backend_kwargs) |
30,773 | networkx.algorithms.asteroidal | is_at_free | Check if a graph is AT-free.
The method uses the `find_asteroidal_triple` method to recognize
an AT-free graph. If no asteroidal triple is found the graph is
AT-free and True is returned. If at least one asteroidal triple is
found the graph is not AT-free and False is returned.
Parameters
----------
G : NetworkX Graph
The graph to check whether is AT-free or not.
Returns
-------
bool
True if G is AT-free and False otherwise.
Examples
--------
>>> G = nx.Graph([(0, 1), (0, 2), (1, 2), (1, 3), (1, 4), (4, 5)])
>>> nx.is_at_free(G)
True
>>> G = nx.cycle_graph(6)
>>> nx.is_at_free(G)
False
| null | (G, *, backend=None, **backend_kwargs) |
30,774 | networkx.algorithms.components.attracting | is_attracting_component | Returns True if `G` consists of a single attracting component.
Parameters
----------
G : DiGraph, MultiDiGraph
The graph to be analyzed.
Returns
-------
attracting : bool
True if `G` has a single attracting component. Otherwise, False.
Raises
------
NetworkXNotImplemented
If the input graph is undirected.
See Also
--------
attracting_components
number_attracting_components
| null | (G, *, backend=None, **backend_kwargs) |
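An illustrative example, assuming the documented semantics that the single attracting component must cover the whole graph.
import networkx as nx

G = nx.DiGraph([(1, 2), (2, 1)])         # the 2-cycle is the entire graph
print(nx.is_attracting_component(G))     # True
G.add_edge(3, 1)                         # node 3 lies outside the attracting component
print(nx.is_attracting_component(G))     # False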
30,775 | networkx.algorithms.components.biconnected | is_biconnected | Returns True if the graph is biconnected, False otherwise.
A graph is biconnected if, and only if, it cannot be disconnected by
removing only one node (and all edges incident on that node). If
removing a node increases the number of disconnected components
in the graph, that node is called an articulation point, or cut
vertex. A biconnected graph has no articulation points.
Parameters
----------
G : NetworkX Graph
An undirected graph.
Returns
-------
biconnected : bool
True if the graph is biconnected, False otherwise.
Raises
------
NetworkXNotImplemented
If the input graph is not undirected.
Examples
--------
>>> G = nx.path_graph(4)
>>> print(nx.is_biconnected(G))
False
>>> G.add_edge(0, 3)
>>> print(nx.is_biconnected(G))
True
See Also
--------
biconnected_components
articulation_points
biconnected_component_edges
is_strongly_connected
is_weakly_connected
is_connected
is_semiconnected
Notes
-----
The algorithm to find articulation points and biconnected
components is implemented using a non-recursive depth-first-search
(DFS) that keeps track of the highest level that back edges reach
in the DFS tree. A node `n` is an articulation point if, and only
if, there exists a subtree rooted at `n` such that there is no
back edge from any successor of `n` that links to a predecessor of
`n` in the DFS tree. By keeping track of all the edges traversed
by the DFS we can obtain the biconnected components because all
edges of a bicomponent will be traversed consecutively between
articulation points.
References
----------
.. [1] Hopcroft, J.; Tarjan, R. (1973).
"Efficient algorithms for graph manipulation".
Communications of the ACM 16: 372–378. doi:10.1145/362248.362272
| null | (G, *, backend=None, **backend_kwargs) |
30,776 | networkx.algorithms.bipartite.basic | is_bipartite | Returns True if graph G is bipartite, False if not.
Parameters
----------
G : NetworkX graph
Examples
--------
>>> from networkx.algorithms import bipartite
>>> G = nx.path_graph(4)
>>> print(bipartite.is_bipartite(G))
True
See Also
--------
color, is_bipartite_node_set
| null | (G, *, backend=None, **backend_kwargs) |
30,777 | networkx.algorithms.tree.recognition | is_branching |
Returns True if `G` is a branching.
A branching is a directed forest with maximum in-degree equal to 1.
Parameters
----------
G : directed graph
The directed graph to test.
Returns
-------
b : bool
A boolean that is True if `G` is a branching.
Examples
--------
>>> G = nx.DiGraph([(0, 1), (1, 2), (2, 3), (3, 4)])
>>> nx.is_branching(G)
True
>>> G.remove_edge(2, 3)
>>> G.add_edge(3, 1) # maximum in-degree is 2
>>> nx.is_branching(G)
False
Notes
-----
In another convention, a branching is also known as a *forest*.
See Also
--------
is_forest
| null | (G, *, backend=None, **backend_kwargs) |
30,778 | networkx.algorithms.chordal | is_chordal | Checks whether G is a chordal graph.
A graph is chordal if every cycle of length at least 4 has a chord
(an edge joining two nodes not adjacent in the cycle).
Parameters
----------
G : graph
A NetworkX graph.
Returns
-------
chordal : bool
True if G is a chordal graph and False otherwise.
Raises
------
NetworkXNotImplemented
The algorithm does not support DiGraph, MultiGraph and MultiDiGraph.
Examples
--------
>>> e = [
... (1, 2),
... (1, 3),
... (2, 3),
... (2, 4),
... (3, 4),
... (3, 5),
... (3, 6),
... (4, 5),
... (4, 6),
... (5, 6),
... ]
>>> G = nx.Graph(e)
>>> nx.is_chordal(G)
True
Notes
-----
The routine tries to go through every node following maximum cardinality
search. It returns False when it finds that the separator for any node
is not a clique. Based on the algorithms in [1]_.
Self loops are ignored.
References
----------
.. [1] R. E. Tarjan and M. Yannakakis, Simple linear-time algorithms
to test chordality of graphs, test acyclicity of hypergraphs, and
selectively reduce acyclic hypergraphs, SIAM J. Comput., 13 (1984),
pp. 566–579.
| null | (G, *, backend=None, **backend_kwargs) |
30,779 | networkx.algorithms.components.connected | is_connected | Returns True if the graph is connected, False otherwise.
Parameters
----------
G : NetworkX Graph
An undirected graph.
Returns
-------
connected : bool
True if the graph is connected, false otherwise.
Raises
------
NetworkXNotImplemented
If G is directed.
Examples
--------
>>> G = nx.path_graph(4)
>>> print(nx.is_connected(G))
True
See Also
--------
is_strongly_connected
is_weakly_connected
is_semiconnected
is_biconnected
connected_components
Notes
-----
For undirected graphs only.
| null | (G, *, backend=None, **backend_kwargs) |
30,780 | networkx.algorithms.d_separation | is_d_separator | Return whether node sets `x` and `y` are d-separated by `z`.
Parameters
----------
G : nx.DiGraph
A NetworkX DAG.
x : node or set of nodes
First node or set of nodes in `G`.
y : node or set of nodes
Second node or set of nodes in `G`.
z : node or set of nodes
Potential separator (set of conditioning nodes in `G`). Can be empty set.
Returns
-------
b : bool
A boolean that is true if `x` is d-separated from `y` given `z` in `G`.
Raises
------
NetworkXError
The *d-separation* test is commonly used on disjoint sets of
nodes in acyclic directed graphs. Accordingly, the algorithm
raises a :exc:`NetworkXError` if the node sets are not
disjoint or if the input graph is not a DAG.
NodeNotFound
If any of the input nodes are not found in the graph,
a :exc:`NodeNotFound` exception is raised
Notes
-----
A d-separating set in a DAG is a set of nodes that
blocks all paths between the two sets. Nodes in `z`
block a path if they are part of the path and are not a collider,
or a descendant of a collider. Also colliders that are not in `z`
block a path. A collider structure along a path
is ``... -> c <- ...`` where ``c`` is the collider node.
https://en.wikipedia.org/wiki/Bayesian_network#d-separation
| null | (G, x, y, z, *, backend=None, **backend_kwargs) |
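An illustrative collider example matching the Notes above: x -> c <- y blocks the path unless c is conditioned on.
import networkx as nx

G = nx.DiGraph([("x", "c"), ("y", "c")])
print(nx.is_d_separator(G, {"x"}, {"y"}, set()))   # True: the collider blocks the path
print(nx.is_d_separator(G, {"x"}, {"y"}, {"c"}))   # False: conditioning on c opens it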
30,781 | networkx.algorithms.graphical | is_digraphical | Returns True if some directed graph can realize the in- and out-degree
sequences.
Parameters
----------
in_sequence : list or iterable container
A sequence of integer node in-degrees
out_sequence : list or iterable container
A sequence of integer node out-degrees
Returns
-------
valid : bool
True if in and out-sequences are digraphic False if not.
Examples
--------
>>> G = nx.DiGraph([(1, 2), (1, 3), (2, 3), (3, 4), (4, 2), (5, 1), (5, 4)])
>>> in_seq = (d for n, d in G.in_degree())
>>> out_seq = (d for n, d in G.out_degree())
>>> nx.is_digraphical(in_seq, out_seq)
True
To test a non-digraphical scenario:
>>> in_seq_list = [d for n, d in G.in_degree()]
>>> in_seq_list[-1] += 1
>>> nx.is_digraphical(in_seq_list, out_seq)
False
Notes
-----
This algorithm is from Kleitman and Wang [1]_.
The worst case runtime is $O(s \times \log n)$ where $s$ and $n$ are the
sum and length of the sequences respectively.
References
----------
.. [1] D.J. Kleitman and D.L. Wang
Algorithms for Constructing Graphs and Digraphs with Given Valences
and Factors, Discrete Mathematics, 6(1), pp. 79-88 (1973)
| null | (in_sequence, out_sequence, *, backend=None, **backend_kwargs) |
30,782 | networkx.classes.function | is_directed | Return True if graph is directed. | def is_directed(G):
"""Return True if graph is directed."""
return G.is_directed()
| (G) |
30,783 | networkx.algorithms.dag | is_directed_acyclic_graph | Returns True if the graph `G` is a directed acyclic graph (DAG) or
False if not.
Parameters
----------
G : NetworkX graph
Returns
-------
bool
True if `G` is a DAG, False otherwise
Examples
--------
Undirected graph::
>>> G = nx.Graph([(1, 2), (2, 3)])
>>> nx.is_directed_acyclic_graph(G)
False
Directed graph with cycle::
>>> G = nx.DiGraph([(1, 2), (2, 3), (3, 1)])
>>> nx.is_directed_acyclic_graph(G)
False
Directed acyclic graph::
>>> G = nx.DiGraph([(1, 2), (2, 3)])
>>> nx.is_directed_acyclic_graph(G)
True
See also
--------
topological_sort
| def transitive_closure_dag(G, topo_order=None):
"""Returns the transitive closure of a directed acyclic graph.
This function is faster than the function `transitive_closure`, but fails
if the graph has a cycle.
The transitive closure of G = (V,E) is a graph G+ = (V,E+) such that
for all v, w in V there is an edge (v, w) in E+ if and only if there
is a non-null path from v to w in G.
Parameters
----------
G : NetworkX DiGraph
A directed acyclic graph (DAG)
topo_order: list or tuple, optional
A topological order for G (if None, the function will compute one)
Returns
-------
NetworkX DiGraph
The transitive closure of `G`
Raises
------
NetworkXNotImplemented
If `G` is not directed
NetworkXUnfeasible
If `G` has a cycle
Examples
--------
>>> DG = nx.DiGraph([(1, 2), (2, 3)])
>>> TC = nx.transitive_closure_dag(DG)
>>> TC.edges()
OutEdgeView([(1, 2), (1, 3), (2, 3)])
Notes
-----
This algorithm is probably simple enough to be well-known but I didn't find
a mention in the literature.
"""
if topo_order is None:
topo_order = list(topological_sort(G))
TC = G.copy()
# idea: traverse vertices following a reverse topological order, connecting
# each vertex to its descendants at distance 2 as we go
for v in reversed(topo_order):
TC.add_edges_from((v, u) for u in nx.descendants_at_distance(TC, v, 2))
return TC
| (G, *, backend=None, **backend_kwargs) |
30,784 | networkx.algorithms.distance_regular | is_distance_regular | Returns True if the graph is distance regular, False otherwise.
A connected graph G is distance-regular if for any nodes x,y
and any integers i,j=0,1,...,d (where d is the graph
diameter), the number of vertices at distance i from x and
distance j from y depends only on i,j and the graph distance
between x and y, independently of the choice of x and y.
Parameters
----------
G: Networkx graph (undirected)
Returns
-------
bool
True if the graph is Distance Regular, False otherwise
Examples
--------
>>> G = nx.hypercube_graph(6)
>>> nx.is_distance_regular(G)
True
See Also
--------
intersection_array, global_parameters
Notes
-----
For undirected and simple graphs only
References
----------
.. [1] Brouwer, A. E.; Cohen, A. M.; and Neumaier, A.
Distance-Regular Graphs. New York: Springer-Verlag, 1989.
.. [2] Weisstein, Eric W. "Distance-Regular Graph."
http://mathworld.wolfram.com/Distance-RegularGraph.html
| null | (G, *, backend=None, **backend_kwargs) |
30,785 | networkx.algorithms.dominating | is_dominating_set | Checks if `nbunch` is a dominating set for `G`.
A *dominating set* for a graph with node set *V* is a subset *D* of
*V* such that every node not in *D* is adjacent to at least one
member of *D* [1]_.
Parameters
----------
G : NetworkX graph
nbunch : iterable
An iterable of nodes in the graph `G`.
See also
--------
dominating_set
References
----------
.. [1] https://en.wikipedia.org/wiki/Dominating_set
| null | (G, nbunch, *, backend=None, **backend_kwargs) |
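An illustrative example using a star graph, where the hub alone dominates every other node.
import networkx as nx

G = nx.star_graph(5)
print(nx.is_dominating_set(G, {0}))   # True: every leaf is adjacent to the hub
print(nx.is_dominating_set(G, {1}))   # False: leaves 2..5 are not adjacent to node 1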
30,786 | networkx.algorithms.covering | is_edge_cover | Decides whether a set of edges is a valid edge cover of the graph.
Given a set of edges, whether it is an edge cover can be decided by
checking that every node of the graph has at least one edge of the set
incident on it.
Parameters
----------
G : NetworkX graph
An undirected bipartite graph.
cover : set
Set of edges to be checked.
Returns
-------
bool
Whether the set of edges is a valid edge cover of the graph.
Examples
--------
>>> G = nx.Graph([(0, 1), (0, 2), (0, 3), (1, 2), (1, 3)])
>>> cover = {(2, 1), (3, 0)}
>>> nx.is_edge_cover(G, cover)
True
Notes
-----
An edge cover of a graph is a set of edges such that every node of
the graph is incident to at least one edge of the set.
| null | (G, cover, *, backend=None, **backend_kwargs) |
30,787 | networkx.classes.function | is_empty | Returns True if `G` has no edges.
Parameters
----------
G : graph
A NetworkX graph.
Returns
-------
bool
True if `G` has no edges, and False otherwise.
Notes
-----
An empty graph can have nodes but not edges. The empty graph with zero
nodes is known as the null graph. This is an $O(n)$ operation where n
is the number of nodes in the graph.
| def is_empty(G):
"""Returns True if `G` has no edges.
Parameters
----------
G : graph
A NetworkX graph.
Returns
-------
bool
True if `G` has no edges, and False otherwise.
Notes
-----
An empty graph can have nodes but not edges. The empty graph with zero
nodes is known as the null graph. This is an $O(n)$ operation where n
is the number of nodes in the graph.
"""
return not any(G._adj.values())
| (G) |
30,788 | networkx.algorithms.euler | is_eulerian | Returns True if and only if `G` is Eulerian.
A graph is *Eulerian* if it has an Eulerian circuit. An *Eulerian
circuit* is a closed walk that includes each edge of a graph exactly
once.
Graphs with isolated vertices (i.e. vertices with zero degree) are not
considered to have Eulerian circuits. Therefore, if the graph is not
connected (or not strongly connected, for directed graphs), this function
returns False.
Parameters
----------
G : NetworkX graph
A graph, either directed or undirected.
Examples
--------
>>> nx.is_eulerian(nx.DiGraph({0: [3], 1: [2], 2: [3], 3: [0, 1]}))
True
>>> nx.is_eulerian(nx.complete_graph(5))
True
>>> nx.is_eulerian(nx.petersen_graph())
False
If you prefer to allow graphs with isolated vertices to have Eulerian circuits,
you can first remove such vertices and then call `is_eulerian` as below example shows.
>>> G = nx.Graph([(0, 1), (1, 2), (0, 2)])
>>> G.add_node(3)
>>> nx.is_eulerian(G)
False
>>> G.remove_nodes_from(list(nx.isolates(G)))
>>> nx.is_eulerian(G)
True
| null | (G, *, backend=None, **backend_kwargs) |
30,789 | networkx.algorithms.tree.recognition | is_forest |
Returns True if `G` is a forest.
A forest is a graph with no undirected cycles.
For directed graphs, `G` is a forest if the underlying graph is a forest.
The underlying graph is obtained by treating each directed edge as a single
undirected edge in a multigraph.
Parameters
----------
G : graph
The graph to test.
Returns
-------
b : bool
A boolean that is True if `G` is a forest.
Raises
------
NetworkXPointlessConcept
If `G` is empty.
Examples
--------
>>> G = nx.Graph()
>>> G.add_edges_from([(1, 2), (1, 3), (2, 4), (2, 5)])
>>> nx.is_forest(G)
True
>>> G.add_edge(4, 1)
>>> nx.is_forest(G)
False
Notes
-----
In another convention, a directed forest is known as a *polyforest* and
then *forest* corresponds to a *branching*.
See Also
--------
is_branching
| null | (G, *, backend=None, **backend_kwargs) |
30,790 | networkx.classes.function | is_frozen | Returns True if graph is frozen.
Parameters
----------
G : graph
A NetworkX graph
See Also
--------
freeze
| def is_frozen(G):
"""Returns True if graph is frozen.
Parameters
----------
G : graph
A NetworkX graph
See Also
--------
freeze
"""
try:
return G.frozen
except AttributeError:
return False
| (G) |
30,791 | networkx.algorithms.graphical | is_graphical | Returns True if sequence is a valid degree sequence.
A degree sequence is valid if some graph can realize it.
Parameters
----------
sequence : list or iterable container
A sequence of integer node degrees
method : "eg" | "hh" (default: 'eg')
The method used to validate the degree sequence.
"eg" corresponds to the ErdΕs-Gallai algorithm
[EG1960]_, [choudum1986]_, and
"hh" to the Havel-Hakimi algorithm
[havel1955]_, [hakimi1962]_, [CL1996]_.
Returns
-------
valid : bool
True if the sequence is a valid degree sequence and False if not.
Examples
--------
>>> G = nx.path_graph(4)
>>> sequence = (d for n, d in G.degree())
>>> nx.is_graphical(sequence)
True
To test a non-graphical sequence:
>>> sequence_list = [d for n, d in G.degree()]
>>> sequence_list[-1] += 1
>>> nx.is_graphical(sequence_list)
False
References
----------
.. [EG1960] Erdős and Gallai, Mat. Lapok 11 264, 1960.
.. [choudum1986] S.A. Choudum. "A simple proof of the Erdős–Gallai theorem on
graph sequences." Bulletin of the Australian Mathematical Society, 33,
pp 67-70, 1986. https://doi.org/10.1017/S0004972700002872
.. [havel1955] Havel, V. "A Remark on the Existence of Finite Graphs"
Casopis Pest. Mat. 80, 477-480, 1955.
.. [hakimi1962] Hakimi, S. "On the Realizability of a Set of Integers as
Degrees of the Vertices of a Graph." SIAM J. Appl. Math. 10, 496-506, 1962.
.. [CL1996] G. Chartrand and L. Lesniak, "Graphs and Digraphs",
Chapman and Hall/CRC, 1996.
| null | (sequence, method='eg', *, backend=None, **backend_kwargs) |
30,792 | networkx.algorithms.isolate | is_isolate | Determines whether a node is an isolate.
An *isolate* is a node with no neighbors (that is, with degree
zero). For directed graphs, this means no in-neighbors and no
out-neighbors.
Parameters
----------
G : NetworkX graph
n : node
A node in `G`.
Returns
-------
is_isolate : bool
True if and only if `n` has no neighbors.
Examples
--------
>>> G = nx.Graph()
>>> G.add_edge(1, 2)
>>> G.add_node(3)
>>> nx.is_isolate(G, 2)
False
>>> nx.is_isolate(G, 3)
True
| null | (G, n, *, backend=None, **backend_kwargs) |
30,793 | networkx.algorithms.isomorphism.isomorph | is_isomorphic | Returns True if the graphs G1 and G2 are isomorphic and False otherwise.
Parameters
----------
G1, G2: graphs
The two graphs G1 and G2 must be the same type.
node_match : callable
A function that returns True if node n1 in G1 and n2 in G2 should
be considered equal during the isomorphism test.
If node_match is not specified then node attributes are not considered.
The function will be called like
node_match(G1.nodes[n1], G2.nodes[n2]).
That is, the function will receive the node attribute dictionaries
for n1 and n2 as inputs.
edge_match : callable
A function that returns True if the edge attribute dictionary
for the pair of nodes (u1, v1) in G1 and (u2, v2) in G2 should
be considered equal during the isomorphism test. If edge_match is
not specified then edge attributes are not considered.
The function will be called like
edge_match(G1[u1][v1], G2[u2][v2]).
That is, the function will receive the edge attribute dictionaries
of the edges under consideration.
Notes
-----
Uses the vf2 algorithm [1]_.
Examples
--------
>>> import networkx.algorithms.isomorphism as iso
For digraphs G1 and G2, using 'weight' edge attribute (default: 1)
>>> G1 = nx.DiGraph()
>>> G2 = nx.DiGraph()
>>> nx.add_path(G1, [1, 2, 3, 4], weight=1)
>>> nx.add_path(G2, [10, 20, 30, 40], weight=2)
>>> em = iso.numerical_edge_match("weight", 1)
>>> nx.is_isomorphic(G1, G2) # no weights considered
True
>>> nx.is_isomorphic(G1, G2, edge_match=em) # match weights
False
For multidigraphs G1 and G2, using 'fill' node attribute (default: '')
>>> G1 = nx.MultiDiGraph()
>>> G2 = nx.MultiDiGraph()
>>> G1.add_nodes_from([1, 2, 3], fill="red")
>>> G2.add_nodes_from([10, 20, 30, 40], fill="red")
>>> nx.add_path(G1, [1, 2, 3, 4], weight=3, linewidth=2.5)
>>> nx.add_path(G2, [10, 20, 30, 40], weight=3)
>>> nm = iso.categorical_node_match("fill", "red")
>>> nx.is_isomorphic(G1, G2, node_match=nm)
True
For multidigraphs G1 and G2, using 'weight' edge attribute (default: 7)
>>> G1.add_edge(1, 2, weight=7)
1
>>> G2.add_edge(10, 20)
1
>>> em = iso.numerical_multiedge_match("weight", 7, rtol=1e-6)
>>> nx.is_isomorphic(G1, G2, edge_match=em)
True
For multigraphs G1 and G2, using 'weight' and 'linewidth' edge attributes
with default values 7 and 2.5. Also using 'fill' node attribute with
default value 'red'.
>>> em = iso.numerical_multiedge_match(["weight", "linewidth"], [7, 2.5])
>>> nm = iso.categorical_node_match("fill", "red")
>>> nx.is_isomorphic(G1, G2, edge_match=em, node_match=nm)
True
See Also
--------
numerical_node_match, numerical_edge_match, numerical_multiedge_match
categorical_node_match, categorical_edge_match, categorical_multiedge_match
References
----------
.. [1] L. P. Cordella, P. Foggia, C. Sansone, M. Vento,
"An Improved Algorithm for Matching Large Graphs",
3rd IAPR-TC15 Workshop on Graph-based Representations in
Pattern Recognition, Cuen, pp. 149-159, 2001.
https://www.researchgate.net/publication/200034365_An_Improved_Algorithm_for_Matching_Large_Graphs
| null | (G1, G2, node_match=None, edge_match=None, *, backend=None, **backend_kwargs) |
30,794 | networkx.algorithms.connectivity.edge_augmentation | is_k_edge_connected | Tests to see if a graph is k-edge-connected.
Is it impossible to disconnect the graph by removing fewer than k edges?
If so, then G is k-edge-connected.
Parameters
----------
G : NetworkX graph
An undirected graph.
k : integer
edge connectivity to test for
Returns
-------
boolean
True if G is k-edge-connected.
See Also
--------
:func:`is_locally_k_edge_connected`
Examples
--------
>>> G = nx.barbell_graph(10, 0)
>>> nx.is_k_edge_connected(G, k=1)
True
>>> nx.is_k_edge_connected(G, k=2)
False
| def unconstrained_bridge_augmentation(G):
"""Finds an optimal 2-edge-augmentation of G using the fewest edges.
This is an implementation of the algorithm detailed in [1]_.
The basic idea is to construct a meta-graph of bridge-ccs, connect leaf
nodes of the trees to connect the entire graph, and finally connect the
leafs of the tree in dfs-preorder to bridge connect the entire graph.
Parameters
----------
G : NetworkX graph
An undirected graph.
Yields
------
edge : tuple
Edges in the bridge augmentation of G
Notes
-----
Input: a graph G.
First find the bridge components of G and collapse each bridge-cc into a
node of a metagraph graph C, which is guaranteed to be a forest of trees.
C contains p "leafs" --- nodes with exactly one incident edge.
C contains q "isolated nodes" --- nodes with no incident edges.
Theorem: If p + q > 1, then at least :math:`ceil(p / 2) + q` edges are
needed to bridge connect C. This algorithm achieves this min number.
The method first adds enough edges to make G into a tree and then pairs
leafs in a simple fashion.
Let n be the number of trees in C. Let v(i) be an isolated vertex in the
i-th tree if one exists, otherwise it is a pair of distinct leafs nodes
in the i-th tree. Alternating edges from these sets (i.e. adding edges
A1 = [(v(i)[0], v(i + 1)[1]), (v(i + 1)[0], v(i + 2)[1]), ...]) connects C
into a tree T. This tree has p' = p + 2q - 2(n -1) leafs and no isolated
vertices. A1 has n - 1 edges. The next step finds ceil(p' / 2) edges to
biconnect any tree with p' leafs.
Convert T into an arborescence T' by picking an arbitrary root node with
degree >= 2 and directing all edges away from the root. Note the
implementation implicitly constructs T'.
The leafs of T are the nodes with no existing edges in T'.
Order the leafs of T' by DFS preorder. Then break this list in half
and add the zipped pairs to A2.
The set A = A1 + A2 is the minimum augmentation in the metagraph.
To convert this to edges in the original graph
References
----------
.. [1] Eswaran, Kapali P., and R. Endre Tarjan. (1975) Augmentation problems.
http://epubs.siam.org/doi/abs/10.1137/0205044
See Also
--------
:func:`bridge_augmentation`
:func:`k_edge_augmentation`
Examples
--------
>>> G = nx.path_graph((1, 2, 3, 4, 5, 6, 7))
>>> sorted(unconstrained_bridge_augmentation(G))
[(1, 7)]
>>> G = nx.path_graph((1, 2, 3, 2, 4, 5, 6, 7))
>>> sorted(unconstrained_bridge_augmentation(G))
[(1, 3), (3, 7)]
>>> G = nx.Graph([(0, 1), (0, 2), (1, 2)])
>>> G.add_node(4)
>>> sorted(unconstrained_bridge_augmentation(G))
[(1, 4), (4, 0)]
"""
# -----
# Mapping of terms from (Eswaran and Tarjan):
# G = G_0 - the input graph
# C = G_0' - the bridge condensation of G. (This is a forest of trees)
# A1 = A_1 - the edges to connect the forest into a tree
# leaf = pendant - a node with degree of 1
# alpha(v) = maps the node v in G to its meta-node in C
# beta(x) = maps the meta-node x in C to any node in the bridge
# component of G corresponding to x.
# find the 2-edge-connected components of G
bridge_ccs = list(nx.connectivity.bridge_components(G))
# condense G into a forest C
C = collapse(G, bridge_ccs)
# Choose pairs of distinct leaf nodes in each tree. If this is not
# possible then make a pair using the single isolated node in the tree.
vset1 = [
tuple(cc) * 2 # case1: an isolated node
if len(cc) == 1
else sorted(cc, key=C.degree)[0:2] # case2: pair of leaf nodes
for cc in nx.connected_components(C)
]
if len(vset1) > 1:
# Use this set to construct edges that connect C into a tree.
nodes1 = [vs[0] for vs in vset1]
nodes2 = [vs[1] for vs in vset1]
A1 = list(zip(nodes1[1:], nodes2))
else:
A1 = []
# Connect each tree in the forest to construct an arborescence
T = C.copy()
T.add_edges_from(A1)
# If there are only two leaf nodes, we simply connect them.
leafs = [n for n, d in T.degree() if d == 1]
if len(leafs) == 1:
A2 = []
if len(leafs) == 2:
A2 = [tuple(leafs)]
else:
# Choose an arbitrary non-leaf root
try:
root = next(n for n, d in T.degree() if d > 1)
except StopIteration: # no nodes found with degree > 1
return
# order the leaves of C by (induced directed) preorder
v2 = [n for n in nx.dfs_preorder_nodes(T, root) if T.degree(n) == 1]
# connecting first half of the leafs in pre-order to the second
# half will bridge connect the tree with the fewest edges.
half = math.ceil(len(v2) / 2)
A2 = list(zip(v2[:half], v2[-half:]))
# collect the edges used to augment the original forest
aug_tree_edges = A1 + A2
# Construct the mapping (beta) from meta-nodes to regular nodes
inverse = defaultdict(list)
for k, v in C.graph["mapping"].items():
inverse[v].append(k)
# sort so we choose minimum degree nodes first
inverse = {
mu: sorted(mapped, key=lambda u: (G.degree(u), u))
for mu, mapped in inverse.items()
}
# For each meta-edge, map back to an arbitrary pair in the original graph
G2 = G.copy()
for mu, mv in aug_tree_edges:
# Find the first available edge that doesn't exist and return it
for u, v in it.product(inverse[mu], inverse[mv]):
if not G2.has_edge(u, v):
G2.add_edge(u, v)
yield u, v
break
| (G, k, *, backend=None, **backend_kwargs) |
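As a worked illustration of the leaf-pairing step described in the Notes above, the short sketch below (my own, not part of the library) extracts just that step: root a tree at a node of degree >= 2, list its leaves in DFS preorder, and join the first half to the second half. Per the theorem quoted in the docstring, ceil(p / 2) such edges leave the tree bridgeless; the helper name pair_leaves_to_bridge_connect and the balanced_tree test graph are my own choices.

import math
import networkx as nx

def pair_leaves_to_bridge_connect(T):
    # Root at any node of degree >= 2 (fall back to an arbitrary node),
    # mirroring the root selection in the implementation above.
    root = next((n for n, d in T.degree() if d > 1), next(iter(T)))
    leaves = [n for n in nx.dfs_preorder_nodes(T, root) if T.degree(n) == 1]
    half = math.ceil(len(leaves) / 2)
    # Join the first half of the preorder leaves to the second half.
    return list(zip(leaves[:half], leaves[-half:]))

T = nx.balanced_tree(2, 3)          # a tree with 8 leaves
aug = pair_leaves_to_bridge_connect(T)
H = T.copy()
H.add_edges_from(aug)
print(len(aug), nx.has_bridges(H))  # expected: 4 False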
30,795 | networkx.algorithms.regular | is_k_regular | Determines whether the graph ``G`` is a k-regular graph.
A k-regular graph is a graph where each vertex has degree k.
Parameters
----------
G : NetworkX graph
k : int
The degree to test each node against.
Returns
-------
bool
Whether the given graph is k-regular.
Examples
--------
>>> G = nx.Graph([(1, 2), (2, 3), (3, 4), (4, 1)])
>>> nx.is_k_regular(G, k=3)
False
| null | (G, k, *, backend=None, **backend_kwargs) |
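The code field for this entry is empty, so here is a minimal sketch of the same check (not necessarily the library's implementation): a graph is k-regular exactly when every node has degree k, which is a single pass over the degree view.

import networkx as nx

def is_k_regular_sketch(G, k):
    # every node must have degree exactly k
    return all(d == k for _, d in G.degree())

G = nx.cycle_graph(4)   # the same 4-cycle as in the example above
print(is_k_regular_sketch(G, 2), is_k_regular_sketch(G, 3))  # True False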
30,796 | networkx.algorithms.hybrid | is_kl_connected | Returns True if and only if `G` is locally `(k, l)`-connected.
A graph is locally `(k, l)`-connected if for each edge `(u, v)` in the
graph there are at least `l` edge-disjoint paths of length at most `k`
joining `u` to `v`.
Parameters
----------
G : NetworkX graph
The graph to test for local `(k, l)`-connectedness.
k : integer
The maximum length of paths to consider. A higher number means a looser
connectivity requirement.
l : integer
The number of edge-disjoint paths. A higher number means a stricter
connectivity requirement.
low_memory : bool
If this is True, this function uses an algorithm that uses slightly
more time but less memory.
Returns
-------
bool
Whether the graph is locally `(k, l)`-connected subgraph.
See also
--------
kl_connected_subgraph
References
----------
.. [1] Chung, Fan and Linyuan Lu. "The Small World Phenomenon in Hybrid
Power Law Graphs." *Complex Networks*. Springer Berlin Heidelberg,
2004. 89--104.
| null | (G, k, l, low_memory=False, *, backend=None, **backend_kwargs) |
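A brief usage sketch for this entry; the expected results below follow directly from the definition rather than from any corner case of the implementation. In a complete graph every edge has several short edge-disjoint detours, while in a path graph the edge itself is the only path joining its endpoints.

import networkx as nx

G = nx.complete_graph(5)
# each edge (u, v) has the edge itself plus three length-2 detours through the other nodes
print(nx.is_kl_connected(G, 2, 2))  # True

H = nx.path_graph(4)
# for the edge (0, 1) there is exactly one path joining 0 and 1, so l = 2 cannot be met
print(nx.is_kl_connected(H, 2, 2))  # False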
30,797 | networkx.algorithms.matching | is_matching | Return True if ``matching`` is a valid matching of ``G``
A *matching* in a graph is a set of edges in which no two distinct
edges share a common endpoint. Each node is incident to at most one
edge in the matching. The edges are said to be independent.
Parameters
----------
G : NetworkX graph
matching : dict or set
A dictionary or set representing a matching. If a dictionary, it
must have ``matching[u] == v`` and ``matching[v] == u`` for each
edge ``(u, v)`` in the matching. If a set, it must have elements
of the form ``(u, v)``, where ``(u, v)`` is an edge in the
matching.
Returns
-------
bool
Whether the given set or dictionary represents a valid matching
in the graph.
Raises
------
NetworkXError
If the proposed matching has an edge to a node not in G.
Or if the matching is not a collection of 2-tuple edges.
Examples
--------
>>> G = nx.Graph([(1, 2), (1, 3), (2, 3), (2, 4), (3, 5), (4, 5)])
>>> nx.is_matching(G, {1: 3, 2: 4})  # using dict to represent matching
True
>>> nx.is_matching(G, {(1, 3), (2, 4)}) # using set to represent matching
True
| @not_implemented_for("multigraph")
@not_implemented_for("directed")
@nx._dispatchable(edge_attrs="weight")
def max_weight_matching(G, maxcardinality=False, weight="weight"):
"""Compute a maximum-weighted matching of G.
A matching is a subset of edges in which no node occurs more than once.
The weight of a matching is the sum of the weights of its edges.
A maximal matching cannot add more edges and still be a matching.
The cardinality of a matching is the number of matched edges.
Parameters
----------
G : NetworkX graph
Undirected graph
maxcardinality: bool, optional (default=False)
If maxcardinality is True, compute the maximum-cardinality matching
with maximum weight among all maximum-cardinality matchings.
weight: string, optional (default='weight')
Edge data key corresponding to the edge weight.
If key not found, uses 1 as weight.
Returns
-------
matching : set
A maximal matching of the graph.
Examples
--------
>>> G = nx.Graph()
>>> edges = [(1, 2, 6), (1, 3, 2), (2, 3, 1), (2, 4, 7), (3, 5, 9), (4, 5, 3)]
>>> G.add_weighted_edges_from(edges)
>>> sorted(nx.max_weight_matching(G))
[(2, 4), (5, 3)]
Notes
-----
If G has edges with weight attributes the edge data are used as
weight values else the weights are assumed to be 1.
This function takes time O(number_of_nodes ** 3).
If all edge weights are integers, the algorithm uses only integer
computations. If floating point weights are used, the algorithm
could return a slightly suboptimal matching due to numeric
precision errors.
This method is based on the "blossom" method for finding augmenting
paths and the "primal-dual" method for finding a matching of maximum
weight, both methods invented by Jack Edmonds [1]_.
Bipartite graphs can also be matched using the functions present in
:mod:`networkx.algorithms.bipartite.matching`.
References
----------
.. [1] "Efficient Algorithms for Finding Maximum Matching in Graphs",
Zvi Galil, ACM Computing Surveys, 1986.
"""
#
# The algorithm is taken from "Efficient Algorithms for Finding Maximum
# Matching in Graphs" by Zvi Galil, ACM Computing Surveys, 1986.
# It is based on the "blossom" method for finding augmenting paths and
# the "primal-dual" method for finding a matching of maximum weight, both
# methods invented by Jack Edmonds.
#
# A C program for maximum weight matching by Ed Rothberg was used
# extensively to validate this new code.
#
# Many terms used in the code comments are explained in the paper
# by Galil. You will probably need the paper to make sense of this code.
#
class NoNode:
"""Dummy value which is different from any node."""
class Blossom:
"""Representation of a non-trivial blossom or sub-blossom."""
__slots__ = ["childs", "edges", "mybestedges"]
# b.childs is an ordered list of b's sub-blossoms, starting with
# the base and going round the blossom.
# b.edges is the list of b's connecting edges, such that
# b.edges[i] = (v, w) where v is a vertex in b.childs[i]
# and w is a vertex in b.childs[wrap(i+1)].
# If b is a top-level S-blossom,
# b.mybestedges is a list of least-slack edges to neighboring
# S-blossoms, or None if no such list has been computed yet.
# This is used for efficient computation of delta3.
# Generate the blossom's leaf vertices.
def leaves(self):
stack = [*self.childs]
while stack:
t = stack.pop()
if isinstance(t, Blossom):
stack.extend(t.childs)
else:
yield t
# Get a list of vertices.
gnodes = list(G)
if not gnodes:
return set() # don't bother with empty graphs
# Find the maximum edge weight.
maxweight = 0
allinteger = True
for i, j, d in G.edges(data=True):
wt = d.get(weight, 1)
if i != j and wt > maxweight:
maxweight = wt
allinteger = allinteger and (str(type(wt)).split("'")[1] in ("int", "long"))
# If v is a matched vertex, mate[v] is its partner vertex.
# If v is a single vertex, v does not occur as a key in mate.
# Initially all vertices are single; updated during augmentation.
mate = {}
# If b is a top-level blossom,
# label.get(b) is None if b is unlabeled (free),
# 1 if b is an S-blossom,
# 2 if b is a T-blossom.
# The label of a vertex is found by looking at the label of its top-level
# containing blossom.
# If v is a vertex inside a T-blossom, label[v] is 2 iff v is reachable
# from an S-vertex outside the blossom.
# Labels are assigned during a stage and reset after each augmentation.
label = {}
# If b is a labeled top-level blossom,
# labeledge[b] = (v, w) is the edge through which b obtained its label
# such that w is a vertex in b, or None if b's base vertex is single.
# If w is a vertex inside a T-blossom and label[w] == 2,
# labeledge[w] = (v, w) is an edge through which w is reachable from
# outside the blossom.
labeledge = {}
# If v is a vertex, inblossom[v] is the top-level blossom to which v
# belongs.
# If v is a top-level vertex, inblossom[v] == v since v is itself
# a (trivial) top-level blossom.
# Initially all vertices are top-level trivial blossoms.
inblossom = dict(zip(gnodes, gnodes))
# If b is a sub-blossom,
# blossomparent[b] is its immediate parent (sub-)blossom.
# If b is a top-level blossom, blossomparent[b] is None.
blossomparent = dict(zip(gnodes, repeat(None)))
# If b is a (sub-)blossom,
# blossombase[b] is its base VERTEX (i.e. recursive sub-blossom).
blossombase = dict(zip(gnodes, gnodes))
# If w is a free vertex (or an unreached vertex inside a T-blossom),
# bestedge[w] = (v, w) is the least-slack edge from an S-vertex,
# or None if there is no such edge.
# If b is a (possibly trivial) top-level S-blossom,
# bestedge[b] = (v, w) is the least-slack edge to a different S-blossom
# (v inside b), or None if there is no such edge.
# This is used for efficient computation of delta2 and delta3.
bestedge = {}
# If v is a vertex,
# dualvar[v] = 2 * u(v) where u(v) is v's variable in the dual
# optimization problem (if all edge weights are integers, multiplication
# by two ensures that all values remain integers throughout the algorithm).
# Initially, u(v) = maxweight / 2.
dualvar = dict(zip(gnodes, repeat(maxweight)))
# If b is a non-trivial blossom,
# blossomdual[b] = z(b) where z(b) is b's variable in the dual
# optimization problem.
blossomdual = {}
# If (v, w) in allowedge or (w, v) in allowedge, then the edge
# (v, w) is known to have zero slack in the optimization problem;
# otherwise the edge may or may not have zero slack.
allowedge = {}
# Queue of newly discovered S-vertices.
queue = []
# Return 2 * slack of edge (v, w) (does not work inside blossoms).
def slack(v, w):
return dualvar[v] + dualvar[w] - 2 * G[v][w].get(weight, 1)
# Assign label t to the top-level blossom containing vertex w,
# coming through an edge from vertex v.
def assignLabel(w, t, v):
b = inblossom[w]
assert label.get(w) is None and label.get(b) is None
label[w] = label[b] = t
if v is not None:
labeledge[w] = labeledge[b] = (v, w)
else:
labeledge[w] = labeledge[b] = None
bestedge[w] = bestedge[b] = None
if t == 1:
# b became an S-vertex/blossom; add it(s vertices) to the queue.
if isinstance(b, Blossom):
queue.extend(b.leaves())
else:
queue.append(b)
elif t == 2:
# b became a T-vertex/blossom; assign label S to its mate.
# (If b is a non-trivial blossom, its base is the only vertex
# with an external mate.)
base = blossombase[b]
assignLabel(mate[base], 1, base)
# Trace back from vertices v and w to discover either a new blossom
# or an augmenting path. Return the base vertex of the new blossom,
# or NoNode if an augmenting path was found.
def scanBlossom(v, w):
# Trace back from v and w, placing breadcrumbs as we go.
path = []
base = NoNode
while v is not NoNode:
# Look for a breadcrumb in v's blossom or put a new breadcrumb.
b = inblossom[v]
if label[b] & 4:
base = blossombase[b]
break
assert label[b] == 1
path.append(b)
label[b] = 5
# Trace one step back.
if labeledge[b] is None:
# The base of blossom b is single; stop tracing this path.
assert blossombase[b] not in mate
v = NoNode
else:
assert labeledge[b][0] == mate[blossombase[b]]
v = labeledge[b][0]
b = inblossom[v]
assert label[b] == 2
# b is a T-blossom; trace one more step back.
v = labeledge[b][0]
# Swap v and w so that we alternate between both paths.
if w is not NoNode:
v, w = w, v
# Remove breadcrumbs.
for b in path:
label[b] = 1
# Return base vertex, if we found one.
return base
# Construct a new blossom with given base, through S-vertices v and w.
# Label the new blossom as S; set its dual variable to zero;
# relabel its T-vertices to S and add them to the queue.
def addBlossom(base, v, w):
bb = inblossom[base]
bv = inblossom[v]
bw = inblossom[w]
# Create blossom.
b = Blossom()
blossombase[b] = base
blossomparent[b] = None
blossomparent[bb] = b
# Make list of sub-blossoms and their interconnecting edge endpoints.
b.childs = path = []
b.edges = edgs = [(v, w)]
# Trace back from v to base.
while bv != bb:
# Add bv to the new blossom.
blossomparent[bv] = b
path.append(bv)
edgs.append(labeledge[bv])
assert label[bv] == 2 or (
label[bv] == 1 and labeledge[bv][0] == mate[blossombase[bv]]
)
# Trace one step back.
v = labeledge[bv][0]
bv = inblossom[v]
# Add base sub-blossom; reverse lists.
path.append(bb)
path.reverse()
edgs.reverse()
# Trace back from w to base.
while bw != bb:
# Add bw to the new blossom.
blossomparent[bw] = b
path.append(bw)
edgs.append((labeledge[bw][1], labeledge[bw][0]))
assert label[bw] == 2 or (
label[bw] == 1 and labeledge[bw][0] == mate[blossombase[bw]]
)
# Trace one step back.
w = labeledge[bw][0]
bw = inblossom[w]
# Set label to S.
assert label[bb] == 1
label[b] = 1
labeledge[b] = labeledge[bb]
# Set dual variable to zero.
blossomdual[b] = 0
# Relabel vertices.
for v in b.leaves():
if label[inblossom[v]] == 2:
# This T-vertex now turns into an S-vertex because it becomes
# part of an S-blossom; add it to the queue.
queue.append(v)
inblossom[v] = b
# Compute b.mybestedges.
bestedgeto = {}
for bv in path:
if isinstance(bv, Blossom):
if bv.mybestedges is not None:
# Walk this subblossom's least-slack edges.
nblist = bv.mybestedges
# The sub-blossom won't need this data again.
bv.mybestedges = None
else:
# This subblossom does not have a list of least-slack
# edges; get the information from the vertices.
nblist = [
(v, w) for v in bv.leaves() for w in G.neighbors(v) if v != w
]
else:
nblist = [(bv, w) for w in G.neighbors(bv) if bv != w]
for k in nblist:
(i, j) = k
if inblossom[j] == b:
i, j = j, i
bj = inblossom[j]
if (
bj != b
and label.get(bj) == 1
and ((bj not in bestedgeto) or slack(i, j) < slack(*bestedgeto[bj]))
):
bestedgeto[bj] = k
# Forget about least-slack edge of the subblossom.
bestedge[bv] = None
b.mybestedges = list(bestedgeto.values())
# Select bestedge[b].
mybestedge = None
bestedge[b] = None
for k in b.mybestedges:
kslack = slack(*k)
if mybestedge is None or kslack < mybestslack:
mybestedge = k
mybestslack = kslack
bestedge[b] = mybestedge
# Expand the given top-level blossom.
def expandBlossom(b, endstage):
# This is an obnoxiously complicated recursive function for the sake of
# a stack-transformation. So, we hack around the complexity by using
# a trampoline pattern. By yielding the arguments to each recursive
# call, we keep the actual callstack flat.
def _recurse(b, endstage):
# Convert sub-blossoms into top-level blossoms.
for s in b.childs:
blossomparent[s] = None
if isinstance(s, Blossom):
if endstage and blossomdual[s] == 0:
# Recursively expand this sub-blossom.
yield s
else:
for v in s.leaves():
inblossom[v] = s
else:
inblossom[s] = s
# If we expand a T-blossom during a stage, its sub-blossoms must be
# relabeled.
if (not endstage) and label.get(b) == 2:
# Start at the sub-blossom through which the expanding
# blossom obtained its label, and relabel sub-blossoms until
# we reach the base.
# Figure out through which sub-blossom the expanding blossom
# obtained its label initially.
entrychild = inblossom[labeledge[b][1]]
# Decide in which direction we will go round the blossom.
j = b.childs.index(entrychild)
if j & 1:
# Start index is odd; go forward and wrap.
j -= len(b.childs)
jstep = 1
else:
# Start index is even; go backward.
jstep = -1
# Move along the blossom until we get to the base.
v, w = labeledge[b]
while j != 0:
# Relabel the T-sub-blossom.
if jstep == 1:
p, q = b.edges[j]
else:
q, p = b.edges[j - 1]
label[w] = None
label[q] = None
assignLabel(w, 2, v)
# Step to the next S-sub-blossom and note its forward edge.
allowedge[(p, q)] = allowedge[(q, p)] = True
j += jstep
if jstep == 1:
v, w = b.edges[j]
else:
w, v = b.edges[j - 1]
# Step to the next T-sub-blossom.
allowedge[(v, w)] = allowedge[(w, v)] = True
j += jstep
# Relabel the base T-sub-blossom WITHOUT stepping through to
# its mate (so don't call assignLabel).
bw = b.childs[j]
label[w] = label[bw] = 2
labeledge[w] = labeledge[bw] = (v, w)
bestedge[bw] = None
# Continue along the blossom until we get back to entrychild.
j += jstep
while b.childs[j] != entrychild:
# Examine the vertices of the sub-blossom to see whether
# it is reachable from a neighboring S-vertex outside the
# expanding blossom.
bv = b.childs[j]
if label.get(bv) == 1:
# This sub-blossom just got label S through one of its
# neighbors; leave it be.
j += jstep
continue
if isinstance(bv, Blossom):
for v in bv.leaves():
if label.get(v):
break
else:
v = bv
# If the sub-blossom contains a reachable vertex, assign
# label T to the sub-blossom.
if label.get(v):
assert label[v] == 2
assert inblossom[v] == bv
label[v] = None
label[mate[blossombase[bv]]] = None
assignLabel(v, 2, labeledge[v][0])
j += jstep
# Remove the expanded blossom entirely.
label.pop(b, None)
labeledge.pop(b, None)
bestedge.pop(b, None)
del blossomparent[b]
del blossombase[b]
del blossomdual[b]
# Now, we apply the trampoline pattern. We simulate a recursive
# callstack by maintaining a stack of generators, each yielding a
# sequence of function arguments. We grow the stack by appending a call
# to _recurse on each argument tuple, and shrink the stack whenever a
# generator is exhausted.
stack = [_recurse(b, endstage)]
while stack:
top = stack[-1]
for s in top:
stack.append(_recurse(s, endstage))
break
else:
stack.pop()
# Swap matched/unmatched edges over an alternating path through blossom b
# between vertex v and the base vertex. Keep blossom bookkeeping
# consistent.
def augmentBlossom(b, v):
# This is an obnoxiously complicated recursive function for the sake of
# a stack-transformation. So, we hack around the complexity by using
# a trampoline pattern. By yielding the arguments to each recursive
# call, we keep the actual callstack flat.
def _recurse(b, v):
# Bubble up through the blossom tree from vertex v to an immediate
# sub-blossom of b.
t = v
while blossomparent[t] != b:
t = blossomparent[t]
# Recursively deal with the first sub-blossom.
if isinstance(t, Blossom):
yield (t, v)
# Decide in which direction we will go round the blossom.
i = j = b.childs.index(t)
if i & 1:
# Start index is odd; go forward and wrap.
j -= len(b.childs)
jstep = 1
else:
# Start index is even; go backward.
jstep = -1
# Move along the blossom until we get to the base.
while j != 0:
# Step to the next sub-blossom and augment it recursively.
j += jstep
t = b.childs[j]
if jstep == 1:
w, x = b.edges[j]
else:
x, w = b.edges[j - 1]
if isinstance(t, Blossom):
yield (t, w)
# Step to the next sub-blossom and augment it recursively.
j += jstep
t = b.childs[j]
if isinstance(t, Blossom):
yield (t, x)
# Match the edge connecting those sub-blossoms.
mate[w] = x
mate[x] = w
# Rotate the list of sub-blossoms to put the new base at the front.
b.childs = b.childs[i:] + b.childs[:i]
b.edges = b.edges[i:] + b.edges[:i]
blossombase[b] = blossombase[b.childs[0]]
assert blossombase[b] == v
# Now, we apply the trampoline pattern. We simulate a recursive
# callstack by maintaining a stack of generators, each yielding a
# sequence of function arguments. We grow the stack by appending a call
# to _recurse on each argument tuple, and shrink the stack whenever a
# generator is exhausted.
stack = [_recurse(b, v)]
while stack:
top = stack[-1]
for args in top:
stack.append(_recurse(*args))
break
else:
stack.pop()
# Swap matched/unmatched edges over an alternating path between two
# single vertices. The augmenting path runs through S-vertices v and w.
def augmentMatching(v, w):
for s, j in ((v, w), (w, v)):
# Match vertex s to vertex j. Then trace back from s
# until we find a single vertex, swapping matched and unmatched
# edges as we go.
while 1:
bs = inblossom[s]
assert label[bs] == 1
assert (labeledge[bs] is None and blossombase[bs] not in mate) or (
labeledge[bs][0] == mate[blossombase[bs]]
)
# Augment through the S-blossom from s to base.
if isinstance(bs, Blossom):
augmentBlossom(bs, s)
# Update mate[s]
mate[s] = j
# Trace one step back.
if labeledge[bs] is None:
# Reached single vertex; stop.
break
t = labeledge[bs][0]
bt = inblossom[t]
assert label[bt] == 2
# Trace one more step back.
s, j = labeledge[bt]
# Augment through the T-blossom from j to base.
assert blossombase[bt] == t
if isinstance(bt, Blossom):
augmentBlossom(bt, j)
# Update mate[j]
mate[j] = s
# Verify that the optimum solution has been reached.
def verifyOptimum():
if maxcardinality:
# Vertices may have negative dual;
# find a constant non-negative number to add to all vertex duals.
vdualoffset = max(0, -min(dualvar.values()))
else:
vdualoffset = 0
# 0. all dual variables are non-negative
assert min(dualvar.values()) + vdualoffset >= 0
assert len(blossomdual) == 0 or min(blossomdual.values()) >= 0
# 0. all edges have non-negative slack and
# 1. all matched edges have zero slack;
for i, j, d in G.edges(data=True):
wt = d.get(weight, 1)
if i == j:
continue # ignore self-loops
s = dualvar[i] + dualvar[j] - 2 * wt
iblossoms = [i]
jblossoms = [j]
while blossomparent[iblossoms[-1]] is not None:
iblossoms.append(blossomparent[iblossoms[-1]])
while blossomparent[jblossoms[-1]] is not None:
jblossoms.append(blossomparent[jblossoms[-1]])
iblossoms.reverse()
jblossoms.reverse()
for bi, bj in zip(iblossoms, jblossoms):
if bi != bj:
break
s += 2 * blossomdual[bi]
assert s >= 0
if mate.get(i) == j or mate.get(j) == i:
assert mate[i] == j and mate[j] == i
assert s == 0
# 2. all single vertices have zero dual value;
for v in gnodes:
assert (v in mate) or dualvar[v] + vdualoffset == 0
# 3. all blossoms with positive dual value are full.
for b in blossomdual:
if blossomdual[b] > 0:
assert len(b.edges) % 2 == 1
for i, j in b.edges[1::2]:
assert mate[i] == j and mate[j] == i
# Ok.
# Main loop: continue until no further improvement is possible.
while 1:
# Each iteration of this loop is a "stage".
# A stage finds an augmenting path and uses that to improve
# the matching.
# Remove labels from top-level blossoms/vertices.
label.clear()
labeledge.clear()
# Forget all about least-slack edges.
bestedge.clear()
for b in blossomdual:
b.mybestedges = None
# Loss of labeling means that we can not be sure that currently
# allowable edges remain allowable throughout this stage.
allowedge.clear()
# Make queue empty.
queue[:] = []
# Label single blossoms/vertices with S and put them in the queue.
for v in gnodes:
if (v not in mate) and label.get(inblossom[v]) is None:
assignLabel(v, 1, None)
# Loop until we succeed in augmenting the matching.
augmented = 0
while 1:
# Each iteration of this loop is a "substage".
# A substage tries to find an augmenting path;
# if found, the path is used to improve the matching and
# the stage ends. If there is no augmenting path, the
# primal-dual method is used to pump some slack out of
# the dual variables.
# Continue labeling until all vertices which are reachable
# through an alternating path have got a label.
while queue and not augmented:
# Take an S vertex from the queue.
v = queue.pop()
assert label[inblossom[v]] == 1
# Scan its neighbors:
for w in G.neighbors(v):
if w == v:
continue # ignore self-loops
# w is a neighbor to v
bv = inblossom[v]
bw = inblossom[w]
if bv == bw:
# this edge is internal to a blossom; ignore it
continue
if (v, w) not in allowedge:
kslack = slack(v, w)
if kslack <= 0:
# edge k has zero slack => it is allowable
allowedge[(v, w)] = allowedge[(w, v)] = True
if (v, w) in allowedge:
if label.get(bw) is None:
# (C1) w is a free vertex;
# label w with T and label its mate with S (R12).
assignLabel(w, 2, v)
elif label.get(bw) == 1:
# (C2) w is an S-vertex (not in the same blossom);
# follow back-links to discover either an
# augmenting path or a new blossom.
base = scanBlossom(v, w)
if base is not NoNode:
# Found a new blossom; add it to the blossom
# bookkeeping and turn it into an S-blossom.
addBlossom(base, v, w)
else:
# Found an augmenting path; augment the
# matching and end this stage.
augmentMatching(v, w)
augmented = 1
break
elif label.get(w) is None:
# w is inside a T-blossom, but w itself has not
# yet been reached from outside the blossom;
# mark it as reached (we need this to relabel
# during T-blossom expansion).
assert label[bw] == 2
label[w] = 2
labeledge[w] = (v, w)
elif label.get(bw) == 1:
# keep track of the least-slack non-allowable edge to
# a different S-blossom.
if bestedge.get(bv) is None or kslack < slack(*bestedge[bv]):
bestedge[bv] = (v, w)
elif label.get(w) is None:
# w is a free vertex (or an unreached vertex inside
# a T-blossom) but we can not reach it yet;
# keep track of the least-slack edge that reaches w.
if bestedge.get(w) is None or kslack < slack(*bestedge[w]):
bestedge[w] = (v, w)
if augmented:
break
# There is no augmenting path under these constraints;
# compute delta and reduce slack in the optimization problem.
# (Note that our vertex dual variables, edge slacks and delta's
# are pre-multiplied by two.)
deltatype = -1
delta = deltaedge = deltablossom = None
# Compute delta1: the minimum value of any vertex dual.
if not maxcardinality:
deltatype = 1
delta = min(dualvar.values())
# Compute delta2: the minimum slack on any edge between
# an S-vertex and a free vertex.
for v in G.nodes():
if label.get(inblossom[v]) is None and bestedge.get(v) is not None:
d = slack(*bestedge[v])
if deltatype == -1 or d < delta:
delta = d
deltatype = 2
deltaedge = bestedge[v]
# Compute delta3: half the minimum slack on any edge between
# a pair of S-blossoms.
for b in blossomparent:
if (
blossomparent[b] is None
and label.get(b) == 1
and bestedge.get(b) is not None
):
kslack = slack(*bestedge[b])
if allinteger:
assert (kslack % 2) == 0
d = kslack // 2
else:
d = kslack / 2.0
if deltatype == -1 or d < delta:
delta = d
deltatype = 3
deltaedge = bestedge[b]
# Compute delta4: minimum z variable of any T-blossom.
for b in blossomdual:
if (
blossomparent[b] is None
and label.get(b) == 2
and (deltatype == -1 or blossomdual[b] < delta)
):
delta = blossomdual[b]
deltatype = 4
deltablossom = b
if deltatype == -1:
# No further improvement possible; max-cardinality optimum
# reached. Do a final delta update to make the optimum
# verifiable.
assert maxcardinality
deltatype = 1
delta = max(0, min(dualvar.values()))
# Update dual variables according to delta.
for v in gnodes:
if label.get(inblossom[v]) == 1:
# S-vertex: 2*u = 2*u - 2*delta
dualvar[v] -= delta
elif label.get(inblossom[v]) == 2:
# T-vertex: 2*u = 2*u + 2*delta
dualvar[v] += delta
for b in blossomdual:
if blossomparent[b] is None:
if label.get(b) == 1:
# top-level S-blossom: z = z + 2*delta
blossomdual[b] += delta
elif label.get(b) == 2:
# top-level T-blossom: z = z - 2*delta
blossomdual[b] -= delta
# Take action at the point where minimum delta occurred.
if deltatype == 1:
# No further improvement possible; optimum reached.
break
elif deltatype == 2:
# Use the least-slack edge to continue the search.
(v, w) = deltaedge
assert label[inblossom[v]] == 1
allowedge[(v, w)] = allowedge[(w, v)] = True
queue.append(v)
elif deltatype == 3:
# Use the least-slack edge to continue the search.
(v, w) = deltaedge
allowedge[(v, w)] = allowedge[(w, v)] = True
assert label[inblossom[v]] == 1
queue.append(v)
elif deltatype == 4:
# Expand the least-z blossom.
expandBlossom(deltablossom, False)
# End of this substage.
# Paranoia check that the matching is symmetric.
for v in mate:
assert mate[mate[v]] == v
# Stop when no more augmenting path can be found.
if not augmented:
break
# End of a stage; expand all S-blossoms which have zero dual.
for b in list(blossomdual.keys()):
if b not in blossomdual:
continue # already expanded
if blossomparent[b] is None and label.get(b) == 1 and blossomdual[b] == 0:
expandBlossom(b, True)
# Verify that we reached the optimum solution (only for integer weights).
if allinteger:
verifyOptimum()
return matching_dict_to_set(mate)
| (G, matching, *, backend=None, **backend_kwargs) |
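The code paired with this row is the module's max_weight_matching routine rather than the documented predicate, so here is a compact sketch of the validity check itself. It is my own simplification: it returns False where the library would raise NetworkXError for nodes outside G or malformed edges.

import networkx as nx

def is_matching_sketch(G, matching):
    # Normalize both accepted forms (dict {u: v, ...} or set of 2-tuples)
    # to a set of undirected edges.
    pairs = matching.items() if isinstance(matching, dict) else matching
    edges = {frozenset(e) for e in pairs}
    endpoints = [n for e in edges for n in e]
    return (
        all(len(e) == 2 for e in edges)              # no self-loops
        and all(G.has_edge(*e) for e in edges)       # every matched edge is in G
        and len(endpoints) == len(set(endpoints))    # no node matched twice
    )

G = nx.Graph([(1, 2), (1, 3), (2, 3), (2, 4), (3, 5), (4, 5)])
print(is_matching_sketch(G, {(1, 3), (2, 4)}))  # True
print(is_matching_sketch(G, {(1, 3), (3, 5)}))  # False: node 3 appears twice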
30,798 | networkx.algorithms.matching | is_maximal_matching | Return True if ``matching`` is a maximal matching of ``G``
A *maximal matching* in a graph is a matching in which adding any
edge would cause the set to no longer be a valid matching.
Parameters
----------
G : NetworkX graph
matching : dict or set
A dictionary or set representing a matching. If a dictionary, it
must have ``matching[u] == v`` and ``matching[v] == u`` for each
edge ``(u, v)`` in the matching. If a set, it must have elements
of the form ``(u, v)``, where ``(u, v)`` is an edge in the
matching.
Returns
-------
bool
Whether the given set or dictionary represents a valid maximal
matching in the graph.
Examples
--------
>>> G = nx.Graph([(1, 2), (1, 3), (2, 3), (3, 4), (3, 5)])
>>> nx.is_maximal_matching(G, {(1, 2), (3, 4)})
True
| @not_implemented_for("multigraph")
@not_implemented_for("directed")
@nx._dispatchable(edge_attrs="weight")
def max_weight_matching(G, maxcardinality=False, weight="weight"):
"""Compute a maximum-weighted matching of G.
A matching is a subset of edges in which no node occurs more than once.
The weight of a matching is the sum of the weights of its edges.
A maximal matching cannot add more edges and still be a matching.
The cardinality of a matching is the number of matched edges.
Parameters
----------
G : NetworkX graph
Undirected graph
maxcardinality: bool, optional (default=False)
If maxcardinality is True, compute the maximum-cardinality matching
with maximum weight among all maximum-cardinality matchings.
weight: string, optional (default='weight')
Edge data key corresponding to the edge weight.
If key not found, uses 1 as weight.
Returns
-------
matching : set
A maximal matching of the graph.
Examples
--------
>>> G = nx.Graph()
>>> edges = [(1, 2, 6), (1, 3, 2), (2, 3, 1), (2, 4, 7), (3, 5, 9), (4, 5, 3)]
>>> G.add_weighted_edges_from(edges)
>>> sorted(nx.max_weight_matching(G))
[(2, 4), (5, 3)]
Notes
-----
If G has edges with weight attributes the edge data are used as
weight values else the weights are assumed to be 1.
This function takes time O(number_of_nodes ** 3).
If all edge weights are integers, the algorithm uses only integer
computations. If floating point weights are used, the algorithm
could return a slightly suboptimal matching due to numeric
precision errors.
This method is based on the "blossom" method for finding augmenting
paths and the "primal-dual" method for finding a matching of maximum
weight, both methods invented by Jack Edmonds [1]_.
Bipartite graphs can also be matched using the functions present in
:mod:`networkx.algorithms.bipartite.matching`.
References
----------
.. [1] "Efficient Algorithms for Finding Maximum Matching in Graphs",
Zvi Galil, ACM Computing Surveys, 1986.
"""
#
# The algorithm is taken from "Efficient Algorithms for Finding Maximum
# Matching in Graphs" by Zvi Galil, ACM Computing Surveys, 1986.
# It is based on the "blossom" method for finding augmenting paths and
# the "primal-dual" method for finding a matching of maximum weight, both
# methods invented by Jack Edmonds.
#
# A C program for maximum weight matching by Ed Rothberg was used
# extensively to validate this new code.
#
# Many terms used in the code comments are explained in the paper
# by Galil. You will probably need the paper to make sense of this code.
#
class NoNode:
"""Dummy value which is different from any node."""
class Blossom:
"""Representation of a non-trivial blossom or sub-blossom."""
__slots__ = ["childs", "edges", "mybestedges"]
# b.childs is an ordered list of b's sub-blossoms, starting with
# the base and going round the blossom.
# b.edges is the list of b's connecting edges, such that
# b.edges[i] = (v, w) where v is a vertex in b.childs[i]
# and w is a vertex in b.childs[wrap(i+1)].
# If b is a top-level S-blossom,
# b.mybestedges is a list of least-slack edges to neighboring
# S-blossoms, or None if no such list has been computed yet.
# This is used for efficient computation of delta3.
# Generate the blossom's leaf vertices.
def leaves(self):
stack = [*self.childs]
while stack:
t = stack.pop()
if isinstance(t, Blossom):
stack.extend(t.childs)
else:
yield t
# Get a list of vertices.
gnodes = list(G)
if not gnodes:
return set() # don't bother with empty graphs
# Find the maximum edge weight.
maxweight = 0
allinteger = True
for i, j, d in G.edges(data=True):
wt = d.get(weight, 1)
if i != j and wt > maxweight:
maxweight = wt
allinteger = allinteger and (str(type(wt)).split("'")[1] in ("int", "long"))
# If v is a matched vertex, mate[v] is its partner vertex.
# If v is a single vertex, v does not occur as a key in mate.
# Initially all vertices are single; updated during augmentation.
mate = {}
# If b is a top-level blossom,
# label.get(b) is None if b is unlabeled (free),
# 1 if b is an S-blossom,
# 2 if b is a T-blossom.
# The label of a vertex is found by looking at the label of its top-level
# containing blossom.
# If v is a vertex inside a T-blossom, label[v] is 2 iff v is reachable
# from an S-vertex outside the blossom.
# Labels are assigned during a stage and reset after each augmentation.
label = {}
# If b is a labeled top-level blossom,
# labeledge[b] = (v, w) is the edge through which b obtained its label
# such that w is a vertex in b, or None if b's base vertex is single.
# If w is a vertex inside a T-blossom and label[w] == 2,
# labeledge[w] = (v, w) is an edge through which w is reachable from
# outside the blossom.
labeledge = {}
# If v is a vertex, inblossom[v] is the top-level blossom to which v
# belongs.
# If v is a top-level vertex, inblossom[v] == v since v is itself
# a (trivial) top-level blossom.
# Initially all vertices are top-level trivial blossoms.
inblossom = dict(zip(gnodes, gnodes))
# If b is a sub-blossom,
# blossomparent[b] is its immediate parent (sub-)blossom.
# If b is a top-level blossom, blossomparent[b] is None.
blossomparent = dict(zip(gnodes, repeat(None)))
# If b is a (sub-)blossom,
# blossombase[b] is its base VERTEX (i.e. recursive sub-blossom).
blossombase = dict(zip(gnodes, gnodes))
# If w is a free vertex (or an unreached vertex inside a T-blossom),
# bestedge[w] = (v, w) is the least-slack edge from an S-vertex,
# or None if there is no such edge.
# If b is a (possibly trivial) top-level S-blossom,
# bestedge[b] = (v, w) is the least-slack edge to a different S-blossom
# (v inside b), or None if there is no such edge.
# This is used for efficient computation of delta2 and delta3.
bestedge = {}
# If v is a vertex,
# dualvar[v] = 2 * u(v) where u(v) is v's variable in the dual
# optimization problem (if all edge weights are integers, multiplication
# by two ensures that all values remain integers throughout the algorithm).
# Initially, u(v) = maxweight / 2.
dualvar = dict(zip(gnodes, repeat(maxweight)))
# If b is a non-trivial blossom,
# blossomdual[b] = z(b) where z(b) is b's variable in the dual
# optimization problem.
blossomdual = {}
# If (v, w) in allowedge or (w, v) in allowedge, then the edge
# (v, w) is known to have zero slack in the optimization problem;
# otherwise the edge may or may not have zero slack.
allowedge = {}
# Queue of newly discovered S-vertices.
queue = []
# Return 2 * slack of edge (v, w) (does not work inside blossoms).
def slack(v, w):
return dualvar[v] + dualvar[w] - 2 * G[v][w].get(weight, 1)
# Assign label t to the top-level blossom containing vertex w,
# coming through an edge from vertex v.
def assignLabel(w, t, v):
b = inblossom[w]
assert label.get(w) is None and label.get(b) is None
label[w] = label[b] = t
if v is not None:
labeledge[w] = labeledge[b] = (v, w)
else:
labeledge[w] = labeledge[b] = None
bestedge[w] = bestedge[b] = None
if t == 1:
# b became an S-vertex/blossom; add it(s vertices) to the queue.
if isinstance(b, Blossom):
queue.extend(b.leaves())
else:
queue.append(b)
elif t == 2:
# b became a T-vertex/blossom; assign label S to its mate.
# (If b is a non-trivial blossom, its base is the only vertex
# with an external mate.)
base = blossombase[b]
assignLabel(mate[base], 1, base)
# Trace back from vertices v and w to discover either a new blossom
# or an augmenting path. Return the base vertex of the new blossom,
# or NoNode if an augmenting path was found.
def scanBlossom(v, w):
# Trace back from v and w, placing breadcrumbs as we go.
path = []
base = NoNode
while v is not NoNode:
# Look for a breadcrumb in v's blossom or put a new breadcrumb.
b = inblossom[v]
if label[b] & 4:
base = blossombase[b]
break
assert label[b] == 1
path.append(b)
label[b] = 5
# Trace one step back.
if labeledge[b] is None:
# The base of blossom b is single; stop tracing this path.
assert blossombase[b] not in mate
v = NoNode
else:
assert labeledge[b][0] == mate[blossombase[b]]
v = labeledge[b][0]
b = inblossom[v]
assert label[b] == 2
# b is a T-blossom; trace one more step back.
v = labeledge[b][0]
# Swap v and w so that we alternate between both paths.
if w is not NoNode:
v, w = w, v
# Remove breadcrumbs.
for b in path:
label[b] = 1
# Return base vertex, if we found one.
return base
# Construct a new blossom with given base, through S-vertices v and w.
# Label the new blossom as S; set its dual variable to zero;
# relabel its T-vertices to S and add them to the queue.
def addBlossom(base, v, w):
bb = inblossom[base]
bv = inblossom[v]
bw = inblossom[w]
# Create blossom.
b = Blossom()
blossombase[b] = base
blossomparent[b] = None
blossomparent[bb] = b
# Make list of sub-blossoms and their interconnecting edge endpoints.
b.childs = path = []
b.edges = edgs = [(v, w)]
# Trace back from v to base.
while bv != bb:
# Add bv to the new blossom.
blossomparent[bv] = b
path.append(bv)
edgs.append(labeledge[bv])
assert label[bv] == 2 or (
label[bv] == 1 and labeledge[bv][0] == mate[blossombase[bv]]
)
# Trace one step back.
v = labeledge[bv][0]
bv = inblossom[v]
# Add base sub-blossom; reverse lists.
path.append(bb)
path.reverse()
edgs.reverse()
# Trace back from w to base.
while bw != bb:
# Add bw to the new blossom.
blossomparent[bw] = b
path.append(bw)
edgs.append((labeledge[bw][1], labeledge[bw][0]))
assert label[bw] == 2 or (
label[bw] == 1 and labeledge[bw][0] == mate[blossombase[bw]]
)
# Trace one step back.
w = labeledge[bw][0]
bw = inblossom[w]
# Set label to S.
assert label[bb] == 1
label[b] = 1
labeledge[b] = labeledge[bb]
# Set dual variable to zero.
blossomdual[b] = 0
# Relabel vertices.
for v in b.leaves():
if label[inblossom[v]] == 2:
# This T-vertex now turns into an S-vertex because it becomes
# part of an S-blossom; add it to the queue.
queue.append(v)
inblossom[v] = b
# Compute b.mybestedges.
bestedgeto = {}
for bv in path:
if isinstance(bv, Blossom):
if bv.mybestedges is not None:
# Walk this subblossom's least-slack edges.
nblist = bv.mybestedges
# The sub-blossom won't need this data again.
bv.mybestedges = None
else:
# This subblossom does not have a list of least-slack
# edges; get the information from the vertices.
nblist = [
(v, w) for v in bv.leaves() for w in G.neighbors(v) if v != w
]
else:
nblist = [(bv, w) for w in G.neighbors(bv) if bv != w]
for k in nblist:
(i, j) = k
if inblossom[j] == b:
i, j = j, i
bj = inblossom[j]
if (
bj != b
and label.get(bj) == 1
and ((bj not in bestedgeto) or slack(i, j) < slack(*bestedgeto[bj]))
):
bestedgeto[bj] = k
# Forget about least-slack edge of the subblossom.
bestedge[bv] = None
b.mybestedges = list(bestedgeto.values())
# Select bestedge[b].
mybestedge = None
bestedge[b] = None
for k in b.mybestedges:
kslack = slack(*k)
if mybestedge is None or kslack < mybestslack:
mybestedge = k
mybestslack = kslack
bestedge[b] = mybestedge
# Expand the given top-level blossom.
def expandBlossom(b, endstage):
# This is an obnoxiously complicated recursive function for the sake of
# a stack-transformation. So, we hack around the complexity by using
# a trampoline pattern. By yielding the arguments to each recursive
# call, we keep the actual callstack flat.
def _recurse(b, endstage):
# Convert sub-blossoms into top-level blossoms.
for s in b.childs:
blossomparent[s] = None
if isinstance(s, Blossom):
if endstage and blossomdual[s] == 0:
# Recursively expand this sub-blossom.
yield s
else:
for v in s.leaves():
inblossom[v] = s
else:
inblossom[s] = s
# If we expand a T-blossom during a stage, its sub-blossoms must be
# relabeled.
if (not endstage) and label.get(b) == 2:
# Start at the sub-blossom through which the expanding
# blossom obtained its label, and relabel sub-blossoms until
# we reach the base.
# Figure out through which sub-blossom the expanding blossom
# obtained its label initially.
entrychild = inblossom[labeledge[b][1]]
# Decide in which direction we will go round the blossom.
j = b.childs.index(entrychild)
if j & 1:
# Start index is odd; go forward and wrap.
j -= len(b.childs)
jstep = 1
else:
# Start index is even; go backward.
jstep = -1
# Move along the blossom until we get to the base.
v, w = labeledge[b]
while j != 0:
# Relabel the T-sub-blossom.
if jstep == 1:
p, q = b.edges[j]
else:
q, p = b.edges[j - 1]
label[w] = None
label[q] = None
assignLabel(w, 2, v)
# Step to the next S-sub-blossom and note its forward edge.
allowedge[(p, q)] = allowedge[(q, p)] = True
j += jstep
if jstep == 1:
v, w = b.edges[j]
else:
w, v = b.edges[j - 1]
# Step to the next T-sub-blossom.
allowedge[(v, w)] = allowedge[(w, v)] = True
j += jstep
# Relabel the base T-sub-blossom WITHOUT stepping through to
# its mate (so don't call assignLabel).
bw = b.childs[j]
label[w] = label[bw] = 2
labeledge[w] = labeledge[bw] = (v, w)
bestedge[bw] = None
# Continue along the blossom until we get back to entrychild.
j += jstep
while b.childs[j] != entrychild:
# Examine the vertices of the sub-blossom to see whether
# it is reachable from a neighboring S-vertex outside the
# expanding blossom.
bv = b.childs[j]
if label.get(bv) == 1:
# This sub-blossom just got label S through one of its
# neighbors; leave it be.
j += jstep
continue
if isinstance(bv, Blossom):
for v in bv.leaves():
if label.get(v):
break
else:
v = bv
# If the sub-blossom contains a reachable vertex, assign
# label T to the sub-blossom.
if label.get(v):
assert label[v] == 2
assert inblossom[v] == bv
label[v] = None
label[mate[blossombase[bv]]] = None
assignLabel(v, 2, labeledge[v][0])
j += jstep
# Remove the expanded blossom entirely.
label.pop(b, None)
labeledge.pop(b, None)
bestedge.pop(b, None)
del blossomparent[b]
del blossombase[b]
del blossomdual[b]
# Now, we apply the trampoline pattern. We simulate a recursive
# callstack by maintaining a stack of generators, each yielding a
# sequence of function arguments. We grow the stack by appending a call
# to _recurse on each argument tuple, and shrink the stack whenever a
# generator is exhausted.
stack = [_recurse(b, endstage)]
while stack:
top = stack[-1]
for s in top:
stack.append(_recurse(s, endstage))
break
else:
stack.pop()
# Swap matched/unmatched edges over an alternating path through blossom b
# between vertex v and the base vertex. Keep blossom bookkeeping
# consistent.
def augmentBlossom(b, v):
# This is an obnoxiously complicated recursive function for the sake of
# a stack-transformation. So, we hack around the complexity by using
# a trampoline pattern. By yielding the arguments to each recursive
# call, we keep the actual callstack flat.
def _recurse(b, v):
# Bubble up through the blossom tree from vertex v to an immediate
# sub-blossom of b.
t = v
while blossomparent[t] != b:
t = blossomparent[t]
# Recursively deal with the first sub-blossom.
if isinstance(t, Blossom):
yield (t, v)
# Decide in which direction we will go round the blossom.
i = j = b.childs.index(t)
if i & 1:
# Start index is odd; go forward and wrap.
j -= len(b.childs)
jstep = 1
else:
# Start index is even; go backward.
jstep = -1
# Move along the blossom until we get to the base.
while j != 0:
# Step to the next sub-blossom and augment it recursively.
j += jstep
t = b.childs[j]
if jstep == 1:
w, x = b.edges[j]
else:
x, w = b.edges[j - 1]
if isinstance(t, Blossom):
yield (t, w)
# Step to the next sub-blossom and augment it recursively.
j += jstep
t = b.childs[j]
if isinstance(t, Blossom):
yield (t, x)
# Match the edge connecting those sub-blossoms.
mate[w] = x
mate[x] = w
# Rotate the list of sub-blossoms to put the new base at the front.
b.childs = b.childs[i:] + b.childs[:i]
b.edges = b.edges[i:] + b.edges[:i]
blossombase[b] = blossombase[b.childs[0]]
assert blossombase[b] == v
# Now, we apply the trampoline pattern. We simulate a recursive
# callstack by maintaining a stack of generators, each yielding a
# sequence of function arguments. We grow the stack by appending a call
# to _recurse on each argument tuple, and shrink the stack whenever a
# generator is exhausted.
stack = [_recurse(b, v)]
while stack:
top = stack[-1]
for args in top:
stack.append(_recurse(*args))
break
else:
stack.pop()
# Swap matched/unmatched edges over an alternating path between two
# single vertices. The augmenting path runs through S-vertices v and w.
def augmentMatching(v, w):
for s, j in ((v, w), (w, v)):
# Match vertex s to vertex j. Then trace back from s
# until we find a single vertex, swapping matched and unmatched
# edges as we go.
while 1:
bs = inblossom[s]
assert label[bs] == 1
assert (labeledge[bs] is None and blossombase[bs] not in mate) or (
labeledge[bs][0] == mate[blossombase[bs]]
)
# Augment through the S-blossom from s to base.
if isinstance(bs, Blossom):
augmentBlossom(bs, s)
# Update mate[s]
mate[s] = j
# Trace one step back.
if labeledge[bs] is None:
# Reached single vertex; stop.
break
t = labeledge[bs][0]
bt = inblossom[t]
assert label[bt] == 2
# Trace one more step back.
s, j = labeledge[bt]
# Augment through the T-blossom from j to base.
assert blossombase[bt] == t
if isinstance(bt, Blossom):
augmentBlossom(bt, j)
# Update mate[j]
mate[j] = s
# Verify that the optimum solution has been reached.
def verifyOptimum():
if maxcardinality:
# Vertices may have negative dual;
# find a constant non-negative number to add to all vertex duals.
vdualoffset = max(0, -min(dualvar.values()))
else:
vdualoffset = 0
# 0. all dual variables are non-negative
assert min(dualvar.values()) + vdualoffset >= 0
assert len(blossomdual) == 0 or min(blossomdual.values()) >= 0
# 0. all edges have non-negative slack and
# 1. all matched edges have zero slack;
for i, j, d in G.edges(data=True):
wt = d.get(weight, 1)
if i == j:
continue # ignore self-loops
s = dualvar[i] + dualvar[j] - 2 * wt
iblossoms = [i]
jblossoms = [j]
while blossomparent[iblossoms[-1]] is not None:
iblossoms.append(blossomparent[iblossoms[-1]])
while blossomparent[jblossoms[-1]] is not None:
jblossoms.append(blossomparent[jblossoms[-1]])
iblossoms.reverse()
jblossoms.reverse()
for bi, bj in zip(iblossoms, jblossoms):
if bi != bj:
break
s += 2 * blossomdual[bi]
assert s >= 0
if mate.get(i) == j or mate.get(j) == i:
assert mate[i] == j and mate[j] == i
assert s == 0
# 2. all single vertices have zero dual value;
for v in gnodes:
assert (v in mate) or dualvar[v] + vdualoffset == 0
# 3. all blossoms with positive dual value are full.
for b in blossomdual:
if blossomdual[b] > 0:
assert len(b.edges) % 2 == 1
for i, j in b.edges[1::2]:
assert mate[i] == j and mate[j] == i
# Ok.
# Main loop: continue until no further improvement is possible.
while 1:
# Each iteration of this loop is a "stage".
# A stage finds an augmenting path and uses that to improve
# the matching.
# Remove labels from top-level blossoms/vertices.
label.clear()
labeledge.clear()
# Forget all about least-slack edges.
bestedge.clear()
for b in blossomdual:
b.mybestedges = None
# Loss of labeling means that we can not be sure that currently
# allowable edges remain allowable throughout this stage.
allowedge.clear()
# Make queue empty.
queue[:] = []
# Label single blossoms/vertices with S and put them in the queue.
for v in gnodes:
if (v not in mate) and label.get(inblossom[v]) is None:
assignLabel(v, 1, None)
# Loop until we succeed in augmenting the matching.
augmented = 0
while 1:
# Each iteration of this loop is a "substage".
# A substage tries to find an augmenting path;
# if found, the path is used to improve the matching and
# the stage ends. If there is no augmenting path, the
# primal-dual method is used to pump some slack out of
# the dual variables.
# Continue labeling until all vertices which are reachable
# through an alternating path have got a label.
while queue and not augmented:
# Take an S vertex from the queue.
v = queue.pop()
assert label[inblossom[v]] == 1
# Scan its neighbors:
for w in G.neighbors(v):
if w == v:
continue # ignore self-loops
# w is a neighbor to v
bv = inblossom[v]
bw = inblossom[w]
if bv == bw:
# this edge is internal to a blossom; ignore it
continue
if (v, w) not in allowedge:
kslack = slack(v, w)
if kslack <= 0:
# edge k has zero slack => it is allowable
allowedge[(v, w)] = allowedge[(w, v)] = True
if (v, w) in allowedge:
if label.get(bw) is None:
# (C1) w is a free vertex;
# label w with T and label its mate with S (R12).
assignLabel(w, 2, v)
elif label.get(bw) == 1:
# (C2) w is an S-vertex (not in the same blossom);
# follow back-links to discover either an
# augmenting path or a new blossom.
base = scanBlossom(v, w)
if base is not NoNode:
# Found a new blossom; add it to the blossom
# bookkeeping and turn it into an S-blossom.
addBlossom(base, v, w)
else:
# Found an augmenting path; augment the
# matching and end this stage.
augmentMatching(v, w)
augmented = 1
break
elif label.get(w) is None:
# w is inside a T-blossom, but w itself has not
# yet been reached from outside the blossom;
# mark it as reached (we need this to relabel
# during T-blossom expansion).
assert label[bw] == 2
label[w] = 2
labeledge[w] = (v, w)
elif label.get(bw) == 1:
# keep track of the least-slack non-allowable edge to
# a different S-blossom.
if bestedge.get(bv) is None or kslack < slack(*bestedge[bv]):
bestedge[bv] = (v, w)
elif label.get(w) is None:
# w is a free vertex (or an unreached vertex inside
# a T-blossom) but we can not reach it yet;
# keep track of the least-slack edge that reaches w.
if bestedge.get(w) is None or kslack < slack(*bestedge[w]):
bestedge[w] = (v, w)
if augmented:
break
# There is no augmenting path under these constraints;
# compute delta and reduce slack in the optimization problem.
# (Note that our vertex dual variables, edge slacks and delta's
# are pre-multiplied by two.)
deltatype = -1
delta = deltaedge = deltablossom = None
# Compute delta1: the minimum value of any vertex dual.
if not maxcardinality:
deltatype = 1
delta = min(dualvar.values())
# Compute delta2: the minimum slack on any edge between
# an S-vertex and a free vertex.
for v in G.nodes():
if label.get(inblossom[v]) is None and bestedge.get(v) is not None:
d = slack(*bestedge[v])
if deltatype == -1 or d < delta:
delta = d
deltatype = 2
deltaedge = bestedge[v]
# Compute delta3: half the minimum slack on any edge between
# a pair of S-blossoms.
for b in blossomparent:
if (
blossomparent[b] is None
and label.get(b) == 1
and bestedge.get(b) is not None
):
kslack = slack(*bestedge[b])
if allinteger:
assert (kslack % 2) == 0
d = kslack // 2
else:
d = kslack / 2.0
if deltatype == -1 or d < delta:
delta = d
deltatype = 3
deltaedge = bestedge[b]
# Compute delta4: minimum z variable of any T-blossom.
for b in blossomdual:
if (
blossomparent[b] is None
and label.get(b) == 2
and (deltatype == -1 or blossomdual[b] < delta)
):
delta = blossomdual[b]
deltatype = 4
deltablossom = b
if deltatype == -1:
# No further improvement possible; max-cardinality optimum
# reached. Do a final delta update to make the optimum
# verifiable.
assert maxcardinality
deltatype = 1
delta = max(0, min(dualvar.values()))
# Update dual variables according to delta.
for v in gnodes:
if label.get(inblossom[v]) == 1:
# S-vertex: 2*u = 2*u - 2*delta
dualvar[v] -= delta
elif label.get(inblossom[v]) == 2:
# T-vertex: 2*u = 2*u + 2*delta
dualvar[v] += delta
for b in blossomdual:
if blossomparent[b] is None:
if label.get(b) == 1:
# top-level S-blossom: z = z + 2*delta
blossomdual[b] += delta
elif label.get(b) == 2:
# top-level T-blossom: z = z - 2*delta
blossomdual[b] -= delta
# Take action at the point where minimum delta occurred.
if deltatype == 1:
# No further improvement possible; optimum reached.
break
elif deltatype == 2:
# Use the least-slack edge to continue the search.
(v, w) = deltaedge
assert label[inblossom[v]] == 1
allowedge[(v, w)] = allowedge[(w, v)] = True
queue.append(v)
elif deltatype == 3:
# Use the least-slack edge to continue the search.
(v, w) = deltaedge
allowedge[(v, w)] = allowedge[(w, v)] = True
assert label[inblossom[v]] == 1
queue.append(v)
elif deltatype == 4:
# Expand the least-z blossom.
expandBlossom(deltablossom, False)
# End of this substage.
# Paranoia check that the matching is symmetric.
for v in mate:
assert mate[mate[v]] == v
# Stop when no more augmenting path can be found.
if not augmented:
break
# End of a stage; expand all S-blossoms which have zero dual.
for b in list(blossomdual.keys()):
if b not in blossomdual:
continue # already expanded
if blossomparent[b] is None and label.get(b) == 1 and blossomdual[b] == 0:
expandBlossom(b, True)
# Verify that we reached the optimum solution (only for integer weights).
if allinteger:
verifyOptimum()
return matching_dict_to_set(mate)
| (G, matching, *, backend=None, **backend_kwargs) |
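Building on the validity sketch shown after the is_matching entry above, maximality adds one condition: no edge of G has both endpoints unmatched. A brute-force sketch (again mine, not the library code):

import networkx as nx

def is_maximal_matching_sketch(G, matching):
    pairs = matching.items() if isinstance(matching, dict) else matching
    edges = {frozenset(e) for e in pairs}
    matched = {n for e in edges for n in e}
    valid = (
        all(len(e) == 2 and G.has_edge(*e) for e in edges)
        and sum(len(e) for e in edges) == len(matched)   # no node matched twice
    )
    # maximal <=> every edge of G touches at least one matched node
    return valid and all(u in matched or v in matched for u, v in G.edges())

G = nx.Graph([(1, 2), (1, 3), (2, 3), (3, 4), (3, 5)])
print(is_maximal_matching_sketch(G, {(1, 2), (3, 4)}))  # True, as in the docstring example
print(is_maximal_matching_sketch(G, {(3, 4)}))          # False: edge (1, 2) could still be added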
30,799 | networkx.algorithms.d_separation | is_minimal_d_separator | Determine if `z` is a minimal d-separator for `x` and `y`.
A d-separator, `z`, in a DAG is a set of nodes that blocks
all paths from nodes in set `x` to nodes in set `y`.
A minimal d-separator is a d-separator `z` such that removing
any subset of nodes makes it no longer a d-separator.
Note: This function checks whether `z` is a d-separator AND is
minimal. One can use the function `is_d_separator` to only check if
`z` is a d-separator. See examples below.
Parameters
----------
G : nx.DiGraph
A NetworkX DAG.
x : node | set
A node or set of nodes in the graph.
y : node | set
A node or set of nodes in the graph.
z : node | set
The node or set of nodes to check if it is a minimal d-separating set.
The function :func:`is_d_separator` is called inside this function
to verify that `z` is in fact a d-separator.
included : set | node | None
A node or set of nodes which must be included in the found separating set,
default is ``None``, which means the empty set.
restricted : set | node | None
Restricted node or set of nodes to consider. Only these nodes can be in
the found separating set, default is ``None`` meaning all nodes in ``G``.
Returns
-------
bool
Whether or not the set `z` is a minimal d-separator subject to
`restricted` nodes and `included` node constraints.
Examples
--------
>>> G = nx.path_graph([0, 1, 2, 3], create_using=nx.DiGraph)
>>> G.add_node(4)
>>> nx.is_minimal_d_separator(G, 0, 2, {1})
True
>>> # since {1} is the minimal d-separator, {1, 3, 4} is not minimal
>>> nx.is_minimal_d_separator(G, 0, 2, {1, 3, 4})
False
>>> # alternatively, if we only want to check that {1, 3, 4} is a d-separator
>>> nx.is_d_separator(G, 0, 2, {1, 3, 4})
True
Raises
------
NetworkXError
Raises a :exc:`NetworkXError` if the input graph is not a DAG.
NodeNotFound
If any of the input nodes are not found in the graph,
a :exc:`NodeNotFound` exception is raised.
References
----------
.. [1] van der Zander, Benito, and Maciej LiΕkiewicz. "Finding
minimal d-separators in linear time and applications." In
Uncertainty in Artificial Intelligence, pp. 637-647. PMLR, 2020.
Notes
-----
This function works on verifying that a set is minimal and
d-separating between two nodes. Uses criterion (a), (b), (c) on
page 4 of [1]_. a) closure(`x`) and `y` are disjoint. b) `z` contains
all nodes from `included` and is contained in the `restricted`
nodes and in the union of ancestors of `x`, `y`, and `included`.
c) the nodes in `z` not in `included` are contained in both
closure(x) and closure(y). The closure of a set is the set of nodes
connected to the set by a directed path in G.
The complexity is :math:`O(m)`, where :math:`m` stands for the
number of edges in the subgraph of G consisting of only the
ancestors of `x` and `y`.
For full details, see [1]_.
| null | (G, x, y, z, *, included=None, restricted=None, backend=None, **backend_kwargs) |
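The linear-time criteria (a)-(c) from [1]_ are what the library implements; for intuition, minimality can also be checked by brute force with the is_d_separator predicate mentioned in the docstring. The sketch below does exactly that and is exponential in the size of z, so it is only suitable for small examples; the function name is my own.

from itertools import combinations
import networkx as nx

def is_minimal_d_separator_bruteforce(G, x, y, z):
    z = set(z)
    if not nx.is_d_separator(G, x, y, z):
        return False
    # minimal <=> no proper subset of z is itself a d-separator
    return not any(
        nx.is_d_separator(G, x, y, set(sub))
        for r in range(len(z))
        for sub in combinations(z, r)
    )

G = nx.path_graph([0, 1, 2, 3], create_using=nx.DiGraph)
G.add_node(4)
print(is_minimal_d_separator_bruteforce(G, 0, 2, {1}))        # True
print(is_minimal_d_separator_bruteforce(G, 0, 2, {1, 3, 4}))  # False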
30,800 | networkx.algorithms.graphical | is_multigraphical | Returns True if some multigraph can realize the sequence.
Parameters
----------
sequence : list
A list of integers
Returns
-------
valid : bool
True if deg_sequence is a multigraphic degree sequence and False if not.
Examples
--------
>>> G = nx.MultiGraph([(1, 2), (1, 3), (2, 3), (3, 4), (4, 2), (5, 1), (5, 4)])
>>> sequence = (d for _, d in G.degree())
>>> nx.is_multigraphical(sequence)
True
To test a non-multigraphical sequence:
>>> sequence_list = [d for _, d in G.degree()]
>>> sequence_list[-1] += 1
>>> nx.is_multigraphical(sequence_list)
False
Notes
-----
The worst-case run time is $O(n)$ where $n$ is the length of the sequence.
References
----------
.. [1] S. L. Hakimi. "On the realizability of a set of integers as
degrees of the vertices of a linear graph", J. SIAM, 10, pp. 496-506
(1962).
| null | (sequence, *, backend=None, **backend_kwargs) |
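For intuition, Hakimi's realizability condition for multigraphs [1]_ reduces to two checks: the degree sum must be even, and the largest degree must not exceed the sum of the remaining degrees. A minimal sketch (the helper name is hypothetical and is not the library's internal code):

def multigraphical_check(sequence):
    # Hakimi's condition for multigraphs: even degree sum and
    # max degree <= sum of the other degrees.
    degs = list(sequence)
    if any(d < 0 or int(d) != d for d in degs):
        return False
    total = sum(degs)
    return total % 2 == 0 and (not degs or 2 * max(degs) <= total)

# The MultiGraph in the example above has degree sequence [3, 3, 3, 3, 2].
assert multigraphical_check([3, 3, 3, 3, 2])
assert not multigraphical_check([3, 3, 3, 3, 3])   # odd sum, as in the docstring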
30,801 | networkx.classes.function | is_negatively_weighted | Returns True if `G` has negatively weighted edges.
Parameters
----------
G : graph
A NetworkX graph.
edge : tuple, optional
A 2-tuple specifying the only edge in `G` that will be tested. If
None, then every edge in `G` is tested.
weight: string, optional
The attribute name used to query for edge weights.
Returns
-------
bool
A boolean signifying if `G`, or the specified edge, is negatively
weighted.
Raises
------
NetworkXError
If the specified edge does not exist.
Examples
--------
>>> G = nx.Graph()
>>> G.add_edges_from([(1, 3), (2, 4), (2, 6)])
>>> G.add_edge(1, 2, weight=4)
>>> nx.is_negatively_weighted(G, (1, 2))
False
>>> G[2][4]["weight"] = -2
>>> nx.is_negatively_weighted(G)
True
>>> G = nx.DiGraph()
>>> edges = [("0", "3", 3), ("0", "1", -5), ("1", "0", -2)]
>>> G.add_weighted_edges_from(edges)
>>> nx.is_negatively_weighted(G)
True
| def set_edge_attributes(G, values, name=None):
"""Sets edge attributes from a given value or dictionary of values.
.. Warning:: The call order of arguments `values` and `name`
switched between v1.x & v2.x.
Parameters
----------
G : NetworkX Graph
values : scalar value, dict-like
What the edge attribute should be set to. If `values` is
not a dictionary, then it is treated as a single attribute value
that is then applied to every edge in `G`. This means that if
you provide a mutable object, like a list, updates to that object
will be reflected in the edge attribute for each edge. The attribute
name will be `name`.
If `values` is a dict or a dict of dict, it should be keyed
by edge tuple to either an attribute value or a dict of attribute
key/value pairs used to update the edge's attributes.
For multigraphs, the edge tuples must be of the form ``(u, v, key)``,
where `u` and `v` are nodes and `key` is the edge key.
For non-multigraphs, the keys must be tuples of the form ``(u, v)``.
name : string (optional, default=None)
Name of the edge attribute to set if values is a scalar.
Examples
--------
After computing some property of the edges of a graph, you may want
to assign an edge attribute to store the value of that property for
each edge::
>>> G = nx.path_graph(3)
>>> bb = nx.edge_betweenness_centrality(G, normalized=False)
>>> nx.set_edge_attributes(G, bb, "betweenness")
>>> G.edges[1, 2]["betweenness"]
2.0
If you provide a list as the second argument, updates to the list
will be reflected in the edge attribute for each edge::
>>> labels = []
>>> nx.set_edge_attributes(G, labels, "labels")
>>> labels.append("foo")
>>> G.edges[0, 1]["labels"]
['foo']
>>> G.edges[1, 2]["labels"]
['foo']
If you provide a dictionary of dictionaries as the second argument,
the entire dictionary will be used to update edge attributes::
>>> G = nx.path_graph(3)
>>> attrs = {(0, 1): {"attr1": 20, "attr2": "nothing"}, (1, 2): {"attr2": 3}}
>>> nx.set_edge_attributes(G, attrs)
>>> G[0][1]["attr1"]
20
>>> G[0][1]["attr2"]
'nothing'
>>> G[1][2]["attr2"]
3
The attributes of one Graph can be used to set those of another.
>>> H = nx.path_graph(3)
>>> nx.set_edge_attributes(H, G.edges)
Note that if the dict contains edges that are not in `G`, they are
silently ignored::
>>> G = nx.Graph([(0, 1)])
>>> nx.set_edge_attributes(G, {(1, 2): {"weight": 2.0}})
>>> (1, 2) in G.edges()
False
For multigraphs, the `values` dict is expected to be keyed by 3-tuples
including the edge key::
>>> MG = nx.MultiGraph()
>>> edges = [(0, 1), (0, 1)]
>>> MG.add_edges_from(edges) # Returns list of edge keys
[0, 1]
>>> attributes = {(0, 1, 0): {"cost": 21}, (0, 1, 1): {"cost": 7}}
>>> nx.set_edge_attributes(MG, attributes)
>>> MG[0][1][0]["cost"]
21
>>> MG[0][1][1]["cost"]
7
If MultiGraph attributes are desired for a Graph, you must convert the 3-tuple
multiedge to a 2-tuple edge and the last multiedge's attribute value will
overwrite the previous values. Continuing from the previous case we get::
>>> H = nx.path_graph([0, 1, 2])
>>> nx.set_edge_attributes(H, {(u, v): ed for u, v, ed in MG.edges.data()})
>>> nx.get_edge_attributes(H, "cost")
{(0, 1): 7}
"""
if name is not None:
# `values` does not contain attribute names
try:
# if `values` is a dict using `.items()` => {edge: value}
if G.is_multigraph():
for (u, v, key), value in values.items():
try:
G._adj[u][v][key][name] = value
except KeyError:
pass
else:
for (u, v), value in values.items():
try:
G._adj[u][v][name] = value
except KeyError:
pass
except AttributeError:
# treat `values` as a constant
for u, v, data in G.edges(data=True):
data[name] = values
else:
# `values` consists of dict-of-dict {edge: {attr: value}} shape
if G.is_multigraph():
for (u, v, key), d in values.items():
try:
G._adj[u][v][key].update(d)
except KeyError:
pass
else:
for (u, v), d in values.items():
try:
G._adj[u][v].update(d)
except KeyError:
pass
nx._clear_cache(G)
| (G, edge=None, weight='weight', *, backend=None, **backend_kwargs) |
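The scan behind a negative-weight test like the one documented above can be sketched in a few lines (hypothetical helper, not the library's source): inspect the single edge's data when `edge` is given, otherwise scan every edge in the graph.

import networkx as nx

def has_negative_weight(G, edge=None, weight="weight"):
    # Hypothetical sketch of the behaviour described for is_negatively_weighted.
    if edge is not None:
        data = G.get_edge_data(*edge)
        if data is None:
            raise nx.NetworkXError(f"Edge {edge!r} does not exist.")
        return weight in data and data[weight] < 0
    return any(
        weight in data and data[weight] < 0 for _, _, data in G.edges(data=True)
    )

G = nx.Graph()
G.add_edges_from([(1, 3), (2, 4), (2, 6)])
G.add_edge(1, 2, weight=4)
assert not has_negative_weight(G, (1, 2))
G[2][4]["weight"] = -2
assert has_negative_weight(G)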
30,802 | networkx.classes.function | is_path | Returns whether or not the specified path exists.
For it to return True, every node on the path must exist and
each consecutive pair must be connected via one or more edges.
Parameters
----------
G : graph
A NetworkX graph.
path : list
A list of nodes which defines the path to traverse
Returns
-------
bool
True if `path` is a valid path in `G`
| def is_path(G, path):
"""Returns whether or not the specified path exists.
For it to return True, every node on the path must exist and
each consecutive pair must be connected via one or more edges.
Parameters
----------
G : graph
A NetworkX graph.
path : list
A list of nodes which defines the path to traverse
Returns
-------
bool
True if `path` is a valid path in `G`
"""
try:
return all(nbr in G._adj[node] for node, nbr in nx.utils.pairwise(path))
except (KeyError, TypeError):
return False
| (G, path) |
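A short usage sketch of the `is_path` function documented above, on a small undirected graph:

import networkx as nx

G = nx.path_graph(4)                    # edges 0-1, 1-2, 2-3
assert nx.is_path(G, [0, 1, 2, 3])      # every consecutive pair is adjacent
assert not nx.is_path(G, [0, 2])        # 0 and 2 are not adjacent
assert not nx.is_path(G, [0, 1, 5])     # node 5 is not in the graph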
30,803 | networkx.algorithms.matching | is_perfect_matching | Return True if ``matching`` is a perfect matching for ``G``
A *perfect matching* in a graph is a matching in which exactly one edge
is incident upon each vertex.
Parameters
----------
G : NetworkX graph
matching : dict or set
A dictionary or set representing a matching. If a dictionary, it
must have ``matching[u] == v`` and ``matching[v] == u`` for each
edge ``(u, v)`` in the matching. If a set, it must have elements
of the form ``(u, v)``, where ``(u, v)`` is an edge in the
matching.
Returns
-------
bool
Whether the given set or dictionary represents a valid perfect
matching in the graph.
Examples
--------
>>> G = nx.Graph([(1, 2), (1, 3), (2, 3), (2, 4), (3, 5), (4, 5), (4, 6)])
>>> my_match = {1: 2, 3: 5, 4: 6}
>>> nx.is_perfect_matching(G, my_match)
True
| @not_implemented_for("multigraph")
@not_implemented_for("directed")
@nx._dispatchable(edge_attrs="weight")
def max_weight_matching(G, maxcardinality=False, weight="weight"):
"""Compute a maximum-weighted matching of G.
A matching is a subset of edges in which no node occurs more than once.
The weight of a matching is the sum of the weights of its edges.
A maximal matching cannot add more edges and still be a matching.
The cardinality of a matching is the number of matched edges.
Parameters
----------
G : NetworkX graph
Undirected graph
maxcardinality: bool, optional (default=False)
If maxcardinality is True, compute the maximum-cardinality matching
with maximum weight among all maximum-cardinality matchings.
weight: string, optional (default='weight')
Edge data key corresponding to the edge weight.
If key not found, uses 1 as weight.
Returns
-------
matching : set
A maximal matching of the graph.
Examples
--------
>>> G = nx.Graph()
>>> edges = [(1, 2, 6), (1, 3, 2), (2, 3, 1), (2, 4, 7), (3, 5, 9), (4, 5, 3)]
>>> G.add_weighted_edges_from(edges)
>>> sorted(nx.max_weight_matching(G))
[(2, 4), (5, 3)]
Notes
-----
If G has edges with weight attributes the edge data are used as
weight values else the weights are assumed to be 1.
This function takes time O(number_of_nodes ** 3).
If all edge weights are integers, the algorithm uses only integer
computations. If floating point weights are used, the algorithm
could return a slightly suboptimal matching due to numeric
precision errors.
This method is based on the "blossom" method for finding augmenting
paths and the "primal-dual" method for finding a matching of maximum
weight, both methods invented by Jack Edmonds [1]_.
Bipartite graphs can also be matched using the functions present in
:mod:`networkx.algorithms.bipartite.matching`.
References
----------
.. [1] "Efficient Algorithms for Finding Maximum Matching in Graphs",
Zvi Galil, ACM Computing Surveys, 1986.
"""
#
# The algorithm is taken from "Efficient Algorithms for Finding Maximum
# Matching in Graphs" by Zvi Galil, ACM Computing Surveys, 1986.
# It is based on the "blossom" method for finding augmenting paths and
# the "primal-dual" method for finding a matching of maximum weight, both
# methods invented by Jack Edmonds.
#
# A C program for maximum weight matching by Ed Rothberg was used
# extensively to validate this new code.
#
# Many terms used in the code comments are explained in the paper
# by Galil. You will probably need the paper to make sense of this code.
#
class NoNode:
"""Dummy value which is different from any node."""
class Blossom:
"""Representation of a non-trivial blossom or sub-blossom."""
__slots__ = ["childs", "edges", "mybestedges"]
# b.childs is an ordered list of b's sub-blossoms, starting with
# the base and going round the blossom.
# b.edges is the list of b's connecting edges, such that
# b.edges[i] = (v, w) where v is a vertex in b.childs[i]
# and w is a vertex in b.childs[wrap(i+1)].
# If b is a top-level S-blossom,
# b.mybestedges is a list of least-slack edges to neighboring
# S-blossoms, or None if no such list has been computed yet.
# This is used for efficient computation of delta3.
# Generate the blossom's leaf vertices.
def leaves(self):
stack = [*self.childs]
while stack:
t = stack.pop()
if isinstance(t, Blossom):
stack.extend(t.childs)
else:
yield t
# Get a list of vertices.
gnodes = list(G)
if not gnodes:
return set() # don't bother with empty graphs
# Find the maximum edge weight.
maxweight = 0
allinteger = True
for i, j, d in G.edges(data=True):
wt = d.get(weight, 1)
if i != j and wt > maxweight:
maxweight = wt
allinteger = allinteger and (str(type(wt)).split("'")[1] in ("int", "long"))
# If v is a matched vertex, mate[v] is its partner vertex.
# If v is a single vertex, v does not occur as a key in mate.
# Initially all vertices are single; updated during augmentation.
mate = {}
# If b is a top-level blossom,
# label.get(b) is None if b is unlabeled (free),
# 1 if b is an S-blossom,
# 2 if b is a T-blossom.
# The label of a vertex is found by looking at the label of its top-level
# containing blossom.
# If v is a vertex inside a T-blossom, label[v] is 2 iff v is reachable
# from an S-vertex outside the blossom.
# Labels are assigned during a stage and reset after each augmentation.
label = {}
# If b is a labeled top-level blossom,
# labeledge[b] = (v, w) is the edge through which b obtained its label
# such that w is a vertex in b, or None if b's base vertex is single.
# If w is a vertex inside a T-blossom and label[w] == 2,
# labeledge[w] = (v, w) is an edge through which w is reachable from
# outside the blossom.
labeledge = {}
# If v is a vertex, inblossom[v] is the top-level blossom to which v
# belongs.
# If v is a top-level vertex, inblossom[v] == v since v is itself
# a (trivial) top-level blossom.
# Initially all vertices are top-level trivial blossoms.
inblossom = dict(zip(gnodes, gnodes))
# If b is a sub-blossom,
# blossomparent[b] is its immediate parent (sub-)blossom.
# If b is a top-level blossom, blossomparent[b] is None.
blossomparent = dict(zip(gnodes, repeat(None)))
# If b is a (sub-)blossom,
# blossombase[b] is its base VERTEX (i.e. recursive sub-blossom).
blossombase = dict(zip(gnodes, gnodes))
# If w is a free vertex (or an unreached vertex inside a T-blossom),
# bestedge[w] = (v, w) is the least-slack edge from an S-vertex,
# or None if there is no such edge.
# If b is a (possibly trivial) top-level S-blossom,
# bestedge[b] = (v, w) is the least-slack edge to a different S-blossom
# (v inside b), or None if there is no such edge.
# This is used for efficient computation of delta2 and delta3.
bestedge = {}
# If v is a vertex,
# dualvar[v] = 2 * u(v) where u(v) is v's variable in the dual
# optimization problem (if all edge weights are integers, multiplication
# by two ensures that all values remain integers throughout the algorithm).
# Initially, u(v) = maxweight / 2.
dualvar = dict(zip(gnodes, repeat(maxweight)))
# If b is a non-trivial blossom,
# blossomdual[b] = z(b) where z(b) is b's variable in the dual
# optimization problem.
blossomdual = {}
# If (v, w) in allowedge or (w, v) in allowedge, then the edge
# (v, w) is known to have zero slack in the optimization problem;
# otherwise the edge may or may not have zero slack.
allowedge = {}
# Queue of newly discovered S-vertices.
queue = []
# Return 2 * slack of edge (v, w) (does not work inside blossoms).
def slack(v, w):
return dualvar[v] + dualvar[w] - 2 * G[v][w].get(weight, 1)
# Assign label t to the top-level blossom containing vertex w,
# coming through an edge from vertex v.
def assignLabel(w, t, v):
b = inblossom[w]
assert label.get(w) is None and label.get(b) is None
label[w] = label[b] = t
if v is not None:
labeledge[w] = labeledge[b] = (v, w)
else:
labeledge[w] = labeledge[b] = None
bestedge[w] = bestedge[b] = None
if t == 1:
# b became an S-vertex/blossom; add it(s vertices) to the queue.
if isinstance(b, Blossom):
queue.extend(b.leaves())
else:
queue.append(b)
elif t == 2:
# b became a T-vertex/blossom; assign label S to its mate.
# (If b is a non-trivial blossom, its base is the only vertex
# with an external mate.)
base = blossombase[b]
assignLabel(mate[base], 1, base)
# Trace back from vertices v and w to discover either a new blossom
# or an augmenting path. Return the base vertex of the new blossom,
# or NoNode if an augmenting path was found.
def scanBlossom(v, w):
# Trace back from v and w, placing breadcrumbs as we go.
path = []
base = NoNode
while v is not NoNode:
# Look for a breadcrumb in v's blossom or put a new breadcrumb.
b = inblossom[v]
if label[b] & 4:
base = blossombase[b]
break
assert label[b] == 1
path.append(b)
label[b] = 5
# Trace one step back.
if labeledge[b] is None:
# The base of blossom b is single; stop tracing this path.
assert blossombase[b] not in mate
v = NoNode
else:
assert labeledge[b][0] == mate[blossombase[b]]
v = labeledge[b][0]
b = inblossom[v]
assert label[b] == 2
# b is a T-blossom; trace one more step back.
v = labeledge[b][0]
# Swap v and w so that we alternate between both paths.
if w is not NoNode:
v, w = w, v
# Remove breadcrumbs.
for b in path:
label[b] = 1
# Return base vertex, if we found one.
return base
# Construct a new blossom with given base, through S-vertices v and w.
# Label the new blossom as S; set its dual variable to zero;
# relabel its T-vertices to S and add them to the queue.
def addBlossom(base, v, w):
bb = inblossom[base]
bv = inblossom[v]
bw = inblossom[w]
# Create blossom.
b = Blossom()
blossombase[b] = base
blossomparent[b] = None
blossomparent[bb] = b
# Make list of sub-blossoms and their interconnecting edge endpoints.
b.childs = path = []
b.edges = edgs = [(v, w)]
# Trace back from v to base.
while bv != bb:
# Add bv to the new blossom.
blossomparent[bv] = b
path.append(bv)
edgs.append(labeledge[bv])
assert label[bv] == 2 or (
label[bv] == 1 and labeledge[bv][0] == mate[blossombase[bv]]
)
# Trace one step back.
v = labeledge[bv][0]
bv = inblossom[v]
# Add base sub-blossom; reverse lists.
path.append(bb)
path.reverse()
edgs.reverse()
# Trace back from w to base.
while bw != bb:
# Add bw to the new blossom.
blossomparent[bw] = b
path.append(bw)
edgs.append((labeledge[bw][1], labeledge[bw][0]))
assert label[bw] == 2 or (
label[bw] == 1 and labeledge[bw][0] == mate[blossombase[bw]]
)
# Trace one step back.
w = labeledge[bw][0]
bw = inblossom[w]
# Set label to S.
assert label[bb] == 1
label[b] = 1
labeledge[b] = labeledge[bb]
# Set dual variable to zero.
blossomdual[b] = 0
# Relabel vertices.
for v in b.leaves():
if label[inblossom[v]] == 2:
# This T-vertex now turns into an S-vertex because it becomes
# part of an S-blossom; add it to the queue.
queue.append(v)
inblossom[v] = b
# Compute b.mybestedges.
bestedgeto = {}
for bv in path:
if isinstance(bv, Blossom):
if bv.mybestedges is not None:
# Walk this subblossom's least-slack edges.
nblist = bv.mybestedges
# The sub-blossom won't need this data again.
bv.mybestedges = None
else:
# This subblossom does not have a list of least-slack
# edges; get the information from the vertices.
nblist = [
(v, w) for v in bv.leaves() for w in G.neighbors(v) if v != w
]
else:
nblist = [(bv, w) for w in G.neighbors(bv) if bv != w]
for k in nblist:
(i, j) = k
if inblossom[j] == b:
i, j = j, i
bj = inblossom[j]
if (
bj != b
and label.get(bj) == 1
and ((bj not in bestedgeto) or slack(i, j) < slack(*bestedgeto[bj]))
):
bestedgeto[bj] = k
# Forget about least-slack edge of the subblossom.
bestedge[bv] = None
b.mybestedges = list(bestedgeto.values())
# Select bestedge[b].
mybestedge = None
bestedge[b] = None
for k in b.mybestedges:
kslack = slack(*k)
if mybestedge is None or kslack < mybestslack:
mybestedge = k
mybestslack = kslack
bestedge[b] = mybestedge
# Expand the given top-level blossom.
def expandBlossom(b, endstage):
# This is an obnoxiously complicated recursive function for the sake of
# a stack-transformation. So, we hack around the complexity by using
# a trampoline pattern. By yielding the arguments to each recursive
# call, we keep the actual callstack flat.
def _recurse(b, endstage):
# Convert sub-blossoms into top-level blossoms.
for s in b.childs:
blossomparent[s] = None
if isinstance(s, Blossom):
if endstage and blossomdual[s] == 0:
# Recursively expand this sub-blossom.
yield s
else:
for v in s.leaves():
inblossom[v] = s
else:
inblossom[s] = s
# If we expand a T-blossom during a stage, its sub-blossoms must be
# relabeled.
if (not endstage) and label.get(b) == 2:
# Start at the sub-blossom through which the expanding
# blossom obtained its label, and relabel sub-blossoms until
# we reach the base.
# Figure out through which sub-blossom the expanding blossom
# obtained its label initially.
entrychild = inblossom[labeledge[b][1]]
# Decide in which direction we will go round the blossom.
j = b.childs.index(entrychild)
if j & 1:
# Start index is odd; go forward and wrap.
j -= len(b.childs)
jstep = 1
else:
# Start index is even; go backward.
jstep = -1
# Move along the blossom until we get to the base.
v, w = labeledge[b]
while j != 0:
# Relabel the T-sub-blossom.
if jstep == 1:
p, q = b.edges[j]
else:
q, p = b.edges[j - 1]
label[w] = None
label[q] = None
assignLabel(w, 2, v)
# Step to the next S-sub-blossom and note its forward edge.
allowedge[(p, q)] = allowedge[(q, p)] = True
j += jstep
if jstep == 1:
v, w = b.edges[j]
else:
w, v = b.edges[j - 1]
# Step to the next T-sub-blossom.
allowedge[(v, w)] = allowedge[(w, v)] = True
j += jstep
# Relabel the base T-sub-blossom WITHOUT stepping through to
# its mate (so don't call assignLabel).
bw = b.childs[j]
label[w] = label[bw] = 2
labeledge[w] = labeledge[bw] = (v, w)
bestedge[bw] = None
# Continue along the blossom until we get back to entrychild.
j += jstep
while b.childs[j] != entrychild:
# Examine the vertices of the sub-blossom to see whether
# it is reachable from a neighboring S-vertex outside the
# expanding blossom.
bv = b.childs[j]
if label.get(bv) == 1:
# This sub-blossom just got label S through one of its
# neighbors; leave it be.
j += jstep
continue
if isinstance(bv, Blossom):
for v in bv.leaves():
if label.get(v):
break
else:
v = bv
# If the sub-blossom contains a reachable vertex, assign
# label T to the sub-blossom.
if label.get(v):
assert label[v] == 2
assert inblossom[v] == bv
label[v] = None
label[mate[blossombase[bv]]] = None
assignLabel(v, 2, labeledge[v][0])
j += jstep
# Remove the expanded blossom entirely.
label.pop(b, None)
labeledge.pop(b, None)
bestedge.pop(b, None)
del blossomparent[b]
del blossombase[b]
del blossomdual[b]
# Now, we apply the trampoline pattern. We simulate a recursive
# callstack by maintaining a stack of generators, each yielding a
# sequence of function arguments. We grow the stack by appending a call
# to _recurse on each argument tuple, and shrink the stack whenever a
# generator is exhausted.
stack = [_recurse(b, endstage)]
while stack:
top = stack[-1]
for s in top:
stack.append(_recurse(s, endstage))
break
else:
stack.pop()
# Swap matched/unmatched edges over an alternating path through blossom b
# between vertex v and the base vertex. Keep blossom bookkeeping
# consistent.
def augmentBlossom(b, v):
# This is an obnoxiously complicated recursive function for the sake of
# a stack-transformation. So, we hack around the complexity by using
# a trampoline pattern. By yielding the arguments to each recursive
# call, we keep the actual callstack flat.
def _recurse(b, v):
# Bubble up through the blossom tree from vertex v to an immediate
# sub-blossom of b.
t = v
while blossomparent[t] != b:
t = blossomparent[t]
# Recursively deal with the first sub-blossom.
if isinstance(t, Blossom):
yield (t, v)
# Decide in which direction we will go round the blossom.
i = j = b.childs.index(t)
if i & 1:
# Start index is odd; go forward and wrap.
j -= len(b.childs)
jstep = 1
else:
# Start index is even; go backward.
jstep = -1
# Move along the blossom until we get to the base.
while j != 0:
# Step to the next sub-blossom and augment it recursively.
j += jstep
t = b.childs[j]
if jstep == 1:
w, x = b.edges[j]
else:
x, w = b.edges[j - 1]
if isinstance(t, Blossom):
yield (t, w)
# Step to the next sub-blossom and augment it recursively.
j += jstep
t = b.childs[j]
if isinstance(t, Blossom):
yield (t, x)
# Match the edge connecting those sub-blossoms.
mate[w] = x
mate[x] = w
# Rotate the list of sub-blossoms to put the new base at the front.
b.childs = b.childs[i:] + b.childs[:i]
b.edges = b.edges[i:] + b.edges[:i]
blossombase[b] = blossombase[b.childs[0]]
assert blossombase[b] == v
# Now, we apply the trampoline pattern. We simulate a recursive
# callstack by maintaining a stack of generators, each yielding a
# sequence of function arguments. We grow the stack by appending a call
# to _recurse on each argument tuple, and shrink the stack whenever a
# generator is exhausted.
stack = [_recurse(b, v)]
while stack:
top = stack[-1]
for args in top:
stack.append(_recurse(*args))
break
else:
stack.pop()
# Swap matched/unmatched edges over an alternating path between two
# single vertices. The augmenting path runs through S-vertices v and w.
def augmentMatching(v, w):
for s, j in ((v, w), (w, v)):
# Match vertex s to vertex j. Then trace back from s
# until we find a single vertex, swapping matched and unmatched
# edges as we go.
while 1:
bs = inblossom[s]
assert label[bs] == 1
assert (labeledge[bs] is None and blossombase[bs] not in mate) or (
labeledge[bs][0] == mate[blossombase[bs]]
)
# Augment through the S-blossom from s to base.
if isinstance(bs, Blossom):
augmentBlossom(bs, s)
# Update mate[s]
mate[s] = j
# Trace one step back.
if labeledge[bs] is None:
# Reached single vertex; stop.
break
t = labeledge[bs][0]
bt = inblossom[t]
assert label[bt] == 2
# Trace one more step back.
s, j = labeledge[bt]
# Augment through the T-blossom from j to base.
assert blossombase[bt] == t
if isinstance(bt, Blossom):
augmentBlossom(bt, j)
# Update mate[j]
mate[j] = s
# Verify that the optimum solution has been reached.
def verifyOptimum():
if maxcardinality:
# Vertices may have negative dual;
# find a constant non-negative number to add to all vertex duals.
vdualoffset = max(0, -min(dualvar.values()))
else:
vdualoffset = 0
# 0. all dual variables are non-negative
assert min(dualvar.values()) + vdualoffset >= 0
assert len(blossomdual) == 0 or min(blossomdual.values()) >= 0
# 0. all edges have non-negative slack and
# 1. all matched edges have zero slack;
for i, j, d in G.edges(data=True):
wt = d.get(weight, 1)
if i == j:
continue # ignore self-loops
s = dualvar[i] + dualvar[j] - 2 * wt
iblossoms = [i]
jblossoms = [j]
while blossomparent[iblossoms[-1]] is not None:
iblossoms.append(blossomparent[iblossoms[-1]])
while blossomparent[jblossoms[-1]] is not None:
jblossoms.append(blossomparent[jblossoms[-1]])
iblossoms.reverse()
jblossoms.reverse()
for bi, bj in zip(iblossoms, jblossoms):
if bi != bj:
break
s += 2 * blossomdual[bi]
assert s >= 0
if mate.get(i) == j or mate.get(j) == i:
assert mate[i] == j and mate[j] == i
assert s == 0
# 2. all single vertices have zero dual value;
for v in gnodes:
assert (v in mate) or dualvar[v] + vdualoffset == 0
# 3. all blossoms with positive dual value are full.
for b in blossomdual:
if blossomdual[b] > 0:
assert len(b.edges) % 2 == 1
for i, j in b.edges[1::2]:
assert mate[i] == j and mate[j] == i
# Ok.
# Main loop: continue until no further improvement is possible.
while 1:
# Each iteration of this loop is a "stage".
# A stage finds an augmenting path and uses that to improve
# the matching.
# Remove labels from top-level blossoms/vertices.
label.clear()
labeledge.clear()
# Forget all about least-slack edges.
bestedge.clear()
for b in blossomdual:
b.mybestedges = None
# Loss of labeling means that we can not be sure that currently
# allowable edges remain allowable throughout this stage.
allowedge.clear()
# Make queue empty.
queue[:] = []
# Label single blossoms/vertices with S and put them in the queue.
for v in gnodes:
if (v not in mate) and label.get(inblossom[v]) is None:
assignLabel(v, 1, None)
# Loop until we succeed in augmenting the matching.
augmented = 0
while 1:
# Each iteration of this loop is a "substage".
# A substage tries to find an augmenting path;
# if found, the path is used to improve the matching and
# the stage ends. If there is no augmenting path, the
# primal-dual method is used to pump some slack out of
# the dual variables.
# Continue labeling until all vertices which are reachable
# through an alternating path have got a label.
while queue and not augmented:
# Take an S vertex from the queue.
v = queue.pop()
assert label[inblossom[v]] == 1
# Scan its neighbors:
for w in G.neighbors(v):
if w == v:
continue # ignore self-loops
# w is a neighbor to v
bv = inblossom[v]
bw = inblossom[w]
if bv == bw:
# this edge is internal to a blossom; ignore it
continue
if (v, w) not in allowedge:
kslack = slack(v, w)
if kslack <= 0:
# edge k has zero slack => it is allowable
allowedge[(v, w)] = allowedge[(w, v)] = True
if (v, w) in allowedge:
if label.get(bw) is None:
# (C1) w is a free vertex;
# label w with T and label its mate with S (R12).
assignLabel(w, 2, v)
elif label.get(bw) == 1:
# (C2) w is an S-vertex (not in the same blossom);
# follow back-links to discover either an
# augmenting path or a new blossom.
base = scanBlossom(v, w)
if base is not NoNode:
# Found a new blossom; add it to the blossom
# bookkeeping and turn it into an S-blossom.
addBlossom(base, v, w)
else:
# Found an augmenting path; augment the
# matching and end this stage.
augmentMatching(v, w)
augmented = 1
break
elif label.get(w) is None:
# w is inside a T-blossom, but w itself has not
# yet been reached from outside the blossom;
# mark it as reached (we need this to relabel
# during T-blossom expansion).
assert label[bw] == 2
label[w] = 2
labeledge[w] = (v, w)
elif label.get(bw) == 1:
# keep track of the least-slack non-allowable edge to
# a different S-blossom.
if bestedge.get(bv) is None or kslack < slack(*bestedge[bv]):
bestedge[bv] = (v, w)
elif label.get(w) is None:
# w is a free vertex (or an unreached vertex inside
# a T-blossom) but we can not reach it yet;
# keep track of the least-slack edge that reaches w.
if bestedge.get(w) is None or kslack < slack(*bestedge[w]):
bestedge[w] = (v, w)
if augmented:
break
# There is no augmenting path under these constraints;
# compute delta and reduce slack in the optimization problem.
# (Note that our vertex dual variables, edge slacks and delta's
# are pre-multiplied by two.)
deltatype = -1
delta = deltaedge = deltablossom = None
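# deltatype records which of the four update rules computed below
# (delta1..delta4) attains the minimum; -1 means no candidate chosen yet.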
# Compute delta1: the minimum value of any vertex dual.
if not maxcardinality:
deltatype = 1
delta = min(dualvar.values())
# Compute delta2: the minimum slack on any edge between
# an S-vertex and a free vertex.
for v in G.nodes():
if label.get(inblossom[v]) is None and bestedge.get(v) is not None:
d = slack(*bestedge[v])
if deltatype == -1 or d < delta:
delta = d
deltatype = 2
deltaedge = bestedge[v]
# Compute delta3: half the minimum slack on any edge between
# a pair of S-blossoms.
for b in blossomparent:
if (
blossomparent[b] is None
and label.get(b) == 1
and bestedge.get(b) is not None
):
kslack = slack(*bestedge[b])
if allinteger:
assert (kslack % 2) == 0
d = kslack // 2
else:
d = kslack / 2.0
if deltatype == -1 or d < delta:
delta = d
deltatype = 3
deltaedge = bestedge[b]
# Compute delta4: minimum z variable of any T-blossom.
for b in blossomdual:
if (
blossomparent[b] is None
and label.get(b) == 2
and (deltatype == -1 or blossomdual[b] < delta)
):
delta = blossomdual[b]
deltatype = 4
deltablossom = b
if deltatype == -1:
# No further improvement possible; max-cardinality optimum
# reached. Do a final delta update to make the optimum
# verifiable.
assert maxcardinality
deltatype = 1
delta = max(0, min(dualvar.values()))
# Update dual variables according to delta.
for v in gnodes:
if label.get(inblossom[v]) == 1:
# S-vertex: 2*u = 2*u - 2*delta
dualvar[v] -= delta
elif label.get(inblossom[v]) == 2:
# T-vertex: 2*u = 2*u + 2*delta
dualvar[v] += delta
for b in blossomdual:
if blossomparent[b] is None:
if label.get(b) == 1:
# top-level S-blossom: z = z + 2*delta
blossomdual[b] += delta
elif label.get(b) == 2:
# top-level T-blossom: z = z - 2*delta
blossomdual[b] -= delta
# Take action at the point where minimum delta occurred.
if deltatype == 1:
# No further improvement possible; optimum reached.
break
elif deltatype == 2:
# Use the least-slack edge to continue the search.
(v, w) = deltaedge
assert label[inblossom[v]] == 1
allowedge[(v, w)] = allowedge[(w, v)] = True
queue.append(v)
elif deltatype == 3:
# Use the least-slack edge to continue the search.
(v, w) = deltaedge
allowedge[(v, w)] = allowedge[(w, v)] = True
assert label[inblossom[v]] == 1
queue.append(v)
elif deltatype == 4:
# Expand the least-z blossom.
expandBlossom(deltablossom, False)
# End of this substage.
# Paranoia check that the matching is symmetric.
for v in mate:
assert mate[mate[v]] == v
# Stop when no more augmenting path can be found.
if not augmented:
break
# End of a stage; expand all S-blossoms which have zero dual.
for b in list(blossomdual.keys()):
if b not in blossomdual:
continue # already expanded
if blossomparent[b] is None and label.get(b) == 1 and blossomdual[b] == 0:
expandBlossom(b, True)
# Verify that we reached the optimum solution (only for integer weights).
if allinteger:
verifyOptimum()
return matching_dict_to_set(mate)
| (G, matching, *, backend=None, **backend_kwargs) |
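A short usage sketch tying together the two functions in this row, reusing the graphs from the docstrings above: compute a maximum-weight matching, then test whether it happens to be perfect.

import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([(1, 2, 6), (1, 3, 2), (2, 3, 1), (2, 4, 7), (3, 5, 9), (4, 5, 3)])
matching = nx.max_weight_matching(G)
assert sorted(matching) == [(2, 4), (5, 3)]
# Node 1 is left unmatched, so this maximum-weight matching is not perfect.
assert not nx.is_perfect_matching(G, matching)

H = nx.Graph([(1, 2), (1, 3), (2, 3), (2, 4), (3, 5), (4, 5), (4, 6)])
assert nx.is_perfect_matching(H, {1: 2, 3: 5, 4: 6})   # dict form from the docstring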