index : int64 (0 - 731k)
package : stringlengths (2 - 98)
name : stringlengths (1 - 76)
docstring : stringlengths (0 - 281k)
code : stringlengths (4 - 1.07M)
signature : stringlengths (2 - 42.8k)
31,178
networkx.algorithms.dag
topological_generations
Stratifies a DAG into generations. A topological generation is a collection of nodes in which the ancestors of a node in each generation are guaranteed to be in a previous generation, and any descendants of a node are guaranteed to be in a following generation. Nodes are guaranteed to be in the earliest possible generation that they can belong to. Parameters ---------- G : NetworkX digraph A directed acyclic graph (DAG) Yields ------ sets of nodes Yields sets of nodes representing each generation. Raises ------ NetworkXError Generations are defined for directed graphs only. If the graph `G` is undirected, a :exc:`NetworkXError` is raised. NetworkXUnfeasible If `G` is not a directed acyclic graph (DAG), no topological generations exist and a :exc:`NetworkXUnfeasible` exception is raised. This can also be raised if `G` is changed while the returned iterator is being processed. RuntimeError If `G` is changed while the returned iterator is being processed. Examples -------- >>> DG = nx.DiGraph([(2, 1), (3, 1)]) >>> [sorted(generation) for generation in nx.topological_generations(DG)] [[2, 3], [1]] Notes ----- The generation in which a node resides can also be determined by taking the max-path-distance from the node to the farthest leaf node. That value can be obtained with this function using `enumerate(topological_generations(G))`. See also -------- topological_sort
null
(G, *, backend=None, **backend_kwargs)
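The stratification described above can be illustrated without NetworkX. Below is a minimal pure-Python sketch of the layered variant of Kahn's algorithm (the function name `generations` and the edge-list input are illustrative; unlike the library routine, it does no cycle or mutation checking, and it infers the node set from the edges):

```python
from collections import defaultdict

def generations(edges):
    """Yield sorted lists of nodes layer by layer (Kahn's algorithm by levels)."""
    succ = defaultdict(list)
    indeg = defaultdict(int)
    nodes = set()
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1
        nodes.update((u, v))
    # first generation: all nodes with no incoming edges
    layer = sorted(n for n in nodes if indeg[n] == 0)
    while layer:
        yield layer
        nxt = []
        for u in layer:
            for v in succ[u]:
                indeg[v] -= 1
                if indeg[v] == 0:
                    nxt.append(v)
        layer = sorted(nxt)

# the docstring's example: edges (2, 1) and (3, 1)
print(list(generations([(2, 1), (3, 1)])))  # [[2, 3], [1]]
```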
31,179
networkx.algorithms.dag
topological_sort
Returns a generator of nodes in topologically sorted order. A topological sort is a nonunique permutation of the nodes of a directed graph such that an edge from u to v implies that u appears before v in the topological sort order. This ordering is valid only if the graph has no directed cycles. Parameters ---------- G : NetworkX digraph A directed acyclic graph (DAG) Yields ------ nodes Yields the nodes in topologically sorted order. Raises ------ NetworkXError Topological sort is defined for directed graphs only. If the graph `G` is undirected, a :exc:`NetworkXError` is raised. NetworkXUnfeasible If `G` is not a directed acyclic graph (DAG), no topological sort exists and a :exc:`NetworkXUnfeasible` exception is raised. This can also be raised if `G` is changed while the returned iterator is being processed. RuntimeError If `G` is changed while the returned iterator is being processed. Examples -------- To get the reverse order of the topological sort: >>> DG = nx.DiGraph([(1, 2), (2, 3)]) >>> list(reversed(list(nx.topological_sort(DG)))) [3, 2, 1] If your DiGraph naturally has the edges representing tasks/inputs and nodes representing people/processes that initiate tasks, then topological_sort is not quite what you need. You will have to change the tasks to nodes with dependence reflected by edges. The result is a kind of topological sort of the edges. This can be done with :func:`networkx.line_graph` as follows: >>> list(nx.topological_sort(nx.line_graph(DG))) [(1, 2), (2, 3)] Notes ----- This algorithm is based on a description and proof in "Introduction to Algorithms: A Creative Approach" [1]_. See also -------- is_directed_acyclic_graph, lexicographical_topological_sort References ---------- .. [1] Manber, U. (1989). *Introduction to Algorithms - A Creative Approach.* Addison-Wesley.
def topological_sort(G): for generation in nx.topological_generations(G): yield from generation
(G, *, backend=None, **backend_kwargs)
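A topological order can also be obtained by depth-first search: the reverse post-order of a DFS of a DAG is a topological order. A minimal pure-Python sketch (not the library's implementation; it assumes the input maps every node to a successor list and is acyclic):

```python
def topo_sort(succ):
    """DFS-based topological sort of an acyclic successor map."""
    seen, order = set(), []

    def visit(u):
        seen.add(u)
        for v in succ.get(u, ()):
            if v not in seen:
                visit(v)
        order.append(u)  # post-order: u is appended after all its descendants

    for u in succ:
        if u not in seen:
            visit(u)
    return order[::-1]  # reverse post-order is a topological order

print(topo_sort({1: [2], 2: [3], 3: []}))  # [1, 2, 3]
```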
31,180
networkx.linalg.laplacianmatrix
total_spanning_tree_weight
Returns the total weight of all spanning trees of `G`. Kirchhoff's Tree Matrix Theorem [1]_, [2]_ states that the determinant of any cofactor of the Laplacian matrix of a graph is the number of spanning trees in the graph. For a weighted Laplacian matrix, it is the sum across all spanning trees of the multiplicative weight of each tree. That is, the weight of each tree is the product of its edge weights. For unweighted graphs, the total weight equals the number of spanning trees in `G`. For directed graphs, the total weight follows by summing over all directed spanning trees in `G` that start in the `root` node [3]_. .. deprecated:: 3.3 ``total_spanning_tree_weight`` is deprecated and will be removed in v3.5. Use ``nx.number_of_spanning_trees(G)`` instead. Parameters ---------- G : NetworkX Graph weight : string or None, optional (default=None) The key for the edge attribute holding the edge weight. If None, then each edge has weight 1. root : node (only required for directed graphs) A node in the directed graph `G`. Returns ------- total_weight : float Undirected graphs: The sum of the total multiplicative weights for all spanning trees in `G`. Directed graphs: The sum of the total multiplicative weights for all spanning trees of `G`, rooted at node `root`. Raises ------ NetworkXPointlessConcept If `G` does not contain any nodes. NetworkXError If the graph `G` is not (weakly) connected, or if `G` is directed and the root node is not specified or not in G. Examples -------- >>> G = nx.complete_graph(5) >>> round(nx.total_spanning_tree_weight(G)) 125 >>> G = nx.Graph() >>> G.add_edge(1, 2, weight=2) >>> G.add_edge(1, 3, weight=1) >>> G.add_edge(2, 3, weight=1) >>> round(nx.total_spanning_tree_weight(G, "weight")) 5 Notes ----- Self-loops are excluded. Multi-edges are contracted into one edge equal to the sum of the weights. References ---------- .. [1] Wikipedia "Kirchhoff's theorem." https://en.wikipedia.org/wiki/Kirchhoff%27s_theorem .. [2] Kirchhoff, G. R. "Über die Auflösung der Gleichungen, auf welche man bei der Untersuchung der linearen Vertheilung Galvanischer Ströme geführt wird." Annalen der Physik und Chemie, vol. 72, pp. 497-508, 1847. .. [3] Margoliash, J. "Matrix-Tree Theorem for Directed Graphs" https://www.math.uchicago.edu/~may/VIGRE/VIGRE2010/REUPapers/Margoliash.pdf
null
(G, weight=None, root=None, *, backend=None, **backend_kwargs)
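Kirchhoff's theorem as stated above can be checked directly: build the Laplacian, delete one row and column, and take the determinant of the remaining minor. A small exact-arithmetic sketch for the unweighted case (the helper names `det` and `spanning_tree_count` are illustrative):

```python
from fractions import Fraction
from itertools import combinations

def det(M):
    """Exact determinant by Gaussian elimination over Fractions."""
    M = [[Fraction(x) for x in row] for row in M]
    n = len(M)
    d = Fraction(1)
    for i in range(n):
        # find a nonzero pivot in column i
        piv = next((r for r in range(i, n) if M[r][i] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != i:
            M[i], M[piv] = M[piv], M[i]
            d = -d  # a row swap flips the sign
        d *= M[i][i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n):
                M[r][c] -= f * M[i][c]
    return d

def spanning_tree_count(n_nodes, edges):
    """Kirchhoff: #spanning trees = any cofactor of the Laplacian."""
    L = [[0] * n_nodes for _ in range(n_nodes)]
    for u, v in edges:
        L[u][u] += 1
        L[v][v] += 1
        L[u][v] -= 1
        L[v][u] -= 1
    minor = [row[1:] for row in L[1:]]  # delete row 0 and column 0
    return int(det(minor))

# K5: Cayley's formula gives 5**3 == 125, matching the docstring example
print(spanning_tree_count(5, list(combinations(range(5), 2))))  # 125
```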
31,182
networkx.algorithms.dag
transitive_closure
Returns transitive closure of a graph. The transitive closure of G = (V,E) is a graph G+ = (V,E+) such that for all v, w in V there is an edge (v, w) in E+ if and only if there is a path from v to w in G. Handling of paths from v to v has some flexibility within this definition. A reflexive transitive closure creates a self-loop for the path from v to v of length 0. The usual transitive closure creates a self-loop only if a cycle exists (a path from v to v with length > 0). We also allow an option for no self-loops. Parameters ---------- G : NetworkX Graph A directed/undirected graph/multigraph. reflexive : Bool or None, optional (default: False) Determines when cycles create self-loops in the Transitive Closure. If True, trivial cycles (length 0) create self-loops. The result is a reflexive transitive closure of G. If False (the default) non-trivial cycles create self-loops. If None, self-loops are not created. Returns ------- NetworkX graph The transitive closure of `G` Raises ------ NetworkXError If `reflexive` not in `{None, True, False}` Examples -------- The treatment of trivial (i.e. length 0) cycles is controlled by the `reflexive` parameter. Trivial (i.e. length 0) cycles do not create self-loops when ``reflexive=False`` (the default):: >>> DG = nx.DiGraph([(1, 2), (2, 3)]) >>> TC = nx.transitive_closure(DG, reflexive=False) >>> TC.edges() OutEdgeView([(1, 2), (1, 3), (2, 3)]) However, nontrivial (i.e. length greater than 0) cycles create self-loops when ``reflexive=False`` (the default):: >>> DG = nx.DiGraph([(1, 2), (2, 3), (3, 1)]) >>> TC = nx.transitive_closure(DG, reflexive=False) >>> TC.edges() OutEdgeView([(1, 2), (1, 3), (1, 1), (2, 3), (2, 1), (2, 2), (3, 1), (3, 2), (3, 3)]) Trivial cycles (length 0) create self-loops when ``reflexive=True``:: >>> DG = nx.DiGraph([(1, 2), (2, 3)]) >>> TC = nx.transitive_closure(DG, reflexive=True) >>> TC.edges() OutEdgeView([(1, 2), (1, 1), (1, 3), (2, 3), (2, 2), (3, 3)]) And the third option is not to create self-loops at all when ``reflexive=None``:: >>> DG = nx.DiGraph([(1, 2), (2, 3), (3, 1)]) >>> TC = nx.transitive_closure(DG, reflexive=None) >>> TC.edges() OutEdgeView([(1, 2), (1, 3), (2, 3), (2, 1), (3, 1), (3, 2)]) References ---------- .. [1] https://www.ics.uci.edu/~eppstein/PADS/PartialOrder.py
null
(G, reflexive=False, *, backend=None, **backend_kwargs)
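For the default ``reflexive=False`` behavior, the closure is simply reachability by a nonempty path, which a BFS from every node computes directly. A pure-Python sketch (illustrative, not the library's implementation):

```python
from collections import deque

def transitive_closure_edges(succ):
    """All pairs (v, w) with a nonempty path v -> w, via BFS from each node."""
    closure = set()
    for s in succ:
        seen = set(succ[s])
        q = deque(succ[s])
        while q:
            u = q.popleft()
            closure.add((s, u))
            for v in succ.get(u, ()):
                if v not in seen:
                    seen.add(v)
                    q.append(v)
    return closure

print(sorted(transitive_closure_edges({1: [2], 2: [3], 3: []})))  # [(1, 2), (1, 3), (2, 3)]
```

Note that a self-loop (s, s) appears only when s lies on a cycle, matching the ``reflexive=False`` semantics described above.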
31,183
networkx.algorithms.dag
transitive_closure_dag
Returns the transitive closure of a directed acyclic graph. This function is faster than the function `transitive_closure`, but fails if the graph has a cycle. The transitive closure of G = (V,E) is a graph G+ = (V,E+) such that for all v, w in V there is an edge (v, w) in E+ if and only if there is a non-null path from v to w in G. Parameters ---------- G : NetworkX DiGraph A directed acyclic graph (DAG) topo_order: list or tuple, optional A topological order for G (if None, the function will compute one) Returns ------- NetworkX DiGraph The transitive closure of `G` Raises ------ NetworkXNotImplemented If `G` is not directed NetworkXUnfeasible If `G` has a cycle Examples -------- >>> DG = nx.DiGraph([(1, 2), (2, 3)]) >>> TC = nx.transitive_closure_dag(DG) >>> TC.edges() OutEdgeView([(1, 2), (1, 3), (2, 3)]) Notes ----- This algorithm is probably simple enough to be well-known but I didn't find a mention in the literature.
def transitive_closure_dag(G, topo_order=None): """Returns the transitive closure of a directed acyclic graph. This function is faster than the function `transitive_closure`, but fails if the graph has a cycle. The transitive closure of G = (V,E) is a graph G+ = (V,E+) such that for all v, w in V there is an edge (v, w) in E+ if and only if there is a non-null path from v to w in G. Parameters ---------- G : NetworkX DiGraph A directed acyclic graph (DAG) topo_order: list or tuple, optional A topological order for G (if None, the function will compute one) Returns ------- NetworkX DiGraph The transitive closure of `G` Raises ------ NetworkXNotImplemented If `G` is not directed NetworkXUnfeasible If `G` has a cycle Examples -------- >>> DG = nx.DiGraph([(1, 2), (2, 3)]) >>> TC = nx.transitive_closure_dag(DG) >>> TC.edges() OutEdgeView([(1, 2), (1, 3), (2, 3)]) Notes ----- This algorithm is probably simple enough to be well-known but I didn't find a mention in the literature. """ if topo_order is None: topo_order = list(topological_sort(G)) TC = G.copy() # idea: traverse vertices following a reverse topological order, connecting # each vertex to its descendants at distance 2 as we go for v in reversed(topo_order): TC.add_edges_from((v, u) for u in nx.descendants_at_distance(TC, v, 2)) return TC
(G, topo_order=None, *, backend=None, **backend_kwargs)
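The implementation above connects each vertex to its distance-2 descendants while sweeping in reverse topological order. The same sweep can be written with explicit reachability sets, which may be easier to follow; this is a pure-Python variant of the idea, not the library's code:

```python
def dag_closure(succ, topo):
    """Reachability sets for a DAG in one reverse-topological sweep."""
    reach = {v: set(succ[v]) for v in succ}
    for v in reversed(topo):
        for u in succ[v]:
            reach[v] |= reach[u]  # u comes later in topo order, so reach[u] is final
    return reach

print(dag_closure({1: [2], 2: [3], 3: []}, [1, 2, 3]))  # {1: {2, 3}, 2: {3}, 3: set()}
```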
31,184
networkx.algorithms.dag
transitive_reduction
Returns transitive reduction of a directed graph. The transitive reduction of G = (V,E) is a graph G- = (V,E-) such that for all v,w in V there is an edge (v,w) in E- if and only if (v,w) is in E and there is no path from v to w in G with length greater than 1. Parameters ---------- G : NetworkX DiGraph A directed acyclic graph (DAG) Returns ------- NetworkX DiGraph The transitive reduction of `G` Raises ------ NetworkXError If `G` is not a directed acyclic graph (DAG), transitive reduction is not uniquely defined and a :exc:`NetworkXError` exception is raised. Examples -------- To perform transitive reduction on a DiGraph: >>> DG = nx.DiGraph([(1, 2), (2, 3), (1, 3)]) >>> TR = nx.transitive_reduction(DG) >>> list(TR.edges) [(1, 2), (2, 3)] To avoid unnecessary data copies, this implementation does not return a DiGraph with node/edge data. To perform transitive reduction on a DiGraph and transfer node/edge data: >>> DG = nx.DiGraph() >>> DG.add_edges_from([(1, 2), (2, 3), (1, 3)], color="red") >>> TR = nx.transitive_reduction(DG) >>> TR.add_nodes_from(DG.nodes(data=True)) >>> TR.add_edges_from((u, v, DG.edges[u, v]) for u, v in TR.edges) >>> list(TR.edges(data=True)) [(1, 2, {'color': 'red'}), (2, 3, {'color': 'red'})] References ---------- https://en.wikipedia.org/wiki/Transitive_reduction
null
(G, *, backend=None, **backend_kwargs)
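For a DAG, an edge (u, v) is redundant exactly when some other successor of u also reaches v. A brute-force pure-Python sketch of that criterion (quadratic, unlike the library routine; the helper names are illustrative):

```python
def reachable(succ, s):
    """Nodes reachable from s by a nonempty path (iterative DFS)."""
    seen, stack = set(), [s]
    while stack:
        u = stack.pop()
        for v in succ.get(u, ()):
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def transitive_reduction_edges(succ):
    """Keep (u, v) only if no other successor of u also reaches v (DAG input)."""
    reach = {u: reachable(succ, u) for u in succ}
    kept = []
    for u in succ:
        for v in succ[u]:
            if not any(v in reach[w] for w in succ[u] if w != v):
                kept.append((u, v))
    return kept

# the docstring's example: (1, 3) is implied by (1, 2) and (2, 3)
print(sorted(transitive_reduction_edges({1: [2, 3], 2: [3], 3: []})))  # [(1, 2), (2, 3)]
```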
31,185
networkx.algorithms.cluster
transitivity
Compute graph transitivity, the fraction of all possible triangles present in G. Possible triangles are identified by the number of "triads" (two edges with a shared vertex). The transitivity is .. math:: T = 3\frac{\#triangles}{\#triads}. Parameters ---------- G : graph Returns ------- out : float Transitivity Notes ----- Self loops are ignored. Examples -------- >>> G = nx.complete_graph(5) >>> print(nx.transitivity(G)) 1.0
null
(G, *, backend=None, **backend_kwargs)
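The formula $T = 3\,\#triangles/\#triads$ can be evaluated by enumerating, at each vertex, every pair of its neighbors (a triad centered there) and checking whether the pair is itself an edge (closing a triangle). A pure-Python sketch for graphs given as adjacency sets:

```python
from itertools import combinations

def transitivity(adj):
    """3 * #triangles / #triads for an undirected graph given as adjacency sets."""
    closed = triads = 0
    for v in adj:
        for u, w in combinations(sorted(adj[v]), 2):  # each neighbor pair is a triad at v
            triads += 1
            if w in adj[u]:
                closed += 1  # each triangle is seen at all 3 of its vertices
    # hence closed / triads == 3 * #triangles / #triads
    return closed / triads if triads else 0.0

K5 = {v: {u for u in range(5) if u != v} for v in range(5)}
print(transitivity(K5))  # 1.0
```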
31,188
networkx.algorithms.lowest_common_ancestors
tree_all_pairs_lowest_common_ancestor
Yield the lowest common ancestor for sets of pairs in a tree. Parameters ---------- G : NetworkX directed graph (must be a tree) root : node, optional (default: None) The root of the subtree to operate on. If None, assume the entire graph has exactly one source and use that. pairs : iterable or iterator of pairs of nodes, optional (default: None) The pairs of interest. If None, defaults to all pairs of nodes under `root` that have a lowest common ancestor. Returns ------- lcas : generator of tuples `((u, v), lca)` where `u` and `v` are nodes in `pairs` and `lca` is their lowest common ancestor. Examples -------- >>> import pprint >>> G = nx.DiGraph([(1, 3), (2, 4), (1, 2)]) >>> pprint.pprint(dict(nx.tree_all_pairs_lowest_common_ancestor(G))) {(1, 1): 1, (2, 1): 1, (2, 2): 2, (3, 1): 1, (3, 2): 1, (3, 3): 3, (3, 4): 1, (4, 1): 1, (4, 2): 2, (4, 4): 4} We can also use the `pairs` argument to specify the pairs of nodes for which we want to compute lowest common ancestors. Here is an example: >>> dict(nx.tree_all_pairs_lowest_common_ancestor(G, pairs=[(1, 4), (2, 3)])) {(2, 3): 1, (1, 4): 1} Notes ----- Only defined on non-null trees represented with directed edges from parents to children. Uses Tarjan's off-line lowest-common-ancestors algorithm. Runs in $O(4 \times (V + E + P))$ time, where 4 is the largest value of the inverse Ackermann function likely to ever come up in actual use, and $P$ is the number of pairs requested (or $V^2$ if all are needed). Tarjan, R. E. (1979), "Applications of path compression on balanced trees", Journal of the ACM 26 (4): 690-715, doi:10.1145/322154.322161. See Also -------- all_pairs_lowest_common_ancestor: similar routine for general DAGs lowest_common_ancestor: just a single pair for general DAGs
null
(G, root=None, pairs=None, *, backend=None, **backend_kwargs)
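For a single pair in a rooted tree, the idea reduces to walking ancestor chains: collect the ancestors of one node, then climb from the other until the chains meet. This naive sketch is not Tarjan's off-line algorithm used by the library, just an illustration of what an LCA is (the parent-map input format is an assumption):

```python
def lca(parent, u, v):
    """Lowest common ancestor of u and v in a rooted tree (root's parent is None)."""
    anc = set()
    while u is not None:  # collect every ancestor of u, including u itself
        anc.add(u)
        u = parent[u]
    while v not in anc:  # climb from v until the chains meet
        v = parent[v]
    return v

# tree from the docstring example, edges (1, 3), (2, 4), (1, 2), rooted at 1
parent = {1: None, 2: 1, 3: 1, 4: 2}
print(lca(parent, 3, 4))  # 1
```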
31,189
networkx.algorithms.broadcasting
tree_broadcast_center
Return the Broadcast Center of the tree `G`. The broadcast center of a graph G denotes the set of nodes having minimum broadcast time [1]_. This is a linear algorithm for determining the broadcast center of a tree with ``N`` nodes; as a by-product, it also determines the broadcast time from the broadcast center. Parameters ---------- G : undirected graph The graph should be an undirected tree Returns ------- BC : (int, set) tuple minimum broadcast number of the tree, set of broadcast centers Raises ------ NetworkXNotImplemented If the graph is directed or is a multigraph. References ---------- .. [1] Slater, P.J., Cockayne, E.J., Hedetniemi, S.T., Information dissemination in trees. SIAM J. Comput. 10(4), 692–701 (1981)
null
(G, *, backend=None, **backend_kwargs)
31,190
networkx.algorithms.broadcasting
tree_broadcast_time
Return the Broadcast Time of the tree `G`. The minimum broadcast time of a node is defined as the minimum amount of time required to complete broadcasting starting from the originator. The broadcast time of a graph is the maximum over all nodes of the minimum broadcast time from that node [1]_. This function returns the minimum broadcast time of `node`. If `node` is None, the broadcast time for the graph is returned. Parameters ---------- G : undirected graph The graph should be an undirected tree node : int, optional Index of the starting node. If `None`, the algorithm returns the broadcast time of the tree. Returns ------- BT : int Broadcast Time of a node in a tree Raises ------ NetworkXNotImplemented If the graph is directed or is a multigraph. References ---------- .. [1] Harutyunyan, H. A. and Li, Z. "A Simple Construction of Broadcast Graphs." In Computing and Combinatorics. COCOON 2019 (Ed. D. Z. Du and C. Tian.) Springer, pp. 240-253, 2019.
null
(G, node=None, *, backend=None, **backend_kwargs)
31,191
networkx.readwrite.json_graph.tree
tree_data
Returns data in tree format that is suitable for JSON serialization and use in JavaScript documents. Parameters ---------- G : NetworkX graph G must be an oriented tree root : node The root of the tree ident : string Attribute name for storing NetworkX-internal graph data. `ident` must have a different value than `children`. The default is 'id'. children : string Attribute name for storing NetworkX-internal graph data. `children` must have a different value than `ident`. The default is 'children'. Returns ------- data : dict A dictionary with node-link formatted data. Raises ------ NetworkXError If `children` and `ident` attributes are identical. Examples -------- >>> from networkx.readwrite import json_graph >>> G = nx.DiGraph([(1, 2)]) >>> data = json_graph.tree_data(G, root=1) To serialize with json >>> import json >>> s = json.dumps(data) Notes ----- Node attributes are stored in this format but keys for attributes must be strings if you want to serialize with JSON. Graph and edge attributes are not stored. See Also -------- tree_graph, node_link_data, adjacency_data
def tree_data(G, root, ident="id", children="children"): """Returns data in tree format that is suitable for JSON serialization and use in JavaScript documents. Parameters ---------- G : NetworkX graph G must be an oriented tree root : node The root of the tree ident : string Attribute name for storing NetworkX-internal graph data. `ident` must have a different value than `children`. The default is 'id'. children : string Attribute name for storing NetworkX-internal graph data. `children` must have a different value than `ident`. The default is 'children'. Returns ------- data : dict A dictionary with node-link formatted data. Raises ------ NetworkXError If `children` and `ident` attributes are identical. Examples -------- >>> from networkx.readwrite import json_graph >>> G = nx.DiGraph([(1, 2)]) >>> data = json_graph.tree_data(G, root=1) To serialize with json >>> import json >>> s = json.dumps(data) Notes ----- Node attributes are stored in this format but keys for attributes must be strings if you want to serialize with JSON. Graph and edge attributes are not stored. See Also -------- tree_graph, node_link_data, adjacency_data """ if G.number_of_nodes() != G.number_of_edges() + 1: raise TypeError("G is not a tree.") if not G.is_directed(): raise TypeError("G is not directed.") if not nx.is_weakly_connected(G): raise TypeError("G is not weakly connected.") if ident == children: raise nx.NetworkXError("The values for `id` and `children` must be different.") def add_children(n, G): nbrs = G[n] if len(nbrs) == 0: return [] children_ = [] for child in nbrs: d = {**G.nodes[child], ident: child} c = add_children(child, G) if c: d[children] = c children_.append(d) return children_ return {**G.nodes[root], ident: root, children: add_children(root, G)}
(G, root, ident='id', children='children')
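The shape of the emitted dictionary can be sketched with a small recursive helper (illustrative only; the real `tree_data` also copies node attributes into each dict and always gives the root a children list, which this sketch omits when empty):

```python
import json

def to_tree_dict(succ, root, ident="id", children="children"):
    """Nested-dict form of a rooted tree, mirroring the tree_data layout."""
    d = {ident: root}
    kids = [to_tree_dict(succ, c, ident, children) for c in succ.get(root, ())]
    if kids:
        d[children] = kids
    return d

print(json.dumps(to_tree_dict({1: [2]}, 1)))  # {"id": 1, "children": [{"id": 2}]}
```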
31,192
networkx.readwrite.json_graph.tree
tree_graph
Returns graph from tree data format. Parameters ---------- data : dict Tree formatted graph data ident : string Attribute name for storing NetworkX-internal graph data. `ident` must have a different value than `children`. The default is 'id'. children : string Attribute name for storing NetworkX-internal graph data. `children` must have a different value than `ident`. The default is 'children'. Returns ------- G : NetworkX DiGraph Examples -------- >>> from networkx.readwrite import json_graph >>> G = nx.DiGraph([(1, 2)]) >>> data = json_graph.tree_data(G, root=1) >>> H = json_graph.tree_graph(data) See Also -------- tree_data, node_link_data, adjacency_data
null
(data, ident='id', children='children', *, backend=None, **backend_kwargs)
31,194
networkx.generators.triads
triad_graph
Returns the triad graph with the given name. Each string in the following tuple is a valid triad name:: ( "003", "012", "102", "021D", "021U", "021C", "111D", "111U", "030T", "030C", "201", "120D", "120U", "120C", "210", "300", ) Each triad name corresponds to one of the possible valid digraphs on three nodes. Parameters ---------- triad_name : string The name of a triad, as described above. Returns ------- :class:`~networkx.DiGraph` The digraph on three nodes with the given name. The nodes of the graph are the single-character strings 'a', 'b', and 'c'. Raises ------ ValueError If `triad_name` is not the name of a triad. See also -------- triadic_census
null
(triad_name, *, backend=None, **backend_kwargs)
31,195
networkx.algorithms.triads
triad_type
Returns the sociological triad type for a triad. Parameters ---------- G : digraph A NetworkX DiGraph with 3 nodes Returns ------- triad_type : str A string identifying the triad type Examples -------- >>> G = nx.DiGraph([(1, 2), (2, 3), (3, 1)]) >>> nx.triad_type(G) '030C' >>> G.add_edge(1, 3) >>> nx.triad_type(G) '120C' Notes ----- There can be 6 unique edges in a triad (order-3 DiGraph), so there are 2^6 = 64 unique triads given 3 nodes. These 64 triads each display exactly 1 of 16 topologies of triads (topologies can be permuted). These topologies are identified by the following notation: {m}{a}{n}{type} (for example: 111D, 210, 102) Here: {m} = number of mutual ties (takes 0, 1, 2, 3); a mutual tie is (0,1) AND (1,0) {a} = number of asymmetric ties (takes 0, 1, 2, 3); an asymmetric tie is (0,1) BUT NOT (1,0) or vice versa {n} = number of null ties (takes 0, 1, 2, 3); a null tie is NEITHER (0,1) NOR (1,0) {type} = a letter (takes U, D, C, T) corresponding to up, down, cyclical and transitive. This is only used for topologies that can have more than one form (e.g. 021D and 021U). References ---------- .. [1] Snijders, T. (2012). "Transitivity and triads." University of Oxford. https://web.archive.org/web/20170830032057/http://www.stats.ox.ac.uk/~snijders/Trans_Triads_ha.pdf
null
(G, *, backend=None, **backend_kwargs)
31,196
networkx.algorithms.triads
triadic_census
Determines the triadic census of a directed graph. The triadic census is a count of how many of the 16 possible types of triads are present in a directed graph. If a list of nodes is passed, only those triads that contain elements of `nodelist` are taken into account. Parameters ---------- G : digraph A NetworkX DiGraph nodelist : list List of nodes for which you want to calculate triadic census Returns ------- census : dict Dictionary with triad type as keys and number of occurrences as values. Examples -------- >>> G = nx.DiGraph([(1, 2), (2, 3), (3, 1), (3, 4), (4, 1), (4, 2)]) >>> triadic_census = nx.triadic_census(G) >>> for key, value in triadic_census.items(): ... print(f"{key}: {value}") 003: 0 012: 0 102: 0 021D: 0 021U: 0 021C: 0 111D: 0 111U: 0 030T: 2 030C: 2 201: 0 120D: 0 120U: 0 120C: 0 210: 0 300: 0 Notes ----- This algorithm has complexity $O(m)$ where $m$ is the number of edges in the graph. For undirected graphs, the triadic census can be computed by first converting the graph into a directed graph using the ``G.to_directed()`` method. After this conversion, only the triad types 003, 102, 201 and 300 will be present in the undirected scenario. Raises ------ ValueError If `nodelist` contains duplicate nodes or nodes not in `G`. If you want to ignore this you can preprocess with `set(nodelist) & G.nodes`. See also -------- triad_graph References ---------- .. [1] Vladimir Batagelj and Andrej Mrvar, A subquadratic triad census algorithm for large sparse networks with small maximum degree, University of Ljubljana, http://vlado.fmf.uni-lj.si/pub/networks/doc/triads/triads.pdf
null
(G, nodelist=None, *, backend=None, **backend_kwargs)
31,198
networkx.algorithms.triads
triads_by_type
Returns a list of all triads for each triad type in a directed graph. There are exactly 16 different types of triads possible. Suppose 1, 2, 3 are three nodes; they will be classified as a particular triad type if their connections are as follows: - 003: 1, 2, 3 - 012: 1 -> 2, 3 - 102: 1 <-> 2, 3 - 021D: 1 <- 2 -> 3 - 021U: 1 -> 2 <- 3 - 021C: 1 -> 2 -> 3 - 111D: 1 <-> 2 <- 3 - 111U: 1 <-> 2 -> 3 - 030T: 1 -> 2 -> 3, 1 -> 3 - 030C: 1 <- 2 <- 3, 1 -> 3 - 201: 1 <-> 2 <-> 3 - 120D: 1 <- 2 -> 3, 1 <-> 3 - 120U: 1 -> 2 <- 3, 1 <-> 3 - 120C: 1 -> 2 -> 3, 1 <-> 3 - 210: 1 -> 2 <-> 3, 1 <-> 3 - 300: 1 <-> 2 <-> 3, 1 <-> 3 Refer to the :doc:`example gallery </auto_examples/graph/plot_triad_types>` for visual examples of the triad types. Parameters ---------- G : digraph A NetworkX DiGraph Returns ------- tri_by_type : dict Dictionary with triad types as keys and lists of triads as values. Examples -------- >>> G = nx.DiGraph([(1, 2), (1, 3), (2, 3), (3, 1), (5, 6), (5, 4), (6, 7)]) >>> dict = nx.triads_by_type(G) >>> dict["120C"][0].edges() OutEdgeView([(1, 2), (1, 3), (2, 3), (3, 1)]) >>> dict["012"][0].edges() OutEdgeView([(1, 2)]) References ---------- .. [1] Snijders, T. (2012). "Transitivity and triads." University of Oxford. https://web.archive.org/web/20170830032057/http://www.stats.ox.ac.uk/~snijders/Trans_Triads_ha.pdf
null
(G, *, backend=None, **backend_kwargs)
31,199
networkx.algorithms.cluster
triangles
Compute the number of triangles. Finds the number of triangles that include a node as one vertex. Parameters ---------- G : graph A networkx graph nodes : node, iterable of nodes, or None (default=None) If a singleton node, return the number of triangles for that node. If an iterable, compute the number of triangles for each of those nodes. If `None` (the default) compute the number of triangles for all nodes in `G`. Returns ------- out : dict or int If `nodes` is a container of nodes, returns number of triangles keyed by node (dict). If `nodes` is a specific node, returns number of triangles for the node (int). Examples -------- >>> G = nx.complete_graph(5) >>> print(nx.triangles(G, 0)) 6 >>> print(nx.triangles(G)) {0: 6, 1: 6, 2: 6, 3: 6, 4: 6} >>> print(list(nx.triangles(G, [0, 1]).values())) [6, 6] Notes ----- Self loops are ignored.
null
(G, nodes=None, *, backend=None, **backend_kwargs)
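Per-node triangle counts can be computed from neighborhood intersections: a triangle through v corresponds to an edge between two neighbors of v, and each such triangle is seen from both of those neighbors. A pure-Python sketch for adjacency-set input:

```python
def triangles_per_node(adj):
    """Triangle count through each node of an undirected graph (adjacency sets)."""
    # |adj[v] & adj[u]| counts common neighbors of v and u; each triangle at v
    # is seen from both of its other two vertices, hence the division by 2
    return {v: sum(len(adj[v] & adj[u]) for u in adj[v]) // 2 for v in adj}

K5 = {v: {u for u in range(5) if u != v} for v in range(5)}
print(triangles_per_node(K5))  # {0: 6, 1: 6, 2: 6, 3: 6, 4: 6}
```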
31,200
networkx.generators.lattice
triangular_lattice_graph
Returns the $m$ by $n$ triangular lattice graph. The `triangular lattice graph`_ is a two-dimensional `grid graph`_ in which each square unit has a diagonal edge (each grid unit has a chord). The returned graph has $m$ rows and $n$ columns of triangles. Rows and columns include both triangles pointing up and down. Rows form a strip of constant height. Columns form a series of diamond shapes, staggered with the columns on either side. Another way to state the size is that the nodes form a grid of `m+1` rows and `(n + 1) // 2` columns. The odd row nodes are shifted horizontally relative to the even rows. Directed graph types have edges pointed up or right. Positions of nodes are computed by default or when `with_positions` is True. The position of each node (embedded in a Euclidean plane) is stored in the graph using equilateral triangles with side length 1. The height between rows of nodes is thus $\sqrt{3}/2$. Nodes lie in the first quadrant with the node $(0, 0)$ at the origin. .. _triangular lattice graph: http://mathworld.wolfram.com/TriangularGrid.html .. _grid graph: http://www-cs-students.stanford.edu/~amitp/game-programming/grids/ .. _Triangular Tiling: https://en.wikipedia.org/wiki/Triangular_tiling Parameters ---------- m : int The number of rows in the lattice. n : int The number of columns in the lattice. periodic : bool (default: False) If True, join the boundary vertices of the grid using periodic boundary conditions. The join between boundaries is the final row and column of triangles. This means there is one row and one column fewer nodes for the periodic lattice. Periodic lattices require `m >= 3`, `n >= 5` and are allowed but misaligned if `m` or `n` are odd. with_positions : bool (default: True) Store the coordinates of each node in the graph node attribute 'pos'. The coordinates provide a lattice with equilateral triangles. Periodic positions shift the nodes vertically in a nonlinear way so the edges don't overlap so much. create_using : NetworkX graph constructor, optional (default=nx.Graph) Graph type to create. If graph instance, then cleared before populated. Returns ------- NetworkX graph The *m* by *n* triangular lattice graph.
null
(m, n, periodic=False, with_positions=True, create_using=None, *, backend=None, **backend_kwargs)
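The embedding described in the docstring can be sketched in a few lines of pure Python. This is a hedged illustration of the stated layout (rows $\sqrt{3}/2$ apart, odd rows shifted by half a unit); the grid dimensions used here are an assumption for the example, not the NetworkX internals:

```python
from math import sqrt

# Sketch of the equilateral-triangle embedding described above: rows of
# nodes are sqrt(3)/2 apart vertically, odd rows shift right by 1/2 so
# each unit square of the grid splits into triangles of side 1.
def lattice_positions(m, n):
    rows = m + 1                # rows of nodes
    cols = (n + 1) // 2 + 1     # columns of nodes (assumed convention)
    h = sqrt(3) / 2             # vertical distance between node rows
    return {
        (i, j): (i + (j % 2) / 2, j * h)
        for j in range(rows)
        for i in range(cols)
    }

pos = lattice_positions(2, 4)
```

All nodes land in the first quadrant with `(0, 0)` at the origin, matching the docstring's description.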
31,201
networkx.generators.classic
trivial_graph
Return the Trivial graph with one node (with label 0) and no edges. .. plot:: >>> nx.draw(nx.trivial_graph(), with_labels=True)
def star_graph(n, create_using=None): """Return the star graph The star graph consists of one center node connected to n outer nodes. .. plot:: >>> nx.draw(nx.star_graph(6)) Parameters ---------- n : int or iterable If an integer, node labels are 0 to n with center 0. If an iterable of nodes, the center is the first. Warning: n is not checked for duplicates and if present the resulting graph may not be as desired. Make sure you have no duplicates. create_using : NetworkX graph constructor, optional (default=nx.Graph) Graph type to create. If graph instance, then cleared before populated. Notes ----- The graph has n+1 nodes for integer n. So star_graph(3) is the same as star_graph(range(4)). """ n, nodes = n if isinstance(n, numbers.Integral): nodes.append(int(n)) # there should be n+1 nodes G = empty_graph(nodes, create_using) if G.is_directed(): raise NetworkXError("Directed Graph not supported") if len(nodes) > 1: hub, *spokes = nodes G.add_edges_from((hub, node) for node in spokes) return G
(create_using=None, *, backend=None, **backend_kwargs)
31,203
networkx.algorithms.centrality.trophic
trophic_differences
Compute the trophic differences of the edges of a directed graph. The trophic difference $x_{ij}$ for each edge is defined in Johnson et al. [1]_ as: .. math:: x_{ij} = s_j - s_i where $s_i$ is the trophic level of node $i$. Parameters ---------- G : DiGraph A directed networkx graph Returns ------- diffs : dict Dictionary of edges with trophic differences as the value. References ---------- .. [1] Samuel Johnson, Virginia Dominguez-Garcia, Luca Donetti, Miguel A. Munoz (2014) PNAS "Trophic coherence determines food-web stability"
null
(G, weight='weight', *, backend=None, **backend_kwargs)
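The definitions above have a direct pure-Python form for an unweighted DAG, where $a_{ij} = 1$ per edge and the weighted sum over predecessors becomes a mean. This is a minimal sketch of the formula, not the NetworkX implementation (which solves a linear system and also handles non-DAG inputs):

```python
# Trophic levels on an unweighted DAG: s_v = 1 for basal nodes
# (in-degree 0), else 1 + mean of the predecessors' levels.
def trophic_levels(edges):
    preds, nodes = {}, set()
    for u, v in edges:
        nodes.update((u, v))
        preds.setdefault(v, []).append(u)
    s = {}
    def level(v):
        if v not in s:
            ps = preds.get(v, [])
            s[v] = 1 if not ps else 1 + sum(level(p) for p in ps) / len(ps)
        return s[v]
    for v in nodes:
        level(v)
    return s

edges = [(1, 2), (2, 3)]
s = trophic_levels(edges)
diffs = {(u, v): s[v] - s[u] for u, v in edges}  # trophic differences x_ij
```

For the chain 1→2→3 the levels are 1, 2, 3 and every trophic difference is exactly 1.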
31,204
networkx.algorithms.centrality.trophic
trophic_incoherence_parameter
Compute the trophic incoherence parameter of a graph. Trophic coherence is defined as the homogeneity of the distribution of trophic distances: the more similar, the more coherent. This is measured by the standard deviation of the trophic differences and referred to as the trophic incoherence parameter $q$ by [1]. Parameters ---------- G : DiGraph A directed networkx graph cannibalism: Boolean If set to False, self edges are not considered in the calculation Returns ------- trophic_incoherence_parameter : float The trophic coherence of a graph References ---------- .. [1] Samuel Johnson, Virginia Dominguez-Garcia, Luca Donetti, Miguel A. Munoz (2014) PNAS "Trophic coherence determines food-web stability"
null
(G, weight='weight', cannibalism=False, *, backend=None, **backend_kwargs)
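A small worked example of the incoherence parameter: for the unweighted DAG 1→2, 1→3, 2→3 the trophic levels are $s_1 = 1$, $s_2 = 2$, $s_3 = 1 + (s_1 + s_2)/2 = 2.5$ by the definition cited above, and $q$ is the standard deviation of the per-edge differences $s_j - s_i$ (levels hard-coded here for brevity):

```python
from statistics import pstdev

# Per-edge trophic differences of the feed-forward triangle 1->2, 1->3, 2->3.
s = {1: 1.0, 2: 2.0, 3: 2.5}
diffs = [s[v] - s[u] for u, v in [(1, 2), (1, 3), (2, 3)]]
q = pstdev(diffs)  # nonzero: this graph is not perfectly coherent
```

A directed chain, by contrast, has all differences equal to 1 and hence $q = 0$ (perfect coherence).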
31,205
networkx.algorithms.centrality.trophic
trophic_levels
Compute the trophic levels of nodes. The trophic level of a node $i$ is .. math:: s_i = 1 + \frac{1}{k^{in}_i} \sum_{j} a_{ij} s_j where $k^{in}_i$ is the in-degree of i .. math:: k^{in}_i = \sum_{j} a_{ij} and nodes with $k^{in}_i = 0$ have $s_i = 1$ by convention. These are calculated using the method outlined in Levine [1]_. Parameters ---------- G : DiGraph A directed networkx graph Returns ------- nodes : dict Dictionary of nodes with trophic level as the value. References ---------- .. [1] Stephen Levine (1980) J. theor. Biol. 83, 195-207
null
(G, weight='weight', *, backend=None, **backend_kwargs)
31,206
networkx.generators.small
truncated_cube_graph
Returns the skeleton of the truncated cube. The truncated cube is an Archimedean solid with 14 regular faces (6 octagonal and 8 triangular), 36 edges and 24 nodes [1]_. The truncated cube is created by truncating (cutting off) the tips of the cube one third of the way into each edge [2]_. Parameters ---------- create_using : NetworkX graph constructor, optional (default=nx.Graph) Graph type to create. If graph instance, then cleared before populated. Returns ------- G : networkx Graph Skeleton of the truncated cube References ---------- .. [1] https://en.wikipedia.org/wiki/Truncated_cube .. [2] https://www.coolmath.com/reference/polyhedra-truncated-cube
def _raise_on_directed(func): """ A decorator which inspects the `create_using` argument and raises a NetworkX exception when `create_using` is a DiGraph (class or instance) for graph generators that do not support directed outputs. """ @wraps(func) def wrapper(*args, **kwargs): if kwargs.get("create_using") is not None: G = nx.empty_graph(create_using=kwargs["create_using"]) if G.is_directed(): raise NetworkXError("Directed Graph not supported") return func(*args, **kwargs) return wrapper
(create_using=None, *, backend=None, **backend_kwargs)
31,207
networkx.generators.small
truncated_tetrahedron_graph
Returns the skeleton of the truncated Platonic tetrahedron. The truncated tetrahedron is an Archimedean solid with 4 regular hexagonal faces, 4 equilateral triangle faces, 12 nodes and 18 edges. It can be constructed by truncating all 4 vertices of a regular tetrahedron at one third of the original edge length [1]_. Parameters ---------- create_using : NetworkX graph constructor, optional (default=nx.Graph) Graph type to create. If graph instance, then cleared before populated. Returns ------- G : networkx Graph Skeleton of the truncated tetrahedron References ---------- .. [1] https://en.wikipedia.org/wiki/Truncated_tetrahedron
def sedgewick_maze_graph(create_using=None): """ Return a small maze with a cycle. This is the maze used in Sedgewick, 3rd Edition, Part 5, Graph Algorithms, Chapter 18, e.g. Figure 18.2 and following [1]_. Nodes are numbered 0,..,7 Parameters ---------- create_using : NetworkX graph constructor, optional (default=nx.Graph) Graph type to create. If graph instance, then cleared before populated. Returns ------- G : networkx Graph Small maze with a cycle References ---------- .. [1] Figure 18.2, Chapter 18, Graph Algorithms (3rd Ed), Sedgewick """ G = empty_graph(0, create_using) G.add_nodes_from(range(8)) G.add_edges_from([[0, 2], [0, 7], [0, 5]]) G.add_edges_from([[1, 7], [2, 6]]) G.add_edges_from([[3, 4], [3, 5]]) G.add_edges_from([[4, 5], [4, 7], [4, 6]]) G.name = "Sedgewick Maze" return G
(create_using=None, *, backend=None, **backend_kwargs)
31,208
networkx.generators.classic
turan_graph
Return the Turan Graph The Turan Graph is a complete multipartite graph on $n$ nodes with $r$ disjoint subsets. That is, edges connect each node to every node not in its subset. Given $n$ and $r$, we create a complete multipartite graph with $r-(n \mod r)$ partitions of size $n/r$, rounded down, and $n \mod r$ partitions of size $n/r+1$, rounded down. .. plot:: >>> nx.draw(nx.turan_graph(6, 2)) Parameters ---------- n : int The number of nodes. r : int The number of partitions. Must be less than or equal to n. Notes ----- Must satisfy $1 <= r <= n$. The graph has $(r-1)(n^2)/(2r)$ edges, rounded down.
def star_graph(n, create_using=None): """Return the star graph The star graph consists of one center node connected to n outer nodes. .. plot:: >>> nx.draw(nx.star_graph(6)) Parameters ---------- n : int or iterable If an integer, node labels are 0 to n with center 0. If an iterable of nodes, the center is the first. Warning: n is not checked for duplicates and if present the resulting graph may not be as desired. Make sure you have no duplicates. create_using : NetworkX graph constructor, optional (default=nx.Graph) Graph type to create. If graph instance, then cleared before populated. Notes ----- The graph has n+1 nodes for integer n. So star_graph(3) is the same as star_graph(range(4)). """ n, nodes = n if isinstance(n, numbers.Integral): nodes.append(int(n)) # there should be n+1 nodes G = empty_graph(nodes, create_using) if G.is_directed(): raise NetworkXError("Directed Graph not supported") if len(nodes) > 1: hub, *spokes = nodes G.add_edges_from((hub, node) for node in spokes) return G
(n, r, *, backend=None, **backend_kwargs)
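The partition arithmetic described above is worth seeing concretely: $n \bmod r$ parts of size $\lfloor n/r \rfloor + 1$ and the remaining $r - (n \bmod r)$ parts of size $\lfloor n/r \rfloor$. A short sketch, including the complete-multipartite edge count:

```python
# Turan partition sizes: n mod r "large" parts, the rest one node smaller.
def turan_partition_sizes(n, r):
    q, rem = divmod(n, r)
    return [q + 1] * rem + [q] * (r - rem)

sizes = turan_partition_sizes(13, 4)
# complete multipartite edge count: (n^2 - sum of squared part sizes) / 2
edges = (13 * 13 - sum(s * s for s in sizes)) // 2
```

For $n = 13$, $r = 4$ this gives parts of sizes 4, 3, 3, 3 and 63 edges, which agrees with the docstring's $(r-1)n^2/(2r)$ rounded down.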
31,209
networkx.generators.small
tutte_graph
Returns the Tutte graph. The Tutte graph is a cubic polyhedral, non-Hamiltonian graph. It has 46 nodes and 69 edges. It is a counterexample to Tait's conjecture that every 3-regular polyhedron has a Hamiltonian cycle. It can be realized geometrically from a tetrahedron by multiply truncating three of its vertices [1]_. Parameters ---------- create_using : NetworkX graph constructor, optional (default=nx.Graph) Graph type to create. If graph instance, then cleared before populated. Returns ------- G : networkx Graph Tutte graph References ---------- .. [1] https://en.wikipedia.org/wiki/Tutte_graph
def _raise_on_directed(func): """ A decorator which inspects the `create_using` argument and raises a NetworkX exception when `create_using` is a DiGraph (class or instance) for graph generators that do not support directed outputs. """ @wraps(func) def wrapper(*args, **kwargs): if kwargs.get("create_using") is not None: G = nx.empty_graph(create_using=kwargs["create_using"]) if G.is_directed(): raise NetworkXError("Directed Graph not supported") return func(*args, **kwargs) return wrapper
(create_using=None, *, backend=None, **backend_kwargs)
31,210
networkx.algorithms.polynomials
tutte_polynomial
Returns the Tutte polynomial of `G` This function computes the Tutte polynomial via an iterative version of the deletion-contraction algorithm. The Tutte polynomial `T_G(x, y)` is a fundamental graph polynomial invariant in two variables. It encodes a wide array of information related to the edge-connectivity of a graph; "Many problems about graphs can be reduced to problems of finding and evaluating the Tutte polynomial at certain values" [1]_. In fact, every deletion-contraction-expressible feature of a graph is a specialization of the Tutte polynomial [2]_ (see Notes for examples). There are several equivalent definitions; here are three: Def 1 (rank-nullity expansion): For `G` an undirected graph, `n(G)` the number of vertices of `G`, `E` the edge set of `G`, `V` the vertex set of `G`, and `c(A)` the number of connected components of the graph with vertex set `V` and edge set `A` [3]_: .. math:: T_G(x, y) = \sum_{A \in E} (x-1)^{c(A) - c(E)} (y-1)^{c(A) + |A| - n(G)} Def 2 (spanning tree expansion): Let `G` be an undirected graph, `T` a spanning tree of `G`, and `E` the edge set of `G`. Let `E` have an arbitrary strict linear order `L`. Let `B_e` be the unique minimal nonempty edge cut of $E \setminus T \cup {e}$. An edge `e` is internally active with respect to `T` and `L` if `e` is the least edge in `B_e` according to the linear order `L`. The internal activity of `T` (denoted `i(T)`) is the number of edges in $E \setminus T$ that are internally active with respect to `T` and `L`. Let `P_e` be the unique path in $T \cup {e}$ whose source and target vertex are the same. An edge `e` is externally active with respect to `T` and `L` if `e` is the least edge in `P_e` according to the linear order `L`. The external activity of `T` (denoted `e(T)`) is the number of edges in $E \setminus T$ that are externally active with respect to `T` and `L`. Then [4]_ [5]_: .. 
math:: T_G(x, y) = \sum_{T \text{ a spanning tree of } G} x^{i(T)} y^{e(T)} Def 3 (deletion-contraction recurrence): For `G` an undirected graph, `G-e` the graph obtained from `G` by deleting edge `e`, `G/e` the graph obtained from `G` by contracting edge `e`, `k(G)` the number of cut-edges of `G`, and `l(G)` the number of self-loops of `G`: .. math:: T_G(x, y) = \begin{cases} x^{k(G)} y^{l(G)}, & \text{if all edges are cut-edges or self-loops} \\ T_{G-e}(x, y) + T_{G/e}(x, y), & \text{otherwise, for an arbitrary edge $e$ not a cut-edge or loop} \end{cases} Parameters ---------- G : NetworkX graph Returns ------- instance of `sympy.core.add.Add` A Sympy expression representing the Tutte polynomial for `G`. Examples -------- >>> C = nx.cycle_graph(5) >>> nx.tutte_polynomial(C) x**4 + x**3 + x**2 + x + y >>> D = nx.diamond_graph() >>> nx.tutte_polynomial(D) x**3 + 2*x**2 + 2*x*y + x + y**2 + y Notes ----- Some specializations of the Tutte polynomial: - `T_G(1, 1)` counts the number of spanning trees of `G` - `T_G(1, 2)` counts the number of connected spanning subgraphs of `G` - `T_G(2, 1)` counts the number of spanning forests in `G` - `T_G(0, 2)` counts the number of strong orientations of `G` - `T_G(2, 0)` counts the number of acyclic orientations of `G` Edge contraction is defined and deletion-contraction is introduced in [6]_. Combinatorial meaning of the coefficients is introduced in [7]_. Universality, properties, and applications are discussed in [8]_. Practically, up-front computation of the Tutte polynomial may be useful when users wish to repeatedly calculate edge-connectivity-related information about one or more graphs. References ---------- .. [1] M. Brandt, "The Tutte Polynomial." Talking About Combinatorial Objects Seminar, 2015 https://math.berkeley.edu/~brandtm/talks/tutte.pdf .. [2] A. Björklund, T. Husfeldt, P. Kaski, M. 
Koivisto, "Computing the Tutte polynomial in vertex-exponential time" 49th Annual IEEE Symposium on Foundations of Computer Science, 2008 https://ieeexplore.ieee.org/abstract/document/4691000 .. [3] Y. Shi, M. Dehmer, X. Li, I. Gutman, "Graph Polynomials," p. 14 .. [4] Y. Shi, M. Dehmer, X. Li, I. Gutman, "Graph Polynomials," p. 46 .. [5] A. Nešetril, J. Goodall, "Graph invariants, homomorphisms, and the Tutte polynomial" https://iuuk.mff.cuni.cz/~andrew/Tutte.pdf .. [6] D. B. West, "Introduction to Graph Theory," p. 84 .. [7] G. Coutinho, "A brief introduction to the Tutte polynomial" Structural Analysis of Complex Networks, 2011 https://homepages.dcc.ufmg.br/~gabriel/seminars/coutinho_tuttepolynomial_seminar.pdf .. [8] J. A. Ellis-Monaghan, C. Merino, "Graph polynomials and their applications I: The Tutte polynomial" Structural Analysis of Complex Networks, 2011 https://arxiv.org/pdf/0803.3079.pdf
null
(G, *, backend=None, **backend_kwargs)
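The deletion-contraction recurrence of Def 3 is easiest to see at a specialization: $T_G(1, 1)$ counts spanning trees, and there the recurrence collapses to $t(G) = t(G-e) + t(G/e)$. A hedged pure-Python sketch for small connected multigraphs (exponential time; not the NetworkX implementation, which builds the full polynomial with sympy):

```python
# Count spanning trees of a connected multigraph via deletion-contraction.
def spanning_trees(nodes, edges):
    edges = [e for e in edges if e[0] != e[1]]  # self-loops never affect t(G)
    if not edges:
        return 1 if len(nodes) == 1 else 0      # >1 node left => disconnected
    (u, v), rest = edges[0], edges[1:]
    deleted = spanning_trees(nodes, rest)       # trees avoiding edge (u, v)
    # contract (u, v): merge endpoint v into u in the remaining edges
    merged = [(u if a == v else a, u if b == v else b) for a, b in rest]
    contracted = spanning_trees(nodes - {v}, merged)  # trees using (u, v)
    return deleted + contracted

cycle5 = [(i, (i + 1) % 5) for i in range(5)]
```

For the 5-cycle this yields 5, consistent with evaluating the docstring's polynomial `x**4 + x**3 + x**2 + x + y` at `(1, 1)`.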
31,212
networkx.generators.intersection
uniform_random_intersection_graph
Returns a uniform random intersection graph. Parameters ---------- n : int The number of nodes in the first bipartite set (nodes) m : int The number of nodes in the second bipartite set (attributes) p : float Probability of connecting nodes between bipartite sets seed : integer, random_state, or None (default) Indicator of random number generation state. See :ref:`Randomness<randomness>`. See Also -------- gnp_random_graph References ---------- .. [1] K.B. Singer-Cohen, Random Intersection Graphs, 1995, PhD thesis, Johns Hopkins University .. [2] Fill, J. A., Scheinerman, E. R., and Singer-Cohen, K. B., Random intersection graphs when m = !(n): An equivalence theorem relating the evolution of the g(n, m, p) and g(n, p) models. Random Struct. Algorithms 16, 2 (2000), 156–176.
null
(n, m, p, seed=None, *, backend=None, **backend_kwargs)
31,213
networkx.algorithms.operators.binary
union
Combine graphs G and H. The names of nodes must be unique. A name collision between the graphs will raise an exception. A renaming facility is provided to avoid name collisions. Parameters ---------- G, H : graph A NetworkX graph rename : iterable , optional Node names of G and H can be changed by specifying the tuple rename=('G-','H-') (for example). Node "u" in G is then renamed "G-u" and "v" in H is renamed "H-v". Returns ------- U : A union graph with the same type as G. See Also -------- compose :func:`~networkx.Graph.update` disjoint_union Notes ----- To combine graphs that have common nodes, consider compose(G, H) or the method, Graph.update(). disjoint_union() is similar to union() except that it avoids name clashes by relabeling the nodes with sequential integers. Edge and node attributes are propagated from G and H to the union graph. Graph attributes are also propagated, but if they are present in both G and H, then the value from H is used. Examples -------- >>> G = nx.Graph([(0, 1), (0, 2), (1, 2)]) >>> H = nx.Graph([(0, 1), (0, 3), (1, 3), (1, 2)]) >>> U = nx.union(G, H, rename=("G", "H")) >>> U.nodes NodeView(('G0', 'G1', 'G2', 'H0', 'H1', 'H3', 'H2')) >>> U.edges EdgeView([('G0', 'G1'), ('G0', 'G2'), ('G1', 'G2'), ('H0', 'H1'), ('H0', 'H3'), ('H1', 'H3'), ('H1', 'H2')])
null
(G, H, rename=(), *, backend=None, **backend_kwargs)
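The rename mechanism described above amounts to prefixing node labels before merging, which guarantees the two node sets are disjoint. A minimal sketch of that idea over plain edge lists, mirroring the `rename=("G", "H")` docstring example:

```python
# Prefix each graph's node labels, then concatenate the edge lists.
def renamed_union_edges(g_edges, h_edges, rename=("G", "H")):
    def tag(prefix, edges):
        return [(f"{prefix}{u}", f"{prefix}{v}") for u, v in edges]
    return tag(rename[0], g_edges) + tag(rename[1], h_edges)

U = renamed_union_edges([(0, 1), (0, 2)], [(0, 1)])
```

Without the prefixes, node `0` of G and node `0` of H would collide, which is exactly the case where `union` raises instead.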
31,214
networkx.algorithms.operators.all
union_all
Returns the union of all graphs. The graphs must be disjoint, otherwise an exception is raised. Parameters ---------- graphs : iterable Iterable of NetworkX graphs rename : iterable, optional Node names of graphs can be changed by specifying the tuple rename=('G-','H-') (for example). Node "u" in G is then renamed "G-u" and "v" in H is renamed "H-v". Infinite generators (like itertools.count) are also supported. Returns ------- U : a graph with the same type as the first graph in the list Raises ------ ValueError If `graphs` is an empty list. NetworkXError In case of mixed type graphs, like MultiGraph and Graph, or directed and undirected graphs. Notes ----- For operating on mixed type graphs, they should be converted to the same type. >>> G = nx.Graph() >>> H = nx.DiGraph() >>> GH = union_all([nx.DiGraph(G), H]) To force a disjoint union with node relabeling, use disjoint_union_all(G,H) or convert_node_labels_to_integers(). Graph, edge, and node attributes are propagated to the union graph. If a graph attribute is present in multiple graphs, then the value from the last graph in the list with that attribute is used. Examples -------- >>> G1 = nx.Graph([(1, 2), (2, 3)]) >>> G2 = nx.Graph([(4, 5), (5, 6)]) >>> result_graph = nx.union_all([G1, G2]) >>> result_graph.nodes() NodeView((1, 2, 3, 4, 5, 6)) >>> result_graph.edges() EdgeView([(1, 2), (2, 3), (4, 5), (5, 6)]) See Also -------- union disjoint_union_all
null
(graphs, rename=(), *, backend=None, **backend_kwargs)
31,217
networkx.algorithms.isomorphism.vf2pp
vf2pp_all_isomorphisms
Yields all the possible mappings between G1 and G2. Parameters ---------- G1, G2 : NetworkX Graph or MultiGraph instances. The two graphs to check for isomorphism. node_label : str, optional The name of the node attribute to be used when comparing nodes. The default is `None`, meaning node attributes are not considered in the comparison. Any node that doesn't have the `node_label` attribute uses `default_label` instead. default_label : scalar Default value to use when a node doesn't have an attribute named `node_label`. Default is `None`. Yields ------ dict Isomorphic mapping between the nodes in `G1` and `G2`.
def _consistent_PT(u, v, graph_params, state_params): """Checks the consistency of extending the mapping using the current node pair. Parameters ---------- u, v: Graph node The two candidate nodes being examined. graph_params: namedtuple Contains all the Graph-related parameters: G1,G2: NetworkX Graph or MultiGraph instances. The two graphs to check for isomorphism or monomorphism G1_labels,G2_labels: dict The label of every node in G1 and G2 respectively state_params: namedtuple Contains all the State-related parameters: mapping: dict The mapping as extended so far. Maps nodes of G1 to nodes of G2 reverse_mapping: dict The reverse mapping as extended so far. Maps nodes from G2 to nodes of G1. It's basically "mapping" reversed T1, T2: set Ti contains uncovered neighbors of covered nodes from Gi, i.e. nodes that are not in the mapping, but are neighbors of nodes that are. T1_out, T2_out: set Ti_out contains all the nodes from Gi, that are neither in the mapping nor in Ti Returns ------- True if the pair passes all the consistency checks successfully. False otherwise. """ G1, G2 = graph_params.G1, graph_params.G2 mapping, reverse_mapping = state_params.mapping, state_params.reverse_mapping for neighbor in G1[u]: if neighbor in mapping: if G1.number_of_edges(u, neighbor) != G2.number_of_edges( v, mapping[neighbor] ): return False for neighbor in G2[v]: if neighbor in reverse_mapping: if G1.number_of_edges(u, reverse_mapping[neighbor]) != G2.number_of_edges( v, neighbor ): return False if not G1.is_directed(): return True for predecessor in G1.pred[u]: if predecessor in mapping: if G1.number_of_edges(predecessor, u) != G2.number_of_edges( mapping[predecessor], v ): return False for predecessor in G2.pred[v]: if predecessor in reverse_mapping: if G1.number_of_edges( reverse_mapping[predecessor], u ) != G2.number_of_edges(predecessor, v): return False return True
(G1, G2, node_label=None, default_label=None, *, backend=None, **backend_kwargs)
31,218
networkx.algorithms.isomorphism.vf2pp
vf2pp_is_isomorphic
Examines whether G1 and G2 are isomorphic. Parameters ---------- G1, G2 : NetworkX Graph or MultiGraph instances. The two graphs to check for isomorphism. node_label : str, optional The name of the node attribute to be used when comparing nodes. The default is `None`, meaning node attributes are not considered in the comparison. Any node that doesn't have the `node_label` attribute uses `default_label` instead. default_label : scalar Default value to use when a node doesn't have an attribute named `node_label`. Default is `None`. Returns ------- bool True if the two graphs are isomorphic, False otherwise.
def _consistent_PT(u, v, graph_params, state_params): """Checks the consistency of extending the mapping using the current node pair. Parameters ---------- u, v: Graph node The two candidate nodes being examined. graph_params: namedtuple Contains all the Graph-related parameters: G1,G2: NetworkX Graph or MultiGraph instances. The two graphs to check for isomorphism or monomorphism G1_labels,G2_labels: dict The label of every node in G1 and G2 respectively state_params: namedtuple Contains all the State-related parameters: mapping: dict The mapping as extended so far. Maps nodes of G1 to nodes of G2 reverse_mapping: dict The reverse mapping as extended so far. Maps nodes from G2 to nodes of G1. It's basically "mapping" reversed T1, T2: set Ti contains uncovered neighbors of covered nodes from Gi, i.e. nodes that are not in the mapping, but are neighbors of nodes that are. T1_out, T2_out: set Ti_out contains all the nodes from Gi, that are neither in the mapping nor in Ti Returns ------- True if the pair passes all the consistency checks successfully. False otherwise. """ G1, G2 = graph_params.G1, graph_params.G2 mapping, reverse_mapping = state_params.mapping, state_params.reverse_mapping for neighbor in G1[u]: if neighbor in mapping: if G1.number_of_edges(u, neighbor) != G2.number_of_edges( v, mapping[neighbor] ): return False for neighbor in G2[v]: if neighbor in reverse_mapping: if G1.number_of_edges(u, reverse_mapping[neighbor]) != G2.number_of_edges( v, neighbor ): return False if not G1.is_directed(): return True for predecessor in G1.pred[u]: if predecessor in mapping: if G1.number_of_edges(predecessor, u) != G2.number_of_edges( mapping[predecessor], v ): return False for predecessor in G2.pred[v]: if predecessor in reverse_mapping: if G1.number_of_edges( reverse_mapping[predecessor], u ) != G2.number_of_edges(predecessor, v): return False return True
(G1, G2, node_label=None, default_label=None, *, backend=None, **backend_kwargs)
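For contrast with VF2++, here is the problem solved naively: try every bijection between the node sets. This factorial-time sketch (pure Python, undirected simple graphs, no node labels) is exactly what VF2++'s pruning avoids:

```python
from itertools import permutations

# Brute-force isomorphism test: check every candidate node mapping.
def brute_force_isomorphic(nodes1, edges1, nodes2, edges2):
    if len(nodes1) != len(nodes2) or len(edges1) != len(edges2):
        return False
    e1 = {frozenset(e) for e in edges1}  # undirected: order-free edges
    e2 = {frozenset(e) for e in edges2}
    for perm in permutations(nodes2):
        m = dict(zip(nodes1, perm))      # candidate bijection
        if {frozenset((m[u], m[v])) for u, v in e1} == e2:
            return True
    return False
```

Even cheap invariants (node and edge counts here; degree sequences in real implementations) cut the search dramatically, which is the spirit of the VF2 family's cutting rules.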
31,219
networkx.algorithms.isomorphism.vf2pp
vf2pp_isomorphism
Return an isomorphic mapping between `G1` and `G2` if it exists. Parameters ---------- G1, G2 : NetworkX Graph or MultiGraph instances. The two graphs to check for isomorphism. node_label : str, optional The name of the node attribute to be used when comparing nodes. The default is `None`, meaning node attributes are not considered in the comparison. Any node that doesn't have the `node_label` attribute uses `default_label` instead. default_label : scalar Default value to use when a node doesn't have an attribute named `node_label`. Default is `None`. Returns ------- dict or None Node mapping if the two graphs are isomorphic. None otherwise.
def _consistent_PT(u, v, graph_params, state_params): """Checks the consistency of extending the mapping using the current node pair. Parameters ---------- u, v: Graph node The two candidate nodes being examined. graph_params: namedtuple Contains all the Graph-related parameters: G1,G2: NetworkX Graph or MultiGraph instances. The two graphs to check for isomorphism or monomorphism G1_labels,G2_labels: dict The label of every node in G1 and G2 respectively state_params: namedtuple Contains all the State-related parameters: mapping: dict The mapping as extended so far. Maps nodes of G1 to nodes of G2 reverse_mapping: dict The reverse mapping as extended so far. Maps nodes from G2 to nodes of G1. It's basically "mapping" reversed T1, T2: set Ti contains uncovered neighbors of covered nodes from Gi, i.e. nodes that are not in the mapping, but are neighbors of nodes that are. T1_out, T2_out: set Ti_out contains all the nodes from Gi, that are neither in the mapping nor in Ti Returns ------- True if the pair passes all the consistency checks successfully. False otherwise. """ G1, G2 = graph_params.G1, graph_params.G2 mapping, reverse_mapping = state_params.mapping, state_params.reverse_mapping for neighbor in G1[u]: if neighbor in mapping: if G1.number_of_edges(u, neighbor) != G2.number_of_edges( v, mapping[neighbor] ): return False for neighbor in G2[v]: if neighbor in reverse_mapping: if G1.number_of_edges(u, reverse_mapping[neighbor]) != G2.number_of_edges( v, neighbor ): return False if not G1.is_directed(): return True for predecessor in G1.pred[u]: if predecessor in mapping: if G1.number_of_edges(predecessor, u) != G2.number_of_edges( mapping[predecessor], v ): return False for predecessor in G2.pred[v]: if predecessor in reverse_mapping: if G1.number_of_edges( reverse_mapping[predecessor], u ) != G2.number_of_edges(predecessor, v): return False return True
(G1, G2, node_label=None, default_label=None, *, backend=None, **backend_kwargs)
31,220
networkx.generators.time_series
visibility_graph
Return a Visibility Graph of an input Time Series. A visibility graph converts a time series into a graph. The constructed graph uses integer nodes to indicate which event in the series the node represents. Edges are formed as follows: consider a bar plot of the series and view that as a side view of a landscape with a node at the top of each bar. An edge means that the nodes can be connected by a straight "line-of-sight" without being obscured by any bars between the nodes. The resulting graph inherits several properties of the series in its structure. Thereby, periodic series convert into regular graphs, random series convert into random graphs, and fractal series convert into scale-free networks [1]_. Parameters ---------- series : Sequence[Number] A Time Series sequence (iterable and sliceable) of numeric values representing times. Returns ------- NetworkX Graph The Visibility Graph of the input series Examples -------- >>> series_list = [range(10), [2, 1, 3, 2, 1, 3, 2, 1, 3, 2, 1, 3]] >>> for s in series_list: ... g = nx.visibility_graph(s) ... print(g) Graph with 10 nodes and 9 edges Graph with 12 nodes and 18 edges References ---------- .. [1] Lacasa, Lucas, Bartolo Luque, Fernando Ballesteros, Jordi Luque, and Juan Carlos Nuno. "From time series to complex networks: The visibility graph." Proceedings of the National Academy of Sciences 105, no. 13 (2008): 4972-4975. https://www.pnas.org/doi/10.1073/pnas.0709247105
null
(series, *, backend=None, **backend_kwargs)
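The "line-of-sight" rule above has a direct quadratic-time form: events `i` and `j` are connected iff every bar strictly between them lies strictly below the straight segment joining `(i, series[i])` and `(j, series[j])`. A pure-Python sketch, assuming the standard strict-inequality criterion of Lacasa et al. [1]_:

```python
# Visibility graph edges of a time series by the line-of-sight criterion.
def visibility_edges(series):
    s = list(series)
    edges = []
    for i in range(len(s)):
        for j in range(i + 1, len(s)):
            # bar k blocks the view unless it is strictly below the segment
            if all(
                s[k] < s[i] + (s[j] - s[i]) * (k - i) / (j - i)
                for k in range(i + 1, j)
            ):
                edges.append((i, j))
    return edges
```

On the docstring's examples this reproduces the stated sizes: 9 edges for the monotone series `range(10)` (only adjacent events see each other along a straight ramp) and 18 for the periodic series.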
31,222
networkx.algorithms.cuts
volume
Returns the volume of a set of nodes. The *volume* of a set *S* is the sum of the (out-)degrees of nodes in *S* (taking into account parallel edges in multigraphs). [1] Parameters ---------- G : NetworkX graph S : collection A collection of nodes in `G`. weight : object Edge attribute key to use as weight. If not specified, edges have weight one. Returns ------- number The volume of the set of nodes represented by `S` in the graph `G`. See also -------- conductance cut_size edge_expansion edge_boundary normalized_cut_size References ---------- .. [1] David Gleich. *Hierarchical Directed Spectral Graph Partitioning*. <https://www.cs.purdue.edu/homes/dgleich/publications/Gleich%202005%20-%20hierarchical%20directed%20spectral.pdf>
null
(G, S, weight=None, *, backend=None, **backend_kwargs)
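For an unweighted undirected graph the definition above reduces to counting edge endpoints inside `S`, since each edge contributes once to the degree of each of its endpoints. A minimal sketch:

```python
# Volume of S = sum of degrees over S = count of edge endpoints in S.
def volume(edges, S):
    return sum((u in S) + (v in S) for u, v in edges)

# path graph 0-1-2-3: nodes 1 and 2 each have degree 2, so volume is 4
vol = volume([(0, 1), (1, 2), (2, 3)], {1, 2})
```

Note the edge (1, 2) is counted twice, once per endpoint, exactly as a degree sum requires.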
31,224
networkx.algorithms.voronoi
voronoi_cells
Returns the Voronoi cells centered at `center_nodes` with respect to the shortest-path distance metric. If $C$ is a set of nodes in the graph and $c$ is an element of $C$, the *Voronoi cell* centered at a node $c$ is the set of all nodes $v$ that are closer to $c$ than to any other center node in $C$ with respect to the shortest-path distance metric. [1]_ For directed graphs, this will compute the "outward" Voronoi cells, as defined in [1]_, in which distance is measured from the center nodes to the target node. For the "inward" Voronoi cells, use the :meth:`DiGraph.reverse` method to reverse the orientation of the edges before invoking this function on the directed graph. Parameters ---------- G : NetworkX graph center_nodes : set A nonempty set of nodes in the graph `G` that represent the center of the Voronoi cells. weight : string or function The edge attribute (or an arbitrary function) representing the weight of an edge. This keyword argument is as described in the documentation for :func:`~networkx.multi_source_dijkstra_path`, for example. Returns ------- dictionary A mapping from center node to set of all nodes in the graph closer to that center node than to any other center node. The keys of the dictionary are the element of `center_nodes`, and the values of the dictionary form a partition of the nodes of `G`. Examples -------- To get only the partition of the graph induced by the Voronoi cells, take the collection of all values in the returned dictionary:: >>> G = nx.path_graph(6) >>> center_nodes = {0, 3} >>> cells = nx.voronoi_cells(G, center_nodes) >>> partition = set(map(frozenset, cells.values())) >>> sorted(map(sorted, partition)) [[0, 1], [2, 3, 4, 5]] Raises ------ ValueError If `center_nodes` is empty. References ---------- .. [1] Erwig, Martin. (2000),"The graph Voronoi diagram with applications." *Networks*, 36: 156--163. https://doi.org/10.1002/1097-0037(200010)36:3<156::AID-NET2>3.0.CO;2-L
null
(G, center_nodes, weight='weight', *, backend=None, **backend_kwargs)
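For unweighted graphs the nearest-center rule above can be sketched as a multi-source BFS: start from all centers at once, and each node belongs to the center that reaches it first, which is a nearest one (ties broken by BFS order; NetworkX computes shortest-path distances, using Dijkstra when weighted):

```python
from collections import deque

# Voronoi cells by multi-source BFS over an adjacency dict.
def voronoi_cells(adj, centers):
    owner = {c: c for c in centers}
    queue = deque(centers)
    while queue:
        u = queue.popleft()
        for v in adj.get(u, ()):
            if v not in owner:          # first arrival wins
                owner[v] = owner[u]
                queue.append(v)
    cells = {c: set() for c in centers}
    for v, c in owner.items():
        cells[c].add(v)
    return cells

path_adj = {i: [j for j in (i - 1, i + 1) if 0 <= j <= 5] for i in range(6)}
cells = voronoi_cells(path_adj, [0, 3])
```

On the docstring's path-graph example with centers {0, 3} this recovers the partition {0, 1} and {2, 3, 4, 5}.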
31,225
networkx.algorithms.centrality.voterank_alg
voterank
Select a list of influential nodes in a graph using the VoteRank algorithm. VoteRank [1]_ computes a ranking of the nodes in a graph G based on a voting scheme. With VoteRank, all nodes vote for each of their in-neighbors, and the node with the highest number of votes is elected iteratively. The voting ability of the out-neighbors of elected nodes is decreased in subsequent turns. Parameters ---------- G : graph A NetworkX graph. number_of_nodes : integer, optional Number of ranked nodes to extract (default all nodes). Returns ------- voterank : list Ordered list of computed seeds. Only nodes with a positive number of votes are returned. Examples -------- >>> G = nx.Graph([(0, 1), (0, 2), (0, 3), (1, 4)]) >>> nx.voterank(G) [0, 1] The algorithm can be used both for undirected and directed graphs. However, the directed version is different in two ways: (i) nodes only vote for their in-neighbors and (ii) only the voting ability of the elected node and its out-neighbors is updated: >>> G = nx.DiGraph([(0, 1), (2, 1), (2, 3), (3, 4)]) >>> nx.voterank(G) [2, 3] Notes ----- Each edge is treated independently in case of multigraphs. References ---------- .. [1] Zhang, J.-X. et al. (2016). Identifying a set of influential spreaders in complex networks. Sci. Rep. 6, 27823; doi: 10.1038/srep27823.
null
(G, number_of_nodes=None, *, backend=None, **backend_kwargs)
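The undirected voting loop can be sketched in pure Python. This is one simple interpretation of the scheme described above (it clamps voting ability at zero, an assumption for the sketch), not the NetworkX code:

```python
# VoteRank sketch: score each node by its neighbors' voting ability,
# elect the winner, silence it, and weaken its neighbors' ability by
# 1 / <average degree>.
def voterank(adj):
    avg_deg = sum(len(nbrs) for nbrs in adj.values()) / len(adj)
    ability = {u: 1.0 for u in adj}
    elected = []
    while True:
        scores = {u: sum(ability[w] for w in adj[u])
                  for u in adj if u not in elected}
        winner = max(scores, key=scores.get)
        if scores[winner] <= 0:
            break                    # only positive-vote nodes are returned
        elected.append(winner)
        ability[winner] = 0.0        # elected nodes stop voting
        for w in adj[winner]:
            ability[w] = max(ability[w] - 1 / avg_deg, 0.0)
    return elected

adj = {0: [1, 2, 3], 1: [0, 4], 2: [0], 3: [0], 4: [1]}
```

On the docstring's undirected example this elects [0, 1]: after node 0 is elected, its neighbors' weakened votes leave node 1 as the only node still attracting positive support.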
31,228
networkx.generators.random_graphs
watts_strogatz_graph
Returns a Watts–Strogatz small-world graph. Parameters ---------- n : int The number of nodes k : int Each node is joined with its `k` nearest neighbors in a ring topology. p : float The probability of rewiring each edge seed : integer, random_state, or None (default) Indicator of random number generation state. See :ref:`Randomness<randomness>`. See Also -------- newman_watts_strogatz_graph connected_watts_strogatz_graph Notes ----- First create a ring over $n$ nodes [1]_. Then each node in the ring is joined to its $k$ nearest neighbors (or $k - 1$ neighbors if $k$ is odd). Then shortcuts are created by replacing some edges as follows: for each edge $(u, v)$ in the underlying "$n$-ring with $k$ nearest neighbors" with probability $p$ replace it with a new edge $(u, w)$ with uniformly random choice of existing node $w$. In contrast with :func:`newman_watts_strogatz_graph`, the random rewiring does not increase the number of edges. The rewired graph is not guaranteed to be connected as in :func:`connected_watts_strogatz_graph`. References ---------- .. [1] Duncan J. Watts and Steven H. Strogatz, Collective dynamics of small-world networks, Nature, 393, pp. 440--442, 1998.
def dual_barabasi_albert_graph(n, m1, m2, p, seed=None, initial_graph=None): """Returns a random graph using dual Barabási–Albert preferential attachment. A graph of $n$ nodes is grown by attaching new nodes each with either $m_1$ edges (with probability $p$) or $m_2$ edges (with probability $1-p$) that are preferentially attached to existing nodes with high degree. Parameters ---------- n : int Number of nodes m1 : int Number of edges to link each new node to existing nodes with probability $p$ m2 : int Number of edges to link each new node to existing nodes with probability $1-p$ p : float The probability of attaching $m_1$ edges (as opposed to $m_2$ edges) seed : integer, random_state, or None (default) Indicator of random number generation state. See :ref:`Randomness<randomness>`. initial_graph : Graph or None (default) Initial network for Barabási–Albert algorithm. A copy of `initial_graph` is used. It should be connected for most use cases. If None, starts from a star graph on max(m1, m2) + 1 nodes. Returns ------- G : Graph Raises ------ NetworkXError If `m1` and `m2` do not satisfy ``1 <= m1,m2 < n``, or `p` does not satisfy ``0 <= p <= 1``, or the initial graph number of nodes m0 does not satisfy m1, m2 <= m0 <= n. References ---------- .. [1] N. Moshiri "The dual-Barabasi-Albert model", arXiv:1810.10538.
""" if m1 < 1 or m1 >= n: raise nx.NetworkXError( f"Dual Barabási–Albert must have m1 >= 1 and m1 < n, m1 = {m1}, n = {n}" ) if m2 < 1 or m2 >= n: raise nx.NetworkXError( f"Dual Barabási–Albert must have m2 >= 1 and m2 < n, m2 = {m2}, n = {n}" ) if p < 0 or p > 1: raise nx.NetworkXError( f"Dual Barabási–Albert network must have 0 <= p <= 1, p = {p}" ) # For simplicity, if p == 0 or 1, just return BA if p == 1: return barabasi_albert_graph(n, m1, seed) elif p == 0: return barabasi_albert_graph(n, m2, seed) if initial_graph is None: # Default initial graph : empty graph on max(m1, m2) nodes G = star_graph(max(m1, m2)) else: if len(initial_graph) < max(m1, m2) or len(initial_graph) > n: raise nx.NetworkXError( f"Barabási–Albert initial graph must have between " f"max(m1, m2) = {max(m1, m2)} and n = {n} nodes" ) G = initial_graph.copy() # Target nodes for new edges targets = list(G) # List of existing nodes, with nodes repeated once for each adjacent edge repeated_nodes = [n for n, d in G.degree() for _ in range(d)] # Start adding the remaining nodes. source = len(G) while source < n: # Pick which m to use (m1 or m2) if seed.random() < p: m = m1 else: m = m2 # Now choose m unique nodes from the existing nodes # Pick uniformly from repeated_nodes (preferential attachment) targets = _random_subset(repeated_nodes, m, seed) # Add edges to m nodes from the source. G.add_edges_from(zip([source] * m, targets)) # Add one node to the list for each new edge just created. repeated_nodes.extend(targets) # And the new node "source" has m edges to add to the list. repeated_nodes.extend([source] * m) source += 1 return G
(n, k, p, seed=None, *, backend=None, **backend_kwargs)
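The note that rewiring does not change the number of edges can be checked directly: the underlying ring lattice on `n` nodes with `k` neighbors has `n * k / 2` edges, and rewiring only replaces them. A quick sketch, assuming NetworkX is installed:

```python
import networkx as nx

n, k = 20, 4
G = nx.watts_strogatz_graph(n, k, p=0.1, seed=42)

# The edge count of the ring lattice is preserved by rewiring.
expected_edges = n * k // 2
```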
31,229
networkx.generators.geometric
waxman_graph
Returns a Waxman random graph. The Waxman random graph model places `n` nodes uniformly at random in a rectangular domain. Each pair of nodes at distance `d` is joined by an edge with probability .. math:: p = \beta \exp(-d / \alpha L). This function implements both Waxman models, using the `L` keyword argument. * Waxman-1: if `L` is not specified, it is set to be the maximum distance between any pair of nodes. * Waxman-2: if `L` is specified, the distance between a pair of nodes is chosen uniformly at random from the interval `[0, L]`. Parameters ---------- n : int or iterable Number of nodes or iterable of nodes beta: float Model parameter alpha: float Model parameter L : float, optional Maximum distance between nodes. If not specified, the actual distance is calculated. domain : four-tuple of numbers, optional Domain size, given as a tuple of the form `(x_min, y_min, x_max, y_max)`. metric : function A metric on vectors of numbers (represented as lists or tuples). This must be a function that accepts two lists (or tuples) as input and yields a number as output. The function must also satisfy the four requirements of a `metric`_. Specifically, if $d$ is the function and $x$, $y$, and $z$ are vectors in the graph, then $d$ must satisfy 1. $d(x, y) \ge 0$, 2. $d(x, y) = 0$ if and only if $x = y$, 3. $d(x, y) = d(y, x)$, 4. $d(x, z) \le d(x, y) + d(y, z)$. If this argument is not specified, the Euclidean distance metric is used. .. _metric: https://en.wikipedia.org/wiki/Metric_%28mathematics%29 seed : integer, random_state, or None (default) Indicator of random number generation state. See :ref:`Randomness<randomness>`. pos_name : string, default="pos" The name of the node attribute which represents the position in 2D coordinates of the node in the returned graph. Returns ------- Graph A random Waxman graph, undirected and without self-loops. 
Each node has a node attribute ``'pos'`` that stores the position of that node in Euclidean space as generated by this function. Examples -------- Specify an alternate distance metric using the ``metric`` keyword argument. For example, to use the "`taxicab metric`_" instead of the default `Euclidean metric`_:: >>> dist = lambda x, y: sum(abs(a - b) for a, b in zip(x, y)) >>> G = nx.waxman_graph(10, 0.5, 0.1, metric=dist) .. _taxicab metric: https://en.wikipedia.org/wiki/Taxicab_geometry .. _Euclidean metric: https://en.wikipedia.org/wiki/Euclidean_distance Notes ----- Starting in NetworkX 2.0 the parameters alpha and beta align with their usual roles in the probability distribution. In earlier versions their positions in the expression were reversed. Their positions in the calling sequence were reversed as well to minimize backward incompatibility. References ---------- .. [1] B. M. Waxman, *Routing of multipoint connections*. IEEE J. Select. Areas Commun. 6(9),(1988) 1617--1622.
def thresholded_random_geometric_graph( n, radius, theta, dim=2, pos=None, weight=None, p=2, seed=None, *, pos_name="pos", weight_name="weight", ): r"""Returns a thresholded random geometric graph in the unit cube. The thresholded random geometric graph [1]_ model places `n` nodes uniformly at random in the unit cube of dimension `dim`. Each node `u` is assigned a weight :math:`w_u`. Two nodes `u` and `v` are joined by an edge if they are within the maximum connection distance, `radius`, computed by the `p`-Minkowski distance and the summation of weights :math:`w_u` + :math:`w_v` is greater than or equal to the threshold parameter `theta`. Edges between nodes within `radius` of each other are determined using a KDTree when SciPy is available. This reduces the time complexity from :math:`O(n^2)` to :math:`O(n)`. Parameters ---------- n : int or iterable Number of nodes or iterable of nodes radius: float Distance threshold value theta: float Threshold value dim : int, optional Dimension of graph pos : dict, optional A dictionary keyed by node with node positions as values. weight : dict, optional Node weights as a dictionary of numbers keyed by node. p : float, optional (default 2) Which Minkowski distance metric to use. `p` has to meet the condition ``1 <= p <= infinity``. If this argument is not specified, the :math:`L^2` metric (the Euclidean distance metric), p = 2 is used. This should not be confused with the `p` of an Erdős-Rényi random graph, which represents probability. seed : integer, random_state, or None (default) Indicator of random number generation state. See :ref:`Randomness<randomness>`. pos_name : string, default="pos" The name of the node attribute which represents the position in 2D coordinates of the node in the returned graph. weight_name : string, default="weight" The name of the node attribute which represents the weight of the node in the returned graph. Returns ------- Graph A thresholded random geometric graph, undirected and without self-loops.
Each node has a node attribute ``'pos'`` that stores the position of that node in Euclidean space as provided by the ``pos`` keyword argument or, if ``pos`` was not provided, as generated by this function. Similarly, each node has a node attribute ``'weight'`` that stores the weight of that node as provided or as generated. Examples -------- Default Graph: G = nx.thresholded_random_geometric_graph(50, 0.2, 0.1) Custom Graph: Create a thresholded random geometric graph on 50 uniformly distributed nodes where nodes are joined by an edge if their sum weights drawn from an exponential distribution with rate = 5 are >= theta = 0.1 and their Euclidean distance is at most 0.2. Notes ----- This uses a *k*-d tree to build the graph. The `pos` keyword argument can be used to specify node positions so you can create an arbitrary distribution and domain for positions. For example, to use a 2D Gaussian distribution of node positions with mean (0, 0) and standard deviation 2, pass a ``pos`` dictionary as in the example below. If weights are not specified they are assigned to nodes by drawing randomly from the exponential distribution with rate parameter :math:`\lambda=1`. To specify weights from a different distribution, use the `weight` keyword argument:: >>> import random >>> import math >>> n = 50 >>> pos = {i: (random.gauss(0, 2), random.gauss(0, 2)) for i in range(n)} >>> w = {i: random.expovariate(5.0) for i in range(n)} >>> G = nx.thresholded_random_geometric_graph(n, 0.2, 0.1, 2, pos, w) References ---------- .. [1] http://cole-maclean.github.io/blog/files/thesis.pdf """ G = nx.empty_graph(n) G.name = f"thresholded_random_geometric_graph({n}, {radius}, {theta}, {dim})" # If no weights are provided, choose them from an exponential # distribution. if weight is None: weight = {v: seed.expovariate(1) for v in G} # If no positions are provided, choose uniformly random vectors in # Euclidean space of the specified dimension.
if pos is None: pos = {v: [seed.random() for i in range(dim)] for v in G} # Attach the weights and positions as node attributes. nx.set_node_attributes(G, weight, weight_name) nx.set_node_attributes(G, pos, pos_name) edges = ( (u, v) for u, v in _geometric_edges(G, radius, p, pos_name) if weight[u] + weight[v] >= theta ) G.add_edges_from(edges) return G
(n, beta=0.4, alpha=0.1, L=None, domain=(0, 0, 1, 1), metric=None, seed=None, *, pos_name='pos', backend=None, **backend_kwargs)
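The `L` keyword selects between the two Waxman models described above. A brief sketch, assuming NetworkX is installed (the parameter values are illustrative only):

```python
import networkx as nx

# Waxman-1: L defaults to the maximum pairwise distance of the placed nodes.
G1 = nx.waxman_graph(20, beta=0.5, alpha=0.1, seed=7)

# Waxman-2: L is given; pairwise distances are drawn uniformly from [0, L].
G2 = nx.waxman_graph(20, beta=0.5, alpha=0.1, L=1.0, seed=7)
```

Both variants attach a position to every node under the ``pos`` attribute.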
31,231
networkx.algorithms.components.weakly_connected
weakly_connected_components
Generate weakly connected components of G. Parameters ---------- G : NetworkX graph A directed graph Returns ------- comp : generator of sets A generator of sets of nodes, one for each weakly connected component of G. Raises ------ NetworkXNotImplemented If G is undirected. Examples -------- Generate a sorted list of weakly connected components, largest first. >>> G = nx.path_graph(4, create_using=nx.DiGraph()) >>> nx.add_path(G, [10, 11, 12]) >>> [len(c) for c in sorted(nx.weakly_connected_components(G), key=len, reverse=True)] [4, 3] If you only want the largest component, it's more efficient to use max instead of sort: >>> largest_cc = max(nx.weakly_connected_components(G), key=len) See Also -------- connected_components strongly_connected_components Notes ----- For directed graphs only.
null
(G, *, backend=None, **backend_kwargs)
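Weak connectivity ignores edge direction, so a directed graph splits into the components of its undirected view. A minimal sketch, assuming NetworkX is installed:

```python
import networkx as nx

# Two weak components: {0, 1, 2} and {3, 4}.
G = nx.DiGraph([(0, 1), (1, 2), (3, 4)])
components = sorted(sorted(c) for c in nx.weakly_connected_components(G))
```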
31,233
networkx.algorithms.graph_hashing
weisfeiler_lehman_graph_hash
Return Weisfeiler Lehman (WL) graph hash. The function iteratively aggregates and hashes neighborhoods of each node. After each node's neighbors are hashed to obtain updated node labels, a hashed histogram of resulting labels is returned as the final hash. Hashes are identical for isomorphic graphs and there are strong guarantees that non-isomorphic graphs will get different hashes. See [1]_ for details. If no node or edge attributes are provided, the degree of each node is used as its initial label. Otherwise, node and/or edge labels are used to compute the hash. Parameters ---------- G : graph The graph to be hashed. Can have node and/or edge attributes. Can also have no attributes. edge_attr : string, optional (default=None) The key in edge attribute dictionary to be used for hashing. If None, edge labels are ignored. node_attr: string, optional (default=None) The key in node attribute dictionary to be used for hashing. If None, and no edge_attr given, use the degrees of the nodes as labels. iterations: int, optional (default=3) Number of neighbor aggregations to perform. Should be larger for larger graphs. digest_size: int, optional (default=16) Size (in bytes) of blake2b hash digest to use for hashing node labels. Returns ------- h : string Hexadecimal string corresponding to hash of the input graph. Examples -------- Two graphs with edge attributes that are isomorphic, except for differences in the edge labels. >>> G1 = nx.Graph() >>> G1.add_edges_from( ... [ ... (1, 2, {"label": "A"}), ... (2, 3, {"label": "A"}), ... (3, 1, {"label": "A"}), ... (1, 4, {"label": "B"}), ... ] ... ) >>> G2 = nx.Graph() >>> G2.add_edges_from( ... [ ... (5, 6, {"label": "B"}), ... (6, 7, {"label": "A"}), ... (7, 5, {"label": "A"}), ... (7, 8, {"label": "A"}), ... ] ... ) Omitting the `edge_attr` option results in identical hashes.
>>> nx.weisfeiler_lehman_graph_hash(G1) '7bc4dde9a09d0b94c5097b219891d81a' >>> nx.weisfeiler_lehman_graph_hash(G2) '7bc4dde9a09d0b94c5097b219891d81a' With edge labels, the graphs are no longer assigned the same hash digest. >>> nx.weisfeiler_lehman_graph_hash(G1, edge_attr="label") 'c653d85538bcf041d88c011f4f905f10' >>> nx.weisfeiler_lehman_graph_hash(G2, edge_attr="label") '3dcd84af1ca855d0eff3c978d88e7ec7' Notes ----- To return the WL hashes of each subgraph of a graph, use `weisfeiler_lehman_subgraph_hashes` Similarity between hashes does not imply similarity between graphs. References ---------- .. [1] Shervashidze, Nino, Pascal Schweitzer, Erik Jan Van Leeuwen, Kurt Mehlhorn, and Karsten M. Borgwardt. Weisfeiler Lehman Graph Kernels. Journal of Machine Learning Research. 2011. http://www.jmlr.org/papers/volume12/shervashidze11a/shervashidze11a.pdf See also -------- weisfeiler_lehman_subgraph_hashes
null
(G, edge_attr=None, node_attr=None, iterations=3, digest_size=16, *, backend=None, **backend_kwargs)
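The invariance under relabeling (and sensitivity to structure) can be exercised directly. A short sketch, assuming NetworkX is installed:

```python
import networkx as nx

C5 = nx.cycle_graph(5)
# An isomorphic copy: same structure, different node labels.
C5_copy = nx.relabel_nodes(C5, {i: i + 10 for i in range(5)})
# A structurally different graph (path endpoints have degree 1).
P5 = nx.path_graph(5)

h_cycle = nx.weisfeiler_lehman_graph_hash(C5)
h_copy = nx.weisfeiler_lehman_graph_hash(C5_copy)
h_path = nx.weisfeiler_lehman_graph_hash(P5)
```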
31,234
networkx.algorithms.graph_hashing
weisfeiler_lehman_subgraph_hashes
Return a dictionary of subgraph hashes by node. Dictionary keys are nodes in `G`, and values are a list of hashes. Each hash corresponds to a subgraph rooted at a given node u in `G`. Lists of subgraph hashes are sorted in increasing order of depth from their root node, with the hash at index i corresponding to a subgraph of nodes at most i edges distance from u. Thus, each list will contain `iterations` elements - a hash for a subgraph at each depth. If `include_initial_labels` is set to `True`, each list will additionally contain a hash of the initial node label (or equivalently a subgraph of depth 0) prepended, totalling ``iterations + 1`` elements. The function iteratively aggregates and hashes neighborhoods of each node. This is achieved for each step by replacing for each node its label from the previous iteration with its hashed 1-hop neighborhood aggregate. The new node label is then appended to a list of node labels for each node. To aggregate neighborhoods for a node $u$ at each step, all labels of nodes adjacent to $u$ are concatenated. If the `edge_attr` parameter is set, labels for each neighboring node are prefixed with the value of this attribute along the connecting edge from this neighbor to node $u$. The resulting string is then hashed to compress this information into a fixed digest size. Thus, at the $i$-th iteration, nodes within $i$ hops influence any given hashed node label. We can therefore say that at depth $i$ for node $u$ we have a hash for a subgraph induced by the $i$-hop neighborhood of $u$. The output can be used to create general Weisfeiler-Lehman graph kernels, or generate features for graphs or nodes - for example to generate 'words' in a graph as seen in the 'graph2vec' algorithm. See [1]_ & [2]_ respectively for details. Hashes are identical for isomorphic subgraphs and there exist strong guarantees that non-isomorphic graphs will get different hashes. See [1]_ for details.
If no node or edge attributes are provided, the degree of each node is used as its initial label. Otherwise, node and/or edge labels are used to compute the hash. Parameters ---------- G : graph The graph to be hashed. Can have node and/or edge attributes. Can also have no attributes. edge_attr : string, optional (default=None) The key in edge attribute dictionary to be used for hashing. If None, edge labels are ignored. node_attr : string, optional (default=None) The key in node attribute dictionary to be used for hashing. If None, and no edge_attr given, use the degrees of the nodes as labels. If None, and edge_attr is given, each node starts with an identical label. iterations : int, optional (default=3) Number of neighbor aggregations to perform. Should be larger for larger graphs. digest_size : int, optional (default=16) Size (in bytes) of blake2b hash digest to use for hashing node labels. The default size is 16 bytes. include_initial_labels : bool, optional (default=False) If True, include the hashed initial node label as the first subgraph hash for each node. Returns ------- node_subgraph_hashes : dict A dictionary with each key given by a node in G, and each value given by the subgraph hashes in order of depth from the key node.
Examples -------- Finding similar nodes in different graphs: >>> G1 = nx.Graph() >>> G1.add_edges_from([(1, 2), (2, 3), (2, 4), (3, 5), (4, 6), (5, 7), (6, 7)]) >>> G2 = nx.Graph() >>> G2.add_edges_from([(1, 3), (2, 3), (1, 6), (1, 5), (4, 6)]) >>> g1_hashes = nx.weisfeiler_lehman_subgraph_hashes(G1, iterations=3, digest_size=8) >>> g2_hashes = nx.weisfeiler_lehman_subgraph_hashes(G2, iterations=3, digest_size=8) Even though G1 and G2 are not isomorphic (they have different numbers of edges), the hash sequences of depth 3 for node 1 in G1 and node 5 in G2 are similar: >>> g1_hashes[1] ['a93b64973cfc8897', 'db1b43ae35a1878f', '57872a7d2059c1c0'] >>> g2_hashes[5] ['a93b64973cfc8897', 'db1b43ae35a1878f', '1716d2a4012fa4bc'] The first 2 WL subgraph hashes match. From this we can conclude that it's very likely that the 2-hop neighborhoods around these nodes are isomorphic. However the 3-hop neighborhoods of ``G1`` and ``G2`` are not isomorphic since the 3rd hashes in the lists above are not equal. These nodes may be candidates to be classified together since their local topology is similar. Notes ----- To hash the full graph when subgraph hashes are not needed, use `weisfeiler_lehman_graph_hash` for efficiency. Similarity between hashes does not imply similarity between graphs. References ---------- .. [1] Shervashidze, Nino, Pascal Schweitzer, Erik Jan Van Leeuwen, Kurt Mehlhorn, and Karsten M. Borgwardt. Weisfeiler Lehman Graph Kernels. Journal of Machine Learning Research. 2011. http://www.jmlr.org/papers/volume12/shervashidze11a/shervashidze11a.pdf .. [2] Annamalai Narayanan, Mahinthan Chandramohan, Rajasekar Venkatesan, Lihui Chen, Yang Liu and Shantanu Jaiswal. graph2vec: Learning Distributed Representations of Graphs. arXiv. 2017 https://arxiv.org/pdf/1707.05005.pdf See also -------- weisfeiler_lehman_graph_hash
null
(G, edge_attr=None, node_attr=None, iterations=3, digest_size=16, include_initial_labels=False, *, backend=None, **backend_kwargs)
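The per-node output shape follows from the description above: one hash per iteration, and structurally equivalent nodes get identical lists. A sketch, assuming NetworkX is installed:

```python
import networkx as nx

G = nx.path_graph(4)  # 0 - 1 - 2 - 3
hashes = nx.weisfeiler_lehman_subgraph_hashes(G, iterations=2)
```

Nodes 0 and 3 (the two endpoints) are related by a graph automorphism, so their subgraph hash lists match at every depth.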
31,235
networkx.generators.classic
wheel_graph
Return the wheel graph The wheel graph consists of a hub node connected to a cycle of (n-1) nodes. .. plot:: >>> nx.draw(nx.wheel_graph(5)) Parameters ---------- n : int or iterable If an integer, node labels are 0 to n with center 0. If an iterable of nodes, the center is the first. Warning: n is not checked for duplicates and if present the resulting graph may not be as desired. Make sure you have no duplicates. create_using : NetworkX graph constructor, optional (default=nx.Graph) Graph type to create. If graph instance, then cleared before populated. Node labels are the integers 0 to n - 1.
def star_graph(n, create_using=None): """Return the star graph The star graph consists of one center node connected to n outer nodes. .. plot:: >>> nx.draw(nx.star_graph(6)) Parameters ---------- n : int or iterable If an integer, node labels are 0 to n with center 0. If an iterable of nodes, the center is the first. Warning: n is not checked for duplicates and if present the resulting graph may not be as desired. Make sure you have no duplicates. create_using : NetworkX graph constructor, optional (default=nx.Graph) Graph type to create. If graph instance, then cleared before populated. Notes ----- The graph has n+1 nodes for integer n. So star_graph(3) is the same as star_graph(range(4)). """ # the @nodes_or_number(0) decorator (not shown) passes `n` in as a (number, node-list) pair n, nodes = n if isinstance(n, numbers.Integral): nodes.append(int(n)) # there should be n+1 nodes G = empty_graph(nodes, create_using) if G.is_directed(): raise NetworkXError("Directed Graph not supported") if len(nodes) > 1: hub, *spokes = nodes G.add_edges_from((hub, node) for node in spokes) return G
(n, create_using=None, *, backend=None, **backend_kwargs)
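The hub-plus-cycle structure fixes the node and edge counts: `n - 1` spokes plus `n - 1` rim edges. A sketch, assuming NetworkX is installed:

```python
import networkx as nx

n = 6
G = nx.wheel_graph(n)  # hub 0 joined to a cycle on nodes 1..5
```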
31,237
networkx.algorithms.wiener
wiener_index
Returns the Wiener index of the given graph. The *Wiener index* of a graph is the sum of the shortest-path (weighted) distances between each pair of reachable nodes. For pairs of nodes in undirected graphs, only one orientation of the pair is counted. Parameters ---------- G : NetworkX graph weight : string or None, optional (default: None) If None, every edge has weight 1. If a string, use this edge attribute as the edge weight. Any edge attribute not present defaults to 1. The edge weights are used to compute shortest-path distances. Returns ------- number The Wiener index of the graph `G`. Raises ------ NetworkXError If the graph `G` is not connected. Notes ----- If a pair of nodes is not reachable, the distance is assumed to be infinity. This means that for graphs that are not strongly-connected, this function returns ``inf``. The Wiener index is not usually defined for directed graphs, however this function uses the natural generalization of the Wiener index to directed graphs. Examples -------- The Wiener index of the (unweighted) complete graph on *n* nodes equals the number of pairs of the *n* nodes, since each pair of nodes is at distance one:: >>> n = 10 >>> G = nx.complete_graph(n) >>> nx.wiener_index(G) == n * (n - 1) / 2 True Graphs that are not strongly-connected have infinite Wiener index:: >>> G = nx.empty_graph(2) >>> nx.wiener_index(G) inf References ---------- .. [1] `Wikipedia: Wiener Index <https://en.wikipedia.org/wiki/Wiener_index>`_
null
(G, weight=None, *, backend=None, **backend_kwargs)
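The definition as a sum of pairwise shortest-path distances is easy to verify by hand on a small path graph. A sketch, assuming NetworkX is installed:

```python
import networkx as nx

# P4: d(0,1)+d(0,2)+d(0,3)+d(1,2)+d(1,3)+d(2,3) = 1+2+3+1+2+1 = 10
G = nx.path_graph(4)
w = nx.wiener_index(G)
```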
31,238
networkx.generators.community
windmill_graph
Generate a windmill graph. A windmill graph is a graph of `n` cliques each of size `k` that are all joined at one node. It can be thought of as taking a disjoint union of `n` cliques of size `k`, selecting one point from each, and contracting all of the selected points. Alternatively, one could generate `n` cliques of size `k-1` and one node that is connected to all other nodes in the graph. Parameters ---------- n : int Number of cliques k : int Size of cliques Returns ------- G : NetworkX Graph windmill graph with n cliques of size k Raises ------ NetworkXError If the number of cliques is less than two If the size of the cliques is less than two Examples -------- >>> G = nx.windmill_graph(4, 5) Notes ----- The node labeled `0` will be the node connected to all other nodes. Note that windmill graphs are usually denoted `Wd(k,n)`, so the parameters are in the opposite order as the parameters of this method.
def _generate_communities(degree_seq, community_sizes, mu, max_iters, seed): """Returns a list of sets, each of which represents a community. ``degree_seq`` is the degree sequence that must be met by the graph. ``community_sizes`` is the community size distribution that must be met by the generated list of sets. ``mu`` is a float in the interval [0, 1] indicating the fraction of intra-community edges incident to each node. ``max_iters`` is the number of times to try to add a node to a community. This must be greater than the length of ``degree_seq``, otherwise this function will always fail. If the number of iterations exceeds this value, :exc:`~networkx.exception.ExceededMaxIterations` is raised. seed : integer, random_state, or None (default) Indicator of random number generation state. See :ref:`Randomness<randomness>`. The communities returned by this are sets of integers in the set {0, ..., *n* - 1}, where *n* is the length of ``degree_seq``. """ # This assumes the nodes in the graph will be natural numbers. result = [set() for _ in community_sizes] n = len(degree_seq) free = list(range(n)) for i in range(max_iters): v = free.pop() c = seed.choice(range(len(community_sizes))) # s = int(degree_seq[v] * (1 - mu) + 0.5) s = round(degree_seq[v] * (1 - mu)) # If the community is large enough, add the node to the chosen # community. Otherwise, return it to the list of unaffiliated # nodes. if s < community_sizes[c]: result[c].add(v) else: free.append(v) # If the community is too big, remove a node from it. if len(result[c]) > community_sizes[c]: free.append(result[c].pop()) if not free: return result msg = "Could not assign communities; try increasing min_community" raise nx.ExceededMaxIterations(msg)
(n, k, *, backend=None, **backend_kwargs)
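The contraction description above pins down the node count: `n` cliques of size `k` sharing one hub give `1 + n*(k - 1)` nodes. A sketch, assuming NetworkX is installed:

```python
import networkx as nx

n_cliques, k = 3, 4
G = nx.windmill_graph(n_cliques, k)
```

Each K4 blade contributes 6 edges and the blades share only node 0, so this graph has 18 edges in total.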
31,239
networkx.algorithms.link_prediction
within_inter_cluster
Compute the ratio of within- and inter-cluster common neighbors of all node pairs in ebunch. For two nodes `u` and `v`, if a common neighbor `w` belongs to the same community as them, `w` is considered as within-cluster common neighbor of `u` and `v`. Otherwise, it is considered as inter-cluster common neighbor of `u` and `v`. The ratio between the size of the set of within- and inter-cluster common neighbors is defined as the WIC measure. [1]_ Parameters ---------- G : graph A NetworkX undirected graph. ebunch : iterable of node pairs, optional (default = None) The WIC measure will be computed for each pair of nodes given in the iterable. The pairs must be given as 2-tuples (u, v) where u and v are nodes in the graph. If ebunch is None then all nonexistent edges in the graph will be used. Default value: None. delta : float, optional (default = 0.001) Value to prevent division by zero in case there is no inter-cluster common neighbor between two nodes. See [1]_ for details. Default value: 0.001. community : string, optional (default = 'community') Nodes attribute name containing the community information. G[u][community] identifies which community u belongs to. Each node belongs to at most one community. Default value: 'community'. Returns ------- piter : iterator An iterator of 3-tuples in the form (u, v, p) where (u, v) is a pair of nodes and p is their WIC measure. Raises ------ NetworkXNotImplemented If `G` is a `DiGraph`, a `Multigraph` or a `MultiDiGraph`. NetworkXAlgorithmError - If `delta` is less than or equal to zero. - If no community information is available for a node in `ebunch` or in `G` (if `ebunch` is `None`). NodeNotFound If `ebunch` has a node that is not in `G`. 
Examples -------- >>> G = nx.Graph() >>> G.add_edges_from([(0, 1), (0, 2), (0, 3), (1, 4), (2, 4), (3, 4)]) >>> G.nodes[0]["community"] = 0 >>> G.nodes[1]["community"] = 1 >>> G.nodes[2]["community"] = 0 >>> G.nodes[3]["community"] = 0 >>> G.nodes[4]["community"] = 0 >>> preds = nx.within_inter_cluster(G, [(0, 4)]) >>> for u, v, p in preds: ... print(f"({u}, {v}) -> {p:.8f}") (0, 4) -> 1.99800200 >>> preds = nx.within_inter_cluster(G, [(0, 4)], delta=0.5) >>> for u, v, p in preds: ... print(f"({u}, {v}) -> {p:.8f}") (0, 4) -> 1.33333333 References ---------- .. [1] Jorge Carlos Valverde-Rebaza and Alneu de Andrade Lopes. Link prediction in complex networks based on cluster information. In Proceedings of the 21st Brazilian conference on Advances in Artificial Intelligence (SBIA'12) https://doi.org/10.1007/978-3-642-34459-6_10
null
(G, ebunch=None, delta=0.001, community='community', *, backend=None, **backend_kwargs)
31,240
networkx.readwrite.adjlist
write_adjlist
Write graph G in single-line adjacency-list format to path. Parameters ---------- G : NetworkX graph path : string or file Filename or file handle for data output. Filenames ending in .gz or .bz2 will be compressed. comments : string, optional Marker for comment lines delimiter : string, optional Separator for node labels encoding : string, optional Text encoding. Examples -------- >>> G = nx.path_graph(4) >>> nx.write_adjlist(G, "test.adjlist") The path can be a filehandle or a string with the name of the file. If a filehandle is provided, it has to be opened in 'wb' mode. >>> fh = open("test.adjlist", "wb") >>> nx.write_adjlist(G, fh) Notes ----- The default `delimiter=" "` will result in unexpected results if node names contain whitespace characters. To avoid this problem, specify an alternate delimiter when spaces are valid in node names. NB: This option is not available for data that isn't user-generated. This format does not store graph, node, or edge data. See Also -------- read_adjlist, generate_adjlist
null
(G, path, comments='#', delimiter=' ', encoding='utf-8')
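The 'wb'-mode requirement also covers in-memory binary buffers, which makes a round trip easy to test. A sketch, assuming NetworkX is installed:

```python
import io
import networkx as nx

G = nx.path_graph(4)
buf = io.BytesIO()  # behaves like a file handle opened in 'wb' mode
nx.write_adjlist(G, buf)
buf.seek(0)
H = nx.read_adjlist(buf, nodetype=int)
```

As the docstring notes, the format stores no graph, node, or edge data, so only the structure survives the round trip.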
31,241
networkx.readwrite.edgelist
write_edgelist
Write graph as a list of edges. Parameters ---------- G : graph A NetworkX graph path : file or string File or filename to write. If a file is provided, it must be opened in 'wb' mode. Filenames ending in .gz or .bz2 will be compressed. comments : string, optional The character used to indicate the start of a comment delimiter : string, optional The string used to separate values. The default is whitespace. data : bool or list, optional If False write no edge data. If True write a string representation of the edge data dictionary. If a list (or other iterable) is provided, write the keys specified in the list. encoding: string, optional Specify which encoding to use when writing file. Examples -------- >>> G = nx.path_graph(4) >>> nx.write_edgelist(G, "test.edgelist") >>> G = nx.path_graph(4) >>> fh = open("test.edgelist", "wb") >>> nx.write_edgelist(G, fh) >>> nx.write_edgelist(G, "test.edgelist.gz") >>> nx.write_edgelist(G, "test.edgelist.gz", data=False) >>> G = nx.Graph() >>> G.add_edge(1, 2, weight=7, color="red") >>> nx.write_edgelist(G, "test.edgelist", data=False) >>> nx.write_edgelist(G, "test.edgelist", data=["color"]) >>> nx.write_edgelist(G, "test.edgelist", data=["color", "weight"]) See Also -------- read_edgelist write_weighted_edgelist
null
(G, path, comments='#', delimiter=' ', data=True, encoding='utf-8')
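The `data` parameter controls which attributes survive a round trip; here only ``weight`` is kept. A sketch, assuming NetworkX is installed:

```python
import io
import networkx as nx

G = nx.Graph()
G.add_edge(1, 2, weight=7, color="red")

buf = io.BytesIO()
nx.write_edgelist(G, buf, data=["weight"])
buf.seek(0)
H = nx.read_edgelist(buf, nodetype=int, data=[("weight", int)])
```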
31,242
networkx.readwrite.gexf
write_gexf
Write G in GEXF format to path. "GEXF (Graph Exchange XML Format) is a language for describing complex networks structures, their associated data and dynamics" [1]_. Node attributes are checked according to the version of the GEXF schemas used for parameters which are not user defined, e.g. visualization 'viz' [2]_. See example for usage. Parameters ---------- G : graph A NetworkX graph path : file or string File or file name to write. File names ending in .gz or .bz2 will be compressed. encoding : string (optional, default: 'utf-8') Encoding for text data. prettyprint : bool (optional, default: True) If True use line breaks and indenting in output XML. version: string (optional, default: '1.2draft') The version of GEXF to be used for node attribute checking Examples -------- >>> G = nx.path_graph(4) >>> nx.write_gexf(G, "test.gexf") # visualization data >>> G.nodes[0]["viz"] = {"size": 54} >>> G.nodes[0]["viz"]["position"] = {"x": 0, "y": 1} >>> G.nodes[0]["viz"]["color"] = {"r": 0, "g": 0, "b": 256} Notes ----- This implementation does not support mixed graphs (directed and undirected edges together). The node id attribute is set to be the string of the node label. If you want to specify an id, set it as node data, e.g. node['a']['id']=1 to set the id of node 'a' to 1. References ---------- .. [1] GEXF File Format, http://gexf.net/ .. [2] GEXF schema, http://gexf.net/schema.html
def add_node(self, G, node_xml, node_attr, node_pid=None): # add a single node with attributes to the graph # get attributes and subattributues for node data = self.decode_attr_elements(node_attr, node_xml) data = self.add_parents(data, node_xml) # add any parents if self.VERSION == "1.1": data = self.add_slices(data, node_xml) # add slices else: data = self.add_spells(data, node_xml) # add spells data = self.add_viz(data, node_xml) # add viz data = self.add_start_end(data, node_xml) # add start/end # find the node id and cast it to the appropriate type node_id = node_xml.get("id") if self.node_type is not None: node_id = self.node_type(node_id) # every node should have a label node_label = node_xml.get("label") data["label"] = node_label # parent node id node_pid = node_xml.get("pid", node_pid) if node_pid is not None: data["pid"] = node_pid # check for subnodes, recursive subnodes = node_xml.find(f"{{{self.NS_GEXF}}}nodes") if subnodes is not None: for node_xml in subnodes.findall(f"{{{self.NS_GEXF}}}node"): self.add_node(G, node_xml, node_attr, node_pid=node_id) G.add_node(node_id, **data)
(G, path, encoding='utf-8', prettyprint=True, version='1.2draft')
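A round-trip sketch for `write_gexf`, including the `viz` data from the example above; `node_type=int` on the read side is an illustrative choice (ids round-trip as strings otherwise):

```python
import os
import tempfile

import networkx as nx

G = nx.path_graph(3)
G.nodes[0]["viz"] = {"size": 54}  # GEXF visualization data

fd, fname = tempfile.mkstemp(suffix=".gexf")
os.close(fd)
nx.write_gexf(G, fname)

# Node ids come back as strings unless node_type is given on read.
H = nx.read_gexf(fname, node_type=int)
os.remove(fname)
```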
31,243
networkx.readwrite.gml
write_gml
Write a graph `G` in GML format to the file or file handle `path`. Parameters ---------- G : NetworkX graph The graph to be converted to GML. path : filename or filehandle The filename or filehandle to write. Files whose names end with .gz or .bz2 will be compressed. stringizer : callable, optional A `stringizer` which converts non-int/non-float/non-dict values into strings. If it cannot convert a value into a string, it should raise a `ValueError` to indicate that. Default value: None. Raises ------ NetworkXError If `stringizer` cannot convert a value into a string, or the value to convert is not a string while `stringizer` is None. See Also -------- read_gml, generate_gml literal_stringizer Notes ----- Graph attributes named 'directed', 'multigraph', 'node' or 'edge', node attributes named 'id' or 'label', edge attributes named 'source' or 'target' (or 'key' if `G` is a multigraph) are ignored because these attribute names are used to encode the graph structure. GML files are stored using a 7-bit ASCII encoding with any extended ASCII characters (iso8859-1) appearing as HTML character entities. Without specifying a `stringizer`/`destringizer`, the code is capable of writing `int`/`float`/`str`/`dict`/`list` data as required by the GML specification. For writing other data types, and for reading data other than `str` you need to explicitly supply a `stringizer`/`destringizer`. Note that while we allow non-standard GML to be read from a file, we make sure to write GML format. In particular, underscores are not allowed in attribute names. For additional documentation on the GML file format, please see the `GML url <https://web.archive.org/web/20190207140002/http://www.fim.uni-passau.de/index.php?id=17297&L=1>`_. See the module docstring :mod:`networkx.readwrite.gml` for more details. Examples -------- >>> G = nx.path_graph(4) >>> nx.write_gml(G, "test.gml") Filenames ending in .gz or .bz2 will be compressed. >>> nx.write_gml(G, "test.gml.gz")
def generate_gml(G, stringizer=None): r"""Generate a single entry of the graph `G` in GML format. Parameters ---------- G : NetworkX graph The graph to be converted to GML. stringizer : callable, optional A `stringizer` which converts non-int/non-float/non-dict values into strings. If it cannot convert a value into a string, it should raise a `ValueError` to indicate that. Default value: None. Returns ------- lines: generator of strings Lines of GML data. Newlines are not appended. Raises ------ NetworkXError If `stringizer` cannot convert a value into a string, or the value to convert is not a string while `stringizer` is None. See Also -------- literal_stringizer Notes ----- Graph attributes named 'directed', 'multigraph', 'node' or 'edge', node attributes named 'id' or 'label', edge attributes named 'source' or 'target' (or 'key' if `G` is a multigraph) are ignored because these attribute names are used to encode the graph structure. GML files are stored using a 7-bit ASCII encoding with any extended ASCII characters (iso8859-1) appearing as HTML character entities. Without specifying a `stringizer`/`destringizer`, the code is capable of writing `int`/`float`/`str`/`dict`/`list` data as required by the GML specification. For writing other data types, and for reading data other than `str` you need to explicitly supply a `stringizer`/`destringizer`. For additional documentation on the GML file format, please see the `GML url <https://web.archive.org/web/20190207140002/http://www.fim.uni-passau.de/index.php?id=17297&L=1>`_. See the module docstring :mod:`networkx.readwrite.gml` for more details. 
Examples -------- >>> G = nx.Graph() >>> G.add_node("1") >>> print("\n".join(nx.generate_gml(G))) graph [ node [ id 0 label "1" ] ] >>> G = nx.MultiGraph([("a", "b"), ("a", "b")]) >>> print("\n".join(nx.generate_gml(G))) graph [ multigraph 1 node [ id 0 label "a" ] node [ id 1 label "b" ] edge [ source 0 target 1 key 0 ] edge [ source 0 target 1 key 1 ] ] """ valid_keys = re.compile("^[A-Za-z][0-9A-Za-z_]*$") def stringize(key, value, ignored_keys, indent, in_list=False): if not isinstance(key, str): raise NetworkXError(f"{key!r} is not a string") if not valid_keys.match(key): raise NetworkXError(f"{key!r} is not a valid key") if not isinstance(key, str): key = str(key) if key not in ignored_keys: if isinstance(value, int | bool): if key == "label": yield indent + key + ' "' + str(value) + '"' elif value is True: # python bool is an instance of int yield indent + key + " 1" elif value is False: yield indent + key + " 0" # GML only supports signed 32-bit integers elif value < -(2**31) or value >= 2**31: yield indent + key + ' "' + str(value) + '"' else: yield indent + key + " " + str(value) elif isinstance(value, float): text = repr(value).upper() # GML matches INF to keys, so prepend + to INF. Use repr(float(*)) # instead of string literal to future proof against changes to repr. if text == repr(float("inf")).upper(): text = "+" + text else: # GML requires that a real literal contain a decimal point, but # repr may not output a decimal point when the mantissa is # integral and hence needs fixing. epos = text.rfind("E") if epos != -1 and text.find(".", 0, epos) == -1: text = text[:epos] + "." 
+ text[epos:] if key == "label": yield indent + key + ' "' + text + '"' else: yield indent + key + " " + text elif isinstance(value, dict): yield indent + key + " [" next_indent = indent + " " for key, value in value.items(): yield from stringize(key, value, (), next_indent) yield indent + "]" elif isinstance(value, tuple) and key == "label": yield indent + key + f" \"({','.join(repr(v) for v in value)})\"" elif isinstance(value, list | tuple) and key != "label" and not in_list: if len(value) == 0: yield indent + key + " " + f'"{value!r}"' if len(value) == 1: yield indent + key + " " + f'"{LIST_START_VALUE}"' for val in value: yield from stringize(key, val, (), indent, True) else: if stringizer: try: value = stringizer(value) except ValueError as err: raise NetworkXError( f"{value!r} cannot be converted into a string" ) from err if not isinstance(value, str): raise NetworkXError(f"{value!r} is not a string") yield indent + key + ' "' + escape(value) + '"' multigraph = G.is_multigraph() yield "graph [" # Output graph attributes if G.is_directed(): yield " directed 1" if multigraph: yield " multigraph 1" ignored_keys = {"directed", "multigraph", "node", "edge"} for attr, value in G.graph.items(): yield from stringize(attr, value, ignored_keys, " ") # Output node data node_id = dict(zip(G, range(len(G)))) ignored_keys = {"id", "label"} for node, attrs in G.nodes.items(): yield " node [" yield " id " + str(node_id[node]) yield from stringize("label", node, (), " ") for attr, value in attrs.items(): yield from stringize(attr, value, ignored_keys, " ") yield " ]" # Output edge data ignored_keys = {"source", "target"} kwargs = {"data": True} if multigraph: ignored_keys.add("key") kwargs["keys"] = True for e in G.edges(**kwargs): yield " edge [" yield " source " + str(node_id[e[0]]) yield " target " + str(node_id[e[1]]) if multigraph: yield from stringize("key", e[2], (), " ") for attr, value in e[-1].items(): yield from stringize(attr, value, ignored_keys, " ") yield " ]" 
yield "]"
(G, path, stringizer=None)
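A round-trip sketch for `write_gml`; note the string round-trip behavior mentioned in the notes above, since GML stores node labels as strings:

```python
import os
import tempfile

import networkx as nx

G = nx.path_graph(3)
G.nodes[0]["color"] = "blue"

fd, fname = tempfile.mkstemp(suffix=".gml")
os.close(fd)
nx.write_gml(G, fname)

# Integer node labels come back as the strings "0", "1", "2" unless a
# destringizer is supplied on read.
H = nx.read_gml(fname)
os.remove(fname)
```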
31,244
networkx.readwrite.graph6
write_graph6
Write a simple undirected graph to a path in graph6 format. Parameters ---------- G : Graph (undirected) path : str The path naming the file to which to write the graph. nodes: list or iterable Nodes are labeled 0...n-1 in the order provided. If None the ordering given by ``G.nodes()`` is used. header: bool If True add '>>graph6<<' string to head of data Raises ------ NetworkXNotImplemented If the graph is directed or is a multigraph. ValueError If the graph has at least ``2 ** 36`` nodes; the graph6 format is only defined for graphs of order less than ``2 ** 36``. Examples -------- You can write a graph6 file by giving the path to a file:: >>> import tempfile >>> with tempfile.NamedTemporaryFile(delete=False) as f: ... nx.write_graph6(nx.path_graph(2), f.name) ... _ = f.seek(0) ... print(f.read()) b'>>graph6<<A_\n' See Also -------- from_graph6_bytes, read_graph6 Notes ----- The function writes a newline character after writing the encoding of the graph. The format does not support edge or node labels, parallel edges or self loops. If self loops are present they are silently ignored. References ---------- .. [1] Graph6 specification <http://users.cecs.anu.edu.au/~bdm/data/formats.html>
null
(G, path, nodes=None, header=True)
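The same two-node example from the docstring above, written as a plain script with a temp file rather than inside a `with` block:

```python
import os
import tempfile

import networkx as nx

fd, fname = tempfile.mkstemp(suffix=".g6")
os.close(fd)
nx.write_graph6(nx.path_graph(2), fname)

# The file holds the '>>graph6<<' header, the encoded graph, and a newline.
with open(fname, "rb") as f:
    data = f.read()
os.remove(fname)
```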
31,245
networkx.readwrite.graphml
write_graphml_lxml
Write G in GraphML XML format to path. This function uses the LXML framework and should be faster than the version using the xml library. Parameters ---------- G : graph A networkx graph path : file or string File or filename to write. Filenames ending in .gz or .bz2 will be compressed. encoding : string (optional) Encoding for text data. prettyprint : bool (optional) If True use line breaks and indenting in output XML. infer_numeric_types : boolean Determine if numeric types should be generalized. For example, if edges have both int and float 'weight' attributes, we infer in GraphML that both are floats. named_key_ids : bool (optional) If True use attr.name as value for key elements' id attribute. edge_id_from_attribute : dict key (optional) If provided, the graphml edge id is set by looking up the corresponding edge data attribute keyed by this parameter. If `None` or the key does not exist in edge data, the edge id is set by the edge key if `G` is a MultiGraph, else the edge id is left unset. Examples -------- >>> G = nx.path_graph(4) >>> nx.write_graphml_lxml(G, "fourpath.graphml") Notes ----- This implementation does not support mixed graphs (directed and undirected edges together), hyperedges, nested graphs, or ports.
def add_graph_element(self, G): """ Serialize graph G in GraphML to the stream. """ if G.is_directed(): default_edge_type = "directed" else: default_edge_type = "undirected" graphid = G.graph.pop("id", None) if graphid is None: graph_element = self._xml.element("graph", edgedefault=default_edge_type) else: graph_element = self._xml.element( "graph", edgedefault=default_edge_type, id=graphid ) # gather attributes types for the whole graph # to find the most general numeric format needed. # Then pass through attributes to create key_id for each. graphdata = { k: v for k, v in G.graph.items() if k not in ("node_default", "edge_default") } node_default = G.graph.get("node_default", {}) edge_default = G.graph.get("edge_default", {}) # Graph attributes for k, v in graphdata.items(): self.attribute_types[(str(k), "graph")].add(type(v)) for k, v in graphdata.items(): element_type = self.get_xml_type(self.attr_type(k, "graph", v)) self.get_key(str(k), element_type, "graph", None) # Nodes and data for node, d in G.nodes(data=True): for k, v in d.items(): self.attribute_types[(str(k), "node")].add(type(v)) for node, d in G.nodes(data=True): for k, v in d.items(): T = self.get_xml_type(self.attr_type(k, "node", v)) self.get_key(str(k), T, "node", node_default.get(k)) # Edges and data if G.is_multigraph(): for u, v, ekey, d in G.edges(keys=True, data=True): for k, v in d.items(): self.attribute_types[(str(k), "edge")].add(type(v)) for u, v, ekey, d in G.edges(keys=True, data=True): for k, v in d.items(): T = self.get_xml_type(self.attr_type(k, "edge", v)) self.get_key(str(k), T, "edge", edge_default.get(k)) else: for u, v, d in G.edges(data=True): for k, v in d.items(): self.attribute_types[(str(k), "edge")].add(type(v)) for u, v, d in G.edges(data=True): for k, v in d.items(): T = self.get_xml_type(self.attr_type(k, "edge", v)) self.get_key(str(k), T, "edge", edge_default.get(k)) # Now add attribute keys to the xml file for key in self.xml: self._xml.write(key, 
pretty_print=self._prettyprint) # The incremental_writer writes each node/edge as it is created incremental_writer = IncrementalElement(self._xml, self._prettyprint) with graph_element: self.add_attributes("graph", incremental_writer, graphdata, {}) self.add_nodes(G, incremental_writer) # adds attributes too self.add_edges(G, incremental_writer) # adds attributes too
(G, path, encoding='utf-8', prettyprint=True, infer_numeric_types=False, named_key_ids=False, edge_id_from_attribute=None)
31,247
networkx.readwrite.graphml
write_graphml_xml
Write G in GraphML XML format to path Parameters ---------- G : graph A networkx graph path : file or string File or filename to write. Filenames ending in .gz or .bz2 will be compressed. encoding : string (optional) Encoding for text data. prettyprint : bool (optional) If True use line breaks and indenting in output XML. infer_numeric_types : boolean Determine if numeric types should be generalized. For example, if edges have both int and float 'weight' attributes, we infer in GraphML that both are floats. named_key_ids : bool (optional) If True use attr.name as value for key elements' id attribute. edge_id_from_attribute : dict key (optional) If provided, the graphml edge id is set by looking up the corresponding edge data attribute keyed by this parameter. If `None` or the key does not exist in edge data, the edge id is set by the edge key if `G` is a MultiGraph, else the edge id is left unset. Examples -------- >>> G = nx.path_graph(4) >>> nx.write_graphml(G, "test.graphml") Notes ----- This implementation does not support mixed graphs (directed and undirected edges together), hyperedges, nested graphs, or ports.
def add_graph_element(self, G): """ Serialize graph G in GraphML to the stream. """ if G.is_directed(): default_edge_type = "directed" else: default_edge_type = "undirected" graphid = G.graph.pop("id", None) if graphid is None: graph_element = self._xml.element("graph", edgedefault=default_edge_type) else: graph_element = self._xml.element( "graph", edgedefault=default_edge_type, id=graphid ) # gather attributes types for the whole graph # to find the most general numeric format needed. # Then pass through attributes to create key_id for each. graphdata = { k: v for k, v in G.graph.items() if k not in ("node_default", "edge_default") } node_default = G.graph.get("node_default", {}) edge_default = G.graph.get("edge_default", {}) # Graph attributes for k, v in graphdata.items(): self.attribute_types[(str(k), "graph")].add(type(v)) for k, v in graphdata.items(): element_type = self.get_xml_type(self.attr_type(k, "graph", v)) self.get_key(str(k), element_type, "graph", None) # Nodes and data for node, d in G.nodes(data=True): for k, v in d.items(): self.attribute_types[(str(k), "node")].add(type(v)) for node, d in G.nodes(data=True): for k, v in d.items(): T = self.get_xml_type(self.attr_type(k, "node", v)) self.get_key(str(k), T, "node", node_default.get(k)) # Edges and data if G.is_multigraph(): for u, v, ekey, d in G.edges(keys=True, data=True): for k, v in d.items(): self.attribute_types[(str(k), "edge")].add(type(v)) for u, v, ekey, d in G.edges(keys=True, data=True): for k, v in d.items(): T = self.get_xml_type(self.attr_type(k, "edge", v)) self.get_key(str(k), T, "edge", edge_default.get(k)) else: for u, v, d in G.edges(data=True): for k, v in d.items(): self.attribute_types[(str(k), "edge")].add(type(v)) for u, v, d in G.edges(data=True): for k, v in d.items(): T = self.get_xml_type(self.attr_type(k, "edge", v)) self.get_key(str(k), T, "edge", edge_default.get(k)) # Now add attribute keys to the xml file for key in self.xml: self._xml.write(key, 
pretty_print=self._prettyprint) # The incremental_writer writes each node/edge as it is created incremental_writer = IncrementalElement(self._xml, self._prettyprint) with graph_element: self.add_attributes("graph", incremental_writer, graphdata, {}) self.add_nodes(G, incremental_writer) # adds attributes too self.add_edges(G, incremental_writer) # adds attributes too
(G, path, encoding='utf-8', prettyprint=True, infer_numeric_types=False, named_key_ids=False, edge_id_from_attribute=None)
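A round-trip sketch covering both GraphML writers above via the `nx.write_graphml` dispatcher; GraphML records attribute types, so the float survives the trip:

```python
import os
import tempfile

import networkx as nx

G = nx.path_graph(3)
G.edges[0, 1]["weight"] = 2.5  # float attribute; stored as a GraphML double

fd, fname = tempfile.mkstemp(suffix=".graphml")
os.close(fd)
nx.write_graphml(G, fname)  # dispatches to the lxml or xml implementation

# Node ids come back as strings unless node_type is given on read.
H = nx.read_graphml(fname, node_type=int)
os.remove(fname)
```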
31,248
networkx.drawing.nx_latex
write_latex
Write the latex code to draw the graph(s) onto `path`. This convenience function creates the latex drawing code as a string and writes that to a file ready to be compiled when `as_document` is True or ready to be ``import`` ed or ``include`` ed into your main LaTeX document. The `path` argument can be a string filename or a file handle to write to. Parameters ---------- Gbunch : NetworkX graph or iterable of NetworkX graphs If Gbunch is a graph, it is drawn in a figure environment. If Gbunch is an iterable of graphs, each is drawn in a subfigure environment within a single figure environment. path : filename Filename or file handle to write to options : dict By default, TikZ is used with options: (others are ignored):: pos : string or dict or list The name of the node attribute on `G` that holds the position of each node. Positions can be sequences of length 2 with numbers for (x,y) coordinates. They can also be strings to denote positions in TikZ style, such as (x, y) or (angle:radius). If a dict, it should be keyed by node to a position. If an empty dict, a circular layout is computed by TikZ. If you are drawing many graphs in subfigures, use a list of position dicts. tikz_options : string The tikzpicture options description defining the options for the picture. Often large scale options like `[scale=2]`. default_node_options : string The draw options for a path of nodes. Individual node options override these. node_options : string or dict The name of the node attribute on `G` that holds the options for each node. Or a dict keyed by node to a string holding the options for that node. node_label : string or dict The name of the node attribute on `G` that holds the node label (text) displayed for each node. If the attribute is "" or not present, the node itself is drawn as a string. LaTeX processing such as ``"$A_1$"`` is allowed. Or a dict keyed by node to a string holding the label for that node. 
default_edge_options : string The options for the scope drawing all edges. The default is "[-]" for undirected graphs and "[->]" for directed graphs. edge_options : string or dict The name of the edge attribute on `G` that holds the options for each edge. If the edge is a self-loop and ``"loop" not in edge_options`` the option "loop," is added to the options for the self-loop edge. Hence you can use "[loop above]" explicitly, but the default is "[loop]". Or a dict keyed by edge to a string holding the options for that edge. edge_label : string or dict The name of the edge attribute on `G` that holds the edge label (text) displayed for each edge. If the attribute is "" or not present, no edge label is drawn. Or a dict keyed by edge to a string holding the label for that edge. edge_label_options : string or dict The name of the edge attribute on `G` that holds the label options for each edge. For example, "[sloped,above,blue]". The default is no options. Or a dict keyed by edge to a string holding the label options for that edge. caption : string The caption string for the figure environment latex_label : string The latex label used for the figure for easy referral from the main text sub_captions : list of strings The sub_caption string for each subfigure in the figure sub_latex_labels : list of strings The latex label for each subfigure in the figure n_rows : int The number of rows of subfigures to arrange for multiple graphs as_document : bool Whether to wrap the latex code in a document environment for compiling document_wrapper : formatted text string with variable ``content``. This text is called to evaluate the content embedded in a document environment with a preamble setting up the TikZ syntax. figure_wrapper : formatted text string This text is evaluated with variables ``content``, ``caption`` and ``label``. It wraps the content and if a caption is provided, adds the latex code for that caption, and if a label is provided, adds the latex code for a label. 
subfigure_wrapper : formatted text string This text evaluate variables ``size``, ``content``, ``caption`` and ``label``. It wraps the content and if a caption is provided, adds the latex code for that caption, and if a label is provided, adds the latex code for a label. The size is the vertical size of each row of subfigures as a fraction. See Also ======== to_latex
null
(Gbunch, path, **options)
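A minimal sketch of `write_latex` with explicit positions; the position dict and caption text are illustrative choices, and the assertion below only checks that a tikzpicture environment was produced:

```python
import os
import tempfile

import networkx as nx

G = nx.path_graph(3)
pos = {n: (n, 0) for n in G}  # explicit (x, y) coordinates for each node

fd, fname = tempfile.mkstemp(suffix=".tex")
os.close(fd)
nx.write_latex(G, fname, pos=pos, as_document=True, caption="A path graph")

with open(fname) as f:
    tex = f.read()
os.remove(fname)
```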
31,249
networkx.readwrite.multiline_adjlist
write_multiline_adjlist
Write the graph G in multiline adjacency list format to path Parameters ---------- G : NetworkX graph path : string or file Filename or file handle to write to. Filenames ending in .gz or .bz2 will be compressed. comments : string, optional Marker for comment lines delimiter : string, optional Separator for node labels encoding : string, optional Text encoding. Examples -------- >>> G = nx.path_graph(4) >>> nx.write_multiline_adjlist(G, "test.adjlist") The path can be a file handle or a string with the name of the file. If a file handle is provided, it has to be opened in 'wb' mode. >>> fh = open("test.adjlist", "wb") >>> nx.write_multiline_adjlist(G, fh) Filenames ending in .gz or .bz2 will be compressed. >>> nx.write_multiline_adjlist(G, "test.adjlist.gz") See Also -------- read_multiline_adjlist
null
(G, path, delimiter=' ', comments='#', encoding='utf-8')
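A round-trip sketch for the multiline format; an in-memory binary buffer stands in for the 'wb'-mode file handle mentioned above:

```python
import io

import networkx as nx

G = nx.path_graph(3)

# Handles must be in binary mode; an in-memory buffer works the same way.
buf = io.BytesIO()
nx.write_multiline_adjlist(G, buf)

buf.seek(0)
H = nx.read_multiline_adjlist(buf, nodetype=int)
```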
31,250
networkx.readwrite.text
write_network_text
Creates a nice text representation of a graph This works via a depth-first traversal of the graph and writing a line for each unique node encountered. Non-tree edges are written to the right of each node, and connection to a non-tree edge is indicated with an ellipsis. This representation works best when the input graph is a forest, but any graph can be represented. Parameters ---------- graph : nx.DiGraph | nx.Graph Graph to represent path : string or file or callable or None Filename or file handle for data output. if a function, then it will be called for each generated line. if None, this will default to "sys.stdout.write" with_labels : bool | str If True will use the "label" attribute of a node to display if it exists otherwise it will use the node value itself. If given as a string, then that attribute name will be used instead of "label". Defaults to True. sources : List Specifies which nodes to start traversal from. Note: nodes that are not reachable from one of these sources may not be shown. If unspecified, the minimal set of nodes needed to reach all others will be used. max_depth : int | None The maximum depth to traverse before stopping. Defaults to None. ascii_only : Boolean If True only ASCII characters are used to construct the visualization end : string The line ending character vertical_chains : Boolean If True, chains of nodes will be drawn vertically when possible. Examples -------- >>> graph = nx.balanced_tree(r=2, h=2, create_using=nx.DiGraph) >>> nx.write_network_text(graph) ╙── 0 ├─╼ 1 │ ├─╼ 3 │ └─╼ 4 └─╼ 2 ├─╼ 5 └─╼ 6 >>> # A near tree with one non-tree edge >>> graph.add_edge(5, 1) >>> nx.write_network_text(graph) ╙── 0 ├─╼ 1 ╾ 5 │ ├─╼ 3 │ └─╼ 4 └─╼ 2 ├─╼ 5 │ └─╼ ... └─╼ 6 >>> graph = nx.cycle_graph(5) >>> nx.write_network_text(graph) ╙── 0 ├── 1 │ └── 2 │ └── 3 │ └── 4 ─ 0 └── ... >>> graph = nx.cycle_graph(5, nx.DiGraph) >>> nx.write_network_text(graph, vertical_chains=True) ╙── 0 ╾ 4 ╽ 1 ╽ 2 ╽ 3 ╽ 4 └─╼ ... 
>>> nx.write_network_text(graph, vertical_chains=True, ascii_only=True) +-- 0 <- 4 ! 1 ! 2 ! 3 ! 4 L-> ... >>> graph = nx.generators.barbell_graph(4, 2) >>> nx.write_network_text(graph, vertical_chains=False) ╙── 4 ├── 5 │ └── 6 │ ├── 7 │ │ ├── 8 ─ 6 │ │ │ └── 9 ─ 6, 7 │ │ └── ... │ └── ... └── 3 ├── 0 │ ├── 1 ─ 3 │ │ └── 2 ─ 0, 3 │ └── ... └── ... >>> nx.write_network_text(graph, vertical_chains=True) ╙── 4 ├── 5 │ │ │ 6 │ ├── 7 │ │ ├── 8 ─ 6 │ │ │ │ │ │ │ 9 ─ 6, 7 │ │ └── ... │ └── ... └── 3 ├── 0 │ ├── 1 ─ 3 │ │ │ │ │ 2 ─ 0, 3 │ └── ... └── ... >>> graph = nx.complete_graph(5, create_using=nx.Graph) >>> nx.write_network_text(graph) ╙── 0 ├── 1 │ ├── 2 ─ 0 │ │ ├── 3 ─ 0, 1 │ │ │ └── 4 ─ 0, 1, 2 │ │ └── ... │ └── ... └── ... >>> graph = nx.complete_graph(3, create_using=nx.DiGraph) >>> nx.write_network_text(graph) ╙── 0 ╾ 1, 2 ├─╼ 1 ╾ 2 │ ├─╼ 2 ╾ 0 │ │ └─╼ ... │ └─╼ ... └─╼ ...
def _parse_network_text(lines): """Reconstructs a graph from a network text representation. This is mainly used for testing. Network text is for display, not serialization, as such this cannot parse all network text representations because node labels can be ambiguous with the glyphs and indentation used to represent edge structure. Additionally, there is no way to determine if disconnected graphs were originally directed or undirected. Parameters ---------- lines : list or iterator of strings Input data in network text format Returns ------- G: NetworkX graph The graph corresponding to the lines in network text format. """ from itertools import chain from typing import Any, NamedTuple, Union class ParseStackFrame(NamedTuple): node: Any indent: int has_vertical_child: int | None initial_line_iter = iter(lines) is_ascii = None is_directed = None ############## # Initial Pass ############## # Do an initial pass over the lines to determine what type of graph it is. # Remember what these lines were, so we can reiterate over them in the # parsing pass. initial_lines = [] try: first_line = next(initial_line_iter) except StopIteration: ... else: initial_lines.append(first_line) # The first character indicates if it is an ASCII or UTF graph first_char = first_line[0] if first_char in { UtfBaseGlyphs.empty, UtfBaseGlyphs.newtree_mid[0], UtfBaseGlyphs.newtree_last[0], }: is_ascii = False elif first_char in { AsciiBaseGlyphs.empty, AsciiBaseGlyphs.newtree_mid[0], AsciiBaseGlyphs.newtree_last[0], }: is_ascii = True else: raise AssertionError(f"Unexpected first character: {first_char}") if is_ascii: directed_glyphs = AsciiDirectedGlyphs.as_dict() undirected_glyphs = AsciiUndirectedGlyphs.as_dict() else: directed_glyphs = UtfDirectedGlyphs.as_dict() undirected_glyphs = UtfUndirectedGlyphs.as_dict() # For both directed / undirected glyphs, determine which glyphs never # appear as substrings in the other undirected / directed glyphs. 
... Glyphs
# with this property unambiguously indicates if a graph is directed /
# undirected.
directed_items = set(directed_glyphs.values())
undirected_items = set(undirected_glyphs.values())
unambiguous_directed_items = []
for item in directed_items:
    other_items = undirected_items
    other_supersets = [other for other in other_items if item in other]
    if not other_supersets:
        unambiguous_directed_items.append(item)
unambiguous_undirected_items = []
for item in undirected_items:
    other_items = directed_items
    other_supersets = [other for other in other_items if item in other]
    if not other_supersets:
        unambiguous_undirected_items.append(item)

for line in initial_line_iter:
    initial_lines.append(line)
    if any(item in line for item in unambiguous_undirected_items):
        is_directed = False
        break
    elif any(item in line for item in unambiguous_directed_items):
        is_directed = True
        break

if is_directed is None:
    # Not enough information to determine, choose undirected by default
    is_directed = False

glyphs = directed_glyphs if is_directed else undirected_glyphs

# the backedge symbol by itself can be ambiguous, but with spaces around it
# becomes unambiguous.
backedge_symbol = " " + glyphs["backedge"] + " "

# Reconstruct an iterator over all of the lines.
parsing_line_iter = chain(initial_lines, initial_line_iter)

##############
# Parsing Pass
##############

edges = []
nodes = []
is_empty = None

noparent = object()  # sentinel value

# keep a stack of previous nodes that could be parents of subsequent nodes
stack = [ParseStackFrame(noparent, -1, None)]

for line in parsing_line_iter:
    if line == glyphs["empty"]:
        # If the line is the empty glyph, we are done.
        # There shouldn't be anything else after this.
        is_empty = True
        continue

    if backedge_symbol in line:
        # This line has one or more backedges, separate those out
        node_part, backedge_part = line.split(backedge_symbol)
        backedge_nodes = [u.strip() for u in backedge_part.split(", ")]
        # Now the node can be parsed
        node_part = node_part.rstrip()
        prefix, node = node_part.rsplit(" ", 1)
        node = node.strip()
        # Add the backedges to the edge list
        edges.extend([(u, node) for u in backedge_nodes])
    else:
        # No backedge, the tail of this line is the node
        prefix, node = line.rsplit(" ", 1)
        node = node.strip()

    prev = stack.pop()

    if node in glyphs["vertical_edge"]:
        # Previous node is still the previous node, but we know it will
        # have exactly one child, which will need to have its nesting level
        # adjusted.
        modified_prev = ParseStackFrame(
            prev.node,
            prev.indent,
            True,
        )
        stack.append(modified_prev)
        continue

    # The length of the string before the node characters gives us a hint
    # about our nesting level. The only case where this doesn't work is
    # when there are vertical chains, which is handled explicitly.
    indent = len(prefix)
    curr = ParseStackFrame(node, indent, None)

    if prev.has_vertical_child:
        # In this case we know prev must be the parent of our current line,
        # so we don't have to search the stack. (which is good because the
        # indentation check wouldn't work in this case).
        ...
    else:
        # If the previous node's nesting-level is greater than the current
        # node's nesting-level, then the previous node was the end of a path,
        # and is not our parent. We can safely pop nodes off the stack
        # until we find one with a comparable nesting-level, which is our
        # parent.
        while curr.indent <= prev.indent:
            prev = stack.pop()

    if node == "...":
        # The current previous node is no longer a valid parent,
        # keep it popped from the stack.
        stack.append(prev)
    else:
        # The previous and current nodes may still be parents, so add them
        # back onto the stack.
        stack.append(prev)
        stack.append(curr)

        # Add the node and the edge to its parent to the node / edge lists.
        nodes.append(curr.node)
        if prev.node is not noparent:
            edges.append((prev.node, curr.node))

if is_empty:
    # Sanity check
    assert len(nodes) == 0

# Reconstruct the graph
cls = nx.DiGraph if is_directed else nx.Graph
new = cls()
new.add_nodes_from(nodes)
new.add_edges_from(edges)
return new
(graph, path=None, with_labels=True, sources=None, max_depth=None, ascii_only=False, end='\n', vertical_chains=False)
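The indentation-stack idea in the parser above (pop frames until one with a smaller indent remains; that frame is the parent) can be sketched in isolation. `Frame` and `parse_indented` are hypothetical names, and this toy ignores glyphs, backedges, and vertical chains:

```python
from collections import namedtuple

Frame = namedtuple("Frame", ["node", "indent"])

def parse_indented(lines):
    # Pop frames whose indent is >= the current line's indent; the frame
    # left on top of the stack is the current node's parent.
    edges = []
    stack = [Frame(None, -1)]  # sentinel root
    for line in lines:
        node = line.strip()
        indent = len(line) - len(line.lstrip())
        while stack[-1].indent >= indent:
            stack.pop()
        if stack[-1].node is not None:
            edges.append((stack[-1].node, node))
        stack.append(Frame(node, indent))
    return edges
```

For example, `parse_indented(["a", "  b", "    c", "  d"])` recovers the edges of a tree rooted at `a` with children `b` and `d`, and `c` under `b`.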
31,251
networkx.readwrite.pajek
write_pajek
Write graph in Pajek format to path.

Parameters
----------
G : graph
   A Networkx graph
path : file or string
   File or filename to write.
   Filenames ending in .gz or .bz2 will be compressed.

Examples
--------
>>> G = nx.path_graph(4)
>>> nx.write_pajek(G, "test.net")

Warnings
--------
Optional node attributes and edge attributes must be non-empty strings.
Otherwise it will not be written into the file. You will need to
convert those attributes to strings if you want to keep them.

References
----------
See http://vlado.fmf.uni-lj.si/pub/networks/pajek/doc/draweps.htm
for format information.
null
(G, path, encoding='UTF-8')
31,252
networkx.readwrite.sparse6
write_sparse6
Write graph G to given path in sparse6 format.

Parameters
----------
G : Graph (undirected)
path : file or string
   File or filename to write
nodes : list or iterable
   Nodes are labeled 0...n-1 in the order provided. If None the ordering
   given by G.nodes() is used.
header : bool
   If True add '>>sparse6<<' string to head of data

Raises
------
NetworkXError
    If the graph is directed

Examples
--------
You can write a sparse6 file by giving the path to the file::

    >>> import tempfile
    >>> with tempfile.NamedTemporaryFile(delete=False) as f:
    ...     nx.write_sparse6(nx.path_graph(2), f.name)
    ...     print(f.read())
    b'>>sparse6<<:An\n'

You can also write a sparse6 file by giving an open file-like object::

    >>> with tempfile.NamedTemporaryFile() as f:
    ...     nx.write_sparse6(nx.path_graph(2), f)
    ...     _ = f.seek(0)
    ...     print(f.read())
    b'>>sparse6<<:An\n'

See Also
--------
read_sparse6, from_sparse6_bytes

Notes
-----
The format does not support edge or node labels.

References
----------
.. [1] Sparse6 specification
   <https://users.cecs.anu.edu.au/~bdm/data/formats.html>
null
(G, path, nodes=None, header=True)
31,253
networkx.readwrite.edgelist
write_weighted_edgelist
Write graph G as a list of edges with numeric weights.

Parameters
----------
G : graph
   A NetworkX graph
path : file or string
   File or filename to write. If a file is provided, it must be
   opened in 'wb' mode. Filenames ending in .gz or .bz2 will be compressed.
comments : string, optional
   The character used to indicate the start of a comment
delimiter : string, optional
   The string used to separate values. The default is whitespace.
encoding : string, optional
   Specify which encoding to use when writing file.

Examples
--------
>>> G = nx.Graph()
>>> G.add_edge(1, 2, weight=7)
>>> nx.write_weighted_edgelist(G, "test.weighted.edgelist")

See Also
--------
read_edgelist
write_edgelist
read_weighted_edgelist
def write_weighted_edgelist(G, path, comments="#", delimiter=" ", encoding="utf-8"):
    """Write graph G as a list of edges with numeric weights.

    Parameters
    ----------
    G : graph
       A NetworkX graph
    path : file or string
       File or filename to write. If a file is provided, it must be
       opened in 'wb' mode. Filenames ending in .gz or .bz2 will be compressed.
    comments : string, optional
       The character used to indicate the start of a comment
    delimiter : string, optional
       The string used to separate values. The default is whitespace.
    encoding : string, optional
       Specify which encoding to use when writing file.

    Examples
    --------
    >>> G = nx.Graph()
    >>> G.add_edge(1, 2, weight=7)
    >>> nx.write_weighted_edgelist(G, "test.weighted.edgelist")

    See Also
    --------
    read_edgelist
    write_edgelist
    read_weighted_edgelist
    """
    write_edgelist(
        G,
        path,
        comments=comments,
        delimiter=delimiter,
        data=("weight",),
        encoding=encoding,
    )
(G, path, comments='#', delimiter=' ', encoding='utf-8')
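Since `write_weighted_edgelist` delegates to `write_edgelist` with `data=("weight",)`, each output line is just source, target, and weight joined by the delimiter. A minimal sketch of that line format, with a hypothetical helper name (this is only an illustration, not the real writer, and omits comments and encoding handling):

```python
def format_weighted_edgelist(edges, delimiter=" "):
    # One line per edge: source, target, weight, joined by the delimiter.
    return [f"{u}{delimiter}{v}{delimiter}{w}" for u, v, w in edges]
```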
31,254
quandl.api_config
ApiConfig
null
class ApiConfig:
    api_key = None
    api_protocol = 'https://'
    api_base = '{}data.nasdaq.com/api/v3'.format(api_protocol)
    api_version = None  # This is not used but keeping for backwards compatibility
    page_limit = 100
    use_retries = True
    number_of_retries = 5
    retry_backoff_factor = 0.5
    max_wait_between_retries = 8
    retry_status_codes = [429] + list(range(500, 512))
    verify_ssl = True
()
31,255
quandl.errors.quandl_error
AuthenticationError
null
class AuthenticationError(QuandlError): pass
(quandl_message=None, http_status=None, http_body=None, http_headers=None, quandl_error_code=None, response_data=None)
31,256
quandl.errors.quandl_error
__init__
null
def __init__(self, quandl_message=None, http_status=None, http_body=None,
             http_headers=None, quandl_error_code=None, response_data=None):
    self.http_status = http_status
    self.http_body = http_body
    self.http_headers = http_headers if http_headers is not None else {}
    self.quandl_error_code = quandl_error_code
    self.quandl_message = quandl_message if quandl_message is not None \
        else self.GENERIC_ERROR_MESSAGE
    self.response_data = response_data
(self, quandl_message=None, http_status=None, http_body=None, http_headers=None, quandl_error_code=None, response_data=None)
31,257
quandl.errors.quandl_error
__str__
null
def __str__(self):
    if self.http_status is None:
        status_string = ''
    else:
        status_string = "(Status %(http_status)s) " % {"http_status": self.http_status}
    if self.quandl_error_code is None:
        quandl_error_string = ''
    else:
        quandl_error_string = "(Quandl Error %(quandl_error_code)s) " % {
            "quandl_error_code": self.quandl_error_code}
    return "%(ss)s%(qes)s%(qm)s" % {
        "ss": status_string,
        "qes": quandl_error_string,
        "qm": self.quandl_message
    }
(self)
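The composition rule above (optional status segment, optional error-code segment, then the message) can be exercised with a minimal stand-in class. `FakeQuandlError` is a hypothetical name used only to demonstrate the `__str__` behavior:

```python
class FakeQuandlError(Exception):
    # Stand-in reproducing the __str__ composition rule shown above.
    def __init__(self, message, http_status=None, error_code=None):
        self.message = message
        self.http_status = http_status
        self.error_code = error_code

    def __str__(self):
        status = "" if self.http_status is None else f"(Status {self.http_status}) "
        code = "" if self.error_code is None else f"(Quandl Error {self.error_code}) "
        return f"{status}{code}{self.message}"
```

With all fields set, the message reads like `(Status 404) (Quandl Error QECx02) Not found`; with neither, it is just the message.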
31,258
quandl.errors.quandl_error
ColumnNotFound
null
class ColumnNotFound(QuandlError): pass
(quandl_message=None, http_status=None, http_body=None, http_headers=None, quandl_error_code=None, response_data=None)
31,261
quandl.model.data
Data
null
class Data(DataListOperation, DataMixin, ModelBase):

    def __init__(self, data, **options):
        self.meta = options['meta']
        self._raw_data = Util.convert_to_dates(data)
        # Optimization for when a list of data points are created from a
        # dataset (via the model_list class)
        if 'converted_column_names' in options.keys():
            self._converted_column_names = options['converted_column_names']

    # Need to override data_fields in case the Data class was populated in a
    # way that did not pass in a converted_column_names option at creation.
    def data_fields(self):
        if not self._converted_column_names and self.meta:
            self._converted_column_names = Util.convert_column_names(self.meta)
        return self._converted_column_names

    def __getattr__(self, k):
        if k[0] == '_' and k != '_raw_data':
            raise AttributeError(k)
        elif k in self.meta:
            return self.meta[k]
        # Convenience for accessing individual data point columns by name.
        elif k in self.data_fields():
            return self._raw_data[self.data_fields().index(k)]
        return super(Data, self).__getattr__(k)
(data, **options)
31,262
quandl.model.model_base
__get_raw_data__
null
def __get_raw_data__(self): return self._raw_data
(self)
31,263
quandl.model.data
__getattr__
null
def __getattr__(self, k):
    if k[0] == '_' and k != '_raw_data':
        raise AttributeError(k)
    elif k in self.meta:
        return self.meta[k]
    # Convenience for accessing individual data point columns by name.
    elif k in self.data_fields():
        return self._raw_data[self.data_fields().index(k)]
    return super(Data, self).__getattr__(k)
(self, k)
31,264
quandl.model.model_base
__getitem__
null
def __getitem__(self, k): return self.__get_raw_data__()[k]
(self, k)
31,265
quandl.model.data
__init__
null
def __init__(self, data, **options):
    self.meta = options['meta']
    self._raw_data = Util.convert_to_dates(data)
    # Optimization for when a list of data points are created from a
    # dataset (via the model_list class)
    if 'converted_column_names' in options.keys():
        self._converted_column_names = options['converted_column_names']
(self, data, **options)
31,266
quandl.model.data_mixin
_validate_col_index
null
def _validate_col_index(self, df, keep_column_indexes):
    num_columns = len(df.columns)
    for col_index in keep_column_indexes:
        if col_index > num_columns or col_index < 1:
            raise ColumnNotFound('Requested column index %s does not exist' % col_index)
(self, df, keep_column_indexes)
31,267
quandl.model.data
data_fields
null
def data_fields(self):
    if not self._converted_column_names and self.meta:
        self._converted_column_names = Util.convert_column_names(self.meta)
    return self._converted_column_names
(self)
31,268
quandl.model.data_mixin
to_csv
null
def to_csv(self): return self.to_pandas().to_csv()
(self)
31,269
quandl.model.model_base
to_list
null
def to_list(self):
    if isinstance(self.__get_raw_data__(), dict):
        return list(self.__get_raw_data__().values())
    return self.__get_raw_data__()
(self)
31,270
quandl.model.data_mixin
to_numpy
null
def to_numpy(self): return self.to_pandas().to_records()
(self)
31,271
quandl.model.data_mixin
to_pandas
null
def to_pandas(self, keep_column_indexes=[]):
    data = self.to_list()
    # ensure pandas gets a list of lists
    if data and isinstance(data, list) and not isinstance(data[0], list):
        data = [data]
    if 'columns' in self.meta.keys():
        df = pd.DataFrame(data=data, columns=self.columns)
        for index, column_type in enumerate(self.column_types):
            if column_type == 'Date':
                df[self.columns[index]] = df[self.columns[index]].apply(pd.to_datetime)
    else:
        df = pd.DataFrame(data=data, columns=self.column_names)
        # ensure our first column of time series data is of pd.datetime
        df[self.column_names[0]] = df[self.column_names[0]].apply(pd.to_datetime)
        df.set_index(self.column_names[0], inplace=True)
        # unfortunately to_records() cannot handle unicode in 2.7
        df.index.name = str(df.index.name)
    # keep_column_indexes are 1-based; 1 is the first column after the Date index
    # (indexes below 1 are rejected by _validate_col_index)
    if len(keep_column_indexes) > 0:
        self._validate_col_index(df, keep_column_indexes)
        # need to decrement all our indexes by 1 because
        # Date is considered a column by our API, but in pandas,
        # it is the index, so column 0 is the first column after Date index
        keep_column_indexes = list([x - 1 for x in keep_column_indexes])
        df = df.iloc[:, keep_column_indexes]
    return df
(self, keep_column_indexes=[])
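The index convention in `to_pandas` (callers pass 1-based column indexes counted from the first column after the date, which are decremented to 0-based before selection) can be shown without pandas. `filter_columns` is a hypothetical helper operating on raw rows where the date is element 0:

```python
def filter_columns(rows, keep_column_indexes):
    # keep_column_indexes count from 1 for the first value column after the
    # date; decrement to 0-based before selecting, mirroring to_pandas().
    zero_based = [i - 1 for i in keep_column_indexes]
    return [[row[0]] + [row[1:][i] for i in zero_based] for row in rows]
```

So asking for columns `[2, 3]` keeps the date plus the second and third value columns.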
31,272
quandl.model.database
Database
null
class Database(GetOperation, ListOperation, ModelBase):
    BULK_CHUNK_SIZE = 512

    @classmethod
    def get_code_from_meta(cls, metadata):
        return metadata['database_code']

    def bulk_download_url(self, **options):
        url = self._bulk_download_path()
        url = ApiConfig.api_base + '/' + url
        if 'params' not in options:
            options['params'] = {}
        if ApiConfig.api_key:
            options['params']['api_key'] = ApiConfig.api_key
        if ApiConfig.api_version:
            options['params']['api_version'] = ApiConfig.api_version
        if list(options.keys()):
            url += '?' + urlencode(options['params'])
        return url

    def bulk_download_to_file(self, file_or_folder_path, **options):
        if not isinstance(file_or_folder_path, str):
            raise QuandlError(Message.ERROR_FOLDER_ISSUE)
        path_url = self._bulk_download_path()
        options['stream'] = True
        r = Connection.request('get', path_url, **options)
        file_path = file_or_folder_path
        if os.path.isdir(file_or_folder_path):
            file_path = file_or_folder_path + '/' + os.path.basename(urlparse(r.url).path)
        with open(file_path, 'wb') as fd:
            for chunk in r.iter_content(self.BULK_CHUNK_SIZE):
                fd.write(chunk)
        return file_path

    def _bulk_download_path(self):
        url = self.default_path() + '/data'
        url = Util.constructed_path(url, {'id': self.code})
        return url

    def datasets(self, **options):
        params = {'database_code': self.code, 'query': '', 'page': 1}
        options = Util.merge_options('params', params, **options)
        return quandl.model.dataset.Dataset.all(**options)
(code, raw_data=None, **options)
31,273
quandl.operations.get
__get_raw_data__
null
def __get_raw_data__(self):
    if self._raw_data:
        return self._raw_data
    cls = self.__class__
    params = {'id': str(self.code)}
    options = Util.merge_options('params', params, **self.options)
    path = Util.constructed_path(cls.get_path(), options['params'])
    r = Connection.request('get', path, **options)
    response_data = r.json()
    Util.convert_to_dates(response_data)
    self._raw_data = response_data[singularize(cls.lookup_key())]
    return self._raw_data
(self)
31,274
quandl.model.model_base
__getattr__
null
def __getattr__(self, k):
    if k[0] == '_':
        raise AttributeError(k)
    elif k in self.__get_raw_data__():
        return self.__get_raw_data__()[k]
    else:
        raise AttributeError(k)
(self, k)
31,276
quandl.model.model_base
__init__
null
def __init__(self, code, raw_data=None, **options):
    self.code = code
    self._raw_data = raw_data
    self.options = options
(self, code, raw_data=None, **options)
31,277
quandl.model.database
_bulk_download_path
null
def _bulk_download_path(self):
    url = self.default_path() + '/data'
    url = Util.constructed_path(url, {'id': self.code})
    return url
(self)
31,278
quandl.model.database
bulk_download_to_file
null
def bulk_download_to_file(self, file_or_folder_path, **options):
    if not isinstance(file_or_folder_path, str):
        raise QuandlError(Message.ERROR_FOLDER_ISSUE)
    path_url = self._bulk_download_path()
    options['stream'] = True
    r = Connection.request('get', path_url, **options)
    file_path = file_or_folder_path
    if os.path.isdir(file_or_folder_path):
        file_path = file_or_folder_path + '/' + os.path.basename(urlparse(r.url).path)
    with open(file_path, 'wb') as fd:
        for chunk in r.iter_content(self.BULK_CHUNK_SIZE):
            fd.write(chunk)
    return file_path
(self, file_or_folder_path, **options)
31,279
quandl.model.database
bulk_download_url
null
def bulk_download_url(self, **options):
    url = self._bulk_download_path()
    url = ApiConfig.api_base + '/' + url
    if 'params' not in options:
        options['params'] = {}
    if ApiConfig.api_key:
        options['params']['api_key'] = ApiConfig.api_key
    if ApiConfig.api_version:
        options['params']['api_version'] = ApiConfig.api_version
    if list(options.keys()):
        url += '?' + urlencode(options['params'])
    return url
(self, **options)
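The URL-building pattern above (base plus path, then an optional `urlencode`d query string for credentials) can be sketched with the standard library. `build_download_url` and its `path` argument are hypothetical; the real method derives the path from `_bulk_download_path()`:

```python
from urllib.parse import urlencode

def build_download_url(api_base, path, api_key=None, api_version=None):
    # Base + path first, then append credentials as a query string
    # only when at least one is set.
    url = api_base + '/' + path
    params = {}
    if api_key:
        params['api_key'] = api_key
    if api_version:
        params['api_version'] = api_version
    if params:
        url += '?' + urlencode(params)
    return url
```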
31,280
quandl.model.model_base
data_fields
null
def data_fields(self): return list(self.__get_raw_data__().keys())
(self)
31,281
quandl.model.database
datasets
null
def datasets(self, **options):
    params = {'database_code': self.code, 'query': '', 'page': 1}
    options = Util.merge_options('params', params, **options)
    return quandl.model.dataset.Dataset.all(**options)
(self, **options)
31,283
quandl.model.dataset
Dataset
null
class Dataset(GetOperation, ListOperation, ModelBase):

    @classmethod
    def get_path(cls):
        return "%s/metadata" % cls.default_path()

    @classmethod
    def get_code_from_meta(cls, metadata):
        return "%s/%s" % (metadata['database_code'], metadata['dataset_code'])

    def __init__(self, code, raw_data=None, **options):
        ModelBase.__init__(self, code, raw_data)
        parsed_code = self.code.split("/")
        if len(parsed_code) < 2:
            raise SyntaxError(Message.ERROR_INVALID_DATABASE_CODE_FORMAT)
        self.database_code = parsed_code[0]
        self.dataset_code = parsed_code[1]
        self.options = options

    def data(self, **options):
        # handle_not_found_error if set to True will add an empty DataFrame
        # for a non-existent dataset instead of raising an error
        handle_not_found_error = options.pop('handle_not_found_error', False)
        handle_column_not_found = options.pop('handle_column_not_found', False)
        # default order to ascending, and respect whatever user passes in
        params = {
            'database_code': self.database_code,
            'dataset_code': self.dataset_code,
            'order': 'asc'
        }
        updated_options = Util.merge_options('params', params, **options)
        try:
            return Data.all(**updated_options)
        except NotFoundError:
            if handle_not_found_error:
                return DataList(Data, [], {'column_names': [six.u('None'), six.u('Not Found')]})
            raise
        except ColumnNotFound:
            if handle_column_not_found:
                return DataList(Data, [], {'column_names': [six.u('None'), six.u('Not Found')]})
            raise

    def database(self):
        return quandl.model.database.Database(self.database_code)
(code, raw_data=None, **options)
31,287
quandl.model.dataset
__init__
null
def __init__(self, code, raw_data=None, **options):
    ModelBase.__init__(self, code, raw_data)
    parsed_code = self.code.split("/")
    if len(parsed_code) < 2:
        raise SyntaxError(Message.ERROR_INVALID_DATABASE_CODE_FORMAT)
    self.database_code = parsed_code[0]
    self.dataset_code = parsed_code[1]
    self.options = options
(self, code, raw_data=None, **options)
31,288
quandl.model.dataset
data
null
def data(self, **options):
    # handle_not_found_error if set to True will add an empty DataFrame
    # for a non-existent dataset instead of raising an error
    handle_not_found_error = options.pop('handle_not_found_error', False)
    handle_column_not_found = options.pop('handle_column_not_found', False)
    # default order to ascending, and respect whatever user passes in
    params = {
        'database_code': self.database_code,
        'dataset_code': self.dataset_code,
        'order': 'asc'
    }
    updated_options = Util.merge_options('params', params, **options)
    try:
        return Data.all(**updated_options)
    except NotFoundError:
        if handle_not_found_error:
            return DataList(Data, [], {'column_names': [six.u('None'), six.u('Not Found')]})
        raise
    except ColumnNotFound:
        if handle_column_not_found:
            return DataList(Data, [], {'column_names': [six.u('None'), six.u('Not Found')]})
        raise
(self, **options)
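`Dataset.data` seeds defaults (like `'order': 'asc'`) and then lets caller-supplied params win via `Util.merge_options`. A minimal sketch of that merge under the assumption that user params override defaults (`merge_params` is a hypothetical stand-in; the real `Util.merge_options` may handle more cases):

```python
def merge_params(defaults, **options):
    # Start from the defaults, then overlay whatever the caller passed in
    # options['params']; other option keys pass through untouched.
    merged = dict(options)
    params = dict(defaults)
    params.update(options.get('params', {}))
    merged['params'] = params
    return merged
```

So a caller passing `params={'order': 'desc'}` overrides the ascending default while keeping the dataset codes.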
31,290
quandl.model.dataset
database
null
def database(self): return quandl.model.database.Database(self.database_code)
(self)
31,292
quandl.model.datatable
Datatable
null
class Datatable(GetOperation, ListOperation, ModelBase):
    BULK_CHUNK_SIZE = 16 * 1024
    WAIT_GENERATION_INTERVAL = 30

    @classmethod
    def get_path(cls):
        return "%s/metadata" % cls.default_path()

    def data(self, **options):
        if not options:
            options = {'params': {}}
        return Data.page(self, **options)

    def download_file(self, file_or_folder_path, **options):
        if not isinstance(file_or_folder_path, str):
            raise QuandlError(Message.ERROR_FOLDER_ISSUE)
        file_is_ready = False
        while not file_is_ready:
            file_is_ready = self._request_file_info(file_or_folder_path, params=options)
            if not file_is_ready:
                log.debug(Message.LONG_GENERATION_TIME)
                sleep(self.WAIT_GENERATION_INTERVAL)

    def _request_file_info(self, file_or_folder_path, **options):
        url = self._download_request_path()
        code_name = self.code
        options['params']['qopts.export'] = 'true'
        request_type = RequestType.get_request_type(url, **options)
        updated_options = Util.convert_options(request_type=request_type, **options)
        r = Connection.request(request_type, url, **updated_options)
        response_data = r.json()
        file_info = response_data['datatable_bulk_download']['file']
        status = file_info['status']
        if status == 'fresh':
            file_link = file_info['link']
            self._download_file_with_link(file_or_folder_path, file_link, code_name)
            return True
        else:
            return False

    def _download_file_with_link(self, file_or_folder_path, file_link, code_name):
        file_path = file_or_folder_path
        if os.path.isdir(file_or_folder_path):
            file_path = os.path.join(file_or_folder_path,
                                     '{}.{}'.format(code_name.replace('/', '_'), 'zip'))
        res = urlopen(file_link)
        with open(file_path, 'wb') as fd:
            while True:
                chunk = res.read(self.BULK_CHUNK_SIZE)
                if not chunk:
                    break
                fd.write(chunk)
        log.debug("File path: %s", file_path)

    def _download_request_path(self):
        url = self.default_path()
        url = Util.constructed_path(url, {'id': self.code})
        url += '.json'
        return url
(code, raw_data=None, **options)
31,297
quandl.model.datatable
_download_file_with_link
null
def _download_file_with_link(self, file_or_folder_path, file_link, code_name):
    file_path = file_or_folder_path
    if os.path.isdir(file_or_folder_path):
        file_path = os.path.join(file_or_folder_path,
                                 '{}.{}'.format(code_name.replace('/', '_'), 'zip'))
    res = urlopen(file_link)
    with open(file_path, 'wb') as fd:
        while True:
            chunk = res.read(self.BULK_CHUNK_SIZE)
            if not chunk:
                break
            fd.write(chunk)
    log.debug("File path: %s", file_path)
(self, file_or_folder_path, file_link, code_name)
31,298
quandl.model.datatable
_download_request_path
null
def _download_request_path(self):
    url = self.default_path()
    url = Util.constructed_path(url, {'id': self.code})
    url += '.json'
    return url
(self)
31,299
quandl.model.datatable
_request_file_info
null
def _request_file_info(self, file_or_folder_path, **options):
    url = self._download_request_path()
    code_name = self.code
    options['params']['qopts.export'] = 'true'
    request_type = RequestType.get_request_type(url, **options)
    updated_options = Util.convert_options(request_type=request_type, **options)
    r = Connection.request(request_type, url, **updated_options)
    response_data = r.json()
    file_info = response_data['datatable_bulk_download']['file']
    status = file_info['status']
    if status == 'fresh':
        file_link = file_info['link']
        self._download_file_with_link(file_or_folder_path, file_link, code_name)
        return True
    else:
        return False
(self, file_or_folder_path, **options)
31,300
quandl.model.datatable
data
null
def data(self, **options):
    if not options:
        options = {'params': {}}
    return Data.page(self, **options)
(self, **options)
31,302
quandl.model.datatable
download_file
null
def download_file(self, file_or_folder_path, **options):
    if not isinstance(file_or_folder_path, str):
        raise QuandlError(Message.ERROR_FOLDER_ISSUE)
    file_is_ready = False
    while not file_is_ready:
        file_is_ready = self._request_file_info(file_or_folder_path, params=options)
        if not file_is_ready:
            log.debug(Message.LONG_GENERATION_TIME)
            sleep(self.WAIT_GENERATION_INTERVAL)
(self, file_or_folder_path, **options)
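`download_file` is a poll-until-ready loop: check whether the export is fresh, sleep `WAIT_GENERATION_INTERVAL` seconds if not, and try again. A generic sketch of that loop (`wait_until_ready` is a hypothetical name; unlike the real method, it gives up after `max_tries` so it can't spin forever):

```python
import time

def wait_until_ready(check, interval=0.0, max_tries=10):
    # Keep calling check() until it reports ready, sleeping between
    # attempts; return False if it never becomes ready.
    for _ in range(max_tries):
        if check():
            return True
        time.sleep(interval)
    return False
```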
31,304
quandl.errors.quandl_error
ForbiddenError
null
class ForbiddenError(QuandlError): pass
(quandl_message=None, http_status=None, http_body=None, http_headers=None, quandl_error_code=None, response_data=None)
31,307
quandl.errors.quandl_error
InternalServerError
null
class InternalServerError(QuandlError): pass
(quandl_message=None, http_status=None, http_body=None, http_headers=None, quandl_error_code=None, response_data=None)
31,310
quandl.errors.quandl_error
InvalidDataError
null
class InvalidDataError(QuandlError): pass
(quandl_message=None, http_status=None, http_body=None, http_headers=None, quandl_error_code=None, response_data=None)
31,313
quandl.errors.quandl_error
InvalidRequestError
null
class InvalidRequestError(QuandlError): pass
(quandl_message=None, http_status=None, http_body=None, http_headers=None, quandl_error_code=None, response_data=None)
31,316
quandl.errors.quandl_error
LimitExceededError
null
class LimitExceededError(QuandlError): pass
(quandl_message=None, http_status=None, http_body=None, http_headers=None, quandl_error_code=None, response_data=None)
31,319
quandl.model.merged_dataset
MergedDataset
null
class MergedDataset(ModelBase):

    def __init__(self, dataset_codes, **options):
        self.dataset_codes = dataset_codes
        self._datasets = None
        self._raw_data = None
        self.options = options

    @property
    def column_names(self):
        return self._merged_column_names_from(self.__dataset_objects__())

    @property
    def oldest_available_date(self):
        return min(self._get_dataset_attribute('oldest_available_date'))

    @property
    def newest_available_date(self):
        return max(self._get_dataset_attribute('newest_available_date'))

    def data(self, **options):
        # if there is only one column_index, use the api to fetch
        # else fetch all the data and filter column indexes requested locally
        dataset_data_list = [self._get_dataset_data(dataset, **options)
                             for dataset in self.__dataset_objects__()]
        # build data frames and filter locally when necessary
        data_frames = [dataset_data.to_pandas(
            keep_column_indexes=self._keep_column_indexes(index))
            for index, dataset_data in enumerate(dataset_data_list)]
        merged_data_frame = pd.DataFrame()
        for index, data_frame in enumerate(data_frames):
            metadata = self.__dataset_objects__()[index]
            # use code to prevent metadata api call
            data_frame.rename(
                columns=lambda x: self._rename_columns(metadata.code, x), inplace=True)
            merged_data_frame = pd.merge(
                merged_data_frame, data_frame,
                right_index=True, left_index=True, how='outer')
        merged_data_metadata = self._build_data_meta(dataset_data_list, merged_data_frame)
        # check if descending was explicitly set;
        # if set we need to sort in descending order,
        # since the pandas merged dataframe will
        # by default sort everything in ascending order
        return MergedDataList(
            Data, merged_data_frame, merged_data_metadata,
            ascending=self._order_is_ascending(**options))

    # for MergedDataset data calls
    def _get_dataset_data(self, dataset, **options):
        updated_options = options
        # if we have only one column index, let the api
        # handle the column filtering since the api supports this
        if len(dataset.requested_column_indexes) == 1:
            params = {'column_index': dataset.requested_column_indexes[0]}
            # only change the options per request
            updated_options = options.copy()
            updated_options = Util.merge_options('params', params, **updated_options)
        return dataset.data(**updated_options)

    def _build_data_meta(self, dataset_data_list, df):
        merged_data_metadata = {}
        # sanity check that the list has items
        if dataset_data_list:
            # meta should be the same for every individual Dataset
            # request, just take the first one
            merged_data_metadata = dataset_data_list[0].meta.copy()
            # set the start_date and end_date to
            # the actual values we got back from data
            num_rows = len(df.index)
            if num_rows > 0:
                merged_data_metadata['start_date'] = df.index[0].date()
                merged_data_metadata['end_date'] = df.index[num_rows - 1].date()
            # remove column_index if it exists because this would be per request data
            merged_data_metadata.pop('column_index', None)
            # don't use self.column_names to prevent metadata api call;
            # instead, get the column_names from the dataset_data_objects
            merged_data_metadata['column_names'] = \
                self._merged_column_names_from(dataset_data_list)
        return merged_data_metadata

    def _keep_column_indexes(self, index):
        # no need to filter if we only have one column_index
        # since we leveraged the server to do the filtering
        col_index = self.__dataset_objects__()[index].requested_column_indexes
        if len(self.__dataset_objects__()[index].requested_column_indexes) == 1:
            # empty array for no filtering
            col_index = []
        return col_index

    def _rename_columns(self, code, original_column_name):
        return code + ' - ' + original_column_name

    def _get_dataset_attribute(self, k):
        elements = []
        for dataset in self.__dataset_objects__():
            elements.append(dataset.__get_raw_data__()[k])
        return list(unique_everseen(elements))

    def _order_is_ascending(self, **options):
        return not (self._in_query_param('order', **options)
                    and options['params']['order'] == 'desc')

    def _in_query_param(self, name, **options):
        return ('params' in options and name in options['params'])

    # can take in a list of dataset_objects
    # or a list of dataset_data_objects
    def _merged_column_names_from(self, dataset_list):
        elements = []
        for idx_dataset, dataset in enumerate(dataset_list):
            # always get the code from the dataset object
            code = self.__dataset_objects__()[idx_dataset].code
            for index, column_name in enumerate(dataset.column_names):
                # only include column names that are not filtered out
                # by specification of the column_indexes list
                if self._include_column(dataset, index):
                    # first index is the date, don't modify the date name
                    if index > 0:
                        elements.append(self._rename_columns(code, column_name))
                    else:
                        elements.append(column_name)
        return list(unique_everseen(elements))

    def _include_column(self, dataset_metadata, column_index):
        # non-pandas/dataframe:
        # keep column 0 around because we want to keep Date
        if (hasattr(dataset_metadata, 'requested_column_indexes')
                and len(dataset_metadata.requested_column_indexes) > 0
                and column_index != 0):
            return column_index in dataset_metadata.requested_column_indexes
        return True

    def _initialize_raw_data(self):
        datasets = self.__dataset_objects__()
        self._raw_data = {}
        if not datasets:
            return self._raw_data
        self._raw_data = datasets[0].__get_raw_data__().copy()
        for k, v in list(self._raw_data.items()):
            self._raw_data[k] = getattr(self, k)
        return self._raw_data

    def _build_dataset_object(self, dataset_code, **options):
        options_copy = options.copy()
        # dataset_codes are tuples
        # e.g., ('WIKI/AAPL', {'column_index': [1, 2]})
        # or strings
        # e.g., 'NSE/OIL'
        code = self._get_request_dataset_code(dataset_code)
        dataset = Dataset(code, None, **options_copy)
        # save the column_index params that were requested dynamically;
        # used later on to determine:
        # if column_index is a non-empty array, fetch all data and filter columns locally;
        # if column_index is an empty array, fetch all data and don't filter columns
        dataset.requested_column_indexes = self._get_req_dataset_col_indexes(dataset_code, code)
        return dataset

    def _get_req_dataset_col_indexes(self, dataset_code, code_str):
        # ensure that if a column_index entry is specified, its value is a list
        params = self._get_request_params(dataset_code)
        if 'column_index' in params:
            column_index = params['column_index']
            if not isinstance(column_index, list):
                raise ValueError(Message.ERROR_COLUMN_INDEX_LIST % code_str)
            return column_index
        # default, no column indexes to filter
        return []

    def _get_request_dataset_code(self, dataset_code):
        if isinstance(dataset_code, tuple):
            return dataset_code[0]
        elif isinstance(dataset_code, string_types):
            return dataset_code
        else:
            raise ValueError(Message.ERROR_ARGUMENTS_LIST_FORMAT)

    def _get_request_params(self, dataset_code):
        if isinstance(dataset_code, tuple):
            return dataset_code[1]
        return {}

    def __getattr__(self, k):
        if k[0] == '_' and k != '_raw_data':
            raise AttributeError(k)
        elif hasattr(MergedDataset, k):
            return super(MergedDataset, self).__getattr__(k)
        elif k in self.__dataset_objects__()[0].__get_raw_data__():
            return self._get_dataset_attribute(k)
        return super(MergedDataset, self).__getattr__(k)

    def __get_raw_data__(self):
        if self._raw_data is None:
            self._initialize_raw_data()
        return ModelBase.__get_raw_data__(self)

    def __dataset_objects__(self):
        if self._datasets:
            return self._datasets
        if not isinstance(self.dataset_codes, list):
            raise ValueError('dataset codes must be specified in a list')
        # column_index is handled by individual dataset gets
        if 'params' in self.options:
            self.options['params'].pop("column_index", None)
        self._datasets = list([self._build_dataset_object(dataset_code, **self.options)
                               for dataset_code in self.dataset_codes])
        return self._datasets
(dataset_codes, **options)
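The column-naming rule in `_merged_column_names_from` (keep the first Date column as-is, prefix every other column with `"CODE - "` via `_rename_columns`, keep first-seen order, and drop duplicates) can be sketched with plain lists. `merged_column_names` is a hypothetical helper taking `(code, column_names)` pairs:

```python
def merged_column_names(datasets):
    # datasets: list of (code, column_names) pairs. Date (index 0) keeps
    # its name; every other column is prefixed with "CODE - "; the result
    # preserves first-seen order with duplicates removed.
    seen = set()
    out = []
    for code, columns in datasets:
        for index, name in enumerate(columns):
            renamed = name if index == 0 else code + ' - ' + name
            if renamed not in seen:
                seen.add(renamed)
                out.append(renamed)
    return out
```

Merging two datasets that both expose `Date` and `Open` therefore yields one shared `Date` column and one prefixed `Open` column per dataset.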