---
license: cc-by-4.0
task_categories:
  - tabular-classification
  - tabular-regression
language:
  - en
tags:
  - chemistry
  - medical
  - finance
size_categories:
  - 10B<n<100B
---

# WikiDBGraph Dataset

This document provides an overview of the datasets associated with the research paper on WikiDBGraph, a large-scale graph of interconnected relational databases.

**Description:** WikiDBGraph is a novel, large-scale graph in which each node represents a relational database and edges signify identified correlations or similarities between databases. It is constructed from 100,000 real-world-like databases derived from Wikidata. The graph is enriched with a comprehensive set of node (database) and edge (inter-database relationship) properties, categorized into structural, semantic, and statistical features.

**Source:** WikiDBGraph is derived from the WikiDBs corpus (see below). The inter-database relationships (edges) are primarily identified using a machine learning model trained to predict database similarity from schema embeddings, significantly expanding upon the explicitly known links in the source data.

**Key Characteristics:**

- **Nodes:** 100,000 relational databases.
- **Edges:** Millions of identified inter-database relationships (the exact number depends on the similarity threshold $\tau$ used for graph construction).
- **Node Properties:** Database-level structural details (e.g., number of tables, columns, foreign keys), semantic information (e.g., pre-computed embeddings, topic categories from Wikidata, community IDs), and statistical measures (e.g., database size, total rows, column cardinalities, column entropies).
- **Edge Properties:** Structural similarity (e.g., Jaccard index on table/column names, Graph Edit Distance-based metrics), semantic relatedness (e.g., cosine similarity of embeddings, prediction confidence), and statistical relationships (e.g., KL divergence of shared column distributions).

**Usage in this Research:** WikiDBGraph is the primary contribution of this paper. It is used to:

- Demonstrate a methodology for identifying and representing inter-database relationships at scale.
- Provide a rich resource for studying the landscape of interconnected databases.
- Serve as a foundation for experiments in collaborative learning, showcasing its utility in feature-overlap and instance-overlap scenarios.

**Availability:**

- **Dataset (WikiDBGraph):** The WikiDBGraph dataset, including the graph structure (edge lists for various thresholds $\tau$) and the node/edge property files, will be made publicly available; a download link will be added here upon release.
- **Code:** The code for constructing WikiDBGraph from WikiDBs, generating embeddings, and running the experiments will be made publicly available; a repository link will be added here upon release.

**License:**

- **WikiDBGraph Dataset:** Creative Commons Attribution 4.0 International (CC BY 4.0).
- **Associated Code:** Apache License 2.0.

## How to Use WikiDBGraph

The dataset is provided as a collection of files, categorized as follows.

### A. Graph Structure Files

These files define the connections (edges) between databases (nodes) for different similarity thresholds ($\tau$).

- `filtered_edges_threshold_0.93.csv`, `filtered_edges_threshold_0.94.csv`, `filtered_edges_threshold_0.96.csv`:
  - **Meaning:** CSV edge lists. Each row contains a pair of database IDs that are connected at the given similarity threshold.
  - **How to Load:** Use a standard CSV parser (e.g., `pandas.read_csv()` in Python). The edge lists can then be turned into graph objects with libraries such as NetworkX (`nx.from_pandas_edgelist()`) or igraph; see the sketch at the end of this section.
- `filtered_edges_0.94_with_confidence.csv`:
  - **Meaning:** The edge list for $\tau = 0.94$ in CSV format, including the confidence (similarity) score of each edge.
  - **How to Load:** Same as the other CSV edge lists, using pandas. The confidence score can be used as an edge weight.
- `graph_raw_0.93.dgl`, `graph_raw_0.94.dgl`, `graph_raw_0.96.dgl`:
  - **Meaning:** Graph objects serialized in the Deep Graph Library (DGL) format for the different thresholds. They likely contain only the basic graph structure (nodes and edges).
  - **How to Load `.dgl` files:** DGL provides functions to save and load graph objects:

    ```python
    import dgl

    # dgl.load_graphs returns a list of graphs and a dict of labels
    graphs, _ = dgl.load_graphs('graph_raw_0.94.dgl')
    g = graphs[0]
    ```
- `graph_with_properties_0.94.dgl`:
  - **Meaning:** A DGL graph object for $\tau = 0.94$ with node and/or edge properties embedded directly in the graph structure, which is convenient for direct use in DGL-based graph neural network models.
  - **How to Load:** Same as the other `.dgl` files, via `dgl.load_graphs()`. Node features are accessed through `g.ndata` and edge features through `g.edata`.
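Putting this section together, here is a minimal sketch of loading these files. The column names `src`, `dst`, and `confidence` are assumptions; check the actual CSV headers before running.

```python
import pandas as pd
import networkx as nx
import dgl

# Build a weighted NetworkX graph from the confidence-annotated edge list.
# "src", "dst", and "confidence" are assumed column names -- verify them
# against the real CSV header first.
edges = pd.read_csv("filtered_edges_0.94_with_confidence.csv")
G = nx.from_pandas_edgelist(edges, source="src", target="dst",
                            edge_attr="confidence")
print(G.number_of_nodes(), "nodes,", G.number_of_edges(), "edges")

# Inspect the features embedded in the property-enriched DGL graph.
graphs, _ = dgl.load_graphs("graph_with_properties_0.94.dgl")
g = graphs[0]
print("node feature keys:", list(g.ndata.keys()))
print("edge feature keys:", list(g.edata.keys()))
```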

### B. Node Property Files

These files (CSVs, plus one PyTorch tensor file) provide various features and metadata for each database node.

- `database_embeddings.pt`:
  - **Meaning:** A PyTorch (`.pt`) file containing the pre-computed embedding vector of each database. These are the key semantic features.
  - **How to Load:** Use `torch.load('database_embeddings.pt')` in Python with PyTorch installed.
- `node_structural_properties.csv`:
  - **Meaning:** Structural characteristics of each database (e.g., number of tables, columns, foreign key counts).
- `column_cardinality.csv`:
  - **Meaning:** Statistics on the number of unique values (cardinality) of the columns within each database.
- `column_entropy.csv`:
  - **Meaning:** Entropy values computed for the columns within each database, indicating data diversity.
- `data_volume.csv`:
  - **Meaning:** Information on the size or volume of data in each database (e.g., total rows, file size).
- `cluster_assignments_dim2_sz100_msNone.csv`:
  - **Meaning:** Cluster labels assigned to each database after dimensionality reduction (e.g., t-SNE) and clustering (e.g., HDBSCAN).
- `community_assignment.csv`:
  - **Meaning:** Community labels assigned to each database by graph community detection algorithms (e.g., Louvain).
- `tsne_embeddings_dim2.csv`:
  - **Meaning:** Two-dimensional t-SNE projection of the database embeddings, typically used for visualization.
- **How to Load CSV Node Properties:** Use `pandas.read_csv()` in Python. The resulting DataFrames can be merged or used to assign attributes to nodes in a graph object, as in the sketch below.
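As a sketch of how these node property files might be combined (the internal structure of the `.pt` file and the shared ID column name, here `db_id`, are assumptions, not documented facts):

```python
import torch
import pandas as pd

# The internal layout of the .pt file is not specified here; this assumes
# it holds a tensor, or a dict mapping database IDs to embedding vectors.
embeddings = torch.load("database_embeddings.pt")
print(type(embeddings))

# Merge two node property tables on a shared database-ID column.
# "db_id" is a hypothetical column name -- check the real headers.
structural = pd.read_csv("node_structural_properties.csv")
volume = pd.read_csv("data_volume.csv")
node_features = structural.merge(volume, on="db_id", how="left")
print(node_features.head())
```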

### C. Edge Property Files

These CSV files provide features for the relationships (edges) between databases.

- `edge_embed_sim.csv`:
  - **Meaning:** Embedding-based similarity scores (EmbedSim) for connected database pairs. This may duplicate the confidence column of `filtered_edges_0.94_with_confidence.csv`, or it may be a global list of all pairwise similarities above an initial cutoff.
- `edge_structural_properties_GED_0.94.csv`:
  - **Meaning:** Structural similarity metrics (potentially Graph Edit Distance or related measures) for the edges of the graph constructed with $\tau = 0.94$.
- **How to Load CSV Edge Properties:** Use `pandas.read_csv()`. The values can be attached as edge attributes on a graph object, as in the sketch below.
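For illustration, a sketch (again with assumed column names `src` and `dst`) that attaches each row's structural metrics to the corresponding edge of a NetworkX graph:

```python
import pandas as pd
import networkx as nx

# Assumed column names ("src", "dst"); inspect the CSV headers to confirm.
edges = pd.read_csv("filtered_edges_threshold_0.94.csv")
G = nx.from_pandas_edgelist(edges, source="src", target="dst")

edge_props = pd.read_csv("edge_structural_properties_GED_0.94.csv")
# Map each (src, dst) pair to a dict of its remaining columns, then attach
# those dicts as edge attributes.
attrs = {
    (row["src"], row["dst"]): row.drop(["src", "dst"]).to_dict()
    for _, row in edge_props.iterrows()
}
nx.set_edge_attributes(G, attrs)
```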

### D. Other Analysis Files

- `distdiv_results.csv`:
  - **Meaning:** Likely contains results from a distance or diversity analysis performed on the databases or their embeddings; the exact nature is detailed in the paper and accompanying documentation.
  - **How to Load:** As a CSV file, using pandas.

Detailed instructions on the specific schemas of these CSV files, the precise contents of the `.pt` and `.dgl` files, and example usage scripts will be provided in the code repository and full dataset documentation upon release.