Add paper link and Github link

#1
by nielsr HF staff - opened
Files changed (1)
  1. README.md +72 -0
README.md CHANGED
@@ -1,3 +1,75 @@
  ---
  license: cc-by-4.0
+ task_categories:
+ - graph-ml
+ tags:
+ - multimodal
+ - attributed-graph
+ - benchmark
  ---
+
+ # MAGB
+
+ This repository contains the Multimodal Attributed Graph Benchmark (MAGB) datasets described in the paper [When Graph meets Multimodal: Benchmarking on Multimodal Attributed Graphs Learning](https://huggingface.co/papers/2410.09132).
+
+ [Github repository](https://github.com/sktsherlock/MAGB)
+
+ MAGB provides 5 datasets from e-commerce and social networks and evaluates two major learning paradigms: _**GNN-as-Predictor**_ and _**VLM-as-Predictor**_. The datasets are publicly available on Hugging Face: [https://huggingface.co/datasets/Sherirto/MAGB](https://huggingface.co/datasets/Sherirto/MAGB).
+
+ Each dataset consists of several parts:
+
+ - Graph Data (`*.pt`): Stores the graph structure, including adjacency information and node labels. Loadable with DGL, as sketched below.
+ - Node Textual Metadata (`*.csv`): Contains node textual descriptions, neighborhood relationships, and category labels.
+ - Text, Image, and Multimodal Features (`TextFeature/`, `ImageFeature/`, `MMFeature/`): Embeddings for each modality, pre-extracted as described in the MAGB paper.
+ - Raw Images (`*.tar.gz`): A compressed folder of images named by node ID; it must be extracted before use. The Reddit-M dataset is particularly large and may require special handling (see the Github README for details).
+
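+ A minimal loading sketch in Python (the paths below, e.g. `Data/Movies/MoviesGraph.pt` and `Data/Movies/Movies.csv`, are placeholders; use the actual file names shipped with each dataset):
+
+ ```python
+ import dgl
+ import pandas as pd
+
+ # Graph data (*.pt): graph structure plus node labels, loadable with DGL.
+ graphs, label_dict = dgl.load_graphs("Data/Movies/MoviesGraph.pt")  # hypothetical path
+ g = graphs[0]
+ print(g)
+
+ # Node textual metadata (*.csv): descriptions, neighborhoods, category labels.
+ metadata = pd.read_csv("Data/Movies/Movies.csv")  # hypothetical path
+ print(metadata.head())
+ ```
+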
+ ## πŸ“– Table of Contents
+ - [πŸ“– Introduction](#-introduction)
+ - [πŸ’» Installation](#-installation)
+ - [πŸš€ Usage](#-usage)
+ - [πŸ“Š Results](#-results)
+ - [🀝 Contributing](#-contributing)
+ - [❓ FAQ](#-faq)
+
+ ---
+
+ ## πŸ“– Introduction
+ Multimodal attributed graphs (MAGs) incorporate multiple data types (e.g., text, images, numerical features) into graph structures, enabling more powerful learning and inference capabilities.
+ This benchmark provides:
+ βœ… **Standardized datasets** with multimodal attributes.
+ βœ… **Feature extraction pipelines** for different modalities.
+ βœ… **Evaluation metrics** to compare different models.
+ βœ… **Baselines and benchmarks** to accelerate research.
+
+ ---
+
+ ## πŸ’» Installation
+ Ensure you have the required dependencies installed before running the benchmark.
+
+ ```bash
+ # Clone the repository
+ git clone https://github.com/sktsherlock/MAGB.git
+ cd MAGB
+
+ # Install dependencies
+ pip install -r requirements.txt
+ ```
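+
+ A quick way to confirm that the core dependencies resolved (assuming DGL and PyTorch are among the pinned requirements):
+
+ ```python
+ # Sanity check: both libraries should import without error.
+ import torch
+ import dgl
+
+ print("torch", torch.__version__, "| dgl", dgl.__version__)
+ ```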
+
+ ## πŸš€ Usage
+
+ ### 1. Download the datasets from [MAGB](https://huggingface.co/datasets/Sherirto/MAGB). πŸ‘
+
+ ```bash
+ cd Data/
+ sudo apt-get update && sudo apt-get install git-lfs && git lfs install && git clone https://huggingface.co/datasets/Sherirto/MAGB .
+ ls
+ ```
+ Now you can see the **Movies**, **Toys**, **Grocery**, **Reddit-S**, and **Reddit-M** datasets under the **Data** folder.
+
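+ Each dataset's raw images ship as a `*.tar.gz` archive that must be unpacked before use. A minimal sketch (the archive name `MoviesImages.tar.gz` is a placeholder; substitute the file that ships with your dataset):
+
+ ```python
+ import tarfile
+
+ # Unpack the raw-image archive; the extracted files are images named by node ID.
+ with tarfile.open("Data/Movies/MoviesImages.tar.gz", "r:gz") as tar:  # hypothetical path
+     tar.extractall(path="Data/Movies/Images")
+ ```
+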
+ <p align="center">
+   <img src="Figure/Dataset.jpg" width="900"/>
+ </p>
+
+ ### 2. GNN-as-Predictor
+ ...(rest of the content from Github README can be pasted here)