---
license: cc-by-4.0
task_categories:
- graph-ml
tags:
- multimodal
- attributed-graph
- benchmark
---

# MAGB

This repository contains the Multimodal Attribute Graph Benchmark (MAGB) datasets described in the paper [When Graph meets Multimodal: Benchmarking on Multimodal Attributed Graphs Learning](https://huggingface.co/papers/2410.09132).

[Github repository](https://github.com/sktsherlock/MAGB)

MAGB provides 5 datasets from E-Commerce and Social Networks, and evaluates two major learning paradigms: _**GNN-as-Predictor**_ and **_VLM-as-Predictor_**.  The datasets are publicly available on Hugging Face: [https://huggingface.co/datasets/Sherirto/MAGB](https://huggingface.co/datasets/Sherirto/MAGB).


Each dataset consists of several parts:

- **Graph data** (`*.pt`): the graph structure, including adjacency information and node labels; loadable with DGL.
- **Node textual metadata** (`*.csv`): node text descriptions, neighborhood relationships, and category labels.
- **Text, image, and multimodal features** (`TextFeature/`, `ImageFeature/`, `MMFeature/`): embeddings pre-extracted for the MAGB paper, one folder per modality.
- **Raw images** (`*.tar.gz`): a compressed archive of images named by node ID; extract before use. The Reddit-M archive is particularly large and may need special handling (see the GitHub README for details).
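The non-graph pieces can be loaded with standard tooling. The helper below is a minimal sketch of that layout; the `Movies` file name, the flat `<name>.csv` path, and the `.npy` feature format are assumptions to verify against the downloaded files (the `*.pt` graph itself would be loaded separately, e.g. with `dgl.load_graphs`):

```python
import numpy as np
import pandas as pd


def load_magb_parts(root: str, name: str = "Movies"):
    """Load the tabular parts of one MAGB dataset.

    Assumed layout (check against the actual download):
      <root>/<name>.csv              - node textual metadata
      <root>/TextFeature/<name>.npy  - pre-extracted text embeddings
    """
    meta = pd.read_csv(f"{root}/{name}.csv")
    text_feat = np.load(f"{root}/TextFeature/{name}.npy")
    # Sanity check: one metadata row per node embedding.
    assert len(meta) == text_feat.shape[0], "metadata/feature row mismatch"
    return meta, text_feat
```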


## πŸ“– Table of Contents  
- [πŸ“– Introduction](#-introduction)  
- [πŸ’» Installation](#-installation)
- [πŸš€ Usage](#-usage)  
- [πŸ“Š Results](#-results)  
- [🀝 Contributing](#-contributing)  
- [❓ FAQ](#-faq)  

---

## πŸ“– Introduction  
Multimodal attributed graphs (MAGs) incorporate multiple data types (e.g., text, images, numerical features) into graph structures, enabling more powerful learning and inference capabilities.  
This benchmark provides:  
βœ… **Standardized datasets** with multimodal attributes.  
βœ… **Feature extraction pipelines** for different modalities.  
βœ… **Evaluation metrics** to compare different models.  
βœ… **Baselines and benchmarks** to accelerate research.  

---

## πŸ’» Installation  
Ensure you have the required dependencies installed before running the benchmark.  

```bash
# Clone the repository
git clone https://github.com/sktsherlock/MAGB.git
cd MAGB

# Install dependencies
pip install -r requirements.txt
```
## πŸš€ Usage

### 1. Download the datasets from [MAGB](https://huggingface.co/datasets/Sherirto/MAGB). πŸ‘

```bash
cd Data/
sudo apt-get update && sudo apt-get install git-lfs && git lfs install && git clone https://huggingface.co/datasets/Sherirto/MAGB .
ls
```
Now you can see the **Movies**, **Toys**, **Grocery**, **Reddit-S**, and **Reddit-M** datasets under the `Data` folder.

<p align="center">
    <img src="Figure/Dataset.jpg" width="900"/>
</p>

### 2. GNN-as-Predictor 
...(rest of the content from the GitHub README can be pasted here)