---
license: mit
tags:
- code
size_categories:
- 10K<n<100K
---

# Brogue Map Dataset

To clone this repo, use:

```
git clone https://huggingface.co/datasets/DolphinNie/dungeon-dataset
```

## 1. Data Explanation

This is a map dataset from the open-source game [Brogue](https://github.com/tmewett/BrogueCE). It contains 49,000 training maps, 14,000 test maps, and 7,000 validation maps.

Each map is stored in a `.csv` file as a `32x32` array, which is the size of one map.

Each cell in the array is an integer ranging from 0 to 13, representing one of the following 14 tile types:

```json
"G_NONE": 0,
"G_GROUND": 1,
"G_SAND": 2,
"G_WATER": 3,
"G_BOG": 4,
"G_LAVA": 5,
"G_ICE": 6,
"G_GRASS": 7,
"G_FUNGUS": 8,
"G_ASHES": 9,
"G_STONE": 10,
"G_CRYSTAL": 11,
"G_FIRE": 12,
"G_BRIDGE": 13
```
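
When working with the maps in Python, it can be handy to keep the same mapping as a dictionary. This is only a small illustrative helper, not something shipped with the dataset:

```python
# Tile index -> Brogue tile name, mirroring the list above
ID_TO_TILE = {
    0: "G_NONE", 1: "G_GROUND", 2: "G_SAND", 3: "G_WATER", 4: "G_BOG",
    5: "G_LAVA", 6: "G_ICE", 7: "G_GRASS", 8: "G_FUNGUS", 9: "G_ASHES",
    10: "G_STONE", 11: "G_CRYSTAL", 12: "G_FIRE", 13: "G_BRIDGE",
}
TILE_TO_ID = {name: idx for idx, name in ID_TO_TILE.items()}

print(ID_TO_TILE[8])          # G_FUNGUS
print(TILE_TO_ID["G_WATER"])  # 3
```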
|
An example map datapoint looks like this:

```
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
0,0,0,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
0,0,1,1,1,8,8,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
0,1,1,1,8,8,8,0,0,0,0,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0
0,1,1,8,8,8,8,0,0,0,1,1,1,1,1,0,1,0,0,0,0,0,0,0,0,0,0,0
0,1,1,1,8,8,0,0,0,1,1,1,1,1,1,1,1,0,0,1,0,0,0,0,0,0,0,0
0,1,1,1,1,1,0,0,0,1,1,1,1,1,1,1,1,0,1,8,0,0,1,1,1,1,0,0
0,1,1,1,1,1,0,0,0,1,1,1,1,1,1,1,0,0,1,1,1,1,1,1,1,1,1,0
0,0,1,1,1,0,0,0,0,0,1,1,1,1,1,0,0,0,1,1,1,1,1,1,1,1,1,0
0,0,0,0,1,0,0,0,0,0,0,1,1,1,0,0,0,0,1,1,1,1,0,0,1,1,1,9
0,0,0,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,0,0,0,0,1,1,0
0,0,1,1,1,1,1,0,0,0,0,0,0,1,1,1,1,0,0,0,1,0,0,0,0,0,0,0
0,0,1,1,1,1,1,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0
0,1,1,1,1,1,1,0,0,0,0,0,0,1,1,1,1,0,1,1,1,1,0,0,0,0,0,0
0,1,8,1,1,1,1,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0
0,1,8,8,8,8,1,8,0,0,0,0,1,8,1,1,0,0,0,0,0,0,0,0,0,1,1,1
0,0,8,8,8,8,8,8,0,0,0,8,8,8,8,8,1,0,0,0,1,1,0,0,0,1,1,1
0,0,1,8,8,8,8,8,8,0,1,8,8,8,8,8,1,0,0,0,1,1,0,0,0,0,1,1
0,0,0,1,8,8,8,8,8,0,1,1,1,8,8,1,0,0,0,0,1,1,0,1,0,1,1,1
0,0,0,8,8,8,8,8,8,1,1,1,1,8,1,1,0,0,0,0,1,1,1,1,0,1,1,0
0,0,0,8,8,8,8,1,0,0,0,3,1,0,1,0,0,0,0,0,0,1,1,1,0,1,1,0
0,0,0,0,8,8,8,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,0
0,0,0,1,1,0,0,0,0,0,0,0,0,0,11,1,1,1,1,1,1,1,1,1,1,1,0,0
0,1,1,1,8,1,0,0,0,0,0,0,0,0,11,11,11,1,1,1,1,1,1,1,1,1,1,0
0,0,1,1,1,1,0,0,0,0,0,0,0,0,11,11,0,0,1,1,0,0,1,1,1,1,1,1
0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
```
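
If you save an individual map to its own `.csv` file (the filename below is only a placeholder), it can be loaded and sanity-checked directly with NumPy:

```python
import numpy as np

# Placeholder path; point this at wherever your exported map lives
dungeon_map = np.loadtxt("example_map.csv", delimiter=",", dtype=np.int64)

print(dungeon_map.shape)                     # expected: (32, 32)
print(dungeon_map.min(), dungeon_map.max())  # tile indices should stay within 0..13
```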
|
|
|
## 2. Data Processing

The Hugging Face version of this dataset stores the maps row by row, so each `32x32` map spans 32 consecutive rows of a split rather than appearing as a single array. To reassemble the maps into `(32, 32)` arrays, use the following code:

```python
import pickle

from datasets import load_dataset
import numpy as np
import matplotlib.pyplot as plt


def pull_hugging_face_dataset(load_dataset_from_pickle=False,
                              save_dataset_to_pickle=False,
                              pickle_save_path='dungeon-dataset.pkl'):
    # Load the raw dataset from a local pickle cache if requested,
    # otherwise download it from the Hugging Face Hub
    if load_dataset_from_pickle:
        with open(pickle_save_path, 'rb') as f:
            return pickle.load(f)
    dataset = load_dataset("DolphinNie/dungeon-dataset")
    if save_dataset_to_pickle:
        with open(pickle_save_path, 'wb') as f:
            pickle.dump(dataset, f)
    return dataset


def convert_dataset(dataset):
    # Regroup the flat rows of each split into (32, 32) maps
    dataset_train = []
    dataset_test = []
    dataset_valid = []
    datasets = [dataset_train, dataset_test, dataset_valid]
    names = ['train', 'test', 'validation']
    for i in range(3):
        datapoint_num = dataset[names[i]].num_rows // 32
        split_df = dataset[names[i]].to_pandas()
        for n in range(datapoint_num):
            # Each map occupies 32 consecutive rows of the split
            dungeon_map = split_df[n * 32:(n + 1) * 32].to_numpy()
            datasets[i].append(dungeon_map)
    return dataset_train, dataset_test, dataset_valid


def get_processed_dataset(load_dataset_from_pickle=False,
                          save_dataset_to_pickle=False,
                          pickle_save_path='dungeon-dataset.pkl'):
    dataset = pull_hugging_face_dataset(load_dataset_from_pickle,
                                        save_dataset_to_pickle,
                                        pickle_save_path)
    return convert_dataset(dataset)


dataset_train, dataset_test, dataset_valid = get_processed_dataset()


# Visualize a datapoint if you want
def visualize_map(dungeon_map):
    plt.imshow(dungeon_map, cmap='viridis', interpolation='nearest')
    plt.title('dungeon map')
    plt.show()


visualize_map(dataset_train[10000])
```
|
|
|
<img src="./README.assets/image-20240411203604268.png" alt="image-20240411203604268" style="zoom:50%;" />
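
With `pull_hugging_face_dataset` defined as above, the pickle arguments let you cache the raw dataset locally and skip repeated downloads on later runs. A possible usage sketch (the pickle path is only an example):

```python
# First run: download from the Hub and cache a local copy
dataset_train, dataset_test, dataset_valid = get_processed_dataset(
    save_dataset_to_pickle=True, pickle_save_path='dungeon-dataset.pkl')

# Later runs: read the cached pickle instead of downloading again
dataset_train, dataset_test, dataset_valid = get_processed_dataset(
    load_dataset_from_pickle=True, pickle_save_path='dungeon-dataset.pkl')
```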
|
|
|
Note that the dataset stores each map as a two-dimensional array of tile indices, not as a three-dimensional one-hot representation. If you want to train a model on it, you will likely need to process the data further, for example by one-hot encoding the tiles as sketched below.
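
Below is a minimal sketch of that conversion using NumPy, assuming a channel-first `(14, 32, 32)` layout; adjust it to whatever your model expects:

```python
import numpy as np

NUM_TILES = 14

def one_hot_encode(dungeon_map):
    # (32, 32) integer map -> (NUM_TILES, 32, 32) one-hot array
    dungeon_map = np.asarray(dungeon_map, dtype=np.int64)
    one_hot = np.eye(NUM_TILES, dtype=np.float32)[dungeon_map]  # (32, 32, NUM_TILES)
    return one_hot.transpose(2, 0, 1)                           # (NUM_TILES, 32, 32)

# Example: encode the first training map from the processing step above
encoded = one_hot_encode(dataset_train[0])
print(encoded.shape)  # (14, 32, 32)
```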
|
|