---
license: mit
task_categories:
- image-classification
- feature-extraction
tags:
- biology
- medical
pretty_name: AAL Statistics Volume
size_categories:
- n<1K
language:
- en
---
# Dataset Card for aal_stats_vol
The AAL (Automated Anatomical Labeling) Statistical Volume Dataset provides a collection of brain volume measurements based on the AAL atlas. It contains statistical information on brain regions derived from structural magnetic resonance imaging (MRI) scans and is commonly used in neuroimaging, neuroscience, and structural brain analysis studies.

The dataset supports the development and evaluation of automated brain region identification and volume analysis algorithms. With volumetric data from a range of individuals, it provides a resource for studies that characterize structural differences in the brain between populations.
## Usage
```python
from datasets import load_dataset
data = load_dataset("Genius-Society/aal_stats_vol", split="train")
for item in data:
    print(item)
```
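For tabular analysis, the loaded split can also be converted to a pandas DataFrame. The sketch below only assumes that pandas is installed; the actual column names are defined by the dataset schema, so they are printed rather than hard-coded.

```python
from datasets import load_dataset

data = load_dataset("Genius-Society/aal_stats_vol", split="train")

# Convert the split to a pandas DataFrame for inspection;
# the column names come from the dataset schema itself.
df = data.to_pandas()
print(df.shape)
print(df.columns.tolist())
```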
## Maintenance
```bash
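# Clone the repo metadata only; skip downloading large Git LFS payloads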
GIT_LFS_SKIP_SMUDGE=1 git clone [email protected]:datasets/Genius-Society/aal_stats_vol
cd aal_stats_vol
```
## Mirror
<https://www.modelscope.cn/datasets/Genius-Society/aal_stats_vol>
The raw data of this dataset is located at `./data` in the mirror repo above; its contents are as follows:
### Attachments
1. Original data;
2. Packages;
3. Intermediate products;
4. Source code together with the output CSV data.
### Steps to Achieve the Goal
1. Skull stripping: extract the brain from the raw MRI images;
2. Tissue segmentation: segment the brain into white matter, grey matter, and cerebrospinal fluid (CSF);
3. Registration: register the standard space to the native space;
4. Measurement (with masks): use the generated masks to measure the volumes of 90 ROIs (see the sketch after this list);
5. Classification: fill out the shell scripts and run them, then write a Python script to train the model and test it on the provided testing dataset.
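As an illustration of the measurement step, the following is a minimal sketch of counting voxels per ROI label from an AAL atlas registered to native space. It assumes `nibabel` and `numpy` are available and uses hypothetical file names; it is not the pipeline's actual source code, which lives under `./data` in the mirror repo.

```python
import nibabel as nib
import numpy as np

# Hypothetical input: an AAL label image registered to native space.
atlas_img = nib.load("aal_atlas_native.nii.gz")
labels = atlas_img.get_fdata().astype(int)

# Voxel volume in mm^3, computed from the voxel dimensions in the header.
voxel_vol = float(np.prod(atlas_img.header.get_zooms()[:3]))

# Volume of each of the 90 AAL ROIs (labels 1..90 by convention).
roi_volumes = {
    roi: float(np.sum(labels == roi)) * voxel_vol
    for roi in range(1, 91)
}
print(roi_volumes[1])  # e.g. volume of the first ROI in mm^3
```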
### Classification Details
0. If everything went well in the previous steps, a matrix of size 50x90 is obtained: 50 samples, each with 90 features (ROI volumes);
1. Dataset: 40 samples for training and 10 for testing;
2. Training: feed the training samples into a classifier to fit a model;
3. Testing: evaluate the model on the testing samples and report the result (a minimal sketch follows this list).
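This card does not fix a particular classifier, so the following is only a minimal sketch of the 40/10 split and evaluation described above, using scikit-learn with a linear SVM as an example and a randomly generated 50x90 matrix as a stand-in for the real volume features.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Stand-in data: 50 samples x 90 ROI volumes and binary labels
# (e.g. AD patients vs. normal controls); replace with the real CSV data.
rng = np.random.default_rng(0)
X = rng.random((50, 90))
y = rng.integers(0, 2, size=50)

# 40 samples for training, 10 for testing, as described above.
X_train, X_test = X[:40], X[40:]
y_train, y_test = y[:40], y[40:]

# Any classifier could be used here; a linear SVM is just an illustration.
clf = SVC(kernel="linear")
clf.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```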
## Reference
[1] [Chapter II ‐ Classifying AD patients and normal controls from brain images](https://github.com/Genius-Society/medical_image_computing/blob/ad/README.md)