---
language: en
tags:
- text-classification
- hierarchical-classification
- common-core-standards
license: mit
datasets:
- iolimat482/common-core-math-question-khan-academy-and-mathfish
metrics:
- accuracy
- precision
- recall
- f1
library_name: transformers
pipeline_tag: text-classification
base_model:
- google-bert/bert-base-uncased
---
# BERT Hierarchical Classification Model
This model is a BERT model fine-tuned for hierarchical classification of Common Core Standards-aligned math questions.
## Model Description
The model classifies input texts into the following hierarchical levels:
- **Grade**
- **Domain**
- **Cluster**
- **Standard**
It is based on `bert-base-uncased` and has been fine-tuned on a dataset of Common Core Standards-aligned questions.
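The actual model class ships in the repository's `modeling.py`; as an illustration only, a shared-encoder design with four independent classification heads over the pooled sentence representation might look like the following sketch (the class name, dropout rate, and head layout here are assumptions, not the confirmed architecture):

```python
import torch
import torch.nn as nn

class HierarchicalHeads(nn.Module):
    """Illustrative four-head classifier over a shared pooled encoding.

    Hypothetical stand-in for the real class in the repo's modeling.py:
    one linear head per hierarchy level (grade, domain, cluster, standard).
    """
    def __init__(self, hidden_size, n_grades, n_domains, n_clusters, n_standards):
        super().__init__()
        self.dropout = nn.Dropout(0.1)
        self.grade_head = nn.Linear(hidden_size, n_grades)
        self.domain_head = nn.Linear(hidden_size, n_domains)
        self.cluster_head = nn.Linear(hidden_size, n_clusters)
        self.standard_head = nn.Linear(hidden_size, n_standards)

    def forward(self, pooled):
        # pooled: (batch, hidden_size), e.g. BERT's [CLS] representation
        h = self.dropout(pooled)
        return (self.grade_head(h), self.domain_head(h),
                self.cluster_head(h), self.standard_head(h))
```

Each head produces its own logits, so the model predicts all four levels in a single forward pass, which is exactly the tuple the usage example below unpacks.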
## Intended Use
This model is intended for educators and developers who need to categorize educational content according to the Common Core Standards. It can be used to:
- Automatically label questions or exercises with the appropriate standard.
- Facilitate curriculum alignment and content organization.
## Training Data
The model was trained on a dataset consisting of text questions labeled with their corresponding Common Core Standards.
## Training Procedure
- **Optimizer**: AdamW
- **Learning Rate**: 2e-5
- **Epochs**: 10
- **Batch Size**: 16
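The hyperparameters above plug into a standard multi-task fine-tuning loop. The sketch below assumes the per-level cross-entropy losses are summed before backpropagation; that choice (and the `train_step` helper itself) is an assumption for illustration, not the confirmed training code:

```python
import torch
import torch.nn as nn

def train_step(model, optimizer, batch):
    """One hypothetical optimization step: sum the four heads' losses."""
    model.train()
    optimizer.zero_grad()
    grade_lg, domain_lg, cluster_lg, standard_lg = model(
        batch['input_ids'], batch['attention_mask'])
    ce = nn.CrossEntropyLoss()
    # Assumed loss: unweighted sum of cross-entropy over the four levels
    loss = (ce(grade_lg, batch['grade']) + ce(domain_lg, batch['domain'])
            + ce(cluster_lg, batch['cluster']) + ce(standard_lg, batch['standard']))
    loss.backward()
    optimizer.step()
    return loss.item()

# Wiring in the listed hyperparameters:
# optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
# for epoch in range(10):              # 10 epochs
#     for batch in train_loader:       # batches of 16
#         train_step(model, optimizer, batch)
```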
## Evaluation
The model was evaluated on four classification tasks: grade, domain, cluster, and standard. Performance is reported as accuracy, F1 score, precision, and recall. Below are the results after training for **10 epochs**:
### Overall Loss
- **Average Training Loss**: 0.2508
- **Average Validation Loss**: 1.9785
- **Training Loss**: 0.1843
### Cluster Classification
| Metric | Value |
|--------------|---------|
| **Accuracy** | 0.8797 |
| **F1 Score** | 0.8792 |
| **Precision**| 0.8840 |
| **Recall** | 0.8797 |
### Domain Classification
| Metric | Value |
|--------------|---------|
| **Accuracy** | 0.9177 |
| **F1 Score** | 0.9175 |
| **Precision**| 0.9183 |
| **Recall** | 0.9177 |
### Grade Classification
| Metric | Value |
|--------------|---------|
| **Accuracy** | 0.8858 |
| **F1 Score** | 0.8861 |
| **Precision**| 0.8896 |
| **Recall** | 0.8858 |
### Standard Classification
| Metric | Value |
|--------------|---------|
| **Accuracy** | 0.8334 |
| **F1 Score** | 0.8323 |
| **Precision**| 0.8433 |
| **Recall** | 0.8334 |
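In each table above, recall exactly matches accuracy, which is what support-weighted averaging produces; assuming that averaging mode (an inference, not stated in the original training code), metrics like these can be reproduced with scikit-learn:

```python
# Hedged sketch: computing accuracy, precision, recall, and F1 as in the
# tables above. average='weighted' is an assumption (it would explain why
# recall equals accuracy in every table).
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def classification_metrics(y_true, y_pred):
    precision, recall, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average='weighted', zero_division=0)
    return {
        'Accuracy': accuracy_score(y_true, y_pred),
        'F1 Score': f1,
        'Precision': precision,
        'Recall': recall,
    }
```

With weighted averaging, recall reduces to (correct predictions / total), i.e. accuracy, so the two columns coincide by construction.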
## How to Use
```python
import torch
from transformers import BertTokenizer, BertConfig
from huggingface_hub import hf_hub_download
import joblib
import importlib.util

REPO_ID = 'iolimat482/common-core-bert-hierarchical-classification'

tokenizer = BertTokenizer.from_pretrained(REPO_ID)
config = BertConfig.from_pretrained(REPO_ID)

# Download the custom model definition ('modeling.py') and import it dynamically
modeling_file = hf_hub_download(repo_id=REPO_ID, filename='modeling.py')
spec = importlib.util.spec_from_file_location("modeling", modeling_file)
modeling = importlib.util.module_from_spec(spec)
spec.loader.exec_module(modeling)
BertHierarchicalClassification = modeling.BertHierarchicalClassification

# Instantiate the model and load the fine-tuned weights
model = BertHierarchicalClassification(config)
model_weights = hf_hub_download(repo_id=REPO_ID, filename='best_model.pt')
model.load_state_dict(torch.load(model_weights, map_location=torch.device('cpu')))
model.eval()

# Load the label encoders that map class indices back to Common Core labels
label_encoders_path = hf_hub_download(repo_id=REPO_ID, filename='label_encoders.joblib')
label_encoders = joblib.load(label_encoders_path)

def predict_standard(model, tokenizer, label_encoders, text):
    # Tokenize input text
    inputs = tokenizer(text, return_tensors='pt', truncation=True, padding=True)
    # Perform inference
    with torch.no_grad():
        grade_logits, domain_logits, cluster_logits, standard_logits = model(
            inputs['input_ids'], inputs['attention_mask'])
    # Get the predicted class indices
    grade_pred = torch.argmax(grade_logits, dim=1).item()
    domain_pred = torch.argmax(domain_logits, dim=1).item()
    cluster_pred = torch.argmax(cluster_logits, dim=1).item()
    standard_pred = torch.argmax(standard_logits, dim=1).item()
    # Map indices back to human-readable labels
    return {
        'Grade': label_encoders['Grade'].inverse_transform([grade_pred])[0],
        'Domain': label_encoders['Domain'].inverse_transform([domain_pred])[0],
        'Cluster': label_encoders['Cluster'].inverse_transform([cluster_pred])[0],
        'Standard': label_encoders['Standard'].inverse_transform([standard_pred])[0],
    }

# Example questions
questions = [
    "Add 4 and 5 together. What is the sum?",
    "What is 7 times 8?",
    "Find the area of a rectangle with length 5 and width 3.",
]

for question in questions:
    prediction = predict_standard(model, tokenizer, label_encoders, question)
    print(f"Question: {question}")
    print("Predicted Standards:")
    for key, value in prediction.items():
        print(f"  {key}: {value}")
    print()
```
## Limitations
- The model's performance is limited by the scope of its training data.
- It may not generalize well to questions that differ significantly from the training distribution.
## Citation
If you use this model in your work, please cite:
```bibtex
@misc{olaimat2025commoncore,
author = {Olaimat, Ibrahim},
title = {Common Core BERT Hierarchical Classification},
year = {2025},
howpublished = {\url{https://huggingface.co/iolimat482/common-core-bert-hierarchical-classification}}
}
```
## Connect with the Author
- 🤗 Hugging Face: [@iolimat482](https://huggingface.co/iolimat482)
- 💼 LinkedIn: [Ibrahim Olaimat](https://www.linkedin.com/in/ibrahim-olaimat-8ba1b4211)