iolimat482 committed
Commit 9c2897e · verified · 1 Parent(s): 4d6dd63

Update README.md

Files changed (1): README.md +185 -12
README.md CHANGED
@@ -1,3 +1,22 @@
+ ---
+ language: en
+ tags:
+ - text-classification
+ - hierarchical-classification
+ - common-core-standards
+ license: mit
+ datasets:
+ - iolimat482/common-core-math-question-khan-academy-and-mathfish
+ metrics:
+ - accuracy
+ - precision
+ - recall
+ - f1
+ library_name: transformers
+ pipeline_tag: text-classification
+ base_model:
+ - google-bert/bert-base-uncased
+ ---

# BERT Hierarchical Classification Model

@@ -7,20 +26,174 @@ This model is a fine-tuned BERT-based model for hierarchical classification of C

The model classifies input texts into the following hierarchical levels:

- - Grade
- - Domain
- - Cluster
- - Standard

- ## Files

- - `config.json`: Model configuration.
- - `pytorch_model.bin`: Model weights.
- - `modeling.py`: Model class definition.
- - `tokenizer/`: Tokenizer files.
- - `label_encoders.joblib`: Label encoders for mapping predictions back to labels.

- ## Usage

- See instructions below on how to load and use the model.

+ - **Grade**
+ - **Domain**
+ - **Cluster**
+ - **Standard**
+ 
+ For example, in Common Core numbering the standard 3.OA.A.1 encodes Grade 3, Domain OA (Operations and Algebraic Thinking), Cluster A, and Standard 1.
+ 
+ It is based on BERT (`bert-base-uncased`) and has been fine-tuned on a dataset of Common Core-aligned math questions.
+ 
+ ## Intended Use
+ 
+ This model is intended for educators and developers who need to categorize educational content according to the Common Core Standards. It can be used to:
+ 
+ - Automatically label questions or exercises with the appropriate standard (a pipeline sketch follows this list).
+ - Facilitate curriculum alignment and content organization.
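+ 
+ As a sketch of the first use case, the `predict_standard` helper defined in the How to Use section below could batch-label a spreadsheet of exercises. The CSV path and `question` column here are hypothetical:
+ 
+ ```python
+ import pandas as pd
+ 
+ # Hypothetical input: one exercise per row in a 'question' column
+ df = pd.read_csv('exercises.csv')
+ 
+ # predict_standard, model, tokenizer, and label_encoders all come from
+ # the "How to Use" section below
+ labels = df['question'].apply(
+     lambda q: predict_standard(model, tokenizer, label_encoders, q))
+ 
+ # Expand the four predicted levels into their own columns
+ df = df.join(pd.DataFrame(list(labels)))
+ df.to_csv('exercises_labeled.csv', index=False)
+ ```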
 
+ ## Training Data
+ 
+ The model was trained on a dataset of text questions, each labeled with its corresponding Common Core grade, domain, cluster, and standard.
+ 
+ ## Training Procedure
+ 
+ The main hyperparameters (a matching training-loop sketch follows the list):
+ 
+ - **Optimizer**: AdamW
+ - **Learning Rate**: 2e-5
+ - **Epochs**: 10
+ - **Batch Size**: 16
+ 
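+ The training script is not part of this repo, so the following is only a minimal sketch of how these hyperparameters fit together. It assumes the four heads are optimized jointly with a summed cross-entropy loss, which is consistent with the four logit outputs shown in the How to Use section; `train_loader` and the batch label keys are hypothetical:
+ 
+ ```python
+ import torch.nn.functional as F
+ from torch.optim import AdamW
+ 
+ optimizer = AdamW(model.parameters(), lr=2e-5)
+ 
+ model.train()
+ for epoch in range(10):
+     for batch in train_loader:  # hypothetical DataLoader with batch_size=16
+         optimizer.zero_grad()
+         grade_logits, domain_logits, cluster_logits, standard_logits = model(
+             batch['input_ids'], batch['attention_mask'])
+         # One cross-entropy term per hierarchy level, summed into a single loss
+         loss = (F.cross_entropy(grade_logits, batch['grade'])
+                 + F.cross_entropy(domain_logits, batch['domain'])
+                 + F.cross_entropy(cluster_logits, batch['cluster'])
+                 + F.cross_entropy(standard_logits, batch['standard']))
+         loss.backward()
+         optimizer.step()
+ ```
+ 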
+ ## Evaluation
+ 
+ The model was evaluated on all four classification tasks: grade, domain, cluster, and standard classification. Performance is reported as accuracy, F1 score, precision, and recall. Below are the results after training for **10 epochs**:
+ 
+ ### Overall Loss
+ 
+ - **Average Training Loss**: 0.2508
+ - **Average Validation Loss**: 1.9785
+ - **Training Loss**: 0.1843
+ 
+ ### Cluster Classification
+ 
+ | Metric        | Value  |
+ |---------------|--------|
+ | **Accuracy**  | 0.8797 |
+ | **F1 Score**  | 0.8792 |
+ | **Precision** | 0.8840 |
+ | **Recall**    | 0.8797 |
+ 
+ ### Domain Classification
+ 
+ | Metric        | Value  |
+ |---------------|--------|
+ | **Accuracy**  | 0.9177 |
+ | **F1 Score**  | 0.9175 |
+ | **Precision** | 0.9183 |
+ | **Recall**    | 0.9177 |
+ 
+ ### Grade Classification
+ 
+ | Metric        | Value  |
+ |---------------|--------|
+ | **Accuracy**  | 0.8858 |
+ | **F1 Score**  | 0.8861 |
+ | **Precision** | 0.8896 |
+ | **Recall**    | 0.8858 |
+ 
+ ### Standard Classification
+ 
+ | Metric        | Value  |
+ |---------------|--------|
+ | **Accuracy**  | 0.8334 |
+ | **F1 Score**  | 0.8323 |
+ | **Precision** | 0.8433 |
+ | **Recall**    | 0.8334 |
+ 
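+ One detail worth noting: in every table the recall equals the accuracy, which is exactly what weighted-average recall produces in single-label multiclass classification, so the F1 and precision figures are presumably weighted averages too. A sketch of computing the per-level metrics this way with scikit-learn (an assumption about the evaluation setup, not code from this repo):
+ 
+ ```python
+ from sklearn.metrics import accuracy_score, precision_recall_fscore_support
+ 
+ def level_metrics(y_true, y_pred):
+     # With average='weighted', recall is mathematically identical to
+     # accuracy, matching the tables above
+     precision, recall, f1, _ = precision_recall_fscore_support(
+         y_true, y_pred, average='weighted', zero_division=0)
+     return {'Accuracy': accuracy_score(y_true, y_pred),
+             'F1 Score': f1,
+             'Precision': precision,
+             'Recall': recall}
+ ```
+ 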
+ ## How to Use
+ 
+ The snippet below pulls the tokenizer, configuration, custom model class, and fine-tuned weights from the Hub, runs a forward pass, and maps the four predicted indices back to human-readable labels:
+ 
+ ```python
+ import torch
+ from transformers import BertTokenizer, BertConfig
+ from huggingface_hub import hf_hub_download
+ import joblib
+ import importlib.util
+ 
+ repo_id = 'iolimat482/common-core-bert-hierarchical-classification'
+ 
+ # Load the tokenizer and model configuration from the Hub
+ tokenizer = BertTokenizer.from_pretrained(repo_id)
+ config = BertConfig.from_pretrained(repo_id)
+ 
+ # Download 'modeling.py', which contains the custom model class
+ modeling_file = hf_hub_download(repo_id=repo_id, filename='modeling.py')
+ 
+ # Import the model class dynamically
+ spec = importlib.util.spec_from_file_location("modeling", modeling_file)
+ modeling = importlib.util.module_from_spec(spec)
+ spec.loader.exec_module(modeling)
+ BertHierarchicalClassification = modeling.BertHierarchicalClassification
+ 
+ # Instantiate the model and load the fine-tuned weights
+ model = BertHierarchicalClassification(config)
+ model_weights = hf_hub_download(repo_id=repo_id, filename='best_model.pt')
+ model.load_state_dict(torch.load(model_weights, map_location=torch.device('cpu')))
+ model.eval()
+ 
+ # Label encoders map predicted class indices back to label strings
+ label_encoders_path = hf_hub_download(repo_id=repo_id, filename='label_encoders.joblib')
+ label_encoders = joblib.load(label_encoders_path)
+ 
+ def predict_standard(model, tokenizer, label_encoders, text):
+     # Tokenize the input text
+     inputs = tokenizer(text, return_tensors='pt', truncation=True, padding=True)
+ 
+     # Perform inference
+     with torch.no_grad():
+         grade_logits, domain_logits, cluster_logits, standard_logits = model(
+             inputs['input_ids'], inputs['attention_mask'])
+ 
+     # Get the predicted class indices
+     grade_pred = torch.argmax(grade_logits, dim=1).item()
+     domain_pred = torch.argmax(domain_logits, dim=1).item()
+     cluster_pred = torch.argmax(cluster_logits, dim=1).item()
+     standard_pred = torch.argmax(standard_logits, dim=1).item()
+ 
+     # Map indices back to labels
+     grade_label = label_encoders['Grade'].inverse_transform([grade_pred])[0]
+     domain_label = label_encoders['Domain'].inverse_transform([domain_pred])[0]
+     cluster_label = label_encoders['Cluster'].inverse_transform([cluster_pred])[0]
+     standard_label = label_encoders['Standard'].inverse_transform([standard_pred])[0]
+ 
+     return {
+         'Grade': grade_label,
+         'Domain': domain_label,
+         'Cluster': cluster_label,
+         'Standard': standard_label
+     }
+ 
+ # Example questions
+ questions = [
+     "Add 4 and 5 together. What is the sum?",
+     "What is 7 times 8?",
+     "Find the area of a rectangle with length 5 and width 3.",
+ ]
+ 
+ for question in questions:
+     prediction = predict_standard(model, tokenizer, label_encoders, question)
+     print(f"Question: {question}")
+     print("Predicted Standards:")
+     for key, value in prediction.items():
+         print(f"  {key}: {value}")
+     print()
+ ```
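+ 
+ The snippet runs on CPU. On a CUDA machine you can move the model and the tokenized inputs to the GPU before inference, e.g. `model.to('cuda')` and `inputs = {k: v.to('cuda') for k, v in inputs.items()}`.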
+ 
+ ## Limitations
+ 
+ - The model's performance reflects the distribution of its training data; the gap between the average training loss (0.2508) and the average validation loss (1.9785) suggests some overfitting to that distribution.
+ - It may not generalize well to questions that differ significantly from the training data, e.g. content outside Common Core math or phrasing far from the Khan Academy and MathFish question styles.
+ 
+ ## Citation
+ 
+ If you use this model in your work, please cite:
+ 
+ ```bibtex
+ @misc{olaimat2025commoncore,
+   author = {Olaimat, Ibrahim},
+   title = {Common Core BERT Hierarchical Classification},
+   year = {2025},
+   howpublished = {\url{https://huggingface.co/iolimat482/common-core-bert-hierarchical-classification}}
+ }
+ ```
+ 
+ ## Connect with the Author
+ 
+ - 🤗 Hugging Face: [@iolimat482](https://huggingface.co/iolimat482)
+ - 💼 LinkedIn: [Ibrahim Olaimat](https://www.linkedin.com/in/ibrahim-olaimat-8ba1b4211)