iolimat482 committed
Commit 72d7604 · verified · 1 Parent(s): 384e209

Update README.md

Files changed (1):
  1. README.md +153 -32

README.md CHANGED
@@ -1,22 +1,22 @@
- ---
- language: en
- tags:
- - text-classification
- - hierarchical-classification
- - common-core-standards
- license: mit
- datasets:
- - iolimat482/common-core-math-question-khan-academy-and-mathfish
- metrics:
- - accuracy
- - precision
- - recall
- - f1
- library_name: transformers
- pipeline_tag: text-classification
- base_model:
- - google-bert/bert-base-uncased
- ---

  # BERT Hierarchical Classification Model

@@ -48,31 +48,152 @@ The model was trained on a dataset consisting of text questions labeled with the

  - **Optimizer**: AdamW
  - **Learning Rate**: 2e-5
- - **Epochs**: 3
- - **Batch Size**: 8

- ## Evaluation Results

- The model was evaluated using the following metrics:

- - **Accuracy**: 0.95
- - **Precision**: 0.94
- - **Recall**: 0.93
- - **F1-Score**: 0.93

  ## How to Use

- (Instructions for loading and using the model.)

  ## Limitations

  - The model's performance is limited to the data it was trained on.
  - May not generalize well to questions significantly different from the training data.

- ## License

- This model is licensed under the MIT License.

- ## Acknowledgments

- (Any acknowledgments or credits.)

+ ---
+ language: en
+ tags:
+ - text-classification
+ - hierarchical-classification
+ - common-core-standards
+ license: mit
+ datasets:
+ - iolimat482/common-core-math-question-khan-academy-and-mathfish
+ metrics:
+ - accuracy
+ - precision
+ - recall
+ - f1
+ library_name: transformers
+ pipeline_tag: text-classification
+ base_model:
+ - google-bert/bert-base-uncased
+ ---

  # BERT Hierarchical Classification Model

  - **Optimizer**: AdamW
  - **Learning Rate**: 2e-5
+ - **Epochs**: 10
+ - **Batch Size**: 16
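
+ As a rough illustration of this configuration (not the author's actual training script: the toy data, `num_labels=4`, and the single-head `BertForSequenceClassification` standing in for the four-head hierarchical model are all assumptions), a fine-tuning loop with these hyperparameters might look like:

+ ```python
+ # Illustrative fine-tuning sketch using the hyperparameters listed above.
+ # A single-head classifier stands in for the custom four-head model.
+ import torch
+ from torch.optim import AdamW
+ from transformers import BertTokenizer, BertForSequenceClassification
+
+ tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
+ model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=4)
+
+ texts = ["Add 4 and 5 together.", "What is 7 times 8?"]   # toy stand-in data
+ labels = torch.tensor([0, 1])                             # toy label ids
+ batch = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
+
+ optimizer = AdamW(model.parameters(), lr=2e-5)            # AdamW at 2e-5
+ model.train()
+ for epoch in range(10):                                   # 10 epochs; the card reports batches of 16
+     loss = model(**batch, labels=labels).loss
+     optimizer.zero_grad()
+     loss.backward()
+     optimizer.step()
+ ```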

+ ## Evaluation

+ The model was evaluated on four classification tasks: cluster classification, domain classification, grade classification, and standard classification. (A Common Core code such as 3.OA.A.1, for example, decomposes into Grade 3, domain OA, cluster A, and standard 1.) The performance metrics used for evaluation are Accuracy, F1 Score, Precision, and Recall. Below are the results after training for **10 epochs**:

+ ### Overall Loss

+ - **Average Training Loss**: 0.2508
+ - **Average Validation Loss**: 1.9785
+ - **Training Loss**: 0.1843

+ ### Cluster Classification

+ | Metric        | Value  |
+ |---------------|--------|
+ | **Accuracy**  | 0.8797 |
+ | **F1 Score**  | 0.8792 |
+ | **Precision** | 0.8840 |
+ | **Recall**    | 0.8797 |

+ ### Domain Classification

+ | Metric        | Value  |
+ |---------------|--------|
+ | **Accuracy**  | 0.9177 |
+ | **F1 Score**  | 0.9175 |
+ | **Precision** | 0.9183 |
+ | **Recall**    | 0.9177 |

+ ### Grade Classification

+ | Metric        | Value  |
+ |---------------|--------|
+ | **Accuracy**  | 0.8858 |
+ | **F1 Score**  | 0.8861 |
+ | **Precision** | 0.8896 |
+ | **Recall**    | 0.8858 |

+ ### Standard Classification

+ | Metric        | Value  |
+ |---------------|--------|
+ | **Accuracy**  | 0.8334 |
+ | **F1 Score**  | 0.8323 |
+ | **Precision** | 0.8433 |
+ | **Recall**    | 0.8334 |
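
+ Accuracy and Recall match exactly in every table above, which is precisely what weighted-average recall gives (it reduces to overall accuracy), so weighted averaging appears to have been used. A minimal sketch of such a per-task evaluation (assumed recipe, toy labels rather than the real test set):

+ ```python
+ # Illustrative metric computation for one task (e.g. Domain) on toy labels.
+ # With average="weighted", recall equals accuracy, matching the tables above.
+ from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score
+
+ y_true = [0, 1, 2, 2, 1]   # gold class ids
+ y_pred = [0, 1, 2, 0, 1]   # predicted class ids
+
+ print("Accuracy :", accuracy_score(y_true, y_pred))
+ print("F1 Score :", f1_score(y_true, y_pred, average="weighted"))
+ print("Precision:", precision_score(y_true, y_pred, average="weighted"))
+ print("Recall   :", recall_score(y_true, y_pred, average="weighted"))
+ ```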
 
  ## How to Use

+ ```python
+ import torch
+ from transformers import BertTokenizer, BertConfig
+ from huggingface_hub import hf_hub_download
+ import joblib
+ import importlib.util
+
+ # Load the tokenizer and configuration from the model repo
+ tokenizer = BertTokenizer.from_pretrained('iolimat482/common-core-bert-hierarchical-classification')
+ config = BertConfig.from_pretrained('iolimat482/common-core-bert-hierarchical-classification')
+
+ # Download 'modeling.py', which defines the custom model class
+ modeling_file = hf_hub_download(repo_id='iolimat482/common-core-bert-hierarchical-classification', filename='modeling.py')
+
+ # Load the model class
+ spec = importlib.util.spec_from_file_location("modeling", modeling_file)
+ modeling = importlib.util.module_from_spec(spec)
+ spec.loader.exec_module(modeling)
+ BertHierarchicalClassification = modeling.BertHierarchicalClassification
+
+ # Instantiate the model
+ model = BertHierarchicalClassification(config)
+
+ # Load model weights
+ model_weights = hf_hub_download(repo_id='iolimat482/common-core-bert-hierarchical-classification', filename='best_model.pt')
+ model.load_state_dict(torch.load(model_weights, map_location=torch.device('cpu')))
+ model.eval()
+
+ # Load the fitted label encoders that map class indices back to label strings
+ label_encoders_path = hf_hub_download(repo_id='iolimat482/common-core-bert-hierarchical-classification', filename='label_encoders.joblib')
+ label_encoders = joblib.load(label_encoders_path)
+
+ def predict_standard(model, tokenizer, label_encoders, text):
+     # Tokenize input text
+     inputs = tokenizer(text, return_tensors='pt', truncation=True, padding=True)
+
+     # Perform inference; the model returns one logit tensor per level
+     with torch.no_grad():
+         grade_logits, domain_logits, cluster_logits, standard_logits = model(inputs['input_ids'], inputs['attention_mask'])
+
+     # Get the predicted class indices
+     grade_pred = torch.argmax(grade_logits, dim=1).item()
+     domain_pred = torch.argmax(domain_logits, dim=1).item()
+     cluster_pred = torch.argmax(cluster_logits, dim=1).item()
+     standard_pred = torch.argmax(standard_logits, dim=1).item()
+
+     # Map indices to labels
+     grade_label = label_encoders['Grade'].inverse_transform([grade_pred])[0]
+     domain_label = label_encoders['Domain'].inverse_transform([domain_pred])[0]
+     cluster_label = label_encoders['Cluster'].inverse_transform([cluster_pred])[0]
+     standard_label = label_encoders['Standard'].inverse_transform([standard_pred])[0]
+
+     return {
+         'Grade': grade_label,
+         'Domain': domain_label,
+         'Cluster': cluster_label,
+         'Standard': standard_label
+     }
+
+ # Example questions
+ questions = [
+     "Add 4 and 5 together. What is the sum?",
+     "What is 7 times 8?",
+     "Find the area of a rectangle with length 5 and width 3.",
+ ]
+
+ for question in questions:
+     prediction = predict_standard(model, tokenizer, label_encoders, question)
+     print(f"Question: {question}")
+     print("Predicted Standards:")
+     for key, value in prediction.items():
+         print(f"  {key}: {value}")
+     print()
+ ```
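
+ The custom class comes from the repo's `modeling.py`, which is not reproduced here. Purely as a hypothetical sketch of the shape that class likely has (four linear heads over the pooled BERT output, matching the four logit tensors returned above; the class name and head sizes below are made up):

+ ```python
+ # Hypothetical four-head architecture implied by the usage above.
+ # The real implementation lives in the repo's modeling.py and may differ.
+ import torch.nn as nn
+ from transformers import BertConfig, BertModel
+
+ class FourHeadBert(nn.Module):  # illustrative name only
+     def __init__(self, config: BertConfig,
+                  n_grade=13, n_domain=12, n_cluster=130, n_standard=380):  # made-up sizes
+         super().__init__()
+         self.bert = BertModel(config)
+         self.grade_head = nn.Linear(config.hidden_size, n_grade)
+         self.domain_head = nn.Linear(config.hidden_size, n_domain)
+         self.cluster_head = nn.Linear(config.hidden_size, n_cluster)
+         self.standard_head = nn.Linear(config.hidden_size, n_standard)
+
+     def forward(self, input_ids, attention_mask):
+         pooled = self.bert(input_ids=input_ids, attention_mask=attention_mask).pooler_output
+         return (self.grade_head(pooled), self.domain_head(pooled),
+                 self.cluster_head(pooled), self.standard_head(pooled))
+ ```

+ On a machine with a GPU, the standard PyTorch pattern applies (not something this card prescribes): move the model and the tokenized inputs to the same device before calling `predict_standard`, e.g. `model.to("cuda")` and `inputs = {k: v.to("cuda") for k, v in inputs.items()}`.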
 
  ## Limitations

  - The model's performance is limited to the data it was trained on.
  - May not generalize well to questions significantly different from the training data.

+ ## Citation

+ If you use this model in your work, please cite:

+ ```bibtex
+ @misc{olaimat2025commoncore,
+   author       = {Olaimat, Ibrahim},
+   title        = {Common Core BERT Hierarchical Classification},
+   year         = {2025},
+   howpublished = {\url{https://huggingface.co/iolimat482/common-core-bert-hierarchical-classification}}
+ }
+ ```

+ ## Connect with the Author

+ - 🤗 Hugging Face: [@iolimat482](https://huggingface.co/iolimat482)
+ - 💼 LinkedIn: [Ibrahim Olaimat](https://www.linkedin.com/in/ibrahim-olaimat-8ba1b4211)