# huner_disease

This model is a fine-tuned version of bert-base-cased on the transformer_dataset_ner_kaggle dataset. It achieves the following results on the evaluation set (a short sketch of how such metrics are computed follows the list):
- Loss: 0.2260
- Precision: 0.7906
- Recall: 0.8223
- F1: 0.8061
- Accuracy: 0.9796
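These are entity-level precision, recall, and F1 scores of the kind typically computed with seqeval for token-classification fine-tunes, while accuracy is token-level. A minimal sketch under that assumption, using made-up BIO-tagged sequences rather than anything taken from the actual dataset:

```python
# Illustrative only: entity-level metrics with seqeval on toy BIO sequences.
from seqeval.metrics import accuracy_score, f1_score, precision_score, recall_score

y_true = [["O", "B-Disease", "I-Disease", "O", "B-Disease"]]
y_pred = [["O", "B-Disease", "I-Disease", "O", "O"]]

print(precision_score(y_true, y_pred))  # entity-level precision -> 1.0
print(recall_score(y_true, y_pred))     # entity-level recall    -> 0.5
print(f1_score(y_true, y_pred))         # entity-level F1        -> ~0.667
print(accuracy_score(y_true, y_pred))   # token-level accuracy   -> 0.8
```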
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a minimal `Trainer` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
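These values map directly onto `TrainingArguments`. The sketch below shows one way such a run could be configured; it is an illustration under assumptions, not the author's actual training script. In particular, the label count and the (omitted) dataset loading/tokenization step are placeholders.

```python
# Sketch: a Trainer setup matching the hyperparameters above (illustrative).
from transformers import (
    AutoModelForTokenClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model_name = "bert-base-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# num_labels=3 assumes a plain O / B-Disease / I-Disease tag set (an assumption).
model = AutoModelForTokenClassification.from_pretrained(model_name, num_labels=3)

args = TrainingArguments(
    output_dir="huner_disease",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=50,
    lr_scheduler_type="linear",
    evaluation_strategy="epoch",  # per-epoch validation, as in the results table below
)
# Adam betas=(0.9, 0.999) and epsilon=1e-08 are the Transformers optimizer defaults.

# trainer = Trainer(model=model, args=args, tokenizer=tokenizer,
#                   train_dataset=..., eval_dataset=...)  # datasets omitted in this sketch
# trainer.train()
```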
### Training results

Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
---|---|---|---|---|---|---|---|
0.0651 | 1.0 | 1834 | 0.0703 | 0.6823 | 0.7880 | 0.7314 | 0.9767 |
0.0459 | 2.0 | 3668 | 0.0712 | 0.7470 | 0.7617 | 0.7543 | 0.9781 |
0.03 | 3.0 | 5502 | 0.0903 | 0.7278 | 0.8137 | 0.7684 | 0.9779 |
0.0177 | 4.0 | 7336 | 0.0915 | 0.7529 | 0.8055 | 0.7783 | 0.9791 |
0.0139 | 5.0 | 9170 | 0.1088 | 0.7346 | 0.8207 | 0.7753 | 0.9777 |
0.01 | 6.0 | 11004 | 0.1196 | 0.7283 | 0.8207 | 0.7718 | 0.9772 |
0.007 | 7.0 | 12838 | 0.1175 | 0.7615 | 0.7938 | 0.7773 | 0.9787 |
0.0055 | 8.0 | 14672 | 0.1488 | 0.7452 | 0.8237 | 0.7825 | 0.9783 |
0.0049 | 9.0 | 16506 | 0.1351 | 0.7704 | 0.8125 | 0.7909 | 0.9795 |
0.0042 | 10.0 | 18340 | 0.1617 | 0.7491 | 0.8184 | 0.7822 | 0.9782 |
0.0035 | 11.0 | 20174 | 0.1453 | 0.7557 | 0.8009 | 0.7776 | 0.9785 |
0.0036 | 12.0 | 22008 | 0.1662 | 0.7554 | 0.8198 | 0.7863 | 0.9777 |
0.0027 | 13.0 | 23842 | 0.1621 | 0.7781 | 0.8075 | 0.7925 | 0.9790 |
0.0027 | 14.0 | 25676 | 0.1599 | 0.7519 | 0.8110 | 0.7804 | 0.9776 |
0.0027 | 15.0 | 27510 | 0.1633 | 0.7710 | 0.8127 | 0.7913 | 0.9785 |
0.0027 | 16.0 | 29344 | 0.1674 | 0.7588 | 0.8129 | 0.7849 | 0.9780 |
0.0022 | 17.0 | 31178 | 0.1670 | 0.7652 | 0.8168 | 0.7902 | 0.9781 |
0.0021 | 18.0 | 33012 | 0.1586 | 0.7734 | 0.8159 | 0.7940 | 0.9790 |
0.002 | 19.0 | 34846 | 0.1650 | 0.7787 | 0.8172 | 0.7975 | 0.9795 |
0.0018 | 20.0 | 36680 | 0.1642 | 0.7697 | 0.8048 | 0.7868 | 0.9793 |
0.0017 | 21.0 | 38514 | 0.1874 | 0.7743 | 0.8176 | 0.7954 | 0.9784 |
0.0015 | 22.0 | 40348 | 0.1598 | 0.7647 | 0.8227 | 0.7926 | 0.9785 |
0.0012 | 23.0 | 42182 | 0.1819 | 0.7958 | 0.7997 | 0.7977 | 0.9793 |
0.0016 | 24.0 | 44016 | 0.1679 | 0.7960 | 0.8073 | 0.8016 | 0.9794 |
0.0013 | 25.0 | 45850 | 0.1659 | 0.7662 | 0.8147 | 0.7897 | 0.9785 |
0.001 | 26.0 | 47684 | 0.1774 | 0.7732 | 0.8217 | 0.7967 | 0.9789 |
0.0016 | 27.0 | 49518 | 0.1622 | 0.7767 | 0.8131 | 0.7945 | 0.9789 |
0.0007 | 28.0 | 51352 | 0.1958 | 0.7642 | 0.8223 | 0.7922 | 0.9783 |
0.0009 | 29.0 | 53186 | 0.1861 | 0.7764 | 0.8223 | 0.7987 | 0.9790 |
0.0012 | 30.0 | 55020 | 0.1917 | 0.7528 | 0.8252 | 0.7873 | 0.9774 |
0.0005 | 31.0 | 56854 | 0.1952 | 0.7833 | 0.8106 | 0.7967 | 0.9792 |
0.0009 | 32.0 | 58688 | 0.1910 | 0.7801 | 0.8149 | 0.7971 | 0.9791 |
0.0008 | 33.0 | 60522 | 0.1931 | 0.7737 | 0.8180 | 0.7952 | 0.9790 |
0.0006 | 34.0 | 62356 | 0.1902 | 0.7730 | 0.8176 | 0.7947 | 0.9788 |
0.0008 | 35.0 | 64190 | 0.1904 | 0.7799 | 0.8211 | 0.8 | 0.9791 |
0.0006 | 36.0 | 66024 | 0.1951 | 0.7844 | 0.8153 | 0.7995 | 0.9795 |
0.0008 | 37.0 | 67858 | 0.1943 | 0.7749 | 0.8256 | 0.7994 | 0.9791 |
0.0007 | 38.0 | 69692 | 0.2051 | 0.7796 | 0.8248 | 0.8016 | 0.9791 |
0.0004 | 39.0 | 71526 | 0.2108 | 0.7796 | 0.8223 | 0.8004 | 0.9792 |
0.0004 | 40.0 | 73360 | 0.2135 | 0.7788 | 0.8254 | 0.8014 | 0.9792 |
0.0004 | 41.0 | 75194 | 0.2028 | 0.7908 | 0.8176 | 0.8040 | 0.9798 |
0.0006 | 42.0 | 77028 | 0.2058 | 0.7855 | 0.8215 | 0.8031 | 0.9796 |
0.0005 | 43.0 | 78862 | 0.2109 | 0.7860 | 0.8254 | 0.8052 | 0.9793 |
0.0004 | 44.0 | 80696 | 0.2175 | 0.7784 | 0.8287 | 0.8028 | 0.9791 |
0.0003 | 45.0 | 82530 | 0.2206 | 0.7904 | 0.8223 | 0.8060 | 0.9795 |
0.0003 | 46.0 | 84364 | 0.2198 | 0.7942 | 0.8180 | 0.8059 | 0.9797 |
0.0004 | 47.0 | 86198 | 0.2265 | 0.7791 | 0.8233 | 0.8006 | 0.9791 |
0.0003 | 48.0 | 88032 | 0.2265 | 0.7825 | 0.8242 | 0.8028 | 0.9793 |
0.0004 | 49.0 | 89866 | 0.2260 | 0.7892 | 0.8209 | 0.8048 | 0.9794 |
0.0003 | 50.0 | 91700 | 0.2260 | 0.7906 | 0.8223 | 0.8061 | 0.9796 |
## Run the model

```python
from transformers import pipeline

model_checkpoint = "manibt1993/huner_disease"

# "simple" aggregation merges word pieces back into whole-entity spans.
token_classifier = pipeline(
    "token-classification", model=model_checkpoint, aggregation_strategy="simple"
)

token_classifier("patient has diabtes, anemia, hypertension with ckd which hurts the patient since 6 years. Patient today experience with right leg pain, fever and cough.")
```
## Model output

```python
[{'entity_group': 'Disease',
  'score': 0.69145554,
  'word': 'diabtes',
  'start': 12,
  'end': 19},
 {'entity_group': 'Disease',
  'score': 0.9955915,
  'word': 'anemia',
  'start': 21,
  'end': 27},
 {'entity_group': 'Disease',
  'score': 0.99971104,
  'word': 'hypertension',
  'start': 29,
  'end': 41},
 {'entity_group': 'Disease',
  'score': 0.9249976,
  'word': 'right leg pain',
  'start': 120,
  'end': 134},
 {'entity_group': 'Disease',
  'score': 0.9983512,
  'word': 'fever',
  'start': 136,
  'end': 141},
 {'entity_group': 'Disease',
  'score': 0.99849665,
  'word': 'cough',
  'start': 146,
  'end': 151}]
```
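Note that the misspelled "diabtes" gets a noticeably lower score (about 0.69) than the other entities, and symptom mentions such as "fever" and "cough" are also tagged as `Disease`. If that matters downstream, one simple post-processing option (illustrative, not part of the model) is to keep only high-confidence predictions:

```python
# Illustrative post-processing: drop low-confidence entity predictions.
# token_classifier is the pipeline built in "Run the model" above; the 0.9
# threshold is arbitrary and should be tuned for the target application.
predictions = token_classifier(
    "patient has diabtes, anemia, hypertension with ckd which hurts the patient since 6 years. Patient today experience with right leg pain, fever and cough."
)
confident = [p for p in predictions if p["score"] >= 0.9]
```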
### Framework versions

- Transformers 4.37.2
- PyTorch 2.0.0
- Datasets 2.16.1
- Tokenizers 0.15.1
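A quick way to check that a local environment matches these pins (standard `__version__` attributes; exact version matches may not be strictly required):

```python
# Print installed versions to compare against the pins listed above.
import datasets
import tokenizers
import torch
import transformers

print("Transformers:", transformers.__version__)
print("PyTorch:", torch.__version__)
print("Datasets:", datasets.__version__)
print("Tokenizers:", tokenizers.__version__)
```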