Update README.md
README.md
CHANGED
@@ -4,18 +4,17 @@
 {}
 ---
 
-# Model Card for Model
-
-<!-- Provide a quick summary of what the model is/does. -->
-
-This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
+# Model Card for distilbert-base-task-multi-label-classification
 
 ## Model Details
 
 ### Model Description
 
-
+This model is based on a distillation of the BERT base model, a widely used language model.
+Distillation trains a smaller model to mimic the behavior and predictions of the larger BERT model.
+This model is the result of fine-tuning the distilbert-base-pwc-task-multi-label-classification checkpoint for multi-label classification tasks.
 
+The same fine-tuning approach can be applied to other models such as RoBERTa, DeBERTa, DistilBERT, and CANINE; the accompanying notebook provides a practical guide for using these models in various classification scenarios.
 
 
 - **Developed by:** Lina Saba
@@ -27,7 +26,7 @@ This modelcard aims to be a base template for new models. It has been generated
 
 <!-- Provide the basic links for the model. -->
 
-- **Repository:**
+- **Repository:** https://colab.research.google.com/drive/1Z314gK2qixK_0ujgQ3nvqvar1iV3QnoF?usp=sharing
 - **Paper [optional]:** [More Information Needed]
 - **Demo [optional]:** [More Information Needed]
 
@@ -60,19 +59,8 @@ from 2011 that have been manually annotated for various forms of incivility incl
 ## Bias, Risks, and Limitations
 
 Technical limitations:
-- Can't print more than one identified label.
-
-### Recommendations
-
-<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
-Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
-
-## How to Get Started with the Model
-
-Use the code below to get started with the model.
-
-[More Information Needed]
+- The text-classification pipeline prints only one identified label per input.
+- About half of the test results do not exactly match the expected labels.
 
 ## Training Details
 
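The updated description covers fine-tuning DistilBERT for multi-label classification. A minimal sketch of what that setup typically looks like with Hugging Face Transformers follows; the model id `distilbert-base-uncased` and the three-label scheme are placeholders, since the card does not list the fine-tuned checkpoint's id or label names.

```python
# Minimal sketch: loading a multi-label classification checkpoint and scoring
# one input. The model id and label count are placeholders, not this card's
# actual configuration.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "distilbert-base-uncased"  # placeholder; use the fine-tuned checkpoint id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(
    model_id,
    num_labels=3,                               # hypothetical label count
    problem_type="multi_label_classification",  # sigmoid + BCE; labels scored independently
)

inputs = tokenizer("An example sentence to label.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Multi-label inference: sigmoid each logit and keep every label above a
# threshold, rather than taking a single softmax argmax.
probs = torch.sigmoid(logits)[0]
predicted_ids = [i for i, p in enumerate(probs.tolist()) if p > 0.5]
print(predicted_ids)
```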
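The first limitation above ("prints only one identified label") matches the default behavior of the text-classification pipeline, which reports only the top-scoring label. Passing `top_k=None` asks it for every label's score, which can then be thresholded; a sketch under the same placeholder model id (older transformers releases used `return_all_scores=True` for this):

```python
from transformers import pipeline

# top_k=None makes the pipeline return a score for every label instead of
# only the single best one. The model id is a placeholder for the card's
# fine-tuned checkpoint; a multi-label checkpoint's config makes the
# pipeline apply a sigmoid rather than a softmax to the logits.
classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased",  # placeholder; use the fine-tuned checkpoint id
    top_k=None,
)

all_scores = classifier(["An example sentence to label."])[0]  # [{'label': ..., 'score': ...}, ...]
# For multi-label output, keep every label whose score clears a threshold.
predicted = [s["label"] for s in all_scores if s["score"] > 0.5]
print(predicted)
```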