---
license: mit
---
**BertForSequenceClassification model (Classical Chinese)**

This BertForSequenceClassification Classical Chinese model is intended to predict whether a Classical Chinese sentence is a letter title (书信标题) or not. The model starts from the BERT base Chinese model (MLM), is fine-tuned on a large corpus of Classical Chinese (a 3GB textual dataset), and is then combined with the BertForSequenceClassification architecture to perform a binary classification task.

**Labels: 0 = non-letter, 1 = letter**

**Model description**

The BertForSequenceClassification architecture extends the BERT base model with a fully-connected linear layer on top to perform a binary classification task.

**Sequence classification:** the model adds a fully-connected linear layer that outputs a probability for each class. In our binary classification task, this final linear layer has two output classes.
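
As a minimal sketch of this head using the Hugging Face `transformers` API (`bert-base-chinese` below stands in for the Classical-Chinese-adapted encoder described above):

```python
from transformers import BertForSequenceClassification

# Stand-in base checkpoint: the actual model continues from a BERT
# fine-tuned on Classical Chinese, not plain bert-base-chinese.
model = BertForSequenceClassification.from_pretrained(
    "bert-base-chinese",
    num_labels=2,                             # binary task
    id2label={0: "non-letter", 1: "letter"},  # labels per the model card
    label2id={"non-letter": 0, "letter": 1},
)
print(model.classifier)  # Linear(in_features=768, out_features=2, bias=True)
```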

**Intended uses & limitations**

Note that this model is primarily aimed at predicting whether a Classical Chinese sentence is a letter title (书信标题) or not.

**How to use**

You can use this model directly with a pipeline for text classification:
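
A minimal sketch, assuming the checkpoint is published on the Hugging Face Hub (the repository id below is hypothetical; replace it with this model's actual id):

```python
from transformers import pipeline

# Hypothetical Hub id; substitute the real repository id for this model.
classifier = pipeline("text-classification", model="cbdb/letter-title-classifier")

# 与山巨源绝交书 ("Letter Breaking Off Relations with Shan Juyuan") is a
# well-known Classical Chinese letter title.
print(classifier("与山巨源绝交书"))
# e.g. [{'label': 'letter', 'score': ...}]  ->  label 1 = letter
```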