lukecq committed
Commit 94278f8 · 1 Parent(s): 7aa7155

Update README.md

Files changed (1): README.md (+2 -2)
README.md CHANGED
@@ -125,7 +125,7 @@ The learning objective for FSP is to predict the index of the correct label.
 A cross-entropy loss is used for tuning the model.
 
 ## Model variations
-There are three versions of models released. The details are:
+There are four versions of models released. The details are:
 
 | Model | Backbone | #params | lang | acc | Speed | #Train
 |------------|-----------|----------|-------|-------|----|-------------|
@@ -134,7 +134,7 @@ There are three versions of models released. The details are:
 | [zero-shot-classify-SSTuning-ALBERT](https://huggingface.co/DAMO-NLP-SG/zero-shot-classify-SSTuning-ALBERT) | [albert-xxlarge-v2](https://huggingface.co/albert-xxlarge-v2) | 235M | En | High | Low| 5.12M |
 | [zero-shot-classify-SSTuning-XLM-R](https://huggingface.co/DAMO-NLP-SG/zero-shot-classify-SSTuning-XLM-R) | [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) | 278M | Multi | - | - | 20.48M |
 
-Please note that zero-shot-classify-SSTuning-XLM-R is trained with 20.48M English samples only. However, it can also be used in other languages as long as XLM-R supports.
+Please note that zero-shot-classify-SSTuning-XLM-R is trained with 20.48M English samples only. However, it can also be used in other languages as long as xlm-roberta supports.
 Please check [this repository](https://github.com/DAMO-NLP-SG/SSTuning) for the performance of each model.
 
 ## Intended uses & limitations
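
For context on the diff above: the README states that the FSP objective trains the model to predict the index of the correct label with a cross-entropy loss, i.e. for option logits $z$ and gold index $y$ the loss is $-\log \mathrm{softmax}(z)_y$. Below is a minimal sketch of zero-shot classification with one of the checkpoints from the table, assuming the standard `transformers` sequence-classification interface and a lettered-option prompt in the style the SSTuning repository describes; the exact prompt template and label handling are assumptions, not verified against the released code.

```python
# Minimal sketch (not the repository's verified usage): zero-shot
# classification with an SSTuning checkpoint via transformers.
import string

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "DAMO-NLP-SG/zero-shot-classify-SSTuning-XLM-R"  # from the table above
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# Candidate labels for this run; the checkpoint itself is label-agnostic.
labels = ["negative", "positive"]
text = "I love this place! The food is always so fresh and delicious."

# ASSUMPTION: FSP-style input that enumerates the labels as lettered
# options before the text, e.g. "(A) negative (B) positive <text>".
options = " ".join(
    f"({string.ascii_uppercase[i]}) {label}" for i, label in enumerate(labels)
)
inputs = tokenizer(options + " " + text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

# The FSP objective trains the model to output the index of the correct
# option, so only the first len(labels) logits are considered here.
pred = logits[0, : len(labels)].argmax().item()
print(labels[pred])
```

Because the label set is supplied at inference time inside the prompt, the same checkpoint can be pointed at arbitrary label sets; this is also what the multilingual note above relies on for the XLM-R variant.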