Trent committed on
Commit 53227c0 · 1 Parent(s): 82e684b

minor updates

Files changed (1)
  1. app.py +5 -5
app.py CHANGED

```diff
@@ -42,8 +42,8 @@ if menu == "Contributions & Evaluation":
 | Model | [FullEvaluation](https://docs.google.com/spreadsheets/d/1vXJrIg38cEaKjOG5y4I4PQwAQFUmCkohbViJ9zj_Emg/edit#gid=1809754143) Average | 20Newsgroups Clustering | StackOverflow DupQuestions | Twitter SemEval2015 |
 |-----------|---------------------------------------|-------|-------|-------|
 | paraphrase-mpnet-base-v2 (previous SOTA) | 67.97 | 47.79 | 49.03 | 72.36 |
-| **all_datasets_v3_roberta-large (400k steps)** | **70.22** | 50.12 | 52.18 | 75.28 |
-| **all_datasets_v3_mpnet-base (440k steps)** | **70.01** | 50.22 | 52.24 | 76.27 |
+| **all_datasets_v3_roberta-large (400k steps)** | **70.22** | **50.12** | **52.18** | **75.28** |
+| **all_datasets_v3_mpnet-base (440k steps)** | **70.01** | **50.22** | **52.24** | **76.27** |
 ''')
 elif menu == "Sentence Similarity":
     st.header('Sentence Similarity')
@@ -54,7 +54,7 @@ metric between our main sentence and the others.
 
 For more cool information on sentence embeddings, see the [sBert project](https://www.sbert.net/examples/applications/computing-embeddings/README.html).
 ''')
-    select_models = st.multiselect("Choose models", options=list(MODELS_ID), default=list(MODELS_ID)[0])
+    select_models = st.multiselect("Choose models", options=list(MODELS_ID), default=list(MODELS_ID))
 
     anchor = st.text_input(
         'Please enter here the main text you want to compare:',
@@ -93,8 +93,8 @@ elif menu == "Asymmetric QA":
 **Instructions**: You can compare the Answer likeliness of a given Query with answer candidates of your choice. In the
 background, we'll create an embedding for each answer, and then we'll use the cosine similarity function to calculate a
 similarity metric between our query sentence and the others.
-`mpnet_asymmetric_qa` model works best for hard-negative answers or distinguishing similar queries due to separate models
-applied for encoding questions and answers.
+`mpnet_asymmetric_qa` model works best for hard-negative answers or distinguishing answers that are actually questions
+due to separate models applied for encoding questions and answers.
 
 For more cool information on sentence embeddings, see the [sBert project](https://www.sbert.net/examples/applications/computing-embeddings/README.html).
 ''')
```
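Both the Sentence Similarity and Asymmetric QA sections describe the same ranking step: embed each text, then score every candidate against the query with cosine similarity. A minimal sketch of that step, with a toy bag-of-words `embed()` standing in for the real sentence-transformer encoders (the function names, vocabulary, and example texts here are illustrative, not the app's actual code):

```python
import math

def cosine_similarity(u, v):
    # Cosine of the angle between two embedding vectors: dot(u, v) / (|u| * |v|).
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

# Toy stand-in for a sentence encoder: bag-of-words counts over a tiny
# vocabulary. The app uses sentence-transformer models instead; an
# asymmetric-QA setup would use one encoder for queries and a separate
# encoder for answer candidates, while the scoring below stays the same.
VOCAB = ["cat", "pet", "drive", "car"]

def embed(text):
    words = text.lower().split()
    return [float(words.count(w)) for w in VOCAB]

query = "my pet cat"
candidates = ["the cat is a pet", "i like to drive my car"]

# Rank candidates by cosine similarity to the query embedding.
scores = [cosine_similarity(embed(query), embed(c)) for c in candidates]
best = candidates[max(range(len(candidates)), key=scores.__getitem__)]
# best -> "the cat is a pet"
```

The split between a query encoder and an answer encoder is what the `mpnet_asymmetric_qa` note is pointing at: questions and answers have different surface forms, so mapping them through separate models into one shared embedding space tends to handle hard negatives better than a single symmetric encoder.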