Added main information blurb for NLI
app.py CHANGED
@@ -110,7 +110,7 @@ with gr.Blocks(
     <p><b>Model:</b> ELECTRA Bert Small <br>
     <b>Dataset:</b> Stanford Natural Language Inference Dataset <br>
     <b>NLP Task:</b> Natural Language Inferencing</p>
-    <p>
+    <p>Natural Language Inference (NLI), also referred to as Textual Entailment, is an NLP task whose objective is to determine the relationship between two pieces of text. In this demonstration the ELECTRA Bert Small model is used to compare the two input prompts and ascribe a similarity level to them. ELECTRA Small was chosen because its modest performance in its base state leaves room for improvement during training. The models were trained on the Stanford Natural Language Inference Dataset, a collection of 570k human-written English sentence pairs manually labeled for balanced classification. When training is performed over [XX] epochs we see an increase of X% in training time for the LoRA-trained model.</p>
     """)
     with gr.Column(scale=0.3, variant="panel"):
         nli_p1 = gr.Textbox(placeholder="Prompt One", label="Enter Query")
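For context, the blurb added above describes fine-tuning ELECTRA Small on SNLI with a LoRA adapter and comparing its training time against full fine-tuning. Below is a minimal sketch of how such a setup might look with the transformers and peft libraries; the checkpoint name, LoRA target modules, and hyperparameters are assumptions and are not taken from this Space's training code.

```python
# Minimal sketch (not this Space's actual training code): fine-tuning an
# ELECTRA Small classifier on SNLI sentence pairs with a LoRA adapter.
# The checkpoint name, LoRA target modules, and hyperparameters are assumptions.
import torch
from datasets import load_dataset
from peft import LoraConfig, TaskType, get_peft_model
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "google/electra-small-discriminator"  # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, num_labels=3)  # SNLI labels: entailment / neutral / contradiction

# Wrap the base model so only small low-rank adapter matrices (plus the
# classification head) are trained, rather than all ELECTRA weights.
lora_config = LoraConfig(task_type=TaskType.SEQ_CLS, r=8, lora_alpha=16,
                         lora_dropout=0.1, target_modules=["query", "value"])
model = get_peft_model(model, lora_config)

def tokenize(batch):
    # Each SNLI example is a premise/hypothesis pair.
    return tokenizer(batch["premise"], batch["hypothesis"],
                     truncation=True, max_length=128, padding="max_length")

snli = load_dataset("snli").filter(lambda ex: ex["label"] != -1)  # drop unlabeled pairs
snli = snli.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="electra-snli-lora",
                           num_train_epochs=3,  # placeholder for the blurb's [XX] epochs
                           per_device_train_batch_size=32),
    train_dataset=snli["train"],
    eval_dataset=snli["validation"],
)
trainer.train()

# At inference time the demo would compare two user prompts the same way:
pair = tokenizer("A man is playing a guitar.", "A person is making music.",
                 return_tensors="pt")
with torch.no_grad():
    label_id = model(**pair).logits.argmax(-1).item()
```

Running the same script with the `get_peft_model` step omitted would give the full fine-tuning baseline that the blurb's training-time comparison presumably refers to.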