# MCTI Text Classification Task (cased/uncased) DRAFT

Disclaimer:

## According to the abstract,

…learning models to improve the comprehension of each sentence. Compared to the b…
the Word2Vec-based approach improved the accuracy rate to 88%. The research results serve as a
successful case of artificial intelligence in a federal government application.
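
The abstract names a Word2Vec-based approach, but this excerpt does not show how the sentence representation is
built. As a minimal sketch of one common reading (the corpus, labels, and hyperparameters below are invented
placeholders, not the paper's data or settings), word vectors can be averaged into fixed-size sentence features
and fed to a linear classifier:

```python
# Hedged sketch of a Word2Vec-based text classifier: average the word vectors
# of each sentence into one feature vector, then train a classifier on top.
# The tiny corpus, labels, and hyperparameters are illustrative placeholders.
import numpy as np
from gensim.models import Word2Vec
from sklearn.linear_model import LogisticRegression

corpus = [["call", "for", "research", "funding", "proposals"],
          ["routine", "administrative", "notice"]]
labels = [1, 0]  # 1 = relevant funding opportunity, 0 = not relevant

w2v = Word2Vec(sentences=corpus, vector_size=50, window=5, min_count=1, epochs=30)

def sentence_vector(tokens):
    """Average the vectors of in-vocabulary tokens (zeros if none are known)."""
    vecs = [w2v.wv[t] for t in tokens if t in w2v.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(w2v.vector_size)

X = np.stack([sentence_vector(s) for s in corpus])
clf = LogisticRegression().fit(X, labels)
```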

## Model description

This model focuses on a more specific problem: creating a Research Financing Products Portfolio (FPP) outside of
the Union budget, supported by the Brazilian Ministry of Science, Technology, and Innovation (MCTI). It was
introduced in ["Using transfer learning to classify long unstructured texts with small amounts of labeled data"](https://www.scitepress.org/Link.aspx?doi=10.5220/0011527700003318) and first released in
[this repository](https://huggingface.co/unb-lamfo-nlp-mcti). This model is uncased: it does not make a difference
between english and English.
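
Since the model is distributed through the Hugging Face Hub, loading it should follow the standard `transformers`
pattern. A hedged sketch: the checkpoint id below is an assumption, because this card only links to the
organization page.

```python
# Hedged usage sketch with Hugging Face Transformers. The checkpoint id is an
# assumption for illustration only: substitute the real model id published
# under https://huggingface.co/unb-lamfo-nlp-mcti.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="unb-lamfo-nlp-mcti/NLP-MCTI",  # hypothetical id; replace with the published one
)
print(classifier("Open call for research financing proposals."))
```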
![Classification architecture model](Classification_Architecture_model.png)

https://github.com/Marcosdib/S2Query/upload
Disclaimer: The team releasing BERT did not write a model card for this model, so this model card has been written by
the Hugging Face team.
## Model variations

XXXX was originally released in base and large variations, for cased and uncased input text. The uncased models
also strip out accent markers. Chinese and multilingual uncased and cased versions followed shortly after.
Modified preprocessing with whole word masking replaced subpiece masking in a follow-up work, with the release of
two models.
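
As a concrete illustration of the uncased behavior described above (lowercasing plus accent stripping), here is a
sketch with `bert-base-uncased` as a stand-in tokenizer; the MCTI checkpoint's own tokenizer id is not named in
this excerpt.

```python
# What "uncased" means in practice, shown with a stand-in uncased checkpoint.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
print(tokenizer.tokenize("English and english"))  # ['english', 'and', 'english']
print(tokenizer.tokenize("Ministério"))           # accent is stripped before WordPiece splitting
```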