Update README.md
README.md CHANGED
@@ -14,7 +14,7 @@ base_model:

# mhaseeb1604/bge-m3-law

-This model is a fine-tuned version of the [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) model, specialized for sentence similarity tasks
+This model is a fine-tuned version of the [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) model, specialized for sentence similarity tasks on legal texts in both Arabic and English. It maps sentences and paragraphs to a 1024-dimensional dense vector space, useful for tasks such as clustering and semantic search.

## Model Overview
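The new description implies the usual sentence-transformers workflow. As a minimal sketch, not part of this commit, assuming the checkpoint is published on the Hub under the id `mhaseeb1604/bge-m3-law` and that the example sentences are purely illustrative:

```python
# Minimal usage sketch: rank candidate passages against a query by cosine
# similarity. Sentences are illustrative placeholders, not from the model card.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("mhaseeb1604/bge-m3-law")

query = "What remedies are available for breach of contract?"
passages = [
    "The injured party may claim compensation for damages caused by the breach.",
    "يجوز للطرف المتضرر المطالبة بالتعويض عن الأضرار الناشئة عن الإخلال بالعقد.",
    "The museum opens at nine and closes at five.",
]

query_emb = model.encode(query, convert_to_tensor=True)       # one 1024-dim vector
passage_embs = model.encode(passages, convert_to_tensor=True)  # one per passage

print(query_emb.shape[-1])                    # 1024
print(util.cos_sim(query_emb, passage_embs))  # legal passages should score highest
```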
@@ -77,13 +77,13 @@ SentenceTransformer(
)
```

-- **Transformer Layer**: Uses XLM-Roberta model with max sequence length of 8192.
+- **Transformer Layer**: Uses an XLM-Roberta model with a max sequence length of 8192.
- **Pooling Layer**: Utilizes CLS token pooling to generate sentence embeddings.
- **Normalization Layer**: Ensures normalized output vectors for better performance in similarity tasks.
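A rough sketch of how the three modules listed above are typically composed with standard sentence-transformers building blocks; the module arguments here are assumptions inferred from the card, not the repository's own code:

```python
# Illustrative composition of the three layers described above; arguments
# (backbone id, sequence length, pooling mode) are assumptions, not repo code.
import numpy as np
from sentence_transformers import SentenceTransformer, models

transformer = models.Transformer("BAAI/bge-m3", max_seq_length=8192)  # XLM-Roberta backbone
pooling = models.Pooling(
    transformer.get_word_embedding_dimension(),  # 1024 for bge-m3
    pooling_mode="cls",                          # CLS-token pooling
)
normalize = models.Normalize()                   # L2-normalize output vectors

model = SentenceTransformer(modules=[transformer, pooling, normalize])

emb = model.encode("A short legal clause.")
print(emb.shape, np.linalg.norm(emb))  # (1024,) and ~1.0, thanks to normalization
```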
## Citing & Authors

-If you find this repository useful, please consider giving a star :
+If you find this repository useful, please consider giving it a star and a citation:

```bibtex
@misc {muhammad_haseeb_2024,