This dataset is designed for training search query expansion models that can generate multiple semantic expansions for a given query.

## Purpose

The goal of this dataset is to serve as input for training small language models (0.5B to 3B parameters) to act as query expander models in various search systems, including but not limited to Retrieval-Augmented Generation (RAG) systems.

Query expansion is a technique used to enhance search results by generating additional relevant queries. While advanced search systems often use large language models for query expansion, this can introduce latency. The purpose of this dataset is to enable the development of smaller, efficient query expander models that can perform this task without the added latency.
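For illustration, here is a minimal sketch of where a query expander fits in a retrieval flow. The `expand_query` lookup table below is a hand-written stand-in for a trained model, and the keyword retriever is a toy; both exist only to show how results from the original and expanded queries are merged.

```python
def expand_query(query: str) -> list[str]:
    # Placeholder lookup; a trained expander model would generate these.
    canned = {
        "python web scraping": [
            "python html parsing libraries",
            "extract data from websites with python",
        ],
    }
    return canned.get(query, [])


def retrieve(query: str, docs: list[str]) -> set[str]:
    # Toy keyword retriever: a document matches if it shares any term
    # with the query.
    terms = set(query.lower().split())
    return {doc for doc in docs if terms & set(doc.lower().split())}


def search(query: str, docs: list[str]) -> set[str]:
    # Run the original query plus each expansion, then merge the results.
    results = retrieve(query, docs)
    for expansion in expand_query(query):
        results |= retrieve(expansion, docs)
    return results
```

Searching with the expansions can recover documents that share no terms with the literal query.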

This query expansion dataset can be used in a variety of search system architectures, such as the one illustrated below:

<p align="center">
  <img src="query-expansion-schema.jpg" width="700px" alt="Query Expansion Schema">
</p>

The dataset serves as a key component in training query expansion models, which generate additional relevant queries to enhance the retrieval process and improve the overall performance of search systems.
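As a rough sketch of how a record could become a supervised training pair for a small LM — note that the column names `query` and `expansions` are assumptions for illustration, not the documented schema; check the actual dataset before reusing this:

```python
def to_training_pair(record: dict) -> tuple[str, str]:
    # Build a prompt/target pair for supervised fine-tuning.
    # Field names "query" and "expansions" are assumed, not guaranteed.
    prompt = f"Expand the search query: {record['query']}\nExpansions:"
    # One expansion per line as the generation target.
    target = "\n".join(record["expansions"])
    return prompt, target
```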

This dataset is the first step. In the near future, I plan to release the trained query expander models as well.

## Dataset creation

This dataset was created using a diverse set of state-of-the-art large language models. These LLMs were prompted with queries covering a wide range of topics and lengths, representing different user intents.

The choice to use multiple LLMs was made to reduce the bias that might be introduced by using a single model. The broad spectrum of topics covered and the variety of query intents (informational, navigational, transactional, commercial) ensure the dataset is comprehensive and diverse. After generation, the data underwent manual curation to ensure high quality.
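A simplified sketch of pooling candidates under this multi-model setup (the normalization and deduplication shown here are illustrative only; the actual curation of this dataset was manual):

```python
def pool_expansions(per_model: dict[str, list[str]]) -> list[str]:
    # Pool candidate expansions produced by several LLMs for one query,
    # dropping near-duplicates before human review.
    seen: set[str] = set()
    pooled: list[str] = []
    for expansions in per_model.values():
        for expansion in expansions:
            # Normalize case and whitespace so near-identical strings collapse.
            key = " ".join(expansion.lower().split())
            if key not in seen:
                seen.add(key)
                pooled.append(expansion.strip())
    return pooled
```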

Load the dataset with the `datasets` library:

```python
from datasets import load_dataset

dataset = load_dataset("s-emanuilov/query-expansion")
```

## Limitations and alternative approaches

While this dataset provides a valuable resource for training query expansion models, it's important to note that alternative approaches, such as thesaurus-based methods, BERT-like models, or large language model APIs, may be more suitable depending on the specific use case and requirements. Each approach has its own limitations and considerations, such as the ability to handle short and long queries, computational resource requirements, latency, security, and cost.
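To make the comparison concrete, here is a minimal thesaurus-based expander. The synonym table is a hand-made toy standing in for a real resource such as WordNet; note how it can only substitute known terms, which is one reason model-based expansion can generalize better.

```python
# Toy thesaurus; a real system would use a resource such as WordNet.
SYNONYMS = {
    "cheap": ["budget", "affordable"],
    "laptop": ["notebook"],
}


def thesaurus_expand(query: str) -> list[str]:
    # Produce one variant query per known synonym of each term.
    expansions = []
    terms = query.lower().split()
    for i, term in enumerate(terms):
        for synonym in SYNONYMS.get(term, []):
            variant = terms.copy()
            variant[i] = synonym
            expansions.append(" ".join(variant))
    return expansions
```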

## License

This dataset is licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0).