sha (string, length 40) | text (string, length 1–13.4M) | id (string, length 2–117) | tags (sequence, 1–7.91k items) | created_at (string, length 25) | metadata (string, length 2–875k) | last_modified (string, length 25) | arxiv (sequence, 0–25 items) | languages (sequence, 0–7.91k items) | tags_str (string, length 17–159k) | text_str (string, length 1–447k) | text_lists (sequence, 0–352 items) | processed_texts (sequence, 1–353 items) |
---|---|---|---|---|---|---|---|---|---|---|---|---|
54c838f7d7ece003bd760fa43e3007526baa3abc | # Dataset Card for "lmind_nq_train6000_eval6489_v1_reciteonly_qa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/lmind_nq_train6000_eval6489_v1_reciteonly_qa | [
"region:us"
] | 2024-02-06T16:17:49+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train_qa", "path": "data/train_qa-*"}, {"split": "train_ic_qa", "path": "data/train_ic_qa-*"}, {"split": "train_recite_qa", "path": "data/train_recite_qa-*"}, {"split": "eval_qa", "path": "data/eval_qa-*"}, {"split": "eval_ic_qa", "path": "data/eval_ic_qa-*"}, {"split": "eval_recite_qa", "path": "data/eval_recite_qa-*"}, {"split": "all_docs", "path": "data/all_docs-*"}, {"split": "all_docs_eval", "path": "data/all_docs_eval-*"}, {"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "answers", "struct": [{"name": "answer_start", "sequence": "null"}, {"name": "text", "sequence": "string"}]}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train_qa", "num_bytes": 697367, "num_examples": 6000}, {"name": "train_ic_qa", "num_bytes": 4540536, "num_examples": 6000}, {"name": "train_recite_qa", "num_bytes": 4546536, "num_examples": 6000}, {"name": "eval_qa", "num_bytes": 752802, "num_examples": 6489}, {"name": "eval_ic_qa", "num_bytes": 4906186, "num_examples": 6489}, {"name": "eval_recite_qa", "num_bytes": 4912675, "num_examples": 6489}, {"name": "all_docs", "num_bytes": 7126313, "num_examples": 10925}, {"name": "all_docs_eval", "num_bytes": 7125701, "num_examples": 10925}, {"name": "train", "num_bytes": 4546536, "num_examples": 6000}, {"name": "validation", "num_bytes": 4912675, "num_examples": 6489}], "download_size": 27348363, "dataset_size": 44067327}} | 2024-02-06T16:18:21+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "lmind_nq_train6000_eval6489_v1_reciteonly_qa"
More Information needed | [
"# Dataset Card for \"lmind_nq_train6000_eval6489_v1_reciteonly_qa\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"lmind_nq_train6000_eval6489_v1_reciteonly_qa\"\n\nMore Information needed"
] |
dd12ed9d2a8687915f4bfa1d35cf72ad80d7d437 | # Dataset Card for "lmind_nq_train6000_eval6489_v1_ic_qa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/lmind_nq_train6000_eval6489_v1_ic_qa | [
"region:us"
] | 2024-02-06T16:18:21+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train_qa", "path": "data/train_qa-*"}, {"split": "train_ic_qa", "path": "data/train_ic_qa-*"}, {"split": "train_recite_qa", "path": "data/train_recite_qa-*"}, {"split": "eval_qa", "path": "data/eval_qa-*"}, {"split": "eval_ic_qa", "path": "data/eval_ic_qa-*"}, {"split": "eval_recite_qa", "path": "data/eval_recite_qa-*"}, {"split": "all_docs", "path": "data/all_docs-*"}, {"split": "all_docs_eval", "path": "data/all_docs_eval-*"}, {"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "answers", "struct": [{"name": "answer_start", "sequence": "null"}, {"name": "text", "sequence": "string"}]}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train_qa", "num_bytes": 697367, "num_examples": 6000}, {"name": "train_ic_qa", "num_bytes": 4540536, "num_examples": 6000}, {"name": "train_recite_qa", "num_bytes": 4546536, "num_examples": 6000}, {"name": "eval_qa", "num_bytes": 752802, "num_examples": 6489}, {"name": "eval_ic_qa", "num_bytes": 4906186, "num_examples": 6489}, {"name": "eval_recite_qa", "num_bytes": 4912675, "num_examples": 6489}, {"name": "all_docs", "num_bytes": 7126313, "num_examples": 10925}, {"name": "all_docs_eval", "num_bytes": 7125701, "num_examples": 10925}, {"name": "train", "num_bytes": 4540536, "num_examples": 6000}, {"name": "validation", "num_bytes": 4906186, "num_examples": 6489}], "download_size": 27483678, "dataset_size": 44054838}} | 2024-02-06T16:18:57+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "lmind_nq_train6000_eval6489_v1_ic_qa"
More Information needed | [
"# Dataset Card for \"lmind_nq_train6000_eval6489_v1_ic_qa\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"lmind_nq_train6000_eval6489_v1_ic_qa\"\n\nMore Information needed"
] |
f28c2e7ddba64a4dcde761889d2c541904982c36 |
# Dataset Card for evol-test-3.5
This dataset has been created with [Argilla](https://docs.argilla.io).
As shown in the sections below, this dataset can be loaded into Argilla as explained in [Load with Argilla](#load-with-argilla), or used directly with the `datasets` library in [Load with `datasets`](#load-with-datasets).
## Dataset Description
- **Homepage:** https://argilla.io
- **Repository:** https://github.com/argilla-io/argilla
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset contains:
* A dataset configuration file conforming to the Argilla dataset format named `argilla.yaml`. This configuration file will be used to configure the dataset when using the `FeedbackDataset.from_huggingface` method in Argilla.
* Dataset records in a format compatible with HuggingFace `datasets`. These records will be loaded automatically when using `FeedbackDataset.from_huggingface` and can be loaded independently using the `datasets` library via `load_dataset`.
* The [annotation guidelines](#annotation-guidelines) that have been used for building and curating the dataset, if they've been defined in Argilla.
### Load with Argilla
To load with Argilla, install Argilla with `pip install argilla --upgrade` and then use the following code:
```python
import argilla as rg
ds = rg.FeedbackDataset.from_huggingface("sdiazlor/evol-test-3.5")
```
### Load with `datasets`
To load this dataset with `datasets`, install `datasets` with `pip install datasets --upgrade` and then use the following code:
```python
from datasets import load_dataset
ds = load_dataset("sdiazlor/evol-test-3.5")
```
### Supported Tasks and Leaderboards
This dataset can contain [multiple fields, questions and responses](https://docs.argilla.io/en/latest/conceptual_guides/data_model.html#feedback-dataset) so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the [Dataset Structure section](#dataset-structure).
There are no leaderboards associated with this dataset.
### Languages
[More Information Needed]
## Dataset Structure
### Data in Argilla
The dataset is created in Argilla with: **fields**, **questions**, **suggestions**, **metadata**, **vectors**, and **guidelines**.
The **fields** are the dataset records themselves; for the moment, only text fields are supported. These are the ones that will be used to provide responses to the questions.
| Field Name | Title | Type | Required | Markdown |
| ---------- | ----- | ---- | -------- | -------- |
| input | input | text | True | True |
| instructions | instructions | text | True | False |
The **questions** are the questions that will be asked to the annotators. They can be of different types, such as rating, text, label_selection, multi_label_selection, or ranking.
| Question Name | Title | Type | Required | Description | Values/Labels |
| ------------- | ----- | ---- | -------- | ----------- | ------------- |
| instruction-rating | How would you rate the generated instruction? | rating | True | N/A | [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] |
The **suggestions** are human- or machine-generated recommendations for each question that assist the annotator during the annotation process. They are always linked to an existing question and are named by appending "-suggestion" and "-suggestion-metadata" to the question name, holding the suggested value(s) and their metadata, respectively. The possible values are therefore the same as in the table above; the corresponding columns simply carry the "-suggestion" and "-suggestion-metadata" suffixes.
The **metadata** is a dictionary that can be used to provide additional information about the dataset record. This can be useful to give the annotators extra context, or to record details about the record itself, such as the author, the date, the source, or a link to the original source. The metadata is always optional, and can be potentially linked to the `metadata_properties` defined in the dataset configuration file in `argilla.yaml`.
**✨ NEW** The **vectors** are additional columns that contain a floating-point vector, constrained to the dimensions pre-defined in the **vectors_settings** when the vectors are configured within the dataset; the vectors are always 1-dimensional arrays. The **vectors** are optional and identified by the pre-defined vector name in the dataset configuration file in `argilla.yaml`.
| Vector Name | Title | Dimensions |
|-------------|-------|------------|
| input | input | [1, 384] |
| instructions | instructions | [1, 384] |
The **metadata properties** defined for this dataset are the following:
| Metadata Name | Title | Type | Values | Visible for Annotators |
| ------------- | ----- | ---- | ------ | ---------------------- |
| length-input | length-input | integer | None - None | True |
| length-instruction | length-instruction | integer | None - None | True |
| input_n_tokens | Input N Tokens | integer | None - None | True |
| input_n_unique_tokens | Input N Unique Tokens | integer | None - None | True |
| input_n_sentences | Input N Sentences | integer | None - None | True |
| input_perplexity | Input Perplexity | float | None - None | True |
| input_entropy | Input Entropy | float | None - None | True |
| input_flesch_reading_ease | Input Flesch Reading Ease | float | None - None | True |
| instructions_n_tokens | Instructions N Tokens | integer | None - None | True |
| instructions_n_unique_tokens | Instructions N Unique Tokens | integer | None - None | True |
| instructions_n_sentences | Instructions N Sentences | integer | None - None | True |
| instructions_perplexity | Instructions Perplexity | float | None - None | True |
| instructions_entropy | Instructions Entropy | float | None - None | True |
| instructions_flesch_reading_ease | Instructions Flesch Reading Ease | float | None - None | True |
The **guidelines** are optional as well; they are just a plain string that can be used to provide instructions to the annotators. Find those in the [annotation guidelines](#annotation-guidelines) section.
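As a quick orientation, the sketch below shows how these components surface on the loaded dataset object. It assumes the Argilla client API referenced in [Load with Argilla](#load-with-argilla); attribute names may vary slightly across Argilla versions.
```python
import argilla as rg

# Load the dataset as shown in the "Load with Argilla" section.
ds = rg.FeedbackDataset.from_huggingface("sdiazlor/evol-test-3.5")

# Fields, questions and guidelines are exposed directly on the dataset object.
print([field.name for field in ds.fields])            # ['input', 'instructions']
print([question.name for question in ds.questions])   # ['instruction-rating']
print(ds.guidelines)                                   # plain-string guidelines, or None if undefined
```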
### Data Instances
An example of a dataset instance in Argilla looks as follows:
```json
{
"external_id": null,
"fields": {
"input": "Choices:\n+ Yes.\n+ No.\nQ: Title: The Gate:to the mind\u0027s eye is number 1, Into the Mind\u0027s eye is number 2, turbulence is 3 Review: all i can say is that \"The Gate:to the mind\u0027s eye\" is the best computer animation trip i have ever experienced still to this day in 2009 and it was made in 1994. Thomas Dolby does a score to the film that\u0027s half trance and half Pink Floyd sounding. It is by far the best film and soundtrack out of all of the mind\u0027s eye series. This Turbulence is okay during one scene which is really trippy, but it\u0027s very very short and altogether i was unhappy when it was over. I\u0027d like to compliment anyone who had anything to do with the making of it though. It did take some talent to make i\u0027ll give it that. Just poorly arranged and should not be sold in stores in a box with art on the cover. someone made alot of money on this back in the day and it just doesn\u0027t hold a candle to the 1994 one i was talking about. Is this product review negative?\nA:",
"instructions": "Choices:\n+ Yes.\n+ No.\n+ I cannot determine from the given information.\nQ: Title: The Gate:to the mind\u0027s eye is number 1, Into the Mind\u0027s eye is number 2, turbulence is 3 Review: all i can say is that \"The Gate:to the mind\u0027s eye\" is the best computer animation trip i have ever experienced still to this day in 2009 and it was made in 1994. Thomas Dolby does a score to the film that\u0027s half trance and half Pink Floyd sounding. It is by far the best film and soundtrack out of all of the mind\u0027s eye series. This Turbulence is okay during one scene which is really trippy, but it\u0027s very very short and altogether i was unhappy when it was over. I\u0027d like to compliment anyone who had anything to do with the making of it though. It did take some talent to make i\u0027ll give it that. Just poorly arranged and should not be sold in stores in a box with art on the cover. someone made alot of money on this back in the day and it just doesn\u0027t hold a candle to the 1994 one i was talking about. Is this product review negative? Please provide a detailed explanation for your answer, considering both the positive and negative aspects mentioned in the review."
},
"metadata": {
"generation-model": [
"gpt-3.5-turbo"
],
"input_entropy": 7.15,
"input_flesch_reading_ease": 93.78,
"input_n_sentences": 14,
"input_n_tokens": 200,
"input_n_unique_tokens": 111,
"input_perplexity": 1270.39,
"instructions_entropy": 8.24,
"instructions_flesch_reading_ease": 87.71,
"instructions_n_sentences": 16,
"instructions_n_tokens": 227,
"instructions_n_unique_tokens": 127,
"instructions_perplexity": 3787.28,
"length-input": 971,
"length-instructions": 1148
},
"responses": [],
"suggestions": [],
"vectors": {}
}
```
While the same record in HuggingFace `datasets` looks as follows:
```json
{
"external_id": null,
"input": "Choices:\n+ Yes.\n+ No.\nQ: Title: The Gate:to the mind\u0027s eye is number 1, Into the Mind\u0027s eye is number 2, turbulence is 3 Review: all i can say is that \"The Gate:to the mind\u0027s eye\" is the best computer animation trip i have ever experienced still to this day in 2009 and it was made in 1994. Thomas Dolby does a score to the film that\u0027s half trance and half Pink Floyd sounding. It is by far the best film and soundtrack out of all of the mind\u0027s eye series. This Turbulence is okay during one scene which is really trippy, but it\u0027s very very short and altogether i was unhappy when it was over. I\u0027d like to compliment anyone who had anything to do with the making of it though. It did take some talent to make i\u0027ll give it that. Just poorly arranged and should not be sold in stores in a box with art on the cover. someone made alot of money on this back in the day and it just doesn\u0027t hold a candle to the 1994 one i was talking about. Is this product review negative?\nA:",
"instruction-rating": [],
"instruction-rating-suggestion": null,
"instruction-rating-suggestion-metadata": {
"agent": null,
"score": null,
"type": null
},
"instructions": "Choices:\n+ Yes.\n+ No.\n+ I cannot determine from the given information.\nQ: Title: The Gate:to the mind\u0027s eye is number 1, Into the Mind\u0027s eye is number 2, turbulence is 3 Review: all i can say is that \"The Gate:to the mind\u0027s eye\" is the best computer animation trip i have ever experienced still to this day in 2009 and it was made in 1994. Thomas Dolby does a score to the film that\u0027s half trance and half Pink Floyd sounding. It is by far the best film and soundtrack out of all of the mind\u0027s eye series. This Turbulence is okay during one scene which is really trippy, but it\u0027s very very short and altogether i was unhappy when it was over. I\u0027d like to compliment anyone who had anything to do with the making of it though. It did take some talent to make i\u0027ll give it that. Just poorly arranged and should not be sold in stores in a box with art on the cover. someone made alot of money on this back in the day and it just doesn\u0027t hold a candle to the 1994 one i was talking about. Is this product review negative? Please provide a detailed explanation for your answer, considering both the positive and negative aspects mentioned in the review.",
"metadata": "{\"length-input\": 971, \"length-instructions\": 1148, \"generation-model\": [\"gpt-3.5-turbo\"], \"input_n_tokens\": 200, \"input_n_unique_tokens\": 111, \"input_n_sentences\": 14, \"input_perplexity\": 1270.39, \"input_entropy\": 7.15, \"input_flesch_reading_ease\": 93.78, \"instructions_n_tokens\": 227, \"instructions_n_unique_tokens\": 127, \"instructions_n_sentences\": 16, \"instructions_perplexity\": 3787.28, \"instructions_entropy\": 8.24, \"instructions_flesch_reading_ease\": 87.71}",
"vectors": {
"input": null,
"instructions": null
}
}
```
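Note that in the `datasets` export the `metadata` column is serialized as a JSON string rather than a dictionary. The sketch below is one way to parse it back; it assumes the single `train` split described in [Data Splits](#data-splits), and the variable names are illustrative, not part of the dataset.
```python
import json

from datasets import load_dataset

ds = load_dataset("sdiazlor/evol-test-3.5", split="train")

# The metadata dictionary is stored as a JSON string in the datasets export.
record = ds[0]
metadata = json.loads(record["metadata"]) if record["metadata"] else {}
print(metadata.get("input_n_tokens"))  # e.g. 200 for the record shown above
```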
### Data Fields
Among the dataset fields, we differentiate between the following:
* **Fields:** These are the dataset records themselves; for the moment, only text fields are supported. These are the ones that will be used to provide responses to the questions.
* **input** is of type `text`.
* **instructions** is of type `text`.
* **Questions:** These are the questions that will be asked to the annotators. They can be of different types, such as `RatingQuestion`, `TextQuestion`, `LabelQuestion`, `MultiLabelQuestion`, and `RankingQuestion`.
* **instruction-rating** is of type `rating` with the following allowed values [1, 2, 3, 4, 5, 6, 7, 8, 9, 10].
* **Suggestions:** As of Argilla 1.13.0, suggestions have been included to provide the annotators with recommendations that ease and assist the annotation process. Suggestions are linked to the existing questions, are always optional, and contain not just the suggestion itself, but also the metadata linked to it, if applicable.
* (optional) **instruction-rating-suggestion** is of type `rating` with the following allowed values [1, 2, 3, 4, 5, 6, 7, 8, 9, 10].
* **✨ NEW** **Vectors**: As of Argilla 1.19.0, vectors have been included to add support for similarity search, so similar records can be explored through vector search powered by the configured search engine. The vectors are optional and are not visible in the UI; they are uploaded and used internally, and must match the dimensions previously defined in their settings.
* (optional) **input** is of type `float32` and has a dimension of (1, `384`).
* (optional) **instructions** is of type `float32` and has a dimension of (1, `384`).
Additionally, there are two more optional fields:
* **metadata:** This is an optional field that can be used to provide additional information about the dataset record. This can be useful to give the annotators extra context, or to record details about the record itself, such as the author, the date, the source, or a link to the original source. The metadata is always optional, and can be potentially linked to the `metadata_properties` defined in the dataset configuration file in `argilla.yaml`.
* **external_id:** This is an optional field that can be used to provide an external ID for the dataset record. This can be useful if you want to link the dataset record to an external resource, such as a database or a file.
### Data Splits
The dataset contains a single split, which is `train`.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation guidelines
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | sdiazlor/evol-test-3.5 | [
"size_categories:n<1K",
"rlfh",
"argilla",
"human-feedback",
"region:us"
] | 2024-02-06T16:19:04+00:00 | {"size_categories": "n<1K", "tags": ["rlfh", "argilla", "human-feedback"]} | 2024-02-06T16:19:07+00:00 | [] | [] | TAGS
#size_categories-n<1K #rlfh #argilla #human-feedback #region-us
| Dataset Card for evol-test-3.5
==============================
This dataset has been created with Argilla.
As shown in the sections below, this dataset can be loaded into Argilla as explained in Load with Argilla, or used directly with the 'datasets' library in Load with 'datasets'.
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Paper:
* Leaderboard:
* Point of Contact:
### Dataset Summary
This dataset contains:
* A dataset configuration file conforming to the Argilla dataset format named 'URL'. This configuration file will be used to configure the dataset when using the 'FeedbackDataset.from\_huggingface' method in Argilla.
* Dataset records in a format compatible with HuggingFace 'datasets'. These records will be loaded automatically when using 'FeedbackDataset.from\_huggingface' and can be loaded independently using the 'datasets' library via 'load\_dataset'.
* The annotation guidelines that have been used for building and curating the dataset, if they've been defined in Argilla.
### Load with Argilla
To load with Argilla, you'll just need to install Argilla as 'pip install argilla --upgrade' and then use the following code:
### Load with 'datasets'
To load this dataset with 'datasets', you'll just need to install 'datasets' as 'pip install datasets --upgrade' and then use the following code:
### Supported Tasks and Leaderboards
This dataset can contain multiple fields, questions and responses so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the Dataset Structure section.
There are no leaderboards associated with this dataset.
### Languages
Dataset Structure
-----------------
### Data in Argilla
The dataset is created in Argilla with: fields, questions, suggestions, metadata, vectors, and guidelines.
The fields are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.
The questions are the questions that will be asked to the annotators. They can be of different types, such as rating, text, label\_selection, multi\_label\_selection, or ranking.
The suggestions are human or machine generated recommendations for each question to assist the annotator during the annotation process, so those are always linked to the existing questions, and named appending "-suggestion" and "-suggestion-metadata" to those, containing the value/s of the suggestion and its metadata, respectively. So on, the possible values are the same as in the table above, but the column name is appended with "-suggestion" and the metadata is appended with "-suggestion-metadata".
The metadata is a dictionary that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the 'metadata\_properties' defined in the dataset configuration file in 'URL'.
NEW The vectors are different columns that contain a vector in floating point, which is constraint to the pre-defined dimensions in the vectors\_settings when configuring the vectors within the dataset itself, also the dimensions will always be 1-dimensional. The vectors are optional and identified by the pre-defined vector name in the dataset configuration file in 'URL'.
Vector Name: input, Title: input, Dimensions: [1, 384]
Vector Name: instructions, Title: instructions, Dimensions: [1, 384]
The guidelines, are optional as well, and are just a plain string that can be used to provide instructions to the annotators. Find those in the annotation guidelines section.
### Data Instances
An example of a dataset instance in Argilla looks as follows:
While the same record in HuggingFace 'datasets' looks as follows:
### Data Fields
Among the dataset fields, we differentiate between the following:
* Fields: These are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.
+ input is of type 'text'.
+ instructions is of type 'text'.
* Questions: These are the questions that will be asked to the annotators. They can be of different types, such as 'RatingQuestion', 'TextQuestion', 'LabelQuestion', 'MultiLabelQuestion', and 'RankingQuestion'.
+ instruction-rating is of type 'rating' with the following allowed values [1, 2, 3, 4, 5, 6, 7, 8, 9, 10].
* Suggestions: As of Argilla 1.13.0, the suggestions have been included to provide the annotators with suggestions to ease or assist during the annotation process. Suggestions are linked to the existing questions, are always optional, and contain not just the suggestion itself, but also the metadata linked to it, if applicable.
+ (optional) instruction-rating-suggestion is of type 'rating' with the following allowed values [1, 2, 3, 4, 5, 6, 7, 8, 9, 10].
* NEW Vectors: As of Argilla 1.19.0, the vectors have been included in order to add support for similarity search to explore similar records based on vector search powered by the search engine defined. The vectors are optional and cannot be seen within the UI, those are uploaded and internally used. Also the vectors will always be optional, and only the dimensions previously defined in their settings.
+ (optional) input is of type 'float32' and has a dimension of (1, '384').
+ (optional) instructions is of type 'float32' and has a dimension of (1, '384').
Additionally, we also have two more fields that are optional and are the following:
* metadata: This is an optional field that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the 'metadata\_properties' defined in the dataset configuration file in 'URL'.
* external\_id: This is an optional field that can be used to provide an external ID for the dataset record. This can be useful if you want to link the dataset record to an external resource, such as a database or a file.
### Data Splits
The dataset contains a single split, which is 'train'.
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation guidelines
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
### Contributions
| [
"### Dataset Summary\n\n\nThis dataset contains:\n\n\n* A dataset configuration file conforming to the Argilla dataset format named 'URL'. This configuration file will be used to configure the dataset when using the 'FeedbackDataset.from\\_huggingface' method in Argilla.\n* Dataset records in a format compatible with HuggingFace 'datasets'. These records will be loaded automatically when using 'FeedbackDataset.from\\_huggingface' and can be loaded independently using the 'datasets' library via 'load\\_dataset'.\n* The annotation guidelines that have been used for building and curating the dataset, if they've been defined in Argilla.",
"### Load with Argilla\n\n\nTo load with Argilla, you'll just need to install Argilla as 'pip install argilla --upgrade' and then use the following code:",
"### Load with 'datasets'\n\n\nTo load this dataset with 'datasets', you'll just need to install 'datasets' as 'pip install datasets --upgrade' and then use the following code:",
"### Supported Tasks and Leaderboards\n\n\nThis dataset can contain multiple fields, questions and responses so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the Dataset Structure section.\n\n\nThere are no leaderboards associated with this dataset.",
"### Languages\n\n\nDataset Structure\n-----------------",
"### Data in Argilla\n\n\nThe dataset is created in Argilla with: fields, questions, suggestions, metadata, vectors, and guidelines.\n\n\nThe fields are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.\n\n\n\nThe questions are the questions that will be asked to the annotators. They can be of different types, such as rating, text, label\\_selection, multi\\_label\\_selection, or ranking.\n\n\n\nThe suggestions are human or machine generated recommendations for each question to assist the annotator during the annotation process, so those are always linked to the existing questions, and named appending \"-suggestion\" and \"-suggestion-metadata\" to those, containing the value/s of the suggestion and its metadata, respectively. So on, the possible values are the same as in the table above, but the column name is appended with \"-suggestion\" and the metadata is appended with \"-suggestion-metadata\".\n\n\nThe metadata is a dictionary that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the 'metadata\\_properties' defined in the dataset configuration file in 'URL'.\n\n\nNEW The vectors are different columns that contain a vector in floating point, which is constraint to the pre-defined dimensions in the vectors\\_settings when configuring the vectors within the dataset itself, also the dimensions will always be 1-dimensional. The vectors are optional and identified by the pre-defined vector name in the dataset configuration file in 'URL'.\n\n\nVector Name: input, Title: input, Dimensions: [1, 384]\nVector Name: instructions, Title: instructions, Dimensions: [1, 384]\n\n\n\nThe guidelines, are optional as well, and are just a plain string that can be used to provide instructions to the annotators. Find those in the annotation guidelines section.",
"### Data Instances\n\n\nAn example of a dataset instance in Argilla looks as follows:\n\n\nWhile the same record in HuggingFace 'datasets' looks as follows:",
"### Data Fields\n\n\nAmong the dataset fields, we differentiate between the following:\n\n\n* Fields: These are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.\n\n\n\t+ input is of type 'text'.\n\t+ instructions is of type 'text'.\n* Questions: These are the questions that will be asked to the annotators. They can be of different types, such as 'RatingQuestion', 'TextQuestion', 'LabelQuestion', 'MultiLabelQuestion', and 'RankingQuestion'.\n\n\n\t+ instruction-rating is of type 'rating' with the following allowed values [1, 2, 3, 4, 5, 6, 7, 8, 9, 10].\n* Suggestions: As of Argilla 1.13.0, the suggestions have been included to provide the annotators with suggestions to ease or assist during the annotation process. Suggestions are linked to the existing questions, are always optional, and contain not just the suggestion itself, but also the metadata linked to it, if applicable.\n\n\n\t+ (optional) instruction-rating-suggestion is of type 'rating' with the following allowed values [1, 2, 3, 4, 5, 6, 7, 8, 9, 10].\n* NEW Vectors: As of Argilla 1.19.0, the vectors have been included in order to add support for similarity search to explore similar records based on vector search powered by the search engine defined. The vectors are optional and cannot be seen within the UI, those are uploaded and internally used. Also the vectors will always be optional, and only the dimensions previously defined in their settings.\n\n\n\t+ (optional) input is of type 'float32' and has a dimension of (1, '384').\n\t+ (optional) instructions is of type 'float32' and has a dimension of (1, '384').\n\n\nAdditionally, we also have two more fields that are optional and are the following:\n\n\n* metadata: This is an optional field that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the 'metadata\\_properties' defined in the dataset configuration file in 'URL'.\n* external\\_id: This is an optional field that can be used to provide an external ID for the dataset record. This can be useful if you want to link the dataset record to an external resource, such as a database or a file.",
"### Data Splits\n\n\nThe dataset contains a single split, which is 'train'.\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation guidelines",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#size_categories-n<1K #rlfh #argilla #human-feedback #region-us \n",
"### Dataset Summary\n\n\nThis dataset contains:\n\n\n* A dataset configuration file conforming to the Argilla dataset format named 'URL'. This configuration file will be used to configure the dataset when using the 'FeedbackDataset.from\\_huggingface' method in Argilla.\n* Dataset records in a format compatible with HuggingFace 'datasets'. These records will be loaded automatically when using 'FeedbackDataset.from\\_huggingface' and can be loaded independently using the 'datasets' library via 'load\\_dataset'.\n* The annotation guidelines that have been used for building and curating the dataset, if they've been defined in Argilla.",
"### Load with Argilla\n\n\nTo load with Argilla, you'll just need to install Argilla as 'pip install argilla --upgrade' and then use the following code:",
"### Load with 'datasets'\n\n\nTo load this dataset with 'datasets', you'll just need to install 'datasets' as 'pip install datasets --upgrade' and then use the following code:",
"### Supported Tasks and Leaderboards\n\n\nThis dataset can contain multiple fields, questions and responses so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the Dataset Structure section.\n\n\nThere are no leaderboards associated with this dataset.",
"### Languages\n\n\nDataset Structure\n-----------------",
"### Data in Argilla\n\n\nThe dataset is created in Argilla with: fields, questions, suggestions, metadata, vectors, and guidelines.\n\n\nThe fields are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.\n\n\n\nThe questions are the questions that will be asked to the annotators. They can be of different types, such as rating, text, label\\_selection, multi\\_label\\_selection, or ranking.\n\n\n\nThe suggestions are human or machine generated recommendations for each question to assist the annotator during the annotation process, so those are always linked to the existing questions, and named appending \"-suggestion\" and \"-suggestion-metadata\" to those, containing the value/s of the suggestion and its metadata, respectively. So on, the possible values are the same as in the table above, but the column name is appended with \"-suggestion\" and the metadata is appended with \"-suggestion-metadata\".\n\n\nThe metadata is a dictionary that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the 'metadata\\_properties' defined in the dataset configuration file in 'URL'.\n\n\nNEW The vectors are different columns that contain a vector in floating point, which is constraint to the pre-defined dimensions in the vectors\\_settings when configuring the vectors within the dataset itself, also the dimensions will always be 1-dimensional. The vectors are optional and identified by the pre-defined vector name in the dataset configuration file in 'URL'.\n\n\nVector Name: input, Title: input, Dimensions: [1, 384]\nVector Name: instructions, Title: instructions, Dimensions: [1, 384]\n\n\n\nThe guidelines, are optional as well, and are just a plain string that can be used to provide instructions to the annotators. Find those in the annotation guidelines section.",
"### Data Instances\n\n\nAn example of a dataset instance in Argilla looks as follows:\n\n\nWhile the same record in HuggingFace 'datasets' looks as follows:",
"### Data Fields\n\n\nAmong the dataset fields, we differentiate between the following:\n\n\n* Fields: These are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.\n\n\n\t+ input is of type 'text'.\n\t+ instructions is of type 'text'.\n* Questions: These are the questions that will be asked to the annotators. They can be of different types, such as 'RatingQuestion', 'TextQuestion', 'LabelQuestion', 'MultiLabelQuestion', and 'RankingQuestion'.\n\n\n\t+ instruction-rating is of type 'rating' with the following allowed values [1, 2, 3, 4, 5, 6, 7, 8, 9, 10].\n* Suggestions: As of Argilla 1.13.0, the suggestions have been included to provide the annotators with suggestions to ease or assist during the annotation process. Suggestions are linked to the existing questions, are always optional, and contain not just the suggestion itself, but also the metadata linked to it, if applicable.\n\n\n\t+ (optional) instruction-rating-suggestion is of type 'rating' with the following allowed values [1, 2, 3, 4, 5, 6, 7, 8, 9, 10].\n* NEW Vectors: As of Argilla 1.19.0, the vectors have been included in order to add support for similarity search to explore similar records based on vector search powered by the search engine defined. The vectors are optional and cannot be seen within the UI, those are uploaded and internally used. Also the vectors will always be optional, and only the dimensions previously defined in their settings.\n\n\n\t+ (optional) input is of type 'float32' and has a dimension of (1, '384').\n\t+ (optional) instructions is of type 'float32' and has a dimension of (1, '384').\n\n\nAdditionally, we also have two more fields that are optional and are the following:\n\n\n* metadata: This is an optional field that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the 'metadata\\_properties' defined in the dataset configuration file in 'URL'.\n* external\\_id: This is an optional field that can be used to provide an external ID for the dataset record. This can be useful if you want to link the dataset record to an external resource, such as a database or a file.",
"### Data Splits\n\n\nThe dataset contains a single split, which is 'train'.\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation guidelines",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
ea4ed701912e92733819032490ca760823b49140 |
# Dataset Card for evol-test-4
This dataset has been created with [Argilla](https://docs.argilla.io).
As shown in the sections below, this dataset can be loaded into Argilla as explained in [Load with Argilla](#load-with-argilla), or used directly with the `datasets` library in [Load with `datasets`](#load-with-datasets).
## Dataset Description
- **Homepage:** https://argilla.io
- **Repository:** https://github.com/argilla-io/argilla
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset contains:
* A dataset configuration file conforming to the Argilla dataset format named `argilla.yaml`. This configuration file will be used to configure the dataset when using the `FeedbackDataset.from_huggingface` method in Argilla.
* Dataset records in a format compatible with HuggingFace `datasets`. These records will be loaded automatically when using `FeedbackDataset.from_huggingface` and can be loaded independently using the `datasets` library via `load_dataset`.
* The [annotation guidelines](#annotation-guidelines) that have been used for building and curating the dataset, if they've been defined in Argilla.
### Load with Argilla
To load with Argilla, install Argilla with `pip install argilla --upgrade` and then use the following code:
```python
import argilla as rg
ds = rg.FeedbackDataset.from_huggingface("sdiazlor/evol-test-4")
```
### Load with `datasets`
To load this dataset with `datasets`, install `datasets` with `pip install datasets --upgrade` and then use the following code:
```python
from datasets import load_dataset
ds = load_dataset("sdiazlor/evol-test-4")
```
### Supported Tasks and Leaderboards
This dataset can contain [multiple fields, questions and responses](https://docs.argilla.io/en/latest/conceptual_guides/data_model.html#feedback-dataset) so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the [Dataset Structure section](#dataset-structure).
There are no leaderboards associated with this dataset.
### Languages
[More Information Needed]
## Dataset Structure
### Data in Argilla
The dataset is created in Argilla with: **fields**, **questions**, **suggestions**, **metadata**, **vectors**, and **guidelines**.
The **fields** are the dataset records themselves; for the moment, only text fields are supported. These are the ones that will be used to provide responses to the questions.
| Field Name | Title | Type | Required | Markdown |
| ---------- | ----- | ---- | -------- | -------- |
| input | input | text | True | True |
| instructions | instructions | text | True | False |
The **questions** are the questions that will be asked to the annotators. They can be of different types, such as rating, text, label_selection, multi_label_selection, or ranking.
| Question Name | Title | Type | Required | Description | Values/Labels |
| ------------- | ----- | ---- | -------- | ----------- | ------------- |
| instruction-rating | How would you rate the generated instruction? | rating | True | N/A | [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] |
The **suggestions** are human- or machine-generated recommendations for each question that assist the annotator during the annotation process. They are always linked to an existing question and are named by appending "-suggestion" and "-suggestion-metadata" to the question name, holding the suggested value(s) and their metadata, respectively. The possible values are therefore the same as in the table above; the corresponding columns simply carry the "-suggestion" and "-suggestion-metadata" suffixes.
The **metadata** is a dictionary that can be used to provide additional information about the dataset record. This can be useful to give the annotators extra context, or to record details about the record itself, such as the author, the date, the source, or a link to the original source. The metadata is always optional, and can be potentially linked to the `metadata_properties` defined in the dataset configuration file in `argilla.yaml`.
**✨ NEW** The **vectors** are additional columns that contain a floating-point vector, constrained to the dimensions pre-defined in the **vectors_settings** when the vectors are configured within the dataset; the vectors are always 1-dimensional arrays. The **vectors** are optional and identified by the pre-defined vector name in the dataset configuration file in `argilla.yaml`.
| Vector Name | Title | Dimensions |
|-------------|-------|------------|
| input | input | [1, 384] |
| instructions | instructions | [1, 384] |
The **metadata properties** defined for this dataset are the following:
| Metadata Name | Title | Type | Values | Visible for Annotators |
| ------------- | ----- | ---- | ------ | ---------------------- |
| length-input | length-input | integer | None - None | True |
| length-instruction | length-instruction | integer | None - None | True |
| input_n_tokens | Input N Tokens | integer | None - None | True |
| input_n_unique_tokens | Input N Unique Tokens | integer | None - None | True |
| input_n_sentences | Input N Sentences | integer | None - None | True |
| input_perplexity | Input Perplexity | float | None - None | True |
| input_entropy | Input Entropy | float | None - None | True |
| input_flesch_reading_ease | Input Flesch Reading Ease | float | None - None | True |
| instructions_n_tokens | Instructions N Tokens | integer | None - None | True |
| instructions_n_unique_tokens | Instructions N Unique Tokens | integer | None - None | True |
| instructions_n_sentences | Instructions N Sentences | integer | None - None | True |
| instructions_perplexity | Instructions Perplexity | float | None - None | True |
| instructions_entropy | Instructions Entropy | float | None - None | True |
| instructions_flesch_reading_ease | Instructions Flesch Reading Ease | float | None - None | True |
The **guidelines** are optional as well; they are just a plain string that can be used to provide instructions to the annotators. Find those in the [annotation guidelines](#annotation-guidelines) section.
### Data Instances
An example of a dataset instance in Argilla looks as follows:
```json
{
"external_id": null,
"fields": {
"input": "Choices:\n+ Yes.\n+ No.\nQ: Title: The Gate:to the mind\u0027s eye is number 1, Into the Mind\u0027s eye is number 2, turbulence is 3 Review: all i can say is that \"The Gate:to the mind\u0027s eye\" is the best computer animation trip i have ever experienced still to this day in 2009 and it was made in 1994. Thomas Dolby does a score to the film that\u0027s half trance and half Pink Floyd sounding. It is by far the best film and soundtrack out of all of the mind\u0027s eye series. This Turbulence is okay during one scene which is really trippy, but it\u0027s very very short and altogether i was unhappy when it was over. I\u0027d like to compliment anyone who had anything to do with the making of it though. It did take some talent to make i\u0027ll give it that. Just poorly arranged and should not be sold in stores in a box with art on the cover. someone made alot of money on this back in the day and it just doesn\u0027t hold a candle to the 1994 one i was talking about. Is this product review negative?\nA:",
"instructions": "Options:\n+ Absolutely.\n+ Not at all.\nQuestion: Analysis: \"Dive into Dreamscape\" marked the debut of virtual reality storytelling in 1998, enchanting audiences with its pioneering blend of digital landscapes and hypnotic soundtracks. Credited for revolutionizing the way we experience multimedia art, it set a benchmark with its immersive environments and a score that melds ambient techno with orchestral elements. Its successor, \"Dreamscape\u0027s Odyssey,\" despite its advanced graphics and narrative depth, fails to evoke the same profound connection, feeling more like a series of disjointed virtual vignettes. \"Voyage through the Vortex\" attempts to recapture the original\u0027s magic with its avant-garde visuals and captivating sequences, but falls short in delivering a cohesive story. Although \"Voyage through the Vortex\" showcases some moments of brilliance, it ultimately feels underwhelming compared to its groundbreaking predecessor. In acknowledging the efforts of the creative minds behind these projects, it\u0027s evident that significant talent and innovation went into their creation. However, the lack of coherence in the sequels has led to a divided reception among fans and critics alike. Is the sentiment expressed in this analysis positive?"
},
"metadata": {
"generation-model": [
"gpt-4-turbo-preview"
],
"input_entropy": 7.15,
"input_flesch_reading_ease": 93.78,
"input_n_sentences": 14,
"input_n_tokens": 200,
"input_n_unique_tokens": 111,
"input_perplexity": 1270.39,
"instructions_entropy": 6.37,
"instructions_flesch_reading_ease": 42.87,
"instructions_n_sentences": 10,
"instructions_n_tokens": 184,
"instructions_n_unique_tokens": 132,
"instructions_perplexity": 586.27,
"length-input": 971,
"length-instructions": 1250
},
"responses": [],
"suggestions": [],
"vectors": {}
}
```
While the same record in HuggingFace `datasets` looks as follows:
```json
{
"external_id": null,
"input": "Choices:\n+ Yes.\n+ No.\nQ: Title: The Gate:to the mind\u0027s eye is number 1, Into the Mind\u0027s eye is number 2, turbulence is 3 Review: all i can say is that \"The Gate:to the mind\u0027s eye\" is the best computer animation trip i have ever experienced still to this day in 2009 and it was made in 1994. Thomas Dolby does a score to the film that\u0027s half trance and half Pink Floyd sounding. It is by far the best film and soundtrack out of all of the mind\u0027s eye series. This Turbulence is okay during one scene which is really trippy, but it\u0027s very very short and altogether i was unhappy when it was over. I\u0027d like to compliment anyone who had anything to do with the making of it though. It did take some talent to make i\u0027ll give it that. Just poorly arranged and should not be sold in stores in a box with art on the cover. someone made alot of money on this back in the day and it just doesn\u0027t hold a candle to the 1994 one i was talking about. Is this product review negative?\nA:",
"instruction-rating": [],
"instruction-rating-suggestion": null,
"instruction-rating-suggestion-metadata": {
"agent": null,
"score": null,
"type": null
},
"instructions": "Options:\n+ Absolutely.\n+ Not at all.\nQuestion: Analysis: \"Dive into Dreamscape\" marked the debut of virtual reality storytelling in 1998, enchanting audiences with its pioneering blend of digital landscapes and hypnotic soundtracks. Credited for revolutionizing the way we experience multimedia art, it set a benchmark with its immersive environments and a score that melds ambient techno with orchestral elements. Its successor, \"Dreamscape\u0027s Odyssey,\" despite its advanced graphics and narrative depth, fails to evoke the same profound connection, feeling more like a series of disjointed virtual vignettes. \"Voyage through the Vortex\" attempts to recapture the original\u0027s magic with its avant-garde visuals and captivating sequences, but falls short in delivering a cohesive story. Although \"Voyage through the Vortex\" showcases some moments of brilliance, it ultimately feels underwhelming compared to its groundbreaking predecessor. In acknowledging the efforts of the creative minds behind these projects, it\u0027s evident that significant talent and innovation went into their creation. However, the lack of coherence in the sequels has led to a divided reception among fans and critics alike. Is the sentiment expressed in this analysis positive?",
"metadata": "{\"length-input\": 971, \"length-instructions\": 1250, \"generation-model\": [\"gpt-4-turbo-preview\"], \"input_n_tokens\": 200, \"input_n_unique_tokens\": 111, \"input_n_sentences\": 14, \"input_perplexity\": 1270.39, \"input_entropy\": 7.15, \"input_flesch_reading_ease\": 93.78, \"instructions_n_tokens\": 184, \"instructions_n_unique_tokens\": 132, \"instructions_n_sentences\": 10, \"instructions_perplexity\": 586.27, \"instructions_entropy\": 6.37, \"instructions_flesch_reading_ease\": 42.87}",
"vectors": {
"input": null,
"instructions": null
}
}
```
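As shown in the record above, the suggestion columns (for example `instruction-rating-suggestion`) are exported as plain columns and may simply be `null` when no suggestion has been attached. A rough sketch for checking how many records carry a rating suggestion, assuming the single `train` split described in [Data Splits](#data-splits):
```python
from datasets import load_dataset

ds = load_dataset("sdiazlor/evol-test-4", split="train")

# Count records whose instruction-rating suggestion is populated.
with_suggestion = sum(1 for row in ds if row["instruction-rating-suggestion"] is not None)
print(f"{with_suggestion}/{len(ds)} records carry an instruction-rating suggestion")
```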
### Data Fields
Among the dataset fields, we differentiate between the following:
* **Fields:** These are the dataset records themselves; for the moment, only text fields are supported. These are the ones that will be used to provide responses to the questions.
* **input** is of type `text`.
* **instructions** is of type `text`.
* **Questions:** These are the questions that will be asked to the annotators. They can be of different types, such as `RatingQuestion`, `TextQuestion`, `LabelQuestion`, `MultiLabelQuestion`, and `RankingQuestion`.
* **instruction-rating** is of type `rating` with the following allowed values [1, 2, 3, 4, 5, 6, 7, 8, 9, 10].
* **Suggestions:** As of Argilla 1.13.0, suggestions have been included to provide the annotators with recommendations that ease and assist the annotation process. Suggestions are linked to the existing questions, are always optional, and contain not just the suggestion itself, but also the metadata linked to it, if applicable.
* (optional) **instruction-rating-suggestion** is of type `rating` with the following allowed values [1, 2, 3, 4, 5, 6, 7, 8, 9, 10].
* **✨ NEW** **Vectors**: As of Argilla 1.19.0, vectors have been included to add support for similarity search, so similar records can be explored through vector search powered by the configured search engine. The vectors are optional and are not visible in the UI; they are uploaded and used internally, and must match the dimensions previously defined in their settings.
* (optional) **input** is of type `float32` and has a dimension of (1, `384`).
* (optional) **instructions** is of type `float32` and has a dimension of (1, `384`).
Additionally, we also have two more fields that are optional and are the following:
* **metadata:** This is an optional field that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the `metadata_properties` defined in the dataset configuration file in `argilla.yaml`.
* **external_id:** This is an optional field that can be used to provide an external ID for the dataset record. This can be useful if you want to link the dataset record to an external resource, such as a database or a file.
### Data Splits
The dataset contains a single split, which is `train`.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation guidelines
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | sdiazlor/evol-test-4 | [
"size_categories:n<1K",
"rlfh",
"argilla",
"human-feedback",
"region:us"
] | 2024-02-06T16:19:51+00:00 | {"size_categories": "n<1K", "tags": ["rlfh", "argilla", "human-feedback"]} | 2024-02-06T16:19:54+00:00 | [] | [] | TAGS
#size_categories-n<1K #rlfh #argilla #human-feedback #region-us
| Dataset Card for evol-test-4
============================
This dataset has been created with Argilla.
As shown in the sections below, this dataset can be loaded into Argilla as explained in Load with Argilla, or used directly with the 'datasets' library in Load with 'datasets'.
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Paper:
* Leaderboard:
* Point of Contact:
### Dataset Summary
This dataset contains:
* A dataset configuration file conforming to the Argilla dataset format named 'URL'. This configuration file will be used to configure the dataset when using the 'FeedbackDataset.from\_huggingface' method in Argilla.
* Dataset records in a format compatible with HuggingFace 'datasets'. These records will be loaded automatically when using 'FeedbackDataset.from\_huggingface' and can be loaded independently using the 'datasets' library via 'load\_dataset'.
* The annotation guidelines that have been used for building and curating the dataset, if they've been defined in Argilla.
### Load with Argilla
To load with Argilla, you'll just need to install Argilla as 'pip install argilla --upgrade' and then use the following code:
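A minimal sketch of that step (the repository id is taken from this card; a configured Argilla client/server is assumed only if you later want to push the records to a running instance):

```python
import argilla as rg

# Loads both the settings in argilla.yaml and the records from the Hub repo
dataset = rg.FeedbackDataset.from_huggingface("sdiazlor/evol-test-4")
```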
### Load with 'datasets'
To load this dataset with 'datasets', you'll just need to install 'datasets' as 'pip install datasets --upgrade' and then use the following code:
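A minimal sketch with the `datasets` library (repo id taken from this card):

```python
from datasets import load_dataset

# Records land in a single "train" split (see the Data Splits section below)
ds = load_dataset("sdiazlor/evol-test-4", split="train")
print(ds[0]["instructions"][:200])
```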
### Supported Tasks and Leaderboards
This dataset can contain multiple fields, questions and responses so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the Dataset Structure section.
There are no leaderboards associated with this dataset.
### Languages
Dataset Structure
-----------------
### Data in Argilla
The dataset is created in Argilla with: fields, questions, suggestions, metadata, vectors, and guidelines.
The fields are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.
The questions are the questions that will be asked to the annotators. They can be of different types, such as rating, text, label\_selection, multi\_label\_selection, or ranking.
The suggestions are human or machine generated recommendations for each question to assist the annotator during the annotation process, so those are always linked to the existing questions, and named appending "-suggestion" and "-suggestion-metadata" to those, containing the value/s of the suggestion and its metadata, respectively. So on, the possible values are the same as in the table above, but the column name is appended with "-suggestion" and the metadata is appended with "-suggestion-metadata".
The metadata is a dictionary that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the 'metadata\_properties' defined in the dataset configuration file in 'URL'.
NEW The vectors are different columns that contain a vector in floating point, which is constrained to the pre-defined dimensions in the vectors\_settings when configuring the vectors within the dataset itself; the dimensions will always be 1-dimensional. The vectors are optional and identified by the pre-defined vector name in the dataset configuration file in 'URL'.
Vector Name: input, Title: input, Dimensions: [1, 384]
Vector Name: instructions, Title: instructions, Dimensions: [1, 384]
The guidelines, are optional as well, and are just a plain string that can be used to provide instructions to the annotators. Find those in the annotation guidelines section.
### Data Instances
An example of a dataset instance in Argilla looks as follows:
While the same record in HuggingFace 'datasets' looks as follows:
### Data Fields
Among the dataset fields, we differentiate between the following:
* Fields: These are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.
+ input is of type 'text'.
+ instructions is of type 'text'.
* Questions: These are the questions that will be asked to the annotators. They can be of different types, such as 'RatingQuestion', 'TextQuestion', 'LabelQuestion', 'MultiLabelQuestion', and 'RankingQuestion'.
+ instruction-rating is of type 'rating' with the following allowed values [1, 2, 3, 4, 5, 6, 7, 8, 9, 10].
* Suggestions: As of Argilla 1.13.0, the suggestions have been included to provide the annotators with suggestions to ease or assist during the annotation process. Suggestions are linked to the existing questions, are always optional, and contain not just the suggestion itself, but also the metadata linked to it, if applicable.
+ (optional) instruction-rating-suggestion is of type 'rating' with the following allowed values [1, 2, 3, 4, 5, 6, 7, 8, 9, 10].
* NEW Vectors: As of Argilla 1.19.0, vectors have been included in order to add support for similarity search, so that similar records can be explored via vector search powered by the configured search engine. The vectors are optional and cannot be seen within the UI; they are uploaded and used internally. Vectors are always optional and only use the dimensions previously defined in their settings.
+ (optional) input is of type 'float32' and has a dimension of (1, '384').
+ (optional) instructions is of type 'float32' and has a dimension of (1, '384').
Additionally, we also have two more fields that are optional and are the following:
* metadata: This is an optional field that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the 'metadata\_properties' defined in the dataset configuration file in 'URL'.
* external\_id: This is an optional field that can be used to provide an external ID for the dataset record. This can be useful if you want to link the dataset record to an external resource, such as a database or a file.
### Data Splits
The dataset contains a single split, which is 'train'.
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation guidelines
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
### Contributions
| [
"### Dataset Summary\n\n\nThis dataset contains:\n\n\n* A dataset configuration file conforming to the Argilla dataset format named 'URL'. This configuration file will be used to configure the dataset when using the 'FeedbackDataset.from\\_huggingface' method in Argilla.\n* Dataset records in a format compatible with HuggingFace 'datasets'. These records will be loaded automatically when using 'FeedbackDataset.from\\_huggingface' and can be loaded independently using the 'datasets' library via 'load\\_dataset'.\n* The annotation guidelines that have been used for building and curating the dataset, if they've been defined in Argilla.",
"### Load with Argilla\n\n\nTo load with Argilla, you'll just need to install Argilla as 'pip install argilla --upgrade' and then use the following code:",
"### Load with 'datasets'\n\n\nTo load this dataset with 'datasets', you'll just need to install 'datasets' as 'pip install datasets --upgrade' and then use the following code:",
"### Supported Tasks and Leaderboards\n\n\nThis dataset can contain multiple fields, questions and responses so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the Dataset Structure section.\n\n\nThere are no leaderboards associated with this dataset.",
"### Languages\n\n\nDataset Structure\n-----------------",
"### Data in Argilla\n\n\nThe dataset is created in Argilla with: fields, questions, suggestions, metadata, vectors, and guidelines.\n\n\nThe fields are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.\n\n\n\nThe questions are the questions that will be asked to the annotators. They can be of different types, such as rating, text, label\\_selection, multi\\_label\\_selection, or ranking.\n\n\n\nThe suggestions are human or machine generated recommendations for each question to assist the annotator during the annotation process, so those are always linked to the existing questions, and named appending \"-suggestion\" and \"-suggestion-metadata\" to those, containing the value/s of the suggestion and its metadata, respectively. So on, the possible values are the same as in the table above, but the column name is appended with \"-suggestion\" and the metadata is appended with \"-suggestion-metadata\".\n\n\nThe metadata is a dictionary that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the 'metadata\\_properties' defined in the dataset configuration file in 'URL'.\n\n\nNEW The vectors are different columns that contain a vector in floating point, which is constraint to the pre-defined dimensions in the vectors\\_settings when configuring the vectors within the dataset itself, also the dimensions will always be 1-dimensional. The vectors are optional and identified by the pre-defined vector name in the dataset configuration file in 'URL'.\n\n\nVector Name: input, Title: input, Dimensions: [1, 384]\nVector Name: instructions, Title: instructions, Dimensions: [1, 384]\n\n\n\nThe guidelines, are optional as well, and are just a plain string that can be used to provide instructions to the annotators. Find those in the annotation guidelines section.",
"### Data Instances\n\n\nAn example of a dataset instance in Argilla looks as follows:\n\n\nWhile the same record in HuggingFace 'datasets' looks as follows:",
"### Data Fields\n\n\nAmong the dataset fields, we differentiate between the following:\n\n\n* Fields: These are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.\n\n\n\t+ input is of type 'text'.\n\t+ instructions is of type 'text'.\n* Questions: These are the questions that will be asked to the annotators. They can be of different types, such as 'RatingQuestion', 'TextQuestion', 'LabelQuestion', 'MultiLabelQuestion', and 'RankingQuestion'.\n\n\n\t+ instruction-rating is of type 'rating' with the following allowed values [1, 2, 3, 4, 5, 6, 7, 8, 9, 10].\n* Suggestions: As of Argilla 1.13.0, the suggestions have been included to provide the annotators with suggestions to ease or assist during the annotation process. Suggestions are linked to the existing questions, are always optional, and contain not just the suggestion itself, but also the metadata linked to it, if applicable.\n\n\n\t+ (optional) instruction-rating-suggestion is of type 'rating' with the following allowed values [1, 2, 3, 4, 5, 6, 7, 8, 9, 10].\n* NEW Vectors: As of Argilla 1.19.0, the vectors have been included in order to add support for similarity search to explore similar records based on vector search powered by the search engine defined. The vectors are optional and cannot be seen within the UI, those are uploaded and internally used. Also the vectors will always be optional, and only the dimensions previously defined in their settings.\n\n\n\t+ (optional) input is of type 'float32' and has a dimension of (1, '384').\n\t+ (optional) instructions is of type 'float32' and has a dimension of (1, '384').\n\n\nAdditionally, we also have two more fields that are optional and are the following:\n\n\n* metadata: This is an optional field that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the 'metadata\\_properties' defined in the dataset configuration file in 'URL'.\n* external\\_id: This is an optional field that can be used to provide an external ID for the dataset record. This can be useful if you want to link the dataset record to an external resource, such as a database or a file.",
"### Data Splits\n\n\nThe dataset contains a single split, which is 'train'.\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation guidelines",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#size_categories-n<1K #rlfh #argilla #human-feedback #region-us \n",
"### Dataset Summary\n\n\nThis dataset contains:\n\n\n* A dataset configuration file conforming to the Argilla dataset format named 'URL'. This configuration file will be used to configure the dataset when using the 'FeedbackDataset.from\\_huggingface' method in Argilla.\n* Dataset records in a format compatible with HuggingFace 'datasets'. These records will be loaded automatically when using 'FeedbackDataset.from\\_huggingface' and can be loaded independently using the 'datasets' library via 'load\\_dataset'.\n* The annotation guidelines that have been used for building and curating the dataset, if they've been defined in Argilla.",
"### Load with Argilla\n\n\nTo load with Argilla, you'll just need to install Argilla as 'pip install argilla --upgrade' and then use the following code:",
"### Load with 'datasets'\n\n\nTo load this dataset with 'datasets', you'll just need to install 'datasets' as 'pip install datasets --upgrade' and then use the following code:",
"### Supported Tasks and Leaderboards\n\n\nThis dataset can contain multiple fields, questions and responses so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the Dataset Structure section.\n\n\nThere are no leaderboards associated with this dataset.",
"### Languages\n\n\nDataset Structure\n-----------------",
"### Data in Argilla\n\n\nThe dataset is created in Argilla with: fields, questions, suggestions, metadata, vectors, and guidelines.\n\n\nThe fields are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.\n\n\n\nThe questions are the questions that will be asked to the annotators. They can be of different types, such as rating, text, label\\_selection, multi\\_label\\_selection, or ranking.\n\n\n\nThe suggestions are human or machine generated recommendations for each question to assist the annotator during the annotation process, so those are always linked to the existing questions, and named appending \"-suggestion\" and \"-suggestion-metadata\" to those, containing the value/s of the suggestion and its metadata, respectively. So on, the possible values are the same as in the table above, but the column name is appended with \"-suggestion\" and the metadata is appended with \"-suggestion-metadata\".\n\n\nThe metadata is a dictionary that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the 'metadata\\_properties' defined in the dataset configuration file in 'URL'.\n\n\nNEW The vectors are different columns that contain a vector in floating point, which is constraint to the pre-defined dimensions in the vectors\\_settings when configuring the vectors within the dataset itself, also the dimensions will always be 1-dimensional. The vectors are optional and identified by the pre-defined vector name in the dataset configuration file in 'URL'.\n\n\nVector Name: input, Title: input, Dimensions: [1, 384]\nVector Name: instructions, Title: instructions, Dimensions: [1, 384]\n\n\n\nThe guidelines, are optional as well, and are just a plain string that can be used to provide instructions to the annotators. Find those in the annotation guidelines section.",
"### Data Instances\n\n\nAn example of a dataset instance in Argilla looks as follows:\n\n\nWhile the same record in HuggingFace 'datasets' looks as follows:",
"### Data Fields\n\n\nAmong the dataset fields, we differentiate between the following:\n\n\n* Fields: These are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.\n\n\n\t+ input is of type 'text'.\n\t+ instructions is of type 'text'.\n* Questions: These are the questions that will be asked to the annotators. They can be of different types, such as 'RatingQuestion', 'TextQuestion', 'LabelQuestion', 'MultiLabelQuestion', and 'RankingQuestion'.\n\n\n\t+ instruction-rating is of type 'rating' with the following allowed values [1, 2, 3, 4, 5, 6, 7, 8, 9, 10].\n* Suggestions: As of Argilla 1.13.0, the suggestions have been included to provide the annotators with suggestions to ease or assist during the annotation process. Suggestions are linked to the existing questions, are always optional, and contain not just the suggestion itself, but also the metadata linked to it, if applicable.\n\n\n\t+ (optional) instruction-rating-suggestion is of type 'rating' with the following allowed values [1, 2, 3, 4, 5, 6, 7, 8, 9, 10].\n* NEW Vectors: As of Argilla 1.19.0, the vectors have been included in order to add support for similarity search to explore similar records based on vector search powered by the search engine defined. The vectors are optional and cannot be seen within the UI, those are uploaded and internally used. Also the vectors will always be optional, and only the dimensions previously defined in their settings.\n\n\n\t+ (optional) input is of type 'float32' and has a dimension of (1, '384').\n\t+ (optional) instructions is of type 'float32' and has a dimension of (1, '384').\n\n\nAdditionally, we also have two more fields that are optional and are the following:\n\n\n* metadata: This is an optional field that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the 'metadata\\_properties' defined in the dataset configuration file in 'URL'.\n* external\\_id: This is an optional field that can be used to provide an external ID for the dataset record. This can be useful if you want to link the dataset record to an external resource, such as a database or a file.",
"### Data Splits\n\n\nThe dataset contains a single split, which is 'train'.\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation guidelines",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
9c254a2a5c8d04f1905bfe931f713a149da1d772 | # Dataset Card for "alpaca-gpt4"
This dataset contains *Hungarian* (translated from English) instruction-following data generated by GPT-4 using Alpaca prompts, intended for fine-tuning LLMs. Original dataset: https://huggingface.co/datasets/vicgalle/alpaca-gpt4
The dataset was originally shared in this repository: https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM. This is just a wrapper for compatibility with Hugging Face's datasets library.
## Dataset Description
- **Homepage:** https://instruction-tuning-with-gpt-4.github.io
- **Repository:** https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM
- **Paper:** https://arxiv.org/abs/2304.03277
## Dataset structure
It contains 52K instruction-following examples generated by GPT-4 using the same prompts as in Alpaca.
The dataset has the same format as Alpaca data, except the output is generated by GPT-4:
- `instruction`: `str`, describes the task the model should perform. Each of the 52K instructions is unique.
- `input`: `str`, optional context or input for the task.
- `output`: `str`, the answer to the instruction as generated by `GPT-4`.
 - `text`: `str`, all the previous fields concatenated together, plus the same prompt used in Alpaca at the beginning.
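A minimal sketch of loading this dataset with the `datasets` library (the repo id is taken from this card; the published columns may differ from the field list above, so inspect them first):

```python
from datasets import load_dataset

ds = load_dataset("Bazsalanszky/alpaca-gpt4-hun", split="train")

print(ds.column_names)  # check which of the fields above are present
print(ds[0])            # first Hungarian instruction-following record
```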
## Difference with the original Alpaca dataset
The original Alpaca dataset used text-davinci-003 to complete the prompts. This dataset uses those same prompts but generates the completions with GPT-4. Thus, in general, the responses are of higher quality and length. Here is an example:
#### Example from Alpaca-GPT4:
```bash
{'instruction': 'Identify the odd one out.',
'input': 'Twitter, Instagram, Telegram',
'output': 'The odd one out is Telegram. Twitter and Instagram are social media platforms mainly for sharing information, images and videos while Telegram is a cloud-based instant messaging and voice-over-IP service.',
'text': 'Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n### Instruction:\nIdentify the odd one out.\n\n### Input:\nTwitter, Instagram, Telegram\n\n### Response:\nThe odd one out is Telegram. Twitter and Instagram are social media platforms mainly for sharing information, images and videos while Telegram is a cloud-based instant messaging and voice-over-IP service.'}
```
#### Same example from original Alpaca:
```bash
{'instruction': 'Identify the odd one out.',
'input': 'Twitter, Instagram, Telegram',
'output': 'Telegram',
'text': 'Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n### Instruction:\nIdentify the odd one out.\n\n### Input:\nTwitter, Instagram, Telegram\n\n### Response:\nTelegram'}
```
## Licensing Information
The dataset is available under the [Creative Commons NonCommercial (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/legalcode). | Bazsalanszky/alpaca-gpt4-hun | [
"task_categories:text-generation",
"task_categories:conversational",
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:hu",
"license:cc-by-nc-4.0",
"gpt4",
"alpaca",
"instruction-finetuning",
"arxiv:2304.03277",
"region:us"
] | 2024-02-06T16:24:56+00:00 | {"language": ["hu"], "license": "cc-by-nc-4.0", "size_categories": ["10K<n<100K"], "task_categories": ["text-generation", "conversational", "question-answering"], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train"}]}, "tags": ["gpt4", "alpaca", "instruction-finetuning"]} | 2024-02-06T16:26:06+00:00 | [
"2304.03277"
] | [
"hu"
] | TAGS
#task_categories-text-generation #task_categories-conversational #task_categories-question-answering #size_categories-10K<n<100K #language-Hungarian #license-cc-by-nc-4.0 #gpt4 #alpaca #instruction-finetuning #arxiv-2304.03277 #region-us
| # Dataset Card for "alpaca-gpt4"
This dataset contains *Hungarian* (translated from English) instruction-following data generated by GPT-4 using Alpaca prompts, intended for fine-tuning LLMs. Original dataset: URL
The dataset was originally shared in this repository: URL This is just a wrapper for compatibility with Hugging Face's datasets library.
## Dataset Description
- Homepage: URL
- Repository: URL
- Paper: URL
## Dataset structure
It contains 52K instruction-following examples generated by GPT-4 using the same prompts as in Alpaca.
The dataset has the same format as Alpaca data, except the output is generated by GPT-4:
- 'instruction': 'str', describes the task the model should perform. Each of the 52K instructions is unique.
- 'input': 'str', optional context or input for the task.
- 'output': 'str', the answer to the instruction as generated by 'GPT-4'.
 - 'text': 'str', all the previous fields concatenated together, plus the same prompt used in Alpaca at the beginning.
## Difference with the original Alpaca dataset
The original Alpaca dataset used text-davinci-003 to complete the prompts. This dataset uses those same prompts but generates the completions with GPT-4. Thus, in general, the responses are of higher quality and length. Here is an example:
#### Example from Alpaca-GPT4:
#### Same example from original Alpaca:
## Licensing Information
The dataset is available under the Creative Commons NonCommercial (CC BY-NC 4.0). | [
"# Dataset Card for \"alpaca-gpt4\"\n\nThis dataset contains *Hungarian* (translated from English) Instruction-Following generated by GPT-4 using Alpaca prompts for fine-tuning LLMs. Original model: URL\n\nThe dataset was originaly shared in this repository: URL This is just a wraper for compatibility with huggingface's datasets library.",
"## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL",
"## Dataset structure\n\nIt contains 52K instruction-following data generated by GPT-4 using the same prompts as in Alpaca.\nThe dataset has the same format as Alpaca data, except the output is generated by GPT-4:\n\n - 'instruction': 'str', describes the task the model should perform. Each of the 52K instructions is unique.\n - 'input': 'str', optional context or input for the task. \n - 'output': 'str', the answer to the instruction as generated by 'GPT-4'.\n - 'text': 'str', all the previous fields concatenated together, plus the same prompt used in Alpaca at the beginnig.",
"## Difference with the original Alpaca dataset\n\nThe original Alpaca dataset used text-davinci-003 to complete the prompts. This dataset uses those same prompts, but generating the completions with GPT-4. Thus, in general, the responses are of higher quality and lenght. Here is an example:",
"#### Example from Alpaca-GPT4:",
"#### Same example from original Alpaca:",
"## Licensing Information\n\nThe dataset is available under the Creative Commons NonCommercial (CC BY-NC 4.0)."
] | [
"TAGS\n#task_categories-text-generation #task_categories-conversational #task_categories-question-answering #size_categories-10K<n<100K #language-Hungarian #license-cc-by-nc-4.0 #gpt4 #alpaca #instruction-finetuning #arxiv-2304.03277 #region-us \n",
"# Dataset Card for \"alpaca-gpt4\"\n\nThis dataset contains *Hungarian* (translated from English) Instruction-Following generated by GPT-4 using Alpaca prompts for fine-tuning LLMs. Original model: URL\n\nThe dataset was originaly shared in this repository: URL This is just a wraper for compatibility with huggingface's datasets library.",
"## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL",
"## Dataset structure\n\nIt contains 52K instruction-following data generated by GPT-4 using the same prompts as in Alpaca.\nThe dataset has the same format as Alpaca data, except the output is generated by GPT-4:\n\n - 'instruction': 'str', describes the task the model should perform. Each of the 52K instructions is unique.\n - 'input': 'str', optional context or input for the task. \n - 'output': 'str', the answer to the instruction as generated by 'GPT-4'.\n - 'text': 'str', all the previous fields concatenated together, plus the same prompt used in Alpaca at the beginnig.",
"## Difference with the original Alpaca dataset\n\nThe original Alpaca dataset used text-davinci-003 to complete the prompts. This dataset uses those same prompts, but generating the completions with GPT-4. Thus, in general, the responses are of higher quality and lenght. Here is an example:",
"#### Example from Alpaca-GPT4:",
"#### Same example from original Alpaca:",
"## Licensing Information\n\nThe dataset is available under the Creative Commons NonCommercial (CC BY-NC 4.0)."
] |
4b87d50342f26b2fc4ab16764621f70d66d1e6e6 | # Dataset Card for "lmind_hotpot_train8000_eval7405_v1_ic_qa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/lmind_hotpot_train8000_eval7405_v1_ic_qa | [
"region:us"
] | 2024-02-06T16:35:53+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train_qa", "path": "data/train_qa-*"}, {"split": "train_recite_qa", "path": "data/train_recite_qa-*"}, {"split": "train_ic_qa", "path": "data/train_ic_qa-*"}, {"split": "eval_qa", "path": "data/eval_qa-*"}, {"split": "eval_recite_qa", "path": "data/eval_recite_qa-*"}, {"split": "eval_ic_qa", "path": "data/eval_ic_qa-*"}, {"split": "all_docs", "path": "data/all_docs-*"}, {"split": "all_docs_eval", "path": "data/all_docs_eval-*"}, {"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}, {"name": "answers", "struct": [{"name": "answer_start", "sequence": "null"}, {"name": "text", "sequence": "string"}]}], "splits": [{"name": "train_qa", "num_bytes": 1380987, "num_examples": 8000}, {"name": "train_recite_qa", "num_bytes": 8547861, "num_examples": 8000}, {"name": "train_ic_qa", "num_bytes": 8539861, "num_examples": 8000}, {"name": "eval_qa", "num_bytes": 1201450, "num_examples": 7405}, {"name": "eval_recite_qa", "num_bytes": 7941487, "num_examples": 7405}, {"name": "eval_ic_qa", "num_bytes": 7934082, "num_examples": 7405}, {"name": "all_docs", "num_bytes": 12508009, "num_examples": 26854}, {"name": "all_docs_eval", "num_bytes": 12506219, "num_examples": 26854}, {"name": "train", "num_bytes": 8539861, "num_examples": 8000}, {"name": "validation", "num_bytes": 7934082, "num_examples": 7405}], "download_size": 0, "dataset_size": 77033899}} | 2024-02-07T01:34:47+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "lmind_hotpot_train8000_eval7405_v1_ic_qa"
More Information needed | [
"# Dataset Card for \"lmind_hotpot_train8000_eval7405_v1_ic_qa\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"lmind_hotpot_train8000_eval7405_v1_ic_qa\"\n\nMore Information needed"
] |
939d932cb172bcf3c9872b2ee56c0aa2811bec3c | A mix of parts of the following datasets: pubmed, pubmed_qa and alpaca | ewhfef/mix_cpt | [
"region:us"
] | 2024-02-06T16:52:46+00:00 | {"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 426850969, "num_examples": 29635}], "download_size": 195835115, "dataset_size": 426850969}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2024-02-06T16:55:10+00:00 | [] | [] | TAGS
#region-us
| A mix of parts of the following datasets: pubmed, pubmed_qa and alpaca | [] | [
"TAGS\n#region-us \n"
] |
c2c76e65b2c1779e24f543a578c4b8d6494c5d78 |
This dataset contains .npz latent files and resized images (max 4194304 pixels, aspect ratio at most 1:10).
Latent sizes are targeted at 1024x1024 resolution, with a minimum of 256 and a maximum of 4096.
The dataset does not contain 'reference-lost' images or 'photo'-tagged image types.
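A minimal sketch of inspecting one of the cached latents with NumPy (the file name is hypothetical; the card does not document the archive layout or the array keys inside each .npz):

```python
import numpy as np

# Hypothetical path into the downloaded dataset
archive = np.load("danbooru_latents/1234567.npz")

print(archive.files)               # array keys stored in this .npz
latent = archive[archive.files[0]]
print(latent.shape, latent.dtype)  # latent targeted at ~1024x1024 decoding
```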
Due to the dataset size, it is unlikely to be updated. | AngelBottomless/Danbooru-images-latents | [
"license:mit",
"region:us"
] | 2024-02-06T17:01:38+00:00 | {"license": "mit"} | 2024-02-15T16:54:11+00:00 | [] | [] | TAGS
#license-mit #region-us
|
This dataset contains .npz latent files and resized images (max 4194304 pixels, aspect ratio at most 1:10).
Latent sizes are targeted at 1024x1024 resolution, with a minimum of 256 and a maximum of 4096.
The dataset does not contain 'reference-lost' images or 'photo'-tagged image types.
Due to the dataset size, it is unlikely to be updated. | [] | [
"TAGS\n#license-mit #region-us \n"
] |
a6f2e37c003f95837a72dda83e531e519bafbdb4 |
- **datasource_id**: The Huggingface Dataset where the template originated from.
- **query_template_id**: A unique id tied to the datasource_id.
- **database_ids**: A struct that maps dataset names to the count of occurrences of the template for that dataset.
- **query_template**: The query template value.
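A minimal sketch of loading the templates and reading the per-database counts (the repo id is taken from this card; the `database_ids` struct is assumed to load as a plain Python dict per row):

```python
from datasets import load_dataset

templates = load_dataset("semiotic/sql_templates", split="train")

row = templates[0]
print(row["query_template"])

# Keep only the databases where this template actually occurs
counts = {db: n for db, n in row["database_ids"].items() if n}
print(counts)
```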
| semiotic/sql_templates | [
"region:us"
] | 2024-02-06T18:21:35+00:00 | {"dataset_info": {"features": [{"name": "datasource_id", "dtype": "string"}, {"name": "datasource_type", "dtype": "string"}, {"name": "query_template_id", "dtype": "int64"}, {"name": "database_ids", "struct": [{"name": "aan_1", "dtype": "int64"}, {"name": "activity_1", "dtype": "int64"}, {"name": "address_1", "dtype": "int64"}, {"name": "advertising_agencies", "dtype": "int64"}, {"name": "aircraft", "dtype": "int64"}, {"name": "allergy_1", "dtype": "int64"}, {"name": "apartment_rentals", "dtype": "int64"}, {"name": "architecture", "dtype": "int64"}, {"name": "art_1", "dtype": "int64"}, {"name": "assets_maintenance", "dtype": "int64"}, {"name": "bakery_1", "dtype": "int64"}, {"name": "baseball_1", "dtype": "int64"}, {"name": "battle_death", "dtype": "int64"}, {"name": "bbc_channels", "dtype": "int64"}, {"name": "behavior_monitoring", "dtype": "int64"}, {"name": "bike_1", "dtype": "int64"}, {"name": "bike_racing", "dtype": "int64"}, {"name": "boat_1", "dtype": "int64"}, {"name": "body_builder", "dtype": "int64"}, {"name": "book_1", "dtype": "int64"}, {"name": "book_2", "dtype": "int64"}, {"name": "book_press", "dtype": "int64"}, {"name": "book_review", "dtype": "int64"}, {"name": "browser_web", "dtype": "int64"}, {"name": "candidate_poll", "dtype": "int64"}, {"name": "car_1", "dtype": "int64"}, {"name": "car_racing", "dtype": "int64"}, {"name": "car_road_race", "dtype": "int64"}, {"name": "chinook_1", "dtype": "int64"}, {"name": "cinema", "dtype": "int64"}, {"name": "city_record", "dtype": "int64"}, {"name": "climbing", "dtype": "int64"}, {"name": "club_1", "dtype": "int64"}, {"name": "club_leader", "dtype": "int64"}, {"name": "coffee_shop", "dtype": "int64"}, {"name": "college_1", "dtype": "int64"}, {"name": "college_2", "dtype": "int64"}, {"name": "college_3", "dtype": "int64"}, {"name": "company_1", "dtype": "int64"}, {"name": "company_employee", "dtype": "int64"}, {"name": "company_office", "dtype": "int64"}, {"name": "concert_singer", "dtype": "int64"}, {"name": "conference", "dtype": "int64"}, {"name": "country_language", "dtype": "int64"}, {"name": "county_public_safety", "dtype": "int64"}, {"name": "course_teach", "dtype": "int64"}, {"name": "cre_Doc_Control_Systems", "dtype": "int64"}, {"name": "cre_Doc_Template_Mgt", "dtype": "int64"}, {"name": "cre_Doc_Tracking_DB", "dtype": "int64"}, {"name": "cre_Doc_Workflow", "dtype": "int64"}, {"name": "cre_Doc_and_collections", "dtype": "int64"}, {"name": "cre_Docs_and_Epenses", "dtype": "int64"}, {"name": "cre_Drama_Workshop_Groups", "dtype": "int64"}, {"name": "cre_Students_Information_Systems", "dtype": "int64"}, {"name": "cre_Theme_park", "dtype": "int64"}, {"name": "csu_1", "dtype": "int64"}, {"name": "culture_company", "dtype": "int64"}, {"name": "customer_complaints", "dtype": "int64"}, {"name": "customer_deliveries", "dtype": "int64"}, {"name": "customers_and_addresses", "dtype": "int64"}, {"name": "customers_and_invoices", "dtype": "int64"}, {"name": "customers_and_orders", "dtype": "int64"}, {"name": "customers_and_products_contacts", "dtype": "int64"}, {"name": "customers_campaigns_ecommerce", "dtype": "int64"}, {"name": "customers_card_transactions", "dtype": "int64"}, {"name": "debate", "dtype": "int64"}, {"name": "decoration_competition", "dtype": "int64"}, {"name": "department_management", "dtype": "int64"}, {"name": "department_store", "dtype": "int64"}, {"name": "device", "dtype": "int64"}, {"name": "district_spokesman", "dtype": "int64"}, {"name": "document_management", "dtype": "int64"}, 
{"name": "dog_kennels", "dtype": "int64"}, {"name": "dorm_1", "dtype": "int64"}, {"name": "driving_school", "dtype": "int64"}, {"name": "e_commerce", "dtype": "int64"}, {"name": "e_government", "dtype": "int64"}, {"name": "e_learning", "dtype": "int64"}, {"name": "election", "dtype": "int64"}, {"name": "election_representative", "dtype": "int64"}, {"name": "employee_hire_evaluation", "dtype": "int64"}, {"name": "entertainment_awards", "dtype": "int64"}, {"name": "entrepreneur", "dtype": "int64"}, {"name": "epinions_1", "dtype": "int64"}, {"name": "farm", "dtype": "int64"}, {"name": "film_rank", "dtype": "int64"}, {"name": "flight_1", "dtype": "int64"}, {"name": "flight_2", "dtype": "int64"}, {"name": "flight_4", "dtype": "int64"}, {"name": "flight_company", "dtype": "int64"}, {"name": "formula_1", "dtype": "int64"}, {"name": "game_1", "dtype": "int64"}, {"name": "game_injury", "dtype": "int64"}, {"name": "gas_company", "dtype": "int64"}, {"name": "government_shift", "dtype": "int64"}, {"name": "gymnast", "dtype": "int64"}, {"name": "headphone_store", "dtype": "int64"}, {"name": "hospital_1", "dtype": "int64"}, {"name": "hr_1", "dtype": "int64"}, {"name": "icfp_1", "dtype": "int64"}, {"name": "inn_1", "dtype": "int64"}, {"name": "institution_sports", "dtype": "int64"}, {"name": "insurance_and_eClaims", "dtype": "int64"}, {"name": "insurance_fnol", "dtype": "int64"}, {"name": "insurance_policies", "dtype": "int64"}, {"name": "journal_committee", "dtype": "int64"}, {"name": "loan_1", "dtype": "int64"}, {"name": "local_govt_and_lot", "dtype": "int64"}, {"name": "local_govt_in_alabama", "dtype": "int64"}, {"name": "local_govt_mdm", "dtype": "int64"}, {"name": "machine_repair", "dtype": "int64"}, {"name": "manufactory_1", "dtype": "int64"}, {"name": "manufacturer", "dtype": "int64"}, {"name": "match_season", "dtype": "int64"}, {"name": "medicine_enzyme_interaction", "dtype": "int64"}, {"name": "mountain_photos", "dtype": "int64"}, {"name": "movie_1", "dtype": "int64"}, {"name": "movie_2", "dtype": "int64"}, {"name": "museum_visit", "dtype": "int64"}, {"name": "music_1", "dtype": "int64"}, {"name": "music_2", "dtype": "int64"}, {"name": "music_4", "dtype": "int64"}, {"name": "musical", "dtype": "int64"}, {"name": "network_1", "dtype": "int64"}, {"name": "network_2", "dtype": "int64"}, {"name": "news_report", "dtype": "int64"}, {"name": "online_exams", "dtype": "int64"}, {"name": "orchestra", "dtype": "int64"}, {"name": "party_host", "dtype": "int64"}, {"name": "party_people", "dtype": "int64"}, {"name": "performance_attendance", "dtype": "int64"}, {"name": "perpetrator", "dtype": "int64"}, {"name": "pets_1", "dtype": "int64"}, {"name": "phone_1", "dtype": "int64"}, {"name": "phone_market", "dtype": "int64"}, {"name": "pilot_1", "dtype": "int64"}, {"name": "pilot_record", "dtype": "int64"}, {"name": "planet_1", "dtype": "int64"}, {"name": "poker_player", "dtype": "int64"}, {"name": "product_catalog", "dtype": "int64"}, {"name": "products_for_hire", "dtype": "int64"}, {"name": "products_gen_characteristics", "dtype": "int64"}, {"name": "program_share", "dtype": "int64"}, {"name": "protein_institute", "dtype": "int64"}, {"name": "race_track", "dtype": "int64"}, {"name": "railway", "dtype": "int64"}, {"name": "real_estate_properties", "dtype": "int64"}, {"name": "real_estate_rentals", "dtype": "int64"}, {"name": "region_building", "dtype": "int64"}, {"name": "restaurant_1", "dtype": "int64"}, {"name": "restaurant_bills", "dtype": "int64"}, {"name": "riding_club", "dtype": "int64"}, {"name": 
"roller_coaster", "dtype": "int64"}, {"name": "sakila_1", "dtype": "int64"}, {"name": "school_bus", "dtype": "int64"}, {"name": "school_finance", "dtype": "int64"}, {"name": "school_player", "dtype": "int64"}, {"name": "scientist_1", "dtype": "int64"}, {"name": "ship_1", "dtype": "int64"}, {"name": "ship_mission", "dtype": "int64"}, {"name": "shop_membership", "dtype": "int64"}, {"name": "sing_contest", "dtype": "int64"}, {"name": "singer", "dtype": "int64"}, {"name": "small_bank_1", "dtype": "int64"}, {"name": "soccer_1", "dtype": "int64"}, {"name": "soccer_2", "dtype": "int64"}, {"name": "soccer_3", "dtype": "int64"}, {"name": "solvency_ii", "dtype": "int64"}, {"name": "sports_competition", "dtype": "int64"}, {"name": "station_weather", "dtype": "int64"}, {"name": "store_1", "dtype": "int64"}, {"name": "store_product", "dtype": "int64"}, {"name": "storm_record", "dtype": "int64"}, {"name": "student_1", "dtype": "int64"}, {"name": "student_assessment", "dtype": "int64"}, {"name": "student_transcripts_tracking", "dtype": "int64"}, {"name": "swimming", "dtype": "int64"}, {"name": "theme_gallery", "dtype": "int64"}, {"name": "tracking_grants_for_research", "dtype": "int64"}, {"name": "tracking_orders", "dtype": "int64"}, {"name": "tracking_share_transactions", "dtype": "int64"}, {"name": "tracking_software_problems", "dtype": "int64"}, {"name": "train_station", "dtype": "int64"}, {"name": "tv_shows", "dtype": "int64"}, {"name": "tvshow", "dtype": "int64"}, {"name": "twitter_1", "dtype": "int64"}, {"name": "university_basketball", "dtype": "int64"}, {"name": "university_rank", "dtype": "int64"}, {"name": "vehicle_driver", "dtype": "int64"}, {"name": "vehicle_rent", "dtype": "int64"}, {"name": "video_game", "dtype": "int64"}, {"name": "voter_1", "dtype": "int64"}, {"name": "voter_2", "dtype": "int64"}, {"name": "warehouse_1", "dtype": "int64"}, {"name": "wedding", "dtype": "int64"}, {"name": "wine_1", "dtype": "int64"}, {"name": "workshop_paper", "dtype": "int64"}, {"name": "world_1", "dtype": "int64"}, {"name": "wrestler", "dtype": "int64"}, {"name": "wta_1", "dtype": "int64"}]}, {"name": "query_template", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2793662, "num_examples": 1610}], "download_size": 203205, "dataset_size": 2793662}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2024-02-06T19:25:32+00:00 | [] | [] | TAGS
#region-us
|
- datasource_id: The Huggingface Dataset where the template originated from.
- query_template_id: A unique id tied to the datasource_id.
- database_ids: A struct that maps dataset names to the count of occurrences of the template for that dataset.
- query_template: The query template value.
| [] | [
"TAGS\n#region-us \n"
] |
075e08fc0cc58fc6e46e317eb2e347e783031395 |
# Dataset Card for Aligner
<span style="color: red;">Warning: This dataset contains data that may be offensive or harmful. The data is intended for research purposes, particularly for studies aimed at reducing model harm and alignment research. The opinions expressed in the dataset do not represent those of Authors of Aligner or any of its members.</span>
## Usage
To load our dataset, use the `load_dataset()` function as follows:
```python
from datasets import load_dataset
dataset = load_dataset("aligner/aligner-20K")
```
| aligner/aligner-20K | [
"task_categories:text-generation",
"language:en",
"license:cc-by-nc-4.0",
"safe",
"safety",
"ai-safety",
"llm",
"lm",
"region:us"
] | 2024-02-06T18:53:43+00:00 | {"language": ["en"], "license": "cc-by-nc-4.0", "task_categories": ["text-generation"], "tags": ["safe", "safety", "ai-safety", "llm", "lm"]} | 2024-02-07T14:28:22+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-generation #language-English #license-cc-by-nc-4.0 #safe #safety #ai-safety #llm #lm #region-us
|
# Dataset Card for Aligner
<span style="color: red;">Warning: This dataset contains data that may be offensive or harmful. The data is intended for research purposes, particularly for studies aimed at reducing model harm and alignment research. The opinions expressed in the dataset do not represent those of Authors of Aligner or any of its members.</span>
## Usage
To load our dataset, use the 'load_dataset()' function as follows:
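```python
from datasets import load_dataset

# Same loading call as shown earlier in this card
dataset = load_dataset("aligner/aligner-20K")
```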
| [
"# Dataset Card for Aligner\n\n<span style=\"color: red;\">Warning: This dataset contains data that may be offensive or harmful. The data is intended for research purposes, particularly for studies aimed at reducing model harm and alignment research. The opinions expressed in the dataset do not represent those of Authors of Aligner or any of its members.</span>",
"## Usage\n\nTo load our dataset, use the 'load_dataset()' function as follows:"
] | [
"TAGS\n#task_categories-text-generation #language-English #license-cc-by-nc-4.0 #safe #safety #ai-safety #llm #lm #region-us \n",
"# Dataset Card for Aligner\n\n<span style=\"color: red;\">Warning: This dataset contains data that may be offensive or harmful. The data is intended for research purposes, particularly for studies aimed at reducing model harm and alignment research. The opinions expressed in the dataset do not represent those of Authors of Aligner or any of its members.</span>",
"## Usage\n\nTo load our dataset, use the 'load_dataset()' function as follows:"
] |
7b6ba47ffb4ed7162d5c4dc52a29946b04e965ff | Amazon Web Services dataset
7289 lines.
Format:
Service;Question;Answer
Only English language | dawveed/AmazonWebServicesAWS-dataset | [
"license:apache-2.0",
"region:us"
] | 2024-02-06T20:43:24+00:00 | {"license": "apache-2.0"} | 2024-02-06T20:48:31+00:00 | [] | [] | TAGS
#license-apache-2.0 #region-us
| Amazon Web Services dataset
7289 lines.
Format:
Service;Question;Answer
Only English language | [] | [
"TAGS\n#license-apache-2.0 #region-us \n"
] |
6f8e01507109a93bf64e70cbecf5193a8beebd6a |
A filtered version of `SciPhi/textbooks-are-all-you-need-lite` using [Taylor AI](https://www.trytaylor.ai/).
Filter Query:
```sql
SELECT id, text, metadata
FROM documents
WHERE __document_topic = 'medicine, healthcare, biological' AND LENGTH(text) > 50
LIMIT 5000
```
**Embedding Model**: bge-small | Technoculture/medical-textbooks-are-all-you-need-lite | [
"size_categories:1K<n<10K",
"license:mit",
"SciPhi/textbooks-are-all-you-need-lite",
"bge-small",
"TaylorAI",
"region:us"
] | 2024-02-06T21:25:01+00:00 | {"license": "mit", "size_categories": ["1K<n<10K"], "tags": ["SciPhi/textbooks-are-all-you-need-lite", "bge-small", "TaylorAI"]} | 2024-02-06T22:08:05+00:00 | [] | [] | TAGS
#size_categories-1K<n<10K #license-mit #SciPhi/textbooks-are-all-you-need-lite #bge-small #TaylorAI #region-us
|
A filtered version of 'SciPhi/textbooks-are-all-you-need-lite' using Taylor AI.
Filter Query:
Embedding Model: bge-small | [] | [
"TAGS\n#size_categories-1K<n<10K #license-mit #SciPhi/textbooks-are-all-you-need-lite #bge-small #TaylorAI #region-us \n"
] |
7cdb188c6670ec16509d55cb57ac506d9cb02ecb | # Dataset Card for "OpenHermes-2.5-llama-format"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | bcui19/OpenHermes-2.5-llama-format | [
"region:us"
] | 2024-02-06T21:42:25+00:00 | {"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1615793279, "num_examples": 1008268}], "download_size": 0, "dataset_size": 1615793279}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2024-02-07T21:29:40+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "OpenHermes-2.5-llama-format"
More Information needed | [
"# Dataset Card for \"OpenHermes-2.5-llama-format\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"OpenHermes-2.5-llama-format\"\n\nMore Information needed"
] |
a786e4e79a7bf386354e31108dda281883a94169 | ## Dataset Card for The Schema-Guided Dialogue (SGD) Dataset converted to ShareGPT
### Dataset summary
This is a ShareGPT-parsed version of the [schema_guided_dstc8](https://huggingface.co/datasets/schema_guided_dstc8) dataset, intended for fine-tuning LLMs on task-oriented conversations. There are three versions of the dataset:
* `function_only`: In each turn, the assistant only answers with function calls. For example:
```
REQUEST(cuisine=['Italian', 'Indian']) REQUEST(city=[])
```
* `function_cot`: In each turn, the assistant has to analyze recent user actions, the intent being fulfilled by the system (`active_intent`), and slots requested by the user in the current turn (`requested_slots`) before giving the appropriate function calls. For example:
```
Let's think step-by-step:
User actions: INFORM_INTENT(intent=['FindRestaurants'])
active_intent: FindRestaurants()
requested_slots: NONE
Therefore, assistant actions should be: REQUEST(cuisine=['Italian', 'Indian']) REQUEST(city=[])
```
* `function_cot_nlg`: Same as `function_cot` but the assistant has to give an additional natural language response corresponding to the function calls. For example:
```
Let's think step-by-step:
User actions: INFORM_INTENT(intent=['FindRestaurants'])
active_intent: FindRestaurants()
requested_slots: NONE
Therefore, assistant actions should be: REQUEST(cuisine=['Italian', 'Indian']) REQUEST(city=[])
Response: Sure thing, which area are you interested in and what is your preferred cuisine, such as Italian and Indian?
```
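A minimal sketch of loading one of these variants with the `datasets` library (config and column names are taken from this card's metadata; swap in `function_only` or `function_cot` as needed):

```python
from datasets import load_dataset

ds = load_dataset("Mediform/sgd-sharegpt", "function_cot_nlg", split="train")

sample = ds[0]
print(sample["dialogue_id"], sample["services"])

# ShareGPT-style turns live in the "conversations" column
for turn in sample["conversations"][:3]:
    print(f'{turn["from"]}: {turn["value"][:80]}')
```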
### Dataset samples
To keep the README short, only a sample from `function_cot_nlg` is provided. Samples from the other subsets differ slightly in the system instruction and assistant turns (as described above).
```
SYSTEM:
As a helpful assistant, you are involved in a task-oriented conversation where you interact with provided services and APIs to assist users with their query. Service description:
Service: Restaurants_1
Description: A leading provider for restaurant search and reservations
Slots:
- restaurant_name (Any): Name of the restaurant
- date (Any): Date for the reservation or to find availability
- time (Any): Time for the reservation or to find availability
- serves_alcohol (True, False): Boolean flag indicating if the restaurant serves alcohol
- has_live_music (True, False): Boolean flag indicating if the restaurant has live music
- phone_number (Any): Phone number of the restaurant
- street_address (Any): Address of the restaurant
- party_size (1, 2, 3, 4, 5, 6): Party size for a reservation
- price_range (inexpensive, moderate, expensive, very expensive): Price range for the restaurant
- city (Any): City in which the restaurant is located
- cuisine (Mexican, Chinese, Indian, American, Italian): Cuisine of food served in the restaurant
Intents:
- ReserveRestaurant:
Reserve a table at a restaurant
Args:
restaurant_name: Required
city: Required
time: Required
date: Optional. Default: "2019-03-01"
party_size: Optional. Default: "2"
- FindRestaurants:
Find a restaurant of a particular cuisine in a city
Args:
cuisine: Required
city: Required
price_range: Optional. Default: "dontcare"
has_live_music: Optional. Default: "dontcare"
serves_alcohol: Optional. Default: "dontcare"
In each turn, think step-by-step about user actions and conversation state, which includes the following information:
- active_intent: The intent corresponding to the service of the frame which is currently being fulfilled by the system. It takes the value "NONE" if none of the intents are active.
- requested_slots: A list of slots requested by the user in the current turn.
From the captured information, take some actions in form of functions to clarify user request using one of the actions below:
AFFIRM
AFFIRM_INTENT
CONFIRM
GOODBYE
INFORM
INFORM_COUNT
INFORM_INTENT
NEGATE
NEGATE_INTENT
NOTIFY_FAILURE
NOTIFY_SUCCESS
OFFER
OFFER_INTENT
REQUEST
REQUEST_ALTS
REQ_MORE
SELECT
THANK_YOU
Each action can have some arguments derived from the slot list above, and corresponding values if necessary. Examples of an action:
OFFER(restaurant_name=["71 Saint Peter"], city=["San Jose"])
INFORM(city=["San Jose", "Los Angeles"])
Beside actions above, you can also send request to the service to retrieve necessary information. The request consists of the following fields:
- method: The name of the intent or function of the service or API being executed.
- parameters: slot names and the corresponding values.
Examples of a request:
FindRestaurants(city='San Jose', cuisine='American')
TransferMoney(account_type='checking', amount=1000)
If you make a request to the service, you should expect the next user turn contains results from the service instead of actual user message.
Finally, generate a response in natural language to the user.
USER: Hello, could you help me search for a place to dine in?
ASSISTANT: Let's think step-by-step:
User actions: INFORM_INTENT(intent=['FindRestaurants'])
active_intent: FindRestaurants()
requested_slots: NONE
Therefore, assistant actions should be: REQUEST(cuisine=['Italian', 'Indian']) REQUEST(city=[])
Response: Sure thing, which area are you interested in and what is your preferred cuisine, such as Italian and Indian?
USER: I would prefer a restaurant which serves up Cambodian food in SF.
ASSISTANT: Let's think step-by-step:
User actions: INFORM(cuisine=['Vietnamese']) INFORM(city=['San Francisco'])
active_intent: FindRestaurants(city=['SF'], cuisine=['Cambodian'])
requested_slots: NONE
Therefore, assistant actions should be: FindRestaurants(city="San Francisco" cuisine="Vietnamese")
USER: [SYSTEM RESPONSE]
{'city': 'San Francisco', 'cuisine': 'Vietnamese', 'has_live_music': 'False', 'phone_number': '415-885-5180', 'price_range': 'moderate', 'restaurant_name': 'Anh Hong', 'serves_alcohol': 'True', 'street_address': '808 Geary Street'}
{'city': 'San Francisco', 'cuisine': 'Vietnamese', 'has_live_music': 'False', 'phone_number': '415-928-4977', 'price_range': 'moderate', 'restaurant_name': 'Aux Delices', 'serves_alcohol': 'False', 'street_address': '2327 Polk Street'}
``` | Mediform/sgd-sharegpt | [
"size_categories:10K<n<100K",
"source_datasets:schema_guided_dstc8",
"language:en",
"region:us"
] | 2024-02-06T21:59:39+00:00 | {"language": ["en"], "size_categories": ["10K<n<100K"], "source_datasets": ["schema_guided_dstc8"], "pretty_name": "schema_guided_dstc8", "dataset_info": [{"config_name": "function_cot", "features": [{"name": "dialogue_id", "dtype": "string"}, {"name": "services", "sequence": "string"}, {"name": "turns", "sequence": [{"name": "speaker", "dtype": {"class_label": {"names": {"0": "USER", "1": "SYSTEM"}}}}, {"name": "utterance", "dtype": "string"}, {"name": "frames", "sequence": [{"name": "service", "dtype": "string"}, {"name": "slots", "sequence": [{"name": "slot", "dtype": "string"}, {"name": "start", "dtype": "int32"}, {"name": "exclusive_end", "dtype": "int32"}]}, {"name": "state", "struct": [{"name": "active_intent", "dtype": "string"}, {"name": "requested_slots", "sequence": "string"}, {"name": "slot_values", "sequence": [{"name": "slot_name", "dtype": "string"}, {"name": "slot_value_list", "sequence": "string"}]}]}, {"name": "actions", "sequence": [{"name": "act", "dtype": {"class_label": {"names": {"0": "AFFIRM", "1": "AFFIRM_INTENT", "2": "CONFIRM", "3": "GOODBYE", "4": "INFORM", "5": "INFORM_COUNT", "6": "INFORM_INTENT", "7": "NEGATE", "8": "NEGATE_INTENT", "9": "NOTIFY_FAILURE", "10": "NOTIFY_SUCCESS", "11": "OFFER", "12": "OFFER_INTENT", "13": "REQUEST", "14": "REQUEST_ALTS", "15": "REQ_MORE", "16": "SELECT", "17": "THANK_YOU"}}}}, {"name": "slot", "dtype": "string"}, {"name": "canonical_values", "sequence": "string"}, {"name": "values", "sequence": "string"}]}, {"name": "service_results", "sequence": [{"name": "service_results_list", "sequence": [{"name": "service_slot_name", "dtype": "string"}, {"name": "service_canonical_value", "dtype": "string"}]}]}, {"name": "service_call", "struct": [{"name": "method", "dtype": "string"}, {"name": "parameters", "sequence": [{"name": "parameter_slot_name", "dtype": "string"}, {"name": "parameter_canonical_value", "dtype": "string"}]}]}]}]}, {"name": "conversations", "list": [{"name": "from", "dtype": "string"}, {"name": "value", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 399102207, "num_examples": 18624}, {"name": "test", "num_bytes": 90463262, "num_examples": 4201}], "download_size": 74683092, "dataset_size": 489565469}, {"config_name": "function_cot_nlg", "features": [{"name": "dialogue_id", "dtype": "string"}, {"name": "services", "sequence": "string"}, {"name": "turns", "sequence": [{"name": "speaker", "dtype": {"class_label": {"names": {"0": "USER", "1": "SYSTEM"}}}}, {"name": "utterance", "dtype": "string"}, {"name": "frames", "sequence": [{"name": "service", "dtype": "string"}, {"name": "slots", "sequence": [{"name": "slot", "dtype": "string"}, {"name": "start", "dtype": "int32"}, {"name": "exclusive_end", "dtype": "int32"}]}, {"name": "state", "struct": [{"name": "active_intent", "dtype": "string"}, {"name": "requested_slots", "sequence": "string"}, {"name": "slot_values", "sequence": [{"name": "slot_name", "dtype": "string"}, {"name": "slot_value_list", "sequence": "string"}]}]}, {"name": "actions", "sequence": [{"name": "act", "dtype": {"class_label": {"names": {"0": "AFFIRM", "1": "AFFIRM_INTENT", "2": "CONFIRM", "3": "GOODBYE", "4": "INFORM", "5": "INFORM_COUNT", "6": "INFORM_INTENT", "7": "NEGATE", "8": "NEGATE_INTENT", "9": "NOTIFY_FAILURE", "10": "NOTIFY_SUCCESS", "11": "OFFER", "12": "OFFER_INTENT", "13": "REQUEST", "14": "REQUEST_ALTS", "15": "REQ_MORE", "16": "SELECT", "17": "THANK_YOU"}}}}, {"name": "slot", "dtype": "string"}, {"name": "canonical_values", "sequence": 
"string"}, {"name": "values", "sequence": "string"}]}, {"name": "service_results", "sequence": [{"name": "service_results_list", "sequence": [{"name": "service_slot_name", "dtype": "string"}, {"name": "service_canonical_value", "dtype": "string"}]}]}, {"name": "service_call", "struct": [{"name": "method", "dtype": "string"}, {"name": "parameters", "sequence": [{"name": "parameter_slot_name", "dtype": "string"}, {"name": "parameter_canonical_value", "dtype": "string"}]}]}]}]}, {"name": "conversations", "list": [{"name": "from", "dtype": "string"}, {"name": "value", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 413485160, "num_examples": 18624}, {"name": "test", "num_bytes": 93773920, "num_examples": 4201}], "download_size": 83417765, "dataset_size": 507259080}, {"config_name": "only_function", "features": [{"name": "dialogue_id", "dtype": "string"}, {"name": "services", "sequence": "string"}, {"name": "turns", "sequence": [{"name": "speaker", "dtype": {"class_label": {"names": {"0": "USER", "1": "SYSTEM"}}}}, {"name": "utterance", "dtype": "string"}, {"name": "frames", "sequence": [{"name": "service", "dtype": "string"}, {"name": "slots", "sequence": [{"name": "slot", "dtype": "string"}, {"name": "start", "dtype": "int32"}, {"name": "exclusive_end", "dtype": "int32"}]}, {"name": "state", "struct": [{"name": "active_intent", "dtype": "string"}, {"name": "requested_slots", "sequence": "string"}, {"name": "slot_values", "sequence": [{"name": "slot_name", "dtype": "string"}, {"name": "slot_value_list", "sequence": "string"}]}]}, {"name": "actions", "sequence": [{"name": "act", "dtype": {"class_label": {"names": {"0": "AFFIRM", "1": "AFFIRM_INTENT", "2": "CONFIRM", "3": "GOODBYE", "4": "INFORM", "5": "INFORM_COUNT", "6": "INFORM_INTENT", "7": "NEGATE", "8": "NEGATE_INTENT", "9": "NOTIFY_FAILURE", "10": "NOTIFY_SUCCESS", "11": "OFFER", "12": "OFFER_INTENT", "13": "REQUEST", "14": "REQUEST_ALTS", "15": "REQ_MORE", "16": "SELECT", "17": "THANK_YOU"}}}}, {"name": "slot", "dtype": "string"}, {"name": "canonical_values", "sequence": "string"}, {"name": "values", "sequence": "string"}]}, {"name": "service_results", "sequence": [{"name": "service_results_list", "sequence": [{"name": "service_slot_name", "dtype": "string"}, {"name": "service_canonical_value", "dtype": "string"}]}]}, {"name": "service_call", "struct": [{"name": "method", "dtype": "string"}, {"name": "parameters", "sequence": [{"name": "parameter_slot_name", "dtype": "string"}, {"name": "parameter_canonical_value", "dtype": "string"}]}]}]}]}, {"name": "conversations", "list": [{"name": "from", "dtype": "string"}, {"name": "value", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 322401072, "num_examples": 18624}, {"name": "test", "num_bytes": 73561762, "num_examples": 4201}], "download_size": 57534751, "dataset_size": 395962834}], "configs": [{"config_name": "function_cot", "data_files": [{"split": "train", "path": "function_cot/train-*"}, {"split": "test", "path": "function_cot/test-*"}]}, {"config_name": "function_cot_nlg", "data_files": [{"split": "train", "path": "function_cot_nlg/train-*"}, {"split": "test", "path": "function_cot_nlg/test-*"}]}, {"config_name": "only_function", "data_files": [{"split": "train", "path": "only_function/train-*"}, {"split": "test", "path": "only_function/test-*"}]}]} | 2024-02-07T14:31:58+00:00 | [] | [
"en"
] | TAGS
#size_categories-10K<n<100K #source_datasets-schema_guided_dstc8 #language-English #region-us
| ## Dataset Card for The Schema-Guided Dialogue (SGD) Dataset converted to ShareGPT
### Dataset summary
This is a ShareGPT-parsed version of schema_guided_dstc8 dataset, which aims to finetune LLMs on task-oriented conversations. There are three versions of the dataset:
* 'function_only': In each turn, the assistant only answers with function calls. For example:
* 'function_cot': In each turn, the assistant has to analyze recent user actions, the intent being fulfilled by the system ('active_intent'), and slots requested by the user in the current turn ('requested_slots') before giving the appropriate function calls. For example:
* 'function_cot_nlg': Same as 'function_cot' but the assistant has to give an additional natural language response corresponding to the function calls. For example:
### Dataset samples
To keep README short, only a sample from 'function_cot_nlg' is provided. Samples from the other subsets slightly differ in system's instruction and assistant turns (as described above).
| [
"## Dataset Card for The Schema-Guided Dialogue (SGD) Dataset converted to ShareGPT",
"### Dataset summary\nThis is a ShareGPT-parsed version of schema_guided_dstc8 dataset, which aims to finetune LLMs on task-oriented conversations. There are three versions of the dataset:\n* 'function_only': In each turn, the assistant only answers with function calls. For example:\n\n \n \n* 'function_cot': In each turn, the assistant has to analyze recent user actions, the intent being fulfilled by the system ('active_intent'), and slots requested by the user in the current turn ('requested_slots') before giving the appropriate function calls. For example:\n \n* 'function_cot_nlg': Same as 'function_cot' but the assistant has to give an additional natural language response corresponding to the function calls. For example:",
"### Dataset samples\nTo keep README short, only a sample from 'function_cot_nlg' is provided. Samples from the other subsets slightly differ in system's instruction and assistant turns (as described above)."
] | [
"TAGS\n#size_categories-10K<n<100K #source_datasets-schema_guided_dstc8 #language-English #region-us \n",
"## Dataset Card for The Schema-Guided Dialogue (SGD) Dataset converted to ShareGPT",
"### Dataset summary\nThis is a ShareGPT-parsed version of schema_guided_dstc8 dataset, which aims to finetune LLMs on task-oriented conversations. There are three versions of the dataset:\n* 'function_only': In each turn, the assistant only answers with function calls. For example:\n\n \n \n* 'function_cot': In each turn, the assistant has to analyze recent user actions, the intent being fulfilled by the system ('active_intent'), and slots requested by the user in the current turn ('requested_slots') before giving the appropriate function calls. For example:\n \n* 'function_cot_nlg': Same as 'function_cot' but the assistant has to give an additional natural language response corresponding to the function calls. For example:",
"### Dataset samples\nTo keep README short, only a sample from 'function_cot_nlg' is provided. Samples from the other subsets slightly differ in system's instruction and assistant turns (as described above)."
] |
dcf0768cd187fceadb9d97da6a35ccbc14a50b52 |
It's 40,000 rows sampled from [teknium/openhermes](https://huggingface.co/datasets/teknium/openhermes) (not the newer 2.5).
Filtered out some GPTisms I dislike, and also removed rows with short outputs to bias towards longer answers.
bad_phrases = ["couldn't help but", "can't resist", "random", "unethical", "I'm sorry, but", "I'm sorry but", "as an AI", "as a Language Model", "AI Language Model", "language model", "However, it is important to", "However, it's important", "ethical guidelines", "just an AI", "within my programming", "illegal", "cannot provide"]
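For reference, a filter along these lines could be reproduced with the `datasets` library. This is only a sketch of the described process, not the exact script used; the `output` field name, the length threshold, and the sampling seed are assumptions.
```
from datasets import load_dataset

bad_phrases = ["couldn't help but", "can't resist", "random", "unethical", "I'm sorry, but",
               "I'm sorry but", "as an AI", "as a Language Model", "AI Language Model",
               "language model", "However, it is important to", "However, it's important",
               "ethical guidelines", "just an AI", "within my programming", "illegal",
               "cannot provide"]

def keep(row):
    text = row.get("output", "") or ""  # assumed field name
    lowered = text.lower()
    # Drop rows containing any unwanted phrase, then drop short outputs
    if any(phrase.lower() in lowered for phrase in bad_phrases):
        return False
    return len(text) >= 500  # hypothetical minimum length

source = load_dataset("teknium/openhermes", split="train")
filtered = source.filter(keep)
sampled = filtered.shuffle(seed=42).select(range(40_000))
```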
| lodrick-the-lafted/Hermes-40K | [
"task_categories:text-generation",
"language:eng",
"distillation",
"synthetic data",
"gpt",
"region:us"
] | 2024-02-06T22:58:23+00:00 | {"language": ["eng"], "task_categories": ["text-generation"], "pretty_name": "Hermes-40K", "tags": ["distillation", "synthetic data", "gpt"]} | 2024-02-12T08:07:52+00:00 | [] | [
"eng"
] | TAGS
#task_categories-text-generation #language-English #distillation #synthetic data #gpt #region-us
|
It's 40,000 rows sampled from teknium/openhermes (not the newer 2.5).
Filtered some GPTisms I dislike out, and removed rows with short output as well to bias towards longer answers.
bad_phrases = ["couldn't help but", "can't resist", "random", "unethical", "I'm sorry, but", "I'm sorry but", "as an AI", "as a Language Model", "AI Language Model", "language model", "However, it is important to", "However, it's important", "ethical guidelines", "just an AI", "within my programming", "illegal", "cannot provide"]
| [] | [
"TAGS\n#task_categories-text-generation #language-English #distillation #synthetic data #gpt #region-us \n"
] |
4235534078a6eed0bc42c6c295d9730cfbacd184 | # Matchmaking Dataset
This is a starter dataset designed to help train a basic dating recommender model.
Although this doesn't have many records, there is a lack of publicly available dating app datasets.
This dataset can help kickstart a basic recommendation model for a dating or matchmaking app.
### Usage
```
from datasets import load_dataset
dataset = load_dataset("dstam/matchmaking")
```
To transform the data into separate user features and actions, run
```
import pandas as pd
class RelationalMatchMakingDataset:
    def __init__(self, dataset_dict):
        self.dataset_dict = dataset_dict

    def as_relational_db(self):
        """
        Transforms the loaded dataset into a simulated relational database structure,
        normalizing users and actions into separate tables.
        """
        # Convert the main dataset into a DataFrame
        main_df = self.dataset_dict["train"].to_pandas()

        # Split actions information
        actions_columns = ['dated_uid', 'dater_uid', 'interests_correlation',
                           'dater_liked_dated', 'probability_dated_wants_to_date',
                           'already_met_before', 'dater_wants_to_date',
                           'dated_wants_to_date', 'is_match']
        actions_df = main_df[actions_columns].copy()

        # Prepare columns for the unified users DataFrame
        user_columns = ['uid', 'bio', 'looking_for', 'race', 'is_male', 'age',
                        'same_race_importance', 'same_religion_importance']

        # Create empty DataFrame for users
        users_df = pd.DataFrame(columns=user_columns)

        # Extract unique user IDs from dater and dated columns
        unique_users = pd.concat([main_df['dater_uid'], main_df['dated_uid']]).unique()

        # Cast object columns that only hold booleans to an explicit bool dtype
        for column in users_df.columns:
            if users_df[column].dtype == object and all(users_df[column].dropna().map(lambda x: isinstance(x, bool))):
                # Explicitly cast to bool dtype
                users_df[column] = users_df[column].astype(bool)

        # Fill the users DataFrame with information from both dater and dated
        for uid in unique_users:
            user_data = {}
            user_data['uid'] = uid
            if uid in main_df['dater_uid'].values:
                dater_row = main_df[main_df['dater_uid'] == uid].iloc[0]
                user_data['bio'] = dater_row['dater_bio']
                user_data['looking_for'] = dater_row['dater_looking_for']
                user_data['race'] = dater_row['dater_race']
                user_data['is_male'] = dater_row['is_dater_male']
                user_data['age'] = dater_row['dater_age']
                user_data['same_race_importance'] = dater_row['same_race_importance_for_dater']
                user_data['same_religion_importance'] = dater_row['same_religion_importance_for_dater']
            elif uid in main_df['dated_uid'].values:
                dated_row = main_df[main_df['dated_uid'] == uid].iloc[0]
                user_data['bio'] = dated_row['dated_bio']
                user_data['looking_for'] = dated_row['dated_looking_for']
                user_data['race'] = dated_row['dated_race']
                user_data['is_male'] = dated_row['is_dated_male']
                user_data['age'] = dated_row['dated_age']
                user_data['same_race_importance'] = dated_row['same_race_importance_for_dated']
                user_data['same_religion_importance'] = dated_row['same_religion_importance_for_dated']

            user_data_df = pd.DataFrame([user_data])
            users_df = pd.concat([users_df, user_data_df], ignore_index=True)

        # Construct relational 'tables'
        relational_db = {
            "actions": actions_df,
            "users": users_df,
        }

        return relational_db

relational_db = RelationalMatchMakingDataset(dataset).as_relational_db()
```
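As a quick follow-up (a sketch, not part of the original card), the two tables returned in `relational_db` can be merged back into a single modelling frame by attaching user features to both sides of each interaction. The `dater_`/`dated_` column prefixes below are introduced here purely for illustration.
```
users = relational_db["users"]
actions = relational_db["actions"]

# Attach the dater's and the dated person's features to every interaction
training_frame = (
    actions
    .merge(users.add_prefix("dater_"), on="dater_uid", how="left")
    .merge(users.add_prefix("dated_"), on="dated_uid", how="left")
)

# Features and label for a simple match-prediction model
X = training_frame.drop(columns=["is_match"])
y = training_frame["is_match"]
```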
### Structure
- **user_feature.csv**: Contains various features for 124 real users
- **actions.csv**: contains 1048 interactions between different users
- **bio.csv**: Contains a bio for each user
- **looking_for.csv**: Contains the answer to the question "what I'm looking for in an ideal partner"
- **races.csv**: Links each race_id with the name of that race
### Source
This dataset is a transformation of the **speeddating** dataset by **mstz**: https://huggingface.co/datasets/mstz/speeddating/tree/main
I restructured the above data to better fit the "user/item" framework for training recommendation models.
I then used GPT4 to generate bios and "what I'm looking for" prompts for each user based on their features and answers to certain questions in the original dataset.
To replicate my work:
- Run the description_generator.ipynb notebook
- Then run `python3 clean_text_files.py`
### Citation:
```
@misc{Matchmaking 1.0,
title = {Matchmaking 1.0: An open-source starter dataset for training dating app and matchmaking recommendation models},
author = {dstam},
year = {2024},
publisher = {HuggingFace},
url = {https://huggingface.co/datasets/danstam/Matchmaking-1.0}
}
``` | dstam/matchmaking | [
"size_categories:n<1K",
"language:en",
"license:mit",
"recommendation",
"dating",
"matchmaking",
"classification",
"cold_start",
"region:us"
] | 2024-02-06T23:30:26+00:00 | {"language": ["en"], "license": "mit", "size_categories": ["n<1K"], "tags": ["recommendation", "dating", "matchmaking", "classification", "cold_start"]} | 2024-02-13T19:17:41+00:00 | [] | [
"en"
] | TAGS
#size_categories-n<1K #language-English #license-mit #recommendation #dating #matchmaking #classification #cold_start #region-us
| # Matchmaking Dataset
This is a starter dataset designed to help train a basic dating recommender model.
Although this doesn't have many records, there is a lack of publicly available dating app datasets.
This dataset can help kickstart a basic recommendation model for a dating or matchmaking app.
### Usage
To transform the data into seperate user features and actions, run
### Structure
- user_feature.csv: Contains various features for 124 real users
- URL: contains 1048 interactions between different users
- URL: Contains a bio for each user
- looking_for.csv: Contains the answer to the question "what I'm looking for in an ideal partner"
- URL: Links each race_id with the name of that race
### Source
This dataset is a transformation of the speedating dataset by mstz: URL
I restructured the above data to better fit the "user/item" framework for training recommendation models
I then used GPT4 to generate bios and "what I'm looking for" prompts for each user based on their features and answers to certain questions in the original dataset.
To replicate my work:
- Run the description_generator.ipynb notebook
- The run 'python3 clean_text_files.py'
:
| [
"# Matchmaking Dataset\n\nThis is a starter dataset designed to help train a basic dating recommender model. \nAlthough this doesn't have many records, there is a lack of publicly available dating app datasets. \nThis dataset can help kickstart a basic recommendation model for a dating or matchmaking app.",
"### Usage\n\n\nTo transform the data into seperate user features and actions, run",
"### Structure\n- user_feature.csv: Contains various features for 124 real users\n- URL: contains 1048 interactions between different users\n- URL: Contains a bio for each user\n- looking_for.csv: Contains the answer to the question \"what I'm looking for in an ideal partner\"\n- URL: Links each race_id with the name of that race",
"### Source\nThis dataset is a transformation of the speedating dataset by mstz: URL\nI restructured the above data to better fit the \"user/item\" framework for training recommendation models\n\nI then used GPT4 to generate bios and \"what I'm looking for\" prompts for each user based on their features and answers to certain questions in the original dataset. \n\nTo replicate my work:\n- Run the description_generator.ipynb notebook\n- The run 'python3 clean_text_files.py'\n\n\n:"
] | [
"TAGS\n#size_categories-n<1K #language-English #license-mit #recommendation #dating #matchmaking #classification #cold_start #region-us \n",
"# Matchmaking Dataset\n\nThis is a starter dataset designed to help train a basic dating recommender model. \nAlthough this doesn't have many records, there is a lack of publicly available dating app datasets. \nThis dataset can help kickstart a basic recommendation model for a dating or matchmaking app.",
"### Usage\n\n\nTo transform the data into seperate user features and actions, run",
"### Structure\n- user_feature.csv: Contains various features for 124 real users\n- URL: contains 1048 interactions between different users\n- URL: Contains a bio for each user\n- looking_for.csv: Contains the answer to the question \"what I'm looking for in an ideal partner\"\n- URL: Links each race_id with the name of that race",
"### Source\nThis dataset is a transformation of the speedating dataset by mstz: URL\nI restructured the above data to better fit the \"user/item\" framework for training recommendation models\n\nI then used GPT4 to generate bios and \"what I'm looking for\" prompts for each user based on their features and answers to certain questions in the original dataset. \n\nTo replicate my work:\n- Run the description_generator.ipynb notebook\n- The run 'python3 clean_text_files.py'\n\n\n:"
] |
0ead5fe7429384dafb653270261354620bc5d357 |
**US-PD-Newspapers** is an aggregation of all the archives of US newspapers digitized by the Library of Congress for the Chronicling America digital library.
With nearly 100 billion words, it is one of the largest open corpora in the United States. All the materials are now part of the public domain and have no intellectual property rights remaining.
## Content
As of January 2024, the collection contains nearly 21 million unique newspaper and periodical editions published from 1690 to 1963 (98,742,987,471 words).
The collection was compiled by Pierre-Carl Langlais based on the [dumps](https://chroniclingamerica.loc.gov/data/ocr/) made available by the Library of Congress. Each parquet file matches one of the 2618 original dump files, including their code name. It has the full text of a few thousand editions selected at random and a few core metadata fields (edition id, date, word counts…). The metadata can be easily expanded thanks to the LOC APIs and other data services.
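As a quick illustration (not part of the official documentation), the parquet files can be read directly with the `datasets` library. Streaming avoids downloading the full collection at once; the `train` split name and the `text` field below are assumptions about the schema.
```
from datasets import load_dataset

# Stream the collection instead of downloading all parquet files at once
ds = load_dataset("PleIAs/US-PD-Newspapers", split="train", streaming=True)

for i, edition in enumerate(ds):
    # Print the first few hundred characters of a handful of editions
    print(edition.get("text", "")[:300])
    if i == 2:
        break
```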
The [American Stories dataset](https://huggingface.co/datasets/dell-research-harvard/AmericanStories) is a curated and enhanced version of the same resource, with significant progress with regard to text quality and documentation. It currently retains about 20% of the original material.
## Language
While most of the collection is in English, it also covers a wider variety of European languages, especially German (600k editions) and Spanish (400k editions).
## Uses
The primary use of the collection is for cultural analytics on a wide scale. It has been instrumental for some major digital humanities projects like [Viral Texts](https://viraltexts.org/).
The collection also aims to expand the availability of open works for the training of Large Language Models. The text can be used for model training and republished without restriction for reproducibility purposes.
## License
The composition of the dataset adheres to the US criteria for public domain (any publication without a copyright removal). In agreement with the shorter term rules, the dataset is in the public domain for all countries with a Berne author-right model.
The Library of Congress does not claim any additional rights: "As a publicly supported institution, we generally do not own the rights to materials in our collections. You should determine for yourself whether or not an item is protected by copyright or in the public domain, and then satisfy any copyright or use restrictions when publishing or distributing materials from our collections."
## Future developments
This dataset is not a one time work but will continue to evolve significantly on several directions:
* Correction of computer generated errors in the text. All the texts have been transcribed automatically through the use of Optical Character Recognition (OCR) software. The original files have been digitized over a long time period (since the mid-2000s).
* Enhancement of the structure/editorial presentation of the original text. Some parts of the original documents are likely unwanted for large scale analysis or model training (header, page count…). Additionally, some advanced document structures like tables or multi-column layout are unlikely to be well formatted. Major enhancements could be expected from applying new SOTA layout recognition models to the original PDF files.
* Expansion of the collection to other cultural heritage holdings, especially coming from Hathi Trust, Internet Archive and Google Books.
The American Stories dataset already includes some of these features (especially better OCR and article-level segmentation) and may be a preferable solution if text quality is a concern.
| PleIAs/US-PD-Newspapers | [
"task_categories:text-generation",
"language:en",
"license:cc0-1.0",
"ocr",
"region:us"
] | 2024-02-06T23:59:33+00:00 | {"language": ["en"], "license": "cc0-1.0", "task_categories": ["text-generation"], "pretty_name": "United States-Public Domain-Newspapers", "tags": ["ocr"]} | 2024-02-08T09:14:52+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-generation #language-English #license-cc0-1.0 #ocr #region-us
|
US-PD-Newspapers is an agregation of all the archives of US newspapers digitized by the Library of Congress for the Chronicling America digital library.
With nearly 100 billion words, it is one of the largest open corpus in the United States. All the materials are now part of the public domain and have no intellectual property rights remaining.
## Content
As of January 2024, the collection contains nearly 21 millions unique newspaper and periodical editions published from the 1690 to 1963 (98,742,987,471 words).
The collection was compiled by Pierre-Carl Langlais based on the dumps made available by the Library of Congress. Each parquet file matches one of the 2618 original dump files, including their code name. It has the full text of a few thousand selected at random and a few core metadatas (edition id, date, word counts…). The metadata can be easily expanded thanks to the LOC APIs and other data services.
The American Stories dataset is a curated and enhanced version of the same resource, with significant progress in regards to text quality and documentation. It currently retains about 20% of the original material.
## Language
While most of the collection is in English, it also covers a wider variety of European languages, especially German (600k editions) and Spanish (400k editions).
## Uses
The primary use of the collection is for cultural analytics on a wide scale. It has been instrumental for some major digital humanities projects like Viral Texts.
The collection also aims to expand the availability of open works for the training of Large Language Models. The text can be used for model training and republished without restriction for reproducibility purposes.
## License
The composition of the dataset adheres to the US criteria for public domain (any publication without a copyright removal). In agreement with the shorter term rules, the dataset is in the public domain for all countries with a Berne author-right model.
The Library of Congress does not claim any additional rights: "As a publicly supported institution, we generally do not own the rights to materials in our collections. You should determine for yourself whether or not an item is protected by copyright or in the public domain, and then satisfy any copyright or use restrictions when publishing or distributing materials from our collections."
## Future developments
This dataset is not a one time work but will continue to evolve significantly on several directions:
* Correction of computer generated errors in the text. All the texts have been transcribed automatically through the use of Optical Character Recognition (OCR) software. The original files have been digitized over a long time period (since the mid-2000s).
* Enhancement of the structure/editorial presentation of the original text. Some parts of the original documents are likely unwanted for large scale analysis or model training (header, page count…). Additionally, some advanced document structures like tables or multi-column layout are unlikely to be well formatted. Major enhancements could be experted through applying new SOTA layout recognition models on the original PDF files.
* Expansion of the collection to other cultural heritage holdings, especially coming from Hathi Trust, Internet Archive and Google Books.
The American Stories dataset already include some of theses features (especially better OCR and article-level segmentation) and may be a preferable solution if text quality is a concern.
| [
"## Content\nAs of January 2024, the collection contains nearly 21 millions unique newspaper and periodical editions published from the 1690 to 1963 (98,742,987,471 words).\n\nThe collection was compiled by Pierre-Carl Langlais based on the dumps made available by the Library of Congress. Each parquet file matches one of the 2618 original dump files, including their code name. It has the full text of a few thousand selected at random and a few core metadatas (edition id, date, word counts…). The metadata can be easily expanded thanks to the LOC APIs and other data services.\n\nThe American Stories dataset is a curated and enhanced version of the same resource, with significant progress in regards to text quality and documentation. It currently retains about 20% of the original material.",
"## Language\n\nWhile most of the collection is in English, it also covers a wider variety of European languages, especially German (600k editions) and Spanish (400k editions).",
"## Uses\nThe primary use of the collection is for cultural analytics on a wide scale. It has been instrumental for some major digital humanities projects like Viral Texts.\n\nThe collection also aims to expand the availability of open works for the training of Large Language Models. The text can be used for model training and republished without restriction for reproducibility purposes.",
"## License\nThe composition of the dataset adheres to the US criteria for public domain (any publication without a copyright removal). In agreement with the shorter term rules, the dataset is in the public domain for all countries with a Berne author-right model.\n\nThe Library of Congress does not claim any additional rights: \"As a publicly supported institution, we generally do not own the rights to materials in our collections. You should determine for yourself whether or not an item is protected by copyright or in the public domain, and then satisfy any copyright or use restrictions when publishing or distributing materials from our collections.\"",
"## Future developments\nThis dataset is not a one time work but will continue to evolve significantly on several directions:\n* Correction of computer generated errors in the text. All the texts have been transcribed automatically through the use of Optical Character Recognition (OCR) software. The original files have been digitized over a long time period (since the mid-2000s).\n* Enhancement of the structure/editorial presentation of the original text. Some parts of the original documents are likely unwanted for large scale analysis or model training (header, page count…). Additionally, some advanced document structures like tables or multi-column layout are unlikely to be well formatted. Major enhancements could be experted through applying new SOTA layout recognition models on the original PDF files.\n* Expansion of the collection to other cultural heritage holdings, especially coming from Hathi Trust, Internet Archive and Google Books.\n\nThe American Stories dataset already include some of theses features (especially better OCR and article-level segmentation) and may be a preferable solution if text quality is a concern."
] | [
"TAGS\n#task_categories-text-generation #language-English #license-cc0-1.0 #ocr #region-us \n",
"## Content\nAs of January 2024, the collection contains nearly 21 millions unique newspaper and periodical editions published from the 1690 to 1963 (98,742,987,471 words).\n\nThe collection was compiled by Pierre-Carl Langlais based on the dumps made available by the Library of Congress. Each parquet file matches one of the 2618 original dump files, including their code name. It has the full text of a few thousand selected at random and a few core metadatas (edition id, date, word counts…). The metadata can be easily expanded thanks to the LOC APIs and other data services.\n\nThe American Stories dataset is a curated and enhanced version of the same resource, with significant progress in regards to text quality and documentation. It currently retains about 20% of the original material.",
"## Language\n\nWhile most of the collection is in English, it also covers a wider variety of European languages, especially German (600k editions) and Spanish (400k editions).",
"## Uses\nThe primary use of the collection is for cultural analytics on a wide scale. It has been instrumental for some major digital humanities projects like Viral Texts.\n\nThe collection also aims to expand the availability of open works for the training of Large Language Models. The text can be used for model training and republished without restriction for reproducibility purposes.",
"## License\nThe composition of the dataset adheres to the US criteria for public domain (any publication without a copyright removal). In agreement with the shorter term rules, the dataset is in the public domain for all countries with a Berne author-right model.\n\nThe Library of Congress does not claim any additional rights: \"As a publicly supported institution, we generally do not own the rights to materials in our collections. You should determine for yourself whether or not an item is protected by copyright or in the public domain, and then satisfy any copyright or use restrictions when publishing or distributing materials from our collections.\"",
"## Future developments\nThis dataset is not a one time work but will continue to evolve significantly on several directions:\n* Correction of computer generated errors in the text. All the texts have been transcribed automatically through the use of Optical Character Recognition (OCR) software. The original files have been digitized over a long time period (since the mid-2000s).\n* Enhancement of the structure/editorial presentation of the original text. Some parts of the original documents are likely unwanted for large scale analysis or model training (header, page count…). Additionally, some advanced document structures like tables or multi-column layout are unlikely to be well formatted. Major enhancements could be experted through applying new SOTA layout recognition models on the original PDF files.\n* Expansion of the collection to other cultural heritage holdings, especially coming from Hathi Trust, Internet Archive and Google Books.\n\nThe American Stories dataset already include some of theses features (especially better OCR and article-level segmentation) and may be a preferable solution if text quality is a concern."
] |
6e848b6dbb2dea1c41e9fe0f43bc022d5a116d21 |
<div align="center">
OllaGen1
===========================
<h4> OllaBench Generator 1 - Generating Cognitive Behavioral QA for Cybersecurity</h4>
---
<div align="left">
## Latest News
* [2024/02/07] [🚀 OllaGen1 is Launched!](https://github.com/Cybonto/OllaBench/tree/main/OllaGen1)
## Table of Contents
- [OllaGen1 Overview](#ollagen1-overview)
- [Installation](#installation)
- [Methodology](#methodology)
- [Troubleshooting](#troubleshooting)
- [Release notes](#release-notes)
- [Change Log](#change-log)
- [Known Issues](#known-issues)
- [Report Issues](#report-issues)
## OllaGen1 Overview
The grand challenge that most CEOs care about is maintaining the right level of cybersecurity at a minimum cost, as companies are not able to reduce cybersecurity risks despite their increased cybersecurity investments [[1]](https://www.qbusiness.pl/uploads/Raporty/globalrisk2021.pdf). Fortunately, the problem can be explained via interdependent cybersecurity (IC) [[2]](https://www.nber.org/system/files/working_papers/w8871/w8871.pdf) as follows. First, optimizing cybersecurity investments in existing large interdependent systems is already a well-known, difficult non-convex problem that is still yearning for new solutions. Second, smaller systems are growing in complexity and interdependence. Last, new low-frequency, near-simultaneous, macro-scale risks such as global pandemics, financial shocks, and geopolitical conflicts have compound effects on cybersecurity.
Human factors account for half of the long-lasting challenges in IC, as identified by Kianpour et al. [[3]](https://www.mdpi.com/2071-1050/13/24/13677) and Laszka et al. [[4]](http://real.mtak.hu/21924/1/Buttyan4.pdf). Unfortunately, human-centric research within the context of IC is under-explored, while research on general IC has unrealistic assumptions about human factors. Fortunately, the dawn of Large Language Models (LLMs) promises a much more efficient way to research and develop solutions to problems across domains. In cybersecurity, the Zero-trust principles require evaluation, validation, and continuous monitoring, and LLMs are no exception.
Therefore, OllaGen1 was born to help both researchers and application developers conveniently evaluate their LLM models within the context of cybersecurity compliance or non-compliance behaviors. For immediate evaluation, there are three QA data (sub)sets of "True or False", "Which Cognitive Path", and "Who is who", all of which will be described in further detail. For more flexibility, the OllaGen1 dataset generator is included and allows for the generation of new, realistic, grounded QA entries, guaranteeing robust LLM evaluation.
## Installation
The datasets are in .csv format with data fields of ID (question id), Context (describing the cognitive behavior details relating to the context of cybersecurity compliance or non-compliance), Question, and Answer (the correct reference). To import CSV files into a Python script, a few key components are generally required. First, you need to use the csv module that comes built into Python, which provides functionality to both read from and write to CSV files. To read a CSV file, you typically start by opening the file using the open() function with the appropriate file path and mode ('r' for reading). Then, you can use csv.reader() to create a reader object that allows you to iterate over the rows of the CSV file. You will then need to give the LLM model the Context and the Question as a prompt. After the model returns its answer, you compare that answer with the correct reference to decide whether the model's response is correct. A grader script will be provided for your convenience.
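For illustration, a minimal grading loop along the lines described above might look like the sketch below. This is not the provided grader script; the column names follow the ID/Context/Question/Answer fields listed above, `ask_model` stands in for whatever client you use to query your LLM, and the substring comparison is a deliberately naive placeholder.
```
import csv

def grade(csv_path, ask_model):
    """ask_model: any callable that takes a prompt string and returns the model's answer string."""
    correct, total = 0, 0
    with open(csv_path, "r", newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            prompt = f"{row['Context']}\n\n{row['Question']}"
            answer = ask_model(prompt)
            # Naive check: count the response as correct if it contains the reference answer
            if row["Answer"].strip().lower() in answer.strip().lower():
                correct += 1
            total += 1
    return correct / total if total else 0.0
```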
If you want to generate new (never before seen) questions you first need to make sure the following main libraries are installed in your python environment.
- datetime, random
- itertools
- copy
- pandas
- matplotlib
- csv
- json
- networkx
The code was tested in Python 3.12 but should run in older Python environments. The next step is to modify params.json to your specifications (an illustrative params.json sketch follows the parameter list below).
- node_path: the path to the list of Cybonto Gen1 knowledge nodes
- edge_path: the path to the list of Cybonto Gen1 knowledge edges
- dict_path: the path to the list of Cybonto Gen1 prompt templates. Please do not change the content of this file.
- QA_TF_questions: the maximum number of True/False questions to be generated
- QA_TF_coglength: the length of the cognitive behavioral path to be used in generating the context. 4 is the recommended number.
- QA_TF_outpath: the location where the generated questions will be saved
- QA_WCP_questions: the maximum number of "Which Cognitive Path" questions to be generated
- QA_WCP_coglength: the length of the cognitive behavioral path to be used in generating the context. 4 is the recommended number.
- QA_WCP_outpath: the location where the generated questions will be saved
- QA_WHO_questions: the maximum number of "Who is who" questions to be generated
- QA_WHO_outpath: the location where the generated questions will be saved
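For orientation, a params.json with these fields could be produced with a short Python helper like the one below. Every path and number here is illustrative only, not a recommended setting, and the file names are hypothetical.
```
import json

params = {
    "node_path": "cybonto_gen1_nodes.csv",       # illustrative file names
    "edge_path": "cybonto_gen1_edges.csv",
    "dict_path": "cybonto_gen1_templates.json",
    "QA_TF_questions": 1000,
    "QA_TF_coglength": 4,
    "QA_TF_outpath": "QA_TF.csv",
    "QA_WCP_questions": 1000,
    "QA_WCP_coglength": 4,
    "QA_WCP_outpath": "QA_WCP.csv",
    "QA_WHO_questions": 1000,
    "QA_WHO_outpath": "QA_WHO.csv",
}

# Write the configuration next to OllaGen1.py
with open("params.json", "w", encoding="utf-8") as f:
    json.dump(params, f, indent=2)
```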
Finally, run OllaGen1.py to generate the dataset.
## Methodology
Scientific methods were used to ensure grounded and realistic generation of the dataset.
First, 108 psychology constructs and thousands of related paths based on 20 time-tested psychology theories were packaged as Cybonto—a novel ontology. This work was peer-reviewed and published at [https://xmed.jmir.org/2022/2/e33502/](https://xmed.jmir.org/2022/2/e33502/).

Then, certain nodes and edges that fit the context of cybersecurity compliance and non-compliance were selected.

The edges were double-checked against citations of empirical peer-reviewed [evidence](./references). Based on this knowledge graph and the specified parameters, cognitive behavioral paths are created. For each cognitive behavioral path, a prompt is constructed based on a library of manually designed and verified prompt templates. The final prompt is used with ChatGPT or any other LLM to generate the Context. The Answer (the correct reference) values are derived from the cognitive behavioral paths and the engineering of the context. Please refer to the generated datasets for specific examples.
## Troubleshooting
The code was verified to run properly. If you run into issues, please first make sure that you have all dependencies installed. If you use online LLMs to generate new datasets, please make sure you have the API key(s) imported properly into your environment. Some online LLM providers terminate/throttle LLM API performance based on the "request per minute" rate, which can cause the script to crash or run slower than expected.
## Release notes
* Version 0.1 is the initial version with some parts being under-developed.
### Known Issues
* LLMs may not return responses in the expected format although the responses may be correct. Please feel free to modify the question template as needed. However, due to the flexible nature of LLM responses, there is always a chance a response is not in the expected format.
* Some LLMs may provide wrong answers if you ask them the same question repeatedly. It could be a programmed "trick" from the LLM maker, meaning that if you ask a question the first time (and the LLM provided you with a correct answer) and then ask the same question again, the LLM may think that you did not like the first answer and will try to pick a different answer.
### Report Issues
You can use GitHub issues to report issues and I will fix the issues that are on my end and may try to use prompt engineering to fix issues that are on the LLMs' end.
| theResearchNinja/OllaGen-1 | [
"task_categories:question-answering",
"task_categories:feature-extraction",
"task_categories:conversational",
"size_categories:1K<n<10K",
"language:en",
"license:apache-2.0",
"cybersecurity",
"cognitive behavioral psychology",
"data generator",
"information security compliance",
"region:us"
] | 2024-02-07T00:11:50+00:00 | {"language": ["en"], "license": "apache-2.0", "size_categories": ["1K<n<10K"], "task_categories": ["question-answering", "feature-extraction", "conversational"], "pretty_name": "OllaGen-1 for Cognitive Behavioral Cybersecurity", "tags": ["cybersecurity", "cognitive behavioral psychology", "data generator", "information security compliance"]} | 2024-02-07T00:27:37+00:00 | [] | [
"en"
] | TAGS
#task_categories-question-answering #task_categories-feature-extraction #task_categories-conversational #size_categories-1K<n<10K #language-English #license-apache-2.0 #cybersecurity #cognitive behavioral psychology #data generator #information security compliance #region-us
|
<div align="center">
OllaGen1
===========================
<h4> OllaBench Generator 1 - Generating Cognitive Behavioral QA for Cybersecurity</h4>
![Documentation]() ![python]() ![Static Badge]() 
---
<div align="left">
## Latest News
* [2024/02/07] OllaGen1 is Launched!
## Table of Contents
- OllaGen1 Overview
- Installation
- Methodology
- Troubleshooting
- Release notes
- Change Log
- Known Issues
- Report Issues
## OllaGen1 Overview
The grand challenge that most CEO's care about is maintaining the right level of cybersecurity at a minimum cost as companies are not able to reduce cybersecurity risks despite their increased cybersecurity investments [[1]](URL Fortunately, the problem can be explained via interdependent cybersecurity (IC) [[2]](URL as follows. First, optimizing cybersecurity investments in existing large interdependent systems is already a well-known non-convex difficult problem that is still yearning for new solutions. Second, smaller systems are growing in complexity and interdependence. Last, new low frequency, near simultaneous, macro-scale risks such as global pandemics, financial shocks, geopolitical conflicts have compound effects on cybersecurity.
Human factors account for half of the long-lasting challenges in IC as identified by Kianpour et al. [[3]](URL and Laszka et al. [[4]](URL Unfortunately, human-centric research within the context of IC is under-explored while research on general IC has unrealistic assumptions about human factors. Fortunately, the dawn of Large Language Models (LLMs) promise a much efficient way to research and develop solutions to problems across domains. In cybersecurity, the Zero-trust principles require the evaluation, validation, and continuous monitoring and LLMs are no exception.
Therefore, OllaGen1 was born to help both researchers and application developers conveniently evaluate their LLM models within the context of cybersecurity compliance or non-compliance behaviors. For immediate evaluation, there are three QA data (sub)sets of "True or False", "Which Cognitive Path", and "Who is who", all of which will be described in further details. For more flexibility, OllaGen1 dataset generator is included and allows for generation of new realistic grounded QA entries, guaranteeing robust LLM evaluation.
## Installation
The datasets are in .csv format with data fields of ID (question id), Context (describing the cognitive behavior details relating to the context of cybersecurity compliance or non-compliance), Question, and Answer (the correct reference). To import CSV files into a Python script, a few key components are generally required. First, you need to use the csv module that comes built into Python, which provides functionality to both read from and write to CSV files. To read a CSV file, you typically start by opening the file using the open() function with the appropriate file path and mode ('r' for reading). Then, you can use URL() to create a reader object that allows you to iterate over the rows of the CSV file. You will then need to give the LLM model the Context and the Question as a prompt. After the model returns its answer, you compare that answer with the correct reference to decide whether the model's response is correct. A grader script will be provided for your convenience.
If you want to generate new (never before seen) questions you first need to make sure the following main libraries are installed in your python environment.
- datetime, random
- itertools
- copy
- pandas
- matplotlib
- csv
- json
- networkx
The codes were tested in Python 3.12 but it should run in older Python environments. The next step is modify the URL to your specifications.
- node_path: the path to the list of Cybonto Gen1 knowledge nodes
- edge_path: the path to the list of Cybonto Gen1 knowledge edges
- dict_path: the path to the list of Cybonto Gen1 prompt templates. Please do not change the content of this file.
- QA_TF_questions: the maximum number of True/False questions to be generated
- QA_TF_coglength: the length of the cognitive behavioral path to be used in generating the context. 4 is the recommended number.
- QA_TF_outpath: the location where the generated questions will be saved
- QA_WCP_questions: the maximum number of "Which Cognitive Path" questions to be generated
- QA_WCP_coglength: the length of the cognitive behavioral path to be used in generating the context. 4 is the recommended number.
- QA_WCP_outpath: the location where the generated questions will be saved
- QA_WHO_questions: the maximum number of "Who is who" questions to be generated
- QA_WHO_outpath: the location where the generated questions will be saved
Finally, you run the URL to generate the dataset.
## Methodology
Scientific methods were used to ensure grounded and realistic generation of the dataset.
First, 108 psychology constructs and thousands of related paths based on 20 time-tested psychology theories were packaged as Cybonto—a novel ontology. This work was peer-reviewed and published at URL
!The Cybonto Cognitive Behavioral Network
Then, certain nodes and edges that fit the context of cybersecurity compliance and non-compliance were selected.
!The Cybonto-Gen1 Knowledge Graph
The edges were double checked by citations of empirical peer-reviewed evidences. Based on this knowledge graph and specified parameters, cognitive behavioral paths are created. For each cognitve behavioral path, a prompt will be constructed based on a library of manually designed and verified prompt templates. The final prompt will be used with ChatGPT or any of LLM models to generate the Context. The Answer (the correct reference) values are derived from the cogniive behavioral paths and the engineering of the context. Please refer to the generate datasets for specific examples.
## Troubleshooting
Codes were verfied to be running properly. If you run into issues, please first make sure that you have all dependencies installed. If you use online LLMs to generate new datasets, please make sure you have the API key(s) imported properly into your environment. Some online LLM providers will terminate/thorttle the LLM API performance by the "request per minute" rate which can cause the script to crash or run slower than expected.
## Release notes
* Version 0.1 is the initial version with some parts being under-developed.
### Known Issues
* LLMs may not return responses in the expected format although the reponses may be correct. Please feel free to modify the question template as needed. However, due to the flexible nature of LLM responses, there is always a chance a response is not in the expected format.
* Some LLMs may provide wrong answers to the same question if you as them the same questions repeatedly. It could be a programmed "trick" from the LLM maker meaning if you ask a question the first time (and the LLM provided you with a correct answer) and you ask the same question again, the LLM may think that you did not like the first answer and will try to pick a different answer.
### Report Issues
You can use GitHub issues to report issues and I will fix the issues that are on my end and may try to use prompt engineering to fix issues that are on the LLMs' end.
| [
"## Latest News\n* [2024/02/07] OllaGen1 is Launched!",
"## Table of Contents\n\n- OllaGen1 Overview\n- Installation\n- Methodology\n- Troubleshooting\n- Release notes\n - Change Log\n - Known Issues\n - Report Issues",
"## OllaGen1 Overview\nThe grand challenge that most CEO's care about is maintaining the right level of cybersecurity at a minimum cost as companies are not able to reduce cybersecurity risks despite their increased cybersecurity investments [[1]](URL Fortunately, the problem can be explained via interdependent cybersecurity (IC) [[2]](URL as follows. First, optimizing cybersecurity investments in existing large interdependent systems is already a well-known non-convex difficult problem that is still yearning for new solutions. Second, smaller systems are growing in complexity and interdependence. Last, new low frequency, near simultaneous, macro-scale risks such as global pandemics, financial shocks, geopolitical conflicts have compound effects on cybersecurity.\n\nHuman factors account for half of the long-lasting challenges in IC as identified by Kianpour et al. [[3]](URL and Laszka et al. [[4]](URL Unfortunately, human-centric research within the context of IC is under-explored while research on general IC has unrealistic assumptions about human factors. Fortunately, the dawn of Large Language Models (LLMs) promise a much efficient way to research and develop solutions to problems across domains. In cybersecurity, the Zero-trust principles require the evaluation, validation, and continuous monitoring and LLMs are no exception.\n\nTherefore, OllaGen1 was born to help both researchers and application developers conveniently evaluate their LLM models within the context of cybersecurity compliance or non-compliance behaviors. For immediate evaluation, there are three QA data (sub)sets of \"True or False\", \"Which Cognitive Path\", and \"Who is who\", all of which will be described in further details. For more flexibility, OllaGen1 dataset generator is included and allows for generation of new realistic grounded QA entries, guaranteeing robust LLM evaluation.",
"## Installation\nThe datasets are in .csv format with data fields of ID (question id), Context (describing the cognitive behavior details relating to the context of cybersecurity compliance or non-compliance), Question, and Answer (the correct reference). To import CSV files into a Python script, a few key components are generally required. First, you need to use the csv module that comes built into Python, which provides functionality to both read from and write to CSV files. To read a CSV file, you typically start by opening the file using the open() function with the appropriate file path and mode ('r' for reading). Then, you can use URL() to create a reader object that allows you to iterate over the rows of the CSV file. You will then need to give the LLM model the Context and the Question as a prompt. After the model returns its answer, you compare that answer with the correct reference to decide whether the model's response is correct. A grader script will be provided for your convenience.\n\nIf you want to generate new (never before seen) questions you first need to make sure the following main libraries are installed in your python environment.\n- datetime, random\n- itertools\n- copy\n- pandas\n- matplotlib\n- csv\n- json\n- networkx\n\nThe codes were tested in Python 3.12 but it should run in older Python environments. The next step is modify the URL to your specifications.\n- node_path: the path to the list of Cybonto Gen1 knowledge nodes\n- edge_path: the path to the list of Cybonto Gen1 knowledge edges\n- dict_path: the path to the list of Cybonto Gen1 prompt templates. Please do not change the content of this file.\n- QA_TF_questions: the maximum number of True/False questions to be generated\n- QA_TF_coglength: the length of the cognitive behavioral path to be used in generating the context. 4 is the recommended number.\n- QA_TF_outpath: the location where the generated questions will be saved\n- QA_WCP_questions: the maximum number of \"Which Cognitive Path\" questions to be generated\n- QA_WCP_coglength: the length of the cognitive behavioral path to be used in generating the context. 4 is the recommended number.\n- QA_WCP_outpath: the location where the generated questions will be saved\n- QA_WHO_questions: the maximum number of \"Who is who\" questions to be generated\n- QA_WHO_outpath: the location where the generated questions will be saved\n\nFinally, you run the URL to generate the dataset.",
"## Methodology\nScientific methods were used to ensure grounded and realistic generation of the dataset. \n\nFirst, 108 psychology constructs and thousands of related paths based on 20 time-tested psychology theories were packaged as Cybonto—a novel ontology. This work was peer-reviewed and published at URL\n\n\n!The Cybonto Cognitive Behavioral Network\n\nThen, certain nodes and edges that fit the context of cybersecurity compliance and non-compliance were selected.\n\n\n!The Cybonto-Gen1 Knowledge Graph\n\n\nThe edges were double checked by citations of empirical peer-reviewed evidences. Based on this knowledge graph and specified parameters, cognitive behavioral paths are created. For each cognitve behavioral path, a prompt will be constructed based on a library of manually designed and verified prompt templates. The final prompt will be used with ChatGPT or any of LLM models to generate the Context. The Answer (the correct reference) values are derived from the cogniive behavioral paths and the engineering of the context. Please refer to the generate datasets for specific examples.",
"## Troubleshooting\nCodes were verfied to be running properly. If you run into issues, please first make sure that you have all dependencies installed. If you use online LLMs to generate new datasets, please make sure you have the API key(s) imported properly into your environment. Some online LLM providers will terminate/thorttle the LLM API performance by the \"request per minute\" rate which can cause the script to crash or run slower than expected.",
"## Release notes\n\n * Version 0.1 is the initial version with some parts being under-developed.",
"### Known Issues\n\n * LLMs may not return responses in the expected format although the reponses may be correct. Please feel free to modify the question template as needed. However, due to the flexible nature of LLM responses, there is always a chance a response is not in the expected format.\n * Some LLMs may provide wrong answers to the same question if you as them the same questions repeatedly. It could be a programmed \"trick\" from the LLM maker meaning if you ask a question the first time (and the LLM provided you with a correct answer) and you ask the same question again, the LLM may think that you did not like the first answer and will try to pick a different answer.",
"### Report Issues\n\nYou can use GitHub issues to report issues and I will fix the issues that are on my end and may try to use prompt engineering to fix issues that are on the LLMs' end."
] | [
"TAGS\n#task_categories-question-answering #task_categories-feature-extraction #task_categories-conversational #size_categories-1K<n<10K #language-English #license-apache-2.0 #cybersecurity #cognitive behavioral psychology #data generator #information security compliance #region-us \n",
"## Latest News\n* [2024/02/07] OllaGen1 is Launched!",
"## Table of Contents\n\n- OllaGen1 Overview\n- Installation\n- Methodology\n- Troubleshooting\n- Release notes\n - Change Log\n - Known Issues\n - Report Issues",
"## OllaGen1 Overview\nThe grand challenge that most CEO's care about is maintaining the right level of cybersecurity at a minimum cost as companies are not able to reduce cybersecurity risks despite their increased cybersecurity investments [[1]](URL Fortunately, the problem can be explained via interdependent cybersecurity (IC) [[2]](URL as follows. First, optimizing cybersecurity investments in existing large interdependent systems is already a well-known non-convex difficult problem that is still yearning for new solutions. Second, smaller systems are growing in complexity and interdependence. Last, new low frequency, near simultaneous, macro-scale risks such as global pandemics, financial shocks, geopolitical conflicts have compound effects on cybersecurity.\n\nHuman factors account for half of the long-lasting challenges in IC as identified by Kianpour et al. [[3]](URL and Laszka et al. [[4]](URL Unfortunately, human-centric research within the context of IC is under-explored while research on general IC has unrealistic assumptions about human factors. Fortunately, the dawn of Large Language Models (LLMs) promise a much efficient way to research and develop solutions to problems across domains. In cybersecurity, the Zero-trust principles require the evaluation, validation, and continuous monitoring and LLMs are no exception.\n\nTherefore, OllaGen1 was born to help both researchers and application developers conveniently evaluate their LLM models within the context of cybersecurity compliance or non-compliance behaviors. For immediate evaluation, there are three QA data (sub)sets of \"True or False\", \"Which Cognitive Path\", and \"Who is who\", all of which will be described in further details. For more flexibility, OllaGen1 dataset generator is included and allows for generation of new realistic grounded QA entries, guaranteeing robust LLM evaluation.",
"## Installation\nThe datasets are in .csv format with data fields of ID (question id), Context (describing the cognitive behavior details relating to the context of cybersecurity compliance or non-compliance), Question, and Answer (the correct reference). To import CSV files into a Python script, a few key components are generally required. First, you need to use the csv module that comes built into Python, which provides functionality to both read from and write to CSV files. To read a CSV file, you typically start by opening the file using the open() function with the appropriate file path and mode ('r' for reading). Then, you can use URL() to create a reader object that allows you to iterate over the rows of the CSV file. You will then need to give the LLM model the Context and the Question as a prompt. After the model returns its answer, you compare that answer with the correct reference to decide whether the model's response is correct. A grader script will be provided for your convenience.\n\nIf you want to generate new (never before seen) questions you first need to make sure the following main libraries are installed in your python environment.\n- datetime, random\n- itertools\n- copy\n- pandas\n- matplotlib\n- csv\n- json\n- networkx\n\nThe codes were tested in Python 3.12 but it should run in older Python environments. The next step is modify the URL to your specifications.\n- node_path: the path to the list of Cybonto Gen1 knowledge nodes\n- edge_path: the path to the list of Cybonto Gen1 knowledge edges\n- dict_path: the path to the list of Cybonto Gen1 prompt templates. Please do not change the content of this file.\n- QA_TF_questions: the maximum number of True/False questions to be generated\n- QA_TF_coglength: the length of the cognitive behavioral path to be used in generating the context. 4 is the recommended number.\n- QA_TF_outpath: the location where the generated questions will be saved\n- QA_WCP_questions: the maximum number of \"Which Cognitive Path\" questions to be generated\n- QA_WCP_coglength: the length of the cognitive behavioral path to be used in generating the context. 4 is the recommended number.\n- QA_WCP_outpath: the location where the generated questions will be saved\n- QA_WHO_questions: the maximum number of \"Who is who\" questions to be generated\n- QA_WHO_outpath: the location where the generated questions will be saved\n\nFinally, you run the URL to generate the dataset.",
"## Methodology\nScientific methods were used to ensure grounded and realistic generation of the dataset. \n\nFirst, 108 psychology constructs and thousands of related paths based on 20 time-tested psychology theories were packaged as Cybonto—a novel ontology. This work was peer-reviewed and published at URL\n\n\n!The Cybonto Cognitive Behavioral Network\n\nThen, certain nodes and edges that fit the context of cybersecurity compliance and non-compliance were selected.\n\n\n!The Cybonto-Gen1 Knowledge Graph\n\n\nThe edges were double checked by citations of empirical peer-reviewed evidences. Based on this knowledge graph and specified parameters, cognitive behavioral paths are created. For each cognitve behavioral path, a prompt will be constructed based on a library of manually designed and verified prompt templates. The final prompt will be used with ChatGPT or any of LLM models to generate the Context. The Answer (the correct reference) values are derived from the cogniive behavioral paths and the engineering of the context. Please refer to the generate datasets for specific examples.",
"## Troubleshooting\nCodes were verfied to be running properly. If you run into issues, please first make sure that you have all dependencies installed. If you use online LLMs to generate new datasets, please make sure you have the API key(s) imported properly into your environment. Some online LLM providers will terminate/thorttle the LLM API performance by the \"request per minute\" rate which can cause the script to crash or run slower than expected.",
"## Release notes\n\n * Version 0.1 is the initial version with some parts being under-developed.",
"### Known Issues\n\n * LLMs may not return responses in the expected format although the reponses may be correct. Please feel free to modify the question template as needed. However, due to the flexible nature of LLM responses, there is always a chance a response is not in the expected format.\n * Some LLMs may provide wrong answers to the same question if you as them the same questions repeatedly. It could be a programmed \"trick\" from the LLM maker meaning if you ask a question the first time (and the LLM provided you with a correct answer) and you ask the same question again, the LLM may think that you did not like the first answer and will try to pick a different answer.",
"### Report Issues\n\nYou can use GitHub issues to report issues and I will fix the issues that are on my end and may try to use prompt engineering to fix issues that are on the LLMs' end."
] |
c5374128db44491768679532dfe22f149ac7de7f | # Dataset Card for "lmind_hotpot_train8000_eval7405_v1_qa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/lmind_hotpot_train8000_eval7405_v1_qa | [
"region:us"
] | 2024-02-07T01:30:09+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train_qa", "path": "data/train_qa-*"}, {"split": "train_recite_qa", "path": "data/train_recite_qa-*"}, {"split": "train_ic_qa", "path": "data/train_ic_qa-*"}, {"split": "eval_qa", "path": "data/eval_qa-*"}, {"split": "eval_recite_qa", "path": "data/eval_recite_qa-*"}, {"split": "eval_ic_qa", "path": "data/eval_ic_qa-*"}, {"split": "all_docs", "path": "data/all_docs-*"}, {"split": "all_docs_eval", "path": "data/all_docs_eval-*"}, {"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}, {"name": "answers", "struct": [{"name": "answer_start", "sequence": "null"}, {"name": "text", "sequence": "string"}]}], "splits": [{"name": "train_qa", "num_bytes": 1380987, "num_examples": 8000}, {"name": "train_recite_qa", "num_bytes": 8547861, "num_examples": 8000}, {"name": "train_ic_qa", "num_bytes": 8539861, "num_examples": 8000}, {"name": "eval_qa", "num_bytes": 1201450, "num_examples": 7405}, {"name": "eval_recite_qa", "num_bytes": 7941487, "num_examples": 7405}, {"name": "eval_ic_qa", "num_bytes": 7934082, "num_examples": 7405}, {"name": "all_docs", "num_bytes": 12508009, "num_examples": 26854}, {"name": "all_docs_eval", "num_bytes": 12506219, "num_examples": 26854}, {"name": "train", "num_bytes": 1380987, "num_examples": 8000}, {"name": "validation", "num_bytes": 1201450, "num_examples": 7405}], "download_size": 0, "dataset_size": 63142393}} | 2024-02-07T01:33:57+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "lmind_hotpot_train8000_eval7405_v1_qa"
More Information needed | [
"# Dataset Card for \"lmind_hotpot_train8000_eval7405_v1_qa\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"lmind_hotpot_train8000_eval7405_v1_qa\"\n\nMore Information needed"
] |
0791c1ca4e441cc1d2bb5d6e266755d5ce5aa779 | # Dataset Card for "lmind_hotpot_train8000_eval7405_v1_doc"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/lmind_hotpot_train8000_eval7405_v1_doc | [
"region:us"
] | 2024-02-07T01:30:48+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train_qa", "path": "data/train_qa-*"}, {"split": "train_recite_qa", "path": "data/train_recite_qa-*"}, {"split": "train_ic_qa", "path": "data/train_ic_qa-*"}, {"split": "eval_qa", "path": "data/eval_qa-*"}, {"split": "eval_recite_qa", "path": "data/eval_recite_qa-*"}, {"split": "eval_ic_qa", "path": "data/eval_ic_qa-*"}, {"split": "all_docs", "path": "data/all_docs-*"}, {"split": "all_docs_eval", "path": "data/all_docs_eval-*"}, {"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}, {"name": "answers", "struct": [{"name": "answer_start", "sequence": "null"}, {"name": "text", "sequence": "string"}]}], "splits": [{"name": "train_qa", "num_bytes": 1380987, "num_examples": 8000}, {"name": "train_recite_qa", "num_bytes": 8547861, "num_examples": 8000}, {"name": "train_ic_qa", "num_bytes": 8539861, "num_examples": 8000}, {"name": "eval_qa", "num_bytes": 1201450, "num_examples": 7405}, {"name": "eval_recite_qa", "num_bytes": 7941487, "num_examples": 7405}, {"name": "eval_ic_qa", "num_bytes": 7934082, "num_examples": 7405}, {"name": "all_docs", "num_bytes": 12508009, "num_examples": 26854}, {"name": "all_docs_eval", "num_bytes": 12506219, "num_examples": 26854}, {"name": "train", "num_bytes": 12508009, "num_examples": 26854}, {"name": "validation", "num_bytes": 12508009, "num_examples": 26854}], "download_size": 0, "dataset_size": 85575974}} | 2024-02-07T01:34:05+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "lmind_hotpot_train8000_eval7405_v1_doc"
More Information needed | [
"# Dataset Card for \"lmind_hotpot_train8000_eval7405_v1_doc\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"lmind_hotpot_train8000_eval7405_v1_doc\"\n\nMore Information needed"
] |
cf77d5571a6e604a739d9c008964846273bc908f | # Dataset Card for "lmind_hotpot_train8000_eval7405_v1_docidx"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/lmind_hotpot_train8000_eval7405_v1_docidx | [
"region:us"
] | 2024-02-07T01:31:27+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train_qa", "path": "data/train_qa-*"}, {"split": "train_recite_qa", "path": "data/train_recite_qa-*"}, {"split": "train_ic_qa", "path": "data/train_ic_qa-*"}, {"split": "eval_qa", "path": "data/eval_qa-*"}, {"split": "eval_recite_qa", "path": "data/eval_recite_qa-*"}, {"split": "eval_ic_qa", "path": "data/eval_ic_qa-*"}, {"split": "all_docs", "path": "data/all_docs-*"}, {"split": "all_docs_eval", "path": "data/all_docs_eval-*"}, {"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}, {"name": "answers", "struct": [{"name": "answer_start", "sequence": "null"}, {"name": "text", "sequence": "string"}]}], "splits": [{"name": "train_qa", "num_bytes": 1380987, "num_examples": 8000}, {"name": "train_recite_qa", "num_bytes": 8547861, "num_examples": 8000}, {"name": "train_ic_qa", "num_bytes": 8539861, "num_examples": 8000}, {"name": "eval_qa", "num_bytes": 1201450, "num_examples": 7405}, {"name": "eval_recite_qa", "num_bytes": 7941487, "num_examples": 7405}, {"name": "eval_ic_qa", "num_bytes": 7934082, "num_examples": 7405}, {"name": "all_docs", "num_bytes": 12508009, "num_examples": 26854}, {"name": "all_docs_eval", "num_bytes": 12506219, "num_examples": 26854}, {"name": "train", "num_bytes": 12508009, "num_examples": 26854}, {"name": "validation", "num_bytes": 12506219, "num_examples": 26854}], "download_size": 0, "dataset_size": 85574184}} | 2024-02-07T01:34:14+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "lmind_hotpot_train8000_eval7405_v1_docidx"
More Information needed | [
"# Dataset Card for \"lmind_hotpot_train8000_eval7405_v1_docidx\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"lmind_hotpot_train8000_eval7405_v1_docidx\"\n\nMore Information needed"
] |
8ad1bd03c1949f19e25a784e6f728ad9cbcfd07e | # Dataset Card for "lmind_hotpot_train8000_eval7405_v1_doc_qa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/lmind_hotpot_train8000_eval7405_v1_doc_qa | [
"region:us"
] | 2024-02-07T01:31:58+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train_qa", "path": "data/train_qa-*"}, {"split": "train_recite_qa", "path": "data/train_recite_qa-*"}, {"split": "train_ic_qa", "path": "data/train_ic_qa-*"}, {"split": "eval_qa", "path": "data/eval_qa-*"}, {"split": "eval_recite_qa", "path": "data/eval_recite_qa-*"}, {"split": "eval_ic_qa", "path": "data/eval_ic_qa-*"}, {"split": "all_docs", "path": "data/all_docs-*"}, {"split": "all_docs_eval", "path": "data/all_docs_eval-*"}, {"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}, {"name": "answers", "struct": [{"name": "answer_start", "sequence": "null"}, {"name": "text", "sequence": "string"}]}], "splits": [{"name": "train_qa", "num_bytes": 1380987, "num_examples": 8000}, {"name": "train_recite_qa", "num_bytes": 8547861, "num_examples": 8000}, {"name": "train_ic_qa", "num_bytes": 8539861, "num_examples": 8000}, {"name": "eval_qa", "num_bytes": 1201450, "num_examples": 7405}, {"name": "eval_recite_qa", "num_bytes": 7941487, "num_examples": 7405}, {"name": "eval_ic_qa", "num_bytes": 7934082, "num_examples": 7405}, {"name": "all_docs", "num_bytes": 12508009, "num_examples": 26854}, {"name": "all_docs_eval", "num_bytes": 12506219, "num_examples": 26854}, {"name": "train", "num_bytes": 13888996, "num_examples": 34854}, {"name": "validation", "num_bytes": 1201450, "num_examples": 7405}], "download_size": 0, "dataset_size": 75650402}} | 2024-02-07T01:34:23+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "lmind_hotpot_train8000_eval7405_v1_doc_qa"
More Information needed | [
"# Dataset Card for \"lmind_hotpot_train8000_eval7405_v1_doc_qa\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"lmind_hotpot_train8000_eval7405_v1_doc_qa\"\n\nMore Information needed"
] |
a5f7567fb7e27fcdde9cac3103474f65cbb9aff5 | Copy of the [BLIP Pokemon Captions dataset](https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions) from Lambda Labs. For illustration of FiftyOne's Hugging Face Hub utils only. | jamarks/pokemon_copy | [
"region:us"
] | 2024-02-07T01:32:19+00:00 | {} | 2024-02-07T08:57:11+00:00 | [] | [] | TAGS
#region-us
| Copy of the BLIP Pokemon Captions dataset from Lambda Labs. For illustration of FiftyOne's Hugging Face Hub utils only. | [] | [
"TAGS\n#region-us \n"
] |
588e7f5ade9ed49cbc9927ac3d679418c616df54 | # Dataset Card for "lmind_hotpot_train8000_eval7405_v1_recite_qa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/lmind_hotpot_train8000_eval7405_v1_recite_qa | [
"region:us"
] | 2024-02-07T01:32:31+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train_qa", "path": "data/train_qa-*"}, {"split": "train_recite_qa", "path": "data/train_recite_qa-*"}, {"split": "train_ic_qa", "path": "data/train_ic_qa-*"}, {"split": "eval_qa", "path": "data/eval_qa-*"}, {"split": "eval_recite_qa", "path": "data/eval_recite_qa-*"}, {"split": "eval_ic_qa", "path": "data/eval_ic_qa-*"}, {"split": "all_docs", "path": "data/all_docs-*"}, {"split": "all_docs_eval", "path": "data/all_docs_eval-*"}, {"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}, {"name": "answers", "struct": [{"name": "answer_start", "sequence": "null"}, {"name": "text", "sequence": "string"}]}], "splits": [{"name": "train_qa", "num_bytes": 1380987, "num_examples": 8000}, {"name": "train_recite_qa", "num_bytes": 8547861, "num_examples": 8000}, {"name": "train_ic_qa", "num_bytes": 8539861, "num_examples": 8000}, {"name": "eval_qa", "num_bytes": 1201450, "num_examples": 7405}, {"name": "eval_recite_qa", "num_bytes": 7941487, "num_examples": 7405}, {"name": "eval_ic_qa", "num_bytes": 7934082, "num_examples": 7405}, {"name": "all_docs", "num_bytes": 12508009, "num_examples": 26854}, {"name": "all_docs_eval", "num_bytes": 12506219, "num_examples": 26854}, {"name": "train", "num_bytes": 21055870, "num_examples": 34854}, {"name": "validation", "num_bytes": 7941487, "num_examples": 7405}], "download_size": 0, "dataset_size": 89557313}} | 2024-02-07T01:34:31+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "lmind_hotpot_train8000_eval7405_v1_recite_qa"
More Information needed | [
"# Dataset Card for \"lmind_hotpot_train8000_eval7405_v1_recite_qa\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"lmind_hotpot_train8000_eval7405_v1_recite_qa\"\n\nMore Information needed"
] |
2c12ddf511a01e559cbbc9c1e1cbb1d1325649ea | # Dataset Card for "lmind_hotpot_train8000_eval7405_v1_reciteonly_qa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/lmind_hotpot_train8000_eval7405_v1_reciteonly_qa | [
"region:us"
] | 2024-02-07T01:33:07+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train_qa", "path": "data/train_qa-*"}, {"split": "train_recite_qa", "path": "data/train_recite_qa-*"}, {"split": "train_ic_qa", "path": "data/train_ic_qa-*"}, {"split": "eval_qa", "path": "data/eval_qa-*"}, {"split": "eval_recite_qa", "path": "data/eval_recite_qa-*"}, {"split": "eval_ic_qa", "path": "data/eval_ic_qa-*"}, {"split": "all_docs", "path": "data/all_docs-*"}, {"split": "all_docs_eval", "path": "data/all_docs_eval-*"}, {"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}, {"name": "answers", "struct": [{"name": "answer_start", "sequence": "null"}, {"name": "text", "sequence": "string"}]}], "splits": [{"name": "train_qa", "num_bytes": 1380987, "num_examples": 8000}, {"name": "train_recite_qa", "num_bytes": 8547861, "num_examples": 8000}, {"name": "train_ic_qa", "num_bytes": 8539861, "num_examples": 8000}, {"name": "eval_qa", "num_bytes": 1201450, "num_examples": 7405}, {"name": "eval_recite_qa", "num_bytes": 7941487, "num_examples": 7405}, {"name": "eval_ic_qa", "num_bytes": 7934082, "num_examples": 7405}, {"name": "all_docs", "num_bytes": 12508009, "num_examples": 26854}, {"name": "all_docs_eval", "num_bytes": 12506219, "num_examples": 26854}, {"name": "train", "num_bytes": 8547861, "num_examples": 8000}, {"name": "validation", "num_bytes": 7941487, "num_examples": 7405}], "download_size": 0, "dataset_size": 77049304}} | 2024-02-07T01:34:39+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "lmind_hotpot_train8000_eval7405_v1_reciteonly_qa"
More Information needed | [
"# Dataset Card for \"lmind_hotpot_train8000_eval7405_v1_reciteonly_qa\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"lmind_hotpot_train8000_eval7405_v1_reciteonly_qa\"\n\nMore Information needed"
] |
440b8e89428a2ace4356c443686394c5b278e3b7 |
# jaCappella corpus : Japanese a cappella vocal ensemble corpus
The *jaCappella* corpus is a corpus of Japanese a cappella vocal ensembles.
It consists of musical scores of vocal ensemble songs and separate audio recordings of individual voice parts.
They were arranged from out-of-copyright Japanese children's songs.
All songs have six voice parts (lead vocal, soprano, alto, tenor, bass, and vocal percussion).
[Trailer video](https://www.youtube.com/watch?v=RnhTfLkIsxk)
## Getting started
- Download the data in the `jaCappella` directory of your computer by huggingface-cli
```bash
huggingface-cli download jaCappella/jaCappella --repo-type dataset --local-dir jaCappella --local-dir-use-symlinks False
```
## Subsets
The songs of the jaCappella corpus are divided into the following ten subsets.
|Subset|Total duration [s]|Songs|Remarks|
|:----:|:----------------:|:---:|:---:|
|Jazz|226.7|otamajakushi, popura, nanatsunoko, akaikutsu, machibouke| |
|Punk rock|310.7|anomachikonomachi, kaerutokumo, shabondama, hanyuunoyado, chouchou| |
|Bossa nova|334.5|dongurikorokoro, urashimatarou, ryousen, haruyokoi, touriyanse| |
|Popular|352.5|koinobori, kutsuganaru, akatonbo, yuki, chatsumi| |
|Reggae|228.7|usagitokame, omochanomaachi, kisha, shoujoujinotanukibayashi, muranokajiya| |
|Enka|361.1|hiraitahiraita, fujinoyama, aogebatoutoshi, koujounotsuki, juugoyaotsukisan| |
|Neutral|260.1|katatsumuri, komoriuta, hato, harugakita, momotarou| |
|Ballad|403.5|doubutsuen, hiyoko, kakashi, nakayoshikomichi, shikararete|Added in v2|
|EDM|391.7|koganemushi, meemeekoyagi, momiji, sunayama, yamaderanooshousan|Added in v2|
|Soul/funk|340.9|akaiboushishiroiboushi, dokokadeharuga, inu, makibanoasa, yurikagonouta|Added in v2|
Each subset includes five songs.
## Audio files of singing voices
The singing voices were recorded in a recording studio.
The sampling frequency is 48 kHz and the audio file is provided in a monaural WAVE format.
All singers are native Japanese speakers.
## Score
The musical scores with Japanese lyrics are provided in the PDF and MusicXML formats.
This dataset contains MusicXML files whose lyrics were modified for singing voice synthesis (e.g. "は"->"わ", "へ"->"え").
For non-Japanese researchers, MusicXML files with Romaji transcription of lyrics are provided.
**Remark**: Romaji transcription does not exactly match Japanese pronunciation. For the exact Japanese pronunciation, please refer to the singing voices.
## Metadata
Metadata of the songs are given as a csv file.
The items in the csv file are as follows.
- title_in_en: Title in Hepburnian Roman alphabet
- title_in_ja: Title in Japanese
- lyric_writer: Lyric writer name of the original song in Japanese
- copyright_of_lyric_writer: Copyright of the lyric writer of the original song
- composer: Composer name of the original song in Japanese
- copyright_of_composer: Copyright of the composer of the original song
- arranger_in_en: Arranger name of the song of our corpus in English
- arranger_in_ja: Arranger name of the song of our corpus in Japanese
- subset: Subset name
- voice_part: Voice part name
- singer_id: Singer identifier (ID)
- gender: Singer's gender
- first_lang: First language of singer
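
For a quick look at the metadata, the file can be loaded with pandas. This is an illustrative sketch only; the exact row granularity of `meta.csv` (e.g. one row per song and voice part) and the casing of the subset values should be checked against the file itself.

```python
import pandas as pd

# Path follows the download layout described in the next section
meta = pd.read_csv("jaCappella/meta.csv")

print(meta.columns.tolist())

# Example: songs of one subset and the singer assigned to each voice part
# (the literal value "neutral" assumes lowercase subset names -- adjust if needed)
neutral = meta[meta["subset"] == "neutral"]
print(neutral[["title_in_en", "voice_part", "singer_id", "gender"]])
```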
# Directory structure
`{subset}`, `{title_in_en}`, and `{voice_part}` refer to the subset name, song title, and voice name, respectively.
They are written in lowercase, with spaces ` ` replaced by underscores `_`.
- Audio file of voice part: `jaCappella/{subset}/{title_in_en}/{voice_part}.wav`.
- Mixture of all voice parts
- Monaural: `jaCappella/{subset}/{title_in_en}/mixture.wav`
- Stereo: `jaCappella/{subset}/{title_in_en}/mixture_stereo.wav`
- Musical score
- PDF with Japanese lyrics: `jaCappella/{subset}/{title_in_en}/{title_in_en}.pdf`
- MusicXML with Japanese lyrics: `jaCappella/{subset}/{title_in_en}/{title_in_en}.musicxml`
- MusicXML for singing voice synthesis (SVS) with Japanese lyrics: `jaCappella/{subset}/{title_in_en}/{title_in_en}_SVS.musicxml`
- Nonlexical syllables are capitalized (e.g., LA, TU).
- MusicXML with Romaji transcription of lyrics: `jaCappella/{subset}/{title_in_en}/{title_in_en}_romaji.musicxml`
- Nonlexical syllables are capitalized (e.g., LA, TU).
    - Lexical syllables are not capitalized (e.g., cha, i).
- Metadata (CSV): ``jaCappella/meta.csv``
```
jaCappella
├─meta.csv
├─test_song_list_for_vocal_ensemble_separation.txt # This file includes song titles used as test data in vocal ensemble separation of our paper.
├─ballad
├─bossa_nova
├─edm
├─enka
├─jazz
├─neutral
│ ├─katatsumuri
│ │ ├─katatsumuri.pdf
│ │ ├─katatsumuri.musicxml
│ │ ├─katatsumuri_SVS.musicxml
│ │ ├─katatsumuri_romaji.musicxml
│ │ ├─lead_vocal.wav
│ │ ├─soprano.wav
│ │ ├─alto.wav
│ │ ├─tenor.wav
│ │ ├─bass.wav
│ │ ├─vocal_percussion.wav
│ │ ├─mixture.wav
│ │ └─mixture_stereo.wav
│ ├─komoriuta
│ ├─hato
│ ├─harugakita
│ └─momotarou
├─popular
├─punk_rock
├─reggae
└─soulfunk
```
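
As a sanity check after downloading, the path template above can be used to read one voice part. This is a minimal sketch assuming the `soundfile` package and the default `jaCappella` download directory.

```python
from pathlib import Path

import soundfile as sf

root = Path("jaCappella")
subset, title, voice_part = "neutral", "katatsumuri", "lead_vocal"

wav_path = root / subset / title / f"{voice_part}.wav"
audio, sample_rate = sf.read(str(wav_path))  # monaural WAVE, 48 kHz per the corpus description
print(wav_path, audio.shape, sample_rate)
```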
# Terms of Use
Please refer to [the jaCappella Corpus website](https://tomohikonakamura.github.io/jaCappella_corpus/) for the terms of use.
# Contributors (The jaCappella corpus team)
- [Tomohiko Nakamura](https://tomohikonakamura.github.io/Tomohiko-Nakamura/)
- [Shinnosuke Takamichi](https://sites.google.com/site/shinnosuketakamichi/home) (The University of Tokyo)
- Naoko Tanji (The University of Tokyo)
- Hiroshi Saruwatari (The University of Tokyo)
# Citation
If you use the jaCappella corpus, please cite our paper. Please see [the jaCappella webpage](https://tomohikonakamura.github.io/jaCappella_corpus/).
# Links
- [jaCappella Corpus website](https://tomohikonakamura.github.io/jaCappella_corpus/)
- [Saruwatari Koyama Lab.](https://www.sp.ipc.i.u-tokyo.ac.jp/)
# Acknowledgement
The jaCappella corpus team reserves all copyrights and neighboring rights of the vocal ensemble songs.
The music arrangement and sound production were performed by Alpha Enterprises, Inc.
# Change logs
- Dec. 2023: v.2.0.1
- Fixed bug: Modified `bass.wav` and `vocal_percussion.wav` of `yurikagonouta` in the soul/funk subset
- Dec. 2023: v2
- Added sheet music and audio recordings of 15 songs (ballad, EDM, and soul/funk)
- Feb. 2023: v1.1
- Some accidental marks modified in accordance with harmony.
- Added MusicXML files for SVS
- Added MusicXML files with Romaji transcriptions of lyrics
# Known issues
## Audio
|Title|Voice part|Location|Content|
|:---:|:---:|:---:|:---:|
|kaerutokumo|soprano|Measure 31, 2nd note|sung with not "ha" but "ah"|
|fujinoyama|bass|Measure 5, 3rd note|sung with not "du" but "da"|
|juugoyaotsukisan|lead_vocal|Measure 15, 1st note|sung with not "wa" but "ah" in second verse| | jaCappella/jaCappella | [
"task_categories:audio-to-audio",
"language:ja",
"license:other",
"music",
"singing voice",
"a cappella",
"region:us"
] | 2024-02-07T02:04:13+00:00 | {"language": ["ja"], "license": "other", "task_categories": ["audio-to-audio"], "extra_gated_fields": {"I agree ALL the statements in the terms of use (see the jaCappella corpus website)": "checkbox", "I agree to use this dataset for non-commercial use ONLY": "checkbox"}, "license_name": "jacappella", "license_link": "https://tomohikonakamura.github.io/jaCappella_corpus/", "tags": ["music", "singing voice", "a cappella"], "viewer": false} | 2024-02-08T09:45:49+00:00 | [] | [
"ja"
] | TAGS
#task_categories-audio-to-audio #language-Japanese #license-other #music #singing voice #a cappella #region-us
| jaCappella corpus : Japanese a cappella vocal ensemble corpus
=============================================================
The *jaCappella* corpus is a corpus of Japanese a cappella vocal ensembles.
It consists of musical scores of vocal ensemble songs and separate audio recordings of individual voice parts.
They were arranged from out-of-copyright Japanese children's songs.
All songs have six voice parts (lead vocal, soprano, alto, tenor, bass, and vocal percussion).
Trailer video
Getting started
---------------
* Download the data in the 'jaCappella' directory of your computer by huggingface-cli
Subsets
-------
The songs of the jaCappella corpus are divided into the following ten subsets.
Each subset includes five songs.
Audio files of singing voices
-----------------------------
The singing voices were recorded in a recording studio.
The sampling frequency is 48 kHz and the audio file is provided in a monaural WAVE format.
All singers are native Japanese speakers.
Score
-----
The musical scores with Japanese lyrics are provided in the PDF and MusicXML formats.
This dataset contains MusicXML files whose lyrics were modified for singing voice synthesis (e.g. "は"->"わ", "へ"->"え").
For non-Japanese researchers, MusicXML files with Romaji transcription of lyrics are provided.
Remark: Romaji transcription does not exactly match Japanese pronunciation. For the exact Japanese pronunciation, please refer to the singing voices.
Metadata
--------
Metadata of the songs are given as a csv file.
The items in the csv file are as follows.
* title\_in\_en: Title in Hepburnian Roman alphabet
* title\_in\_ja: Title in Japanese
* lyric\_writer: Lyric writer name of the original song in Japanese
* copyright\_of\_lyric\_writer: Copyright of the lyric writer of the original song
* composer: Composer name of the original song in Japanese
* copyright\_of\_composer: Copyright of the composer of the original song
* arranger\_in\_en:Arranger name of the song of our corpus in English
* arranger\_in\_ja:Arranger name of the song of our corpus in Japanese
* subset:Subset name
* voice\_part:Voice part name
* singer\_id: Singer identifier (ID)
* gender: Singer's gender
* first\_lang: First language of singer
Directory structure
===================
'{subset}', '{title\_in\_en}', and '{voice\_part}' refer to the subset name, song title, and voice name, respectively.
They are written in lowercase, with spaces ' ' replaced by underscores '\_'.
* Audio file of voice part: 'jaCappella/{subset}/{title\_in\_en}/{voice\_part}.wav'.
* Mixture of all voice parts
+ Monaural: 'jaCappella/{subset}/{title\_in\_en}/URL'
+ Stereo: 'jaCappella/{subset}/{title\_in\_en}/mixture\_stereo.wav'
* Musical score
+ PDF with Japanese lyrics: 'jaCappella/{subset}/{title\_in\_en}/{title\_in\_en}.pdf'
+ MusicXML with Japanese lyrics: 'jaCappella/{subset}/{title\_in\_en}/{title\_in\_en}.musicxml'
+ MusicXML for singing voice synthesis (SVS) with Japanese lyrics: 'jaCappella/{subset}/{title\_in\_en}/{title\_in\_en}\_SVS.musicxml'
- Nonlexical syllables are capitalized (e.g., LA, TU).
+ MusicXML with Romaji transcription of lyrics: 'jaCappella/{subset}/{title\_in\_en}/{title\_in\_en}\_romaji.musicxml'
- Nonlexical syllables are capitalized (e.g., LA, TU).
        - Lexical syllables are not capitalized (e.g., cha, i).
* Metadata (CSV): ''jaCappella/URL''
Terms of Use
============
Please refer to the jaCappella Corpus website for the terms of use.
Contributors (The jaCappella corpus team)
=========================================
* Tomohiko Nakamura
* Shinnosuke Takamichi (The University of Tokyo)
* Naoko Tanji (The University of Tokyo)
* Hiroshi Saruwatari (The University of Tokyo)
If you use the jaCappella corpus, please cite our paper. Please see the jaCappella webpage.
Links
=====
* jaCappella Corpus website
* Saruwatari Koyama Lab.
Acknowledgement
===============
The jaCappella corpus team reserves all copyrights and neighboring rights of the vocal ensemble songs.
The music arrangement and sound production were performed by Alpha Enterprises, Inc.
Change logs
===========
* Dec. 2023: v.2.0.1
+ Fixed bug: Modified 'URL' and 'vocal\_percussion.wav' of 'yurikagonouta' in the soul/funk subset
* Dec. 2023: v2
+ Added sheet music and audio recordings of 15 songs (ballad, EDM, and soul/funk)
* Feb. 2023: v1.1
+ Some accidental marks modified in accordance with harmony.
+ Added MusicXML files for SVS
+ Added MusicXML files with Romaji transcriptions of lyrics
Known issues
============
Audio
-----
| [] | [
"TAGS\n#task_categories-audio-to-audio #language-Japanese #license-other #music #singing voice #a cappella #region-us \n"
] |
dd2e6038e1ed5f14818f54c94d20f5fc3cdfda79 | Business data set created for Volkswagen | alexchwoj/vw-llama2 | [
"region:us"
] | 2024-02-07T02:19:50+00:00 | {} | 2024-02-07T03:08:07+00:00 | [] | [] | TAGS
#region-us
| Business data set created for Volkswagen | [] | [
"TAGS\n#region-us \n"
] |
de48958814a03bea741e1267e1fc9788697f85ec | Dataset: MNIST
Model-architecture: ResNet-18
training accuracy:
testing accuracy: | Peverell/mnist-resnet18 | [
"region:us"
] | 2024-02-07T02:31:03+00:00 | {} | 2024-02-07T02:32:48+00:00 | [] | [] | TAGS
#region-us
| Dataset: MNIST
Model-architecture: ResNet-18
training accuracy:
testing accuracy: | [] | [
"TAGS\n#region-us \n"
] |
a8f23d04fe56d7151286fc79a1b6053c872b0df0 | # Dataset Card for "cauhoiphapluat_small"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | phamtungthuy/cauhoiphapluat_small | [
"region:us"
] | 2024-02-07T04:00:43+00:00 | {"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "field", "dtype": "string"}, {"name": "time", "dtype": "string"}, {"name": "relevant", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 3416949, "num_examples": 1350}, {"name": "train", "num_bytes": 103105646, "num_examples": 40500}], "download_size": 39095102, "dataset_size": 106522595}, "configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}, {"split": "train", "path": "data/train-*"}]}]} | 2024-02-07T04:01:04+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "cauhoiphapluat_small"
More Information needed | [
"# Dataset Card for \"cauhoiphapluat_small\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"cauhoiphapluat_small\"\n\nMore Information needed"
] |
7fe43b3df1ab928f56d4ef520fcc2626f25fb378 | தமிழ் வெண்பாக்கள் ~5000, பதவுரை குறிப்புரையுடன்.
நாலடியார்
பதினெண்கீழ்க்கணக்கு
பக்தி இலக்கியங்கள் | aitamilnadu/marabutamilvenba | [
"license:gpl-3.0",
"region:us"
] | 2024-02-07T05:04:35+00:00 | {"license": "gpl-3.0"} | 2024-02-07T05:08:46+00:00 | [] | [] | TAGS
#license-gpl-3.0 #region-us
| தமிழ் வெண்பாக்கள் ~5000, பதவுரை குறிப்புரையுடன்.
நாலடியார்
பதினெண்கீழ்க்கணக்கு
பக்தி இலக்கியங்கள் | [] | [
"TAGS\n#license-gpl-3.0 #region-us \n"
] |
5defcad2827003ff9822f793517dbb511b674703 |
---
configs:
- config_name: train
data_files: "train.csv"
sep: "\t"
- config_name: test
data_files: "test.csv"
sep: ","
--- | Ammok/media_campaign_cost | [
"license:mit",
"region:us"
] | 2024-02-07T06:13:55+00:00 | {"license": "mit"} | 2024-02-07T07:11:54+00:00 | [] | [] | TAGS
#license-mit #region-us
|
---
configs:
- config_name: train
data_files: "URL"
sep: "\t"
- config_name: test
data_files: "URL"
sep: ","
--- | [] | [
"TAGS\n#license-mit #region-us \n"
] |
9d19f97f5f11a2334e0db3578c7c7ce41055dabc |
### varint dataset
path = /varints
[(int32, varint bytes)]
### varlong dataset
path = /varlosgs
[(int64, varlong bytes)]
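
For reference (not part of the dataset files), Minecraft-protocol VarInts and VarLongs store 7 data bits per byte, with the high bit of each byte acting as a continuation flag; the final value is reinterpreted as a signed 32- or 64-bit two's-complement integer. A minimal decoder sketch:

```python
def decode_var(data: bytes, max_bits: int = 32) -> int:
    """Decode a Minecraft-style VarInt (max_bits=32) or VarLong (max_bits=64)."""
    result = 0
    for i, byte in enumerate(data):
        result |= (byte & 0x7F) << (7 * i)   # 7 payload bits per byte, least-significant group first
        if not byte & 0x80:                  # continuation bit clear -> last byte
            break
    else:
        raise ValueError("truncated varint/varlong")
    if result & (1 << (max_bits - 1)):       # reinterpret as signed two's complement
        result -= 1 << max_bits
    return result

print(decode_var(bytes([0xFF, 0xFF, 0xFF, 0xFF, 0x0F])))  # -1 encoded as a VarInt
```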
| graphmc/minecraft_packet_varint_and_varlong | [
"license:mit",
"region:us"
] | 2024-02-07T07:15:30+00:00 | {"license": "mit"} | 2024-02-07T07:38:30+00:00 | [] | [] | TAGS
#license-mit #region-us
|
### varint dataset
path = /varints
[(int32, varint bytes)]
### varlong dataset
path = /varlosgs
[(int64, varlong bytes)]
| [
"### varint dataset\n\npath = /varints\n\n\n[(int32, varint bytes)]",
"### varlong dataset\n\npath = /varlosgs\n\n\n[(int64, varlong bytes)]"
] | [
"TAGS\n#license-mit #region-us \n",
"### varint dataset\n\npath = /varints\n\n\n[(int32, varint bytes)]",
"### varlong dataset\n\npath = /varlosgs\n\n\n[(int64, varlong bytes)]"
] |
fbf5ec47a8afbc0403b317274e6e59c7b9a4c3e6 |
This dataset includes the embeddings of 10 sentences describing information about the City of Melbourne. You can check these sentences in the Kaggle notebook [Semantic Search](https://www.kaggle.com/code/aisuko/semantic-search).
# How to use it
```python
from datasets import load_dataset
embeddings_ds=load_dataset('aisuko/sentences_of_Melbourne')
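
# --- illustrative extension (not from the original card) --------------------
# The split and column names below ("train", "embedding") are assumptions;
# print(embeddings_ds) first and adjust them to the actual schema.
import numpy as np

split = embeddings_ds["train"]                  # hypothetical split name
vectors = np.array(split["embedding"])          # hypothetical embedding column
query = vectors[0]                              # compare the first sentence against the rest
scores = vectors @ query / (np.linalg.norm(vectors, axis=1) * np.linalg.norm(query))
print(scores)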
``` | aisuko/sentences_of_Melbourne | [
"license:apache-2.0",
"region:us"
] | 2024-02-07T07:36:48+00:00 | {"license": "apache-2.0"} | 2024-02-09T01:01:25+00:00 | [] | [] | TAGS
#license-apache-2.0 #region-us
|
This dataset includes the embeddings of 10 sentences describing information about the City of Melbourne. You can check these sentences in the Kaggle notebook Semantic Search.
# How to use it
| [
"# How to use it"
] | [
"TAGS\n#license-apache-2.0 #region-us \n",
"# How to use it"
] |
e23beaf1cb79a091ae2a330100b2a63dc4d5fbd8 | Validation split from [here](https://huggingface.co/datasets/detection-datasets/fashionpedia), copied over for demonstration of custom dataset loading with FiftyOne's Hugging Face Hub utils. | jamarks/fashionpedia-copy | [
"region:us"
] | 2024-02-07T07:47:11+00:00 | {} | 2024-02-13T01:03:27+00:00 | [] | [] | TAGS
#region-us
| Validation split from here, copied over for demonstration of custom dataset loading with FiftyOne's Hugging Face Hub utils. | [] | [
"TAGS\n#region-us \n"
] |
255936c2d7926abb32315769ca4b0a6558e19aee |
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | pioivenium/im-map-dataset-test-deep-learning | [
"size_categories:10K<n<100K",
"language:en",
"license:openrail",
"region:us"
] | 2024-02-07T08:17:18+00:00 | {"language": ["en"], "license": "openrail", "size_categories": ["10K<n<100K"], "pretty_name": "map_test"} | 2024-02-08T07:13:21+00:00 | [] | [
"en"
] | TAGS
#size_categories-10K<n<100K #language-English #license-openrail #region-us
|
# Dataset Card for Dataset Name
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#size_categories-10K<n<100K #language-English #license-openrail #region-us \n",
"# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] |
45ac60de914a783545a0d956aadacbb5bfc36990 |
Photo Aesthetics Ranking Network with Attributes and Content Adaptation
## Citation
```bibtex
@inproceedings{kong2016aesthetics,
title={Photo Aesthetics Ranking Network with Attributes and Content Adaptation},
author={Kong, Shu and Shen, Xiaohui and Lin, Zhe and Mech, Radomir and Fowlkes, Charless},
booktitle={ECCV},
year={2016}
}
``` | Iceclear/AADB | [
"license:apache-2.0",
"region:us"
] | 2024-02-07T08:45:05+00:00 | {"license": "apache-2.0"} | 2024-02-07T09:12:57+00:00 | [] | [] | TAGS
#license-apache-2.0 #region-us
|
Photo Aesthetics Ranking Network with Attributes and Content Adaptation
| [] | [
"TAGS\n#license-apache-2.0 #region-us \n"
] |
d90e8d48b457b401b7e67a36e83b000625ed183d |
AVA: A Large-Scale Database for Aesthetic Visual Analysis
See [Github Page](https://github.com/imfing/ava_downloader/tree/master/AVA_dataset) for tags.
## Citation
```bibtex
@inproceedings{murray2012ava,
title={AVA: A large-scale database for aesthetic visual analysis},
author={Murray, Naila and Marchesotti, Luca and Perronnin, Florent},
booktitle={CVPR},
year={2012},
}
``` | Iceclear/AVA | [
"license:apache-2.0",
"region:us"
] | 2024-02-07T09:13:25+00:00 | {"license": "apache-2.0"} | 2024-02-14T07:28:46+00:00 | [] | [] | TAGS
#license-apache-2.0 #region-us
|
AVA: A Large-Scale Database for Aesthetic Visual Analysis
See Github Page for tags.
| [] | [
"TAGS\n#license-apache-2.0 #region-us \n"
] |
1a13701876fae678a70b263f6601e1dff532acc4 | human shoes | tubilibirseyler/tybi | [
"region:us"
] | 2024-02-07T09:27:26+00:00 | {} | 2024-02-07T09:27:35+00:00 | [] | [] | TAGS
#region-us
| human shoes | [] | [
"TAGS\n#region-us \n"
] |
61ed0a595f60ccddc0f67bfc77b20815da85dc41 |
The [sustainability-report-emissions](https://huggingface.co/datasets/nopperl/sustainability-report-emissions) dataset converted into preferences-style JSONL format for DPO training. It can be directly used by [DPOTrainer](https://huggingface.co/docs/trl/dpo_trainer), [axolotl](https://github.com/OpenAccess-AI-Collective/axolotl), etc. The prompt consists of an instruction and text extracted from relevant pages of a sustainability report. The chosen output is generated using the [Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) model and consists of a JSON string containing the scope 1, 2 and 3 emissions as well as the ids of pages containing this information. The rejected output is randomly generated in the same style. The dataset generation scripts are at [this GitHub repo](https://github.com/nopperl/corporate_emission_reports). An example model finetuned on this dataset is [emissions-extraction-lora-dpo](https://huggingface.co/nopperl/emissions-extraction-lora-dpo).
Note: the prompts do not use any instruction format. It is recommended to process them according to the instruction format your base model was trained on. See the configuration of [emissions-extraction-lora](https://huggingface.co/nopperl/emissions-extraction-lora). Also, as the prompts are rather long (~15000 tokens on average), a significant amount of memory is required for training.
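
For illustration, a minimal loading sketch is shown below; the split name and the preference-style column names (`prompt`/`chosen`/`rejected`) are assumptions and should be checked against the actual schema, and the ChatML wrapper is only one example of applying an instruction format.

```python
from datasets import load_dataset

ds = load_dataset("nopperl/sustainability-report-emissions-dpo", split="train")  # split name assumed
print(ds.column_names)  # expected preference-style fields such as prompt / chosen / rejected

def to_chatml(example):
    # Wrap the raw prompt in your base model's instruction format (ChatML shown as an example)
    example["prompt"] = (
        "<|im_start|>user\n" + example["prompt"] + "<|im_end|>\n<|im_start|>assistant\n"
    )
    return example

ds = ds.map(to_chatml)
```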
| nopperl/sustainability-report-emissions-dpo | [
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:en",
"license:pddl",
"climate",
"region:us"
] | 2024-02-07T09:32:12+00:00 | {"language": ["en"], "license": "pddl", "size_categories": ["1K<n<10K"], "task_categories": ["text-generation"], "tags": ["climate"]} | 2024-02-07T09:44:47+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-generation #size_categories-1K<n<10K #language-English #license-pddl #climate #region-us
|
The sustainability-report-emissions dataset converted into preferences-style JSONL format for DPO training. It can be directly used by DPOTrainer, axolotl, etc. The prompt consists of an instruction and text extracted from relevant pages of a sustainability report. The chosen output is generated using the Mixtral-8x7B-v0.1 model and consists of a JSON string containing the scope 1, 2 and 3 emissions as well as the ids of pages containing this information. The rejected output is randomly generated in the same style. The dataset generation scripts are at this GitHub repo. An example model finetuned on this dataset is emissions-extraction-lora-dpo.
Note: the prompts do not use any instruction format. It is recommended to process them according to the instruction format your base model was trained on. See the configuration of emissions-extraction-lora. Also, as the prompts are rather long (~15000 tokens on average), a significant amount of memory is required for training.
| [] | [
"TAGS\n#task_categories-text-generation #size_categories-1K<n<10K #language-English #license-pddl #climate #region-us \n"
] |
3a3cff4b525dd1a31fe917051f3bd029fe9ba63b |
## Dataset Information
| # Nodes | # Edges | # Features |
|:-------:|:-------:|:----------:|
| 2,277 | 36,101 | 2,325 |
## Usage
```python
import dgl
from huggingface_hub import hf_hub_download

# Download the pre-processed graph file, then deserialize it with DGL
hf_hub_download(repo_id="SauravMaheshkar/pareto-chameleon", filename="processed/chameleon.bin", local_dir="./data/", repo_type="dataset")
dataset, _ = dgl.load_graphs("./data/processed/chameleon.bin")
```
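
Once loaded, the graph can be inspected with standard DGL accessors; the exact `ndata` key names depend on how the graph was saved, so treat the last line as a quick way to discover them.

```python
graph = dataset[0]
print(graph.num_nodes(), graph.num_edges())  # expected: 2,277 nodes and 36,101 edges
print(list(graph.ndata.keys()))              # node feature/label tensors stored on the graph
```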
Thank you [@severo](https://huggingface.co/severo) for helping me [figure out the usage](https://discuss.huggingface.co/t/can-i-use-a-pickle-file-with-the-data-files-argument-with-datasets/72189/2?u=sauravmaheshkar).
Pre-processed as per the official codebase of https://arxiv.org/abs/2210.02016
## Citations
```
@article{ju2023multi,
title={Multi-task Self-supervised Graph Neural Networks Enable Stronger Task Generalization},
author={Ju, Mingxuan and Zhao, Tong and Wen, Qianlong and Yu, Wenhao and Shah, Neil and Ye, Yanfang and Zhang, Chuxu},
booktitle={International Conference on Learning Representations},
year={2023}
}
```
```
@article{DBLP:journals/corr/abs-1909-13021,
author = {Benedek Rozemberczki and
Carl Allen and
Rik Sarkar},
title = {Multi-scale Attributed Node Embedding},
journal = {CoRR},
volume = {abs/1909.13021},
year = {2019},
url = {http://arxiv.org/abs/1909.13021},
eprinttype = {arXiv},
eprint = {1909.13021},
timestamp = {Wed, 02 Oct 2019 13:04:08 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1909-13021.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | SauravMaheshkar/pareto-chameleon | [
"task_categories:graph-ml",
"size_categories:1K<n<10K",
"license:cc",
"art",
"arxiv:2210.02016",
"arxiv:1909.13021",
"region:us"
] | 2024-02-07T09:55:11+00:00 | {"license": "cc", "size_categories": ["1K<n<10K"], "task_categories": ["graph-ml"], "tags": ["art"]} | 2024-02-12T21:39:03+00:00 | [
"2210.02016",
"1909.13021"
] | [] | TAGS
#task_categories-graph-ml #size_categories-1K<n<10K #license-cc #art #arxiv-2210.02016 #arxiv-1909.13021 #region-us
| Dataset Information
-------------------
Usage
-----
Thank you @severo for helping me figure out the usage.
Pre-processed as per the official codebase of URL
| [] | [
"TAGS\n#task_categories-graph-ml #size_categories-1K<n<10K #license-cc #art #arxiv-2210.02016 #arxiv-1909.13021 #region-us \n"
] |
6eb294382133b4dad189a55d12fdf21b7ec3c07d |
The Checkpoints dataset as trained and used in [A Symmetry-Aware Exploration of Bayesian Neural Network Posteriors](https://arxiv.org/abs/2310.08287) published at ICLR 2024.
## Usage
### Tar
Just untar the desired models. This will create a new folder containing the models saved as safetensors.
### TorchUncertainty
To load or train models, start by downloading [TorchUncertainty](https://github.com/ENSTA-U2IS-AI/torch-uncertainty) - [Documentation](https://torch-uncertainty.github.io/).
Install the desired version of PyTorch and torchvision, for instance with:
```bash
pip install torch torchvision
```
Then, install TorchUncertainty via pip:
```bash
pip install torch-uncertainty
```
### Loading models
The functions to load the models are available in `scripts`.
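
If you only want to read a checkpoint directly, the `safetensors` package can load the raw state dict. The path below is a placeholder for one of the files extracted from the tar archives, and a matching model (e.g. built via TorchUncertainty or torchvision) must be instantiated before calling `load_state_dict`.

```python
from safetensors.torch import load_file

# Placeholder path: point this at one of the extracted .safetensors files
state_dict = load_file("checkpoints/some_model/version_0.safetensors")
print(sorted(state_dict.keys())[:5])  # inspect parameter names before loading into a model
# model.load_state_dict(state_dict)   # with `model` built to match the checkpoint's architecture
```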
**Any questions?** Please feel free to ask in the [GitHub Issues](https://github.com/ENSTA-U2IS-AI/torch-uncertainty/issues) or on our [Discord server](https://discord.gg/HMCawt5MJu).
| torch-uncertainty/Checkpoints | [
"license:apache-2.0",
"vision",
"checkpoints",
"residual-networks",
"arxiv:2310.08287",
"region:us"
] | 2024-02-07T09:58:00+00:00 | {"license": "apache-2.0", "pretty_name": "Checkpoints", "tags": ["vision", "checkpoints", "residual-networks"]} | 2024-02-16T10:46:52+00:00 | [
"2310.08287"
] | [] | TAGS
#license-apache-2.0 #vision #checkpoints #residual-networks #arxiv-2310.08287 #region-us
|
The Checkpoints dataset as trained and used in A Symmetry-Aware Exploration of Bayesian Neural Network Posteriors published at ICLR 2024.
## Usage
### Tar
Just untar the desired models. This will create a new folder containing the models saved as safetensors.
### TorchUncertainty
To load or train models, start by downloading TorchUncertainty - Documentation.
Install the desired version of PyTorch and torchvision, for instance with:
Then, install TorchUncertainty via pip:
### Loading models
The functions to load the models are available in 'scripts'.
Any questions? Please feel free to ask in the GitHub Issues or on our Discord server.
| [
"## Usage",
"### Tar\n\nJust untar the desired models. This will create a new folder containing the models saved as safetensors.",
"### TorchUncertainty\n\nTo load or train models, start by downloading TorchUncertainty - Documentation.\nInstall the desired version of PyTorch and torchvision, for instance with:\n\n\n\nThen, install TorchUncertainty via pip:",
"### Loading models\nThe functions to load the models are available in 'scripts'.\n\nAny questions? Please feel free to ask in the GitHub Issues or on our Discord server."
] | [
"TAGS\n#license-apache-2.0 #vision #checkpoints #residual-networks #arxiv-2310.08287 #region-us \n",
"## Usage",
"### Tar\n\nJust untar the desired models. This will create a new folder containing the models saved as safetensors.",
"### TorchUncertainty\n\nTo load or train models, start by downloading TorchUncertainty - Documentation.\nInstall the desired version of PyTorch and torchvision, for instance with:\n\n\n\nThen, install TorchUncertainty via pip:",
"### Loading models\nThe functions to load the models are available in 'scripts'.\n\nAny questions? Please feel free to ask in the GitHub Issues or on our Discord server."
] |
53d766696fd727e8fcd02286db090c9f57eef767 |
## Dataset Information
| # Nodes | # Edges | # Features |
|:-------:|:-------:|:----------:|
| 5,201 | 217,073 | 2,089 |
## Usage
```python
from huggingface_hub import hf_hub_download
import dgl

hf_hub_download(repo_id="SauravMaheshkar/pareto-squirrel", filename="processed/squirrel.bin", local_dir="./data/", repo_type="dataset")
dataset, _ = dgl.load_graphs("./data/processed/squirrel.bin")
```
Thank you [@severo](https://huggingface.co/severo) for helping me [figure out the usage](https://discuss.huggingface.co/t/can-i-use-a-pickle-file-with-the-data-files-argument-with-datasets/72189/2?u=sauravmaheshkar).
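As a quick sanity check, the loaded graph can be inspected as sketched below; the counts should line up with the table above (the edge count may appear doubled if edges are stored in both directions), and the node-data key names are only discovered by introspection.

```python
import dgl

dataset, _ = dgl.load_graphs("./data/processed/squirrel.bin")
graph = dataset[0]  # the binary stores a single graph

print(graph.num_nodes(), graph.num_edges())  # compare with 5,201 nodes / 217,073 edges above
print(list(graph.ndata.keys()))  # node-data keys (feature/label names may vary)
```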
Pre-processed as per the official codebase of https://arxiv.org/abs/2210.02016
## Citations
```
@inproceedings{ju2023multi,
title={Multi-task Self-supervised Graph Neural Networks Enable Stronger Task Generalization},
author={Ju, Mingxuan and Zhao, Tong and Wen, Qianlong and Yu, Wenhao and Shah, Neil and Ye, Yanfang and Zhang, Chuxu},
booktitle={International Conference on Learning Representations},
year={2023}
}
```
```
@article{DBLP:journals/corr/abs-1909-13021,
author = {Benedek Rozemberczki and
Carl Allen and
Rik Sarkar},
title = {Multi-scale Attributed Node Embedding},
journal = {CoRR},
volume = {abs/1909.13021},
year = {2019},
url = {http://arxiv.org/abs/1909.13021},
eprinttype = {arXiv},
eprint = {1909.13021},
timestamp = {Wed, 02 Oct 2019 13:04:08 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1909-13021.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | SauravMaheshkar/pareto-squirrel | [
"task_categories:graph-ml",
"size_categories:1K<n<10K",
"license:cc",
"art",
"arxiv:2210.02016",
"arxiv:1909.13021",
"region:us"
] | 2024-02-07T10:10:26+00:00 | {"license": "cc", "size_categories": ["1K<n<10K"], "task_categories": ["graph-ml"], "tags": ["art"]} | 2024-02-12T21:38:29+00:00 | [
"2210.02016",
"1909.13021"
] | [] | TAGS
#task_categories-graph-ml #size_categories-1K<n<10K #license-cc #art #arxiv-2210.02016 #arxiv-1909.13021 #region-us
| Dataset Information
-------------------
Usage
-----
Thank you @severo for helping me figure out the usage.
Pre-processed as per the official codebase of URL
| [] | [
"TAGS\n#task_categories-graph-ml #size_categories-1K<n<10K #license-cc #art #arxiv-2210.02016 #arxiv-1909.13021 #region-us \n"
] |
05b16ec194a768d51e7d1d3056074e91bdb106cd | # Dataset Card for "find_marker_both_sent_train_400_eval_40_random_permute_rerun_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/find_marker_both_sent_train_400_eval_40_random_permute_rerun_1 | [
"region:us"
] | 2024-02-07T10:22:59+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3022759.5316631873, "num_examples": 2434}, {"name": "validation", "num_bytes": 220570, "num_examples": 200}], "download_size": 898697, "dataset_size": 3243329.5316631873}} | 2024-02-07T10:23:08+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "find_marker_both_sent_train_400_eval_40_random_permute_rerun_1"
More Information needed | [
"# Dataset Card for \"find_marker_both_sent_train_400_eval_40_random_permute_rerun_1\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"find_marker_both_sent_train_400_eval_40_random_permute_rerun_1\"\n\nMore Information needed"
] |
6e7f14d7d3cfc3e27dcd64e3f2d13e3fde0a37b1 | # Dataset Card for "find_marker_both_sent_train_400_eval_40_random_permute_rerun_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/find_marker_both_sent_train_400_eval_40_random_permute_rerun_2 | [
"region:us"
] | 2024-02-07T10:23:20+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3710817.0470730234, "num_examples": 2874}, {"name": "validation", "num_bytes": 220570, "num_examples": 200}], "download_size": 989034, "dataset_size": 3931387.0470730234}} | 2024-02-07T10:23:28+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "find_marker_both_sent_train_400_eval_40_random_permute_rerun_2"
More Information needed | [
"# Dataset Card for \"find_marker_both_sent_train_400_eval_40_random_permute_rerun_2\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"find_marker_both_sent_train_400_eval_40_random_permute_rerun_2\"\n\nMore Information needed"
] |
a363a93c410c3b23db3439bb444b809583586582 | # Dataset Card for "find_marker_both_sent_train_400_eval_40_random_permute_rerun_4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/find_marker_both_sent_train_400_eval_40_random_permute_rerun_4 | [
"region:us"
] | 2024-02-07T10:23:39+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5100580.566046733, "num_examples": 3754}, {"name": "validation", "num_bytes": 220570, "num_examples": 200}], "download_size": 1069446, "dataset_size": 5321150.566046733}} | 2024-02-07T10:23:46+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "find_marker_both_sent_train_400_eval_40_random_permute_rerun_4"
More Information needed | [
"# Dataset Card for \"find_marker_both_sent_train_400_eval_40_random_permute_rerun_4\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"find_marker_both_sent_train_400_eval_40_random_permute_rerun_4\"\n\nMore Information needed"
] |
115b162afb4d4050721f0f86d144e08ed8514254 | # Dataset Card for "find_marker_both_sent_train_400_eval_40_random_permute_rerun_8"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/find_marker_both_sent_train_400_eval_40_random_permute_rerun_8 | [
"region:us"
] | 2024-02-07T10:23:56+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7906460.143432986, "num_examples": 5514}, {"name": "validation", "num_bytes": 220570, "num_examples": 200}], "download_size": 1176600, "dataset_size": 8127030.143432986}} | 2024-02-07T10:24:03+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "find_marker_both_sent_train_400_eval_40_random_permute_rerun_8"
More Information needed | [
"# Dataset Card for \"find_marker_both_sent_train_400_eval_40_random_permute_rerun_8\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"find_marker_both_sent_train_400_eval_40_random_permute_rerun_8\"\n\nMore Information needed"
] |
bb766be38d4dfd7a349ea495608d0644f5e4e283 |
A concatenated instruction-based dataset from the following:
- NbAiLab/norwegian-alpaca
- RuterNorway/Fleurs-Alpaca-EN-NO
- RuterNorway/OpenOrcaNo-15k | tollefj/nor-instruct-combined | [
"language:nb",
"region:us"
] | 2024-02-07T10:31:48+00:00 | {"language": ["nb"], "dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 32962041, "num_examples": 67714}, {"name": "test", "num_bytes": 322359, "num_examples": 684}], "download_size": 21130799, "dataset_size": 33284400}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}]} | 2024-02-12T08:30:19+00:00 | [] | [
"nb"
] | TAGS
#language-Norwegian Bokmål #region-us
|
A concatenated instruction-based dataset from the following:
- NbAiLab/norwegian-alpaca
- RuterNorway/Fleurs-Alpaca-EN-NO
- RuterNorway/OpenOrcaNo-15k | [] | [
"TAGS\n#language-Norwegian Bokmål #region-us \n"
] |
e79ec75cc5489c0c85825aae13cd6dcabe0d0a57 |
This data compilation is intended to support tasks such as training, fine-tuning, or similar processes for any language model. The 'data' directory contains the dataset stored in Parquet format.
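As a rough sketch, the Parquet files can be loaded with the `datasets` library using this repository id; that they resolve to a single default configuration is an assumption, and the pandas path in the comments uses a placeholder file name.

```python
from datasets import load_dataset

# Assumes the Parquet files under `data/` resolve to a default configuration.
ds = load_dataset("genaidevops/kubernetes-stackoverflow-questions")
print(ds)

# Alternatively, read a single Parquet file directly with pandas
# (the file name below is a placeholder, not a confirmed path):
# import pandas as pd
# df = pd.read_parquet("data/train-00000-of-00001.parquet")
```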
Every piece of information within this dataset originates from the Stack Exchange network and was obtained utilizing the Stack Exchange Data Explorer tool (https://github.com/StackExchange/StackExchange.DataExplorer). Specifically, the dataset encompasses all Question-Answer pairs tagged with Kubernetes from Stack Overflow. Notably, within each Question-Answer pair, the Answer selected is the one with the highest positive score. Posts with negative scores on Stack Overflow have been omitted from this dataset. | genaidevops/kubernetes-stackoverflow-questions | [
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-4.0",
"Kubernetes",
"Stack Overflow",
"region:us"
] | 2024-02-07T10:58:08+00:00 | {"language": ["en"], "license": "cc-by-4.0", "size_categories": ["10K<n<100K"], "tags": ["Kubernetes", "Stack Overflow"]} | 2024-02-07T11:22:45+00:00 | [] | [
"en"
] | TAGS
#size_categories-10K<n<100K #language-English #license-cc-by-4.0 #Kubernetes #Stack Overflow #region-us
|
This data compilation is intended to support tasks such as training, fine-tuning, or similar processes for any language model. The 'data' directory contains the dataset stored in Parquet format.
Every piece of information within this dataset originates from the Stack Exchange network and was obtained utilizing the Stack Exchange Data Explorer tool (URL Specifically, the dataset encompasses all Question-Answer pairs tagged with Kubernetes from Stack Overflow. Notably, within each Question-Answer pair, the Answer selected is the one with the highest positive score. Posts with negative scores on Stack Overflow have been omitted from this dataset. | [] | [
"TAGS\n#size_categories-10K<n<100K #language-English #license-cc-by-4.0 #Kubernetes #Stack Overflow #region-us \n"
] |
703b91e4715ec1a277e8f64bc1d538d1a6527752 |
### Description
This is a dataset created for training Russian-language Seq2Seq and CLM models primarily for tasks related to Closed-Domain QA.
The dataset includes 3 main tasks:
1. AAQG (Answer-Aware Question Generation) - generation of questions based on context, provided the answer is known
2. QG - generating questions based on context, without a known answer
3. QA - the standard task of answering a question based on context.
AAQG, QG, QA tasks are generated based on regular datasets for which the context, question and correct answer are known. They are generated in a ratio of 0.4, 0.3 and 0.3, respectively.
List of datasets used to compile this dataset:
1. sberquad
2. russian_super_glue/muserc
3. russian_super_glue/danetqa
Prompts used for QA tasks:
```python
AAQG_PROMPT = "Сгенерируй вопрос по тексту, используя известный ответ. Текст: '{context}'. Ответ: '{answer}'."
QG_PROMPT = "Сгенерируй вопрос по тексту. Текст: '{context}'."
QA_PROMPT = "Сгенерируй ответ на вопрос по тексту. Текст: '{context}'. Вопрос: '{question}'."
```
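For illustration, the templates can be filled with Python's `str.format`; the angle-bracketed values below are placeholders, not examples drawn from the dataset.

```python
AAQG_PROMPT = "Сгенерируй вопрос по тексту, используя известный ответ. Текст: '{context}'. Ответ: '{answer}'."

# Placeholder values -- substitute a real context/answer pair from the dataset.
instruction = AAQG_PROMPT.format(context="<context paragraph>", answer="<answer span>")
print(instruction)
```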
### Authors
- Sergei Bratchikov (https://t.me/nlpwanderer) | hivaze/ru-AAQG-QA-QG | [
"task_categories:question-answering",
"task_categories:text2text-generation",
"size_categories:10K<n<100K",
"language:ru",
"qa",
"qg",
"aaqg",
"instruct",
"question-answering",
"question-generation",
"region:us"
] | 2024-02-07T11:12:52+00:00 | {"language": ["ru"], "size_categories": ["10K<n<100K"], "task_categories": ["question-answering", "text2text-generation"], "dataset_info": {"features": [{"name": "task_type", "dtype": "string"}, {"name": "instruction", "dtype": "string"}, {"name": "target", "dtype": "string"}, {"name": "source", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 93075676, "num_examples": 53264}, {"name": "validation", "num_bytes": 12239164, "num_examples": 6850}], "download_size": 45289649, "dataset_size": 105314840}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "tags": ["qa", "qg", "aaqg", "instruct", "question-answering", "question-generation"]} | 2024-02-08T03:52:43+00:00 | [] | [
"ru"
] | TAGS
#task_categories-question-answering #task_categories-text2text-generation #size_categories-10K<n<100K #language-Russian #qa #qg #aaqg #instruct #question-answering #question-generation #region-us
|
### Description
This is a dataset created for training Russian-language Seq2Seq and CLM models primarily for tasks related to Closed-Domain QA.
The dataset includes 3 main tasks:
1. AAQG (Answer-Aware Question Generation) - generation of questions based on context, provided the answer is known
2. QG - generating questions based on context, without a known answer
3. QA - the standard task of answering a question based on context.
AAQG, QG, QA tasks are generated based on regular datasets for which the context, question and correct answer are known. They are generated in a ratio of 0.4, 0.3 and 0.3, respectively.
List of datasets used to compile this dataset:
1. sberquad
2. russian_super_glue/muserc
3. russian_super_glue/danetqa
Prompts used for QA tasks:
### Authors
- Sergei Bratchikov (https://t.me/nlpwanderer) | [
"### Description\n\nThis is a dataset created for training Russian-language Seq2Seq and CLM models primarily for tasks related to Closed-Domain QA.\n\nThe dataset includes 3 main tasks:\n1. AAQG (Answer-Aware Question Answering) - generation of questions based on context, provided the answer is known\n2. QG - generating questions based on context, without a known answer\n3. QA - the standard task of answering a question based on context.\n\nAAQG, QG, QA tasks are generated based on regular datasets for which the context, question and correct answer are known. They are generated in a ratio of 0.4, 0.3 and 0.3, respectively.\n\nList of datasets used to compile this dataset:\n1. sberquad\n2. russian_super_glue/muserc\n3. russian_super_glue/danetqa\n\nPrompts used for QA tasks:",
"### Authors\n- Sergei Bratchikov (https://t.me/nlpwanderer)"
] | [
"TAGS\n#task_categories-question-answering #task_categories-text2text-generation #size_categories-10K<n<100K #language-Russian #qa #qg #aaqg #instruct #question-answering #question-generation #region-us \n",
"### Description\n\nThis is a dataset created for training Russian-language Seq2Seq and CLM models primarily for tasks related to Closed-Domain QA.\n\nThe dataset includes 3 main tasks:\n1. AAQG (Answer-Aware Question Answering) - generation of questions based on context, provided the answer is known\n2. QG - generating questions based on context, without a known answer\n3. QA - the standard task of answering a question based on context.\n\nAAQG, QG, QA tasks are generated based on regular datasets for which the context, question and correct answer are known. They are generated in a ratio of 0.4, 0.3 and 0.3, respectively.\n\nList of datasets used to compile this dataset:\n1. sberquad\n2. russian_super_glue/muserc\n3. russian_super_glue/danetqa\n\nPrompts used for QA tasks:",
"### Authors\n- Sergei Bratchikov (https://t.me/nlpwanderer)"
] |
f23b5e9f057b3fc3d577b3f41881a4c452696627 | source: https://www.kaggle.com/datasets/akashkumar01/yelpamazonimdb?select=imdb_labelled.txt
imdb_labelled_txt to csv | gserafico/imdb_reviews | [
"region:us"
] | 2024-02-07T11:21:59+00:00 | {} | 2024-02-07T11:24:28+00:00 | [] | [] | TAGS
#region-us
| source: URL
imdb_labelled_txt to csv | [] | [
"TAGS\n#region-us \n"
] |
2378539b0b0158c3c787eb3037f90b0a3b97f334 | source: https://www.kaggle.com/datasets/lakshmi25npathi/imdb-dataset-of-50k-movie-reviews
| gserafico/IMDB_Dataset | [
"region:us"
] | 2024-02-07T11:22:46+00:00 | {} | 2024-02-07T11:25:22+00:00 | [] | [] | TAGS
#region-us
| source: URL
| [] | [
"TAGS\n#region-us \n"
] |
9c68df0515a0a67ca0b02028689eb2e91452b298 | # Dataset Card for "ko_elo_arena_0207"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | instructkr/ko_elo_arena_0207 | [
"language:ko",
"region:us"
] | 2024-02-07T11:52:47+00:00 | {"language": ["ko"], "dataset_info": {"features": [{"name": "index", "dtype": "int64"}, {"name": "modelAName", "dtype": "string"}, {"name": "modelBName", "dtype": "string"}, {"name": "instruction", "dtype": "string"}, {"name": "modelAResponse", "dtype": "string"}, {"name": "modelBResponse", "dtype": "string"}, {"name": "winner", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 1101623, "num_examples": 432}], "download_size": 542632, "dataset_size": 1101623}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2024-02-09T11:38:27+00:00 | [] | [
"ko"
] | TAGS
#language-Korean #region-us
| # Dataset Card for "ko_elo_arena_0207"
More Information needed | [
"# Dataset Card for \"ko_elo_arena_0207\"\n\nMore Information needed"
] | [
"TAGS\n#language-Korean #region-us \n",
"# Dataset Card for \"ko_elo_arena_0207\"\n\nMore Information needed"
] |
8ed55e906e498ec845312e4746dac7e0cd8b08ed | # Biometric Attack Dataset
# The dataset is created on the basis of [Anti Spoofing Real Dataset](https://trainingdata.pro/data-market/anti-spoofing-real/?utm_source=huggingface&utm_medium=cpc&utm_campaign=celebA)
We offer a dataset similar to the CelebA Dataset, but with photos of **real people**; additionally, this face anti-spoofing and face recognition dataset includes not only images but also videos of the individuals!
The videos were gathered by capturing faces of genuine individuals presenting spoofs, using facial presentations. Our dataset proposes a novel approach that learns and detects spoofing techniques, extracting features from the genuine facial images to prevent the capturing of such information by fake users.
The dataset contains images and videos of real humans with various **resolutions, views, and colors**, making it a comprehensive resource for researchers working on anti-spoofing technologies.
### People in the dataset
.png?generation=1707303383706326&alt=media)
### Types of files in the dataset:
- **photo** - selfie of the person
- **video** - real video of the person
Our dataset also explores the use of neural architectures, such as deep neural networks, to facilitate the identification of distinguishing patterns and textures in different regions of the face, increasing the accuracy and generalizability of the anti-spoofing models.
# 💴 For Commercial Usage: Full version of the dataset includes 140,000 files, leave a request on **[TrainingData](https://trainingdata.pro/data-market/anti-spoofing-real/?utm_source=huggingface&utm_medium=cpc&utm_campaign=celebA)** to buy the dataset
### Metadata for the full dataset:
- **assignment_id** - unique identifier of the media file
- **worker_id** - unique identifier of the person
- **age** - age of the person
- **true_gender** - gender of the person
- **country** - country of the person
- **ethnicity** - ethnicity of the person
- **video_extension** - video extensions in the dataset
- **video_resolution** - video resolution in the dataset
- **video_duration** - video duration in the dataset
- **video_fps** - frames per second for video in the dataset
- **photo_extension** - photo extensions in the dataset
- **photo_resolution** - photo resolution in the dataset

# 💴 Buy the Dataset: This is just an example of the data. Leave a request on **[https://trainingdata.pro/data-market](https://trainingdata.pro/data-market/anti-spoofing-real/?utm_source=huggingface&utm_medium=cpc&utm_campaign=celebA) to learn about the price and buy the dataset**
# Content
The dataset includes 2 folders:
- **video** - real videos of the people,
- **photo** - selfies of the same people from the previous folder
### File with the extension .csv
- **id**: id of the person,
- **photo**: link to access the photo,
- **video**: link to access the video
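A minimal sketch of reading this index file with pandas is given below; the file name `annotations.csv` is an assumption, so adjust it to the CSV actually shipped with the dataset.

```python
import pandas as pd

df = pd.read_csv("annotations.csv")  # assumed file name
print(df.columns.tolist())  # expected columns: id, photo, video
print(df.head())
```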
## **[TrainingData](https://trainingdata.pro/data-market/anti-spoofing-real/?utm_source=huggingface&utm_medium=cpc&utm_campaign=celebA)** provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: **<https://www.kaggle.com/trainingdatapro/datasets>**
TrainingData's GitHub: **https://github.com/Trainingdata-datamarket/TrainingData_All_datasets**
*keywords: liveness detection systems, liveness detection dataset, biometric dataset, biometric data dataset, biometric system attacks, anti-spoofing dataset, face liveness detection, deep learning dataset, face spoofing database, face anti-spoofing, ibeta dataset, face anti spoofing, large-scale face anti spoofing, rich annotations anti spoofing dataset* | TrainingDataPro/celeba-spoof-dataset | [
"task_categories:video-classification",
"task_categories:image-classification",
"task_categories:image-to-video",
"language:en",
"license:cc-by-nc-nd-4.0",
"code",
"finance",
"legal",
"webdataset",
"region:us"
] | 2024-02-07T12:32:33+00:00 | {"language": ["en"], "license": "cc-by-nc-nd-4.0", "task_categories": ["video-classification", "image-classification", "image-to-video"], "tags": ["code", "finance", "legal", "webdataset"]} | 2024-02-07T12:38:29+00:00 | [] | [
"en"
] | TAGS
#task_categories-video-classification #task_categories-image-classification #task_categories-image-to-video #language-English #license-cc-by-nc-nd-4.0 #code #finance #legal #webdataset #region-us
| # Biometric Attack Dataset
# The dataset is created on the basis of Anti Spoofing Real Dataset
We offer a dataset similar to the CelebA Dataset, but with photos of real people; additionally, this face anti-spoofing and face recognition dataset includes not only images but also videos of the individuals!
The videos were gathered by capturing faces of genuine individuals presenting spoofs, using facial presentations. Our dataset proposes a novel approach that learns and detects spoofing techniques, extracting features from the genuine facial images to prevent the capturing of such information by fake users.
The dataset contains images and videos of real humans with various resolutions, views, and colors, making it a comprehensive resource for researchers working on anti-spoofing technologies.
### People in the dataset
 with 10% test and validation splits.
Split sizes:
- train: 1,838,506
- validation: 226,977
- test: 204,279
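A minimal loading sketch, assuming the standard `datasets` API resolves the three splits declared in this repository's configuration:

```python
from datasets import load_dataset

ds = load_dataset("BramVanroy/wiki_simplifications_deduped_split")
print(ds)  # expected splits: train, validation, test
print(ds["train"][0]["prompt"])  # columns follow the prompt/result schema
```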
---
Processing script (modified from https://github.com/huggingface/datasets/issues/2514#issuecomment-984409349):
```python
from datasets import load_dataset
from hashlib import sha256
dataset = load_dataset("UWV/Leesplank_NL_wikipedia_simplifications")["train"]
def check_uniques(example, unique, key):
"""Check if current hash is still in set of unique hashes and remove if true."""
if example[key] in unique:
unique.remove(example[key])
return True
else:
return False
def generate_hashes(example):
hashed_prompt = sha256(example['prompt'].encode('utf-8')).hexdigest()
hashed_result = sha256(example['result'].encode('utf-8')).hexdigest()
return {"hash_prompt": hashed_prompt, "hash_result": hashed_result}
dataset = dataset.map(generate_hashes, num_proc=64)
uniqueprompts = set(dataset.unique("hash_prompt"))
uniqueresults = set(dataset.unique("hash_result"))
dataset = dataset.filter(check_uniques, fn_kwargs={"unique": uniqueprompts, "key": "hash_prompt"})
dataset = dataset.filter(check_uniques, fn_kwargs={"unique": uniqueresults, "key": "hash_result"})
dataset = dataset.shuffle()
dataset = dataset.select_columns(["prompt", "result"])
print(dataset)
```
| BramVanroy/wiki_simplifications_deduped_split | [
"task_categories:text-generation",
"task_categories:text-classification",
"task_categories:text2text-generation",
"size_categories:1M<n<10M",
"language:nl",
"license:cc-by-nc-4.0",
"region:us"
] | 2024-02-07T12:51:56+00:00 | {"language": ["nl"], "license": "cc-by-nc-4.0", "size_categories": ["1M<n<10M"], "task_categories": ["text-generation", "text-classification", "text2text-generation"], "dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "result", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1266639647.7104316, "num_examples": 1838506}, {"name": "test", "num_bytes": 140738121.38477615, "num_examples": 204279}, {"name": "validation", "num_bytes": 156375920.07769927, "num_examples": 226977}], "download_size": 1046273482, "dataset_size": 1563753689.1729069}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "validation", "path": "data/validation-*"}]}]} | 2024-02-13T08:49:14+00:00 | [] | [
"nl"
] | TAGS
#task_categories-text-generation #task_categories-text-classification #task_categories-text2text-generation #size_categories-1M<n<10M #language-Dutch #license-cc-by-nc-4.0 #region-us
|
This is a deduplicated and shuffled version of UWV/Leesplank_NL_wikipedia_simplifications with 10% test and validation splits.
Split sizes:
- train: 1,838,506
- validation: 226,977
- test: 204,279
---
Processing script (modified from URL
| [] | [
"TAGS\n#task_categories-text-generation #task_categories-text-classification #task_categories-text2text-generation #size_categories-1M<n<10M #language-Dutch #license-cc-by-nc-4.0 #region-us \n"
] |
3a09dea099005d9e132580c093760fda6723ec6a | ---
https://github.com/JaimeOnaindia/Applio-RVC-Fork
| Player1444/Applio-RVC-Fork | [
"region:us"
] | 2024-02-07T12:54:39+00:00 | {} | 2024-02-07T12:55:06+00:00 | [] | [] | TAGS
#region-us
| ---
URL
| [] | [
"TAGS\n#region-us \n"
] |
a429fa851b6b92b4655f82faa52684a8aef38bb6 |
# Medical textbook question answering
This corpus contains multiple-choice quiz questions for 13 commonly-used medical textbooks. The questions are designed to examine understanding of the main concepts in the textbooks.
The QA data is used to evaluate knowledge learning of language models in the following paper:
- **Paper:** [Conditional language learning with context](link pending)
### Data Splits
- subjects: anatomy, biochemistry, cell biology, gynecology, histology, immunology, neurology, obstentrics, pathology, pediatrics, pharmacology, physiology, psychiatry
- 500 questions for each subject
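A short loading sketch, assuming the per-subject configs behave as standard `datasets` configurations (`Anatomy_Gray` is taken from this repository's config list):

```python
from datasets import load_dataset

ds = load_dataset("winder-hybrids/MedicalTextbook_QA", "Anatomy_Gray", split="test")

example = ds[0]
print(example["question"])
print(example["choices"], example["answer"])  # answer is an integer label, presumably an index into choices
```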
## Dataset Creation
Questions and answers are generated by GPT-4 given excerpts from the textbooks. Refer to the paper for the instructions used to generate the questions.
### Citation Information
```
pending
``` | winder-hybrids/MedicalTextbook_QA | [
"region:us"
] | 2024-02-07T13:23:34+00:00 | {"dataset_info": [{"config_name": "Anatomy_Gray", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "original_text", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 1898508, "num_examples": 500}], "download_size": 152583, "dataset_size": 1898508}, {"config_name": "Biochemistry_Lippincott", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "original_text", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 1657461, "num_examples": 500}], "download_size": 161466, "dataset_size": 1657461}, {"config_name": "Cell_Biology_Alberts", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "original_text", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 1911167, "num_examples": 500}], "download_size": 178902, "dataset_size": 1911167}, {"config_name": "Gynecology_Novak", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "original_text", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 1716835, "num_examples": 500}], "download_size": 166726, "dataset_size": 1716835}, {"config_name": "Histology_Ross", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "original_text", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 1805108, "num_examples": 500}], "download_size": 161573, "dataset_size": 1805108}, {"config_name": "Immunology_Janeway", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "original_text", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 1662680, "num_examples": 500}], "download_size": 163548, "dataset_size": 1662680}, {"config_name": "Neurology_Adams", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "original_text", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 1891656, "num_examples": 500}], "download_size": 188245, "dataset_size": 1891656}, {"config_name": "Obstentrics_Williams", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "original_text", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 1597198, "num_examples": 500}], "download_size": 169259, "dataset_size": 1597198}, {"config_name": "Pathology_Robbins", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "original_text", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 1749146, "num_examples": 500}], "download_size": 175037, "dataset_size": 1749146}, {"config_name": "Pediatrics_Nelson", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "original_text", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 1885412, "num_examples": 500}], "download_size": 180188, "dataset_size": 1885412}, {"config_name": "Pharmacology_Katzung", "features": [{"name": 
"question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "original_text", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 1748810, "num_examples": 500}], "download_size": 172568, "dataset_size": 1748810}, {"config_name": "Physiology_Levy", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "original_text", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 1756829, "num_examples": 500}], "download_size": 167776, "dataset_size": 1756829}, {"config_name": "Psichiatry_DSM-5", "features": [{"name": "question", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "answer", "dtype": "int64"}, {"name": "original_text", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 1976522, "num_examples": 500}], "download_size": 171016, "dataset_size": 1976522}], "configs": [{"config_name": "Anatomy_Gray", "data_files": [{"split": "test", "path": "Anatomy_Gray/test-*"}]}, {"config_name": "Biochemistry_Lippincott", "data_files": [{"split": "test", "path": "Biochemistry_Lippincott/test-*"}]}, {"config_name": "Cell_Biology_Alberts", "data_files": [{"split": "test", "path": "Cell_Biology_Alberts/test-*"}]}, {"config_name": "Gynecology_Novak", "data_files": [{"split": "test", "path": "Gynecology_Novak/test-*"}]}, {"config_name": "Histology_Ross", "data_files": [{"split": "test", "path": "Histology_Ross/test-*"}]}, {"config_name": "Immunology_Janeway", "data_files": [{"split": "test", "path": "Immunology_Janeway/test-*"}]}, {"config_name": "Neurology_Adams", "data_files": [{"split": "test", "path": "Neurology_Adams/test-*"}]}, {"config_name": "Obstentrics_Williams", "data_files": [{"split": "test", "path": "Obstentrics_Williams/test-*"}]}, {"config_name": "Pathology_Robbins", "data_files": [{"split": "test", "path": "Pathology_Robbins/test-*"}]}, {"config_name": "Pediatrics_Nelson", "data_files": [{"split": "test", "path": "Pediatrics_Nelson/test-*"}]}, {"config_name": "Pharmacology_Katzung", "data_files": [{"split": "test", "path": "Pharmacology_Katzung/test-*"}]}, {"config_name": "Physiology_Levy", "data_files": [{"split": "test", "path": "Physiology_Levy/test-*"}]}, {"config_name": "Psichiatry_DSM-5", "data_files": [{"split": "test", "path": "Psichiatry_DSM-5/test-*"}]}]} | 2024-02-07T13:58:51+00:00 | [] | [] | TAGS
#region-us
|
# Medical textbook question answering
This corpus contains multiple-choice quiz questions for 13 commonly-used medical textbooks. The questions are designed to examine understanding of the main concepts in the textbooks.
The QA data is used to evaluate knowledge learning of language models in the following paper:
- Paper: Conditional language learning with context
### Data Splits
- subjects: anatomy, biochemistry, cell biology, gynecology, histology, immunology, neurology, obstentrics, pathology, pediatrics, pharmacology, physiology, psychiatry
- 500 questions for each subject
## Dataset Creation
Questions and answers are generated by GPT-4 given excerpts from the textbooks. Refer to the paper for the instructions used to generate the questions.
| [
"# Medical textbook question answering\n\nThis corpus contains multiple-choice quiz questions for 13 commonly-used medical textbooks. The questions are designed to examine understanding of the main concepts in the textbooks. \n\nThe QA data is used to evaluate knowledge learning of language models in the following paper:\n\n- Paper: Conditional language learning with context",
"### Data Splits\n\n- subjects: anatomy, biochemistry, cell biology, gynecology, histology, immunology, neurology, obstentrics, pathology, pediatrics, pharmacology, physiology, psychiatry\n\n- 500 questions for each subject",
"## Dataset Creation\n\nQuestion and answers are generated by GPT-4 given excerpts from the textbooks. Refer to the paper for the instructions used to generate the questions."
] | [
"TAGS\n#region-us \n",
"# Medical textbook question answering\n\nThis corpus contains multiple-choice quiz questions for 13 commonly-used medical textbooks. The questions are designed to examine understanding of the main concepts in the textbooks. \n\nThe QA data is used to evaluate knowledge learning of language models in the following paper:\n\n- Paper: Conditional language learning with context",
"### Data Splits\n\n- subjects: anatomy, biochemistry, cell biology, gynecology, histology, immunology, neurology, obstentrics, pathology, pediatrics, pharmacology, physiology, psychiatry\n\n- 500 questions for each subject",
"## Dataset Creation\n\nQuestion and answers are generated by GPT-4 given excerpts from the textbooks. Refer to the paper for the instructions used to generate the questions."
] |
59d82697aa51fc477fb3baad573cf91952061af5 | # Dataset Card for "traxxas5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | AnaLFDias/traxxas5 | [
"region:us"
] | 2024-02-07T14:39:26+00:00 | {"dataset_info": {"features": [{"name": "image_path", "dtype": "string"}, {"name": "caption", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1109, "num_examples": 11}], "download_size": 0, "dataset_size": 1109}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2024-02-07T17:04:57+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "traxxas5"
More Information needed | [
"# Dataset Card for \"traxxas5\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"traxxas5\"\n\nMore Information needed"
] |
a12398e6fca025eff4d1e7b9b1cb7e5b8dba42c2 | ##SYSTEM: Translate english to thanglish: there or complications which are there as we say which help you do a lot of ##Translator: Output: இல்லனா complications அங்க இருக்கும்னு நாம சொல்லலாம்
##SYSTEM: Translate english to thanglish: U.S.A encourages wearing mask ##Translator: Output: அமெரிக்கா முகக்கவசம் அணிவதை ஊக்குவிக்கிறது
##SYSTEM: Translate english to thanglish: Time you enjoy wasting is not wasted time ##Translator: Output:நீங்க சந்தோசமா இருக்க நேரம், நேரத்த waste பன்றதுல வராது.
##SYSTEM: Translate english to thanglish: Even miracles take a little time ##Translator: Output: Miracles நடக்க கொஞ்சம் time எடுக்கும்
##SYSTEM: Translate english to thanglish: To love oneself is the beginning of a lifelong romance ##Translator: Output:Self love தா காதலின் ஆரம்பம்
##SYSTEM: Translate english to thanglish: Sometimes the most important thing in a whole day is the rest we take between two deep breaths ##Translator: Output: ஒரு சில time நல்ல rest வெறும் deep breath
##SYSTEM: Translate english to thanglish: Life comes from the earth and life returns to the earth ##Translator: Output: பூமில இருந்து உயிர் வந்து பூமிக்கே உயிர் போகுது
##SYSTEM: Translate english to thanglish: The art of life is to know how to enjoy a little and to endure very much ##Translator: Output: Life ல சந்தோசமா இருக்கவும் சகிப்பு தன்மையோட இருக்கவும் art கத்துகுடுக்குது
##SYSTEM: Translate english to thanglish: Life shrinks or expands in proportion to one's courage ##Translator: Output: ஒருதனோட courage - அ வச்சு தான் அவன் வாழ்கை பெருசாகுறதும் சிருசாகுறதும்
##SYSTEM: Translate english to thanglish: Life is about making an impact, not making an income ##Translator: Output: Life ஒருதங்களுக்கு impact அ இருக்கணும் சம்பாரிக்குரத இருக்கக்கூடாது
##SYSTEM: Translate english to thanglish: Life would be tragic if it weren’t funny ##Translator: Output: எதுவும் funny- அ இல்லனா life சோகமாய்டும்
##SYSTEM: Translate english to thanglish: Keep smiling, because life is a beautiful thing and there’s so much to smile about ##Translator: Output: நீங்க சிரிச்சிட்டே இருங்க life ல சிரிக்கவும் சந்தோசமா இருக்கவும் நிறைய இருக்கு
##SYSTEM: Translate english to thanglish: When you change your thoughts, remember to also change your world ##Translator: Output: நீங்க உங்க thoughts மாத்திட்டா இந்த உலகத்தை மாத்திறலாம்
##SYSTEM: Translate english to thanglish: If you don’t like the road you’re walking, start paving another one ##Translator: Output: நீங்க போற வழி பிடிக்களன வேற வழி உருவாக்குங்க
##SYSTEM: Translate english to thanglish: Life is a succession of lessons which must be lived to be understood ##Translator: Output: வாழ்கை ஒரு பாடம் அதை புரிஞ்சுட்டு வாழனும்
##SYSTEM: Translate english to thanglish: If you don’t have any shadows you’re not in the light ##Translator: Output: உங்க கிட்ட நிழல் இல்லனா நீங்க வெளிச்சதுல இல்ல
##SYSTEM: Translate english to thanglish: The biggest adventure you can take is to live the life of your dreams ##Translator: Output:உங்க கனவு வாழ்கைய வாழ்றது தான் மிகப்பெரிய adventure
##SYSTEM: Translate english to thanglish: It is never too late to be what you might have been ##Translator: Output: இப்போ start பண்ணா கூட நீங்க நினைச்சா இடத்துக்கு போலம்
##SYSTEM: Translate english to thanglish: If you cannot do great things, do small things in a great way ##Translator: Output:உங்களால் பெரிய காரியங்களைச் செய்ய முடியாவிட்டால், சிறிய விஷயங்களைச் சிறந்த முறையில் செய்யுங்கள்
##SYSTEM: Translate english to thanglish: Don’t let yesterday take up too much of today ##Translator: Output: நேற்றைய தினத்தை அதிகமாக எடுத்துக்கொள்ள விடாதீர்கள்
##SYSTEM: Translate english to thanglish: Keep your face towards the sunshine and shadows will fall behind you ##Translator: Output: உங்கள் முகத்தை சூரிய ஒளியை நோக்கி வைத்திருங்கள், நிழல்கள் உங்களுக்குப் பின்னால் விழும்
##SYSTEM: Translate english to thanglish: I know every inch of the town ##Translator: Output: இந்த town ஒவ்வொரு அங்குலமும் எனக்குத் தெரியும்
##SYSTEM: Translate english to thanglish: The price of eggs is going up ##Translator: Output: eggs விலை அதிகரித்துக் கொண்டிருக்கிறது
##SYSTEM: Translate english to thanglish: What is the price of this cap? ##Translator: Output: இந்த தொப்பியின் price என்ன
##SYSTEM: Translate english to thanglish: Which of them is your brother? ##Translator: Output: இவர்களில் யார் உன்னுடைய brother
##SYSTEM: Translate english to thanglish: He arrived after the bell rang ##Translator: Output: bell ஒலித்தப் பிறகு அவன் வந்தான்
##SYSTEM: Translate english to thanglish: The school looks like a prison ##Translator: Output: இந்த பள்ளி கூடம் ஒரு prison போல இருக்கிறது
##SYSTEM: Translate english to thanglish: Three vicious dogs attacked Tom ##Translator: Output: மூன்று மோசமான நாய்கள் Tom தாக்கின
##SYSTEM: Translate english to thanglish: Tom was in Australia a year ago ##Translator: Output: டாம் ஒரு வருடத்திற்கு முன்னால் Australia இருந்தார்
##SYSTEM: Translate english to thanglish: Can you ride a bicycle? ##Translator: Output: உங்களுக்கு bicycle ஓட்டத் தெரியுமா?
##SYSTEM: Translate english to thanglish: He's afraid of the sea ##Translator: Output: அவனுக்குக் sea என்றால் பயம்
##SYSTEM: Translate english to thanglish: It seems she hates you ##Translator: Output: அவள் உன்னை hate பண்ற மாதிரி தெரிகிறது
##SYSTEM: Translate english to thanglish: I thought you'd be angry ##Translator: Output: நீ angrya இருப்பாய் என்று எண்ணினேன்
##SYSTEM: Translate english to thanglish: She went out of the room ##Translator: Output: அவள் room விட்டு வெளியே சென்றாள்
##SYSTEM: Translate english to thanglish: The sky is full of stars ##Translator: Output: வானம் முழுவதும் stars இருக்கின்றன
##SYSTEM: Translate english to thanglish: Do you have a lot of pens? ##Translator: Output: உன்னிடம் நிறைய pens இருக்கின்றனவா?
##SYSTEM: Translate english to thanglish: He bought a pair of shoes ##Translator: Output: நான் ஒரு ஜோடி shoes வாங்கினேன்
##SYSTEM: Translate english to thanglish: I live on the bottom floor ##Translator: Output: நான் கீழ் floor-ல் வசிக்கிறேன்
##SYSTEM: Translate english to thanglish: She wore a beautiful dress ##Translator: Output: அவள் அழகான dress அணிந்திருந்தாள்
##SYSTEM: Translate english to thanglish: When did you come to Japan? ##Translator: Output: நீ எப்பொழுது japan வந்தாய்?
##SYSTEM: Translate english to thanglish: Most people think I'm crazy ##Translator: Output: நிறைய மக்கள் நான் crazy என்று எண்ணுகிறார்கள்
##SYSTEM: Translate english to thanglish: She asked him for some money ##Translator: Output: அவள் அவனிடம் கொஞ்சம் money கேட்டாள்
##SYSTEM: Translate english to thanglish: When did the wedding take place? ##Translator: Output: Wedding எப்பொழுது நடைப் பெற்றது
##SYSTEM: Translate english to thanglish: Where do you keep your passport? ##Translator: Output: நீ passport எங்கே வைத்திருக்கிறாய்?
##SYSTEM: Translate english to thanglish: Friendship requires mutual trust ##Translator: Output: நட்புக்குத் தேவை trust
##SYSTEM: Translate english to thanglish: He put the ring on Mary's finger ##Translator: Output: அவன் மேரியின் விரலில் ring அணிவித்தான்
##SYSTEM: Translate english to thanglish: She glanced through the magazine ##Translator: Output: அவள் magazineயை மேலோட்டமாகப் பார்த்தாள்
##SYSTEM: Translate english to thanglish: I want to be a pilot in the future ##Translator: Output: நான் எதிர் காலத்தில் ஒரு pilot ஆக விரும்புகிறேன்
##SYSTEM: Translate english to thanglish: He told her something and she smiled ##Translator: Output: அவன் அவளிடம் ஏதோ சொன்னான் மற்றும் அவள் smiled
##SYSTEM: Translate english to thanglish: I don't like to go out when it's dark ##Translator: Output: Darkகா இருக்கும் பொழுது நான் வெளியே போக விரும்புவதில்லை
##SYSTEM: Translate english to thanglish: When he spoke, everyone became silent ##Translator: Output: அவன் பேசியப் பொழுது எல்லோரும் silent ஆக இருந்தார்கள்
##SYSTEM: Translate english to thanglish: Tom drank with us until after midnight ##Translator: Output: டாம் midnight வரை எங்களுடன் குடித்தார்
##SYSTEM: Translate english to thanglish: She has never been in a car driven by him ##Translator: Output:அவன் ஒட்டினக் carல அவள் எப்பொழுதும் இருந்ததில்லை
##SYSTEM: Translate english to thanglish: Tom goes to church with Mary every Sunday ##Translator: Output: டாம் மேரியுடன் ஒவ்வொரு ஞாயிற்றுக் கிழமையும் churchகு செல்கிறான்
##SYSTEM: Translate english to thanglish: I don't think people use that word anymore ##Translator: Output: மக்கள் அந்த word பயன் படுத்துவதாக எனக்குத் தெரியவில்லை
##SYSTEM: Translate english to thanglish: My younger sister got married in her teens ##Translator: Output: என் sister இள வயதிலேய marriage செய்து கொண்டாள்
##SYSTEM: Translate english to thanglish: I wonder why Tom suggested we do that together ##Translator: Output: ஏன் டாம் நாமிருவரும் சேர்ந்து செய்வோமென்று பரிந்துரைத்தான் என்று ஆச்சரியமடைகிறேன்.
##SYSTEM: Translate english to thanglish: Tom says he doesn't think he can do that by himself ##Translator: Output:என் ஒருவனால் மட்டுமே அதை செய்ய முடியாது என்று நினைக்கிறேன் என டாம் கூறினார்
##SYSTEM: Translate english to thanglish: People who live in glass houses shouldn't throw stones ##Translator: Output: கண்ணாடி வீட்டில் வசிப்பவகள் stone எறியக் கூடாது
##SYSTEM: Translate english to thanglish: It's been a long time since I've heard anyone use that word ##Translator: Output: ஒருவர் அந்த word பயன் படுத்துவதைக் கேட்டு ரொம்ப நாளாகிறது | Kavinprasanth/_dataset | [
"region:us"
] | 2024-02-07T15:10:35+00:00 | {} | 2024-02-08T05:37:32+00:00 | [] | [] | TAGS
#region-us
| ##SYSTEM: Translate english to thanglish: there or complications which are there as we say which help you do a lot of ##Translator: Output: இல்லனா complications அங்க இருக்கும்னு நாம சொல்லலாம்
##SYSTEM: Translate english to thanglish: U.S.A encourages wearing mask ##Translator: Output: அமெரிக்கா முகக்கவசம் அணிவதை ஊக்குவிக்கிறது
##SYSTEM: Translate english to thanglish: Time you enjoy wasting is not wasted time ##Translator: Output:நீங்க சந்தோசமா இருக்க நேரம், நேரத்த waste பன்றதுல வராது.
##SYSTEM: Translate english to thanglish: Even miracles take a little time ##Translator: Output: Miracles நடக்க கொஞ்சம் time எடுக்கும்
##SYSTEM: Translate english to thanglish: To love oneself is the beginning of a lifelong romance ##Translator: Output:Self love தா காதலின் ஆரம்பம்
##SYSTEM: Translate english to thanglish: Sometimes the most important thing in a whole day is the rest we take between two deep breaths ##Translator: Output: ஒரு சில time நல்ல rest வெறும் deep breath
##SYSTEM: Translate english to thanglish: Life comes from the earth and life returns to the earth ##Translator: Output: பூமில இருந்து உயிர் வந்து பூமிக்கே உயிர் போகுது
##SYSTEM: Translate english to thanglish: The art of life is to know how to enjoy a little and to endure very much ##Translator: Output: Life ல சந்தோசமா இருக்கவும் சகிப்பு தன்மையோட இருக்கவும் art கத்துகுடுக்குது
##SYSTEM: Translate english to thanglish: Life shrinks or expands in proportion to one's courage ##Translator: Output: ஒருதனோட courage - அ வச்சு தான் அவன் வாழ்கை பெருசாகுறதும் சிருசாகுறதும்
##SYSTEM: Translate english to thanglish: Life is about making an impact, not making an income ##Translator: Output: Life ஒருதங்களுக்கு impact அ இருக்கணும் சம்பாரிக்குரத இருக்கக்கூடாது
##SYSTEM: Translate english to thanglish: Life would be tragic if it weren’t funny ##Translator: Output: எதுவும் funny- அ இல்லனா life சோகமாய்டும்
##SYSTEM: Translate english to thanglish: Keep smiling, because life is a beautiful thing and there’s so much to smile about ##Translator: Output: நீங்க சிரிச்சிட்டே இருங்க life ல சிரிக்கவும் சந்தோசமா இருக்கவும் நிறைய இருக்கு
##SYSTEM: Translate english to thanglish: When you change your thoughts, remember to also change your world ##Translator: Output: நீங்க உங்க thoughts மாத்திட்டா இந்த உலகத்தை மாத்திறலாம்
##SYSTEM: Translate english to thanglish: If you don’t like the road you’re walking, start paving another one ##Translator: Output: நீங்க போற வழி பிடிக்களன வேற வழி உருவாக்குங்க
##SYSTEM: Translate english to thanglish: Life is a succession of lessons which must be lived to be understood ##Translator: Output: வாழ்கை ஒரு பாடம் அதை புரிஞ்சுட்டு வாழனும்
##SYSTEM: Translate english to thanglish: If you don’t have any shadows you’re not in the light ##Translator: Output: உங்க கிட்ட நிழல் இல்லனா நீங்க வெளிச்சதுல இல்ல
##SYSTEM: Translate english to thanglish: The biggest adventure you can take is to live the life of your dreams ##Translator: Output:உங்க கனவு வாழ்கைய வாழ்றது தான் மிகப்பெரிய adventure
##SYSTEM: Translate english to thanglish: It is never too late to be what you might have been ##Translator: Output: இப்போ start பண்ணா கூட நீங்க நினைச்சா இடத்துக்கு போலம்
##SYSTEM: Translate english to thanglish: If you cannot do great things, do small things in a great way ##Translator: Output:உங்களால் பெரிய காரியங்களைச் செய்ய முடியாவிட்டால், சிறிய விஷயங்களைச் சிறந்த முறையில் செய்யுங்கள்
##SYSTEM: Translate english to thanglish: Don’t let yesterday take up too much of today ##Translator: Output: நேற்றைய தினத்தை அதிகமாக எடுத்துக்கொள்ள விடாதீர்கள்
##SYSTEM: Translate english to thanglish: Keep your face towards the sunshine and shadows will fall behind you ##Translator: Output: உங்கள் முகத்தை சூரிய ஒளியை நோக்கி வைத்திருங்கள், நிழல்கள் உங்களுக்குப் பின்னால் விழும்
##SYSTEM: Translate english to thanglish: I know every inch of the town ##Translator: Output: இந்த town ஒவ்வொரு அங்குலமும் எனக்குத் தெரியும்
##SYSTEM: Translate english to thanglish: The price of eggs is going up ##Translator: Output: eggs விலை அதிகரித்துக் கொண்டிருக்கிறது
##SYSTEM: Translate english to thanglish: What is the price of this cap? ##Translator: Output: இந்த தொப்பியின் price என்ன
##SYSTEM: Translate english to thanglish: Which of them is your brother? ##Translator: Output: இவர்களில் யார் உன்னுடைய brother
##SYSTEM: Translate english to thanglish: He arrived after the bell rang ##Translator: Output: bell ஒலித்தப் பிறகு அவன் வந்தான்
##SYSTEM: Translate english to thanglish: The school looks like a prison ##Translator: Output: இந்த பள்ளி கூடம் ஒரு prison போல இருக்கிறது
##SYSTEM: Translate english to thanglish: Three vicious dogs attacked Tom ##Translator: Output: மூன்று மோசமான நாய்கள் Tom தாக்கின
##SYSTEM: Translate english to thanglish: Tom was in Australia a year ago ##Translator: Output: டாம் ஒரு வருடத்திற்கு முன்னால் Australia இருந்தார்
##SYSTEM: Translate english to thanglish: Can you ride a bicycle? ##Translator: Output: உங்களுக்கு bicycle ஓட்டத் தெரியுமா?
##SYSTEM: Translate english to thanglish: He's afraid of the sea ##Translator: Output: அவனுக்குக் sea என்றால் பயம்
##SYSTEM: Translate english to thanglish: It seems she hates you ##Translator: Output: அவள் உன்னை hate பண்ற மாதிரி தெரிகிறது
##SYSTEM: Translate english to thanglish: I thought you'd be angry ##Translator: Output: நீ angrya இருப்பாய் என்று எண்ணினேன்
##SYSTEM: Translate english to thanglish: She went out of the room ##Translator: Output: அவள் room விட்டு வெளியே சென்றாள்
##SYSTEM: Translate english to thanglish: The sky is full of stars ##Translator: Output: வானம் முழுவதும் stars இருக்கின்றன
##SYSTEM: Translate english to thanglish: Do you have a lot of pens? ##Translator: Output: உன்னிடம் நிறைய pens இருக்கின்றனவா?
##SYSTEM: Translate english to thanglish: He bought a pair of shoes ##Translator: Output: நான் ஒரு ஜோடி shoes வாங்கினேன்
##SYSTEM: Translate english to thanglish: I live on the bottom floor ##Translator: Output: நான் கீழ் floor-ல் வசிக்கிறேன்
##SYSTEM: Translate english to thanglish: She wore a beautiful dress ##Translator: Output: அவள் அழகான dress அணிந்திருந்தாள்
##SYSTEM: Translate english to thanglish: When did you come to Japan? ##Translator: Output: நீ எப்பொழுது japan வந்தாய்?
##SYSTEM: Translate english to thanglish: Most people think I'm crazy ##Translator: Output: நிறைய மக்கள் நான் crazy என்று எண்ணுகிறார்கள்
##SYSTEM: Translate english to thanglish: She asked him for some money ##Translator: Output: அவள் அவனிடம் கொஞ்சம் money கேட்டாள்
##SYSTEM: Translate english to thanglish: When did the wedding take place? ##Translator: Output: Wedding எப்பொழுது நடைப் பெற்றது
##SYSTEM: Translate english to thanglish: Where do you keep your passport? ##Translator: Output: நீ passport எங்கே வைத்திருக்கிறாய்?
##SYSTEM: Translate english to thanglish: Friendship requires mutual trust ##Translator: Output: நட்புக்குத் தேவை trust
##SYSTEM: Translate english to thanglish: He put the ring on Mary's finger ##Translator: Output: அவன் மேரியின் விரலில் ring அணிவித்தான்
##SYSTEM: Translate english to thanglish: She glanced through the magazine ##Translator: Output: அவள் magazineயை மேலோட்டமாகப் பார்த்தாள்
##SYSTEM: Translate english to thanglish: I want to be a pilot in the future ##Translator: Output: நான் எதிர் காலத்தில் ஒரு pilot ஆக விரும்புகிறேன்
##SYSTEM: Translate english to thanglish: He told her something and she smiled ##Translator: Output: அவன் அவளிடம் ஏதோ சொன்னான் மற்றும் அவள் smiled
##SYSTEM: Translate english to thanglish: I don't like to go out when it's dark ##Translator: Output: Darkகா இருக்கும் பொழுது நான் வெளியே போக விரும்புவதில்லை
##SYSTEM: Translate english to thanglish: When he spoke, everyone became silent ##Translator: Output: அவன் பேசியப் பொழுது எல்லோரும் silent ஆக இருந்தார்கள்
##SYSTEM: Translate english to thanglish: Tom drank with us until after midnight ##Translator: Output: டாம் midnight வரை எங்களுடன் குடித்தார்
##SYSTEM: Translate english to thanglish: She has never been in a car driven by him ##Translator: Output:அவன் ஒட்டினக் carல அவள் எப்பொழுதும் இருந்ததில்லை
##SYSTEM: Translate english to thanglish: Tom goes to church with Mary every Sunday ##Translator: Output: டாம் மேரியுடன் ஒவ்வொரு ஞாயிற்றுக் கிழமையும் churchகு செல்கிறான்
##SYSTEM: Translate english to thanglish: I don't think people use that word anymore ##Translator: Output: மக்கள் அந்த word பயன் படுத்துவதாக எனக்குத் தெரியவில்லை
##SYSTEM: Translate english to thanglish: My younger sister got married in her teens ##Translator: Output: என் sister இள வயதிலேய marriage செய்து கொண்டாள்
##SYSTEM: Translate english to thanglish: I wonder why Tom suggested we do that together ##Translator: Output: ஏன் டாம் நாமிருவரும் சேர்ந்து செய்வோமென்று பரிந்துரைத்தான் என்று ஆச்சரியமடைகிறேன்.
##SYSTEM: Translate english to thanglish: Tom says he doesn't think he can do that by himself ##Translator: Output:என் ஒருவனால் மட்டுமே அதை செய்ய முடியாது என்று நினைக்கிறேன் என டாம் கூறினார்
##SYSTEM: Translate english to thanglish: People who live in glass houses shouldn't throw stones ##Translator: Output: கண்ணாடி வீட்டில் வசிப்பவகள் stone எறியக் கூடாது
##SYSTEM: Translate english to thanglish: It's been a long time since I've heard anyone use that word ##Translator: Output: ஒருவர் அந்த word பயன் படுத்துவதைக் கேட்டு ரொம்ப நாளாகிறது | [] | [
"TAGS\n#region-us \n"
] |
ea6291fced5bdfdd019e8617d30ac49d07fb94f0 | # Dataset Card for "ExeBench-Eval-tiny-gpt3.5-result"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | zhangshuoming/ExeBench-Eval-tiny-gpt3.5-result | [
"region:us"
] | 2024-02-07T15:19:14+00:00 | {"dataset_info": {"features": [{"name": "c", "dtype": "string"}, {"name": "asm", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 48136, "num_examples": 100}], "download_size": 23257, "dataset_size": 48136}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2024-02-07T15:19:19+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "ExeBench-Eval-tiny-gpt3.5-result"
More Information needed | [
"# Dataset Card for \"ExeBench-Eval-tiny-gpt3.5-result\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"ExeBench-Eval-tiny-gpt3.5-result\"\n\nMore Information needed"
] |
bdefaad62d1a9170a2b9ca53c92249a7d9d0fe61 | ```
legend = {
"bsq": "[B]inary [S]ubtask [Q]uestion"
}
```
```
bsq = {
"b1": "Does the article discuss agroecological practices?",
"b2": "Does the article focus primarily on climate change mitigation?",
"b3": "Does the article cover climate change adaptation?",
"b4": "Does the article comprehensively cover climate change and environmental aspects?",
"b5": "Does the study specifically evaluate greenhouse gas emissions?",
"b6": "Does the article assess agroecological practices' impact on climate change?",
"b7": "Does the article target stakeholders?",
"b8": "Does the article address limitations, challenges, and potential risks?",
"b9": "Does the article address policy implications?",
"b10": "Does the article have a specific geographic focus?",
"b11": "Does the article discuss the role of agroecology in enhancing climate resilience in agricultural systems?",
"b12": "Does the article assess the impact of agroecology on nitrogen dynamics?",
"b13": "Does the article discuss the impact of methane (CH4) emissions?"
}
```
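A minimal usage sketch (assuming the Hugging Face `datasets` library; the split name is not documented here, and the probability columns are assumed to follow the naming convention in the next block):

```
# Hypothetical sketch: load the dataset and list the BSQ probability columns.
from datasets import load_dataset

ds = load_dataset("furrutiav/sac_nllf")        # split names are an assumption
split = ds[list(ds.keys())[0]]                 # take the first available split

bsq_prob_cols = [c for c in split.column_names if "(Y)" in c or "(N)" in c]
print(bsq_prob_cols)                           # e.g. ['b1(Y)', 'b1(N)', ...]
```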
```
# One pair of probability columns per BSQ, for i = 1..13
probs = {}
for i in range(1, 14):
    probs[f"b{i}(Y)"] = f"Probability of Yes (Y) for the {i}-th BSQ"
    probs[f"b{i}(N)"] = f"Probability of No (N) for the {i}-th BSQ"
``` | furrutiav/sac_nllf | [
"region:us"
] | 2024-02-07T15:24:53+00:00 | {} | 2024-02-07T15:42:13+00:00 | [] | [] | TAGS
#region-us
| [] | [
"TAGS\n#region-us \n"
] |
|
bcaea33703ae115a80ce575d0b91aa7569bea04a |
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This Dataset was generated by NyX AI for their roleplay Model based on llama-2-7b. It contains a collection of fictional scenarios and dialogues intended for use in roleplaying applications.
| SamirXR/NyX-Roleplay | [
"license:apache-2.0",
"region:us"
] | 2024-02-07T15:25:22+00:00 | {"license": "apache-2.0"} | 2024-02-10T06:20:17+00:00 | [] | [] | TAGS
#license-apache-2.0 #region-us
|
# Dataset Card for Dataset Name
This Dataset was generated by NyX AI for their roleplay Model based on llama-2-7b. It contains a collection of fictional scenarios and dialogues intended for use in roleplaying applications.
| [
"# Dataset Card for Dataset Name\n\n\n\nThis Dataset was generated by NyX AI for their roleplay Model based on llama-2-7b. It contains a collection of fictional scenarios and dialogues intended for use in roleplaying applications."
] | [
"TAGS\n#license-apache-2.0 #region-us \n",
"# Dataset Card for Dataset Name\n\n\n\nThis Dataset was generated by NyX AI for their roleplay Model based on llama-2-7b. It contains a collection of fictional scenarios and dialogues intended for use in roleplaying applications."
] |
0a796c9c613397edf8270632e02ed74ad1a3614d | ```
legend = {
"bsq": "[B]inary [S]ubtask [Q]uestion"
}
```
```
bsq = {
'q0': 'Does the sentence convey disappointment towards the movie?',
'q1': 'Does the sentence convey enjoyment or satisfaction?',
'q3': 'Does the sentence critique movie flaws or drawbacks?',
'q4': 'Does the sentence express dissatisfaction or lack of enjoyment?',
'q7': 'Does the sentence convey negative perspectives on characters?',
'q9': 'Does the sentence express negativity?',
'q12': 'Does the sentence commend positive elements of the film?',
'q13': 'Does the sentence commend character qualities or portrayal?',
'q14': "Does the sentence emphasize film's positive impact and value?",
'q15': 'Does the sentence express admiration for acting performances?',
'q16': "Does the sentence praise the film's positive qualities?",
'q17': 'Does the sentence convey enthusiasm and praise?',
'q18': 'Does the sentence convey positive emotional impact?',
'q19': 'Does the sentence emphasize visual and aesthetic qualities?',
'q20': 'Does the sentence contain positive language about the film?'}
```
```
# One pair of probability columns per BSQ index i, for i = 0..20
# (note: only the question ids listed above appear in `bsq`)
probs = {}
for i in range(0, 21):
    probs[f"b{i}(Y)"] = f"Probability of Yes (Y) for the {i}-th BSQ"
    probs[f"b{i}(N)"] = f"Probability of No (N) for the {i}-th BSQ"
```
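A minimal usage sketch (assuming the Hugging Face `datasets` library; the split name and the exact column names are assumptions based on the convention above):

```
# Hypothetical sketch: keep rows where the 'expresses negativity' BSQ (q9)
# gets a high Yes-probability, assuming a column named 'b9(Y)'.
from datasets import load_dataset

ds = load_dataset("furrutiav/sst2_nllf")
split = ds[list(ds.keys())[0]]                 # take the first available split

if "b9(Y)" in split.column_names:              # column name is an assumption
    confident_negative = split.filter(lambda row: float(row["b9(Y)"]) > 0.9)
    print(len(confident_negative))
```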
| furrutiav/sst2_nllf | [
"region:us"
] | 2024-02-07T15:49:03+00:00 | {} | 2024-02-07T16:03:20+00:00 | [] | [] | TAGS
#region-us
| [] | [
"TAGS\n#region-us \n"
] |
|
8eb91f00c05b8209a9485297ec3be104cf7a84b2 | # Dataset Card for "squad_qa_title_v5_full_recite_full_passage_random_permute_rerun_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_title_v5_full_recite_full_passage_random_permute_rerun_1 | [
"region:us"
] | 2024-02-07T16:10:48+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5261551.461330935, "num_examples": 2875}, {"name": "validation", "num_bytes": 590772, "num_examples": 300}], "download_size": 1526658, "dataset_size": 5852323.461330935}} | 2024-02-07T16:10:55+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_title_v5_full_recite_full_passage_random_permute_rerun_1"
More Information needed | [
"# Dataset Card for \"squad_qa_title_v5_full_recite_full_passage_random_permute_rerun_1\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_title_v5_full_recite_full_passage_random_permute_rerun_1\"\n\nMore Information needed"
] |
387860696b16130dd5220bd2cb95d8b8e8273b57 | # Dataset Card for "squad_qa_title_v5_full_recite_full_passage_random_permute_rerun_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_title_v5_full_recite_full_passage_random_permute_rerun_2 | [
"region:us"
] | 2024-02-07T16:11:02+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6130196.814876033, "num_examples": 3365}, {"name": "validation", "num_bytes": 590772, "num_examples": 300}], "download_size": 1631268, "dataset_size": 6720968.814876033}} | 2024-02-07T16:11:09+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_title_v5_full_recite_full_passage_random_permute_rerun_2"
More Information needed | [
"# Dataset Card for \"squad_qa_title_v5_full_recite_full_passage_random_permute_rerun_2\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_title_v5_full_recite_full_passage_random_permute_rerun_2\"\n\nMore Information needed"
] |
da247d63b06c7093f19b7a52252b5ea6add02c5a | # Dataset Card for "squad_qa_title_v5_full_recite_full_passage_random_permute_rerun_4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_title_v5_full_recite_full_passage_random_permute_rerun_4 | [
"region:us"
] | 2024-02-07T16:11:18+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7858108.6486486485, "num_examples": 4345}, {"name": "validation", "num_bytes": 590772, "num_examples": 300}], "download_size": 1716028, "dataset_size": 8448880.64864865}} | 2024-02-07T16:11:25+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_title_v5_full_recite_full_passage_random_permute_rerun_4"
More Information needed | [
"# Dataset Card for \"squad_qa_title_v5_full_recite_full_passage_random_permute_rerun_4\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_title_v5_full_recite_full_passage_random_permute_rerun_4\"\n\nMore Information needed"
] |
18aea33eb08c4834f6c8d312e5ecc83cefa16b7f | # Dataset Card for "squad_qa_title_v5_full_recite_full_passage_random_permute_rerun_8"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_title_v5_full_recite_full_passage_random_permute_rerun_8 | [
"region:us"
] | 2024-02-07T16:11:33+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 11290731.700778643, "num_examples": 6305}, {"name": "validation", "num_bytes": 590772, "num_examples": 300}], "download_size": 1824305, "dataset_size": 11881503.700778643}} | 2024-02-07T16:11:40+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_title_v5_full_recite_full_passage_random_permute_rerun_8"
More Information needed | [
"# Dataset Card for \"squad_qa_title_v5_full_recite_full_passage_random_permute_rerun_8\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_title_v5_full_recite_full_passage_random_permute_rerun_8\"\n\nMore Information needed"
] |
d61340a4a8f96259b4708d02d6933b099c172ddd | # Dataset Card for "squad_qa_title_v5_full_recite_full_passage_first_permute_rerun"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_title_v5_full_recite_full_passage_first_permute_rerun | [
"region:us"
] | 2024-02-07T16:11:47+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 8923574.111751307, "num_examples": 4778}, {"name": "validation", "num_bytes": 590772, "num_examples": 300}], "download_size": 1762779, "dataset_size": 9514346.111751307}} | 2024-02-07T16:11:55+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_title_v5_full_recite_full_passage_first_permute_rerun"
More Information needed | [
"# Dataset Card for \"squad_qa_title_v5_full_recite_full_passage_first_permute_rerun\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_title_v5_full_recite_full_passage_first_permute_rerun\"\n\nMore Information needed"
] |
ed9db3940d9b3079c2acc42f19ea587abdd003a7 | # Dataset Card for "squad_qa_title_v5_full_recite_full_passage_last_permute_rerun"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_title_v5_full_recite_full_passage_last_permute_rerun | [
"region:us"
] | 2024-02-07T16:12:02+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 8923574.111751307, "num_examples": 4778}, {"name": "validation", "num_bytes": 590772, "num_examples": 300}], "download_size": 1757193, "dataset_size": 9514346.111751307}} | 2024-02-07T16:12:10+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_title_v5_full_recite_full_passage_last_permute_rerun"
More Information needed | [
"# Dataset Card for \"squad_qa_title_v5_full_recite_full_passage_last_permute_rerun\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_title_v5_full_recite_full_passage_last_permute_rerun\"\n\nMore Information needed"
] |
65009bcab67b6ad5d0489845dc8845e2f1478084 | # Dataset Card for "squad_qa_title_v5_full_recite_full_passage_no_permute_rerun"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_title_v5_full_recite_full_passage_no_permute_rerun | [
"region:us"
] | 2024-02-07T16:12:18+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 8923574.111751307, "num_examples": 4778}, {"name": "validation", "num_bytes": 590772, "num_examples": 300}], "download_size": 1613113, "dataset_size": 9514346.111751307}} | 2024-02-07T16:12:24+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_title_v5_full_recite_full_passage_no_permute_rerun"
More Information needed | [
"# Dataset Card for \"squad_qa_title_v5_full_recite_full_passage_no_permute_rerun\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_title_v5_full_recite_full_passage_no_permute_rerun\"\n\nMore Information needed"
] |
7fcee0d29c0046ffec8352f5a50ffd7448bcf2e6 | # Dataset Card for "squad_qa_title_v5_full_recite_ans_sent_random_permute_rerun_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_title_v5_full_recite_ans_sent_random_permute_rerun_1 | [
"region:us"
] | 2024-02-07T16:12:31+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4551734.127697841, "num_examples": 2875}, {"name": "validation", "num_bytes": 413353, "num_examples": 300}], "download_size": 1356189, "dataset_size": 4965087.127697841}} | 2024-02-07T16:12:39+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_title_v5_full_recite_ans_sent_random_permute_rerun_1"
More Information needed | [
"# Dataset Card for \"squad_qa_title_v5_full_recite_ans_sent_random_permute_rerun_1\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_title_v5_full_recite_ans_sent_random_permute_rerun_1\"\n\nMore Information needed"
] |
c29f07c8d5e10f1dd374b496ea60013d2564ad09 | # Dataset Card for "squad_qa_title_v5_full_recite_ans_sent_random_permute_rerun_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_title_v5_full_recite_ans_sent_random_permute_rerun_2 | [
"region:us"
] | 2024-02-07T16:12:46+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5366689.438842976, "num_examples": 3365}, {"name": "validation", "num_bytes": 413353, "num_examples": 300}], "download_size": 1455797, "dataset_size": 5780042.438842976}} | 2024-02-07T16:12:53+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_title_v5_full_recite_ans_sent_random_permute_rerun_2"
More Information needed | [
"# Dataset Card for \"squad_qa_title_v5_full_recite_ans_sent_random_permute_rerun_2\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_title_v5_full_recite_ans_sent_random_permute_rerun_2\"\n\nMore Information needed"
] |
bd3b1162d3b452d2c7e1d964f477e8e22a928f0f | # Dataset Card for "squad_qa_title_v5_full_recite_ans_sent_random_permute_rerun_4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_title_v5_full_recite_ans_sent_random_permute_rerun_4 | [
"region:us"
] | 2024-02-07T16:13:01+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7009674.775960171, "num_examples": 4345}, {"name": "validation", "num_bytes": 413353, "num_examples": 300}], "download_size": 1538198, "dataset_size": 7423027.775960171}} | 2024-02-07T16:13:10+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_title_v5_full_recite_ans_sent_random_permute_rerun_4"
More Information needed | [
"# Dataset Card for \"squad_qa_title_v5_full_recite_ans_sent_random_permute_rerun_4\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_title_v5_full_recite_ans_sent_random_permute_rerun_4\"\n\nMore Information needed"
] |
f68c1cb6efefb0f7895b8498dd02166eced5f488 | # Dataset Card for "squad_qa_title_v5_full_recite_ans_sent_random_permute_rerun_8"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_title_v5_full_recite_ans_sent_random_permute_rerun_8 | [
"region:us"
] | 2024-02-07T16:13:19+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 10327991.864849834, "num_examples": 6305}, {"name": "validation", "num_bytes": 413353, "num_examples": 300}], "download_size": 1648173, "dataset_size": 10741344.864849834}} | 2024-02-07T16:13:26+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_title_v5_full_recite_ans_sent_random_permute_rerun_8"
More Information needed | [
"# Dataset Card for \"squad_qa_title_v5_full_recite_ans_sent_random_permute_rerun_8\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_title_v5_full_recite_ans_sent_random_permute_rerun_8\"\n\nMore Information needed"
] |
23571852594f47973bc638f0f254adbb4d5ddae7 | # Dataset Card for "squad_qa_title_v5_full_recite_ans_sent_first_permute_rerun"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_title_v5_full_recite_ans_sent_first_permute_rerun | [
"region:us"
] | 2024-02-07T16:13:34+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 8044721.096877931, "num_examples": 4778}, {"name": "validation", "num_bytes": 413353, "num_examples": 300}], "download_size": 1593430, "dataset_size": 8458074.09687793}} | 2024-02-07T16:13:42+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_title_v5_full_recite_ans_sent_first_permute_rerun"
More Information needed | [
"# Dataset Card for \"squad_qa_title_v5_full_recite_ans_sent_first_permute_rerun\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_title_v5_full_recite_ans_sent_first_permute_rerun\"\n\nMore Information needed"
] |
1c228b0f0cbb4528bd764fdb99347c7441e565c7 | # Dataset Card for "squad_qa_title_v5_full_recite_ans_sent_last_permute_rerun"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_title_v5_full_recite_ans_sent_last_permute_rerun | [
"region:us"
] | 2024-02-07T16:13:50+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 8044721.096877931, "num_examples": 4778}, {"name": "validation", "num_bytes": 413353, "num_examples": 300}], "download_size": 1587878, "dataset_size": 8458074.09687793}} | 2024-02-07T16:13:57+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_title_v5_full_recite_ans_sent_last_permute_rerun"
More Information needed | [
"# Dataset Card for \"squad_qa_title_v5_full_recite_ans_sent_last_permute_rerun\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_title_v5_full_recite_ans_sent_last_permute_rerun\"\n\nMore Information needed"
] |
0930b392aecf2988cc054f3935e628a65d7a7a56 | # Dataset Card for "squad_qa_title_v5_full_recite_ans_sent_no_permute_rerun"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_title_v5_full_recite_ans_sent_no_permute_rerun | [
"region:us"
] | 2024-02-07T16:14:04+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 8044721.096877931, "num_examples": 4778}, {"name": "validation", "num_bytes": 413353, "num_examples": 300}], "download_size": 1443227, "dataset_size": 8458074.09687793}} | 2024-02-07T16:14:12+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_title_v5_full_recite_ans_sent_no_permute_rerun"
More Information needed | [
"# Dataset Card for \"squad_qa_title_v5_full_recite_ans_sent_no_permute_rerun\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_title_v5_full_recite_ans_sent_no_permute_rerun\"\n\nMore Information needed"
] |
b2500f0b5bf09c10d382ecd46b5a3c9da2a87632 | # Dataset Card for "squad_qa_wrong_title_v5_full_recite_full_passage_random_permute_rerun_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_wrong_title_v5_full_recite_full_passage_random_permute_rerun_1 | [
"region:us"
] | 2024-02-07T16:14:22+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "correct_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5341144.491906475, "num_examples": 2875}, {"name": "validation", "num_bytes": 599488, "num_examples": 300}], "download_size": 1559487, "dataset_size": 5940632.491906475}} | 2024-02-07T16:14:30+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_wrong_title_v5_full_recite_full_passage_random_permute_rerun_1"
More Information needed | [
"# Dataset Card for \"squad_qa_wrong_title_v5_full_recite_full_passage_random_permute_rerun_1\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_wrong_title_v5_full_recite_full_passage_random_permute_rerun_1\"\n\nMore Information needed"
] |
11640c59620b102304bb27936713d9387e16b5c8 | # Dataset Card for "squad_qa_wrong_title_v5_full_recite_full_passage_random_permute_rerun_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_wrong_title_v5_full_recite_full_passage_random_permute_rerun_2 | [
"region:us"
] | 2024-02-07T16:14:37+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "correct_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6223228.219008264, "num_examples": 3365}, {"name": "validation", "num_bytes": 599488, "num_examples": 300}], "download_size": 1662466, "dataset_size": 6822716.219008264}} | 2024-02-07T16:14:45+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_wrong_title_v5_full_recite_full_passage_random_permute_rerun_2"
More Information needed | [
"# Dataset Card for \"squad_qa_wrong_title_v5_full_recite_full_passage_random_permute_rerun_2\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_wrong_title_v5_full_recite_full_passage_random_permute_rerun_2\"\n\nMore Information needed"
] |
e1e89e1cf7610cf5634276c8f736de12a66f761e | # Dataset Card for "squad_qa_wrong_title_v5_full_recite_full_passage_random_permute_rerun_4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_wrong_title_v5_full_recite_full_passage_random_permute_rerun_4 | [
"region:us"
] | 2024-02-07T16:14:53+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "correct_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7977974.404694168, "num_examples": 4345}, {"name": "validation", "num_bytes": 599488, "num_examples": 300}], "download_size": 1756289, "dataset_size": 8577462.404694168}} | 2024-02-07T16:15:00+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_wrong_title_v5_full_recite_full_passage_random_permute_rerun_4"
More Information needed | [
"# Dataset Card for \"squad_qa_wrong_title_v5_full_recite_full_passage_random_permute_rerun_4\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_wrong_title_v5_full_recite_full_passage_random_permute_rerun_4\"\n\nMore Information needed"
] |
0e9931bfc823445214743ff1acc4e06d5817234a | # Dataset Card for "squad_qa_wrong_title_v5_full_recite_full_passage_random_permute_rerun_8"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_wrong_title_v5_full_recite_full_passage_random_permute_rerun_8 | [
"region:us"
] | 2024-02-07T16:15:08+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "correct_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 11464161.28086763, "num_examples": 6305}, {"name": "validation", "num_bytes": 599488, "num_examples": 300}], "download_size": 1867148, "dataset_size": 12063649.28086763}} | 2024-02-07T16:15:16+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_wrong_title_v5_full_recite_full_passage_random_permute_rerun_8"
More Information needed | [
"# Dataset Card for \"squad_qa_wrong_title_v5_full_recite_full_passage_random_permute_rerun_8\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_wrong_title_v5_full_recite_full_passage_random_permute_rerun_8\"\n\nMore Information needed"
] |
2fd94e930fc99d6a7bc7e541444de658bd4f4da8 | # Dataset Card for "squad_qa_wrong_title_v5_full_recite_full_passage_first_permute_rerun"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_wrong_title_v5_full_recite_full_passage_first_permute_rerun | [
"region:us"
] | 2024-02-07T16:15:25+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "correct_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 9054846.508642636, "num_examples": 4778}, {"name": "validation", "num_bytes": 599488, "num_examples": 300}], "download_size": 1804496, "dataset_size": 9654334.508642636}} | 2024-02-07T16:15:32+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_wrong_title_v5_full_recite_full_passage_first_permute_rerun"
More Information needed | [
"# Dataset Card for \"squad_qa_wrong_title_v5_full_recite_full_passage_first_permute_rerun\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_wrong_title_v5_full_recite_full_passage_first_permute_rerun\"\n\nMore Information needed"
] |
108c3759a94167a9220c65600896bec489045592 | # Dataset Card for "squad_qa_wrong_title_v5_full_recite_full_passage_last_permute_rerun"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_wrong_title_v5_full_recite_full_passage_last_permute_rerun | [
"region:us"
] | 2024-02-07T16:15:40+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "correct_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 9054846.508642636, "num_examples": 4778}, {"name": "validation", "num_bytes": 599488, "num_examples": 300}], "download_size": 1798910, "dataset_size": 9654334.508642636}} | 2024-02-07T16:15:47+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_wrong_title_v5_full_recite_full_passage_last_permute_rerun"
More Information needed | [
"# Dataset Card for \"squad_qa_wrong_title_v5_full_recite_full_passage_last_permute_rerun\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_wrong_title_v5_full_recite_full_passage_last_permute_rerun\"\n\nMore Information needed"
] |
7bfe7886b0473f90d02924622656322838f3f356 | # Dataset Card for "squad_qa_wrong_title_v5_full_recite_full_passage_no_permute_rerun"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_wrong_title_v5_full_recite_full_passage_no_permute_rerun | [
"region:us"
] | 2024-02-07T16:15:55+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "correct_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 9054846.508642636, "num_examples": 4778}, {"name": "validation", "num_bytes": 599488, "num_examples": 300}], "download_size": 1654824, "dataset_size": 9654334.508642636}} | 2024-02-07T16:16:02+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_wrong_title_v5_full_recite_full_passage_no_permute_rerun"
More Information needed | [
"# Dataset Card for \"squad_qa_wrong_title_v5_full_recite_full_passage_no_permute_rerun\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_wrong_title_v5_full_recite_full_passage_no_permute_rerun\"\n\nMore Information needed"
] |
fe025d439773c868aea711bb61b1d0ddc5c85c20 | # Dataset Card for "squad_qa_wrong_title_v5_full_recite_ans_sent_random_permute_rerun_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_wrong_title_v5_full_recite_ans_sent_random_permute_rerun_1 | [
"region:us"
] | 2024-02-07T16:16:14+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "correct_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4631327.158273381, "num_examples": 2875}, {"name": "validation", "num_bytes": 422069, "num_examples": 300}], "download_size": 1393400, "dataset_size": 5053396.158273381}} | 2024-02-07T16:16:21+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_wrong_title_v5_full_recite_ans_sent_random_permute_rerun_1"
More Information needed | [
"# Dataset Card for \"squad_qa_wrong_title_v5_full_recite_ans_sent_random_permute_rerun_1\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_wrong_title_v5_full_recite_ans_sent_random_permute_rerun_1\"\n\nMore Information needed"
] |
c4f4455a33c1ca922422cea4f955c4c909004883 | # Dataset Card for "squad_qa_wrong_title_v5_full_recite_ans_sent_random_permute_rerun_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_wrong_title_v5_full_recite_ans_sent_random_permute_rerun_2 | [
"region:us"
] | 2024-02-07T16:16:30+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "correct_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5459720.842975207, "num_examples": 3365}, {"name": "validation", "num_bytes": 422069, "num_examples": 300}], "download_size": 1490420, "dataset_size": 5881789.842975207}} | 2024-02-07T16:16:38+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_wrong_title_v5_full_recite_ans_sent_random_permute_rerun_2"
More Information needed | [
"# Dataset Card for \"squad_qa_wrong_title_v5_full_recite_ans_sent_random_permute_rerun_2\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_wrong_title_v5_full_recite_ans_sent_random_permute_rerun_2\"\n\nMore Information needed"
] |
81fbe741a7bde3cf8e96d382508f484d7fc49083 | # Dataset Card for "squad_qa_wrong_title_v5_full_recite_ans_sent_random_permute_rerun_4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_wrong_title_v5_full_recite_ans_sent_random_permute_rerun_4 | [
"region:us"
] | 2024-02-07T16:16:46+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "correct_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7129540.53200569, "num_examples": 4345}, {"name": "validation", "num_bytes": 422069, "num_examples": 300}], "download_size": 1582267, "dataset_size": 7551609.53200569}} | 2024-02-07T16:16:53+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_wrong_title_v5_full_recite_ans_sent_random_permute_rerun_4"
More Information needed | [
"# Dataset Card for \"squad_qa_wrong_title_v5_full_recite_ans_sent_random_permute_rerun_4\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_wrong_title_v5_full_recite_ans_sent_random_permute_rerun_4\"\n\nMore Information needed"
] |
65cc4222e9806ea7e7e12332f7c51f0a25f950a0 | # Dataset Card for "squad_qa_wrong_title_v5_full_recite_ans_sent_random_permute_rerun_8"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_wrong_title_v5_full_recite_ans_sent_random_permute_rerun_8 | [
"region:us"
] | 2024-02-07T16:17:01+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "correct_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 10501421.444938822, "num_examples": 6305}, {"name": "validation", "num_bytes": 422069, "num_examples": 300}], "download_size": 1693888, "dataset_size": 10923490.444938822}} | 2024-02-07T16:17:07+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_wrong_title_v5_full_recite_ans_sent_random_permute_rerun_8"
More Information needed | [
"# Dataset Card for \"squad_qa_wrong_title_v5_full_recite_ans_sent_random_permute_rerun_8\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_wrong_title_v5_full_recite_ans_sent_random_permute_rerun_8\"\n\nMore Information needed"
] |
fb5095e8fcb1794755ee2601dabd76efd8ba43df | # Dataset Card for "squad_qa_wrong_title_v5_full_recite_ans_sent_first_permute_rerun"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_wrong_title_v5_full_recite_ans_sent_first_permute_rerun | [
"region:us"
] | 2024-02-07T16:17:18+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "correct_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 8175993.493769262, "num_examples": 4778}, {"name": "validation", "num_bytes": 422069, "num_examples": 300}], "download_size": 1635153, "dataset_size": 8598062.493769262}} | 2024-02-07T16:17:26+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_wrong_title_v5_full_recite_ans_sent_first_permute_rerun"
More Information needed | [
"# Dataset Card for \"squad_qa_wrong_title_v5_full_recite_ans_sent_first_permute_rerun\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_wrong_title_v5_full_recite_ans_sent_first_permute_rerun\"\n\nMore Information needed"
] |
2e89c46a5f55c2133894e25006659bdd2be5e2ac | # Dataset Card for "squad_qa_wrong_title_v5_full_recite_ans_sent_last_permute_rerun"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_wrong_title_v5_full_recite_ans_sent_last_permute_rerun | [
"region:us"
] | 2024-02-07T16:17:34+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "correct_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 8175993.493769262, "num_examples": 4778}, {"name": "validation", "num_bytes": 422069, "num_examples": 300}], "download_size": 1629595, "dataset_size": 8598062.493769262}} | 2024-02-07T16:17:41+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_wrong_title_v5_full_recite_ans_sent_last_permute_rerun"
More Information needed | [
"# Dataset Card for \"squad_qa_wrong_title_v5_full_recite_ans_sent_last_permute_rerun\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_wrong_title_v5_full_recite_ans_sent_last_permute_rerun\"\n\nMore Information needed"
] |
c813d5f22e9ce738da9736542dbf2e6357abe303 | # Dataset Card for "squad_qa_wrong_title_v5_full_recite_ans_sent_no_permute_rerun"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_wrong_title_v5_full_recite_ans_sent_no_permute_rerun | [
"region:us"
] | 2024-02-07T16:17:49+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "correct_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 8175993.493769262, "num_examples": 4778}, {"name": "validation", "num_bytes": 422069, "num_examples": 300}], "download_size": 1484938, "dataset_size": 8598062.493769262}} | 2024-02-07T16:17:56+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_wrong_title_v5_full_recite_ans_sent_no_permute_rerun"
More Information needed | [
"# Dataset Card for \"squad_qa_wrong_title_v5_full_recite_ans_sent_no_permute_rerun\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_wrong_title_v5_full_recite_ans_sent_no_permute_rerun\"\n\nMore Information needed"
] |
3e26d42d34cb32537e41917efccde60314536098 | # Dataset Card for "squad_qa_num_v5_full_recite_full_passage_random_permute_rerun_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_num_v5_full_recite_full_passage_random_permute_rerun_1 | [
"region:us"
] | 2024-02-07T16:18:07+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5139918.277877698, "num_examples": 2875}, {"name": "validation", "num_bytes": 580808, "num_examples": 300}], "download_size": 1504285, "dataset_size": 5720726.277877698}} | 2024-02-07T16:18:15+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_num_v5_full_recite_full_passage_random_permute_rerun_1"
More Information needed | [
"# Dataset Card for \"squad_qa_num_v5_full_recite_full_passage_random_permute_rerun_1\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_num_v5_full_recite_full_passage_random_permute_rerun_1\"\n\nMore Information needed"
] |
bc49ed3ac1598e29ca496b1e980b57b126e4f17c | # Dataset Card for "squad_qa_num_v5_full_recite_full_passage_random_permute_rerun_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_num_v5_full_recite_full_passage_random_permute_rerun_2 | [
"region:us"
] | 2024-02-07T16:18:22+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5985907.839669421, "num_examples": 3365}, {"name": "validation", "num_bytes": 580808, "num_examples": 300}], "download_size": 1607029, "dataset_size": 6566715.839669421}} | 2024-02-07T16:18:29+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_num_v5_full_recite_full_passage_random_permute_rerun_2"
More Information needed | [
"# Dataset Card for \"squad_qa_num_v5_full_recite_full_passage_random_permute_rerun_2\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_num_v5_full_recite_full_passage_random_permute_rerun_2\"\n\nMore Information needed"
] |
ce3dde1f08bbaeb53adc3ff67f071d697208960b | # Dataset Card for "squad_qa_num_v5_full_recite_full_passage_random_permute_rerun_4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_num_v5_full_recite_full_passage_random_permute_rerun_4 | [
"region:us"
] | 2024-02-07T16:18:38+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7667865.635846373, "num_examples": 4345}, {"name": "validation", "num_bytes": 580808, "num_examples": 300}], "download_size": 1686996, "dataset_size": 8248673.635846373}} | 2024-02-07T16:18:45+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_num_v5_full_recite_full_passage_random_permute_rerun_4"
More Information needed | [
"# Dataset Card for \"squad_qa_num_v5_full_recite_full_passage_random_permute_rerun_4\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_num_v5_full_recite_full_passage_random_permute_rerun_4\"\n\nMore Information needed"
] |
c4a131a818da5712deff9b74a43a352f66958b93 | # Dataset Card for "squad_qa_num_v5_full_recite_full_passage_random_permute_rerun_8"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_num_v5_full_recite_full_passage_random_permute_rerun_8 | [
"region:us"
] | 2024-02-07T16:18:53+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 11006991.27141268, "num_examples": 6305}, {"name": "validation", "num_bytes": 580808, "num_examples": 300}], "download_size": 1802714, "dataset_size": 11587799.27141268}} | 2024-02-07T16:19:01+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_num_v5_full_recite_full_passage_random_permute_rerun_8"
More Information needed | [
"# Dataset Card for \"squad_qa_num_v5_full_recite_full_passage_random_permute_rerun_8\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_num_v5_full_recite_full_passage_random_permute_rerun_8\"\n\nMore Information needed"
] |
5524b8def60fee8c1798a618cc993a2a2cf79af0 | # Dataset Card for "squad_qa_num_v5_full_recite_full_passage_first_permute_rerun"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_num_v5_full_recite_full_passage_first_permute_rerun | [
"region:us"
] | 2024-02-07T16:19:09+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 8714584.788690874, "num_examples": 4778}, {"name": "validation", "num_bytes": 580808, "num_examples": 300}], "download_size": 1737200, "dataset_size": 9295392.788690874}} | 2024-02-07T16:19:16+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_num_v5_full_recite_full_passage_first_permute_rerun"
More Information needed | [
"# Dataset Card for \"squad_qa_num_v5_full_recite_full_passage_first_permute_rerun\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_num_v5_full_recite_full_passage_first_permute_rerun\"\n\nMore Information needed"
] |
68ac95c7a6d8b8261bfb0b9316e9c2604958dd6e | # Dataset Card for "squad_qa_num_v5_full_recite_full_passage_last_permute_rerun"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_num_v5_full_recite_full_passage_last_permute_rerun | [
"region:us"
] | 2024-02-07T16:19:23+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 8714584.788690874, "num_examples": 4778}, {"name": "validation", "num_bytes": 580808, "num_examples": 300}], "download_size": 1731614, "dataset_size": 9295392.788690874}} | 2024-02-07T16:19:32+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_num_v5_full_recite_full_passage_last_permute_rerun"
More Information needed | [
"# Dataset Card for \"squad_qa_num_v5_full_recite_full_passage_last_permute_rerun\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_num_v5_full_recite_full_passage_last_permute_rerun\"\n\nMore Information needed"
] |
e4dc2425c336dc23b058cd9caa641ee3c45e0c1c | # Dataset Card for "squad_qa_num_v5_full_recite_full_passage_no_permute_rerun"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_num_v5_full_recite_full_passage_no_permute_rerun | [
"region:us"
] | 2024-02-07T16:19:40+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 8714584.788690874, "num_examples": 4778}, {"name": "validation", "num_bytes": 580808, "num_examples": 300}], "download_size": 1587540, "dataset_size": 9295392.788690874}} | 2024-02-07T16:19:47+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_num_v5_full_recite_full_passage_no_permute_rerun"
More Information needed | [
"# Dataset Card for \"squad_qa_num_v5_full_recite_full_passage_no_permute_rerun\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_num_v5_full_recite_full_passage_no_permute_rerun\"\n\nMore Information needed"
] |
711f14137a4d4754860181e5d2aada7fdae225fe | # Dataset Card for "squad_qa_num_v5_full_recite_ans_sent_random_permute_rerun_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_num_v5_full_recite_ans_sent_random_permute_rerun_1 | [
"region:us"
] | 2024-02-07T16:19:58+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4430100.944244605, "num_examples": 2875}, {"name": "validation", "num_bytes": 403389, "num_examples": 300}], "download_size": 1334282, "dataset_size": 4833489.944244605}} | 2024-02-07T16:20:06+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_num_v5_full_recite_ans_sent_random_permute_rerun_1"
More Information needed | [
"# Dataset Card for \"squad_qa_num_v5_full_recite_ans_sent_random_permute_rerun_1\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_num_v5_full_recite_ans_sent_random_permute_rerun_1\"\n\nMore Information needed"
] |
6b5b07fd6cade704ed97e50305c37dfe376af41d |
Source:
The dataset is titled PANDORA and is retrieved from https://psy.takelab.fer.hr/datasets/all/pandora/. The PANDORA dataset is the only dataset that contains personality-relevant information for multiple personality models. It consists of Reddit comments with their corresponding scores for the Big Five traits, MBTI values, and the Enneagram for more than 10k users.
This Dataset:
This dataset is a subset of the Reddit comments from PANDORA, focused only on the Big Five traits. The subset consists of preprocessed English comments from 1,608 users with reported Big Five scores.
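As a quick, hedged illustration, the subset can typically be loaded with the Hugging Face `datasets` library. The repository id below comes from this card; the split name and column layout are assumptions and should be verified against the hosted files.
```python
# Minimal sketch: loading this Big Five subset with the Hugging Face `datasets` library.
# The repository id comes from this card; the split name and column layout are
# assumptions and should be checked against the actual files on the Hub.
from datasets import load_dataset

ds = load_dataset("Fatima0923/Automated-Personality-Prediction")

# Inspect the available splits and features before relying on specific column names.
print(ds)

# Hypothetical access pattern, assuming a "train" split exists:
first_example = ds["train"][0]
print(first_example)
```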
| Fatima0923/Automated-Personality-Prediction | [
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:en",
"Personality",
"Reddit",
"Social Media",
"Personality Prediction",
"Large Language Models ",
"Automated Personality Prediction",
"region:us"
] | 2024-02-07T16:20:10+00:00 | {"language": ["en"], "size_categories": ["10K<n<100K"], "task_categories": ["text-classification"], "pretty_name": "Personality-Texts", "tags": ["Personality", "Reddit", "Social Media", "Personality Prediction", "Large Language Models ", "Automated Personality Prediction"]} | 2024-02-07T16:39:27+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-classification #size_categories-10K<n<100K #language-English #Personality #Reddit #Social Media #Personality Prediction #Large Language Models #Automated Personality Prediction #region-us
|
Source:
The dataset is titled PANDORA and is retrieved from the URL. The PANDORA dataset is the only dataset that contains personality-relevant information for multiple personality models. It consists of Reddit comments with their corresponding scores for the Big Five traits, MBTI values, and the Enneagram for more than 10k users.
This Dataset:
This dataset is a subset of the Reddit comments from PANDORA, focused only on the Big Five traits. The subset consists of preprocessed English comments from 1,608 users with reported Big Five scores.
| [] | [
"TAGS\n#task_categories-text-classification #size_categories-10K<n<100K #language-English #Personality #Reddit #Social Media #Personality Prediction #Large Language Models #Automated Personality Prediction #region-us \n"
] |
c47b30c1eada8087aa0ce361df231f3d9127d5f3 | # Dataset Card for "squad_qa_num_v5_full_recite_ans_sent_random_permute_rerun_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_num_v5_full_recite_ans_sent_random_permute_rerun_2 | [
"region:us"
] | 2024-02-07T16:20:14+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5222400.463636364, "num_examples": 3365}, {"name": "validation", "num_bytes": 403389, "num_examples": 300}], "download_size": 1429214, "dataset_size": 5625789.463636364}} | 2024-02-07T16:20:21+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_num_v5_full_recite_ans_sent_random_permute_rerun_2"
More Information needed | [
"# Dataset Card for \"squad_qa_num_v5_full_recite_ans_sent_random_permute_rerun_2\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_num_v5_full_recite_ans_sent_random_permute_rerun_2\"\n\nMore Information needed"
] |